Race, Teaching, and AI: The Same Old Efficiency or a Liberatory Transformation?

By Jeannette Lee-Parikh

In a study conducted at Brigham and Women’s Hospital four years ago, researchers found that an AI system intended to identify patients in need of extra care privileged relatively healthy white patients over sicker Black patients. The AI was designed to reduce costs and sorted patients based on their previous healthcare costs. However, according to Rachel Thomas, director of the University of San Francisco Center for Applied Data Ethics and co-founder of the research lab fast.ai, focusing on previous healthcare costs is what opened the door to bias: “The healthcare system is less inclined to give treatment to black patients dealing with similar chronic illnesses compared to white patients.” Studies reveal that Black patients often have fewer convenient healthcare options despite possessing similar levels of insurance. “So when black patients spend less on medical care for the same illnesses, the algorithm assumes they do not need extra care as much as white patients.” Thomas explained that the root cause of this bias is that the algorithm was given the wrong data for the problem it was meant to solve. Even more concerning, this AI wasn’t isolated to Brigham and Women’s; it was emblematic of algorithms of this kind that are sold to hospitals and affect up to 200 million people.
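To make the proxy problem Thomas describes concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from the actual hospital system; the fields and numbers are invented purely to show how ranking patients by past spending makes under-treated patients look healthier than they are.

```python
# A hypothetical sketch of the proxy problem: the score optimizes for past
# spending, not actual health need, so any group whose access to care
# suppresses spending is ranked as "healthier."

from dataclasses import dataclass

@dataclass
class Patient:
    chronic_conditions: int   # a rough stand-in for true health need
    past_costs: float         # what the algorithm actually sorts on

def risk_score(patient: Patient) -> float:
    """Rank patients for extra care by prior spending (the flawed proxy)."""
    return patient.past_costs

# Two patients with identical illness burdens; one has faced barriers to
# care, so the system has recorded lower spending for them.
patient_a = Patient(chronic_conditions=3, past_costs=12_000.0)
patient_b = Patient(chronic_conditions=3, past_costs=7_000.0)  # under-treated

# Same need, but the cost-based score routes extra care to patient_a.
print(risk_score(patient_a) > risk_score(patient_b))  # True
```

The remedy Thomas points to is not a cleverer optimizer but better data: a target that measures need directly rather than through dollars spent.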

The above reminds me of the Jean-Baptiste Alphonse Karr quote “plus ça change, plus c’est la même chose”–the more things change, the more they stay the same. It raises the question of whether AI can be trained on the right data to redress systemic racism, or whether it will only serve to perpetuate structural inequities.

This is the same question for those of us in education who are concerned about equity, given the many applications of AI already in use, and who wonder whether AI can be applied in innovative ways to achieve better learning outcomes. According to Philippa Hardman, creator of the DOMS™️ learning design engine, “For every AI-powered piece of ed-tech that pushes us towards more effective instruction, there are ten examples which push us in the opposite direction, using AI to automate and scale ineffective ‘chalk and talk’ practices.” She cites two promising examples as the exception, “but the vast majority accelerate, automate and scale traditional, broken methods of instruction.” Given Hardman’s observation, it is apparent that AI’s potential for change depends on our appetite for transformation. Teachers, students, parents, and edtech companies must therefore partner to ensure that AI makes education more effective and equitable. One way to achieve this pivot is to train AI to incorporate the lived experiences of BIPOC and female-identifying students. This culturally responsive AI could then deliver personalized, adaptive learning that allows students to actively explore, co-create, and apply knowledge. Such an approach is less likely to perpetuate the persistent historic inequities that impact students in racist, sexist, classist, and homophobic ways.

Instead of uncritically embracing the hype about new technology–a narrative that can disempower teachers–we educators, along with our students, should look in the mirror and recognize ourselves as the real pedagogical innovators. After all, we are the ones in the classroom. In “Teachers Matter: Understanding Teacher Impact on Student Achievement,” Isaac Opper explains that teachers matter to student achievement more than any other aspect of schooling. For Black students in particular, researchers have found that teachers of color achieve better outcomes in the short term–standardized test scores, attendance, discipline–and in the long term–high school graduation rates and college-going aspirations. This research points to the invaluable role that effective teachers play in producing favorable learning outcomes.

In the current frenzy to view AI as the solution to every learning and pedagogical problem in education, much of the public discourse doesn’t sufficiently account for the intersection of two critical factors: the data inputs of AI and the systemic implementation of the science of learning–the absence of the latter in most AI tech is essentially what Hardman laments. We need to remember that AI, like other edtech, is simply a tool, one that can support teachers in designing learning activities premised on dual coding, retrieval practice, and interleaving–i.e., the science of learning–which psychologists and neuroscientists have shown to be effective in promoting learning. The question, therefore, isn’t whether schools should integrate AI, but whether a specific type of AI allows a specific set of learning goals to be achieved by a specific set of learners in a specific location. And the only way to answer that question is if AI companies are sufficiently transparent about their training data and methods.

However, the convention is for AI companies to protect this information through claims of “trade secrets.” This lack of clarity matters because humans possess a remarkable tendency to repeat their mistakes. Our failure to learn the right lessons from history–or our tendency to learn the wrong ones–is further compounded by the very nature of large language models. LLMs like ChatGPT, trained on humanity’s long record, necessarily encompass our mistakes, misadventures, and just plain bad acts. As a result, educators should approach AI as an emerging skill that needs to be mastered and whose data needs to be properly vetted. We should be asking: Whose story is being told, and from which perspective? What values are encoded and subtly conveyed in this data? Do these models perpetuate racism, sexism, single stories, etc., even as their potential is harnessed? What are their more apparent and subtle short- and long-term effects on BIPOC and female-identifying students? Knowing this information will help educators frame and make meaningful, fully intentional choices about an AI tool–for example, what role an LLM such as ChatGPT should play in a writing assignment.

LLMs by their very definition exploit vast amounts of data at scale to find patterns, which makes them very efficient. They are vastly different from a child’s brain. As Alison Gopnik, who runs the Cognitive Development and Learning Lab at UC Berkeley and serves in its AI working group, points out, a child’s mind is “tuned to learn”–that is, to explore–and a brain geared toward learning works differently from a brain that works to exploit what it already knows–i.e., the adult brain. AI, unlike a child’s brain, Gopnik further explains, is not good at things it wasn’t optimized for, like coping with change–i.e., it is not resilient–and even AI researchers admit that AI needs a period of play to develop resiliency. Play here means a chosen (and therefore meaningful), imaginative, engaging, often social, generally enjoyable activity with a repetitive or iterative quality, in which players try out ideas until they are satisfied with the results.

Gopnik’s explanation and AI researchers’ acknowledgment reveal an intriguing parallel: to make a better, more resilient AI, AI needs to experience a period of play. The current model of schooling, in which play is mostly removed and learning is largely premised on the transmission of knowledge, is fundamentally designed to produce poor outcomes. No play means poor outcomes for both AI and students! Instead of focusing on efficiency and how AI can make teachers more efficient, we should model AI on the brains of children, return play to learning, and redesign learning based on how human brains actually learn.

Learning is a complex process that needs sustained opportunities to solve problems or create products that are meaningful. Learning is a social activity that involves the learner thinking about and with the content and responding to feedback. For humans, learning is both cognitive and emotional. The challenge for those building AI, incorporating AI into edtech, and using AI in the classroom is how to create the social-emotional reciprocity that is integral to learning.
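For readers who want to see Gopnik’s explore/exploit distinction in concrete form, here is a toy sketch in Python. It is not drawn from her lab’s work; the activities and payoffs are invented. An “epsilon-greedy” learner mostly exploits what it already believes is best, but with some small probability it plays–it tries an option for no reason other than to see what happens.

```python
# A toy illustration of the explore/exploit trade-off: a learner that
# occasionally tries options it has no reason to believe are best.

import random

rewards = {"a": 0.3, "b": 0.7, "c": 0.5}   # hidden payoffs of three activities
estimates = {arm: 0.0 for arm in rewards}   # what the learner believes so far
counts = {arm: 0 for arm in rewards}

def choose(epsilon: float) -> str:
    if random.random() < epsilon:                # explore: play
        return random.choice(list(rewards))
    return max(estimates, key=estimates.get)     # exploit: efficiency

for _ in range(1000):
    arm = choose(epsilon=0.1)
    payoff = 1.0 if random.random() < rewards[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (payoff - estimates[arm]) / counts[arm]  # running mean

# With a little exploration, the learner reliably discovers "b".
print(max(estimates, key=estimates.get))
```

Set epsilon to zero–pure exploitation–and the learner often locks onto an early, mediocre choice forever; a little structured play is what lets it find the better option, the same trade-off described above for children and for AI.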

Given all of the above, the imperative for efficiency, which underlies both the model of LLMs and the conventional model of education, is not the way forward unless we want to repeat the mistakes of the past and perpetuate structural inequities and poor pedagogical practices. Instead, we should focus on training a playful, culturally responsive AI that can be effectively harnessed by teachers who design relevant learning experiences. This trajectory offers us, as educators, not only a way to redress structural inequities but also real learning liberation, in which BIPOC students can realize their full learning potential.

This blog post is part of the #31DaysIBPOC Blog Series, a month-long movement to feature the voices of Indigenous teachers and teachers of color as writers and scholars. Please CLICK HERE to read yesterday’s blog post by Chandra Singh. Please CLICK HERE to be uplifted by the rest of the blog series.
