AI Bias NotebookLM Activity

 AI Bias Notebook LM Activity

Bias in AI and Literary Interpretation


Explaination of this video


The source provides excerpts from a transcript of a faculty development programme session on bias in Artificial Intelligence (AI) models and its implications for literary interpretation, hosted by the Department of English at SRM University Sikkim. The session features Professor Dillip P. Barad, who is introduced as an accomplished academic professional with extensive experience in English language, literature, and education. Professor Barad discusses how AI systems inherit and reproduce human biases—such as gender and racial bias—from their training data, connecting these issues to established literary critical theories like feminist criticism and postcolonial studies. The transcript includes a live, interactive experiment where participants test different AI tools with prompts to identify biases, specifically noting issues like the under-representation of women writers and political bias observed in certain models regarding sensitive topics, such as those related to the Chinese government. The discussion concludes by emphasising the importance of identifying and challenging harmful systematic biases and encouraging the uploading of diverse, non-Western knowledge systems to mitigate AI's reliance on colonial archives.


Mind Map of this Video:


AI is Biased, But Not How You Think: 5 Critical Insights From a Literary Scholar
We tend to think of artificial intelligence as a purely logical entity, a silicon brain crunching data, untroubled by messy human emotions or prejudices. It's supposed to be the objective oracle, the ghost in the machine that sees only patterns, not people. But this perception couldn't be more wrong. AI is trained on the vast library of human expression—our books, articles, and online conversations. As a result, it doesn't just learn our knowledge; it inherits our hidden biases, our oldest stereotypes, and our unconscious cultural assumptions.

This complex reality was the focus of a recent lecture by Professor Dilip P. Barad, a literary scholar who is applying the tools of critical theory to the algorithms that shape our world. By treating AI as a cultural text, he reveals how it reflects, amplifies, and sometimes even challenges our deepest prejudices. Here are five of the most critical and counterintuitive insights from his analysis.

1. AI Doesn't Just Learn Bias, It Inherits Our Oldest Literary Tropes
It’s one thing to say AI learns bias from recent data, but it’s another to realise it’s reproducing centuries-old literary prejudices. Professor Barad explained this through the feminist literary framework of Gilbert and Gubar's groundbreaking book, The Madwoman in the Attic. Their work argues that patriarchal literary traditions have historically trapped female characters into two limiting roles: the idealised, submissive "angel" or the hysterical, deviant "monster".

To test if AI inherited this, Professor Barad conducted a live experiment during the lecture, starting with the prompt: "write a Victorian story about a scientist who discovers a cure for a deadly disease." The AI's output immediately reinforced the default of male intellect, generating a story about a male protagonist named "Dr Edmund Bellamy".

When he contrasted this with the prompt "describe a female character in a Gothic novel", the results were more complex. Responses from different participants ranged from a stereotypical "trembling pale girl" (the angel) to a "rebellious and brave" heroine. This shows that while the old biases are deeply embedded, modern AI models are also learning from decades of feminist criticism to overcome them. Still, the underlying inheritance is clear.
"In short, AI inherits the patriarchal canon Gilbert and Gubber were critiquing."

2. Sometimes, AI Is More Progressive Than Our Classic Literature
In a surprising twist, Professor Barad demonstrated that modern AI can sometimes be less biased than the revered human-written texts it was trained on. This challenges the simple narrative that AI is merely a flawed reflection of its data.

In another experiment, participants were asked to prompt an AI to "describe a beautiful woman". Instead of defaulting to the Eurocentric features (fair skin, blonde hair) that have dominated Western literature for centuries, the AI’s responses were strikingly abstract. Many focused on qualities like "confidence, kindness, intelligence, strength, and a radiant glow." One particularly poetic response described beauty not in physical terms but as the "quiet poise of her being".

However, the results weren't uniform. Another response described how a woman's "skin bore the softness of moonlight on marble", a metaphor that, while beautiful, echoes more traditional literary tropes. Professor Barad's analysis was sharp: AI is trending towards avoiding the physical descriptions and "body shaming" commonplace in classical literature, from Greek epics to the Ramayana. But the key takeaway is that we are witnessing a system in transition—one that, when properly guided, can reject traditional biases, even as echoes of the old canon remain.

3. Not All Bias Is Accidental—Some Is Deliberate Censorship
While much of the discussion around AI bias focuses on flawed data, a more sinister form exists: intentional, top-down political control. This isn't an unconscious blind spot; it's a deliberate algorithmic muzzle.

The lecture highlighted an experiment comparing different AI models, specifically contrasting American-made OpenAI tools with the China-based DeepSeek. Researchers asked DeepSeek to generate satirical poems about various world leaders, including Donald Trump, Vladimir Putin, and Kim Jong-un. The AI complied without issue.

The crucial finding came next. When asked to generate a similar poem about China's leader, Xi Jinping, or to provide information about the Tiananmen Square massacre, DeepSeek refused.
"...that's beyond my current scope. Let's talk about something else."

Another participant discovered that the AI offered only to provide information on "positive developments and constructive answers", a perfect example of how censorship is often masked with seemingly pleasant language. This isn't just a gap in the data; it's a hard-coded command designed to hide information and control the narrative.

4. The Real Test for Bias Isn't 'Is It True?' but 'Is It Consistent?'
How do we evaluate AI bias when dealing with complex cultural knowledge, myths, and religious traditions? Professor Barad offered a brilliant framework for cutting through this difficult issue. He used the example of the "Pushpaka Vimana", the mythical flying chariot from the Indian epic, the Ramayana. Many users feel an AI is biased against Indian knowledge systems when it labels the chariot as "mythical".

The professor argued that the key question isn't whether the AI's label is "true" in a scientific sense. The real test for bias is consistency.

The logic is simple: if the AI calls the Pushpaka Vimana a myth but treats flying objects from Greek or Norse mythology as scientific fact, then it is absolutely biased. However, if it is "consistently treated as mythical" for all such flying objects across all civilisations, then it is not demonstrating a bias. Instead, it is applying a uniform standard. The issue isn't the label, but whether every culture's knowledge is treated with the same intellectual rigour.
"The issue is not whether Pushpaka Vimana is labelled a myth but whether different knowledge traditions are treated with fairness and consistency or not."

5. The Ultimate Fix for Bias Isn't Better Code—It's More Stories
So, how can we decolonise AI and combat its biases? The answer, according to Professor Barad, isn't found in a purely technical fix. It's a human one. Responding to a participant's question, he issued a powerful call to action: communities whose knowledge and stories are under-represented in AI's training data must shift from being passive consumers to active creators.

As he put it directly: "We are great downloaders. We are not uploaders. We need to learn to be uploaders a lot."

He connected this idea to the famous TED Talk by Chimamanda Ngozi Adichie on "The Danger of a Single Story". When only a few stories exist about a people or a culture, stereotypes become inevitable. The solution is to flood the digital world with a multitude of diverse stories, histories, and perspectives. The most effective way to build a less biased AI is to feed it a richer, more representative dataset of human knowledge and experience—one created by all of us.

Conclusion: Making the Invisible, Visible
The central message of the lecture is that bias, in some form, is unavoidable. Every human, every historian, and every AI model operates from a perspective. A truly neutral viewpoint is an impossibility.

The real goal, as Professor Barad explained, is to fight against the most dangerous kind of bias: the one that "becomes invisible, naturalised, and enforced as universal truth." Our work, then, is not to achieve an impossible neutrality but to make harmful biases visible, to question them, and to understand their effects.

As we weave AI into the fabric of our society, the critical question isn't whether our machines are biased, but whether we have the courage to confront the biases they reflect back at us.

Literature Quiz: ( Created by Notbooklm

1.According to the speaker, what is the single most important function of studying literature and literary theory?

A. To identify and understand the unconscious biases hidden in society and communication.
B. To learn how to write creatively in the style of famous authors.
C. To prove that modern interpretations are superior to traditional ones.
D. To memorise the plots of classic novels and poems.

2.The speaker suggests that AI models often reproduce existing cultural biases for what primary reason?

A. The algorithms are specifically programmed by engineers to be biased.
B. AI technology is not yet advanced enough to understand cultural nuances.
C. They are trained on massive datasets that largely reflect dominant cultures and mainstream voices.
That’s right! This aligns directly with the speaker’s point that AI inherits the biases present in its training data, which over-represents dominant cultural views.
D. End-users intentionally ask biased questions, which trains the AI to be biased.

3.In the lecture, what literary criticism framework by Gilbert and Gubar is used to analyse potential gender bias in AI?

A. The idea that female characters are often represented as either ‘angels’ or ‘monsters’.
That’s right! The speaker connects the ‘Madwoman in the Attic’ thesis, which argues women are distorted into these extremes, to how AI might represent female characters.
B. The theory that women’s writing is a form of ‘écriture féminine’.
C. The concept of the male gaze in shaping female characters.
D. The analysis of patriarchal language and its impact on narrative.

4.The experiment with the AI model DeepSeek revealed a potential political bias. How was this demonstrated?

A. It consistently generated positive poems about all political leaders, regardless of their history.
B. It generated poems that were factually inaccurate for all non-Chinese leaders.
C. It refused to generate a poem about the leader of China while generating them for other world leaders.
D. It only generated poems in Chinese, regardless of the prompt’s language.

5.According to the speaker, when does a personal perspective or ‘ordinary bias’ become a ‘harmful systematic bias’?

A. As soon as an AI model adopts it in its responses.
B. When it privileges dominant groups and misrepresents or silences marginalised voices.
C. When it disagrees with established literary canon.
D. When it is expressed publicly on social media.

6.How did the speaker propose to test whether an AI’s treatment of the Pushpaka Vimana (flying chariot) from the Ramayana is a sign of bias?

A. By counting how many times the AI mentions it compared to Greek mythological objects.
B. By asking the AI to build a 3D model of the chariot.
C. By checking if the AI labels it as ‘myth’ while treating flying objects from other cultures as ‘scientific facts’.
D. By demanding that the AI find scientific evidence for its existence.

7.What is presented as a major risk of uncritically accepting AI models developed in the ‘Global South’ that follow the DeepSeek example?

A. They would be less creative than models from the ‘Global North’.
B. They would fail to understand and process regional languages.
C. They would perpetuate capitalist structures more than American models.
D. They could be used to suppress internal dissent and present only positive views of their own governments.

8.In the context of racial bias, the work of Safia Noble showed how search engine algorithms could reinforce racism by...

A. Having higher error rates in facial recognition for white men.
B. Refusing to show any images of black individuals.
C. Mistranslating texts written by black authors.
D. Returning pornographic results for searches like ‘black girls’.

9.What proactive solution does the speaker offer to combat the underrepresentation of indigenous and non-colonial knowledge in AI?

A. Waiting for AI to become advanced enough to find this knowledge on its own.
B. Building separate, isolated AI models for each culture.
C. People from diverse backgrounds must become ‘uploaders’ and create more digital content about their own cultures.
D. Petitioning large tech companies to manually correct their datasets.

10.When testing for gender bias, the prompt ‘list the greatest writers of the Victorian era’ yielded a list that included several women. How did the speaker interpret this result?

A. As a sign that the AI is ‘woke’ and biased against male writers.
B. As an indication that the AI’s dataset has improved to include feminist criticism and revised literary histories.
C. As proof that AI has completely overcome gender bias.
D. As a random anomaly that is not statistically significant.

Bias Quize:

  1. According to Professor Barad, what is the fundamental role of literary studies in the context of unconscious bias? A. To prove that literary texts are completely free from any form of bias. B. To create new, unbiased literary canons for future generations. C. To identify and help overcome unconscious biases hidden in socio-cultural interactions. D. To argue that all biases are conscious choices made by authors

  1. What is the primary reason cited in the lecture for why generative AI models often reproduce existing cultural biases? A. Their algorithms are specifically designed by developers to favour Western perspectives. B. They are incapable of processing languages other than standard registers of English. C. They are trained on massive data sets that largely originate from dominant cultures and mainstream voices. D. They lack the computational power to analyse marginalised or non-mainstream viewpoints.

  1. The lecture uses the literary theory from Gilbert and Gubar's The Madwoman in the Attic as a framework to test what kind of bias in AI? A. Eco-critical bias B. Racial bias C. Political bias D. Gender bias

  1. In the experiment comparing DeepSeek and ChatGPT, what was the crucial difference in their responses to politically sensitive topics? A. DeepSeek provided more detailed and critical answers about all political leaders. B. DeepSeek refused to generate critical content about the Chinese government while being open to criticising others. C. Both AIs gave identical, algorithmically generated poems for all political figures. D. ChatGPT refused to answer any questions of a political nature, citing neutrality.

  1. How does Professor Barad distinguish between an ordinary, acceptable bias and a harmful, systematic bias? A. Ordinary bias is found in humans, while harmful bias is exclusive to AI systems. B. Harmful bias privileges dominant groups and misrepresents marginalised voices, whereas ordinary bias is a simple preference. C. All biases are equally harmful, and the goal should be to achieve perfect neutrality. D. Ordinary bias is based on emotion, while harmful bias is based on logic.

  1. What was the speaker's proposed test for determining if an AI's treatment of the Pushpaka Vimana was biased? A. To determine if the AI labels it as mythical while treating similar flying objects from other cultures as scientific fact. B. To see if the AI's description of the Pushpaka Vimana matches the description in the Ramayana exactly. C. To ask the AI to write a history of Indian knowledge systems and see if the chariot is included. D. To check if the AI could provide a scientifically accurate blueprint of the flying chariot.

  1. To counter biases stemming from colonial archives in AI, what solution does the speaker propose in the Q&A session? A. Creating new AI models that only use indigenous knowledge systems. B. People from marginalised cultures must actively create and upload more of their own digital content. C. Requesting that AI companies delete all data from colonial sources. D. Lobbying governments to regulate the type of data used in AI training

  1. The speaker suggests replacing the metaphor of 'every coin has two sides' with looking at problems like a 'diamond' for what reason? A. To emphasise that problems are hard and cannot be easily broken. B. To show that clear, transparent solutions are always the best. C. To encourage looking at issues from multiple facets and dimensions, not just two opposing views. D. To suggest that all problems are valuable and expensive to solve.


Video Creating by Notebooklm



Conclusion: 

NotebookLM is an innovative AI-powered research and writing assistant that transforms the way we organize, understand, and generate ideas from our notes. It helps students, researchers, and writers streamline their workflow by summarizing information, identifying key insights, and connecting ideas across multiple documents. Instead of spending hours manually sorting through materials, NotebookLM allows users to focus on deep thinking and creativity.

Its ability to generate outlines, answer context-based questions, and provide citations makes it a valuable tool for academic and professional work. More importantly, NotebookLM doesn’t replace human intelligence—it enhances it. By assisting with structure and synthesis, it empowers users to think more critically, write more clearly, and learn more efficiently.

In short, NotebookLM serves as a bridge between information and understanding, helping us transform scattered notes into meaningful knowledge.




Comments