Digital Humanism Work with AI

Lab Session: Digital Humanities

In our recent digital humanities / corpus analysis exercise, Dr. Dilip Barad assigned a lab activity in which we explored two web-based text tools: CLiC and Voyant Tools. The goal was to see how they can assist with analysing texts, discovering patterns, and drawing insights. In this blog I share my process, what I learned, and my reflections on their strengths and limitations.

VOYANT: See Through Your Text


Activity with Voyant

Example: Work by AI

This is a good example of analysis done with AI: it shows how AI can help make sense of data presented as an image.



Here is ChatGPT’s analysis of this image:

🔹 Left Panel – Word Cloud (Cirrus)

  • The word cloud highlights the most frequently used words in Frankenstein (a minimal counting sketch follows this list).

  • Words like life, day, night, elizabeth, shall, mind, thought, death, time are prominently displayed.

  • These words reveal the central themes of the novel:

    • Life → creation of the monster, scientific ambition.

    • Death → destruction, tragedy, and consequences of Victor’s experiment.

    • Elizabeth → a key character symbolising love and loss.

    • Night, day, thought, mind → the mental and emotional struggles of characters, Gothic atmosphere.
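
To make concrete what the Cirrus panel is counting, here is a minimal Python sketch of the same idea. It is not Voyant’s actual code: it assumes a local plain-text copy of the novel (e.g. the Project Gutenberg file, here called frankenstein.txt) and uses a tiny hand-made stoplist.

```python
import re
from collections import Counter

# A tiny illustrative stoplist; Voyant's built-in English stoplist is far longer.
STOPWORDS = {"the", "and", "of", "to", "a", "in", "i", "my", "was", "that",
             "with", "but", "he", "his", "her", "it", "me", "which", "had"}

def top_words(text, n=10):
    """Return the n most frequent content words - roughly what Cirrus visualises."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)

# Usage (assuming a local plain-text copy of the novel):
# with open("frankenstein.txt", encoding="utf-8") as f:
#     print(top_words(f.read()))
```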


🔹 Middle Panel – Text Viewer

  • Shows passages from Frankenstein, allowing for close reading within the corpus.

  • The excerpt includes names and dialogues, showing how Voyant integrates quantitative analysis with the primary text.


🔹 Right Panel – Trends Graph

  • The line graph represents the frequency of selected words across successive segments of the text (a rough counting sketch follows this list).

  • For example, words like 'life', 'mind', 'death', 'thought', and 'night' rise and fall across different segments of the novel.

  • This shows thematic progression:

    • Early sections emphasise life and creation.

    • Later sections show increased mentions of death and night, reflecting Gothic tragedy.

    • Mentions of Elizabeth spike in particular chapters (her central role in the narrative).
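
The Trends panel can be imitated in the same spirit. The sketch below is my own rough approximation, not Voyant’s implementation: it slices the tokenised text into ten equal segments (an assumed number) and counts the chosen words in each slice.

```python
import re

def word_trends(text, words, segments=10):
    """Count how often each chosen word occurs in each of `segments`
    equal slices of the text - the intuition behind the Trends graph."""
    tokens = re.findall(r"[a-z']+", text.lower())
    size = max(1, len(tokens) // segments)
    counts = {w: [0] * segments for w in words}
    for i, tok in enumerate(tokens):
        if tok in counts:
            seg = min(i // size, segments - 1)
            counts[tok][seg] += 1
    return counts

# Example: word_trends(novel_text, ["life", "death", "night", "elizabeth"])
```

Plotting those per-segment counts for each word would reproduce the rising and falling lines of the Trends graph.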


🔹 Bottom Section – Metadata & Corpus Tools

  • Displays word counts, document statistics, and analysis tools.

  • This confirms that the uploaded text is Frankenstein and gives access to summaries, keyword-in-context, and further analysis (a simple keyword-in-context sketch follows below).
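
Keyword-in-context is also easy to illustrate. The function below is a hedged sketch of the idea behind a concordance view, not Voyant’s real Contexts tool; the tokenisation and the five-word window are my own assumptions.

```python
import re

def kwic(text, keyword, window=5):
    """Keyword-in-context: each occurrence of `keyword` with a few words
    of context on either side, similar in spirit to a concordance view."""
    tokens = re.findall(r"[A-Za-z']+", text)
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

# Example: kwic(novel_text, "elizabeth")[:5]
```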


 Review of the Image

This visualisation of Frankenstein in Voyant Tools provides a digital humanities perspective on the novel. Instead of only close reading, it uses distant reading techniques:

  • The word cloud highlights core themes of life, death, thought, and characters like Elizabeth.

  • The trends graph shows how themes shift across the narrative, mapping emotional and thematic development.

  • Together, these tools reveal how Shelley weaves recurring motifs to explore creation, responsibility, and the Gothic tension between science and morality.



MY EXPERIENCE:

While working on analysing Mary Shelley’s Frankenstein using Voyant Tools, I received valuable help from ChatGPT. At first, the image I had, with a word cloud, a text reader, and a trends graph, looked complicated. But with ChatGPT’s guidance, I was able to understand clearly what each part represented.

The word cloud highlighted the most frequent words, like 'life', 'death', 'Elizabeth', 'thought', 'night', and 'mind'. ChatGPT explained how these words point to the main themes of the novel: creation, destruction, love, and the Gothic atmosphere. The trends graph showed how the use of these words changes across the text, and this helped me see how themes like life and death dominate at different points in the novel. The text reader allowed me to connect this data with the actual narrative of Frankenstein.

What impressed me most was how ChatGPT combined digital analysis with literary interpretation. It was not just a technical explanation but a meaningful review of how Voyant Tools can be applied in literary studies. This experience helped me realise the importance of Digital Humanities in exploring texts: instead of only close reading, I could also use distant reading to identify patterns, themes, and character importance in a new way.

I truly appreciated this support, because it made the analysis both easier and deeper. ChatGPT acted like a bridge between technology and literature, and my understanding of Frankenstein improved through this guided exploration.

My Experience with the Moral Machine Activity

Recently, as part of our lab session in digital/information & technology studies (ITS), I engaged with the Moral Machine online platform created by the MIT Media Lab. The tool asks users to make ethical decisions in hypothetical scenarios involving autonomous (self-driving) vehicles: for example, a car must choose between endangering a group of pedestrians or sacrificing its occupants. It draws on variations of the classic trolley problem.

Here’s how I carried out the activity, what I noticed, and what I learned.

What I did

  • I logged into the platform and worked through several driverless-car dilemma scenarios. For each scenario I had to decide whom to spare and whom not to: passengers vs. pedestrians, younger vs. older people, humans vs. animals, law-abiders vs. criminals, etc.

  • I reflected on my decision-making: what I instinctively spared, what I hesitated on, what emotional or cultural intuitions shaped my choices.

  • I noted the complexity: my immediate moral intuition often clashed with a “rational” calculation of harm (e.g., save more lives vs. save the law-abiding).

  • After finishing several rounds, I reviewed how difficult it is to translate moral values into binary choices in the machine’s “algorithmic” format (a toy illustration of that reduction follows this list).

  • I also reflected on the broader implications: if machines are going to make or assist in life-and-death decisions, how will our values and biases get coded into them?
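
To show literally what that “algorithmic format” can look like, here is a deliberately crude toy in Python. The weights and attributes are entirely invented for illustration; nothing here reflects how the Moral Machine itself works.

```python
# Hypothetical weights, invented for illustration only; the real Moral Machine
# does not publish or use any such table for individual decisions.
WEIGHTS = {"human": 1.0, "animal": 0.3, "child": 0.5, "lawful": 0.2}

def group_score(group):
    """Sum the weights of every attribute of every individual in a group."""
    return sum(WEIGHTS.get(attr, 0.0) for person in group for attr in person)

pedestrians = [{"human", "child"}, {"human", "lawful"}]
passengers = [{"human"}, {"human"}, {"animal"}]

# The "decision" collapses into a comparison of two numbers - exactly the
# reduction of moral complexity that the activity made me question.
spare = "pedestrians" if group_score(pedestrians) >= group_score(passengers) else "passengers"
print("Spare:", spare)
```

Even this toy makes the problem visible: changing a single number in the weights table silently changes who is spared.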

My Observations & Reflections 



  • Emotional tension: Even though it’s a simulation, each scenario felt heavy. Choosing one life over another, or sparing one group at the cost of another, confronts you with the weight of moral responsibility.

  • Bias awareness: I noticed that I had certain biases (for example: spare younger over older; spare more lives rather than fewer; spare humans over animals). It made me aware that even my “moral reasoning” comes with built-in assumptions.

  • Cultural dimension: I wondered how my decisions might differ if I were from another cultural background. The research shows there are cross-cultural differences in how people respond in the Moral Machine experiment.

  • The machine problem: A machine (or algorithm) that implements decisions like these must reduce complex human lives to a set of coded values. That reduction itself is morally loaded and problematic.

  • Learning through doing: Just engaging with the tool increased my awareness of how technology, ethics, and everyday decision-making intersect. It wasn’t purely theoretical; it felt immediate.


Learning Outcomes

From the activity I derived several important learning outcomes:

1. Increased ethical literacy for technology

One of the biggest take-aways is that when technology enters the domain of life-or-death decisions (as with autonomous vehicles), ethics cannot be an afterthought. The Moral Machine activity made me understand that engineers, designers, policy-makers—and users—must ask: Who decides? On what basis? With what values?

2. Recognising the limits of simple moral decision models

The activity exposed how difficult it is to reduce moral choices into yes/no, spare/kill binaries. Real life is far messier: relationships, intentions, context, regret, and responsibility all play a role. The simulation forces simplification, which helps reveal tensions. But it also exposes a limitation: a machine cannot see the full human context.

3. Awareness of human bias and cultural variation

By doing the exercise, I became more conscious of my own moral inclinations and of the fact that they are shaped by culture, upbringing, and social norms. The data from the Moral Machine project indicate that different societies make different choices, so when building global technology, universal moral rules may not exist, or must at least be negotiable.

4. Reflection on responsibility and accountability

If a self-driving car makes a decision, who is responsible? The manufacturer? The programmer? The user? The vehicle? The activity prompts these questions. It helped me see that “machine ethics” is not simply coding ethics into a device; it involves social, legal, philosophical layers.

5. Connecting classroom theory to real world tech

From my previous blog-style documentation of our text-analysis tool activity (with tools like Voyant & CLiC) I had learned how digital tools help research. But this Moral Machine activity extended that: instead of analysing texts, we engaged with ethical decision-making in emerging tech. It helped link theory (digital humanities, ethics) with practice (AI/automation).

6. A tentative awareness of future challenges

Finally, I left the activity thinking: the world of autonomous machines is not far off. The questions we faced in simulation may soon become everyday realities. Are our societies prepared? How will disadvantaged groups be treated by such machines? Will bias be codified? The Moral Machine pointed to these challenges.


Personal Reflections

On a personal note, this activity made me reconsider some of my own assumptions. I realised that when I was making decisions I often defaulted to “save the most lives” as a principle, but I also felt I should spare the vulnerable (children, the law-abiding). That dual impulse pulled me in different directions. The game forced a confrontation with that tension.

At first I felt uneasy: is it okay to treat a simulation like this lightly when the outcomes are life and death? But I concluded: the simulation works precisely because of its discomfort. It makes you uneasy, yes—but that unease is part of learning. It forces you to ask uncomfortable questions.

I also felt a sense of humility: I realised I don’t have a ready-made moral code for all situations. My moral reasoning is messy. That is okay, but it means we should be humble about how we build systems (machines) that might have to make decisions when we are not present.

In terms of my learning style (as an MA student of English and digital humanities), this activity was valuable because it bridged disciplines: ethical theory, technology studies, digital humanities, cultural studies. I could see how digital tools (in this case the Moral Machine) can be used not just for textual analysis, but for ethical and cultural reflection.


Implications for my Studies & Beyond

Given my major in English/Cultural Studies and interest in digital humanities, this activity offers a few concrete implications:

  • When studying literature or culture, one must now also consider how technology mediates representation and how machine algorithms might reproduce cultural bias.

  • In assignments (like blog posts, presentations) I can refer to the Moral Machine as a case study of how moral and technological logics intersect—adding depth to work on cultural theory or post-colonial ethics.

  • It invites me to think: if texts and machines are sites of ideology, then digital humanities must engage not just with literature and archives but with emerging algorithms, automation, and human-machine ethical frameworks.

In broader terms: in our society, decisions about AI, driverless cars, automation will have ethical, cultural, legal consequences. As a student (and future professional), I feel a responsibility to be aware—rather than assume technology is neutral. The Moral Machine activity reinforced that.


Conclusion

Engaging with the Moral Machine activity was more than a classroom exercise—it was a mirror into the ethical dimensions of technology and a test of my own moral intuitions. It taught me that digital tools can be reflective—not just technical. It made me aware of the complexity of moral decision‐making in automated contexts, the influence of bias and culture in such decisions, and the necessity of critical engagement when we build or deploy technology that impacts lives.

In sum, the experience strengthened my ethical literacy for technology, sharpened my awareness of cultural and moral complexity, and prepared me to ask deeper questions about the digital humanities of the future. I’m grateful for the lab session that prompted this reflection, and I look forward to integrating these insights into my further studies and work.


