2025-26: Being Human in the AI Era

Commonwealth Center for the Humanities and Society (CCHS)



Guy Dove, Philosophy

Project - I'll be your mirror: Reimagining AI

How closely does generative AI mirror human intelligence? While supporters sing its praises, critics argue that it mimics our cognitive output without any real understanding. My project seeks to avoid these polemical positions and to offer a more nuanced answer to this question. In keeping with the skeptics, I believe that AI is insufficiently connected to the world to achieve rich semantic understanding. However, I also think that the way that AI tackles many cognitive tasks echoes the way that we tackle similar challenges. This leads to the sobering thought that some of the failings of AI arise precisely because these models think like we do. Dismissing generative AI as a poor substitute for human intelligence misses this important insight.

Yi "Jasmine" Wang, Communication

Project - Evaluating LLMs as Reliable Coders for Social Media Data

My research investigates how Artificial Intelligence (AI), particularly Large Language Models (LLMs), can aid in analyzing extensive amounts of social media data. Traditionally, researchers manually review and categorize online discussions, but this process is slow and difficult to scale. LLMs such as ChatGPT provide a quicker alternative, but they also raise challenges involving bias, accuracy, and ethics. My project will compare AI-driven and human-led analyses of social media conversations concerning public health issues, such as COVID-19 vaccines. A key focus is assessing AI's role as a passive tool versus an active partner in research workflows. By testing both methods, I aim to determine the most effective and responsible way to integrate AI into humanities research.

I'm particularly interested in AI's reliability, biases, and ethical challenges. My research will explore whether AI should simply assist researchers or take on a more active, adaptive role in shaping analysis, ensuring that human oversight remains central to responsible data interpretation.

Kendra Sheehan, Classical and Modern Languages

Project - When Machines Dream of Us: Comparative Analysis of AI Narratives

This project explores what it means to be human in the AI era through Japanese and American popular culture. By analyzing narratives in each country's media, it examines how the two cultures depict the boundaries between humans and AI, shaped by their historical, cultural, and societal values. Japanese popular culture, particularly anime, often portrays AI as a solution to societal issues, reflecting values of technological harmony. In contrast, American media frequently highlights anxieties about technology's impact on control and individuality. This study will analyze key media from both cultures, such as Roujin Z, Time of Eve, and AI Amok from Japan, and Alien, Her, and Alien: Romulus from the US, to trace changing views and concerns about AI over time. The project aims to reveal how cultural narratives shape societal concerns about AI and to contribute to discussions on the ethical and emotional dimensions of technology's impact on human identity.

I'm particularly focused on how AI is portrayed in popular culture and its impact on societal values and anxieties. Japanese and American media offer contrasting views: Japanese narratives often see AI as a harmonious solution to societal issues, while American narratives tend to highlight fears about control and individuality. Through analysis of these portrayals in popular media, we can better understand the complex relationship between humanity and AI in a non-scientific manner.

Kushan Dasgupta, Sociology

Project - Contesting and Constructing Racial Knowledge: Scientific Racism and the Internet

Since the early 2000s, a broad cadre of internet users has maintained an online movement to popularize ideas associated with race science and scientific racism. These efforts have been hosted on numerous digital media platforms and facilitated by users with extremist, right-leaning, and centrist political views. In recent years, these efforts have also become contested, as journalists, academic scientists, and professional science organizations have drawn attention to the movement and published pieces challenging the worldviews it promotes. My project explores these developments to study how the notion of knowledgeability is framed and disputed when it comes to race. As AI comes under increasing scrutiny, particularly with regard to its capacity to depict and generate information about race, my project provides insights into how various human communities, with different positions on race, construct what knowledgeability and learnedness about race mean in the first place.

My project aims to address questions related to AI governance and AI ethics. For example, various stakeholders have challenged AI tools for their potential to reaffirm biases or falsehoods related to race. While these concerns are valid, they ultimately raise questions about what "knowledge" or "learning" about race means in the first place. My project offers an opportunity to inventory how various human communities implicitly or explicitly make sense of these notions. This should be of value to those working on AI governance, as such notions will shape how humans interrogate, adjudicate, and problem-solve at the intersection of race and AI.

Margath "Maggie' Walker, Geography and Geosciences

Project - The Prospects for a Responsible AI: Thinking through Place

At root, my project is about how emerging technologies are transforming our lives and the worlds we live in. I'm interested in both the materiality of AI (the infrastructural practices flowing in and through artificial intelligence) and the representation of AI (the ideas and discourses constitutive of emerging technologies). To fully grasp the world-making capacity of these technologies, I think they need to be thought about together. Because I am a geographer by training, my project aims to link up the idea and practice of AI through archival research and an empirical project grounded in place. I want to do this through a three-pronged approach. The first part charts the connections and gaps between the language of responsibility in generative AI and its technological manifestation; the second maps instances of emerging technologies to consider how AI is grounded in and through place; and the third takes findings from the second part to ask where, and whether, a counter-AI might surface. In other words, what are the prospects for resisting technologies that seem ubiquitous and inescapable?

I'm paying attention to how AI does or does not align with human values. I am also looking at how the goals of AI reinforce the current logics of our economy, such as neoliberalism and its emphasis on optimization and efficiency. Finally, I'm interested in how AI defines certain terms, like responsibility and the public interest.