How the College of Arts & Sciences Is Approaching AI Ethically and Responsibly
In a panel discussion during Tech & Society Week, faculty members in the College of Arts & Sciences discussed the ethical and societal dimensions of AI and offered their predictions on the future of the technology.
Illustration by Chiara Vercesi
The College of Arts & Sciences announced in March that students will be able to enroll in a nine-credit undergraduate certificate in artificial intelligence starting this fall.
As part of the program, students are required to complete one course each from three domains: The Problem of AI, which examines ethical, social and political dimensions of AI; The Science of AI, which focuses on understanding how systems work and their capabilities and limitations; and The Application of AI, which explores the practical use of AI tools in disciplinary and professional contexts.
David Edelstein, the dean of the College, explained in a Tech & Society Week panel in late March that The Problem of AI is a “cheeky nod” to one of Georgetown’s signature courses, The Problem of God, in which undergraduate students critically examine religious dimensions of human nature and reflect on their own experience with religion.
“In the same way in which the word ‘problem’ has about eight different meanings as it’s used in Problem of God, the same applies in many ways for artificial intelligence,” Edelstein said. “It’s a problem, and it’s a challenge. We need to think about the impact it’s having.”
Edelstein moderated the Problem of AI panel that featured Laura DeNardis, the director of the Center for Digital Ethics; Mark Fisher, an assistant professor in the Department of Government; Lisa Singh, a professor and chair of the Department of Computer Science; and Nathan Hensley, a professor in the Department of English. Each faculty member brought their own perspective and expertise to the conversation on how the College is approaching the questions AI raises and how it is developing curriculum to prepare students to engage with AI thoughtfully and deliberately.
“What is incumbent upon us as a community is a sustained critical engagement with this technology, to understand the way that the technology is affecting our society and everything we do,” Edelstein said.
The Joy of Inquiry
One of the goals of a liberal arts education is to cultivate the joy of inquiry, and Edelstein asked the panelists whether the introduction of AI threatens it.
“When we have tools around us that make our lives super easy, we tend to use them,” Singh said.
The question reminded her of a moment from teaching 15 years ago, when she realized that Google had revolutionized how her students memorized and learned information.
“This is the moment where we have to reinvent that joy of inquiry in different ways, and we can,” Singh said. “AI doesn’t have to rob us of that. It just means that we have to think more deeply about new pathways to capturing that type of inquiry we care most about.”
Fisher, the founding director for Artificial Intelligence and Democratic Citizenship (AIDC), believes there are approaches to education in which AI is not a threat because students are invested in developing certain cognitive tools. He hopes students will come to see their education as the development of those capacities and the cultivation of curiosity.
“When they are curious, there is genuine motivation to learn,” said Fisher, whose research focuses on the history and future of democratic thought. “But if students are really only encouraged to think about the grade or the outcomes or they’re really anxious about balancing many different priorities, I think AI is an obvious solution to that.”
Hensley warned that the use of AI can lead to cognitive offloading and deskilling, and noted that the continual expansion of AI will ultimately make studying the humanities more important than ever.
“I can testify that many of us see this as a paradoxical moment of revitalization: a chance to reinvest in the core functions of the liberal arts enterprise,” he said.
DeNardis, the inaugural endowed chair in Technology, Ethics and Society who is teaching the flagship Problem of AI course as part of the College’s certificate program, cautioned the audience against thinking of AI purely as large language models. AI, she added, is already in everything from cybersecurity to drug discovery, and can aid in the pursuit of knowledge.
“That’s the world that students are entering, and we have to prepare them for that,” DeNardis said.
The Future of AI
Edelstein asked the panelists to look into the future as AI continues to proliferate and evolve.
Will AI be seen as a moment that fell short of its promises? Will it spread even further and have a revolutionary effect on society? Or will it be somewhere in between?
DeNardis believes that AI will only continue to grow and be transformational in our lives. She hopes that as it progresses, reasonable governance frameworks will be developed to address some of the safety challenges that arise during these moments of transformation.
“I think AI is going to get astronomically large as it moves into the solar-system internet and space networks,” she said. “And I think it’s going to get infinitesimally small as it moves into nanotechnologies and medical devices.”
Fisher can imagine one of two scenarios.
The first, he said, is one where an acceptance of AI leads to passivity, where people begin to see AI as a way of offloading or outsourcing desires by having them met without much effort. “I think that opens up to a really dangerous world, not only which bridges various forms of authoritarianism, but also just extreme forms of human disenchantment,” Fisher said.
The second scenario is one where people figure out how to recenter humans and think about AI as something that can add to their abilities. In that case, AI would be something that is viewed as a tool where humans are still responsible agents, Fisher said.
For Singh, this generation of AI has already delivered the core transformations it will bring; it is the next generation of AI that will produce new kinds of change. But even before that next generation arrives, she said, large tech companies will continue releasing new technologies as fast as they can.
“If governance frameworks and entities like universities don’t ensure ethical use of AI, then we could see it used in ways we may not want it to be used,” Singh said.
Hensley envisions a future in which the proliferation of AI must confront the physical limits of a world that requires ever-larger data centers to power it. Through it all, he said, the importance of human connection will remain.
“I think these residual areas in which people can find each other and be real with each other in physical spaces will be more and more important,” Hensley said.