Ethical Frameworks for Public Interest Technologists at MIT: An Interview with Caspar Hare.

Caspar Hare is a Professor of Philosophy and Associate Dean for Social and Ethical Responsibilities of Computing (SERC) in the MIT Schwarzman College of Computing.

Question 1: What technologies are you working with, or have you worked with?

My background is in Philosophy, where I am interested in ethical questions about multidimensional decision-making and the tradeoffs that we must make when we reconcile values of different kinds.  

In practice, these tensions come to the fore as we confront the emergence of AI. It is a major, disruptive technology, and history shows us that social changes brought about by such technologies may be hard to anticipate. Imagine, for example, how hard it would have been for someone living at the end of the 18th century to anticipate the changes that fossil fuel technologies would bring in the 19th and 20th centuries. But, though we don’t know exactly what changes AI will bring, we do know that the changes will be big, and we want the changes to be good. As Associate Dean for the Social and Ethical Responsibilities of Computing (SERC) within the Schwarzman College of Computing, I engage with a diverse set of researchers working on how AI can make our lives better.

For example, there’s a lot of interest in large language models and agency. It may soon be that large language models don't just say things, but also do things. So we should ask: What goals are these models going to pursue? What priorities will they have? What will their values be? We want to structure their goals, priorities and values so they align with those that we have, as researchers, scientists, and citizens.  

For another example, groups of people within SERC are interested in privacy and digital intrusion. As surveillance technologies become ever more powerful, we need an ever more refined understanding of what privacy rights are.

These are some of the many issues that are of interest to me and to people involved with SERC.  

Question 2: How do you take account of MIT’s obligation to pursue the public interest in the work that you do? 

With AI threatening major social transformation, and with so many MIT undergraduates graduating with a degree in Computer Science (Course 6), we must encourage these students to ask questions about the engineering, programming, and design projects they take on: What are the implicit goals that I am pursuing by doing this? Why are these goals being set? What is the likely social effect of my doing this? Is it good? And we must give students tools to think about these questions in a rigorous way.

To that end, we are developing a new suite of courses. The first one, 24C.401/6C.401 The Ethics of Computing, is being taught this fall. It is a ‘Common Ground’ course listed in Philosophy (Course 24) and Computer Science (Course 6). It offers systematic ways of thinking about risk, algorithmic fairness, and other issues. We hope to offer further such courses next year. 

Beyond MIT, there's tremendous public interest in technology, and an institution like MIT is well situated to be a thought leader on it. It is striking that (in contrast to the internet, for example, which for the most part arose from government-university collaborations) private industry has been driving innovation in generative AI. There is an interesting question about what role a non-profit institution like MIT can play in pushing these technologies forward, and, at the very least, what role it can play as a trusted source of information about them, one that the general public or state policymakers could look to in order to keep abreast of what the heck's going on. There are people at MIT who are well placed to serve as that public source of information, given their expertise and their lack of a profit motive.

Question 3: What more could you and others do to help MIT meet its social obligation to pursue public interest technology?

As administrators, we can use research grants to incentivize faculty to work on technology that serves the public interest. But I don’t see that MIT faculty need much incentivizing. In my experience, they are wonderful, brilliant, opinionated people, all committed to pursuing their conception of the public good. In my view, the area where we can make major progress is in education. We can educate students inside and outside the Institute in how to think about technological progress in a human-centered way.



Caspar Hare is a Professor of Philosophy in the Department of Linguistics and Philosophy. Along with Nikos Trichakis, Hare is Associate Dean for Social and Ethical Responsibilities of Computing (SERC) in the MIT Schwarzman College of Computing. Hare and Trichakis work together to create multidisciplinary connections on campus and to weave social, ethical, and policy considerations into the teaching, research, and implementation of computing.

A member of the MIT faculty since 2003, Hare’s main interests are in ethics, metaphysics, and epistemology. The general theme of his recent work has been to bring ideas about practical rationality and metaphysics to bear on issues in normative ethics and epistemology. He is the author of two books: “On Myself, and Other, Less Important Subjects” (Princeton University Press 2009), about the metaphysics of perspective, and “The Limits of Kindness” (Oxford University Press 2013), about normative ethics.