Question 1: What technologies are you working with, or have you worked with?
At the core of my work, I am an Artificial Intelligence (AI) researcher. I run the Interactive Robotics Group, a lab at MIT that develops AI to make machines and robots more capable of working in collaboration with people, enhancing rather than replacing human capabilities. This field, known as collaborative robotics, focuses on developing and deploying robots that work alongside people, such as in factory systems. These robots provide intelligent decision support for expert decision-makers such as nurses and doctors, as well as for laborers who build planes and cars. Our work translates insights from cognitive science and behavioral psychology into AI that supports effective human-robot teamwork in the workplace. From there, we model workflows to better anticipate user needs and to provide greater interactivity and support during challenging tasks. This research on interactivity feeds into my work as faculty director of the Industrial Performance Center, where Ben Armstrong and I co-lead the Work of the Future Initiative. That work has transformed how we translate research on effective team building and collaborative robotics into practice, drawing lessons from industry business cases through collaboration with social scientists.
Through this process of collaborating as engineers and social scientists, we learned that despite advancements in assistive robotics, few robots were being used in the ways we expected. Ultimately, we came to understand that integrating robots into workplaces remains a very human process, one that requires more flexible, usable, and accessible systems for users across sectors. Through the Automation Clinic, a combined design, business, and technology initiative, we're developing robots that can work alongside workers. These robots not only pass the right part at the right time, but can also model a worker's intent as the worker teaches the robot a task, without anyone writing additional lines of code along the way. This is achievable through deep learning, deep reinforcement learning, and large language models (LLMs) that are integrated into our robots from the start. Through case studies with different companies and industries, we aim to design robots that can adjust to organizational and systemic differences while still achieving high performance and high reliability.
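To give a concrete, deliberately simplified picture of what intent modeling can look like, the Python sketch below shows one common technique, Bayesian goal inference: the robot keeps a belief over what the worker needs next and updates it from observed actions. The goals, observations, and probabilities here are illustrative assumptions for this example, not the lab's actual models, which rely on learned components such as the deep networks and LLMs mentioned above.

    # A minimal, hypothetical sketch of Bayesian intent inference.
    # All goal names and probabilities below are made up for illustration.

    GOALS = ["fetch_bracket", "fetch_fastener", "hold_panel"]

    # Assumed likelihoods P(observed action | goal); in practice these
    # would come from a model learned from worker demonstrations.
    LIKELIHOOD = {
        "reach_left":  {"fetch_bracket": 0.7, "fetch_fastener": 0.2, "hold_panel": 0.1},
        "reach_right": {"fetch_bracket": 0.1, "fetch_fastener": 0.7, "hold_panel": 0.2},
        "brace_arm":   {"fetch_bracket": 0.1, "fetch_fastener": 0.1, "hold_panel": 0.8},
    }

    def update_belief(belief, observation):
        """Bayes update: P(goal | obs) is proportional to P(obs | goal) * P(goal)."""
        posterior = {g: LIKELIHOOD[observation][g] * p for g, p in belief.items()}
        total = sum(posterior.values())
        return {g: p / total for g, p in posterior.items()}

    # Start from a uniform prior and update as the worker demonstrates.
    belief = {g: 1.0 / len(GOALS) for g in GOALS}
    for obs in ["reach_left", "reach_left"]:
        belief = update_belief(belief, obs)

    print(max(belief, key=belief.get))  # the robot's best guess at the worker's intent

The key point is that the likelihood model is learned from demonstrations rather than hand-coded, which is what allows a worker to teach the robot a new task without writing any code.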
Question 2: How do you take account of MIT’s obligation to pursue the public interest in the work that you do?
My colleague Ben Armstrong and I wrote a Harvard Business Review article titled “A Smarter Strategy for Using Robots” that examines positive-sum automation in design, integration, and return-on-asset (ROA) decisions for organizations. The piece also gives a few examples of firms and contexts where this strategy has been implemented. If we talk about technology in the public interest, this piece highlights the core of our research agenda: an interest in how increased use of automation can go hand in hand with higher quality jobs, with better jobs for workers. This is not simply about technology improving human productivity, which might be top of mind for a firm. Rather, it is about how automation decisions can interact with skills and training to open new job opportunities or raise wages for workers as our economy continues to incorporate new technologies. In this work, framing societal needs and envisioning potential unintended consequences depends on deep, long-term collaborations with social scientists and with stakeholders such as companies and users. This up-front time investment in project development has fundamentally changed how we assess the usefulness and benefit of the technologies we develop.
Through collaborations with social scientists, engineers, and business colleagues, my team has better understood how to frame our technical approach and the technical problems we're solving to meet societal needs. When diverse, interdisciplinary collaboration is integrated into the whole innovation process, we really start to put societal, or public, interests first.
Question 3: What more could you and others do to help MIT meet its social obligation to pursue public interest technology?
I served as Associate Dean of Social and Ethical Responsibilities of Computing (SERC), building up that program for three years, right at the start of the Schwarzman College of Computing. In this role, I was enormously proud of what our colleagues at the Institute were able to accomplish with SERC. Instructors across the Institute came together in small groups to develop new standalone courses on the social responsibilities of computing and to generate points of connection across their respective disciplines in the social sciences, sciences, and arts. Through SERC initiatives, we aimed to seed social and ethical considerations into all aspects of a computing education at MIT, while also recognizing the inherent difficulties that arise when asking faculty to change their course material.
Rather than be prescriptive, we designed SERC to reflect Harvard's Embedded EthiCS model, an approach to computer science education that brings together faculty from the social sciences and humanities with faculty from engineering and computer science to build person-to-person relationships. Through these facilitated interactions, SERC material has been integrated into coursework and co-teaching opportunities, some lasting a single term and others spanning several years of collaborative teaching and research. Beyond on-campus curricular development, both the approach and the materials developed within SERC are now available as open courseware, accessible to communities within and outside MIT. In this sense, I think the efforts led through SERC represent a great model for how public interest technology can be done effectively, with public transparency about the process we used to make this effort a reality.
Julie Shah is the H.N. Slater Professor in the Department of Aeronautics and Astronautics and leads the Interactive Robotics Group, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL). She serves as faculty director of the Industrial Performance Center, where she co-leads the Work of the Future Initiative with Ben Armstrong.
Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members and improve the efficiency of human-robot teamwork. In 2014, Shah received an NSF CAREER award for her work on “Human-aware Autonomy for Team-oriented Environments” and was named to the MIT Technology Review TR35 list of the world’s top innovators under the age of 35. Her work on industrial human-robot collaboration was also recognized by the Technology Review as one of the 10 Breakthrough Technologies of 2013, and she has received international recognition in the form of best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, the International Symposium on Robotics, and the Human Factors and Ergonomics Society.