Published on 05/16/2021
Last updated on 05/03/2024

AI Ethics: Listening to Stakeholders


By External Authors: Kenneth R. Fleischmann, Sherri R. Greenberg, and Stephen C. Slota

Given the life-and-death stakes of AI innovation, it is critical for designers to reflect and to seek input from experts in other domains, such as law and policy. In our research, funded by Cisco Research’s Legal Implications for IoT, Machine Learning and Artificial Intelligence Systems program, we interviewed 26 experts across these three domains: eight AI researchers, eight legal scholars and practitioners, and ten policy scholars and practitioners. We used an innovative study design to connect the expertise of our participant groups: we first interviewed the technology experts to identify emerging ethical challenges in AI, and then interviewed the legal and policy experts to understand the societal implications of these new applications of AI.

One theme across these interviews was the tendency to blame data for the shortcomings of AI. Flawed training data can be highly problematic, and data makes a convenient scapegoat when AI goes wrong. However, simply blaming the data does not absolve a system’s developers of wrongdoing. AI researchers have a responsibility to identify and acquire high-quality datasets, and to work to mitigate bias and other flaws and limitations present in the data. For more information, please see our paper, published in the Proceedings of the 83rd Annual Meeting of the Association for Information Science and Technology in October 2020.

We also found that AI is valued for providing innovative solutions to important problems, but innovative AI does not always lead to societal benefits. Innovation can interfere with reliability, as a newer and less tested system may be less robust. Innovation can also conflict with security, as AI technology can be dangerous in the wrong hands. Finally, innovation can be viewed as being hampered by regulation.
In designing AI-based systems, it is important to consider the value trade-offs involved in deciding when to innovate and when to use tried-and-true solutions. Our paper on innovation in AI was recently selected, out of 122 submitted short papers, as the sole winner of the iConference 2021 Best Short Paper Award.

Given that one of the goals of our project was to improve how AI is regulated, we applied to and were selected to participate in the Public Interest Tech Accelerator Cohort, a result of a partnership between the Day One Project and the Public Interest Technology University Network (PIT-UN). We argue that, given the stakes of AI innovation, there is a need for a non-partisan approach to unifying AI research and regulation. Thus, we propose the creation of the Fair Artificial Intelligence Research & Regulation (FAIRR) Bureau. Our policy proposal was published by the Day One Project earlier this year.

We are also working to put Austin on the map as the place to conduct research on, and learn about, social justice informatics. Social justice informatics involves working toward a more equitable and just society by leveraging data, information, and technology to solve complex societal challenges. One example is the PIT-UN Social Justice Informatics Faculty Fellows Program, a collaboration between the University of Texas at Austin, Huston-Tillotson University, Capacity Catalyst, MEASURE, and the City of Austin. Another example is the new concentration in Social Justice Informatics that the UT-Austin School of Information (iSchool) is launching as part of our new undergraduate Informatics major. UT-Austin is playing a leading role in AI ethics through Good Systems, a UT Grand Challenge. Good Systems aims to design AI technologies that benefit society.
Good Systems is currently partnering with the City of Austin on seven research projects, including Smart Cities Should Be Good Cities: AI, Equity, and Homelessness, which has been recognized as the MetroLab Innovation of the Month for July 2020. It has been an honor to collaborate with Cisco Research, and we are looking forward to future opportunities to advance our shared goals around ensuring that AI can be leveraged to benefit society.