About the Lab
The AI Ethics and Governance Lab at the Centre for Artificial Intelligence Research (CAiRE), Hong Kong University of Science and Technology (HKUST) works at the intersection of technology, ethics, and public policy. Our mission is to advance the ethical research, development, and implementation of AI in a manner that respects the diverse range of human values across cultures. This includes encouraging responsible AI practices that are safe, transparent, and explainable, while also promoting the benefits of AI for society as a whole. We are dedicated to offering theoretical insights and practical guidance to decision-makers in both the private and public sectors. By engaging with a broad array of stakeholders, including industry leaders, policymakers, academics, and the general public, we produce knowledge and policy recommendations that are nuanced in their understanding of cultural, ethical, and technological complexities.
Our Research Methodology
Our Lab operates at the intersection of multiple fields, combining computer science, philosophy, law, and social sciences to create a multidimensional view of the opportunities and risks presented by AI. Our research methodology is comprehensive and multidisciplinary, blending empirical analysis, conceptual inquiry, and policy-oriented studies. Central to our approach is the recognition of the importance of a comparative East-West perspective in addressing the ethical and governance challenges of AI. We strive to break down barriers to create a vibrant space where different intellectual approaches can enrich one another.
The rapid pace of AI development and deployment demands immediate attention to a range of ethical and governance challenges. The challenges our lab is currently focusing on include:
- How can we design AI systems that reduce rather than amplify societal biases and discrimination?
- What methods can we use to improve the safety and reliability of generative conversational AI systems?
- How can AI tools be effectively and ethically used in creative arts?
- How can we use technology and policy to foster sustainability innovation?
- How can an understanding of differing East-West intellectual traditions inform the design of globally applicable ethical and governance frameworks for AI?
Kellee S. Tsai is the founding Director of the Lab, Associate Director of CAiRE, and Dean of Humanities and Social Science at HKUST. Trained as a political scientist, her areas of expertise include comparative politics, the political economy of China and India, and informal institutions.
Kira Matus is a founding member of the Lab, Professor in the Division of Public Policy and the Division of Environment and Sustainability, and an Associate Dean in the Academy of Interdisciplinary Studies at the Hong Kong University of Science and Technology. She is a scholar of public policy, innovation, and regulation, with a particular interest in the use of policy to incentivize and regulate emerging technologies. She has a special interest in the roles of non-state/private governance institutions, such as certification systems, as well as the sustainability implications of new technologies.
- Masaru Yarime interviewed by Público, April 23 (2023). “Todos querem regular a inteligência artificial, mas ninguém se entende” (Everyone wants to regulate artificial intelligence, but no one can agree). https://www.publico.pt/2023/04/22/tecnologia/noticia/querem-regular-inteligencia-artificial-ninguem-entende-2046879
Personal Website: https://yarime.net/
Gleb Papyshev is a Research Assistant Professor in the Division of Social Science. His research interests include AI policy and regulation, AI ethics, and corporate governance mechanisms for emerging technologies. The results of his work have been published in Policy Design and Practice, AI & Society, Data & Policy, and Elgar Companion to Regulating AI and Big Data in Emergent Economies.
- Measuring the Impact of Harmful AI Incidents on Corporate Digital Responsibility Disclosure
Linus Huang is a Research Assistant Professor in the Division of Humanities. Trained as a philosopher of science, his research focuses on algorithmic bias, explainable AI, AI for social good, the ethics of emerging technology, and theoretical issues in computational cognitive neuroscience. He has published on these topics in Philosophy and Technology, including the article “Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.”
Personal Website: https://philpeople.org/profiles/linus-huang
Hailong Qin is a Post-Doctoral Fellow in Social Science. He holds a PhD in Computer Science from Harbin Institute of Technology and has five years of experience as an algorithm engineer at Internet companies. His research interests include the history of Chinese artificial intelligence development, natural language processing, and social network analysis.