AI Ethics and Governance Lab


About the Lab

The AI Ethics and Governance Lab at the Centre for Artificial Intelligence Research (CAiRE), Hong Kong University of Science and Technology (HKUST) works at the cutting edge of science, technology, ethics, and public policy. Our mission is to advance the ethical research, development, and implementation of AI in a manner that is respectful of the diverse range of human values across cultures. This includes encouraging responsible AI practices that are safe, transparent, and explainable, while also promoting the benefits of AI for society as a whole. We are dedicated to offering theoretical insights and practical guidance to decision-makers in both the private and public sectors. By engaging with a broad array of stakeholders, including industry leaders, policymakers, academics, and the general public, we will produce knowledge and policy recommendations nuanced in their understanding of cultural, ethical, and technological complexities.


Our Research Methodology

Our Lab operates at the intersection of multiple fields, combining computer science, psychology, philosophy, public policy, and social sciences to create a multidimensional view of the opportunities and risks presented by AI. Our research methodology is comprehensive and multidisciplinary, blending empirical analysis, conceptual inquiry, and policy-oriented studies. Central to our approach is the recognition of the importance of a comparative East-West perspective in addressing the ethical and governance challenges of AI. We strive to break down barriers to create a vibrant space where different intellectual approaches can enrich one another.

 

Our Challenges

The rapid pace of AI development and deployment necessitates immediate attention to a range of ethical and governance challenges. The challenges our lab is currently focusing on include:

  1. How can we design AI systems that reduce rather than amplify societal biases and discrimination?
  2. What methods can we use to improve the safety, reliability, and usability of AI systems?
  3. How can AI tools be effectively and ethically used in society, including in critical systems as well as in creative arts?
  4. How can we use technology and policy to foster sustainability innovation?
  5. How can the understanding of differing East-West intellectual traditions influence the design of globally applicable ethical and governance frameworks for AI?

AI Ethics and Governance Lab Event Calendar

 


Team members (in alphabetical order)

 

Co-Directors

Professor Janet Hui-Wen Hsiao

Janet H. Hsiao is a Professor at the Division of Social Science at HKUST. Her research interests include cognitive science, explainable AI, computational modeling, theory of mind, visual cognition, and psycholinguistics.

Personal Website

Associate Professor Masaru Yarime

Masaru Yarime is an Associate Professor at the Division of Public Policy and the Division of Environment and Sustainability at HKUST. His research interests focus on emerging technologies including artificial intelligence, the internet of things, blockchain, and smart cities, and their implications for public policy and governance.
 

Personal Website

Media Mentions


Core members

Dr. Keith Chan

Keith Jin Deng Chan is an Assistant Professor at the Division of Environment and Sustainability at HKUST. His research interests focus on applying economic and game theory to study the optimal design of governance mechanisms for sustainable finance and artificial intelligence.

 

Dr. Hao Chen

Hao Chen is an Assistant Professor in the Department of Computer Science and Engineering and the Department of Chemical and Biological Engineering. He leads the Smart Lab, which focuses on developing trustworthy AI for healthcare. He has 100+ publications (Google Scholar citations 24K+, h-index 63) in MICCAI, IEEE-TMI, MIA, CVPR, AAAI, Nature Communications, Radiology, Lancet Digital Health, Nature Machine Intelligence, JAMA, etc. He also has extensive industrial research experience (e.g., at Siemens) and holds a dozen patents in AI and medical image analysis.

 

Dr. David Haslett

David Haslett is a Research Assistant Professor in the Division of Social Science. He received his PhD in the Language Processing Lab at the Chinese University of Hong Kong. His dissertation investigated how people rely on similar-sounding words to represent the meanings of unfamiliar words, and his current research applies those findings to large language models. By better understanding how word meanings are represented in large language models, we can more accurately instruct and interpret artificial intelligence.

 

Dr. Linus Huang

Linus Huang is a Research Assistant Professor at the Division of Humanities. Trained as a philosopher of science, he focuses his research on algorithmic bias, explainable AI, value alignment, the ethics of emerging technology, and theoretical issues in computational cognitive neuroscience.

Personal Website

 

Dr. Mushan Jin

Mushan Jin is a postdoctoral fellow in the Division of Public Policy at HKUST. Her research interests include digital technologies and urban policy, policy discourse analysis, and data sharing for sustainability.

 

Chengzhong Liu

Chengzhong Liu is a final-year PhD student in the Department of Computer Science and Engineering, focusing on Human-Computer Interaction (HCI). His research concerns the design and governance of generative AI applications, e.g., how to design AI applications with appropriate ethical considerations.
 
Dr. Yang Liu

Yang Liu is a Research Assistant Professor of Humanities and a Fellow of the Institute for Advanced Study at HKUST. He is also a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence and the Faculty of Philosophy at the University of Cambridge. His research interests include the philosophy of AI, logic, and the foundations of decision theory and probability theory.
 
Professor Kira Matus

Kira Matus is a founding member of the Lab, a Professor in the Division of Public Policy and the Division of Environment and Sustainability, and an Associate Dean in the Academy of Interdisciplinary Studies at the Hong Kong University of Science and Technology. She is a scholar of public policy, innovation, and regulation, with a particular interest in the use of policy to incentivize and regulate emerging technologies. She has a special interest in the roles of non-state/private governance institutions, such as certification systems, as well as in the sustainability implications of new technologies.

Dr. Gleb Papyshev

Gleb Papyshev is a Research Assistant Professor at the Division of Social Science at HKUST. His research covers the areas of AI policy and regulation, AI ethics, and corporate governance mechanisms for emerging technologies.

Personal Website

 

Dr. Hailong Qin

Hailong Qin is a Post-Doctoral Fellow in the Division of Social Science. He holds a PhD in Computer Science from the Harbin Institute of Technology and has five years of experience as an algorithm engineer at Internet companies. His research interests include the history of Chinese artificial intelligence development, natural language processing, and social network analysis.

 

Dr. Zijun Shi

Zijun (June) Shi is an Assistant Professor at the HKUST Business School. Her research interest is at the intersection of economics of AI, technology-driven business, and consumer psychology. She has received several prestigious research awards including the Paul E. Green Award and the Don Lehmann Award. She was recognised as the MSI Young Scholar in 2023. She received her Ph.D. in Industrial Administration and M.S. in Machine Learning from Carnegie Mellon University.

 

Dr. Yangqiu Song

Yangqiu Song is an Associate Professor in the Department of Computer Science and Engineering at HKUST and an Associate Director of the HKUST-WeBank Joint Lab. He was an Assistant Professor in the Lane Department of CSEE at West Virginia University (2015-2016); a post-doctoral researcher at UIUC (2013-2015); a post-doctoral researcher at HKUST and a visiting researcher at Huawei Noah's Ark Lab, Hong Kong (2012-2013); an associate researcher at Microsoft Research Asia (2010-2012); and a staff researcher at IBM Research-China (2009-2010). He received his B.E. and PhD degrees from Tsinghua University, China, in 2003 and 2009, respectively. He also interned at Google in 2007-2008 and at IBM Research-China in 2006-2007.

 

Dr. Wenjuan Zheng

Wenjuan Zheng is an Assistant Professor in the Division of Social Science at the Hong Kong University of Science and Technology. Before joining HKUST, she worked as a postdoctoral fellow at the Stanford Center on Philanthropy and Civil Society. She is an organizational sociologist with a specialized focus on the nonprofit sector. Her research, particularly her book project on the evolving role of technology in shaping an alternative civic sector in authoritarian China, combines both qualitative and quantitative analyses. By studying the interactions between nonprofit organizations, corporate technology, and government entities, Zheng uncovers the socio-technical dynamics that support the growth of civil society. This empirical foundation provides a critical lens through which Zheng examines how AI governance can be designed to empower civil society actors, ensuring that AI systems are developed and implemented in ways that reflect the values, needs, and agency of diverse communities.

 

Dr. Yueyuan Zheng

Yueyuan Zheng is a Research Assistant Professor at the Division of Social Science at HKUST. She received her PhD in Cognitive Psychology at the University of Hong Kong. Her research interests include visual cognition, science of learning, computational modeling, and explainable AI.

 


Affiliated members

Professor Antoni B. Chan

Antoni Chan is a Professor in the Department of Computer Science at the City University of Hong Kong. Before joining CityU, he was a postdoctoral researcher in the Department of Electrical and Computer Engineering at the University of California, San Diego (UC San Diego). He received his Ph.D. from UC San Diego in 2008, studying in the Statistical and Visual Computing Lab (SVCL), and his B.Sc. and M.Eng. in Electrical Engineering from Cornell University in 2000 and 2001, respectively. From 2001 to 2003, he was a Visiting Scientist in the Computer Vision and Image Analysis lab at Cornell. In 2005, he was a summer intern at Google in New York City. In 2012, he received an Early Career Award from the Research Grants Council of the Hong Kong SAR, China.

Professor Kellee S. Tsai

Kellee S. Tsai is the founding Director of the Lab. She is currently the Dean of Social Science and Humanities at Northeastern University. Trained as a political scientist, her areas of expertise include comparative politics, political economy of China and India, and informal institutions.

Media Mentions

 


Resources


Publications

  • Hsiao, J. H., & Chan, A. B. (2023). Towards the next generation explainable AI that promotes AI-human mutual understanding. NeurIPS XAIA 2023. https://openreview.net/forum?id=d7FsEtYjvN
  • Veale, Michael, Kira Matus, and Robert Gorwa. 2023. Global Governance of Machine Learning Algorithms. Annual Review of Law and Social Science, 19. https://www.annualreviews.org/doi/abs/10.1146/annurev-lawsocsci-020223-040749
  • Liu, Yang et al. 2023. “The Meanings of AI: A Cross-Cultural Comparison.” In Cave, S., and K. Dihal (eds.), Imagining AI: How the World Sees Intelligent Machines. Oxford University Press, pp. 16–39.
  • Matthew Stephenson, Iza Lejarraga, Kira Matus, Yacob Mulugetta, Masaru Yarime, and James Zhan. 2023. “AI as a SusTech Solution: Enabling AI and Other 4IR Technologies to Drive Sustainable Development through Value Chains.” In Francesca Mazzi and Luciano Floridi, eds., The Ethics of Artificial Intelligence for the Sustainable Development Goals, Springer Nature, 183-201. https://link.springer.com/chapter/10.1007/978-3-031-21147-8_11
  • Aoki, Naomi, Melvin Tay, and Masaru Yarime. 2024. “Trustworthy Public-Sector AI: Research Progress and Future Agendas,” in Yannis Charalabidis, Rony Medaglia, and Colin van Noordt, eds., Research Handbook on Public Management and Artificial Intelligence, Edward Elgar, 260-273. https://www.e-elgar.com/shop/gbp/research-handbook-on-public-management-and-artificial-intelligence-9781802207330.html
  • Xie, Siqi, Ning Luo, and Masaru Yarime. 2023. “Data Governance for Smart Cities in China: The Case of Shenzhen,” Policy Design and Practice. DOI: 10.1080/25741292.2023.2297445. https://www.tandfonline.com/doi/full/10.1080/25741292.2023.2297445
  • Papyshev, Gleb, and Masaru Yarime. 2023. “The Challenges of Industry Self-Regulation of AI in Emerging Economies: Implications of the Case of Russia for Public Policy and Institutional Development,” in Mark Findlay, Ong Li Min, and Zhang Wenxi, eds., Elgar Companion to Regulating AI and Big Data in Emerging Economies, Edward Elgar, 81-98. https://www.e-elgar.com/shop/gbp/elgar-companion-to-regulating-ai-and-big-data-in-emergent-economies-9781785362392.html
  • Li, Zhizhao, Yuqing Guo, Masaru Yarime, and Xun Wu. 2023. “Policy Designs for Adaptive Governance of Disruptive Technologies: The Case of Facial Recognition Technology (FRT) in China,” Policy Design and Practice, 6 (1), 27-40. https://www.tandfonline.com/doi/full/10.1080/25741292.2022.2162248
  • Ó hÉigeartaigh, Seán, Jess Whittlestone, Yang Liu, Yi Zeng, and Zhe Liu. 2020. “Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance.” Philosophy and Technology 33: 571–593.
  • Papyshev, Gleb, and Masaru Yarime. 2022. “The Limitation of Ethics-based Approaches to Regulating Artificial Intelligence: Regulatory Gifting in the Context of Russia.” AI & SOCIETY. https://doi.org/10.1007/s00146-022-01611-y
  • Thu, Moe Kyaw, Shotaro Beppu, Masaru Yarime, and Sotaro Shibayama. 2022. "Role of Machine and Organizational Structure in Science." PLoS ONE, 17 (8), e0272280 (2022). https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0272280
  • Chan, Keith Jin Deng, Gleb Papyshev, and Masaru Yarime. 2022. "Balancing the Tradeoff between Regulation and Innovation for Artificial Intelligence: An Analysis of Top-down Command and Control and Bottom-up Self-Regulatory Approaches," SSRN, October 19. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4223016#
  • Provided input to The Presidio Recommendations on Responsible Generative AI. 2023. Based on Responsible AI Leadership: A Global Summit on Generative AI, World Economic Forum in collaboration with AI Commons, June. https://www3.weforum.org/docs/WEF_Presidio_Recommendations_on_Responsible_Generative_AI_2023.pdf
  • Matus, Kira and Veale, Michael. 2022. The use of certification to regulate the social impacts of machine learning: Lessons from sustainability certification. Regulation & Governance, 16:177-196. https://doi.org/10.1111/rego.12417
  • Sivarudran Pillai, V. and Matus, KJM. 2021. Towards a responsible integration of artificial intelligence technology in the construction sector. Science and Public Policy, 47 (5), 689-704. https://doi.org/10.1093/scipol/scaa073
  • Stephenson, Matthew, Iza Lejarraga, Kira Matus, Yacob Mulugetta, Masaru Yarime, and James Zhan. 2021. “SusTech: Enabling New Technologies to Drive Sustainable Development through Value Chains.” G20 Insights: TF4-Digital Transformation. https://www.t20italy.org/wp-content/uploads/2021/09/TF4-PB8_final.pdf
  • Lin, Y.-T., Hung, T.-W., & Huang, L. T.-L. (2020). Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias. Philosophy and Technology, 34(1).
  • Huang, L. T.-L., Chen, H.-Y., Lin, Y.-T., Huang, T.-R., & Hung, T.-W. (2022). Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.
  • Liu, G., Zhang, J., Chan, A. B., & Hsiao, J. H. (2024). Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models. Neural Networks, 177, 106392. https://doi.org/10.1016/j.neunet.2024.106392

  • Qi, R., Zheng, Y., Yang, Y., Cao, C. C., & Hsiao, J. H. (2024). Explanation Strategies in Humans vs. Current Explainable AI: Insights from Image Classification. British Journal of Psychology. https://doi.org/10.1111/bjop.12714

  • Zhao, C., Hsiao, J. H., & Chan, A. B. (2024). Gradient-based Instance-Specific Visual Explanations for Object Specification and Object Discrimination. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/tpami.2024.3380604

  • Qi, R., Liu, G., Zhang, J., & Hsiao, J. H. (2024). Do saliency-based explainable AI methods help us understand AI’s decisions? The case of object detection AI. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Conference of the Cognitive Science Society, pp. 1917-1924. Cognitive Science Society. https://escholarship.org/uc/item/85w091p7

  • Zhang, J., Liu, G., Chen, Y., Chan, A. B., & Hsiao, J. H. (2024). Demystify deep-learning AI for object detection using human attention data. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Conference of the Cognitive Science Society, pp.1983-1990. Cognitive Science Society. https://escholarship.org/uc/item/5tg5t4bq

  • Liao, W., Wang, Z., Shum, K., Chan, A. B., & Hsiao, J. H. (2024). Do large language models resolve semantic ambiguities in the same way as humans? The case of word segmentation in Chinese sentence reading. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Conference of the Cognitive Science Society, pp.1961-1967. Cognitive Science Society. https://escholarship.org/uc/item/2sm8g139