Keynotes

Gopal Ramchurn

University of Southampton, United Kingdom

Sarvapali (Gopal) Ramchurn is Professor of Artificial Intelligence at the University of Southampton, CEO of the £31M Responsible AI UK programme, and Director of the UKRI Trustworthy Autonomous Systems (TAS) Hub, which sits at the centre of the £33M Trustworthy Autonomous Systems Programme. He is also co-founder and co-CEO of the AI start-up Empati Ltd, which delivers solutions for real-time carbon accounting across grid, generation and consumption. His research focuses on the design of Responsible Artificial Intelligence for socio-technical applications including energy systems and disaster management, applying techniques from Machine Learning, HCI and Game Theory. He has won multiple best paper awards for his research and is a winner of the AXA Research Fund Award (2018) for his work on Responsible Artificial Intelligence.

Preventing Irresponsible AI: an interdisciplinary research challenge.

There is a growing gap between the technical community and other disciplines around the theme of Responsible AI. On the one hand, the technical community is looking to accelerate the development of novel AI systems and to limit barriers imposed by regulation; on the other hand, social scientists, lawyers, and creatives are worried about the impact AI is starting to have on society and are calling for measures to limit its capability. In this talk, I will draw out some of the new socio-technical challenges that are emerging and present some of the solutions being worked on by the Responsible AI UK programme and the UKRI Trustworthy Autonomous Systems Hub. I will present case studies that demonstrate the need for an interdisciplinary perspective and show why these challenges cannot simply be left to the technical community to solve. I will highlight the pitfalls to be avoided, particularly when working at the interface between AI and other disciplines, to avoid the development of Irresponsible AI.

Timothy Baldwin

MBZUAI, United Arab Emirates

The University of Melbourne, Australia

Tim Baldwin is Acting Provost and Professor of Natural Language Processing at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), in addition to being a Melbourne Laureate Professor in the School of Computing and Information Systems, The University of Melbourne.
Tim completed a BSc(CS/Maths) and BA(Linguistics/Japanese) at The University of Melbourne in 1995, and an MEng(CS) and PhD(CS) at the Tokyo Institute of Technology in 1998 and 2001, respectively. He joined MBZUAI at the start of 2022, prior to which he was based at The University of Melbourne for 17 years. His research has been funded by organisations including the Australian Research Council, Google, Microsoft, Xerox, ByteDance, SEEK, NTT, and Fujitsu, and has been featured in MIT Tech Review, Bloomberg, Reuters, CNN, IEEE Spectrum, The Times, ABC News, The Age/Sydney Morning Herald, and Australian Financial Review. He is the author of nearly 500 peer-reviewed publications across diverse topics in natural language processing and AI, with over 22,000 citations and an h-index of 72 (Google Scholar), in addition to being an ARC Future Fellow, and the recipient of a number of awards at top conferences.

Safe, open, locally-aligned language models

In this talk, I will present recent work at MBZUAI targeted at the
development and release of open-weight and open-source language models
(LMs) for a range of different languages, with a particular focus on:
AI safety, open-sourcing, and localisation. I will first motivate the
need for (genuinely) open-source LMs, and then go on to describe the
process we have developed for localisation and safety alignment of
public-release LMs, including auto red-teaming, evaluation dataset
creation, and safety alignment. I will further present details of a
new safety AI leaderboard we are in the process of releasing.

Nitesh V Chawla

University of Notre Dame, United States

Nitesh Chawla is the Frank M. Freimann Professor of Computer Science and Engineering and the Founding Director of the Lucy Family Institute for Data and Society at the University of Notre Dame. His research is focused on artificial intelligence and data science, and is also motivated by the question of how technology can advance the common good. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the Association for Computing Machinery (ACM), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and a Fellow of the American Association for the Advancement of Science (AAAS). He is the recipient of multiple awards, including the National Academy of Engineering New Faculty Fellowship, IEEE CIS Outstanding Early Career Award, Rodney F. Ganey Community Impact Award, IBM Big Data & Analytics Faculty Award, IBM Watson Faculty Award, and the 1st Source Bank Technology Commercialization Award. He is the founder of Aunalytics, a data science software and cloud computing company.

Graphs, Multimodal Data, LLMs, and Science: From Foundation Models to Applications

Recent advances in AI have also propelled a burgeoning growth of applications in scientific discovery. Specifically, there are foundational opportunities in graph learning, LLMs, and learning on multimodal data, as well as the challenge of how to successfully integrate them into scientific domains to advance knowledge discovery. In this talk, I will present our research at the interface of AI for Science, and also identify research opportunities and challenges.

Minghui Zhou

Peking University, China

Minghui Zhou is a Distinguished Professor of the Peking University Boya Program and Vice Dean of the School of Computer Science, Peking University. She is a recipient of the National Distinguished Youth Fund, Deputy Director of the CCF Open Source Development Committee, and Chairman of ACM CSOFT. Her main research areas include open-source software development, data analysis, and intelligent recommendation. She has published over 100 refereed papers in top international journals and conferences including TSE, TOSEM, ICSE, FSE, and CSCW. She invented novel concepts and fine-grained metrics for global and open-source development, including developer fluency, long-term contributors, commercial participation, group collaboration scalability, and the software supply chain. She has received multiple ACM SIGSOFT Distinguished Paper Awards and has twice won the National Technology Invention Second Prize. She is PC Co-Chair of ASE 2024, a prestigious international conference in software engineering, and serves on the editorial boards of multiple prestigious journals including TSE, EMSE, and the ASE Journal. The permissive Mulan License she developed has been adopted by over 330,000 open-source code repositories. For more information, see http://osslab-pku.org/.

Open Source Software Digital Sociology: Quantifying and Managing Complex Open Source Software Ecosystem

Open Source Software (OSS) ecosystems have revolutionized computing and society. However, the complex nature of their formation and sustainability presents significant challenges for practitioners and researchers. To understand and manage these complex ecosystems, we propose the concept of OSS digital sociology, aiming to uncover the mechanisms behind OSS ecosystems.

This talk will illustrate why OSS digital sociology emerges, what the challenges are, and what has been achieved.

Yasuyuki Matsushita

Microsoft, United States

Osaka University, Japan

Yasuyuki Matsushita received his B.S., M.S. and Ph.D. degrees in EECS from the University of Tokyo in 1998, 2000, and 2003, respectively. From April 2003 to March 2015, he was with the Visual Computing group at Microsoft Research Asia. In April 2015, he joined Osaka University as a professor. His research areas include computer vision, machine learning and optimization. He is an Editor-in-Chief of the International Journal of Computer Vision (IJCV) and is or has been on the editorial boards of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), The Visual Computer journal, IPSJ Transactions on Computer Vision and Applications (CVA), and the Encyclopedia of Computer Vision. He has served as a Program Co-Chair of PSIVT 2010, 3DIMPVT 2011, ACCV 2012, and ICCV 2017, and as a General Co-Chair of ACCV 2014 and ICCV 2021. He won the Osaka Science Prize in 2022. He is a senior member of IEEE and a member of IPSJ. At the end of September 2024, he is leaving Osaka University to join Microsoft Research Asia and open its new research lab in Tokyo.

Making Sense of the Real-World via 3D Computer Vision

3D Computer Vision is crucial for understanding and interpreting the spatial aspects of real-world scenes. It is particularly important for the coming era of Embodied AI, where machines need to interact with and understand their surroundings. Sensing is crucial in this context, because it generates rich, multidimensional data that enhances AI's understanding of the world and elevates its perceptual capabilities. This talk covers recent studies on 3D sensing and discusses future directions.

Du Tran

Google, United States

Dr. Du Tran is a research scientist at Google. Before coming to Google, he was a research lead at Samsung Research America, a research scientist at Meta AI, and a researcher at NTU. He graduated with a Ph.D. in computer science from Dartmouth College, an M.Sc. in computer science from the University of Illinois at Urbana-Champaign, and a B.Sc. in Information Technology from Ho Chi Minh City University of Science. His research interests are computer vision, machine learning and computer graphics, with specific interests in video understanding, representation learning, and vision for robotics.

Can Machines Understand Long Videos?

Video understanding, an important sub-area of computer vision, has various useful applications ranging from video retrieval, visual sensing to robot learning. While state-of-the-art methods excel at simple tasks like classification and detection on short-form videos, they often fall short when confronted with the complexity of longer-duration content. In this talk, I’ll delve into our recent work on long video understanding. Our approach enables the analysis of hour-long videos, unlocking new possibilities for applications like egocentric video retrieval and video question-answering. I’ll share insights into how we’ve overcome the challenges posed by long-form video and discuss the potential research directions in this area.

Uichin Lee

Korea Advanced Institute of Science and Technology, Korea

Dr. Uichin Lee is a Professor in the School of Computing at the Korea Advanced Institute of Science and Technology (KAIST), leading the Interactive Computing Lab, whose mission is to study intelligent positive computing systems that can intervene in threats to human health and digital wellbeing. He received a Ph.D. degree in computer science from UCLA in 2008. He worked for Alcatel-Lucent Bell Labs as a member of the technical staff before joining KAIST in 2010. He has joint affiliations with the Department of Industrial and Systems Engineering, the Graduate School of Data Science at KAIST, and the KAIST Health Science Institute. In 2023, he was inducted into the SIGCHI Academy, an honorary group of individuals who have made substantial contributions to the field of human-computer interaction (HCI). He has served as a program committee member of key HCI conferences such as ACM CHI, CSCW, and UbiComp, and as an editor for PACM HCI (CSCW) and IMWUT (UbiComp). He received best paper awards at ACM UbiComp'24 (IMWUT), ACM CHI'16, AAAI ICWSM'13, IEEE CCGrid'11, and IEEE PerCom'07, and an impact award from the IEEE IoT Forum in 2019.

Data-Driven Digital Health and Wellbeing

The use of mobile, wearable, and Internet of Things (IoT) technologies fosters unique opportunities for designing novel intelligent positive computing services that aim to realize human wellbeing and potential. These services can help with mental and social wellbeing (e.g., stress care and social-emotional learning), physical wellbeing (e.g., diet, exercise, and sleep coaching), and work productivity (e.g., attention management). This talk overviews the concept of data-driven digital health and wellbeing from the perspective of engagement empowerment by illustrating how sensor and interaction data collected from mobile, wearable, and IoT technologies are used to detect health and wellbeing issues, enable novel context-aware interventions, and analyze digital health and wellbeing services. Through critically reflecting on the literature and services, this talk discusses several research directions on empowering engagement, such as user and intervention modeling and management, as well as concerns and challenges, such as side effects, privacy, and ethical issues.