
Tentative Speakers

Flora Salim
University of New South Wales, Australia

Title: Robustness and Fairness: Do they go hand in hand? Exploring machine learning robustness and fairness in ubiquitous computing

Abstract: The notion of fairness in machine learning has been deeply investigated by many researchers, and many definitions of algorithmic fairness are available in the literature. However, the generalisability of these methods remains an open problem, particularly across different data qualities. The ability to generalise to different data qualities can also be referred to as robustness. Current fairness mitigation strategies, however, have largely been evaluated only during model training. We have studied various fairness mitigation strategies in the presence of noise, and identified methods that are sensitive to noise as well as those that are more robust and can still produce fair outcomes under noisy or perturbed data. Although this research has so far been conducted only on benchmark datasets, I will argue why it is widely applicable to the ubiquitous computing domain, with broad implications for wearable, mobile, and ubiquitous computing systems. I will share some of our recent work leveraging self-supervised learning for ML robustness in ubiquitous computing applications. I will also share some initial (unpublished) ideas on how robustness and fairness can potentially be pursued hand in hand. I would very much welcome discussions at the end on collaboration, as ideas are plenty, but we need more hands on deck to ensure that we can minimise disparity and improve robust and safe algorithms for the end users and stakeholders of UbiComp systems.

Bio: Professor Flora Salim is the inaugural Cisco Chair of Digital Transport and AI at the University of New South Wales (UNSW) Sydney, Australia, and the Deputy Director (Engagement) of the UNSW AI Institute. Her research is on ubiquitous computing, behaviour modelling, trustworthy and robust AI, and machine learning for multimodal sensor data. She is a Chief Investigator of the ARC Centre of Excellence in Automated Decision-Making and Society (ADM+S) and the Co-Lead of the ADM+S Machines Program and the Transport and Mobilities focus area. She serves as a member of the Australian Research Council (ARC) College of Experts, an Editor of Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), the Associate Editor-in-Chief (AEIC) of IEEE Pervasive Computing, and an Associate Editor of ACM Transactions on Spatial Algorithms and Systems. She has received several fellowships and funding from the Australian Research Council, the Humboldt Foundation, Bayer Foundation, Microsoft Research, Cisco, IBM, Qatar National Research Fund, CSIRO (in partnership with NSF), local and state government agencies, and many other industry partners. She received the Women in AI Awards 2022 ANZ in the Defence and Intelligence category.

Ricardo Baeza-Yates
Northeastern University, USA

Title: Responsible AI

Abstract: In the first part, to set the stage, we cover irresponsible AI: (1) discrimination (e.g., facial recognition, justice); (2) phrenology (e.g., biometric-based predictions); (3) limitations (e.g., human incompetence, minimal adversarial AI); and (4) indiscriminate use of computing resources (e.g., large language models). These examples do carry a personal bias, but they set the context for the second part, where we address three challenges: (1) principles and governance, (2) regulation, and (3) our cognitive biases. We finish by discussing our responsible AI initiatives and the near future.

Bio: Ricardo Baeza-Yates is Director of Research at the Institute for Experiential AI of Northeastern University. Previously, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and from 2012 to 2016 he was elected to the ACM Council. In 2009 he was named an ACM Fellow and in 2011 an IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989, and his areas of expertise are web search and data mining, information retrieval, bias in AI, data science, and algorithms in general.

Akhil Mathur
Meta AI London, UK

Title: Responsible AI (RAI) challenges in Ubiquitous Computing systems

Abstract: This talk will begin by briefly reviewing previous work in AI fairness, showcasing concepts and methodologies that researchers in ubiquitous computing can leverage. I will then delve into the distinctive challenges and opportunities in RAI specific to ubicomp systems, and share our recent work on understanding bias propagation in on-device machine learning workflows. Lastly, I will discuss the complexities involved in scaling fairness and RAI solutions in large-scale industrial AI systems.

Bio: Dr. Akhil Mathur is a Research Scientist at Meta AI. Previously, he was a Principal Research Scientist and Distinguished Member of Technical Staff at Bell Labs and a Visiting Industry Fellow at the University of Cambridge. He also serves on the Editorial Board of the ACM IMWUT journal as an Associate Editor. His research focuses on machine learning and algorithmic fairness, with a current focus on inducing fairness and safety properties in Generative AI models. He is also interested in the topics of Federated Learning, Self-Supervised Learning, On-Device ML, and wearable computing.