May 15
CS Bits & Bytes: Dr. Betty Cheng presents "Be careful what you wish for...When should we trust AI?"

CS Bits & Bytes hosts a research talk by Dr. Betty H.C. Cheng.
Abstract: Trustworthy artificial intelligence (Trusted AI) is essential when autonomous, safety-critical systems use learning-enabled components (LECs) in uncertain environments. When reliant on deep learning, learning-enabled autonomous systems (LEAS) must address the reliability, interpretability, and robustness (collectively, the assurance) of their learning models. Three types of uncertainty most significantly affect assurance. First, uncertainty about the physical environment can cause suboptimal, and sometimes catastrophic, results as the system struggles to adapt to unanticipated or poorly understood environmental conditions. For example, when lane markings are occluded (whether on the camera image or on the physical lanes), lane-management functionality can be critically compromised. Second, uncertainty in the cyber environment can create unexpected and adverse consequences, including not only performance impacts (network load, real-time responsiveness, etc.) but also potential threats or overt cybersecurity attacks. Third, uncertainty can exist within the components themselves and affect how they interact upon reconfiguration; left unchecked, it may cause unexpected and unwanted feature interactions.

While learning-enabled technologies have made great strides in addressing uncertainty, challenges remain in assuring such systems when they encounter uncertainty not covered by their training data. Furthermore, we need to treat LEAS as first-class software-based systems that should be rigorously developed, verified, and maintained, that is, software engineered. In addition to developing specific strategies to address these concerns, appropriate software architectures are needed to coordinate LECs and ensure they deliver acceptable behavior even under uncertain conditions.

To this end, this presentation overviews a number of our multi-disciplinary research projects involving industrial collaborators, which collectively support a software engineering, model-based approach to Trusted AI and provide assurance for learning-enabled autonomous systems. The talk also shares lessons learned from more than two decades of research on assurance for self-adaptive, autonomous systems operating under a range of uncertainty, and outlines near-term and longer-term research challenges for SE4SafeML (Software Engineering for Safe Machine Learning).
Join us for treats, beverages, community and conversation.