Can we ensure artificial intelligence is safe?
“AI safety” refers broadly to the field of research dedicated to ensuring that AI systems behave safely and reliably, with minimal risk of unintended consequences or harm to humanity. However, because AI systems operate in a world filled with uncertainty and volatility, building safe and reliable AI is no easy task, and mitigation strategies vary widely. So what is safe AI, exactly, and why is it so difficult to achieve? How can we ensure transparency in how AI makes decisions? Can LLMs be hacked? Are the existential risks of advanced AI exaggerated—or justified?
Join a panel of experts to explore pressing questions of safety in this unprecedented moment of proliferating and advancing AI technologies. The 2024 September Soiree in Technophilosophy promises a unique opportunity to expand your horizons and take part in a stimulating event designed to provoke deep and meaningful engagement with our future alongside AI.
The Soiree will bring together four experts in AI: Roger Grosse, Sedef Kocak, Sheila McIlraith, and Karina Vold. The event is presented by the Schwartz Reisman Institute in collaboration with the University of Toronto’s Institute for the History & Philosophy of Science & Technology (IHPST), Centre for Ethics, and Victoria College.
Roger Grosse is the Schwartz Reisman Chair in Technology and Society, an associate professor of computer science at the University of Toronto, a Canada CIFAR AI Chair, and a founding member of the Vector Institute. Sedef Kocak is director of professional development at the Vector Institute. Sheila McIlraith is associate director of the Schwartz Reisman Institute for Technology and Society, a professor of computer science at the University of Toronto, and a Canada CIFAR AI Chair at the Vector Institute. Karina Vold is a research lead at the Schwartz Reisman Institute for Technology and Society, an assistant professor of philosophy at the Institute for the History & Philosophy of Science & Technology (IHPST) at the University of Toronto, and an AI2050 Schmidt Foundation Fellow.
5:00 pm – 8:00 pm ET