Overview
As AI systems grow more advanced, ensuring their safety and predictability becomes increasingly critical. This STACK Meetup explores how safety testing, guardrails, and mechanistic interpretability can reduce misinformation and bias. Together, these approaches help ensure that AI functions safely and as intended, especially in high-stakes settings.
Get tips from GovTech's AI Practice team on safeguarding LLM applications against safety risks. Our speaker will walk you through the Responsible AI journey: defining a customised safety risk taxonomy, evaluating safety risks, and implementing safeguards to mitigate them.
Also, hear from a researcher at the Singapore AI Safety Institute on mechanistic interpretability, an approach akin to a brain scan for AI systems. This field seeks to uncover the inner workings of AI systems to identify backdoors, misalignment, and unintended behaviours. This understanding powers applications such as model editing, behaviour steering, and the design of more robust guardrails, helping ensure that AI operates predictably and can be audited effectively.
Who should attend: AI Researchers/Engineers, Research Engineers, Data Scientists, Software Engineers/Developers and Designers who use AI in their products or solutions
Recommended knowledge level: A conceptual understanding of LLMs is helpful, and experience building with LLMs is a bonus

Event details
Timings displayed in the agenda are based on the Singapore time zone (GMT+8)