How does a security team evaluate the safety of AI systems exposed internally and externally? What if an AI solution can be tricked into answering questions it shouldn’t? How can you build protections against jailbreaking? What about hallucinations? Curious? This session is for you.

Importance: This is potentially the most significant new frontier that security professionals will encounter in the next decade. The ability to analyze an AI solution and mitigate its misuse is a new skill that security teams must develop.