Securing LLM Applications: the devil is in the detail

Thursday, July 25, 2024
08:00 PDT / 11:00 EDT / 16:00 BST
Most likely, LLM applications are popping up in your organization like mushrooms after a spring rain. What can you do to control security risk? Who should do what work? And how should they do it? Join our upcoming webinar with Dr Gary McGraw, Co-founder of the Berryville Institute of Machine Learning (BIML), to hear about LLM-related risks and controls.

Webinar Description

Securing a modern LLM system (even if what’s under scrutiny is only an application built on LLM technology) requires diving into the engineering and design of the specific LLM system itself. During this webinar, we will discuss an approach intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider. Fortunately, modern threat modeling tools already have knowledge of these risks built in.

In our view, LLM systems engineers can (and should) devise and field a more secure LLM application by carefully considering ML-related risks while designing, implementing, and deploying their own specific systems. In security, the devil is in the details, and we attempt to provide as much detail as possible about LLM security risks and some basic controls.

  • Are you a user of Large Language Models (LLMs)?
  • Are you a CISO or an application security leader confronted with ML security and adversarial AI?
  • Do you or your teams use output from Machine Learning (ML)/Artificial Intelligence (AI) applications and systems?
  • Are you looking for risk management and threat modeling guidance for AI/ML?
  • Do you wonder how NIST, OWASP, and BIML stack up when it comes to ML risks?

If you said yes to any of the above, then this webinar is for you. Listen in as we discuss the 81 specific risks identified in BIML’s 2024 LLM risk analysis paper. Gary McGraw, the “father of software security,” is Co-founder and CEO of BIML and will go into detail about what these risks mean for you and your organization, why you need to take notice, and why the time to act is now.

Jacob Teale, Head of Security Research at IriusRisk, will explain why security teams need to ensure they are not just considering risks from LLMs, but incorporating them into their wider cybersecurity strategies for 2024 and beyond.

Presenters
Dr Gary McGraw
Co-founder of the Berryville Institute of Machine Learning
Stephen de Vries
CEO and Co-founder of IriusRisk
Jacob Teale
Head of Security Research at IriusRisk

Key Takeaways

  • What LLM risks really mean for your organization
  • How you can act now to prepare for the future
  • Best practices for your LLM security requirements