Mark your calendars for March 27th — and join us online from anywhere.
Security Frontiers, hosted by Dropzone AI, is more than a virtual conference—it’s a roadmap for making AI work for your security team. You’ll learn how your peers are building with AI to make security operations more efficient and effective.
This event is for security professionals who are looking for practical solutions to real-world challenges. Whether you’re a CISO, SOC manager, analyst, or security integrator, you’ll leave with a better understanding of what’s possible with AI today.
Security Frontiers 2025 Agenda
Security Frontiers focuses on real-world solutions, not theories. Our line-up of speakers will present projects that are already in production at organizations like Lyft and Databricks. Let’s take a look at the agenda for the 3-hour event:
Where Are We Now, and Where Are We Headed?
Daniel Miessler, Caleb Sima, Edward Wu
AI is making a tremendous impact on cybersecurity, bringing both opportunities and threats. In this panel, we will discuss the current state of GenAI in security and what practitioners and security leaders can do to make sure this technology is working for them.
Subscribe to Daniel Miessler’s Unsupervised Learning newsletter.
LLMs as a Force Multiplier: Practical Patterns for Security Teams
Dylan Williams, Co-Founder and Head of Research at Stealth
Prompt engineering, LLM flows, RAG, agents, fine-tuning, evaluations … not sure where to start? This session offers a practical roadmap for integrating LLMs into your security workflows amidst the overwhelming choices and hype. Whether your specialty is AppSec, GRC, Red Teaming, or Security Operations, this talk will help you choose the right techniques, avoid common pitfalls, and apply proven patterns with real-world examples.
What you’ll learn:
- Tools and techniques to take home and start building right away
- Lessons learned to save you pain and suffering
- Why building around LLMs is sometimes more of an art than a science
Follow Dylan on LinkedIn for weekly round-ups on AI for cybersecurity.
Lessons from Building ReAct Security Agents
Anshuman Bhartiya, Staff Security Engineer at Lyft
In this session, Anshuman will share lessons from his journey building AI security agents with the ReAct framework, including real-world code examples and the prompts supplied to the LLM, giving others in the community a practical starting point for building their own AI agents.
What you’ll learn:
- The basic concepts of AI agents
- How to build ReAct AI security agents
- Real-world examples for better context and understanding
Subscribe to Anshuman’s Boring AppSec podcast.
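For readers new to the pattern: a ReAct agent alternates between reasoning (“Thought”), calling a tool (“Action”), and reading the result (“Observation”) until it can answer. Here is a minimal sketch of such a loop; the tool names, prompt format, and `call_llm` stub are illustrative assumptions, not Anshuman’s actual code.

```python
# Minimal ReAct-style loop. Tool names, prompt format, and the call_llm
# stub are illustrative; wire in your own model provider and tools.
import json

# Hypothetical security tools the agent may invoke.
def whois_lookup(domain: str) -> str:
    return f"WHOIS record for {domain} (stubbed)"

def check_ip_reputation(ip: str) -> str:
    return f"Reputation for {ip}: no known-bad listings (stubbed)"

TOOLS = {"whois_lookup": whois_lookup, "check_ip_reputation": check_ip_reputation}

SYSTEM_PROMPT = """You are a security analyst agent.
Work in steps: Thought -> Action -> Observation.
Reply with JSON only: {"thought": "...", "action": "<tool name or finish>", "input": "..."}
Tools: whois_lookup(domain), check_ip_reputation(ip)."""

def call_llm(messages: list[dict]) -> str:
    """Stand-in for a chat-completion call (OpenAI, Bedrock, etc.)."""
    raise NotImplementedError("connect your model provider here")

def react_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        step = json.loads(call_llm(messages))               # Thought + Action
        if step["action"] == "finish":
            return step["input"]                            # final answer
        observation = TOOLS[step["action"]](step["input"])  # run the tool
        messages.append({"role": "assistant", "content": json.dumps(step)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Step budget exhausted without a final answer."
```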
LLM-Powered Private Threat Modeling
Murat Zhumagali, Security Engineer at Progress
This session will explore the development of an in-house threat modeling assistant that leverages LLMs through AWS Bedrock and Anthropic Claude. Learn how we're building a private solution that automates and streamlines the threat modeling process while keeping sensitive security data within our control. We'll demonstrate how this proof-of-concept tool combines LangChain and Streamlit to create an interactive threat modeling experience.
What you’ll learn:
- The technical bar for using GenAI/LLMs has been lowered; anyone can use them
- Start with simple use cases, unlike the speaker, who began with threat modeling
- Better first candidates include vulnerability management and compliance
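The session mentions AWS Bedrock, Claude, LangChain, and Streamlit. As a rough sketch of what the core of such an assistant could look like (the model ID, prompt, and chain below are illustrative assumptions, not the speaker’s implementation):

```python
# Hypothetical core of a private threat modeling assistant: Claude via
# AWS Bedrock so prompts and system descriptions stay in your AWS account.
# Requires: pip install langchain-aws langchain-core (plus AWS credentials).
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    region_name="us-east-1",
)

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a threat modeling assistant. Given a system description, "
     "enumerate STRIDE threats with affected components and mitigations."),
    ("human", "{system_description}"),
])

chain = prompt | llm  # LangChain Expression Language pipeline

result = chain.invoke({
    "system_description": "A web app that accepts file uploads, stores "
                          "them in S3, and exposes a search API."
})
print(result.content)
```

A Streamlit front end would then simply collect the system description from the user and render the model’s response in a chat-style page.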
Building Security Tools by Humans, for AI Agents
Josh Larsen, Co-founder and CTO at Ghost Security
Getting an agent to reliably interact with a legacy security tool can be quite challenging—especially when dealing with output parsing, state management, and error handling. But what if we designed our tools to be AI-ready from the outset? Join us for a practical demo using AI agents to interact with Reaper, an open-source, API-based intercepting web attack proxy and fuzzing tool.
What you’ll learn:
- The benefits of building security tools specifically for tool usage by LLMs and how that translates to better state management, improved accuracy, and higher composability
- A practical use case in which the open-source tool Reaper is controlled via agentic AI to enumerate, prioritize, and successfully fuzz a vulnerability in a live application
- How and why this perspective will drive better outcomes in the industry’s use of AI going forward
Check out Ghost Security’s Reaper project on GitHub.
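To make the contrast concrete, here is a small sketch of what “AI-ready” tool design can mean in practice: structured inputs and outputs plus an explicit schema the model can call, instead of free-text CLI output an agent has to parse. The function and schema below are hypothetical, not part of Reaper’s actual API.

```python
# Illustrative contrast: expose a scanner to an agent as a typed, structured
# tool rather than forcing the model to parse raw CLI output. The function
# and schema are hypothetical, not part of Reaper's actual API.
import json

def scan_endpoint(url: str, max_depth: int = 2) -> dict:
    """Enumerate an endpoint and return machine-readable findings."""
    # A real implementation would call an API-based tool (e.g., Reaper) here.
    return {
        "url": url,
        "status": "complete",  # explicit state instead of log scraping
        "findings": [
            {"id": "F-001", "type": "sqli", "path": "/search", "severity": "high"},
        ],
    }

# The schema the LLM sees, so it can call the tool with validated arguments
# (follows the common "function calling" convention).
SCAN_TOOL_SCHEMA = {
    "name": "scan_endpoint",
    "description": "Enumerate a web endpoint and return structured findings.",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "Target URL"},
            "max_depth": {"type": "integer", "default": 2},
        },
        "required": ["url"],
    },
}

print(json.dumps(scan_endpoint("https://example.test"), indent=2))
```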
Using LLMs for Cost-Effective PII Detection
Kyle Polley, Member of Technical Staff at LLM Search Provider
This session will describe PII Detective, an open-source tool that uses Large Language Models (LLMs) to identify and classify PII with exceptional accuracy by analyzing table metadata. Participants will learn how to leverage LLMs for security operations and implement Dynamic Data Masking to seamlessly protect sensitive data.
What you’ll learn:
- How to leverage LLMs while keeping human decision-makers "in the loop"
- How to protect PII in data lake infrastructure such as Snowflake by creating and applying data masking policies
- How to deploy PII Detective in your own environment
Check out the PII Detective project on GitHub.
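As an illustration of the overall flow the description suggests (classify columns from metadata, keep a human in the loop, then apply Snowflake masking policies), here is a rough sketch. The keyword heuristic stands in for a real LLM call, and the policy and role names are invented; this is not PII Detective’s actual code.

```python
# Rough sketch (not PII Detective's code): classify columns from table
# metadata, keep a human in the loop, then emit Snowflake masking policies.
COLUMNS = [  # metadata only; no row data needs to leave your environment
    {"table": "users", "column": "email_address", "type": "VARCHAR"},
    {"table": "users", "column": "signup_ts", "type": "TIMESTAMP"},
]

def classify_column(meta: dict) -> str:
    """Stand-in for an LLM call that labels a column PII or NOT_PII.
    A keyword heuristic keeps the sketch runnable; a real version would
    send the metadata to a model and parse its answer."""
    pii_hints = ("email", "name", "phone", "ssn", "address")
    return "PII" if any(h in meta["column"].lower() for h in pii_hints) else "NOT_PII"

CREATE_POLICY_SQL = """\
CREATE MASKING POLICY mask_pii AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '***MASKED***' END;
"""

def apply_policy_sql(table: str, column: str) -> str:
    return f"ALTER TABLE {table} MODIFY COLUMN {column} SET MASKING POLICY mask_pii;"

print(CREATE_POLICY_SQL)
for meta in COLUMNS:
    # Human-in-the-loop: surface each label for review before applying.
    if classify_column(meta) == "PII" \
       and input(f"Mask {meta['table']}.{meta['column']}? [y/N] ").lower() == "y":
        print(apply_policy_sql(meta["table"], meta["column"]))
```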
AI-Enhanced Prioritization of Vulnerabilities
Anirudh Kondaveeti, Data Scientist at Databricks
This session will explain how Databricks uses AI to automatically identify and rank vulnerabilities in third-party libraries based on severity and relevance to Databricks infrastructure. The VulnWatch system has significantly reduced manual effort for the security team.
What you’ll learn:
- How AI-driven vulnerability prioritization enhances security at Databricks by automating threat detection, reducing manual tasks, and improving focus on critical risks
- How the Databricks team went about building VulnWatch
Read the Databricks blog about VulnWatch.
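The talk covers the real system; as a toy illustration of what “ranking by severity and relevance” can mean, here is a sketch with invented features and weights (not the actual VulnWatch scoring):

```python
# Toy composite score for ranking third-party library CVEs. The features
# and weights are illustrative assumptions, not the VulnWatch design.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float        # base severity, 0-10
    relevance: float   # 0-1: is the library actually used in our infra?
    exploited: bool    # known exploitation in the wild

def priority_score(v: Vuln) -> float:
    """Blend severity with environment relevance so the queue surfaces
    vulnerabilities that are both serious and actually reachable."""
    score = 0.6 * (v.cvss / 10) + 0.4 * v.relevance
    return min(1.0, score + (0.2 if v.exploited else 0.0))

vulns = [
    Vuln("CVE-2025-0001", cvss=9.8, relevance=0.1, exploited=False),
    Vuln("CVE-2025-0002", cvss=7.5, relevance=0.9, exploited=True),
]
for v in sorted(vulns, key=priority_score, reverse=True):
    print(f"{v.cve_id}: {priority_score(v):.2f}")
```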
Red Swarm: Drones of 1337 Intelligence
Jeff Sims, Senior Staff Data Scientist at Infoblox
This session outlines how decentralized, multi-agent swarms can collaboratively design, plan, and produce red team tools in a structured, iterative process. A high-level user input is transformed by the swarm into refined goals and constraints, generating a coding prompt that a coding agent uses to create fully executable Python scripts with unit tests.
What you’ll learn:
- How indirect communication methods, like environmental cues, enhance creativity, reflection, and adaptability in agent systems
- How specialized agents designed with distinct roles, competing sub-goals, and adversarial tensions produce emergent and creative behavior
- Lessons learned from building coding agents
Check out Jeff’s Red Reaper project on GitHub.
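As a minimal illustration of the planner-to-coder hand-off described above, here is a two-stage sketch; the agent roles, prompts, and `call_llm` stub are assumptions, not the Red Swarm implementation.

```python
# Minimal two-stage sketch of the planner -> coder hand-off. Roles, prompts,
# and the call_llm stub are assumptions, not the Red Swarm implementation.
def call_llm(role_prompt: str, user_input: str) -> str:
    """Stand-in for a chat-completion call; connect your provider here."""
    raise NotImplementedError

PLANNER_PROMPT = (
    "You are a planning agent. Turn the user's high-level goal into a precise "
    "spec: refined goals, constraints, and acceptance criteria."
)
CODER_PROMPT = (
    "You are a coding agent. Given a spec, produce a fully executable Python "
    "script plus pytest unit tests that verify the acceptance criteria."
)

def swarm_codegen(high_level_goal: str) -> str:
    spec = call_llm(PLANNER_PROMPT, high_level_goal)  # refine goals/constraints
    code = call_llm(CODER_PROMPT, spec)               # generate script + tests
    return code  # a real swarm would have critic agents iterate on this

# Example hand-off (requires a wired-up call_llm):
# swarm_codegen("Build a tool that audits TLS configurations across a subnet")
```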
Built for Security Leaders and Practitioners
Security Frontiers is designed for everyone involved in advancing their organization’s security posture:
- CISOs looking to optimize resource allocation, reduce costs, and improve key metrics like mean time to detect (MTTD) and mean time to respond (MTTR).
- SOC Managers aiming to reduce alert fatigue, improve team morale, and meet SLA targets.
- SOC Analysts wanting to offload repetitive tasks and focus on high-impact investigations.
- Security Engineers focused on streamlining workflows and reducing tool sprawl.
Learn From the Best, Without Leaving Your Desk
This fully virtual 3-hour event gives you direct access to leading voices in cybersecurity and AI. Skip the travel and log in from anywhere. And invite your colleagues! Security Frontiers will be an excellent learning opportunity for any organization that’s investigating the possibilities of AI automation.