Webinar

Developing Reliable Agentic AI: Planning, Testing and Real-World Lessons

Learn to build safe, effective agentic workflows and systems

Organizations are eager to adopt agentic AI to streamline business processes and increase efficiency. However, the shift from static tools to autonomous, goal-driven agents introduces new risk profiles: emergent behaviors, self-directed decision-making misaligned with human intent, and opaque interaction dynamics between agents. Building safe, effective agentic systems requires careful planning, rigorous agent-level testing, and new system-level evaluation methods to uncover vulnerabilities arising from emergent behavior, unintended goal optimization, and multi-agent interactions.

Connecting multiple agents into a workflow expands the surface area for failure, particularly alignment breakdowns and security vulnerabilities. Preventing these issues calls for adversarial testing, scenario simulation, and behavioral alignment evaluations at both the agent and system levels.

Our panel shares practical guidance on how to systematically design, test, and evaluate agentic systems so they are more predictable, safe, and aligned with human intent.

Join us on Friday, September 19, 9am PT / 12pm ET / 6pm CEST.

Register for this webinar to learn more about:

  • What questions to ask to set your project up for success
  • How to overcome common challenges in building effective agents
  • Real-world examples of how companies are developing and testing agents and agentic workflows
  • Cutting-edge evaluation techniques to incorporate into your testing process, including modern alignment methods, AI safety frameworks, and red-teaming strategies

Speakers

David Bacarella

Senior Solution Architect, Client Engineering Ecosystems, IBM Technology, Americas

David is a senior AI architect and applied engineer specializing in the design and delivery of advanced, enterprise-grade artificial intelligence solutions within IBM Client Engineering – Ecosystems. He holds a degree in electrical engineering, and his inventions have earned him 15 patents, including 8 directly related to innovations in artificial intelligence. He is a dedicated lifelong learner who actively fosters diverse perspectives through collaboration and cross-functional teamwork.

Chris Sheehan

EVP, High Tech & AI, Applause

As EVP, High Tech & AI, Chris is responsible for the overall strategic direction and performance of Applause’s business in the high-tech sector and AI practice. Since joining Applause in 2015, Chris has held roles on multiple teams, including software delivery, product strategy, and customer success, and has led the strategic account segment in North America.

Jon Perreira

Sr. Director, AI Research, Red Team Engineering & Architecture, Applause

Jon Perreira serves as Senior Director of AI Research and Red Team Engineering & Architecture, specializing in adversarial testing, safety evaluation, and analysis of AI systems. His career reflects a deliberate evolution from experimental science to applied AI safety leadership, with foundational training spanning biotechnology, computer science, and engineering. His work at Applause focuses on systematically testing adversarial resilience, identifying critical failure modes, and implementing scalable safety evaluation frameworks designed for high-stakes AI deployment environments.

LEADING COMPANIES UNDERSTAND DIGITAL EXPERIENCE

Applause is the worldwide leader in crowdsourced digital quality testing. With testers available on-demand around the globe, Applause provides brands with a full suite of testing and feedback capabilities. This approach drastically improves testing coverage, eliminates the limitations of offshoring and traditional QA labs, and speeds up time-to-market.

SAVE YOUR SPOT