How Human Testing Helps Overcome LLM Limitations


Capture the value of human feedback in AI systems

Large language models (LLMs) are more prevalent than ever in digital products. These powerful models are behind the meteoric rise of generative AI, which is on nearly every business roadmap. But without sufficient human validation, LLMs can generate problematic or inaccurate outputs that expose businesses to reputational damage and litigation risk.

In this webinar, experts Josh Poduska and Chris Sheehan discuss the power of human involvement throughout the entire development lifecycle. From establishing a foundational data model to optimizing it by validating LLM outputs, organizations must incorporate human feedback at multiple junctures. Hear from our experts how a diverse, global, human-based testing approach directly addresses these risks and supports your goals of delivering helpful and harmless models and applications.

Join us on Wednesday, June 26 at 9 AM PDT / 10 AM MDT / 12 PM EDT / 6 PM CET

Register for this webinar to learn more about:

  • Why a proactive approach helps ensure safe, fair, and responsible AI products
  • What goes into model optimization
  • How a diverse crowdsourced team can cover edge cases


Josh Poduska


Client Partner and AI Strategist, Applause

Josh Poduska is a Client Partner and AI Strategist at Applause. He has more than 20 years of experience as an AI leader, strategist, and advisor. He previously held the position of Chief Field Data Scientist at Domino Data Lab. Josh has managed top analytical teams and led data science strategy at multiple companies. His primary research interest is AI and ML validation, observability, and risk management.

Chris Sheehan


SVP & General Manager, Strategic Sales, Applause

Chris Sheehan is the SVP and GM of Strategic Accounts and AI at Applause. Chris leads Applause's strategic account division in North America, partnering with Applause's largest clients to help improve their digital experiences. Chris also manages Applause's AI practice, which encompasses human data collection, model fine-tuning, and generative AI testing and user feedback programs.


Applause is the worldwide leader in crowdsourced digital quality testing. With testers available on-demand around the globe, Applause provides brands with a full suite of testing and feedback capabilities. This approach drastically improves testing coverage, eliminates the limitations of offshoring and traditional QA labs, and speeds up time-to-market.