The Cloudcast

The Cloudcast (@cloudcastpod) is the industry's #1 Cloud Computing podcast, and the place where Cloud meets AI. Co-hosts Aaron Delp (@aarondelp) & Brian Gracely (@bgracely) speak with the technology and business leaders who are shaping the future of business. Topics include Cloud Computing | AI | AGI | ChatGPT | Open Source | AWS | Azure | GCP | Platform Engineering | DevOps | Big Data | ML | Security | Kubernetes | AppDev | SaaS | PaaS.

https://www.thecloudcast.net


Validation and Guardrails for LLMs


Shreya Rajpal (@ShreyaR, CEO @guardrails_ai) talks about the need to provide guardrails and validation for LLMs, along with common use cases and Guardrails AI's new Hub.

SHOW: 797

CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw

NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"

SHOW SPONSORS:

  • Learn More About Azure Offerings: Learn more about Azure Migrate and Modernize & Azure Innovate!
  • Azure Free Cloud Resource Kit: Step-by-step guidance, resources, and expert advice, from migration to innovation.
  • CloudZero – Cloud Cost Visibility and Savings
  • Find "Breaking Analysis Podcast with Dave Vellante" on Apple, Google and Spotify
  • Keep up to date with Enterprise Tech with theCUBE

SHOW NOTES:

  • Guardrails AI (homepage)
  • Guardrails AI Hub
  • Guardrails AI GitHub
  • Guardrails AI Discord
  • Shreya on TWIML podcast
  • Guardrails AI on TechCrunch

Topic 1 - Welcome to the show. Before we dive into today’s discussion, tell us a little bit about your background.

Topic 2 - Our topic today is the validation and accuracy of AI with guardrails. Let’s start with the why… Why do we need guardrails for LLMs today?

Topic 3 - Where and how do you control (maybe validate is a better word) outputs from LLMs today? What are your thoughts on the best way to validate outputs?
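
For listeners who want a concrete picture, here is a minimal sketch of what output validation can mean in practice: checking that a model's raw text parses as JSON with the fields a downstream system expects. The function and its names are illustrative only, not the Guardrails AI API.

```python
# Illustrative sketch: validate an LLM's raw text output before it is used
# downstream. (Hypothetical helper, not the Guardrails AI API.)
import json

def validate_json_output(raw_output: str, required_keys: set[str]) -> dict:
    """Parse the model's output as JSON and check that required fields exist."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError as err:
        raise ValueError(f"Output is not valid JSON: {err}")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"Output is missing required keys: {missing}")
    return data

# Example: an LLM was asked to extract a contact record.
record = validate_json_output('{"name": "Ada", "email": "ada@example.com"}',
                              required_keys={"name", "email"})
```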

Topic 4 - Will this workflow work with both closed-source (ChatGPT) and open-source (Llama 2) models? Would this process apply to training/fine-tuning, or more to inference? Would this potentially replace the human-in-the-loop review we see today, or is this completely different?
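
As a rough illustration of why inference-time validation is model-agnostic, the sketch below wraps any model callable, whether a closed-source API client or a local open-source pipeline, in the same guard. All names here are hypothetical.

```python
# Illustrative sketch: a guard that wraps any LLM callable at inference time.
# It makes no assumptions about which model sits behind the callable.
from typing import Callable

def guarded_call(model: Callable[[str], str],
                 validate: Callable[[str], str],
                 prompt: str) -> str:
    """Send the prompt to any LLM, then run the output through a validator."""
    raw = model(prompt)
    return validate(raw)  # raises if the output fails validation

# Stand-ins for demonstration; `model` could just as easily be a ChatGPT
# client or a local Llama 2 pipeline.
def fake_model(prompt: str) -> str:
    return "42"

def must_be_numeric(text: str) -> str:
    if not text.strip().isdigit():
        raise ValueError("expected a numeric answer")
    return text

print(guarded_call(fake_model, must_be_numeric, "What is 6 * 7?"))
```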

Topic 5 - What are some of the most common early use cases and practical examples? PII detection comes to mind, as do violations of ethics or laws, off-topic/out-of-scope requests, or simply something the model isn't designed to provide.
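
A toy version of the PII use case might look like the following; real validators (including those in the Guardrails Hub) typically rely on trained entity-recognition models rather than the simple regexes assumed here.

```python
# Illustrative sketch of a regex-based PII check; production validators
# generally use NER models instead of hand-written patterns.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def contains_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

assert contains_pii("Reach me at ada@example.com") == ["email"]
assert contains_pii("The weather is nice today") == []
```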

Topic 6 - What happens if it fails? Does this create a loop scenario to try again?
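
One common answer, sketched below under assumed function names, is a "re-ask" loop: feed the validation error back into the prompt so the model can self-correct, and retry up to a fixed budget.

```python
# Illustrative sketch of a re-ask loop on validation failure.
from typing import Callable

def validate_with_retries(model: Callable[[str], str],
                          validate: Callable[[str], str],
                          prompt: str,
                          max_retries: int = 2) -> str:
    attempt_prompt = prompt
    for _ in range(max_retries + 1):
        raw = model(attempt_prompt)
        try:
            return validate(raw)
        except ValueError as err:
            # Re-ask: include the failure reason so the model can correct itself.
            attempt_prompt = (f"{prompt}\n\nYour previous answer was rejected: "
                              f"{err}. Please try again.")
    raise RuntimeError("Validation failed after all retries")
```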

Topic 7 - Let’s talk about Guardrails AI specifically. Today you offer an open-source marketplace of validators in the Guardrails Hub, correct? As we mentioned earlier, almost everyone’s implementation, and the guardrails they want to enforce, will be different. Is the best way to think about this as building blocks, piecing validators together? Tell everyone a little bit about the offering.
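
To make the building-block idea concrete, here is a hypothetical sketch of composing small validators into a single guard. The compose helper and the example validators are illustrative, not Guardrails AI code.

```python
# Illustrative sketch: validators as composable building blocks.
from typing import Callable

Validator = Callable[[str], str]  # raises ValueError on failure

def compose(*validators: Validator) -> Validator:
    """Run validators in order; all must pass for the output to be accepted."""
    def run_all(text: str) -> str:
        for v in validators:
            text = v(text)
        return text
    return run_all

def no_profanity(text: str) -> str:
    if "darn" in text.lower():  # stand-in for a real word list
        raise ValueError("profanity detected")
    return text

def max_length(limit: int) -> Validator:
    def check(text: str) -> str:
        if len(text) > limit:
            raise ValueError(f"output longer than {limit} chars")
        return text
    return check

guard = compose(no_profanity, max_length(280))
print(guard("A short, polite answer."))
```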

FEEDBACK?

  • Email: show at the cloudcast dot net
  • Twitter: @cloudcastpod
  • Instagram: @cloudcastpod
  • TikTok: @cloudcastpod


February 21, 2024 · 27m