The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. The podcast is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.

https://twimlai.com

An average episode of this podcast lasts 43 minutes. 702 episode(s) have been published so far. A new episode is released every 4 days.

Total length of all episodes: 21 days, 17 hours, 43 minutes

episode 654: AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio


Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society...


November 6, 2023 · 48m

episode 653: Delivering AI Systems in Highly Regulated Environments with Miriam Friedel


Today we’re joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility...


October 30, 2023 · 44m

episode 652: Mental Models for Advanced ChatGPT Prompting with Riley Goodside


Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF...


October 23, 2023 · 39m

episode 651: Multilingual LLMs and the Values Divide in AI with Sara Hooker


Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models, such as poor data quality and tokenization, and how data augmentation and preference training can help address these bottlenecks...


October 16, 2023 · 1h 18m

episode 650: Scaling Multi-Modal Generative AI with Luke Zettlemoyer


Today we’re joined by Luke Zettlemoyer, professor at the University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science...


October 9, 2023 · 38m

episode 649: Pushing Back on AI Hype with Alex Hanna


Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, use cases of concern, the incentives driving people toward the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies...


October 2, 2023 · 49m

episode 648: Personalization for Text-to-Image Generative AI with Nataniel Ruiz


Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work on personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models from a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt...


September 25, 2023 · 44m

episode 647: Ensuring LLM Safety for Production Applications with Shreya Rajpal


Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes...


September 18, 2023 · 40m

episode 646: What’s Next in LLM Reasoning? with Roland Memisevic


Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents...


September 11, 2023 · 59m

episode 645: Is ChatGPT Getting Worse? with James Zou


Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore how ChatGPT’s behavior has changed over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance on various tasks by comparing the March 2023 and June 2023 versions of both GPT-3.5 and GPT-4, and the possible reasons behind the declining performance of these models...


September 4, 2023 · 42m