The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The podcast is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

https://twimlai.com

An average episode of this podcast runs 43 minutes. 701 episodes have been released so far, with a new episode appearing every 4 days.

Total length of all episodes: 21 days, 16 hours, 48 minutes

episode 673: Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski


Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he presented at NeurIPS 2023...
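
The paper's core idea fits in a few lines of arithmetic. Below is a minimal sketch, with made-up numbers, of why step-by-step reasoning helps when training data is local: if a model only ever observes neighboring variables of a chain A → B → C together, it can still recover P(C | A) by chaining the local conditionals, P(C | A) = Σ_b P(C | b) · P(b | A).

    # A minimal sketch (hypothetical numbers) of the paper's core idea: if
    # training data only ever shows neighboring variables together (A with B,
    # B with C), a reasoner can still answer P(C | A) by chaining the local
    # conditional estimates -- "thinking step by step".

    p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # learnable from (A, B) pairs
    p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}  # learnable from (B, C) pairs

    def p_c_given_a(a: int, c: int) -> float:
        """Marginalize over the unseen middle step: P(C|A) = sum_b P(C|b) * P(b|A)."""
        return sum(p_c_given_b[b][c] * p_b_given_a[a][b] for b in (0, 1))

    print(p_c_given_a(a=1, c=1))  # 0.2 * 0.3 + 0.8 * 0.9 = 0.78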

February 26, 2024 · 24m

episode 672: Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh


Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of document AI and an introduction to the DocLLM model...
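
DocLLM's distinguishing feature is that it treats spatial layout, not pixels, as a second modality alongside text. The sketch below is illustrative only and does not reflect DocLLM's actual architecture or API; it simply shows the kind of input a layout-aware model consumes, namely OCR tokens paired with normalized bounding boxes.

    from dataclasses import dataclass

    @dataclass
    class LayoutToken:
        text: str
        bbox: tuple[float, float, float, float]  # (x0, y0, x1, y1), normalized to [0, 1]

    def normalize(bbox, page_w, page_h):
        """Scale absolute OCR coordinates to page-relative [0, 1] coordinates."""
        x0, y0, x1, y1 = bbox
        return (x0 / page_w, y0 / page_h, x1 / page_w, y1 / page_h)

    # e.g. an OCR hit for the word "Invoice" near the top-left of a US-letter page
    token = LayoutToken("Invoice", normalize((72, 40, 160, 60), page_w=612, page_h=792))
    print(token)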

February 19, 2024 · 45m

episode 671: Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo


Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?”...
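
The paper's "mirage" argument can be illustrated with toy numbers (mine, not the paper's): per-token accuracy that improves smoothly with scale looks like a sharp, "emergent" jump once you score it with a discontinuous metric such as exact match over a multi-token answer.

    # Toy numbers: per-token accuracy improves smoothly with model scale, but
    # exact match on a 10-token answer requires every token to be right, so
    # the same underlying progress looks like a sudden "emergent" jump.

    per_token_acc = [0.30, 0.50, 0.70, 0.85, 0.95]  # smooth gains across model scales
    answer_len = 10

    for p in per_token_acc:
        exact_match = p ** answer_len
        print(f"per-token {p:.2f} -> exact match {exact_match:.4f}")
    # per-token 0.30 -> exact match 0.0000
    # ...
    # per-token 0.95 -> exact match 0.5987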

February 12, 2024 · 1h 6m

episode 670: AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli


Today we’re joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs)...
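
One pattern from the territory this conversation covers, sketched below with entirely hypothetical interfaces (env, llm.propose_action), is using an LLM as a high-level policy inside a standard RL interaction loop, with environment reward grounding its proposals.

    # Entirely schematic: `env` and `llm` are hypothetical objects, not a real
    # API. The pattern shown is an LLM acting as a high-level policy inside a
    # standard RL interaction loop, with reward scoring its choices.

    def run_episode(env, llm, max_steps=50):
        obs = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = llm.propose_action(observation=obs)  # LLM reasons over the state
            obs, reward, done = env.step(action)          # environment grounds the choice
            total_reward += reward
            if done:
                break
        return total_reward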

February 5, 2024 · 1h 10m

episode 669: Building and Deploying Real-World RAG Applications with Ram Sriharsha


Today we’re joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval-augmented generation (RAG)...
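
For listeners new to the pattern, here is a minimal retrieve-then-generate sketch. The embedder is a toy stand-in and the generate call is hypothetical; it shows the generic RAG flow, not Pinecone's client API.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Toy stand-in embedder: hash words into a fixed-size unit vector."""
        v = np.zeros(64)
        for word in text.lower().split():
            v[hash(word) % 64] += 1.0
        norm = np.linalg.norm(v)
        return v / norm if norm else v

    docs = [
        "Pinecone is a managed vector database.",
        "RAG retrieves relevant documents and feeds them to an LLM as context.",
        "HNSW is a graph-based approximate nearest neighbor index.",
    ]
    index = np.stack([embed(d) for d in docs])  # one row per document

    def retrieve(query: str, k: int = 2) -> list:
        scores = index @ embed(query)  # cosine similarity: rows are unit vectors
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    query = "How does RAG work?"
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # answer = generate(prompt)  # hypothetical LLM call, not a real client API
    print(prompt)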

January 29, 2024 · 35m

episode 668: Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao


Today we’re joined by Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to provide users with security and protection against AI encroachments...
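
Schematically, perturbation-based tools of this kind nudge an image, within a small pixel budget, so that a feature extractor embeds it near a different concept. The sketch below captures that general shape using a placeholder encoder; the actual Glaze and Nightshade methods differ in important details.

    import torch

    def poison(image, target_emb, encoder, steps=100, eps=0.03, lr=0.01):
        """Shift `image` toward `target_emb` in feature space, within an L-inf budget."""
        delta = torch.zeros_like(image, requires_grad=True)  # perturbation to optimize
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            emb = encoder((image + delta).clamp(0, 1))  # placeholder image encoder
            loss = 1 - torch.nn.functional.cosine_similarity(emb, target_emb, dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the change visually small
        return (image + delta.detach()).clamp(0, 1)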

January 22, 2024 · 39m

episode 667: Learning Transformer Programs with Dan Friedman


Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The LTP paper proposes modifications to the transformer architecture which allow transformer models to be easily converted into human-readable programs, making them inherently interpretable...
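
The paper builds on RASP-style primitives, in which an attention step becomes an explicit select over positions followed by an aggregate of values. The toy below mimics that style with the classic reverse-a-sequence program; it is illustrative, not output of the paper's method.

    def select(n, predicate):
        """An attention pattern as an explicit boolean matrix: row q attends to col k."""
        return [[predicate(q, k) for k in range(n)] for q in range(n)]

    def aggregate(pattern, values):
        """Each position copies the value at the (single) position it attends to."""
        return [next(v for k, v in enumerate(values) if row[k]) for row in pattern]

    def reverse_program(tokens):
        n = len(tokens)
        pattern = select(n, lambda q, k: k == n - 1 - q)  # attend to the mirror position
        return aggregate(pattern, tokens)

    print(reverse_program(list("hello")))  # ['o', 'l', 'l', 'e', 'h']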

January 15, 2024 · 38m

episode 666: AI Trends 2024: Machine Learning & Deep Learning with Thomas Dietterich


Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs...

January 8, 2024 · 1h 5m

episode 665: AI Trends 2024: Computer Vision with Naila Murray


Today we kick off our AI Trends 2024 series with a conversation with Naila Murray, director of AI research at Meta. In our conversation with Naila, we dig into the latest trends and developments in the realm of computer vision. We explore advancements in the areas of controllable generation, visual programming, 3D Gaussian splatting, and multimodal models, specifically vision plus LLMs...

January 2, 2024 · 52m

episode 664: Are Vector DBs the Future Data Platform for AI? with Ed Anuff


Today we’re joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed’s insights on RAG, vector databases, embedding models, and more. We dig into the underpinnings of modern vector databases (like HNSW and DiskANN) that allow them to efficiently handle massive and unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases...
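
As a concrete reference point for HNSW, the sketch below uses the open-source hnswlib package (one common HNSW implementation; DataStax's own stack is not shown) to build an index and run an approximate nearest-neighbor query.

    import hnswlib
    import numpy as np

    dim, num_elements = 128, 10_000
    data = np.random.rand(num_elements, dim).astype("float32")

    # Build the HNSW graph index; M and ef_construction trade build cost for recall.
    index = hnswlib.Index(space="cosine", dim=dim)
    index.init_index(max_elements=num_elements, ef_construction=200, M=16)
    index.add_items(data, np.arange(num_elements))

    index.set_ef(50)  # query-time breadth: higher = better recall, slower search
    labels, distances = index.knn_query(data[:1], k=5)
    print(labels, distances)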

December 28, 2023 · 48m