Linear Digressions

In each episode, your hosts explore machine learning and data science through interesting (and often very unusual) applications.

http://lineardigressions.com

Multi-Armed Bandits


Multi-armed bandits: how to take your randomized experiment and make it harder, better, faster, stronger. Basically, a multi-armed bandit experiment lets you optimize for both learning and making use of your knowledge at the same time. It's what the pros (like Google Analytics) use, and it's got a great name, so... winner! Relevant link: https://support.google.com/analytics/answer/2844870?hl=en
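To make the explore/exploit idea concrete, here's a minimal sketch of one simple bandit strategy, epsilon-greedy, in Python: most of the time you send traffic to the variant that looks best so far (exploit), and a small fraction of the time you pick a variant at random (explore). Production systems like Google's use more sophisticated Bayesian methods; this is just the simplest illustration of the idea, and the variant names and conversion rates below are invented for the example.

```python
import random

# Hypothetical arms with made-up true conversion rates (unknown to the agent).
TRUE_RATES = {"variant_a": 0.04, "variant_b": 0.06}
EPSILON = 0.1  # fraction of traffic reserved for exploration

counts = {arm: 0 for arm in TRUE_RATES}     # pulls per arm
rewards = {arm: 0.0 for arm in TRUE_RATES}  # total reward per arm

def choose_arm():
    # Always try an arm at least once before comparing averages.
    unpulled = [a for a in counts if counts[a] == 0]
    if unpulled:
        return random.choice(unpulled)
    # Explore with probability epsilon...
    if random.random() < EPSILON:
        return random.choice(list(counts))
    # ...otherwise exploit the arm with the best observed average reward.
    return max(counts, key=lambda a: rewards[a] / counts[a])

def pull(arm):
    # Simulate a conversion (1) or not (0) from the arm's true rate.
    return 1.0 if random.random() < TRUE_RATES[arm] else 0.0

for _ in range(10_000):
    arm = choose_arm()
    counts[arm] += 1
    rewards[arm] += pull(arm)

for arm in TRUE_RATES:
    avg = rewards[arm] / counts[arm]
    print(f"{arm}: pulled {counts[arm]} times, observed rate {avg:.3f}")
```

Unlike a fixed 50/50 A/B test, this shifts traffic toward the better variant as evidence accumulates, so you learn and earn at the same time.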


March 7, 2016 · 11m