Data Skeptic

The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.

https://dataskeptic.com

An average episode of this podcast lasts 31 minutes. So far, 530 episodes have been published. A new episode is released every week.

Total length of all episodes: 11 days 3 hours 4 minutes

[MINI] Conditional Independence


In statistics, two random variables might depend on one another (for example, interest rates and new home purchases). We call this conditional dependence. An important related concept exists called conditional independence. This phrase describes...
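The distinction the episode draws can be illustrated with a small simulation (my own sketch, not from the episode): two variables that share a common cause Z are dependent on their own, but become independent once Z is accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Z is a common cause; X and Y each depend on Z plus independent noise.
z = rng.normal(size=100_000)
x = z + rng.normal(scale=0.5, size=z.size)
y = z + rng.normal(scale=0.5, size=z.size)

# Marginally, X and Y are strongly correlated (dependent)...
marginal_corr = np.corrcoef(x, y)[0, 1]

# ...but conditioning on Z (here, regressing it out) removes the dependence:
# the residuals of X and Y given Z are approximately uncorrelated.
x_resid = x - z * np.dot(x, z) / np.dot(z, z)
y_resid = y - z * np.dot(y, z) / np.dot(z, z)
conditional_corr = np.corrcoef(x_resid, y_resid)[0, 1]

print(marginal_corr)     # strong marginal correlation
print(conditional_corr)  # near zero once Z is conditioned on
```

This mirrors the interest-rates example: two quantities can look tightly coupled until the variable driving both is held fixed.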


 July 21, 2017  14m
 
 

Estimating Sheep Pain with Facial Recognition


Animals can't tell us when they're experiencing pain, so we have to rely on other cues to help treat their discomfort. But it is often difficult to tell how much an animal is suffering. The sheep, for instance, is the most inscrutable of animals....


 July 14, 2017  27m
 
 

CosmosDB


This episode collects interviews from my recent trip to Microsoft Build where I had the opportunity to speak with Dharma Shukla and Syam Nair about the recently announced CosmosDB. CosmosDB is a globally consistent, distributed datastore that supports...


 July 7, 2017  33m
 
 

[MINI] The Vanishing Gradient


This episode discusses the vanishing gradient - a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning...
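The effect is easy to reproduce numerically (a minimal sketch of my own, assuming sigmoid activations): the sigmoid's derivative never exceeds 0.25, so back-propagation multiplies many small factors together on the way to the first hidden layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The sigmoid's derivative is at most 0.25 (attained at x = 0). Chaining
# 10 layers multiplies 10 such factors, shrinking the gradient toward zero.
x = 0.0
grad = 1.0
for layer in range(10):
    s = sigmoid(x)
    grad *= s * (1 - s)  # derivative of sigmoid at x = 0 is exactly 0.25

print(grad)  # 0.25 ** 10, roughly 9.5e-07: the gradient has all but vanished
```

Even in this best case (every input at the sigmoid's steepest point), ten layers attenuate the gradient by a factor of about a million, which is why learning in the early layers stalls.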


 June 30, 2017  15m
 
 

Doctor AI


When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study, shares his thoughts. Edward presents his team's efforts in developing a temporal model that can...


 June 23, 2017  41m
 
 

[MINI] Activation Functions


In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation which can only scale the data. However, other transformations, like a step function allow...
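A few of the transformations the episode contrasts, sketched in numpy (my own illustration, not from the episode): a linear map can only scale, while a step function, ReLU, or sigmoid introduces the nonlinearity that lets networks model more than linear relationships.

```python
import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])  # example pre-activation values

linear = z                       # linear transform: only scales, adds nothing
step = (z > 0).astype(float)     # hard threshold: the neuron fires or it doesn't
relu = np.maximum(0.0, z)        # rectifier: common modern choice
sigmoid = 1 / (1 + np.exp(-z))   # smooth squash into the interval (0, 1)

print(step)     # [0. 0. 0. 1. 1.]
print(relu)     # [0.  0.  0.  0.5 2. ]
```

Stacking layers of purely linear transformations collapses into a single linear transformation, which is why a nonlinear activation is applied at each layer.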


 June 16, 2017  14m
 
 

MS Build 2017


This episode recaps the Microsoft Build Conference.  Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence.  The episode includes interviews with Rohan Kumar and David...


 June 9, 2017  27m
 
 

[MINI] Max-pooling


Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent...
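The dimensionality reduction the episode describes can be shown in a few lines (a toy sketch of my own): a 2×2 max-pool replaces each patch of activations with its single largest value.

```python
import numpy as np

# A 4x4 "feature map"; 2x2 max-pooling keeps only the largest value in each
# non-overlapping 2x2 patch, halving each spatial dimension while retaining
# the strongest activations for the next layer.
fmap = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 0, 5, 6],
    [1, 2, 7, 2],
])

# Reshape into (row_block, row_in_block, col_block, col_in_block) and take
# the max within each 2x2 block.
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[4 2]
               #  [2 7]]
```

Because only the patch maximum survives, small shifts of a feature within a patch leave the pooled output unchanged, which is one source of the robustness mentioned in the episode.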


 June 2, 2017  12m
 
 

Unsupervised Depth Perception


This episode is an interview with Tinghui Zhou.  In a recent paper, Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos.  We discuss details of this project and its...


 May 26, 2017  23m
 
 

[MINI] Convolutional Neural Networks


CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel.  In image recognition, this kernel is repeated over the entire image.  In this way, CNNs may achieve the property of translational...
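The weight-sharing idea is small enough to write out directly (a from-scratch sketch of my own, not a library CNN): one kernel is slid across the whole image, so the same feature detector is applied at every position.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a single kernel over the image (valid mode, stride 1).
    Reusing the same weights at every position is what gives a CNN its
    approximate translational invariance."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple edge-detecting kernel responds wherever the edge appears,
# no matter its horizontal position in the image.
edge_kernel = np.array([[1.0, -1.0]])
image = np.array([[0.0, 0.0, 1.0, 1.0]])
print(convolve2d(image, edge_kernel))  # [[ 0. -1.  0.]]
```

A fully connected layer would need separate weights for the edge at each position; the shared kernel learns the pattern once and detects it everywhere.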


 May 19, 2017  14m