SEMINAR: Deep Neural Networks (DNNs) and the Human Brain

Dr. Patrick McClure
When: Sep 01, 2017, 01:00 PM to 02:00 PM
Where: Sackett Hall, RM 103
Contact Phone: 502-852-7485

Abstract: As deep neural networks (DNNs) are applied to increasingly complex problems, they will need to represent their own uncertainty. In machine learning, modelling uncertainty is one of the key features of Bayesian methods. Large DNNs have been successful on many complex tasks, especially visual perception. Recently, stochastic techniques commonly used for regularization (e.g. dropout), combined with sampling, have been shown to efficiently approximate variational inference in DNNs. We compared how noise masks sampled from different distributions affect a DNN's accuracy and its ability to represent its own uncertainty. We found that sampling weights or units during learning increased accuracy, and that sampling during inference improved a DNN's ability to model its own uncertainty. Humans also need to represent their own uncertainty for complex tasks. Often, multiple interpretations of an event are possible given the sensory evidence, even if one interpretation is most probable. The exact neurobiological mechanism for this is unknown, but there is increasing evidence that the human brain could use its inherent stochasticity to represent uncertainty. However, the prominent convolutional neural networks (CNNs) currently used to model human visual perception implement deterministic mappings from input to output. We used Gaussian unit noise and sampling to approximate Bayesian inference in these networks. We tested how sampling during learning and inference affected a CNN's accuracy, its ability to model its own uncertainty, and its prediction of human behavioral confusions for color and grayscale images. We found that sampling with Gaussian noise during learning and inference improved both a CNN's accuracy and the uncertainty it represented for the trained task.
However, for classification on a subset of the trained classes and for grayscale images, sampling noise during learning and inference did not affect accuracy, but did lead to better representation of uncertainty and improved prediction of the mistakes that humans make when performing object recognition. These results add to the evidence that human visual perception is well modelled by Bayesian methods. 
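The core technique the abstract describes, Monte Carlo sampling of stochastic masks at inference time to approximate a predictive distribution, can be sketched in a few lines. The following is a minimal illustration, not the speaker's implementation: the tiny two-layer network, its random weights, and the function names are all hypothetical stand-ins, and Bernoulli dropout masks are shown (the Gaussian-noise variant would replace the mask with multiplicative Gaussian noise on the units).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-layer network; random weights stand in for
# trained parameters purely to make the sketch runnable.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with a Bernoulli dropout mask on the hidden units."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # sample which units to keep
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return softmax(h @ W2)

def mc_predict(x, T=100):
    """Average T stochastic passes to approximate the predictive
    distribution; its entropy serves as an uncertainty estimate."""
    probs = np.stack([stochastic_forward(x) for _ in range(T)])
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum()
    return mean, entropy
```

Because each forward pass samples a different mask, the averaged output spreads probability across plausible labels instead of committing to a single deterministic mapping, which is what lets the network represent its own uncertainty.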

Speaker: Patrick McClure started attending the University of Louisville in 2009. Interested in both mathematics and medicine, Patrick chose to major in Bioengineering. During his junior year, Patrick was awarded the Barry M. Goldwater Scholarship, in large part due to the research he did with Dr. El-Baz. During his senior year, Patrick received the Jerry and Pat Sturgeon Academic Excellence Award. After graduating with his Bachelor's in Bioengineering in 2013, he pursued a Master's in the CECS department focusing on machine learning and optimization, while continuing to conduct research in medical image analysis. In 2014, Patrick was awarded the Cambridge Trust International Scholarship to pursue a PhD in Computational Neuroscience at the University of Cambridge. While at Cambridge, he researched deep neural network models of human visual perception and decision making. After his PhD, Patrick plans to continue his career as a Research Scientist in Machine Learning at the NIH, where he plans to pursue research in both computational neuroscience models and medical image analysis for neurological diseases and disorders.

Seminar Video on YouTube