Researchers at MIT’s CSAIL elucidate the thought process behind neural network predictions through their fascinating paper “Network Dissection: Quantifying Interpretability of Deep Visual Representations”.

Photo by Alina Grubnyak on Unsplash

Have you ever wondered how a neural network (NN) arrives at its predictions once it is trained? Wouldn’t it be interesting to dissect the network and find out what its hidden units have learned? How do you think the hidden units contribute to the network’s predictions after training? Well, one has plenty of time to ponder such intricacies of deep networks while one’s model is training. Alas, how can a novice in deep learning put a probe on the hidden units and interpret them? So I naturally set these thoughts aside, until I stumbled upon the paper “Network Dissection: Quantifying Interpretability of Deep Visual Representations”.
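Before getting into the paper, it helps to see that attaching a probe to hidden units is mechanically simple. The sketch below is not the paper’s method; it is a minimal, assumed setup using PyTorch forward hooks on a pretrained torchvision ResNet-18 (both choices are illustrative) to capture one layer’s activations and rank its units by how strongly they respond to an input.

```python
import torch
from torchvision import models

# Load a pretrained CNN; any torchvision classifier would do here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Storage for the activations captured by the hook.
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Detach so we keep a plain tensor snapshot of the unit responses.
        activations[name] = output.detach()
    return hook

# "Probe" the last convolutional block by attaching a forward hook.
model.layer4.register_forward_hook(save_activation("layer4"))

# A random image stands in for a real input, just to show the mechanics.
dummy_image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(dummy_image)

# Each channel in the captured feature map is one "hidden unit";
# ranking channels by mean activation shows which units fired most.
feature_map = activations["layer4"]            # shape: (1, 512, 7, 7)
unit_scores = feature_map.mean(dim=(0, 2, 3))  # one score per channel
top_units = unit_scores.topk(5).indices
print("Most active units for this input:", top_units.tolist())
```

Running this on a real image instead of random noise would show which channels respond to that particular input; the paper goes much further by systematically quantifying which human-interpretable concepts such units align with.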

About the Paper:



Reshma Abraham

An electronics engineer with a keen interest in deep learning and computer vision.
