Introduction to Artificial Neural Networks and Deep Learning
A Practical Guide with Applications in Python
Machine learning has become a central part of our lives, as consumers, customers, and hopefully as researchers and practitioners! I appreciate all the kind feedback you sent me about “Python Machine Learning,” and I am happy to hear that you found it useful as a learning guide and as a help with your business applications and research projects. I have received many emails since its release, and in many of them you asked about a possible prequel or sequel.
Initially, I was inclined to write more about the “math” parts, which can be a real hurdle for almost everyone without (or even with) a math major in college. Writing a book about “machine learning math” seemed like a worthwhile project, and by now I have roughly 15 chapters’ worth of notes about precalculus, calculus, linear algebra, statistics, and probability theory. However, I eventually came to the conclusion that there are already plenty of other math books out there, most of them far more comprehensive and accurate than the ~500-page introduction to these topics that I had in store. After all, I think that the real motivation for learning and understanding a subject comes from being excited about it in the first place; if you are passionate about machine learning and stumble upon the chain rule in calculus, you won’t have any trouble finding a trusted resource via your favorite search engine these days.
So, instead of writing that “prequel,” let me write about something built upon the concepts that I introduced in the later chapters of “Python Machine Learning” – algorithms for deep learning. In that book, after we coded a multilayer perceptron (a certain kind of feedforward artificial neural network) from scratch, we took a brief look at some Python libraries for implementing deep learning algorithms, and I introduced convolutional and recurrent neural networks on a conceptual level.
In this book, I want to continue where I left off and implement deep neural networks and deep learning algorithms from scratch, using Python, NumPy, and SciPy throughout this educational journey. In addition to the vanilla scientific Python stack, we will implement these algorithms in TensorFlow, a highly performant yet accessible deep learning library for implementing and applying deep learning to real-world problems.
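To give a taste of this from-scratch approach, here is a minimal sketch of the classic Rosenblatt perceptron learning rule in NumPy (the function name, toy data, and hyperparameters below are my own illustrative choices, not code from the book):

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=10):
    """Rosenblatt perceptron rule for binary {0, 1} targets."""
    w = np.zeros(X.shape[1])  # zero-initialized weights for reproducibility
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(np.dot(xi, w) + b >= 0.0)  # unit-step activation
            update = lr * (target - pred)         # 0 when prediction is correct
            w += update * xi
            b += update
    return w, b

# Toy, linearly separable task: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print((X @ w + b >= 0).astype(int))  # → [0 0 0 1]
```

Because the AND data are linearly separable, the rule converges after a handful of epochs; the later chapters build from this simple update rule toward gradient-based training of deeper networks.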
ISBN10: [TBA]
ISBN13: [TBA]
Paperback: est. 2017–2018
For more information, please visit https://github.com/rasbt/deeplearningbook
Manuscripts / Early Access Drafts

Introduction

The Perceptron [Code Notebook]

Optimizing Cost Functions with Gradient Descent

Logistic Regression and Softmax Regression

From Softmax Regression to Multilayer Perceptrons

Cross Validation and Performance Metrics

Regularization in Neural Networks

Learning Rates and Weight Initialization

Convolutional Neural Networks

Recurrent Neural Networks

Echo State Networks

Autoencoders

Generative Adversarial Networks

Deep Generative Models

Reinforcement Learning

Appendix C: Linear Algebra Essentials

Appendix D: Calculus and Differentiation Primer [PDF] [EPUB]

Appendix E: Python Setup

Appendix F: Introduction to NumPy [PDF] [EPUB] [Code Notebook]

Model Zoo: A collection of standalone TensorFlow models in Jupyter Notebooks