My university days are long gone. I was reasonably good at math back then, but after years of not using it, it almost feels like the knowledge was never there.

Here is my journey through the process of relearning the math I need to feel comfortable with the basics of deep learning and to be able to digest papers in the broad area of deep learning research.

Advice to my former self: First read some papers, struggle through them, and let the frustration build up so you have the motivation to learn + you will also build an intuition for which tools you actually need!

As for the resources, I started with the Deep Learning book a few years ago, but got discouraged by the theory, and I didn’t have enough practice to know that it would be useful some day.

Recently I found out that there is a growing community of people around that book, led by Sanyam Bhutani. Apart from the community and the help + motivation to learn that comes with it, there are a number of resources for self-study, such as notebooks with all the concepts translated into code, which makes it so much more practical.

Staying with the same book, there are great lectures going through each of its chapters.

If you want to learn more about torch.autograd, about frameworks that do automatic differentiation in general, and to understand the calculus that is the engine of the deep learning machine, there is the excellent The Matrix Calculus You Need For Deep Learning.
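To get a feel for what torch.autograd does under the hood, here is a toy sketch of reverse-mode automatic differentiation on scalars. This is my own illustrative code (the class and method names are made up, not PyTorch’s API), in the spirit of minimal autograd implementations like micrograd:

```python
# A toy reverse-mode automatic differentiation sketch (the idea behind
# torch.autograd). All names here are illustrative, not PyTorch's API.

class Value:
    """A scalar that remembers how it was computed, so a gradient
    can flow backwards through the chain rule."""

    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically sort the graph so each node's gradient is fully
        # accumulated before it is propagated to its parents.
        order, seen = [], set()

        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for p in node._parents:
                    visit(p)
                order.append(node)

        visit(self)
        self.grad = 1.0  # seed: d(output)/d(output) = 1
        for node in reversed(order):
            for parent, local in zip(node._parents, node._local_grads):
                parent.grad += local * node.grad


# f(x) = x*x + 3x, so df/dx = 2x + 3; at x = 2 that gives 7.
x = Value(2.0)
y = x * x + x * 3
y.backward()
print(x.grad)  # 7.0
```

The forward pass records a little graph of operations; `backward()` then walks it in reverse, multiplying local derivatives along each path and summing across paths — exactly the chain rule that the matrix calculus tutorial builds up.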

Other interesting resources are Mathematics for Machine Learning and short lectures that cover pretty much everything you need for the topic in a very accessible, visual, and concise form.

Last but not least, there is Andrew Ng’s advice on how to read a paper, which I also summarize here!

All these resources are freely available online.