

Get my Free NumPy Handbook:
In this Machine Learning from Scratch tutorial, we are going to implement the LDA algorithm using only built-in Python modules and NumPy. LDA (Linear Discriminant Analysis) is a supervised dimensionality reduction technique and a common preprocessing step in machine learning pipelines. We will learn about the concept and the math behind this popular ML algorithm, and how to implement it in Python.
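For context, here is a minimal sketch of a Fisher LDA fit/transform in plain NumPy, along the lines the tutorial follows; the class name and structure are illustrative, not a verbatim copy of the video's code:

    import numpy as np

    class LDA:
        def __init__(self, n_components):
            self.n_components = n_components
            self.linear_discriminants = None

        def fit(self, X, y):
            n_features = X.shape[1]
            mean_overall = np.mean(X, axis=0)
            # Within-class scatter S_W and between-class scatter S_B
            S_W = np.zeros((n_features, n_features))
            S_B = np.zeros((n_features, n_features))
            for c in np.unique(y):
                X_c = X[y == c]
                mean_c = np.mean(X_c, axis=0)
                S_W += (X_c - mean_c).T @ (X_c - mean_c)
                mean_diff = (mean_c - mean_overall).reshape(n_features, 1)
                S_B += X_c.shape[0] * (mean_diff @ mean_diff.T)
            # Eigendecomposition of S_W^-1 S_B; keep the top components
            eigenvalues, eigenvectors = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
            eigenvectors = eigenvectors.T  # one eigenvector per row
            idxs = np.argsort(np.abs(eigenvalues))[::-1]
            self.linear_discriminants = eigenvectors[idxs][: self.n_components]

        def transform(self, X):
            # Project the data onto the top linear discriminants
            return (X @ self.linear_discriminants.T).real

Usage would look like lda = LDA(2); lda.fit(X, y); X_projected = lda.transform(X).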
⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I’ve been using Kite for 6 months and I love it!
🚀🚀 Get access to the ML notebooks on Patreon: 🚀🚀
If you enjoyed this video, please subscribe to the channel!
The code can be found here:
Further readings:
You can find me here:
Website:
Twitter:
GitHub:
#Python #MachineLearning
THANK YOU SO MUCH
Excellent explanation, many thanks!
You are nicely reading the PPT, very good. 😒
Try to explain it, don't just read it.
When I use this code with sklearn.datasets.load_digits, I get a singular matrix error when calculating np.linalg.inv(SW).
Why does this occur?
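A likely cause: on load_digits many border pixels are constant (always zero), so the within-class scatter matrix SW has zero-variance directions and is not invertible. Two common workarounds, sketched here with dummy matrices standing in for the real SW and SB:

    import numpy as np

    # Dummy scatter matrices just for illustration
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    SW = M @ M.T
    SB = np.eye(4)

    # 1) Moore-Penrose pseudo-inverse instead of the exact inverse
    A = np.linalg.pinv(SW) @ SB

    # 2) Shrinkage: add a small ridge to the diagonal before inverting
    eps = 1e-6
    A = np.linalg.inv(SW + eps * np.eye(SW.shape[0])) @ SB

Dropping constant features (or running PCA first) before LDA also avoids the problem.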
It is a great video. Can you send me the code to my email: sujalbhagat97@gmail.com
I didn't understand the mathematical basis of your work. Why did you use that transformation? Is it the same as SVD? Please give a reference or some keywords so I can study more about the math behind the transformation you used. Thanks.
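For anyone with the same question: the transformation is Fisher's linear discriminant, which is related to but not the same as SVD. In standard notation, LDA maximizes the Rayleigh quotient of the between-class and within-class scatter matrices,

    J(w) = \frac{w^{\top} S_B \, w}{w^{\top} S_W \, w},
    \qquad \text{maximized when} \qquad
    S_W^{-1} S_B \, w = \lambda w,

so the projection directions are the top eigenvectors of S_W^{-1} S_B. Useful search keywords: Fisher linear discriminant, scatter matrices, generalized eigenvalue problem.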
Permission to learn, sir.
Hi, I'm getting this error, although LDA is still able to reduce the dimensions to 2:
"Value 'eigenvectors' is unsubscriptable"
And why can't we sort the idxs with the eigenvectors argument?
Update: solved that issue by converting the eigenvectors to a NumPy array. Thanks to you, my programming skills are getting better.
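For anyone hitting the same error: np.linalg.eig already returns NumPy arrays, so the "unsubscriptable" message usually means the eigenvectors were converted to another type somewhere along the way. And the idxs must be sorted by the eigenvalues, not the eigenvectors, because argsort needs one scalar key per component; an eigenvector is a whole vector and gives no ordering. A small runnable sketch of the sorting step:

    import numpy as np

    # Toy matrix standing in for inv(S_W) @ S_B
    A = np.diag([3.0, 1.0, 2.0])

    eigenvalues, eigenvectors = np.linalg.eig(A)  # both are ndarrays
    eigenvectors = eigenvectors.T                 # row i pairs with eigenvalue i

    # Sort by |eigenvalue|, largest first, and reuse the same index order
    idxs = np.argsort(np.abs(eigenvalues))[::-1]
    eigenvalues = eigenvalues[idxs]
    eigenvectors = eigenvectors[idxs]             # fancy indexing needs an ndarray
    print(eigenvalues)                            # [3. 2. 1.]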
Are you gonna do a playlist for NN from scratch?
Finally finished learning your 14 ML videos. Learned a lot about ML algorithms and NumPy skills. Thanks a lot!
One of the recent interesting works in DL is Batch Normalization.
I tried hard to understand, from online resources, how to implement Batch Norm from scratch, but couldn't.
My main problem is understanding backpropagation in Batch Norm. Can you do a video on it? It would be very helpful. Or at least share any resources (if you have any) on backpropagation in Batch Norm.
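For reference, a minimal NumPy sketch of the Batch Norm backward pass; the part most write-ups gloss over is that the batch mean and variance both depend on x, so three chain-rule paths flow back into dx. Function names here are illustrative:

    import numpy as np

    def batchnorm_forward(x, gamma, beta, eps=1e-5):
        # x: (N, D) mini-batch; normalize each feature over the batch
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)
        out = gamma * x_hat + beta
        cache = (x, x_hat, mu, var, gamma, eps)
        return out, cache

    def batchnorm_backward(dout, cache):
        x, x_hat, mu, var, gamma, eps = cache
        N = x.shape[0]
        std_inv = 1.0 / np.sqrt(var + eps)

        dbeta = dout.sum(axis=0)
        dgamma = (dout * x_hat).sum(axis=0)

        # Three paths into dx: direct, through var, and through mu
        dx_hat = dout * gamma
        dvar = np.sum(dx_hat * (x - mu) * -0.5 * std_inv**3, axis=0)
        dmu = np.sum(-dx_hat * std_inv, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
        dx = dx_hat * std_inv + dvar * 2.0 * (x - mu) / N + dmu / N
        return dx, dgamma, dbeta

A quick numerical-gradient check against these functions is a good way to convince yourself the derivation is right.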
Great Work!
PS: Please make a video on implementing the CNN algorithm. It will be a lot of help for academic students.
Hi, great job. Can you also make a video on EM (expectation maximization)? Thanks a lot.
Off topic: what are your thoughts on the Andrew Ng DL course? Some feedback says it's hard to understand the math. If you know high-school math basics, is it possible to follow that course?
The biggest thing I am struggling with is the shapes for neural networks: the data fed into, e.g., the Embedding layer, and the reshaping. Is this a topic for which you can recommend good learning material, or could you even dive into it in one of your videos?
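A quick NumPy sketch of the shapes around an embedding lookup may help here; all the numbers are made up for illustration:

    import numpy as np

    vocab_size, embed_dim = 10_000, 64
    E = np.random.randn(vocab_size, embed_dim)      # embedding matrix

    token_ids = np.array([[3, 17, 42], [7, 0, 9]])  # (batch=2, seq_len=3)
    embedded = E[token_ids]                         # (2, 3, 64): one vector per token

    # Flatten for a dense layer that expects 2-D input
    flat = embedded.reshape(embedded.shape[0], -1)  # (2, 3 * 64) = (2, 192)
    print(token_ids.shape, embedded.shape, flat.shape)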
Thanks
Excellent tutorial on PCA and LDA
first again!