Recurrent Neural Networks (LSTM / RNN) Implementation with Keras – Python




#RNN #LSTM #RecurrentNeuralNetworks #Keras #Python #DeepLearning

In this tutorial, we implement a Recurrent Neural Network, using an LSTM as the example, with Keras and a TensorFlow backend. The same procedure can be followed for a SimpleRNN.

We implement a multi-layer RNN, visualize the convergence and results, and then extend the implementation to variable-sized inputs.

Recurrent Neural Networks (RNN / LSTM / GRU) are a very popular type of neural network that captures features from time-series or sequential data. They achieve impressive results with text and even image captioning.

In this example we try to predict the next digit given a sequence of digits. The same concept can be extended to text, images, and even music.
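The dataset in the video appears to be built from simple integer sequences. The sketch below is a reconstruction, not the video's exact code — the (i + j)/100 scaling is taken from a commenter's remark further down, and names like `data` and `target` are illustrative:

```python
import numpy as np

# Reconstruct the toy dataset: each sample is 5 consecutive scaled digits,
# and the target is the next value in the sequence.
data, target = [], []
for i in range(100):
    data.append([(i + j) / 100 for j in range(5)])  # e.g. [0.00, 0.01, 0.02, 0.03, 0.04]
    target.append((i + 5) / 100)

# Keras recurrent layers expect 3D input: (samples, timesteps, features).
x = np.array(data).reshape(100, 5, 1)
y = np.array(target)
print(x.shape, y.shape)  # (100, 5, 1) (100,)
```

A single-layer model along the lines of `model.add(LSTM(1, batch_input_shape=(None, 5, 1), return_sequences=False))`, as quoted in the comments below, can then be fit on `x` and `y`.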

Find the code here
GitHub :
Facebook :
Support us on Patreon :
Good Reads :

Check out the machine learning, deep learning and developer products
USA:
India:


By | June 1st, 2019 | Python Video Tutorials | 40 Comments

40 Comments

  1. liu haokun June 1, 2019 at 7:27 pm - Reply

Toooooooo good, it's the best tutorial I've ever seen

  2. Mansha Chandna June 1, 2019 at 7:27 pm - Reply

    What to do if in any case, the loss graph is increasing instead of decreasing?

  3. AngelSmart General June 1, 2019 at 7:27 pm - Reply

    (i+j)/100 gives you 0 or 1 because they are int
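Whether this bites depends on the Python version — in Python 2 (and with // in Python 3) dividing two ints truncates. A small sketch of the pitfall and the usual fixes:

```python
# Floor division of two ints drops the fractional part (this is also what
# plain / did between two ints in Python 2):
assert (3 + 4) // 100 == 0

# In Python 3, / always returns a float, so the scaling works as intended:
assert (3 + 4) / 100 == 0.07

# The classic Python 2 fix was to force one operand to float:
assert float(3 + 4) / 100 == 0.07
```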

  4. Antonio Luis Sombra June 1, 2019 at 7:27 pm - Reply

Best and easiest-to-understand LSTM implementation on YouTube. Loved how you show things not working at first and then try different solutions to make the model learn.

  5. Jidda Dawud Jidda June 1, 2019 at 7:27 pm - Reply

Hello, thanks for your video, it really helps. I want to use RNN, LSTM and GRU on a task that adopts five snapshots of temporal datasets for link prediction: train a network based on snapshots / time series. jiddadawudjidda@gmail.com

  6. Pranav Raj June 1, 2019 at 7:27 pm - Reply

It was a pleasure watching this video. You are doing really GREAT

  7. Banu Gopugari June 1, 2019 at 7:27 pm - Reply

Superb bro

  8. Neha Ganv June 1, 2019 at 7:27 pm - Reply

After raising the epochs to 400, why did he not normalise the data? Wouldn't that have helped the model converge?

  9. Jules Wombat June 1, 2019 at 7:27 pm - Reply

    Cool I used keras LSTM network for Ballistics Missile predictions. I put my code here:
    https://github.com/JulesVerny/BallisticsRNNPredictions

  10. Omer Zchut June 1, 2019 at 7:27 pm - Reply

Hi, thanks. How can you use this for regression instead of classification?

  11. Saichand Sharma June 1, 2019 at 7:27 pm - Reply

How are the parameters in model.summary() calculated?

  12. Jim Greene June 1, 2019 at 7:27 pm - Reply

    Welcome to the semicolon, where we'll be using Python…

  13. Penguinz 9000 June 1, 2019 at 7:27 pm - Reply

    You should add a dense(1) with Linear activation before the output layer.

  14. YOU TUBEN June 1, 2019 at 7:27 pm - Reply

I want code for rainfall prediction using a CSV file, with an RNN algorithm and a time-series algorithm. The CSV file covers 1901–2015, and I want the actuals for 2016, '17, '18 and predictions for 2020, '21, '22.

  15. Shobhit Srivastava June 1, 2019 at 7:27 pm - Reply

You have solved a big problem of mine that took me a whole two days to solve. Thank you man,
may god bless you

  16. air roboticaa June 1, 2019 at 7:27 pm - Reply

    Excellent teaching plz let me share your number sir 918008955452 shaikrasul@airoboticatech.com

  17. Soumyajit Rout June 1, 2019 at 7:27 pm - Reply

Tutorials like this make you wanna do more and learn more 🙂

  18. YOU TUBEN June 1, 2019 at 7:27 pm - Reply

How to do this from a CSV file?

  19. Dean Hope Robertson June 1, 2019 at 7:27 pm - Reply

    Do you have any examples of Seq2Seq RNN implementation with keras?

  20. Arafat Sahin June 1, 2019 at 7:27 pm - Reply

    too fast

  21. Danish Hyat Khan June 1, 2019 at 7:27 pm - Reply

How can I build a bidirectional RNN to implement neural machine translation?
Please help.

  22. Gabriele Taranto June 1, 2019 at 7:27 pm - Reply

At 3'45'' you say that in batch_input_shape, 5 represents the length of the input because we are dealing with 5 inputs each time. If I have a different number of inputs each time, should I write None instead of 5? Is it possible to use an RNN on a dataset with a different number of independent variables for each observation?
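One common answer to the first part of this question: set the timesteps dimension to None and pad each batch to its longest sequence. Keras ships `pad_sequences` for exactly this; the helper below is a plain-NumPy sketch with illustrative names:

```python
import numpy as np

def pad_batch(sequences, pad_value=0.0):
    """Left-pad variable-length sequences to the longest one in the batch."""
    max_len = max(len(s) for s in sequences)
    batch = np.full((len(sequences), max_len, 1), pad_value)
    for i, s in enumerate(sequences):
        batch[i, max_len - len(s):, 0] = s  # shorter sequences padded at the front
    return batch

batch = pad_batch([[0.01, 0.02, 0.03], [0.04, 0.05]])
print(batch.shape)  # (2, 3, 1)
```

Note that None only relaxes the timesteps axis; the features axis (the number of independent variables per timestep) still has to be fixed across observations.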

  23. Marius Trollmann June 1, 2019 at 7:27 pm - Reply

Thank you very much! Short and full of information, without fancy or difficult stuff! Like it!

  24. Prabhath Kota June 1, 2019 at 7:27 pm - Reply

    plt.scatter(range(20), results, c='r')

    plt.scatter(range(20), y_test, c='g')

    plt.show()

I am getting the error 'ValueError: x and y must be the same size'
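That ValueError usually just means the two arrays passed to scatter have different lengths — e.g. the test split does not contain exactly 20 rows. A sketch (with made-up shapes) of deriving the x-axis from the data itself instead of hard-coding range(20):

```python
import numpy as np

results = np.random.rand(23, 1)  # stand-in for model.predict(x_test) with 23 test rows
y_test = np.random.rand(23)

# range(20) has 20 points while results has 23 -> "x and y must be the same size".
# Deriving the x-axis from the data avoids the mismatch:
xs = range(len(results))
assert len(xs) == len(results) == len(y_test)
# plt.scatter(xs, results.flatten(), c='r')  # .flatten() drops the (23, 1) column axis
# plt.scatter(xs, y_test, c='g')
```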

  25. root jatin June 1, 2019 at 7:27 pm - Reply

    Very well explained

  26. Ahmed Iqbal June 1, 2019 at 7:27 pm - Reply

Can we perform a classification task with an RNN? I mean, output class 0 and 1?

  27. Ashray Aman June 1, 2019 at 7:27 pm - Reply

    One of the best videos on this topic

  28. jaideep negi June 1, 2019 at 7:27 pm - Reply

    Hi, I am getting the below error
    AttributeError: 'Descriptor' object has no attribute '_serialized_options'
    At
    model.add(LSTM((1),batch_input_shape=(None,5,1),return_sequences=False)).
    Kindly help.

  29. Portgas d Ace June 1, 2019 at 7:27 pm - Reply

Your channel is underrated.
You're doing a pretty good job, keep it up 🙂

  30. Daniel Franco June 1, 2019 at 7:27 pm - Reply

    Thanks for the effort doing this. Excellent job! I'm a total beginner to deep learning and have a question: can I make the targets to be sequences of 3 or 5 elements?

  31. Raja Harsha Chinta June 1, 2019 at 7:27 pm - Reply

    Simple and perfect

  32. Mo Gal June 1, 2019 at 7:27 pm - Reply

The best LSTM example on YouTube, thanks a lot

  33. Brandon Boynton June 1, 2019 at 7:27 pm - Reply

    With such simple data and high epochs, there's a very good chance that you're over-fitting. You should try testing it with an input sequence it hasn't seen.
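One way to run that check, sketched here under the assumption that the data is the (i + j)/100 toy sequence mentioned in the comments: hold out windows whose values never appear during training, so the test set genuinely measures generalization (extrapolation, in this case):

```python
import numpy as np

# Build 120 windows; train on the first 100 and hold out the last 20,
# so the held-out sequences were never seen during training.
seqs = np.array([[(i + j) / 100 for j in range(5)] for i in range(120)])
targets = np.array([(i + 5) / 100 for i in range(120)])

x_train, x_test = seqs[:100].reshape(-1, 5, 1), seqs[100:].reshape(-1, 5, 1)
y_train, y_test = targets[:100], targets[100:]
# model.fit(x_train, y_train, ...)   # then compare model.evaluate on both splits;
# a large train/test gap indicates over-fitting.
```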

  34. Nikoloz June 1, 2019 at 7:27 pm - Reply

So why are you making your data 3D and your target 1D? Why wouldn't you make the data (100, 5) and the target (100, 1)? I'm asking because I learned the theory with matrices.
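The extra axis is just Keras's input convention: recurrent layers take (samples, timesteps, features), and with one feature per timestep the 3D tensor carries exactly the same numbers as the (100, 5) matrix. A small sketch with illustrative data:

```python
import numpy as np

data2d = np.arange(500).reshape(100, 5)   # the (100, 5) matrix form
target = np.arange(100)                   # the (100,) target vector

# Keras recurrent layers want (samples, timesteps, features); with one
# feature per timestep that is the same matrix with a trailing axis added:
data3d = data2d.reshape(100, 5, 1)
assert (data3d[:, :, 0] == data2d).all()  # nothing was reordered
```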

  35. Daniel Weikert June 1, 2019 at 7:27 pm - Reply

Great work SemiColon. Highly appreciated! It would be great if you continued this series with GANs, autoencoders and other advanced nets. Do you have additional methods to improve the performance of the model?

  36. Christian McDaniel June 1, 2019 at 7:27 pm - Reply

    You are introducing data leakage by normalizing the entire dataset before splitting for train/test. Consider revising.
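A sketch of the leakage-free ordering the commenter is asking for (illustrative data; sklearn's scalers follow the same fit-on-train-only pattern): split first, compute the normalization statistics on the training portion only, then reuse them on the test portion:

```python
import numpy as np

np.random.seed(0)
values = np.random.rand(120) * 50          # raw, unnormalized series

train, test = values[:100], values[100:]   # split BEFORE normalizing

# Fit the scaling statistics on the training split only...
lo, hi = train.min(), train.max()
# ...then apply the same transform to both splits.
train_n = (train - lo) / (hi - lo)
test_n = (test - lo) / (hi - lo)           # may fall outside [0, 1]; that is fine

assert 0.0 <= train_n.min() and train_n.max() <= 1.0
```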

  37. Gongwei Wang June 1, 2019 at 7:27 pm - Reply

    So surprised to see the performance improvement by normalising the input. Thank you.

  38. Bakiki77 June 1, 2019 at 7:27 pm - Reply

    Yes very helpful and simple. Could you consider talking about GANS….

  39. chinnu siddu June 1, 2019 at 7:27 pm - Reply

Respected sir,
I'm from India and I have a big problem.
This is my Kaggle notebook: https://www.kaggle.com/bsivavenu/simple-eda-on-fgvc5. They gave us a set of image URLs and labels, along with train, test and validation sets. I want to develop a model so that for every URL in the test set I can predict a label. For this I want to download some images from the train set (there are actually 19000+), train on them and cross-validate.
But here is the issue: unlike the dog/cat problem, this is a multi-class classification model. So how can I give labels to the images I downloaded, and how do I cross-validate? Some people are saying we have to change the images to NumPy arrays and some are saying it is not required. Please guide me, sir. I have a little knowledge of Keras convnets, RNNs and LSTMs.

Thank you.

  40. Kadir Erturk June 1, 2019 at 7:27 pm - Reply

    Really helpful and great presentation.

    How about similar result with less epochs?
    model.add(LSTM(5, batch_size=None, input_shape=(5,1), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(20))
    model.add(Dense(1))
    model.compile(loss="mse", optimizer='adadelta')
    model.summary()
    history = model.fit(x_train, y_train, epochs=50, validation_data=(x_test, y_test), verbose=0)
    results = model.predict(x_test)
