

In this Python Programming Tutorial, we will be learning how to use the Requests library. The Requests library allows us to send HTTP requests and interact with web pages. We will be learning how to grab the source code of a site, download images, POST form data to routes, read JSON responses, perform authentication, and more. Let’s get started…
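For a quick first taste, here is a minimal sketch of what a request looks like with the library (xkcd.com/353 is the example page used later in the video):

import requests

r = requests.get('https://xkcd.com/353/')  # fetch the page
print(r.status_code)  # 200 on success
print(r.text[:100])   # first 100 characters of the page's HTML source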
My next video will be a real-world example of a script I wrote to monitor my personal website. We'll use the Requests library to monitor the site, and if it is down, we will learn how to send an email and automatically restart the server. I hope everyone finds this useful! Hope you're having a great week!
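As a rough idea of what such a monitor might look like (a sketch, not the actual script from the upcoming video; the URL, email addresses, and local mail server are placeholders):

import smtplib
import requests

def site_is_up(url):
    try:
        r = requests.get(url, timeout=5)
        return r.ok  # True for any status code below 400
    except requests.exceptions.RequestException:
        return False  # a connection error or timeout counts as down

if not site_is_up('https://example.com'):  # placeholder URL
    with smtplib.SMTP('localhost') as server:  # assumes a local mail server
        server.sendmail('monitor@example.com', 'admin@example.com',
                        'Subject: Site down\n\nThe site is not responding.')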
Great video. You covered exactly what I needed!
This video is so helpful. I cannot thank you enough!
Hi, does your next video show adding an SSL certificate and more on basic authentication? Thanks!
I was looking for a video tutorial for httpbin.org but I could not find one. You should add the keyword httpbin as well so people can find your video.
Pardon me if this has already been requested, but do you have any plans to explore authentication schemes like oauth?
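For OAuth specifically, the companion requests-oauthlib package plugs into the same auth= parameter used for basic auth; a minimal sketch, where the keys and URL are placeholders:

import requests
from requests_oauthlib import OAuth1  # pip install requests-oauthlib

auth = OAuth1('client_key', 'client_secret',                 # placeholder credentials
              'resource_owner_key', 'resource_owner_secret')
r = requests.get('https://api.example.com/user', auth=auth)  # placeholder URL
print(r.status_code)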
Concise, clear explanation and examples. Always hitting the key points without any fluff, thanks!!!
Took me a couple of hours, but I finally managed to modify the code so I can download entire comics by iterating through the pages, storing the downloaded comics in individual folders.
Awesome, now I don't have to save each page one by one when downloading comics, just add the link and presto!
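One way to iterate through pages like that is xkcd's JSON endpoint for each comic; a sketch where the comic-number range is a placeholder and 'img' is the field that holds the image URL:

import os
import requests

os.makedirs('comics', exist_ok=True)  # one folder for the output
for num in range(353, 356):           # placeholder range of comic numbers
    meta = requests.get(f'https://xkcd.com/{num}/info.0.json').json()
    img = requests.get(meta['img'])   # download the actual image
    with open(f'comics/{num}.png', 'wb') as f:
        f.write(img.content)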
I really love your videos. Thank you so much and wish you're always happy.
Great piece of work!
Best
Did he ever do the requests-html video? Can't find it. Edit – oh yes he did: https://youtu.be/a6fIbtFB46g
When I tried r.json(), it gave a "JSONDecodeError: Expecting value" error.
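That error usually means the response body wasn't JSON at all, e.g. an HTML error page or an empty body. One defensive pattern is to inspect the response before decoding:

import requests

r = requests.get('https://httpbin.org/get')
if r.ok and 'application/json' in r.headers.get('Content-Type', ''):
    print(r.json())
else:
    print('Not a JSON response:', r.status_code)
    print(r.text[:200])  # inspect what actually came back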
I didn't see the email for the job I'm applying to for a few days. They need Python to request and download files.
Thanks for this!
10:27
No words can define Corey <3
That was one of the few clean videos on the entire internet…
Awesome Tutorial!
❤️
I'm crying inside cause that's exactly what I was looking for
Would be nice if the actual URL worked… nice fact checking…
Mind fucking blown, man. I wrote my two Python scripts today. I have limited experience with JS and AHK. I'm so excited to get more into Python now. This video was so informational, you just don't even know.
You are the best
<3 <3 <3 . Thanks a lot. Great Video !
Hey Corey, I have been using urllib to access data, and there I have to ignore SSL certificates in order to access HTTPS. Here I don't see anything like that. Is it already handled by the get command, or am I missing something? Forgive me for my newbie doubts.
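requests verifies SSL certificates by default, so HTTPS generally works with no extra steps. If you really do need to skip verification, say for a self-signed certificate, there is a verify parameter; a sketch with a placeholder URL:

import requests

# disables certificate checking; urllib3 will emit an InsecureRequestWarning,
# so only do this for sites you trust
r = requests.get('https://self-signed.example.com', verify=False)  # placeholder URL
print(r.status_code)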
This video is worth watching, full of knowledge.
Awesome and great tutorial, thanks. Before this, as a non-tech user, I was downloading images from URLs with this tool: https://e-scraper.com/useful-articles/scrape-and-download-images-from-a-website-without-coding/ . Maybe it helps somebody too.
10:04 Yeah, I think that same thing every time I open any library. Respect to those guys
r.json() isn't working for me 😤😤😡
Hi Corey! Thanks for this video.
Can you please upload a video about how to parse an HTML page after login and save the cookies?
I tried to take some value from an HTTP page each day, but it gives me an error since the cookies change (I was using curl).
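A requests.Session keeps cookies across requests, which is the usual way to stay logged in; a sketch where the login URL and form field names are placeholders that depend on the site:

import requests

with requests.Session() as s:
    # cookies set by the login response are stored on the session
    s.post('https://example.com/login',                    # placeholder URL
           data={'username': 'me', 'password': 'secret'})  # placeholder fields
    r = s.get('https://example.com/protected-page')        # cookies sent automatically
    print(r.text[:200])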
People are excellent, and so are you(^///^)
Great job, Corey. I liked the video, and it was what I was looking for: a quick and clear explanation of what requests is, and you do it professionally too (+sub)
@Corey Schafer Amazing video, although I have a question which is kind of specific. I am trying to parse my company's website to create a database but am not able to do that. I found a library called requests_ntlm but couldn't make the code work. Any guidance would be really helpful.
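requests_ntlm provides an HttpNtlmAuth class that plugs into the same auth= parameter shown in the video; a minimal sketch with placeholder domain, username, and URL:

import requests
from requests_ntlm import HttpNtlmAuth  # pip install requests_ntlm

r = requests.get('https://intranet.example.com',  # placeholder URL
                 auth=HttpNtlmAuth('DOMAIN\\username', 'password'))
print(r.status_code)  # a 401 here usually means the credentials were rejected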
Hi Corey, I have a question: how do I determine the download speed using the requests library in a script? I don't want to use pyspeedtest.
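One approach is to time a streamed download and divide the bytes received by the elapsed seconds; a rough sketch (the httpbin test URL is just an example, and the figure includes request overhead):

import time
import requests

url = 'https://httpbin.org/bytes/1048576'  # 1 MB of random data as a test payload
start = time.perf_counter()
r = requests.get(url, stream=True)
size = 0
for chunk in r.iter_content(chunk_size=8192):
    size += len(chunk)
elapsed = time.perf_counter() - start
print(f'{size / elapsed / 1_000_000:.2f} MB/s')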
10:06 Dude, welcome to the club. Your videos alone make most of us feel the same way, so I can only imagine if these people from your perspective are another step up in productivity… then I'm completely puzzled! Anyway, thanks for a great video, Corey!
How could you import text files from SharePoint into Python and save them as an Excel file in a folder?
"Module 'requests' has no 'get' member"
Hi Corey, can you do a video on HTTP redirects and how to capture the tokens that are generated in the URL? Your response is much appreciated.
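requests follows redirects automatically and records them in r.history; to capture something from the redirect itself, such as a token in the Location header, the automatic follow can be disabled. A sketch using httpbin's redirect endpoints:

import requests

r = requests.get('https://httpbin.org/redirect-to?url=/get', allow_redirects=False)
print(r.status_code)          # 302, since the redirect was not followed
print(r.headers['Location'])  # the redirect target; parse any tokens from here

r = requests.get('https://httpbin.org/redirect/2')  # followed automatically
print([resp.url for resp in r.history])  # the intermediate responses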
Corey, you're the best 🙂
Is this good if I want to learn RESTful APIs?
thanks
For anyone who needs most of the code written in this session:

# pip install requests
import requests

r = requests.get("https://xkcd.com/353/")
print(r)
print(dir(r))
print(r.text)

# downloading an image
r = requests.get("https://imgs.xkcd.com/comics/python.png")
print(r.content)

# writing the downloaded image to disk
with open('comic.png', 'wb') as f:
    f.write(r.content)

print(r.status_code)
# 200s are successes
# 300s are redirects
# 400s are client errors, e.g. you don't have access or permission
# 500s are server errors
print(r.ok)
# ok is True for any status code below 400

payload = {'page': 2, 'count': 25}
r = requests.get('https://httpbin.org/get', params=payload)
print(r.text)
print(r.url)

payload = {'username': 'corey', 'password': 'testing'}
r = requests.post('https://httpbin.org/post', data=payload)
# data= sends the payload in the request body, like a submitted form
print(r.text)
# 'args' is empty because nothing was passed as a URL parameter;
# the payload shows up under 'form' instead
# to know what values a form expects, look at the source code of the page
# most of the time the response is JSON, so there is a method for that:
print(r.json())
# json() builds a Python dictionary from the JSON response
# to capture that in a variable:
r_dict = r.json()
print(r_dict['form'])

# the authentication above is form-based; there are other types,
# such as basic authentication
# 'https://httpbin.org/basic-auth/corey/testing' is a URL behind basic auth
# that accepts only username=corey and password=testing
# auth takes a (username, password) tuple
r = requests.get('https://httpbin.org/basic-auth/corey/testing', auth=('corey', 'testing'))
print(r.text)
print(r)
r = requests.get('https://httpbin.org/basic-auth/corey/testing', auth=('coreyms', 'testing'))
print(r.text)
print(r)
# <Response [401]>, an unauthorized response

# when checking whether a site is up, it is good practice to set a timeout,
# or else the request might hang indefinitely if the site takes too long
# 'https://httpbin.org/delay/{seconds}' delays the response by that many seconds
try:
    r = requests.get('https://httpbin.org/delay/6', timeout=3)
    print(r)
except requests.exceptions.Timeout:
    print('request timed out')  # a 6-second delay exceeds the 3-second timeout
r = requests.get('https://httpbin.org/delay/1', timeout=3)
print(r)