Learn how to import flat files from the web with Python: https://www.datacamp.com/courses/importing-data-in-python-part-2
You’re now able to import data in Python from all sorts of file types: flat files such as .txts and .csvs, and other file types such as pickled files, Excel spreadsheets and MATLAB files. You’ve also gained valuable experience in querying relational databases to import data from them using SQL. You have really come a long way: congratulations!
However, all of these skills involve importing data from files that you have locally. Much of the time as a data scientist, these skills won’t be quite enough because you won’t always have the data that you need. You will need to import it from the web. Say, for example, you want to import the Wine Quality dataset from the Machine Learning Repository hosted by the University of California, Irvine. How do you get this file from the web? You could use your favourite web browser to navigate to the relevant URL and point and click on the appropriate hyperlinks to download the file, but this poses a few problems:
Firstly, it isn’t written in code and so poses reproducibility problems. If another Data Scientist wanted to reproduce your workflow, she would necessarily have to do so outside Python;
Secondly, it is NOT scalable: if you wanted to download one hundred or one thousand such files, it would take one hundred or one thousand times as long, respectively, whereas if you wrote it in code, your workflow could scale.
As reproducibility and scalability are situated at the very heart of Data Science, you’re going to learn in this Chapter how to use Python code to import and locally save datasets from the world wide web.
You’ll also learn how to load such datasets into pandas dataframes directly from the web, whether they be flat files or otherwise. Then you’ll place these skills in the wider context of making HTTP requests: in particular, you’ll make HTTP GET requests, which in plain English means getting data from the web. You’ll use these new requests skills to learn the basics of scraping HTML from the internet, and you’ll use the wonderful Python package BeautifulSoup to parse the HTML and turn it into data.
There are a number of great packages to help us import web data: herein, you’ll become familiar with the urllib and requests packages. We’ll first check out urllib:
This module provides a high-level interface for fetching data across the World Wide Web. In particular, the urlopen() function is similar to the built-in function open(), but accepts Universal Resource Locators (URLs) instead of filenames.
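To see the open()/urlopen() analogy concretely, here is a minimal, hedged sketch: it writes a small local file and then reads it back through a file:// URL, so you can run it without a network connection. The filename and contents are illustrative, not from the course.

```python
from pathlib import Path
from urllib.request import urlopen

# Create a small local file, then open it via a file:// URL to show that
# urlopen() accepts URLs much like the built-in open() accepts filenames.
Path("example.txt").write_text("hello, web\n")
url = Path("example.txt").resolve().as_uri()  # e.g. file:///.../example.txt

with urlopen(url) as response:           # a file-like object, just like open()
    text = response.read().decode("utf-8")
print(text)  # hello, web
```

The same call works unchanged with an http:// or https:// URL, which is what you’ll use for real web data.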
Let’s now dive directly into importing data from the web with an example: importing the Wine Quality dataset for white wine. Don’t get jealous: in the first interactive exercise, it will be your job to import the red wine dataset!
All we have done here is
imported a function called urlretrieve from the request subpackage of the urllib package;
we assigned the relevant URL as a string to the variable url;
we then used the urlretrieve function to write the contents of the URL to the file ‘winequality-white.csv’.
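The three steps above can be sketched as follows. The URL is the UCI Machine Learning Repository location for the white wine dataset mentioned in the text; note that running this requires an internet connection.

```python
# Import urlretrieve from the request subpackage of urllib
from urllib.request import urlretrieve

# Assign the relevant URL to the variable url
url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-white.csv")

# Write the contents of the URL to a local file
urlretrieve(url, "winequality-white.csv")
```

After this runs, ‘winequality-white.csv’ sits in your working directory, ready to be loaded like any other local flat file.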
Now it’s your turn to do the same but for red wine! In the following interactive exercises you’ll also figure out how to use pandas to load the contents of web files directly into pandas dataframes without first having to save them locally.
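As a preview of that pandas approach, here is a hedged sketch of loading the same white wine dataset straight into a dataframe, with no local save step: pandas accepts a URL anywhere it accepts a file path. This particular file is semicolon-delimited, hence the sep argument; it requires an internet connection.

```python
import pandas as pd

# pandas can read a remote flat file directly from its URL
url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-white.csv")

# This dataset uses ';' as its delimiter, so we pass sep=';'
df = pd.read_csv(url, sep=";")
print(df.shape)
```

The resulting dataframe has the usual wine quality columns, including the ‘quality’ target, and you can work with it exactly as if it had come from a local file.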