In many quantitative analyses, we rely on factors such as climate or economic data. Such data are usually displayed as tables on websites and can be collected from there. However, copy-pasting them into Excel is extremely inefficient.
Instead, we can use R to read the data directly from the web pages and store them in a uniform format without many data-cleaning steps.

To clarify, most statistical models are not refreshed instantly. "Instantly" here means that, as new data accumulate over time, the model needs to be (or can be) refreshed at the same time or shortly afterwards.

Our task is to build a semi-automated web scraping program so that, whenever a model refresh is needed, the raw data are already prepared.
We therefore need to accomplish two things:

• read data from the web by specifying various arguments;
• store the data in a local database server.

Here we take 2 examples:

• Climate Data (daily)
• Stock Index (daily)

Several packages need to be loaded beforehand.
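The packages used in this post can be loaded as shown below (a minimal sketch; install any missing ones with install.packages() first):

```r
library(XML)       # readHTMLTable() for scraping HTML tables
library(quantmod)  # getSymbols() for downloading stock index data
library(dplyr)     # copy_to(), tbl() and the usual data manipulation verbs
library(RMySQL)    # MySQL driver for storing data locally
```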

## Climate Data Collection - XML::readHTMLTable

The XML package provides a function that enables R to read tables directly from a website (provided the page actually contains tables).
Let's take a look at this function first:
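You can print the argument list yourself (the exact signature depends on the version of the XML package you have installed):

```r
library(XML)
# Show the arguments of readHTMLTable(); see ?readHTMLTable for details
args(readHTMLTable)
```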

readHTMLTable() is modeled on read.table() from base R; if you are familiar with the latter, you can specify more arguments than the ones described here.
doc expects an HTML document, either a file name or a URL. For simplicity, we can assign a URL to it directly.
which is an integer vector identifying which tables to return from within the document. It applies to the method for the whole document, not to individual tables. To find out how many tables a page has, you can inspect the page's elements and count the <table> tags.
We use “http://en.tutiempo.net/climate/“ to gather the climate data. The data on this website are organized by city. By selecting a city, year, and month (e.g. China -> Beijing -> 2015 -> June), you arrive at a URL like “http://en.tutiempo.net/climate/06-2015/ws-545110.html“.
Opening this page, you will find two tables: one with the climate data and one explaining the data fields. If you right-click on the page and select Inspect Element, it is also easy to find the two <table> tags.
To read the first table, set which = 1:
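A minimal call might look like this (the object name beijing_jun2015 is just an illustrative choice):

```r
library(XML)
url <- "http://en.tutiempo.net/climate/06-2015/ws-545110.html"
# which = 1 returns only the first <table> on the page, i.e. the climate data
beijing_jun2015 <- readHTMLTable(url, which = 1, stringsAsFactors = FALSE)
head(beijing_jun2015)
```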

What if you want to download data for different countries and different periods?
Notice that the URL is composed of three parts; take the previous case for instance.
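For the Beijing example, the three parts can be glued together like this (the variable names are illustrative):

```r
base    <- "http://en.tutiempo.net/climate/"  # fixed part
period  <- "06-2015"                          # month-year part
station <- "ws-545110"                        # station (city) code part
url <- paste0(base, period, "/", station, ".html")
url
# [1] "http://en.tutiempo.net/climate/06-2015/ws-545110.html"
```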

By changing the latter two parts, you can collect as much information as you want.

Using this function in nested loops, you can obtain the full data set (suppose we assign the result to climate; we will use it later), as sketched below.
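A rough sketch of such a nested loop is given here; the months and station codes are illustrative and would be replaced by whatever periods and cities you need:

```r
months   <- sprintf("%02d-2015", 1:6)   # Jan to Jun 2015
stations <- c("ws-545110")              # Beijing; add more station codes here
climate  <- NULL
for (m in months) {
  for (s in stations) {
    url <- paste0("http://en.tutiempo.net/climate/", m, "/", s, ".html")
    tab <- readHTMLTable(url, which = 1, stringsAsFactors = FALSE)
    tab$period  <- m                    # record where each row came from
    tab$station <- s
    climate <- rbind(climate, tab)
  }
}
```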

## Stock Index Data Collection - quantmod::getSymbols

In the quantmod package, getSymbols() is a well-developed function. Given an index symbol, it automatically retrieves the data from Yahoo Finance (other sources can also be set) and saves them in a variable named after the symbol. You can also specify the start and end dates of the data.
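For example, to download the S&P 500 index for the first half of 2015 (the symbol and dates below are illustrative choices):

```r
library(quantmod)
# Download daily data from Yahoo Finance; getSymbols() creates an xts object
# named after the symbol ("^GSPC" becomes GSPC in the workspace)
getSymbols("^GSPC", src = "yahoo", from = "2015-01-01", to = "2015-06-30")
head(GSPC)
```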

## Data Storage - dplyr & RMySQL

Both dplyr and RMySQL provide methods for connecting R to MySQL.
To create new tables, we can use dplyr::copy_to() to write a data frame into a MySQL table.
We can then use dplyr::tbl() together with the usual dplyr verbs to manipulate data in a MySQL database.
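A minimal sketch of this workflow is shown below. The connection details (database name, user, password) are placeholders, and the sketch assumes a recent dplyr that delegates database backends to dbplyr, so that copy_to() and tbl() accept a plain DBI connection:

```r
library(DBI)
library(RMySQL)
library(dplyr)

# Placeholder credentials - replace with your own MySQL server settings
con <- dbConnect(MySQL(), dbname = "scraping", host = "localhost",
                 user = "user", password = "password")

# Create a new MySQL table from the climate data frame collected earlier
copy_to(con, climate, name = "climate", temporary = FALSE)

# Reference the remote table lazily and manipulate it with dplyr verbs
climate_db <- tbl(con, "climate")
climate_db %>% filter(period == "06-2015") %>% head()

dbDisconnect(con)
```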