R, Tidyverse and Databases

This semester at uni I’ve been doing a capstone project (an iLab). Due to the volume of data, I’ve slapped it into a local MySQL instance (as I already had MySQL installed). To get the dataset down to a manageable size before loading it into R, I’ve been toying around with tidyverse’s dbplyr. It uses the same standard approach as the rest of the tidyverse family, something I’m already comfortable with.

This post does assume some familiarity with SQL. Rather than risk violating my client’s non-disclosure agreement, I’ve created a small database of the early days of NASA, with the following structure:
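The schema diagram hasn't survived here, so as a rough sketch, the structure is along these lines — the table and column names below are my assumptions based on the queries later in the post (a three-column astronauts table, a missions table with a program, and a join table); the real definitions are in the setup script:

```sql
-- Sketch only: names and types are assumptions, not the setup script's exact DDL
CREATE TABLE astronauts (
    astronaut_id INT PRIMARY KEY,
    first_name   VARCHAR(50),
    surname      VARCHAR(50)
);

CREATE TABLE missions (
    mission_id INT PRIMARY KEY,
    program    VARCHAR(20),   -- e.g. 'Mercury', 'Gemini', 'Apollo'
    name       VARCHAR(50)
);

-- Join table: which astronauts flew which missions
CREATE TABLE astronauts_missions (
    astronaut_id INT,
    mission_id   INT
);
```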

If you want to follow along, the code to create and populate the MySQL database is here, and the R code is here. In both cases, change the password from xxxxxxxx to something more secure.

You’ll need to install the dbplyr package too.
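Something like the following should cover it (dbplyr sits on top of DBI, and you'll need a MySQL driver — RMariaDB works for both MariaDB and MySQL; RMySQL is the older alternative):

```r
install.packages("dbplyr")
install.packages(c("DBI", "RMariaDB"))
```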

Connecting to the database
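A connection looks something like this — the database name comes from the setup scripts, while the user name here is a placeholder assumption; swap in whatever credentials you created:

```r
library(DBI)

# Connect to the local MySQL instance holding the nasa database.
# User and password are placeholders -- use the credentials from the
# setup scripts (and change the password to something more secure).
con <- dbConnect(
  RMariaDB::MariaDB(),   # RMySQL::MySQL() also works
  dbname   = "nasa",
  host     = "localhost",
  user     = "nasa",
  password = "xxxxxxxx"
)
```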

Lazy loading
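The first step is to wrap each database table in a tbl. Assuming a DBI connection named con to the nasa database, and the table names from the setup script:

```r
library(dplyr)

# Define lazy tbls over the database tables; no data is fetched yet
astronauts <- tbl(con, "astronauts")
missions   <- tbl(con, "missions")
```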

Once defined, our tbls can be used much like any other R dataframe.

There are, however, a few things to be aware of. First, the tbls don’t contain the complete dataset at this point – it’s easier to think of them as promises to fetch the data from the database when needed. Looking more closely at the tbl makes this clearer:
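For instance, checking the class of the astronauts tbl assumed earlier:

```r
class(astronauts)
# Reports classes including "tbl_lazy" and "tbl_sql",
# rather than a plain "data.frame"
```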


We see it’s not a standard R dataframe, but an implementation of tbl called tbl_lazy. It’s the “lazy” part of the name that signifies the data won’t be loaded until required.

Generated SQL

The show_query() function illustrates exactly what form the SQL will take when it’s run. For example:
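Here are two illustrations, a bare select and then a chained filter-and-select, using the astronauts tbl assumed earlier (the exact SQL dbplyr emits may differ between versions):

```r
astronauts %>%
  select(surname) %>%
  show_query()
# Generates something like: SELECT `surname` FROM `astronauts`

astronauts %>%
  filter(surname == "Armstrong") %>%
  select(surname) %>%
  show_query()
# Generates a single query along the lines of:
# SELECT `surname` FROM `astronauts` WHERE (`surname` = 'Armstrong')
```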



As the second example shows, dbplyr will convert chained commands into a single piece of SQL where possible, having the database do the work, rather than retrieving data into memory and manipulating it there.

Examining the data

As mentioned, dbplyr won’t fetch the data from the database until it’s needed. But it will pull in a little so we can examine the data. For example, looking at the astronauts tbl:
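Simply printing the tbl fetches just enough rows to display (astronauts being the lazy tbl assumed earlier):

```r
astronauts
# Shows the first few rows; the header reports a size of [?? x 3],
# and the listing ends with "# ... with more rows"
```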

Note that the first line says [?? x 3], and the last line says # ... with more rows, without specifying how many more rows. These are both indications that the full query hasn’t been run. In fact, looking at the database logs, the query that has been run is SELECT * FROM `astronauts` LIMIT 10. The use of the LIMIT clause is dbplyr’s way of getting enough data to display, without needing to pull in all the data.

Note that the glimpse() function in the dplyr package isn’t quite as helpful:
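Running it against the same lazy tbl:

```r
glimpse(astronauts)
# Lists each column with its first few values, under the heading
# "Observations: 25" -- the number of rows fetched, not the table size
```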

Observations: 25 gives the impression that the size of the dataset is known, when it is not. This time, the database logs show SELECT * FROM `astronauts` LIMIT 25, and pulling the full table into memory shows there are in fact 35 astronauts in the table.
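A count over the fully loaded table makes the point (astronauts being the lazy tbl assumed earlier):

```r
astronauts %>%
  collect() %>%   # fetch the whole table into R
  nrow()
# 35 -- not the 25 that glimpse() suggested
```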

Forcing data to be loaded

To force R to load the data into memory, use the collect() function: it converts the promise of data into a concrete data frame. Compare:
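For example, with the astronauts tbl assumed earlier:

```r
astronauts              # a tbl_lazy: prints a preview, row count unknown

astronauts %>%
  collect()             # a regular tibble: all 35 rows now in memory
```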


Complex queries

As mentioned above, dbplyr  will try to do as much processing as possible in the database. For example, in order to determine the astronauts who flew multiple missions in Gemini or Apollo, you’d do something like this with dplyr:
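A sketch of that pipeline, using the table and column names assumed earlier (the real setup script may differ):

```r
library(dplyr)

# Lazy tbls over the join and missions tables (names are assumptions)
astronauts          <- tbl(con, "astronauts")
astronauts_missions <- tbl(con, "astronauts_missions")
missions            <- tbl(con, "missions")

astronauts %>%
  inner_join(astronauts_missions, by = "astronaut_id") %>%
  inner_join(missions, by = "mission_id") %>%
  filter(program %in% c("Gemini", "Apollo")) %>%
  group_by(surname) %>%
  summarise(n_missions = n()) %>%
  filter(n_missions > 1) %>%
  arrange(desc(n_missions))
```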



Looking at how dbplyr handles this, we find it’s all pushed into the database, and run as a single query:



A big, ugly query, to be sure. And while I could write it more succinctly by hand, I’ve found the query plans are identical, and I need not use SQL when I already know dplyr.


dplyr users will find it’s not a completely painless transition to dbplyr, however. Take, for example, finding all the astronauts whose surnames start with “C”. In dplyr, you might use something like:
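On an ordinary in-memory data frame, grepl() is the natural tool (astronauts here being the lazy tbl assumed earlier):

```r
astronauts %>%
  filter(grepl('^C', surname))
# Against the database, this errors rather than filtering
```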

Unfortunately, this will return an error reporting that FUNCTION nasa.GREPL does not exist.

One alternative is to collect() the data before applying the filter:
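That is, something like:

```r
astronauts %>%
  collect() %>%                   # pull everything into R first...
  filter(grepl('^C', surname))    # ...then filter in memory
```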

That may not be efficient if we’re doing a lot of processing after the filter() command, and it flies in the face of our goal to do as much processing as possible in the database.

In this case, we’d like to use the SQL LIKE operator. We could guess that something like filter(surname %like% 'C%') might do the trick. Rather than running the full (possibly expensive) query, we can use dbplyr’s translate_sql() function, which will show how an R command will be converted to SQL. For example:
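No database connection is needed for this (though the exact output varies with the dbplyr version):

```r
library(dbplyr)

translate_sql(surname %like% 'C%')
# Produces something like: `surname` LIKE 'C%'
```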

If translate_sql() doesn’t know what to do with a piece of code, it’ll pass the code to the database more or less as is. This was the cause of the FUNCTION nasa.GREPL does not exist error we got above:
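Translating the grepl() version shows the pass-through:

```r
library(dbplyr)

translate_sql(grepl('^C', surname))
# dbplyr doesn't know grepl(), so it emits something like
# GREPL('^C', `surname`) -- which MySQL then rejects
```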

But it does mean that even if translate_sql() doesn’t know about the SQL LIKE operator, our guess of filter(surname %like% 'C%') will be passed through to the database as we need:
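Checking the generated query confirms it (astronauts being the lazy tbl assumed earlier):

```r
astronauts %>%
  filter(surname %like% 'C%') %>%
  show_query()
# The WHERE clause comes out along the lines of: `surname` LIKE 'C%'
```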



Query plans

As well as show_query(), dbplyr provides the explain() function, which gives detail about how the database optimiser will run the query. This is useful for checking that (for example) the database will make efficient use of indices.
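For example, using the filter from the previous section (astronauts being the lazy tbl assumed earlier):

```r
astronauts %>%
  filter(surname %like% 'C%') %>%
  explain()
# Prints the generated SQL, followed by MySQL's EXPLAIN output --
# including which indices, if any, the optimiser will use
```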