tfcb_2021

Lecture 2: Introduction to Data

Trevor Bedford (@trvrb, bedford.io)

Learning objectives

  1. Reproducibility and collaborative science
  2. File organization and naming
  3. Tidy data

Class materials

This class requires Microsoft Excel (or an equivalent program that can open .xlsx files); see Software installation for more information.

Reminders

Reproducible science

Motivation

There is a lot of interest in and discussion of the reproducibility “crisis”. In one example, “Estimating the reproducibility of psychological science” (Open Science Collaboration, Science 2015), the authors attempt to replicate 100 studies in psychology and find that only 36% of the replications yield statistically significant results.

The Center for Open Science has also embarked on a Reproducibility Project for Cancer Biology, with results being reported in an ongoing fashion.

There are a lot of factors at play here, including “p hacking” driven by the “garden of forking paths” and selective publication of significant results. I would call this a crisis of replication and treat it as a separate concept from reproducibility.

But reproducibility itself is also difficult to achieve. In “An empirical analysis of journal policy effectiveness for computational reproducibility” (Stodden et al, PNAS 2018), Stodden, Seiler, and Ma:

Evaluate the effectiveness of journal policy that requires the data and code necessary for reproducibility be made available postpublication by the authors upon request. We assess the effectiveness of such a policy by (i) requesting data and code from authors and (ii) attempting replication of the published findings. We chose a random sample of 204 scientific papers published in the journal Science after the implementation of their policy in February 2011. We found that we were able to obtain artifacts from 44% of our sample and were able to reproduce the findings for 26%.

They get responses like:

“When you approach a PI for the source codes and raw data, you better explain who you are, whom you work for, why you need the data and what you are going to do with it.”

“I have to say that this is a very unusual request without any explanation! Please ask your supervisor to send me an email with a detailed, and I mean detailed, explanation.”

The tables in the paper are worth a close look.

At the very least, it should be possible to take the raw data that forms the basis of a paper, run the same analysis that the authors used, and confirm that it generates the same results. This is my bar for reproducibility.

Reproducible science guidelines

My number one suggestion for reproducible research is to have:

One paper = One GitHub repo

Put both data and code into this repository. This should be all someone needs to reproduce your results.

Digression to demo GitHub.

This has a few added benefits:

  1. Versioning data and code through GitHub allows you to collaborate with colleagues on code. It’s extremely difficult to work on the same computational project otherwise. Even Dropbox is messy.

  2. You’re always working with future you. Having a single clean repo with a documented readme makes it possible to come back to a project years later and actually get something done.

  3. Other people can build off of your work, which makes science a better place.

I have a couple examples to look at here:

Some things to notice:

If there is too much raw data to include in GitHub (individual files are limited to 100 MB), my preferred strategy is to store the raw data in Amazon S3 (or the equivalent) and fetch this data as part of the processing scripts.
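As a minimal sketch of this fetch-before-processing pattern, a script might download a raw data file from a public S3 URL before analysis (the bucket name and file path below are hypothetical, not from this course):

```python
import os
from urllib.request import urlretrieve

# Hypothetical public S3 location for the raw data (not a real bucket)
RAW_DATA_URL = "https://example-project-data.s3.amazonaws.com/raw/sequences.tsv"
LOCAL_PATH = os.path.join("data", "sequences.tsv")

# Download the raw data only if it isn't already present locally
if not os.path.exists(LOCAL_PATH):
    os.makedirs("data", exist_ok=True)
    urlretrieve(RAW_DATA_URL, LOCAL_PATH)
```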

More sophisticated examples will use a workflow manager like Snakemake to automate builds. For example:

With GitHub as the lingua franca for reproducible research, there are now services built on top of this model. For example:

Project communication

As PI, I enforce a further rule:

One paper = One GitHub repo = One Slack channel

It’s much easier if all project communication goes in one place.

Further reading

Some suggested readings on reproducible research include:

Project and data organization

It’s important to keep a tidy project directory, even if a project is not yet at the stage of being versioned on GitHub.

Some general advice:

  1. Encapsulate everything within one directory, which is named after the project. Have a single directory for a project, containing all of the data, code, and results for that project. This makes it easier to find things, or to zip it all up and hand it off to someone else.
  2. Separate the data from the code. I prefer to put code and data in separate subdirectories. I’ll often have a data/ subdirectory and a scripts/ (or src/) subdirectory.
  3. Use relative paths (never absolute paths). If you encapsulate all data and code within a single project directory, then you can refer to data files with relative paths (e.g., ../data/some_file.csv). If you were to use an absolute path (like ~/Projects/SomeProject/data/some_file.csv or C:\Users\SomeOne\Projects\SomeProject\data\some_file.csv), then anyone who wanted to reproduce your results but had the project placed in some other location would have to go in and edit all of those directory/file names. See the short sketch after this list.
  4. Write dates as YYYY-MM-DD. This sorts properly and also avoids ambiguities.
  5. Include readme files. This bit of documentation greatly helps in describing what a folder contains.
  6. Keep the directory continually up to date. I aim for a clean, up-to-date directory that is continually modified, rather than a chronological directory structure (as described in this article). A separate electronic lab notebook with chronological entries can be hugely helpful for record-keeping purposes.
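As a rough illustration of the relative-path advice above, a script can locate the data/ directory relative to its own location rather than hard-coding an absolute path (this sketch assumes the script lives in scripts/ and that data/some_file.csv exists):

```python
from pathlib import Path

# Resolve the project root relative to this script (assumed to live in scripts/)
project_dir = Path(__file__).resolve().parent.parent

# Refer to data with a path relative to the project, never an absolute path
data_file = project_dir / "data" / "some_file.csv"

with open(data_file) as f:
    header = f.readline().strip()
    print(header)
```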

File names

Borrowing an excellent slide deck from Ciera Martinez and colleagues: Reproducible Science Workshop: File Naming

File organization

Continuing with the slide deck from Ciera Martinez and colleagues: Reproducible Science Workshop: Organization

Documenting your data

Ideally, your data/ directory will include an additional README that (at bare minimum) includes a data dictionary (e.g., what the rows and columns represent). Fully documented metadata (data about the data) will include:

Documenting data can be a time-consuming process, but it is often required to submit data to repositories. Since publishing data is a requirement for most academic research, keeping track of this information early on can save you time later and increase the chances of other researchers using your data (which means more citations for you).

Miscellaneous advice

More excellent advice from Karl Broman

Tidy data

Tidy data is a term from Hadley Wickham and refers to:

A standard method of displaying a multivariate set of data is in the form of a data matrix in which rows correspond to sample individuals and columns to variables, so that the entry in the ith row and jth column gives the value of the jth variate as measured or observed on the ith individual.

Data in this form is much easier to deal with programmatically. This is also known as a data frame. This tutorial presents a nice overview.

Observations as rows and variables as columns is an excellent standard to adhere to.

  1. Each variable forms a column.
  2. Each observation forms a row.
  3. Each type of observational unit forms a table.

See, for example, single-cell RNA sequencing data, with cells as rows and genes as columns. This is also the way that relational databases (MySQL, Postgres, etc.) are constructed.
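As a small illustrative sketch (the samples and values below are made up, not course data), a tidy data frame in pandas has one row per observation and one column per variable:

```python
import pandas as pd

# Hypothetical tidy data: each row is one observation (one sample),
# each column is one variable
df = pd.DataFrame({
    "sample":     ["A", "B", "C", "D"],
    "treatment":  ["drug", "drug", "control", "control"],
    "expression": [5.2, 4.8, 1.1, 0.9],
})

# Tidy data is easy to work with programmatically, e.g. group and summarize
print(df.groupby("treatment")["expression"].mean())
```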

Exercise on tidy data

  1. Demonstrate conversion of simple example dataset. Work from Table 2 in Bedford et al. 2014, available as an Excel table in the course repo.

  2. Split into small groups of 3-4 people to work from an HI (haemagglutination-inhibition) table and convert to tidy data. Data available as an Excel table in the course repo.
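As a rough sketch of the kind of wide-to-long conversion involved in this exercise (the virus and serum names below are hypothetical, not taken from the course tables), pandas can melt a wide HI matrix into tidy format with one titer measurement per row:

```python
import pandas as pd

# Hypothetical wide-format HI table: one row per virus, one column per serum
wide = pd.DataFrame({
    "virus":   ["virus_1", "virus_2"],
    "serum_1": [1280, 160],
    "serum_2": [640, 320],
})

# Melt into tidy format: each row is one virus/serum titer measurement
tidy = wide.melt(id_vars="virus", var_name="serum", value_name="titer")
print(tidy)
```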

File formats

Saving data as plain text files is necessary to process this data with either R or Python. You can export from Excel to .tsv (tab-delimited, my preferred format) or .csv (comma-delimited). A few things to note when exporting data files in these formats:
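Separately from those export caveats, a minimal sketch of reading an exported tab-delimited file in Python (the file path data/measurements.tsv here is hypothetical) might look like:

```python
import pandas as pd

# Read a tab-delimited file exported from Excel (path is hypothetical)
df = pd.read_csv("data/measurements.tsv", sep="\t")
print(df.head())
```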

Further reading

Some suggested readings include: