Jason L Causey

Imperfect Options for Sharing Big Research Data

I seem to keep running into the problem of maintaining large datasets related to research projects. I’m not alone: this is a requirement for research in many domains, not just *informatics and *omics. One pain point that we feel more acutely in the “Bio-[insert-suffix-here]” fields is controlled-access datasets. Because of (mostly good) laws like HIPAA,1 we often have to be very careful about how we store and share our data. I suppose that even in fields without these restrictions, maintaining secrecy is probably attractive, at least prior to an initial publication.

From my standpoint, a “perfect” file container for research would meet all of the requirements below:

  1. Versioning: We would like to be able to track changes made to a dataset. Ideally, we could “clone” a dataset at any point in its history, or “revert” to a previous version.
  2. Access Control: We need to be able to limit who has access to a dataset, including distinguishing rights to view from rights to edit. Key features would include the ability to add new viewers/editors over time and to revoke access.
  3. Sharing / Transport: Research teams are rarely all in the same room. There should be an efficient-as-possible means of sharing the research dataset with others. This sharing mechanism should respect the access control requirements for the dataset. A “wishlist” feature here would be “partial” or “on-demand” access to certain parts of a large dataset without requiring space to store the whole thing.

As far as I can tell, there is no current file or container format that meets all of these requirements (it may not even be possible to fulfill all the “wishes”). Still, there are some options that come close, and some that do parts very well:

Git-LFS

Git seems to be the most popular source-code versioning system, at least amongst the open-source community. The Git-LFS (Git Large File Storage) extension allows “large” binary files to be stored using the same tools that developers use to version source code (and other text-based content). Git-LFS handles the versioning problem nicely and the sharing/transport problem pretty well; it cannot help with access control on its own. (You could encrypt the files within the repository to provide access control, as sketched below.)
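Encrypting before committing is easy to script. Below is a minimal sketch using Python and the `cryptography` package; the file names are placeholders, and I am assuming whole files fit in memory (truly large files would need chunked encryption). Note that access control then reduces to key distribution, and revocation is weak: anyone who already holds both the key and an old ciphertext keeps access to that version.

```python
# Minimal sketch: encrypt a data file before adding it to a Git-LFS repo.
# Assumes `pip install cryptography`; all paths here are hypothetical.
from pathlib import Path

from cryptography.fernet import Fernet


def encrypt_for_repo(src: Path, dst: Path, key: bytes) -> None:
    """Encrypt `src` and write the ciphertext to `dst` (inside the repo)."""
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))  # whole file in memory


def decrypt_from_repo(src: Path, dst: Path, key: bytes) -> None:
    """Recover the plaintext; only collaborators holding `key` can do this."""
    dst.write_bytes(Fernet(key).decrypt(src.read_bytes()))


if __name__ == "__main__":
    key = Fernet.generate_key()  # share out-of-band with authorized collaborators
    encrypt_for_repo(Path("data.csv"), Path("repo/data.csv.enc"), key)
```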

Dat

I’ve been following the Dat project intently for a couple of years now. A “Dat” archive is basically a directory with some “magic” (hidden configuration) added, much like the way Git works. The project’s early focus was on boosting availability of open scientific datasets, although it seems to be shifting toward peer-to-peer web publishing now. In theory, a Dat archive allows files within the folder to be version-controlled through an append-only log,2 giving you a cryptographically verifiable audit trail of changes. In principle you can look back at historical versions, although at the time of this writing that feature is either not implemented yet or entirely undocumented. Dats are shared over a peer-to-peer protocol, which (potentially) improves transport and availability.
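To make the audit-trail idea concrete, here is a toy hash chain in Python. This is emphatically not Dat’s actual on-disk format (the Dat paper2 describes the real thing); it only illustrates the principle that each log entry commits to the hash of the previous one, so rewriting any historical entry invalidates everything after it.

```python
# Toy append-only log with hash chaining (illustrative only, not Dat's format).
import hashlib
import json


def append_entry(log: list, payload: str) -> None:
    """Add an entry whose hash covers its payload and the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    log.append({"payload": payload, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})


def verify(log: list) -> bool:
    """Recompute the chain; any tampering breaks every subsequent hash."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"payload": entry["payload"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True


log = []
append_entry(log, "add raw/reads.fastq")
append_entry(log, "update metadata.json")
assert verify(log)
```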

Dropbox

Well, this one is ubiquitous. I’ll also lump Box in here, as the two are pretty similar. Dropbox creates a special directory on your machine that is automatically backed up to their servers. On top of this, they allow easy sharing across machines (with one account), with others who have Dropbox logins, or even with anyone holding a special Dropbox link. The security model is better for the “has a Dropbox account” scenario, because you can assign read/edit access on a per-user basis and revoke it. The links are a little scarier, since having the link grants access with no other safeguards, but at least it is easy to revoke a link (of course, if someone unauthorized has already downloaded the files, it is too late). This option is super easy to use and very dependable, but it requires you to trust a third-party company. Available space may also be a limitation if your datasets are very large, not to mention transfer time.

BitTorrent

BitTorrent is a well-known, established protocol/service for sharing files over peer-to-peer connections. It is primarily a file-oriented protocol, so you would need to place your dataset in a container (a tarball would work) before sharing. There is essentially no security beyond discoverability (if someone can find your torrent ID, they can download your file), so you have to provide access control at the file-container level. But for distributing a resource that many end users need at about the same time, it is hard to beat.
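Packaging is the easy part; a sketch using Python’s standard tarfile module is below (the directory and file names are placeholders). You would then build and seed a torrent from the resulting archive, and if you need access control, encrypt the tarball first, as discussed under Git-LFS.

```python
# Sketch: pack a dataset directory into one tarball for torrent distribution.
# The paths are hypothetical.
import tarfile
from pathlib import Path


def make_container(dataset_dir: str, out_path: str) -> None:
    """Create a gzip-compressed tarball containing `dataset_dir`."""
    with tarfile.open(out_path, mode="w:gz") as tar:
        # arcname roots the archive at the directory name, not its full path
        tar.add(dataset_dir, arcname=Path(dataset_dir).name)


make_container("my_dataset", "my_dataset.tar.gz")
```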

Syncing Services

Here I include tools like BitTorrent Sync (now Resilio Sync) and Syncthing. These share directories across machines using peer-to-peer technology.

Direct Download

Here I will lump together all of the methods for downloading a file directly from a server: HTTP, FTP/SFTP, Rsync, and so on. This is currently how a large amount of research data is shared online. It is challenging to set up and maintain, but the methods are well known and time-tested. Security ranges from non-existent (HTTP/FTP) to as strong as you have the desire (and skill) to make it.
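One habit worth adopting here, whatever the transfer method: publish a checksum alongside the file so recipients can verify their copy. A minimal consumer-side sketch in Python follows; the URL and digest are placeholders.

```python
# Sketch: download over HTTP(S) and verify a published SHA-256 checksum.
# The URL and expected digest below are placeholders.
import hashlib
import urllib.request


def fetch_and_verify(url: str, expected_sha256: str, out_path: str) -> None:
    """Stream the download to disk while hashing, then compare digests."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as resp, open(out_path, "wb") as out:
        while chunk := resp.read(1 << 20):  # 1 MiB at a time
            digest.update(chunk)
            out.write(chunk)
    if digest.hexdigest() != expected_sha256.lower():
        raise ValueError("checksum mismatch: corrupted or tampered download")


# fetch_and_verify("https://example.org/dataset.tar.gz",
#                  "<published sha256 digest>", "dataset.tar.gz")
```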

[Edited 2018-06-15 to fix “Git-LFS” versioning bullet point.]


  1. https://www.hhs.gov/hipaa/index.html ↩︎

  2. https://github.com/datproject/docs/blob/master/papers/dat-paper.md ↩︎