The preprocessing and transformation of input data, also known as “data wrangling,” is an essential step in the data analysis pipeline. A variety of challenges must be overcome when preparing data for analysis, including handling missing data, integrating diverse data types from multiple sources, and ensuring that both individual data fields and the larger organization of the data are in the correct format for downstream analysis.
On this page, we provide an overview of resources for learning how to wrangle data, software for data wrangling, and tools developed at Fred Hutch. While this is not an exhaustive list, we have highlighted many of the most commonly used and readily accessible resources for data scientists.
Beyond wrangling/cleaning your data, the practice of “data tidying” ensures that your datasets have a consistent structure and are easy to manipulate, model, and visualize. Tidy datasets list individual observations as rows and variables as columns. We highly recommend including data tidying as a key step in your data wrangling process!
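As a minimal sketch of what tidying looks like in practice, here is a small table reshaped from "wide" to tidy "long" form. This uses Python's pandas library as one option; the `sample`/`day` columns are invented purely for illustration.

```python
import pandas as pd

# Untidy ("wide") data: one row per sample, one column per measurement day
wide = pd.DataFrame({
    "sample": ["A", "B"],
    "day1": [5.1, 4.8],
    "day2": [5.4, 5.0],
})

# Tidy ("long") data: one row per observation, one column per variable
tidy = wide.melt(id_vars="sample", var_name="day", value_name="measurement")
```

Each row of `tidy` now records a single observation (one sample on one day), which makes downstream filtering, grouping, and plotting much more uniform.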
Although base R offers some basic functions for data wrangling, there are a variety of fast, intuitive packages available in the R ecosystem for cleaning, transforming, and reshaping data.
- `tidyverse` is a comprehensive collection of R packages for data science; two relevant packages for data wrangling are `dplyr` for data manipulation and `tidyr` for data tidying.
  - `tidyr` makes it easy to reshape tabular data into different data structures.
  - `dplyr` works on tidy data and makes it easier to perform operations like filtering, selecting, and summarizing.
- `lubridate` helps to process and parse date-time variables from a variety of formats and time zones.
- `stringr` offers functions for fast, simple manipulations of character-based strings.
- `Bioconductor` is a specialized set of R packages designed for the analysis of genomic data and other high-throughput biological data.
- `duckdb` is a fast in-process SQL database for querying and wrangling very large datasets in a wide variety of formats.
- `data.table` is a high-performance version of base R's `data.frame` for efficient manipulation of large datasets. It works by loading data into memory for fast operations.

Python is also widely used for data wrangling, particularly for handling complex and large-scale biomedical datasets. Several libraries in Python simplify the process of cleaning, transforming, and analyzing data.
- `pandas`: a powerful, easy-to-use, open-source data analysis and manipulation tool.
- `numpy`: a fast and versatile package for handling numerical and vectorized data, including N-dimensional arrays (i.e., `ndarray`s).
- `biopython`: freely available Python-based tools for biological computation, including data wrangling and analysis for genomic and proteomic data.
- `scikit-bio`: a library for working with bioinformatics/biological data, including genomics, microbiomics, ecology, evolutionary biology, and more.
- `duckdb`: in addition to R, `duckdb` can also be used in Python.
- `dask`: a flexible, open-source Python library for parallel computing on large datasets.

The FH-Data Slack is always available as a space for researchers to ask questions and share resources about data wrangling.
Join the `#question-answer` channel on the FH-Data Slack to ask questions, share resources, and discuss strategies for managing complex biomedical data.

Books and online tutorials can provide in-depth coverage of data wrangling techniques, offering a solid foundation for both novice and advanced biomedical data scientists.