Nextflow

Nextflow is a scientific workflow system predominantly used for bioinformatic data analyses. It imposes standards on how to programmatically author a sequence of dependent compute steps and enables their execution on various local and cloud resources.

Purpose
Many scientific data analyses require a significant number of sequential processing steps. Custom scripts may suffice when developing new methods or when a particular analysis is run only infrequently, but they scale poorly to complex successions of tasks or to large numbers of samples.

Scientific workflow systems like Nextflow allow formalizing an analysis as a data analysis pipeline. Pipelines, also known as workflows, are instructions that specify the order and conditions of the computing steps to be performed. They are carried out by special-purpose programs, so-called workflow executors, which ensure predictable and reproducible behavior in various compute environments.

Workflow systems also provide built-in solutions to common challenges of workflow development, such as the application to multiple samples, the validation of input and intermediate results, conditional execution of steps, error handling, and report generation. Advanced features of workflow systems may also include scheduling capabilities, graphical user interfaces for monitoring workflow executions, and the management of dependencies by containerizing the whole workflow or its components.

Typically, scientific workflow systems initially present a steep learning curve, as all their features and complexities are added on top of the actual analysis. However, the standards and abstraction imposed by workflow systems ultimately improve the traceability of analysis steps, which is particularly relevant when collaborating on pipeline development, as is customary in scientific settings.

Specification of workflows
In Nextflow, pipelines are constructed from individual processes that correspond to computational tasks. Each process is set up with input requirements and output declarations. Rather than running in a fixed succession, the execution of a process commences when all its input requirements are met. By specifying the output of one process as the input of another step, a logical and sequential connection between processes is created.

This reactive implementation of processes is a characteristic design pattern of Nextflow, also known as the functional dataflow model.

Processes and whole workflows are programmed in a domain-specific language (DSL) that is provided by Nextflow and based on Apache Groovy. While Nextflow's DSL is used to declare the workflow logic, developers can use their scripting language of choice within a process and mix multiple languages in a workflow. Porting existing scripts and workflows to Nextflow is therefore possible. Supported scripting languages include Bash, csh, ksh, Python, Ruby, and R; in fact, any scripting language that uses the standard Unix shebang declaration is supported.
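For illustration, a hypothetical process embedding a Python script via a shebang line might look as follows (the process and variable names are assumptions for this sketch, not taken from an existing pipeline):

```nextflow
// Sketch: the script block starts with a Python shebang, so Nextflow
// hands the block to that interpreter instead of the default shell.
process countLines {
    input:
    path sample          // a file staged into the task's working directory

    output:
    stdout               // emit whatever the script prints

    script:
    """
    #!/usr/bin/env python
    with open("${sample}") as handle:
        print(len(handle.readlines()))
    """
}
```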

An exemplary workflow consisting of only one process is shown below:
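```nextflow
#!/usr/bin/env nextflow

// A single process with no inputs and one declared output
// (its standard output). The command itself is ordinary Bash.
process sayHello {
    output:
    stdout

    script:
    """
    echo 'Hello from Nextflow!'
    """
}

// The workflow block wires processes together; here it simply
// runs the process and prints the content of its output channel.
workflow {
    sayHello | view
}
```

This sketch uses Nextflow's DSL2 syntax; the process name and echoed text are illustrative.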

To facilitate straightforward collaboration on workflows, Nextflow has native support for source-code management systems and DevOps platforms, including GitHub, GitLab, and others.

Execution of workflows
Workflows written in Nextflow's DSL can be deployed and run across diverse computing environments without modifications to the pipeline code.

To enable portability, Nextflow ships with dedicated executors for a variety of platforms including those of major cloud providers. Because Nextflow decouples individual process steps, it can optionally be configured to spread execution across multiple computing platforms. It supports the following environments for pipeline execution:


 * Local – the default executor. Nextflow pipelines run on Linux or macOS, and execution occurs on the computer where the pipeline is launched.
 * HPC workload managers – Slurm, SGE, LSF, Moab, PBS Pro, PBS/Torque, HTCondor, NQSII, OAR
 * Kubernetes – local or cloud-based Kubernetes implementations (GKE, EKS, or AKS)
 * Cloud batch services – AWS Batch, Azure Batch
 * Other environments – Apache Ignite, Google Life Sciences
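As a sketch of this portability, switching a pipeline from local execution to an HPC cluster typically requires only a configuration change, not a change to the pipeline code. The executor, queue, and resource values below are illustrative assumptions:

```nextflow
// nextflow.config -- hypothetical settings for running on a Slurm cluster
process {
    executor = 'slurm'      // submit each task as a Slurm job
    queue    = 'standard'   // illustrative queue/partition name
    cpus     = 4
    memory   = '8 GB'
    time     = '2h'
}
```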

Containers for portability across computing environments
A fundamental concept of Nextflow is its tight integration with software containers. Whole workflows and, in later versions, also single processes can harness containers to allow their execution across various compute environments without tedious installation and configuration routines.

This design choice was strongly influenced by Solomon Hykes's talk at dotScale in 2013, which had a significant impact on Nextflow's principal developer, Paolo Di Tommaso.

Container frameworks supported by Nextflow include Docker, Singularity, Charliecloud, Podman, and Shifter. Containers of these types can be utilized in a workflow and are automatically retrieved from external repositories when the pipeline is executed. At the Nextflow Summit 2022, it was announced that future versions of Nextflow will support a dedicated container provisioning service for improved integration of customized containers into workflows.
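As a sketch of this integration, a container image can be attached to an individual process, while the chosen container engine is enabled in the configuration. The process name, image, and command below are illustrative assumptions:

```nextflow
// Sketch: associate a process with a container image; when the pipeline
// runs, Nextflow pulls the image and executes the task inside it.
process alignReads {
    container 'quay.io/biocontainers/bwa:0.7.17--hed695b0_7'  // illustrative image

    input:
    path reads

    output:
    path 'aligned.sam'

    script:
    """
    bwa mem reference.fa ${reads} > aligned.sam
    """
}

// In nextflow.config, the engine would be enabled with, e.g.:
//   docker.enabled = true
```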

Developmental history
Nextflow was originally developed at the Centre for Genomic Regulation in Spain and released as an open-source project on GitHub in July 2013. In October 2018, the project license for Nextflow was changed from GPLv3 to Apache 2.0.

In July 2018, Seqera Labs was launched as a spin-off from the Centre for Genomic Regulation. The company employs many of Nextflow's core developers and maintainers and provides commercial services and consulting with a focus on Nextflow.

In July 2020, a major extension and revision of Nextflow's domain-specific language was introduced that allows for sub-workflows, among other improvements. In the same year, Nextflow was downloaded approximately 55,000 times per month.

The nf-core community
In addition to the Centre for Genomic Regulation, other sequencing facilities have adopted Nextflow as their preferred scientific workflow system, among them the Quantitative Biology Center in Tübingen, the Francis Crick Institute, the A*STAR Genome Institute of Singapore, and the Swedish National Genomics Infrastructure.

Efforts to share, harmonize, and curate the bioinformatic pipelines used by those facilities eventually turned into the nf-core project. Spearheaded by Phil Ewels from the Swedish National Genomics Infrastructure, the nf-core project focuses on ensuring that pipelines are reproducible and portable across different hardware, operating systems, and software versions. In July 2020, Nextflow and nf-core were awarded a grant from the Chan Zuckerberg Initiative, recognizing their role as vital open-source software.

As of 2022, the nf-core organization hosts 73 Nextflow pipelines for the biosciences and more than 700 process modules. Uniting more than 500 developers and scientists, it is the largest collaborative effort and community to develop bioinformatic data analysis pipelines.

By domain and research subject
Nextflow is preferred in sequencing data processing and genomic data analysis. Over the last five years, numerous pipelines for many different applications and analyses in the field of genomics have been published.

A notable use case in this regard was pathogen surveillance during the COVID-19 pandemic. Monitoring the emergence of new virus variants and retracing their global spread required swift and highly automated, yet accurate, processing of raw data, variant analysis, and lineage designation, which was enabled by pipelines written in Nextflow.

Nextflow also plays an important role for the non-profit plasmid repository Addgene, which uses it to corroborate the integrity of all deposited plasmids.

Apart from genomics, Nextflow is gaining popularity in other domains of biomedical data processing that also require the application of complex workflows to large amounts of primary data: drug screening, diffusion magnetic resonance imaging (dMRI) in radiology, and mass spectrometry data processing, the latter with a particular focus on proteomics.