nf-core

nf-core is a collection of analysis pipelines built using Nextflow.

For more information, see the nf-core website: https://nf-co.re.

Supported pipelines

  • airrflow: B-cell and T-cell Adaptive Immune Receptor Repertoire (AIRR) sequencing analysis pipeline.
  • ampliseq: Bioinformatics analysis pipeline used for amplicon sequencing.
  • atacseq: Bioinformatics analysis pipeline used for ATAC-seq data.
  • bacass: Bioinformatics best-practice analysis pipeline for simple bacterial assembly and annotation.
  • bamtofastq: Workflow designed to convert one or multiple BAM/CRAM files into FastQ format.
  • chipseq: Bioinformatics analysis pipeline used for Chromatin ImmunoPrecipitation sequencing (ChIP-seq) data.
  • circdna: Bioinformatics best-practice analysis pipeline for the identification of extrachromosomal circular DNAs (ecDNAs) in eukaryotic cells.
  • crisprseq: Bioinformatics best-practice analysis pipeline for the analysis of CRISPR-edited next-generation sequencing (NGS) data. It allows the evaluation of the quality of gene editing experiments using targeted NGS data.
  • cutandrun: Best-practice bioinformatics analysis pipeline for the CUT&RUN and CUT&Tag experimental protocols, which were developed to study protein-DNA interactions and epigenomic profiling.
  • differentialabundance: Bioinformatics pipeline for analyzing data represented as matrices, comparing groups of observations to generate differential statistics and downstream analyses. The initial feature set is built around RNA-seq, but rapid expansion to other platforms is anticipated.
  • epitopeprediction: Bioinformatics best-practice analysis pipeline for epitope prediction and annotation.
  • fetchngs: Bioinformatics best-practice analysis pipeline to fetch metadata and raw FastQ files from both public and private databases. At present, the pipeline supports SRA / ENA / DDBJ / GEO / Synapse ids.
  • funcscan: Bioinformatics pipeline for efficient and parallelised screening of long nucleotide sequences such as contigs for antimicrobial peptide genes, antimicrobial resistance genes, and biosynthetic gene clusters.
  • hgtseq: Bioinformatics best-practice analysis pipeline built to investigate horizontal gene transfer from NGS data.
  • hic: Bioinformatics best-practice analysis pipeline for the analysis of Chromosome Conformation Capture (Hi-C) data.
  • hicar: Bioinformatics best-practice analysis pipeline for HiC on Accessible Regulatory DNA (HiCAR) data, a robust and sensitive assay for simultaneous measurement of chromatin accessibility and cis-regulatory chromatin contacts.
  • hlatyping: Bioinformatics best-practice analysis pipeline for precision HLA typing from next-generation sequencing data.
  • isoseq: Bioinformatics best-practice analysis pipeline for Iso-Seq gene annotation with uLTRA and TAMA.
  • mag: Bioinformatics best-practice analysis pipeline for assembly, binning and annotation of metagenomes.
  • methylseq: Bioinformatics analysis pipeline used for methylation (bisulfite) sequencing data. It pre-processes raw data from FastQ inputs, aligns the reads and performs extensive quality control on the results.
  • mhcquant: Bioinformatics analysis pipeline used for quantitative processing of data-dependent acquisition (DDA) peptidomics data.
  • nanostring: Bioinformatics pipeline that can be used to analyze NanoString data. The analysis steps include quality control and data normalization.
  • nascent: Bioinformatics best-practice analysis pipeline for nascent transcript (NT) and Transcription Start Site (TSS) assays.
  • pangenome: Bioinformatics best-practice analysis pipeline for pangenome graph construction. The pipeline renders a collection of sequences into a pangenome graph.
  • phyloplace: Bioinformatics best-practice analysis pipeline that performs phylogenetic placement with EPA-NG.
  • proteinfold: Bioinformatics best-practice analysis pipeline for protein 3D structure prediction.
  • quantms: Bioinformatics best-practice analysis pipeline for quantitative mass spectrometry (MS). Currently, the workflow supports three major MS-based analytical methods: (i) data-dependent acquisition (DDA) label-free quantitation, (ii) DDA isobaric quantitation (e.g. TMT, iTRAQ), and (iii) data-independent acquisition (DIA) label-free quantification.
  • rnafusion: Bioinformatics best-practice RNA sequencing analysis pipeline with a curated list of tools for detecting and visualizing fusion genes.
  • rnaseq: Bioinformatics pipeline that can be used to analyze RNA sequencing data obtained from organisms with a reference genome and annotation.
  • sarek: Workflow designed to detect variants on whole genome or targeted sequencing data. Initially designed for human and mouse, it can work on any species with a reference genome. Sarek can also handle tumour/normal pairs and can include additional relapses.
  • scrnaseq: Bioinformatics best-practice analysis pipeline for processing 10x Genomics single-cell RNA-seq data.
  • smrnaseq: Bioinformatics best-practice analysis pipeline for small RNA-seq.
  • taxprofiler: Analysis pipeline for taxonomic classification and profiling of shotgun metagenomic data. It allows in-parallel taxonomic identification of reads or taxonomic abundance estimation with multiple classification and profiling tools against multiple databases, and produces standardised output tables.

Batch mode

Nextflow has built-in support for Conda that allows the configuration of workflow dependencies using Conda recipes and environment files.

In batch mode, all pipelines are run with the -profile conda option by default. In addition, the necessary Conda environments for each pipeline are pre-installed in the app container, and the Nextflow Conda cache directory is set to /home/ucloud/.cache/nextflow.

Note

On UCloud it is not possible to use the singularity or docker profiles. However, most of the pipelines support Conda/Mamba.
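
A batch-mode run corresponds roughly to the invocation below. This is only a sketch for reference: the app configures the Conda cache directory and adds the -profile conda option for you, and the pipeline name, release, and options are placeholders.

$ export NXF_CONDA_CACHEDIR=/home/ucloud/.cache/nextflow
$ nextflow run nf-core/<pipeline> -r <release> -profile conda <options>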

Interactive mode

The Interactive mode parameter is used to start an interactive job session where the user can open a terminal interface from the job progress page and execute shell commands.

Pipelines are executed as follows:

$ nextflow run nf-core/<pipeline> -r <release> <options>
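
For example, a hypothetical run of the rnaseq pipeline could look like the following; the release number, sample sheet, genome, and output directory are purely illustrative and must be adapted to your data:

$ nextflow run nf-core/rnaseq -r 3.12.0 -profile conda --input samplesheet.csv --outdir results --genome GRCh38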

Pipeline testing

In interactive mode you can test any nf-core pipeline with the following commands:

$ nextflow run nf-core/<pipeline_name> -r <release> -profile test,conda

or

$ nextflow run nf-core/<pipeline_name> -r <release> -profile test,mamba
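
For instance, a hypothetical test run of the sarek pipeline could look like the command below. The release number and output directory are illustrative; note that recent nf-core releases generally require --outdir even with the test profile:

$ nextflow run nf-core/sarek -r 3.2.3 -profile test,conda --outdir test_results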