# nf-core/configs: ceci_dragon2
CÉCI Dragon2 cluster profiles provided by the GIGA Bioinformatics Team.
## Dragon2 CÉCI Cluster Configuration
This configuration provides sensible default parameters to run nf-core pipelines (and nf-core compatible pipelines) on the CÉCI Dragon2 cluster.
To use it, add `-profile ceci_dragon2` when running a pipeline. This will automatically download the `ceci_dragon2.config` file.
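For example, a typical launch command looks like this (the pipeline name, input sheet, and output directory are illustrative placeholders, not part of this configuration):

```bash
# Run an nf-core pipeline with the CÉCI Dragon2 profile.
# nf-core/rnaseq, samplesheet.csv and results/ are placeholders.
nextflow run nf-core/rnaseq \
    -profile ceci_dragon2 \
    --input samplesheet.csv \
    --outdir results
```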
The configuration sets `slurm` as the default executor and enables `singularity` as the container runtime.
## Loading the required modules
Before running a pipeline, one should load `nextflow` using the environment module system. This can be achieved with:
```bash
# Load modules
module load 'Nextflow/24.04.2'
```
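The exact version available may differ between software stacks. Assuming the cluster uses Lmod (as the CÉCI clusters do), the available releases can be listed and the loaded one verified with:

```bash
# List the Nextflow versions provided by the module system (Lmod).
module spider Nextflow

# Confirm which version is now on the PATH.
nextflow -version
```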
Built with ❤️ by the GIGA Bioinformatics Team
## Config file

```nextflow
/*
* CÉCI DRAGON2 CLUSTER
* ====================
* This configuration file provides sensible defaults to run Nextflow pipelines
* on the CÉCI Dragon2 cluster.
*
* For more information on the CÉCI Dragon2 cluster, refer to this page of the
* wiki:
* https://www.ceci-hpc.be/clusters.html#dragon2
*/
params {
config_profile_name = 'CÉCI'
config_profile_description = 'CÉCI Dragon2 cluster profiles provided by the GIGA Bioinformatics Team.'
config_profile_contact = 'Martin Grignard (@MartinGrignard)'
config_profile_url = 'https://www.ceci-hpc.be/clusters.html#dragon2'
}
/*
* Resource limitations
* --------------------
* These resource limits are the maximum values across all nodes of all
* queues. At least one node in each of the available queues matches these
* maximum resources.
*
* For more information on the available nodes, use the `sinfo` command.
*/
params {
max_cpus = 32
max_memory = 384.GB
max_time = 21.days
}
/*
* Singularity configuration
* -------------------------
* Singularity is used to run containerised tools.
*/
singularity {
autoMounts = true
cacheDir = "${HOME}/.cache/singularity"
enabled = true
pullTimeout = 3.hours
}
/*
* Slurm configuration
* -------------------
* Slurm is used as the workload manager. This configuration makes sure to
* share the available resources fairly.
*
* For more information on how to use Slurm on the CÉCI clusters, refer to this
* page of the wiki:
* https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html
*/
executor {
name = 'slurm'
queueSize = 200
pollInterval = 10.s
}
/*
* Process configuration
* ---------------------
* Several queues are available on the cluster, depending on the required
* resources. This configuration makes sure to submit each task to the most
* relevant queue.
*
* For more information on the available queues, refer to this page of the
* wiki:
* https://www.ceci-hpc.be/clusters.html#dragon2
*/
process {
queue = {
task.accelerator != null ? 'gpu' :
task.time <= 30.minutes ? 'debug' :
task.time <= 5.days ? 'batch' :
'long'
}
clusterOptions = {
task.accelerator ? "--gres=gpu:${task.accelerator.request}" : ''
}
resourceLimits = [
cpus : 32,
memory: 384.GB,
time : 21.days,
]
stageInMode = 'symlink'
stageOutMode = 'rsync'
}
```
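The queue selection above is driven entirely by each process's `time` and `accelerator` directives. As a sketch of how tasks end up on each queue, a custom configuration passed with `-c` could set per-process resources like this (the process names and values are illustrative only, and the `gpu` gres name is an assumption to verify with `sinfo`):

```nextflow
// Hypothetical per-process overrides (e.g. overrides.config, passed with -c).
process {
    // time <= 30.minutes -> routed to the 'debug' queue.
    withName: 'FASTQC' {
        time = 20.minutes
    }

    // time <= 5.days -> routed to the 'batch' queue.
    withName: 'ALIGN' {
        cpus   = 16
        memory = 64.GB
        time   = 2.days
    }

    // accelerator request -> routed to the 'gpu' queue, with --gres=gpu:1
    // added to the Slurm submission.
    withName: 'BASECALL' {
        accelerator = 1
    }
}
```

Such a file is combined with the profile at launch time, e.g. `nextflow run <pipeline> -profile ceci_dragon2 -c overrides.config`.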