playmolecule.apps package#
Module contents#
- class playmolecule.apps.ExecutableDirectory(dirname, runsh='run.sh', _execution_resources=None)#
Bases: object
Executable directory class.
Every app function creates a folder and returns an ExecutableDirectory object for it. This is a self-contained directory containing all of the app's input files, and it can be executed either locally or on a cluster. If it is not executed locally, make sure the directory is accessible from all machines in the cluster (i.e. it is located on a shared filesystem).
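A minimal sketch of the typical workflow, using the proteinprepare app from the examples below (the app name and its arguments are illustrative, not part of this class's API):
>>> ed = proteinprepare(outdir="test", pdbid="3ptb")  # the app call creates the directory
>>> ed.run()  # execute it locally
>>> print(ed.status)  # check how the job is doing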
- pmws(name=None, group=None, child_of=None, queue_config=None, pm_options=None, description=None, _logger=True)#
Submits the job to the PlayMolecule backend server.
- Parameters:
child_of (str) – The id of another job. If provided, the new job will be submitted as a child of that job.
queue_config (dict) – A dictionary of key-value pairs for detailed configuration of this job on the queueing system. You can specify “cpus”, “memory” and “priority” for the job.
_logger (bool) – Set to False to reduce verbosity
Examples
>>> ed = proteinprepare(outdir="test", pdbid="3ptb")
>>> ed.pmws(queue_config={"ncpu": 2, "memory": 4000, "priority": 500})
- run(queue=None, **kwargs)#
Execute the directory, either locally or on a queueing system.
If no queue is specified, the job runs locally.
Examples
>>> ed = proteinprepare(outdir="test", pdbid="3ptb")
>>> ed.run()
Specifying a queue
>>> ed.run(queue="slurm", partition="normalCPU", ncpu=3, ngpu=1)
Alternative syntax for the above
>>> ed.slurm(partition="normalCPU", ncpu=3, ngpu=1)
- slurm(**kwargs)#
Submit simulations to a SLURM cluster
- Parameters:
partition (str or list of str) – The queue (partition) or list of queues to run on. If list, the one offering earliest initiation will be used.
jobname (str) – Job name (identifier)
priority (str) – Job priority
ncpu (int) – Number of CPUs to use for a single job
ngpu (int) – Number of GPUs to use for a single job
memory (int) – Amount of memory per job (MiB)
gpumemory (int) – Only run on GPUs with at least this much memory. Requires a special SLURM setup; check how to define gpu_mem on SLURM.
walltime (int) – Job timeout (s)
mailtype (str) – When to send emails. Separate options with commas like ‘END,FAIL’.
mailuser (str) – User email address.
outputstream (str) – Output stream.
errorstream (str) – Error stream.
nodelist (list) – A list of nodes on which to run every job simultaneously. Careful: the jobs will be duplicated on each node!
exclude (list) – A list of nodes on which not to run the jobs. Use this to restrict the nodes on which the jobs are allowed to run.
envvars (str) – Environment variables to propagate from the submission node to the running node (comma-separated)
prerun (list) – Shell commands to execute on the running node before the job (e.g. loading modules)
Examples
>>> ed = proteinprepare(outdir="test", pdbid="3ptb")
>>> ed.slurm(partition="normalCPU", ncpu=1, ngpu=0)
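A sketch combining several of the options above; the partition names, email address and module name are placeholders rather than defaults:
>>> ed.slurm(
...     partition=["normalCPU", "fastCPU"],  # the earliest-starting partition is used
...     ncpu=4,
...     memory=8000,  # MiB
...     walltime=86400,  # 24 hours, in seconds
...     mailtype="END,FAIL",
...     mailuser="user@example.com",  # placeholder address
...     prerun=["module load cuda"],  # illustrative module name
... )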
- property status#
Returns the current status of the ExecutableDirectory
Examples
>>> ed = proteinprepare(outdir="test", pdbid="3ptb")
>>> ed.slurm(ncpu=1, ngpu=0)
>>> print(ed.status)
- class playmolecule.apps.JobStatus(value)#
Bases: IntEnum
Job status codes describing the current status of a job
WAITING_INFO : Waiting for status from the job. The job has not yet started computation.
RUNNING : Job is currently running
COMPLETED : Job has successfully completed
ERROR : Job has exited with an error
- COMPLETED = 2#
- ERROR = 3#
- RUNNING = 1#
- WAITING_INFO = 0#
- describe()#
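A minimal polling sketch, assuming that ExecutableDirectory.status returns one of these JobStatus values (as the status property above suggests):
>>> import time
>>> while ed.status in (JobStatus.WAITING_INFO, JobStatus.RUNNING):
...     time.sleep(30)  # poll every 30 seconds
>>> if ed.status == JobStatus.ERROR:
...     print("Job exited with an error")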
- playmolecule.apps.slurm_mps(exec_dirs, **kwargs)#
Submit a list of ExecutableDirectories to SLURM as a single MPS (NVIDIA Multi-Process Service) job.
This means that all submitted jobs will be executed on the same GPU.
- Parameters:
exec_dirs (list[ExecutableDirectory]) – An iterable of ExecutableDirectory objects
partition (str or list of str) – The queue (partition) or list of queues to run on. If list, the one offering earliest initiation will be used.
jobname (str) – Job name (identifier)
priority (str) – Job priority
ncpu (int) – Number of CPUs to use for a single job
ngpu (int) – Number of GPUs to use for a single job
memory (int) – Amount of memory per job (MiB)
gpumemory (int) – Only run on GPUs with at least this much memory. Requires a special SLURM setup; check how to define gpu_mem on SLURM.
walltime (int) – Job timeout (s)
mailtype (str) – When to send emails. Separate options with commas like ‘END,FAIL’.
mailuser (str) – User email address.
outputstream (str) – Output stream.
errorstream (str) – Error stream.
nodelist (list) – A list of nodes on which to run every job simultaneously. Careful: the jobs will be duplicated on each node!
exclude (list) – A list of nodes on which not to run the jobs. Use this to restrict the nodes on which the jobs are allowed to run.
envvars (str) – Environment variables to propagate from the submission node to the running node (comma-separated)
prerun (list) – Shell commands to execute on the running node before the job (e.g. loading modules)
Examples
>>> ed1 = kdeep(outdir="test1", pdb=apps.kdeep.files["tests/10gs_protein.pdb"], sdf=apps.kdeep.files["tests/10gs_ligand.sdf"], modelfile=kdeep.datasets.default)
>>> ed2 = kdeep(outdir="test2", dataset=apps.kdeep.files["tests/dataset.zip"], modelfile=kdeep.datasets.default)
>>> slurm_mps([ed1, ed2], partition="normalGPU", ncpu=1, ngpu=1)
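After submission, each directory's status can still be polled individually, e.g. as in the JobStatus sketch above:
>>> print(ed1.status, ed2.status)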