6.0 A Simple DAG

6.1 What is DAGMan?

Your tutorial leader will introduce you to DAGMan and DAGs. In short, DAGMan lets you submit complex sequences of jobs, as long as they can be expressed as a directed acyclic graph (DAG). For example, you may wish to run a large parameter sweep, but before the sweep runs you need to prepare your data, and after the sweep runs you need to collate the results. Assuming you want to sweep over five parameters, the workflow might look like the sketch below.
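To make the idea concrete, here is what such a DAG file could look like. The node names and submit file names (prepare.submit, sweep0.submit through sweep4.submit, collate.submit) are hypothetical, invented for this illustration only:

# Hypothetical DAG: prepare the data, run a five-way sweep, collate results
Job Prepare prepare.submit
Job Sweep0  sweep0.submit
Job Sweep1  sweep1.submit
Job Sweep2  sweep2.submit
Job Sweep3  sweep3.submit
Job Sweep4  sweep4.submit
Job Collate collate.submit
# Each sweep node waits for Prepare; Collate waits for all five sweep nodes
PARENT Prepare CHILD Sweep0 Sweep1 Sweep2 Sweep3 Sweep4
PARENT Sweep0 Sweep1 Sweep2 Sweep3 Sweep4 CHILD Collate

You will write a (much smaller) DAG file of exactly this form in a moment.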
DAGMan has many abilities, such as throttling jobs, recovering from failures, and more. More information about DAGMan can be found in the Condor manual.

6.2 Submitting a simple DAG

Make sure that your submit file has only one queue command in it, as it had when we first wrote it. We will just run vanilla universe jobs for now, though we could equally well run standard universe jobs.

Universe   = vanilla
Executable = simple
Arguments  = 4 10
Log        = simple.log
Output     = simple.out
Error      = simple.error
Queue

We are going to get a bit more sophisticated in submitting our jobs now. Have three windows open: in one window you'll submit the job, in another you will watch the queue, and in the third you will watch what DAGMan does.

To prepare, we'll create a script to help watch the queue. Name it watch_condor_q. (Where it says Ctrl-D, type the character, not the full name; this ends the input for cat.)

% cat > watch_condor_q
#! /bin/sh
while true; do
  condor_q -dag
  sleep 10
done
Ctrl-D
% chmod a+x watch_condor_q

If you like, modify watch_condor_q so it watches just your jobs, not everyone's; a sketch of such a variant appears below, just before the sample output.

Now we will create the most minimal DAG possible: a DAG with just one node.

% cat > simple.dag
Job Simple submit
Ctrl-D

In your first window, submit the job:

% rm -f simple.log simple.out
% condor_submit_dag -force simple.dag
-----------------------------------------------------------------------
File for submitting this DAG to Condor          : simple.dag.condor.sub
Log of DAGMan debugging messages                : simple.dag.dagman.out
Log of Condor library debug messages            : simple.dag.lib.out
Log of the life of condor_dagman itself         : simple.dag.dagman.log
Condor Log file for all Condor jobs of this DAG : simple.dag.dummy_log
Submitting job(s).
Logging submit event(s).
1 job(s) submitted to cluster 7.
-----------------------------------------------------------------------
% condor_reschedule    <===== Don't miss this!

In the second window, watch the queue.
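As promised, here is a minimal sketch of a watch_condor_q variant that watches only your own jobs. It assumes the standard condor_q behavior of restricting output to a given owner when passed a username (here taken from the $USER environment variable; substitute your login name if you prefer):

#! /bin/sh
# Like watch_condor_q, but restricted to the current user's jobs
while true; do
  condor_q -dag "$USER"
  sleep 10
done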
% ./watch_condor_q

-- Submitter: ws-01.gs.unina.it : <192.167.1.21:32783> : ws-01.gs.unina.it
 ID      OWNER/NODENAME      SUBMITTED     RUN_TIME ST PRI SIZE CMD
  27.0   roy                6/2  23:26   0+00:00:22 R  0   9.8  condor_dagman -f -
  28.0    |-Simple          6/2  23:26   0+00:00:00 I  0   9.8  simple 4 10

2 jobs; 1 idle, 1 running, 0 held

-- Submitter: ws-01.gs.unina.it : <192.167.1.21:32783> : ws-01.gs.unina.it
 ID      OWNER/NODENAME      SUBMITTED     RUN_TIME ST PRI SIZE CMD
  27.0   roy                6/2  23:26   0+00:00:32 R  0   9.8  condor_dagman -f -
  28.0    |-Simple          6/2  23:26   0+00:00:00 I  0   9.8  simple 4 10

2 jobs; 1 idle, 1 running, 0 held

-- Submitter: ws-01.gs.unina.it : <192.167.1.21:32783> : ws-01.gs.unina.it
 ID      OWNER/NODENAME      SUBMITTED     RUN_TIME ST PRI SIZE CMD
  27.0   roy                6/2  23:26   0+00:00:32 R  0   9.8  condor_dagman -f -
  28.0    |-Simple          6/2  23:26   0+00:00:04 R  0   9.8  simple 4 10

2 jobs; 0 idle, 2 running, 0 held

-- Submitter: ws-01.gs.unina.it : <192.167.1.21:32783> : ws-01.gs.unina.it
 ID      OWNER/NODENAME      SUBMITTED     RUN_TIME ST PRI SIZE CMD

0 jobs; 0 idle, 0 running, 0 held
Ctrl-C

In the third window, watch what DAGMan does:

% tail -f --lines=500 simple.dag.dagman.out
6/2 23:26:04 ******************************************************
6/2 23:26:04 ** condor_scheduniv_exec.27.0 (CONDOR_DAGMAN) STARTING UP
6/2 23:26:04 ** /opt/condor-6.7.19/bin/condor_dagman
6/2 23:26:04 ** $CondorVersion: 6.7.19 May 10 2006 $
6/2 23:26:04 ** $CondorPlatform: I386-LINUX_RH9 $
6/2 23:26:04 ** PID = 16317
6/2 23:26:04 ** Log last touched time unavailable (No such file or directory)
6/2 23:26:04 ******************************************************
6/2 23:26:04 Using config file: /opt/condor-6.7.19/etc/condor_config
6/2 23:26:04 Using local config files: /opt/condor-6.7.19/local.ws-01/condor_config.local
6/2 23:26:04 DaemonCore: Command Socket at <192.167.1.21:33682>
6/2 23:26:04 DAGMAN_SUBMIT_DELAY setting: 0
6/2 23:26:04 DAGMAN_MAX_SUBMIT_ATTEMPTS setting: 6
6/2 23:26:04 DAGMAN_STARTUP_CYCLE_DETECT setting: 0
6/2 23:26:04 DAGMAN_MAX_SUBMITS_PER_INTERVAL setting: 5
6/2 23:26:04 allow_events (DAGMAN_IGNORE_DUPLICATE_JOB_EXECUTION, DAGMAN_ALLOW_EVENTS) setting: 50
6/2 23:26:04 DAGMAN_RETRY_SUBMIT_FIRST setting: 1
6/2 23:26:04 DAGMAN_RETRY_NODE_FIRST setting: 0
6/2 23:26:04 DAGMAN_MAX_JOBS_IDLE setting: 0
6/2 23:26:04 DAGMAN_MAX_JOBS_SUBMITTED setting: 0
6/2 23:26:04 DAGMAN_MUNGE_NODE_NAMES setting: 1
6/2 23:26:05 DAGMAN_DELETE_OLD_LOGS setting: 1
6/2 23:26:05 argv[0] == "condor_scheduniv_exec.27.0"
6/2 23:26:05 argv[1] == "-Debug"
6/2 23:26:05 argv[2] == "3"
6/2 23:26:05 argv[3] == "-Lockfile"
6/2 23:26:05 argv[4] == "simple.dag.lock"
6/2 23:26:05 argv[5] == "-Condorlog"
6/2 23:26:05 argv[6] == "/home/users/roy/condor-test/simple.log"
6/2 23:26:05 argv[7] == "-Dag"
6/2 23:26:05 argv[8] == "simple.dag"
6/2 23:26:05 argv[9] == "-Rescue"
6/2 23:26:05 argv[10] == "simple.dag.rescue"
6/2 23:26:05 DAG Lockfile will be written to simple.dag.lock
6/2 23:26:05 DAG Input file is simple.dag
6/2 23:26:05 Rescue DAG will be written to simple.dag.rescue
6/2 23:26:05 All DAG node user log files:
6/2 23:26:05   /home/users/roy/condor-test/simple.log (Condor)
6/2 23:26:05 Parsing simple.dag ...
6/2 23:26:05 Dag contains 1 total jobs
6/2 23:26:05 Deleting any older versions of log files...
6/2 23:26:05 Bootstrapping...
6/2 23:26:05 Number of pre-completed nodes: 0
6/2 23:26:05 Registering condor_event_timer...
6/2 23:26:06 Submitting Condor Node Simple job(s)...
6/2 23:26:06 submitting: condor_submit -a dag_node_name' '=' 'Simple -a +DAGManJobId' '=' '27 -a submit_event_notes' '=' 'DAG' 'Node:' 'Simple -a +DAGParentNodeNames' '=' '"" submit
6/2 23:26:06 assigned Condor ID (28.0)
6/2 23:26:06 Just submitted 1 job this cycle...
6/2 23:26:06 Event: ULOG_SUBMIT for Condor Node Simple (28.0)
6/2 23:26:06 Number of idle job procs: 1
6/2 23:26:06 Of 1 nodes total:
6/2 23:26:06  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
6/2 23:26:06   ===     ===      ===     ===     ===        ===      ===
6/2 23:26:06     0       0        1       0       0          0        0
6/2 23:26:41 Event: ULOG_EXECUTE for Condor Node Simple (28.0)
6/2 23:26:41 Number of idle job procs: 0
6/2 23:26:46 Event: ULOG_JOB_TERMINATED for Condor Node Simple (28.0)
6/2 23:26:46 Node Simple job proc (28.0) completed successfully.
6/2 23:26:46 Node Simple job completed
6/2 23:26:46 Number of idle job procs: 0
6/2 23:26:46 Of 1 nodes total:
6/2 23:26:46  Done     Pre   Queued    Post   Ready   Un-Ready   Failed
6/2 23:26:46   ===     ===      ===     ===     ===        ===      ===
6/2 23:26:46     1       0        0       0       0          0        0
6/2 23:26:46 All jobs Completed!
6/2 23:26:46 Note: 0 total job deferrals because of -MaxJobs limit (0)
6/2 23:26:46 Note: 0 total job deferrals because of -MaxIdle limit (0)
6/2 23:26:46 Note: 0 total PRE script deferrals because of -MaxPre limit (0)
6/2 23:26:46 Note: 0 total POST script deferrals because of -MaxPost limit (0)
6/2 23:26:46 **** condor_scheduniv_exec.27.0 (condor_DAGMAN) EXITING WITH STATUS 0

Now verify your results:

% cat simple.log
000 (028.000.000) 06/02 23:26:06 Job submitted from host: <192.167.1.21:32783>
    DAG Node: Simple
...
001 (028.000.000) 06/02 23:26:40 Job executing on host: <192.167.1.21:32782>
...
005 (028.000.000) 06/02 23:26:44 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
        0  -  Run Bytes Sent By Job
        0  -  Run Bytes Received By Job
        0  -  Total Bytes Sent By Job
        0  -  Total Bytes Received By Job
...

% cat simple.out
Thinking really hard for 4 seconds...
We calculated: 20

Looking at DAGMan's various files, we see that DAGMan itself ran as a Condor job (specifically, a scheduler universe job):

% ls simple.dag.*
simple.dag.condor.sub   simple.dag.dagman.log   simple.dag.dagman.out   simple.dag.lib.out

% cat simple.dag.condor.sub
# Filename: simple.dag.condor.sub
# Generated by condor_submit_dag simple.dag
universe        = scheduler
executable      = /opt/condor-6.7.19/bin/condor_dagman
getenv          = True
output          = simple.dag.lib.out
error           = simple.dag.lib.out
log             = simple.dag.dagman.log
remove_kill_sig = SIGUSR1
on_exit_remove  = (ExitBySignal == false || ExitSignal =!= 9)
copy_to_spool   = False
arguments       = -f -l . -Debug 3 -Lockfile simple.dag.lock -Condorlog /home/users/roy/condor-test/simple.log -Dag simple.dag -Rescue simple.dag.rescue
environment     = _CONDOR_DAGMAN_LOG=simple.dag.dagman.out;_CONDOR_MAX_DAGMAN_LOG=0
queue

% cat simple.dag.dagman.log
000 (027.000.000) 06/02 23:26:04 Job submitted from host: <192.167.1.21:32783>
...
001 (027.000.000) 06/02 23:26:04 Job executing on host: <192.167.1.21:32783>
...
005 (027.000.000) 06/02 23:26:46 Job terminated.
        (1) Normal termination (return value 0)
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Run Local Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Remote Usage
                Usr 0 00:00:00, Sys 0 00:00:00  -  Total Local Usage
        0  -  Run Bytes Sent By Job
        0  -  Run Bytes Received By Job
        0  -  Total Bytes Sent By Job
        0  -  Total Bytes Received By Job
...

Clean up some of these files:

% rm simple.dag.*
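Incidentally, because DAGMan itself is an ordinary (scheduler universe) Condor job, you can manage a whole DAG with the usual Condor tools. A sketch, using the DAGMan cluster ID 27 from the session above: removing the DAGMan job should cause it to remove the node jobs it manages (note the remove_kill_sig = SIGUSR1 line in the generated submit file, which lets DAGMan clean up when it is removed).

% condor_q -dag        # find the DAGMan job's cluster ID (27 in this session)
% condor_rm 27         # removing DAGMan also removes its node jobs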
Question
Why does DAGMan run as a Condor job?