Lumerical MODE, DEVICE, and Interconnect

Status: Yes

UCSB has a license for Lumerical's MODE, DEVICE, and Interconnect software.

We now have a script to run MODE that will automatically submit it to the queue. Use

/sw/cnsi/lumerical/mode/bin/varfdtd-run-pbs.sh

Or, better yet, do a "module load lumerical/2019b" (for example), which will define the path so you only need to type 'varfdtd-run-pbs.sh' to submit your MODE jobs.
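
For example (whether the wrapper takes your .lsf script as its argument is an assumption here - run it without arguments to check its usage if this doesn't work):

# put varfdtd-run-pbs.sh (and the other Lumerical tools) on your PATH
module load lumerical/2019b
# submit the MODE job; 'my-script.lsf' is a placeholder for your own script
varfdtd-run-pbs.sh my-script.lsf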

To run it manually, see below - especially if you used to run it fairly interactively.

For MODE:

You'll need some slight modifications to your .lsf file.

Any graphs that would normally be displayed won't be visible in batch, so on the line after the "plot" command, put an

exportfigure("figure-test");

which will save a JPEG of the plot for you.

At the end of the .lsf file, put a

save("my-run");

which will create a saved project file ('my-run.lms' in MODE) that you can open later to look at the results.

To run in the queue, create a job file along the lines of the sketch below.
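
This is only a sketch (save it as something like 'my-job.job' to match the submit line below): it assumes MODE is run headless under the 'xvfb-batch' wrapper, just like the Interconnect examples further down, and that the MODE binary is called 'mode-solutions' - check the bin directory of the module you load and adjust the binary and script names to your setup.

#!/bin/bash
#PBS -l nodes=1:ppn=4
cd $PBS_O_WORKDIR
/bin/hostname
module load lumerical/2019b
# run MODE with no display and execute your modified script
# ('mode-solutions' and 'my-script.lsf' are assumed names - substitute your own)
xvfb-batch mode-solutions -hide -run my-script.lsf -logall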

 

Submit the job with 'qsub my-job.job', and you can monitor it with 'qstat -u $USER'.

 

For DEVICE:

You can run the solvers directly on your '.ldev' files [if you have an '.lsf' file, you can put a 'save("myfile.ldev");' command at the end of it to make one]. You need a simple batch script (e.g. a Slurm file for Pod is shown below):

#!/bin/bash -l
#  how many cores/node
#SBATCH  -N 1 --ntasks-per-node=10

cd $SLURM_SUBMIT_DIR
module load lumerical/2018b

# edit stuff below here...
# Input filename
INPUT="pn_wg_dcsweep_R2018.ldev"
# which solver you want to run on your file
device-engine  ./${INPUT}

#  end of edits...
echo "Job finished at: `date`"
exit

The other solver engines are listed at https://kb.lumerical.com/en/user_guide_run_linux_solver_command_line_direct.html

e.g. for HEAT it's 'thermal-engine', etc.
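
For instance, to run a HEAT simulation, the only lines of the job file above that change are the input filename and the engine (the filename here is just a placeholder):

# Input filename ('my_heat_sim.ldev' is just a placeholder)
INPUT="my_heat_sim.ldev"
# run the HEAT solver instead of the CHARGE solver
thermal-engine ./${INPUT}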

So to run this job, it's just

sbatch myfile.job

and you can monitor it with

showq $USER

You can edit the job file to change the filenames with any editor, such as 'nano' or 'vi' (or your favorite editor).

For Interconnect:

You need to make a job file for the queue and then run Interconnect under a fake graphical interface (even though you don't want it to output any graphics). Some example scripts are below. Note that Knot and Pod currently have different queue software, so the files are slightly different.

On Knot:

#!/bin/bash
#PBS -l nodes=1:ppn=4
cd $PBS_O_WORKDIR
sleep 10
/bin/hostname
mkdir OutputDir
module load lumerical/2019b
xvfb-batch interconnect -hide -run runTWLM.lsf -logall -o OutputDir

Submit the job with 'qsub myfile.job', and monitor it with 'qstat -u $USER'.

On Pod:

#!/bin/bash  -l
#SBATCH --nodes=1 --ntasks-per-node 4
#  this is asking for 1 node, with 4 cores - the -l is needed on first line if you want to use modules
module load  lumerical/2019b
cd $SLURM_SUBMIT_DIR
/bin/hostname
# Will store data in OutputDir
mkdir OutputDir
xvfb-batch interconnect -hide -run runTWLM.lsf -logall -o OutputDir

Note that 'xvfb-batch' is just an alias for the wordier command 'xvfb-run --auto-servernum --server-num=1', which is the full set of arguments you need with xvfb-run to run it on our clusters.
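
So the last line of the Pod job above could equivalently be written out in full as:

xvfb-run --auto-servernum --server-num=1 interconnect -hide -run runTWLM.lsf -logall -o OutputDir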