Lumerical MODE, DEVICE, and Interconnect

UCSB has a license for Lumerical's MODE, DEVICE, and Interconnect software.

We now have a script to run MODE that will automatically submit your job to the queue; use


Or, better yet, do a "module load lumerical/2019b" (for example), which will define the path so you only need to type ''

to submit your MODE jobs.

To run it manually, see below - especially if your workflow used to be fairly interactive.


You'll need some slight modifications to your .lsf file.

For any graphs that print out, you won't be able to see them, so on the line after the "plot" command put a


which will just save you a jpeg of the plot.

At the end of the LSF file, put a


which will create a 'my-run.lsf' file that you can read back in to look at the results.
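Concretely, the tail of the modified LSF file could look like this (a sketch: 'my-plot', 'x', and 'y' are placeholder names, and exportfigure and save are Lumerical's script commands for exporting the active figure as a jpeg and saving the simulation - check their spelling against your version's script reference):

```
plot(x, y);                # your existing plot command
exportfigure("my-plot");   # writes the figure to a jpeg instead of displaying it
save("my-run.lsf");        # save so the results can be read back in later
```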

To run in the queue, create a job file that looks like this.
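A minimal sketch of such a job file (for the PBS queue; the module version and 'my-run.lsf' are placeholders, and 'mode-solutions' is an assumed name for the MODE executable in this install - verify both before relying on this; the xvfb-batch wrapper described at the end of this page is used because MODE's script runner also wants a display):

```shell
#PBS -l nodes=1:ppn=4
cd $PBS_O_WORKDIR
module load lumerical/2019b
# run the MODE script in batch, with no visible display
xvfb-batch mode-solutions -hide -run my-run.lsf
```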


Submit the job with 'qsub my-job.job', and you can monitor it with 'qstat -u $USER'.



You can run the solvers directly on your '.ldev' files [if you have an '.lsf' file, you can put the command 'save("myfile.ldev")' at the end of it to make one]. You need a simple batch script (e.g. a Slurm file for Pod is shown below):

#!/bin/bash -l
#  1 node, 10 cores per node
#SBATCH -N 1 --ntasks-per-node=10

module load lumerical/2018b

# edit stuff below here...
# Input filename (replace with your own file)
INPUT=myfile.ldev
# which solver you want to run on your file
device-engine ./${INPUT}

#  end of edits...
echo "Job finished at: `date`"

and the other solvers are available at

e.g. for Heat it's 'thermal-engine', etc.
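For reference, the per-solver engine binaries in recent Lumerical releases are named as follows ('device-engine' and 'thermal-engine' are confirmed above; the MODE engine names come from Lumerical's documentation and are worth verifying against your install):

```shell
device-engine   myfile.ldev   # DEVICE electrical (CHARGE) solver
thermal-engine  myfile.ldev   # HEAT solver
fde-engine      myfile.lms    # MODE eigenmode (FDE) solver
varfdtd-engine  myfile.lms    # MODE 2.5D varFDTD solver
eme-engine      myfile.lms    # MODE EME solver
```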

So to run this job, it's just

sbatch myfile.job

and you can monitor it with

showq $USER

You can edit the job file to change the names with any editor, such as 'nano' or 'vi' (or your favorite editor).

For Interconnect:

You need to make some job files for the queue and then run Interconnect under a virtual (fake) graphical display, even though you don't want it to output any graphics. Some example scripts are below. Note that Knot and Pod currently run different queue software, so the files are slightly different.

On Knot:

#PBS -l nodes=1:ppn=4
sleep 10
mkdir OutputDir
module load lumerical/2019b
xvfb-batch interconnect -hide -run runTWLM.lsf -logall -o OutputDir

Submit the job with 'qsub myfile.job', and monitor it with 'qstat -u $USER'.

On Pod:

#!/bin/bash -l
#SBATCH --nodes=1 --ntasks-per-node=4
#  this is asking for 1 node with 4 cores - the -l is needed on the first line if you want to use modules
module load lumerical/2019b
# Will store data in OutputDir
mkdir OutputDir
xvfb-batch interconnect -hide -run runTWLM.lsf -logall -o OutputDir


Note that 'xvfb-batch' is just an alias for the wordier command 'xvfb-run --auto-servernum --server-num=1', which is the full set of arguments you need with xvfb-run on our clusters.
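If 'xvfb-batch' is not defined on your system, you can create the alias yourself - this is exactly the expansion quoted above:

```shell
# expand_aliases is only needed when defining aliases inside a script
shopt -s expand_aliases
alias xvfb-batch='xvfb-run --auto-servernum --server-num=1'
```

Put the alias in your ~/.bashrc to have it available in every session.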