Lumerical Script files in the queue

While you can use the simple 'fdtd-pbs-run.sh' script to run FSP files for Lumerical's FDTD, running script (LSF) files is slightly more involved - but it lets you use the cluster to automate much of your Lumerical work.  Note that this only works on the Pod cluster!

Also, there's a startup cost for checking out licenses, on the order of a minute - so a job that only computes for 30 seconds takes a minute and a half of wall time, which is poor efficiency.  If your jobs run for a few hours, though, the extra minute doesn't matter.  The lesson: if a simulation is small enough to finish in a minute or two, it's probably better to just run it on your laptop or desktop!

A sample lsf script is shown below; it loads the fsp file, runs it, and saves some results as a MATLAB file.  Note that if you already have script files, you'll probably need to add the 'load' step to open the fsp file - in the GUI you normally have the fsp file open already, but in a batch script you have to tell Lumerical to open it explicitly.

# grating_run.lsf
# Open the fsp project file - in batch mode nothing is open yet,
# unlike in the GUI.
filename="grating_coupler_2D.fsp";
load(filename);

clear;            # clear leftover script workspace variables
switchtolayout;   # make sure the project is in layout mode
run;              # run the simulation

# Pull results from the 'trans_series' analysis group and save
# them for post-processing in MATLAB
T = getresult('trans_series','T');
theta_out = getresult('trans_series','theta_out');
matlabsave('data1',T,theta_out);
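
When the run finishes you should find data1.mat in the working directory (matlabsave appends the .mat extension), ready to load into MATLAB for post-processing.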

There are a few simple setup steps you need to do initially; after that, running your script/LSF files is a pretty simple process.

1) (you only need to do this once) Decide how many cores you generally want to use.  Pod nodes have 40 cores, but a lot of people run jobs on them, so your jobs will get through the queue more quickly with a more conservative number, e.g. 8 (you can experiment).  Start FDTD with 'module load lumerical/2023R1', then 'fdtd-solutions'.  Click the 'Simulation' menu, then 'Configure resources'.  This brings up a window with two tabs, FDTD Solver and Design Environment.  Click the 'Design Env.' tab, uncheck 'autodetect', and enter the number of threads/cores you want (8 in this example).  Click save.  That's it!  The only annoyance is that if you want a different number of cores, you have to change it both here and in your job file.  For the command line savvy, you can instead simply edit the file ~/.config/Lumerical/FDTD Solutions.ini, as sketched below.
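
For example (a hypothetical sketch - the actual key name varies between Lumerical versions, so check what your own ini file uses before editing):

# Hypothetical: replace 'threads' with whatever key your copy of
# FDTD Solutions.ini actually uses for the Design Environment thread count
sed -i 's/^threads=.*/threads=8/' ~/.config/Lumerical/'FDTD Solutions.ini'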

2) You need a Slurm job file that looks like the one below.  Although fdtd-solutions runs without creating a window here, you still need to fool it into thinking a graphical display is available - that's what the xvfb-run wrapper and the QT_QPA_PLATFORM variable are for.  Of course, change the name of the lsf file to match yours (grating_run.lsf in this example)!

#!/bin/bash -l
#SBATCH --nodes=1 --ntasks-per-node=8

# Load Lumerical, and set the QT variable so it can run headless
module load lumerical/2023R1
export QT_QPA_PLATFORM=offscreen

# Run from the directory the job was submitted from
cd $SLURM_SUBMIT_DIR
time xvfb-run fdtd-solutions -nw -run grating_run.lsf

Then you submit your job with 'sbatch myfile.job' and can monitor it with 'squeue -u $USER'.
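
If you have many scripts to run, one handy variation (a sketch, not part of the stock setup above) is to make the job file generic: sbatch passes any trailing command-line arguments through to the job script as $1, $2, etc., so the job file can take the lsf script's name as an argument:

#!/bin/bash -l
#SBATCH --nodes=1 --ntasks-per-node=8

module load lumerical/2023R1
export QT_QPA_PLATFORM=offscreen

cd $SLURM_SUBMIT_DIR
# $1 is the lsf script name given on the sbatch command line
time xvfb-run fdtd-solutions -nw -run "$1"

Then 'sbatch myfile.job grating_run.lsf' runs a single script, and a shell loop like 'for f in *.lsf; do sbatch myfile.job "$f"; done' submits a whole batch.  Remember that each lsf script still has to load its own fsp file.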