Lumerical Script files in the queue

While you can use the simple 'fdtd-pbs-run.sh' to run 'FSP' files for Lumerical's FDTD, running script (LSF) files is slightly more involved. However, it lets you use the cluster to automate much of your Lumerical work.  Note this only works on the Pod cluster!

A sample LSF script is below: it loads the FSP file, runs it, and saves some results as a MATLAB file.  Note that if you already have script files, you'll probably need to add the 'load' line to open the FSP file.  In the GUI the FSP file is normally already open, but in a batch run the script has to be told explicitly to open it.

# grating_run.lsf
filename="grating_coupler_2D.fsp";
load(filename);    # open the fsp project (in the GUI it would already be open)

clear;             # clear script workspace variables
switchtolayout;    # make sure we are in layout mode before running
run;               # run the simulation
T = getresult('trans_series','T');
theta_out = getresult('trans_series','theta_out');
matlabsave('data1',T,theta_out);    # save the results to data1.mat


There are a few simple setup steps to do initially; after that, running your script/LSF files is a pretty simple process.

1) (You only need to do this once.)  Decide how many cores you generally want to use.  Pod nodes have 40 cores, but many people are running jobs, so your jobs will get through the queue quicker with a more conservative number, e.g. 8 (but you can experiment).  Start FDTD with 'module load lumerical/2019b', then 'fdtd-solutions'.  Click on the menu 'Simulation' and then 'Configure resources'.  This brings up a window with two tabs, FDTD Solver and Design Environment.  Click on the 'Design Env.' tab, uncheck 'autodetect', and enter the number of threads/cores you want (8 in this example).  Click save.  That's it!  The only annoyance is that if you want to use a different number of cores, you have to change it both here and in your job file.  For the command-line savvy, you can simply edit the file ~/.config/Lumerical/FDTD Solutions.ini
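For example, a shell sketch along these lines would do it.  Note that the key name 'threads' below is only a placeholder assumption, not the actual setting name; grep the file first to see what your version of Lumerical calls it, and adjust the sed pattern to match.

```shell
# CAUTION: "threads" is a hypothetical key name -- check your own ini file
# for the real setting name before editing.
INI="$HOME/.config/Lumerical/FDTD Solutions.ini"   # note the space in the name

if [ -f "$INI" ]; then
    grep -i 'thread' "$INI"                     # find the setting's real name
    cp "$INI" "$INI.bak"                        # keep a backup, just in case
    sed -i 's/^threads=.*/threads=8/' "$INI"    # set the new core count
fi
```

Remember this number still has to agree with the core count in your job file (step 2).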

2) You need a Slurm job file, which will look like this.  Although FDTD runs without creating a window in this mode, you still need to fool it into thinking there's a graphical display; that's what 'xvfb-run' does.  Of course, change the name of the LSF file to match yours (grating_run.lsf in this example).

#!/bin/bash -l
# Request 1 node and 8 tasks; make this match the core count from step 1.
#SBATCH --nodes=1 --ntasks-per-node 8
module load lumerical/2019b
cd $SLURM_SUBMIT_DIR
# xvfb-run provides a virtual X display so FDTD can start without a real GUI
time xvfb-run fdtd-solutions -nw -run grating_run.lsf


Then you submit your jobs with 'sbatch myfile.job', and you can monitor them with 'showq $USER' (or the standard Slurm command 'squeue -u $USER').
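Since each run only needs an LSF file and a job file, you can automate a whole directory of simulations from the shell.  The sketch below generates one LSF and one job file per FSP file and submits each; it is only a sketch, not a turnkey script: the getresult/matlabsave lines are copied from the example above, and the monitor name 'trans_series' is specific to that project, so swap in your own analysis.

```shell
#!/bin/bash
# Sketch: generate a per-file LSF script and Slurm job for every .fsp file
# in the current directory, then submit each one.  The analysis lines are
# from the grating example above -- replace them with your own.
for fsp in *.fsp; do
    [ -e "$fsp" ] || continue      # skip if no .fsp files are present
    base="${fsp%.fsp}"

    # Per-file Lumerical script
    cat > "${base}.lsf" <<EOF
load("${fsp}");
clear;
switchtolayout;
run;
T = getresult('trans_series','T');
matlabsave('${base}_data',T);
EOF

    # Per-file Slurm job (same form as the job file in step 2)
    cat > "${base}.job" <<EOF
#!/bin/bash -l
#SBATCH --nodes=1 --ntasks-per-node 8
module load lumerical/2019b
cd \$SLURM_SUBMIT_DIR
time xvfb-run fdtd-solutions -nw -run ${base}.lsf
EOF

    if command -v sbatch >/dev/null; then
        sbatch "${base}.job"
    fi
done
```

Each simulation then lands in the queue as its own job, and each saves its results to its own .mat file.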