UCSB has a site license for Lumerical's FDTD software (as well as DEVICE and MODE - see this page for how to run them).
To run on the head node (just for data analysis or short jobs; see the batch section below for longer jobs), log in with X forwarding (i.e. ssh -X) or use NX.
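For example, an X-forwarded login might look like the line below (the hostname is only a placeholder; use the actual Knot or Pod login address):
ssh -X your_username@knot.cnsi.ucsb.edu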
First, load the module (run "module avail" to see which versions are available):
module load lumerical/2019b
then you can start the GUI with
fdtd-solutions
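Putting the interactive steps together, a minimal session looks like this (the project filename is just an example; you can also start the GUI with no arguments, as above):
module load lumerical/2019b
fdtd-solutions myproject.fsp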
When you're finished, please quit the program so other people can use it; we have a lot of licenses, but not an unlimited number, and it's a popular program.
Batch jobs for FSP files - see this page for how to run a script (.lsf) file.
To run an .fsp file in the queue, set your project up with the GUI, then submit it with the command fdtd-run-pbs.sh (on Pod, it's fdtd-run-slurm.sh).
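On Pod the submission should look essentially the same as the Knot example below; this sketch assumes fdtd-run-slurm.sh takes the same -n core-count flag as the PBS wrapper:
fdtd-run-slurm.sh -n 8 2d.fsp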
[pcw@knot ~/lumerical]$ fdtd-solutions 2d.fsp
[pcw@knot ~/lumerical]$ fdtd-run-pbs.sh -n 8 2d.fsp
Submitting: 2d.sh
580512.node96
This will have submitted the job. You can check on it with 'qstat -u username', substituting your own username so you only see your jobs.
[pcw@knot ~/lumerical]$ qstat -u $USER
node96:
                                                                                  Req'd    Req'd       Elap
Job ID                  Username    Queue    Jobname          SessID  NDS   TSK   Memory   Time    S   Time
----------------------- ----------- -------- ---------------- ------ ----- ------ ------ --------- - ---------
580512.node96           pcw         batch    2d.sh               --      1      8  667mb  00:26:08 R  00:00:02
In this case the job was run with 8 cores (that's the -n flag); you can use up to 12 cores on Knot and 40 cores on Pod.
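For example, to request the per-node maximums mentioned above (again assuming the same -n flag on both wrappers):
fdtd-run-pbs.sh -n 12 2d.fsp     # Knot
fdtd-run-slurm.sh -n 40 2d.fsp   # Pod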
The last command, qstat (showq $USER on Pod), shows you what the job is doing. Once the status column shows 'C' (complete; in this example it's still running, 'R'), you can open the .fsp file with the GUI and do the analysis.
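On Pod you can check the same thing with showq as noted above, or with the standard Slurm squeue command (under Slurm a finished job simply drops off the listing):
showq $USER
squeue -u $USER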