VASP compilation

Most of the information on how to compile VASP 5.4 is included in the VASP wiki.

This is the current (2018) way to compile on the CSC clusters; the older method for Knot with OpenMPI is kept below for reference.

======Newer way, using Intel compiler with Intel MPI==============

Make sure in your .bash_profile you have

module load intel/18

and it can't hurt to have that in your .job files as well.

Make sure that mpiifort (the Intel MPI Fortran wrapper) is in your path, i.e.

which mpiifort

should report a location under /sw/intel/blah-blah-blah

Then follow the directions at the VASP wiki, e.g.

cp arch/makefile.include.linux_intel ./makefile.include

As of 5.4.4 there is one slight change to make: edit makefile.include and, on the line with CFLAGS, change

the -openmp to -qopenmp   (i.e. add a q)
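If you prefer to script that edit, sed can make the same substitution. The full CFLAGS line below is only illustrative (your makefile.include may carry different flags); the -openmp to -qopenmp swap is the whole fix:

```shell
# Illustrative CFLAGS line; only the -openmp -> -qopenmp substitution matters.
echo 'CFLAGS = -fPIC -DADD_ -Wall -openmp' | sed 's/-openmp/-qopenmp/'
# prints: CFLAGS = -fPIC -DADD_ -Wall -qopenmp

# To apply in place, run this in the directory containing makefile.include:
# sed -i 's/-openmp/-qopenmp/' makefile.include
```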

then just

make -j 4 all

and the binaries will be created in the directories

build/std, build/ncl, and build/gam (each binary is named 'vasp') for the standard, non-collinear, and gamma-point-only versions.

then you just run with the Intel mpirun (i.e. make sure you have 'module load intel/18' in your job script as well).
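A minimal job script might look like the following sketch, assuming a PBS/Torque-style scheduler; the directives, core count, and install path are placeholders to adapt to your queue:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=12       # hypothetical scheduler directive; match your cluster's
cd $PBS_O_WORKDIR            # run from the submission directory
module load intel/18         # same module used for the build
mpirun -np 12 /path/to/vasp/build/std/vasp   # placeholder path to the 'std' binary
```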

=========Older method/Knot with OpenMPI==============

Below are recommended settings for the Knot cluster. Users can tweak the compilation options according to their own preferences, and slight adjustments may be needed for other clusters.

Note that VASP is licensed per group. Several groups at UCSB have a license; check with your PI whether your group is covered.

CPU version:

In your .bashrc, make sure that the following are present

 . /opt/intel/composer_xe_2015.2.164/bin/compilervars.sh intel64
 . /opt/intel/composer_xe_2015.2.164/mkl/bin/mklvars.sh intel64
export PATH=/opt/openmpi-1.6.4/bin/:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-1.6.4/lib:$LD_LIBRARY_PATH

Note: You can compile with a different version of OpenMPI (to see what is available, run locate mpirun in a terminal).

After going into your VASP folder (e.g. /home/user/VASP), copy the file makefile.include.linux_intel from /home/user/VASP/arch to /home/user/VASP and rename it to makefile.include. The following changes are necessary for correct compilation on Knot:

1. Remove -DscaLAPACK

CPP_OPTIONS= -DMPI -DHOST=\"IFC91_ompi\" -DIFC \
             -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
             -DMPI_BLOCK=8000 -Duse_collective \
             -DnoAugXCmeta -Duse_bse_te \
             -Duse_shmem -Dtbdyn

2. Change the MPI Fortran compiler

FC         = /opt/openmpi-1.6.4/bin/mpif90
FCL        = $(FC)

3. Change BLAS

BLAS       = -mkl=sequential

4. Remove BLACS and SCALAPACK options

BLACS      = 
SCALAPACK  = 

After these changes, simply execute make std; VASP will compile and the executable will be placed in the /home/user/VASP/bin directory.

GPU version:

First, login to the GPU head node:

ssh knot-gpu

You need the following in your environment variables:

export LD_LIBRARY_PATH=/opt/openmpi-1.8.5/lib:/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH

. /opt/intel/composer_xe_2015.2.164/bin/compilervars.sh intel64
. /opt/intel/composer_xe_2015.2.164/mkl/bin/mklvars.sh intel64

The only differences here are that the OpenMPI version on the GPU nodes is different and that CUDA is needed. You can add these lines to your .bashrc inside the following if statement to make sure they do not conflict with the CPU setup:

if [ "$(hostname)" == 'node139' ] || [ "$(hostname)" == 'node43' ] || [ "$(hostname)" == 'node44' ] \
   || [ "$(hostname)" == 'node45' ] || [ "$(hostname)" == 'node88' ] || [ "$(hostname)" == 'node89' ] \
   || [ "$(hostname)" == 'node90' ]; then
   export PATH=/opt/openmpi-1.8.5/bin:/usr/local/cuda-7.5/bin:$PATH

   export LD_LIBRARY_PATH=/opt/openmpi-1.8.5/lib:/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH
fi   
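The same guard can be written more compactly with a case statement (same node names, same exports; a sketch, not a required change):

```shell
# Equivalent hostname guard for the GPU nodes, using case instead of chained tests.
case "$(hostname)" in
  node139|node43|node44|node45|node88|node89|node90)
    export PATH=/opt/openmpi-1.8.5/bin:/usr/local/cuda-7.5/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi-1.8.5/lib:/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH
    ;;
esac
```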

Note: You can compile with a different version of OpenMPI (to see what is available, run locate mpirun in a terminal).

After going into your VASP folder (e.g. /home/user/VASP), copy the file makefile.include.linux_intel_cuda from /home/user/VASP/arch to /home/user/VASP and rename it to makefile.include. The following changes are necessary for correct compilation on Knot:

1. Change the MPI Fortran compiler

FC         = /opt/openmpi-1.8.5/bin/mpif90
FCL        = /opt/openmpi-1.8.5/bin/mpif90 -mkl -lstdc++

2. Remove SCALAPACK

SCALAPACK  =

3. Change FC_LIB

FC_LIB     = ifort

4. Change CUDA_ROOT

CUDA_ROOT  := /usr/local/cuda-7.5

5. Change the compute architecture for Knot (see http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc)

GENCODE_ARCH    := -gencode=arch=compute_30,code=\"sm_30,compute_30\" -gencode=arch=compute_20,code=\"sm_20,compute_20\"

Note: If you are compiling for a different cluster, make sure that the compute capability of the GPU card is included in this option. To check the compute capability, run the nvidia-smi command on the GPU node to get the name of the card, then look up its compute capability at https://en.wikipedia.org/wiki/CUDA#Supported_GPUs.

For example, compute capability 2.0 corresponds to compute_20,sm_20, etc.
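As an illustration of that mapping, here is a small hypothetical helper that formats a compute capability such as 3.5 into the corresponding -gencode entry (with the makefile-style escaped quotes used above); it only builds the string, you still need to confirm the capability of your card from the table linked above:

```shell
# Hypothetical helper: format an nvcc -gencode flag from a compute
# capability like "3.5". Purely string formatting, no GPU queries.
cc_to_gencode() {
  cc=$(echo "$1" | tr -d '.')   # "3.5" -> "35"
  printf -- '-gencode=arch=compute_%s,code=\\"sm_%s,compute_%s\\"\n' "$cc" "$cc" "$cc"
}

cc_to_gencode 3.5
# prints: -gencode=arch=compute_35,code=\"sm_35,compute_35\"
```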

6. Change MPI_INC

MPI_INC    = /opt/openmpi-1.8.5/include

After these changes, simply execute make gpu; VASP will compile and the executable will be placed in the /home/user/VASP/bin directory.
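Launching the GPU build then looks something like the following sketch; the binary name vasp_gpu and the rank count are illustrative, and the GPU port generally expects one MPI rank per GPU:

```shell
# Hypothetical launch from a GPU node; one MPI rank per GPU is the usual mapping.
/opt/openmpi-1.8.5/bin/mpirun -np 2 /home/user/VASP/bin/vasp_gpu
```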

The GPU version accelerates calculations, especially those dominated by FFTs (for example, calculations involving exact exchange). Below is a simple test performed on the Knot cluster:

Si 2x2x2 supercell (16 atoms), 2x2x2 Monkhorst-Pack grid

HSE functional, PRECFOCK = Fast

# of CPUs   # of GPUs   Elapsed time (sec)
1           0           8309.662
2           0           4514.926
4           0           3297.162
1           1           1863.783
2           2           1048.171
4           4            957.706

For this test, a single GPU gives roughly a 4.5x speedup over a single CPU core (8309.662 s vs 1863.783 s).