
Amber11 Installation

posted May 18, 2011, 6:35 PM by Dong Xu   [ updated Oct 27, 2011, 7:16 PM ]
https://docs.google.com/open?id=0B8ACWG4kGvuBYjUwZWM3OTgtMDEzMy00MGZmLThlMjUtNGIyOTA0NjZjNzA5

Needs the packages flex, zlib, zlib-devel, libbz2, and libbz2-devel (see the install sketch after these notes).
So far the Intel compilers + MPICH2 have worked better than GNU + OpenMPI.
If Amber is pulled from a git tree dated after the latest patch, the patching step and AT15_Amber11.py can be skipped.
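
A hedged example of installing the prerequisites on a Red Hat-style system (the package names are assumptions and vary by distribution; the bzip2 headers are bzip2-devel on RHEL/Fedora but libbz2-dev on Debian/Ubuntu):

sudo yum install flex zlib zlib-devel bzip2-devel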

To log compilation/installation details in bash, do

make serial 2>&1 | tee make.serial.txt
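
All of the steps below assume $AMBERHOME points at the top of the Amber11 tree, for example (the path here is an assumption; adjust it to your install):

export AMBERHOME=$HOME/amber11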

Serial

AmberTools-1.5
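
The steps below assume bugfix.all has already been downloaded into $AMBERHOME. It is posted on the Amber web site; the URL here is an assumption based on the usual ambermd.org bugfix layout, so verify it there:

cd $AMBERHOME
wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all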

cd $AMBERHOME
patch -p0 -N < bugfix.all
cd $AMBERHOME/AmberTools/src
./configure gnu
make install
cd ../test
make test

Amber11
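
Likewise, apply_bugfix.x and the Amber11 bugfix.all are assumed to be in $AMBERHOME already; both come from the Amber bugfix page (these URLs are assumptions, verify them on ambermd.org):

cd $AMBERHOME
wget http://ambermd.org/bugfixes/apply_bugfix.x
wget http://ambermd.org/bugfixes/11.0/bugfix.all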

cd $AMBERHOME
chmod 700 apply_bugfix.x
./apply_bugfix.x bugfix.all

Serial

Go to the $AMBERHOME/src directory and check that a config.h file is present. If not, follow the configuration steps in the AmberTools Users' Manual.
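
A minimal sketch for regenerating config.h, assuming a fully patched tree where the AmberTools configure step produces src/config.h (on older trees, run AT15_Amber11.py afterwards as noted above):

cd $AMBERHOME/AmberTools/src
./configure gnu
cd $AMBERHOME/src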

make serial

cd $AMBERHOME/test
make test

Parallel

export MPI_HOME=$AMBERHOME
export PATH=$AMBERHOME/bin:$PATH
export LD_LIBRARY_PATH=$AMBERHOME/lib:$LD_LIBRARY_PATH
Download openmpi-1.4.3 and extract it into $AMBERHOME/AmberTools/src (skip the OpenMPI steps if MPICH2 is used instead):
cd $AMBERHOME/AmberTools/src
tar jxvf ~/Downloads/openmpi-1.4.3.tar.bz2
./configure_openmpi gnu
./configure -mpi gnu
make parallel

cd ../../src; make clean
./AT15_Amber11.py; make parallel
cd $AMBERHOME/test
export DO_PARALLEL='mpirun -np 4'
make test.parallel


Single GPU

export CUDA_HOME=/home/apps/cuda/
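
CUDA_HOME is assumed to point at the root of an installed CUDA toolkit; a quick sanity check is that nvcc, which ships with the toolkit, runs from there:

$CUDA_HOME/bin/nvcc --version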
cd $AMBERHOME/AmberTools/src/
make clean
./configure -cuda intel    (or gnu; use -cuda_SPSP or -cuda_DPDP for the other precision models)
cd ../../
./AT15_Amber11.py
cd src/
make clean
make cuda


cd $AMBERHOME/test/
./test_amber_cuda.sh
./test_amber_cuda.sh -1 DPDP    (to test the DPDP precision model)

Multi-GPU

Compilation has been a pain; the Intel compilers + MPICH2 worked on a clean Amber source tree pulled from git.

Set up the compiler environment for MPICH2:

export CFLAGS="-xHost -I/home/apps/intel/Compiler/11.1/069/lib"
export FFLAGS="-xHost -I/home/apps/intel/Compiler/11.1/069/lib"
export FCFLAGS="-xHost -I/home/apps/intel/Compiler/11.1/069/lib"
export CXXFLAGS="-xHost -I/home/apps/intel/Compiler/11.1/069/lib"
export LDFLAGS="-L/home/apps/intel/Compiler/11.1/069/lib"
export CC=icc
export F77=ifort
export FC=ifort
export CXX=icpc

Then follow the MPICH2 README to install.
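
A minimal sketch of the standard MPICH2 configure/make flow, with the prefix chosen to match $MPI_HOME below (the version and paths are from this setup; adjust as needed):

cd mpich2-1.3.2p1
./configure --prefix=/home/apps/mpich2-1.3.2p1-intel
make
make install

Once installed, point $MPI_HOME and your PATH at it: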
export MPI_HOME=/home/apps/mpich2-1.3.2p1-intel
export PATH=$MPI_HOME/bin:${PATH}

Test by running "which mpif90"; it should resolve to $MPI_HOME/bin/mpif90.
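
To double-check that the wrappers invoke the Intel compilers, MPICH2's compiler wrappers accept -show, which prints the underlying compile command without running it:

mpif90 -show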

cd $AMBERHOME/AmberTools/src/
make clean
./configure -cuda -mpi intel    (or gnu; use -cuda_SPSP or -cuda_DPDP for the other precision models)
cd ../../
./AT15_Amber11.py
cd src/
make clean
make cuda_parallel


