
running CUDA cards



Hello:

I have just set up a gaming machine with

Gigabyte GA-890FXA-UD5
AMD Phenom II X6 1075T
2 x GTX 470 GPU cards
4 x 4 GB RAM
2 x 1 TB HDD for RAID1

and need to install Debian amd64 to run molecular dynamics with the
NAMD software (free for non-commercial use), either from the released
binaries or compiled from source (requirements below). All of this is
experimental for me: I have little Linux experience and none whatsoever
with CUDA cards. My question is which amd64 release is best suited
(lenny or squeeze), and what should be added to a typical server
installation to meet these requirements:

(1) NVIDIA Linux driver version 195.17 or newer (released Linux
binaries are built with CUDA 2.3, but can be built with newer versions
as well).
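As a quick way to check requirement (1), the loaded driver version can be read from /proc once the proprietary NVIDIA module is installed; a minimal sketch (on a machine without the driver it just reports that none is loaded):

```shell
#!/bin/sh
# Print the loaded NVIDIA kernel driver version, if any.
# /proc/driver/nvidia/version only exists once the proprietary
# driver module is loaded.
if [ -r /proc/driver/nvidia/version ]; then
    cat /proc/driver/nvidia/version
else
    echo "NVIDIA driver not loaded"
fi
```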

(2) libcudart.so.2 included with the binary (the one copied from the
version of CUDA it was built with) must be in a directory in your
LD_LIBRARY_PATH before any other libcudart.so libraries. For example:

  setenv LD_LIBRARY_PATH ".:$LD_LIBRARY_PATH"
  (or LD_LIBRARY_PATH=".:$LD_LIBRARY_PATH"; export LD_LIBRARY_PATH)
  ./namd2 +idlepoll <configfile>
  ./charmrun ++local +p4 ./namd2 +idlepoll <configfile>
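On Debian the login shell is normally sh/bash rather than tcsh, so the second (sh) form above applies. Instead of ".", the unpacked NAMD directory can be prepended explicitly; a minimal sketch, where the NAMD_DIR path is hypothetical:

```shell
#!/bin/sh
# Hypothetical unpack location of the NAMD CUDA binary release
NAMD_DIR="$HOME/NAMD_CUDA"
# Prepend it so the bundled libcudart.so.2 is found before any
# system copy; avoid a trailing ":" when LD_LIBRARY_PATH was unset.
LD_LIBRARY_PATH="$NAMD_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```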

THE FOLLOWING CAN BE SKIPPED, unless one is specifically interested in
the matter: +idlepoll on the command line is needed so that namd2
polls the GPU for results rather than sleeping while idle. Each namd2
process can use only one GPU, so you need at least one process for
each GPU you want to use. Multiple processes can share a single GPU,
usually with an increase in performance. NAMD automatically
distributes processes equally among the GPUs on a node; to restrict
it to particular cards, specific GPU device IDs can be requested via
the +devices argument on the namd2 command line, for example:
  ./charmrun ++local +p4 ./namd2 +idlepoll +devices 0,2 <configfile>

Thanks for any advice
francesco pietra

