Commit 9b012758 authored by sjplimp

neighbor list bug fixes, new compute coord/atom option

git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16018 f3b2605a-c512-4ea7-a41b-209d697bcdaa
parent 23cfb88b
+4 −3
@@ -1153,7 +1153,7 @@ Package, Description, Author(s), Doc page, Example, Pic/movie, Library
"USER-MISC"_#USER-MISC, single-file contributions, USER-MISC/README, USER-MISC/README, -, -, -
"USER-MANIFOLD"_#USER-MANIFOLD, motion on 2d surface, Stefan Paquay (Eindhoven U of Technology), "fix manifoldforce"_fix_manifoldforce.html, USER/manifold, "manifold"_manifold, -
"USER-MOLFILE"_#USER-MOLFILE, "VMD"_VMD molfile plug-ins, Axel Kohlmeyer (Temple U), "dump molfile"_dump_molfile.html, -, -, VMD-MOLFILE
"USER-NC-DUMP"_#USER-NC-DUMP, dump output via NetCDF, Lars Pastewka (Karlsruhe Institute of Technology, KIT), "dump nc, dump nc/mpiio"_dump_nc.html, -, -, lib/netcdf
"USER-NC-DUMP"_#USER-NC-DUMP, dump output via NetCDF, Lars Pastewka (Karlsruhe Institute of Technology, KIT), "dump nc / dump nc/mpiio"_dump_nc.html, -, -, lib/netcdf
"USER-OMP"_#USER-OMP, OpenMP threaded styles, Axel Kohlmeyer (Temple U), "Section 5.3.4"_accelerate_omp.html, -, -, -
"USER-PHONON"_#USER-PHONON, phonon dynamical matrix, Ling-Ti Kong (Shanghai Jiao Tong U), "fix phonon"_fix_phonon.html, USER/phonon, -, -
"USER-QMMM"_#USER-QMMM, QM/MM coupling, Axel Kohlmeyer (Temple U), "fix qmmm"_fix_qmmm.html, USER/qmmm, -, lib/qmmm
@@ -1610,6 +1610,7 @@ and a "dump nc/mpiio"_dump_nc.html command to output LAMMPS snapshots
in this format.  See src/USER-NC-DUMP/README for more details.

NetCDF files can be directly visualized with the following tools:

Ovito (http://www.ovito.org/). Ovito supports the AMBER convention
and all of the above extensions. :ulb,l
VMD (http://www.ks.uiuc.edu/Research/vmd/) :l
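
Absent one of these visualizers, such a file can also be inspected programmatically. Below is a minimal Python sketch, not part of LAMMPS, assuming the third-party netCDF4 module and a placeholder filename dump.nc written by "dump nc"_dump_nc.html:

# Minimal sketch (assumes the netCDF4 package; "dump.nc" is a placeholder)
# to inspect an AMBER-convention trajectory without a visualizer.
from netCDF4 import Dataset
with Dataset("dump.nc") as nc:
    print(nc.Conventions)              # AMBER-convention files carry this attribute
    print(list(nc.dimensions))         # typically frame, atom, spatial
    xyz = nc.variables["coordinates"]  # (frame, atom, spatial) per the convention
    print(xyz.shape) :pre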
+7 −7
@@ -1727,7 +1727,7 @@ thermodynamic state and a total run time for the simulation. It then
appends statistics about the CPU time and storage requirements for the
simulation.  An example set of statistics is shown here:

-Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms
+Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms :pre

Performance: 18.436 ns/day  1.302 hours/ns  106.689 timesteps/s
97.0% CPU use with 4 MPI tasks x no OpenMP threads :pre
@@ -1757,14 +1757,14 @@ Ave special neighs/atom = 2.34032
Neighbor list builds = 26
Dangerous builds = 0 :pre

-The first section provides a global loop timing summary. The loop time
+The first section provides a global loop timing summary. The {loop time}
is the total wall time for the section.  The {Performance} line is
provided for convenience to help predict the number of loop
-continuations required and for comparing performance with other
-similar MD codes.  The CPU use line provides the CPU utilzation per
+continuations required and for comparing performance with other,
+similar MD codes.  The {CPU use} line provides the CPU utilization per
MPI task; it should be close to 100% times the number of OpenMP
-threads (or 1). Lower numbers correspond to delays due to file I/O or
-insufficient thread utilization.
+threads (or 1 if no OpenMP). Lower numbers correspond to delays due
+to file I/O or insufficient thread utilization.

The MPI task section gives the breakdown of the CPU run time (in
seconds) into major categories:
@@ -1791,7 +1791,7 @@ is present that also prints the CPU utilization in percent. In
addition, when {timer full} and the "package omp"_package.html
command are active, a similar timing summary of time spent in threaded
regions to monitor thread utilization and load balance is provided. A
-new entry is the {Reduce} section, which lists the time spend in
+new entry is the {Reduce} section, which lists the time spent in
reducing the per-thread data elements to the storage for non-threaded
computation. These thread timings are taken from the first MPI rank
only and thus, as the breakdown for MPI tasks can change from MPI
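
As a cross-check on the summary quoted above: the {Performance} numbers follow directly from the {Loop time} line. A short Python sketch of the arithmetic, assuming a 2 fs timestep (an assumption that reproduces the quoted values):

# Sketch of the arithmetic behind the Performance line (not LAMMPS source);
# the 2 fs timestep is an assumption chosen to match the quoted output.
loop_time = 2.81192                                    # seconds, "Loop time" line
steps = 300
timestep_fs = 2.0                                      # assumed timestep
steps_per_s = steps / loop_time                        # 106.689 timesteps/s
ns_per_day = steps_per_s * timestep_fs * 1e-6 * 86400  # 18.436 ns/day
hours_per_ns = 24.0 / ns_per_day                       # 1.302 hours/ns
print(ns_per_day, hours_per_ns, steps_per_s) :pre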
+2 −2
@@ -110,14 +110,14 @@ mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis :p
[Required hardware/software:]

Kokkos support within LAMMPS must be built with a C++11 compatible
-compiler.  If using gcc, version 4.8.1 or later is required.
+compiler.  If using gcc, version 4.7.2 or later is required.

To build with Kokkos support for CPUs, your compiler must support the
OpenMP interface.  You should have one or more multi-core CPUs so that
multiple threads can be launched by each MPI task running on a CPU.

To build with Kokkos support for NVIDIA GPUs, NVIDIA Cuda software
-version 6.5 or later must be installed on your system.  See the
+version 7.5 or later must be installed on your system.  See the
discussion for the "GPU"_accelerate_gpu.html package for details of
how to check and do this.
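
A quick way to verify these minimums before building is a small, hypothetical helper script (not part of LAMMPS; assumes g++ and nvcc are on the PATH):

# Hypothetical helper (not part of LAMMPS): check the toolchain minimums
# quoted above, gcc >= 4.7.2 for C++11 and CUDA (nvcc) >= 7.5 for GPUs.
import re, subprocess
def tool_version(cmd):
    try:
        out = subprocess.run([cmd, "--version"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    m = re.search(r"(\d+\.\d+(?:\.\d+)?)", out)
    return m.group(1) if m else None
def at_least(version, minimum):
    parse = lambda v: [int(x) for x in v.split(".")]
    return parse(version) >= parse(minimum)
for cmd, minimum in (("g++", "4.7.2"), ("nvcc", "7.5")):
    v = tool_version(cmd)
    ok = v is not None and at_least(v, minimum)
    print(cmd, v or "missing", "OK" if ok else "check required") :pre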
