Commit 9a2f7386 authored by Steve Plimpton's avatar Steve Plimpton

sync with SVN

parent a3a3af69
+46 −3
LAMMPS Documentation

Depending on how you obtained LAMMPS, this directory has 2 or 3
sub-directories and optionally 2 PDF files:

src             content files for LAMMPS documentation
html            HTML version of the LAMMPS manual (see html/Manual.html)
tools           tools and settings for building the documentation
Manual.pdf      large PDF version of entire manual
Developer.pdf   small PDF with info about how LAMMPS is structured

If you downloaded LAMMPS as a tarball from the web site, all these
directories and files should be included.

If you downloaded LAMMPS from the public SVN or Git repositories, then
the HTML and PDF files are not included.  Instead you need to create
them, in one of three ways:

(a) You can "fetch" the current HTML and PDF files from the LAMMPS web
site.  Just type "make fetch".  This should create a html_www dir and
Manual_www.pdf/Developer_www.pdf files.  Note that if new LAMMPS
features have been added more recently than the date of your version,
the fetched documentation will include those changes (but your source
code will not, unless you update your local repository).

(b) You can build the HTML and PDF files yourself, by typing "make
html" followed by "make pdf".  Note that building the PDF requires
that the HTML files already exist.  This requires various tools, including
Sphinx, which the build process will attempt to download and install
on your system, if not already available.  See more details below.

(c) You can generate an older, simpler, less-fancy style of HTML
documentation by typing "make old".  This will create an "old"
directory.  This can be useful if (b) does not work on your box for
some reason, or you want to quickly view the HTML version of a doc
page you have created or edited yourself within the src directory.
E.g. if you are planning to submit a new feature to LAMMPS.

----------------

The generation of all documentation is managed by the Makefile in this
dir.

----------------

Options:

make html         # generate HTML in html dir using Sphinx
@@ -51,3 +87,10 @@ Once Python 3 is installed, open a Terminal and type
pip3 install virtualenv

This will install virtualenv from the Python Package Index.

----------------

Installing prerequisites for PDF build


+2 −2
<!-- HTML_ONLY -->
<HEAD>
<TITLE>LAMMPS Users Manual</TITLE>
<META NAME="docnumber" CONTENT="27 Sep 2016 version">
<META NAME="author" CONTENT="http://lammps.sandia.gov - Sandia National Laboratories">
<META NAME="copyright" CONTENT="Copyright (2003) Sandia Corporation.  This software and manual is distributed under the GNU General Public License.">
</HEAD>
@@ -21,7 +21,7 @@
<H1></H1>

LAMMPS Documentation :c,h3
27 Sep 2016 version :c,h4

Version info: :h4

+8 −8
@@ -48,14 +48,14 @@ follows the discussion in these 3 papers: "(HenkelmanA)"_#HenkelmanA,

Each replica runs on a partition of one or more processors.  Processor
partitions are defined at run-time using the -partition command-line
switch; see "Section 2.7"_Section_start.html#start_7 of the manual.
Note that if you have MPI installed, you can run a multi-replica
simulation with more replicas (partitions) than you have physical
processors, e.g. you can run a 10-replica simulation on just one or
two processors.  You will simply not get the performance speed-up you
would see with one or more physical processors per replica.  See
"Section 6.5"_Section_howto.html#howto_5 of the manual for further
discussion.

NOTE: The current NEB implementation in LAMMPS only allows there to be
one processor per replica.
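The "-partition" switch described above takes a spec like "10x1",
meaning M partitions (replicas) of N processors each.  The following is
a hypothetical helper, not LAMMPS code, sketching only that arithmetic
(the real switch also accepts other forms, which this sketch ignores):

```python
# Hypothetical illustration (not part of LAMMPS) of how a
# "-partition MxN" value maps to replica partitions:
# M partitions (replicas), each assigned N processors.

def parse_partition(spec):
    """Return a list of processor counts, one entry per replica."""
    m, n = (int(v) for v in spec.split("x"))
    return [n] * m

# A 10-replica run with one processor per replica ("10x1") uses
# 10 MPI processes in total, which may be oversubscribed onto just
# one or two physical processors, as noted above.
parts = parse_partition("10x1")
print(len(parts), sum(parts))  # 10 10
```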
+29 −27
@@ -63,14 +63,14 @@ event to occur.

Each replica runs on a partition of one or more processors.  Processor
partitions are defined at run-time using the -partition command-line
switch; see "Section 2.7"_Section_start.html#start_7 of the manual.
Note that if you have MPI installed, you can run a multi-replica
simulation with more replicas (partitions) than you have physical
processors, e.g. you can run a 10-replica simulation on one or two
processors.  However, for PRD this makes little sense, since running a
replica on virtual instead of physical processors offers no effective
parallel speed-up in searching for infrequent events.  See "Section
6.5"_Section_howto.html#howto_5 of the manual for further discussion.

When a PRD simulation is performed, it is assumed that each replica is
running the same model, though LAMMPS does not check for this.
@@ -163,7 +163,7 @@ runs for {N} timesteps. If the {time} value is {clock}, then the
simulation runs until {N} aggregate timesteps across all replicas have
elapsed.  This aggregate time is the "clock" time defined below, which
typically advances nearly M times faster than the timestepping on a
single replica, where M is the number of replicas.

:line

@@ -183,25 +183,26 @@ coincident events, and the replica number of the chosen event.

The timestep is the usual LAMMPS timestep, except that time does not
advance during dephasing or quenches, but only during dynamics.  Note
that there are two kinds of dynamics in the PRD loop listed above that
contribute to this timestepping.  The first is when all replicas are
performing independent dynamics, waiting for an event to occur.  The
second is when correlated events are being searched for, but only one
replica is running dynamics.

The CPU time is the total elapsed time on each processor since the
start of the PRD run.

The clock is the same as the timestep except that it advances by M
steps per timestep during the first kind of dynamics when the M
replicas are running independently.  The clock advances by only 1 step
per timestep during the second kind of dynamics, when only a single
replica is checking for a correlated event.  Thus "clock" time
represents the aggregate time (in steps) that has effectively elapsed
during a PRD simulation on M replicas.  If most of the PRD run is
spent in the second stage of the loop above, searching for infrequent
events, then the clock will advance nearly M times faster than it
would if a single replica was running.  Note that the clock time
between successive events should be drawn from p(t).
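The clock-advance rule just described (M steps per timestep during
independent dynamics, 1 step per timestep while a single replica checks
for correlated events) amounts to simple arithmetic, sketched here in
Python purely as an illustration (this is not LAMMPS code):

```python
# Illustration only (not LAMMPS code) of the PRD "clock" described above.
# While all M replicas run independently, the clock gains M steps per
# timestep; while a single replica searches for correlated events, it
# gains only 1 step per timestep.

def prd_clock(independent_steps, correlated_steps, M):
    """Aggregate clock time (in steps) accumulated over a PRD run."""
    return independent_steps * M + correlated_steps

# e.g. with M = 10 replicas, 1000 timesteps of independent dynamics
# plus 200 timesteps of correlated-event search:
print(prd_clock(1000, 200, 10))  # 10*1000 + 200 = 10200
```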

The event number is a counter that increments with each event, whether
it is uncorrelated or correlated.
@@ -212,14 +213,15 @@ replicas are running independently. The correlation flag will be 1
when a correlated event occurs during the third stage of the loop
listed above, i.e. when only one replica is running dynamics.

When more than one replica detects an event at the end of the same
event check (every {t_event} steps) during the second stage, then
one of them is chosen at random.  The number of coincident events is
the number of replicas that detected an event.  Normally, this value
should be 1.  If it is often greater than 1, then either the number of
replicas is too large, or {t_event} is too large.

The replica number is the ID of the replica (from 0 to M-1) in which
the event occurred.

:line

@@ -286,7 +288,7 @@ This command can only be used if LAMMPS was built with the REPLICA
package.  See the "Making LAMMPS"_Section_start.html#start_3 section
for more info on packages.

The {N} and {t_correlate} settings must be integer multiples of
{t_event}.

Runs restarted from a restart file written during a PRD run will not
+0 −225
LAMMPS (15 Feb 2016)
# 2d circle of particles inside a box with LJ walls

variable        b index 0

variable	x index 50
variable	y index 20
variable	d index 20
variable	v index 5
variable	w index 2

units		lj
dimension       2
atom_style	bond
boundary        f f p

lattice		hex 0.85
Lattice spacing in x,y,z = 1.16553 2.01877 1.16553
region		box block 0 $x 0 $y -0.5 0.5
region		box block 0 50 0 $y -0.5 0.5
region		box block 0 50 0 20 -0.5 0.5
create_box	1 box bond/types 1 extra/bond/per/atom 6
Created orthogonal box = (0 0 -0.582767) to (58.2767 40.3753 0.582767)
  1 by 1 by 1 MPI processor grid
region		circle sphere $(v_d/2+1) $(v_d/2/sqrt(3.0)+1) 0.0 $(v_d/2)
region		circle sphere 11 $(v_d/2/sqrt(3.0)+1) 0.0 $(v_d/2)
region		circle sphere 11 6.7735026918962581988 0.0 $(v_d/2)
region		circle sphere 11 6.7735026918962581988 0.0 10
create_atoms	1 region circle
Created 361 atoms
mass		1 1.0

velocity	all create 0.5 87287 loop geom
velocity        all set $v $w 0 sum yes
velocity        all set 5 $w 0 sum yes
velocity        all set 5 2 0 sum yes

pair_style	lj/cut 2.5
pair_coeff	1 1 10.0 1.0 2.5

bond_style      harmonic
bond_coeff      1 10.0 1.2

# need to preserve 1-3, 1-4 pairwise interactions during hard collisions

special_bonds   lj/coul 0 1 1
  0 = max # of 1-2 neighbors
  1 = max # of special neighbors
create_bonds    all all 1 1.0 1.5
Neighbor list info ...
  2 neighbor list requests
  update every 1 steps, delay 10 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 2.8
  binsize = 1.4 -> bins = 42 29 1
Added 1014 bonds, new total = 1014
  6 = max # of 1-2 neighbors
  6 = max # of special neighbors

neighbor	0.3 bin
neigh_modify	delay 0 every 1 check yes

fix		1 all nve

fix             2 all wall/lj93 xlo 0.0 1 1 2.5 xhi $x 1 1 2.5
fix             2 all wall/lj93 xlo 0.0 1 1 2.5 xhi 50 1 1 2.5
fix             3 all wall/lj93 ylo 0.0 1 1 2.5 yhi $y 1 1 2.5
fix             3 all wall/lj93 ylo 0.0 1 1 2.5 yhi 20 1 1 2.5

comm_style      tiled
comm_modify     cutoff 7.5
fix             10 all balance 50 0.9 rcb

#compute         1 all property/atom proc
#variable        p atom (c_1%10)+1
#dump            2 all custom 50 tmp.dump id v_p x y z

#dump            3 all image 50 image.*.jpg v_p type bond atom 0.25 #                adiam 1.0 view 0 0 zoom 1.8 subbox yes 0.02
#variable        colors string #                "red green blue yellow white #                purple pink orange lime gray"
#dump_modify     3 pad 5 amap 0 10 sa 1 10 ${colors}

thermo_style    custom step temp epair press f_10[3] f_10
thermo          100

run		10000
Neighbor list info ...
  1 neighbor list requests
  update every 1 steps, delay 0 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 7.5
  binsize = 1.4 -> bins = 42 29 1
Memory usage per processor = 4.44301 Mbytes
Step Temp E_pair Press 10[3] 10 
       0    25.701528   -2.2032569    3.1039469            1            1 
     100    27.623422    -6.228166    2.6542136            1            1 
     200     33.35302   -15.746749    3.2018248            1            1 
     300     39.17734     -24.1557    4.9116986            1            1 
     400    41.660701   -27.615203    8.6214678            1            1 
     500    37.154935   -24.096962    3.2656162            1            1 
     600    35.061294    -21.52655    2.3693223            1            1 
     700    37.204395   -22.313267    2.7108913            1            1 
     800    39.050704   -24.972147    5.5398741            1            1 
     900     38.37275   -24.777769    3.9291488            1            1 
    1000    39.147816   -26.003699    4.3586203            1            1 
    1100    36.084337    -24.88638    4.5496174            1            1 
    1200    32.404559   -20.810803    6.0760128            1            1 
    1300    32.625538   -19.709411    4.3718289            1            1 
    1400    32.246777   -18.785184     3.435959            1            1 
    1500    29.174368   -17.434726    2.2702916            1            1 
    1600    27.359273    -15.40756     1.033659            1            1 
    1700    26.046626   -14.318045   0.87714473            1            1 
    1800    24.540401   -13.017686   0.84464169            1            1 
    1900    26.259688   -12.777739   0.80954004            1            1 
    2000    27.491023   -13.363863    1.4519188            1            1 
    2100    27.839831   -13.709118    3.0184763            1            1 
    2200    26.669065   -12.710422    1.4560094            1            1 
    2300     26.86742   -12.730386   0.16986139            1            1 
    2400    26.375504   -12.476682     1.907352            1            1 
    2500    26.581263   -12.530908    1.5507765            1            1 
    2600     27.67091   -12.922702    2.0391206            1            1 
    2700    27.158784   -13.306789    3.7355268            1            1 
    2800    25.635671   -13.502047    2.9431633            1            1 
    2900    24.648357   -12.388002   0.44910075            1            1 
    3000    22.988768   -10.685349   0.37214853            1            1 
    3100    21.788719   -10.171928  -0.95734833            1            1 
    3200    22.707514   -9.6682633  -0.32868418            1            1 
    3300    22.907772   -10.612766 -0.024319089            1            1 
    3400    24.276426   -10.802246   0.44731188            1            1 
    3500    25.086959   -10.797849    2.3218091            1            1 
    3600    26.064365   -12.589537    1.2460738            1            1 
    3700    24.656426   -11.956895   0.57862216            1            1 
    3800    22.316856   -11.174148   -0.7567936            1            1 
    3900    22.590299   -9.5928781    0.4127727            1            1 
    4000    22.353461   -9.5887736  -0.34247396            1            1 
    4100    24.103395     -9.76584   0.98989862            1            1 
    4200     23.92261   -10.566828  -0.71536268            1            1 
    4300     24.44409   -11.358378   0.37166197            1            1 
    4400    24.772419   -11.324888   0.26732853            1            1 
    4500    23.150748   -11.309892  -0.43134573            1            1 
    4600    24.008361   -10.212365   0.43277527            1            1 
    4700    25.107401   -9.5753673  0.020406689            1            1 
    4800    23.658604   -8.9131426   0.46554745            1            1 
    4900    22.530251    -9.023311 -0.014405315            1            1 
    5000    23.110692   -9.6567397    0.9033234            1            1 
    5100    23.760144   -9.7623416   0.32059726            1            1 
    5200    25.048012   -9.6748253   0.66411561            1            1 
    5300     24.09835   -9.7867216   0.61128267            1            1 
    5400    22.984982   -9.9464053   0.28096544            1            1 
    5500    22.502003   -9.9294451  -0.53666181            1            1 
    5600    23.712298   -10.054318   0.64334761            1            1 
    5700    23.350796   -10.217344    2.1979894            1            1 
    5800    25.246549   -12.458753  0.055553025            1            1 
    5900    24.422272   -10.641177   0.82506839            1            1 
    6000    22.478315   -10.629525    -0.774321            1            1 
    6100    22.970846   -10.218868   0.59819592            1            1 
    6200    24.500063   -10.355481   0.55427078            1            1 
    6300    22.358071   -9.9041539   0.89500518            1            1 
    6400    23.924951   -11.121442  0.045999129            1            1 
    6500     24.83773   -10.464191    2.0048038            1            1 
    6600    24.752158   -9.9939162   0.53794465            1            1 
    6700    23.073765   -9.3662561   0.38618685            1            1 
    6800    21.940219   -8.4948475  -0.25184019            1            1 
    6900     22.23783   -8.8668868 0.0072863367            1            1 
    7000    25.667836   -10.473211   0.59852886            1            1 
    7100    23.352123   -9.0862268   0.85289283            1            1 
    7200    24.072107   -9.4020576  0.090222808            1            1 
    7300    22.806746   -8.4687857  -0.46892989            1            1 
    7400    24.798425   -9.1144357  -0.38738146            1            1 
    7500    24.748499   -9.1560558   0.94929896            1            1 
    7600    25.364753   -10.176533    0.2649225            1            1 
    7700    25.137988   -9.6617897    1.3920543            1            1 
    7800    25.502583   -10.320832   0.64812816            1            1 
    7900      24.5208   -9.9466543 -0.084071026            1            1 
    8000    24.653522   -10.312942   0.32535023            1            1 
    8100    23.129565   -9.6250435  0.016356303            1            1 
    8200     23.82421   -9.7608023   0.11631418            1            1 
    8300    25.081262   -9.3510452   0.92337854            1            1 
    8400    24.328205   -9.2875396   0.28266968            1            1 
    8500    25.041711   -11.254976  -0.21368615            1            1 
    8600    24.111473   -9.0389585    1.2102938            1            1 
    8700     23.50066   -9.0926498   0.78819229            1            1 
    8800    23.840962   -9.3434474  0.091313007            1            1 
    8900    23.081841   -9.0635966   0.56672001            1            1 
    9000    24.712103   -9.3243213   0.60301629            1            1 
    9100    24.457422    -9.439298  -0.60457515            1            1 
    9200    25.070662   -9.1945782    1.2399235            1            1 
    9300    25.019869   -8.7910068   0.42340497            1            1 
    9400     24.23662   -9.3111098  -0.75379175            1            1 
    9500    24.836827   -8.7324281   0.81857501            1            1 
    9600    24.901993   -8.6624128   0.84890877            1            1 
    9700    24.936686   -8.9869503    1.9627894            1            1 
    9800    25.393368   -9.8538595   0.45344428            1            1 
    9900    25.942336   -9.7854728   0.68352091            1            1 
   10000    24.636319   -9.3369442   0.62793231            1            1 
Loop time of 1.67474 on 1 procs for 10000 steps with 361 atoms

Performance: 2579511.004 tau/day, 5971.090 timesteps/s
99.8% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.47884    | 0.47884    | 0.47884    |   0.0 | 28.59
Bond    | 0.24918    | 0.24918    | 0.24918    |   0.0 | 14.88
Neigh   | 0.82974    | 0.82974    | 0.82974    |   0.0 | 49.54
Comm    | 0.01265    | 0.01265    | 0.01265    |   0.0 |  0.76
Output  | 0.00085878 | 0.00085878 | 0.00085878 |   0.0 |  0.05
Modify  | 0.075636   | 0.075636   | 0.075636   |   0.0 |  4.52
Other   |            | 0.02783    |            |       |  1.66

Nlocal:    361 ave 361 max 361 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost:    0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs:    2421 ave 2421 max 2421 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 2421
Ave neighs/atom = 6.70637
Ave special neighs/atom = 5.61773
Neighbor list builds = 4937
Dangerous builds = 5
Total wall time: 0:00:01