Commit 8f946b76 authored by Richard Berger

Add missing RST files

parent eac5f6e5
Input script structure
======================

This page describes the structure of a typical LAMMPS input script.
The examples directory in the LAMMPS distribution contains many sample
input scripts; it is discussed on the :doc:`Examples <Examples>` doc
page.

A LAMMPS input script typically has 4 parts:

1. Initialization
2. Atom definition
3. Settings
4. Run a simulation

The last 2 parts can be repeated as many times as desired, i.e. run a
simulation, change some settings, run some more, etc.  Each of the 4
parts is described in more detail below.  Remember that almost all
commands need only be used if a non-default value is desired.
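
A minimal sketch of this 4-part structure, based on the standard
Lennard-Jones melt example from the examples directory (all parameter
values are illustrative):


.. parsed-literal::

   # 1. Initialization
   units lj
   dimension 3
   atom_style atomic

   # 2. Atom definition
   lattice fcc 0.8442
   region box block 0 10 0 10 0 10
   create_box 1 box
   create_atoms 1 box
   mass 1 1.0

   # 3. Settings
   pair_style lj/cut 2.5
   pair_coeff 1 1 1.0 1.0 2.5
   velocity all create 1.44 87287
   fix 1 all nve
   thermo 100

   # 4. Run a simulation
   run 1000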

(1) Initialization

Set parameters that need to be defined before atoms are created or
read-in from a file.

The relevant commands are :doc:`units <units>`,
:doc:`dimension <dimension>`, :doc:`newton <newton>`,
:doc:`processors <processors>`, :doc:`boundary <boundary>`,
:doc:`atom\_style <atom_style>`, :doc:`atom\_modify <atom_modify>`.

If force-field parameters appear in the files that will be read, these
commands tell LAMMPS what kinds of force fields are being used:
:doc:`pair\_style <pair_style>`, :doc:`bond\_style <bond_style>`,
:doc:`angle\_style <angle_style>`, :doc:`dihedral\_style <dihedral_style>`,
:doc:`improper\_style <improper_style>`.

(2) Atom definition

There are 3 ways to define atoms in LAMMPS.  Read them in from a data
or restart file via the :doc:`read\_data <read_data>` or
:doc:`read\_restart <read_restart>` commands.  These files can contain
molecular topology information.  Or create atoms on a lattice (with no
molecular topology), using these commands: :doc:`lattice <lattice>`,
:doc:`region <region>`, :doc:`create\_box <create_box>`,
:doc:`create\_atoms <create_atoms>`.  The entire set of atoms can be
duplicated to make a larger simulation using the
:doc:`replicate <replicate>` command.

(3) Settings

Once atoms and molecular topology are defined, a variety of settings
can be specified: force field coefficients, simulation parameters,
output options, etc.

Force field coefficients are set by these commands (they can also be
set in the read-in files): :doc:`pair\_coeff <pair_coeff>`,
:doc:`bond\_coeff <bond_coeff>`, :doc:`angle\_coeff <angle_coeff>`,
:doc:`dihedral\_coeff <dihedral_coeff>`,
:doc:`improper\_coeff <improper_coeff>`,
:doc:`kspace\_style <kspace_style>`, :doc:`dielectric <dielectric>`,
:doc:`special\_bonds <special_bonds>`.

Various simulation parameters are set by these commands:
:doc:`neighbor <neighbor>`, :doc:`neigh\_modify <neigh_modify>`,
:doc:`group <group>`, :doc:`timestep <timestep>`,
:doc:`reset\_timestep <reset_timestep>`, :doc:`run\_style <run_style>`,
:doc:`min\_style <min_style>`, :doc:`min\_modify <min_modify>`.

Fixes impose a variety of boundary conditions, time integration, and
diagnostic options.  The :doc:`fix <fix>` command comes in many flavors.

Various computations can be specified for execution during a
simulation using the :doc:`compute <compute>`,
:doc:`compute\_modify <compute_modify>`, and :doc:`variable <variable>`
commands.

Output options are set by the :doc:`thermo <thermo>`, :doc:`dump <dump>`,
and :doc:`restart <restart>` commands.

(4) Run a simulation

A molecular dynamics simulation is run using the :doc:`run <run>`
command.  Energy minimization (molecular statics) is performed using
the :doc:`minimize <minimize>` command.  A parallel tempering
(replica-exchange) simulation can be run using the
:doc:`temper <temper>` command.


.. _lws: http://lammps.sandia.gov
.. _ld: Manual.html
.. _lc: Commands_all.html
Calculate thermal conductivity
==============================

The thermal conductivity kappa of a material can be measured in at
least 4 ways using various options in LAMMPS.  See the examples/KAPPA
directory for scripts that implement the 4 methods discussed here for
a simple Lennard-Jones fluid model.  Also, see the :doc:`Howto viscosity <Howto_viscosity>` doc page for an analogous discussion
for viscosity.

The thermal conductivity tensor kappa is a measure of the propensity
of a material to transmit heat energy in a diffusive manner as given
by Fourier's law

J = -kappa grad(T)

where J is the heat flux in units of energy per area per time and
grad(T) is the spatial gradient of temperature.  The thermal
conductivity thus has units of energy per distance per time per degree
K and is often approximated as an isotropic quantity, i.e. as a
scalar.
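
As a quick numerical illustration of Fourier's law, kappa can be backed
out from a measured heat flux and an imposed temperature gradient.  The
values below are hypothetical, in arbitrary consistent units:

```python
# Back out kappa from Fourier's law, J = -kappa * grad(T).
# Both input values are hypothetical, in consistent units.
J = -0.85       # heat flux component along the gradient direction
grad_T = 0.05   # temperature gradient along the same direction

kappa = -J / grad_T
print(kappa)    # about 17.0
```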

The first method is to set up two thermostatted regions at opposite
ends of a simulation box, or one in the middle and one at the end of a
periodic box.  By holding the two regions at different temperatures
with a :doc:`thermostatting fix <Howto_thermostat>`, the energy added to
the hot region should equal the energy subtracted from the cold region
and be proportional to the heat flux moving between the regions.  See
the papers by :ref:`Ikeshoji and Hafskjold <howto-Ikeshoji>` and
:ref:`Wirnsberger et al <howto-Wirnsberger>` for details of this idea.  Note
that thermostatting fixes such as :doc:`fix nvt <fix_nh>`, :doc:`fix langevin <fix_langevin>`, and :doc:`fix temp/rescale <fix_temp_rescale>` store the cumulative energy they
add/subtract.
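
A sketch of this two-thermostat setup, adapted from the scripts in the
examples/KAPPA directory (region bounds, temperatures, and random seeds
are illustrative):


.. parsed-literal::

   region hot block INF INF INF INF 0 1
   region cold block INF INF INF INF 10 11
   compute Thot all temp/region hot
   compute Tcold all temp/region cold
   fix 1 all nve
   fix hot all langevin 1.70 1.70 1.0 59804 tally yes
   fix cold all langevin 1.00 1.00 1.0 287859 tally yes
   fix_modify hot temp Thot
   fix_modify cold temp Tcold

With *tally yes*, the cumulative energy added/subtracted by each
thermostat can be output as f\_hot and f\_cold in thermo output.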

Alternatively, as a second method, the :doc:`fix heat <fix_heat>` or
:doc:`fix ehex <fix_ehex>` commands can be used in place of thermostats
on each of two regions to add/subtract specified amounts of energy to
both regions.  In both cases, the resulting temperatures of the two
regions can be monitored with the :doc:`compute temp/region <compute_temp_region>` command and
the temperature profile of the intermediate region can be monitored
with the :doc:`fix ave/chunk <fix_ave_chunk>` and :doc:`compute ke/atom <compute_ke_atom>` commands.
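
A minimal sketch of the fix heat variant, again modeled on the
examples/KAPPA scripts (the flux value and region names are
illustrative):


.. parsed-literal::

   fix hot all heat 1 100.0 region hot
   fix cold all heat 1 -100.0 region cold
   compute Thot all temp/region hot
   compute Tcold all temp/region cold

Here 100.0 units of energy (in the chosen units system) are added to the
hot region and subtracted from the cold region every timestep.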

The third method is to perform a reverse non-equilibrium MD simulation
using the :doc:`fix thermal/conductivity <fix_thermal_conductivity>`
command which implements the rNEMD algorithm of Muller-Plathe.
Kinetic energy is swapped between atoms in two different layers of the
simulation box.  This induces a temperature gradient between the two
layers which can be monitored with the :doc:`fix ave/chunk <fix_ave_chunk>` and :doc:`compute ke/atom <compute_ke_atom>` commands.  The fix tallies the
cumulative energy transfer that it performs.  See the :doc:`fix thermal/conductivity <fix_thermal_conductivity>` command for
details.
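
A minimal sketch of the Muller-Plathe setup with temperature-profile
monitoring, modeled on the examples/KAPPA scripts (sampling parameters
are illustrative; the conversion T = KE/1.5 assumes LJ units):


.. parsed-literal::

   fix 3 all thermal/conductivity 10 z 20

   compute ke all ke/atom
   variable temp atom c_ke/1.5
   compute layers all chunk/atom bin/1d z lower 0.05 units reduced
   fix 4 all ave/chunk 10 100 1000 layers v_temp file profile.mp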

The fourth method is based on the Green-Kubo (GK) formula which
relates the ensemble average of the auto-correlation of the heat flux
to kappa.  The heat flux can be calculated from the fluctuations of
per-atom potential and kinetic energies and per-atom stress tensor in
a steady-state equilibrated simulation.  This is in contrast to the
two preceding non-equilibrium methods, where energy flows continuously
between hot and cold regions of the simulation box.

The :doc:`compute heat/flux <compute_heat_flux>` command can calculate
the needed heat flux and describes how to implement the Green-Kubo
formalism using additional LAMMPS commands, such as the :doc:`fix ave/correlate <fix_ave_correlate>` command to calculate the needed
auto-correlation.  See the doc page for the :doc:`compute heat/flux <compute_heat_flux>` command for an example input script
that calculates the thermal conductivity of solid Ar via the GK
formalism.
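
A skeleton of such a Green-Kubo calculation, following the pattern shown
on the :doc:`compute heat/flux <compute_heat_flux>` doc page (the
sampling and correlation-window parameters are illustrative):


.. parsed-literal::

   compute myKE all ke/atom
   compute myPE all pe/atom
   compute myStress all stress/atom NULL virial
   compute flux all heat/flux myKE myPE myStress
   fix JJ all ave/correlate 10 200 2000 &
       c_flux[1] c_flux[2] c_flux[3] type auto file J0Jt.dat ave running

The heat-flux auto-correlation accumulated in J0Jt.dat can then be
time-integrated to obtain kappa.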


----------


.. _howto-Ikeshoji:



**(Ikeshoji)** Ikeshoji and Hafskjold, Molecular Physics, 81, 251-261
(1994).

.. _howto-Wirnsberger:



**(Wirnsberger)** Wirnsberger, Frenkel, and Dellago, J Chem Phys, 143, 124104
(2015).


.. _lws: http://lammps.sandia.gov
.. _ld: Manual.html
.. _lc: Commands_all.html
Build LAMMPS as a shared library
================================

Build LAMMPS as a shared library using make
-------------------------------------------

Instructions on how to build LAMMPS as a shared library are given on
the :doc:`Build\_basics <Build_basics>` doc page.  A shared library is
one that is dynamically loadable, which is what Python requires to
wrap LAMMPS.  On Linux this is a library file that ends in ".so", not
".a".

From the src directory, type


.. parsed-literal::

   make foo mode=shlib

where foo is the machine target name, such as mpi or serial.
This should create the file liblammps\_foo.so in the src directory, as
well as a soft link liblammps.so, which is what the Python wrapper will
load by default.  Note that if you are building multiple machine
versions of the shared library, the soft link is always set to the
most recently built version.

.. note::

   If you are building LAMMPS with an MPI or FFT library or other
   auxiliary libraries (used by various packages), then all of these
   extra libraries must also be shared libraries.  If the LAMMPS
   shared-library build fails with an error complaining about this, see
   the :doc:`Build\_basics <Build_basics>` doc page.

Build LAMMPS as a shared library using CMake
--------------------------------------------

When using CMake the following two options are necessary to generate the LAMMPS
shared library:


.. parsed-literal::

   -D BUILD_LIB=on            # enable building LAMMPS as a library
   -D BUILD_SHARED_LIBS=on    # enable building of LAMMPS shared library (both options are needed!)

These two options create a liblammps.so which contains the majority of
the LAMMPS code. The generated lmp binary also dynamically links to this
library, so the liblammps.so file must either be in the same directory, in a
system library path (e.g. /usr/lib64/), or in a directory listed in LD\_LIBRARY\_PATH.
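
To verify that the lmp binary can resolve the shared library, you can
inspect its dynamic dependencies (Linux example):


.. parsed-literal::

   ldd lmp | grep liblammps

If the library cannot be resolved, ldd reports it as "not found", which
usually means LD\_LIBRARY\_PATH needs to be adjusted as described below.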

If you want to use the shared library with Python, the recommended way is to
create a virtualenv and use it as CMAKE\_INSTALL\_PREFIX.


.. parsed-literal::

   # create virtualenv
   virtualenv --python=$(which python3) myenv3
   source myenv3/bin/activate

   # build library
   mkdir build
   cd build
   cmake -D PKG_PYTHON=on -D BUILD_LIB=on -D BUILD_SHARED_LIBS=on -D CMAKE_INSTALL_PREFIX=$VIRTUAL_ENV ../cmake
   make -j 4

   # install into prefix
   make install

This will also install the Python module into your virtualenv. Since virtualenv
doesn't change your LD\_LIBRARY\_PATH, you still need to add the virtualenv's
lib64 folder, which contains the installed liblammps.so, to that variable.


.. parsed-literal::

   export LD_LIBRARY_PATH=$VIRTUAL_ENV/lib64:$LD_LIBRARY_PATH

Starting Python outside (!) of your build directory, but with the virtualenv
enabled and LD\_LIBRARY\_PATH set, gives you access to LAMMPS via Python.
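
A quick interactive check that the wrapper loads correctly (assuming the
lammps Python module was installed into the virtualenv as above):


.. parsed-literal::

   >>> from lammps import lammps
   >>> lmp = lammps()
   >>> lmp.version()

If everything is set up correctly, creating the lammps object succeeds
and version() should return the LAMMPS version as an integer.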


.. _lws: http://lammps.sandia.gov
.. _ld: Manual.html
.. _lc: Commands_all.html
Benchmarks
==========

Current LAMMPS performance is discussed on the `Benchmarks page <http://lammps.sandia.gov/bench.html>`_ of the `LAMMPS website <lws_>`_
where timings and parallel efficiency are listed.  The page has
several sections, which are briefly described below:

* CPU performance on 5 standard problems, strong and weak scaling
* GPU and Xeon Phi performance on same and related problems
* Comparison of cost of interatomic potentials
* Performance of huge, billion-atom problems

The 5 standard problems are as follows:

#. LJ = atomic fluid, Lennard-Jones potential with 2.5 sigma cutoff (55
   neighbors per atom), NVE integration
#. Chain = bead-spring polymer melt of 100-mer chains, FENE bonds and LJ
   pairwise interactions with a 2\^(1/6) sigma cutoff (5 neighbors per
   atom), NVE integration
#. EAM = metallic solid, Cu EAM potential with 4.95 Angstrom cutoff (45
   neighbors per atom), NVE integration
#. Chute = granular chute flow, frictional history potential with 1.1
   sigma cutoff (7 neighbors per atom), NVE integration
#. Rhodo = rhodopsin protein in solvated lipid bilayer, CHARMM force
   field with a 10 Angstrom LJ cutoff (440 neighbors per atom),
   particle-particle particle-mesh (PPPM) for long-range Coulombics, NPT
   integration


Input files for these 5 problems are provided in the bench directory
of the LAMMPS distribution.  Each has 32,000 atoms and runs for 100
timesteps.  The size of the problem (number of atoms) can be varied
using command-line switches as described in the bench/README file.
This is an easy way to test performance and either strong or weak
scalability on your machine.

The bench directory includes a few log.\* files that show performance
of these 5 problems on 1 or 4 cores of a Linux desktop.  The bench/FERMI
and bench/KEPLER dirs have input files and scripts and instructions
for running the same (or similar) problems using OpenMP or GPU or Xeon
Phi acceleration options.  See the README files in those dirs and the
:doc:`Speed packages <Speed_packages>` doc pages for instructions on how
to build LAMMPS and run on that kind of hardware.

The bench/POTENTIALS directory has input files which correspond to the
table of results in the
`Potentials <http://lammps.sandia.gov/bench.html#potentials>`_ section of
the Benchmarks web page, so you can also run those test problems on
your machine.

The `billion-atom <http://lammps.sandia.gov/bench.html#billion>`_ section
of the Benchmarks web page has performance data for very large
benchmark runs of simple Lennard-Jones (LJ) models, which use the
bench/in.lj input script.


----------


For all the benchmarks, a useful metric is the CPU cost per atom per
timestep.  Since performance scales roughly linearly with problem size
and timesteps for all LAMMPS models (i.e. interatomic or coarse-grained
potentials), the run time of any problem using the same model (atom
style, force field, cutoff, etc) can then be estimated.
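
For example, given a measured cost per atom per timestep, the run time
of a larger problem using the same model can be estimated.  The cost
value below is hypothetical; measure it on your own machine from a short
benchmark run:

```python
# Estimate total run time from a per-atom-per-timestep cost.
cost = 2.0e-6    # CPU seconds per atom per timestep (hypothetical)
natoms = 32000   # problem size, as in the standard benchmark inputs
nsteps = 100     # timesteps, as in the standard benchmark inputs

runtime = cost * natoms * nsteps
print(runtime)   # about 6.4 seconds
```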

Performance on a parallel machine can also be predicted from one-core
or one-node timings if the parallel efficiency can be estimated.  The
communication bandwidth and latency of a particular parallel machine
affect the efficiency.  On most machines LAMMPS will give a parallel
efficiency on these benchmarks above 50% as long as the number of
atoms/core is a few hundred or more, and closer to 100% for large
numbers of atoms/core.  This is for all-MPI mode with one MPI task per
core.  For nodes with accelerator options or hardware (OpenMP, GPU,
Phi), you should first measure single node performance.  Then you can
estimate parallel performance for multi-node runs using the same logic
as for all-MPI mode, except that now you will typically need many more
atoms/node to achieve good scalability.
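
This logic can be sketched as a small timing estimate.  The one-core
time and the assumed parallel efficiency below are hypothetical numbers,
not measured data:

```python
# Predict parallel run time from a one-core timing and an assumed
# parallel efficiency (both values are hypothetical).
t_one_core = 64.0   # seconds for the full run on one core
ncores = 16
efficiency = 0.8    # assumed parallel efficiency at this atoms/core count

t_parallel = t_one_core / (ncores * efficiency)
speedup = t_one_core / t_parallel
print(t_parallel, speedup)   # roughly 5.0 seconds, 12.8x speedup
```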


.. _lws: http://lammps.sandia.gov
.. _ld: Manual.html
.. _lc: Commands_all.html
.. index:: angle\_style hybrid

angle\_style hybrid command
===========================

Syntax
""""""


.. parsed-literal::

   angle_style hybrid style1 style2 ...

* style1,style2 = list of one or more angle styles

Examples
""""""""


.. parsed-literal::

   angle_style hybrid harmonic cosine
   angle_coeff 1 harmonic 80.0 30.0
   angle_coeff 2\* cosine 50.0

Description
"""""""""""

The *hybrid* style enables the use of multiple angle styles in one
simulation.  An angle style is assigned to each angle type.  For
example, angles in a polymer flow (of angle type 1) could be computed
with a *harmonic* potential and angles in the wall boundary (of angle
type 2) could be computed with a *cosine* potential.  The assignment
of angle type to style is made via the :doc:`angle\_coeff <angle_coeff>`
command or in the data file.

In the angle\_coeff commands, the name of an angle style must be added
after the angle type, with the remaining coefficients being those
appropriate to that style.  In the example above, the 2 angle\_coeff
commands set angles of angle type 1 to be computed with a *harmonic*
potential with coefficients 80.0, 30.0 for K, theta0.  All other angle
types (2-N) are computed with a *cosine* potential with coefficient
50.0 for K.

If angle coefficients are specified in the data file read via the
:doc:`read\_data <read_data>` command, then the same rule applies:
the style name, e.g. "harmonic" or "cosine", must be added after the
angle type for each line in the "Angle Coeffs" section, e.g.


.. parsed-literal::

   Angle Coeffs

   1 harmonic 80.0 30.0
   2 cosine 50.0
   ...

If *class2* is one of the angle hybrid styles, the same rule holds for
specifying additional BondBond (and BondAngle) coefficients either via
the input script or in the data file.  I.e. *class2* must be added to
each line after the angle type.  For lines in the BondBond (or
BondAngle) section of the data file for angle types that are not
*class2*\ , you must use an angle style of *skip* as a placeholder, e.g.


.. parsed-literal::

   BondBond Coeffs

   1 skip
   2 class2 3.6512 1.0119 1.0119
   ...

Note that it is not necessary to use the angle style *skip* in the
input script, since BondBond (or BondAngle) coefficients need not be
specified at all for angle types that are not *class2*\ .

An angle style of *none* with no additional coefficients can be used
in place of an angle style, either in an input script angle\_coeff
command or in the data file, if you wish to turn off interactions
for specific angle types.


----------


Restrictions
""""""""""""


This angle style can only be used if LAMMPS was built with the
MOLECULE package.  See the :doc:`Build package <Build_package>` doc page
for more info.

Unlike other angle styles, the hybrid angle style does not store angle
coefficient info for individual sub-styles in :doc:`binary restart files <restart>`.  Thus, when restarting a simulation from a restart
file, you need to re-specify angle\_coeff commands.

Related commands
""""""""""""""""

:doc:`angle\_coeff <angle_coeff>`

**Default:** none


.. _lws: http://lammps.sandia.gov
.. _ld: Manual.html
.. _lc: Commands_all.html