Simian Parallel Discrete Event Simulator
Nandakishore Santhi (Email: n s a n t h i "at" l a n l "dot" g o v) (For email id, remove spaces and then replace "at"/"dot" with @/.)
Simian is a just-in-time compiled, process-oriented, conservative parallel discrete event simulator from LANL with proven scaling performance. It aims to introduce JIT compilation to the mature PDES community, while at the same time introducing a limited form of structured parallelism to the JIT community. Using Simian, a user can write a discrete event simulation model of any system, at an appropriate level of abstraction, in just a few lines of code. The model can then be run either sequentially (without MPI) or in parallel using MPI.
Why You Should Try Out Simian:
Programmer productivity is improved through the exclusive use of a scripting interface, in Python or Lua, for describing PDES models.
Very small code base (under 600 lines, including comments, for the core parts)
Extremely competitive simulation speed in terms of events/second. For example, Simian reaches sequential simulation speeds of 2 million events/sec on a pHold-type example application on a single core of a modern desktop.
Your models run with or without MPI (either MPICH2 or OpenMPI) with no change to your code, allowing quick prototyping
You can mix event-based and process-oriented simulation techniques in the same model
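Simian's own API differs (see the Docs directory for the real interface), but the two styles named above can be illustrated with a tiny self-contained Python sketch: plain callbacks for the event-based style, and generators standing in for coroutine-style processes. All names here are illustrative, not part of Simian.

```python
# Illustrative toy, NOT Simian's API: a minimal discrete event loop that
# mixes direct event handlers with generator-based "processes".
import heapq

class MiniSim:
    def __init__(self):
        self.now = 0.0
        self.queue = []          # min-heap of (time, seq, callback)
        self.seq = 0             # tie-breaker so callbacks are never compared
        self.log = []

    def schedule(self, delay, callback):
        heapq.heappush(self.queue, (self.now + delay, self.seq, callback))
        self.seq += 1

    def start_process(self, gen):
        # The generator yields the delay until its next wake-up.
        def step():
            try:
                delay = next(gen)
                self.schedule(delay, step)
            except StopIteration:
                pass
        step()

    def run(self):
        while self.queue:
            self.now, _, callback = heapq.heappop(self.queue)
            callback()

sim = MiniSim()

# Event-based style: a plain handler scheduled at a time offset.
sim.schedule(5.0, lambda: sim.log.append(("event", sim.now)))

# Process-oriented style: a generator that sleeps twice, then finishes.
def worker():
    yield 2.0                            # wake at t = 2
    sim.log.append(("proc", sim.now))
    yield 4.0                            # wake again at t = 6
    sim.log.append(("proc", sim.now))

sim.start_process(worker())
sim.run()
print(sim.log)   # [('proc', 2.0), ('event', 5.0), ('proc', 6.0)]
```

Both styles share one event queue and one simulation clock, which is the essence of mixing them in a single model.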
The Simian directory contains the Lua implementation, which requires luajit-2.1.
The SimianPie directory contains the Python implementation, which requires Python 2.7.x (optionally with greenlets) or PyPy 2.4.x.
MPICH 3.1.4 or OpenMPI 1.6.x is needed only if using MPI.
See the Docs directory for API documentation. The Examples.Lua directory has examples of Simian (Lua) usage, and the Examples.Py directory has examples of SimianPie usage.
If not using MPI:
Set useMPI flag to false when initializing the Simian PDES Engine
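A minimal sketch of sequential (no-MPI) engine creation, assuming the constructor takes (name, startTime, endTime, minDelay, useMPI) as in the shipped examples; consult the Docs directory for the authoritative signature. A small stub stands in for SimianPie here so the sketch runs standalone:

```python
# Hedged sketch: the module name and constructor arguments are assumptions
# based on the shipped examples, not a verified specification.
try:
    from simian import Simian            # real SimianPie engine (module name assumed)
except ImportError:
    class Simian:                        # stand-in stub so this sketch is self-contained
        def __init__(self, simName, startTime, endTime, minDelay, useMPI):
            self.name, self.useMPI = simName, useMPI
        def run(self):
            pass                         # real engine: execute all scheduled events
        def exit(self):
            pass                         # real engine: finalize and flush output

# useMPI=False: the model runs sequentially; no MPI library is needed.
engine = Simian("SequentialExample", 0, 100, 0.001, False)
engine.run()
engine.exit()
```

With useMPI set to True instead, the same model would run in parallel under mpirun, unchanged.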
To use MPI with Simian (Lua):
If using MPICH:
(tested with MPICH 3.1.3) Set the useMPI flag to true, and create a link to libmpich.[dylib/so/dll] in the top directory
If using OpenMPI:
(Some later OpenMPI versions, such as 1.8.3, have a serious bug in message size reporting; use 1.6.x.) Set the useMPI flag to true, and create a link to libmpi.[dylib/so/dll] in the top directory. Then, in the file Simian/MPI.lua, comment out the line 'require "MPICH"' and uncomment the line 'require "MPI"'.
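The "set a link" step above can be sketched in shell as follows; the library paths are illustrative and system-dependent:

```shell
# From the Simian top directory, link the MPI shared library.
# Adjust the source path to wherever your MPI installation lives.
ln -sf /usr/lib/x86_64-linux-gnu/libmpich.so libmpich.so   # MPICH case

# OpenMPI case instead: link libmpi and edit Simian/MPI.lua as described above
#   (comment out 'require "MPICH"', uncomment 'require "MPI"'):
# ln -sf /usr/lib/x86_64-linux-gnu/libmpi.so libmpi.so
```

On macOS the library suffix is .dylib, and on Windows .dll, as noted above.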
To use the Python version SimianPie:
SimianPie is tested to work with CPython 2.7.x and PyPy 2.4.x. At present, when MPI is needed, SimianPie works with either MPICH2 through CTypes or with the MPI4Py module. If only OpenMPI is available, use Simian (Lua); if the Python version is unavoidable, the user should write a CTypes wrapper for OpenMPI similar to the distributed CTypes wrapper for MPICH 3.1.3. Set the useMPI flag to true. Create a link to libmpich.[dylib/so/dll] in the top directory, or pass the absolute path to the library to Simian() when creating the engine.
Test with pHold app without MPI:
Testing with MPI on LANL PDES benchmark app:
(On a medium-sized cluster with more than 1000 cores:) mpirun -np 1000 luajit-2.1.0-alpha Examples.Lua/pdes_lanl_benchmarkV8.lua 1000 100 1 0 0 false 1 0 100000 0 0.5 1 10 1000 1 true LANL_PDES.log
LANL internal reference:
CODE Title: Simian, version 1.5 (OSS)
LACC #: LA-CC-15-015
Copyright Number Assigned: C15036
Funding source: Laboratory-Directed Research and Development (LDRD)