Scottish Shelf Model. Part 6: Wider Domain and Sub-Domains Integration
Part 6 of the hydrodynamic model developed for Scottish waters.
Particle Tracking in FVCOM
- Method
From the FVCOM manual (Chen et al., 2013), the Lagrangian particle tracking module consists of solving a nonlinear system of ordinary differential equations (ODEs):

$$\frac{d\mathbf{x}}{dt} = \mathbf{v}(\mathbf{x},t) \qquad (2.1)$$
where $\mathbf{x}$ is the particle position at time $t$, $d\mathbf{x}/dt$ is the rate of change of the particle position in time and $\mathbf{v}(\mathbf{x},t)$ is the 3-dimensional velocity field generated by the model. This equation can be solved by any method suitable for coupled sets of nonlinear ODEs, such as the explicit Runge-Kutta (ERK) multi-stage methods, which are derived by solving the discrete integral:

$$\mathbf{x}(t_{n+1}) = \mathbf{x}(t_n) + \int_{t_n}^{t_{n+1}} \mathbf{v}(\mathbf{x}(\tau),\tau)\,d\tau \qquad (2.2)$$
Assume that $\mathbf{x}_n = \mathbf{x}(t_n)$ is the position of a particle at time $t = t_n$; then the new position $\mathbf{x}_{n+1} = \mathbf{x}(t_{n+1})$ of this particle at time $t_{n+1} = t_n + \Delta t$ can be determined by the 4th-order, 4-stage ERK method, where $\Delta t$ is the time step. The dependence of the velocity field on time is eliminated because the velocity field is considered stationary during the tracking interval $\Delta t$. It is important to understand that in a multidimensional system the local functional derivative of $\mathbf{v}$ must be evaluated at the correct sub-stage point $\mathbf{x}$ in $(x, y, z)$ space. On a 2-dimensional $(x, y)$ plane, for example, a particle can be tracked by solving the $x$ and $y$ velocity equations given as

$$\frac{dx}{dt} = u(x,y,t), \qquad \frac{dy}{dt} = v(x,y,t).$$
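To make the stepping concrete, here is a minimal, self-contained sketch of one 4-stage ERK step for a single particle on the $(x, y)$ plane, with the velocity field held fixed over $\Delta t$ as described above. The analytic solid-body-rotation velocity is a hypothetical stand-in for the model's interpolated velocity (INTERP_V in the offline code); this is not the FVCOM implementation itself.

```fortran
! Sketch of one 4th-order, 4-stage explicit Runge-Kutta (ERK) step for a
! single particle on the (x,y) plane.  The velocity field is treated as
! frozen over the step, as in the text.  The solid-body rotation below is
! a stand-in for the model's interpolated velocity (INTERP_V).
program rk4_demo
  implicit none
  real :: x, y
  x = 1.0;  y = 0.0
  call rk4_step(x, y, 300.0)          ! one DTI = 300 s advection step
  print *, 'new position:', x, y
contains
  subroutine rk4_step(px, py, dt)
    real, intent(inout) :: px, py
    real, intent(in)    :: dt
    real :: u1, v1, u2, v2, u3, v3, u4, v4
    call vel(px, py, u1, v1)                               ! stage 1
    call vel(px + 0.5*dt*u1, py + 0.5*dt*v1, u2, v2)       ! stage 2
    call vel(px + 0.5*dt*u2, py + 0.5*dt*v2, u3, v3)       ! stage 3
    call vel(px + dt*u3, py + dt*v3, u4, v4)               ! stage 4
    px = px + dt*(u1 + 2.0*u2 + 2.0*u3 + u4)/6.0           ! weighted update
    py = py + dt*(v1 + 2.0*v2 + 2.0*v3 + v4)/6.0
  end subroutine rk4_step
  subroutine vel(px, py, u, v)        ! stand-in for velocity interpolation
    real, intent(in)  :: px, py
    real, intent(out) :: u, v
    u = -1.0e-4*py                    ! solid-body rotation, period ~17.5 h
    v =  1.0e-4*px
  end subroutine vel
end program rk4_demo
```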
Many users have added a random-walk process to this 3-D Lagrangian tracking code to simulate sub-grid-scale turbulent variability in the velocity field. In FVCOM version 2.5 only the advection tracking program described above was included; however, version 3.1.6 (as used here) includes an optional random walk with $R$ uniformly distributed within the domain $[-1, 1]$, where the horizontal diffusivity $D_h$ is specified as a constant and the vertical diffusivity $D_v$ is provided by the circulation model. Each random-walk step adds a displacement of the standard form

$$\Delta x_i = R\,\sqrt{\frac{2\,D_i\,\Delta t}{r}} \qquad (2.3)$$

where $D_i$ is the diffusivity in the direction concerned and $r = 1/3$ is the variance of $R$.
Some smoothing of the diffusivity may be required in the vertical (Brickman and Smith, 2002). A key property of a correct Lagrangian stochastic model is that it keeps an initially uniform concentration of particles uniform for all time; this is called maintaining the well-mixed condition (WMC). If the result does not conform to the WMC, this may be due to inadequate resolution of the hydrodynamic model. Ideally, statistical tests should be conducted to ensure that the number of particles is sufficient for the investigation (Brickman and Smith, 2002). A sketch of a WMC-preserving vertical step is given below.
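For illustration, this sketch applies one vertical random-walk step with the drift correction of Visser (1997), the scheme the RAND_WALK routine also refers to (see the step list below); without the drift term, particles artificially accumulate where diffusivity is low, violating the WMC. The parabolic diffusivity profile and all numerical values are illustrative assumptions only, not the model's.

```fortran
! Sketch of one vertical random-walk step with the Visser (1997) drift
! correction, which keeps an initially uniform particle distribution
! uniform (the well-mixed condition).  The parabolic diffusivity profile
! is an illustrative stand-in for the smoothed model diffusivity (KH).
program rw_demo
  implicit none
  real, parameter :: hdep = 50.0      ! water depth (m), illustrative
  real, parameter :: r = 1.0/3.0      ! variance of R uniform on [-1,1]
  real :: z, dt, rn
  z  = 10.0                           ! depth below surface (m), +ve down
  dt = 30.0                           ! random-walk time step DTRW (s)
  call random_number(rn)
  rn = 2.0*rn - 1.0                   ! R uniform on [-1,1]
  ! drift term + diffusive step, with Kv evaluated at the offset depth
  z = z + dkdz(z)*dt + rn*sqrt(2.0*kv(z + 0.5*dkdz(z)*dt)*dt/r)
  z = max(0.0, min(hdep, z))          ! crude clipping; a real code reflects
  print *, 'new depth:', z
contains
  real function kv(zz)                ! stand-in diffusivity profile (m2/s)
    real, intent(in) :: zz
    kv = 1.0e-4 + 4.0e-3*(zz/hdep)*(1.0 - zz/hdep)
  end function kv
  real function dkdz(zz)              ! its analytic vertical derivative
    real, intent(in) :: zz
    dkdz = 4.0e-3*(1.0 - 2.0*zz/hdep)/hdep
  end function dkdz
end program rw_demo
```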
The in-line particle tracking program can be run on both single- and multi-processor computers. However, in the MPI parallel system, tracking many particles simultaneously while the model runs on a multi-processor computer can significantly reduce computational efficiency, since particles moving from one sub-domain to another require additional message passing. Multiple runs of the particle tracking would also require re-running the hydrodynamic model. For these reasons, users are advised to use the offline version of the particle tracking code.
- Offline particle tracking code
The offline code is provided in the FVCOM package (under 'root'/output/FVCOM_particle_tracking). It is not parallelised and runs on a single processor only. For large numbers of particles (O(100,000)) the calculations may be run on parallel processors by dividing the particles between processors, which is what has been done here. Note that the supplied version had some software errors, which have now been fixed (see 'Fixing the code' below). This code includes no biological behaviour, but particles can be released at different vertical levels, and there are options for a random-walk diffusion term in the horizontal and vertical as well as 3-dimensional advection. Release points may be specified at a fixed level or a sigma-coordinate level.
The modules of the offline package are shown in Table 1. The main code is in offlag.f90, which contains the main program PARTICLE_TRAJ; this in turn calls subroutines in offlag.f90, alloc_vars.f90, data_run.f90, ncdio.f90 and triangle_grid_edge.f90. The other modules set array sizes, initialise arrays and provide utility subroutines and functions.
Table 1: Modules of the Lagrangian offline particle tracking program (more details of the subroutines in each module are given in the Appendix)
Module | Purpose |
---|---|
alloc_vars.f90 | Allocates and initialises most arrays |
data_run.f90 | Inputs parameters which control the model run |
mod_inp.f90 | Decomposes an input line into variable name and variable value(s) |
mod_ncd.f90 | NetCDF utilities |
mod_prec.f90 | Defines floating point precision using kind |
mod_var.f90 | Sets global limits and array sizing parameters |
ncdio.f90 | NetCDF I/O: input time and output surface elevation and bottom depth |
offlag.f90 | Lagrangian particle tracking off-line program |
triangle_grid_edge.f90 | Defines the non-overlapping unstructured triangular meshes used for flux computations |
util.f90 | Utilities, from Numerical Recipes |
The steps in the particle tracking code are as follows:
1. CALL GETARG (implicit function) to read command line input: 'runname' and output stream
2. CALL DATA_RUN - reads 'basename'_run.dat for control parameters
3. CALL NCD_READ_GRID - reads model grid data from 1st daily file
4. CALL NCD_READ_SHAPE - reads model bathymetry, sigma levels and grid coefficients from 1st daily file
5. CALL NCD_READ_METRICS - reads 'runname'_metrics.nc file if it exists (output from TRIANGLE_GRID_EDGE)
6. CALL TRIANGLE_GRID_EDGE - calculates grid metrics (NTVE, NBVE, NBVT, ISBCE, ISONB) if 'runname'_metrics.nc does not exist
7. CALL NCD_WRITE_METRICS - writes 'runname'_metrics.nc if does not exist
8. CALL SET_LAG - calculate initial particle positions in model grid
a. Reads initial particle positions from file
b. CALL FHE_ROBUST - determine element containing each particle
c. CALL NCD_READ - read model data from daily files
d. CALL INTERP_ELH - linear interpolation of surface elevation
e. CALL NCD_WRITE - write initial particle positions to file
9. CALL LAG_UPDATE - updates particle positions
a. CALL NCD_READ - read next hourly data
b. CALL TRAJECT - calculates particle position due to advection by velocity field
i. CALL INTERP_V - interpolates velocity
ii. CALL INTERP_ELH
iii. CALL FHE_QUICK/FHE_ROBUST - locate particles in grid
iv. CALL INTERP_ELH
c. If IRW>=1 CALL RAND_WALK - calculates turbulent diffusion using random walk
i. Smooth vertical eddy diffusivity (KH) from model (in vertical), referring to Visser (1997) and Ross and Sharples (2004)
ii. Calculate 2nd derivative of KH using spline
iii. Calculate horizontal random walk (if IRW = 2 or 3)
iv. Calculate vertical random walk (if IRW = 1 or 3)
d. CALL NCD_WRITE - write particle positions to file - in ncdio.f90
- Fixing the code
- The code as supplied works for advection only, and for Cartesian coordinates but not for lat/lon coordinates.
- The start-up time is quite long (~2 hours for the large combined model grid). Pierre Cazenave at PML provided a modified code which shortens this set-up time for subsequent runs by saving the metrics calculated within triangle_grid_edge.f90; however, there was a bug in this code.
- Horizontal random walk: some variable declarations in RAND_WALK had to be corrected from integer to real.
- Vertical random walk: there was an error in the dimension of the vertical diffusivity, which is read in at nodes but must be interpolated to elements; this was not being done.
- Input and output files
The necessary input files are: (i) a control file; (ii) one or more daily model output files; and (iii) an initial particle release location file.
The particle tracking configuration is controlled by the file 'runname'_run.dat, which is in the 'root'/output/FVCOM_particle_tracking/run/'basename'/ directory (for the combined climatology output, 'basename' = combined). An example of this file is given below:
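(The example below is illustrative only: the parameter names are those described in the following list, the values are placeholders consistent with the settings used in this report, and the NAME = value layout assumes the input convention parsed by mod_inp.f90 - check a supplied run.dat for the exact format.)

```
DTI       = 300.0
INSTP     = 3600.0
DTOUT     = 1.0
TDRIFT    = 720.0
YEARLAG   = 1993
MONTHLAG  = 1
DAYLAG    = 1
HOURLAG   = 0
INPDIR    = ../../input/combined
GEOAREA   = none
OUTDIR    = ../../output/combined/Jan01
INFOFILE  = screen
LAGINI    = LAGINI
F_DEPTH   = F
P_SIGMA   = F
OUT_SIGMA = F
IRW       = 3
DHOR      = 1.0
DTRW      = 300.0
```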
The following parameters are set in this file:
1. Time steps:
DTI - advection time-step (seconds): this must divide exactly into INSTP. 300 seconds has been chosen as a compromise between a shorter time step, which would take more computational time, and a longer time step, which could allow particles to jump across cells or across land;
INSTP - input time step (seconds): here we use 1 hour, which is the output time-step of the model;
DTOUT - output time step (hours): 1 hour has been chosen;
TDRIFT - the total tracking time (hours): this is defined for different seasons and different representative particles.
2. Date parameters:
YEARLAG, MONTHLAG, DAYLAG and HOURLAG are the start time of particle tracking (year, month, day and hour).
Although YEARLAG is defined in this control file, the source code currently has a hardcoded value for the year, so you need to edit the first line of offlag.f90 (after the INTEGER declarations) to use the correct year for your model run, i.e. set YEAR0='YYYY'. For the climatology run an arbitrary year of 1993 is used.
3. Input and output file and directory names:
INPDIR is the input directory where output from the FVCOM model run is located - the convention is for this to be the 'root'/output/FVCOM_particle_tracking/input/'basename'/ directory (possibly using a softlink - see below);
GEOAREA is not used, so can be set to any value;
OUTDIR is the output directory for the particle tracking output - here we use a sub-directory of 'root'/output/FVCOM_particle_tracking/output/'basename'/, e.g. Jan01, Jan02, … for multiple runs of 'runname' = January (the standard files 'runname'_LAG.nc and 'runname'_diag.dat are written to this directory);
INFOFILE is where the particle tracking progress reports are sent, default = screen; this may be re-directed at run-time;
LAGINI.dat is the file containing the initial particle positions, which must be in the input directory 'INPDIR'.
4. Options for vertical coordinate:
F_DEPTH, P_SIGMA, OUT_SIGMA are logical variables (T or F).
F_DEPTH = T indicates the particle is released at a fixed Cartesian depth (in LAGINI.dat); P_SIGMA = T means the input particle coordinate is in σ-coordinates;
OUT_SIGMA = T gives the output particle coordinate in σ-coordinates.
5. Random walk settings:
IRW = 0 - no random walk (or turbulent diffusion); 1 - horizontal random walk only; 2 - vertical random walk only; 3 - both horizontal and vertical random walk;
DHOR is the (constant) horizontal diffusivity (m² s⁻¹);
DTRW is the time-step for the random walk (seconds), which must be divisible into DTI.
Model data input file name format: the model output netCDF files (input to particle tracking) must be in a separate directory for each year, since the model output files must be named 'runname'_dddd.nc, where dddd is the Julian day number starting from 0001. In the following runs we have used 'runname' = 'month'. So, for example, the first day in April is day 0091 (assuming it is not a leap year) and the filename looked for is April_0091.nc. This file (in the nominated input directory) is soft-linked to the actual model output, which is in 'root'/output/FVCOM_output/output/combined.
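The day number in the file name can be computed rather than looked up; a minimal sketch (assuming a non-leap year, as in the example above):

```fortran
! Sketch: build the daily input file name 'runname'_dddd.nc from a
! calendar date, using the day-of-year numbering described above
! (non-leap year assumed, as in the April example).
program dayname
  implicit none
  integer, parameter :: dpm(12) = (/31,28,31,30,31,30,31,31,30,31,30,31/)
  integer :: month, day, doy
  character(len=32) :: fname
  month = 4;  day = 1                 ! 1 April
  doy = sum(dpm(1:month-1)) + day     ! day-of-year = 91
  write(fname,'(a,i4.4,a)') 'April_', doy, '.nc'
  print *, trim(fname)                ! prints April_0091.nc
end program dayname
```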
The supplied code presently has no support for spherical coordinates: either the FVCOM model should be run in Cartesian coordinates, or FLAG_6 = -DPROJ should be enabled when compiling FVCOM (and PROJECTION_REFERENCE subsequently set in the 'basename'_run.nml file). In this case the model had been run in spherical coordinates without using this option, so the output had to be converted to Cartesian coordinates (the OS coordinate system was used). This was done by running a Matlab script.
Table 2: Model output variables used in particle tracking
Variable name | Long name | Dimensions | Units |
---|---|---|---|
x | nodal x-coordinate | node | metres |
y | nodal y-coordinate | node | metres |
nv | nodes surrounding element | three, nele | -- |
h | bathymetry | node | metres |
siglev | sigma levels | siglev, node | -- |
a1u | a1u | three, nele | -- |
a2u | a2u | three, nele | -- |
aw0 | aw0 | three, nele | -- |
awx | awx | three, nele | -- |
awy | awy | three, nele | -- |
time | time | time | days since 1858-11-17 00:00:00 |
zeta | water surface elevation | time, node | metres |
u | eastward water velocity | time, siglay, nele | m s⁻¹ |
v | northward water velocity | time, siglay, nele | m s⁻¹ |
omega | vertical sigma coordinate velocity | time, siglev, node | s⁻¹ |
km | turbulent eddy viscosity for momentum | time, siglev, node | m² s⁻¹ |
- Particle initial position file
The offline code requires only a single file to define the release sites for the particles (LAGINI.dat). An example is given below:
3
1 510876 6555485 5.0 !5m depth
2 500064 6563972 0.0 !Surface
3 530350 6561007 10.0 !10m depth
The first line defines the number of particles to track (this is read in with a DO loop in offlag.f90, so if you have a long list of positions but only wish to use the first few, set the header number to the number you wish to read in, leaving the remaining points defined in the list; the code will ignore them).
The definition of each point is ID XPOS YPOS DEPTH !NAME, with DEPTH positive downwards (for the Cartesian option this is the depth in metres below the surface). ID (integer) is not used and can be an arbitrary number, but it is useful for keeping track of the particle release locations. !NAME is an optional label for the released particle. Set the name of this file to match the LAGINI value in the 'runname'_run.dat file.
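For many release points it can be easier to generate this file than to edit it by hand. A minimal sketch that writes a LAGINI.dat in the format above, reusing the illustrative positions from the example:

```fortran
! Sketch: write a LAGINI.dat release file in the format described above
! (header = number of particles, then ID XPOS YPOS DEPTH !NAME per line).
! Positions and depths are the illustrative values from the example.
program make_lagini
  implicit none
  integer, parameter :: np = 3
  real :: xpos(np), ypos(np), depth(np)
  integer :: i
  xpos  = (/510876.0, 500064.0, 530350.0/)
  ypos  = (/6555485.0, 6563972.0, 6561007.0/)
  depth = (/5.0, 0.0, 10.0/)          ! metres below surface, +ve down
  open(10, file='LAGINI.dat', status='replace')
  write(10,'(i0)') np                 ! header: number of particles to read
  do i = 1, np
     write(10,'(i0,3(1x,f0.1),a)') i, xpos(i), ypos(i), depth(i), ' !release'
  end do
  close(10)
end program make_lagini
```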
- Output files
The particle tracking generates three output files: (i) an ASCII text file reporting progress, written to the screen or redirected to an output file if desired; (ii) the particle position file ('runname'_LAG.nc); and (iii) a diagnostic file ('runname'_diag.dat). The latter two files are written to OUTDIR as specified in the control file.
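The particle position file can be inspected with standard netCDF tools (e.g. ncdump -h). Below is a minimal Fortran sketch for reading it back with the netCDF-Fortran API; the variable name 'x' and its (particle, time) dimension ordering are assumptions for illustration - check the file header for the names actually written by ncdio.f90.

```fortran
! Sketch: read particle x-positions back from 'runname'_LAG.nc using the
! netCDF-Fortran API.  The variable name 'x' and its (particle, time)
! dimension ordering are assumptions; check the file with ncdump -h.
! Error checking of the return codes is omitted for brevity.
program read_lag
  use netcdf
  implicit none
  integer :: ncid, varid, dimids(2), nlag, ntime, ierr
  real, allocatable :: x(:,:)
  ierr = nf90_open('January_LAG.nc', nf90_nowrite, ncid)
  ierr = nf90_inq_varid(ncid, 'x', varid)
  ierr = nf90_inquire_variable(ncid, varid, dimids=dimids)
  ierr = nf90_inquire_dimension(ncid, dimids(1), len=nlag)
  ierr = nf90_inquire_dimension(ncid, dimids(2), len=ntime)
  allocate(x(nlag, ntime))
  ierr = nf90_get_var(ncid, varid, x)
  ierr = nf90_close(ncid)
  print *, 'particles:', nlag, '  time records:', ntime
end program read_lag
```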
- Compiling and running particle tracking
Edit the makefile to point to your netCDF library paths and compile the code with make. Commands to load the necessary compiler may be required beforehand. The output executable is called ptraj. Re-compile for a different architecture or if any changes are made to the code. For ease of use on the NOCL system the following commands are used:
Linux workstation:
module load intel/12.0.2
cd 'root'/output/FVCOM_particle_tracking/source
make clean
make
mv ptraj ptraj_linux
cd ../run/'basename'/
../../ptraj_linux 'runname' >output.txt
Mobius cluster (but run on single processor):
module add intel/compiler/64/14.0/2013_sp1.3.174
cd 'root'/output/FVCOM_particle_tracking/source
make clean
make
mv ptraj ptraj_mobius
cd ../run/'basename'/
../../ptraj_mobius 'runname' >output.txt
STEP-BY-STEP RUNNING PTRAJ
STEP 1: Create directory 'root'/output/FVCOM_particle_tracking/input/'basename'/
STEP 2: Copy the particle_tracking.dat file from particle_tracking/ (or another *.dat file) into FVCOM_particle_tracking/input/'basename'/ and rename it to 'runname'.dat
STEP 3: Change particle locations in 'runname'.dat from
1
1 1139800 2063300 0.0 !
to
NP
1 x1 y1 z1 !name1 (NB 0 = surface)
2 x2 y2 z2 !name2
…
NP xNP yNP zNP !nameNP (where NP = number of particles)
N.B. It is only possible to use x/y coordinates - these arrays must be populated in the model output or calculated from lat/lon; a script for this has been written.
STEP 4: Use soft-links to link the daily netCDF FVCOM model output files to
'root'/output/FVCOM_particle_tracking/input/'basename'/'runname'_dddd.nc
N.B. Input to particle tracking is daily files of hourly data - convert 5-day (or other length) files to daily files; a shell script has been written to do this.
STEP 5: Create output directories
'root'/output/FVCOM_particle_tracking/output/'basename'/run_num1 … run_numNP
e.g. Apr01 … Apr20
STEP 6: Copy 'runname'_run.dat to
'root'/output/FVCOM_particle_tracking/run/'basename'/'runname'_run.dat
STEP 7: Edit 'runname'_run.dat for each run to point to the new OUTDIR and LAGINI files, for multiple runs using different particle release points during the same tracking period
STEP 8: Compile code (see above)
STEP 9: Run code: ptraj 'runname' >output.txt
STEP 10: Check the results in 'root'/output/FVCOM_particle_tracking/output/'basename'/run_num/'runname'_LAG.nc