New User Resources
Step 1 Installation
Download the most recent release of HydroGeoSphere. Follow the installation instructions and email the hostid.txt file (typically found in C:\Program_Files\HydroGeoSphere) to firstname.lastname@example.org. You will receive an hgs.lic license file; place it into the C:\Program_Files\HydroGeoSphere directory.
Note: HGS only runs on Windows and Linux machines.
Step 2 GROK
The first step for all HydroGeoSphere simulations is to run GROK, the HGS preprocessor. GROK reads and translates all of the input files into the model runtime code. All of the input commands are placed in the *.grok file. These commands are described in the HGS user manual, which you can find in your doc folder. As a first model, we will implement the Abdul test problem.
Your first model is the Abdul experimental catchment. The Abdul test case folder is located in ~/program_files/examples/abdul. Copy and paste these seven files into the abdul folder:
The folder should look like this:
Step 3 Run GROK
Open the abdul.grok file with your favorite text editor (e.g., Notepad, Sublime Text, Notepad++, Vim). Look through the input file; it contains all of the commands needed to create the Abdul test case. Run GROK by double-clicking the grok.exe program in the abdul folder.
To check the output from GROK, open the abdulo.eco file and scroll to the bottom. A successful GROK run ends with ---- Normal exit ---- at the bottom of the abdulo.eco file. If an error message was generated, return to Step 2 and check your input.
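On Linux (or any system with a shell and grep available), the same check can be scripted. This is a sketch, not part of the official workflow; the first line fabricates a minimal abdulo.eco so the snippet is self-contained, and you would omit it when checking a real run:

```shell
# Stand-in for a real GROK echo file -- omit this line for an actual run
printf -- '---- Normal exit ----\n' > abdulo.eco

# Show the last few lines of the echo file
tail -n 3 abdulo.eco

# Report success or failure based on the "Normal exit" marker
if grep -q 'Normal exit' abdulo.eco; then
  echo "GROK run OK"
else
  echo "GROK reported an error -- check abdulo.eco"
fi
```

A check like this is handy at the end of batch scripts that run many simulations in sequence.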
Step 4 Run HydroGeoSphere
Run HydroGeoSphere by double-clicking the phgs.exe program. This simulation may take several minutes to complete. Once the simulation is complete, you will have several output files, such as abdulo.head_pm.0009.
Step 5 Run HSPLOT
Run HSPLOT by double clicking on the hsplot.exe program. The HSPLOT program will produce two output files:
- abdulo.olf.dat (the overland flow output file)
- abdulo.pm.dat (the porous medium output file)
Note for Linux Users
When you log in to your Linux system, you will need to set the environment variable HGSDIR so that the hgs executable (hgs.x) can locate the license file, and add the executables to your PATH. To do so, edit your shell startup file (.bash_profile if you are using bash). For example, you can add two lines to set HGSDIR and to add that directory to PATH:
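The original lines were not included here; a minimal sketch, assuming HGS is installed in ~/hydrogeosphere (the path is an assumption; adjust it for your system):

```shell
# Assumed install location -- replace with the directory that actually
# contains hgs.x and the hgs.lic license file on your machine
export HGSDIR="$HOME/hydrogeosphere"

# Put the HGS executables on the search path
export PATH="$HGSDIR:$PATH"
```

After editing .bash_profile, log out and back in (or run `source ~/.bash_profile`) for the change to take effect.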
General Tips
- Work through the verification problems included in the installation to gain an understanding of basic model and process setup.
- Start with an existing grok file that is similar to your needs. The verification folder is a great place to start.
- The Reference Manual is searchable and includes an index. Search it to look up command functions.
- Copy the executable files (grok.exe, phgs.exe, and hsplot.exe) and the library files (libmmd.dll, libiomp5md.dll, and libifcoremd.dll) from the installation folder into your simulation folder. This allows for version auditing if you revisit simulation results at a future date.
- For large models, directory structure is important. Large input files (climate, geology) that are common to multiple simulations can be centrally located.
- The debug.control file is very useful for examining the current model state at an arbitrary solution time, or for saving output before prematurely terminating a simulation. See the note below on using the exclamation point to control commands in the debugger.
- Numerical models don’t like zeros! Use a small number instead of 0 (e.g., 1e-10).
- Think carefully about your simulation controls (e.g., max timestep, head control, convergence criteria, etc.,) as overly restrictive values can result in unnecessarily slow runtimes (sometimes orders of magnitude slower).
Grok Commands All Users Should Know
- ! – the exclamation point allows users to insert comments in the grok file. Liberal use of comments is great for documentation and can make it easier for other users to review and understand model setup. Use comments to temporarily deactivate commands rather than deleting them.
- Interpolate – Used with time series in boundary conditions. Smooths out the shock between stress periods and can significantly improve simulation time.
- Impermeable matrix – Shuts off subsurface flow when setting up and testing the overland flow domain.
- Skip on/off/rest – causes grok to skip over portions of the grok file without your having to delete them. Useful for troubleshooting bad runs.
- Mesh to tecplot – Outputs the 3D mesh during execution of grok.exe. Useful for inspecting model setup prior to simulation.
- Auto save on – Outputs a snapshot of heads and concentrations at regular wall clock intervals. Useful way to intermittently save simulation results. Particularly useful for long simulations where a computer failure is possible. Ensures that not too much simulation time is lost while not incurring significant data storage.
- *.eco – this file is generated during execution of grok.exe and is a more verbose version of what is displayed on screen while grok runs. Reviewing this file can be very informative if you suspect a setup issue or if grok fails.
- *.lst – this file is generated during phgs.exe execution. It is a more verbose version of the screen output and provides additional insight into simulation performance and water balance.
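Putting the comment and skip commands together, a grok file fragment might look like the sketch below. The command names come from the list above; the surrounding setup is illustrative only:

```
! Comments start with an exclamation point -- use them liberally
! to document the model setup.

! A command can be deactivated by commenting it out rather than deleting it:
! impermeable matrix

skip on
! Everything between "skip on" and "skip off" is ignored by grok,
! which is handy when troubleshooting a bad run.
skip off
```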
Visualization (with Tecplot)
- Running preplot.exe (found in the Tecplot installation folder) after running hsplot.exe significantly reduces Tecplot load times for large models by converting the hsplot ASCII output to binary.
- Blanking is a great way to view properties or results for different material zones.
- *.plot.control – controls hsplot.exe. Useful for decreasing or increasing the amount of information formatted for visualization. It can also be used to create files for other visualization tools such as ParaView.
Parallelization
- HGS parallelization can be used to significantly improve run times.
- Test the model in serial mode to make sure that everything is setup and running properly before switching to parallel mode
- Use an equal and even number of CPUs and domain partitions (e.g., 2, 4, 6, 8, …).
- Parallel efficiency degrades beyond a certain point (more CPUs isn’t always better for small models). Optimum parallel efficiency is often around 100,000 nodes per CPU.
- Visit the following link for more information: