Title: Tools for Processing and Analyzing Files from the Hydrological Catchment Model HYPE
Description: Work with model files (setup, input, output) from the hydrological catchment model HYPE: Streamlined file import and export, standard evaluation plot routines, diverse post-processing and aggregation routines for hydrological model analysis. The HYPEtools package is also archived at <doi:10.5281/zenodo.7627955> and can be cited in publications with Brendel et al. (2024) <doi:10.1016/j.envsoft.2024.106094>.
Authors: Rene Capell [aut, cre], Conrad Brendel [aut], Jafet Andersson [ctb], David Gustafsson [ctb], Jude Musuuza [ctb], Jude Lubega [ctb]
Maintainer: Rene Capell <[email protected]>
License: LGPL-3
Version: 1.6.3.9000
Built: 2024-11-05 04:49:45 UTC
Source: https://github.com/rcapell/hypetools
Function to find all SUBIDs of downstream sub-catchments along the main stem for a single sub-catchment.
AllDownstreamSubids(subid, gd, bd = NULL, write.arcgis = FALSE)
subid |
Integer, SUBID of a target sub-catchment (must exist in |
gd |
Dataframe, an imported 'GeoData.txt' file. Mandatory argument. See 'Details'. |
bd |
Dataframe, an imported 'BranchData.txt' file. Optional argument. See 'Details'. |
write.arcgis |
Logical. If |
AllDownstreamSubids finds all downstream SUBIDs of a given SUBID along the main stem (including itself but not including potential irrigation links or groundwater flows) using GeoData columns 'SUBID' and 'MAINDOWN'. If a BranchData file is provided, the function will also include information on downstream bifurcations.
AllDownstreamSubids returns a vector of downstream SUBIDs to the outlet if no BranchData is provided, otherwise a data frame with two columns: downstream with downstream SUBIDs and is.branch with logical values indicating if a downstream SUBID contains a bifurcation ('branch' in HYPE terms). Downstream SUBIDs are ordered from source to final outlet SUBID.
See also: AllUpstreamSubids, OutletSubids, OutletIds
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
AllDownstreamSubids(subid = 3344, gd = te)
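Because downstream SUBIDs are ordered from source to final outlet, the outlet SUBID can be picked directly from the result. A minimal follow-up sketch building on the example above (te and SUBID 3344 come from the demo model):
# Downstream SUBIDs of sub-catchment 3344, ordered source to outlet
te2 <- AllDownstreamSubids(subid = 3344, gd = te)
# The last element is the final outlet SUBID
tail(te2, 1)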
Function to find all SUBIDs of upstream sub-catchments for a single sub-catchment.
AllUpstreamSubids( subid, gd, bd = NULL, sort = FALSE, get.weights = FALSE, write.arcgis = FALSE )
subid |
SUBID of a target sub-catchment (must exist in |
gd |
A data frame, containing 'SUBID' and 'MAINDOWN' columns, e.g. an imported 'GeoData.txt' file. Mandatory argument. See 'Details'. |
bd |
A data frame, containing 'BRANCHID' and 'SOURCEID' columns, and 'MAINPART' with argument |
sort |
Logical. If |
get.weights |
Logical. If |
write.arcgis |
Logical. If |
AllUpstreamSubids finds all upstream SUBIDs of a given SUBID (including itself but not including potential irrigation links or groundwater flows) using GeoData columns 'SUBID' and 'MAINDOWN', i.e. the full upstream catchment. If a BranchData file is provided, the function will also include upstream areas which are connected through an upstream bifurcation. The results can be directly used as a 'partial model setup file' ('pmsf.txt') using the export function WritePmsf.
If argument get.weights is set to TRUE, weighting fractions are returned along with upstream SUBIDs. The fractions are based on column 'MAINPART' in argument bd. The function considers fractions from bifurcation branches which flow into the basin, and fractions where bifurcation branches remove discharge from the basin. Fractions are incrementally updated, i.e. nested bifurcation fractions are multiplied. For details on bifurcation handling in HYPE, see the HYPE online documentation for BranchData.txt.
If get.weights is FALSE, AllUpstreamSubids returns a vector of SUBIDs, otherwise a two-column data frame with SUBIDs in the first and flow weight fractions in the second column.
See also: UpstreamGeoData, AllDownstreamSubids
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
AllUpstreamSubids(subid = 63794, gd = te)
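A small follow-up sketch using the documented sort argument (the exact ordering convention follows the argument description above); te is the GeoData object from the example:
# Upstream SUBIDs of sub-catchment 63794, with sorting enabled
AllUpstreamSubids(subid = 63794, gd = te, sort = TRUE)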
Calculate annual regimes based on long-term time series, typically imported HYPE basin output and time output result files.
AnnualRegime( x, stat = c("mean", "sum"), ts.in = NULL, ts.out = NULL, start.mon = 1, incl.leap = FALSE, na.rm = TRUE, format = c("list", "long") )
x |
Data frame, with column-wise equally-spaced time series. Date-times in |
stat |
Character string, either |
ts.in |
Character string, timestep of |
ts.out |
Character string, timestep for results, defaults to |
start.mon |
Integer between 1 and 12, starting month of the hydrological year, used to order the output. |
incl.leap |
Logical, leap days (Feb 29) are removed from results per default, set to |
na.rm |
Logical, indicating if |
format |
Character string. Output format, |
AnnualRegime uses aggregate to calculate long-term average regimes for all data columns provided in x, including long-term arithmetic means, medians, minima and maxima, and 5%, 25%, 75%, and 95% percentiles. With HYPE result files, AnnualRegime is particularly applicable to basin and time output files imported using ReadBasinOutput and ReadTimeOutput. The function does not check if equally spaced time steps are provided in x or if the overall time period in x covers full years so that the calculated averages are based on the same number of values.
Values within each output time period can be aggregated either by arithmetic means or by sums within each period, e.g. typically means for temperatures and sums for precipitation. Long-term aggregated values are always computed as arithmetic means.
If argument format is list, AnnualRegime returns a list with 8 elements and two additional attributes. Each list element contains a named data frame with aggregated annual regime data: arithmetic means, medians, minima, maxima, and 5%, 25%, 75%, and 95% percentiles. Each data frame contains, in column-wise order: reference dates in POSIXct format, date information as string, and aggregated variables found in x.
Reference dates are given as dates in either 1911, 1912, or 1913 (just because a leap day and outer weeks '00'/'53' occur during these years) and can be used for plots starting at the beginning of the hydrological year (with axis annotations set to months only). Daily and hourly time steps are given as is, weekly time steps are given as mid-week dates (Wednesday), monthly time steps as mid month dates (15th).
If argument format is long, AnnualRegime returns a four-column data frame with one value per row, and all variable information aligned with the values. Columns in the data frame: id with SUBIDs or HYPE variable IDs, month/week/day with aggregation time steps, name with short names of regime data (means, medians, minima, maxima, percentiles), and value with the variable value.
Attribute period contains a two-element POSIXct vector containing start and end dates of the source data. Attribute timestep contains a timestep keyword corresponding to function argument ts.out.
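A minimal sketch showing how the period and timestep attributes described above can be accessed from a computed regime object; the file path and variable selection follow the examples further below:
te <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
reg <- AnnualRegime(te[, c("DATE", "COUT")], ts.in = "day", ts.out = "month")
# Two-element POSIXct vector with start and end of the source data
attr(reg, "period")
# Time step keyword corresponding to ts.out
attr(reg, "timestep")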
If weekly data are provided in x, AnnualRegime will inflate x to daily time steps before computing results. Values in x will be assigned to the preceding week days, corresponding to HYPE file output, where weekly values are conventionally printed on the last day of the week. If NA values are present in the original weekly data, these will be filled with the next available value as a side effect of the inflation.
If weekly output time steps are computed in combination with a user-defined start month, the function will round up weeks to determine the first week of the hydrological year. Weeks are identified using Monday as first day of the week and the first Monday of the year as day 1 of week 1 (see conversion code %W in strptime). Boundary weeks '00' and '53' are merged to week '00' prior to average computations.
# Source data, HYPE basin output with a number of result variables
te <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
# Daily discharge regime, computed and observed, hydrological year from October
AnnualRegime(te[, c("DATE", "COUT", "ROUT")], ts.in = "day", start.mon = 10)
# Id., aggregated to weekly means
AnnualRegime(te[, c("DATE", "COUT", "ROUT")], ts.in = "day", ts.out = "week", start.mon = 10)
# Long format, e.g. for subsequent plotting with ggplot
AnnualRegime(te[, c("DATE", "COUT", "ROUT")], ts.in = "day", ts.out = "week", format = "long", start.mon = 10)
# Precipitation regime, monthly sums
AnnualRegime(te[, c("DATE", "UPCPRC")], ts.in = "day", ts.out = "month", stat = "sum")
Function to plot upstream-averaged landscape property classes of one or several sub-basins as bar plots, e.g. land use or soils. Builds on barplot.
BarplotUpstreamClasses( x, type = c("custom", "landuse", "soil", "crop"), desc = NULL, class.names = NULL, xlab = NULL, ylab = "Area fraction (%)", ylim = c(-0.05, max(x[, -1] * 150)), names.arg = rep("", ncol(x) - 1), cex.axis = 1, cex.names = 0.9, col = NULL, border = NA, legend.text = NULL, legend.pos = "left", pars = list(mar = c(1.5, 3, 0.5, 0.5) + 0.1, mgp = c(1.5, 0.3, 0), tcl = NA, xaxs = "i") )
x |
Data frame, containing column-wise class group fractions with SUBIDs in first column. Typically a result
from |
type |
Character string keyword for class group labeling, used in combination with |
desc |
List for use with |
class.names |
Character vector of class group names, with same length as number of class group fractions in |
xlab |
Character string, x-axis label, with defaults for standard groups land use, soil, and crops. |
ylab |
Character string, y-axis label. |
ylim |
Numeric, two element vector with limits for the y-axis. Defaults to values which give ample space for bar labels. |
names.arg |
Character vector, see |
cex.axis |
Numeric, character expansion factor for axis annotation and labels. |
cex.names |
Numeric, character expansion factor for class group labels. |
col |
Colors for bars. Defaults to |
border |
Colors for bar borders. Defaults to no borders. |
legend.text |
Character, if provided, a legend will be plotted. Defaults to none if one sub-basin is plotted, and SUBIDs
if several sub-basins are plotted. Set to |
legend.pos |
Character keyword for legend positioning, most likely |
pars |
List of tagged values which are passed to |
BarplotUpstreamClasses is a wrapper for barplot, with vertical labels plotted over the class group bars. Most arguments have sensible defaults, but can be adapted for fine-tuning if necessary.
Column names of x are used to link class groups to class IDs in desc. HYPE has no formal requirements on how class IDs are numbered, and when one of the standard groups land use, soil, or crop is provided in x, there might be missing class IDs. Class names in desc are matched against column name endings '_x' in x. If manual names are provided in class.names, the column name endings must be a consecutive sequence from 1 to the number of elements in class.names.
The function returns bar midpoints, see description in barplot.
See also: UpstreamGroupSLCClasses, barplot
# Import source data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te2 <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
te3 <- ReadDescription(filename = system.file("demo_model", "description.txt", package = "HYPEtools"))
# Calculate plot data, upstream soil fractions
te4 <- UpstreamGroupSLCClasses(subid = 63794, gd = te1, gcl = te2, type = "soil")
# Function call
BarplotUpstreamClasses(x = te4, type = "s", desc = te3, ylim = c(0, 100))
BoxplotSLCClasses plots SLC class distributions for all SUBIDs in a GeoData data frame as boxplots. Boxes can represent distributions of area fractions or absolute areas.
BoxplotSLCClasses( gd, gcl, col.landuse = "rainbow", col.group = NULL, lab.legend = NULL, pos.legend = 1, abs.area = FALSE, log = "", ylim = NULL, range = 0, mar = c(3, 3, 1, 7) + 0.1, mgp = c(1.5, 0.2, 0), tcl = 0.1, xaxs = "i", xpd = TRUE )
gd |
Data frame containing columns with SLC fractions, typically a 'GeoData.txt' file imported with |
gcl |
Data frame containing columns with SLCs and corresponding land use and soil class IDs, typically a 'GeoClass.txt'
file imported with |
col.landuse |
Specification of colors for box outlines, to represent land use classes. Either a keyword character string, or a vector of
colors with one element for each land use class as given in argument |
col.group |
Integer vector of the same length as the number of land use classes given in
|
lab.legend |
Character string giving optional land use and soil class names to label the legend. Land use classes first, then soil classes.
Both following class IDs as given in |
pos.legend |
Numeric, legend position in x direction. Given as position on the right hand outside of the plot area in x-axis units. |
abs.area |
Logical, if |
log |
Character string, passed to |
ylim |
Numeric vector of length 2, y-axis minimum and maximum. Set automatically if not specified. |
range |
Argument to |
mar , mgp , tcl , xaxs , xpd
|
Arguments passed to |
BoxplotSLCClasses allows analyzing the occurrence of individual SLCs in a given model set-up, both in terms of area fractions (SLC values) and absolute areas. The function uses boxplot to plot distributions of SLCs of all SUBIDs in a GeoData data frame. Land use classes are color-coded, and soil classes are marked by a point symbol below each box. Box whiskers extend to the data extremes.
BoxplotSLCClasses returns a plot to the currently active plot device, and invisibly a data frame of SLC class fractions with 0 values replaced by NAs. If absolute areas are plotted, these are returned in the data frame.
There is a maximum of 26 symbols available for marking soil classes. BoxplotSLCClasses can be quite crowded, depending on the number of SLCs in a model set-up. Tested and recommended plot device dimensions are 14 x 7 inches (width x height), e.g.:
> x11(width = 14, height = 7)
> png("mySLCdistri.png", width = 14, height = 7, units = "in", res = 600)
# Import source data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te2 <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
BoxplotSLCClasses(gd = te1, gcl = te2)
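A follow-up sketch using the documented abs.area argument to plot absolute SLC areas instead of area fractions (te1 and te2 as imported in the example above):
# Distributions of absolute SLC areas instead of area fractions
BoxplotSLCClasses(gd = te1, gcl = te2, abs.area = TRUE)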
CleanSLCClasses attempts to clean small SLC fractions within each SUBID (sub-catchment) from an imported GeoData file using user-provided area thresholds. Cleaning can be performed along class similarity rules or along SLC area alone.
CleanSLCClasses( gd, gcl, m1.file = NULL, m1.class = "s", m1.clean = rep(TRUE, 2), m1.precedence = rep(TRUE, 2), m2.frac = NULL, m2.abs = NULL, signif.digits = 3, verbose = TRUE, progbar = TRUE )
gd |
Data frame containing columns with SUBIDs, SUBID areas in m^2, and SLC fractions, typically a 'GeoData.txt' file
imported with |
gcl |
Data frame containing columns with SLCs and corresponding land use and soil class IDs, typically a 'GeoClass.txt'
file imported with |
m1.file |
Character string, path and file name of the soil or land use class transfer table, a tab-separated text file. Format see details.
A value of |
m1.class |
Character string, either "soil" or "landuse", can be abbreviated. Gives the type of transfer class table for method 1 cleaning. See Details. |
m1.clean |
A logical vector of length 2 which indicates if cleaning should be performed for area fraction thresholds (position 1) and/or absolute area thresholds (position 2). |
m1.precedence |
A logical vector of length 2 which indicates if areas below cleaning threshold should be moved to similar areas according to
precedence in the transfer table given in |
m2.frac |
Numeric, area fraction threshold for method 2 cleaning, i.e. moving of small SLC areas to largest SLC in each SUBID without considering
similarity between classes. Either a single value or a vector of the same length as the number of SLC classes in |
m2.abs |
Numeric, see |
signif.digits |
Integer, number of significant digits to round cleaned SLCs to. See also |
verbose |
Logical, print some information during runtime. |
progbar |
Logical, display a progress bar while calculating SLC class fractions. Adds overhead to calculation time but useful when |
CleanSLCClasses performs a clean-up of small SLC fractions in an imported GeoData file. Small SLCs are eliminated either by moving their area to similar classes according to rules which are passed to the function in a text file (Method 1), or by simply moving their area to the largest SLC in the SUBID (Method 2). Moving rules for the first method can be based on either soil classes or land use classes, but these cannot be combined in one function call. Run the function twice to combine soil and land use based clean-up. Methods 1 and 2, however, can be combined in one function call, in which case the rule-based classification will be executed first. Clean-up precedence in method 1: if clean-ups based on area fractions and absolute areas are combined (m1.clean = rep(TRUE, 2)), then area fractions will be cleaned first. In order to reverse precedence, call CleanSLCClasses twice, with absolute area cleaning activated in the first call and area fraction cleaning in the second. In both methods, SLCs in each SUBID are cleaned iteratively in numerical order, starting with SLC_1. This implies a greater likelihood of eliminating SLCs with smaller indices.
Method 1
For method one, small SLC fractions are moved to either similar land use classes within the same soil class, or vice versa. Similarities are defined by the user in a tab-separated text file, which is read by CleanSLCClasses during runtime. Soil and land use classes correspond to the classes given in columns two and three in the GeoClass file. The file must have the following format:
class.1 | thres.frac.1 | thres.abs.1 | transfer.1 | ... | transfer.n |
class.2 | thres.frac.2 | thres.abs.2 | transfer.1 | ... | transfer.o |
... | ... | ... | ... | ... | ... |
class.m | thres.frac.m | thres.abs.m | transfer.1 | ... | transfer.p |
Column 1 contains the source land use or soil classes subjected to clean-up, columns 2 and 3 contain threshold values for area fractions and absolute areas. The remaining columns contain classes to which areas below threshold will be transferred, in order of precedence. Each class can have one or several transfer classes. CleanSLCClasses will derive SLC classes to clean from the given soil or land use class using the GeoClass table given in argument gcl. No header is allowed. At least one transfer class must exist, but classes can be omitted and will then be ignored by CleanSLCClasses. The order of transfer classes in the transfer file indicates transfer preference. CleanSLCClasses constructs a transfer list for each SLC class in the model set-up and per default uses the order to choose a preferred SLC to transfer to. However, if several SLCs exist for a given soil or land use class, one of them will be chosen without further sorting. If argument m1.precedence is set to FALSE for either area fractions or absolute areas, precedence will be ignored and the largest available area will be chosen to transfer small areas to. Area fraction thresholds are given as fractions of 1, absolute area thresholds as values in m^2. If an area below threshold is identified but there are no fitting SLCs available to transfer to, the area will remain unchanged.
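A minimal method-1 sketch under the file format described above: a two-row soil class transfer table is written to a temporary tab-separated file without header and then passed to m1.file. The soil class IDs (1 and 2) and the thresholds are made up for illustration and must match classes present in the actual GeoClass file.
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te2 <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
# Transfer table: source class, fraction threshold, absolute area threshold (m^2), transfer class
tt <- data.frame(class = c(1, 2),
                 thres.frac = c(0.01, 0.01),
                 thres.abs = c(1000, 1000),
                 transfer = c(2, 1))
tfile <- tempfile(fileext = ".txt")
write.table(tt, file = tfile, sep = "\t", row.names = FALSE, col.names = FALSE, quote = FALSE)
# Clean small soil-class SLCs according to the transfer table
te3 <- CleanSLCClasses(gd = te1, gcl = te2, m1.file = tfile, m1.class = "soil")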
Method 2
This method is more rigid than method one and can also be applied as a post-processor after clean-up using method 1, to force a removal of all SLCs below a given threshold from a GeoData file (method 1 cleaning can be very selective, depending on how many transfer classes are provided in the transfer table). Cleaning thresholds for method 2 area fractions and absolute areas are given in arguments m2.frac and m2.abs. SLC areas below the given thresholds will be moved to the largest SLC in the given SUBID without considering any similarity between classes.
CleanSLCClasses returns the GeoData data frame passed to the function in argument gd, with cleaned SLC class columns.
See also: RescaleSLCClasses for re-scaling of SLC area fraction sums.
# Import source data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te2 <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
# Clean-up using method 2, 0.5 % area fraction threshold and 100 m^2 absolute area threshold
te3 <- CleanSLCClasses(gd = te1, gcl = te2, m2.frac = 0.005, m2.abs = 100)
# Detailed comparison with function CompareFiles
te4 <- CompareFiles(te1, te3, type = "GeoData")
te4
Compare HYPE model files to identify any differences, typically used to check that no undesired changes were made when writing a new file.
CompareFiles( x, y, type = c("AquiferData", "BasinOutput", "BranchData", "CropData", "DamData", "ForcKey", "FloodData", "GeoClass", "GeoData", "Info", "LakeData", "MapOutput", "MgmtData", "Optpar", "Par", "PointSourceData", "Obs", "Simass", "Subass", "TimeOutput", "Xobs"), by = NULL, compare.order = TRUE, threshold = 1e-10, ... )
x |
Path to a HYPE model file to read, or an existing list/data frame object for a HYPE model file.
File contents are compared to those of |
y |
Path to a HYPE model file to read, or an existing list/data frame object for a HYPE model file.
File contents are compared to those of |
type |
Character string identifying the type of HYPE model file. Used to determine appropriate read function. One of
|
by |
Character vector, names of columns in |
compare.order |
Logical, whether or not the order of the rows should be compared. If |
threshold |
Numeric, threshold difference for comparison of numeric values. Set to 0 to only accept identical values. |
... |
Other arguments passed on to functions to read the files to compare (e.g. |
CompareFiles compares two HYPE model files and identifies any differences in values. The function reads two model files, compares the values in columns with corresponding names, and returns a data frame consisting of rows/columns with any differences. Values that are the same in both files are set to NA. If numeric values in two columns are not exactly the same, then the difference between the values is compared to threshold. If the difference is <= threshold, the values are considered equal and set to NA.
The function is primarily intended as a check to ensure that no unintended changes were made when writing
model files using the various HYPEtools write functions. However, it can also be used to e.g. compare files between different model versions.
Returns invisibly a data frame containing rows and columns in which differences exist between x and y. Values that are the same in both files are set to NA. If the returned data frame has 0 rows, then there were no differences between the files.
# Import demo model GeoData file, edit a SUBID
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te1$SUBID[1] <- 1
# Compare with original file
te2 <- CompareFiles(system.file("demo_model", "GeoData.txt", package = "HYPEtools"), te1, type = "GeoData")
te2
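A small sketch of the documented threshold argument: a tiny numeric perturbation below the threshold is reported as equal (set to NA), while threshold = 0 would flag it. The AREA column is used for illustration; te1 is the GeoData object from the example above.
te1b <- te1
te1b$AREA[1] <- te1b$AREA[1] + 1e-12
# Difference is below the threshold, so it is not reported as a difference
CompareFiles(te1, te1b, type = "GeoData", threshold = 1e-10)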
ConvertDischarge converts volumetric discharge to specific discharge (unit area discharge) and vice versa.
ConvertDischarge(q, area, from = "m3s", to = "mmd")
q |
An object of type |
area |
An object of type |
from |
Character string keyword, giving the current unit of
or a volumetric discharge, one of:
|
to |
Character string keyword, see |
ConvertDischarge is a simple conversion function, most likely to be used in combination with apply or related functions.
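A minimal sketch of such a combination with sapply, converting two made-up discharge series (in m3/s) to mm/day for a hypothetical upstream area; the unit keywords follow the examples below.
q_df <- data.frame(q1 = c(1.2, 1.5, 2.0), q2 = c(0.8, 0.9, 1.1))  # discharge in m3/s, dummy values
sapply(q_df, ConvertDischarge, area = 4e8, from = "m3s", to = "mmd")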
ConvertDischarge returns a numeric object of the same type as provided in argument q.
ConvertDischarge(6, 400000000)
ConvertDischarge(c(1.1, 1.2, 1.9, 2.8, 2, 1.5, 1.3, 1.2, 1.15, 1.1), from = "mmd", to = "ls", area = 1.2e6)
CreateOptpar creates a list representing a HYPE optpar.txt file from an imported par.txt file and a selection of parameters.
CreateOptpar( x, pars, tasks = data.frame(character(), character()), comment = "", fun.ival = NULL )
x |
a list with named vector elements, as an object returned from |
pars |
Character vector with HYPE parameter names to be included in optpar list. Parameters must
exist in |
tasks |
Data frame with two columns providing optimization tasks and settings (key-value pairs) as described in the optpar.txt online documentation. Defaults to an empty task section. |
comment |
Character string, comment (first row in optpar.txt file). |
fun.ival |
Either |
CreateOptpar makes it a bit more convenient to compose a HYPE optimization file. The function creates a template with all parameters to be included in an optimization run. Parameter boundaries for individual classes have to be adapted after creation of the template; the function takes the existing parameter value(s) in x as upper and lower boundaries. Parameter step width intervals (third parameter rows in optpar.txt files) are calculated with an internal function which per default returns the nearest single 1/1000th of the parameter value, with conditional replacement of '0' intervals:
function(x) {
  res <- 10^floor(log10(x/1000))
  ifelse(res == 0, .1, res)
}
Alternative functions can be passed to CreateOptpar using argument fun.ival. Such functions must have a single argument x, which represents the parameter value taken from argument x. The function is applied to all parameters in the resulting optpar list.
The function returns a list with elements as described in ReadOptpar.
See also: ReadOptpar, WriteOptpar, OptimisedClasses
# Import a HYPE parameter file
te1 <- ReadPar(filename = system.file("demo_model", "par.txt", package = "HYPEtools"))
# Create optimization parameters for a Monte Carlo run with 1000 iterations
te2 <- data.frame(key = c("task", "num_mc", "task"), value = c("MC", 1000, "WS"))
# Create an optpar file structure for HYPE recession coefficients
te3 <- CreateOptpar(x = te1, pars = c("rrcs1", "rrcs2"), tasks = te2)
te3
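A follow-up sketch passing a custom step-width function via fun.ival, as described in the Details above (te1 and te2 as created in the example); the fixed 1/100th rule is just an illustration:
# Use a custom interval function: 1/100th of each parameter value
te4 <- CreateOptpar(x = te1, pars = c("rrcs1", "rrcs2"), tasks = te2, fun.ival = function(x) x / 100)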
Pre-defined color ramp palettes which are used in other HYPEtools functions.
ColNitr(n) ColPhos(n) ColPrec(n) ColTemp(n) ColQ(n) ColDiffTemp(n) ColDiffGeneric(n) ColBlues(n) ColReds(n) ColGreens(n) ColYOB(n) ColPurples(n)
n |
Integer, number of colors to generate. |
These functions build on calls to colorRampPalette.
All functions return vectors of length n with interpolated RGB color values in hexadecimal notation (see rgb).
ColNitr(10)
ColGreens(6)
barplot(rep(1, 11), col = ColTemp(11))
Function to find direct upstream SUBIDs including flow fractions for MAINDOWN/BRANCHDOWN splits for a single sub-catchment or all sub-catchments in a GeoData-like data frame.
DirectUpstreamSubids(subid = NULL, gd, bd = NULL)
subid |
Integer, SUBID of a target sub-catchment (must exist in |
gd |
Data frame, typically an imported 'GeoData.txt' file. Mandatory argument. See 'Details'. |
bd |
Data frame, typically an imported 'BranchData.txt' file. Optional argument, defaults to an empty placeholder. See 'Details'. |
DirectUpstreamSubids identifies direct upstream SUBIDs for a user-provided target SUBID or for all SUBIDs given in a data frame gd, typically an imported GeoData file.
A sub-catchment in HYPE can have several upstream sub-catchments. If there is more than one upstream sub-catchment, the downstream sub-catchment is a confluence. HYPE stores these connections in the GeoData file, in downstream direction, given as downstream SUBID in column 'MAINDOWN'. Bifurcations, i.e. splits in downstream direction, can also be modeled in HYPE. These additional downstream connections are provided in the BranchData file, together with flow fractions to each downstream SUBID.
Formally, gd can be any data frame which contains columns 'SUBID' and 'MAINDOWN' (not case-sensitive), and bd any data frame which contains three columns 'BRANCHID', 'SOURCEID', and 'MAINPART', and optionally columns 'MAXQMAIN', 'MINQMAIN', 'MAXQBRANCH'. Typically, these are HYPE data files imported through ReadGeoData and ReadBranchData. See the HYPE documentation for further details on connections between SUBIDs in the model.
DirectUpstreamSubids always returns a list. If argument subid is non-NULL, a list with two elements is returned: subid contains an integer giving the target SUBID, and upstr.df contains a data frame with columns upstream (upstream SUBID), is.main (logical, TRUE if it is a MAINDOWN connection), fraction (fraction of flow going into the target SUBID), and llim and ulim giving upper and lower flow boundaries which optionally limit flow into the target SUBID.
If no specific SUBID was provided, DirectUpstreamSubids returns a list with upstream information for all SUBIDs in argument gd, each list element containing the list described above, i.e. with an integer element (SUBID) and a data frame element (upstream connections).
See also: AllUpstreamSubids, which returns all upstream SUBIDs, i.e. the full upstream network up to the headwaters, for a given SUBID.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
DirectUpstreamSubids(subid = 3594, gd = te)
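Calling the function without a target SUBID returns the list structure described above for every SUBID in gd; a short sketch with te as imported in the example:
# Direct upstream information for all SUBIDs in the GeoData table
te2 <- DirectUpstreamSubids(gd = te)
# Target SUBID and upstream connections of the first list element
te2[[1]]$subid
te2[[1]]$upstr.df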
distinctColorPalette generates an attractive palette of random colors.
distinctColorPalette(count = 1, seed = NULL, darken = 0)
count |
Integer, number of colors (>= 1). May be ineffective for count > 40. |
seed |
Integer, seed number to produce repeatable palettes. |
darken |
Numeric specifying the amount of darkening applied to the color palette. See colorspace::darken. Negative values will lighten the palette. |
Adapted from the randomcoloR package https://cran.r-project.org/package=randomcoloR.
distinctColorPalette returns a character vector of count optimally distinct colors in hexadecimal codes.
distinctColorPalette()
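A short sketch using the documented count, seed, and darken arguments to get a reproducible, slightly darkened palette:
# Eight reproducible, slightly darkened colors
distinctColorPalette(count = 8, seed = 42, darken = 0.1)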
EquallySpacedObs creates equally spaced time series with missing observations from a data frame with irregular observations.
EquallySpacedObs(x, sort.data = TRUE, timestep, ts.col = 1)
x |
A |
sort.data |
Logical, if |
timestep |
Character string keyword, giving the target time step length. Either |
ts.col |
Integer, column index of datetime column. |
EquallySpacedObs will preserve additional attributes present in x. If the datetime column is of class Date, problems with daylight saving time shifts may occur. To avoid problems, use class POSIXct and set the time zone to "UTC".
EquallySpacedObs returns a data frame.
te <- data.frame(date = as.POSIXct(c("2000-01-01", "2000-02-01"), tz = "gmt"), obs = c(1, 2))
EquallySpacedObs(x = te, timestep = "day")
This function calculates quantiles suitable for duration curves of environmental time series data.
ExtractFreq( data, probs = c(0, 1e-05, 1e-04, 0.001, seq(0.01, 0.99, by = 0.01), 0.999, 0.9999, 0.99999, 1) )
data |
either a numeric vector or an all-numeric dataframe ( |
probs |
numeric, vector of probabilities as in |
ExtractFreq is a convenience wrapper function; it uses quantile to calculate the quantiles of one or more time series with a density appropriate for duration curves. NAs are allowed in the input data. For the results to be meaningful, input should represent equally-spaced time series, e.g. HYPE basin output files.
ExtractFreq returns a data frame with probabilities in the first column and quantiles of data in the following columns. The number of observations per variable in data is given in an attribute n.obs (see attributes).
ExtractFreq(rnorm(1000))
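A further sketch with an all-numeric data frame and a coarser, user-defined probability vector; column names and distributions are made up:
te <- data.frame(a = rnorm(1000), b = rlnorm(1000))
ExtractFreq(data = te, probs = seq(0, 1, by = 0.1))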
Calculate aggregated statistics from long-term time series, typically imported HYPE time output files.
ExtractStats( x, start.mon = 1, aggperiod = c("year", "season1", "season2", "month"), timestep = attr(x, "timestep"), subid = attr(x, "subid"), FUN, ... )
x |
Data frame, with column-wise equally-spaced time series. Date-times in |
start.mon |
Integer between 1 and 12, starting month of the hydrological year. |
aggperiod |
Character string, timestep for aggregated results. One of |
timestep |
Character string, timestep of data in |
subid |
Integer, a vector of HYPE subbasin IDs for data in |
FUN |
A function to compute for each |
... |
Optional arguments to |
ExtractStats uses aggregate to calculate statistics for all data columns provided in x. Argument start.mon allows defining the start of the hydrological year. Hydrological seasons begin with winter (season1) or autumn (season2).
ExtractStats returns a data frame with starting dates for each aggregation period in the first column, and a descriptive aggregation period name in the second. Remaining columns contain aggregated results as ordered in x. Additional attributes are returned: subid with subbasin IDs, timestep with the time step of the source data, and period with a two-element POSIXct vector containing start and end dates of the source data.
If FUN returns several values per aggregation period, these are returned in nested columns in the resulting data frame. See the Value section of aggregate and the example code below.
# Import example data
te1 <- ReadTimeOutput(filename = system.file("demo_model", "results", "timeCOUT.txt", package = "HYPEtools"), dt.format = "%Y-%m")
# Extract maxima
ExtractStats(x = te1, start.mon = 1, FUN = max)
# Multiple result stats: extract min, mean, and max in one go
te2 <- ExtractStats(x = te1, start.mon = 1, FUN = function(x) {c(min(x), mean(x), max(x))})
# Extract mean from the resulting nested data frame
data.frame(te2[, 1:2], sapply(te2[, -c(1:2)], function(x) {x[, 2]}))
Numerical goodness-of-fit measures between sim and obs, with treatment of missing values.
gof(sim, obs, ...) ## Default S3 method: gof( sim, obs, na.rm = TRUE, do.spearman = FALSE, s = c(1, 1, 1), method = c("2009", "2012"), start.month = 1, digits = 2, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) valindex(sim, obs, ...) ## Default S3 method: valindex(sim, obs, ...) rPearson(sim, obs, ...) ## Default S3 method: rPearson( sim, obs, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) sKGE(sim, obs, ...) ## Default S3 method: sKGE( sim, obs, s = c(1, 1, 1), na.rm = TRUE, method = c("2009", "2012"), start.month = 1, out.PerYear = FALSE, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) KGE(sim, obs, ...) ## Default S3 method: KGE( sim, obs, s = c(1, 1, 1), na.rm = TRUE, method = c("2009", "2012", "2021"), out.type = c("single", "full"), fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) NSE(sim, obs, ...) ## Default S3 method: NSE( sim, obs, na.rm = TRUE, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) pbias(sim, obs, ...) ## Default S3 method: pbias( sim, obs, na.rm = TRUE, dec = 1, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) mae(sim, obs, ...) ## Default S3 method: mae( sim, obs, na.rm = TRUE, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA ) VE(sim, obs, ...) ## Default S3 method: VE( sim, obs, na.rm = TRUE, fun = NULL, ..., epsilon.type = c("none", "Pushpalatha2012", "otherFactor", "otherValue"), epsilon.value = NA )
sim |
numeric, vector of simulated values |
obs |
numeric, vector of observed values |
... |
further arguments passed to/from other methods. |
na.rm |
a logical value indicating whether 'NA' should be stripped before the computation proceeds. When an 'NA' value is found at the i-th position in obs OR sim, the i-th value of obs AND sim are removed before the computation. |
do.spearman |
logical, indicates if the Spearman correlation should be computed. The default is |
s |
argument passed to the |
method |
argument passed to the |
start.month |
argument passed to the |
digits |
integer, number of decimal places used for rounding the goodness-of-fit indexes. |
fun |
function to be applied to |
epsilon.type |
argument used to define a numeric value to be added to both
|
epsilon.value |
numeric, value to be added to both |
out.PerYear |
logical, argument passed to the |
out.type |
argument passed to the |
dec |
argument passed to the |
The gof, mae, pbias, NSE, rPearson, sKGE, and KGE functions are provided to calculate goodness-of-fit statistics. The functions were adapted from the hydroGOF package https://github.com/hzambran/hydroGOF.
gof returns a matrix of goodness-of-fit statistics. mae, pbias, NSE, rPearson, sKGE, and KGE return a numeric value of the respective goodness-of-fit statistic.
gof(sim = sample(1:100), obs = sample(1:100))
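A follow-up sketch computing selected individual metrics on the same kind of random sample data as used with gof above:
s <- sample(1:100)
o <- sample(1:100)
KGE(sim = s, obs = o)
NSE(sim = s, obs = o)
pbias(sim = s, obs = o)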
GroupSLCClasses calculates grouped sums for SLC classes (area fractions or absolute areas) based on land use, soil, or crop groups in a GeoClass table, or any other user-provided grouping index.
GroupSLCClasses( gd, gcl = NULL, type = c("landuse", "soil", "crop"), group = NULL, abs.area = FALSE, verbose = TRUE )
gd |
Data frame containing columns with SUBIDs, SLC fractions, and SUBID areas if |
gcl |
Data frame containing columns with SLCs and corresponding landuse and soil class IDs, typically a 'GeoClass.txt'
file imported with |
type |
Character string keyword for use with |
group |
Integer vector, of same length as number of SLC classes in |
abs.area |
Logical, if |
verbose |
Logical, if |
If absolute areas are calculated, area units will correspond to areas provided in gd.
GroupSLCClasses returns the data frame with SUBIDs, SUBID areas, and grouped SLC class columns.
# Import source data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te2 <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
# Calculate soil groups
GroupSLCClasses(gd = te1, gcl = te2, type = "s")
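A follow-up sketch with the documented abs.area argument, returning absolute group areas instead of fractions (the area unit follows gd); te1 and te2 as imported above:
# Absolute land use group areas instead of fractions
GroupSLCClasses(gd = te1, gcl = te2, type = "landuse", abs.area = TRUE)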
Function to calculate nutrient load retention fractions in groundwater parts of HYPE, i.e. after root zone retention. See Details for exact definition.
GwRetention(nfrz, nfs3, gts3, gd, par, unit.area = TRUE, nutrient = "tn")
nfrz |
Data frame with two-columns. Sub-basin IDs in first column, net loads from root zone in kg/year in second column. Typically an imported HYPE map output file, HYPE output variable SL06. See Details. |
nfs3 |
Data frame with two-columns. Sub-basin IDs in first column, net loads from soil layer 3 in kg/year in second column. Typically an imported HYPE map output file, HYPE output variable SL18. See Details. |
gts3 |
Data frame with two-columns. Sub-basin IDs in first column, gross loads to soil layer 3 in kg/year in second column. Typically an imported HYPE map output file, HYPE output variable SL17. See Details. |
gd |
Data frame, with columns containing sub-basin IDs and rural household emissions, e.g. an imported 'GeoData.txt' file. See details. |
par |
List, HYPE parameter list, typically an imported 'par.txt' file. Must contain parameter locsoil (not case-sensitive). |
unit.area |
Logical, set to |
nutrient |
Character keyword, one of the HYPE-modeled nutrient groups, for which to calculate groundwater retention. Not
case-sensitive. Currently, only |
GwRetention calculates groundwater nutrient retention as fractions of outgoing and incoming loads using HYPE soil load variables. Incoming loads include drainage into layer 3 from the root zone (defined as soil layers 1 and 2), rural load fractions into soil (dependent on parameter locsoil), tile drainage, surface flow, and flow from layers 1 and 2. Outgoing loads include runoff from all soil layers, tile drain, and surface flow.
The retention fraction R is calculated as R = 1 - lo / li, with incoming load li and outgoing load lo both given in kg/y (see also the variable description in the HYPE online documentation), where li is incoming load to groundwater (leaching rates), lr is rural load (total from GeoData converted to kg/yr; locsoil in the formula converts it to rural load into soil layer 3), and nfrz, gts3, nfs3 are soil loads as in the function arguments described above. See Examples for HYPE variable names for TN loads.
Columns SUBID, LOC_VOL, and LOC_TN must be present in gd; for a description of column contents see the GeoData file description in the HYPE online documentation. Column names are not case-sensitive.
GwRetention returns a three-column data frame, containing SUBIDs, retention in groundwater as a fraction of incoming loads (if multiplied by 100, it becomes a percentage), and incoming loads to groundwater.
# Create dummy data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te1$loc_tn <- runif(n = nrow(te1), min = 0, max = 100)
te1$loc_vol <- runif(n = nrow(te1), min = 0, max = 2)
te2 <- ReadPar(filename = system.file("demo_model", "par.txt", package = "HYPEtools"))
te2$locsoil <- .3
# HYPE soil load (sl) variables for TN, dummy loads
GwRetention(nfrz = data.frame(SUBID = te1$SUBID, SL06 = runif(n = nrow(te1), 10, 50)),
            gts3 = data.frame(SUBID = te1$SUBID, SL17 = runif(n = nrow(te1), 10, 50)),
            nfs3 = data.frame(SUBID = te1$SUBID, SL18 = runif(n = nrow(te1), 10, 50)),
            gd = te1, par = te2)
Function to find all headwater SUBIDs of a HYPE model domain.
HeadwaterSubids(gd)
gd |
A data frame, containing among others two columns |
HeadwaterSubids finds all headwater SUBIDs of a model domain as provided in a 'GeoData.txt' file, i.e. all subcatchments which do not have any upstream subcatchments.
HeadwaterSubids returns a vector of headwater SUBIDs.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
HeadwaterSubids(gd = te)
These are simple convenience wrapper functions to quickly query and assign values of attributes which are added to HYPE data on import.
datetime(x) datetime(x) <- value hypeunit(x) hypeunit(x) <- value obsid(x) obsid(x) <- value outregid(x) outregid(x) <- value subid(x) subid(x) <- value timestep(x) timestep(x) <- value variable(x) variable(x) <- value
x |
Object whose attribute is to be accessed |
value |
Value to be assigned |
These functions are just shortcuts for attr. The extractor functions return the value of the respective attribute, or NULL if no matching attribute is found.
te <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
hypeunit(te)
timestep(te)
subid(te)
These are simple convenience wrapper functions to export various HYPE data files from R.
WriteAquiferData(x, filename, verbose = TRUE) WriteOutregions(x, filename, verbose = TRUE) WriteBranchData(x, filename, verbose = TRUE) WriteCropData(x, filename, verbose = TRUE) WriteDamData(x, filename, verbose = TRUE) WriteFloodData(x, filename, verbose = TRUE) WriteLakeData(x, filename, verbose = TRUE) WriteMgmtData(x, filename, verbose = TRUE) WritePointSourceData(x, filename, verbose = TRUE) WriteForcKey(x, filename) WriteGlacierData(x, filename, verbose = TRUE)
x |
The object to be written, a dataframe as returned from the |
filename |
A character string naming a path and file name to write to. Windows users: Note that Paths are separated by '/', not '\'. |
verbose |
Logical, display informative warning messages if columns contain |
HYPE data file exports, simple fwrite wrappers with formatting options adjusted to match HYPE file specifications. In most files, HYPE requires NA-free input in required columns, but empty values are allowed in additional comment columns which are not read by HYPE. Informative warnings will be thrown if NAs are found during export. Character string lengths in comment columns of HYPE data files are restricted to 100 characters; the functions will return with a warning if longer strings were exported.
No return value, called for export to text files.
te <- ReadForcKey(filename = system.file("demo_model", "ForcKey.txt", package = "HYPEtools"))
WriteForcKey(x = te, filename = tempfile())
These are simple convenience wrapper functions to import various HYPE data files as data frame into R.
ReadAquiferData( filename = "AquiferData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadOutregions( filename = "Outregions.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadBranchData( filename = "BranchData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadCropData( filename = "CropData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadDamData( filename = "DamData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", quote = "", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadFloodData( filename = "FloodData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", quote = "", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadGlacierData( filename = "GlacierData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadLakeData( filename = "LakeData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", quote = "", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadMgmtData( filename = "MgmtData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), ... ) ReadPointSourceData( filename = "PointSourceData.txt", verbose = TRUE, header = TRUE, na.strings = "-9999", sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), data.table = FALSE, ... ) ReadAllsim(filename = "allsim.txt", na.strings = "-9999") ReadForcKey( filename = "ForcKey.txt", sep = "\t", encoding = c("unknown", "latin1", "UTF-8") ) ReadUpdate( filename = "update.txt", header = TRUE, sep = "\t", stringsAsFactors = FALSE, encoding = c("unknown", "latin1", "UTF-8"), data.table = FALSE, ... )
filename |
Path to and file name of the HYPE data file to import. Windows users: note that paths are separated by '/', not '\'. |
verbose |
Logical, display message if columns contain |
header |
|
na.strings |
See |
sep |
See |
stringsAsFactors |
See |
encoding |
|
... |
Other parameters passed to |
quote |
See |
data.table |
Logical, return data.table instead of data frame. |
HYPE data file imports: simple read.table or fread wrappers with formatting arguments preset to match HYPE file specifications. In most files, HYPE requires NA-free input in required columns, but empty values are allowed in additional comment columns. Informative warnings are thrown if NA values are found during import. Imported files are returned as data frames.
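As an illustrative sketch only (the file path and the chosen encoding are assumptions, not part of the package demo data), a LakeData file with non-ASCII characters in comment columns could be imported as follows:
# hypothetical file path; 'latin1' chosen purely for illustration
ld <- ReadLakeData(filename = "model/LakeData.txt", encoding = "latin1")
# an informative warning is printed during import if required columns contain NA values
summary(ld)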
te <- ReadForcKey(filename = system.file("demo_model", "ForcKey.txt", package = "HYPEtools"))
Constructor function for data frames which hold HYPE GeoData tables with information on sub-basins.
HypeGeoData(x)
x |
Data frame with at least five mandatory columns, see details. |
S3 constructor function for data frames which hold HYPE GeoData tables. These are normal data frames with at least five mandatory columns, all numeric: AREA, SUBID, MAINDOWN, RIVLEN, and SLC_n, where n denotes consecutive SLC class numbers (up to 999). See also the HYPE file description for GeoData.txt files for reference.
Usually, this class will be assigned to GeoData tables on import with ReadGeoData
. A summary
method exists for
HypeGeoData
data frames.
Returns a data frame with added class
attribute HypeGeoData
.
te <- data.table::fread(file = system.file("demo_model", "GeoData.txt", package = "HYPEtools"), data.table = FALSE) HypeGeoData(x = te) summary(te)
Constructor function for arrays which hold equidistant time series of multiple HYPE variables for a single sub-basin and multiple model runs, typically imported HYPE basin output results.
HypeMultiVar( x, datetime, hype.var, hype.unit, subid = NULL, outregid = NULL, hype.comment = "" )
x |
numeric |
datetime |
|
hype.var , hype.unit
|
Character vectors of keywords to specify HYPE variable IDs, corresponding to second dimension
(columns) in |
subid |
Integer, HYPE sub-basin ID. Either this or |
outregid |
Integer, HYPE output region ID, alternative to |
hype.comment |
Character, first-row optional comment string of basin output file. Empty string, if non-existing. |
S3 class constructor function for array objects which can hold (multiple) HYPE basin output results.
Returns a 3-dimensional array with [time, variable, iteration] dimensions and the following additional attributes:
datetime: A vector of date-times. Corresponds to the 1st array dimension.
variable: A character vector of HYPE output variable IDs.
unit: A character vector of HYPE output variable units.
subid: A single SUBID.
outregid: A single OUTREGID.
timestep: A character keyword for the time step.
comment: A comment string, currently used for class group outputs.
# import a basin output file te1 <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools")) # create a dummy array with two iterations from imported basin file te2 <- array(data = c(unlist(te1[, -1]), unlist(te1[, -1])), dim = c(nrow(te1), ncol(te1) - 1, 2), dimnames = list(rownames(te1), colnames(te1)[-1])) # Construct HypeMultiVar array HypeMultiVar(te2, datetime = te1$DATE, hype.var = variable(te1), hype.unit = hypeunit(te1), subid = 3587)
Constructor function for arrays which hold equidistant time series of a single HYPE variable for multiple sub-basins and multiple model runs, typically imported time and map output results.
HypeSingleVar(x, datetime, subid = NULL, outregid = NULL, hype.var)
x |
numeric |
datetime |
|
subid |
Integer vector with HYPE sub-basin IDs, of the same length as |
outregid |
Integer vector with HYPE output region IDs, alternative to |
hype.var |
Character string, keyword to specify HYPE variable ID, see list of HYPE variable. Not case-sensitive. |
S3 class constructor function for array objects which can hold (multiple) HYPE time or map output results.
Returns a 3-dimensional array with [time, subid, iteration] dimensions and the following additional attributes:
datetime: A vector of date-times. Corresponds to the 1st array dimension.
subid: A vector of SUBIDs. Corresponds to the 2nd array dimension (NA, if it does not apply to data contents).
outregid: A vector of OUTREGIDs. Corresponds to the 2nd array dimension (NA, if it does not apply to data contents).
variable: HYPE output variable ID.
timestep: A character keyword for the time step.
# Import a time output file te1 <- ReadTimeOutput(filename = system.file("demo_model", "results", "timeCOUT.txt", package = "HYPEtools"), dt.format = "%Y-%m") # Create a dummy array with two iterations from imported time file te2 <- array(data = c(unlist(te1[, -1]), unlist(te1[, -1])), dim = c(nrow(te1), ncol(te1) - 1, 2), dimnames = list(rownames(te1), colnames(te1)[-1])) # Construct HypeSingleVar array HypeSingleVar(x = te2, datetime = te1$DATE, subid = subid(te1), hype.var = variable(te1))
Quickly query vectors of HYPE sub-basin IDs (SUBID) for various properties.
IsHeadwater(subid, gd) IsOutlet(subid, gd) IsRegulated(subid, gd, dd = NULL, ld = NULL)
subid |
Numeric, vector of SUBIDs to be queried |
gd |
|
dd |
Data frame, typically an imported
DamData.txt file. Defaults
to |
ld |
Data frame, typically an imported
LakeData.txt file. Defaults
to |
These are convenience functions to query subbasin properties. Some of them can be inefficient if applied to many or all subbasins of a HYPE model setup; more efficient functions may exist in HYPEtools, see the links in the 'See also' section below or browse the package index.
The functions return a logical vector of the same length as subid
, with NA
values for all SUBIDs which do not exist
in gd
.
AllUpstreamSubids()
; AllDownstreamSubids()
; OutletSubids()
; OutletIds()
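A minimal sketch complementing the example below, querying outlet status for all sub-basins of the demo model (IsRegulated additionally requires a DamData or LakeData table passed via dd or ld):
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
# TRUE for sub-basins whose outflow leaves the model domain
IsOutlet(subid = te$SUBID, gd = te)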
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) IsHeadwater(subid = 40556, gd = te) IsHeadwater(subid = te$SUBID, gd = te)
Constructor function for data frames which hold HYPE Xobs.txt file contents, i.e. time series of multiple observation variables for multiple sub-basins and equidistant time steps, with POSIXct date-times in the first column.
HypeXobs(x, comment, variable, subid, verbose = TRUE)
x |
|
comment |
Character string, metadata or other information, first line of a HYPE Xobs.txt file. |
variable |
Character vector of four-letter keywords to specify HYPE variable IDs, corresponding to second to
last column in |
subid |
Integer vector with HYPE sub-basin IDs, corresponding to second to last column in |
verbose |
Logical, throw warning if attribute Not case-sensitive. |
S3 class constructor function for HypeXobs
data frame objects which hold HYPE Xobs.txt file contents. Xobs.txt
files contain three header rows, see the
Xobs.txt description in the HYPE documentation.
These headers are stored as additional attributes in HypeXobs objects.
Returns a data frame of class HypeXobs with the following additional attributes:
comment: A character vector.
variable: A character vector of HYPE variable IDs.
subid: A vector of SUBIDs.
timestep: Time step keyword, "day", or "n hour" (n = number of hours); NULL, if x contains just one row.
# Use the Xobs file import function instead of the class constructor for standard work flows te <- ReadXobs(file = system.file("demo_model", "Xobs.txt", package = "HYPEtools")) summary(te) # Class constructor HypeXobs(x = as.data.frame(te), comment = comment(te), variable = variable(te), subid = subid(te))
Add/Remove lines to HYPE info.txt files
AddInfoLine(info, name, value, after = NULL) RemoveInfoLine(info, name)
info |
Named list containing the info.txt file data, typically created using |
name |
Name of info.txt code to add/remove. |
value |
Value of the info.txt code to add/remove. |
after |
String vector containing the name(s) of info.txt codes that the new info.txt code should be inserted below.
If multiple values are specified and all codes are present in |
The AddInfoLine
and RemoveInfoLine
functions provide features to add lines to, or remove lines from, an imported info.txt file. Info.txt codes can be found on the HYPE Wiki.
AddInfoLine
and RemoveInfoLine
return a named list in the info.txt file structure.
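A short sketch of the after argument; the info.txt code 'bdate' is assumed to exist in the demo file and is used for illustration only:
info <- ReadInfo(filename = system.file("demo_model", "info.txt", package = "HYPEtools"))
# insert the new code directly below the (assumed present) 'bdate' line
info <- AddInfoLine(info, name = "testline", value = "testvalue", after = "bdate")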
info <- ReadInfo(filename = system.file("demo_model", "info.txt", package = "HYPEtools")) info <- AddInfoLine(info, name = "testline", value = "testvalue") info <- RemoveInfoLine(info, name = "testline")
By default, this function creates an sf
object which contains regional irrigation connections between
source and target HYPE sub-catchments. However, this function can also be used to create interactive Leaflet maps.
MapRegionalSources( data, map, map.subid.column = 1, group.column = NULL, group.colors = NULL, digits = 3, progbar = FALSE, map.type = "default", plot.scale = TRUE, plot.searchbar = FALSE, weight = 0.5, opacity = 1, fillColor = "#4d4d4d", fillOpacity = 0.25, line.weight = 5, line.opacity = 1, seed = NULL, darken = 0, font.size = 10, file = "", vwidth = 1424, vheight = 1000, html.name = "" )
data |
Dataframe, containing a column |
map |
A |
map.subid.column |
Integer, index of the column in |
group.column |
Integer, optional index of the column in |
group.colors |
Named list providing colors for connection groups in Leaflet maps. List names represent the names of the groups in the |
digits |
Integer, number of digits to which irrigation connection lengths are rounded. |
progbar |
Logical, display a progress bar while calculating. |
map.type |
Map type keyword string. Choose either |
plot.scale |
Logical, include a scale bar on Leaflet maps. |
plot.searchbar |
Logical, if |
weight |
Numeric, weight of subbasin boundary lines in Leaflet maps. Used if |
opacity |
Numeric, opacity of subbasin boundary lines in Leaflet maps. Used if |
fillColor |
String, color of subbasin polygons in Leaflet maps. Used if |
fillOpacity |
Numeric, opacity of subbasin polygons in Leaflet maps. Used if |
line.weight |
Numeric, weight of connection lines in Leaflet maps. See |
line.opacity |
Numeric, opacity of connection lines in Leaflet maps. See |
seed |
Integer, seed number used to produce a repeatable color palette. |
darken |
Numeric specifying the amount of darkening applied to the random color palette. Negative values will lighten the palette. See |
font.size |
Numeric, font size (px) for subbasin labels in Leaflet maps. |
file |
Save a Leaflet map to an image file by specifying the path to the desired output file using this argument. File extension must be specified.
See |
vwidth |
Numeric, width of the exported Leaflet map image in pixels. See |
vheight |
Numeric, height of the exported Leaflet map image in pixels. See |
html.name |
Save a Leaflet map to an interactive HTML file by specifying the path to the desired output file using this argument. File extension must be specified.
See |
MapRegionalSources
can return static plots or interactive Leaflet maps depending on the value provided for the argument
.
By default, MapRegionalSources
creates an sf
object from HYPE SUBID centerpoints using a table of SUBID pairs. Regional
irrigation sources in HYPE are transfers from outlet lakes or rivers in a source sub-catchment to the soil storage of irrigated SLC classes
(Soil, Land use, Crop) in a target sub-catchment. If map.type
is set to "leaflet", then MapRegionalSources
returns an object of class leaflet
.
For default static maps, MapRegionalSources
returns an sf
object containing columns SUBID
(irrigation target
sub-catchment), REGSRCID
(irrigation source sub-catchment), and Length_[unit]
(distance between sub-catchments) where
'unit' is the actual length unit of the distances. The projection of the returned object is always identical to the projection of
argument map
. For interactive Leaflet maps, MapRegionalSources
returns an object of class leaflet
. If map
contains
polygon data, then the interactive map will include the polygons as a background layer.
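A sketch of the interactive variant, reusing the data from the example below; rendering the Leaflet map assumes the suggested interactive-mapping packages (e.g. leaflet) are installed:
require(sf)
te1 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_centroids.gpkg", package = "HYPEtools"))
te3 <- data.frame(SUBID = c(3594, 63794), REGSRCID = c(40556, 3486))
# return an interactive Leaflet map instead of the default static sf result
MapRegionalSources(data = te3, map = te1, map.subid.column = 25, map.type = "leaflet")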
# Import subbasin centroids and subbasin polygons (to use as background) require(sf) te1 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_centroids.gpkg", package = "HYPEtools")) te2 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools")) # Create dummy MgmtData file with irrigation links te3 <- data.frame(SUBID = c(3594, 63794), REGSRCID = c(40556, 3486)) # Plot regional irrigation links between subbasins with subbasin outlines as background MapRegionalSources(data = te3, map = te1, map.subid.column = 25) plot(st_geometry(te2), add = TRUE, border = 2)
Merge an imported HYPE GeoData table of class HypeGeoData
with another data frame.
## S3 method for class 'HypeGeoData' merge(x, y, all.x = TRUE, sort = NA, ...)
x |
|
y |
Data frame, with mandatory |
all.x |
Logical, keep all rows from |
sort |
Logical, result sorting by |
... |
Arguments passed to S3 method for data frames, see |
merge.HypeGeoData
allows merging of new columns into an existing HYPE GeoData table, while preserving the HypeGeoData
class attribute. Duplicate columns are marked with a ".y"
-suffix for the merged y
data frame.
The following arguments of the default method are hard-coded:
- by, by.x, by.y, set to "SUBID"
- suffixes, set to c("", ".y")
The method warns if any of these arguments is supplied by the user. To override, use the GeoData table as argument y
or
call the data frame method explicitly (merge.data.frame()
).
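A brief sketch of the override described above; the column down_flag and the choice of merge key are hypothetical and only illustrate calling the data frame method directly:
te.gd <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
# hypothetical attribute table keyed by a non-SUBID column
te.y <- data.frame(MAINDOWN = unique(te.gd$MAINDOWN), down_flag = TRUE)
# bypass the hard-coded by = "SUBID" by calling the data frame method explicitly;
# note that the result is a plain data frame without the HypeGeoData class attribute
te.merged <- merge.data.frame(te.gd, te.y, by = "MAINDOWN", all.x = TRUE)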
A HypeGeoData
data frame.
merge
, the S3 generic function.
# import and create dummy data te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) te2 <- data.frame(SUBID = sample(x = te1$SUBID, size = 10), loc_vol = runif(n = 10, 10, 50)) merge(x = te1, y = te2)
Function to merge two HYPE observation data frames, with handling of overlapping time periods and time periods gaps as well as merging of common columns.
MergeObs(x, y)
x , y
|
Data frames containing observation timeseries data. Typically imported using |
MergeObs
handles time steps of different lengths (e.g. daily, hourly), but requires identical time
step lengths in both input data frames.
In case of common columns (identical date and SUBID combinations in x
and y
),
values from columns in x
will take precedence, and values from y
will only be added if
x
values are missing.
MergeObs
returns a data frame with merged Obs data.
# Import dummy data, add new observations to second Obs table, and merge te1 <- ReadObs(filename = system.file("demo_model", "Tobs.txt", package = "HYPEtools")) te2 <- ReadObs(filename = system.file("demo_model", "Tobs.txt", package = "HYPEtools")) te2$X0000[1:365] <- runif(n = 365, -20, 25) MergeObs(x = te1, y = te2)
Function to merge two Xobs data frames, with handling of overlapping time periods and time periods gaps as well as merging of common columns.
MergeXobs(x, y, comment = "")
x , y
|
Data frames of class |
comment |
Character string, will be added to the result as attribute |
MergeXobs
handles time steps of different lengths (e.g. daily, hourly), but requires identical time
step lengths in both input data frames. The function expects data frames of class HypeXobs
or data frames with comparable structure and will throw a warning if the class attribute is missing.
In case of common columns (identical observation variable and SUBID combinations in x
and y
),
values from columns in x
will take precedence, and values from y
will only be added if
x
values are missing.
MergeXobs
returns a data frame with attributes for Xobs data.
# Import dummy data, add new observations to second Xobs table te1 <- ReadXobs(filename = system.file("demo_model", "Xobs.txt", package = "HYPEtools")) te2 <- ReadXobs(filename = system.file("demo_model", "Xobs.txt", package = "HYPEtools")) te2$WSTR_40541[1:10] <- runif(n = 10, 50, 100) MergeXobs(x = te1, y = te2)
Nash-Sutcliffe Efficiency calculation for imported HYPE outputs with single variables for several catchments, i.e. time and map files, optionally multiple model run iterations combined.
## S3 method for class 'HypeSingleVar' NSE(sim, obs, na.rm = TRUE, progbar = TRUE, ...)
sim |
|
obs |
|
na.rm |
Logical. If |
progbar |
Logical, if |
... |
ignored |
NSE.HypeSingleVar
returns a 2-dimensional array of NSE performances for all SUBIDs and model iterations provided in
argument sim
, with values in the same order
as the second and third dimension in sim
, i.e. [subid, iteration]
.
# Create dummy data, discharge observations with added white noise as model simulations te1 <- ReadObs(filename = system.file("demo_model", "Qobs.txt", package = "HYPEtools")) te1 <- HypeSingleVar(x = array(data = unlist(te1[, -1]) + runif(n = nrow(te1), min = -.5, max = .5), dim = c(nrow(te1), ncol(te1) - 1, 1), dimnames = list(rownames(te1), colnames(te1)[-1])), datetime = te1$DATE, subid = obsid(te1), hype.var = "cout") te2 <- ReadObs(filename = system.file("demo_model", "Qobs.txt", package = "HYPEtools")) te2 <- HypeSingleVar(x = array(data = unlist(te2[, -1]), dim = c(nrow(te2), ncol(te2) - 1, 1), dimnames = list(rownames(te2), colnames(te2)[-1])), datetime = te2$DATE, subid = obsid(te2), hype.var = "rout") # Nash-Sutcliffe Efficiency NSE(sim = te1, obs = te2, progbar = FALSE)
OptimisedClasses
checks which classes (land use or soil) of parameters in an imported optpar list are actually
optimized, i.e. have a min/max range larger than zero.
OptimisedClasses(x)
x |
list with named elements, as an object returned from |
OptimisedClasses
allows a quick check of which classes of parameters in an optpar.txt file are actually optimized
during a HYPE optimization run. The function compares min and max values in the pars
element of an imported HYPE
optpar.txt file to identify those.
OptimisedClasses
returns a named list with one vector element for each parameter found in x
. List element
names are HYPE parameter names. Each vector contains the optimized class numbers for the respective parameter.
te <- ReadOptpar(filename = system.file("demo_model", "optpar.txt", package = "HYPEtools")) OptimisedClasses(te)
Function to find the identifier(s) used to signify model domain outlets, i.e. the "downstream" ID of outlet catchments, in a GeoData file.
This is typically just one number, often e.g. '0' or '-9999', but can be one or several IDs if the GeoData file originates from a HYPE sub-model
set-up, e.g. created with the 'SelectAro' program. Use OutletSubids
to find the actual SUBID values of the outlet catchments.
OutletIds(gd)
gd |
Data frame with two columns |
OutletIds
finds the unique outlet IDs of a GeoData file. The outlet ID of a typical model
is a single placeholder number, often e.g. '0' or '-9999', but there can be several outlet IDs, e.g. one or
several SUBIDs if the GeoData file originates from a HYPE sub-model set-up, created
with the 'SelectAro' tool.
OutletIds
returns a vector of outlet IDs.
AllDownstreamSubids
, OutletSubids
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) OutletIds(gd = te)
Find observation stations close to specified outlet subbasins of a HYPE model set-up. Proximity threshold as upstream area fraction of target outlet subbasin(s). Currently, only upstream observations are identified.
OutletNearObs( gd, file.qobs = NULL, file.xobs = NULL, variable = NULL, outlets = NULL, frac.drain = 0.8, nearest.only = TRUE, verbose = TRUE )
gd |
Data frame with two columns |
file.qobs , file.xobs
|
Character string, file location of HYPE observation data file. Only one of these needs to be
supplied, with |
variable |
Character string, HYPE variable to use. Needed only with argument |
outlets |
Integer vector, HYPE SUBIDs of subbasins to be considered outlets. If |
frac.drain |
Numeric, minimum fraction of drainage area at corresponding outlet to be covered by observation site. |
nearest.only |
Logical, if |
verbose |
Logical, print status messages and progress bars during runtime. |
OutletNearObs
finds observation sites for observation variables in
HYPE 'Qobs.txt' and
HYPE 'Xobs.txt' files
located upstream of an outlet sub-basin. For file.xobs
files, which can hold several observation variables, a single variable has
to be selected (the function conveniently prints available variables in file.xobs
, if no variable
is provided).
Any number of SUBIDs present in gd
can be defined as outlet subbasins with argument outlets
. The function handles nested
outlets, i.e. cases where user-provided subbasins in outlets
are upstream basins of one another. Outlet proximity is
defined by drainage area size compared to the respective outlet. The function returns either the nearest or all sites matching
or exceeding fraction frac.drain
, depending on argument nearest.only
.
OutletNearObs
returns a data frame with five columns, containing row-wise all observation sites which match the search criteria:
- SUBID of the outlet subbasin
- SUBID of the observation site
- Relative drainage area fraction of the observation site, compared to the corresponding outlet subbasin
- Drainage area of the outlet subbasin, in km^2
- Drainage area of the observation site, in km^2
If file.xobs
is provided without variable
, the function prints available HYPE observation variables in file.xobs
and silently
returns the same information as a character vector.
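A sketch of a wider search using the arguments described above; the frac.drain threshold of 0.5 is illustrative only:
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
# return all upstream discharge observation sites covering at least half of the
# outlet drainage area, not only the nearest one
OutletNearObs(file.qobs = system.file("demo_model", "Qobs.txt", package = "HYPEtools"), gd = te, frac.drain = 0.5, nearest.only = FALSE, verbose = FALSE)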
# Import source data te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) # Find observation near domain outlet OutletNearObs(file.qobs = system.file("demo_model", "Qobs.txt", package = "HYPEtools"), gd = te, verbose = FALSE) # get vector of variables in an Xobs file OutletNearObs(file.xobs = system.file("demo_model", "Xobs.txt", package = "HYPEtools"), gd = te, verbose = FALSE)
Function to find all outlet SUBIDs of a HYPE model domain.
OutletSubids(gd)
gd |
A data frame, with two columns |
OutletSubids
finds all outlet SUBIDs of a model domain as provided in a 'GeoData.txt' file, i.e. all SUBIDs from which
stream water leaves the model domain.
OutletSubids
returns a vector of outlet SUBIDs.
AllDownstreamSubids
, OutletIds
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) OutletSubids(gd = te)
Creates a Party Parrot.
PartyParrot(sound = 8)
sound |
Character string or number specifying which sound to play when showing the Party Parrot. See the |
PartyParrot
generates a Party Parrot. Uses for Party Parrots include, for example, celebrating the successful execution of a script.
Returns a Party Parrot to the Console.
PartyParrot()
Percent bias (PBIAS) calculation for imported HYPE outputs with single variables for several catchments, i.e. time and map files, optionally multiple model runs combined.
## S3 method for class 'HypeSingleVar' pbias(sim, obs, na.rm = TRUE, progbar = TRUE, ...)
sim |
|
obs |
|
na.rm |
Logical. If |
progbar |
Logical. If |
... |
ignored |
pbias.HypeSingleVar
returns a 2-dimensional array of PBIAS performances for all SUBIDs and model iterations provided in
argument sim
, with values in the same order
as the second and third dimension in sim
, i.e. [subid, iteration]
.
# Create dummy data, discharge observations with added white noise as model simulations te1 <- ReadObs(filename = system.file("demo_model", "Qobs.txt", package = "HYPEtools")) te1 <- HypeSingleVar(x = array(data = unlist(te1[, -1]) + runif(n = nrow(te1), min = -.5, max = .5), dim = c(nrow(te1), ncol(te1) - 1, 1), dimnames = list(rownames(te1), colnames(te1)[-1])), datetime = te1$DATE, subid = obsid(te1), hype.var = "cout") te2 <- ReadObs(filename = system.file("demo_model", "Qobs.txt", package = "HYPEtools")) te2 <- HypeSingleVar(x = array(data = unlist(te2[, -1]), dim = c(nrow(te2), ncol(te2) - 1, 1), dimnames = list(rownames(te2), colnames(te2)[-1])), datetime = te2$DATE, subid = obsid(te2), hype.var = "rout") # Percentage bias pbias(sim = te1, obs = te2, progbar = FALSE)
Convenience wrapper function for a combined line plot
with polygon
variation ranges.
PlotAnnualRegime( x, line = c("mean", "median"), band = c("none", "p05p95", "p25p75", "minmax"), add.legend = FALSE, l.legend = NULL, l.position = c("topright", "bottomright", "right", "topleft", "left", "bottomleft"), log = FALSE, ylim = NULL, ylab = expression(paste("Q (m"^3, " s"^{ -1 }, ")")), xlab = paste(format(attr(x, "period"), format = "%Y"), collapse = " to "), col = "blue", alpha = 30, lty = 1, lwd = 1, mar = c(3, 3, 1, 1) + 0.1, verbose = TRUE )
x |
List, typically a result from |
line |
Character string, keyword for type of average line to plot. Either |
band |
Character vector, keyword for variation bands. If |
add.legend |
Logical. If |
l.legend |
Character vector. If non-NULL, legend labels are read from here instead of from column names in |
l.position |
Legend position, keyword string. One of |
log |
Logical, if |
ylim |
Numeric vector of length two, giving y-axis limits. Defaults to min-max range of all plotted data. |
ylab |
Character or |
xlab |
Character string or |
col |
Line color specification, see |
alpha |
Numeric, alpha transparency value for variation bands. Value between |
lty |
Line type specification, see |
lwd |
Line width specification, see |
mar |
Numeric vector of length 4, margin specification as in |
verbose |
Logical, print warnings if |
PlotAnnualRegime
plots contents from lists as returned by AnnualRegime
(for format details, see there). If
NA
values are present in the plot data, the function will throw a warning if verbose = TRUE
and proceed with plotting
all available data.
Argument band
allows variation bands to be plotted in addition to average lines. These can be (combinations of) ranges
between minima and maxima, 5th and 95th percentiles, and 25th and 75th percentiles, i.e. all moments available in AnnualRegime
results.
Grid lines plotted in the background are mid-month lines.
PlotAnnualRegime
returns a plot to the currently active plot device.
AnnualRegime
, PlotSimObsRegime
## Not run: # Source data, HYPE basin output with a number of result variables te1 <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools")) # Daily discharge regime, computed and observed, # hydrological year from October, aggregated to weekly means te2 <- AnnualRegime(te1[, c("DATE", "COUT", "ROUT")], ts.in = "day", ts.out = "week", start.mon = 10) # Screen devices should not be used in examples PlotAnnualRegime(x = te2) PlotAnnualRegime(x = te2, line = "median", band = "p05p95", add.legend = TRUE, col = c("red", "blue")) ## End(Not run)
Plot a standard suite of time series plots from a basin output file, typically used for model performance inspection and/or during manual calibration.
PlotBasinOutput( x, filename, driver = c("default", "pdf", "png", "screen"), timestep = attr(x, "timestep"), hype.vars = "all", vol.err = TRUE, log.q = FALSE, start.mon = 1, from = 1, to = nrow(x), date.format = "", name = "", area = NULL, subid = attr(x, "subid"), gd = NULL, bd = NULL, ylab.t1 = "Conc." )
x |
Data frame, with column-wise equally-spaced time series of HYPE variables. Date-times in
|
filename |
String, file name for plotting to file device, see argument |
driver |
String, device driver name, one of |
timestep |
Character string, timestep of |
hype.vars |
Either a keyword string or a character vector of HYPE output variables. User-specified selection of HYPE variables
to plot. Default ( |
vol.err |
Logical, if |
log.q |
Logical, y-axis scaling for flow duration curve and discharge time series, set to |
start.mon |
Integer between 1 and 12, starting month of the hydrological year. For runoff regime plot, see also
|
from , to
|
Integer or date string of format \
interpreted as row indices of |
date.format |
String format for x-axis dates/times. See |
name |
Character string, name to be printed on the plot. |
area |
Numeric, upstream area of sub-basin in m^2. Required for calculation of accumulated volume error. Optional argument,
either this or arguments |
subid |
Integer, HYPE SUBID of a target sub-catchment (must exist in |
gd |
A data frame, containing 'SUBID' and 'MAINDOWN' columns, e.g. an imported 'GeoData.txt' file. Mandatory with argument
|
bd |
A data frame, containing 'BRANCHID' and 'SOURCEID' columns, e.g. an imported 'BranchData.txt' file. Optional with argument
|
ylab.t1 |
String or |
PlotBasinOutput
plots a suite of time series along with a flow duration curve, a flow regime plot, and a selection of
goodness-of-fit measures from an imported HYPE basin output file. The function selects from a range of "known" variables, and plots
those which are available in the user-supplied basin output. It is mostly meant as a support tool during calibration, manual or
automatic, providing a quick and comprehensive overview of model dynamics in a subbasin of interest.
HYPE outputs which are known to PlotBasinOutput
include:
precipitation
air temperature
discharge
lake water level
water temperature
evapotranspiration
snow water equivalent
sub-surface storage components
nitrogen concentrations
phosphorus concentrations
suspended sediment concentrations
total sediment concentrations
tracer concentration
Below a complete list of HYPE variables known to the function in HYPE info.txt format, ready to copy-paste into an info.txt file. For a detailed description of the variables, see the HYPE online documentation.
basinoutput variable upcprf upcpsf temp upepot upevap cout rout soim sm13 upsmfp snow upcprc cct2 ret2 ccin rein ccon reon cctn retn
ccsp resp ccpp repp cctp retp wcom wstr ccss ress ccts rets cct1 ret1
Device dimensions are hard-coded to a width of 15 inches and height depending on the number of plotted time series. When plotting
to a screen device, a maximum height of 10 inches is enforced in order to prevent automatic resizing with slow redrawing.
PlotBasinOutput
throws a warning if the plot height exceeds 10 inches, which can lead to overlapping plot elements. On screens with
less than 10 inches of screen height, redrawing is inhibited, which can lead to an empty plot. The recommended solution for both effects
is to plot to pdf or png file devices instead.
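Following that recommendation, a sketch of plotting to a pdf file device; the output file name is hypothetical (see argument filename for how the file name is interpreted):
te1 <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
te2 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
## Not run: PlotBasinOutput(x = te1, gd = te2, driver = "pdf", filename = "basin_0003587") ## End(Not run)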
Returns a multi-panel plot in a new graphics device.
PlotBasinSummary
, PlotAnnualRegime
, PlotDurationCurve
, ReadBasinOutput
# Source data, HYPE basin output with a number of result variables te1 <- ReadBasinOutput(filename = system.file("demo_model", "results","0003587.txt", package = "HYPEtools")) te2 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) ## Not run: # Plot selected water variables on screen device PlotBasinOutput(x = te1, gd = te2, driver = "screen",hype.vars = c("cout", "rout", "snow", "upcprf", "upcpsf")) ## End(Not run)
Plot a standard suite of plots summarizing properties of a sub-basin including upstream area and model performance for discharge and concentrations of nutrients, sediment, and tracers.
PlotBasinSummary( x, filename, driver = c("default", "pdf", "png", "screen"), panels = 1, gd = NULL, bd = NULL, gcl = NULL, psd = NULL, subid = NULL, desc = NULL, timestep = attr(x, "timestep"), hype.vars = "all", from = 1, to = nrow(x), log = FALSE, xscale = "gauss", start.mon = 10, name = "", ylab.t1 = "Conc." )
x |
Data frame, with column-wise daily time series of HYPE variables. Date-times in
|
filename |
String, file name for plotting to file device, see argument |
driver |
String, device driver name, one of |
panels |
Integer, either |
gd |
A data frame, containing 'SUBID', 'MAINDOWN', and 'AREA' columns, e.g. an imported 'GeoData.txt' file. Only needed with bar chart panels, see Details. |
bd |
A data frame, containing 'BRANCHID' and 'SOURCEID' columns, e.g. an imported 'BranchData.txt' file. Optional argument. Only needed with bar chart panels, see Details. |
gcl |
Data frame containing columns with SLCs and corresponding land use and soil class IDs, typically a 'GeoClass.txt'
file imported with |
psd |
A data frame with HYPE point source specifications, typically a 'PointSourceData.txt' file imported with
|
subid |
Integer, SUBID of sub-basin for which results are plotted. If |
desc |
List for use with |
timestep |
Character string, timestep of |
hype.vars |
Either a keyword string or a character vector of HYPE output variables. User-specified selection of HYPE variables
to plot. Default ( |
from , to
|
Integer or date string of format \
interpreted as row indices of |
log |
Logical, log scaling discharge and concentrations. |
xscale |
Character string, keyword for x-axis scaling. Either |
start.mon |
Integer between 1 and 12, starting month of the hydrological year. For regime plots, see also
|
name |
Character or expression string. Site name to plot besides bar chart panels. Only relevant with |
ylab.t1 |
String or |
PlotBasinSummary
plots a multi-panel plot with a number of plots to evaluate model properties and performances for a
chosen sub-basin. Performance plots include discharge, HYPE-modeled nutrient species for nitrogen (total, inorganic, organic)
and phosphorus (total, particulate, soluble), and HYPE modeled suspended and total sediment concentrations.
Plotted panels show:
Summarized catchment characteristics as bar charts: Upstream-averaged land use, soil, and crop group fractions; modeled nutrient loads in sub-basin outlet, and summed upstream gross loads from point sources and rural households (if necessary variables available, omitted otherwise).
Goodness-of-fit measures for discharge and concentrations: KGE (Kling-Gupta Efficiency), NSE (Nash-Sutcliffe Efficiency), PBIAS (Percentage Bias, aka relative error), MAE (Mean Absolute Error), r (Pearson product-moment correlation coefficient), VE (Volumetric Efficiency).
Simulation-observation relationships for discharge and concentrations: Simulated and observed concentration-discharge relationships, relationship between observed and simulated nutrient, sediment, and tracer concentrations.
Duration curves for flow and concentrations: Pairwise simulated and observed curves.
Annual regimes for flow and concentrations: Pairwise simulated and observed regime plots at monthly aggregation, with number of observations for concentration regimes.
Corresponding plots for IN/TN and SP/TP ratios.
Per default, the function plots from available model variables in an imported HYPE basin output file, and missing variables will be
automatically omitted. Variable selection can be additionally fine-tuned using argument hype.vars
.
Argument panels
allows choosing whether bar chart panels should be plotted. This can be time-consuming for sites with many upstream
sub-basins and might not be necessary, e.g. during calibration. If 1
(default), all panels are plotted. If set to 2
, bar
charts will be excluded. If 3
, only bar charts will be plotted. Arguments gd
, bd
, gcl
, psd
, subid
,
and desc
are only needed for bar chart plotting.
Below a complete list of HYPE variables known to the function in HYPE info.txt format, ready to copy-paste into an info.txt file. For a detailed description of the variables, see the HYPE online documentation.
basinoutput variable cout rout ccin rein ccon reon cctn retn ccsp resp ccpp repp cctp retp ctnl ctpl ccss ress ccts rets cct1 ret1
Device dimensions are hard-coded to a width of 13 inches and height depending on the number of plotted time series. When plotting
to a screen device, a maximum height of 10 inches is enforced in order to prevent automatic resizing with slow redrawing.
PlotBasinSummary throws a warning if the plot height exceeds 10 inches, which can lead to overlapping plot elements. On screens with less than 10 inches of screen height, redrawing is inhibited, which can lead to an empty plot. The recommended solution for both effects
is to plot to pdf or png file devices instead.
Returns a multi-panel plot in a new graphics device.
PlotBasinOutput
, BarplotUpstreamClasses
, PlotSimObsRegime
, PlotAnnualRegime
,
PlotDurationCurve
, ReadBasinOutput
# Source data, HYPE basin output with a number of result variables te1 <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools")) te2 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools")) ## Not run: # Plot basin summary for discharge on screen device PlotBasinSummary(x = te1, gd = te2, driver = "screen", panels = 2) ## End(Not run)
Convenience wrapper function for a (multiple) line plot
, with pretty defaults for axis annotation and a Gaussian scaling option for the x-axis.
PlotDurationCurve( freq, xscale = "lin", yscale = "log", add.legend = FALSE, l.legend = NULL, ylim = NULL, xlab = "Flow exceedance percentile", ylab = "m3s", col = "blue", lty = 1, lwd = 1, mar = c(3, 3, 1, 1) + 0.1 )
freq |
Data frame with at least two columns, containing probabilities in the first and series of data quantiles in the remaining columns. Typically
an object as returned by |
xscale |
Character string, keyword for x-axis scaling. Either |
yscale |
Character string, keyword for y-axis scaling. Either |
add.legend |
Logical. If |
l.legend |
Character vector. If non-NULL, legend labels are read from here instead of from column names in |
ylim |
Numeric vector of length two, giving y-axis limits. |
xlab |
Character string, x-axis label. |
ylab |
Character or |
col |
Line color specification, see |
lty |
Line type specification, see |
lwd |
Line width specification, see |
mar |
Numeric vector of length 4, margin specification as in |
PlotDurationCurve
plots a duration curve with pretty formatting defaults. The function sets par
parameters tcl
and mgp
internally and will override previously set values for the returned plot. It typically uses results from ExtractFreq
as input data and via that
function it can be used to visualize and compare time series properties.
PlotDurationCurve
returns a plot to the currently active plot device.
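In addition to the example below, the Gaussian x-axis option mentioned above can emphasize behaviour at the distribution tails; a sketch, assuming the keyword "gauss" for xscale as used elsewhere in the package:
te1 <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
te2 <- ExtractFreq(te1[, c("COUT", "ROUT")])
# duration curves with a Gaussian-scaled exceedance axis
PlotDurationCurve(freq = te2, xscale = "gauss", add.legend = TRUE, col = c("red", "blue"))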
# Import source data te1 <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools")) te2 <- ExtractFreq(te1[, c("COUT", "ROUT")]) # Plot flow duration curves for simulated and observed discharge PlotDurationCurve(freq = te2, add.legend = TRUE, col = c("red", "blue"))
Draw HYPE map results, with pretty scale discretizations and color ramp defaults for select HYPE variables.
PlotMapOutput( x, map = NULL, map.subid.column = 1, var.name = "", map.type = "default", shiny.data = FALSE, plot.legend = TRUE, legend.pos = "right", legend.title = NULL, legend.signif = 2, col = "auto", col.ramp.fun, col.breaks = NULL, col.labels = NULL, col.rev = FALSE, plot.scale = TRUE, scale.pos = "br", plot.arrow = TRUE, arrow.pos = "tr", weight = 0.15, opacity = 0.75, fillOpacity = 0.5, outline.color = "black", na.color = "#808080", plot.searchbar = FALSE, plot.label = FALSE, plot.label.size = 2.5, plot.label.geometry = c("centroid", "surface"), file = "", width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 300, vwidth = 1424, vheight = 1000, html.name = "", map.adj = 0, legend.outer = FALSE, legend.inset = c(0, 0), par.cex = 1, par.mar = rep(0, 4) + 0.1, add = FALSE, sites = NULL, sites.subid.column = NULL )
x |
HYPE model results, typically 'map output' results. Data frame object with two columns, first column containing SUBIDs and second column containing model results to plot. See details. |
map , sites
|
A |
map.subid.column , sites.subid.column
|
Integer, column index in the |
var.name |
Character string. HYPE variable name to be plotted. Mandatory for automatic color ramp selection of pre-defined
HYPE variables ( |
map.type |
Map type keyword string. Choose either |
shiny.data |
Logical, if |
plot.legend |
Logical, plot a legend along with the map. |
legend.pos |
Keyword string for legend position. For static plots, one of: |
legend.title |
Character string or mathematical expression. An optional title for the legend. If none is provided here, |
legend.signif |
Integer, number of significant digits to display in legend labels. |
col |
Colors to use on the map. One of the following:
|
col.ramp.fun |
DEPRECATED, for backwards compatibility only. |
col.breaks |
A numeric vector, specifying break points for discretization of model result values into classes. Used if a color palette is specified with |
col.labels |
A character vector, specifying custom labels to be used for each legend item. Works with |
col.rev |
Logical, If |
plot.scale |
Logical, plot a scale bar on map. NOTE: Scale bar may be inaccurate for geographic coordinate systems (Consider switching to projected coordinate system). |
scale.pos |
Keyword string for scalebar position for static maps. One of |
plot.arrow |
Logical, plot a North arrow in static maps. |
arrow.pos |
Keyword string for north arrow position for static maps. One of |
weight |
Numeric, weight of subbasin boundary lines. See ggplot2::geom_sf for static maps and leaflet::addPolygons for Leaflet maps. |
opacity |
Numeric, opacity of subbasin boundary lines in Leaflet maps. See leaflet::addPolygons. |
fillOpacity |
Numeric, opacity of subbasin polygons in Leaflet maps. See leaflet::addPolygons. |
outline.color |
Character string of color to use for subbasin polygon outlines. Use |
na.color |
Character string of color to use to symbolize subbasin polygons in maps which correspond to |
plot.searchbar |
Logical, if |
plot.label |
Logical, if |
plot.label.size |
Numeric, size of text for labels on default static plots. See ggplot2::geom_sf_text. |
plot.label.geometry |
Keyword string to select where plot labels should be displayed on the default static plots. Either |
file |
Save map to an image file by specifying the path to the desired output file using this argument. File extension must be specified. See ggplot2::ggsave for static maps and
mapview::mapshot for Leaflet maps. You may need to run |
width |
Numeric, width of output plot for static maps in units of |
height |
Numeric, height of output plot for static maps in units of |
units |
Keyword string for units to save static map. One of |
dpi |
Integer, resolution to save static map. See ggplot2::ggsave. |
vwidth |
Numeric, width of the exported Leaflet map image in pixels. See mapview::mapshot. |
vheight |
Numeric, height of the exported Leaflet map image in pixels. See mapview::mapshot. |
html.name |
Save Leaflet map to an interactive HTML file by specifying the path to the desired output file using this argument. File extension must be specified. See htmlwidgets::saveWidget. |
map.adj |
Numeric, map adjustment in direction where it is smaller than the plot window. A value of |
legend.outer |
Logical. If |
legend.inset |
Numeric, inset distance(s) from the margins as a fraction of the plot region for legend, scale and north arrow.
See |
par.cex |
Numeric, character expansion factor. See description of |
par.mar |
Plot margins as in |
add |
Logical, default |
PlotMapOutput
plots HYPE results from 'map[variable name].txt' files, typically imported using ReadMapOutput
.
Argument x must contain the variable of interest in the second column. For map results with multiple columns, i.e.
several time periods, pass index selections to x
, e.g. mymapresult[, c(1, 3)]
.
PlotMapOutput
can return static plots or interactive Leaflet maps depending on the value provided for the argument
.
For backwards compatibility, legacy static plots can still be generated by setting map.type
to legacy
. For legacy plots, legend.pos
and
map.adj
should be chosen so that legend and map do not overlap, and the legend position can be fine-tuned using
argument legend.inset
. This is particularly useful for legend titles with more than one line. In order to move map and legend closer to each other, change the plot device width.
For details on inset specification for the default maps, see inset
in legend
.
Mapped variables are visualized using color-coded data intervals. HYPEtools
provides a number of color ramp functions for HYPE variables,
see CustomColors
. These are either single-color ramps with less saturated colors for smaller values
and more saturated colors for higher values, suitable for e.g. concentration or volume ranges, or multi-color ramps suitable for calculated
differences, e.g. between two model runs.
Break points between color classes of in-built or user-provided color ramp palettes can optionally be provided in argument
col.breaks
. This is particularly useful when specific pretty class boundaries are needed, e.g. for publication figures. Per default,
break points for internal single color ramps and user-provided ramps are calculated based on 10 percent quantiles computed from the values in x
. Default break points for internal color ramp ColDiffGeneric
are based on an equal distance classification of log-scaled
x
ranges, centered around zero. For internal color ramp ColDiffTemp
, they are breaks in an interval from -7.5 to 7.5 K.
For select common HYPE variables, given in argument var.name
, an automatic color ramp selection including pretty breaks and legend titles
is built into PlotMapOutput
. These are 'CCTN', 'CCTP', 'COUT', and 'TEMP'. Automatic selection is activated by choosing keyword
"auto"
in col
. All other HYPE variables will be plotted using a generic color ramp palette and generic break points with
"auto"
color selection.
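A sketch of user-defined class boundaries combined with an in-built color ramp, based on the example below; the break values are purely illustrative and not tuned to the demo data:
require(sf)
te1 <- ReadMapOutput(filename = system.file("demo_model", "results", "mapCRUN.txt", package = "HYPEtools"), dt.format = NULL)
te2 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools"))
# custom break points for the runoff color ramp (break values illustrative only)
PlotMapOutput(x = te1, map = te2, map.subid.column = 25, var.name = "CRUN", col = ColQ, col.breaks = c(0, 0.5, 1, 2, 5, 10))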
For default static maps, PlotMapOutput
returns an object of class ggplot
. This plot can also be assigned to a variable in the environment.
For interactive Leaflet maps, PlotMapOutput
returns an object of class leaflet
. For legacy static plots, PlotMapOutput
returns a plot to the
currently active plot device, and invisibly an object of class SpatialPolygonsDataFrame
as provided in argument map
, with plotted values and color codes added as columns
in the data slot.
ReadMapOutput
for HYPE result import; PlotMapPoints
for plotting HYPE results at points, e.g. sub-basin outlets.
# Import plot data and subbasin polygons require(sf) te1 <- ReadMapOutput(filename = system.file("demo_model", "results", "mapCRUN.txt", package = "HYPEtools"), dt.format = NULL) te2 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools")) # plot runoff map PlotMapOutput(x = te1, map = te2, map.subid.column = 25, var.name = "CRUN", col = ColQ)
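As an illustration of user-defined class boundaries, the runoff map above could be drawn with explicit break points supplied through col.breaks; the break values below are arbitrary examples and should be adapted so that they bracket the plotted value range:
# same map as above, but with custom break points (values are illustrative only)
PlotMapOutput(x = te1, map = te2, map.subid.column = 25, var.name = "CRUN",
  col = ColQ, col.breaks = c(0, 0.5, 1, 2, 5, 10))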
Plot mapped point information, e.g. model performances at observation sites.
PlotMapPoints( x, sites = NULL, sites.subid.column = 1, sites.groups = NULL, bg = NULL, bg.label.column = 1, var.name = "", map.type = "default", shiny.data = FALSE, plot.legend = TRUE, legend.pos = "right", legend.title = NULL, legend.signif = 2, col = NULL, col.breaks = NULL, col.labels = NULL, col.rev = FALSE, plot.scale = TRUE, scale.pos = "br", plot.arrow = TRUE, arrow.pos = "tr", radius = 5, weight = 0.15, opacity = 0.75, fillOpacity = 0.5, na.color = "#808080", jitter = 0.01, bg.weight = 0.15, bg.opacity = 0.75, bg.fillColor = "#e5e5e5", bg.fillOpacity = 0.75, plot.label = FALSE, plot.label.size = 2.5, plot.label.geometry = c("centroid", "surface"), noHide = FALSE, textOnly = FALSE, font.size = 10, plot.bg.label = NULL, file = "", width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 300, vwidth = 1424, vheight = 1000, html.name = "", map.adj = 0, legend.outer = FALSE, legend.inset = c(0, 0), pt.cex = 1, par.cex = 1, par.mar = rep(0, 4) + 0.1, pch = 21, lwd = 0.8, add = FALSE, map = NULL, map.subid.column = NULL )
x |
Information to plot, typically model performances from imported HYPE 'subassX.txt' files. Data frame object with two columns, first column containing SUBIDs and second column containing model results to plot. See details. |
sites , map
|
A |
sites.subid.column , map.subid.column
|
Integer, column index in the |
sites.groups |
Named list providing groups of SUBIDs to allow toggling of point groups in Leaflet maps. Default |
bg |
A |
bg.label.column |
Integer, column index in the |
var.name |
Character string. HYPE variable name to be plotted. Mandatory for automatic color ramp selection of pre-defined
HYPE variables ( |
map.type |
Map type keyword string. Choose either |
shiny.data |
Logical, if |
plot.legend |
Logical, plot a legend along with the map. |
legend.pos |
Keyword string for legend position. For static plots, one of: |
legend.title |
Character string or mathematical expression. An optional title for the legend. If none is provided here, the name of the second column in |
legend.signif |
Integer, number of significant digits to display in legend labels. |
col |
Colors to use on the map. One of the following:
|
col.breaks |
A numeric vector, specifying break points for discretization of model result values into classes. Class boundaries will be
interpreted as right-closed, i.e. upper boundaries included in class. Lowest class boundary included in lowest class as well.
Meaningful results require the lowest and uppermost breaks to bracket all model result values, otherwise there will be
unclassified white spots on the map plot. If |
col.labels |
A character vector, specifying custom labels to be used for each legend item. Works with |
col.rev |
Logical, If |
plot.scale |
Logical, plot a scale bar on map. NOTE: Scale bar may be inaccurate for geographic coordinate systems (Consider switching to projected coordinate system). |
scale.pos |
Keyword string for scalebar position for static maps. One of |
plot.arrow |
Logical, plot a North arrow in static maps. |
arrow.pos |
Keyword string for north arrow position for static maps. One of |
radius |
Numeric, radius of markers in maps. See ggplot2::geom_sf for static maps and leaflet::addCircleMarkers for Leaflet maps. |
weight |
Numeric, weight of marker outlines in Leaflet maps. See leaflet::addCircleMarkers. |
opacity |
Numeric, opacity of marker outlines in Leaflet maps. See leaflet::addCircleMarkers. |
fillOpacity |
Numeric, opacity of markers in Leaflet maps. See leaflet::addCircleMarkers. |
na.color |
Character string of color to use to symbolize markers in maps which correspond to |
jitter |
Numeric, amount to jitter points with duplicate geometries. See sf::st_jitter. |
bg.weight |
Numeric, weight of |
bg.opacity |
Numeric, opacity of |
bg.fillColor |
Character string of color to use to symbolize |
bg.fillOpacity |
Numeric in range 0-1, opacity of |
plot.label |
Logical, if |
plot.label.size |
Numeric, size of text for labels on default static plots. See ggplot2::geom_sf_text. |
plot.label.geometry |
Keyword string to select where plot labels should be displayed on the default static plots. Either |
noHide |
Logical, set to |
textOnly |
Logical, set to |
font.size |
Numeric, font size (px) for marker labels in Leaflet maps. |
plot.bg.label |
String, if |
file |
Save map to an image file by specifying the path to the desired output file using this argument. File extension must be specified. See ggplot2::ggsave for static maps and
mapview::mapshot for Leaflet maps. You may need to run |
width |
Numeric, width of output plot for static maps in units of |
height |
Numeric, height of output plot for static maps in units of |
units |
Keyword string for units to save static map. One of |
dpi |
Integer, resolution to save static map. See ggplot2::ggsave. |
vwidth |
Numeric, width of the exported Leaflet map image in pixels. See webshot::webshot. |
vheight |
Numeric, height of the exported Leaflet map image in pixels. See webshot::webshot. |
html.name |
Save Leaflet map to an interactive HTML file by specifying the path to the desired output file using this argument. File extension must be specified. See htmlwidgets::saveWidget. |
map.adj |
Numeric, map adjustment in direction where it is smaller than the plot window. A value of |
legend.outer |
Logical. If |
legend.inset |
Numeric, inset distance(s) from the margins as a fraction of the plot region for legend, scale and north arrow.
See |
pt.cex |
Numeric, plot point size expansion factor, works on top of |
par.cex |
Numeric, character expansion factor. See description of |
par.mar |
Plot margins as in |
pch , lwd
|
Integer, plotting symbol and line width. See |
add |
Logical, default |
PlotMapPoints
can be used to print point information on a mapped surface. The primary targets are model performance
measures as written to
HYPE 'subassX.txt' files, but
color scale and break point arguments are flexible enough to also be used with e.g. HYPE output variables or other data.
PlotMapPoints
can return static plots or interactive Leaflet maps depending on value provided for the argument map.type
.
For backwards compatibility, legacy static plots can still be generated by setting map.type
to legacy
. For legacy plots, legend.pos
and
map.adj
should be chosen so that legend and map do not overlap, and the legend position can be fine-tuned using
argument legend.inset
. This is particularly useful for legend titles with more than one line. For details on inset
specification for the default maps, see inset
in legend
.
For default static maps, PlotMapPoints
returns an object of class ggplot
. This plot can also be assigned to a variable in the environment.
For interactive Leaflet maps, PlotMapPoints
returns an object of class leaflet
. For legacy static plots, PlotMapPoints
returns a plot to the
currently active plot device and invisibly an object of class SpatialPointsDataFrame
as provided in argument sites
, with plotted values and color codes added as columns
in the data slot.
ReadSubass
for HYPE result import; PlotMapOutput
for a similar plot function
# Import plot data and subbasin points
require(sf)
te1 <- ReadSubass(filename = system.file("demo_model", "results", "subass1.txt", package = "HYPEtools"))
te2 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_station.gpkg", package = "HYPEtools"))
te2$SUBID <- 3587 # add station SUBID to point
te3 <- st_read(dsn = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools"))
# plot NSE performance for discharge
PlotMapPoints(x = te1[, 1:2], sites = te2, sites.subid.column = 4, bg = te3)
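An interactive version of the same map can be sketched by switching the map type; this assumes that the keyword "leaflet" selects the Leaflet map type and that the leaflet package is installed:
# interactive Leaflet map of the same performance values (map.type keyword assumed)
PlotMapPoints(x = te1[, 1:2], sites = te2, sites.subid.column = 4, bg = te3,
  map.type = "leaflet")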
Create scatterplots of model performance by SUBID attributes.
PlotPerformanceByAttribute( subass, subass.column = 2, groups = NULL, attributes, join.type = c("join", "cbind"), group.join.type = c("join", "cbind"), groups.color.pal = NULL, drop = TRUE, alpha = 0.4, trendline = TRUE, trendline.method = "lm", trendline.formula = NULL, trendline.alpha = 0.5, trendline.darken = 15, density.plot = FALSE, density.plot.type = c("density", "boxplot"), scale.x.log = FALSE, scale.y.log = FALSE, xsigma = 1, ysigma = 1, xlimits = c(NA, NA), ylimits = c(NA, NA), xbreaks = waiver(), ybreaks = waiver(), xlabels = waiver(), ylabels = waiver(), xlab = NULL, ylab = NULL, ncol = NULL, nrow = NULL, align = "hv", common.legend = TRUE, legend.position = "bottom", group.legend.title = "Group", common.y.axis = FALSE, summary.table = FALSE, table.margin = 0.4, filename = NULL, width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 300 ) PlotJohan( subass, subass.column = 2, groups = NULL, attributes, join.type = c("join", "cbind"), group.join.type = c("join", "cbind"), groups.color.pal = NULL, drop = TRUE, alpha = 0.4, trendline = TRUE, trendline.method = "lm", trendline.formula = NULL, trendline.alpha = 0.5, trendline.darken = 15, density.plot = FALSE, density.plot.type = c("density", "boxplot"), scale.x.log = FALSE, scale.y.log = FALSE, xsigma = 1, ysigma = 1, xlimits = c(NA, NA), ylimits = c(NA, NA), xbreaks = waiver(), ybreaks = waiver(), xlabels = waiver(), ylabels = waiver(), xlab = NULL, ylab = NULL, ncol = NULL, nrow = NULL, align = "hv", common.legend = TRUE, legend.position = "bottom", group.legend.title = "Group", common.y.axis = FALSE, summary.table = FALSE, table.margin = 0.4, filename = NULL, width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 300 )
subass |
Information to plot, typically model performances from imported HYPE 'subassX.txt' files. Data frame object with first column containing SUBIDs and additional columns containing model results to plot. See details. |
subass.column |
Column index of information in |
groups |
Optional data frame object to specify groups of SUBIDs to plot separately. First column should contain SUBIDs and second column should contain group IDs. |
attributes |
Data frame object containing the subbasin attribute information to plot on the x-axis of the output plots. Typically a data frame created by |
join.type |
Specify how to join |
group.join.type |
Specify how to join |
groups.color.pal |
Vector containing colors to use when plotting groups. Only used if groups is not |
drop |
Logical, should unused factor levels be omitted from the legend. See ggplot2::scale_color_manual and ggplot2::scale_fill_manual. |
alpha |
Numeric value to set transparency of dots in output plots. Should be in the range 0-1. |
trendline |
Logical, if |
trendline.method |
Specify method used to create trendlines. See ggplot2::geom_smooth. |
trendline.formula |
Specify formula used to create trendlines. See ggplot2::geom_smooth. |
trendline.alpha |
Numeric value to set transparency of trendlines in output plots. Should be in the range 0-1. |
trendline.darken |
Numeric value to make the trendlines darker color shades of their corresponding scatterplot points. Should be in the range 1-100. |
density.plot |
Logical, if |
density.plot.type |
String, type of plot geometry to use for density plots: |
scale.x.log |
Vector describing if output plots should use a log scale on the x-axis. A pseudo-log scale will be used if any zero or negative values are present. If length of vector == 1, then the value will be used for all output plots. Vector values should be either |
scale.y.log |
Vector describing if output plots should use a log scale on the y-axis. A pseudo-log scale will be used if any zero or negative values are present. If length of vector == 1, then the value will be used for all output plots. Vector values should be either |
xsigma |
Numeric, scaling factor for the linear part of the pseudo-log transformation of the x-axis. Used if |
ysigma |
Numeric, scaling factor for the linear part of the pseudo-log transformation of the y-axis. Used if |
xlimits |
Vector containing minimum and maximum values for the x-axis of the output plots. See ggplot2::scale_x_continuous. |
ylimits |
Vector containing minimum and maximum values for the y-axis of the output plots. See ggplot2::scale_y_continuous. |
xbreaks |
Vector containing the break values used for the x-axis of the output plots. See ggplot2::scale_x_continuous. |
ybreaks |
Vector containing the break values used for the y-axis of the output plots. See ggplot2::scale_y_continuous. |
xlabels |
Vector containing the labels for each break value used for the x-axis of the output plots. See ggplot2::scale_x_continuous. |
ylabels |
Vector containing the labels for each break value used for the y-axis of the output plots. See ggplot2::scale_y_continuous. |
xlab |
String containing the text to use for the x-axis title of the output plots. See ggplot2::xlab. |
ylab |
String containing the text to use for the y-axis title of the output plots. See ggplot2::ylab. |
ncol |
Integer, number of columns to use in the output arranged plot. See ggpubr::ggarrange. |
nrow |
Integer, number of rows to use in the output arranged plot. See ggpubr::ggarrange. |
align |
Specify how output plots should be arranged. See ggpubr::ggarrange. |
common.legend |
Specify if arranged plot should use a common legend. See ggpubr::ggarrange. |
legend.position |
Specify position of common legend for arranged plot. See ggpubr::ggarrange. Use |
group.legend.title |
String, title for plot legend when generating plots with |
common.y.axis |
Logical, if |
summary.table |
Logical, if |
table.margin |
Numeric, controls spacing between plots and summary table. |
filename |
String, filename used to save plot. File extension must be specified. See ggplot2::ggsave. |
width |
Numeric, specify width of output plot. See ggplot2::ggsave. |
height |
Numeric, specify height of output plot. See ggplot2::ggsave. |
units |
Specify units of |
dpi |
Specify resolution of output plot. See ggplot2::ggsave. |
PlotPerformanceByAttribute
can be used to analyze model performance according to subbasin attributes. The function requires two primary inputs: model performance
information is contained in the subass
input, and subbasin attribute information is contained in the attributes
input. The subass.column
argument controls
which column of the subass
data frame will be used as the y-coordinate of points. Plots will be generated for each column in the attributes
data frame
(except for the column named "SUBID") using the column values as the x-coordinate of the points.
A subbasin attribute summary table can be generated using SubidAttributeSummary
, and additional columns can be joined to the data frame to add additional output plots.
PlotPerformanceByAttribute
returns a plot to the currently active plot device.
ReadSubass
for HYPE result import; SubidAttributeSummary
for subbasin attribute summary
subass <- ReadSubass(filename = system.file("demo_model", "results", "subass1.txt", package = "HYPEtools"), check.names = TRUE)
gd <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
gc <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
attributes <- SubidAttributeSummary(subids = subass$SUBID, gd = gd, gc = gc,
  mapoutputs = c(system.file("demo_model", "results", "mapCOUT.txt", package = "HYPEtools")),
  upstream.gd.cols = c("SLOPE_MEAN"))
PlotPerformanceByAttribute(
  subass = subass,
  attributes = attributes[, c("SUBID", "landuse_1", "landuse_2", "landuse_3")],
  xlimits = c(0, 1)
)
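Group-wise plotting can be sketched by supplying a two-column data frame of SUBIDs and group IDs through groups; the grouping below is purely illustrative and reuses the objects created in the example above:
# hypothetical grouping: alternate subbasins between two arbitrary groups
groups <- data.frame(SUBID = subass$SUBID, GROUP = rep(c("A", "B"), length.out = nrow(subass)))
PlotPerformanceByAttribute(
  subass = subass,
  groups = groups,
  attributes = attributes[, c("SUBID", "landuse_1", "landuse_2", "landuse_3")],
  xlimits = c(0, 1)
)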
A combined plot for annual regimes with box plot elements for observed variables and ribbon elements for simulated variables. Particularly designed for comparisons of sparse observations with high-density model results, e.g. for in-stream nutrients.
PlotSimObsRegime( x, sim, obs, ts.in = NULL, ts.out = "month", start.mon = 1, add.legend = TRUE, pos.legend = "topright", inset = 0, l.legend = NULL, log = FALSE, ylim = NULL, xlab = NULL, ylab = NULL, mar = c(3, 3, 1, 1) + 0.1 )
x |
Data frame, with column-wise equally-spaced time series of HYPE variables. Date-times in
|
sim , obs
|
Character string keywords, observed and simulated HYPE variable IDs to plot. Not case-sensitive, but must exist in |
ts.in |
Character string, timestep of |
ts.out |
Character string, aggregation timestep for simulation results, defaults to |
start.mon |
Integer between 1 and 12, starting month of the hydrological year, used to order the output. |
add.legend |
Logical. If |
pos.legend |
Character string keyword for legend positioning. See Details in |
inset |
Integer, legend inset as fraction of plot region, one or two values for x and y. See |
l.legend |
Character vector of length 2 containing variable labels for legend, first for |
log |
Logical, if |
ylim |
Numeric vector of length two, giving y-axis limits. Defaults to min-max range of all plotted data. |
xlab |
Character string or |
ylab |
Character or |
mar |
Numeric vector of length 4, margin specification passed to |
PlotSimObsRegime
combines ribbons and box plot elements. Box plot elements are composed as defaults from boxplot
,
i.e. boxes spanning the 25% to 75% percentile range, whiskers extending to the most extreme values within 1.5 times the interquartile range, and remaining
extreme values as points. Observation counts per month over the observation period are printed above the x-axis.
Aggregation time length of the simulated variable can be chosen in argument ts.out
, resulting in more or less smoothed ribbons.
For the observed variable, the aggregation is fixed to months, in order to aggregate enough values for each box plot element.
PlotSimObsRegime
returns a plot to the currently active plot device, and invisibly a list
object containing three
elements with the plotted data and variable IDs.
Element sim
contains a list as returned by AnnualRegime
. Element obs
contains a list with two elements, a
vector refdate
with x positions of box plots elements, and a list reg.obs
with observations for the monthly box plot elements.
Element variable
contains a named vector with HYPE variable IDs for observations and simulations. sim
and obs
returned
empty if corresponding function argument was NULL
.
PlotAnnualRegime
for a more generic annual regime plot, AnnualRegime
to compute annual regimes only.
# Plot observed and simulated discharge
te <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
PlotSimObsRegime(x = te, sim = "cout", obs = "rout", start.mon = 10)
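The ribbon smoothness can be varied through the aggregation time step of the simulated variable; the sketch below reuses te from the example above and assumes that "week" is an accepted ts.out keyword:
# weekly instead of the default monthly aggregation of simulated values (keyword assumed)
PlotSimObsRegime(x = te, sim = "cout", obs = "rout", start.mon = 10, ts.out = "week")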
Plot routing of subbasins for a HYPE model on an interactive map.
PlotSubbasinRouting( map, map.subid.column = 1, gd = NULL, bd = NULL, plot.scale = TRUE, plot.searchbar = FALSE, weight = 0.5, opacity = 1, fillColor = "#4d4d4d", fillOpacity = 0.25, line.weight = 5, line.opacity = 1, seed = NULL, darken = 0, font.size = 10, file = "", vwidth = 1424, vheight = 1000, html.name = "" )
map |
Path to file containing subbasin polygon GIS data (e.g. shapefile or geopackage) or a |
map.subid.column |
Integer, column index in the |
gd |
Path to model GeoData.txt or a GeoData object from |
bd |
Path to model BranchData.txt or a BranchData object from |
plot.scale |
Logical, include a scale bar on the map. |
plot.searchbar |
Logical, if |
weight |
Numeric, weight of subbasin boundary lines. See |
opacity |
Numeric, opacity of subbasin boundary lines. See |
fillColor |
String, color of subbasin polygons. See |
fillOpacity |
Numeric, opacity of subbasin polygons. See |
line.weight |
Numeric, weight of routing lines. See |
line.opacity |
Numeric, opacity of routing lines. See |
seed |
Integer, seed number to produce a repeatable color palette. |
darken |
Numeric specifying the amount of darkening applied to the random color palette. Negative values will lighten the palette. See |
font.size |
Numeric, font size (px) for map subbasin labels. |
file |
Save map to an image file by specifying the path to the desired output file using this argument. File extension must be specified.
See |
vwidth |
Numeric, width of the exported map image in pixels. See |
vheight |
Numeric, height of the exported map image in pixels. See |
html.name |
Save map to an interactive HTML file by specifying the path to the desired output file using this argument. File extension must be specified.
See |
PlotSubbasinRouting
generates an interactive Leaflet map with lines indicating the routing of flow between subbasins. GeoData information only needs
to be provided if the map
GIS data does not include SUBID and/or MAINDOWN fields. BranchData information only needs to be provided if the model has a
BranchData.txt file. Subbasin routing lines are randomly assigned a color using distinctColorPalette
.
Returns an interactive Leaflet map.
## Not run: 
PlotSubbasinRouting(
  map = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools"),
  gd = system.file("demo_model", "GeoData.txt", package = "HYPEtools"),
  map.subid.column = 25
)
## End(Not run)
Pearson product-moment correlation coefficient calculation, a specific case of function cor
.
r(sim, obs, ...)

## S3 method for class 'HypeSingleVar'
r(sim, obs, progbar = TRUE, ...)
sim |
|
obs |
|
... |
Ignored. |
progbar |
Logical, if |
This function wraps a call to cor(x = obs, y = sim, use = "na.or.complete", method = "pearson")
.
Method r.HypeSingleVar
calculates Pearson's r for imported HYPE outputs with single variables for several
catchments, i.e. time and map files, optionally multiple model runs combined, typically results from calibration runs.
r.HypeSingleVar
returns a 2-dimensional array of Pearson correlation coefficients for all SUBIDs and model
iterations provided in argument sim
, with values in the same order
as the second and third dimension in sim
, i.e. [subid, iteration]
.
cor
, on which the function is based. ReadWsOutput
for importing HYPE calibration results.
# Create dummy data, discharge observations with added white noise as model simulations
te1 <- ReadObs(filename = system.file("demo_model", "Qobs.txt", package = "HYPEtools"))
te1 <- HypeSingleVar(x = array(data = unlist(te1[, -1]) + runif(n = nrow(te1), min = -.5, max = .5),
                               dim = c(nrow(te1), ncol(te1) - 1, 1),
                               dimnames = list(rownames(te1), colnames(te1)[-1])),
                     datetime = te1$DATE, subid = obsid(te1), hype.var = "cout")
te2 <- ReadObs(filename = system.file("demo_model", "Qobs.txt", package = "HYPEtools"))
te2 <- HypeSingleVar(x = array(data = unlist(te2[, -1]),
                               dim = c(nrow(te2), ncol(te2) - 1, 1),
                               dimnames = list(rownames(te2), colnames(te2)[-1])),
                     datetime = te2$DATE, subid = obsid(te2), hype.var = "rout")
# Pearson correlation
r(sim = te1, obs = te2, progbar = FALSE)
This is a convenience wrapper function to import a basin output file as data frame or matrix into R.
ReadBasinOutput( filename, dt.format = "%Y-%m-%d", type = c("df", "dt", "hmv"), id = NULL, warn.nan = FALSE )
filename |
Path to and file name of the basin output file to import. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
type |
Character, keyword for data type to return. |
id |
Integer, SUBID or OUTREGID of the imported sub-basin or outregion results. If |
warn.nan |
Logical, check if imported results contain any |
ReadBasinOutput
is a convenience wrapper function of fread
from package
data.table::data.table, with conversion of date-time strings to
POSIX time representations. Monthly and annual time steps are returned as first day of the time step period.
HYPE basin output files can contain results for a single sub-basin or for a user-defined output region. ReadBasinOutput
checks HYPE
variable names (column headers in imported file) for an "RG"-prefix. If it is found, the ID read from either file name or argument
id
is saved to attribute outregid
, otherwise to attribute subid
.
ReadBasinOutput
returns a data.frame
, data.table::data.table, or a HypeMultiVar
array.
Data frames and data tables contain additional attributes
: hypeunit
, a vector of HYPE variable units,
subid
and outregid
, the HYPE SUBID/OUTREGID to which the time series belong (both attributes always created and assigned NA
if not applicable to data contents), timestep
with a time step keyword attribute, and comment
with contents of an optional
first-row comment (NA
otherwise). An additional attribute subid.nan
might be returned, see argument warn.nan
.
For the conversion of date/time strings, time zone "UTC" is assumed. This is done to avoid potential daylight saving time side effects when working with the imported data (and possibly converting to string representations during the process).
HYPE results are printed to files using a user-specified accuracy. This accuracy is specified in 'info.txt' as a number of
decimals to print. If large numbers are printed, this can result in a total number of digits which is too large to print.
Results will then contain values of '****************'. ReadBasinOutput
will convert those cases to 'NA' entries.
Current versions of HYPE allow for defining significant numbers of digits instead of fixed ones, which should prevent this issue from arising.
te <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
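The metadata attached on import can be inspected through the attributes described above, e.g.:
# SUBID, variable units, and time step keyword of the imported basin output
attr(te, "subid")
attr(te, "hypeunit")
attr(te, "timestep")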
This is a convenience wrapper function to import a ClassData file as data frame into R. ClassData files contain definitions of SLC (Soil and Land use Crop) classes in five to 15 predefined columns, see ClassData.txt documentation.
ReadClassData( filename = "ClassData.txt", encoding = c("unknown", "UTF-8", "Latin-1"), verbose = TRUE )
filename |
Path to and file name of the ClassData file to import. Windows users: Note that Paths are separated by '/', not '\'. |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
verbose |
Print information on number of data columns in imported file. |
ReadClassData
is a convenience wrapper function of fread
, with treatment of leading
comment rows. Column names are created on import, optional comment rows are imported as strings in attribute
'comment'.
Optional inline comments (additional non-numeric columns) are automatically identified and imported along with data columns.
ReadClassData
returns a data frame with added attribute 'comment'.
te <- ReadClassData(filename = system.file("demo_model", "ClassData.txt", package = "HYPEtools"))
te
Read a 'description.txt' file as list
object into R. A 'description.txt' file contains land use, soil, and crop
class names of a HYPE set-up, as well as model set-up name and version.
ReadDescription( filename, gcl = NULL, ps = NULL, encoding = c("unknown", "UTF-8", "latin1") )
filename |
Path to and file name of the 'description.txt' file to import. |
gcl |
dataframe, GeoClass.txt file imported with |
ps |
dataframe, PointSourceData.txt file imported with |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
ReadDescription
imports a 'description.txt' into R. This file is not used by HYPE, but is convenient for
e.g. plotting legend labels or examining imported GeoClass files. E.g., PlotBasinSummary
requires a list
as returned from ReadDescription
for labeling.
A 'description.txt' file consists of 28 lines, alternating names and semicolon-separated content. Lines with names are not read by the import function, they just make it easier to compose and read the actual text file.
File contents read by ReadDescription
:
HYPE set-up name (line 2)
HYPE set-up version (line 4)
Land use class IDs (line 6)
Land use class names (line 8)
Land use class short names (line 10)
Soil class IDs (line 12)
Soil class names (line 14)
Soil class short names (line 16)
Crop class IDs (line 18)
Crop class names (line 20)
Crop class short names (line 22)
Point Source IDs (line 24)
Point Source type names (line 26)
Point Source type short names (line 28)
Note that Crop class IDs start from 0
, which means no crop, whereas land use and soil IDs start from 1
(or higher).
Formatting example for description.txt files:
# Name
MyHYPE
# Version
0.1
# Land use class IDs
1;2
# Land use class names
Agriculture;Coniferous forest
# Short land use class names
Agric.;Conif. f.
# Soil class IDs
1;2
# Soil class names
Coarse soils;Medium to fine soils
# Short soil class names
Coarse;Medium
# Crop class IDs
0;1;2
# Crop class names
None;Row crops;Autumn-sown cereal
# Short crop class names
None;Row;Aut.-sown
# Point source type IDs
-1;1;2
# Point source type names
Abstraction;Primary;Secondary
# Short point source type names
ABS;NP1;NP2
ReadDescription
returns a named list with named character elements, corresponding to the
imported lines:
Name
, Version
, lu.id
, Landuse
, lu
(short names), so.id
,
Soil
, so
(short names), cr.id
, Crop
, cr
(short names),
ps.id
, PointSource
, ps
(short names)
te <- ReadDescription(filename = system.file("demo_model", "description.txt", package = "HYPEtools"))
te
This is a convenience wrapper function to import a GeoClass file as data frame into R. GeoClass files contain definitions of SLC (Soil and Land use Crop) classes in twelve to 14 predefined columns, see GeoClass.txt documentation.
ReadGeoClass( filename = "GeoClass.txt", encoding = c("unknown", "UTF-8", "Latin-1"), verbose = TRUE )
filename |
Path to and file name of the GeoClass file to import. Windows users: Note that Paths are separated by '/', not '\'. |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
verbose |
Print information on number of data columns in imported file. |
ReadGeoClass
is a convenience wrapper function of fread
, with treatment of leading
comment rows. Column names are created on import, optional comment rows are imported as strings in attribute
'comment'.
Optional inline comments (additional non-numeric columns) are automatically identified and imported along with data columns.
ReadGeoClass
returns a data frame with added attribute 'comment'.
te <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
te
Import a GeoData file into R.
ReadGeoData( filename = "GeoData.txt", sep = "\t", encoding = c("unknown", "UTF-8", "Latin-1"), remove.na.cols = TRUE )
filename |
Path to and file name of the GeoData file to import. Windows users: Note that Paths are separated by '/', not '\'. |
sep |
character string. Field separator character as described in |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
remove.na.cols |
Logical, remove columns which have all NA values. |
ReadGeoData
uses fread
from the data.table::data.table package
with type numeric
type for columns AREA
and RIVLEN
(if they exist), and
upper-case column names.
If the imported file is a HYPE-conform GeoData file, ReadGeoData
returns an object of S3 class HypeGeoData
(see the class description there), providing its own summary
method. If mandatory GeoData columns are missing,
a standard dataframe is returned along with informative warning messages.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
summary(te)
Import a HYPE model settings information file as list into R.
ReadInfo( filename = "info.txt", encoding = c("unknown", "UTF-8", "latin1"), mode = c("simple", "exact"), comment.duplicates = TRUE )
filename |
Path to and file name of the info.txt file to import. |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
mode |
Use |
comment.duplicates |
Logical, if |
Using ReadInfo
with the simple
mode discards all comments of the imported file (comment rows and in-line comments). The function's purpose is to quickly
provide access to settings and details of a model run, not to mirror the exact info.txt file structure into an R data object. If you would like to mirror the exact file
structure, then use the exact
mode.
ReadInfo
returns a named list. List names are settings codes
(see info.txt documentation). Settings with two
codes are placed in nested lists, e.g. myinfo$basinoutput$variable
. Multi-line subbasin definitions for basin outputs and class
outputs are merged to single vectors on import.
WriteInfo
AddInfoLine
RemoveInfoLine
te <- ReadInfo(filename = system.file("demo_model", "info.txt", package = "HYPEtools"))
te
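Individual settings can then be accessed as (nested) list elements; the entries used below ('bdate' and a 'basinoutput' variable definition) are assumed to be present in the demo 'info.txt':
# access single settings from the imported list (entry names assumed to exist)
te$bdate
te$basinoutput$variable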
This is a convenience wrapper function to import a map output file ('map<HYPE_output_variable>.txt') into R.
ReadMapOutput( filename, dt.format = NULL, hype.var = NULL, type = c("df", "dt", "hsv"), warn.nan = FALSE, col.prefix = "X" )
filename |
Path to and file name of the map output file to import. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
hype.var |
Character string, a four-letter keyword to specify HYPE variable ID of file contents. See
list of HYPE variables.
If |
type |
Character, keyword for data type to return. |
warn.nan |
Logical, check if imported results contain any |
col.prefix |
String, prefix added to mapoutput column names. Default is |
ReadMapOutput
is a convenience wrapper function of fread
from package
data.table::data.table,
with conversion of date-time strings to POSIX time representations. Monthly and annual time steps are returned as first day
of the time step period.
ReadMapOutput
returns a data.frame
, data.table::data.table, or a HypeSingleVar
array.
Data frames and data tables contain additional attributes
: variable
, giving the HYPE variable ID,
date
, a vector of date-times (corresponding to columns from column 2), timestep
with a time step attribute,
and comment
with the first line of the imported file as text string. An additional attribute subid.nan
might be
returned, see argument warn.nan
.
HYPE results are printed to files using a user-specified accuracy. This accuracy is specified in 'info.txt' as a number of
decimals to print. If large numbers are printed, this can result in a total number of digits which is too large to print.
Results will then contain values of '****************'. ReadMapOutput
will convert those cases to 'NA' entries.
Current versions of HYPE allow for defining significant instead of fixed number of digits, which should prevent this issue from arising.
te <- ReadMapOutput(filename = system.file("demo_model", "results", "mapEVAP.txt", package = "HYPEtools"), dt.format = NULL)
te
Import single-variable HYPE observation files into R.
ReadObs( filename, variable = "", dt.format = NULL, nrows = -1, type = c("df", "dt"), select = NULL, obsid = NULL )

ReadPTQobs( filename, variable = "", dt.format = NULL, nrows = -1, type = c("df", "dt"), select = NULL, obsid = NULL )
filename |
Path to and file name of the file to import. Windows users: Note that Paths are separated by '/', not '\'. |
variable |
Character string, HYPE variable ID of file contents. If |
dt.format |
Optional date-time |
nrows |
Number of rows to import. A value of |
type |
Character, keyword for data type to return. |
select |
Integer vector, column numbers to import. Note: first column with dates must be imported and will be added if missing. |
obsid |
Integer vector, HYPE OBSIDs to import. Alternative to argument |
ReadObs
is a convenience wrapper function of fread
from package
data.table::data.table,
with conversion of date-time strings to POSIX time representations. Observation IDs (SUBIDs or IDs connected to SUBIDs with a
ForcKey.txt file) are returned as integer
attribute obsid
(directly accessible through obsid
).
Observation file types with automatic (dummy) variable
attribute assignment:
File | HYPE variable ID |
(*: dummy ID) | |
Pobs.txt | prec |
Tobs.txt | temp |
Qobs.txt | rout |
TMINobs.txt | tmin* |
TMAXobs.txt | tmax* |
VWobs.txt | vwnd* |
UWobs.txt | uwnd* |
SFobs.txt | snff* |
SWobs.txt | swrd* |
RHobs.txt | rhum* |
Uobs.txt | wind* |
ReadObs
returns a data frame or data table with additional attributes: obsid
with observation IDs, timestep
with a time step string, either "day"
or "nhour"
(only daily or n-hourly time steps supported), and variable
with a HYPE variable ID string.
For the conversion of date/time strings, time zone "UTC" is assumed. This is done to avoid potential daylight saving time side effects when working with the imported data (and e.g. converting to string representations during the process).
WriteObs
ReadXobs
for multi-variable HYPE observation files
te <- ReadObs(filename = system.file("demo_model", "Tobs.txt", package = "HYPEtools"))
head(te)
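Observation IDs and the detected time step of the imported file can be retrieved from the attributes described above:
# observation IDs (SUBIDs or forcing IDs) and time step keyword of the imported file
obsid(te)
attr(te, "timestep")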
This function imports an 'optpar.txt' into a list.
ReadOptpar(filename = "optpar.txt", encoding = c("unknown", "UTF-8", "latin1"))
filename |
Path to and file name of the 'optpar.txt' file to import. |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
ReadOptpar
imports HYPE 'optpar.txt' files. Optpar files contain instructions for parameter calibration/optimization
and parameter value ranges, for details on the file format, see the
optpar.txt online documentation.
ReadOptpar
returns a list
object with three elements:
comment
, the file's first-row comment string.
tasks
, a two-column dataframe with row-wise key-value pairs for tasks and settings.
pars
, a list of dataframes, each containing values for one parameter. Three columns each, holding parameter
range minima, maxima, and intervals.
The number of rows in each dataframe corresponds to the number of soil or land use classes for class-specific parameters.
Parameter names as list element names.
te <- ReadOptpar(filename = system.file("demo_model", "optpar.txt", package = "HYPEtools"))
te
Import a HYPE parameter file as list into R.
ReadPar(filename = "par.txt", encoding = c("unknown", "UTF-8", "latin1"))
filename |
Path to and file name of the parameter file to import. Windows users: Note that Paths are separated by '/', not '\'. |
encoding |
Character string, encoding of non-ascii characters in imported text file. Particularly relevant when
importing files created under Windows (default encoding "Latin-1") in Linux (default encoding "UTF-8") and vice versa. See
also argument description in |
ReadPar
checks for inline comments in 'par.txt' files, these are moved to separate "lines" (list elements).
ReadPar
returns a list of named vectors. Parameters are returned as numeric vectors with HYPE parameter names as list
element names. Comments are returned in separate list elements as single character strings, former inline comments are moved
to elements preceding the original comment position (i.e. to a line above in the par.txt file structure). Comment elements are
named `!!`
.
te <- ReadPar(filename = system.file("demo_model", "par.txt", package = "HYPEtools"))
te
This is a small convenience function to import a 'partial model setup file' as integer vector into R.
ReadPmsf(filename = "pmsf.txt")
filename |
Path to and file name of the pmsf file to import. Windows users: Note that Paths are separated by '/', not '\'. |
ReadPmsf
imports 'pmsf.txt' files, which contain SUBIDs and are used to run only parts of a HYPE setup's domain
without having to extract a separate model setup. For details on the file format, see the
pmsf.txt online documentation.
Pmsf.txt files imported with ReadPmsf
are stripped of the first value, which contains the total number of subcatchments
in the file. No additional attribute is added to hold this number since it can be easily obtained using length
.
ReadPmsf
returns an integer vector.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te
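A minimal sketch of an actual pmsf import, assuming a 'pmsf.txt' file exists in the current working directory:
## Not run: 
pmsf <- ReadPmsf(filename = "pmsf.txt")
length(pmsf) # number of subbasins in the partial setup, see Details
## End(Not run)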
Import a HYPE simass.txt simulation assessment file as data frame into R. Simulation assessment files contain domain-wide aggregated performance criteria results, as defined in 'info.txt'.
ReadSimass(filename = "simass.txt")
filename |
Path to and file name of the 'simass.txt' file to import. |
ReadSimass
imports a simulation assessment file into R.
HYPE simass.txt files contain
domain-wide performance measures for observed-simulated variable pairs as defined in
HYPE info.txt files.
The function interprets character-coded time steps (e.g. "DD"
for daily time steps), as used in some HYPE versions.
Sub-daily time steps are currently not treated and will probably result in a warning during time step evaluation within the
function. Please contact the developers if you need support for sub-daily time steps!
ReadSimass
returns a data frame with columns for HYPE variable names (observed, simulated), aggregation periods, and
performance measure values of evaluated variable pairs. Aggregation periods are coded as in info.txt files, i.e. 1 = daily,
2 = weekly, 3 = monthly, 4 = annual. Metadata is added to the data frame as additional attributes
:
names.long
, character
vector with long names, corresponding to abbreviations used as actual column names
n.simulation
, integer
, simulation number (e.g. with Monte Carlo simulations)
crit.total
, numeric
, total criteria value
crit.conditional
, numeric
, conditional criteria value
threshold
, integer
, data limit threshold
te <- ReadSimass(filename = system.file("demo_model", "results", "simass.txt", package = "HYPEtools"))
te
This is a convenience wrapper function to import an subassX.txt sub-basin assessment file as data frame into R. Sub-basins assessment files contain performance criteria results, as defined in 'info.txt', for individual sub-basins with observations.
ReadSubass( filename = "subass1.txt", nhour = NULL, check.names = FALSE, na.strings = c("****************", "-9999") )
filename |
Path to and file name of the 'subassX.txt' file to import. |
nhour |
Integer, time step of sub-daily model results in hours. See details. |
check.names |
Logical. If |
na.strings |
Vector of strings that should be read as NA. |
ReadSubass
imports a sub-basin assessment file into R. Information on model variables evaluated in the
file is imported as additional attributes
variables
, the evaluation time step in an attribute
timestep
.
Sub-daily time steps are reported with time step code '0' in HYPE result files. In order to preserve the time step
information in the imported R object, users must provide the actual model evaluation time step in hours
in argument nhour
in the sub-daily case.
ReadSubass
returns a data frame with two additional attributes: variables
contains a 2-element
character vector with IDs of evaluated observed and simulated HYPE variables, timestep
contains a character
keyword detailing the evaluation time step.
te <- ReadSubass(filename = system.file("demo_model", "results", "subass1.txt", package = "HYPEtools"))
te
Import a time output file 'time<HYPE_output_variable>.txt' or a converted time output file in netCDF format into R.
ReadTimeOutput( filename, dt.format = "%Y-%m-%d", hype.var = NULL, out.reg = NULL, type = c("df", "dt", "hsv"), select = NULL, id = NULL, nrows = -1L, skip = 0L, warn.nan = FALSE, verbose = TRUE )
filename |
Path to and file name of the time output file to import. Acceptable file choices are |
dt.format |
Date-time |
hype.var |
Character, HYPE variable ID in |
out.reg |
Logical, specify if file contents are sub-basin or output region results (i.e. SUBIDs or OUTREGIDs as columns).
|
type |
Character, keyword for data type to return. |
select |
Integer vector, column numbers to import. Note: first column with dates must be imported and will be added if missing. |
id |
Integer vector, HYPE SUBIDs/OUTREGIDs to import. Alternative to argument |
nrows |
Integer, number of rows to import, see documentation in |
skip |
Integer, number of data rows to skip on import. Time output header lines are always skipped. |
warn.nan |
Logical, check if imported results contain any |
verbose |
Logical, print information during import. |
ReadTimeOutput
imports from text or netCDF files. netCDF import is experimental and not feature-complete (e.g. attributes are
not yet fully digested).
Text file import uses fread
from package
data.table::data.table, netCDF import extracts data and attributes using functions from package ncdf4
.
Date-time representations in data files are converted to POSIX time representations. Monthly and annual time steps are returned as
first day of the time step period.
Import from netCDF files requires an id
dimension in the netCDF data. Gridded data with remapped HYPE results in spatial x/y
dimensions as defined in the HYPE netCDF formatting standard
are currently not supported.
ReadTimeOutput
returns a data.frame
, data.table::data.table, or a HypeSingleVar
array.
Data frames and data tables contain additional attributes
: variable
, giving the HYPE variable ID,
subid
and outregid
, the HYPE SUBIDs/OUTREGIDs (corresponding to columns from column two onward) to which the time
series belong (both attributes always created and assigned NA
if not applicable to data contents), timestep
with a
time step attribute, and comment
with first row comment of imported text file as character string or global attributes of imported
netCDF file as character string of collated key-value pairs. An additional attribute id.nan
might be returned, see argument
warn.nan
.
For the conversion of date/time strings, time zone "UTC" is assumed. This is done to avoid potential daylight saving time side effects when working with the imported data (and possibly converting to string representations during the process).
HYPE results are printed to files using a user-specified accuracy. This accuracy is specified in 'info.txt' as a number of
decimals to print. If large numbers are printed, this can result in a total number of digits which is too large to print.
Results will then contain values of '****************'. ReadTimeOutput
will convert those cases to 'NA' entries.
Current versions of HYPE allow for defining significant instead of fixed number of digits, which should prevent this
issue from arising.
te <- ReadTimeOutput(filename = system.file("demo_model", "results", "timeCOUT.txt", package = "HYPEtools"), dt.format = "%Y-%m")
te
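Selected sub-basins can be imported via the id argument; the sketch below assumes that SUBID 3587 is among the columns of the demo file:
# import a single sub-basin only (SUBID assumed to be present in the file)
te_single <- ReadTimeOutput(filename = system.file("demo_model", "results", "timeCOUT.txt",
  package = "HYPEtools"), dt.format = "%Y-%m", id = 3587)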
Read and combine HYPE optimization simulation output files, generated with 'task WS' during HYPE optimization runs. Outputs can consist of basin, time, or map output files.
ReadWsOutput( path, type = c("time", "map", "basin"), hype.var = NULL, id = NULL, dt.format = NULL, select = NULL, from = NULL, to = NULL, progbar = TRUE, warn.nan = FALSE )
path |
Character string, path to the directory holding simulation output files to import. Windows users: Note that Paths are separated by '/', not '\'. |
type |
Character string, keyword for HYPE output file type to import. One of |
hype.var |
Character string, keyword to specify HYPE output variable to import. Must include "RG"-prefix in case of output region files.
Not case-sensitive. Required in combination with |
id |
Integer, giving a single SUBID or OUTREGID for which to import basin output files. Required in combination with |
dt.format |
Date-time |
select |
Integer vector, column numbers to import, for use with |
from |
Integer. For partial imports, number of simulation iteration to start from. |
to |
Integer. For partial imports, number of simulation iteration to end with. |
progbar |
Logical, display a progress bar while importing HYPE output files. Adds overhead to calculation time but useful when many files are imported. |
warn.nan |
Logical, check if imported results contain any |
HYPE optimization routines optionally allow for generation of simulation output files for each iteration in the optimization routine. For further details see documentation on 'task WS' in the optpar.txt online documentation.
ReadWsOutput
imports and combines all simulation iterations in an array
, which can then be easily used in
further analysis, most likely in combination with performance and parameter values from an imported corresponding 'allsim.txt' file.
The result folder containing HYPE WS results (argument path) can contain other files as well; ReadWsOutput searches for a file name pattern to filter the targeted result files. However, if files of the same type exist from different model runs, e.g. from another calibration run or from a standard model run, the pattern search cannot distinguish these from the targeted files and ReadWsOutput will fail.
For large numbers of result files, simulations can be partially imported using arguments from
and to
, in order to avoid
memory exceedance problems.
ReadWsOutput
returns a 3-dimensional array with additional attributes. The array content depends on the HYPE output file type
specified in argument type
. Time and map output file imports return an array of class HypeSingleVar
with
[time, subid, iteration]
dimensions, basin output file imports return an array of class HypeMultiVar
with
[time, variable, iteration]
dimensions. An additional attribute subid.nan
might be
returned, see argument warn.nan
, containing a list with SUBID vector elements. Vectors contain iterations where NaN
values occur for the given SUBID.
Returned arrays contain additional attributes
:
A vector of date-times, POSIX
if argument dt.format
is non-NULL
. Corresponds to 1st array
dimension.
A (vector of) SUBID(s). Corresponds to 2nd array dimension for time and map output files.
NA
if not applicable.
A (vector of) OUTREGID(s). Corresponds to 2nd array dimension for time and map output files.
NA
if not applicable.
A vector of HYPE output variables. Corresponds to 2nd array dimension for basin output files.
A named list with SUBID or HYPE variable vector elements. Vectors contain iterations where NaN
values occur for the given SUBID/HYPE variable.
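For orientation, a minimal sketch (not part of the package documentation) of inspecting and summarizing such an array, using the demo model map output from the example below:
ws <- ReadWsOutput(path = system.file("demo_model", "results", package = "HYPEtools"), type = "map", hype.var = "cout", dt.format = "%Y-%m")
dim(ws)                     # [time, subid, iteration] for time and map output imports
ws[, , 1]                   # results of the first simulation iteration
apply(ws, c(1, 2), median)  # median over all iterations, per time step and SUBID
names(attributes(ws))       # list the additional attributes described above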
te <- ReadWsOutput(path = system.file("demo_model", "results", package = "HYPEtools"), type = "map", hype.var = "cout", dt.format = "%Y-%m")
te
This is a convenience wrapper function to import an Xobs file into R.
ReadXobs( filename = "Xobs.txt", dt.format = "%Y-%m-%d", variable = NULL, nrows = -1L, verbose = if (nrows %in% 0:2) FALSE else TRUE )
filename |
Path to and file name of the Xobs file to import. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
variable |
Character vector, HYPE variable ID(s) to select for import. Not case-sensitive. If |
nrows |
Integer, number of rows to import. A value of |
verbose |
Logical, throw warning if class |
ReadXobs is a convenience wrapper function of fread from package data.table, with conversion of date-time strings to POSIX time representations. Variable names, SUBIDs, comment, and timestep are returned as attributes (see attr on how to access these).
Duplicated variable-SUBID combinations are not allowed in HYPE Xobs files, and the function will throw a warning if any are found.
If datetime import to POSIXct worked, ReadXobs
returns a HypeXobs
object, a data frame with four
additional attributes variable
, subid
, comment
, and timestep
: variable
and subid
each contain a vector with column-wise HYPE IDs (first column with date/time information omitted).
comment
contains the content of the Xobs file comment row as single string. timestep
contains a keyword string.
Column names of the returned data frame are composed of variable names and SUBIDs, separated by an underscore,
i.e. [variable]_[subid]
. If datetime conversion failed on import, the returned object is a data frame
(i.e. no class HypeXobs
).
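As a short illustration (a sketch, not part of the package documentation), the documented attributes and column naming can be used to pick out individual series:
xobs <- ReadXobs(filename = system.file("demo_model", "Xobs.txt", package = "HYPEtools"))
names(xobs)             # columns named [variable]_[subid], first column holds date-times
attr(xobs, "variable")  # column-wise HYPE variable IDs
attr(xobs, "subid")     # column-wise SUBIDs
xobs[, c(1, 2)]         # date-times plus the first variable-SUBID series in the file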
For the conversion of date/time strings, time zone "UTC" is assumed. This is done to avoid potential daylight saving time side effects when working with the imported data (and e.g. converting to string representations during the process).
te <- ReadXobs(filename = system.file("demo_model", "Xobs.txt", package = "HYPEtools"))
te
RescaleSLCClasses
re-scales several or all SLC classes for each SUBID in a GeoData data frame
to a new target sum for all classes.
RescaleSLCClasses(gd, slc.exclude = NULL, target = 1, plot.box = TRUE)
gd |
A data frame containing columns 'SLC_n' ( |
slc.exclude |
Integer, SLC class numbers. Area fractions of classes listed here are kept fixed
during re-scaling. If |
target |
Numeric, target sum for SLC class fractions in each subbasin after re-scaling. Either a single
number or a vector with one value for each row in |
plot.box |
Logical, if |
RescaleSLCClasses allows rescaling of SLC classes, e.g. as part of a post-processing workflow during HYPE model setup. Individual SLC classes can be excluded to protect them from re-scaling. This can be useful e.g. for lake areas which may have to correspond to areas given in a LakeData file. The function will throw a warning if excluded SLC class fractions are greater than the sums provided in target, but not if they are smaller.
RescaleSLCClasses
returns the data frame provided in gd
, with re-scaled SLC class fractions.
SumSLCClasses
for inspection of SLC class fraction sums in each subbasin
CleanSLCClasses
for pruning of small SLC fractions.
# Import source data
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
# Re-scale SLC classes, protect the first two
RescaleSLCClasses(gd = te, slc.exclude = 1:2)
ScaleAquiferData
scales the RETRATE
time step-dependent recession coefficient in an imported
HYPE 'AquiferData.txt' file to a
new target time step. See HYPE wiki tutorial on sub-daily time steps.
ScaleAquiferData( x = NULL, timestep.ratio = 1/24, digits = 3, verbose = TRUE, print.par = FALSE )
x |
Data frame containing HYPE AquiferData contents. Typically imported with |
timestep.ratio |
Numeric, time step scaling factor. Defaults to (1/24) to scale from daily to hourly time steps. To scale from hourly to daily time steps use 24. |
digits |
Integer, number of significant digits in scaled parameter values to export. See |
verbose |
Logical, if |
print.par |
Logical, print known time-scale dependent recession coefficients instead of scaling an AquiferData data frame. |
ScaleAquiferData
applies a user-specified scaling factor timestep.ratio
to the RETRATE
time step-dependent recession coefficient
in a HYPE AquiferData data frame. All RETRATE
values that are not equal to zero are assumed to be aquifer rows and will be converted to the new time step.
Recession coefficients are matched against an inbuilt set of column names. To see these names, call ScaleAquiferData(print.par = TRUE)
.
Please notify us if you find any missing coefficients.
Timestep-dependent recession coefficients are scaled using the relationship described in: Nalbantis, Ioannis (1995). “Use of multiple-time-step information in rainfall-runoff modelling”, Journal of Hydrology 165, 1-4, pp. 135–159.
new_coefficient_value = 1 - (1 - old_coefficient_value)^time_step_ratio
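To illustrate, a small numerical sketch of this relationship with a hypothetical daily recession coefficient (values not taken from any HYPE setup):
old_coefficient <- 0.3    # hypothetical daily recession coefficient
time_step_ratio <- 1/24   # daily to hourly
1 - (1 - old_coefficient)^time_step_ratio  # ~0.0148, the equivalent hourly coefficient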
Use the ScalePar
and ScaleFloodData
functions to scale the time-dependent parameters and recession coefficients in par.txt and FloodData.txt files, respectively.
Note that ScalePar
does not scale the values for the "gratk", "ilratk", "olratk", or "wetrate" rating curve recession coefficients in par.txt because they are not limited to the range 0-1.
Likewise, HYPEtools does not provide any scaling function for the "RATE" columns in DamData.txt and LakeData.txt because these values are not limited to the range 0-1.
We recommend looking at the results from the lakes/wetlands and recalibrating these parameters and their related power coefficients as needed.
A data.frame()
object as supplied in x
, with re-scaled recession coefficients, or nothing if print.par = TRUE
.
# Import daily HYPE AquiferData file
ad <- ReadAquiferData(filename = system.file("demo_model", "AquiferData_Example.txt", package = "HYPEtools"))
# Scale to hourly time steps
ScaleAquiferData(x = ad)
# Print all time scale-dependent coefficients known to the function
ScaleAquiferData(print.par = TRUE)
ScaleFloodData
scales the time step-dependent recession coefficients in an imported
HYPE 'FloodData.txt' file to a
new target time step. See HYPE wiki tutorial on sub-daily time steps.
ScaleFloodData( x = NULL, timestep.ratio = 1/24, digits = 3, verbose = TRUE, print.par = FALSE )
x |
Data frame containing HYPE FloodData contents. Typically imported with |
timestep.ratio |
Numeric, time step scaling factor. Defaults to (1/24) to scale from daily to hourly time steps. To scale from hourly to daily time steps use 24. |
digits |
Integer, number of significant digits in scaled parameter values to export. See |
verbose |
Logical, if |
print.par |
Logical, print known time-scale dependent recession coefficients instead of scaling a FloodData data frame. |
ScaleFloodData
applies a user-specified scaling factor timestep.ratio
to the time step-dependent recession coefficients
in a HYPE FloodData data frame. Recession coefficients are matched against an inbuilt set of column names. To see these names, call ScaleFloodData(print.par = TRUE)
.
Please notify us if you find any missing coefficients.
Timestep-dependent recession coefficients are scaled using the relationship described in: Nalbantis, Ioannis (1995). “Use of multiple-time-step information in rainfall-runoff modelling”, Journal of Hydrology 165, 1-4, pp. 135–159.
new_coefficient_value = 1 - (1 - old_coefficient_value)^time_step_ratio
Use the ScalePar
and ScaleAquiferData
functions to scale the time-dependent parameters and recession coefficients in par.txt and AquiferData.txt files, respectively.
Note that ScalePar
does not scale the values for the "gratk", "ilratk", "olratk", or "wetrate" rating curve recession coefficients in par.txt because they are not limited to the range 0-1.
Likewise, HYPEtools does not provide any scaling function for the "RATE" columns in DamData.txt and LakeData.txt because these values are not limited to the range 0-1.
We recommend looking at the results from the lakes/wetlands and recalibrating these parameters and their related power coefficients as needed.
A data.frame()
object as supplied in x
, with re-scaled recession coefficients, or nothing if print.par = TRUE
.
# Import daily HYPE FloodData file
fd <- ReadFloodData(filename = system.file("demo_model", "FloodData_Example.txt", package = "HYPEtools"))
# Scale to hourly time steps
ScaleFloodData(x = fd)
# Print all time scale-dependent coefficients known to the function
ScaleFloodData(print.par = TRUE)
ScalePar
scales time step-dependent parameters and recession coefficients in an imported
HYPE 'par.txt' parameter file to a
new target time step. See HYPE wiki tutorial on sub-daily time steps.
ScalePar( x = NULL, timestep.ratio = 1/24, digits = 3, verbose = TRUE, print.par = FALSE )
x |
List containing HYPE parameters. Typically imported with |
timestep.ratio |
Numeric, time step scaling factor. Defaults to (1/24) to scale from daily to hourly time steps. To scale from hourly to daily time steps use 24. |
digits |
Integer, number of significant digits in scaled parameter values to export. See |
verbose |
Logical, if |
print.par |
Logical, print known time-scale dependent parameters and recession coefficients instead of scaling a parameter list. |
ScalePar
applies a user-specified scaling factor, timestep.ratio
, to all time scale-dependent parameters and recession coefficients
in a HYPE parameter list. Parameters are matched against an inbuilt set of parameter names. To see these parameters, call ScalePar(print.par = TRUE)
.
Please notify us if you find any missing parameters.
If parameters are not timestep-dependent recession coefficients, then scaling is performed using the ratio between the two time step lengths (e.g. 1/24 when scaling from daily to hourly time steps). If parameters are timestep-dependent recession coefficients, then scaling is performed using the relationship described in: Nalbantis, Ioannis (1995). “Use of multiple-time-step information in rainfall-runoff modelling”, Journal of Hydrology 165, 1-4, pp. 135–159.
new_parameter_value = 1 - (1 - old_parameter_value)^time_step_ratio
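A small sketch of the two scaling cases, with hypothetical parameter values and daily-to-hourly scaling (timestep.ratio = 1/24):
time_step_ratio <- 1/24
# ordinary time scale-dependent parameter: scaled by the ratio of time step lengths
2.4 * time_step_ratio          # 0.1
# time step-dependent recession coefficient: scaled with the power relationship above
1 - (1 - 0.3)^time_step_ratio  # ~0.0148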
ScalePar
does not scale the values for the "gratk", "ilratk", "olratk", or "wetrate" rating curve recession coefficients in par.txt because they are not limited to the range 0-1.
Likewise, HYPEtools does not provide any scaling function for the "RATE" columns in DamData.txt and LakeData.txt because these values are not limited to the range 0-1.
We recommend looking at the results from the lakes/wetlands and recalibrating these parameters and their related power coefficients as needed.
Use the ScaleAquiferData
and ScaleFloodData
functions to scale the time-dependent recession coefficients in AquiferData.txt and FloodData.txt files, respectively.
A list()
object as supplied in x
, with re-scaled parameters and recession coefficients, or nothing if print.par = TRUE
.
ScaleAquiferData
ScaleFloodData
# Import daily HYPE parameter file
par <- ReadPar(filename = system.file("demo_model", "par.txt", package = "HYPEtools"))
# Scale to hourly time steps
ScalePar(x = par)
# Print all time scale-dependent parameters known to the function
ScalePar(print.par = TRUE)
Update par.txt with values from an allsim.txt or bestsims.txt file
AllSimToPar(simfile, row, par)
BestSimsToPar(simfile, row, par)
simfile |
Imported allsim.txt or bestsims.txt file imported as data frame. |
row |
Integer, row number indicating row containing the parameter values that should be replaced/added to |
par |
Imported par.txt file that should be updated using parameter values from |
AllSimToPar
and BestSimsToPar
can be used to update an existing par.txt file with the parameter values from a HYPE allsim.txt or bestsims.txt file.
If a parameter in the allsim or bestsims file already exists in par
, then the parameter values will be overwritten in par
. If the parameter does not exist,
then the parameter will be added to the bottom of the output.
AllSimToPar
and BestSimsToPar
return a list of named vectors in the format used by ReadPar
.
ReadPar
for HYPE par.txt import; WritePar
to export HYPE par.txt files
simfile <- read.table(file = system.file("demo_model", "results", "bestsims.txt", package = "HYPEtools"), header = TRUE, sep = ",")
par <- ReadPar(filename = system.file("demo_model", "par.txt", package = "HYPEtools"))
BestSimsToPar(simfile, 1, par)
Function to sort an imported GeoData.txt file in downstream order, so that all upstream sub-basins are listed in rows above downstream sub-basins.
SortGeoData(gd, bd = NULL, progbar = TRUE)
gd |
A data frame containing a column with SUBIDs and a column (MAINDOWN) containing the corresponding downstream SUBID, e.g. an imported 'GeoData.txt' file. |
bd |
A data frame with bifurcation connections, e.g. an imported 'BranchData.txt' file. Optional argument. |
progbar |
Logical, display a progress bar while calculating SUBID sorting. |
GeoData.txt files need to be sorted in downstream order for HYPE to run without errors. SortGeoData
considers bifurcation connections, but not
irrigation or groundwater flow links.
SortGeoData
returns a GeoData dataframe.
AllUpstreamSubids
OutletSubids
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
SortGeoData(gd = te)
Prepare data frame containing summary of subbasin attributes.
SubidAttributeSummary( subids = NULL, gd, bd = NULL, gc = NULL, desc = NULL, group = NULL, group.upstream = TRUE, signif.digits = NULL, progbar = FALSE, summarize.landuse = TRUE, summarize.soil = TRUE, summarize.crop = TRUE, summarize.upstreamarea = TRUE, unweighted.gd.cols = NULL, upstream.gd.cols = NULL, olake.slc = NULL, bd.weight = FALSE, mapoutputs = NULL )
subids |
Vector containing SUBIDs of subbasins to summarize. |
gd |
Imported HYPE GeoData.txt file. See |
bd |
Imported HYPE BranchData.txt file. See |
gc |
Imported HYPE GeoClass.txt file. See |
desc |
Optional, Imported HYPE Description file. If provided, then dataframe columns will be renamed using the short names in the description file. See |
group |
Optional, Integer vector of same length as number of SLC classes in gd. Alternative grouping index specification to gcl + type for |
group.upstream |
Logical, if |
signif.digits |
Optional, Integer specifying number of significant digits to round outputs to. Used by |
progbar |
Logical, display a progress bar while calculating summary information. Used by |
summarize.landuse |
Logical, specify whether or not subbasin upstream landuse fractions should be calculated. |
summarize.soil |
Logical, specify whether or not subbasin upstream soil fractions should be calculated. |
summarize.crop |
Logical, specify whether or not subbasin upstream crop fractions should be calculated. |
summarize.upstreamarea |
Logical, specify whether or not subbasin upstream area should be calculated. |
unweighted.gd.cols |
Vector, names of |
upstream.gd.cols |
Vector, specify column names of |
olake.slc |
Integer, SLC class number representing outlet lake fractions. Used by |
bd.weight |
Logical, if set to TRUE, flow weights will be applied for areas upstream of stream bifurcations. See |
mapoutputs |
Vector, paths to mapoutput files that should be read by |
SubidAttributeSummary
can be used to create a data frame object containing subbasin attribute summary information. This data frame can then be used as the attributes
input for PlotPerformanceByAttribute
. The function can summarize subbasin upstream landuse, soil, and crop fractions using UpstreamGroupSLCClasses
. In addition, the
function can summarize upstream GeoData information using UpstreamGeoData
. Finally, the function can join mapoutput and GeoData columns directly to the output data frame (i.e. without further processing).
SubidAttributeSummary
returns a data frame object containing subbasin attribute summary information.
UpstreamGroupSLCClasses
, GroupSLCClasses
, UpstreamGeoData
, ReadMapOutput
for subbasin attribute summary functions; PlotPerformanceByAttribute
for related plotting function.
subass <- ReadSubass(filename = system.file("demo_model", "results", "subass1.txt", package = "HYPEtools"), check.names = TRUE)
gd <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
gc <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
SubidAttributeSummary(subids = subass$SUBID, gd = gd, gc = gc,
  mapoutputs = c(system.file("demo_model", "results", "mapCOUT.txt", package = "HYPEtools")),
  upstream.gd.cols = c("SLOPE_MEAN")
)
SumSLCClasses
sums all SLC classes for each SUBID in a GeoData data frame and optionally plots the results.
SumSLCClasses(gd, plot.box = TRUE, silent = FALSE, ...)
gd |
Data frame containing columns with SLC fractions, typically a 'GeoData.txt' file imported with |
plot.box |
Logical, if |
silent |
Logical, if set to |
... |
Other arguments to be passed to |
SumSLCClasses is a wrapper for colSums with a boxplot output option, and allows a quick check of whether the SLC fractions of all SUBIDs in a GeoData data frame sum to 1.
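For orientation, a rough manual equivalent of this check (a simplified sketch, not the function's implementation):
gd <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
slc <- gd[, grep("^SLC_", names(gd))]  # all SLC fraction columns
summary(rowSums(slc))                  # per-SUBID sums, should all be close to 1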
SumSLCClasses
returns a vector of SLC sums, invisibly if plot.box
is TRUE
.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
SumSLCClasses(gd = te, plot.box = TRUE)
SumSLCClasses(gd = te, plot.box = FALSE)
Function to calculate upstream areas of a vector of SUBIDs or all SUBIDs in a GeoData table.
SumUpstreamArea(subid = NULL, gd, bd = NULL, cl = 2, progbar = FALSE)
subid |
Integer vector of SUBIDs to calculate upstream areas for (must exist in |
gd |
A data frame, containing 'SUBID', 'MAINDOWN', and 'AREA' columns, e.g. an imported 'GeoData.txt' file. |
bd |
A data frame, containing 'BRANCHID' and 'SOURCEID' columns, e.g. an imported 'BranchData.txt' file. Optional argument. |
cl |
Integer, number of processes to use for parallel computation. Set to |
progbar |
Logical, display a progress bar while calculating upstream areas. Adds overhead to calculation time but useful if you want HYPEtools to decide how long your coffee break should take. |
SumUpstreamArea
sums upstream areas of all connected upstream SUBIDs, including branch connections in case of stream bifurcations
but not including potential irrigation links or groundwater flows.
SumUpstreamArea
returns a data frame with two columns containing SUBIDs and total upstream areas (in area units as provided in gd
).
Upstream areas include areas of outlet SUBIDs.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
SumUpstreamArea(subid = c(3361, 63794), gd = te, progbar = FALSE)
Function to calculate upstream sums and averages for selected variables of imported GeoData.txt files.
UpstreamGeoData( subid = NULL, gd, bd = NULL, olake.slc = NULL, bd.weight = FALSE, signif.digits = 5, progbar = TRUE )
subid |
Integer vector of SUBIDs for which to calculate upstream properties (must exist in |
gd |
A data frame containing a column with SUBIDs and a column with areas, e.g. an imported 'GeoData.txt' file. |
bd |
A data frame with bifurcation connections, e.g. an imported 'BranchData.txt' file. Optional argument. |
olake.slc |
Integer, SLC class number which represents outlet lake fractions. Mandatory for weighted averaging of outlet lake depths. |
bd.weight |
Logical, if set to |
signif.digits |
Integer, number of significant digits to round upstream variables to. See also |
progbar |
Logical, display a progress bar while calculating SLC class fractions. Adds overhead to calculation time but useful
when |
UpstreamGeoData
calculates upstream averages or sums of selected variables in a GeoData data frame, including branch connections
in case of stream bifurcations but not including potential irrigation links or groundwater flows. Averages are weighted by sub-catchment area, with
the exception of outlet lake depths and rural household emission concentrations provided in GeoData variables 'lake_depth', 'loc_tn',
and 'loc_tp'. Outlet lake depths are weighted by outlet lake area and the GeoData column with
SLC class fractions for outlet lakes must be provided in function argument olake.slc
. Rural household emissions are weighted by
emission volume as provided in column 'loc_vol'. Elevation and slope standard deviations are
averaged if the corresponding mean values exist (sample means are required to calculate overall means of standard deviations).
Currently, the following variables are considered:
elev_mean, slope_mean, buffer, close_w, latitude, longitude, all SLC classes, lake depths, elev_std, slope_std
loc_tn, loc_tp
area, rivlen, loc_vol
UpstreamGeoData
returns a data frame with the same number of columns as argument gd
and number of rows corresponding to number of
SUBIDs in argument subid
, with updated upstream columns marked with a leading 'UP_' in the column names.
UpstreamSLCClasses
SumUpstreamArea
AllUpstreamSubids
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
# Upstream stats for domain outlet
UpstreamGeoData(subid = OutletSubids(te), gd = te, olake.slc = 1, progbar = FALSE)
Function to calculate averages of grouped SLC class fractions calculated from imported GeoData.txt and GeoClass.txt or any other user-defined grouping.
UpstreamGroupSLCClasses( subid = NULL, gd, bd = NULL, gcl = NULL, type = c("landuse", "soil", "crop"), group = NULL, signif.digits = 3, progbar = TRUE )
subid |
Integer vector of SUBIDs for which to calculate upstream properties (must exist in |
gd |
A data frame containing a column with SUBIDs and a column with areas, e.g. an imported 'GeoData.txt' file imported with |
bd |
A data frame, containing 'BRANCHID' and 'SOURCEID' columns, e.g. an imported 'BranchData.txt' file. Optional argument. |
gcl |
Data frame containing columns with SLCs and corresponding land use and soil class IDs, typically a 'GeoClass.txt'
file imported with |
type |
Keyword character string for use with |
group |
Integer vector, of same length as number of SLC classes in |
signif.digits |
Integer, number of significant digits to round upstream SLCs to. See also |
progbar |
Logical, display a progress bar while calculating SLC class fractions. Adds overhead to calculation time but useful when |
UpstreamGroupSLCClasses
calculates area-weighted upstream averages of CropID fractions from SLC class fractions in a GeoData table and corresponding
grouping columns in a GeoClass table or a user-provided vector. Upstream calculations include branch connections in case of stream bifurcations but not
potential irrigation links or groundwater flows. Averages are weighted by sub-catchment area.
The function builds on GroupSLCClasses
, which provides grouped sums of SLC classes for several or all sub-basins in a GeoData dataframe.
UpstreamGroupSLCClasses
returns a data frame with SUBIDs in the first column, and upstream group fractions in the following columns.
UpstreamGroupSLCClasses
expects SLC class columns in argument gd
to be ordered in ascending order.
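A quick way to verify this ordering is sketched below (assuming SLC columns named 'SLC_1', 'SLC_2', and so on):
gd <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
slc.cols <- grep("^SLC_", names(gd), value = TRUE)
!is.unsorted(as.integer(sub("^SLC_", "", slc.cols)))  # TRUE if SLC columns are in ascending order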
GroupSLCClasses
UpstreamSLCClasses
UpstreamGeoData
SumUpstreamArea
AllUpstreamSubids
# Import source data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
te2 <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
# Upstream land use fractions for single SUBID
UpstreamGroupSLCClasses(subid = 63794, gd = te1, gcl = te2, type = "landuse", progbar = FALSE)
# Upstream soil fraction for all SUBIDs in GeoData
UpstreamGroupSLCClasses(gd = te1, gcl = te2, type = "soil")
Function to calculate point source emissions over all upstream areas of a vector of SUBIDs or all SUBIDs in a GeoData table.
UpstreamPointSources( subid = NULL, gd, psd, bd = NULL, signif.digits = 4, progbar = TRUE )
subid |
Integer vector of SUBIDs to calculate upstream point sources for (must exist in |
gd |
A data frame containing columns 'SUBID' with SUBIDs and 'MAINDOWN' with downstream SUBIDs, e.g. an imported 'GeoData.txt' file. |
psd |
A data frame with HYPE point source specifications, typically a 'PointSourceData.txt' file imported with |
bd |
A data frame, containing 'BRANCHID' and 'SOURCEID' columns, e.g. an imported 'BranchData.txt' file. Optional argument. |
signif.digits |
Integer, number of significant digits to round upstream SLCs to. See also |
progbar |
Logical, display a progress bar while calculating SLC class fractions. Adds overhead to calculation time but useful when |
UpstreamPointSources
calculates summarized upstream point source emissions. For each sub-basin with at least one upstream
point source (including the sub-basin itself), summed emission volumes and volume weighted emission concentrations are calculated.
HYPE point source types ('ps_type') are returned in separate rows. UpstreamPointSources
requires point source types to be one of -1, 0, 1, 2, 3
,
corresponding to water abstractions, no differentiation/tracer, and type 1 to 3 (e.g. wastewater treatment plants, industries, and urban stormwater).
For water abstraction point sources, only summed upstream volumes are returned, i.e., concentrations are simply set to zero in results.
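The aggregation can be pictured with a small sketch for two hypothetical upstream point sources (values illustrative only, not the function's internal code):
ps.vol  <- c(200, 50)  # emission volumes of two upstream point sources
ps.conc <- c(10, 2)    # corresponding emission concentrations
sum(ps.vol)                         # summed upstream volume: 250
weighted.mean(ps.conc, w = ps.vol)  # volume-weighted concentration: 8.4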
UpstreamPointSources
returns a data frame with columns containing SUBIDs, point source types, volumes, and concentrations found in psd
: total nitrogen,
total phosphorus, total suspended sediment, tracer, and temperature.
te1 <- ReadPointSourceData(filename = system.file("demo_model", "PointSourceData.txt", package = "HYPEtools"))
te2 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
UpstreamPointSources(subid = OutletSubids(te2), gd = te2, psd = te1, progbar = FALSE)
Function to calculate SLC class fractions over all upstream areas of a vector of SUBIDs or all SUBIDs in a GeoData table.
UpstreamSLCClasses( subid = NULL, gd, bd = NULL, signif.digits = 3, progbar = TRUE )
subid |
Integer vector of SUBIDs to calculate upstream SUBID fractions for (must exist in |
gd |
A data frame containing columns 'SUBID' with SUBIDs, 'MAINDOWN' with downstream SUBIDs, and 'AREA' with sub-basin areas, e.g. an imported 'GeoData.txt' file. |
bd |
A data frame with bifurcation connections, e.g. an imported 'BranchData.txt' file. Optional argument. |
signif.digits |
Integer, number of significant digits to round upstream SLCs to. See also |
progbar |
Logical, display a progress bar while calculating SLC class fractions. Adds overhead to calculation time but useful when |
UpstreamSLCClasses
sums upstream areas of all connected upstream SUBIDs, including branch connections in case of stream bifurcations
but not including potential irrigation links or groundwater flows.
UpstreamSLCClasses
returns a data frame with columns containing SUBIDs, total upstream areas (in area unit as provided in gd
), and SLC
class fractions for upstream areas.
This function is now superseded by UpstreamGeoData
, which returns more upstream variables.
SumUpstreamArea
, UpstreamGeoData
, UpstreamGroupSLCClasses
# Import source data
te1 <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
# Upstream SLCs for single SUBID
UpstreamSLCClasses(subid = 3361, gd = te1, progbar = FALSE)
Lookup information (e.g. Name, Units) for a specific HYPE variable ID, or find HYPE variable information for a search term.
VariableInfo( variable, info = c("ID", "Name", "Unit", "Description", "Aggregation", "Reference", "Component") )
VariableSearch( search, info = c("ID", "Name", "Unit", "Description", "Aggregation", "Reference", "Component"), ignore_case = TRUE )
variable |
String, HYPE Variable ID (e.g. "COUT"). |
info |
A vector of strings describing HYPE variable attribute information to return/search: "ID", "Name", "Unit", "Description", "Aggregation", "Reference", and/or "Component". |
search |
String, search HYPE variable info for string matches in |
ignore_case |
Logical, should case differences be ignored in the match? |
The VariableInfo
and VariableSearch
functions provide features to look up information on HYPE variables from the
HYPE Wiki.
VariableInfo
can be used to return information (e.g. Name, Units) for a known HYPE Variable ID.
VariableSearch
can be used to search for e.g. an unknown HYPE variable ID based on a search
term.
The info
argument can be used to select which information to return or search.
VariableInfo
Returns a named list of the selected info
for the specified variable
ID.
VariableSearch
returns a tibble of the search results.
VariableInfo(variable = "COUT", info = c("Name","Unit")) VariableSearch(search = "ccSS", info = c("ID", "Name", "Description"))
VariableInfo(variable = "COUT", info = c("Name","Unit")) VariableSearch(search = "ccSS", info = c("ID", "Name", "Description"))
Interactive maps and plots for visualizing MapOutput files.
VisualizeMapOutput( results.dir = NULL, file.pattern = "^map.*\\.(txt|csv)$", map = NULL, map.subid.column = 1, output.dir = NULL )
VisualiseMapOutput( results.dir = NULL, file.pattern = "^map.*\\.(txt|csv)$", map = NULL, map.subid.column = 1, output.dir = NULL )
results.dir |
Optional string, path to a directory containing MapOutput files that should be loaded on app initialization. |
file.pattern |
Optional string, filename pattern to select files in |
map |
Optional string, path to GIS file for subbasin polygons that should be loaded on app initialization. Typically a GeoPackage (.gpkg) or Shapefile (.shp). |
map.subid.column |
Optional integer, column index in the |
output.dir |
Optional string, path to a default output directory to save captured map images. |
VisualizeMapOutput
is a Shiny app that provides interactive maps, plots, and tables for visualizing HYPE MapOutput files. The interactive Leaflet map is generated using PlotMapOutput
.
The app can be launched with or without the input arguments. All necessary input buttons and menus are provided within the app interface. For convenience, however, the input arguments can be provided in order to quickly launch the
app with desired settings.
VisualizeMapOutput
returns a Shiny application object.
## Not run:
if (interactive()) {
  VisualizeMapOutput(
    results.dir = system.file("demo_model", "results", package = "HYPEtools"),
    map = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools"),
    map.subid.column = 25
  )
}
## End(Not run)
Interactive maps and plots for visualizing mapped point information, e.g. HYPE MapOutput files or model performances at observation sites.
VisualizeMapPoints( results.dir = NULL, file.pattern = "^(map|subass).*\\.(txt|csv)$", sites = NULL, sites.subid.column = 1, bg = NULL, output.dir = NULL )
VisualiseMapPoints( results.dir = NULL, file.pattern = "^(map|subass).*\\.(txt|csv)$", sites = NULL, sites.subid.column = 1, bg = NULL, output.dir = NULL )
results.dir |
Optional string, path to a directory containing e.g. MapOutput or Subass files that should be loaded on app initialization. |
file.pattern |
Optional string, filename pattern to select files in |
sites |
Optional string, path to GIS file for outlet points that should be loaded on app initialization. Typically a GeoPackage (.gpkg) or Shapefile (.shp). |
sites.subid.column |
Optional integer, column index in the |
bg |
Optional string, path to GIS file with polygon geometry to plot in the background. Typically an imported sub-basin vector polygon file. |
output.dir |
Optional string, path to a default output directory to save captured map images. |
VisualizeMapPoints
is a Shiny app that provides interactive maps, plots, and tables for visualizing mapped point information. The interactive Leaflet map is generated using PlotMapPoints
.
The app can be launched with or without the input arguments. All necessary input buttons and menus are provided within the app interface. For convenience, however, the input arguments can be provided in order to quickly launch the
app with desired settings.
VisualizeMapPoints
returns a Shiny application object.
## Not run:
if (interactive()) {
  VisualizeMapPoints(
    results.dir = system.file("demo_model", "results", package = "HYPEtools"),
    sites = system.file("demo_model", "gis", "Nytorp_centroids.gpkg", package = "HYPEtools"),
    sites.subid.column = 25,
    bg = system.file("demo_model", "gis", "Nytorp_map.gpkg", package = "HYPEtools")
  )
}
## End(Not run)
Function to export a basin output file from R.
WriteBasinOutput(x, filename, dt.format = "%Y-%m-%d")
x |
The object to be written, a dataframe with |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
WriteBasinOutput
exports a dataframe with headers and formatting options adjusted to match HYPE's basin output files.
No return value, called for file export.
te <- ReadBasinOutput(filename = system.file("demo_model", "results", "0003587.txt", package = "HYPEtools"))
WriteBasinOutput(x = te, filename = tempfile())
This is a convenience wrapper function to export a 'GeoClass.txt' file from R.
WriteGeoClass(x, filename, use.comment = FALSE)
x |
The object to be written, a dataframe, as an object returned from |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
use.comment |
Logical, set to |
WriteGeoClass
exports a GeoClass dataframe. HYPE accepts comment rows with a leading '!' in the beginning rows of a
GeoClass file. Comment rows typically contain some class descriptions in a non-structured way. With argument
use.comment = TRUE
, the export function looks for those in attribute
'comment',
where ReadGeoClass
stores such comments. Description files (see ReadDescription
) offer a more structured
way of storing that information.
No return value, called for export to text files.
te <- ReadGeoClass(filename = system.file("demo_model", "GeoClass.txt", package = "HYPEtools"))
WriteGeoClass(x = te, filename = tempfile())
This is a convenience wrapper function to export a 'GeoData.txt' file from R.
WriteGeoData(x, filename, digits = 6)
x |
The object to be written, a dataframe, as an object returned from |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
digits |
Integer, number of significant digits in SLC class columns to export. See |
WriteGeoData
exports a GeoData dataframe using fwrite
. SUBID
and MAINDOWN
columns are forced to non-scientific notation by conversion to text strings prior to exporting. For all other numeric columns,
use fwrite
argument scipen
.
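The effect of the conversion can be sketched as follows (a simplified illustration, not the function's internal code):
subid <- c(1000000, 25000000)                   # hypothetical large SUBIDs
format(subid, trim = TRUE)                      # may yield scientific notation, e.g. "2.5e+07"
format(subid, scientific = FALSE, trim = TRUE)  # "1000000" "25000000", as exported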
HYPE allows neither empty values in any GeoData column nor string elements with more than 50 characters. The function will return warnings if NAs or long strings are exported.
No return value, called for export to text files.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
summary(te)
WriteGeoData(x = te, filename = tempfile())
This is a convenience wrapper function to export a data frame to the required Harmonized Data File format. See the HYPEObsMetadataTools documentation.
WriteHarmonizedData( df, filename = "", replace.accents = FALSE, strip.punctuation = FALSE, ignore.cols = NULL, nThread = NULL )
df |
Data frame containing the harmonized data. |
filename |
Path to and file name (including ".csv" file extension) of the Harmonized Data CSV file to export. Windows users: Note that Paths are separated by '/', not '\'. |
replace.accents |
Logical, if |
strip.punctuation |
Logical, if |
ignore.cols |
Vector of columns in |
nThread |
Integer, set number of threads to be used when writing file. If |
WriteHarmonizedData
is a convenience wrapper function of fwrite
to export harmonized data in the HYPEObsMetadataTools Harmonized Data Format.
The function checks that all required columns are present, includes options to format strings, and exports data to output CSV files with the correct encoding and formatting.
WriteHarmonizedData
exports a CSV file if filename
is specified. Otherwise, the function outputs a data frame to the console.
df <- data.frame( "STATION_ID" = "A1", "DATE_START" = "2002-06-18 12:00", "DATE_END" = "2002-06-18 12:00", "PARAMETER" = "NH4_N", "VALUE" = 0.050, "UNIT" = "mg/L", "QUALITY_CODE" = "AA" ) WriteHarmonizedData(df)
df <- data.frame( "STATION_ID" = "A1", "DATE_START" = "2002-06-18 12:00", "DATE_END" = "2002-06-18 12:00", "PARAMETER" = "NH4_N", "VALUE" = 0.050, "UNIT" = "mg/L", "QUALITY_CODE" = "AA" ) WriteHarmonizedData(df)
This is a convenience wrapper function to export a data frame to the required Harmonized Spatial Description File format. See the HYPEObsMetadataTools documentation.
WriteHarmonizedSpatialDescription( df, filename = "", replace.accents = FALSE, strip.punctuation = FALSE, ignore.cols = NULL, nThread = NULL )
df |
Data frame containing the harmonized spatial description data. |
filename |
Path to and file name (including ".csv" file extension) of the Harmonized Spatial Description CSV file to export. Windows users: Note that Paths are separated by '/', not '\'. |
replace.accents |
Logical, if |
strip.punctuation |
Logical, if |
ignore.cols |
Vector of columns in |
nThread |
Integer, set number of threads to be used when writing file. If |
WriteHarmonizedSpatialDescription
is a convenience wrapper function of fwrite
to export harmonized spatial description data in the HYPEObsMetadataTools Harmonized Spatial Description Format.
The function checks that all required columns are present, includes options to format strings, and exports data to output CSV files with the correct encoding and formatting.
WriteHarmonizedSpatialDescription
exports a CSV file if filename
is specified. Otherwise, the function outputs a data frame to the console.
df <- data.frame( "STATION_ID" = "A1", "SRC_NAME" = "Example", "DOWNLOAD_DATE" = "2022-10-19", "SRC_STATNAME" = "Station", "SRC_WBNAME" = "River", "SRC_UAREA" = NA, "SRC_XCOORD" = 28.11831, "SRC_YCOORD" = -25.83053, "SRC_EPSG" = 4326, "ADJ_XCOORD" = 28.11831, "ADJ_YCOORD" = -25.83053, "ADJ_EPSG" = 4326 ) WriteHarmonizedSpatialDescription(df)
df <- data.frame( "STATION_ID" = "A1", "SRC_NAME" = "Example", "DOWNLOAD_DATE" = "2022-10-19", "SRC_STATNAME" = "Station", "SRC_WBNAME" = "River", "SRC_UAREA" = NA, "SRC_XCOORD" = 28.11831, "SRC_YCOORD" = -25.83053, "SRC_EPSG" = 4326, "ADJ_XCOORD" = 28.11831, "ADJ_YCOORD" = -25.83053, "ADJ_EPSG" = 4326 ) WriteHarmonizedSpatialDescription(df)
WriteInfo
writes its required argument x
to a file.
WriteInfo(x, filename)
x |
The object to be written, a list with named vector elements, as an object returned from |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
WriteInfo
writes an 'info.txt' file, typically originating from an imported and modified 'info.txt'.
No return value, called for export to text files.
ReadInfo
with a description of the expected content of x
.
AddInfoLine
RemoveInfoLine
te <- ReadInfo(filename = system.file("demo_model", "info.txt", package = "HYPEtools"), mode = "exact")
WriteInfo(x = te, filename = tempfile())
Function to export a map output file from R.
WriteMapOutput(x, filename, dt.format = "%Y-%m-%d")
x |
The object to be written, a dataframe with |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
WriteMapOutput
exports a dataframe with headers and formatting options adjusted to match HYPE's map output files.
The function attempts to format date-time information to strings and will return a warning if the attempt fails.
No return value, called for export to text files.
te <- ReadMapOutput(filename = system.file("demo_model", "results", "mapEVAP.txt", package = "HYPEtools"), dt.format = NULL)
WriteMapOutput(x = te, filename = tempfile())
Export forcing data and discharge observation files from R.
WriteObs( x, filename, dt.format = "%Y-%m-%d", round = NULL, signif = NULL, obsid = NULL, append = FALSE, comment = NULL )
WritePTQobs( x, filename, dt.format = "%Y-%m-%d", round = NULL, signif = NULL, obsid = NULL, append = FALSE, comment = NULL )
x |
The object to be written, a |
filename |
Path to and file name of the file to export. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
round , signif
|
Integer, number of decimal places and number of significant digits to export, respectively. See |
obsid |
Integer vector containing observation IDs/SUBIDs in same order as columns in |
append |
Logical, if |
comment |
A character string to be exported as first row comment in the Obs file. Comments are only exported if |
WriteObs
is a convenience wrapper function of fwrite
to export a HYPE-compliant observation file.
Headers are generated from attribute obsid
on export (see attr
on how to create and access it).
Observation IDs are SUBIDs or IDs connected to SUBIDs with a ForcKey.txt file.
If the first column in x
contains dates of class POSIXt
, then they will be formatted according to dt.format
before writing the output file.
If round
is specified, then WriteObs()
will use round
to round the observation values to a specified number of decimal places.
Alternatively, signif
can be used to round the observation values to a specified number of significant digits using signif
.
Finally, if both round
and signif
are specified, then the observation values will be first rounded to the number of decimal places specified
with round
and then rounded to the number of significant digits specified with signif
.
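The order of operations can be illustrated with a hypothetical observation value:
x <- 0.0123456
round(x, digits = 2)             # round = 2: 0.01
signif(x, digits = 3)            # signif = 3: 0.0123
signif(round(x, digits = 2), 3)  # both given: rounded to 2 decimals first, then to 3 significant digits: 0.01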
No return value, called for export to text files.
te <- ReadObs(filename = system.file("demo_model", "Tobs.txt", package = "HYPEtools"))
WriteObs(x = te, filename = tempfile())
WriteOptpar
prints a HYPE parameter optimization list to a file.
WriteOptpar(x, filename, digits = 10, nsmall = 1)
x |
The object to be written, a list with named elements, as an object returned from |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
digits |
Integer, number of significant digits to export. See |
nsmall |
Integer, number of significant decimals to export. See |
No return value, called for export to text files.
ReadOptpar
with a description of the expected content of x
.
te <- ReadOptpar(filename = system.file("demo_model", "optpar.txt", package = "HYPEtools"))
WriteOptpar(x = te, filename = tempfile())
WritePar
prints its required argument x
to a file.
WritePar(x, filename, digits = 10, nsmall = 1)
x |
The object to be written, a list with named vector elements, as an object returned from |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
digits |
Integer, number of significant digits to export. See |
nsmall |
Integer, number of significant decimals to export. See |
WritePar
writes a 'par.txt' file, typically originating from an imported and modified 'par.txt'.
No return value, called for export to text files.
ReadPar
with a description of the expected content of x
.
te <- ReadPar(filename = system.file("demo_model", "par.txt", package = "HYPEtools"))
# Note that par files lose all comment rows on import
WritePar(x = te, filename = tempfile())
This is a small convenience function to export a 'partial model setup file' from R.
WritePmsf(x, filename)
x |
The object to be written, an |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
Pmsf files are represented as integer vectors in R. The total number of subcatchments in the file is added as the first value on export. pmsf.txt files need to be ordered in downstream sequence.
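As an illustration, the values exported for a hypothetical pmsf vector of three SUBIDs (in downstream order):
pmsf <- c(1001, 1002, 1003)  # hypothetical SUBIDs in downstream order
c(length(pmsf), pmsf)        # values as exported: 3 1001 1002 1003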
No return value, called for export to text files.
AllUpstreamSubids
, which extracts upstream SUBIDs from a GeoData dataframe.
te <- ReadGeoData(filename = system.file("demo_model", "GeoData.txt", package = "HYPEtools"))
WritePmsf(x = te$SUBID[te$SUBID %in% AllUpstreamSubids(3564, te)], filename = tempfile())
Function to export a time output file from R.
WriteTimeOutput(x, filename, dt.format = "%Y-%m-%d")
x |
The object to be written, a dataframe with |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
dt.format |
Date-time |
WriteTimeOutput
exports a data frame with headers and formatting options adjusted to match HYPE's time output files.
No return value, called for export to text files.
te <- ReadTimeOutput(filename = system.file("demo_model", "results", "timeCOUT.txt", package = "HYPEtools"), dt.format = "%Y-%m")
WriteTimeOutput(x = te, filename = tempfile(), dt.format = "%Y-%m")
WriteXobs
writes or appends an observation data set to an Xobs file.
WriteXobs( x, filename, append = FALSE, comment = NULL, variable = NULL, subid = NULL, last.date = NULL, timestep = "d" )
x |
A data frame, e.g. an object originally imported with |
filename |
A character string naming a file to write to. Windows users: Note that Paths are separated by '/', not '\'. |
append |
Logical. If |
comment |
A character string to be exported as first row comment in the Xobs file. If provided, it takes precedence over
a |
variable |
A character vector to be exported as second row in the Xobs file. Must contain the same number of
variables as |
subid |
Third row in Xobs, containing SUBIDs (integer). Behavior otherwise as argument |
last.date |
Optional date-time of last observation in existing Xobs file as text string. Only relevant with |
timestep |
Character string, either "day" or "hour", giving the time step between observations. Can be abbreviated. |
WriteXobs
writes a 'Xobs.txt' file, typically originating from an imported and modified 'Xobs.txt'.
HYPE Xobs files contain a three-row header, with a comment line first, next a line of variables, and then a line of SUBIDs.
Objects imported with ReadXobs
include attributes holding this information, and WriteXobs
will use this
information. Otherwise, these attributes can be added to objects prior to calling WriteXobs
, or passed as function
arguments.
If argument append
is TRUE
, the function requires daily or hourly time steps as input.
The date-time column must be of class POSIXct
, see as.POSIXct
. Objects returned from
ReadXobs
per default have the correct class for the date-time column. When appending to an existing file, the
function adds new rows with '-9999' values in all data columns to fill any time gaps between existing and new data. If time
periods overlap, the export will stop with an error message. Argument last.date
can be provided to speed up appending exports,
but per default, WriteXobs
extracts the last observation in the existing file automatically.
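The gap-filling behavior can be pictured with a small sketch for daily time steps (dates are hypothetical; this is not the function's internal code):
last.existing <- as.POSIXct("2004-12-29", tz = "UTC")  # last observation in the existing file
first.new     <- as.POSIXct("2005-01-02", tz = "UTC")  # first observation in the new data
# dates in between are appended as rows with '-9999' in all data columns
seq(from = last.existing + 86400, to = first.new - 86400, by = "day")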
No return value, called for export to text files.
Arguments variable and subid do not include elements for the first column in the Xobs file/object, in accordance with ReadXobs. These elements will be added by the function.
te <- ReadXobs(filename = system.file("demo_model", "Xobs.txt", package = "HYPEtools"))
WriteXobs(x = te, filename = tempfile())