Model performance was assessed using the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (r2), and percent bias (PBIAS) as defined by Moriasi et al. (2007, 2015). Threshold values indicating acceptable model performance for these statistics depend on the spatial and temporal scales of the data, the water quality constituents of interest, and the modeling objectives (Moriasi et al., 2015). Although some standard values have been suggested (Moriasi et al., 2007, 2015), considerable variability exists in the published literature. For instance, Ramanarayanan et al. (1997) considered r2 > 0.5 and NSE > 0.40 satisfactory for simulation of monthly surface water quality with the APEX model. Chung et al. (2002) defined r2 > 0.5 and NSE > 0.3 as satisfactory for monthly tile flow and NO3-N loss simulated with the Erosion Productivity Impact Calculator (EPIC) model. Wang et al. (2008) indicated r2 > 0.5 and NSE > 0.4 as acceptable for monthly runoff and nutrient concentrations using the APEX model. Moriasi et al. (2007) suggested NSE > 0.5 with PBIAS within ±25% for streamflow, ±55% for sediment, and ±70% for nitrogen and phosphorus at a monthly time step; they also indicated that these criteria can be relaxed for shorter time steps (e.g., daily or event values). Yin et al. (2009) reported NSE between 0.41 and 0.84 and r2 between 0.55 and 0.85 for event-based runoff and sediment. Mudgal et al. (2010) regarded r2 > 0.5 and NSE > 0.45 as thresholds for satisfactory calibration and validation with event data.
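As a concrete illustration of the three statistics named above, the following is a minimal sketch of their standard formulations (NSE as one minus the ratio of residual to observed variance, r2 as the squared Pearson correlation, and PBIAS following the Moriasi et al., 2007 sign convention, where positive values indicate model underestimation). The function names and the example data are illustrative only, not taken from the study.

```python
from math import sqrt

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum of squared errors / observed variance."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = sqrt(sum((o - mo) ** 2 for o in obs))
    ss = sqrt(sum((s - ms) ** 2 for s in sim))
    return (cov / (so * ss)) ** 2

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation (Moriasi et al., 2007)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

# Hypothetical observed and simulated monthly values, for illustration only.
obs = [10.0, 12.0, 9.0, 14.0, 11.0]
sim = [9.5, 12.5, 8.0, 13.0, 11.5]
print(nse(obs, sim), r_squared(obs, sim), pbias(obs, sim))
```

Against the thresholds cited in the text (e.g., NSE > 0.5, r2 > 0.5, PBIAS within ±25% for streamflow), a simulation like this illustrative one would be judged satisfactory.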