SWAT-CUP Tutorial 5(5): Calibration of a watershed in Danube Basin
Recap of 5(1)–5(4): single-objective non-behavioral, single-objective behavioral, multi-objective non-behavioral, and multi-objective behavioral calibration.

Using the so-called "best simulation" selected by a single performance criterion such as NS, R2, or PBIAS is inadequate for assessing the goodness of a calibration. Declaring a calibration "good" if NS > 0.6, or "very good" if NS > 0.7, and so on, makes no sense. Comparing two single signals (measured data and a best simulation scored by a performance criterion) rests on a deterministic concept that does not apply to watershed modeling. The reason is simple: most watersheds today are highly managed, and the management measures are almost always unknown to modelers. Matching the peak flows of measured data to those of simulated data in timing and magnitude (to obtain good values of R2 or NS), or matching base flows, is therefore nearly impossible, both for lack of management information and because of the inadequacy of station-based rainfall data. Students often resort to strange massaging of their data to force a simulation to match the observations!

We have to move away from performance criteria such as R2, NS, and PBIAS. Instead, we should assess the model output in terms of its uncertainty, using measures such as the "p-factor" and "r-factor," which are derived from the 95% prediction uncertainty band (95PPU).
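To make the two uncertainty measures concrete, here is a minimal sketch of how a p-factor and r-factor could be computed from an ensemble of behavioral simulations. This is an illustration of the standard definitions (p-factor: fraction of observations bracketed by the 95PPU band; r-factor: mean band width divided by the standard deviation of the observations), not SWAT-CUP's own implementation; the function name and the 2.5th/97.5th percentile construction of the band are assumptions consistent with the 95PPU concept.

```python
import numpy as np

def p_r_factors(observed, ensemble):
    """Sketch of p-factor and r-factor from a simulation ensemble.

    observed: shape (T,), the measured time series.
    ensemble: shape (N, T), N behavioral simulations of the same series.
    (Illustrative only; not the SWAT-CUP implementation.)
    """
    observed = np.asarray(observed, dtype=float)
    ensemble = np.asarray(ensemble, dtype=float)

    # 95PPU band: 2.5th and 97.5th percentiles of the ensemble at each time step
    lower = np.percentile(ensemble, 2.5, axis=0)
    upper = np.percentile(ensemble, 97.5, axis=0)

    # p-factor: fraction of observations falling inside the band
    p_factor = np.mean((observed >= lower) & (observed <= upper))

    # r-factor: average band width relative to the spread of the observations
    r_factor = np.mean(upper - lower) / np.std(observed)

    return p_factor, r_factor
```

A p-factor close to 1 with an r-factor around 1 or below would indicate that the uncertainty band brackets most observations without being uselessly wide; chasing a single NS value gives no such information.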