[2pt] Fixing incorrect benchmark results and Geocurves #320
Merged
Conversation
RobHanna-NOAA changed the title from "WIP: Fixing incorrect benchmark results" to "[2pt] Fixing incorrect benchmark results and Geocurves" on May 23, 2024.
hhs732 approved these changes on May 23, 2024.
The PR was tested for HUC 12090301. Both tools, run_eval_bench_data.py and run_test_cases.py, were executed successfully.
CarsonPruitt-NOAA approved these changes on May 24, 2024.
While testing and comparing ras2fim output units against benchmark data, we discovered some problems with geocurves: many were not being created as expected.
Part of the evaluation of a unit is a new tool called run_eval_bench_data.py. This is a WIP tool and requires a fair bit of hardcoding to be used at this time, but it is expected to evolve as time permits. The normal "alpha testing" evaluation of a unit is to run run_test_cases.py against a unit, then run run_eval_bench_data.py to see metrics and agreement rasters.

Note: This release does require a ras2fim Conda environment reload.
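A minimal sketch of the environment reload, assuming the Conda environment is named ras2fim and is rebuilt from the repo's environment.yml (the name and exact steps are assumptions; check the repo docs):

```bash
# Rebuild the Conda environment from the updated environment.yml.
# The environment name "ras2fim" is an assumption; confirm it in environment.yml.
conda deactivate
conda env remove -n ras2fim
conda env create -f environment.yml
conda activate ras2fim
```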
Additions
- tools\run_eval_bench_data.py: as described above.

File Renamed
- tools\run_unit_benchmark_tests.py is now run_test_cases.py: this is the same name used in the FIM product for this functionality and helps minimize confusion.

Changes
- environment.yml: added seaborn (used for plots) and updated a few other packages.

src
- conflate_hecras_to_nwm.py: linting fixes.
- create_geocurves.py: a wide number of changes to fix the bug listed above; it also has significantly upgraded logging.
- create_shapes_from_hecras.py: added a note about an import fix required later (see Issue 323). Changed the logic for filtering out models and key model files.

tools
- ras2inundation.py: a validation fix.
- run_test_cases.py (renamed as mentioned above): some linting updates and some debugging cleanup.
- acquire_and_preprocess_3dep_dems.py: a small fix to disable the levee system in this script. The levee system is not fully operational system-wide.

Testing
A very wide array of testing was done with small sets of models, plus full units (all models).
To test:
- Run tools\run_test_cases.py. Watch the notes and args: this is a stable tool, but some args have never been tested, so using the default args and pathing is encouraged. This will give you metrics files and agreement files.
- Run tools\run_eval_bench_data.py. It is a very new tool in rough prototype form; you will need to change some hardcoded values to use it at this time, but it is very valuable for comparing results from V1 to V2, or any versions of a unit. See the sketch after this list.
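A minimal sketch of the two-step workflow, assuming both scripts are run from the repo root and that run_test_cases.py exposes its args via argparse (so --help works); flags and defaults are otherwise left to the scripts themselves:

```bash
# Step 1: alpha-test a unit. Review the script's own notes and args first;
# --help is an assumption that the script uses argparse.
python tools/run_test_cases.py --help
python tools/run_test_cases.py   # default args and pathing are encouraged

# Step 2: compare metrics and agreement rasters across unit versions.
# This prototype has no stable CLI yet: edit its hardcoded values in the
# source before running.
python tools/run_eval_bench_data.py
```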
Checklist
You may update this checklist before and/or after creating the PR. If you're unsure about any items, please ask; we're here to help! These are the items we will look for before merging your code.
- The PR title follows the format: [_pt] PR: <description>
- If the PR is not against the dev branch (the default branch), you have a descriptive Feature Branch name using the format dev-<description-of-change> (e.g. dev-revise-levee-masking)
- The PR is against the dev branch

Merge Checklist (For Technical Lead use only)