This repository accompanies a paper in review.
We provide basic instructions for recreating the key results in our paper.
We use TensorFlow 2 for the implementation of our methods. We provide an `environment.yaml` file which you may use to install the required dependencies. (They're all pretty standard, though, so you may already have a setup that will work.)
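If you use conda (a common assumption for an `environment.yaml` file, though any tool that reads this format should work), creating and activating the environment typically looks like:

```bash
# Create the environment from the provided file; the environment's name
# comes from the "name:" field inside environment.yaml.
conda env create -f environment.yaml
conda activate <env-name>  # substitute the name from environment.yaml
```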
A modern laptop should be able to train the 1D versions in a few minutes. Training the 2D versions at the domain size we used requires a GPU to finish in a reasonable time. Our results were generated on a single NVIDIA V100 GPU.
Finally, we use Weights & Biases to track our training runs. We use the W&B API to automatically download relevant runs in our figure-generating notebooks. The easiest way to replicate our figures is to create an account and set up your own project. If you don't want to do this, you can run `wandb off` to disable syncing results. The necessary model results are saved locally, but you'll have to minimally edit the figure-generating notebooks to manually provide the saved model files.
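A per-invocation alternative, if you'd rather not toggle syncing globally, is W&B's standard offline-mode environment variable:

```bash
# Run without syncing to wandb.ai; results are still written locally.
WANDB_MODE=offline python train.py --params params/1d_example_with_model_loss.yaml
```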
The entry point for all training is `train.py`. All parameters are specified by a YAML file that is passed to `train.py` as follows:
```
python train.py --params params/1d_example_with_model_loss.yaml
```
This YAML file also contains a reference to a data pickle file. For the 1D example, we provide this pickle file as the data is synthetically generated. Unfortunately, we do not have permission to re-share the 2D data, so you'll have to create it yourself. See below for instructions.
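We haven't reproduced the actual schema here; purely to illustrate the shape such a params file takes, a hypothetical sketch might look like the following (every key below is invented for this sketch; consult the provided files under `params/` for the real options):

```yaml
# Hypothetical illustration only; these keys are made up.
# See the real files under params/ for the actual schema used by train.py.
data_pickle: data/1d_example.pickle  # path to the data pickle file
use_model_loss: true
learning_rate: 1.0e-3
epochs: 1000
```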
The two 1D cases we show in the paper can be produced by running:
```
python train.py --params params/1d_example_with_model_loss.yaml
python train.py --params params/1d_example_no_model_loss.yaml
```
If you want to run the 2D models, you'll need some data that we aren't allowed to redistribute. This section also serves as a rough guide if you want to try applying our method to your own data or to publicly available data for a different region.
We rely on four sources of data, all of which are publicly available (but which you must download yourself):
- MEaSUREs Ice Velocities: `antarctica_ice_velocity_450m_v2.nc` from https://nsidc.org/data/NSIDC-0754/versions/1
- BedMachine Antarctica data (used for comparison figures and the ice-free land mask): `BedMachineAntarctica_2019-11-05_v01.nc` from https://nsidc.org/data/nsidc-0756
- CReSIS MCoRDS Radar Data: `Browse_2017_Antarctica_Basler.csv`, `Browse_2011_Antarctica_DC8.csv`, and `2011_Antarctica_TO.csv` from https://data.cresis.ku.edu/data/rds/
- UTIG HiCARS Radar Data: all "IceBridge HiCARS 1 L2 Geolocated Ice Thickness V001" and "IceBridge HiCARS 2 L2 Geolocated Ice Thickness V001" data from the region around Byrd Glacier. These can be downloaded through the Operation IceBridge Data Portal tool: https://nsidc.org/icebridge/portal/map
- (Optional) BedMap v2 Ice Thickness Map (ONLY used for a comparison figure): use the handy `download_and_extract_to_nc.py` script from PISM, https://github.com/pism/pism-ais/blob/master/bedmap2/download_and_extract_to_nc.py (it can be run standalone with minor adaptations to strip out the other PISM dependencies)
Place all of these in a data directory somewhere with a structure like this:
```
data
├── antarctica_ice_velocity_450m_v2.nc
├── BedMachineAntarctica_2019-11-05_v01.nc
├── bedmap2_1km_input.nc (optional, for figure only)
├── cresis
│   ├── 2011_Antarctica_TO.csv
│   ├── Browse_2011_Antarctica_DC8.csv
│   └── Browse_2017_Antarctica_Basler.csv
└── hicars-byrd
    ├── 114342533
    │   ...
    └── 86547581
```
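As a convenience (this snippet is our own sketch, not part of the repository), you can sanity-check that the fixed-name files are where the preprocessing step expects them:

```python
# Sanity-check sketch: verify the expected inputs exist before running
# data_preprocess.ipynb. Not part of the repository.
from pathlib import Path

data_dir = Path("data")  # point this at your data directory
expected = [
    "antarctica_ice_velocity_450m_v2.nc",
    "BedMachineAntarctica_2019-11-05_v01.nc",
    "cresis/2011_Antarctica_TO.csv",
    "cresis/Browse_2011_Antarctica_DC8.csv",
    "cresis/Browse_2017_Antarctica_Basler.csv",
    "hicars-byrd",  # directory of HiCARS granules
]
missing = [name for name in expected if not (data_dir / name).exists()]
print("missing:", missing if missing else "none")
```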
Once you have the above data, simply run the Jupyter notebook `data_preprocess.ipynb`, updating the `data_dir` variable to point to your data directory. The notebook will produce some visualizations and a `byrd-data.pickle` file.
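If you want a quick look at what the notebook produced (we assume only that it unpickles to a standard Python object; the exact structure is defined in the notebook):

```python
import pickle

# Load the preprocessed Byrd Glacier data produced by data_preprocess.ipynb.
with open("byrd-data.pickle", "rb") as f:
    data = pickle.load(f)
print(type(data))
```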
As mentioned above, it is strongly recommended that you run these on a GPU. With the data pickle file created, the two 2D cases we show in the paper can be recreated by running:
```
python train.py --params params/2d_cartesian.yaml
python train.py --params params/2d_rt_asym_diff_smooth.yaml
```
The latter is the case used to generate the main results figure and the comparison between our results, BedMachine Antarctica, and BedMap v2.
Assuming you created a W&B account and project, the easiest way to look at the results of these runs is through the W&B web interface.
If you'd like to re-create our figures, you can do so by running the provided figures Jupyter notebooks.
Note that each notebook is set up to take W&B experiment IDs as inputs and automatically download the relevant files. You'll need to update these IDs to your own run IDs. Alternatively, you can manually provide the saved models produced by each run.
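For reference, downloading a run's files programmatically with the W&B public API looks roughly like this (the entity/project/run-ID path below is a placeholder for your own):

```python
import wandb

# Fetch a finished run and download its saved files (e.g., model weights).
api = wandb.Api()
run = api.run("your-entity/your-project/your-run-id")  # placeholder path
for f in run.files():
    f.download(replace=True)  # downloads into the current directory
```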
(Note that the formatting of each figure is touched up after the fact using a vector graphics editor, so the SVGs produced by these notebooks may be a little ugly.)
We're glad you're interested in our work! Feel free to get in touch using the contact information in the paper.