Merge branch 'master' into tutorial/gis
VeckoTheGecko authored Jun 6, 2023
2 parents 500a012 + e5c7cbc · commit bf1e4b7
Showing 10 changed files with 16 additions and 17 deletions.
6 changes: 3 additions & 3 deletions .readthedocs.yaml
@@ -3,9 +3,9 @@ build:
os: ubuntu-22.04
tools:
python: mambaforge-4.10
- # jobs:
- #   pre_build:
- #     - sphinx-build -b linkcheck docs/ _build/linkcheck
+ jobs:
+   pre_build:
+     - sphinx-build -b linkcheck docs/ _build/linkcheck

sphinx:
configuration: docs/conf.py
5 changes: 4 additions & 1 deletion docs/conf.py
@@ -66,7 +66,10 @@

linkcheck_ignore = [
r'http://localhost:\d+/',
- "http://www2.cesm.ucar.edu/models/cesm1.0/pop2/doc/sci/POPRefManual.pdf",  # Site doesn't allow crawling
+ r"http://www2\.cesm\.ucar\.edu/models/cesm1\.0/pop2/doc/sci/POPRefManual.pdf",  # Site doesn't allow crawling
+ r"https://pubs\.acs\.org/doi/10\.1021/acs\.est\.0c01984",  # Site doesn't allow crawling
+ r"https://aip\.scitation\.org/doi/10\.1063/1\.4982720",  # Site doesn't allow crawling
+ r"https://www\.sciencedirect\.com/.*",  # Site doesn't allow crawling
]

# The version info for the project you're documenting, acts as replacement for
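The entries in `linkcheck_ignore` are regular expressions, which is why this commit escapes the dots: an unescaped `.` is a wildcard and can silently ignore more URLs than intended. A quick sketch of the difference using plain `re` (Sphinx's internal matching may differ in detail):

```python
import re

# Escaped pattern (as added in this commit): \. matches only a literal dot.
escaped = r"https://www\.sciencedirect\.com/.*"
assert re.match(escaped, "https://www.sciencedirect.com/science/article/pii/S1463500317301853")

# A lookalike host does not match the escaped pattern...
lookalike = "https://wwwxsciencedirectxcom/anything"
assert not re.match(escaped, lookalike)

# ...but it does match the unescaped one, because '.' matches any character.
unescaped = "https://www.sciencedirect.com/.*"
assert re.match(unescaped, lookalike)
```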
2 changes: 1 addition & 1 deletion docs/examples/documentation_MPI.ipynb
@@ -109,7 +109,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "For small projects, the above instructions are sufficient. If your project is large, then it is helpful to combine the `proc*` directories into a single zarr dataset and to optimize the chunking for your analysis. What is \"large\"? If you find yourself running out of memory while doing your analyses, saving the results, or sorting the dataset, or if reading the data is taking longer than you can tolerate, your problem is \"large.\" Another rule of thumb is if the size of your output directory is 1/3 or more of the memory of your machine, your problem is large. Chunking and combining the `proc*` data in order to speed up analysis is discussed [in the documentation on runs with large output](https://nbviewer.org/github/OceanParcels/parcels/blob/master/parcels/examples/documentation_LargeRunsOutput.ipynb).\n"
+ "For small projects, the above instructions are sufficient. If your project is large, then it is helpful to combine the `proc*` directories into a single zarr dataset and to optimize the chunking for your analysis. What is \"large\"? If you find yourself running out of memory while doing your analyses, saving the results, or sorting the dataset, or if reading the data is taking longer than you can tolerate, your problem is \"large.\" Another rule of thumb is if the size of your output directory is 1/3 or more of the memory of your machine, your problem is large. Chunking and combining the `proc*` data in order to speed up analysis is discussed [in the documentation on runs with large output](https://docs.oceanparcels.org/en/latest/examples/documentation_LargeRunsOutput.html).\n"
]
},
{
4 changes: 2 additions & 2 deletions docs/examples/tutorial_Agulhasparticles.ipynb
@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This brief tutorial shows how to recreate the [animated gif](http://oceanparcels.org/animated-gifs/globcurrent_fullyseeded.gif) showing particles in the Agulhas region south of Africa.\n"
+ "This brief tutorial shows how to recreate the [animated gif](http://oceanparcels.org/images/globcurrent_fullyseeded.gif) showing particles in the Agulhas region south of Africa.\n"
]
},
{
@@ -110,7 +110,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now we can advect the particles. Note that we do this inside a `for`-loop, so we can save a plot every six hours (which is the value of `runtime`). See the [plotting tutorial](http://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/examples/tutorial_plotting.ipynb) for more information on the `pset.show()` method.\n"
+ "Now we can advect the particles. Note that we do this inside a `for`-loop, so we can save a plot every six hours (which is the value of `runtime`). See the [plotting tutorial](https://docs.oceanparcels.org/en/latest/examples/tutorial_plotting.html) for more information on the `pset.show()` method.\n"
]
},
{
2 changes: 1 addition & 1 deletion docs/examples/tutorial_Argofloats.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "This tutorial shows how simple it is to construct a Kernel in Parcels that mimics the [vertical movement of Argo floats](http://www.argo.ucsd.edu/operation_park_profile.jpg).\n"
+ "This tutorial shows how simple it is to construct a Kernel in Parcels that mimics the [vertical movement of Argo floats](https://www.aoml.noaa.gov/phod/argo/images/argo_float_mission.jpg).\n"
]
},
{
4 changes: 2 additions & 2 deletions docs/examples/tutorial_analyticaladvection.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "While [Lagrangian Ocean Analysis](https://www.sciencedirect.com/science/article/pii/S1463500317301853) has been around since at least the 1980s, the [Blanke and Raynaud (1997)](https://journals.ametsoc.org/doi/full/10.1175/1520-0485%281997%29027%3C1038%3AKOTPEU%3E2.0.CO%3B2) paper has really spurred the use of Lagrangian particles for large-scale simulations. In their 1997 paper, Blanke and Raynaud introduce the so-called _Analytical Advection_ scheme for pathway integration. This scheme has been the base for the [Ariane](http://stockage.univ-brest.fr/~grima/Ariane/) and [TRACMASS](http://www.tracmass.org/) tools. We have also implemented it in Parcels, particularly to facilitate comparison with for example the Runge-Kutta integration scheme.\n",
+ "While [Lagrangian Ocean Analysis](https://www.sciencedirect.com/science/article/pii/S1463500317301853) has been around since at least the 1980s, the [Blanke and Raynaud (1997)](https://journals.ametsoc.org/doi/full/10.1175/1520-0485%281997%29027%3C1038%3AKOTPEU%3E2.0.CO%3B2) paper has really spurred the use of Lagrangian particles for large-scale simulations. In their 1997 paper, Blanke and Raynaud introduce the so-called _Analytical Advection_ scheme for pathway integration. This scheme has been the base for the [Ariane](http://ariane.lagrangian.free.fr/whatsariane.html) and [TRACMASS](http://www.tracmass.org/) tools. We have also implemented it in Parcels, particularly to facilitate comparison with for example the Runge-Kutta integration scheme.\n",
"\n",
"In this tutorial, we will briefly explain what the scheme is and how it can be used in Parcels. For more information, see for example [Döös et al (2017)](https://www.geosci-model-dev.net/10/1733/2017/).\n"
]
@@ -33,7 +33,7 @@
"\n",
"And specifically for the implementation in Parcels: 2. The `AdvectionAnalytical` kernel only works for `Scipy Particles`. 3. Since Analytical Advection does not use timestepping, the `dt` parameter in `pset.execute()` should be set to `np.inf`. For backward-in-time simulations, it should be set to `-np.inf`. 4. For time-varying fields, only the 'intermediate timesteps' scheme ([section 2.3 of Döös et al 2017](https://www.geosci-model-dev.net/10/1733/2017/gmd-10-1733-2017.pdf)) is implemented. While there is also a way to analytically solve the time-evolving fields ([section 2.4 of Döös et al 2017](https://www.geosci-model-dev.net/10/1733/2017/gmd-10-1733-2017.pdf)), this is not yet implemented in Parcels.\n",
"\n",
- "We welcome contributions to the further development of this algorithm and in particular the analytical time-varying case. See [here](https://github.com/OceanParcels/parcels/blob/master/parcels/kernels/advection.py) for the code of the `AdvectionAnalytical` kernel.\n"
+ "We welcome contributions to the further development of this algorithm and in particular the analytical time-varying case. See [here](https://github.com/OceanParcels/parcels/blob/master/parcels/application_kernels/advection.py) for the code of the `AdvectionAnalytical` kernel.\n"
]
},
{
2 changes: 1 addition & 1 deletion docs/examples/tutorial_jit_vs_scipy.ipynb
@@ -111,7 +111,7 @@
"\n",
"Sometimes, you'd want to run Parcels in Scipy mode anyways. In that case, there are ways to make Parcels a bit faster.\n",
"\n",
- "As background, one of the most computationally expensive operations in Parcels is the [Field Sampling](http://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_sampling.ipynb). In the default sampling in Scipy mode, we don't keep track of _where_ in the grid a particle is; which means that for every sampling call, we need to again search for which grid cell a particle is in.\n",
+ "As background, one of the most computationally expensive operations in Parcels is the [Field Sampling](https://docs.oceanparcels.org/en/latest/examples/tutorial_sampling.html). In the default sampling in Scipy mode, we don't keep track of _where_ in the grid a particle is; which means that for every sampling call, we need to again search for which grid cell a particle is in.\n",
"\n",
"Let's see how this works in the simple Peninsula FieldSet used above. We use a simple Euler-Forward Advection now to make the point. In particular, we use two types of Advection Kernels\n"
]
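The Euler-Forward advection mentioned in that cell takes one velocity sample per step and advances the position with it. A minimal self-contained sketch of the idea in plain Python (this is not the Parcels kernel API; the solid-body-rotation velocity field is an assumption chosen for illustration):

```python
def euler_forward(x, y, u, v, dt, nsteps):
    """Advect one particle: sample the velocity field once per step, then step."""
    for _ in range(nsteps):
        ux, vy = u(x, y), v(x, y)  # one field sample per step
        x, y = x + ux * dt, y + vy * dt
    return x, y

# Solid-body rotation (u = -y, v = x): the exact trajectory is a circle,
# so the distance from the origin should stay close to 1 (Euler-Forward
# drifts slightly outward, which is its characteristic error).
x, y = euler_forward(1.0, 0.0, lambda x, y: -y, lambda x, y: x, dt=1e-3, nsteps=1000)
```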
2 changes: 1 addition & 1 deletion docs/examples/tutorial_sampling.ipynb
@@ -299,7 +299,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Instead, you should use the code `u, v = fieldset.UV[...]`. With this code, the sampling is consistent with the actual velocity fields used in the advection Kernels. The difference is that on a curvilinear grid, `fieldset.U[..]` returns the velocity in the `i`-direction (the columns on the grid), while `fieldset.UV[...]` returns the velocities in the longitude and latitude direction. Furthermore, only `fieldset.UV[...]` sampling can correctly deal with boundary conditions such as `freeslip` and `partialslip` ([documentation_unstuck_Agrid](https://nbviewer.org/github/OceanParcels/parcels/blob/master/parcels/examples/documentation_unstuck_Agrid.ipynb#3.-Slip-boundary-conditions))\n"
+ "Instead, you should use the code `u, v = fieldset.UV[...]`. With this code, the sampling is consistent with the actual velocity fields used in the advection Kernels. The difference is that on a curvilinear grid, `fieldset.U[..]` returns the velocity in the `i`-direction (the columns on the grid), while `fieldset.UV[...]` returns the velocities in the longitude and latitude direction. Furthermore, only `fieldset.UV[...]` sampling can correctly deal with boundary conditions such as `freeslip` and `partialslip` ([documentation_unstuck_Agrid](https://docs.oceanparcels.org/en/latest/examples/documentation_unstuck_Agrid.html#3.-Slip-boundary-conditions))\n"
]
},
{
2 changes: 1 addition & 1 deletion docs/examples/tutorial_unitconverters.ipynb
@@ -82,7 +82,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "However, printing the velocites directly shows something perhaps surprising. Here, we use the square-bracket field-interpolation notation to print the field value at (5W, 40N, 0m depth) at time 0. _Note that sampling a velocity in Parcels is done by calling the `fieldset.UV` VectorField; see the [Field Sampling tutorial](https://nbviewer.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_sampling.ipynb#Sampling-velocity-fields?flush_cache=true) for more information._\n"
+ "However, printing the velocites directly shows something perhaps surprising. Here, we use the square-bracket field-interpolation notation to print the field value at (5W, 40N, 0m depth) at time 0. _Note that sampling a velocity in Parcels is done by calling the `fieldset.UV` VectorField; see the [Field Sampling tutorial](https://docs.oceanparcels.org/en/latest/examples/tutorial_sampling.html#Sampling-velocity-fields) for more information._\n"
]
},
{
4 changes: 0 additions & 4 deletions parcels/utils.py
@@ -91,10 +91,6 @@ def get_data_home(data_home=None):
If the ``data_home`` argument is not provided, it will use a directory
specified by the ``PARCELS_EXAMPLE_DATA`` environment variable (if it exists)
or otherwise default to an OS-appropriate user cache location.
- Adapted from `seaborn.utils.get_data_home`_
- .. _seaborn.utils.get_data_home: https://github.com/mwaskom/seaborn/blob/824c102525e6a29cde9bca1ce0096d50588fda6b/seaborn/utils.py#L522-L537
"""
if data_home is None:
data_home = os.environ.get("PARCELS_EXAMPLE_DATA", platformdirs.user_cache_dir("parcels"))
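The lookup order used by `get_data_home` — an explicit argument beats the `PARCELS_EXAMPLE_DATA` environment variable, which beats an OS cache default — can be sketched as follows. This is a simplified stand-in: the real function obtains the default via `platformdirs.user_cache_dir("parcels")`, which is replaced here by a hard-coded path to keep the sketch dependency-free.

```python
import os

def get_data_home(data_home=None):
    # Precedence: explicit argument > PARCELS_EXAMPLE_DATA env var > cache default.
    # (The real implementation uses platformdirs.user_cache_dir("parcels")
    # instead of this assumed stand-in path.)
    if data_home is None:
        data_home = os.environ.get(
            "PARCELS_EXAMPLE_DATA",
            os.path.expanduser("~/.cache/parcels"),
        )
    return data_home
```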
