
Commit

Merge pull request #1426 from OceanParcels/v3.0
Towards parcels v3.0
erikvansebille authored Oct 10, 2023
2 parents b7738e9 + 8597833 commit 3662aa5
Showing 111 changed files with 64,257 additions and 122,034 deletions.
8 changes: 1 addition & 7 deletions .binder/environment.yml
@@ -1,13 +1,7 @@
name: py3_parcels
name: parcels_binder
channels:
- conda-forge
- defaults
dependencies:
- python>=3.8
- parcels
- ffmpeg>=3.2.3
- mpi4py>=3.0.1
- mpich>=3.2.1
- py>=1.4.27
- scikit-learn
- pykdtree
11 changes: 0 additions & 11 deletions .github/actions/install-parcels/action.yml
@@ -29,14 +29,3 @@ runs:
- name: Install parcels
run: pip install .
shell: bash -el {0}
- name: Set env variables for macos
run: |
if [[ "${{ runner.os }}" == macOS ]]; then
echo "Setting CONDA_BUILD_SYSROOT and C_INCLUDE_PATH for macos"
echo "CONDA_BUILD_SYSROOT=/" >> $GITHUB_ENV
echo "C_INCLUDE_PATH=$C_INCLUDE_PATH:/Applications/Xcode.app/Contents//Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/" >> $GITHUB_ENV
echo "CC=gcc" >> $GITHUB_ENV
else
echo "Platform not macos."
fi
shell: bash
2 changes: 1 addition & 1 deletion .github/workflows/integration-tests.yml
@@ -36,7 +36,7 @@ jobs:
- name: Setup Conda and parcels
uses: ./.github/actions/install-parcels
with:
environment-file: environment_py3_${{ matrix.os-short }}.yml
environment-file: environment.yml
environment-name: py3_parcels
- name: Integration test
run: |
4 changes: 2 additions & 2 deletions .github/workflows/unit-tests.yml
@@ -22,7 +22,7 @@ jobs:
fail-fast: false
matrix:
os: [macos, ubuntu, windows]
include:
include: # TODO check if these can go?
- os: macos
os-short: osx
- os: ubuntu
@@ -35,7 +35,7 @@
- name: Setup Conda and parcels
uses: ./.github/actions/install-parcels
with:
environment-file: environment_py3_${{ matrix.os-short }}.yml
environment-file: environment.yml
environment-name: py3_parcels
- name: Unit test
run: |
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
rev: v4.5.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
@@ -10,7 +10,7 @@ repos:
types: [text]
files: \.(json|ipynb)$
- repo: https://github.com/pycqa/flake8
rev: '6.0.0'
rev: '6.1.0'
hooks:
- id: flake8
- repo: https://github.com/pycqa/pydocstyle
4 changes: 2 additions & 2 deletions .readthedocs.yaml
@@ -2,7 +2,7 @@ version: 2
build:
os: ubuntu-22.04
tools:
python: mambaforge-4.10
python: mambaforge-22.9
jobs:
pre_build:
- sphinx-build -b linkcheck docs/ _build/linkcheck
@@ -11,4 +11,4 @@ sphinx:
configuration: docs/conf.py

conda:
environment: environment_py3_linux.yml
environment: docs/environment_docs.yml
198 changes: 0 additions & 198 deletions docs/_static/diagrams/ParticleSetHierarchy_basic_framework.drawio

This file was deleted.

599 changes: 0 additions & 599 deletions docs/_static/diagrams/ParticleSet_UML.drawio

This file was deleted.

Binary file added docs/_static/homepage.gif
Binary file added docs/_static/loop-icon.jpeg
4 changes: 3 additions & 1 deletion docs/conf.py
@@ -71,6 +71,7 @@
r"https://aip\.scitation\.org/doi/10\.1063/1\.4982720", # Site doesn't allow crawling
r"https://www\.sciencedirect\.com/.*", # Site doesn't allow crawling
r"https://lxml\.de/", # Crawler occasionally fails to establish connection
r"https://linux\.die\.net/", # Site doesn't allow crawling

# To monitor
r"http://marine.copernicus.eu/", # 2023-06-07 Site non-responsive
@@ -361,7 +362,8 @@ def linkcode_resolve(domain, info):
'examples/tutorial_interaction': '_static/pulled_particles_twoatractors_line.gif',
'examples/documentation_LargeRunsOutput': '_static/harddrive.png',
'examples/tutorial_unitconverters': '_static/globe-icon.jpg',
'examples/documentation_geospatial': '_images/tutorial_geospatial_google_earth.png'
'examples/documentation_geospatial': '_images/tutorial_geospatial_google_earth.png',
'examples/tutorial_kernelloop': '_static/loop-icon.jpeg',
}
# -- Options for LaTeX output ---------------------------------------------

2 changes: 1 addition & 1 deletion docs/documentation.rst
@@ -21,7 +21,6 @@ Parcels has several documentation and tutorial Jupyter notebooks which go throug
examples/documentation_indexing.ipynb
examples/tutorial_nemo_curvilinear.ipynb
examples/tutorial_nemo_3D.ipynb
examples/tutorial_SummedFields.ipynb
examples/tutorial_NestedFields.ipynb
examples/tutorial_timevaryingdepthdimensions.ipynb
examples/tutorial_periodic_boundaries.ipynb
@@ -47,6 +46,7 @@ Parcels has several documentation and tutorial Jupyter notebooks which go throug
examples/tutorial_particle_field_interaction.ipynb
examples/tutorial_interaction.ipynb
examples/tutorial_analyticaladvection.ipynb
examples/tutorial_kernelloop.ipynb


.. nbgallery::
3 changes: 0 additions & 3 deletions environment_py3_linux.yml → docs/environment_docs.yml
@@ -6,9 +6,6 @@ dependencies:
- cgen
- ffmpeg>=3.2.3
- git
- gcc_linux-64
- mpi4py>=3.0.1
- mpich>=3.2.1
- jupyter
- matplotlib-base>=2.0.2
- netcdf4>=1.1.9
16 changes: 3 additions & 13 deletions docs/examples/documentation_MPI.ipynb
@@ -5,7 +5,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Parallelisation\n"
"# Parallelisation with MPI and Field chunking with dask\n"
]
},
{
@@ -148,25 +148,15 @@
"source": [
"Note that if you want, you can save this new DataSet with the `.to_zarr()` or `.to_netcdf()` methods.\n",
"\n",
"When using `.to_zarr()`, then it further analyses may be sped up by first rechunking the DataSet, by using `ds.chunk()`. Note that in some cases, you will first need to remove the chunks encoding information manually, using a code like below\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for v in ds.variables:\n",
" del ds[v].encoding[\"chunks\"]"
"When using `.to_zarr()`, then further analysis may be sped up by first rechunking the DataSet, by using `ds.chunk()`. Note that in some cases, you will first need to remove the chunks encoding information manually, using a code like below\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"For small projects, the above instructions are sufficient. If your project is large, then it is helpful to combine the `proc*` directories into a single zarr dataset and to optimise the chunking for your analysis. What is \"large\"? If you find yourself running out of memory while doing your analyses, saving the results, or sorting the dataset, or if reading the data is taking longer than you can tolerate, your problem is \"large.\" Another rule of thumb is if the size of your output directory is 1/3 or more of the memory of your machine, your problem is large. Chunking and combining the `proc*` data in order to speed up analysis is discussed [in the documentation on runs with large output](https://docs.oceanparcels.org/en/latest/examples/documentation_LargeRunsOutput.html).\n"
"For small projects, the above instructions are sufficient. If your project is large, then it is helpful to combine the `proc*` directories into a single zarr dataset and to optimise the chunking for your analysis. What is \"large\"? If you find yourself running out of memory while doing your analysis, saving the results, or sorting the dataset, or if reading the data is taking longer than you can tolerate, your problem is \"large.\" Another rule of thumb is if the size of your output directory is 1/3 or more of the memory of your machine, your problem is large. Chunking and combining the `proc*` data in order to speed up analysis is discussed [in the documentation on runs with large output](https://docs.oceanparcels.org/en/latest/examples/documentation_LargeRunsOutput.html).\n"
]
},
{
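The markdown cell above (and the code cell that this commit removes) describe a rechunking pattern that, as a minimal sketch, could look like the following. The store paths, the chunk sizes, and the trajectory/obs dimension names are illustrative assumptions rather than values taken from the commit:

import xarray as xr

# Open a Parcels zarr output store (the path is an illustrative assumption).
ds = xr.open_zarr("MPIrun.zarr")

# Drop the per-variable chunk encoding recorded at write time, so it does
# not conflict with the new chunking chosen below; this is the loop from
# the code cell removed in this commit.
for v in ds.variables:
    del ds[v].encoding["chunks"]

# Rechunk to analysis-friendly sizes (values are illustrative) and write
# a copy that is faster to read back.
ds = ds.chunk({"trajectory": 1024, "obs": 128})
ds.to_zarr("MPIrun_rechunked.zarr", mode="w")

Rechunking before .to_zarr() matters because zarr reads whole chunks: chunk sizes matched to the access pattern (per trajectory, or per observation slice) avoid reading data the analysis never touches.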

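The "large projects" paragraph above refers to combining the per-rank proc* directories into a single zarr dataset. Under the assumption that each MPI rank writes its own store named proc01, proc02, ... inside the output directory, and that particles are laid out along a trajectory dimension, a minimal sketch of that combination is:

from glob import glob

import xarray as xr

# Per-rank zarr stores from an MPI run (the layout is an assumption here;
# see the LargeRunsOutput documentation linked above for the full recipe).
stores = sorted(glob("MPIrun.zarr/proc*"))

# Concatenate the per-rank datasets along the trajectory dimension so
# all particles end up in a single dataset.
ds = xr.concat([xr.open_zarr(s) for s in stores], dim="trajectory")

The combined dataset can then be rechunked and written back with .to_zarr(), as in the previous sketch, giving a single store that is faster to analyse.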