
Releases: OceanParcels/Parcels

Parcels v1.0.3: a Lagrangian Ocean Analysis tool for the petascale age

08 May 12:47
4977d89

Parcels v1.0.3 builds on the previous v1.0.2 release. Major changes since then:

  • Fixed bug in Windows install (#355)
  • Spatially varying Brownian diffusion (#340) and exponential-variate diffusion (#333); see the sketch after this list
  • Added option for two-dimensional histograms in plotParticleTrajectories() (#367)
  • Arguments for Field.from_netcdf() have changed to match FieldSet.from_netcdf() (#364)
  • Various other minor bug fixes
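
The spatially varying diffusion of #340 relies on diffusivity Fields that a diffusion kernel can sample at each particle position. Below is a minimal sketch of setting up such fields, assuming an existing fieldset and a diffusion kernel that reads fields named Kh_zonal and Kh_meridional; the field names, grid and values are illustrative, not prescribed by this release.

```python
import numpy as np
from parcels import Field

# Diffusivity that increases from west to east over a hypothetical domain
lon = np.linspace(0., 30., 61)
lat = np.linspace(-40., -20., 41)
kh = np.tile(np.linspace(10., 100., lon.size), (lat.size, 1))  # shape (lat, lon)

# Assuming 'fieldset' is an existing FieldSet holding the flow fields
fieldset.add_field(Field('Kh_zonal', kh, lon=lon, lat=lat))
fieldset.add_field(Field('Kh_meridional', kh, lon=lon, lat=lat))
```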

Parcels v1.0.2: a Lagrangian Ocean Analysis tool for the petascale age

05 Apr 12:54
693f8f2

Parcels v1.0.2 builds on the previous v1.0.1 release. Major changes since then:

  • Parcels is now also Python 3-compatible
  • FieldSet.advancetime() is no longer needed, as advancing of time for large datasets is now handled under the hood. For datasets with more than 3 time snapshots, Parcels runs in 'defer_load' mode, in which the actual reading of NetCDF files is only performed when required. This means that longer lists of filenames in FieldSet.from_netcdf() (and FieldSet.from_nemo()) do not require more memory. If you still want the entire dataset loaded in one go, pass full_load=True to FieldSet.from_netcdf(); see the sketch after this list.
  • The ParticleFile class now always writes particle data in array format. See also http://oceanparcels.org/faq.html#outputformat
  • The website has been revamped at http://oceanparcels.org, with repository at https://github.com/OceanParcels/oceanparcels_website
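
A minimal sketch of the deferred-loading behaviour described above; the file names, variable names and dimension mapping are placeholders.

```python
from parcels import FieldSet

filenames = {'U': 'ocean_u_*.nc', 'V': 'ocean_v_*.nc'}  # hypothetical files
variables = {'U': 'uo', 'V': 'vo'}
dimensions = {'lon': 'longitude', 'lat': 'latitude', 'time': 'time'}

# With more than 3 time snapshots, Parcels stays in 'defer_load' mode and only
# reads each NetCDF file when the simulation actually needs it
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)

# To read the entire dataset into memory in one go instead:
fieldset_full = FieldSet.from_netcdf(filenames, variables, dimensions,
                                     full_load=True)
```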

Parcels v1.0.1: a Lagrangian Ocean Analysis tool for the petascale age

02 Feb 15:08
ea516c7

Parcels v1.0.1 builds on the previous v1.0 release. Major changes since then:

  • Changes to the arguments of ParticleSet.execute() (#289); see the sketch after this list:
    • the interval argument has been renamed outputdt and should now be set when creating the ParticleFile object
    • the show_movie argument has been split into a moviedt argument (the frequency of the animation) and a movie_background_field argument that determines which background field to show: a Field object, the string 'vector', or the default None for no background field
    • endtime can no longer be a timedelta object; the only valid formats for endtime are a datetime object or a double. To specify a duration instead, use the runtime argument
    • Except for dt, all arguments controlling intervals should always be positive, regardless of whether you run in forward or backward mode. Hence, to reverse the direction of a run, only the sign of dt needs to change
  • The old FieldSet.from_nemo() has been renamed FieldSet.from_parcels(), and a new FieldSet.from_nemo() handles curvilinear NEMO grids (#285)
  • Added a timer class to profile CPU time (#288)
  • Added the option ParticleFile(..., write_ondelete=True) to write particle data only when a particle is deleted (#290)
  • Added the option to write Kernels directly as C functions, for JITParticles (#278)
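
A minimal sketch of the reworked ParticleSet.execute() call described above, assuming an existing fieldset; the release location, output name and time steps are placeholders.

```python
from datetime import timedelta
from parcels import ParticleSet, JITParticle, AdvectionRK4

pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
                   lon=[25.0], lat=[-35.0])

# outputdt is now set when creating the ParticleFile, not passed to execute()
output_file = pset.ParticleFile(name="trajectories", outputdt=timedelta(hours=6))

pset.execute(AdvectionRK4,
             runtime=timedelta(days=10),       # use runtime for a duration; endtime takes a datetime or a double
             dt=timedelta(minutes=5),          # negate dt to run backward in time
             output_file=output_file,
             moviedt=timedelta(hours=1),       # frequency of the animation
             movie_background_field='vector')  # a Field object, the string 'vector', or None

# To write particle data only when a particle is deleted (#290):
# deleted_file = pset.ParticleFile(name="deleted", write_ondelete=True)
```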

Parcels v1.0: a Lagrangian Ocean Analysis tool for the petascale age

20 Jan 20:30

Parcels v1.0 builds on the previous v0.9 release. Major changes since then:

  • Support for many more types of Grids, including curvilinear (horizontal) and s-grids (vertical) (#262)
  • Added a Brownian diffusion kernel. More diffusion kernels to come in future versions (#269)
  • Easier API for repeated particle release (#261); see the sketch after this list
  • Support for Fields with different Grids in one FieldSet (#241)
  • Support on Windows OS (#236) and easier install on macOS and Linux (#228)
  • Many bugfixes and tweaks
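
A minimal sketch of repeated particle release, assuming the repeatdt keyword of the ParticleSet constructor is the interface meant by #261; the fieldset, locations and interval are placeholders.

```python
from datetime import timedelta
from parcels import ParticleSet, JITParticle

# Assuming an existing 'fieldset'; particles are re-released at these sites every day
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
                   lon=[25.0, 26.0], lat=[-35.0, -35.0],
                   repeatdt=timedelta(days=1))
```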

Future work will now focus on efficiency and parallelisation of Parcels. Thanks to all who contributed to this version 1.0!

Parcels v0.9: prototyping a Lagrangian Ocean Analysis tool for the petascale age

06 Jul 11:26

Parcels v0.9 is a fully-functional, feature-complete code for offline Lagrangian ocean analysis. This version 0.9 is focussed on laying out the API, with future work concentrating on optimisation, efficiency and at-runtime integration with OGCMs.