Commit

Merge tag '1.13.3' into release/1.13.x
jhunkeler committed Jan 5, 2024
2 parents 4ca56b5 + 4b6d22c commit 7dc1afc
Showing 41 changed files with 560 additions and 457 deletions.
36 changes: 34 additions & 2 deletions CHANGES.rst
@@ -1,7 +1,39 @@
1.13.3 (unreleased)
1.13.4 (unreleased)
===================

-
-


1.13.3 (2024-01-05)
===================

documentation
-------------

- Updated many docs to change the use of unordered/bullet lists to
numbered lists, to avoid formatting issues in html pages. [#8156]

- Added arguments docs for the ``assign_wcs`` step. [#8156]

- Added ``in_memory`` to the arguments lists in the ``outlier_detection``
and ``resample`` steps. [#8156]

- Added instructions to the README for setting CRDS_CONTEXT to a specific
value. [#8156]

- Removed unused ``grow`` parameter from ``outlier_detection`` docs. [#8156]

outlier_detection
-----------------

- Removed the ``grow`` parameter from the step arguments, because it's no
longer used in the algorithms. [#8156]

ramp_fitting
------------

- Updated the argument description and parameter definition for `maximum_cores`
to accept integer values to be passed to STCAL ramp_fit.py. [#8123]

1.13.2 (2023-12-21)
===================
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -75,7 +75,7 @@ authors:
given-names: "Maria"
orcid: "https://orcid.org/0000-0003-2314-3453"
title: "JWST Calibration Pipeline"
version: 1.13.2
version: 1.13.3
doi: 10.5281/zenodo.7038885
date-released: 2023-12-21
date-released: 2024-01-05
url: "https://github.com/spacetelescope/jwst"
19 changes: 14 additions & 5 deletions README.md
@@ -9,7 +9,10 @@

![STScI Logo](docs/_static/stsci_logo.png)

**JWST requires Python 3.9 or above and a C compiler for dependencies.**
**JWST requires a C compiler for dependencies and is currently limited to Python 3.9, 3.10 or 3.11.**

**Until Python 3.12 is supported, fresh conda environments will require setting the
Python version to one of the three supported versions.**

**Linux and MacOS platforms are tested and supported. Windows is not currently supported.**

@@ -50,13 +53,13 @@ Remember that all conda operations must be done from within a bash/zsh shell.

You can install the latest released version via `pip`. From a bash/zsh shell:

conda create -n <env_name> python
conda create -n <env_name> python=3.11
conda activate <env_name>
pip install jwst

You can also install a specific version:

conda create -n <env_name> python
conda create -n <env_name> python=3.11
conda activate <env_name>
pip install jwst==1.9.4

@@ -65,7 +68,7 @@ You can also install a specific version:
You can install the latest development version (not as well tested) from the
Github master branch:

conda create -n <env_name> python
conda create -n <env_name> python=3.11
conda activate <env_name>
pip install git+https://github.com/spacetelescope/jwst

@@ -117,7 +120,7 @@ already installed with released versions of the `jwst` package.

As usual, the first two steps are to create and activate an environment:

conda create -n <env_name> python
conda create -n <env_name> python=3.11
conda activate <env_name>

To install your own copy of the code into that environment, you first need to
@@ -170,6 +173,11 @@ two environment variables:
``<locally-accessable-path>`` can be any path the user has permissions to use, such as `$HOME`.
Expect to use upwards of 200GB of disk space to cache the latest couple of contexts.

To use a specific CRDS context, other than the current default, set the ``CRDS_CONTEXT``
environment variable:

export CRDS_CONTEXT=jwst_1179.pmap
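
The same settings can also be made from Python, provided they are set before any CRDS-aware ``jwst`` code runs. A minimal sketch, assuming a cache directory under the user's home:

    import os

    # Set the CRDS cache location and server, then pin the context (values are examples).
    os.environ["CRDS_PATH"] = os.path.expanduser("~/crds_cache")   # assumed local cache path
    os.environ["CRDS_SERVER_URL"] = "https://jwst-crds.stsci.edu"  # JWST CRDS server
    os.environ["CRDS_CONTEXT"] = "jwst_1179.pmap"                  # specific context to use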

## Documentation

Documentation (built daily from the Github `master` branch) is available at:
@@ -210,6 +218,7 @@ the specified context and less than the context for the next release.

| jwst tag | DMS build | SDP_VER | CRDS_CONTEXT | Released | Ops Install | Notes |
|---------------------|-----------|----------|--------------|------------|-------------|-----------------------------------------------|
| 1.13.3 | B10.1rc4 | 2023.4.0 | 1181 | 2024-01-05 | | Fourth release candidate for B10.1 |
| 1.13.2 | B10.1rc3 | 2023.4.0 | 1181 | 2023-12-21 | | Third release candidate for B10.1 |
| 1.13.1 | B10.1rc2 | 2023.4.0 | 1181 | 2023-12-19 | | Second release candidate for B10.1 |
| 1.13.0 | B10.1rc1 | 2023.4.0 | 1179 | 2023-12-15 | | First release candidate for B10.1 |
32 changes: 32 additions & 0 deletions docs/jwst/assign_wcs/arguments.rst
@@ -0,0 +1,32 @@
Step Arguments
==============

The ``assign_wcs`` step has the following optional arguments to control
the behavior of the processing.

``--sip_approx`` (boolean, default=True)
A flag to enable the computation of a SIP approximation for
imaging modes.

``--sip_degree`` (integer, max=6, default=None)
Polynomial degree for the forward SIP fit. "None" uses the best fit.

``--sip_max_pix_error`` (float, default=0.1)
Maximum error for the SIP forward fit, in units of pixels. Ignored if
``sip_degree`` is set to an explicit value.

``--sip_inv_degree`` (integer, max=6, default=None)
Polynomial degree for the inverse SIP fit. "None" uses the best fit.

``--sip_max_inv_pix_error`` (float, default=0.1)
Maximum error for the SIP inverse fit, in units of pixels. Ignored if
``sip_inv_degree`` is set to an explicit value.

``--sip_npoints`` (integer, default=12)
Number of points for the SIP fit.

``--slit_y_low`` (float, default=-0.55)
Lower edge of a NIRSpec slit.

``--slit_y_high`` (float, default=0.55)
Upper edge of a NIRSpec slit.
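
For example, a few of these arguments can be overridden when running the step from Python; this is an illustrative sketch and the input file name is hypothetical:

    from jwst.assign_wcs import AssignWcsStep

    # Run assign_wcs on a rate file, overriding a few SIP-related arguments.
    result = AssignWcsStep.call(
        "jw00001001001_01101_00001_nrcb1_rate.fits",  # hypothetical input file
        sip_approx=True,        # compute the SIP approximation for imaging modes
        sip_max_pix_error=0.1,  # maximum forward-fit error, in pixels
        sip_npoints=12,         # number of points used for the SIP fit
    )
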
1 change: 1 addition & 0 deletions docs/jwst/assign_wcs/index.rst
@@ -8,6 +8,7 @@ Assign WCS
:maxdepth: 1

main.rst
arguments.rst
reference_files.rst
asdf-howto.rst
exp_types.rst
28 changes: 14 additions & 14 deletions docs/jwst/background_step/description.rst
@@ -35,42 +35,42 @@ image depends on whether the background exposures are "rate" (2D) or
"rateint" (3D) exposures. In the case of "rate" exposures, the average
background image is produced as follows:

* Clip the combined SCI arrays of all background exposures. For mixtures
of full chip and subarray data, only overlapping regions are used
* Compute the mean of the unclipped SCI values
* Sum in quadrature the ERR arrays of all background exposures, clipping the
#. Clip the combined SCI arrays of all background exposures. For mixtures
of full chip and subarray data, only overlapping regions are used
#. Compute the mean of the unclipped SCI values
#. Sum in quadrature the ERR arrays of all background exposures, clipping the
same input values as determined for the SCI arrays, and convert the result
to an uncertainty in the mean
* Combine the DQ arrays of all background exposures using a bitwise OR
#. Combine the DQ arrays of all background exposures using a bitwise OR
operation
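
A rough numpy sketch of this combination, assuming a simple 3-sigma clip and synthetic arrays (not the step's actual implementation):

    import numpy as np
    from astropy.stats import sigma_clip

    rng = np.random.default_rng(1)
    sci = rng.normal(0.1, 0.01, (4, 64, 64))        # stack of 4 background SCI arrays (synthetic)
    err = np.full_like(sci, 0.01)                   # matching ERR arrays
    dq = np.zeros(sci.shape, dtype=np.uint32)       # matching DQ arrays

    clipped = sigma_clip(sci, sigma=3.0, axis=0)    # clip across exposures (3-sigma assumed)
    mask = np.ma.getmaskarray(clipped)
    n_good = np.maximum((~mask).sum(axis=0), 1)
    avg_sci = clipped.mean(axis=0).filled(0.0)      # mean of the unclipped SCI values
    avg_err = np.sqrt(np.sum(np.where(mask, 0.0, err**2), axis=0)) / n_good
    avg_dq = np.bitwise_or.reduce(dq, axis=0)       # bitwise OR of the DQ arrays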

In the case of "rateint" exposures, each background exposure can have multiple
integrations, so calculations are slightly more involved. The "overall" average
background image is produced as follows:

* Clip the SCI arrays of each background exposure along its integrations
* Compute the mean of the unclipped SCI values to yield an average image for
#. Clip the SCI arrays of each background exposure along its integrations
#. Compute the mean of the unclipped SCI values to yield an average image for
each background exposure
* Clip the means of all background exposure averages
* Compute the mean of the unclipped background exposure averages to yield the
#. Clip the means of all background exposure averages
#. Compute the mean of the unclipped background exposure averages to yield the
"overall" average background image
* Sum in quadrature the ERR arrays of all background exposures, clipping the
#. Sum in quadrature the ERR arrays of all background exposures, clipping the
same input values as determined for the SCI arrays, and convert the result
to an uncertainty in the mean (This is not yet implemented)
* Combine the DQ arrays of all background exposures, by first using a bitwise
#. Combine the DQ arrays of all background exposures, by first using a bitwise
OR operation over all integrations in each exposure, followed by a
bitwise OR operation over all exposures.

The average background exposure is then subtracted from the target exposure.
The subtraction consists of the following operations:

* The SCI array of the average background is subtracted from the SCI
#. The SCI array of the average background is subtracted from the SCI
array of the target exposure

* The ERR array of the target exposure is currently unchanged, until full
#. The ERR array of the target exposure is currently unchanged, until full
error propagation is implemented in the entire pipeline

* The DQ arrays of the average background and the target exposure are
#. The DQ arrays of the average background and the target exposure are
combined using a bitwise OR operation

If the target exposure is a simple ImageModel, the background image is
14 changes: 7 additions & 7 deletions docs/jwst/cube_build/main.rst
@@ -11,13 +11,13 @@ spatial and one spectral.

The ``cube_build`` step can accept several different forms of input data, including:

- a single file containing a 2-D IFU image
#. A single file containing a 2-D IFU image

- a data model (IFUImageModel) containing a 2-D IFU image
#. A data model (`~jwst.datamodels.IFUImageModel`) containing a 2-D IFU image

- an association table (in json format) containing a list of input files
#. An association table (in json format) containing a list of input files

- a model container with several 2-D IFU data models
#. A model container with several 2-D IFU data models
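
For illustration, any of these input forms can be passed directly to the step from Python; the association file name below is hypothetical:

    from jwst.cube_build import CubeBuildStep

    # Build an IFU cube from an association of calibrated 2-D IFU exposures.
    cube = CubeBuildStep.call("ifu_asn.json")  # hypothetical association file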

There are a number of arguments the user can provide either in a parameter file or
on the command line that control the sampling size of the cube, as well as the type of data
@@ -98,8 +98,8 @@ We use the following terminology to define the spectral range divisions of MIRI:
example, the shortest wavelength range on MIRI is covered by Band 1-SHORT (aka 1A) and the
longest is covered by Band 4-LONG (aka 4C).

For **NIRSpec** we define a *band* as a single grating-filter combination, e.g. G140M-F070LP. The possible grating/filter
combinations for NIRSpec are given in the table below.
For **NIRSpec** we define a *band* as a single grating-filter combination, e.g. G140M-F070LP. The possible grating/filter
combinations for NIRSpec are given in the table below.

NIRSpec IFU Disperser and Filter Combinations
+++++++++++++++++++++++++++++++++++++++++++++
@@ -355,7 +355,7 @@ user with the options: ``rois`` and ``roiw``.
If *n* point cloud members are located within the ROI of a voxel, the voxel flux K =
:math:`\frac{ \sum_{i=1}^n Flux_i w_i}{\sum_{i=1}^n w_i}`

where the weighting ``weighting=emsm`` is
where the weighting ``weighting=emsm`` is:

:math:`w_i = e^{\frac{-({xnormalized}_i^2 + {ynormalized}_i^2 + {znormalized}_i^2)}{scalefactor}}`
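
A small numpy sketch of this weighted sum, with made-up coordinates, fluxes, and scale factor:

    import numpy as np

    # Normalized distances of n point-cloud members from the voxel center, and their fluxes.
    xnorm = np.array([0.1, 0.4, 0.8])
    ynorm = np.array([0.2, 0.1, 0.5])
    znorm = np.array([0.0, 0.3, 0.2])
    flux = np.array([10.0, 12.0, 9.0])
    scale_factor = 1.0                       # assumed value

    w = np.exp(-(xnorm**2 + ynorm**2 + znorm**2) / scale_factor)  # emsm weights
    voxel_flux = np.sum(flux * w) / np.sum(w)                     # weighted voxel flux K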

6 changes: 3 additions & 3 deletions docs/jwst/dark_current/description.rst
@@ -35,9 +35,9 @@ GROUPGAP intervening frames.

The frame-averaged dark is constructed using the following scheme:

* SCI arrays are computed as the mean of the original dark SCI arrays
* ERR arrays are computed as the uncertainty in the mean, using
:math:`\frac{\sqrt {\sum \mathrm{ERR}^2}}{nframes}`
#. SCI arrays are computed as the mean of the original dark SCI arrays
#. ERR arrays are computed as the uncertainty in the mean, using
:math:`\frac{\sqrt {\sum \mathrm{ERR}^2}}{nframes}`
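
A brief numpy sketch of these two operations, using synthetic frames of arbitrary size:

    import numpy as np

    rng = np.random.default_rng(0)
    dark_sci = rng.normal(0.05, 0.005, (4, 32, 32))  # stack of NFRAMES dark frames (synthetic)
    dark_err = np.full_like(dark_sci, 0.002)
    nframes = dark_sci.shape[0]

    avg_sci = dark_sci.mean(axis=0)                            # mean of the original SCI frames
    avg_err = np.sqrt(np.sum(dark_err**2, axis=0)) / nframes   # uncertainty in the mean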

The dark reference data are not integration-dependent for most instruments,
hence the same group-by-group dark current data are subtracted from every
8 changes: 4 additions & 4 deletions docs/jwst/dq_init/description.rst
@@ -13,20 +13,20 @@ integrations for a given pixel.

The actual process consists of the following steps:

- Determine what MASK reference file to use via the interface to the bestref
#. Determine what MASK reference file to use via the interface to the bestref
utility in CRDS.

- If the "PIXELDQ" or "GROUPDQ" arrays of the input dataset do not already exist,
#. If the "PIXELDQ" or "GROUPDQ" arrays of the input dataset do not already exist,
which is sometimes the case for raw input products, create these arrays in
the input data model and initialize them to zero. The "PIXELDQ" array will be
2D, with the same number of rows and columns as the input science data.
The "GROUPDQ" array will be 4D with the same dimensions (nints, ngroups,
nrows, ncols) as the input science data array.

- Check to see if the input science data is in subarray mode. If so, extract a
#. Check to see if the input science data is in subarray mode. If so, extract a
matching subarray from the full-frame MASK reference file.

- Propagate the DQ flags from the reference file DQ array to the science data "PIXELDQ"
#. Propagate the DQ flags from the reference file DQ array to the science data "PIXELDQ"
array using numpy's ``bitwise_or`` function.
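
A toy numpy sketch of this final propagation step, with an arbitrary flag value standing in for real MASK reference-file contents:

    import numpy as np

    pixeldq = np.zeros((32, 32), dtype=np.uint32)   # freshly initialized PIXELDQ
    mask_dq = np.zeros_like(pixeldq)
    mask_dq[3, 7] = 1                               # arbitrary flag value for one pixel

    pixeldq = np.bitwise_or(pixeldq, mask_dq)       # propagate reference-file DQ flags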

Note that when applying the ``dq_init`` step to FGS guide star data, as is done in
44 changes: 22 additions & 22 deletions docs/jwst/extract_1d/description.rst
@@ -179,28 +179,28 @@ each column (or row, if dispersion is vertical), using pixel values from all
background regions within each column (or row).

Parameters related to background subtraction are ``smoothing_length``,
``bkg_fit``, and ``bkg_order``.

* If ``smoothing_length`` is specified, the 2D image data used to perform
background extraction will be smoothed along the dispersion direction using
a boxcar of width ``smoothing_length`` (in pixels). If not specified, no
smoothing of the input 2D image data is performed.

* ``bkg_fit`` specifies the type of background computation to be performed
within each column (or row). The default value is None; if not set by
the user, the step will search the reference file for a value. If no value
is found, ``bkg_fit`` will be set to "poly". The "poly" mode fits a
polynomial of order ``bkg_order`` to the background values within
the column (or row). Alternatively, values of "mean" or "median" can be
specified in order to compute the simple mean or median of the background
values in each column (or row). Note that using "bkg_fit=mean" is
mathematically equivalent to "bkg_fit=poly" with "bkg_order=0". If ``bkg_fit``
is provided both by a reference file and by the user, e.g.
``steps.extract_1d.bkg_fit='poly'``, the user-supplied value will override
the reference file value.

* If ``bkg_fit=poly`` is specified, ``bkg_order`` is used to indicate the
polynomial order to be used. The default value is zero, i.e. a constant.
``bkg_fit``, and ``bkg_order``:

#. If ``smoothing_length`` is specified, the 2D image data used to perform
background extraction will be smoothed along the dispersion direction using
a boxcar of width ``smoothing_length`` (in pixels). If not specified, no
smoothing of the input 2D image data is performed.

#. ``bkg_fit`` specifies the type of background computation to be performed
within each column (or row). The default value is None; if not set by
the user, the step will search the reference file for a value. If no value
is found, ``bkg_fit`` will be set to "poly". The "poly" mode fits a
polynomial of order ``bkg_order`` to the background values within
the column (or row). Alternatively, values of "mean" or "median" can be
specified in order to compute the simple mean or median of the background
values in each column (or row). Note that using "bkg_fit=mean" is
mathematically equivalent to "bkg_fit=poly" with "bkg_order=0". If ``bkg_fit``
is provided both by a reference file and by the user, e.g.
``steps.extract_1d.bkg_fit='poly'``, the user-supplied value will override
the reference file value.

#. If ``bkg_fit=poly`` is specified, ``bkg_order`` is used to indicate the
polynomial order to be used. The default value is zero, i.e. a constant.
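
As an illustrative sketch, these parameters can also be set when calling the step from Python; the input file name is hypothetical:

    from jwst.extract_1d import Extract1dStep

    # Override the background-fit behavior when extracting 1-D spectra.
    result = Extract1dStep.call(
        "jw00001-o001_t001_miri_p750l_s2d.fits",  # hypothetical input file
        smoothing_length=11,    # boxcar width (in pixels) along dispersion
        bkg_fit="median",       # median of background values in each column/row
    )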

During source extraction, the background fit is evaluated at each pixel within the
source extraction region for that column (row), and the fitted values will
40 changes: 20 additions & 20 deletions docs/jwst/flatfield/main.rst
@@ -24,35 +24,35 @@ modes included in this category are NIRCam WFSS and Time-Series Grism,
NIRISS WFSS and SOSS, and MIRI MRS and LRS. All of these modes are processed
as follows:

- If the science data have been taken using a subarray and the FLAT
reference file is a full-frame image, extract the corresponding subarray
region from the flat-field data.
#. If the science data have been taken using a subarray and the FLAT
reference file is a full-frame image, extract the corresponding subarray
region from the flat-field data.

- Find pixels that have a value of NaN or zero in the FLAT reference file
SCI array and set their DQ values to "NO_FLAT_FIELD" and "DO_NOT_USE."
#. Find pixels that have a value of NaN or zero in the FLAT reference file
SCI array and set their DQ values to "NO_FLAT_FIELD" and "DO_NOT_USE."

- Reset the values of pixels in the flat that have DQ="NO_FLAT_FIELD" to
1.0, so that they have no effect when applied to the science data.
#. Reset the values of pixels in the flat that have DQ="NO_FLAT_FIELD" to
1.0, so that they have no effect when applied to the science data.

- Propagate the FLAT reference file DQ values into the science exposure
DQ array using a bitwise OR operation.
#. Propagate the FLAT reference file DQ values into the science exposure
DQ array using a bitwise OR operation.

- Apply the flat according to:
#. Apply the flat according to:

.. math::
SCI_{science} = SCI_{science} / SCI_{flat}
.. math::
SCI_{science} = SCI_{science} / SCI_{flat}
.. math::
VAR\_POISSON_{science} = VAR\_POISSON_{science} / SCI_{flat}^2
.. math::
VAR\_POISSON_{science} = VAR\_POISSON_{science} / SCI_{flat}^2
.. math::
VAR\_RNOISE_{science} = VAR\_RNOISE_{science} / SCI_{flat}^2
.. math::
VAR\_RNOISE_{science} = VAR\_RNOISE_{science} / SCI_{flat}^2
.. math::
VAR\_FLAT_{science} = ( SCI_{science}^{2} / SCI_{flat}^{2} ) * ERR_{flat}^{2}
.. math::
VAR\_FLAT_{science} = ( SCI_{science}^{2} / SCI_{flat}^{2} ) * ERR_{flat}^{2}
.. math::
ERR_{science} = \sqrt{VAR\_POISSON + VAR\_RNOISE + VAR\_FLAT}
.. math::
ERR_{science} = \sqrt{VAR\_POISSON + VAR\_RNOISE + VAR\_FLAT}
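
A compact numpy sketch of these relations on synthetic arrays; here the flat-fielded SCI is used in the VAR_FLAT term, which is an interpretation of the equation as written rather than a statement of the pipeline's internal ordering:

    import numpy as np

    rng = np.random.default_rng(2)
    sci = rng.normal(100.0, 1.0, (64, 64))        # synthetic science SCI
    flat = rng.normal(1.0, 0.01, sci.shape)       # synthetic flat SCI
    err_flat = np.full_like(sci, 0.005)           # synthetic flat ERR
    var_poisson = np.abs(sci).copy()
    var_rnoise = np.full_like(sci, 25.0)

    sci_ff = sci / flat                              # SCI_science / SCI_flat
    var_poisson = var_poisson / flat**2
    var_rnoise = var_rnoise / flat**2
    var_flat = (sci_ff**2 / flat**2) * err_flat**2   # flat-field variance term
    err = np.sqrt(var_poisson + var_rnoise + var_flat)
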
Multi-integration datasets ("_rateints.fits" products), which are common
for modes like NIRCam Time-Series Grism, NIRISS SOSS, and MIRI LRS Slitless,