Releases · dkazanc/NoStripesNet
Version 2.3
Adds the ability to train across multiple nodes with multiple GPUs, some Jupyter Notebooks to analyse results, and some minor bug fixes/clean-ups.
Changes
- Added the ability to train using PyTorch's `DistributedDataParallel` module, speeding up training by around 5x (see the sketch after this list)
- Changed the Generator architecture to only use 4x4 kernels
- Added more control over saving a model during training; an interval in epochs can be specified at which to save the model
- Added lists of clean and stripe indices to the `PatchVisualiser` class
- Models are now saved with the `--name` argument, rather than the `--mode` argument
- Added a parameter for the Centre of Rotation in the `reconstruct()` function
- Added a script to generate a mask of stripe locations
- Added a script to apply the model to arbitrary tomographic scans
- Added Jupyter Notebooks for visualising results, RMSEs, residuals and graphs
- Added a tutorial on how to use the repository
- Updated documentation and removed old/unused code
- Minor bug fixes
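
A minimal sketch of the usual `DistributedDataParallel` setup referred to above. The model here is a placeholder `Conv2d`, not the repository's actual Generator, and the script name in the launch command is illustrative:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process it spawns
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; the real Generator would be constructed here instead
    model = torch.nn.Conv2d(1, 1, kernel_size=4, stride=2, padding=1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ...usual training loop; DDP synchronises gradients across all processes...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Such a script would be launched with, for example, `torchrun --nnodes=2 --nproc_per_node=4 train.py` to use four GPUs on each of two nodes.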
Version 2.2
Adds the ability to generate "patches" of sinograms, rather than whole sinograms.
Patches are of size (1801, 256) by default, and are not downsampled.
The motivation behind this is two-fold:
- The network only sees a smaller area around each stripe, allowing it to focus on important local information
- By splitting one sinogram into many patches, the network has more data to train on
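
As an illustration of the idea (not the repository's exact implementation), a sinogram can be cut into fixed-width patches along the detector axis. The 2560-column width below is made up; only the (1801, 256) patch shape comes from this release:

```python
import numpy as np

def split_into_patches(sino, patch_width=256):
    """Split an (angles, detectors) sinogram into (angles, patch_width) patches."""
    n_angles, n_dets = sino.shape
    n_patches = n_dets // patch_width  # any remainder columns are dropped here
    return [sino[:, i * patch_width:(i + 1) * patch_width] for i in range(n_patches)]

sino = np.random.rand(1801, 2560).astype(np.float32)  # dummy sinogram
patches = split_into_patches(sino)
print(len(patches), patches[0].shape)  # -> 10 (1801, 256)
```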
Changes
- Added a new mode of generating data: "patch"
- Added a new mode for training & testing: "patch"
- Added a new visualiser class to display patch-based data
- Added wandb support (see the sketch after this list)
- Updated Dataset classes so that they can process synthetic, real-life and patch-based directory structures
- Datasets are now shuffled before the train/validate/test split, as well as after
- Increased width of simulated stripes, and removed the expansion of mask widths
- Minor bug fixes
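
A minimal sketch of typical wandb usage; the project name, config keys and metric names below are placeholders, not necessarily what the training scripts actually log:

```python
import wandb

# mode="offline" avoids needing a wandb account for this sketch
run = wandb.init(project="NoStripesNet", mode="offline",
                 config={"lr": 2e-4, "batch_size": 16})
for epoch in range(5):
    loss = 1.0 / (epoch + 1)  # stand-in for the real training loss
    wandb.log({"epoch": epoch, "train/loss": loss})
run.finish()
```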
Version 2.1
This version includes a series of changes to make the project more readable and understandable.
Changes
- Added documentation to most functions
- Added installation instructions to `README.md` and moved old content to `PROJECT_DESCRIPTION.md`
- Improved the code in `simulator/realdata_loader.py` to make it neater, more modular, and easier to understand
- Added a description of parameters & options for both data generation and training/testing
- Improved & updated command-line option descriptions for data generation and training/testing
- Removed `from ... import *` from `utils/__init__.py` to hopefully speed up importing from `utils`
- Updated `simulateStripe()` from `simulator/data_simulator.py` to use a `stripe/` directory (rather than `shift00/`)
- Changed the name of the `real` mode in data generation to `paired`, to make its function clearer & distinguish it from other methods of processing real data
- `loadHDF()` from `utils/data_io.py` now only returns `data`, rather than the tuple `(data, angles)` (see the sketch after this list)
- Made all lines less than 80 characters long
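
A hedged sketch of what a loader like `loadHDF()` might look like after this change, using `h5py`. The dataset path `/entry1/tomo_entry/data/data` is a common NeXus layout used here only as an example, not necessarily the repository's default:

```python
import h5py
import numpy as np

def load_hdf(path, key="/entry1/tomo_entry/data/data"):
    """Return only the projection data; angles are no longer returned."""
    with h5py.File(path, "r") as f:
        data = np.asarray(f[key])
    return data
```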
Version 2.0
New Features
- Real-life data from HDF5 and NeXus files can now be processed and saved as TIFFs, so that a model can be trained on them
- There are currently three different ways of saving data:
- Raw: apply no post-processing methods, save the data exactly as it is on disk
- Paired: create an input/target pair for each sinogram in the data
- Dynamic (for dynamic experiments only): save each "frame" of a dynamic tomography scan
- New Stripe Detection method featuring Morphological Processing and Clustering
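
The release notes do not give details of the new detection method. As a generic illustration only (not the repository's algorithm), stripe-like artifacts can be located by thresholding each detector column's deviation from its local background and cleaning the result with a morphological opening; the threshold and structuring element below are arbitrary:

```python
import numpy as np
from scipy import ndimage

def detect_stripes(sino, threshold=3.0):
    """Return a boolean mask of detector columns that look like stripes."""
    col_means = sino.mean(axis=0)                      # average over angles
    smooth = ndimage.median_filter(col_means, size=9)  # local background estimate
    deviation = np.abs(col_means - smooth)
    mask = deviation > threshold * deviation.std()
    # opening with a 2-wide structure removes isolated single-column detections
    return ndimage.binary_opening(mask, structure=np.ones(2, dtype=bool))

sino = np.random.rand(1801, 256).astype(np.float32)
sino[:, 100:103] += 0.5                    # inject a synthetic stripe
print(np.where(detect_stripes(sino))[0])   # -> columns around 100-102
```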
Changes
- A new `utils` directory contains functions shared between different sub-modules
- Some small improvements to utils functions, including test metrics, rescaling, and plotting
- Some old dataset code has been deprecated; a new, more scalable dataset class has been created
Version 1.0
Features
- Simulated data can be generated using either simple stripe artifacts or more realistic flat fields
- A neural network can be trained on this dataset with the following modes:
- base: input whole sinograms, generate whole sinograms
- window: input windowed sinograms, generate windowed sinograms; windows are automatically created from inputs
- full: input whole sinograms, generate whole sinograms; whole sinograms are created by concatenating windows
- mask: input masked sinograms, generate masked sinograms; masks are automatically created from inputs
- simple: same as mask, but masks are created from both inputs and targets
- Neural networks can also be trained using LSGANs (rather than cGANs) with any of the above modes
- A trained network can then be tested, outputting the following test statistics:
- MAE, L2 norm, MSE, Gradient Difference, Dice Coefficient, IoU, Histogram Intersection, Structural Similarity and PSNR
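
For reference, two of these metrics (MSE and PSNR) can be computed as follows; this is a generic NumPy sketch, not the repository's implementation:

```python
import numpy as np

def mse(pred, target):
    return np.mean((pred - target) ** 2)

def psnr(pred, target, data_range=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE), with MAX the data range of the images
    return 10.0 * np.log10(data_range ** 2 / mse(pred, target))

target = np.random.rand(1801, 256)
pred = target + 0.01 * np.random.randn(1801, 256)
print(mse(pred, target), psnr(pred, target))
```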