Releases: FluxML/MLJFlux.jl
v0.6.0
MLJFlux v0.6.0
All models, except `ImageClassifier`, now support categorical features (presented as table columns with a `CategoricalVector` type). Rather than one-hot encoding, embeddings into a continuous space are learned (i.e., by adding an embedding layer) and the dimension of these spaces can be specified by the user, using a new dictionary-valued hyperparameter, `embedding_dims`. The learned embeddings are exposed by a new implementation of `transform`, which means they can be used with other models (transfer learning), as described in Cheng Guo and Felix Berkhahn (2016): Entity Embeddings of Categorical Variables.
Also, all continuous input presented to these models is now forced to be `Float32`, but this is the only breaking change.
Merged pull requests:
- Update docs (#265) (@ablaom)
- Introduce EntityEmbeddings (#267) (@EssamWisam)
- Fix `l2` loss in `MultitargetNeuralNetworkRegressor` docstring (#270) (@ablaom)
- Automatically convert input matrix to `Float32` (#272) (@tiemvanderdeure)
- Force `Float32` as type presented to Flux chains (#276) (@ablaom)
- For a 0.6.0 release (#277) (@ablaom)
v0.5.1
v0.5.0
MLJFlux v0.5.0
- (new model) Add `NeuralNetworkBinaryClassifier`, an optimised form of `NeuralNetworkClassifier` for the special case of two target classes. Uses `Flux.σ` instead of `softmax` for the default finaliser (#248); see the construction sketch after this list
- (internals) Switch from implicit to explicit differentiation (#251)
- (breaking) Use optimisers from Optimisers.jl instead of Flux.jl (#251). Note that the new optimisers are immutable.
- (RNG changes.) Change the default value of the model field `rng` from `Random.GLOBAL_RNG` to `Random.default_rng()`. Change the seeded RNG, obtained by specifying an integer value for `rng`, from `MersenneTwister` to `Xoshiro` (#251)
- (RNG changes.) Update the `Short` builder so that the `rng` argument of `build(::Short, rng, ...)` is passed on to the `Dropout` layer, as these layers now support this on a GPU, at least for `rng=Random.default_rng()` (#251)
- (weakly breaking) Change the implementation of L1/L2 regularization from explicit loss penalization to weight/sign decay (internally chained with the user-specified optimiser). The only breakage for users is that the losses reported in the history will no longer be penalized, because the penalty is not explicitly computed (#251)
Merged pull requests:
- Fix metalhead breakage (#250) (@ablaom)
- Omnibus PR, including switch to explicit style differentiation (#251) (@ablaom)
- 🚀 Instate documentation for MLJFlux (#252) (@EssamWisam)
- Update examples/MNIST Manifest, including Julia 1.10 (#254) (@ablaom)
- ✨ Add 7 workflow examples for MLJFlux (#256) (@EssamWisam)
- Add binary classifier (#257) (@ablaom)
- For a 0.5.0 release (#259) (@ablaom)
- Add check that Flux optimiser is not being used (#260) (@ablaom)
v0.4.0
MLJFlux v0.4.0
Merged pull requests:
- Bump Metalhead to 0.9 - making CUDA optional (#240) (@mohamed82008)
- For a 0.4.0 release (#243) (@ablaom)
v0.3.1
v0.3.0
MLJFlux v0.3.0
v0.2.10
v0.2.9
MLJFlux v0.2.9
- (bug fix) Address improper scaling of l2/l1 loss penalty with batch size (#213) (@mohamed82008)
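For illustration only (this is not MLJFlux's internal code): the fix amounts to scaling the regularization penalty by the reciprocal of the number of batches, so that summing the penalized batch losses over one epoch contributes a single full penalty rather than `n_batches` copies of it. The function name below is hypothetical.

```julia
# Hypothetical sketch of the corrected scaling: each batch contributes
# penalty/n_batches, so an epoch of n_batches batches adds one full penalty.
penalized_batch_loss(batch_loss, penalty, n_batches) = batch_loss + penalty / n_batches
```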
Closed issues:
- Penalty (wrongly) not multiplied by the relative batch size (#213)
Merged pull requests:
- Divide penalty by n_batches (#214) (@mohamed82008)
- For a 0.2.9 release (#215) (@ablaom)
v0.2.8
v0.2.7
MLJFlux v0.2.7
Merged pull requests: