Releases · tensorflow/adanet
AdaNet v0.9.0
- Drop support for TensorFlow 1.x. Only TensorFlow >= 2.1 is supported.
- Drop support for Python 2. Only Python >= 3.6 is supported.
- Preserved the outputs in the `PredictionOutput` that are not in the `best_export_outputs`.
- Add `warm_start` support to adanet `Estimator`s (see the sketch after this list).
- Added support for predicting/serving on TPU.
- Introduce support for `AutoEnsembleTPUEstimator`.
- Introduce experimental `adanet.experimental` Keras ModelFlow APIs.
- Replace reports.proto with simple serialized JSON. No longer have proto dependencies.
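
For illustration, a minimal sketch of wiring up warm starting on an `adanet.Estimator`. The `warm_start_from` keyword is an assumption that mirrors `tf.estimator.Estimator`'s warm-start API (check the adanet docs for the exact argument name in your version), and the subnetwork generator and input function are placeholders for user code.

```python
import adanet
import tensorflow as tf

# Placeholders: any adanet.subnetwork.Generator and Estimator-style input_fn
# defined elsewhere in your code.
subnetwork_generator = MySubnetworkGenerator()  # hypothetical generator

estimator = adanet.Estimator(
    head=tf.estimator.BinaryClassHead(),
    subnetwork_generator=subnetwork_generator,
    max_iteration_steps=1000,
    model_dir="/tmp/adanet/run2",
    # Assumed keyword, mirroring tf.estimator.Estimator's warm-start support.
    warm_start_from="/tmp/adanet/run1",
)

estimator.train(input_fn=train_input_fn, max_steps=5000)  # train_input_fn defined elsewhere
```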
AdaNet v0.8.0
- Add support for TensorFlow 2.0.
- Begin developing experimental Keras API for auto-ensembling.
- Support advanced subnetworks and subestimators that need to read and write from disk by giving them a dedicated subdirectory in `model_dir`.
- Fix race condition in parallel evaluation during distributed training.
- Support subnetwork hooks requesting early stopping.
- Add AdaNet replay: the ability to rerun training without having to determine the best candidate for each iteration. A list of best indices from the previous run is provided and honored by AdaNet.
- Introduced `adanet.ensemble.MeanEnsembler` with a basic implementation for taking the mean of logits of subnetworks. This also supports including the mean of `last_layer` (helpful if subnetworks have the same configuration) in the `predictions` and `export_outputs` of the `EstimatorSpec`.
- BREAKING CHANGE: AdaNet now supports arbitrary metrics when choosing the best ensemble. To achieve this, the interface of `adanet.Evaluator` is changing. The `Evaluator.evaluate_adanet_losses(sess, adanet_losses)` function is being replaced with `Evaluator.evaluate(sess, ensemble_metrics)`. The `ensemble_metrics` parameter contains all computed metrics for each candidate ensemble as well as the `adanet_loss`. Code which overrides `evaluate_adanet_losses` must migrate over to use the new `evaluate` method (we suspect that such cases are very rare).
- Allow the user to specify a maximum number of AdaNet iterations.
- BREAKING CHANGE: When supplied, run the `adanet.Evaluator` before `Estimator#evaluate`, `Estimator#predict`, and `Estimator#export_saved_model`. This can have the effect of changing the best candidate chosen at the final round. When the user passes an Evaluator, we run it to establish the best candidate during evaluation, predict, and export_saved_model; previously these used the `adanet_loss` moving average collected during training. While the previous ensemble would have been established by the Evaluator, the current set of candidate ensembles that were not done training would be considered according to the `adanet_loss`. Now when a user passes an Evaluator that, for example, uses a hold-out set, AdaNet runs it before making predictions or exporting a SavedModel, so that the best new candidate is chosen according to the hold-out set (see the sketch after this list).
- Support `tf.keras.metrics.Metric`s during evaluation.
- Allow users to disable summaries to reduce memory and disk footprint.
- Stop individual subnetwork training on `OutOfRangeError` raised during bagging.
- Train forever if `max_steps` and `steps` are both `None`.
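
To make the Evaluator behavior concrete, here is a minimal sketch of supplying a hold-out-set `adanet.Evaluator` so that the candidate served by `predict` and `export_saved_model` is the hold-out winner. The `input_fn`/`steps` and `evaluator=` arguments follow the public API, while the hold-out data and subnetwork generator are placeholders.

```python
import adanet
import tensorflow as tf


def holdout_input_fn():
  # Placeholder hold-out split; replace with your own evaluation data.
  features = {"x": tf.random.normal([256, 10])}
  labels = tf.zeros([256, 1], dtype=tf.int32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)


estimator = adanet.Estimator(
    head=tf.estimator.BinaryClassHead(),
    subnetwork_generator=my_generator,  # placeholder: an adanet.subnetwork.Generator
    max_iteration_steps=1000,
    # With an Evaluator supplied, AdaNet now runs it before evaluate/predict/
    # export_saved_model, so the served ensemble is the hold-out-set winner.
    evaluator=adanet.Evaluator(input_fn=holdout_input_fn, steps=8),
)
```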
AdaNet v0.7.0
- Add embeddings support on TPU via `TPUEmbedding`.
- Train the current iteration forever when `max_iteration_steps=None`.
- Introduce `adanet.AutoEnsembleSubestimator` for training subestimators on different training data partitions and implementing ensemble methods like bootstrap aggregating (a.k.a. bagging); see the sketch after this list.
- Fix bug when using Gradient Boosted Decision Tree Estimators with `AutoEnsembleEstimator` during distributed training.
- Allow `AutoEnsembleEstimator`'s `candidate_pool` argument to be a `lambda` in order to create `Estimator`s lazily.
- Remove `adanet.subnetwork.Builder#prune_previous_ensemble` from the abstract class. This behavior is now specified using `adanet.ensemble.Strategy` subclasses.
- BREAKING CHANGE: Only support TensorFlow >= 1.14 to better support TensorFlow 2.0. Drop support for versions < 1.14.
- Correct eval metric computations on CPU and GPU.
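
A sketch combining these two additions: a lazy `lambda` `candidate_pool` whose entries are `adanet.AutoEnsembleSubestimator`s, each training on its own data partition (bagging). The constructor arguments follow the documented API, but the partition input functions are placeholders and the head is shown in its TF 2-style form for brevity.

```python
import adanet
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[10])]
head = tf.estimator.BinaryClassHead()  # on TF 1.14, use the equivalent head API

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    # The lambda defers Estimator construction until AdaNet needs the candidates
    # and receives the RunConfig to pass through to each one.
    candidate_pool=lambda config: {
        "dnn_bag_0": adanet.AutoEnsembleSubestimator(
            tf.estimator.DNNEstimator(head=head, feature_columns=feature_columns,
                                      hidden_units=[64], config=config),
            train_input_fn=bag_0_input_fn),  # placeholder: partition 0 of the data
        "dnn_bag_1": adanet.AutoEnsembleSubestimator(
            tf.estimator.DNNEstimator(head=head, feature_columns=feature_columns,
                                      hidden_units=[64], config=config),
            train_input_fn=bag_1_input_fn),  # placeholder: partition 1 of the data
    },
    max_iteration_steps=500,
)
```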
AdaNet v0.6.2
- Fix n+1 global-step increment bug in `adanet.AutoEnsembleEstimator`. This bug incremented the `global_step` by n+1 for n canned `Estimator`s like `DNNEstimator`.
AdaNet v0.6.1
- Maintain compatibility with TensorFlow versions >=1.9.
AdaNet v0.6.0
- Officially support AdaNet on TPU using `adanet.TPUEstimator` with `adanet.Estimator` feature parity.
- Support dictionary candidate pools in the `adanet.AutoEnsembleEstimator` constructor to specify human-readable candidate names (see the sketch after this list).
- Improve `AutoEnsembleEstimator`'s ability to handle custom `tf.estimator.Estimator` subclasses.
- Introduce `adanet.ensemble`, which contains interfaces and examples of ways to learn ensembles using AdaNet. Users can now extend AdaNet to use custom ensemble-learning methods.
- Record TensorBoard `scalar`, `image`, `histogram`, and `audio` summaries on TPU during training.
- Add debug mode to help detect NaNs and Infs during training.
- Improve subnetwork `tf.train.SessionRunHook` support to handle more edge cases.
- Maintain compatibility with TensorFlow versions 1.9 through 1.13. (Initially this only worked for TensorFlow >= 1.13; fixed in AdaNet v0.6.1.)
- Improve documentation, including adding 'Getting Started' documentation to adanet.readthedocs.io.
- BREAKING CHANGE: Importing the `adanet.subnetwork` package using `from adanet.core import subnetwork` will no longer work, because the package was moved to the `adanet/subnetwork` directory. Most users should already be using `adanet.subnetwork` or `from adanet import subnetwork`, and should not be affected.
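
For reference, a minimal sketch of a dictionary candidate pool: the keys become the human-readable candidate names that appear in logs and TensorBoard. The head construction below uses the `tf.contrib.estimator` API of the TF 1.9-1.13 versions this release targeted; treat the exact canned-Estimator constructors as era-dependent.

```python
import adanet
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=[10])]
head = tf.contrib.estimator.binary_classification_head()  # TF 1.x-era head

# Dictionary keys name the candidates instead of auto-generated class names.
estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool={
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns),
        "wide_dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns, hidden_units=[128, 64]),
    },
    max_iteration_steps=500,
)
```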
AdaNet v0.5.0
- Support training on TPU using `adanet.TPUEstimator`.
- Allow subnetworks to specify `tf.train.SessionRunHook` instances for training with `adanet.subnetwork.TrainOpSpec` (see the sketch after this list).
- Add API documentation generation with Sphinx.
- Fix bug preventing subnetworks with Resource variables from working beyond the first iteration.
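
A sketch of the `TrainOpSpec` hook support inside a custom `adanet.subnetwork.Builder`. Only the train-op method is shown (the `name` property, `build_subnetwork`, and the other required methods are omitted), and the method signature follows the current `adanet.subnetwork.Builder` docs, so it may differ slightly in older versions.

```python
import adanet
import tensorflow as tf


class _DNNBuilder(adanet.subnetwork.Builder):
  """Sketch: other required Builder methods omitted for brevity."""

  def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                                iteration_step, summary, previous_ensemble):
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss, var_list=var_list)
    # Per-subnetwork hooks run only while this subnetwork is training.
    return adanet.subnetwork.TrainOpSpec(
        train_op=train_op,
        chief_hooks=None,
        hooks=[tf.train.StepCounterHook(every_n_steps=100)],
    )
```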
AdaNet v0.4.0
- Add `shared` field to `adanet.Subnetwork` to deprecate, replace, and be more flexible than `persisted_tensors` (see the sketch after this list).
- Officially support multi-head learning with or without dict labels.
- Rebuild the ensemble across iterations in Python without a frozen graph. This allows users to share more than `Tensor`s between iterations, including Python primitives, objects, and lambdas, for greater flexibility. Eliminating reliance on a `MetaGraphDef` proto also eliminates I/O, allowing for faster training and better future-proofing.
- Allow users to pass custom eval metrics when constructing an `adanet.Estimator`.
- Add `adanet.AutoEnsembleEstimator` for learning to ensemble `tf.estimator.Estimator` instances.
- Pass labels to `adanet.subnetwork.Builder`'s `build_subnetwork` method.
- The TRAINABLE_VARIABLES collection will only contain variables relevant to the current `adanet.subnetwork.Builder`, so not passing `var_list` to `optimizer.minimize` will lead to the same behavior as passing it in by default.
- Using `tf.summary` inside `adanet.subnetwork.Builder` is now equivalent to using the `adanet.Summary` object.
- Accessing the `global_step` from within an `adanet.subnetwork.Builder` will return the `iteration_step` variable instead, so that the step starts at zero at the beginning of each iteration. One subnetwork incrementing the step will not affect other subnetworks.
- Summaries will automatically scope themselves to the current subnetwork's scope. Similar summaries will now be correctly grouped together across subnetworks in TensorBoard. This eliminates the need for the `tf.name_scope("")` hack.
- Provide an override to force the AdaNet ensemble to grow at the end of each iteration.
- Correctly seed the TensorFlow graph between iterations. This breaks some tests that check the outputs of `adanet.Estimator` models.
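
A sketch of the new `shared` field in a custom `build_subnetwork`: because the ensemble is now rebuilt in Python each iteration, a plain Python value (here a layer count) can be handed to the next iteration through `shared` and read back from `previous_ensemble`. The feature key, attribute paths, and method signature are taken from the adanet examples of this era and should be treated as approximate.

```python
import adanet
import tensorflow as tf


class _GrowingDNNBuilder(adanet.subnetwork.Builder):
  """Sketch: grows one layer deeper each iteration; other methods omitted."""

  def build_subnetwork(self, features, logits_dimension, training,
                       iteration_step, summary, previous_ensemble=None):
    # Read a plain Python int shared by the previous iteration's subnetwork.
    depth = 1
    if previous_ensemble:
      depth = previous_ensemble.weighted_subnetworks[
          -1].subnetwork.shared["depth"] + 1

    x = tf.keras.layers.Flatten()(features["x"])  # assumes a feature named "x"
    for _ in range(depth):
      x = tf.keras.layers.Dense(64, activation="relu")(x)
    logits = tf.keras.layers.Dense(logits_dimension)(x)

    return adanet.Subnetwork(
        last_layer=x,
        logits=logits,
        complexity=tf.constant(depth, dtype=tf.float32),
        # `shared` replaces `persisted_tensors` and may hold Python primitives.
        shared={"depth": depth},
    )
```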
AdaNet v0.3.0
- Add official support for `tf.keras.layers`.
- Fix bug that incorrectly pruned colocation constraints between iterations.
AdaNet v0.2.0
- Estimator no longer creates eval metric ops in train mode.
- Freezer no longer converts Variables to constants, allowing AdaNet to handle Variables larger than 2GB.
- Fixes some errors with Python 3.