Releases: qiskit-community/qiskit-machine-learning
Qiskit Machine Learning 0.3.0
Changelog
New Features
- Addition of a QuantumKernelTrainer object, which kernel-based machine learning algorithms can use to optimize QuantumKernel parameters before training the model. Addition of a new base class, KernelLoss, in the loss_functions package, and of a new KernelLoss subclass, SVCLoss.
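  A minimal sketch of how these pieces might fit together follows; the exact call sequence is an assumption (in particular `loss="svc_loss"` and the `fit()` method, which may be named `fit_kernel` in this release), not a verified API reference.

  ```python
  import numpy as np
  from qiskit import BasicAer
  from qiskit.circuit import Parameter
  from qiskit.circuit.library import ZZFeatureMap
  from qiskit.utils import QuantumInstance
  from qiskit_machine_learning.kernels import QuantumKernel
  from qiskit_machine_learning.kernels.algorithms import QuantumKernelTrainer

  theta = Parameter("θ")
  fm = ZZFeatureMap(2)
  fm.ry(theta, 0)  # add one trainable rotation to the feature map

  qkernel = QuantumKernel(
      feature_map=fm,
      user_parameters=[theta],
      quantum_instance=QuantumInstance(BasicAer.get_backend("statevector_simulator")),
  )

  X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
  y = np.array([0, 0, 1, 1])

  trainer = QuantumKernelTrainer(quantum_kernel=qkernel, loss="svc_loss")
  result = trainer.fit(X, y)              # assumed name; optimizes θ against the SVC loss
  trained_kernel = result.quantum_kernel  # ready for use in e.g. QSVC
  ```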
- The class TrainableModel and its subclasses NeuralNetworkClassifier, NeuralNetworkRegressor, VQR, and VQC have a new optional argument callback, which defaults to None. Users can provide a callback function that accesses intermediate training data to track the optimization process. The callback takes two parameters: the current weights for the objective function and the computed objective value. On each iteration the optimizer invokes the callback and passes these values.
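  A minimal sketch of such a callback, here simply recording the objective value per iteration (the VQC construction in the comment is illustrative):

  ```python
  objective_values = []

  def log_step(weights, obj_value):
      # Invoked by the optimizer on each iteration with the current weights
      # and the computed value of the objective function.
      objective_values.append(obj_value)
      print(f"iteration {len(objective_values)}: objective = {obj_value:.4f}")

  # e.g.: vqc = VQC(num_qubits=2, callback=log_step); vqc.fit(X, y)
  ```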
- Classification models (i.e. models that extend the NeuralNetworkClassifier class, like VQC) can now handle categorical target data in methods like fit() and score(). Categorical data is inferred from the presence of string-type data and is automatically encoded using either one-hot or integer encoding. The encoder type is determined by the one_hot argument supplied when instantiating the model.
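  For instance, string labels can now be passed directly. Whether this toy model learns anything is beside the point; the sketch (with an assumed, illustrative network setup) only shows string targets being accepted and encoded automatically:

  ```python
  import numpy as np
  from qiskit import BasicAer
  from qiskit.utils import QuantumInstance
  from qiskit_machine_learning.algorithms import NeuralNetworkClassifier
  from qiskit_machine_learning.neural_networks import TwoLayerQNN

  qi = QuantumInstance(BasicAer.get_backend("statevector_simulator"))
  qnn = TwoLayerQNN(num_qubits=2, quantum_instance=qi)
  clf = NeuralNetworkClassifier(qnn, one_hot=False)  # integer-encode string labels

  X = np.array([[0.1, 0.4], [0.8, 0.2], [0.5, 0.9], [0.9, 0.1]])
  y = np.array(["cat", "dog", "cat", "dog"])  # categorical targets as strings
  clf.fit(X, y)                               # labels are encoded internally
  print(clf.score(X, y))
  ```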
- An additional transpilation step was introduced in CircuitQNN, invoked when a quantum instance is set. A circuit passed to CircuitQNN is transpiled once and saved for subsequent use, so every execution runs an already-transpiled circuit and the overall time of the forward pass is reduced. Due to implementation limitations of RawFeatureVector, it cannot be transpiled in advance; it is transpiled each time it is executed, and only once all parameters are bound, so overall performance when RawFeatureVector is used stays the same.
- Introduced a new classification algorithm, an alternative version of the Quantum Support Vector Classifier (QSVC) that is trained via the Pegasos algorithm from https://home.ttic.edu/~nati/Publications/PegasosMPB.pdf instead of the dual optimization problem used in sklearn. This algorithm has a training complexity that is independent of the size of the training set (see the to-be-published Master's Thesis "Comparing Quantum Neural Networks and Quantum Support Vector Machines" by Arne Thomsen), so PegasosQSVC is expected to train faster than QSVC for sufficiently large training sets.
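  A rough usage sketch, assuming the constructor arguments shown (quantum_kernel, C, num_steps); these names follow common kernel-method conventions and are assumptions, not guaranteed by this note:

  ```python
  import numpy as np
  from qiskit import BasicAer
  from qiskit.circuit.library import ZZFeatureMap
  from qiskit.utils import QuantumInstance
  from qiskit_machine_learning.algorithms import PegasosQSVC
  from qiskit_machine_learning.kernels import QuantumKernel

  qkernel = QuantumKernel(
      feature_map=ZZFeatureMap(2),
      quantum_instance=QuantumInstance(BasicAer.get_backend("statevector_simulator")),
  )

  # C: regularization strength; num_steps: number of Pegasos iterations,
  # chosen independently of the training set size.
  pegasos = PegasosQSVC(quantum_kernel=qkernel, C=1000, num_steps=100)
  X = np.array([[0.2, 0.1], [0.8, 0.7], [0.1, 0.3], [0.9, 0.8]])
  y = np.array([0, 1, 0, 1])
  pegasos.fit(X, y)
  print(pegasos.predict(X))
  ```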
- QuantumKernel transpiles all circuits before execution. However, this information was not being passed on, so the transpiler was invoked many times during the execution of the QSVC/QSVR algorithms. Now had_transpiled=True is passed correctly and the algorithms run faster.
- QuantumKernel now provides an interface for users to specify a new class field, user_parameters. User parameters are an array of Parameter objects corresponding to parameterized quantum gates in the feature map circuit that the user wishes to tune. This is useful in algorithms where feature map parameters must be bound and re-bound many times (i.e. variational algorithms). Users may also use a new function, assign_user_parameters, to assign real values to some or all of the user parameters in the feature map.
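  A short sketch of the new interface; the dict-style argument to assign_user_parameters is an assumption about the accepted input format:

  ```python
  from qiskit.circuit import Parameter
  from qiskit.circuit.library import ZZFeatureMap
  from qiskit_machine_learning.kernels import QuantumKernel

  theta = Parameter("θ")
  fm = ZZFeatureMap(2)
  fm.ry(theta, 0)  # θ is a tunable gate parameter, not a data parameter

  qkernel = QuantumKernel(feature_map=fm, user_parameters=[theta])
  qkernel.assign_user_parameters({theta: 0.5})  # bind θ; data parameters stay free
  ```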
- Introduced the TorchRuntimeClient for faster training of a quantum model or a hybrid quantum-classical model using Qiskit Runtime. It can also be used to predict results with the trained model, or to calculate the score of the trained model, faster via Qiskit Runtime.
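  A heavily hedged sketch of what a training run might look like; the constructor and method names below (loss_func, fit, predict) are assumptions based on this description, and the placeholders must be supplied by the user:

  ```python
  from qiskit_machine_learning.runtime import TorchRuntimeClient

  client = TorchRuntimeClient(
      model=hybrid_model,   # placeholder: a torch.nn.Module wrapping a TorchConnector
      optimizer=optimizer,  # placeholder: a torch optimizer over the model parameters
      loss_func=loss_func,  # placeholder: e.g. torch.nn.MSELoss()
      epochs=5,
      provider=provider,    # placeholder: an IBM provider with Runtime access
      backend=backend,      # placeholder: the backend to run on
  )
  fit_result = client.fit(train_loader)      # train remotely via Qiskit Runtime
  predictions = client.predict(test_loader)  # inference with the trained model
  ```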
Known Issues
- If positional arguments are passed into QSVR or QSVC and these classes are printed, an exception is raised.
Deprecation Notes
- Positional arguments in QSVR and QSVC are deprecated.
Bug Fixes
- Fixed a bug in QuantumKernel where, for the statevector simulator, all circuits were constructed and transpiled at once, leading to high memory usage. Now the circuits are batched similarly to how it was previously done for non-statevector simulators (the same flag is used for both now; previously batch_size was silently ignored by the statevector simulator).
- Fixed a bug where TorchConnector failed on backward-pass computation due to empty parameters for inputs or weights. Validation was added to qiskit_machine_learning.neural_networks.NeuralNetwork._validate_backward_output().
- TwoLayerQNN now passes the value of the exp_val parameter in its constructor to the constructor of OpflowQNN, which TwoLayerQNN inherits from.
- In some configurations the forward pass of a neural network could return the same value across multiple calls even when different weights were passed; this behavior was confirmed with the AQGD optimizer. It was caused by a bug in the implementation of the objective functions, which cache the value obtained in the forward pass for re-use in the backward pass. Initially this cache was keyed on the identity of the weights array (a call to the id() function). AQGD re-uses the same array for weights: it updates the values while keeping the array instance the same, which caused the same forward-pass value to be re-used across all iterations. The forward-pass cache is now keyed on the actual values of the weights instead of their identity.
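  The pitfall is easy to reproduce in plain Python/NumPy; this self-contained illustration (not the library's actual code) shows why an id()-keyed cache breaks under in-place updates:

  ```python
  import numpy as np

  cache = {}

  def forward(weights):
      key = id(weights)  # old, broken scheme: identity-based cache key
      if key not in cache:
          cache[key] = float(np.sum(weights ** 2))
      return cache[key]

  w = np.array([1.0, 2.0])
  print(forward(w))   # 5.0, computed and cached
  w[:] = [3.0, 4.0]   # in-place update, as AQGD does: id(w) is unchanged
  print(forward(w))   # still 5.0, a stale cache hit; keying on values fixes this
  ```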
- Fixed a bug where qiskit_machine_learning.circuit.library.RawFeatureVector.copy() didn't copy all internal settings, which could lead to issues with the copied circuit. As a consequence, qiskit_machine_learning.circuit.library.RawFeatureVector.bind_parameters() is also fixed.
- Fixed a bug where VQC could not be instantiated unless either feature_map or ansatz was provided (#217). VQC is now instantiated with the default feature_map and/or ansatz.
- The QNN weight parameter in TorchConnector is now registered in the torch DAG as weight instead of _weights. This is consistent with the PyTorch naming convention and with the weight property used to access the computed weights.
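  A small sketch to check the registered name from PyTorch's side (the network construction is illustrative):

  ```python
  from qiskit import BasicAer
  from qiskit.utils import QuantumInstance
  from qiskit_machine_learning.connectors import TorchConnector
  from qiskit_machine_learning.neural_networks import TwoLayerQNN

  qi = QuantumInstance(BasicAer.get_backend("statevector_simulator"))
  qnn = TwoLayerQNN(num_qubits=2, quantum_instance=qi)
  model = TorchConnector(qnn)
  print([name for name, _ in model.named_parameters()])  # ['weight'], not ['_weights']
  ```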
Qiskit Machine Learning 0.2.1
Changelog
Added
- The class TrainableModel and its subclasses NeuralNetworkClassifier, NeuralNetworkRegressor, VQR, and VQC have a new optional argument callback, which defaults to None. Users can provide a callback function that accesses intermediate training data to track the optimization process. The callback takes two parameters: the current weights for the objective function and the computed objective value. On each iteration the optimizer invokes the callback and passes these values.
- Classification models (i.e. models that extend the NeuralNetworkClassifier class, like VQC) can now handle categorical target data in methods like fit() and score(). Categorical data is inferred from the presence of string-type data and is automatically encoded using either one-hot or integer encoding. The encoder type is determined by the one_hot argument supplied when instantiating the model.
Fixed
- Fixed a bug where qiskit_machine_learning.circuit.library.RawFeatureVector.copy() didn't copy all internal settings, which could lead to issues with the copied circuit. As a consequence, qiskit_machine_learning.circuit.library.RawFeatureVector.bind_parameters() is also fixed.
- The QNN weight parameter in TorchConnector is now registered in the torch DAG as weight instead of _weights. This is consistent with the PyTorch naming convention and with the weight property used to access the computed weights.
Qiskit Machine Learning 0.2.0
Changelog
Added
- A base class TrainableModel is introduced for machine learning models. This class follows Scikit-Learn principles and makes quantum machine learning compatible with classical models. Both NeuralNetworkClassifier and NeuralNetworkRegressor extend this class. A base class ObjectiveFunction is introduced for objective functions optimized by machine learning models, along with three concrete subclasses used internally by the models: BinaryObjectiveFunction, MultiClassObjectiveFunction, and OneHotObjectiveFunction.
- The optimizer argument for the classes NeuralNetworkClassifier and NeuralNetworkRegressor, both of which extend the TrainableModel class, is made optional with the default value being SLSQP(). The same is true for the classes VQC and VQR, as they inherit from NeuralNetworkClassifier and NeuralNetworkRegressor respectively.
- The constructor of NeuralNetwork, and of all classes that inherit from it, has a new parameter input_gradients, which defaults to False. Previously this parameter could only be set via the setter method. Note that TorchConnector previously set input_gradients of the NeuralNetwork it was instantiated with to True. This is no longer the case: if you use TorchConnector and want to compute the gradients w.r.t. the input, make sure you set input_gradients=True on the NeuralNetwork before passing it to TorchConnector, as sketched below.
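  A minimal sketch of the new requirement, using the input_gradients setter rather than assuming a constructor keyword:

  ```python
  from qiskit import BasicAer
  from qiskit.utils import QuantumInstance
  from qiskit_machine_learning.connectors import TorchConnector
  from qiskit_machine_learning.neural_networks import TwoLayerQNN

  qi = QuantumInstance(BasicAer.get_backend("statevector_simulator"))
  qnn = TwoLayerQNN(num_qubits=2, quantum_instance=qi)
  qnn.input_gradients = True   # must now be set explicitly before wrapping
  model = TorchConnector(qnn)  # gradients w.r.t. inputs then flow through torch
  ```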
- Added a parameter initial_point to the neural network classifiers and regressors. This is an array that is passed to the optimizer as the initial point to start from.
- Computation of gradients with respect to input data in the backward method of NeuralNetwork is now optional. By default gradients are not computed. They may be inspected and turned on, if required, by getting or setting the new input_gradients property of the NeuralNetwork class.
- NeuralNetworkClassifier now extends ClassifierMixin and NeuralNetworkRegressor extends RegressorMixin from Scikit-Learn, and both rely on those mixins' methods for score calculation. This also adds the ability to pass sample weights as an optional parameter to the score methods.
Changed
- The valid values passed to the loss argument of the TrainableModel constructor were partially deprecated: loss='l1' is replaced with loss='absolute_error' and loss='l2' is replaced with loss='squared_error'. This affects the instantiation of classes like NeuralNetworkClassifier. The change was made to reduce confusion stemming from the lowercase 'l' character, which can be mistaken for the numeral '1' or a capital 'I'. Update your model instantiations by replacing 'l1' with 'absolute_error' and 'l2' with 'squared_error'.
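  In practice the update is a one-word change (qnn here stands for any NeuralNetwork instance, e.g. a TwoLayerQNN as in the sketches above):

  ```python
  from qiskit_machine_learning.algorithms import NeuralNetworkRegressor

  # Deprecated spelling:  NeuralNetworkRegressor(qnn, loss="l2")
  reg = NeuralNetworkRegressor(qnn, loss="squared_error")  # preferred spelling
  ```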
- The weights property in TorchConnector is deprecated in favor of the weight property, which is PyTorch compatible. By default, PyTorch layers expose a weight property to access the computed weights.
Fixed
- Fixed an exception that occurred when no optimizer argument was passed to NeuralNetworkClassifier and NeuralNetworkRegressor.
- Fixed the computation of gradients in TorchConnector when a batch of input samples is provided.
- TorchConnector now returns the correct input gradient dimensions during the backward pass in hybrid neural network training.
- Added dedicated handling of ComposedOp as an operator in OpflowQNN. In this case, the output shape is determined from the first operator in the ComposedOp instance.
- Fixed the dimensions of the gradient in the quantum generator for qGAN training.