
Releases: qiskit-community/qiskit-machine-learning

Qiskit Machine Learning 0.8.1

09 Dec 11:51
ab822c1

New Features

  • Enhanced tutorials and documentation for V2 primitives, including a migration guide.

  • Extended support for V2 primitives across various quantum machine learning algorithms including VQC, VQR, QSVC, QSVR, and QBayesian. If no primitive is provided, these algorithms will default to using V1 primitives as a fallback for this release. A warning is now issued to inform users of this default behavior.
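
A minimal sketch of opting in to V2 explicitly (assuming the reference StatevectorSampler as the V2 primitive):

from qiskit.primitives import StatevectorSampler  # reference SamplerV2 implementation
from qiskit_machine_learning.algorithms import VQC

# Passing the sampler selects V2; omitting it falls back to V1 with a warning.
vqc = VQC(num_qubits=2, sampler=StatevectorSampler())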

  • Added partial multi-class support for VQC. This feature is enabled when the output_shape parameter is set to num_classes and an interpret function is defined, allowing for multi-class classification tasks.

  • PegasosQSVC and the algorithms derived from the NeuralNetworkClassifier module now support the predict_proba method, which can be used in the same way as in other scikit-learn-based algorithms.
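
A minimal usage sketch in the scikit-learn style (toy data from make_blobs; kernel defaults assumed):

from sklearn.datasets import make_blobs
from qiskit_machine_learning.algorithms import PegasosQSVC

features, labels = make_blobs(n_samples=20, centers=2, random_state=0)
model = PegasosQSVC()                   # defaults to a fidelity quantum kernel
model.fit(features, labels)
proba = model.predict_proba(features)   # one column of probabilities per class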

  • The ADAM class now supports a callback function. This feature allows users to pass a custom callback function that will be called with information at each iteration step during the optimization process. The information passed to the callback includes the current time step, the parameters, and the function value. The callback function should be of the type Callable[[int, Union[float, np.ndarray], float], None]. Example of a callback function:

def callback(iteration: int, weights: np.ndarray, loss: float) -> None:
    # `calculate_accuracy` is a hypothetical user-defined helper, e.g. one
    # that scores the current weights on a validation set
    acc = calculate_accuracy(weights)
    print(f"Iteration {iteration}: loss={loss:.4f}, accuracy={acc:.4f}")
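
A minimal sketch of wiring the callback into the optimizer (assuming the callback keyword on the ADAM constructor):

from qiskit_machine_learning.optimizers import ADAM

# The optimizer invokes callback(iteration, weights, loss) at each step.
adam = ADAM(maxiter=100, callback=callback)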

Qiskit Machine Learning 0.8.0

11 Nov 14:03
512fc44

Prelude

From this release, Qiskit Machine Learning requires Qiskit 1.0 or above. The release brings important changes and upgrades, such as the introduction of Quantum Bayesian inference and the migration of a subset of Qiskit Algorithms features to Qiskit Machine Learning. These changes are part of building full compatibility with the version-2 (V2) Qiskit primitives, available from version 0.8 of Qiskit Machine Learning. V1 primitives are deprecated and will be removed in version 0.9 (please find more information below).

New Features

1. Quantum Bayesian inference

We introduced a new class qiskit_machine_learning.algorithms.QBayesian which implements quantum Bayesian inference on a quantum circuit representing a Bayesian network with binary random variables.

The computational complexity is reduced from $\mathcal{O}(nmP(e)^{-1})$ to $\mathcal{O}(n\ 2^{m}P(e)^{-\frac{1}{2}})$ per sample, where $n$ is the number of nodes in the Bayesian network with at most $m$ parents per node and $e$ is the evidence. At minimum, a quantum circuit that represents the Bayesian network must be provided. The circuit can be passed in various forms, as long as it represents the joint probability distribution of the Bayesian network. Note that QBayesian defines an order for the qubits in the circuit: the last qubit in the circuit corresponds to the most significant bit in the joint probability distribution. For example, if the random variables A, B, and C are entered into the circuit in this order with $(A=1, B=0, C=0)$, the probability is represented by the probability amplitude of quantum state $001$.

An example of using this class is as follows:

from qiskit import QuantumCircuit
from qiskit_machine_learning.algorithms import QBayesian

# Define a quantum circuit
qc = QuantumCircuit(...)

# Initialize the framework
qb = QBayesian(qc)

# Perform inference
result = qb.inference(query={...}, evidence={...})

print("Probability of query given evidence:", result)

You may refer to the QBI tutorial, which describes a step-by-step approach to quantum Bayesian inference on a Bayesian network.

2. Support for Python 3.12

Added support for using Qiskit Machine Learning with Python 3.12.

3. Incorporation of Qiskit Algorithms

Migrated essential Qiskit Algorithms features to Qiskit Machine Learning. Qiskit Machine Learning now requires Qiskit 1.0 or higher; you may need to upgrade Qiskit Aer accordingly, depending on your setup. The merge of some of the features of Qiskit Algorithms into Qiskit Machine Learning might lead to breaking changes, so caution is advised when updating to version 0.8 during critical production stages in a project. This change ensures continued enhancement and maintenance of essential features for Qiskit Machine Learning following the end of official support for Qiskit Algorithms; as a result, Qiskit Machine Learning no longer depends on Qiskit Algorithms.

Users must update their imports and code references in code that uses Qiskit Machine Learning and Qiskit Algorithms:

  • Change qiskit_algorithms.gradients to qiskit_machine_learning.gradients
  • Change qiskit_algorithms.optimizers to qiskit_machine_learning.optimizers
  • Change qiskit_algorithms.state_fidelities to qiskit_machine_learning.state_fidelities
  • Update utility imports as needed, since only a subset of Qiskit Algorithms was merged.

To continue using sub-modules and functionalities of Qiskit Algorithms that have not been transferred, you may continue using them as before by importing from Qiskit Algorithms. However, be aware that Qiskit Algorithms is no longer officially supported and some of its functionalities may not work in your use case. For any problems directly related to Qiskit Algorithms, please open a GitHub issue at https://github.com/qiskit-community/qiskit-algorithms. Should you want to include a Qiskit Algorithms functionality that has not been incorporated in Qiskit Machine Learning, please open a feature-request issue at https://github.com/qiskit-community/qiskit-machine-learning, explaining why this change would be useful for you and other users.

Four examples of upgrading the code can be found below.

Gradients:

# Before:
from qiskit_algorithms.gradients import SPSAEstimatorGradient, ParamShiftEstimatorGradient
# After:
from qiskit_machine_learning.gradients import SPSAEstimatorGradient, ParamShiftEstimatorGradient
# Usage: the primitive-based gradients wrap an estimator
from qiskit.primitives import Estimator

spsa_gradient = SPSAEstimatorGradient(Estimator())
param_shift_gradient = ParamShiftEstimatorGradient(Estimator())

Optimizers:

# Before:
from qiskit_algorithms.optimizers import COBYLA, ADAM
# After:
from qiskit_machine_learning.optimizers import COBYLA, ADAM
# Usage
cobyla = COBYLA()
adam = ADAM()

Quantum state fidelities:

# Before:
from qiskit_algorithms.state_fidelities import ComputeUncompute
# After:
from qiskit_machine_learning.state_fidelities import ComputeUncompute
# Usage: the fidelity wraps a sampler primitive
from qiskit.primitives import Sampler

fidelity = ComputeUncompute(sampler=Sampler())

Algorithm globals (used to fix the random seed):

# Before:
from qiskit_algorithms.utils import algorithm_globals
# After:
from qiskit_machine_learning.utils import algorithm_globals
algorithm_globals.random_seed = 1234

4. Support for V2 Primitives

The EstimatorQNN and SamplerQNN classes now support V2 primitives (EstimatorV2 and SamplerV2), allowing direct execution on IBM Quantum backends. This enhancement ensures compatibility with Qiskit IBM Runtime’s Primitive Unified Block (PUB) requirements and instruction set architecture (ISA) constraints for circuits and observables. Users can switch between V1 primitives and V2 primitives from version 0.8. From version 0.9, V1 primitives will be removed.
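
A minimal sketch of constructing a QNN with a V2 primitive (using the reference StatevectorEstimator; real-backend runs additionally require ISA circuits):

from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit.primitives import StatevectorEstimator  # reference EstimatorV2
from qiskit_machine_learning.neural_networks import EstimatorQNN

feature_map = ZZFeatureMap(2)
ansatz = RealAmplitudes(2, reps=1)

qnn = EstimatorQNN(
    circuit=feature_map.compose(ansatz),
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,
    estimator=StatevectorEstimator(),
)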

Upgrade Notes

  • Removed support for using Qiskit Machine Learning with Python 3.8 to reflect the end of life of Python 3.8 in October 2024 (PEP 569). To continue using Qiskit Machine Learning, upgrade to Python 3.9 or above.

  • From version 0.8, Qiskit Machine Learning requires Qiskit 1.0 or higher.

  • Users working with real backends are advised to migrate to V2 primitives (EstimatorV2 and SamplerV2) to ensure compatibility with Qiskit IBM Runtime hardware requirements. V2 primitives are the standard from the 0.8 release onward, while V1 primitives are deprecated.

Deprecation Notes

  • The V1 primitives (e.g., EstimatorV1 and SamplerV1) are no longer compatible with real quantum backends via Qiskit IBM Runtime. This update provides initial transitional support, but V1 primitives are deprecated and will be removed in version 0.9. Users should adopt V2 primitives for both local and hardware executions to ensure long-term compatibility.

Bug Fixes

  • Added a max_circuits_per_job parameter to FidelityQuantumKernel for the case in which more circuits are submitted than the backend's job limit allows; the circuits are then split up and run through separate jobs.

  • Removed the QuantumKernelTrainer dependency on copy.deepcopy, which was throwing an error with real backends. Now, it modifies the TrainableKernel in place. If you would like to use the initial kernel, please call assign_training_parameters() on the TrainableKernel, passing the initial_point attribute of QuantumKernelTrainer.

  • Fixed a dimension mismatch error in torch_connector raised when using datasets with a number of dimensions other than 3. The updated implementation defines the Einstein summation signature dynamically, based on the number of dimensions ndim of the input data (up to 26 dimensions).

  • Fixed a bug where FidelityStatevectorKernel threw an error when pickled.

  • Fixed an issue in the quantum neural networks where the binding order of the inputs and weights might end up being incorrect. Although the parameters for the inputs and weights are specified to the QNN, the code previously bound the inputs and weights in the order given by circuit.parameters. This happened to be the right order for the most commonly used Qiskit circuit library feature maps and ansatzes, as the default parameter names sort as expected, but for custom names this was not always the case and led to unexpected behaviour. The sequences of input and weight parameters, as supplied, are now always used as the binding order, for the inputs and weights respectively, so that the order of the parameters in the overall circuit no longer matters.
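
A minimal sketch of the guaranteed behaviour (custom names chosen so that circuit.parameters would sort them in the "wrong" order):

from qiskit.circuit import Parameter, QuantumCircuit
from qiskit_machine_learning.neural_networks import EstimatorQNN

x = Parameter("my_input")   # sorts after "a_weight" in circuit.parameters
w = Parameter("a_weight")
qc = QuantumCircuit(1)
qc.ry(x, 0)
qc.rz(w, 0)

# The supplied sequences, not the alphabetical circuit order, define the binding.
qnn = EstimatorQNN(circuit=qc, input_params=[x], weight_params=[w])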


Full Changelog: 0.7.0...0.8.0

Qiskit Machine Learning 0.7.2

29 Feb 16:40
d77757d

Changelog

New Features

  • Added support for using Qiskit Machine Learning with Python 3.12.

Bug Fixes

  • Added a max_circuits_per_job parameter to FidelityQuantumKernel for the case in which more circuits are submitted than the backend's job limit allows; the circuits are then split up and run through separate jobs.

  • Removed QuantumKernelTrainer dependency on copy.deepcopy that was throwing an error with real backends. Now, it modifies the TrainableKernel in place. If you would like to use the initial kernel, please call assign_training_parameters() of the TrainableKernel using the initial_point attribute of QuantumKernelTrainer.

  • Fixed an issue in the quantum neural networks where the binding order of the inputs and weights might end up being incorrect. Although the parameters for the inputs and weights are specified to the QNN, the code previously bound the inputs and weights in the order given by circuit.parameters. This happened to be the right order for the most commonly used Qiskit circuit library feature maps and ansatzes, as the default parameter names sort as expected, but for custom names this was not always the case and led to unexpected behavior. The sequences of input and weight parameters, as supplied, are now always used as the binding order, for the inputs and weights respectively, so that the order of the parameters in the overall circuit no longer matters.

  • Fixed a bug where FidelityStatevectorKernel threw an error when pickled.

Qiskit Machine Learning 0.7.1

01 Dec 12:09
541ccc3

Changelog

This bug fix release fixed the link to the Qiskit medium blog post where it was announced that application modules had been moved to the qiskit-community organization.

Qiskit Machine Learning 0.7.0

10 Nov 16:24
95894f7

Prelude

Qiskit Machine Learning has been migrated to the qiskit-community Github organization to further emphasize that it is a community-driven project. To reflect this change, and because we are onboarding additional code-owners and maintainers, with this version (0.7) we have decided to remove all deprecated code, regardless of the time of its deprecation. This ensures that the new members of the development team do not have a large bulk of legacy code to maintain. This can mean one of two things for you as the end-user:

  • Nothing, if you already migrated your code and no longer rely on any deprecated features.
  • Otherwise, you should make sure that your workflow doesn’t rely on deprecated classes. If you cannot do that, or want to continue using some of the features that were removed, you should pin your version of Qiskit Machine Learning to 0.6.

For more context on the changes around Qiskit Machine Learning and the other application projects as well as the Algorithms library in Qiskit, be sure to read this blog post.

New Features

  • The QNNCircuit class can be passed as the circuit to the SamplerQNN and EstimatorQNN. This simplifies the interfaces for building a Sampler- or Estimator-based neural network implementation from a feature map and an ansatz circuit.
    Using the QNNCircuit comes with the benefit that the feature map and ansatz do not have to be composed explicitly. If a QNNCircuit is passed to the SamplerQNN or EstimatorQNN, the input and weight parameters do not have to be provided, because these two properties are taken from the QNNCircuit.
    An example of using QNNCircuit with the SamplerQNN class is as follows:

     from qiskit_machine_learning.circuit.library import QNNCircuit
     from qiskit_machine_learning.neural_networks import SamplerQNN
    
     def parity(x):
         return bin(x).count("1") % 2
    
     # Create a parameterized 2 qubit circuit composed of the default ZZFeatureMap feature map
     # and RealAmplitudes ansatz.
     qnn_qc = QNNCircuit(num_qubits=2)
    
     qnn = SamplerQNN(
         circuit=qnn_qc,
         interpret=parity,
         output_shape=2
     )
    
     qnn.forward(input_data=[1, 2], weights=[1, 2, 3, 4, 5, 6, 7, 8])

    The QNNCircuit is used with the EstimatorQNN class in the same fashion:

    from qiskit_machine_learning.circuit.library import QNNCircuit
    from qiskit_machine_learning.neural_networks import EstimatorQNN
    
    # Create a parameterized 2 qubit circuit composed of the default ZZFeatureMap feature map
    # and RealAmplitudes ansatz.
    qnn_qc = QNNCircuit(num_qubits=2)
    
    qnn = EstimatorQNN(
        circuit=qnn_qc
    )
    
    qnn.forward(input_data=[1, 2], weights=[1, 2, 3, 4, 5, 6, 7, 8])
  • Added a new QNNCircuit class that composes a quantum circuit from a feature map and an ansatz.
    At least one of the number of qubits, the feature map, or the ansatz has to be provided.
    If only the number of qubits is provided, the resulting quantum circuit is a composition of the ZZFeatureMap and the RealAmplitudes ansatz. If the number of qubits is 1, the ZFeatureMap is used by default. If only a feature map is provided, the RealAmplitudes ansatz with the corresponding number of qubits is used. If only an ansatz is provided, the ZZFeatureMap with the corresponding number of qubits is used.
    In case the number of qubits is provided along with a feature map, an ansatz, or both, a potential mismatch between the three inputs with respect to the number of qubits is resolved by constructing the QNNCircuit with the given number of qubits. If one of the QNNCircuit properties is set after construction, the circuit is adjusted to incorporate the changes. That is, a new valid configuration that considers the latest property update is derived. This ensures that the class's properties are consistent at all times.

    An example of using this class is as follows:

        from qiskit_machine_learning.circuit.library import QNNCircuit
        qnn_qc = QNNCircuit(2)
        print(qnn_qc)
        # prints:
        #      ┌──────────────────────────┐»
        # q_0: ┤0                         ├»
        #      │  ZZFeatureMap(x[0],x[1]) │»
        # q_1: ┤1                         ├»
        #      └──────────────────────────┘»
        # «     ┌──────────────────────────────────────────────────────────┐
        # «q_0: ┤0                                                         ├
        # «     │  RealAmplitudes(θ[0],θ[1],θ[2],θ[3],θ[4],θ[5],θ[6],θ[7]) │
        # «q_1: ┤1                                                         ├
        # «     └──────────────────────────────────────────────────────────┘
    
        print(qnn_qc.num_qubits)
        # prints: 2
    
        print(qnn_qc.input_parameters)
        # prints: ParameterView([ParameterVectorElement(x[0]), ParameterVectorElement(x[1])])
    
        print(qnn_qc.weight_parameters)
        # prints: ParameterView([ParameterVectorElement(θ[0]), ParameterVectorElement(θ[1]),
        #         ParameterVectorElement(θ[2]), ParameterVectorElement(θ[3]),
        #         ParameterVectorElement(θ[4]), ParameterVectorElement(θ[5]),
        #         ParameterVectorElement(θ[6]), ParameterVectorElement(θ[7])])
  • A new TrainableFidelityStatevectorKernel class has been added that provides a trainable version of FidelityStatevectorKernel. This relationship mirrors that between the existing FidelityQuantumKernel and TrainableFidelityQuantumKernel. Thus, TrainableFidelityStatevectorKernel inherits from both FidelityStatevectorKernel and TrainableKernel.
    This class is used with [QuantumKernelTrainer](https://qiskit.org/ecosystem/machine-learning/stu...


Qiskit Machine Learning 0.6.1

09 May 07:59
2ef8f86

Changelog

Bug Fixes

  • Compatibility fix to support Python 3.11.

  • The function qiskit_machine_learning.datasets.discretize_and_truncate() is fixed to work with numpy 1.24. This function is used by the QGAN implementation.

Qiskit Machine Learning 0.6.0

27 Mar 20:48
6b3c65a

Changelog

New Features

  • Allow a callable to be used as an optimizer in NeuralNetworkClassifier, VQC, NeuralNetworkRegressor, VQR, as well as in QuantumKernelTrainer.

    Now, the optimizer can either be one of Qiskit’s optimizers, such as SPSA, or a callable with the following signature:

      from qiskit.algorithms.optimizers import OptimizerResult

      def my_optimizer(fun, x0, jac=None, bounds=None) -> OptimizerResult:
          # Args:
          #     fun (callable): the function to minimize
          #     x0 (np.ndarray): the initial point for the optimization
          #     jac (callable, optional): the gradient of the objective function
          #     bounds (list, optional): a list of tuples specifying the parameter bounds
          result = OptimizerResult()
          result.x = ...  # optimal parameters
          result.fun = ...  # optimal function value
          return result

      The above signature also allows one to directly pass any SciPy minimizer, for instance:

      from functools import partial
      from scipy.optimize import minimize
      optimizer = partial(minimize, method="L-BFGS-B")
  • Added a new FidelityStatevectorKernel class that is optimized to use only statevector-implemented feature maps. Therefore, computational complexity is reduced from $O(N^2)$ to $O(N)$.

    Computed statevector arrays are also cached to further increase efficiency. This cache is cleared when the evaluate method is called, unless auto_clear_cache is False. The cache is unbounded by default, but its size can be set by the user, i.e., limited to the number of samples in the worst case.

    By default, the Terra reference Statevector is used; however, the type can be specified via the statevector_type argument.

    Shot noise emulation can also be added. If shots is None, the exact fidelity is used. Otherwise, the mean is taken of samples drawn from a binomial distribution with probability equal to the exact fidelity.

    With the addition of shot noise, the kernel matrix may no longer be positive semi-definite (PSD). With enforce_psd set to True this condition is enforced.

    An example of using this class is as follows:

    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    from qiskit.circuit.library import ZZFeatureMap
    from qiskit.quantum_info import Statevector

    from qiskit_machine_learning.kernels import FidelityStatevectorKernel

    # generate a simple dataset
    features, labels = make_blobs(
        n_samples=20, centers=2, center_box=(-1, 1), cluster_std=0.1
    )

    feature_map = ZZFeatureMap(feature_dimension=2, reps=2)

    kernel = FidelityStatevectorKernel(
        feature_map=feature_map,
        statevector_type=Statevector,
        cache_size=len(labels),
        auto_clear_cache=True,
        shots=1000,
        enforce_psd=True,
    )
    svc = SVC(kernel=kernel.evaluate)
    svc.fit(features, labels)
  • The PyTorch connector TorchConnector now fully supports sparse output in both forward and backward passes. To enable sparse support, the underlying quantum neural network must first of all be sparse. In this case, if the sparse property of the connector itself is not set, the connector inherits sparsity from the network. If the connector is set to be sparse but the network is not, an exception will be raised. You may also set the connector to be dense even if the network is sparse.

    This snippet illustrates how to create a sparse instance of the connector.

    import torch
    from qiskit import QuantumCircuit
    from qiskit.circuit.library import ZFeatureMap, RealAmplitudes

    from qiskit_machine_learning.connectors import TorchConnector
    from qiskit_machine_learning.neural_networks import SamplerQNN

    num_qubits = 2
    fmap = ZFeatureMap(num_qubits, reps=1)
    ansatz = RealAmplitudes(num_qubits, reps=1)
    qc = QuantumCircuit(num_qubits)
    qc.compose(fmap, inplace=True)
    qc.compose(ansatz, inplace=True)

    qnn = SamplerQNN(
        circuit=qc,
        input_params=fmap.parameters,
        weight_params=ansatz.parameters,
        sparse=True,
    )

    connector = TorchConnector(qnn)

    output = connector(torch.tensor([[1., 2.]]))
    print(output)

    loss = torch.sparse.sum(output)
    loss.backward()

    grad = connector.weight.grad
    print(grad)

      In a hybrid setup, where a PyTorch-based neural network has both classical and quantum layers, sparse operations should not be mixed with dense ones, otherwise exceptions may be thrown by PyTorch.

      Sparse support works on Python 3.8+.

Upgrade Notes

  • The previously deprecated CrossEntropySigmoidLoss loss function has been removed.
  • The previously deprecated datasets have been removed: breast_cancer, digits, gaussian, iris, wine.
  • Positional arguments in QSVC and QSVR were deprecated as of version 0.3. Support of the positional arguments was completely removed in this version, please replace them with corresponding keyword arguments.

Bug Fixes

  • SamplerQNN can now correctly handle quantum circuits that have neither input parameters nor weights. If such a circuit is passed to the QNN, the circuit is executed once in the forward pass, and the backward pass returns None for both gradients.

Qiskit Machine Learning 0.5.0

08 Nov 22:41
85c028c

Changelog

New Features

  • Added support for categorical and ordinal labels to VQC. Now labels can be passed in different formats: they can be plain ordinal labels, a one-dimensional array that contains integer labels like 0, 1, 2, …, or an array with categorical string labels. One-hot encoded labels are still supported. Internally, labels are transformed to one-hot encoding and the classifier is always trained on one-hot labels.

  • Introduced Estimator Quantum Neural Network (EstimatorQNN) based on (runtime) primitives. This implementation leverages the estimator primitive (see BaseEstimator) and the estimator gradients (see BaseEstimatorGradient) to enable runtime access and more efficient computation of forward and backward passes.
    The new EstimatorQNN exposes a similar interface to the Opflow QNN, with a few differences. One is the quantum_instance parameter: it does not have a direct replacement; instead, the estimator parameter must be used. The gradient parameter keeps the same name as in the Opflow QNN implementation, but it no longer accepts Opflow gradient classes as inputs; instead, it expects an (optionally custom) primitive gradient.
    The existing training algorithms, such as VQR, that were based on the Opflow QNN have been updated to accept both implementations. The implementation of NeuralNetworkRegressor has not changed.
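
    A minimal sketch of the new parameter (using the reference Estimator primitive; names as in this release):

    from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
    from qiskit.primitives import Estimator
    from qiskit_machine_learning.neural_networks import EstimatorQNN

    fm = ZZFeatureMap(2)
    ansatz = RealAmplitudes(2, reps=1)

    # `estimator` replaces the former quantum_instance parameter
    qnn = EstimatorQNN(
        circuit=fm.compose(ansatz),
        input_params=fm.parameters,
        weight_params=ansatz.parameters,
        estimator=Estimator(),
    )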

  • Introduced Quantum Kernels based on (runtime) primitives. This implementation leverages the fidelity primitive (see BaseStateFidelity) and provides more flexibility to end users. The fidelity primitive calculates state fidelities/overlaps for pairs of quantum circuits and requires an instance of Sampler. Thus, users may plug in their own implementations of fidelity calculations.
    The new kernels expose the same interface and the same parameters except the quantum_instance parameter. This parameter does not have a direct replacement and instead the fidelity parameter must be used.

    A new hierarchy is introduced:

      - A base and abstract class [BaseKernel](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.BaseKernel.html#qiskit_machine_learning.kernels.BaseKernel) is introduced. All concrete implementations must inherit from this class.
      - A fidelity-based quantum kernel [FidelityQuantumKernel](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.FidelityQuantumKernel.html#qiskit_machine_learning.kernels.FidelityQuantumKernel) is added. This is a direct replacement of [QuantumKernel](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.QuantumKernel.html#qiskit_machine_learning.kernels.QuantumKernel). The difference is that the new class takes either a sampler or a fidelity instance to estimate overlaps and construct the kernel matrix.
      - A new abstract class [TrainableKernel](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.TrainableKernel.html#qiskit_machine_learning.kernels.TrainableKernel) is introduced to generalize the ability to train quantum kernels.
      - A fidelity-based trainable quantum kernel [TrainableFidelityQuantumKernel](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.TrainableFidelityQuantumKernel.html#qiskit_machine_learning.kernels.TrainableFidelityQuantumKernel) is introduced. This is a replacement of the existing [QuantumKernel](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.QuantumKernel.html#qiskit_machine_learning.kernels.QuantumKernel) if a trainable kernel is required. The trainer [QuantumKernelTrainer](https://qiskit.org/documentation/machine-learning/stubs/qiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer.html#qiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer) now accepts both quantum kernel implementations, the new one and the existing one.

    The existing algorithms such as QSVC, QSVR and other kernel-based algorithms are updated to accept both implementations.
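
    A minimal sketch of the fidelity-based kernel (assuming the state fidelities still live in qiskit.algorithms at this release):

    import numpy as np
    from qiskit.algorithms.state_fidelities import ComputeUncompute
    from qiskit.circuit.library import ZZFeatureMap
    from qiskit.primitives import Sampler
    from qiskit_machine_learning.kernels import FidelityQuantumKernel

    fidelity = ComputeUncompute(sampler=Sampler())   # replaces quantum_instance
    kernel = FidelityQuantumKernel(feature_map=ZZFeatureMap(2), fidelity=fidelity)

    features = np.array([[0.1, 0.2], [0.3, 0.4]])
    print(kernel.evaluate(x_vec=features))           # 2x2 kernel matrix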

  • Introduced the Sampler Quantum Neural Network (SamplerQNN) based on (runtime) primitives. This implementation leverages the sampler primitive (see BaseSampler) and the sampler gradients (see BaseSamplerGradient) to enable runtime access and more efficient computation of forward and backward passes.
    The new SamplerQNN exposes a similar interface to the CircuitQNN, with a few differences. One is the quantum_instance parameter: it does not have a direct replacement; instead, the sampler parameter must be used. The gradient parameter keeps the same name as in the CircuitQNN implementation, but it no longer accepts Opflow gradient classes as inputs; instead, it expects an (optionally custom) primitive gradient. The sampling option has been removed for the time being, as this information is not currently exposed by the Sampler and might correspond to future lower-level primitives.

  • The existing training algorithms, such as VQC, that were based on the CircuitQNN have been updated to accept both implementations. The implementation of NeuralNetworkClassifier has not changed.

  • Exposed the callback attribute as a public property on TrainableModel. This, for instance, allows setting the callback between optimizations and storing the history in separate objects.
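
    A minimal sketch of setting the callback between fits (the two-argument callback signature is taken from TrainableModel):

    from qiskit_machine_learning.algorithms.classifiers import VQC

    vqc = VQC(num_qubits=2)
    history = []
    vqc.callback = lambda weights, value: history.append(value)  # record objective values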

  • Gradient operator/circuit initialization in OpflowQNN and CircuitQNN respectively is now delayed until the first call of the backward method. Thus, the networks are created faster and gradient framework objects are not created until they are required.

  • Introduced a new parameter evaluate_duplicates in QuantumKernel. This parameter defines a strategy for how kernel matrix elements are evaluated when duplicate samples are found. Possible values are:

      - all: all kernel matrix elements are evaluated, even the diagonal ones when training. This may introduce additional noise in the matrix.
      - off_diagonal: when training, the matrix diagonal is set to 1 and the remaining elements are fully evaluated, e.g., for two identical samples in the dataset. When inferring, all elements are evaluated. This is the default value.
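
    A minimal sketch of selecting a strategy (a quantum_instance would still be needed to evaluate the kernel):

    from qiskit.circuit.library import ZZFeatureMap
    from qiskit_machine_learning.kernels import QuantumKernel

    kernel = QuantumKernel(feature_map=ZZFeatureMap(2), evaluate_duplicates="all")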
    

...


Qiskit Machine Learning 0.4.0

29 Apr 17:12
63ecb31

Changelog

New Features

  • In previous releases, at the backpropagation stage of CircuitQNN and OpflowQNN, gradients were computed for each sample in a dataset individually, and the obtained values were then aggregated into one output array; thus, at least one job was submitted per sample. Now, gradients are computed for all samples in a dataset in one go, by passing a list of values for a single parameter to CircuitSampler. The number of jobs required for such computations is therefore significantly reduced. This improvement may speed up the training process in cloud environments, where the queue time for submitting a job may be a major contributor to the overall training time.

  • Introduced two new classes, EffectiveDimension and LocalEffectiveDimension, for calculating the capacity of quantum neural network models through the computation of the Fisher Information Matrix. The local effective dimension bounds the generalization error of QNNs and only accepts single parameter sets as inputs. The global effective dimension (or just effective dimension) can be used as a measure of the expressibility of the model, and accepts multiple parameter sets.

  • Objective functions constructed by the neural network classifiers and regressors now include an averaging factor that is evaluated as 1 / number_of_samples. Computed averaged objective values are passed to a user-specified callback, if any. Users may notice a dramatic decrease in the objective values in their callbacks; this is due to the averaging factor.

  • Added support for saving and loading machine learning models. This support is introduced in TrainableModel, so all sub-classes can be saved and loaded. Kernel-based models can also be saved and loaded. The models that support saving and loading are:

      - NeuralNetworkClassifier
      - NeuralNetworkRegressor
      - VQC
      - VQR
      - QSVC
      - QSVR
      - PegasosQSVC
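
    A minimal sketch of the round trip (assuming the save/load methods introduced on TrainableModel; the file name is arbitrary):

    from qiskit_machine_learning.algorithms.classifiers import VQC

    vqc = VQC(num_qubits=2)
    # ... fit the model on training data, then persist it
    vqc.save("vqc.model")

    # later, restore the model for inference or further training
    vqc = VQC.load("vqc.model")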
    
  • When a model is saved, all model parameters are saved to a file, including a quantum instance that is referenced by internal objects. That means that when a model is loaded from a file and used, for instance, for inference, the same quantum instance and corresponding backend will be used, even if it is a cloud backend.

  • Added a new feature in CircuitQNN that ensures unbound_pass_manager is called when caching the QNN circuit and that bound_pass_manager is called when QNN parameters are assigned.

  • Added a new feature in QuantumKernel that ensures the bound_pass_manager is used, when provided via the QuantumInstance, when transpiling the kernel circuits.

Upgrade Notes

  • Added support for running with Python 3.10. At the time of the release, Torch didn’t have a Python 3.10 version.

  • The previously deprecated BaseBackend class has been removed. It was originally deprecated in the Qiskit Terra 0.18.0 release.

  • Support for running with Python 3.6 has been removed. To run Qiskit Machine Learning you need a minimum Python version of 3.7.

Deprecation Notes

  • The functions breast_cancer, digits, gaussian, iris and wine in the datasets module are deprecated and should not be used.

  • Class CrossEntropySigmoidLoss is deprecated and marked for removal.

  • Removed support for l2 and l1 values as loss function definitions. Please use squared_error and absolute_error, respectively.

Bug Fixes

  • Fixes in the ad hoc dataset. Fixed a ValueError raised when n=3 is passed to ad_hoc_data. When the value of n is not 2 or 3, a ValueError is raised with a message that the only supported values of n are 2 and 3.

  • Previously, VQC would throw an error if trained on batches of data where not all of the target labels that can be found in the full dataset were present. This is because VQC interpreted the number of unique targets in the current batch as the number of classes. Currently, VQC is hard-coded to expect one-hot-encoded targets. Therefore, VQC will now determine the number of classes from the shape of the target array.

  • Fixes an issue where VQC could not be trained on multiclass datasets. It returned nan values on some iterations. This is fixed in two ways. First, the default parity function is now guaranteed to be able to assign at least one output bitstring to each class, so long as 2**N >= C, where N is the number of output qubits and C is the number of classes. This guarantees that it is at least possible for every class to be predicted with a non-zero probability. Second, even with this change it is still possible that on a given training instance a class is predicted with 0 probability. Previously this could lead to nan in the CrossEntropyLoss calculation. We now replace 0 probabilities with a small positive value to ensure the loss cannot return nan.

  • Fixes an issue in QuantumKernel where evaluating a quantum kernel for data with dimension d>2 raised an error. This is fixed by changing the hard-coded reshaping of one-dimensional arrays in QuantumKernel.evaluate().

  • Fixes an issue where VQC would fail with warm_start=True. The extraction of the initial_point in TrainableModel from the final point of the minimization had not been updated to reflect the refactor of optimizers in qiskit-terra: the old optimize method, which returned a tuple, was deprecated, and a new method, minimize, was created that returns an OptimizerResult object. We now correctly recover the final point of the minimization from previous fits to use for a warm start in subsequent fits.

  • Added GPU support to TorchConnector. Now, if a hybrid PyTorch model is being trained on a GPU, TorchConnector correctly detaches tensors, moves them to the CPU, evaluates the forward and backward passes, and places the resulting tensors on the same device they came from.

  • Fixed a bug when a sparse array is passed to VQC as labels. Sparse arrays can easily arise when labels are encoded via OneHotEncoder from scikit-learn. Now both NeuralNetworkClassifier and VQC support sparse arrays and convert them to dense arrays in the implementation.

Qiskit Machine Learning 0.3.1

17 Feb 23:18
e927a48

Changelog

Upgrade Notes

  • Added support for running with Python 3.10. At the time of the release, Torch didn’t have a Python 3.10 version.

Bug Fixes

  • Fixes in the ad hoc dataset. Fixed a ValueError raised when n=3 is passed to ad_hoc_data. When the value of n is not 2 or 3, a ValueError is raised with a message that the only supported values of n are 2 and 3.

  • Previously, VQC would throw an error if trained on batches of data where not all of the target labels that can be found in the full dataset were present. This is because VQC interpreted the number of unique targets in the current batch as the number of classes. Currently, VQC is hard-coded to expect one-hot-encoded targets. Therefore, VQC will now determine the number of classes from the shape of the target array.

  • Fixes an issue where VQC could not be trained on multiclass datasets. It returned nan values on some iterations. This is fixed in two ways. First, the default parity function is now guaranteed to be able to assign at least one output bitstring to each class, so long as 2**N >= C, where N is the number of output qubits and C is the number of classes. This guarantees that it is at least possible for every class to be predicted with a non-zero probability. Second, even with this change it is still possible that on a given training instance a class is predicted with 0 probability. Previously this could lead to nan in the CrossEntropyLoss calculation. We now replace 0 probabilities with a small positive value to ensure the loss cannot return nan.

  • Fixes an issue where VQC would fail with warm_start=True. The extraction of the initial_point in TrainableModel from the final point of the minimization had not been updated to reflect the refactor of optimizers in qiskit-terra: the old optimize method, which returned a tuple, was deprecated, and a new method, minimize, was created that returns an OptimizerResult object. We now correctly recover the final point of the minimization from previous fits to use for a warm start in subsequent fits.