v0.8.0
Prelude to the changelog
From this release, Qiskit Machine Learning requires Qiskit 1.0 or above. It brings important changes and upgrades, such as the introduction of quantum Bayesian inference and the migration of a subset of Qiskit Algorithms features to Qiskit Machine Learning. These changes are part of building full compatibility with the version-2 (V2) Qiskit primitives, supported from version 0.8 of Qiskit Machine Learning. V1 primitives are deprecated and will be removed in version 0.9 (please find more information below).
New Features
1. Quantum Bayesian inference
We introduced a new class, qiskit_machine_learning.algorithms.QBayesian, which implements quantum Bayesian inference on a quantum circuit representing a Bayesian network with binary random variables. The computational complexity is reduced from O(nmP(e)^-1) to O(n2^m P(e)^-1/2) per sample, where n is the number of nodes in the Bayesian network with at most m parents per node and e the evidence.
QBayesian defines an order for the qubits in the circuit. The last qubit in the circuit corresponds to the most significant bit in the joint probability distribution. For example, if the random variables A, B, and C are entered into the circuit in this order with (A = 1, B = 0 and C = 0), the probability is represented by the probability amplitude of quantum state 001.
An example of using this class is as follows:
from qiskit import QuantumCircuit
from qiskit_machine_learning.algorithms import QBayesian
# Define a quantum circuit
qc = QuantumCircuit(...)
# Initialize the framework
qb = QBayesian(qc)
# Perform inference
result = qb.inference(query={...}, evidence={...})
print("Probability of query given evidence:", result)
You may refer to the QBI tutorial which describes a step-by-step approach to quantum Bayesian inference on a Bayesian network.
2. Support for Python 3.12
Added support for using Qiskit Machine Learning with Python 3.12.
3. Incorporation of Qiskit Algorithms
Migrated essential Qiskit Algorithms features to Qiskit Machine Learning. Also, Qiskit Machine Learning now requires Qiskit 1.0 or higher; you may need to upgrade Qiskit Aer accordingly, depending on your setup. The merge of some of the features of Qiskit Algorithms into Qiskit Machine Learning might lead to breaking changes. For this reason, caution is advised when updating to version 0.8 during critical production stages in a project. This change ensures continued enhancement and maintenance of essential features for Qiskit Machine Learning following the end of official support for Qiskit Algorithms. Therefore, Qiskit Machine Learning will no longer depend on Qiskit Algorithms.
Users must update their imports and code references in any code that uses Qiskit Machine Learning and Qiskit Algorithms:
- Change qiskit_algorithms.gradients to qiskit_machine_learning.gradients
- Change qiskit_algorithms.optimizers to qiskit_machine_learning.optimizers
- Change qiskit_algorithms.state_fidelities to qiskit_machine_learning.state_fidelities
- Update utilities as needed due to the partial merge of qiskit_algorithms.utils.
To continue using sub-modules and functionalities of Qiskit Algorithms that have not been transferred, you may continue using them as before by importing from Qiskit Algorithms. However, be aware that Qiskit Algorithms is no longer officially supported and some of its functionalities may not work in your use case. For any problems directly related to Qiskit Algorithms, please open a GitHub issue at https://github.com/qiskit-community/qiskit-algorithms. Should you want to include a Qiskit Algorithms functionality that has not been incorporated in Qiskit Machine Learning, please open a feature-request issue at https://github.com/qiskit-community/qiskit-machine-learning, explaining why this change would be useful for you and other users.
Four examples of upgrading the code can be found below.
Gradients:
# Before:
from qiskit_algorithms.gradients import ParamShiftEstimatorGradient, SPSAEstimatorGradient
# After:
from qiskit_machine_learning.gradients import ParamShiftEstimatorGradient, SPSAEstimatorGradient
# Usage: the gradient classes wrap an Estimator primitive
from qiskit.primitives import Estimator
param_shift_gradient = ParamShiftEstimatorGradient(Estimator())
spsa_gradient = SPSAEstimatorGradient(Estimator(), epsilon=0.01)
Optimizers:
# Before:
from qiskit_algorithms.optimizers import COBYLA, ADAM
# After:
from qiskit_machine_learning.optimizers import COBYLA, ADAM
# Usage
cobyla = COBYLA()
adam = ADAM()
Quantum state fidelities:
# Before:
from qiskit_algorithms.state_fidelities import ComputeUncompute
# After:
from qiskit_machine_learning.state_fidelities import ComputeUncompute
# Usage: the fidelity wraps a Sampler primitive
from qiskit.primitives import Sampler
fidelity = ComputeUncompute(sampler=Sampler())
Algorithm globals (used to fix the random seed):
# Before:
from qiskit_algorithms.utils import algorithm_globals
# After:
from qiskit_machine_learning.utils import algorithm_globals
algorithm_globals.random_seed = 1234
4. Support for V2 Primitives
The EstimatorQNN and SamplerQNN classes now support V2 primitives (EstimatorV2 and SamplerV2), allowing direct execution on IBM Quantum backends. This enhancement ensures compatibility with Qiskit IBM Runtime’s Primitive Unified Block (PUB) requirements and instruction set architecture (ISA) constraints for circuits and observables. Users can switch between V1 and V2 primitives from version 0.8. From version 0.9, V1 primitives will be removed.
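As an illustration, the sketch below builds an EstimatorQNN on the local V2 reference estimator; the feature map, ansatz, and StatevectorEstimator choices are assumptions for the example, and on real backends the circuit must additionally satisfy the ISA constraints:
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit.primitives import StatevectorEstimator
from qiskit_machine_learning.neural_networks import EstimatorQNN
# Simple parametrized circuit: feature map followed by a variational ansatz
feature_map = ZZFeatureMap(2)
ansatz = RealAmplitudes(2)
circuit = feature_map.compose(ansatz)
# StatevectorEstimator is the local V2 reference primitive; for hardware,
# use EstimatorV2 from qiskit_ibm_runtime instead
qnn = EstimatorQNN(
    circuit=circuit,
    estimator=StatevectorEstimator(),
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,
)
output = qnn.forward(input_data=[0.1, 0.2], weights=[0.1] * qnn.num_weights)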
Upgrade notes
- Removed support for using Qiskit Machine Learning with Python 3.8 to reflect the end of life of Python 3.8 in October 2024 (PEP 569). To continue using Qiskit Machine Learning, you must upgrade to Python 3.9 or above if you are using an older version of Python.
- From version 0.8, Qiskit Machine Learning requires Qiskit 1.0 or higher.
- Users working with real backends are advised to migrate to V2 primitives (EstimatorV2 and SamplerV2) to ensure compatibility with Qiskit IBM Runtime hardware requirements; a brief sketch follows this list. These V2 primitives become the standard in the 0.8 release, while V1 primitives are deprecated.
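For reference, a minimal sketch of constructing V2 primitives for a real backend with Qiskit IBM Runtime might look as follows; the account setup and backend selection are assumptions, and circuits and observables still need to be transpiled to the backend ISA, for example with a preset pass manager:
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2, SamplerV2
service = QiskitRuntimeService()  # assumes a saved IBM Quantum account
backend = service.least_busy(operational=True, simulator=False)
# V2 primitives bound to the selected backend
estimator = EstimatorV2(mode=backend)
sampler = SamplerV2(mode=backend)
# Transpile circuits to the backend ISA before passing them to a QNN or kernel
pass_manager = generate_preset_pass_manager(optimization_level=1, backend=backend)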
Deprecation Notes
- The V1 primitives (e.g., EstimatorV1 and SamplerV1) are no longer compatible with real quantum backends via Qiskit IBM Runtime. This update provides initial transitional support, but V1 primitives may be fully deprecated and removed in version 0.9. Users should adopt V2 primitives for both local and hardware executions to ensure long-term compatibility.
Bug Fixes
- Added a max_circuits_per_job parameter to FidelityQuantumKernel. When more circuits are submitted than the job limit of the backend, the circuits are split up and run through separate jobs (see the sketch after this list).
- Removed the QuantumKernelTrainer dependency on copy.deepcopy, which was throwing an error with real backends. The trainer now modifies the TrainableKernel in place. If you would like to retain the initial kernel, call TrainableKernel.assign_training_parameters with the QuantumKernelTrainer.initial_point attribute.
- Fixed a dimension mismatch error in the torch_connector raised when using datasets other than 3D. The updated implementation defines the Einstein summation signature dynamically based on the number of dimensions ndim of the input data (up to 26 dimensions).
- Fixed a bug where FidelityStatevectorKernel threw an error when pickled.
- Fixed an issue in the quantum neural networks where the binding order of the inputs and weights could end up being incorrect. Although the input and weight parameters are specified to the QNN, the code previously bound them in the order given by circuit.parameters. This happened to be the right order for the Qiskit circuit library feature maps and ansatzes most often used, because the default parameter names led to the expected ordering; however, for custom parameter names this was not always the case and then led to unexpected behaviour. The input and weight parameter sequences, as supplied, are now always used as the binding order, for the inputs and weights respectively, so that the order of the parameters in the overall circuit no longer matters.
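As a brief illustration of the first fix, here is a hedged sketch of capping the number of circuits per job; the feature map, the random data, and the limit of 300 are assumptions for the example:
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
# Hypothetical per-job cap; the real limit depends on the backend in use
kernel = FidelityQuantumKernel(feature_map=ZZFeatureMap(2), max_circuits_per_job=300)
X = np.random.default_rng(42).random((5, 2))
gram_matrix = kernel.evaluate(X)  # fidelity circuits are batched into jobs of at most 300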
New Contributors
- @MoritzWillmann made their first contribution in #719
- @espREssOOOHHH made their first contribution in #743
- @dependabot made their first contribution in #751
- @OkuyanBoga made their first contribution in #772
- @oscar-wallis made their first contribution in #778
- @FrancescaSchiav made their first contribution in #784
- @edoaltamura made their first contribution in #796
- @proeseler made their first contribution in #717
Full Changelog: 0.7.0...0.8.0