What’s the difference between error suppression, error mitigation, and error correction?



Errors occur naturally in any computer: ideally, the quantum state evolves exactly as prescribed by the quantum circuit that is executed.


Press release from IBM
October 23rd 2022

However, the actual quantum state and quantum bits might evolve differently, causing errors in the calculation. These errors arise from unavoidable disturbances in the outside environment or in the hardware itself, disturbances that we call noise. Quantum bit errors are also more complex than classical bit errors. Not only can a qubit's zero-or-one value flip, but qubits also carry a phase, somewhat like a direction they point in. We need to handle both kinds of error at every level of the system: by improving our control of the computational hardware itself, and by building redundancy into the hardware so that even if one or a few qubits error out, we can still retrieve an accurate result from our calculations.
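The post itself contains no code, but a minimal NumPy sketch can make the two error types concrete. The matrices below are the standard Pauli X (bit-flip) and Z (phase-flip) operators; the chosen states are just illustrative:

```python
import numpy as np

# Pauli matrices model the two basic single-qubit errors:
# X flips the qubit's 0/1 value (bit flip); Z flips its phase (phase flip).
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

zero = np.array([1, 0])               # the |0> state
plus = np.array([1, 1]) / np.sqrt(2)  # equal superposition (|0> + |1>)/sqrt(2)

print(X @ zero)  # [0, 1]: a bit flip turns |0> into |1>
print(Z @ plus)  # [0.707, -0.707]: a phase flip reverses the relative sign,
                 # leaving 0/1 measurement probabilities unchanged
```

A classical bit can only suffer the first kind of error; the second is invisible to a plain 0/1 readout, which is part of why quantum error handling is harder.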

There are several different ways we handle these errors, but the terminology can get confusing, and even within the field there is disagreement about what exactly each term means. We can break error handling into three core pieces, each with its own research and development considerations: error suppression, error mitigation, and error correction. Note that the distinctions are subtle and not sharply drawn, especially between suppression and mitigation.
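To give these categories some flavor, here is a toy sketch of one well-known error-mitigation idea, zero-noise extrapolation: run the circuit at deliberately amplified noise levels, then extrapolate the measured values back to the zero-noise limit in classical post-processing. The numbers and the linear decay model below are purely illustrative assumptions, not IBM's implementation:

```python
import numpy as np

# Toy zero-noise extrapolation: measure at amplified noise levels,
# then fit and extrapolate back to zero noise.
ideal = 1.0                          # hypothetical noiseless expectation value
noise_scales = np.array([1, 2, 3])   # noise deliberately amplified 1x, 2x, 3x
measured = ideal - 0.12 * noise_scales  # assume a simple linear decay with noise

fit = np.polyfit(noise_scales, measured, deg=1)  # linear fit to the noisy data
mitigated = np.polyval(fit, 0.0)                 # extrapolate to zero noise
print(mitigated)  # ~1.0, closer to the ideal value than any single raw run
```

Mitigation like this cleans up results after the fact, suppression reduces noise during execution, and correction encodes information redundantly so errors can be detected and fixed outright.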

Read the full blog post here
