13.2 Methods for encoding data
If classical computers use bits 0 and 1, but quantum computers use qubits represented as |ψ⟩ = a|0⟩ + b|1⟩ for complex a and b, where |a|² + |b|² = 1, how do we efficiently map classical data into a multi-qubit quantum representation? We must quantum-encode classical data before a quantum computer can use it.
In some instances, it may make sense to encode 0 ↦ |0⟩ and 1 ↦ |1⟩, but this one-bit-per-qubit approach is not practical when an application involves the huge amounts of information typical of machine learning, for example. In this section, we look at several techniques for quantum-encoding classical data. There is a trade-off between the time spent encoding and decoding the data and the number of qubits needed to store it.
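To make the simple bit-to-qubit mapping concrete, here is a minimal sketch in Python with NumPy (the function name basis_encode is ours, chosen for illustration, and is not part of any quantum library). Each classical bit becomes one qubit, so a string of k bits becomes a computational basis state described by a vector of length 2^k, which is exactly why this approach does not scale to large data sets.

import numpy as np

def basis_encode(bits):
    # Map a classical bit string to the matching computational basis state,
    # one qubit per bit; the resulting state vector has 2**len(bits) entries.
    ket0 = np.array([1.0, 0.0])   # |0⟩
    ket1 = np.array([0.0, 1.0])   # |1⟩
    state = np.array([1.0])
    for b in bits:
        state = np.kron(state, ket1 if b else ket0)
    return state

# The bit string 101 becomes |101⟩, a length-8 vector with a single 1 at index 5.
print(basis_encode([1, 0, 1]))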
Suppose we have a real n-dimensional vector x. We want to represent x in an N-qubit state |ψ⟩ with N ≤ n. We do this via an encoding circuit or unitary operator...
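As one hedged illustration of such an encoding, the following Python/NumPy sketch performs amplitude encoding: the entries of x are padded and normalized to serve as the amplitudes of an N-qubit state with N = ⌈log₂ n⌉, so the qubit count grows only logarithmically in n. The helper amplitude_encode is an assumption made for this example, not a standard API, and an actual encoding circuit would still need a state-preparation routine to load these amplitudes onto hardware.

import numpy as np

def amplitude_encode(x):
    # Turn a real n-dimensional vector into the 2**N amplitudes of an
    # N-qubit state, with N = ceil(log2(n)): pad with zeros up to the next
    # power of two, then normalize so the squared amplitudes sum to 1.
    x = np.asarray(x, dtype=float)
    n = x.size
    num_qubits = max(1, int(np.ceil(np.log2(n))))
    padded = np.zeros(2 ** num_qubits)
    padded[:n] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector as a quantum state")
    return padded / norm, num_qubits

# A 3-dimensional vector fits in 2 qubits (4 amplitudes):
amps, num_qubits = amplitude_encode([3.0, 0.0, 4.0])
print(num_qubits, amps)   # 2 [0.6 0.  0.8 0. ]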