Now, adding to M. Stern's answer:

The primary reason why error correction is needed for quantum computers is that qubits have a **continuum of states** (I'm considering qubit-based quantum computers only, for the sake of simplicity).

In a quantum computer, unlike a classical computer, each bit doesn't exist in only two possible states. For instance, a likely source of error is over-rotation: $\alpha|0\rangle+\beta|1\rangle$ might be supposed to become $\alpha|0\rangle + \beta e^{i\phi}|1\rangle$ but actually becomes $\alpha|0\rangle+\beta e^{i(\phi+\delta)}|1\rangle$. The actual state is close to the correct state, but it is still wrong. If we don't do something about this, the small errors will build up over time and eventually become a big error.
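To make the accumulation concrete, here is a small numerical sketch (the gate angle $\phi$ and the error $\delta$ are made-up values): repeatedly applying a phase gate that over-rotates by a tiny $\delta$ drives the state measurably away from the intended one.

```python
import numpy as np

phi = np.pi / 4        # intended phase per gate (example value)
delta = 0.01           # small systematic over-rotation (assumed value)

def phase_gate(angle):
    """Single-qubit phase gate diag(1, e^{i*angle})."""
    return np.diag([1.0, np.exp(1j * angle)])

# Start in the state (|0> + |1>)/sqrt(2).
ideal = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
noisy = ideal.copy()

for _ in range(100):                    # apply the gate 100 times
    ideal = phase_gate(phi) @ ideal
    noisy = phase_gate(phi + delta) @ noisy

# Fidelity |<ideal|noisy>|^2 decays as the small errors pile up:
# after 100 gates it is cos^2(100*delta/2) ~ 0.77, far from 1.
fidelity = abs(np.vdot(ideal, noisy)) ** 2
print(f"fidelity after 100 gates: {fidelity:.4f}")
```

A single application with $\delta = 0.01$ leaves the state nearly perfect (fidelity $\cos^2(\delta/2) \approx 0.999975$), which is exactly why the error is easy to ignore at first and disastrous later.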

Moreover, quantum states are very delicate: any interaction with the environment can cause decoherence and collapse a state like $\alpha|0\rangle+\beta|1\rangle$ to $|0\rangle$ with probability $|\alpha|^2$ or to $|1\rangle$ with probability $|\beta|^2$.
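A quick simulation of that collapse rule (the amplitudes below are arbitrary example values): sampling many measurements of $\alpha|0\rangle+\beta|1\rangle$ reproduces the Born probabilities $|\alpha|^2$ and $|\beta|^2$.

```python
import numpy as np

# Example amplitudes with |alpha|^2 = 0.36, |beta|^2 = 0.64 (normalized).
alpha, beta = np.sqrt(0.36), np.sqrt(0.64) * 1j

rng = np.random.default_rng(0)
n_shots = 100_000
# Each "measurement" collapses the state: outcome |0> with prob |alpha|^2.
outcomes = rng.random(n_shots) < abs(alpha) ** 2
p0_observed = outcomes.mean()
print(f"P(0) observed = {p0_observed:.3f}, expected = {abs(alpha)**2:.3f}")
```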

In a classical computer, suppose a bit's value is replicated $n$ times as follows:

$$0 \to \underbrace{000\ldots0}_{n \text{ times}} \quad \text{and} \quad 1 \to \underbrace{111\ldots1}_{n \text{ times}}$$

If, after this step, something like $0001000100$ is produced, the classical computer can correct it to $0000000000$, because the majority of the bits are $0$s and the initial operation most probably intended to replicate the $0$-bit $10$ times (here $n=10$).
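This majority-vote correction for the classical repetition code takes only a few lines (the function names are mine, for illustration):

```python
from collections import Counter

def encode(bit, n=10):
    """Replicate a classical bit n times: 0 -> '0'*n, 1 -> '1'*n."""
    return str(bit) * n

def correct(codeword):
    """Majority vote: restore the codeword the encoder most likely sent."""
    majority, _ = Counter(codeword).most_common(1)[0]
    return majority * len(codeword)

print(correct("0001000100"))   # -> "0000000000"
```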

But for qubits, such an error correction method won't work. First, duplicating qubits directly is not possible, due to the no-cloning theorem. Second, even if you could replicate $|\psi\rangle = \alpha |0\rangle +\beta |1\rangle$ $10$ times, it's highly probable that you'd end up with something like $(\alpha|0\rangle + \beta |1\rangle)\otimes (\alpha e^{i\epsilon_1}|0\rangle + \beta e^{i\epsilon_1'}|1\rangle)\otimes (\alpha e^{i\epsilon_2}|0\rangle + \beta e^{i\epsilon_2'}|1\rangle)\otimes \ldots$, i.e. with errors in the phases, where all the qubits would be in different states (due to the errors). The situation is no longer binary: a quantum computer, unlike a classical computer, can no longer say "since the majority of the bits are in the $0$-state, let me convert the rest to $0$!" to correct an error which occurred during the operation, because after the so-called "replication" all $10$ qubits might be in states that differ from each other. The number of such possible errors keeps increasing rapidly as more and more operations are performed on a system of qubits. M. Stern indeed used the right terminology in their answer to your question: this approach "doesn't **scale well**".
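This is purely illustrative (cloning is impossible, so these "copies" could never actually be prepared), but a numerical sketch shows the point: with random phase errors $\epsilon_k$, every "copy" ends up in a slightly different state, so there is nothing discrete for a majority vote to vote on.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Ten hypothetical "copies" of alpha|0> + beta|1>, each picking up its
# own small random phase errors on the two amplitudes.
copies = []
for _ in range(10):
    eps0, eps1 = rng.normal(0, 0.05, size=2)
    copies.append(np.array([alpha * np.exp(1j * eps0),
                            beta * np.exp(1j * eps1)]))

# Pairwise fidelities with the first copy: all close to 1, but none of
# the states coincide exactly -- the situation is no longer binary.
fids = [abs(np.vdot(copies[0], c)) ** 2 for c in copies[1:]]
print([f"{f:.5f}" for f in fids])
```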

So you need a *different breed* of error correcting techniques to deal with errors occurring during the operation of a quantum computer: techniques that can deal not only with bit flip errors but also with phase shift errors, and that are resistant against unintentional decoherence. One thing to keep in mind is that most quantum gates will not be "perfect", even though with the right set of "universal quantum gates" you can get *arbitrarily close* to building *any* quantum gate which (in theory) performs a unitary transformation.

Niel de Beaudrap mentions that there are clever ways to apply classical error correction techniques such that they can correct many of the errors which occur during quantum operations. This is indeed correct, and it is exactly what present-day quantum error correcting codes do. I'd like to add the following from Wikipedia, as it might give some clarity about how quantum error correcting codes deal with the problem described above:

> Classical error correcting codes use a *syndrome measurement* to diagnose which error corrupts an encoded state. We then reverse an error by applying a corrective operation based on the *syndrome*. Quantum error correction also employs syndrome measurements. We perform a multi-qubit measurement that does not disturb the quantum information in the encoded state but retrieves information about the error. A syndrome measurement can determine whether a qubit has been corrupted, and if so, which one. What is more, the outcome of this operation (the syndrome) tells us not only which physical qubit was affected, but also, in which of several possible ways it was affected. The latter is counter-intuitive at first sight: since noise is arbitrary, how can the effect of noise be one of only few distinct possibilities? In most codes, the effect is either a bit flip, or a sign (of the phase) flip, or both (corresponding to the Pauli matrices $X$, $Z$, and $Y$). The reason is that the measurement of the syndrome has the projective effect of a quantum measurement. So even if the error due to the noise was arbitrary, it can be expressed as a superposition of basis operations—the error basis (which is here given by the Pauli matrices and the identity). The syndrome measurement "forces" the qubit to "decide" for a certain specific "Pauli error" to "have happened", and the syndrome tells us which, so that we can let the same Pauli operator act again on the corrupted qubit to revert the effect of the error.
>
> The syndrome measurement tells us as much as possible about the error that has happened, but nothing at all about the value that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer.
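The syndrome-measurement idea can be sketched for the simplest case, the 3-qubit bit-flip code (a minimal illustration, not a realistic code: it corrects only single bit flips, not phase flips). A logical state $\alpha|0\rangle+\beta|1\rangle$ is encoded as $\alpha|000\rangle+\beta|111\rangle$, and measuring the two parities $Z_0Z_1$ and $Z_1Z_2$ reveals which qubit (if any) was flipped, without revealing $\alpha$ or $\beta$.

```python
import numpy as np

def encode(a, b):
    """Logical state a|000> + b|111> as an 8-dim statevector."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000], psi[0b111] = a, b
    return psi

def flip(psi, k):
    """Apply a bit-flip (Pauli X) to qubit k (0 = most significant)."""
    out = np.zeros_like(psi)
    for i, amp in enumerate(psi):
        out[i ^ (1 << (2 - k))] = amp
    return out

def syndrome(psi):
    """Parities of (qubit0, qubit1) and (qubit1, qubit2).

    A codeword with at most one flip is an eigenstate of Z0Z1 and Z1Z2,
    so these 'measurements' are deterministic and reveal nothing about
    the encoded amplitudes a and b."""
    i = int(np.argmax(np.abs(psi)))        # any basis state with support
    bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    return bits[0] ^ bits[1], bits[1] ^ bits[2]

# syndrome -> which qubit to flip back (None = no error)
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

a, b = 0.6, 0.8j
psi = encode(a, b)
corrupted = flip(psi, 1)                   # error: X on the middle qubit
s = syndrome(corrupted)
k = LOOKUP[s]
recovered = flip(corrupted, k) if k is not None else corrupted
print("syndrome:", s, "-> flip qubit", k)
print("recovered correctly:", np.allclose(recovered, psi))
```

Note how the syndrome $(1,1)$ identifies the corrupted qubit while the amplitudes $a$ and $b$ are never measured; applying the same $X$ again undoes the error, exactly as the quoted passage describes.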

**Note**: I haven't given any examples of actual quantum error correcting techniques; there are plenty of good textbooks which discuss this topic. However, I hope this answer gives the reader a basic idea of why we need error correcting codes in quantum computation.

*Recommended Video Lecture:*

Mini Crash Course: Quantum Error Correction by Ben Reichardt, University of Southern California

As a quick correction, modern hardware does suffer from non-negligible error rates, and error-correction methods are used. That said, of course your point about the problems being much worse on current quantum computers holds. – Nat – 2018-03-14T21:12:19.490

@Nat: interesting. I'm vaguely aware that this may currently be the case for GPUs, and (in a context not involving active computation) RAID arrays are an obvious example as well. But could you describe other hardware platforms for which classical computation must rely on error correction during a computation? – Niel de Beaudrap – 2018-03-14T23:52:25.043

Seems like errors are most frequently in networking contexts, followed by disk storage, followed by RAM. Networking protocols and disks routinely implement error-correction tricks. RAM's a mixed bag; server/workstation RAM tends to use error-correcting code (ECC), though consumer RAM often doesn't. Within CPU's, I'd imagine that they have more implementation-specific tactics, but those'd likely be manufacturer secrets. Error-rates in CPU's and GPU's become relevant at an observable level in a few cases, e.g. in overclocking and manufacturer core-locking decisions. – Nat – 2018-03-15T00:02:02.183

Actually kinda curious about CPU-type error-correction now.. I mean, the cache would seem prone to the same issues that normal RAM is (unless somehow buffered with more power or something?), which'd presumably be unacceptable in server/workstation contexts. But at the register-level? That'd be something neat to read about; didn't see anything immediately on Google, though I suppose that such info'd likely be a trade secret. – Nat – 2018-03-15T00:08:44.610