Theoretical obstacles for a BKZ-like quantum algorithm instantiated with a subroutine that operates at small block size?


Admittedly, this specific topic has not been thoroughly studied, and the best we can do is make empirical deductions based on past experience with quantum algorithms.

The security of lattice-based public-key encryption and digital signature schemes is based on the intractability of reducing algebraic lattices, with concrete security-level estimates derived from the BKZ family of algorithms.

The rough idea is that BKZ operates with a siever/enumerator that finds short lattice vectors within a block of a given size, and BKZ uses that subroutine to reduce the lattice basis to a certain desirable norm. Typically, the larger the block size, the smaller the norm of BKZ's output. The siever/enumerator, however, has a running time that is typically exponential in the block size.
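To make the cost trade-off concrete, here is a minimal sketch of the commonly used cost models (the exponents are standard estimates from the literature, not figures from this post): classical sieving is modeled as roughly 2^(0.292·β) operations and quantum sieving as roughly 2^(0.265·β), where β is the block size. The function name and structure below are illustrative assumptions.

```python
# Hypothetical cost-model sketch: estimated cost (in bits of work) of one
# SVP-oracle call inside BKZ at block size beta, under the usual
# exponential sieving models. Exponents: 0.292 (classical heuristic
# sieve), 0.265 (quantum sieve).
def sieve_cost_bits(beta: int, exponent: float = 0.292) -> float:
    """Return log2 of the modeled sieving cost at block size beta."""
    return exponent * beta

# Doubling the block size doubles the *exponent* of the cost, which is
# why a quantum BKZ that achieved the same output norm at a much
# smaller block size would be such a dramatic speedup.
for beta in (100, 200, 400):
    print(beta, sieve_cost_bits(beta), sieve_cost_bits(beta, 0.265))
```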

However, what if there were a quantum version of BKZ capable of producing a lattice basis with small norm using a siever/enumerator operating at a small block size? That brings us to the question:

What would be the theoretical obstacles to such a BKZ-like quantum algorithm?


Posted 2021-01-28T12:28:24.637
