I'm working on an open-source and open-hardware implementation of the KLM protocol. Currently I'm writing EDA software that converts OpenQASM code into a manufacturable PCB.
As part of that, I need to create a set of standardized components, which in turn requires designing a scalable architecture, and so on.
In a perfect world where every component worked perfectly, timing and synchronization wouldn't be an issue at all. But this is reality, so I assume a certain degree of timing accuracy is needed to actually implement the KLM protocol.
I've devised a laser-based delay system that, at least in theory, lets me trigger my single-photon sources simultaneously and synchronize the production of a photon in a controlled gate with the arrival of a photon at that gate. In reality, though, the electrical components (photodiodes, transistors, etc.) take time to do their job. All of these components have known maximum propagation delays, usually rated in nanoseconds, so I can predict with some accuracy how closely spaced the photons will be along a specific path.
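To make the kind of prediction I mean concrete, here is a minimal sketch of the worst-case timing-budget arithmetic. The component names, delay values, fiber lengths, and the effective index are all placeholder assumptions for illustration; real numbers would come from the datasheets' rated maximum delays.

```python
# Worst-case arrival-time skew between two optical/electrical paths.
# All delay values below are hypothetical, not real component specs.

C = 299_792_458  # speed of light in vacuum, m/s

def path_delay_ns(electronic_delays_ns, fiber_length_m, n_eff=1.468):
    """Worst-case delay for one path: summed max electronic delays
    plus the optical travel time through the fiber."""
    optical_ns = fiber_length_m * n_eff / C * 1e9
    return sum(electronic_delays_ns) + optical_ns

# Path A: photodiode (max 1.2 ns) + comparator (max 0.8 ns) + driver (max 2.0 ns)
a = path_delay_ns([1.2, 0.8, 2.0], fiber_length_m=0.50)
# Path B: photodiode (max 1.2 ns) + driver (max 2.0 ns), slightly longer fiber
b = path_delay_ns([1.2, 2.0], fiber_length_m=0.75)

skew_ns = abs(a - b)  # worst-case discrepancy between photon arrivals at the gate
print(f"Path A: {a:.3f} ns, Path B: {b:.3f} ns, skew: {skew_ns:.3f} ns")
```

The point of the exercise is just that the skew at the gate is bounded by the difference of the per-path worst-case sums, which is what I'd compare against whatever tolerance the gate actually has.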
So, for instance, if I have two photons traveling toward a CX gate with two ancillary modes (one containing a photon, the other a vacuum state), and the three photons enter the gate a few nanoseconds apart, will the gate still work?
Maybe I'm overthinking this, but if I count the photons at my ancilla detectors and get the expected values, even though the photons took an extra few nanoseconds to pass through, does that mean the gate worked?
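Just to be explicit about the acceptance test I have in mind: compare the measured ancilla counts against the expected herald pattern and keep the run only on a match. A minimal sketch (the expected pattern here mirrors my example above, one photon and one vacuum ancilla, and is not necessarily the actual KLM herald):

```python
# Sketch of the post-selection check described above.
# expected_counts is a placeholder pattern, not a verified KLM herald.

def herald_success(measured_counts, expected_counts=(1, 0)):
    """Accept the gate outcome only if every ancilla detector reports
    exactly the expected photon number."""
    return tuple(measured_counts) == tuple(expected_counts)

print(herald_success([1, 0]))  # matching pattern -> keep this run
print(herald_success([0, 1]))  # wrong pattern -> discard this run
```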
But what happens if one photon passes through the gate before the other even arrives at it? Surely the gate must fail then, no? I suppose I don't really understand how the controlled gates actually work.
So, my question is: how do the controlled gates work, and what is an acceptable timing discrepancy between the photons (or how do I calculate it)? The best timing accuracy I can achieve without going to silicon fabrication is a few picoseconds.