Continuous Real-Time AI Simulation Loop



I'm new to this space, so I'm not sure whether a similar approach has already been proposed for what I outline below (I'm not even sure what terminology to search for):

I have been doing some brainstorming about architectures for AI systems, and was considering the feasibility of an approach that borrows heavily from real-time game loop simulations.

That is to say, an approach that runs the AI simulation itself in real time at a fixed rate regardless of external events, as opposed to feeding inputs in (near) real time into an existing network (i.e. clocked by events) and reading the outputs. This seems to me to be one way to give the system temporal awareness, and perhaps even some primitive form of "consciousness".

In pseudocode, here is what I've been thinking for a real-time AI loop:

network = LoadNetwork();
registeredInputs = network.GetAllRegisteredInputs();
registeredOutputs = network.GetAllRegisteredOutputs();
timeStepTick = 0;

while(true)
    //Iterate over inputs and process - could use an event-driven approach here,
    //but that would potentially require locking on the network if concurrency is applied
    foreach(registeredInput in registeredInputs)
        //Get any new samples of this input since the last iteration (these samples live in the time domain)
        samples = registeredInput.GetNewSamples();
        //Apply each new sample to the mapped neurons
        foreach(sample in samples)
            network.ProcessInputSample(timeStepTick, registeredInput, sample);

    //Iterate over outputs and process
    foreach(registeredOutput in registeredOutputs)
        //Map relevant neuron outputs to the registered output "frame" buffer
        registeredOutput.WriteFrame(network.ReadOutput(registeredOutput));
        //Whatever registered this output is responsible for reading the buffer at whatever rate is required

    //Whichever simulation time step algorithm - arbitrarily clock this network at 1kHz
    Step(timeStepTick, 1);
    timeStepTick = timeStepTick + 1;
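To make the idea concrete, here is a minimal runnable sketch of the loop above in Python. The names `StubNetwork`, `QueueInput`, and `run_loop` are my own placeholders (not from any existing framework), and the network itself is stubbed out — this only demonstrates the fixed-rate drain-inputs-then-step structure:

```python
import time
from collections import deque

class StubNetwork:
    """Placeholder network; method names mirror the pseudocode above."""
    def __init__(self):
        self.processed = []  # record of (tick, input_name, sample) tuples

    def process_input_sample(self, tick, input_name, sample):
        self.processed.append((tick, input_name, sample))

    def step(self, tick, dt_ticks):
        pass  # advance internal neuron state by dt_ticks (stubbed)

class QueueInput:
    """Registered input backed by a queue; producers push samples asynchronously."""
    def __init__(self):
        self._q = deque()

    def push(self, sample):
        self._q.append(sample)

    def get_new_samples(self):
        """Drain and return all samples accumulated since the last call."""
        out = list(self._q)
        self._q.clear()
        return out

def run_loop(network, inputs, tick_rate_hz=1000, max_ticks=None):
    """Fixed-rate loop: drain new samples each tick, step the network, sleep off the rest."""
    tick = 0
    period = 1.0 / tick_rate_hz
    while max_ticks is None or tick < max_ticks:
        start = time.perf_counter()
        for name, source in inputs.items():
            for sample in source.get_new_samples():
                network.process_input_sample(tick, name, sample)
        network.step(tick, 1)
        tick += 1
        # best-effort real time: sleep away the remainder of this tick's budget
        remaining = period - (time.perf_counter() - start)
        if remaining > 0:
            time.sleep(remaining)
    return tick
```

A producer thread would call `push()` at the input's native rate; the loop picks the samples up on its next tick regardless of when they arrived.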

Using this sort of approach, I speculated about the possibility of registering certain internal inputs in addition to external inputs:

  • "Brain wave" signals: Could feed a series of generated sinusoidal waveforms into the network as a kind of baseline stimulation.
  • Time of day signal: Simulates things like circadian rhythm. Could simply be a waveform with a period of one day.
  • Time Step signal: Feed current time step value (monotonic, starts at 0) as a sense of age.
  • Feedback signals: Certain output neurons selected to be mapped back in as inputs. These would create a continuous loop of signaling between neurons which would continue even in the absence of any external input signals.
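The internal inputs above are easy to sketch as pure functions of the tick counter. The particular frequencies and names below (e.g. a 10 Hz "brain wave") are arbitrary illustrative choices of mine:

```python
import math

def brain_wave(tick, tick_rate_hz=1000, freq_hz=10.0, amplitude=1.0):
    """Sinusoidal baseline stimulation (here an arbitrary 10 Hz wave)."""
    t = tick / tick_rate_hz
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def circadian(tick, tick_rate_hz=1000):
    """Time-of-day signal: one full cycle per 24 hours of simulated time."""
    seconds_per_day = 86400.0
    t = tick / tick_rate_hz
    return math.sin(2 * math.pi * t / seconds_per_day)

def age_signal(tick):
    """Monotonic 'age' input: simply the current tick count, starting at 0."""
    return tick
```

Each of these would be sampled once per tick and fed through the same `ProcessInputSample()` path as external inputs. Feedback signals need no generator at all — they are just output-buffer values wired back to input mappings.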

The inputs and outputs would obviously need to be sampled at rates below the Nyquist frequency of the network (500 Hz for the 1 kHz clock in the pseudocode above). For video data this would be pretty easy, as most video content is clocked below 100 Hz (the tradeoff being large frame data). Audio data might need to be resampled, since it usually runs around 48 kHz, but that isn't to say the simulation couldn't be clocked at something like 100 kHz to address both needs. In that case the audio samples could be ingested at their native sample rate, and video frames would only be handled when a new frame arrives.
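For the audio case, even a simple linear-interpolation resampler would suffice to map native-rate samples onto the simulation tick rate. This is only a sketch of my own; a real system would want a proper anti-aliasing filter before downsampling:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample by linear interpolation, e.g. 48 kHz audio down to a 1 kHz tick rate."""
    if not samples:
        return []
    n_out = max(1, int(len(samples) * dst_rate / src_rate))
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate           # fractional index into the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # blend the two nearest source samples
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```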

The continuous load of such a system on the underlying hardware should be very easy to calculate from the number of mapped inputs and outputs and their sample rates. It might even be feasible to let the network run as fast as the hardware allows (above some minimum constraint on input/output sampling rate), which could make it "better" in some arbitrary sense.
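That load estimate is just arithmetic over the registered inputs. The input names, channel counts, and rates below are illustrative assumptions, not measurements:

```python
def samples_per_second(inputs):
    """Total input samples the network must process per wall-clock second.

    `inputs` maps input name -> (channels, sample_rate_hz).
    """
    return sum(channels * rate for channels, rate in inputs.values())

load = samples_per_second({
    "audio": (2, 48000),        # stereo audio at its native rate
    "video": (640 * 480, 60),   # one sample per pixel at 60 fps
})
```

Multiplying by the per-sample cost of `ProcessInputSample()` (plus the per-tick cost of `Step()`) gives a hard upper bound on sustained compute, which is what makes the fixed-rate design easy to budget for.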

I have no idea what sort of neuron algorithm would be most appropriate here (I suspect existing algorithms would not work well), but it would have to be very efficient computationally, considering the potential tick rate of the simulation. All of the magic lives in the ProcessInputSample() method in the code above.
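As one example of a computationally cheap per-neuron update — a guess on my part, not a claim that it is the right choice — a leaky-integrator rule costs only a multiply, a compare, and an add per tick:

```python
class LeakyNeuron:
    """Leaky integrator sketch (an assumption, not the post's actual design).

    Each tick the membrane potential decays multiplicatively; incoming samples
    add to it, and crossing the threshold emits a spike and resets the state.
    """
    def __init__(self, decay=0.95, threshold=1.0):
        self.potential = 0.0
        self.decay = decay
        self.threshold = threshold

    def add_input(self, sample):
        """Called from ProcessInputSample() for each mapped input sample."""
        self.potential += sample

    def step(self):
        """Advance one tick; return True if the neuron fired this tick."""
        self.potential *= self.decay
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False
```

Because the decay happens every tick whether or not input arrives, temporal behavior falls out of the simulation clock itself rather than being bolted onto each neuron.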

Obviously, the current applications for such a network are very dubious given the way it operates. To my mind it simply feels like a very complex DSP filter: it takes input waveforms (external or internal), convolves them with various amounts of temporal phasing, and pipes the results to a series of output buffers that the real world interacts with. That said, this also seems like the sort of flexibility required for an AI to learn and operate without some arbitrary number of iterations and a specific training model. It has always seemed to me that LSTM-style neurons were a band-aid for the temporal memory issue. Why push that computational concern onto every single neuron when you can make it an inherent property of the AI by placing its entire simulation in the time domain?


Posted 2017-08-30T18:57:42.780


It's an interesting concept. I've attempted to implement similar approaches, but it's not something I'm skilled enough to do in a couple of twilight sessions. You can look at how Cognitive Services work (by Microsoft, Amazon, or any other big tech company) to get an idea of additional methods to disjoin your network from your environment.

As you are aware, one of the biggest difficulties is in structures that rely on all the incoming data being valid. One way to combat this is to add a slow decay to your network, so that data anomalies will spike your network, reinforcing connections. – Zakk Diaz – 2017-08-30T19:13:02.217


Take a look at Jeff Hawkins -

– Zakk Diaz – 2017-08-30T19:19:24.570

Can you please narrow down your question for effective feedback from the community? – quintumnia – 2017-08-30T19:28:01.140

I have not read your question thoroughly, but if you are conceiving of a machine that algorithmically manipulates inputs and outputs to contain a simulated world with subjects, then each subject's perception would be governed by the clock speed driving those manipulations: causal events would seem normal to the subjects themselves, while an external observer would see them unfold at a differing relative rate because of the clock. – Bobs – 2018-06-05T21:35:38.233

No answers