The twentieth-century transition from analog to digital circuitry was driven by the desire for greater accuracy and lower noise. Now we are developing software where results are approximate and noise has positive value.

- In artificial networks, we use gradients (the Jacobian) or second-order models (the Hessian) to **estimate** the next steps of a convergent algorithm and to define acceptable levels of inaccuracy and doubt.^{1}
- In convergence strategies, we **deliberately add noise** by injecting random or pseudo-random perturbations, improving reliability by jumping out of local minima in the optimization surface during convergence.^{2}
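As a toy illustration of the second point (not a recipe from any cited work; the loss function and all parameters below are made up), plain gradient descent on a double-well surface gets stuck in the shallow minimum, while the same descent with annealed Gaussian noise injected into the gradient can hop the barrier into the deeper one:

```python
import random

def grad(x):
    # Gradient of a toy double-well loss f(x) = (x**2 - 1)**2 + 0.3*x:
    # shallow minimum near x = +0.96, deeper minimum near x = -1.03.
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, noise=0.0, steps=4000, seed=0):
    rng = random.Random(seed)
    for t in range(steps):
        sigma = noise * (1 - t / steps)  # anneal injected noise toward zero
        x -= lr * (grad(x) + rng.gauss(0, sigma))
    return x

x_plain = descend(1.0)             # stays trapped in the shallow well
x_noisy = descend(1.0, noise=8.0)  # noise can kick it into the deeper well
```

Annealing the noise to zero mimics the common practice of decaying the effective temperature so the iterate settles instead of wandering forever.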

What we accept and deliberately introduce in current AI systems are the same things that drove electronics to digital circuitry.

Why not return to analog circuitry for neural nets and implement them with operational amplifier matrices instead of matrices of digital signal processing elements?

The values of artificial-network learning parameters can be maintained on integrated capacitors charged via D-to-A converters, so that the learned states benefit from digital accuracy and convenience while forward propagation benefits from analog advantages:

- Greater speed^{3}
- Orders of magnitude fewer transistors to represent network cells
- Natural thermal noise^{4}
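A back-of-the-envelope sketch of the capacitor-plus-DAC scheme (illustrative values only: an assumed 8-bit DAC spanning a ±1 weight range and made-up weights) shows how quantizing the stored parameters perturbs an idealized analog multiply-accumulate:

```python
def dac_quantize(w, bits=8, w_min=-1.0, w_max=1.0):
    # Snap a learned weight to the nearest level an n-bit DAC can hold
    # on its capacitor: 2**bits evenly spaced levels across [w_min, w_max].
    levels = 2 ** bits - 1
    step = (w_max - w_min) / levels
    code = round((min(max(w, w_min), w_max) - w_min) / step)
    return w_min + code * step

def mac(weights, inputs):
    # Idealized analog multiply-accumulate (what an op-amp array computes).
    return sum(w * x for w, x in zip(weights, inputs))

weights = [0.3141, -0.2718, 0.5772, -0.1234]  # made-up learned values
inputs = [1.0, 0.5, -0.5, 2.0]

exact = mac(weights, inputs)
stored = [dac_quantize(w) for w in weights]   # what the capacitors hold
approx = mac(stored, inputs)
# Per-weight error is at most half a DAC step:
# (w_max - w_min) / (2 * (2**bits - 1)), about 0.004 here.
```

Since training itself tolerates far larger gradient noise, an 8-to-12-bit write path is plausibly accurate enough for the forward pass to run entirely in the analog domain.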

An academic-article or patent search for analog artificial networks reveals a great deal of work over the last forty years, and the research trend continues. Computational analog circuits are well developed and provide a basis for neural arrays.

Could the current obsession with digital computation be clouding the common view of AI architectural options?

Is hybrid analog the superior architecture for artificial networks?

**Footnotes**

[1] The PAC (probably approximately correct) Learning Framework relates acceptable error $\epsilon$ and acceptable doubt $\delta$ to the sample size required for learning for specific model types. (Note that $1 - \epsilon$ represents accuracy and $1 - \delta$ represents confidence in this framework.)
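For a finite hypothesis class $\mathcal{H}$ and a consistent learner, the textbook PAC bound (stated here for illustration; it is not specific to neural networks) ties these quantities together: a sample size of

$$ m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right) $$

suffices for the learned hypothesis to have error at most $\epsilon$ with probability at least $1 - \delta$.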

[2] Stochastic gradient descent has been shown, with appropriate strategies and hyperparameters, to converge more quickly during learning, and it is becoming a best practice in typical real-world applications of artificial networks.

[3] The Intel Core i9-7960X processor runs at turbo speeds of 4.2 GHz, whereas standard fixed-satellite broadcasting operates at 41 GHz.

[4] Thermal noise can be obtained on silicon by amplifying and filtering electron leakage across a reverse-biased zener diode at its avalanche point. The underlying phenomenon is Johnson–Nyquist thermal noise. Sanguinetti et al. state in 'Quantum Random Number Generation on a Mobile Phone' (2014), "A detector can be modeled as a lossy channel with a transmission probability η followed by a photon-to-electron converter with unit efficiency ... measured distribution will be the combination of quantum uncertainty and technical noise," and there is also Caltech's JTWPA work. Either of these may become a standard for producing truly nondeterministic quantum noise in integrated circuits.
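Whatever the physical source, raw avalanche- or thermal-noise bits are typically biased, so hardware generators add an extraction step before use. A minimal software sketch of the classic von Neumann extractor (the biased source below is a simulated stand-in, not real hardware):

```python
import random

def raw_bits(n, bias=0.7, seed=0):
    # Stand-in for sampled avalanche-noise comparator output: independent
    # but biased bits (real hardware would also need decorrelation first).
    rng = random.Random(seed)
    return [1 if rng.random() < bias else 0 for _ in range(n)]

def von_neumann(bits):
    # Examine non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 discarded.
    # Output is exactly unbiased when the input bits are independent.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

clean = von_neumann(raw_bits(100_000))  # unbiased bits, at a reduced rate
```

The price of perfect debiasing is throughput: for bias $p$, only $2p(1-p)$ of the pairs yield an output bit.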

**References**

- *STDP Learning of Image Patches with Convolutional Spiking Neural Networks*, Saunders et al., U Mass and HAS, 2018
- *General-Purpose Code Acceleration with Limited-Precision Analog Computation*, Amant et al., 2014
- *Analog computing and biological simulations get a boost from new MIT compiler*, Devin Coldewey, 2016
- *Analog computing returns*, Larry Hardesty, 2016
- *Why Analog Computation?*, NSA declassified document
- *Back to analog computing: Columbia researchers merge analog and digital computing on a single chip*, Columbia U, 2016
- *Field-Programmable Crossbar Array (FPCA) for Reconfigurable Computing*, Zidan et al., IEEE, 2017
- *FPAA/Memristor Hybrid Computing Infrastructure*, Laiho et al., IEEE, 2015
- *Foundations and Emerging Paradigms for Computing in Living Cells*, Ma, Perli, Lu, Harvard U, 2016
- *A Flexible Model of a CMOS Field Programmable Transistor Array Targeted for Hardware Evolution* (FPAA), Zebulum, Stoica, Keymeulen, NASA/JPL, 2000
- *Custom Linear Array Incorporates Up To 48 Precision Op Amps Per Chip*, Ashok Bindra, Electronic Design, 2001
- *Large-Scale Field-Programmable Analog Arrays for Analog Signal Processing*, Hall et al., IEEE Transactions on Circuits and Systems, vol. 52, no. 11, 2005
- *A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity*, Indiveri G, Chicca E, Douglas RJ, 2006
- https://www.amazon.com/Analog-Computing-Ulmann/dp/3486728970
- https://www.amazon.com/Neural-Networks-Analog-Computation-Theoretical/dp/0817639497

I would argue that you're onto something. There are some efforts to put AI into analog chips (I think Apple might be doing something with the iPhone). I'm not sure how much research has been done, but I'm sure you can find a white paper somewhere. It's definitely worth researching. My prediction is that there may soon be programmable AI chips that have a set number of inputs and outputs (kinda like bus registers). – Zakk Diaz – 2018-09-12T17:49:47.040

It's not a full answer, but I suspect the main issue is cost. Printing circuits is super cheap at scale, and still pretty expensive in small batches. Discrete GPUs are mass produced already, and work "well enough". An analog chip usually can only do one task well, and the preferred models change quickly. A discrete chip can be programmed to do many different things. If we find a "best" topology for ANNs, maybe it will make sense to make analog chips again. – John Doucette – 2018-09-12T19:49:07.020