Synapse vs a Parameter?

In terms of AI timelines, the biggest question I haven’t seen addressed is the computational equivalence between a synapse and a parameter in modern neural nets. This seems like a very important input for any prediction of when we will have human-level AI.

Moravec’s estimates, which compared the retina against the then-current edge-detection methods, are sort of worthless under the assumption that AI will be built using modern learning methods: feature-engineered code is plausibly much more compute-efficient than learned policies on tasks that are comprehensible to human programmers, and Moravec’s benchmark was the former.

To pump this intuition: if we assume that 1 synapse == 1 parameter, then Moravec’s estimates are more than six orders of magnitude too low. On computers already more powerful than those Moravec predicted would suffice for AGI, the largest models we are able to train are at most about 10 billion parameters, which is roughly as many synapses as the retina has.
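As a back-of-envelope sketch of that gap, assuming 1 synapse == 1 parameter and the usual rough textbook counts (~10^10 synapses in the retina, ~10^14 in the whole brain; both numbers are assumptions, not measurements):

```python
import math

# Back-of-envelope comparison under the strong assumption 1 synapse == 1 parameter.
# All biological counts are rough order-of-magnitude textbook figures (assumptions).
retina_synapses = 1e10   # assumed: ~10 billion synapses in the human retina
brain_synapses = 1e14    # assumed: ~100 trillion synapses in the whole brain
largest_model = 1e10     # "at most 10 billion parameters", from the post

# Today's largest trainable models are roughly retina-sized...
print(f"model / retina synapses: {largest_model / retina_synapses:.1f}x")

# ...while the whole brain is still about 4 orders of magnitude beyond them.
print(f"brain / model: 10^{math.log10(brain_synapses / largest_model):.0f}")
```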

A very interesting question, then, is how many parameters a modern learned model needs to have capabilities plausibly similar to those of a human retina. This seems hard to investigate, but not impossible. If we look at the first few layers of a conv net, they seem to be doing the sort of edge detection that the retina does.
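One crude way to get a foothold, as a sketch: count the parameters in those early layers and set the number against retina synapse estimates. The snippet below uses torchvision’s ResNet-18 purely as an example; the model choice and the cutoff for “early layers” are both assumptions, and the weights themselves don’t matter for a parameter count.

```python
import torch
from torchvision import models

# Sketch: count parameters in the early layers of a standard conv net, as a
# crude point of comparison with retina synapse counts. The model choice
# (ResNet-18) and the cutoff for "early, edge-detecting layers" are assumptions.
model = models.resnet18()  # weights are irrelevant for a parameter count

# Treat the stem plus the first residual stage as the "retina-like" part.
early = torch.nn.Sequential(model.conv1, model.bn1, model.layer1)
n_params = sum(p.numel() for p in early.parameters())
print(f"early-layer parameters: {n_params:,}")  # on the order of 1e5

# If these ~1e5 parameters really did the work of the retina's ~1e10 synapses
# (the synapse figure is itself an assumption), that would point toward
# parameter > synapse; more likely, these layers capture only a small fraction
# of retinal function, which is exactly what needs investigating.
```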

I think this would be a high-leverage thing to investigate, as both parameter > synapse and parameter < synapse are not implausible at first glance. Some people think the brain is doing something mysterious and better than backpropagation. Some people think it is just doing a really shitty, Rube Goldberg approximation of backpropagation. Artificial spiking neural networks perform really poorly compared to standard architectures, which may imply parameter > synapse, etc.

If we assume AGI will come from scaling up current methods (rather than, as Moravec predicted, very efficient programs that imitate the aggregate function of thousand-neuron assemblies), then this question is very pertinent to any prediction. Is anyone here aware of any work on this?

3 thoughts on “Synapse vs a Parameter?”

  1. There was a recent paper where they tried to learn to imitate the simulation of a neuron using deep learning. If you assume that the simulation software captured all of the essential complexity of neuronal behavior, then the number of parameters needed to imitate that behavior, for a single neuron or for individual dendritic synapses, gives you an upper bound on how many parameters per synapse you need.
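To make that upper-bound arithmetic concrete, a minimal sketch with placeholder numbers (the figures below are assumptions, not results from the paper):

```python
# Sketch of the comment's upper-bound arithmetic. Both numbers below are
# placeholders (assumptions), not figures from the paper in question.
imitation_params = 1e7       # assumed: parameters of the DNN imitating one neuron
synapses_per_neuron = 1e4    # assumed: synapses on the simulated neuron

# If a DNN of this size reproduces the neuron's input-output behavior, each
# synapse costs at most this many parameters:
upper_bound = imitation_params / synapses_per_neuron
print(f"upper bound: {upper_bound:.0f} parameters per synapse")
```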
