Dream functions, Neural networks and random inputs

First of all, sorry! This will be a very long post!

Most of the information can be found at :
https://www.lucidity.com/LD8DFM.html
which is an excerpt of Stephen LaBerge’s “Lucid Dreaming” (1985).

In this post, I’ll submit two hypotheses about dream function. They are based on the following remarks:
=> dream imagery is produced from random impulses;
=> neural networks(1) are effective brain simulators.
From these remarks, I will try to find out why random inputs can be useful in a neural network, and therefore in the brain.

(1) Neural networks are AI computer programs which attempt to imitate the way a human brain works. A neural network works by creating connections between artificial neurons, the computer equivalent of biological neurons. The organization and weights of the connections determine the output.

Dream imagery and random impulses:

In 1977, Drs. Allan Hobson and Robert McCarley of Harvard University found that in REM sleep, the cortex is bombarded by random impulses arising from the brainstem, and that dream imagery results from the brain’s attempt to make sense of this random stimulation (their “activation-synthesis” model).

Concerning dream function, we will retain that Hobson and McCarley saw dreaming as a maintenance process for the neurons: it activates and tests at regular intervals the brain circuits that underlie our behavior, including cognition and meaning attribution, and this test program is essential to normal brain-mind functioning.

In 1983, Nobel Laureate Francis Crick and Graeme Mitchison proposed another function of dream sleep. Their theory was derived from the hypothesis that the cerebral cortex, as a completely interconnected network of neurons, can become overloaded and develop spurious, “parasitic” modes of behavior, which REM sleep would serve to “unlearn” through a reverse learning process.

Thus, the dream function, in Crick and Mitchison’s view, would be to remove these parasitic modes of behavior; in their words, “we dream in order to forget”.

In brief, it seems that random impulses in REM sleep could be useful in maintaining, testing and correcting the brain’s neural network.

Comparison with artificial neural networks:

Moreover, we can wonder whether we’ll find an equivalent in computer neural networks.
And indeed we find it in NN learning: training data must be presented to the NN in random order. If not, the network can be “traumatized” by the first examples and learning fails.
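
As a concrete illustration, here is a minimal self-contained Python/NumPy sketch of this idea (my own toy example, not taken from LaBerge’s text): the dataset is deliberately stored class-by-class, but every epoch it is presented to a one-neuron “network” in a freshly shuffled order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset deliberately stored class-by-class ("excessively organized").
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A single logistic neuron trained by stochastic gradient descent.
w, b = np.zeros(2), 0.0
for epoch in range(20):
    for i in rng.permutation(len(X)):              # random presentation order
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))  # prediction in (0, 1)
        grad = p - y[i]                            # logistic-loss gradient
        w -= 0.1 * grad * X[i]
        b -= 0.1 * grad

accuracy = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"accuracy: {accuracy:.2f}")
```

Replacing the shuffled order with the raw class-by-class order is exactly the “organized data” situation discussed above.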

That’s my first hypothesis, and it’s not so far from Hobson’s. We can compare a NN to a river system. Imagine a landscape with hills and valleys. Random water drops falling on the landscape will follow different paths and irrigate different parts of it. In case of NN trauma, it’s as if all the drops always flowed down the same river: the output of the NN will be the same, regardless of the inputs. It’s the “parasitic mode of behaviour” Hobson is talking about.
Let’s suppose a dream where the first random input is “elephant”, and the brain makes the following associations: elephant, trunk, sex. And for the second random input, “cliff”: cliff, grotto, vagina, sex. There must be a problem somewhere!
In a dream, while stimulated by random inputs, the brain could be able to recognize such problems and correct them.

Hobson also stresses the problems which could be caused by the modifications produced by experience.

These problems find their equivalent in artificial NNs too, where they are called “catastrophic interference”: when a trained network learns new material, the new learning can abruptly erase what was learned before.
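
To see the effect concretely, here is a self-contained NumPy sketch (again my own toy illustration, not taken from the literature): a small network first learns a function on one input region (task A), is then trained only on a second region (task B), and typically forgets much of task A; presenting the two tasks randomly interleaved avoids this.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_net():
    # A tiny one-hidden-layer regression network (8 tanh units).
    return [rng.normal(0, 0.5, (1, 8)), np.zeros(8),
            rng.normal(0, 0.5, (8, 1)), np.zeros(1)]

def forward(net, x):
    W1, b1, W2, b2 = net
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def sgd_step(net, x, t, lr=0.05):
    W1, b1, W2, b2 = net
    h, y = forward(net, x)
    e = y - t                        # output error
    dh = (e @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    W2 -= lr * np.outer(h, e); b2 -= lr * e
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh

def error(net, X, T):
    return float(np.mean((forward(net, X)[1] - T) ** 2))

# Task A: y = sin(x) on [-3, 0].  Task B: y = sin(x) on [0, 3].
XA, XB = rng.uniform(-3, 0, (100, 1)), rng.uniform(0, 3, (100, 1))
TA, TB = np.sin(XA), np.sin(XB)

# Sequential training: learn A, then train on B only -> A is forgotten.
net = make_net()
for _ in range(100):
    for x, t in zip(XA, TA): sgd_step(net, x, t)
for _ in range(100):
    for x, t in zip(XB, TB): sgd_step(net, x, t)
print("sequential : error on task A =", round(error(net, XA, TA), 3))

# Randomly interleaved training: both tasks are retained.
net = make_net()
X, T = np.vstack([XA, XB]), np.vstack([TA, TB])
for _ in range(100):
    for i in rng.permutation(len(X)): sgd_step(net, X[i], T[i])
print("interleaved: error on task A =", round(error(net, XA, TA), 3))
```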

In conclusion, we can formulate the hypothesis that random inputs are useful in artificial neural networks, correcting learning problems due to excessively organized input data causing traumas, and enabling progressive learning.
It could be the same in the human brain.

Congrats to all those who read this long post. :smile:

In my previous post, I presented two concepts which, in my opinion, could explain the dream functions. These concepts were:

  • random inputs and their effects;
  • a dual neural network: one of them is randomly stimulated while the other collects the outputs of the first.

I found a scientific article in which the authors explain that they could build a “Creativity Machine” from these two concepts. They call it a “dreaming” neural network.
For instance, imagine a neural network which has been trained to recognize font characters from A to Z. If it’s stimulated by random inputs, it can create new letters from what it has learned. Then a second neural network collects the results.
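
Here is a schematic Python sketch of this two-network idea. Everything in it is a crude stand-in (the “generator” weights are random instead of trained, and the “critic” is a trivial scoring rule), so it only shows the architecture, not Thaler’s actual system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "generator": maps a small code vector to an 8x8 pattern. In a real
# Creativity Machine this mapping would come from training on actual letters;
# here the weights are random just so the sketch runs.
G = rng.normal(0, 0.3, (4, 64))

def generate(code):
    """First network: turn a (random) input code into a candidate pattern."""
    return np.tanh(code @ G).reshape(8, 8)

def critic(pattern):
    """Second network (trivial stand-in): scores how plausible a pattern is.
    Here: enough 'ink', but not saturated."""
    return 1.0 - abs(float(np.mean(np.abs(pattern))) - 0.5)

# "Dreaming" loop: random stimulation in, filtered novelties out.
kept = [p for p in (generate(rng.normal(0, 1, 4)) for _ in range(100))
        if critic(p) > 0.9]
print(f"kept {len(kept)} of 100 randomly generated patterns")
```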

It shows that during the dream state, the brain can create, from random inputs, new patterns which can be useful (or not) in concept attainment and world understanding.
Thus, randomness of inputs doesn’t mean inconsistency of outputs, contrary to what Drs. Allan Hobson and Robert McCarley thought in 1977.

I’ll try to find more information about that.

Geoffrey Hinton created Deep Belief Networks, which are hierarchical generative models, i.e. neural networks stacked on top of each other, capable of generating data, i.e. “dreaming” or “fantasizing”. See a YouTube talk here, or a demo here (click a digit top-left and press play).
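
For the curious, here is a minimal sketch of how such a generative layer “fantasizes” by Gibbs sampling, the mechanism behind Hinton’s demo. The weights here are random placeholders, so the output is noise; in a trained network the samples would drift toward digit-like patterns:

```python
import numpy as np

rng = np.random.default_rng(3)

# One RBM layer: 64 visible units ("pixels"), 16 hidden units. In Hinton's
# demo the weights come from training on handwritten digits; here they are
# random placeholders so the sketch runs.
W = rng.normal(0, 0.1, (64, 16))
bv, bh = np.zeros(64), np.zeros(16)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

# "Fantasizing": start from random pixels and alternate Gibbs sampling steps;
# in a trained network the visible units drift toward familiar-looking data.
v = sample(np.full(64, 0.5))
for _ in range(100):
    h = sample(sigmoid(v @ W + bh))    # sample hidden units given visible
    v = sample(sigmoid(h @ W.T + bv))  # sample visible units given hidden

print(v.reshape(8, 8))
```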

Friston also uses the same kind of generative models, only in a different theoretical framework. He argues the brain learns using “active inference”, which can be modelled by “empirical Bayes” in a dynamic hierarchical neural network.

These are pointers to more current and relevant literature on this subject.

I think I have to correct a piece of information I gave in my second post: in Stephen Thaler’s Creativity Machines, noise is not added to the inputs but to the weights of the connections between neurons. So, in my opinion, it doesn’t emulate the actual brain functioning during REM sleep, and we have to forget this.

As for Hinton’s “dreaming” and “fantasizing”, thank you for this information. As far as I understand his very interesting new neural network architecture (a long time has passed since I worked with NNs, and what he describes had not yet been invented, so I have some trouble getting back into the theory), I think it would be better termed “expectation”. His method enhances learning by minimizing the “surprise” between expectations and observations (what Friston calls “minimizing the free energy”). So, at first sight, it’s more related to the way we learn when awake during the day, and not truly related to the random firing of neurons during REM sleep.
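
To illustrate in the simplest possible terms what “minimizing surprise” means, here is a toy Python caricature (my own, not Hinton’s or Friston’s actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)

# The system holds an internal expectation mu of a sensory signal and nudges
# it toward each observation, i.e. gradient descent on the squared
# prediction error ("surprise").
mu = 0.0                          # internal expectation
for _ in range(500):
    obs = rng.normal(3.0, 0.5)    # noisy observation, true mean 3.0
    surprise = obs - mu           # prediction error
    mu += 0.02 * surprise         # update expectation to reduce future surprise
print(f"learned expectation: {mu:.2f}")   # ends up close to 3.0
```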

Another interesting point in Hinton’s model is that he found an efficient way around the problems that arise when errors backpropagate through the hidden layers of the neural network, as sometimes happened in old multilayer perceptrons. But I don’t think there has been much investigation into the effects of this random firing on neural backpropagation.

:confused: Basilus, there might be some faulty scholarship here. See the 1994 US Patent 5,659,666, where Thaler addresses the use of all forms of noise in a neural network to induce productive dreaming: patft.uspto.gov/netacgi/nph-Pars … PN/5659666. He incorporated such principles to model all aspects of cognition and consciousness in his papers from the 90s.

Thanks, Sploosh, for finding the text of the patent. I didn’t know about it, so my comment above was based on Stephen Thaler’s articles on the Imagination Engines website.

The article Neural Networks That Autonomously Create and Discover emphasizes the gradual degradation of the network mapping through random disturbances applied to its connection weights. It explains how Creativity Machines utilize what Thaler calls the “virtual input effect”. In A Quantitative Model of Seminal Cognition, this “virtual input effect” is explained as the network “perceiving” illusory inputs (when in fact they are set to zero) while its inner architecture is progressively destroyed by pruning its connections. Thaler compares this effect to “sensory deprivation, in effect hallucinating within a silent and darkened room”.

A similar explanation can be read in The Imagination Engine® or Imagitron™ where the Imagination Engine is “internally ‘tickled’ by randomly varying the connection weights joining neurons”.

Here, in A Quantitative Model of Seminal Cognition: The Creativity Machine Paradigm, the Imagination Engine is described in the abstract as a “synaptically perturbed neural network”. Later, it is explained more precisely that it doesn’t require any external input:

“When supplied no external inputs, the production of meaningful activations by the network relies upon a different brand of vector completion than is normally discussed. Rather than fill in incomplete or corrupted input patterns, the net attempts to complete internal, noise-induced activation patterns within the net’s encryption layers. Therefore, any local or temporary damage to the network’s mapping is interpreted by downstream layers as some “familiar” activation pattern normally encountered upon application of a training exemplar to the network’s inputs (Thaler, 1995). Because of the many combinatorial possibilities in perturbing connection weights within a network, we arrive at a means for generating proportionately more novel schema than is possible with input perturbations alone. Furthermore, because the connection traces within a trained neural network generally correspond to the rules binding the underlying conceptual space together, such stochastic perturbation schemes serve to soften these rules, in turn allowing a gradual departure from the known space of possibilities. The result is a strictly neurological search engine whose internal noise level may be parametrically increased to achieve progressively more novel concepts. I call such a chaotic network an imagination engine or IE.”

All these articles, when describing the Imagination Engine of the Creativity Machine, stress the addition of noise within the connection weights. It sounds like input perturbations are considered rather useless, or at least less efficient, so that the inputs are generally set to a constant.
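
Here is a minimal NumPy sketch of that setup, as I understand it from the passages above (the weights are random stand-ins, not a trained Imagination Engine): the inputs are clamped to zero while noise of increasing amplitude is added to the internal weights and biases, parametrically driving the output away from the unperturbed mapping.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random stand-in parameters; in an actual Imagination Engine they would come
# from training, and the perturbed activations would then resemble "familiar"
# training patterns as the quoted passage describes.
params = [rng.normal(0, 1, (8, 16)), rng.normal(0, 1, 16),   # W1, b1
          rng.normal(0, 1, (16, 4)), rng.normal(0, 1, 4)]    # W2, b2
x = np.zeros(8)                           # inputs set to a constant (zero)

def forward(W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)              # with x = 0, activity stems from b1
    return np.tanh(h @ W2 + b2)

baseline = forward(*params)
for noise in (0.01, 0.1, 0.5):
    # Add noise to every internal connection weight and bias.
    perturbed = [p + rng.normal(0, noise, p.shape) for p in params]
    drift = np.linalg.norm(forward(*perturbed) - baseline)
    # Raising the noise level parametrically increases the departure from
    # the unperturbed mapping.
    print(f"noise {noise}: output drift {drift:.3f}")
```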

Now, the US patent may cover aspects which are not used in the Creativity Machine strictly speaking. And indeed, in the “Discussion of the prior art” chapter of the US patent, one can read:

“Therefore, a neural network trained to generate the surface profiles of some device or object such as a known mountain range would tend to produce very plausible but unfamiliar mountain ranges if the inputs are subjected to random stimulations. Similarly, a neural network trained to only produce classical music would tend to produce potential classical themes when exposed to random inputs”.

This looks more like Hobson’s theory of dream generation.

Sorry, Hinton has done nothing new but create some misleading terminology and strategically omit critical references (see the numerous Thaler patents, wherein the critic algorithm may take on any form, and “A Proposed Symbolism for Network-Implemented Discovery Processes,” World Congress on Neural Networks, 1996, where all manner of noise-driven cascades are described).

Sorry again. From a functional perspective, neural networks are largely synapses, representing a volume effect. Comparatively, input layers are a surface effect. Therefore, networks are most sensitive to any “dinking around” with the connections. Noisy PGO bursts to the cortex are likely only the “spark plug.” Experience emotion in your dream (e.g., angst) and the endocrine system does its job, permeating the entire synaptic structure with perturbations that can transmogrify your “customs house,” as well as provide abrupt discontinuities within dream sequences. :woo: