Wednesday, July 7, 2010

Reservoir neural network (a concept)

Intro
Sometimes rather interesting ideas come to my mind, and I promise myself to implement them, but then I put them aside, so they just pile up in the back of my memory. Some details can disappear with time, hence I think it's worth writing these ideas down somewhere so that later they could be useful for me or somebody else.

Description

Goal: General-purpose learning with self-adaptation of the artificial neural network’s (ANN) structure and parameters.

Idea: The idea is inspired by the influence of biochemical reactions and spatial relations on the brain’s functioning, which is not considered in most known ANN models. To implement this, the ANN is placed inside an expandable virtual 2D or 3D reservoir which can have different zones affecting signal transmission, activation of nodes, learning rates etc.
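To make this more concrete, here is a minimal Python sketch of such a reservoir. All names (Zone, signal_gain, lr_scale etc.) are invented for illustration; a zone here is simply a circular region that scales signal transmission and the learning rate of the nodes located inside it.

```python
import math

class Zone:
    """A circular region of the reservoir that scales signal transmission
    and learning rate for any node located inside it."""
    def __init__(self, center, radius, signal_gain=1.0, lr_scale=1.0):
        self.center = center
        self.radius = radius
        self.signal_gain = signal_gain
        self.lr_scale = lr_scale

    def contains(self, pos):
        return math.dist(pos, self.center) <= self.radius

class Node:
    def __init__(self, pos):
        self.pos = pos            # location inside the reservoir
        self.activation = 0.0

class Reservoir:
    """Expandable 2D container holding nodes and zones."""
    def __init__(self, size=(10.0, 10.0)):
        self.size = size
        self.nodes = []
        self.zones = []

    def modulation_at(self, pos):
        """Combined effect of all zones covering a given position."""
        gain, lr = 1.0, 1.0
        for zone in self.zones:
            if zone.contains(pos):
                gain *= zone.signal_gain
                lr *= zone.lr_scale
        return gain, lr

# Example: a "fatigue" zone that damps signals and slows learning.
reservoir = Reservoir()
reservoir.zones.append(Zone(center=(3.0, 3.0), radius=2.0,
                            signal_gain=0.5, lr_scale=0.2))
reservoir.nodes.append(Node(pos=(2.5, 3.5)))
print(reservoir.modulation_at(reservoir.nodes[0].pos))   # -> (0.5, 0.2)
```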

The network’s structure and reservoir parameters change over time in the following ways:
- New nodes and connections can appear.
- Some nodes and connections can be removed dynamically (to implement forgetting and/or to give other nodes more space to function).
- Nodes can change their location, moving towards “coherent” nodes, so that nodes with correlated outputs tend to be located closer together, organizing into structures (see the sketch after this list).
- Connections can change their weight and length (the latter should have some impact on signal transmission).
- The size and shape of different zones in the reservoir change, affecting the ANN’s functioning without training or corrections.
- The reservoir can have regulators defining the number and parameters of zones. These regulators can be changed externally (by a user or some control program, which models eating, physical actions and psychology), by some law (which models the change of daytime, biorhythms etc.), or from the current state of the nodes within the reservoir (which models self-control and self-regulation). A combination of all of these can also be used.
- The reservoir’s size can change to house as many nodes as required, or shrink if there is too much free space. This is needed to implement the evolution of the ANN.
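Here is a rough sketch of the third point, moving nodes with correlated outputs towards each other. It assumes each node keeps a short history of its recent outputs; the step size, the correlation threshold and the function names are arbitrary choices for illustration, and any object with a pos attribute (like Node above) will do.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation of two recent output histories."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def attract_coherent_nodes(nodes, histories, step=0.05, threshold=0.7):
    """Pull nodes with strongly correlated outputs towards each other,
    so that coherent nodes gradually cluster inside the reservoir."""
    positions = {n: np.array(n.pos, dtype=float) for n in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if correlation(histories[a], histories[b]) > threshold:
                delta = positions[b] - positions[a]
                positions[a] += step * delta
                positions[b] -= step * delta
    for n in nodes:
        n.pos = tuple(positions[n])
```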

Extension 1
The scheme above gives a general outline for dynamic creation and training of an ANN in a complex environment. This extension provides an idea for building a hierarchy within the network. There are several variants (which can be used simultaneously):

1. Embedded reservoirs. When a new node appears, it is placed in its own reservoir if it is located rather far from all other nodes. This reservoir can have its own zones and reservoirs, which are (partially) independent from the parent reservoir. Each reservoir can have only one parent, while each parent can have multiple child reservoirs. The decision whether a new reservoir should be created can be made by judging the minimal distance between the new node and the existing ones: if this distance exceeds some dist_critical, a new reservoir is created (see the sketch after this list). For embedded reservoirs the critical distance can be reduced logarithmically and can optionally depend on the reservoir’s size.
2. United nodes. Since nodes with correlated outputs move towards each other, once the maximal distance between such nodes falls below some threshold, these nodes can be separated out by creating a new reservoir at their current location and placing them inside it. Again, for an embedded reservoir the threshold for uniting its nodes can be reduced on a logarithmic scale.
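The decision rules from both variants can be written down roughly as follows. Only dist_critical comes from the description above; the logarithmic reduction by nesting depth is just one possible interpretation, and all function names are made up.

```python
import math

def should_create_embedded_reservoir(new_pos, existing_positions, dist_critical):
    """Variant 1: create a child reservoir when the new node is far away
    from every existing node (minimal distance exceeds dist_critical)."""
    if not existing_positions:
        return False
    min_dist = min(math.dist(new_pos, p) for p in existing_positions)
    return min_dist > dist_critical

def critical_distance(base_dist, depth):
    """One way to reduce the critical distance logarithmically
    with the nesting depth of the embedded reservoir."""
    return base_dist / math.log(depth + math.e)

def should_unite_nodes(positions, unite_threshold):
    """Variant 2: wrap a group of coherent nodes into a new reservoir
    when the largest pairwise distance in the group falls below the threshold."""
    pairs = [(positions[i], positions[j])
             for i in range(len(positions))
             for j in range(i + 1, len(positions))]
    if not pairs:
        return False
    return max(math.dist(a, b) for a, b in pairs) < unite_threshold
```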

The reservoirs described above can be used further as independent units, i.e. they can be treated as single nodes, copied or deleted, form their own reservoirs by the 2nd variant, etc.
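Treating a reservoir as a single node can be sketched with a common interface, something like the composite pattern. How a reservoir aggregates its children’s outputs is an open design choice; a simple mean is used here purely as a placeholder.

```python
class Unit:
    """Common interface: both plain nodes and whole reservoirs expose
    a position and an output, so a reservoir can be plugged in wherever
    a node is expected."""
    def output(self):
        raise NotImplementedError

class NodeUnit(Unit):
    def __init__(self, pos):
        self.pos = pos
        self.activation = 0.0

    def output(self):
        return self.activation

class ReservoirUnit(Unit):
    def __init__(self, pos, children=None):
        self.pos = pos
        self.children = list(children or [])

    def output(self):
        # One possible aggregation: the mean of the children's outputs.
        return sum(c.output() for c in self.children) / max(len(self.children), 1)
```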

Problem with meta-knowledge
I position this concept as an approach to implementing data-independent learning which deals with different types of input data and can solve learning, inference, recognition and prediction problems. But it's unknown how to make such a system decide which kind of data or task it faces, i.e. the system should somehow get to know about this variety of problems and data. In other words, the system should be able to form and extract meta-knowledge, and implementing this might be a big problem. I believe that checking this would require at least some experimentation.
