Friday, February 24, 2012

The NexGen Algorithms


Modern evolutionary algorithms do not learn. They can be adaptive during the search for a solution, but when the run is over they forget everything, and if we re-run the problem the algorithm starts its work from scratch. This is normal for algorithms, but if we look at living beings we'll see that they do not forget everything that catastrophically! Yes, a single attempt can be fruitless, but after trying to solve some problem hundreds and thousands of times, new knowledge and experience eventually emerge. This is the way people learn to walk, speak, handle a spoon and many other things. As people get older, their past experience helps them learn new things more easily (if these new things correlate with that experience).
On the other hand, there are incremental learning algorithms, which have been developed more or less actively for the last two decades. Those algorithms can learn in complexifying and/or dynamic environments, which makes them more adaptive, and they do not suffer that much from 'after-run amnesia'. There are certain problems though, like 'catastrophic forgetting', when after several updates the algorithm totally forgets what it learned initially and starts to behave in a new, unexpected way. Well, that's normal for humans too: it's as if a person who learned to speak English in childhood, after living in France for years and speaking only French, forgets their first language. I believe that if catastrophic forgetting is not very fast, then it's not a major problem. The larger issue here is that incremental learning algorithms are usually applicable to a single domain only. That is, they do learn something and collect past experience, but this experience is almost useless for solving other problems. So 'incremental learning' could be called something like 'temporary adaptation': good in its own way, but still far from a true learning algorithm. And, yes, incremental learning is usually about machine learning, not optimization.
The ultimate goal here may be the creation of algorithms which simply learn from any problem and extract some general abstract regularities and rules to improve their performance on future tasks. This is closely related to the meta-learning concept, and I believe that it can be done if some external trainable control system is attached to the algorithm to guide its behaviour. This is a very ambitious task, connected with the creation of very adaptive algorithms, and probably one of the milestones to overcome on the way to true AI, since some implicit method 'to learn how to learn' is already built in by evolution inside living organisms (animals can improve their skills!) and is essential for survival and for the emergence of highly organized intelligence.
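Just to make the 'control system attached to an algorithm' idea a bit more concrete, here is a minimal sketch in Python (everything in it is my own illustration, not an existing implementation: the toy sphere objective, the StepSizeController class, and its 1/5-th-success-rule-like update). A tiny (1+1)-style evolutionary loop mutates a single solution, while an external controller watches whether recent mutations succeed and adjusts the mutation step size accordingly.

import numpy as np

def sphere(x):
    # Toy objective, assumed here only for the example.
    return float(np.sum(x ** 2))

class StepSizeController:
    """External control: grow the step after a success, shrink it after a failure.
    A hard-coded heuristic stands in for the trainable controller discussed above."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
    def update(self, success):
        self.sigma *= 1.5 if success else 0.85

def run(f, dim=10, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    fx = f(x)
    ctrl = StepSizeController()
    for _ in range(iters):
        y = x + ctrl.sigma * rng.normal(size=dim)   # mutate the current solution
        fy = f(y)
        success = fy < fx
        if success:
            x, fx = y, fy
        ctrl.update(success)                        # the controller guides the EA
    return x, fx

best_x, best_f = run(sphere)
print(best_f)

In a real 'NexGen' algorithm the controller would itself be trainable and would persist between runs, instead of applying one fixed rule.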
There's a well-known metasystem transition theory by V. Turchin, which states that a system enters a new level of development when a meta-level control system appears. In existing evolutionary algorithms some origins of such systems can already be found, like rules for adaptation or heuristics such as: if we keep moving in the same direction, then increase the movement speed; or try to cluster variables to understand their relations for proper optimization. The next big step here is the possible unification of these heuristics and the addition of memory to the EA, so that the algorithm could remember what problems it has already solved and which rules turned out to be useful or useless. Having this memory, it could be possible to recognize common parts between problems (like, 'hmm, I solved something like this before, and I probably should apply the corresponding set of optimization rules'). That may be my fantasy, and there are a lot of open questions (how to store information about problem features, how to compare problems of different dimensionalities, etc.), but this is a real task for scientists, one which could probably boost the development of algorithms. That's why NexGen :)
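And a rough sketch of the 'memory' part (again purely illustrative: the ProblemMemory class and its crude feature vector are my own placeholders, and how to describe and compare problems properly is exactly the open question mentioned above). The idea is simply to store coarse features of each solved problem together with the settings that worked, and for a new problem to reuse the settings of the nearest remembered problem.

import numpy as np

class ProblemMemory:
    def __init__(self):
        self.records = []                 # list of (features, settings) pairs

    @staticmethod
    def features(dim, sample_costs):
        # Crude, fixed-length description of a problem: log-dimensionality
        # plus the mean and spread of a few randomly sampled objective values.
        c = np.asarray(sample_costs, dtype=float)
        return np.array([np.log(dim), c.mean(), c.std()])

    def remember(self, feats, settings):
        self.records.append((feats, settings))

    def suggest(self, feats, default):
        # Reuse the settings of the most similar remembered problem, if any.
        if not self.records:
            return default
        dists = [np.linalg.norm(feats - f) for f, _ in self.records]
        return self.records[int(np.argmin(dists))][1]

memory = ProblemMemory()
feats = ProblemMemory.features(dim=10, sample_costs=[3.2, 7.9, 5.1])
memory.remember(feats, {"sigma": 0.5, "population": 20})
print(memory.suggest(ProblemMemory.features(12, [4.0, 6.5, 5.5]),
                     default={"sigma": 1.0, "population": 50}))

Of course, such a three-number feature vector is only a stand-in; finding problem descriptions that really transfer experience between tasks is the hard part.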
