Recent thoughts and readings have made me doubt the promises of neuroevolution. I still believe that it is a really good research domain and a genuinely easy approach to many problems which are complicated to solve with traditional methods (board game playing, adaptive behavior, robotic control in complex situations, artificial music and art, etc.). But the basis of my doubts is the randomness of evolutionary search, which yields solutions with poorly predictable structure and features, without any "physically" clear explanation of why they work, and thus not very reliable when it comes to practice (nobody wants to risk much, you know).
Consider a classification problem in supervised learning. A feasible approach is to extract regularities from the training data and use problem-dependent priors to create a structure for the classifier (here I mean what parts the classifier has and how they are tuned and interconnected). This approach can be probabilistic, since priors often come from statistics and from that somewhat tricky thing called "experience", and hence the solution can vary a bit when the approach is restarted. One can even say that the result is in some way smart. But in any case the classifier's structure is guided by knowledge.
Now look at the NE approach. We have (preprocessed) training data, and an NE algorithm is run to minimize the error function. The resulting structure of the neural network will indeed have some reasons to be what it is (otherwise it would be unable to perform well). But I bet that in most cases it still has some unnecessary elements, while other useful elements and parts are missing. And there is usually still room left for optimizing the connection weights.
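To be concrete about what "run an NE algorithm to minimize the error function" means here, below is a minimal sketch in Python. It is not any particular published method: it evolves only the weights of a tiny fixed-topology network on XOR, and the population size, mutation strength and other parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: XOR, just to have an error function to minimize.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    N_HIDDEN, POP_SIZE, N_PARENTS, SIGMA, GENERATIONS = 4, 50, 10, 0.3, 300
    N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2 flattened

    def forward(w, x):
        # Unpack the flat genome into a one-hidden-layer network and run it.
        W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
        b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
        W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
        b2 = w[-1]
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def error(w):
        # The only thing the evolution "knows" about the problem.
        return np.mean((forward(w, X) - y) ** 2)

    population = rng.normal(0.0, 1.0, size=(POP_SIZE, N_WEIGHTS))
    for generation in range(GENERATIONS):
        errors = np.array([error(w) for w in population])
        parents = population[np.argsort(errors)[:N_PARENTS]]   # truncation selection
        picks = rng.integers(0, N_PARENTS, POP_SIZE - N_PARENTS)
        children = parents[picks] + rng.normal(0.0, SIGMA, (POP_SIZE - N_PARENTS, N_WEIGHTS))
        population = np.vstack([parents, children])             # keep parents (elitism)

    best = min(population, key=error)
    print("final MSE:", error(best))

The point of the sketch is that nothing in the loop constrains what the solution looks like beyond the error itself, and the same holds when the topology is evolved as well.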
This can be illustrated by building a house for the purpose of living in it (yes, just that general). A traditional architect makes a plan, does all the calculations, etc., and builds a house. The house's structure is guided by knowledge of how to build correctly. An evolutionary architect, on the other hand, just builds the house up through trial and error until something suitable is obtained. Now imagine that our evolutionary architect is able to build any house (not only to choose, say, the number of floors and the roof type). There is a rather high probability that this house will be… um, very original and non-standard, not to say ridiculous (for example, having lots of oddly shaped windows, which may be good for design but bad for convenience). So the evolutionary structure of the house is random. One can say that we can guide evolution through additional restrictions and penalties. But these restrictions cannot be very strict, since the purpose is very general (just like "minimize the error"), and moreover it is almost impossible to account for all of them (because that can lead to a ridge-like objective function); thus the space of possible houses is still ve-e-ery large and mostly consists of such evolutionarily "designed" houses, which are still good for living in. Well, at least for now I'm pretty sure that things are like this, and that the same applies to evolving neural networks.
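In NE terms, those "additional restrictions and penalties" are usually extra terms in the objective. Here is a hedged sketch, reusing the error() from the snippet above; the penalty terms and their weights are illustrative assumptions, and the point is that such soft penalties only nudge the search rather than pin the structure down.

    LAMBDA_SIZE, LAMBDA_WEIGHTS = 0.01, 0.001

    def penalized_error(w):
        # Soft penalties: discourage superfluous parts and extreme weights,
        # but leave a huge space of "original" solutions with similar scores.
        size_penalty = np.count_nonzero(np.abs(w) > 1e-3)
        weight_penalty = np.sum(w ** 2)
        return error(w) + LAMBDA_SIZE * size_penalty + LAMBDA_WEIGHTS * weight_penalty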
Is that bad? Yeah. But… I believe that it is possible to bring in self-organization principles, like those which lead to the emergence of scale-free networks, together with some rules of thumb, to make NE algorithms behave better and produce more feasible solutions. And this is what I'm going to do for the next several years.
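As a toy illustration of the kind of self-organization I have in mind, here is the classic preferential-attachment mechanism (in the spirit of Barabási–Albert), where a scale-free degree distribution emerges without anyone designing it; the parameters below are, of course, just illustrative.

    import random
    from collections import Counter

    def preferential_attachment(n_nodes, links_per_node=2, seed=0):
        # Each new node connects to existing nodes with probability
        # proportional to their degree ("the rich get richer").
        random.seed(seed)
        edges = [(0, 1)]
        pool = [0, 1]                      # node i appears once per incident edge
        for new in range(2, n_nodes):
            targets = set()
            while len(targets) < links_per_node:
                targets.add(random.choice(pool))
            for t in targets:
                edges.append((new, t))
                pool.extend([new, t])
        return edges

    degrees = Counter(v for e in preferential_attachment(1000) for v in e)
    print("max degree:", max(degrees.values()))   # a few hubs, many low-degree nodes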