Today I went to the post office to fax an agreement for the funding of my research by the Human Capital Foundation (I happened to win a grant for this year, here (in Russian)). The woman who was faxing the agreement saw either the grant sum or just that it was a funding agreement, and muttered (very quietly, but I could hear it!) something about wanting to "raskulachit'" me. This word dates back to the October Revolution of 1917; it means that I am a bad and mean person because I earn too much money while other people live hard, and therefore some good and honest people should take my money away and beat me. But I am a scientist in Russia; I cannot possibly be rich, it is almost unnatural here.
At first I wanted to tell her everything I thought about this garbage of hers: that it was honest money and I had worked hard for it; that she thinks this way mostly because I am Korean (nationalism has spread widely in Russia during the last two decades, and it is really annoying); that even Robin Hood would not agree with her, etc. But then it occurred to me that it was not really her fault in the first place (although I in no way excuse her) but that of the brainwashing politicians who:
(1) Keep saying that Russia has enemies everywhere and that the whole world wants to bring Russia to its knees. It is a rather popular tool (or should I say weapon?): when a politician needs something, he says that this something will make Russia stronger, and when he doesn't, he simply says that our enemies want it. It is ridiculous, but it works and is applied very often. And the typical enemy, for the average citizen, is either some non-Russian "bastard" who is more prosperous than he is, or a poor immigrant from a third-world country. Communism's legacy reinforced by snobbery. Somehow it reminds me of Jehovah's Witnesses, who think the world is hostile toward them because it belongs to Satan.
(2) Try to unteach people to think for themselves (it is enough to watch any newscast or weekend analytical program on the major channels to see this). The motivation is simple: someone who thinks independently is not so easy to control. And here lies a devilish contradiction with (1): a strong country consists of strong people, not of a bunch of zombies.
And that woman thought this way simply because she had been "forced" to.
Anyway, I am still angry with that faxing woman... (oops, that was on the verge of foul language ;)). I had originally wanted to start with a story about Ernest Rutherford, who had the following policy: when a new assistant applied for a job in his lab, Rutherford gave him a research task. If, having finished the task, the assistant came back asking for the next one, Rutherford fired him, because to become a good scientist one must start analyzing, thinking, and working out on one's own what should be done next. This is essential to the process of scientific maturation. I had planned to build some sort of a good story out of it about not being a bad assistant. But that faxing woman...
Wednesday, March 24, 2010
Tuesday, March 23, 2010
Instinct science
One of the last scenes in "Indiana Jones and the Last Crusade" shows Indiana Jones hanging over a huge crack in the earth, trying to reach the Holy Grail. His father tries to stop him, but Indy, hypnotized by the artifact, keeps saying "I can reach it." Fortunately he is stopped, and everybody gets out of the temple and rides into the sunset.
I think every person has felt something like this at least once. It is like a hunter's instinct: you see the prey and you know you can make it; all you need is to give it a try. The same thing often happens in science. It came to my mind this morning, when I woke up early and went straight to the computer to continue the research that had kept me up until 1 am, because some good results had started to appear. I believe that without this hunter's instinct, when you commit yourself to a goal and then try hard to reach it, it is impossible to do good science (nor is it possible to achieve much of anything else). Was this instinct produced by evolution, or is it a social phenomenon? To me the former seems more likely. Thanks, evolution!
Friday, March 19, 2010
Why I have doubts about neuroevolution, and a possible way out
Recent thoughts and readings have made me doubt the promises of neuroevolution. I still believe it is a really good research domain and a genuinely easy approach to many problems that are complicated to solve with traditional methods (board-game playing, adaptive behavior, robotic control in complex situations, artificial music and art, etc.). But my doubts stem from the randomness of evolutionary search, which yields solutions with poorly predictable structure and features, without any "physically" clear explanation of why they work, and which is therefore not very reliable when it comes to practice (nobody wants to risk much, you know).
Consider a classification problem in supervised learning. A feasible approach is to extract regularities from the training data and use problem-dependent priors to create a structure for the classifier (by structure I mean what parts the classifier has and how they are tuned and interconnected). This approach can be probabilistic, since priors often come from statistics and from that somewhat tricky thing called "experience", and hence the solution may vary somewhat when the approach is restarted. One can even say the result is in some way smart. But in any case the classifier's structure is guided by knowledge.
Now look at the NE approach. We have (preprocessed) training data, and an NE algorithm is run to minimize the error function. The resulting structure of the neural network will indeed have reasons to be what it is (otherwise it would be unable to perform well). But I bet that in most cases it still has some unnecessary elements, while other useful elements and parts are missing. And there is usually still some room left for optimizing the connection weights.
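To make this concrete, here is a minimal illustrative sketch (not any particular NE system, and far simpler than real neuroevolution, which also evolves topology): a fixed 2-2-1 network whose weights are evolved by plain truncation selection and Gaussian mutation to minimize the XOR error. All names and parameters here are my own toy choices.

```python
import math
import random

random.seed(0)

# XOR: the classic toy problem for tiny networks.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """2-2-1 feedforward net with tanh units; w is a flat list of 9 weights."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    """Sum of squared errors over the training set -- the objective NE minimizes."""
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=30, generations=300, sigma=0.4):
    """Evolve only the weights: keep the best fifth, refill with mutated copies."""
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 5]          # truncation selection
        pop = parents + [
            [g + random.gauss(0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=error)

best = evolve()
print(error(best))
```

Note that nothing in this loop says *why* the final nine weights are what they are: the search only cares about the error value, which is exactly the opacity I am complaining about.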
This can be illustrated by building a house for the purpose of living in it (yes, a goal just this general). A traditional architect makes a plan, does all the calculations, etc., and builds the house; its structure is guided by knowledge of how to build correctly. An evolutionary architect just builds the house up through trial and error until something suitable emerges. Now imagine that our evolutionary architect is able to build any house (not merely to choose, say, the number of floors and the roof type). There is a rather high probability that this house will be... um, very original and non-standard, not to say ridiculous (for example, having lots of oddly shaped windows, which may be good for design but bad for convenience). So the evolved structure of the house is random. One can say that we can guide evolution with additional restrictions and penalties. But these restrictions cannot be very strict, since the purpose is very general (just like "minimize the error"), and moreover it is almost impossible to account for all of them (doing so can lead to a ridge-like objective function). Thus the space of possible houses is still ve-e-ery large and mostly consists of such "designed" houses, which are nonetheless still livable. Well, at least for now I am pretty sure that things are like this, and that the same holds for evolving neural networks.
Is this bad? Yeah. But... I believe it is possible to incorporate self-organization principles, like those that lead to the emergence of scale-free networks, together with some rules of thumb, to make NE algorithms behave better and produce more feasible solutions. And this is what I am going to do over the next several years.
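To show the kind of self-organization principle I have in mind, here is a minimal sketch of preferential attachment, the classic growth rule behind scale-free networks: each new node links to existing nodes with probability proportional to their current degree, so a few hubs emerge without anyone designing them. This is only an illustration of the principle, not a proposal for how to wire it into an NE algorithm.

```python
import random

random.seed(1)

def preferential_attachment(n, m=2):
    """Grow a graph: each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    # Start from a small fully connected seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Sampling uniformly from this pool = degree-proportional sampling,
    # because every node appears in it once per incident edge.
    pool = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(pool))   # hubs get picked more often
        for t in chosen:
            edges.append((new, t))
            pool += [new, t]
    return edges

edges = preferential_attachment(2000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# Heavy-tailed degree distribution: a handful of hubs, many low-degree nodes.
print(max(degree.values()), min(degree.values()))
```

A simple local rule, applied repeatedly, produces a global structure with recognizable, predictable statistics; that predictability is exactly what I find missing in raw evolutionary search.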
Labels: complex networks, feasibility, neuroevolution, random house