As someone who has worked closely with machine learning algorithms over the past decade, I have been very suspicious of calling them Artificial Intelligence. The ones that I use: Support Vector Machines, clustering (nearest neighbor), boosted decision trees, and (deep convolutional) neural networks, do not look anything like intelligence as we define it among humans, at least not to me.
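The nearest-neighbor case shows most plainly what I mean. The toy classifier below is my own few lines of Python, not any particular library: it literally stores its training examples and answers a new query by matching the closest memorized pattern. It works, but it is hard to see it as intelligence.

```python
# A minimal nearest-neighbor classifier (a toy of my own, not a library):
# it stores the training examples verbatim and answers by finding the closest
# memorized pattern, which is the sense in which it feels unlike "intelligence".
def nearest_neighbor(train, query):
    """train: list of (features, label); query: feature tuple.
    Returns the label of the closest stored example by squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], query))[1]

examples = [((1.0, 1.0), "cat"), ((5.0, 5.0), "dog"), ((1.5, 0.5), "cat")]
print(nearest_neighbor(examples, (4.0, 4.5)))  # -> "dog", the nearest stored pattern
```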
I have only followed the game-playing machine learning algorithms like AlphaGo at a distance. They seemed interesting, but the state space still felt constrained enough that it didn't seem so different to me: the system could just be memorizing precise patterns, matching them, and applying the correct response. They use reinforcement learning, which is something I haven't spent much time thinking about.
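To make that memorization point concrete, here is a toy sketch of reinforcement learning in a tiny state space. This is my own illustration, nothing like what AlphaGo actually does (AlphaGo combines deep networks with tree search); the point is only that when the state space is small enough, tabular Q-learning literally amounts to filling in a table with the best memorized response for every situation.

```python
# Toy illustration, not AlphaGo: tabular Q-learning on a tiny 1-D "game".
# With a state space this small, "learning" means filling in a table that
# stores the best memorized response for every situation the agent can see.
import random

N_STATES = 6            # positions 0..5; reaching state 5 wins
ACTIONS = [-1, +1]      # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index]: one memorized value per (situation, response) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly replay the best memorized response so far
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # standard Q-learning update toward reward + discounted best future value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned "policy" is just a lookup: for each state, the stored best action.
print([("left", "right")[0 if Q[s][0] >= Q[s][1] else 1] for s in range(N_STATES - 1)])
```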
I haven't had the time (with parenting, politics, and my own projects in mathematics and physics) to follow the field closely. I was interested, though, when they announced that they were going to tackle Starcraft 2. I think the best players end up knowing the game very well, but for me as an amateur it was an example of a game where I had to deal with new information and display some intelligence. Although, when I did my best in competitive play, it was when I had learned responses that could be considered more or less automated.
While the DeepMind AI was defeated once it was hampered similarly to a human player, it is still an impressive showing.
What I would like to see is taking the same AI (not the same network, but the same reinforcement-trained AI) and having it play Warcraft 3, or Defense of the Ancients, or even Civilization 6. There would need to be some mapping of controls and limitations, but if intelligence is actually being trained, then the AI should be successful there after being trained on Starcraft 2.
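To be clearer about what I mean by "some mapping of controls", here is a purely hypothetical sketch. None of these names (make_adapter, sc2_policy, the observation and action mappings) come from DeepMind or any published API; it is only my own illustration of the idea that the thin input/output layer changes while the trained policy stays frozen.

```python
# Hypothetical sketch of the "same trained AI, new game" idea above.
# All names here are invented for illustration; nothing is DeepMind's API.
from typing import Callable, Dict, List

Observation = Dict[str, float]   # whatever the new game exposes
Action = str                     # whatever command the new game accepts

def make_adapter(
    sc2_policy: Callable[[List[float]], int],            # frozen, already-trained policy
    encode_obs: Callable[[Observation], List[float]],    # new-game state -> old input space
    decode_act: Dict[int, Action],                        # old action ids -> new-game commands
) -> Callable[[Observation], Action]:
    """Wrap a policy trained on one game so it can act in another.

    Only the input/output mapping changes; the learned weights do not.
    If the wrapped policy still played well, that would look more like
    transferable intelligence than memorization."""
    def play(obs: Observation) -> Action:
        return decode_act[sc2_policy(encode_obs(obs))]
    return play

# Dummy stand-ins so the sketch runs: a "policy" that just picks the largest input.
def dummy_policy(xs: List[float]) -> int:
    return max(range(len(xs)), key=lambda i: xs[i])

adapter = make_adapter(
    dummy_policy,
    encode_obs=lambda o: [o.get("minerals", 0.0), o.get("army_size", 0.0)],
    decode_act={0: "build_worker", 1: "attack"},
)
print(adapter({"minerals": 50.0, "army_size": 120.0}))  # -> "attack" with these toy numbers
```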
After all, the state space of real life is, by some considerations, effectively infinite. The fact that a computer can be trained to find patterns at a much faster rate than humans doesn't necessarily make it more intelligent; what would is a trained algorithm being dropped into a real-time situation and adapting, finding and relating patterns well enough to succeed in the new situation.
I haven't really read the papers, so when I get time to do so I should be able to think more intelligently on this topic.
The Vox article https://www.vox.com/future-perfect/2019/1/24/18196177/ai-artificial-intelligence-google-deepmind-starcraft-game argues that the Starcraft 2 state space is (probably?) too large to be completely mapped by the machine learning algorithm, so it must be displaying strategy and tactics rather than just exact situational responses.
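A quick back-of-envelope calculation makes the "too large to map" point hard to argue with. The unit count and map resolution below are my own rough, deliberately conservative assumptions, not numbers from the Vox article or from DeepMind; even so, the count of distinct situations dwarfs anything a lookup table could hold.

```python
# Back-of-envelope arithmetic for the "too large to memorize" claim above.
# The unit count and map resolution are my own rough assumptions, not figures
# from the Vox article or from DeepMind.
N_UNITS = 100            # a modest mid-game army plus workers
POSITIONS = 10_000       # a coarse 100 x 100 grid of possible unit locations
configurations = POSITIONS ** N_UNITS   # ignores unit types, health, tech, resources...

print(f"~10^{len(str(configurations)) - 1} board configurations")
# Even this drastic undercount rules out a lookup table: the policy cannot be
# storing an exact response per situation, so it has to be generalizing somehow.
```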