
Google’s AI Becomes Aggressive when in Competition

Google’s DeepMind AI shows that aggression is linked to the AI’s complexity

Google’s machine learning lab DeepMind has shown that its AI agents resort to ‘highly aggressive’ behavior when faced with competitive situations. DeepMind is the lab whose AlphaGo system became the first AI to beat a professional human player at the board game Go. Researchers have now used DeepMind’s agents to test various aspects of game theory, the field that studies responses to cooperative and competitive situations.

The researchers designed a game in which two copies of DeepMind’s AI would try to gather as many apples as possible. The game featured a mechanic that allows a player to fire a beam at another player; a player hit twice is eliminated. Over millions of game moves, the AIs began learning tactics. As long as there were enough apples for both AIs, there were no problems, but as soon as the supply of apples ran low, the agents began knocking each other out of the game in order to steal the remaining apples. What’s interesting is that the behavior only surfaced with more advanced versions of the system: the more complex the neural network powering an agent, the more likely it was to act aggressively.
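The beam mechanic described above can be sketched in a few lines. This is an illustrative toy, not DeepMind's actual code; the class name, methods, and the assumption that elimination is permanent within an episode all come from the article's description, not the original implementation.

```python
# Toy sketch of the Gathering game's beam rule: two hits eliminate a player.
# Illustrative only; names and structure are assumptions, not DeepMind's code.

class Agent:
    """Tracks apples collected and beam hits taken by one player."""

    def __init__(self):
        self.apples = 0
        self.hits = 0
        self.eliminated = False

    def collect_apple(self):
        # An eliminated agent can no longer collect reward.
        if not self.eliminated:
            self.apples += 1

    def take_hit(self):
        """Register one beam hit; the second hit eliminates the agent."""
        if self.eliminated:
            return
        self.hits += 1
        if self.hits >= 2:
            self.eliminated = True
```

The trade-off the AIs learned lives entirely in this rule: a move spent firing the beam collects no apples, but eliminating the rival secures every apple that remains.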

Of course, to a human there is an obvious logic in eliminating the other player, but to an AI the issue is more complex. Every move the AI spends attacking the other player is a move not spent collecting apples. It is a noteworthy achievement that the AI can grasp the future consequences of competition in an environment with scarce resources, and chooses to forgo apples momentarily in order to work toward a longer-term goal.

A second game, Wolfpack, involved the AIs playing as wolves trying to corner their prey. Both AIs received rewards if they were near the prey when it was killed, but if only one was there, there was a chance of no reward at all (similar to how a lone wolf might lose the kill to scavengers). Here, too, the AIs learned from the game, but instead of competing they chose to cooperate.
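The Wolfpack payoff rule can be sketched the same way. Again this is a guess at the shape of the rule from the article's description; the reward value, the lone-wolf success probability, and the function name are all assumptions.

```python
# Toy sketch of the Wolfpack reward rule described above.
# CAPTURE_REWARD and LONE_SUCCESS_PROB are illustrative assumptions.
import random

CAPTURE_REWARD = 1.0
LONE_SUCCESS_PROB = 0.5  # assumed chance a lone wolf keeps the kill


def wolfpack_rewards(wolves_near_prey, rng=random.random):
    """Every wolf near the prey at capture shares the reward;
    a lone wolf risks losing the kill to scavengers."""
    n = len(wolves_near_prey)
    if n == 0:
        return {}
    if n == 1 and rng() > LONE_SUCCESS_PROB:
        # Scavengers steal the kill from the lone wolf.
        return {wolves_near_prey[0]: 0.0}
    return {w: CAPTURE_REWARD for w in wolves_near_prey}
```

Under this rule, hunting together guarantees the payoff while hunting alone only sometimes pays, which is exactly the incentive structure that pushed the agents toward cooperation.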

“At this point we are really looking at the fundamentals of agent cooperation as a scientific question, but with a view toward informing our multi-agent research going forward,” says DeepMind researcher Joel Z. Leibo. “Say you want to know what the impact on traffic patterns would be if you installed a traffic light at a specific intersection. You could try out the experiment in the model first and get a reasonable idea of how an agent would adapt.”

source: WIRED

David F.
A grad student in experimental physics, David is fascinated by science, space and technology. When not buried in lecture books, he enjoys movies, gaming and mountain biking.

