A large team from Meta AI has just published an article (in Science, no less) showing that their automated agent more than held its own in competition with human players at Diplomacy, a negotiation-based game that requires a degree of natural language understanding, the apparent ability to reason about the beliefs and actions of the other players, and the ability to persuade them to do things.

This is fabulous work. The players didn’t know that they were playing with an automated agent, though the platform’s terms of service warn that this is a possibility. As the authors note in the supplementary material, they made a conscious choice not to alert the players further, because that would most likely have changed the way they interacted.

If one wanted, the study could be repeated with an explicit alert that an automated agent is involved.

Since the system, like an automated chess engine, has capabilities that are different from but not inferior to those of human players, I’d really like to know what happens if you do warn the players. It would be fascinating to see whether good players can find weaknesses to exploit. If players can spot the AI, or are told which one it is, they have lots of options. They might ally against it, either because they think it is weak or because they perceive it as dangerously strong. Or they could try to ally with it, especially if it turns out to be good at spotting opportunities that they haven’t seen.

There could also be a side game in which players are challenged to spot the AI and offered a reward for doing so. Mike Lewis, a member of the team, told me that they ran some games internally, and that even when you know what clues to look for, spotting the AI is sometimes surprisingly hard.