However, how the opponent's actions reveal that information depends upon their knowledge of our private information and how our actions reveal it.
This kind of recursive reasoning is why one cannot easily reason about game situations in isolation, which is at the heart of local search methods for perfect information games."
In other words, it's hard to reduce poker to a workable abstraction without compromising on the level of play. Recently, however, two competing groups appear to have overcome that problem: Moravcik's, and another from Carnegie Mellon University, which has not yet published a description of its winning program.
Members of that group, however, have provided pointers to what they did in their previous work.
The language in which DeepStack's creators describe their software is disturbing to anyone worried about being edged out by machines. Moravcik and his team wrote that DeepStack had "intuition" - an ability to replace computation with a "fast approximate estimate".
The machine developed it through "training" on lots of random poker situations. It worked well enough to consistently beat 33 professional players from 17 countries.
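The idea of swapping an expensive calculation for a trained fast estimate can be illustrated with a toy sketch. This is an illustration of the general technique only, not DeepStack's actual architecture: the simplified game, the bucketing scheme and the function names here are all invented.

```python
import random

random.seed(0)

# Toy stand-in for an expensive game-tree computation: a Monte Carlo
# rollout of a simplified game where a hand of given "strength" wins
# each showdown with that probability.
def slow_equity(strength, rollouts=2000):
    wins = sum(random.random() < strength for _ in range(rollouts))
    return wins / rollouts

# "Training": precompute approximate answers for many random
# situations, bucketed by hand strength.
BUCKETS = 20
table = []
for b in range(BUCKETS):
    s = (b + 0.5) / BUCKETS
    table.append(slow_equity(s))

# "Intuition": a fast approximate estimate that replaces the
# expensive rollout with a single table lookup.
def fast_equity(strength):
    b = min(int(strength * BUCKETS), BUCKETS - 1)
    return table[b]

print(round(fast_equity(0.72), 2))  # close to 0.72, computed instantly
```

DeepStack's real version replaces the lookup table with a deep neural network evaluating full poker situations, but the trade is the same: pay the computational cost once, up front, and answer quickly at the table.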
Libratus, the Carnegie Mellon team's product, is apparently based on different principles, using more precise calculations in the final part of the poker hand than in the early stage.
It has beaten four top poker players, who came away in awe: the software managed to remain unpredictable and keep winning. Among other techniques, it varied the size of its bets to maximise profit in a way that even the best human players find too taxing to imitate.
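Bet-size variation of that kind amounts to playing a mixed strategy: committing to fixed probabilities over several bet sizes, so that no single action betrays the hand even though the overall strategy never changes. A minimal sketch - the sizes and probabilities below are made up for illustration and are not taken from Libratus:

```python
import random

random.seed(42)

# Hypothetical mixed strategy for one situation: bet different
# fractions of the pot with fixed probabilities.
BET_MIX = [
    (0.5, 0.25),   # half-pot bet, 25% of the time
    (1.0, 0.50),   # pot-sized bet, 50% of the time
    (2.0, 0.25),   # overbet, 25% of the time
]

def choose_bet(pot):
    """Sample a bet size; each call is unpredictable in isolation."""
    fracs, probs = zip(*BET_MIX)
    frac = random.choices(fracs, weights=probs)[0]
    return frac * pot

# Over many hands the proportions hold, but any single bet reveals
# little about the hand behind it.
bets = [choose_bet(100) for _ in range(1000)]
print(sorted(set(bets)))  # [50.0, 100.0, 200.0]
```

A human can follow the same logic in principle; the taxing part is sticking to the exact probabilities over thousands of hands without drifting into exploitable patterns.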
The good news for humans, however, is that even with all the complexity-reducing shortcuts the researchers have developed, beating a good poker player requires a huge amount of computing power.
Deep Blue, the IBM machine that beat Garry Kasparov at chess, was a 32-node high-performance computer. Libratus used 600 nodes of a supercomputer, the equivalent of 3,330 high-end MacBooks.
It would take far more, and probably more ingenious shortcuts, to create an artificial intelligence that can win real-life, multi-player poker games at a high level.
It never pays in AI research to say that something is impossible. The field is developing fast, claiming successes that seemed unattainable a decade - even five years - ago. But one can see how introducing even a little uncertainty and information asymmetry immediately makes AI developers' work far harder and more resource-intensive.
Poker, though it's extremely difficult to play well, is, after all, a game with well-defined rules. How much artificial brainpower, and what unfathomable shortcuts, will be required to excel in a game with few or no rules - like a business negotiation, or, at the extreme, a process like the Syria peace talks? Humans are used to situations in which rules develop in real time.
No existing machine - and, judging by the state of the art, none that will be developed in the near future - can come close to our confidence in dealing with uncertainty and imperfect information.
Machines play an important role in eliminating routine jobs. What we're seeing with recent AI developments is the expansion of how we define "routine" to most processes with clear rules.
Even the most complex of these processes - like multi-player poker - may turn out to be economically inefficient to automate. But processes without defined rules appear to be beyond the realm of the practical. To be safe from machines, we humans need to seek out such situations and learn to excel in them.