Hanabi (card game)

Hanabi (from Japanese 花火, fireworks) is a cooperative card game created by French game designer Antoine Bauza and published in 2010. Players are aware of other players' cards but not their own, and attempt to play a series of cards in a specific order to set off a simulated fireworks show. The types of information that players may give to each other are limited, as is the total amount of information that can be given during the game. In 2013, Hanabi won the Spiel des Jahres, an industry award for best board game of the year.

Gameplay
The Hanabi deck contains cards in five suits (white, yellow, green, blue, and red): three 1s, two each of 2s, 3s, and 4s, and one 5. The game begins with 8 available information tokens and 3 fuse tokens. To start the game, players are dealt a hand containing five cards (four for 4 or 5 players). As in blind man's bluff, players can see each other's cards but they cannot see their own. Play proceeds around the table; each turn, a player must take one of the following actions:


 * Give information: The player points out the cards of either a given number or a given suit in the hand of another player (examples: "This card is your only red card," "These two cards are your only 3s"). The information given must be complete and correct. (In some editions, it is allowed to indicate that a player has zero of something; other versions explicitly forbid this case.) Giving information consumes one information token.
 * Discard a card: The player chooses a card from their hand and adds it to the discard pile, then draws a card to replace it. The discarded card is out of the game and can no longer be played. Discarding a card replenishes one information token.
 * Play a card: The player chooses a card from their hand and attempts to add it to the cards already played. This is successful if the card is a 1 in a suit that has not yet been played, or if it is the next number sequentially in a suit that has been played. Otherwise a fuse token is consumed and the misplayed card is discarded. Successfully playing a 5 of any suit replenishes one information token. Whether the play was successful or not, the player draws a replacement card.
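The "play a card" rules above can be sketched in a few lines of Python. This is an illustrative model only, not code from any official Hanabi implementation; the function name and data layout are assumptions made for clarity.

```python
# Illustrative sketch of the "play a card" action; names are invented here.
def try_play(card, played, tokens, fuses):
    """card is a (suit, number) pair; played maps each suit to the highest
    number played so far (absent if none). Returns updated state."""
    suit, number = card
    if number == played.get(suit, 0) + 1:
        played[suit] = number            # legal play: extends the firework
        if number == 5 and tokens < 8:   # completing a suit refunds a token
            tokens += 1
    else:
        fuses -= 1                       # misplay: a fuse token is consumed
    return played, tokens, fuses
```

For example, playing a red 1 on an empty table succeeds, while playing a red 3 immediately afterwards (skipping the 2) costs a fuse token.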

The game ends immediately when either all fuse tokens are used up, resulting in a game loss, or all 5s have been played successfully, leading to a game win. Otherwise, play continues until the deck runs out, and for one full round after that. At the end of the game, the values of the highest cards in each suit are summed, resulting in a total score out of a possible 25 points.
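The deck composition and final scoring described above can be summarized in a short sketch (purely illustrative; the variable names are assumptions, not part of the published rules):

```python
# Deck composition: per suit, three 1s, two each of 2-4, and one 5.
SUITS = ["white", "yellow", "green", "blue", "red"]
COUNTS = {1: 3, 2: 2, 3: 2, 4: 2, 5: 1}  # copies of each number per suit

deck = [(suit, n) for suit in SUITS for n, k in COUNTS.items() for _ in range(k)]
# 10 cards per suit, five suits: 50 cards in total.

def score(played):
    """played maps each suit to the highest card successfully played."""
    return sum(played.get(suit, 0) for suit in SUITS)
```

A perfect game plays the 5 of every suit, for the maximum score of 5 × 5 = 25 points.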

Reception
Hanabi received positive reviews. Board Game Quest awarded the game four and a half stars, praising its uniqueness, accessibility and engagement. Similarly, The Opinionated Gamers also praised the game's engagement and addictiveness. It won several awards, including the 2013 Spiel des Jahres and the 2013 Fairplay À la carte Award. Hanabi also placed sixth in the 2013 Deutscher Spiele Preis.

Computer Hanabi
Hanabi is a cooperative game of imperfect information.

Computer programs which play Hanabi can either engage in self-play or "ad hoc team play". In self-play, multiple instances of the program play with each other on a team. They thus share a carefully honed strategy for communication and play, though they are not permitted to share information about a particular game between instances outside the game's legal hint mechanism.

In ad hoc team play, the program plays with other arbitrary programs or human players.

A variety of computer programs have been developed by hand-coding rule-based strategies. The best programs, such as WTFWThat, achieved near-perfect results in self-play with five players, with an average score of 24.9 out of 25.
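A hand-coded rule-based strategy of the kind described above can be illustrated with a toy action-priority policy. This sketch is purely illustrative and is not the strategy used by WTFWThat or any other published program; the function and its inputs are assumptions made for the example.

```python
# A toy rule-based Hanabi policy: a fixed priority over the three actions.
def choose_action(known_playable, tokens):
    """known_playable: indices of cards in hand that hints have revealed to
    be safely playable. tokens: information tokens remaining (0-8)."""
    if known_playable:
        return ("play", known_playable[0])  # play a card known to be safe
    if tokens > 0:
        return ("hint", None)               # otherwise, spend a hint
    return ("discard", 0)                   # out of tokens: discard to refill
```

Real rule-based agents use much richer rules (for example, tracking which cards are provably useless before discarding), but they share this overall structure of prioritized condition-action rules.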

AI challenge
In 2019, DeepMind proposed Hanabi as an ideal game with which to establish a new benchmark for artificial intelligence research in cooperative play.

In self-play mode, the challenge is to develop a program which can learn from scratch to play well with other instances of itself. Such programs achieve only about 15 points per game as of 2019, far worse than hand-coded programs. However, this gap has narrowed significantly as of 2020, with the Simplified Action Decoder achieving scores around 24.

Ad hoc team play is a far greater challenge for AI, because "Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground". Playing at human levels with ad hoc teams requires the algorithms to learn and develop communication conventions and strategies over time with other players via a theory of mind. Computer programs developed for self-play fail badly when playing on ad hoc teams, since they cannot adapt to the conventions of unfamiliar partners. Hu et al. demonstrated that learning symmetry-invariant strategies helps AI agents avoid learning uninterpretable conventions, improving their performance when matched with separately trained AI agents (scoring around 22), and with humans (scoring around 16 vs. a baseline self-play model that scored around 9).

DeepMind released an open-source code framework to facilitate research, called the Hanabi Learning Environment.