Computer bridge

Computer bridge is the playing of the game contract bridge using computer software. After years of limited progress, since around the end of the 20th century the field of computer bridge has made major advances. In 1996 the American Contract Bridge League (ACBL) established an official World Computer-Bridge Championship, to be held annually along with a major bridge event. The first championship took place in 1997 at the North American Bridge Championships in Albuquerque. Since 1999 the event has been conducted as a joint activity of the American Contract Bridge League and the World Bridge Federation. Alvin Levy, ACBL Board member, initiated this championship and has coordinated the event annually since its inception. The event history, articles and publications, analysis, and playing records can be found at the official website.

World Computer-Bridge Championship
The World Computer-Bridge Championship is typically played as a round robin followed by a knock-out between the top four contestants. Winners of the annual event are:

• 1997 Bridge Baron

• 1998 GIB

• 1999 GIB

• 2000 Meadowlark Bridge

• 2001 Jack

• 2002 Jack

• 2003 Jack

• 2004 Jack

• 2005 Wbridge5

• 2006 Jack

• 2007 Wbridge5

• 2008 Wbridge5

• 2009 Jack

• 2010 Jack

• 2011 Shark Bridge

• 2012 Jack

• 2013 Jack

• 2014 Shark Bridge

• 2015 Jack

• 2016 Wbridge5

• 2017 Wbridge5

• 2018 Wbridge5

• 2019 Micro Bridge

• 2020–2021 championships cancelled

Since 2022, "Unofficial Computer Bridge Championships" have been held. The format is a round-robin event followed by a semi-final and a final, with scoring in IMPs (Victory Points may be used to resolve ties). In 2022, due to "technical difficulties", the round robin was replaced by a quarter-final.

Winners of the annual event are:


• 2022 Wbridge 5.12

• 2023 Qplus 15.3

• 2024 in preparation

Computers versus humans
In Zia Mahmood's book, Bridge, My Way (1992), Zia offered a £1 million bet that no four-person team of his choosing would be beaten by a computer. A few years later the bridge program GIB (which can stand for either "Ginsberg's Intelligent Bridgeplayer" or "Goren In a Box"), the brainchild of American computer scientist Matthew Ginsberg, proved capable of expert declarer plays such as winkle squeezes in play tests. In 1996, Zia withdrew his bet. Two years later, GIB became the world champion in computer bridge, and also achieved a 12th-place score (11210) in declarer play against 34 of the top human players (including Zia Mahmood) in the 1998 Par Contest. However, such a par contest measures only technical bridge analysis skills, and in 1999 Zia beat various computer programs, including GIB, in an individual round-robin match.

Further progress in the field of computer bridge has resulted in stronger bridge playing programs, including Jack and Wbridge5. These programs have been ranked highly in national bridge rankings. A series of articles published in 2005 and 2006 in the Dutch bridge magazine IMP describes matches between five-time computer bridge world champion Jack and seven top Dutch pairs including a Bermuda Bowl winner and two reigning European champions. A total of 196 boards were played. Jack defeated three out of the seven pairs (including the European champions). Overall, the program lost by a small margin (359 versus 385 IMPs).

In 2009, Phillip Martin, an expert player, began a four-year project in which he played against the champion bridge program, Jack. He played one hand at one table, with Jack playing the other three; at another table, Jack played the same cards at all four seats, producing a comparison result. He posted his results and analysis in a blog he titled The Gargoyle Chronicles. The program was no match for Martin, who won every contest by large margins.

Cardplay algorithms
Bridge poses challenges to its players that are different from board games such as chess and go. Most notably, bridge is a stochastic game of incomplete information. At the start of a deal, the information available to each player is limited to their own cards. During the bidding and the subsequent play, more information becomes available: the bidding of the other three players at the table, the hand of the declarer's partner (the dummy) laid face up on the table, and the cards played to each trick. However, full information is usually obtained only at the end of the play.

Today's top-level bridge programs deal with this probabilistic nature by generating many samples representing the unknown hands. Each sample is generated at random, but constrained to be compatible with all information available so far from the bidding and the play. Next, the results of different lines of play are tested against optimal defense for each sample. This testing is done with a so-called "double-dummy solver", which uses extensive search algorithms to determine the optimal line of play for both sides. The line of play that yields the best score averaged over all samples is selected as the play to make.
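This sample-and-solve loop can be sketched as follows. All names here are illustrative assumptions, not any particular program's API: `constraint` stands for whatever the program has inferred from the auction and play, and `double_dummy_tricks` is a stand-in for a real double-dummy solver.

```python
import random
from collections import defaultdict

def sample_deal(own_hand, constraint, rng):
    """Deal the 39 unseen cards at random to the three hidden hands,
    rejecting layouts that contradict what the bidding and play have
    revealed so far (encoded here as a `constraint` predicate)."""
    unseen = [c for c in range(52) if c not in set(own_hand)]
    while True:
        rng.shuffle(unseen)
        hands = (unseen[0:13], unseen[13:26], unseen[26:39])
        if constraint(hands):   # e.g. "LHO has five or more spades"
            return hands

def choose_card(own_hand, candidate_cards, constraint,
                double_dummy_tricks, n_samples=100, seed=0):
    """Score each candidate card against many sampled layouts with a
    double-dummy solver (here an injected callable) and pick the card
    with the best average result."""
    rng = random.Random(seed)
    totals = defaultdict(int)
    for _ in range(n_samples):
        hidden = sample_deal(own_hand, constraint, rng)
        for card in candidate_cards:
            # Assumed to return declarer's tricks against optimal
            # defence on this layout after `card` is played.
            totals[card] += double_dummy_tricks(hidden, card)
    return max(candidate_cards, key=lambda c: totals[c])
```

Rejection sampling as shown is the simplest way to honour the constraints; real programs use far more efficient dealers, but the score-and-average structure is the same.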

Efficient double-dummy solvers are key to successful bridge-playing programs. Also, as the amount of computation increases with sample size, techniques such as importance sampling are used to generate sets of samples that are of minimum size but still representative.
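One likelihood-weighting variant of this idea can be sketched as follows. This is a simplified illustration, not any particular program's method: `bid_likelihood` is a hypothetical estimate of how probable the observed auction would be on a given layout.

```python
import random

def weighted_samples(sample_deal, bid_likelihood, n=50, seed=1):
    """Draw candidate layouts from a simple uniform proposal and weight
    each by an estimate of how likely the observed auction would be on
    that layout, so a small weighted sample set stays representative."""
    rng = random.Random(seed)
    deals = [sample_deal(rng) for _ in range(n)]
    weights = [bid_likelihood(d) for d in deals]
    total = sum(weights)
    # Normalised weights: lines of play are then scored by a weighted
    # average over the samples instead of a plain average.
    return [(d, w / total) for d, w in zip(deals, weights)]
```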

Comparison to other strategy games
While bridge is a game of incomplete information, a double-dummy solver analyses a simplified version of the game where there is perfect information; the bidding is ignored, the contract (trump suit and declarer) is given, and all players are assumed to know all cards from the very start. The solver can therefore use many of the game tree search techniques typically used in solving two-player perfect-information win/lose/draw games such as chess, go and reversi. However, there are some significant differences.


 * Although double-dummy bridge is in practice a competition between two generalised players, each "player" controls two hands and the cards must be played in a correct order that reflects four players. (It makes a difference which of the four hands wins a trick and must lead the next trick.)
 * Double-dummy bridge is not simply win/lose/draw and not exactly zero-sum, but constant-sum since two playing sides compete for 13 tricks. It is trivial to transform a constant-sum game into a zero-sum game. Moreover, the goal (and the risk management strategy) in general contract bridge depends not only on the contract but also on the form of tournament (due to differences in scoring systems affecting the optimal general strategy). However, once the game has been reduced to deterministic double-dummy analysis, the goal is simple: one can without loss of generality aim to maximize the number of tricks taken.
 * Bridge is incrementally scored; each played trick contributes irreversibly to the final "score" in terms of tricks won or lost. This is in contrast to games where the final outcome is more or less open until the game ends. In bridge, the already determined tricks provide natural lower and upper bounds for alpha–beta pruning, and the interval shrinks naturally as the search goes deeper. Other games typically need an artificial evaluation function to enable alpha–beta pruning at limited depth, or must search to a leaf node before pruning is possible.
 * It is relatively inexpensive to compute "sure winners" in various positions in a double-dummy solver. This information improves the pruning. It can be regarded as a kind of evaluation function, however while the latter in other games is an approximation of the value of the position, the former is a definitive lower bound on the value of the position.
 * During the course of double-dummy game tree search, one can establish equivalence classes consisting of cards with apparently equal value in a particular position. Only one card from each equivalence class needs to be considered in the subtree search, and furthermore, when using a transposition table, equivalence classes can be exploited to improve the hit rate. This has been described as partition search by Matthew Ginsberg.


 * Numerous strategy games have been proven hard for a complexity class, meaning that any problem in that class can be reduced to them in polynomial time. For example, generalized $x \times x$ chess has been proven EXPTIME-complete (both in EXPTIME and EXPTIME-hard), effectively meaning that it is among the hardest problems in EXPTIME. However, since double-dummy bridge, unlike such board games, offers no natural structure to exploit towards a hardness proof or disproof, the question of its hardness remains open.
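Several of the points above can be illustrated on a drastically simplified, single-suit, perfect-information version of the game. The sketch below is an illustration under stated assumptions, not production solver code: it searches all four hands' plays, uses the tricks already decided as natural alpha–beta bounds, and, in the spirit of partition search, searches only one representative card per run of cards adjacent among those still in play.

```python
def solve(hands, leader=0):
    """Double-dummy value (tricks won by North-South, seats 0 and 2)
    of a single-suit layout: four hands of equal length, distinct
    integer ranks, highest card wins the trick, winner leads next."""
    hands = [sorted(h) for h in hands]

    def reps(hand, in_play):
        # Partition search in miniature: cards of one hand that are
        # adjacent among the cards still in play are interchangeable,
        # so only one representative per run needs to be searched.
        order = {c: i for i, c in enumerate(sorted(in_play))}
        out, prev = [], None
        for c in hand:
            if prev is None or order[c] != prev + 1:
                out.append(c)
            prev = order[c]
        return out

    def play(hands, trick, leader, won_ns, alpha, beta):
        if len(trick) == 4:                     # trick complete
            winner = (leader + trick.index(max(trick))) % 4
            return play(hands, [], winner,
                        won_ns + (winner % 2 == 0),  # NS are seats 0, 2
                        alpha, beta)
        if not trick:
            remaining = len(hands[leader])
            if remaining == 0:
                return won_ns
            # Natural alpha-beta bounds: tricks already won are a hard
            # lower bound, won + remaining a hard upper bound.
            if won_ns >= beta:
                return won_ns
            if won_ns + remaining <= alpha:
                return won_ns + remaining
        seat = (leader + len(trick)) % 4
        # Cards in the current trick still matter, so they stay "in play".
        in_play = [c for h in hands for c in h] + trick
        best = None
        for card in reps(hands[seat], in_play):
            nxt = [h[:] for h in hands]
            nxt[seat].remove(card)
            v = play(nxt, trick + [card], leader, won_ns, alpha, beta)
            if seat % 2 == 0:                   # NS maximise tricks
                best = v if best is None else max(best, v)
                alpha = max(alpha, best)
            else:                               # EW minimise them
                best = v if best is None else min(best, v)
                beta = min(beta, best)
            if alpha >= beta:
                break
        return best

    return play(hands, [], leader, 0, 0, len(hands[0]))
```

A real solver adds a transposition table keyed on these equivalence-reduced positions, which is where partition search pays off most.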

The future
In comparison to computer chess, computer bridge has not reached world-class level, but the top programs have demonstrated a consistently high level of play. (See analysis of the last few years of play at www.computerbridge.com.) Yet, whereas computer chess has taught programmers little about building machines that offer human-like intelligence, more intuitive and probabilistic games such as bridge might provide a better testing ground.

The question of whether bridge-playing programs will reach world-class level in the foreseeable future is not easy to answer. Computer bridge has attracted nowhere near the interest that computer chess has. On the other hand, much progress has been made in the last decade by researchers working in the field.

Regardless of bridge robots' level of play, computer bridge has already changed the analysis of the game. Commercially available double-dummy programs can solve bridge problems in which all four hands are known, typically within a fraction of a second. These days, few editors of books and magazines rely solely on humans to analyse bridge problems before publication. Also, more and more bridge players and coaches use computer analysis in the post-mortem of a match.