Talk:Computer Arimaa

AI & Chess

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

"The successful quest to build a world-championship-caliber chess program has contributed essentially nothing to the field of artificial intelligence": huh? sure it has, although people don't seem to listen. Computers don't think like humans do; a machine that can do millions of multiplications a second and store gigabytes of information [i]exactly[/i], but for which fuzzy pattern matching is a non-trivial operation does things differently from a machine that is excellent at fuzzy pattern matching, but has trouble storing more than a few bytes exactly or doing [i]any[/i] multiplication. Funny that. A winning Arimaa program will demonstrate just that, IMO. That's my POV, but it helps show that this statement is POV. --Prosfilaes 03:47, 3 Jan 2005 (UTC)

My own POV is that computers are already intelligent in many reasonable senses of the word. I watched on ICC when Kasparov was playing Deep Blue, and heard people objecting to comments like "Deep Blue thinks the bishop is worth more than the knight right now," objecting on the grounds that computers don't think, and to say that Deep Blue "thinks" anything is an abuse of language. I personally believe that it was and is an extremely natural use of language, and to insist that we not talk that way is pedantry. You and I probably agree more than we disagree about whether computers display intelligence.
That said, I believe that my original statement in the article was not as subjective as you are making it out to be. The phrase "artificial intelligence" is coming to have a technical meaning which is divergent from my understanding of "intelligence". There is a community of people who are interested in making computers do certain things well that they don't do well at present, and who are as interested in how it is done as in what is done. To call that set of goals and techniques "artificial intelligence" is in some ways at odds with a common notion of intelligence, but less so than (for example) what economists call "efficiency" is at odds with a common notion of efficiency. For you to object to the way in which artificial intelligence is used in a technical sense seems to me quite as pedantic as objections to saying that computers think.
If there are more accurate words to use in these contexts than "think" and "artificial intelligence", then by all means substitute them for clarity. But to insert a "He believes" in front of the statement illuminates only that you had a difference of opinion, and doesn't illuminate your grounds for objecting. Am I right that you are not objecting to the statement of fact so much as taking exception to the way most people define artificial intelligence? Do you accept that strong chess programs are strong for reasons other than pattern recognition, learning from mistakes, using neural networks, self-modifying, or any of the hodge-podge of things that are lumped together under the name of artificial intelligence? If so, then let's get this issue straightened out in some other way than the current edit. Let's recognize it as a fact that chess was conquered by computers in a way that AI people didn't find useful or applicable to the problems that interested them. --Fritzlein 07:06, 3 Jan 2005 (UTC)
P.S. It is worth noting that Bomb, the best Arimaa-playing program at present, does not use any AI techniques (as commonly defined), but rather uses techniques which have worked well for chess. They don't work as well for Arimaa as they do for chess, but so far they work better than anything else that has been tried.
This is a quick response--your message needs much more thought to respond in full--but I had AI in college, and one of the sections was on alpha-beta trees. If AI has a technical meaning that doesn't include alpha-beta trees, I would say that that technical meaning is too esoteric for wikipedia. --Prosfilaes 23:05, 3 Jan 2005 (UTC)
Hmmm... maybe the definition of AI that I've learned isn't as widespread as I think. Just out of curiosity, do alpha-beta trees have applications outside of computer gaming? --Fritzlein 05:00, 4 Jan 2005 (UTC)
Well, anywhere strategy is needed, of course. But beyond that, I suppose that the same concept, if not the exact implementation, could be helpful in, say, public health sims as a way to try and determine the best course of action. Of course, it could be argued that that is just a complex game with real applications... hmm... --Kinkoblast 19:13, 16 May 2006 (UTC)
I don't see how alpha-beta trees could have application out of a game; it's a pretty narrow technique. I'm going to try something on the page. --Prosfilaes 07:09, 6 Jan 2005 (UTC)
Fantastic, I love the current edit. --Fritzlein 01:41, 9 Jan 2005 (UTC)
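(For readers unfamiliar with the alpha-beta technique discussed above, here is a minimal illustrative sketch in Python. The game interface used here - legal_moves(), make(), is_terminal(), evaluate() - is hypothetical and not taken from any actual chess or Arimaa engine; evaluate() is assumed to score a position from the side to move's point of view.)

    # Illustrative sketch only: generic negamax search with alpha-beta pruning.
    def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf")):
        if depth == 0 or position.is_terminal():
            return position.evaluate()               # static evaluation at the leaves
        best = float("-inf")
        for move in position.legal_moves():
            # score the reply from the opponent's viewpoint and negate it
            score = -alphabeta(position.make(move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:                        # cutoff: the opponent will avoid this line
                break
        return best

The same routine works for any two-player, zero-sum, perfect-information game, which is also why it generalizes poorly outside that setting.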

Human Advantage

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

It is correct to attribute the difficulty that fast computers, running an AI program reasonably well designed exclusively for playing Arimaa, have mainly to this game's extremely high branching factor. The four-step-per-turn move cycle, and the vast number of additional, meaningfully distinct orders in which the four individual steps comprising a turn can be cast, are a related factor worthy of specific mention as well.

In attempting to pinpoint the learning psychology involved and how a human advantage manifests, I believe the current description is somewhat incorrect based upon my own related experiences in playing various chess variants (which have positive transfer to Arimaa).

When I first look at the gameboard immediately after my opponent has moved, where the position is complex, I am rarely totally disoriented even if the moves made were unexpected, since I am following the game to that point intently. However, I am usually moderately disoriented at that moment by consideration of the numerous possibilities for response, especially any competing, important offensive and defensive priorities which must be chosen between, sacrificing one to pursue the other. I would actually assess my initial state as similar to that of a computer opponent: overwhelmed by the possibilities.

Fortunately, I am usually able to quickly identify all of the most pressing offensive and defensive theatres, inerrantly pruning a vast number of bad and trivial moves from further consideration. Then, it is a matter of course to correctly identify the most important *one* out of typically 2-6 options for response. It often becomes clear with certainty which move is the most important one to make after a thorough analysis of all of the candidate moves. Sometimes this requires only a little time; sometimes this requires a moderate amount of time ... but it never requires a huge amount of time, as is the case where a computer is trapped working through a combinatorial explosion at a given ply of search depth.

I am interested in and value the experiences of others and their opinions.

--AceVentura

Interesting speculation, Ace. Now that you mention it, I have very little clue why humans can play Arimaa well, and what the elements of the psychological process are. I can observe myself as I play, but that does not provide infallible information.
Nevertheless, no matter how inaccurate my wild guess on the subject may be, the issue needs to be addressed. It is not at all sufficient to say that computers play Arimaa badly due to a high branching factor, because there are games with a high branching factor that humans play awfully compared to computers. There must be something about the game that humans are able to quantify more quickly and accurately than computers can.
From my experience I would say that it is possible that I consider and reject many moves so quickly I am unaware of the pruning I am doing. However, it seems highly implausible that I am rejecting thousands or even hundreds of possibilities each move. I am convinced there are whole categories of moves I don't consider at all, for example because I judge a certain theater to be temporarily relatively unimportant. To my mind it is qualitatively different to neglect to consider a move than it is to consider and reject that move. You may call each a sort of pruning, but that conceals more than it reveals.
Moreover, as I ponder a position, I routinely find new moves I hadn't considered before that I then judge to be better than any of my previous candidates. This happens even in postal games where I have studied a position for half an hour or more, and suddenly see something new that solves my problems more efficiently or effectively. It does not strike me as being at all similar to selecting from a small number of candidate moves if I can bring new moves into the mix very late in the thinking process. It seems like I say things to myself such as "There must be some move which threatens his camel without exposing my cat. Aha! I have found one!" Is this not more akin to building up a set of candidate moves, than it is akin to winnowing down the multitude to a few?
I don't know how others experience their own thought processes, but I feel that I orient myself by first identifying strategic elements of the position and only second considering relevant moves. I could perhaps orient myself a different way, namely by looking at a variety of possible moves I can make and using those samples to inform me what potential the position holds, but this seems hopelessly inefficient. If there are only a handful of important possibilities, what are the chances I would stumble upon even one of them, never mind several of them that are worth comparing? No, it is much better to try to grasp the position first, and then generate relevant moves. It is a method that seems to work for me, at any rate.
That said, I am aware that my guesses about human game psychology are tenuous, and I am open to the article being edited in any number of ways. In particular (although this has nothing to do with how humans play) you are quite right that generating repetitive moves from different step-orderings consumes a significant percentage of CPU cycles for Arimaa software, and it would be a technological breakthrough to find a way to generate each unique move only once. Hash tables may be used to prevent duplicated searching further down the tree when a move is generated multiple times, but it would be nice to be able to avoid multiple generation in the first place. --Fritzlein 06:28, 16 October 2005 (UTC)

Just as the opposite methods by which chess supercomputers running sophisticated AI programs and top human chessmasters both play chess (for example) extremely well must be respected, so should the contrasting methods of evaluation used by different skillful human players.

In light of your interesting, detailed description of how you "solve the board" when it is your turn to move (in Arimaa and other board games), I no longer have any confidence that I should replace your personal account, infused into the article, with mine. Unfortunately, I fear that adding my personal account (and others who have not yet spoken up) alongside yours in describing the human thinking process would unavoidably render this encyclopedic article too vague, confusing or self-contradictory. Instead, I recommend that existing attempts to define and describe the human advantage in Arimaa, however it undoubtedly exists, should be made much more concise so as to confine ourselves to facts we are relatively sure of (if any). --AceVentura

I agree that my guesses sit awkwardly in an encyclopedia article. I should be more clear about how speculative I am being. One way to do this would be to include alternatives, but I fear you are right that it would make the section vague and rambling. Probably more appropriate would be for me to remove that paragraph entirely and replace it with "we don't know how we do it, but we do it", or words to that effect.
Still, I feel the need to say something about the remarkable human ability to pick a good move among thousands, because the current vogue is towards oversimplification along the lines of "Any game with a high branching factor will of course give an advantage to humans over computers". This is probably outright false, but at a minimum doesn't tell the whole story. There must be features in the game which are easy for humans to spot and hard for computers to spot, and a high branching factor doesn't ensure this. --Fritzlein 17:08, 16 October 2005 (UTC)
I have removed the paragraph on how humans think about Arimaa, and replaced it with a new section which doesn't try so much to answer the question as to make clear that some question needs to be answered. --Fritzlein 19:01, 20 October 2005 (UTC)

Excellent work! The section you added, "human competence", nicely counterbalances the pre-existing section "computer ineptitude". Anyway, there was only one paragraph that I previously took exception to. Describing the human advantage as you did in terms of adaptation or improvement with experience is verifiably factual and simple without taking a dangerous turn into unprovable theories of psychology. Yes, it is responsible to mention that the nature of the human advantage is not fully understood. --AceVentura

Reading these articles has led me to consider what humans don't do. For example our perceptual limitations might be an advantage. Human beings cannot visualize, let alone evaluate, more than a small number of moves in a board game like Arimaa. Yet they can defeat computers. In other words an approach based on limited perception is more effective than brute force analysis or large task methods.
Perhaps it is due to our physical and mental limitations that we are even able to make sense of games like Arimaa - or the world in general. Human beings tend to perceive the world as physical objects or conceptual groupings. The rest (math, causation) is ignored. This approach was necessary for our survival as a species - a useful evolutionary algorithm perhaps. I do hope Arimaa will help! Pendragon39 20:33, 25 January 2006 (UTC)
There's also something to be said for gathering information visually, while the poor computer merely gets blind input from its keyboard. Pattern recognition and object tracking are useful functions being developed for automated systems elsewhere... board games should be no exception.
My analysis of human psychology in playing Arimaa would be as follows: The human player examines the board position visually. During the course of this examination, certain areas of the board will attract their attention. The player may make a cursory examination of less interesting areas of the board, but his or her focus will return mainly to those areas deemed to be of greater significance. The player will then examine possible moves in more detail by visualizing the movement of pieces. In terms of what is evaluated as 'interesting' or 'significant', there are two categories: opportunity and threat. More effort is given to moves or situations that appear to fit these two concepts. Pendragon39 18:14, 27 January 2006 (UTC)

Material Handicaps

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

It is a very dodgy exercise to compare Arimaa material handicaps to chess material handicaps because Arimaa is much more of a game of blockade, breakthrough, and control, while chess is much more of a capture game. Anyway if you look at the material evaluation functions that the computers use, a camel handicap in Arimaa is more like a rook handicap in chess, so thinking of it as "queen odds" arises mostly from the fact that the queen is used to represent the camel when playing with a chess board. —Preceding unsigned comment added by 96.226.2.62 (talk) 17:01, 11 September 2007 (UTC)

Compare Arimaa challenge to chess challenge

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

First the introduction says, “Arimaa has so far proven to be more difficult for artificial intelligences to play than chess”. Where is the proof? Mschribr (talk) 19:50, 15 March 2010 (UTC)

That Arimaa programs have a harder time beating good human Arimaa players than chess programs do beating good human chess players, presumably. AnonMoos (talk) 22:19, 15 March 2010 (UTC)
Yes. Where is the proof for that? The opposite is true. I want to add a section comparing the Arimaa challenge to the chess challenge giving another point of view to balance the article. Mschribr (talk) 01:56, 16 March 2010 (UTC)
Not sure what you mean; Deep Blue beat the world chess champion 13 years ago using basically a "fast but dumb" approach (in the context of the field of artificial intelligence research as a whole). The whole point of Arimaa is that the "fast but dumb" approach won't currently get you too far. AnonMoos (talk) 11:42, 16 March 2010 (UTC)
The biggest of my questions is how do you know “the "fast but dumb" approach won't currently get you too far” (with Arimaa)? Mschribr (talk) 14:35, 16 March 2010 (UTC)
We know the "fast but dumb" approach won't currently get you too far because competent developers have tried it, and it didn't get them very far. Fritzlein (talk) 03:33, 27 March 2010 (UTC)[reply]
No, we do not know that. There were too few arimaa developers and not enough time. We do not have a fair arimaa challenge to judge how good the computers are. Mschribr (talk) 22:15, 28 March 2010 (UTC)
As I argued below, the small and new community of human Arimaa players is at least as great a handicap on the human side as the small and new community of Arimaa software developers is on the computer side. You can't insist on the half of the argument that favors your viewpoint and ignore the half of the argument that contradicts your viewpoint. Fritzlein (talk) 17:36, 29 March 2010 (UTC)[reply]
Are you saying the proof that computers have a harder time beating humans in Arimaa than beating humans in chess is that Deep Blue beat the world champion and computers cannot beat humans in the Arimaa challenge? That is not a proof, because you cannot compare the Arimaa challenge to the chess matches. I will add a section to prove my point. Mschribr (talk) 15:52, 16 March 2010 (UTC)
Mschribr, the proof that top computers have a harder time beating top humans at Arimaa does not come primarily from comparing the Arimaa Challenge to the Deep Blue vs. Kasparov match. It comes from thousands of games under a variety of conditions that are being played even as we type. The conditions of the 1997 man-vs-machine match were peculiar and debatable, but only tangential to the current balance of power as of 2010.
At present, top computer chess software running on commodity hardware can beat grandmasters at pawn-and-move odds. Indeed, for chess it is difficult to create an interesting man-vs-machine match under conditions that would generally be accepted as equal to both sides. The 2006 Deep Fritz vs. Kramnik match was played under conditions widely regarded as being in Kramnik's favor, yet Deep Fritz won. Since then the dominance of chess computers has only increased.
At present, top Arimaa software running on commodity hardware is scant challenge for the best Arimaa players. At tournament time controls, I personally could beat the top four programs in a clocked simultaneous match, i.e. with a four-to-one time disadvantage. (At one point, I beat the top eight computers in a clocked simul, but the gap has since narrowed ;-))
If you would like to seriously maintain that chess is harder for computers than Arimaa, I suggest starting by naming conditions under which the World Champion of chess can beat top chess software but the World Champion of Arimaa can't beat top Arimaa software. I'm confident that under any match conditions (not just those of the Arimaa Challenge) pitting top software against top humans, the humans will fare relatively better in Arimaa than in chess. If you feel otherwise, let's discuss an apples-to-apples comparison rather than the straw man of Arimaa Challenge compared to Deep Blue vs. Kasparov.
You began this thread by asking for proof that Arimaa is harder for computers than chess. I believe that the existing evidence is heavily in favor of the claim. This evidence may or may not rise to the level you call proof, but for you to therefore claim the opposite (that chess is more difficult for computers than Arimaa) is not "balance" so much as imagination. It is in the interest of the Arimaa community to persuade everyone of the genuine computer-resistance of Arimaa, even to the point of setting up additional matches specifically to prove it. What sort of evidence would you consider to be proof? With luck it can be arranged.
I will await your reply here, then edit the section you added. Fritzlein (talk) 06:08, 24 March 2010 (UTC)[reply]
The human may win a chess challenge comparable to the arimaa challenge. A comparable chess challenge would be: one computer plays 3 matches against the top 3 players on the rating list. Currently they are Carlsen, Topalov and Anand. The computer must score 2/3 of the points in each match. Each match consists of 3 games. The computer must win a qualification match. The computer may not be changed after the qualification match or between games. The computer may not be custom made or cost more than $1,000. Unfortunately, these matches are probably too expensive to run.
We can't have the challenge you specify, but other man vs. machine matches have been played which allow relevant comparisons. For example, in 2007 Rybka running on a quad core PC beat Jaan Ehlvest 4.5 to 1.5 in a match with time and color advantage given to Ehlvest. Ehlvest is Elo 2600, i.e. 200 points below the best at chess. The 2009 Arimaa challenge ran on a quad core PC, and the human Arimaa Challenge defender Jan Macura, rated 500 points below the Arimaa World Champion, beat the Arimaa Computer Champion 2 to 1. Thus chess software won a harder challenge than the Arimaa software lost. Fritzlein (talk) 17:36, 29 March 2010 (UTC)[reply]
Even if the humans lose at chess it will not prove that arimaa is harder. The reason the computer is more successful at chess than at arimaa is that the greater effort created better chess programs. This has nothing to do with how hard the game of Arimaa is. The way to measure how hard a game is, is by state space complexity. The state space complexity is higher for chess than for arimaa. Mschribr (talk) 22:15, 28 March 2010 (UTC)
The state-space complexity is indeed higher for chess than it is for Arimaa, but the state-space complexity doesn't determine how difficult a game is for a computer to play, as I have explained below. Fritzlein (talk) 17:36, 29 March 2010 (UTC)[reply]

I've added the "unreferenced section" tag for the following questions:

  1. Is higher state-space complexity the only measure of a game's complexity? There should be a reference for that claim.
  2. The reasons indicated for computers not beating the humans are speculative and are not based on reliable sources that clearly support that notion.
—Preceding unsigned comment added by Thol (talk · contribs) 05:47, 21 March 2010 (UTC)

  1. If you order the games by state-space complexity then the easier games have lower numbers and harder games have higher numbers. For example, 9 Men's Morris with 10^10 is lower than 8x8 English draughts with 10^20. If you use another order such as game-tree complexity then we get things like 9 Men's Morris at 10^50, orders of magnitude higher than 8x8 English draughts which is 10^31. However, we know that the computer had an easier time with 9 Men's Morris than 8x8 English draughts.
  2. The reasons are not speculative but obvious. Nevertheless, my point is different. You cannot conclude that Arimaa is harder than chess by comparing the Arimaa challenge to chess matches such as the Deep Blue matches. The requirements for winning are different and higher for Arimaa than for chess.
Here are some of the more obvious mistakes in the section “Computer ineptitude”.
  1. The article says “Top chess programs use brute-force searching”. This is false. Brute-force is a trivial technique that enumerates all possible candidates for the solution. No chess program uses brute-force. See Brute-force search.
  2. The statement “Chess programs examine many, many possible moves, but they are not good (compared to humans) at determining who is winning at the end of a series of moves unless one side has more pieces than the other”. This is false because computers usually beat humans. For the computer to usually win it must determine who is winning at the end of a series of moves better than the human.
  3. The paragraphs “When brute-force searching is applied to Arimaa” and “An Arimaa player has nearly as many legal choices” are both wrong because no chess programs or Arimaa programs use brute-force searching. They all use pruning techniques. See Brute-force search.
  4. The statement “In Arimaa, however, the side to move switches only every four steps, which reduces the number of available cutoffs in a step-based search” is false because cutoffs can be used at any time, including before a move is completely analyzed.
Mschribr (talk) 02:06, 23 March 2010 (UTC)
Mschribr, it seems that your primary objection to the section on computer ineptitude is the use of the adjective "brute-force". That can be remedied without changing the strength of the comparison. The essential fact is that top-level chess programs and top-level Arimaa programs build search trees in the same way. They use full-width depth-first search with alpha-beta pruning, iteratively deepened for better move ordering. They add a gamut of less significant enhancements such as null-move pruning, selective extensions, late-move reductions, transposition tables, etc. At the leaves of the search tree, a static evaluation function is applied, which is bubbled back up the tree by mini-max. So much is identical for both games, but given equal thinking time and equal hardware, Arimaa software can't search as deeply as chess software, because the branching factor in Arimaa is far higher. I understand that you object to the current article text, but do you at least agree that achievable search depth is constrained by the branching factor?
State-space complexity is irrelevant because neither chess software nor Arimaa software plays by looking up moves in a dictionary of all possible positions. The game-tree complexity is also irrelevant, because in neither case is the entire game tree searched. What is relevant is the part of the game tree that is actually built by chess engines and Arimaa engines. Because of Arimaa's greater branching factor, the actually-constructed portion of the game tree is necessarily wider and shallower than it is for chess, assuming that the same techniques are applied in each case. Fritzlein (talk) 06:08, 24 March 2010 (UTC)[reply]
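(As a back-of-the-envelope illustration of the claim that achievable search depth is constrained by the branching factor, here is a rough calculation with assumed round numbers: a node budget of 10^9, the commonly quoted per-move branching factor of about 35 for chess, and the per-turn figure of 17,281 quoted elsewhere on this page for Arimaa. It is not a measurement of any real engine; with perfect move ordering, alpha-beta examines roughly b^(d/2) nodes instead of the b^d visited by plain minimax.)

    import math

    node_budget = 10**9      # nodes examined in one thinking period (assumed)
    for name, b in [("chess", 35), ("Arimaa", 17281)]:
        plain = math.log(node_budget) / math.log(b)    # plain minimax visits ~b**d nodes
        pruned = 2 * plain                             # ideal alpha-beta visits ~b**(d/2)
        print(f"{name}: depth ~{plain:.1f} without pruning, ~{pruned:.1f} with ideal alpha-beta")

Under these assumptions the chess searcher reaches roughly 11-12 ply while the Arimaa searcher completes only about 4 turns, which is the disparity described in the preceding comment.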

Hiya. I just read the Computer ineptitude and Compare Arimaa challenge to chess challenges sections. I personally believe that Arimaa is more difficult for computers than chess but what I believe means nothing in this context and I can clearly see Mschribr's point.

The fact is that this article just isn't up to Wikipedia's current referencing standards. Back in the old days you could add anything into Wikipedia articles and it was kept if it sounded about right. Nowadays references are required for pretty much everything. I also know that there are thousands of articles that have fewer references than this one, or even none at all, but again that has got nothing to do with this particular case.

What particularly strikes my eye are the paragraphs 7 (In most Arimaa positions...), 8 (The weakness of Arimaa programs...) and 9 (As the game progresses...) of the Computer ineptitude chapter. Together they make lots of claims but have exactly zero external references. Now, I know it may not be easy or even possible to find Reliable sources for all the content in those chapters. However, Wikipedia is not the place for such discussion and claims, as true as they may be. Mschribr also made some valid points about mistakes in the chapter.

That being said, I don't think Mschribr's retort, the rather awkwardly named Compare section, was too successful either. I agree with the points presented there but again, it seems to be nothing but original thought, publisher of which Wikipedia is not, as already mentioned. Another problem with the chapter is that it makes no attempt to identify what is meant by "the chess matches".

Finally, agreeing on this talk page that "achievable search depth is constrained by the branching factor" or anything like that is not even worth discussion. Just cite your external sources that say so. This talk page is for discussing the Arimaa article and for improving it, and in that process personal views don't really matter.

Anyway, I recognized your situation and figured I'd bring in the perspective of a neutral third party. (And of course I also like a good argument every now and then... All in good spirit of course! :) Cheers,

ZeroOne (talk / @) 23:39, 25 March 2010 (UTC)

Thanks ZeroOne for your criticism. I hope you continue to give more criticism. I will try to get more sources and explain chess matches. I also have more comments.
Brute-force is only part of the problem. The comparison between the chess matches and the Arimaa challenge is incorrect. You are calling some of computer chess's major breakthroughs minor. These techniques, such as the selective extensions used by Deep Blue, have pushed the computer higher by many points. These are major improvements for Deep Blue and computer chess. Arimaa is not chess, not even a chess variant. You cannot take a chess program, make some changes, and expect it to be successful at Arimaa. There were thousands of programmers working for 40 years making chess computers successful. They developed many ideas and tested them in many chess programs. They created specific chess programmer aids like UCI and chess programmer forums. You will need these things to develop advanced Arimaa programs. Arimaa is a new game. Are there even 20 Arimaa programmers? How many years have they worked on Arimaa programs? Less than 8? There has not been enough effort to develop Arimaa-specific techniques. I do not understand why Arimaa was created. Do you know there is a popular chess variant as old as chess, with hundreds of programmers who have worked for 30 years, where the humans can still beat the computer? I am talking about shogi. Shogi is harder than chess. Once they have developed enough advanced shogi-specific techniques, computers will beat the humans. Arimaa programs failed because of the small effort spent on programming computer Arimaa. The reason is not the bigger branching factor in Arimaa. Arimaa programmers can transfer some knowledge from chess, so that will save some time. However, the small number of programmers will slow things down a lot. It will take 50 years before a computer can win the Arimaa challenge. However, when computers do win at Arimaa it will not be because Arimaa programs think like humans. Computers have different talents than humans.
State-space complexity is important because programs do not look up moves in a dictionary. If programs did look up moves then it would not matter how many branches there are. However, since they must traverse and prune large trees, bigger trees need more pruning and traversing. State-space complexity indicates the relative difficulty the computer has in traversing and pruning an Arimaa tree compared to a chess tree. This in turn tells us which game is harder for the computer.
I believe you are not getting state-space complexity right. It does not depend on the branching factor. It is the total number of legal positions, so it would be relevant if computer did play by dictionary lookup. Perhaps you are thinking of game-tree complexity. That does depend on the branching factor, and might be more relevant because chess and Arimaa engines actually build search trees. Fritzlein (talk) 15:00, 26 March 2010 (UTC)[reply]
The number of legal positions depends on the branching factor. Mschribr (talk) 22:15, 28 March 2010 (UTC)[reply]
No, the number of legal positions does not depend on the branching factor. For example, if Arimaa had only two steps per move instead of four steps per move, it would have exactly as many legal positions, but would have a much lower branching factor. Fritzlein (talk) 17:36, 29 March 2010 (UTC)[reply]
Mschribr, it does seem to me that Fritzlein is correct here. The number of legal positions is an ambiguous number anyway since there are positions which are legal but not reachable (i.e. positions which only have theoretical interest as puzzles or such). —ZeroOne (talk / @) 22:52, 29 March 2010 (UTC)
Indeed, the main reason chess has a higher state-space complexity than Arimaa is that chess allows pawn promotion. If you don't count positions where a player has three knights, bishops of the same color, or some other promotion-induced weirdness, then Arimaa and chess have roughly the same state-space complexity, as you would expect from counting arrangements of the same 32 pieces on the same 8-by-8 board. It would be a stretch to argue that chess is more difficult for computers than Arimaa based on the existence of a class of chess positions that essentially never arise in practice, even if chess and Arimaa engines did play by dictionary lookup, which they don't. Fritzlein (talk) 16:17, 30 March 2010 (UTC)[reply]
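(To make the counting argument above concrete, here is an illustrative Python calculation of the number of ways to place the full 32-piece Arimaa army on the 64 squares, ignoring legality, captures, and the side to move. It is only an order-of-magnitude sketch, not a published state-space figure.)

    from math import comb, factorial

    piece_counts = [8, 2, 2, 2, 1, 1] * 2   # rabbits, cats, dogs, horses, camel, elephant; both colours
    occupied = comb(64, 32)                 # choose which squares hold the 32 pieces
    denom = 1
    for n in piece_counts:
        denom *= factorial(n)               # identical pieces are interchangeable
    arrangements = factorial(32) // denom
    total = occupied * arrangements
    print(f"about 10^{len(str(total)) - 1} full-material arrangements")   # roughly 10^42

Ignoring legality, the chess army has exactly the same mix of duplicate pieces (8+2+2+2+1+1 per side), so this particular count is identical for both games; the differences come from captures, promotions, and legality constraints, as discussed above.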
You also have a problem with the definition of artificial intelligence. Artificial intelligence is not when the computer does a task the same way a human does a task. Artificial intelligence is any task a computer does that if it were done by a human would be considered intelligent. So when a human plays chess he is using his intelligence. Therefore, when a computer plays chess then the computer is using Artificial intelligence. Mschribr (talk) 01:42, 26 March 2010 (UTC)[reply]
Some people claim that whatever a computer can do well (so far) is by definition not true intelligence! What chess programs do is a collection of technical tricks and CPU-cycle shaving techniques which can be interesting in their own way, and certainly an achievement if done well, but which are extremely remote from anything which would constitute any form of "general intelligence", and which seem to have few repercussions for AI research outside of a fairly narrow domain of playing certain types of games... AnonMoos (talk) 08:51, 26 March 2010 (UTC)[reply]
I second this. Computer chess was originally thought of as an interesting AI problem but nowadays I don't think anyone regards it as such anymore. The basic methods used in chess programs are nowadays extremely well established. I have taken one university level course in artificial intelligence and implemented an AI for a board game. Even though the course was interesting and well organized it was still quite a let-down: it truly revealed that there's no "magic" in AI that makes the computer "intelligent" or "alive". It's just algorithms and data structures. Discussing the concept of intelligence would be a problem in the domain of philosophy rather than computer science anyway. —ZeroOne (talk / @) 23:04, 26 March 2010 (UTC)
It sounds like you were expecting the manic-depressive robot from The Hitchhiker's Guide to the Galaxy or HAL from 2001. Sorry to disappoint you; those things only happen in movies. Mschribr (talk) 22:52, 28 March 2010 (UTC)
Whatever -- a program or machine would not have to be able to pass a full Turing test or display genuine emotions in order to display some fraction of something like general intelligence ability (even if a rather small fraction in absolute terms, and only using it within some specific domain). By contrast, within the context of artificial intelligence research as a whole, chess programs seem to consist mainly of somewhat low-level optimizations and CPU-cycle shaving technical shortcuts. Chess programs may have been on the cutting edge of artificial intelligence research in the 1960's, but I don't think that's been remotely true for well over 20 years... AnonMoos (talk) 00:17, 29 March 2010 (UTC)[reply]
True again. I admit that when watching my AI play the game I was surprised many times: was it really smart enough to make that move or did it just play it by accident? Still, I knew exactly how the program reasoned. The interesting thing was that I had no way of actually following the program's thought process: it was browsing through thousands of positions, evaluating them and just somehow arriving at a conclusion. I might define AI as an algorithmic process that may yield surprising results and that the programmer or other observers cannot easily follow. Of course you could now come up with as many counter examples of processes fitting that description as you wish but I hope you see my point.
It's also funny you should mention the Turing Test. Indeed, the original Turing Test of chatting with an AI is an extremely difficult problem. However, there can be thought to be domain-specific Turing Tests. In the context of a first person shooter game, for example, the game could be thought to pass a variant of the Turing Test if you mistake its bots for human players. You might want to read some of the articles in aigamedev.com, Good AI vs. Fun AI for example. It's a pretty enlightening site.
ZeroOne (talk / @) 22:43, 29 March 2010 (UTC)
ZeroOne, I appreciate your perspective that what is relevant for Wikipedia is what can be referenced, not what is true. The fact that truth is no defense for a sentence or paragraph on Wikipedia is the main reason I have essentially ceased to edit Wikipedia. For some true statements in the "computer ineptitude" section there are no references, so they will simply have to be deleted to get this article up to standard.
For other statements there is a reference, but so what? For one claim I have a reference: "An essential reason that Arimaa is so hard for computers is its huge branching factor." That fact is written in black and white on page 47 of my book [http://www.amazon.com/Beginning-Arimaa-Reborn-Computer-Comprehension/dp/0982427409 Beginning Arimaa]. Because I published a book about Arimaa, I can now cite an external source. Yay! But why did the statement become more valid when I said it in my book than when I said it here? It was equally true both times I said it.
I was roused to action mostly because of the untrue claim that chess is harder for computers than Arimaa. But I see that it doesn't matter what is true or untrue, merely what is referenced or unreferenced. If two opposite statements both can cite a reference, they are equally valid regardless of truth. If neither can cite a reference, they are equally invalid regardless of truth. So be it.
I realize that there are problems with the computer ineptitude section apart from the lack of references. I would work to correct those problems and to make the section more true, except that the content will have to be deleted anyway even if I get it right, so why bother? For example, I guarantee that you will not find an external source saying that lower capture density makes Arimaa harder than chess for computers, except sources that cite the present version of Wikipedia's Arimaa article. It's original research, which is unacceptable on Wikipedia. Delete away. Fritzlein (talk) 15:00, 26 March 2010 (UTC)[reply]
Fritzlein, I hear you. I didn't create this userbox for nothing. I'm not a deletionist so I really don't want to delete the material in the article. I'm merely warning you that someone else probably will and that something should be done to the article before that happens.
Also, you are correct in that having something printed in a book does not make the claim any more true than it already is in any other context. It does, however, make it a reliable source for the fact. This is because it is assumed that your publisher has proof-read your book. And because it's their money that's at stake, they want to make darn sure that their books don't contain erroneous claims. Publish a book full of errors and poof, the next book you publish will not sell. So if they have allowed you to say something in a book, then we can here assume that whatever it is that you said has been verified. So if the claims in the article can be cited to a book then please do so. (See, for example, the endgame tablebase article for examples on how to cite various pages of various books.)
ZeroOne (talk / @) 23:04, 26 March 2010 (UTC)
I self-published. Tee-Hee! Unfortunately, I didn't include the average captures per game of chess/Arimaa, or the median turn of first capture. Both of those statistics were my original research based on sampling public chess and Arimaa games. They are verifiable by anyone with a similar interest, but have no external text to cite. Of course, without them I can't even claim that Arimaa has a lower capture density than chess, never mind list lower capture density as a reason why Arimaa would be more difficult for computers. Nor can I support my claim that endgame tablebases are useless for Arimaa once I must delete the original research that shows endgames seldom arise.
The claim that opening books are useless for Arimaa will also have to go. There is no reason why an Arimaa bot can't compile an opening book, and indeed some of them do so. Everyone agrees that Arimaa opening books are far less useful than chess opening books, but again, there is no external source. Similarly the fact that Arimaa computers play relatively better in the endgame is universally accepted, but will have to be deleted.
One of the few statements for which you could find an external reference is that Arimaa is harder than chess for computers. We have no less of an authority than programmers who have tried to win the Arimaa Challenge. Furthermore, I am unaware of any source (other than Mschribr) claiming the opposite. So perhaps the end result of this discussion will be that the article can keep the essential fact, but delete all the explanation of why that fact might be so. I can live with that outcome as long as we also delete the anti-fact that chess is harder for computers than Arimaa. Fritzlein (talk) 03:28, 27 March 2010 (UTC)[reply]
Well, I admit that's a rather tricky situation. Wikipedia does have some guidelines on self-published sources. The main point that applies here seems to be this: "Self-published material may, in some circumstances, be acceptable when produced by an established expert on the topic of the article whose work in the relevant field has previously been published by reliable third-party publications." I'd say you qualify as an established expert on the topic, but I don't know if you have ever had anything Arimaa related "published by a reliable third-party publication"? Also, I don't know about this but the situation could be interpreted so that you could not cite your own book ("original research") but any other user could... Tricky, as I said. I'd still find any sources better than no sources and I doubt whether anyone would bother checking let alone raising an issue about a source that is self-published. —ZeroOne (talk / @) 22:43, 29 March 2010 (UTC)
I wouldn't consider myself an established expert on game engine programming, even though I have thought a lot about the meaning of the Arimaa Challenge. It would feel absurd for me to cite my own book. But, as you pointed out below, there are experts to cite. David Fotland is an established expert on game engine programming, and his published paper on Arimaa says, "The key issue for Arimaa and computers is the huge branching factor. Some positions have only one legal step (after a push), but most have between 20 and 25 legal steps. The four steps in a move lead to about 300,000 4-step sequences. Not counting transpositions, there are typically between 20,000 and 30,000 distinct four step moves. // Because the pieces move slowly compared to chess, an attack can take several moves to set up, so the program must look ahead at least five moves (20 steps) to compete with strong players. This is too deep for a simple iterative deepening alpha-beta searcher. Forcing sequences are rare, so deep searching based on recognizing forced moves (such as PN search or shogi endgame search) are not effective. // Sacrificing material to create a threat to reach the goal, or to immobilize a strong piece, is much more common than sacrifices for attacks in chess, so programs that focus on material are at a disadvantage." Fritzlein (talk) 15:38, 30 March 2010 (UTC)[reply]
Yeah, I wasn't actually thinking of you as an expert in game programming but as an expert in Arimaa. :) I now realize that a game programmer's word would be a lot more convincing here than an Arimaa expert's word. Fotland's paper seems like a very good source. —ZeroOne (talk / @) 22:14, 30 March 2010 (UTC)
Mschribr, your argument based on the newness of Arimaa is unpersuasive. As ZeroOne points out, it is no longer good enough for Wikipedia content to merely sound right, but hopefully it still makes some difference if I explain why your argument doesn't sound right.
I quite agree with you that Arimaa programs are not as advanced as they would be if there were as many developers devoted to the task as there were for chess, or if they had been working on it for as long as they have been working on chess. The quality of the solution depends on how many people are working on the problem, and how much time they have to solve it. The chess programming community is large and old, whereas the Arimaa programming community is small and young, therefore it is reasonable to assume that chess software will be of a higher quality than Arimaa software.
The issue under debate, however, is the intrinsic difficulty of the two games for computers relative to the difficulty of the two games for humans. We have no benchmark for how well computers play Arimaa other than how well humans play Arimaa. The human benchmark is presently very low for exactly the reasons you propound on the computer side, namely that the human playing population is very small and has only been studying Arimaa for a few years. In contrast, chess has been around for centuries, it has been extensively studied, and millions of people play it. The top chess players are very, very good at chess, much better than the top Arimaa players are at Arimaa.
A neutral attitude would be that the newness of Arimaa and the small community of interest are an equal handicap to Arimaa developers and Arimaa gamers. Yes, the level of Arimaa software is not yet as high as it is for chess, but also the level of human Arimaa play is not as high as it is for chess. If we suppose that the handicap of a small, young community is equal in both cases, then it is telling that top chess software beats top chess humans, whereas top Arimaa software is not close to top Arimaa humans.
I would argue, however, that even this neutral stance is too favorable to chess. Arimaa was not originally introduced as a new strategy game for humans to play, it was originally introduced as an artificial intelligence challenge. Because of how Arimaa was publicized and promoted, it has so far attracted a high proportion of developers relative to gamers. I submit that if you took the number of people who have written code for an Arimaa bot in the last year and divided by the number of humans who have played a game of Arimaa in the past year, you would get a higher ratio than if you did the same for chess.
If the Arimaa community is actually, as I suggest, more programmer-dense than the chess community, then the intrinsic computer-resistance of Arimaa is being underestimated at present. That is to say, perhaps the current dominance of top humans over top computers is even more remarkable than it appears because relatively more effort has gone into winning the Arimaa Challenge $10,000 prize than has gone into winning the human Arimaa World Championship, with its $225 prize this year.
But of course, the ratio of active Arimaa developers/players compared to ratio of active chess developers/players would be original research, which can't be done. I would be satisfied if the Wikipedia janitors would merely support some kind of balanced viewpoint whereby the relative playing strength of Arimaa software/gamers compared to the playing strength of chess software/gamers is considered a relevant argument. Fritzlein (talk) 15:00, 26 March 2010 (UTC)[reply]

Regarding the paragraph which ends with "In Arimaa, however, the side to move switches only every four steps, which reduces the number of available cutoffs in a step-based search."

This looks like nitpicking. An Arimaa program could easily use turn-based search, and then the cutoffs would be applied each ply just as in chess. In fact the step/turn based search choice should be just a matter of programmer preference, with no performance impact if done right (for example in a turn based search you would probably generate moves in batches as needed, since a lot of them will get ignored due to pruning). True, there are fewer plies in an Arimaa search but that's due to the branching factor which was already discussed in previous paragraphs. So this argument looks to be subsumed by the "bigger branching factor" argument when doing an Arimaa vs chess search difficulty comparison.

Let me put this another way in case the above wasn't convincing: we can either do an apples to apples comparison by comparing a turn based search for both games, or if we want to equate Arimaa steps with chess moves then we also have to give Arimaa AI the advantage that the number of choices per step is much lower in Arimaa than in chess (the fourth root of 17,281 is about 11.5, which is less than 35).

If no one has any good arguments against it, I will eventually remove this paragraph. 109.74.204.218 (talk) 09:38, 28 May 2010 (UTC)

The argument about alpha-beta cutoffs may indeed be nit-picking, in the sense of attaching undue significance to a small matter. It is not, however, irrelevant.
Arimaa programmers do indeed have the option of using turn-based search or step-based search. It turns out not to be merely a matter of programmer preference as one might intuit; step-based search performs better in practice. This is not because Arimaa programmers don't know how to do turn-based search properly, it is because step-based search provides better move ordering. The effectiveness of alpha-beta pruning depends on examining good moves before bad ones. Creating a good move ordering can be worth a considerable investment of time, and evaluating after partial moves (steps) is a reasonable way to create such an ordering.
Another reason that step-based searching is more effective than turn-based searching is that partially searched plies add little strength. For example, an Arimaa program might finish a 2-ply search in 1/5 of its allotted thinking time. With a turn-based approach, even thinking five times as long as it has already thought is unlikely to complete a 3-ply search. A step-based search, however, can complete deeper iterations, with a greater probability of contributing useful information. Similarly a chess engine will usually complete another full ply given five times the thinking time: Advantage chess.
The "fewer cutoffs" argument is certainly less significant than the "higher branching factor" argument. However, it is not exactly subsumed by it. If we make an apples-to-apples comparison using turn-based search in both cases, then we could indeed delete the "fewer cutoffs" argument as irrelevant, but then we should add an argument about the reduced effectiveness of iterative deepening, i.e. worse move ordering and less effective use of time, as argued in my previous two paragraphs.
If this is not convincing, ask any chess engine developer whether it would hurt the strength of his alpha-beta pruner if it were permitted to do iterative deepening only in multiples of four (or even multiples of three) ply. Will he shrug and say it is basically irrelevant how many ply at a time the iterative deepening is done? Not a chance.
I understand the suspicion that the article overstates the difficulty of programming an Arimaa engine. It would be odd, however, to try to correct this by insisting that a comparison be made on the basis of a *less* effective way of programming an Arimaa engine. Ironically, making the comparison on the basis of a turn-based search would weaken the Arimaa engines and thus *strengthen* the case that Arimaa is difficult for computers. The "fewer cutoffs" argument goes away, only to be replaced by arguments that are more significant (less nit-picky).
Incidentally, the per-step branching factor of Arimaa is not 11.5, but rather in the 20's. The per-turn branching factor of 17,281 already throws out transpositions of different orders of steps. Including transpositions, the number of possibilities per turn is nearly 24 times as high (4! = orderings of four independent steps). I don't know that the per-step branching factor has been studied as thoroughly as Janzert's study of the per-turn branching factor, though.
I support changing the computer ineptitude article for greater accuracy. Thanks for giving an opportunity to respond on the chat page about the validity of your conclusions about Arimaa programming. Fritzlein (talk) 17:23, 6 June 2010 (UTC)[reply]
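(To make the step-versus-turn discussion above concrete, here is an illustrative sketch of the iterative-deepening driver under the two approaches. The search function, time handling, and parameter names are hypothetical; the only point is that an engine deepening one step at a time has four times as many usable stopping points as one deepening a whole four-step turn at a time.)

    import time

    def iterative_deepening(root, search, seconds, ply_increment):
        # ply_increment = 1 for a step-based Arimaa search, 4 for a turn-based one;
        # a chess engine effectively uses 1, since one ply is already a whole move.
        # (A real engine would also abort a search that overruns the deadline.)
        deadline = time.time() + seconds
        best_move, depth = None, 0
        while time.time() < deadline:
            depth += ply_increment
            result = search(root, depth)      # hypothetical fixed-depth alpha-beta searcher
            if result is not None:
                best_move = result            # keep the move from the deepest completed iteration
        return best_move

With ply_increment=4, an engine that finishes its 8-step (2-turn) iteration early may be unable to finish 12 steps at all and so wastes the remaining time, whereas with ply_increment=1 it can still complete a 9- or 10-step iteration and improve its move ordering, as argued above.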

There are errors in the section Comparing arimaa challenge to chess challenges. You say arimaa is a new game so humans are at a disadvantage just as much as computers. This is false. Humans and computers do not learn at the same rate. Humans are visual and see patterns, so they learn quickly and have attained a high level of play quickly. Computers do not learn, but improve slowly through the process of computer programming. You compare the arimaa challenge match to the Man vs Machine World Team Championship of 2004 and 2005. The arimaa challenge match does not allow custom made computers or computers costing more than $1000. However, the Hydra chess machine in the Man vs Machine World Team Championships uses custom FPGA chips and runs on a 32 node Xeon cluster, with a total of 64 gigabytes of RAM. The arimaa challenge says the computer program must demonstrate that it can win at least two of the three games against each of the human players to win the challenge. In the 2004 Man vs Machine World Team Championship the computer won 5 out of 12 games. In the 2005 Man vs Machine World Team Championship the computer won 6 out of 12 games. The computer lost the 2004 and 2005 Man vs Machine World Team Championship according to the rules of the arimaa challenge because the computers need to win 2/3, or 8 out of 12 games. In the 2006 Deep Fritz vs. Vladimir Kramnik match the computer won 2 games. The computer lost the Deep Fritz vs. Vladimir Kramnik match because according to the rules of the arimaa challenge the computer needed to win 2/3, or 4 out of 6 games. Rybka lost the Rybka vs. Jaan Ehlvest matches because Rybka won less than 2/3 of the games according to the rules of the arimaa challenge. This makes the computer even weaker because Jaan Ehlvest is not even in the list of top 100 players, while the humans in the arimaa challenge are in the list of top 100 players. A human has never won a chess match using the rules of the arimaa challenge match. So according to the rules of the arimaa challenge match, humans are better than computers in chess. The points made by the Arimaa community are invalid and need to be removed. Mschribr (talk) 21:36, 2 March 2011 (UTC)

There are two traditions in chess for the scoring of draws. Most common is that they count as half a win and half a loss. Less common is that they don't count at all. In the team matches, if you count the former way, computers had 8.5-3.5 wins in 2004 and 8-4 wins in 2005. If you count the latter way, computers had 6-1 wins in 2004 and 5-1 wins in 2005. Either way computers scored 2/3 of the wins both years. Similarly, in 2006 Deep Fritz beat Kramnik 4-2 by the former method of scoring, and 2-0 by the latter method of scoring. In 2007 Rybka beat Ehlvest (at pawn odds) 6.5-1.5 or 4-1, depending on how you count. In these matches, too, computers scored 2/3 of the wins by either method of counting.
The only way computers didn't score 2/3 of the wins in all of these matches is if draws are counted as losses for computers and wins for humans. That method of scoring is not part of the rules of the Arimaa Challenge, and it appears nowhere in the history of scoring chess matches. This scoring is entirely your invention. Made-up scoring rules are a rather thin basis for calling statements in the Arimaa article "false".
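(As a purely illustrative check of the arithmetic in the two paragraphs above, the sketch below scores a match under both draw conventions and compares the computer's share against the 2/3 threshold. The win/draw/loss tallies are the ones implied by the scores quoted above, not independent data.)
 # Computer's share of a match under the two draw conventions discussed above.
 # Tallies are reconstructed from the quoted scores: 8.5-3.5 with 6-1 decisive
 # implies 6 wins, 5 draws, 1 loss; 4-2 with 2-0 decisive implies 2 wins, 4 draws.
 from fractions import Fraction
 TWO_THIRDS = Fraction(2, 3)
 def computer_share(wins, draws, losses, draws_as_half=True):
     if draws_as_half:
         return Fraction(2 * wins + draws, 2 * (wins + draws + losses))
     return Fraction(wins, wins + losses)   # draws ignored: decisive games only
 matches = {
     "2004 team match (6W 5D 1L)": (6, 5, 1),
     "2006 Fritz-Kramnik (2W 4D 0L)": (2, 4, 0),
 }
 for label, (w, d, l) in matches.items():
     for half in (True, False):
         share = computer_share(w, d, l, half)
         print(label, "draws as half" if half else "draws ignored",
               float(share), share >= TWO_THIRDS)
Under either convention the computer's share in both of these matches is at least 2/3, which is the point being made above.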
As for the strength of the hardware, the Arimaa programs in the 2010 Arimaa Challenge ran on a quad-core Q9550. Rybka in 2007 against Ehlvest ran on a quad-core QX6700. Fritz in 2006 against Kramnik ran on two Intel Core 2 Duos. In other words, in the Arimaa Challenge humans are winning against hardware that is comparable to the hardware that beat humans at chess. Admittedly, humans also lose at chess to systems with even more than four cores, but that fact doesn't prove what you are trying to prove.
As for Ehlvest being a weak player, I specifically answered that point higher in this very thread in a post dated 17:36, 29 March 2010 (UTC)
As for humans attaining a high level of play at a two-player game of perfect information more easily and/or more quickly than computers can, please give a source or even just an example. The evolution of ratings of top humans and top computers on arimaa.com tends to contradict your assertion, but if you have better evidence than arimaa.com ratings, I would like to hear it.
Fritzlein (talk) 05:35, 24 March 2011 (UTC)[reply]
The Arimaa Challenge rules say, “The hardware will typically be a standard general purpose computer that can be purchased within $1000 USD”. Hydra violates that rule. Hydra played in Man vs Machine World Team Championship. Therefore, you cannot compare the Arimaa Challenge to the Man vs Machine World Team Championship.
The Arimaa Challenge rules say, “The program must demonstrate that it can win at least two of the three games against each of the human players to win the challenge”. The Man vs Machine World Team Championships each had 12 games. Therefore, the computer needed to win 8 out of 12 games to win the Championships using the Arimaa Challenge rules. In 2004 and 2005, the computers won fewer than 8 games. Therefore, under the Arimaa Challenge rules the computer lost the Championships. Mschribr (talk) 19:33, 1 April 2011 (UTC)[reply]
Obviously there has never been a man vs. machine chess match played under the exact rules of the Arimaa Challenge. That is not a good reason to ignore the substantial evidence that humans would lose to computers in a chess challenge under exactly the same conditions under which humans win the Arimaa Challenge every year. Since you haven't even bothered to respond regarding the hardware strength of Deep Fritz in 2006 and Rybka in 2007, and haven't bothered to defend your made-up scoring system under which draws count as losses for computers (which is not part of the Arimaa Challenge scoring), I'm not sure what else there is to say. Fritzlein (talk) 04:57, 4 April 2011 (UTC)[reply]
Computers need a much higher level of play to win under the Arimaa Challenge rules than under standard chess rules. Computers beat humans in chess under standard chess rules. However, there is no proof that computers have reached the level needed to win under Arimaa Challenge rules. These rules would be an appropriate challenge for computers in chess in 2011, after 50 years of computer chess research. However, new games like Arimaa have not had enough research to reach that high level of play. The Arimaa Challenge rules say, “The program must demonstrate that it can win at least two of the three games against each of the human players to win the challenge”. Therefore, if the computer won 5 out of the 9 games the computer loses. However, under standard chess rules the computer wins.
The win of the Fritz over Kramnik in 2006 did not meet the Arimaa Challenge requirement of “The program must demonstrate that it can win at least two of the three games against each of the human players to win the challenge”. Fritz needed to win 4 of the 6 games. Fritz won 2 of the 6 games. Therefore, Fritz lost to Kramnik in 2006 under the Arimaa Challenge requirements. Rybka in 2007 played 6 games. Rybka needed to win 4 games under the Arimaa Challenge requirement of “The program must demonstrate that it can win at least two of the three games against each of the human players to win the challenge”. Rybka won 2 games. Therefore, Rybka in 2007 lost under the Arimaa Challenge requirements.
You did not answer the point that you cannot compare the Arimaa challenge match to the Man vs Machine World Team Championship. Hydra violates the Arimaa Challenge rules. Remove the comparison between the Arimaa challenge match and the Man vs Machine World Team Championship. Mschribr (talk) 20:26, 5 April 2011 (UTC)[reply]
You have two reasonable options for scoring draws: either they are half a win and half a loss, or they don't count at all. You continue to insist on counting draws as losses for computers and wins for humans, which is manifestly unfair. The Arimaa Challenge has never scored in this way; no chess tournament has ever scored in this way. Arimaa is now a drawless game, but when draws were allowed, they counted for half a win and half a loss, just as in nearly every chess tournament. At that time, two wins and four draws would have counted just the same as four wins and two losses, i.e. it would have been 2/3 of the wins. You can repeat your scoring system that nobody except you uses, but that is not relevant to the discussion. Further, I did directly answer your point about Hydra. To repeat: just because a computer with more than four processors can beat humans at chess doesn't mean a computer with only four processors can't beat a human at chess. You are relying on a logical fallacy. But there is more than logic at stake; there is direct evidence: a computer with only four processors *has* beaten the World Champion at chess with 2/3 of the wins (or, if you prefer, 100% of the wins and 2/3 of the points). Fritzlein (talk) 22:43, 5 April 2011 (UTC)[reply]
Of course, if a computer with more than 4 processors can beat a human, that does not preclude a computer with fewer than 4 processors from also beating a human. However, you cannot use Hydra as a proof, which is what you are doing when you compare to the Man vs Machine World Team Championships. Then we could remove Hydra from the Man vs Machine World Team Championships. In 2004, Fritz and Junior played 8 games against Karjakin, Ponomariov and Topalov. The computers had 3 wins, 4 draws and 1 loss. The computers won 5-3. In 2005, Fritz and Junior played 8 games against Ponomariov, Kasimdzhanov and Khalifman. The computers again had 3 wins, 4 draws and 1 loss. The computers won 5-3 in both tournaments. The computers scored 5 out of a maximum of 8, or 0.625, which is less than 2/3 (0.6667) of the games. Under the Arimaa Challenge rules 2/3 is needed, so the computer lost in 2004 and 2005. The win against the world champion was an anomaly. In that tournament, the world champion made an impossible blunder, allowing a mate in one move, in game 2. This kind of blunder does not happen at the world champion level. In game 1 the champion was winning; the commentators said that the champion made a series of bad moves in the middle of the game, which drew the game. Therefore, the champion was definitely playing poorly in this match. We compare this match to other world champion vs personal computer matches, which all ended in draws. To use this match as an example that the world champion lost to a personal computer is wrong when the champion obviously played poorly and the other world champion matches against personal computers ended in draws. This is no proof that the personal computer has reached the high level needed to win a chess match under Arimaa Challenge rules. The Rybka Ehlvest match only shows that a computer is better than a weak grandmaster with a performance of 2667, well below world champion level. It definitely does not prove that the computer is able to beat the world champion under the Arimaa Challenge rules. Mschribr (talk) 21:22, 6 April 2011 (UTC)[reply]
OK, removing Hydra from the team matches is a reasonable suggestion. I support editing the article to reflect that way of looking at it. But it is not reasonable to disregard the Kramnik vs. Fritz match. Using the argument that the better player lost merely because he was playing poorly, every single match result can be thrown out, and no result means anything. What is the evidence that Kramnik was better than Fritz? Prima facie, the fact that Fritz beat Kramnik is evidence that Fritz is the better player. Similarly, the Rybka vs. Ehlvest match can't be thrown out, because it is a point of close comparison to Arimaa. Arimaa players who are further below the Arimaa World Champion's level than Ehlvest is below the chess World Champion's have won matches under conditions less favorable to humans than those under which Ehlvest lost. Indeed, your insistence that computers in chess haven't passed the bar of the Arimaa Challenge is quite undermined by the Ehlvest match, because he was losing games in which he was given a handicap. We can compare matches feature by feature. Hardware was comparable; the chess player was relatively higher-rated than the Arimaa player; the chess player received handicaps the Arimaa player did not; the chess player lost and the Arimaa player won. Thus we do have evidence that Arimaa is more computer-resistant than chess. Fritzlein (talk) 04:32, 8 April 2011 (UTC)[reply]

A relevant data point is the 2011 Arimaa Challenge that is currently underway. One of the human players, Toby Hudson, is rated 2169, nearly five hundred Elo points behind the Arimaa World Champion Jean Daligault at 2659. Jaan Ehlvest was rated only two hundred Elo points behind the chess World Champion, ~2600 vs. ~2800. The hardware is an Intel Xeon Quad X3360 2.83GHz. If Hudson can beat the top Arimaa software in the 2011 Arimaa Challenge, that will show a level of human dominance that can't, to my knowledge, be matched by any chess man-versus-machine match of the last decade. Has a 2300-rated chess player recently defeated any top chess software running on a quad core? Fritzlein (talk) 15:42, 9 April 2011 (UTC)[reply]
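(For scale, here is a small sketch of what those rating gaps mean under the standard Elo expected-score formula. It is only an illustration: it assumes both rating pools behave like ordinary Elo ratings, which is approximate at best, and it uses the rough figures quoted in the post above.)
 # Expected score under the standard Elo formula, applied to the rating gaps
 # quoted above. Purely illustrative; the ratings are the approximate figures
 # given in this post, and cross-game comparison of rating scales is inexact.
 def expected_score(own_rating, opponent_rating):
     return 1.0 / (1.0 + 10 ** ((opponent_rating - own_rating) / 400.0))
 print(expected_score(2169, 2659))   # Hudson vs. the Arimaa World Champion: ~0.06
 print(expected_score(2600, 2800))   # Ehlvest vs. the chess World Champion: ~0.24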

If you take out Hydra, then why do you need the Man vs Machine World Team Championships? The Man vs Machine World Team Championships show computers have not reached the high level required to win under the Arimaa Challenge. The reason we can disregard the Kramnik vs. Fritz match is that the world champion blundered into a mate in 1. This is a blunder even a novice player should never make. We can also compare this match to the 3 previous world champion vs personal computer matches. Those all ended in draws. Changing this 1 game from a loss to a draw means the world champion would not have lost under the Arimaa Challenge rules. The Rybka Ehlvest match just shows the computer is better at chess than at Arimaa. It does not say why the computer is better at chess than at Arimaa. Also the Arimaa Challenge is against 3 players. If the computer beats the weakest players but loses to 1 other player, then the computer loses the Arimaa Challenge. So the Rybka Ehlvest match is not comparable to the Arimaa Challenge.
I would agree that a computer in 2011 may be able to beat humans in chess under the Arimaa Challenge rules. However, the computer is definitely not ready to beat humans in Arimaa under the Arimaa Challenge rules. First, the computer should beat humans in Arimaa under standard chess rules before you use Arimaa Challenge rules. I think a weaker Arimaa player such as Toby Hudson could beat any custom-made supercomputer until 2020. If that is true, it still does not prove Arimaa is harder for the computer than chess. It just means a bigger effort is needed in Arimaa.
Why create a new game that is hard for the computer when there are older games, closer to chess, that are harder for the computer, games such as Shogi and Go? Alternatively, why not use one of the many chess variants, like Crazyhouse, to make the same point? Mschribr (talk) 15:35, 12 April 2011 (UTC)[reply]

I do believe that computers would prevail over humans at chess under the terms of the Arimaa Challenge match. It is becoming clear to me, however, that there is no need to argue this point. For the purposes of the article, the essential point is that Arimaa is more computer-resistant than chess. This is what is claimed in the opening paragraph, and this is what is amply supported by comparable man-versus-machine matches. I am glad you also concede the weight of this evidence that computers are better at chess than at Arimaa. I expect this means we could find a mutually satisfactory wording along the lines of, "It is debatable whether computers would defeat humans at chess under the terms of the Arimaa Challenge, but given the hardware used in the Arimaa Challenge, top chess software plays at least as well as any human, whereas top Arimaa software plays far below the level of the top humans."

You argue that Toby Hudson's case doesn't show that Arimaa is harder for computers than chess is, because of the bigger effort that has gone into chess software, but you again neglect the bigger effort that has been expended by human chess champions. Imagine a chess player who played actively for only four years from the date he learned the game, and who had for sparring partners only players as inexperienced as himself. Imagine he semi-retired and played in only one small chess tournament each year for the following four years. Imagine that this chess player had read only one book on chess in his life, and that this one book was geared toward novices. Could such a chess player compete anywhere near the level of Jaan Ehlvest? Top humans are better at chess than they are at Arimaa. Yes, obviously the greater effort of chess programmers has produced better chess software than the Arimaa software produced so far by the Arimaa community. And equally obviously the greater effort of human chess players, millions of chess players spanning many generations, has produced better human chess players than the human Arimaa players produced by the Arimaa community.

To your final point, you are indubitably correct that shogi and Go are more computer-resistant than chess. Both shogi and Go are excellent games. The fact that Arimaa is similar in some way to the great games of shogi and Go is rather a reason to take Arimaa seriously. If you prefer to devote yourself to classic games rather than modern games, more power to you, but it then sounds odd to put down a modern game because it shares some point of excellence with the classic games that you love. Fritzlein (talk) 17:27, 15 April 2011 (UTC)[reply]

I think the main point of disagreement is the reason humans beat computers in Arimaa. I say it is because humans learn faster than computers and because of the small effort by computer programmers. You say it is because Arimaa is too hard for computers. It seems that currently there is no hard proof either way. We can come to an agreement if you include both reasons or leave out any reason. The Arimaa game was created to research computer AI. Why create a new platform when there are existing platforms for just that? Both Shogi and Go are old games where humans have reached a high level of play. Both have a long history of programmers striving to reach that level. This proves that both Shogi and Go are harder for computers than chess. Humans are obviously stronger than computers in Arimaa. Why raise the bar even higher for computers? Raise the bar for the computers when computers have reached equality in Arimaa. The reasons a computer beat Ehlvest and a computer lost to Hudson are these: 40 years of research produced strong computer chess players; humans learn fast and reached a higher level in Arimaa after a few years; computer advancement is a slow process and reached only a low level after a few years. A second point of contention is whether Arimaa is harder than chess for the computer. The significant difference is that chess has a higher state-space complexity than Arimaa, so chess is harder. So any comparison between Arimaa and chess should include the fact that chess is harder than Arimaa because chess has a higher state-space complexity. Both Shogi and Go have higher state-space complexity than chess; therefore they are harder than chess for the computer. Mschribr (talk) 23:06, 15 April 2011 (UTC)[reply]
Mschribr -- Not entirely sure what "both points of view" means, since nobody except you seems to dispute that with the current state of overall knowledge and COTS hardware, Arimaa is a harder problem than chess (though of course the Arimaa vs. chess comparison is not, and cannot be, a rigorously controlled experiment like a double-blind pharmaceutical trial to find out which of two drugs is most effective etc.), and you haven't presented any real support for your position other than personal arguments and opinions which others have not necessarily been impressed by. In any case, programmers coding Arimaa-playing programs can freely draw upon the last 45 years of techniques developed for chess-playing programs, but this doesn't seem to have helped all that much beyond a certain point... AnonMoos (talk) 02:18, 16 April 2011 (UTC)[reply]
If you do not understand what “both points of view” are, then reread my previous post. It is straightforward. I can answer specific questions. We are not certain, with the current state of overall knowledge and commercially available off-the-shelf hardware, that Arimaa is a harder problem than chess. We are certain only under the Arimaa challenge rules, which put further restrictions on commercially available off-the-shelf hardware and require the computer to play at a much higher level to win. You are correct that Arimaa programmers do not benefit much from 45 years of computer chess research. Arimaa programmers will probably need 45 years of research in computer Arimaa before they can win under the Arimaa challenge rules. Mschribr (talk) 12:24, 17 April 2011 (UTC)[reply]
I'm sorry, but that doesn't make too much sense -- programmers of Arimaa-playing programs aren't starting all over completely from scratch with PDP-1s and ignoring all past research. They can draw upon general strategies and algorithms developed over past decades for chess and also for many other games. If you feel that there are restrictions on the Arimaa prize beyond what is reasonable to ensure that it's unlikely that a brute-force or "fast-but-dumb" approach will prevail (see the discussion at the very top of this subsection above), then what are these alleged excessive restrictions? -- AnonMoos (talk) 12:55, 17 April 2011 (UTC)[reply]
Programmers of games like chess, international draughts, Shogi and Go use some similar ideas. However, they must create new ideas specific to their game if they want to achieve a high level of play. If we remove all ideas specific to their game, then their program will play at a low level. The same is true with Arimaa. Ideas specific to Arimaa must be developed to reach a high level of play. Games like chess, international draughts and Shogi do not use a brute-force or "fast-but-dumb" approach to achieve a high level of play. If you read the rules of the Arimaa challenge, you will find all the restrictions. Some restrictions are: the computer must win 2/3 of the games, which is more than a majority of the games; between games, no improving of the program is allowed; the computer cannot be custom-made or priced over $1,000. Mschribr (talk) 16:27, 17 April 2011 (UTC)[reply]
Mschribr -- of course chess programs don't literally use "brute force" in the sense of bogosort or whatever. However, in the context of the field of artificial intelligence research, they use semi-low-level algorithms of rather narrow scope (sometimes elegantly optimized, but still semi-low-level), and there seems to have been a certain tendency in recent decades for major advances in program chess-playing strength to come from throwing hardware at the problem — or at least being gently lifted by the rising tide of Moore's Law — rather than from any fundamental conceptual breakthroughs... AnonMoos (talk) 04:00, 9 May 2011 (UTC)[reply]
Bogosort means playing random moves, like moving the king into a checkmate in 1 move. Brute force is looking at every possible move. Brute force beats bogosort. Brute force plays at a low level because you can only look ahead a few moves. Therefore, chess programs do not use brute force. There were definitely many high-level breakthroughs in getting the computer to play chess at the current high level. That we humans cannot use these breakthroughs to play better chess does not mean they are not high-level breakthroughs. Not using the fastest available computers is foolishly handicapping your performance. A high level of play does not require a breakthrough. Mschribr (talk) 18:41, 16 May 2011 (UTC)[reply]
It's great to have a point of agreement, namely that top humans dominate top software at Arimaa but not at chess, on comparable commodity hardware and counting wins the same way in each case. Since that agreement took some time to hash out, it might pay to present the evidence in its favor in coherent form in the article. I would support such an edit by someone else, and would be willing to make the edit myself if there were no objections.
Among the several points of disagreement, I will not bother responding to, "The rules of the Arimaa Challenge should be different," and, "There was no point in inventing Arimaa." Those are germane neither to the wording of the article nor to understanding why humans dominate computers at Arimaa but not at chess.
The point of disagreement that is germane is over the article text, "Arimaa has so far proven to be more difficult for artificial intelligences to play than chess." Disputing this point, in light of agreed human dominance at Arimaa, requires showing that human dominance is due to something other than the nature of the game itself.
One argument is that the difficulty of a game for computers is proportional to its state-space complexity, and since the state-space complexity of Arimaa is lower than that of chess, Arimaa must be easier for computers. That seems rather circular to me, along the lines of arguing that the black swan we see swimming there can't actually be black, because everyone knows that all swans are white. Fortunately, however, we don't have to rely on Arimaa itself to disprove the notion that the difficulty of a game for computers is proportional to its state-space complexity; there are other counter-examples. For instance, according to the Game_complexity article, 15x15 freestyle gomoku has a higher state-space complexity than shogi, but a computer can prove a first-player win for gomoku while not being able to contend with championship humans at shogi with either color. The fact that computers can "bust" gomoku in spite of its large state space shows that computer dominance can depend on other features of a game as well.
The argument Mschribr presented above is (A) Shogi and Go are more computer-resistant than chess; (B) Shogi and Go have higher state-space complexity than chess; (C) Therefore state-space complexity determines computer resistance. This is a logical fallacy. Correlation is not causation. Indeed, the identical (fallacious) argument proves that game-tree complexity determines computer resistance, because both shogi and Go have greater game-tree complexity than chess does. And as a corollary, one could note that Arimaa has a higher game-tree complexity than chess, and must therefore be more computer-resistant than chess. It is a strong sign of flawed reasoning when the same argument can be used to prove both a statement and its negation.
A second argument is that Arimaa software is relatively poor because of the small number of developers attacking the problem, and the short amount of time they have had to work on it. This in itself is plausible, but it is hard to see why the level of human Arimaa play should not be equally poor. Mschribr asserts that humans can quickly attain a high level of play at a new game, but computers can't. The evidence from Arimaa itself is rather the opposite: David Fotland, drawing on his experience coding game engines for Go, chess and other games, needed only a few months to create an Arimaa engine that played on a par with the world's best human Arimaa players, but over time human players slowly learned Arimaa strategy and pulled ahead. It appears that humans are the tortoise and computers the hare, at least in this race so far.
But perhaps the Arimaa experience is unusual. I am open to other evidence. Therefore I ask you, Mschribr, for examples of modern games that support your assertion. Is it universal that developers and human gamers, starting at the same time to attempt to conquer a new game, have a balance of power that is tipped against developers, and only slowly evens out? Of course, classic games such as chess, Go, shogi, mill, checkers, etc., are no use for this comparison, because humanity had generations to hone its expertise before computers even existed. Of course humans had a head start in all such games. To prove your point requires modern games. I am very curious; do you have examples?
The appeal to Arimaa's newness is the only attempt I can find in this entire thread to explain why observed human dominance at Arimaa is not due to the intrinsic computer-resistance of Arimaa. Yes, there are scattershot observations on many matters, but only this one argument that undermines the default explanation that observed human dominance at Arimaa IS due to the intrinsic computer-resistance of Arimaa. If I am overlooking some relevant part of the thread, please let me know. Fritzlein (talk) 02:05, 17 April 2011 (UTC)[reply]
I agree that computers play better chess than Arimaa on the same hardware. What is not clear is how much stronger the computer would be if the Arimaa Challenge rules were removed. An increase in strength would come from stronger hardware, and more programmers would be interested and would compete in the challenge. I was also trying to get some information about Arimaa. Why create a new platform for computer AI when similar platforms exist? Why require the computer to play at a high level before the computer plays at a lower level? As an alternative to your argument, you could prove that human dominance in Arimaa is due to the nature of the game itself instead of to the Arimaa Challenge restrictions and the smaller effort. I am interested in a fair fight between humans and computers. Use a game where humans have achieved their highest levels possible, such as chess, Shogi and Go. Then let humans create the strongest programs running on the strongest computers and see what happens. This is what happened in chess. This is happening in Shogi and Go. The reason why humans and computers are not equally poor at Arimaa after a short time period is that humans and computers do not learn or improve in the same way. Humans learn visually and through patterns. Computers improve through computer programming. It is interesting that the first Arimaa programs were on par with humans. Where are the details of those matches? I saw that the computer lost 8 out of 8 games in the 2004 Arimaa challenge. I am not familiar with gomoku. I know there are many versions of gomoku. Maybe gomoku is different from other games because its state-space complexity is higher than its game-tree complexity. Mschribr (talk) 12:39, 17 April 2011 (UTC)[reply]
I don't see how it helps to drag the rules of the Arimaa Challenge back into the argument, because those rules have no relation to the level of human dominance over computers at Arimaa. No matter what rules you dream up for man-vs.-machine matches, if a man-vs.-machine chess match and a man-vs.-machine Arimaa match are played under the same rules, the humans will fare better at Arimaa than they do at chess. Play a chess match under the rules of the Arimaa Challenge, and maybe (although I doubt it) humans would win at chess, but even if they did it would be by a far smaller margin than humans win the Arimaa Challenge each year. Equal match rules will produce unequal results. Is that not what we were recently agreeing upon?
Omar Syed created Arimaa to spur AI research. Given that half a dozen academic papers have already been written about Arimaa, it seems he has succeeded at least in some measure. But the reason doesn't affect our disagreement. Whether or not the purpose of Arimaa was to undermine your theory that computer-resistance is a function of state-space complexity, Arimaa does indeed undermine that theory.
It is true that Go and shogi provide excellent and interesting AI challenges, and the social conditions of those challenges are more similar to chess than to Arimaa. Yes, it is much easier to prove that Go and shogi are more computer-resistant than chess than it is to prove the same for Arimaa. However, it is impossible to make the conditions identical so as to get a safer basis for comparison. The fact that the social situation is different hardly means that Arimaa will appear to be more computer-resistant than it is intrinsically, as you keep asserting. On the contrary, I believe it means that Arimaa appears to be less computer-resistant than it is intrinsically. I could make an argument for the latter that is at least as strong as your argument for the former, and with marginally more supporting evidence. Fritzlein (talk) 22:43, 5 April 2011 (UTC)[reply]
In 2004, Omar Syed did win the first Arimaa Challenge 8-0, but that was after humanity had time to study his opponent Bomb and find its weaknesses. If Omar had been forced to play the first Arimaa Challenge match ten months sooner than he did, i.e. in the spring of 2003, he might well have lost. David Fotland is my witness; see page two of http://arimaa.com/arimaa/papers/Fotland/CGC2004/Arimaa_paper.doc . But ever since that scary moment when computers leaped ahead, humans have progressed faster than machines, and the Arimaa Challenge has become increasingly hopeless for the developers. This seems to indicate that a new game favors machines, because developers can apply known search techniques from other games, whereas humans need longer to work out game-specific strategic ideas. We have several cross-over developers and game players; all of them will assure you that the Arimaa software development is more similar to chess software development than Arimaa strategy is similar to chess strategy.
It is true that human progress at games happens differently than machine progress at games. But you have yet to show a single example that demonstrates your theory about which side gets an early advantage from this difference. Arimaa is one example against your theory. A theory backed by a little evidence is stronger than a theory backed by no evidence whatsoever. Fritzlein (talk) 04:40, 19 April 2011 (UTC)[reply]

Reliable sources: Google Scholar[edit]

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

I just noticed that some scientific papers, such as Master's Thesis works, have been written about Arimaa. You can find them with Google Scholar: http://scholar.google.com/scholar?q=arimaa Someone might want to read them and introduce some additional references into this article. —ZeroOne (talk / @) 23:23, 26 March 2010 (UTC)[reply]

See also http://arimaa.com/arimaa/papers/ Fritzlein (talk) 02:11, 17 April 2011 (UTC)[reply]
I added everything from http://arimaa.com/arimaa/papers/. I didn't find anything new on Google Scholar. Mattj2 (talk) 09:08, 14 May 2013 (UTC)[reply]

Seven reasons[edit]

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

"It has been argued that a computer has beaten the world chess champion but not beaten the human in the Arimaa challenge because of six reasons". I would think there would actually be seven reasons. 7: The low prize pool value of 10,000 dollars is not enough to attract serious programming talent and the relative unknownness of Arimaa means that many programmers simply don't know about the challenge or care to spend time on it. 122.49.138.156 (talk) 08:50, 16 February 2011 (UTC)[reply]

I think your reason of a low prize not attracting sufficient talent is included in the article’s reason 1. Reason 1 says it is a new and relatively unknown game; therefore there is insufficient programming effort. Your reason of a small prize also causes insufficient programming effort. A larger prize would attract more talent. When computer chess was starting in the 1960s and 1970s there were no large prizes, but there were many people working on computer chess. I do not think there will be a large prize for Arimaa. It is a catch-22. A sponsor will offer a large prize when there is already a large audience. The sponsor wants to get publicity from the large audience. IBM hired the chess programmers and sponsored a large prize for the computer vs. human world champion chess tournament because IBM could get publicity from the large number of people who know about chess. IBM was not interested in chess. A company will offer a large prize for Arimaa when there is a large audience for Arimaa. On the other hand, there are classic games that do have large programming efforts without large prizes. Games with large programming efforts are Shogi and Go. In spite of the large effort, man beats the computer in these games. Shogi is more like chess than Arimaa is. Mschribr (talk) 11:18, 23 February 2011 (UTC)[reply]
There are some definite issues with the speculation about the computer play and some failure of NPOV. A modern computer chess program is not brute force but heuristic. The only issue I have about mentioning the prize is if it has not been mentioned in RS, but the conditions are odd. Tetron76 (talk) 18:07, 1 March 2011 (UTC)[reply]
I do not understand what you are saying. Can you be more specific? What is RS? Modern chess computers do not use brute force or heuristics. They mostly use alpha-beta pruning. Mschribr (talk) 21:26, 2 March 2011 (UTC)[reply]
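(For readers following this exchange, here is a minimal, generic sketch of minimax search with alpha-beta pruning, the technique named above: it searches the game tree but skips branches that cannot change the final choice, using a heuristic evaluation at the leaves. The game interface used here, legal_moves, make_move, undo_move, evaluate and is_terminal, is hypothetical and only stands in for whatever a real engine provides.)
 # A generic sketch of minimax search with alpha-beta pruning. The 'game'
 # object and its methods are hypothetical placeholders, not any real engine's API.
 def alphabeta(game, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
     if depth == 0 or game.is_terminal():
         return game.evaluate()              # heuristic evaluation at the leaves
     if maximizing:
         best = float("-inf")
         for move in game.legal_moves():
             game.make_move(move)
             best = max(best, alphabeta(game, depth - 1, alpha, beta, False))
             game.undo_move(move)
             alpha = max(alpha, best)
             if alpha >= beta:               # opponent will avoid this line: prune
                 break
         return best
     best = float("inf")
     for move in game.legal_moves():
         game.make_move(move)
         best = min(best, alphabeta(game, depth - 1, alpha, beta, True))
         game.undo_move(move)
         beta = min(beta, best)
         if alpha >= beta:
             break
     return best
Real chess and Arimaa engines layer many refinements on top of this skeleton, such as move ordering, transposition tables and quiescence search, which is part of why "brute force" is a slippery label.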

Anonmoos Removed Correct Statement[edit]

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

AnonMoos removed the statement that Hydra is a custom-made 32-node Xeon cluster with a total of 64 gigabytes of RAM. He said this is opinionated personal commentary. It is not. These are facts. If he cannot explain his actions, I will undo his changes. Mschribr (talk) 01:33, 23 March 2011 (UTC)[reply]

That's obviously not why I reverted your edit, so your remarks above are somewhat disingenuous...
I know that you feel very strongly that the why-Arimaa-is-harder-for-computers-than-chess arguments are all wrong, but (as emerged in the past discussions above), many people on and off Wikipedia disagree. For this reason, unreferenced blunt statements about how things are allegedly "false" are not suitable for the article in that particular form. AnonMoos (talk) 09:01, 23 March 2011 (UTC)[reply]
Let us try to leave feelings out of this discussion, OK? The article compared the Arimaa challenge match to the Man vs Machine World Team Championship. I stated the comparison is false because Hydra is custom-made. Does that belong in the Arimaa article? Mschribr (talk) 11:33, 23 March 2011 (UTC)[reply]
Why don't you rephrase it as a brief factual statement backed up by a source, instead of as a lengthy opinionated screed? AnonMoos (talk) 15:21, 23 March 2011 (UTC)[reply]

Equation in Section "Computer Performance"[edit]

[This discussion was originally on the "Arimaa" article and copy-pasted to the newly-created page "Computer Arimaa" by User:Mattj2 on May 14, 2013]

In my opinion, for the casual reader we should explain the displayed equation illustrating the branching factor more clearly. Currently it appears to have fallen out of the sky and should be embedded better. Christian Brech (talk) 00:22, 29 November 2012 (UTC)[reply]

I changed the equation to emphasize the result rather than the process of calculation. 109.74.204.218 (talk) 11:17, 5 March 2013 (UTC)[reply]

Brute force search[edit]

The current article seems to imply that brute force search is used and works in chess while it doesn't work well and isn't used for Arimaa. Currently, though, Arimaa bots generally use the same search techniques as chess bots do. Whether or not these search methods count as brute force probably depends on your own exact definition. Janzert (talk) 10:09, 9 May 2013 (UTC)[reply]

Thanks Janzert! I just copy-pasted the "Computer performance" and "Comparing Arimaa challenge to chess challenges" sections over from the Arimaa page without evaluating them. (Unlike you, I haven't written my own bot.) Personally I think the focus of this page should be "what does and doesn't work when programming an Arimaa bot" rather than "why Arimaa is harder to program for than chess." I think those sections are written in a persuasive tone that's not appropriate for an encyclopedia, and they have WP:OR. I encourage you to edit whatever you think should be edited; I don't know if you're supposed to be WP:BOLD or check with User:Fritzlein first but don't feel like you have to check in with me. :) Mattj2 (talk) 08:32, 11 May 2013 (UTC)[reply]
I removed "brute force search" from the section "Techniques rarely used in Arimaa bots." I think the "Computer performance" and "Comparing Arimaa challenge to chess challenges" need to be rewritten... Mattj2 (talk) 19:46, 11 May 2013 (UTC)[reply]
Thanks, I just have neither the time nor the motivation right now to do any sort of rewrite. I was actually coming to suggest that the simplest way to at least make the page self-consistent would be to remove brute force search from the list, which you already did. Janzert (talk) 15:55, 14 May 2013 (UTC)[reply]

External links modified[edit]

Hello fellow Wikipedians,

I have just modified one external link on Computer Arimaa. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 20:46, 11 August 2017 (UTC)[reply]