Could AI revolutionize / improve Bridge scoring anytime soon?
#1
Posted 2025-December-14, 10:53
Paraphrasing a response post made by Barmar in 2012(!)...
'MP scoring only looks at the scores, it can't tell WHY you got the score you got. You may get a high % score either through good play, or through poor play by your opponents ('a gift')...
But unless we decide to replace matchpoint scoring with judges who examine the play and decide who actually played "better", the best we can do is look at the scores...'
So that raises the question.
Now that AI can probably (easily?) distinguish between 'good' play at bridge and 'poor' play by an opponent, would it be possible and desirable to create a new form of scoring in which AI effectively acted as a 'judge' to determine what % score you should get for a given hand?
Critically, your % + your Opp's % *need not* then sum to 100%.
You might play a board exactly on a par with others (so score 50%), but if you made an extra trick thanks to your opponents' blunder, they might only score (say) 30% on the same deal.
That seems a lot fairer than the current system whereby you might score 70% on the board (undeserved), and your Opponents 30%.
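As a rough illustration of the arithmetic I have in mind (the formula and the per-trick weight below are completely made up, just to show that the two sides' scores can be decoupled):

```python
# Purely illustrative sketch: each side is scored independently, starting from
# the field average (50%), credited for judged good decisions and debited for
# judged errors. The inputs and the per-trick weight are invented assumptions.

def judged_pct(tricks_gained_by_skill, tricks_lost_by_error, weight=20.0):
    """Return one side's percentage; the two sides' results need not sum to 100."""
    raw = 50.0 + weight * tricks_gained_by_skill - weight * tricks_lost_by_error
    return max(0.0, min(100.0, raw))

# The example above: declarer played exactly to par (no credit, no debit),
# while the defenders handed over one trick through a judged blunder.
declarer_pct = judged_pct(0, 0)   # 50.0
defender_pct = judged_pct(0, 1)   # 30.0
```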
It seems to me that would be a very desirable scoring system, and eminently possible if playing online.
Has it already been considered, or pursued?
I'm just curious, no more.
#2
Posted 2025-December-14, 11:37
Perhaps overlooked is how AI will change human evolution and human biology.
Examples:
Eyeglasses change human vision.
Hearing aids change human hearing.
How will AI change human evolution and biology?
If AI can improve human memory and the speed/capacity of analysis, then what becomes of bridge...?
#3
Posted 2025-December-14, 16:02
a.k.a. Appeal Without Merit
#4
Posted 2025-December-14, 16:06
EzioBridge, on 2025-December-14, 10:53, said:
Now that AI can probably (easily?) distinguish between 'good' play at bridge and 'poor' play by an opponent, would it be possible and desirable to create a new form of scoring in which AI effectively acted as a 'judge' to determine what % score you should get for a given hand?
Critically, your % + your Opp's % *need not* then sum to 100%.
Has it already been considered, or pursued?
There is no need to invoke AI if we want to change the rules to avoid rewarding side A for mere errors of side B. Simple algorithms could already identify many clear errors (or gross deviations from the rest of the field and/or expected result) and avoid rewarding the other side.
My impression of current AI is that it is nowhere near being able to identify the remaining outlying cases. I concede that this could change quickly.
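A minimal sketch of the kind of non-AI rule I mean (the threshold and the 'cap at average' remedy are invented purely for illustration, not a proposal for the actual Laws):

```python
# Hypothetical "gift cap": if NS's result beats the double-dummy par by a gross
# margin (i.e. EW almost certainly blundered), cap NS's matchpoint award at
# average on that board. The threshold and the remedy are invented.

def matchpoints(field_scores, my_score):
    """Standard matchpointing: 1 point per pair beaten, 0.5 per pair tied."""
    others = list(field_scores)
    others.remove(my_score)                     # exclude our own result
    return sum(1.0 if my_score > s else 0.5 if my_score == s else 0.0
               for s in others)

def capped_matchpoints(field_scores, my_score, par_score, gift_threshold=300):
    top = len(field_scores) - 1                 # maximum matchpoints on the board
    mp = matchpoints(field_scores, my_score)
    if my_score - par_score >= gift_threshold:  # far above par: likely a gift
        mp = min(mp, top / 2.0)                 # earn at most average
    return mp

# Example (NS scores on one board where par is +140, but one pair was handed +620):
board = [140, 140, 170, 620, 110]
print(capped_matchpoints(board, 620, par_score=140))   # 2.0 instead of a clean top of 4.0
```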
#5
Posted 2025-December-14, 18:07
I am also reminded of a 3.5-table 99er game where, because four directors were staffed for what really was a 3.5-table game, I had enough "spare time" to play about 2 boards of the 4-board sitout with the caddymaster, who was also the teacher of about half the field. Of course it didn't count, of course if a call came I left the table, of course... but it alleviated the half-hour sitout some.
On one hand, the teacher/dummy said "you took a very long time playing to trick 1. I think the opponents would be interested in what you were thinking about. Why don't we put the cards back up and you walk through it?"
Which, normally, would have been a really good suggestion (and, frankly, was here, too, but). Unfortunately, I had to start by saying "So, when I look at the hand, I see 8 tricks in 3NT, with no way to get a ninth without giving up the lead. And the opening lead was the best, and when the defence gets in, they can cash enough tricks in that suit to set me. So, how do I play the hand to look like I'm happy for the lead to be continued, and suggest they switch?" And then walked through exactly that, how this suit will never be switched to because dummy, but this one could look weak, but then I have to give up the lead before they get a chance to signal,...
Now, for this story, of course it worked, they switched, and I wrapped 3NT. And yes, they were novices and I - wasn't. Which doesn't make me feel very good about it, and particularly didn't make me feel very good about explaining that "how to hoodwink you two newbies into giving me the contract" was what took so long...but, as the first quote reminds us, this is a very important bridge skill, one that you should be proud of (quietly, at least until the opponents leave,...) when you do it.
All this to say: even if we get spicy autocomplete to the point where it is able to tell the difference between good play and poor defence, will it be able to tell the difference between poor defence and good play-induced poor defence?
#6
Posted 2025-December-15, 12:56
IF there ever comes a day that AI bridge IS better than human bridge, it is STILL a very bad idea.
- It is true that bad plays can lead to good scores on a board. That is one of the charms of the game: smile and be nice to the opponent who benefits from playing poorly. If you are SURE they play poorly, then more often than not you'll more than get compensated.
- Bridge is about doing the right thing more often than the opponents, not about doing the right thing on every single board.
- Sometimes (not always) the right play of a hand is up for debate, not objective. For example, during the bidding you are sure there is overbidding, but who do you assess to be the culprit, and why? An opponent discards in a way that completes your count of his hand: does he do this on purpose so you'll finesse the wrong way, and should you therefore play for the drop instead? Or does he lack the skills? Or does he have the skills but WANT to ensure you play for the drop?
- One opponent must be falsecarding his distribution. Which one? And why?
- What if AI finds, for instance, that some relay Precision bidding systems are best at finding the best contract on average: will it then penalize 'inferior' natural systems even when they reach the right contract, because it wasn't bid the 'right' way?
IMHO the idea of basing scoring on the opinion of a juror should be reserved for a circus; it has no place in Bridge.
#7
Posted 2025-December-15, 13:10
- Bridge is NOT about doing the right thing on every single hand; that's simply impossible. It is about doing the right thing more often than the opponents.
- What IS right is debatable on MANY occasions.
1. You correctly assess during the bidding that someone must be overbidding, as the bidding adds up to 50 HCP on this board. Who is it, and why? And why is your assessment better or worse than anyone else's?
2. You have a suit you must play without a loser; you can take a finesse or play for the drop. One opponent discards in the other suits in a way that gives you a full count of his hand. Does he do this on purpose to trick you into playing for the drop? Or does he lack the skill to even consider such a thing? Or does he KNOW you'll suspect him of tricking you and want you not to believe him? And what if he had NOT discarded to give you the full count?
3. What if AI establishes that relay Precision systems find the right contract most of the time: will it penalize natural systems even when they also reach the right contract, because they got there by 'inferior' bidding?
4. A lesser player getting a good score because of bad play is simply part of the fun of bridge. Just smile, compliment them, and realize they will play badly on lots of occasions and will more than compensate you (that is, IF your assessment that they play badly was right; I personally have adjusted my assessment of opponents over time, having been wrong initially).
All in all a bad idea. It should not happen.
#8
Posted 2025-December-15, 16:24
For example, you could look at actions players take that reduce their PAR score on a given board (say, passing a partscore when game is on, or failing to find the killing opening lead) and then see how frequent and how costly different kinds of mistakes are at different levels. This wouldn't tell you how good a pair is overall (well, it could, but traditional MP or IMP scoring would probably be more accurate), but it could tell a partnership what they have to work on, or it could tell teachers what to emphasize for a particular type of learner.
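As a toy illustration of that kind of analysis (the mistake categories, costs, and records below are invented; in practice the par losses would come from a double-dummy solver):

```python
# Toy sketch: tally how frequent and how costly each category of mistake is,
# given per-board records of (mistake category, cost versus par in IMPs).
# The categories and numbers are invented for illustration only.

from collections import defaultdict

records = [
    ("missed game", 6.0),
    ("missed game", 10.0),
    ("wrong opening lead", 4.0),
    ("declarer misplay", 2.0),
    ("wrong opening lead", 5.0),
]

totals = defaultdict(lambda: {"count": 0, "cost": 0.0})
for category, imp_loss in records:
    totals[category]["count"] += 1
    totals[category]["cost"] += imp_loss

# Most expensive mistake categories first
for category, t in sorted(totals.items(), key=lambda kv: -kv[1]["cost"]):
    print(f"{category}: {t['count']} boards, {t['cost']:.1f} IMPs vs par")
```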
Somewhat closer to what you aim at, one could look at how often players make choices that differ from those made by a robot which plays much better than the players themselves. Obviously this would only make sense if the robot had the same partnership understanding and the same understanding of the four players' weaknesses as the players have, but probably one day that will be more or less feasible.
But again, I don't think replacing MP or IMP with such metrics would be popular. Suppose that one day such a deviance-from-robot-choices metric were better at predicting long-term MP than the MPs themselves. I can imagine that it would then be used for selecting teammates, but I find it harder to believe that it would be used to award prizes.
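To make that metric concrete, a minimal sketch might look like this (the 'reference robot' interface and the trick costs are assumptions, not an existing engine):

```python
# Minimal sketch of a "deviation from a reference robot" metric. Each decision
# records what the human chose, what a (hypothetical) stronger robot would have
# chosen, and the robot's estimated cost of the difference, in tricks.

from dataclasses import dataclass
from typing import List

@dataclass
class Decision:
    human_choice: str
    robot_choice: str
    cost_in_tricks: float    # 0.0 when the choices agree or are judged equal

def deviation_rate(decisions: List[Decision]) -> float:
    """Fraction of decisions where the human diverged from the robot."""
    if not decisions:
        return 0.0
    return sum(d.human_choice != d.robot_choice for d in decisions) / len(decisions)

def average_cost(decisions: List[Decision]) -> float:
    """Average judged cost per decision, in tricks; 0.0 means robot-perfect."""
    if not decisions:
        return 0.0
    return sum(d.cost_in_tricks for d in decisions) / len(decisions)
```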
#9
Posted 2025-December-15, 17:25
This would probably be a much better long-term predictor of your MP scores than your scores from those 100 boards, but would you really want that score to determine prizes? For one thing, a slightly inferior pair would never win...
#10
Posted 2025-December-16, 05:42
#11
Posted 2025-December-16, 05:48
Huibertus, on 2025-December-15, 13:10, said:
4. A lesser player getting a good score because of bad play is simply part of the fun of bridge.
We'll agree to disagree on that one, particularly given that "lesser players" have a habit of gloating and rubbing your nose in it when they do get a good score against you, as though they had done something brilliant rather than getting lucky through poor play. Good players don't do this as a rule, although they may have their own antagonistic habits.
#12
Posted 2025-December-16, 06:07
This is even more pronounced on defence. Good defenders put declarer to the test with the threat of imagined distributions, creating incredible scores on seemingly flat deals. Is that a declarer error, a defensive victory, or something else?
There's a huge irrational risk aversion in bridge, and a corresponding status quo bias. Do nothing, not your fault even if it loses. Do something, your fault if it costs. How do you intend to evaluate that, without waving around a magic wand of "AI/statistics will understand what I'm supposed to mean even if I don't know it"?
Some of these posts really read like frustration vents to me. Sometimes you did nothing wrong and get a poor score. At least the scoring is simple, transparent and an easy goal for optimization. I think including subjectivity, be it through a trained opaque algorithm, a bridge jury or an innocuous statistics algorithm, would leave the game worse off.
P.S.: it is worth considering that your average score is likely to be the long term outcome on any of these metrics. For every scenario where you get to write off a loss using sophisticated evaluation, expect to be confronted with a blunder you failed to notice.
#13
Posted 2025-December-16, 17:04
It’s not bridge. It’s akin to lamford’s Xmas cracker puzzle under the interesting hands thread. It has interest for those who enjoy technical challenges, but very limited interest for those looking to play bridge.
As for getting poor boards when the opps do something weird, it’s part of the game. It’s not much fun, usually, but it has very little impact on your partnership’s long term results.
At a recent regional an opp opened 2C with AKJ10xx void KQx Kxxx. Partner bid 2D, waiting, and he jumped to 3S!
Partner bid 4N keycard and he bid 5N, 2 or 5 keycards with a void.
Caught his partner with only one ace, but it wasn’t the heart ace. His dummy in 6S was something like xx Kxxxx Axx QJx.
Spades were 2-2, clubs 3=3, so slam made. Fortunately nv but we knew we’d lost 11 imps, lol. But there’s no point crying about it. Any pair that bids that way in real life is hopeless. We won the match by 40 imps, this board being their only pickup.
In a similar vein, my partner and I had a screwup on a sequence we’d not discussed, where I thought I was placing the contract in 4S but he thought I was keycarding in hearts (don’t ask…we play some weird stuff and disagreed over whether I had an artificial punt available prior to a sign off). Anyway, I corrected his eventual 6H bid to 6S…all perfectly ethical btw given our auction prior to the screw up. 6S required a 3-2 break, a bad lead, a poor misguess by the pro on my right, and a finesse and a squeeze. Win 13. So what goes around often eventually comes back around.
