Would you join a group effort to write a new simulation? Appeal for a bridge program that is not a GIB clone.

#81 User is offline   bluecalm 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 2,555
  • Joined: 2007-January-22

Posted 2012-August-02, 12:11

How good is GIB at constructive bidding?
I mean, if you play with it, is it clear it's worse than your average expert?
0

#82 User is offline   Bbradley62 

  • PipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 6,542
  • Joined: 2010-February-01
  • Gender:Male
  • Location:Brooklyn, NY, USA

Posted 2012-August-02, 12:32

bluecalm, on 2012-August-02, 12:11, said:

How good is GIB at constructive bidding? I mean, if you play with it, is it clear it's worse than your average expert?
GIB's bidding is way better than his cardplay, especially his defense. Primarily, GIB does not defend as though he has a partner who is watching his cards and making decisions based on what he sees, and he "pops" cards instead of making declarer figure out where they are. It seems to me that this is the area where major improvements should be hoped for.
0

#83 User is offline   CarlRitner 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 211
  • Joined: 2005-July-14

Posted 2012-August-02, 12:37

bluecalm, on 2012-August-02, 12:11, said:

How good is GIB at constructive bidding?
I mean, if you play with it, is it clear it's worse than your average expert?


What makes this difficult to judge, in my opinion, is that you must play GIB's system, period. There can be NO partnership agreements other than what GIB does in the bidding. Not what it SAYS it does, but what it actually does, what it would do playing with another GIB.

Playing with an average expert, you'll go over your system notes and agree to things, and the expert will agree to play your way in certain areas, because he knows a less-than-optimal agreement is 100 times better than any implicit disagreement. GIB makes no such concession, and as a result you & GIB are going to be less effective than GIB & GIB.

What is clear is that there are far fewer rules to write for constructive (uncontested) auctions, and fewer chances for misunderstandings.
Cheers,
Carl
0

#84 User is offline   mike777 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 16,739
  • Joined: 2003-October-07
  • Gender:Male

Posted 2012-August-02, 13:14

Perhaps a good first step would be to come up with a measurement system of how good GIB is now.

I mean, can posters agree that GIB is better than, say, 50% of all ACBL players right now?

If so, how much improvement is needed to raise that measurement to, say, better than 60% of all ACBL players?

It seems impossible if you don't have some agreed standard of measurement.

We toss around terms such as "expert" or "average expert", but we don't define or measure what we mean.
0

#85 User is offline   CarlRitner 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 211
  • Joined: 2005-July-14

Posted 2012-August-02, 13:29

mike777, on 2012-August-02, 13:14, said:


Perhaps a good first step would be to come up with a measurement system of how good GIB is now.



Good idea. For bidding, it should be fairly simple to feed it all the Bridge World MSC and the Bulletin IYC problems from the last 10 years and get a baseline on that.
IYC usually features one computer bridge program every month; it was Bridge Baron for a while, and now it's Bridge Buff.
Cheers,
Carl
0

#86 User is offline   Free 

  • mmm Duvel
  • PipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 10,728
  • Joined: 2003-July-30
  • Gender:Male
  • Location:Belgium
  • Interests:Duvel, Whisky

Posted 2012-August-02, 14:13

Antrax, on 2012-July-26, 03:04, said:

Interesting, from what I understood from Stephen Tu's posts, single-dummy analysis is possible but computationally expensive.

I haven't read the entire thread, but I'm convinced that single-dummy solvers are the best theoretical solution. Sadly, they require waaaaaaaaaay too much effort to compute a complete hand. Perhaps it's possible if each player has 2 or 3 cards left, but not with everyone holding 13 cards. You can write a single-dummy program which will play the percentage line all of the time, and hopefully it may run smoothly one day, but at this moment it would take years to calculate the best lead.

The reason for this is quite obvious actually. Basically, in the decision tree of a DD analyzer, you only have to calculate 1 value for each leaf and then use minimax to determine the value of all nodes. Make a fast DD analyzer and you can do several simulations for each card you want to play. However, in an SD decision tree, the value of each node is calculated by performing several DD analyses at that point. The problem is at least exponential in complexity. Suppose you determine your SD value based on 10 DD analyses. Then each player holding 2 cards will require 20,000,000 DD analyses (10,000,000 for each card you can play). Do this for 13 cards in each player's hand and you'll realize that building an SD solver is a pure waste of time at the current speed of our computers. :(
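
To put numbers on that blow-up, here is the same back-of-the-envelope model in Python (a sketch only: it assumes 10 DD samples per node and no sharing of work between nodes, which a real solver would try hard to avoid):

# Naive cost model for a single-dummy search: every node in the
# play tree is evaluated with dd_samples double-dummy analyses,
# and no work is shared between nodes.
def sd_cost(cards_per_hand, dd_samples=10):
    depth = 4 * cards_per_hand             # cards still to be played
    per_card = dd_samples ** (depth - 1)   # subtree below one candidate card
    return cards_per_hand * per_card       # one subtree per card we can play

print(sd_cost(2))    # 20000000 -- the 20,000,000 above
print(sd_cost(13))   # 13 * 10**51 -- "years" is an understatement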
"It may be rude to leave to go to the bathroom, but it's downright stupid to sit there and piss yourself" - blackshoe
0

#87 User is offline   bluecalm 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 2,555
  • Joined: 2007-January-22

Posted 2012-August-02, 14:15

Maybe you don't need a full single-dummy solver to estimate the best play. I mean, even humans can do that on some hands on a good day :)
0

#88 User is offline   CarlRitner 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 211
  • Joined: 2005-July-14

Posted 2012-August-02, 15:12

Don't want to argue it, Free, but GIBson is GIB's single-dummy solver, and it kicks in around trick 4.
Kurt Schneider has built a single-dummy solver that can process several hands in an hour (not sure exactly what the time is on it), and his kicks in at trick 1.

Are we talking the same animal here, I wonder? I'll try to find the most recent thread on this over at rgb and post a link.
Cheers,
Carl
0

#89 User is offline   Antrax 

  • PipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 2,458
  • Joined: 2011-March-15
  • Gender:Male

Posted 2012-August-02, 21:27

Free, on 2012-August-02, 14:13, said:

I haven't read the entire thread, but I'm convinced that single-dummy solvers are the best theoretical solution. Sadly, they require waaaaaaaaaay too much effort to compute a complete hand. Perhaps it's possible if each player has 2 or 3 cards left, but not with everyone holding 13 cards. You can write a single-dummy program which will play the percentage line all of the time, and hopefully it may run smoothly one day, but at this moment it would take years to calculate the best lead.

The reason for this is quite obvious actually. Basically, in the decision tree of a DD analyzer, you only have to calculate 1 value for each leaf and then use minimax to determine the value of all nodes. Make a fast DD analyzer and you can do several simulations for each card you want to play. However, in an SD decision tree, the value of each node is calculated by performing several DD analyses at that point. The problem is at least exponential in complexity. Suppose you determine your SD value based on 10 DD analyses. Then each player holding 2 cards will require 20,000,000 DD analyses (10,000,000 for each card you can play). Do this for 13 cards in each player's hand and you'll realize that building an SD solver is a pure waste of time at the current speed of our computers. :(
That's why I thought it would make an interesting problem. I mean, you could argue the same for chess, yet "somehow" it works. So I thought it would be interesting to try to base something on an SD solver with aggressive pruning, like assuming all spots below a certain value are the same, or that you never duck with AKx opposite QJx, etc. I think if you make many such assumptions and also pre-define which notions are logical when playing suit combinations, you can keep the tree from going crazy, at the expense of losing some moves on rare layouts. I mean, it's probably right sometimes to play low to low from xxx opposite AQx or something.
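
To make the first kind of pruning concrete, collapsing interchangeable cards within one suit could look roughly like this (a Python sketch; the sets-of-ranks representation is purely illustrative):

def spot_classes(held, gone):
    # Consecutive *live* ranks held by the same hand play identically,
    # so the search only needs to branch once per equivalence class.
    # Ranks run 2..14 (14 = ace); 'gone' is every rank already played.
    live = [r for r in range(14, 1, -1) if r not in gone]
    classes, prev_held = [], False
    for r in live:
        if r in held:
            if prev_held:
                classes[-1].append(r)   # extend the current class
            else:
                classes.append([r])     # start a new class
            prev_held = True
        else:
            prev_held = False
    return [c[0] for c in classes]      # one representative per class

# Holding A Q 3 2 with the K already played: A and Q merge, and 3 and 2
# merge, so only two branches instead of four.
print(spot_classes(held={14, 12, 3, 2}, gone={13}))   # [14, 3]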
Personally I would've been happy with a computer that plays as well as a low intermediate based on such an algorithm (if only because we could see exactly how intractable it is to improve its play level by discarding optimistic assumptions), but other posters seem more intent on taking the crown from GIB :)
0

#90 User is offline   Free 

  • mmm Duvel
  • PipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 10,728
  • Joined: 2003-July-30
  • Gender:Male
  • Location:Belgium
  • Interests:Duvel, Whisky

Posted 2012-August-02, 23:49

CarlRitner, on 2012-August-02, 15:12, said:

Don't want to argue it, Free, but GIBson is GIB's single-dummy solver, and it kicks in around trick 4.
Kurt Schneider has built a single-dummy solver that can process several hands in an hour (not sure exactly what the time is on it), and his kicks in at trick 1.

Are we talking the same animal here, I wonder? I'll try to find the most recent thread on this over at rgb and post a link.

Obviously there are a lot of tricks to speed up the process. For instance, if each player holds only 2 cards, you can generate hands a lot more quickly than when they hold 13 cards. DD solving also goes a lot faster in that case. So the lowest nodes of the decision tree are extremely fast, while higher nodes become slower and slower.

Like Antrax suggested, it's quite possible that GIBson uses many tricks like chess programs do. In chess, they don't do a brute-force search; instead they use lots of pruning so they don't have to evaluate moves which look ridiculous (and this at all depths). This is the reason for the whole Deep Blue vs Kasparov controversy: according to some people, the computer made a move which it should never have considered. The results are impressive, performance goes up significantly, at the cost that some moves may not be anticipated.

I know that there are some tricks to prune in bridge as well, but I haven't analyzed this at all. I didn't know about these SD solvers; they seem to have proven that these tricks can speed up the process a lot. But my guess is they don't use a complete brute force (with obvious pruning), so the results may be off. Also, I don't know how many DD results are used to determine the SD value of a single node. In my previous example I used 10; it may be less. Suppose you use 2 DD results for each SD value: then an AQ finesse will usually be taken, but not always. If you use 10 DD results, then the chance is less than 1/1000 that we'll have 10 DD simulations which suggest cashing the Ace. Make it 100 DD results and it's safe to say the computer will take the finesse until the end of time.
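
The arithmetic behind that finesse example, assuming each sampled deal independently puts the K onside half the time (a sketch that ignores inferences from the bidding, which would skew the samples):

# Chance that every one of n sampled deals has the K offside,
# i.e. that the sampled DD results suggest cashing the Ace.
for n in (2, 10, 100):
    print(n, 0.5 ** n)
# 2   -> 0.25        cashes the Ace on roughly 1 deal in 4
# 10  -> ~0.00098    less than 1/1000, as above
# 100 -> ~7.9e-31    finesses until the end of time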
"It may be rude to leave to go to the bathroom, but it's downright stupid to sit there and piss yourself" - blackshoe
0

#91 User is offline   Quartic 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 285
  • Joined: 2010-December-19
  • Gender:Male
  • Location:England
  • Interests:Walking, Climbing, Mathematics, Programming, Linux, Reading, Bridge.

Posted 2012-August-03, 00:16

Free, on 2012-August-02, 14:13, said:

In an SD decision tree, the value of each node is calculated by performing several DD analyses at that point.


The problem is worse than this: to make a true single-dummy solver you have to assume the other players are also playing single dummy; that is, you have to take into account that they don't know all the cards either. Also, accurately modelling the two defenders (for example) you're playing against at the time is another layer of complexity.
0

#92 User is offline   Scarabin 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-August-03, 01:06

Free, on 2012-August-02, 14:13, said:

You can write a single-dummy program which will play the percentage line all of the time, and hopefully it may run smoothly one day, but at this moment it would take years to calculate the best lead.



Forgive me for being stupid, but I do not see why this should be, particularly if I use a table of probabilities.
As regards opening leads, the usual rules and considerations seem to work better than random simulations.


Free, on 2012-August-02, 14:13, said:

However, in an SD decision tree, the value of each node is calculated by performing several DD analyses at that point.



Isn't this just a subset of a Monte Carlo simulation? My concept of a single-dummy analysis would be based on a priori probabilities.
0

#93 User is offline   Scarabin 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-August-03, 01:16

CarlRitner, on 2012-August-01, 08:21, said:

Look at Bob Richardson's program, Bridge Captain (used to be Bridge mate). That database is completely configurable by the end user and it's written in a script format that is easy for anyone exposed to any basic programming language to grasp and use.
...........

Bobby Goldman, author of Aces Scientific, wrote in that book that the better and tighter the set of rules governing the bids, the less the team needs to rely on individual judgment, or something very much along those lines. Aces Scientific (the Advanced leaf) looks to be a very rigid, tight system, and since the modern 2/1 GF shares a lot with this system, that book (hard to find and hard to read) might offer a better starting point than Roman Club.




Thanks for the steer to Bridge Captain. It is very interesting and already includes Super Precision, although I have not had time to check how accurate and complete this is.

Appreciate your second point but I will probably persevere with Roman Club, because:

-it seems I am writing this simulation for myself,

-I have already done a fair bit of work on it,

-I am familiar with Roman Club,

-and Roman Club is pretty tight and rigid.

Regards

Scarabin
0

#94 User is offline   bluecalm 

  • PipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 2,555
  • Joined: 2007-January-22

Posted 2012-August-05, 14:59

I think the key to a good bidding program is a fast function to estimate how often a contract makes given two hands. It doesn't need to be very accurate, just accurate in the average case. People can usually do that without long analysis, so I think a computer could too.
I have some ideas for this, but maybe this work is already done or started? Is anybody aware of any efforts in this direction?
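
As a baseline to approximate, the brute-force version is at least easy to state (a Python sketch; dd_tricks() is a hypothetical stand-in for whatever double-dummy library is available, and the whole speed problem lives inside it). The interesting work would be replacing it with something far faster that tracks its output in the average case:

import random

RANKS = "23456789TJQKA"
DECK = [s + r for s in "SHDC" for r in RANKS]   # e.g. "SA" = spade ace

def estimate_make_pct(north, south, level, strain, dd_tricks, samples=50):
    """Deal the unseen 26 cards at random and ask a double-dummy
    solver how many tricks N/S take; return the fraction of deals
    on which the contract makes."""
    unseen = [c for c in DECK if c not in north and c not in south]
    needed = 6 + level
    made = 0
    for _ in range(samples):
        random.shuffle(unseen)
        east, west = unseen[:13], unseen[13:]
        if dd_tricks((north, east, south, west), declarer="S",
                     strain=strain) >= needed:
            made += 1
    return made / samples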
0

#95 User is offline   Antrax 

  • PipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 2,458
  • Joined: 2011-March-15
  • Gender:Male

Posted 2012-August-05, 21:08

Quote

People can usually do that without long analysis, so I think a computer could too
It sounds feasible, but the quoted argument isn't the reason why. There are several things humans (or even kids) do without a moment's thought that we have no clue how to make a computer do.
0

#96 User is offline   hrothgar 

  • PipPipPipPipPipPipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 15,403
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2012-August-06, 14:27

FWIW, I have an intrinsic bias against large, grandiose projects. It's all fine and dandy to want to design and build the world's greatest bridge program; however, in my experience these types of large projects tend to crash and burn because the developers bite off more than they can chew. I think that you'll do a lot better if you try to tackle a relatively simple, well-defined and well-bounded problem. Use this project to give your team some practical experience working together. Make sure that you're able to chalk up some small practical successes in the short term. This type of strategy will give you a good foundation to build upon as you move towards your final goal.

Here is one path that you might consider pursuing:

In theory, it should be possible to compare the efficiency of different strategies for investigating slam. As a practical example, let's assume that you wanted to compare:

1. Denial cuebidding with an AKQ base
2. Denial cuebidding with an AK base
3. Roman Key Card Blackwood and Control Asking Bids

How would you design a software system that could effectively compare the efficiency of the different approaches? Personally, I think that the results of such a study would be extremely interesting, especially if you could generalize this to include a broad range of systems. One possible shape for such a harness is sketched below.
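
Roughly the harness I am picturing, as a Python sketch (every name in it is placeholder plumbing you would have to supply; none of it is an existing library):

def compare_methods(methods, deal_generator, score, trials=1000):
    """Bid the same deals with every slam-investigation method and
    tally the scores, so the method is the only variable.
    methods: maps a name to bid(north, south) -> final contract
    deal_generator: produces deals constrained to slam interest
    score: compares the reached contract to the deal's double-dummy
           result, e.g. in IMPs against par"""
    totals = {name: 0 for name in methods}
    for _ in range(trials):
        deal = deal_generator()
        for name, bid in methods.items():
            totals[name] += score(bid(deal.north, deal.south), deal)
    return totals

A few thousand constrained deals through that loop should start to separate the three approaches above.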

I recognize that your mileage may vary. Quite honestly, I am fairly indifferent regarding which precise project you intend to study. However, I do strongly recommend that you start small and build from there.
Alderaan delenda est
0

#97 User is offline   CarlRitner 

  • PipPipPipPip
  • Group: Full Members
  • Posts: 211
  • Joined: 2005-July-14

Posted 2012-August-07, 18:41

Good to hear from you again, RW. It's good advice you offer.

Any "expert" bidding program is going to need, at a minimum, as complete a bidding database as possible. And each bid sequence is going to need as disciplined a definition as possible. Each hole in the system will degrade its value many times more than simply a less than optimal definition.

The above project turned out to be more than I could handle alone. Every time I got someone else motivated to help, they got a little way into it and then wanted to make changes to the system. Changes for the better, I do not doubt, but the ripple effect created more work in the "to finish" pile than it generated in the "done" pile.

I can imagine the challenges having a team working on this would present. In the end I gave it up, started playing bridge again, and lived happily ever after. Sort of.
Cheers,
Carl
0

#98 User is offline   tytobyto 

  • PipPip
  • Group: Members
  • Posts: 20
  • Joined: 2012-December-05

Posted 2012-December-15, 19:24

View PostCarlRitner, on 2012-August-07, 18:41, said:

Good to hear from you again, RW. It's good advice you offer.

Any "expert" bidding program is going to need, at a minimum, as complete a bidding database as possible. And each bid sequence is going to need as disciplined a definition as possible. Each hole in the system will degrade its value many times more than a merely less-than-optimal definition will.

The above project turned out to be more than I could handle alone. Every time I got someone else motivated to help, they got a little way into it and then wanted to make changes to the system. Changes for the better, I do not doubt, but the ripple effect created more work in the "to finish" pile than it generated in the "done" pile.

I can imagine the challenges having a team working on this would present. In the end I gave it up, started playing bridge again, and lived happily ever after. Sort of.

A good bidding database could benefit humans as well as robots, and it should be of a size that would not necessarily choke a cooperative effort.

In my occasional efforts over the past 30 years to write a Precision bidder, I have learned some of the hazards in developing and using a bidding database. Perhaps the most basic: if one description interprets as "I have 5 or 6 hearts," then all other descriptions at the same level must say "I have fewer than 5 hearts or more than 6." This, unfortunately, becomes entirely too tedious, so descriptive languages use priority/precedence to adjudicate overlaps, and catchalls at low priorities to handle the unfortunate (but astoundingly commonplace) situations that fall through the cracks. The problem with the use of priority is that it rarely imitates human decisions, which usually resort instead to a set of subordinate rules. This project's bidding description must include syntax for resolving conflicts (and conflicts within conflicts, etc.) while realizing that such descriptions are likely to tax even experts on bidding who are not also experts at description.

The problem with catchall rules is that, after a lengthy set of rules, it can be difficult to calculate (and, in somewhat of a reversal of the typical mind/machine discrepancy, difficult to envision) exactly what the hand making the bid holds. This is a problem because it becomes difficult to accurately describe the responses. All too frequently (frequency in the type of database description, not frequency of the hand occurring), each crack that the catchall catches requires an additional subset of rules to ensure that the response is sensible. Generalities are great, but 95% of the problems are 5% of the cases.

I would recommend that the project, as soon as it settles on a syntax for recording bidding systems, begin development of a tool which can analyze the descriptions and, because problems are always more likely at the extremes, use random extremes to generate examples of both overlaps and fallthroughs, to query whether the rules are adequate. This tool would also be useful to those contemplating new systems or conventions.
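
A rough sketch of such a tool in Python, assuming each rule for a given auction has been boiled down to a priority plus a predicate over a 13-card hand (the representation is illustrative only):

import random

RANKS = "23456789TJQKA"
DECK = [s + r for s in "SHDC" for r in RANKS]

def audit_rules(rules, deals=10000):
    """rules: list of (priority, predicate) pairs for one auction,
    where predicate(hand) says whether that rule's bid applies.
    Collects the two failure modes described above: hands no rule
    covers (fallthroughs) and hands matched by two rules of equal
    priority (overlaps that precedence cannot adjudicate).  Biasing
    the dealer toward extreme shapes, as suggested, would surface
    problems faster than uniform deals."""
    fallthroughs, overlaps = [], []
    for _ in range(deals):
        hand = random.sample(DECK, 13)
        hits = [p for p, pred in rules if pred(hand)]
        if not hits:
            fallthroughs.append(hand)
        elif hits.count(min(hits)) > 1:
            overlaps.append(hand)
    return fallthroughs, overlaps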

(edit to add the following)
Another useful tool would be one that tests a database contributor by generating hands, asking the contributor for the proper bid, and then comparing it to the bid described in the database. It is rare that an original description does not require elaboration in obscure places.
0

#99 User is offline   Cthulhu D 

  • PipPipPipPipPipPip
  • Group: Advanced Members
  • Posts: 1,169
  • Joined: 2011-November-21
  • Gender:Not Telling
  • Location:Australia
  • Interests:Overbidding

Posted 2012-December-16, 09:44

You're almost certainly better off splitting the problem in two. The bidding engine and the play engine need no relationship with each other, as long as the bidding engine outputs all the information it knows about the hands at the end and the play engine picks that information up - but for declarer play, it doesn't even need that (though it might be wise to do it anyway).

It's probably best to split the problem in three - bidding, declarer play, and defensive play - and start with bidding or declarer play.
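
To picture that handoff (a Python sketch; the field names are mine, not any existing engine's):

from dataclasses import dataclass, field

@dataclass
class SeatConstraints:
    """Everything the auction revealed about one seat -- the entire
    handoff from bidding engine to play engine.  The play engine
    deals sample hands consistent with all four seats' constraints
    when it simulates."""
    min_hcp: int = 0
    max_hcp: int = 37
    suit_lengths: dict = field(default_factory=dict)  # suit -> (min, max)

# e.g. what the bidding engine might emit for a 15-17 1NT opener:
opener = SeatConstraints(min_hcp=15, max_hcp=17,
                         suit_lengths={s: (2, 5) for s in "SHDC"})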
0

#100 User is offline   Grizz1y 

  • Pip
  • Group: Members
  • Posts: 8
  • Joined: 2012-January-08

Posted 2013-January-13, 16:40

barmar, on 2012-July-30, 18:43, said:

The rules in the Meadowlark bidding database used by GIB each contain the following information: Bid, Priority, Auction pattern, Hand pattern, Criteria, Specification.
Bid: This can be a specific bid like 1, but most of the time it's more general like "1 of some suit other than the last bid", "2 of some suit lower than partner's bid", or "jump bid in the same suit partner bid".


Thanks, Barmar.
Very interesting explanations.
Just curious whether this bidding database (or perhaps an earlier version of it) is, by any chance, publicly available somewhere?
It would be an interesting study, I think, from several aspects...
0
