Niemann - Carlsen


  • J. Crowhurst
    replied
    Originally posted by Vlad Drkulec View Post

    Sounds like you didn't hear many death threats before that incident.

    The adrenaline is geared towards making you able to take action or to run away (fight or flight) and not to help you talk. Time is slowed down. Senses are heightened. In that instant you are focused on the moment and what is happening. Fully alive and fully present and ready to take action or in some cases to freeze depending on how you handle the internal dialog awaiting your instructions to act or not act.
    Well, this death threat was five minutes before he was to take the witness stand, just outside the courtroom, and the second was three minutes later as the judge was about to walk in. And then he had no memory of either threat, because for him it's just shit talk, it doesn't mean anything. Until someone loses an eye....



  • Brad Thomson
    replied
    I will say that as long as lie-detectors are not used as evidence in courts of law, but only as a tool the police can quietly use to develop or corroborate suspicions, then I have no objection to them. But when someone fails a lie-detector test the police should not (and generally do not) make that information public, as this would be completely unfair to the accused in the court of public opinion.



  • Pargat Perrer
    replied
    Originally posted by Aris Marghetis View Post
    Dear Brad and Pargat, thank you for your replies, but it doesn't look like we can bridge the gap in our opinion of Dr. Regan's work. Respectfully, if your understanding is that Regan "must be doing a particular thing" (my quotes for emphasis, not actual quotes), but you don't believe that Dr. Regan is doing "something deeper", even though he actually IS doing something deeper, well then, I don't see how we can break that impasse. If you like, feel free to research and study Dr. Regan's work. The next step, which is understandably not available to most people, is to work with him to screen a significant chess tournament. All I feel I can say on this public forum is that his work and results are significantly deeper than what we can allude to here, a few paragraphs at a time. Then Brad, you and I go way back, my life is so enriched for knowing you, but calling Dr. Regan's work "mumbo jumbo ... theories" is not right. In my opinion, Dr. Regan's work continues to be crucial for saving chess. Finally, Dr. Regan's work is just one detection tool. Of course catching cheaters red-handed is great, but ALL detection tools can be employed as per the situation, to properly catch cheaters.

    I don't know that we can make any more progress on our differences here. I wish you both all the best.
    I don't mind disagreeing with you Aris, but I think you don't get what our disagreement centers on.

    I am not disagreeing as to whether Ken Regan's methods with his data are "doing something deeper". I respect his knowledge and his work. What I have been stressing is the single data point that Regan SHOULD BE working with: cheater makes top move choice of an engine (allegedly getting a communication as to an engine's top choice move, but that isn't a data point because we haven't caught the process in action, as Brad points out).

    One can do whatever whiz-bang statistical work one wants with those data points, but those are the only data points you have, because that's the only time cheating is occurring.

    You seemed to say in an earlier email that Regan works with other data besides that. What we are disagreeing on is whether he SHOULD be using such other data, whatever it may be.

    Your exact quote: "It's not as simple as "how often did the player obviously choose the computer move". "

    But that is the only time cheating is occurring, if it is occurring. So yes, it IS that simple as far as data points are concerned.



  • Aris Marghetis
    replied
    Originally posted by Brad Thomson View Post
    Thank you Aris, I agree that we shall have to agree to disagree. I am of the opinion that there are far too many variables in life for Regan's theories to be useful, you are not. I am not willing to accuse someone of cheating based upon Regan's theories, you are. Please correct me if I am wrong. So again, we respectfully agree to disagree. Life would be boring if we all had the same opinions on everything.
    We're good Brad. I'll instead drop you an email now.



  • Brad Thomson
    replied
    Thank you Aris, I agree that we shall have to agree to disagree. I am of the opinion that there are far too many variables in life for Regan's theories to be useful, you are not. I am not willing to accuse someone of cheating based upon Regan's theories, you are. Please correct me if I am wrong. So again, we respectfully agree to disagree. Life would be boring if we all had the same opinions on everything.



  • Aris Marghetis
    replied
    Dear Brad and Pargat, thank you for your replies, but it doesn't look like we can bridge the gap in our opinion of Dr. Regan's work. Respectfully, if your understanding is that Regan "must be doing a particular thing" (my quotes for emphasis, not actual quotes), but you don't believe that Dr. Regan is doing "something deeper", even though he actually IS doing something deeper, well then, I don't see how we can break that impasse. If you like, feel free to research and study Dr. Regan's work. The next step, which is understandably not available to most people, is to work with him to screen a significant chess tournament. All I feel I can say on this public forum is that his work and results are significantly deeper than what we can allude to here, a few paragraphs at a time. Then Brad, you and I go way back, my life is so enriched for knowing you, but calling Dr. Regan's work "mumbo jumbo ... theories" is not right. In my opinion, Dr. Regan's work continues to be crucial for saving chess. Finally, Dr. Regan's work is just one detection tool. Of course catching cheaters red-handed is great, but ALL detection tools can be employed as per the situation, to properly catch cheaters.

    I don't know that we can make any more progress on our differences here. I wish you both all the best.



  • Brad Thomson
    replied
    Thank you to everyone in this discussion. My opinion has not changed. Mathematics and formulae are not good enough; we must catch the cheaters in the act. And again, people are playing more and more like computers all the time, comparisons at any level are not sufficient to convict, catch them or leave them alone. What Carlsen and his supporters have done is the true crime here.

    I should add that players improve at varying rates of speed, especially young players who are brilliant and who have access to computers. And some players run very hot and very cold, consider Nepo. Regan's theories are a waste of time, and detrimental to the game. Chess cannot be quantified in such a cavalier manner.

    I dare say that if you gave Regan about 20 Nepo games, starting with when he is cold and then when he turns hot, the conclusion would be that he cheated (nobody improves that fast). The same could be said about the games of Canadian IM Deen Hergott. Deen had spells where he seemed very off form, and then periods when his games were magical and brilliant. Regan would say he cheated... oops, there were NO computers then that he could have used.

    Sandbaggers sometimes deliberately lose games when nothing is on the line then play their honest best when there is more at stake. Regan would claim they cheated when they played their best, but the exact opposite would have been the case.

    What if some player has a tragedy in her life and decides to play a lot of chess to work her way out of it? Her games at first are not nearly up to her normal ability, but this is when Regan first gets a hold of them and applies his "theories". Then she comes around as the tragedy moves farther into the past and gets better results at a very fast rate. Regan would contend that she must have cheated to improve that fast. (Statistics and formulae do not lie.)

    Again, Regan's theories are mumbo jumbo, there are too many variables in life for them to be at all meaningful. Hence, they are dangerous and very detrimental to the integrity of the game of chess.

    Catch cheaters cheating red handed.
    Last edited by Brad Thomson; Sunday, 6th November, 2022, 08:32 AM.



  • Pargat Perrer
    replied
    Originally posted by Aris Marghetis View Post
    So in the very early days, Regan (and others) started by counting the number of times "a player making the move that an engine would make". But that evolved. These methodologies are now "counting" the "quality gap" (CPL) between played moves and engine moves, discounting forced moves, discounting opening moves, and some other discounts that I prefer not to publicize here.
    Thanks, Aris, this is pretty much EXACTLY how I do my GPR ratings work. I discount the first 12 moves (24 plies) of each game, because obviously there is no way to know exactly when a player transitions from playing by memory into playing by calculation. But 12 seems to me to be a good average approximation. I also discount if the gap between the Stockfish score of the move played versus the Stockfish score of its own best move is less than a certain level, because the two moves can be judged to be almost equivalent (at the search depth I use, which is pretty deep). I also discount if the move played is identical to Stockfish's but the next-best move is way below that score, implying a forced or obvious good move.

    The moves I discount do not even get tabulated into the GPR rating.

    So I do understand about all that part of it.
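
    Pargat's discount rules above could be sketched roughly as follows. This is a minimal illustration, not his actual GPR code: the function name, input format, and all centipawn thresholds are assumptions made up for the example.

```python
# Sketch of a per-move discount filter like the one described above.
# Thresholds and the input format are illustrative assumptions, not the
# actual GPR implementation.

OPENING_MOVES = 12      # first 12 moves (24 plies): likely memory, not calculation
NEAR_EQUIV_CP = 15      # played move within 15 cp of the best: near-equivalent
FORCED_GAP_CP = 150     # best move 150+ cp above second best: forced/obvious

def keep_for_rating(move_no, played_move, best_move,
                    played_cp, best_cp, second_best_cp):
    """True if the move should count toward the rating, False if discounted."""
    if move_no <= OPENING_MOVES:
        return False                # opening moves are discounted outright
    if played_move == best_move:
        # Matched the engine's top choice: count it only if it was NOT forced.
        return best_cp - second_best_cp <= FORCED_GAP_CP
    # Played something else: count it only if it is NOT near-equivalent
    # to the engine's best move.
    return best_cp - played_cp >= NEAR_EQUIV_CP
```

    Moves filtered out this way would simply never enter the rating tabulation, matching the "do not even get tabulated" point below.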

    But on the cheating question, let me ask you something: if a player had a working cheating method, and it involved some communication from outside about an engine's move choices, do you really think the player is getting enough information to be told things like "Ok, Stockfish picks ...Nf6-g5 as the best move, and 20 centipawns below that is ...Rd8-d6, and 33 centipawns below that is ...Bb7xPe4 ..." etc.? And the player decides that because ...Rd8-d6 is only 20 centipawns below and he doesn't want to always choose the engine's best move, he'll choose that move instead?

    That sounds way too sophisticated. You would want to keep the communication as short as possible.

    No, the most likely information they are getting is simply the engine's top move choice, and that's it. And so the player plays that move. That single data point is all Ken Regan can use for his statistical analysis. If he is taking other things into account, like this point gap that you mention, he is going off on a tangent that isn't part of cheating.

    The only data points that can possibly work with cheating are when the position is critical and complex, and there is a single engine move that seems to dominate over others BUT there are one or two other moves that might seem more natural to a human player, and the player chooses the engine move.

    I should stress it again: Those are the only data points Ken Regan SHOULD be using. If he is using other data points, then I totally suspect his work is flawed.

    Player chooses engine move ... in critical difficult complex position ... more often than his rating accounts for (such positions don't come up too often in a single game) .... these are the data points.
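
    The "more often than his rating accounts for" idea above amounts to a one-sided binomial test. Here is a minimal sketch; the baseline match rate `p0` for a player of a given rating is a hypothetical input, not a number from Regan's published work.

```python
# Sketch: probability of matching the engine's top move in at least
# `matches` of `critical_positions` critical positions, if the player's
# true per-position match rate were the rating-based baseline p0.
from math import comb

def match_rate_pvalue(matches, critical_positions, p0):
    """One-sided tail P(X >= matches) for X ~ Binomial(critical_positions, p0)."""
    return sum(
        comb(critical_positions, k) * p0**k * (1 - p0)**(critical_positions - k)
        for k in range(matches, critical_positions + 1)
    )
```

    A tiny tail probability over many critical positions is the kind of "statistical anomaly" being argued about here; it flags improbability, it does not by itself prove cheating.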



    Originally posted by Aris Marghetis View Post
    And then running heavy-duty stats on multiple measurements. It's not as simple as "how often did the player obviously choose the computer move". Finally, I do admit that I can't get my head around the phrase "playing more like engines" if that implies playing less like people? Or am I missing the intent of that phrase? Of course, I heartily accept that players are playing better now.
    Perhaps Ken Regan needs to go further in his work. For example, let's say he has samples of 10 games of a suspected cheater in top level chess. In those games, a total of let's say 40 complex, difficult positions come up in which the engine favors one move by some margin, and the second and third and maybe fourth top choices are significantly weaker, but seem more natural for a human to choose .... then Ken should compile those positions together, gather some top GMs and / or IMs, not tell them where these positions come from, and ask the GMs to study the positions for say 10 minutes and make their choice of move. Now, IF AND ONLY IF the GMs and IMs seem to overwhelmingly choose in all 40 positions from among what the engine had as its second or third or fourth move choices (i.e. the natural seeming moves), BUT the suspected cheater in all or most of the 40 positions chose the top engine move, then you have a case for cheating.

    If the GMs and IMs all choose the engine's top move, in almost all 40 cases, then the alleged cheater choosing them doesn't prove anything.

    I doubt Ken Regan is going that far, but he should be. Just analyzing the moves without having any comparison of what a typical human of similar ability or rating to the alleged cheater would do in those situations is not complete, imo.

    In the case of alleged cheaters at lower levels, do the same analysis but instead of GMs or IMs, choose players at the same level as the cheater's rating level.

    If Ken Regan is in fact going that far, then that is new information that hasn't been disclosed to this point, at least I haven't seen it.



  • Aris Marghetis
    replied
    Originally posted by Tom O'Donnell View Post

    First, I would not use the same standard for chess cheating as I would for incarcerating someone. As I pointed out before, many cases involving millions of dollars, are decided by preponderance of the evidence.

    Second, I would not solely use statistical methods, but I would not discount them, either.

    Third, I once read an article that claimed about 1% of fingerprinting matches are wrong (i.e. a trained fingerprint examiner - a job only a few people are qualified for - made a mistake believing there was a match when there wasn't one). Mistakes are going to happen no matter what standard you use. Eyewitness testimony, fingerprints, even DNA matches can be wrong. People make mistakes. There are even cases of law enforcement planting evidence, manufacturing evidence, and lab people falsifying results. However, that doesn't mean we need to find a serial killer actually in the act of killing someone to convict them.

    Using your example, if DNA that an expert claimed matched mine were found at the scene, and my fingerprints were found on the truck, and there were six eyewitnesses that saw me run off with a bag, etc. but no one caught me in the act of actually taking the money, then I would say I am probably going to prison. Even if there is a chance the eyewitnesses were wrong, the DNA was wrong, the fingerprints didn't match, etc., I would have the right to appeal my conviction. I imagine convicted cheaters, assuming they cared enough, would have the same right.

    Questions regarding chess cheating:

    1) What standard should be used? "Beyond a reasonable doubt" is ridiculous.

    2) How good is the statistical methodology? I proposed a practical test of it in a post above.

    3) Should statistics alone be enough? No, for a variety of reasons.

    4) Should the cheater be "caught in the act"? No, because "catching them in the act" might require an adult to confront the alleged cheater (possibly a minor) in a bathroom stall. Or demanding they empty their pockets. Or some other violation of their rights.
    Solid post Tom, thanks for capturing these points.



  • Tom O'Donnell
    replied
    Originally posted by Pargat Perrer View Post


    Hmmm.... "Surely we cannot only convict miscreants if we "catch them in the act""

    The more we deviate from catching miscreants in the act, the more we do the reverse and convict innocents based on evidence that could be circumstantial or simply wrong.

    I like however that you recognize the sketchiness of using only statistics to prove guilt (at least in the chess cheating domain).

    Please read my post responding to Aris. Whatever methods Ken Regan is using, the key data points are when a player's move matches what a top engine would play. What other data points can he use? Cheating in chess is somehow getting a computer move and playing it.

    Brad is so right to point out that the younger generation of chess players, just from using engines so much, must be developing the abilities to play more like engines. And if that is happening, then there are bound to be many false positives from Regan's methods.

    EDIT: I really give Brad kudos for stressing that we simply MUST catch these miscreants in the act. That is what we should be stressing, not using mysterious statistics that only a few people understand.

    As an example, would you like, Tom, to be accused of robbing a Brinks truck when it showed up at a bank simply because statistics showed that so many times when that Brinks truck showed up, you happened to be in the bank?
    First, I would not use the same standard for chess cheating as I would for incarcerating someone. As I pointed out before, many cases involving millions of dollars, are decided by preponderance of the evidence.

    Second, I would not solely use statistical methods, but I would not discount them, either.

    Third, I once read an article that claimed about 1% of fingerprinting matches are wrong (i.e. a trained fingerprint examiner - a job only a few people are qualified for - made a mistake believing there was a match when there wasn't one). Mistakes are going to happen no matter what standard you use. Eyewitness testimony, fingerprints, even DNA matches can be wrong. People make mistakes. There are even cases of law enforcement planting evidence, manufacturing evidence, and lab people falsifying results. However, that doesn't mean we need to find a serial killer actually in the act of killing someone to convict them.

    Using your example, if DNA that an expert claimed matched mine were found at the scene, and my fingerprints were found on the truck, and there were six eyewitnesses that saw me run off with a bag, etc. but no one caught me in the act of actually taking the money, then I would say I am probably going to prison. Even if there is a chance the eyewitnesses were wrong, the DNA was wrong, the fingerprints didn't match, etc., I would have the right to appeal my conviction. I imagine convicted cheaters, assuming they cared enough, would have the same right.

    Questions regarding chess cheating:

    1) What standard should be used? "Beyond a reasonable doubt" is ridiculous.

    2) How good is the statistical methodology? I proposed a practical test of it in a post above.

    3) Should statistics alone be enough? No, for a variety of reasons.

    4) Should the cheater be "caught in the act"? No, because "catching them in the act" might require an adult to confront the alleged cheater (possibly a minor) in a bathroom stall. Or demanding they empty their pockets. Or some other violation of their rights.



  • Aris Marghetis
    replied
    Originally posted by Pargat Perrer View Post

    Hi Aris. I have been doing that kind of measurability with my Game Performance Rating work. The engine used is Stockfish 15.

    I found that often in top level chess, neither of the two players is playing for a win. In this case, they will each play entirely computer moves and the result will be a short or medium-length draw. Their GPR will skyrocket because they are matching Stockfish move for move. I have to discard the game, on the premise that neither player showed any initiative.

    The whole basis of cheating at chess is that in a critical position, you somehow get the move that the engine would make and you make that move. The engine would most likely be Stockfish 15, since it is apparently the best among all the minimax engines (AlphaZero would be another possibility except it requires very expensive hardware to run).

    So any "statistical anomalies" that Ken Regan is searching for are based on a player making the move that an engine would make. That is the key data point. That is the definition of cheating, that you somehow get that computer move and use it.

    Regan has to use statistics because the player in question is not cheating on every move (if he's careful, that is). But still, whatever statistics he is using, that doesn't matter. What matters is matching moves to the engine.

    Therefore Brad is correct in saying the basis of Regan's work is finding players who are "playing more like engines". And Brad's idea that the young players and top players of today's generation are bound to be playing more like engines is also absolutely correct.
    So in the very early days, Regan (and others) started by counting the number of times "a player making the move that an engine would make". But that evolved. These methodologies are now "counting" the "quality gap" (CPL) between played moves and engine moves, discounting forced moves, discounting opening moves, and some other discounts that I prefer not to publicize here.

    And then running heavy-duty stats on multiple measurements. It's not as simple as "how often did the player obviously choose the computer move". Finally, I do admit that I can't get my head around the phrase "playing more like engines" if that implies playing less like people? Or am I missing the intent of that phrase? Of course, I heartily accept that players are playing better now.
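
    The CPL-style "quality gap" counting described above could be sketched like this. This is an illustration only: the discounts named here (opening moves, forced moves) are the ones mentioned in the post, the thresholds are invented for the example, and the deliberately unpublicized discounts are of course not included.

```python
# Rough sketch of CPL-style scoring: average the centipawn gap between
# the played move and the engine's best move, after discounting opening
# moves and forced moves. Thresholds are illustrative assumptions.

OPENING_MOVES = 10    # skip likely book moves
FORCED_GAP_CP = 200   # best move far above second best: treat as forced

def average_cpl(moves):
    """moves: list of (move_no, played_cp, best_cp, second_best_cp)."""
    losses = [
        best - played
        for move_no, played, best, second in moves
        if move_no > OPENING_MOVES and best - second <= FORCED_GAP_CP
    ]
    return sum(losses) / len(losses) if losses else 0.0
```

    A pure engine-match count is then just one special case (loss of zero on every counted move), which is why the methodology is described as having moved beyond "how often did the player choose the computer move".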



  • Pargat Perrer
    replied
    Originally posted by Tom O'Donnell View Post

    That is a very strange standard. There are all sorts of "fanciful mathematical theories" we use to do more important things than catch chess cheats. DNA matching tests. Forensic accounting. Even weirder stuff like Benford's Law. Surely we cannot only convict miscreants if we "catch them in the act".

    I would agree that using only statistics, particularly in the relatively new field of chess cheat catching, to prove guilt, seems sketchy. That doesn't appear to me to be what Regan is doing.

    Hmmm.... "Surely we cannot only convict miscreants if we "catch them in the act""

    The more we deviate from catching miscreants in the act, the more we do the reverse and convict innocents based on evidence that could be circumstantial or simply wrong.

    I like however that you recognize the sketchiness of using only statistics to prove guilt (at least in the chess cheating domain).

    Please read my post responding to Aris. Whatever methods Ken Regan is using, the key data points are when a player's move matches what a top engine would play. What other data points can he use? Cheating in chess is somehow getting a computer move and playing it.

    Brad is so right to point out that the younger generation of chess players, just from using engines so much, must be developing the abilities to play more like engines. And if that is happening, then there are bound to be many false positives from Regan's methods.

    EDIT: I really give Brad kudos for stressing that we simply MUST catch these miscreants in the act. That is what we should be stressing, not using mysterious statistics that only a few people understand.

    As an example, would you like, Tom, to be accused of robbing a Brinks truck when it showed up at a bank simply because statistics showed that so many times when that Brinks truck showed up, you happened to be in the bank?
    Last edited by Pargat Perrer; Saturday, 5th November, 2022, 06:33 PM.



  • Pargat Perrer
    replied
    Originally posted by Aris Marghetis View Post

    Brad, easy man. What Sid wrote right here is essentially correct. Regan's statistical methodologies do not catch people "playing more like engines" (I don't know if anyone's ever figured out how to apply any kind of measurability to that). However, going back to something like mid-1800s games, his work is simply (not simple) massive statistical crunching. I humbly believe that Regan is The Man.
    Hi Aris. I have been doing that kind of measurability with my Game Performance Rating work. The engine used is Stockfish 15.

    I found that often in top level chess, neither of the two players is playing for a win. In this case, they will each play entirely computer moves and the result will be a short or medium-length draw. Their GPR will skyrocket because they are matching Stockfish move for move. I have to discard the game, on the premise that neither player showed any initiative.

    The whole basis of cheating at chess is that in a critical position, you somehow get the move that the engine would make and you make that move. The engine would most likely be Stockfish 15, since it is apparently the best among all the minimax engines (AlphaZero would be another possibility except it requires very expensive hardware to run).

    So any "statistical anomalies" that Ken Regan is searching for are based on a player making the move that an engine would make. That is the key data point. That is the definition of cheating, that you somehow get that computer move and use it.

    Regan has to use statistics because the player in question is not cheating on every move (if he's careful, that is). But still, whatever statistics he is using, that doesn't matter. What matters is matching moves to the engine.

    Therefore Brad is correct in saying the basis of Regan's work is finding players who are "playing more like engines". And Brad's idea that the young players and top players of today's generation are bound to be playing more like engines is also absolutely correct.



  • Aris Marghetis
    replied
    Originally posted by Sid Belzberg View Post

    Your interpretation of Ken Regan's work is incorrect. The idea is to look for statistical anomalies and variances from the norm. Ken Regan actually posted in this thread, and your insulting tone is uncalled for.
    Brad, easy man. What Sid wrote right here is essentially correct. Regan's statistical methodologies do not catch people "playing more like engines" (I don't know if anyone's ever figured out how to apply any kind of measurability to that). However, going back to something like mid-1800s games, his work is simply (not simple) massive statistical crunching. I humbly believe that Regan is The Man.



  • Tom O'Donnell
    replied
    Originally posted by Brad Thomson View Post
    Tom, suppose a cheater looks at a computer analysis and rejects it. This is still cheating. The police will not be able to detect that the person cheated. I will never accept fanciful mathematical theories as an acceptable means of accusing cheaters. Catch them in the act.
    That is a very strange standard. There are all sorts of "fanciful mathematical theories" we use to do more important things than catch chess cheats. DNA matching tests. Forensic accounting. Even weirder stuff like Benford's Law. Surely we cannot only convict miscreants if we "catch them in the act".

    I would agree that using only statistics, particularly in the relatively new field of chess cheat catching, to prove guilt, seems sketchy. That doesn't appear to me to be what Regan is doing.
    Last edited by Tom O'Donnell; Saturday, 5th November, 2022, 03:58 PM.

