Policy / Politique
The fee for tournament organizers advertising on ChessTalk is $20/event or $100/year for unlimited events.
You can e-transfer to Henry Lam at chesstalkforum@gmail.com.
Dark Knight / Le Chevalier Noir
General Guidelines
---- We need a French translation!
Some Basics
1. Under the board's "Frequently Asked Questions" (FAQs) there are three sections dealing with General Forum Usage, User Profile Features, and Reading and Posting Messages. These cover everything from Avatars to Your Notifications. Most general technical questions are answered there. Here is a link to the FAQs: https://forum.chesstalk.com/help
2. Consider using the SEARCH button if you are looking for information. You may find your question has already been answered in a previous thread.
3. If you've looked for an answer to a question, and not found one, then you should consider asking your question in a new thread. For example, there have already been questions and discussion regarding: how to do chess diagrams (FENs); crosstables that line up properly; and the numerous little “glitches” that every new site will have.
4. Read pinned or sticky threads, like this one, if they look important. This applies especially to newcomers.
5. Read the thread you're posting in before you post. There are a variety of ways to look at a thread. These are covered under “Display Modes”.
6. Thread titles: please provide some details in your thread title. This is useful for a number of reasons. It helps ChessTalk members to quickly skim the threads. It prevents duplication of threads. And so on.
7. Unnecessary thread proliferation (e.g., deliberately creating a new thread that duplicates existing discussion) is discouraged. Look to see if a thread on your topic may have already been started and, if so, consider adding your contribution to the pre-existing thread. However, starting new threads to explore side-issues that are not relevant to the original subject is strongly encouraged. A single thread on the Canadian Open, with hundreds of posts on multiple sub-topics, is no better than a dozen threads on the Open covering only a few topics. Use your good judgment when starting a new thread.
8. If and/or when sub-forums are created, please make sure to create threads in the proper place.
Debate
9. Give an opinion and back it up with a reason. Throwaway comments such as "Game X pwnz because my friend and I think so!" could be considered pointless at best, and inflammatory at worst.
10. Try to give your own opinions, not simply those copied and pasted from reviews or opinions of your friends.
Unacceptable behavior and warnings
11. In registering here at ChessTalk please note that the same or similar rules apply here as applied at the previous Boardhost message board. In particular, the following content is not permitted to appear in any messages:
* Racism
* Hatred
* Harassment
* Adult content
* Obscene material
* Nudity or pornography
* Material that infringes intellectual property or other proprietary rights of any party
* Material the posting of which is tortious or violates a contractual or fiduciary obligation you or we owe to another party
* Piracy, hacking, viruses, worms, or warez
* Spam
* Any illegal content
* Unapproved commercial banner advertisements or revenue-generating links
* Any link to or any images from a site containing any material outlined in these restrictions
* Any material deemed offensive or inappropriate by the Board staff
12. Users are welcome to challenge other points of view and opinions, but should do so respectfully. Personal attacks on others will not be tolerated. Posts and threads with unacceptable content can be closed or deleted altogether. Furthermore, a range of sanctions are possible - from a simple warning to a temporary or even a permanent banning from ChessTalk.
Helping to Moderate
13. 'Report' links (an exclamation mark inside a triangle) can be found in many places throughout the board. These links allow users to alert the board staff to anything which is offensive, objectionable or illegal. Please consider using this feature if the need arises.
Advice for free
14. You should exercise the same caution with Private Messages as you would with any public posting.
Mario, that depends: maybe the move that complicates the game might lose by more!
In that case AlphaGo would be correct in keeping the evaluation as close to even as possible.
Chess computers do the same thing when avoiding checkmates by sacrificing pieces to prolong the game: machines do not care what looks harder for humans, only what survives the longest.
They would rather allow an easy mate in 5 than a harder mate in 2.
Therefore the policy mistake is with the humans, since it is better to make your opponent take the longer route to victory than to hope that he misses the quicker path.
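To make that mate-distance point concrete, here is a minimal illustrative sketch (not the code of any actual engine; the scoring convention and the numbers are assumptions) of why a losing engine steers toward the line that delays mate the longest, regardless of how hard either mate is for a human to find:

```python
# Illustrative sketch (assumed scoring convention, not any specific engine):
# engines typically encode "mated in N" so that being mated later scores
# better, which is why a losing engine steers toward the longest mate even
# if the shorter mate would be harder for a human to find.

MATE_SCORE = 100_000

def score_mated_in(plies_until_mate: int) -> int:
    """Score from the losing side's view: larger (less negative) is better."""
    return -MATE_SCORE + plies_until_mate

candidate_lines = {
    "hard-to-find mate in 2": score_mated_in(4),   # 2 moves = 4 plies
    "easy mate in 5": score_mated_in(10),          # 5 moves = 10 plies
}

# The engine picks the line that delays mate the longest, ignoring
# how difficult either mate is for the opponent to see.
best = max(candidate_lines, key=candidate_lines.get)
print(best)   # -> "easy mate in 5"
```

Difficulty for the human defender never enters the score, which is exactly the behaviour described above.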
Last edited by Lee Hendon; Monday, 14th March, 2016, 04:37 AM.
In the online New Yorker site there is an article on the training of Korean children in Go.
The first paragraph is a grabber:
In the Age of Google DeepMind, Do the Young Go Prodigies of Asia Have a Future?
BY DAWN CHAN
The game of Go is simple in design but complex in its possible outcomes.
Choong-am Dojang is far from a typical Korean school. Its best pupils will never study history or math, nor will they receive traditional high-school diplomas. The academy, which operates above a bowling alley on a narrow street in northwestern Seoul, teaches only one subject: the game of Go. Each day, Choong-am’s students arrive at nine in the morning, find places at desks in a fluorescent-lit room, and play, study, memorize, and review games—with breaks for cafeteria meals or an occasional soccer match—until nine at night.
AlphaGo's Policy neural network does not care about the number of points it wins or loses by. Its highest priority is maximizing winning probability or (when losing) minimizing losing probability; a toy sketch of that preference follows below. After move 78, it should have followed what you say chess engines do: attempt to prolong the game by complicating it.
The Policy neural network is the boss of hundreds (the literature says a maximum of 1,200; I don't know how many were actually used) of brute-force engines similar to chess engines. In future versions I see the following changes:
1. A change in the dynamic depth-of-analysis assigned to an engine. My guess is that whichever engine was given the task of tackling the area around move 78 was not going deep enough, thus affecting the overall assessment of the entire board as a loss for Black (AlphaGo). Many (including commentator Michael Redmond) are now saying move 78 "did not work".
2. A change to the Policy NN to make use of complexity on the board.
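To illustrate the distinction being drawn here (a toy sketch only, not DeepMind's code; every name and number in it is an assumption), a selector that maximizes win probability can prefer a small, safe lead over a larger but riskier one:

```python
# A minimal sketch of win-probability-based move selection: the evaluation
# returns a probability of winning, not a point margin, so the selector can
# prefer a "safe" small lead over a risky large one. Illustrative only.

from dataclasses import dataclass

@dataclass
class Candidate:
    move: str
    win_probability: float   # what an AlphaGo-style evaluation optimizes
    expected_margin: float   # points won/lost by; ignored by the selector

def select_move(candidates):
    """Pick the move with the highest estimated win probability."""
    return max(candidates, key=lambda c: c.win_probability)

if __name__ == "__main__":
    candidates = [
        Candidate("safe_endgame_move", win_probability=0.62, expected_margin=+1.5),
        Candidate("complicating_invasion", win_probability=0.55, expected_margin=+12.0),
    ]
    best = select_move(candidates)
    # Prints the safe move: a 1.5-point win at 62% beats a 12-point win at 55%.
    print(best.move)
```

This is the sense in which the margin of victory is invisible to the selector: only the chance of winning at all matters.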
There is a fun opinion piece on Slate today entitled “The Go Champion, the Grandmaster, and Me”.
If you follow Jeopardy! at all, you will remember Ken Jennings, who won 74 consecutive games.
In 2011, he and fellow champion Brad Rutter were pitted against a quiz-show-playing computer called Watson, to which they both lost.
Garry Kasparov lost to IBM's Deep Blue in 1997. And Lee Sedol just lost in Go to AlphaGo.
Ken writes:
So on behalf of Garry and Brad, I’d like to welcome Lee Sedol to our tiny fraternity. Go may not be a major pastime in the United States and Europe, but it’s a big deal in China, where more than 60 million people watched the first game of the AlphaGo match. Lee’s celebrity and place in the history books are now assured, thanks to something that happened to him during what was almost certainly one of the worst weeks of his life.
______
And later Ken alludes to Kurt Vonnegut’s novel Player Piano, set in a future America where nearly all careers have been automated by computers.
Actually, the situation may arise when even the simplest machines will beat you. Anand, in a marvelous interview, said this last year:
“In 1997 you needed a supercomputer to beat the strongest human on the planet. By 2000 your laptop could do it, by 2004 an old laptop could do it, then your mobile phone couldn’t do it for a while but by the second or third iteration of these things that started to happen and soon your kitchen table will do it, your fridge will do it… it’s not going to be fun.”
The introduction to an article in Wired on the program that plays Go:
THIS MORE POWERFUL VERSION OF ALPHAGO LEARNS ON ITS OWN
NOAH SHELDON
AT ONE POINT during his historic defeat to the software AlphaGo last year, world champion Go player Lee Sedol abruptly left the room. The bot had played a move that confounded established theories of the board game*, in a moment that came to epitomize the mystery and mastery of AlphaGo.
A new and much more powerful version of the program called AlphaGo Zero unveiled Wednesday is even more capable of surprises. In tests, it trounced the version that defeated Lee by 100 games to nothing, and has begun to generate its own new ideas for the more than 2,000-year-old game.
AlphaGo Zero showcases an approach to teaching machines new tricks that makes them less reliant on humans. It could also help AlphaGo’s creator, the London-based DeepMind research lab that is part of Alphabet, to pay its way. In a filing this month, DeepMind said it lost £96 million last year.
DeepMind CEO Demis Hassabis said in a press briefing Monday that the guts of AlphaGo Zero should be adaptable to scientific problems such as drug discovery, or understanding protein folding. They too involve navigating a mathematical ocean of many possible combinations of a set of basic elements.
Despite its historic win for machines last year, the original version of AlphaGo stood on the shoulders of many, uncredited, humans. The software “learned” about Go by ingesting data from 160,000 amateur games taken from an online Go community. After that initial boost, AlphaGo honed itself to be superhuman by playing millions more games against itself.
AlphaGo Zero is so-named because it doesn’t need human knowledge to get started, relying solely on that self-play mechanism. The software initially makes moves at random. But it is programmed to know when it has won or lost a game, and to adjust its play to favor moves that lead to victories.
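As a rough illustration of that self-play loop (a toy sketch under heavy simplification, not DeepMind's method; the "game", the update rule, and all numbers are assumptions), a random policy can bootstrap itself purely from win/loss feedback:

```python
# A toy sketch of the self-play idea described above (not DeepMind's code):
# start with a random policy, play games against yourself, and nudge the
# policy toward moves that appeared on the winning side. The "game" here is
# a deliberately trivial stand-in.

import random
from collections import defaultdict

# Toy game: each player picks a number 1-3; the higher number wins.
MOVES = [1, 2, 3]

# Policy: unnormalized preference weights per move, shared by both "players".
weights = defaultdict(lambda: 1.0)

def sample_move():
    """Sample a move in proportion to its current preference weight."""
    total = sum(weights[m] for m in MOVES)
    r = random.uniform(0, total)
    for m in MOVES:
        r -= weights[m]
        if r <= 0:
            return m
    return MOVES[-1]

for _ in range(5000):              # self-play loop
    a, b = sample_move(), sample_move()
    if a == b:
        continue                   # ignore drawn games in this toy
    winner_move, loser_move = max(a, b), min(a, b)
    weights[winner_move] *= 1.01   # reinforce moves from the winning side
    weights[loser_move] *= 0.99    # discourage moves from the losing side

# After training, the policy should strongly prefer the winning move (3).
print({m: round(weights[m], 2) for m in MOVES})
```

In AlphaGo Zero the "policy" is a deep network and the updates come from search-guided training targets rather than a bare multiplier, but the bootstrap-from-self-play shape is the same.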
* A bit more on that move that didn’t make sense (above):
SEOUL, SOUTH KOREA — The move didn't make sense to the humans packed into the sixth floor of Seoul's Four Seasons hotel. But the Google machine saw it quite differently. The machine knew the move wouldn't make sense to all those humans. Yes, it knew. And yet it played the move anyway, because this machine has seen so many moves that no human ever has.
In the second game of this week's historic Go match between Lee Sedol, one of the world's top players, and AlphaGo, an artificially intelligent computing system built by a small team of Google researchers, this surprisingly skillful machine made a move that flummoxed everyone from the throngs of reporters and photographers to the match commentators to, yes, Lee Sedol himself. "That's a very strange move," said one commentator, an enormously talented Go player in his own right. "I thought it was a mistake," said the other. And Lee Sedol, after leaving the match room for a spell, needed nearly fifteen minutes to settle on a response.
Fan Hui, the three-time European Go champion who lost five straight games to AlphaGo this past October, was also completely gobsmacked. "It's not a human move. I've never seen a human play this move," he said. But he also called the move "So beautiful. So beautiful." Indeed, it changed the path of play, and AlphaGo went on to win the second game. Then it won the third, claiming victory in the best-of-five match after a three-game sweep, before Lee Sedol clawed back a dramatic win in Game Four to save a rather large measure of human pride.
When I started this thread back in May of 2014, we were discussing computers beating the top humans at chess, Othello, Scrabble, backgammon, poker, Jeopardy and Go.
Five years later, we have to add Jenga to the list.
(Wikipedia) - Jenga is a game of physical skill created by Leslie Scott, and currently marketed by Hasbro. Players take turns removing one block at a time from a tower constructed of 54 blocks. Each block removed is then placed on top of the tower, creating a progressively taller and more unstable structure.
The game ends when the tower falls, or if any piece falls from the tower other than the piece being knocked out to move to the top. The winner is the last person to successfully remove and place a block.
Extracts from an article in The Times:
Not such a blockhead: robot masters Jenga
Tom Whipple, Science Editor
January 31 2019
It was 22 years ago that a computer beat Garry Kasparov at chess. And for two years robots have had the upper hand at Go. But there is one game of such fiendish complexity, such devilish intricacy, that until now it has held out from the rise of the machines: Jenga.
Not, though, for much longer. Computers are closing in on the last redoubt of human exceptionalism — the ability to very carefully move wooden blocks without knocking them over.
In a laboratory at Massachusetts Institute of Technology (MIT) is a robot that has taught itself to play Jenga, scientists announced yesterday.
Manual dexterity is a much-underrated skill. While humans value intellect, from a computer’s point of view folding a sheet is far harder to master.
Even the most sophisticated production lines still employ humans to perform the fiddly tasks. Playing Jenga is a good way to change this, because it is not only intricate but also combines several senses. It requires vision, to know where to find the blocks, but also requires touch; whether a block is loose is impossible to tell until you have given it a jiggle.
The researchers showed that they could train a robot to play the game at a rudimentary level. Before long it had taught itself to remove blocks as proficiently as a human. It is hoped that robots could apply this ability on the production line.
When AlphaZero, the finest chess computer, mastered the game it was able to play hundreds of thousands of games against itself in a matter of hours. This would have been significantly harder if, as was the case with the Jenga robot, someone had to manually reconstruct the game after each failure.
“Learning Jenga is so disruptive that we had to cut down the training from the standard millions of iterations to a few hundred,” Alberto Rodriguez, who co-authored the paper in Science Robotics, said. Professor Rodriguez said that the robot was still not at the level where it could defeat a human thinking strategically. “It does sufficiently well so that you could play and enjoy a nice game,” he said. “But it doesn’t play at superhuman level.”