Policy / Politique
The fee for tournament organizers advertising on ChessTalk is $20/event or $100/yearly unlimited for the year.
You can etransfer to Henry Lam at chesstalkforum at gmail dot com
Dark Knight / Le Chevalier Noir
General Guidelines
---- We need a French translation!
Some Basics
1. Under Board "Frequently Asked Questions" (FAQs) there are 3 sections dealing with General Forum Usage, User Profile Features, and Reading and Posting Messages. These deal with everything from Avatars to Your Notifications. Most general technical questions are covered there. Here is a link to the FAQs. https://forum.chesstalk.com/help
2. Consider using the SEARCH button if you are looking for information. You may find your question has already been answered in a previous thread.
3. If you've looked for an answer to a question, and not found one, then you should consider asking your question in a new thread. For example, there have already been questions and discussion regarding: how to do chess diagrams (FENs); crosstables that line up properly; and the numerous little “glitches” that every new site will have.
4. Read pinned or sticky threads, like this one, if they look important. This applies especially to newcomers.
5. Read the thread you're posting in before you post. There are a variety of ways to look at a thread. These are covered under “Display Modes”.
6. Thread titles: please provide some details in your thread title. This is useful for a number of reasons. It helps ChessTalk members to quickly skim the threads. It prevents duplication of threads. And so on.
7. Unnecessary thread proliferation (e.g., deliberately creating a new thread that duplicates existing discussion) is discouraged. Look to see if a thread on your topic may have already been started and, if so, consider adding your contribution to the pre-existing thread. However, starting new threads to explore side-issues that are not relevant to the original subject is strongly encouraged. A single thread on the Canadian Open, with hundreds of posts on multiple sub-topics, is no better than a dozen threads on the Open covering only a few topics. Use your good judgment when starting a new thread.
8. If and/or when sub-forums are created, please make sure to create threads in the proper place.
Debate
9. Give an opinion and back it up with a reason. Throwaway comments such as "Game X pwnz because my friend and I think so!" could be considered pointless at best, and inflammatory at worst.
10. Try to give your own opinions, not simply those copied and pasted from reviews or opinions of your friends.
Unacceptable behavior and warnings
11. In registering here at ChessTalk please note that the same or similar rules apply here as applied at the previous Boardhost message board. In particular, the following content is not permitted to appear in any messages:
* Racism
* Hatred
* Harassment
* Adult content
* Obscene material
* Nudity or pornography
* Material that infringes intellectual property or other proprietary rights of any party
* Material the posting of which is tortious or violates a contractual or fiduciary obligation you or we owe to another party
* Piracy, hacking, viruses, worms, or warez
* Spam
* Any illegal content
* Unapproved commercial banner advertisements or revenue-generating links
* Any link to or any images from a site containing any material outlined in these restrictions
* Any material deemed offensive or inappropriate by the Board staff
12. Users are welcome to challenge other points of view and opinions, but should do so respectfully. Personal attacks on others will not be tolerated. Posts and threads with unacceptable content can be closed or deleted altogether. Furthermore, a range of sanctions are possible - from a simple warning to a temporary or even a permanent banning from ChessTalk.
Helping to Moderate
13. 'Report' links (an exclamation mark inside a triangle) can be found in many places throughout the board. These links allow users to alert the board staff to anything which is offensive, objectionable or illegal. Please consider using this feature if the need arises.
Advice for free
14. You should exercise the same caution with Private Messages as you would with any public posting.
Umm..... The article headline is totally misleading. According to the article, here's all Watson does with respect to 'debating issues with you':
"The argument laid out gave the top three most relevant pros and cons to the issue, although no conclusions were drawn by Watson."
It's just a tool for searching: finding all relevant pro and con arguments and presenting them to you. It filters out all the crud. No debating whatsoever.
We really need to stop the hype with AI. The only way an AI can 'reason' an answer to a philosophical question would be if the various answers could be assigned numbers, just as a chess engine assigns numbers to chess positions. And where would those numbers come from? Well, from the programmer(s), of course.
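For illustration, the kind of "numbers" a chess engine assigns to positions can be sketched as a toy material count (the piece values are the textbook conventions, not taken from any particular engine; real evaluation functions add many more terms):

```python
# Toy sketch of how a chess engine reduces a position to a number.
# Uppercase letters = White pieces, lowercase = Black pieces.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def evaluate(position):
    """Score a position (given as a string of piece letters) as
    White material minus Black material."""
    score = 0
    for piece in position:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has an extra rook: the evaluation is +5 for White.
print(evaluate("KQRRkqr"))  # -> 5
```

The programmer chose those values; the machine just does the arithmetic, which is the point being made here.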
Only the rushing is heard...
Onward flies the bird.
Fergus Duniho, an expert on ethics and webmaster of The Chess Variant Pages, wrote about Watson, too:
"Basically, Watson can locate relevant texts on moral issues and summarize their main points, but it can't draw any conclusions. That's not moral reasoning. That's text processing."
The main problem with my idea of a chess variant is perhaps what Fergus alluded to earlier in a link I gave: experts on morality would need to be agreed upon and appointed (e.g. to put the Q & As in database devices), and not all moral questions have universally accepted answers. Plus, arguments about judging in other sports would be nothing compared to the more frequent and (at least at times) heated arguments about moral questions that could come up before a game is even finished, as Garland also alluded to. The idea is unworkable, due to the human nature of many adults at least; I might have realized that if my brain and body were not being baked by the weekend's heat here in Ottawa. I am kicking myself less now, since it was a nice try anyway, and Edison was happy to fail 99 times to succeed once.
In the spirit of Paul's last post, I got around to reading up on how the neural net technique worked with the program Giraffe (which taught itself to play at IM level in 72 hours), and I read that it was essential to feed it a database of over 1,000,000 human vs. human games, so that it could slowly begin to learn from good play. There's some hope there, since it can take decades for that many games of any new chess variant or board game to be played, and so the same neural net technique could not be applied until then. In any case, I am rather curious about Knightmare Chess, which Fergus has played and likes, and he's a good chess variant player, based on his website's Game Courier rating system, which counts game results across all the variants one plays.
I can also report I lost my only rated Sac Chess game, to a Mexican fellow with whom Fergus broke even in 4 games, if I recall correctly. I also lost 4 Sac Chess speed games, at varying time controls, to a Sac Chess playing program that a former chess engine programmer designed with his fairy chess game playing programming toolkit; however, the Mexican beat the program very easily (it must have been at a fast time control for the machine, since it dropped a piece early to a pawn fork). Thus, I've lost 5/5 games of my own invented Sac Chess :o. One thing I learned is that in Sac Chess open lines can really be murder, and a king can more often be safe in the centre than in chess, with connecting the rooks done in good time anyway.
Last edited by Kevin Pacey; Monday, 27th June, 2016, 03:47 PM.
Reason: Spelling
Anything that can go wrong will go wrong. Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer
...
The only way an AI can 'reason' an answer to a philosophical question would be if the various answers could be assigned numbers, just as a chess engine assigns numbers to chess positions. And where would those numbers come from? Well, from the programmer(s), of course.
That's what I assumed about driverless car programming: that it's all about the numbers game for a relatively small number of choices (e.g. what to collide with if a collision is unavoidable), and rather less about sophisticated moral choices (granted, no stakes are normally higher than life or death).
[edit: note also that I don't see how neural net technique, using some sort of a database of moral Q & As, could be used to have a computer teach itself all kinds of moral principles.]
Last edited by Kevin Pacey; Monday, 27th June, 2016, 03:30 PM.
Reason: Grammar
Anything that can go wrong will go wrong. Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer
Well it's all a reflection of the Turing test, isn't it? If you cannot distinguish between a computer and a human while interfacing on say, a message board, then it doesn't matter whether it really has "learned" morality.
BTW, the examples given regarding the cars are simply the obvious "simplistic" examples given to get the point across.
That's what I assumed about driverless car programming: that it's all about the numbers game for a relatively small number of choices (e.g. what to collide with if a collision is unavoidable), and rather less about sophisticated moral choices (granted, no stakes are normally higher than life or death).
[edit: note also that I don't see how neural net technique, using some sort of a database of moral Q & As, could be used to have a computer teach itself all kinds of moral principles.]
Neural net learning by a computer also involves numbers, or things that can be converted to numbers (e.g. letters of the alphabet for string sorting).
Since the human brain is considered to be a colossal neural net, with billions of nodes feeding back into each other, constantly 'rewiring' itself, one might wonder if somehow numbers are being used there as well. And if so, where would those numbers come from? Perhaps DNA.... which would imply we are genetically preconditioned to believe certain things that have no factual answers in our environment.
I suppose the great hope with AI is the quantum computer, although I haven't heard or seen anything that hinted such technology would be able to reason on its own without any programmer input of values (numbers) into it.
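As a toy illustration of "things that can be converted to numbers": before a neural net can process text at all, each letter has to become a number. The index encoding below is just one simple convention (real systems use richer encodings):

```python
# Toy illustration: text must become numbers before a neural net
# can process it. Here each letter maps to its alphabet index.
def encode(text):
    """Map 'a'..'z' to 0..25, ignoring anything that isn't a letter;
    this list of integers is the numeric form a net would see."""
    return [ord(ch) - ord('a') for ch in text.lower() if ch.isalpha()]

print(encode("moral"))  # -> [12, 14, 17, 0, 11]
```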
Only the rushing is heard...
Onward flies the bird.
Well it's all a reflection of the Turing test, isn't it? If you cannot distinguish between a computer and a human while interfacing on say, a message board, then it doesn't matter whether it really has "learned" morality.
BTW, the examples given regarding the cars are simply the obvious "simplistic" examples given to get the point across.
I get that you're giving it as an example, Garland, but while computers learned tic-tac-toe early on, it took decades for them to rule at chess. Can they ever be as good at complex morality, the way they eventually excelled at a complex game (chess)? It's still an open question; I suppose there are optimists and pessimists (from any viewpoint you like) on the question.
Btw, robots are coming along as far as being conversational, if one thinks of Turing's test. In terms of morals, they are [often] programmed not to kill [e.g. a human in a factory], I'd suppose.
Neural net learning by a computer also involves numbers, or things that can be converted to numbers (e.g. letters of the alphabet for string sorting).
Since the human brain is considered to be a colossal neural net, with billions of nodes feeding back into each other, constantly 'rewiring' itself, one might wonder if somehow numbers are being used there as well. And if so, where would those numbers come from? Perhaps DNA.... which would imply we are genetically preconditioned to believe certain things that have no factual answers in our environment.
I suppose the great hope with AI is the quantum computer, although I haven't heard or seen anything that hinted such technology would be able to reason on its own without any programmer input of values (numbers) into it.
Afaik, the only (but huge) practical advantage of quantum computing, if it ever proves effective, is that a quantum computer could execute far more tasks in the same amount of time than a conventional machine could.
[edit: Below is a link to a (short entry version) wikipedia discussion of quantum computing.]
In terms of morals, they are [often] programmed not to kill [e.g. a human in a factory], I'd suppose.
I think it's more accurate to state that no one programmed the robot to kill in the first place. It's actually difficult to program a machine to go out of its way to kill someone. More often it's a case of a robot performing a repetitive action without anyone thinking, "Hmm, what if a person gets in the way?" Sure, the robot killed the person, but so did the boulder that rolled down a mountain, crushing the man standing at the bottom. It isn't a moral act for either.
People keep bringing up how computers will not accomplish task X. In game playing it was checkers, chess, poker, Jeopardy, and now Go. Meanwhile, computers are getting better and better at natural language understanding. Think Siri. I read the posts about how Watson doesn't debate but only pulls out the reasons for and against a subject from massive information databases (the internet). The reality is that it does not take much additional programming to take that one step further. It wouldn't surprise me if the way they programmed Watson was deliberate. Bottom line, if it requires thought and reasoning, someone will eventually program it.
Humans' most impressive evidence of intelligence is their ability to generalize, to adapt to new situations. One can take a chess grandmaster and say, "Here's a knife. We'll drop you 200 miles from nowhere. Get out alive." The GM might not last more than a couple of weeks. But I bet he will last longer than a chess robot running Houdini. Compared to that test, I think programming a robot to answer "complex moral questions" is a walk in the park.
Garland, I'd like to respond to all that, but first I need to ask you: what do you know about how computer software actually works? Do you author any computer programs in any language? And if so, how much do you know about the underlying process, that is to say, assembly language and how the CPU processes commands in assembly language?
Only the rushing is heard...
Onward flies the bird.
For anyone's info, here's a link to a NY Times article re: worries about dangerous (industrial) robots, followed by a wikipedia link re: military robots:
I suppose I can give an example of a 'complex moral question' that's admittedly not that complex, but might still confuse a machine for some time to come. (It's based on something I heard on TV some years ago; the journalist in the question gave the wrong answer, as overheard by a platoon sergeant in the field, who promptly hit him.)
Q: You are an international journalist embedded with an army platoon in a war zone, dressed in your own different uniform, and you see that the enemy is just ahead, waiting in ambush; do you: 1) whisper to a soldier beside you, or 2) don't tell anyone, because you're a neutral journalist?
Anything that can go wrong will go wrong. Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer
Paul, I attended the University of Waterloo, with a BSc and MSc in physics. My first computer was a Sinclair ZX81, which I programmed in BASIC and assembler. I took courses in programming in Fortran and in C, and in basic electronics, so I know the basics of CPU architecture, as well as logic circuitry. I have written programs in assembler and C for the PC in DOS, as well as in Visual Basic. Things I programmed included a fractal calculator in 80x86 assembler, control of a robotic arm using a Commodore 64, and operation and analysis of data from an MRI machine in Microsoft QuickBASIC. I plead ignorance of object-oriented programming, so I know little about the likes of C++, Java and Python. So my skills are about 25 years out of date. I am a dinosaur. Hope that helps.
PS: I also read "Gödel, Escher, Bach" by Douglas Hofstadter twice at university. I highly recommend the book. I also read "The Emperor's New Mind" by Roger Penrose, which I considered sloppy and poorly thought out.
Garland, even if your skills are 25 years out of date and you don't know object-oriented programming, that is of little consequence with respect to this topic we are on. You definitely know how a CPU deals with software (object-oriented code, no matter how large or complex, still gets turned into assembly language). You definitely know that everything a CPU does basically comes down to numerical calculation performed on binary data. Even the assembly instructions themselves are nothing more than binary data.
So I want to reconcile this with the statement in your previous post, that "if it requires thought and reasoning, someone will eventually program it."
That statement cannot really be disputed, because thought and reasoning could easily be programmed right now: just use random numbers. Design a program that takes random numbers and turns those numbers into statements of logic. For example, a random number between 0.5 and 1.0 would be equated to the statement "Telling a lie is ok", and a random number between 0.0 and 0.4999999999999 would be equated to "Telling a lie is bad". Obviously, the output of this program would NOT be consistent.
But if consistency is desired, that can also be programmed.
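The program described above can be sketched in a few lines (the threshold and the two statements are just the ones from the example; seeding the generator is one simple way to get the "consistency" mentioned):

```python
import random

def moral_opinion(rng):
    """Turn a random number into a statement of logic, as described above."""
    x = rng.random()  # uniform in [0.0, 1.0)
    return "Telling a lie is ok" if x >= 0.5 else "Telling a lie is bad"

# Unseeded: the output can vary from run to run -- inconsistent 'beliefs'.
print(moral_opinion(random.Random()))

# Seeded: the same seed always yields the same answer -- consistency
# can indeed also be programmed.
print(moral_opinion(random.Random(42)))
```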
My point about AI in my earlier post was that any logical reasoning, any philosophical debating, done by a software program on a computing machine would have to be done on a numerical basis, and the numbers used have to come from somewhere. And it is the programmer, not the machine, who controls where the numbers come from and what is done with them. Therefore, if we preclude any use of random numbers of any kind (including from random.org, which uses natural phenomena to generate 'truly' random numbers), the moral reasoning done by any AI must be deterministic. There must exist some function f() that would spit out the exact same moral reasoning for every question.
I alluded that this might also be true of humans: that perhaps in our brains, everything we decide and believe also comes down to numbers, perhaps derived from our DNA. We will never be able to either prove or disprove this notion, the complexity is far beyond our own understanding. The one thing we can say is that humans don't use random numbers -- we don't keep changing our moral beliefs, and we aren't totally inconsistent in them (barring those with mental incapacities).
I'm tying together random numbers with moral reasoning deliberately. Nobody can really determine whether any set of numbers, no matter the size, is indisputably random. We do have criteria for things like distribution and other statistical attributes, but these only lead us to a conclusion that a particular set of numbers "seems" to be truly random (this is all documented on the random.org site).
Similarly, there is no way to prove that a set of moral beliefs held by any AI not using random numbers came from a non-deterministic process. Unless there is a paradigm shift in how software operates, every set of moral beliefs expressed by an AI that does not use random numbers must be deterministic, even if we don't know exactly how to express the function f() that is being used internally.
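A deterministic f() of the sort described, however crude, is easy to exhibit. The hash-based rule below is purely illustrative (the choice of rule is arbitrary, which is part of the point: determinism says nothing about the quality of the reasoning):

```python
import hashlib

def f(question):
    """A deterministic 'moral reasoner': the same question in always
    gives the same verdict out. The rule -- parity of the first byte
    of a SHA-256 digest -- is completely arbitrary."""
    digest = hashlib.sha256(question.encode("utf-8")).digest()
    return "permissible" if digest[0] % 2 == 0 else "impermissible"

# Every run, on every machine, yields the identical answer.
print(f("Is telling a lie ok?"))
```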
Therefore the mere fact that some moral reasoning is being done by an AI can never be used to demonstrate that the AI is any of the following: self-aware, independent, alive.
I'm not saying you were arguing otherwise, Garland, but I just thought this should be mentioned in this discussion. And you can see now why I asked about your knowledge of how CPU's work, because if you knew nothing about it, you might actually believe that there COULD BE some magic involved whereby a programmer just needs to discover the 'magic words' to create a completely independent, or self-aware, or alive AI.
And the corollary to all this is that no AI using current computing technology could ever become so 'intelligent' that it would determine all on its own (i.e. without being programmed to do so) that we humans need to be eradicated.
Only the rushing is heard...
Onward flies the bird.
The one thing we can say is that humans don't use random numbers -- we don't keep changing our moral beliefs, and we aren't totally inconsistent in them (barring those with mental incapacities).
Interestingly enough, I believe that the use of randomness is a critical part of what makes the human mind work. It is a random number of electrons at a synapse in our brain that triggers a desire to have strawberry ice cream instead of chocolate today. Some parts of our brain are hardwired much more than others, but when events bring a junction in our neural nets to a threshold level, it is all random probabilities that determine whether we are going to seek help or kill everyone in a classroom. I also think that this randomness does not require quantum computing. Chaotic behavior follows from deterministic rules. The influence of our environment gives enough random input to allow our brains - as well as a well-crafted AI - to be intelligent and creative.
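The point that chaotic behavior follows from deterministic rules can be seen in the logistic map, a standard textbook example: the rule is fully deterministic, yet at parameter r = 4.0 two nearly identical starting points quickly end up on completely different orbits:

```python
# Logistic map x -> r*x*(1-x): a fully deterministic rule that, at
# r = 4.0, behaves chaotically -- tiny differences in the starting
# point are amplified at every iteration.
def logistic_orbit(x, r=4.0, steps=50):
    """Iterate the map `steps` times from starting point x."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Same deterministic rule, starting points differing by one part
# in ten billion -- after 50 steps the orbits have decorrelated.
print(logistic_orbit(0.2), logistic_orbit(0.2 + 1e-10))
```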