Heterodox chess (chess variants) thread 2.0


  • #16
    Re: Heterodox chess (chess variants) thread 2.0

    Originally posted by Garland Best View Post
    ...Remember Watson, the Trivial Pursuit Champion? ... And yes, they also programmed it to debate morality issues with you. http://www.ibtimes.co.uk/ibm-superco...issues-1447413

    Umm..... The article headline is totally misleading. According to the article, here's all Watson does with respect to 'debating issues with you':

    "The argument laid out gave the top three most relevant pros and cons to the issue, although no conclusions were drawn by Watson."

    It's just a tool for searching, finding all relevant pro and con arguments, and presenting them to you. It filters out all the crud. No debating whatsoever.

    We really need to stop the hype with AI. The only way an AI can 'reason' an answer to a philosophical question would be if the various answers could be assigned numbers, just as a chess engine assigns numbers to chess positions. And where would those numbers come from? Well, from the programmer(s), of course.
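
    To make that concrete, here's a toy Python sketch of the kind of hand-tuned material evaluation a chess engine starts from; every number in it is a programmer's choice (the values below are common textbook figures used purely for illustration, not any particular engine's):

# Toy, hand-tuned evaluation: every number below is a programmer's choice,
# not something the engine "reasons" out for itself.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def evaluate(board):
    """board maps squares to piece letters, e.g. {"e4": "P", "e5": "p"}.
    Uppercase = White, lowercase = Black. Returns a score in centipawns,
    positive when White is materially better."""
    score = 0
    for square, piece in board.items():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# Example: equal kings, but White has an extra knight.
print(evaluate({"e1": "K", "e8": "k", "g1": "N"}))  # 320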
    Only the rushing is heard...
    Onward flies the bird.

    Comment


    • #17
      Re: Heterodox chess (chess variants) thread 2.0

      Fergus Duniho, an expert on ethics and webmaster of The Chess Variant Pages, wrote about Watson too:

      "Basically, Watson can locate relevant texts on moral issues and summarize their main points, but it can't draw any conclusions. That's not moral reasoning. That's text processing."

      The main problem with my idea of a chess variant is perhaps what Fergus alluded to earlier in a link I gave: that experts on morality would need to be agreed upon and appointed (e.g. to put the Q & As into database devices), and not all moral questions have universally accepted answers. Plus, arguments about judging in other sports would be nothing compared to the more frequent and (at times) heated ones about moral questions that could come up before a game is even finished, as Garland also alluded to. The idea is unworkable, due to the human nature of many adults at least; I might have realized that if my brain and body were not being baked by the weekend's heat here in Ottawa. I am kicking myself less now, since it was a nice try anyway, and Edison was happy to fail 99 times to succeed once.

      In the spirit of Paul's last post, I got around to reading up on how the neural net technique worked with the program Giraffe (which taught itself to play at IM level in 72 hours), and I read that it was essential to feed it a database of over 1,000,000 human vs. human games so that it could slowly learn from good play. There's some hope there, since it can take decades for that many games of any new chess variant or board game to be played, and so the same neural net technique could not be applied until then. In any case, I am rather curious about Knightmare Chess, which Fergus has played and likes; he's a good chess variant player, based on his website's Game Courier rating system, which counts game results across all the variants one plays.
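
      In case anyone wants a rough picture of what "learning from a games database" means in practice, here is a very stripped-down Python sketch that learns piece values from game results; it is a single-neuron toy, nowhere near Giraffe's actual network, and the features, labels and learning rate are all assumptions made purely for illustration:

import math

PIECES = ["P", "N", "B", "R", "Q"]

def predict(weights, counts):
    # counts[i] = White's count minus Black's count for piece type i
    score = sum(w * c for w, c in zip(weights, counts))
    return 1.0 / (1.0 + math.exp(-score))      # squash to a win probability

def train(examples, epochs=200, lr=0.1):
    """examples: list of (counts, result) pairs taken from finished games,
    with result 1.0 for a White win, 0.0 for a loss, 0.5 for a draw."""
    weights = [0.0] * len(PIECES)
    for _ in range(epochs):
        for counts, result in examples:
            error = predict(weights, counts) - result
            for i, c in enumerate(counts):
                weights[i] -= lr * error * c   # simple gradient step
    return weights

# Tiny fake "database": up a queen usually wins, down a rook usually loses.
examples = [([0, 0, 0, 0, 1], 1.0), ([0, 0, 0, -1, 0], 0.0), ([1, 0, 0, 0, 0], 0.5)]
print([round(w, 2) for w in train(examples)])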

      I can also report I lost my only rated Sac Chess game, to a Mexican fellow with whom Fergus broke even in 4 games, if I recall correctly. I also lost 4 Sac Chess speed games, at varying time controls, to a Sac Chess-playing program that a former chess engine programmer designed with his fairy chess game-playing programming toolkit; however, the Mexican beat the program very easily (it must have been at a fast time control for the machine, since it dropped a piece early to a pawn fork). Thus, I've lost 5/5 games of my own invented Sac Chess :o. One thing I learned is that in Sac Chess open lines can really be murder, and a king can more often be safe in the centre than in chess, with connecting the rooks done in good time anyway.
      Last edited by Kevin Pacey; Monday, 27th June, 2016, 03:47 PM. Reason: Spelling
      Anything that can go wrong will go wrong.
      Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

      Comment


      • #18
        Re: Heterodox chess (chess variants) thread 2.0

        Originally posted by Paul Bonham View Post
        ...
        The only way an AI can 'reason' an answer to a philosophical question would be if the various answers could be assigned numbers, just as a chess engine assigns numbers to chess positions. And where would those numbers come from? Well, from the programmer(s), of course.
        That's what I assumed about driverless car programming: that it's all a numbers game over a relatively small number of choices (e.g. what to collide with if a collision is unavoidable), and rather less about sophisticated moral choices (granted, no stakes are normally higher than life or death).

        [edit: note also that I don't see how the neural net technique, using some sort of database of moral Q & As, could be used to have a computer teach itself all kinds of moral principles.]
        Last edited by Kevin Pacey; Monday, 27th June, 2016, 03:30 PM. Reason: Grammar
        Anything that can go wrong will go wrong.
        Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

        Comment


        • #19
          Re: Heterodox chess (chess variants) thread 2.0

          Well, it's all a reflection of the Turing test, isn't it? If you cannot distinguish between a computer and a human while interfacing on, say, a message board, then it doesn't matter whether it really has "learned" morality.

          BTW, the examples given regarding the cars are simply the obvious "simplistic" examples given to get the point across.

          Comment


          • #20
            Re: Heterodox chess (chess variants) thread 2.0

            Originally posted by Kevin Pacey View Post
            That's what I assumed about driverless car programming: that it's all a numbers game over a relatively small number of choices (e.g. what to collide with if a collision is unavoidable), and rather less about sophisticated moral choices (granted, no stakes are normally higher than life or death).

            [edit: note also that I don't see how the neural net technique, using some sort of database of moral Q & As, could be used to have a computer teach itself all kinds of moral principles.]

            Neural net learning by a computer also involves numbers, or things that can be converted to numbers (e.g. letters of the alphabet encoded for string sorting).
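
            As a tiny Python illustration of that conversion step (the particular encoding here is just an arbitrary choice for the example, not how any particular engine does it):

# Symbols have to become numbers before any "learning" happens.
position = "rnbqkbnr/pppppppp"          # start of a FEN-style board string
encoded = [ord(ch) for ch in position]  # 'r' -> 114, 'n' -> 110, ...
print(encoded[:5])                      # [114, 110, 98, 113, 107]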

            Since the human brain is considered to be a colossal neural net, with billions of nodes feeding back into each other, constantly 'rewiring' itself, one might wonder if somehow numbers are being used there as well. And if so, where would those numbers come from? Perhaps DNA.... which would imply we are genetically preconditioned to believe certain things that have no factual answers in our environment.

            I suppose the great hope with AI is the quantum computer, although I haven't heard or seen anything that hinted such technology would be able to reason on its own without any programmer input of values (numbers) into it.
            Only the rushing is heard...
            Onward flies the bird.

            Comment


            • #21
              Re: Heterodox chess (chess variants) thread 2.0

              Originally posted by Garland Best View Post
               Well, it's all a reflection of the Turing test, isn't it? If you cannot distinguish between a computer and a human while interfacing on, say, a message board, then it doesn't matter whether it really has "learned" morality.

              BTW, the examples given regarding the cars are simply the obvious "simplistic" examples given to get the point across.
              I get that you're giving it as an example, Garland, but while computers learned tic-tac-toe early on, it took decades for them to rule at chess. Will they someday be as good at complex morality as they eventually became at a complex game like chess? That's still an open question; I suppose there are optimists and pessimists (from any viewpoint you like) on it.

              Btw, robots are coming along as far as being conversational goes, if one thinks of Turing's test. In terms of morals, they are [often] programmed not to kill [e.g. a human in a factory], I'd suppose.

              [edit: below is a wikipedia link re: robots.]

              https://en.wikipedia.org/wiki/Robot
              Last edited by Kevin Pacey; Monday, 27th June, 2016, 06:53 PM. Reason: Spelling
              Anything that can go wrong will go wrong.
              Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

              Comment


              • #22
                Re: Heterodox chess (chess variants) thread 2.0

                Originally posted by Paul Bonham View Post
                Neural net learning by a computer also involves numbers, or things that can be converted to numbers (e.g. letters of the alphabet encoded for string sorting).

                Since the human brain is considered to be a colossal neural net, with billions of nodes feeding back into each other, constantly 'rewiring' itself, one might wonder if somehow numbers are being used there as well. And if so, where would those numbers come from? Perhaps DNA.... which would imply we are genetically preconditioned to believe certain things that have no factual answers in our environment.

                I suppose the great hope with AI is the quantum computer, although I haven't heard or seen anything that hinted such technology would be able to reason on its own without any programmer input of values (numbers) into it.
                Afaik, the only (but huge) practical advantage of quantum computing, if it's ever made to work, is that certain kinds of tasks could be executed far faster on a quantum computer than on a conventional machine.

                [edit: Below is a link to a (short entry version) wikipedia discussion of quantum computing.]

                https://simple.wikipedia.org/wiki/Quantum_computer
                Last edited by Kevin Pacey; Monday, 27th June, 2016, 06:41 PM.
                Anything that can go wrong will go wrong.
                Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

                Comment


                • #23
                  Re: Heterodox chess (chess variants) thread 2.0

                  Originally posted by Kevin Pacey View Post
                  In terms of morals, they are [often] programmed not to kill [e.g. a human in a factory], I'd suppose.
                  I think it's more accurate to state that no one programmed the robot to kill in the first place. It's actually difficult to program a machine to go out of its way to kill someone. More often it's a case of a robot performing a repetitive action without someone thinking, "Hmm, what if a person gets in the way?" Sure, the robot killed the person, but so did the boulder that rolled down a mountain and crushed the man standing at the bottom. It isn't a moral act in either case.

                  People keep bringing up how computers will not accomplish task X. In game playing it was checkers, chess, poker, Jeopardy, and now Go. Meanwhile, computers are getting better and better at natural language understanding. Think Siri. I read the posts about how Watson doesn't debate but only pulls the reasons for and against a subject out of massive information databases (the internet). The reality is that it does not take much additional programming to take that one step further. It wouldn't surprise me if the way they programmed Watson was deliberate. Bottom line: if it requires thought and reasoning, someone will eventually program it.

                  Humans' most impressive evidence of intelligence is their ability to generalize, to adapt to new situations. One can take a chess grandmaster and say "Here's a knife. We'll drop you 200 miles from nowhere. Get out alive." The GM might not last more than a couple of weeks, but I bet he would last longer than a chess robot running Houdini. Compared to that test, I think programming a robot to answer "complex moral questions" is a walk in the park.

                  Comment


                  • #24
                    Re: Heterodox chess (chess variants) thread 2.0

                    For anyone's info, here's a (short entry version) wikipedia link re: neural networks:

                    https://simple.wikipedia.org/wiki/Neural_network
                    Anything that can go wrong will go wrong.
                    Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

                    Comment


                    • #25
                      Re: Heterodox chess (chess variants) thread 2.0

                      Originally posted by Garland Best View Post
                      ......People keep bringing up how computers will not accomplish task X. In game playing it was checkers, chess, poker, Jeopardy, and now Go. Meanwhile, computers are getting better and better at natural language understanding. Think Siri. I read the posts about how Watson doesn't debate but only pulls the reasons for and against a subject out of massive information databases (the internet). The reality is that it does not take much additional programming to take that one step further. It wouldn't surprise me if the way they programmed Watson was deliberate. Bottom line: if it requires thought and reasoning, someone will eventually program it.

                      Humans' most impressive evidence of intelligence is their ability to generalize, to adapt to new situations. One can take a chess grandmaster and say "Here's a knife. We'll drop you 200 miles from nowhere. Get out alive." The GM might not last more than a couple of weeks, but I bet he would last longer than a chess robot running Houdini. Compared to that test, I think programming a robot to answer "complex moral questions" is a walk in the park.

                      Garland, I'd like to respond to all that, but first I need to ask you: what do you know about how computer software actually works? Do you author any computer programs in any language? And if so, how much do you know about the underlying process, that is to say, assembly language and how the CPU processes commands in assembly language?
                      Only the rushing is heard...
                      Onward flies the bird.

                      Comment


                      • #26
                        Re: Heterodox chess (chess variants) thread 2.0

                        For anyone's info, here's a link to a NY Times article re: worries about dangerous (industrial) robots, followed by a wikipedia link re: military robots:

                        http://www.nytimes.com/2014/06/17/up...king.html?_r=0

                        https://en.wikipedia.org/wiki/Military_robot

                        I'd also have to wonder how much 'concern' over a person or an animal that a domestic robot might be programmed to have:

                        https://en.wikipedia.org/wiki/Domestic_robot
                        Last edited by Kevin Pacey; Monday, 27th June, 2016, 09:35 PM. Reason: Adding link
                        Anything that can go wrong will go wrong.
                        Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

                        Comment


                        • #27
                          Re: Heterodox chess (chess variants) thread 2.0

                          I suppose I can give an example of a 'complex moral question' that's admittedly not that complex, but might still confuse a machine for some time to come (it's based on something I heard on TV some years ago; the journalist in question gave the wrong answer, as overheard by a platoon sergeant in the field, who promptly hit him):

                          Q: You are an international journalist embedded with an army platoon in a war zone, dressed in your own distinct uniform, and you see that the enemy is just ahead, waiting in ambush; do you: 1) whisper a warning to a soldier beside you, or 2) tell no one, because you're a neutral journalist?
                          Anything that can go wrong will go wrong.
                          Murphy's law, by Edward A. Murphy Jr., USAF, Aerospace Engineer

                          Comment


                          • #28
                            Re: Heterodox chess (chess variants) thread 2.0

                            Paul, I attended the University of Waterloo, earning a BSc and an MSc in physics. My first computer was a Sinclair ZX81, which I programmed in BASIC and assembler. I took courses in Fortran and C programming, and in basic electronics, so I know the basics of CPU architecture as well as logic circuitry. I have written programs in assembler and C for the PC under DOS, as well as in Visual Basic. Things I programmed included a fractal calculator in 80x86 assembler, control of a robotic arm using a Commodore 64, and the operation and analysis of data from an MRI machine in Microsoft QuickBASIC. I plead ignorance of object-oriented programming, so stuff like C++, Java and Python I know little about. So my skills are about 25 years out of date. I am a dinosaur. Hope that helps.

                            PS: I also read "Gödel, Escher, Bach" by Douglas Hofstadter twice in university. I highly recommend the book. I also read "The Emperor's New Mind" by Roger Penrose, which I considered sloppy and poorly thought out.

                            Comment


                            • #29
                              Re: Heterodox chess (chess variants) thread 2.0

                              Originally posted by Garland Best View Post
                              Paul, I attended the University of Waterloo, earning a BSc and an MSc in physics. My first computer was a Sinclair ZX81, which I programmed in BASIC and assembler. I took courses in Fortran and C programming, and in basic electronics, so I know the basics of CPU architecture as well as logic circuitry. I have written programs in assembler and C for the PC under DOS, as well as in Visual Basic. Things I programmed included a fractal calculator in 80x86 assembler, control of a robotic arm using a Commodore 64, and the operation and analysis of data from an MRI machine in Microsoft QuickBASIC. I plead ignorance of object-oriented programming, so stuff like C++, Java and Python I know little about. So my skills are about 25 years out of date. I am a dinosaur. Hope that helps.

                              PS: I also read "Gödel, Escher, Bach" by Douglas Hofstadter twice in university. I highly recommend the book. I also read "The Emperor's New Mind" by Roger Penrose, which I considered sloppy and poorly thought out.

                              Garland, even if your skills are 25 years out of date and you don't know object-oriented programming, that is of little consequence with respect to this topic we are on. You definitely know how a CPU deals with software (object-oriented code, no matter how large or complex, still gets turned into assembly language). You definitely know that everything a CPU does basically comes down to numerical calculation performed on binary data. Even the assembly instructions themselves are nothing more than binary data.
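
                              To put a toy example behind that last point, here is a made-up mini instruction set in Python (not real x86) in which the 'program' itself is nothing but a list of numbers that a loop interprets:

# Invented mini instruction set: 1 = LOAD value, 2 = ADD value, 9 = HALT.
def run(program):
    accumulator = 0
    pc = 0                       # program counter
    while True:
        opcode = program[pc]
        if opcode == 1:          # LOAD immediate value
            accumulator = program[pc + 1]
            pc += 2
        elif opcode == 2:        # ADD immediate value
            accumulator += program[pc + 1]
            pc += 2
        elif opcode == 9:        # HALT, return the result
            return accumulator

# "LOAD 5; ADD 7; HALT" is stored purely as numbers:
print(run([1, 5, 2, 7, 9]))      # 12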

                              So I want to reconcile this with the statement in your previous post, that "Bottom line: if it requires thought and reasoning, someone will eventually program it."

                              That statement cannot really be disputed, because thought and reasoning could easily be programmed right now: just use random numbers. Design a program that takes random numbers and turns them into statements of logic. For example, a random number between 0.5 and 1.0 would be equated to the statement "Telling a lie is ok", and a random number between 0.0 and 0.4999999999999 would be equated to "Telling a lie is bad". Obviously, the output of this program would NOT be consistent.

                              But if consistency is desired, that can also be programmed.
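
                              Here is a minimal Python sketch of the program just described, with an optional fixed seed as one obvious way to get the consistent version (the function name and threshold are made up for the example):

import random

def random_moral_reasoner(question, seed=None):
    """Return a 'moral judgement' derived from nothing but a random draw.
    Fixing the seed per question makes the output consistent between runs."""
    rng = random.Random(seed)
    x = rng.random()                     # uniform number in [0.0, 1.0)
    verdict = "ok" if x >= 0.5 else "bad"
    return f"{question}: {verdict}"

print(random_moral_reasoner("Telling a lie"))           # varies run to run
print(random_moral_reasoner("Telling a lie", seed=42))  # same answer every run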

                              My point about AI in my earlier post was that any logical reasoning, any philosophical debating, done by a software program on a computing machine would have to be done on a numerical basis, and the numbers used have to come from somewhere. And it is the programmer, not the machine, who controls where the numbers come from and what is done with them. Therefore, if we preclude any use of random numbers of any kind (including from random.org, which uses natural phenomena to generate 'truly' random numbers), the moral reasoning done by any AI must be deterministic. There must exist some function f() that would spit out the exact same moral reasoning for every question.
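
                              Put another way, strip out the randomness and what's left behaves like a fixed lookup; a hypothetical sketch of such an f() could be as simple as:

# A hypothetical deterministic f(): same question in, same "reasoning" out.
ANSWERS = ["permissible", "impermissible", "depends on the consequences"]

def f(question):
    digest = sum(ord(ch) for ch in question)   # crude, but fully deterministic
    return ANSWERS[digest % len(ANSWERS)]

print(f("Is telling a lie wrong?"))            # identical output on every run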

                              I alluded to the possibility that this might also be true of humans: that perhaps in our brains, everything we decide and believe also comes down to numbers, perhaps derived from our DNA. We will never be able to either prove or disprove this notion; the complexity is far beyond our own understanding. The one thing we can say is that humans don't use random numbers -- we don't keep changing our moral beliefs, and we aren't totally inconsistent in them (barring those with mental incapacities).

                              I'm tying random numbers and moral reasoning together deliberately. Nobody can really determine whether any set of numbers, no matter its size, is indisputably random. We do have criteria for things like distribution and other statistical attributes, but these only lead us to the conclusion that a particular set of numbers "seems" to be truly random (this is all documented on the random.org site).
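
                              The simplest of those statistical criteria is just a frequency check; here is a minimal Python sketch of one such test (a single check like this is illustrative only, nothing like the full battery random.org describes):

def looks_balanced(bits, tolerance=0.05):
    """Monobit-style check: roughly half the bits should be ones.
    Passing it is necessary, but nowhere near sufficient, for randomness."""
    proportion = sum(bits) / len(bits)
    return abs(proportion - 0.5) <= tolerance

print(looks_balanced([1, 0, 1, 1, 0, 0, 1, 0] * 100))  # True  (balanced)
print(looks_balanced([1] * 800))                       # False (clearly biased)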

                              Similarly, there is no way to prove that a set of moral beliefs held by any AI not using random numbers came from a non-deterministic process. Unless there is a paradigm shift in how software operates, every set of moral beliefs expressed by an AI that does not use random numbers must be deterministic, even if we don't know exactly how to express the function f() that is being used internally.

                              Therefore the mere fact that some moral reasoning is being done by an AI can never be used to demonstrate that the AI is any of the following: self-aware, independent, alive.

                              I'm not saying you were arguing otherwise, Garland, but I just thought this should be mentioned in this discussion. And you can see now why I asked about your knowledge of how CPUs work, because if you knew nothing about it, you might actually believe that there COULD BE some magic involved whereby a programmer just needs to discover the 'magic words' to create a completely independent, or self-aware, or alive AI.

                              And the corollary to all this is that no AI using current computing technology could ever become so 'intelligent' that it would determine all on its own (i.e. without being programmed to do so) that we humans need to be eradicated.
                              Only the rushing is heard...
                              Onward flies the bird.

                              Comment


                              • #30
                                Re: Heterodox chess (chess variants) thread 2.0

                                Originally posted by Paul Bonham View Post
                                The one thing we can say is that humans don't use random numbers -- we don't keep changing our moral beliefs, and we aren't totally inconsistent in them (barring those with mental incapacities).
                                Interestingly enough, I believe that the use of randomness is a critical part of what makes the human mind work. It is a random number of electrons at a synapse in our brain that triggers a desire to have strawberry ice cream instead of chocolate today. Some parts of our brain are hardwired much more than others, but when events bring a junction in our neural nets to a threshold level, it is all random probabilities that determine whether we are going to seek help or kill everyone in a classroom. I also think that this randomness does not require quantum computing. Chaotic behavior follows from deterministic rules. The influence of our environment gives enough random input to allow our brains - as well as a well-crafted AI - to be intelligent and creative.
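
                                The point that chaotic behavior can follow from purely deterministic rules shows up in something as small as the logistic map; a short Python sketch (r = 3.9 is a standard parameter choice inside the chaotic regime):

# Logistic map: x_{n+1} = r * x_n * (1 - x_n).  Fully deterministic, yet in
# the chaotic regime two almost identical starting points diverge quickly.
def logistic_orbit(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)                 # starts one millionth away
print(round(a[-1], 4), round(b[-1], 4))      # typically very different by now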

                                Comment
