One Giant Step for a Chess-Playing Machine

  • Sid Belzberg
    replied
    Originally posted by Peter McKillop
    Is there any research suggesting that a quality like altruism is programmed rather than learned?
    Yes

    Genes underlying altruism

    Graham J. Thompson, Peter L. Hurd, and Bernard J. Crespi
    ABSTRACT

    William D. Hamilton postulated the existence of ‘genes underlying altruism’, under the rubric of inclusive fitness theory, a half-century ago. Such genes are now poised for discovery. In this article, we develop a set of intuitive criteria for the recognition and analysis of genes for altruism and describe the first candidate genes affecting altruism from social insects and humans. We also provide evidence from a human population for genetically based trade-offs, underlain by oxytocin-system polymorphisms, between alleles for altruism and alleles for non-social cognition. Such trade-offs between self-oriented and altruistic behaviour may influence the evolution of phenotypic diversity across all social animals.



  • Peter McKillop
    replied
    Hi Garland. Some questions (just curious):
    1. When you say "uses the concept" you mean conjecture or speculation by Dawkins, right? Nothing proven; 'just' some thought-provoking material from a brilliant thinker?
    2. Given the amount of genetic material we have in common with every other human on the planet (99.9%+ seems to be a widely accepted figure), I wonder why Dawkins thought there would be something more genetically relevant in giving up one's life to save 100 distant relatives (aren't we all at least distant relatives?) as opposed to 100 randomly selected humans?
    3. If altruism is a learned behaviour, rather than a programmed behaviour at the genetic level, then it seems to me that Paul's thinking is more correct. Is there any research suggesting that a quality like altruism is programmed rather than learned?



  • Garland Best
    replied
    Originally posted by Paul Bonham View Post


    What, you're preaching evolution now? I thought Darwin was all about survival of the fittest, not survival of the beneficiaries of altruism.

    Altruism is random. The beneficiaries of altruism could be those least likely to secure the survival of the species, while those doing the sacrificing could be the fittest members of the species.
    You need to read "The Selfish Gene" by Dawkins. In it he uses the concept that the organism is just a carrier for the gene to explain how altruism can be a genetic advantage. My saving 100 distant relations at the cost of my own life improves the chances that my genetic traits go on, because it is quite likely that at least one of those persons carries one of my traits. Altruism is not random. You are most likely to show altruism to your immediate family, then your community, then your ethnic group, then humanity as a whole, then other species.
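
    For what it's worth, the arithmetic behind that claim is Hamilton's rule: an altruistic act is favoured when rB > C. A minimal sketch, using the standard textbook relatedness figure for first cousins (r = 0.125); the numbers are illustrative, not anything specific from Dawkins's book:

    ```python
    # Hamilton's rule: kin selection favours an altruistic act when
    # r * B > C, where r is genetic relatedness, B the benefit to the
    # recipients, and C the cost to the altruist (in reproductive success).

    def altruism_favoured(r: float, benefit: float, cost: float) -> bool:
        """Return True if kin selection favours the sacrifice."""
        return r * benefit > cost

    # Giving up your own life (cost = 1) to save 100 first cousins
    # (r = 0.125 each): 0.125 * 100 = 12.5 > 1, so the genes come out ahead.
    print(altruism_favoured(r=0.125, benefit=100, cost=1))  # True
    ```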

    Not sure why you think it odd that I accept the scientific evidence for evolution.



  • Bob Armstrong
    replied
    As I see it, evolution (Survival of the Best, not the Fittest in the physically strong sense) depends on two human traits:

    1. Self-Interest
    2. Cooperation.

    To my limited knowledge, the second has been relatively ignored in academic discourse. I believe it is the dominant factor in human nature.

    This thread is becoming more tangentially chess-related, as does happen from time to time here... because chess players do have a life and do like discussing it from time to time with other chess players... I know it drives some CT members bonkers.

    Here's a chess-related save: For AI chess, empathy is not required.

    Here's the non-chess follow-up: the need for empathy in AI programming, to generate "Human" evaluation in AI decision-making, is a critical issue if we wish to discuss AI beyond chess.

    I am unsure about the opinion that an AI program could, on its own, come to the conclusion that empathy/altruism had to be a dominant factor in human decision-making.

    Bob A



  • Paul Bonham
    replied
    Originally posted by Garland Best View Post
    And how did our brains develop empathy? Altruism is evolutionarily advantageous for the species as a whole. Our organic neural nets have simply been refined over millions of generations of Monte-Carlo simulations in real life. If the goals of the programming are such that altruism helps, then altruism will develop naturally on its own.

    We will probably first see it in team-based strategy games, where one AI on a team of independent AIs will "tank" so the other players on the team can reach the goal. It will die/lose, so others will win.

    What, you're preaching evolution now? I thought Darwin was all about survival of the fittest, not survival of the beneficiaries of altruism.

    Altruism is random. The beneficiaries of altruism could be those least likely to secure the survival of the species, while those doing the sacrificing could be the fittest members of the species.



  • Sid Belzberg
    replied
    Originally posted by Garland Best View Post
    And how did our brains develop empathy? Altruism is evolutionarily advantageous for the species as a whole. Our organic neural nets have simply been refined over millions of generations of Monte-Carlo simulations in real life. If the goals of the programming are such that altruism helps, then altruism will develop naturally on its own.

    We will probably first see it in team-based strategy games, where one AI on a team of independent AIs will "tank" so the other players on the team can reach the goal. It will die/lose, so others will win.
    Non-DNA-based evolution is a fascinating idea. I have been thinking about it since the late 1980s, when I started studying genetic algorithms, which are conceptually not that dissimilar from Monte Carlo tree search. Personally, I think AI will probably evolve over time to the point that machine-based intelligence concludes that human beings are far too destructive a force on this planet, and that will be the end of us.
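
    For readers who haven't met them, here is a minimal genetic-algorithm sketch; the bit-string target is a toy problem invented purely for illustration:

    ```python
    import random

    TARGET = [1] * 20  # toy goal: evolve a string of twenty 1-bits

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    # Variation plus selection, generation after generation: the same loop
    # DNA-based evolution runs, minus the DNA.
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:25]  # the fitter half survives
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(25)]
        population = parents + children
        if fitness(population[0]) == len(TARGET):
            break

    print("generations:", generation, "best fitness:", fitness(max(population, key=fitness)))
    ```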

    I have actually worked on big-data analysis using Bayesian techniques to identify empathetic statements in things like tweets from politicians. At the time I was only able to get approximately 93% accuracy because my training set was too small, but that is not a bad result all the same. We have also used AI to identify mental models on various subjects within selected populations, and have been able to come back with recommendations on how to govern communications accordingly. So programming empathy into AI is very possible, but it will not solve much, as all kinds of companies and organizations are doing AI research that is not at all coordinated. Below is a link to a company that I helped my brother-in-law with in regard to this type of AI work.
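
    A minimal sketch of that kind of classifier, using scikit-learn's naive Bayes. The handful of labelled tweets below are invented placeholders (a real run needs thousands of labelled examples), not data from that project:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented placeholder training data.
    tweets = [
        "My heart goes out to the families affected by the flood",
        "We stand with the workers who lost their jobs today",
        "GDP grew 2.1 percent in the third quarter",
        "The committee will reconvene on Tuesday",
    ]
    labels = ["empathetic", "empathetic", "neutral", "neutral"]

    # Bag-of-words features feeding a multinomial naive Bayes classifier.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(tweets, labels)

    print(model.predict(["Our thoughts are with everyone hit by the storm"]))
    ```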

    Last edited by Sid Belzberg; Friday, 28th December, 2018, 02:02 PM.



  • Garland Best
    replied
    And how did our brains develop empathy? Altruism is evolutionarily advantageous for the species as a whole. Our organic neural nets have simply been refined over millions of generations of Monte-Carlo simulations in real life. If the goals of the programming are such that altruism helps, then altruism will develop naturally on its own.

    We will probably first see it in team-based strategy games, where one AI on a team of independent AIs will "tank" so the other players on the team can reach the goal. It will die/lose, so others will win.
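
    A toy illustration of why that would happen (the payoff numbers are invented): when the training signal is a single shared team reward, "tanking" can simply be the reward-maximizing move, so a reward-driven learner will find it:

    ```python
    # One shared reward for the whole team. Agent 1 chooses "safe" or "tank".
    # Invented numbers: tanking costs agent 1 its own game but raises the
    # team's chance of reaching the goal from 40% to 90%.
    GOAL_VALUE = 10.0
    expected_team_reward = {
        "safe": 0.4 * GOAL_VALUE,  # 4.0
        "tank": 0.9 * GOAL_VALUE,  # 9.0
    }

    best = max(expected_team_reward, key=expected_team_reward.get)
    print(best)  # "tank": the sacrifice maximizes the shared objective
    ```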



  • Bob Armstrong
    replied
    Erik: "The AI is only a tool."

    Me: I disagree...

    It will become an equal (though intellectually superior)... a human-created new planetary species, with its own independence and rights.

    The issue is whether it will exercise judgment according to cold Greek logic, or according to the complex human interplay of intelligence and empathy.

    Bob A



  • Erik Malmsten
    replied
    Originally posted by Bob Armstrong View Post
    AlphaInfinity must be programmed with human empathy.

    If it is not, then it will be unable to accurately duplicate the general decision-making of humans regarding human events and interactions.

    In that case, the human will have to decide if he/she knows better than a clearly superior intelligence as to what is better for humanity and human nature.

    Bob A
    Why, Bob? The AI is only a tool. Humans will continue to make inferior decisions on who will benefit from the AI. Empathy hasn't been built into GNP or programs to make profits on the stock market. How would empathy be measured? Empathy for stockholders? Employees? One-industry towns? Customers? Babies? Grandparents? Disabled people? The homeless? The environment? Animals? Math whizzes? OK, maybe not politicians. If a decision is good for urban Toronto but isn't for rural Alberta, who will the AI have more empathy for?

    Empathy isn't needed for a chess game, although we do have emotional responses to aesthetically pleasing patterns and admire efficient series of moves to mate.



  • Bob Armstrong
    replied
    AlphaInfinity must be programmed with human empathy.

    If it is not, then it will be unable to accurately duplicate the general decision-making of humans regarding human events and interactions.

    In that case, the human will have to decide if he/she knows better than a clearly superior intelligence as to what is better for humanity and human nature.

    Bob A



  • Wayne Komer
    started a topic One Giant Step for a Chess-Playing Machine

    One Giant Step for a Chess-Playing Machine

    December 26, 2018

    This is the title of an essay in The New York Times.

    The subtitle:

    The stunning success of AlphaZero, a deep-learning algorithm, heralds a new age of insight — one that, for humans, may not last long.

    It is by Steven Strogatz, a professor of mathematics at Cornell and author of the forthcoming “Infinite Powers: How Calculus Reveals the Secrets of the Universe,” from which this essay is adapted.

    I’m not sure if you can read the article because of the paywall:

    https://www.nytimes.com/2018/12/26/s...ection=Science


    A couple of extracts:

    All of that has changed with the rise of machine learning. By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever. Not only could it have easily defeated all the strongest human masters — it didn’t even bother to try — it crushed Stockfish, the reigning computer world champion of chess. In a hundred-game match against a truly formidable engine, AlphaZero scored twenty-eight wins and seventy-two draws. It didn’t lose a single game.

    When AlphaZero was first unveiled, some observers complained that Stockfish had been lobotomized by not giving it access to its book of memorized openings. This time around, even with its book, it got crushed again. And when AlphaZero handicapped itself by giving Stockfish ten times more time to think, it still destroyed the brute.

    Tellingly, AlphaZero won by thinking smarter, not faster; it examined only 60 thousand positions a second, compared to 60 million for Stockfish. It was wiser, knowing what to think about and what to ignore. By discovering the principles of chess on its own, AlphaZero developed a style of play that “reflects the truth” about the game rather than “the priorities and prejudices of programmers,” Mr. Kasparov wrote.
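
    An aside on "knowing what to think about": in AlphaZero's search, each simulation chooses the next move by maximizing a score that adds a policy-network prior to the running value estimate (the published PUCT rule). A stripped-down sketch with invented node statistics:

    ```python
    import math

    def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
        """AlphaZero-style selection: exploit the value estimate Q, but let
        the policy prior steer exploration toward promising moves."""
        u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
        return q + u

    # Invented stats at one node: the prior concentrates search on "e4",
    # so "a3" is barely examined -- fewer positions, better chosen.
    children = {
        "e4": {"q": 0.10, "prior": 0.55, "visits": 40},
        "a3": {"q": 0.12, "prior": 0.01, "visits": 2},
    }
    parent_visits = sum(c["visits"] for c in children.values())
    best = max(children, key=lambda m: puct_score(
        children[m]["q"], children[m]["prior"], parent_visits, children[m]["visits"]))
    print(best)  # "e4"
    ```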

    But envisage a day, perhaps in the not too distant future, when AlphaZero has evolved into a more general problem-solving algorithm; call it AlphaInfinity. Like its ancestor, it would have supreme insight: it could come up with beautiful proofs, as elegant as the chess games that AlphaZero played against Stockfish. And each proof would reveal why a theorem was true; AlphaInfinity wouldn’t merely bludgeon you into accepting it with some ugly, difficult argument.

    For human mathematicians and scientists, this day would mark the dawn of a new era of insight. But it may not last. As machines become ever faster, and humans stay put with their neurons running at sluggish millisecond time scales, another day will follow when we can no longer keep up. The dawn of human insight may quickly turn to dusk.