Dan Faggella - TEDx Slides 2015 - Artificial Intelligence and Consciousness

Transcript
  • This presentation is based on TechEmergence founder Dan Faggella's 2015 TEDx talk titled

    What Will We Do When the Machines Can Feel?

  • In his presentation, Dan explores the ethical consequences of the development of conscious machines, which many artificial intelligence researchers consider to be possible within our lifetime.

  • To view Dan's talk for yourself, click the video screen below:

    https://www.youtube.com/watch?v=PjiZbMhqqTM

  • It's clear that technology matters... and it matters because it matters to us.

  • Technology doesn't really matter without us, even if it does matter with us.

  • A cell phone, sitting by itself, doesn't have any moral worth, but an animal, we might say, does.

  • A cell phone, we could say, is just matter, while a deer actually

    matters.

  • And although we would presume that human beings might have the grandest and richest sentience among the animals we have found on this planet, we now attribute moral worth to animals as well.

  • What would be the case when/if technology could matter in and of

    itself?

  • It seems very far out, but a quick jaunt through the history of computing might shed some light on where we might find ourselves in the decades ahead.

  • This is what computers looked like less than 70 years ago...

    Pictured is the ENIAC computer, introduced in 1946 at the University of Pennsylvania

  • Around 20 years later IBM developed computers that helped

    Apollo 11 get to the moon.

  • But we've made many giant leaps in computing technologies since that time that make IBM's computers seem paltry in this day and age.

  • Throughout the history of computing, performance has increased and price has decreased at a steady rate (a rough sketch of what that compounding implies follows below).

    The image pictured was taken from Ray Kurzweil's book The Singularity Is Near. This theme goes along with Ray's general theory of the law of accelerating returns. For more information, see Ray's site:

    http://www.kurzweilai.net/the-law-of-accelerating-returns
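
    As a rough illustration of what steady, compounding price-performance improvement implies over decades, here is a minimal Python sketch. The 18-month doubling period is an assumed placeholder for illustration, not a figure taken from the slides:

        # Minimal sketch of exponential price-performance growth, in the
        # spirit of Kurzweil's law of accelerating returns.
        # ASSUMPTION: an 18-month doubling period, used purely for illustration.

        def compute_per_dollar(years: float, doubling_years: float = 1.5) -> float:
            """Relative compute per dollar after `years`, normalized to 1.0 today."""
            return 2 ** (years / doubling_years)

        for years in (10, 20, 30):
            print(f"After {years} years: ~{compute_per_dollar(years):,.0f}x compute per dollar")

    Under that assumed doubling period, three decades yields roughly a million-fold improvement, which is the kind of curve the slide's chart depicts.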

  • Some experts contend, based on Kurzweil's model, that we are getting relatively close to the point where an average laptop will have as much raw computing power as a lower mammalian brain.

  • Ray Kurzweil is not the only one who believes that in the coming decade or so we may have household computers with the same computing power as the human brain.

  • ... But it's not just raw computing power that would make a technology morally relevant.

  • We're more interested in what technology can do. Is it really smart?

  • The Deep Blue computer that beat (then) world chess champion

    Garry Kasparov was 1.4 tons of raw computing power. It was the finest supercomputer of its day

    just 15 years ago.

  • An iPhone 5 from 2012 has 7x the computing power that Deep Blue

    did...

  • That's 15 years, 7x the computing power, and 1/11,000 of the size... (a quick back-of-envelope check follows below)
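
    Taking the slide's two figures at face value, this short Python check shows the implied gain in compute per unit of size; both inputs come straight from the slides, and the result is just their ratio:

        # Back-of-envelope check using only the figures quoted on the slides:
        # the iPhone 5 is said to have 7x Deep Blue's computing power at
        # 1/11,000 of Deep Blue's size.

        power_ratio = 7            # iPhone 5 compute relative to Deep Blue
        size_ratio = 1 / 11_000    # iPhone 5 size relative to Deep Blue

        density_gain = power_ratio / size_ratio
        print(f"Compute per unit of size improved roughly {density_gain:,.0f}x")  # ~77,000x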

  • Jeopardy was supposed to be the final stronghold of human wisdom...

  • Until IBM's Watson beat the two (then) Jeopardy champions.

  • ...And there are other technologies on their way up as well.

    Biomimetic technology has taken off in the past decade. Click here to see one of MIT's interesting projects, a robotic cheetah that sees:

    http://news.mit.edu/2015/cheetah-robot-lands-running-jump-0529

    Apple recently opened up Siri to third-party developers. Click here to read an article outlining the announcement via WIRED:

    http://www.wired.com/2016/06/apple-might-just-made-siri-something-really-good/

  • There are a lot of areas where humans aren't just being caught up to; they are being beaten... handily.

  • It brings to mind some of the fears posited by folks like Bill Gates, Stephen Hawking, and Elon Musk within the last year around the real consequences of creating a superintelligence... something vastly beyond ourselves.

  • If it were as vastly beyond ourselves as we are above the lower animals... wouldn't it trounce the planet like humans have? Wouldn't that be morally consequential?

  • But since Bill Gates isn't exactly an AI researcher, the worthwhile question to ask is this:

    What do real folks doing real work in AI actually think about this?

  • Luckily there are some people doing leg-work there...

  • Nick Bostrom
    Professor, University of Oxford
    Director, Future of Humanity Institute

  • Dr. Bostrom asked 170 AI researchers the following:

    By what year, with 50% confidence, would you suppose we would have human-level machine intelligence?

  • Bostrom (et al) 2012-13 AI Researcher Poll, Timelines to Human-Level Machine Intelligence

    To see the full research study in context, see the full PDF on Nick Bostrom's website:
    http://www.nickbostrom.com/papers/survey.pdf

    Confidence in Human-Level AI | Median Year
    50%                          | 2040
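
    To make that median figure concrete, here is an illustrative Python sketch of how such a number is derived from individual estimates. The response years below are hypothetical, invented only for demonstration; the real data is in Bostrom's PDF linked above:

        # Illustrative only: deriving a median "50% confidence" year from
        # individual researcher estimates.
        # ASSUMPTION: the years below are hypothetical, not the survey's data.
        from statistics import median

        estimates_50pct = [2030, 2035, 2038, 2040, 2042, 2050, 2075]
        print(f"Median estimate: {median(estimates_50pct)}")  # -> 2040, matching the slide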

  • But what about consciousness?

    A really complicated machine that can do smart things, but isn't really aware, doesn't really matter that much...

  • We asked 33 AI researchers when they believe (with 90% confidence) that artificial intelligence

    will be capable of self-aware consciousness.

  • TechEmergence 2015 AI Researcher Poll, Timelines to Machine Consciousness (90% Confidence)

    Timeframe    | Responses | Percent
    Before 2021  |     4     | 12.12%
    2021 - 2035  |     5     | 15.15%
    2036 - 2060  |     8     | 24.24%
    2061 - 2100  |     4     | 12.12%
    2101 - 2200  |     2     |  6.06%
    2201 - 3000  |     2     |  6.06%
    Likely Never |     1     |  3.03%
    Can't Tell   |     6     | 18.18%
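
    As a quick sanity check on the poll figures above, this short Python sketch recomputes each percentage from the raw response counts. The counts are read directly off the slide; with n = 33 respondents, the listed bins account for 32 of them:

        # Recompute the TechEmergence 2015 poll percentages from raw counts.
        # Counts are taken directly from the slide; n = 33 respondents.

        responses = {
            "Before 2021": 4,
            "2021 - 2035": 5,
            "2036 - 2060": 8,
            "2061 - 2100": 4,
            "2101 - 2200": 2,
            "2201 - 3000": 2,
            "Likely Never": 1,
            "Can't Tell": 6,
        }

        n = 33
        for label, count in responses.items():
            print(f"{label:>12}: {count:2d} ({count / n:6.2%})")
        print(f"Binned respondents: {sum(responses.values())} of {n}")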

  • We could suppose that maybe in the next two decades we might have some kind of a machine that replicates not only the intelligence but also the sentience of a dog, for example.

  • This would be something that would be able to understand sensory experiences, have a knowledge of the past, and some kind of a rough understanding of the future.

    We wouldn't just treat it as a machine anymore. It wouldn't just be empty mass; it would now have moral worth.

  • It's reasonable to suppose that if we are able to replicate that much intelligence and sentience in a machine, and if any part of Kurzweil's trajectory continues, it might not be horribly long until Bostrom turns out to be right.

  • We could then find ourselves where the sentient and intelligent complexity of our machinery

    would be able to at last match us.

  • If there were AI programmers that never had to sleep, didn't have to go to college, and never made mistakes, we might suppose that one day we may get here:

  • ... Where there would be something of greater sentience and moral worth than ourselves.

  • It's reasonable to suppose that there are in fact sensory experiences, concepts, and ideas that we can't possibly compute... given our hardware.

    It is supposed by many that if we were to get to human-level intelligence, there would be an explosion of intelligence and sentience itself that would vastly outstrip any words that we have to articulate it.

  • There would be a flexible and ever-evolving kind of intelligence unlike anything biology has been able to create.

    But we should note that this hasn't happened yet...

  • We don't have human-level computers, and we may be coming up on a plateau before we touch what "human" is.

    Maybe we can replicate some kind of intelligence/sentience, but not much more than a fish. Maybe there's something in a human skull that science will never get its hands fully around.

  • It's reasonable to say that it's more dangerous than ever, in this time of exponentially improving technologies, to hide under the rock of:

    "It hasn't happened yet, so it never will."

  • What kind of an artificial intelligence should corporations be able to build without regulation?

    If we are going to be able to construct living machines that could suffer at our hands, should we do that at all?

  • Or should we make them only capable of experiencing pleasure, no matter how we treat them?

    But if that were the case, would they have any sympathy for our sorrows, and would they feel bad about harming us?

  • If we could set laws bounding AI within the United States, what would ever stop another nation from pursuing the same developments themselves?

  • Ask yourself this:

    What makes a dog's life less worthy than a human being's?

    Is it something about the consciousness of a human being?

  • If AI can crack open that door... what does that imply?

    When machines not only trounce us in chess, but in fact supersede us in the very moral traits and qualities that we suppose make us unique and make our lives worthwhile...

    What do we mean, and how do we matter, then?

  • In order for an idea to trickle into policy and regulation, it first has to be worthy of contemplation and dialogue.

    Luckily, this isn't the first grand moral concern that will involve global unity in some way, shape, or form.

    This could be another one of those efforts...

  • The cosmopolitan ideal is more alive now than ever. Despite our conflicts, education and exposure are making us more likely to embrace humans of any skin color, gender, or type... maybe even all sentient beings.

    I don't see the trend of expanding circles of sentiment slowing down. We will need well-intentioned collaboration if we are to survive the technologies that we will create.

  • Many of the global collaborations (League of Nations, World Health Organization, etc.) have first involved tragedy.

  • The way that I see it now, the way that these technologies are projected, the genuine

    perspective of people in this field, and given the moral consequence of not only destroying

    ourselves but maybe creating what is beyond us... I think that it behooves us to wake up

    before the machines do.


  • Click the screen below to view a video of Dan's TEDx talk for yourself:

    https://www.youtube.com/watch?v=PjiZbMhqqTM

  • dan@techemergence.com | www.danfaggella.com

    Thanks for viewing the presentation. To join the conversation on the intersection of

    technology and intelligence, visit my personal webpage and follow me on social media by

    clicking the icons below:

    https://www.facebook.com/danfaggella
    https://www.facebook.com/TechEmergence/
    https://twitter.com/danfaggella
    https://twitter.com/techemergence
    https://www.linkedin.com/in/danfaggella
    mailto:info@techemergence.com
    http://www.techemergence.com

  • If you'd like to stay ahead of the curve on cutting-edge research trends and insights in the field of artificial intelligence, be sure to stay connected with TechEmergence on social media by clicking the icons below:

    info@techemergence.com | www.techemergence.com

    TechEmergence LLC 2016 All Rights Reserved | Design by J. Daniel Samples

    Vector artwork and images via FlatIcon, Vecteezy, Pixaroma, and PixaBay

    https://www.facebook.com/TechEmergence/
    https://twitter.com/techemergence
    https://www.linkedin.com/company/techemergence
    https://soundcloud.com/techemergence
    mailto:info@techemergence.com
    http://www.techemergence.com