Dan Faggella - TEDx Slides 2015 - Artificial Intelligence and Consciousness


Transcript

This presentation is based on TechEmergence founder Dan Faggella’s 2015 TEDx talk titled

“What Will We Do When the Machines Can Feel?”

In his presentation, Dan explores the ethical consequences of the development of conscious machines, which many artificial intelligence researchers consider possible within our lifetime.

To view Dan’s talk for yourself, click the video screen below:

It’s clear that technology matters... and it matters because it matters to us.

Technology doesn’t really matter without us, even if it does matter with us.

A cell phone, sitting by itself, doesn’t have any moral worth, but an animal, we might say, does.

A cell phone, we could say, is just matter, while a deer actually matters.

And although we would presume that human beings have the grandest and richest sentience among the animals we have found on this planet, we now attribute moral worth to other animals as well.

What would be the case when/if technology could matter in and of itself?

It seems very far out, but a quick jaunt through the history of computing might shed some light on where we might find ourselves in the decades ahead.

This is what computers looked like less than 70 years ago...

Pictured is the ENIAC computer, introduced in 1946 at the University of Pennsylvania.

Around 20 years later, IBM developed computers that helped Apollo 11 get to the moon.

But we’ve made many giant leaps in computing since that time, leaps that make those IBM computers seem paltry in this day and age.

Throughout the history of computing, performance has increased and price has decreased at a steady exponential rate.

The image pictured was taken from Ray Kurzweil’s book The Singularity Is Near. This theme goes along with Ray’s general theory of the law of accelerating returns. For more information, see Ray’s site here.
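
To make the shape of that claim concrete, here is a minimal back-of-the-envelope sketch. The two-year doubling time is an illustrative assumption, not a figure from the slides:

    % Price-performance P(t) growing exponentially from a baseline P_0,
    % with doubling time T (assumed here, for illustration, to be ~2 years):
    \[
      P(t) = P_0 \cdot 2^{t/T}
    \]
    % Over 20 years with T = 2, the improvement factor is
    \[
      \frac{P(20)}{P_0} = 2^{20/2} = 2^{10} = 1024 \approx 10^3,
    \]
    % i.e. roughly a thousandfold gain in computing power per dollar.

On the log-scaled chart Kurzweil uses, a constant doubling time like this appears as a steady straight-line climb.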

Some experts suggest, based on Kurzweil’s model, that we are getting relatively close to the point where an average laptop will have as much raw computing power as a lower mammalian brain.

Ray Kurzweil is not the only one who believes that in the coming decade or so we may have household computers with the same ‘computing power’ as the human brain.

... But it’s not just raw computing power that would make a technology morally relevant.

We’re more interested in what technology can do. Is it really smart?

The Deep Blue computer that beat (then) world chess champion Garry Kasparov was 1.4 tons of raw computing power. It was the finest supercomputer of its day, just 15 years ago.

An iPhone 5 from 2012 has 7x the computing power that Deep Blue did...

That’s 15 years, 7x the computing power, and 1/11,000 of the size...
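
As a rough worked example (an illustrative calculation, not a figure from the talk), treating that 7x gain over 15 years as smooth exponential growth gives:

    % Implied compound annual growth rate r and doubling time T
    % from a 7x improvement over 15 years:
    \[
      r = 7^{1/15} \approx 1.14
    \]
    % i.e. roughly 14% improvement per year, and
    \[
      T = \frac{\ln 2}{\ln r} \approx 5.3 \text{ years}
    \]
    % So by this metric alone, capability doubles about every five years,
    % while the 1/11,000 shrink in size compounds on top of that.

The exact metric matters less than the pattern: doubling times of a few years compound into enormous differences over a couple of decades.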

Jeopardy was supposed to be the final stronghold of human wisdom...

Until IBM’s Watson beat the two (then) Jeopardy champions.

...And there are other technologies on their way up as well.

Biomimetic technology has taken off in the past decade. Click here to see one of MIT’s interesting projects: a robotic cheetah that ‘sees’.

Apple recently opened up Siri to third-party developers. Click here to read an article from WIRED outlining the announcement.

There are a lot of areas where humans aren’t just being ‘caught up’ to; they are being beaten... handily.

It brings to mind some of the fears posited by folks like Bill Gates, Stephen Hawking, and Elon Musk within the last year around the real consequences of creating a superintelligence... something vastly beyond ourselves.

If it were as vastly beyond us as we are above the lower animals... wouldn’t it trounce the planet like humans have? Wouldn’t that be morally consequential?

But since Bill Gates isn’t exactly an AI researcher, the worthwhile question to ask is this:

What do real folks doing real work in AI actually think about this?

Luckily, there are some people doing the legwork there...

Nick Bostrom, Professor, University of Oxford; Director, Future of Humanity Institute

Dr. Bostrom asked 170 AI researchers the following:

“When, with 50% confidence, would you suppose we would have human-level machine intelligence?”

Bostrom et al. 2012-13 AI Researcher Poll: Timelines to Human-Level Machine Intelligence

To see the full research study in context, you can find the full PDF on Nick Bostrom’s website here.

Confidence in Human-Level AI    Median year
50%                             2040

But what about consciousness?

A really complicated machine that can do smart things, but isn’t really aware, doesn’t really matter that much...

We asked 33 AI researchers when they believe (with 90% confidence) that artificial intelligence will be capable of self-aware consciousness.


TechEmergence 2015 AI Researcher Poll: Timelines to Machine Consciousness (90% Confidence)

Timeframe        Responses   Share
Before 2021          4       12.12%
2021 - 2035          5       15.15%
2036 - 2060          8       24.24%
2061 - 2100          4       12.12%
2101 - 2200          2        6.06%
2201 - 3000          2        6.06%
Likely Never         1        3.03%
Can’t Tell           6       18.18%
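
As a quick sanity check of those shares, here is a minimal sketch in Python; the counts are transcribed from the table above, and each share should equal count / 33:

    # Recompute the poll percentages from the raw counts above.
    # Note: the listed counts sum to 32, so one of the 33 responses
    # appears to be unaccounted for in the published breakdown.
    counts = {
        "Before 2021": 4,
        "2021 - 2035": 5,
        "2036 - 2060": 8,
        "2061 - 2100": 4,
        "2101 - 2200": 2,
        "2201 - 3000": 2,
        "Likely Never": 1,
        "Can't Tell": 6,
    }
    TOTAL = 33  # respondents in the TechEmergence poll
    for timeframe, n in counts.items():
        print(f"{timeframe:12}  {n:2d}  {n / TOTAL:7.2%}")
    print(f"Accounted for: {sum(counts.values())} of {TOTAL}")

Running it reproduces the percentages shown and flags the one apparently missing response.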

We could suppose that in the next two decades we might have some kind of machine that replicates not only the intelligence but also the sentience of, say, a dog.

This would be something able to understand sensory experiences, with a knowledge of the past and some kind of rough understanding of the future.

We wouldn’t just treat it as a machine anymore. It wouldn’t just be empty mass; it would now have moral worth.

It’s reasonable to suppose that if we are able to replicate that much intelligence and sentience in a machine, and if any part of Kurzweil’s trajectory continues, it might not be terribly long until Bostrom is proven right.

We could then find ourselves at the point where the sentient and intelligent complexity of our machinery could at last match our own.

If there were AI programmers that never had to sleep, didn’t have to go to college, and never made mistakes, we might suppose that one day we would get here:

... Where there would be something of greater sentience and moral worth than ourselves.

It’s reasonable to suppose that there are in fact sensory experiences, concepts, and ideas that we can’t possibly compute... given our hardware.

Many suppose that if we were to reach human-level machine intelligence, there would be an explosion of intelligence and sentience itself that would vastly outstrip any words we have to articulate it.

There would be a flexible and ever-evolving kind of intelligence unlike anything biology has been able to create.

But we should note that this has not happened yet...

We don’t have human-level computers, and we may be coming up on a plateau before we touch what ‘human’ is.

Maybe we can replicate some kind of intelligence/sentience, but not much more than that of a fish. Maybe there’s something in a human skull that science will never get its hands fully around.

It’s reasonable to say that it’s more dangerous than ever, in this time of exponentially improving technologies, to hide under the rock of:

“It hasn’t happened yet, so it never will”

What kind of an artificial intelligence should corporations be able to build without regulation?

If we are going to be able to construct living machines that could suffer at our hands, should we do that at all?

Or should we make them only capable of experiencing pleasure, no matter how we treat them?

But if that were the case, would they have any sympathy for our sorrows, and would they feel at all bad about harming us?

If we could set laws bounding AI within the United States, what would ever stop another nation from pursuing those developments themselves?

Ask yourself this:

“What makes a dog’s life less worthy than a human being’s?”

Is it something about the consciousness of a human being?

If AI can crack open that door... what does that imply?

When machines not only trounce us in chess, but in fact supersede us in the very moral traits and qualities that we suppose make us unique and make our lives worthwhile...

What do we mean and how do we matter then?

In order for an idea to trickle into policy and regulation, it first has to be worthy of contemplation and dialogue.

Luckily, this isn’t the first grand moral concern that will involve global unity in some way, shape, or form.

This could be another one of those efforts...

The cosmopolitan ideal is more alive now than ever. Despite our conflicts, education and exposure are making us more likely to embrace humans of any skin color, gender, or type... maybe even all sentient beings.

I don’t see the trend of expanding circles of sentiment slowing down. We will need well-intentioned collaboration if we are to survive the technologies that we will create.

Many of the global collaborations (the League of Nations, the World Health Organization, etc.) have first involved tragedy.

The way I see it now, given how these technologies are projected, the genuine perspective of people in this field, and the moral consequence of not only destroying ourselves but maybe creating what is beyond us... I think it behooves us to wake up before the machines do.


Click the screen below to view a video of Dan’s TEDx talk for yourself:

dan@techemergence.com | www.danfaggella.com

Thanks for viewing the presentation. To join the conversation on the intersection of technology and intelligence, visit my personal webpage and follow me on social media by clicking the icons below:

If you’d like to stay ahead of the curve on cutting-edge research trends and insights in the field of artificial intelligence, be sure to stay connected with TechEmergence on social media by clicking the icons below:

info@techemergence.com | www.techemergence.com | © TechEmergence LLC 2016 All Rights Reserved | Design by J. Daniel Samples

Vector artwork and images via FlatIcon, Vecteezy, Pixaroma, and PixaBay
