
1

Plan

•  Present and future challenges
•  The technological growth
•  Impact on business and the job market
•  Managers' role?
•  Ethical and philosophical issues

2

Why now? Exponential growth in computing power

3

Exponential growth in computing power

1996: ASCI Red, the world's fastest supercomputer
  Cost: $55 million
  Size: 1,600 square feet
  Performance: 1.8 teraflops (1.8 trillion, i.e. 1.8 × 10^12, operations per second)
  Power: 800,000 watts

2006 (9 years later!): Sony PlayStation 3
  Cost: $500
  Size: one-tenth of a square foot
  Performance: 1.8 teraflops
  Power: 200 watts
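A quick back-of-the-envelope comparison makes the nine-year gap concrete (the figures are the ones on this slide; the ratios are rounded):

    # Rough cost and power comparison between ASCI Red (1996) and the
    # Sony PlayStation 3 (2006), using the figures quoted on this slide.
    asci_red_cost, ps3_cost = 55_000_000, 500     # US dollars
    asci_red_power, ps3_power = 800_000, 200      # watts
    teraflops = 1.8                               # both machines deliver ~1.8 teraflops

    print(asci_red_cost / ps3_cost)       # 110000.0 -> ~110,000x cheaper for the same performance
    print(asci_red_power / ps3_power)     # 4000.0   -> ~4,000x less power for the same performance
    print(asci_red_cost / teraflops)      # ~3.06e7  -> roughly $30 million per teraflop in 1996
    print(ps3_cost / teraflops)           # ~278     -> roughly $280 per teraflop in 2006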

4


5

How much would an iPhone have cost in 1991?

32 GB of flash memory: $1.44 million
(1 GB cost $45,000 in 1991; now it costs $0.55)

Processor: $620,000

Connectivity: $1.5 million

Total: $3.56 million!

Plus camera, motion detection, operating system, display, etc.
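The headline total is simply the sum of the component estimates above; a quick sanity check using the slide's own numbers:

    # 1991-price estimate for the main iPhone components, as quoted on this slide.
    flash_per_gb_1991 = 45_000                    # dollars per GB of flash memory in 1991
    flash_32gb = 32 * flash_per_gb_1991           # 1,440,000 -> the $1.44 million above
    processor = 620_000
    connectivity = 1_500_000
    total = flash_32gb + processor + connectivity
    print(total)                                  # 3560000 -> roughly $3.56 million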

6

So much more processing power now!

Our phone has the same computing power as all of NASA in 1969, when it sent a man to the Moon!

The chip in our birthday cards has more computing power than all the Allied forces had in 1945!

Google has 1,800,000 servers and 43 petaflops!

1 petaflop: 1,000 trillion, i.e. 10^15, operations per second
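Dividing the slide's two Google figures gives a feel for the per-machine numbers (both figures are the slide's estimates; the result is rounded):

    # Average compute per server implied by the slide's figures for Google.
    servers = 1_800_000
    total_flops = 43e15            # 43 petaflops = 43 * 10^15 operations per second
    print(total_flops / servers)   # ~2.4e10 -> roughly 24 gigaflops per server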

7

Why now?

•  Cheap parallel computation
•  Big data: an incredible amount of data is available about the world and human behavior; it can be used as examples, teaching AIs to be smarter (see the sketch below)
•  Better algorithms and models: deep learning
•  Good reusable open-source software
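As a small, hedged illustration of the last two points (open-source software plus data used as examples), the sketch below trains a classifier with scikit-learn, a widely used open-source library; the dataset and model are generic placeholders, not anything referenced in the talk:

    # Minimal sketch: off-the-shelf open-source tooling plus labelled examples
    # is enough to "teach" a model, which is the combination this slide argues
    # has only recently become cheap and easy.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)                    # small example dataset (handwritten digits)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000)              # a simple reusable off-the-shelf model
    model.fit(X_train, y_train)                            # learning from examples
    print(model.score(X_test, y_test))                     # held-out accuracy, typically above 0.9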

8

More and more people connected every day


9

More and more people connected every day

10

Not just people, but also things

11

Everybody can be an innovator!

More than 6 billion mobile phones in 2012.

All these people can:
•  search the web
•  read Wikipedia
•  follow online courses
•  share opinions in blogs, Twitter, etc.
•  perform data analysis using cloud services

12

Impact on businesses and the job market

Jobs by type (cognitive vs. manual, routine vs. non-routine):

Routine, cognitive: processing payments, bank tellers, cashiers, mail clerks, translation, accounting, driving, secretaries, real estate
Routine, manual: machine operators, cement masons, janitors, house cleaning
Non-routine, cognitive: handling customers' questions, financial analysis
Non-routine, manual: hairdressing


13

Technology changes and jobs

Kodak: 45,000 people in 1984; bankrupt in 2012!

Instagram: 13 people in 2012, sold to Facebook for $1 billion.

Foxconn (electronics components manufacturing): $100 billion, 1.2 million people; it is getting an army of 1 million robots!

Same at Canon and many others.

14

The future of jobs

Oxford researchers using machine learning (2013)*: 47% of jobs in the US are at risk of being replaced by automation within 20 years. Three steps:

1.  People replaced in vulnerable fields: production, transportation/logistics, administrative support
2.  Slowdown of replacement due to engineering bottlenecks: creative intelligence, social intelligence, perception and manipulation
3.  AI will allow jobs in management, science, engineering, and the arts to be replaced

* The Future of Employment. C. Benedikt Frey and M. Osborne. Oxford Martin School at the University of Oxford. 2013.

15

The future of jobs: a more recent study** (2016)

1.  77% of jobs in China and 69% of jobs in India are at risk
2.  Greater inequalities: divergence in the penetration rates of technology adoption can account for 82% of the increase in the income gap across the globe over the last 180 years. In 1820, incomes in Western countries were 1.9 times those in non-Western countries; in 2000, 7.2 times! (A quick calculation follows below.)

** Technology at Work v2.0: The Future Is Not What It Used To Be. Citi GPS and the Oxford Martin School at the University of Oxford. 2016.
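How large is that widening? A rough calculation with the slide's two data points (assuming, purely for illustration, a smooth divergence over the 180 years):

    # Widening of the Western vs non-Western income ratio between the slide's two data points.
    ratio_1820, ratio_2000 = 1.9, 7.2
    widening = ratio_2000 / ratio_1820
    print(widening)                      # ~3.8x widening of the relative gap
    years = 180
    print(widening ** (1 / years) - 1)   # ~0.0074 -> about 0.74% implied average divergence per year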

16

On the other hand …

1.  Innovation is important for growth
2.  AI is important for innovation


17

AI and management

18

Five practices that successful managers will need to master [1]

1) Leave administration to AI. Data-analytics company Tableau and NLP company Narrative Science developed software that automatically creates written explanations for Tableau graphics.

86% of the surveyed managers like AI support for monitoring and reporting.

2) Focus on judgment work. Many decisions require knowledge of organizational history and culture, empathy, and ethical reflection. AI provides decision support, not replacement.

3) Treat AI machines as "colleagues", not competitors. AI can provide decision support, data-driven simulations, and search and discovery activities.

78% believe they will trust the advice of AI in making business decisions.

Kensho Technologies' system allows investment managers to ask questions in plain English, such as, "What sectors and industries perform best three months before and after a rate hike?"

[1] How Artificial Intelligence Will Redefine Management. Vegard Kolbjørnsrud, Richard Amico, and Robert J. Thomas. Harvard Business Review, November 2016.

19

Five practices that successful managers will need to master [1]

4) Work like a designer: the ability to harness others' creativity.

33% of the managers identified creative thinking and experimentation as a key skill area they need to learn to stay successful.

5) Develop social skills and networks. The managers undervalued the social skills critical to networking, coaching, and collaborating that will help them in a world where AI carries out many of the administrative and analytical tasks they perform today.

More suggestions

a)  Explore AI early. Disruption is arriving.
b)  Adopt new key performance indicators. AI will bring new criteria for success: collaboration capabilities, information sharing, learning and decision-making effectiveness, and the ability to reach beyond the organization for insights.
c)  Develop training and recruitment strategies for creativity, collaboration, empathy, and judgment skills. Leaders should develop a diverse workforce.

20

Philosophical and ethical issues

Weak AI: can we build machines that could act as if they were intelligent?

Strong AI: can we build machines that are actually intelligent?

The answer depends very much on the meaning of "intelligence".

AI researchers take weak AI for granted (and do not care too much about the strong AI hypothesis).


21

The consciousness objection to AI

Can machines think? Turing recognized this as an ill-posed question and replaced it with the Turing test.

Many claim that a machine that passes the Turing test would not actually be thinking, but would only be a simulation of thinking: the Chinese room (J. Searle, 1980) and G. Jefferson (1949):

"Not until a machine could write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it."

Turing had foreseen this consciousness objection. His answer is interesting:

"In ordinary life we never have any direct evidence about the internal mental states of other humans. Instead of arguing continually over this point, it is usual to have the polite convention that everyone thinks."

22

Other objections to AI (some foreseen by Turing)

1.  The theological objection (only beings created by God can think)
2.  The mathematical objection. J. R. Lucas 1961, R. Penrose 1994 (based on Gödel's incompleteness theorem of 1931: any consistent formal theory as strong as Peano arithmetic contains true statements that have no proof within the theory itself)
3.  Various disabilities (a machine cannot be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, fall in love, enjoy strawberries and cream)
4.  Lady Lovelace's objection: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." Lady Lovelace (1842)
5.  Informality of behaviour (it is impossible to provide rules that describe how to behave in every possible situation)

23

Two opposite views

1. Biological naturalism: mental states are high-level emergent features that are caused by low-level physical processes in the neurons, and it is the (unspecified) properties of the neurons that matter. Searle, 1980.

Chinese room: a monolingual English speaker hand-traces a natural-language understanding program for Chinese, following instructions written in English. From the outside we see a system that answers in Chinese, but there is no understanding of Chinese.

24

Two opposite views

2. Functionalism: a mental state is any intermediate causal condition between input and output. Any two systems with isomorphic causal processes would have the same mental states. Therefore, a computer program could have the same mental states as a person. The assumption is that there is some level of abstraction below which the specific implementation does not matter.

Brain replacement experiment (H. Moravec, 1988):

•  Piecemeal replacement of neurons by functionally equivalent electronic devices
•  The external behaviour remains the same
•  For functionalists (e.g. Moravec) the internal behaviour (i.e. consciousness) would also remain the same
•  For biological naturalists (Searle) consciousness would vanish; this is an instance of the more general mind-body problem


25

Ethical issues **

Machine ethics: the computational and philosophical assumptions needed for machines that can take autonomous moral decisions.

Autonomous vehicles: who should be killed in case of an accident?

Medical robots: should they always tell the truth to patients?

** With the help of Daniela Tofani.

26

Machine ethics

How can we guarantee that machines do not take "immoral" decisions?

1.  Simple rules (e.g. Asimov's three laws of robotics) are not enough, given the complex contextual information involved (a toy sketch follows this list)
2.  Simulating the moral decisions and actions of humans is not enough, since humans also take bad moral decisions: machines would need to be "saints"
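As a toy illustration of point 1 (the rule, the scenario, and all names below are invented for this sketch), a single absolute rule such as "never harm a human" gives a machine no guidance when every available action harms someone:

    # Toy sketch: one absolute rule cannot rank the options in a dilemma
    # where every option harms someone, so it yields no decision at all.
    def allowed_by_rule(action):
        return action["humans_harmed"] == 0      # Asimov-style rule: harm no human

    # A trolley-style situation in which both options violate the rule.
    options = [
        {"name": "stay on course", "humans_harmed": 5},
        {"name": "swerve",         "humans_harmed": 1},
    ]

    permitted = [o for o in options if allowed_by_rule(o)]
    print(permitted)   # [] -> the simple rule leaves the machine with no permitted action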

27

Machine ethics

We would need:

1.  A normative ethics that solves all existing moral dilemmas and is accepted by most humans
2.  A translation of such an ethics into computational terms
3.  The ability to incorporate commonsense reasoning into machines

Three huge problems!

28

Machine ethics

Moreover, human behaviours and machine behaviours are ruled by different laws!!

An example with the (famous) trolley problem.


29

Machine ethics

Moreover, human behaviours and machine behaviours are ruled by different laws!! (That is, different legal liabilities apply.)

An example with the famous (modified) trolley problem:


[Figure 1 (panels A, B, C): three scenarios involving imminent unavoidable harm.]

In scenario (c), for instance, the AV can either stay on course and kill several pedestrians or swerve and kill its own passenger.

The common factor in all these scenarios is that harm to persons is unavoidable, so that a choice needs to be made as to which person will be harmed: passengers, pedestrians, or passersby.

This raises the issue of who should select the criteria the AV should follow in making such choices: should the same mandatory ethics setting (MES) be implemented in all cars, or should every driver have the choice to select his or her own personal ethics setting (PES)?
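To make the MES/PES distinction concrete, here is a deliberately simplified sketch; the parameter, its scale, and the decision rule are invented for illustration and are not taken from the cited paper. A personal ethics setting can be thought of as a single weight trading off harm to the vehicle's passengers against harm to everyone else:

    # Hypothetical "personal ethics setting" (PES): a weight in [0, 1].
    # pes = 0.0 is fully egoistic (only passenger harm counts),
    # pes = 1.0 is fully altruistic (only harm to others counts).
    def choose_action(options, pes):
        def expected_cost(option):
            return (1 - pes) * option["passengers_harmed"] + pes * option["others_harmed"]
        return min(options, key=expected_cost)

    options = [
        {"name": "stay on course", "passengers_harmed": 0, "others_harmed": 3},
        {"name": "swerve",         "passengers_harmed": 1, "others_harmed": 0},
    ]

    print(choose_action(options, pes=0.2)["name"])   # egoistic setting   -> "stay on course"
    print(choose_action(options, pes=0.8)["name"])   # altruistic setting -> "swerve"

Under a mandatory ethics setting the value of this weight would be fixed by regulation and identical in every car; under a personal ethics setting each owner would choose it.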

Gogoll and Müller (2016) submit that despite the advantages of a PES, a mandatory MES is actually in the best interest of society as a whole. In particular, they argue that (1) implementing a PES will lead to socially unwanted outcomes; (2) a MES that minimizes the risk of people being harmed in traffic is in the considered interest of society; and (3) AVs, at least under some circumstances, should sacrifice their drivers in order to save a greater number of lives.

Millar (2015) observes that technologies may act as moral proxies, implementing moral choices. He argues that users/owners, rather than designers, should maintain responsibility for such choices. In particular, "designers [...] should reasonably strive to build options into self-driving cars allowing the choice to be left to the user."

According to a study by Bonnefon et al. (2016), based on three online surveys conducted in June 2015, people are comfortable with the idea that AVs should sacrifice their passenger when this saves a greater number of lives, although they would themselves prefer to ride in AVs that protect their passengers at all costs.


30

Trolley problem liability analysis 1: human-driven car **

In scenario (a), the choice to stay on course and let several pedestrians be killed, rather than to swerve and kill one passerby, can be justified on the moral-legal stance condemning the wilful causation of death (as distinguished from letting death result from one's omission).

In scenario (b), the choice to stay on course can be justified by invoking the state of necessity, since this choice is necessary to save the life of the driver.

The same justification applies to scenario (c), even though in this case the driver's choice to save his or her own life leads to the death of several other persons.

** The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law. Giuseppe Contissa, Francesca Lagioia, Giovanni Sartor. 2017. https://ssrn.com/abstract=2881280

31

Trolley problem liability analysis 2: pre-programmed autonomous vehicle **

In scenario (a) it is doubtful whether the programmer would be justified in choosing to program an AV so that it stays on course and kills several pedestrians rather than swerving and killing just one passerby. In fact, the distinction between omitting to intervene (letting the car follow its path) and acting in a determined way (choosing to swerve), a distinction that in the case of a manned car may justify the human choice of allowing the car to keep going straight, does not seem to apply to the programmer, since the latter would deliberately choose to sacrifice a higher number of lives.

Scenario (b): when the perpetrator is not directly in danger and does not act out of self-preservation (or kin-preservation), the applicability of the general state-of-necessity defence is controversial. For instance, Santoni de Sio (2017) argues that the law does not generally allow an innocent person to be killed to save other people's lives. On this basis he rejects the utilitarian pre-programming of AVs. If the legal jurisdiction allows for such a particular case of state of necessity, then the programmer would not be punishable for either choice. Otherwise, if this is not accepted by the jurisdiction, then it is very doubtful whether pre-programming the car either to go straight (killing a pedestrian) or to swerve (killing the passenger) would be legally acceptable: in both cases the programmer would arbitrarily choose between two lives.

In scenario (c), it seems that pre-programming the car to continue on its trajectory, causing the death of a higher number of people, could not be morally and legally justified in any jurisdiction: it would amount to an arbitrary choice to kill many rather than one.

** The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law. Giuseppe Contissa, Francesca Lagioia, Giovanni Sartor. 2017.

32

Conclusions: developing a society of minds and machines …

•  Cheap, reliable, digital smartness running behind everything, almost invisible
•  As machines replace and augment humans in more and more tasks, we will better understand what makes us human and what intelligence means
•  More free time, less need for work
•  Amplifying the capabilities of individuals and of the collective


33

… but should we really do that?

•  Many jobs will be lost: we need to redefine policies and economies
•  People might lose their sense of being unique
     •  Humanity has survived other setbacks (Copernicus, Darwin …)
•  AI systems might be used toward undesirable ends
     •  The U.S. military deployed over 17,000 autonomous vehicles in Iraq
•  The use of AI systems might result in a loss of accountability
     •  Who is responsible for wrong diagnoses/decisions? Health, finance, cars …
•  The success of AI might mean the end of the human race
     •  An AI system's state estimation may be incorrect
     •  What is the right utility function for an AI system to maximize? (human suffering …)
     •  Learning allows systems to develop unintended behaviour: the technological singularity

34

A last word by A. Turing

"We can only see a short distance ahead, but we can see plenty there that needs to be done."

Thanks!

35

Suggested readings

•  The Second Machine Age. Erik Brynjolfsson and Andrew McAfee, Norton, 2014
•  Physics of the Future. Michio Kaku, Anchor, 2012
•  Superintelligence: Paths, Dangers, Strategies. Nick Bostrom, Oxford Univ. Press, 2014
•  Smarter Than Us: The Rise of Machine Intelligence. Stuart Armstrong, MIRI, 2014
•  The Glass Cage: Automation and Us. Nicholas Carr, Norton, 2014

but also …

•  A. Turing. Computing Machinery and Intelligence. 1950.