Realising the benefits of artificial intelligence: issues for the engineering, science and language services workforces

Professionals Australia's submission to the public consultation on an ethics framework for the design and application of AI

May 2019


About Professionals Australia

Professionals Australia (formerly the Association of Professional Engineers, Scientists and Managers, Australia) is an organisation registered under the Fair Work Act 2009 representing over 23,000 professionals including professional engineers, scientists, veterinarians, surveyors, architects, pharmacists, information technology professionals, managers, transport industry professionals and translators and interpreters throughout Australia.

Professionals Australia's engineering members are employed across all sectors of the Australian economy. Engineering-based industries are worth $479 billion or 32 per cent of national gross value added and exports from engineering-based industries excluding mining totalled $92 billion1 or 29 per cent of total exports. This figure swells to $249 billion or 78 per cent of exports if mining is included.

Engineers perform design, scoping and project management roles in a diverse range of industries throughout the private and public sectors including roads, rail, water, electricity, information technology, telecommunications, construction, mining, oil and gas exploration, defence, shipbuilding and manufacturing. Engineers are largely responsible for designing, building and maintaining Australia's infrastructure. The contribution of our engineers and their ability to derive new ideas and develop solutions to our challenges as a nation will be fundamental to a successful transition to a competitive high-skill, knowledge-based economy.
Professionals Australia's scientist members - from pathologists to vets, IVF experts, food technologists, geologists, surveyors, chemists, molecular biologists, agricultural scientists, environmental scientists, botanists, computer scientists, pharmacologists, medical physicists, medical scientists, meteorologists, synchrotron scientists, astronomers, biochemists, immunologists, water scientists, geneticists, defence scientists and forestry scientists - work each day in areas as diverse as they are critical to our future. As well as working in research, our scientists are engaged in a range of roles in industry – just as likely to be working at a desk or on-site in a high-vis vest as at the laboratory bench. These people deploy the skills they gained in their undergraduate and postgraduate science degrees in roles as diverse as quality assurance officers, software developers, health and safety officers, senior administrators, science communicators, sustainability coordinators, technical officers and advisors, policy-makers and regulators, cyber-security advisors, web developers, directors and CEOs. No matter what role or industry they operate in, the work of our scientist members is not only fundamental to a healthier, more prosperous and sustainable world but also to the $330 billion contributed to Australia's economic output over the last 30 years and the one million-plus Australian jobs it supports.

Professionals Australia's translators and interpreters provide vital language services for the community. They contribute their expert skills in a diverse range of settings and are a means of providing access and equity for those who face language barriers to full participation in the community. They play a vital role in maximising the social and economic benefits of Australia's cultural diversity across our justice, health, education, migration and trade systems.
Professionals Australia is a not-for-profit organisation and is owned by its members.

Professionals Australia
GPO Box 1272, Melbourne VIC 8060
t: 1300 273 762
e: [email protected]
w: www.professionalsaustralia.org.au


Contents

Introduction

Main issues for Professionals Australia members
    Job loss/displacement
    Automation of tasks that may negatively impact early training of graduates
        Engineers
        Scientists
    Pressure to reduce staffing levels and potential impact on service quality
        Pathology services
        Translation services
    Differential impact of job losses
    Risk of deprofessionalisation
    Managers accountable for decisions made by automated systems
    Issues of diversity and bias
        Bias arising from unrepresentative training datasets
        Bias in workplace recruitment by agencies
        Bias arising from lack of diversity in AI workforce
    Polarisation of the workforce

Recommendations/priorities
    Regulation
    Employment protections
    Targeted retraining and a safety net
    Grow our national digital STEM skills base
    Deal with bias in training datasets and algorithms
    Address factors that contribute to an unrepresentative AI workforce
    Help close the gender pay gap
    Take a broad approach to policy responses and ethics framework development

Conclusion


Introduction

The ILO Global Commission on the Future of Work's 2019 report, Work for a Brighter Future,2 said of the potential impact of artificial intelligence (AI):

".. artificial intelligence, automation and robotics will lead to job losses, as skills become obsolete, but they [will] also create new opportunities … Countless opportunities lie ahead to improve the quality of working lives, expand choice, close the gender gap [and] reverse the damages wreaked by global inequality. Yet none of this will happen by itself. Without decisive action we will be sleepwalking into a world that widens existing inequalities and uncertainties."

The Global Commission on the Future of Work attempted to take "a human-centred approach to the future of work" meaning they tried "to look at the future of work from the perspective of people, not from the perspective of technology." This is clearly one of the key challenges in the world of work in the 21st century.

Growth in big data and computer processing power enable technology to automate systems and crunch numbers in new ways in new settings. With the workplace identified in a Stanford study as one of the eight areas most likely to be impacted by AI,3 the ethical framework and best practice guidelines around its implementation will play a major role in determining how it will impact the professional workforce – a comprehensive ethical framework will be critical in properly dealing with potential issues such as job displacement and discrimination in the workforce as automated systems are developed and applied.

As part of a comprehensive range of services, Professionals Australia advocates for members in workforce and employment-related areas with the aim of positively impacting their operating environment and ensuring their interests are protected when Government policies, outsourcing and offshoring, management decisions or new technologies lead to workplace change. We are committed to collaborating with other key stakeholders to realise the benefits of artificial intelligence and other assistive technologies while ensuring the safety and well-being of the community, maintaining high professional service standards and ensuring a diverse, properly trained and sustainable professional workforce for the future.

Our submission is divided into two parts – firstly, an analysis of some of the issues raised by AI implementation for the professional workforce, and secondly, an overview of some potential solutions. We thank you for the opportunity to highlight our concerns on behalf of our members.
Definition

In this report, artificial intelligence is defined as a broad family of interrelated technologies that delegates decision-making to machines. It automates decision-making, solving problems autonomously and performing tasks to achieve defined objectives without explicit guidance from a human being.4 It includes those technologies referred to in this submission as assistive intelligence or assistive technologies.


Main issues for Professionals Australia members

Job loss/displacement

Professionals Australia's primary concern in relation to AI implementation is job loss/displacement. As the discussion paper points out, there is debate about the extent of job losses likely to arise as a result of AI. Some claim that potential job losses have been overstated and that automation often requires new skills and new tasks with the job remaining intact.5 The discussion paper suggests that:

".. the weight of evidence suggests around half of all jobs will be significantly impacted (positively or negatively) by automation and digital technologies. A smaller, but still significant, number of jobs are likely to be fully automated requiring workers to transition into new jobs and new careers."6

AI's initial applications are mostly found where there is a large volume of decisions to be made based on relatively uniform, uncontested criteria, with no discretion required. It may therefore initially appear counter-intuitive that professional roles requiring the exercise of professional judgement and expertise could be affected. Our experience to date, however, suggests that the automation of tasks within professional roles, and a misunderstanding that automated tasks do not require human oversight, can put professional jobs at risk. We concur with the view in the report that where job losses occur, "An ethical approach to widespread AI-based automation of tasks performed by human workers requires helping the workers transition smoothly and proactively into new jobs and new careers."7

Automation of tasks that may negatively impact early training of graduates

Engineers

There is speculation that some of the initial decision-making in high-skill jobs in the engineering area is 'codifiable' and that low-end engineering tasks traditionally undertaken by professional engineers could be automated to some extent. As pointed out by Tim Chapman, director of the Arup Infrastructure Design Group, it is possible that many of the simpler professional tasks currently undertaken by early-career engineers as a means of developing their experience and judgement could be taken over by AI systems.8 This has the potential to exacerbate the already serious challenge of ensuring high-quality, comprehensive early-career development of graduate engineers. The latest engineering vacancy figures from the Department of Jobs and Small Business9 show that while employers received large fields of qualified candidates (19.3 on average per vacancy), more than 80 per cent of qualified applicants were not considered suitable due to a lack of sufficient experience in the engineering profession or a lack of experience in a particular specialisation or industry sector. We need to ensure that in implementing artificial or assistive intelligence, graduates are not sidelined from opportunities to gain experience and develop their skills and judgement by undertaking basic engineering tasks under the supervision of more experienced professionals. This early career development will be critical to ensuring our future national engineering capability.

Scientists

Pathology services provide another example of automation systems potentially compromising the training of graduates. While new graduates appear to have quickly adapted to technologies which identify bacteria, using these systems also means they do not get the opportunity to develop the traditional bench skills that allow them to identify bacteria using biochemical methods. The use of protein spectrometry using MALDI-TOF (Matrix-Assisted Laser Desorption/Ionization Time of Flight) for rapid identification of bacteria allows graduates to put a sample in a 'black box' and obtain results without having the knowledge to judge whether or not they are correct. This means graduates do not get the chance to gain in-depth, first-hand knowledge of bacteria identification using traditional biochemical methods.


Pressure to reduce staffing levels and potential impact on service quality

Pathology services

The deployment of AI systems doesn't occur in an employment vacuum. In Australia, alongside parts of pathology services being automated, services and laboratories are also increasingly being privatised, outsourced and offshored, with laboratories under constant pressure to cut costs and reduce staffing levels. Pathology services are a good example of an area that shows the potential benefits, but also the risks and quality compromises, that can accompany the application of AI systems where they are introduced purely as a cost-cutting mechanism.

There are currently several automated microbiology systems deployed in Australian laboratories that use AI to report on cultures from samples without human interaction. They include the Becton Dickinson Kiestra Total Laboratory Automation system (currently in use at SA Pathology and at least six other laboratories in Australia) and an LBT Innovations product called APAS Clever Culture Systems, which is also currently used in several Australian laboratories. These systems basically automate manual processes such as plate inoculation, streaking and incubating, and are particularly useful for reporting 'no growths', which can account for up to 40 per cent of samples received for testing. With some laboratories receiving up to several thousand samples for testing per day, the advantages of using AI systems are not only the time savings in identifying 'no growth' samples but also the time freed up to allow scientific staff to work on more complex and significant mixed cultures.

However, with the biggest outlay for any laboratory being employment costs, laboratory administrators are under constant pressure to review staffing levels and classifications on the basis that these systems are seen as 'automatic', with no scientific skills or input required.
This is, however, not the case: the systems need to be maintained and their algorithms modified, and they require extensive quality control, monitoring and cross-referencing with other test results to ensure that the results which ultimately determine a patient's outcome are reliable and of the highest standard. The AI systems in this context are an enabling tool for scientists, not a substitute for them. Pressure to reduce staffing levels and the lack of time for graduates to learn traditional biochemical techniques (refer to previous section) potentially compromise a laboratory's capacity to provide a high-quality, safe and reliable service.

Translation services

Another example of the quality of professional services being affected by automation has occurred over the last five years in the language services industry in the form of machine translation. A 2012 Professionals Australia study of the translating and interpreting industry10 found that globalisation, the internet and advances in machine translation technology had significantly impacted the quality of translation services as well as pay rates and opportunities for translating professionals. Our members have found that the most significant way automation has impacted the quality of translation services is when a client relies on machine translation to produce a first-draft document which is then submitted to translators for editing at a pay rate substantially less than if they had been asked to translate the document from scratch. Companies engaged in international business transactions, for example, can send machine-translated material to translating agencies to sub-contract for editing. Often this material is out of context and highly erroneous, meaning the translator cannot edit it effectively and there is a major impact on the quality of the translation.


In the context of pathology and translation services, then, the following issues were common to the rollout of automation:

• an assumption by decision-makers that AI systems are 'automatic' and that human oversight of automated or technology-assisted services is not needed - this is in sharp contrast to the views of experienced professionals who understand there is a need to monitor, cross-check and adjust and update algorithms in many contexts in which these systems are implemented;

• pressure to cut staffing levels as a result of the often mistaken assumption that AI-enabled systems are substitutes for the skills of a professional rather than a tool for them to use; and

• a drop in safety, quality control and/or consistency in the absence of a capacity to verify and cross-check outcomes.

Differential impact of job losses

In "Without workplace reforms, robots could replace more women than men", Rebecca Henderson says:

"A growing body of evidence suggests automation will amplify the gender gap in employment by replacing jobs where women have traditionally thrived, with jobs [in areas where] .. men have traditionally dominated .. Women not only participate less in the workforce, those who do are concentrated in the sectors most likely to be impacted by automation .. Women comprise .. the large majority of employees in the retail and service sectors, where automation presents the most immediate threat, but are significantly underrepresented in industrial sectors like utilities and manufacturing, where automation has the potential to increase demand for new workers with technical skills .. Women will lose jobs to automation at almost twice the rate of men."11

Radiation therapists and radiographers are an example of a segment of the STEM workforce in which females participate at higher rates than males. With the work undertaken by these professionals potentially open to AI applications, staffing levels and the nature of the work undertaken are likely to be impacted by the implementation of assistive technologies. The question of whether automation in this context will require new skills and new tasks with the job remaining intact, or whether the jobs can be fully automated requiring workers to transition into new roles, remains open; either way, the impact of job losses on women in this female-dominated area could be significant.

Women are just one example of a segment of workers that may be impacted to a greater extent than others as AI-enabled systems are rolled out (refer to the Bias in workplace recruitment by agencies section below for an example of the impact of bias in recruitment on mature-age workers). The potential for AI implementation to impact differentially on particular groups in the workforce needs to be noted in the framework and guidelines and accounted for in policy responses.

Risk of deprofessionalisation

One of the potential benefits of AI is that there may be a reallocation of the workforce away from 'routine' parts of medical and scientific practice to parts of the health system that would result in better patient care and more widely-accessible services. As an example, AI systems may provide an opportunity for non-medical professionals to make diagnoses or prescribe treatments previously undertaken by specially trained and qualified staff, in locations that formerly required travel to receive treatment. It is feasible to foresee that nurses or allied health professionals who have undertaken some form of training would be able to perform some of the tasks previously undertaken by degree-qualified individuals to deliver, for example, chemotherapy or dialysis services – potentially very positive changes. The potential downside, however, is that the deprofessionalisation of such roles could lead to lower-quality patient care and increased risk for the healthcare provider.


The shift in qualifications and capabilities required of scientific or clinical professionals delivering medical and scientific services has the potential to negatively impact the quality of patient care. The changes brought by AI-enabled systems potentially undermine the importance of degree qualifications as a factor not only in ensuring patient safety and high-quality care but also in mitigating risk in the high-risk area of medical services delivery. Where AI-enabled systems are seen as a substitute for qualified, trained staff rather than a tool to assist their work, there is the potential for the deprofessionalisation of the workforce, an impact on the quality of patient care and an increased risk to the organisation delivering the services. These issues should be accounted for in an ethical framework and best practice guidelines.

Managers accountable for decisions made by automated systems

Our members in managerial roles are likely to be accountable for decisions made by automated systems. The main finding of the 1995 Karpin Report12 was the need to recognise that strong leaders and managers require well-structured, systematic education and continuing professional development to add the greatest value to the national economy through their performance at enterprise level. Since the Karpin Report, the business context has changed profoundly, with increasing globalisation, new business models and the end of the mining boom bringing with it a need to transition from a resources-based to a service-based economy, along with the critical need to utilise technologies such as AI systems to drive greater levels of business innovation.

It is critical, then, that where managers are accountable for AI systems, they are equipped with the skills and competencies to enable them to take a strategic approach to the deployment of technologies and make informed judgements on the relevant operational and ethical issues. Managers must have access to relevant resources and training in the national and international standards, rules, laws, regulations, codes of conduct and ethics involved in implementing AI systems, along with guidance on the need to conduct AI systems audits and put in place oversight mechanisms.13

As pointed out in the discussion paper,14 the questions around responsibility for the consequences of AI systems-based decisions when things go wrong are not only complex, potentially involving the policy-maker, manager, supervisor and/or software programmer, but can also be difficult to codify in an employment agreement and/or position description. This is likely to present challenges for those looking to workplace law to set out and protect their terms of employment when they have AI systems within their area of responsibility.

Issues of diversity and bias

Bias arising from unrepresentative training datasets

The Ready, willing and able report quotes Thereaux, who says:

"We take bias, which in certain forms is what we call 'culture', put it in a black box and crystallise it for ever. That is where we have a problem. We have even more of a problem when we think that that black box has the truth and we follow it blindly, and we say, 'The computer says no'. That is the problem."15

The report goes on to say that:

".. If the data is unrepresentative, or the patterns reflect historical patterns of prejudice, then the decisions which they make may be unrepresentative or discriminatory as well … bias can emerge when datasets inaccurately reflect society, but it can also emerge when datasets accurately reflect unfair aspects of society."


Clearly there is a serious issue when AI systems rely on unrepresentative datasets, because they can be a means of systemically embedding bias, recreating historic patterns of discrimination and entrenching a lack of diversity in the contexts in which they are applied. A reliance on AI-assisted outcomes can also result in availability bias – a bias that arises when decision-makers grab readily available data to make decisions rather than using all available and relevant data, which would take more effort and time to analyse.16 As highlighted in the discussion paper, there is also the issue of automation bias, which is defined as "the tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct."17

Bias in workplace recruitment by agencies

The potential for bias in workplace recruitment is worthy of special mention as a form of bias arising from unrepresentative training datasets because of its importance to addressing the underrepresentation of particular groups in the Australian professional workforce. The authors of the Ready, willing and able report give an example of how bias in training datasets can impact hiring decisions in the recruitment setting:

".. an AI system trained to screen job applications will typically use datasets of previously successful and unsuccessful candidates, and will then attempt to determine particular shared characteristics between the two groups to determine who should be selected in future job searches. While the intention may be to ascertain those who will be capable of doing the job well, and fit within the company culture, past interviewers may have consciously or unconsciously weeded out candidates based on protected characteristics (such as age, sexual orientation, gender, or ethnicity) and socio-economic background, in a way which would be deeply unacceptable today."18

In these ways, AI systems can play a role in creating 'self-fulfilling prophecies'19 and self-perpetuating cultural values within workplaces. There is clearly potential for recruitment organisations using historic datasets to reproduce historic patterns in employment that may breach discrimination rules under current-day legislation. This was the experience of many of our members in relation to age-based discrimination, as set out in Professionals Australia's 2015 Mature-Age Workers Survey Report.20 Respondents widely reported recruitment agencies giving preference to younger workers when selecting candidates. The following comments are indicative of many made by respondents in this area:

• Doors are shut in your face before you even get a chance to get an interview. All they do is look at your birthdate and your resume is already on its way to the bin.

• The biggest problems mature workers face are the assumptions made by young human resource/recruitment company staff. Knocking off anyone over 45 is a quick way to thin out applicants.

• I have spent over six months looking for work. I found that in general agencies discriminated even more than actual employers. The position I started in three weeks ago was advertised and managed directly by the company’s management without external agencies.

• There are prejudices against mature workers in the external recruitment process particularly for less senior roles where recruiters are expecting young applicants.

The reality is that in contemporary HR recruitment, discriminatory preferences requested by a client combined with skewed training datasets based on previously successful applicants are likely to be embedded in algorithms used to sort online job applications and this may in turn lead to conscious or unconscious bias in the same way face-to-face discriminatory practices have operated in the past.


Bias arising from lack of diversity in AI workforce

The Ready, willing and able report says that AI development should be:

"carried out by diverse workforces, who can identify issues with data and algorithm performance."21

and that:

"If .. these systems .. [are to] serve us fairly rather than perpetuate and exacerbate prejudice and inequality, it is important to ensure that all groups in society are participating in their development .. we need diverse development teams in terms of specialisms, identities and experience."22

Professionals Australia agrees that one of the "main ways to address .. biases [is] to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct."23

Polarisation of the workforce

While research suggests that some highly-skilled jobs may be automatable with AI, it is widely agreed that highly-skilled workers "are likely to take a growing proportion of income, while low-skilled workers .. will have at least some work taken away from them by machines .."24 The questions that then arise are whether there will be jobs to replace those displaced by AI and automation, and if there are, whether they will be high-quality jobs with decent pay. An ethics framework needs to deal with the challenging issue of the polarisation of wealth and income in our community. This issue is nothing new, but the widespread rollout of AI systems has the potential to amplify existing issues around wealth distribution, and so questions around policy responses are likely to arise.

Recommendations/priorities

Regulation

As the discussion paper suggests, the best approach to regulation appears to be a combination of government regulatory adaptation, including protection of employment standards where needed, and high levels of transparency around AI development.25 The discussion paper's suggestion, drawn from a French national report on AI, that a national body be set up within an institutional framework to advise on possible regulatory approaches and ethics for the application of digital technologies and AI26 appears to be a sound one. A cross-disciplinary body of this kind could also make recommendations on the policy interventions needed to address widening inequality and segments of the community differentially impacted by the implementation of AI and automation.

Employment protections

It is critical that workers whose jobs are displaced by the implementation of AI have access to relevant protections and retraining options under employment law. It is also essential that managers are not excluded from the protections of the Fair Work Act, including unfair dismissal, so that they have recourse where they are unfairly held accountable for issues arising from AI implementations. The unfair dismissal income threshold would need to be removed or adjusted to extend protection and access of this nature to senior professionals with responsibility for AI-enabled applications.

Targeted retraining and a safety net

There is a clear need for retraining measures to deal with the impact of automation on the workforce.


As the discussion paper suggests: "An ethical approach to AI development requires helping people who are negatively impacted by automation transition their careers. This could involve training, reskilling and new career pathways."

A sensible policy response to the current and future need for individuals to skill, reskill and upskill would be a commitment by governments and industry to investment in lifelong learning. A commitment to a sound safety net for those who lose their jobs will also be critical. Because the implementation of AI may affect particular groups more than others, retraining initiatives should be targeted to ensure effectiveness. We concur with Jenny Macklin, who said in Pathways to Growth: "Investing in the [training, reskilling and career pathways of the] Australian people – including in the safety net – is the key to economic prosperity."27

Grow our national digital STEM skills base

AI and technological disruption are likely to affect many STEM professions, so there is a case for including a training component on the ethics of implementing AI, machine learning and big data in undergraduate STEM degrees to ensure an understanding of the potential issues. Including this component would be an investment in the nation's digital capabilities and sound preparation for professionals likely to be involved in, and often responsible for, AI implementation across the workforce.

Deal with bias in training datasets and algorithms

As Kriti Sharma is quoted as saying in the Ready, willing and able report:

"AI can help us fix some of the bias as well. Humans are biased; machines are not, unless we train them to be."28

To ensure that machines help us fix some of the bias, and that we do not train them to be biased, what is needed is an ethics framework that, as the UK report points out, "ensure[s] that data is representative of diverse populations and does not further perpetuate societal inequalities."29 To this end, the following could be included in an ethics framework and best practice guidelines:

• testing and auditing including discrimination impact assessments30 as standard measures to evaluate training datasets for potential bias;

• transparency and bias identification in algorithms and datasets used in workplace recruitment;

• access to de-identified open access large government datasets to ensure biases can be addressed in training datasets; and

• as suggested in the discussion paper, recourse mechanisms for individuals or groups adversely affected by a decision arising out of the use of an algorithm/automated system including impacts in the context of employment recruitment.31
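The first measure above, evaluating training datasets for potential bias, can be illustrated with a minimal sketch. The record layout (`"group"`, `"selected"` fields) and the 80 per cent threshold (the US "four-fifths rule" heuristic) are illustrative assumptions only; a real discrimination impact assessment would be far more extensive than a single selection-rate comparison.

```python
# Minimal sketch of a disparate-impact check on a recruitment dataset.
# Field names and the four-fifths threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(records):
    """Return the fraction of selected applicants per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += r["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Toy dataset: group A selected at 60%, group B at 30%.
applicants = (
    [{"group": "A", "selected": 1}] * 60 + [{"group": "A", "selected": 0}] * 40 +
    [{"group": "B", "selected": 1}] * 30 + [{"group": "B", "selected": 0}] * 70
)
print(disparate_impact(applicants))  # → {'A': False, 'B': True}
```

A check of this kind only surfaces a disparity in outcomes; deciding whether that disparity reflects unlawful discrimination, and what recourse follows, remains a matter for the framework and for human judgement.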

Address factors that contribute to an unrepresentative AI workforce

Women continue to be seriously underrepresented in Information Technology (IT). Women account for only 25 per cent of those with post-secondary IT qualifications in Australia.32 Female participation in the IT labour force is also lower than across other occupations, with women making up only 28 per cent of IT workers compared to 45 per cent across all professional industries.33 Our view is that numerical equality will only ever be part of the solution to increasing and sustaining women's participation in IT; the vital second part of the equation is addressing the issues that lead to women leaving tech workplaces. We see a focus on workplace issues, and on the reasons professional women leave the IT workforce, as a critical part of progress toward gender equity in IT; without it, the effort and investment in working towards numerical equality will be wasted. We need to ensure that women's IT education translates not only into workforce participation but into retention in IT workplaces. There will be no diversity in IT or AI development teams without targeted initiatives at the workplace level to retain underrepresented groups. We also need mechanisms to monitor participation by gender in specialist areas of IT including AI, cybersecurity and robotics. The IT labour market is currently under-researched by independent research institutions and would benefit from government funding to address this gap. Such research would also provide an evidence base for targeted training and retraining opportunities for the Australian IT, science and engineering workforces in emerging IT specialist areas.

Help close the gender pay gap

Given the likelihood that AI implementation will amplify the gender pay gap,34 encouraging the increased participation of women in IT generally, and in well-remunerated emerging fields such as AI, robotics and cybersecurity in particular, is likely to provide job opportunities and decent pay levels. Initiatives that encourage women to enter and remain in the IT profession across all levels of seniority will not only help ensure that women participate in the development of IT systems that affect them, but that their participation contributes to a reduction in the gender pay gap.35 The gender pay gap cannot be closed simply by paying males and females in like-for-like roles the same: occupational segregation, in the form of women's overrepresentation in less senior, lower-paid roles and fields and underrepresentation in higher-paid fields and at senior, management and executive levels, is a major factor contributing to the gap. It is therefore critical that currently underrepresented groups are encouraged to enter and remain in well-remunerated areas including AI.

Take a broad approach to policy responses and ethics framework development

It is imperative that high-quality interdisciplinary research into the potential impacts of AI, not only on particular industries but also on government and social policy, is available to inform the development and implementation of AI systems. As the discussion paper points out, the issues relevant to the implementation of AI go beyond those that can be adequately dealt with within the IT profession, and as the ILO report pointed out, a human-centred rather than technology-centred approach is needed. The upcoming ACOLA report referred to in the discussion paper will be instrumental in guiding action in this area.36

Page 13: Realising the benefits of artificial intelligence: issues ...

| Page 13

Conclusion

Professionals Australia sees huge potential for AI systems to create opportunities to improve the quality of working lives, expand choice, close the gender pay gap and help address global inequality by automating baseline functions and operations in a myriad of contexts. However, we also see AI as a significant potential disruptor to the professional workforce, and some of its effects are already evident in applications to date. We believe that developing an ethics framework and best practice guidelines for AI implementation is a worthwhile exercise, and see the following principles as fundamental to ensuring a human-centred, transparent approach to AI implementation across the Australian workforce:

• ensuring a commitment to lifelong learning for all workers and retraining, reskilling, upskilling and new career pathways for displaced workers;

• providing a proper safety net for those whose jobs are displaced;

• ensuring protection of safety standards, reliability and service quality by providing adequate staffing levels, quality control, auditing as needed and provision for cross-checking to verify consistency and outcomes where AI-enabled systems are deployed;

• ensuring graduates understand the underlying processes and mechanisms of systems being automated to ensure AI doesn't undermine their early career development or create gaps in their expertise;

• providing appropriate mechanisms to ensure access to broad training datasets as a reference point to determine whether bias in a training dataset is evident;

• providing appropriate recourse to deal with the potential issues of discrimination that can emerge where decisions are made using bias-embedded data;

• developing and funding initiatives to ensure diversity in AI implementation work teams, the AI workforce and the IT workforce more generally;

• ensuring managers are provided with appropriate training and access to recourse at the relevant industrial tribunals where they are held accountable for the impacts of AI-enabled systems;

• questioning assumptions by decision-makers that AI systems are 'automatic' and do not require human oversight, a view which is often held in sharp contrast to those of relevant experienced professionals who understand there is often a need to monitor, cross-check, adjust and update algorithms where AI systems are in operation;

• ensuring cuts to staffing levels do not occur without genuine consultation with the relevant parties when AI systems are implemented. Pressure to cut staffing levels often arises as a result of the assumption that AI-enabled systems are substitutes for the skills of a professional rather than a tool for them to use; and

• managing risk and liability where an outsourced provider uses AI-enabled systems and errors are made.

Submission preparation and contact

Dr. Kim Rickard, e: [email protected]

Page 14: Realising the benefits of artificial intelligence: issues ...

| Page 14

Endnotes

1 IBISWorld Australia Industry Reports (ANZSIC).
2 ILO Global Commission on the Future of Work (2019). Work for a brighter future. Available at https://www.ilo.org/global/topics/future-of-work/publications/WCMS_662410/lang--en/index.htm.
3 Stanford University, One Hundred Year Study on Artificial Intelligence (2014). The other areas noted were transport, healthcare, education, low resource communities, public safety, homes and entertainment. Quoted in Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., Scowcroft, J. and Hajkowicz, S. (2019). Artificial Intelligence: Australia's Ethics Framework. Data61 CSIRO, Australia, p.23.
4 Drawn from the definition in Dawson, D. and Schleiger, E. et al. (2019), pp.14 and 17.
5 Dawson, D. and Schleiger, E. et al. (2019), p.55.
6 ibid.
7 ibid.
8 Alba, M. (2017). Artificial intelligence and engineering. Available at https://www.engineering.com/DesignerEdge/DesignerEdgeArticles/ArticleID/14723/Artificial-Intelligence-and-Engineering.aspx.
9 Australian Government, Department of Jobs and Small Business (2019). Engineering Professions 2017-18. Available at https://docs.jobs.gov.au/system/files/doc/other/ausengineeringprofcluster.pdf.
10 Professionals Australia (2012). Lost in translation: Barriers to building a sustainable Australian translating and interpreting industry: a report by the Association of Professional Engineers, Scientists and Managers, Australia.
11 Henderson, R. (2017). Without workplace reforms, robots could replace more women than men. Available at https://morningconsult.com/opinions/wihtout-workplace-reforms-robots-replace-women-men/.
12 Karpin, D.S. (1995). Enterprising nation: renewing Australia's managers to meet the challenges of the Asia-Pacific century. Report of the Industry Task Force on Leadership and Management Skills.
13 Dawson, D. and Schleiger, E. et al. (2019), p.26.
14 ibid, p.36.
15 Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able?, p.42.
16 Toner, M. (2017). Unconscious bias – what is it and why is it important in the STEM context? Available at http://www.professionalsaustralia.org.au/professional-women/wp-content/uploads/sites/48/2014/03/Unconcious-Gender-Bias-in-STEM-Mark-Toner.pdf.
17 Parasuraman, R. and Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), pp.230-253, in Dawson, D. and Schleiger, E. et al. (2019), p.35.
18 Select Committee on Artificial Intelligence (2018), p.41.
19 Dawson, D. and Schleiger, E. et al. (2019), p.6.
20 Professionals Australia (2015). Wasted Potential: The Critical Role of an Experienced Professional Workforce in Facing our Key Economic Challenges, p.19.
21 Select Committee on Artificial Intelligence (2018), p.43.
22 ibid, p.57.
23 ibid, p.44.
24 ibid, p.83.
25 ibid, p.23.
26 Villani, C. (2018). For a meaningful artificial intelligence: Towards a French and European strategy. French Government, in Dawson, D. and Schleiger, E. et al. (2019), p.21.
27 Insights (2014), Melbourne Business and Economics, Vol. 16, November 2014, University of Melbourne Faculty of Business and Economics. Pathways to Growth: the Reform Imperative, p.8.
28 Sharma, K., quoted in Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able?, p.44.
29 Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able?, p.44.
30 Dawson, D. and Schleiger, E. et al. (2019), p.21.
31 ibid, p.8.
32 Office of the Chief Scientist (2016). Australia's STEM Workforce, p.13.
33 Deloitte Access Economics (2018). Australia's Digital Pulse, p.19.
34 Henderson, R. (2017). Without workplace reforms, robots could replace more women than men.
35 This is also true of other specialisations in IT such as cybersecurity. Cybersecurity is a good example of an emerging area in the IT sector that requires a greater pool of IT talent to draw on to meet Australia's future skills needs. Cybersecurity Ventures estimates that with cybercrime likely to triple over the next five years, around 3.5 million job openings will be created by 2021. Their research shows that women represented only 20 per cent of the global cybersecurity workforce in 2018. If Australia is to remain competitive in this period of rapid change and fill the demand for IT skills in emerging areas, we need to build a skilled, adaptable, vibrant and diverse local IT workforce. The same holds true in the development of AI systems.
36 Dawson, D. and Schleiger, E. et al. (2019), p.22.