Cross Government AI Adoption Review: Final Report
➔ Final Report - Table of Contents

Executive Summary

Introduction
◆ Background
◆ Project Objectives and key aims
◆ Methodology

Observations
◆ ‘As-is’ landscape
◆ Long-list to medium list analysis
◆ Departmental Engagement

Enabling Conditions
◆ Data Infrastructure
◆ Data Governance
◆ People, Skills & Culture

Opportunities
◆ Headline observations
◆ MoJ - Honourable mention

Next steps

Appendices
◆ Prioritisation comments and research (including returned questionnaires, workshop task tracker)
◆ Workshop summaries
◆ Project long-list with rejection and rescoping recommendations

Information has been redacted

Page 2
Executive Summary
AI Adoption Review
Page 3
Executive Summary - Background and Objectives of the AI Review➔
➔ The Autumn 2018 Budget commits HM Government to review how it can use Artificial Intelligence (AI), automation and data in new ways to drive public sector productivity and wider economic benefits.
➔ To support this, a cross-government AI review was established. This has been led by the Government Digital Service and Office for AI, with support provided by Faculty.
➔ As part of the review, the team has worked with departments to scope out the most significant opportunities to improve the effectiveness, efficiency and productivity of key public services.
Background
Faculty is an Artificial Intelligence firm that works to bring the benefits of AI to everyone. We specialise in working with organisations to develop their data, skills and tools to use AI to solve business problems. We have worked on over 300 engagements across the public, private and charity sectors.
1. Map the current landscape of AI applications to solve policy problems across the UK central Government, its ALBs, and international public and private sector organisations.
2. Scope the most significant opportunities to introduce AI technologies across the UK central Government to improve the effectiveness, efficiency and productivity of key public services.
3. Identify cross-fertilisation opportunities across UK central Government and its ALBs that could increase shared learning and, ultimately, AI adoption.
Objectives
Page 4
It is important that our decisions about which cases to recommend for investment are guided by reliable, robust and auditable criteria. We constructed these criteria in conjunction with HMT and No10, see below for a summary:
PRIMARY FILTER (long list → medium list) - immediate qualification of problem:
1. Impactful Issues - is this a strong, well-articulated opportunity?
2. Tractable Questions - can this problem be solved with today’s AI technology?
3. Accessible Data - does the data exist and can it be accessed?
4. Actionable Outcome - is there political appetite / an appropriate implementation method?

DETAILED CRITERIA (medium list → short list) - prioritisation based on detailed feedback:
1. Impactful Issues - is there VFM? Does it align with departmental objectives? Is there a good public story?
2. Tractable Questions - technical feasibility assessment
3. Accessible Data - high-level assessment of data completeness & quality
4. Actionable Outcome - is there an actionable insight? Does it align with the Data Ethics framework?
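The two-stage funnel above can be sketched as a pair of functions. This is a purely illustrative model: the field names, scores and pass criteria are hypothetical and not drawn from the Review itself.

```python
# Illustrative sketch only: the two-stage prioritisation funnel expressed in
# code. Field names and scoring are hypothetical, not from the Review.
from dataclasses import dataclass


@dataclass
class Opportunity:
    well_articulated: bool        # Impactful Issues
    tractable_today: bool         # Tractable Questions
    data_accessible: bool         # Accessible Data
    actionable: bool              # Actionable Outcome
    vfm_score: int = 0            # detailed criteria, e.g. scored 0-5
    feasibility_score: int = 0
    data_quality_score: int = 0
    ethics_aligned: bool = False  # alignment with the Data Ethics framework


def primary_filter(opp: Opportunity) -> bool:
    """Long list -> medium list: immediate qualification of the problem."""
    return all([opp.well_articulated, opp.tractable_today,
                opp.data_accessible, opp.actionable])


def detailed_score(opp: Opportunity) -> int:
    """Medium list -> short list: prioritisation based on detailed feedback."""
    if not opp.ethics_aligned:
        return 0  # misalignment with the Data Ethics framework rules it out
    return opp.vfm_score + opp.feasibility_score + opp.data_quality_score
```

Opportunities failing any primary-filter criterion would be de-prioritised immediately; those remaining would be ranked by the detailed score.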
Executive Summary - Methodology used to prioritise the opportunities➔
Page 5
Gathering opportunities
1. Surveys sent out to every department to find out current state of AI
2. Workshops with individual departments (Home Office, MoJ, MoD, HMRC, DWP, DfT)
3. Calls with representatives from international governments (Sweden, Canada, Germany, NZ, USA (Stanford University), Switzerland & Estonia) & desk research
4. Follow up meetings, calls and review of documentation received from departments

➔ 177 Opportunities gathered from departments
➔ 45 Opportunities on the medium list
➔ 12 Use cases investigated in 8 departmental workshops
➔ 7 Recommendations for investment

Executive Summary - Opportunities were gathered and prioritised ➔
Page 6
Executive Summary - Methodology used for value for money analysis➔
Page 7
This indicative analysis was performed by Cabinet Office economists in line with HMT Green Book guidance. In most situations the analysis attempts only to quantify direct benefits to HMG; where indirect benefits were quantified, they are reported separately.
Information has been redacted
Information has been redacted
Information has been redacted.
Of the 177 Opportunities gathered from departments:
➔ 116 were as yet unimplemented ideas
➔ 31 were Proof-of-Concept (PoC) phase
➔ 30 were fully deployed and in production (some with more funding required)
Opportunities Breakdown
➔ Departmental surveys - 73
➔ Additional engagements (e.g. calls & meetings) - 50
➔ Departmental workshops - 42
➔ Desk research - 12
Page 8
➔ Executive Summary - Observations from Discovery
Many of the opportunities submitted by departments did not include enough information or were unsuitable for AI / ML. To quickly identify these, we applied a ‘primary filter’ to rapidly and pragmatically assess the 177 longlist opportunities identified. Of the de-prioritised opportunities:

➔ Not enough information (69%)
◆ Poorly defined problem
◆ No mention of datasets or techniques
➔ Not suitable for ML / AI (17%)
◆ Simpler technology or better solution / change in process more appropriate
➔ Not possible with the current state of AI (14%)

Opportunities that passed this triage were taken through the ‘secondary filter’; a multistage, comprehensive prioritisation framework, designed with HMT and No.10.

Page 9
Executive Summary - Details on de-prioritised opportunities➔
Executive Summary - Enabling Conditions that will promote AI Adoption ➔
Many departments are conducting skills reviews. We recommend funding the requests that come out of these reviews.
Information has been redacted.
Accessing sensitive data is difficult both within and between departments. We recommend encouraging the sharing of data by setting up and enforcing processes, while retaining oversight and control.
Access to secure environments on the cloud is a blocker for many departments. All departments need help to establish infrastructure that meets their needs.
Information has been redacted. Information has been redacted
Page 10
Investment in these three pillars of enabling conditions will have a direct positive impact on AI adoption across government.
Page 11
Executive Summary - Collaboration opportunities of AI/ML mapped to active buckets➔
The matrix below illustrates the buckets of AI applications across government that are low-risk and sensible. Some of these cases are suitable for collaboration: either learnings can be shared (e.g. from best practice examples to departments attempting work in the same space), or 'working groups' could be set up to encourage Data Scientists to leverage skills & share insights. Lastly, this list could indicate where data sets may be beneficial to share.
Information has been redacted
Executive Summary - Key opportunities for investment identified ➔
Page 12
Information has been redacted.
Information has been redacted.
Information has been redacted.
Information has been redacted.
Executive Summary - Key opportunities for investment identified ➔
Page 13
Information has been redacted.
Information has been redacted.
Information has been redacted.
Information has been redacted.
Executive Summary - Key opportunities for investment identified ➔
Page 14
Information has been redacted.
Information has been redacted.
Information has been redacted.
Information has been redacted.
Executive Summary - Key opportunities for investment identified ➔
Page 15
Information has been redacted.
Executive Summary - Summary of the key opportunities ➔
Page 16
The opportunities can be grouped in the following way
Information has been redacted
End of Executive Summary
AI Adoption Review - Final Report
Page 17
Introduction
AI Adoption Review
Page 18
Introduction - Background and Objectives of the Cross Government AI Adoption Review➔
The Autumn 2018 Budget commits HM Government to review how it can use Artificial Intelligence (AI), automation and data in new ways to drive public sector productivity and wider economic benefits. This cross-government AI review (“the Review”), led by the Government Digital Service (GDS) and the Office for AI (joint DCMS and BEIS), has three Key Objectives:
1. Mapping the as-is landscape across UK central Government and gathering case studies from international, public and private sector organisations, with regard to AI understanding and application across Government and its ALBs (including common barriers & best practice)
2. Scoping the most significant opportunities to introduce AI technologies across the UK central Government spend portfolio and delivery of public services (both to increase productivity and improve service quality) - identifying in quantifiable terms the applications and opportunities that would deliver the most sizeable efficiency gains
3. Identifying areas where collaboration across UK central Government and its ALBs could increase AI adoption
The findings of the Review will feed into the GDS-led Innovation Strategy (published Spring 2019) and will also be used to inform the Government’s next steps on its strategy for the adoption of AI technologies across the public and private sectors.
Page 19
We gathered 177 opportunities from departments.

1. Surveys sent out to every department to find out current state of AI
2. Two-hour workshops with individual departments (Home Office, MoJ, MoD, HMRC, DWP, DfT)
3. Calls with representatives from international governments (Sweden, Canada, US, Germany, NZ, & Estonia) & desk research
4. Follow up meetings, calls and review of documentation received from departments

➔ Surveys returned from 12 departments
➔ 8 1st round workshops held
➔ 6 International conversations held
➔ 24 Additional meetings

Introduction - Discovery led us to identify 177 opportunities, at varying stages of maturity➔
Page 20
The project team drew upon various research streams to explore the ‘as-is’ landscape of AI Adoption across Government. Our interviewees ranged from product and business owners, to analysts and software engineers.
It is important that our decisions about which cases to recommend for investment are guided by reliable, robust and auditable criteria. We constructed these criteria in conjunction with HMT, see below for a summary:
PRIMARY FILTER (long list → medium list) - immediate qualification of problem:
1. Impactful Issues - is this a strong, well-articulated opportunity?
2. Tractable Questions - can this problem be solved with today’s AI technology?
3. Accessible Data - does the data exist and can it be accessed?
4. Actionable Outcome - is there political appetite / an appropriate implementation method?

DETAILED CRITERIA (medium list → short list) - prioritisation based on detailed feedback:
1. Impactful Issues - is there VFM? Does it align with departmental objectives? Is there a good public story?
2. Tractable Questions - technical feasibility assessment
3. Accessible Data - high-level assessment of data completeness & quality
4. Actionable Outcome - is there an actionable insight? Does it align with the Data Ethics framework?
Introduction - The opportunities underwent prioritisation using a robust methodology➔
Page 21
Introduction - Prioritised opportunities were invited to participate in Workshops ➔

The purpose of the second round workshops was to gather as much information about the case as possible. We designed the process accordingly:

1. Pre-workshop questionnaires were sent out to participating departments
2. 8 workshops on 12 opportunities were held, allowing us to probe answers from the questionnaires
3. Follow-up questions and requests for documentation were sent to departments to complete our understanding
Page 22
Introduction - Prioritised opportunities were invited to participate in Workshops ➔
Page 23
Typical Workshop Agenda (2 hours)
➔ Introductions and Overview of the AI Review (15 minutes)
➔ Deep Dive into the use case (1 hour 30 minutes)
◆ Overview of the Current Process and Pilot
◆ Data and technical feasibility
◆ Cost Benefit Analysis
➔ Next Steps and Q&A (15 minutes)
Aims of Workshop (2 hours)
➔ Determine as much detail as possible on each use case.
➔ Stress test the idea with technical and policy input.
➔ Identify other opportunities, re-scope the idea if needed.
Workshops were run with the following departments to explore their proposed ideas. Participation of technical and policy profiles was encouraged, in order to evaluate each opportunity’s validity and suitability for inclusion in SR2019 submissions.
Information has been redacted
Introduction - Workshops evaluated the opportunities for technical and economic merit ➔
Page 24
Opportunity Information
➔ Detailed description of the use case.
➔ The user need that could be solved.
➔ Description of the end-to-end As-Is process (if available).
➔ Description of what the process would look like where an AI model is integrated.
➔ Any other information for more understanding.
Impactful Issues
➔ Benefits of the use case.
➔ Assessment of value for money for the use case.
➔ Other expected costs.
Tractable Questions
➔ Engagement with the data science and engineering team to evaluate the technical complexity of the project (if any).
➔ Availability of data architecture and how costly it is to build.
➔ Best practice examples in public or private sector.
➔ Availability of a baseline model performance.
➔ Minimum performance requirements.
Data
➔ How useful the data is in terms of information.
➔ Location of the data.
➔ How quickly the data can be accessed.
➔ How frequently the data is updated.
➔ How complete the data is.
➔ Format of the data.
➔ Data permissions and security.
Actionable Outcomes
➔ The amount of sponsorship the opportunity has.
➔ Number of different stakeholder groups involved.
➔ Any potential barriers to success.
The second round workshop process allowed us to evaluate the technical, economic and policy feasibility of the opportunities.
Page 25
There was some post-workshop engagement with departments, as they were encouraged to feed back information on cost-benefit analysis and data accessibility.
Introduction - Further engagement was required with some departments 1/2➔
Information has been redacted
Page 26
There was some post-workshop engagement with departments, as they were encouraged to feed back information on cost-benefit analysis and data accessibility.
Introduction - Further engagement was required with some departments 2/2➔
Information has been redacted
Introduction - Prioritised cases were assessed for value for money analysis➔
Page 27
This indicative analysis was performed by Cabinet Office economists in line with HMT Green Book guidance. In most situations the analysis attempts only to quantify direct benefits to HMG; where indirect benefits were quantified, they are reported separately.
Information has been redacted
Observations
AI Adoption Review
Page 28
Observations - The typology of AI applications across HMG ➔
Page 29
Very common
➔ Intelligent inspections - e.g. risk and compliance for institutions such as schools and farms, or at borders
➔ Triage of interactions - trivial decisions such as rejecting incomplete forms or routing enquiries to the right place
➔ Fraud detection - something AI excels at, e.g. grant or benefit claimant fraud
➔ Management of documentation - e.g. processing documents with Optical Character Recognition / using NLP to investigate the contents
Common
➔ Exploiting unstructured data - e.g. using satellite data - international development, agricultural land usage
➔ Improved user experience - e.g. speeding up processing time of applications
➔ Assisted decision-making - supporting every-day decisions with data - whether providing recommendations or providing information. Importantly, this type of application lets the human stay in the loop and in control.
Following the initial Discovery phase of work, we better understand the broad patterns in AI / ML applications across HMG. This list is not exhaustive, but it illustrates the broad categories of AI that departments are investing in.
Observations - Key reflections on why opportunities were de-prioritised➔
Page 30
➔ Use-cases were not formulated effectively or detailed enough to make an informed assessment. 69% of de-prioritised opportunities were in this category.
➔ Use-cases not suitable for ML / AI (e.g. ‘decision support’ or ‘data retrieval’). 9% of de-prioritised opportunities were in this category.
➔ Simpler technology / processes available (e.g. rules-based systems more suitable). This category also included cases for which commercially available software exists - for example, translation or transcription software. 8% of de-prioritised opportunities were in this category.
➔ Ambitious application of AI (e.g. satellite imagery for potholes). Some cases in this category showed a misunderstanding of how ML models learn. For example, they spoke of drawing unreasonably advanced conclusions from small data sets, or of the ML models “learning over time”. In practice, the data to support a viable model should exist before modelling can commence. 14% of de-prioritised opportunities were in this category.
There are a number of reasons why opportunities were initially deprioritised
Engagement with departments varied across government, which impacted the nature and quality of the reported opportunities. Some themes were raised with the Steering Group:

➔ Theme: Some AI Review leads were from technical roles
◆ Impact: Leads are not always aware of strategic priorities of the department
◆ Review Response: Engaged with policy leads within departments where possible, including their opportunities in any prioritisation

➔ Theme: Some opportunities put forward by AI Review Leads are already funded
◆ Impact: No requirement for an SR bid to be created
◆ Review Response: Focused on technically scoping ideas with the department, providing expert support to evaluate existing / identify additional opportunities (e.g. research based work)

➔ Theme: Some departments are already mature in the AI space
◆ Impact: Some departments will continue to progress without involvement of the AI Review team
◆ Review Response: Worked with AI Leads to help with opportunity mapping where appropriate, and to identify recommendations (e.g. infrastructure)

➔ Theme: Some departments do not have time for second round workshops
◆ Impact: Opportunities will not be unpacked in detail, and not included in SR bids
◆ Review Response: Repeatedly attempted to contact departments to schedule workshops, holding calls and sending pre-questionnaires to reduce time required
Observations - Departmental engagement across government varied ➔
Page 31
Stakeholders engaged: 64% Technical Stakeholders, 36% Policy Stakeholders

Level of Engagement by stakeholder type:
➔ Technical (16): High 20%, Medium 40%, Low 40%
➔ Policy (9): High 0%, Medium 22%, Low 78%
Observations - Departmental engagement across government varied ➔
Page 32
Engagement with departments varied across government, which impacted the nature and quality of the reported opportunities
Information has been redacted
Information has been redacted
Information has been redacted
Information has been redacted
Information has been redacted
Information has been redacted
Enabling Conditions
AI Adoption Review
Page 33
Page 34
Following completion of the first round of workshops and Checkpoint Event, five barriers to AI adoption were identified. Some of these are generalisable across domains, and others are specific to the public sector.
➔ Important to retain IP and access to data
➔ No list of pre-approved suppliers makes procurement labour intensive
➔ No mechanism to share funding, which means collaboration and shared learning is difficult
➔ Ongoing funding required for development and maintenance needs
➔ The model must fit into internal operations
➔ Experimenting and validation post-deployment
➔ Proof of model’s workability in a live environment
➔ Barriers in accessing data from sensitive sources
➔ Lack of common data standards
➔ Variable data quality e.g. missing values & input errors
➔ Diverse and flexible teams essential to success
➔ Data scientists should be at liberty to experiment
Enabling Conditions - Five key barriers to AI adoption were identified ➔
Page 35
Enabling Conditions - Three enabling conditions have been identified➔
These five barriers can be grouped under three enabling conditions, all of which are designed to help increase AI adoption across the UK government.
Data Infrastructure Data Governance People, skills & culture
A modern data architecture, with improved data pipelines and standard tooling, can help departments access the right data to build their AI models.
Setting up the right data access controls and aligning cross-departmentally on ownership will help the UK Government realise the considerable potential of its data.
A culture needs to be created that will enable hiring & retaining top data science talent, upskilling existing analytical talent and empowering the wider department to understand, value and take advantage of AI.
The lack of a widely-available and consistently-used modern data architecture means that AI model build and testing is time-consuming, labour-intensive and largely unsupported for production, and data assets cannot be joined.
Enabling Conditions - Data Infrastructure➔
Individual datasets should be accessible from a single cross-department data layer which handles permissioning, audit and versioning. The data layer should allow data from many sources to be built into multiple models and services, and data versioning should be incorporated to allow models and forecasts to be based on stable data if required.
Departments should review the processing pipeline for each dataset, seeking cost-effective opportunities to deploy automated processing. Specifically: API data supply from external partners with matched single-spine identifiers, and scripted combination of data into relational databases, rather than spreadsheets.
Improved data science tooling will allow for faster adoption of AI within and across departments. Coding languages such as Python and cloud-based analysis system offer wide-ranging capabilities. Additional tools for data ingestion and enhanced collaborative working (e.g. a data science platform) should also be considered.
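The scripted, database-backed processing described above can be sketched in a few lines. This is a minimal illustration only, using Python's standard library; the file, database and table names are hypothetical, and a real departmental pipeline would add schema validation, typing and permission checks.

```python
# Minimal sketch: scripted ingestion of a departmental CSV extract into a
# relational database, rather than manual spreadsheet handling.
# File, database and table names are hypothetical placeholders.
import csv
import sqlite3


def ingest_csv(db_path: str, table: str, csv_path: str) -> int:
    """Load a CSV extract into a relational table; return rows loaded."""
    conn = sqlite3.connect(db_path)
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # first row gives the column names
        cols = ", ".join(f'"{c}"' for c in header)
        placeholders = ", ".join("?" for _ in header)
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        rows = [tuple(r) for r in reader]
        # parameterised bulk insert, repeatable and auditable
        conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', rows)
    conn.commit()
    conn.close()
    return len(rows)
```

Because the ingestion is a script rather than a manual step, it can be re-run, versioned and audited, which is the point of the recommendation above.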
Page 36
This can be remedied by implementing:
Enabling Conditions - Data Infrastructure➔
Moving from a Proof-of-Concept (PoC) model to production was cited by every department as a difficult part of the data science process. This is no surprise, as it brings with it a whole host of dependencies, including Engineering know-how, data permissions (beyond a “data dump” that may have been used for developing the model), policy alignment and operational needs.
Perhaps most importantly, however, deployment requires robust data pipelines and suitable infrastructure. While it is impossible to detail best practice here, these issues are covered in depth in the “Guide to AI”.
“We do not know what is required to move from PoC to PROD”

“Very few models make it to production due to technical readiness”
Page 37
Further examples are described in the Executive Summary: Enabling Conditions that will promote AI Adoption across government
Information has been redacted.
Enabling Conditions - Data Governance➔
Page 38
A lack of access to key datasets undermines departments’ ability to innovate. We recognise that data governance in large organisations is a challenge, in particular for those with legacy systems and data. To realise the potential of the Government’s data, suitable access controls & cross-departmental alignment are required.
This can be achieved by implementing:
Government departments create a huge amount of data as a product of both policy programmes and research. This data has huge potential to inform policy and improve operations, both customer-facing and internal. At present, this data is not well managed: departments need to be able to make use of their own data, and can improve the quality and range of data they bring to the table with partner organisations.
When data access controls are clear, data can be used by internal teams who need it most. It also facilitates effective interaction with external research partners, leveraging external expertise and resources. Even where data ownership is clear, a lack of consistent and well-understood access controls means that much of departments’ data isn’t shared - either internally or externally - when it could be.
Data in Government is no longer the siloed set-up it once was. More than ever, there is a need for effective governance that allows for collaboration between departments. This goes beyond data sharing, and includes alignment on, for example, who gathers what data, which can reduce the burden on end users.
Enabling Conditions - Data Governance➔
“Inability to access the right data is a common blocker for most projects”
Page 39
Further examples are described in the Executive Summary: Enabling Conditions that will promote AI Adoption across government
Information has been redacted.
Information has been redacted.
Information has been redacted.
Enabling Conditions - Data Governance➔
Strategically important data sources are currently governed in an ambiguous or provincial manner. For example:
➔ It is hard to identify individual owners, and even harder to establish any consistency in their powers and duties in relation to their stewardship of that data
➔ Bilateral data sharing gateways between departments are antediluvian (taking 6 months to negotiate but not providing any transparency or accountability in how they are used)
➔ Information has been redacted.
➔ Legal constraints limit whether departments can share data with each other: as they do not stand to benefit directly, sharing is not permitted under their “core aims”
To address these data governance issues, a more consistent and rigorous set of operational responsibilities in relation to data could be applied to publicly funded IT systems within particular departments and discharged by the relevant departmental leader (e.g. the Chief Information Officer).
“Inability to access the right data is a common blocker for most projects”
Page 40
Page 41
Poor retention rates, and a lack of time and opportunity to maintain skills across data science projects, hinder capability building within departments. Creating the right culture within the department - ensuring top talent is hired and retained, whilst existing analytical talent remains relevant and is upskilled - is essential to AI adoption.
This can be achieved by implementing:
It is important for departments to understand and be ready for the challenges of managing AI projects. As well as all the usual project management challenges, there are some which are particular to data science and AI - for example, there is no fixed ‘workflow’ for technical data science (unlike software development).
The government has a wide range of skills and capabilities amongst its departments. There are pockets of expertise, as well as pockets of potential. Supporting both successful teams and emerging teams will quickly yield positive results.
Data leadership, to shape and direct what departments do with data, could help address some of this uncertainty. Fostering a culture that moves away from prescriptive approaches towards ones that are experimental, iterative and agile is another enabler, one that many modern companies rely on today.
Enabling Conditions - People, Skills & Culture➔
Enabling Conditions - People, Skills & Culture➔
Capability & skills
Best results are achieved by giving data scientists the tools they want and need. This information has been redacted.
Data scientists should have freedom to experiment. Data science is experimental by its nature, and we’ve heard from departments that acknowledging this fact was a vital part of the development of their teams.
Page 42
Diverse, flexible and well-resourced teams are essential to success - delivery leads, data engineers, and strategists who understand AI.
Information has been redacted.
“Data science is a team sport”
Information has been redacted
Information has been redacted
When procuring AI solutions, upskilling as part of the engagement means you are left with a team who can maintain and improve the models, and use the skills they have learnt on other projects. This is not always easy. Hiring externally is another way of bringing in new knowledge. This information has been redacted.
Information has been redacted
Information has been redacted.
“Data science is a team sport”
This information has been redacted.
Page 43
Capability & skills
Enabling Conditions - People, Skills & Culture➔
There are several things that give departments difficulty when creating business cases:
➔ The exploratory nature of data science leads to uncertainty around benefits. HMRC noted that even strong cases are hard to get funded when there is high uncertainty.
➔ Once a project has secured funds to move through PoC, and even engineering funds for deployment, it needs ongoing funds to support its maintenance, not to mention ongoing development in order to continue being useful. Better funding in this area would encourage departments to fund their own PoCs, as they would stand to benefit from the return on investment if successful.
➔ Many projects do not sit cleanly within one department. At present, there is no easy mechanism to share funding, which means collaboration and shared learning is difficult
Funding
Page 44
“It is challenging to make the case for funding to make the step up to production”
All three of these barriers would benefit from dedicated pots of funding: a pot for AI in general, a pot for deployment of PoCs, and a pot for shared projects.
Enabling Conditions - People, Skills & Culture➔
Enabling Conditions - People, Skills & Culture➔
Procurement
“AI / ML requires a different approach to traditional software procurement”
Page 45
Information has been redacted.
Information has been redacted.
Opportunities
AI Adoption Review
Page 46
Opportunities - Summary➔
Page 47
The opportunities can be grouped in the following way
Information has been redacted
Information has been redacted
Information has been redacted
Information has been redacted
Page 48
Opportunities - Honourable mention - Information has been redacted ➔
Information has been redacted
Information has been redacted
Information has been redacted
Page 49
Opportunities - Opportunity One➔
Information has been redacted
Page 50
Opportunities - Opportunity One➔
Information has been redacted
Information has been redacted
Information has been redacted
Opportunities - Opportunity One➔
Page 51
Information has been redacted
Information has been redacted
Information has been redacted
Page 52
Opportunities - Opportunity Two➔
Information has been redacted
Information has been redacted
Page 53
Opportunities - Opportunity Two➔
Information has been redacted
Information has been redacted
Page 54
Opportunities - Opportunity Two➔
Information has been redacted
Information has been redacted
Information has been redacted
Opportunities - Opportunity Two➔
Information has been redacted
Page 55
Information has been redacted
Information has been redacted
Opportunities - Opportunity Three➔
Information has been redacted.
Page 56
Information has been redacted.
Page 57
Opportunities - Opportunity Three➔
Information has been redacted.
Information has been redacted
Information has been redacted.
Opportunities - Opportunity Three➔
Information has been redacted.
Page 58
Information has been redacted.
Information has been redacted.
Page 59
Opportunities - Opportunity Four➔
Information has been redacted
Information has been redacted
Page 60
Opportunities - Opportunity Four➔
Information has been redacted
Information has been redacted
Page 61
Opportunities - Opportunity Four➔
Information has been redacted
Information has been redacted
Page 62
Opportunities - Opportunity Four➔
Information has been redacted Information has been redacted
Information has been redacted
Information has been redacted
Summary of Proposition
Page 63
Opportunities - Opportunity Five➔
Information has been redacted
Information has been redacted
Page 64
Opportunities - Opportunity Five➔
Information has been redacted
Information has been redacted
Page 65
Opportunities - Opportunity Five➔
Information has been redacted
Information has been redacted
Page 66
Opportunities - Opportunity Six➔
Information has been redacted
Information has been redacted
Page 67
Opportunities - Opportunity Six➔
Information has been redacted
Information has been redacted
Page 68
Opportunities - Opportunity Six➔
Information has been redacted
Information has been redacted
Information has been redacted
Information has been redacted
Page 69
Opportunities - Opportunity Seven➔
Information has been redacted
Information has been redacted
Page 70
Opportunities - Opportunity Seven➔
Information has been redacted
Information has been redacted
Page 71
Opportunities - Opportunity Seven➔
Information has been redacted
Information has been redacted
Page 72
Opportunities - Opportunity Seven➔
Information has been redacted.
Information has been redacted
Information has been redacted
Information has been redacted
To support HMT and SR leads when considering an AI project, we provide the following indicative costing methodology. This assumes that the project has been well scoped to a focused question suitable for an AI solution; that all the technology needed is in place; and that the data needed has been identified and is readily available (see the relevant sections of the AI Guide to validate these criteria).
1. Decide on buy vs build vs a mixed team, depending on the capabilities available within the department - where internal staff are needed, their time should be costed alongside any external contractors.
2. Set up project governance - ensure alignment of all stakeholders, put contracts in place as needed, and agree project communication, ways of working, timelines, and review/decision points.
3. Review and prepare the data and build the model - for a simple model build this would typically take 12-16 weeks, excluding deployment (data cleaning and transformation; feature engineering; training and testing the model; iterating to achieve satisfactory model performance).
Page 73
Opportunities - Key determinants of costing an AI project ➔
Page 74
Opportunities - Key determinants of costing an AI project (Continued)➔
4. Deployment - there are many options and potential drivers of cost and time here, depending on what is needed:
➔ What is the desired form of the output of the model, e.g. email updates, an app or an API?
➔ Can the output of the model be integrated directly into an existing workflow without an additional software build? If additional software engineering is required, who can deliver this, and at what cost?
➔ Can the output of the model be used by existing users in their existing workflow, or will new workflows need to be designed and current or new users trained? Who will deliver this, and what staff training time must be allowed for?
➔ What ongoing support, if any, is required for the model or for users?
This is where significant extra costs can be added (often at least as much as for the build phase), i.e. additional software engineering and/or staff time for training or setting up new workflows to use the output of models.
5. Project review - evaluate the performance of the solution and capture lessons learned.
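The build-plus-deployment arithmetic behind this methodology can be sketched as follows. This is a minimal, illustrative sketch only: the function name, team size, day rate, and deployment factor are all hypothetical assumptions, not actual HMT or Faculty figures; the only inputs drawn from the methodology above are the 12-16 week build range and the observation that deployment often costs at least as much as the build phase.

```python
def indicative_cost(team_size, day_rate, build_weeks,
                    deployment_factor=1.0, days_per_week=5):
    """Rough indicative project cost: build phase plus deployment.

    deployment_factor: deployment often adds at least as much cost as
    the build phase (factor >= 1.0), per step 4 of the methodology.
    """
    # Step 3: review/prepare data and build the model (12-16 weeks typical)
    build_cost = team_size * day_rate * build_weeks * days_per_week
    # Step 4: deployment, costed as a multiple of the build phase
    deployment_cost = build_cost * deployment_factor
    return build_cost + deployment_cost

# Hypothetical example: a 3-person mixed team at a blended £800/day rate,
# a 14-week build (midpoint of the 12-16 week range), deployment at parity.
total = indicative_cost(team_size=3, day_rate=800, build_weeks=14)
print(f"Indicative total: £{total:,.0f}")  # prints "Indicative total: £336,000"
```

A sensitivity check (varying `deployment_factor` between 1.0 and 2.0, say) is a cheap way to show SR leads the range that deployment decisions alone can introduce.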
Next StepsAI Adoption Review
Page 75
Page 76
Workstream | Activity | Date
Overall Review/Prioritised Opportunities
➔ Draft, sign off & share emails for departments who had opportunities on the medium list (4th April)
➔ Draft, sign off & share Perm Sec letter (12th April)
➔ Develop proposal for longer-term 'next steps' of the AI Review (24th April)
Process for drafting prioritisation outcome emails
➔ Project team to create a template email & a table of contents tailored to each department to drop into placeholders (1st April)
➔ Steering Group to make any amendments to the template email wording & tailored content (3rd April)
➔ SRO to share emails with departments (4th April)
Process for drafting perm sec letter
➔ Project team to start drafting the letter based on the content of the Faculty final report (1st/2nd April)
➔ Steering Group to make any amendments to the letter wording (3rd/4th April)
➔ Initiate clearance process (e.g. KC / MG etc.) (5th April)
Next Steps ➔
There are a number of tasks that GDS/OAI will complete to keep the AI Review moving forward.
Machine learning and artificial intelligence are disrupting and transforming almost every sector of the economy - financial trading, advertising, retail, taxis, healthcare, supply-chain optimisation, the full automation of large global ports, manufacturing, and television and music consumption, for example - and they have similar potential to improve the quality and user experience of public services, and to reduce costs.
However, there are barriers to adoption - some general, some specific to the public sector. The Cross Government AI Review has been an extremely useful exercise in understanding the current state of departmental capability, readiness and appetite, and in identifying the most promising use cases for transformative projects.
Concluding Remarks ➔
From 8th January to 29th March, Faculty completed a 12-week discovery exercise to better understand the 'as-is' landscape of AI adoption across the UK central government. The aim over these 12 weeks was, for each named central department, to:
➔ Establish what the departments are currently working on
➔ Assess the feasibility and impact of the identified opportunities
➔ Assess departmental readiness
➔ Identify the most suitable opportunities for investment in SR 2019
These findings were presented at the steering committee meeting on 27th March, completing the project work. This supporting document will be shared with all concerned parties by 1st April for future reference.
Page 77
+44 20 3637 9415
54 Welbeck St, Marylebone, London W1G 9XS, UK
If you’re interested in finding out more about how Faculty can help, please get in touch.
Contact
AI Adoption Review: Final Report
Page 78
Page 79
Appendices➔
Appendix A - Prioritisation comments and research (including returned questionnaires, workshop task tracker)
Appendix B - Workshop summaries
Appendix C - Project long-list with rejection and rescoping recommendations