Essential science for broadband regulation



PREDICTABLE NETWORK SOLUTIONS

© 2015 Predictable Network Solutions Ltd - All Rights Reserved

September 2015

• This presentation is an edited and annotated version of the one shown in the webinar “Essential Science for Broadband Regulation”, delivered on 3rd September 2015. – The numbered pages correspond to those in the webinar.

• The webinar was produced by Predictable Network Solutions Ltd with support from Martin Geddes Consulting Ltd. – To watch the webinar, download this presentation and click

here. To read the Ofcom report on which the webinar is based, download this presentation and click here.

• Please note that the webinar (and this accompanying presentation) is not at Ofcom’s request or endorsed by Ofcom.

What is this presentation about?

Predictable Network Solutions Ltd responded to an Ofcom invitation to tender: “A Study of Traffic Management Detection Methods & Tools”

2

• Our offer is to help you to reframe the issue of broadband traffic management; to share what we discovered from writing the report; and to help illuminate the way forward.

• Our hope is to grow awareness of our ground-breaking work and the framework we use; and to initiate a discussion on how we, as an industry, can better work together.

The only network performance science company in the world.


3

• As far as we are aware, we are the only people able to make prospective system-wide statements about broadband performance backed by mathematical proof.

• This presentation was put together by a team with over 70 years of collective experience in distributed systems and their performance.

• We have been successfully applying network performance science for a number of years, and have several ‘world first’ breakthroughs, including the first ever quality-assured ISP.

• Our clients include US DoD/Boeing, CERN, and many tier 1 operators (both fixed and mobile).

The scope of the report

IN SCOPE

Performance

Science

Mathematics

Reasoning

OUT OF SCOPE

Blocking

Pricing

Economics

Policy

5

• The report addresses questions of pure science; not policy, or economics, or law.

• The framework that was used in the report can be applied to such questions – but this was beyond the remit.

• Hence issues like ‘zero rating’ are out of scope.

Some important terminology

Traffic Management (TM)

Differential Traffic Management (DTM)

Traffic Management Detection (TMD)

6

• Traffic Management (TM) is what happens at points in the network where demand exceeds supply (at short timescales) and resource allocation decisions are made (e.g. packet scheduling algorithms).

• Differential Traffic Management (DTM) is TM in which the resource allocation decisions depend on some aspect of the traffic (source, destination, markings, payload, etc.).

• Traffic Management Detection (TMD) is any method that uses observation of operational behaviour of the network with the goal of detecting the presence of DTM.
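The distinction between TM and DTM can be made concrete with a toy sketch (entirely illustrative; the class and field names are invented): a strict-priority scheduler whose resource-allocation decision depends on a packet marking. The dependence of the decision on an aspect of the traffic is precisely what makes it DTM rather than plain TM.

```python
# Minimal sketch (names invented) of differential traffic management:
# a strict-priority scheduler whose allocation decision depends on a
# packet marking -- the dependence on the traffic is what makes it DTM.
from collections import deque

class PrioritySched:
    def __init__(self):
        self.priority = deque()     # packets with a 'priority' marking
        self.best_effort = deque()  # everything else

    def enqueue(self, packet):
        # The resource-allocation decision depends on an aspect of the
        # traffic (here, its marking): this is DTM, not plain TM.
        if packet.get("marking") == "priority":
            self.priority.append(packet)
        else:
            self.best_effort.append(packet)

    def dequeue(self):
        # Strict priority: best-effort traffic waits whenever any
        # marked traffic is queued (demand exceeding short-term supply).
        if self.priority:
            return self.priority.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None

sched = PrioritySched()
sched.enqueue({"id": 1, "marking": "best-effort"})
sched.enqueue({"id": 2, "marking": "priority"})
print(sched.dequeue()["id"])  # 2: the marked packet jumps the queue
```

A scheduler that drained a single queue in arrival order, by contrast, would be TM without the “differential” part.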

The question posed by Ofcom

An explicit question, similar to spectrum management:

Are any TMD methods suitable for regulatory use?

Some implicit questions:

How to find “bad actors”? (using undeclared DTM)

How to detect and remedy “foul play”? (using TMD)

7

The report considers TMD in the UK context

8

• The conclusions have been considered only in the UK legal, market and technology context.

• This is important as the UK has a rich digital supply chain: open access, wholesale and retail, and many suppliers and technologies.

• It also introduces new subtleties, such as the layering of network protocols, which affects the utility of TMD.

• However, the underlying science is universal.

Ofcom’s context

9

• Ofcom’s remit is defined in the Communications Act 2003, and is highly relevant.

• The Act requires the regulator to balance the cost of any regulation against its utility.

• The utility must consider the impact of the regulation on the weaker members of society, such as small businesses, the disabled, those in rural areas, or the poor.

The “high bar” of a regulator

10

• Any new regulation has to be efficient and effective in its ability to detect, isolate and attribute any performance issues.

• It must also be strong enough to stand up in court. As such, we have approached this analysis at ‘expert witness’ strength. We believe it to be highly robust.

• TMD is being considered here for a new purpose that its creators had not designed it for. Many of the shortfalls we have noted have also been identified by the original creators of these TMD techniques.

• This presentation is not intended as a critique of what were originally network research projects.

Many people are angry and feel let down by our industry

11

• The issue of TMD sits in a wider context, the contentious issue of ‘net neutrality’. Many people, rightly or wrongly, are upset about the broadband industry.

• The idea of ‘net neutrality’ bundles up these technical issues around traffic management with others, such as the abuse of market power.

• We are separating out the science of broadband performance from the wider debate.

Why care about what we say?

12

• The report is significant because the ‘net neutrality’ policy debate has thus far lacked rigorous scientific foundations with respect to traffic management.

• The report identifies several widespread misconceptions about how broadband works that have significant policy implications.

• The report (briefly) identifies a way to reframe the problem of broadband performance regulation to transcend the (over)heated debate we see today.

WHAT WE DISCOVERED

Our methodology

1. Problem specification

2. Research of TMD tools

3. Evaluate their fitness-for-purpose

14

• The methodology we followed started with a problem specification, in which we defined TM and the role of TMD.

• We then undertook research to identify the important TMD tools and how they relate.

• We then evaluated these tools’ fitness-for-purpose against Ofcom’s explicit criteria. These included their scalability, fidelity to reality (false +ve/-ve), and spatial localisation of performance issues.

• We also addressed Ofcom’s implicit question: “Even if you succeed at TMD, does it help Ofcom meet its remit?”.

Citation graph

15

• This citation graph captures the key published articles we found in this subject area.

• There are clearly some key ‘nodes’ of papers that are seen as being of the greatest significance.

• We believe that this search process has flushed out all of the likely candidate TMD techniques for which public data existed at the time the report was compiled (second half of 2014).

TMD tools we researched

NetPolice

NANO

DiffProbe

Glasnost

ShaperProbe

Chkdiff

16

• These tools were all analysed within the framework in a common fashion.

• For details of the analysis of each tool, see the report.

The bottom line: TMD is not fit for regulatory use

17

• There were three key criteria, and no TMD tool was found to satisfy them all.

– Localisation: there are locations where TM can occur that are below L3 routing, so TM there cannot be pinned down by any of the TMD tools studied.

– Scalability: TMD may excessively consume network resources, due to the volume or rate of load, if scaled up.

– Reliability: they all fall well short of the standard of mathematical proof, so the ‘high bar’ of a regulator cannot be met.

• Network tomography is a new alternative approach to observation that has the required localisation and scalability. Its applicability in this regulatory domain requires further research.
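To illustrate the idea behind network tomography (on an invented three-link topology, not any specific method from the report): per-link delays can be inferred by solving the linear system that relates end-to-end path measurements to the links each path traverses. This is what gives the approach its localisation property without per-hop instrumentation.

```python
# Hypothetical sketch of the network-tomography idea (the three-link
# topology is invented): per-link delays are inferred from end-to-end
# measurements over overlapping paths by solving a linear system.
import numpy as np

# Routing matrix: rows = measured paths, columns = links A, B, C.
# Path 1 traverses A+B, path 2 traverses A+C, path 3 traverses B+C.
routes = np.array([[1.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]])

e2e_ms = np.array([30.0, 25.0, 35.0])  # observed end-to-end delays

link_ms = np.linalg.solve(routes, e2e_ms)
print(link_ms)  # per-link estimates: [10. 20. 15.]
```

Real deployments must of course cope with noise, loss and under-determined topologies; the sketch only shows where the localisation comes from.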

The real issue… We have been looking at the problem in the wrong way

18

• Users only care about delivered performance outcomes. That experience is solely a result of the end-to-end quality.

• The experience is variable because the resource is shared. There is a concern about ‘unfairness’ of poor performance due to that sharing.

• Regulators want to understand their role in managing ‘fairness’. Their implicit feeling is that DTM may lead to ‘unfair’ discrimination.

• The issue is that framing the problem in terms of DTM and TMD is unhelpful.

Looking more broadly…

What is the service “packet networking” anyway?

19

• There is a more fundamental question. Broadband is, by definition, packet-based statistical multiplexing. So what is the service that users are buying?

– What are its key parameters?

– What is it reasonable for users to expect from the service?

– How to know if they got it?

A FRAMEWORK TO THINK ABOUT THE PROBLEM

• To answer these questions, you need a framework to evaluate competing answers.

• What might that framework be?

Properties of a good framework

• Coherent

– Stands up to scrutiny (scientific, and hence legal)

• Useful

– Relatable to Ofcom’s goals

• Practical

– Implementable with available technology…

– …at reasonable cost

21

• TMD techniques are looking for different TM behaviours. You might think of this as “in this basket of fruit, is there an apple, or an orange?”

• We are dealing with a class of “squishy things from trees with seeds in them”.

• To generalise the problem into a framework we need a “Theory of fruit” to characterise and classify them

– Calories, number of pips, type of flesh, vitamins, minerals, poisonous or edible, colour, and season.

• The report’s appendices outline the framework we used. It is a ‘theory of broadband performance’ and is mathematical in its nature.

• The framework is a general framing of the ‘semantics of performance’ (of packet networks).

• It is called ‘∆Q’ – it captures the essential performance properties that emerge from networks

• Whilst it has had multiple industry applications, this is the first time it has been used in a regulatory context.
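As a toy illustration of the kind of property ∆Q captures (the discretisation and numbers here are invented, not the framework’s actual formulation): each hop’s quality attenuation can be modelled as a delay distribution plus a loss probability, and these compose along a path – delays convolve, losses accumulate.

```python
# Toy illustration of composing quality attenuation along a path (the
# discretisation and numbers are invented; this is not the report's
# formulation of ∆Q). Each hop: a delay distribution plus a loss
# probability; delays convolve and losses accumulate along the path.
import numpy as np

# Per-hop delay PMFs over delay values 0..3 (ms), plus loss probability.
hop1 = {"delay_pmf": np.array([0.5, 0.3, 0.2, 0.0]), "loss": 0.01}
hop2 = {"delay_pmf": np.array([0.6, 0.4, 0.0, 0.0]), "loss": 0.02}

def compose(a, b):
    return {
        # End-to-end delay of two independent hops = convolution.
        "delay_pmf": np.convolve(a["delay_pmf"], b["delay_pmf"]),
        # Survival probabilities multiply, so losses accumulate.
        "loss": 1.0 - (1.0 - a["loss"]) * (1.0 - b["loss"]),
    }

path = compose(hop1, hop2)
print(round(path["loss"], 4))  # 0.0298: loss accumulates hop by hop
print(path["delay_pmf"][:4])   # the delay distribution shifts and spreads
```

The point of such a compositional model is that statements about end-to-end performance can be derived, rather than merely measured after the fact.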

Some key basic concepts

What did you want it to do?

Intentional semantics

What did you ask it to do?

Denotational semantics

What did it actually do?

Operational semantics

22

• The framework starts with some simple questions about what the system is supposed to do, was asked to do, and actually did.

• Computer scientists have fancy terms for these simple questions: intentional, denotational and operational semantics.

– It’s a bit like sending your children to bed: you wanted them in bed at 9pm, you asked them to go to bed for 9pm, and they went to bed at 9pm.

• However, these may not align (as any parent can tell you). Broadband performance regulation is about managing any misalignment.

• Think of deploying a fruit machine as an example.

• Intentional semantics

– “Make a profit from gambling, legally”

• Denotational semantics

– “Symbols on wheels and a promise of payment”

• Operational semantics

– “Many people have fun losing money, and a few have even more fun winning money”

• A regulator would wish to ensure compliance with the law (the intention), which means the payout (operation) needs to meet the payout ratio (denotation).

Typical network performance engineering

• Intentional semantics

– “Deliver a unified comms system”

• Denotational semantics

– “Deliver this quantity of quality to these users as expressed in this protocol”

• Operational semantics

– A working UC system with a bounded performance failure rate

24

Example: Typical broadband ISP performance

• Intentional semantics

– “Best effort”

• Denotational semantics

– Peak burst “speed”

• Operational semantics

– “Whatever happens, happened”

– Yesterday it worked, today it isn’t working, and that’s how networks work (or don’t)

25

Example: broadband regulation

• Intentional semantics

– “Support society’s communications needs, whilst protecting the weakest”

• Denotational semantics

– A collection of regulation policies

• Operational semantics

– An objective system of measurement and enforcement

26

Traffic management detection

• Intentional semantics

– “Someone may be acting with bad intent”

• Denotational semantics

– “Differential traffic management was inferred to be present”

• Operational semantics

– “Differential outcomes were observed”

28

• Deducing the intention from the operation is logically impossible. It CANNOT be done, philosophically or practically.

• Therefore using current TMD to derive intention in any general way is attempting a mathematically intractable problem from a ‘high bar’ regulatory perspective, due to false positives and negatives.

• Hence detecting and locating ‘neutrality violations’ or ‘discrimination’ is tantamount to a mathematical fool’s errand.

• So what can be done?

“Levels of fairness & justice”

Columns: Intentional Semantics | Denotational Semantics | Operational Semantics

Rows, from top to bottom: Social (all telcos); Business (all users); Individual user QoE; Application performance outcome; End-to-end packet loss and delay; Local packet queues & serialisation; Point-to-point transmission; Physics

The ‘game board’ of broadband performance regulation

29

• This is our first formulation of the key issues, and the framework to evaluate competing theories of broadband regulation. It lays out the logical levels at which ‘fairness’ might apply.

• We think you’ll agree that ‘electron or photon fairness’ is not a widespread concern, but social fairness is! Yet we have to create the social fairness by the information we convey via electrons and photons, and all the intermediate levels.

• So where in this ‘board’ should we be focusing our attention?

The “net neutrality” debate framing

[The “levels of fairness & justice” diagram again, here annotated with “TMD” and “Open Internet” to mark where that debate is framed.]

30

• The common approach to analysing the ‘net neutrality’ issue is to start with a consideration of ‘best effort’ operational behaviours at the level of queues.

• It then presumes that ‘fair’ treatment of packets results in ‘fair’ treatment of application providers and users. A theory of ‘open Internet’ is usually invoked to explain the need for such fairness.

• The reasoning from that initial point relies on a ‘transitive closure’ assumption, whereby fairness at one level results in fairness at a higher level.

• We have to challenge that assumption! Indeed, not only may the assumptions not hold, the arrows and ‘joins’ in this chain have yet to be assembled into a rational argument!

A core problem: fluke or fault?

31

• The core problem with this chain of reasoning is how to differentiate ‘flukes’ from ‘faults’ in the ‘network casino’. With ‘best effort’, the default is ‘everything is a fluke’.

• So how to formulate the intentional when trying to detect ‘unfairness’? For ‘best effort’ broadband, the intentional semantics are (by definition) undefined!

• The issue: there is no general means to distinguish flukes from faults (and cannot be one).

• So reverse engineering intention on the basis of TMD notions of fairness is not a meaningful question to even ask!
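A toy simulation of the ‘network casino’ (illustrative only) makes the point: two identical flows share a FIFO queue with no differential treatment whatsoever, yet over short observation windows one flow can look markedly worse than the other purely by chance – exactly the flukes that a TMD tool risks reporting as faults.

```python
# A toy 'network casino' (illustrative only): two identical flows share
# a FIFO queue with no differential treatment at all, yet over short
# observation windows one flow can look markedly worse than the other
# purely by chance -- a fluke that TMD risks reporting as a fault.
import random

random.seed(7)

def window_delays(n=20):
    # One window: n packets from each of flows 'a' and 'b' arrive in
    # random order; queueing position stands in for queueing delay.
    arrivals = ["a", "b"] * n
    random.shuffle(arrivals)
    delays = {"a": [], "b": []}
    for position, flow in enumerate(arrivals):
        delays[flow].append(position)
    return sum(delays["a"]) / n, sum(delays["b"]) / n

# Count windows where flow 'a' looks at least 20% worse than flow 'b'.
worse = 0
for _ in range(1000):
    a, b = window_delays()
    if a > 1.2 * b:
        worse += 1

print(worse, "of 1000 windows look 'unfair' -- with no DTM present")
```

Any detector that flags such windows as ‘discrimination’ is guaranteed a steady stream of false positives, regardless of how carefully it measures.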

Three inference failures in the idea of ‘net neutrality’

1. You can’t even observe all possible forms of DTM

2. TMD attempts to generalise the specific to the general

3. Presence or absence of DTM isn’t what determines benefit to citizens anyway!

32

• Current TMD only tells you about a very narrow set of the possible DTM behaviours. Regulation would need to consider all possible traffic management policies and mechanisms in all current and likely future network architectures.

• Furthermore, absence of evidence of unfairness is not evidence of absence of it. Conversely, presence of certain behaviours is not proof of unfairness.

• Finally, discouraging DTM is operationally infeasible. Equality of misery isn’t what citizens need, and in any case certain crucial aspects of network stability require DTM.

Problems we actually need to address

1. What is the intention that you should be regulating?

2. What could you practically operationally observe?

3. How can we focus on ends, not means?

33

• What is the intention the regulator should be regulating?

– What are ‘good’ and ‘bad’ intentions, anyway?

• What could you actually operationally observe?

– ‘Neutralness’ is not observable!

– So what would be desirable to observe?

• How can we focus on ends, not means?

– We want to just observe if the intention was delivered.

– Leave network operators freedom on the question of how to deliver it.

THE WAY FORWARD

How to escape from this regulatory dead end?

• We weren’t asked for a way out of the ‘dead end’

• Yet there is another way of framing the question that DOES have the appropriate properties…

• …and needs more work to implement

• It’s not about ‘net neutrality’, it’s really about ‘broadband policy’.

• To progress we need to change the language:

– Decouple TM, TMD, ‘net neutrality’ (i.e. ‘fairness’)

– Enable a market with suitable performance differentiators

• Given Ofcom’s original framing

– The relationship of network performance to QoE is known, but not yet widely understood

– Causality is commonly misrepresented (e.g. failure to understand the existence of a predictable region of operation; emergent nondeterminism not even considered)

• Questioning Ofcom’s framing

– The whole industry is grappling with the nature of cause and effect, resource allocation and outcome, technical vs socio-legal issues

What facts does good policy need to work from?

• Packet networks are stochastic

• They have emergent properties

• They are engineered by us

– They are not merely natural phenomena!

– We control the semantics

37

• Statistical sharing - the principle that makes ‘always on’ mass connectivity economically feasible - is also the key cause of variability in delivered service quality.

– This is because an individual shared resource can only process one thing at a time, so others that arrive have to wait.

• The unpredictability of the load from very many users and applications makes networks inherently random and possibly nondeterministic.

• The real-time statistical (i.e. stochastic) behaviour is what determines the performance of applications.
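A minimal simulation (parameters illustrative) shows why: with one shared server and random arrivals, anything arriving while the server is busy must wait, so delivered quality varies even though the mechanism is perfectly ‘fair’.

```python
# A minimal sketch of why statistical sharing causes variable quality:
# one shared server, random (Poisson) arrivals, fixed service time, and
# anything arriving while the server is busy has to wait. Parameters
# are illustrative (offered load = 0.8).
import random

random.seed(1)
service_time = 1.0   # time to process one packet
mean_gap = 1.25      # mean inter-arrival gap => arrival rate 0.8

t, server_free_at = 0.0, 0.0
waits = []
for _ in range(10_000):
    t += random.expovariate(1 / mean_gap)  # next arrival time
    wait = max(0.0, server_free_at - t)    # queue behind earlier work
    waits.append(wait)
    server_free_at = max(server_free_at, t) + service_time

mean_wait = sum(waits) / len(waits)
print(f"mean wait {mean_wait:.2f}, worst wait {max(waits):.2f}")
# Some packets wait almost nothing; others wait many service times.
```

The spread between typical and worst-case waits, not the average alone, is what applications experience – which is why the stochastic behaviour matters.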

What are the myths that we need to be wary of?

• Belief in unbounded network self-optimisation

• Belief in the intentionality of flukes

• Belief that more capacity always solves all performance issues

38

• Scientific progress is made by understanding which are the good questions to ask – the good questions are ones that can be answered (many cannot).

• The “myths” enumerated here are ones that we often hear expressed, whose implicit acceptance stops important questions from being asked. The facts:

1. Networks can’t self-optimise over all timescales and all sizes.

2. Statistical flukes can occur and, given protocol behaviours, there are various other induced phenomena outside the direct control of the network provider.

3. Making things faster, adding capacity, helps some issues – but there are always limits that need to be engaged with in the debate.
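Fact 3 can be seen with a back-of-envelope M/M/1 calculation (all figures illustrative): each doubling of capacity buys a smaller reduction in queueing delay, and the fixed propagation floor remains untouched however big the pipe gets.

```python
# Back-of-envelope illustration of myth 3 (M/M/1 figures, all numbers
# illustrative): extra capacity shrinks queueing delay but never to
# zero while the resource is shared, and the propagation floor stays.
def mean_delay_ms(load_pps, capacity_pps, propagation_ms=20.0):
    # M/M/1 mean sojourn time is 1/(mu - lambda); add fixed propagation.
    return propagation_ms + 1000.0 / (capacity_pps - load_pps)

for capacity in (1000, 2000, 4000, 8000):
    print(capacity, round(mean_delay_ms(900, capacity), 2))
# 1000 pps -> 30.0 ms; 8000 pps -> 20.14 ms. Each doubling buys less,
# and the 20 ms floor (distance, serialisation) is untouched.
```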

THERE IS HOPE!

• From working on the report, combined with other work, we see that there is a potential practical resolution to broadband performance policy – one that is a ‘win-win-win’ for users, ISPs, and society.

• It has become clear that framing ‘network neutrality’ in terms of ‘packet fairness’ is not just unhelpful, it is untenable.

• The approach needed is one where the actors in the digital supply chain can constructively act together, not one which is based on blame and its attribution.

• Such an approach has the potential to deliver the predictable/consistent levels of performance needed to support future applications – IoT, e-health & education, smart grids, intelligent cities & transport, etc.

“Levels of fairness & justice”

[The same diagram as before: the three semantic columns against the levels from Social down to Physics.]

The alternative ‘quality floor’ framing

40

• Our proposal is to approach the regulatory problem in a different way.

• The issues of economics, law, policy, mathematics, physics and technology need to be teased apart.

• Each domain needs to be offered a space in which subject matter experts can legitimately express their knowledge without unconsciously expressing opinions on adjacent areas in which they are not authorities.

• A rational form of reasoning needs to start with the intentional, and work its way down and across, refining the social intention into operational behaviours.

• A ‘quality floor’ is one way to achieve this.

Our proposed way forward

• People: socialisation of the science of performance in the policy community and beyond.

• Process: align policy to performance science.

• Technology: quality floor (narrow the intentional semantics); network tomography (objective measurement of operational semantics).

41

• The process of aligning policy to performance science needs to address the following issues:

– What is quality (i.e. performance)?

– What does it mean to deliver it?

– How to measure it?

– How to attribute it in a supply chain?

Next steps

42

Regulators: Educate yourself on the science

Telcos and ISPs: Measure and manage quality through the users’ eyes

Industry bodies:

Start a dialogue around value delivered and service differentiation, not commodity speed

Consumer advocates: Campaign for minimum quality (not peak speeds)

Our relevant services

• We offer education and training in network performance science, and run both public and private workshops.

• We offer consulting services to help with broadband network product innovation and operational optimisation.

• We offer network tomography technology, which captures the operational behaviours needed for effective regulation.

Join us to move this debate and industry forward

Contact Martin Geddes at mail@martingeddes.com to set up a time to talk

Neil Davies Neil.Davies@pnsol.com

Peter Thompson Peter.Thompson@pnsol.com

Martin Geddes mail@martingeddes.com

