Octoshape Solution Paper

Octoshape Solutions: Breaking the Constraints of Scale, Quality, Cost, and Global Reach in Content Delivery

North America, South America, EMEA & APAC: [email protected] | www.octoshape.com

Table of Contents

Why Does Distance Have a Relationship to Quality?
Acknowledgement System
Distance and TCP Bits in Flight
TCP Throughput Challenges
Methods Traditional CDNs Use to Mitigate the Impact of Variable Throughput
Edge Hosts
Edge Networks
Quality Equals Cost
Global Reach with Traditional Technologies?
Traditional Software Techniques to Address Distance = Quality
Adaptive Bit Rate
Multiple HTTP Connections
HTTP and Scale
HTTP and CDN Barriers to Entry
Introducing Octoshape to Fix the Root Content Delivery Problem on the Internet
Core Transport
Making Best-Effort Networks Perform Like Provisioned Networks
How It Works
The Benefits of a Resilient Transport Approach
Cloudmass
A New Way to Play
Multicast Suite of Technologies
Multicast - Reborn
Simulated Multicast
Native Source-Specific Multicast
AMT (Automatic Multicast Tunneling)
Conclusion


Content delivery over the Internet today is governed by one fundamental rule. This rule shapes the architecture, scale, quality, and underlying economics of the ecosystem. It pervades even the intellectual property battles that traditional CDNs fight to protect their barriers to entry in the market. This rule defines the world the traditional CDNs live in, and it constrains their solutions in a way that is ripe for disruption.

The rule is that the quality of the video the consumer receives is directly related to the consumer's distance from the streaming server.

This paper investigates the impact this rule has on the content delivery space, and explores the challenges and boundaries it creates in the market.

Octoshape technologies are defining a new set of rules for the content delivery ecosystem. These new rules tear down the foundation the current ecosystem is built on, and set a new bar at every level for quality, scale, cost, and global reach in media delivery over the Internet.

Why Does Distance Have a Relationship to Quality?

The most pervasive protocol used to deliver media over the Internet today is HTTP. Historically HTTP was used for progressive download of media, but now it is also used for streaming. Adobe's RTMP is also widely deployed, although the industry appears to be making a steady shift to HTTP-based solutions.

The component of these streaming technologies that binds distance to quality is that both use TCP as the underlying transport protocol. TCP is widely used because its acknowledgment-based mechanisms track and ensure that data sent by the sender is ultimately received by the receiver. This reduces the complexity required at the application level (the media player), which does not have to worry about missing video. This reliable transport mechanism clearly has value, but it unfortunately comes at a cost.

One of the largest tradeoffs of TCP is the variability of the throughput achievable between sender and receiver. With TCP-based protocols we must distinguish between "capacity" and "throughput". When streaming video over the Internet, the video packets must traverse multiple routers, and each router is connected by links of different speeds. Capacity can be defined as the maximum speed of the slowest link in the path between the sender and the receiver. For example, if there are 10 routers in the path, all connected by 10 Gbps links, except that the link from the last-hop router to the receiver is a 5 Mbps DSL line, then the capacity of that path is 5 Mbps. Using TCP, however, the throughput over that path is not 5 Mbps. Throughput is dynamic and fluctuates during a streaming session. The effective throughput can be calculated from a couple of simple variables. The problem is that these variables change during the session, so the effective throughput changes with them.

The variables are:

• Round Trip Time (RTT): the amount of time it takes for a message to be sent and acknowledged over a given path
• TCP Window Size: the amount of data that can be "in flight", i.e., sent by the sender without yet being acknowledged by the receiver

Acknowledgement System

TCP uses an "acknowledgement" system to ensure data delivery. The sender and receiver hold a two-way conversation at all times, whereby the receiver acknowledges receipt of the packets sent. As an example, the sender sends packet number 1, and the receiver responds with "Send number 2". This conversation in itself begins to define the constraints on the effective throughput of the connection.

Distance and TCP Bits in Flight

In the simplest example, we can now see that within one second there is a discrete number of round trips in which a sender can send and a receiver can acknowledge packets. Say, for example, that the sender is 50 ms away from the receiver; this is commonly referred to as the latency between the two points. The time for a packet to reach the receiver and for the receiver to return an acknowledgement would then be 100 ms: the round trip time, or RTT. Given a 64 KB TCP window, 64 KB can be "in flight", or unacknowledged, at a time. Since an acknowledgment takes 100 ms, the maximum possible throughput is the window divided by the RTT: 640 KB per second, or about 5 Mbps. Doubling the RTT or halving the TCP window halves the achievable throughput.
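
This bound is easy to check with a few lines of arithmetic. A minimal sketch in Python, using the numbers from the example above:

```python
# Maximum TCP throughput is bounded by window size divided by round-trip time.
window_bytes = 64 * 1024   # 64 KB TCP window
rtt_seconds = 0.100        # 50 ms each way -> 100 ms round trip

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"{max_throughput_bps / 1e6:.2f} Mbps")  # ~5.24 Mbps, i.e. "about 5 Mbps"

# Doubling the RTT (or halving the window) halves the achievable throughput:
print(f"{window_bytes * 8 / 0.200 / 1e6:.2f} Mbps")  # ~2.62 Mbps
```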


TCP Throughput Challenges

The TCP algorithm starts the TCP window small, then grows it until a packet is lost, in order to discover the maximum throughput. The loss of a packet is taken as a sign of congestion, to which the algorithm responds by drastically reducing the window size. This immediately reduces the effective throughput. The TCP window then begins to grow again until the next congestion event. Plotted over time, the throughput oscillates inside the available capacity. This is fundamentally why TCP technologies cannot support TV-quality, consistent bit rates.
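
This grow-and-collapse cycle can be illustrated with a toy simulation. The sketch below models the additive-increase/multiplicative-decrease behavior with illustrative numbers (segment size, starting window, and the 5 Mbps capacity from the earlier example); it is a conceptual model, not a full TCP implementation:

```python
# A toy model of TCP's congestion window: grow by one segment per RTT until
# a loss, then cut the window in half. Numbers are illustrative assumptions.
RTT = 0.1                  # round-trip time in seconds
CAPACITY_BPS = 5_000_000   # path capacity (the 5 Mbps DSL line)
SEGMENT_BITS = 1460 * 8    # one TCP segment, in bits

window_bits = 10 * SEGMENT_BITS
for rtt_round in range(60):
    offered_bps = window_bits / RTT
    if offered_bps > CAPACITY_BPS:       # packet loss, read as congestion
        window_bits //= 2                # multiplicative decrease
    else:
        window_bits += SEGMENT_BITS      # additive increase
    print(f"round {rtt_round:2d}: {window_bits / RTT / 1e6:5.2f} Mbps offered")
```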

Methods Traditional CDN’s Use to Mitigate the Impact of Variable ThroughputContent Distribution Networks were invented to solve this specific problem. It didn’t make sense for each company that wanted to distribute content on the Internet to build distributed infrastructure to do so. Content Distribution Networks sprung up to offer a shared hosting service focused on building network and host infrastructure that was distributed “close to the edge” to increase performance.

There are two main architectures currently deployed to address this problem: deploying edge hosts and deploying edge networks.

Edge Hosts

The first method is to deploy a large, distributed host infrastructure at the edge of the Internet. This method attempts to address the issue by deploying servers very close to the end user. The challenges with this model are:

• It is very capital intensive.

• It is difficult to manage a large deployment of distributed hosts.

• To sustain quality, the infrastructure has to be scaled to peak demand in each locality, which has a compounding effect on cost of goods sold.

• It exhibits weakness from a congestion perspective, because routing paths out of the last mile to the closest edge network usually run over a single link.

[Figure: TCP Throughput Challenges. Bandwidth capacity increases and decreases as conditions change on the path from server to user.]

[Figure: Edge host solutions, with CDN servers deployed at the edges of ISP 1, ISP 2, and ISP 3.]

Edge Networks

The second approach is to build a peering network directly to the edge networks. The effect is to reduce the number of router hops across the normal Internet backbone, making a more centrally located set of hosts "look" closer, from a latency perspective, than they otherwise would. The challenges with this model are:

• It is very expensive from an operational expense perspective.

• More central deployments tend to be less resilient and more prone to congestion.

Quality Equals Cost

The common component of both of these methods is that they require significant capital and operational expense. We can conclude that there is a price associated with each level of video quality. Perhaps the most important point is that with TCP-based technologies there is a cost floor associated with a given quality level that cannot be removed, regardless of economies of scale. The quality level can shift, but the relationship between quality and cost is fixed. These constraints are real, and they hold the scale of the Internet back to whatever capital or operational expense CDNs are willing to invest. To illustrate, it would be difficult today for one broadcaster to reserve enough capacity from a single CDN provider to serve 2 million simultaneous users at 2 Mbps (4 Tbps in aggregate). That is the equivalent of 2 Nielsen points at roughly DVD quality.
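
The arithmetic behind that illustration is straightforward:

```python
# Aggregate capacity for the broadcaster example above.
viewers = 2_000_000
bitrate_bps = 2_000_000          # 2 Mbps per viewer

aggregate_bps = viewers * bitrate_bps
print(f"{aggregate_bps / 1e12:.0f} Tbps")   # 4 Tbps
```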

[Figure: Edge network solutions, with centrally located CDN servers peering directly with ISP 1, ISP 2, and ISP 3.]

[Figure: The quality equation: video quality = CAPEX + OPEX.]


Global Reach with Traditional Technologies?

It is worth noting that almost every CDN today touts the ability to stream globally, or claims a global presence. The more appropriate metric to discuss, however, is the quality achievable in a given region.

The United States has enjoyed an immense influx of capital into CDN infrastructure over the last 10 years, resulting in relatively high penetration of server and/or network infrastructure in the region. This is not the case globally. The number of servers outside a CDN's home market is usually quite limited. A CDN that achieves a particular quality metric in its home market would need to replicate its capital and operational outlay in each new market to achieve the same penetration and quality there. Therefore, with traditional TCP technologies, there is no way to select one vendor to provide a global service with high video quality.

The only way to truly understand quality and capacity is to use a third-party monitoring service with last-mile measurement capability.

Traditional Software Techniques to Address Distance = Quality

Traditional streaming technologies are at the mercy of the fluctuating bandwidth inherent to TCP- and HTTP-based delivery. These technologies must make significant tradeoffs and work-arounds to compensate for this deficiency.

The impact is significant: streaming media systems are now designed to dynamically adjust video quality at the application layer. These systems then interact with, and counteract, transport-layer protocols like TCP that were themselves designed to dynamically control flows on the Internet. Dynamic systems controlling other dynamic systems can create very unexpected resonance effects. At low volume, with no congestion, these mechanisms appear to provide improvements; at scale, they usually have the opposite effect.

Adaptive Bit Rate

Since traditional technologies are subject to varying throughput in the last mile, they adapt to the problem to keep the user experience free of buffering and stuttering. The only parameter left to adjust is the video quality. Adaptive Bit Rate technologies were therefore born to trade video quality against buffering and slow startup times. These technologies shift the video quality many times during a session, as often as every two seconds, to adapt to the throughput currently available between the user and the streaming server. The real problem with this approach is that it is not acceptable for experiences displayed on a television. TV watchers are accustomed to consistent, TV-quality experiences; a channel that keeps changing quality is aggravating and distracting to watch on a TV. It is much like watching a VCD after becoming accustomed to DVD: the discs look the same, so you expect the quality to be the same, and when the VCD plays back at VHS-like quality, the viewer is left unfulfilled.
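
A minimal sketch of this rate-switching logic, with a hypothetical rendition ladder, shows how fluctuating throughput drags the picture quality along with it:

```python
# Every few seconds an ABR player re-selects the highest rendition that fits
# under the measured throughput. The ladder and samples are hypothetical.
LADDER_KBPS = [400, 800, 1200, 2000, 3500]

def pick_rendition(measured_kbps, safety=0.8):
    """Pick the highest bitrate at or below a safety fraction of throughput."""
    usable = measured_kbps * safety
    fitting = [r for r in LADDER_KBPS if r <= usable]
    return fitting[-1] if fitting else LADDER_KBPS[0]

# Fluctuating TCP throughput means the quality shifts again and again:
for measured in [3000, 2600, 900, 1400, 3800, 700]:
    print(f"{measured} kbps measured -> {pick_rendition(measured)} kbps rendition")
```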


Multiple HTTP Connections

Another approach to counteracting the inherently variable throughput of TCP-based technologies is to open multiple HTTP connections at the same time, attempting to increase throughput by parallelizing the flow of traffic. At low scale this can have an additive effect on the throughput of a session; at scale it exacerbates congestion on the Internet and on the streaming servers themselves. If multiple connections are used, an event that looks like 1 million simultaneous users to a single-TCP-connection technology can look like a 5 million user event to a multiple-HTTP-connection technology. That means more connections, and more variation in throughput, as the adaptive bit rate logic interacts with multiple instances of TCP's flow and congestion control algorithms.
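
A minimal sketch of this workaround, using parallel byte-range requests against a placeholder URL (the server must support Range requests), looks like this:

```python
# Parallel HTTP range requests against one object: each connection fetches a
# slice, multiplying the connection count per viewer. URL/chunk size are
# hypothetical.
import concurrent.futures
import urllib.request

URL = "http://example.com/video/segment.ts"   # placeholder
CHUNK = 256 * 1024                            # bytes per connection

def fetch_range(byte_range):
    start, end = byte_range
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def parallel_download(total_size, connections=4):
    ranges = [(o, min(o + CHUNK, total_size) - 1)
              for o in range(0, total_size, CHUNK)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=connections) as pool:
        parts = dict(pool.map(fetch_range, ranges))
    return b"".join(parts[o] for o in sorted(parts))
```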

HTTP and Scale

Scaling a live event is a lot like balancing a stack of teacups. With traditional technologies you must pre-provision the system, and the architect or operator does not want to change any variables during the actual event. Any change could start a chain reaction that topples some part of the infrastructure, kicking off some number of users, which causes a wave of reconnect attempts that topples the infrastructure again. Network architects like smooth, load-balanced, deterministic flows of streaming data; thrashing and spikes are detrimental to managing an event of any scale.

Another drawback of HTTP-based technologies is that they are not actually streaming technologies at all. They are progressive download algorithms that split the stream into many thousands of physical files and download those files as fast as they can. Instead of a smooth flow of data, the network sees hundreds of thousands of small spikes of bandwidth use. As the event grows, these spikes grow in volume and become very difficult for the streaming infrastructure to manage.

This is problematic because the web caches designed to hold web images, and so speed up web pages, are now filled with large amounts of video streaming data. This blows away the cache efficiency for the data the caches were intended to hold in the first place, slowing down the websites that use them.

[Figure: HTTP and scale: high-interval progressive download spikes.]


HTTP at scale is also quite a challenge in the last mile. Consider a last-mile ISP that, in a particular area, has several pipes to the Internet. As an HTTP-based streaming event goes live, the clients in that region find themselves pulling data inbound from only one of those pipes, because the destination streaming servers were routed to the closest CDN facility to preserve quality. As this pipe gets hot, the last-mile operator has limited mechanisms to manage the inbound traffic, since it all originates from one nearby datacenter.

This means the pipe gets congested, and everyone in that last mile suffers a degraded video experience.

HTTP and CDN Barriers To EntryCDN’s have learned to leverage the constraints that the relationship between distance and quality create. This relationship is like a gravitational force that holds quality, scale, and economics in a low orbit. This creates an environment where cash is king, and companies that have access to large pools of capital can build enormous infrastructures. These large infrastructures create a quality differentiation between the companies with access to capital, and the startups that don’t. The market then ends up with a few large providers that have a story around quality (at least in a specific region), and then a slew of new entrants fighting for the scraps and trying to break into the market.

For a traditional large broadcaster, it does not make sense to buy services from the smaller CDN providers. The broadcaster wants one or two providers at most, and the smaller providers cannot serve the broadcaster during very large peak events. The broadcaster is left using capacity as a vendor qualification: draw the line at 500 Gbps of capacity, for example, and any vendor under that is not worth the time to integrate. This leaves two or three players in the space to service 90% of the revenue in the market, and fifty or so smaller entrants fighting over the remaining 10%.

Introducing Octoshape to Fix the Root Content Delivery Problem on the Internet

Octoshape's technology eliminates this onerous and resource-devouring process for content creators, broadcasters, and aggregators worldwide, while providing the following benefits:

• The highest, constant-quality video;

• The largest-scale audiences;

• Global OTT reach; and

• Economics unattainable by other technologies.


Core Transport

At its core, Octoshape solves the problems that traditional Internet video delivery technologies have today:

• Variable throughput

• Distance and geography constraints

• Poor performance in congested, last-mile, and mobile networks

• Traffic distribution scale models that are unsustainable because of capital and operational costs

Making Best-Effort Networks Perform Like Provisioned Networks

One of the keys to the constant quality Octoshape provides over best-effort networks lies in the core algorithms of the transport. Octoshape's core transport uses a unique resilient-coding scheme inside a UDP transport. This scheme enables Octoshape clients to:

• Survive packet loss without the data overhead of forward error-correction (FEC) schemes;

• Pull data from multiple sources simultaneously, while consuming only the data rate necessary to recreate the stream;

• Make the most of the available capacity, generating the highest-quality experience; and

• Sustain a throughput profile that delivers a constant, TV-quality viewing experience.

How It Works

A video stream is pulled into the Octoshape system. For the purpose of this example, assume the video stream is a 1 Mbps stream of data.

Octoshape breaks the stream up into many unique 250 kbps streamlets as it is replicated across the streaming server complex. The streamlets are coded in such a way that 20 unique streamlets may exist across the Octoshape server complex, yet any 4 of them are enough to recreate the original 1 Mbps stream for the user.
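
The details of the coding scheme are not described here, but the accounting can be sketched: with 20 streamlets available and any 4 sufficing, the stream survives even heavy source loss. The simulation below models only source dropout, under the stated n = 20, k = 4 assumption:

```python
# A 1 Mbps stream coded into n = 20 unique 250 kbps streamlets, any k = 4 of
# which reconstruct it. The coding itself is abstracted away; this models
# only independent source failures.
import random

N_STREAMLETS = 20   # unique streamlets across the server complex
K_NEEDED = 4        # 4 x 250 kbps recreates the 1 Mbps stream

def stream_survives(p_source_failure):
    """True if at least K streamlet sources are still reachable."""
    alive = sum(random.random() > p_source_failure for _ in range(N_STREAMLETS))
    return alive >= K_NEEDED

trials = 100_000
for p in (0.2, 0.5, 0.8):
    ok = sum(stream_survives(p) for _ in range(trials)) / trials
    print(f"{p:.0%} of sources failing -> stream intact {ok:.1%} of the time")
```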

[Figure: Octoshape technologies sidestep network congestion to provide the highest quality at all times, globally.]

[Figure: A live stream from the encode server is throughput-optimized and split into data streamlets for the video event.]

This approach enables the Octoshape client on the end user’s viewing device to tune into multiple streamlets at once, with these sources transparently prioritized based on quality. If a streamlet source is behind a congested route or goes offline for some reason, the system pulls in other stream sources to take its place.

The underlying transport provides throughput optimization over UDP. The resiliency normally provided by TCP is replaced with Octoshape's resilient coding scheme, which removes the overhead of reliable delivery from the client. Octoshape's technology works with standard media formats, including Flash, Windows Media, and MPEG2_TS.

In the Octoshape scheme, the outbound stream from the encoder is sent to a local processor called the Octoshape broadcaster. This software processes the stream and sends it in the Octoshape throughput-optimized protocol to the Octoshape ingest servers in the cloud. This stream is resilient and supports active/active and active/passive redundancy modes.

Once ingested, the data is replicated as streamlets and sent to the Octoshape distribution servers in the cloud or to those located on the Octoshape server complex.

The Octoshape client on the computer or other connected device that is consuming the stream requests access to the stream, and is quickly fed a full stream from one of the servers in the cloud to achieve instant-on playback or instant channel change.

The system quickly notifies the Octoshape client of a cloud of resources from which it can pull the data. Within seconds, the client has a multi-path option to consume portions of the stream. It then begins to profile the sources for latency and packet loss, creating a quality-ranked resilient mesh of sources for the video. As a result, any source could drop out, any packet could be lost, a data center could be knocked out, or a fiber could be cut, and the consumer will not see a frame drop in the video.
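
A hedged sketch of such quality ranking, with illustrative weights and measurements (not Octoshape's actual scoring), might look like:

```python
# Candidate sources are profiled for latency, jitter and loss, then ranked
# into a mesh; the client draws from the best-scoring sources first.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    latency_ms: float
    jitter_ms: float
    loss_pct: float

def quality_score(s: Source) -> float:
    # Lower is better; packet loss is penalized most heavily (assumption).
    return s.latency_ms + 4 * s.jitter_ms + 50 * s.loss_pct

candidates = [
    Source("edge-datacenter", 20, 2, 0.1),
    Source("central-cloud", 90, 5, 0.0),
    Source("congested-route", 15, 12, 1.5),
]
mesh = sorted(candidates, key=quality_score)
print([s.name for s in mesh])   # best-ranked sources are drawn from first
```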

[Figure: Scaling from thousands to millions of simultaneous viewers by activating inactive resources, with no intervention, no additional cost, and no wasted bandwidth.]

The Benefits of a Resilient Transport Approach

This underlying resilient transport approach has several benefits:

• It creates a constant-bit-rate, TV-like experience. The UDP resilient flow does not have the variable characteristics of a normal TCP flow. Therefore, while Octoshape features multi-bit-rate technology, it rarely needs to rely on it: once matched with a bit rate, users stay put.

• Because the enabling Octoshape technology is multi-path, it acts as a smooth and easy back-off mechanism as load increases in the last mile. If a link becomes congested, Octoshape notices the increasing jitter, packet loss, and latency, and moves traffic off the affected link to other, less-congested ones. In the last mile this evenly load-balances the inbound traffic, opening up and leveraging the capacity available on all the pipes, instead of congesting one pipe as traditional CDN technologies do.

• Since the resilient UDP flow is not subject to the packet loss and latency behavior of a TCP-based technology, high-quality video can attain global reach. Regional infrastructure constraints are no longer an issue, which means data can be served from locations where bandwidth, power, cooling, and space are inexpensive.

• The technology also enables Octoshape to perform very efficiently over best-effort networks, including 3G and 4G infrastructures. Because TCP technologies tend to create more overhead as conditions get more challenging, the Octoshape approach is the most efficient way to send data over wireless networks.

These core innovations have made way for dramatic architectural improvements and have enabled distribution methods over the Internet that were previously impractical. Two of these innovations are Octoshape's Multicast Suite of technologies and Octoshape's Cloudmass service.

Cloudmass

To grasp the impact of the Octoshape Cloudmass product, we first must explore the effect Octoshape has on a CDN deployment. CDNs normally deploy a fixed set of reliable resources that they own and have architected and provisioned to provide a given output. Octoshape acts as a magnifying glass over a pool of resources: in a given scenario, it can make a CDN that has invested in a 50 Gbps infrastructure perform and deliver capacity at the level of a CDN that has invested in a 500 Gbps infrastructure. This was the case for the coverage of Barack Obama's inauguration in 2009, where Octoshape was deployed at a smaller CDN provider that served as much video traffic as one of the largest CDNs in the space.

Additionally, Octoshape was able to sustain the bit rate through the peak of the event, where traditional technologies, crippled by congestion, would only have been able to sustain 70 percent of the video bit rate, and then would have buckled under the load.

To plan for an event of this magnitude with traditional streaming technologies, a CDN must purchase gear, architect the deployment, deploy the gear close to the edge, and ensure that network capacity planning matches the desired output in each region to maintain quality. This is enormously capital intensive, and it requires an immense amount of time and coordination. Such events must be planned months in advance, and the CDN usually charges very high reservation fees to protect itself from this outlay of time and money.

A New Way to Play

Today, there is a race in the space to deploy cloud infrastructure. Many large providers are deploying immense, centralized, shared network and compute platforms. Looking at just one of the top ten providers, you would find more compute and network provisioned than at the largest CDNs on the planet.

If we take the Octoshape magnifying glass and dynamically deploy it across fifteen percent of one of the large cloud providers, it can instantly serve at the capacity level of one of the largest CDNs. If fifteen percent of the top ten cloud infrastructures were dynamically aggregated using Octoshape, they could recreate the streaming capacity of all the world's CDNs combined. This is the impact that Octoshape Cloudmass brings to the table.

The Cloudmass technology is an extension of Octoshape's deployment and provisioning technology. As load increases on the normal Octoshape CDN infrastructure, Octoshape can provision resources around the globe in real time using the APIs of multiple cloud service providers. As these sources come online, the Octoshape client technology sees them as valid sources for building a resilient mesh of streaming sources.

Since Octoshape has broken the relationship between distance and quality, it does not matter in which cloud, or in which region of the world, these resources are provisioned. It does not matter to the Octoshape infrastructure if one cloud becomes overloaded, if there is a fiber cut to a particular datacenter, or if a specific rack of computers loses power. The Octoshape system is resilient to these kinds of glitches, because the software was designed to run on a pool of unreliable resources globally.

As the event cools down, the resources are released dynamically. The operative concept is that Octoshape Cloudmass can dynamically provision and activate global resources across several clouds, without the traditional capital expenditure, deployment, coordination, and time required to facilitate events of this size.
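
A minimal sketch of this elastic behavior, with hypothetical sizing numbers and stand-in provision/release calls in place of real cloud-provider APIs:

```python
# Grow or shrink a server pool to track event load. The callables are
# hypothetical stand-ins for cloud-provider APIs; sizing is illustrative.
import math

def scale_pool(active, viewers, viewers_per_server=2000,
               provision=lambda n: print(f"provision {n} servers"),
               release=lambda n: print(f"release {n} servers")):
    """Return the new pool size after tracking the current viewer count."""
    needed = math.ceil(viewers / viewers_per_server)
    if needed > active:
        provision(needed - active)     # event heating up
    elif needed < active:
        release(active - needed)       # event cooling down
    return needed

servers = 0
for viewers in (10_000, 250_000, 2_000_000, 50_000):
    servers = scale_pool(servers, viewers)
```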

[Figure: A standard encode server sends a 1 Mbps stream to the Octoshape broadcaster software, which produces a throughput-optimized source stream that is replicated as 250 kbps data streamlets for the video event.]

The impact of this technology, combined with the abundance of cloud-based compute and network resources, is nothing less than disruptive to the current environment. It rips down the barriers to entry that CDNs using traditional technologies have enjoyed because of the relationship between distance and quality.

The cloud presents a unique opportunity that Octoshape can capture and traditional technologies cannot. Clouds are inherently centralized; they are often shared, undedicated resources; and they are often not designed for high-throughput services like video streaming. This is problematic for TCP-based streaming technologies, as the clouds are not fundamentally designed to solve the quality aspect of video delivery. It is also very expensive to stream data from the cloud; even volume-based pricing in the cloud remains an expensive proposition today. Fortunately, this is an area Octoshape has uniquely solved, with multiple approaches for efficiently moving video to the last mile without pulling all of it from the origin streaming servers.

Multicast Suite of Technologies

Octoshape's suite of three multicast technologies – Native Source-Specific Multicast, Automatic Multicast Tunneling, and Octoshape Simulated Multicast – provides the magnification effect that enables the vast impact of the Cloudmass technology.

Multicast - Reborn

Multicast has historically been restricted to provisioned networks. It is used in the enterprise, or in managed IPTV systems, where congestion and packet loss can be controlled and managed. There are many reasons multicast has not been widely adopted across the public Internet, one being that in its native form it is not a resilient transport mechanism.

In the Octoshape system, the process starts with a standard off-the-shelf encoder. Octoshape supports major video formats such as Flash RTMP, Windows Media, and MPEG2_TS, as supported by top vendors such as Digital Rapids, Inlet, Elemental, Viewcast, FME, Microsoft and Newtek.

In the case of Flash Live, Octoshape provides a small piece of software called the Octoshape broadcaster that can be installed directly on the encoding device or on another computer local to the encoder. To the encoder, the Octoshape broadcaster looks like a Flash Media Server. The encoder is configured in exactly the same way as it has been traditionally, so to the encoder, Octoshape is transparent.

Octoshape takes the stream and applies its throughput optimization technology to improve the Internet path between the encoder and the Octoshape cloud. Once in the cloud, the stream is ready for distribution.

Simulated Multicast

In this model, the Octoshape-enabled media player tunes to an Octoshape stream. The Octoshape server complex in the cloud immediately sends instant stream-start data down to the last mile, enabling the video to begin playing.

The Octoshape system then begins sending a list of valid sources, enabling the client to create a resilient mesh of stream sources. As other clients begin to tune into the stream, the Octoshape system adds them to the valid resource pool that is communicated to other clients.

The Octoshape client then begins pulling small bits of data from these participating clients just as it would from cloud server resources. These participating clients are ranked along with the server resources for stream quality based on jitter, latency and packet loss. The client constantly adjusts the amount of data flowing from each source based on the quality of incoming data.

As an event evolves, other participating clients in the region, ISP, last mile, or office begin to provide higher-quality data than the server resources from the cloud. When this occurs, the data begins to be delivered from the edge, instead of from the cloud server complex. This method simulates the efficiencies of Native Multicast in the last mile or enterprise using Octoshape application level communication technologies.

[Figure: Simulated multicast: 250 kbps data streamlets flow into last-mile networks 1 and 2, where participating clients exchange them to deliver 1 Mbps to each viewer.]

Native Source-Specific Multicast

One distribution option has Octoshape inject the stream into the native multicast cloud of a last-mile provider. Octoshape provides a piece of software to the provider that resiliently pulls a stream, or set of streams, into the last mile and injects it into the provider's native multicast environment. The provider gives Octoshape a pool of SSM (S,G) addresses to manage for these streams.

When an Octoshape-enabled client tries to tune into the Octoshape URL for the stream, the Octoshape cloud will send data to the client that enables the stream to begin instantly. The Octoshape cloud then starts communicating a list of valid resources from which the client can extract data.

Typically, these are server resources in the cloud. But in this example, one of the valid sources is a native SSM address. While the user is watching high-quality video, Octoshape is attempting to receive data from the native multicast source. Since this particular client is connected to a native multicast domain, it begins to receive data from the multicast source, and therefore de-prioritizes the data from the cloud.

Trace amounts of resiliency data are still pulled from the cloud in case there is packet loss on the native multicast feed. In cases of packet loss, the cloud sources are reprioritized to fill the gaps. In this case, Octoshape is transparently managing cloud delivery and native multicast sources in parallel.
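
For illustration, a native SSM join can be sketched at the socket level. The snippet below assumes Linux (Python's socket module does not export IP_ADD_SOURCE_MEMBERSHIP on every platform, so the Linux constant is defined by hand) and uses placeholder (S, G) addresses of the kind an ISP would assign:

```python
# Joining a source-specific multicast (S, G) channel on Linux.
import socket
import struct

IP_ADD_SOURCE_MEMBERSHIP = 39            # from <linux/in.h>

SOURCE = "198.51.100.10"                 # S: the stream source (placeholder)
GROUP = "232.1.1.1"                      # G: an SSM group address (placeholder)
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

# struct ip_mreq_source (Linux layout): multiaddr, interface, sourceaddr
mreq = struct.pack("=4s4s4s",
                   socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"),
                   socket.inet_aton(SOURCE))
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

packet, sender = sock.recvfrom(2048)     # first packet from the (S, G) feed
```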

[Figure: Native source-specific multicast: the Octoshape multicast relay injects the stream into a multicast-enabled ISP backbone, which delivers 1 Mbps natively to multicast-enabled last miles #1 and #2, with as-needed resiliency streamlets pulled from the cloud.]

AMT (Automatic Multicast Tunneling)

AMT is another option for efficiently moving video data to the edge of the network where native multicast is not enabled. AMT is a multicast tunneling process, built into router code, that bridges a multicast domain and a non-multicast domain. It extracts one copy of the video into the last mile and serves multiple copies from a relay there.

In this case, the last-mile provider has some portions of the last mile enabled with native multicast and some that are not. As in the previous Native Multicast scenario, Octoshape injects the streams into the native multicast domain of the last mile.

A client on the non-multicast portion of the network seeks to tune to the stream. The media player requests the stream from Octoshape, and the Octoshape server complex in the cloud immediately sends instant stream-start data to the last mile, enabling the video to begin playing. The server complex then begins to send alternative sources from which the client can pull resilient data. Among these sources is the native SSM address.

The Octoshape client attempts in the background to tune into this SSM address and immediately finds it unavailable, since the client is not connected to the native multicast domain. The client then sends an anycast request to find the closest AMT relay, if one is available. The closest relay responds and tunnels to the native multicast domain to pull one copy of the stream into the AMT relay.

The Octoshape client then begins to receive the feed from the native AMT relay. If bits are dropped along the way, the Octoshape client fills the holes by drawing from the cloud.
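
The overall fallback chain described in this section can be summarized in a short sketch; all three callables are hypothetical stand-ins for the behaviors in the text:

```python
# Try native SSM first, fall back to an AMT relay discovered via anycast,
# and keep the cloud as the resilient backstop.
def choose_primary_source(join_ssm, discover_amt_relay):
    """Return the best available delivery path for the multicast feed."""
    if join_ssm():                        # inside a native multicast domain?
        return "native-ssm"
    relay = discover_amt_relay()          # anycast request for nearest relay
    if relay is not None:
        return f"amt-relay:{relay}"       # one tunnelled copy into the last mile
    return "cloud-unicast"                # no multicast path available

# Whichever path is chosen, trace resiliency streamlets are still pulled
# from the cloud so packet loss on the multicast feed can be repaired.
print(choose_primary_source(lambda: False, lambda: "relay.example.net"))
```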

Note that AMT multicast models apply in both enterprise and last-mile networks.

[Figure: AMT: multicast-enabled last mile #2 receives the stream natively, while in non-multicast last mile #1 an AMT-enabled router tunnels to the multicast-enabled backbone to pull one 1 Mbps copy, with as-needed resiliency streamlets from the cloud.]


Conclusion

Octoshape has created the most efficient transport protocols for delivering constant-bit-rate content across best-effort networks such as the Internet, fixed wireless, and mobile infrastructures. The technology uses standard media formats and standard media players.

These transport protocols eliminate the traditional barriers to optimal streaming of media, chief among them the relationship between distance from the streaming server and the quality of the stream. With traditional CDN technologies, if quality is held fixed, this relationship creates a floor for the cost of goods sold that cannot be overcome regardless of economies of scale.

This is how Octoshape technologies usher in a new paradigm of quality, scale, and economics for TV-quality video delivery over the Internet. The technology enables the use of cloud aggregation techniques, multi-bit-rate technology, and multicast distribution strategies not previously achievable with traditional technologies. The resulting impact takes quality and scale up to a level unreachable by any other technology, and brings cost of goods sold below a level that any other technology can technically reach.

This disruptive paradigm will help usher in the next generation of TV services by enabling a new frontier of business models and consumer choice.

For more information, visit www.octoshape.com

© 2011 Octoshape. All Rights Reserved. The Octoshape brand and logo are trademarks of Octoshape ApS. Other brands and names may be claimed as the property of others.