CLOUD RAN


Abstract

Mobile broadband is immensely important globally as a key

    socio-economic enabler, as evidenced by the continuing growth

    of data traffic on mobile networks. To meet this unabated growth

    in demand, cellular operators must increase their network

capacity by using advanced wireless technologies and by adding more network elements such as cell sites, controllers, etc.

According to growth estimates, data traffic increases by 131 percent every year, while air interface capacity grows only 55 percent yearly. At the same time, ARPU is constantly decreasing.

    Per UMTS Forum Report 44, the total worldwide mobile traffic

    will reach more than 127 Exabytes in 2020, which is 33 times

more than the 2010 figure. Significantly, at least 80 percent of this traffic volume is generated by users, leading to large variations in total mobile traffic across both time and location. Future mobile networks must be designed to

    while at the same time maintaining permanent and extensive

    geographical coverage in order to provide continuity of service

    to customers. In 2020, daily traffic per Mobile Broadband

subscription in a representative Western European country

    will stand at 294 MB, and at 503 MB for dongles (67 times

    greater than in 2010).

The cost of acquiring new spectrum, deploying new wireless carriers, and evolving network technologies (e.g., from GSM to W-CDMA to LTE), while adding more processing capacity, new radios, and antennas, and managing the resulting heterogeneous network, is becoming economically unsustainable and leads to a vicious cycle of demand.

An increase in the number of base stations results in more power consumption and higher interference, which in turn reduces coverage and capacity. It also requires more radio network controllers.

    Radio Access Network (RAN) architecture requires solutions

    in the following areas:

    > Additional base stations and radio antennas without

    increasing the number of cell sites

    > Reconfigurable BSs to support multiple technologies

    > Resource aggregation and dynamic allocation

> Cooperative radio technology for coordinated multi-point

    transmission and reception

    > More capacity and coverage with reduced interference

    > Distributed antenna technology for increased coverage

> Controller software enhancements to run in a virtualization environment for lower costs and elastic capacity

> Reduced Capex, Opex, and overall TCO

    This white paper provides an overview of the distributed RAN

    architecture called Cloud RAN, which addresses solutions for

    the different areas mentioned previously. It also provides a

    more detailed analysis of the Cloud radio network controller

    architecture.

Introduction

In a conventional cellular network, the antenna, RF equipment, digital processing, and baseband unit (together, the BTS) sit at the cell site, as shown in the Conventional Cellular Network diagram below. This requires more power and real estate, and additional directional antennas and large cell towers to support multi-frequency bands and new air interface technologies like LTE. Enhancing a conventional network to support the data traffic demand on a current wireless network is economically unsustainable.

Conventional Cellular Network diagram: base stations (BTS) at cell sites in rural and urban zones, connected through a BSC and MSC, with backhaul to the Internet.

There is an immediate need to identify a solution that reduces the number of cell sites, effectively reuses resources, and employs reconfigurable basebands, multi-band radios, and distributed wideband antennas to support different air interface technologies.

Cloud RAN architecture is based on a distributed radio access network architecture consisting of the following network elements:

> Active antenna arrays

> Multi-band radio remote heads

> Centralized baseband units

> Metro cells

> Radio network controllers on cloud

> Common management server

> SON server for seamless management and optimal network usage

Figure 1: CRAN Access Technology Cloud (active antenna systems, remote radio heads, macro sites, and femto cells/WiFi connected over coax, optical, and IP links to a centralized baseband bank, RAN controllers on cloud with GSM/GPRS, UMTS, UMTS Femto GW, HeNB GW, and WiFi access gateway servers, a common management server, a SON server, the core network, and IMS/operator services)


Active Antenna Array

In order to support increasing bandwidth demand, operators

    need to enhance their network to support multiple technologies,

    multiple frequency bands, and new air interface technologies.

This requires new antennas to be installed, including multiple directional antennas to support MIMO, beamforming, Rx diversity, etc.

    This also increases the number of antennas in an already dense

    network, which in turn increases interference between different

cells and reduces the capacity of the cell. The end result is increased site costs.

In the Active Antenna Array solution, each antenna element connects to a separate transceiver element. The antenna array can support multiple transceivers, which addresses the problem of installing multiple antennas to support multiple air interface technologies, MIMO, beamforming, Rx diversity, etc.

Each active antenna array has the transceiver (RF and digital) hardware embedded with each antenna element inside the antenna array, rather than outside in a separate RF box (RRH) or in a conventional TRDU/TMA. This reduces the loss due to the RF connection between the antenna and external RF equipment. With the built-in transceivers, individual signals can be fed to different antenna elements to create focused vertical beams per user, carrier, technology, etc., which controls interference and increases cell capacity and coverage.
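To make the per-element beam steering described above concrete, the short sketch below computes per-element weights for a hypothetical vertical uniform linear array of active antenna elements and confirms that the combined array response peaks at the chosen electrical downtilt. The element count, spacing, and tilt angle are illustrative assumptions, not values taken from this paper.

import numpy as np

# Hypothetical vertical array of active antenna elements (illustrative values).
N = 8                 # number of elements, each with its own transceiver
d_over_lambda = 0.5   # element spacing in wavelengths
tilt_deg = 6.0        # desired electrical downtilt for one user/carrier

n = np.arange(N)
tilt = np.radians(tilt_deg)

# Per-element steering weights: phase each transceiver so that the
# contributions add coherently toward the tilt angle.
weights = np.exp(-1j * 2 * np.pi * d_over_lambda * n * np.sin(tilt))

def array_factor(theta_deg: float) -> float:
    """Magnitude of the combined array response toward angle theta (degrees)."""
    theta = np.radians(theta_deg)
    response = np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(theta))
    return float(abs(np.sum(weights * response)))

print(array_factor(6.0))   # ~8.0: full coherent gain at the chosen downtilt
print(array_factor(0.0))   # smaller: energy is steered away from the horizon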

Multi-band Radio Remote Heads

In conventional networks, BTS/NodeB contains radio (RF and

    digital components) and baseband units connected to an antenna

    using coaxial cables.

    The Open Base Station Architecture Initiative (OBSAI) and the

    Common Public Radio Interface (CPRI) standards introduced

    standardized interfaces separating the server and the radio




    part of the base station, the latter of which is supported by the

    Remote Radio Heads (RRH).

    A separate RRH is required for each frequency band to support

    multiple frequency bands and multiple sectors in a given

geographical area. The number of RRHs required increases proportionally, and in many macrocell deployments the RRH is mounted at the top of the cell tower with the antenna to reduce RF loss. In denser network deployments, adding more RRHs at the tower may not be feasible, so RRHs may have to be deployed on high-rise buildings, etc. This increases the overall cost, RF loss, and maintenance costs.

Multi-band RRHs (MB-RRH) are offered by multiple vendors to address the issues mentioned above. A single MB-RRH can support multiple frequency bands and multiple technologies like GSM, WCDMA, and LTE, combining what would otherwise require several RRH units. This reduces the number of RRHs required to support multiple frequency bands and different technologies, while reducing cell site costs, power consumption, and complexity.

Centralized Baseband Units

In typical macrocell deployments, the baseband unit is located

    at the base of the cell tower along with the radio and other digital

    equipment. The cost of deploying new baseband units along

    with radios, antennas, etc. to support additional carriers, spectral

    bandwidth, different technologies, etc. and managing the

    heterogeneous network is becoming economically challenging

    and unsustainable.

The centralized baseband is built on the concept of Software Defined Radio (SDR), using distributed radio signal processing and baseband processing units that are software configurable, which reduces the complexity of deploying BBUs at the cell site. Additional carriers, spectral bandwidth, new technologies, etc. can be seamlessly supported by stacking baseband units in the baseband pool and deploying remote MB-RRHs and AAAs, at comparatively lower cost and with easier maintenance.

    The baseband and radio signal processing is distributed using

    the CPRI interface between BBU and remote radio equipment.

    The Common Public Radio Interface (CPRI) is an industry

    cooperation aimed at defining a publicly available specification

for the interface between the Radio Equipment Control (REC) and the Radio Equipment (RE), which in our case are the BBU and the Remote Radio Head, respectively. The scope of the CPRI specification is restricted to the link interface only (Layer 1 and Layer 2), which is basically a point-to-point interface. The Open

    Base Station Architecture Initiative (OBSAI) was introduced

    to standardize interfaces separating the Base-Station server

and the radio part of the base station. Figure 2 depicts a CRAN architecture utilizing the CPRI or OBSAI interface.

Figure 2: CRAN Architecture A: Utilizing CPRI/OBSAI Link (Cloud RAN units hosting RRC, S1-AP, X2-AP, RRM, SON, a CPRI/OBSAI engine, and per-cell Layer 2, connected to Layer 1 radio units over high-speed CPRI/OBSAI links over fiber)

    Key features of this architecture (Architecture A) are:

> Cells are distributed across processors and flexibly connected to radio units through high-bandwidth (order of Gbps) optical fiber links

> Board-level and link-level redundancy can be provided

> High-speed communication across sectors for efficient inter-cell information sharing for cooperative/coordinated radio resource management, scheduling, and power control to optimize cell throughput and interference reduction

    > Reduced need for hardware at antenna sites

    > Utilizes optical links where already available to avoid laying

    new links, which may make infrastructure expensive

    The main disadvantage of this approach is the high-bandwidth

    link required between radio equipment and the central unit.

    For example, CPRI supports different line-bit-rate options ranging

    from 614 Mbps to 6.14 Gbps. Overlaying such high-bandwidth

    connections is a costly prerequisite and can be a big barrier to

    this solution becoming popular. To overcome this problem, if

    the split between radio equipment and control unit can be moved

    higher up the network stack (i.e., from below Layer 1 to between

    Layer 1 and Layer 2), then instead of sharing IQ samples, only

    the demodulated and decoded data and protocol information

    need to be shared over an IP-based link between the remote

    unit and the central unit. This considerably reduces the

    bandwidth requirement to approximately 200 Mbps for a 2x2

    MIMO, 20 MHz cell. Figure 3 depicts CRAN Architecture utilizing

    IP link between radio unit and the central unit.
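As a rough illustration of why the split point matters, the sketch below estimates the CPRI fronthaul rate for a 20 MHz, two-antenna LTE carrier and compares it with the order-of-magnitude backhaul needed when only decoded data crosses the link. The sample width, overhead factors, and peak-rate figure are common back-of-the-envelope assumptions, not values quoted from the CPRI specification.

# Rough fronthaul vs. backhaul estimate for a 20 MHz, 2x2 LTE cell.
# Assumptions (illustrative): 30.72 Msps sampling, 15-bit I and Q samples,
# 16/15 control-word overhead, 8b/10b line coding, ~150 Mbps peak DL rate.

sample_rate = 30.72e6        # samples/s for a 20 MHz LTE carrier
bits_per_sample = 2 * 15     # I + Q
antennas = 2
control_overhead = 16 / 15   # one CPRI control word per 15 data words
line_coding = 10 / 8         # 8b/10b encoding

cpri_rate = sample_rate * bits_per_sample * antennas * control_overhead * line_coding
print(f"CPRI (split below Layer 1): {cpri_rate / 1e9:.3f} Gbps")   # ~2.458 Gbps

peak_throughput = 150e6      # approx. peak DL rate, 2x2 MIMO, 20 MHz
transport_overhead = 1.2     # headroom for headers, signalling, retransmissions
ip_rate = peak_throughput * transport_overhead
print(f"IP link (split above Layer 1): {ip_rate / 1e6:.0f} Mbps")  # ~180 Mbps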

    Key features of Architecture Option B are:

> The Cloud RAN unit is connected with relatively low-bandwidth (on the order of 100 Mbps) IP links to the radio equipment site. IP connectivity should be through an operator-managed network so that there is strict control over latency and jitter

    > Antenna site terminates IP links and carries out Layer 1

processing according to air interface timing

> Layer 3 and Layer 2 are located in the Cloud RAN unit. To handle the impact of IP link latency on the strict 1 ms scheduling of LTE, modifications in the MAC will be required. A portion of the MAC should also run in the baseband unit at the antenna site to control the time-critical L1 interface and relay messages between the Cloud MAC and the antenna Layer 1.

    > High-speed communication across sectors for efficient

    inter-cell information sharing for cooperative/coordinated

    radio resource management, scheduling, and power control

    to optimize cell throughput and interference reduction

The main advantage of option B is that it requires cheaper, lower-bandwidth IP links between the cell site and the central unit. However, the cell site will require more hardware compared with option A, because Layer 1 and some part of Layer 2 are executed at the cell site. In addition, the end-to-end latency increases due to the delay and jitter characteristics of the IP link.
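One way to reason about that latency penalty is to check the fronthaul delay against LTE's fixed HARQ timing. The sketch below is a simplified budget check, assuming FDD uplink HARQ (feedback due 4 ms after the received subframe) and hypothetical processing times; it is an illustration, not a dimensioning rule from this paper.

# Simplified HARQ timing budget for architecture option B (illustrative).
# FDD LTE uplink HARQ: feedback for a subframe received at time n must be
# transmitted 4 subframes (4 ms) later. Processing times are assumptions.

HARQ_BUDGET_MS = 4.0       # subframe n -> feedback at n+4 (FDD LTE)
L1_DECODE_MS = 1.0         # Layer 1 processing at the antenna site (assumed)
CLOUD_MAC_MS = 0.5         # MAC/scheduling work in the Cloud RAN unit (assumed)
L1_ENCODE_MS = 0.5         # building the feedback/response at the site (assumed)

def max_one_way_fronthaul_ms() -> float:
    """Largest tolerable one-way IP-link delay if the HARQ decision is made in the cloud."""
    spare = HARQ_BUDGET_MS - (L1_DECODE_MS + CLOUD_MAC_MS + L1_ENCODE_MS)
    return spare / 2.0      # the decision crosses the link twice (up and back)

budget = max_one_way_fronthaul_ms()
print(f"One-way fronthaul budget: {budget:.2f} ms")  # 1.00 ms with these assumptions

# If the measured link delay exceeds the budget, the time-critical part of the
# HARQ loop must stay in the partial MAC at the antenna site, as described above.
measured_delay_ms = 1.5
print("keep HARQ at site" if measured_delay_ms > budget else "cloud MAC is fast enough")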

BBU POOLING

    The pooling of processing resources for multiple cell sites at a

    central location (utilizing architecture option A or B) has many

    benefits. Based on the capacity, coverage, and number of air

interface technologies to be supported, additional BBUs can be easily added and remotely managed. The cell sites need to contain only RRHs and antennas, which greatly reduces the space, power consumption, and management overhead at each cell site.

    KEY BENEFITS OF BBU POOLING

    Capex and Opex reduction

    The hardware can be pooled across multiple cell sites in order

    to reduce the initial capital costs, as well as regular running

    (electricity, site rental, etc.) and maintenance costs.

Figure 3: CRAN Architecture B: IP Link between Cloud RAN Unit and Antenna Site Equipment (Cloud RAN units hosting RRC, S1-AP, X2-AP, RRM, SON, per-cell Layer 2, and partial MAC, connected over IP links to antenna sites running Layer 1, partial MAC, and site management)


Load Aggregation and Balancing

Baseband processing for multiple cell sites is aggregated based on bandwidth requirements rather than on the number of cell sites. The BBU units can be dynamically assigned to different cell sites based on usage patterns.

    Multiple Technologies Support

    The BBU units can be dynamically configured to support different

    air interface technologies based on network load and service

    requirements.

    High Availability

The BBU pool has a number of BBU units. If any single BBU fails, the other active BBUs can share the load of the failed BBU so that service recovers seamlessly. During multiple BBU failures, the active BBU units can be dynamically reconfigured to share the traffic load from the cell sites supported by the BBU pool.
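The sketch below illustrates the pooling idea in miniature: cells are assigned to whichever BBU in the pool has spare capacity, and when a BBU fails its cells are redistributed across the surviving units. Capacity figures and cell loads are hypothetical; a real pool manager would also account for technology type, latency, and CoMP groupings.

from dataclasses import dataclass, field

@dataclass
class BBU:
    name: str
    capacity: float               # abstract processing units (hypothetical)
    cells: dict = field(default_factory=dict)   # cell id -> load

    def load(self) -> float:
        return sum(self.cells.values())

    def spare(self) -> float:
        return self.capacity - self.load()

def assign(pool: list[BBU], cell: str, load: float) -> BBU:
    """Place a cell on the BBU with the most spare capacity (greedy)."""
    best = max(pool, key=lambda b: b.spare())
    if best.spare() < load:
        raise RuntimeError("pool exhausted: add another BBU to the pool")
    best.cells[cell] = load
    return best

def handle_failure(pool: list[BBU], failed: BBU) -> None:
    """Redistribute the failed BBU's cells across the surviving units."""
    survivors = [b for b in pool if b is not failed]
    for cell, load in failed.cells.items():
        assign(survivors, cell, load)
    failed.cells.clear()

pool = [BBU("bbu-1", 10.0), BBU("bbu-2", 10.0), BBU("bbu-3", 10.0)]
for i, load in enumerate([3.0, 2.5, 4.0, 1.5, 2.0]):
    assign(pool, f"cell-{i}", load)
handle_failure(pool, pool[0])     # bbu-1 fails; its cells move to bbu-2/bbu-3
print({b.name: round(b.load(), 1) for b in pool})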

Coordinated Multi-Point Operation (CoMP)

The BBUs serving different cell sites are co-located at a centralized location, so cell site information related to signaling, traffic data, resource allocation, channel status, etc. can be easily shared between BBUs. This information can be used to optimize resource allocation, handovers, call handling, and scheduling for Inter-Cell Interference Coordination (ICIC), and to improve spectral efficiency. CoMP and ICIC are key requirements of LTE-A in the 3GPP Rel-11 specifications.

    Because the BBUs support macrocells and small cells, the

    coordinated multi-site processing helps optimize the mobility

    and ICIC between heterogeneous networks.
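As a toy example of what this centralized coordination enables, the sketch below has a single scheduler give neighbouring cells disjoint resource-block ranges for their cell-edge users, a basic form of ICIC. The cell names, resource-block count, and edge-user demands are hypothetical.

# Toy inter-cell interference coordination: a central scheduler reserves
# disjoint resource-block (RB) ranges for the cell-edge users of neighbouring
# cells, so edge users do not interfere; interior users may reuse any RB.

TOTAL_RBS = 100                      # e.g., a 20 MHz LTE carrier
edge_demand = {"cell-A": 20, "cell-B": 25, "cell-C": 15}   # hypothetical RB needs

def coordinate_edges(demand: dict[str, int], total_rbs: int) -> dict[str, range]:
    """Assign non-overlapping RB ranges to each cell's edge users."""
    if sum(demand.values()) > total_rbs:
        raise ValueError("edge demand exceeds carrier capacity; scale requests down")
    allocation, start = {}, 0
    for cell, need in demand.items():
        allocation[cell] = range(start, start + need)
        start += need
    return allocation

for cell, rbs in coordinate_edges(edge_demand, TOTAL_RBS).items():
    print(cell, f"edge users on RBs {rbs.start}-{rbs.stop - 1}")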

    SON Support

    The shared information of BBUs can be used for advanced

    SON features to optimize the various services. The SON can

    dynamically configure resources to be used for the cell site

    processing, optimize the handover between cells, manage

    inter-RAT handovers, conduct cell-load balancing, and efficiently

    use HW resources. During very low load conditions, some of

    the BBUs can be switched off to save energy and help achieve

a green BTS.

Metrocells

As mentioned before, adding more macrocells to support

    increased capacity and coverage is not an optimal solution. In

    an effort to reduce the load on the macrocells, and to provide

    higher capacity and greater coverage, operators are deploying

offloading solutions, where macrocell traffic is offloaded to low-capacity, low-power small cells called metrocells.

    The metrocells can be deployed on lamp posts, buildings, etc.

    and are connected to the operator core network through the

    IP backhaul. These cells can be deployed in both indoor and

    outdoor environments.

This provides an economically viable way for the operator to increase cell density, with lower cost, efficient spectrum usage, and less time needed to extend capacity and coverage.

Radio Network Controllers on Cloud

As defined by NIST, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared

    pool of configurable computing resources (e.g., networks,

    servers, storage, applications, and services) that can be rapidly

    provisioned and released with minimal management effort and

    service provider interaction.

The radio network controllers in the cloud RAN solution are built using this cloud computing model to support GSM BSC, UMTS RNC, HeNB-GW, MME, and WiFi-GW functions with increased capacity across multiple technologies. This cloud computing model can also be extended to core network (CN) elements to support a flexible, open architecture with increased capacity, multiple technologies, effective reuse of resources, and high availability.

    Traditionally, radio access network controllers like BSC,

    RNC, H(e)NB-GW, etc. are built on specific hardware with

customization. The controller application can run only on specific hardware and software solutions, and is dimensioned for an estimated capacity. The available resources are never used to their full capacity, which increases the TCO, time to market,

    and dependency on specific hardware and software vendor

    solutions.

Figure 4: Cloud Computing Service Models (Software as a Service: end applications such as the controller applications; Platform as a Service: the application platform or middleware as a service; Infrastructure as a Service: the cloud hardware, including CPUs, cores, disks, and fabric)


Cloud computing architecture defines three different service models, as shown in Figure 4, where COTS solutions

    can be used in different service layers to avoid using customized

    hardware and software solutions from specific vendors.

    The radio network controller applications in the cloud computing

    environment still need all the software and hardware layers as

    in the traditional telecom equipment. But hardware virtualization,

    OS abstraction layers, and middle layers are provided to the

    application through virtual service layers so that it can remain

    independent of underlying hardware and software components.

Cloud computing is in the very early stages of adoption in the telecom controller space. Running controller applications as SaaS on top of different vendors' PaaS and IaaS offerings requires a common interface supported by multiple vendors, and that interface is still evolving. Standards bodies like NIST and ETSI are working to define standard interfaces for the different service layers.

    Per NIST, generally, interoperability and portability of customer

    workloads is more achievable in the IaaS service model because

    the building blocks of IaaS offerings are relatively well-defined

    (e.g., network protocols, CPU instruction sets, legacy device

    interfaces, etc.).

    The IaaS layer is supported by multiple vendors through their

COTS virtualization solutions. A hypervisor, also called the virtual machine manager (VMM), provides hardware virtualization so that multiple operating systems are able to run concurrently on a host computer. The virtual hardware is called a virtual machine, and the operating system it runs is called the guest. Each guest OS instance running on a VM acts as an individual server for the application. Figure 5 shows an overview of virtual servers.

A virtual machine (VM) is a software implementation of a computer that executes programs like a physical machine.

    Virtual machines are separated into two major categories based

    on their use and degree of correspondence to any real machine.

    A system virtual machine provides a complete system platform

    that supports the execution of a complete operating system

    (OS), while a process virtual machine is designed to run a single

    program and support a single process.

    A system virtual machine (virtual hardware), which provides

    an abstraction of a simple x86 PC with private CPU, memory,

    network interface (NIC), and file system, is used for controller

    virtualization. Each VM is independent of the VMM and other VMs.

When the number of VMs increases, the complexity of I/O traffic and hardware handling in the VMM increases, and application handling slows down significantly compared with a non-virtualized environment.

The PCI-SIG has defined a standard for I/O virtualization called SR-IOV (Single Root I/O Virtualization), in which a physical device implements hundreds of images of itself, one for each VM.

    Each VM communicates with its own set of I/O queues, which

    can directly use the device without the performance cost of

    going through a VMM while ensuring isolation between the VMs.
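On a Linux host with an SR-IOV capable NIC, the virtual functions that back these per-VM device images are typically enabled through the PCI device's sysfs attributes. The sketch below shows that step; the PCI address is a placeholder and the operation requires root privileges.

from pathlib import Path

def enable_sriov_vfs(pci_addr: str, num_vfs: int) -> None:
    """Enable num_vfs SR-IOV virtual functions on a PCI device via sysfs (Linux)."""
    dev = Path("/sys/bus/pci/devices") / pci_addr
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} virtual functions")
    # The kernel requires the VF count to be reset to 0 before changing it.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

# Placeholder PCI address; each resulting VF can then be passed through to a VM.
enable_sriov_vfs("0000:03:00.0", 8)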

Figure 5: Virtual Servers (before: three different servers for three operating systems and services; after: only one server required for the three different operating systems and services)


VMware supports this technology in its ESXi VMM through a feature called VMDirectPath. VMDirectPath I/O allows a guest operating

    system on a virtual machine to directly access physical PCI

    and PCIe devices connected to a host. Each virtual machine can

    be connected to up to two PCI devices. PCI devices connected

    to a host can be marked as available for pass-through from the

    hardware advanced settings in the configuration for the host.

Intel and AMD provide hardware-based assistance for I/O virtualization that complements single-root I/O virtualization. Intel's name for this technology is VT-d, while AMD's version is AMD-Vi.

The controller applications in the cloud environment are built on a third-party IaaS layer, with the guest OS/virtual machine as the interface to IaaS in the service-layer hierarchy. All software layers above IaaS, such as the guest OS, middleware layers, controller-specific OAM, and the controller application itself, are provided by telecom equipment manufacturers (TEMs). The guest OS can be any standard OS like Linux, VxWorks, or Solaris, depending on the application architecture.

    The virtual server/cluster management is part of third-party

IaaS solutions. This provides the mechanisms to manage the virtualization environment, control the execution of the virtual machines, and load the associated applications. Some of the key functionalities supported by virtual machine management are listed below, followed by a brief illustrative sketch:

    > Centralized control and deep visibility into virtual infrastructure

    (create, edit, start, stop VM)

    > Proactive management to track physical resource availability,

    configuration, and usage by VMs

    > Distributed resource optimization

    > High availability

    > Scalable and extensible management platform

    > Security
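As one concrete, and merely illustrative, example of the create/start/stop control listed above, the snippet below uses the libvirt Python bindings, which front KVM and other hypervisors mentioned later in this paper. The connection URI and domain name are placeholders, and error handling is omitted.

import libvirt   # Python bindings for the libvirt virtualization API

# Connect to the local hypervisor (the URI is a placeholder for the IaaS host).
conn = libvirt.open("qemu:///system")

# Visibility: list every defined virtual machine and its state.
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "shut off")

# Control: start a (hypothetically named) controller VM if it is not running.
rnc_vm = conn.lookupByName("rnc-vm-01")
if not rnc_vm.isActive():
    rnc_vm.create()          # boots the VM

conn.close()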

There are multiple vendors supporting centralized control at different levels in the virtualization environment. VMware vCenter is one such solution, providing a scalable and extensible management platform.

The operator can host the controller application software on the operator's own private cloud or on a service provider's cloud (community or public).

    Using a cloud computing environment for radio network

    controllers has the following advantages:

    Hardware Independence

Controller software can run on COTS hardware available from different HW vendors, so there is no binding to customized hardware solutions. Different applications can run on the same hardware so that available resources can be used on demand.

    Software Independence

Application software can run on COTS virtual machines available from different vendors as IaaS. The application is independent of the actual hardware used, so it can run on different hardware platforms with no application software changes, and no proprietary software is needed to maintain hardware independence.

    Resource Pooling

    The different hardware types can be pooled to run multiple

    instances of application software to support increased capacity.

    The resources can be dynamically allocated, with different

    applications running on the same hardware.

    High Availability

Using pooled resources to run controller applications handles single or multiple unit failures within the resource pool, while providing geo-redundancy, multi-tenancy, and elasticity.

    Reduced CAPEX

    Usage of the COTS hardware and software reduces TCO and

    time to market. Reuse of available resources with dynamic

    allocation helps use the full capacity of the resources, thus

    reducing the number of resources required.

    Reduced OPEX

    Use of common hardware and software reduces the cost of

managing different customized solutions. The resources can be used effectively depending on the load conditions. Based on

    demand, some of the resources can be switched off in order

    to reduce electricity and other infrastructure costs (e.g.,

    cooling, etc.).

    Elasticity, Best of Class Performance

    The capacity of the system can change quickly according to

    need. The controller applications (RNC, BSC, etc.) run in virtual

    machines independent of the physical hardware. Third-party

    virtualization technology from different vendors can be used to

    host the application-specific OS, middleware, and applications.

    There are multiple vendors providing the virtualization IaaS

layer. Some of the key solutions are VMware, KVM, and the Wind River (WR) hypervisor.


Multiple applications can run on a single platform, with different VMs running different OSs in a multi-tenant model. In a multi-core environment, different applications can run on different cores, each with its associated VM, guest OS, middleware layer, and application. The common cloud computing architecture allows the different controller applications to dynamically use the available resources. An example of a radio controller application in a cloud environment is shown in Figure 6.

Figure 6: Controller Application over the IaaS Layer (BSC, RNC, and H(e)NB-GW applications, each with its middleware and guest OS, running in VMs on a COTS VM manager/hypervisor over physical hardware such as servers or ATCA)

Common Management Server

As previously mentioned, operators use more than one RAT to support wireless data traffic demand. Converged solutions such as AAAs, RRHs, multi-standard BBUs, and radio network controllers are used to support multiple technologies. Management of these

    converged network elements requires a common management

    server capable of supporting the FCAPS features for GSM,

    UMTS, and LTE network nodes.

SON Functions

In cloud RAN network architecture, each network element is

    capable of supporting self-configuration, optimization, and

    autonomous recovery. SON, in this architecture, is based on

    decentralized algorithms as applicable at each individual network

    element. The operator may support multiple technologies like

    GSM, WCDMA, and LTE in the cloud RAN deployment. This

    requires network-level self-optimization to support automatic

    updates of network topology changes between E-UTRAN/

    UTRAN/GERAN networks.

    Information related to network load, performance, etc. of the

    different wireless technologies is used by the centralized function

    to dynamically allocate shared resources to different network

    elements in the cloud RAN and support load balancing. For

example, when the GSM load is low but UMTS is at its peak, shared NEs like the AAA and RRH can be reconfigured to support additional cells, frequency bands, etc. When the network load is low, network elements can be switched off wherever the load can be handled by a minimum set of network elements.
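A minimal sketch of that centralized decision might look like the following: given per-RAT load reports, the function frees shared elements from a lightly loaded RAT for the busy one, and switches elements off entirely when everything is quiet. The thresholds and element names are hypothetical.

# Illustrative centralized SON decision: rebalance shared NEs (AAA/RRH carriers)
# between RATs and power elements down at low load. Thresholds are assumptions.

LOW, HIGH = 0.2, 0.8   # load thresholds (fraction of capacity)

def son_decision(load: dict[str, float], shared_elems: list[str]) -> list[str]:
    """Return a list of reconfiguration actions for the shared network elements."""
    actions = []
    busy = [rat for rat, l in load.items() if l > HIGH]
    idle = [rat for rat, l in load.items() if l < LOW]

    # Reassign shared capacity from lightly loaded RATs to busy ones.
    for rat in busy:
        for donor in idle:
            actions.append(f"reconfigure {shared_elems[0]}: move carriers {donor} -> {rat}")

    # Energy saving: if everything is quiet, switch off all but a minimum set.
    if all(l < LOW for l in load.values()):
        for elem in shared_elems[1:]:
            actions.append(f"switch off {elem}")
    return actions

print(son_decision({"GSM": 0.1, "UMTS": 0.9, "LTE": 0.5}, ["aaa-1", "rrh-2", "rrh-3"]))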

Conclusion and Aricent Value Proposition

As discussed in the previous sections, enhancing traditional networks to support increasing broadband capacity and coverage is not economically viable. There is an immediate need to deploy distributed networks with centralized



baseband units, RRHs, AAAs, and radio network controllers on the cloud to reduce the complexity of introducing additional cell sites and adding additional antennas and radio components. Radio network controllers in a cloud environment using virtualization technology reduce both the infrastructure cost of supporting multiple technologies and the complexity of managing multiple network elements.

In the 3rd Generation Partnership Project (3GPP) international standardization group meeting held in June 2012, energy saving, cost efficiency, and support for diverse application and traffic types were identified as priority areas for Release 12. Deploying a cloud RAN architecture-based network can address these requirements. The NGMN group has also initiated a project, Centralised Processing, Collaborative Radio, Real-Time Cloud Computing, Clean RAN System (P-CRAN) [11], to address these issues.

Implementation of a cloud RAN solution can save up to 15 percent of CAPEX and up to 50 percent of OPEX over five to seven years compared with a traditional RAN deployment, per the China Mobile report [1]. According to the Alcatel-Lucent lightRadio Economics analysis [2], these disruptive RAN architecture designs and innovative features can reduce overall TCO by at least 20 percent over five years for an existing high-capacity site in an urban area, and by at least 28 percent for new sites.

    Aricent is actively participating in and following emerging C-RAN

    architecture initiatives. Aricent eNodeB, EPC, and HeNB-GW

IPRs are ready for the C-RAN architecture.

eNodeB Framework

> RAN on the cloud must cater to variable capacity

requirements and host multiple cells. Aricent Layer 3 and Layer 2, including the Scheduler, MAC, RLC, PDCP, and GTP-U, are scalable for multi-core architectures and support multiple form factors (femto, pico, micro) and different capacity requirements based on the deployment.

> A single instance of Aricent Layer 3 can handle multiple cells/

    sectors hosted on cloud RAN equipment and can interface

    with cells/sectors hosted on other cloud RAN equipment

    on the X2 link.

    > Aricent Layer 2 can handle one cell/sector per instance

    and multiple instances of Layer 2 can be utilized to handle

    multiple cells/sectors.

    > eNodeB software is modified to handle IP link (architecture

    option B described previously) interface between cell site

    unit and the central unit.

Enhanced Packet Core Modules

Universal SON Server (UniSON)


Additionally, Aricent is involved in multiple services projects related to solution architecture, implementation, and field support of C-RAN solutions, including work with Tier 1 OEMs in the areas of multi-RAT BTS and virtualized common hardware for RNC/BSC solutions. Aricent is well equipped to provide software frameworks (eNodeB, EPC, etc.), the necessary resources, a management framework, and a strong delivery process to assist customers with their own C-RAN solutions.

REFERENCES

[1] http://www.google.com/url?sa=t&rct=j&q=china+mobile+c-ran&source=web&cd=1&ved=0CE0QFjAA&url=http%3A%2F%2Flabs.chinamobile.com%2Farticle_download.php%3Fid%3D63069&ei=ebXyT6uBAc7LrQfRnK2rCQ&usg=AFQjCNFDC6S_4Oth6_0vLobNzvfvrlouHw

[2] http://www.alcatel-lucent.com/wps/DocumentStreamerServlet?LMSG_CABINET=Docs_and_Resource_Ctr&LMSG_CONTENT_FILE=White_Papers%2FlightRadio_WhitePaper_EconomicAnalysis.pdf&REFERRER=j2ee.www%20%7C%20%2Ffeatures%2Flight_radio%2Findex.html%20%7C%20lightRadio%3A%20Evolve%20your%20wireless%20broadband%20network%20%7C%20Alcatel-Lucent

[3] http://www.vmware.com/products/vcenter-server/overview.html

[4] http://www.vmware.com/products/vsphere/mid-size-and-enterprise-business/overview.html

[5] http://www.obsai.com/obsai/content/download/4977/41793/file/OBSAI_System_Spec_V2.0.pdf

[6] http://www.cpri.info/downloads/CPRI_v_5_0_2011-09-21.pdf

[7] http://csrc.nist.gov/publications/drafts/800-146/Draft-NIST-SP800-146.pdf

[8] http://collaborate.nist.gov/twiki-cloud-computing/pub/CloudComputing/RoadmapVolumeIIIWorkingDraft/NIST_cloud_roadmap_VIII_draft_110311.pdf

[9] http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf

[10] http://www.umts-forum.org/component/option,com_docman/task,doc_download/gid,2545/Itemid,213/

[11] http://www.ngmn.org/workprogramme/centralisedran.html


© 2014 Aricent. All rights reserved. All Aricent brand and product names are service marks, trademarks, or registered marks of Aricent in the United States and other countries.

frog, the global leader in innovation and design, based in San Francisco, is part of Aricent. The company's key investors are Kohlberg Kravis Roberts & Co. and Sequoia Capital.

[email protected]

Aricent is the world's premier engineering services and software company. We specialize in inventing, developing, and maintaining our clients' most ambitious initiatives. Combining more than 20 years of engineering expertise with a force of more than 10,000 dedicated product engineers, Aricent serves an exclusive list of global companies, bringing the next generation of breakthrough, innovative products to market.