@ Yota 2013
Yota PCRF
Administrator's Guide
Product version: 3.6
Document version: 3.3
Status: development
Revision History
Date Version Author Revision
23.08.2011 1.0 Evgenia Martynyuk Document created
28.09.2011 1.1 Evgenia Martynyuk Added "PCRF Databases Resizing" section, "Configuration Parameters Setup" section
15.10.2011 2.1 Evgenia Martynyuk Supported product version changed to 2.5; Added "Congestion Management Configuration" section
20.10.2011 2.2 Evgenia Martynyuk Added pcrf_console process description
21.10.2011 2.3 Evgenia Martynyuk "Configuration Parameters Setup" section changed;
19.01.2012 2.4 Evgenia Martynyuk Updated "O&M Console" chapter, added "Monitoring" chapter. Supported product version changed to 3.0, updated "System Processes" chapter
16.02.2012 2.5 Evgenia Martynyuk Updated "Basic Administrator’s Tasks" chapter.
30.05.2012 2.6 Evgenia Martynyuk Supported product version changed to 3.1
25.07.2012 2.7 Evgenia Martynyuk Supported product version changed to 3.2
28.08.2012 2.8 Evgenia Martynyuk Supported product version changed to 3.3
27.10.2012 2.9 Evgenia Martynyuk Supported product version changed to 3.4
10.12.2012 3.0 Evgenia Martynyuk Supported product version changed to 3.5. System Parameters on PCRF and DDF nodes updated.
18.01.2013 3.1 Evgenia Martynyuk Supported product version changed to 3.5.1. System configuration parameters updated
03.04.2013 3.2 Evgenia Martynyuk Supported product version changed to 3.5.2. Added "Accumulators Setup" and "System Settings Export\Import" sections
10.10.2013 3.3 Evgenia Martynyuk Supported product version changed to 3.6. Added "Push Interface of Session IP Address Notification" section.
Table of Contents
Introduction .............................................................................................................. 6
Terms and Definitions ............................................................................................ 6
Related Documentation .......................................................................................... 6
Abbreviations ........................................................................................................ 6
System Modules Configuration Overview .................................................................... 8
Brief Yota PCRF Components Overview ......................................................................... 9
PCRF Node Components ............................................................................................ 10
PCRF Core ........................................................................................................... 10
PCRF Database ..................................................................................................... 11
Interfaces ............................................................................................................ 11
Service Components ............................................................................................. 11
EDR Writer .......................................................................................................... 11
Monitoring ........................................................................................................... 11
BackUp ............................................................................................................... 12
Administration Tool ............................................................................................... 12
PCRF Cluster Architecture .......................................................................................... 12
DDF Node Components ............................................................................................. 13
PCRF Database on DDF node .................................................................................. 14
PCRF Interfaces .................................................................................................... 14
O&M Console ....................................................................................................... 14
Basic Administrator’s Task ....................................................................................... 15
Initial Configuration .................................................................................................. 16
DDF O&M Console Server List Setup ........................................................................... 16
PCRF Nodes Configuration in PCRF O&M Console .......................................................... 16
PCRF Tables Information Setup .............................................................................. 16
PCRF Cluster Parameters Setup .............................................................................. 17
Service Dictionary Provisioning ................................................................................... 18
IP Range Configuration .............................................................................................. 19
Push Interface of Session IP Address Notification .......................................................... 19
Policy Logic Configuration .......................................................................................... 20
engine.lua ........................................................................................................... 20
rules.xml ............................................................................................................. 21
DDF Cluster Nodes Switchover ................................................................................... 21
System Restart ........................................................................................................ 21
DDF cluster restart ............................................................................................... 21
PCRF cluster restart .............................................................................................. 21
System Settings Export\Import .................................................................................. 22
DDF as Single Subscriber Profiles Storage .................................................................... 22
Session Event Setup ................................................................................................. 23
System Processes ..................................................................................................... 25
PCRF Node Processes ................................................................................................ 26
pcrf_core ............................................................................................................. 26
drug.................................................................................................................... 26
pcrf_console ........................................................................................................ 26
rx_watchdog ........................................................................................................ 27
pcrf_notify ........................................................................................................... 27
log_writer ............................................................................................................ 28
stat_writer ........................................................................................................... 28
edr_writer ........................................................................................................... 28
trace_writer ......................................................................................................... 28
pcrf_check ........................................................................................................... 28
nms_import ......................................................................................................... 28
DDF Node Processes ................................................................................................. 29
ddf ..................................................................................................................... 29
ddf_propagator .................................................................................................... 29
ddf_console ......................................................................................................... 29
rx_watchdog ........................................................................................................ 29
System Logging ....................................................................................................... 30
Other PCRF node log files ...................................................................................... 32
Other DDF node log files ........................................................................................ 32
Process Log Level ..................................................................................................... 32
System Configuration Parameters ............................................................................ 33
PCRF Cluster Parameters ........................................................................................... 34
DDF Cluster Parameters ............................................................................................ 38
Administration Interfaces Description ...................................................................... 40
HTTP Interfaces ........................................................................................................ 41
Subscriber Session Information Interface .................................................................... 42
Generic Request API (GRAPI) ..................................................................................... 42
DDF Information Interface ......................................................................................... 42
Administration Tools ................................................................................................ 43
O&M Console ........................................................................................................... 44
O&M Console Usage .............................................................................................. 44
O&M Console Home Page ....................................................................................... 44
Server List ........................................................................................................... 45
Server Statistics ................................................................................................... 46
MiniCRM .................................................................................................................. 48
Operations and Workplace ..................................................................................... 49
Command Line Interface ........................................................................................... 50
Command List File ................................................................................................ 50
Available Commands ............................................................................................. 50
Databases ................................................................................................................. 55
PCRF Cluster Database .............................................................................................. 56
DDF Cluster Database ............................................................................................... 56
Databases Access ..................................................................................................... 56
ttIsql utility .......................................................................................................... 56
SQL queries ......................................................................................................... 56
O&M Console ....................................................................................................... 56
Monitoring ................................................................................................................ 57
Monitoring Instruments ............................................................................................. 58
Counters ............................................................................................................. 58
Sensors ............................................................................................................... 58
Upstream stats ..................................................................................................... 59
Diameter Peer Connection Status ........................................................................... 59
RRD Charts .......................................................................................................... 59
Appendix 1. Utilities list ........................................................................................... 63
PCRF cluster scripts .............................................................................................. 63
DDF cluster scripts ................................................................................................ 64
Introduction
The Yota PCRF Administrator's Guide describes the system administration aspects of the Yota PCRF product.
This document is intended for technical personnel who are in charge of the administration and
support of the Yota PCRF product. It is recommended that users have at least a basic working knowledge of the following:
Linux (preferably RHEL)
Oracle TimesTen
To make the Yota PCRF system function, configuration of the following items should be completed:
Hardware and OS
Yota PCRF components
PCEF component (including DPI)
Hardware and OS
Hardware/software system requirements and the required OS configuration are described in the
"Yota PCRF 3.6 Installation Guide".
Yota PCRF components
The required configuration of the Yota PCRF components is described in the chapters below.
PCEF component (including DPI)
Configuration of external components, such as the PCEF (Huawei UGW9811) or DPI (Cisco SCE,
Procera, etc.), is done by the network specialists who are responsible for PCEF configuration.
Terms and Definitions
A peer is a node with which other PCRF system nodes create connections. For a PCRF node this can
be another cluster node or a PCEF peer; for a DDF node it can be another DDF node and all PCRF nodes.
Related Documentation
1. Yota PCRF 3.6 Product Description
2. Yota PCRF 3.6 Policy Engine
3. Yota PCRF 3.6 Diameter Interfaces
4. Yota PCRF 3.6 Provisioning Common Principles
5. Yota PCRF 3.6 SPR Configuration Interface
6. Yota PCRF 3.6 Subscriber Management Interface
Abbreviations
Abbreviation Meaning
AAA Authentication Authorization Accounting
ACR Access Control Router
AF Application Function
BSS Business Support System
CRM Customer Relationship Management
DDF Data Distribution Function
DPI Deep Packet Inspection
O&M Operations and Maintenance
SPR Subscriber Profile Repository
1
System Modules Configuration Overview
Brief Yota PCRF Components Overview
PCRF Node Components
DDF Node Components
Brief Yota PCRF Components Overview
The main component of the Yota PCRF system is the PCRF cluster. The system can have one or
several PCRF clusters. If several PCRF clusters are installed in different regions, an additional
component is required: a DDF cluster.
The scheme of geographically distributed configuration is shown in the figure below:
Figure 1. Geographical distribution (the BSS connects over HTTP to the DDF; the DDF connects over HTTP and Diameter (I0, I1) to the PCRF clusters in City A, City B and City C, each comprising SPR, PCRF, PCRF DB and PCEF)
Information
For information on how to configure DDF and PCRF nodes during installation, see the "Yota PCRF 3.6 Installation Guide".
PCRF Node Components
Each PCRF cluster has two nodes, which replicate the required information between each other. Installation as a standalone node (non-cluster mode) is also possible.
PCRF node architecture scheme is shown in the figure below:
Figure 2. PCRF node architecture (the PCRF Core with its Policy Engine and in-memory PCRF DB; Diameter interfaces: Gx to the PCEF, Rx to the AF, I0/I1 to the DDF and the secondary PCRF node; HTTP interfaces: SPR Configuration Interface, Subscriber Management Interface, Subscriber Session Info Interface, GRAPI; service components: MiniCRM, BackUp, System Logging, EDR Writer; administration tools: PCRF O&M Console, CLI; cluster watchdog; monitoring via SNMP/HTTP to the Monitoring Center)
PCRF Core
This is the main component; it implements the logic of selecting the policy for subscriber access
to network resources based on different criteria, such as subscription information, subscriber
location, used quota or roaming conditions. The input is information from the subscriber profile,
session data, accumulated usage and other available information. The output is the chosen policy, which defines the subscriber's access to network resources and QoS levels.
The Policy Engine has an embedded script processor based on the Lua scripting language. The script
processor can operate with PCRF functions and attributes and build any combination of attributes and conditions for policy selection.
Policy selection logic is configured in the engine.lua file.
After choosing a policy for a specified subscriber, the PCRF core sends the policy to the PCEF. The
list of policies that can be sent to the PCEF is stored in the rules.xml file.
For more information, see "Policy Logic Configuration" section in Chapter 2, "Basic Administrator’s Task".
PCRF Database
For more information, see the "PCRF Cluster Database" section in Chapter 7, "Databases".
Interfaces
The following interfaces are available for external systems:
Diameter Interfaces (Gx, Rx)
HTTP Interfaces
Diameter Interface description is available at:
http://<pcrf_host>:8091/doc/diameter.html
For more information about the HTTP Interfaces, see "HTTP Interfaces" section, Chapter 5 "Administration Interfaces Description".
Service Components
Service components provide stable system operation and high performance: they manage active
processes in the system (if a process is down or does not respond, it is restarted), detect and terminate duplicate sessions, collect the information required for RRD charts, and so on.
The service components configuration should be changed only under critical conditions!
EDR Writer
This component generates files with information about all events that lead to a QoS policy or rule set change.
The component usually doesn’t require any configuration.
For more information about EDR files, see "Yota PCRF 3.6 EDR Generation.docx ".
Monitoring
The Monitoring component monitors the system status, including sub-components, load level and performance.
Built-in statistics collection provides values of various counters, such as number of requests,
processing errors and so on.
For more information about the system monitoring, see Chapter 8 "Monitoring".
BackUp
This component is optional and performs a full backup of the information stored in the PCRF database
of a PCRF node (such as subscriber profiles, session information, configuration files and server settings). A full backup is performed every 3 hours.
For more information about backup and restore procedures, see "Yota PCRF 3.6 Backup and
Recovery.docx".
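A 3-hour full-backup schedule of this kind is typically driven by cron. The crontab fragment below is an illustrative sketch only: the backup script path and log location are assumed example names, not the product's actual file names (consult "Yota PCRF 3.6 Backup and Recovery" for the real procedure):

```shell
# Hypothetical crontab entry: run a full backup every 3 hours, on the hour.
# /opt/pcrf/bin/pcrf_backup.sh and the log path are assumed example names.
0 */3 * * * /opt/pcrf/bin/pcrf_backup.sh >> /var/log/pcrf/backup.log 2>&1
```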
Administration Tool
O&M Console, CLI and the Mass Operation Utility are described in Chapter 6, "Administration Tools".
PCRF Cluster Architecture
The PCRF cluster supports an Active/Hot-standby redundancy scheme: the cluster nodes process
requests in Active/Hot-standby mode, and session and SPR information is replicated between the
nodes. If one of the cluster nodes becomes unavailable, the second one takes over the entire workload and
handles the sessions which were created on the unavailable node. The redundancy scheme is presented in the figure below:
Figure 3. Redundancy (primary and secondary PCRF nodes, each with its own PCRF DB and Policy Engine; DB replication between the nodes and a cluster watchdog with master/slave links; both nodes expose Diameter interfaces (Gx) toward the PCEF cluster and the DDF, and HTTP interfaces toward the BSS)
Database information replication is performed by TimesTen replication agent. For more
information about the replication agent, go to:
http://download.oracle.com/docs/cd/E11882_01/timesten.112/e13072/overview.htm
DDF Node Components
DDF (Data Distribution Function) provides geographical distribution of PCRF clusters. This component forwards provisioning interface commands from the BSS to the Yota PCRF clusters.
The DDF acts as a single entry point for the BSS: the BSS sends subscriber management and SPR configuration commands only to the DDF.
DDF can be configured as:
Temporary subscriber profile storage only (default). In this case the DDF stores subscriber
profiles only temporarily, for new and migrated profiles. The DDF migrates subscriber profiles
from one regional PCRF cluster to another, and the profiles are then stored on the local PCRF
permanently.
Permanent subscriber profile storage. In this case the DDF stores the operator's whole
subscriber database. No subscriber profiles are migrated from one local PCRF to another; a local
PCRF receives subscriber profiles and stores them until these subscribers' sessions are
terminated.
Like any PCRF cluster, the DDF cluster has two nodes. The DDF node architecture is
presented in the figure below:
Figure 4. DDF node architecture (in-memory PCRF DB with Propagator and Proxy components; HTTP interfaces toward the BSS and CRM: DDF Info Interface, SPR Configuration Interface, GRAPI, Subscriber Session Info Interface, Subscriber Management Interface; Diameter interfaces (I0, I1) toward the PCRF clusters and the secondary DDF node; service components: BackUp, System Logging, MiniCRM; administration tools: DDF O&M Console, CLI; statistics to the Monitoring Center; cluster watchdog)
Configuration of the DDF cluster is usually performed via the DDF O&M Console or via configuration
parameters, which are described in the "DDF O&M Console Server List Setup" section, Chapter 2, "Basic Administrator's Task".
DDF has the following main components:
PCRF Database
PCRF Interfaces
PCRF Database on DDF node
For more information about DDF databases, see "DDF Cluster Database" section, Chapter 7 "Databases".
PCRF Interfaces
For more information about PCRF Interfaces, see in Chapter 5 "Administration Interfaces Description".
O&M Console
The DDF cluster has an O&M Console, which is used for configuration, administration and maintenance
of the system.
The O&M Console collects statistics and monitoring information from all clusters of the system
(DDF and PCRF).
Various Yota PCRF configuration operations (adding a new subscriber, changing a node IP, etc.) can be performed via the O&M Console.
For more information about the O&M Console, see Chapter 6 "O&M Console".
2
Basic Administrator’s Task
Initial Configuration
PCRF Nodes Configuration in PCRF O&M Console
Service Dictionary Provisioning
Policy Logic Configuration
System Parameters Setup
DDF Cluster Nodes Switchover
System Restart
System Settings Export\Import
This chapter describes the administrator's tasks which can be performed on the Yota PCRF
system clusters.
Initial Configuration
Initial configuration of Yota PCRF is performed during the system installation. For more details, please refer to "Yota PCRF 3.6 Installation Guide.docx".
DDF O&M Console Server List Setup
After the initial Yota PCRF system installation, information about all PCRF clusters and cluster
nodes should be added to the O&M Console manually. Only after this procedure will the information about all clusters be displayed correctly in the O&M Console Server List.
For instructions on setting PCRF cluster information in the O&M Console, see the "Server List" section, Chapter 6 "O&M Console".
Only after the Server List setup is it possible to see node statistics and to perform the required PCRF node configuration in the O&M Console.
PCRF Nodes Configuration in PCRF O&M Console
PCRF nodes configuration can be performed only after the Server List setup.
PCRF Tables Information Setup
It is required to set information about the clusters that the PCRF node interacts with. For example, this can be a PCEF (DPI) cluster.
Configuration steps:
1. Select the primary PCRF node of each cluster in the Server List, one by one.
2. Go to Configuration (in Operations block) -> Network Topology -> Cluster Table.
Information about this PCRF cluster and DDF cluster will be displayed.
3. Click the Add button in the Cluster Table that appears and set information about the other clusters
that the PCRF cluster interacts with:
Parameter Description
Cluster ID ID of the cluster
Role Cluster role. Values: 1 – PCRF; 2 – SPR; 3 – PCRF with SPR; 4 – DDF; 6 – DDF with SPR; 8 – PCEF; 16 – AF; 32 – Slave PCEF
Name Optional cluster name
Cluster SSR Subscription
Subscription type: 0 (Cluster SSR NONE) 1 (Cluster SSR IP-only) 2 (Cluster SSR IP and Location)
Description Cluster description
PCEF cluster information sample:
Cluster ID Role Cluster SSR Subscription Name Description
3 8 0 (Cluster SSR NONE) PCEF PCEF cluster
4. Go to Configuration -> Network Topology -> Peers.
Information about this PCRF cluster nodes and DDF nodes will be displayed.
5. Click the Add button in the Peers table that appears and set information about all peers that the PCRF nodes
interact with:
Parameter Description
Peer ID ID of a node
Cluster ID ID of the cluster which the node belongs to
Host Internal domain name of the new node
Realm Realm of the new cluster
Address External FQDN or external IP address of the new node
Port Port used by the node for interaction via the Diameter interface (3868)
Dialect Vendor-specific Diameter protocol dialect
Auto Connect
0 – the new node will wait for an incoming connection from the server with which it interacts via the Diameter protocol; 1 – the new node will initiate the connection to the server with which it interacts via the Diameter protocol
Priority New node's priority over other PCRF nodes in the system
Mandatory Monitoring parameter. 1 – system monitoring raises an alarm if there is no connection with the node; 0 – system monitoring ignores a missing connection
PCEF peer information sample:
Parameter Value
Peer ID 31
Cluster ID 3
Host labugw01
Realm gx.yota.ru
Address labugw01.gx.yota.ru
Port 3868
Dialect Default
Auto Connect 1
Priority 0
Mandatory 1
PCRF Cluster Parameters Setup
It is required to set the following PCRF cluster parameters in the O&M Console after the initial system installation:
Important
Make sure the PCRF node IDs are the same as in the DDF O&M Console settings (Peers and HTTP Peers tables).
Important
It is required to set a unique ID for every PCEF (DPI) peer. The ID must differ from
the PCRF peer IDs within a cluster.
Instance Name
Default Region
Use DDF subscriber info
Default cisco package ID (If Cisco SCE is used as DPI)
Perform the following steps to set required parameters values:
1. Choose a PCRF node in the Server List block and go to Operations block ->
Configuration -> Server settings.
2. Click on the required parameter row and set the parameter value:
instance_name=<any_name>, where <any_name> is a user-friendly node name
default_region=<region_ID>, where <region_ID> is the ID of the region where the PCRF
cluster is installed
use_ddf_subscr_info=[1/0], where 1 – the DDF component is used in the system; 0 – the DDF
component is not used, or the PCRF cluster is installed as a single-cluster system
diameter.send_burst_Gx=<required_value>, where <required_value> should be set
based on hardware capabilities
diameter.send_rate_Gx=<required_value>, where <required_value> should be set based
on hardware capabilities
default_cisco_package_id=<cisco_policy>, where <cisco_policy> is the policy which will
be applied to unknown subscribers on Cisco SCE
For more information about other PCRF cluster configuration parameters, see "PCRF Cluster Parameters" section.
Service Dictionary Provisioning
After the initial Yota PCRF system installation, information about all services should be provisioned
to the Service Dictionary of the DDF cluster SPR database. The SPR configuration interface is used for this purpose.
To add service information use the following HTTP request:
http://ddf_host/spr/conf/addServiceInfo?id=<service_id>&name=<service_name>&description=<description>
Where:
Parameter Type Value range Default value Comment
ID String Length:1~32 - Service identifier
NAME String Length:0~128 - Service name; can be a human-readable name
DESCRIPTION String Length:0~256 - Service description
Important
Make sure the ix_version parameter value on the PCRF node is the same as on the DDF node.
Request sample:
http://vsk-ddf1.scartel.dc/spr/conf/addServiceInfo?id=www&name=WWW_Default_service&description=Basic_service_with_default_QoS_parameters
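On the command line, such a provisioning request can be issued with curl. The sketch below only assembles and prints the request URL from its parts; the DDF_HOST value is a placeholder, not a real node name, and the final curl line is left commented because it requires network access to an actual DDF node:

```shell
#!/bin/sh
# Build an addServiceInfo request URL for the SPR configuration interface.
# DDF_HOST is a placeholder value; substitute a real DDF node name.
DDF_HOST="ddf_host"
SERVICE_ID="www"
SERVICE_NAME="WWW_Default_service"
DESCRIPTION="Basic_service_with_default_QoS_parameters"

URL="http://${DDF_HOST}/spr/conf/addServiceInfo?id=${SERVICE_ID}&name=${SERVICE_NAME}&description=${DESCRIPTION}"
echo "$URL"
# To actually send the request (requires network access to the DDF):
# curl -s "$URL"
```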
For more information about the SPR configuration interface, see the "SPR Configuration Interface" section.
IP Range Configuration
Each PCRF cluster has a specified IP range. It is required to set the IP Range values for each PCRF cluster on the DDF node manually.
In the DDF O&M Console go to Configuration -> IP Ranges and set the required information.
Push Interface of Session IP Address Notification
If required, it is possible to configure session IP address notification from PCRF clusters to the DDF
(or PCEF).
The Yota I1 interface is used for session IP address notification. If IP Address Notification is set,
the PCRF cluster sends the specified information inside an SSR message to the DDF (or PCEF), and the DDF (or PCEF) answers with an SSA.
In the O&M Console of the PCRF cluster go to:
Configuration -> Network Topology -> Clusters and set the SSR subscription type for the DDF (or PCEF) cluster:
0 (Cluster SSR NONE). Session IP address notification won’t be performed (default).
1 (Cluster SSR IP-only). Only session IP address will be sent inside SSR messages to DDF
(or PCEF) when a session with unknown IP address has appeared.
2 (Cluster SSR IP and Location). Session IP address and new BS_ID will be sent inside SSR
messages to DDF (or PCEF) when a session with unknown IP address has appeared or
session location has changed.
SSR sample when unknown IP address has appeared:
SSR I1-Session-Start-Request
App:3333335, Cmd: 3010, Len: 156, Flags:RP--, HbH:07DB2448, EtE:07DB45E9
Origin-Host: "spb-pcrf1.scartel.dc"
Origin-Realm: "scartel.dc"
Destination-Realm: "scartel.dc"
Destination-Host: "spb-ddf1"
Origin-State-Id: 1382355069 (0x5265107d)
Subscription-Id-Data: "250110000000001"
Framed-IP-Address: [10.11.12.20]
Answer from DDF:
Information
It is also possible to configure session IP address notification to the PCEF, but some additional development may be required on the PCEF side to be able to read the received information.
SSA I1-Session-Start-Answer
App:3333335, Cmd: 3010, Len: 72, Flags:-P--, HbH:07DB2448, EtE:07DB45E9
Origin-Host: "spb-ddf1"
Origin-Realm: "scartel.dc"
Result-Code: 2001 (0x000007d1)
SSR sample when location has changed:
SSR I1-Session-Start-Request
App:3333335, Cmd: 3010, Len: 176, Flags:RP--, HbH:07DB2459, EtE:07DB45FA
Origin-Host: "spb-pcrf1.scartel.dc"
Origin-Realm: "scartel.dc"
Destination-Realm: "g4lab.ru"
Destination-Host: "test.ugw02"
Origin-State-Id: 1382355069 (0x5265107d)
Subscription-Id-Data: "250110000000001"
Last-Location: 40686534845696 (0x000025011274e500)
Framed-IP-Address: [10.11.12.20]
Answer from PCEF:
SSA I1-Session-Start-Answer
App:3333335, Cmd: 3010, Len: 68, Flags:-P--, HbH:07DB2459, EtE:07DB45FA
Origin-Host: "test.ugw02"
Origin-Realm: "g4lab.ru"
Result-Code: 2001 (0x000007d1)
Policy Logic Configuration
Each PCRF cluster has its own files for policy logic configuration, so every PCRF cluster can be configured differently.
The following files are used for PCRF policy logic configuration:
engine.lua
rules.xml
engine.lua
The file is stored in the /etc/pcrf/config/lua directory and must contain all policies that can be applied to a subscriber.
This file contains the policy selection algorithm. The algorithm selects the policy to be applied to a given subscriber based on subscription information, network congestion, and other criteria.
When a policy is chosen, the system looks the policy up in the rules.xml file and, if found, sends an access conditions change request to PCEF.
The policy selection algorithm is written in the Lua scripting language.
Information
For more information about engine.lua file and policy logic configuration, see "Yota PCRF
3.6 Policy Engine.docx".
rules.xml
The file is stored in the /etc/pcrf/config/rules directory.
This configuration file contains the list of policies that will be applied to subscribers at PCEF, and the mapping between PCEF policies (rules) and Yota PCRF policies.
DDF Cluster Nodes Switchover
The DDF cluster supports an Active/Hot-standby architecture.
To perform a DDF node switchover, do the following:
1. Stop the ddf_propagator process on the primary node with the command:
/etc/init.d/ddf_propagator stop
2. Start the ddf_propagator process on the secondary node with the command:
/etc/init.d/ddf_propagator start
The secondary DDF node will start to process all management commands from BSS.
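The two-step switchover above can be sketched as a script. The RUN variable is an assumption of this sketch: it prints each command instead of executing it, since on a live system each command must be run on the corresponding node.

```shell
# Sketch of the DDF switchover sequence (RUN=echo prints the steps; on a
# real deployment each command is executed on the named node).
RUN=${RUN:-echo}

ddf_switchover() {
    $RUN "primary:   /etc/init.d/ddf_propagator stop" || return 1
    $RUN "secondary: /etc/init.d/ddf_propagator start"
}

ddf_switchover
```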
System Restart
DDF cluster restart
Only one DDF node operates in the cluster at a time. To restart the DDF cluster, the following script is used:
/opt/pcrf_utils/bin/pcrf_full_restart.sh
PCRF cluster restart
A PCRF node must be restarted only if there are no message queues with management commands from DDF to the PCRF cluster. To stop the message flow from DDF, disable the PCRF cluster in the DDF O&M Console:
1. Choose the primary DDF node and go to Configuration -> Peers table.
2. Set the parameter Enabled=0 for both nodes of the specified PCRF cluster.
3. Set the parameter back to Enabled=1 after the PCRF node restart.
To restart a PCRF cluster, use the same script on the PCRF node as for the DDF cluster restart:
/opt/pcrf_utils/bin/pcrf_full_restart.sh
Information
For more information about how to create the rules.xml file and what AVPs can be used in it, see "Yota PCRF 3.6 Policy Library.docx".
Important
A DDF node must be restarted only if there are no queues of management commands from BSS to PCRF clusters.
System Settings Export\Import
It is possible to export and import the system settings (O&M Console: Configuration -> Server Settings), for example when default settings have been changed and an upgrade to a new system version is required.
The import of the settings is performed by the command
/opt/pcrf_utils/bin/import_settings.sh <file_name>
where <file_name> is the *.txt file from which the settings will be imported.
Command sample:
/opt/pcrf_utils/bin/import_settings.sh settings.txt
The export of the settings is performed by the command
/opt/pcrf_utils/bin/export_settings.sh <file_name>
where <file_name> is the *.txt file to which the settings will be exported.
Command sample:
/opt/pcrf_utils/bin/export_settings.sh settings.txt
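Before an upgrade, the export command can be wrapped in a small backup script. This is a hedged sketch: the backup_settings function and the parameterized exporter path are assumptions for illustration, not part of the product.

```shell
# Sketch: export current settings to a timestamped file so they can later be
# restored with import_settings.sh. The exporter path is parameterized only
# so the flow can be demonstrated; on a PCRF node it is export_settings.sh.
backup_settings() {
    dir=$1
    exporter=${2:-/opt/pcrf_utils/bin/export_settings.sh}
    mkdir -p "$dir" || return 1
    file="$dir/settings_$(date +%Y%m%d_%H%M%S).txt"
    "$exporter" "$file" && echo "$file"
}
```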
DDF as Single Subscriber Profiles Storage
To make the DDF cluster work as a single profile storage, perform the following system configuration:
1. Disable all PCRF peers in the DDF O&M Console: go to Configuration -> Network Topology -> Peers and set the Enable field value to 0 for all PCRF peers.
2. In the DDF O&M Console go to Configuration -> Server Settings and set the ix_version parameter to 305030. This enables the new interaction scheme between the DDF and PCRF clusters: Subscriber Management Interface commands will no longer be propagated to PCRF clusters.
3. In the PCRF O&M Console of each PCRF node go to Configuration -> Server Settings and set use_ddf_subscr_info=0. The PCRF node will stop receiving management commands from DDF.
4. In the PCRF O&M Console of each PCRF node set the ix_version parameter to 305030 as well.
5. In the PCRF O&M Console of each PCRF node change back to use_ddf_subscr_info=1.
6. Make sure the compress_profile_data_batch_enable parameter has identical values on the PCRF and DDF clusters (Configuration -> Server Settings).
7. Enable all PCRF peers in the DDF O&M Console: go to Configuration -> Network Topology -> Peers and set the Enable field value to 1 for all PCRF peers.
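Step 6 above can be verified with a trivial comparison helper (a sketch; the two values are read manually from each cluster's Server Settings page):

```shell
# Sketch: compare compress_profile_data_batch_enable as read from the PCRF
# and DDF clusters (hypothetical helper, not a product utility).
check_batch_compress() {
    if [ "$1" = "$2" ]; then
        echo "OK: compress_profile_data_batch_enable matches ($1)"
    else
        echo "MISMATCH: PCRF=$1 DDF=$2" >&2
        return 1
    fi
}

check_batch_compress 0 0
```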
Session Event Setup
An event is a sign that a subscriber must be processed and session information must be updated. Event information is written to the EVENT table of the PCRF database and contains the internal session ID, the event type, the ID of the PCEF peer that created the session, and the event time. The event time is the time when the event should be processed or, in other words, when the Policy Engine should be called.
Events are generated after provisioning or after PCRF has made some calculations (e.g., a congestion management calculation). To prevent a heavy load on PCRF, some processes can create events with a delayed event time. For example, if one base station suffers from overload and new policies should be applied to all of its subscribers, the Policy Engine is called at different times for the sessions of the different affected subscribers.
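The effect of delayed event times can be sketched as a simple calculation: dispersing N events at a maximum rate of R events per second spreads the Policy Engine load over roughly N/R seconds (the same restriction the set_event utility's --dispersion option applies).

```shell
# Sketch: ceiling of N/R — the number of seconds over which delayed events
# are spread when processed at a fixed maximum rate.
dispersion_span() {
    sessions=$1 rate=$2
    echo $(( (sessions + rate - 1) / rate ))
}

dispersion_span 2500 1000   # 2500 events at 1000 events/s: spread over 3 seconds
```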
If required, it is possible to force event processing (a Policy Engine launch) for all sessions or for specified ones, i.e. to set an event on sessions. This procedure updates the event time information in the PCRF database.
If the policies applied at PCEF (e.g., UGW, Procera) differ from the policies chosen for the subscriber session after the event has been processed, the chosen policies are sent to PCEF. If they do not differ, nothing happens (nothing is sent to PCEF).
Session event setup can be helpful in the following situations:
You need to enforce a policy decision for sessions of a specified base station
You need to enforce a policy decision for sessions created on a specified PCEF only (for example, Procera)
You want to avoid calling the addEvent method for thousands of subscribers via the curl utility
And other situations
To set an event on sessions, a special utility is used:
/opt/pcrf_utils/bin/set_event [<options>]
Option Description
-h --help Print this message and exit
-v --version Show version information and exit
-p --sql-password Password for SQL Server (default: "password")
--sql-username Username for SQL Server (default: "appuser")
--sql-query-timeout SQL query timeout in seconds (default: 2)
--sql-debug Write extended information about database interaction (PARANOID log level) (default: 0 (no))
-d --dec Location in decimal (default: 0 (no))
-w --force Force write operation (default: 0 (no))
-m --dispersion If set the dispersion will be made with this max events per second restriction (default: 1000)
-e --event_type Set event with the given type: session_update (1), session_check (2), request_usage (3) (default: 1)
-t --session_type Set event to those sessions that are primary (1), secondary (2), all (3) (default: 3)
-i --peer_id Set event to sessions on given peer (default: -1)
-y --my_peer_id Set event to sessions with given my_peer_id (default: -1)
-a --location Location where to set events (default: "")
-s --subscriber_id Subscriber to whom the event should be set (default: -1)
-f --logfile Specifies log file name (default: "/var/log/roox/set_event.log")
-l --loglevel Specifies log level (DEBUG, NOTICE, INFO, WARNING, ERROR, CRITICAL) (default: )
--onlineloglevel Set online log level for the currently running application and exit (DEBUG, NOTICE, INFO, WARNING, ERROR, CRITICAL) (default: )
--logformat Specifies log format (plain, plain_no_color, html, shm) (default: )
--backtrace-at-error Show backtrace at error (default: 0 (no))
--backtrace-at-critical Show backtrace at critical (default: 0 (no))
--interactive-log Duplicate logs to stdout (default: 1 (yes))
Command examples:
Set an event for sessions in a specified location:
/opt/pcrf_utils/bin/set_event -p <db_user_password> -l DEBUG --location 25011274E500 --dispersion 100 -w
Set an event for sessions created on a specified PCEF (Procera, UGW). For example, if the Procera peer ID is 29, the following command is used:
/opt/pcrf_utils/bin/set_event -p <db_user_password> -l DEBUG --peer_id 29 --dispersion 10 -w
Set an event for sessions created on the secondary PCRF peer. In this case, the my_peer_id of the secondary node must be set. For example, if my_peer_id of the primary node in the PCRF DB is 1 and my_peer_id of the secondary node is 2, the following command is used:
/opt/pcrf_utils/bin/set_event -p <db_user_password> -l DEBUG --my_peer_id 2 --dispersion 10 -w
Set an event for primary sessions only:
/opt/pcrf_utils/bin/set_event -p <db_user_password> -l DEBUG --session_type 1 --dispersion 10 -w
Important
To set an event, the -w key is required in the command.
If the set_event utility is launched with only the -w key and no other parameters, events will be set for all existing sessions, which may create a heavy load on PCRF; and if the events trigger policy application, PCEF may be overloaded too.
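Given this warning, a hypothetical safety wrapper can refuse to run set_event when no narrowing option is present. Everything here (the wrapper name and the printing instead of executing) is an assumption of this sketch, not part of the product; note that --session_type 3 would still count as a filter even though it matches all sessions.

```shell
# Hypothetical wrapper: require at least one narrowing option before
# building a set_event command line; print the command instead of running it.
safe_set_event() {
    has_filter=0
    for arg in "$@"; do
        case "$arg" in
            -a|--location|-i|--peer_id|-y|--my_peer_id|-t|--session_type|-s|--subscriber_id)
                has_filter=1 ;;
        esac
    done
    if [ "$has_filter" -eq 0 ]; then
        echo "refusing: no filter option given; this would set events on ALL sessions" >&2
        return 1
    fi
    echo /opt/pcrf_utils/bin/set_event "$@"
}

safe_set_event -p secret -l DEBUG --peer_id 29 --dispersion 10 -w
```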
3
System Processes
PCRF Node Processes
DDF Node Processes
System Logging
Process Log Level
This chapter describes Yota PCRF system processes on PCRF and DDF nodes.
PCRF Node Processes
The System has the following processes on the PCRF nodes:
pcrf_core
This is the main process of the system; it selects policies and sends messages about the chosen policy to the PCEF.
The process and sub-process tree is shown below:
ps auxf | grep pcrf_core
/opt/box/bin/pcrf_core -l debug --worker-count=3 -d --my-peer-id=5 --sql-
password=xxx
/opt/box/bin/pcrf_core worker 0
/opt/box/bin/pcrf_core worker 1
/opt/box/bin/pcrf_core worker 2
/opt/box/bin/pcrf_core [CS] - connection_state
/opt/box/bin/pcrf_core [EV] - events
/opt/box/bin/pcrf_core [PC] - check_sessions
/opt/box/bin/pcrf_core [CO] - congestion
/opt/box/bin/pcrf_core [GS] - get_subscriber
/opt/box/bin/pcrf_core [CT] - check_state_chart_timers
/opt/box/bin/pcrf_core [KS] - kill_sessions
/opt/box/bin/pcrf_core [RX] - rx_events
/opt/box/bin/pcrf_core [UP] - update process
drug
Diameter Universal Gateway (drug).
This process sends and receives Diameter messages on the network: it creates connections between PCRF nodes and exchanges packets between them. All node interaction over the Diameter interface is performed by the drug process.
Process is shown below:
ps auxf | grep drug
/opt/drug/bin/drug -l debug -d
pcrf_console
This process is responsible for the O&M Console, for receiving provisioning commands from DDF, and for monitoring PCRF nodes via the HTTP interface.
The process and sub-process tree is shown below:
ps auxf | grep pcrf_console
pcrf_console: master process /opt/pcrf_console/bin/pcrf_console -c
/opt/pcrf_console/config/pcrf_console.conf
pcrf_console: worker process
pcrf_console: worker process
pcrf_console: worker process
pcrf_console: worker process
The number of worker processes may vary.
rx_watchdog
This process finds defunct processes and restarts them if needed. The watchdog process also finds queues that have been locked for longer than a specified period of time and unlocks them.
Process is shown below:
ps auxf | grep rx_watchdog
/opt/rx_watchdog/rx_watchdog -d -l info
This process doesn’t have sub processes.
The following processes are tracked on PCRF nodes:
pcrf_core
drug
edr_writer
log_writer
pcrf_console
pcrf_check
pcrf_notify
stat_writer
The watchdog process has a configuration file for each tracked process or queue. These files are located in the /etc/rx_watchdog directory and have the following content:
[QUEUE] <name> <max_count> <track_enable>
where:
[QUEUE] – a queue-monitoring marker; omit this parameter when a process is tracked;
<name> – name of the tracked queue or process;
<max_count> – maximum number of seconds a process may give no sign of activity, or maximum queue size in bytes;
<track_enable> – enable/disable tracking of this process or queue (1/0).
Sample of process tracking:
pcrf_core 10 1
Sample of queue tracking:
QUEUE rx_transport_queue_send 10000 1
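The configuration line format above can be illustrated with a small parser (a sketch that simply splits a line into the fields described; the field names in its output are ours, not the product's):

```shell
# Sketch: split an rx_watchdog config line into its fields (optional QUEUE
# marker, then name, max_count, track_enable).
parse_watchdog_line() {
    set -- $1   # word-split the line
    if [ "$1" = "QUEUE" ]; then
        echo "type=queue name=$2 max_bytes=$3 enabled=$4"
    else
        echo "type=process name=$1 max_idle_seconds=$2 enabled=$3"
    fi
}

parse_watchdog_line "pcrf_core 10 1"
parse_watchdog_line "QUEUE rx_transport_queue_send 10000 1"
```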
pcrf_notify
This process takes messages from the notification queue and sends them to external systems. The notification text and URL are taken from the message body.
Process is shown below:
ps auxf | grep pcrf_notify
/opt/pcrf_notify/bin/pcrf_notify -l info -d
log_writer
This process writes log messages to a temporary *.gz file on disk. After midnight, recording to this temporary file stops and a new temporary file is created.
Process is shown below:
ps auxf | grep log_writer
/opt/log_writer/bin/log_writer -l info -d
stat_writer
This process collects statistics information and creates various graphs (RRD charts) for monitoring purposes.
Process is shown below:
ps auxf | grep stat_writer
/opt/stat_writer/bin/stat_writer -l info -d
edr_writer
This process generates files with information about all events that lead to a QoS policy or rule
set change.
Process is shown below:
ps auxf | grep edr_writer
/opt/edr_writer/bin/edr_writer -l info -d
trace_writer
This process writes Diameter traces to the /var/log/pcrf/traces directory.
ps auxf | grep trace_writer
/opt/trace_writer/bin/trace_writer -l info -d
pcrf_check
This process periodically performs database requests and:
Searches for duplicate primary sessions with the same subscriber ID that have existed for more than 10 seconds, and deletes the duplicate session.
Searches for duplicate primary sessions with the same session IP that have existed for more than 10 seconds, and deletes the duplicate session.
Searches for secondary sessions that have no primary session and deletes them.
Searches for secondary sessions and deletes them.
Process tree is shown below:
ps ax | grep pcrf_check
/opt/pcrf_check/bin/pcrf_check -l debug -d --my-peer-id=11 --sql-password=xxx
nms_import
This process obtains statistical data from the NMS server every minute.
DDF Node Processes
The System has the following main processes on the DDF nodes:
ddf
Depending on the interaction scheme between the system components (DDF, PCRF clusters), this process either performs subscriber profile migration from one PCRF cluster to another, or sends requested profile information to PCRF clusters via the I0 protocol. The I1 protocol is also used to obtain, from PCRF clusters, IP addresses that are not specified in the DDF IP ranges (SSR subscription).
The process and sub-process tree is shown below:
ps auxf | grep ddf
/opt/ddf/bin/ddf -l debug --worker-count=3 -d --my-peer-id=1 --sql-password=xxx
/opt/ddf/bin/ddf worker 0
/opt/ddf/bin/ddf worker 1
/opt/ddf/bin/ddf worker 2
/opt/ddf/bin/ddf [CS] - connection_state
/opt/ddf/bin/ddf [UP] - update stats
/opt/ddf/bin/ddf [CP] - check profiles
/opt/ddf/bin/ddf [SY] - Sync db peers info to shm
ddf_propagator
This process sends only the SPR Configuration Interface provisioning commands received from BSS to PCRF clusters.
The process and sub-process tree is shown below:
ps auxf | grep ddf_propagator
/opt/ddf_propagator/bin/ddf_propagator -l debug -d --sql-password=xxx
/opt/ddf_propagator/bin/ddf_propagator periodic task
ddf_console
This is the O&M Console process; it receives and redirects provisioning commands from BSS to PCRF clusters, except SPR Configuration Interface commands.
The process and sub-process tree is shown below:
ps auxf | grep ddf_console
ddf_console: master process /opt/ddf_console/bin/ddf_console -c
/opt/ddf_console/config/ddf_console.conf
ddf_console: worker process
ddf_console: worker process
rx_watchdog
This process has the same functions as the rx_watchdog process which operates on PCRF
nodes.
The following processes are tracked on DDF nodes:
ddf
ddf_console
ddf_propagator
drug
log_writer
stat_writer
trace_writer
Configuration files for each traced process are located in the /etc/rx_watchdog directory.
Process is shown below:
ps auxf | grep rx_watchdog
/opt/rx_watchdog/rx_watchdog -d -l info
This process doesn’t have sub processes.
System Logging
Most system processes write log messages to shared memory. Information from shared memory is then written to disk as *.gz files, which are stored in the following directory:
/var/log/pcrf/
System logging is shown in the figure below:
Figure 5. PCRF system logging
[Figure: system processes (pcrf_core, nms_import, rx_watchdog, drfront and others) write log messages to a message queue in shared memory; log_writer drains this queue into a temporary file (pcrf_today.log.gz.tmp), which after midnight is rotated into a daily archive (pcrf_yesterday.log.gz) alongside the previous days' archives; log_viewer reads messages directly from shared memory.]
There are two components in the system logging: Log Writer and Log Viewer.
Log Writer
This component takes information from shared memory, creates a log file, archives it, and writes it to disk. Once a day the current log file is closed and named by date, and a new log file is created. Every log file contains information about all processes that write messages to shared memory.
The following processes send log messages to the shared memory:
PCRF node
pcrf_core
stat_writer
pcrf_notify
pcrf_check
log_writer
edr_writer
drug
rx_watchdog
nms_import
shm_log_viewer
DDF node
ddf
ddf_propagator
drug
rx_watchdog
delete_old_tmp_mapping
Log messages are written with the log level specified in the O&M Console. Archive log files have the following name format:
pcrf_YY-MM-DD_HH-MM-SS.log.gz
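Assuming GNU date, the archive name for a given moment can be reproduced from this pattern (a convenience sketch, e.g. for locating the archive that covers a particular time):

```shell
# Sketch: build the expected archive file name for a timestamp, following
# the pcrf_YY-MM-DD_HH-MM-SS.log.gz pattern (GNU date's -d option assumed).
log_name() {
    date -d "$1" +'pcrf_%y-%m-%d_%H-%M-%S.log.gz'
}

log_name "2013-10-10 12:34:56"
```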
It is possible to enable coloring of log messages in archive files. For this purpose, perform the following steps:
1. Set the additional coloring option with the command:
export log_writer_EXTRA_OPTS="--colorful=1"
2. Restart log_writer process by command:
/etc/init.d/log_writer restart
Log Viewer
This component displays log messages. Log Viewer is launched by the following commands in the command line:
lvt
lv
The main difference is that the lvt command just shows log messages, while lv not only shows log messages but also allows filtering them by log level, process name, and process worker. The log level is specified in the O&M Console.
The lvt command just displays the last log messages in shared memory (analogous to the tail -f command). One message filter is available – by log level (DEBUG, NOTICE, INFO, WARNING, ERROR, CRITICAL). To turn on the filter, execute:
lvt -g <log level>
For example, if the WARNING level is set, only messages with the WARNING, ERROR, or CRITICAL level will be displayed.
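Conceptually the filter keeps messages at or above the chosen severity, per the level order listed above; a sketch (the helper names are ours, not the product's):

```shell
# Sketch: severity ranking and a filter that keeps messages whose first word
# is a level at or above the threshold.
level_rank() {
    case "$1" in
        DEBUG) echo 0 ;; NOTICE) echo 1 ;; INFO) echo 2 ;;
        WARNING) echo 3 ;; ERROR) echo 4 ;; CRITICAL) echo 5 ;;
        *) echo -1 ;;
    esac
}

filter_by_level() {
    min=$(level_rank "$1")
    while IFS= read -r line; do
        lvl=${line%% *}
        [ "$(level_rank "$lvl")" -ge "$min" ] && echo "$line" || :
    done
}

printf 'INFO starting up\nWARNING queue is growing\nERROR peer down\n' | filter_by_level WARNING
```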
For more information, execute the lvt --help or lv --help commands.
Other PCRF node log files
Other PCRF cluster log files are located in the /var/log/roox/ directory on every PCRF node:
crm.log – MiniCRM log; contains all HTTP requests to MiniCRM
grapi.log – Generic Request API commands log
pcrf_console_access.log – O&M Console access log
pcrf_console.log – O&M Console errors log and HTTP provisioning log
pcrf_info.log – log of Subscriber Session Information Interface provisioning commands (HTTP requests)
spr_sm.log – log of Subscriber Management Interface provisioning commands
spr_conf.log – log of Configuration Interface provisioning commands
TimesTen/pcrf_replication_conflict.log – replication conflict log
Other DDF node log files
Other DDF cluster log files are located in the /var/log/roox directory on every DDF node:
crm.log – MiniCRM log; contains all HTTP requests to MiniCRM
ddf_console_access.log – O&M Console access log
ddf_console.log – O&M Console errors log
grapi.log – Generic Request API commands log
pcrf_info.log – Subscriber Session Information Interface commands log
spr_conf.log – log of SPR Configuration Interface provisioning commands
spr_sm.log – log of Subscriber Management Interface provisioning commands
Process Log Level
Every process can have its own log level:
Paranoid
Debug
Notice
Info
Warning
Error
Critical
The Notice log level is set by default after system installation.
It is possible to change the log level of each process to a desired one in the O&M Console:
1. Go to Configuration -> Log Level.
2. Select the required log level for the process.
Important
If the system installation is performed again, the default log level (Notice) will be set.
4
System Configuration Parameters
PCRF Cluster Parameters
DDF Cluster Parameters
This chapter describes global Yota PCRF system configuration parameters and settings.
PCRF Cluster Parameters
PCRF cluster configuration parameters are stored in the SETTINGS table of the PCRF database. Almost all parameters can be changed without restarting the cluster.
Configuration parameter descriptions are available at:
http://<pcrf_host>:8091/doc/rx_config.html
and in the O&M Console under Configuration -> Server Settings.
PCRF cluster node parameters:
Name  Default value  Reboot flag  Description
diameter.send_dwr_period 30 0 Send DWR after period of inactivity
diameter.send_dwr_interval 2 0 Interval between DWRs
diameter.cluster_send_dwr_period 3 0 Send DWR after period of inactivity for cluster mode
diameter.peer_update_interval 10 0 How frequently scan Peers table to connect/disconnect
diameter.message_pool_interval 2 1 How frequently check queue for new messages (ms)
diameter.message_timeout 5000 0 Message timeout in ms. If message is not processed within this interval, do not send answer
diameter.message_timeout_busy 4500 0
Message timeout in ms. If message is not processed within this interval, send answer with diameter.TOO_BUSY answer
diameter.send_burst_Base 30000 0 How many Base Diameter requests to send per second in burst (messages per second from node)
diameter.send_rate_Base 10000 0 How many Base Diameter requests to send per second (messages per second from node)
Important
The SETTINGS table is filled in from the deploy configuration files. Only the following parameters should be changed after PCRF cluster installation:
• default_region
• instance_name
• use_ddf_subscr_info
• default_cisco_package_id (if Cisco SCE is used as DPI)
Other configuration parameters should be changed only in exceptional cases!
Important
Make sure the ix_version parameter value on the PCRF node is the same as on the DDF node.
diameter.send_burst_Base_Accounting 30000 0 How many Base Diameter Accounting requests to send per second in burst (messages per second from node)
diameter.send_rate_Base_Accounting 10000 0 How many Base Diameter Accounting requests to send per second (messages per second from node)
diameter.send_burst_Gx 30000 0 How many Gx requests to send per second in burst (messages per second from node)
diameter.send_rate_Gx 10000 0 How many Gx requests to send per second (messages per second from node)
diameter.send_burst_Rx 30000 0 How many Rx requests to send per second in burst (messages per second from node)
diameter.send_rate_Rx 10000 0 How many Rx requests to send per second (messages per second from node)
diameter.send_burst_I0 30000 0 How many I0 requests to send per second in burst (messages per second from node)
diameter.send_rate_I0 10000 0 How many I0 requests to send per second (messages per second from node)
diameter.send_burst_I1 30000 0 How many I1 requests to send per second in burst (messages per second from node)
diameter.send_rate_I1 10000 0 How many I1 requests to send per second (messages per second from node)
diameter.log_events 1 0 Add events info in short command log. Enable (1) / disable (0)
diameter.log_rules 1 0 Add rules info in short command log. Enable (1) / disable (0)
diameter.log_usage 1 0 Add usage info in short command log. Enable (1) / disable (0)
diameter.log_extended 1 0 Add extended info (from session db) in short command log. Enable (1) / disable (0)
default_region UNKNOWN 0 Default Region name of this PCRF instance
debug_users  0 Users that need the debug log level switched on, as a comma-separated string
check_config_period 1 0 Period for check config updates in seconds (Lua Script, rules.xml, settings table)
optimize_lua_calls 1 0 Optimize lua calls in CCR-U handling
add_qos_to_rar 0 0 Add default bearer qos information to RARs
system_id_type 0 0 System ID type (0 (Default) - IMSI, 1 - MSISDN)
default_cisco_package_id -1 0 Cisco-SCA-BB-Package-Install for temporary subscribers. Set to -1 to ignore this setting
check_interval_cisco 1800 0 Session check interval in seconds for Cisco sessions. Set to 0 to disable checking
check_interval_procera 1800 0 Session check interval in seconds for Procera sessions. Set to 0 to disable checking
check_interval_huawei 1800 0 Session check interval in seconds for Huawei sessions. Set to 0 to disable checking
check_interval_default 1800 0 Session check interval in seconds for other sessions. Set to 0 to disable checking
usage_request_interval_procera 0 0 Usage request interval in seconds for Procera sessions. Set to 0 to disable periodic usage request
usage_request_interval_huawei 0 0 Usage request interval in seconds for Huawei sessions. Set to 0 to disable periodic usage request
usage_request_interval_default 0 0 Usage request interval in seconds for other sessions. Set to 0 to disable periodic usage request
validation_time_procera 0 0 Session validity interval in seconds for Procera sessions. Set to 0 to disable Revalidation-Time AVP generation
validation_time_huawei 600 0 Session validity interval in seconds for Huawei sessions. Set to 0 to disable Revalidation-Time AVP generation
validation_time_default 0 0 Session validity interval in seconds for other sessions. Set to 0 to disable Revalidation-Time AVP generation
check_delete_on_ext_errors 1 0 Delete session if check RAA returns some extended error (DIAMETER_REALM_NOT_SERVED)
read_tgpp_rat_type 0 0 Read 3GPP RAT type from incoming messages
read_ms_timezone 0 0 Read MS Timezone from incoming messages
read_rai 0 0 Read RAI from incoming messages
read_imeisv 1 0 Read IMEISV Information from incoming messages
read_imsi 1 0 Read IMSI Information from incoming messages
read_msisdn 1 0 Read MSISDN Information from incoming messages
calculate_session_amount 0 0 Calculate session amount per location
read_sgsn_addr 1 0 Read SGSN IP address Information from incoming messages
use_ddf_subscr_info 1 0 Use DDF to consult about subscriber profile change
ix_version 305020 0 I0 Version PCRF is working with
states_migration_waiting_gpa 3 0 Timeout for WAITING_GPA state for getting profile state machine
states_migration_waiting_rpa 3 0 Timeout for WAITING_RPA state for getting profile state machine
states_migration_waiting_rpr 3 0 Timeout for WAITING_RPR state for getting profile state machine
states_migration_waiting_raa 30 0 Timeout for WAITING_RAA state for RAR-sending state machine
states_rar_state_max_resend 3 0 Max number of RAR resends; if there is still no answer after all resends, the session is deleted
instance_name 0 User-friendly Instance name (will be shown in console header)
free_traffic_bytes 0 0 Bytes to free usage
default_rating 83 0 Default rating value
new_lua_percentage 0 0 Percentage of subscribers to be processed with new, uncommited lua files
qci_for_rx_video 4 0 QCI for Rx VIDEO Media Flows
compress_profile_data_batch_enable 0 0 Switch on (1) or off (0) the compression of the profile data batch when sending to DDF
profile_data_accum_batch_count 130 0 Maximum number of accum entries in the batch with profile data (should not be more than 130)
profile_data_location_batch_count 100 0 Maximum number of location entries in the batch with profile data (should not be more than 100)
profile_data_send_interval 3000000 0 Interval in microseconds from profile data modification to DPR changes to DDF
default_profile_subscription_time 43200 0 Default profile subscription live length in seconds
default_profile_subscription_check_time 300 0 Default profile subscription renew check interval in seconds
default_profile_data_resend_interval 30 0 Default profile BPR resend interval seconds
check_session_doubles_on_ccri 1 0 Check session duplicates by ip and subscriber id on CCR-I
check_session_doubles_interval 0 0 Check double session interval in seconds
write_edrs 1 0 Write EDR records
update_subscriber_last_online 0 0 Update last online info in subscriber profile (including region)
monitor_sessions_count 0 0 Calculate sessions count by peer, primary/secondary state, beacon
DDF Cluster Parameters
DDF cluster configuration parameters are stored in the SETTINGS table of PCRF database on DDF node:
Name  Default value  Reboot flag  Description
diameter.send_dwr_period 30 0 Send DWR after period of inactivity
diameter.send_dwr_interval 2 0 Interval between DWRs
diameter.cluster_send_dwr_period 3 0 Send DWR after period of inactivity for cluster mode
diameter.peer_update_interval 10 0 How frequently scan Peers table to connect/disconnect
diameter.message_pool_interval 2 1 How frequently check queue for new messages (ms)
diameter.message_timeout 5000 0 Message timeout in ms. If message is not processed within this interval, do not send answer
diameter.message_timeout_busy 4500 0
Message timeout in ms. If message is not processed within this interval, send answer with diameter.TOO_BUSY answer
diameter.send_burst_Base 30000 0 How many Base Diameter requests to send per second in burst (messages per second from node)
diameter.send_rate_Base 10000 0 How many Base Diameter requests to send per second (messages per second from node)
diameter.send_burst_Base_Accounting 30000 0 How many Base Diameter Accounting requests to send per second in burst (messages per second from node)
diameter.send_rate_Base_Accounting 10000 0 How many Base Diameter Accounting requests to send per second (messages per second from node)
diameter.send_burst_Gx 30000 0 How many Gx requests to send per second in burst (messages per second from node)
diameter.send_rate_Gx 10000 0 How many Gx requests to send per second (messages per second from node)
diameter.send_burst_Rx 30000 0 How many Rx requests to send per second in burst (messages per second from node)
diameter.send_rate_Rx 10000 0 How many Rx requests to send per second (messages per second from node)
diameter.send_burst_I0 30000 0 How many I0 requests to send per second in burst (messages per second from node)
diameter.send_rate_I0 10000 0 How many I0 requests to send per second (messages per second from node)
diameter.send_burst_I1 30000 0 How many I1 requests to send per second in burst (messages per second from node)
diameter.send_rate_I1 10000 0 How many I1 requests to send per second (messages per second from node)
diameter.log_events 1 0 Add events info in short command log. Enable (1) / disable (0)
diameter.log_rules 1 0 Add rules info in short command log. Enable (1) / disable (0)
diameter.log_usage 1 0 Add usage info in short command log. Enable (1) / disable (0)
diameter.log_extended 1 0 Add extended info (from session db) in short command log. Enable (1) / disable (0)
propagator.wait_peer_reply 2 0 Period of waiting for peer reply in seconds
propagator.check_peer_alive 300 0 Period before checking the peer is back to live in seconds
propagator.resend_interval 2 0 Resend interval in seconds
propagator.try_time 10 0 Try send command before switching to next peer
debug_users 0 Users for which the debug log level should be switched on, as a comma-separated string
instance_name 0 User-friendly Instance name (will be shown in console header)
check_config_period 1 0 Period for check config update
ix_version 305020 0 I0 Version DDF is working with
states_migration_waiting_gpa 3 0 Timeout for WAITING_GPA state for getting profile state machine
states_migration_waiting_rpa 3 0 Timeout for WAITING_RPA state for getting profile state machine
states_migration_waiting_rpr 3 0 Timeout for WAITING_RPR state for getting profile state machine
default_profile_subscription_time 43200 0 Default profile subscription lifetime in seconds (12 hours by default)
check_profile_push_update 3 0 Period for profiles push update
check_profile_delete 5 0 Period for profiles delete request
compress_profile_data_batch_enable 0 0 Switch on (1) or off (0) the compression of the profile data batch when sending to DDF
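System parameters such as these can also be changed without the O&M Console, via the SETTINGS_UPD CLI command described in the Command Line Interface chapter. The sketch below only prints the invocation for review; the uppercase argument names are an assumption inferred from the documented CLUSTER_ADD/PEER_ADD samples, so verify them on your installation first.

```shell
# Hedged sketch: update a Diameter timeout parameter via the CLI.
# Argument names (COL_ID_SETTINGS_NAME, SETTINGS_VAL) are inferred from the
# SETTINGS_UPD argument list and the documented command syntax - verify first.
CMD='/opt/cli/cli_execute SETTINGS_UPD --COL_ID_SETTINGS_NAME="diameter.message_timeout" --SETTINGS_VAL="5000"'
echo "$CMD"     # review the command
# eval "$CMD"   # uncomment to execute on a PCRF or DDF node
```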
5
Administration Interfaces
Description
Provisioning Interfaces (HTTP)
Subscriber Session Information Interface
Generic Request API (GRAPI)
DDF Information Interface
HTTP Interfaces
This chapter describes interfaces that are used for administration of Yota PCRF system.
HTTP Interfaces
Subscriber Management Interface
This interface is used for provisioning of subscription information to SPR and allows managing subscriber profiles, services, and accumulators. The supported request format is HTTP.
The interface supports the following command types:
Commands for subscriber profile management (add/delete/update/get subscriber information, etc.)
Commands for services management (add/delete services to a subscriber, update service information, get all subscriber's services, etc.)
Commands for accumulators' management (add/delete/update/get accumulator information, etc.)
SPR Configuration Interface
This interface is used for managing SPR dictionaries. The supported request format is HTTP.
The interface supports the following command types:
Service dictionary provisioning commands (add/delete/update/get service information, etc.)
Threshold scheme dictionary provisioning commands (add/delete/update/get threshold scheme information, etc.)
Accumulator dictionary provisioning commands (add/delete/update/get accumulator information, etc.)
Attribute dictionary provisioning commands (add/delete/get attribute information, etc.)
Service dictionary is a repository which contains information about all services that can be assigned to a subscriber.
Threshold scheme dictionary is a repository which contains information about all threshold
schemes that are used in the Yota PCRF system.
Accumulator dictionary is a repository which contains information about all accumulators that are used in the Yota PCRF system.
Attribute dictionary is a repository which contains information about all supplementary attributes that can be added to a specified service or subscriber.
Information
For more information about the interface please see "Yota PCRF 3.6 Subscriber Management Interface.docx".
Information
For more information about the interface please see "Yota PCRF 3.6 SPR Configuration Interface.docx".
Subscriber Session Information Interface
BSS addresses PCRF clusters via this interface to obtain information from the active Gx session context. Such information includes session ID, subscriber ID, and other parameters.
Generic Request API (GRAPI)
This is a custom interface used to obtain required information about a specified subscriber or session, or to add services to a specified subscriber.
GRAPI involves running a custom Lua script on a PCRF node and then getting its output.
DDF Information Interface
This interface is used by BSS to request a subscriber's home PCRF address by subscriber ID or IP on the DDF cluster.
Information
For more information about the interface please see "Yota PCRF 3.6 Subscriber Session Information Interface.docx".
Information
For more information about the interface please see "Yota PCRF 3.6 Generic Request
API.docx".
Information
For more information about the interface please see "Yota PCRF 3.6 DDF Information Interface.docx".
6
Administration Tools
O&M Console
Command Line Interface
This chapter describes basic administration tools.
O&M Console
O&M Console Usage
O&M Console is one of the main management tools of the Yota PCRF system.
It is a web interface used for monitoring the state of PCRF and DDF nodes, databases, and Diameter interfaces, and for configuring, administering, and maintaining the system.
The O&M Console is installed during the system installation by deploy scripts.
Users with appropriate permission access the O&M Console through the following URL:
http://<ddf_or_pcrf_host>/
When a user goes to the URL above, the system displays the O&M Console Home page, which lists the servers, statistics, and operations available for configuring the system.
O&M Console Home Page
The O&M Console home page is displayed in the figure below:
Figure 6. DDF O&M Console home page
The DDF O&M Console has the following blocks:
Server List
Server Statistics
Operations
Workplace
Server List
The Server List shows the tree of all Yota PCRF system clusters and cluster nodes: DDF cluster with two nodes and all PCRF clusters with two nodes each.
DDF cluster and node information is added to the O&M Console automatically during DDF cluster installation.
Information about all PCRF clusters and their nodes should be added manually to the Server
List by the following steps:
1. Choose the primary DDF node in the Server List
2. Choose Configuration -> Network Topology -> Clusters in the Operations block. Only
DDF cluster information will be displayed.
3. Click the Add button in the appeared Clusters table and set the required parameters for all PCRF clusters:
Parameter Description
Cluster ID ID of a cluster
Role Cluster role. 3 – PCRF with SPR
Cluster SSR Subscription Subscription type: 0 (Cluster SSR NONE)
Name Optional cluster name
4. Go to Configuration -> Network Topology -> Peers. Click the Add button in the appeared Peers table and set the required parameters for all PCRF nodes:
Parameter Description
Peer ID ID of a node
Cluster ID ID of the cluster which the node belongs to
Host Internal domain name of new node
Realm Realm of new cluster
Address External FQDN or external IP address of new node
Port Port which is used by node for interaction via Diameter interface (3868)
Dialect Vendor-specific Diameter protocol dialect
Auto Connect 0 – the new node will wait for an incoming connection from the server with which it interacts via Diameter protocol; 1 – the new node will initiate the connection with that server
Mandatory Monitoring parameter. 1 – system monitoring alarms if there is no connection with the node; 0 – system monitoring ignores a missing connection
Priority New node priority over other PCRF nodes in the system
Sample (PCRF cluster information):
Parameter Value:
Peer ID 41
Cluster ID 4
Host vsk-pcrf1.scartel.dc
Realm scartel.dc
Important
Make sure that the ID of each cluster of the Yota PCRF system is unique within the whole system configuration.
Address vsk-pcrf1.scartel.dc
Port 3868
Dialect 0
Auto Connect 1
Mandatory 1
Priority 0
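The sample peer above can equivalently be provisioned with the PEER_ADD CLI command (see the Command Line Interface chapter). This sketch only prints the invocation so it can be reviewed before being run on the DDF node.

```shell
# Sketch: CLI equivalent of the sample PCRF peer values above.
CMD='/opt/cli/cli_execute PEER_ADD --PEER_PEER_ID=41 --PEER_CLUSTER_ID=4 --PEER_HOST="vsk-pcrf1.scartel.dc" --PEER_REALM="scartel.dc" --PEER_ADDRESS="vsk-pcrf1.scartel.dc" --PEER_PORT=3868 --PEER_DIALECT=0 --PEER_AUTO_CONNECT=1 --PEER_MANDATORY=1 --PEER_PRIORITY=0'
echo "$CMD"
# eval "$CMD"   # execute on the DDF node when ready
```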
5. Go to Configuration -> Network Topology -> HTTP Peers. Click the Add button in the appeared HTTP Peers table and set the required parameters for all PCRF nodes:
Parameter Description
Cluster ID ID of the cluster which the node belongs to
Peer ID ID of a node
HTTP address IP address of the PCRF node for receiving HTTP requests from DDF in HTTP format
HTTP port Port for receiving HTTP requests from DDF (80)
This table is used by DDF to send HTTP requests to PCRF nodes.
6. Go to the O&M Console on every PCRF node, choose Configuration -> Network Topology -> Clusters, then Peers, then HTTP Peers, and make sure that the cluster ID and peer IDs are the same as were set in the same tables in the DDF O&M Console.
Server Statistics
The Server Statistics block shows statistics information of a node (PCRF or DDF).
Node Statistic Information
The following statistics information groups will be displayed:
Figure 7. Statistics information
Status
This icon shows the DDF cluster status:
OK
External Warning
Warning
External Critical
Critical
The status is defined based on monitoring sensors values.
Trace
Click View Trace in the Server Statistics bar of the O&M Console to see Diameter messages in real time.
Trace page contains the following Diameter information:
Field Name Description
Seq ID Message number in the queue
Peer Name Peer from/to which the message was received/sent
Name Message name
Direction Send or receive
Connection ID Connection ID
Size Message size
Time Message time
Result Result-Code AVP value
The trace page is displayed below:
Figure 8. Trace page
It is possible to set a message filter by subscriber ID and/or Host. The filter applies to subsequent messages only; only filtered messages will be displayed and written to a trace file.
To set a filter for messages do the following:
1. Set a subscriber ID in the Subscribers field and/or a host in the Host field.
2. Enter supplementary filter information into Filter Info field if needed. This text will be seen
by other users.
3. Click Apply Filter to filter next trace messages.
Other operations available on trace page:
Writing of trace messages to a file. Click the Start Writing to File button to start and the Stop Writing to File button to stop the writing process. By default, files are not written.
Trace files are written to the following directory:
/var/log/pcrf/traces
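The directory can be inspected directly from the node shell; a minimal sketch (tolerant of the directory being absent when no files have been written yet):

```shell
# List the ten most recently written trace files, if any.
TRACE_DIR=/var/log/pcrf/traces
ls -lt "$TRACE_DIR" 2>/dev/null | head -n 10
```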
Browsing of written trace files. Click the Show Files button to see the trace file list. The trace file list opens in a new browser tab. Click the required trace file to see its content.
Trace page update stop. Click the Stop Page Update button to stop trace message generation on the page.
Trace page clearing. Click the Clear Page button to clear the trace message queue on the page.
Search
Search of sessions by session ID, subscribers by subscriber ID, and IP addresses is available in the Search field.
MiniCRM
MiniCRM is a centralized interface used to obtain subscriber profile and session information by subscriber ID or session IP.
The following information is available via MiniCRM:
Subscriber location
Subscriber mapping status (mapping type: normal or temporary)
Full subscriber information from SPR (services, attributes, accumulators, accumulator schemes, service attributes)
Session information (session ID, base station ID, QoS policy, region, etc.)
MiniCRM is available on DDF cluster at:
http://<ddf_host>:8093/
The following page will be shown:
Figure 9. MiniCRM page
Enter a subscriber ID or session IP and click the Search button. The available information will be displayed:
Figure 10. MiniCRM search result page
MiniCRM can also be opened in the O&M Console workplace, because almost every subscriber ID and session ID there is a link to MiniCRM.
Figure 11. Result page in O&M Console
Operations and Workplace
PCRF operations
The following operation groups are displayed in the O&M Console:
Configuration
Maintenance
Monitoring
Congestion
Configuration
The Configuration group allows a Yota PCRF user to flexibly configure the Yota PCRF system.
Maintenance
The Maintenance group allows browsing of the sessions, connections, and processes that occur in the system.
Monitoring
The Monitoring group enables fault monitoring and overload management, and shows counters' statistics and problem subscribers.
Congestion
The Congestion group shows BS utilization information.
Command Line Interface
Yota PCRF has a Command Line Interface which allows performing various simple operations.
There are two ways of executing commands:
execute a single command directly in the command line
execute commands specified in a file
To execute a single command, use the following:
/opt/cli/cli_execute <command_name> [--<command_arguments>]
To execute commands specified in a file, use the following:
/opt/cli/cli_execute <command_list_file>
where <command_list_file> is the path to the file where the commands are listed.
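For example, CLUSTER_GET without arguments lists all clusters. The sketch below only prints the invocation, since it must run on a node where /opt/cli is installed.

```shell
# Sketch: show all clusters; CLUSTER_GET with no arguments lists everything.
CMD='/opt/cli/cli_execute CLUSTER_GET'
echo "$CMD"
# eval "$CMD"   # run on a PCRF or DDF node
```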
Command List File
This file must contain a command list (one command per row) in the following format:
<command_name> [--<command_arguments>]
File content sample:
CLUSTER_ADD --CLUSTER_ID=9 --CLUSTER_ROLE=1 --CLUSTER_NAME="Test PCRF"
PEER_ADD --PEER_PEER_ID=91 --PEER_CLUSTER_ID=9 --PEER_HOST="smoke-
pcrf11.test.com" --PEER_ADDRESS="smoke-pcrf11.testik.com" --
PEER_REALM="test.com"
PEER_ADD --PEER_PEER_ID=92 --PEER_CLUSTER_ID=9 --PEER_HOST="smoke-
pcrf22.test.com" --PEER_ADDRESS="smoke-pcrf22.test.com" --
PEER_REALM="testik.com" --PEER_PRIORITY=1
If all commands in the list are executed correctly, the following text will be displayed:
Parsing commands from file <command_list_file>...
Executing commands from file <command_list_file>...
...Done
Available Commands
The list of available commands:
Command name Description Command Arguments Required
Commands group for CLUSTER:
CLUSTER_ADD Adds cluster information to the database. CLUSTER_NAME is not required, but it is preferable to set one.
cluster_id cluster_role cluster_name cluster_description
+ +
CLUSTER_GET Shows information about a cluster (clusters). To show all clusters execute the command without command arguments
cluster_id cluster_role cluster_name cluster_description
Important
If even one command in the list fails, none of the commands will be executed.
CLUSTER_UPD
Updates cluster information
col_id_cluster_id cluster_role cluster_name cluster_description
+
CLUSTER_DEL
Deletes cluster information
cluster_id cluster_role cluster_name cluster_description
+
Commands group for IP_RANGE:
IP_RANGE_ADD Adds IP range information. IP_RANGE_NAME is not required, but it is preferable to set one.
ip_range_cluster_id ip_range_start ip_range_stop ip_range_name
+ + +
IP_RANGE_GET Shows information about an IP range (ranges). To show all ranges execute command without command arguments
ip_range_cluster_id ip_range_start ip_range_stop ip_range_name
IP_RANGE_DEL
Deletes IP range information
ip_range_cluster_id ip_range_start ip_range_stop ip_range_name
+
Commands group for PEER:
PEER_ADD
Adds peer information to the database. Make sure to set different priorities for the nodes.
peer_peer_id peer_cluster_id peer_dialect peer_host peer_realm peer_address peer_port peer_protocol peer_auto_connect peer_enabled peer_priority peer_mandatory
+ + + + + + +
PEER_GET
Shows information about a peer (peers). To show all peers execute command without command arguments
peer_peer_id peer_cluster_id peer_dialect peer_host peer_realm peer_address peer_port peer_protocol peer_auto_connect peer_enabled peer_priority peer_mandatory
PEER_UPD
Updates peer information
col_id_peer_peer_id peer_cluster_id peer_dialect peer_host peer_realm peer_address peer_port peer_protocol peer_auto_connect
+
peer_enabled peer_priority peer_mandatory
PEER_DEL
Deletes peer information
peer_peer_id peer_cluster_id peer_dialect peer_host peer_realm peer_address peer_port peer_protocol peer_auto_connect peer_enabled peer_priority peer_mandatory
+
Commands group for SETTINGS:
SETTINGS_GET Shows information about cluster settings. (Configuration -> Server Settings). To show all cluster settings execute command without command arguments
settings_name settings_val_type settings_val settings_def_val settings_comment settings_reboot_flag
SETTINGS_UPD
Changes cluster settings
col_id_settings_name settings_val_type settings_val settings_def_val settings_comment settings_reboot_flag
+
Commands group for SERVICE:
SERVICE_ADD Adds service information to the dictionary
service_id service_name service_description
+ +
SERVICE_GET Shows information about a service (services). To show all services execute command without command arguments
service_id service_name service_description
SERVICE_UPD Updates service information in the dictionary
col_id_service_id service_name service_description
+
SERVICE_DEL Deletes service information from the dictionary
service_id service_name service_description
+
Commands group for SCHEME:
SCHEME_ADD
Adds scheme information to the dictionary
scheme_id scheme_name scheme_description scheme_reset_period scheme_level_1 scheme_level_2 scheme_level_3 scheme_level_warn scheme_level_full
+ + +
SCHEME_GET Shows information about a scheme (schemes). To show all schemes execute command without command arguments
scheme_id scheme_name scheme_description scheme_reset_period scheme_level_1
scheme_level_2 scheme_level_3 scheme_level_warn scheme_level_full
SCHEME_UPD
Updates scheme information in the dictionary
col_id_scheme_id scheme_name scheme_description scheme_reset_period scheme_level_1 scheme_level_2 scheme_level_3 scheme_level_warn scheme_level_full
+
SCHEME_DEL
Deletes scheme information from the dictionary
scheme_id scheme_name scheme_description scheme_reset_period scheme_level_1 scheme_level_2 scheme_level_3 scheme_level_warn scheme_level_full
+
Commands group for ACCUM_INFO:
ACCUM_INFO_ADD
Adds accumulator information to the dictionary
accum_info_id accum_info_name accum_info_description accum_info_default_scheme_id accum_info_type
+ +
ACCUM_INFO_GET Shows information about an accumulator (accumulators). To show all accumulators execute command without command arguments
accum_info_id accum_info_name accum_info_description accum_info_default_scheme_id accum_info_type
ACCUM_INFO_UPD
Updates accumulator information in the dictionary
col_id_accum_info_id accum_info_name accum_info_description accum_info_default_scheme_id accum_info_type
+
ACCUM_INFO_DEL
Deletes accumulator information from the dictionary
accum_info_id accum_info_name accum_info_description accum_info_default_scheme_id accum_info_type
+
Commands group for ATTRIBUTE:
ATTRIBUTE_ADD Adds attribute information to the dictionary
attribute_id attribute_name attribute_description
+ +
ATTRIBUTE_GET Shows information about an attribute (attributes). To show all attributes execute command without command arguments
attribute_id attribute_name attribute_description
ATTRIBUTE_UPD Updates attribute information in the dictionary
col_id_attribute_id attribute_name
+
attribute_description
ATTRIBUTE_DEL Deletes attribute information from the dictionary
attribute_id attribute_name attribute_description
+
Extra commands group:
SETTINGS_RESET Resets system configuration parameters (Configuration -> Server Settings). To reset all settings execute command without command arguments
name
SESSION_KILL Terminates specified session
session_id peer_id my_peer_id
+ + +
SESSION_EXT Shows session external information. Choose one of the attributes by which the session info will be displayed
session_id subscriber_id session_ip
Information
To see the options description, use the following command:
/opt/cli/cli_execute <command name> -h
7
Databases
PCRF Cluster Database
DDF Cluster Database
Databases Access
This chapter contains information about the Yota PCRF databases. Access to the databases is performed via the TimesTen client (cluster installation) or the PostgreSQL client (standalone node installation). Some operations with the databases can be performed in the O&M Console (changing cluster or node information, adding new peers, etc.).
PCRF Cluster Database
PCRF database on PCRF node contains:
SPR information
Session information
Other service information
PCRF database description on PCRF node is available at:
http://<pcrf_host>:8091/sql_db_info/getDbInfo?db=pcrf
DDF Cluster Database
PCRF database on DDF node contains:
SPR information
Session information (required only for interaction with regional PCRF clusters)
The PCRF database description on a DDF node is available at:
http://<ddf_host>:8091/sql_db_info/getDbInfo?db=pcrf
Databases Access
There are several ways to access the databases:
ttIsql utility
SQL queries
O&M Console
ttIsql utility
The ttIsql utility is a general tool for working with a TimesTen data source. The ttIsql command
line interface is used to execute SQL statements and built-in ttIsql commands to perform various operations.
To access the ttIsql command line interface, run the following command on a node:
ttIsql pcrf
where pcrf is the PCRF database name.
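A non-interactive session can also be scripted from the shell. This sketch only prepares and prints the ttIsql command list (assuming the standard ttIsql built-ins tables and quit); run the commented line on a node with the TimesTen client installed.

```shell
# Prepare a ttIsql command script: 'tables;' lists the tables in the
# data source, 'quit;' ends the session.
TT_CMDS='tables;
quit;'
echo "$TT_CMDS"
# printf '%s\n' "$TT_CMDS" | ttIsql pcrf   # run on a node
```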
SQL queries
SQL queries can be executed directly from the command line.
O&M Console
Adding or changing any cluster/peer information can be done in the O&M Console.
8
Monitoring
Monitoring Instruments
This chapter contains information about Yota PCRF system monitoring tools.
Monitoring Instruments
For Yota PCRF system monitoring purposes an HTTP interface is used; the system monitoring is available at:
http://<ddf_or_pcrf_node_domain>:8091/
Monitoring tools include:
Counters
Sensors
Upstream stats (only DDF nodes)
Diameter Peer Connection Status
RRD Charts
Counters
The full list of counters and their values is available at:
http://<ddf_or_pcrf_node_domain>:8091/counters
Sensors
Sensors collect information from various sources (including counter values) and check whether the values have reached a specified level (OK, WARNING, CRITICAL).
The sensors tree with color presentation of sensor values for every PCRF or DDF node is also available in the O&M Console.
The sensors tree in JSON format is available at:
http://<ddf_or_pcrf_node_domain>:8091/sensors/getSensors
This HTTP request shows all available sensors, i.e. the whole sensors tree.
To show only specific sensors, a filter can be used. For example:
http://<ddf_or_pcrf_host>:8091/sensors/getSensors?filter=root.general.processes
This HTTP request will show only the sensors of the following processes (for example, for a PCRF node):
drug
pcrf_check
pcrf_console
pcrf_core
pcrf_notify
edr_writer
stat_writer
log_writer
rx_watchdog
The sensors tree in Nagios format is available at:
http://<ddf_or_pcrf_node_domain>:8091/sensors/getSensors?format=nagios
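The sensor endpoints above can be built from one base URL. HOST below is a placeholder assumption that must be replaced with a real node domain.

```shell
# Build the sensor URLs for one node; HOST is an assumed placeholder.
HOST="pcrf1.example.com"
BASE="http://${HOST}:8091/sensors/getSensors"
echo "$BASE"                                  # whole sensors tree (JSON)
echo "${BASE}?filter=root.general.processes"  # process sensors only
echo "${BASE}?format=nagios"                  # Nagios format
# Fetch any of them with: curl -s "<url>"
```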
Upstream stats
Shows information about redirection of provisioning requests (various BSS management commands) from DDF nodes to PCRF nodes.
Available in the sensors tree or at:
http://<ddf_node>:8091/ustats
Diameter Peer Connection Status
Shows the status of connections between cluster nodes; available at:
http://<ddf_or_pcrf_node_domain>:8091/monitoring/diameter/getNagiosPeerStatus
RRD Charts
To analyze the system behavior within a specified period of time, RRD Charts can be used.
RRD Charts present various statistics (CPU or memory usage, provisioning errors, Diameter requests, etc.) in diagrams over the following periods: 30 min, 1 hour, 2 hours, 12 hours, 1 day, 4 days, 7 days, 4 weeks, 180 days.
Charts are available at:
http://<ddf_or_pcrf_node_domain>/rrd
The following page will be presented:
Figure 12. RRD Charts Start Page
To see RRD Charts, do the following:
1. Specify the period of time by setting the From and To fields.
2. Click Select Charts and choose the required charts in the appeared chart list. Click the Close button to close the chart list.
3. Click the Load button to see the charts. The charts will be displayed.
Information
Description of all sensors is presented in "Yota PCRF 3.6 Sensors.xlsx" document.
RRD Charts samples
Several charts for a 4-day period are presented below:
Figure 13. Diameter Gx request chart
Figure 14. Diameter Rx request chart
Figure 15. Session beacon chart
Time Marker
It is possible to point to a specified date and time. The time will be marked as a red vertical line in all RRD Charts.
1. Click the Add Marker button and set the time in the following format:
DD.MM.YY[YY]-hh:mm
Time must be set in UTC. Time input samples:
31.01.2013-15:00
10.02.13-00:00
2. Click Load button to update the charts and see the time pointer:
Figure 16. Diameter Gx request chart with time marker
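The marker format above can be produced with the standard date utility, shown here for the current UTC time:

```shell
# Print the current UTC time in the marker format DD.MM.YY-hh:mm.
date -u +"%d.%m.%y-%H:%M"
```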
Appendix 1. Utilities list
This section describes the scripts that are available on the PCRF and DDF clusters.
PCRF cluster scripts
Scripts that can be executed directly in the command line on PCRF nodes are listed below.
Deploy scripts:
Script Description
/mnt/JumpStart/Pcrf/Smoke/latest/!deploy.sh Launches PCRF modules installation
Rules configuring scripts:
Script Description
/opt/pcrf_core/utils/rules_validator Checks if rules.xml is correct
/etc/pcrf/config/lua/provisionFs.sh Verifies the new engine.lua and replaces the old one in the /etc/pcrf/config/lua/ directory of a PCRF node. Should be executed on the PCRF node where the file is copied to
/etc/pcrf/config/rules/provisionFs.sh Verifies the new rules.xml and replaces the old one in the /etc/pcrf/config/rules/ directory of a PCRF node. Should be executed on the PCRF node where the file is copied to
/opt/pcrf_core/utils/engine_script_run Checks if the policy selection logic works correctly
Processes start/stop scripts
Script Description
/opt/pcrf_utils/bin/pcrf_full_start.sh Starts all processes on the node
/opt/pcrf_utils/bin/pcrf_full_stop.sh Stops all processes on the node
/opt/pcrf_utils/bin/pcrf_full_restart.sh Restarts all processes on the node
Session termination
Script Description
/opt/pcrf_core/utils/sql/kill_all_primary_sessions_on_this_peer.sh Terminates all primary sessions on this peer
/opt/pcrf_core/utils/sql/kill_all_sessions_on_this_peer.sh Terminates all sessions on this peer
/opt/pcrf_core/utils/sql/kill_on_second_node.sh Terminates sessions on the secondary node
DB Utilities
Utility Description
/opt/pcrf_utils/bin/spr_util Fills in the PCRF database with the SPR part from a file. Only information about subscribers, their services and attributes is taken, based on a specified template
/opt/pcrf_utils/bin/tt_export Exports information from the database into a file
/opt/pcrf_utils/bin/tt_import Imports data from a file to the database of
PCRF node
/opt/pcrf_utils/bin/import_settings.sh Imports the server settings (Configuration -> Server Settings) from a file
/opt/pcrf_utils/bin/export_settings.sh Exports the server settings (Configuration -> Server Settings) to a file
Other
Utility Description
/opt/cli/cli_execute Executes CLI command
/opt/pcrf_utils/bin/mop/mop Launches the Mass Operation Utility
/opt/pcrf_utils/bin/set_event Sets events (like session update, session check,
request usage) for specified sessions.
DDF cluster scripts
Scripts that can be executed directly in the command line on DDF nodes are listed below.
DDF start/stop scripts:
Script Description
/opt/pcrf_utils/bin/pcrf_full_stop.sh
/opt/pcrf_utils/bin/pcrf_full_start.sh
/opt/pcrf_utils/bin/pcrf_full_restart.sh
Stops, starts, and restarts all DDF node processes
DB scripts
Script Description
/opt/pcrf_utils/bin/spr_util Fills in the PCRF database with the SPR part from a file. Only information about subscribers, their services and attributes is taken, based on a specified template
/opt/pcrf_utils/bin/tt_export Exports information from the database into a file
/opt/pcrf_utils/bin/tt_import Imports subscriber profile data from a file to the database of a DDF node
/opt/pcrf_utils/bin/import_settings.sh Imports the server settings (Configuration -> Server Settings) from a file
/opt/pcrf_utils/bin/export_settings.sh Exports the server settings (Configuration -> Server Settings) to a file