1.1 TIBCO 3GPP Framework

The Third Generation Partnership Project (3GPP) defines interfaces and behaviour for various components that are deployed in the service layer in telecom operators' networks. See http://www.3gpp.org/ for further details.
The TIBCO 3GPP Framework allows the rapid development of solutions based on the 3GPP standards, including :-

• Policy and Charging Rules Function (PCRF)
• Online Charging System (OCS)
• Diameter Routing Agent (DRA)
• and others
Since the core TIBCO technology used is BusinessEvents Extreme (BE-X), the final solution will inherit all the BE-X features, including :-

• High availability (including geographic), data partitioning and distribution
• Internal routing
• High performance and low latency
• Persistent in-memory objects with key support
• Transactions (ACID) and locking
• Business rule definitions (rules and decision tables)
• Hybrid development model (business rules and Java)
• Web-based and command-line administration
The 3GPP framework builds on BE-X and includes the following major components :-
• Diameter channel
• RADIUS channel
• LDAP channel (for user profile lookups)
• SMPP channel (to send SMS messages)
• SMTP channel (to send email messages)
• File reader channel (to support exception lists)
• Data model for users and sessions
• Core features such as configurable state machine and diameter rules
• General purpose rule groups
• Utility functions
• Automated test harness
• Manual test harness (GUI)
• Examples and demos
1.2 Document conventions

• Italics - specific API name
• Bold - important concept or idea in the current context
• Monospaced - pending or issue
2.1 Overview

The TIBCO 3GPP framework is designed to quickly facilitate the development of 3GPP based solutions. In this overview we introduce the key concepts.
The typical development process may be described as :-

1. Capturing requirements
2. High level design - this will include articulating how the requirements will be met, the data model and what rule groups are required
3. Create a new BusinessEvents Extreme studio project for the solution and import the framework as a library
4. Design and configuration of the session state machine - this includes connecting together the right rule groups in the right order
5. Configuration of each rule group with business rules - typically with decision tables
6. Configuration of the network details (IP address, port numbers, HA partitioning etc)
7. Defining, implementing and running system test cases using the automated test framework
If there is a need for multiple frameworks, for re-use across projects for the same customer, then this is possible.
2.1.1 Data model

The framework includes a base data model which includes the definition of a diameter session and a user profile. It is expected that solutions will extend these as required.
See page Data Model for further details on the data model.
2.1.2 Rule groups
A rule group may be defined as a re-usable and configurable component that executes a particular business function. Examples of rule groups might include :-
• Access control - depending on decision table configuration, in-bound message and user profile, determine if access is permitted or not.
• Routing - depending on decision table configuration and in-bound message, select a destination network element.
• Usage control - sum usage on a per user basis
• User notification - send a notification (eg SMS) to a user
• Logs - write a CDR record when processing of a message is completed

A rule group may be deployed multiple times in a solution (with unique names); each instance can share its configuration or have its own.
The framework supplies a number of rule groups; solutions can also implement their own, since rule groups are implemented in BusinessEvents Extreme Rules.
See page FIX THIS for further details on rule groups.
2.1.3 State machines
In the common case, work is started by in-bound diameter messages. Since these will be tied to a diameter session, most solutions will focus on a state machine for diameter sessions.
The session state machine is configured in a decision table and describes how the rule groups are connected. The decision table also describes related configurations for each rule group and any timeouts.

Multiple session state machines may be defined, but only one is selected for a given session. This allows, for example, definition of a test state machine.

At runtime, the execution of the state machine is captured in the audit log - specifically, what rule groups were executed along with key decisions made.
See page State Machine for further details on state machines.
2.1.4 Network configuration
The framework allows for configuration of network related settings via a kcs configuration file. The file contains settings for :-

• Diameter channel configuration (IP address, port number, capabilities information, application handler etc)
• LDAP channel configuration (IP address, port number, query strings etc)
• Override list configuration
• Dynamic variables
• Metrics
• High availability (what data is partitioned and how, partition mapper settings etc)
Note that the network configuration is separate from the business rules - the rules remain configured in decision tables.

The network configuration file is loaded and activated once the application is up and running. Once active, the channels are enabled and so the solution will start processing work.
See page FIX THIS for further details on network configuration.
2.1.5 Utility functions
The framework also supplies a number of utility functions which may be invoked from rule groups and in decision tables. These include tasks such as :-

• Internal manipulations (such as managing usages)
• Simplified tests (specifically for use in decision tables)

The framework supplies a number of utility functions; solutions can also implement their own, since utility functions are implemented in BusinessEvents Extreme Rule Functions.
See page FIX THIS for further details on utility functions.
2.1.6 Typical execution steps
Putting these ideas together, a typical application execution flow for a PCRF application is shown below.
Credit Control Request Initial (CCR-I) execution
On receipt of the diameter Credit Control Request Initial (CCR-I) message, the application is required to create a session, execute rules and return a suitable Credit Control Answer Initial (CCA-I).
1. CCR-I message from the diameter client
2. Diameter channel parses the message
3. Diameter channel calls the appropriate application handler
4. Application handler creates a BE-X CCR Event from the data in the diameter message
5. Application handler asserts the BE-X CCR Event, thus causing the Run To Completion (RTC) cycle to start
6. At this point there is no user profile in memory, hence the Fetch Profile rule group is triggered. Typically this will send a request to an external system (such as an LDAP server) to retrieve the user profile details.
7. When the user profile details arrive back from the external system, the Process Profile rule group is triggered to create the User Profile Concept in memory.
8. Now that the User Profile Concept exists, the CCR-I rule is triggered to create the User Session Concept, to select the state machine to use and to start the state machine execution.
9. Depending on the state machine configuration, a set of rule groups will be executed to process the request.
10. Return CCA rule group is triggered during the state machine execution to create the Credit Control Answer (CCA-I) message
11. Return CCA rule group passes the CCA-I message to the channel endpoint
12. Diameter channel composes the message
13. Diameter channel sends the answer message back to the client
14. Wait rule group is triggered to end the Run To Completion (RTC) cycle and to wait for further messages
Credit Control Request Update (CCR-U) execution
On receipt of the diameter Credit Control Request Update (CCR-U) message, the application is required to locate the session, execute rules and return a suitable Credit Control Answer Update (CCA-U).
1. CCR-U message from the diameter client
2. Diameter channel parses the message
3. Diameter channel calls the appropriate application handler
4. Application handler creates a BE-X CCR Event from the data in the diameter message
5. Application handler asserts the BE-X CCR Event, thus causing the Run To Completion (RTC) cycle to start
6. Since the User Profile Concept exists (from the CCR-I stage), the CCR-U rule is triggered to continue the state machine execution from the Wait rule group
7. Depending on the state machine configuration, a set of rule groups will be executed to process the request.
8. Return CCA rule group is triggered during the state machine execution to create the Credit Control Answer (CCA-U) message
9. Return CCA rule group passes the CCA-U message to the channel endpoint
10. Diameter channel composes the message
11. Diameter channel sends the answer message back to the client
12. Wait rule group is triggered to end the Run To Completion (RTC) cycle and to wait for further messages
Credit Control Request Terminate (CCR-T) execution
On receipt of the diameter Credit Control Request Terminate (CCR-T) message, the application is required to locate the session, execute rules, return a suitable Credit Control Answer Terminate (CCA-T) and terminate the session.
1. CCR-T message from the diameter client
2. Diameter channel parses the message
3. Diameter channel calls the appropriate application handler
4. Application handler creates a BE-X CCR Event from the data in the diameter message
5. Application handler asserts the BE-X CCR Event, thus causing the Run To Completion (RTC) cycle to start
6. Since the User Profile Concept exists (from the CCR-I stage), the CCR-T rule is triggered to continue the state machine execution from the Wait rule group
7. Depending on the state machine configuration, a set of rule groups will be executed to process the request.
8. Return CCA rule group is triggered during the state machine execution to create the Credit Control Answer (CCA-T) message
9. Return CCA rule group passes the CCA-T message to the channel endpoint
10. Diameter channel composes the message
11. Diameter channel sends the answer message back to the client
12. End Session rule group is triggered to remove the session (and possibly the user profile) from memory
3 Base Diameter Messages
3.1 Base diameter messages

This section describes the base diameter messages as supported by the diameter channel and PCRF framework. These message exchanges are typically not shown in business use-cases to avoid cluttering of the diagrams. However, these will always apply in the cases discussed.
For further details on the diameter messages see the diameter channel documentation.
3.1.1 Capabilities Exchange
On initial connection to the PCRF, the diameter client (PCEF) is required to send a capabilities exchange request (CER) message and the PCRF will return a capabilities exchange answer (CEA) message.
AVP table (fragment): Firmware-Revision - code 267, Mandatory: N (relevant firmware revision)
If the diameter channel is configured to validate the Application-Id, then the Auth-Application-Id AVP is mandatory and the PCRF will validate its value against the list of configured application ids. Should validation fail, the result NO_COMMON_APPLICATION is returned in the CEA message.
3.1.2 Device Watchdog
Once the initial connection has been established (including the CER/CEA sequence above), device watchdog timers are started. Either side can time out and thus trigger a Device Watchdog Request (DWR), expecting a Device Watchdog Answer (DWA).

The PCRF is configured with a watchdog timeout. If no messages have been received after this time, then the PCRF will initiate a Device Watchdog Request.
4 State machine
4.1 State Machine

The TIBCO 3GPP framework provides a mechanism for definition and execution of a state machine. This is typically used to define how each in-bound message is processed, which rule groups are invoked and in what order.
4.1.1 Defining the state machine
A state machine is defined in a decision table which implements the /RuleFunctions/VRF/StateDefinitionVRF virtual rule function. Multiple state machine decision tables can be created with any name.

To create the table in studio, select New->Decision Table and browse for the virtual rule function.

The decision table columns must be created by dragging the StateDefinition attributes into the table. stateNumber must be a condition, whilst the remaining attributes must be actions.

Finally, the states themselves must be added to the table, with each table row representing a state transition :-
• stateNumber must be unique and numerically increasing
• stateName represents the name of each state. Duplicate names listed in the table will, in effect, be the same state
• ruleGroupName must match the name of a defined rule group
• transitionName names the transition. Typical names will be "Pass" or "Fail"
• nextState names the state that is invoked next. So stateName is connected to nextState via the transition transitionName
• VRFName names any virtual rule function associated with this state
• decisionTableName names any decision table associated with this state. This will typically contain state specific configurations. Note that to associate multiple decision tables with the same state, multiple rows must be used.
• timeout defines a timeout (in milliseconds). If the state machine remains in this state longer than this timeout value, then a timeout transition is invoked. The timeout transition also needs to be defined.
• gaugeMetricName and gaugeMetricResult - if set, the gauge metric is incremented on state transition
A simple state machine definition is shown below :-
sd.stateNumber   sd.stateName    sd.ruleGroupName   sd.transitionName   sd.nextStateName
(CONDITION)      (ACTION)        (ACTION)           (ACTION)            (ACTION)
1                "Start"         "Start"            "Pass"              "WaitState"
2                "WaitState"     "Wait"             "Update"            "Wait"
3                "WaitState"     "Wait"             "Terminate"         "EndSession"
4                "EndSession"    "EndSession"       "Pass"              "End"

The remaining action columns (sd.VRFName, sd.decisionTableName, sd.timeout, sd.gaugeMetricName and sd.gaugeMetricResult) are left unset in this example, apart from a decisionTableName of "DiameterRulesTest".
4.1.2 Using the state machine
State machines are created on startup from the decision table configuration via the processStateMachineDefinitions rule function. This rule function must be configured to be called on startup.
Catalog functions are available to manage transitions between states. These are :-

• String getInitialState(String id, String stateMachineName) - retrieve the first state
• String getInitialRuleGroup(String id, String stateMachineName) - retrieve the first rule group
• getNextRuleGroup(String id, String stateMachineName, String currentState, String transitionName) - get next rule group
• String getNextState(String id, String stateMachineName, String currentState, String transitionName) - get next state and increment metrics
• checkStateTimeout(String id, String stateMachineName, String currentState) - check state timeout

A typical rule group has the following pattern :-
    ... implementation

    // signal pass to move onto next state
    //
    userSession.ruleGroup = StateManager.getNextRuleGroup(userSession.sessionId,
        userSession.stateMachineName, userSession.state, "Pass");
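Similarly, when a session is first created the initial state and rule group can be seeded from the catalog functions listed above. A minimal sketch, assuming the same userSession fields as the pattern above :-

    // seed the state machine on session creation (sketch)
    userSession.state = StateManager.getInitialState(userSession.sessionId, userSession.stateMachineName);
    userSession.ruleGroup = StateManager.getInitialRuleGroup(userSession.sessionId, userSession.stateMachineName);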
When the first message for a session is processed via the CCR-I rule, a state machine is assigned to this session and the stateMachineName attribute is set. The CCR-I rule determines the state machine name thus :-
1. From a decision table - this could set the state machine name based on a variety of factors, such as the diameter application id
2. From the dynamic variable UserSessionStateMachine
3. If not found, default to UserSessionStateMachine
A simple state machine selection decision table is shown below :-
5 Data model.......................................................................................................................................
5.1 Data model

The framework includes a basic data model which can be extended as needed.
5.1.1 ThreeGPPUserProfile
The ThreeGPPUserProfile represents a user profile, typically fetched from an external system.
It contains the following attributes :-
• userId - user identifier. Keyed by the unique ByUserId key.
• defaultProfile - flag to indicate if a default profile was used
• tariff - list of tariff plans
• addOns - list of add ons

To extend ThreeGPPUserProfile and include solution specific attributes, the following is required :-
1. Create a new concept which inherits from ThreeGPPUserProfile
User profile extension
2. Create new virtual rule functions which reference this new type. These will be copies of the virtual rule functions supplied in the framework, but using the new type instead of ThreeGPPUserProfile
3. Configuration decision tables in the solution will need to use the new virtual rule functions.
Decision table user profile
4. The session state machine configuration in the solution will reference the solution decision tables as normal
5. A factory class that extends ThreeGPPUserProfileFactory and provides an implementation of getOrCreateInstance(). This factory will need to be registered using variables.
/**
 * (c) Copyright 2012-2013, TIBCO Software Inc. All rights reserved.
 *
 * LEGAL NOTICE: This source code is provided to specific authorized end
 * users pursuant to a separate license agreement. You MAY NOT use this
 * source code if you do not have a separate license from TIBCO Software
 * Inc. Except as expressly set forth in such license agreement, this
 * source code, or any portion thereof, may not be used, modified,
 * reproduced, transmitted, or distributed in any form or by any means,
 * electronic or mechanical, without written permission from TIBCO
 * Software Inc.
 *
 * $Id: PCRFUserProfileFactory.java 20295 2014-11-27 10:39:23Z plord $
 */
package com.tibco.threegpp.examplepcrf.userdata;

public class PCRFUserProfileFactory extends ThreeGPPUserProfileFactory {

    /**
     * get or create an instance of PCRFUserProfile concept
     *
     * The concept is asserted.
     *
     * @param userId User identifier
     * @param lockMode lock mode
     * @return PCRFUserProfile
     */
    public ThreeGPPUserProfile getOrCreateInstance(String userId, LockMode lockMode) {
        KeyManager<PCRFUserProfileDerived> km = new KeyManager<PCRFUserProfileDerived>();
        KeyQuery<PCRFUserProfileDerived> keyQuery =
            km.createKeyQuery(PCRFUserProfileDerived.class,
                PCRFUserProfileDerived.BYUSERID_KEY_NAME);
        KeyFieldValueList kfvl = new KeyFieldValueList();
        kfvl.add(PCRFUserProfileDerived.USERID_PROPERTY_NAME, userId);
        keyQuery.defineQuery(kfvl);

        // fetch or create, assert and return - completion following the
        // ThreeGPPUserSessionFactory pattern shown later in this chapter
        PCRFUserProfileDerived p = keyQuery.getOrCreateSingleResult(lockMode, null);
        if (p != null) {
            p.assertConcept();
        }
        return p;
    }
}
The variable UserProfileFactory should be set to the class name of the factory class - in the example above this should be com.tibco.threegpp.examplepcrf.userdata.PCRFUserProfileFactory.

The rule Rules/Variables/SetUserProfileFactory must be enabled in the Cluster Deployment Descriptor file.
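For example, following the variables configuration syntax described in the Network configuration chapter, the factory could be registered with a variableList entry along these lines (a sketch; placement within framework.kcs follows the variables section) :-

    {
        name = "UserProfileFactory";
        type = STRING;
        value = "com.tibco.threegpp.examplepcrf.userdata.PCRFUserProfileFactory";
    },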
5.1.2 ThreeGPPSession
The ThreeGPPSession represents one diameter session. It is created when the credit control request initial (CCR-I) diameter message is processed and is deleted when the credit control request terminate (CCR-T) diameter message is processed.
It contains the following attributes :-

• sessionId - unique diameter session identifier. Keyed by the unique BySessionId key.
• userId - user identifier. Keyed by the non-unique ByUserId key.
• ueIpAddress - user equipment IP address
• apn - access point name the user has connected to
• imei - international mobile station equipment identity
• numberOfNotifications - count of the number of user notifications for this session
• numberOfBreachNotifications - count of the number of breach notifications for this session
• Rules :-
  • pccRules - list of policy and charging control (PCC) rules which currently have been applied to this session
  • previousPccRules - previous policy and charging control (PCC) rules
  • grantedServiceUnits - list of granted service units which have been applied to this session
  • rarSent - flag to determine if a re-authorization request (RAR) message is in progress
• Multi-session support :-
  • reAuthRequesterDC - flag used to manage multi-session re-authorization requests between data centers (DC)
  • reAuthRequesterId - identifier used to manage multi-session re-authorization requests
  • nMultiSessionUpdateEventsReceived - number of multi-session update events received
• mccMnc - mobile country code (MCC) and mobile network code (MNC)
• nasIpAddress - network access server (NAS) IP address
• Common diameter message information :-
To extend ThreeGPPSession and include solution specific attributes, the following is required :-

1. Create new virtual rule functions which reference this new type. These will be copies of the virtual rule functions supplied in the framework, but using the new type instead of ThreeGPPSession
VRF user session extension
2. Configuration decision tables in the solution will need to use the new virtual rule functions.
3. The session state machine configuration in the solution will reference the solution decision tables as normal
4. A factory class that extends ThreeGPPUserSessionFactory and provides an implementation of getOrCreateInstance(). This factory will need to be registered using variables.
/**
 * (c) Copyright 2012-2013, TIBCO Software Inc. All rights reserved.
 *
 * LEGAL NOTICE: This source code is provided to specific authorized end
 * users pursuant to a separate license agreement. You MAY NOT use this
 * source code if you do not have a separate license from TIBCO Software
 * Inc. Except as expressly set forth in such license agreement, this
 * source code, or any portion thereof, may not be used, modified,
 * reproduced, transmitted, or distributed in any form or by any means,
 * electronic or mechanical, without written permission from TIBCO
 * Software Inc.
 *
 * $Id: PCRFUserSessionFactory.java 20295 2014-11-27 10:39:23Z plord $
 */
package com.tibco.threegpp.examplepcrf.userdata;

public class PCRFUserSessionFactory extends ThreeGPPUserSessionFactory {

    /**
     * get or create an instance of PCRFUserSession
     *
     * The concept is asserted.
     *
     * @param sessionId session identifier
     * @param userId user identifier
     * @param ipAddress IP address
     */
    public ThreeGPPUserSession getInstance(String sessionId, LockMode lockMode) {
        KeyManager<PCRFUserSessionDerived> km = new KeyManager<PCRFUserSessionDerived>();
        KeyQuery<PCRFUserSessionDerived> keyQuery =
            km.createKeyQuery(PCRFUserSessionDerived.class,
                PCRFUserSessionDerived.BYSESSIONID_KEY_NAME);
        KeyFieldValueList kfvl = new KeyFieldValueList();
        kfvl.add(PCRFUserSessionDerived.SESSIONID_PROPERTY_NAME, sessionId);
        keyQuery.defineQuery(kfvl);
        Logging.debug(keyQuery.toString());

        PCRFUserSessionDerived s = keyQuery.getSingleResult(lockMode);
        if (s != null) {
            s.assertConcept();
        }
        return s;
    }

    public void processEventInstance(ThreeGPPUserSession session, Event event) {
        if (session != null) {
            PCRFUserSessionDerived s = (PCRFUserSessionDerived) session;
            s.processEvent(event);
        } else {
            // assert locally
            //
            PCRFUserSessionDerived.processEventNoSession(event);
        }
    }
}
The variable UserSessionFactory should be set to the class name of the factory class - in the example above this should be com.tibco.threegpp.examplepcrf.userdata.PCRFUserSessionFactory.

The rule Rules/Variables/SetUserSessionFactory must be enabled in the Cluster Deployment Descriptor file.
5.1.3 ThreeGPPPolicyControl
The ThreeGPPPolicyControl is used to contain transient data as the inbound message is processed.
It contains the following attributes :-
• sessionId - unique diameter session identifier. Keyed by the unique BySessionId key.
• applicablePolicyRules - list of rules applicable so far
• grantedPCCRules - list of PCC rules granted so far
• usedServiceUnits - used service units from CCR message
• eventTriggers - event triggers from CCR message
• notificationId - user notification identifier
• concurrentSessions - number of active sessions for the user
• multiSession - flag to indicate multi-session is in progress
• decisionResult - descriptive result of decision for audit logs
• timeStamp - timestamp of CCR message
• trace - rule execution trace
• tetheringScore - sum of tethering indicators
• tetheringAllowed - flag indicating if tethering is permitted
To extend ThreeGPPPolicyControl and include solution specific attributes, the following is required :-

1. Create a new concept which inherits from ThreeGPPPolicyControl
User profile extension
2. Create new virtual rule functions which reference this new type. These will be copies of the virtual rule functions supplied in the framework, but using the new type instead of ThreeGPPPolicyControl
3. Configuration decision tables in the solution will need to use the new virtual rule functions.
Decision table user profile
4. A factory class that extends ThreeGPPPolicyControlFactory and provides an implementation of getOrCreateInstance(). This factory will need to be registered using variables.

The variable PolicyControlFactory should be set to the class name of the factory class - in the example above this should be com.tibco.threegpp.examplepcrf.userdata.PCRFPolicyControlFactory.

The rule Rules/Variables/SetPolicyControlFactory must be enabled in the Cluster Deployment Descriptor file.
5.1.4 BreachHistory
The BreachHistory concept is used to maintain breach history against a rule for an MSISDN. This allows control of when notifications are sent for a policy rule violation.
It contains the following attributes :-
• userId - MSISDN
• ruleName - Policy or PCC rule name
• currentNumberOfBreaches - current number of breaches against this rule
• createTime - time this breach history object was first created
• expirationTime - time until which this breach history object is valid
5.1.5 NotificationHistory
The NotificationHistory concept is used to maintain notification history for an MSISDN. This allows control of how many notifications are sent for a specific notification.
It contains the following attributes :-
• userId - MSISDN
• notifictionId - Notification Identifier
• currentNumberOfNotifications - current number of notifications
• createTime - time this notification history object was first created
• expirationTime - time until which this notification history object is valid
5.1.6 Diameter events
Diameter events are used to carry diameter requests and answers to and from the application. The following types are defined :-
• ThreeGPPCCREvent - base CCR event
• ThreeGPPCCRGx - CCR Gx - java extension of ThreeGPPCCREvent containing complex data and Gx specifics
• ThreeGPPCCRRx - CCR Rx - java extension of ThreeGPPCCREvent containing complex data and Rx specifics
• ThreeGPPCCRRo - CCR Ro - java extension of ThreeGPPCCREvent containing complex data and Ro specifics
• ThreeGPPCCAEvent - base CCA event
• ThreeGPPCCAGx - CCA Gx - java extension of ThreeGPPCCAEvent containing complex data and Gx specifics
• ThreeGPPCCARx - CCA Rx - java extension of ThreeGPPCCAEvent containing complex data and Rx specifics
• ThreeGPPCCARo - CCA Ro - java extension of ThreeGPPCCAEvent containing complex data and Ro specifics
• ThreeGPPAAREvent - base AAR event
• ThreeGPPAAR - java extension of ThreeGPPAAREvent containing complex data
• ThreeGPPASAEvent - base ASA event
• ThreeGPPASA - java extension of ThreeGPPASAEvent containing complex data
• ThreeGPPSTREvent - base STR event
• ThreeGPPSTR - java extension of ThreeGPPSTREvent containing complex data
The following factory classes are also provided :-
• ThreeGPPCCRGxFactory - factory class can be changed using the CCRGxFactory variable
• ThreeGPPCCRRxFactory - factory class can be changed using the CCRRxFactory variable
• ThreeGPPCCRRoFactory - factory class can be changed using the CCRRoFactory variable
• ThreeGPPAARFactory - factory class can be changed using the AARFactory variable
• ThreeGPPASAFactory - factory class can be changed using the ASAFactory variable
• ThreeGPPRAAFactory - factory class can be changed using the RAAFactory variable
• ThreeGPPSTRFactory - factory class can be changed using the STRFactory variable
To extend an event and include solution specific attributes, the following is required :-

1. A new java class that inherits from one of the java extension classes above
2. A factory class that extends one of the factory classes above and provides an implementation of getOrCreateInstance(). This factory will need to be registered using variables.
Note that events that are expected to be routed between nodes should have a time-to-live of -1. This allows the event to still be routed even if the nodes have different versions of the application.
5.1.6.1 Diameter hop-by-hop
In normal operation, the diameter header hop-by-hop is included in the in-bound events and is auto-generated in the diameter channel when the out-bound event is sent. This works well for most cases, but in the special case of correlating a response to a request the out-bound hop-by-hop is unknown. In this case the diameter channel operation getNextHopByHop() can be used to assign a unique hop-by-hop value to the out-bound events, which can subsequently be used to correlate the in-bound response event.
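A minimal sketch of this correlation pattern is shown below. Only getNextHopByHop() is named by the framework; the channel class name, the event field and the correlation map are illustrative assumptions :-

    // reserve a hop-by-hop value for the out-bound request (sketch)
    long hopByHop = DiameterChannel.getNextHopByHop();   // channel operation named above
    request.hopByHop = hopByHop;                          // illustrative event field

    // remember the request so the in-bound answer can be matched (illustrative map)
    pendingRequests.put(hopByHop, request);

    // later, when the answer arrives :-
    // ThreeGPPCCREvent original = pendingRequests.remove(answer.hopByHop);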
6.1 Logging

The TIBCO 3GPP framework provides two mechanisms for logging and alarms based on log4j: Standard Logging and Log Services.
6.1.1 Standard Logging
The following channels are provided :-
The following channels are provided :-

1. Tracing - low-level developer orientated tracing, usually disabled
2. Debug - developer orientated debug, usually disabled
3. Info - general information messages, usually enabled
4. Warning - warning messages, usually enabled
5. Error - serious messages, likely to affect business rules
6. Fatal - very serious messages, likely to affect service, usually enabled and mapped to alarms
7. Audit - audit logs showing inbound message and rules execution. Usually directed to a log file.
8. CDR - call data record. Usually directed to a log file.
9. Trap - specific trap. Usually directed to SNMP.
10. Statistics - periodic statistics flushing if needed

• Catalog and java functions

  Operations are provided to generate a logging message through both a catalog function (for use in Rules and RuleFunctions) and java.
  • Logging.trace(String msg)
  • Logging.debug(String msg)
  • Logging.info(String msg)
  • Logging.warn(String msg)
  • Logging.error(String msg)
  • Logging.fatal(String msg)
  • Logging.audit(String msg)
  • Logging.auditOnCommit(String msg) - same as audit(), but the threadpool component is used to write the audit log when the current transaction commits. This ensures that the audit log is transactionally correct.
  • Logging.cdr(String msg)
  • Logging.cdrOnCommit(String msg) - same as cdr(), but the threadpool component is used to write the cdr log when the current transaction commits. This ensures that the cdr log is transactionally correct.
  • Logging.trap(String level, String msg)

  Additional functions are available to check for current log levels - these should be used to improve efficiency.
• Appenders and Layouts

  In addition to the standard log4j appenders, the following are also available :-

  • com.tibco.logging.ANSIColorLayout - highlights the log4j levels with ANSI escape sequences, thus making the console output more readable (for example, critical errors can be highlighted in red).
  • com.tibco.snmpappender.SNMPTrapAppender - formats a log message into a SNMP trap message

• Administration target

  An administration target is available to change the log level at runtime. This can be used to temporarily enable debug messages on a busy system.
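Putting the catalog functions above to use, a minimal usage sketch (invoked from java; the session identifier is illustrative) :-

    // general information message, written immediately
    Logging.info("CCR-I received for session " + sessionId);

    // audit entry, written via the threadpool component when the transaction commits
    Logging.auditOnCommit("session=" + sessionId + " ruleGroup=AccessControl result=Pass");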
6.1.2 Log Services

Log Services is a TIBCO component that allows configuration of log messages via a message catalog kcs file. Features include :-

• log messages configurable via message catalog (kcs file)
• message catalog auditing - no duplicate ids, ...
• sending SNMP trap messages (uses snmptrapappender component)
• default messages are used when message catalog is not loaded
• administration target to display components with loaded message catalogs and their log levels
• administration target to enable/disable specific log level for given parameter and value, e.g.
• administration target to generate MIB file from SNMP trap messages defined in the message catalog
See the logservices component for full details.
• Using Log Services

  A catalog function is provided to call Log Services: Log3GPP.message(long messageId, String defaultString, String[] params).

  • messageId - Unique message identifier. This should match the identifier set in the message catalog
  • defaultString - Format string to be used if this message is not found in the message catalog
  • params - List of parameters to be substituted in the string
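  For example, matching the usage shown in the ConfigurationNotifier example later in this document :-

      Log3GPP.message(8,
          "FRAMEWORK_INTERNAL_ERROR: Problem with setting MessageHandler class ${exception}",
          new String[]{ "alarmId:8", "exception:" + e });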
7.1 Metrics

The TIBCO 3GPP framework provides an interface to the metrics component to provide the following features :-
1. Gauge metrics - measures and reports a single value
2. Rate metrics - measures and reports based on rates
3. Cardinality metrics - measures and reports a single value obtained from the cardinality of a managed object
4. Rule functions to create and update metrics
5. Helper function to register and unregister a set of commonly used metrics
6. Additional support to measure latency

See the metrics component for further details on how to display the metrics values via a web page, command line, JMX and SNMP.
7.2 Application metrics

On startup of the 3GPP framework, a set of common metrics are registered. These metrics are updated by the framework functions. Note, however, that additional metrics can be added by applications - see the available rule functions below.

The hierarchy of the application metrics is :-
• Application : "3GPPFramework"• Group : "User Data"
• Metrics create dynamically, metric name is the origin host, results are "CCRI:Total","CCRI:Pass", "CCRI:Fail" and so on for CCAI, CCRU, CCAU, CCRT, CCAT, DPR,DPA, CER, CEA
• Group : "Latency"• Gauge Metric : "CCAI Latency" ( Total latency, Total messages )• Gauge Metric : "CCAU Latency" ( Total latency, Total messages )• Gauge Metric : "CCAT Latency" ( Total latency, Total messages )
• Group : "Result codes"• Metrics created dynamically on new result code
• Group : "APNs"• Metrics created dynamically on new APNs
• Group : "Policy"• Gauge Metric : "Tethering" ( Total )• Gauge Metric : "Modem abuse" ( Total )• PCC Rules metrics created dynamically on new PCC rule returned
• Group : "Error conditions"• Gauge Metric : "3GPPFramework" ( Total, CCA send failed, RAR send failed, ASR
send failed, DPR send failed, DPA send failed, CEA send failed, Missing mandatoryparameter, Duplicate CCR, User session already exists, User session unknown,Request number out of order, CCA Non success error codes, RAA Non success errorcodes, ASA Non success error codes, CEA Non success error codes, DPA Non successerror codes, Internal error )
DESCRIPTION "Group: Metrics related to APNs Metric: Number of diameter messages for APN2 Result: Pass Value: Current counter value" DEFVAL { 0 } ::= { apns-apn2 1 }apns-apn1 OBJECT IDENTIFIER ::= { apns 1 }apn1-fail OBJECT-TYPE SYNTAX Counter64 MAX-ACCESS read-only STATUS current DESCRIPTION "Group: Metrics related to APNs Metric: Number of diameter messages for APN1 Result: Fail Value: Current counter value" DEFVAL { 0 } ::= { apns-apn1 2 }apn1-pass OBJECT-TYPE SYNTAX Counter64 MAX-ACCESS read-only STATUS current DESCRIPTION "Group: Metrics related to APNs Metric: Number of diameter messages for APN1 Result: Pass Value: Current counter value" DEFVAL { 0 } ::= { apns-apn1 1 }
--
-- Group Result codes
--
resultCodes OBJECT IDENTIFIER ::= { threegppframework 5 }
resultCodes-ldapResultCodes OBJECT IDENTIFIER ::= { resultCodes 2 }

ldapResultCodes-code1 OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to result codes   Metric: Number of LDAP messages per result code   Result: Code:1   Value: Current counter value"
    DEFVAL { 0 }   ::= { resultCodes-ldapResultCodes 2 }

ldapResultCodes-code0 OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to result codes   Metric: Number of LDAP messages per result code   Result: Code:0   Value: Current counter value"
    DEFVAL { 0 }   ::= { resultCodes-ldapResultCodes 1 }

resultCodes-diameterResultCodes OBJECT IDENTIFIER ::= { resultCodes 1 }

diameterResultCodes-code4001 OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to result codes   Metric: Number of diameter messages per result code   Result: Code:4001   Value: Current counter value"
    DEFVAL { 0 }   ::= { resultCodes-diameterResultCodes 2 }

diameterResultCodes-code2001 OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to result codes   Metric: Number of diameter messages per result code   Result: Code:2001   Value: Current counter value"
    DEFVAL { 0 }   ::= { resultCodes-diameterResultCodes 1 }
ccat-fail OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-T messages   Result: Fail   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccat 3 }

ccat-pass OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-T messages   Result: Pass   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccat 2 }

ccat-total OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-T messages   Result: Total   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccat 1 }

diameter-ccrt OBJECT IDENTIFIER ::= { diameter 7 }

ccrt-fail OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-T messages   Result: Fail   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccrt 3 }

ccrt-pass OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-T messages   Result: Pass   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccrt 2 }

ccrt-total OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-T messages   Result: Total   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccrt 1 }

diameter-ccau OBJECT IDENTIFIER ::= { diameter 6 }

ccau-fail OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-U messages   Result: Fail   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccau 3 }

ccau-pass OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-U messages   Result: Pass   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccau 2 }

ccau-total OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-U messages   Result: Total   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccau 1 }

diameter-ccru OBJECT IDENTIFIER ::= { diameter 5 }

ccru-fail OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-U messages   Result: Fail   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccru 3 }

ccru-pass OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-U messages   Result: Pass   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccru 2 }

ccru-total OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-U messages   Result: Total   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccru 1 }

diameter-ccai OBJECT IDENTIFIER ::= { diameter 4 }

ccai-fail OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-I messages   Result: Fail   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccai 3 }

ccai-pass OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-I messages   Result: Pass   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccai 2 }

ccai-total OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCA-I messages   Result: Total   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccai 1 }

diameter-ccri OBJECT IDENTIFIER ::= { diameter 3 }

ccri-fail OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-I messages   Result: Fail   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccri 3 }

ccri-pass OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-I messages   Result: Pass   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccri 2 }

ccri-total OBJECT-TYPE
    SYNTAX Counter64   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to diameter protocol   Metric: Number of CCR-I messages   Result: Total   Value: Current counter value"
    DEFVAL { 0 }   ::= { diameter-ccri 1 }
--
-- Group User Data
--
userData OBJECT IDENTIFIER ::= { threegppframework 1 }

userProfiles-cardinality OBJECT-TYPE
    SYNTAX Gauge   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to user data   Metric: Number of user profiles   Value: Number of threegppframework.managed.concepts.userdata.ThreeGPPUserProfile objects"
    DEFVAL { 0 }   ::= { userData 2 }

userSessions-cardinality OBJECT-TYPE
    SYNTAX Gauge   MAX-ACCESS read-only   STATUS current
    DESCRIPTION "Group: Metrics related to user data   Metric: Number of user sessions   Value: Number of threegppframework.managed.concepts.userdata.ThreeGPPUserSession objects"
    DEFVAL { 0 }   ::= { userData 1 }
END
Once the MIB is installed, meaning is applied to snmp gets. For example, snmpwalk can be run from the command line :-
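A sketch, assuming SNMP v2c, the community string public and an agent on the local host - the OID is the framework's top-level { enterprises 2000 1 } from the metrics configuration, i.e. 1.3.6.1.4.1.2000.1 :-

$ snmpwalk -v2c -c public localhost 1.3.6.1.4.1.2000.1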
7.4 Extending the framework metrics

The metrics may be extended in applications by using the following rule functions :-
• registerApplication - Register an application name
• unregisterApplication - Unregister an application name
• registerGroup - Register a metrics group name - groups are always contained within an application
• unregisterGroup - Unregister a metrics group
• registerRateMetric - Register a rate metric - metrics are always contained within a group
• unregisterRateMetric - Unregister a rate metric
• registerGaugeMetric - Register a gauge metric - metrics are always contained within a group
• unregisterGaugeMetric - Unregister a gauge metric
• registerCardinalityMetric - Register a cardinality metric - metrics are always contained within a group
• unregisterCardinalityMetric - Unregister a cardinality metric
• incrementMetric - Increment a metric by a given value
• setMetric - Set a metric to a specific value
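For illustration only, a sketch of how an application might register and update its own gauge metric with these rule functions - the receiver name Metrics and the exact parameter lists are assumptions, not the framework's catalog signatures :-

    // register the hierarchy: application -> group -> metric (names are illustrative)
    Metrics.registerApplication("ExamplePCRF");
    Metrics.registerGroup("ExamplePCRF", "Sessions");
    Metrics.registerGaugeMetric("ExamplePCRF", "Sessions", "Active");

    // update the gauge as sessions are created (sketch)
    Metrics.incrementMetric("ExamplePCRF", "Sessions", "Active", 1);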
8.1 Network configuration

The TIBCO 3GPP framework provides a mechanism to manage network orientated configurations via a configuration file. Configurations may be loaded, activated, deactivated and unloaded. Since the configuration file has a version, multiple configurations may be loaded, but only one is active.
When the configuration is activated, any channels configured will be started.
Since the underlying mechanism is the BusinessEvents Extreme configuration service, configurations may be managed on the command line and also through the ActiveSpaces Transaction Administration console (ASTA). See the BusinessEvents Extreme administration guide for full details.
8.1.1 Configuration file
The configuration file itself, typically named framework.kcs, is broken down into separate sections. Example sections are listed below.
FIX THIS - ensure that all configuration sections are fully documented
• File header
configuration "framework" version "1.0" type "com.tibco.threegpp.framework.config"{ configure com.tibco.threegpp.framework.config { Configuration {
• Threegppframework configuration
// Configuration of the threegppframework
//
threegppframework =
{
    // class to handle normalization and denormalization of msisdn
    //
    msisdnHandlerClass = "com.tibco.threegpp.framework.userdata.DefaultMSISDNHandler";
};
• Diameter configuration
// Configuration of the Diameter channel
//
diameter =
{
    {
        // name
        //
        name = "Diameter";

        // description
        //
        description = "Diameter Gx server channel";

        // connection type - TCP or SCTP. Default is TCP
        //
        connectionType = TCP;

        // Diameter host
        //
        host = "0.0.0.0";

        // Diameter port number
        //
        port = ${DIAMETERPORT};

        // Diameter origin host name
        //
        originHost = "originHost";

        // Diameter list of application ids
        //
        applicationIds = { 4, 16777238 };

        // Diameter watchdog time in seconds
        //
        watchdogTime = 60;

        // Thread pool size
        //
        threadPoolSize = 8;

        // Maximum thread pool size
        //
        maxThreadPoolSize = 16;

        // Keep alive time for idle threads (seconds)
        //
        keepAliveTime = 10;

        // capacity of queue when all threads are busy
        //
        queueCapacity = 16;

        // reject policy of messages when server is busy. Can be one of :-
        //
        //   Abort - throws a RejectedExecutionException
        //   CallerRuns - runs the rejected task directly in the calling thread
        //   DiscardOldest - discards the oldest unhandled request
        //   Discard - silently discards
        //
        rejectPolicy = CallerRuns;

        // option to validate application id during CER/CEA exchange
        //
        validateApplicationId = false;

        // name of message handler that could process incoming requests
        // the following handlers are available in the framework :-
        //
        //   "com.tibco.threegpp.framework.diameter.DiameterGxMessageHandler"
        //       Standard handler for Gx channels
        //
        //   "com.tibco.threegpp.framework.diameter.DiameterRoMessageHandler"
        //       Standard handler for Ro channels
        //
        //   "com.tibco.threegpp.framework.diameter.DiameterRxMessageHandler"
        //       Standard handler for Rx channels
        //
        // Applications can supply their own.
        //
        setMessageHandlerClass = "com.tibco.threegpp.framework.diameter.DiameterGxMessageHandler";

        // auto start - Manual or Automatic
        //
        startPolicy = Automatic;

        // Timeout (in ms) to wait for DPA on endpoint stop.
        //
        dpaTimeout = 5000;

        // Value of AVP Disconnect-Cause to use in DPR on endpoint stop.
        //
        dprDisconnectCause = 0;
    }
};
• LDAP configuration
// Configuration of the LDAP channel
//
ldap =
{
    {
        // Strategy to load balance requests across the LDAP servers
        // RANDOM or ROUNDROBIN
        //
        ldapLoadBalancingStrategy = "RANDOM";

        // number of servers to try before returning without response
        //
        ldapResilienceCount = 1;

        // list of error codes for which retry should be attempted
        //
        ldapRetryErrorList = { 51, 85, 91 };

        // search base to use for user search
        //
        ldapUserSearchBase = "msisdn=[MSISDN],domainName=msisdnD,O=3,C=UK";

        // search filter to use for user search
        //
        ldapUserSearchFilter = "(objectclass=*)";

        // attributes of interest for user search
        //
        ldapUserSearchAttributesOfInterest = "subOrganisation,servicePS20,subAccounttype,subSubtype,billingId,servicePS21,servicePS22,mobTechnology,subTetheringOverride,staContractStartDate,staContractEndDate,subBAN,acVoiceVideoWorthy";

        // Configuration of the LDAP servers
        //
        ldapServer =
        {
            {
                // name
                //
                name = "LDAP";

                // milliseconds to wait for establishing connection to the LDAP server.
                // Default is 0 which means no timeout.
                //
                connectionTimeout = 0;

                // number of times to try connecting to the LDAP server. Default is 4.
                //
                maximumConnectionAttempts = 4;

                // milliseconds delay between reconnection attempts. Default is 500.
                //
                retryDelay = 500;

                // milliseconds to wait for an LDAP targeted search operation to complete.
                // Default is 0 which means no timeout.
                //
                searchTimeout = 0;

                // milliseconds to wait for an LDAP bulk search operation to complete.
                // Default is 0 which means no timeout.
                //
                bulkSearchTimeout = 0;

                // send interval in milliseconds, at which, message should be sent to the
                // LDAP server to keep the connection alive.
                // Default is 0 which means no keep alive.
                //
                keepAliveSendInterval = 10000;
            }
        };
    }
};
• Override list configuration
// Configuration of the override lists
//
overrideList =
{
    {
        // name
        //
        name = "Override List";

• Metrics configuration

// Time-out for latency measurements
//
latencyMeasurementTimeout = 0;

// Unregister metrics on configuration deactivation
//
unregisterOnDeactivate = true;

// enable metrics - if set to false, application updating the metrics
// will have no effect
//
metricsEnabled = true;

// Top-level index for this application - each application should have
// its own index to avoid clashes. Currently we have :-
//
//   1 - Reserved for 3GPP framework, demos and testing
//   2 - PCRF Data application
//   3 - OCS application
//   4 - VoLTE application
//
// When the metrics are exposed via SNMP, this index is used to generate the top-level
// OID from the common Tibco base of { enterprises 2000 }. Thus this OID is { enterprises 2000 1 }.
//
index = 1;

// Enable SNMP exposure of the metrics
//
snmpEnabled = true;

// Note that rate metrics are defined but not exposed via SNMP - this is because the network management
// console can plot histories from the other metrics
//
groups =
{
    {
        // User data group of metrics
        //
        // SNMP OID = { enterprises 2000 1 1 }
        //
        groupName = "User Data";
        groupDescription = "Metrics related to user data";
        index = 1;
        metrics =
        {
            {

            },
            {
                resultName = "User session unknown";
                index = 9;
            },
            {
                resultName = "Request number out of order";
                index = 10;
            },
            {
                resultName = "CCA Non success error codes";
                index = 11;
            },
            {
                resultName = "RAA Non success error codes";
                index = 12;
            },
            {
                resultName = "ASA Non success error codes";
                index = 13;
            }
        };
    },
    {
        metricName = "LDAP";
        metricDescription = "Metrics relating to LDAP error conditions";
        metricType = GAUGE;
        index = 2;
        results =
        {
            {
                resultName = "Send failed";
                index = 1;
            },
            {
                resultName = "Non success error codes";
                index = 2;
            },
            {
                resultName = "Unsupported operation";
                index = 3;
            }
        };
    }
};
• Variables configuration
// Configuration for variables
//
variables =
{
    {
        // What should happen during activation of new variables config, before new variables are activated
        //   RESET_ALL - all existing variables should be cleared (default)
        //   RESET_NONE - all existing variables should not be cleared
        //
        activateAction = RESET_ALL;

        // What should happen during deactivation of variables config
        //   CLEAR_ALL - all existing variables should be cleared (default)
        //   CLEAR_NONE - all existing variables should not be cleared - left as is
        //   CLEAR_DEFINED - all existing variables that were defined by this activated configuration should be cleared
        //
        deactivateAction = CLEAR_ALL;

        variableList =
        {
            // Name of the user profile rule
            //
            // default is LDAPUserProfile
            //
            {
                name = "UserProfileRule";
                type = STRING;
                value = "LDAPUserProfile";
            },
            // Name of the user session state machine to use
            //
            // default is UserSessionStateMachine
            //
            {
                name = "UserSessionStateMachine";
                type = STRING;
                value = "UserSessionStateMachine";
            },
            {
                // Rules pool size - perhaps around 3/4 of total cores
                name = "RulesPoolSize";
                type = INTEGER;
                value = "12";
            },
            {
                // diameter response pool
                name = "DiameterResponsePoolSize";
                type = INTEGER;
                value = "4";
            },
            {
                name = "var1";
                type = STRING;
                value = "val1";
            }
        };
    }
};
• File footer
        };
    };
};
8.1.2 Loading a configuration file
After the application is started, the configuration file may be loaded and activated. This can be done from ASTA or from the command line. From the command line this is simply :-
$ administrator adminport=8000 domainname=CLUSTER load configuration source=framework.kcs
[A] type = com.tibco.threegpp.framework.config
[A] name = framework
[A] version = 1.0
$ administrator adminport=8000 domainname=CLUSTER activate configuration type=com.tibco.threegpp.framework.config name=framework version=1.0
The example above loads and activates the configuration file framework.kcs on all nodes configured in the CLUSTER domain.

To make changes to an activated configuration, the recommended way is to increase the version of the configuration (in this example, 1.0 to 1.1), make the required changes, then load and activate the new configuration.

Activating a replacement configuration will automatically deactivate the older version.
$ administrator adminport=8000 domainname=CLUSTER load configuration source=framework.kcs
[A] type = com.tibco.threegpp.framework.config
[A] name = framework
[A] version = 1.1
$ administrator adminport=8000 domainname=CLUSTER activate configuration type=com.tibco.threegpp.framework.config name=framework version=1.1

At this point in the example, version 1.1 is loaded and activated. Version 1.0 is still loaded. To roll back to the previous 1.0 configuration, the following may be run again :-
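$ administrator adminport=8000 domainname=CLUSTER activate configuration type=com.tibco.threegpp.framework.config name=framework version=1.0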
Note that when a configuration is being replaced, the framework will deactivate and activate only the sections that have changed. For example, when the new kcs file changes only the ldap section, only the LDAP endpoints are stopped, removed and started again as per the new configuration.
8.1.3 Enabling the configuration notifier
A configuration notifier is required to process changes in the configuration. The catalog function Lifecycle.register() will create a suitable notifier, so this should be invoked on application startup.

The rule function Deployment/registration can be used as a startup function in the deployment configuration to achieve this.
8.1.4 Extending the configuration
Projects using the framework may extend the configuration by providing the following :-

• A Configuration java class to define the configuration data. This can extend the Configuration java class provided in the framework.
• A ConfigurationNotifier java class to define the behaviour on configuration load and unload. This can extend the ConfigurationNotifier java class in the framework.
• Creation of an instance of the ConfigurationNotifier on application startup.

Examples are shown below.
• Example Configuration.java
/**
 * (c) Copyright 2012-2013, TIBCO Software Inc. All rights reserved.
 *
 * LEGAL NOTICE: This source code is provided to specific authorized end
 * users pursuant to a separate license agreement. You MAY NOT use this
 * source code if you do not have a separate license from TIBCO Software
 * Inc. Except as expressly set forth in such license agreement, this
 * source code, or any portion thereof, may not be used, modified,
 * reproduced, transmitted, or distributed in any form or by any means,
 * electronic or mechanical, without written permission from TIBCO
 * Software Inc.
 *
 * $Id: Configuration.java 18892 2014-09-17 17:51:33Z plord $
 */
package com.tibco.threegpp.examplepcrf.config;

/**
 * @author Peter Lord
 *
 * Example PCRF Configuration
 */
public class Configuration extends com.tibco.threegpp.framework.config.Configuration
{
    /**
     * Configuration of the SMPP channel
     */
    public SMPP[] smpp = new SMPP[0];
}
• Example ConfigurationNotifier.java
/**
 * (c) Copyright 2012-2013, TIBCO Software Inc. All rights reserved.
 *
 * LEGAL NOTICE: This source code is provided to specific authorized end
 * users pursuant to a separate license agreement. You MAY NOT use this
 * source code if you do not have a separate license from TIBCO Software
 * Inc. Except as expressly set forth in such license agreement, this
 * source code, or any portion thereof, may not be used, modified,
 * reproduced, transmitted, or distributed in any form or by any means,
 * electronic or mechanical, without written permission from TIBCO
 * Software Inc.
 *
 * $Id: ConfigurationNotifier.java 18892 2014-09-17 17:51:33Z plord $
 */

/**
 * Configuration notifier for examplepcrf
 */
public class ConfigurationNotifier extends
        com.tibco.threegpp.framework.config.ConfigurationNotifier
{
    /**
     * Constructor
     *
     * @param name name of this notifier
     */
    public ConfigurationNotifier(@KeyField(fieldName = "name") String name) {
        super(Configuration.class.getPackage().getName(), name);
    }

    /**
     * Handle activating of new configuration
     *
     * @param version Configuration version to be activated
     */
    public void activated(final Version version) {
        super.activated(version);

        for (int c = 0; c < version.getConfigs().length; c++) {
            final Configuration config = (Configuration) version.getConfigs()[c];

    /**
     * Handle deactivating of existing configuration
     *
     * @param version Configuration version to be deactivated
     */
    public void deactivated(Version version) {
        super.deactivated(version);

        for (int c = 0; c < version.getConfigs().length; c++) {
            final Configuration config = (Configuration) version.getConfigs()[c];

    /**
     * Handle replace of existing configuration with new
     *
     * @param deactivated Configuration version to be deactivated
     * @param activated Configuration version to be activated
     */
    public void replaced(Version deactivated, Version activated) {
        super.replaced(deactivated, activated);

        // simplified for now
        for (int c = 0; c < deactivated.getConfigs().length; c++) {
            final Configuration config = (Configuration) deactivated.getConfigs()[c];

        } catch (InstantiationException e) {
            Log3GPP.message(8,
                "FRAMEWORK_INTERNAL_ERROR: Problem with setting MessageHandler class ${exception}",
                new String[]{ "alarmId:8", "exception:" + e });
        } catch (IllegalAccessException e) {
            Log3GPP.message(8,
                "FRAMEWORK_INTERNAL_ERROR: Problem with setting MessageHandler class ${exception}",
                new String[]{ "alarmId:8", "exception:" + e });
        } catch (SecurityException e) {
            Log3GPP.message(8,
                "FRAMEWORK_INTERNAL_ERROR: Problem with setting MessageHandler class ${exception}",
                new String[]{ "alarmId:8", "exception:" + e });
        }

        KeyQuery<com.tibco.jsmppchannel.ClientEndpoint> endpointKeyQuery;
        endpointKeyQuery = new KeyManager<com.tibco.jsmppchannel.ClientEndpoint>()
            .createKeyQuery(com.tibco.jsmppchannel.ClientEndpoint.class, "ByName");
        kfvl = new KeyFieldValueList();
        kfvl.add("name", smppConfig.name);
        endpointKeyQuery.defineQuery(kfvl);
        com.tibco.jsmppchannel.ClientEndpoint endpoint =
            endpointKeyQuery.getSingleResult(LockMode.WRITELOCK);

        if (endpoint != null) {
            MessageHandler h = endpoint.getMessageHandler();
            if (h != null) {
                ManagedObject.delete(h);
            }
            ManagedObject.delete(endpoint);
        }
    }
}
• Example Lifecycle.java
/**
 * (c) Copyright 2012, TIBCO Software Inc. All rights reserved.
 * ( Legal notice as in Configuration.java above. )
 *
 * $Id: Lifecycle.java 18807 2014-09-12 13:10:08Z plord $
 */
package com.tibco.threegpp.examplepcrf.lifecycle;

public class Lifecycle {

    // true if the configuration notifier was created by this JVM
    private static boolean m_createdByThisJVM;

    /**
     * Register administration target and configuration notifier
     */
    @Function
    public static void register() {
        // register configuration
        //
        class Register extends Thread {
            @Override
            public void run() {
                new Transaction("register") {
                    @Override
                    public void run() {
                        // Register configuration notifier - we allow only one
                        //
                        KeyManager<ConfigurationNotifier> keyManager = new KeyManager<ConfigurationNotifier>();
                        KeyQuery<ConfigurationNotifier> keyQuery =
                                keyManager.createKeyQuery(ConfigurationNotifier.class, "ByName");
                        KeyFieldValueList keyFieldValueList = new KeyFieldValueList();
                        keyFieldValueList.add("name", "configurationnotifier");
                        keyQuery.defineQuery(keyFieldValueList);
                        ConfigurationNotifier notifier =
                                keyQuery.getOrCreateSingleResult(LockMode.WRITELOCK, null);
                        m_createdByThisJVM = ObjectServices.modifiedInTransaction(notifier);
                    }
                }.execute();
            }
        }
        Register r = new Register();
        r.start();
        try {
            r.join();
        } catch (InterruptedException e) {
            // ignore
        }

        Logging.info("Registering target AdminTarget");

        // register admin target
        //
        Target.register(AdminTarget.class);
    }

    /**
     * Un-register administration target and configuration notifier
     */
    @Function
    public static void unregister() {
        // Remove notifier ... but only if it was this JVM that created it
        //
        class UnRegister extends Thread {
            @Override
            public void run() {
                new Transaction("unregister") {
                    @Override
                    public void run() {
                        if (m_createdByThisJVM) {
                            KeyManager<ConfigurationNotifier> keyManager = new KeyManager<ConfigurationNotifier>();
                            KeyQuery<ConfigurationNotifier> keyQuery =
                                    keyManager.createKeyQuery(ConfigurationNotifier.class, "ByName");
                            KeyFieldValueList keyFieldValueList = new KeyFieldValueList();
                            keyFieldValueList.add("name", "configurationnotifier");
                            keyQuery.defineQuery(keyFieldValueList);
                            ConfigurationNotifier notifier =
                                    keyQuery.getSingleResult(LockMode.WRITELOCK);
                            if (notifier != null) {
                                ManagedObject.delete(notifier);
                            }
                        }
                    }
                }.execute();
            }
        }
        // ( the remainder is elided in this excerpt; by symmetry with
        //   register() the thread would be started and joined here )
    }
}
9 High availability
9.1 High availability
The TIBCO 3GPP framework provides extensions to the high availability (HA) features in BusinessEvents Extreme. These include :-

• Configuration of HA via configuration
• Selection of partition mappers
• Selection of partition selectors
• Data centre management
• Routing
• Remote procedure calls
9.1.1 Partitions and partition groups
Data is stored in a partition. Each partition will have an initial primary node and zero or more replica nodes. Partitions are organized into groups.
[Figure: Partitions and partition groups - three nodes (A1, A2, A3), each with shared memory, hold primary and replica copies of partitions L1, L2 and L3 (group APPLICATION_GROUP1) and U1 and U2 (group APPLICATION_GROUP2).]
In this example, one group LOCAL is defined which consists of three partitions L1, L2 and L3. Each partition has one primary and one replica. So, should node A1 fail, node A2 is promoted to be primary for partition L1.
To realize this configuration, the following could be included in the network configuration file :-
9.1.2 Partition mappers
Actual data needs to be mapped to a partition. For example, when an object which represents a subscriber profile is created, a partition needs to be selected and mapped.
To achieve this, objects are assigned a partition mapper along with a selector. An example configuration for the partition mapper is shown below.
The framework includes the following selector services :-

• Examines field keyField

Partition mappers can be added in projects and then configured as required.
9.1.3 Routing
The high availability system is also used to route messages between nodes as needed.
Note that all nodes should be configured as replicas.
9.1.4 Data centres
Data centres may also be defined. This supports certain cases that require messages to be sent between data centres ( for example, multi session support ).
10.1 Override List
The TIBCO 3GPP framework supports a list of user identifiers which can be used in rule group configurations to enable exceptional handling of those user identifiers. Potential uses include :-

• VIP lists - don't degrade or disconnect these
• Special numbers such as crimestoppers, childline and emergency services
• Special offers
• Engineering numbers
Each list supports start and end dates that define the validity of the list in time. The format of the start and end dates is ISO8601 "yyyy-MM-ddTHH:mm:ss".
The override list consists of a formatted file - when this file is placed into a configured directory, the application will process the file. A file with no subscribers causes the override list to be removed.
The lists are not stored in highly available memory, and so the lists should be loaded on all nodes in the cluster. Multiple lists may be loaded and are independent of each other.
The first entry consists of :-

• Name of list
• From date of the validity of the list
• To date of the validity of the list

Each subsequent entry consists of :-

• MSISDN

The override list is loaded into the 3GPP Framework by copying it to a configured directory. After processing, the 3GPP Framework moves the file to either a "pass" directory or a "fail" directory. Loading of a new list will replace any previously loaded list of the same name.
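As an illustration ( the exact field delimiter is not defined in this excerpt; the list name and MSISDNs are made up ), a list named VIPList valid for the whole of 2014 containing two subscribers might look like :-

VIPList,2014-01-01T00:00:00,2014-12-31T23:59:59
441234567890
441234567891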
10.3 Accessing the exception list in the policy rules
A custom function is available to check for the presence of a user in an exception list. The format of this custom function is :-
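isInExceptionList(listName, userId) - returns true if the user identifier is on the named exception list, false otherwise ( see section 13.13 for the full description ). For example, a decision table custom condition that passes VIPs through might be ( list name illustrative ) :-

isInExceptionList("VIPList", p.userId)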
11.1 Dynamic Variables
The TIBCO 3GPP framework provides a mechanism to manage dynamic variables. This includes :-

• Setting of dynamic variables in configuration
• Setting and updating of dynamic variables from the command line
• Setting and updating of dynamic variables from java code
• Setting and updating of dynamic variables from rules
• Java notifiers on variable change
• Event notifiers on variable change
• A commit function so that a set of variables are updated at once
Uses of dynamic variables include :-
• Setting of certain framework parameters
• Application specific settings
• Test overrides
11.1.1 Configuration
Variables are best set in the variables section of the configuration file. An example is shown below.
// Configuration for variables
//
variables = {
    {
        // What should happen during activation of new variables config, before new variables are activated
        //   RESET_ALL  - all existing variables should be cleared (default)
        //   RESET_NONE - all existing variables should not be cleared
        //
        activateAction = RESET_ALL;

        // What should happen during deactivation of variables config
        //   CLEAR_ALL     - all existing variables should be cleared (default)
        //   CLEAR_NONE    - all existing variables should not be cleared - left as is
        //   CLEAR_DEFINED - all existing variables that were defined by this activated configuration should be cleared
        //
        deactivateAction = CLEAR_ALL;

        variableList = {
            // Name of the user profile rule
            //
            // default is LDAPUserProfile
            //
            {
                name = "UserProfileRule";
                type = STRING;
                value = "LDAPUserProfile";
            },
            // Name of the user session state machine to use
            //
            // default is UserSessionStateMachine
            //
            {
                name = "UserSessionStateMachine";
                type = STRING;
                value = "UserSessionStateMachine";
            },
            {
                // Rules pool size - perhaps around 3/4 of total cores
                name = "RulesPoolSize";
                type = INTEGER;
                value = "12";
            },
            {
                // diameter response pool
                name = "DiameterResponsePoolSize";
                type = INTEGER;
                value = "4";
            },
            {
                name = "var1";
                type = STRING;
                value = "val1";
            }
        };
    }
};
The following variables are used by the TIBCO 3GPP framework :-
• UserProfileRule - sets the name of the user profile rule ( eg LDAPUserProfile or StubbedUserProfile )
• UserSessionStateMachine - sets the name of the user session state machine
• RulesPoolSize - sets the size of the rules thread pool, default is number of cores
• DiameterResponsePoolSize - sets the size of the diameter response thread pool, default is number of cores
11.1.2 Updating on the command line
An administrator target is provided to access the dynamic variables :-
valid commands and parameters for target "variable":

    set variable
        name=<String>                              name of variable
        [ type=<VariableType, default = STRING> ]  type of the variable (STRING, INTEGER, BOOLEAN, LONG)
        value=<String>                             value of the variable
            Sets variable. Variables then can be used for example in decision tables using:
            Variables.getStringVariable("name"), Variables.getIntegerVariable("name"),
            Variables.getLongVariable("name"), Variables.getBooleanVariable("name")

    display variable
        name=<String>    name of variable
            Display committed variable identified by name

    rollback variable
            "Rollback" recently set/unset variables.

    unset variable
        name=<String>    name of variable
            Unset variable - removes it from the list of defined variables

    commit variable
            "Commit" recently set/unset variables. After commit they will become effective

    displayall variable
            Display all variables that are effective (committed)

    displayuncommitted variable
            Display all uncommitted (pending) set variable actions

Description: variables administration
The dynamic variables can be read and updated in rules ( for example, in decision tables ) and action can also be taken when variables are changed.
The following catalog functions are available for use in rules :-
• Variables.getStringVariable(name, defaultValue) - get a string variable
• Variables.getIntegerVariable(name, defaultValue) - get an integer variable
• Variables.getLongVariable(name, defaultValue) - get a long variable
• Variables.getBooleanVariable(name, defaultValue) - get a boolean variable
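As an illustration ( the variable name and the profile attribute are made up for this example ), a decision table custom condition could compare accumulated usage against a configurable threshold, falling back to a default when the variable has not been set :-

Variables.getLongVariable("HeavyUserThreshold", 1000000000) <= p.accumulatedUsage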
Rules can perform some application action if a dynamic variable is changed. For example :-
/**
 * Copyright 2014 TIBCO Software Inc. ALL RIGHTS RESERVED.
 * TIBCO Software Inc. Confidential Information
 *
 * $Id: VariableUpdated.rule 12793 2014-02-03 12:59:33Z plord $
 **/
rule Rules.VariableUpdated {
    attribute {
        priority = 5;
        forwardChain = true;
    }
    declare {
        Events.SetVarEvent event;
    }
    when {
        event.variableName == "MyVariable";
        event.action == Variables.getEventCommit();
    }
    then {
        Logging.info(event.variableName + " has been committed. New value is " + event.variableValue);
    }
}
12 Group Rules
12.1 Group rules overview
The policy framework organizes rules into groups. These groups may be deployed to form an application.
12.2 Session related rule groups
The following rule groups are related to the diameter session and the session state machine.
12.3 AccessControl
• Overview

This group provides access control capabilities from a configured decision table.

Access Control provides a clear and hard decision (Pass/Fail) about whether a request is allowed to proceed and be processed by the system. It does so based on parameters primarily from, but not limited to, the User Profile and User Session information, e.g. Location Zone can be derived using session parameters through the Virtual Rule Function getLocationZone.

A similar use of Access Control is to define some default behaviour when checks fail but then allow processing as normal.

The virtual rule function is AccessControlVRF. Configured decision tables must implement this virtual rule function.
• Behaviour
This rule group has the following behaviour :-

• If no decision table is configured, state transition Pass is returned
• If a decision table is configured and there are no matches, state transition Pass is returned
• If a decision table is configured and there are matches ( ie decisionResult is set ), state transition Fail is returned

• Example use
Graphically, the rule group may be deployed like this :-
An AccessControl decision table to pass through emergency numbers might be :-
CONDITION p.userId    ACTION pc.decisionResult
"999"                 "Emergency"
12.4 DetectDuplicateCCRByE2E
• Overview
The rule determines if a Credit Control Request message is a duplicate or not. The processing in the state machine will determine how a duplicate message is processed.
Note that this rule just needs to be enabled - it's not part of a state machine.
• Behaviour
A duplicate CCR is determined by :-

• Same diameter end-to-end header field and same origin host AVP.

When a duplicate is detected, this rule sets the duplicateCCR attribute of policy control.
• Example use
This rule should be enabled in the cluster deployment description (cdd file).
12.5 DetectDuplicateCCRBySessionId
• Overview
The rule determines if a Credit Control Request message is a duplicate or not. The processing in the state machine will determine how a duplicate message is processed.
Note that this rule just needs to be enabled - it's not part of a state machine.
• Behaviour
A duplicate CCR is determined by :-
• T bit set to 1
• Same session Id
• Same CC Request Number
When a duplicate is detected, this rule sets the duplicateCCR attribute of policy control.
• Example use
This rule should be enabled in the cluster deployment description (cdd file).
12.6 EndSession
• Overview

This group provides state cleanup and is designed to be deployed as the last state in the state machine.
• Behaviour
This rule group has the following behaviour :-

• Remove the session from memory
• Remove policy control from memory
• If no other sessions remain, remove the profile from memory

When removing concepts from memory, referenced concepts are also removed.
• Example use
Graphically, the rule group may be deployed like this :-
[Figure: Example State Machine - state MyEndSession (ruleGroupName: EndSession) with transition Pass.]
The equivalent definition in the state machine decision table would include :-
12.7 LogsStats
• Overview

This group records the audit logs and statistics. The audit log is created transactionally - ie only if the current transaction commits.
• Behaviour
The audit log file is comma delimited and contains the following information :-
• Transaction time stamp (yyyy-MM-dd'T'HH:mm:ss.SSS)
• Diameter session identifier
• User identifier
• Used service units (contained in the original CCR message)
  • Monitoring key
  • Used time
  • Used volume
• List of PCC rules granted in total
• List of granted service units
  • Monitoring key
  • Granted time
  • Granted volume
• Trace of rules executed

• Example use
Graphically, the rule group may be deployed like this :-
[Figure: Example State Machine - state MyLogsStats (ruleGroupName: LogsStats) transitions with Pass to state MyWait (ruleGroupName: Wait); the audit log is written at the end of the RTC when the transaction commits.]
12.8 MultiSession
• Overview

This group provides support to share a single quota across more than one session. This is achieved by providing downstream rule groups (such as PolicyControl) with the following information :-

• Number of concurrent sessions open for this subscriber
• Recorded quota is up-to-date - this is achieved by sending RARs

This group should be placed near the start of the state flow.
• Behaviour
The behaviour of the group is as follows :-
A. If there is just one session for this subscriber, no action. Signal Pass.
B. If there is more than one session for this subscriber and this is a CCR-U message due to a previous RAR then :-
   i.  Signal the originating session with MultiUpdate
   ii. Wait for MultiComplete
C. Otherwise :-
   i.   Send a RAR to other sessions
   ii.  Wait for (number of sessions - 1) MultiUpdate signals
   iii. Signal other sessions with MultiComplete
In this way, whenever a single quota needs to be shared across all sessions, the calculation can be done in the knowledge that the used quota is up to date.
• Typical flows
The following diagram shows the case of a new session when a session already exists for the subscriber.

Typically, policy control will actually grant the volume based on the remaining un-used volume and the number of sessions left.

A similar case occurs when there is more than one session active for a subscriber and one session issues a CCR-U indicating a breach. Since the remaining session's volume may now be shared between the two sessions, policy control can re-balance the granted volume.
[Figure: Multiple session update - a CCR-U triggers the multi-session flow.]
Finally one session may terminate leaving all the quota for the remaining sessions.
Note that the terminate case is optional since the granted quota will be re-calculated on the next breach anyway.
• Deployment
The rule group should be configured in the user session state machine along with the MultiSessionWait rule group. An example deployment is shown below :-

With no decision table configured, the multiple session support will become active when there are two or more sessions open for the same subscriber.

If a configuration decision table is present (see example below) then multiple session support is only enabled when a row is matched and there are two or more sessions open for that subscriber.
• Internal
On sending a RAR, the SendRAR rule group sets the sendRAR attribute on the UserSession to true. This is reset back to false in the ReturnCCA rule group. Hence this attribute can be used to determine if a given CCR message is due to a previous RAR.
MultiSession ensures that the concurrentSessions attribute on PolicyControl is maintained with the number of sessions for this subscriber. Rule groups such as PolicyControl may use this to calculate granted quota.
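As an illustration only ( grantedVolume and remainingVolume are made-up names; the actual calculation is application specific ), a PolicyControl decision table action could grant each session an equal share of what remains :-

pc.grantedVolume = remainingVolume / pc.concurrentSessions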
12.9 PolicyControl
• Overview

The purpose of this rule group is to execute a configured Policy Control Decision Table. These decision tables should end up adding Policy Rules based on the UserSession state and UserProfile parameters.

There can be multiple instances of the Policy Control Rule, each instance being associated with a specific Policy Control Decision Table that determines what Policy Rules apply. e.g. Policy Control Decision Tables could apply Policy Rules for heavy usage, speed restrictions, tethering restrictions, device restrictions.
• Example use
Graphically, the rule group may be deployed like this :-
Here, if the profile is for a prepaid account and the accumulated usage is less than 1000000000 octets, NormalHeavyUser0040 would be applied.

If the profile is for a postpaid account and the accumulated usage is greater than 1000000000 octets, BreachedHeavyUser0050 would be applied, and so on ...
Policy Control only determines the Policy Rules that apply. These Policy Rules are then mapped to external PCC Rules within Policy Evaluation.
If policy escalation is required, a custom action column can be added to accumulate breaches. For example, using incrementBreachHistory(p.userId, "Breached1Mbs", 600); would increment the Breached1Mbs counter for this subscriber - the counter is reset after 600 seconds of no breaches.
12.10 PolicyEvaluation
• Overview

After all Policy Control rules have been executed, the list of Policy Rules that apply will be known. It is possible that some of these Policy Rules will conflict with each other. These conflicts are resolved by first applying priorities to each of the Policy Rules and then checking to see if there are any conflicts between them. If there are conflicts, the priorities applied previously determine which Policy Rules apply.

• Behaviour
Once the definitive list of the Policy Rules is determined, these can be mapped to the external PCC Rules, Usage Monitoring Information and any Event Triggers that need to be installed on the PCEF. In addition, several other actions are determined, including: the Policy Rule application duration (i.e. once applied, a Policy Rule remains in force for a specific duration of time), whether the Policy Rule defines a Redirect rule (which determines if Redirect history needs recording), whether there is a breach code associated with the Policy Rule, and whether any Usage Control action is needed on any associated usage buckets.

The virtual rule functions involved are getRulePriority, PolicyRuleResolutionVRF and UsageMonitoringControlVRF. Configured decision tables must implement these virtual rule functions.
Firstly, Policy Evaluation runs the list of Policy Rules through the RulePriority decision table and removes those which conflict with others on the list and are of lower priority.

Next, Policy Evaluation converts the Policy Rules to the corresponding PCC rules using the PolicyRulesResolution decision table. The same step also appends any usage monitoring information and event triggers required to enforce the policy rule.

Finally, for quota based policy rules, it sets up the usage buckets as determined by Usage Control derived from PolicyRuleResolution. Usage Control parameters are determined from the UsageMonitoringControl decision table.

• Breach history and escalation
The PolicyRulesResolution table may use a custom condition column to check for previous breaches. The function getBreachHistory() can be used :-
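An illustrative custom condition ( the exact signature of getBreachHistory() is not shown in this excerpt ) that matches subscribers with three or more recorded breaches of a rule might be :-

getBreachHistory(p.userId, "Breached1Mbs") >= 3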
This group determines at the start of session establishment/update/re-authorization whether any existing policy rules installed previously still apply. It also cleans up any stale usage buckets related to quota rules.

Concepts referenced are UserProfile and UserPolicyHistory.

This group can return Pass or Fail to move to the next state.
12.12 ProfileDownload
• Overview

This group passes subscriber profile details to the PCEF via additional PCC rules. The group is configured by a Decision Table that implements the ProfileDownloadVRF virtual rule function.

• Behaviour
Two rule functions are available to populate the PCC rules. These should be used in a custom condition :-
appendPCCRule(pccRuleName, policyControl) - will add the named PCC rule. For example :-

• appendPCCRule("Adult:True", pc) will append "Adult:True"
• appendPCCRule("Tariff:"+p.tariff, pc) will append "Tariff:" followed by the actual tariff value from the profile

appendPCCRuleList(pccRuleName, list, policyControl) - will add a list of PCC rules. For example :-

• appendPCCRuleList("AddOns:", p.addOns, pc) will, for each value in addOns, append a PCC rule "AddOns:" followed by the actual addon name from the profile.
• Example use
Graphically, the rule group may be deployed like this :-
The decision table must be configured with "Take Actions Of One Row At Most" disabled (which is the default) to allow all matching rows to be evaluated.
Note that in this example, a custom condition is used to test for the existence of any addOns.
12.13 RAR
• Overview

This group retrieves the necessary concepts ready for down-stream RAR processing.
12.14 ReAuthUser
• Overview

This rule function sends a re-auth session event for each session active for the subscriber - the subscriber session will send the actual RAR.
12.15 ReturnCCA
• Overview

This group takes the result of the policy decision, updates the session and generates a CCA message.

Main input is PolicyControl
• Behaviour

The list of PCC rules on PolicyControl is compared with the list of PCC rules on the session and the appropriate rules installed or removed in the CCA message. The list of PCC rules on the session always represents the current set installed in the PCEF.
Once completed, the group signals Pass to the session state machine.
12.16 SendRAR
• Overview

This group takes the result of the policy decision, updates the session and generates a RAR message.

• Behaviour
Main input is PolicyControl

The list of PCC rules on PolicyControl is compared with the list of PCC rules on the session and the appropriate rules installed or removed in the RAR message. The list of PCC rules on the session always represents the current set installed in the PCEF.
Once completed, the group signals Pass to the session state machine.
12.17 TetheringControl
• Overview

This group determines if the subscriber is tethered based on a configured list of parameters such as :-

• Historic usage
• Usage notifications from PCEF (including tethering notifications)
The group is configured via three decision tables: ApplyTethering, TetheringScore andTetheringControl.
• Behaviour
ApplyTethering determines whether the subscriber is allowed tethering. It uses information from the subscriber's profile (3DS) and the tethering override table. If the subscriber is allowed tethering, the PolicyControl.tetheringAllowed flag is set to true and the rest of the logic is skipped.

TetheringScore determines the score that is then used in TetheringControl to determine the rules that need to be applied.
The group will create an instance of TetheringControl if not already created and augment TetheringControl with the results of the policy decision based on TetheringScore. The instance should be deleted at the end of the flow.

The group will create an instance of PolicyControl if not already created and augment PolicyControl with the results of this policy decision based on TetheringControl. The instance should be deleted at the end of the flow.

Rule functions are available to access any previously captured usage. See the UsageControl rule group for further details. Additional rule functions must be implemented, as they are used in decision tables :-

This group cannot fail and so always signals "Pass" to move onto the next state.
12.18 UsageControl
• Overview

This group records usage over time. Usage is extracted from the diameter message (Used-Service-Units AVP) and mapped to usage bucket(s) attached to the subscriber profile via the diameter monitoring key.

Other groups, especially Policy Control, can access the current usage buckets to determine policy.
• Behaviour
This group cannot fail and so always signals "Pass" to move onto the next state.

Zero or more instances of the Usage Control State may be configured, each instance configured with a unique or shared decision table.

Usage for each user is recorded each time the PCRF receives an update via Diameter. An update is received by the PCRF if there has been a Quota Unit Breach or a Quota Time Breach detected by the PCEF. Each update will contain the used units during a particular period and will be associated with a particular Monitoring Key. Each relevant usage bucket for the user and the associated monitoring key is updated with this usage as per the following logic:
• FIXED window:
  • Update the usage. There should always be a bucket that exists that represents fixed usage when a usage report is received.
• ROLLING window:
  • If the bucket does represent the current time period then update the usage.
  • If the bucket does not represent the current time period then check to see if the bucket is no longer valid for the current time period based on the duration and the measurement period. If it is no longer valid then discard it.
  • If no bucket exists that represents the current time period then create one and update it with the usage.
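The rolling window handling can be summarised in Java. This is a minimal sketch only, assuming a made-up UsageBucket type - it is not the framework's data model ( the FIXED window case simply updates the single existing bucket ) :-

import java.util.Iterator;
import java.util.List;

/** Illustrative usage bucket - not the framework data model. */
class UsageBucket {
    long periodStart; // start of the time period this bucket covers (ms)
    long duration;    // measurement period length (ms)
    long usedVolume;  // accumulated usage (octets)

    UsageBucket(long periodStart, long duration) {
        this.periodStart = periodStart;
        this.duration = duration;
    }

    boolean covers(long now)  { return now >= periodStart && now < periodStart + duration; }
    boolean expired(long now) { return now >= periodStart + duration; }
}

class RollingWindowUsage {
    /** Apply a reported usage amount to the rolling window buckets. */
    static void recordUsage(List<UsageBucket> buckets, long used, long duration, long now) {
        UsageBucket current = null;
        for (Iterator<UsageBucket> it = buckets.iterator(); it.hasNext(); ) {
            UsageBucket b = it.next();
            if (b.covers(now)) {
                current = b;   // bucket for the current time period
            } else if (b.expired(now)) {
                it.remove();   // bucket no longer valid - discard
            }
        }
        if (current == null) {
            // no bucket represents the current time period - create one
            current = new UsageBucket(now - (now % duration), duration);
            buckets.add(current);
        }
        current.usedVolume += used; // update with the reported usage
    }
}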
The UserNotification rule group is responsible for sending notifications to the NAING server for each applicable PCC rule determined by PolicyEvaluation.

• Behaviour
Based on the notificationId, the notification parameters are fetched from a configured decision table prior to calling the NAINGClient.notify() catalog function that constructs and sends the RV request to NAING.

The virtual rule function is Lookup/getUserNotification. The configured decision table must implement this virtual rule function.

Concepts referenced are UserProfile, UserSession, PolicyControl and UserNotification.

This rule group has only one outcome - "Pass". Failure to send the notification request to the NAING server is logged.
• UserNotification Decision Table (example)
CONDITION          ACTION    ACTION     ACTION         ACTION          ACTION
n.notificationId   n.bearer  n.msisdn   n.ipAddress    n.emailAddress  n.notification
24                 4                    s.ueIpAddress  p.email         "MMS Notification"
21                 1                                   p.email         "Email Notification"
22                 2                    s.ueIpAddress                  "Alert Notification"
*                  3         p.userId                                  "SMS Notification"
For example, for notificationId '24', in addition to populating the notification text, the Decision Table also sets the bearer to 4 (=MMS) and populates the IP address and email fields with values taken from the UserSession and UserProfile concepts.

After the UserNotification decision table lookup, NAINGClient.notify() is called. This catalog function validates that the mandatory fields for the particular bearer are set and then creates a TIBCO RV message destined for the NAING server defined in the PCRF configuration.

The UserNotification concept contains fields for the following optional message attributes, allowing any of them to be configured in the Decision Table :-
12.23 Profile related rule groups
The following rule groups are related to the user profile.
12.24 LDAPFetchProfile
• Overview

This rule group initiates a request to the LDAP server to fetch the subscriber profile. The LDAPProcessProfile rule group will process the response from the LDAP server.

This rule group assumes that the LDAP channel has been configured along with a suitable LDAP search string. See the configuration section for further details.

Note that this rule just needs to be enabled - it's not part of a state machine.
• Behaviour
The rule group is activated when :-
• A Credit Control Request Initial (CCR-I) event is asserted
• When the subscriber profile has not already been fetched
• The dynamic variable UserProfileRule is set to LDAPUserProfile
• An LDAP lookup is not already in progress
• Example use
This rule should be enabled in the cluster deployment description (cdd file).
12.25 LDAPProcessProfile
• Overview

This rule group processes a response from the LDAP server, and, using some decision table configurations, creates a subscriber profile.

Note that this rule just needs to be enabled - it's not part of a state machine.
• Behaviour
The rule group is activated when :-
• A Credit Control Request Initial (CCR-I) event is asserted
• An LDAP Search event is asserted
• The dynamic variable UserProfileRule is set to LDAPUserProfile
• The LDAP search event matches the LDAP search control concept
The decision tables associated with this rule group are configured in the RuleConfigurations table. These tables are :-

• LDAPInboundUserProfile - maps the LDAP response to a user profile
• DefaultUserProfile - provides a default user profile if the LDAP lookup fails
• CheckUserProfile - validates the user profile (either created from the LDAP response or from the default profile)

The LDAPInboundUserProfile decision table maps the LDAP response to the user profile by extracting attributes from the LDAP response. An example table is shown below.

Should the LDAP lookup fail, the DefaultUserProfile decision table may be used to provide a default profile and thus allow processing to continue. An example table is shown below.
CUSTOM CONDITION    ACTION p.orgId    ACTION p.bam
"true"              "UK"              ""
The CheckUserProfile decision table provides validation of the profile (or default profile) and is required to set validUserProfile to true or false. You can also set validationFailedReason to a string describing the validation issue. Should you have a requirement to not use the default profile when the profile is not found (ldap error 32 NO_SUCH_OBJECT), you can configure it as below. In case validUserProfile is set to false, an event UserProfileValidationFailedEvent is sent.
CUSTOM CONDITION   CONDITION          CONDITION                                   ACTION               ACTION
                   p.defaultProfile   l.resultCode                                l.validUserProfile   l.validationFailedReason
(p.orgId != "")                                                                   false                "Mandatory attribute Org Id is missing"
                   true               LDAPClient.getResultCode_NO_SUCH_OBJECT()   false                "Profile not found, default profile cannot be used"
12.26 FetchStubbProfile
• Overview

This rule group simulates a request to an external system to fetch the subscriber profile. The ProcessStubbProfile rule group will process the simulated response.

This rule group can be used when an external system isn't available or for testing purposes.

Note that this rule just needs to be enabled - it's not part of a state machine.
• Behaviour
The rule group is activated when :-

• A Credit Control Request Initial (CCR-I) event is asserted
• When the subscriber profile has not already been fetched
• The dynamic variable UserProfileRule is set to StubbedUserProfile

• Example use
This rule should be enabled in the cluster deployment description (cdd file).
12.27 ProcessStubbProfile
• Overview

This rule group, using some decision table configurations, creates a stubbed subscriber profile.

Note that this rule just needs to be enabled - it's not part of a state machine.
• Behaviour
The rule group is activated when :-

• A Credit Control Request Initial (CCR-I) event is asserted
• A LDAP Search event is asserted
• The dynamic variable UserProfileRule is set to StubbedUserProfile

The decision tables associated with this rule group are configured in the RuleConfigurations table.

• Example use
This rule should be enabled in the cluster deployment description (cdd file).

The StubbedUserProfile provides the names of the configurable decision tables. An example is shown below.
13 Group Rule Functions
13.1 Group rule functions overview
The policy framework includes a number of rule functions which can be used within decision tables. These are described here.

13.2 deploy
This rule function is called on startup and hot-deploy and reads and processes the following decision tables :-

• State machine tables
• Rule conflicts
• Usage monitoring control
13.3 RereadDecisionTables
This rule function is called :-

• when variables are set and committed ( using the variable administrator target or via configuration )
• during the startup / hot-deploy process

It reloads the following tables :-

• Rule conflicts
• Usage monitoring control

It does not reload the state machine tables.
13.4 cleanBreachHistory
This rule function checks for expired breach histories and deletes them.

13.5 recordBreachHistory
This rule function updates the breach history. If a history entry is created, its expiration time is calculated as now + validityTime.
Arguments:
• userId - user ID, MSISDN
• ruleName - Policy or Rule Name
• timeStamp - transaction timestamp in milliseconds (pc.timeStamp)
• validityTime - validity time in milliseconds. ExpirationTime is calculated as: expirationTime = pc.timeStamp + validityTime
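For example ( values illustrative ), the following call from a decision table action records a breach of Breached1Mbs that expires ten minutes after the transaction timestamp :-

recordBreachHistory(p.userId, "Breached1Mbs", pc.timeStamp, 600000)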
13.6 recordBreachHistoryByPeriod
This rule function updates the breach history. If a history entry is created, its expiration time is calculated from a fixed time period.
Arguments:
• userId - user ID, MSISDN
• ruleName - Policy or Rule Name
• timeStamp - transaction timestamp in milliseconds (pc.timeStamp)
• expirationPeriod - Expiration Period, which can be :-
  • number - number of seconds from now (not rounded)
  • HH:mm - time (24 hour format), rounded by days (reset every expirationPeriodLength days at a fixed time of day)
  • DayOfWeek HH:mm - day name (Monday, Sunday etc.) and time, rounded by weeks (reset every expirationPeriodLength weeks at a fixed day of week and time)
  • DayInMonth HH:mm - day in month (number) and time, rounded by months (reset every expirationPeriodLength months at a fixed day of month and time)
• expirationPeriodLength - period length, number of days, weeks or months.

For example, expirationPeriod :-

• "1234" - number of seconds
• "10:30" - HH:mm, time (24 hour format), at 10:30 AM
• "Monday 13:13" - DayOfWeek HH:mm, Monday at 1:13 PM
• "3 00:01" - DayInMonth HH:mm, 3rd day of the month, one minute after midnight.
13.7 cleanNotificationHistory
This rule function checks for expired notification histories and deletes them.

13.8 recordNotificationHistory
This rule function updates the notification history. If a history entry is created, its expiration time is calculated as now + validityTime.
Arguments:
• userId - user ID, MSISDN
• notificationId - Notification Identifier
• timeStamp - transaction timestamp in milliseconds (pc.timeStamp)
• validityTime - validity time in milliseconds. ExpirationTime is calculated as: expirationTime = pc.timeStamp + validityTime
13.9 recordNotificationHistoryByPeriod
This rule function updates the notification history. If a history entry is created, its expiration time is calculated from a fixed time period.
Arguments:
• userId - user ID, MSISDN
• notificationId - Notification Identifier
• timeStamp - transaction timestamp in milliseconds (pc.timeStamp)
• expirationPeriod - Expiration Period, which can be :-
  • number - number of seconds from now (not rounded)
  • HH:mm - time (24 hour format), rounded by days (reset every expirationPeriodLength days at a fixed time of day)
  • DayOfWeek HH:mm - day name (Monday, Sunday etc.) and time, rounded by weeks (reset every expirationPeriodLength weeks at a fixed day of week and time)
  • DayInMonth HH:mm - day in month (number) and time, rounded by months (reset every expirationPeriodLength months at a fixed day of month and time)
• expirationPeriodLength - period length, number of days, weeks or months.

For example, expirationPeriod :-

• "1234" - number of seconds
• "10:30" - HH:mm, time (24 hour format), at 10:30 AM
• "Monday 13:13" - DayOfWeek HH:mm, Monday at 1:13 PM
• "3 00:01" - DayInMonth HH:mm, 3rd day of the month, one minute after midnight.
13.10 appendRule
This rule function appends a Policy Rule to the list of application policy rules on the PolicyControl concept.

13.11 genTrace
This rule function generates a trace message in the form name[result] >
Arguments:
• name - name
• result - value
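For example ( illustrative ), genTrace("AccessControl", "Pass") would produce the trace fragment AccessControl[Pass] >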
13.12 createAuditLog
This rule function creates a single audit log entry. The audit log entry is written when the current transaction commits.

13.13 isInExceptionList
This rule function determines if the userId is on the specified exception list. Returns true if so, false otherwise.
Arguments:
• listName - name of the exception list
• userId - user identifier
Returns:
• true if the user identifier is on the specified exception list, false otherwise
13.14 Log
This rule function logs PolicyControl, UserSession and UserProfile if trace is enabled.

14.1 Rules overview
The policy framework supplies some standard rules to process in-bound work. These are described here.
14.2 CCRI
• Overview

This rule is triggered on a new diameter CCR-I message with a profile already in memory. It will :-

• Optionally validate the message
• Validate that the session doesn't already exist - if it does, an error CCA message is returned
• Record the in-bound endpoint so that a future CCA (or RAR) is sent to the correct destination
• Initialize internal data structures such as PolicyControl
• Determine which user session state machine to use for this session
• Create the session
• Create the associated session state machine which manages the execution of rule groups
• Signal Pass to the session state machine
• Behaviour
The mechanism used to select the user session state machine is :-

1. If the decision table StateMachineSelect exists, it is called. This table can define various ways to select the state machine, including :-
   • Pre-set
   • By message type ( application id set in the diameter header )
   • By subscriber type
2. The value of the UserSessionStateMachine variable
3. A default value of UserSessionStateMachine
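As an illustration ( the column names are made up; the real table must implement the appropriate virtual rule function ), a StateMachineSelect table keyed on the diameter application id might be :-

CONDITION e.applicationId     ACTION selectedStateMachine
16777238                      "UserSessionStateMachine"
*                             "DefaultStateMachine"

( 16777238 is the 3GPP Gx application id. )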
• Example use
This rule should be enabled in the cluster deployment description (cdd file).

The RuleConfigurations table provides the names of the configurable decision tables. An example is shown below.

The ValidateCCRI decision table can be used to validate the message and, if it is found invalid, provide a result code and description. An example table is shown below.
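An illustrative ValidateCCRI table, of the same form as the ValidateCCRU example in the next section, might be :-

CONDITION e.framedIPAddress   ACTION v.decisionResult   ACTION v.resultCode
null                          ""                        "Missing mandatory parameter framedIPAddress"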
14.3 CCRU
• Overview

This rule is triggered on a new diameter CCR-U message. It will :-

• Optionally validate the message
• Validate that the session already exists - if it doesn't, an error CCA message is returned
• Record the in-bound endpoint so that a future CCA (or RAR) is sent to the correct destination
• Update internal data structures such as PolicyControl
• Update the session
• Signal Update to the session state machine

• Example use
This rule should be enabled in the cluster deployment description (cdd file).
The RuleConfigurations table provides the names of the configurable decision tables. An example is shown below.

The ValidateCCRU decision table can be used to validate the message and, if it is found invalid, provide a result code and description. An example table is shown below.
CONDITION e.framedIPAddress   ACTION v.decisionResult   ACTION v.resultCode
null                          ""                        "Missing mandatory parameter framedIPAddress"
14.4 CCRT
• Overview

This rule is triggered on a new diameter CCR-T message. It will :-

• Optionally validate the message
• Validate that the session already exists - if it doesn't, an error CCA message is returned
• Record the in-bound endpoint so that a future CCA (or RAR) is sent to the correct destination
• Update internal data structures such as PolicyControl
• Update the session
• Signal Terminate to the session state machine

• Example use
This rule should be enabled in the cluster deployment description (cdd file).

The ValidateCCRT decision table can be used to validate the message and, if it is found invalid, provide a result code and description. An example table is shown below.
CONDITION e.framedIPAddress   ACTION v.decisionResult   ACTION v.resultCode
null                          ""                        "Missing mandatory parameter framedIPAddress"
14.5 CCAError
This rule is triggered on a CCA error event. It will :-

• Raise an SNMP trap
• Send a CCA error response back to the diameter client
• Create an audit log
• Increment metrics

Applications can choose to implement a replacement as needed - for example, to log a different audit log.
15 Example PCRF / demo
15.1 Example Policy Charging Rules Function
The example Policy Charging Rules Function (PCRF) module is a simple PCRF using the threegppframework with the following features :-

• Family based data model with data loaded via administrator
• Stores IP vs subscriber name, with the IP recorded in the CCR-I message and the subscriber name on login
• Stores family quota and individual quota
• Grants 50% of the quota initially to send an SMS warning for low credit ( via SMPP channel )
• Reduces bandwidth when 100% of quota is consumed
• Block & redirect on first connect ( subscriber name is unknown ) or on command
• EMS based interface allowing management of the family
• Data partitioned by family last name, with messages routed to the primary node
• Audit EMS messages designed for LiveView integration
[Figure: Example PCRF - the example PCRF application is built on the 3GPP Framework and BusinessEvents Extreme (PCRF nodes), using the diameter channel ( Gx to the PCEF ), the SMPP channel ( send SMS ) and EMS interfaces for audit and management.]
15.1.1 Subscriber profiles
The subscriber profiles are pre-configured in the PCRF in a family structure. The head of the family owns the quota for the whole family.
15.1.2 Policy rules

The policy rules are set via the chargingRuleInstall and chargingRuleRemove AVPs :-

• FirstName:value - the subscriber's first name. For display purposes only.
• LastName:value - the subscriber's last name. For display purposes only.
• BillPayerFirstName:value - the bill payer's first name. For display purposes only.
• BillPayerLastName:value - the bill payer's last name. For display purposes only.
• FamilyQuota:value - the total family quota available. For display purposes only.
• FamilyNames:value - list of family names. For display purposes only.
• Block:value - if True, then block & re-direct all traffic to the web server. False otherwise
• Throttle:value - if True, then throttle. If False, then don't throttle
15.1.3 EMS audit details
The PCRF generates an EMS audit message on every CCR message. The event is defined thus :-

• Number of subscribers processed by node ( shows partitioned data )
• For each subscriber, quota consumed over time
• For each family, quota consumed over time
• Purchases
• Notifications
15.1.4 EMS management details
The following events are available :-
• GetLastNamesRequest event
A request for the list of last names configured in the PCRF. This is to support populating of the login page.

• GetProfileRequest event
A request to get the profile of an IP address. This is to determine if the handset has been logged in and, if so, the profile details. Thus the web page can display "Welcome back Bart".

• GetProfileResponse event
The response to GetProfileRequest. This is an array of profiles - in the bill payer case all family members are returned, otherwise a single profile.
Name        Data Type   Description
firstName   String      Subscriber's first name. Empty if not logged in.
lastName    String      Subscriber's last name. Empty if not logged in.
status      long        Subscriber status (0 for family member, 1 for bill payer).
Id   Notification        Message
1    Low quota warning   Hi firstName, please note that you are running low on quota
2    Out of quota        Hi firstName, bandwidth is reduced. Please ask billPayerFirstName for more quota
                         Hi firstName, bandwidth is reduced. Use http://admin to update
                         Hi billPayerFirstName, firstName is out of quota. Use http://admin to update
3    Login               Hi firstName, you have x MB quota remaining
15.2 PCRF Demo
This section describes the PCRF demo - this uses the example PCRF described above along with API-X to expose the management interface to an MVNO and LiveView to display the audit messages.
The idea is to use a demo WiFi network that allows customers to use their own handset and actually use the PCRF and explore how it works.
• Customers use their own handset to "play"
• Real SMS's to the handset
• Hardware Policy Charging Enforcement Function (PCEF) in the room with display
  • Text to display events on the PCEF
  • Colour change for alerts
• PCRF displayed on projector including :-
  • LiveView for real-time graphs
  • Wireshark display of diameter messages
The only requirements on the handset are WiFi and a standard browser.
A short video describing the components and message flow is available here https://docs.google.com/a/tibco.com/file/d/0B9q93PbNZHsOSE0wMWNMZUhzZEU/edit
15.2.1 Components
The following components are required :-
1. Raspberry Pi Policy Charging Enforcement Function (PCEF)
Runs PCEF code to change Linux firewall rules via diameter Gx commands. Also displays some status on the internal LCD.

• Raspberry Pi model B
• RGB LCD display
• 8G Micro SD card with adapter
• Pi Hut WiFi card
• Power supply
• Case
2. Linux server
Vmware image ( deployed locally or in a cloud service ) running Linux.
• 3 BusinessEvents Extreme nodes running the example PCRF. 3 nodes allows us to configure HA. Standard diameter Gx interface, EMS interface to LiveView, EMS interface to API-X and SMPP interface to send real SMS messages
• LiveView to display on-going status of the PCRF
• API-X to expose the PCRF interface to the MVNO ( Mobile Virtual Network Operator )
• MVNO web server to allow customers to interact with the service
• Wireshark to display and decode diameter / SMPP messages for those who don't believe this is real

3. SpotFire server
Vmware image ( deployed locally or in a cloud service ) running Windows.
• Oracle database• API-X CL that takes API-X eMS messages and logs them to the Oracle database• Spotfire server that reads data from the Oracle database
4. Access to the internet
Possibly use a 3G/4G MiFi to guarantee the required access.

5. SMPP service
Use an on-line SMPP service such as smsglobal or clickatell.

A Raspberry Pi is selected since access to the network at the kernel level is needed. Plus, a separate box is a visual clue to the demo.
[Figure: Demo Hardware - customer handsets connect over WiFi to the PCEF ( Raspberry Pi, Linux kernel, LED display ), which carries the internet traffic and talks diameter Gx ( with Wireshark tapping the link ) to 3 x BE-X PCRF nodes on the Linux virtual machine; the Linux virtual machine also hosts LiveView, API-X and the MVNO web server (BW), connected via EMS, Rest and http; the Windows virtual machine hosts the SpotFire server and client, the SpotFire and Data databases, BW adaptors and the API-X CL, connected via SQL, RV and SMPP; the telco and MVNO networks are linked via the internet.]
The main data flows in this deployment are :-

• Handset -> PCEF. WiFi access point with DHCP and DNS provided by the PCEF
• PCRF -> PCEF. Diameter Gx protocol to install policy rules that determine behaviour to the handset
• APIX -> PCRF. EMS requests for status and action originated from the MVNO
• Web server -> APIX. REST requests for status and action originated from the MVNO
• PCRF -> LiveView. EMS live audit messages of PCRF/PCEF interactions to visualise in real time
• APIX -> CL -> SpotFire database. API usage information for analytics.
• APIX -> BW -> LiveView. API live usage information to visualise in real time.
• LiveView -> BW -> SpotFire database. PCRF/PCEF interactions for analytics.
15.2.2 Startup

Start Linux vmware :-

1. Start vmware
2. Select File->Import and import TibcoNow14Linux.ova
3. Start virtual machine
4. PCRF, MVNO web server, API-X and LiveView should get started on boot
5. LiveView can be found at http://linuxserver:10080/app/policy/
Start Windows vmware :-
1. Start vmware
2. Select File->Import and import TibcoNow14Spotfire.ova
3. Start virtual machine
4. Login as Administrator / Admin123
5. Oracle, API-X CL and SpotFire should get started on boot
6. Start Spotfire and login as tibco / tibco123
7. From the Spotfire library, select Host
When the PCEF running on the Raspberry Pi is started, it should detect the running PCRF and auto-connect.
More details are described below in the relevant sections.
15.2.3 Power on
1. The PCRF server is started first
2. The PCRF nodes register their IP address to MDNS (standard with BE-X)
3. PCEF is powered on
4. PCEF uses MDNS to locate the PCRF server
5. PCEF establishes a diameter connection to the PCRF node(s)
6. PCEF starts the WiFi access point
7. PCEF configures the default firewall rules
15.2.4 Bill payer connect use-case
The presenter is the bill payer and connects first via his handset.
1. Connects to the WiFi hotspot
2. PCEF sends diameter CCR-I message to PCRF
3. PCRF determines that this user should be re-directed and sends CCA-I to PCEF
4. PCEF re-directs all http traffic to the MVNO web server
5. The presenter selects that he is the bill payer ( logs in )
6. PCRF calculates quota and sends RAR to PCEF
7. PCEF allows the bill payer free access to the internet, returns RAA message
8. Bill payer can still access the MVNO web page for management. This includes :-
   • Query name, MSISDN and maybe other profile attributes
   • Query total family quota remaining
   • Query individual family member quota remaining ( total adds up to the family quota )
   • Set MSISDN
   • Set low balance notification level
   • Set / re-share remaining family quota between family members
   • Purchase extra family credit
[Figure: Bill payer first connect - the first attempt to access the internet is from an unknown user and is forced to log in; the second attempt to access the internet is allowed.]
15.2.5 Family member connects use-case
This is pretty similar to the bill payer first connect.
15.2.6 Pre-built images

Pre-built images are available to get the demo started quickly.
Linux image ( containing example PCRF, API-X, EMS and MVNO web server ) :-
• Download the TibcoNow14Linux.ova image from ftp://plord:[email protected]/PLord/TibcoNow14Linux.ova ( MFT or FTP SSL might work out better )
• Use vmware to import
• On start, all servers should be started since init.d scripts are enabled
• If needed, username tibco with password tibco for demo use. Root password is tibco.
Windows image ( containing Oracle database, API-X CL and SpotFire ) :-
• Download the TibcoNow14SpotFire.ova image from ftp://plord:[email protected]/PLord/TibcoNow14SpotFire.ova ( MFT or FTP SSL might work out better )
• Use vmware to import
• Administrator password is Admin123
• After first start, edit the Hosts file to ensure the IP address for the bex server ( Linux image above ) is correct. ping bex should be successful
• Re-start
• Start spotfire and login with user tibco and password tibco123
Raspberry pi image ( containing the Raspbian operating system, PCEF code and required packages ) :-
• Download the pi-img.gz image from ftp://plord:[email protected]/PLord/pi-img.gz ( MFT or FTP SSL might work out better )
• Use a SD card reader
• Ensure any auto-mounted filesystems are unmounted
• Use dd to burn the image: gzip -dc tib-test.img.gz | dd bs=16m of=/dev/rdisk2. The location of the image will need to be checked and may well be different
• On first login, run sudo raspi-config and select
  • Allow a re-boot
• If the Linux image is on the local network, then the PCEF should connect on boot. Otherwise PCRFIP will need to be set
15.2.11 PCEF management
When the pi is started, the PCEF IP address is displayed. This can be used to ssh to the device as user id pi, password raspberry. To gain root access use sudo su -.
To check remaining quota for a given IP address use cat /proc/net/xt_quota/192.168.42.10.
To block & redirect for a given IP address, use echo 1 > /proc/net/nf_condition/BLOCK-192.168.42.10. Echo'ing a 0 un-blocks.
To throttle a given IP address, use echo 1 > /proc/net/nf_condition/THROTTLE-192.168.42.10.Echo'ing a 0 un-throttles.
On first run, the configuration file pcef.properties is created - this can contain the following settings :-
• PCRFNAME - MDNS name of the PCRF to connect to. Default is PCRF1
• PCRFIP - if set, IP address of the PCRF server. If not set, MDNS is used
• SSID - WiFi SSID. Default is TIBCO_PCEF
• WPAPASSWORD - if set, WiFi password to be used - must be 8 characters or more. Default is aaaaaaaa
• BASEIP - WiFi IP address range. Default is 192.168.42.0
• THROTTLERATE - Bandwidth to set to throttle (in kb/s). Default is 64.
• CHANNEL - WiFi channel number. Default is 6.
• BEACONMINOR - iBeacon minor value. Default is 0.
• BEACONMAJOR - iBeacon major value. Default is 0.
• ADMINWEBSERVERHOST - Real admin web server host. If not set, PCRF address is used.
• ADMINWEBSERVERPORT - Real admin web server port. Default is 8080.
• MOCKUP - If set to true, LCD display is mocked over X11. Note that ssh -X working is assumed.

DO NOT JUST POWER OFF - there is a very real chance of killing the SD card. To power off :-
• To quit, press the select button on the pi or login and run killall java.
• sudo halt
If you are really, really stuck, press the soldered-in reset button by the HDMI port - this sends a hard reset without powering off. Any errors caused should be cleaned up on reboot.
15.2.12 PCRF management
Service should be started on boot, but :-
• To stop, as root, run service bex stop• To start, as root, run service bex start
To use the command line tools, firstly set up the correct environment using . /opt/bex/env.sh.
• To display sessions run administrator adminport=2000 domainname=FRAMEWORK displayallsessions threegpp
• To send an EMS login message use administrator adminport=2000 domainnode=PCRF1 loginEMS examplepcrf ip=192.168.5.10 firstName=Homer lastName=Simpson notificationMSISDN=.
• Download the Raspbian image from http://www.raspberrypi.org/downloads/ and follow the instructions to burn it to the SD card
• Boot up the raspberry pi and log in with user pi and password raspberry. You might choose to copy ssh keys.
• On first login, run sudo raspi-config and select
  • Expand Filesystem option
  • Set machine name
  • Change password
  • Allow a re-boot when prompted
• Copy pcef-1.0.0-SNAPSHOT-jar-with-dependencies.jar to the home directory
• On the pi run java -jar pcef-1.0.0-SNAPSHOT-jar-with-dependencies.jar
On first run, the pi is upgraded to the latest package versions and any further dependencies are downloaded and installed (so the first run can take some time). Also, the application is configured to auto-run on each boot.
15.2.20 VPN on the pi
Running the vpn client on the pi might be of use when testing at home with the server in the office.Here are the steps :-
$ sudo apt-get install vpnc
Create /etc/vpnc/tibco.conf :-
IPSec gateway vpn.tibco.com
IPSec ID tibco-pa
IPSec secret tibr0cks!
IKE Authmode psk
Xauth username YOURUSERNAME
Xauth password YOURPASSWORD
Local Port 0
To start :-
$ sudo vpnc-connect tibco
VPNC started in background (pid: 3883)...
$ ping 10.128.6.17
PING 10.128.6.17 (10.128.6.17) 56(84) bytes of data.
64 bytes from 10.128.6.17: icmp_req=1 ttl=61 time=325 ms
• cd piutils; mvn install; cd ..
• cd pcef; mvn install; cd ..
• cp pcef/target/pcef-1.0.0-SNAPSHOT-jar-with-dependencies.jar to the raspberry pi
This is also built automatically and deployed to nexus - see http://nexus.kabira.fr:8081/nexus/content/repositories/snapshots/com/tibco/pi/pcef/1.0.0-SNAPSHOT/.
15.2.22 PCRF install / startup from release zip
Release zips are built by the continuous integration process - see http://nexus.kabira.fr:8081/nexus/content/repositories/snapshots/com/tibco/threegpp/framework/examplepcrf/1.1.0-SNAPSHOT/ to download the latest.
No access to internal servers is needed, nor are maven or other development tools required.
To install :-
• Ensure that ssh to localhost works - ssh localhost pwd must return no error. Fix the .ssh configuration as needed ( see the sketch after this list ).
• unzip examplepcrf-*-release.zip
• cd examplepcrf-1.2.0-SNAPSHOT
• mkdir products
• cp somepath/TIB_bex_1.2.0_linux26gl23_x86_64.zip products
• bin/bex_install
• Set environment settings as recommended
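If the ssh check in the first step fails, one common fix ( an illustrative sketch - adapt to local security policy ) is to generate a key and authorise it for the local account :-

$ ssh-keygen -t rsa                                  # accept the defaults
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost pwd                                  # should now print the home directory with no error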
To start :-
• cp config/init.d/ems /etc/init.d
• Edit /etc/init.d/ems as needed to specify the right paths
• # chkconfig --add ems
• # service ems start
• # chkconfig ems on
• cp config/init.d/bex /etc/init.d
• Edit /etc/init.d/bex as needed to specify the right paths
• # chkconfig --add bex
• # service bex start
• # chkconfig bex on
You can use the usual init.d commands to start/stop/restart/status. For example # service bex stop or # service bex status.
See also Install guide and Admin targets.
15.2.23 PCRF build / startup from development environment
It's possible to run the PCRF from a development tree instead of from release zip files. Note, however, that this requires access to the nexus server and a maven installation.
• Build a vmware image ( for easy creation on OS/X or Linux, see http://svn.tibco.com/bexapps/trunk/common/scripts/create_dev_vmware_image.sh )
• Ensure vmware has a public IP address ( bridged mode )
• svn co http://svn.tibco.com/bexapps/trunk/frameworks/threegppframework
• cd threegppframework
15.2.24 API-X / ASG installation from source

Source is in SVN at http://svn.tibco.com/ql/trunk/Products/ASG/Projects/NOW.
ASG_DefaultiImplementation is the BE project containing the small amount of customisation that I did to APIX to support the demo:
• I made the TargetService config available to the mapper so that I could use a generic mapper to create the request payload
• added the payload XSD to the request event and changed the location to match that used in the BEX project
config - the actual gateway config used in the demo
bin - the asg_core.ear file compiled from the ASG_DefaultiImplementation project
15.2.25 LiveView / MVNO installation from source
Source is in SVN at http://svn.tibco.com/ql/trunk/Products/ASG/Projects/NOW/APIX_Client/PolicyManager
Required software and installation directories on the VM:
TIBCO_HOME_001 (/opt/bw/tibco_home)
• ActiveSpaces 2.1.3 HF1
• BW 5.12 HF1
• BW AS Plugin 2.1.0
• BW REST/JSON Plugin 1.1.1 HF1
• RV 8.4.2
• TRA 5.9.0 HF1
• Place lvadapter.jar in TIBCO_HOME/bw/5.12/lib/palettes
• Place jms.jar and tibjms.jar in TIBCO_HOME/bw/5.12/lib/palettes
There is only one built file and that's the lvadapter.jar, which is used for BW to talk directly to LiveView. The source for this is in another SVN repository so this can be considered to be a provided jar for the purposes of this project.
15.2.26 SpotFire installation from source
FIX THIS - To be properly documented
15.2.27 Linux Desktop
• Running wireshark with wireshark -i any -f "port diameter or port smpp" -k will immediately start capturing diameter and smpp traffic
15.2.28 Re-directing the handset to the MVNO web server

When the handset is re-directed to the MVNO web server, the IP address of the handset needs to be maintained. So this is the process :-
1. Handset attempts to connect to, say, http://www.google.com
2. PCEF's firewall rules catch this and re-direct the TCP traffic to the proxy server running on the PCEF
3. The PCEF proxy server adds in a http header, X-Forwarded-For, and proxies the request to the BW server running on the Linux image ( port 8080 )

Note that the DNS server on the PCEF ( used by the handset ) resolves the MVNO web server name to the IP address of the BW server.
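A minimal sketch of the kind of rule behind step 2, assuming the PCEF uses netfilter NAT with the xtables-addons condition match ( the same mechanism behind the /proc/net/nf_condition files above ); the proxy port 3128 and the exact rule layout are illustrative, not the actual PCEF rules :-

# re-direct HTTP from a handset to the local proxy while the
# BLOCK condition flag ( /proc/net/nf_condition ) is set to 1
iptables -t nat -A PREROUTING -s 192.168.42.10 -p tcp --dport 80 \
         -m condition --condition "BLOCK-192.168.42.10" \
         -j REDIRECT --to-ports 3128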
15.2.29 Enhancement for simple congestion
To do congestion management, the following is needed :-
• PCEF reports the handset location to the PCRF ( eg cellid )
• PCRF maintains a list of locations against congestion status
• External system feeds PCRF with updates to congested areas ( eg cellid plus Red/Amber/Green )
• On receipt of updated congestion information, the PCRF needs to :-
  • Find all affected sessions - ie sessions whose location matches the new congestion information, then for each session :-
    • Re-calculate policy rules - for example if a normal user is in a red congestion zone, reduce bandwidth
    • Send a RAR message to each session with updated rules
  • Potentially send an SMS message to the affected users
So the following changes are required on the PCRF :-
• New EMS message to notify PCRF of change of congestion area
• Update PCEF to send fake location to PCRF in CCR messages
• Update PCRF to parse and store location of handset
• Update PCRF policy rules to calculate reduced bandwidth if in congested area - allow VIPs ( ie bill payers ) to continue to use full bandwidth
• Add new PCRF behaviour to act on new congestion information - ie find affected sessions, re-evaluate the rules and send RAR messages

Obviously this requires design and planning. However, potentially, there is a short-cut. Maybe the external congestion system simply sends a request directly to throttle a single user. In that case the following changes are required on the PCRF :-
• New EMS message to throttle a single user
• Update PCRF to store this throttle request
• Update PCRF policy rules to merge the throttle request - eg it would override other settings in most cases, except for the bill payer
• Add new PCRF behaviour to act on the throttle request - ie store the rule, re-evaluate the policy rules and send a RAR message

The ThrottleRequest EMS message might be :-
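( a minimal illustrative sketch - the field names are assumptions, not the framework's actual message definition )

<!-- illustrative only; fields would be fixed at design time -->
<ThrottleRequest>
    <userId>447700900123</userId>               <!-- subscriber to throttle -->
    <throttleRate>64</throttleRate>             <!-- target bandwidth in kb/s, cf. THROTTLERATE -->
    <durationSeconds>3600</durationSeconds>     <!-- how long the throttle applies -->
</ThrottleRequest>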
• Improve ease of use of sliders on handset
• Fix looping on old android phones
Not within scope of current demo, but nice to haves in the future.
• Re-direct https ( as well as the existing http re-direct )
• Optionally use second WiFi to connect to internet ( will require mechanism to select SSID and enter any password )
• Block specific service - eg you tube
• Add to Spotfire the audit logs and correlate with API-X data
• Demo HA :-
  • Show partitioning across the 3 PCRF nodes
  • Kill a node
  • Show sessions still maintained
• Rules update ( with no down time )
• Deploy to the cloud ( no need to copy vmware images around )
• Use a much bigger LED display to show how messages are passed around in real-time ( or maybe a smaller LCD )
15.2.31 Uploading to Amazon EC2
There is a script available in svn that should upload the ova file to amazon ready to run - ova_to_ec2.sh. So the steps are :-
• Ensure you are subscribed to both EC2 and S3 on Amazon web services
• Use vmware menu File->Export to OVF and select OVA file. Alternatively use the ovftool - on OS/X this might be "/Applications/VMware Fusion.app/Contents/Library/VMware OVFTool/ovftool" --compress=9 "CentOS6.5.vmx" "CentOS6.5.ova"
• Install the EC2 API tools from Developer Tools. On OS/X I followed Install EC2 on OS/X
• Run ova_to_ec2.sh my.ova
• In Security Group suggest creating groups such as :-
  • Home Network, All Traffic allowed inbound from home IP address
  • Maidenhead Network, All Traffic allowed inbound from 81.144.243.226/32
  • Between Images, All Traffic allowed inbound from instances via private IP addresses
• In Elastic IPs create an elastic address and assign it to the instance
• In Instances select the instance and right click to configure ( or, on first run, you should be prompted for these settings ) :-
  • Use change security groups to assign the groups created above
  • Use hardware virtualization
  • Use instance type m3.xlarge
  • Use start from the pop-up menu to start the instance
• In Instances, suggest naming it to avoid later confusion
• For a linux instance you should be able to ssh to the public IP address. Or you can install vnc something like :-
  • # yum install tigervnc-server
  • $ vncpasswd
  • add the following to /etc/sysconfig/vncservers
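    ( illustrative content - the user name tibco and the geometry are assumptions )

    VNCSERVERS="1:tibco"                      # display :1 for this user
    VNCSERVERARGS[1]="-geometry 1280x1024"    # optional per-display arguments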
  • # service vncserver restart
  • # /sbin/chkconfig vncserver on
• On OS/X you can simply $ open "vnc://server:5901"
• For a windows instance you can select the instance and press connect. This downloads a remote desktop file that microsoft remote desktop can use to connect ( if you have microsoft remote desktop installed you should be able to just double click on the rdp file )
If outbound port numbers are blocked ( 3868 and 8080 to the linux EC2 instance ) then it's possible to run diameter and the re-directs over ssh :-
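For example, local port forwards of this shape would carry both ( the user name and host are placeholders ) :-

$ ssh -L 3868:localhost:3868 -L 8080:localhost:8080 ec2-user@<elastic-ip>

The diameter client and the re-directed http traffic can then be pointed at localhost instead of the EC2 instance.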
Note that many WiFi cards will not work - some require a lot of power, others have custom drivers to build. http://elinux.org/RPi_USB_Wi-Fi_Adapters is helpful.
The RTL8188EU driver plus updated hostapd can be found at https://github.com/lwfinger/rtl8188eu.git ( included in latest image ).
Name                  USB id     ChipSet  Kernel  Hostapd  Notes
Pi Hut                148f:5370  RT5370   rt2x00  nl80211  Works out of the box.
LB-Link BL-LW0505R1   148f:5370  RT5370   rt2x00  nl80211  Works out of the box.
16 Creating a Release.......................................................................................................................................
16.1 Application Release Process
The policy framework includes a mechanism to make releases. The final release archive includes :-
• Selected application binaries
• Dependencies
• Configurations
• Scripts
• Site documentation in html and pdf formats
Binaries are downloaded from the maven nexus server.
16.1.1 Maven release mechanism
The maven toolset can generate a release archive (zip file) and deploy it to the nexus server. The following steps are required in the project configurations :-
1. Ensure dependencies are complete - this should include the com.tibco.release common-files as well as documentation artifacts. For example, pom.xml might include :-
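A sketch of the kind of dependency block involved ( the group/artifact ids are taken from the THIRD-PARTY.txt example below, but the exact block is illustrative ) :-

<dependencies>
  <!-- common release scripts and assembly ( bexappspom ) -->
  <dependency>
    <groupId>com.tibco.bexappspom</groupId>
    <artifactId>common-files</artifactId>
    <version>2.2.0-SNAPSHOT</version>
  </dependency>
  <!-- the 3GPP framework itself -->
  <dependency>
    <groupId>com.tibco.threegpp.framework</groupId>
    <artifactId>threegppframework</artifactId>
    <version>1.1.0-SNAPSHOT</version>
  </dependency>
</dependencies>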
This configuration triggers generating a release archive when the maven site target is invoked. When run as part of a regression run ( for example under jenkins ) the archive will be deployed to the nexus server, so the generated archive can be downloaded from http://nexus.kabira.fr:8081/nexus/content/repositories/releases/.
The archive contents depend on the scripts/assembly.xml file ( from the bexappspom component ). This assembly causes the following to be generated in the release archive :-
1. jars directory - contains all the java dependencies required to run the application - these can be from TIBCO or 3rd-party open source.
2. config directory - contains any project specific configuration files
3. bin directory - contains useful scripts used to start, stop and administer the application
4. projlib directory - contains any project libraries required to run the application
5. ear directory - contains the main application in Enterprise Archive format
6. doc/javadoc directory - contains javadoc documentation
7. doc/site directory - contains site (html + pdf) documentation
8. web directory - contains any additions to the AST webserver
9. THIRD-PARTY.txt file - describes the licences for each third party dependency
An example THIRD-PARTY.txt is shown below :-
Lists of 20 third-party dependencies.
 (TIBCO DevZone) channel (com.kabira:channel:1.6.3 - http://devzone.tibco.com/)
 (The OpenLDAP Public License) LDAP Class Libraries for Java (JLDAP) (com.novell.ldap:jldap:4.3 - http://www.openldap.org/jldap/)
 (TIBCO Project) TIBCO ANSI color layout for log4j (com.tibco:ansicolorlayout:1.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO DevZone) ActiveSpaces Transactions (com.tibco:ast:2.3.3 - http://devzone.tibco.com/)
 (TIBCO Project) TIBCO Diameter Channel for Active Spaces Transactions (com.tibco:diameterchannel:2.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO File Reader Channel for Active Spaces Transactions (com.tibco:filereaderchannel:2.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO High Availability Services for Active Spaces Transactions (com.tibco:haservice:1.1.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO Log Services (com.tibco:logservices:1.1.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO Metrics for Active Spaces Transactions (com.tibco:metrics:2.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO Openldap Channel for Active Spaces Transactions (com.tibco:openldapchannel:1.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO SNMP Trap appender for log4j (com.tibco:snmpappender:2.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) TIBCO shared thread pool resource for Active Spaces Transactions (com.tibco:threadpool:1.2.0-SNAPSHOT - http://nexus.kabira.fr/)
 (TIBCO Project) Policy POM Common Files (com.tibco.bexappspom:common-files:2.2.0-SNAPSHOT - http://nexus.kabira.fr/bexappspom/common-files)
 (TIBCO Project) TIBCO 3GPP Framework Model (com.tibco.threegpp.framework:threegppframework:1.1.0-SNAPSHOT - http://nexus.kabira.fr/policy/threegpp)
 (Common Public License Version 1.0) JUnit (junit:junit:4.11 - http://junit.org)
 (Apache License, Version 2.0) Apache Extras Companion™ for Apache log4j™. (log4j:apache-log4j-extras:1.1 - http://logging.apache.org:80/log4j/companions/extras)
 (The Apache Software License, Version 2.0) Apache Log4j (log4j:log4j:1.2.17 - http://logging.apache.org/log4j/1.2/)
 (New BSD License) Hamcrest Core (org.hamcrest:hamcrest-core:1.3 - https://github.com/hamcrest/JavaHamcrest/hamcrest-core)
 (The Apache Software License, Version 2.0) SNMP4J (org.snmp4j:snmp4j:1.10.1 - http://www.snmp4j.org)
 (The Apache Software License, Version 2.0) SNMP4J Agent (org.snmp4j:snmp4j-agent:1.3.1 - http://www.snmp4j.org)
16.1.2 Manual mechanism to produce a release
An alternative way to make a release is a manual mechanism. Note that this is now deprecated in favour of the maven mechanism above.
To make a release, run the bex_generate_release script. This script will work out the dependencies of the selected package, download them and generate a zip file. An example run is shown below.
$ ./bex_generate_release -s com.tibco.policyorchestration:policy3uk:1.0.0-SNAPSHOT
Downloaded 3uk-deploy-1.0.0-SNAPSHOT.jar
Downloaded 3uk-deploy-1.0.0-SNAPSHOT-config.jar
Downloaded 3uk-deploy-1.0.0-SNAPSHOT-site.jar
Downloaded 3uk-deploy-1.0.0-SNAPSHOT.pom
Downloaded 3uk-setup-1.0.0-SNAPSHOT-config.jar
Downloaded 3uk-setup-1.0.0-SNAPSHOT-site.jar
Downloaded 3uk-setup-1.0.0-SNAPSHOT.pom
....
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-install-plugin:2.3.1:install-file (default-cli) @ standalone-pom ---
[INFO] Installing /home/ptolani/src/policyorchestration/trunk/scripts/tmp/pom/policypom/pom.xml to /home/ptolani/src/policyorchestration/trunk/scripts/repository/com/tibco/policypom/2.0.0-SNAPSHOT/policypom-2.0.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.878s
[INFO] Finished at:
[INFO] Final Memory: 5M/234M
[INFO] ------------------------------------------------------------------------
17.1 Application Installation Process
3GPP framework applications are released in a zip archive containing the application, TIBCO dependencies and utility scripts. The following outlines the steps needed to install and start the application.
Please also see the BE-X administration guides.
17.1.1 Extracting the archive
The supplied zip archive should be extracted into an empty directory.
TIBCO product installers - BusinessEvents Extreme and Rendezvous Enterprise Daemon - should then be copied into the products directory.
The supplied script, bex_install, should be run to install the products and extract the shipped jars.
If the bex_install step fails for any reason, a quick way to debug and/or report the issue is to run this ksh script as ksh -x ./bin/bex_install.
$ export JAVA_HOME=<location of JDK 1.7>
$ export PATH=$JAVA_HOME/bin:$PATH
$ bin/bex_install
Installing base products
Extracting app-1.0.0-SNAPSHOT-sources.jar...
Assuming the bash shell is used, the following can be appended to the end of .bashrc to set the correct environment :-
PATH=/home/tibco/app/./mvn/apache-maven-3.0.5/bin:/home/tibco/app/./bin:/home/tibco/app/./tibco_home/tibcojre64/1.7.0/bin:/home/tibco/app/./tibco_home/be-x/1.1.0/distrib/kabira/kts/scripts:/home/tibco/app/./tibco_home/be-x/1.1.0/distrib/kabira/bin:/home/tibco/app/./tibco_home/tibrv/8.4/bin:$PATH
TIBCO_HOME=/home/tibco/app/./tibco_home
SW_HOME=/home/tibco/app/./tibco_home/be-x/1.1.0
JAVA_HOME=/home/tibco/app/./tibco_home/tibcojre64/1.7.0
export PATH TIBCO_HOME SW_HOME JAVA_HOME

S99mdns should be copied to /etc/init.d and enabled
For the .bashrc suggestions at the end of the above run to take effect, you need to either exit and restart the shell or source the .bashrc file in the current shell. Also note that, on certain linux distributions, you might need to soft link .bashrc to .bash_profile for it to be automatically sourced on login.
17.1.3 mdnsd
The Multicast DNS daemon, mdnsd, must be running on all hosts in the cluster. To enable this, the supplied S99mdns script can be copied into /etc/init.d and enabled.
The file $TIBCO_HOME/mdns.env can be used to override variables used in the S99mdns script.
17.1.4 Replicating installation
If you need to install the release on multiple machines, now is a good time to zip up the installation ( ie the app folder ) so the above steps do not have to be repeated on each machine.
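For example ( the directory name app and the host name host2 are illustrative ) :-

$ tar czf app.tgz app            # archive the completed installation
$ scp app.tgz host2:             # copy to the next machine
$ ssh host2 tar xzf app.tgz      # extract in place of repeating the install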
17.1.5 Starting the domain manager and the nodes
The domain manager and the nodes can be started using the bex_startdomainmanager and bex_start scripts. See the example below.
$ ./bin/bex_startdomainmanager
Installing application kabira/kdm on node domainmanager...
Node domainmanager is configured to use PRODUCTION executables
Node domainmanager shared memory size is 512Mb
Node domainmanager path: /home/tibco/app/runtime/nodes/domainmanager
Node domainmanager host: corentin
System Coordinator Host: All Interfaces
System Coordinator Port: 8000
Installing components for application kabira/kdm
Administration Port: 8000
Service Name: "None"
Node Name: "domainmanager"
Components started
Loading configurations
Auditing security configuration
Host: localhost
Administration Port: 8000
Service Name: "None"
Node Name: "domainmanager"
Node APP1 is configured to use PRODUCTION executables
Node APP1 shared memory size is 512Mb
Node APP1 path: /home/tibco/app/runtime/nodes/APP1
Node APP1 host: corentin
System Coordinator Host: All Interfaces
System Coordinator Port: 7010
Installing components for application tibco/be-x
Administration Port: 7010
Service Name: "None"
Node Name: "APP1"
INFO: Node [APP1] version: [TIBCO BusinessEvents(R) Extreme 1.1.0 (build 130831.1)]
INFO: Starting JVM [/home/tibco/app/ear/app-1.0.0-SNAPSHOT.ear] ...
[APP1] INFO: JMX Management Service started at:
[APP1]     corentin:2099
[APP1]     127.0.1.1:2099
[APP1]     service:jmx:rmi:///jndi/rmi://corentin:2099/jmxrmi
[APP1] Using property file: be-engine.tra
[APP1] [ main] DEBUG root -
[APP1] ****************************************************************************
[APP1] TIBCO BusinessEvents 5.1.1.086 (2013-08-01)
[APP1] Using arguments :-c deploy/resources/deploy-1367644210306/app.cdd deploy/resources/deploy-1367644210306/app-1.0.0-SNAPSHOT.ear
[APP1] Copyright 2004-2013 TIBCO Software Inc. All rights reserved.
[APP1]
[APP1] ****************************************************************************
[APP1]
[APP1] [ main] INFO container.standalone -
[APP1] ****************************************************************************
[APP1] TIBCO BusinessEvents 5.1.1.086 (2013-08-01)
[APP1] Using arguments :-c deploy/resources/deploy-1367644210306/app.cdd deploy/resources/deploy-1367644210306/app-1.0.0-SNAPSHOT.ear
[APP1] Copyright 2004-2013 TIBCO Software Inc. All rights reserved.
[APP1]
[APP1] ****************************************************************************
....
[A] [ main] INFO runtime.service - Registering all BE-Engine level Group MBeans...
[A] [ main] INFO runtime.service - All BE-Engine level Group MBeans SUCCESSFULLY registered
[A] [ main] INFO runtime.session - BE Engine corentin started
Note that additional deployment options can be supplied at the end of the command.
if [ -f ${BASE}/config/log4j.properties ]
then
    EXTRAS="${EXTRAS} -Dlog4j.configuration=file:${BASE}/config/log4j.properties"
fi
17.1.7 Loading application configuration
Once the nodes have started, the application configuration can be loaded using the bex_loadconfig or bex_loadmulticonfig script. This will load and activate a configuration file through the domain manager on all the nodes.
bex_loadconfig will load the same configuration on each node.
bex_loadmulticonfig will load a different configuration on each node.
The app.kcs files loaded will need to match the local machine topology.
$ ./bin/bex_loadconfig config/nodeconfig.kcs
[APP1] type = nodeconf
[APP1] name = nodeconfig
[APP1] version = 2.0
$ ./bin/bex_loadmulticonfig config/app
Loading config/app.APP1.kcs on node APP1
[APP1] type = com.tibco.app.config
[APP1] name = app
[APP1] version = 1.1
$ ./bin/bex_loadconfig config/statisticsflusher.kcs
[APP1] type = scheduledcommand
[APP1] name = statisticsflusher
[APP1] version = 1.1
17.1.8 Displaying the status
The script bex_status can be used to view the status of the nodes.
$ ./bin/bex_status
Name = APP1
Host = localhost
Port = 7010
Add Method = Configured
Added =
Groups = Manchester
Application = kabira/fluency
Description = Application Node
Node Agent Address = TCP:localhost:7011
Connection State = Connected
Install Time =
Last Start Time =
Service Name = ActiveSpaces Transactions Adminstrator
Web Server URL = http://corentin:7001
Web Server State = Active
Service Discovery State = Disabled
Service Discovery Name = ActiveSpaces Transactions Adminstrator
17.1.9 ASTA graphical administration
To start the ASTA graphical console use the URL supplied by bex_status :-
$ firefox http://corentin:7001
Note that the username and password are set in the config/ configuration file and are applied during domain manager startup.
Shipped html documents are also available using the base URL plus /site plus the component name. For example :-
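$ firefox http://corentin:7001/site/threegppframework

( the component name threegppframework is illustrative - use the name of the shipped component )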
18.1 Administration Targets
This section describes the administration targets available from the command line ( using the administrator command ) and in the AST GUI.
18.2 Target threegpp
The following commands are available with the target threegpp. For example :-
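( an illustrative invocation, following the adminport and domainname used in the earlier examples ) :-

$ administrator adminport=2000 domainname=FRAMEWORK displayprofilescount threegpp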
• Display breach history count

displaybreachhistorycount threegpp
    breach history objects count
• Display expired breach history count
displayexpiredbreachhistorycount threegpp
    [ userId=<String> ] user identifier
    expired breach history objects count
• Display breach history
displaybreachhistory threegpp
    userId=<String> user identifier
    [ ruleName=<String> ] rule name
    [ max=<Integer> ] maximum number of breach history to display. If not specified, all relevant breach history are displayed
    display breach history objects.
• Display all breach history
displayallbreachhistory threegpp
    [ max=<Integer> ] maximum number of breach history to display. If not specified, all breach history are displayed
    display all breach history objects. TEST PURPOSES ONLY
• Delete breach history
deletebreachhistory threegpp
    userId=<String> user identifier
    [ ruleName=<String> ] rule name
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete breach history objects
• Delete expired breach history
deleteexpiredbreachhistory threegpp
    [ userId=<String> ] user identifier
    [ ruleName=<String> ] rule name
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete expired breach history objects
• Delete all breach history
deleteallbreachhistory threegpp
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete all breach history objects. TEST PURPOSES ONLY
• Log levels
log4j threegpp
    name=<String> Logger name.
    level=<String> Logger level - ALL, DEBUG, ERROR, FATAL, INFO, TRACE or WARN
    Set log4j level for a given component.
• Display profile
displayprofile threegpp
    userId=<String> user identifier
    display user profile
• Display all profiles
displayallprofiles threegpp
    [ max=<Integer> ] maximum number of user profiles to display. If not specified, all profiles are displayed
    display user profiles. TEST PURPOSES ONLY
• Display profiles count
displayprofilescount threegpp
    display user profiles count
• Delete profile
deleteprofile threegpp
    userId=<String> user identifier
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete user profile
• Delete all profiles
deleteallprofiles threegpp
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete all user profiles. TEST PURPOSES ONLY
• Display session
displaysession threegpp
    sessionId=<String> session identifier
    display user session
• Display all sessions
displayallsessions threegpp
    [ max=<Integer> ] maximum number of user sessions to display. If not specified, all sessions are displayed

• Display sessions count

displaysessionscount threegpp
    display user sessions count
• Delete session
deletesession threegpp
    sessionId=<String> session identifier
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete user session
• Delete all sessions
deleteallsessions threegpp
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete all user sessions. TEST PURPOSES ONLY
• Display usage buckets
displayusagebuckets threegpp
    userId=<String> user identifier
    [ bucketName=<String> ] bucket name
    [ monitoringKey=<String> ] monitoring Key
    [ max=<Integer> ] maximum number of usage buckets to display. If not specified, all usage buckets that match the parameters are displayed
    display bucket(s) for user.
• Display all usage buckets
displayallusagebuckets threegpp
    [ max=<Integer> ] maximum number of usage buckets to display. If not specified, all usage buckets for all users are displayed
    display usage bucket(s) for all users. TEST PURPOSES ONLY
• Delete usage buckets

deleteusagebuckets threegpp
    userId=<String> user identifier
    [ bucketName=<String> ] bucket name
    [ monitoringKey=<String> ] monitoring Key
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete usage bucket(s). Buckets must be specified at least by userId. They can also be specified by: userId + bucketName, userId + monitoringKey, userId + bucketName + monitoringKey

• Delete all usage buckets

deleteallusagebuckets threegpp
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete usage buckets for all users. TEST PURPOSES ONLY
• Display policy history
displaypolicyhistory threegpp
    userId=<String> user identifier
    [ max=<Integer> ] maximum number of UserPolicyHistory objects to display. If not specified all are displayed
    display policy history for user.
• Display all policy history
displayallpolicyhistory threegpp
    [ max=<Integer> ] maximum number of UserPolicyHistory objects to display. If not specified all are displayed
    display policy history for all users. TEST PURPOSES ONLY
• Display policy history count
displaypolicyhistorycount threegpp
    display policy history count
• Delete policy history
deletepolicyhistory threegpp
    userId=<String> user identifier
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete user policy history
• Delete all policy history
deleteallpolicyhistory threegpp
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete policy history for all users. TEST PURPOSES ONLY
• Display diameter sessions
displaydiametersessions threegpp
    endpointName=<String> endpoint name
    display diameter session names

• Enable latency metrics

enablelatencymetrics threegpp
    [ timeout=<Long, default = 10000> ] timeout (ms) for waiting for responses for pairing with requests
    enable latency metrics. When latency metrics are enabled, internal hashmaps are created for pairing requests with responses

• Display notification history count

displaynotificationhistorycount threegpp
    notification history objects count
• Display expired notification history count
displayexpirednotificationhistorycount threegpp
    [ userId=<String> ] user identifier
    expired notification history objects count
• Display notification history
displaynotificationhistory threegpp
    userId=<String> user identifier
    [ notificationId=<Integer> ] notification identifier
    [ max=<Integer> ] maximum number of notification history to display. If not specified, all relevant notification history are displayed
    display notification history objects
• Display all notification history
displayallnotificationhistory threegpp
    [ max=<Integer> ] maximum number of notification history to display. If not specified, all notification history are displayed
    display all notification history objects. TEST PURPOSES ONLY
• Delete notification history
deletenotificationhistory threegpp
    userId=<String> user identifier
    [ notificationId=<Integer> ] notification identifier
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete notification history objects
• Delete expired notification history
deleteexpirednotificationhistory threegpp
    [ userId=<String> ] user identifier
    [ notificationId=<Integer> ] notification identifier
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete expired notification history objects
• Delete all notification history
deleteallnotificationhistory threegpp
    [ reportonly=<Boolean, default = false> ] Do not delete, just report what would be deleted
    delete all notification history objects. TEST PURPOSES ONLY
19.1 Upgrade
TIBCO BusinessEvents Extreme and the TIBCO 3GPP framework provide the following mechanisms to upgrade an installed application :-
• Solution Update ( application changes on same BusinessEvents Extreme version )
• BusinessEvents Extreme update ( compatible releases only )
• Parallel running during upgrade
• Cluster shutdown and restart
• Add a new node to the cluster
The exact upgrade process depends on the following factors :-
• Capabilities of any clients to failover and re-connect
• High Availability Topologies
• BusinessEvents Extreme runtime compatibility
• Acceptable down-time requirements
Hence the detailed steps are project specific and need to be documented in the project's administration guide. Here we describe the general process and principles.
See Conceptual model in the Architects Guide for definitions.
19.1.1 High Availability Topologies
The exact details of the upgrade process depend on the high availability topology - this in turn is determined by the project specific requirements including :-
• Capabilities of any client system(s) - notably routing and failover
• Data replication requirement details
• Service availability requirement details
• Data and service partitioning requirements
Some of the high availability topologies are shown below.
19.1.1.1 Rolling partitions
In this case, partitions are assigned in a rolling way :-
• Can work with any number of nodes
• Additional option to have a second replica in a remote data center
• Clients can connect directly to the optimum node or connect to any and route internally
• Extra dependency - for example if node A1 should fail, then there is possible impact to A2 ( as the replica for P1 ) and A4 ( as the primary for P4 )
• Mix of BusinessEvents Extreme versions limited
19.1.1.2 Paired partitions

• Requires even number of nodes in one data center
• No data sharing between pairs
• Additional option to have a second replica in a remote data center ( otherwise not tolerant to data center failure ), although this can complicate the upgrade process
• Less dependencies - for example if node A1 should fail, then only A2 is impacted
• Pairs can be upgraded easily
19.1.1.3 Remote replica
In this case, partitions are assigned in pairs across data centers :-
• Any number of nodes can be used
• Data is shared within nodes in the same cluster, but not between clusters
• Tolerant to a data center failure
• Less dependencies - for example if node A1 should fail, then only B1 is impacted
• Pairs can be upgraded easily
In all cases, BusinessEvents Extreme can be updated via the Cluster shutdown and restart process - however, sometimes it's possible to upgrade nodes in a running cluster one-by-one in the same way as Cluster Update above.
19.1.2 BusinessEvents Extreme runtime compatibility

The high availability protocol between nodes contains a Cyclic Redundancy Check (CRC) - nodes with different CRCs cannot exchange high availability data. To view the CRC of the currently running system, the administrator display cluster type=local command can be run. The CRC can change on major and minor releases.
The following commands can be used to display the current CRC :-
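For example ( the adminport value is illustrative and follows the earlier examples ) :-

$ administrator adminport=2000 display cluster type=local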
BusinessEvents Extreme releases with the same CRC can be used in the same cluster.
19.1.3 Solution Update
This mechanism allows for the application in a cluster to be updated one node at a time without overall loss of service or data. BusinessEvents Extreme allows for nodes within the cluster to be at different application versions.
The approach can be used in the following cases :-
• Change in decision table
• New decision table
• Change in rule function
• Updated rule function signature
• Change in catalog function ( ie java code )
• Updated catalog function signature
• Change in concept
• New concept
• Change in event ( assuming haservices is used to route events between nodes )
• New event
See also the Cluster Update section of BE-X documentation.
For the purposes of describing the process, the initial deployment is shown below :-
These nodes are upgraded one at a time - so the cluster as a whole remains available to accept work. The process is :-
• Take the currently running enterprise archive (ear) file and the new enterprise archive and run them through the offline upgrade tool. This analyzes the changes, produces a report on the changes and generates an upgrade file. The report can be checked for any errors, warnings or unexpected application changes.
• The configuration is deactivated. This has the effect of stopping any channels and the node leaves the HA cluster. Any clients will be disconnected but they can re-connect to a secondary node.
Starting node A ...
Waiting for application to start
Components started
Loading configurations
Auditing security configuration
Host: localhost
Administration Port: 2010
Service Name: "A"
Node Name: "A"
Completed upgrade.
Summary:
    Node: A
    Node state when command started: Stopped
    Action: upgrade
    Execute: true
    Started: Wed 5 Mar 10:05:55 GMT 2014
    Finished: Wed 5 Mar 10:06:01 GMT 2014
    Upgrade Report: /tmp/upgrade18115/upgrade
    Original Deployment Directories: /home/pcrf/runtime/nodes/A/../deploy
    New Deployment Directories: /home/pcrf/runtime/A/../deploy
Classes upgraded:
    Class name: pcrf.managed.concepts.userdata.UserProfile
    Current version: 1393526732980
    New version: 1393576773514
19.1.4 BusinessEvents Extreme update

This mechanism allows for nodes in a cluster to be updated to a new BusinessEvents Extreme release one at a time without overall loss of service or data.
The approach works in the following cases :-
• Upgrade to a new compatible release of BusinessEvents Extreme

The process is :-
1. Install the new BusinessEvents Extreme product in a new location
2. Set environment variables to reference the old product install
3. Deactivate configuration on the first node
4. Run bex_stop
5. Set environment variables to reference the new product install
6. Run bex_start
7. Run bex_runapp referencing the existing and new ear file
8. Load and activate configuration
9. Repeat for remaining nodes
19.1.5 Parallel running during upgrade
This mechanism allows for nodes in a cluster to be updated to a new BusinessEvents Extreme releasewith no loss of service or data.
The approach can be used in the following cases :-
• Upgrade to a new incompatible release of BusinessEvents Extreme• Operating system updates ( assuming additional hardware is available )
The following mechanisms are available to assist in parallel running - projects may use one or more to achieve the upgrade.
19.1.5.1 Client re-direct
Sometimes the client is able to re-direct new sessions to a new cluster whilst keeping old sessions on the old cluster. This is sometimes called session stickiness.
The process is :-
1. Start a new cluster with the new BusinessEvents Extreme product. This can be on dedicated hardware or shared hardware with different TCP port numbers ( although extra resources may be needed such as additional RAM )
2. Client is configured to direct new session messages to the new cluster and existing session messages to the old cluster ( with the possibility of the application terminating old sessions at the appropriate point )
3. After a period of time, the old cluster will contain no sessions
4. Old cluster is terminated
[ Figure: Parallel Running - the client sends new sessions over a TCP connection to the new cluster ( Application Version 1 / Framework Version 1 on BE-X Platform Version 2, nodes A and B, each with its own shared memory ) and existing sessions to the old cluster ( Application Version 1 / Framework Version 1 on BE-X Platform Version 1, nodes A and B ) ]
19.1.5.2 Re-direct new sessions
FIX THIS - to be implemented
If the client cannot forward requests, then the framework is able to do this internally.
The process is :-
1. Start a new cluster with the new BusinessEvents Extreme product. This can be on dedicated hardware or shared hardware with different TCP port numbers.
2. Enable session forwarding on the old cluster
3. Sessions which are currently being processed on the old cluster continue to be processed on the old cluster until they terminate
4. New sessions are forwarded to the new cluster for processing - these sessions will continue to be processed on the new cluster until the session terminates ( with the possibility of the application terminating old sessions at the appropriate point )
5. After a period of time, the old cluster will contain no sessions
6. Old cluster is terminated and the client can re-connect to the new cluster.
[ Figure: Re-direct new sessions - the client sends all messages to the old cluster ( Application Version 1 / Framework Version 1 on BE-X Platform Version 1, nodes A and B, each with its own shared memory ), which forwards new sessions to the new cluster ( Application Version 1 / Framework Version 1 on BE-X Platform Version 2, nodes A and B ) ]
19.1.5.3 Non-HA replication of non-session data
FIX THIS - to be implemented
Some projects maintain key data outside of the sessions - in this case this data must be replicated to the new cluster.
The process is :-
1. Enable data export mode on the old cluster - this will write out all existing non-session data as well as writing out on-going changed data. Note that only committed data is exported.
2. Enable data import mode on the new cluster - this will read the non-session data and commit it to shared memory.
3. When the old cluster is no longer processing sessions, the export can be terminated
[ Figure: Non-HA replication of non-session data - the client sends all messages to the old cluster ( Application Version 1 / Framework Version 1 on BE-X Platform Version 1, nodes A and B ); a non-HA copy of the non-session data flows from the old cluster to the new cluster ( Application Version 1 / Framework Version 1 on BE-X Platform Version 2, nodes A and B ); each node has its own shared memory ]
19.1.6 Cluster shutdown and restart
If the client can perform stand-in processing during the short upgrade period, it may be simpler to shut down the cluster, upgrade and re-start.
The process is :-
1. Install the new BusinessEvents Extreme product in a new location
2. Set environment variables to reference the old product install
3. Run bex_stop on all nodes in the cluster
4. Set environment variables to reference the new product install
5. Run bex_start
6. Run bex_runapp referencing the new ear file
7. Load and activate configuration
19.1.7 Add a new node to the cluster
It is possible to add a new node to the cluster and adjust the HA configuration to absorb the new node. This process is well documented in the administration guide Replacing one node with another.
20 Web Studio.......................................................................................................................................
20.1 Web Studio
The rules may be edited using either TIBCO Studio installed on a desktop or TIBCO Web Studio running on a server. With Web Studio, rules can be edited using a standard browser.
The bex_install script includes extracting source archives into src/bex so that both Studio and Web Studio can make changes.
See tib_bex_webstudio_users_guide.pdf for full documentation.
20.1.1 Starting Web Studio
The supplied script, bex_webstudio, can be used to launch Web Studio on the server.
$ bin/bex_webstudio
Checking examplepcrf
Web Studio is starting
Logs can be found at /home/plord/workspace/common/scripts/e/logs
Access Control files are in directory /home/plord/workspace/common/scripts/e/src/bex
Project home is /home/plord/workspace/common/scripts/e/src/bex
To access, see http://localhost:8090/WebStudio
User=admin password=admin
To access Web Studio use a web browser to access the URL given.
$ firefox http://localhost:8090/WebStudio
The default username and password are displayed when running bex_webstudio. See tib_bex_webstudio_users_guide.pdf for details on how to change users, passwords and associated roles.
Once changes are committed by, perhaps, several team members, all changes need to be approved by the approver. Once approved, the changes are written to disk. The approval option is found in the Worklist.