JMS Compliance in NaradaBrokering
Shrideep Pallickara, Geoffrey Fox
Community Grid Computing Laboratory
Indiana University
Jan 16, 2016
Talk Outline
• Brief overview of NaradaBrokering
• JMS Compliance
• Objectives for achieving JMS compliance
• The process of achieving compliance
• The distributed JMS solution
• Performance
• Future directions
NaradaBrokering
• Based on a network of cooperating broker nodes
  – Cluster-based architecture allows the system to scale to arbitrary sizes
• Based on the publish/subscribe model
• Also supports another model, peer-to-peer (P2P) via JXTA
• Incorporates algorithms for
  – Topic matching and calculation of destinations
  – Efficient routing to computed destinations
• Supports local broker accesses
  – Clients do not need to reconnect to the remote broker that they were last connected to.
JMS Compliance
• JMS clients are written to conform to the messaging specification.
  – Vendor-specific calls result in applications that are not JMS-compliant.
• JMS clients are vendor agnostic
  – One provider should be just as good as another
  – All that needs to be changed is the initialization sequence.
• JMS providers do not interoperate with each other
  – Interactions between clients of two different providers can be achieved through a client connected to both providers.
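The provider-agnostic property above can be illustrated with a small sketch (plain Python, not a real JMS API; the factory class names are hypothetical): client logic is written against a common interface, so swapping providers means swapping only the object created during initialization.

```python
# Hypothetical provider factories; both expose the same interface,
# so everything after initialization is provider-agnostic.

class ProviderAConnectionFactory:
    def create_connection(self):
        return "connection-via-A"

class ProviderBConnectionFactory:
    def create_connection(self):
        return "connection-via-B"

def run_client(factory):
    # Identical client code regardless of provider.
    conn = factory.create_connection()
    return f"published over {conn}"

# Swapping providers means swapping only the factory:
print(run_client(ProviderAConnectionFactory()))  # published over connection-via-A
print(run_client(ProviderBConnectionFactory()))  # published over connection-via-B
```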
JMS Compliance in NaradaBrokering: Rationale
• Providing support for JMS clients within the system
  – Mature messaging specification
    • Several existing applications
  – Opens NaradaBrokering to applications developed around JMS
• Bring NaradaBrokering functionality to JMS-based systems
  – Distributed solution, load balancing, scaling and failure resiliency.
Providing JMS Support
• JMS interactions
  – Supported locally or mapped to corresponding NaradaBrokering calls
• JMS Interconnection Bridge
  – Operations supported locally or mapped to corresponding NaradaBrokering interactions
  – One bridge instance per connection
  – Maintains a list of registered sessions
  – Responsible for routing events to appropriate sessions
• Support for
  – Creation of different message types, e.g. ObjectMessage, BytesMessage, etc.
  – Operations that can be invoked on these message types.
Providing JMS Support
• Topic
  – NaradaBrokering topics are created as <tag,value> pairs.
  – JMS topics are generally "/"-separated.
• JMS selector mechanism
  – We augment NaradaBrokering's topic matching with the selector mechanism implemented by openJMS.
• JMS subscriptions
  – Mapped to corresponding NaradaBrokering subscription requests.
• The Narada JMS event
  – Encapsulates the entire JMS message as the payload for the event.
  – Matching is done based on the topic name contained in the message.
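A minimal sketch of the topic mapping described above, assuming a simple level-indexed tag scheme (the actual tag names NaradaBrokering uses are not specified here):

```python
# Hedged sketch: mapping a "/"-separated JMS topic onto <tag, value>
# pairs in the NaradaBrokering style. The "levelN" tag names are an
# assumption made for illustration.

def jms_topic_to_pairs(topic: str):
    """Split 'sports/football/scores' into level-indexed <tag, value> pairs."""
    segments = topic.strip("/").split("/")
    return [(f"level{i}", seg) for i, seg in enumerate(segments)]

pairs = jms_topic_to_pairs("sports/football/scores")
# → [('level0', 'sports'), ('level1', 'football'), ('level2', 'scores')]
```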
NARADA_JMS Event (structure)
• Topic Name
• Delivery Mode (Persistent/Transient)
• Priority
• JMS Message
  – Headers
  – Payload
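The event layout above can be sketched as a small data structure: matching uses only the topic name, while the JMS message (headers plus payload) travels as an opaque payload. Field names are illustrative, not the actual class definitions.

```python
# Illustrative sketch of the Narada JMS event: the entire JMS message
# is carried as the event payload; routing looks only at the topic name.

from dataclasses import dataclass

@dataclass
class NaradaJMSEvent:
    topic_name: str
    delivery_mode: str   # "Persistent" or "Transient"
    priority: int
    jms_message: dict    # {"headers": {...}, "payload": ...}

def matches(event: NaradaJMSEvent, subscribed_topic: str) -> bool:
    # Matching is done on the topic name alone; the JMS message is opaque.
    return event.topic_name == subscribed_topic

evt = NaradaJMSEvent("chat/room1", "Transient", 4,
                     {"headers": {"JMSType": "text"}, "payload": "hello"})
print(matches(evt, "chat/room1"))  # True
```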
Replacing JMS providers in existing systems
• The Anabas Conferencing System, Anabas Inc.
  – Shared display, text, whiteboard, audio and video conferencing
  – JMS provider: SonicMQ
• The Online Knowledge Center, IU Grid Labs
  – JMS provider: SonicMQ
Towards a distributed JMS solution
• Benefits
  – Features in NaradaBrokering are best exploited in distributed settings.
  – Clients of the distributed solution inherit NaradaBrokering features
    • Routing efficiencies, load balancing, scaling, etc.
  – Eliminates the single point of failure
  – Highly available system
Distributed solution: Constraints
• Each broker should still function as a standalone JMS broker.
• The distributed network should preferably be transparent to clients.
• Existing systems should be easily replaceable with the distributed system
  – Minimal changes to clients
  – No change to initialization sequences
• No changes to the NaradaBrokering core and the protocol suite.
A simple distributed solution
• Set up a NaradaBrokering broker network.
• Clients choose the broker they connect to.
• Cons
  – Distributed network is not transparent to clients
  – Clients need to keep track of unpredictabilities in distributed settings
    • Broker up-times, network partitions
• Clients could use a certain known broker over and over again.
  – Newly added brokers are not incorporated into the solution.
Broker Locators: Distributed JMS Solution
• Primary function
  – Discovery of brokers that clients can connect to
    • Obviates the need for clients to keep track of broker states within the broker network.
• Keeps track of
  – Broker additions and removals
    • Changes to the network fabric
  – Published limit on concurrent connections at a broker node
    • Set during broker initialization
  – Active connections at a broker node
    • Individual brokers notify changes to the broker locator.
Broker Locators: Features
• Dynamic real-time load balancing
  – Connection requests are always forwarded to the best available broker.
• Incorporation of new brokers into the solution
  – A newly added broker is among the best brokers to handle a connection request.
• Slower clients could all be hosted on specific brokers
  – Eliminates broker choking resulting from servicing very slow clients.
Broker Locator: Constraints
• Availability
  – Should not constitute a bottleneck or single point of failure.
    • Multiple broker locators per domain; the number of broker locators would be much smaller than the number of brokers.
• Minimal logic
  – No active connections to any element of the brokering system.
    • Loss of a locator should not affect the network fabric
  – Should not affect processing pertaining to any node in the system.
Broker Locator: Decision Metrics
• IP address of the requesting client
• Published limit on concurrent connections
• Number of active connections still possible
• Availability of the broker
  – A simple ping test
  – If a broker is not available, remove it from the list of available brokers.
• Computing capabilities at a broker
  – CPU speed, RAM, etc.
Broker Locator: Sequence of Operations
• Locate a valid broker
• Propagate broker information to the client
  – Hostname/IP address information
  – Port number on which it listens for connections
  – Transport protocol over which it communicates
• The client then uses this info to establish a communication channel with the broker
  – Done transparently.
• Clients with multiple connections
  – A client could sometimes have connections to multiple brokers.
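The locate-then-connect sequence can be sketched as below; the response structure, hostname and port are illustrative, not the actual wire format.

```python
# Sketch of the handshake: the locator propagates host/port/protocol,
# and the client uses that info to build its communication channel.
# All values here are hypothetical examples.

def locator_response():
    # What the broker locator might propagate to a client.
    return {"host": "broker1.example.org", "port": 3045, "protocol": "tcp"}

def connect(info):
    # A real client would open a socket here; we just build the endpoint.
    return f"{info['protocol']}://{info['host']}:{info['port']}"

endpoint = connect(locator_response())
print(endpoint)  # tcp://broker1.example.org:3045
```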
[Figure: a client requests a connection from the Broker Locator, which pings the best available broker in the NARADA broker cloud; the client then connects directly to that broker.]
JMS Performance Data
• SonicMQ (version 3.0) and NaradaBrokering broker
  – Dual-CPU (Pentium 3, 1 GHz, 256 MB RAM) machine.
• 100 subscribers
  – Over 10 different JMS TopicConnections
  – All hosted on a dual-CPU (Pentium 3, 866 MHz, 256 MB RAM) machine.
• Publisher and measuring subscriber
  – Hosted on another dual-CPU (Pentium 3, 866 MHz, 256 MB RAM) machine.
• Operating system and runtime environment
  – Linux (version 2.2.16)
  – Java 2 JRE (Java 1.3.1, Blackdown-FCS, mixed mode)
Performance Data: Factors Measured
• Transit delay
  – No need for clock synchronization or accounting for clock drifts.
• Standard deviation in the transit delay for the sample of messages received.
• System throughput
  – Measured in terms of the rate at which messages are received.
• Factors measured under varying
  – publish rates
  – message payload sizes.
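The three measured quantities can be computed from per-message send/receive timestamps roughly as below (a sketch; the actual measurement harness is not shown in the slides). Publisher and measuring subscriber share a clock in this setup, which is why no clock synchronization is needed.

```python
# Sketch: mean transit delay, its standard deviation, and throughput
# (receiving rate) from matched send/receive timestamps in seconds.

from statistics import mean, pstdev

def summarize(send_times, recv_times):
    delays = [r - s for s, r in zip(send_times, recv_times)]
    duration = recv_times[-1] - recv_times[0]
    throughput = (len(recv_times) - 1) / duration if duration > 0 else 0.0
    return mean(delays), pstdev(delays), throughput

# Three messages, ~5 ms transit delay each:
m, sd, tp = summarize([0.0, 0.1, 0.2], [0.004, 0.105, 0.206])
# m ≈ 0.005 s, tp ≈ 9.9 messages/sec
```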
Performance: Transit Delay & Standard Deviation
[Figure: mean transit delay and standard deviation (milliseconds) for message samples in Narada and SonicMQ, plotted against publish rate (messages/sec) and payload size (bytes).]
Lower Payloads & Low Publish Rates
Performance: Transit Delay & Standard Deviation
[Figure: mean transit delay and standard deviation (milliseconds) for message samples in Narada and SonicMQ, plotted against publish rate (messages/sec) and payload size (bytes).]
Higher Payloads & Low Publish Rates
Performance: Transit Delay & Standard Deviation
[Figure: mean transit delay and standard deviation (milliseconds) for message samples in Narada and SonicMQ, plotted against publish rate (messages/sec) and payload size (bytes).]
Lower Payloads & High Publish Rates
Performance: System Throughput
[Figure: system throughputs, i.e. receiving rate (messages/sec), for Narada and for SonicMQ, plotted against publish rate (messages/sec) and payload size (bytes).]
Lower Payloads & Higher Publish Rates
Conclusions
• JMS compliance in NaradaBrokering
• Replacing existing systems with a distributed solution
• Support for JMS to go along with support for JXTA
  – Enables JXTA peers and JMS clients to interact with each other.
  – Provides infrastructure for building standards-based peer-to-peer (P2P) grids.
JMS over UDP
• Server machine
  – Dual CPU (1.266 GHz Pentium 3, 1024 MB RAM)
  – JMF server and NaradaBrokering broker
• Client machine
  – 1.8 GHz Pentium 4, 512 MB RAM
  – Transmitter and receiver
• Red Hat Linux 7.1, Blackdown-1.3.1 Java 2 JRE
• H.263 video file (30-second part of a movie)
  – Average bit rate of 600 Kbps (kilobits per second)
  – Frame rate of 30 frames/sec
• Factors measured
  – Jitter: J = J + (|D(i-1, i)| - J)/16
  – Delay
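The jitter estimator on this slide is the interarrival-jitter filter used by RTP (RFC 3550): each new estimate moves 1/16 of the way toward the latest absolute difference in relative transit times of consecutive packets. A sketch:

```python
# RTP-style interarrival jitter: J = J + (|D(i-1, i)| - J) / 16,
# where D(i-1, i) is the change in transit time between consecutive
# packets. Transit values below are illustrative.

def update_jitter(jitter, transit_prev, transit_cur):
    d = transit_cur - transit_prev
    return jitter + (abs(d) - jitter) / 16.0

# Feed a stream of per-packet transit times (ms):
transits = [40.0, 42.0, 41.0, 45.0]
j = 0.0
for prev, cur in zip(transits, transits[1:]):
    j = update_jitter(j, prev, cur)
```

The divisor 16 gives a noise-reducing exponential filter while still letting the estimate track bursts, which is why RTP receivers report it in receiver-report blocks.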
[Figure: jitter (milliseconds) and delay (milliseconds) versus packet number, 0 to 1800, for JMF-RTP and NaradaBrokering-RTP.]