W10 Concurrent Session — 4/9/2014 2:00 PM
“Continuous Performance Testing: The New Standard”
Presented by: Obbie Pet, Ticketmaster
Brought to you by: SQE
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ [email protected] ∙ www.sqe.com
In the past several years the software development lifecycle has changed significantly with high-speed software releases, shared application services, and platform virtualization. The traditional performance assurance approach of pre-release testing does not address these innovations. To maintain confidence in acceptable production performance, pre-release testing must be augmented with in-production performance monitoring. Obbie Pet describes three types of monitors—performance, resource, and VM platform—and three critical metrics fundamental to isolating performance problems: response time, transaction rate, and error rate. Obbie reviews techniques for acquiring and interpreting these metrics, and describes how to develop a continuous performance monitoring process. Pre-release testing and in-production monitoring can then be woven into a single integrated process, the best bet for assuring performance in today’s development world. Take away this integrated process for consideration in your own shop.
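The three critical metrics named above can be computed from ordinary transaction records. The sketch below is illustrative only: the record fields, the 95th-percentile choice, and the window length are assumptions, not the speaker's implementation.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    name: str           # logical transaction name, e.g. "checkout"
    duration_ms: float  # measured response time
    ok: bool            # did the transaction succeed?

def summarize(txns, window_s):
    """Reduce one monitoring window to the three core metrics."""
    if not txns:
        return {"response_time_ms": None, "tps": 0.0, "error_rate": 0.0}
    durations = sorted(t.duration_ms for t in txns)
    p95 = durations[int(0.95 * (len(durations) - 1))]   # 95th-percentile latency
    errors = sum(1 for t in txns if not t.ok)
    return {
        "response_time_ms": round(p95, 1),
        "tps": round(len(txns) / window_s, 2),           # transaction rate
        "error_rate": round(errors / len(txns), 4),      # fraction failed
    }

sample = [Txn("checkout", 120.0, True), Txn("checkout", 340.0, True),
          Txn("checkout", 95.0, False), Txn("checkout", 210.0, True)]
print(summarize(sample, window_s=60))
```

Emitting one such summary per window, continuously, is what turns these metrics into a monitoring signal rather than a one-off test result.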
Transcript
Obbie Pet has twenty years of experience in QA as a tester, test lead, and QA manager. For the past thirteen years, Obbie has focused on performance testing in the area of Internet ticketing (Tickets.com, LiveNation, Ticketmaster) and in the insurance field (Wellpoint). He is certified on the LoadRunner testing tool by both Mercury and HP. Five years ago, Obbie realized that achieving performance assurance in production required expanding beyond testing alone to include performance monitoring. His focus now is sharing these ideas and implementing monitoring solutions in support of performance assurance. Read more about Obbie and his thoughts on performance assurance at QAStrategy.com.
● Latency matters. Amazon found every 100ms of latency cost them 1% in sales. Google found an extra 0.5 seconds in search page generation time dropped traffic by 20%.
• Selecting an appropriate monitoring technology is highly dependent on your specific environment. Below I share the classes of monitoring technologies to consider for your solution.
BUILD IT: Custom code needed to collect metrics; open source leveraged for metric storage, analysis, and reporting
SysLog harvesting: custom code pushes performance data to syslog, which is then ingested by log analyzers (Kibana, Splunk)
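A minimal sketch of the "custom code" half of syslog harvesting, using Python's standard `SysLogHandler`. The `perf txn=... duration_ms=...` record format is a hypothetical convention chosen so downstream analyzers such as Kibana or Splunk can parse it as key=value pairs; it is not a format the session prescribes.

```python
import logging
import logging.handlers

def format_perf(txn_name, duration_ms, status):
    """Render one performance record as a parse-friendly key=value line."""
    return f"perf txn={txn_name} duration_ms={duration_ms:.1f} status={status}"

logger = logging.getLogger("perf")
logger.setLevel(logging.INFO)
try:
    # Ship records to the local syslog daemon; the log analyzer then
    # harvests them from the aggregated logs.
    logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))
except OSError:
    # Fall back to stderr on machines without a local syslog socket.
    logger.addHandler(logging.StreamHandler())

logger.info(format_perf("checkout", 182.4, "ok"))
```

The appeal of this class of solution is that the collection side is only a few lines; the storage and analysis are delegated to the log pipeline you already run.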
tcollector agents: performance information is pushed to a time-series database (OpenTSDB)
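OpenTSDB's telnet interface accepts data points as `put <metric> <timestamp> <value> <tags>` lines, and tcollector-style collectors emit lines in this shape for the agent to forward. The sketch below renders such a line; the metric name and tags are hypothetical examples, not names from the session.

```python
import time

def tsdb_put(metric, value, **tags):
    """Render one data point in OpenTSDB's line-oriented 'put' format."""
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {int(time.time())} {value} {tag_str}"

# Hypothetical metric name and tags -- adjust to your own naming scheme.
print(tsdb_put("web.request.latency_ms", 182.4, host="web01", route="checkout"))
```

Tags (`host`, `route`, and so on) are what make a time-series store useful for isolating problems: the same metric can be sliced per host or per transaction after the fact.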
BUY IT: End-to-end vendor monitoring solutions
Network monitors or sniffers (OpNet)
Stitching: agent deployment required; transaction parts are pieced together from header information
Transaction marking: agent deployment required; headers are inserted and then tracked
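The insert-and-track idea behind transaction marking can be sketched in a few lines: the first tier stamps the request with an identifier, and every later tier logs the same identifier so the monitor can correlate the hops. The header name `X-Txn-Id` is a placeholder; commercial agents use their own headers.

```python
import uuid

# Hypothetical header name chosen for illustration.
MARK_HEADER = "X-Txn-Id"

def mark_request(headers):
    """Insert a transaction id unless the request is already marked."""
    headers = dict(headers)
    headers.setdefault(MARK_HEADER, uuid.uuid4().hex)
    return headers

def log_hop(tier, headers):
    """Each tier logs the shared id, letting hops be stitched together."""
    return f"tier={tier} txn={headers[MARK_HEADER]}"

incoming = mark_request({"Host": "shop.example.com"})
# The id survives the hop, so web- and app-tier entries correlate.
print(log_hop("web", incoming))
print(log_hop("app", incoming))
```

`setdefault` is the key detail: a tier that receives an already-marked request must propagate the existing id, not mint a new one.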
JVM monitors: agent deployment usually required (Dynatrace, AppDynamics)