
Machine Learning Techniques for Security Information and Event Management

Jordan A. Caraballo-Vega 1,2, George Rumney 2, John Jasen 2, Jasaun Neff 2

1 University of Puerto Rico at Humacao, Department of Mathematics, Humacao, PR
2 High Performance Computing Branch, NASA Center for Climate Simulation (NCCS), NASA Goddard Space Flight Center, Mail Code 606.2, Greenbelt, MD, USA

Motivation & Objectives

The deployment and maintenance of a High Performance Computing facility, such as the NASA Center for Climate Simulation (NCCS), requires services able to monitor and report live results of hardware and software operational statistics. With more than 4,000 computing nodes and more than 90,000 processor cores, it is crucial for the NCCS to implement techniques to advance, improve, and speed up the way we analyze failures, so we can fix them and prevent future downtimes.

One important technique long used to supervise information and events is to automatically store time-stamped documentation of relevant procedures in log files. This technique helps organizations, businesses, and networks proactively mitigate different risks. But even though this information is very useful, as the volume of fast-moving data increases it becomes nearly impossible for humans to detect these error causes or incoming threats by hand.

Figure 1. We receive ~115 to 120 million log messages daily from ~3,000 servers.

Therefore our aims are to:

- Enhance and improve our ability to view, analyze, and monitor log files.
- Upgrade our existing ELK + Graylog infrastructure.
- Prove that machine learning (ML) techniques are useful for log analysis.
- Implement and develop ML jobs to automate the detection of common and security events.
- Produce a recipe for future production upgrade procedures.

Figure 2. Diagram of some of the services currently stored in our log infrastructure (proxy, CPU, SSHD, firewall, DNS, software, LDAP). They are all centralized and monitored in an Elasticsearch + Logstash + Graylog environment.

• NASA Minority University Research and Education Scholarship.
• Thanks to George Rumney (mentor), John E. Jasen (mentor), and Jasaun Neff (mentor) for their continued advice and help during this project.
• Special thanks to Bennett Samowich and Maximiliano Guillen for their technical support and assistance.
• Thanks to Melissa Canon and Mablelene Burrell for their organization and support throughout the internship experience.

• Elasticsearch. Retrieved June 10, 2017 from https://www.elastic.co
• X-Pack ML. Retrieved June 15, 2017 from https://www.elastic.co/products/x-pack/machine-learning
• Logstash. Retrieved June 16, 2017 from https://www.elastic.co/products/logstash
• Kibana. Retrieved June 17, 2017 from https://www.elastic.co/products/kibana

• Continue monitoring the X-Pack beta release for new updates and improvements.
• Combine results from different machine learning jobs to detect incoming threats that may harm multiple services.
• Enable new features to analyze a wider variety of logs with ML modules.
• Deploy machine learning techniques in a production environment. This may include the development of our own models and UI features.
• The final result will be an environment with the capacity to analyze, monitor, and visualize logs through machine learning models.

SIEM Components

Machine Learning Overview

Acknowledgements

References

Future Work

Security Information and Event Management (SIEM) combines SIM (Security Information Management) and SEM (Security Event Management) for a consolidated analysis of your logs from multiple perspectives. While SEM centralizes the storage of logs and allows real-time analysis, SIM collects the data and provides automated trend analysis that leads to a fully compliant and centralized service report.

Elasticsearch – a highly scalable and distributed open-source search and analytics engine that lets the user store, search, and analyze large volumes of data in near real time with low latency. Its indexed storage provides a fast data-retrieval interface.
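The query/response cycle shown in Figure 5 is driven by JSON bodies like the sketch below. This is standard Elasticsearch query DSL, but the field names, index, and values are hypothetical examples, not taken from the NCCS cluster.

```python
import json

# Minimal Elasticsearch query DSL body: match log documents whose
# "message" field contains "error", newest first. Field names and
# the result size here are illustrative only.
query = {
    "query": {"match": {"message": "error"}},
    "sort": [{"@timestamp": {"order": "desc"}}],
    "size": 10,  # return at most 10 matching documents
}

# Elasticsearch speaks JSON over HTTP: this body would be POSTed to
# /<index>/_search, and the matching hits come back as JSON as well.
body = json.dumps(query)
```

In practice a client library (for example, the official Python client's `search()` call) serializes and sends this body for you.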


Together, these two techniques give SIEM its powerful and effective ability to detect incoming threats in real time and to perform forensic analysis on log data. By storing events historically, SIEM gives the administrator the flexibility to correlate events and to recognize unusual behavior more easily using baselines.

In order to implement, configure, and test these forensic techniques, an Elasticsearch + Logstash + Graylog + Kibana demo system was built. Three Dell C6100 servers running Debian 8 were initially assembled with version 2.4 of Elasticsearch, Logstash, and Graylog. This demo infrastructure reproduces the current NCCS log cluster's functionality and provides a realistic rehearsal of the procedures that need to be performed when deploying the upgraded infrastructure in production. The software elements that make up the cluster are described below.

Log Analysis Demo Cluster

Logstash – a server-side data collection and processing pipeline that ingests, transforms, normalizes, and sends data from a multitude of sources to a wide range of destinations. It comes with over 200 pre-built plugins that ease the filtering of unstructured data. Messages are parsed and filtered through configuration files whose patterns match the different log formats.
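What a Logstash grok filter does can be sketched in plain Python: a named-group pattern pulls structured fields out of an unstructured log line. The log format and field names below are illustrative, not the actual NCCS Logstash configuration.

```python
import re

# A grok-style pattern for a syslog-flavored sshd line. The format
# and field names are illustrative examples only.
SSHD_PATTERN = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s"   # e.g. "Jul 21 20:01:02"
    r"(?P<host>\S+)\ssshd\[(?P<pid>\d+)\]:\s"  # source host and sshd pid
    r"(?P<message>.*)"                          # the free-text remainder
)

def parse_line(line):
    """Return a dict of named fields, mimicking a grok match, or None
    when the line does not fit the pattern (a "_grokparsefailure")."""
    m = SSHD_PATTERN.match(line)
    return m.groupdict() if m else None

event = parse_line(
    "Jul 21 20:01:02 ldap2 sshd[4242]: Failed password for root from 10.0.0.5"
)
```

Once a line is structured this way, the output stage can route the fields to Elasticsearch, Graylog, or any other declared destination.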

Graylog – a powerful log management and analysis tool that parses and enriches logs while providing centralized configuration management for third-party collectors. Its REST API and web interface let the user forward and pull data to and from multiple systems, with the ability to integrate LDAP user directories.

Kibana – an analytics and visualization platform built on top of Elasticsearch that lets the user search, view, and interact with data stored in indices. Its browser-based interface enables quick creation and sharing of dynamic dashboards that can include graphs, histograms, charts, and many other representations. Kibana plays an enormous role in visualizing and identifying the anomalies encountered in the systems.

Figure 5. Diagram representing the Elasticsearch workflow. A JSON query is executed to match documents, and Elasticsearch returns the matches as JSON output.

Figure 3. Diagram illustrating the workflow of our SIEM infrastructure. It begins with systems input (A) (CPUs, firewalls, switches, servers, etc.) that is then parsed and stored in a log cluster (B). After messages are indexed, they are analyzed through ML models, and results can then be visualized in dashboards (C) that include real-time graphs and specific anomaly information.

Figure 6. Diagram representing the Logstash workflow. It receives data as input, parses logs through its filter plugin, and sends messages to the declared destinations.

Beats – single-purpose lightweight agents that ship data to multiple Logstash and Elasticsearch instances. Beats is great for gathering and centralizing data, and can forward data from the network (Packetbeat), log files (Filebeat), the operating system (Metricbeat), and many other sources.

The Elasticsearch X-Pack 5.4 release brings unsupervised machine learning techniques into play. X-Pack ML automatically models the behavior of your data in order to streamline the identification of root causes and to detect issues faster while reducing false positives. It uses statistical models that calculate baselines over time for what is normal in your logs or messages. After baselines are calculated for a set of points, it searches for deviations within the data sets, which are then identified as anomalies. X-Pack ML models are adaptable: as more data enters the systems, the models update automatically, which suits SIEM objectives well.

By applying multiple probability distribution functions, the analysis gains the flexibility to determine which model is most effective for your data set. Examples of anomalies a system can encounter are an entity whose behavior changes suddenly, and an entity that is drastically different from others in a population. Once the model is determined, analysis functions such as mean, sum, and many others are calculated over the data in order to identify deviations from the baseline values and their influencers.
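The baseline-and-deviation idea can be illustrated with a deliberately simplified sketch. X-Pack ML fits richer probability distributions and scores anomalies probabilistically; the toy below assumes a plain mean/standard-deviation baseline and invented counts.

```python
import statistics

def baseline(history):
    """Fit a toy baseline (mean, standard deviation) over past event
    counts -- a stand-in for the statistical models X-Pack ML builds
    over time."""
    return statistics.mean(history), statistics.pstdev(history) or 1.0

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag a new point deviating from the baseline by more than
    `threshold` standard deviations."""
    return abs(value - mean) / stdev > threshold

# Normal hourly event counts, then a sudden ~22x spike like the one
# described in Figure 9 (all numbers invented).
history = [100, 98, 103, 101, 97, 99, 102]
mean, stdev = baseline(history)
spike_is_anomaly = is_anomalous(2200, mean, stdev)   # True
normal_is_anomaly = is_anomalous(101, mean, stdev)   # False
```

As more points arrive, the baseline can simply be refit over the growing history, mirroring how the X-Pack models update automatically.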

Methods


Data from multiple services were collected, filtered, ingested, and analyzed. The example below reflects a job created from sample data acquired from the internet. Fields like detectors, types of data, detection period, influencers, and analysis functions are key pieces for detecting anomalies. These analysis functions are responsible for detecting what is abnormal in the data range, and commonly include mean, sum, metric, time, and many others.

Figure 4. Representation of events as a function of time across days. The initial graph behavior is significantly moderate compared to the end, when some peaks arise. This is an example of a drastically changed entity behavior. Possible anomalies are colored in red.


Machine Learning Jobs and Findings

X-Pack – an Elastic Stack extension that bundles security, alerting, monitoring, reporting, graph, and machine learning features designed to work together seamlessly. Its reporting capabilities can generate and email dashboards as PDF reports, and it can send automatic alerts about your system.

Figure 7. Representation of the cluster workflow with X-Pack integrated. Logs are imported into the database with security features enabled, and then analyzed and visualized with ML.



• The Elastic Stack upgrade from version 2.4 to 5.5 brought significant changes that required analysis, problem resolution, and documentation.
• Beats tools are convenient, easy-to-use packages that can play an important role in a production environment when ingesting logs.
• Machine learning techniques were effective and extremely useful for analyzing real-time and archived data. The implemented jobs were able to detect subtle anomalies that would have required a large investment of effort and time to detect manually.
• The X-Pack release brings security and alerting features that will be very useful for detecting incoming threats, and it does not require huge CPU power.
• Machine learning will emerge as a powerful analytics technique for log analysis, and it was substantiated as a great engine for SIEM purposes.

Conclusions

Response/Request by Application

Data Description – Stats taken from Apache log data.
Job Details – A multi-metric job was created in order to monitor the total sum of requests and the mean of the response values, taking into account the country of origin.
Aim of the Job – Detect whether there is an IP address issuing high amounts of requests over time, and whether those requests are legitimate.
Graph Descriptions – Figure 8A represents the high peak in the number of requests made (100x higher than baseline); red circles exhibit the biggest anomalous events. The Figure 8B map illustrates the origin and intensity of the requests based on their IP addresses and total sum.
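The pieces this job ties together (detectors, influencers, analysis functions, detection period) follow the general shape of an X-Pack anomaly detection job configuration. The sketch below is illustrative: the field names, bucket span, and values are assumptions, not the actual NCCS job.

```python
# Sketch of the main pieces of an X-Pack ML job: detectors (an
# analysis function applied to a field), influencers, and a bucket
# span. Field names and values below are hypothetical examples.
job = {
    "job_id": "response-request-by-app",
    "analysis_config": {
        "bucket_span": "15m",  # width of each analysis bucket
        "detectors": [
            # total requests per client address, flagging high values
            {"function": "high_sum", "field_name": "requests",
             "by_field_name": "clientip"},
            # mean response value over each bucket
            {"function": "mean", "field_name": "response"},
        ],
        # fields that may explain (influence) an anomaly
        "influencers": ["clientip", "geo.country_name"],
    },
    "data_description": {"time_field": "@timestamp"},
}
```

The detector functions mirror the job's aims: `high_sum` targets one-sided spikes in request volume, while `mean` tracks the typical response value.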

LDAP Event Rate

Figure 8. Anomaly ML Kibana dashboard representation of the response-request-by-app job. There are high amounts of requests from unauthorized countries.

The jobs described below are intended to cover connection event rates, user authentication, and real-time system usage stats. The data was taken from multiple NCCS production systems and was filtered and ingested with Logstash and Metricbeat. These representations are examples of the ML advanced job option and are designed to monitor multiple events with a variety of detectors and influencers.

Data Description – Stats taken from a week's worth of logs from four NCCS LDAP servers. Data was parsed with the Logstash grok and kv plugins.
Job Details – A multi-metric job was created using log sources as influencers and the high mean of operations per event as the detector.
Aim of the Job – Monitor whether a server is issuing high amounts of operations over time, while taking into account the log source and operation type.

SSHD Event Rate

Data Description – Three weeks' worth of logs from NCCS SSHD servers.
Job Details – An advanced job was created in order to calculate the event rate received from multiple input servers, categorized by their hostnames.
Aim of the Job – Detect whether servers are sending uncommon amounts of events over time. This builds a wider picture of the cluster's baseline behavior.
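The event-rate idea, counting events per source host and comparing each host against the fleet-wide baseline, can be sketched in a few lines. X-Pack scores anomalies probabilistically rather than with a fixed z-score cutoff, and the hostnames and counts below are invented.

```python
import statistics

def rate_outliers(counts, threshold=2.5):
    """Flag hosts whose event volume sits more than `threshold`
    standard deviations from the fleet-wide mean. A toy stand-in for
    the probabilistic scoring an X-Pack event-rate job performs."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0
    return sorted(h for h, c in counts.items()
                  if abs(c - mean) / stdev > threshold)

# Events per day per SSHD server; "host10" mirrors the kind of
# outlier flagged in Figure 10 (all numbers invented).
daily_counts = {f"host{i}": 100 for i in range(1, 10)}
daily_counts["host10"] = 2000
outliers = rate_outliers(daily_counts)  # ["host10"]
```

Running the same comparison per time bucket, rather than once per day, is what lets the job spot servers that deviate only for a few hours.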

System Metric Change

Data Description – Six hours of CPU data, together with a high-CPU signature produced by a stress tool.
Job Details – An advanced job was created in order to monitor the mean of system CPU usage. This job flags anomalously high values as events.
Aim of the Job – Detect whether there is unusual behavior in CPU consumption. The system is performing some tasks while it is being monitored.

Figure 9. The LDAP2 event-rate representation illustrates a 22-times-higher peak identified on July 21st at ~8:00 pm.

Figure 10. Red squares reveal possible anomalies found. On June 24th, seven of the ten analyzed servers exhibited anomalous behavior based on the amount of events sent.

Figure 11. A high CPU consumption event that dropped drastically was detected at ~13:30. This may indicate that the system rebooted or was momentarily down.
