Rittman Mead BI Forum 2015 Masterclass
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Architecture
The Next-Gen BI Environment from this Architecture
•Traditional RDBMS DW now complemented by a Hadoop/NoSQL-based data reservoir • “Data Factory” term used for ETL and loading processes that provide the conduit between them •Some data may be loaded into the data reservoir and only exist there •Some will be further processed and loaded into the DW (“Enterprise Information Store”) •Some may get directly loaded into the RDBMS •Use best option to support business needs
•Oracle Big Data Appliance ‣Optimized hardware for Hadoop processing ‣Cloudera Distribution incl. Hadoop ‣Oracle Big Data Connectors, ODI etc
•Oracle Big Data Connectors •Oracle Big Data SQL •Oracle NoSQL Database •Oracle Data Integrator •Oracle R Distribution •OBIEE, BI Publisher and Endeca Info Discovery
•Engineered system for big data processing and analysis •Optimized for enterprise Hadoop workloads •288 Intel® Xeon® E5 Processors •1152 GB total memory •648TB total raw storage capacity ‣Cloudera Distribution of Hadoop ‣Cloudera Manager ‣Open-source R ‣Oracle NoSQL Database Community Edition ‣Oracle Enterprise Linux + Oracle JVM ‣New - Oracle Big Data SQL
•Don’t underestimate the value of “pre-integrated” - a massive time-saver for the client ‣No need to integrate Big Data Connectors, ODI Agent etc. with HDFS, Hive and so on
•Single support route - raise SR with Oracle, they will route to Cloudera if needed •Single patch process for whole cluster - OS, CDH etc. •Full access to Cloudera Enterprise features •Otherwise … just another CDH cluster in terms of SSH access etc •We like it ;-)
•Very good product stack, enterprise-friendly, big community, can do lots with free edition •Cloudera have their favoured Hadoop technologies - Spark, Kafka
•Also makes use of Cloudera-specific tools - Impala, Cloudera Manager etc •But ignores some tools that have value - Apache Tez for example
•Easy for an Oracle developer to get productive with the CDH stack •But beware of some immature technologies / products ‣Hive != Oracle SQL ‣Spark is very much an “alpha” product ‣Limitations in things like LDAP integration, end-to-end security ‣Lots of products in stack = lots of places to go to diagnose issues
•Oracle-licensed utilities to connect Hadoop to Oracle RDBMS ‣Bulk-extract data from Hadoop to Oracle, or expose HDFS / Hive data as external tables ‣Run R analysis and processing on Hadoop ‣Leverage Hadoop compute resources to offload ETL and other work from Oracle RDBMS ‣Enable Oracle SQL to access and load Hadoop data
•Oracle Loader for Hadoop, Oracle SQL Connector for HDFS - rarely used ‣Sqoop works both ways (Oracle>Hadoop, Hadoop>Oracle) and is “good enough” (see the example below) ‣OSCH replaced by Oracle Big Data SQL for direct Oracle>Hive access
•Oracle R Advanced Analytics for Hadoop has been very useful though ‣Run MapReduce jobs from R ‣Run R functions across Hive tables
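As a sketch of the Sqoop route referred to above (the JDBC URL, credentials and table names here are hypothetical), a one-off import of an Oracle reference table into Hive looks like this:

sqoop import \
  --connect jdbc:oracle:thin:@//ora-db-host:1521/pdborcl \
  --username BLOG_REFDATA --password welcome1 \
  --table POST_CATEGORIES \
  --hive-import --hive-table blog_refdata.post_categories \
  --num-mappers 4

Reversing the direction (sqoop export) pushes a Hive/HDFS directory back into an Oracle table, which is why Sqoop is usually “good enough” in both directions.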
•Part of Oracle Big Data 4.0 (BDA-only) ‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends Oracle Data Dictionary to cover Hive •Extends Oracle SQL and SmartScan to Hadoop •Extends Oracle Security Model over Hadoop ‣Fine-grained access control ‣Data redaction, data masking ‣Uses fast C-based readers where possible (vs. Hive MapReduce generation) ‣Maps Hadoop parallelism to Oracle PQ ‣Big Data SQL engine works on top of YARN ‣Like Spark, Tez, MR2
[Diagram: SQL queries issued against the Exadata Database Server are offloaded via SmartScan to both the Exadata Storage Servers and the Hadoop cluster running Oracle Big Data SQL]
Still a Key Role for Data Integration and BI Tools
•Fast, scalable, low-cost / flexible-schema data capture using Hadoop + NoSQL (BDA) •Long-term storage of the most important downstream data - Oracle RDBMS (Exadata) •Fast analysis + business-friendly interface : OBIEE, Endeca (Exalytics), RTD etc
•For searching and cataloging data in the data reservoir •Typically use concepts of faceted search, and reading from Hive metastore •Options include Elasticsearch, Cloudera Search / Hue, Oracle Big Data Discovery
•We’re still loading and storing into Hadoop and NoSQL, but… ‣There’s governance and change control ‣Data is secured ‣Data loading and pipelines are resilient and “industrialized” ‣We use ETL tools, BI tools and search tools to enable access by end-users ‣We think about design standards, file and directory layouts, metadata etc
•Build on insights and models created in the Discovery Lab •Put them into production so the business can rely on them
Components Required for Typical Production Environment
•Hadoop cluster - typically 6-20 nodes, CDH or Hortonworks HDP with YARN / Hadoop 2.0 ‣Can deploy on-premise, or in cloud (AWS etc) using Cloudera Director
•Oracle Database, ideally Exadata for Big Data SQL capabilities •ODI12c 12.1.3.0.1 with Big Data Options (additional license required over ODI EE) •Oracle Big Data Discovery ‣Currently only certified on CDH5.3, no Kerberos support yet
•Oracle Business Intelligence 11g ‣Limited Hive compatibility with 11.1.1.7; 11.1.1.9 promises HiveServer2 + Impala support
•Configure BDA directory structure, user access, LDAP integration etc •Connect ODI12c 12.1.3.0.1 to Hive, HDFS, Pig and Spark on Hadoop cluster •Connect OBIEE11g to Hive (and Impala) •Set up a developer workstation with client libraries, ODI Studio, OBIEE BI Administrator etc
•Both Cloudera Manager (with CDH Enterprise) and Hue can be linked to corporate LDAP •Hive, Impala etc also need to be configured if you want to use Apache Sentry
•Best practice is to create application-specific HDFS directories for shared data •Separate ETL out from archiving, store data in subdirectory partitions •Use POSIX security model to grant RO access to groups of users •Consider using new HDFS ACLs where appropriate (beware memory implications though)
•Usual access control strategy is to limit users to accessing data through Hive tables
•Consider using Apache Sentry to provide RBAC over Hive and Impala tables ‣Column-based restrictions possible through SQL views ‣Requires Kerberos authentication and Hive/Impala LDAP integration as prerequisites
•Oracle Big Data SQL potentially a more complete solution, if available
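To make the Sentry approach above concrete, the grants are issued as SQL through Hive or Impala (role, group, database and view names here are illustrative, and Kerberos plus LDAP integration are assumed to be in place):

CREATE ROLE analyst_role;
GRANT ROLE analyst_role TO GROUP analysts;
-- Sentry privileges are table-grained, so column-level restriction goes through a view
CREATE VIEW rm_logs.pageviews_restricted AS
  SELECT request, status, time FROM rm_logs.pageviews;
GRANT SELECT ON TABLE rm_logs.pageviews_restricted TO ROLE analyst_role;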
Configuring ODI12c 12.1.3.0.1 for Hadoop Data Integration
•New Hadoop DS technology used for registering base cluster details •New WebLogic Hive drivers used for Hive table access •Pig and Spark datasources configured for Pig Latin / Spark execution •Either client workstation needs to be configured as Hadoop client, or ODI agent installed on a Hadoop node ‣To execute Pig, Hive etc mappings
•Option now to use Oozie scheduler rather than ODI agent ‣Avoids need to install ODI agent on cluster ‣Integrates ODI workflows with other Hadoop scheduling
•Not officially supported with OBIEE 11.1.1.7, but does work •Only possible using Windows version of OBIEE (looser rules around unsupported drivers) •OBIEE 11.1.1.9 will come with Impala support
•Use Cloudera ODBC drivers •Configure Database Type as Apache Hadoop •For earlier versions of Impala, may need to disable ORDER BY in Database Features, have the BI Server do sorting
• Issue is that earlier versions of Impala require a LIMIT with all ORDER BY clauses ‣OBIEE could add the LIMIT, but doesn’t for Impala at the moment (because Impala isn’t an officially supported source yet)
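A minimal illustration of that restriction (table and column names hypothetical): on affected Impala versions the first statement is rejected, while the second - the form OBIEE would need to generate - is accepted.

-- rejected by early Impala releases: ORDER BY without LIMIT
SELECT post_author, COUNT(*) AS hits
FROM   access_per_post
GROUP BY post_author
ORDER BY hits DESC;

-- accepted: ORDER BY qualified with LIMIT
SELECT post_author, COUNT(*) AS hits
FROM   access_per_post
GROUP BY post_author
ORDER BY hits DESC
LIMIT 100;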
Configuring OBIEE to Access a Kerberos-Secured Cluster
•Most production Hadoop clusters are Kerberos-secured •OBIEE can access secured clusters with appropriate ODBC drivers •Typically install Kerberos client on Windows workstation, and on server side
• If OBIEE runs using a system service account, ensure it can request a ticket too
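As a sketch of that setup (the principal and keytab path are hypothetical), the account the BI Server or BI Administration tool runs under needs to hold a valid ticket before the ODBC driver connects:

kinit -kt /etc/security/keytabs/obiee.keytab obiee@EXAMPLE.COM
klist   # confirm a TGT exists before testing the Hive/Impala DSN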
•Configuration done during BDD installation, tied to a particular Hadoop cluster •Specify Cloudera Manager + Hadoop service URLs •May need to adjust RAM allocated to Spark Workers in Cloudera Manager ‣Currently only Spark Standalone (not YARN) supported
•Rittman Mead want to understand drivers and audience for their website ‣What is our most popular content? Who are the most in-demand blog authors? ‣Who are the influencers? What do they read?
Two Analysis Scenarios : Reporting, and Data Discovery
• Initial task will be to ingest data from webserver logs, Twitter firehose, site content + ref data •Land in Hadoop cluster, basic transform, format, store; then, analyse the data:
1. Reporting : combine with Oracle Big Data SQL for structured OBIEE dashboard analysis ‣What pages are people visiting? Who is referring to us on Twitter? What content has the most reach?
2. Data Discovery : combine with site content, semantics, text enrichment; catalog and explore using Oracle Big Data Discovery ‣Why is some content more popular? Does sentiment affect viewership? What content is popular, where?
Apache Flume : Distributed Transport for Log Activity
•Apache Flume is the standard way to transport log files from source through to target • Initial use-case was webserver log files, but can transport any file from A>B •Does not do data transformation, but can send to multiple targets / target types •Mechanisms and checks to ensure successful transport of entries
•Has a concept of “agents”, “sinks” and “channels” •Agents collect and forward log data •Sinks store it in final destination •Channels store log data en-route
•Simple configuration through INI files •Handled outside of ODI12c
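A minimal source-side agent definition along these lines (agent name, log path and target hostname are illustrative) tails the webserver log, buffers events in a memory channel and forwards them over Avro to the agent on the BDA:

source_agent.sources  = apache_log
source_agent.channels = memoryChannel
source_agent.sinks    = avro_sink

# exec source: tail the Apache access log
source_agent.sources.apache_log.type = exec
source_agent.sources.apache_log.command = tail -F /var/log/httpd/access_log
source_agent.sources.apache_log.channels = memoryChannel

# memory channel: capacity bounds how many events can be buffered en-route
source_agent.channels.memoryChannel.type = memory
source_agent.channels.memoryChannel.capacity = 10000

# avro sink: forward to the target (BDA-side) agent
source_agent.sinks.avro_sink.type = avro
source_agent.sinks.avro_sink.hostname = bda-node1.example.com
source_agent.sinks.avro_sink.port = 4545
source_agent.sinks.avro_sink.channel = memoryChannel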
•Developed by LinkedIn, designed to address Flume issues around reliability, throughput ‣(though many of those issues have been addressed since)
•Designed for persistent messages as the common use case ‣Website messages, events etc vs. log file entries
•Consumer (pull) rather than Producer (push) model •Supports multiple consumers per message queue •More complex to set up than Flume, and can use Flume as a consumer of messages ‣But gaining popularity, especially alongside Spark Streaming
Starting Flume Agents, Check Files Landing in HDFS Directory
•Start the Flume agents on source and target (BDA) servers •Check that incoming file data starts appearing in HDFS ‣Note - files will be continuously written-to as entries added to source log files ‣Channel size for source, target agents determines max no. of events buffered ‣If buffer exceeded, new events dropped until buffer < channel size
Adding Social Media Datasources to the Hadoop Dataset
•The log activity from the Rittman Mead website tells us what happened, but not “why” •Common customer requirement now is to get a “360 degree view” of their activity ‣Understand what’s being said about them ‣External drivers for interest, activity ‣Understand more about customer intent, opinions
•One example is to add details of social media mentions, likes, tweets and retweets etc to the transactional dataset ‣Correlate twitter activity with sales increases, drops ‣Measure impact of social media strategy ‣Gather and include textual, sentiment, contextual data from surveys, media etc
•Flume log data from webserver arrives as files in HDFS •Can either be accessed in that form by ODI, or presented as a Hive table to ODI using SerDe ‣Both are fine, but creating the Hive table in advance makes ODI developer job simpler
Creating a Hive Table over the Log Data, using SerDe
•Hive works by defining a table structure over data in HDFS, typically plain text with a delimiter •But can make use of SerDes (serializer-deserializers) to parse other formats •Takes semi-structured data (Apache Combined Log Format) and turns it into structured (Hive) data ‣Can also use IKM File to Hive with the same SerDe definition, to do this within ODI

CREATE EXTERNAL TABLE apachelog_parsed (
  host     STRING,
  identity STRING,
  user     STRING,
  time     STRING,
  request  STRING,
  status   STRING,
  size     STRING,
  referer  STRING,
  agent    STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE
LOCATION '/user/flume/rm_website_logs';
•Simplest approach again is to define a Hive table over the Twitter data •Arrives in files via Flume agent, but in JSON format •Potentially contains more fields than we are interested in - and in JSON format •Can address in ODI data load, but simpler to parse and select elements of interest beforehand
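A sketch of that approach, assuming a JSON SerDe JAR (for example the hive-serdes com.cloudera.hive.serde.JSONSerDe) has been added to the Hive session, with the column list trimmed to the fields of interest and an illustrative HDFS location:

CREATE EXTERNAL TABLE tweets (
  id         BIGINT,
  created_at STRING,
  text       STRING,
  `user`     STRUCT<screen_name:STRING, friends_count:INT, followers_count:INT, statuses_count:INT>,
  entities   STRUCT<urls:ARRAY<STRUCT<expanded_url:STRING>>,
                    hashtags:ARRAY<STRUCT<text:STRING>>>
)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets';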
•Second table extracts the individual fields from STRUCT datatypes in first table ‣Could be done through a view, but Big Data Discovery doesn’t support them yet

CREATE TABLE `tweets_expanded` AS
select
  `tweets`.`id`,
  `tweets`.`created_at`,
  `tweets`.`user`.screen_name as `user_screen_name`,
  `tweets`.`user`.friends_count as `user_friends_count`,
  `tweets`.`user`.followers_count as `user_followers_count`,
  `tweets`.`user`.statuses_count as `user_tweets_count`,
  `tweets`.`text`,
  `tweets`.`in_reply_to_screen_name`,
  `tweets`.`favorited`,
  `tweets`.`retweeted_status`.user.screen_name as `retweet_user_screen_name`,
  `tweets`.`retweeted_status`.retweet_count as `retweet_count`,
  `tweets`.`entities`.urls[0].expanded_url as `url1`,
  `tweets`.`entities`.urls[1].expanded_url as `url2`,
  `tweets`.`entities`.hashtags[0].text as `hashtag1`,
  `tweets`.`entities`.hashtags[1].text as `hashtag2`,
  `tweets`.`entities`.hashtags[2].text as `hashtag3`,
  `tweets`.`entities`.hashtags[3].text as `hashtag4`,
  `tweets`.`entities`.user_mentions[0].screen_name as `user_mentions_screen_name1`,
  `tweets`.`entities`.user_mentions[1].screen_name as `user_mentions_screen_name2`,
  `tweets`.`entities`.user_mentions[2].screen_name as `user_mentions_screen_name3`,
  `tweets`.`entities`.user_mentions[3].screen_name as `user_mentions_screen_name4`,
  `tweets`.`entities`.user_mentions[4].screen_name as `user_mentions_screen_name5`
from `tweets`;
Configuring the ODI12c 12.1.3.0.1 Hadoop Datasource
•New feature in ODI12.1.3.0.1 with Big Data Extensions •Defines the physical server and Java library locations for other tools (Pig etc) to use ‣Namenode location ‣Working area in HDFS for ODI ‣Location on HDFS to store basic details of ODI installation / repo
•Used for reverse-engineering Hive table structures from Hadoop •Uses JDBC connection, new WLS-derived driver •Need to also either install Hadoop/Hive client on ODI Studio workstation, or install ODI Agent on target Hadoop cluster to actually execute mappings ‣New option to use Oozie removes need for ODI Agent though
•Connections to Hive, Hadoop (and Pig) set up earlier •Define physical and logical schemas, reverse-engineer the table definitions into repository ‣Can be temperamental with tables using non-standard SerDes; make sure JARs registered
1. Join initial log data extract to additional reference data (already in Hive)
2. Supplement with additional Oracle RDBMS data (brought in via Sqoop)
3. Filter log data to leave just requests for blog pages
4. Take the Twitter data, and filter to just tweets referencing RM web pages
5. Join Twitter activity to page hits, to create aggregate for the two
6. Geocode page hits to determine country + city of visitor
7. Sessionize the log data for use with an R classification routine
ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
• IKM Hive Append can be used to perform Hive table joins, filtering, agg. etc. • INSERT only, no DELETE, UPDATE etc •Join to other Hive tables, or combine with Sqoop KMs etc to bring in Oracle data •Supports most ODI operators ‣Filter ‣Aggregate ‣Join (ANSI-style) ‣etc
ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•ODI 12.1.3.0.1 replaces the previous template-style KMs (IKM Hive-to-Hive Control Append) with new component-style KMs ‣Makes it possible to mix-and-match sources ‣Enables logical mapping to generate Hive, Pig and Spark code
ETL Step 2 : Supplement with Oracle Reference Data
• In this step, the log data will be supplemented with additional reference data in Oracle •Uses Sqoop (LKM SQL to Hive Sqoop) to extract Oracle data into Hive staging table •Join temporary Hive table to the main log Hive table ‣Logical mapping just references the Oracle source table, no need for mapping designer to consider Sqoop
ETL Step 2 : Supplement with Oracle Reference Data
•Mapping execution then runs in three stages: ‣Create temporary Hive table for staging data ‣Generate and run Sqoop job to export reference data out of Oracle RDBMS ‣Join incoming reference Hive table to log data Hive table
Alternative to Batch Replication using Sqoop : GoldenGate
•Oracle GoldenGate 12c for Big Data can replicate database transactions into Hadoop •Load directly into Hive / HDFS, or feed transactions into Apache Flume as Flume events •Provides a way to replicate Oracle + other RDBMS data into the data reservoir ‣Works with Flume to provide a single streaming route into the data reservoir
Enabling Oracle Database 12c for GoldenGate Replication
•Oracle GoldenGate 11gR2 for Oracle Database introduced Integrated Capture Mode ‣Integrated with database, just enable with alter system set enable_goldengate_replication=true ‣Required for Oracle Database 12c container databases (as found on Big Data Lite 4.1 VM)
Oracle RDBMS to Hive via Flume Configuration Steps
1. Configure the source database for ARCHIVELOG mode, integrated capture and supplemental logging
2. Create data source definition file to specify the database schema / tables to replicate
3. Set up the database capture (extract) process to write transactions to the trail file
4. Configure the GoldenGate Flume adapter to send transactions written to the trail file to a Flume agent, via Avro RPC messages
5. Set up and configure a Flume agent to receive those messages, and write them in Hive data storage format to HDFS for the target Hive table
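Step 1 on the source database amounts to SQL along these lines, run as SYSDBA (a sketch of the database-side prerequisites only, not the full GoldenGate install):

-- allow GoldenGate integrated capture against this database
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE = BOTH;
-- minimal supplemental logging so the redo contains enough detail for capture
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- switch into ARCHIVELOG mode (requires a bounce into MOUNT state)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;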
select CONCAT('Rows loaded from gg_Test.logs into HDFS via Flume: ', count(*)) from gg_test.logs;
…
Rows loaded from gg_Test.logs into HDFS via Flume: 100
sqlplus gg_test/welcome1@orcl

begin
  P_GENERATE_LOGS(100);
end;
/
ETL Step 4 : Filter Tweets to Just Leave RM Blog References
•Same process as previous step; extract from Hive source, filter, load into Hive target •Filter on two URL columns as tweet can contain multiple URL references ‣Two picked as arbitrary limit to URL extraction
Mapping Variant : Generate as Pig Latin vs. HiveQL
•ODI 12.1.3.0.1 comes with the ability to generate Pig Latin as well as HiveQL •Alternative to Hive, defines data manipulation as dataflow steps (like an execution plan) •Start with one or more data sources, add steps to apply filters, group, project columns •Generates MapReduce to execute data flow, similar to Hive; extensible through UDFs

a = load '/user/oracle/pig_demo/marriott_wifi.txt';
b = foreach a generate flatten(TOKENIZE((chararray)$0)) as word;
c = group b by word;
d = foreach c generate COUNT(b), group;
store d into '/user/oracle/pig_demo/pig_wordcount';
•A way of linking a Pig execution environment to a previously-defined Hadoop DS •Also gives ability to define additional JARs to use with Pig - DataFu, Piggybank etc •Can be defined as either Local (running Pig code on workstation) or MapReduce
Creating a Physical Mapping Configured for Pig Latin
•Create additional deployment specification for Pig physical mapping •Mapping operators will use Pig component KMs •Set KM for target table or file to <Default> (from original IKM Hive Append)
•Another requirement we have is to “geocode” the webserver log entries •Based on the fact that IP ranges can usually be attributed to specific countries •Not functionality normally found in Hive etc, but can be done with add-on APIs •Approach used by Google Analytics etc to show where visitors are located
•Uses free Geocoding API and database from Maxmind •Convert IP address to an integer •Find which integer range our IP address sits within •But Hive can’t use BETWEEN in a join…
•Solution : Expose PAGEVIEWS Hive table using Big Data SQL, then join to lookup table in Oracle database
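Once the PAGEVIEWS Hive table is exposed through an ORACLE_HIVE external table (covered later in this masterclass), the geocoding join becomes plain Oracle SQL; a sketch with illustrative table and column names, using the usual MaxMind convention of converting the dotted IP address to an integer and testing it against each country's start/end range:

SELECT p.request, p.status, g.country_name
FROM   bda_output.pageviews_exttab p
JOIN   blog_refdata.geoip_country  g
ON   ( TO_NUMBER(REGEXP_SUBSTR(p.host,'[^.]+',1,1)) * 16777216
     + TO_NUMBER(REGEXP_SUBSTR(p.host,'[^.]+',1,2)) * 65536
     + TO_NUMBER(REGEXP_SUBSTR(p.host,'[^.]+',1,3)) * 256
     + TO_NUMBER(REGEXP_SUBSTR(p.host,'[^.]+',1,4)) )
       BETWEEN g.start_ip_int AND g.end_ip_int;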
•Gives us the ability to easily bring in Hadoop (Hive) data into Oracle-based mappings •Allows us to create Hive-based mappings that use Oracle SQL for transforms, joins •Faster access to Hive data for real-time ETL scenarios •Through Hive, bring NoSQL and semi-structured data access to Oracle ETL projects •For our scenario - join weblog + customer data in Oracle RDBMS, no need to stage in Hive
•By default, Hive table has to be exposed as an ORACLE_HIVE external table in Oracle first •Then register that Oracle external table in ODI repository + model
1. External table creation in Oracle
2. Register in ODI Model
3. Logical mapping using just Oracle tables
•New KM works in similar way to Sqoop KM : creates temporary ORACLE_HIVE table to expose Hive data in Oracle environment ‣Allows Hive+Oracle joins by auto-creating ORACLE_HIVE external table definition to enable Big Data SQL Hive table access
ETL Step 7 : Sessionize Log Data, for R Classification Model
•Discovery Lab part of the masterclass created a classification model using R
•Used as input a sessionized version of the log activity, grouping page views within 60s
•Sessionization routine was written as Pig script, using DataFu and Piggybank UDFs ‣DataFu is a library of Pig functions initially developed by LinkedIn, now an Apache project ‣Piggybank is a community-created library of Pig UDFs and store/load routines
•So why was Pig used for this sessionization task?
•Ability to load data into a defined schema, or use schema-less (access fields by position) •Fields can contain nested fields (tuples) •Grouping records on a key doesn’t aggregate them, it creates a nested set of rows in column •Uses “lazy execution” - only evaluates data flow once final output has been requested •Makes Pig an excellent language for interactive data exploration
Pig Data Processing Example : Join to Post Titles, Authors
•Pig allows aliases (datasets) to be joined to each other •Example below adds details of post names, authors; outputs top pages dataset to file

raw_posts = LOAD '/user/oracle/pig_demo/posts_for_pig.csv' USING TextLoader AS (line:chararray);
posts_line = FOREACH raw_posts GENERATE FLATTEN ( STRSPLIT(line,';',10) ) AS
  ( post_id: chararray, title: chararray, post_date: chararray, type: chararray,
    author: chararray, post_name: chararray, url_generated: chararray );
posts_and_authors = FOREACH posts_line GENERATE title, author, post_name,
  CONCAT(REPLACE(url_generated,'"',''),'/') AS (url_generated:chararray);
pages_and_authors_join = JOIN posts_and_authors BY url_generated, page_request_group_count_limited BY group;
pages_and_authors = FOREACH pages_and_authors_join GENERATE url_generated, post_name, author, total_hits;
top_pages_and_authors = ORDER pages_and_authors BY total_hits DESC;
STORE top_pages_and_authors into '/user/oracle/pig_demo/top-pages-and-authors.csv' USING PigStorage(',');
•Similar to Apache Hive, Pig can be programmatically extended through UDFs •Example below uses a function defined in a Python script to geocode IP addresses
#!/usr/bin/python
import sys
sys.path.append('/usr/lib/python2.6/site-packages/')
import pygeoip

@outputSchema("country:chararray")
def getCountry(ip):
    gi = pygeoip.GeoIP('/home/nelio/GeoIP.dat')
    country = gi.country_name_by_addr(ip)
    return country
Pig Sessionization Script used in Discovery Lab

register /opt/cloudera/parcels/CDH/lib/pig/datafu.jar;
register /opt/cloudera/parcels/CDH/lib/pig/piggybank.jar;

DEFINE Sessionize datafu.pig.sessions.Sessionize('60m');
DEFINE Median datafu.pig.stats.StreamingMedian();
DEFINE Quantile datafu.pig.stats.StreamingQuantile('0.9','0.95');
DEFINE VAR datafu.pig.VAR();
DEFINE CustomFormatToISO org.apache.pig.piggybank.evaluation.datetime.convert.CustomFormatToISO();
DEFINE ISOToUnix org.apache.pig.piggybank.evaluation.datetime.convert.ISOToUnix();

-- Import and clean logs
raw_logs = LOAD '/user/flume/rm_logs/apache_access_combined' USING TextLoader AS (line:chararray);

-- Extract individual fields
logs_base = FOREACH raw_logs GENERATE FLATTEN (
  REGEX_EXTRACT_ALL(line,'^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')
) AS (remoteAddr: chararray, remoteLogName: chararray, user: chararray, time: chararray,
      request: chararray, status: chararray, bytes_string: chararray, referrer: chararray, browser: chararray);

-- Remove bots
logs_base_nobots = FILTER logs_base BY NOT (browser matches '.*(spider|robot|bot|slurp|Bot|monitis|Baiduspider|AhrefsBot|EasouSpider|HTTrack|Uptime|FeedFetcher|dummy).*');

-- Remove useless columns and convert timestamp
clean_logs = FOREACH logs_base_nobots GENERATE CustomFormatToISO(time,'dd/MMM/yyyy:HH:mm:ss Z') as time,
  remoteAddr, request, status, bytes_string, referrer, browser;

-- Sessionize the data
clean_logs_sessionized = FOREACH (GROUP clean_logs BY remoteAddr) {
  ordered = ORDER clean_logs BY time;
  GENERATE FLATTEN(Sessionize(ordered)) AS (time, remoteAddr, request, status, bytes_string, referrer, browser, sessionId);
};

-- The following steps will generate a tsv file in your home directory to download and work with in R
store clean_logs_sessionized into '/user/jmeyer/clean_logs' using PigStorage('\t','-schema');
ODI 12.1.3.0.1 Logical Mapping for Log Sessionization
•Expression operator used instead of Hive table target; generated as an ALIAS when deployed as a Pig Latin mapping •Table Function operator used to generate another ALIAS by running input attributes through an arbitrary Pig Latin script •Only data materialised is in the Hive table, at the end of the dataflow
Expression Mapping Operator Used to Create Next Alias
•Using Expression rather than datastore operator creates transformation “in-line” •With Pig execution, generates expression as ALIAS •Allows use of expressions (e.g. CustomFormatToISO Piggybank UDF) •Filters etc included in ALIAS definition
Table Function Operator used for Executing Pig Commands
•Table function operator processes input attributes through arbitrary script • In pig mappings, allows use of more complex Pig transformations ‣GENERATE FLATTEN, use of DataFu Sessionize UDF
•Final ALIAS defined within Pig Latin script has to match name of Table Function operator
Pig Latin Generated Script for Sessionization Task
•Creates single dataflow using series of ALIASes • Includes Pig Latin commands added through Table Function •Matches logic and approach of original hand-coded Pig script, but now managed within ODI
•We’ve now processed the incoming data, filtering it and transforming to required state •Joined (“mashed-up”) datasets from website activity, and social media mentions • Ingestion and the load/processing stages are now complete •Now we want to make the Hadoop output available to a wider, non-technical audience…
Options for Sharing Data Reservoir Data with Users
•Several options for reporting on the content in the data reservoir and DW ‣Using a reporting & dashboarding tool compatible with Hive + DW, e.g. OBIEE11g ‣Using a search/data discovery tool, for example Big Data Discovery ‣Export Hadoop/Hive data into Oracle and report from there
[Diagram: Oracle Information Management reference architecture - input events and structured enterprise / other data flow through the Event Engine, Data Reservoir and Data Factory into the Enterprise Information Store, feeding Reporting with actionable information and the Discovery Lab with actionable insights]
Alternative to Reporting Against Hadoop : Export to Data Mart
• In most cases, for general reporting access, exporting into RDBMS makes sense •Export Hive data from Hadoop into Oracle Data Mart or Data Warehouse •Use Oracle RDBMS for high-value data analysis, full access to RDBMS optimisations •Potentially use Exalytics for in-memory RDBMS access
[Diagram: Loading, Processing and Store/Export stages - real-time logs/events, RDBMS imports and file/unstructured imports flow in; RDBMS and file exports flow out]
•Hadoop for large scale, high-speed data ingestion and processing •Oracle RDBMS and Exadata for long-term storage of high-value data •Oracle Exalytics for speed-of-thought analytics in TimesTen and Oracle Essbase
•OBIEE 11g from 11.1.1.7 can connect to Hadoop sources ‣OBIEE 11.1.1.7+ supports Hive/Hadoop as a data source, via specific Hive ODBC drivers and Apache Hive Physical Layer database type
‣But practically, it comes with limitations ‣Current 11.1.1.7 version of OBIEE only ships with HiveServer1 ODBC drivers ‣HiveQL is a limited subset of ISO/Oracle SQL ‣… and Hive access is really slow
•As of OBIEE 11.1.1.7, access is through Oracle-supplied Data Direct Drivers ‣Not compatible with HiveServer2 protocol used by CDH4+ ‣As workaround, use Windows version of OBIEE and Cloudera ODBC drivers ‣OBIEE 11.1.1.9 will come with HiveServer2 drivers (hopefully)
•Need to configure on both server, and BI Administration workstation
Setting up the ODBC Connection to Hadoop Environment
•Example uses OBIEE 11.1.1.7 on Windows, to allow use of Cloudera Hive ODBC drivers (HiveServer2) ‣Linux OBIEE 11g version only allows use of Oracle-supplied HiveServer1 drivers
• Install ODBC drivers, create system DSN •Use username/password authentication, or Kerberos if required
1. Use BI Administration tool, File > Import Metadata 2. Select DSN previously created for Hive datasource 3. Import table metadata from correct Hive database 4. Set Database Type to Apache Hadoop
•Confirm that Hive table data can be returned by the BI Administration tool ‣Basic check before carrying on; should also check with the RPD online too (for BI Server)
Join Hive Fact (Log) Data to Oracle Reference Data
•BI Server issues two separate queries; one to Hive, one to Oracle •Returned datasets then joined (stitch-join) by BI Server and returned as single resultset
•Gives the ability to supplement Hadoop data with reference data from Oracle, Excel etc
•But response time is still quite slow •What about faster versions of Hive - Cloudera Impala for example?
•Cloudera’s answer to Hive query response time issues •MPP SQL query engine running on Hadoop, bypasses MapReduce for direct data access •Mostly in-memory, but spills to disk if required
•Uses Hive metastore to access Hive table metadata •Similar SQL dialect to Hive - not as rich though and no support for Hive SerDes, storage handlers etc
•Log into Impala Shell, run INVALIDATE METADATA command to refresh Impala table list •Run SHOW TABLES Impala SQL command to view tables available •Run COUNT(*) on main ACCESS_PER_POST table to see typical response time

[oracle@bigdatalite ~]$ impala-shell
Starting Impala Shell without Kerberos authentication
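The captured output stops at the shell banner; the rest of the session would look broadly like this sketch (the ACCESS_PER_POST table name comes from the example above, and the prompt/host shown is illustrative):

[bigdatalite.localdomain:21000] > INVALIDATE METADATA;
[bigdatalite.localdomain:21000] > SHOW TABLES;
[bigdatalite.localdomain:21000] > SELECT COUNT(*) FROM access_per_post;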
•Add Oracle HTTP Status table to business model sourced from Impala data •Join HTTP Status table to Impala fact table in Physical layer •Recreate query to compare response time to Hive + Oracle version
Federated query joining Hive + Oracle data:
Logical Query Summary Stats: Elapsed time 102, Response time 102, Compilation time 0 (seconds)

vs.

Federated query joining Impala + Oracle data:
Logical Query Summary Stats: Elapsed time 1, Response time 1, Compilation time 0 (seconds)
• If available, use Oracle Big Data SQL to query Hive data only, or federated Hive + Oracle •Access Hive data through Big Data SQL SmartScan feature, for Exadata-type response time •Use standard Oracle SQL across both Hive and Oracle data •Also extends to data in Oracle NoSQL database
•Part of Oracle Big Data 4.0 (BDA-only) ‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends Oracle Data Dictionary to cover Hive •Extends Oracle SQL and SmartScan to Hadoop •Extends Oracle Security Model over Hadoop ‣Fine-grained access control ‣Data redaction, data masking ‣Uses fast C-based readers where possible (vs. Hive MapReduce generation) ‣Maps Hadoop parallelism to Oracle PQ ‣Big Data SQL engine works on top of YARN ‣Like Spark, Tez, MR2
[Diagram: SQL queries issued against the Exadata Database Server are offloaded via SmartScan to both the Exadata Storage Servers and the Hadoop cluster running Oracle Big Data SQL]
View Hive Table Metadata in the Oracle Data Dictionary
•Oracle Database 12c 12.1.0.2.0 with Big Data SQL option can view Hive table metadata ‣Linked by Exadata configuration steps to one or more BDA clusters
•DBA_HIVE_TABLES and USER_HIVE_TABLES expose Hive metadata •Oracle SQL*Developer 4.0.3, with Cloudera Hive drivers, can connect to Hive metastore
SQL> col database_name for a30
SQL> col table_name for a30
SQL> select database_name, table_name
  2  from dba_hive_tables;
Hive Access through Oracle External Tables + Hive Driver
•Big Data SQL accesses Hive tables through external table mechanism ‣ORACLE_HIVE external table type imports Hive metastore metadata ‣ORACLE_HDFS requires metadata to be specified
•Access parameters cluster and tablename specify Hive table source and BDA cluster
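A representative definition (the Oracle column list, directory and object names are illustrative; the com.oracle.bigdata.* access parameters are how the BDA cluster and source Hive table are identified):

CREATE TABLE bda_output.access_per_post_exttab (
  post_id NUMBER,
  host    VARCHAR2(100),
  time    VARCHAR2(100),
  status  VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (
    com.oracle.bigdata.cluster   = bigdatalite
    com.oracle.bigdata.tablename = default.access_per_post
  )
)
REJECT LIMIT UNLIMITED;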
Leverages Hive Metastore for Hadoop Java Access Classes
•As with other next-gen SQL access layers, uses common Hive metastore table metadata •Provides route to underlying Hadoop data for Oracle Big Data SQL C-based SmartScan
Example Usage : Use Big Data SQL for Geocoding Exercise
•Earlier on we used ODI and Big Data SQL to join incoming log data to Geocoding table •Big Data SQL used as it enabled Hive data to use BETWEEN join •We will now reproduce using OBIEE environment
•Use the ORACLE_HIVE access driver type to create Oracle external table over Hive table •ACCESS_PER_POST_EXTTAB and POSTS_EXTTAB now appear in Oracle data dictionary
•Map incoming physical tables into a star schema •Add aggregation method for fact measures •Add logical keys for logical dimension tables •Remove columns from fact table that aren’t measures
Create Report against Oracle + Big Data SQL Tables
•BI Server thinks that all data sourced from Oracle •Uses full Oracle SQL features, guarantees all Oracle-sourced reports will work if DW data offloaded to Hadoop (Hive)
•Fast access through SmartScan feature
WITH SAWITH0 AS (
  select count(T45134.TIME) as c1,
         T45146.POST_AUTHOR as c2,
         T44832.DSC as c3
  from   BDA_OUTPUT.POSTS_EXTTAB T45146,
         BLOG_REFDATA.HTTP_STATUS_CODES T44832,
         BDA_OUTPUT.ACCESS_PER_POST_EXTTAB T45134
  where  ( T44832.STATUS = T45134.STATUS and T45134.POST_ID = T45146.POST_ID )
  group by T44832.DSC, T45146.POST_AUTHOR)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4
from   ( select distinct 0 as c1, D1.c2 as c2, D1.c3 as c3, D1.c1 as c4
         from SAWITH0 D1
         order by c3, c2 ) D1
where  rownum <= 65001
Uses Concept of Query Franchising vs Query Federation
•Oracle Database handles all queries for client tool, then offloads to Hive if needed •Contrast with query federation - BI Server has to issue separate SQL queries for each source, then stitch-join results ‣And be aware of different SQL dialects, DB features etc
•Only the columns (projection) and rows (filtering) required to answer the query are sent back to Exadata
•Storage Indexes used on both Exadata Storage Servers and BDA nodes to skip block reads for irrelevant data
•HDFS caching used to speed-up access to commonly-used HDFS data
Prepare Physical Model for Big Data SQL Join to GEOIP Data
•Create SELECT table view in RPD over ACCESS_PER_POST_EXTTAB table to derive IP address integer from hostname IP address ‣Also add in a conversion of access date field - for later…
• Import GEOIP_COUNTRY reference table into RPD •Join on the derived IP integer falling within each country’s IP address range (BETWEEN join)
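The SQL behind that physical “SELECT” table would look something like this (column names are illustrative, and it assumes the Hive time field was captured without its surrounding brackets); the arithmetic is the same MaxMind IP-to-integer conversion used in the earlier ODI geocoding step, and TO_DATE prepares the access date for the time-series reporting described later:

SELECT a.post_id,
       a.host,
       TO_NUMBER(REGEXP_SUBSTR(a.host,'[^.]+',1,1)) * 16777216
     + TO_NUMBER(REGEXP_SUBSTR(a.host,'[^.]+',1,2)) * 65536
     + TO_NUMBER(REGEXP_SUBSTR(a.host,'[^.]+',1,3)) * 256
     + TO_NUMBER(REGEXP_SUBSTR(a.host,'[^.]+',1,4))          AS ip_integer,
       TO_DATE(SUBSTR(a.time,1,20),'DD/MON/YYYY:HH24:MI:SS') AS access_date
FROM   bda_output.access_per_post_exttab a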
•No longer restricted to HiveQL equi-joins - Big Data SQL supports all Oracle join operators •Use to join Hive data (using View over external table) to the IP range country lookup table using BETWEEN join operator
•Enables time-series reporting; pre-req for forecasting (linear regression-type queries) •Map to Date field in view over ORACLE_HIVE table ‣Convert incoming Hive STRING field to Oracle DATE for better time-series manipulation
Use Exalytics In-Memory Aggregate Cache if Required
• If further query acceleration is required, Exalytics In-Memory Cache can be used •Enabled through Summary Advisor, caches commonly-used aggregates in in-memory cache •Options for TimesTen or Oracle Database 12c In-Memory Option •Returns aggregated data “at the speed of thought”
Enable Incoming Site Activity Data for Data Discovery
•Another use-case for Hadoop data is “data discovery” ‣Load data into the data reservoir ‣Catalog and understand separate datasets ‣Enrich data using graphical tools ‣Join separate datasets together ‣Present textual data alongside measures and key attributes ‣Explore and analyse using faceted search
‣Combine with site content, semantics, text enrichment; catalog and explore using Oracle Big Data Discovery
Why is some content more popular? Does sentiment affect viewership? What content is popular, where?
• “The Visual Face of Hadoop” - cataloging, analysis and discovery for the data reservoir •Runs on Cloudera CDH5.3+ (Hortonworks support coming soon) •Combines Endeca Server + Studio technology with Hadoop-native (Spark) transformations
Ingesting & Sampling Datasets for the DGraph Engine
•Datasets in Hive have to be ingested into DGraph engine before analysis, transformation •Can either define an automatic Hive table detector process, or manually upload •Typically ingests 1m row random sample ‣1m row sample provides > 99% confidence that answer is within 2% of value shown, no matter how big the full dataset (1m, 1b, 1q+) ‣Makes interactivity cheap - representative dataset
[Chart: query cost vs. accuracy as the amount of data queried grows - the “100% premium” of querying the full dataset rather than a sample]
Ingesting Site Activity and Tweet Data into DGraph
•Two output datasets from ODI process have to be ingested into DGraph engine •Upload triggered by manual call to BDD Data Processing CLI ‣Runs Oozie job in the background to profile, enrich and then ingest data into DGraph
[Diagram: BDD ingest pipeline - the full Hive table is sampled via Apache Spark, then the sampled table is profiled and enriched before being loaded as a BDD dataset]
• Ingestion process has automatically geo-coded host IP addresses •Other automatic enrichments run after initial discovery step, based on datatypes, content
Initial Data Exploration On Uploaded Dataset Attributes
•For the ACCESS_PER_POST_CAT_AUTHORS dataset, 18 attributes now available •Combination of original attributes, and derived attributes added by enrichment process
•Select refinement (filter) values from refinement pane •Visualization in scratchpad now filtered by that attribute ‣Repeat to filter by multiple attribute values
•Group and bin attribute values; filter on attribute values, etc •Use Transformation Editor for custom transformations (Groovy, incl. enrichment functions)
Datatype Conversion Example : String to Date / Time
•Datatypes can be converted into other datatypes, with data transformed if required •Example : convert Apache Combined Format Log date/time to Java date/time
•Users can upload their own datasets into BDD, from MS Excel or CSV file •Uploaded data is first loaded into Hive table, then sampled/ingested as normal
•Used to create a dataset based on the intersection (typically) of two datasets •Not required to just view two or more datasets together - think of this as a JOIN and SELECT
Join Example : Add Post + Author Details to Tweet URL
•Tweets ingested into data reservoir can reference a page URL •Site Content dataset contains title, content, keywords etc for RM website pages •We would like to add these details to the tweets where an RM web page was mentioned ‣And also add page author details missing from the site contents upload
•Tweets - the main “driving” dataset; contains tweet user details, tweet text, hashtags, URL referenced, location of tweeter etc ‣Joined on the URL referenced in the tweet to… •Site Content - contains full details of each site page, including URL, title, content, category ‣Joined on the internal Page ID to… •Posts - contains the post author details missing from the Site Content dataset
Multi-Dataset Join Step 1 : Join Site Contents to Posts
•Site contents dataset needs to gain access to the page author attribute only found in Posts •Create join in the Dataset Relationships panel, using Post ID as the common attribute •Join from Site contents to Posts, to create left-outer join from first to second table
Previews rows from the join, based on post_id = a (post_id column)
•URLs in Twitter dataset have trailing ‘/‘, whereas URLs in RM site data do not •Use the Transformation feature in Studio to add trailing ‘/‘ to RM site URLs •Select option to replace the current URL values and overwrite within project dataset
Multi-Dataset Join Step 3 : Join Tweets to Site Content
•Join on the standardised-format URL attributes in the two datasets •Data view will now contain the page content and author for each tweet mentioning RM
Key BDD Studio Differentiator : Faceted Search Across Hadoop
•BDD Studio dashboards support faceted search across all attributes, refinements •Auto-filter dashboard contents on selected attribute values - for data discovery •Fast analysis and summarisation through Endeca Server technology
‣Further refinement on “OBIEE” in post keywords ‣Results now filtered on two refinements
Key BDD Studio Differentiator : Faceted Search Across Hadoop
•BDD Studio dashboards support faceted search across all attributes, refinements •Auto-filter dashboard contents on selected attribute values •Fast analysis and summarisation through Endeca Server technology
‣“Mark Rittman” selected from Post Authors ‣Results filtered on selected refinement
•Oracle Big Data, together with OBIEE, ODI and Oracle Big Data Discovery •Complete end-to-end solution with engineered hardware, and Hadoop-native tooling
1. Combine with Oracle Big Data SQL for structured OBIEE dashboard analysis
2. Combine with site content, semantics, text enrichment; catalog and explore using Oracle Big Data Discovery
•Articles on the Rittman Mead Blog ‣http://www.rittmanmead.com/category/oracle-big-data-appliance/ ‣http://www.rittmanmead.com/category/big-data/ ‣http://www.rittmanmead.com/category/oracle-big-data-discovery/
•Slides will be on the BI Forum USB sticks •Rittman Mead offer consulting, training and managed services for Oracle Big Data ‣Oracle & Cloudera partners ‣http://www.rittmanmead.com/bigdata
•Thank you for attending this presentation, and more information can be found at http://www.rittmanmead.com
•Contact us at [email protected] or [email protected] •Look out for our book, “Oracle Business Intelligence Developers Guide”, out now! •Follow us on Twitter (@rittmanmead) or Facebook (facebook.com/rittmanmead)