T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India)
E : [email protected]
W : www.rittmanmead.com
Leveraging Hadoop with OBIEE 11g and ODI 11g
Mark Rittman, CTO, Rittman Mead
UKOUG Tech'13 Conference, Manchester, December 2013
Leveraging Hadoop with OBIEE 11g and ODI 11g - UKOUG Tech'13
The latest releases of OBIEE and ODI come with the ability to connect to Hadoop data sources, using MapReduce to integrate data from clusters of "big data" servers, complementing traditional BI data sources. In this presentation, we will look at how these two tools connect to Apache Hadoop and access "big data" sources, and share tips and tricks on making it all work smoothly.
Transcript
•Mark Rittman, Co-Founder of Rittman Mead •Oracle ACE Director, specialising in Oracle BI & DW •14 Years Experience with Oracle Technology •Regular columnist for Oracle Magazine •Author of two Oracle Press Oracle BI books ‣Oracle Business Intelligence Developers Guide ‣Oracle Exalytics Revealed •Writer for the Rittman Mead Blog: http://www.rittmanmead.com/blog
•Oracle BI and DW Gold partner •Winner of five UKOUG Partner of the Year awards in 2013 - including BI •World leading specialist partner for technical excellence, solutions delivery and innovation in Oracle BI
•Approximately 80 consultants worldwide •All expert in Oracle BI and DW •Offices in US (Atlanta), Europe, Australia and India •Skills in broad range of supporting Oracle tools: ‣OBIEE, OBIA ‣ODI-EE ‣Essbase, Oracle OLAP ‣GoldenGate ‣Endeca
[Diagram: traditional structured data sources feed the relational data warehouse via data loads, with an OLAP / in-memory tool loading data into its own database alongside direct-read access]
Traditional Relational Data Warehouse
•Three-layer architecture - staging, foundation and access/performance •All three layers stored in a relational database (Oracle) •ETL used to move data from layer-to-layer
Recent Innovations and Developments in DW Architecture
•The rise of "big data" and Hadoop ‣New ways to process, store and analyse data ‣New paradigm for TCO - low-cost servers, open-source software, cheap clustering
•Explosion in potential data-source types ‣Unstructured data ‣Social media feeds ‣Schema-less and schema-on-read databases
•New ways of hosting data warehouses ‣In the cloud ‣Do we even need an Oracle database or DW?
•Lots of opportunities for DW/BI developers - make our systems cheaper, and bring in a wider range of data
Unstructured, Semi-Structured and Schema-Less Data
•Gaining access to the vast amounts of non-financial / application data out there ‣Data in documents, spreadsheets etc
-Warranty claims, supporting documents, notes etc ‣Data coming from the cloud / social media ‣Data for which we don't yet have a structure ‣Data whose structure we'll decide when we choose to access it ("schema-on-read")
•All of the above could be useful information to have in our DW and BI systems ‣But how do we load it in? ‣And what if we want to access it directly?
[Diagram: schema-less / NoSQL data sources; unstructured / social / document data sources; Hadoop / big data sources]
•Apache Hadoop is one of the most well-known Big Data technologies ‣Family of open-source products used to store and analyze distributed datasets ‣Hadoop is the enabling framework, automatically parallelises and co-ordinates jobs ‣MapReduce is the programming framework for filtering, sorting and aggregating data ‣Map : filter data and pass on to reducers ‣Reduce : sort, group and return results ‣MapReduce jobs can be written in any language (Java etc), but it is complicated
•Can be used as an extension of the DW staging layer - cheap processing & storage •And there may be data stored in Hadoop that our BI users might benefit from
•The filesystem behind Hadoop, used to store data for Hadoop analysis ‣Unix-like, uses commands such as ls, mkdir, chown, chmod
•Fault-tolerant, with rapid fault detection and recovery •High-throughput, with streaming data access and large block sizes •Designed for data-locality, placing data close to where it is processed •Accessed from the command-line, via hdfs:// URIs, GUI tools etc
•MapReduce jobs are typically written in Java, but Hive can make this simpler •Hive is a query environment over Hadoop/MapReduce to support SQL-like queries •Hive server accepts HiveQL queries via HiveODBC or HiveJDBC, automatically creates MapReduce jobs against data previously loaded into the Hive HDFS tables
•Approach used by ODI and OBIEE to gain access to Hadoop data
•Allows Hadoop data to be accessed just like any other data source (sort of...)
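As a quick illustration of the workflow the bullets describe - define a table over HDFS data, then query it with HiveQL - here is a hedged sketch; the table and column names are hypothetical:

```sql
-- Define a Hive table over tab-delimited files (schema-on-read);
-- querying it causes the Hive server to generate MapReduce jobs
CREATE TABLE web_logs (
  ip_address  STRING,
  request_url STRING,
  status_code INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- A HiveQL query submitted via HiveODBC, HiveJDBC or the Hive shell
SELECT status_code, count(*)
FROM web_logs
GROUP BY status_code;
```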
•HiveQL queries are automatically translated into Java MapReduce jobs •Selection and filtering part becomes Map tasks •Aggregation part becomes the Reduce tasks
SELECT a, sum(b) FROM myTable WHERE a<100 GROUP BY a
[Diagram: the query fans out as three Map tasks (selection and filtering), feeding two Reduce tasks (aggregation), which produce the result]
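Hive can show the map and reduce stages it generates for a query like this one; a hedged sketch using Hive's EXPLAIN command, reusing the hypothetical myTable/a/b from the slide:

```sql
-- Ask Hive to show the generated MapReduce plan rather than run the query
EXPLAIN
SELECT a, sum(b)
FROM myTable
WHERE a < 100
GROUP BY a;
-- The plan output lists a map stage (TableScan + Filter on a < 100)
-- followed by a reduce stage (Group By Operator computing sum(b))
```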
An example Hive Query Session: Display Table Row Count
hive> select count(*) from src_customer;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201303171815_0003, Tracking URL = http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201303171815_0003
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job -Dmapred.job.tracker=localhost.localdomain:8021 -kill job_201303171815_0003
2013-04-17 04:06:59,867 Stage-1 map = 0%, reduce = 0%
2013-04-17 04:07:03,926 Stage-1 map = 100%, reduce = 0%
2013-04-17 04:07:14,040 Stage-1 map = 100%, reduce = 33%
2013-04-17 04:07:15,049 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201303171815_0003
OK
25
Time taken: 22.21 seconds
Request count(*) from table
Hive server generates a MapReduce job to "map" the table into key/value pairs, and then reduce the results to the table row count
•Where many organisations are going: •Traditional DW at core of strategy •Making increasing use of low-cost, cloud/big data tech for storage / pre-processing
•Access to non-traditional data sources, usually via ETL into the DW
•Federated data access through OBIEE connectivity & metadata layer
DW 2013: The Mixed Architecture with Federated Queries
•Oracle Big Data Appliance - Engineered System for Big Data Acquisition and Processing ‣Cloudera Distribution of Hadoop ‣Cloudera Manager ‣Open-source R ‣Oracle NoSQL Database Community Edition ‣Oracle Enterprise Linux + Oracle JVM
•Oracle Big Data Connectors ‣Oracle Loader for Hadoop (Hadoop > Oracle RDBMS) ‣Oracle Direct Connector for HDFS (HDFS > Oracle RDBMS) ‣Oracle Data Integration Adapter for Hadoop ‣Oracle R Connector for Hadoop ‣Oracle NoSQL Database (column/key-store DB based on BerkeleyDB)
•Oracle technology for accessing Hadoop data, and loading it into an Oracle database •Pushes data transformation, “heavy lifting” to the Hadoop cluster, using MapReduce •Direct-path loads into Oracle Database, partitioned and non-partitioned •Online and offline loads •Key technology for fast load of Hadoop results into Oracle DB
•Enables HDFS as a data-source for Oracle Database external tables •Effectively provides Oracle SQL access over HDFS •Supports data query, or import into Oracle DB •Treat HDFS-stored files in the same way as regular files ‣But with HDFS’s low-cost ‣… and fault-tolerance
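A hedged sketch of the mechanism the bullets describe - an Oracle external table whose PREPROCESSOR clause invokes the connector's hdfs_stream script to stream HDFS file content into the access driver. The directory objects, location file, table and column names here are all hypothetical:

```sql
-- Hypothetical external table exposing HDFS-resident delimited files;
-- hdfs_bin_dir is assumed to point at the Direct Connector's bin directory,
-- and the location file is assumed to have been generated by the connector
CREATE TABLE sales_hdfs_ext (
  sale_id NUMBER,
  amount  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR hdfs_bin_dir:'hdfs_stream'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sales_hdfs.loc')
)
REJECT LIMIT UNLIMITED;

-- Query in place with Oracle SQL, or CTAS/INSERT to import into Oracle DB
SELECT sum(amount) FROM sales_hdfs_ext;
```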
•ODI 11g Application Adapter (pay-extra option) for Hadoop connectivity •Works for both Windows and Linux installs of ODI Studio ‣Need to source HiveJDBC drivers and JARs from separate Hadoop install
•Provides six new knowledge modules ‣IKM File to Hive (Load Data) ‣IKM Hive Control Append ‣IKM Hive Transform ‣IKM File-Hive to Oracle (OLH) ‣CKM Hive ‣RKM Hive
•ODI is the data integration tool for extracting data from Hadoop/MapReduce, and loading into Oracle Big Data Appliance, Oracle Exadata and Oracle Exalytics
•Oracle Application Adapter for Hadoop provides the required data adapters ‣Load data into Hadoop from the local filesystem, or HDFS (Hadoop clustered FS) ‣Read data from Hadoop/MapReduce using Apache Hive (JDBC) and HiveQL, load into Oracle RDBMS using Oracle Loader for Hadoop
•Supported by Oracle’s Engineered Systems ‣Exadata ‣Exalytics ‣Big Data Appliance
•OBIEE 11g can also make use of big data sources ‣OBIEE 11.1.1.7+ supports Hive/Hadoop as a data source ‣Oracle R Enterprise can expose R models through DB functions, columns ‣Oracle Exalytics has InfiniBand connectivity to Oracle BDA
•Endeca Information Discovery can analyze unstructured and semi-structured sources ‣Increasingly tighter integration between OBIEE and Endeca
Opportunities for OBIEE and ODI with Big Data Sources and Tools
•Load data from a Hadoop/HDFS/NoSQL environment into a structured DW for analysis •Provide OBIEE as an alternative to Java coding or HiveQL for analysts
OBIEE and ODI Access to Hive: MapReduce with no Java Coding
•Requests in HiveQL arrive via HiveODBC, HiveJDBC, or through the Hive command shell
•JDBC and ODBC access requires the Thrift server ‣Provides an RPC call interface over Hive for external processes
•All queries then get parsed, optimized and compiled, then sent to the Hadoop NameNode and JobTracker
•Hadoop then processes the query, generating MapReduce jobs and distributing them to run in parallel across all data nodes
•Hadoop access can still be performed procedurally if needed, typically coded by hand in Java, or through Pig etc ‣The equivalent of PL/SQL compared to SQL ‣But Hive works well with the OBIEE/ODI paradigm
•You can download your own Hive binaries, libraries etc from Apache Hadoop website •Or use pre-built VMs and distributions from the likes of Cloudera ‣Cloudera CDH3/4 is used on Oracle Big Data Appliance ‣Open-source + proprietary tools (Cloudera Manager)
•Other tools for managing Hive, HDFS etc. include: ‣Hue (HDFS file browser + management) ‣Beeswax (Hive administration + querying)
•ODI accesses data in Hadoop clusters through Apache Hive ‣Metadata and query layer over MapReduce ‣Provides SQL-like language (HiveQL) and a data dictionary ‣Provides a means to define "tables", into which file data is loaded, and then queried via MapReduce ‣Accessed via the Hive JDBC driver (separate Hadoop install required on the ODI server, for client libs)
•Additional access through Oracle Direct Connector for HDFS and Oracle Loader for Hadoop
[Diagram: ODI 11g issues HiveQL to the Hive server on the Hadoop cluster, which runs MapReduce jobs; direct-path loads into the Oracle RDBMS use Oracle Loader for Hadoop, with transformation logic in MapReduce]
Relationship Between ODI and OBIEE with Big Data Sources
•OBIEE now has the ability to report against Hadoop data, via Hive ‣Assumes that data is already loaded into the Hive warehouse tables
•ODI therefore can be used to load the Hive tables, through either: ‣Loading Hive from files ‣Joining and loading from Hive-Hive ‣Loading and transforming via shell scripts (python, perl etc)
•ODI could also extract the Hive data and load into Oracle, if more appropriate
•Obtain an installation of Hadoop/Hive from somewhere (Cloudera CDH3/4 for example) •Copy the required files into a temp directory, archive and transfer to the ODI environment, for example...
•Copy the JAR files into the userlib directory and (standalone) agent lib directory
Reverse Engineering Hive, HDFS and Local File Datastores + Models
•Hive tables reverse-engineer just like regular tables •Define model in Designer navigator, uses Hive RKM to retrieve table metadata •Information on Hive-specific metadata stored in flexfields: ‣Hive Buckets ‣Hive Partition Column ‣Hive Cluster Column ‣Hive Sort Column
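Those four flexfield properties correspond directly to clauses in the Hive table DDL; a hedged sketch with hypothetical table and column names:

```sql
-- Hypothetical Hive table showing the metadata the Hive RKM captures:
-- partition column, cluster column, sort column and bucket count
CREATE TABLE page_views (
  viewtime  INT,
  userid    STRING,
  page_url  STRING
)
PARTITIONED BY (view_date STRING)   -- Hive Partition Column
CLUSTERED BY (userid)               -- Hive Cluster Column
SORTED BY (viewtime ASC)            -- Hive Sort Column
INTO 32 BUCKETS;                    -- Hive Buckets
```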
IKM File to Hive (Load Data): Loading Hive Tables from File or HDFS
•Uses the Hive Load Data command to load from local or HDFS files
•Calls Hadoop FS commands for simple copy/move into/around HDFS
•Commands generated by ODI through IKM File to Hive (Load Data)
hive> load data inpath '/user/oracle/movielens_src/u.data'
    > overwrite into table movie_ratings;
Loading data to table default.movie_ratings
Deleted hdfs://localhost.localdomain/user/hive/warehouse/movie_ratings
OK
Time taken: 0.341 seconds
IKM File-Hive to Oracle: Extract from Hive into Oracle Tables
•Uses Oracle Loader for Hadoop (OLH) to process any filtering, aggregation and transformation in Hadoop, using MapReduce
•OLH part of Oracle Big Data Connectors (additional cost) •High-performance loader into Oracle DB •Optional sort by primary key, pre-partitioning of data •Can utilise the two OLH loading modes: ‣JDBC or OCI direct load into Oracle ‣Unload to files, Oracle Data Pump load into Oracle DB
•No specific technology or driver for NoSQL databases, but can use Hive external tables •Requires a specific “Hive Storage Handler” for key/value store sources ‣Hive feature for accessing data from other DB systems, for example MongoDB, Cassandra ‣For example, https://github.com/vilcek/HiveKVStorageHandler
•Additionally needs the Hive collect_set aggregation method to aggregate results ‣Has to be defined in the Languages panel in Topology
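A hedged sketch of the pattern these two slides describe - an external table over a key/value store via a custom storage handler, aggregated with collect_set(). The handler class, table properties, and table/column names are all hypothetical (the real class comes from the HiveKVStorageHandler project linked above):

```sql
-- Hypothetical Hive external table over a key/value NoSQL store,
-- using a custom storage handler class (name is illustrative only)
CREATE EXTERNAL TABLE customer_kv (
  cust_key    STRING,
  cust_value  STRING
)
STORED BY 'org.vilcek.hive.kv.KVStorageHandler'      -- hypothetical class
TBLPROPERTIES ('kv.host.port' = 'localhost:5000');   -- hypothetical property

-- collect_set() gathers the multiple values per key into one row
SELECT cust_key, collect_set(cust_value)
FROM customer_kv
GROUP BY cust_key;
```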
Pig, Sqoop and other Hadoop Technologies, and Hive
•Future versions of ODI might use other Hadoop technologies ‣Apache Sqoop for bulk transfer between Hadoop and RDBMSs
•Other technologies are not such an obvious fit ‣Apache Pig - the equivalent of PL/SQL for Hive’s SQL
•Commercial vendors may produce “better” versions of Hive, MapReduce etc ‣Cloudera Impala - more “real-time” version of Hive ‣MapR - solves many current issues with MapReduce, 100% Hadoop API compatibility
•Watch this space...!
•Two main scenarios for OBIEE 11g accessing “big data” sources 1. Through the data warehouse - no different to any other data provided through the DW 2. Directly - through OBIEE 11.1.1.7+ Hadoop/Hive connectivity
New in OBIEE 11.1.1.7 : Hadoop Connectivity through Hive
•MapReduce jobs are typically written in Java, but Hive can make this simpler •Hive is a query environment over Hadoop/MapReduce to support SQL-like queries •Hive server accepts HiveQL queries via HiveODBC or HiveJDBC, automatically creates MapReduce jobs against data previously loaded into the Hive HDFS tables
•Approach used by ODI and OBIEE to gain access to Hadoop data •Allows Hadoop data to be accessed just like any other data source
•The HiveODBC driver has to be installed into the Windows environment, so that the BI Administration tool can connect to Hive and return table metadata
•Import as an ODBC data source, change the physical DB type to Apache Hadoop afterwards •Note that OBIEE queries cannot span more than one Hive schema (no table prefixes)
•OBIEE 11.1.1.7+ ships with HiveODBC drivers, but you need to use the 7.x versions (only Linux is supported)
•Configure the ODBC connection in odbc.ini, name needs to match RPD ODBC name •BI Server should then be able to connect to the Hive server, and Hadoop/MapReduce
[ODBC Data Sources]
AnalyticsWeb=Oracle BI Server
Cluster=Oracle BI Server
SSL_Sample=Oracle BI Server
bigdatalite=Oracle 7.1 Apache Hive Wire Protocol
Dealing with Hadoop / Hive Latency Option 1 : Exalytics
•Hadoop access through Hive can be slow - due to inherent latency in Hive •Hive queries use MapReduce in the background to query Hadoop ‣Spins up a Java VM on each query ‣Generates a MapReduce job ‣Runs and collates the answer •Great for large, distributed queries... •...but not so good for "speed-of-thought" dashboards •So what if we could use Exalytics to speed up Hadoop queries?
•Engineered system, complements Oracle Exadata Database Machine (but can work standalone) •Combination of high-end hardware (Sun x86_64 architecture, 3RU rack-mountable, 1-2TB RAM) and optimized versions of Oracle's BI, In-Memory Database and OLAP software
•Delivers "in-memory analytics" focusing on analysis, aggregation and UI ‣Rich, interactive dashboards with split-second response times ‣1-2TB (and now 4TB) of RAM, to run your analysis in-memory ‣InfiniBand connection to Exadata and Oracle BDA ‣40 CPU cores (and now 128) to support high user numbers ‣Lower TCO through known configuration, combined patch sets ‣Contains software features only licensable through the Exalytics package
• In conjunction with a well-tuned data warehouse, Exalytics adds an in-memory analysis layer •Based around Oracle TimesTen for Exalytics, Oracle’s In-Memory Database •Aggregates are recommended based on query patterns, automatically created in TimesTen •Summary Advisor makes recommendations, which adapt as queries change •Meant to be “plug-and-play” - no need for expensive data warehouse tuning
•So can we use this for speeding-up Hadoop/Hive queries?
[Diagram: Exalytics - the BI Server queries aggregates held in TimesTen, backed by detail-level data in the data warehouse]
Summary Advisor for Aggregate Recommendation & Creation
•Utility within Oracle BI Administrator tool that recommends aggregates •Bases recommendations on usage tracking and summary statistics data •Captured based on past activity •Runs an iterative algorithm that searches, each iteration, for the best aggregate
•A simple Hadoop / Hive BMM was created, based on a single Hive table •Queries run against that BMM that requested aggregates •Query details, and requested aggregates, go in the usage tracking & summary statistics tables •Avg. query response time = 30 secs+
select avg(T44678.age) as c1,
       T44678.sales_pers as c2,
       sum(T44678.age) as c3,
       count(T44678.age) as c4
from dwh_customer T44678
group by T44678.sales_pers
Generate Aggregate Recommendations using Summary Advisor
•Ensure BMM has one or more logical dimensions + 2 or more logical levels •Ensure S_NQ_SUMMARY_ADVISOR table has aggregate recordings + level details •Generate summary recommendations using Summary Advisor, output as nqcmd script
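The Summary Advisor's output is an aggregate-creation script run through nqcmd; a hedged sketch of its general shape - the aggregate, business model, level, connection pool and schema names below are all hypothetical:

```sql
-- Hypothetical nqcmd script of the kind Summary Advisor generates:
-- persist an aggregate into TimesTen at the chosen logical level
create aggregates
"ag_cust_by_salesperson"
for "Hive Sales"."Fact Customers"("Age")
at levels ("Hive Sales"."Customers"."Sales Person")
using connection pool "TimesTen for Exalytics"."CP"
in "TimesTen for Exalytics".."EXALYTICS";
```

Running the script rebuilds the aggregate table and maps it into the RPD, so subsequent logical queries at that level are rerouted to TimesTen rather than Hive.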
Dealing with Hadoop / Hive Latency Option 2 : Use Impala
•Hive is slow - because it’s meant to be used for batch-mode queries •Many companies / projects are trying to improve Hive - one of which is Cloudera •Cloudera Impala is an open-source but commercially-sponsored in-memory MPP platform
•Replaces Hive and MapReduce in the Hadoop stack •Can we use this, instead of Hive, to access Hadoop? ‣It will need to work with OBIEE ‣Warning - it won’t be a supported data source (yet…)
•Warning - unsupported source - limited testing and no support from MOS •Requires the Cloudera Impala ODBC drivers - Windows or Linux (RHEL/SLES etc) - 32/64 bit •ODBC driver / DSN connection steps are similar to Hive
•Import Impala tables (via the Hive metastore) into the RPD •Set database type to "Apache Hadoop" ‣Warning - don't set the ODBC type to Hadoop, leave at ODBC 2.0 ‣Create physical layer keys, joins etc as normal
•With ORDER BY disabled in the DB features, it appears to work •But not extensively tested by me, or Oracle •But it's certainly interesting •Reduces 30s, 180s queries down to 1s, 10s etc •Impala, or one of the competitor projects (Drill, Dremel etc), is assumed to be the real-time query replacement for Hive, in time ‣Oracle announced planned support for Impala at OOW2013 - watch this space
•Thank you for attending this presentation, and more information can be found at http://www.rittmanmead.com
•Contact us at [email protected] or [email protected] •Look out for our book, "Oracle Business Intelligence Developers Guide", out now! •Follow us on Twitter (@rittmanmead) or Facebook (facebook.com/rittmanmead)