Page 1: Building Data Warehouse

Building Data Warehouse

Zhenhao Qi
Department of Biochemistry & Department of Computer Science and Engineering
State University of New York at Buffalo

March 23rd, 2000

Page 2: Building Data Warehouse

Outline:

1. Migrating data from legacy systems: an iterative, incremental methodology.

2. Building high data quality into data warehouse.

3. Optimal machine architectures for parallel query scalability.

Page 3: Building Data Warehouse

The difficulties of migrating data from legacy systems:

1. The same data is presented differently in different systems.

2. The schema for a single database may not be consistent over time.

3. Data may simply be bad.

4. The data values are not represented in a form that is meaningful to end users.

5. Conversions and migrations in heterogeneous environments typically involve data from multiple incompatible DBMSs and hardware platforms.

6. The execution windows for data conversion programs must be coordinated carefully in order to provide the new applications with a consistent view of data without impacting production systems.

Page 4: Building Data Warehouse

The need for an iterative, incremental methodology

IT organizations have failed by applying to large migration problems the same serial methodology they used for relatively discrete projects.

1. In a large organization, the complexity of the analysis and design can involve person-years of effort without any demonstrable results.

2. The sheer complexity of both the data analysis and the design of the target system has prevented effective progress.

3. The rate of change in operational systems has outstripped the migration team’s ability to keep current.

Page 5: Building Data Warehouse

Implications for metadata and the data warehouse

Three sources of change that the data warehouse team must anticipate:

• Those arising from the normal, regular changes to operational systems.

• Those that result from using an iterative, incremental methodology.

• Those that result from external business drivers such as acquisitions.

The key to dealing with change cost effectively lies in metadata.

Page 6: Building Data Warehouse

Four Dimensions of Metadata:

Another way of looking at the sources of technical challenge is with respect to the type of metadata required to minimize the impact of change. The need to adapt to change, error, and complexity recurs regularly over time, like waves on a beach.

[Diagram: the four dimensions — change, time, complexity, and error.]

Page 7: Building Data Warehouse

Metadata that capture the current environment

• The record and data element definitions.
• Inter- and intra-database relationships.
• A definition of each interface program used to build or refresh the warehouse:
 a. which inter-database joins it uses;
 b. the timing and direction of its execution;
 c. any execution parameters;
 d. dependencies on any other interface programs;
 e. the use or production of other ancillary databases;
 f. the name and location of the file that contains the source code for this data interface program;
 g. the tool and session name if this interface program was automatically generated.
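A minimal sketch of how the interface-program metadata above might be recorded, written as a Python dataclass; every field and example name here is an illustrative assumption, not the presentation's meta-model:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InterfaceProgram:
    name: str
    inter_database_joins: List[str]        # (a) inter-database joins it uses
    schedule: str                          # (b) timing of execution
    direction: str                         # (b) e.g. "operational -> warehouse"
    execution_parameters: dict = field(default_factory=dict)      # (c)
    depends_on: List[str] = field(default_factory=list)           # (d)
    ancillary_databases: List[str] = field(default_factory=list)  # (e)
    source_file: Optional[str] = None      # (f) file containing the source code
    generator_tool: Optional[str] = None   # (g) tool name, if auto-generated
    generator_session: Optional[str] = None  # (g) session name, if auto-generated

# A purely hypothetical entry:
load_customer_dim = InterfaceProgram(
    name="load_customer_dim",
    inter_database_joins=["orders.cust_id = customers.id"],
    schedule="nightly 02:00",
    direction="operational -> warehouse",
    execution_parameters={"batch_size": 10000},
    depends_on=["load_region_dim"],
    source_file="/etl/load_customer_dim.py",
)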

Page 8: Building Data Warehouse

Metadata required to reduce the cost of errors

The meta-model should allow the inclusion of the information discovered at the level of data element, record, database, and join. This information includes but is not limited to:

• Legal ranges and values.
• Any exception logic the data interface program should execute if an illegal value is found.
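As a hedged sketch, these two kinds of error metadata could be stored per data element, with the exception action looked up whenever a value fails its rule; the rule names and actions are invented for illustration:

LEGAL_VALUES = {
    "gender":     {"values": {"M", "F", "U"},    "on_error": "replace_with_U"},
    "age":        {"range": (0, 120),            "on_error": "reject_record"},
    "state_code": {"values": {"NY", "NJ", "PA"}, "on_error": "route_to_suspense"},
}

def check(element, value):
    """Return None if the value is legal, else the exception action to take."""
    rule = LEGAL_VALUES[element]
    if "values" in rule and value not in rule["values"]:
        return rule["on_error"]
    if "range" in rule:
        lo, hi = rule["range"]
        if not (lo <= value <= hi):
            return rule["on_error"]
    return None

print(check("age", 150))     # -> reject_record
print(check("gender", "M"))  # -> None (value is legal)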

Page 9: Building Data Warehouse

Metadata that can reduce the cost of complexity

Other types of metadata may be needed to reduce the complexity of specifying, maintaining, and executing the data interface programs.

Factoring in time:
1. Information about each database that can affect execution time, such as:
 a. Database size and volatility.
 b. The time window during which each database can be accessed.
 c. The mechanism that should be used for changed-data capture.
2. Versioning:
 a. Design the meta-model to anticipate change.
 b. Choose tools that provide sufficient versioning support to facilitate impact analysis.
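A small sketch of the timing and versioning metadata just listed, with invented field values; the structure is an assumption, not the presentation's meta-model:

from dataclasses import dataclass
from datetime import time

@dataclass
class DatabaseTimingMetadata:
    name: str
    size_gb: float            # 1a. database size (drives execution time)
    daily_change_rate: float  # 1a. volatility: fraction of rows changed per day
    access_window: tuple      # 1b. (start, end) times when the DB may be read
    cdc_mechanism: str        # 1c. e.g. "log-based", "trigger", "full-compare"
    schema_version: int       # 2.  version stamp to support impact analysis

orders_db = DatabaseTimingMetadata(
    name="orders",
    size_gb=120.0,
    daily_change_rate=0.05,
    access_window=(time(1, 0), time(5, 0)),
    cdc_mechanism="log-based",
    schema_version=7,
)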

Page 10: Building Data Warehouse

On the lack of an integrated development environment

[Diagram: operational systems (IMS, DB2, IDMS) feed a data conversion tool and data discovery and cleansing tools, turning raw data into corrected data for the warehouse DBMS servers; replication tools move data on to data mart servers reached by query tools, while metadata sits separately in CASE tool repositories.]

Page 11: Building Data Warehouse

Developing an evaluation grid

The best strategy would be to:

• create a list of the types of change the organization is most likely to encounter;
• determine the types of metadata required to respond to this change cost-effectively.

From this one should be able to determine a set of requirements regarding:

1. The number of systems and tools that must be interfaced.
2. The types of metadata required for the meta-model.
3. The best versioning strategy for performing impact analysis.
4. The desired set of functionality for automating this process.

Page 12: Building Data Warehouse

The greatest cost benefit of high data quality

• High data quality provides previously unavailable competitive advantage and strategic capability

1. Improved accuracy, timeliness, and confidence in decision making.
2. Improved customer service and retention.
3. Unprecedented sales and marketing opportunities.
4. Support for business reengineering initiatives.

• High data quality improves productivity

1. Enables smart corporate-wide purchasing strategies.
2. Streamlines work processes.

Page 13: Building Data Warehouse

• High data quality reduces costs

1. Reduces physical inventory by identifying anomalies and redundancies within manufacturing parts, pharmaceutical prescription drugs, and so on.
2. Simplifies database management and reduces storage requirements for information systems.
3. Reduces mailing and production costs.
4. Reduces clerical staff.
5. Spares costly redesigns of data models.

• The absence of high data quality precludes effective use of new systems

• High data quality avoids the compounding effect of data contamination

Page 14: Building Data Warehouse

What is high-quality data?

High-quality data must:

1. Be addressable.
2. Have domain integrity.
3. Be accurate.
4. Be properly integrated to attain entity integrity.
5. Adhere to business rules.
6. Satisfy business needs.
7. Have integrity.
8. Be consistent.
9. Contain only intentional data redundancy.
10. Be complete.
11. Have correct cardinality.

Page 15: Building Data Warehouse

Data reengineering: a four-phase process to attain high data quality

[Diagram: legacy applications, external files, and historical extracts pass through the four phases — 1. data investigation; 2. data conditioning and standardization; 3. data integration; 4. data survivorship and formatting — and feed customer information systems, data warehouses, client/server applications, and EISs.]

Page 16: Building Data Warehouse

• Data investigation

Parsing
Lexical analysis
Pattern investigation
Data typing
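Pattern investigation in particular lends itself to a short sketch: reduce each raw value to a character-class pattern so unexpected formats stand out. The A-for-letter, 9-for-digit notation is a common data-profiling convention, assumed here rather than taken from the slides:

from collections import Counter

def to_pattern(value):
    # Map letters to 'A' and digits to '9'; keep punctuation as-is.
    return "".join("A" if c.isalpha() else "9" if c.isdigit() else c
                   for c in value)

phone_numbers = ["716-645-2000", "716 645 2000", "(716)645-2000", "unknown"]
print(Counter(to_pattern(v) for v in phone_numbers))
# Each distinct pattern gets a count; rare patterns flag suspect records.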

• Data integration

The integration phase identifies and consolidates related records lacking common keys through statistical matching techniques:

• Flexible construction of search keys to optimize machine resources.
• Flexible definition of match fields to increase the data points available for statistical analysis.
• Variable weights and penalties for each data point, which take an organization's business rules into account and produce scores relative to the probability of a match.
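A hedged sketch of that match-scoring idea: each match field carries a weight when the values agree and a penalty when they disagree, and the summed score stands in for the probability of a match. All field names and weights are invented examples:

MATCH_FIELDS = {
    # field: (weight if values agree, penalty if they disagree)
    "last_name":  (4.0, -2.0),
    "birth_date": (5.0, -4.0),
    "zip_code":   (2.0, -1.0),
}

def match_score(rec_a, rec_b):
    score = 0.0
    for f, (weight, penalty) in MATCH_FIELDS.items():
        if rec_a.get(f) is None or rec_b.get(f) is None:
            continue  # a missing data point contributes nothing
        score += weight if rec_a[f] == rec_b[f] else penalty
    return score

a = {"last_name": "Qi", "birth_date": "1970-01-01", "zip_code": "14260"}
b = {"last_name": "Qi", "birth_date": "1970-01-01", "zip_code": "14261"}
print(match_score(a, b))  # 4.0 + 5.0 - 1.0 = 8.0 -> likely the same entity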

Page 17: Building Data Warehouse

Why are uniform data access times optimal for parallel query execution?

• Algorithmic parallelism is achieved using a paradigm similar to the division of labor.
• The material (data) must be evenly distributed among the personnel (CPUs), or the effect of parallelism is lost.
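A quick sketch of why the even distribution matters: elapsed time is governed by the busiest CPU, so a skewed partitioning erases the speedup. The row counts and processing rate are illustrative only:

def elapsed_seconds(partition_sizes, rows_per_second=1000):
    # The query finishes only when the slowest (largest) partition finishes.
    return max(partition_sizes) / rows_per_second

even   = [250_000, 250_000, 250_000, 250_000]  # 1M rows over 4 CPUs, balanced
skewed = [700_000, 100_000, 100_000, 100_000]  # same 1M rows, badly skewed

print(elapsed_seconds(even))    # 250.0 s -> near-ideal 4x speedup
print(elapsed_seconds(skewed))  # 700.0 s -> most of the parallelism is lost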

Page 18: Building Data Warehouse

Symmetric multiprocessor (SMP)

[Diagram: CPUs, shared memory, and disks all attached to a single shared symmetric system bus; each component is one hop from any other.]

• A classical SMP is a tightly coupled connection model in which all components, connected to a single bus, are equidistant.
• Disadvantage: very short buses limit scalability.

Page 19: Building Data Warehouse

Loosely Coupled Architectures

• The 2-D mesh

[Diagram: a 2-D mesh of nodes, each pairing a CPU and memory with a local disk, connected to neighboring nodes; some nodes also carry auxiliary disks.]

It has a connection density of 4, in that each node is attached to at most four of its neighbors.

Page 20: Building Data Warehouse

• Crossbar switch

Borrowed from telephony, this technology creates a direct, point-to-point connection between every pair of nodes, with only one hop through the switch to get from one node to any other node.

[Diagram: nodes interconnected through a bank of switch elements; node connections are typically bidirectional and non-blocking.]

• A direct connection between each node and every other node.
• All nodes are only one hop away from one another.

Page 21: Building Data Warehouse

A query parse tree example

SELECT * FROM Table_a ORDER BY Column_2

The parse tree, read from the leaves up:

1. Parallel full table scan of Table_a (SELECT *).
2. Parallel sort on Column_2 (ORDER BY).
3. Concatenation merge of the sorted runs into the result.
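A toy sketch of that plan: partitions of Table_a are scanned and sorted in parallel, and the sorted runs are then merged into one ordered result. The table contents and three-way partitioning are invented, and heapq.merge stands in for the concatenation-merge operator:

import heapq
from concurrent.futures import ThreadPoolExecutor

table_a = [(i, i * 7 % 13) for i in range(12)]  # rows: (Column_1, Column_2)
partitions = [table_a[0:4], table_a[4:8], table_a[8:12]]  # parallel full table scan

def scan_and_sort(partition):
    return sorted(partition, key=lambda row: row[1])  # parallel sort on Column_2

with ThreadPoolExecutor(max_workers=3) as pool:
    sorted_runs = list(pool.map(scan_and_sort, partitions))

# Concatenation merge of the sorted runs into the final result.
result = list(heapq.merge(*sorted_runs, key=lambda row: row[1]))
print([row[1] for row in result])  # Column_2 values now in ascending order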

Page 22: Building Data Warehouse

The machine architecture itself is causing the data skew

[Diagram: six mesh nodes numbered 1–6; full table scans FTS(1)–FTS(3) run on nodes with local disks, and sorts Sort(4)–Sort(6) run elsewhere in the mesh, at varying hop distances from the scans.]

• Sort(5) receives data from FTS(1) in two hops (there are 50 'A' rows).
• Sort(5) receives data from FTS(2) in three hops (there are 100 'A' rows).
• Sort(5) receives data from FTS(3) in one hop (there are 10 'A' rows).
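Working through the slide's numbers, the transfer cost can be weighted as rows × hops, which shows how node placement, not just row counts, drives the skew (the row-hop unit is an assumption for illustration):

feeds = [
    ("FTS(1)", 50, 2),   # 50 'A' rows, two hops from Sort(5)
    ("FTS(2)", 100, 3),  # 100 'A' rows, three hops from Sort(5)
    ("FTS(3)", 10, 1),   # 10 'A' rows, one hop from Sort(5)
]
for source, rows, hops in feeds:
    print(source, "->", rows * hops, "row-hops")
print("total:", sum(r * h for _, r, h in feeds), "row-hops")
# FTS(2) dominates at 300 row-hops; on an SMP or crossbar, all hop counts are equal.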

Page 23: Building Data Warehouse

SMPs: No machine-architecture-induced data skew

[Diagram: FTS(1)–FTS(3) and Sort(4)–Sort(6) running on the CPUs of a single SMP; every operator reaches the disks and shared memory across the shared system bus at the same cost.]

Page 24: Building Data Warehouse

Crossbar switch connection models eliminate data skew

[Diagram: six nodes, each pairing a CPU and memory with a local disk, all attached to a central crossbar switch.]

All nodes are directly connected to all other nodes via the crossbar switch. There is only one hop to and from any destination, guaranteeing uniform data access times.

Page 25: Building Data Warehouse

Refinements to the crossbar switch architecture

[Diagram: six SMP nodes and a pool of shared disks all attached to a central crossbar switch.]

1. Share the disk drives using the switch, thus combining the virtues of shared-nothing and clustered architectures in one architecture.
2. Make the loosely coupled nodes symmetric multiprocessors.