Data can be partitioned across multiple disks for parallel I/O.
Individual relational operations (e.g., sort, join, aggregation) can be executed in parallel: data can be partitioned and each processor can work independently on its own partition.
Queries are expressed in a high-level language (SQL, translated to relational algebra), which makes parallelization easier.
Different queries can be run in parallel with each other. Concurrency control takes care of conflicts.
Thus, databases naturally lend themselves to parallelism.
Range partitioning: Choose an attribute as the partitioning attribute.
A partitioning vector [v0, v1, ..., vn-2] is chosen.
Let v be the partitioning attribute value of a tuple. Tuples such that vi ≤ v < vi+1 go to disk i+1. Tuples with v < v0 go to disk 0 and tuples with v ≥ vn-2 go to disk n-1.
E.g., with a partitioning vector [5,11], a tuple with partitioning attribute value of 2 will go to disk 0, a tuple with value 8 will go to disk 1, while a tuple with value 20 will go to disk 2.
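As a concrete illustration (not from the text), here is a minimal Python sketch of the mapping from a partitioning attribute value to a disk number; range_partition is a hypothetical helper and the vector [5, 11] is the example above.

import bisect

def range_partition(value, vector):
    # bisect_right counts how many vector entries are <= value, which is exactly
    # the disk number described above: v < v0 -> 0, vi <= v < vi+1 -> i+1,
    # v >= vn-2 -> n-1.
    return bisect.bisect_right(vector, value)

# Example from the text: vector [5, 11] over 3 disks.
print(range_partition(2, [5, 11]))   # 0
print(range_partition(8, [5, 11]))   # 1
print(range_partition(20, [5, 11]))  # 2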
Comparison of Partitioning Techniques (Cont.)
Range partitioning: Provides data clustering by partitioning attribute value. Good for sequential access. Good for point queries on the partitioning attribute: only one disk needs to be accessed.
For range queries on the partitioning attribute, one to a few disks may need to be accessed. Remaining disks are available for other queries.
Good if result tuples are from one to a few blocks. If many blocks are to be fetched, they are still fetched from one to a few disks, and potential parallelism in disk access is wasted; this is an example of execution skew.
The distribution of tuples to disks may be skewed — that is, some disks have many tuples, while others may have fewer tuples.
Types of skew: Attribute-value skew.
Some values appear in the partitioning attributes of many tuples; all the tuples with the same value for the partitioning attribute end up in the same partition.
Can occur with range-partitioning and hash-partitioning. Partition skew.
With range-partitioning, badly chosen partition vector may assign too many tuples to some partitions and too few to others.
Less likely with hash-partitioning if a good hash-function is chosen.
Handling Skew in Range-Partitioning
To create a balanced partitioning vector (assuming partitioning attribute forms a key of the relation): Sort the relation on the partitioning attribute.
Construct the partition vector by scanning the relation in sorted order as follows.
After every 1/nth of the relation has been read, the value of the partitioning attribute of the next tuple is added to the partition vector.
n denotes the number of partitions to be constructed.
Duplicate entries or imbalances can result if duplicates are present in partitioning attributes.
An alternative technique based on histograms is used in practice.
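A minimal sketch (hypothetical helper, assuming the partitioning attribute is a key of the relation) of the sort-based construction described above:

def balanced_partition_vector(values, n):
    # values: partitioning-attribute values of all tuples; n: number of partitions.
    ordered = sorted(values)              # sort the relation on the partitioning attribute
    step = len(ordered) // n              # roughly 1/n of the relation per partition
    # After each 1/n-th of the relation has been read, the next value becomes a
    # cut point, giving the n-1 entries [v0, v1, ..., vn-2].
    return [ordered[i * step] for i in range(1, n)]

# Example: 12 key values split into 3 partitions gives the vector [5, 9].
print(balanced_partition_vector([7, 1, 4, 9, 2, 8, 5, 3, 6, 10, 12, 11], 3))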
Handling Skew Using Virtual Processor Partitioning
Skew in range partitioning can be handled elegantly using virtual processor partitioning: create a large number of virtual partitions (say 10 to 20 times the number of processors).
Assign virtual partitions to real processors either in round-robin fashion or based on the estimated cost of processing each virtual partition.
Basic idea: If any normal partition would have been skewed, it is very likely the
skew is spread over a number of virtual partitions
Skewed virtual partitions get spread across a number of processors, so work gets distributed evenly!
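A minimal sketch (hypothetical function) of the round-robin assignment of virtual partitions to real processors:

def assign_virtual_partitions(num_virtual, num_processors):
    # Virtual partition v is handled by real processor v mod num_processors, so a
    # skewed range of consecutive virtual partitions is spread across several
    # real processors.
    return {v: v % num_processors for v in range(num_virtual)}

# Example: 40 virtual partitions (10x the 4 real processors); a hot range of
# virtual partitions 5-8 lands on processors 1, 2, 3 and 0.
mapping = assign_virtual_partitions(40, 4)
print([mapping[v] for v in (5, 6, 7, 8)])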
Queries/transactions execute in parallel with one another.
Increases transaction throughput; used primarily to scale up a transaction processing system to support a larger number of transactions per second.
Easiest form of parallelism to support, particularly in a shared-memory parallel database, because even sequential database systems support concurrent processing.
More complicated to implement on shared-disk or shared-nothing architectures: locking and logging must be coordinated by passing messages between processors.
Data in a local buffer may have been updated at another processor.
Cache-coherency has to be maintained — reads and writes of data in buffer must find latest version of data.
Example of a cache coherency protocol for shared disk systems: Before reading/writing to a page, the page must be locked in
shared/exclusive mode.
On locking a page, the page must be read from disk.
Before unlocking a page, the page must be written to disk if it was modified.
More complex protocols with fewer disk reads/writes exist.
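A minimal, self-contained sketch of the shared-disk protocol above. The SharedDisk and Processor classes are hypothetical, and the protocol is simplified to exclusive locks only:

import threading

class SharedDisk:
    """Stands in for the disk that all processors can access."""
    def __init__(self):
        self.pages = {}     # page_id -> contents on disk
        self.locks = {}     # page_id -> lock (simplified: exclusive mode only)

    def lock(self, page_id):
        self.locks.setdefault(page_id, threading.Lock()).acquire()

    def unlock(self, page_id):
        self.locks[page_id].release()

class Processor:
    def __init__(self, disk):
        self.disk = disk
        self.buffer = {}    # local buffer cache, may hold stale copies

    def update_page(self, page_id, new_value):
        self.disk.lock(page_id)                               # 1. lock the page before use
        self.buffer[page_id] = self.disk.pages.get(page_id)   # 2. read from disk on locking
        self.buffer[page_id] = new_value                      # modify the buffered copy
        self.disk.pages[page_id] = self.buffer[page_id]       # 3. write back before unlocking
        self.disk.unlock(page_id)

disk = SharedDisk()
p0, p1 = Processor(disk), Processor(disk)
p0.update_page("page-7", "v1")
p1.update_page("page-7", "v2")   # p1 re-reads on locking, so it sees the latest version
print(disk.pages["page-7"])      # v2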
Cache coherency protocols for shared-nothing systems are similar. Each database page is assigned a home processor. Requests to fetch the page or write it to disk are sent to the home processor.
Execution of a single query in parallel on multiple processors/disks; important for speeding up long-running queries.
Two complementary forms of intraquery parallelism: Intraoperation Parallelism – parallelize the execution of each
individual operation in the query.
Interoperation Parallelism – execute the different operations in a query expression in parallel.
The first form scales better with increasing parallelism because the number of tuples processed by each operation is typically larger than the number of operations in a query.
Parallel Processing of Relational Operations
Our discussion of parallel algorithms assumes: read-only queries
shared-nothing architecture
n processors, P0, ..., Pn-1, and n disks D0, ..., Dn-1, where disk Di is associated with processor Pi.
If a processor has multiple disks they can simply simulate a single disk Di.
Shared-nothing architectures can be efficiently simulated on shared-memory and shared-disk systems. Algorithms for shared-nothing systems can thus be run on shared-memory and shared-disk systems as well.
The join operation requires pairs of tuples to be tested to see if they satisfy the join condition, and if they do, the pair is added to the join output.
Parallel join algorithms attempt to split the pairs to be tested over several processors. Each processor then computes part of the join locally.
In a final step, the results from each processor can be collected together to produce the final result.
For equi-joins and natural joins, it is possible to partition the two input relations across the processors, and compute the join locally at each processor.
Let r and s be the input relations, and suppose we want to compute the join r ⋈ s with join condition r.A = s.B.
r and s each are partitioned into n partitions, denoted r0, r1, ..., rn-1 and s0, s1, ..., sn-1.
Can use either range partitioning or hash partitioning.
r and s must be partitioned on their join attributes (r.A and s.B), using the same range-partitioning vector or hash function.
Partitions ri and si are sent to processor Pi.
Each processor Pi locally computes the join of ri and si on ri.A = si.B. Any of the standard join methods can be used.
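A minimal sketch (hypothetical helpers) of a partitioned parallel equi-join using range partitioning; each loop iteration stands in for one processor Pi working on its local partitions:

import bisect

def range_partition_relation(tuples, key, vector, n):
    parts = [[] for _ in range(n)]
    for t in tuples:
        parts[bisect.bisect_right(vector, t[key])].append(t)
    return parts

def partitioned_join(r, s, r_key, s_key, vector, n):
    r_parts = range_partition_relation(r, r_key, vector, n)   # partition r on r.A
    s_parts = range_partition_relation(s, s_key, vector, n)   # same vector for s on s.B
    out = []
    for i in range(n):                                         # models processor Pi
        # Any standard join method can be used locally; a nested loop is shown.
        out += [a + b for a in r_parts[i] for b in s_parts[i] if a[r_key] == b[s_key]]
    return out

# Example: vector [5, 11] gives 3 partitions, as in the earlier partitioning example.
r = [(2, "r1"), (8, "r2"), (20, "r3")]
s = [(8, "s1"), (20, "s2"), (4, "s3")]
print(partitioned_join(r, s, 0, 0, [5, 11], 3))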
General case of fragment-and-replicate: reduces the sizes of the relations at each processor.
r is partitioned into n partitions, r0, r1, ..., rn-1; s is partitioned into m partitions, s0, s1, ..., sm-1.
Any partitioning technique may be used.
There must be at least m * n processors.
Label the processors as
P0,0, P0,1, ..., P0,m-1, P1,0, ..., Pn-1,m-1.
Pi,j computes the join of ri with sj. In order to do so, ri is replicated to Pi,0, Pi,1, ..., Pi,m-1, while sj is replicated to P0,j, P1,j, ..., Pn-1,j.
Any join technique can be used at each processor Pi,j.
Both versions of fragment-and-replicate work with any join condition, since every tuple in r can be tested with every tuple in s.
Usually has a higher cost than partitioning, since one of the relations (for asymmetric fragment-and-replicate) or both relations (for general fragment-and-replicate) have to be replicated.
Sometimes asymmetric fragment-and-replicate is preferable even though partitioning could be used. E.g., say s is small and r is large, and already partitioned. It may be
cheaper to replicate s across all processors, rather than repartition r and s on the join attributes.
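A minimal sketch (hypothetical function) of general fragment-and-replicate; the nested loops stand in for the n * m processors Pi,j, each of which holds a copy of ri and a copy of sj, so any join condition can be evaluated:

def fragment_and_replicate(r_fragments, s_fragments, condition):
    n, m = len(r_fragments), len(s_fragments)
    result = []
    for i in range(n):
        for j in range(m):                    # models processor P(i,j)
            # P(i,j) tests every tuple of r_i against every tuple of s_j.
            result += [(t_r, t_s)
                       for t_r in r_fragments[i]
                       for t_s in s_fragments[j]
                       if condition(t_r, t_s)]
    return result

# Example: a non-equijoin (r.A > s.B), which plain partitioning cannot handle.
r_fragments = [[(5,), (9,)], [(2,)]]          # n = 2 fragments of r
s_fragments = [[(3,)], [(7,)], [(1,)]]        # m = 3 fragments of s
print(fragment_and_replicate(r_fragments, s_fragments, lambda tr, ts: tr[0] > ts[0]))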
Assume s is smaller than r and therefore s is chosen as the build relation.
A hash function h1 takes the join attribute value of each tuple in s and maps this tuple to one of the n processors.
Each processor Pi reads the tuples of s that are on its disk Di, and sends each tuple to the appropriate processor based on hash function h1. Let si denote the tuples of relation s that are sent to processor Pi.
As tuples of relation s are received at the destination processors, they are partitioned further using another hash function, h2, which is used to compute the hash-join locally. (Cont.)
Once the tuples of s have been distributed, the larger relation r is redistributed across the n processors using the hash function h1.
Let ri denote the tuples of relation r that are sent to processor Pi.
As the r tuples are received at the destination processors, they are repartitioned using the function h2
(just as the probe relation is partitioned in the sequential hash-join algorithm).
Each processor Pi executes the build and probe phases of the hash-join algorithm on the local partitions ri and si of r and s to produce a partition of the final result of the hash-join.
Note: Hash-join optimizations can be applied to the parallel case; e.g., the hybrid hash-join algorithm can be used to cache some of
the incoming tuples in memory and avoid the cost of writing them and reading them back in.
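A minimal sketch (hypothetical functions) of the parallel hash-join flow described above: h1 routes tuples to processors, and h2 builds the local hash buckets used by the build and probe phases on each processor.

from collections import defaultdict

def h1(key, n):                # routes a tuple to one of the n processors
    return hash(("h1", key)) % n

def h2(key):                   # local build/probe hash function at each processor
    return hash(("h2", key))

def parallel_hash_join(r, s, r_key, s_key, n):
    # Redistribute s (the build relation) and r (the probe relation) with h1.
    s_at = defaultdict(list); r_at = defaultdict(list)
    for t in s: s_at[h1(t[s_key], n)].append(t)
    for t in r: r_at[h1(t[r_key], n)].append(t)
    out = []
    for i in range(n):                         # each iteration models processor Pi
        build = defaultdict(list)
        for t in s_at[i]:                      # build phase on s_i using h2
            build[h2(t[s_key])].append(t)
        for t in r_at[i]:                      # probe phase on r_i using h2
            out += [t + m for m in build[h2(t[r_key])] if m[s_key] == t[r_key]]
    return out

print(parallel_hash_join([(1, "a"), (2, "b")], [(2, "x"), (1, "y")], 0, 0, 3))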
Assume that relation s is much smaller than relation r, that r is stored by partitioning, and that there is an index on a join attribute of relation r at each of the partitions of relation r.
Use asymmetric fragment-and-replicate, with relation s being replicated, and using the existing partitioning of relation r.
Each processor Pj where a partition of relation s is stored reads the tuples of relation s stored in Dj, and replicates the tuples to every other processor Pi.
At the end of this phase, relation s is replicated at all sites that store tuples of relation r.
Each processor Pi performs an indexed nested-loop join of relation s with the ith partition of relation r.
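A minimal sketch (hypothetical functions) of this asymmetric fragment-and-replicate indexed nested-loop join: s is replicated to every processor, and each processor probes an index on the join attribute of its local partition of r.

from collections import defaultdict

def build_index(r_partition, r_key):
    # Stands in for the existing per-partition index on r's join attribute.
    idx = defaultdict(list)
    for t in r_partition:
        idx[t[r_key]].append(t)
    return idx

def indexed_nl_join(r_partitions, s, r_key, s_key):
    out = []
    for r_part in r_partitions:               # each iteration models processor Pi
        index = build_index(r_part, r_key)    # local index on the ith partition of r
        for t_s in s:                         # s has been replicated to Pi
            out += [t_r + t_s for t_r in index[t_s[s_key]]]
    return out

r_partitions = [[(1, "r")], [(2, "r"), (3, "r")]]   # r already partitioned across 2 disks
s = [(2, "s"), (1, "s")]
print(indexed_nl_join(r_partitions, s, 0, 0))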
Partition the relation on the grouping attributes and then compute the aggregate values locally at each processor.
Can reduce cost of transferring tuples during partitioning by partly computing aggregate values before partitioning.
Consider the sum aggregation operation:
Perform the aggregation operation at each processor Pi on those tuples stored on disk Di; this results in tuples with partial sums at each processor.
Result of the local aggregation is partitioned on the grouping attributes, and the aggregation performed again at each processor Pi to get the final result.
Fewer tuples need to be sent to other processors during partitioning.
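A minimal sketch (hypothetical functions) of parallel sum aggregation with local pre-aggregation: each processor computes partial sums over its own tuples, the partial results are repartitioned on the grouping attribute, and the sums are combined into the final result.

from collections import defaultdict

def local_partial_sums(tuples, group_idx, value_idx):
    partial = defaultdict(int)
    for t in tuples:
        partial[t[group_idx]] += t[value_idx]
    return partial                      # far fewer tuples than the input

def parallel_sum(per_disk_tuples, group_idx, value_idx, n):
    # Phase 1: local aggregation at each processor Pi over the tuples on disk Di.
    partials = [local_partial_sums(ts, group_idx, value_idx) for ts in per_disk_tuples]
    # Phase 2: repartition partial sums on the grouping attribute and combine.
    final = [defaultdict(int) for _ in range(n)]
    for partial in partials:
        for group, subtotal in partial.items():
            final[hash(group) % n][group] += subtotal
    return {g: v for part in final for g, v in part.items()}

per_disk = [[("a", 1), ("b", 2), ("a", 3)], [("a", 4), ("b", 5)]]
print(parallel_sum(per_disk, 0, 1, 2))   # {'a': 8, 'b': 7}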
Factors Limiting Utility of Pipeline Parallelism
Pipeline parallelism is useful since it avoids writing intermediate results to disk.
Useful with small number of processors, but does not scale up well with more processors. One reason is that pipeline chains do not attain sufficient length.
Cannot pipeline operators which do not produce output until all inputs have been accessed (e.g. aggregate and sort)
Little speedup is obtained for the frequent cases of skew in which one operator's execution cost is much higher than the others.
The number of parallel evaluation plans from which to choose is much larger than the number of sequential evaluation plans. Therefore, heuristics are needed during optimization.
Two alternative heuristics for choosing parallel plans:
No pipelining and no inter-operation parallelism; just parallelize every operation across all processors. Finding the best plan is now much easier: use standard optimization techniques, but with a new cost model.
The Volcano parallel database popularized the exchange-operator model
– exchange operator is introduced into query plans to partition and distribute tuples
– each operation works independently on local data on each processor, in parallel with other copies of the operation
First choose the most efficient sequential plan, and then choose how best to parallelize the operations in that plan. Pipelined parallelism can be explored as an option.
Choosing a good physical organization (partitioning technique) is important to speed up queries.