
Parallel Crawlers
AND
Efficient URL Caching for World Wide Web Crawling

Presenter: Sawood Alam <salam@cs.odu.edu>

Parallel Crawlers

Hector Garcia-Molina
Stanford University
hector@cs.stanford.edu

Junghoo Cho
University of California, Los Angeles
cho@cs.ucla.edu

ABSTRACT
- Design an effective and scalable parallel crawler
- Propose multiple architectures for a parallel crawler
- Identify fundamental issues related to parallel crawling
- Metrics to evaluate a parallel crawler
- Compare the proposed architectures using 40 million pages

Challenges for parallel crawlers
- Overlap
- Quality
- Communication bandwidth

Advantages
- Scalability
- Network-load dispersion
- Network-load reduction

Network-load reduction techniques
- Compression
- Difference
- Summarization

Related work
- General architecture
- Page selection
- Page update

Geographical categorization
- Intra-site parallel crawler
- Distributed crawler

Communication
- Independent
- Dynamic assignment
- Static assignment

Crawling modes (static assignment)
- Firewall mode
- Cross-over mode
- Exchange mode
(all three modes are illustrated in the sketch after the partitioning slide)

URL exchange minimization
- Batch communication
- Replication

Partitioning function
- URL-hash based
- Site-hash based
- Hierarchical
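To make these schemes concrete, here is a minimal Python sketch (not from the paper; the hash choice, process count, and helper names are illustrative) of URL-hash vs. site-hash partitioning, together with how a process in each static crawling mode might treat a link that falls outside its own partition:

```python
import hashlib
from urllib.parse import urlparse

NUM_PROCESSES = 4  # illustrative; any fixed number of crawling processes

def url_hash_partition(url: str) -> int:
    # URL-hash based: pages of one site scatter across processes
    digest = hashlib.md5(url.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PROCESSES

def site_hash_partition(url: str) -> int:
    # Site-hash based: a whole site maps to one process, so the
    # many intra-site links never cross partition boundaries
    site = urlparse(url).netloc
    digest = hashlib.md5(site.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PROCESSES

def handle_link(url, my_id, mode, frontier, outbox):
    owner = site_hash_partition(url)
    if owner == my_id:
        frontier.append(url)       # our own partition: always crawl
    elif mode == "firewall":
        pass                       # discard inter-partition links
    elif mode == "cross-over":
        frontier.append(url)       # follow anyway, risking overlap
    elif mode == "exchange":
        outbox[owner].append(url)  # forward to the owning process
```

Site-hash partitioning is what makes the firewall and exchange modes practical: most links point within the same site, so they stay local.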

Evaluation models
- Overlap
- Coverage
- Quality
- Communication overhead
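As a reading of these definitions, each metric reduces to simple counts over the crawl; a hedged sketch of the arithmetic (variable names are mine, not the paper's):

```python
def overlap(total_downloads, unique_pages):
    # Redundant downloads relative to the unique pages fetched
    return (total_downloads - unique_pages) / unique_pages

def coverage(unique_pages, reachable_pages):
    # Fraction of the pages that should have been downloaded
    return unique_pages / reachable_pages

def communication_overhead(urls_exchanged, pages_downloaded):
    # Inter-process URL messages per page downloaded
    return urls_exchanged / pages_downloaded
```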

Firewall mode and coverage

Cross-over mode and overlap

Exchange mode and communication

Quality and batch communication

Conclusion
- Firewall mode is good if the number of processes is <= 4
- URL exchange poses network overhead of less than 1%
- Quality is maintained even with batch communication
- Replicating 10,000 to 100,000 popular URLs can reduce communication overhead by ~40%

Efficient URL Caching for World Wide Web Crawling

Andrei Z. BroderIBM TJ Watson Research Center

abroder@us.ibm.com

Janet L. Wiener
Hewlett Packard Labs
janet.wiener@hp.com

Marc NajorkMicrosoft Research

najork@microsoft.com

Introduction
- Fetch a page
- Parse it to extract all linked URLs
- For all the URLs not seen before, repeat the process
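A minimal sketch of this loop in Python (fetch and extract_urls are assumed placeholders for a real fetcher and parser):

```python
from collections import deque

def crawl(seeds, fetch, extract_urls):
    seen = set(seeds)        # the "seen before?" test the paper caches
    frontier = deque(seeds)  # URLs waiting to be fetched
    while frontier:
        url = frontier.popleft()
        page = fetch(url)                 # fetch a page
        for link in extract_urls(page):   # parse out linked URLs
            if link not in seen:          # not seen before?
                seen.add(link)
                frontier.append(link)     # repeat the process
```

At web scale the seen set cannot fit in memory, which is exactly the cost this paper attacks: cache only the most frequently tested URLs.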

Challenges
- The web is very large (coverage): doubling every 9-12 months
- Web pages change rapidly (freshness): 40% change weekly; 7% change by a third or more weekly

Crawlers
- IA (Internet Archive) crawler
- Original Google crawler
- Mercator web crawler
- Cho and Garcia-Molina's crawler
- WebFountain
- UbiCrawler
- Shkapenyuk and Suel's crawler

Caching
- Analogous to an OS cache
- Non-uniformity of requests
- Temporal correlation, or locality of reference

Caching algorithms
- Infinite cache (INFINITE)
- Clairvoyant caching (MIN)
- Least recently used (LRU)
- CLOCK
- Random replacement (RANDOM)
- Static caching (STATIC)

Experimental setup

URL streams
- full trace
- cross sub-trace
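Each experiment replays a URL stream through a simulated cache of size k and counts misses; a minimal harness for the LRU policy (using Python's OrderedDict; the trace may be any iterable of URLs) might look like:

```python
from collections import OrderedDict

def lru_miss_rate(trace, k):
    """Replay an iterable of URLs through an LRU cache of k entries."""
    cache = OrderedDict()
    misses = total = 0
    for url in trace:
        total += 1
        if url in cache:
            cache.move_to_end(url)         # refresh recency on a hit
        else:
            misses += 1
            cache[url] = True
            if len(cache) > k:
                cache.popitem(last=False)  # evict the least recently used
    return misses / total
```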

Result plots [three slides of figures]

ResultsLRU & CLOCK performed equally well but slightly

worse than MIN except for critical region (for both traces)

RANDOM is slightly inferior to CLOCK and LRU, while STATIC is generally much worse

Concludes considerable locality of reference in the traces

For very large cache STATIC is better than MIN (excluding initial k misses)

STATIC is relatively better for cross traceLack of deep links, often pointing to home pages.Intersection between the most popular URLs and the

cross trace tends to be larger

Critical region
- The miss rate for all efficient algorithms stays roughly constant (~70%) for cache sizes k = 2^14 to 2^18
- Above k = 2^18, the miss rate drops abruptly to ~20%

Cache Implementation

Conclusions and future directions
- 1,800 simulations over 26.86 billion URLs show that a cache of 50,000 entries gives an ~80% hit rate
- A cache size of 100 to 500 entries per crawling thread is recommended
- A CLOCK or RANDOM implementation using a scatter table with circular chains is recommended (a sketch follows below)
- Open question: how does the graph traversal order affect caching?
- Open question: is a global cache better than a per-thread cache?
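A hedged sketch of that recommended implementation (a plain Python dict stands in for the scatter table with circular chains; only the CLOCK replacement policy itself is shown):

```python
class ClockCache:
    """CLOCK replacement over a fixed array of k slots. A Python dict
    stands in for the paper's scatter table with circular chains."""

    def __init__(self, k):
        self.slots = [None] * k  # cached URLs
        self.used = [False] * k  # CLOCK use bits
        self.index = {}          # url -> slot number
        self.hand = 0

    def lookup(self, url):
        """Return True on a hit; on a miss, cache the URL and return False."""
        slot = self.index.get(url)
        if slot is not None:
            self.used[slot] = True  # hit: set the use bit
            return True
        # Miss: advance the hand, clearing use bits, until a victim is found.
        while self.used[self.hand]:
            self.used[self.hand] = False
            self.hand = (self.hand + 1) % len(self.slots)
        slot = self.hand
        victim = self.slots[slot]
        if victim is not None:
            del self.index[victim]
        self.slots[slot] = url
        self.index[url] = slot
        self.used[slot] = True  # give the new entry one pass of grace
        self.hand = (slot + 1) % len(self.slots)
        return False
```

Per the sizing guidance above, each crawling thread would own a small instance, e.g. cache = ClockCache(500).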

THANKS
