
AJAX Crawl: Making AJAX Applications Searchable

Cristian Duda

et al.

ICDE 2009

Outline

Introduction

Modeling AJAX

AJAX Crawling

Architecture of a Search Engine

Experimental Results & Conclusions

What is AJAX?

Asynchronous JavaScript and XML

AJAX applications: Google Mail, Yahoo! Mail, Google Maps.

The URL does not change as the user interacts with the application.

Why Do Search Engines Ignore AJAX Content?

No caching/pre-crawling. Events cannot be cached.

Duplicate states. Several events can lead to the same state.

Very granular events. They lead to a large set of very similar states.

Infinite event invocation.

Now

Introduction

Modeling AJAX
  Event Model
  AJAX Page Model
  AJAX Web Sites Model

AJAX Crawling

….

Event Model

When JavaScript is used, the application reacts to user events: click, doubleClick, mouseover, etc.

Event structure in JavaScript:
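A minimal sketch of what such an event structure looks like in the DOM (the element id and handler body are illustrative, not from the slides):

```javascript
// An event in JavaScript ties together an event type ("click"),
// a source element, and a handler function.
var link = document.getElementById("nextCommentsPage"); // illustrative id
link.addEventListener("click", function (event) {
  // In an AJAX application, the handler typically updates the DOM
  // or issues an asynchronous request to the server.
  console.log("clicked:", event.target.id);
});
```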

AJAX Page Model

An AJAX application = a simple page identified by a URL + a series of states, events, and transitions.

Page model: a view of all states in a page (e.g., all comment pages).

In particular, it is an automaton, a Transition Graph, which contains all application entities (states, events, transitions).
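A hedged sketch of such a Transition Graph as a data structure; the shape below is our illustration of the automaton, not code from the paper:

```javascript
// States are DOM snapshots keyed by an id; transitions are edges
// labelled with the triggering event and its source element.
function TransitionGraph() {
  this.states = {};      // stateId -> serialized DOM tree
  this.transitions = []; // { from, to, event, sourceElement }
}
TransitionGraph.prototype.addState = function (id, dom) {
  this.states[id] = dom;
};
TransitionGraph.prototype.addTransition = function (from, to, event, sourceElement) {
  this.transitions.push({ from: from, to: to, event: event, sourceElement: sourceElement });
};
```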

Model of an AJAX Web Page

Nodes: application states. An application state is represented as a DOM tree.

Edges: transitions between states. A transition is triggered by an event activated on the source element and applied to one or more target elements, whose properties change through an action. This information annotates the transition.

Graph of an AJAX Application

AJAX Web Sites Model

Now

Introduction

Modeling AJAX

AJAX Crawling

Architecture of a Search Engine

Experimental Results & Conclusions

Crawling Algorithm

Build the model of the AJAX Web site.

Focus on how to build the AJAX Page Model (e.g., for YouTube, indexing all comment pages of a video).

A Basic AJAX Crawling Algorithm

First step: read the initial DOM of the document at the given URI (line 2).

A Basic AJAX Crawling Algorithm

Next step: AJAX-specific; it consists of running the onLoad event of the body tag in the HTML document (line 3).

All JavaScript-enabled browsers invoke this function first.

A Basic AJAX Crawling Algorithm

Crawling starts after this initial state has been constructed (line 5).

The algorithm performs breadth-first crawling, i.e., it triggers all events in the page and invokes the corresponding JavaScript functions.

A Basic AJAX Crawling Algorithm

Whenever the DOM changes, a new state is created (line 11) and the corresponding transition is added to the application model (line 16).
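The algorithm listing that the line numbers above refer to is not reproduced in this transcript. The following is a hedged sketch of the basic breadth-first crawl it describes, reusing the TransitionGraph sketch from earlier; the browser API (load, fireOnLoad, restore, listEvents, trigger, snapshotDOM) is an assumed embedded-browser interface, and the slide's line numbers do not map onto this code:

```javascript
function crawlBasic(uri, browser, graph) {
  browser.load(uri);                     // read the initial DOM ("line 2")
  browser.fireOnLoad();                  // run the body onLoad event ("line 3")
  var initial = browser.snapshotDOM();   // initial state constructed ("line 5")
  graph.addState(initial.id, initial);
  var queue = [initial];                 // breadth-first frontier
  while (queue.length > 0) {
    var state = queue.shift();
    browser.listEvents(state).forEach(function (event) {
      browser.restore(state);            // replay from the source state
      browser.trigger(event);            // invoke the JavaScript handler
      var dom = browser.snapshotDOM();
      if (!(dom.id in graph.states)) {   // DOM changed: new state ("line 11")
        graph.addState(dom.id, dom);
        queue.push(dom);
      }
      graph.addTransition(state.id, dom.id, event); // ("line 16")
    });
  }
}
```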

Problem of the Basic Algorithm

The network time needed to fetch pages. In AJAX crawling, multiple individual events per page lead to fetching network content.

Traditional approach: pre-cache the Web and crawl locally. Two pages can be checked for identity using their URLs alone.

A Heuristic Crawling Policy for AJAX Applications

We observe that pages have a stable structure: a menu, present in all states, and a dynamic part.

The idea: identify identical states without fetching their content.

JavaScript Invocation Graph

The heuristic we use is based on the runtime analysis of the JavaScript invocation graph.

JavaScript Invocation Graph

Events and Functionalities in the JavaScript Invocation Graph

Nodes: JavaScript functions. The functionality of an AJAX page is expressed through events.

JavaScript Invocation Graph

Functions in the JavaScript Invocation Graph on a YouTube Page

Functions in the JavaScript code can be invoked either directly by event triggers (event invocations) or indirectly by other functions (local invocations).

The dependencies in the code are listed below:
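The dependency listing itself is not reproduced in the transcript. Below is a hedged illustration of the two invocation kinds; the handler names are ours, and getURLXMLResponseAndFillDiv, the server-fetching function named on the next slide, is stubbed so the snippet is self-contained:

```javascript
// Stub for the server-fetching function (a Hot Node, see next slide).
function getURLXMLResponseAndFillDiv(url, divId) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, false);           // synchronous for brevity
  xhr.send();
  document.getElementById(divId).innerHTML = xhr.responseText;
}

// Local invocation: showNextComments calls another function directly.
function showNextComments(page) {
  getURLXMLResponseAndFillDiv("/comments?page=" + page, "commentsDiv");
}

// Event invocation: a function invoked directly by an event trigger.
document.getElementById("nextLink")
        .addEventListener("click", function () { showNextComments(2); });
```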

JavaScript Invocation Graph

Hot Node: a function that fetches content from the server.

Hot Call: a call to a Hot Node.

A single function fetches content from the server, i.e., getURLXMLResponseAndFillDiv.

In AJAX, the same function can be invoked to fetch the same content from the server from different comment pages.

This approach detects that situation and avoids invoking the same function twice.

How to solve it?

We solve the problem of caching in AJAX applications and detecting duplicate states by identifying and reusing the result of server calls.

As in traditional I/O analysis in databases, we aim to minimize the number of the most expensive operations, i.e., the Hot Calls: invocations which generate AJAX calls to the server.

Optimized Crawling Algorithm

Step 1: Identifying Hot Nodes.

The crawler tags the Hot Nodes, i.e., the functions that directly contain AJAX calls (line 34).

Optimized Crawling Algorithm

Step 2: Building Hot Node Cache.

The crawler builds a table containing all Hot Node invocations, the actual parameters used in each call, and the results returned by the server (lines 34-53). This step uses the current runtime stack trace.

Optimized Crawling Algorithm

Step 3: Intercepting Hot Node Calls.

The crawler adopts the following policy:

1. Intercept all invocations of Hot Nodes (functions) and actual parameters (line 34).

2. Look up each function call in the Hot Node Cache (lines 37-39).

3. If a match is found (a Hot Node with the same parameters), do not invoke the AJAX call and reuse the existing content instead (line 41); see the sketch below.
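A hedged sketch of steps 2 and 3 together: a cache table keyed by function name plus actual parameters, and a wrapper that intercepts Hot Node calls. The wrapping technique and key format are our illustration of the idea, not the paper's code:

```javascript
var hotNodeCache = {}; // function name + serialized parameters -> result

function interceptHotNode(name, fn) {
  return function () {
    var key = name + ":" + JSON.stringify(Array.prototype.slice.call(arguments));
    if (key in hotNodeCache) {
      return hotNodeCache[key];             // match found: skip the AJAX call
    }
    var result = fn.apply(this, arguments); // miss: perform the real call
    hotNodeCache[key] = result;             // record it in the Hot Node Cache
    return result;
  };
}

// Usage: wrap the Hot Node identified in step 1.
getURLXMLResponseAndFillDiv =
  interceptHotNode("getURLXMLResponseAndFillDiv", getURLXMLResponseAndFillDiv);
```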

Simplifying Assumptions

Snapshot Isolation: an application does not change during crawling.

No Forms: do not deal with AJAX parts that require users to input data in forms, such as Google Suggest.

No Update Events: avoid triggering update events, such as Delete buttons.

No Image-based Retrieval

Now

Introduction

Modeling AJAX

AJAX Crawling

Architecture of a Search Engine

Experimental Results

The Components

AJAX Crawler

Indexing

Query Processing

Result Aggregation

Indexing

Starts from the model of the AJAX Site and builds the physical inverted file.

As opposed to the traditional approach, a result is a URI plus a state.
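A hedged sketch of such a state-aware inverted file (the structure is our illustration): each posting pairs a URI with a state id, so a hit can point inside an AJAX application.

```javascript
var invertedIndex = {}; // keyword -> [{ uri, state, score }]

function indexState(uri, stateId, text) {
  text.toLowerCase().split(/\W+/).forEach(function (keyword) {
    if (!keyword) return;
    (invertedIndex[keyword] = invertedIndex[keyword] || [])
      .push({ uri: uri, state: stateId, score: 1 });
  });
}
```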

Processing Simple Keyword Queries

Each query returns the URI and the state(s) that contain the keywords.

Results are ranked by score.

Processing Conjunctions

Query: Morcheeba singer

Processing Conjunctions

Conjunctions are computed as a merge between the individual posting lists of the corresponding keywords.

Entries are compatible if their URLs match and then if their states are identical.
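A hedged sketch of that merge over two posting lists sorted by (URI, state); the comparison order (URL first, then state) is from the slide, the code is ours:

```javascript
function mergeConjunction(listA, listB) {
  var results = [], i = 0, j = 0;
  while (i < listA.length && j < listB.length) {
    var a = listA[i], b = listB[j];
    var cmp = a.uri < b.uri ? -1 : a.uri > b.uri ? 1
            : a.state < b.state ? -1 : a.state > b.state ? 1 : 0;
    if (cmp === 0) {         // same URI, then same state: compatible entries
      results.push({ uri: a.uri, state: a.state, score: a.score + b.score });
      i++; j++;
    } else if (cmp < 0) { i++; } else { j++; }
  }
  return results;
}
```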

Parallelization

Crawling AJAX faces the difficulty that dynamic Web content cannot really be cached, so network connections must continuously be created.

Parallelization

A precrawler is used to build the traditional, link-based Web site structure.

The total list of URLs of AJAX Web pages is then partitioned and supplied to a set of parallel crawlers.

Parallelization

Each crawler applies the crawling algorithm and builds the AJAX Model for each crawled page.

Parallelization

Multiple indexes are then built from the disjoint sets of AJAX Models.

Parallelization

Query processing is then performed by query shipping: the query is evaluated on each index, the individual results from each index are merged, and the final list is returned to the client.
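A hedged sketch of the query-shipping step over the partitioned indexes (index.search is an assumed interface, not the paper's API):

```javascript
function shipQuery(query, indexes) {
  // Ship the query to each partition index and evaluate it locally.
  var partial = indexes.map(function (index) { return index.search(query); });
  // Merge the individual result lists and rank globally by score.
  return Array.prototype.concat.apply([], partial)
    .sort(function (a, b) { return b.score - a.score; });
}
```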

Now

Introduction

Modeling AJAX

AJAX Crawling

Architecture of a Search Engine

Experimental Results & Conclusions

Experimental Setup

YouTube Datasets

Algorithms: Traditional Crawling, AJAX Non-Cached, AJAX Cached, AJAX Parallel.

YouTube Statistics

Crawling Time

Network time is predominant. This underlines the importance of applying the Hot Node optimization.

Total Crawling Time & Network Time

Crawling Time

The Hot Node heuristic is effective. It yields a factor of 1.29 improvement in crawling time compared to the Non-Cached approach.

Number of AJAX Events Resulting in Network Requests

Crawling Time

Parallelization is effective. The running time decreases by almost 25% compared to the AJAX Non-Parallel version.

Query Processing Time

YouTube queries.

Query processing times on YouTube.

Recall

For each query we evaluated the number of videos returned by the traditional approach alone, as opposed to the total number of videos returned by the AJAX Crawl approach, in which comment pages are also taken into account.

Discussion

Combine with existing search engines.

Focus on a specific user's interaction with the server.

Support more AJAX applications, such as forms.

Irrelevant events. This paper focuses on the most important events (click, doubleClick, mouseover).

Questions? Comments?
