Journal of King Saud University – Computer and Information Sciences (2015) 27, 13–24

Concept relation extraction using Naïve Bayes classifier for ontology-based question answering systems

G. Suresh kumar a,*, G. Zayaraz b

a Department of Computer Science, Pondicherry University, India
b Department of Computer Science and Engineering, Pondicherry Engineering College, India

Received 10 July 2013; revised 18 November 2013; accepted 13 March 2014; available online 9 May 2014

KEYWORDS

Dependency parsing;
Relation extraction;
Ontology development;
Question answering system;
Naïve Bayes classifier

Abstract  Domain ontology is used as a reliable source of knowledge in information retrieval systems such as question answering systems. Automatic ontology construction is possible by extracting concept relations from unstructured large-scale text. In this paper, we propose a methodology to extract concept relations from unstructured text using a syntactic and semantic probability-based Naïve Bayes classifier. We propose an algorithm to iteratively extract a list of attributes and associations for the given seed concept from which the rough schema is conceptualized. A set of hand-coded dependency parsing pattern rules and a binary decision tree-based rule engine were developed for this purpose. This ontology construction process is initiated through a question answering process. For each new query submitted, the required concept is dynamically constructed, and the ontology is updated. The proposed relation extraction method was evaluated using benchmark data sets. The performance of the constructed ontology was evaluated using gold standard evaluation and compared with similar well-performing methods. The experimental results reveal that the proposed approach can be used to effectively construct a generic domain ontology with higher accuracy. Furthermore, the ontology construction method was integrated into the question answering framework, which was evaluated using the entailment method.

© 2014 King Saud University. Production and hosting by Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.jksuci.2014.03.001

1. Introduction

Question answering (QA) systems are considered more complex than information retrieval (IR) systems and require extensive natural language processing techniques to provide an accurate answer to natural language questions. Question answering systems in general use external knowledge sources to extract answers. Domain-specific question answering systems require pre-constructed knowledge sources, such as a domain ontology. A major challenge in knowledge-based QA

system development is building a huge knowledge base with objective and correct factual knowledge in the preferred domain. The process of collecting useful knowledge from various sources and maintaining this information in a knowledge repository is a useful process when providing a required answer on demand with greater accuracy and efficiency. The domain ontology is considered a set of representational primitives used to model a knowledge domain. Ontology knowledge can be easily translated into first-order-logic representations for use with the semantic web (Horrocks, 2008). An ontology can provide extensive vocabularies of terms, each with a well-defined meaning and relationships with other terms; they are essential components in many knowledge-based applications (Miller, 1995). Ontologies have had a great impact on several fields, e.g., biology and medicine. Most domain ontology constructions are not performed automatically (Gacitua et al., 2008). Most of the work on ontology-driven QAs tends to focus on the use of ontology for query expansion (Mc Guinness, 2004). However, domain ontology is considered a rich source of knowledge (Ferrnandez et al., 2009) and is used to improve the efficiency of QAs.

Manually constructing an ontology with the help of tools is still practiced to acquire knowledge of many domains. However, this is a difficult and time-consuming task that involves domain experts and knowledge engineers (Navigli et al., 2003). The potential size, complexity and dynamicity of a specific domain increase the difficulty of the manual ontology construction process. The solution to this problem is the use of ontology learning techniques with knowledge-rich web resources. Many efforts have been undertaken in the last decade to automate the ontology acquisition process. However, there are many restrictions in terms of building ontologies that accurately express the domain knowledge and information required for a question answering system. Many supervised learning methods proposed for automatic ontology construction are deficient in the handling of large-scale data. Here, the scale represents a characterization of the algorithm (small, medium and large) with respect to its performance and adaptability as follows. Algorithms with high complexity are classified as small scale. Algorithms that deliver moderate performance are considered in the medium-scale category. Unsupervised algorithms that are scalable, adapt incrementally and work with voluminous data are considered to be in the large-scale category.

The proposed methodology addresses the problem of how to construct the domain ontology from an empty ontology and keep it updated for further question answering processes. The novelty of this approach relies on the combination of mature NLP technologies, such as semantic similarity-based attribute association identification for relational ontology conceptualization using a Naïve Bayes classifier, with widely accepted large-scale web resources. In this proposed approach, the attributes and associations of the given seed concept are automatically extracted using a set of hand-coded rules devised from the dependency parsing patterns of relevant sentences. We introduced an extension to the Naïve Bayes classifier to learn concept relations from the extracted associations. Then, the predicted concept relations are used to model the domain concepts of the resulting ontology. Furthermore, we proposed an experimental framework for the concept-relational ontology-based question answering process.

The QA framework proposed in this paper includes two subsystems: (1) a dynamic concept relational (CR) ontology construction module capable of extracting new concepts from the web and incorporating the extracted knowledge into the CR Ontology knowledge base, and (2) an answer extraction module that formulates the query string from the natural language question according to the expected answer and retrieves the information from the ontology for answer formation. An experimental setup was established to test the performance of our proposed relation extraction approach using a benchmark data set (Voorhees, 1999). The obtained result was compared with the performance of similar well-performing relation extraction and ontology construction methods. The proposed question answering approach was tested using a benchmark data set as well as an entailment-based evaluation method. The QA performance improvement was proven by comparing the results with another similar QA system.

We established the following hypotheses to test the performance of the proposed methods for relation extraction and question answering:

H1. There will be an improvement in relation extraction accuracy by using our proposed hand-coded rules formulated from dependency-parsing sentence patterns and a binary decision tree-based rule engine.

H2. There will be a considerable improvement in the accuracy of the concept relation learning for automatic ontology construction using our proposed expectation maximization-based Naïve Bayes classifier with syntactic and semantic probabilities.

H3. There will be a considerable performance improvement in ontology-based open domain question answering using our proposed question answering framework.

The rest of this paper is organized as follows. The related work is summarized in Section 2. The proposed concept relation extraction method using hand-coded rules formulated from dependency-parsing sentence patterns and the binary decision tree-based rule engine are elaborated in Section 3. Section 4 depicts the design of an expectation maximization-based Naïve Bayes classifier using syntactic and semantic probabilities, followed by the proposed concept relational ontology-based question answering framework in Section 5. The evaluation method and experimental setup are elaborated in Section 6, after which the results and discussion are presented in Section 7, followed by the conclusion and references.

2. Related work

2.1. Question answering systems

We investigated a number of novel techniques that perform open-domain question answering. The investigated techniques consist of document retrieval for question answering, domain ontology-based question answering systems, web-based semantic question answering systems, and answer extraction via automatically acquired surface matching text patterns for question answering. Automatic QA systems, such as AnswerBus (Zhang et al., 2005) and MULDER (Kwok et al., 2001), extend their data resources from the local database to web resources, which also extends the scope of the questions they can handle. In 1999, TREC set the first QA track (Voorhees, 1999). AquaLog (Lopez et al., 2007) is an ontology-based question answering system that processes input queries and classifies them into 23 categories. If the input question is classified into one of

these categories, the system will process it correctly. There are a few question answering systems based on conditional knowledge structures, which were introduced by Areanu and Colhon (2009). In these systems, a conditional schema is used to generate an XML-based conditional knowledge structure, which is used for question answering. Ferrnandez et al. (2009) proposed an ontology-based question answering system called QACID to answer natural language queries related to the cinema domain. This system extracts answers from a pre-constructed ontology by comparing question attributes with ontology attributes. QACID was evaluated using entailment queries composed for the cinema domain. The overall official F1-accuracy reported by QACID is 93.2% with an ABI threshold of 0.5.

    2.2. Automatic ontology construction

Ontology learning is a knowledge acquisition activity that relies on automatic methods to transform unstructured data sources into conceptual structures. The first proposals for ontology learning (Maedche, 2002) built all resources from scratch, but the manner of tackling ontology population has evolved due to the existence of complementary resources, such as top-level ontologies or semantic role repositories. Some ontology learning approaches, such as TERMINAE (Aussenac-Gilles et al., 2008), provide conceptualization guidance from natural language text, integrating functions for linguistic analysis and conceptual modeling.

A number of methods have already been proposed for automatically constructing an ontology from text. Graph-based approaches are very popular for representing concept relations (Hou et al., 2011). There are some approaches using mixed methodologies, such as relational databases and semantic graphs (Ra et al., 2012). Some ontology development tools have been proposed to extract deep semantic relations between concepts using mapping functions and to generate a rough schema. OntoCmaps (Zouaq et al., 2011) is an ontology development tool that extracts deep semantic relations from text in a domain-independent manner. Mining the situation context from text and constructing a situation ontology is an interesting area in information retrieval. Jung et al. (2010) have performed notable work in this area. There were a few studies that utilized lexico-syntactic patterns and lexico-semantic probabilities for automatically extracting concept relationships (Hearst, 1992, 1998) from raw text.

    2.3. Semantic relation extraction

The mining of concept relation semantics is a sophisticated technique for the automatic conceptualization of ontology concepts and instances. Most machine-learning approaches used to automatically construct an ontology are deficient because of the need for annotated data. Even though this annotation is possible by using hand-coded rules, it requires a high level of processing time. Unsupervised methods, which can learn from un-annotated raw text, are considered a superior alternative. Yangarber and Grishman (2001) proposed a method for relation extraction from large-scale text. They used pairs of co-occurring entities available in target documents for extracting

relevant patterns of the given seed concept. However, the unsupervised methods are lacking in providing the required relation extraction accuracy. The importance of measuring semantic similarity between related concepts has been well explored by many researchers (Said Hamani et al., 2014), and its effectiveness has been demonstrated in many natural language applications (Sarwar Bajwa et al., 2012).

Most methods of relation extraction start with some linguistic analysis steps, such as full parsing, to extract relations directly from the sentences. These approaches require a lexicalized grammar or link grammars. Information extraction tools such as the GATE NLP tools (developed from the earlier TIPSTER architecture) use a set of hand-coded rules to extract relations from text (Cowie and Wilks, 2000). There are a few open IE (information extraction) systems proposed to extract relation axioms from large web documents (Banko et al., 2007; Wu and Weld, 2010; Zhu et al., 2009). The open IE systems have been used to learn user interests (Ritter et al., 2010), acquire common sense knowledge (Lin et al., 2010), and recognize entailment (Schoenmackers et al., 2010; Berant et al., 2011).

Open IE systems such as TEXTRUNNER (Banko et al., 2007), KNOWITALL (Etzioni et al., 2005), REVERB (Etzioni et al., 2005), WOEpos, and WOEparse (Wu and Weld, 2010) extract binary relations from text for automatic ontology construction. The Snowball (Agichtein and Gravano, 2000) system extracts binary relations from document collections that contain user-specified tuples, which are used as sample patterns to extract more tuples. KNOWITALL automatically extracts relations by using a set of domain-independent extraction patterns to learn labeled data. REVERB uses syntactic and lexical constraints to identify relation phrases and extracts pairs of arguments for each relation phrase. Then, a logistic regression classifier is used to assign a confidence score. Furthermore, we compared our method with an early relation extraction method originally proposed by Brin (1998) called DIPRE (Dual Iterative Pattern Relation Extraction). The overall F1-accuracy reported for the open IE systems DIPRE, Snowball_VS, TextRunner, WOEparse, and REVERB is 67.94, 89.43, 50.0, 58.3, and 60.9, respectively.

Carlson et al. (2010) proposed a coupled semi-supervised learning method for information extraction. The goal of the method is to extract new instances of concept categories and relations using an initial ontology containing dozens of pre-constructed categories and relations. The method exploits the relationship among categories and relations through coupled semi-supervised learning. Fifteen seed concepts were used to extract relations from 200 million web pages. The average precision reported was 95%. Recently, Krishnamurthy and Mitchell (2013) proposed a component called ConceptResolver for the Never-Ending Language Learner (NELL) (Carlson et al., 2010) that learns relations from noun phrase pairs. ConceptResolver performs both word sense induction and synonym resolution on the extracted relations. The experimental evaluation was conducted using gold standard clustering data. When ConceptResolver was used to learn real-world concepts

for use with NELL's knowledge base, it demonstrated an overall accuracy of 87%.

From the investigated related works, it is evident that the existing pattern-based relation extraction methods are deficient in handling large-scale data. On the other hand, the proposed supervised learning methods are deficient in providing the required level of accuracy. Most of the relation extraction systems require pre-constructed labeled data (Carlson et al., 2010) for learning. The relation extraction method proposed in this paper addresses these issues.

3. Concept relation extraction for automatic ontology construction

The proposed method to automatically construct domain ontology concepts extracts the domain attributes and associations from a set of relevant documents. We used the Stanford dependency parser (Marie-Catherine de Marneffe, 2008) for generating a parse tree for each individual sentence in the relevant documents concerning the seed concept. Then, the proposed binary decision tree-based rule engine applies the set of hand-coded rules to the dependency parsing pattern. The outcome of the rule engine is a set of triples consisting of three components: the candidate key word, which represents the given seed concept; the predicate; and the target object, which is considered the associated concept. The triple set is used to extract feature data for training the proposed expectation–maximization-based Naïve Bayes classifier, which predicts whether there exists a relation between the seed concept and the associated concept through the predicate. Then, the ontology concept schema is generated for the relevant relations. In this paper, the concept relation extraction process is modeled as an unsupervised classification problem using an expectation–maximization-based Naïve Bayes classifier that makes use of lexico-syntactic and lexico-semantic probabilities calculated using the WordNet similarity between the seed concept and the associated concept. The overall process sequence is depicted in Fig. 1.

Figure 1 Ontology construction using concept relations (pipeline: seed concept and training documents (D) -> sentence extraction -> parse tree construction -> DPR-BDT rule engine -> triple extraction -> relation extraction using the EM-NB classifier with the WordNet similarity measure -> ontology construction).

3.1. Hand-coded rules for concept triple extraction

The concept triple extraction from the dependency parsing pattern is performed using hand-coded rules. The rules are formulated by empirical analysis.

Definition 1. Attribute(s) of a concept x is/are defined as the predicate(s) p, a subset of P {P: set of all predicates}, used to define another concept y, where x is determined by y.

Definition 2. Association(s) between concepts x and y is/are defined as the relationship r, a subset in R {R: relation set}, between them. The concepts x and y are said to be associated with each other if there exists a relation r between x and y such that r is a predicate and x and y belong to the superset of x and y.

Definition 3. When a concept x is succeeded by a verb phrase VP that is a subset in VPList, which is further succeeded by a {NP|S|PP|ADJP|ADVP}, then the object y in the NP is an attribute of x.

The hand-coded rules framed using Definitions 1–3 are shown in Table 1. The pattern-matching engine considers the presence of the attribute lexical pattern to identify the predicate used along with the connectors as one of the attributes of the concept C. Only the VP node nearest to the target object (the rightmost NP) is considered for identifying attributes, except in the case where the verb pattern is OR <VBS>; then VB is converted into VBZ and BY is attached to it (VBZ + BY) to construct the attribute (e.g., "to create" as "created by").

Table 1  Hand-coded dependency parsing pattern rules.

Rule No.  Attribute lex-pattern  RHS of parent VP  Example attribute
1         VBZ                    NP, S             is
2         VBZ + DT               NP, S             is a
3         VBD                    NP, S             lived
4         VBZ + IN               PP                lives in
5         VBD + IN               PP                lived in
6         VBG + IN               PP                living in
7         VBN + TO               PP                used to
8         VBN + IN               PP                used for
9         VB + RP                NP                carry out
10        VBP                    ADJP              are
11        VBP                    NP                are
12        VBP                    ADVP              drive west
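To make the rule lookup concrete, the sketch below (not the authors' implementation) encodes the twelve rules of Table 1 as plain data keyed by the verb lexical pattern and the label of the right-hand side of the parent VP; the PatternRule record, the match helper and the pipe-separated RHS labels are assumptions introduced only for this illustration.

```java
// Sketch: the Table 1 rules as a lookup table. PatternRule and match() are
// illustrative assumptions, not the authors' actual data structures.
import java.util.List;

record PatternRule(int no, String lexPattern, String rhsOfParentVp, String example) {}

class RuleTable {
    static final List<PatternRule> RULES = List.of(
        new PatternRule(1,  "VBZ",     "NP|S",  "is"),
        new PatternRule(2,  "VBZ+DT",  "NP|S",  "is a"),
        new PatternRule(3,  "VBD",     "NP|S",  "lived"),
        new PatternRule(4,  "VBZ+IN",  "PP",    "lives in"),
        new PatternRule(5,  "VBD+IN",  "PP",    "lived in"),
        new PatternRule(6,  "VBG+IN",  "PP",    "living in"),
        new PatternRule(7,  "VBN+TO",  "PP",    "used to"),
        new PatternRule(8,  "VBN+IN",  "PP",    "used for"),
        new PatternRule(9,  "VB+RP",   "NP",    "carry out"),
        new PatternRule(10, "VBP",     "ADJP",  "are"),
        new PatternRule(11, "VBP",     "NP",    "are"),
        new PatternRule(12, "VBP",     "ADVP",  "drive west"));

    /** Returns the first rule whose lexical pattern and parent-VP label both match, or null. */
    static PatternRule match(String lexPattern, String rhsLabel) {
        for (PatternRule r : RULES)
            if (r.lexPattern().equals(lexPattern) && r.rhsOfParentVp().contains(rhsLabel))
                return r;
        return null;
    }

    public static void main(String[] args) {
        // "computer consists of a processor": VBZ followed by IN, parent VP's RHS is a PP
        System.out.println(match("VBZ+IN", "PP"));   // prints rule 4
    }
}
```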

The following three basic pattern components are used to generalize the rule:

A = [NP, {PPER|NN|PRF}]
B = [VP closest to C] & [right child of VP {S|NP|PP|ADJP|ADVP}]
C = [NP, {PPER|NN}]
Precedence: A < B < C

The mapping C(A, B) denotes that B is one of the attributes that describes the concept A with the value C.

The general format of the rule is as follows:

{NP (concept)} * {VP (Rule: 1–12)} * {PP (Rule: 1–12)} {NP (object)}.

The rules are very specific to the dependency parsing patterns generated by the Stanford parser.

Figure 2 Recursive binary decision tree for concept triple extraction. Decision 1: D+ != empty? Decision 2: first NP(s) = concept? Decision 3: RHS(S) = VP and RHS(VP) != VP? Decision 4: RHS(VP) = NP? A false decision adds s to D- and processes next(D+); when all decisions hold, the child nodes of VP and NP are extracted; the process terminates when D+ is empty.

Figure 3 An example of concept triple extraction. Concept: "computer". Sentence: "computer consists of a processor". Step 1: the dependency parser produces the parse tree (ROOT (S (NP (NN computer)) (VP (VBZ consists) (PP (IN of) (NP (DT a) (NN processor)))))). Step 2: the pattern of the parse tree is matched with DP Rule No. 4; BDT sequence: Decision 1: D != empty -> True; Decision 2: first NP(s) = c -> True; Decision 3: RHS(S) = VP and RHS(VP) != VP -> True; Decision 4: RHS(VP) = NP -> True. Result: value(VP) = "consists of", and the triple <computer, consists of, processor> is extracted.

Extracting concept triples from the relevant sentences is a basic pattern-matching process. Thus, the rules are treated in the rule engine as If-Then Normal Form (INF) rules. A binary decision tree-based rule engine was designed for this purpose.

    3.2. Recursive BDT-based rule engine

Decision rules and decision trees are key techniques in data mining and knowledge discovery in databases (Takagi, 2006; Breiman et al., 1984). The proposed binary decision tree (BDT)-based rule engine is used to extract the three components of a relation from which the attributes and associations are predicted. The training sample set D is a collection of text documents that consists of the dependency parsing patterns for the corresponding sentences of the seed concepts. We used the subclass method proposed by Takagi (2006) to separate the negative-only samples from the training set, so that the concept triple could be precisely extracted from the remaining positive sample. Fig. 2 depicts the BDT rule engine designed to extract the triple: subject, predicate and object. Each decision node generates only a negative sample set D- = {y1, y2, ..., yp} when the decision result is false and otherwise generates a subclass D+ = {x1, x2, ..., xp} consisting of the remaining samples. For each false decision, a new BDT is constructed recursively until the sample set D becomes empty or the goal decision is reached. On reaching the goal decision, the subclass sample set D+ will have only a single positive sample from which the resultant components are extracted. Then, the ontology concept schema is generated using a classifier designed to extract concept relations from the set of triples.

Fig. 3 shows an example of extracting a relation triple using our proposed hand-coded rules. When the seed concept (c) is "computer", and the sentence in the training sample instance is "computer consists of a processor", the parser generates an equivalent parse tree of the sentence (s). The parsing pattern is expected to match any one of the 12 rules listed in Table 1. This rule matching process is automatically performed by the BDT rule engine by checking the four decisions on the parse pattern. A concept triple is successfully extracted when all four decisions are TRUE. Otherwise, the sample instance is considered a negative sample. In our example, the parse tree pattern structure matches rule 4 in Table 1. Hence, the relation "consists of" and the related concept "processor" are successfully extracted for the given seed concept "computer".
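The decision sequence of Figs. 2 and 3 can be illustrated with the following minimal Java sketch, which hard-codes the parse tree of the example sentence and applies the four decisions for the rule-4 pattern; the Node class and the extraction helpers are simplifications assumed for this sketch rather than the authors' rule engine.

```java
// Sketch (not the authors' code): the four BDT decisions from Fig. 2 applied to a
// hand-built parse tree for "computer consists of a processor".
import java.util.ArrayList;
import java.util.List;

class Node {
    String label;                       // phrase or POS label, e.g. "S", "NP", "VBZ"
    String word;                        // terminal word, null for non-terminals
    List<Node> children = new ArrayList<>();
    Node(String label, String word) { this.label = label; this.word = word; }
    Node add(Node c) { children.add(c); return this; }
    Node rhs() { return children.isEmpty() ? null : children.get(children.size() - 1); }
    String yield() {                    // concatenate the terminal words under this node
        if (word != null) return word;
        StringBuilder sb = new StringBuilder();
        for (Node c : children) sb.append(sb.length() == 0 ? "" : " ").append(c.yield());
        return sb.toString();
    }
}

public class BdtTripleSketch {
    public static void main(String[] args) {
        // (S (NP computer) (VP (VBZ consists) (PP (IN of) (NP a processor))))
        Node np = new Node("NP", null).add(new Node("NN", "computer"));
        Node pp = new Node("PP", null).add(new Node("IN", "of"))
                .add(new Node("NP", null).add(new Node("DT", "a")).add(new Node("NN", "processor")));
        Node vp = new Node("VP", null).add(new Node("VBZ", "consists")).add(pp);
        Node s = new Node("S", null).add(np).add(vp);

        String concept = "computer";
        boolean d2 = s.children.get(0).label.equals("NP") && s.children.get(0).yield().equals(concept);
        boolean d3 = s.rhs().label.equals("VP") && !s.rhs().rhs().label.equals("VP");
        Node right = s.rhs().rhs();
        boolean d4 = right.label.equals("PP") && right.rhs().label.equals("NP");  // rule-4 variant
        if (d2 && d3 && d4) {
            String predicate = s.rhs().children.get(0).word + " " + right.children.get(0).word;
            String object = right.rhs().children.get(right.rhs().children.size() - 1).word;
            System.out.println("<" + concept + ", " + predicate + ", " + object + ">");
        } else {
            System.out.println("negative sample");
        }
    }
}
```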

4. Automatic relation classification using a Naïve Bayes classifier

Naïve Bayes (NB) classifiers have been proven to be very effective for solving large-scale text categorization problems with high accuracy. In this research, we used an expectation maximization-based Naïve Bayes classifier for classifying the relation between the seed concept and the predicate object through the predicate that exists in a sentence. Thus, the sentence classification problem is converted to a concept relation classification problem. The proposed classifier model is depicted in Fig. 4.

Figure 4 The EM-based Naïve Bayes classifier for attribute identification.

4.1. Naïve Bayes classifiers for concept relation classification

Several extensions to Naïve Bayes classifiers have been proposed (Nigam et al., 2000), including combining expectation maximization (EM) (Dempster et al., 1977) and Naïve Bayes classifiers for learning from both labeled and unlabelled documents in a semi-supervised algorithm. The EM algorithm is used to maximize the likelihood with both labeled and unlabeled data. Liu et al. (2002) proposed a heuristic approach, Spy-EM, that can learn how to handle training and test data with non-overlapping class labels.

We extended the basic Naïve Bayes classifier model for concept relation classification, in which the concept relation identification problem is posed as a self-supervised learning problem. The attribute a_i of the given concept c is described by the triple t, which consists of a concept pair connected through a predicate. The attribute (predicate) a_i is a subset in A, where A is the attribute set of the concept c, and the triple t_i is a subset of T, where T is the set of triples for all of the attributes. The proposed classifier is used to categorize the attribute candidate triples into two classes: relation class c1 or non-relation class c0. Thus, t_i is classified into either c1 or c0 depending on the feature probabilities. A triple instance t_i will be considered for ontology construction only when it is classified as a relation class c1. We used the lexico-syntactic probability of the triples and the lexico-semantic probability of the triples as features to compute the classification probability P(t_i | c_l), where l is the label 0 or 1. In the trained Naïve Bayes classifier model, the target class c* of the triple t_i is computed as shown in Eq. (1).

c^* = \arg\max_{c_l} P(c_l \mid t_i) = \arg\max_{c_l} \frac{P(c_l)\, P(t_i \mid c_l)}{P(t_i)}    (1)

where P(c_l) is the target label probability, P(t_i) is the probability of the training sample initialized by the classifier, and P(t_i | c_l) is the computed probability of assigning the class label (1 or 0) to the triple t_i. We applied the lexico-syntactic probability LSP_{t_i} and the lexico-semantic probability LSemP_{t_i}; thus, P(t_i | c_l) is rewritten as shown in Eq. (2).

P(t_i \mid c_l) = P(LSP_{t_i} \mid c_l) \sum_{k=1}^{|t|} P(LSemP_{t_i,k} \mid c_l)    (2)

where P(LSP_{t_i} | c_l) and P(LSemP_{t_i,k} | c_l) can be learned from the annotated triples for the target attribute class. The initial training data D+ and D- are generated from the triples extracted by the BDT rule engine and annotated using WordNet similarity measures. We empirically fixed a threshold value for the similarity score, based on which the class labels are assigned to the initial training set. The expectation–maximization procedure is used with the Naïve Bayes classifier to optimize the classifier in the estimation of the probability for unlabeled new data sets. The parameters trained in the EM procedure are the prior probability P(c_l), the lexico-syntactic probability P(LSP_{t_i} | c_l), and the lexico-semantic probability P(LSemP_{t_i,k} | c_l). We used the Laplacian smoothing method to adjust the parameters of the training data. The Naïve Bayes classifier is bootstrapped using the EM procedure.

4.1.1. Lexico-syntactic probability

The structural similarities of a sentence can be used as features for extracting useful knowledge from the sentence (Kang and Myaeng, 2005). Our sentence pattern shown in Eq. (3) is expressed as a triple consisting of a concept noun (N), an attribute describing the concept, composed using the functional verb combined with any connectives (VP|DT), and a text segment with one or more nearest nouns (NN). The missing elements in the source sentence are indicated using NULL values.

SP(x, y, z) = \{ N, Attr(VP|DT), NN \}    (3)

The following features are computed using the above sentence pattern: presence of the concept noun, presence of the functional verb, structural similarity between the nouns, and semantic similarities between the nouns. The presence of a concept noun and functional verb is indicated by the value 1. Otherwise, the value is 0. In addition, another feature, the sentence weight score, is calculated from the original source sentence s. The presence of the three components of the triple extracted for each sentence, namely the concept (a), the predicate (b), and the target object (c), is considered as a set of feature parameters. The sentence weight is the sum of weights calculated from the list of weight values assigned to the various sentence features given in Table 2. A particular feature value is selected based on the arrangement of the triple components in the original sentence. The sentence weight (SW) score is calculated from Eq. (4) using the dependency parsing pattern generated by the Stanford parser:

SW_{score} = \sum_{i=1}^{N} w_i \, f_i    (4)

where f_i is the feature and w_i is the corresponding weight. The value of f_i is 1 or 0 depending on the presence or absence of the particular feature in the sentence. Some of the features are mutually exclusive. The lexico-syntactic probability, LSP, is considered as 1 when the SW_score is greater than or equal to 1.

Table 2  Sentence pattern features.

Feature No.  Feature                                                 Weight
1            a, b and c cover all the words in the source sentence  1.32
2            Sentence starts with a                                 0.58
3            b is the immediate successor of a                      0.42
4            a is a proper noun                                     0.16
5            b is a proper noun                                     0.35
6            There is a verb between a and b                        0.50
7            There is a preposition before a                        0.43
8            There is an NP after c                                 0.93

The feature weight values are calculated using a confidence scoring function that adjusts the weight based on the keyword-based information gain. We used 1000 manually annotated sentences extracted from TREC 2008 documents. The weight values were empirically tuned to achieve optimum precision and recall values. For example, the lexico-syntactic probability for the concept triple <computer, consists of, processor> extracted from the sentence "computer consists of a processor" is calculated by adding the weight values of the sentence pattern features 1, 2, 3, and 4. Hence, the calculated

SWscore is 1.32 + 0.58 + 0.42 + 0.16 = 2.48, which is greater than or equal to 1, and hence LSP = 1.
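The following small sketch reproduces this sentence-weight calculation (Eq. (4)) with the Table 2 weights; the boolean feature flags are set by hand for the example sentence, since the actual feature detectors are not part of this illustration.

```java
// Sketch: SWscore (Eq. 4) and the LSP decision for "computer consists of a processor".
// The weights come from Table 2; the feature flags are hand-set assumptions here.
public class SentenceWeightSketch {
    static final double[] WEIGHTS = {1.32, 0.58, 0.42, 0.16, 0.35, 0.50, 0.43, 0.93};

    static double swScore(boolean[] features) {
        double score = 0.0;
        for (int i = 0; i < WEIGHTS.length; i++)
            if (features[i]) score += WEIGHTS[i];    // w_i * f_i with f_i in {0, 1}
        return score;
    }

    public static void main(String[] args) {
        // Features 1-4 hold for the example triple <computer, consists of, processor>.
        boolean[] f = {true, true, true, true, false, false, false, false};
        double sw = swScore(f);                      // 1.32 + 0.58 + 0.42 + 0.16 = 2.48
        int lsp = sw >= 1.0 ? 1 : 0;                 // LSP = 1 when SWscore >= 1
        System.out.println("SWscore = " + sw + ", LSP = " + lsp);
    }
}
```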

4.1.2. Lexico-semantic probability

The noun-phrase pairs are formed using the candidate concept noun paired with each noun in the attribute target value of the triple. For each noun pair, the rank is calculated using the WordNet similarity-based semantic similarity measure shown in Eq. (6). Overall, NP_rank is the sum of the similarity scores of all noun phrase pairs. The lexico-semantic probability is calculated as shown in Eq. (5). If a predicate exists between the noun pairs and the rank of the noun phrase pair is greater than the mean threshold θ, the lexico-semantic probability is assigned as 1. Otherwise, it is 0.

P(LSemP_{t_i,k} \mid c_l) = \begin{cases} 1 & \text{if } P(LSP_{t_i} \mid c_l) = 1 \text{ and } NP_{rank} > \theta \\ 0 & \text{otherwise} \end{cases}    (5)

We used weighted links, a WordNet semantic similarity-based measure, to calculate the NP_rank of the two noun phrases in each noun phrase pair. Weighted links (Richardson et al., 1994) were proposed for computing the similarity between two concepts using the WordNet taxonomy. The weight of a link is calculated by (1) the density of the taxonomy at that point, (2) the depth in the hierarchy, and (3) the strength of connotation between parent and child nodes. The similarity between two concepts is computed by calculating the sum of the weights of the links. We calculated the following three similarity scores, based on which the overall rank was calculated. The Wu and Palmer (1994) similarity measure considers the position of the concepts c1 and c2 in the taxonomy relative to the position of the most specific common concept c. The measure of Li et al. (2003), which was intuitively and empirically derived, combines the shortest path length between two concepts. The measure of Leacock and Chodorow (1998) is a relatedness measure between two concepts.

NP_{rank} = \sum_{i=1}^{N} \big( sim_{w\&p}(NP_1, NP_2) + sim_{Li}(NP_1, NP_2) + sim_{lch}(NP_1, NP_2) \big)    (6)

For example, the w&p, Li, and lch similarity values for the concept noun phrases "computer" and "processor" are 3.445068277596125, 2.0794415416798357, and 0.04632716059120144, respectively. Thus, the NP_rank is equal to 5.57083698. We initialized the threshold value θ as the mean threshold value of 2.0. As the calculated NP_rank is greater than θ and the LSP is 1, the lexico-semantic probability LSemP is calculated as 1. Thus, the training sample is assigned the positive label 1, which indicates that the concept "computer" has a relation with the concept "processor" through the relationship "consists of".

After the whole training corpus is classified with an initial classifier, highly ranked triples are selected as the initial attribute class annotated set. From this, the parameters of the Naïve Bayes classifier are initialized. The second training stage is called the Expectation step: the whole training corpus, including the annotated part, is classified with the current classifier. The final training stage is called the Maximization step: based on the newly classified data, the parameters are re-estimated. The expectation and maximization steps are repeated until the classifier parameters converge.
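As a rough illustration of the classifier described above, the sketch below treats each triple as two binary features (LSP and LSemP), applies the Naïve Bayes decision of Eqs. (1) and (2), and re-estimates the parameters with Laplace smoothing in an EM-style loop; the soft E-step, the toy triples and the initial parameter values are assumptions made only for this sketch and do not reproduce the authors' training setup.

```java
// Sketch of the relation/non-relation decision with an EM-style re-estimation loop
// and Laplace smoothing. The two binary features per triple (LSP, LSemP) and the toy
// data are illustrative assumptions; this is not the authors' implementation.
public class EmNaiveBayesSketch {
    // Parameters: prior P(c = 1), P(LSP = 1 | c), P(LSemP = 1 | c) for c in {0, 1}.
    double prior1 = 0.5;
    double[] pLsp = {0.5, 0.5};     // index = class label
    double[] pLsem = {0.5, 0.5};

    /** Posterior P(c = 1 | lsp, lsem) from Bayes' rule with conditionally independent features. */
    double posterior1(int lsp, int lsem) {
        double l1 = prior1 * bern(pLsp[1], lsp) * bern(pLsem[1], lsem);
        double l0 = (1 - prior1) * bern(pLsp[0], lsp) * bern(pLsem[0], lsem);
        return l1 / (l1 + l0);
    }

    static double bern(double p, int x) { return x == 1 ? p : 1 - p; }

    /** One EM pass: E-step soft-labels the triples, M-step re-estimates with Laplace smoothing. */
    void emStep(int[][] triples) {                   // each row: {lsp, lsem}
        double n1 = 0, lsp1 = 0, lsem1 = 0, lsp0 = 0, lsem0 = 0;
        for (int[] t : triples) {
            double g = posterior1(t[0], t[1]);       // responsibility for the relation class
            n1 += g;
            lsp1 += g * t[0];        lsem1 += g * t[1];
            lsp0 += (1 - g) * t[0];  lsem0 += (1 - g) * t[1];
        }
        double n0 = triples.length - n1;
        prior1 = n1 / triples.length;
        pLsp[1] = (lsp1 + 1) / (n1 + 2);   pLsem[1] = (lsem1 + 1) / (n1 + 2);   // Laplace smoothing
        pLsp[0] = (lsp0 + 1) / (n0 + 2);   pLsem[0] = (lsem0 + 1) / (n0 + 2);
    }

    public static void main(String[] args) {
        EmNaiveBayesSketch nb = new EmNaiveBayesSketch();
        // Toy triples {LSP, LSemP}; e.g. <computer, consists of, processor> has {1, 1}.
        int[][] triples = {{1, 1}, {1, 1}, {1, 0}, {0, 0}, {0, 0}, {0, 1}};
        nb.prior1 = 0.6; nb.pLsp[1] = 0.9; nb.pLsem[1] = 0.9;   // crude initialisation from ranked triples
        nb.pLsp[0] = 0.2; nb.pLsem[0] = 0.2;
        for (int i = 0; i < 20; i++) nb.emStep(triples);        // fixed passes stand in for a convergence test
        System.out.printf("P(relation | LSP=1, LSemP=1) = %.3f%n", nb.posterior1(1, 1));
    }
}
```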

4.2. Ontology concept schema modeling

The rough schema of the ontology concept is dynamically modeled using the set of concept relations extracted for the given seed concept. The ontology schema is generated with a bottom-up approach in which the attributes are identified using instances. An attribute is considered for inclusion into the target schema when there is an existing relationship between the candidate concept and the associated concept keyword in the instance. A sample ontology schema constructed using this approach is depicted in Fig. 5.

Figure 5 A sample concept schema of the constructed ontology (the concept "Computer" is linked to "Electronic device", "CPU", "IC Chips" and "Software" through relations such as "is a", "has a", "made of" and "used for").

5. CR-Ontology portable question answering framework

The proposed framework is similar to Watson's three-component architecture (Hirschman and Gaizauskas, 2001), which describes the approach taken to build QA systems. Our proposed framework consists of (1) a question analysis component, (2) an answer extraction component and (3) an automatic ontology construction component. Fig. 6 depicts our proposed three-component framework for our Concept Relational Ontology-based Question Answering system (CRO-QAs).

The role of the question analysis component is to identify the question type (QT) from which the expected answer target (AT) is selected. We used an AT database consisting of 52 QTs and the corresponding ATs.

To extract an answer from the ontology for the natural language query, we utilized Attribute-Based Inference (ABI), which was introduced by Ferrnandez et al. (2009). The ontology attribute that is available in the submitted query is identified using ABI. An ontology attribute is considered for generating an answer to a query depending on the ABI score. The score value is obtained by using positive weights assigned to the patterns matched between the query attribute and the ontology attribute. The final weight obtained by this inference is defined as shown in Eq. (7).

ABI_{score} = \frac{\sum_{a_i \in Q,\, a_j \in O} Eql(a_i, a_j)}{|Q|}    (7)

where O is the list of ontology attributes and Q is the set of query attributes. Then, Eql(a_i, a_j) is calculated using Eq. (8).

Eql(a_i, a_j) = \begin{cases} 1 & \text{if } a_i = a_j \text{ or } a_i \in a_j \\ 0 & \text{otherwise} \end{cases}    (8)

Figure 6 Proposed ontology-based question answering system architecture (query analysis: question identification and Q-pattern generation, query string, query processor; answer extraction: CR Ontology and answer generator; automatic ontology construction: if the concept is not available, keywords drive iterative attribute and association extraction using DPR + NB relation classification and ontology reconstruction; input: natural language question; output: answer).

For each positive inference, a similarity weight between zero and one is assigned, and then the final entailment coefficient is calculated as the sum of all weights divided by the number of inferences. We empirically established a threshold using the number of user query patterns, based on which the entailment decision was made. The answer is constructed using the ontology attribute with an entailment coefficient higher than the threshold. When the ontology attribute relevant to the input query is not present in the ontology, the procedure for automatic ontology construction for the new concept is initiated. Once again, the answer construction process is restarted after updating the existing ontology with the newly constructed ontology concept.
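A minimal sketch of the attribute-based inference score of Eqs. (7) and (8) follows; the attribute strings and the string-containment test used for the a_i ∈ a_j case are illustrative assumptions.

```java
// Sketch of the ABI score of Eqs. (7)-(8): each query attribute that equals or is
// contained in some ontology attribute contributes 1, normalised by |Q|.
import java.util.List;

public class AbiScoreSketch {
    static int eql(String ai, String aj) {                    // Eq. (8)
        return ai.equals(aj) || aj.contains(ai) ? 1 : 0;
    }

    static double abiScore(List<String> queryAttrs, List<String> ontologyAttrs) {   // Eq. (7)
        int sum = 0;
        for (String ai : queryAttrs)
            for (String aj : ontologyAttrs)
                sum += eql(ai, aj);
        return (double) sum / queryAttrs.size();
    }

    public static void main(String[] args) {
        List<String> q = List.of("consists of", "used for");
        List<String> o = List.of("is a", "consists of", "made of", "used for");
        // Both query attributes match an ontology attribute, so the score is 1.0; answer
        // extraction would proceed when the score clears the empirically set threshold.
        System.out.println("ABI score = " + abiScore(q, o));
    }
}
```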

6. Experimental setup

6.1. Relation extraction for automatic ontology construction

The proposed relation extraction algorithm was implemented using Java, and we ran the implementation to extract 1000 concepts from ten different domains. Each concept extraction was experimented using the benchmark TREC-QA 2008 data set. The data set was validated using the 10-fold cross validation technique. The data set was clustered into 10 equal partitions with 100 instances each. Then, the experiment was repeated by changing the validation data set k from 1 to 10. For each experiment, we calculated the mean accuracy and mean error (ME). A similar experiment was conducted for all data samples that cover the concepts belonging to the ten different domains. Finally, the confidence interval (CI) value was calculated for each mean accuracy value by using a t-test. The ontology concepts were constructed using a maximum of eight attributes and twelve associations. We used the standard measures of precision, recall and F1-measure from the field of information extraction for calculating the relation extraction accuracy.

The precision P is defined as shown in Eq. (9).

P = \frac{|\{Relevant\} \cap \{Found\}|}{|\{Found\}|}    (9)

The recall R is defined as shown in Eq. (10).

R = \frac{|\{Relevant\} \cap \{Found\}|}{|\{Relevant\}|}    (10)

where Relevant is the set of relevant relations (attributes or associations) and Found is the set of found relations. There is a trade-off between precision and recall, and thus the F1 measure is computed (β = 1). The F1 measure is applied to compute the harmonic mean of precision and recall as shown in Eq. (11).

F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}    (11)
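For concreteness, the following sketch evaluates Eqs. (9)–(11) over two small sets of relation triples; the triples themselves are placeholders.

```java
// Sketch of the evaluation measures of Eqs. (9)-(11) over sets of extracted relations.
import java.util.HashSet;
import java.util.Set;

public class RelationMetricsSketch {
    public static void main(String[] args) {
        Set<String> relevant = Set.of("computer-consists of-processor",
                                      "computer-is a-electronic device",
                                      "computer-used for-software");
        Set<String> found = Set.of("computer-consists of-processor",
                                   "computer-is a-electronic device",
                                   "computer-lives in-city");           // one spurious relation

        Set<String> hit = new HashSet<>(relevant);
        hit.retainAll(found);                                           // {Relevant} ∩ {Found}

        double precision = (double) hit.size() / found.size();          // Eq. (9)
        double recall = (double) hit.size() / relevant.size();          // Eq. (10)
        double f1 = 2 * precision * recall / (precision + recall);      // Eq. (11)
        System.out.printf("P=%.2f R=%.2f F1=%.2f%n", precision, recall, f1);
    }
}
```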

6.2. Question answering using CR-Ontology

Our experimental model was designed to evaluate our proposed CR-Ontology-based question answering system. We used the benchmark TREC-QA 2008 data set with definition questions and factoid questions that cover all 10 domains. In our entailment evaluation, 10 new users were asked to formulate five queries for each domain stored in the CR-Ontology. In total, 500 new input queries were generated. These new queries were used to adjust the entailment decision threshold and to evaluate the final system performance. The accuracy was calculated using the ratio between the number of questions correctly answered by the system and the total number of questions submitted to the system. To evaluate the effectiveness of our proposed system, we compared the overall accuracy with the accuracy obtained by the well-performing benchmark TREC QA systems and another ontology-based QA system, QACID, which uses entailment and an on-field evaluation framework.

7. Results and discussion

7.1. Relation extraction for automatic ontology construction

We hypothesize that the creation of well-performing dependency parsing-based hand-coded rules with a self-supervised learning approach will deliver greater accuracy when compared with the existing semi-supervised and unsupervised methods. The rule set exploits the relationship between the candidate concept and target concepts by using the predicate that connects both. Because of the very few and limited number of rules that are executed in a predetermined sequence, the design of the recursive BDT-based rule engine naturally reduces the complexity of eliminating the negative samples and allowing the remaining subclass to the next iteration.

For each successful relation extraction, it takes only three decision computations, and the number of iterations is directly proportional to the number of sentences in the training set. Thus, the DPR engine is comparatively less expensive than the other compared methods.

The 10-fold cross validation results obtained with 10 different domains of data are presented in Table 3. The t-test was performed on the overall accuracy obtained for each domain's data for a 95% confidence value (level of significance α = 0.05). All of the mean accuracy values fall within the calculated CI with a p value less than α.

The proposed method achieved the highest accuracy of 95.63% in the electronics domain, and the overall mean accuracy is 90.55%, which is 10–15% higher than the best performing relation extraction method, Snowball_VS. The 10-fold cross validation results of the electronics domain data sample are presented in Fig. 7. The standard error value is minimum for the value of k = 7. The comparison between the relation extraction performance (F1-accuracy with β = 1) obtained by our proposed method and the Snowball_VS method for the same data set is visualized in Fig. 8. Except for the finance and automobile domains, our proposed method achieved better accuracy. Thus, the objective of achieving better performance by using a DPR-based self-supervised method was achieved.
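As a side note, a 95% confidence interval of the kind reported in Table 3 can be obtained from the ten fold-wise accuracies as sketched below; the fold accuracies and the hard-coded two-tailed critical value t(0.975, df = 9) ≈ 2.262 are assumptions for illustration, not values from the paper.

```java
// Sketch: mean accuracy and its 95% confidence interval from 10-fold results,
// in the style of Table 3. The fold accuracies below are made-up illustrative values.
public class ConfidenceIntervalSketch {
    public static void main(String[] args) {
        double[] acc = {96.1, 94.8, 95.9, 96.4, 95.2, 94.9, 95.7, 96.0, 95.5, 95.8};
        double mean = 0;
        for (double a : acc) mean += a;
        mean /= acc.length;

        double ss = 0;
        for (double a : acc) ss += (a - mean) * (a - mean);
        double se = Math.sqrt(ss / (acc.length - 1)) / Math.sqrt(acc.length);   // standard error

        double t = 2.262;   // two-tailed critical value of Student's t, df = 9, alpha = 0.05
        System.out.printf("mean = %.2f, 95%% CI = [%.2f, %.2f]%n", mean, mean - t * se, mean + t * se);
    }
}
```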

Figure 7 Results of 10-fold cross validation performance on the electronics domain data set: F1 accuracy mean (%) and error mean (%) versus the k-fold validation set (K = 1 to K = 10); R = 0.531 is annotated for the mean error.

Figure 8 Relation extraction performance comparisons between the proposed DPR_NB method and the well-performing Snowball_VS method (F1 accuracy, 0–100%).

Table 3  Relation extraction performance results, 10-fold cross validation performed on ten different domain data with 95% confidence interval (CI).

Data set     t        df  Sig. (2-tailed)  Mean difference  95% CI lower  95% CI upper
Nature       22.740   9   .000             86.94600         78.2967       95.5953
Technology   28.519   9   .000             91.13000         83.9015       98.3585
Health       49.016   9   .000             91.29000         87.0768       95.5032
Finance      37.513   9   .000             83.96000         78.8969       89.0231
Automobile   52.818   9   .000             92.55000         88.5861       96.5139
Persons      49.655   9   .000             90.80000         86.6634       94.9366
Locations    52.487   9   .000             92.59000         88.5994       96.5806
Animals      27.366   9   .000             86.85000         79.6706       94.0294
Electronics  103.481  9   .000             95.63000         93.5395       97.7205
Science      67.849   9   .000             93.36500         90.2521       96.4779

7.2. Question answering using CRO-QAs

The user queries formulated by the entailment evaluation process were used to experiment with our proposed CR-Ontology-based question answering system. In addition, the answers were extracted for the definition questions in the TREC 2008

data set. We used the precision, recall and F1 measure for evaluating our proposed CRO-QAs and to compare with the well-performing benchmark QA systems and a similar ontology-based QA system, QACID. The experiment was conducted in an iterative manner by varying the entailment threshold from 0.4 to 1 on a 0.1 scale. Fig. 9 depicts the resulting precision, recall and accuracy (β = 1) obtained for the six different ABI threshold values. Our proposed method achieved the maximum recall of 99% without compromising the precision (96%) for the ABI threshold value of 0.5. The maximum recall value was achieved for the same ABI threshold value as that of the compared ontology-based question answering system, QACID. However, there is a great improvement in the accuracy percentage achieved by our proposed method. The overall QA accuracy obtained by our proposed method and other well-performing QA systems is given in Table 4. It is evident that the performance of our system is much better than the best-performing QA systems in terms of the TREC 2008 and TREC 2009 benchmarks.

Figure 9 Question answering performance of the proposed CRO-QA system: precision, recall and F1-measures obtained with the entailment query data set for different ABI threshold values (x-axis: ABI threshold 0.4–0.9; y-axis: QA performance in %).

Table 4  QA accuracy of proposed and compared QA systems.

Systems           TREC 2007 BEST  TREC 2009 (QA@CLEF)  QACID  CRO-QAs
Accuracy (β = 1)  0.706           0.61                 0.932  0.97

    8. Conclusion

A system for automatically extracting attributes and associations from a large volume of unstructured text for automatic domain ontology modeling was successfully developed, and the experimental results were presented in this paper. The proposed dependency parsing pattern-based iterative concept relation extraction algorithm was implemented to extract attributes and associations using lexico-syntactic and lexico-semantic probabilities. The empirical results were encouraging, and it has been proven that our proposed method outperforms similar well-performing relation extraction methods. The suitability of the constructed concept relational ontology for use with ontology portable question answering systems was experimentally evaluated using our concept relational ontology-based question answering framework. The system performance was above average for all three question types: factoid, list, and definition. The main objectives of this research, automatically constructing a domain ontology using concept relations and creating QA systems capable of precisely answering natural language questions without compromising efficiency and accuracy, were achieved. It is encouraging that not only are the techniques introduced in this paper capable of answering questions relatively quickly, but their answer performance is better than the available web-based and ontology-based QA systems when independently evaluated using a benchmark data set. The proposed QA framework can be extended to generate answers for more complex types of questions by introducing additional natural language techniques.

    References

Agichtein, E., Gravano, L., 2000. Snowball: extracting relations from large plain-text collections. In: Fifth ACM Conference on Digital Libraries, 2–7 June 2000, San Antonio, TX, USA, pp. 85–94.

Areanu, N.T., Colhon, M., 2009. Conditional graphs generated by conditional schemas. Annals of the University of Craiova, Math. Comput. Sci. Ser. 36, 1–11.

Aussenac-Gilles, N., Despres, S., Szulman, S., 2008. The TERMINAE method and platform for ontology engineering from texts. In: Proceedings of the 2008 Conference on Ontology Learning and Population: Bridging the Gap between Text and Knowledge, pp. 199–223.

Banko, M., Cafarella, M.J., Soderland, S., Broadhead, M., Etzioni, O., 2007. Open information extraction from the web. In: Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, pp. 2670–2676.

Berant, J., Dagan, I., Goldberger, J., 2011. Global learning of typed entailment rules. In: Forty-Ninth Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 19–24 June 2011, Portland, Oregon, USA, pp. 610–619.

Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.H., 1984. Classification and Regression Trees. Chapman & Hall, Wadsworth, Belmont, CA.

Brin, S., 1998. Extracting patterns and relations from the world wide web. In: WebDB Workshop at EDBT, 27–28 March 1998, Valencia, Spain, pp. 172–183.

Carlson, A., et al., 2010. Toward an architecture for never-ending language learning. In: Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, pp. 1306–1313.

Cowie, J., Wilks, Y., 2000. Information extraction. In: Dale, R., Moisl, H., Somers, H. (Eds.), Handbook of Natural Language Processing. Marcel Dekker, New York, pp. 249–269.

Dempster, A.P., Laird, N.M., Rubin, D.B., 1977. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. 39, 1–38.

Etzioni, O., Cafarella, M., Downey, D., Kok, S., Popescu, A., Shaked, T., Soderland, S., Weld, D., Yates, A., 2005. Unsupervised named-entity extraction from the web: an experimental study. Artif. Intell. 165, 91–134.

Ferrnandez, O., Izquierdo, R., Ferrandez, S., Vicedo, J.L., 2009. Addressing ontology-based question answering with collections of user queries. Inf. Process. Manage. 45, 175–188.

Gacitua, R., Sawyer, P., Rayson, P., 2008. A flexible framework to experiment with ontology learning techniques. Knowl. Based Syst. 21, 192–199.

Hearst, M.A., 1992. Automatic acquisition of hyponyms from large text corpora. In: Fourteenth Conference on Computational Linguistics (COLING 1992), pp. 539–545.

Hearst, M., 1998. Automated discovery of WordNet relations. In: Fellbaum, C. (Ed.), WordNet: An Electronic Lexical Database and Some of its Applications. MIT Press, USA, pp. 131–151.

Hirschman, L., Gaizauskas, R., 2001. Natural language question answering: the view from here. Nat. Lang. Eng. 7, 275–300.

Horrocks, I., 2008. Ontologies and the semantic web. Commun. ACM 51, 58–67.

Hou, X., Ong, S.K., Nee, A.Y.C., Zhang, X.T., Liu, W.J., 2011. GRAONTO: a graph-based approach for automatic construction of domain ontology. Expert Syst. Appl. 38, 11958–11975.

Jung, Y., Ryu, J., Kim, K.-M., Myaeng, S.-H., 2010. Automatic construction of a large-scale situation ontology by mining how-to instructions from the web. Web Semant.: Sci. Serv. Agents World Wide Web 8, 110–124.

Kang, B.Y., Myaeng, S.H., 2005. Theme assignment for sentences based on head-driven patterns. In: Proceedings of the Eighth Conference on Text, Speech and Dialogue (TSD), pp. 187–194.

Krishnamurthy, J., Mitchell, T.M., 2013. Jointly learning to parse and perceive: connecting natural language to the physical world. Trans. Assoc. Comput. Linguist. 1, 193–206.

Kwok, C., Etzioni, O., Weld, D., 2001. Scaling question answering to the web. In: Proceedings of the Tenth World Wide Web Conference, pp. 150–161.

Leacock, C., Chodorow, M., 1998. Combining Local Context and WordNet Similarity for Word Sense Identification. The MIT Press, USA.

Li, Y., Bandar, Z.A., McLean, D., 2003. An approach for measuring semantic similarity between words using multiple information sources. IEEE Trans. Knowl. Data Eng. 15, 871–882.

Lin, T., Mausam, Etzioni, O., 2010. Identifying functional relations in web text. In: 2010 Conference on Empirical Methods in Natural Language Processing, 9–11 October 2010, MIT, Massachusetts, USA, pp. 1266–1276.

Liu, B., Lee, W.S., Yu, P.S., Li, X., 2002. Partially supervised classification of text documents. In: Nineteenth International Conference on Machine Learning, 8–12 July 2002, Sydney, Australia, pp. 387–394.

Lopez, V., et al., 2007. AquaLog: an ontology-driven question answering system for organizational semantic intranets. J. Web Semant. 5, 72–105.

Maedche, A., 2002. Ontology Learning for the Semantic Web. Kluwer Academic Publishers in Engineering and Computer Science, p. 665.

Marneffe, M.-C. de, Manning, C.D., 2008. The Stanford typed dependencies representation. In: COLING Workshop on Cross-framework and Cross-domain Parser Evaluation, 23 August 2008, Manchester, UK, pp. 1–8.

Mc Guinness, D., 2004. Question answering on the semantic web. IEEE Intell. Syst. 19, 82–85.

Miller, G.A., 1995. WordNet: a lexical database for English. Commun. ACM 38, 39–41.

Navigli, R., Velardi, P., Gangemi, A., 2003. Ontology learning and its application to automated terminology translation. IEEE Intell. Syst. 18, 22–31.

Nigam, K., McCallum, A.K., Thrun, S., Mitchell, T., 2000. Text classification from labeled and unlabeled documents using EM. Mach. Learn. 39, 103–134.

Ra, M., Yoo, D., No, S., Shin, J., Han, C., 2012. The mixed ontology building methodology using database information. In: International Multi-conference of Engineers and Computer Scientists, 14–16 March 2012, Hong Kong, China, pp. 650–655.

Richardson, R., Smeaton, A., Murphy, J., 1994. Using WordNet as a knowledge base for measuring semantic similarity between words. Technical Report Working Paper CA-1294, School of Computer Applications, Dublin City University, Dublin, Ireland.

Ritter, A., Mausam, Etzioni, O., 2010. A latent Dirichlet allocation method for selectional preferences. In: Forty-Eighth Annual Meeting of the Association for Computational Linguistics, 11–16 July 2010, Uppsala, Sweden, pp. 424–434.

Said Hamani, M., Maamri, R., Kissoum, Y., Sedrati, M., 2014. Unexpected rules using a conceptual distance based on fuzzy ontology. J. King Saud Univ. Comput. Inf. Sci. 26, 99–109.

Sarwar Bajwa, I., Lee, M., Bordbar, B., 2012. Translating natural language constraints to OCL. J. King Saud Univ. Comput. Inf. Sci. 24, 117–128.

Schoenmackers, S., Etzioni, O., Weld, D.S., Davis, J., 2010. Learning first-order horn clauses from web text. In: 2010 Conference on Empirical Methods in Natural Language Processing, 9–11 October 2010, MIT, Massachusetts, USA, pp. 1088–1098.

Takagi, N., 2006. An application of binary decision trees to pattern recognition. J. Adv. Comput. Intell. Intell. Inform. 10, 682–687.

Voorhees, E.M., 1999. The TREC-8 question answering track report. In: Proceedings of the Eighth Text REtrieval Conference, NIST Special Publication 500-246.

Wu, Z., Palmer, M., 1994. Verb semantics and lexical selection. In: Proceedings of the Thirty-Second Annual Meeting of the Association for Computational Linguistics (ACL '94), Las Cruces, New Mexico, pp. 133–138.

Wu, F., Weld, D.S., 2010. Open information extraction using Wikipedia. In: Proceedings of the Forty-Eighth Annual Meeting of the Association for Computational Linguistics (ACL '10), 11–16 July 2010, Morristown, NJ, USA, pp. 118–127.

Yangarber, R., Grishman, R., 2001. Machine learning of extraction patterns from unannotated corpora: position statement. In: Proceedings of the Workshop on Machine Learning for Information Extraction, pp. 76–83.

Zhang, P., Li, M., Wu, J., et al., 2005. The community structure of science of scientific collaboration network. Complex Syst. Complexity Sci. 2, 30–34.

Zhu, J., Nie, Z., Liu, X., Zhang, B., Wen, J.-R., 2009. StatSnowball: a statistical approach to extracting entity relationships. In: Eighteenth International Conference on World Wide Web, 20–24 April 2009, Madrid, Spain, pp. 101–110.

Zouaq, A., Gasevic, D., Hatala, M., 2011. Towards open ontology learning and filtering. Inf. Syst. 36, 1064–1081.
