Evaluating the conformance and interoperability of semantic technologies. Slides presented at the ESWC 2010 Tutorial on Evaluation of Semantic Web Technologies
CONFORMANCE INTEROPERABILITY TEST DATA RUNNING CONCLUSIONS
Conformance in the Semantic Web
• Conformance is the ability of semantic technologies to adhere to existing specifications
  – In terms of ontology representation languages (RDF(S), OWL, etc.)
• Different types of conformance, depending on the aspect of the ontology language considered:
  – Knowledge model
  – Serialization
  – Semantics
• Conformance is a primary requirement for semantic technologies:
  – Tool validation
  – Feature analysis
Metrics
• Execution reports whether execution was correct:
  – OK: no execution problem
  – FAIL: some execution problem
  – Platform Error (P.E.): platform exception
• Information added and Information lost, measured in terms of triples.
• Conformance reports whether the ontology has been processed correctly, with no addition or loss of information:
  – SAME if Execution is OK and both Information added and Information lost are void
  – DIFFERENT if Execution is OK but Information added or Information lost is not void
  – NO if Execution is FAIL or P.E.
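The verdict logic above is mechanical, so it can be sketched directly. The following Python sketch (function and triple names are illustrative, not part of the evaluation framework) derives the Conformance verdict from the Execution status and the added/lost triple sets:

```python
def conformance_verdict(execution, added, lost):
    """Derive the Conformance verdict from the metrics on the slide.

    execution: "OK", "FAIL", or "P.E."
    added, lost: sets of triples added/lost while processing the ontology.
    """
    if execution != "OK":
        return "NO"            # FAIL or Platform Error
    if not added and not lost:
        return "SAME"          # no information added or lost
    return "DIFFERENT"         # executed, but the triple sets differ

# Triples are modelled as (subject, predicate, object) tuples (invented data).
original = {("ex:A", "rdf:type", "rdfs:Class")}
exported = {("ex:A", "rdf:type", "rdfs:Class"),
            ("ex:A", "rdfs:subClassOf", "rdfs:Resource")}

added = exported - original    # Information added, in terms of triples
lost = original - exported     # Information lost

print(conformance_verdict("OK", added, lost))   # DIFFERENT
```

Modelling the triple sets as Python sets makes "Information added" and "Information lost" plain set differences.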
Interoperability in the Semantic Web
• Interoperability is the ability of Semantic Web technologies to interchange ontologies and use them
  – At the information level, not at the system level
  – In terms of knowledge reuse, not information integration
• In the real world it is not feasible to use a single system or a single formalism
• Interchanges between different formalisms exhibit different behaviours
Interoperability evaluation
• Goal: to evaluate the interoperability of semantic technologies, that is, their ability to interchange ontologies and use them
• Applicability:
  – Only requirement: the tool must be able to import and export ontologies
Metrics
• Execution reports whether execution was correct:
  – OK: no execution problem
  – FAIL: some execution problem
  – Platform Error (P.E.): platform exception
  – Not Executed (N.E.): second step not executed
• Information added and Information lost, measured in terms of triples.
• Interchange reports whether the ontology has been interchanged correctly, with no addition or loss of information:
  – SAME if Execution is OK and both Information added and Information lost are void
  – DIFFERENT if Execution is OK but Information added or Information lost is not void
  – NO if Execution is FAIL, N.E., or P.E.
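An interchange involves two steps (one tool exports, another imports), and the N.E. status arises when the first step fails. A minimal Python sketch of this flow, with invented tool stubs standing in for real exporters and importers:

```python
def run_interchange(export_a, import_b, ontology):
    """Sketch of one interchange: tool A exports, tool B imports.
    export_a / import_b are callables returning a triple set or raising.
    Names and error handling are illustrative assumptions."""
    try:
        intermediate = export_a(ontology)
    except Exception:
        return "NO"          # step 1 is FAIL/P.E., so step 2 is N.E.
    try:
        result = import_b(intermediate)
    except Exception:
        return "NO"          # step 2 is FAIL or P.E.
    added, lost = result - ontology, ontology - result
    return "SAME" if not added and not lost else "DIFFERENT"

def identity(g):
    # a well-behaved tool: passes every triple through unchanged
    return set(g)

def lossy(g):
    # drops every rdfs:comment triple, simulating information lost
    return {t for t in g if t[1] != "rdfs:comment"}

onto = {("ex:A", "rdf:type", "rdfs:Class"),
        ("ex:A", "rdfs:comment", "a class")}
print(run_interchange(identity, identity, onto))  # SAME
print(run_interchange(identity, lossy, onto))     # DIFFERENT
```

The same triple-set comparison as in conformance applies, but to the ontology after a round trip through two tools.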
Group                            No.   Components
Class                            2     rdfs:Class
Metaclass                        5     rdfs:Class, rdf:type
Subclass                         5     rdfs:Class, rdfs:subClassOf
Class and property               6     rdfs:Class, rdf:Property, rdfs:Literal
Property                         2     rdf:Property
Subproperty                      5     rdf:Property, rdfs:subPropertyOf
Property with domain and range   …     …

Class descriptions covered:
• Subclass of class
• Subclass of restriction
• Value constraints
• Cardinality + object property
• Cardinality + datatype property
• Set operators

Group                                                                 No.
Class hierarchies                                                     17
Class equivalences                                                    12
Classes defined with set operators                                    2
Property hierarchies                                                  4
Properties with domain and range                                      10
Relations between properties                                          3
Global cardinality constraints and logical property characteristics   5
Single individuals                                                    3
Named individuals and properties                                      5
Anonymous individuals and properties                                  3
Individual identity                                                   3
Syntax and abbreviation                                               15
TOTAL                                                                 82
Parameterize generation
• Examples:
  – “…for every type of class description”
  – “…using all the built-in annotation properties”
  – “…starting from a depth of 500 up to a depth of 5,000”
  – …
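As a toy illustration of depth-parameterized test generation (the function name and namespace are invented, not those of the actual generator), a class hierarchy of arbitrary depth can be produced as a list of rdfs:subClassOf triples:

```python
def class_chain(depth, ns="http://example.org/scalability#"):
    """Generate the triples of a class hierarchy of the given depth:
    C1 rdfs:subClassOf C0, C2 rdfs:subClassOf C1, ...  (illustrative names)."""
    sub = "http://www.w3.org/2000/01/rdf-schema#subClassOf"
    return [(f"{ns}C{i}", sub, f"{ns}C{i - 1}") for i in range(1, depth + 1)]

triples = class_chain(500)   # one end of the 500..5,000 depth range
print(len(triples))          # 500
```

Varying the single `depth` parameter yields the whole family of scalability tests.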
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
         xmlns="…arkOntology#"
         xml:base="…arkOntology#">
  <owl:Ontology>
    <rdfs:comment>Description of the benchmark suite inputs.</rdfs:comment>
    <owl:versionInfo>24 October 2006</owl:versionInfo>
  </owl:Ontology>
  <!-- classes -->
[Figure: OWL Lite Import Test Suite execution in three steps; each test relates a benchmarkOntology and a resultOntology, both described via rdf:type.]
• Automatically executes tests between all the tools
• Allows configuring different execution parameters
• Uses ontologies to represent tests and results
• Depends on external ontology comparers (Jena + Pellet and RDF-utils)
Conclusions
Methods for evaluating conformance and interoperability
• Common to different semantic technologies
• Problem-focused instead of tool-focused
• Provide data about other characteristics (e.g., robustness)
Resources for evaluating conformance and interoperability
• All the test suites, software and results are publicly available
• Independent of:
  – The interchange language
  – The input ontologies
Keyword-based test definition + automatic test execution
• Affordable for evaluators (end users, developers, etc.)
• Test definition at large scale
• Effective tests are needed, which requires effort
• Result analysis is still hard
5 evaluation datasets
• RDF(S) Import Test Suite
• OWL Lite Import Test Suite
• OWL DL Import Test Suite
• OWL Full Import Test Suite
• Scalability Test Suite
Timeline:
• May 2010: Registration opens
• May-June 2010: Evaluation materials and documentation are provided to participants
• July 2010: Participants upload their tools
• August 2010: Evaluation scenarios are executed
• September 2010: Evaluation results are analysed
• November 2010: Evaluation results are discussed in a workshop