<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ontologforum.com/index.php?action=history&amp;feed=atom&amp;title=TrackA</id>
	<title>TrackA - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://ontologforum.com/index.php?action=history&amp;feed=atom&amp;title=TrackA"/>
	<link rel="alternate" type="text/html" href="https://ontologforum.com/index.php?title=TrackA&amp;action=history"/>
	<updated>2026-04-17T21:41:40Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.0</generator>
	<entry>
		<id>https://ontologforum.com/index.php?title=TrackA&amp;diff=653&amp;oldid=prev</id>
		<title>imported&gt;KennethBaclawski: Fix PurpleMediaWiki references</title>
		<link rel="alternate" type="text/html" href="https://ontologforum.com/index.php?title=TrackA&amp;diff=653&amp;oldid=prev"/>
		<updated>2016-01-09T08:44:48Z</updated>

		<summary type="html">&lt;p&gt;Fix PurpleMediaWiki references&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Track&lt;br /&gt;
|Title=Intrinsic Aspects of Ontology Evaluation&lt;br /&gt;
|Aim=Ontologies are built to solve problems, and ultimately an ontology's worth can be measured by the effectiveness with which it helps in solving a particular problem. Nevertheless, since an ontology is a designed artifact, there are a number of intrinsic characteristics that can be measured for any ontology that give an indication of how &amp;quot;well-designed&amp;quot; it is. Examples include the proper use of various relations found within an ontology, proper separation of concepts and facts (sometimes referred to as classes vs. instance distinctions), proper handling of data type declarations, embedding of semantics in naming (sometimes called &amp;quot;optimistic naming&amp;quot;), detection of inconsistent range or domain constraints, proper class/subclass determination, the use of principles of ontological analysis, and many more. This Track aims to enumerate, characterize, and disseminate information on approaches, methodologies, and tools designed to identify such intrinsic characteristics, with the aim of raising the quality of ontologies in the future.&lt;br /&gt;
|Scope=The scope of this track includes: Dimensions of evaluation, methods, criteria, properties to measure&lt;br /&gt;
|Champion=LeoObrst,SteveRay&lt;br /&gt;
|Sessions=Session03,Session08&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div style=&amp;quot;background: #404040; padding: 5px 10px 5px 10px; font-size: larger;  color:#F4F4F4;&amp;quot;&amp;gt;SYNTHESIS&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;text-align: justify; font-size: 14px; font-family:Verdana;&amp;quot;&amp;gt; &lt;br /&gt;
It is useful to partition the ontology evaluation space into three regions:&lt;br /&gt;
&lt;br /&gt;
# Evaluation that does not depend at all on knowledge of the domain being modeled, but does draw upon mathematical and logical properties such as graph-theoretic connectivity, logical consistency, model-theoretic interpretation issues, inter-modularity mappings and preservations, etc. Structural properties such as branching factor, density, counts of ontology constructs, averages, and the like are intrinsic. Some meta-properties such as transitivity, symmetry, reflexivity, and equivalence may also figure in intrinsic notions. &lt;br /&gt;
# Evaluation where some understanding of the domain is needed in order to, for example, determine that a particular modeling construct is in alignment with the reality it is supposed to model. It may be that some meta-properties such as rigidity, identity, unity, etc., suggested by metaphysics, philosophical ontology, and philosophy of language are used to gauge the quality of the subclass/isa taxonomic backbone of an ontology and other structural aspects of the ontology. &lt;br /&gt;
# Situations where the structure and design of the ontology are opaque to the tester, and the evaluation is determined by the correctness of answers to various interrogations of the model. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We have chosen to call Region 1 Intrinsic Evaluation and Region 3 Extrinsic Evaluation. The reason this sort of partitioning is helpful is that purely intrinsic evaluation is highly amenable to automation (which is not to say that the other partitions are not automatable eventually, with more effort) and thus to scaling to many ontologies of any size. Examples of such tools include the Oops! Evaluation web site at http://oeg-lia3.dia.fi.upm.es/oops/index-content.jsp, described by [[MariaPovedaVillalon|Mar&amp;amp;iacute;a Poveda Villal&amp;amp;oacute;n]] {{{[ [http://ontolog.cim3.net/file/work/OntologySummit2013/2013-01-31_OntologySummit2013_OntologyEvaluation-IntrinsicAspects/OntologySummit2013_Ontology-pitfalls-OOPS--PovedaVillalon-SuarezFigueroa-GomezPerez_20130131.pdf see slides] ]}}}, and the use of [[OntoQA]] to develop metrics for any ontology based on structural properties and instance populations, described by [[SamirTartir|Samir Tartir]] {{{[ [http://ontolog.cim3.net/file/work/OntologySummit2013/2013-01-31_OntologySummit2013_OntologyEvaluation-IntrinsicAspects/OntologySummit2013_OntoQA--SamirTartir-IsmailcemBudakArpinar-AmitSheth_20130131.pdf see slides] ]}}}. By the very nature of the Oops! web tool, it is not possible for it to depend upon any domain knowledge. Instead, it reports only on suspected improper uses of various OWL DL modeling practices. &lt;br /&gt;
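The kind of context-free structural metrics named above (counts of ontology constructs, branching factor, taxonomy depth) are straightforward to automate. As a minimal illustration only, assuming a hypothetical toy subclass taxonomy rather than any tool discussed here:

```python
from collections import deque

# Hypothetical toy taxonomy: class name -> list of direct subclasses.
taxonomy = {
    "Thing": ["Agent", "Event"],
    "Agent": ["Person", "Organization"],
    "Event": ["Meeting"],
    "Person": [],
    "Organization": [],
    "Meeting": [],
}

def class_count(tax):
    """Number of classes in the taxonomy."""
    return len(tax)

def avg_branching_factor(tax):
    """Average number of direct subclasses, over classes that have any."""
    parents = [kids for kids in tax.values() if kids]
    return sum(len(kids) for kids in parents) / len(parents)

def max_depth(tax, root):
    """Longest root-to-leaf path length, by breadth-first traversal."""
    depth, frontier = 0, deque([(root, 0)])
    while frontier:
        node, d = frontier.popleft()
        depth = max(depth, d)
        frontier.extend((child, d + 1) for child in tax[node])
    return depth

print(class_count(taxonomy))                      # 6
print(round(avg_branching_factor(taxonomy), 2))   # 1.67
print(max_depth(taxonomy, "Thing"))               # 2
```

A real evaluator would read the class hierarchy from an OWL or RDFS file rather than a hand-built dictionary, but the metrics themselves require no domain knowledge, which is what makes Region 1 evaluation scale.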
&lt;br /&gt;
Similarly, Region 3, purely extrinsic evaluation, implies no ability whatsoever to peer inside a model, and depends entirely on model behavior through interactions. In some cases, it may be appropriate that extrinsic evaluation criteria be considered as intrinsic criteria with additional, relational arguments, e.g., precision with respect to a specific domain and specific requirements. &lt;br /&gt;
&lt;br /&gt;
For the purposes of developing reasonable expectations of different evaluation approaches, the challenge mainly lies in clarifying the preponderance of work that falls within Region 2, where some domain knowledge is employed and combined with the ability to explore the ontology being evaluated. For example, the [[OQuaRE]] framework described by [[AstridDuqueRamos|Astrid Duque Ramos]] {{{[ [http://ontolog.cim3.net/file/work/OntologySummit2013/2013-01-31_OntologySummit2013_OntologyEvaluation-IntrinsicAspects/OntologySummit2013_OQuaRE--FernandezBreis-DuqueRamos-RobertStevens-AussenacGilles_20130131.pdf see slides] ]}}} falls in this middle region, as it combines both context-dependent and context-independent metrics. Indeed, the [[OQuaRE]] team has stated their desire to better distinguish between these two categories of metrics. Another example is the [[OntoClean]] methodology (not reported on in Ontology Summit 2013, but generally well-known [1, 2]), which draws upon meta-domain knowledge, i.e., standard evaluative criteria originating from the practices of ontological analysis. &lt;br /&gt;
&lt;br /&gt;
Of course, structural integrity and consistency are only two kinds of evaluation to be performed, even in a domain-context-free setting. Entailments, model theories and subtheories, interpretability and reducibility are just a few of the other properties that should be examined. It is the goal of this summit to define a framework in which these examinations can take place, as part of a larger goal of defining the discipline of ontological engineering. &lt;br /&gt;
&lt;br /&gt;
[1] Guarino, Nicola and Chris Welty. 2002. Evaluating Ontological Decisions with OntoClean. Communications of the ACM 45(2):61-65. New York: ACM Press. http://portal.acm.org/citation.cfm?doid=503124.503150.&lt;br /&gt;
&lt;br /&gt;
[2] Guarino, Nicola and Chris Welty. 2004. An Overview of OntoClean. In Steffen Staab and Rudi Studer, eds., The Handbook on Ontologies. Pp. 151-172. Berlin: Springer-Verlag. http://www.loa-cnr.it/Papers/GuarinoWeltyOntoCleanv3.pdf.&lt;br /&gt;
&amp;lt;/div&amp;gt;&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>imported&gt;KennethBaclawski</name></author>
	</entry>
</feed>