• In general, XML benchmarks can be classified into two main categories:
    • Application (macro) benchmarks [XOO7, XMach-1, TPoX, XMark, XBench, Michigan], which are used to evaluate the overall performance of an XML management system. Hence, this kind of benchmark is not very useful for the detailed assessment of specific aspects of an implementation that need improvement.
    • Micro-benchmarks [XPathMark, MemBer], which are designed to assess the performance of specific features of a system.
  • Although the XML research community has proposed several benchmarks [XOO7, XMach-1, XPathMark, MemBer, TPoX, XMark, XBench], each valuable for its intended purpose, none of them is suitable for assessing and comparing the different selectivity estimation approaches for XML queries.
  • Several research efforts have proposed different selectivity estimation approaches in the XML domain. However, these approaches have never been comprehensively assessed, evaluated and compared. One of the main reasons for this is the lack of a suitable benchmark that would allow such an assessment and comparison. As a result, there is no clear view of the state-of-the-art in this domain, which in turn makes it difficult to decide where the next steps should go.
  • XSelMark (A Micro-Benchmark for Selectivity Estimation Approaches of XML Queries) is a first step toward providing an overview of the state-of-the-art approaches in the domain of selectivity estimation of XML queries, along with their strengths and weaknesses. It aims to guide researchers and implementors in benchmarking and improving their work in this domain. XSelMark consists of 25 queries organized into seven groups, where each group addresses the challenges posed by a different aspect of XML query result size estimation.
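To make the task concrete, the following is a minimal, hypothetical sketch (not taken from XSelMark or any cited system) of what a selectivity estimator for simple XML path queries might look like: a path synopsis that counts the occurrences of every root-to-node label path, which then answers simple absolute path queries exactly. Real estimators summarize such statistics approximately to save space and must also handle descendant axes, predicates, and value constraints; all names below are illustrative.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def build_path_synopsis(xml_text):
    """Count occurrences of every root-to-node label path in the document."""
    root = ET.fromstring(xml_text)
    counts = Counter()

    def walk(node, prefix):
        path = prefix + "/" + node.tag
        counts[path] += 1
        for child in node:
            walk(child, path)

    walk(root, "")
    return counts

def estimate(counts, path_query):
    """Estimate the result size of a simple absolute path query.

    For synopsis structures that store exact per-path counts, this
    'estimate' is exact; approximate synopses trade accuracy for space.
    """
    return counts.get(path_query, 0)

# Toy document (illustrative only).
doc = """<site>
  <people>
    <person><name>A</name></person>
    <person><name>B</name></person>
  </people>
  <items><item/><item/><item/></items>
</site>"""

synopsis = build_path_synopsis(doc)
print(estimate(synopsis, "/site/people/person"))  # 2
print(estimate(synopsis, "/site/items/item"))     # 3
```

The sketch shows why micro-benchmark query groups matter: a synopsis that handles simple paths exactly may still estimate poorly for recursion, branching predicates, or value joins, which is precisely the kind of per-feature behavior a micro-benchmark is meant to expose.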

Navigation