Search engine for discovering works of Art, research articles, and books related to Art and Culture

Benchmarking requirement template systems: comparing appropriateness, usability, and expressiveness

Abstract
Various semi-formal syntax templates for natural-language requirements help reduce ambiguity while preserving human readability.
Existing studies on their effectiveness focus on individual notations only and do not allow quality benefits to be investigated systematically.
We strive for a comparative benchmark and evaluation of template systems to assist practitioners in selecting appropriate ones and to enable researchers to pursue targeted improvements and domain-specific adaptations.
We conduct comparative experiments with five popular template systems—EARS, Adv-EARS, Boilerplates, MASTeR, and SPIDER.
First, we compare a control group of free-text requirements and treatment groups of their variants following the different templates.
Second, we compare MASTeR and EARS in user experiments for reading and writing.
Third, we analyse all five meta-models’ formality and ontological expressiveness based on the Bunge-Wand-Weber reference ontology.
The comparison of the requirement phrasings across seven relevant quality characteristics and a dataset of 1764 requirements indicates that all template systems except SPIDER have positive effects on all characteristics.
In a user experiment with 43 participants, mostly students, we learned that templates require substantial prior training, and that profound domain knowledge and experience are necessary to understand and write requirements in general.
The evaluation of the template systems’ meta-models suggests different levels of formality, modularity, and expressiveness.
MASTeR and Boilerplates provide high numbers of variants to express requirements and achieve the best results with respect to completeness.
Templates can generally improve various quality factors compared to free text.
Although MASTeR leads the field, there is no conclusive favourite, as most effect sizes are relatively similar.
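To make concrete what such a syntax template constrains, the sketch below checks a requirement against the well-known EARS event-driven pattern ("When <trigger>, the <system> shall <response>"). This is a minimal illustration, not tooling from the study; the regex and function name are assumptions chosen for the example.

```python
import re

# Illustrative sketch only: a loose matcher for the EARS event-driven
# pattern "When <trigger>, the <system> shall <response>." The pattern
# and function name are assumptions, not part of the paper's benchmark.
EVENT_DRIVEN = re.compile(r"^When\s+.+?,\s+the\s+.+?\s+shall\s+.+\.$")

def matches_event_driven(requirement: str) -> bool:
    """Return True if the requirement follows the event-driven EARS shape."""
    return bool(EVENT_DRIVEN.match(requirement))

# A templated phrasing matches; an unconstrained free-text one does not:
matches_event_driven(
    "When the user presses the power button, "
    "the device shall start within 2 seconds."
)   # expected to match
matches_event_driven("The device should start quickly.")  # does not match
```

A real checker would need one such pattern per EARS variant (ubiquitous, state-driven, unwanted behaviour, optional), which hints at why the abstract compares template systems by the number of variants they offer.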

Related Results

An optimisational model of benchmarking
Purpose: The purpose of this paper is to develop a quantitative methodology for benchmarking process which is simple, effective and efficient as a rejoinder to benchmarking detractor...
A review on benchmarking of supply chain performance measures
Purpose: The purpose of this paper is to redress the imbalances in the past literature of supply chain benchmarking and enhance data envelopment analysis (DEA) modeling approach in s...
Maximizing coverage, reducing time: a usability evaluation method for web-based library systems
Abstract: The usability of a Web Based Library System (WBLS) is an important quality attribute that must be met in order for the intended users to be satisfied. These usability quali...
Perancangan Usability Website Interface Sistem Informasi Kerusakan Laboratorium Universitas AMIKOM Yogyakarta
Abstract: Usability, as a measure of the quality of the user experience, is often described as a person's acceptance of a product when interacting...
The need for adaptive processes of benchmarking in small business‐to‐business services
Purpose: This paper aims to explore current management attitudes towards benchmarking and its implementation within small business‐to‐business service firms in order to enhance a dee...
Maximizing Coverage, Reducing Time: A Usability Evaluation Method for Web-Based Library Systems
Abstract: The usability of a Web Based Library System (WBLS) is a major quality attribute. Checklists have become a common and easy method to evaluate the usability of these WBLS...
