Foreword from the New JDIQ Editor-in-Chief
Healthcare is evolving towards patient-centered care, and Shared Decision Making (SDM) holds great promise to improve health, reduce costs, and better align care with patients' values. Information quality is key to empowering patients to make informed decisions. However, progress on shared decision making is impeded by several unresolved information quality challenges. In this paper we identify three key challenges that we believe must be addressed to better facilitate SDM: achieving consistency and reconciliation, optimizing the tradeoff between timeliness and accuracy, and integrating decision aids. We call on the information quality community to begin addressing these challenges to support the ongoing transition of healthcare to SDM.
Wireless sensor networks are widely applied in data collection applications, where energy efficiency is one of the most important design goals. In this paper, we propose QAAC, Quality-Assured Adaptive data Compression, which reduces the amount of data communication in order to save energy. QAAC first builds clusters from the dataset using an adaptive clustering algorithm; a code for each cluster is then generated and stored in a Huffman encoding tree, which is used to encode the original dataset via an encoding algorithm with an improvement approach. Once the encoded data, the Huffman encoding tree, and the parameters used in the improvement algorithm have been received at the sink, a decompression algorithm retrieves an approximation of the original dataset. The performance evaluation shows that QAAC is efficient, achieving a much higher compression ratio than the lossy and lossless compression algorithms it is compared against, and much less information loss than the compared lossy compression algorithms.
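The Huffman coding step at the heart of this scheme can be sketched as follows. This is a minimal, generic illustration of building a Huffman code table over cluster identifiers and encoding/decoding with it; QAAC's adaptive clustering and improvement algorithms are not reproduced here, and the function names are ours, not the paper's.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table from symbol frequencies.

    `symbols` is any iterable of hashable items (e.g. cluster ids).
    Returns {symbol: bitstring}; frequent symbols get shorter codes.
    """
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merging two subtrees prefixes their codes with 0 and 1.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(symbols, codes):
    return "".join(codes[s] for s in symbols)

def decode(bits, codes):
    """Greedy decoding works because Huffman codes are prefix-free."""
    rev = {c: s for s, c in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in rev:
            out.append(rev[buf])
            buf = ""
    return out
```

In the sink-side decompression the paper describes, the received encoding tree plays the role of `codes` here, and the decoded cluster codes are then mapped back to approximate sensor readings.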
We discuss challenges for enabling quality across multiple data analytics contexts.
Spam on Online Social Networks (OSNs) has attracted booming interest in the last few years. Following the rise of these platforms and their establishment as a ubiquitous part of online existence, spammers have found in them an opportunity to run a lucrative business. A major part of the literature that aims at detecting spammers on OSNs uses the supervised learning model as the building schema of its contributions. This model assumes that entities can be classified based on their statistical characteristics. A vital condition for the successful implementation of this model is that data be collected and labeled in a clean, accurate, and unbiased way, resulting in high-quality datasets. In this paper, we discuss the different steps of the supervised classification methodology as applied to social spam detection: data collection, labeling, transformation, and sharing. From this, various issues arise relating to collection bias, inaccurate and irreproducible labeling, obscure provenance of adjunct datasets (such as blacklists and spam dictionaries), imprecise description of feature extraction and data transformation, and, finally, complete or partial unavailability of the raw and final datasets used to build statistical decision models.
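The collection-transformation-training pipeline that this abstract critiques can be sketched in miniature. The example below is a toy, stdlib-only nearest-centroid classifier over hypothetical account features (`urls_per_post`, `follower_ratio` are illustrative names, not features from the paper); it shows why an imprecise description of `extract_features` makes such a pipeline irreproducible, since the learned model is entirely determined by that transformation.

```python
import math

def extract_features(account):
    """Transform a raw account record into a numeric feature vector.

    The choice and definition of features is exactly the step the
    abstract says is often under-specified in the literature.
    """
    return [account["urls_per_post"], account["follower_ratio"]]

def train_centroids(records):
    """Fit a nearest-centroid classifier: one mean vector per label."""
    sums, counts = {}, {}
    for rec in records:
        x, y = extract_features(rec), rec["label"]
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, record):
    """Assign the label whose centroid is closest in feature space."""
    x = extract_features(record)
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))
```

Any labeling error or collection bias in `records` propagates directly into the centroids, which is the paper's point about dataset quality governing model quality.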
The massive volumes of data in biological sequence databases provide a remarkable resource for large-scale biological studies. However, the underlying data quality of these resources is a critical concern. A particular concern is duplication, in which multiple records have similar sequences, creating a high level of redundancy that impacts database storage, curation, and search. Biological database de-duplication has two direct applications: for database curation, where detected duplicates are removed to improve curation efficiency; and for database search, where detected duplicate sequences may be flagged but remain available to support analysis. Clustering methods have been widely applied to biological sequences for database de-duplication. Given the high volumes of data, exhaustive all-by-all pairwise comparison of sequences cannot scale, and thus heuristics have been used, in particular simple similarity thresholds. This heuristic introduces a trade-off between efficiency and accuracy that we explore in this paper: if the similarity threshold is very high, the methods are accurate but slow; if the similarity threshold is too low, the methods are fast but inaccurate. We study the two best-known clustering tools for sequence database de-duplication, CD-HIT and UCLUST. Our contributions include: a detailed assessment of the redundancy remaining after de-duplication; application of standard clustering evaluation metrics to quantify the cohesion and separation of the clusters generated by each method; and a biological case study that assesses intra-cluster function annotation consistency, to demonstrate the impact of these factors in practical application of the sequence clustering methods. The results show that the trade-off between efficiency and accuracy becomes acute when low threshold values are used and when cluster sizes are large. The evaluation leads to practical recommendations for more effective use of the sequence clustering tools for de-duplication.
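The incremental, threshold-based clustering strategy shared by CD-HIT and UCLUST can be sketched as follows. This is a simplified illustration, not either tool's actual algorithm: it uses k-mer Jaccard similarity in place of their optimized alignment-based identity measures, and the threshold value is arbitrary. It does show where the efficiency/accuracy trade-off enters: each sequence is compared only against cluster representatives, and the threshold alone decides membership.

```python
def kmers(seq, k=3):
    """Decompose a sequence into its set of overlapping k-mers."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of k-mer sets: a crude stand-in for the
    alignment-based sequence identity used by CD-HIT and UCLUST."""
    sa, sb = kmers(a, k), kmers(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def greedy_cluster(seqs, threshold=0.9):
    """Incremental greedy clustering in the style of CD-HIT/UCLUST:
    process sequences longest-first; assign each to the first cluster
    whose representative meets the threshold, else open a new cluster.
    Returns a list of (representative, members) pairs."""
    clusters = []
    for s in sorted(seqs, key=len, reverse=True):
        for rep, members in clusters:
            if similarity(s, rep) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return clusters
```

Because membership is decided by a single comparison against a representative, a low threshold merges distantly related sequences into large, heterogeneous clusters, which is exactly the regime where the paper finds the trade-off becomes acute.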
Editor in Chief (January 2014 - May 2017) Farewell Report
Data quality is gaining momentum among organizations as they realize that poor data quality can cause failures and inefficiencies, compromising business processes and application results. However, enterprises often adopt data quality assessment and improvement methods based on practical and empirical approaches, without rigorously analyzing the data quality issues or the outcomes of the enacted improvement practices. In particular, data quality management, and especially the identification of the data quality dimensions to be monitored and improved, is left to knowledge workers on the basis of their skills and experience. Control methods are therefore designed around expected and evident quality problems, and thus may not be effective in dealing with unknown or unexpected problems. This paper provides a methodology, based on fault injection, for validating the data quality actions used by organizations. We show how to check whether the adopted techniques properly monitor the real issues that may damage business processes. At this stage we focus on scoring processes, i.e., processes whose output represents the evaluation or ranking of a specific object. We show the effectiveness of our proposal by means of a case study in the financial risk management area.
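The fault-injection idea can be sketched in a few lines. This is our minimal illustration, not the paper's methodology: it injects known faults into clean records and checks whether a quality control detects them, showing how a control designed for an expected problem (missing values) silently passes an unexpected one (out-of-range values). All names here are hypothetical.

```python
import copy
import random

def inject_faults(records, field, rate, fault=lambda v: None, seed=0):
    """Return (faulty_copy, injected_indices): a deep copy of `records`
    where roughly `rate` of the `field` values are replaced by the
    injected fault (default fault: a missing value)."""
    rng = random.Random(seed)
    out = copy.deepcopy(records)
    injected = []
    for i, rec in enumerate(out):
        if rng.random() < rate:
            rec[field] = fault(rec[field])
            injected.append(i)
    return out, injected

def completeness_check(records, field):
    """A typical quality control designed for an *expected* problem:
    flag records whose field is missing."""
    return [i for i, rec in enumerate(records) if rec[field] is None]
```

Comparing the indices flagged by the control against the indices actually injected measures how well the control covers each fault type; a fault class the control never flags reveals a blind spot in the data quality actions.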