This paper describes an approach to the automated ingestion of biomedical data dictionaries. Automated ingestion, or reading, is the process of extracting the details of each data element from a data dictionary in a document format (such as PDF) into a fully structured format. The structured format is essential if the data-dictionary metadata is to be used in applications such as data integration, and in evaluating the quality of the associated data. We present a machine-learning classification solution to the problem that uses conditional random field (CRF) classifiers and leverages multiple text- and character-based features of the text rows in the document. We present an evaluation on several real data dictionary documents, demonstrating the effectiveness of our approach.
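To illustrate the kind of row-level features such a CRF classifier might consume, the sketch below extracts simple text- and character-based features from each text row of an extracted document. The feature names, label scheme, and neighbour-feature choice are illustrative assumptions, not the paper's actual feature set; in practice these dictionaries would be fed to a CRF library such as sklearn-crfsuite.

```python
# Hypothetical feature extraction for classifying text rows of a data
# dictionary document (e.g. as a header, element name, or description row).

def row_features(row: str) -> dict:
    """Text- and character-based features for one extracted text row."""
    tokens = row.split()
    return {
        "n_tokens": len(tokens),
        "starts_upper": row[:1].isupper(),
        "all_caps_ratio": sum(t.isupper() for t in tokens) / max(len(tokens), 1),
        "digit_ratio": sum(c.isdigit() for c in row) / max(len(row), 1),
        "has_colon": ":" in row,
        "leading_ws": len(row) - len(row.lstrip()),
    }

def sequence_features(rows):
    """A CRF scores whole label sequences, so each row also sees a
    feature of its predecessor (here: whether it contained a colon)."""
    feats = [row_features(r) for r in rows]
    for i, f in enumerate(feats):
        f["prev_has_colon"] = feats[i - 1]["has_colon"] if i > 0 else False
    return feats
```

The neighbour features are what let the sequence model learn, for example, that a row following an element-name row is likely a description continuation.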
Queue mining is a novel research area of data mining that learns queueing models from data logs. These models are then used for performance prediction in queueing-oriented systems. Queue mining combines techniques from process mining, queueing theory, statistics, and optimization. This paper reviews challenges that stem from data quality issues in queue mining, as well as some existing solutions to these challenges.
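As a minimal illustration of learning a queueing model from a log (not code from the paper), the sketch below estimates arrival and service rates from event records and plugs them into the standard M/M/1 formula for expected waiting time, W_q = rho / (mu - lambda). The log's tuple layout is an assumption for this example.

```python
# Minimal queue-mining sketch: fit M/M/1 rates from an event log and
# predict the expected time a new arrival spends waiting in the queue.

def mm1_wait_from_log(log):
    """log: list of (arrival_time, service_start, service_end) tuples."""
    arrivals = sorted(t[0] for t in log)
    interarrival = [b - a for a, b in zip(arrivals, arrivals[1:])]
    lam = 1.0 / (sum(interarrival) / len(interarrival))   # arrival rate
    service = [end - start for _, start, end in log]
    mu = 1.0 / (sum(service) / len(service))              # service rate
    rho = lam / mu                                        # utilization
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    return rho / (mu - lam)   # expected waiting time in an M/M/1 queue
```

Data quality issues of the kind the paper reviews (missing timestamps, coarse clock resolution, unlogged abandonments) directly distort the estimated rates, and hence the prediction.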
Data from Twitter have been increasingly employed to study the impact of events. Conventionally, researchers have relied on keywords to create a panel of Twitter users who mention event-related keywords during and after an event. The keyword-based approach has limitations. First, it suffers from selection bias, since users who discuss an event are already more interested in event-related topics beforehand; it is thus unclear whether observed impacts are merely driven by a set of users who are intrinsically more interested in the event. Second, there are no viable comparison groups for a keyword-based sample of Twitter users. We propose an alternative sampling approach for studying responses to events on Twitter: geolocated panels, defined by users' geolocation. Geolocated panels are exogenous to the keywords in users' tweets, resulting in less selection bias than the keyword-based approach. Geolocated panels also allow us to follow within-person changes over time and enable the creation of comparison groups. We evaluate our panel-selection approach in two real-world settings: responses to mass shootings and responses to TV advertising. We first show empirically that keyword-based panels are subject to selection biases, while geolocated panels reduce them. We then show how geolocated panels can yield qualitatively different results. We believe we are the first to provide a clear empirical example of how a better panel-selection design, based on an exogenous variable such as geography, both reduces selection bias compared to the current state of the art and increases the value of Twitter research for studying events.
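The contrast between the two sampling designs can be sketched as follows. The tweet records, field names, keyword list, and affected area below are all hypothetical, chosen only to show why keyword selection is endogenous to what users tweet while geolocation is not.

```python
# Illustrative contrast between keyword-based and geolocated panel
# selection (not the paper's code; all field names are assumptions).

EVENT_KEYWORDS = {"shooting", "prayers"}
AFFECTED_AREA = {"townville"}   # hypothetical location of the event

def keyword_panel(tweets):
    """Users selected because they mention event keywords: selection
    depends on the outcome (what they tweet), inviting bias."""
    return {t["user"] for t in tweets
            if EVENT_KEYWORDS & set(t["text"].lower().split())}

def geolocated_panel(tweets):
    """Users selected by location, exogenous to what they tweet about."""
    return {t["user"] for t in tweets if t["geo"] in AFFECTED_AREA}
```

A comparison group then falls out naturally: geolocated users in otherwise similar but unaffected areas, something a keyword sample cannot provide.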
Among the different aspects of Knowledge Bases, data quality is one of the most relevant for obtaining value from such information. Knowledge Base quality assessment poses a number of big data challenges, such as high volume, variety, velocity, and veracity. In this paper we focus on assessing the veracity of facts through Deep Fact Validation (DeFacto), a fact-checking framework designed to validate facts in RDF Knowledge Bases. Despite recent developments in this research area, the underlying framework for this task still faces many challenges. This article pinpoints and discusses these issues and conducts a thorough analysis of the DeFacto pipeline, aiming to reduce error propagation through its components. As a result of this exploratory analysis, we offer insights toward an enhanced architecture that can better execute the complex task of fact checking, moving toward a better engineering of DeFacto.
Editorial: Special Issue on Improving the Veracity and Value of Big Data
Healthcare organizations increasingly rely on electronic information to optimize their operations. Highly diverse information from various sources accentuates the relevance and importance of information quality (IQ). The quality of information needs to be improved to support more efficient and reliable use of healthcare information systems (IS). This can only be achieved through the implementation of initiatives followed by most users across an organization. The purpose of this study is to examine how IS users' awareness of IQ issues affects their actual practices toward IQ initiatives. Shaped by awareness of the beneficial and problematic situations generated by IQ practices, users' motivation is found to influence their IQ-related behavior. In addition, social influences and facilitating conditions moderate the relationship between user intention and actual practice. The theoretical and practical implications of the findings are discussed, especially IQ best practices in healthcare settings.
Software metrics are becoming more widely accepted measures for software quality assessment. However, there is no standard form for representing metric definitions, which would be useful for exchanging and customizing metrics. In this paper, we propose the Software Product Metrics Definition Language (SPMDL), an XML-based description language we developed for defining software metrics in a precise and reusable form. Metric definitions in SPMDL are based on meta-models extracted from either source code or design artifacts, such as the Dagstuhl Middle Meta-model, with support for various abstraction levels. The language defines several flexible computation mechanisms, such as extended OCL queries and predefined graph operations on the meta-model. SPMDL provides an unambiguous description of a metric's definition; it is also easy to use and extensible.
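To make the idea of an XML-based, machine-readable metric definition concrete, the sketch below invents a small SPMDL-style snippet (the element names and query syntax are illustrative guesses, not the actual SPMDL schema) and evaluates it against a toy meta-model entity using the Python standard library.

```python
# Hypothetical SPMDL-style metric definition and a toy evaluator.
# The XML vocabulary here is an assumption for illustration only.
import xml.etree.ElementTree as ET

SPMDL_SNIPPET = """
<metric name="NOM" scope="Class">
  <description>Number of methods per class</description>
  <query>count(self.methods)</query>
</metric>
"""

def load_metric(xml_text):
    """Parse one metric definition into a plain dict."""
    root = ET.fromstring(xml_text)
    return {"name": root.get("name"),
            "scope": root.get("scope"),
            "query": root.findtext("query")}

def evaluate(metric, model_entity):
    """Toy interpreter for the single assumed query form count(self.<attr>)."""
    attr = metric["query"][len("count(self."):-1]
    return len(model_entity[attr])
```

Because the definition is data rather than code, it can be exchanged between tools and customized (e.g. swapping the query) without touching the evaluator, which is the reusability the language aims for.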