Journal of Web Semantics

Archived papers: 203
Elsevier
Publishing privacy logs to facilitate transparency and accountability
Reza Samavi; Mariano P. Consens
Abstract: Compliance with privacy policies imposes requirements on organizations and their information systems. Maintaining auditable privacy logs is one of the key mechanisms employed to ensure compliance, but the logs and their auditing reports are typically designed and implemented on an application-by-application basis. This paper develops a Linked Data model and ontologies to facilitate the sharing of logs that support privacy auditing and information accountability among multiple applications and participants. The L2TAP modular ontologies accommodate a variety of privacy scenarios and policies. SCIP is the key module: it synthesizes contextual integrity concepts and enables query-based solutions that facilitate privacy auditing. The other L2TAP modules describe logs, participants, and log events, all identified by web-accessible URIs, and include relevant provenance information to support accountability. A health self-management scenario illustrates how privacy preferences, accountability obligations, and access to personal information can be published and accessed as Linked Data by multiple participants, including internal and external auditors. We contribute query-based algorithmic solutions for two fundamental privacy auditing processes that analyse L2TAP logs: obligation derivation and compliance checking. The query-based solutions we develop require SPARQL implementations with only limited RDFS reasoning power and are therefore widely supported by commercial and open-source systems. We also provide experimental validation of the scalability of our query-based solution for compliance checking over L2TAP logs.
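The core idea of query-based compliance checking — matching logged access events against published privacy preferences — can be sketched in plain Python. This is a simplified illustration only: the participant names, data categories, and purposes below are hypothetical, and the real L2TAP approach operates on RDF logs queried with SPARQL rather than Python dictionaries.

```python
# Hypothetical preferences: a data subject permits access to a data
# category only for specific purposes (illustrative names, not L2TAP terms).
preferences = {
    ("alice", "glucose_readings"): {"care"},
}

# Hypothetical privacy-log events recording who accessed what and why.
log_events = [
    {"accessor": "dr_bob", "subject": "alice",
     "data": "glucose_readings", "purpose": "care"},
    {"accessor": "insurer", "subject": "alice",
     "data": "glucose_readings", "purpose": "marketing"},
]

def check_compliance(events, prefs):
    """Return the logged events whose purpose is not permitted
    by the data subject's preference (i.e. the violations)."""
    violations = []
    for e in events:
        allowed = prefs.get((e["subject"], e["data"]), set())
        if e["purpose"] not in allowed:
            violations.append(e)
    return violations

violations = check_compliance(log_events, preferences)
# The insurer's "marketing" access violates alice's "care"-only preference.
```

In the paper's setting the analogous check is expressed as a SPARQL query over the Linked Data log, so any store with basic RDFS support can run it.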
Impact analysis of data placement strategies on query efforts in distributed RDF stores
Daniel Janke; Steffen Staab; Matthias Thimm
Abstract: In recent years, scalable RDF stores in the cloud have been developed in which graph data is distributed over compute and storage nodes to scale query-processing effort and memory requirements. One main challenge in these RDF stores is the data placement strategy, which can be formalized in terms of graph covers. These graph covers determine whether (a) the triple distribution is well balanced over all storage nodes (storage balance), (b) different query results may be computed on several compute nodes in parallel (vertical parallelization), and (c) individual query results can be produced only from triples assigned to few – ideally one – storage node (horizontal containment). We analyse the impact of the three most commonly used graph cover strategies in these terms and find that balancing the query workload reduces query execution time more than reducing data transfer over the network does. To enable this analysis, we present our novel benchmark and open-source evaluation platform, Koral.
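One widely used graph cover of the kind the abstract discusses is the hash cover, which assigns each triple to a storage node by hashing its subject; all triples sharing a subject then land on the same node, which helps horizontal containment for subject-centred (star) query patterns. The sketch below is a minimal stdlib illustration of that placement rule, not Koral's implementation; the namespace prefixes are made up.

```python
import hashlib

def node_for(term, n_nodes):
    """Pick a storage node by hashing a triple's subject (hash cover)."""
    h = int(hashlib.md5(term.encode("utf-8")).hexdigest(), 16)
    return h % n_nodes

# Illustrative triples (subject, predicate, object).
triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:name", '"Bob"'),
]

def hash_cover(triples, n_nodes):
    """Partition the triples over n_nodes storage nodes by subject hash."""
    cover = {i: [] for i in range(n_nodes)}
    for s, p, o in triples:
        cover[node_for(s, n_nodes)].append((s, p, o))
    return cover

cover = hash_cover(triples, 4)
# Both "ex:alice" triples are guaranteed to sit on the same node.
```

A hash cover tends to give good storage balance but ignores graph structure, whereas, e.g., a minimal-edge-cut cover trades balance for locality — exactly the kind of trade-off the paper measures.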
The dataLegend ecosystem for historical statistics
Rinke Hoekstra; Albert Meroño-Peñuela; Auke Rijpma; Richard Zijdeman; Ashkan Ashkpour; Kathrin Dentler; Ivo Zandhuis; Laurens Rietveld
Abstract: The main promise of the digital humanities is the ability to perform scholarly studies at a much broader scale, and in a much more reusable fashion. The key enabler for such studies is the availability of sufficiently well-described data. In the field of socio-economic history, data usually comes in tabular form. Existing efforts to curate and publish datasets take a top-down approach focused on large collections: they produce scarce metadata, require expertise for effective integration, provide poor user support while producing mappings, and present issues with data access. This paper presents the datalegend platform, which addresses the long tail of research data by catering for the needs of individual scholars. datalegend allows researchers to publish their (small) datasets, link them to existing vocabularies and other datasets, and thereby contribute to a growing collection of interlinked datasets. We present the architecture of datalegend; its core vocabularies and data; and QBer, an interactive, user-supportive mapping generator and RDF converter. We evaluate our results by showing how our system facilitates use cases in socio-economic history.
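The conversion step a tool like QBer performs — turning tabular historical data into linked triples — can be illustrated with a minimal stdlib sketch. The base namespace, column names, and sample row below are invented for illustration; QBer's actual mapping model (vocabulary linking, interactive mapping support) is far richer than this.

```python
import csv
import io

# Hypothetical base namespace for minted URIs (not datalegend's real one).
BASE = "https://example.org/resource/"

# A tiny illustrative socio-economic history table.
csv_data = "id,year,population\nNL1900,1900,5104000\n"

def rows_to_triples(text):
    """Convert each CSV row into (subject, predicate, object) triples:
    the 'id' column becomes the subject URI, the other columns
    become predicate/literal pairs."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        subj = f"<{BASE}{row['id']}>"
        for col in ("year", "population"):
            triples.append((subj, f"<{BASE}{col}>", f'"{row[col]}"'))
    return triples

triples = rows_to_triples(csv_data)
# One subject URI per row, one triple per remaining column.
```

The point of such a conversion is that, once a scholar's small table is expressed as triples against shared vocabularies, it can be queried and interlinked alongside every other dataset on the platform.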