Title

Direct private query in location-based services with GPU run time analysis

Authors

C. Asanya; R. Guha

Abbreviated Journal Title

J. Supercomput.

Keywords

Disjointed neighborhood; Hausdorff space; Parallel processing; GPU computing; CUDA; Topological space; K-ANONYMITY; MODEL; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic

Abstract

A private query in location-based services allows users to request and receive the nearest point of interest (POI) without revealing their location or the object received. However, because the service is customized, it requires user-specific information, and problems arise when a user, due to privacy or security concerns, is unwilling to disclose that information. Previous solutions for hiding it have been found to be deficient and sometimes inefficient. In this paper, we propose a novel idea that partitions objects into neighborhoods, supported by a database design that allows a user to retrieve the exact nearest POI without revealing either the user's location or the object retrieved. The paper is organized into two parts. In the first part, we adopt the concept of a topological space to generalize the object space. To limit the information disclosed and minimize transmission cost, we create disjointed neighborhoods such that each neighborhood contains no more than one object, and we organize the database matrix to align with object locations in the area. For optimization, we introduce the concept of a kernel on the graphics processing unit (GPU) and develop a parallel implementation of our algorithm that exploits the computing power of the GPU's streaming multiprocessors together with the Compute Unified Device Architecture (CUDA) parallel computing platform and programming model. In the second part, we study the serial implementation of our algorithm with respect to execution time and complexity. Our experiments show a scalable design that is suitable for any population size with minimal impact on user experience. We also study the GPU-CUDA parallel implementation and compare its performance with CPU serial processing; the results show a 23.9-fold improvement of the GPU over the CPU. To help determine the optimal size of the parameters in our design, or in similar scalable algorithms, we provide an analysis and a model for predicting GPU execution time based on the size of the chosen parameter.
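Although this record contains no code, the abstract's one-POI-per-disjointed-neighborhood lookup maps naturally onto a CUDA kernel. The sketch below is purely illustrative and is not the authors' implementation: the kernel name findPoiKernel, the flat poiByCell array indexed by neighborhood, the cell count NUM_CELLS, the one-thread-per-neighborhood mapping, and the toy data are all assumptions introduced here to show the general GPU search pattern the abstract describes.

// Minimal sketch (not the authors' code): one GPU thread per neighborhood cell
// searches a neighborhood-indexed POI table in parallel. All names and sizes
// here are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

#define NUM_CELLS 1024            // assumed number of disjointed neighborhoods

// Each cell holds at most one POI id; -1 marks an empty neighborhood.
__global__ void findPoiKernel(const int *poiByCell, int queryCell, int *result)
{
    int cell = blockIdx.x * blockDim.x + threadIdx.x;
    if (cell < NUM_CELLS && cell == queryCell && poiByCell[cell] >= 0) {
        *result = poiByCell[cell];   // at most one thread writes: neighborhoods are disjoint
    }
}

int main()
{
    int hostPoi[NUM_CELLS];
    for (int i = 0; i < NUM_CELLS; ++i)
        hostPoi[i] = (i % 7 == 0) ? i * 10 : -1;   // toy POI placement
    int hostResult = -1;

    int *devPoi = nullptr, *devResult = nullptr;
    cudaMalloc((void **)&devPoi, NUM_CELLS * sizeof(int));
    cudaMalloc((void **)&devResult, sizeof(int));
    cudaMemcpy(devPoi, hostPoi, NUM_CELLS * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(devResult, &hostResult, sizeof(int), cudaMemcpyHostToDevice);

    int queryCell = 49;                            // neighborhood the client asks about
    int threads = 256;
    int blocks = (NUM_CELLS + threads - 1) / threads;
    findPoiKernel<<<blocks, threads>>>(devPoi, queryCell, devResult);
    cudaDeviceSynchronize();

    cudaMemcpy(&hostResult, devResult, sizeof(int), cudaMemcpyDeviceToHost);
    printf("POI in neighborhood %d: %d\n", queryCell, hostResult);

    cudaFree(devPoi);
    cudaFree(devResult);
    return 0;
}

The sketch compiles with nvcc (for example, nvcc sketch.cu -o sketch); a real deployment would return encrypted POI records rather than toy integers, and its execution time would depend on the chosen neighborhood parameters, which is the kind of dependence the paper's GPU run-time model is meant to predict.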

Journal Title

Journal of Supercomputing

Volume

71

Issue/Number

2

Publication Date

1-1-2015

Document Type

Article

Language

English

First Page

537

Last Page

573

WOS Identifier

WOS:000349259600006

ISSN

0920-8542
