Direct Private Query In Location-Based Services With GPU Run Time Analysis

Keywords

CUDA; Disjointed neighborhood; GPU computing; Hausdorff space; Parallel processing; Topological space

Abstract

Private query in location-based services allows users to request and receive the nearest point of interest (POI) without revealing their location or the object received. However, because the service is customized, it requires user-specific information, and problems arise when a user, due to privacy or security concerns, is unwilling to disclose it. Previous solutions for hiding this information have proven deficient and sometimes inefficient. In this paper, we propose a novel idea that partitions objects into neighborhoods, supported by a database design that allows a user to retrieve the exact nearest POI without revealing either the user's location or the object retrieved. The paper is organized into two parts. In the first part, we adopt the concept of topological space to generalize the object space. To limit the information disclosed and minimize transmission cost, we create disjoint neighborhoods such that each neighborhood contains no more than one object, and we organize the database matrix to align with object locations in the area. For optimization, we introduce the concept of a kernel in the graphics processing unit (GPU) and develop a parallel implementation of our algorithm that exploits the streaming multiprocessors of the GPU through the Compute Unified Device Architecture (CUDA) parallel computing platform and programming model. In the second part, we study the serial implementation of our algorithm with respect to execution time and complexity. Our experiments show a scalable design suitable for any population size with minimal impact on user experience. We also study the GPU–CUDA parallel implementation and compare its performance with CPU serial processing; the results show a 23.9× speedup of the GPU over the CPU. Finally, to help determine optimal parameter sizes for our design or similar scalable algorithms, we provide an analysis and a model for predicting GPU execution time based on the size of the chosen parameter.
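The core idea of the abstract, partitioning the object space into disjoint neighborhoods that each contain at most one POI so that a client can query by neighborhood rather than by exact coordinates, can be illustrated with a minimal sketch. This is not the paper's actual algorithm or database design: the grid partition, the function names, and the example POIs below are all illustrative assumptions.

```python
# Hypothetical sketch of the disjoint-neighborhood idea (not the paper's
# exact method): split a unit square into a grid of disjoint cells, each
# holding at most one POI. The client reveals only its cell index, never
# its exact coordinates, and the server answers with that cell's object.
from math import isqrt

def build_grid(pois, n_cells):
    """Index each POI by grid cell; assumes at most one POI per cell."""
    side = isqrt(n_cells)  # cells per axis
    grid = {}
    for name, (x, y) in pois.items():
        cell = (min(int(x * side), side - 1), min(int(y * side), side - 1))
        # The paper requires neighborhoods to be disjoint with no more
        # than one object each; here we simply assert that property.
        assert cell not in grid, "refine the partition until cells are disjoint"
        grid[cell] = name
    return grid, side

def query(grid, side, x, y):
    """Client-side: map a location to its cell and ask for that cell's POI."""
    cell = (min(int(x * side), side - 1), min(int(y * side), side - 1))
    return grid.get(cell)

# Illustrative data, not from the paper.
pois = {"cafe": (0.1, 0.2), "bank": (0.8, 0.7)}
grid, side = build_grid(pois, 16)
print(query(grid, side, 0.12, 0.22))  # -> cafe
```

In this toy version the server learns only a coarse cell index, which hints at why the neighborhood granularity is a tunable parameter: finer cells reduce what the index reveals but enlarge the database matrix, the trade-off the paper's GPU run-time model is meant to help size.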

Publication Date

2-1-2015

Publication Title

Journal of Supercomputing

Volume

71

Issue

2

Number of Pages

537-573

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1007/s11227-014-1309-4

Scopus ID

84925515841 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84925515841
