Title

A GPGPU Compiler for Memory Optimization and Parallelism Management

Keywords

Compiler; GPGPU

Abstract

This paper presents a novel optimizing compiler for general purpose computation on graphics processing units (GPGPU). It addresses two major challenges of developing high performance GPGPU programs: effective utilization of GPU memory hierarchy and judicious management of parallelism. The input to our compiler is a naïve GPU kernel function, which is functionally correct but without any consideration for performance optimization. The compiler analyzes the code, identifies its memory access patterns, and generates both the optimized kernel and the kernel invocation parameters. Our optimization process includes vectorization and memory coalescing for memory bandwidth enhancement, tiling and unrolling for data reuse and parallelism management, and thread block remapping or address-offset insertion for partition-camping elimination. The experiments on a set of scientific and media processing algorithms show that our optimized code achieves very high performance, either superior or very close to the highly fine-tuned library, NVIDIA CUBLAS 2.2, and speedups of up to 128 times over the naïve versions. Another distinguishing feature of our compiler is the understandability of the optimized code, which is useful for performance analysis and algorithm refinement. Copyright © 2010 ACM.
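To illustrate the kind of transformation the abstract describes, the sketch below shows a hand-written CUDA matrix-multiply kernel using shared-memory tiling with coalesced global loads, the techniques the paper's compiler is said to apply automatically to a naïve kernel. This is our own minimal example, not output of the paper's compiler; the tile size of 16 and the assumption that the matrix dimension n is a multiple of the tile size are simplifications made here for brevity.

#define TILE 16

// Tiled, coalesced matrix multiply: C = A * B for n x n matrices,
// assuming n is a multiple of TILE.
__global__ void matmul_tiled(const float *A, const float *B, float *C, int n)
{
    // Shared-memory tiles provide data reuse across the threads of a block;
    // consecutive threads in a warp load consecutive global addresses,
    // so the loads of A and B are coalesced.
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        // Stage one tile of A and one tile of B into shared memory.
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();

        // Accumulate the partial dot product from the staged tiles.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}

A matching launch configuration would be dim3 block(TILE, TILE); dim3 grid(n / TILE, n / TILE); matmul_tiled<<<grid, block>>>(dA, dB, dC, n); which mirrors the kernel invocation parameters the compiler is described as generating alongside the optimized kernel.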

Publication Date

6-1-2010

Publication Title

ACM SIGPLAN Notices

Volume

45

Issue

6

Number of Pages

86-97

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1145/1809028.1806606

Scopus ID

77957600490 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/77957600490
