Title

Enhancing Memory-Level Parallelism Via Recovery-Free Value Prediction

Keywords

Single data stream architectures

Abstract

The ever-increasing computational power of contemporary microprocessors significantly reduces the execution time spent on arithmetic computations (i.e., computations not involving slow memory operations such as cache misses). Therefore, for memory-intensive workloads, it becomes more important to overlap multiple cache misses than to overlap slow memory operations with other computations. In this paper, we propose a novel technique to parallelize sequential cache misses, thereby increasing memory-level parallelism (MLP). Our idea is based on value prediction, which was originally proposed as an instruction-level parallelism (ILP) optimization to break true data dependencies. Here, we advocate value prediction for its capability to enhance MLP rather than ILP. We propose using value prediction and value-speculative execution only for prefetching, so that the complex prediction-validation and misprediction-recovery mechanisms are avoided while better performance is achieved for memory-intensive workloads. The minor hardware modifications required also enable aggressive memory disambiguation for prefetching. The experimental results show that our technique enhances MLP effectively and achieves significant speedups, even with a simple stride value predictor. © 2005 IEEE.
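To illustrate the kind of predictor the abstract refers to, the following is a minimal, hypothetical C++ sketch of a stride value predictor whose predictions would be used only to launch prefetches, so no validation or recovery logic is modeled. The direct-mapped table indexed by load PC, the table size, the field names, and the confidence threshold are illustrative assumptions, not details taken from the paper.

```cpp
// Minimal sketch of a stride value predictor (illustrative assumptions only).
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <optional>

constexpr std::size_t TABLE_SIZE = 1024;  // hypothetical table size

struct StrideEntry {
    uint64_t tag = 0;         // load PC identifying this entry
    uint64_t last_value = 0;  // last observed value of the load
    int64_t  stride = 0;      // difference between the last two values
    int      confidence = 0;  // saturating counter gating predictions
    bool     valid = false;
};

class StridePredictor {
    StrideEntry table_[TABLE_SIZE];

public:
    // Predict the next value of the load at `pc`, or nothing if confidence is low.
    std::optional<uint64_t> predict(uint64_t pc) const {
        const StrideEntry& e = table_[pc % TABLE_SIZE];
        if (e.valid && e.tag == pc && e.confidence >= 2)
            return e.last_value + e.stride;
        return std::nullopt;
    }

    // Train on the actual value once the load completes. Because the predicted
    // value only seeds a prefetch and is never committed, a misprediction needs
    // no recovery; the entry is simply retrained.
    void train(uint64_t pc, uint64_t value) {
        StrideEntry& e = table_[pc % TABLE_SIZE];
        if (!e.valid || e.tag != pc) {
            e = StrideEntry{pc, value, 0, 0, true};
            return;
        }
        const int64_t new_stride = static_cast<int64_t>(value - e.last_value);
        e.confidence = (new_stride == e.stride) ? std::min(e.confidence + 1, 3) : 0;
        e.stride = new_stride;
        e.last_value = value;
    }
};

int main() {
    StridePredictor sp;
    // A load whose value advances by a fixed stride (e.g., walking an array of
    // 64-byte records). In a real design the prediction would drive a prefetch;
    // here it is just printed.
    for (uint64_t v = 0x1000; v < 0x1000 + 8 * 64; v += 64) {
        if (auto p = sp.predict(0x400123))
            std::printf("would prefetch predicted address 0x%llx\n",
                        static_cast<unsigned long long>(*p));
        sp.train(0x400123, v);
    }
}
```

After a couple of training updates the confidence counter saturates and the predictor begins emitting predicted addresses; a wrong prediction merely wastes a prefetch, which is why the scheme can omit validation and recovery hardware.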

Publication Date

7-1-2005

Publication Title

IEEE Transactions on Computers

Volume

54

Issue

7

Number of Pages

897-912

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/TC.2005.117

Scopus ID

22944462650 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/22944462650
