Using Deep Learning To Configure Parallel Distributed Discrete-Event Simulators

Keywords

Complexity; Deep Learning; Neural Networks; Parallel Distributed Simulation

Abstract

This research discusses the use of deep learning to select the time synchronization scheme that optimizes the performance of a particular parallel discrete-event simulation hardware/software arrangement. Deep belief neural networks use measures of software complexity and architectural features to recognize patterns and thereby predict performance. Software complexity factors such as simulation objects, branching, function calls, concurrency, iterations, mathematical computations, and messaging frequency were each assigned a weight based on a cognitive weighting approach. In addition, hardware/network features such as the distribution pattern of simulation objects, CPU features (e.g., multithreading/multicore), and the degree of loose versus tight coupling of the computer architecture were captured to define the parallel distributed simulation arrangement. Deep belief neural networks, in particular restricted Boltzmann machines (RBMs), were then used to learn from these complexity parameters and the corresponding time synchronization scheme performance as measured by speedup. The simulation optimization techniques outlined could be implemented within existing parallel distributed simulation systems to improve performance.
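
Below is a minimal, hypothetical Python sketch of the kind of pipeline the abstract describes: weighted complexity and architecture features feed a restricted Boltzmann machine, whose learned representation is then used to predict which time synchronization scheme (conservative vs. optimistic) is expected to yield the better speedup. The feature names, label encoding, synthetic data, and scikit-learn components are illustrative assumptions, not the authors' implementation.

    # Sketch only: RBM features + classifier to recommend a synchronization scheme.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    # Hypothetical feature vector per simulation arrangement:
    # [object count, branching weight, function-call weight, concurrency weight,
    #  iteration weight, math-computation weight, messaging frequency,
    #  cores per node, multithreading flag, coupling degree (0 = loose .. 1 = tight)]
    rng = np.random.default_rng(0)
    X = rng.random((200, 10))

    # Hypothetical labels: 0 = conservative, 1 = optimistic synchronization,
    # derived here from a made-up rule purely for demonstration.
    y = (X[:, 6] + X[:, 9] < 1.0).astype(int)

    # Scale inputs to [0, 1] (BernoulliRBM expects that range), learn RBM
    # features, then classify which scheme should give the higher speedup.
    model = Pipeline([
        ("scale", MinMaxScaler()),
        ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                             n_iter=30, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, y)

    # Recommend a scheme for a new hardware/software arrangement.
    new_arrangement = rng.random((1, 10))
    scheme = "optimistic" if model.predict(new_arrangement)[0] else "conservative"
    print("recommended scheme:", scheme)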

Publication Date

1-1-2017

Publication Title

Artificial Intelligence: Advances in Research and Applications

Number of Pages

23-47

Document Type

Article; Book Chapter

Personal Identifier

scopus

Scopus ID

85044669295 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85044669295

