Abstract

Stochastic optimization addresses problems of minimizing or maximizing an objective function when randomness is present in the problem data or the optimization process. In this dissertation, stochastic optimization problems arising in manufacturing, health care, and information cascades are investigated in network systems. These problems aim to plan the use of existing resources to improve production efficiency, customer satisfaction, and information influence under resource limitations. Because the resulting strategies are made for future planning, the network systems are subject to environmental uncertainty; moreover, the environment itself may change in response to the decision maker's actions. To handle this decision-dependent situation, a discrete choice model is applied to estimate the dynamic environment within the stochastic programming models. In the manufacturing project, production planning with lot allocation is performed to maximize the expected output over a limited time horizon. In the health care project, physicians are allocated to different local clinics to maximize patient utilization. In the information cascade project, seed users are selected to help the information holder diffuse a message to target users under the independent cascade model so as to maximize influence. The computational complexity of all three problems grows exponentially with the network size. To solve these stochastic optimization problems on large-scale networks within a reasonable time, problem-specific algorithms are designed for each project. In the manufacturing project, the sample average approximation method is applied to reduce the scenario size. In the health care project, a guided local search with gradient ascent and a large neighborhood search with Tabu search are developed to approach the optimal solution. In the information cascade project, a myopic policy is used to decompose the stochastic program over discrete time periods, and a Markov decision process is used for policy evaluation and updating.
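
For readers unfamiliar with the independent cascade model named in the abstract, the following minimal Python sketch (not taken from the dissertation; the graph, activation probabilities, and seed sets are hypothetical) illustrates how a single diffusion is simulated and how the expected spread of a seed set can be estimated by Monte Carlo sampling, the quantity that seed selection for influence maximization seeks to maximize.

```python
import random

# Hypothetical directed graph: node -> list of (neighbor, activation probability).
# Values are illustrative only, not data from the dissertation.
graph = {
    "a": [("b", 0.4), ("c", 0.3)],
    "b": [("d", 0.5)],
    "c": [("d", 0.2), ("e", 0.6)],
    "d": [("e", 0.3)],
    "e": [],
}

def simulate_cascade(graph, seeds, rng=random):
    """One independent-cascade diffusion: each newly activated node gets a
    single chance to activate each of its inactive out-neighbors."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor, prob in graph.get(node, []):
                if neighbor not in active and rng.random() < prob:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return len(active)

def expected_spread(graph, seeds, samples=1000):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(simulate_cascade(graph, seeds) for _ in range(samples)) / samples

if __name__ == "__main__":
    print(expected_spread(graph, {"a"}))        # spread from a single seed
    print(expected_spread(graph, {"a", "c"}))   # spread from a two-node seed set
```

Comparing the estimated spreads of candidate seed sets in this way is the basic building block of greedy seed-selection heuristics; the dissertation itself combines such diffusion modeling with a myopic policy and a Markov decision process, which the sketch does not attempt to reproduce.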

Notes

If this is your thesis or dissertation and you want to learn how to access it, or for more information about readership statistics, contact us at STARS@ucf.edu.

Graduation Date

2019

Semester

Fall

Advisor

Zheng, Qipeng

Degree

Doctor of Philosophy (Ph.D.)

College

College of Engineering and Computer Science

Department

Industrial Engineering and Management Systems

Degree Program

Industrial Engineering

Format

application/pdf

Identifier

CFE0007792

URL

http://purl.fcla.edu/fcla/etd/CFE0007792

Language

English

Release Date

December 2019

Length of Campus-only Access

None

Access Status

Doctoral Dissertation (Open Access)
