Abstract
We consider an assemble-to-order system with multiple products; multiple components, which may be demanded in different quantities by different products; batch ordering of components; random lead times; and lost sales. We model the system as an infinite-horizon Markov decision process under the discounted cost criterion. A control policy specifies when a batch of components should be produced (i.e., inventory replenishment) and whether an arriving demand for each product should be satisfied (i.e., inventory allocation). As optimal solutions for such problems are computationally intractable for even moderate-sized systems, we approximate the optimal cost function by reducing the state space of the original problem via a novel aggregation technique that uses knowledge of products' component requirements and components' replenishment batch sizes.
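One plausible reading of the batch-size-based aggregation is that original component-inventory states are grouped by flooring each component's stock level to a multiple of its replenishment batch size. The sketch below illustrates this idea only; the names (`aggregate_state`, `batch_sizes`) and the exact mapping are illustrative assumptions, not taken from the paper.

```python
def aggregate_state(inventory, batch_sizes):
    """Map an original component-inventory vector to an aggregate state:
    each component's level is floored to its replenishment-batch index."""
    return tuple(level // q for level, q in zip(inventory, batch_sizes))

# Example: component 1 is replenished in batches of 4, component 2 in batches of 3.
batch_sizes = (4, 3)
print(aggregate_state((5, 7), batch_sizes))  # -> (1, 2)
print(aggregate_state((7, 8), batch_sizes))  # -> (1, 2): distinct original states collapse
```

Under such a mapping, the number of aggregate states shrinks by roughly the product of the batch sizes, which is where the computational savings would come from.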
We establish that a lattice-dependent base-stock and lattice-dependent rationing policy is the optimal inventory replenishment and allocation policy for the aggregate problem under a disaggregation rule that disaggregates each aggregate state into its two extreme original states. This rule drastically reduces the per-iteration computational complexity of the value iteration algorithm for the aggregate problem (without sacrificing much accuracy, according to our numerical experiments). We further alleviate the computational burden of value iteration by eliminating suboptimal actions based on our optimal policy structure.
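If an aggregate state corresponds to a box of original inventory vectors (as under the batch-flooring interpretation), its two extreme original states would be the componentwise lowest and highest vectors in that box. The helper below is a hypothetical sketch of that disaggregation rule; the interpretation and all names are assumptions for illustration.

```python
def extreme_states(agg, batch_sizes):
    """Return the two extreme original states contained in an aggregate state:
    the componentwise minimum and maximum inventory vectors of its box."""
    lo = tuple(a * q for a, q in zip(agg, batch_sizes))
    hi = tuple(a * q + (q - 1) for a, q in zip(agg, batch_sizes))
    return lo, hi

# Example: aggregate state (1, 2) with batch sizes (4, 3) spans
# inventories from (4, 6) up to (7, 8).
print(extreme_states((1, 2), (4, 3)))  # -> ((4, 6), (7, 8))
```

Evaluating only these two corners, rather than every original state inside the box, is what would cut the per-iteration cost of value iteration on the aggregate problem.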
For systems in which one product has fulfillment priority over all other products at optimality, we derive finite error bounds for the cost function of the aggregate problem. With these bounds we show that the value iteration algorithm for the original problem, when initialized with the aggregate solution, converges to the optimal cost function. Numerical experiments indicate that such an algorithm has a distinct computational advantage over the standard value iteration method in the original problem.
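The warm-starting idea can be illustrated generically: since the discounted-cost Bellman operator is a contraction, value iteration converges from any starting guess, and a guess close to the optimal cost function (such as an aggregate-problem solution) needs fewer iterations. The toy sketch below uses a random MDP and a perturbed optimal value as a stand-in for the aggregate solution; none of it reproduces the paper's actual model.

```python
import numpy as np

def value_iteration(P, c, beta, v0, tol=1e-8, max_iter=10_000):
    """Discounted-cost value iteration: v(s) = min_a [c(s,a) + beta * E v(s')].
    P: (A, S, S) transition matrices, c: (A, S) costs, v0: starting guess.
    Returns the value function and the number of iterations used."""
    v = v0.copy()
    for n in range(1, max_iter + 1):
        v_new = np.min(c + beta * (P @ v), axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, n
        v = v_new
    return v, max_iter

rng = np.random.default_rng(0)
A, S, beta = 2, 6, 0.9
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)          # row-stochastic transitions
c = rng.random((A, S))

v_cold, n_cold = value_iteration(P, c, beta, np.zeros(S))
# Warm start from a rough approximation of the optimal values (here, the
# converged solution plus noise, standing in for an aggregate-problem solution).
v_warm, n_warm = value_iteration(P, c, beta, v_cold + 0.01 * rng.random(S))
print(n_cold, n_warm)  # the warm start converges in fewer iterations
```

The iteration-count gap mirrors the abstract's claim: a good initial guess inherited from the aggregate problem shortens convergence without changing the limit.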