We improve the previously proposed Q(s, S) policy for the stochastic joint replenishment problem. In our value-based Q(s, S) policy, item inventories are reviewed after each demand instant once the total demand since the last replenishment is close to Q. At each such review instant a new replenishment order is issued if the expected cost of ordering immediately according to the (s, S) policy is lower than the expected cost of deferring the order until the next demand or until the level Q is reached. We evaluate the policy by simulation. Applied to a standard set of 12-item numerical examples from the literature, the value-based Q(s, S) policy reduces the long-run average cost of the best known solution by approximately 1%. We also investigate further examples; in some cases where the cost structure implies a high service level, the cost reduction exceeds 10% of the cost of the pure Q(s, S) policy.
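The review-and-order decision above can be sketched as follows. This is a minimal illustration only, not the authors' model: the "close to Q" threshold (here a fraction of Q) and the two expected-cost values are hypothetical placeholders that a full implementation would compute from the demand and cost model.

```python
def should_order(expected_cost_now: float, expected_cost_defer: float) -> bool:
    """Value-based rule: order only if ordering immediately per the
    (s, S) policy is expected to be cheaper than deferring."""
    return expected_cost_now < expected_cost_defer


def review(demand_since_last: float, Q: float, near_fraction: float,
           expected_cost_now: float, expected_cost_defer: float) -> bool:
    """Review inventories after a demand instant. A review is triggered
    once cumulative demand is 'close to' Q, modelled here (as an
    assumption) by exceeding near_fraction * Q. Returns True if a
    replenishment order should be issued at this review instant."""
    if demand_since_last < near_fraction * Q:
        return False  # not close enough to Q; no review yet
    return should_order(expected_cost_now, expected_cost_defer)
```

For example, with Q = 10 and a threshold of 0.8, a review at cumulative demand 9 issues an order only when the immediate-ordering cost estimate is below the deferral cost estimate.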
Main Research Area:
LCCC Theme Semester 2010 Workshop on Distributed Model Predictive Control and Supply Chains