1 Department of Business Studies, Aarhus School of Business, Aarhus BSS, Aarhus University
2 CORAL - Centre for Operations Research Applications in Logistics, Aarhus School of Business, Aarhus BSS, Aarhus University
3 Department of Economics and Business Economics, Aarhus BSS, Aarhus University
4 Department of Economics and Business Economics, Aarhus BSS, Aarhus University
Re-entrant systems have been thoroughly investigated in the past, often with the aim of answering the question of how badly a seemingly properly configured system can behave. Convincing results on the role of so-called virtual bottlenecks have been established by, for instance, J.G. Dai (1995). However, interesting as these examples are theoretically, from a practical point of view a more pressing question is how frequently such badly behaving constellations appear, more or less by accident, in day-to-day operations. It might well be the case that badly behaving systems merely constitute a set of almost measure zero. The key parameters that determine whether a re-entrant system is among the badly behaving ones are essentially the flow structure, the processing times, and the rules of priority between different job classes on specific machines. Consider, therefore, a re-entrant flow-shop set-up in an order-driven production system, by which I mean a system where the flow structure of each production batch shows great variability and where the individual batches are quite large. One way to assess the practical relevance of bad performance structures is to simulate a number of scenarios based on a variety of priority policies and on different numbers of machines and job classes, letting the structure of the job routes and the process times be chosen randomly within the system's realistic capabilities. Individual machine utilisation is arbitrarily set to 80% in this study. Each scenario is replicated 1,000 times, and the measure used to evaluate the system's performance is the total batch throughput time based on 5,000 processed units. It is the spread of this measure that is of interest.
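The experimental set-up described above can be sketched in miniature. The following is an illustrative simplification, not the study's actual simulator: a single job class follows one re-entrant route, processing times are exponential, and the performance measure is the makespan of a fixed batch rather than throughput time over 5,000 units. The function name `simulate`, the route encoding, and the two-policy comparison are all assumptions made for this sketch.

```python
import heapq
import random

def simulate(route, proc_times, policy, n_jobs, seed=0):
    """Minimal discrete-event simulation of a re-entrant line.

    route       -- machine index visited at each stage, e.g. [0, 1, 0]
                   (stage 0 and stage 2 re-enter machine 0)
    proc_times  -- mean (exponential) processing time per stage
    policy      -- 'LBFS' serves the latest-stage buffer first,
                   'FBFS' the earliest-stage buffer first
    Returns the makespan of the batch (a stand-in for the paper's
    batch throughput time).
    """
    rng = random.Random(seed)
    n_machines = max(route) + 1
    buffers = [[] for _ in route]        # waiting jobs, one buffer per stage
    busy = [False] * n_machines
    events = []                          # heap of (time, machine, job, stage)

    for job in range(n_jobs):            # whole batch released at time 0
        buffers[0].append(job)

    def try_start(m, now):
        # Among this machine's non-empty buffers, pick one by priority rule.
        stages = [s for s in range(len(route)) if route[s] == m and buffers[s]]
        if busy[m] or not stages:
            return
        s = max(stages) if policy == 'LBFS' else min(stages)
        job = buffers[s].pop(0)
        busy[m] = True
        heapq.heappush(events, (now + rng.expovariate(1.0 / proc_times[s]),
                                m, job, s))

    for m in range(n_machines):
        try_start(m, 0.0)
    makespan = 0.0
    while events:
        t, m, job, s = heapq.heappop(events)
        busy[m] = False
        if s + 1 < len(route):
            buffers[s + 1].append(job)   # job re-enters the line
        else:
            makespan = t                 # job leaves the system
        for mm in range(n_machines):
            try_start(mm, t)
    return makespan

# Comparing the two priority rules on one randomly drawn configuration:
lbfs = simulate([0, 1, 0], [1.0, 1.0, 1.0], 'LBFS', 50, seed=1)
fbfs = simulate([0, 1, 0], [1.0, 1.0, 1.0], 'FBFS', 50, seed=1)
```

Repeating such runs over many randomly drawn routes and process-time vectors, and recording the spread of the resulting throughput times per policy, mirrors the design of the study in spirit, though the actual study also varies the number of machines and job classes and fixes utilisation at 80%.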
A large spread indicates a situation where the priority policy has a clear tendency to induce severe virtual bottlenecks into the system at hand, resulting in poor performance. The findings, in summary, are as follows. Almost every priority scheme examined in this simulation study showed a considerable spread in the observed batch throughput times, and matters worsened as the number of machines and job classes increased. So, to answer the initially posed question, I think it is fair to say that, in general, if re-entrant systems are configured randomly, bad system behaviour is not at all an infrequent phenomenon. There was, however, one (not really surprising) exception to these findings. All scenarios based on the Last-Buffer-First-Served (LBFS) policy resulted in almost no spread in the batch throughput times, and on top of this a satisfactory throughput performance in absolute terms was also achieved. That the LBFS policy is almost always stable was established by S. Kumar and P. R. Kumar back in 1995. The present study adds to this result: when system configurations vary considerably, none of the other analysed priority schemes performs really satisfactorily; all are vulnerable, to a significant degree, to certain job-flow patterns and process-time settings. The LBFS priority policy therefore presents itself not only as a stable but also as a safe and robust choice par excellence for the control of re-entrant systems.
16th Int. Conf. on Production Research (cd-rom), 2001