Steve Beeler

You have a goal…I have a way to get you there.

Discrete Event Simulation

Variation and Waste

February 1, 2019 by stevebeeler

Variation is the root cause of almost all waste in manufacturing and business systems. It is no coincidence, then, that variability reduction is the foundation of the Toyota Production System (Lean Thinking) and of its success in continuously reducing waste. Two games of chance, one with dice and one with cards, illustrate the linkage between variation and waste.

The first game is a single-piece-flow system with five machines. The output of each machine in each time period is represented by the total of two dice.

The average of two dice, of course, is 7. However, our five-machine system averages only 4.3…a 38% loss! Why? Variation and waste. In single-piece flow, no machine can outproduce another, so production is lost to waiting (blocks and starves) as the machines interact with each other. Balanced and dependent systems are surprisingly common, and they never work as expected. In isolation yes, in combination no is a key lesson from Theory of Constraints.
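The loss is easy to reproduce with a Monte Carlo sketch, shown below in Python. The post does not spell out every rule of the game, so the one-piece buffers between machines and the downstream-first move order are assumptions; under rules in this family the long-run average lands well below 7, in line with the result above.

    import random

    def dice_line(n_machines=5, buffer_cap=1, periods=100_000, seed=42):
        rng = random.Random(seed)
        buffers = [0] * (n_machines - 1)   # WIP between adjacent machines (assumed capacity)
        shipped = 0
        for _ in range(periods):
            # move downstream machines first so a part advances one station per period
            for i in reversed(range(n_machines)):
                capacity = rng.randint(1, 6) + rng.randint(1, 6)   # roll two dice
                supply = capacity if i == 0 else min(capacity, buffers[i - 1])
                if i == n_machines - 1:
                    moved = supply                     # the last machine ships freely
                    shipped += moved
                else:
                    space = buffer_cap - buffers[i]    # blocked when the buffer is full
                    moved = min(supply, space)
                    buffers[i] += moved
                if i > 0:
                    buffers[i - 1] -= moved            # consume from the upstream buffer
        return shipped / periods

    print(f"Average output: {dice_line():.1f} units/period (vs. 7 in isolation)")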

The second game is a system of three machines feeding an assembly area. The output of each machine is represented by a draw from a deck of cards. To assemble a product, all three cards must match.

[Figure: Variation and Waste]

Each machine produces one card per time period. Jokers represent defective products and cannot be matched. The decks are shuffled so that the three output sequences are independent.

[Figure: Variation and Waste]

Quite a few cards collect (to the right of the decks) before there are matches for the assembly machine (to the left of the decks). Work-in-process inventory (WIP) and lead time are horrendous. Throughput suffers as the assembly area waits for matches. In isolation, each machine is successfully producing cards. In combination, the system is performing poorly. Variation and waste again. This time the variation is in sequence, but the waste is equally dramatic.
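This one is also simple to sketch in Python. The version below assumes two jokers per deck and reads a "match" as the identical card having been produced by all three machines; both details are illustrative rather than the post's exact rules.

    import random
    from collections import Counter

    def card_game(n_jokers=2, seed=7):
        rng = random.Random(seed)
        base = [rank + suit for rank in "A23456789TJQK" for suit in "SHDC"]
        decks = []
        for _ in range(3):
            deck = base + ["JOKER"] * n_jokers   # jokers are defects and never match
            rng.shuffle(deck)
            decks.append(deck)
        pools = [Counter() for _ in range(3)]    # cards waiting beside each deck
        assembled, peak_wip = 0, 0
        for t in range(len(decks[0])):
            for pool, deck in zip(pools, decks):
                pool[deck[t]] += 1               # each machine produces one card per period
            for card in list(pools[0]):          # assemble when all three pools hold the card
                if card != "JOKER" and all(p[card] > 0 for p in pools):
                    for p in pools:
                        p[card] -= 1
                    assembled += 1
            peak_wip = max(peak_wip, sum(sum(p.values()) for p in pools))
        return assembled, peak_wip

    done, wip = card_game()
    print(f"assembled {done} products, peak WIP of {wip} cards")

Even though every machine dutifully produces one card per period, dozens of cards pile up waiting for their mates.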

Variability reduction is a big part of my day job as a Professional Engineer. Visit my Operations Engineering page for methods and case studies. While variation is always present, robust systems can be designed to minimize the linkage between variation and waste.

Filed Under: Operations Engineering Tagged With: Balanced and Dependent Systems, Discrete Event Simulation, Lean Thinking, Theory of Constraints, Toyota Production System, Variation, Waste

Manufacturing Plan Verification

June 12, 2018 by stevebeeler

Our dynamic economy is characterized by constant change. New products are brought to market, achieve commercial success, and then are made obsolete by something better, faster, cheaper. An essential element in successful product creation is manufacturing plan verification.

A manufacturing plan is a comprehensive compilation of all the facts, figures, and assumptions around making something of commercial value. A plant layout is necessary but not sufficient. How fast must each machine operate? What are the quality requirements? What are the optimum inventory levels? What is the annual volume for these things? How much money is available for investment? What is the unit cost target?
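The first of those questions is typically answered with takt time: available production time divided by customer demand. A back-of-the-envelope sketch in Python, with every figure hypothetical:

    # Takt time: available production time divided by customer demand.
    annual_volume = 240_000                       # units per year (hypothetical)
    days, shifts, hours_per_shift = 235, 2, 7.5   # assumed operating pattern
    available_seconds = days * shifts * hours_per_shift * 3600
    takt = available_seconds / annual_volume
    print(f"Takt time: {takt:.1f} s/unit")        # each machine must beat this pace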

[Figure: manufacturing plan verification]

A manufacturing plan describes a long chain of events from order to delivery. It is not enough just to get each link in the chain to work. The entire system of links must work in harmony to deliver the business plan's operational and financial metrics. This is not easy to accomplish. Manufacturing plan verification manages this risk.

A robust manufacturing plan starts with business plan objectives which are cascaded down into plant department performance targets through a high-level discrete event simulation model. Next, department performance targets are cascaded down into line level performance targets through a more detailed, line-level discrete event simulation model.

[Figure: Manufacturing Plan Verification]

With performance targets in hand, manufacturing teams can confidently do the detailed processing of their link in the chain. Systemic risk has been minimized, if not eliminated entirely: if each link meets its target, then the chain will work as expected. Manufacturing teams may be working remotely, but they are not working in isolation…they are connected through the line-level discrete event simulation.

Before program approval, each manufacturing team reports back with performance predictions for its link in the chain. These values replace the targets in an integrated discrete event simulation model of the entire manufacturing process. The manufacturing plan is verified when this integrated model meets or exceeds business plan objectives.

In addition to verifying production flows through discrete event simulation, material handling simulation models are often utilized prior to program approval to predict and optimize forklift requirements, traffic flows and congestion, and indirect labor requirements.

Does simulation lead manufacturing planning or vice versa? A little of both, in a collaborative Plan-Do-Check-Act continuous improvement cycle.

Filed Under: Operations Engineering Tagged With: Continuous Improvement, Discrete Event Simulation, Manufacturing Plan Verification, Plan-Do-Check-Act, Product Creation

Simulation Output Reports

May 30, 2018 by stevebeeler

Discrete Event Simulation is a portal to the future: find constraints, test strategies to break them, improve performance to the goal, maximize investment returns, and reduce risk. Shown below are three powerful simulation output reports.

[Figure: Time in State report]

Time in State. Time in state simulation output reports are especially useful in finding the constraint in the process. Before the constraint, machine elements are generally blocked (blue) more than they are starved. After the constraint, they are starved (yellow) more than they are blocked. The machine element with the most uptime (green) is likely the constraint. Knowledge of the constraint’s location is a key factor in improving the process. Focus on the constraint for opportunities to increase throughput. Look to non-constraints for opportunities to reduce operating costs.
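A sketch of how that pattern can be read programmatically, with hypothetical time-in-state percentages standing in for a real report:

    # Hypothetical time-in-state results (percent of total time), five machines.
    report = {
        "M1": {"up": 62, "blocked": 30, "starved": 3, "down": 5},
        "M2": {"up": 65, "blocked": 26, "starved": 4, "down": 5},
        "M3": {"up": 88, "blocked": 3, "starved": 4, "down": 5},
        "M4": {"up": 66, "blocked": 5, "starved": 24, "down": 5},
        "M5": {"up": 63, "blocked": 2, "starved": 30, "down": 5},
    }

    # Most uptime = likely constraint; upstream machines should be mostly
    # blocked, downstream machines mostly starved.
    constraint = max(report, key=lambda m: report[m]["up"])
    print(f"Likely constraint: {constraint}")
    for m, s in report.items():
        pattern = "blocked" if s["blocked"] > s["starved"] else "starved"
        print(f"{m}: up {s['up']}%, predominantly {pattern}")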

[Figure: Volume Histogram]

Volume Histogram. The volume histogram is the first of two methods used to validate a simulation model against its real-world process. How production counts vary over time is an important metric…the less uncertainty the better. The volume histogram provides a qualitative comparison of the hour-to-hour variation in volume throughput.
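In Python, that qualitative comparison can be as simple as overlaying two histograms; the hourly counts below are hypothetical:

    import matplotlib.pyplot as plt

    # Hypothetical hourly production counts: one series from the plant floor,
    # one from the simulation model being validated.
    actual_jph = [52, 48, 55, 50, 47, 53, 49, 51, 46, 54, 50, 52]
    simulated_jph = [51, 49, 53, 50, 48, 52, 50, 51, 47, 53, 49, 52]

    # Similar centers and spreads suggest the model reproduces the real
    # process's hour-to-hour variation.
    plt.hist([actual_jph, simulated_jph], bins=5, label=["actual", "simulated"])
    plt.xlabel("units per hour")
    plt.ylabel("hours observed")
    plt.legend()
    plt.show()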

[Figure: X-bar & R Chart]

X-bar & R Chart. The production count X-bar & R chart provides a more quantitative comparison between the simulation and the real-world process. Statistical process control charts not only quantify the magnitude of the common cause process variation but also identify special cause events. Characterizing the output variation as common cause vs. special cause is an important factor in validating the simulation model. It is difficult, if not impossible, to simulate special cause events, because discrete event simulation “engines” utilize constant probability random number streams. If the real-world process variation is being driven by non-random events, then the root causes of those special cause events will have to be removed before the simulation's what-if results can predict future performance.
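For subgroups of five hourly counts, the control limits follow from the standard constants A2 = 0.577, D3 = 0, and D4 = 2.114. A small sketch with hypothetical data:

    # X-bar & R control limits; constants are for subgroup size n = 5.
    A2, D3, D4 = 0.577, 0.0, 2.114

    def xbar_r_limits(subgroups):
        xbars = [sum(g) / len(g) for g in subgroups]
        ranges = [max(g) - min(g) for g in subgroups]
        xbar_bar = sum(xbars) / len(xbars)   # grand average
        r_bar = sum(ranges) / len(ranges)    # average range
        return {
            "xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
            "range": (D3 * r_bar, D4 * r_bar),
        }

    # Hypothetical hourly production counts, grouped five at a time.
    subgroups = [[52, 48, 55, 50, 47], [53, 49, 51, 46, 54], [50, 52, 49, 51, 48]]
    print(xbar_r_limits(subgroups))   # points outside these limits signal special causes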

These three simulation output reports are the foundation for Plan-Do-Check-Act continuous improvement of the simulated manufacturing or business process. Simulate, validate, and experiment. A robust solution will follow.

Filed Under: Operations Engineering Tagged With: Continuous Improvement, Discrete Event Simulation, Plan-Do-Check-Act, Simulation Output Reports, Time In State, Volume Histogram, X-bar & R Chart

Discrete Event Simulation

May 10, 2018 by stevebeeler

Discrete event simulation provides a dynamic view of the value stream as well as the ability to run “what-if” experiments to predict the effectiveness of changes. As such, discrete event simulation is a very powerful tool to maximize operational and investment efficiency and to find robust solutions that mitigate risk.

[Figure: Discrete Event Simulation]

A discrete event simulation is built by connecting modeling elements (machines, conveyors, buffers, parts, people, etc.) according to the process flow logic. Next, the performance of each element is described with variables such as cycle times, downtimes (MTBF or MCBF and MTTR), changeover times, conveyor min/max/floats, buffer sizes, shift times, etc. A Value Stream Map organizes most of the data required to build a discrete event simulation model.
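One way to organize those inputs, sketched here for a hypothetical two-machine line; the field names and figures are illustrative, not any particular simulation package's schema:

    from dataclasses import dataclass

    @dataclass
    class Machine:
        name: str
        cycle_time_s: float   # mean cycle time, seconds
        mtbf_s: float         # mean time between failures
        mttr_s: float         # mean time to repair
        feeds: str            # next element in the process flow

    # Data a Value Stream Map would supply, for a toy two-machine line.
    line = [
        Machine("press", cycle_time_s=55.0, mtbf_s=3600, mttr_s=300, feeds="weld"),
        Machine("weld", cycle_time_s=58.0, mtbf_s=5400, mttr_s=240, feeds="shipping"),
    ]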

Uncertainty in any performance variable can be captured by fitting a probability distribution around a mean value. For example, a cycle time might average 60 seconds but vary between 54 and 66 seconds. This variation is simulated by sampling the appropriate probability distribution with a random number stream. By using a different random number stream for each probability distribution, the events in the model are independent of each other…just like in the real world.
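A sketch of the idea with the Python standard library; the triangular shape for the cycle time and the exponential repair time are assumed distributions for illustration:

    import random

    # One independent random number stream per probability distribution,
    # so sampled events do not correlate with each other.
    cycle_stream = random.Random(101)    # seeds are arbitrary
    repair_stream = random.Random(202)

    # Cycle time averaging 60 s but varying between 54 and 66 s.
    cycle_time = cycle_stream.triangular(54, 66, 60)

    # Time to repair, exponential with a 300 s mean (MTTR).
    ttr = repair_stream.expovariate(1 / 300)

    print(f"sampled cycle time {cycle_time:.1f} s, repair time {ttr:.0f} s")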

At a constant time interval, the simulation software “engine” assigns a random number to each element in the model. Next, all the elements try to advance one step. Then the elements all report back on their status: up, down, blocked, starved, etc. The simulation software notes the status of each element in the model and then repeats the process over and over until the experiment is complete. At the end of the experiment, the discrete event simulation software collates the results and generates the desired reports.
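Stripped to its essentials, that scan cycle looks something like the sketch below. A real engine tracks many states (up, down, blocked, starved); this toy version keeps only up and down, with assumed per-step failure and repair probabilities:

    import random

    rng_per_machine = [random.Random(s) for s in (1, 2, 3)]
    fail_prob, repair_prob = 0.02, 0.10    # per-step probabilities (assumed)
    state = ["up", "up", "up"]
    tally = [{"up": 0, "down": 0} for _ in state]

    for step in range(10_000):
        for i, rng in enumerate(rng_per_machine):
            draw = rng.random()                      # assign a random number
            if state[i] == "up" and draw < fail_prob:
                state[i] = "down"                    # the element fails this step
            elif state[i] == "down" and draw < repair_prob:
                state[i] = "up"                      # a repair completes
            tally[i][state[i]] += 1                  # note each element's status

    for i, t in enumerate(tally):                    # collate the results
        print(f"machine {i}: up {100 * t['up'] / 10_000:.0f}% of steps")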

What-if experiments are easily performed by making changes to the input data set (typically an external Excel spreadsheet) and then re-running the model. Discrete event simulation is a versatile tool equally applicable to both manufacturing and business processes.

Filed Under: Operations Engineering Tagged With: Discrete Event Simulation, What-If

Balanced and Dependent Systems

May 2, 2018 by stevebeeler

The ten-machine puzzle in my Theory of Constraints blog post is a simple example of the balanced and dependent systems that are surprisingly frequent in the real world. Balanced because all elements have the same capacity. Dependent because events at one element affect the performance of other elements. Frequent because lean thinking drives people and organizations towards them. None work very well.

[Figure: balanced and dependent systems]

How does this happen? Inventory, conveyance, motion, and overproduction are wastes that are relatively easily recognized and reduced. When these wastes are removed, waiting losses (blocks and starves) can replace them. In the extreme, system performance deteriorates as lean “improvements” are made. In isolation yes, in combination no is a primary lesson from Theory of Constraints.

There are three options for improving balanced and dependent systems. The first is to improve the reliability of all the dependent elements. That is lean thinking, but perfection is a high hurdle. In the ten-machine puzzle, each machine’s reliability must be improved from 98% to 99.8% to achieve the 98% system availability target.

Cumulative probability predicts that the perfection hurdle gets even higher for larger balanced and dependent systems. Take a process with 100 dependent steps, not unusual in manufacturing or business. If each element has a 98% reliability, the system will only be available 13% of the time. To achieve 98% system availability, the reliability requirement for each element is 99.98%. Ouch!
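The arithmetic is the series-reliability formula: a chain of n elements, each with reliability r, is available r^n of the time, so a system availability target of A requires each element to reach A^(1/n). Two lines of Python confirm the figures above:

    for n in (10, 100):
        print(f"n={n}: system availability 0.98**n = {0.98**n:.2f}; "
              f"element reliability needed for 98% = {0.98**(1/n):.4f}")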

The second option is to oversize each of the process elements. In the ten-machine puzzle, oversizing each machine from 50 to 60 units per hour does the trick. With 100 process steps, each machine would have to be oversized by almost a factor of three…now that is expensive waste!

The third (and by far the best) option is to decouple process elements with buffers and to unbalance capacities to create a distinct constraint. This option trades inventory and conveyance waste against overproduction and waiting. The trick is to find the optimum balance. Is the trick magic? No, not with discrete event simulation…the next blog’s topic.

Filed Under: Operations Engineering Tagged With: Balanced and Dependent Systems, Discrete Event Simulation, Lean Thinking, Theory of Constraints
