The manufacturer’s dilemma centered on selling a higher-cost, more reliable sensor in the face of strategically priced, lower-durability sensors offered by some competitors. His sensors were priced several times higher than the alternative products, but they also had a predicted Mean Time Between Failure (MTBF) almost 10x better.
I say the lower-priced offerings were ‘strategically priced’ for a number of reasons, not least of which is their acute appeal to cash-conservation-focused buyers in most markets.
As a selling point to customers, he was using a linear calculation showing that the higher-MTBF sensors were a much better value once you accounted for replacement cost. It went something like this:
Yearly Cost = (1 year / MTBF) * ((MTTR * (Lost Production Cost + Labor Cost)) + Sensor Cost)
MTTR = Mean Time to Repair
Which translates into X events per year times the cost per event.
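To make the comparison concrete, here is a small sketch of that linear model. All of the figures below (prices, MTBF values, labor and production rates) are hypothetical placeholders, not data from the manufacturer:

```python
def yearly_cost(mtbf_hours, mttr_hours, lost_production_per_hour,
                labor_cost_per_hour, sensor_cost):
    """Expected yearly cost = events per year * cost per event."""
    hours_per_year = 365 * 24
    events_per_year = hours_per_year / mtbf_hours
    event_cost = (mttr_hours * (lost_production_per_hour + labor_cost_per_hour)
                  + sensor_cost)
    return events_per_year * event_cost

# Cheap sensor: low price, low MTBF (one failure per year on average)
cheap = yearly_cost(mtbf_hours=8_760, mttr_hours=2,
                    lost_production_per_hour=1_000,
                    labor_cost_per_hour=75, sensor_cost=50)

# Premium sensor: 5x the price, 10x the MTBF
premium = yearly_cost(mtbf_hours=87_600, mttr_hours=2,
                      lost_production_per_hour=1_000,
                      labor_cost_per_hour=75, sensor_cost=250)

print(f"Cheap sensor:   ${cheap:,.0f}/year")    # $2,200/year
print(f"Premium sensor: ${premium:,.0f}/year")  # $240/year
```

Even with an invented 5x price premium, the higher-MTBF sensor wins easily in this model because the downtime term dominates the sensor price.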
When he plugged in the numbers, the results were impressively in favor of his sensors. After thinking about real maintenance events, I became convinced the costs could be even higher. The linear replacement-cost model assumes several things:
* The Maintenance person is immediately available when the fault occurs.
* The Maintenance person is an expert that locates the fault and makes the replacement in a short period of time.
* The sensor in question has a hard failure and does not cause product faults before the event.
* The factory is being 100% utilized.
* The sensor application is in a destructive environment.
The biggest drivers in the calculation were the labor and production costs. The customers, however, might not see the numbers the same way:
1. If the Maintenance person is on staff, they are already being paid.
2. If the factory is not at 100% utilization, the downtime cost may not be as large as modeled.

Even so, I thought the assumption of an immediately available repair person was the big one, particularly in this economic environment. In most cases, factory staffing has been cut to the minimum. At the same time, companies have been more cautious about adding capacity, so with any kind of recovery, production lines will be running nearer their maximum capacity.
If you added some randomization and ran a Monte Carlo-type simulation, I am confident you would find some really long downtime events mixed in. This would be particularly true if you modeled several machines/lines with similar sensor issues. The per-event cost calculation might look something like this:
Cost per event:
((Wait for Maintenance time + Troubleshoot time + Replace time) * Lost Production Cost) + Sensor Cost
Each of the times and the production cost could have a random (normal-curve) distribution applied to it. The sensor cost would be fixed. The average number of faults per year would be (365 x 24) / MTBF, with MTBF in hours.
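A minimal Monte Carlo sketch of that per-event model might look like the following. The distribution means and spreads are purely illustrative assumptions, not measured data:

```python
import random

random.seed(42)

HOURS_PER_YEAR = 365 * 24

def simulate_year(mtbf_hours, sensor_cost, n_trials=10_000):
    """Simulate many years; return the mean and worst yearly cost seen."""
    events_per_year = round(HOURS_PER_YEAR / mtbf_hours)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for _ in range(events_per_year):
            # Normal-curve draws, clamped at zero (hours and $/hour)
            wait = max(0.0, random.gauss(4, 3))          # wait for Maintenance
            troubleshoot = max(0.0, random.gauss(1, 0.5))
            replace = max(0.0, random.gauss(0.5, 0.25))
            lost_prod = max(0.0, random.gauss(1_000, 200))
            total += (wait + troubleshoot + replace) * lost_prod + sensor_cost
        totals.append(total)
    return sum(totals) / n_trials, max(totals)

mean_cost, worst_cost = simulate_year(mtbf_hours=8_760, sensor_cost=50)
print(f"Mean yearly cost: ${mean_cost:,.0f}")
print(f"Worst trial:      ${worst_cost:,.0f}")
```

The interesting output is not the mean, which lands near the linear estimate, but the tail: the worst trials show the long-downtime events the linear model hides.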
In closing, I thought the best sales case for the manufacturer would be the idea of preventative maintenance. This strategy would be a lot more feasible with the high-MTBF sensors. In fact, it moves the maintenance event to “planned downtime,” thereby eliminating the lost-production cost. Most production lines have some events where material is not available to be processed. Often these events can be predicted, and they are the perfect time to replace sensors that are getting near their predicted lifetime. It can be a very efficient and predictable use of limited factory resources. If you look at it this way, the replacement cost is pretty close to the price of the sensor.
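The planned-downtime argument can be reduced to simple arithmetic: moving the swap into scheduled downtime drops the lost-production term, leaving roughly labor plus the sensor price. Again, the numbers here are hypothetical:

```python
# Hypothetical per-event figures
mttr_hours = 2
lost_production_per_hour = 1_000
labor_cost_per_hour = 75
sensor_cost = 250

# Unplanned failure: downtime plus labor plus the part
unplanned = mttr_hours * (lost_production_per_hour + labor_cost_per_hour) + sensor_cost

# Planned swap during scheduled downtime: no lost-production term
planned = mttr_hours * labor_cost_per_hour + sensor_cost

print(f"Unplanned event: ${unplanned:,}")  # $2,400
print(f"Planned swap:    ${planned:,}")    # $400
```

Under these assumptions the planned replacement costs a fraction of the unplanned event, and most of what remains is just the sensor itself.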
That’s my view of it although I’m sure other people might have other thoughts. If so, drop me an e-mail or give me a call. I would be happy to highlight other theories and models in this space. Stay posted!