Wednesday, April 25, 2012

You Had To Think There


“It’s funny because it’s bad science.” – Actual Work Conversation


Oh, you nutty engineers and your nerdy discussions.  What's for lunch?  


So not long ago, we were having a discussion at work about flow controllers.  As with everything in engineering, nothing is ever completely exact, so there is always some component of error that has to be dealt with, and that came into play when talking about these particular components.  Let's say, for example, that a flow controller is designed to put anywhere between 0 and 100 liters of gas into a chamber every minute.  (Do keep in mind this is a completely hypothetical example, so don't go trying to recreate this process.)  You actually need 10 liters per minute, so you set the flow controller to 10.  Great!  The flow controller will have an error amount relative to the maximum flow, let's say 1%.  So, in this case, 1% of that 100-liter maximum is 1 liter, and you'll be getting 10 liters give or take 1...anywhere from 9 to 11 liters per minute will be your eventual flow.  Seems like a lot of error.

One solution is to reduce the size of the flow controller...let's say to a 20 L max component.  Then, assuming the same 1% full-range tolerance, you'll be at 10 +/- 0.2 liters, or from 9.8 to 10.2.  Much better!  Unless, of course, you have a different process on the same tool that requires a flow greater than 20 liters.  You can't have it all.  What's the best solution?  As with everything in engineering, the answer is "It Depends."  
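The arithmetic in the two cases above is just percent-of-full-scale math, which you can sketch in a few lines of Python (the function name and numbers are hypothetical, taken from the example, not from any real controller's spec):

```python
def flow_range(setpoint, full_scale, tolerance=0.01):
    """Return (low, high) flow bounds when error is a percent of full scale."""
    error = full_scale * tolerance
    return setpoint - error, setpoint + error

# 100 L/min controller set to 10 L/min: error is 1% of 100 L = 1 L
print(flow_range(10, 100))  # → (9.0, 11.0)

# 20 L/min controller set to the same 10 L/min: error is only 1% of 20 L
print(flow_range(10, 20))   # → (9.8, 10.2)
```

Same setpoint, very different error bars, purely because the error rides on the full-scale rating rather than on the setpoint.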

One possible solution that was hypothesized in a gedanken at work recently (it means "thought experiment," look it up.  Also look up Schrödinger's Cat while you're at it) was to place multiple flow controllers in parallel.  The tolerance of each individual one will be lower, so the overall variation will be lower, right?  So this is obviously the way to go, right?

NOPE! 




Uhm...wow...no.  The problem with this theory isn't actually so much with the tolerance.  It's certainly possible for the error to be less, sure.  In order to have the same extreme values, the performance of each of the smaller flow controllers would have to be off by the maximum amount, and all in the same direction.  Let's say you have a set of ten 10 L controllers to replace a single 100 L one.  At a total flow of 10 L, the single controller can be off by 1 L, for a max flow of 11 L.  If all of the 10 L controllers were off by 0.1 L, each putting out 1.1 L, you'd still end up with the same 11 L flow.  However, the likelihood of this is pretty low.  In reality, the errors are independent, so you're much more likely to get a Normal (or Gaussian) distribution: some controllers flow a little high, some a little low, they partially cancel, and the mean flow lands at your target.  (Statistically, independent errors add in quadrature, so ten controllers each good to 0.1 L combine to roughly sqrt(10) × 0.1 ≈ 0.32 L of uncertainty, not 1 L.)
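You can watch the cancellation happen in a quick simulation.  This is just a sketch, assuming each small controller's error is independent and roughly normal, with the 0.1 L tolerance treated as its one-sigma width (an assumption about the error model, not a real spec):

```python
import random
import statistics

random.seed(42)  # reproducible demo

def total_flow(n=10, setpoint_each=1.0, sigma_each=0.1):
    """Total flow from n parallel controllers, each set to setpoint_each
    with an independent, roughly-normal error of sigma_each liters."""
    return sum(random.gauss(setpoint_each, sigma_each) for _ in range(n))

trials = [total_flow() for _ in range(10_000)]
print(f"mean total flow:  {statistics.mean(trials):.2f} L")   # centers on 10 L
print(f"spread (1 sigma): {statistics.stdev(trials):.2f} L")  # ~0.32 L, not 1 L
```

The worst case (all ten controllers off by 0.1 L in the same direction) is still possible; it's just vanishingly unlikely.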




In theory, the way to get closest to your target flow is to have an infinite number of infinitesimally small flow controllers.  The errors average out, the combined uncertainty shrinks toward zero as the count grows, and you're left with the most precise flow you can get.  This was the original point of the gedanken. 
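The limit falls out of the same quadrature math: split the flow across n controllers and the combined error scales as 1/sqrt(n).  A sketch, under the same independent one-sigma-error assumption as before:

```python
import math

FULL_SCALE = 100.0  # total capacity, split evenly across n controllers
TOLERANCE = 0.01    # each controller good to 1% of its own full scale

for n in (1, 10, 100, 10_000):
    sigma_each = (FULL_SCALE / n) * TOLERANCE  # smaller controller, smaller error
    sigma_total = math.sqrt(n) * sigma_each    # independent errors add in quadrature
    print(f"n = {n:6d}: combined error ~ {sigma_total:.4f} L")
```

One controller gives you 1 L of slop, ten give about 0.32 L, ten thousand about 0.01 L... and it never quite reaches zero until n does the physically impossible thing and goes to infinity.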

The problem with this, of course, is that in order to implement this in the real world, you have to put more components into the system, increasing the complexity and probability that something will go wrong.  You also need to re-engineer everything from the supply cabinets to the incoming facilities to the software used to control all of these new flow controllers.  It's terrible engineering and, at least to us...very funny. 

You're probably not laughing.  I guess that's okay. 
