Very often, I am tasked with assessing a team’s requirement verification plan – essentially determining whether or not they have the necessary artifacts to show compliance to each requirement. This involves reviewing drawings, analysis, test data, and inspection reports.
In the most recent review, I quickly noticed discrepancies between drawings. Fortunately, none would cause the hardware not to work, but these kinds of discrepancies are still indicative of sloppy engineering. More concerning: if I found mistakes after just a few minutes, there was a good chance more existed. And what bothered me most was that these errors should have been caught during the drawings' peer reviews.
As a point of clarification, I’m defining peer reviews as the frequent, but informal reviews of “piece-parts” held throughout a design phase to ensure your hardware is properly vetted ahead of the formal gate reviews (e.g. Preliminary Design Review, Critical Design Review).
There are two causes of an oversight such as this: 1) an inadequate number of peer reviews were conducted, and/or 2) the wrong people reviewed the design. I suspected both in this case, and my suspicions were confirmed as I dug deeper.
No one on the shortlist of engineers I consider experienced enough to participate in peer reviews had been invited. Further, only one engineer outside of the program was involved. As for the number of reviews, the team only held one per drawing set. In summary, their peer review process lacked adequate experience, impartiality, and thoroughness.
The justifications given were the same ones given for all poor engineering decisions: tight schedule and lack of budget. The team said it was under pressure to get its drawings released, and, due to overruns, had no room in the budget for "overly comprehensive peer reviews". In other words, the team did only enough to check off the peer review line item in the process sheet.
Unfortunately, as budgets and schedules shrink across the board, brushing aside peer reviews is becoming a trend. The process is viewed as a mandatory evil whose expense must be minimized. Under this approach, the value of the peer review is simply the checkmark in the schedule.
The real value of the peer review process is being lost, because its value is in preventing future expenses. The later in the design process (including build and test) a mistake is caught, the costlier it is to fix. Peer reviews, by their purpose, catch mistakes early when they’re cheap to correct.
To explain this, let's say you conduct a peer review of a bus bar in the detailed design phase. It costs you 15 man-hours and results in 25 actions, most of which are correcting typos. But let's say one action corrects an error in the material call-out; the drawing specified the standard structural aluminum alloy 6061 instead of the more electrically conductive aluminum alloy 6101.
Bus bars made of structural aluminum will not work. It's a total non-starter. This very simple mistake would have resulted in the entire lot of parts being scrapped had it remained in place through the fabrication phase. Those 15 man-hours (roughly $2,500, depending on the labor rate) you spent on the peer review just saved your program tens of thousands of dollars and four weeks by catching the error before parts were built.
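The arithmetic behind that claim is simple enough to sketch. Here's a minimal cost-avoidance calculation; every figure (the labor rate, lot size, per-part cost, and rework overhead) is an illustrative assumption, not data from any real program.

```python
# Hypothetical cost-avoidance arithmetic for the bus bar example.
# All dollar figures and quantities are illustrative assumptions.

REVIEW_HOURS = 15        # man-hours spent on the peer review
LABOR_RATE = 165         # $/hour, assumed fully burdened rate

review_cost = REVIEW_HOURS * LABOR_RATE  # ~$2,500, as in the example

# Assumed cost of the 6061-vs-6101 call-out error escaping to fabrication:
# the whole lot is scrapped, material is re-ordered, parts are re-machined.
LOT_SIZE = 20            # parts in the lot (assumption)
COST_PER_PART = 1200     # $ per scrapped part (assumption)
REWORK_OVERHEAD = 10000  # $ for re-procurement and schedule recovery (assumption)

escape_cost = LOT_SIZE * COST_PER_PART + REWORK_OVERHEAD

print(f"Peer review cost:      ${review_cost:,}")
print(f"Cost if error escapes: ${escape_cost:,}")
print(f"Return on review:      {escape_cost / review_cost:.0f}x")
```

Even with conservative numbers, the review pays for itself many times over; the exact ratio matters far less than the direction of the inequality.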
Everyone should rightfully be concerned with budget and schedule overruns, and there is plenty of fat to be trimmed. But skimping on peer reviews is the epitome of “penny-wise, pound foolish”. They are the most valuable tool at your disposal for minimizing costs long term, and their value must not be underestimated.