So How Do I Know It Worked?

OK, you have made the command decision to improve your company's live production performance and/or economics. You are simultaneously glorified and mortified. We have all been here before: we implemented improvements that didn't seem to pan out. The product or methods you chose were backed by scientific and third-party research. So, what happened?

It just may have to do with the frame of reference of the research you have been given. Remember that scientific research is purposely done in a setting where interferences (sometimes called background or nuisance variables) are eliminated or excluded. This is a powerful way to determine root causes and underlying mechanisms; that is, science. The challenge is that when we change the reference frame from a controlled pen or barn to a complex, whole business, the improvement may not be as evident. In essence, how do I validate the science I have purchased?

Many businesses develop validation methods for all or part of their business. If you have an established method or protocol (e.g., paired barns or split barns), great. If not, the most effective way to determine the number of comparisons needed (e.g., the number of barns assigned to the control and treated groups) is to conduct a power analysis using actual farm variance. Often, however, the number of barns needed far exceeds the capacity to properly test the usefulness of a product. An alternative is to find ways to reduce the variance within the data set, and thereby the number of barns needed to make a valid comparison.
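To make the power-analysis idea concrete, here is a minimal sketch of the standard normal-approximation sample-size calculation for a two-group comparison. The numbers in the usage note are hypothetical, not from any real farm, and this is one textbook formula rather than a description of any particular company's protocol:

```python
import math
from statistics import NormalDist

def barns_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate barns needed in EACH group (control and treated)
    to detect a true difference of `delta` with the given power.

    sigma: between-barn standard deviation, estimated from actual farm records
    delta: the smallest improvement worth detecting, in the same units as sigma
    Uses the two-sided normal-approximation sample-size formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Hypothetical example: barn-to-barn SD of 5 units, want to detect a 2-unit gain.
print(barns_per_group(sigma=5, delta=2))    # a large number of barns per group

# Cutting the effective SD in half (e.g., via a paired or split-barn design)
# shrinks the requirement roughly fourfold.
print(barns_per_group(sigma=2.5, delta=2))
```

Note how the barn count scales with the square of sigma/delta; this is why reducing variance in the data set, as described above, pays off so quickly.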

One way to do this is to look at before, during, and after data on a given set of barns. There is also the option of complete study repetition. Running multiple studies will undoubtedly build confidence, but how many replications do I need? Does it have to work every time, or is 2 out of 3, 7 of 8, or 14 of 15 enough? The problem with replication alone is that there is no straightforward way to determine the success probability, and therefore we don't know the risk.
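One simplistic way to put numbers on those "2 of 3" questions is to ask: if the product did nothing and each barn comparison were a fair coin flip, how often would a winning streak that long happen by luck? The coin-flip assumption is an illustration only; real barn outcomes are not independent fair coin flips, which is exactly why replication counts alone do not settle the risk question:

```python
from math import comb

def chance_of_streak(k, n, p=0.5):
    """Probability of at least k 'wins' in n independent trials,
    where each trial wins with probability p (p=0.5 models a product
    with no real effect, i.e., a coin flip per barn comparison)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Under the coin-flip baseline:
print(chance_of_streak(2, 3))    # 2 of 3 happens by luck half the time
print(chance_of_streak(7, 8))    # 7 of 8 is luck about 3.5% of the time
print(chance_of_streak(14, 15))  # 14 of 15 is luck about 0.05% of the time
```

So 2 of 3 tells you almost nothing, while 14 of 15 is persuasive, but only under assumptions (independence, a fixed win probability) that a real operation rarely satisfies.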

Another alternative is to use enumerative or analytic statistics. The advantage of either is that we can determine our risk and our success probability. Our approach at Ab E Discovery is to partner with customers and potential customers to validate our product's performance in a way that meets the customer's needs, perhaps using existing business measures to do so.