Thursday, March 18, 2010

KPI & Scorecard Benchmarking


I think that benchmarking methods differ depending on the production model of the outsourcing provider. A unit-based production such as data entry and processing should be handled in a totally different way than a software service or product model. Below I will explain my benchmarking method for a quality performance indicator in three different production models.

Data Processing - Unit based production
This is obviously one of the most competitive (if not the most competitive) markets in outsourcing. India and China are at the top of the list of data entry providers in the world for many reasons, but mainly due to the availability of a huge manpower pool at the lowest rates in the world. That does not mean they make less money than Eastern European or Middle Eastern providers; we are talking about the power of economies of scale. That makes India and China hard (if not impossible) to beat on cost. However, when I started my blog, I defined outsourcing as a scientific solution for the best trade-off between cost, quality and production. Cost and production always favor the largest scale, which leaves quality as the unique competitive element between providers. The site http://www.dataentryindia.co.uk/ is the #1 result on Google when you search for data entry; if you check it, you will notice the focus on quality on their main page, where they claim 99.95% accuracy. But how can we benchmark our own accuracy? Is it by comparing ourselves to the #1 Google result? How do we know whether our benchmark is reachable or a dream in the sky?
I am going to start with my last point, the "reachable benchmark". Indeed, accuracy ratios of 70% or 75% are too low as benchmarks, because we can reach those levels with automated data processing, OCR scanning and other methods at a lower cost than manual data entry. So we know that the benchmark is higher than 75%. My second argument consists of watching what similar industries are doing, such as factory mass production. That leads us straight to Six Sigma, which is based on the science of probabilities and aims to minimize the probability of anomalies in production. The following table shows a simple demonstration of how Six Sigma might be used for benchmarking in a data processing production model.

Sigma level    Defects per million    Accuracy (yield)
1              691,462                30.85%
2              308,538                69.15%
3              66,807                 93.32%
4              6,210                  99.38%
5              233                    99.977%
6              3.4                    99.99966%

As you can see, sigma level 6 represents the case where there are only about 3 anomalies in a production of 1 million units. So my suggested approach is to first locate your team on the sigma scale (identify the level at which you currently produce) and set the next sigma level as a target, as sketched below.
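
Here is a minimal sketch (my own illustration, not code from the original post) of that "locate your level, then step up one" idea. The DPMO thresholds are the standard Six Sigma figures from the table above; the function and variable names are just assumptions for the example.

```python
# Locate the current sigma level from observed accuracy, then take the next
# sigma level up as the quality benchmark. Thresholds are the standard
# Six Sigma defects-per-million-opportunities (DPMO) figures.

SIGMA_LEVELS = [
    # (sigma level, defects per million opportunities, yield in %)
    (1, 691_462, 30.85),
    (2, 308_538, 69.15),
    (3, 66_807, 93.32),
    (4, 6_210, 99.38),
    (5, 233, 99.977),
    (6, 3.4, 99.99966),
]

def current_sigma_level(accuracy: float) -> int:
    """Return the highest sigma level whose defect rate the team already beats."""
    dpmo = (1.0 - accuracy) * 1_000_000
    level = 0
    for sigma, max_dpmo, _ in SIGMA_LEVELS:
        if dpmo <= max_dpmo:
            level = sigma
    return level

def next_benchmark(accuracy: float):
    """Take the next sigma level up (capped at 6) as the quality benchmark."""
    current = current_sigma_level(accuracy)
    target = min(current + 1, 6)
    target_yield = {s: y for s, _, y in SIGMA_LEVELS}[target]
    return target, target_yield

# Example: the 99.95% accuracy claimed above is 500 defects per million,
# which sits between sigma 4 and 5, so the next benchmark would be sigma 5.
print(next_benchmark(0.9995))  # -> (5, 99.977)
```
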
My next question is whether or not we should adopt 99.99966% as the benchmark. I believe there are many constraints that should be taken into consideration. However, if the production process is very similar to factory production, then yes, the benchmark should be 99.99966%. The most important aspect of that similarity is whether there is a pattern in the production process or not. The more we can identify patterns and cluster them, the more we converge to a factory model, because that is simply how machines work! In other cases, where it is not so obvious to identify a pattern in the production process, a more modest sigma level should be set as a benchmark according to the key constraints.
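
To make that decision rule concrete, here is another small sketch (again my own illustration, with assumed names and an assumed cut-off, not figures from this post): if the workload can largely be clustered into repeatable patterns, the process is close enough to factory production to adopt the full sigma-6 benchmark; otherwise keep a more modest, constraint-driven target.

```python
def quality_benchmark(pattern_coverage: float, modest_target: float) -> float:
    """
    pattern_coverage: assumed fraction (0..1) of the workload that follows
        identifiable, repeatable patterns.
    modest_target: the fallback benchmark in %, e.g. the next sigma level up.
    The 0.9 cut-off is an arbitrary illustration, not a figure from the post.
    """
    FACTORY_LIKE_THRESHOLD = 0.9
    if pattern_coverage >= FACTORY_LIKE_THRESHOLD:
        return 99.99966   # factory-like: adopt the full sigma-6 benchmark
    return modest_target  # otherwise: a more modest benchmark per constraints

print(quality_benchmark(0.95, 99.977))  # -> 99.99966
print(quality_benchmark(0.60, 99.977))  # -> 99.977
```
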


