Innovations in scoring tenders

The latest developments to make scoring tender responses faster, more accurate and defensible

Scoring tenders has always been a headache. Plagued by differences of opinion within the Tender Evaluation Team (TET), significant variations in numerical scoring and the ever-present risk of challenge, evaluators are under real pressure to provide unambiguous, robust scoring mechanisms.

The core of the matter is the flexibility inherent in any scoring system based on personal opinion. What one evaluator considers a major benefit may not seem material at all to another.

Some evaluators will say that’s exactly what they want – the discretion to award marks as they see fit, based on their personal opinion and experience. In other words, they will know what meets, exceeds or fails the requirement when they see it.

This approach is flawed in two important respects. First, it provides no effective guidance for tenderers on what is valued and what isn’t. They are left to conjecture when preparing their responses, guessing what to focus on to achieve high scores. The chances of getting this wrong are high, and the result is a client who doesn’t get what would deliver best value for money.

The second flaw is that if well-defined, factual guidance is not given to evaluators, there will inevitably be differences of opinion on what should score higher marks once the bids are reviewed by members of the evaluation team. This leads to conflict among the evaluators, time wasted in moderation, and a decision-making process that is hard to defend. It may even weaken an agency’s position if there is a legal challenge to the tendering process.

All of these challenges are, however, quite easily solved. The secret lies in adapting anchored scales – a methodology developed many decades ago in the education sector, where it has been comprehensively tested and proven in assessing student capability and remains standard practice today. Perhaps surprisingly, the methodology can be applied just as successfully to assessing tenderer capability in a procurement context.

The instructions below provide a step-by-step guide on how to make this work for your next procurement project.

Step 1 – Agree the Differentiators. First, the evaluators need to agree – ideally before the Request for Tender (RFT) is released – which components or characteristics within each attribute should be scored highly. These are the differentiators: the factors that are most likely to be critical to the success of the project and that will also vary in quality between tenderers. Those qualities will be the basis of the scoring system.

Step 2 – Define a Fail. The TET should also decide on a factual definition, within each attribute category, of what will constitute a fail. This needs to be expressed in concrete, quantifiable terms or as clear, objective pass/fail criteria.

Step 3 – Include the differentiators and definitions of fail for each attribute in the RFT. Making both these areas (high-scoring factors and fail definitions) explicit in the RFT will be invaluable to suppliers. Those who can’t meet the minimum standard will not waste time and money on bidding, and the others will put their best foot forward so they score as well as they possibly can on the factors they know are critical to the success of the project.

Once these aspects are determined (and often while the RFT is out to the market and tenderers are busy preparing their responses), the TET gets to work to agree a logical basis for its scoring system. To be defensible and fair, this must be done before the responses are reviewed.

Step 4 – Start work on the scoring scale. The most common scale runs from 0 to 100. The first task is to anchor the bottom of the scale: the specific characteristics that will be unacceptable. These definitions were settled in Step 2 and made clear in the RFT, so this should be simple.

Step 5 – Define what will satisfy the requirement in each attribute. This definition comes next, and it will differ for each attribute; one way to record such a definition is sketched after the list below. For example:

  • In Relevant Experience one component may be that the tenderer gives evidence of having completed three projects involving pavements, traffic management and service relocations valued at over $1M, within the past five years.
  • In Track Record, a component that satisfies the requirement may be that the PACE score is greater than 70%; that the Lost Time Injury rate is less than 2.5 per 100,000 man-hours; or that the referee indicates overall satisfaction with timeliness, quality standards and budget compliance.
  • In Relevant Skills, a component that satisfies the requirement may be that the nominated Project Manager has worked as project manager on three projects involving concrete structure construction on or adjacent to motorways in the past four years.
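For teams that manage their evaluations in a spreadsheet or a simple script, definitions like these can be captured as explicit, checkable criteria. The following is a minimal sketch in Python, purely illustrative: the attribute chosen, the field names, the thresholds and the assumption that the components combine with AND are all ours, drawn loosely from the examples above, and not part of any standard procurement tool.

```python
# Illustrative sketch: a "satisfies the requirement" definition for Track
# Record, expressed as explicit, checkable criteria. Field names and
# thresholds follow the examples above; whether the components combine with
# AND or OR is a decision for the TET, and AND is assumed here.

def satisfies_track_record(pace_score: float,
                           lti_rate_per_100k_hours: float,
                           referee_satisfied: bool) -> bool:
    """Return True if the tenderer meets the Track Record requirement."""
    return (
        pace_score > 70.0                  # PACE score greater than 70%
        and lti_rate_per_100k_hours < 2.5  # Lost Time Injury rate below 2.5
        and referee_satisfied              # satisfied on time, quality, budget
    )

# Example: PACE 82%, LTI rate 1.9 per 100,000 hours, satisfied referee
print(satisfies_track_record(82.0, 1.9, True))  # True
```

Writing the definition down this precisely, even if it is never executed, forces the TET to agree exactly where the pass line sits before any bid is opened.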

Step 6 – Get creative: give one or two concrete examples of a major benefit. Usually a benefit can be phrased as “Satisfies the requirement AND…”. To fit into this scoring band, a supplier must provide extra benefit, but the challenge lies in providing benchmarks for the TET to assess the level of benefit. You can never hope to anticipate all the potential major benefits tenderers could offer, so your task is to identify one or two examples that your evaluation team can use to gauge the unknown, unexpected benefits that may be proposed. Usually these are items of significant added value. For example:

  • In Resources, the contractor can provide all standard equipment needed for the contract, and also has available a brand new sheet piling rig that can complete the piling in half the time of traditional methods.
  • In Track Record, the supplier’s referee not only considers that the project was carried out successfully; the project also won a national award.
  • In Methodology, the tender programme shows that through smart resource allocation, and an innovative new ground stabilisation technique, they can complete the project three months ahead of the required schedule, thereby easing traffic congestion with no adverse effect on safety.

In a similar way, some concrete examples of minor benefits can be added to your scale to benchmark against. Try to avoid predictable or incremental descriptions for these benchmarks: the wider the frame of reference, the more useful they will be when a tender proposes unexpected advantages.

Step 7 – Likewise, generate one or two concrete examples of major reservations or risk factors that could apply. These are most likely phrased “Satisfies the requirement, BUT…”. It’s helpful to consider the qualities of suppliers who are close to being unsuitable, but who might be worth trying if they were significantly cheaper than an average supplier. For example:

  • In Relevant Skills, the Project Manager meets all the criteria within the definition of “Satisfies the Requirement”, but they won’t be available for the first three months of the project and a transition manager will be needed.
  • In Methodology, the programme deadline can be met but work cannot start until winter, implying risks to quality and budget.
  • In Relevant Experience, the tenderer completed three projects involving pavements, traffic management and service relocations, but a significant portion of the work was carried out by subcontractors who are not confirmed on this project.

As in Step 6, examples of minor reservations should also be identified, giving a broad and useful framework for benchmarking the scoring. Brought together, the result is a single anchored scale per attribute, as sketched below.
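As a rough illustration of the finished product, here is how one attribute’s anchored scale might be recorded. This is a sketch only: the band labels come from the steps above, but the score boundaries are assumptions for illustration (each TET sets its own), and the anchor examples paraphrase those in this article.

```python
# Minimal sketch of an anchored scale for one attribute. The score bands are
# assumed for illustration; the anchor examples paraphrase this article.
ANCHORED_SCALE = [
    # (band label, (low, high), anchor example)
    ("Fail",              (0, 0),    "Does not meet the published pass/fail criteria"),
    ("Major reservation", (1, 34),   "Satisfies the requirement, BUT e.g. key work was "
                                     "done by subcontractors not confirmed for this project"),
    ("Minor reservation", (35, 49),  "Satisfies the requirement, BUT with a minor gap"),
    ("Satisfies",         (50, 64),  "Meets the published definition for this attribute"),
    ("Minor benefit",     (65, 84),  "Satisfies the requirement AND adds modest extra value"),
    ("Major benefit",     (85, 100), "Satisfies the requirement AND e.g. the project "
                                     "won a national award"),
]

def band_for(score: int) -> str:
    """Return the band label whose range contains the given 0-100 score."""
    for band, (low, high), _anchor in ANCHORED_SCALE:
        if low <= score <= high:
            return band
    raise ValueError(f"score {score} is outside the 0-100 scale")

print(band_for(72))  # "Minor benefit", under these assumed bands
```

The point of the structure is not the code but the discipline: every score an evaluator awards must sit inside a band whose meaning was agreed, in writing, before any response was read.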

Step 8 – Finally, review the scale you have developed and get the whole TET to amend it, contribute to it and thereby buy into it, before they go away to score the responses individually. You’ll be astonished at the speed and consistency of the scores they come back with. What’s more, with the scale included in your Tender Evaluation Report and shown when you debrief tenderers, it’s very unlikely that you’ll receive any complaints or challenges about the fairness and transparency of the process.

The anchored scale technique has now been trialled on scores of tender evaluations in many different sectors. Although it takes more effort to develop than the traditional hit-and-miss scales based on personal opinion and subjective judgment, procurement professionals who have used the method a few times are unanimous in their endorsement.

“This has completely revolutionised our procurement process”, reported one Council Tender Evaluator. “We’re now able to complete our tender evaluations in a few days rather than several weeks. Our justification of the decisions made is utterly clear, meaning that conflicts of interest and potential bias are all but eliminated.”

“Our Councillors are no longer likely to challenge the decisions when we appoint a tenderer who will clearly deliver extra value. And, perhaps best of all, our supplier community are more proactive in bidding to us because, they tell us, they know the process will be clear and fair and won’t waste their time.”

For more information and examples, please contact us at info@cleverbuying.com.