
Procurement Perspectives: Government RFP scoring still perceived as questionable

Stephen Bauld

Rarely, if ever, will a week go by that I don’t receive a call from a client complaining they have been treated unfairly in the scoring process on an RFP.

My first reaction is to suggest they ask for a debrief from the government agency to find out how they were scored on the RFP they submitted. Many take my advice and go for the debrief, while others say it is a waste of time as they have no confidence in the way RFPs are scored.

One of the difficulties in any RFP that draws significantly different competing proposals is ensuring that all of them are given fair consideration.

Offers that are widely different in substance will almost certainly vary significantly in price. In such a case, it is clearly unrealistic to accept the “lowest” price without reconciling the prices with the terms of the offer, to generate a common base comparator.

The most critical question then becomes how to compare each offer received. Such variations in each proposal make it very difficult to carry out a direct comparison of the raw information provided by each proponent.

To compare only the cost of the different designs would be inherently unrealistic, since it obviously costs less to build a smaller building than a large one. Similar disparities also exist with respect to each of the identified critical aspects of functionality.

To allow a sensible comparison to be made, it is necessary to harmonize the prices quoted by the proponents to reflect the value (benefit) and cost per feature of the proposals each is bringing forward. Given the radically different solutions each proponent recommends, the first step in this process is to generate a common base comparator for each proponent so that their respective proposals can be compared in a meaningful way. Only by doing so is it possible to make an "apples to apples" comparison.
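To illustrate the idea, here is a minimal sketch of what a common base comparator can look like. All figures, proposal names, and the "benefit units" measure are invented for the example; a real evaluation would define benefit far more carefully.

```python
# Hypothetical illustration: harmonizing prices from proposals that differ
# in scope by reducing each to a cost per unit of delivered benefit.
# All numbers and names below are invented for the example.

proposals = {
    "Proposal A": {"price": 12_000_000, "benefit_units": 100},  # larger building
    "Proposal B": {"price": 9_000_000, "benefit_units": 60},    # smaller building
}

def cost_per_unit(p):
    """Common base comparator: quoted price divided by scored benefit."""
    return p["price"] / p["benefit_units"]

for name, p in proposals.items():
    print(f"{name}: ${cost_per_unit(p):,.0f} per benefit unit")
# Proposal B has the lower raw price, but Proposal A actually delivers
# more benefit per dollar ($120,000 vs. $150,000 per unit).
```

On the raw numbers Proposal B looks "lowest," which is exactly why comparing quoted prices without a common base can mislead.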

One of the problems with evaluation is that with every increase in the number of features taken into account in the evaluation process, the weighting given to each feature decreases. It is critical for contractors to make sure they hit all the mandatory criteria to garner the maximum number of points. Issues do arise when you start naming certain features as "mandatory."
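The dilution effect is simple arithmetic: if the total available weight is fixed, adding features shrinks the average weight any one feature can carry.

```python
# Illustrative only: with a fixed total of 100 points spread across the
# scored features, the average weight per feature falls as features are added.
for n_features in (5, 10, 25, 50):
    avg_weight = 100 / n_features
    print(f"{n_features} features -> {avg_weight:.1f} points each on average")
```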

The problem with this approach is that mandatory criteria are a very blunt instrument. If the criterion is that all designs must have elevators, then no consideration is given to which offered configuration is most efficient. Mandatory criteria of this nature encourage contractors to bid cheap and small, which rarely results in optimal building performance.

I know it is frustrating for good contractors who submit a solid bid that would ultimately result in a good product being built. Much of the mistrust in the RFP process comes from knowing that some competitors cut every corner just to meet the minimum requirements, then change-order the project up to what it should have been in the first place.

In my opinion, evaluation tools could be used to improve on the RFP approach. Each tool can be programmed to identify a minimum level of satisfactory performance. Once that threshold is satisfied, a fine-tuned weighting can be given to a very large number of assessment criteria. The final score given to each proponent is a much firmer number, based upon a highly sophisticated and uniformly applied analysis.

While it is still possible — and no doubt desirable — to include in the final scoring such soft criteria as overall esthetics, the municipality avoids the problem of a scoring process that generates scores that are entirely or substantially subjective. My feeling is that subjective scoring opens the door for skullduggery in the evaluation process.

Stephen Bauld is a government procurement expert and can be reached at swbauld@purchasingci.com.

Some of his columns may contain excerpts from The Municipal Procurement Handbook published by Butterworths.

Recent Comments

Fábio Alonso

I loved the discussion of how unfair certain RFPs are perceived to be. But I cannot see how evaluation tools would be more reliable or fair if the scoring criteria are still set by humans. I am not sure how different the result would be from a traditional evaluation, unless there are tools I am not aware of.
