
Evaluating the outcomes of our actions is part and parcel of our role as delivery professionals. Self-evaluation and self-critique are common practice among professionals looking to improve and succeed, and that process of extracting lessons from the past to recycle for the future can be positive. However, there is a significant risk in doing this.
If we are not careful in our evaluation process, there is a tendency to place too much importance on the outcome of a decision and too little on the process by which we made it.
Hindsight is not 20-20.
If a decision led to a bad outcome, you are much more likely to critique your decision-making process and modify it than if the same decision had led to a good outcome.
Scale that up to a project or change programme:
If the Project Manager gets a successful outcome from their decision making, they are much more likely to repeat that methodology in future, even if their team advises them against it. However, if a PM gets a poor outcome, they are much more likely to be receptive to their team's suggestions for process change and improvement.
This is Outcome Bias
A cognitive bias that leads us to judge our decision-making process by its results rather than by the quality of the process itself.
None of us can avoid outcome bias. But how can we effectively reduce its impact on our teams and organisations?
1. Reflect Early & Reflect Often
Ideally, you want a first reflection point before your team experiences the outcome of the decision-making process. This is not always possible, but if you can find an opportunity to challenge and reflect upon the quality and integrity of your decision-making before you experience the outcome, it has the greatest effect in reducing outcome bias in your evaluation.
Additionally, ensure that you conduct a reflection or evaluation process consistently, regardless of success or failure. Teams often only find the time to reflect on the more challenging elements of their project or change programme, neglecting the perceived "quick wins" - this automatically increases outcome bias, so it is important to have a balanced and consistent reflection process built into daily routines.
Within these reflection points, consider:
What circumstances led up to the decision?
What information was available / was not available at this decision point?
Were all areas of the decision-making process fully considered and evaluated?
Were there other people who should have / could have been consulted?
Was there any need to make the decision at the time it was made?
2. Raise your Standards & Persist in your Processes
In our decision-making process, do we have defined standards to which we make consistent choices?
For example:
If you look at a weather forecast and it has a 40% chance of rain, do you take an umbrella?
Do you always make that decision, or is it inconsistent?
If inconsistent, then you have no defined standard and therefore no consistent decision-making process.
If you leave your house without an umbrella that day and it does not rain, it does not mean that you have a good decision-making process.
If we have no standards which we can consistently apply and benchmark against, then we can have no improvement to our decision making processes.
A standard doesn't necessarily need to be a threshold or a percentage if that is not suitable, but it does have to be a repeatable, auditable process which you apply in a consistent manner.
Once you have established this repeatable process or standard threshold, ensure that it has a clear owner. This enables consistent decision making and governance over the standard or process, so that it is not under constant modification due to a lack of control and ownership.
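The umbrella example above can be sketched in code as a simple, auditable decision standard. The 40% threshold, the rule name, and the owner here are illustrative assumptions, not prescribed values:

```python
# A minimal sketch of a defined decision standard: a fixed threshold,
# applied the same way every time, with a clear owner. The values are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class DecisionStandard:
    name: str
    owner: str          # the clear owner; only they may change the standard
    threshold: float    # the benchmark applied consistently each time

    def decide(self, observed_probability: float) -> bool:
        """Apply the standard and return the decision."""
        return observed_probability >= self.threshold

# The same standard, applied consistently, gives a repeatable process.
umbrella_rule = DecisionStandard(name="take umbrella", owner="PM", threshold=0.40)
print(umbrella_rule.decide(0.40))  # True: a 40% chance of rain meets the standard
print(umbrella_rule.decide(0.25))  # False: below the standard, no umbrella
```

Because the threshold lives in one named, owned object rather than in each day's mood, the decision process can be benchmarked and improved over time, independent of whether it rained.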
3. Understand your Data, Maintain your Data
The deliberate use of historic trends within the analysis and reflection process can help to highlight what is anomalous and what is effective. Without historic data, it is very difficult to ascertain whether an outcome was expected or unexpected in the context of the decision-making process.
Ensure however that your data is comparable - there is little point comparing data sets across fundamentally different decision-making processes as, without consistency, trends cannot be gathered and analysed.
A final point on data: it is equally important not to store and utilise all data indefinitely. There comes a time when data is no longer valid, and identifying when historic data has reached that point, and can therefore be disregarded, is important. Key questions to ascertain the validity of the data you are utilising in your analysis are:
What are the assumptions behind using the data you have been using?
Are these assumptions still valid in the current context/environment?
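The ideas above can be sketched as a simple check of a new outcome against comparable historic data. The validity window and the anomaly cut-off here are illustrative assumptions; the point is that stale data is discarded and the remaining data all comes from the same decision-making process:

```python
# A hedged sketch of using historic trends to flag anomalous outcomes.
# VALID_WINDOW and ANOMALY_Z are illustrative assumptions, not prescribed values.
from statistics import mean, stdev

VALID_WINDOW = 8   # only the most recent, still-valid data points
ANOMALY_Z = 2.0    # outcomes more than 2 standard deviations out are anomalous

def is_anomalous(history: list[float], outcome: float) -> bool:
    recent = history[-VALID_WINDOW:]   # disregard data past its validity window
    mu, sigma = mean(recent), stdev(recent)
    return abs(outcome - mu) > ANOMALY_Z * sigma

# Comparable data gathered from the same decision-making process:
delivery_days = [10, 11, 9, 10, 12, 11, 10, 11]
print(is_anomalous(delivery_days, 11))  # False: within the expected trend
print(is_anomalous(delivery_days, 25))  # True: unexpected in context
```

Without the historic series, the 25-day outcome is just a number; against a consistent baseline it is clearly unexpected, which is exactly the question outcome bias tempts us to skip.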
Enjoying our content? We'd love to hear from you - and make sure you subscribe via our mailing list.