How AI and Human Expertise Combine to Eliminate Bias in Project Planning
Overcoming Human Bias Through Artificial Intelligence
The science of project planning relies on two key ingredients: human expertise and supporting analytical tools. Of the two, the inputs provided by human experts (discipline leads, planners, cost estimators and engineers) are the true building blocks for determining the most realistic cost and schedule forecasts possible on projects that often span multiple years and cost hundreds of millions of dollars.
While the tools used to produce these forecasts are, of course, also important, they matter far less than ensuring the human input is accurate and free of bias. For the past 50 years, the project controls industry has been constrained by the limited availability of human expertise capable of offering unbiased, objective insight. With the advent of artificial intelligence (AI), that is thankfully changing.
Human Bias in Project Controls
Humans use their cognitive skills – that is, their knowledge and understanding of a situation – to offer expert opinion. This applies directly to project planning and controls: the human expert uses historical patterns (prior projects and benchmarks) to help predict the future (the outcome of the project in question). Unfortunately, the probability of introducing bias into this reasoning can be very high.
At its most basic, human cognitive bias can be defined as a systematic pattern of deviation from norm or rationality in judgment. Relating this back to project planning and scheduling, such a deviation can be caused by many factors, including:
- A lack of experience or data points from which to draw reasonable analogies or benchmarks;
- An emotional skew towards a desired outcome (such as desire for the project to finish within a given time-scale); and
- Political pressure to plan and forecast to a given pre-determined target.
How Artificial Intelligence Can Overcome Human Bias
As artificial intelligence and machine learning have become more mainstream, computers have also become more sophisticated in their ability to reduce or eradicate human bias.
At its most basic, AI can be defined as “the ability of a computer, through understanding of context, to learn from historical data and intelligently put forward suggestions about a future state in a manner akin to human expertise.” This is achieved through a simple three-step process:
- Knowledge capture
- Knowledge classification
- Knowledge mining
This three-step process begins with the capture of historical project knowledge (e.g., as-built schedules, cost estimates and risk registers). Modern databases can store this often unstructured data without the need to normalize it into fixed tables and fields. Once captured, the knowledge needs to be classified so that the AI engine can make an informed selection of relevant knowledge during the knowledge mining process (often known as “inference”).
With sufficient data points against which to make an inference (e.g., project location, weather conditions, historical productivity rates), the computer is able to make an informed suggestion or critique. The more data points the AI engine has to work with, the stronger the inference – and hence, the less chance of bias.
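To make the capture–classify–mine flow concrete, here is a minimal, hypothetical Python sketch. The record fields, tags and averaging logic are illustrative assumptions, not a description of any particular product: historical records are kept as loosely structured dictionaries, classified with simple tags, and then mined for matching benchmarks, with the number of matching data points serving as a rough indicator of inference strength.

```python
from statistics import mean

# Knowledge capture: historical records kept as loosely structured dicts
# (field names here are illustrative assumptions).
history = [
    {"activity": "pour foundation", "location": "coastal", "weather": "wet",
     "duration_days": 12},
    {"activity": "pour foundation", "location": "coastal", "weather": "dry",
     "duration_days": 9},
    {"activity": "pour foundation", "location": "inland", "weather": "dry",
     "duration_days": 8},
]

# Knowledge classification: tag each record so it can be selected later.
def classify(record):
    return {record["activity"], record["location"], record["weather"]}

# Knowledge mining ("inference"): select records whose tags match the
# project context and suggest a duration from those benchmarks.
def suggest_duration(context_tags):
    matches = [r for r in history if context_tags <= classify(r)]
    if not matches:
        return None, 0  # no basis for an inference
    # More matching data points -> stronger inference, less room for bias.
    return mean(r["duration_days"] for r in matches), len(matches)

duration, strength = suggest_duration({"pour foundation", "coastal"})
print(f"Suggested duration: {duration} days (based on {strength} data points)")
```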
AI engines are becoming increasingly sophisticated in their ability to predict accurately through inference, and this is very much the case within the science of project planning. Today’s AI planning tools are increasingly capable of not only validating plans already created by the human planner, but also making informed suggestions as to which activities, durations and even sequences of work should be used to develop a project plan.
Can Computers Suffer from Bias as Well?
While computers and AI engines are, of course, very good at making objective decisions, their ability to remain unbiased still depends largely on having sufficient, representative data. A knowledge library made up largely of poorly executed historical projects is likely to result in the computer offering overly pessimistic suggestions, because it knows nothing else.
This is where machine learning can help. While AI algorithms are very capable, they also require training, and that training can be used to overcome potential bias. One very interesting avenue of AI development is for the computer to self-adjust based on a project planner’s adoption or rejection of AI-generated suggestions. If the computer can identify suggestions that are consistently ignored, it can adjust its inference-engine weightings accordingly, making better suggestions the next time around.
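As a rough illustration of that feedback loop (the weighting scheme, suggestion names and thresholds below are simple hypotheticals, not a description of any particular inference engine), suggestions that planners repeatedly reject can have their weights reduced so they rank lower, or stop being offered, the next time around:

```python
# Hypothetical feedback loop: down-weight suggestion types the planner
# keeps rejecting, up-weight the ones they accept.
weights = {"add_weather_buffer": 1.0, "split_long_activity": 1.0}

LEARNING_RATE = 0.1
MIN_WEIGHT = 0.2  # below this, the suggestion is effectively suppressed

def record_feedback(suggestion_type, accepted):
    """Nudge the weight up on acceptance, down on rejection."""
    delta = LEARNING_RATE if accepted else -LEARNING_RATE
    weights[suggestion_type] = max(MIN_WEIGHT, weights[suggestion_type] + delta)

# The planner consistently ignores one kind of suggestion...
for _ in range(5):
    record_feedback("split_long_activity", accepted=False)
record_feedback("add_weather_buffer", accepted=True)

# ...so it ranks lower the next time suggestions are generated.
ranked = sorted(weights, key=weights.get, reverse=True)
print(ranked, weights)
```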
The concept of AI engines getting smarter through machine learning by taking on board human acceptance or rejection presents a very exciting opportunity for project planning and controls. When the computer is, in effect, humble enough to accept that its suggestions may be wrong and listens to feedback from the planner, the planner is far more likely to take the computer’s suggestions on board – creating a self-reinforcing loop of knowledge enrichment.
Can Computers and Humans Eradicate Bias?
Human and computer expertise can work together in harmony; the two are synergistic. Computer expertise can help guide a human expert in developing a project plan, fill in gaps where the human planner’s knowledge is lacking and even raise red flags through intelligent benchmarking. Likewise, the computer can learn from the human planner and adjust its AI algorithms to make better subsequent suggestions.
Taking this bidirectional knowledge flow to the next level, an ideal project planning environment would enable both the AI engine and the expert project participants to deliver their opinions to the planner during the planning process. The AI engine would provide calibration against historical knowledge, and the human experts would then provide validation. A plan that is both calibrated and validated is the ideal scenario.
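One way to picture that combined workflow is sketched below; it is purely hypothetical, with made-up benchmark values, field names and tolerance, and is not any product’s implementation. The AI engine calibrates each planned duration against a historical benchmark and flags outliers for the human experts to validate.

```python
# Hypothetical calibration-and-validation pass over a draft plan.
benchmarks = {"pour foundation": 10, "erect steel": 20}  # historical medians, in days
plan = [
    {"activity": "pour foundation", "duration_days": 9},
    {"activity": "erect steel", "duration_days": 35},
]

TOLERANCE = 0.25  # flag durations more than 25% away from the benchmark

for item in plan:
    benchmark = benchmarks.get(item["activity"])
    if benchmark is None:
        continue  # no historical basis; nothing to calibrate against
    deviation = abs(item["duration_days"] - benchmark) / benchmark
    # Calibration is done by the AI engine; validation of flagged items
    # is left to the human experts.
    item["needs_validation"] = deviation > TOLERANCE

print(plan)
```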
The Bottom Line
Accurate project forecasting has long suffered from human bias despite our best efforts to capture and leverage historical benchmarks and patterns. AI is, without a doubt, starting to reduce this human shortcoming, resulting in more predictable and achievable plans.
Just as humans can introduce bias, incomplete or unrepresentative data stored in a computer can lead to machine bias. Machine learning is the computer’s vehicle for overcoming this, and the speed at which AI is evolving means that both human and computer bias are rapidly being eradicated, resulting in far better project planning and forecasting.
To learn more, visit ineight.com/bias.
Dr. Dan Patterson is chief design officer with InEight. In this role, he focuses on expanding upon his vision of creating next-generation planning and scheduling software solutions for the construction industry. Dan is certified as a Project Management Professional (PMP) by the Project Management Institute (PMI).