Until new legal standards are established, parties should address AI risk in contracts 

Artificial intelligence (AI) is quickly making its way into seemingly every corner of American life. The spread of AI has been so rapid that existing laws, precedents, and industry standards often fail to provide guidance on how to determine liability when something involving AI goes wrong.  

The construction industry can certainly be counted among the sectors forced to navigate murky legal waters over the next decade, until detailed language on the use of AI, and a standard approach to mitigating or allocating its risks, becomes the norm in contracts. Still in its infancy, AI is creating opportunities and efficiencies in construction while also contributing to errors, mistakes, and failures on the jobsite caused by flaws in the AI or in how it is used. Who shoulders the blame, and more importantly the liability, for issues caused by the use of AI is often unclear.  

Owners of construction firms of all sizes are increasingly turning to AI for reduced costs and improved accuracy at every stage, from design optimization and scheduling to safety monitoring and inventory management. Clear contract language governing AI, and who is responsible for it, will both help avoid costly litigation and alleviate fears about using a tool so new and powerful, yet far from perfected, in an industry that requires precision.  

Where does the liability lie? 

When there’s a design flaw, a building defect, an HVAC issue, or a safety failure and AI was part of the process, who is responsible for the error? 


For example, let’s say AI software determines the optimal concrete mix for a particular job in a certain area of the country. If the software fails to consider regional factors — perhaps assuming the project is in a warm, dry climate instead of New England — the resulting mix could be unsuitable, leading to premature cracking or structural issues in the freeze and thaw of the Northeast. Who’s at fault? The software developer, the general contractor who approved its use, the subcontractor who entered the data, or the real estate developer who demanded AI integration to keep the project from going over budget?  

Traditional construction contracts don’t address the use of AI, meaning there’s no clear or uniform way to assign legal responsibility if a project fails due to a software error. Industry-standard contracts like those from the American Institute of Architects (AIA) will eventually include detailed language about AI — specifying why it’s used, who’s responsible for inputting and verifying data, and how its outputs are applied. But that’s years away. For now, the AIA has only gone so far as to issue guidance and pass resolutions encouraging the profession to adopt AI responsibly.  

Legal cases take time to move through the courts, and it will likely be several years before a body of case law provides some clarity. In the meantime, contractors need to cover their bases and work with legal counsel to update contracts to include language about AI. Even interim or supplemental clauses that define roles, responsibilities, and data ownership can help reduce exposure.  

Data privacy and ownership 

AI thrives on data. But where that data ends up, who owns it, and how it’s used are issues construction firms must take seriously. In a current case in Illinois (AXG Roofing LLC v. RB Global Inc et al.), construction companies allege that equipment rental providers inflated prices by sharing real-time, confidential data. 

Instead of reducing costs, firms could end up paying more if the data gathered by AI platforms exposes buying habits that suppliers can exploit. When using AI, it’s imperative to know who has access to the data, whether the platform is closed or open source, whether data is being sold or shared with third parties, and what protections exist against misuse. 

The same questions apply to AI-assisted design tools. Traditional contracts typically define ownership of the work product among the owner, architect, and engineer.  

But what about when AI creates or refines those designs through a series of collaboratively made inputs or prompts? Who owns the design then? 

These issues can become especially critical when projects change hands. If an architect or engineer is fired or quits, can they retain and reuse the data that was entered into the project’s AI? Does the project owner get the rights? Unless the contract addresses this, both sides may find themselves at a legal standstill.  

Employee rights and insurance 

Cameras have become increasingly common in every aspect of life and on the jobsite. AI software integrated with cameras can monitor workers’ safety compliance, detect exposed wiring, or identify other hazardous conditions. While the potential benefits to safety are huge, the technology raises serious privacy concerns.  

If employees are constantly being recorded, how are those recordings stored and used? Were all workers informed? Did they consent? Can footage be used for disciplinary action? What happens if those systems are hacked and sensitive or biometric information is exposed? Full transparency from the company and written acknowledgement from employees should be seriously considered. 

Full transparency must also extend to insurance providers. The use of AI introduces new complexities with coverage, as many policies may not account for risks related to the introduction or use of AI by an insured or by a third party working with the insured. If an AI tool fails and that failure leads to injury, property damage, or financial loss, will the insurer cover it? Here again, transparency is key. If AI is being used at any stage of a construction project and hasn’t been disclosed to the insurer, a carrier may try to deny coverage.  

Ten years ago, AI wasn’t an issue in the construction industry. And ten years from now, AI may no longer be an issue in the construction industry because the industry as a whole will have adapted. But right now we’re in the gray area. Until laws, policies, and standards catch up to AI, it’s up to construction firms to build up protection for themselves.   

Thomas Lambert  

www.pullcom.com  

Thomas Lambert is an attorney in the litigation and artificial intelligence practices at the law firm Pullman & Comley, with a particular focus on the construction industry. Tom has over a decade of experience representing individuals, businesses, fiduciaries, and municipal clients in probate court and in state and federal court. He represents construction industry clients in all phases and types of litigation, including disputes involving contracts, insurance, indemnification, liability, and more. Tom is chair of the technology committee at Pullman & Comley. The firm has offices in Connecticut, New York, Massachusetts, and Rhode Island.