Entering the AI realm? — consider the legal issues

Angela Gismondi

Businesses considering implementing artificial intelligence (AI) need to work through a range of legal issues, including intellectual property rights, privacy and civil liability.

Artificial intelligence is the use of technology to replace human thought, explained Adam Allouba, partner at Dentons. AI runs on data, which is used to train algorithms, and intellectual property law deals with ownership of a database or a compilation of data.

“The problem is you can’t just go online and start grabbing things and using them because people own copyright on those images,” he said. “You’ve really got to think about ‘do I have the rights for the data I’m using?’… It’s very important to think those things through because if you don’t, your whole system might be trained on something that is not compliant with law, which could open the door to lawsuits.”

Another issue is privacy. Jurisdictions such as Canada and the European Union have legislation governing the use of personal information, a term broad enough to include any information that relates to an individual and can be used to identify that individual.

“Whether you are talking about AI that is going to help advise you on your finances, or help you get a medical diagnosis, or monitor employees in the workplace, that AI is going to be trained on personal information and is going to be analyzing personal information when it is doing whatever it is you implemented it to do,” said Allouba. “There are very major questions surrounding that and they need to be dealt with because the laws really aren’t designed for today’s world. These are laws that were written in some cases decades ago when privacy concerns meant telemarketers phoning you at home during your supper hour.”

Artificial intelligence also raises particular civil liability concerns. Allouba used the example of a company that enters into agreements or contracts with different players in the AI space to implement an AI system, which could include a service provider, a software designer or consultants.

“You have really got to think about who is going to be liable if something goes wrong,” Allouba said. “Is it the fault of the person who implemented it, or is it not being used for the proper purpose? You’ve got to think about how to define who is responsible for what.”

In construction specifically, AI can be trained to recognize safety problems.

“The amazing thing about artificial intelligence is that I give you individual examples and you think ‘well, a human being can do that,’ and they can, but not only can the machine do it faster, what the machine can do is take into account hundreds, thousands, millions of variables at once and say, based on characteristics… I can tell you when there is going to be a higher than normal safety risk and I can flag that for you,” said Allouba.
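By way of illustration only (this sketch is not from the article), a risk-flagging model of the kind described could look roughly like the following Python, assuming a hypothetical set of past site records labelled with incidents; the feature names, numbers and threshold are all invented:

```python
# Illustrative sketch only: flag site conditions that carry a higher than normal
# safety risk by scoring them against past incident records. The dataset, feature
# names and threshold are all hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per site record: crew size, shift length (hours),
# temperature (C), equipment age (months), prior near-misses.
X_train = np.array([
    [12, 10, 28, 36, 2],
    [ 5,  8, 15, 12, 0],
    [20, 12, 31, 60, 4],
    [ 8,  9, 22, 24, 1],
])
y_train = np.array([1, 0, 1, 0])  # 1 = incident occurred, 0 = no incident

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score today's conditions and flag anything above the chosen risk threshold.
today = np.array([[15, 11, 30, 48, 3]])
risk = model.predict_proba(today)[0, 1]
if risk > 0.5:
    print(f"Elevated safety risk flagged: {risk:.0%}")
```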

AI can also be used to monitor equipment and predict breakdowns. Equipment on construction sites usually has a maintenance schedule defining the work to be performed after a set amount of time or hours of use.

“The AI can get a lot smarter about that and say ‘never mind what the schedule says, based on my actual monitoring of this equipment, based on all the data you gave me about past breakdowns, I can tell you that this part is going to fail way before the schedule says because it has had unusually high use or the conditions in which it has been used are much more rigorous, so you’re going to have to do maintenance or you’re going to have to replace it.’ Equipment that breaks down can create safety problems and it certainly is a pain because it leaves people idle, so if you can get a better handle on that, everyone is a winner.”
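Again purely as an illustration (not from the article), a minimal predictive-maintenance sketch might compare a model’s predicted time-to-failure against the fixed service interval; every field name and figure below is hypothetical:

```python
# Illustrative sketch only: predict hours until failure from actual usage data
# instead of relying solely on the fixed maintenance schedule. Every field name
# and figure here is hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical past readings: [hours of use, average load %, vibration level]
X_past = np.array([
    [400, 60, 2.1],
    [250, 85, 3.4],
    [500, 55, 1.8],
    [300, 90, 3.9],
])
hours_until_failure = np.array([220, 90, 310, 60])  # observed after each reading

model = LinearRegression().fit(X_past, hours_until_failure)

SCHEDULED_INTERVAL = 250  # hours between scheduled services
current_reading = np.array([[350, 88, 3.7]])
predicted = model.predict(current_reading)[0]

if predicted < SCHEDULED_INTERVAL:
    print(f"Service early: predicted failure in ~{predicted:.0f} hours, "
          f"ahead of the {SCHEDULED_INTERVAL}-hour schedule.")
```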

One of the challenges when it comes to implementing AI is that there are no laws that specifically apply to it.

“We’re starting to see the beginnings of it but we’re not quite there yet,” Allouba explained. “You can’t just go look at the Artificial Intelligence Act and it will tell you the 23 different things you have to do to be compliant.”

For those who want to be mindful of privacy and civil liability concerns, Allouba suggests looking at ethical frameworks such as the Montreal Declaration, which calls for the responsible development of AI; the European Union also offers guidelines.

“Some of the things they talk about are the importance of artificial intelligence not reproducing or even exacerbating biases in the real world,” said Allouba. “Make sure that your systems are trained on data that is very unbiased because it gets even more dangerous when human bias is reproduced by a machine.”
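As a hedged illustration of that advice (not from the article), one very basic check is to compare positive-label rates across groups in the training data before any model is fit; the column names and the 0.2 threshold below are hypothetical:

```python
# Illustrative sketch only: before training, check whether positive-label rates
# differ sharply between groups in the data. Column names and the 0.2 threshold
# are hypothetical.
import pandas as pd

training_data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1,   1,   0,   0,   0,   1],
})

rates = training_data.groupby("group")["hired"].mean()
print(rates)

if rates.max() - rates.min() > 0.2:
    print("Warning: large gap in positive-label rates between groups; review the data before training.")
```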

Avoid automated decision-making in sensitive cases, such as hiring for a job, he added.

“To the extent that you can, especially when you are making decisions that are going to have a significant and possibly long-lasting impact on people’s lives, make sure there is a human in the loop monitoring those decisions,” he said.
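As a final illustrative sketch (not from the article), a human-in-the-loop rule can be as simple as routing sensitive or borderline model outputs to a reviewer instead of acting on them automatically; the function, score band and identifiers here are hypothetical:

```python
# Illustrative sketch only: route sensitive or borderline automated decisions to a
# human reviewer instead of acting on the model's output directly. The function,
# score band and identifiers are hypothetical.
def decide(candidate_id: str, model_score: float, sensitive: bool) -> str:
    """Keep a human in the loop for sensitive or uncertain decisions."""
    if sensitive or 0.4 < model_score < 0.6:
        return f"candidate {candidate_id}: queued for human review (score {model_score:.2f})"
    return f"candidate {candidate_id}: auto-processed (score {model_score:.2f})"

print(decide("A-102", 0.55, sensitive=True))   # sensitive -> human review
print(decide("A-103", 0.92, sensitive=False))  # clear-cut -> automated path
```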

 

Follow Angela Gismondi on Twitter @DCN_Angela.
