AI and insurance - why businesses need to be careful


AI – or artificial intelligence – is everywhere now, and there seems to be no end to the fine artworks, animation stills, and sketches being created online by various AI tools and software. ChatGPT, for example, is being blamed for robbing students of the motivation to carry out research and write their own essays for their degrees.

Using AI for content generation is undoubtedly useful; it can help develop ideas and produce quality research writing by drawing on huge data-scrapes from the internet. It can also produce emails you can send at the click of a button, rather than laboriously typing out a tailored response to each query.

This brings to mind an example from a podcast I heard recently. What harm would be caused if a landlord were to generate automated emails to respond to queries or complaints from tenants? With such a vast quantity and quality of data on UK landlords, as well as the almost limitless supply of law and tenancy agreements available online, there is little doubt the right AI software could formulate a good response that appears somewhat personal to each recipient.

But what about the other side? A glimpse at the dark side of AI

AI can be dangerous for businesses in the UK if it is not implemented properly. Poorly designed or biased AI algorithms can lead to incorrect or unfair decisions, which can harm a company’s reputation and bottom line. Additionally, the rapid pace of AI development may make it difficult for some businesses to keep up.

Before you reset that puzzled look on your face: the paragraph above is a real response from an AI language model when I asked it, "Why might AI be dangerous for UK businesses?"

Though the consensus across the business community is that AI can be a useful tool, it has a number of potential downfalls. Yes, it presents all the information and data it gathers to create a bespoke response – but what if the data is wrong? The AI software still assembles the data into a compelling argument, one that sounds convincing even to experienced human readers. But is that enough to make it foolproof?

In many instances AI software will forewarn you of its capabilities and limitations, as in the table below:

[Table: the AI software's stated capabilities and limitations]

Another potential pitfall is that AI software equips you with potential answers, but without any understanding of how or why it reached them. This is particularly important for the insurance sector, where a big part of the value lies in understanding the process, not just the answer at the end.

Another way to look at this would be: imagine you require a potentially life-threatening operation:

  • On one hand, you have an experienced, skilled surgeon with the practice, knowledge, successes, and failures gained through years of organic experience, who can understand and react immediately during the procedure, implementing the best strategy for the desired outcome
  • On the other hand, you have a database model, which can explain perfectly how to perform a process irrespective of the circumstances, regardless of the variables, and with no understanding of the wider responsibility for your life

Who would you select to perform your surgery?

Of course, not everything is life and death. Let's look at professional indemnity (PI), a clear candidate insurance class that could benefit from a bit of AI insight. In the case below, a law firm is investigating whether AI software could help them win a case. I asked the AI language model how reliable AI was, and here's the answer, word for word:

AI has the potential to negatively impact businesses' professional indemnity insurance in a few ways. Firstly, if businesses rely too heavily on AI-powered systems for decision-making, there is a risk that errors or biases could be introduced into the decision-making process. If a business makes an incorrect decision based on faulty AI output, it could lead to a claim being made against their professional indemnity insurance. [Thanks, ChatGPT]

Regardless of who ultimately caused the misguidance, the claimant will likely bring litigation against the company that received money in exchange for advice, no matter who provided that advice. It's everyone's responsibility to use AI with absolute caution – because ignorance isn't an excuse.

Lasting impressions

The pace of the AI language model's responses was indeed impressive – full explanations and answers were provided within a second of asking these questions. My parting advice would be to explore it fully, but treat it with absolute caution.