Being intelligent with AI projects
Have you been asked to implement an AI system in your business? In this article, we look at the key legal issues likely to require consideration by an in-house lawyer before going live.
Contract
When contracting with an AI provider, many of the usual commercial and legal considerations for regular software or technology licence agreements will be relevant. For instance, you may need to consider whether the scope of licence and service specifications are sufficient, or how any necessary integration with your systems and third party software will be achieved. There will also be unique issues presented by an AI system. For example, are you an early adopter of the AI and, if so, to what extent can you rely on the AI solution to deliver the desired results in a legally compliant way? This may well depend on the test data used to develop and train the AI and the robustness of the testing process. There may be wider ethical issues for you to consider as well as how AI may impact your employees and current working practices. We discuss below some of the key legal issues you are likely to face, namely those relating to data protection compliance, IP ownership/infringement and protection of confidential information.
Data protection
Use of AI will require consideration of data protection laws where the AI system processes personal data. You should take a risk-based approach. This means assessing the risks to individuals’ rights and freedoms which may arise when you use AI and implementing appropriate and proportionate technical and organisational measures to mitigate such risks. This includes:
- using the AI system in compliance with data protection principles. For example, identifying the purpose and lawful basis for the data processing carried out by the AI system; providing transparent information to data subjects; processing the minimum amount of data necessary for your purposes and only in ways that data subjects would reasonably expect; keeping data for only as long as you need it; and ensuring the AI system is sufficiently statistically accurate and avoids bias and discrimination;
- assuring data subjects’ rights during the development and use lifecycle of the AI system such as the right to withdraw consent, rectify incorrect data or have data erased; and
- adopting appropriate security measures and complying with the rules on international transfers.
There are additional requirements if the AI system will use automated decision making which produces legal effects concerning or significantly affecting the individual, such as in a recruitment context. Suitable safeguards are required if such automated decision making will be carried out, including providing data subjects with transparent information and the right to obtain human intervention and an explanation of the decision and to challenge the decision. Appropriate procedures should also be implemented to ensure factors resulting in personal data inaccuracies are corrected, the risk of errors is minimised and to prevent discriminatory effects.
Intellectual property (IP)
IP infringement
You will need to ensure that you will not infringe third parties’ rights when using the AI system and that appropriate licences are in place covering use of the AI system and any of its outputs, including any generated content.
AI systems trained using machine learning on large data sets can give rise to potential third-party IP infringement claims from owners of the source data and content. Training the AI system in this way may involve using, copying or providing downloadable links to content incorporating third-party IP rights without permission from the owner. Additionally, if training data that includes copyright works is used to develop the AI system, outputs may infringe third-party rights if they reproduce the whole or a substantial part of a copyright work.
Liability for such infringement may not be limited to the AI system’s developer. For example, a user may be found liable for secondary infringement by possessing or dealing with, amongst other things, an article they knew or had reason to believe was an infringing copy of a copyright work. You should consider getting appropriate indemnities from your AI system’s provider.
IP ownership
Where the system uses generative AI to take information inputted by users and generate outputs based on its training data without any human intervention, it may be unclear who owns the IP rights in both the inputs and outputs. If you are expecting to own such IP, your contract with the provider needs to reflect this. Again, you should consider getting appropriate agreement and indemnification from your AI system’s provider in respect of such inputs and the content generated by the AI system.
Security and confidentiality
An AI system will have security considerations similar to those of other software services used by the business. AI systems may become a target for hackers because of the vast data sets sitting behind them. Hackers may exploit AI system vulnerabilities to gain unauthorised access to the business’ network and systems, or to extract the training data sitting behind a genAI system, for example through carefully crafted prompts.
Inputting data into a genAI system with inadequate security measures may lead to that data forming part of the data training sets. There may be no technical or legal recourse to remove your IP or confidential information once it has been input into the AI system. Without sufficient protection, your prompts, outputs and other data may be used to benefit third parties without your consent. If outputs contain personal data, this may amount to a data breach.
Accuracy, discrimination and bias
An AI system’s accuracy is generally determined by how often the system provides a correct answer when measured against its training data. AI is only as accurate as the data input into it. If the data sets include biased or incorrect information or reflect human developers’ own biases, this may form part of the system’s outputs. AI systems have been seen to produce convincing outputs which are misleading or factually inaccurate.
Accuracy is also a fundamental data protection principle, requiring you to keep personal data accurate and, where necessary, up to date. This applies to all personal data, whether included in inputs or outputs of the AI system. Whilst this does not mean the AI system needs to be 100% statistically accurate, if the AI system is making informed guesses about individuals, whether about the present or the future, any such data should be labelled as a guess rather than fact. As mentioned above, there are additional accuracy requirements if the AI system is used for automated decision making or profiling.
Practical tips
As an in-house lawyer, you may be asked to assess risk when the business embarks on an AI project.
As an initial step, ensure you have sufficient information to allow you to properly understand how the AI system works, including the nature of the data used to train the AI and the data the AI system will use. Carry out a risk assessment on the points raised in this article and, if necessary, a data protection impact assessment (DPIA) where the AI system’s processing is likely to result in high risk to individuals. A DPIA is a process which helps you identify and minimise the risks of projects involving data processing. Template DPIAs are available on the ICO’s website.
You should check the AI system has sufficient security measures in place for any personal data processing. You must be able to justify any decisions made relating to AI. If you cannot satisfy yourself that using the AI system is right for the business, best practice would be to choose another system.
Reviewing and negotiating a contract for the supply of an AI system will likely involve considering many of the same factors as a software licence agreement. From an AI perspective, the contract should also include:
- specific reference to the AI system and a clearly defined specification
- provisions protecting your existing IP and clarifying the ownership of the IP in the data used to train the AI, any customisations you request for the AI system and the inputs and outputs. As mentioned, you may wish to consider obtaining protection against third party IP claims in the form of indemnification from relevant suppliers. Your inputs and outputs should also be classed as confidential information
- supplier warranties and indemnities such as in relation to the AI’s training data and compliance with applicable law. You will also want commitments relating to the system’s availability and maintenance
- details of the supplier’s security measures and clear procedures for the notification and management of security breaches
- ensuring appropriate data processing provisions are in place in line with the outcome of your DPIA.
The review and consideration of the AI system should continue throughout its lifecycle of use. Laws, regulations, governance and risk management practices relating to AI are developing quickly so you should keep under review developments in law and ICO guidance which may affect the business’ use of AI systems.
If you would like to discuss anything in this article further, please contact:

Andrew West
Partner - Commercial Services
T: +44 (0) 161 393 9078 M: +44 (0) 7931 790894

Grace Astbury
Solicitor, Commercial Services
T: +44 (0) 161 393 9062 M: +44 (0) 7949 033514
What does the new duty to prevent sexual harassment mean for employers?
As of 26 October 2024, employers are subject to a legislative duty to prevent sexual harassment of employees. This legislation marks a new approach, turning what was previously a defence into a proactive duty with clearly defined consequences for non-compliance, placing an emphasis on the need for employers to look at their current culture and behavioural practices across their organisation.
A compelling alternative - Courts can compel parties to engage in ADR
The benefits of ADR have long been recognised. It is an efficient mechanism for resolving disputes quickly, privately and usually far more cheaply than taking a case to trial. New changes to the CPR mean that the court, since 1 October 2024, can now order parties to engage in ADR where it is proportionate and does not undermine the parties’ right to a judicial hearing.
Without prejudice – the legal shield
The principle of without prejudice privilege is a cornerstone of legal negotiation. However, like most legal principles, relying on the without prejudice label is not without its risks. Understanding the limits, nuances, and application of without prejudice correspondence is crucial to ensuring that this valuable legal protection does not backfire when you least expect it.
In the IHL seat
We are excited to bring you our first Pannone x IHL interview as a “fly on the wall” insight into life as an in-house lawyer. We are delighted to introduce Tom Kershaw and Sarah Petrie from the Manchester fashion house, boohoo. We are grateful for their time, allowing us to shine a spotlight on a role within one of the fastest growing fashion businesses in the world.
Pannone x IHL conference Q&A
Thank you to all who attended our annual IHL conference on 5 November 2024 hosted in partnership with BCL Legal Recruitment. We’ve collated a few of the interesting Q&As that arose during the day which we hope are helpful and give you a flavour of the variety of discussions from this year’s event.
Our Pannone x IHL is designed to bring you the latest news and legal developments relevant to in-house lawyers. If there are any areas you would like more information on or if you have any questions or feedback, please do not hesitate to let us know via our feedback form or get in touch with any member of our team.
Copyright in this publication is owned by Pannone Corporate LLP and all rights in such copyright are reserved. Pannone Corporate LLP is a limited liability partnership registered in England and Wales with number OC388393. Authorised and Regulated by the Solicitors Regulation Authority. A list of members is available for inspection at the registered office, 378-380 Deansgate, Manchester M3 4LY. We use the term “partner” to refer to a member of the LLP.