The explosive growth of artificial intelligence is reshaping industries, unlocking new opportunities, and accelerating innovation. This momentum is driven by advances in computing power, big data availability, and improved algorithms—alongside strong commercial interest from investors eager to fuel AI development. As M&A activity intensifies, this article explores the legal foundations critical to acquiring AI businesses, focusing on customer contracts and intellectual property.
Investor interest in AI businesses is rising, buoyed by the wave of commercial opportunities enabled by AI. M&A in the AI sector has been a major driver of innovation, market expansion, and industry consolidation. Transactions in this space tend to involve strategic acquisitions to consolidate AI ecosystems (e.g. Microsoft acquiring Nuance Communications). AI startups with niche applications also find favour with big tech firms and industry funders (e.g. Apple acquiring AI startup Xnor.ai).
Whether an AI business is acquired via a share purchase or an asset purchase, there are a few key considerations. Legal due diligence on the targeted AI assets or business will typically revolve around the target's customer contracts, intellectual property rights, data privacy and security, employment of key personnel, ESG (environmental, social, and governance) characteristics, and compliance with local regulations. These factors will inform the deal structure and the buyer's post-acquisition integration plans.
In conducting due diligence on AI assets, a prospective buyer should review any contracts that the target business has with existing customers for the provision of services reliant on its AI assets. Such contracts may take the form of software-as-a-service (SaaS) agreements, software licence agreements, or software subscription agreements. If the AI tool relies heavily on specific datasets provided by the business, there may also be data licensing agreements that outline the terms of use for those datasets.
The contracts for services running on AI-powered systems should clearly and comprehensively define performance expectations and standards. Crucially, these contracts should be scrutinised for clauses pertaining to intellectual property rights, performance warranties, risk allocation, indemnities, and limitations of liabilities where the AI-powered systems make mistakes, encounter errors, or result in losses.
In AI SaaS agreements, for instance, the service provider (i.e. the target business) will typically limit its legal risks and potential liabilities in a few ways. These include clear provisions stating that the service provider does not guarantee that the AI software-as-a-service platform will operate uninterrupted, in a timely manner, securely, or free from errors, viruses, vulnerabilities, or other malicious software; and that, where the platform includes links to websites and resources provided by third parties, those links are offered solely for informational purposes and do not constitute an endorsement by the service provider of those websites or any content contained therein. It would also be prudent to negotiate provisions to the effect that if the customer is dissatisfied with the platform, the customer's sole and exclusive remedy is to discontinue its use.
All contractual provisions should be commensurate with the capabilities, accuracy, reliability, and robustness of the AI systems. These technical characteristics are pertinent to the scope of the due diligence process. The likelihood and severity of losses resulting from AI-driven mistakes or errors should be thoroughly assessed. AI-powered systems may make mistakes owing to, for instance, biased algorithms leading to discriminatory outcomes in areas such as hiring and risk assessment. The technical and legal findings of the due diligence exercise should therefore be considered together.
One key question for companies that are developing, adopting, or acquiring AI solutions, whether off-the-shelf or tailored versions, is how their innovations can be protected. This is where IP rights come into the picture.
AI inventions are patentable, subject to the classic requirements for patentability. For software and computer-implemented inventions, it is also necessary to show that the invention has a technical effect. For instance, the mere application of known AI technology to a field such as energy will not be patentable. On the other hand, any innovative way in which the computer carries out a scheme or method, whether to forecast energy outputs, predict electricity price data using market trends, or recommend plans and packages to customers based on their usage patterns, is potentially patentable.
The source code and other creative elements of AI are also protectable by copyright. For instance, the graphical user interfaces of AI tools (e.g., a mobile app to provide support to customers) would be protected as artistic works. Copyright can also protect the datasets used for AI training (e.g., weather data that can be used to forecast the output of clean energy technologies). However, such protection is limited to the selection and arrangement of the data and does not extend to the data itself, and thus may not be very effective in the context of AI.
Database rights, where available, would provide more effective protection for datasets. Where such rights are unavailable (as in Singapore), it may be possible to rely on the law of confidence. This is subject, however, to the necessary confidentiality safeguards being in place, both within the organisation and vis-à-vis any third party given access to the dataset, as protection will be lost once the data enters the public domain.
As AI applications are data-driven systems, the lawful and authorised use of the data obtained and processed by these systems, as well as the data they produce, is a salient consideration. Unauthorised use of data would attract infringement risks, so care is required when using data obtained through web-scraping techniques or from dubious sources. Care is especially important where AI applications process personal data. In Singapore, the Personal Data Protection Act 2012 sets out the general framework for the collection, use, and disclosure of personal data. Appropriate systemic safeguards for the management of data, especially personal data, should be built into AI systems, including controls over the personnel who have access to these applications.
With the legal groundwork laid, Part 2 of this series will explore the human, ethical, and regulatory dimensions of AI M&A. We’ll examine how buyers can retain top talent, navigate ESG risks, and anticipate future compliance challenges—critical factors for long-term success in a rapidly evolving sector.
This article is produced by our Singapore office, Bird & Bird ATMD LLP. It does not constitute legal advice and is intended to provide general information only. Information in this article is accurate as of 15 October 2025.