19.09.2025 | Peter A. Emmi

The Impact of Artificial Intelligence on M&A Deals — Part II

In this two-part article, the author explores the impact of artificial intelligence (AI) on the mergers and acquisitions (M&A) deal value chain. In the first part, published in the March-April 2025 issue of The Journal of Robotics, Artificial Intelligence & Law, the author provided a high-level overview of generative AI, discussing recent advancements and applications across various industries. He then delved into how AI is used at different stages of the M&A deal cycle, including the role AI can play in target identification, due diligence, and post-merger integration. In this conclusion, the author illustrates the practical applications and benefits of AI as it applies to the M&A deal cycle by providing an overview of M&A transactions that implemented AI tools to improve certain aspects of the M&A deal process. The author also discusses, among other things, the limitations of the use of AI and why, despite the efficiencies gained through the use of AI, human expertise remains crucial for interpreting and evaluating the strength of AI-generated insights, making strategic decisions, and managing complex interpersonal dynamics.

 

Case Studies: Recent Deals Utilizing AI

The transformative impact of artificial intelligence (AI) can be seen through several high-profile acquisitions, each of which used AI at various steps along the mergers and acquisitions (M&A) process.

Salesforce and Tableau (2019)

In 2019, Salesforce acquired Tableau, a leading analytics platform, for $15.7 billion, in a strategic move that was aimed at enhancing its analytics capabilities and providing customers with advanced data visualization tools. AI played a pivotal role in this acquisition by enabling Salesforce to analyze vast amounts of customer data and market trends. AI tools were employed to assess Tableau’s market position, customer feedback, and the potential synergies between the two companies. Salesforce’s AI-aided analysis helped identify Tableau as a strategic acquisition target and ultimately enabled Salesforce to strengthen its data analytics portfolio and offer more comprehensive solutions to its customers. This, in turn, led to increased revenue and customer satisfaction.

IBM and Red Hat (2019)

Another notable AI-assisted M&A deal was IBM’s acquisition of Red Hat for $34 billion in 2019. The primary objective of IBM’s acquisition was to bolster its cloud computing services. AI-driven analytics were crucial in evaluating Red Hat’s business model and its potential fit within IBM’s strategic vision for hybrid cloud services. IBM used AI tools to assess operational efficiencies and identify integration opportunities. The acquisition positioned IBM as a stronger player in the cloud market, enabling it to leverage Red Hat’s open-source technologies and accelerate its cloud transformation strategy.

Siemens and Mentor Graphics (2017)

In 2017, Siemens acquired Mentor Graphics for $4.5 billion to strengthen its position in the electronics and automation industries. Siemens employed AI tools to analyze Mentor Graphics’ product offerings, customer base, and market positioning. By conducting a comprehensive analysis using AI, Siemens was able to identify key areas where Mentor’s technology could complement Siemens’ existing solutions. As a result, Siemens enhanced its digital offerings, particularly in design and manufacturing solutions. This drove innovation and growth in Siemens’ digital industries segment.

Adobe and Marketo (2018)

Adobe’s acquisition of Marketo for $4.75 billion in 2018, aimed at enhancing Adobe’s marketing cloud services, used AI-driven analytics to assess Marketo’s capabilities in customer engagement and its fit within Adobe’s marketing ecosystem. AI tools helped evaluate how Marketo’s solutions could complement Adobe’s existing products. As a result, Adobe was able to strengthen its marketing offering, leading to increased customer acquisition and retention. This acquisition further established Adobe as a leader in digital marketing solutions.

 

Challenges and Risks of AI in M&A

While incorporating AI into the M&A deal chain offers significant benefits, it also introduces several challenges. These challenges stem from various factors, including the quality of data, changing laws and regulations, and the limitations of AI systems in areas that require human judgment and accountability.

Data Quality and Integrity

Poor data quality undermines the performance and reliability of AI systems. When data is inaccurate, incomplete, or inconsistent, the AI algorithms that depend on it are likely to generate flawed predictions and analyses, producing misleading insights. If M&A professionals rely on such flawed insights to set strategy or determine next steps in a deal, the result can be poor decision-making. In the context of M&A, these risks are amplified by the high-stakes and time-sensitive nature of the decisions involved.

Examples of poor data quality include incomplete datasets, where essential information is missing; inconsistent data formats, which can cause errors in data processing; and outdated information, which can lead to analyses that do not reflect the current state of affairs. These issues can compromise the reliability of AI-driven insights, making it difficult for organizations to trust the outcomes generated by their AI systems, especially with respect to high-stakes areas such as compliance. Poor data quality can result in missed regulatory violations or misinterpretation of risk, potentially exposing organizations to legal and financial consequences. For example, if an AI system is trained on outdated or biased data, it may fail to identify emerging risks or regulatory changes that could affect the success of an M&A deal.

Robust data management practices are critical in ensuring that AI systems provide accurate and reliable insights. This includes regularly updating and cleaning data to remove inaccuracies and inconsistencies, ensuring that data from different sources is harmonized and standardized, and implementing policies and procedures to manage data quality and integrity. By maintaining high data quality, organizations can ensure that their AI systems support better decision-making in the M&A process, ultimately leading to more successful outcomes.
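
To make these practices concrete, below is a minimal sketch in Python of the kind of automated hygiene pass a deal team’s data pipeline might run before records reach an AI tool. The dataset, column names, and 18-month staleness cutoff are invented for illustration.

```python
import pandas as pd

# Hypothetical due diligence extract; entities, figures, and dates are invented.
records = pd.DataFrame({
    "entity": ["TargetCo", "TargetCo", "SubCo", None],
    "revenue_usd": [1_200_000, 1_200_000, None, 450_000],
    "report_date": ["2024-03-31", "2024-03-31", "2024-03-31", "2021-06-30"],
})

# 1. Drop exact duplicates that would double-count financials.
records = records.drop_duplicates()

# 2. Flag incomplete rows for human follow-up rather than silently dropping them.
incomplete = records[records[["entity", "revenue_usd"]].isna().any(axis=1)]

# 3. Standardize dates so records from different source systems are comparable.
records["report_date"] = pd.to_datetime(records["report_date"], errors="coerce")

# 4. Flag stale records that no longer reflect the current state of affairs.
cutoff = pd.Timestamp.now() - pd.DateOffset(months=18)
stale = records[records["report_date"] < cutoff]

print(f"{len(incomplete)} incomplete and {len(stale)} stale records flagged for review")
```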

Interpretability, Transparency, and Overreliance on Automation

The “black box” nature of AI, where the decision-making processes of machine learning models are not easily understood by human users, can make AI-driven recommendations difficult to interpret and may result in an overreliance on AI without adequate human oversight. Because the complexity of AI models often makes it difficult for users to grasp how specific recommendations are generated, this lack of understanding may lead M&A professionals to place undue trust in AI systems and deter them from critically evaluating AI outputs. Such reliance on AI output is particularly risky in high-stakes environments like M&A, where decisions must be well-informed and justifiable.

The challenge of overreliance on AI automation is particularly pronounced in complex regulatory environments that require judgment and contextual understanding. While AI can automate many tasks, it may fail to capture nuanced risks or interpret legal regulations that are open to interpretation. This can have significant impacts, as reduced human oversight may lead to missed compliance requirements or a failure to account for gray areas in the law. For example, in heavily regulated industries like healthcare or finance, regulations are often complex and subject to frequent changes. AI systems might not fully understand the subtleties of new regulations, leading to compliance gaps if they are not regularly updated and monitored by human experts. To address these challenges, efforts to enhance AI transparency are crucial.

Adapting to Changing Regulations

The regulatory landscape is in a state of constant evolution, with new laws and standards being introduced at a rapid pace across multiple jurisdictions. This dynamic environment poses a significant challenge for AI systems, especially those that rely heavily on historical data. As regulations evolve, AI systems that are not updated become increasingly obsolete, reducing their effectiveness and reliability. Without frequent updates and retraining, these systems may struggle to keep up with the latest regulatory changes. This lag can result in AI systems providing inaccurate assessments or failing to flag emerging risks, which can lead to compliance failures. For example, a company using AI to monitor financial transactions for compliance with anti–money laundering (AML) regulations may miss new methods of money laundering or fail to comply with updated AML laws if the AI is not properly trained to address these changes. Companies that do not keep their AI systems current may find themselves at a disadvantage, as they are unable to accurately assess risks or identify compliance issues. This can result in missed opportunities and increased vulnerability to regulatory scrutiny. Noncompliance can result in significant financial penalties, legal repercussions, and reputational damage.

To address these issues, companies should implement a robust strategy for the continuous improvement and updating of their AI systems. Companies should regularly monitor regulatory changes and retrain AI models as new data and information become available. There should also be collaboration between technical and legal experts to confirm that any AI system is developed to reflect and address current laws and standards.
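
One way to operationalize such monitoring is to test whether live data still resembles the data the model was trained on. The sketch below uses the population stability index (PSI), a common drift statistic; the simulated AML transaction amounts and the decision thresholds are illustrative assumptions, not regulatory standards.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Measure how far live data ("actual") has drifted from training data ("expected").

    A common rule of thumb (illustrative only): PSI < 0.1 is stable,
    0.1-0.25 warrants investigation, and > 0.25 suggests retraining.
    """
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover the full range of values
    e_pct = np.histogram(expected, bins=cuts)[0] / len(expected)
    a_pct = np.histogram(actual, bins=cuts)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_amounts = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)  # model's training data
live_amounts = rng.lognormal(mean=8.6, sigma=1.2, size=10_000)      # shifted live behavior

psi = population_stability_index(training_amounts, live_amounts)
if psi > 0.25:
    print(f"PSI = {psi:.2f}: transaction patterns have shifted; trigger retraining and human review")
```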

Bias and Discrimination

AI models can inherit biases from the data they are trained on, leading to skewed outputs and biased decision-making and necessitating measures to ensure fairness and equity. Biases in AI can result in disproportionately flagging certain demographics or businesses based on historical data and can lead to regulatory violations related to discrimination laws, especially in industries like banking, where compliance with antidiscrimination regulations is critical. For example, an AI-driven credit risk assessment tool may inherit biases from historical data, leading to discriminatory lending practices that violate antidiscrimination regulations. Moreover, the public disclosure of biased AI decisions can cause significant reputational damage to a company. For instance, if a financial institution’s AI system is found to be discriminating against certain demographics, it could face public backlash and lose the trust of its customers.

To address these issues, organizations must implement measures to detect and mitigate bias in AI. This includes collecting data from various sources and ensuring that minority groups are adequately represented. Additionally, the adoption of ethical AI frameworks and guidelines (as discussed in more detail below) can help organizations ensure their AI practices are fair and non-discriminatory, and provide accountability throughout the organization.
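
As one concrete illustration of bias detection, the sketch below applies the four-fifths (80 percent) rule, a screening heuristic drawn from U.S. employment practice, to hypothetical approval decisions from a credit model. The data and the 0.8 cutoff are illustrative; a failed screen signals the need for human review, not a legal conclusion.

```python
import pandas as pd

# Hypothetical approval decisions from an AI credit model, by demographic group.
decisions = pd.DataFrame({
    "group":    ["A"] * 500 + ["B"] * 500,
    "approved": [1] * 340 + [0] * 160 + [1] * 230 + [0] * 270,
})

rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()
print(f"approval rates: {rates.to_dict()}, impact ratio: {impact_ratio:.2f}")

# Four-fifths rule: a selection-rate ratio below 0.8 is a common screening
# signal of potential adverse impact (a heuristic, not a legal determination).
if impact_ratio < 0.8:
    print("Potential adverse impact detected: escalate for human fairness review")
```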

Legal and Accountability Issues

The use of AI in areas such as compliance and risk management within the M&A process raises significant legal and accountability issues. Companies may face challenges in determining who is responsible when an AI system makes an incorrect decision or fails to identify a risk. This ambiguity can lead to legal complications, especially if an organization faces noncompliance or regulatory breaches. In such cases, proving that the organization exercised due diligence can be difficult if there was heavy reliance on AI. For example, if an AI system fails to detect financial irregularities during the due diligence phase of an M&A transaction, resulting in regulatory action or enforcement, a company may find it challenging to defend its reliance on AI in court, especially if it does not have clear AI governance frameworks and robust documentation to support the decision-making process. Establishing these frameworks not only helps in legal defense but also enhances the overall trust and reliability of AI systems in the M&A process.

Moreover, there are international variations in AI accountability laws that organizations must navigate. For example, the European Union has stringent regulations regarding AI accountability and transparency, while the United States may have different standards and requirements. Understanding these variations is crucial for multinational corporations to ensure compliance across different jurisdictions.

Cost of Implementation and Maintenance

The costs associated with adopting and implementing AI into the M&A process can be significant. Initial investment costs include those related to the acquisition of software and hardware, licensing fees for AI platforms and tools, and integration with existing systems. Furthermore, ongoing operational costs also need to be evaluated. These costs can include regular updates and upgrades to AI systems, cloud storage, and computing costs, as well as data acquisition and management expenses.

In addition to product and maintenance costs, companies must invest in employee training programs to ensure staff can effectively and safely use AI tools. Companies may incur significant costs to hire and retain specialized AI talent. Hidden costs, such as those associated with data privacy and security compliance, and unforeseen expenses due to technology obsolescence, can further strain financial resources.

For smaller firms, the costs associated with AI implementation may outweigh the benefits, especially if the AI system is not properly tailored to their specific compliance needs. Frequent updates to keep AI aligned with regulatory changes can further increase operational costs, making it difficult for these companies to sustain their AI initiatives. For example, a mid-sized firm may invest in AI for compliance monitoring but struggle with the ongoing costs of retraining the system to meet new legal requirements or address emerging risks. This highlights the importance of careful planning and strategic management of AI implementation and maintenance costs to ensure that the benefits of AI adoption are fully realized without placing undue financial strain on the organization.

Limitations in Cross-Border M&A Transactions

While AI is well-poised to help buyers navigate the complexities of cross-border M&A for many of the reasons described earlier, using AI in international deals presents unique risks, particularly with respect to data privacy laws, which vary significantly. Non-compliance with these laws can result in severe penalties and legal repercussions. As a result, M&A professionals must ensure that AI tools used for data processing and analysis comply with all relevant laws in each jurisdiction involved in the deal, especially if personal data is being used. For example, some jurisdictions mandate explicit consent for the processing of personal data and may even require individuals to be notified when personal data is used with an AI tool. Additionally, many jurisdictions impose restrictions on transferring personal data across borders, especially if the destination country does not have adequate data protection measures. Using AI that requires transferring sensitive data to countries with less stringent regulations may expose organizations to legal risks and compliance challenges. M&A teams must navigate these restrictions carefully to avoid potential legal issues.
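
One way to enforce such requirements programmatically is a compliance gate that checks consent and transfer rules before any personal data reaches an AI tool hosted abroad. The sketch below is illustrative only; the jurisdiction rules are placeholders, not statements of any country’s actual law.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JurisdictionRules:
    """Placeholder rules; real rules must come from counsel in each jurisdiction."""
    requires_explicit_consent: bool
    adequate_destinations: frozenset  # destinations deemed to have adequate protection

RULES = {
    "COUNTRY_X": JurisdictionRules(True, frozenset({"COUNTRY_X", "COUNTRY_Y"})),
    "COUNTRY_Y": JurisdictionRules(False, frozenset({"COUNTRY_Y"})),
}

def may_send_personal_data(origin: str, destination: str, has_consent: bool) -> bool:
    """Gate personal data before it is processed by an AI tool in another country."""
    rules = RULES[origin]
    if rules.requires_explicit_consent and not has_consent:
        return False  # origin mandates explicit consent that was not obtained
    return destination in rules.adequate_destinations

# Block the transfer and escalate to counsel rather than silently proceeding.
if not may_send_personal_data("COUNTRY_X", "COUNTRY_Z", has_consent=True):
    print("Transfer blocked: destination lacks adequacy under the origin's rules")
```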

As previously discussed, AI systems can unintentionally perpetuate biases present in training data, leading to unfair or discriminatory outcomes. This can be particularly problematic in cross-border M&A, where data related to different demographics or cultural contexts is analyzed. Organizations must ensure that AI models are unbiased and transparent to avoid violating local laws or regulations. AI tools may not account for cultural nuances, leading to misinterpretations of data or insights that could negatively affect cross-border negotiations or integrations. Professionals must be cautious in how they apply AI-generated insights, ensuring they consider cultural contexts to avoid missteps.

Different jurisdictions may have varying laws regarding data ownership, which can complicate the use of data in AI models, especially when integrating information from multiple sources. Organizations need to clarify data ownership and intellectual property rights to avoid disputes arising from data usage in AI applications.

 

The Continued Need for Human Intervention

Contextual Understanding

Despite the impressive capabilities of AI, human judgment remains essential for understanding complex and ambiguous situations during an M&A transaction. While AI excels at processing large volumes of structured data, it may fall short in interpreting nuanced, context-specific information. The ability of human advisors to incorporate qualitative insights and industry expertise ensures a more holistic evaluation of the transaction’s future potential.

Humans are better suited to interpret subjective factors like corporate culture during an M&A transaction. Corporate culture is a critical determinant of post-merger integration success, and understanding it requires qualitative assessment methods that AI cannot fully replicate. Qualitative insights help M&A professionals gauge the cultural fit between merging entities and anticipate potential integration challenges. Consider a scenario where a company is evaluating leadership quality and corporate culture during due diligence. AI can provide insights based on employee surveys or public sentiment, but the nuanced understanding of how leadership styles and corporate values align requires human judgment and experience.

Experienced M&A professionals can draw on their past experiences to make informed judgments, even when data is ambiguous or contradictory. For example, during negotiations, human negotiators can read between the lines, understand the motivations of the other party, recall the styles and tactics their counterparties have used in previous deals, and adapt their strategies in real time—capabilities that AI currently lacks. This adaptability and real-time decision-making are critical in ensuring the success of M&A transactions.

Flexibility in Decision-Making

Human adaptability is vital in novel or unforeseen circumstances during M&A transactions. While AI operates based on predefined algorithms and historical data, humans can adapt to changing environments and make decisions in real time. During the COVID-19 pandemic, many M&A deals faced unforeseen challenges such as market volatility, supply chain disruptions, and changes in consumer behavior. Human adaptability was crucial in navigating these challenges by assessing situations as they evolved, weighing competing priorities, and adjusting strategies accordingly. During the integration phase, human flexibility is critical for addressing unexpected challenges, such as cultural clashes, operational inefficiencies, and employee resistance. Adaptive leadership ensures a smooth transition and successful integration.

Strategic and Long-Term Vision

Human strategic thinking and vision are critical for long-term success in M&A transactions. Effective M&A strategies consider both short-term integration challenges and long-term opportunities for synergy, market expansion, and competitive advantage. Human leadership is essential for setting long-term M&A goals that align with the company’s strategic vision. Leaders provide direction, inspire teams, and make decisions that drive sustainable growth and value creation. Strategic thinking involves evaluating market trends, competitive dynamics, and emerging opportunities to develop a clear roadmap for the future. For example, while AI might identify an acquisition of a technology start-up by a large corporation as a high-risk transaction due to its volatile financials, human analysts can recognize the start-up’s innovative technology and strategic fit with the corporation’s long-term goals. Human analysts can foresee potential synergies and market opportunities that AI might overlook, allowing them to appreciate the long-term benefits despite the current risks. Human leaders play a crucial role in balancing short-term financial gains with long-term strategic objectives. This involves making decisions that prioritize sustainable growth, innovation, and value creation over immediate profits.

Balancing Automation with Human Intervention

There are several human-centered approaches that can be implemented to mitigate the risk of an overreliance on AI automation. One approach is to develop and implement Explainable AI (XAI) techniques. XAI aims to make AI models more interpretable, allowing users to understand the rationale behind AI-driven decisions. In the context of M&A, XAI can provide clear insights into how AI models evaluate potential deals, assess risks, and identify synergies, thereby facilitating more informed decision-making. By making AI more interpretable and transparent, organizations can leverage AI’s full potential while maintaining the trust and confidence of all stakeholders involved.
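
One widely used model-agnostic interpretability technique is permutation importance: shuffle one input at a time and measure how much the model’s accuracy degrades. The sketch below applies it to a synthetic stand-in for a deal-scoring model; the feature names are invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a deal-scoring dataset; the feature names are hypothetical.
feature_names = ["revenue_growth", "customer_churn", "debt_ratio", "market_overlap"]
X, y = make_classification(n_samples=1_000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the drop in accuracy: larger drops
# mean the model leans on that feature more heavily when scoring deals.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>16}: {score:.3f}")
```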

Human-in-the-loop systems are also an effective strategy because they integrate human oversight at critical decision points to ensure that AI-generated outputs are reviewed and validated by experts. By leveraging the strengths of both AI and human judgment, these models can provide more accurate and reliable outcomes. For instance, while AI can handle large volumes of data and identify patterns, human experts can provide the contextual understanding and critical thinking necessary for nuanced decision-making.
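
A minimal human-in-the-loop pattern routes each AI finding by confidence: results the model is sure about are recorded (and still sampled in periodic audits), while ambiguous ones are queued for expert review. In the sketch below, the clause names, risk scores, and 0.90 threshold are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

REVIEW_THRESHOLD = 0.90  # assumed cutoff; tune to the organization's risk appetite

@dataclass
class Finding:
    clause_id: str
    risk_score: float  # model's confidence (0..1) that the clause is high-risk

def triage(findings: List[Finding]) -> Tuple[List[Finding], List[Finding]]:
    """Record confident AI findings; queue uncertain ones for a human expert."""
    confident, uncertain = [], []
    for f in findings:
        # Scores near 0 or 1 mean the model is sure either way.
        if f.risk_score >= REVIEW_THRESHOLD or f.risk_score <= 1 - REVIEW_THRESHOLD:
            confident.append(f)
        else:
            uncertain.append(f)  # ambiguous: route to expert review
    return confident, uncertain

flagged = [Finding("change-of-control-7.2", 0.97),
           Finding("indemnity-9.1", 0.62),
           Finding("non-compete-11.4", 0.04)]
confident, uncertain = triage(flagged)
print(f"{len(confident)} findings recorded, {len(uncertain)} routed to human review")
```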

Ensuring Fairness

Human intervention is crucial to ensure fairness and equity in AI systems. AI cannot, on its own, detect and correct the biases embedded in its models; humans must implement bias detection tools to identify and mitigate them. Inclusive design practices, which involve diverse teams in the design and development of AI systems, can significantly reduce biases. Additionally, ensuring transparency and accountability in AI decision-making processes is essential. This involves developing robust risk management strategies, ethical guidelines, and standards to hold AI systems accountable. Regular audits and updates of AI systems are also necessary to address any identified biases and ensure ongoing compliance with antidiscrimination regulations. By adopting these measures, companies can enhance the fairness and equity of their AI systems and mitigate the risks associated with biased decision-making.

Ethical and Legal Considerations

While AI systems are powerful, they lack the ability to make ethical judgments or anticipate the societal impact of a merger or acquisition. Ethical dilemmas often arise in M&A, and human judgment is required to balance financial gains with social impact, ensuring that the broader implications of M&A activities are responsibly managed. Human decision-makers must weigh these ethical considerations and ensure that the actions taken are in the best interests of all stakeholders.

Regulatory bodies and compliance officers play a crucial role in overseeing the ethical and legal aspects of M&A transactions. They ensure that AI-driven decisions align with regulatory standards and ethical guidelines. For example, regulatory bodies may require clear, explainable decision-making processes, especially when assessing whether a company complies with laws and standards, and human compliance officers are responsible for ensuring that AI-generated insights are interpreted correctly and that any potential legal or ethical issues are addressed.

To fully realize the potential of AI, additional steps must be taken to establish ethical guidelines and standards codified in universally accepted and enforced regulations. This includes educating the workforce about such regulations and related policies and practices, and fostering key collaborations and partnerships related to AI policy formation, research, and other initiatives. Comprehensive regulatory frameworks should address issues such as data privacy, security, and the mitigation of bias while emphasizing fairness, transparency, and accountability in AI development and deployment. Regulatory bodies should continue to engage with stakeholders, including industry leaders, ethicists, and the public, to create inclusive and effective policies that ensure the ethical and responsible use of AI.

Preparing the legal workforce for an AI-driven future requires a focus on education and training tailored specifically for lawyers. Law schools should integrate AI and technology-related subjects into their curriculums, preparing students for the evolving legal job market. This includes courses on legal technology, AI applications in law, and data privacy. Additionally, continuous learning programs, such as online courses and workshops, can help practicing lawyers develop new skills and stay updated with technological advancements.

Collaboration between law schools, bar associations, legal tech companies, and governments can ensure that training programs align with the needs of the legal market. By integrating human judgment, ethical considerations, and strategic vision, law firms and legal departments can navigate the complexities of AI implementation and achieve successful outcomes that align with their long-term goals. This holistic approach will enable lawyers to leverage AI tools effectively while maintaining the core values of the legal profession.

Driving AI Innovation

Human advisors are necessary to drive further AI innovation. Part of ethical and responsible AI development includes fostering a more open and cooperative environment within the AI industry. In October 2024, OpenAI, a leading U.S. research organization, announced that it will use its patents only for defensive purposes.1 Under this pledge, OpenAI will not assert its patents unless a party threatens or asserts a claim, initiates a proceeding, or aids others in such activities, and the use of the patents is required to defend OpenAI against those actions. Although OpenAI currently holds a relatively small number of patents, this pledge has the potential to set a powerful precedent within the AI industry. By committing to this approach, OpenAI aims to encourage innovation, build trust, promote transparency, and reduce legal barriers across the industry.

Public and private sectors should collaborate on research initiatives to drive AI innovation. Funding from governments and corporations can support cutting-edge research in AI, fostering breakthroughs in machine learning, natural language processing, and other areas. Public-private partnerships should be contemplated and formed to ensure that AI technologies are developed responsibly: governments can provide funding and regulatory support, private companies can contribute expertise and resources, and partnerships with academic institutions can facilitate knowledge sharing and accelerate development, ensuring that AI advancements benefit society as a whole. AI has the potential to revolutionize industries, yet existing laws have not kept pace with technological advancements.

Establishing Comprehensive Regulatory Frameworks to Regulate AI in M&A

Governments should implement comprehensive regulations specifically addressing the use of AI in business transactions, including M&A. These frameworks should focus on several key areas:

•     Data Privacy. Governments must ensure that AI systems used in M&A adhere to existing data privacy laws (e.g., the General Data Protection Regulation and the California Consumer Privacy Act) and define clear guidelines for the ethical use of sensitive data, including employee and customer data, during M&A processes.

•     Cybersecurity. Regulations should enforce strict cybersecurity protocols, ensuring that AI tools processing corporate data during the M&A deal cycle are secure from breaches or attacks. Governments should mandate compliance with cybersecurity standards, such as the NIST (National Institute of Standards and Technology) Cybersecurity Framework or ISO (International Organization for Standardization) 27001.

•     Competition and Antitrust. Governments should monitor how AI tools affect competition in M&A. By analyzing how AI algorithms assess market dominance, governments can prevent anticompetitive practices like data monopolies or price manipulation enabled by AI.

•    Fairness and Bias. Regulatory bodies should require companies to audit their AI systems for bias and fairness. AI tools in M&A must be designed to avoid discriminatory outcomes, particularly in workforce integration, customer analysis, or supplier negotiations.

 

In addition to these areas, setting ethical standards and guidelines, regulating cross-border data transfers, encouraging innovation while protecting stakeholders, strengthening antitrust and competition law enforcement, ensuring accountability and liability, promoting international collaboration and harmonization, addressing AI-specific risks in workforce integration, and regulating AI’s role in financial analysis and valuation are all critical components of a robust regulatory framework. These measures will help ensure that AI is used responsibly and ethically in M&A processes, protecting the interests of all stakeholders involved.

On July 26, 2024, NIST released a publication, “AI Risk Management Framework GenAI Profile” (NIST AI 600-1),2 to help organizations manage the risks associated with generative AI. This guidance is crucial as the use of generative AI has surged, yet many organizations lack robust AI governance programs. NIST’s framework identifies 12 high-level risks, including data privacy issues, harmful biases, and intellectual property concerns. To mitigate these risks, NIST categorizes its recommendations into four key areas: govern, map, measure, and manage:

1.   The “govern” category emphasizes aligning AI risk management with organizational principles and legal requirements.

2.   The “map” category focuses on documenting the context and intended uses of generative AI.

3.   The “measure” category involves developing processes to evaluate and improve AI system performance.

4.   The “manage” category prioritizes addressing AI risks based on their potential impacts.

Although NIST’s guidance is not legally binding, it is recognized as a valuable resource for demonstrating compliance with AI laws and regulations. Organizations are encouraged to integrate NIST’s recommendations into their AI governance programs to facilitate future compliance and mitigate risks effectively.
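
As one illustration of how an organization might put the four functions to work, the sketch below tracks hypothetical generative-AI risks from an M&A program against them. The risk entries, actions, and statuses are invented placeholders, not NIST content.

```python
# Illustrative risk tracker organized around NIST AI RMF's four functions.
# All entries below are invented placeholders, not NIST recommendations.
from collections import defaultdict

risk_register = [
    {"function": "govern",  "risk": "Personal data uploaded to AI diligence tools",
     "action": "Adopt policy limiting data shared with AI vendors",    "status": "done"},
    {"function": "map",     "risk": "Hallucinated terms in contract summaries",
     "action": "Document intended uses and known failure modes",       "status": "open"},
    {"function": "measure", "risk": "Extraction accuracy degrading over time",
     "action": "Benchmark quarterly against lawyer-reviewed samples",  "status": "open"},
    {"function": "manage",  "risk": "Biased counterparty screening outputs",
     "action": "Prioritize remediation by potential deal impact",      "status": "open"},
]

by_function = defaultdict(list)
for item in risk_register:
    by_function[item["function"]].append(item)

for function in ("govern", "map", "measure", "manage"):
    open_items = [r for r in by_function[function] if r["status"] == "open"]
    print(f"{function:>8}: {len(open_items)} open item(s)")
```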

 

Current Regulatory Frameworks and Ethical Guidelines

Regulating AI has become a pivotal focus in both the United States and the European Union, reflecting a concerted effort to balance innovation with ethical and safe AI deployment. Currently, there is no comprehensive federal legislation or regulation in the United States that regulates the development of AI or specifically prohibits or restricts its use. However, the United States has taken a proactive stance on AI regulation through several key initiatives aimed at ensuring the safe and equitable development and use of AI technologies.

United States

•  White House Blueprint for an AI Bill of Rights. In October 2022, the White House introduced the Blueprint for an AI Bill of Rights. This blueprint provides guidance on the equitable access and use of AI systems, articulated through five core principles: creating safe and effective systems, protecting against algorithmic discrimination, ensuring data privacy, providing notice and explanation, and maintaining human alternatives and considerations in automated processes.

•  White House Executive Order on AI Development and Use. In October 2023, an executive order was issued by President Biden (rescinded by an executive order of President Trump on January 23, 2025), focusing on the safe, secure, and trustworthy development and use of AI across various sectors. This order calls for the development of federal standards and requires developers of powerful AI systems to share safety test results and critical information with the U.S. government. It also directs the Department of Commerce to provide guidance on content authentication and watermarking for AI-generated content.

•  Senate AI Working Group’s AI Roadmap. In May 2024, the Bipartisan Senate AI Working Group released an AI Roadmap, which encourages further research on AI-related issues such as AI’s impact on the workforce and high-risk uses. The roadmap stresses the application of existing laws to AI systems and highlights the importance of best practices and human oversight in high-impact automated tasks.

•  NO FAKES Act. On July 31, 2024, U.S. lawmakers introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, commonly known as the NO FAKES Act. This proposed legislation aims to establish a federal property right to protect individuals from unauthorized digital replicas of their voice or likeness. This bill represents a significant shift in the protection of publicity rights in the United States, creating the first nationwide harmonized right of publicity.

•   White House Executive Order Entitled Removing Barriers to American Leadership in Artificial Intelligence. On January 23, 2025, President Trump issued an executive order aimed at establishing U.S. leadership in AI by revoking restrictive policies enacted by the previous administration. The order mandates the development of an action plan within 180 days to promote AI-driven economic competitiveness and national security and to allow humans to flourish in a world with AI. It directs governmental agencies to review conflicting regulations, updates the U.S. government’s Office of Management and Budget guidance, and clarifies that it does not create new legal rights. This executive order is directed toward deregulation intended to foster AI innovation in the United States.

 

Turning back to the NO FAKES Act, which remains under consideration in the U.S. Congress, the primary provision of the bill is to grant individuals, or their rights holders, the exclusive right to authorize the use of their voice and likeness in digital replicas. This right is defined to cover highly realistic, computer-generated representations that are readily identifiable as the individual’s voice or likeness. The bill specifies that these rights are a form of intellectual property, sharing similarities with copyright but tailored to address the unique challenges posed by digital replicas.

The legislation also addresses postmortem rights, extending protections to deceased individuals for up to 70 years after their death, provided there is active and authorized public use of their likeness or voice. This is a significant departure from the current state-by-state approach to postmortem rights, providing a more consistent framework.

Furthermore, the bill includes several provisions to safeguard these rights. For example, it allows individuals to license their rights for a maximum of 10 years, with shorter terms for minors. It also imposes civil liability on those who create or distribute unauthorized digital replicas, with penalties ranging from $5,000 to $25,000 per infringement, and offers injunctive relief and punitive damages for willful violations.

Importantly, the NO FAKES Act includes exemptions to address First Amendment concerns, such as protections for bona fide news, public affairs, sports broadcasts, documentary and historical uses, and works of parody or satire. The bill also preempts state laws related to digital replicas, aiming for a consistent national standard.

The introduction of the NO FAKES Act has been well-received by the entertainment industry, particularly within the music sector, where the threat of AI deepfake technology has raised significant concerns. The bill’s progress will be closely monitored by stakeholders across various industries, as its enactment could set a global precedent for protecting individuals from the evolving threats posed by AI technologies.

State/City Legislation on AI

In parallel with federal efforts, over 25 states have introduced AI legislation to address regulatory gaps. The 2024 State Summary on AI report by the Software Alliance highlights that while no specific model for AI legislation has emerged, in 2024 state policymakers introduced almost 700 pieces of AI legislation.3 The establishment of AI task forces in 33 states further underscores the growing momentum for AI legislation. For example, in May and June 2024, California proposed numerous AI-related laws, while Colorado enacted legislation regulating the private sector’s use of AI in decision-making, focusing on protecting consumers from discrimination.

New York has taken significant steps to regulate the use of AI to ensure that its application is fair, transparent, and protective of individual rights. On October 13, 2023, the state introduced the “New York Artificial Intelligence Bill of Rights,” which aims to safeguard residents from the potential harms of automated decision-making systems. This legislation focuses on protecting sensitive data, ensuring equitable treatment across all communities, and mandating transparency and oversight in the deployment of AI technologies.

New York City legislators have targeted the use of AI in the workplace through the introduction of the AI Bias Law, which restricts AI-only decision-making in hiring, promotions, and other employment-related decisions. This law requires human oversight and mandates bias audits for AI tools used in these processes to prevent discrimination and ensure fairness.

Following New York City’s lead, New York State solidified its commitment to ethical AI governance by introducing and subsequently passing, on February 13, 2025, the Legislative Oversight of Automated Decision-Making in Government (LOADinG) Act. This act, known formally as A9430-B, aims to provide assessment, transparency, and oversight of automated decision-making systems used for high-stakes decisions by state agencies. In addition, in January 2025, New York State legislators proposed Assembly Bill A768, which seeks to prevent AI algorithms from discriminating against protected classes; that bill remains under consideration.

These legislative efforts reflect New York’s commitment to leading in the ethical regulation of AI, balancing innovation with the protection of its citizens’ rights.

Similarly, Illinois is setting benchmarks with its new legislative measures, particularly with the enactment of HB (House Bill) 3773, which introduces significant advancements in AI employment regulation. This law, effective on January 1, 2026, mandates employer transparency when AI is used in significant employment decisions and uniquely prohibits the use of ZIP codes in AI algorithms to prevent proxy discrimination based on geographic data. This pioneering step not only highlights Illinois’ proactive approach in AI regulation but also sets a precedent that may influence future legislative efforts in other states.

Not all AI legislation has received overwhelming support, with government officials, judiciaries, and even those within the AI industry divided on the most appropriate regulatory approach. One example is California Governor Gavin Newsom’s veto of SB (Senate Bill) 1047,4 in September 2024. SB 1047 aimed to regulate AI models that cost more than $100 million to train and require more than 10^26 integer or floating-point operations, as well as models created by fine-tuning such covered models at a cost of more than $10 million. The bill mandated, among other things, the implementation of technical and organizational controls to prevent AI models from causing “critical harms” and imposed annual audit obligations.5
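
For perspective on the 10^26-operation threshold, the sketch below applies the common back-of-the-envelope approximation that training compute is roughly six times the parameter count times the number of training tokens. Both the approximation and the example model sizes are assumptions, not part of the bill.

```python
# Rough check against SB 1047's 1e26-operation threshold using the common
# approximation: training operations ~= 6 * parameters * training tokens.
# The approximation and the example model sizes are illustrative assumptions.
THRESHOLD_OPS = 1e26

def training_ops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

for params, tokens in [(7e9, 2e12), (70e9, 15e12), (1.8e12, 13e12)]:
    ops = training_ops(params, tokens)
    status = "would be covered" if ops > THRESHOLD_OPS else "below threshold"
    print(f"{params:.0e} params x {tokens:.0e} tokens -> {ops:.1e} ops ({status})")
```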

Despite overwhelming support in the legislature and from key AI scientists, Governor Newsom vetoed the bill. He argued that SB 1047 would stifle innovation by disproportionately burdening those developing “the most expensive and large-scale models” without recognizing the potential for smaller, more specialized models to be equally disruptive.6

Federal courts also grappled with the complexities of AI regulation when, in October 2024, a U.S. federal judge issued a preliminary injunction against AB (Assembly Bill) 2839,7 a California law that allows a person to sue for damages over AI election deepfakes. While acknowledging the risk that AI and deepfakes pose, U.S. District Judge John A. Mendez cited free speech concerns, stating that “[m]ost of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”8 The complexity of AI regulation necessitates a nuanced and well-considered approach, but the swift growth of AI emphasizes the need for prompt and effective regulatory action to address emerging ethical and technological concerns.

 

European Union—EU AI Act

The European Union has also been at the forefront of AI regulation with the EU AI Act. In December 2023, the EU AI Act received Council of the European Union approval. This comprehensive legislation classifies AI systems based on their risk levels and imposes stringent requirements for high-risk applications. It focuses on ensuring AI systems are safe, transparent, and respectful of fundamental rights, with rigorous testing, documentation, and monitoring requirements, particularly for AI used in critical sectors such as healthcare, transportation, and public services. The EU AI Act officially came into effect on August 1, 2024; however, most provisions of the regulation will not become applicable until August 2, 2026.

These legislative measures in the United States and the European Union reflect a shared commitment to creating a regulatory environment that fosters public trust and ensures AI advancements benefit society. By establishing clear guidelines and protections, these frameworks aim to promote the ethical and safe deployment of AI technologies, addressing current challenges and setting the stage for future innovations.

China

China is also enacting AI regulation, implementing key measures that significantly affect businesses. To navigate the complex landscape of the Chinese AI-related measures listed below, businesses must protect commercial secrets during filings, assess algorithm compliance, prepare security documentation in advance, ensure that all parties involved in deep synthesis technology comply, and monitor evolving regulations closely.

•  Generative AI Measures (Effective August 15, 2023). This act was passed to encourage innovation and international cooperation while mandating transparency and security for all generative AI services in China, including those from foreign providers.

•  Deep Synthesis Provisions (Effective January 10, 2023). This act intends to standardize the management of algorithmic recommendations and deepfakes, requiring clear labeling of AI-generated content and robust data security systems.

•   Ethical Review Measures (Effective December 1, 2023). This act addresses ethical challenges in AI development. Companies must establish internal review committees and seek external approvals for sensitive research.

•   Algorithm Recommendation Provisions (Effective March 1, 2022). This act requires the filing of algorithms influencing public opinion or social engagement with the Cyberspace Administration of China, including detailed documentation and ongoing compliance.

 

Conclusion

AI is transforming the M&A deal value chain by significantly enhancing efficiency and decision-making processes. It has been integrated into various activities at each step of the M&A process, from target identification and due diligence to post-merger integration. By automating routine tasks, analyzing large datasets, and providing data-driven insights, AI has enabled deal teams to work more efficiently, to address risks and issues earlier in the M&A process, and to identify previously unknown risks. However, human intervention remains essential. AI is not well equipped to handle aspects of the M&A process that require judgment, experience, and emotional intelligence, areas in which human advisors uniquely excel.

AI can automate many time-consuming tasks, allowing advisors to focus on higher-value activities such as deal structuring and negotiation. AI-powered analytics provide deeper insights, enhancing the strategic guidance advisors can offer. Moreover, AI tools can be integrated into advisory services, offering clients advanced analytics and personalized solutions. Advisors who adopt AI early can differentiate themselves in a crowded market by providing more precise, data-driven recommendations that speed up processes and reduce risks. The rise of AI in M&A is unlikely to eliminate the role of traditional M&A professionals entirely, but it has clearly increased the value of specialists in the legal and consulting fields, especially in the areas of advanced software and data processing. Specialist or not, AI will transform the roles of all such advisors, creating a plethora of new opportunities for advisors of all types to add value.

As AI continues to play an increasingly larger role in the M&A process, it will be crucial to find the appropriate balance between AI and human intervention. M&A professionals must ensure that data is accurate and of high quality, and they should balance human judgment with AI outputs to determine the best course of action during the M&A process. By understanding both the strengths and limitations of AI, companies and M&A professionals can more effectively use AI to improve the M&A process, potentially resulting in more successful and advantageous transactions.

M&A professionals must evolve their skills, processes, and approaches to thrive in a landscape increasingly influenced by AI and automation. Key strategies for adaptation include:

•   Developing AI and data literacy;

•     Embracing collaboration with AI;

•     Focusing on strategic, high-value work;

•     Adopting AI-enabled tools for process efficiency;

•     Strengthening soft skills and emotional intelligence;

•     Building expertise in ethical and regulatory compliance;

•     Staying updated on AI innovations and trends;

•     Specializing in AI-enhanced M&A functions;

•     Expanding roles beyond traditional M&A functions; and

•    Developing essential skills for the next generation of M&A professionals.

 

The next generation of M&A professionals will need a diverse skill set to work effectively alongside AI. Essential skills include:

 

•    Data literacy and analytical capabilities;

•    Technical proficiency;

•    Strategic thinking and problem-solving abilities;

•    Human-centric skills such as negotiation, interpersonal communication, and emotional intelligence;

•    A strong understanding of the limitations, ethical considerations, and compliance requirements related to AI;

•    Adaptability and a commitment to lifelong learning; and

•    Post-merger integration skills and project management expertise.

 

The integration of AI into the M&A process offers significant benefits, but it also requires a thoughtful approach that balances technological advancements with human expertise. By embracing AI while maintaining the indispensable role of human judgment, companies can achieve more efficient, informed, and successful M&A transactions. This balanced approach ensures that the strengths of both AI and human advisors are leveraged and optimized, leading to more advantageous outcomes during an M&A deal.

 


Notes

1. Ryan Davis, OpenAI Says It Will Only Use Its Patents “Defensively,” Law360 (Oct. 15, 2024), https://www.law360.com/articles/1890156.

2. National Institute of Standards and Technology, NIST AI Risk Management Framework: Generative Artificial Intelligence Profile, NIST AI 600-1 (2024), https://doi.org/10.6028/NIST.AI.600-1.

3. https://statescoop.com/ai-legislation-state-regulation-2024/.

4. California Senate Bill 1047, 2023-2024 Reg. Sess. (2023), https://leginfo.

5.  Id.

6.  Governor Newsom’s Veto of Senate Bill 1047, Sept. 29, 2024.

7.  California Assembly Bill 2839, 2023-2024 Regular Session, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2839.

8. Kohls v. Bonta, 2:24-cv-02527 JAM-CKD (E.D. Cal. Oct. 2, 2024) at 20-21.

Author
Peter A. Emmi

Peter A. Emmi is a partner and leader in the Technology Transactions practice within Reed Smith’s Global Corporate Group. Peter is also a patent attorney. Prior to his legal career, which has spanned over 20 years, he gained over 17 years’ experience as an engineer and manager at IBM, and he received Master of Science and Bachelor of Science degrees in electrical engineering from a prestigious university in the United States. He has extensive experience advising corporate clients regarding tech transactions, including mergers, acquisitions, spinoffs, and complex IP transactional matters.
