National Security and Investment Bill

National Security and Investment Bill

On 11 November 2020, the National Security and Investment Bill 2019-21 was introduced to the House of Commons and given its first reading. The Bill will establish a new statutory regime for government scrutiny of, and intervention in, investments for the purposes of protecting national security, and follows the government's 2017 Green Paper and 2018 White Paper on the national security and infrastructure investment review.

National Security and Investment Bill Purpose

The Bill will enable the Secretary of State to "call in" statutorily defined acquisitions of control over qualifying entities and assets (trigger events) for a national security assessment, whether or not they have been notified to the government. Proposed acquirers of shares or voting rights in companies and other entities operating in sensitive sectors of the economy will be required to notify, and obtain approval from, the Secretary of State before completing their acquisition. Where there is no requirement to notify, the Bill also creates a voluntary notification system to encourage notifications from parties who consider that their trigger event may raise national security concerns. It includes a five-year retrospective call-in power, allowing for post-completion review of non-notified transactions, and, where parties fail to notify a trigger event that is subject to mandatory notification, a call-in power exercisable at any time.

Trigger Events

The following acquisitions constitute trigger events:

  • The acquisition of more than 25% of the votes or shares in a qualifying entity.
  • The acquisition of more than 50% of the votes or shares in a qualifying entity.
  • The acquisition of 75% or more of the votes or shares in a qualifying entity.
  • The acquisition of voting rights that enable or prevent the passage of any class of resolution governing the affairs of the qualifying entity.
  • The acquisition of material influence over a qualifying entity’s policy.
  • The acquisition of a right or interest in, or in relation to, a qualifying asset providing the ability to:
    • use the asset, or use it to a greater extent than prior to the acquisition; or
    • direct or control how the asset is used, or direct or control how the asset is used to a greater extent than prior to the acquisition.

For the purposes of mandatory notification, a qualifying entity is an entity engaged in the sectors referred to below.

National Security and Investment Bill Consultation

The government has published a consultation on proposed draft definitions of 17 sensitive sectors in which it will be mandatory to notify and gain approval for certain types of transactions, covering, for example, energy, telecommunications, artificial intelligence, defence, engineering biology, cryptographic authentication, computing hardware, and military and dual use. It invites comments on these definitions by 6 January 2021.

Policy Intent

The government has also published a Statutory Statement of Policy Intent describing how the Secretary of State expects to use the call-in power, and the three risk factors (target risk, trigger event risk and acquirer risk) that the Secretary of State expects to consider when deciding whether to use it. Once a transaction is notified or called in, assessment should be carried out within a 30-working-day review period (which is extendable in certain circumstances).

The National Security and Investment Bill gives the Secretary of State powers to impose remedies to address risks to national security (including the imposition of conditions, prohibition and unwinding) and sanctions for non-compliance with the regime, which include fines of up to 5% of worldwide turnover or £10 million (whichever is the greater) and imprisonment of up to five years. Transactions covered by mandatory notification that take place without clearance will be legally void.

The Bill also sets out provisions for interaction with the Competition and Markets Authority (CMA) and amendment of the Enterprise Act 2002. These include the removal of section 23A, which sets out the criteria for a merger to be a "relevant merger situation" qualifying it for investigation by the CMA, and the repeal of the Enterprise Act 2002 (Share of Supply Test) (Amendment) Order 2018, the Enterprise Act 2002 (Turnover Test) (Amendment) Order 2018, the Enterprise Act 2002 (Share of Supply) (Amendment) Order 2020 and the Enterprise Act 2002 (Turnover Test) (Amendment) Order 2020.

National Security and Investment Bill Specified sectors

The list of specified sectors will be set out in secondary legislation, the definitions of which will be kept under review to reflect any changes in the risks facing the UK.

The government is consulting on proposed draft definitions to set out the parts of the economy in which it will be mandatory to notify and gain approval for certain types of transactions. These cover 17 sectors:

  • Advanced materials.
  • Advanced robotics.
  • Artificial intelligence.
  • Civil nuclear.
  • Communications.
  • Computing hardware.
  • Critical suppliers to the government.
  • Critical suppliers to the emergency services.
  • Cryptographic authentication.
  • Data infrastructure.
  • Defence.
  • Energy.
  • Engineering biology.
  • Military and dual use.
  • Quantum technologies.
  • Satellite and space technologies.
  • Transport.

The consultation document sets out the government's proposed definitions for the types of entity within each sector that could come under the National Security and Investment Bill's mandatory regime. The definitions differ from those in the 2018 and 2020 Enterprise Act merger control amendments, which, as noted, were only ever intended as short-term measures and will be repealed by the Bill.

The deadline for commenting on the proposed definitions is 6 January 2021.

Comment

To date, very few transactions have been reviewed on national security grounds under the current UK framework, most recently Gardner Aerospace/Northern Aerospace, Advent/Cobham, Connect Bidco/Inmarsat, Gardner Aerospace/Impcross and Aerostar/Mettis. The Gardner/Impcross and Aerostar/Mettis transactions were abandoned following government opposition.

Currently, the Secretary of State has the right to intervene and take decisions on mergers only in strictly defined circumstances, where a defined public interest is at stake. National security is one of the grounds set out in the Enterprise Act upon which the Secretary of State can intervene. The government lowered the thresholds for intervention in the military and dual-use items, computing hardware and quantum technology sectors in June 2018, and in the advanced materials, artificial intelligence and cryptographic authentication sectors in June 2020.

Competition and Markets Authority

The CMA currently has a role in assessing jurisdictional and competition aspects of such mergers, providing advice to the Secretary of State. Under the National Security and Investment Bill, the CMA will no longer have a role in national security reviews. The Bill separates the national security assessment from the CMA's merger control assessment. However, it also gives the Secretary of State power to overrule the CMA, meaning that, in the event of a conflict, the national security review may take precedence over the merger control assessment.

If you have any questions on the National Security and Investment Bill or corporate law more generally, please contact our specialist corporate lawyers.


Statement Of Objections To Amazon

EC Sends Statement Of Objections To Amazon - Big Data Law

On 10 November 2020, the European Commission announced that it has sent a statement of objections to Amazon as part of its investigation into whether Amazon's use of sensitive data from independent retailers who sell on its marketplace is in breach of Article 102 of the Treaty on the Functioning of the European Union (TFEU). The Commission has also opened a formal investigation into Amazon's allegedly discriminatory business practices.

What data is Amazon collecting?

Amazon has a dual role as a platform:

  • It provides a marketplace where independent sellers can sell products directly to consumers. Amazon is the most important or dominant marketplace in many European countries.
  • It sells products as a retailer on the same marketplace, in competition with those sellers.

As a marketplace service provider, Amazon has access to non-public business data of third party sellers. This data relates to matters such as the number of ordered and shipped units of products, sellers' revenues on the marketplace, the number of visits to sellers' offers, data relating to shipping, sellers' past performance, and consumer claims on products, including activated guarantees.

Investigation into use of independent sellers’ data

In July 2019, the Commission announced that it had opened a formal investigation to examine whether Amazon's use of competitively sensitive information about marketplace sellers, their products and transactions on the Amazon marketplace constitutes anti-competitive agreements or practices in breach of Article 101 of the TFEU and/or an abuse of a dominant position in breach of Article 102 of the TFEU.

Statement of objections to Amazon

The Commission has now sent a statement of objections to Amazon alleging that Amazon has breached Article 102 of the TFEU by abusing its dominant position as a marketplace service provider in Germany and France. Having analysed a data sample covering over 80 million transactions and around 100 million product listings on Amazon's European marketplaces, the Commission is alleging in its statement of objections to Amazon that:

  • Very large quantities of non-public seller data are available to employees of Amazon's retail business and feed into automated systems. Granular, real-time business data relating to third party sellers' listings and transactions on the Amazon platform is systematically fed into the algorithms of Amazon's retail business, which aggregate the data and use it to calibrate Amazon's retail offers and strategic business decisions (such as which new products to launch, the price of each individual offer, the management of inventories, and the choice of the best supplier for a product).
  • This acts to the detriment of other marketplace sellers as, for example, Amazon can use this data to focus its offers on the best-selling products across product categories and to adjust its offers in light of the non-public data of competing sellers.
  • The use of non-public marketplace seller data, therefore, allows Amazon to avoid the normal risks of retail competition and to leverage its dominance in the market for the provision of marketplace services in France and Germany, which are the biggest markets for Amazon in the EU.

The Commission's concerns are not only about the insights Amazon Retail has into the sensitive business data of one particular seller, but rather about the insights that Amazon Retail has about the accumulated business data of more than 800,000 active sellers in the EU, covering more than a billion different products. Amazon is able to aggregate and combine individual seller data in real time, and to draw precise, targeted conclusions from these data.

The Commission has, therefore, come to the preliminary conclusion that the use of these data allows Amazon to focus on the sale of the best-selling products. This marginalises third party sellers and limits their ability to grow. Amazon now has the opportunity to examine the documents in the Commission's investigation file, reply in writing to the allegations in the statement of objections and request an oral hearing to present its comments on the case.

Investigation into Amazon practices regarding the “Buy Box” and Prime label

The Commission states that, as a result of looking into Amazon's use of data, it identified concerns that Amazon's business practices might artificially favour its own retail offers and offers of marketplace sellers that use Amazon's logistics and delivery services. It has, therefore, now formally initiated proceedings in a separate investigation to examine whether these business practices breach Article 102 of the TFEU.

Problems with digital platforms

In announcing these developments, EU Commission Vice-President Vestager commented that:

“We must ensure that dual role platforms with market power, such as Amazon, do not distort competition. Data on the activity of third party sellers should not be used to the benefit of Amazon when it acts as a competitor to these sellers. The conditions of competition on the Amazon platform must also be fair. Its rules should not artificially favour Amazon's own retail offers or advantage the offers of retailers using Amazon's logistics and delivery services. With e-commerce booming, and Amazon being the leading e-commerce platform, a fair and undistorted access to consumers online is important for all sellers.”

The report prepared for the Commission by three special advisers on "Competition Policy for the digital era" highlighted possible competition issues in relation to digital platforms. As part of the Digital Services Act package, the Commission is now considering the introduction of ex ante regulation for "gatekeeper" platforms, and consulted on related issues in June 2020.

Big data regulation

It remains to be seen how these EC investigations will play out and whether the same principles can be applied to smaller online platforms. UK regulators also appear to be ramping up their interest in the overlap between competition law and digital business. Chief Executive of the UK Competition and Markets Authority (CMA), Andrea Coscelli, noted last month that the CMA is increasingly focused on “scrutinising how digital businesses use algorithms and how this could negatively impact competition and consumers” and “will be considering how requirements for auditability and explainability of algorithms might work in practice”.

If you have any questions on the EC’s statement of objections to Amazon, data protection law or on any of the issues raised in this article please get in touch with one of our data protection lawyers.


ICO guidance on AI

ICO Guidance On AI Published - AI And Data Protection

On 30 July 2020, the Information Commissioner’s Office (ICO) published its long-awaited guidance on artificial intelligence (AI) and data protection (ICO guidance on AI), which forms part of its AI auditing framework. However, recognising that AI is still in its early stages and is developing rapidly, the ICO describes the guidance as foundational. The ICO acknowledges that it will need to continue to offer new tools to promote privacy by design in AI and to continue to update the guidance to ensure that it remains relevant.

The need for ICO guidance on AI

Whether it is helping to tackle the coronavirus disease (COVID-19), or managing loan applications, the potential benefits of AI are clear. However, it has long been recognised that it can be difficult to balance the tensions that exist between some of the key characteristics of AI and data protection compliance, particularly under the General Data Protection Regulation ((EU) 2016/679) (GDPR).

Information Commissioner Elizabeth Denham’s foreword to the ICO guidance on AI confirms that the underlying data protection questions for even the most complex AI project are much the same as with any new project: is data being used fairly, lawfully and transparently? Do people understand how their data is being used, and is it being kept secure?

That said, there is a recognition that AI presents particular challenges when answering these questions and that some aspects of the law require greater thought. Compliance with the data protection principles around data minimisation, for example, can seem particularly challenging given that many AI systems allow machine learning to decide what information is necessary to extract from large data sets.

Scope of the ICO guidance on AI

The guidance forms part of the ICO’s wider AI auditing framework, which also includes auditing tools and procedures for the ICO to use in its audits and investigations and a soon-to-be-released toolkit that is designed to provide further practical support for organisations auditing their own AI use.

It contains recommendations on good practice for organisational and technical measures to mitigate AI risks, whether an organisation is designing its own AI system or procuring one from a third party. It is aimed at those within an organisation who have a compliance focus, such as data protection officers, the legal department, risk managers and senior management, as well as technology specialists, developers and IT risk managers. The ICO’s own auditors will also use it to inform their statutory audit functions.

It is not, however, a statutory code and there is no penalty for failing to adopt the good practice recommendations if an alternative route can be found to comply with the law. It also does not provide ethical or design principles; rather, it corresponds to the data protection principles set out in the GDPR.

Structure of the guidance

The ICO guidance on AI is set out in four parts:

Part 1. This focuses on the AI-specific implications of accountability; namely, responsibility for complying with data protection laws and demonstrating that compliance. The guidance confirms that senior management cannot simply delegate these issues to data scientists or engineers, and remains responsible for understanding and addressing AI risks. It considers data protection impact assessments (which will be required in the majority of AI use cases involving personal data), setting a meaningful risk appetite, controller and processor responsibilities, and striking the required balance between the right to data protection and other fundamental rights.

Part 2. This covers lawfulness, fairness and transparency in AI systems, although transparency is addressed in more detail in the ICO’s recent guidance on explaining decisions made with AI (2020 guidance). This section looks at selecting a lawful basis for the different types of processing (for example, consent or performance of a contract), automated decision making, statistical accuracy and how to mitigate potential discrimination to ensure fair processing.

Part 3. This section covers security and data minimisation, and examines the new risks and challenges raised by AI in these areas. For example, AI can increase the potential for loss or misuse of large amounts of personal data that are often required to train AI systems or can introduce software vulnerabilities through new AI-related code. The key message is that organisations should review their risk management practices to ensure that personal data are secure in an AI context.

Part 4. This covers compliance with individual rights, including how individual rights apply to different stages of the AI lifecycle. It also looks at rights relating to solely automated decisions and how to ensure meaningful input or, in the case of solely automated decisions, meaningful review, by humans.

ICO guidance on AI - headline takeaway

According to the Information Commissioner, the headline takeaway from the ICO guidance on AI is that data protection must be considered at an early stage. Mitigation of risk must come at the AI design stage, as retrofitting compliance rarely leads to comfortable compliance or practical products.

The guidance also acknowledges that, while it is designed to be integrated into an organisation’s existing risk management processes, AI adoption may require organisations to reassess their governance and risk management practices.

A landscape of guidance

AI is one of the ICO’s top three strategic priorities, and it has been working hard over the last few years to both increase its knowledge and auditing capabilities in this area, as well as to produce practical guidance for organisations.

To develop the guidance, the ICO enlisted technical expertise in the form of Doctor (now Professor) Reuben Binns, who joined the ICO as part of a fellowship scheme. It produced a series of informal consultation blogs in 2019 that were focused on eight AI-specific risk areas. This was followed by a formal consultation draft published in February 2020, the structure of which the guidance largely follows. Despite all this preparatory work, the guidance is still described as foundational.

From a user perspective, practical guidance is good news and the guidance is clear and easy to follow. Multiple layers of guidance can, however, become more difficult to manage. The ICO has already stated that the guidance has been developed to complement its existing resources, including its original Big Data, AI and Machine Learning report (last updated in 2017), and its more recent 2020 guidance.

In addition, there are publications and guidelines from bodies such as the Centre for Data Ethics and Innovation and the European Commission, and sector-specific regulators such as the Financial Conduct Authority are also working on AI projects. As a result, organisations will need to start considering how to consolidate the different guidance, checklists and principles into their compliance processes.

Opportunities and risks

“The innovation, opportunities and potential value to society of AI will not need emphasising to anyone reading this guidance. Nor is there a need to underline the range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque approaches and algorithms.” (Opening statement of ICO guidance on AI and data protection.)

If you have any questions on data protection law or on any of the issues raised in the ICO guidance on AI please get in touch with one of our data protection lawyers.