AI

AI - Consultation on International Standards  

On 25 June 2020, the International Organization of Securities Commissions (IOSCO) published a consultation document (CR02/2020) on the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers, which it has identified as a key priority.

IOSCO consultation paper on AI

IOSCO, the global standard setter for the securities sector, is consulting on proposed guidance on the use of artificial intelligence and machine learning by market intermediaries and asset managers. Once finalised, the guidance would be non-binding, but IOSCO would encourage its members to take it into account when overseeing the use of AI by regulated firms.

IOSCO’s membership comprises securities regulators from around the world. It aims to promote consistent standards of regulation for securities markets.

Why market intermediaries and asset managers?

IOSCO believes that the increasing use of AI and ML by market intermediaries and asset managers may be altering their business models. For example, firms may use AI and ML to support their advisory services, risk management, client identification and monitoring, selection of trading algorithms and portfolio management, which may also alter their risk profiles.

One fear is that this use of AI and ML may create or exacerbate certain risks, which could potentially have an impact on the efficiency of financial markets and could result in consumer harm.

AI industry discussions

As well as setting out the proposed guidance, the consultation report also summarises some of IOSCO's findings from industry discussions:

Firms implementing AI and ML mostly rely on existing governance and oversight arrangements to sign off and oversee the development and use of the technology. In most instances, the existing review and senior leadership-level approval processes were followed to determine how risks were managed, and how compliance with existing regulatory requirements was met. AI and ML algorithms were generally not regarded as fundamentally different from more traditional algorithms and few firms identified a need to introduce new or modify existing procedural controls to manage specific AI and ML risks.

Some firms indicated that the decision to involve senior leadership in governance and oversight remains a departmental or business line consideration, often in association with the risk and IT or data science groups. There were also varying views on whether technical expertise is necessary from senior management in control functions such as risk management. Despite this, most firms expressed the view that the ultimate responsibility and accountability for the use of AI and ML would lie with the senior leadership of the firm.

Some firms noted that the involvement of risk and compliance tends to focus primarily on the development and testing of AI and ML rather than on the whole lifecycle of the model (i.e., implementation and ongoing monitoring). Generally, once a model is implemented, some firms rely on the business line to oversee and monitor the use of the AI and ML. Respondents also noted that risk, compliance and audit functions should be involved throughout all stages of the development of AI and ML.

Many firms did not employ specific compliance personnel with the appropriate programming background to appropriately challenge and oversee the development of ML algorithms. With much of the technology still at an experimental stage, the techniques and toolkits at the disposal of compliance and oversight (risk and internal audit) currently seem limited. In some cases, this is compounded by poor record keeping, resulting in limited compliance visibility as to which specific business functions are reliant on AI and ML at any given point in time.

Areas of concern

IOSCO has identified the following areas of potential risk and harm relating to the development, testing and deployment of AI and ML: governance and oversight; algorithm development, testing and ongoing monitoring; data quality and bias; transparency; outsourcing; and ethical concerns.

Its proposed guidance consists of measures to assist IOSCO members in providing appropriate regulatory frameworks to supervise market intermediaries and asset managers that utilise AI and ML. These measures cover:

  • Appropriate governance, controls and oversight frameworks over the development, use and performance monitoring of AI and ML.
  • Ensuring staff have adequate knowledge, skills and experience to implement, oversee and challenge the outcomes of AI and ML.
  • Robust, consistent and clearly defined development and testing processes to enable firms to identify potential issues before they fully deploy AI and ML.
  • Appropriate transparency and disclosures to investors, regulators and other relevant stakeholders.

How the FCA regulates AI in the UK

For an idea of how AI in finance is currently regulated in the UK, read on:

The Financial Conduct Authority (FCA) deems it good practice to review how trading algorithms are used; develop appropriate definitions; ensure all activities are captured; identify any changes to algorithms; and apply a consistent methodology across the testing and deployment of AI and ML. The Markets in Financial Instruments Directive (MiFID II) requires firms to develop processes to identify algorithmic trading across the business. These can be either investment decision or execution algorithms, which can be combined into a single strategy. Firms are also required to have a clear methodology and audit trail across the business. Approval and sign-off processes should ensure a separation of validation and development, a culture of collaboration and challenge, and consistency with the firm's risk appetite. Once algorithms are deployed in live trading, firms must maintain pre-trade and post-trade risk controls and real-time monitoring of the algorithms, with the ability to stop an algorithm or a suite of algorithms centrally, a functionality commonly known as the kill-switch.
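
To make the kill-switch functionality more concrete, here is a minimal, purely illustrative Python sketch of a central registry that trading algorithms consult before sending orders, and through which an operator can halt a single algorithm, a suite, or everything at once. The class and method names are hypothetical and are not drawn from any FCA or MiFID II material.

```python
import threading

class KillSwitchRegistry:
    """Illustrative central kill-switch: algorithms check it before acting."""

    def __init__(self):
        self._lock = threading.Lock()
        self._halted_algos = set()    # individually halted algorithm IDs
        self._halted_suites = set()   # halted suites (groups of algorithms)
        self._global_halt = False     # firm-wide stop

    def kill_algorithm(self, algo_id: str) -> None:
        with self._lock:
            self._halted_algos.add(algo_id)

    def kill_suite(self, suite_id: str) -> None:
        with self._lock:
            self._halted_suites.add(suite_id)

    def kill_all(self) -> None:
        with self._lock:
            self._global_halt = True

    def may_trade(self, algo_id: str, suite_id: str) -> bool:
        """Every order-placing loop should call this before sending an order."""
        with self._lock:
            return not (self._global_halt
                        or algo_id in self._halted_algos
                        or suite_id in self._halted_suites)

# Hypothetical usage: an execution algorithm checks the registry on every cycle.
registry = KillSwitchRegistry()
if registry.may_trade("momentum-01", "equities-suite"):
    pass  # place the order via the firm's order management system
registry.kill_suite("equities-suite")  # an operator halts the whole suite centrally
```

In practice such a check would sit alongside, not replace, pre-trade risk limits and real-time monitoring.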

It is a best practice, but not a requirement, to have an independent committee to verify the completion of checks. However, under the SM&CR, a firm’s governing body would be expected explicitly to approve the governance framework for algorithmic trading, and its management body should identify the relevant Senior Management Function(s) with responsibility for algorithmic trading.

How to submit comments

Comments may be submitted by one of the three following methods on or before 26 October 2020. To help IOSCO process and review your comments more efficiently, please use only one method.

Important: All comments will be made available publicly, unless anonymity is specifically requested. Comments will be converted to PDF format and posted on the IOSCO website. Personal identifying information will not be edited from submissions.

  1. Email
  • Send comments to consultation-02-2020@iosco.org.
  • The subject line of your message must indicate ‘The use of artificial intelligence and machine learning by market intermediaries and asset managers’.
  • If you attach a document, indicate the software used (e.g., WordPerfect, Microsoft WORD, ASCII text, etc.) to create the attachment.
  • Do not submit attachments as HTML, PDF, GIF, TIFF, PIF, ZIP or EXE files.
  2. Facsimile Transmission

Send by facsimile transmission using the following fax number: + 34 (91) 555 93 68.

  3. Paper

Send 3 copies of your paper comment letter to:

Alp Eroglu
International Organization of Securities Commissions (IOSCO)
Calle Oquendo 12
28006 Madrid
Spain

Your comment letter should indicate prominently that it is a ‘Public Comment on The use of artificial intelligence and machine learning by market intermediaries and asset managers’.

For more information read our blog ‘AI in Financial Services.’

What happens next?

The consultation on the draft guidance closes on 26 October 2020. In the UK, the FCA is currently working with the Alan Turing Institute to look at the implications of the financial services industry deploying AI. Meanwhile, the European Commission has released its own guidelines for trustworthy AI and is expected to propose legislation in this area later in 2020.

EM Law specialises in technology law. Get in touch if you have any questions on the above.


Initial Coin Offering

Initial Coin Offering - Legal Aspects

An Initial Coin Offering (ICO) is a low-cost and time-efficient type of crowdfunding which is facilitated through the use of distributed ledger technology. For more information on distributed ledger technology and its most common form, blockchain, read our blog on the topic.

What is an Initial Coin Offering?

In much the same way that an initial public offering involves the issue of shares to investors in exchange for fiat currency, an initial coin offering involves the issue of transferable tokens to investors typically in exchange for cryptocurrency such as Bitcoin or Ether. Some tokens may resemble traditional securities such as shares or debt securities, while others may represent a right to access or receive future services. It is the legal status of such tokens and the cryptocurrency used to purchase them which needs to be explored.

Advantages and disadvantages of an Initial Coin Offering

The rights attaching to tokens vary widely. A key appeal of ICOs is that tokens are easily tradeable: assuming sufficient liquidity, investors can buy and sell tokens on cryptocurrency exchanges, unlike more traditional venture capital investments, which may not be easily traded.

Other benefits of ICOs compared to more traditional fundraising models are seen to include:

  • The ease and speed with which tokens can be issued and funds raised, in many cases without the use of intermediaries.
  • Low transaction and settlement costs.
  • A perceived lack of regulatory barriers.
  • For many issuers, the formation or augmentation of a wide and motivated user base of the underlying product or service.

Commonly cited disadvantages of initial coin offerings when compared to traditional fundraising models include:

  • The price volatility of the most popular cryptocurrencies. ICO issuers will commonly seek to exchange cryptocurrencies subscribed by investors into fiat currency following the ICO, therefore incurring substantial exchange rate risk. It may be prohibitively expensive or difficult to mitigate this risk effectively.
  • A lack of clarity regarding numerous legal issues relating to the underlying distributed ledger technology, including the enforceability of code-based smart contracts. As discussed in our blog, such uncertainty is, in the UK at least, likely to be overcome, although the position remains untested in court.
  • An uncertain and evolving regulatory position globally. Combined with the absence of any industry standardisation, this increases the advisory costs and slows the speed at which a compliant ICO may be carried out.
  • Cyber security risks, compounded by the irreversibility of many cryptocurrency transactions.

What are ICOs being used for?

The earliest ICOs were used to launch new cryptocurrencies but increasingly they have been used by early stage companies to fund the development of other projects or services and, in particular, the development of decentralised software applications that run on existing blockchain platforms, such as Ethereum.

However, an ICO can be executed by any company looking to issue tradeable rights to investors in exchange for capital, regardless of the sector in which it operates or the product that it wishes to develop. In September 2017, Kik, an established social media platform, raised approximately $98 million through an ICO of “Kin” tokens to support the development of its existing messaging ecosystem. It remains to be seen whether other non-blockchain centric businesses will use ICOs as a means of raising funds.

How do you launch an ICO?

To launch an initial coin offering, an issuer will generally produce a white paper, which is loosely analogous to the prospectus that a company is required to produce in connection with the admission of securities to trading on the Main Market of the London Stock Exchange. A subscriber subscribes for tokens by transferring consideration to a specified account, and in doing so is deemed to have accepted the terms and conditions applicable to that ICO. The tokens themselves are typically created, allocated and distributed through a pre-existing blockchain platform, such as Ethereum, in each case without requiring an intermediary.
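
Purely by way of illustration of the subscription mechanics described above (and not as a representation of any real blockchain platform, token standard or actual ICO terms), the following Python sketch records subscriptions against a hard cap and allocates tokens at a fixed rate once the sale closes. All names and figures are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TokenSale:
    """Toy model of an ICO: subscribers send consideration, tokens are then allocated."""
    tokens_per_unit: float            # e.g. tokens issued per 1 ETH subscribed
    hard_cap: float                   # maximum consideration accepted
    subscriptions: dict = field(default_factory=dict)
    closed: bool = False

    def subscribe(self, address: str, amount: float) -> None:
        if self.closed:
            raise RuntimeError("Sale closed: no further subscriptions accepted")
        if sum(self.subscriptions.values()) + amount > self.hard_cap:
            raise RuntimeError("Hard cap reached")
        # By transferring consideration, the investor is treated as accepting the ICO terms.
        self.subscriptions[address] = self.subscriptions.get(address, 0.0) + amount

    def close_and_allocate(self) -> dict:
        """Close the sale and return each subscriber's token allocation."""
        self.closed = True
        return {addr: amt * self.tokens_per_unit
                for addr, amt in self.subscriptions.items()}

# Hypothetical example: 1,000 tokens per ETH subscribed, capped at 5,000 ETH raised.
sale = TokenSale(tokens_per_unit=1_000, hard_cap=5_000)
sale.subscribe("0xInvestorA", 10)
sale.subscribe("0xInvestorB", 2.5)
print(sale.close_and_allocate())   # {'0xInvestorA': 10000.0, '0xInvestorB': 2500.0}
```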

Regulation of Initial Coin Offerings

A lack of regulatory barriers is seen by some participants as one of the primary attractions of carrying out ICOs. However, while there is no regulatory framework in the UK which is specific to ICOs, or which refers to the specific technology or terminology used in ICOs, it is a common misconception to say that all ICOs are unregulated. Issuers and their advisers must therefore consider carefully the applicability and effect of the full range of relevant legislation.

Regulatory perimeter

An initial coin offering may or may not fall within the Financial Conduct Authority's (FCA) regulatory perimeter depending on the nature of the tokens issued ('tokens' being the term used by the FCA to denote different types of cryptoasset), and the legal and regulatory position of each ICO proposition must be assessed on a case-by-case basis.

Although many ICOs will fall outside the regulated space (depending on how they are structured, such as exchange and utility tokens), some ICOs (such as security tokens) may involve regulated investments, and firms involved in an ICO may be conducting regulated activities (such as arranging, dealing or advising on regulated financial investments).

The FCA outlines perimeter issues relating to ICOs in CP19/3 on perimeter guidance on cryptoassets. It explains that the majority of tokens that are issued through ICOs to the market tend to be marketed as utility tokens (non-regulated). The perimeter guidance being proposed by the FCA will focus on this area to make sure that firms are aware when their tokens may be considered securities, and therefore fall within the FCA’s regulatory perimeter. The FCA explains that it will be paying increasing attention, especially where those preparing ICOs attempt to avoid regulation by marketing securities as utility tokens.

Other points to note about the regulation of ICOs include:

  • The features of some ICOs parallel those of initial public offerings (IPOs), private placements of securities, crowdfunding or even collective investment schemes (CISs), and each ICO needs to be examined individually to determine which regulations apply.
  • Some tokens may also constitute transferable securities and therefore may fall within the FCA's prospectus regime.
  • Digital currency exchanges that facilitate the exchange of certain tokens should consider whether they need to be authorised by the FCA to be able to deliver their services.

Risks if ICOs fall outside the regulatory perimeter

  • Unregulated space: Most ICOs are not regulated by the FCA and many are based overseas.
  • No investor protection: You are extremely unlikely to have access to UK regulatory protections like the Financial Services Compensation Scheme or the Financial Ombudsman Service.
  • Price volatility: Like cryptocurrencies in general, the value of a token may be extremely volatile – vulnerable to dramatic changes.
  • Potential for fraud: Some issuers might not have the intention to use the funds raised in the way set out when the project was marketed.
  • Inadequate documentation: Instead of a regulated prospectus, ICOs usually only provide a ‘white paper’. An ICO white paper might be unbalanced, incomplete or misleading. A sophisticated technical understanding is needed to fully understand the tokens’ characteristics and risks.
  • Early stage projects: Typically ICO projects are in a very early stage of development and their business models are experimental. There is a good chance of losing your whole stake.

FMLC paper on ICOs

The Financial Markets Law Committee (FMLC) published a paper outlining issues of legal uncertainty arising from ICOs in July 2019. The FMLC outlines how existing laws apply to ICOs and looks at some of the challenges for regulators, providers and market participants. These challenges include a lack of international and regional harmonisation relating to the categorisation of tokens issued in ICOs, as well as in their regulatory treatment.

FCA consumer warnings on ICOs

The FCA has warned consumers of the risks of ICOs. The FCA warns that ICOs are very high-risk, speculative investments due to, among other things, their price volatility, lack of access to UK regulatory protections such as the Financial Services Compensation Scheme (FSCS) or the Financial Ombudsman Service (FOS), potential for fraud, and the lack of adequate documentation.

Financial crime risks of ICOs

The FCA wrote to the CEOs of banks in June 2018 warning of the risk of abuse of cryptoassets, which arises from the potential anonymity and the ability to move money between countries that cryptoassets allow. Banks were warned to take reasonable and proportionate measures to lessen the risk that they might facilitate financial crimes enabled by cryptoassets.

An often unregulated area

Unregulated initial coin offerings are not considered safe investments by the FCA and should therefore always be treated with caution. On the other hand, they offer businesses a quicker and easier way to raise capital. If you are looking to invest in an initial coin offering you should always be aware of these risks. If you are a business looking to raise capital through an ICO, then the extent to which you may be regulated needs to be considered.

ICOs are an area likely to develop alongside the recent increase in legal certainty granted to cryptoassets and smart contracts under English law. As it stands, however, ICOs are yet to be addressed in such a direct manner.

EM Law are experts in technology law. Please contact us if you have any questions on the above.


Cybersecurity

Cybersecurity – Overview of Some Legal Aspects

Cybersecurity is an area rife with regulation and energetic regulators. Having strong cybersecurity measures in place is essential for any business that uses computers and the internet to store information, i.e. most businesses.

What do we mean by cybersecurity?

The term "cybersecurity" refers to the need to protect the following from unlawful use, access or interference:

  • Information and data that is stored electronically (rather than only in physical form).
  • The communications networks which underpin societal, business and government functions.

Reasons for ensuring cybersecurity

Businesses are faced with numerous and varied cybersecurity threats. One leading antivirus software provider reported that it identified over 60,000,000 new forms of malware in the third quarter of 2018 alone. The persons responsible for threats are varied and include computer vandals, organised cybercriminals, "hacktivist" groups and nation states.

Potential consequences

The results of a cyberattack can be devastating for a business. It can result in:

  • Contractual and tortious liability towards individuals seeking compensation for damage and/or distress caused by the unlawful acquisition, disclosure and/or use of their personal information.
  • Prosecution or regulatory sanctions being imposed for failing to comply with legal obligations to keep the information and networks secure or, in some cases, to respond appropriately in the event of a cyberattack. Sanctions may include fines as well as the "naming and shaming" resulting from publication of the authority's investigations into businesses that failed to comply with their statutory obligations.
  • Reputational damage flowing from adverse media coverage, the publication of investigatory reports by regulatory authorities, and where the business is required by law to notify its customers and users of the cyberattack.

Managing cybersecurity risk and compliance

Businesses should be alert to the cybersecurity risks posed by commercial transactions that will involve a third party introducing goods or services into (or being provided with access to) the business's secure IT environment. A business's own cybersecurity obligations will include managing risk within its supply chain and outsourcing to service providers.

These risks can be managed by, for example, implementing various technical and organisational precautions and procedures, inserting appropriate provisions into commercial contracts, obtaining adequate insurance, identifying applicable laws and regulations and ensuring compliance.

Practical steps towards compliance

The steps a business should take to comply with its cybersecurity obligations depend on the nature of the business, its circumstances and the industry in which it operates. There is potential overlap between the different regulatory regimes.

Full compliance with legal obligations and best practice guidance may require a business to implement sophisticated security measures and risk management procedures. However, most security breaches (including some of the most high-profile and significant breaches) are the result of businesses failing to implement relatively basic security precautions and procedures, for example:

  • Not encrypting data, or storing encryption keys on vulnerable systems (see the illustrative sketch after this list).
  • Using outdated software and systems (containing flaws or vulnerabilities), failing to install fixes, patches and upgrades, retaining redundant systems and servers and not implementing software updating policies.
  • Retaining data for longer than necessary. Data that a business no longer requires may still be valuable to cybercriminals, creating a potential liability for a business rather than an asset.
  • Failing to carry out background checks and vetting on employees with access to data and systems.
  • Not providing sufficient staff training and failing to implement policies relating to employee-data interaction (such as authorised data access or bring your own devices (BYOD) policies).
  • Failing to securely destroy or dispose of data or equipment containing data (or verify destruction by subcontractors).
  • Using removable media (such as USB drives and CDs) or portable computers (such as laptops and tablets) in an insecure manner (for example, not scanning media for viruses before introducing new hardware into a secure environment or failing to encrypt data).
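
On the first point in the list above, encrypting data at rest need not be complicated. The following minimal Python sketch assumes the widely used third-party `cryptography` library is available; it is an illustration only, and the key management comment is the part that matters most in practice.

```python
# Requires the third-party 'cryptography' package: pip install cryptography
from cryptography.fernet import Fernet

# Generate the key once and keep it in a secrets manager or HSM,
# never alongside the encrypted data or in source control.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer email: jane.doe@example.com"
token = cipher.encrypt(record)      # ciphertext that is safe to store at rest
original = cipher.decrypt(token)    # only possible with access to the key

assert original == record
```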

Ascertaining which regulations apply

Every business should assume it has a legal duty to implement effective information risk management procedures, of which cybersecurity measures are an essential part. In particular, there are few businesses that do not handle any personal data (whether in relation to employees, customers or other individuals). At a minimum, businesses should seek to comply with the obligations set out in the General Data Protection Regulation ((EU) 2016/679) (GDPR) and Data Protection Act 2018 (DPA 2018), in particular:

  • Sixth data protection principle (Article 5(1)(f), GDPR): personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.
  • Articles 32 to 34, GDPR: both the controller and the processor are required to ensure a level of security appropriate to the risk, taking into account factors such as the costs of implementation and the context of the processing, and there are obligations to report personal data breaches.
  • Controller and processor contracts (Article 28, GDPR): specific requirements apply as to what must be included in a contract between a controller and a processor.

OESs and RDSPs

In addition, certain operators of essential services in the UK, and certain relevant digital service providers who have their head office, or have nominated a representative, in the UK (OESs and RDSPs, respectively) are subject to additional cybersecurity and incident notification requirements under the Network and Information Systems Regulations 2018 (SI 2018/506) (NIS Regulations).

OESs are organisations that operate services deemed critical to the economy and wider society. They include critical infrastructure (water, transport, energy) and other important services, such as healthcare and digital infrastructure.

RDSPs are organisations that provide specific types of digital services: online search engines, online marketplaces and cloud computing services. To be an RDSP, you must provide one or more of these services, have your head office in the UK (or have nominated a UK representative) and be a medium-sized enterprise or larger.

There is a general small business exemption for digital services: if you have fewer than 50 staff and an annual turnover and/or balance sheet total below €10 million, you are not an RDSP and the NIS Regulations do not apply. However, if you are part of a larger group, you need to assess the group's staffing and turnover figures to see whether the exemption applies.

Generally speaking, OESs and RDSPs have the following main obligations under the NIS Regulations:

  • Under regulation 10, an OES must take appropriate and proportionate:
    • technical and organisational measures to manage risks posed to the security of the network and information systems on which their essential service relies; and
    • measures to prevent and minimise the impact of incidents affecting the security of those network and information systems, with a view to ensuring the continuity of the essential service,
    in each case having regard to any relevant guidance issued by their competent authority.
  • Under regulation 11, an OES must notify their competent authority without undue delay and no later than 72 hours after becoming aware of any incident which has a significant impact on the continuity of the essential service which that OES provides, having regard to:
    • the number of users affected by the disruption of the essential service;
    • the duration of the incident; and
    • the geographical area affected by the incident.
  • Under regulation 12, RDSPs must identify and take appropriate and proportionate measures to manage the risks posed to the security of the network and information systems on which they rely to provide, within the European Union, an online marketplace, online search engine or cloud computing service.
  • Under regulation 12, RDSPs must also notify the ICO without undue delay and in any event no later than 72 hours after becoming aware of any incident having a substantial impact on the provision of any of the digital services mentioned above, providing sufficient information to enable the ICO to determine the significance of any cross-border impact.

It will be important for any organisation that identifies as an OES to follow published guidance from its designated competent authority as it is released.

Other regulatory frameworks

The Information Commissioner's Office (ICO), which is responsible for enforcing the GDPR and the Data Protection Act 2018 in the UK, as well as the NIS Regulations against relevant digital service providers, has also published extensive cybersecurity guidance for organisations falling under its remit.

In addition to the above, special consideration must be given to businesses that:

  • Handle particularly sensitive information.
  • Carry out certain activities (such as merchants that process payments).
  • Provide certain services (such as financial services or publicly available electronic communications services).
  • Operate as part of a regulated profession or industry (for example, legal or accounting services).

They are likely to be subject to additional regulation and be required to comply with certain industry standards. These businesses should be able to obtain advice and details of their obligations (for example, guidance on mandatory obligations and best practice) from their relevant regulatory authority, professional body or industry group.

Implementing cybersecurity measures, policies and procedures

There are several different ways in which the risk of cybercrime can be reduced:

  • Technical measures: installing firewalls and antivirus software, limiting employee access rights and controlling document retention.
  • Practical measures, for example:
    • a business should have policies in place that enable it to react properly in the event of an incident. These policies should address issues such as information disaster recovery and backup, response to a security breach (including notification) and remedial steps; and
    • a business's policies and measures will both need to be kept under review. Audits and risk assessments should be carried out from time to time and the robustness of policies and measures should be tested regularly. Where appropriate, this may involve engaging independent third parties (such as penetration testers).

For small and medium-sized enterprises (SMEs) unsure how to proceed, the UK government's '10 Steps to Cyber Security' guidance provides a useful starting point. For consultancy assistance with achieving the recommended security baselines, you could discuss your needs with our friends at Tantivy or other specialist security firms.

EM Law are experts in technology law and data protection law. Please get in touch if you need any help with cybersecurity compliance or if you have any other legal issues.


AI In Financial Services - Latest Developments

AI in financial services is not new. In fact, financial services was one of the first sectors to deploy Artificial Intelligence at scale. The trading activities of many financial institutions are now predominantly algorithmic, using technology to decide on pricing and when to place orders.

AI in Financial Services - Some Developments

With increased data and reporting volumes and advanced algorithms, the potential for AI in financial services to be further harnessed and developed is endless. For example:

  • Anti-Money Laundering (AML). The Financial Conduct Authority (FCA), in a 2018 speech, identified the potential use of AI to combat money laundering and financial crime.
  • Asset management. In the asset management industry, the use of AI is a growth area. Areas using AI include risk management, compliance, investment decisions, securities trading and monitoring, and client relationship management. An FCA speech on the subject suggests that investment managers may well have to increase their technology spend to keep up with AI developments.

Bank of England speech

The pace at which firms are adopting AI in financial services varies. In November 2018, the Bank of England (BoE) published a speech on the application of advanced analytics reporting that the scale of adoption of advanced analytics across the industry is relatively slow. The speech identified the increased cost to firms in the short-term of increasing levels of automation, machine learning and AI, as well as the likely impact of such innovation on execution and operational risks, which may make businesses more complex and difficult to manage. This leaves space for plenty of business opportunity and innovation.

Financial Services Artificial Intelligence Public-Private Forum

The FCA and BoE have established the Financial Services Artificial Intelligence Public-Private Forum (AIPPF) to further constructive dialogue with the public and private sectors to better understand the use and impact of AI and machine learning (see AIPPF terms of reference published 23 January 2020). The forum builds on the work of the FCA and BoE, who published a joint report on Machine Learning (ML) in UK financial services in October 2019 based on 106 responses. Key findings include:

  • Two thirds of respondents already use ML in some form.
  • In many cases, ML development has passed the initial development phase and is entering more advanced stages of deployment. Deployment is most advanced in the banking and insurance sectors.
  • ML is most commonly used in AML and fraud detection, as well as in customer-facing applications (for example, customer services and marketing); a simplified illustration of the fraud detection use case follows this list. Some firms use ML in areas such as credit risk management, trade pricing and execution, as well as general insurance pricing and underwriting.
  • Regulation is not seen as a barrier to ML deployment. However, some firms stress the need for additional guidance on how to interpret existing regulations. The biggest reported constraints are internal to firms, such as legacy IT systems and data limitations.
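
As a simplified, hedged illustration of the fraud detection use case mentioned in the list above (and not a description of any firm's actual system), the sketch below uses scikit-learn's IsolationForest to flag anomalous transactions for manual review. The features, data and threshold are entirely hypothetical.

```python
# Requires third-party packages: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount, hour_of_day, merchant_risk_score]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(1_000, 3))
suspicious = np.array([[9_500, 3, 0.9], [7_200, 2, 0.8]])   # unusually large, late at night
transactions = np.vstack([normal, suspicious])

# Train an unsupervised anomaly detector and label every transaction.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
labels = model.predict(transactions)        # -1 = anomaly, 1 = normal

flagged = transactions[labels == -1]
print(f"{len(flagged)} transactions flagged for manual AML/fraud review")
```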

AI in Financial Services - FCA expectations

There has been little from the FCA in terms of guidance on how the use of AI complies with its rules. Like other forms of technology, the use of AI must not conflict with a firm's regulatory obligations, such as its obligation to treat customers fairly. The FCA has expressed concern, for example, that the use of AI in financial services might make it harder for vulnerable customers to obtain insurance cover if algorithms take into account characteristics that make it appear unviable to offer products and services to less affluent customers. Firms may therefore wish to ensure that they have systems and processes in place to monitor the impact of AI on their target customers. The use of AI also raises issues around accountability, particularly where firms rely on outsourcing arrangements.

Case-by-case basis

The FCA has said that it would approach potential harm caused by AI in financial services on a case-by-case basis. However, firms that deploy AI and machine learning must ensure they have a solid understanding of the technology and the governance around it, especially when considering ethical questions around data. The FCA wants boards to ask themselves what the worst thing is that can go wrong and mitigate against those risks. Indeed, an FCA Insight article on AI in the boardroom suggests that AI is principally a business rather than a technology issue. Boards therefore need to consider a range of factors: the need for new governance skills, ethical decision-making, explainability (do they understand how the AI operates?), transparency (customer consent for use of data), and the potentially changing nature of liability.

Some existing law and regulation applicable to AI in financial services

Misuse of data

Under GDPR, individuals have the right to know how their personal data is being used by AI. Financial institutions should be aware that GDPR (and section 168 of the DPA 2018) gives individuals the right to bring civil claims for compensation, including for distress, for personal data breaches.

Fairness, discrimination and bias

Principle 6 of the FCA's Principles for Businesses requires a firm 'to pay due regard to the interests of its customers and treat them fairly'. An AI system only learns from the data presented to it, applied on a one-size-fits-all basis, so if that data reflects historic bias, discriminatory outcomes are a real risk.
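
One simple way to keep an eye on this kind of outcome, sketched here for illustration only (it is not an FCA-endorsed test, and real fairness monitoring is considerably more involved), is to compare approval rates across customer groups and track the gap over time:

```python
def demographic_parity_difference(decisions, groups):
    """decisions: 1 (approved) / 0 (declined); groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical data: a large gap would prompt further investigation of the model.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_difference(decisions, groups)
print(rates, f"approval-rate gap = {gap:.2f}")   # gap of 0.40 between groups A and B
```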

Anti-competitive behaviour

The UK Competition and Markets Authority (CMA) has already used its powers to restrain technology deployed with an anti-competitive objective. In August 2016, it fined Trod, an online seller of posters and frames, for using software to implement an agreement with a competitor not to undercut each other's prices.

Systems and control

Firms should be aware that the FCA can require them to produce a description of their algo-trading strategies within just 14 days, and that it recommends that firms have a detailed “algorithm inventory” setting out coding protocols, usages, responsibilities and risk controls.

Liability in contract and tort

AI usage (whether by a firm’s suppliers or by the firm with its customers) may give rise to unintended consequences and may expose institutions to claims for breach of contract or in tort, and test the boundaries of existing exclusion clauses. Firms need to assess whether their existing terms and conditions remain fit for purpose, where AI is concerned.

AI in Financial Services - Case Law

The courts are due to consider in mid-2020 the question of where liability lies when an investor suffers substantial losses at the hands of an AI-powered trading or investment system in Tyndaris v VWM. While the outcome of the dispute will principally depend on the facts, the judgment may include wider comments on the use of AI systems by funds or investment managers.

Industry reports on AI

In an October 2019 report, TheCityUK concluded that AI-specific regulation was not currently appropriate. The report highlights best practices relating to fairness, transparency and consumer protection, data privacy and security, governance, and ecosystem resilience. It also sets out a suggested AI policy approach for the UK government and regulators.

UK Finance has prepared a report in conjunction with Microsoft on AI in financial services. A key takeaway from the report is the need to recognise AI as more than a tool and to consider the wider cultural and organisational changes necessary to become a mature AI business. As firms start to embed AI into core systems, they also need to consider the implications of AI beyond the technical, including the wider impact on culture, behaviour and governance. Part Two of the report is intended to help firms determine where AI is the right solution and how to identify high-value use cases, looking more deeply at analysing the business case. The report states that firms must consider how to supplement existing governance frameworks, or create new ones, to ensure that the ethics, appropriateness and risk of AI are in balance with the benefits it promises and the firm's corporate standpoint.

The future is here

AI is becoming more and more incorporated into everyday business practice. With regard to AI in financial services a key takeaway from current regulations is that having a strong understanding of how AI is used within your business and for what purposes can make compliance less of a headache.

EM Law specialises in technology law. Get in touch if you have any questions on the above.


SaaS Contracts

SaaS Contracts – Things To Look Out For

SaaS contracts are increasingly relevant as SaaS is now the model that most software suppliers are looking to supply through. This article provides some insight into the kind of things you need to consider if you are dealing with SaaS contracts.

What is SaaS?

SaaS is the practice of accessing software over the internet, as opposed to downloading and installing it on your computer. Before SaaS, businesses and consumers would buy a physical copy of the software that required installation.

Remember the plastic-wrapped boxes that held the software’s CD-ROM? SaaS eliminates the need for that thanks to the internet. Businesses and consumers simply subscribe to access externally hosted software. As long as they have a connection to Wi-Fi, customers can access the software from anywhere, on any computer.

Example

Take your email server, for example. You want to know that you will continue to send and receive emails without needing to fiddle with your email settings or worry about updates. Imagine if your email server went down because you forgot to update it and you went days without email. That is simply not an option in today's marketplace. If you use a SaaS product like Microsoft 365 as your email provider, the chances of something going wrong are very small.

Why use SaaS?

With SaaS, you don’t need to install and run software applications on your computer (or any computer).

Everything is available over the internet when you log in to your account online.

You can usually access the software from any device, anytime (as long as there is an internet connection).

The same goes for anyone else using the software. All your staff will have personalised logins, suited to their access level.

Issues

  • One-to-many model: SaaS customers do not get bespoke services.
  • Reliance on online connectivity. The internet is fast becoming a single point of failure for many organisations: how long could a company operate without it?
  • Compliance issues, such as cybersecurity, data protection and encryption.
  • Risk that the customer fails to control usage, or that storage requirements (and costs) grow unchecked.

Commercial setting

Although most famously deployed on a business-to-consumer basis, SaaS is also widely used on a business-to-business model. If you are looking to offer SaaS to consumers or businesses, or are a business looking to subscribe to a SaaS offering, then being aware of the negotiating positions on SaaS contracts is crucial.

Negotiation Checklist – What to ask for and consider in SaaS Contracts?

  • A detailed description of the services being offered.
  • How is data being processed? This is important when looking to comply with data protection law: who has access to the personal data that the SaaS provider is collecting? Who is responsible in the event of a data breach? For the purposes of the GDPR, the customer (i.e. the person using the software and putting data into it) is usually considered the data controller. The obligations of data protection law fall mainly on the data controller and therefore, usually, on the customer of a SaaS provider. A data controller should only allow a third party to process data on its behalf if that third party has appropriate organisational and technical measures in place to protect the data, so appropriate data processing provisions need to be set out in the SaaS Contract.
  • The right of access to the application. Who does and does not have the right to use the application? For example, is the charging structure in the SaaS Contract based on a per person subscription fee or can any of the customer’s staff access the service in return for the customer paying a (significant) upfront annual licence fee?
  • The provision of updates, maintenance and integration of third-party tools. Depending on the context, the customer may want to see some response time commitments if things go wrong as well as service availability commitments. If the SaaS product is for consumers such provisions are unlikely to be included in the SaaS Contract. If the service is fairly niche and for businesses rather than consumers then response time commitments for fixing faults are more likely to be found or negotiated into the SaaS Contract.
  • Intellectual property rights. The supplier of a SaaS application and its licensors will own the intellectual property rights in the software, whilst the customer will own the data that is input into the software.
  • Term and termination. Clear language in the SaaS Contract is needed so there is no doubt about the length of the subscription term. Is the term automatically renewed? If so, can prices increase in future?
  • Limitation of liability. Generally, the supplier's liability is limited to the total subscription fees paid over 12 months, but this can vary. Customers must be mindful of the kinds of losses they may incur if things go wrong and check whether the limitation being imposed by the supplier is fair.
  • Scalability of pricing options, i.e. how can you get, or offer, the best price for the size of business you are likely to attract?
  • Rights of third parties? If the customer needs its consultants as well as its employees to be able to access the applications then check that the SaaS Contract allows this. What about staff belonging to other members of the same group of companies as the customer?

The Present and Future of SaaS

Since its early beginnings, the SaaS industry has continued to grow, evolve, and thrive. It’s an equal-opportunity industry, with SaaS tools coming from startups, tech giants, and every company size in between. Even traditional software companies now have SaaS offerings to stay relevant and on-trend.

The SaaS industry is also home to quite a few unicorns (private companies valued at $1 billion or more). While the tech sector dominates lists of unicorns in general, SaaS tools are beginning to gain more and more real estate. Some SaaS companies with unicorn status are Dropbox, Domo, and Slack.

In the future, SaaS companies are expected to adapt their offerings based on significant tech trends. For example, artificial intelligence is likely to play a major role as SaaS companies begin to incorporate AI into their tools, ultimately increasing functionality and improving the user experience. Artificial intelligence is often seen in the form of chatbots, but it will also be useful in automating manual tasks and personalizing SaaS offerings.

Cybersecurity is also a vital aspect of the future of SaaS. There is always a risk to storing sensitive data in the cloud, but consumers’ concerns and hesitations have pushed SaaS companies to take necessary security measures.

These enhancements are delivered through encryption algorithms, identity management and anti-malware – three measures that work to protect software, and its customers, from data breaches and viruses.

We are SaaS Experts

EM Law’s technology lawyers have helped clients with a wide range of SaaS Contracts both nationally and internationally.

Please contact us if you have any questions on SaaS Contracts or you can find out more about SaaS arrangements by checking our other blogs on cloud services or Software as a Service.


COVID-19 Contact Tracing Apps - Privacy Concerns

Contact Tracing Apps – Privacy Concerns

Contact tracing apps are being developed by governments and private enterprises to fight COVID-19. Their design and use however raise serious privacy concerns.

How do contact tracing apps work?

Contact tracing apps are mobile software applications designed to help identify individuals who may have been in contact with an infected person.

In the context of COVID-19, this means that anyone with the app who has been diagnosed with the virus, or has self-diagnosed, can enter that information into the app. Then, via Bluetooth, anyone who has come, or comes, into contact with that diagnosed or self-diagnosed person will be notified by the app. If you are notified of such contact, you can take steps to self-quarantine or otherwise manage your exposure. This all relies on individuals carrying their mobile phones at all times with Bluetooth activated, which has cast doubt on the apps' potential effectiveness.

Why adopt contact tracing apps?

By tracing the contacts of infected individuals, testing them for infection, treating the infected and tracing their contacts in turn, public health authorities aim to reduce infections in the population. Diseases for which contact tracing is commonly performed include tuberculosis, vaccine-preventable infections like measles, sexually transmitted infections (including HIV), blood-borne infections, some serious bacterial infections, and novel infections (e.g. coronavirus).

Privacy issues with contact tracing apps

Numerous applications are in development, with official government support in some territories and jurisdictions. Several frameworks for building contact tracing apps have been developed. Privacy concerns have been raised, especially about systems that are based on tracking the geographical location of app users.

Less intrusive alternatives include the use of Bluetooth signals to log a user's proximity to other mobile phones. On 10 April 2020, Google and Apple jointly announced that they would integrate functionality to support such Bluetooth-based apps directly into their Android and iOS operating systems.

These Bluetooth signals offer greater privacy protection because they operate on an anonymous basis. Someone who comes into contact with an infected person therefore learns nothing beyond the fact of that contact, rather than receiving unnecessary information such as a persistent identifying code or the name of the infected person.
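
To give a flavour of how anonymous, Bluetooth-based matching can work, here is a much-simplified conceptual sketch in Python. It is not the Apple/Google Exposure Notification protocol or the NHSX design; it simply shows random, rotating identifiers being exchanged and matched on the device, so that neither party learns anything beyond the fact of contact.

```python
import os
from datetime import datetime

def new_rolling_identifier() -> bytes:
    """A random, short-lived identifier that reveals nothing about the device owner."""
    return os.urandom(16)

class ContactTracingClient:
    def __init__(self):
        self.broadcast_log = []   # identifiers this phone has broadcast, with timestamps
        self.observed = set()     # identifiers heard from nearby phones over Bluetooth

    def broadcast(self, now: datetime) -> bytes:
        ident = new_rolling_identifier()          # rotated frequently to prevent tracking
        self.broadcast_log.append((now, ident))
        return ident

    def hear(self, identifier: bytes) -> None:
        self.observed.add(identifier)

    def check_exposure(self, published_infected_ids: set) -> bool:
        """Matching happens on the device; only random identifiers are ever compared."""
        return bool(self.observed & published_infected_ids)

# Hypothetical encounter: Alice and Bob exchange identifiers; Bob later reports a diagnosis.
alice, bob = ContactTracingClient(), ContactTracingClient()
alice.hear(bob.broadcast(datetime.now()))
infected_ids = {ident for _, ident in bob.broadcast_log}   # Bob uploads only his identifiers
print(alice.check_exposure(infected_ids))                  # True: Alice can be notified
```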

ICO’s blog

The Information Commissioner (IC), Elizabeth Denham, has published a blog setting out data protection considerations for organisations using contact tracing and location data technologies in connection with the COVID-19 pandemic.

While the IC is maintaining a pragmatic and flexible approach to data protection compliance during the pandemic, the IC reminds organisations that the public must remain assured that their data will be processed lawfully in connection with the use of technology to track the spread of COVID-19 by individuals.

To help achieve the IC's twin goals of maintaining public trust and promoting compliance, the blog includes a series of questions for organisations to bear in mind when using new technologies to combat the pandemic. It focusses on compliance with the data protection by design and by default requirements under Article 25 of the General Data Protection Regulation ((EU) 2016/679) (GDPR), the data minimisation and storage limitation principles under Article 5(1), and data subject rights generally under the GDPR.

The IC asks organisations to consider the following questions:

  • Have you demonstrated how privacy is built into the processor technology?
  • Is the planned collection and use of personal data necessary and proportionate?
  • What control do users have over their data?
  • How much data needs to be gathered and processed centrally?
  • When in operation, what are the governance and accountability processes in your organisation for ongoing monitoring and evaluation of data processing, that is to ensure it remains necessary and effective, and to ensure that the safeguards in place are still suitable?
  • What happens when the processing is no longer necessary?

The IC extends an offer to assist organisations with these processes, by providing guidance and tools to consider data protection requirements in the planning and development phase for projects adopting new technology, and by performing an audit of the measures and processes implemented by an organisation when the project has become operational.

In practice

The Information Commissioner's Office (ICO) has published a discussion document setting out its expectations and recommended best practice for the development and deployment of COVID-19 contact tracing apps.

The document was published in advance of Information Commissioner Elizabeth Denham's and Executive Director of Technology and Innovation Simon McDougall's appearance before the Human Rights Joint Committee on 4 May 2020 and is intended to help NHSX and other developers of contact tracing apps comply with information provision and data protection by default and design requirements under the GDPR.

Key principles and recommendations for developers to consider include:

  • Performing a Data Protection Impact Assessment (DPIA) prior to implementation of the app and refreshing the DPIA whenever the app is updated during its life cycle.
  • Being transparent with users and providing them with clear information about the purpose and design choices for the app and the benefits the app seeks to deliver for both users and the NHS. Users must also be fully informed about the data to be processed by the app before the processing takes place.
  • Complying with data minimisation, retention and security principles under Articles 5(1) and 32 of the GDPR.
  • Ensuring participation is voluntary and users can opt in and out of participation and exercise their data subject rights (including rights of access, erasure, restriction and rectification) with ease. This could involve the developer providing users with a dedicated privacy control panel or dashboard.
  • Relying on valid user consent or an alternative lawful basis under Article 6(1) of the GDPR for the processing of personal data where this is necessary and more appropriate, such as performance of a task in the public interest (particularly where an app is developed by or on behalf of a public health authority).
  • The collection of personal data relating to health shall be allowed only where the processing is either based on explicit consent, is necessary for reasons of public interest in the area of public health, is for health care purposes, or is necessary for scientific research or statistical purposes.

The ICO will keep these recommendations under review and remains open to feedback.

What does this mean for businesses?

If contact tracing apps are designed in line with ICO guidance, businesses looking to monitor employees can have confidence in asking employees to use such apps. In all likelihood the NHSX app will be used in the UK and therefore businesses should be aware of how that app is being developed.

NHSX development

On 12 April 2020, Matthew Hancock, the Secretary of State for Health and Social Care and the politician directly responsible for the NHS, announced that the NHS was developing a mobile app that will allow for contact tracing. The app is being developed by NHSX, a specialist unit responsible for digital transformation in the NHS.

In response to the Information Commissioner’s approach, NHSX has stated that they are prioritising security and privacy in all stages of the app’s design. They are planning to publish their security designs and the source code of the app to demonstrate this. Furthermore, they have confirmed that all data gathered by the app will only be used for NHS care, management, evaluation and research, and that individuals will be able to delete the app and their data at any point.

Constraints

There are two key constraints for contact tracing apps to be effective:

  • 80 per cent or more of the UK population who own a smartphone need to download it; and
  • the UK needs to test more than 100,000 people a day.

This is because contact tracing relies on large numbers of citizens being involved in the effort.

Encouraged technology

The UK Information Commissioner, Elizabeth Denham, has been supportive of the development of contact tracing apps. On 17 April she stated that “data protection laws [should] not get in the way of innovative use of data in a public health emergency – as long as the principles of the law (transparency, fairness and proportionality) are applied. The same approach applies to the use of contact tracing applications.”

Even though they are encouraged, organisations developing contact tracing apps and using them need to be conscious of the privacy issues.

If you have any questions on technology law, data protection law or on any of the issues raised in this article please get in touch with one of our data protection and technology lawyers.


Smart Contracts Law

Smart Contracts – Legally Enforceable?

Smart contracts aren't necessarily well written! The word 'smart' is used in a number of contexts to describe the use of information and communication technologies (ICT) to increase operational efficiency. 'Smart cities', for example, use these technologies to share information with the public and improve both the quality of government services and citizen welfare. In a legal context, 'smart contracts' use similar ICT to execute legally binding agreements. Smart contracts are going to become increasingly prevalent as we leave it to machines to do things for us. Why bother ordering pizza when the fridge can do it for us?

What is a smart contract?

A smart contract is computer code that can automatically monitor, execute and enforce legal agreements.

Back in 1996, Nick Szabo (an American legal scholar, cryptographer and programmer) defined smart contracts as "digital protocols for information transfer that… automatically execute a transaction once the established conditions are met".

A smart contract is not a contract in the sense of a document, or exchanges of words, letters, emails or other digital communications that can be evidence of legally binding rights and duties. As the term is used in the context of today's blockchain applications, a smart contract is a set of programmatic rules that can be fully or partly executed in response to user-defined independent input and without the need for further intervention from the parties, the outcome being achieved via computer code.

See our blockchain blog here to understand the context in which smart contracts are generally used at present.

In practice

With the emergence of Bitcoin in 2009 as an application of blockchain, smart contracts developed as self-executing contracts that were used to exchange money for bitcoin and to exchange bitcoin for other goods on the market.

The technology can therefore be compared to a vending machine, where the legal understanding between two parties is initially based on their willingness to transact under a self-executing system.

Different to other automated transactions?

Smart contracts are considered to be partially or wholly self-executing, meaning they are not administered or controlled by any third party. This differs from the automated transactions that already take place online, such as automated bank payments, standing orders, and buying and downloading music online, because a third party usually retains control over such a transaction and a computer program is usually run on the third party’s server to facilitate it.

What can smart contracts be used for?

Here are a few examples:

  • Automatic payment of a customs levy upon delivery of goods to a port.
  • Escrow arrangements, under which release of the escrowed asset can be effected upon a designated trigger event occurring.
  • Settlement of an insurance policy upon the occurrence of an insurable event.
  • Automatic ordering based on supply levels (this has been in existence for some time).
  • Royalty management and distribution, involving the automated payment of agreed royalties to IP and other asset rights holders.
  • Payment for goods upon delivery.

Is a smart contract a legal contract?

A smart contract is not per se a legal contract. However, provided that all of the requirements for the formation of a legal contract are met (offer/acceptance, valid consideration, an intention to create legal relations and certainty), then there is no reason why, in principle, a smart contract is not capable of being a legal contract, in the same way that binding contracts can be formed electronically through online applications.

A key feature of a smart contract is that, once the code is entered onto the blockchain, it is immutable and so (unless designed otherwise) once a trigger event has been met, its performance cannot be avoided or varied by either party unilaterally. Accordingly, in a practical sense, a smart contract does not need enforcing under English law: self-enforcement is built in.

In any case, the majority of present uses of smart contracts are likely to involve a wider and more traditional framework. For example, access to most blockchain applications will be via a web- or app-based interface which will require the users' acceptance of legal terms and conditions of use. Additionally there will usually be traditional off-chain agreements which govern the overall legal relationship between the various stakeholders and participants as negotiated between them.

UK Jurisdiction Taskforce legal statement on smart contracts

Some legal certainty on these issues has been added by the publication of a statement on the legal status of cryptoassets and smart contracts, following a public consultation by the UK Jurisdiction Taskforce (UKJT), a taskforce of the Law Society’s LawTech Delivery Panel. On 18 November 2019, the Chancellor of the High Court, Sir Geoffrey Vos, in his capacity as Chair of the UKJT, launched the findings of the UKJT's consultation, set out in a document entitled Legal statement on cryptoassets and smart contracts.

A key finding was that smart contracts are capable of satisfying the requirements of English law principles of contract formation and can therefore be interpreted and enforced using ordinary and well-established legal principles. It follows that such contracts can be enforced by the courts.

The UKJT hopes that these findings will bring some legal certainty in this area, thus improving market confidence. The findings are, as yet, untested in the English courts, although they are likely to have persuasive authority. Ultimately, stakeholders are likely to want to see appropriate legislation and regulation to address these issues.

If you are looking to use smart contracts within your business, or to build a relationship with a business that does, then the following needs to be considered:

Testing and auditing a smart contract

Testing a smart contract should comprise quality assurance and user acceptance testing by lawyers, business parties, technical personnel and third party auditors to ensure that the code, among other things:

  • Meets the relevant legal and/or contractual requirements.
  • Responds to the user inputs properly and as expected.
  • Does not contain any bugs or errors.
  • Does not contain any security flaws or other vulnerabilities which can be exploited.
  • Performs the functionality for which it was developed in an efficient manner.

A host of new and existing businesses offer smart contract audit services.
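By way of illustration only, the following is a simplified Python sketch of the kind of automated check such testing might include, confirming that a piece of smart contract logic responds to user inputs as expected. The function and figures are invented for the example and are not taken from any real platform.

    # Illustrative sketch of a unit test for smart contract logic (hypothetical example).

    def release_escrow(escrow_amount: int, trigger_event_occurred: bool) -> int:
        """Return the amount released to the beneficiary once the trigger event occurs."""
        return escrow_amount if trigger_event_occurred else 0

    def test_release_escrow():
        # The code should respond to user inputs properly and as expected.
        assert release_escrow(1000, trigger_event_occurred=True) == 1000
        assert release_escrow(1000, trigger_event_occurred=False) == 0

    test_release_escrow()
    print("all checks passed")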

However, smart contract code is written by humans (at least for the time being) and, therefore, it is highly likely that regardless of the robustness of any testing, the code will be discovered to have some type of bug or error, or there may be improvements which could be made to the code.

Issues

Dealing with void contracts could pose a challenge, since a smart contract, once executed, cannot be reversed. The parties could agree further transactions reversing the result of the void transaction, but the void transaction itself would remain on the smart contract’s blockchain.

The most pertinent issue will be aligning the legal layer, i.e. the agreement of the parties involved, with the technical layer, i.e. the computer code implementing certain parts of the agreement. If these two layers are not aligned properly, a smart contract might generate more legal issues than it solves, given its immutable and irreversible nature.

Legal certainty increasing

Given the recent statement by the UKJT, smart contracts are becoming more legally enforceable and hence, when entering into one, businesses and individuals need to be more wary of their implications. The problem can be that smart contracts are self-executing and therefore easily overlooked.

On a more positive note, such certainty should increase market confidence for those looking to offer financial and commercial services online, especially those using blockchain.

If you have any questions on the issues raised in this article please get in touch with one of our technology lawyers.


Blockchain lawyers

Blockchain Usage and Legal Issues

Blockchain technology is most notably used to facilitate cryptocurrencies and financial services. Its potential to be used for wider commercial purposes is being explored by a number of industries. This article provides a short explanation of how blockchain systems work, why they are used and some of the legal issues they raise.

A Short History of Blockchain

Blockchain was invented in 2008 to serve as the public transaction ledger of the cryptocurrency bitcoin. The invention of blockchain for bitcoin made it the first digital currency to solve the double-spending problem.

The double-spending problem is the potential flaw in a digital cash scheme in which the same single digital token can be spent more than once. Unlike physical cash, a digital token consists of a digital file that can be easily duplicated or falsified.

Blockchain solved this problem by making it impossible to double-spend digital goods that are being stored on blockchain, such as paying twice with bitcoins. It does this by requiring a more sophisticated system of authentication and through the innovative use of cryptography and distributed ledger technology.
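As a rough illustration of the cryptographic chaining involved, the Python sketch below links each record to a hash of the previous one, so that altering an earlier entry invalidates everything after it. It is a deliberate simplification (real blockchains also rely on consensus between many participants, digital signatures and more), and the transactions shown are hypothetical.

    import hashlib
    import json

    # Simplified sketch of a hash-chained ledger; hypothetical data, no consensus or signatures.

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(chain: list, transaction: str) -> None:
        previous_hash = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"transaction": transaction, "previous_hash": previous_hash})

    def chain_is_valid(chain: list) -> bool:
        # Each block must reference the hash of the block before it.
        return all(
            chain[i]["previous_hash"] == block_hash(chain[i - 1])
            for i in range(1, len(chain))
        )

    ledger = []
    add_block(ledger, "Alice pays Bob 1 BTC")
    add_block(ledger, "Bob pays Carol 1 BTC")
    print(chain_is_valid(ledger))                       # True
    ledger[0]["transaction"] = "Alice pays Bob 2 BTC"   # attempted retroactive alteration
    print(chain_is_valid(ledger))                       # False - the tampering is detectable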

Distributed ledger technology

The expression "distributed ledger technology" (DLT) is used to refer to technologies that enable secure validation, recording and sharing of data in a database. This means that copies of the database can be kept and maintained simultaneously by many people or organisations and no copy is the master or lead copy. As such, the database is said to be "distributed" or sometimes "decentralised".

Therefore "distributed" means that the database is stored and maintained across multiple servers by multiple people, rather than one central database controlled by one person. Whilst "ledger" refers to the fact that the database is a record of many individual transactions.

The expression DLT is often used interchangeably with blockchain. However, what is referred to as blockchain is just one type of implementation of DLT. This blog will use the term blockchain interchangeably with DLT.

Trustless

Traditional ledgers can be altered retroactively, but this is practically impossible with blockchain, since each block (or journal entry) contains a cryptographic hash of the prior one. Moreover, blockchain provides irrefutable proof of any prior transaction and a clear allocation to an individual ID at any given time.

Therefore blockchain technology is often described as “trustless” in the sense that there is no need to trust (or indeed know) the counterparty to your blockchain transaction.

By cutting transaction costs and providing accessibility, blockchain’s decentralised and open nature allows people to trust each other and to transact directly peer-to-peer, reducing the need for intermediaries and third parties.

Bitcoin – Blockchain Applied

The technologies underpinning the Bitcoin network enable each bitcoin to be represented by a unique set of public and private keys, making the holder of those keys the sole person that is able to transfer that bitcoin. Using blockchain to store the record of transactions in bitcoin means every subsequent acquirer of a bitcoin can have full confidence that the transferor in fact controls that bitcoin without the need for any third party verification or concerns that the bitcoin has been copied or already transferred to another person.
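To illustrate the role the key pair plays, the Python sketch below uses the third-party ‘ecdsa’ package (an assumption for the example; this is not Bitcoin’s actual transaction format, although Bitcoin does use the secp256k1 curve) to show that only the holder of the private key can produce a signature that anyone can verify against the public key.

    # Illustrative key-pair sketch using the third-party 'ecdsa' package (pip install ecdsa).
    # A simplified demonstration only, not Bitcoin's actual transaction format.
    from ecdsa import SigningKey, SECP256k1, BadSignatureError

    private_key = SigningKey.generate(curve=SECP256k1)   # known only to the holder
    public_key = private_key.get_verifying_key()         # shared with everyone

    transfer = b"transfer 1 BTC to address X"
    signature = private_key.sign(transfer)

    # Anyone can check that the transfer was authorised by the key holder.
    print(public_key.verify(signature, transfer))        # True

    # A tampered message fails verification.
    try:
        public_key.verify(signature, b"transfer 10 BTC to address Y")
    except BadSignatureError:
        print("invalid signature - transfer rejected")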

Since the introduction of the Bitcoin network in 2009, many other public blockchain networks utilising digital tokens as a method of payment ("cryptocurrencies") have been introduced, such as Ethereum and Litecoin.

Examples in commercial situations

Marine insurance:

In May 2018, Danish shipper Maersk announced it was insuring its vessels' hulls and machinery using the Insurwave platform, a joint venture between Ernst & Young and software provider Guardtime. Using Insurwave, Maersk provides real-time data on its fleet of vessels which providers use to rate and price insurance and reinsurance products through DLT. Endorsements and invoices are able to be issued automatically to reflect any necessary changes as the vessel's risk profile changes over time.

Trade finance:

The "we.trade" trade finance platform (see we-trade.com) launched in July 2018 is the first commercial blockchain platform developed by a consortium of financial institutions. The platform enables corporates to conclude trade finance transactions with management, tracking and payment information made available to all relevant parties on a real-time basis and automating final payment based on the fulfilment of agreed conditions. The platform is intended to be opened up to further international financial institutions with the goal of creating a global trade platform.

Future Uses

Supply chain applications generally:

A blockchain can be built up from the moment the goods are manufactured, and indeed before manufacture, as it could chart the origin of the parts of which the goods are composed. This sort of tracking already exists, but blockchain would increase certainty and confidence.

IP rights management:

Ownership of IP rights can be stored on a blockchain, making it easier to identify when a certain right was first applied for, registered, licensed, or commercially used, and parties to whom any right has been transferred.

Internet of things:

The Internet of Things (IoT) essentially connects devices to the internet or to one another. IoT sensors facilitate the remote monitoring of patients' well-being, stock levels and machine components, and can even allow machines to be operated remotely.

Blockchain could enable IoT devices to communicate securely with one another as to the status of a device or good.

Land registries:

Last year it was reported that Kenya, the Republic of Georgia, and Russia are test piloting projects whereby real property records will be managed on a public blockchain database. In 2018, HM Land Registry announced that it was working with R3's Corda platform to investigate potential uses of blockchain.

Personal identity:

It has been suggested that blockchain service providers might offer individuals a service that stores securely their personal data and enables individuals to authenticate their identity (or certain identity credentials, such as being over 18) without having to disclose their personal data itself.

Construction:

The UK construction industry is highly regulated and construction projects create a significant volume of diverse data, which often needs to be shared or certified by multiple stakeholders. It follows that blockchain lends itself well to the construction industry. While there do not appear to be any current examples of the use of blockchain in the construction industry, articles (see for example medium.com: The Impact of Blockchain Technology on the Construction Industry (19 February 2017)) have explored the possibilities.

Blockchain Legal Issues

If you are looking to use blockchain within your business and/or work with businesses that do you should have an understanding of the legal issues raised by the technology.

Regulation and compliance

Depending on the application, there is likely to be a range of local, national, international and supranational law and regulation which must be catered for in any implementation of blockchain.

The key areas where specialist legal advice is likely to be required are:

  • KYC/AML checks. If the application requires customer due diligence, the methods of conducting KYC and AML checks will need to comply with relevant legislation and guidance. In a cross-border transaction context, this may require extremely complex analysis, as regulatory requirements and local custom vary from jurisdiction to jurisdiction, and the application will need to be able to meet the requirements of every relevant jurisdiction.
  • Competition law. Where the application involves a consortium or group of actual or perceived competitors, or otherwise involves the sharing of a significant data pool, competition authorities may require notification, consultation or approval. Early analysis of any potential issues under competition law is essential.
  • Tax. The tax treatment of digital tokens native to, or otherwise used on, blockchain networks remains uncertain. The blockchain application needs to be considered in the context of the business's existing tax and financial position as well as from the end user's personal tax perspective.

Governing law

Which law governs, and whether to rely on the courts or arbitration, are questions which should be considered at the outset. Considerations for this should include what is customary for the particular application, where the activities and assets of the business are located, where the participants are located and where the end users are located.

IP rights

It is important to understand who will own the IP rights in any blockchain application; such rights may be created across a variety of parties.

Naturally, if the application utilises a public chain network or a third party blockchain platform, the owner(s) of the IP rights in the underlying technology (or the components of that technology) will assert their rights over such technology through the relevant terms and conditions or other agreements covering the use of that technology. In these circumstances, usual contractual assurances, such as warranty and/or indemnity protection, in respect of IP should be sought.

As within many other areas of computer programming, sample and example source code for blockchain applications is widely available on the Internet through code-sharing websites such as GitHub. Direct copying of such code for a proprietary blockchain application must be analysed so as to reduce the risk of copyright infringement claims. This will include looking at the permission or licence granted by the code author. Similarly, usage of open-source software must be carefully monitored, and the relevant licence terms of its use must be understood before incorporating open-source code into any commercial blockchain application.

Blockchain and Data protection

Completion of a full data mapping of flows, storage and usage of data is imperative to identify whether any personal data will be processed by a proposed blockchain solution. Data mapping will also be important to help establish whether any personal data is to be transmitted within or outside the network and/or between participants or other third parties. Each collection and transmission of personal data will require a lawful basis under relevant local regulation.

Technology offering opportunity

Blockchain technology has created a platform upon which businesses can look to economise and improve the reliability of online financial and commercial activity. As with most things, though, the legal issues raised by such technology should always be considered.

If you have any questions on blockchain or smart contracts, please get in touch with one of our technology lawyers.


Cryptoassets Law

Legal Status Of Cryptoassets In The UK

Cryptoassets are often depicted as inhabiting an online wild west. A place where people can thrive off uncertainty and a lack of regulation. This is starting to change in the UK. A Legal Statement made by the UK Jurisdiction Taskforce of the LawTech Delivery Panel (UKJT) and subsequent case law has introduced greater cryptoasset certainty under English Law.

Legal Status of Cryptoassets in the UK

In the ten years since Bitcoin was born, there has been a proliferation of cryptoassets. These include Litecoin, Ethereum, Ripple, Zcash and many more.

The term cryptoassets, rather than cryptocurrencies, is preferred by the UK regulatory authorities as it is more neutral and captures the broader range of tokens that are not just designed to act as a means of exchange (to which cryptocurrency typically applies, such as Bitcoin).

Although the area is plagued with a lack of accepted definitions, in this blog, "cryptoasset" is used in the sense of the Financial Conduct Authority's (FCA) category of "exchange token" as distinct from "security tokens" or “utility tokens” which provide similar legal rights and obligations to traditional securities and money.

Definition of “exchange token”

In broad terms, the FCA has created a framework by categorising cryptoassets based on their intrinsic structure, as well as their designed use. On its webpage on cryptoassets, the FCA explains that cryptoassets such as Bitcoin are classified as unregulated “exchange tokens”, which:

"… are usually decentralised and designed to be used primarily as a medium of exchange. We sometimes refer to them as exchange tokens and they do not provide the types of rights or access provided by security or utility tokens, but are used as a means of exchange or for investment."

More broadly, cryptoassets are considered to be "virtual assets", as defined by the Financial Action Task Force (FATF) in their report Guidance for a risk based approach to virtual assets and virtual asset providers. Unsurprisingly perhaps, related regulation has been introduced at a national level; in some jurisdictions, at least.

Various concerns have been raised at the supranational level about what these virtual assets mean for the global financial system and data protection authorities; Libra, for example, being the latest, and so far greatest, crypto-bogeyman.

Uncertainty

What is surprising is that, despite the speed with which virtual assets have infiltrated the financial system (for example, there are already futures and exchange-traded notes in Bitcoin), quite what a cryptoasset is remains, in England and in many other jurisdictions, unclear and unresolved.

The fundamental question of "what is a Bitcoin?" is far from new, and has been addressed recently both in the High Court and in the City of London Law Society's (CLLS) submissions to the UKJT in relation to cryptoassets, distributed ledger technology (DLT) and smart contracts.

These submissions have now been responded to by a UKJT statement.

The Legal Statement

A legal statement published by the UKJT in November 2019 says that cryptoassets are capable of being property which can be owned. While this Legal Statement is not binding, it will give market participants greater certainty around crypto transactions. Recent case law has enhanced such certainty.

The UKJT did not seek to define the term cryptoasset, saying that it would not be a useful exercise to do so given the rapid development of the technology. Instead it focused on identifying the key features of a cryptoasset, and on answering key questions about ownership, transfer and whether cryptoassets constitute “property” under English law. Further:

  • Despite being property, cryptoassets are not things in possession because they are “virtual” and cannot therefore be possessed.
  • The novel and distinctive features possessed by some cryptoassets (intangibility, cryptographic authentication, decentralisation, rule by consensus) do not disqualify them from being property.
  • Cryptoassets are not disqualified from being property as pure information, or because they might not be classifiable either as things in possession or things in action.
  • A private key is not in itself to be treated as property because it is information.

In summary, the UKJT concluded that cryptoassets have all the legal characteristics of property and are, as a matter of English legal principle, to be treated as property.

AA v Persons Unknown [2019] EWHC 3556

In October 2019, a hacker bypassed the firewall and anti-virus software of a Canadian insurance company, encrypting its computer systems. The unknown hacker demanded $1,200,000 in equivalent Bitcoin in exchange for the decryption software.

The claimant, a cybercrime insurer of the Canadian company, paid the ransom by purchasing Bitcoin and transferring the amount into a Bitcoin wallet. Following recovery of the encrypted files, the claimant took steps to recover the ransom amount, which was traced back to a wallet linked to and controlled by Bitfinex, a cryptoexchange operated by two British Virgin Island entities.

The claimant subsequently issued various proceedings seeking, amongst other things, a proprietary injunction over the Bitcoins that had been traced back to the wallet controlled by Bitfinex.

Judgment

Given that proprietary injunctions can only be granted over property, the court had to rule on whether or not Bitcoins constituted property under English law.

The court considered in detail the UKJT's "compelling" analysis (Legal Statement) of the proprietary status of intangible assets, which concluded that, although having many novel and unique characteristics, cryptoassets may be objects of property rights. The Court emphatically approved this approach.

Having accepted that cryptoassets constituted property, the Court determined that all other requirements for a proprietary injunction were met and granted the injunction. This was the first time the courts have applied, and accepted, the analysis set out in the Legal Statement.

The International Perspective

Cryptoassets are usually not confined to a single jurisdiction, so the international picture remains a relevant consideration. That picture is very fragmented, even on a regional basis. The two questions to consider are whether cryptoassets are recognised as legal tender and whether their exchange is legal. In China, for example, both cryptoassets and cryptoexchanges are illegal. In the US, some states have treated cryptoassets as legally substitutable for currency, while cryptoexchange is legal, subject to state regulation.

Data Protection

Libra offers an example of how regulators will seek to protect data in the context of cryptoassets, especially when such assets are linked to other forms of personal data, such as a social media account.

Data protection authorities from around the world, including the UK's Information Commissioner's Office (ICO) and the European Data Protection Supervisor, have called for more openness about the proposed Libra cryptocurrency and infrastructure. On 5 August 2019, they issued a statement to Facebook and 28 other companies behind the project (the Libra Network) asking for details of how customers' personal data will be processed in line with data protection laws.

Libra is a project to create a global cryptocurrency using blockchain technology, with Facebook as a founding member (through its subsidiary, Calibra). The Information Commissioner, Elizabeth Denham, noted that:

"Facebook's involvement is particularly significant, as there is the potential to combine Facebook's vast reserves of personal information with financial information and cryptocurrency, amplifying privacy concerns about the network's design and data sharing arrangements".

The statement notes that the ICO “are supportive of the economic and social benefits that new technologies can bring, but this must not be at the expense of people's privacy".

It asks a set of detailed, non-exhaustive questions of the Libra Network, for example, about the provision of clear and transparent information, informed consent, data sharing, use of processors, data privacy impact assessments, and how privacy policies and standards will work consistently across multiple jurisdictions.

The signatories note that they will work together to assert strong privacy safeguards at a global level.

Caution still advised

UKJT’s Legal Statement and subsequent case law should encourage continued innovation and improve market confidence for those in the global financial services market. As can be seen from the ICO statement, it is in regulators’ interest to encourage the economic advantages of cryptoassets. But caution is still advised.

Investors should be wary of the international uncertainty in the regulation of cryptoassets and consider seeking specific legal advice for individual projects. If a business is looking to offer services which involve cryptoassets then data protection should also be considered.

If you have any questions on the issues raised in this article please get in touch.


Web scraping lawyers London

Web Scraping – Legal Issues

Web scraping (or data scraping) is more prevalent than you think. It is estimated that more than 50% of all website visits are for data scraping purposes. This is why users are often asked to go through a series of tests to prove they are not an unwanted bot. There are plenty of new businesses with large datasets or web scraping capabilities which look attractive to investors, given the nature of online marketing and the appeal of tools which offer businesses new and innovative ways to collect and process data. Being aware of the legal issues is of paramount importance before becoming involved with, or setting up, such businesses. This includes understanding licences to datasets and possible infringements of database and intellectual property rights.

What is web scraping?

The process of using software to harvest automatically, or scrape, publicly available data from online sources. It has many purposes, including recruitment, sentiment analysis, assessing credit risk, identifying trends, marketing and sales. It is also permitted, to a certain extent, under bespoke licences. In the public sector, datasets are often made available under the Open Government Licence (OGL), inspired and re-highlighted by an EU directive, the INSPIRE Directive (2007/2), which required public authorities to make spatial information datasets publicly available.
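By way of illustration, the following is a minimal Python sketch of how scraping is typically done, using the widely used third-party ‘requests’ and ‘BeautifulSoup’ libraries and the standard library’s robots.txt parser. The URL and selectors are placeholders, and any real use would also need to respect the site’s terms, licences and data protection law, as discussed below.

    # Minimal web scraping sketch (hypothetical URL and selectors).
    # Requires third-party packages: pip install requests beautifulsoup4
    import requests
    from bs4 import BeautifulSoup
    from urllib import robotparser

    BASE_URL = "https://www.example.com"      # placeholder site
    TARGET_URL = BASE_URL + "/listings"       # placeholder page to scrape

    # A basic courtesy check: does robots.txt allow automated access to this page?
    robots = robotparser.RobotFileParser()
    robots.set_url(BASE_URL + "/robots.txt")
    robots.read()

    if robots.can_fetch("*", TARGET_URL):
        response = requests.get(TARGET_URL, headers={"User-Agent": "example-bot"}, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # Harvest whatever publicly visible elements are of interest, e.g. headings.
        headings = [h.get_text(strip=True) for h in soup.find_all("h2")]
        print(headings)
    else:
        print("robots.txt disallows scraping this page")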

In the news

Elections in Brazil have provided an example of how marketing companies could potentially abuse web scraping software. It was alleged that political parties used software to gather phone numbers from Facebook, which were then used to create WhatsApp groups and spread fake news. Brazil’s electoral court is to investigate whether this undermined the legitimacy of the elections.

In the UK, the investigation of Cambridge Analytica and Facebook by the Information Commissioner’s Office (ICO) has put data scraping under public scrutiny. Facebook was fined the maximum of £500,000 for two breaches of the Data Protection Act 1998 for not adequately safeguarding users’ personal data. When reflecting on the investigation, Elizabeth Denham, the UK Information Commissioner, called for an “ethical pause” to allow Government, Parliament, regulators, political parties, online platforms and the public to reflect on their responsibilities in the era of big data before there is greater expansion in the use of new technologies.

Businesses should therefore consider what the legal implications may be if they intend to scrape data. If operating under a licence to scrape data, a business should understand the scope of such licence and, if personal data is involved, whether the activity complies with data protection laws. If no licence exists then scraping data may infringe copyright and database rights. If the website you wish to scrape has an acceptable use policy or other similar terms and conditions attached to it, the chances are that any scraping activity will breach that policy or conditions.

A recent case in the UK has explored the extent of licences and database rights when applied to web scraping.

77m Ltd v Ordnance Survey Ltd [2019] EWHC 3007 (Ch)

The High Court found a geospatial address dataset creator liable for database right infringement and in breach of a number of licences.

The claimant, 77m, created a dataset called Matrix of the geospatial co-ordinates of all residential and non-residential addresses in Great Britain, to which it wished to sell access. It had created Matrix by combining large amounts of data from various datasets. The data at issue derived from the defendant, Ordnance Survey (OS). 77m did not contract with OS but with Her Majesty's Land Registry (HMLR) and Registers of Scotland (RoS). It also accessed data including addresses and geospatial co-ordinates made public by Lichfield District Council (LDC) under the Open Government Licence (OGL) (Lichfield data). HMLR, RoS and LDC licensed the relevant data from OS.

Before looking at database rights, the court had to decide whether 77m had acted within the terms of the licences; if they did, then 77m’s activities in relation to OS’s datasets would be shielded from database right infringement claim; if they did not, then 77m would remain exposed to the infringement claim.

77m had extracted data under the terms of a number of licences. It was found that in many instances 77m had gone beyond the behaviour permitted by those licences. Under the OGL, the court deemed the use of publicly available data to create software lawful, where the data was not then sold or included in the software itself. In most instances, however, 77m’s use of the data to specify geospatial co-ordinates was in breach of the licences.

The court then went on to consider whether 77m’s activity infringed database rights. First, it was critical to assess whether or not the database in question was subject to such rights. The Database Directive (EU), implemented in the UK in 1997, states that protection shall be granted to the maker of a database who shows that there has been, qualitatively and/or quantitatively, a substantial investment in either the obtaining, verification or presentation of the contents. The court ruled that Ordnance Survey had clearly made such an investment when putting the database together. The High Court judge, Mr Justice Birss, specifically pointed to the investment that went into verifying new addresses as they came into Ordnance Survey’s database, which in recent years had involved operating expenditure of £6 million per annum.

The way in which 77m used the database was then put into question. The important distinction here is between extraction and consultation of the data within the database, where extraction would be an infringement of database rights. Some muddled case law from the ECJ made the question laborious. Put simply, consultation has been defined as being limited to a person merely reading data on a screen, where the only other medium to which the data is transferred is the person’s brain, whereas extraction involves transferring data to a medium other than the person’s brain, such as downloading the data onto your own computer.

Therefore 77m’s use of data on such a vast scale and for commercial purposes was always going to amount to an extraction and thus an infringement. The court made clear, however, that in some instances data could be consulted for a commercial purpose. But a user who took all or part of a database’s contents and transferred them to another medium so that they could use them, appropriated to themselves a substantial part of the investment that went into creating the database and was therefore clearly in breach of database rights. Database rights are not only about protecting the data but also about the work that went into compiling the data and synthesising it.

This case highlights the need to be aware of the licences a company has in place to use data, the scope of such licensing and, if there is no licence or the licence has been breached, whether database rights could protect the database owner.

Web scraping things to consider

Below is a list of things to consider before you scrape data or before you buy a business that has been scraping data:

  • Check the scope of the licences to scrape data, and to store and use that data.
  • If there is no licence in place then a business should consider whether the scraped data is subject to copyright and/or database rights.
  • If no licence exists you could then also check the website’s acceptable use policy and/or terms and conditions. If they explicitly forbid scraping or contain other content restrictions, this may enable the website owner to sue for breach of contract. Although there is no clear precedent on whether website terms and conditions form binding contracts in the UK, it is worth assuming they could; the Irish High Court recently ruled that such terms and conditions could form a binding contract. Even if there is no acceptable use policy and/or terms and conditions, it should be noted that such a website may still be subject to copyright and/or database rights.
  • Check whether the target business you want to purchase uses a third party to scrape or store data and, if so, their contractual arrangements.
  • Legal positions differ by country, even between European countries. This is important to be aware of especially when storing data from one nation and making it available to another.
  • Check if personal data is involved and therefore if GDPR / Data Protection Act 2018 / other data protection laws are applicable.

The US perspective on Web Scraping

A recent case involved LinkedIn and HiQ, a small data analytics company that used automated bots to scrape information from public LinkedIn profiles. The Ninth Circuit Court of Appeals ruled in favour of HiQ, implying that data scraping of publicly available information from social media websites is permitted. LinkedIn has expressed an intention to escalate the case to the Supreme Court, so the position may yet change.

In the US, similarly to the UK, data scrapers may find themselves on the receiving end of legal action under the following regimes:

  • Intellectual property: Scraping data from websites may infringe intellectual property rights. In 2013 a federal court ruled that a software-as-a-service company, Meltwater U.S. Holdings, which offered subscribers access to scraped information about news articles, had been acting illegally. Such companies are often referred to as ‘news aggregators’. The news provider whose data had been scraped sold licences to many companies; without one, and copying 0.4% to 60% of each article, Meltwater was deemed to have had a ‘substantial’ negative effect upon the potential market for, or the value of, the copyrighted work. Obtaining a licence before scraping data in the US is therefore advised, although, as the LinkedIn v. HiQ case mentioned above suggests, it may still be possible to scrape publicly available information from social media sites without a licence.
  • Contract: In the US, if a website user is bound by the Website’s terms of service and causes damage by breaching those terms, the user may be liable for breach of contract.
  • The Computer Fraud and Abuse Act: This provides a civil cause of action against anyone who accesses a computer without authorisation, as well as providing for criminal offences. Although courts have come to differing conclusions, it has generally been ruled that if a scraper uses technical steps, i.e. specialised and complex methods, to circumvent protections to data on websites then the scraper can become liable under the act.
  • Data protection: The US does not currently have comprehensive data privacy legislation at the federal level. At state level there are plenty of statutes that mandate certain privacy-related rights, but most do not broadly regulate the collection and use of personal data. There are exceptions: California recently passed a state law which regulates data privacy. Coming into effect in 2020, it requires certain companies collecting personal data to disclose how such data will be used and to allow consumers to opt out of data collection. Data scrapers who collect such personal data in California could therefore be found liable if they do not disclose the use of such data and offer an opt-out option.

Final Thoughts

Most businesses aren’t in the business of web scraping; most business owners or directors aren’t even aware of what web scraping is. However, it’s something to be aware of. Maybe with this awareness you now want to make sure that your website has an acceptable use policy or other security measures in place. If you buy data, you should think about how that data was collected. If you are buying a business, you should include checks in your due diligence and appropriate warranties in the share purchase agreement to protect yourself from buying a business that collected data unlawfully.

If you have any questions on the points raised above please contact one of our technology lawyers.