Agile Software Development

Agile Software Development - A Legal Perspective

Agile Software Development came to the fore as a concept in 2001, when seventeen software developers gathered in Snowbird, Utah, each with a shared vision of how software development could be improved for supplier and customer alike. They distilled their views in the Manifesto for Agile Software Development, which, among other values, called for “customer collaboration over contract negotiation”. While a potentially wince-inducing statement for the conventional lawyer, Agile Software Development has nonetheless proved popular (it should not be forgotten, however, that such alternative management styles had been experimented with since the 1950s – most notably in Project Mercury, the first United States human spaceflight programme).

Software development is often a creative and exciting time for any business. Sadly (but importantly), it is the job of the lawyer to bring things down to earth, so whilst Agile Software Development may well be the best model for the platform that you want built, it is important to understand the process and its risks.

Waterfall method

In order to understand the conceptual basis for Agile Software Development, it is useful to know what it opposes. The first formal description of the waterfall model is often cited as Winston W. Royce's 1970 article, Managing the Development of Large Software Systems, although the term ‘waterfall’ was not explicitly used there. Notably, Royce presented the model as a flawed, non-working one. It is this waterfall model to which Agile Software Development is opposed.

Here is how the waterfall method usually pans out:

Requirements specification – Design – Coding – Testing and Error Detection – Integration – Deployment – Maintenance.

This is how a product is usually made for a customer: the customer specifies what it would like, the supplier creates the product and the product is tested. Once the product passes the tests, it is integrated into the customer’s system, tested again and then goes live. Having gone live, the supplier usually provides support and maintenance services to fix any defects in the software.

Agile Software Development - Flexibility

Potential disadvantages of the waterfall method include the difficulty for customers of defining their requirements clearly at the outset, and the fact that in many cases it does not easily accommodate changes to those requirements during the project. In contrast to the waterfall method, an Agile Software Development project does not fix detailed demands for the end product at the outset, although overall project scope and goals are agreed. Flexibility is at the core of agile projects, which acknowledge that a customer’s demands and priorities may change during the course of the project.

Agile Software Development - Method

Typical features of the agile method include:

  • The goal is to deliver functionality and business value to the customer. The solution to the customer's business goals and needs does not have to be defined from the outset; initially, only the goals and needs themselves are important.
  • The project is divided into a number of sprints, each with its own set of priorities and goals. Each sprint ends with an approval process in which the customer assesses the work delivered.
  • Planning and review meetings occur at the start and end of each sprint meaning close communication is maintained between the two parties.
  • The customer will reprioritise its needs from time to time, based on its ongoing assessment, and communicate this to the supplier. This may mean that some of the original requirements may become redundant over time.


Although Agile Software Development projects can run on their own terms, there is generally a set of positions and relevant terminology used:

  • Product owner – customer’s representative who acts as the first point of communication with the supplier. Key functions include understanding the business needs and organising the development tasks. They must be present at sprint planning and review meetings.
  • Development team – typically supplier personnel. They are responsible for enacting the development sprints.
  • ScrumMaster – focuses on the development team, its progress, removal of obstacles and quality assurance. Typically a member of the supplier’s personnel.
  • Stakeholders – representatives of the customer’s management.
  • Product vision – an explanation of what needs to be developed, focusing on business goals and targeted benefits, rather than technical solutions.
  • Product backlog – a prioritised list of the customer’s business requirements that are to be developed during the project term. These can be reset at any time with new ones added and old ones deleted.

Importance of the product backlog

Maintenance of the product backlog (described above) is a key part of the agile process. The business requirements contained in it and the priorities accorded to them steer the development of the software and testing/approval procedures. These needs are generally described in the form of user stories. Customers should therefore ensure that any mandatory data protection law principles or cybersecurity obligations are contained within the product backlog.
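For readers who want to picture the mechanics, the behaviour of a prioritised, re-orderable backlog can be sketched in a few lines of Python. This is purely illustrative – the class names, story wording and priority scheme are invented for this example and do not come from any particular agile tool:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str     # the business requirement, expressed as a user story
    priority: int  # lower number = higher priority

@dataclass
class ProductBacklog:
    stories: list = field(default_factory=list)

    def add(self, story: UserStory) -> None:
        self.stories.append(story)

    def remove(self, title: str) -> None:
        # requirements can be deleted once they become redundant
        self.stories = [s for s in self.stories if s.title != title]

    def prioritised(self) -> list:
        # the customer's current priorities steer the next sprint
        return sorted(self.stories, key=lambda s: s.priority)

backlog = ProductBacklog()
backlog.add(UserStory("As a shopper, I can pay by card", priority=2))
backlog.add(UserStory("Personal data is encrypted at rest (GDPR Art. 32)", priority=1))
backlog.add(UserStory("As a shopper, I can save a wish list", priority=3))

# Reprioritisation: the wish list becomes urgent, card payment is dropped
backlog.stories[2].priority = 1
backlog.remove("As a shopper, I can pay by card")

for story in backlog.prioritised():
    print(story.priority, story.title)
```

Note that the data protection requirement sits in the backlog alongside the business features, with an explicit priority – which is exactly why customers should insist that mandatory legal obligations are recorded there rather than left to informal understandings.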

Agile Software Development Scrum

The software development term ‘scrum’ was first used in a 1986 paper titled The New New Product Development Game. The term is borrowed from rugby, where a scrum is a formation of players, and is illustrative of teamwork. Scrum is an agile framework used for complex projects (not to be confused with extreme programming, a separate agile framework). Here are its key components:

  • Sprint – a timeboxed effort in which development takes place.
  • Sprint planning – establishes sprint goals.
  • Daily scrum – each day during a sprint, the team holds a daily scrum, preferably at the same place and time and lasting no longer than fifteen minutes. The scrum covers what each team member has contributed to previous sprints and is contributing to the present or a future one.
  • Sprint review and retrospective – these demonstrate the work from the current sprint and improve processes, respectively.
  • As another sprint begins, the product owner and development team select further items from the product backlog and begin work again.

Legal remedies for a dissatisfied customer

In a waterfall-type agreement the customer can rely upon damages, termination or remedial work in the case of defects or delays. In an Agile Software Development project it can be difficult to determine whether there is actually any delay or defect, because the flexibility of the process means that either can easily be buried behind reprioritisation or vague notions of acceptability. A delay could be recast as a deprioritisation in the product backlog and therefore not really a delay at all, while the lack of an agreed definition of acceptance could allow a supplier to argue that no defect exists. The informality of decision-making can also make it difficult to gather the relevant evidence in the event of a perceived breach of contract.

Time limits

Time, whilst fluid under an Agile Software Development process, is still bound to sprints and usually given a deadline. In a standard arrangement the customer would estimate the project duration and the number of sprints. It is, however, at the discretion of the supplier and customer whether or not to formally commit to such estimates, and this will need to be negotiated before an agreement is reached. Whilst setting time limits may be tempting, it may also undermine the whole purpose of an agile agreement by devaluing its flexibility. On the other hand, having some framework in place to preserve contractual remedies is by no means discouraged.


Here are some risk areas to be aware of:

  • Not meeting specific requirements within the original timeframe may not necessarily be seen as a breach of the agreement (as they may be re-arranged in the backlog).
  • Acceptance criteria of a sprint – if the agile software development contract is unclear around acceptance criteria the parties can be left arguing over whether an element of the build is completed or not. Acceptance criteria should be described in detail in any agreement so that there is no reliance on less formal descriptions that may be contained in user stories.
  • It is common for agile software development projects to go over budget. Leaving aside delays or quality issues that ramp up costs, if a customer walks into a project not really sure about what it wants, its eyes may well light up every time the developer asks, “hey, do you want the product to do this cool thing?”.
  • Everyone needs to be involved. Developing software using agile methodology can be an intense process, depending on the project. As it is a collaborative process, the parties need each other to be focused on the project and communicating with each other the whole time. If a key member of the team suddenly goes AWOL, the project may grind to a halt.

Agile Software Development in practice

Agile Software Development as a methodology has the potential to foster creativity and customer/supplier satisfaction, both on paper and in practice. It is important, however, to be aware of the potential legal pitfalls from the outset so that each party feels satisfied with the contents of an agreement before moving forward. It is also important to note that agile methods require both parties to commit a significant level of time and resources. If either party is remotely located or overburdened with other responsibilities (and so unable to focus on the agile project), the chances of success will be limited.

EM law specialises in technology and contract law. Get in touch if you need advice on Agile Software Development agreements or have any questions on the above.

Statement Of Objections To Amazon

EC Sends Statement Of Objections To Amazon - Big Data Law

On 10 November 2020, the European Commission announced that it had sent a statement of objections to Amazon as part of its investigation into whether Amazon's use of sensitive data from independent retailers who sell on its marketplace is in breach of Article 102 of the Treaty on the Functioning of the European Union (TFEU). The Commission has also opened a formal investigation into Amazon's allegedly discriminatory business practices.

What data is Amazon collecting?

Amazon has a dual role as a platform:

  • It provides a marketplace where independent sellers can sell products directly to consumers. Amazon is the most important or dominant marketplace in many European countries.
  • It sells products as a retailer on the same marketplace, in competition with those sellers.

As a marketplace service provider, Amazon has access to non-public business data of third party sellers. This data relates to matters such as the number of ordered and shipped units of products, the sellers' revenues on the marketplace, the number of visits to sellers' offers, data relating to shipping and to sellers' past performance, and other consumer claims on products, including activated guarantees.

Investigation into use of independent sellers’ data

In July 2019, the Commission announced that it had opened a formal investigation to examine whether Amazon's use of competitively sensitive information about marketplace sellers, their products and transactions on the Amazon marketplace constitutes anti-competitive agreements or practices in breach of Article 101 of the Treaty on the Functioning of the European Union (TFEU) and/or an abuse of a dominant position in breach of Article 102 of the TFEU.

Statement of objections to Amazon

The Commission has now sent a statement of objections to Amazon alleging that Amazon has breached Article 102 of the TFEU by abusing its dominant position as a marketplace service provider in Germany and France. Having analysed a data sample covering over 80 million transactions and around 100 million product listings on Amazon's European marketplaces, the Commission is alleging in its statement of objections to Amazon that:

  • Very large quantities of non-public seller data are available to employees of Amazon's retail business and feed into automated systems. Granular, real-time business data relating to third party sellers' listings and transactions on the Amazon platform is systematically fed into the algorithms of Amazon's retail business, which aggregate the data and use it to calibrate Amazon's retail offers and strategic business decisions (such as which new products to launch, the price of each individual offer, the management of inventories, and the choice of the best supplier for a product).
  • This acts to the detriment of other marketplace sellers as, for example, Amazon can use this data to focus its offers on the best-selling products across product categories and to adjust its offers in light of the non-public data of competing sellers.
  • The use of non-public marketplace seller data, therefore, allows Amazon to avoid the normal risks of retail competition and to leverage its dominance in the market for the provision of marketplace services in France and Germany, which are the biggest markets for Amazon in the EU.

The Commission's concerns are not only about the insights Amazon Retail has into the sensitive business data of one particular seller, but rather about the insights that Amazon Retail has about the accumulated business data of more than 800,000 active sellers in the EU, covering more than a billion different products. Amazon is able to aggregate and combine individual seller data in real time, and to draw precise, targeted conclusions from these data.

The Commission has, therefore, come to the preliminary conclusion that the use of these data allows Amazon to focus on the sale of the best-selling products. This marginalises third party sellers and limits their ability to grow. Amazon now has the opportunity to examine the documents in the Commission's investigation file, reply in writing to the allegations in the statement of objections and request an oral hearing to present its comments on the case.

Investigation into Amazon practices regarding the “Buy Box” and Prime label

The Commission states that, as a result of looking into Amazon's use of data, it identified concerns that Amazon's business practices might artificially favour its own retail offers and offers of marketplace sellers that use Amazon's logistics and delivery services. It has, therefore, now formally initiated proceedings in a separate investigation to examine whether these business practices breach Article 102 of the TFEU.

Problems with digital platforms

In announcing these developments, EU Commission Vice-President Vestager commented that:

“We must ensure that dual role platforms with market power, such as Amazon, do not distort competition. Data on the activity of third party sellers should not be used to the benefit of Amazon when it acts as a competitor to these sellers. The conditions of competition on the Amazon platform must also be fair. Its rules should not artificially favour Amazon's own retail offers or advantage the offers of retailers using Amazon's logistics and delivery services. With e-commerce booming, and Amazon being the leading e-commerce platform, a fair and undistorted access to consumers online is important for all sellers.”

The report prepared for the Commission by three special advisers on "Competition Policy for the digital era" highlighted possible competition issues in relation to digital platforms. As part of the Digital Services Act package, the Commission is now considering the introduction of ex ante regulation for "gatekeeper" platforms, and consulted on issues related to this in June 2020.

Big data regulation

It remains to be seen how these EC investigations will play out and whether the same principles can be applied to smaller online platforms. UK regulators also appear to be ramping up their interest in the overlap between competition law and digital business. Chief Executive of the UK Competition and Markets Authority (CMA), Andrea Coscelli, noted last month that the CMA is increasingly focused on “scrutinising how digital businesses use algorithms and how this could negatively impact competition and consumers” and “will be considering how requirements for auditability and explainability of algorithms might work in practice”.

If you have any questions on the EC’s statement of objections to Amazon, data protection law or on any of the issues raised in this article please get in touch with one of our data protection lawyers.

ICO guidance on AI

ICO Guidance On AI Published - AI And Data Protection

On 30 July 2020, the Information Commissioner’s Office (ICO) published its long-awaited guidance on artificial intelligence (AI) and data protection (ICO guidance on AI), which forms part of its AI auditing framework. However, recognising that AI is still in its early stages and is developing rapidly, the ICO describes the guidance as foundational guidance. The ICO acknowledges that it will need to continue to offer new tools to promote privacy by design in AI and to continue to update the guidance to ensure that it remains relevant.

The need for ICO guidance on AI

Whether it is helping to tackle the coronavirus disease (COVID-19), or managing loan applications, the potential benefits of AI are clear. However, it has long been recognised that it can be difficult to balance the tensions that exist between some of the key characteristics of AI and data protection compliance, particularly under the General Data Protection Regulation (679/2016/EU) (GDPR).

The Information Commissioner Elizabeth Denham’s foreword to the ICO guidance on AI confirms that the underlying data protection questions for even the most complex AI project are much the same as with any new project: is data being used fairly, lawfully and transparently? Do people understand how their data is being used, and is it being kept secure?

That said, there is a recognition that AI presents particular challenges when answering these questions and that some aspects of the law require greater thought. Compliance with the data protection principles around data minimisation, for example, can seem particularly challenging given that many AI systems allow machine learning to decide what information is necessary to extract from large data sets.

Scope of the ICO guidance on AI

The guidance forms part of the ICO’s wider AI auditing framework, which also includes auditing tools and procedures for the ICO to use in its audits and investigations and a soon-to-be-released toolkit that is designed to provide further practical support for organisations auditing their own AI use.

It contains recommendations on good practice for organisational and technical measures to mitigate AI risks, whether an organisation is designing its own AI system or procuring one from a third party. It is aimed at those within an organisation who have a compliance focus, such as data protection officers, the legal department, risk managers and senior management, as well as technology specialists, developers and IT risk managers. The ICO’s own auditors will also use it to inform their statutory audit functions.

It is not, however, a statutory code and there is no penalty for failing to adopt the good practice recommendations if an alternative route can be found to comply with the law. It also does not provide ethical or design principles; rather, it corresponds to the data protection principles set out in the GDPR.

Structure of the guidance

The ICO guidance on AI is set out in four parts:

Part 1. This focuses on the AI-specific implications of accountability; namely, responsibility for complying with data protection laws and demonstrating that compliance. The guidance confirms that senior management cannot simply delegate issues to data scientists or engineers, and are responsible for understanding and addressing AI risks. It considers data protection impact assessments (which will be required in the majority of AI use cases involving personal data), setting a meaningful risk appetite, the controller and processor responsibilities, and striking the required balance between the right to data protection and other fundamental rights.

Part 2. This covers lawfulness, fairness and transparency in AI systems, although transparency is addressed in more detail in the ICO’s recent guidance on explaining decisions made with AI (2020 guidance). This section looks at selecting a lawful basis for the different types of processing (for example, consent or performance of a contract), automated decision making, statistical accuracy and how to mitigate potential discrimination to ensure fair processing.

Part 3. This section covers security and data minimisation, and examines the new risks and challenges raised by AI in these areas. For example, AI can increase the potential for loss or misuse of large amounts of personal data that are often required to train AI systems or can introduce software vulnerabilities through new AI-related code. The key message is that organisations should review their risk management practices to ensure that personal data are secure in an AI context.

Part 4. This covers compliance with individual rights, including how individual rights apply to different stages of the AI lifecycle. It also looks at rights relating to solely automated decisions and how to ensure meaningful input or, in the case of solely automated decisions, meaningful review, by humans.

ICO guidance on AI - headline takeaway

According to the Information Commissioner, the headline takeaway from the ICO guidance on AI is that data protection must be considered at an early stage. Mitigation of risk must come at the AI design stage as retrofitting compliance rarely leads to comfortable compliance or practical products.

The guidance also acknowledges that, while it is designed to be integrated into an organisation’s existing risk management processes, AI adoption may require organisations to reassess their governance and risk management practices.

A landscape of guidance

AI is one of the ICO’s top three strategic priorities, and it has been working hard over the last few years to both increase its knowledge and auditing capabilities in this area, as well as to produce practical guidance for organisations.

To develop the guidance, the ICO enlisted technical expertise in the form of Doctor (now Professor) Reuben Binns, who joined the ICO as part of a fellowship scheme. It produced a series of informal consultation blogs in 2019 that were focused on eight AI-specific risk areas. This was followed by a formal consultation draft published in February 2020, the structure of which the guidance largely follows. Despite all this preparatory work, the guidance is still described as foundational.

From a user perspective, practical guidance is good news and the guidance is clear and easy to follow. Multiple layers of guidance can, however, become more difficult to manage. The ICO has already stated that the guidance has been developed to complement its existing resources, including its original Big Data, AI and Machine Learning report (last updated in 2017), and its more recent 2020 guidance.

In addition, there are publications and guidelines from bodies such as the Centre for Data Ethics and the European Commission, and sector-specific regulators such as the Financial Conduct Authority are also working on AI projects. As a result, organisations will need to start considering how to consolidate the different guidance, checklists and principles into their compliance processes.

Opportunities and risks

“The innovation, opportunities and potential value to society of AI will not need emphasising to anyone reading this guidance. Nor is there a need to underline the range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque approaches and algorithms.” (Opening statement of ICO guidance on AI and data protection.)

If you have any questions on data protection law or on any of the issues raised in the ICO guidance on AI please get in touch with one of our data protection lawyers.

Legal Protection of Software

Legal Protection Of Software

This blog considers the legal protection of software under UK law. It focuses on the application of the law of copyright to software, but also briefly considers other intellectual property rights which might be relevant.

Legal Protection of Software - Copyright

A computer program is primarily protected as a copyright work. The Copyright, Designs and Patents Act 1988 (CDPA) provides that copyright subsists in an original literary work, which is defined as including a "computer program" and the "preparatory design material for a computer program", although the CDPA does not define what constitutes a computer program. The CDPA is, in this regard, implementing the EU’s Software Directive, which provides that a "computer program", including for this purpose its preparatory design material, is protected by copyright as a literary work.

However, software is quite unlike the more traditional forms of copyright work - such as books, paintings or letters - for which copyright evolved. Accordingly, the application of copyright to software is not entirely straightforward. In particular, software has a life beyond the black letter of its text in a way that books or paintings do not. It is both a copyright work - in the sense of being a record of information - and a functioning work, which creates effects - such as screen displays or sounds and which may include errors and need to be supported or maintained. This can lead to complications in terms of the legal protection of software by copyright because it is axiomatic that copyright protects the expression of ideas, but not ideas or schemes per se.

Definition of Software

The fact that the term "computer program" appears to be interpreted as referring only to the source code and object code can also lead to difficulties in terms of analysing the copyright works which may subsist in a software package, which, in practical terms, comprises more than just the code. Above the source code or object code of a computer game, for example, there is a layer of visible content - what the user sees and hears when they are playing the game - which may include, among other things, graphics, music or sound effects protected by copyright. 

Requirements for copyright protection of software

In order for copyright to subsist under UK law:

  • A work must fall into one of the categories of work protected by copyright under UK law.
  • A work must qualify for protection under UK law (this usually depends on the nationality of the author or place of first publication).
  • The term of copyright must not have expired.

Works protected by copyright

The works protected by copyright are:

  • Original literary, dramatic, musical or artistic works which, in the case of literary, dramatic or musical works are recorded in some way. A literary work includes a:
    • table or compilation other than a database.
    • computer program and preparatory design material for a computer program.
    • database which meets a specific originality test.
  • Sound recordings, films or broadcasts.
  • The typographical arrangements of published editions.

Computer programs and preparatory design materials

Computer programs and the preparatory design material for a computer program are protected as literary works (section 3(1)(b) and (c), CDPA). The term "computer program" is not defined in the CDPA. However, the term has been regarded by the ECJ in the BSA case as referring to source code or, in its machine-readable form, object code.
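For non-technical readers, the distinction between the two forms can be made concrete with a small sketch. Here Python's built-in compiler stands in for the general idea – the human-readable text is the source code, and the machine-readable form the computer actually runs is the analogue of object code (a simplification only; real object code is typically produced by compiling languages such as C into machine instructions):

```python
# Human-readable source code: this text is what a programmer writes,
# and it reads (and is protected) as a literary work.
source = """
def add(a, b):
    return a + b
"""

# The machine-readable form -- the analogue of object code -- is
# produced by compiling the source. It is the same program, expressed
# in a form a human cannot readily read.
code_object = compile(source, "<example>", "exec")

# Unlike a book, the work also *functions*: executing it creates effects.
namespace = {}
exec(code_object, namespace)
print(namespace["add"](2, 3))  # prints 5
```

Both forms are treated as expressions of the same literary work, which is why source code and object code are each protected, provided they are original.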

The source code and object code of a program will be protected as literary works, provided they are original. Software is frequently modified and updated. In each case where a program is revised or modified to a substantial degree, the new version will also be protected as a copyright work.

Additionally, design documents relating to the computer program, such as flow charts, graphs and functional or technical specifications would be protected as preparatory design material for a computer program. The definition of "computer program" in the Software Directive, provides that a computer program includes the preparatory design work leading to its development.

Legal Protection of Software - Confidentiality Laws

While copyright is the main form of legal protection of software, most proprietary software companies also ensure that the source code of the software is kept as a trade secret, disclosed only under a secrecy agreement where disclosure is necessary, such as to producers of related software. This is because, as discussed above, the source code is the key to understanding how the software functions and is essential for the maintenance of the software, since it will need to be examined to develop the software or to correct errors or defects in it.

There are two basic requirements for information to be treated as confidential according to UK law:

  • It must have the necessary quality of confidence. In other words, it must not be public property or public knowledge.
  • It must be imparted in circumstances importing an obligation of confidence; in other words, it must not be shared as if it were public property or public knowledge.

Legal Protection of Software - Database right 

The EU Database Directive (96/9/EC) sought to harmonise the legal protection of databases. A database is a collection of independent works, data or other materials arranged in a systematic or methodical way and individually accessible by electronic or other means.

The Directive standardised the "originality" threshold for copyright protection of databases, limiting such protection to databases which "by reason of the selection or arrangement of their contents, constitute the author's own intellectual creation" (Article 3, EU Database Directive). This requirement is reflected in section 3A(1) of the CDPA and hence also applies to databases forming part of software.

Legal Protection of Software - Patents

In the UK, a patent may be obtained in respect of an invention which is new, involves an inventive step, is capable of industrial or technical application and does not fall within any of the exclusions (Patents Act 1977). The owner of a patent can prevent any third parties from selling the product or process which is the subject of the invention. However, section 1(2) of the Patents Act provides that a patent will not be granted for "a program for a computer" to the extent that the patent relates to the program "as such". This is derived from a similar provision in Article 52 of the European Patent Convention (EPC).

Although under the EPC computer programs are not patentable "as such", it is well established that the application of a computer program may well be patentable if it possesses a technical character. What gives the application of a computer program the necessary technical character and takes it beyond the exclusion is difficult to determine. It should be noted that there exists some degree of inconsistency and uncertainty with regard to the approach taken to software-patenting across Europe by different national courts and patent offices. A proposed Software Patent Directive, which would have harmonised the position with regard to patent protection of software in the EU and resolved at least some of the questions on the patentability of software, was rejected decisively by the European Parliament.

Protection of software – other ways

Other than relying on UK laws, there are other ways in which software owners can and should protect their products. Adopting technical measures is the most obvious, for example encryption or the embedding of anti-piracy techniques directly into hardware. It is also essential to have robust non-disclosure agreements and software contracts in place, so that if a licensee infringes important rights the software owner can point to its NDA or licence agreement and take appropriate enforcement action.

EM law specialises in technology law. Get in touch if you have any questions on the above.

Open Source Software

Open Source Software - An Overview

A feature of the software world over the last 20 years has been the rise and rise of open source software (OSS). From its origins in US academia in the early 1970s, OSS emerged into the mainstream in the 1990s, continuing into widespread use throughout the 2000s and 2010s so that it is today approaching ubiquity.

What is open source software?

In essence, open source software is software provided under a licence which grants certain freedoms to a licensee. It is often, though not always, free of charge, and is widely used by developers to build the foundational elements of software. OSS is properly seen as a range of associated licensing techniques: there are many different types of OSS licence, differing widely in clarity, length and legal effect.

Looking ahead

The scope and appeal of open source software is only likely to increase, due to a combination of circumstances:

  • The internet. Open source software modules are readily downloadable from online software library sites. To that extent open source is similar to other software delivery techniques that the internet powers, such as virtualisation, service-oriented architecture (SOA), software as a service (SaaS) and cloud computing, all of which saw increasing adoption throughout the 2010s.
  • The current generational shift in the software industry. The generational shift from the traditional "software as a licence" – on the PC at home or in the server room at the office – towards remote, service-based computing which embraces these internet-enabled delivery techniques is now firmly established. This shift is another spur for OSS.
  • The rise of smartphone and tablet devices. Smartphones and tablets are increasingly challenging the dominance of the desktop and laptop as the primary computing device. The software running on these devices, both at the operating-system level (such as Android and Tizen) and in the applications available on "app stores", has opened up new markets and scenarios for open source software to be used.
  • The rise of the Internet of Things (IoT). IoT can be broadly described as the interconnecting of physical devices with software and sensors and enabling these devices to communicate with each other and the internet. IoT is tipped to be one of the greatest technology innovations of the 2020s and open source software is a key enabler of IoT.

Plethora of open source software licences

Today, there are many hundreds of open source software licences in use, varying widely in length, clarity, intent and legal effect, and ranging from the intrusive, "copyleft" General Public Licence (GPL) through to short licences containing virtually no obligations.

OSS licences can be broadly grouped into two distinct categories. These are:

  • Permissive licences.
  • Restrictive licences (also known as "reciprocal", "hereditary" or "copyleft" licences).

While the exact terms vary between OSS licences, the key difference between the two categories of licence is how subsequent amendments, improvements and adaptations of the open source software (or combinations of the open source software with other software) are licensed or restricted.

Permissive open source software licences

Permissive OSS licences usually only require that any distribution of the original open source software be on the same terms as those on which it was provided. Importantly, permissive licences permit a licensee to freely amend, adapt open source code and combine open source code with proprietary code without placing restrictions (or significant restrictions) on such amendments, adaptations or combinations (usually called "derivative works") and how these derivative works can be licensed onwards.

Restrictive open source software licences

Restrictive OSS licences, on the other hand, go one step further than permissive licences, imposing licensing restrictions or requirements where the open source software is amended, adapted or combined with any other software (whether proprietary or open source) to produce a derivative work. While the provisions vary, restrictive OSS licences will (to a certain extent) apply to both the original open source software and any derivative works based upon it. This can be of key concern to organisations when using restrictive open source software alongside their proprietary software, as proprietary software could unintentionally be made subject to the open source licence.

Some examples

As a practical matter, when using open source software, a good starting point is to identify the OSS concerned and the licence terms under which it is made available and then to assess whether the licence attaches any particular terms which might pose a risk to your business. A leading OSS service provider publishes data in relation to trends in OSS usage under the most common OSS licences. The table below sets out the position based on the most recent data:

Top 10 open-source licences in 2016 and 2018

Licence           Permissive or restrictive?   2016 (% of all OSS licences)   2018 (% of all OSS licences)   Change (%)
MIT               Permissive                   25                             26                             +1
Apache 2.0        Permissive                   15                             22                             +7
GPL 3.0           Restrictive                  19                             16                             -3
GPL 2.0           Restrictive                  15                             10                             -5
LGPL              Restrictive                  6                              6                              0
BSD 3             Permissive                   6                              5                              -1
Microsoft Public  Permissive                   5                              3                              -2
BSD 2             Permissive                   3                              2.2                            -1
Eclipse 1.0       Restrictive                  1                              1                              0
Zlib              Restrictive                  1                              1                              0
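As a rough illustration of the practical starting point described above — identifying the OSS in use and its licence terms, then assessing risk — a first-pass compliance check might be sketched as follows. This is a sketch only: the permissive/restrictive grouping mirrors the table above, and the dependency list and function name are hypothetical.

```python
# Illustrative sketch: classify a project's declared OSS licences so that
# restrictive or unrecognised ones can be sent for closer legal review.
# The groupings follow the table above; everything else here is hypothetical.

PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause", "BSD-2-Clause", "MS-PL"}
RESTRICTIVE = {"GPL-3.0", "GPL-2.0", "LGPL-2.1", "AGPL-3.0", "EPL-1.0"}

def flag_for_review(dependencies: dict[str, str]) -> list[str]:
    """Return the dependencies whose licence warrants closer legal review."""
    flagged = []
    for name, licence in dependencies.items():
        if licence in RESTRICTIVE:
            flagged.append(name)          # copyleft terms may attach to derivatives
        elif licence not in PERMISSIVE:
            flagged.append(name)          # unknown licences also warrant review
    return flagged

deps = {"left-pad": "MIT", "linux-headers": "GPL-2.0", "mystery-lib": "Custom"}
print(flag_for_review(deps))  # ['linux-headers', 'mystery-lib']
```

In practice this identification step is usually done with a dedicated scanning tool rather than by hand, but the underlying triage logic is the same.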

Software as a Service

Software as a Service (SaaS) is the term used to describe an arrangement in which software is hosted by a supplier and made available to users remotely via a web browser. An example would be Dropbox, where a user logs in via a portal to access and use the software provided for a subscription fee. There has been considerable controversy over whether the source code for OSS hosted by a SaaS provider must be made available to users.

Under the wording of current OSS licences (with the exception of the GNU Affero General Public License (AGPL)), the hosting of OSS by a SaaS provider would not appear to be a problem. Indeed, Section 0 of GPLv3 notes that mere interaction with a user through a computer network, with no transfer of a copy of the program, is not "conveying", and as a result the obligations to publish source code may not be triggered. The AGPL was created to close this gap. It is a modified version of GPL version 3 with one added requirement: if a modified AGPL program (or a derivative of it) runs on a server and users interact with it there, the server must also allow them to download the corresponding source code.

Final Thoughts

This blog is only a brief introduction to open source software and some of the legal issues to consider. Before supplying any software which contains OSS or, in some cases, before buying any software which contains OSS, it is crucial to understand how the supply or acquisition of the open source software may impact your business model. Generally speaking there has been a trend towards more permissive licensing in the last decade. Whilst encouraging, this should not prevent organisations from taking a deeper look into the OSS licences they use.

EM law specialises in technology law. Get in touch if you have any questions on the above.



AI - Consultation on International Standards  

On 25 June 2020, the International Organization of Securities Commissions (IOSCO) published a consultation document (CR02/2020) on the use of artificial intelligence (AI) and machine learning (ML) by market intermediaries and asset managers, which it has identified as a key priority.

IOSCO consultation paper on AI

IOSCO, the global standard setter for the securities sector, is consulting on proposed guidance on the use of AI and machine learning by market intermediaries and asset managers. Once finalised, the guidance would be non-binding but IOSCO would encourage its members to take it into account when overseeing the use of AI by regulated firms.

IOSCO’s membership comprises securities regulators from around the world. It aims to promote consistent standards of regulation for securities markets.

Why market intermediaries and asset managers?

IOSCO believes that the increasing use of AIML by market intermediaries and asset managers may be altering their business models. For example, firms may use AIML to support their advisory services, risk management, client identification and monitoring, selection of trading algorithms and portfolio management, which may also alter their risk profiles.

One fear is that this use of AIML may create or exacerbate certain risks, which could potentially have an impact on the efficiency of financial markets and could result in consumer harm.

AI industry discussions

As well as setting out its guidance, the report also indicates some of its findings from industry discussions:

Firms implementing AI and ML mostly rely on existing governance and oversight arrangements to sign off and oversee the development and use of the technology. In most instances, the existing review and senior leadership-level approval processes were followed to determine how risks were managed, and how compliance with existing regulatory requirements was met. AI and ML algorithms were generally not regarded as fundamentally different from more traditional algorithms and few firms identified a need to introduce new or modify existing procedural controls to manage specific AI and ML risks.

Some firms indicated that the decision to involve senior leadership in governance and oversight remains a departmental or business line consideration, often in association with the risk and IT or data science groups. There were also varying views on whether technical expertise is necessary from senior management in control functions such as risk management. Despite this, most firms expressed the view that the ultimate responsibility and accountability for the use of AI and ML would lie with the senior leadership of the firm.

Some firms noted that the involvement of risk and compliance tends to focus primarily on the development and testing of AI and ML rather than continuing throughout the lifecycle of the model (i.e., implementation and ongoing monitoring). Generally, once a model is implemented, some firms rely on the business line to oversee and monitor the use of the AI and ML. Respondents also noted that risk, compliance and audit functions should be involved throughout all stages of the development of AI and ML.

Many firms did not employ specific compliance personnel with the appropriate programming background to appropriately challenge and oversee the development of ML algorithms. With much of the technology still at an experimental stage, the techniques and toolkits at the disposal of compliance and oversight (risk and internal audit) currently seem limited. In some cases, this is compounded by poor record keeping, resulting in limited compliance visibility as to which specific business functions are reliant on AI and ML at any given point in time.

AI Areas of concern

IOSCO has identified the following areas of potential risk and harm relating to the development, testing and deployment of AIML: governance and oversight; algorithm development, testing and ongoing monitoring; data quality and bias; transparency; outsourcing; and ethical concerns.

Its proposed guidance consists of measures to assist IOSCO members in providing appropriate regulatory frameworks to supervise market intermediaries and asset managers that utilise AIML. These measures cover:

  • Appropriate governance, controls and oversight frameworks over the development, use and performance monitoring of AIML.
  • Ensuring staff have adequate knowledge, skills and experience to implement, oversee and challenge the outcomes of AIML.
  • Robust, consistent and clearly defined development and testing processes to enable firms to identify potential issues before they fully deploy AIML.
  • Appropriate transparency and disclosures to investors, regulators and other relevant stakeholders.

How the FCA regulates AI in the UK

For an idea of how AI is currently regulated in finance by the UK read below:

The Financial Conduct Authority (FCA) deems it good practice to review how trading algorithms are used; develop appropriate definitions; ensure all activities are captured; identify any changes to algorithms; and have a consistent methodology across the testing and deployment of AI and ML. The Markets in Financial Instruments Directive (MiFID II) requires firms to develop processes to identify algorithmic trading across the business. These can be either investment decisions or execution algorithms, which can be combined into a single strategy. Firms are also required to have a clear methodology and audit trail across the business. Approval and sign-off processes should ensure a separation of validation and development, a culture of collaboration and challenge, and consistency with the firm's risk appetite. While algorithms are deployed in the field, firms must maintain pre-trade and post-trade risk controls and real-time monitoring of the algorithms in deployment, with the ability to halt an algorithm or a suite of algorithms centrally, a functionality commonly known as the kill-switch.
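The kill-switch functionality mentioned above can be pictured as a central halt flag that every running algorithm checks before acting. The following is a minimal sketch only; the class and method names are illustrative and do not correspond to any FCA-prescribed interface.

```python
import threading

class KillSwitch:
    """Central halt flag shared by all trading algorithms (illustrative only)."""

    def __init__(self) -> None:
        # threading.Event is safe to set from a monitoring thread while
        # algorithm threads poll it.
        self._halted = threading.Event()

    def trigger(self) -> None:
        """Called by real-time monitoring to halt all algorithms at once."""
        self._halted.set()

    def trading_allowed(self) -> bool:
        """Every algorithm checks this before submitting an order."""
        return not self._halted.is_set()

switch = KillSwitch()
print(switch.trading_allowed())  # True
switch.trigger()                 # monitoring detects a problem
print(switch.trading_allowed())  # False
```

In a real deployment the flag would typically live in shared infrastructure rather than a single process, but the principle — one central control that stops everything — is what the regulatory expectation describes.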

It is a best practice, but not a requirement, to have an independent committee to verify the completion of checks. However, under the SM&CR, a firm’s governing body would be expected explicitly to approve the governance framework for algorithmic trading, and its management body should identify the relevant Senior Management Function(s) with responsibility for algorithmic trading.

How to submit comments

Comments may be submitted by one of the three following methods on or before 26 October 2020. To help IOSCO process and review your comments more efficiently, please use only one method.

Important: All comments will be made available publicly, unless anonymity is specifically requested. Comments will be converted to PDF format and posted on the IOSCO website. Personal identifying information will not be edited from submissions.

  1. Email
  • Send comments to
  • The subject line of your message must indicate ‘The use of artificial intelligence and machine learning by market intermediaries and asset managers’.
  • If you attach a document, indicate the software used (e.g., WordPerfect, Microsoft WORD, ASCII text, etc) to create the attachment.
  • Do not submit attachments as HTML, PDF, GIF, TIFF, PIF, ZIP or EXE files.
  2. Facsimile Transmission

Send by facsimile transmission using the following fax number: + 34 (91) 555 93 68.

  3. Paper

Send 3 copies of your paper comment letter to:

Alp Eroglu
International Organization of Securities Commissions (IOSCO)
Calle Oquendo 12
28006 Madrid

Your comment letter should indicate prominently that it is a ‘Public Comment on The use of artificial intelligence and machine learning by market intermediaries and asset managers’.

For more information read our blog ‘AI in Financial Services.’

What happens next?

The consultation on the draft guidance closes on 26 October 2020. In the UK, the FCA is currently working with the Alan Turing Institute to look at the implications of the financial services industry deploying AI. Meanwhile, the European Commission has released its own guidelines for trustworthy AI and is expected to propose legislation in this area later in 2020.

EM law specialises in technology law. Get in touch if you have any questions on the above.

Initial Coin Offering

Initial Coin Offering - Legal Aspects

An Initial Coin Offering (ICO) is a low-cost and time-efficient type of crowdfunding which is facilitated through the use of distributed ledger technology. For more information on distributed ledger technology and its most common form, blockchain, read our blog on the topic.

What is an Initial Coin Offering?

In much the same way that an initial public offering involves the issue of shares to investors in exchange for fiat currency, an initial coin offering involves the issue of transferable tokens to investors typically in exchange for cryptocurrency such as Bitcoin or Ether. Some tokens may resemble traditional securities such as shares or debt securities, while others may represent a right to access or receive future services. It is the legal status of such tokens and the cryptocurrency used to purchase them which needs to be explored.

Advantages and disadvantages of an Initial Coin Offering

The rights attaching to tokens vary widely. A key appeal of ICOs is that tokens are easily tradeable. This means that investors can, assuming sufficient liquidity, buy and sell tokens on cryptocurrency exchanges, unlike more traditional venture capital investments, which may not be easily traded.

Other benefits of ICOs compared to more traditional fundraising models are seen to include:

  • The ease and speed with which tokens can be issued and funds raised, in many cases without the use of intermediaries.
  • Low transaction and settlement costs.
  • A perceived lack of regulatory barriers.
  • For many issuers, the formation or augmentation of a wide and motivated user base of the underlying product or service.

Commonly cited disadvantages of initial coin offerings when compared to traditional fundraising models include:

  • The price volatility of the most popular cryptocurrencies. ICO issuers will commonly seek to exchange cryptocurrencies subscribed by investors into fiat currency following the ICO, therefore incurring substantial exchange rate risk. It may be prohibitively expensive or difficult to mitigate this risk effectively.
  • A lack of clarity regarding numerous legal issues relating to the underlying distributed ledger technology, including the enforceability of code-based smart contracts. As explained in our blog, such uncertainty in the UK, although untested in court, is likely to be overcome.
  • An uncertain and evolving regulatory position globally. Combined with the absence of any industry standardisation, this increases the advisory costs and slows the speed at which a compliant ICO may be carried out.
  • Cyber security risks, compounded by the irreversibility of many cryptocurrency transactions.

What are ICOs being used for?

The earliest ICOs were used to launch new cryptocurrencies but increasingly they have been used by early stage companies to fund the development of other projects or services and, in particular, the development of decentralised software applications that run on existing blockchain platforms, such as Ethereum.

However, an ICO can be executed by any company looking to issue tradeable rights to investors in exchange for capital, regardless of the sector in which it operates or the product that it wishes to develop. In September 2017, Kik, an established social media platform, raised approximately $98 million through an ICO of “Kin” tokens to support the development of its existing messaging ecosystem. It remains to be seen whether other non-blockchain centric businesses will use ICOs as a means of raising funds.

How do you launch an ICO?

To launch an initial coin offering, an issuer will generally produce a white paper, which is analogous to the prospectus that a company is required to produce in connection with the admission of securities to trading on the Main Market of the London Stock Exchange. A subscriber will subscribe for tokens by transferring consideration to a specified account, and in doing so it is deemed to have accepted the terms and conditions applicable to that ICO. The tokens themselves are typically created, allocated and distributed through a pre-existing blockchain platform, such as Ethereum, in each case without requiring an intermediary.

Regulation of Initial Coin Offerings

A lack of regulatory barriers is seen by some participants as one of the primary attractions of carrying out ICOs. However, while there is no regulatory framework in the UK which is specific to ICOs, or which refers to the specific technology or terminology used in ICOs, it is a common misconception to say that all ICOs are unregulated. Issuers and their advisers must therefore consider carefully the applicability and effect of the full range of relevant legislation.

Regulatory perimeter

An initial coin offering may or may not fall within the Financial Conduct Authority’s (FCA) regulatory perimeter depending on the nature of the tokens (the terms used by the FCA to denote different types of cryptoassets) issued, and the legal and regulatory position of each ICO proposition must be assessed on a case by case basis.

Although many ICOs will fall outside the regulated space (depending on how they are structured, such as exchange and utility tokens), some ICOs (such as security tokens) may involve regulated investments, and firms involved in an ICO may be conducting regulated activities (such as arranging, dealing or advising on regulated financial investments).

The FCA outlines perimeter issues relating to ICOs in CP19/3 on perimeter guidance on cryptoassets. It explains that the majority of tokens that are issued through ICOs to the market tend to be marketed as utility tokens (non-regulated). The perimeter guidance being proposed by the FCA will focus on this area to make sure that firms are aware when their tokens may be considered securities, and therefore fall within the FCA’s regulatory perimeter. The FCA explains that it will be paying increasing attention, especially where those preparing ICOs attempt to avoid regulation by marketing securities as utility tokens.

Other points to note about the regulation of ICOs include:

  • The features of some ICOs parallel those of initial public offerings (IPOs), private placements of securities, crowdfunding or even collective investment schemes (CISs), and each needs to be examined individually in order to comply with regulation.
  • Some tokens may also constitute transferable securities and therefore may fall within the FCA's prospectus regime.
  • Digital currency exchanges that facilitate the exchange of certain tokens should consider whether they need to be authorised by the FCA to be able to deliver their services.

Risks if ICOs fall outside the regulatory perimeter

  • Unregulated space: Most ICOs are not regulated by the FCA and many are based overseas.
  • No investor protection: You are extremely unlikely to have access to UK regulatory protections like the Financial Services Compensation Scheme or the Financial Ombudsman Service.
  • Price volatility: Like cryptocurrencies in general, the value of a token may be extremely volatile – vulnerable to dramatic changes.
  • Potential for fraud: Some issuers might not have the intention to use the funds raised in the way set out when the project was marketed.
  • Inadequate documentation: Instead of a regulated prospectus, ICOs usually only provide a ‘white paper’. An ICO white paper might be unbalanced, incomplete or misleading. A sophisticated technical understanding is needed to fully understand the tokens’ characteristics and risks.
  • Early stage projects: Typically ICO projects are in a very early stage of development and their business models are experimental. There is a good chance of losing your whole stake.

FMLC paper on ICOs

The Financial Markets Law Committee (FMLC) published a paper outlining issues of legal uncertainty arising from ICOs in July 2019. The FMLC outlines how existing laws apply to ICOs and looks at some of the challenges for regulators, providers and market participants. These challenges include a lack of international and regional harmonisation relating to the categorisation of tokens issued in ICOs, as well as in their regulatory treatment.

FCA consumer warnings on ICOs

The FCA has warned consumers of the risks of ICOs. The FCA warns that ICOs are very high-risk, speculative investments due to, among other things, their price volatility, lack of access to UK regulatory protections such as the Financial Services Compensation Scheme (FSCS) or the Financial Ombudsman Service (FOS), potential for fraud, and the lack of adequate documentation.

Financial crime risks of ICOs

The FCA wrote to CEOs of banks in June 2018 warning of the risk of abuse of cryptoassets, which arises from the potential anonymity and the ability to move money between countries that cryptoassets allow. Banks were warned to take reasonable and proportionate measures to lessen the risk that they might facilitate financial crimes enabled by cryptoassets.

An often unregulated area

Unregulated initial coin offerings are not considered safe investments by the FCA and should therefore always be treated with caution. On the other hand, they offer businesses a quicker and easier way to raise capital. If you are looking to invest in an initial coin offering you should always be aware of such risks. If you are a business looking to raise capital through an ICO then the extent to which you may be regulated needs to be considered.

ICOs are an area likely to develop alongside the recent increase in legal certainty granted to cryptoassets and smart contracts under English law. As it stands, however, ICOs are yet to be addressed in such a direct manner.

EM law are experts in technology law. Please contact us if you have any questions on the above.


Cybersecurity – Overview of Some Legal Aspects

Cybersecurity is an area rife with regulation and energetic regulators. Having strong cybersecurity measures in place is an essential part of any business using computers and the internet to store information i.e. most businesses.

What do we mean by cybersecurity?

The term "cybersecurity" refers to the need to protect the following from unlawful use, access or interference:

  • Information and data that is stored electronically (rather than only in physical form).
  • The communications networks which underpin societal, business and government functions.

Reasons for ensuring cybersecurity

Businesses are faced with numerous and varied cybersecurity threats. One leading antivirus software provider reported that it identified over 60,000,000 new forms of malware in the third quarter of 2018 alone. The persons responsible for threats are varied and include computer vandals, organised cybercriminals, "hacktivist" groups and nation states.

Potential consequences

The results of a cyberattack can be devastating for a business. It can result in:

  • Contractual and tortious liability towards individuals seeking compensation for damage and/or distress caused by the unlawful acquisition, disclosure and/or use of their personal information.
  • Prosecution or regulatory sanctions being imposed for failing to comply with legal obligations to keep the information and networks secure or, in some cases, to respond appropriately in the event of a cyberattack. Sanctions may include fines as well as the "naming and shaming" resulting from publication of the authority's investigations into businesses that failed to comply with their statutory obligations.
  • Reputational damage flowing from adverse media coverage, the publication of investigatory reports by regulatory authorities, and where the business is required by law to notify its customers and users of the cyberattack.

Managing cybersecurity risk and compliance

Businesses should be alert to the cybersecurity risks posed by commercial transactions that will involve a third party introducing goods or services into (or being provided with access to) the business's secure IT environment. A business's own cybersecurity obligations will include managing risk within its supply chain and outsourcing to service providers.

These risks can be managed by, for example, implementing various technical and organisational precautions and procedures, inserting appropriate provisions into commercial contracts, obtaining adequate insurance, identifying applicable laws and regulations and ensuring compliance.

Practical steps towards compliance

The steps a business should take to comply with its cybersecurity obligations depend on the nature of the business, its circumstances and the industry in which it operates. There is potential overlap between the different regulatory regimes.

Full compliance with legal obligations and best practice guidance may require a business to implement sophisticated security measures and risk management procedures. However, most security breaches (including some of the most high-profile and significant breaches) are the result of businesses failing to implement relatively basic security precautions and procedures, for example:

  • Not encrypting data or storing encryption keys on vulnerable systems.
  • Using outdated software and systems (containing flaws or vulnerabilities), failing to install fixes, patches and upgrades, retaining redundant systems and servers and not implementing software updating policies.
  • Retaining data for longer than necessary. Data that a business no longer requires may still be valuable to cybercriminals, creating a potential liability for a business rather than an asset.
  • Failing to carry out background checks and vetting on employees with access to data and systems.
  • Not providing sufficient staff training and failing to implement policies relating to employee-data interaction (such as authorised data access or bring your own devices (BYOD) policies).
  • Failing to securely destroy or dispose of data or equipment containing data (or verify destruction by subcontractors).
  • Using removable media (such as USB drives and CDs) or portable computers (such as laptops and tablets) in an insecure manner (for example, not scanning media for viruses before introducing new hardware into a secure environment or failing to encrypt data).

Ascertaining which regulations apply

Every business should assume it has a legal duty to implement effective information risk management procedures, of which cybersecurity measures are an essential part. In particular, there are few businesses that do not handle any personal data (whether in relation to employees, customers or other individuals). At a minimum, businesses should seek to comply with the obligations set out in the General Data Protection Regulation ((EU) 2016/679) (GDPR) and Data Protection Act 2018 (DPA 2018), in particular:

  • Sixth data protection principle (Article 5(1)(f), GDPR): personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.
  • Articles 32 to 34, GDPR: both the controller and the processor are required to ensure a level of security appropriate to the risk, taking into account factors such as the costs of implementation and the context of the processing, and there are obligations to report personal data breaches.
  • Controller and processor contracts (Article 28, GDPR): specific requirements as to what should be included in a contract between a controller and a processor.

OESs and RDSPs

In addition, certain operators of essential services in the UK, and certain relevant digital service providers who have their head office, or have nominated a representative, in the UK (OESs and RDSPs, respectively) are subject to additional cybersecurity and incident notification requirements under the Network and Information Systems Regulations 2018 (SI 2018/506) (NIS Regulations).

OESs are organisations that operate services deemed critical to the economy and wider society. They include critical infrastructure (water, transport, energy) and other important services, such as healthcare and digital infrastructure.

RDSPs are organisations that provide specific types of digital services: online search engines, online marketplaces and cloud computing services. To be an RDSP, you must provide one or more of these services, have your head office in the UK (or have nominated a UK representative) and be a medium-sized enterprise.

There is a general small business exemption for digital services: if you have fewer than 50 staff and a turnover and/or balance sheet total of less than €10 million, then you are not an RDSP and the NIS Regulations do not apply. However, if you are part of a larger group, you need to assess the group's staffing and financial figures to see whether the exemption still applies.
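The staffing and financial thresholds above can be sketched as a simple test. This is a minimal Python illustration only, assuming one reading of "and/or" (either financial figure below the threshold suffices); the function name is our own invention and this is not legal advice:

```python
# Illustrative sketch of the NIS small-business exemption described above.
# The function name and the reading of "and/or" are our own assumptions.

def is_exempt_small_business(staff: int,
                             turnover_eur: float,
                             balance_sheet_eur: float) -> bool:
    """Exempt if fewer than 50 staff AND the turnover and/or balance sheet
    total is below EUR 10 million. For group companies, use group figures."""
    threshold = 10_000_000
    return staff < 50 and (turnover_eur < threshold
                           or balance_sheet_eur < threshold)

print(is_exempt_small_business(30, 5_000_000, 12_000_000))   # True - exempt
print(is_exempt_small_business(60, 5_000_000, 5_000_000))    # False - too many staff
```

As the paragraph above notes, a company within a larger group must run this test on group-wide figures, not its own.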

Generally speaking, OESs and RDSPs have the following main obligations under the NIS Regulations:

  • Under regulation 10, an OES must, having regard to any relevant guidance issued by their competent authority, take appropriate and proportionate:
    • technical and organisational measures to manage risks posed to the security of the network and information systems on which their essential service relies; and
    • measures to prevent and minimise the impact of incidents affecting the security of the network and information systems used for the provision of an essential service, with a view to ensuring the continuity of those services.
  • Under regulation 11, an OES must notify their competent authority without undue delay and no later than 72 hours after becoming aware of any incident which has a significant impact on the continuity of the essential service which that OES provides, having regard to:
    • the number of users affected by the disruption of the essential service;
    • the duration of the incident; and
    • the geographical area affected by the incident.
  • Under regulation 12, an RDSP must identify and take appropriate and proportionate measures to manage the risks posed to the security of the network and information systems on which it relies to provide, within the European Union, an online marketplace, online search engine or cloud computing service.
  • Also under regulation 12, an RDSP must notify the ICO without undue delay and in any event no later than 72 hours after becoming aware of any incident having a substantial impact on the provision of any of the digital services mentioned above, providing sufficient information to enable the ICO to determine the significance of any cross-border impact.
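The 72-hour windows in regulations 11 and 12 amount to a simple deadline calculation, which can be sketched as follows (the helper name is our own; the clock runs from the moment the OES or RDSP becomes aware of the incident):

```python
from datetime import datetime, timedelta

# Sketch of the 72-hour notification clock under regulations 11 and 12.
# Illustration only - the deadline runs from awareness of the incident.

def notification_deadline(became_aware: datetime) -> datetime:
    """Latest time for notifying the competent authority (or the ICO)."""
    return became_aware + timedelta(hours=72)

aware = datetime(2020, 3, 2, 9, 30)        # incident discovered Monday 09:30
print(notification_deadline(aware))        # 2020-03-05 09:30:00
```

Note that 72 hours is a long-stop: the primary obligation is to notify "without undue delay", which may mean well before the deadline.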

It will be important for any organisation that identifies as an OES to follow published guidance from its designated competent authority as it is released.

Other regulatory frameworks

The Information Commissioner's Office (ICO), which is responsible for enforcing the GDPR and Data Protection Act 2018 in the UK, as well as the NIS Regulations against relevant digital service providers, has also published extensive cybersecurity guidance for organisations falling within its remit.

In addition to the above, special consideration must be given to businesses that:

  • Handle particularly sensitive information.
  • Carry out certain activities (such as merchants that process payments).
  • Provide certain services (such as financial services or publicly available electronic communications services).
  • Operate as part of a regulated profession or industry (for example, legal or accounting services).

They are likely to be subject to additional regulation and be required to comply with certain industry standards. These businesses should be able to obtain advice and details of their obligations (for example, guidance on mandatory obligations and best practice) from their relevant regulatory authority, professional body or industry group.

Implementing cybersecurity measures, policies and procedures

There are several different ways in which the risk of cybercrime can be reduced:

  • Technical measures: installing firewalls and antivirus software, limiting employee access rights and controlling document retention.
  • Practical measures, for example:
    • A business should have policies in place that enable it to react properly in the event of an incident. These policies should address issues such as information disaster recovery and backup, response to a security breach (including notification) and remedial steps.
    • A business's policies and measures will need to be kept under review. Audits and risk assessments should be carried out from time to time and the robustness of policies and measures should be tested regularly. Where appropriate, this may involve engaging independent third parties (such as penetration testers).

For small and medium-sized enterprises (SMEs) unsure how to proceed, the UK government's 10 Steps to Cyber Security guidance provides a useful starting point. For consultancy assistance in achieving the recommended security baselines, you could discuss your needs with our friends at Tantivy or with other specialist security firms.

EM Law are experts in technology law and data protection law. Please get in touch if you need any help with cybersecurity compliance or if you have any other legal issues.

AI In Financial Services - Latest Developments

AI in financial services is not new. In fact, financial services was one of the first sectors to deploy Artificial Intelligence at scale. The trading activities of many financial institutions are now predominantly algorithmic, using technology to decide on pricing and when to place orders.

AI in Financial Services - Some Developments

With increased data and reporting volumes and advanced algorithms, the potential for AI in financial services to be further harnessed and developed is endless. For example:

  • Anti-Money Laundering (AML). The Financial Conduct Authority (FCA), in a 2018 speech, identified the potential use of AI to combat money laundering and financial crime.
  • Asset management. The use of AI is a growth area in the asset management industry, with applications including risk management, compliance, investment decisions, securities trading and monitoring, and client relationship management. An FCA speech on the subject suggests that investment managers may well have to increase their technology spend to keep up with AI developments.

Bank of England speech

The pace at which firms are adopting AI in financial services varies. In November 2018, the Bank of England (BoE) published a speech on the application of advanced analytics, reporting that adoption across the industry remains relatively slow. The speech identified the short-term cost to firms of increasing levels of automation, machine learning and AI, as well as the likely impact of such innovation on execution and operational risks, which may make businesses more complex and difficult to manage. This leaves plenty of space for business opportunity and innovation.

Financial Services Artificial Intelligence Public-Private Forum

The FCA and BoE have established the Financial Services Artificial Intelligence Public-Private Forum (AIPPF) to further constructive dialogue between the public and private sectors and to better understand the use and impact of AI and machine learning (see the AIPPF terms of reference published on 23 January 2020). The forum builds on the work of the FCA and BoE, which published a joint report on machine learning (ML) in UK financial services in October 2019 based on 106 responses. Key findings include:

  • Two-thirds of respondents already use ML in some form.
  • In many cases, ML development has passed the initial development phase and is entering more advanced stages of deployment. Deployment is most advanced in the banking and insurance sectors.
  • ML is most commonly used in AML and fraud detection, as well as in customer-facing applications (for example, customer services and marketing). Some firms use ML in areas such as credit risk management, trade pricing and execution, as well as general insurance pricing and underwriting.
  • Regulation is not seen as a barrier to ML deployment. However, some firms stress the need for additional guidance on how to interpret existing regulations. The biggest reported constraints are internal to firms, such as legacy IT systems and data limitations.

AI in Financial Services - FCA expectations

There has been little guidance from the FCA on how the use of AI complies with its rules. Like other forms of technology, the use of AI must not conflict with a firm's regulatory obligations, such as its obligation to treat customers fairly. The FCA has expressed concern, for example, that the use of AI in financial services might make it harder for vulnerable customers to obtain insurance cover if algorithms take into account characteristics that make it unviable to offer products and services to less affluent customers. Firms may therefore wish to ensure that they have systems and processes in place to monitor the impact of AI on their target customers. The use of AI also raises issues around accountability, particularly where firms rely on outsourcing arrangements.

Case-by-case basis

The FCA has said that it would approach potential harm caused by AI in financial services on a case-by-case basis. However, firms that deploy AI and machine learning must ensure they have a solid understanding of the technology and the governance around it, especially when considering ethical questions around data. The FCA wants boards to ask themselves what the worst thing that can go wrong is, and to mitigate those risks. Indeed, an FCA Insight article on AI in the boardroom suggests that AI is principally a business rather than a technology issue. Boards therefore need to consider a range of factors: the need for new governance skills, ethical decision-making, explainability (do they understand how the AI operates?), transparency (customer consent for use of data) and the potentially changing nature of liability.

Some existing law and regulation applicable to AI in financial services

Misuse of data

Under GDPR, individuals have the right to know how their personal data is being used by AI. Financial institutions should be aware that GDPR (and section 168 of the DPA 2018) gives individuals the right to bring civil claims for compensation, including for distress, for personal data breaches.

Fairness, discrimination and bias

Principle 6 of the FCA's Principles for Businesses requires a firm to "pay due regard to the interests of its customers and treat them fairly". AI only reads the data presented to it, on a one-size-fits-all basis, so discriminatory outcomes are a real risk.

Anti-competitive behaviour

The UK Competition and Markets Authority (CMA), has already used its powers to restrain technology with an anti-competitive objective. In August 2016, it fined Trod, an online seller of posters and frames, for using software to implement an agreement with a competitor not to undercut each other’s prices.

Systems and control

Firms should be aware that the FCA can require them to produce a description of their algo-trading strategies within just 14 days, and that it recommends that firms have a detailed “algorithm inventory” setting out coding protocols, usages, responsibilities and risk controls.

Liability in contract and tort

AI usage (whether by a firm’s suppliers or by the firm with its customers) may give rise to unintended consequences and may expose institutions to claims for breach of contract or in tort, and test the boundaries of existing exclusion clauses. Firms need to assess whether their existing terms and conditions remain fit for purpose, where AI is concerned.

AI in Financial Services - Case Law

In Tyndaris v VWM, the courts are due to consider in mid-2020 where liability lies when an investor suffers substantial losses at the hands of an AI-powered trading or investment system. While the outcome of the dispute will principally depend on the facts, the judgment may include wider comments on the use of AI systems by funds or investment managers.

Industry reports on AI

In an October 2019 report, TheCityUK concluded that AI-specific regulation was not currently appropriate. The report highlights best practices relating to fairness, transparency and consumer protection, data privacy and security, governance, and ecosystem resilience. It also sets out a suggested AI policy approach for the UK government and regulators.

UK Finance has prepared a report in conjunction with Microsoft on AI in financial services. Key takeaways from the report include the need to recognise AI as more than a tool and to consider the wider cultural and organisational changes necessary to become a mature AI business. As firms start to embed AI into core systems, they also need to consider the implications of AI that go beyond the technical, including the wider impact on culture, behaviour and governance. Part Two of the report is intended to help firms determine where AI is the right solution and how to identify high-value use cases, looking more deeply at analysing the business case. The report states that firms must consider how to supplement existing governance frameworks, or create new ones, to ensure that the ethics, appropriateness and risks of AI are in balance with the benefits it promises and the firm's corporate standpoint.

The future is here

AI is becoming ever more incorporated into everyday business practice. With regard to AI in financial services, a key takeaway from current regulation is that a strong understanding of how AI is used within your business, and for what purposes, can make compliance much less of a headache.

EM Law specialises in technology law. Get in touch if you have any questions on the above.

SaaS Contracts

SaaS Contracts – Things To Look Out For

SaaS contracts are increasingly relevant as SaaS is now the model that most software suppliers are looking to supply through. This article provides some insight into the kind of things you need to consider if you are dealing with SaaS contracts.

What is SaaS?

SaaS is the practice of accessing software solutions over the internet, as opposed to downloading and installing them on your computer. Before SaaS, businesses and consumers would buy a physical copy of the software that required installation.

Remember the plastic-wrapped boxes that held the software’s CD-ROM? SaaS eliminates the need for that thanks to the internet. Businesses and consumers simply subscribe to access externally hosted software. As long as they have an internet connection, customers can access the software from anywhere, on any device.


Take your email server, for example. You want to know that you’ll continue to send and receive emails without needing to fiddle with your email settings or worry about updates. Imagine your email server going down because you forgot to update it, leaving you without email for days. That’s simply not an option in today’s marketplace. If you use a SaaS product such as Microsoft 365 as your email provider, the chances of something going wrong are very small.

Why use SaaS?

With SaaS, you don’t need to install and run software applications on your computer (or any computer).

Everything is available over the internet when you log in to your account online.

You can usually access the software from any device, anytime (as long as there is an internet connection).

The same goes for anyone else using the software. All your staff will have personalised logins, suited to their access level.


However, SaaS is not without its drawbacks:

  • The one-to-many model means SaaS customers do not get bespoke services.
  • Reliance on online connectivity. The internet is fast becoming a single point of failure for many organisations: how long could a company operate without it?
  • Compliance issues, such as cybersecurity, data protection and encryption.
  • The risk that the customer fails to control usage or growing storage requirements.

Commercial setting

Although most famously deployed on a business-to-consumer basis, SaaS is also used on a business-to-business model. If you are looking to offer SaaS to consumers or businesses, or are a business looking to subscribe to a SaaS offering, then being aware of the negotiating positions on SaaS contracts is crucial.

Negotiation Checklist – What to Ask For and Consider in SaaS Contracts

  • A detailed description of the services being offered.
  • How is data being processed? This is important for compliance with data protection law: who has access to the personal data that the SaaS provider is collecting, and who is responsible in the event of a data breach? For the purposes of GDPR, the customer (i.e. the person using the software and putting data into it) is usually considered the data controller. The obligations of data protection law fall mainly on the data controller and therefore, usually, on the customer of a SaaS provider. A data controller should only allow a third party to process data on its behalf if that third party has appropriate organisational and technical measures in place to protect the data, so appropriate data processing provisions need to be set out in the SaaS Contract.
  • The right of access to the application. Who does and does not have the right to use the application? For example, is the charging structure in the SaaS Contract based on a per person subscription fee or can any of the customer’s staff access the service in return for the customer paying a (significant) upfront annual licence fee?
  • The provision of updates, maintenance and integration of third-party tools. Depending on the context, the customer may want to see some response time commitments if things go wrong as well as service availability commitments. If the SaaS product is for consumers such provisions are unlikely to be included in the SaaS Contract. If the service is fairly niche and for businesses rather than consumers then response time commitments for fixing faults are more likely to be found or negotiated into the SaaS Contract.
  • Intellectual property rights. The supplier of a SaaS application and its licensors will own the intellectual property rights in the software, whilst the customer will own the data that is input into the software.
  • Term and termination. Clear language is needed in the SaaS Contract so there is no doubt about the length of the subscription term. Is the subscription automatically renewed? If so, can prices increase in future?
  • Limitation of liability. Generally the liability of the supplier is limited to total subscription fees for 12 months but this can vary. Customers must be mindful of the kinds of losses that they may incur if things go wrong and check whether or not the limitation being imposed by the supplier is fair.
  • Scalability of pricing options, i.e. how can you get, or offer, the best price for the size of businesses you are likely to attract?
  • Rights of third parties? If the customer needs its consultants as well as its employees to be able to access the applications then check that the SaaS Contract allows this. What about staff belonging to other members of the same group of companies as the customer?
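The 12-month fee cap mentioned in the checklist can be illustrated with a short sketch (hypothetical figures and function name; not a drafting model, and the precise cap in any given SaaS Contract will vary):

```python
# Illustrative sketch of a common SaaS limitation-of-liability cap:
# supplier liability limited to the subscription fees paid in the
# preceding 12 months (or fewer months, early in the term).

def liability_cap(monthly_fee: float, months_elapsed: int) -> float:
    """Cap = fees paid over the last 12 months (or since the term began)."""
    return monthly_fee * min(months_elapsed, 12)

print(liability_cap(1_000, 18))   # full 12-month cap applies
print(liability_cap(1_000, 6))    # only six months' fees paid so far
```

A customer negotiating such a clause should compare this cap against the losses it could realistically suffer (for example, from data loss or downtime) and push back where the two are badly out of proportion.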

The Present and Future of SaaS

Since its early beginnings, the SaaS industry has continued to grow, evolve, and thrive. It’s an equal-opportunity industry, with SaaS tools coming from startups, tech giants, and every company size in between. Even traditional software companies now have SaaS offerings to stay relevant and on-trend.

The SaaS industry is also home to quite a few unicorns (private companies valued at $1 billion or more). While the tech sector dominates lists of unicorns in general, SaaS tools are beginning to gain more and more real estate. Some SaaS companies with unicorn status are Dropbox, Domo, and Slack.

In the future, SaaS companies are expected to adapt their offerings based on significant tech trends. For example, artificial intelligence is likely to play a major role as SaaS companies begin to incorporate AI into their tools, ultimately increasing functionality and improving the user experience. Artificial intelligence is often seen in the form of chatbots, but it will also be useful in automating manual tasks and personalizing SaaS offerings.

Cybersecurity is also a vital aspect of the future of SaaS. There is always a risk to storing sensitive data in the cloud, but consumers’ concerns and hesitations have pushed SaaS companies to take necessary security measures.

These enhancements take the form of encryption algorithms, identity management and anti-malware tools – three measures that work to protect the software, and its customers, from data breaches and viruses.

We are SaaS Experts

EM Law’s technology lawyers have helped clients with a wide range of SaaS Contracts both nationally and internationally.

Please contact us if you have any questions on SaaS Contracts or you can find out more about SaaS arrangements by checking our other blogs on cloud services or Software as a Service.