AI Whitepaper – UK Finance response

Date: 28 June 2023
Sent to: evidence@officeforai.gov.uk

UK Finance is the collective voice for the banking and finance industry. Representing more than 300 firms across the industry, we act to enhance competitiveness, support customers and facilitate innovation.

Key messages from our response include:

• We support the sectoral, risk-based approach, based around regulatory guidance. Such an approach provides flexibility and can account for the peculiarities of each sector, each use case and the existing applicable rules more readily than a horizontal AI law. It is also better able to adapt to technological developments than primary legislation.

• The UK government and the central function can play a key role in convening and coordinating regulators, and facilitating the sharing of information. The design of this central function should account for the following:

  o Where guidance covers cross-sectoral issues, there should be coordination to promote coherence and avoid undue duplication, fragmentation or contradictions, with joint guidance pursued where feasible. This will help maintain a level playing field between sectors and promote innovation. Although the policy focus is rightly on outcomes, where there is a need for technical, process- or governance-focused guidance, there may be more cross-sectoral elements, requiring more coordination.

  o The government and a central function will not be able to direct or override regulators (absent a new statute). Care is needed to preserve regulatory independence, and any statutory ‘hook’, such as a duty to have due regard to a set of principles, needs to be considered carefully in light of each relevant regulator’s statutory duties and role.

• It is important to clarify in the final policy papers that regulators can rely on their existing, technology-neutral guidance and rules, with tactical AI-specific supplements when required. They should not be expected to produce a full ‘AI overlay’ when existing regulation addresses risks adequately. AI frameworks should consider existing regulation and carefully evaluate gaps in order to avoid regulatory duplication, while ensuring high-risk use cases are not overlooked. Avoiding unnecessary new layers of guidance will help reduce complexity and promote innovation.

• The UK approach to AI needs to consider not only risks posed by misuse or error on the part of legitimate firms but also the risks posed by bad actors using AI for fraud or other malicious purposes. Similarly, the potential for AI to contribute to other policy priorities needs to be considered, for example its use in fraud prevention.

• There is a mix of principles-based and prescriptive approaches emerging globally. This will make international interoperability challenging, but the government should seek to promote it and find opportunities to drive alignment (or at least compatibility) where possible, diverging from international norms only when there is good reason to do so.

• We strongly support the development of sandboxes as a tool for delivering business certainty and revealing any areas of tension between regulatory expectations.

• Generative AI – where made publicly available online – merits particular attention from policy makers and regulators, given the new risks and challenges it poses (alongside clear opportunities for businesses and consumers). This is a potential gap in existing regulation that should be reviewed as a priority.

• Monitoring of supply chain issues will be needed, for example to ensure that firms deploying AI (and holding most regulatory obligations) are able to access the information they need to do so with confidence, while respecting vendors’ intellectual property. Cross-sectoral AI assurance tools and documentation should be developed, building on the existing work of the Centre for Data Ethics and Innovation (CDEI).

In the context of discussing the issues raised in the Whitepaper, we also put forward a number of suggestions that may be most appropriately considered by regulatory authorities in due course, rather than being for the government to address in its overarching approach. We nonetheless include these as relevant to how the regime will operate in practice.

Please find our detailed responses to the consultation questions annexed. If you have any questions relating to this response, please contact Walter McCahon, Principal, Privacy and Data Ethics, at walter.mccahon@ukfinance.org.uk.

Walter McCahon
Principal, Privacy and Data Ethics

Company number: 10250295. Registered address: UK Finance Limited, 1 Angel Court, London, EC2R 7HJ
Annex – UK Finance responses to AI Whitepaper consultation questions

The revised cross-sectoral AI principles

Question 1 – Do you agree that requiring organisations to make it clear when they are using AI would improve transparency?
We agree that making it clear when AI is being used can improve transparency and help build public trust in the technology.
The Whitepaper states that an “appropriate degree of transparency and explainability should be proportionate to the risk(s) presented by an AI system.” We fully endorse this approach but note that it may not be straightforward in practice. In particular, an AI vendor may (reasonably) take the view that its model does not present significant risks, but the way it is deployed by the client firm might. In this situation there could be a mismatch between the views of the vendor and the client, potentially leaving the client without the necessary insights into the model. See also our comments under Questions L1 to L3.
That said, we support the intent behind this proposal, noting a number of subtleties and challenges to work through. In particular:

a. How does this fit with the overall model of sectoral regulators taking the lead and setting expectations for use cases within their domain? Discussion of “requiring” firms to do something implies some kind of horizontal rule applied to firms directly by government, such as a new statutory requirement. If that is indeed intended, it would contradict the overall framework in the Whitepaper, seemingly cutting across the transparency principle.

b. We are similarly unsure of the policy rationale behind this question. The policy goal will inform the approach taken, with different approaches warranted depending on whether the goal is to build customer trust, to ensure regulatory compliance through communications to authorities, or some other objective.

c. Building on this point, it is unclear from the Whitepaper text to whom firms would need to make their AI use clear, or in relation to which use cases. We can understand the goal of ensuring that consumers do not mistakenly believe they are interacting with a human in the context of a chatbot, for example. But many AI applications sit in back-office administration; in our view, consumers would not want to be told about details such as business optimisation software or the basis on which banks make macro-level capital management decisions.

d. Clearly, regulators will need sight of certain back-office AI uses, but the relevance of different applications will likely vary by sector.

e. Alternatively, the intention behind this provision in the Whitepaper might be for firms to issue some kind of public disclosure of certain back-office uses of AI in order to raise public awareness. This could take the form of a public statement, an explanation in the company accounts or privacy notice, or – where relevant – some kind of in-app notification (see the illustrative sketch after this list). If this were intended, it would need careful consideration and would need to allow flexibility for different use cases, contexts and sectors, while avoiding the risk of ‘notification fatigue’ among users.

f. The right approach in our view depends on how the AI is deployed; as the implications would vary, sectoral regulators seem best placed to decide where such disclosure rules are warranted.

g. Even in the context of directly consumer-facing applications like chatbots, relationship management tools or loan decisions, we would ask whether the key issue is in fact whether the consumer is engaging with ‘AI’, or whether the consumer is engaging with an ‘automated system’. Is this about being transparent that the consumer is not interacting with a human being, or about AI specifically? We recognise that AI will probably attract greater public concern than more traditional automated systems, likely being perceived as posing greater risks of customer harm or manipulation, but the policy rationale needs to be clear.

We also highlight that guidance from the Information Commissioner’s Office (ICO) will already set minimum expectations for the information provided to data subjects about (significant) automated decision-making under the draft Data Protection and Digital Information (DPDI) Bill. As such, this proposed requirement may be redundant in relation to consumer-facing use cases.
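To illustrate point (e) above, the sketch below shows one shape a flexible, machine-readable in-app AI-use notice could take. This is purely our illustration, not a mechanism proposed in the Whitepaper; the class, field names and example values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIUseNotice:
    """Hypothetical in-app notice that AI is in use (illustrative only)."""
    use_case: str            # e.g. "customer service chatbot"
    automated: bool          # the user is interacting with an automated system
    human_escalation: bool   # whether hand-off to a human is available
    more_info_url: str       # link to a fuller explanation, e.g. privacy notice

# Example: a disclosure shown when a chat session starts.
notice = AIUseNotice(
    use_case="customer service chatbot",
    automated=True,
    human_escalation=True,
    more_info_url="https://example.com/how-we-use-ai",  # placeholder URL
)
print(f"You are chatting with an automated assistant. "
      f"Learn more: {notice.more_info_url}")
```

A structure like this could be adapted per sector and use case, which is consistent with our view that sectoral regulators are best placed to decide where disclosure is warranted.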
For these reasons, we question the need for a new statutory transparency obligation, if this is in fact being proposed. We agree that transparency is important but consider that Article 22 of the UK’s General Data Protection Regulation (GDPR), supplemented by ICO guidance, plus rules and guidance from sectoral regulators, already provides a suitable solution that allows flexibility to account for differences in use case and audience.
In short, any disclosure requirement would need a careful and context-sensitive design approach.

Question 2 – Are there other measures we could require of organisations to improve AI transparency?
Any further transparency measures should remain principles-based. Per our comments above, the government’s intent behind the transparency question is unclear and implies an interest in passing a new statutory requirement, which does not fit with the overall sectoral, guidance-based framework.
Instead, sectoral regulators could set out the AI-related reporting they want to see from firms they regulate (for those regulators possessing powers to require such regulatory reporting).
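As a purely illustrative sketch of what such regulator-specified reporting could look like in practice (our assumption, not anything proposed in the Whitepaper; every field name and value is hypothetical), a firm might maintain a structured register of AI use cases and report entries such as:

```python
# Hypothetical entry in a firm's AI use-case register, of the kind a
# sectoral regulator might request. Every field and value is illustrative.
ai_use_case_entry = {
    "use_case": "retail credit decisioning support",
    "business_area": "lending",
    "firm_risk_rating": "high",                  # firm's own assessment
    "customer_facing": True,
    "third_party_vendor": "Acme Models Ltd",     # hypothetical vendor
    "human_oversight": "human review of all declined applications",
    "last_model_validation": "2023-05-31",
}

# A regulator could then focus supervisory attention on higher-risk entries,
# supporting the risk-based approach discussed below.
register = [ai_use_case_entry]
high_risk = [entry for entry in register if entry["firm_risk_rating"] == "high"]
print(len(high_risk), "high-risk AI use case(s) reported")
```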
Coordination would be needed here to avoid overlapping or contradictory guidance from different regulators. We recognise that AI reporting could allow regulators to cooperate more effectively by understanding when AI use cases in their sector have shifted into the purview of a different regulator. We also recognise that developing comparative baselines, and identifying the use cases to audit accurately, will be difficult for authorities to deliver.
Done well, such reporting would help support risk-based regulation, which we support.

Supply chain considerations
A related question is which actors in the AI supply chain should provide transparency, and how transparent they need to be. Must both developers and deployers of AI provide transparency and explainability information? As touched on elsewhere in our response, there is an important question around whether firms deploying AI can readily access the information they need from AI vendors (or indeed former vendors) for due diligence purposes, while managing legitimate vendor concerns about intellectual property.
Sectoral regulators may wish to consider whether to place information-sharing expectations or obligations on vendors selling into their sectors. Work by the CDEI on the AI assurance ecosystem may play a valuable role here.
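To make this concrete, below is a minimal sketch of the kind of machine-readable ‘model card’ style assurance documentation a vendor might supply to a deploying firm. This is our own illustration under assumed field names, not an existing CDEI artefact or standard; it shows how a deployer could run basic due-diligence checks without the vendor exposing proprietary model internals.

```python
# Hypothetical machine-readable "model card" a vendor might provide to a
# deploying firm. All field names and values are illustrative assumptions;
# real templates would come from standards work such as the CDEI's.
vendor_model_card = {
    "model_name": "acme-credit-risk-v2",         # hypothetical model
    "intended_use": "credit risk scoring for retail lending",
    "out_of_scope_uses": ["employment screening", "insurance pricing"],
    "training_data_summary": "UK retail lending records, 2015-2022",
    "known_limitations": ["lower accuracy for thin-file applicants"],
    "fairness_testing": "tested for disparate outcomes across protected groups",
    "explainability_support": "per-decision feature attributions available",
    "vendor_contact": "assurance@acme.example",  # placeholder address
}

def due_diligence_gaps(card: dict) -> list:
    """Return the assurance fields a deploying firm still needs from the vendor."""
    required = ["intended_use", "known_limitations",
                "fairness_testing", "explainability_support"]
    return [field for field in required if not card.get(field)]

print(due_diligence_gaps(vendor_model_card))  # [] means no gaps on this check
```

Standardised documentation of this kind could help balance deployers’ information needs against vendors’ intellectual property concerns.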
See for example: https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem and https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques