How to Write Rules and Regulations as Data First: An Interview with Bank of England’s Angus Moir

Angus Moir leads the data collection transformation team at the Bank of England, responsible for transforming the way the Bank collects data from the financial sector. Previously, he led the Bank’s Digital Regulatory Reporting initiative and played a key role in delivering new supervisory data. Prior to his work in data collection, Angus held a number of roles at the Bank and in the private sector, primarily with a focus on risk analysis.

Originally an economist by training, his current primary interest, apart from improving data collection, is how to write rules and regulations as Data First.


Hudson Hollister

Angus Moir, head of data collection at the Bank of England, thank you so much for joining us.

Angus Moir

No problem at all. Great to be here.

Hudson Hollister

Angus, we’d love to ask for a quick biography, especially given that this area of legislative modernization is a very specialized one.

Angus Moir

Yeah, sure. I lead the Bank of England’s work looking at transforming data collection. This is a long-term project with a long-term scope. The Bank is really rethinking how to do data collection, not just so it works over the next 10 years, but so that we get the solution right for the decades to come.

We think that’s really important because the world of analysts is changing, technology is changing, and the firms we regulate are changing. That presents challenges for us. Data is becoming increasingly important to us, and so is how we use it, which means we need to be really careful about how we collect it. Secondly, it presents opportunities. Potentially, we can use technology, or the changes happening around us, to make that data collection process more efficient.

I used to be a risk analyst in both the public and private sectors. As a risk analyst working for the Bank of England I did two things. First of all, I worked on a project looking to collect data from a bunch of the firms that I was regulating. I found that pretty interesting. Second, I got frustrated that the way we collected data wasn’t very efficient. Actually, a lot of the problems I had as an analyst weren’t with the models and the analysis, but with the data that we had. If we had great data, the analysis often became 90 or 95% easier.

So this is really a journey for me that hopefully, at some point will end up with me going back to being an analyst and eventually having the data that I want and that I can use.

Hudson Hollister

And has that happened yet?

Angus Moir

Hasn’t happened yet. Not anytime soon.

You may ask, why this topic? How did a guy who was a risk analyst get into data collection? There was a project that I spent a lot of time involved with called Digital Regulatory Reporting. We were thinking about what the future data collection process would look like. We knew that most of the data collection that happens from the financial sector is carried out by machines and by systems that are built to supply the data and generate reports. And those systems are built after a painful process to convert our data collection requirements and instructions, often written in natural language, into code.

So the idea behind DRR was why can’t we just publish the code in the first place? Why wait for the firm to take our natural language expression and turn it into code? And that’s an idea I’ve been thinking about for about four years now.

Hudson Hollister

Angus, that was going to be the next category of questions: the connection between the future modernization and standardization of the rules coming down, on the one hand, and the modernization and standardization of the collections going up, on the other. It seems as though the first touch point, what brought you upstream, so to speak, into regulatory compliance, was the idea that regulators could ask for collections in a machine-readable fashion in which the request itself generates the collection. How has your understanding of that become more nuanced as you’ve worked on it?

Angus Moir

I think data collection is really interesting and perhaps a great use case for this idea of increased automation in regulatory compliance using rules as code and machine executable regulation.

One reason is that we know, from a firm’s perspective, compliance with data collection regulation often means code running in machines.

Secondly, data collection requirements are quite prescriptive. And they need to be prescriptive, in terms of the regulations and rules that we write, because if they are not, then the data we receive tends to be very disparate, data quality tends to be very bad, and the whole process of providing us with the data tends to be quite slow. It’s a use case where we definitely see a lot of potential.

Hudson Hollister

Can you give us an example, especially for those who might be experts in regulation of a different sort and might not be banking experts?

Angus Moir

Yeah, so let’s say we want to collect a set of mortgage reports, and we want to collect a set of aggregates. So we might ask a firm for the total amount of mortgage lending they did over the last quarter. Ideally, they’ll have a database somewhere that is a list of all their mortgages, with the loan amount for each mortgage they lent and a timestamp for when that money was lent. In order for us to import that data, they need to sum up and aggregate the data, then supply it to us at the frequency that we asked for.

Now, if the data is standardized across the industry, if all the firms are recording information in the same way, then we should be able to publish code that basically tells firms when they need to report and how to get from the underlying data, what they have in their systems, to the data we need for our reporting purposes.
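To make that concrete, here is a minimal sketch of what a reporting instruction expressed as code might look like. It assumes a hypothetical standardised mortgage record with a loan amount and an origination date; none of this is the Bank’s actual published format.

```python
from dataclasses import dataclass
from datetime import date


# Hypothetical standardised record a firm might hold internally;
# not the Bank's actual data model.
@dataclass
class MortgageLoan:
    loan_amount: float  # amount lent, in GBP
    date_lent: date     # when the money was lent


def total_quarterly_lending(loans, quarter_start, quarter_end):
    """Published aggregation rule: sum all lending originated in the quarter."""
    return sum(
        loan.loan_amount
        for loan in loans
        if quarter_start <= loan.date_lent <= quarter_end
    )


# The figure a firm would report for Q1 2024 under this rule.
loans = [
    MortgageLoan(250_000.0, date(2024, 1, 15)),
    MortgageLoan(180_000.0, date(2024, 3, 2)),
    MortgageLoan(320_000.0, date(2023, 12, 20)),  # outside the quarter, excluded
]
print(total_quarterly_lending(loans, date(2024, 1, 1), date(2024, 3, 31)))  # 430000.0
```

The point of the sketch is that the regulator publishes the aggregation logic once, and every firm runs the same code over its own standardised records rather than re-interpreting a natural language instruction.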

Hudson Hollister

What category of data collection do you think is coming closest to this model at the Bank of England or in banking regulation by the FCA? How close are we?

Angus Moir

That’s a really interesting question. There are two types of data that we tend to be interested in. One is relatively granular, low-level data; we’ll ask our firms for the total loan balance of all their mortgage books. There it’s easier to create machine-executable regulation because you’ve got relatively standardized underlying data. You’re just producing aggregates on top of that data.

That’s perhaps the first use case that you might want to look at. The problem with that use case is that the benefits of machine-executable regulation are perhaps smaller, because people say, “Well, why don’t you just collect that underlying granular data? Why bother with the aggregates? Just collect a cut of the mortgage data and be done with it.” There are reasons why you might want to do that, but that’s one of the challenges for that particular use case.

The other set of data we collect, which we as a prudential regulator care a lot about, is aggregate risk and financial accounting data. What’s the firm’s net profit? Or what is a firm’s common equity tier 1 ratio, a particular regulatory-defined metric that we use to understand how safe a bank or insurance company is? These data points are often the result of some incredibly complex calculations.
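For readers outside banking: at its top level the common equity tier 1 (CET1) ratio is simply a firm’s CET1 capital divided by its risk-weighted assets; the complexity sits in how each of those inputs is itself defined by the capital rules. A deliberately oversimplified sketch, with illustrative numbers only:

```python
def cet1_ratio(cet1_capital: float, risk_weighted_assets: float) -> float:
    """Top-level definition only: CET1 capital divided by risk-weighted assets.

    In practice both inputs are themselves defined by long chains of
    regulatory rules (deductions, risk weights, model outputs), which is
    where the real complexity lives.
    """
    return cet1_capital / risk_weighted_assets


# Illustrative numbers only, not any real firm's figures.
print(f"{cet1_ratio(12_000_000_000, 80_000_000_000):.1%}")  # 15.0%
```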

Hudson Hollister

Is the difference the complexity? On the one hand you’ve got transactions, which might be relatively simple, and you’ve got aggregations of transactions; financial statements, for instance, are aggregations of transactions. On the other hand, you’ve got aggregations that come from risk models, which might be much more complex than a list of transactions.

Angus Moir

For the aggregate and financial risk data there’s a complexity problem, an interpretation problem, and a scope problem. Complexity means you have portfolios, and you have all sorts of stuff going on, which makes it hard.

In terms of scope, a lot of the data points that we collect aren’t actually defined in our data collection rules. The data collection rules just say that we want that piece of data. The aggregations and the models are actually defined in the regulations themselves, in the capital requirements regulations or the financial accounting rules. If you want to digitize that, you need to convert large swathes of financial accounting regulation or large swathes of capital regulation. Which is an incredible opportunity, but it also brings in a whole swathe of policy, governance, legal, and implementation feasibility problems.

This includes dealing with the interpretation question. What is often really crucial to financial accounting, and to regulatory accounting, is the interpretation and classification of products. Those classifications are crucial for understanding exactly how much risk a firm has. Regulators are really worried about being too prescriptive here. They say, “Well, if we’re too prescriptive then firms can reclassify things in a way we don’t want or didn’t expect, and that will impact their measured riskiness.”

Hudson Hollister

Yes. So I was winding around to where we’re closest. Does that answer the question?

Angus Moir

Yes, in general. Talking specifics, there’s an interesting use case that we have at the Bank, which I think is perhaps the closest one for us to implement machine-executable regulation – in fact it arguably already is! We have a tool that we call the liquidity metric monitoring (LMM) tool. What this is, is an Excel spreadsheet that contains algorithms firms implement in their systems. We didn’t create the LMM tool thinking “how can we create machine-executable regulation?”, but we think it’s a good place to start thinking about the issues of how to write machine-executable regulation in a strategic way.

Hudson Hollister

So LMM is not transactional aggregate reporting?

Angus Moir

No, it’s actually based on an aggregate liquidity report the firms submit to us. So it’s already aggregate data; it’s already the kind of liquidity data that we’re asking for. What we’re doing is further aggregating that in order to give our users various different kinds of stresses on a firm’s liquidity position in certain scenarios. And we’re using the outputs internally to monitor firms’ risk, and what we think is the liquidity risk in that firm.

We publish this algorithm, which sets out the calculations, on our website in an Excel spreadsheet. The reason we do this is that we want firms to understand how we’re looking at their firm, and to look at it in the same way as we do. Because if you go to a firm and say, “Look, we looked at the liquidity metrics, and we think you’re really risky. We want you to do something about it,” and the firm goes, “Hey, I don’t understand what you’re talking about. Those metrics, I’ve never seen them before. How do I know whether I agree with you or not?”, you’re on pretty uncertain ground about what to do next.

So when we publish these metrics, we publish this thing in Excel. And it’s machine-executable regulation. In terms of what it’s doing, it’s not too dissimilar to aspects of, you know, capital regulation. And it’s one of the use cases we’re looking at as part of our data collection transformation program. It’s a little test case for machine-executable regulation: if we’re going to do it at scale, what do we need to make this work?
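As an illustration only (the actual LMM metrics, categories, and stress parameters are those defined in the spreadsheet the Bank publishes, not the invented ones below), publishing a liquidity stress calculation as executable logic might look something like this, so that the firm and the regulator compute exactly the same number from the same reported aggregates:

```python
# Invented stress weights for illustration; the actual LMM metrics and
# parameters are those defined in the spreadsheet the Bank publishes.
STRESS_OUTFLOW_WEIGHTS = {
    "retail_deposits": 0.10,    # assume 10% runs off under stress
    "wholesale_funding": 0.40,  # assume 40% is not rolled over
}


def stressed_liquidity_position(liquid_assets, balances):
    """Liquid assets remaining after applying stressed outflow assumptions."""
    stressed_outflows = sum(
        balances[item] * weight for item, weight in STRESS_OUTFLOW_WEIGHTS.items()
    )
    return liquid_assets - stressed_outflows


# Because the calculation is published as code, the firm and the regulator
# compute the same number from the same reported aggregates.
print(stressed_liquidity_position(
    liquid_assets=500.0,
    balances={"retail_deposits": 2_000.0, "wholesale_funding": 600.0},
))  # 500 - (200 + 240) = 60.0
```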

Hudson Hollister

Do you see a progression from machine readability on the one side along a continuum to machine executability? Or do you use other terms than that? Or are there other stages?

Angus Moir 

I’m always very careful about using the terms machine readability and machine executability with experts, because you can get bogged down in definitions. When I talk about this internally and tell people about it, I spend a lot of time focusing on the problems we’re trying to solve.

When we talk about machine readability, we typically think about problems around the usability of our instructions and regulations. It’s great for allowing machines to help us filter and search rules and regulations, making them easier to find and easier to pick out the bits relevant to you, and potentially also to take those relevant bits and insert them into different workflows, tools, or applications. Remember, reading regulatory text is just the first part of a longer process. As we digitize the management of that process, we want to extract bits of rules and regulations, particular articles or sections, and use them for different activities in the wider workflow, for instance to monitor compliance with those rules and regulations. So machine readability is really about helping people solve problems.
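A minimal sketch of that filter-and-extract idea, assuming rules are published as structured records with topic tags (the rule identifiers, tags, and text below are invented for illustration):

```python
# Invented example records; a real rulebook would carry its own
# identifiers, structure, and topic tags.
rules = [
    {"id": "3.1", "topic": "liquidity", "text": "Report the liquidity metrics quarterly."},
    {"id": "4.2", "topic": "capital", "text": "Maintain the minimum CET1 ratio."},
    {"id": "3.4", "topic": "liquidity", "text": "Use the published stress assumptions."},
]


def rules_on(topic):
    """Pick out just the articles relevant to one activity or workflow."""
    return [rule for rule in rules if rule["topic"] == topic]


# e.g. feed only the liquidity articles into a compliance-monitoring workflow
for rule in rules_on("liquidity"):
    print(rule["id"], rule["text"])
```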

Executability goes a step further. Now we are saying we want machines to be able to execute the logic that’s embedded in regulations. A law might say: if this thing happens, then this thing happens; otherwise, this other thing happens. Or in financial regulation, you might say this thing is equal to this thing divided by this thing, plus this thing minus this thing. Logic and calculations are things which computers can carry out and can do for us. And with machine executability you’re really starting to automate aspects of compliance. That’s kind of the next step. Executability is a bit further down the field than machine readability in general. But actually, you know, in data collection, it’s definitely an area we are exploring.
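As a toy example of that kind of embedded logic, with entirely invented percentages, the conditional and the calculation in a rule can be written directly as executable code:

```python
def required_buffer(exposure: float, is_high_risk: bool) -> float:
    """Toy rule: 'if the exposure is high risk, hold 8% against it; otherwise 4%'.

    The percentages are invented; the point is that the if/then logic and
    the calculation in the rule are expressed directly as executable code.
    """
    rate = 0.08 if is_high_risk else 0.04
    return exposure * rate


print(required_buffer(1_000_000, is_high_risk=True))   # 80000.0
print(required_buffer(1_000_000, is_high_risk=False))  # 40000.0
```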

Hudson Hollister

I do have one more question. How frequently are you contacted, or how much interest do you think there is, from regulatory sectors outside banking?

Angus Moir 

A lot.

We do speak to other UK public sector bodies about this. There are obvious places which are a bit like banking and financial sector regulations, where everyone kind of says, “this is obvious that we can do this stuff.” Tax is a great example. Welfare regulation is another great example where we’re trying to calculate Social Security payments. And those can often be determined by a very complicated set of rules.

The regulation of water and utilities, or of pretty much anything when you get down to it, will have a whole bunch of embedded logic, and you need to filter and search to make complex regulations and laws usable. So yes, there are definitely clear, obvious use cases, and the finance sector is one of those. But the more we speak to people, the more we realize that these are ideas you can apply in principle across pretty much any piece of legislation or regulation.

Hudson Hollister

Yes.

To the great benefit of all the constituencies — the regulators, the regulated, and the constituencies the regulations are intended to benefit.

Angus, thank you so much for spending time with us.
