Your Statistical Return Is a BRA Cheat Sheet. Are You Using It?


Your statistical return: you’ve spent weeks pulling together data, finally filed it, and you have the sleepless nights and stress levels to prove it. Now it’s sitting in a folder somewhere and you intend to spend the next 12 months trying to forget it ever existed. Totally understandable.

But it’s a shame, because buried inside it is most of the evidence base your Business Risk Assessment has been crying out for.

The FSA wrote to TCSP CEOs in 2020 to tell them, plainly, that they were seeing risk assessments that identified threats at a generic level, applied generic controls, and demonstrated no real understanding of the specific business behind them. A generic BRA, they said, does not show that you understand the risks within your own business. It just shows that you did the exercise.

Between 2023 and 2024, the FSA ran a two-phase BRA thematic across the TCSP sector. The resulting reports did something genuinely useful: they showed you exactly what BRA best practice looks like. Case studies, best practice examples, the works.


What the inspections found was that the two most commonly failed requirements are, at root, data problems: one upstream, in how firms engage with the National Risk Assessment, and one downstream, in how they reflect their own Customer Risk Assessment outcomes.

Both failures come down to knowing and understanding your own numbers. And you spend weeks at the beginning of every year generating exactly those numbers, in granular detail, in a regulatory return you have filed and mentally set fire to.

So, before you light that match, let’s look at what you actually have.

Each business needs to situate itself within the context of the NRA, but it also needs to parse what is coming up from its own client base through the CRA process. That second part is where most BRAs fall short, not because firms aren’t doing CRAs, but because they never translate the aggregate picture into the BRA itself.

The first requirement is to engage meaningfully with the National Risk Assessment, not just cite it. The best practice example showed a firm that built a structured comparison: each relevant NRA observation alongside the firm’s own data, with a note on whether they sat above, below or in line with the sector average.

The second requirement is to reflect the actual outcomes of your Customer Risk Assessments in your BRA. Not a description of your process, but the detailed picture of what your process has found. The actual, honest to goodness numbers. How many higher risk clients, what proportion of the total, where they are from, what your PEP exposure looks like.

The statistical return is where that aggregate picture lives. And that, precisely, is what the Code means when it requires your BRA to reflect the outcomes of your CRAs.

The return data captures your customer base by risk rating and type, drawn directly from your CRA outcomes. It records your PEP and Commercially Exposed Person counts. It maps the jurisdictional spread of your clients and their beneficial owners. It asks about your disclosure rates, your income concentration, your introducer usage. If you stop and look at that list, what you are actually looking at is a structured data breakdown of your client base by risk category.
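To make that concrete, here is a minimal sketch of the kind of aggregation the return already performs over your CRA outcomes. The field names and figures are hypothetical stand-ins, not the return's actual schema:

```python
from collections import Counter

# Hypothetical client records standing in for CRA outcomes.
clients = [
    {"risk": "high", "jurisdiction": "IM", "pep": False},
    {"risk": "standard", "jurisdiction": "IM", "pep": False},
    {"risk": "high", "jurisdiction": "AE", "pep": True},
    {"risk": "standard", "jurisdiction": "GB", "pep": False},
]

total = len(clients)
by_risk = Counter(c["risk"] for c in clients)
by_jurisdiction = Counter(c["jurisdiction"] for c in clients)
pep_count = sum(c["pep"] for c in clients)

print(f"Higher risk: {by_risk['high']} of {total} ({by_risk['high'] / total:.0%})")
print(f"PEP exposure: {pep_count} ({pep_count / total:.0%})")
print("Jurisdictional spread:", dict(by_jurisdiction))
```

Those three print lines are, in miniature, the CRA outcomes section the thematic reports held up as best practice.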

The best practice firms in the FSA thematic had dedicated CRA outcomes sections in their BRAs, showing risk distribution across the client base and analysis of jurisdictional exposure. They compared their PEP concentration against sector averages. And then, critically, they used those numbers to drive the rest of the document. Where the data showed elevated geographic exposure, the BRA addressed it. Where the higher risk proportion sat above the sector average, the risk appetite statement explained why that was acceptable and what controls justified the position.

Benchmarking against the NRA

The jurisdictional data in the return gives you both sides of the NRA comparison the regulator wants to see. Your numbers on one side, the published sector aggregates on the other. If your higher risk proportion looks different from the sector average, your BRA should say why, and your risk appetite statement should give some context around that as well.
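The above/below/in-line comparison is simple enough to sketch. This is a hypothetical illustration, not a regulatory formula; the 2% tolerance and the example figures are assumptions for the sake of the sketch:

```python
def benchmark(firm_prop, sector_prop, tolerance=0.02):
    """Classify the firm's proportion against the sector average."""
    if firm_prop > sector_prop + tolerance:
        return "above"
    if firm_prop < sector_prop - tolerance:
        return "below"
    return "in line with"

# Hypothetical figures: 18% higher-risk clients vs a 12% sector average.
position = benchmark(0.18, 0.12)
print(f"Firm sits {position} the sector average")  # prints "Firm sits above the sector average"
```

An "above" result is not a problem in itself; it is a prompt for the BRA to explain the position and for the risk appetite statement to own it.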

If your SAR rate looks low relative to your client base, your BRA should address that too. Not because a low number is necessarily wrong, but because the FSA will notice it and your BRA should get there first. Specific, evidenced, and written by you before anyone asks the question. That is what separates a BRA that works from one that just exists.

Leverage your data

You can now export your return data to Excel before you submit. If you didn’t do that this year, do it next year. Your risk profile is not static. Client numbers shift, jurisdictional exposure changes, PEP counts move. Year-on-year movement in any of these figures is exactly the kind of internal trigger that should be prompting BRA updates and board reporting between formal reviews. The return gives you a consistent, comparable snapshot every twelve months. Used properly, it can provide systematic trend and management information over time.
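As a sketch of what that trigger logic might look like, here is a hypothetical year-on-year comparison between two return snapshots. The metric names, figures and 10% threshold are all assumptions, not anything prescribed:

```python
def yoy_triggers(prev, curr, threshold=0.10):
    """Flag metrics whose relative year-on-year movement exceeds the threshold."""
    flags = {}
    for metric, old in prev.items():
        new = curr.get(metric, 0)
        if old and abs(new - old) / old > threshold:
            flags[metric] = (old, new)
    return flags

# Hypothetical snapshots from two successive statistical returns.
prev = {"clients": 400, "higher_risk": 48, "peps": 6}
curr = {"clients": 410, "higher_risk": 61, "peps": 9}

for metric, (old, new) in yoy_triggers(prev, curr).items():
    print(f"{metric}: {old} -> {new} -- consider a BRA update")
```

Here the modest client growth passes quietly, while the jumps in higher-risk clients and PEPs are exactly the movements a board would want surfaced between formal reviews.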

If you want a framework for building the BRA itself, our article Weigh Anchor takes you through the structure. The data in your statistical return is what makes that structure real rather than generic. Which is, as it happens, exactly what the FSA has been asking for since 2020.