London’s planning system produces 15,114 small site decisions across 33 boroughs in three years. The patterns in that data are invisible at the level of individual applications. As AI agents move into development appraisal, structured planning datasets become the infrastructure that makes informed acquisition possible.

London’s planning system produces an extraordinary volume of decision data. Every application generates a paper trail: proposal descriptions, officer assessments, committee minutes, decision notices, conditions, refusal reasons. Multiply that across 33 boroughs and three decades and you have one of the richest records of development decision-making in the world.

Almost none of it is structured.

The data sits in PDF documents, behind portal search interfaces, in Excel exports with inconsistent column names, and in the institutional memory of planning officers who have been at the same desk for fifteen years. Developers navigate this through experience, through local agents who know their borough, and through the slow accumulation of personal precedent. A developer who has submitted forty applications in Lewisham has a mental model of what gets approved there. That model is valuable but it is not transferable, not scalable, and not available to anyone starting their first scheme.

This has been the equilibrium for a long time. It is starting to shift.

The next query

AI assistants are already being used for property research. Developers and their consultants use them to summarise planning policy, draft supporting statements, and pull comparable sales data. This is useful but it is essentially document processing — faster reading.

The more consequential step is agents that can query structured datasets directly. Not searching for a document that might contain an answer, but asking a precise question and receiving a precise, referenced response. “What is the approval rate for conversions in Brent?” is a question that has a definitive answer. It should not require thirty minutes of portal navigation and mental arithmetic to retrieve it.
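A minimal sketch of what "a precise question, a precise answer" looks like against structured data, using an in-memory SQLite table. The table name, column names, and rows here are all illustrative assumptions, not the dataset's actual schema or figures:

```python
import sqlite3

# Hypothetical schema -- names and values are illustrative, not the dataset's own.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE applications (
        borough   TEXT,
        site_type TEXT,
        decision  TEXT  -- 'approved' or 'refused'
    )
""")
conn.executemany(
    "INSERT INTO applications VALUES (?, ?, ?)",
    [("Brent", "conversion", "approved"),
     ("Brent", "conversion", "refused"),
     ("Brent", "conversion", "approved"),
     ("Brent", "infill", "refused")],
)

# "What is the approval rate for conversions in Brent?" as one query,
# not thirty minutes of portal navigation.
row = conn.execute("""
    SELECT ROUND(100.0 * SUM(decision = 'approved') / COUNT(*), 1)
    FROM applications
    WHERE borough = 'Brent' AND site_type = 'conversion'
""").fetchone()
print(row[0])  # 66.7 on this toy data
```

The point is not the SQL itself but the shape of the interaction: the agent asks one parameterised question and gets back a number it can cite, with every contributing application traceable underneath it.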

This requires the data to exist in structured form. For most of London’s planning history, it has not.

The London Small Sites Planning Dataset

Perfect Scale has built what we believe is the most comprehensive structured dataset of small site planning outcomes in London: 15,114 applications across all 33 boroughs, spanning January 2023 to March 2026. Every application is classified by site type (conversion, infill, extension, backland, and so on), area within the borough, conservation status, density, determination time, decision route, refusal reason taxonomy, and case officer where available.

The dataset reveals patterns that are invisible at the level of individual applications but unmistakable at scale.

Across all 33 boroughs, six-unit schemes have a combined refusal rate of 35.7% — 8.8 percentage points below the London mean of 44.5%. This is not random variation. It reflects the intersection of committee referral thresholds, affordable housing policy cliffs, and the natural massing capacity of a typical London plot. But you cannot see it by looking at one borough’s portal.

The median determination time gap between approved and refused applications is 11 days London-wide, but ranges from 2 days in Richmond to 46 days in Hammersmith and Fulham. A refused application in Hammersmith and Fulham spends, at the median, six and a half additional weeks in the system compared to an approval. In Richmond, there is almost no difference. This tells you something about how different boroughs handle refusals — whether they are swift policy decisions or protracted negotiations that eventually fail.

Of the 5,498 conversion applications decided since January 2023, approval rates range from 93.6% in Kensington and Chelsea to 22.2% in Barking and Dagenham — a 71.4 percentage point spread. For the same development type. In the same city. Under the same national planning framework.
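The spread figure is simple arithmetic once decisions are structured records rather than portal entries. A sketch of the per-borough aggregation, using toy records that stand in for conversion decisions (the real figures come from the dataset, not this code):

```python
from collections import defaultdict

def approval_rates(records):
    """Per-borough approval rate (%) from (borough, approved) pairs."""
    totals = defaultdict(lambda: [0, 0])  # borough -> [approved, decided]
    for borough, approved in records:
        totals[borough][0] += int(approved)
        totals[borough][1] += 1
    return {b: round(100 * a / n, 1) for b, (a, n) in totals.items()}

# Toy records, not real decisions.
records = [
    ("Kensington and Chelsea", True), ("Kensington and Chelsea", True),
    ("Barking and Dagenham", False), ("Barking and Dagenham", True),
]
rates = approval_rates(records)
spread = max(rates.values()) - min(rates.values())
print(rates, spread)  # spread is the percentage-point gap between boroughs
```

Applied to the 5,498 real conversion records, the same two lines of arithmetic produce the 93.6% to 22.2% range and the 71.4 point spread quoted above.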

Every one of these numbers is traceable to specific applications. They are not estimates, not projections, not sentiment. They are counts.

What changes when this is queryable

Consider a developer evaluating a site before a viewing. Their AI assistant, connected to a structured planning dataset, retrieves the following in seconds: the approval rate for that site type in that ward, the typical determination time, the three most common refusal reasons for similar schemes in the area, the case officers most likely to be assigned, and their individual track records.

None of this is a prediction. It is a set of base rates — the empirical starting point before any site-specific judgement is applied. The developer still needs an architect, still needs to understand the streetscene, still needs to read the local plan. But they arrive at the site visit with a quantitative frame that previously required years of local experience to develop.

This changes acquisition decisions. A developer looking at a conversion opportunity in Newham, where the borough approves 29.9% of conversions, and an identical building in Kensington and Chelsea, where it approves 93.6%, can price the planning risk into the land bid before they have spoken to a single agent. The 64 percentage point gap between those boroughs is not a marginal consideration. It is the dominant variable in the development appraisal.
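One way to see why the gap dominates the appraisal is a crude expected-value residual: weight the residual land value by the base-rate approval probability and subtract the sunk planning costs. All the figures below are illustrative assumptions, not dataset values, and the model deliberately ignores appeals, resubmissions, and finance costs:

```python
def risk_adjusted_bid(gdv, build_cost, target_profit, planning_costs, approval_rate):
    """Expected-value residual land bid under planning risk.

    Crude simplification: if refused, planning costs are sunk and the
    site is worth nothing to this scheme (no appeal, no resubmission).
    """
    residual_if_approved = gdv - build_cost - target_profit
    return approval_rate * residual_if_approved - planning_costs

# Identical building, identical illustrative costs; only the base rate differs.
kc_bid = risk_adjusted_bid(2_000_000, 900_000, 400_000, 50_000, 0.936)
newham_bid = risk_adjusted_bid(2_000_000, 900_000, 400_000, 50_000, 0.299)
print(round(kc_bid), round(newham_bid))  # 605200 159300
```

On these toy numbers the same building supports a land bid nearly four times higher in the 93.6% borough than in the 29.9% borough, which is the sense in which the base rate is the dominant variable rather than a marginal adjustment.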

It also changes how planning consultants add value. When the base rate data is accessible to any developer with the right tools, the consultant’s role shifts from information retrieval to interpretation: why is this site different from the base rate, and what can be done to move it above the average? That is a higher-value conversation than reciting approval statistics from memory.

What it does not change

Base rates are not predictions. A 94% borough approval rate does not mean a specific scheme has a 94% chance of consent. Every site is particular. Design quality, neighbour context, officer discretion, committee dynamics, and the political weather all intervene. The data tells you what has happened to schemes like yours. It does not tell you what will happen to yours.

This is an important distinction, and one that will be tested as AI agents become more capable. The temptation to present base rates as probabilities is strong. It should be resisted. A borough that approves 70% of a given site type is not issuing a 70% guarantee. It is telling you that, historically, the typical application of that type has been approved at that rate. The gap between those two statements is where professional judgement lives.

But entering a negotiation knowing the base rate, knowing whether you are operating in a 30% borough or an 80% borough for your specific development type, is the difference between an informed bid and a guess.

Where this is going

The London Small Sites Planning Dataset is currently available through Perfect Scale’s borough reports, which provide area-level profiles, site type playbooks, refusal intelligence, officer patterns, and viability modelling for all 33 London boroughs. API access for development platforms and AI integration is under consideration.

The planning system will remain complex, political, and human. Structured data does not simplify it. But it does make the complexity legible, and legibility is the precondition for better decisions.
