When AI meets planning
Roy Pinnock and Maria Polycarpou look at the need to strike a balance between efficiency and clarity when it comes to the use of AI in the field of planning.
The Planning Inspectorate’s (PINS) recent guidance (September 2024) relating to the use of Artificial Intelligence (AI) as part of any appeal, application or examination handled by PINS both creates opportunity and suggests proceeding with caution.
The PINS Guidance recognises that “AI can be used to support our work and that this can be done positively when transparency is used”.
AI tools such as automated image analysis and text processing could accelerate practice, and PINS is tapping into those efficiencies. Given the ongoing need to achieve efficiencies in planning, embracing the benefits of AI is a logical step in the right direction.
Standards and Prizes
MHCLG’s Digital Planning Programme is already moving things forward, with the goal of shifting from “a documents-based system to one that is data-driven, standards-based, and powered by modern user-centred products and services”. The planning.data.gov.uk site is being used to help build software and services to support this, alongside https://opendigitalplanning.org/.
The size of the prize from MHCLG’s perspective is clear:
- Up to 50% of planning applications are invalid on submission;
- Only around a third of Local Planning Authorities have adopted a local plan in the last 5 years;
- There are 34 days of delay per invalid planning application;
- Engagement with the development of Local Plans can be less than 1% of a district’s population;
- 50% of planning officer time is spent on ‘straightforward’ homeowner applications;
- Local Plans take an average of 7 years to produce.
Local Plans
Local Plan consultations are a prime candidate to benefit from AI applications, given that average examination time currently sits at 19 months. Being able to automatically categorise consultation responses and pick themes and trends out of tens of thousands of comments would potentially improve transparency and speed. So too would restructuring the cumbersome process of Sustainability Appraisal, so that the impacts of different plan options can be assessed and presented more clearly, concisely and accessibly.
PINS has already awarded a contract to Oxford Global Projects UK to explore a range of uses for AI in the examination of local plans. Interested parties could also utilise AI for the purposes of sifting the vast Evidence Base libraries, picking out evidence such as predictive and trend data based on current outcomes, and exhibiting the information via 3D renderings or image processing.
Guidance challenges
The PINS Guidance will play a major role as it evolves. One of the areas it will need to grapple with is transparency. The Guidance requires appellants to clearly disclose any use of AI, detailing the tools used and their specific outputs. This is to prevent the risk of generative AI output, false images or false precedents being relied upon as purported evidence in planning appeals. False images could be inadvertently used where photographs and maps of an area are involved, for example where a model generating 3D imagery wrongly processes data inputs from a mapping database.
Although there has not yet been a planning-related instance of a false precedent being generated, there was a First-tier Tribunal (Tax) case in which an appellant relied on case law which did not exist and had instead been generated by an ‘AI system such as ChatGPT’. Translating this to planning, any data in a local plan evidence base that has come from an AI chat system could be subject to the same risk of false outputs.
Disclosure ensures that decisions are not solely data-driven but remain grounded in the rigour of human oversight, protecting the planning process from becoming a legal black box.
The guidance asks parties to notify PINS of any AI use:
“You use AI to create or alter any part of your documents, information or data, you should tell us that you have done this when you provide the material to us. You should also tell us what systems or tools you have used, the source of the information that the AI system has based its content on, and what information or material the AI has been used to create or alter.
In addition, if you have used AI, you should do the following:
- Clearly label where you have used AI in the body of the content that AI has created or altered, and clearly state that AI has been used in that content in any references to it elsewhere in your documentation.
- Tell us whether any images or video of people, property, objects or places have been created or altered using AI.
- Tell us whether any images or video using AI has changed, augmented, or removed parts of the original image or video, and identify which parts of the image or video has been changed (such as adding or removing buildings or infrastructure within an image).
- Tell us the date that you used the AI.
- Declare your responsibility for the factual accuracy of the content.
- Declare your use of AI is responsible and lawful.
- Declare that you have appropriate permissions to disclose and share any personal information and that its use complies with data protection and copyright legislation.”
Where it gets p(AI)nful
Beware: the definition of AI in the guidance is set out very widely, as “technology that enables the computer or other machine to exhibit intelligence normally associated with humans”.
Such a definition invites a very broad interpretation of what could be considered AI: spellcheck and other editing functions such as autocomplete could easily fit it. Would those need to be disclosed? What about metrics calculated using Excel formulas?
Although firms already log what they generate on specific matters, it would be quite onerous for individuals to declare responsibility for the factual accuracy of any content, and that the use of AI is responsible and lawful. The guidance here risks imposing hefty disclosure requirements which would undo the efficiencies and benefits that AI use has been encouraged to produce in the first place.
It has been suggested that the PINS guidance should be specific about its scope, the scenarios it covers and the AI tools falling within it.
Lessons from Teesside
Although the guidance is non-statutory, inspectors have begun to use it. The inspector in the H2 Teesside Project examination issued a letter asking all parties whether they had used AI in any submissions and to give details in line with that guidance.
The responses from the Interested Parties all confirmed that there had been no use of AI in their submissions. However, the lack of clarity in the guidance is echoed in the response from the Highways Authority planning manager, who had the following to say: “I have not used AI as part of any response. I am unsure if this is the main question you require answering, or, if this is a request to me as a preliminary check ahead of me responding to a less erroneous seeming question.”
Equivalent letters do not appear to have been issued in any other examinations, and the definitive responses from the H2 Teesside Project indicate that parties have not felt obliged to disclose the use of traditional editing functions such as spellcheck or Grammarly, which have most likely been used in some form in the documents submitted during the examination process.
The Inspector’s acceptance of these statements will reassure those sceptical about the undue looseness of the definition of AI in the guidance. However, as Inspectors increasingly look to the guidance on the use of AI in planning applications, they may take a more rigorous approach to ensuring that disclosure is complied with.
Conclusion
The ethical and transparent use of AI is pivotal where image and content manipulation, AI hallucinations and misinformation risk being introduced into the case management process. However, the current PINS guidance needs to be clarified and illustrated with examples, so that practitioners and individuals alike are not faced with onerous disclosure requirements. Such requirements could prove counterproductive, creating the very time lags that the use of AI is trying to eliminate in the first place.
Those submitting documents should be reminded to take due care and diligence in evidence production and not to mislead the inspector or inquiry.
For more information on how AI will change the planning system please visit our blog: https://www.planninglawblog.com/how-will-ai-change-the-planning-system/
Roy Pinnock is a partner and Maria Polycarpou is a Trainee Solicitor at Dentons. This article first appeared in the firm’s Planning Law Blog.