By Peter Luck, Managing Director, ROCC
Artificial intelligence (AI) has arrived in the social housing sector with a bang. From predictive repairs to tenant engagement tools, AI is now being embedded in the systems and software that social housing providers rely on day-to-day.
As AI continues to permeate our professional lives, most of us are aware of what it can do, for better and worse. But do we really know where the data that powers it goes?
If I’m being completely honest, I’m not sure many organisations do. So, let’s dig a little deeper.
The invisible infrastructure behind AI
Social housing providers have quite rightly embraced the opportunities presented by AI. After all, we know that smarter, AI-driven systems can improve the delivery of repairs and maintenance, creating better outcomes for organisations and the tenants they serve. However, beneath these benefits lies a more complex reality.
AI depends on vast, interconnected data pipelines that are often global in nature. Data may be processed, stored, replicated and analysed across multiple jurisdictions, forming a complex chain that is practically impossible for you and me to fully understand.
This is where the concept of data sovereignty becomes critical. Where is your data kept? Who can access it? Which legal framework is it governed under? At the national level, the UK government is investing heavily in measures to control AI infrastructure and data ecosystems, but for organisations, this isn’t as straightforward.
As AI adoption in social housing continues to accelerate, governance needs to catch up. Currently, there is a gap, and we don’t want it to widen.
Housing associations and local authorities are responsible for a large amount of sensitive data, from private tenant details to financial information. When this data is fed into AI systems, especially those provided by third parties, we have to question whether it’s secure.
As an organisation, do you know the answer to these questions?
- Where is your data processed?
- Is your data leaving the UK?
- Is it being used to train external models?
- Can evidence of this be provided to regulators, boards and tenants?
For many, the answer is probably “I’m not too sure”.
Are we asking suppliers enough?
Social housing providers must seek the right assurances from technology suppliers on data capture and use. Suppliers are often quick to reassure on “compliance” or “GDPR alignment”, but this doesn’t guarantee data sovereignty.
All housing organisations need to push harder, as ultimately accountability sits with them. This includes asking:
- Can you guarantee data residency within the UK or the EU?
- What is your AI supply chain: models, hosting, subcontractors?
- Is our data used for purposes other than our own?
- How do you evidence proper use?
It’s also vital to know the answers because poor data management and poor data quality can produce flawed AI outcomes that disrupt an entire organisation and its services.
A pivotal moment for the sector
Achieving data sovereignty doesn’t mean shutting the door on innovation. True data sovereignty is not about dismissing the use of AI, but ensuring that an organisation has control over where their data is going and how it’s being used. This includes knowing the associated risks.
No one expects every housing professional to become an AI expert, but with AI now deeply rooted in many social housing providers’ core services, it’s essential to understand how to maintain transparency and control over your data.
Regulators demand it, and tenants deserve to have their private information handled properly. Remember, reputational risk sits with the organisation, not the technology provider.
The promise of AI in social housing is real, but so is the responsibility. How can we say we are truly in control of it if we don’t know where our data is going?
It’s time to start asking suppliers the right questions.