Board Briefing - AI, Cyber Risk and Proportionate Oversight
“With the rapid growth of AI, how worried should Boards be about cyber risk, and should they be asking for different or additional assurance?”
That question reflects the feeling that AI is being positioned as both an opportunity and a threat. Boards are rightly asking how to support innovation while remaining confident that resilience and governance are not being compromised.
Artificial intelligence is reshaping how organisations operate, and it presents significant opportunity:
Improved productivity
Faster analysis and decision support
Service optimisation
Better targeting of resources
Automation of repetitive processes
At the same time, AI is changing the cyber and digital risk landscape.
It increases the speed and sophistication of attacks, expands dependency risk across supply chains, and introduces new data and model governance considerations.
The Board’s role is to ensure that opportunity is realised within clear guardrails, and that governance keeps pace without becoming a barrier.
Effective oversight is about balance. Too little governance creates unmanaged exposure; too much creates unnecessary friction. The focus should be proportionate control, visibility and accountability.
1. AI as a Cyber Threat Accelerator
AI is being used by attackers to improve phishing, impersonation, vulnerability discovery and automated reconnaissance.
These are familiar risks, but AI has increased the tempo. Where controls are weak, exposure happens faster.
Where fundamentals are strong, AI does not invalidate them.
Board oversight should focus on whether core controls are consistently applied and properly assured.
2. AI as a Data Risk Accelerator
AI systems are heavily dependent on data. They consume, generate and transform large volumes of information at speed.
As AI adoption increases, data risk accelerates in parallel.
This includes:
Greater volume of personal data processing
Expanded data sharing across platforms
Increased reliance on third-party cloud services
More complex data lineage and transformation
Automated generation of derived data, classifications and risk scores
“AI systems are only as reliable as the data they rely on.”
Where data quality or integrity is poor, AI does not correct the problem.
It amplifies it.
Inaccurate, incomplete or biased data can result in flawed outputs being produced at scale and at speed.
AI is a multiplier of both data value and data weakness.
Risks can arise where:
Data is reused beyond its original purpose
Sensitive data is incorporated into AI workflows without appropriate controls
Data retention and minimisation disciplines are diluted
Outputs are stored without clear provenance
Data lineage becomes opaque
Data transfers occur through embedded supplier AI features
Boards should expect assurance that:
Data flows involving AI are documented
Data Protection Impact Assessments are completed where required
Data lineage and provenance are understood
Data quality controls are applied to both input and generated data
Retention policies extend to AI-generated outputs
Supplier data processing involving AI is visible and governed
AI does not remove existing data protection and governance obligations. It increases the speed and scale at which data is processed and decisions are influenced.
If your organisation is still improving data quality and data governance, think carefully about whether you are ready to layer on additional complexity, or whether the foundations should come first.
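The principle that quality controls should apply to both input data and AI-generated data can be illustrated with a simple batch gate. This is a minimal sketch: the record shape, field names and the 2% threshold are assumptions for illustration, not a prescribed standard.

```python
def quality_gate(records, required_fields, max_missing_rate=0.02):
    """Reject a batch if too many records are missing required fields.
    The same gate applies whether records are source data or AI output."""
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    rate = missing / len(records)
    return rate <= max_missing_rate, rate

# Hypothetical batch: one AI-derived risk score is missing.
batch = [
    {"tenant_id": "T1", "risk_score": 0.2},
    {"tenant_id": "T2", "risk_score": None},
    {"tenant_id": "T3", "risk_score": 0.7},
]
ok, rate = quality_gate(batch, ["tenant_id", "risk_score"])
print(ok, round(rate, 2))  # one gap in three records fails a 2% threshold
```

The point for oversight is not the code itself but the symmetry: generated outputs pass through the same quality checks as source data before they are stored or reused.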
3. AI-Specific Data and Model Risks
AI also introduces new governance considerations that sit across cyber, data protection and operational risk.
Synthetic and Generated Data
AI systems may generate scores, summaries or classifications that influence decisions. The logic and the resulting outputs must be explainable, verifiable and open to correct interpretation.
Risk arises where generated outputs are treated as verified data, stored without traceability, or incorporated into tenant records without a clear audit trail.
Boards should expect assurance that data quality governance extends to AI-generated outputs.
Model Drift
Over time, models can degrade, behave inconsistently or introduce bias.
Initial approval is not sufficient; ongoing monitoring and validation are required.
Boards should expect visibility of how higher-impact AI systems are reviewed and revalidated to ensure data and logic can be trusted.
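Ongoing monitoring of this kind is commonly implemented with simple distribution-shift statistics rather than complex tooling. The sketch below uses the Population Stability Index (PSI), a widely used drift measure; the 0.1 and 0.25 thresholds are conventional rules of thumb, not regulatory requirements, and the simulated data is purely illustrative.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (expected)
    and a recent sample (actual) of a model input or score."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values above the baseline range

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if v < edges[i + 1]:
                    counts[i] += 1
                    break
        # Small floor avoids division by zero for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]       # approval-time data
recent_stable = [random.gauss(0, 1) for _ in range(5000)]  # unchanged population
recent_shifted = [random.gauss(1, 1) for _ in range(5000)] # drifted population

print(psi(baseline, recent_stable) < 0.1)    # below the "no drift" rule of thumb
print(psi(baseline, recent_shifted) > 0.25)  # above the "significant drift" rule of thumb
```

A periodic check of this shape, run against each higher-impact model's key inputs and scores, is the kind of revalidation evidence Boards can reasonably ask to see summarised.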
Prompt and Output Risk
Where generative AI or chatbots are deployed, risk can arise through both inputs and outputs.
On the input side:
Sensitive information may be entered into external AI systems without appropriate controls
Staff may inadvertently disclose personal or confidential data in prompts
External users may attempt to manipulate the system through adversarial prompts
Adversarial prompting refers to deliberately crafted inputs designed to override safeguards, extract restricted information or produce inappropriate outputs.
On the output side:
Responses may be inaccurate, incomplete or misleading
Fabricated information may be presented confidently
Outputs may reflect bias or inappropriate content
Advice may be interpreted as authoritative when it should not be
If AI-generated responses are used in tenant communications, advice, safeguarding interactions or policy interpretation, the impact can extend beyond technical error into legal and reputational exposure.
Boards should expect assurance that:
Clear policies exist for acceptable AI use
Sensitive data is not exposed through prompts
Higher-risk chatbot use cases are formally approved
Guardrails and content controls are configured appropriately
Human oversight exists for material interactions
Outputs are monitored and periodically reviewed
Staff are trained on limitations and appropriate use
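The expectation that sensitive data is not exposed through prompts can be enforced technically as well as through policy. The sketch below shows a minimal redaction step applied before a prompt leaves the organisation's boundary; the patterns and function names are illustrative assumptions, and a real deployment would rely on proper data loss prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- far from exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{3,4}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
    "NI_NUMBER": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I
    ),
}

def redact_prompt(text: str) -> str:
    """Replace likely personal identifiers with placeholders before
    a prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Tenant jane.doe@example.org (07700 900123) asked about arrears."
print(redact_prompt(prompt))
```

The governance point is that the control sits in the workflow, not in staff memory: training covers judgement, while the gateway covers the predictable identifier patterns.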
Malicious Attacks on Models
AI systems may be targeted through data poisoning, adversarial inputs or prompt manipulation.
These risks sit at the intersection of cyber security and data integrity.
Model security and training data integrity should form part of broader cyber assurance.
Opaque Decision Influence
Even where humans remain involved, AI may shape recommendations or prioritisation in ways that are not easily explained.
Where decisions materially affect tenants or colleagues, explainability and accountability are governance requirements, not optional enhancements.
4. AI-Assisted Development and Software Governance
AI-assisted coding is now widely accessible.
Developers and non-technical teams alike can generate scripts, automations, integrations and application components rapidly. This can improve productivity and reduce development time.
It also introduces governance risk.
AI-generated code may:
Contain security vulnerabilities
Introduce insecure dependencies
Circumvent secure development standards
Be deployed outside formal change control
Lack clear ownership or support arrangements
The issue is not that AI writes poor code. It is that it lowers the barrier to creating and deploying software without proportionate review.
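The review gap can be made concrete with a classic example: quickly generated code often builds SQL queries by string concatenation, a pattern that secure development review exists to catch. This is an illustrative sketch using an in-memory SQLite table and hypothetical names, not code attributed to any specific AI tool.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tenants (id INTEGER, name TEXT)")
conn.execute("INSERT INTO tenants VALUES (1, 'Alice'), (2, 'Bob')")

def lookup_unsafe(name):
    # Pattern often seen in hastily generated code: SQL built from strings.
    return conn.execute(
        f"SELECT id, name FROM tenants WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterised query -- the fix a code review should enforce.
    return conn.execute(
        "SELECT id, name FROM tenants WHERE name = ?", (name,)
    ).fetchall()

malicious = "x' OR '1'='1"
print(lookup_unsafe(malicious))  # injection returns every row in the table
print(lookup_safe(malicious))    # input treated as data: no rows returned
```

Both functions "work" in normal use, which is exactly why deployment without review is the risk: the difference only surfaces under hostile input.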
Boards should not be reviewing development practices directly. They should be confident that:
AI-assisted development is covered by existing secure development lifecycle controls
Code deployed into production undergoes appropriate review and security testing
Internally developed tools are visible, documented and supported
Change management standards apply regardless of how code was generated
Software in use is acquired, supported and governed in line with organisational standards
Innovation should be encouraged, but development standards and governance should not be diluted.
AI-assisted code must meet the same security, assurance and accountability expectations as traditionally developed software.
5. Detection, Response and Financial Resilience
As attack speed increases, response maturity becomes more important.
Boards should have assurance that:
Security monitoring is active and capable of identifying confirmed incidents
Incidents are formally classified and escalated
Incident response roles and leadership are defined
24-hour detection and response capability is available, internally or through a managed provider
Incident response plans are periodically exercised
Cyber insurance cover is appropriate to realistic exposure
Insurance is part of resilience planning; it does not replace effective controls.
6. AI in the Supply Chain
Many organisations encounter AI through supplier upgrades rather than internal development.
AI capability may be introduced through SaaS platforms, analytics providers or contractor tooling.
This can change how data is processed, how decisions are influenced and how systems interconnect.
Boards should expect visibility of:
Which critical suppliers are introducing AI functionality
Whether AI features influence material operational decisions
Whether organisational data is used for model training
Whether supplier assurance reviews include AI governance considerations
7. What Proportionate Board Assurance Looks Like
Boards do not need technical model metrics. They should expect structured assurance across six areas.
Identity and Access Resilience
Multi-factor authentication fully deployed
Privileged access tightly controlled
Regular review of high-risk accounts
Platform and Vulnerability Integrity
Critical vulnerabilities resolved within agreed timeframes
No unsupported systems in use
External exposure monitored and managed
Detection and Response Capability
Continuous monitoring capability
Clear incident classification and escalation
Defined response leadership
24-hour response coverage
Periodic incident exercises
AI Use Visibility and Oversight
Documented inventory of AI use cases
Proportionate risk classification
Impact assessments where required
Ongoing monitoring of higher-impact systems
Clear accountability for deployment
Supplier and Dependency Governance
Formal register of critical digital suppliers
Periodic assurance over supplier cyber maturity
Visibility of AI integration within supplier services
Clear recovery and notification expectations
Financial and Reputational Preparedness
Appropriate cyber insurance limits
Clear regulatory escalation pathways
Defined communications ownership
8. Alignment with Recognised Governance Principles
International guidance such as the OECD AI Principles and the NIST AI Risk Management Framework emphasises human-centric technology use, fairness, accountability, transparency, robustness and ongoing risk management.
Board oversight should ensure that AI adoption aligns with these principles in practice, particularly where AI influences tenant-facing services or sensitive data processing.
This does not require formal adoption of a specific framework. It requires structured governance aligned to recognised good practice.
9. Legal and Regulatory Considerations
The regulatory landscape for AI is evolving and remains fragmented across jurisdictions.
In the UK, the government has adopted a pro-innovation and pro-growth stance. Rather than introducing a single overarching AI Act, the current approach relies on existing regulators applying established legal frameworks to AI use within their sectors.
This means AI risk is expected to be managed within:
Data protection law
Equality and discrimination law
Consumer and contract law
Sector regulatory standards
Corporate governance and accountability frameworks
The approach is adaptive rather than prescriptive.
In contrast, the European Union has taken a more precautionary path through the EU AI Act, introducing a formal risk classification regime and specific prohibited uses and obligations for high-risk AI systems. Singapore has similarly focused on governance frameworks that emphasise explainability, accountability and ethical use.
For housing providers in England, the Regulator of Social Housing has not identified AI as a standalone risk category. However, Boards should not interpret that as absence of oversight.
The regulator will reasonably expect that AI adoption is governed within existing expectations, particularly in areas such as:
Data quality and integrity
Accurate regulatory returns
Fairness in decision-making
Tenant safety and service standards
Value for money
Robust governance and risk management
If AI influences rent setting, repairs prioritisation, tenant communications, safeguarding flags or performance reporting, governance expectations already in place will apply.
The question is not whether AI is regulated separately. It is whether AI use remains aligned with existing legal and regulatory duties.
Boards should therefore expect assurance that:
AI use is documented and accountable
Data protection impact assessments are completed where required
Equality and fairness considerations are assessed where AI influences decisions
Regulatory reporting processes are not compromised by automated data flows
Legal and contractual terms with suppliers reflect AI processing realities
AI governance should not sit outside existing risk and assurance structures. It should be integrated into them.
Closing thoughts - taking a balanced position
Boards do not need to be alarmed by AI, but concern is warranted where:
AI use is undocumented
Supplier AI adoption is not visible
Data lineage and integrity are unclear
Core cyber controls are inconsistent
Detection and response capability is not well evidenced
Dependency risk is not visible in ARAC reporting
AI is an amplifier of both opportunity and risk, and it also introduces distinct model and data governance considerations.
Board oversight should therefore focus on four questions:
Do we have a deliberate strategy on where we want to implement AI in our business?
Do we have visibility of where and how AI is being used?
Are our cyber and data fundamentals strong enough to support those uses?
Are ethical expectations and acceptable uses clear when outcomes affect tenants, colleagues or reputation?
If those answers are clear, AI becomes an opportunity managed within guardrails, not a risk operating beyond them.