We value the creativity and authentic voice of our teams, service users and communities. Our approach to AI is about supporting and enhancing how we work, rather than replacing human creativity. We will only adopt AI where it genuinely enhances our capacity to serve communities while upholding our values of equity, dignity, collaboration and empowerment.
Our use of AI is guided by rigorous policies and community oversight. It is intended to enhance, not replace, our person-centred approach: technology should amplify the voice of lived experience, improve service accessibility for marginalised communities, and maximise the impact of funding investments, while maintaining the trust and dignity that underpin all our work.
This statement explains our current approach to AI and how we do, and don't, use it across the organisation.
There are some significant benefits to using AI, including:
- Enhanced service delivery: AI can help streamline administrative tasks, freeing up staff time for direct community engagement and support work.
- Improved accessibility: AI tools can make health information more accessible to marginalised communities through translation, simplified language, or alternative formats.
- Data insights: AI can help identify patterns in health inequalities and service gaps.
- Increased time and resources: AI can help maximise limited resources by automating routine tasks or helping with grant writing and reporting requirements.
We also acknowledge that AI technology
- Can produce errors, inaccuracies, or “hallucinations” (false information presented as fact).
- May not understand cultural context, trauma, or the nuanced needs of marginalised communities.
- Cannot replace the empathy, creativity, and judgment of trained staff.
- Can perpetuate and amplify existing biases related to race, gender, sexuality, disability, and other protected characteristics.
- Is evolving rapidly, requiring ongoing vigilance about new risks.
We are committed to
- Prioritising human creativity and lived experience over automation.
- Discussing and reviewing ethical dilemmas openly, including with our service users and wider communities.
- Monitoring AI outputs for bias and inaccuracy.
- Ensuring every piece of AI generated content is reviewed and approved by a qualified member of staff.
- Clearly acknowledging where AI has contributed significantly to a piece of work (e.g. “with support from AI”).
- Ensuring full consent is obtained before AI is used in any way that involves a service user's story, voice, likeness, etc.
- Developing staff training opportunities and building a culture of shared learning and support.
- Considering the environmental impact and prioritising the use of AI providers which operate data centres powered by renewable energy.
- Monitoring developments in AI ethics, regulation, and community concerns.
- Carefully assessing the data security of any AI tools before use.
- Listening to concerns about AI use and responding meaningfully.
- Ensuring technology enhances rather than replaces authentic community participation.
- Maintaining our community health and peer-led approaches as the foundation of our work.
How we currently use AI
BHA uses AI tools in limited administrative and operational capacities to enhance our team’s capacity to serve communities effectively:
- Administrative support: We may use AI writing assistants to help draft grant applications, funding reports, policy documents, and internal communications
- Content creation: AI tools occasionally support the development of training materials, promotional content, and educational resources
- Data analysis: We may use AI-powered tools to help identify trends in anonymised data
What we do NOT use AI for
AI is not used to
- Replace human contact in any of our support services, community engagement activities or community collaborations
- Make decisions about service user eligibility, care plans, or access to services
- Make safeguarding decisions or risk assessments
- Make decisions that affect employment and volunteer matters
- Approve decisions that affect funding allocation and strategic priorities
- Input any content involving service users’ personal stories without consent
- Input or expose any identifiable special category data (such as people's health information, demographic details, etc.) to AI tools
Your Rights
Our work is grounded in community engagement principles and co-production with the communities we serve. As a service user, community member, or partner, you have the right to ask questions and contact us about our AI use. You also retain all existing rights under GDPR to access, correct, or delete your personal information.
Accountability and Review
We will
- Review this statement and our AI practices annually, or more frequently if significant changes occur
- Seek feedback from service users, staff, and community partners about AI use
- Update our approach based on emerging evidence, regulatory guidance, and community input
- Report concerns about AI harms or failures transparently
Please also see
Questions or Concerns?
If you have questions about this statement or concerns about AI use at BHA, please contact us at info@thebha.in-beta6.co.uk.