Dr Becky Allen
Neighbourhood health is firmly on the government agenda as a means of reducing health inequalities and their associated socio-economic disparities. However, there is a lack of established practice around referral, monitoring and evaluation, which involve multiple stakeholders, each with differing data and reporting requirements. Technology could provide a cohesive approach to social prescribing / neighbourhood health delivery, making it more streamlined and sustainable, but this capability is relatively unexplored.
Yve Smith
This project explores how to support children (ages 7–11) to develop a foundational understanding of artificial intelligence, through hands-on interaction with simplified 'AI building blocks'.
Situated within a series of AI literacy workshops supporting children to ideate about potential AI applications, these AI building blocks are intended to allow children to explore and experiment with the mechanics of AI. They will present core components of AI systems (e.g. inputs, processes, outputs) as physical elements that can be assembled and manipulated.
The aim of the project is to design intuitive, tangible interactions that allow children to explore how changing elements within an AI system affects its behaviour, and to think creatively about how they might want to reconfigure or invent systems for their own needs.
The digital civics angle lies in supporting children to become informed and critical participants in an AI-mediated society. By making the underlying mechanics of AI systems visible and interactive, the project aims to enable children to better interpret, question, and engage with technologies that increasingly shape public life.
This design sprint contributes to an ongoing research-through-design project developed in collaboration with local schools and educators.
Aleeyah Mamhood
In England and Wales, police stop and search hundreds of thousands of people each year. AI is already part of this process through tools such as live facial recognition, automated weapons detection and predictive analytics. These systems are increasingly shaping how officers form suspicion and make decisions. The question is no longer whether algorithms will be involved. It is whether anyone has properly designed for what that means — for the officer, for the person stopped, and for the law.
This project is now entering its most creative phase: using speculative design and UX methods to make two possible AI policing futures tangible enough that real stakeholders (police officers, legal experts and civil society groups) can react to them and help shape what appropriate governance could look like. Students will design artefacts that don't exist yet, grounded in a year of real interview data, for use in future research workshops. They will be tools for critical reflection on fairness, accountability and legality.
Prof. Michael Lim & Dr Paul Mann
Peatlands are globally significant landscapes, accounting for 12% of the UK's land cover and storing over three billion tonnes of carbon. They also offer vital services such as water filtration, flood mitigation, and enhanced biodiversity. However, a legacy of drainage, land conversion, and peat extraction has left most UK peatlands degraded, converting them from carbon sinks into emission sources and eroding their service functions. Attempts are now under way to restore peatlands, but there is no unified approach to evaluating the effectiveness of different restoration practices or the difference they actually make to carbon emissions. This project requires a new platform that will collate, update and integrate different sources of information on critical restored peatlands, drawing on landscape changes over time derived from AI-enhanced processing of freely available Sentinel-2 imagery, and potentially on data from field-based sensors, images and citizen science. The vision is a single platform capable of automatically updating and synthesising diverse datasets on critical peatland sites and exploring the extent to which critical indicators of change could be measured and monitored.
Joella Lynch
Most AI EdTech is focused on mastery learning: helping students get answers correct. But a big part of the learning process is curiosity and critical thinking, skills already in danger of being lost in young learners, a loss that mastery-based AI EdTech in schools could further exacerbate. What if we could design AI EdTech tools that focus on the process of learning rather than the outcome?
Wonderchain is an AI EdTech platform that supports students' curiosity by prompting them to wonder and come up with questions rather than answers. These questions feed into a class Wonderboard, where students can browse their classmates' questions and ask more of their own. A teacher dashboard offers insights into what students are curious about and uses AI to suggest links to the curriculum and ways to develop students' thinking further.
Dr Opeyemi Dele-Ajayi
We are studying how educators make normative judgments about the appropriate role of AI in schools — i.e. which tasks they accept AI performing, which require human oversight, and which they consider inappropriate to delegate regardless of how well AI performs. Schools are civic institutions, and decisions about what AI is designed or permitted to do within them shape professional practice and the distribution of responsibility in educational settings. Before we begin to engage educators, we need to design the stimulus materials that will structure those conversations.