AI and Social Purpose
Prepared for the Association for Learning Technology (ALT) event, Harnessing AI in FE, 28.10.24
That AI is the latest game changer in FE is in no doubt. And it’s a game changer that’s right in our faces. We can’t ignore it in the way that we can try to ignore that other game changer of our times, the climate emergency. That one feels a few years away yet, but AI is everywhere: tempting, promising, cajoling, terrifying. It offers us the promise of more balanced workloads, yet at the same time we’ve no idea what students are doing with it - and will it one day even take our jobs?
Today is about cutting through that smoke and mirrors, so I’d like to begin by taking us right back to the purpose of FE. Well, actually, that’s a bit complicated too…
Over the past couple of years, the Education and Training Foundation, with Oxford Saïd Business School, carried out an extensive piece of research to map the complex ‘system’ that is FE and Skills. Whether you think of this as a piece of engineering, a biological system like the human body or an ecosystem, you can probably agree that it looks like a chaotic mess. In systems thinking, the purpose of any system is referred to as its ‘North Star’. It emerged clearly from the research that those engaged in FE believe in its social purpose - its impact in socio-economic terms on individuals, families and communities. Yet the whole system is not set up to account for that, because funding agencies and ministers judge us on qualifications gained, learner participation and inspection outcomes. Naturally, we get caught up in all of this too.
That leaves FE and Skills with a split purpose - a divided North Star. And that plays out in the systems map. At its heart is the stuff we are set up to measure - essentially ‘high quality teaching’ - which is a bit of a leap in itself, given everything else that impacts the lives of students and apprentices. The real purpose - social change - is off over here somewhere.
What has all this got to do with AI? It means that we focus our use of AI on the purpose of ‘high quality teaching’ and not the social purpose which is stranded over here. That’s not a criticism, by the way: focusing here is how we deliver on the day job, and that won’t change until funders and ministers understand the broader potential of FE and Skills. But it means we are not necessarily thinking about the bigger picture of how AI can contribute to social change. And that can lead us to assume AI is somehow neutral in its ethics.
For a start, that word ‘generative’ can really lead us astray. I do understand that contemporary AI is generative, in the sense that it uses all the ingredients in the store cupboard to cook up a new recipe. That it’s not ‘like a big Google’ when you’re looking for information, because it doesn’t just show you something someone else has written; it writes something new based on everything that others have written (everything mined for the vast training data the AI model was built on). That it can be applied not just to narrative but to schemes of learning and the like, or to writing new code. Super useful.
But at the end of the day, it’s all based on what humans put in there. And, let’s face it, human adults are biased beyond measure. So that bias gets written into what AI regurgitates, and it carries through into what we create when we use AI. And when we use AI, by the way, it creates 10x more carbon emissions than when we just ‘google it’.* Not saying we shouldn’t. Just saying.
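To make that concrete, here is a deliberately tiny, hypothetical sketch - a toy word-pair model, nothing like the scale or architecture of a real LLM, with a corpus and wording invented purely for illustration - showing how a system that can only recombine its training text faithfully reproduces whatever associations its human-written data contains:

```python
# A toy illustration (not how production LLMs work): a bigram "language
# model" that can only recombine what it was trained on. Any bias in the
# training text flows straight through into what it generates.
import random
from collections import defaultdict

# Hypothetical, deliberately skewed "training data" to make the point.
corpus = (
    "the engineer fixed the server . the engineer wrote the code . "
    "he fixed the bug . the nurse helped the patient . she helped him ."
).split()

# Learn which words follow which: the model's entire "knowledge".
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Produce 'new' text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        words.append(random.choice(options))
    return " ".join(words)

# The output is novel word-by-word, yet every association (engineer/he,
# nurse/she) is inherited from the humans who wrote the corpus.
print(generate("the"))
```

Scale that idea up by billions of parameters and the heart of the problem remains the same: novel-sounding output, inherited assumptions.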
One of the first people to call out the bias inherent in AI was master’s student Joy Buolamwini, some eight years ago now. Joy, who is dark-skinned, was doing an experiment with mirrors for one of her assignments and realised the facial recognition software she was using did not recognise her face**. Until she literally put a white mask on (there was a white Hallowe’en mask to hand; she was off out to a party later). Joy went on to research gender bias in facial recognition technology (fighting off some of the big tech companies as she did so), she founded the Algorithmic Justice League and she’s done several TED Talks, which are well worth listening to. Her latest, ‘How to protect your rights in the age of AI’, starts with one of Joy’s powerful poems. She goes on to tell stories of racial bias in the ‘dangerous technology’ of the criminal justice system, and the destroyed lives of people she describes as ‘X-coded’. Joy says, “No-one is immune from AI harm. We can all be X-coded.”
Joy’s work, and that of Anne-Marie Imafidon, Deb Raji and other computer scientists, has made us mindful of the inherent bias in AI, based as it is on the thinking of humans. So what can we do? Google made quite a misstep when they tried to lead the way. Back in February, they launched image generation in Gemini, their rebranded AI platform. In an attempt to redress inherent racial bias they over-egged it, and users were outraged to find America’s Founding Fathers, a female Pope and even Nazi soldiers depicted as people of colour. The egg was definitely on Google’s face, but it was an awareness-raising exercise par excellence, whether that was intentional or not. Google ‘AI bias’ now and you’ll get exhortations to diversify at the coding stage, pitches from ‘anti-bias’ organisations (obviously) and even some practical advice.
But I’d wager that most of us here are not coders. We are people working in education who want to a) make all our lives easier and b) work with students around ethical AI use. We are encouraged to use platforms which diversify their data, but how can we critically analyse the claims companies make? And anyway, we don’t always have a choice over which platforms we use. We are where we are with the platforms around us. So let’s go back to the human touch.
Nancy Kline, founder of the Thinking Environment, said that “the quality of everything we do depends on the quality of the thinking we do first.” If we think first, we can approach AI more mindful of the bias that might be waiting for us.
This involves the shift that the word “generative” hides from view: the use of AI as a way of sifting, editing and refining ideas rather than as a primary source of new thinking. Your thinking is your own; AI is your research assistant, which means that you have the power to tell it what to do. And what’s commonly referred to as ‘prompt engineering’ is part of the AI literacy skill set we all need to learn, to ensure that we write the ethical human back into our AI work, whatever form it takes.
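As one hypothetical illustration of what ‘writing the ethical human back in’ might look like for those comfortable with a little code - assuming the OpenAI Python client, and with the model name, variable names and prompt wording all invented for the sketch rather than offered as a recommended template - a prompt can carry your values and ask the model to surface its assumptions for you to check:

```python
# A minimal sketch of values-led prompt engineering. Assumes the OpenAI
# Python client with an API key in the environment; the prompt wording
# is illustrative, not a recommended template.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The human's values and task come first; the model is the assistant.
values = ["fairness", "belonging"]
draft_task = "Suggest three icebreaker activities for a mixed-age FE evening class."

prompt = (
    "You are my research assistant; my thinking comes first. "
    f"My values are: {', '.join(values)}. "
    f"Task: {draft_task} "
    "For each suggestion, list any assumptions it makes about students "
    "(age, language, mobility, access to technology) so I can check them for bias."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is not the particular wording: it is that the values, the task and the request to expose assumptions come from the human, and the output returns to a human for scrutiny.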
The Association for Learning Technology is an ethical player in the field and, alongside organisations such as JISC and the Open Educational Resources movement, it can help you develop this skill set for your AI work. In my view, and notwithstanding the rapid pace of change in AI technology, tools for the ethical use of AI (and I don’t just mean student ethics) should be part of every teacher training programme and the subject of much Professional Development - a shift away from the sharing of new apps and platforms, which FE can do - and is doing - beautifully for itself. So there’s lots out there. I want to make just one further contribution.
If the quality of everything we do depends on the quality of the thinking we do first, then we need to be able to generate questions to frame that thinking. Values-Line Questions emerge from Thinking Environment work and they are the perfect place to start when you’re planning any new AI project or intervention. I’d love to spend the next few minutes enabling you to generate the Values-Line Questions that make sense to you. Willing to give it a go?
Firstly, what are the values that most matter to you? We’ve been thinking about bias today, so words like fairness, equality, diversity and belonging might come to mind. Or you might draw on your organisational values or other concepts which are powerful for you and your work. Please share one or more in the chat.
Next, I’d like you to construct a question in this format: a ‘How could…?’ question that connects what you’re planning to the value you’ve chosen - something like, ‘How could our use of AI serve belonging for every student?’
Make sure you stick to the format. That ‘could’, for example, is a conditional which opens up possibilities. ‘Should’ would give a whole different vibe.
Thank you for participating in the human touch. If we are to use AI for social purpose in FE and Skills, we need to take a breath, step back and touch base with our ethics, so that we are very clear about what we want to ask it to do. Being explicit about the ethics of our work and AI for social purpose means that we can encourage others to do the same. We can open out the debate started by Joy, Anne-Marie and others around the inherent bias in AI design. And we can formulate our own design principles for anything we do to forward the mission of social change through education in FE and Skills.
**To hear Joy’s story, listen to her podcast with Brené Brown: https://brenebrown.com/podcast/unmasking-ai-my-mission-to-protect-what-is-human-in-a-world-of-machines/
Lou
I was at the South Yorkshire education conference last month and we had a guest speaker on AI, and a couple of the workshops were showcasing AI projects.
It was interesting to hear how Sheffield Hallam Uni are now embracing AI and teaching students how to use it for good when writing papers/assignments, as for years they have been actively blocking ChatGPT etc. and flagging AI ‘generated’ work through Turnitin software when submitting things.
I was at a wedding a couple of weeks ago and the bride is a secondary school science teacher, and they used AI to help generate one of the readings, with personalisation from them.
It’s a force to be reckoned with - hopefully we can balance its power with love and social purpose.
Thank you for continuing to challenge and stretch my thinking 9 years after PGCE graduation.
Much love
Jen