From overwhelm to opportunity: rethinking AI and digital skills in Further Education

“Imagine giving every member of staff a set of car keys after years of getting the bus to work,” said Richard Foster-Fletcher, Founder of MKAI, during the second episode of the City & Guilds and FE News Future Skills livestream series. “That’s the power of AI in the sector – if we give people the skills and confidence.”

04 April 2025

Introduced by Bryony Kingsland, Head of Funding and Policy Insight at City & Guilds, and FE News CEO, Gavin O’Meara, episode two once again convened a panel of experts to discuss the challenge of workplace skills. This discussion focused on AI and digital challenges that can at times feel both urgent and overwhelming.

How can the FE sector adapt to a rapidly evolving world in which AI is transforming not just teaching, learning and assessment but also the very nature of work itself?

While the questions may be big ones, the answers are beginning to take shape. 


The difference between tactical and strategic AI

Richard framed the conversation by making a critical distinction between the tactical and strategic uses of AI. 

He defined tactical AI as tools including ChatGPT, Copilot, Gamma and NotebookLM – software that supports everyday tasks such as writing emails and creating presentations. Strategic AI, by contrast, is “far less about upskilling and more about the way that you interact in terms of brainstorming and getting a sparring partner for the way that you work. That takes a lot of time and a lot of enthusiasm.”

Tactical use of AI has the potential to free up valuable time and, therefore, increase productivity but, according to Rebecca Bradley, Strategic HR and Business specialist at R People, the opportunities and challenges it offers risk overwhelming people. The more “we can lower that fear, the more people are going to engage with it,” she said. “We’re going to have instances where perhaps learners know more than tutors do.” This means giving staff and students core skills that raise their confidence: how to write effective prompts, how to spot bias or inaccuracies and how to keep proprietary data safe. “Stripping it back to the fundamental parts because everything changes quickly in this field.”

Progress is likely to require leaders to be comfortable with change they don’t entirely understand – 10% of FE CEOs aren’t on LinkedIn and so are unlikely to be the digital leaders of the future. For Richard, “If you're absolutely fixated on GDPR and data being shared, good for you, let's talk about governance. But you've got to let things happen a little bit and let people beg for forgiveness too.”

The FE sector and government need to prepare and adapt to the transformative teaching, learning and assessment potential of AI by addressing ‘overwhelm’. This means providing flexible frameworks that allow people to take ownership of how the technology is deployed in their specific contexts.

Flexible frameworks to bridge the confidence gap – and deliver economic benefits

Addressing AI’s potential to overwhelm means bridging the confidence gap. Rebecca called for agreed processes and guidelines, “if we're empowering people within a flexible framework and architecture, then they're going to take ownership of that better.” 

This is likely to mean determining and communicating clear guardrails for the use of AI within specific organisational contexts and accepting that different departments, learners or groups may use AI differently within the same institution.

In some cases, it may also mean bringing in experts to assist with the creation of appropriate tools and workflows – as was the case with Windsor Forest College Group, which implemented its own internal AI instance based on ChatGPT. In others, it may be preferable to lean into the enthusiasm of staff who are early adopters, allowing them to develop and share their skills while providing clear guidance on what is and is not allowed.

Once organisations are confident in their use of AI, highlighted Richard, financial savings should demonstrate the case for further engagement. “If we can pull 5% of cost out of just back offices, that's £80 million for the sector,” he said. “That involves no real job losses at all because you just look at natural attrition.” And he cautioned against acting too slowly or failing to identify efficiencies: “Let's do it on our terms. Let's move carefully, quickly and effectively.”

He suggested colleges create a three-phase AI roadmap: first, exploring the implementation of tactical tools with the goal of saving time across the organisation; then increasing automation across areas such as compliance, enrolment and timetabling; before finally investing in the adaptation of existing tools – such as ChatGPT – within environments licensed for use by their organisation.


AI ambassadors and governance frameworks will manage risk

As organisations move through these phases, cautioned Rebecca, it is important that they do not rely too heavily on informal expertise, “You have these core skills and these people that are really, really interested in this but how do we bring everyone along, whether it's learners or staff? Don't just say John in accounts is our expert and we're going to defer to John over every little thing.” She advocated for dedicated AI ambassadors operating within a governance framework.

Governance here means thinking about risks to the organisation, its staff and learners; considering the legal implications of areas such as data storage; and addressing how students use AI to accelerate their work. It also means recognising that staff may already be using AI to complete tasks but are too nervous to admit it – no one wants to lose their job to a robot.

For Rebecca, governance must grow from organisational culture, not stand apart from it. “The governance you already have in place [and] your culture will lead how to set this up. Good governance has a bit of flexibility of interpretation, it’s not autocratic and restrictive” but it must determine the security, accountability and ethical frameworks organisations commit to operating within. 

“It comes down to being proactive,” agreed Richard, “Having a realistic understanding of the world of work and what's likely to happen. You've got to find the middle ground.”


Accepting managed risk will lead to rewards

Governance alone cannot contain the risks inherent within AI, so organisations must be prepared to address the varied risks that arise day-to-day.

Teaching learners ‘what good looks like’ is as crucial with AI use as it is in any other setting. Supporting students and staff to identify untruthful or inaccurate information remains essential. “Getting the use of AI right is really hard if you don’t know what good looks like, so managing risk includes teaching young people to use it effectively … It still takes an enormous amount of time to get to really high quality output with AI. You've got to make hard choices at every stage here. Do you want the value or don't you?” said Richard.

Bias and inappropriate content must also be accounted for, with Richard highlighting the likelihood that AI will assume a CEO is always a man and a nurse always a woman – and Rebecca recounting the embarrassing experience of an AI bot introducing deeply inappropriate content into a live stream.

But for Rebecca the solution is simple, “Test it out in an environment where you can see what's happening before you actually do it.”

Predicting the future

In a world where machines are predicting the next word, the next pixel, the next move – it’s the human ability to think, to ask, to choose that will keep education moving forward. But in this context, the panel was clear, choosing to implement AI effectively is likely to be an appropriate and rewarding decision. 

Making a success of this decision, however, will require organisations to empower ambassadors, consult experts, apply clear-headed governance and safeguarding frameworks, and prioritise a policy of transparent use. And, as in any other setting, understanding what good looks like, recognising bias or inaccuracy and testing approaches before adopting them will be crucial workforce skills.

Watch the Future Skills – AI and Digital livestream