Financial Times: Building an AI-Fluent Workforce for a Responsible Future
Company Overview
Company Name: The Financial Times
Industry: Media, Newspaper
Location: London, United Kingdom
I. AI Implementation and Impact
Business Problem
At the end of 2023, our CEO recognised two truths: that generative AI was going to reshape our world and our industry, creating opportunities and risks for our organisation, and that we could either be a participant in that change or a casualty of it. Around 500 employees were already using ChatGPT with their FT email addresses, so we signed licences with OpenAI for ChatGPT Enterprise and with Google for Gemini to kickstart our exploration of AI. From the beginning, the mandate was to build an AI-fluent workforce and to experiment with the technology in a responsible and ethical way.
Identifying AI as the Solution
One of the complexities of GenAI technology is the pace at which it changes and drives wider change. The expectation from the Board was less about applying AI to solve a specific issue and more about understanding where it could a) improve productivity and output internally and b) meet customer needs in a way that also worked within our mission and values. Because of the pace of change, it was understood that a) and b) would be moving targets, but our mission and values were concrete and sacrosanct. When the news was shared about the FT using ChatGPT Enterprise, it was accompanied by a note from the editor, our AI principles and an AI policy.
Selecting the Right AI Technology and Partner
It was something of a given that we'd go with OpenAI and Google: the former because of the amount of usage already underway at the FT, the latter because we already work within Google Workspace and Gemini was part of the package. We did assess the impact of both tools (and of additional third parties such as Moveworks) and concluded that Gemini was not effective enough to be worth paying for.
The process for addressing the AI fluency mandate was as follows:
As a high-value business (i.e. high quality and high price point), we knew that maintaining our integrity was key, so principles for AI use and an AI policy were quickly (but robustly) established and shared. They formed the foundation of our strategy and approach.
Next, we focused on the adoption of tools and promoting fluency. We formed the AI Fluency Initiative, which brought together representatives from Data & Analytics, Technology (Staff Experience), Learning & Development, Delivery and Communications. Our goal was to boost the adoption and the effective, responsible application of AI technology to drive value. We rolled out four AI tools and a wide variety of training, but we knew we had to truly understand the business's thoughts, feelings and capabilities with regard to AI.
We therefore created our own AI Fluency Framework, which measures different levels of capability across four dimensions: Tools/Productivity & Innovation, Critical Thinking, Governance and Ethics. We sent a quiz to the whole organisation to benchmark where our employees were on their AI fluency journey. Around 400 people responded, providing a wealth of feedback that helped identify the early adopters, those in need of more momentum and some resisters. The data helped us better understand the needs of our employees and how to evolve our approach further. The biggest takeaway was that the appetite was there, but we were running the risk of a scatter-gun approach with no strategic direction, no means of coordination and no control.
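As a purely illustrative sketch of how such a benchmark can be computed, the snippet below rolls one respondent's quiz answers up into a level per dimension. The dimension names follow the framework above, but the level names (other than 'AI literate') and the cut-off scores are invented for the example and are not our actual scoring.

```python
from statistics import mean

# Dimension names come from the AI Fluency Framework described above; the
# level names other than 'AI literate' and the cut-off scores are invented
# for this example only.
DIMENSIONS = [
    "Tools/Productivity & Innovation",
    "Critical Thinking",
    "Governance",
    "Ethics",
]
LEVELS = [(0.0, "AI aware"), (0.5, "AI literate"), (0.8, "AI fluent")]

def level_for(score: float) -> str:
    """Map a 0-1 dimension score to the highest level whose threshold it meets."""
    label = LEVELS[0][1]
    for threshold, name in LEVELS:
        if score >= threshold:
            label = name
    return label

def summarise(responses: dict[str, list[float]]) -> dict[str, str]:
    """Average each dimension's question scores and return the level reached."""
    return {dim: level_for(mean(scores)) for dim, scores in responses.items()}

# One respondent's (made-up) normalised quiz scores per dimension.
example = {dim: [0.6, 0.7, 0.5] for dim in DIMENSIONS}
print(summarise(example))  # every dimension lands at 'AI literate' here
```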
Our response was to create an AI Immersion Week to promote AI learning in an engaging way. We also recruited an AI Fluency Lead who could coordinate, create and roll out a sophisticated AI learning programme that could be easily adapted as the technology and our needs changed. We hired someone with a tremendous skillset and aptitude for change.
Shortly after making this hire, we decided to evolve the AI Fluency Initiative into the AI Transformation Programme to better address the needs for coordination, collaboration and control surfaced through feedback. We created the AI Cross Company Taskforce, composed of the following roles:
● Departmental Reps, in charge of sharing information with their teams and across the taskforce
● Focus Area Reps, responsible for different parts of the AI transformation
● The Core Team, which makes sure that the FT is making progress against its AI goals
Success Measures
We’ve been measuring our efforts towards AI fluency in the following ways:
● AI Fluency survey results increased from 88% achieving the ‘AI literate’ level or higher to 98% within six months.
● Launch of 29 AI tool use cases across the organisation as ratified by the FT’s Generative AI Use Case panel.
● ChatGPT usage soared to 1,400 weekly users, with 100,000 weekly messages and 424 custom GPTs developed.
● Strategic exits from underperforming AI tools saved significant license costs, demonstrating data-driven decision-making in action.
● Quarterly surveys showed a marked increase in employees reporting productivity improvements due to AI tools, with promoters rising from 22% in Q1 to 32% by Q3.
Employee feedback:
“This has been a game changer in the way I approach my work, but also think its been facilitated wonderfully by the FT”
“My productivity has significantly improved as a result of the AI tools and training provided by the FT. The tools have streamlined many of my tasks, allowing me to work more efficiently and focus on higher-value activities. The training has also enhanced my understanding of how to effectively integrate AI into my daily workflows, leading to better time management, quicker decision-making, and overall increased output. I feel more empowered to tackle complex challenges with these resources at my disposal.”
Quantifying Impact
This is something we’re working on. The next phase of the AI Transformation Programme is around the measurement of the impact/value that AI is generating for the FT. At the moment, we only have anecdotal evidence that it is making work easier/higher quality, and there are pockets of experimentation around larger applications and customer-facing services.
Challenges and Overcoming Them
The pace of change was one of the biggest challenges, but forming the AI Fluency Initiative, which drew on different departments and skillsets, was key. Working effectively together helped us navigate the uncertainty and the rollercoaster of the first couple of waves of adoption.
Another challenge was the response of people at the FT. GenAI is quite an emotive topic, particularly for our employees who often see it as a threat to our very existence and as morally hazardous. Encouraging experimentation and engagement whilst being mindful of these concerns was difficult (and still is). Although we’ll never be fully out of the woods on this one, we really made sure people could voice their concerns and feelings, not just report on their level of competency with any of the tools. Our first survey around GenAI was largely focused on people’s perceptions, so we knew how to shape our comms. Within the AI Fluency Initiative and now the larger AI Transformation Programme, we continue to reference thoughts/feelings/ethics/morality as we grapple with transforming the FT and consider it a duty to balance business gains with our responsibility to our employees, our readers and the world.
Impact on Employees, Customers, and Stakeholders
This is something we're hoping to measure more concretely, but so far we've seen a reduction in AI-related anxiety and fear and an uptick in experimentation and adoption in BAU tasks. We're experimenting with customer care automation and bullet-point summaries in articles, and we're rolling out Ask FT to our B2B customers: an internally created generative AI chatbot trained on the FT archive to provide natural language summaries and information based on a user's prompt. We'll gather user feedback after these are formally launched.
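For illustration only, the sketch below shows the general retrieve-then-summarise pattern a tool like Ask FT could follow; the retrieval step, prompt wording and model name are placeholder assumptions rather than a description of the actual implementation.

```python
# Illustrative only: a generic retrieve-then-summarise loop of the kind a tool
# like Ask FT could use. The retrieval step, prompt and model name here are
# placeholders and assumptions, not the FT's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_archive(query: str, k: int = 5) -> list[str]:
    """Placeholder retrieval step; in practice this would query an index of the archive."""
    return ["<relevant archive passage 1>", "<relevant archive passage 2>"][:k]

def ask(query: str) -> str:
    """Answer a reader's question using only retrieved archive passages."""
    context = "\n\n".join(search_archive(query))
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Answer using only the supplied archive passages and "
                        "label the output clearly as an AI response."},
            {"role": "user", "content": f"Passages:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(ask("What has the FT written about generative AI licensing?"))
```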
II. Adherence to Scottish AI Strategy Values
a. Ethics
Identifying and Mitigating Ethical Challenges
Ensuring widespread adoption of AI across a business is itself an ethical challenge, which is why starting with an AI policy and principles was crucial. I don't think there is any other way to do it, but sadly, I've heard many stories that suggest some organisations think otherwise.
Beyond that first step, we have ensured that ethics stays at the heart of the AI transformation. Within the programme, we defined eight focus areas:
● Trusted Journalism (AI in the newsroom)
● Governance & Ethics - Customer-Facing Products and Data
● Governance & Ethics - Brand Integrity and Value
● Tools & Automation
● Monetisation & Licensing
● Fluency (which has ethics and governance as part of the framework)
● Model Selection
● Product Development
Each focus area collaborates with the others to ensure all the right standards are being met and the right information shared.
Adopting Ethical Guidelines
We have developed the AI Fluency Framework, AI Principles, AI Policy and AI Ethics Framework.
b. Trustworthiness
Data Accuracy and Security
We ensure that the data used in AI systems is accurate, reliable and secure as part of the Model Selection Focus Area within the AI Transformation Programme, and it is an expected part of BAU for any Data Science work.
Transparency and Explainability
This will depend on a number of factors. Here’s what’s stipulated about disclosure from our AI Ethics Framework:
Disclosure
We aim to have an appropriate level of transparency to ensure trust and understanding, particularly in any externally facing use of AI. The more automatic or impactful a process, or the larger the proportion of the output generated by AI, the more appropriate it is to be transparent and the more prominent the disclosure should be. Ensuring that we let people know when their data is being handled by AI, and when what looks like something we're doing directly is actually something done by AI, ensures trust and clarity in the relationship. When in doubt, disclosure is the default option.
Example 1: Ask FT can be prone to some errors, yet by necessity does not have human oversight of its answers: we can't review individual queries. It can misinterpret allegory and take things out of context to give wrong answers. We counter this with very obvious markers of transparency: a banner labelling the tool an 'AI Experiment', a footer explaining that AI tools can be inaccurate and that humans don't review the answers, and even the title of the response, 'AI response', to ensure there is no chance it is mistaken for direct, edited FT journalism. This goes some way towards protecting the perception of our journalism by clarifying that this output is separate and subject to the norms of generative AI, not those of published FT content.
Example 2: If we reviewed CVs for job applications with AI tools, people would, by law in some jurisdictions, need to know that a decision with material effect on them has been made without human oversight. Check with Compliance whenever you're looking to handle personal information with AI.
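Purely as an illustration (and not part of the framework itself), that disclosure rule could be encoded roughly as follows; the scoring and labels are invented for the example.

```python
# Toy encoding of the rule above: the more automatic or impactful a process, or
# the more of the output that is AI-generated, the more prominent the disclosure,
# and disclosure is the default when in doubt. Scoring and labels are invented.

def disclosure_prominence(automatic: bool, material_impact: bool, ai_proportion: float) -> str:
    """Return a rough prominence level for the AI disclosure on a given output."""
    score = int(automatic) + int(material_impact) + int(ai_proportion >= 0.5)
    if score >= 2:
        return "prominent"  # e.g. banner, footer and per-output 'AI response' labels
    if score == 1:
        return "standard"   # e.g. a clear note alongside the output
    return "light"          # still disclosed: disclosure is the default when in doubt

# An Ask FT-style answer: fully automatic, mostly AI-generated text.
print(disclosure_prominence(automatic=True, material_impact=False, ai_proportion=0.9))  # prominent
```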
c. Inclusion
Mitigating Bias
Bias is something we talk about a lot. Mitigating the harmful effects of bias in AI systems is part of our AI principles. Our AI Ethics Framework has the following:
Fairness and Inclusion
We are committed to the use of AI that promotes equality and avoids harmful bias and unjust impacts on individuals or groups, with particular concern for sensitive characteristics, decisions and situations.
The nature of AI training means that biases and other weaknesses can be present in the information used to train a model or in the nature of the data itself, and our choice to use specific tools can also be a factor in treating others unfairly.
Example 1: Some facial recognition systems have been found to work markedly better for white, male faces because of a lack of equivalent representation of other groups in the training data.
In such cases, to make sure that decisions made with AI are demonstrably fair and that AI models do not exclude or unfairly treat individuals or groups, we need to apply testing, checking and mitigation to both the models we create and those we buy.
Example 2: It can be easy to exclude people through the use of AI. For instance, if you're relying on an AI notetaker to record and summarise your meetings and someone in one of your meetings has a nonstandard accent, their speech will likely be transcribed incorrectly, and you will have a poorer record of what they say than of what others say. Ensuring transcript summaries are manually confirmed and updated can alleviate this and ensure nobody's input is forgotten or invalidated.
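As an illustrative sketch of that mitigation, the snippet below flags low-confidence transcript segments for manual confirmation; the field names and threshold are assumptions rather than features of any particular notetaking tool.

```python
# Illustrative sketch of the mitigation described above: flag low-confidence
# transcript segments (often mis-transcribed speech) for manual confirmation
# before the summary is circulated. Field names and the threshold are assumptions.

from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    text: str
    confidence: float  # 0-1 score reported by the transcription tool

def needs_review(segments: list[Segment], threshold: float = 0.85) -> list[Segment]:
    """Return segments whose transcription confidence falls below the threshold."""
    return [s for s in segments if s.confidence < threshold]

meeting = [
    Segment("A", "We agreed to ship the summary feature in Q3.", 0.97),
    Segment("B", "The licence review is ... [unclear]", 0.62),
]
for seg in needs_review(meeting):
    print(f"Confirm with {seg.speaker}: '{seg.text}' (confidence {seg.confidence:.0%})")
```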
III. Sharing Best Practice
Lessons Learned
It is so critical to start with the “why.” Adopting AI isn’t necessarily a given, and it needs to be clear as to what the organisation is looking to achieve and the internal/external context it operates within. It’s a bit like going on the perfect date. Each person will have a different idea of what that entails and the qualities of the other person. The same is true of organisations. Their internal culture, operations, customer base, regulatory environment, etc. will also play a role in determining how (or IF) AI will deliver value and be worth the cost, effort and risk.
Ensuring there is coordination across the organisation is also key, once you’ve established principles around AI. Otherwise, it will be like the Wild West… or nothing will happen. What’s the mechanism by which people understand how to use tools responsibly or gain the necessary skills/information to have impact? This is unlikely to happen organically and needs careful consideration and dedicated resources.
I'd also highly recommend bringing someone in specifically for AI fluency. If an organisation is going to leverage AI at scale, it's essential that employees have the right skillset and understanding to apply it responsibly and effectively. Having someone with a learning background who understands transformation is an absolute godsend.