The world of work faces an uncertain future. Leaders, workers, regulators and governments are trying to anticipate and prepare for what’s ahead. What is certain is that the future will be AI-driven, and that time is of the essence. And while AI promises much in terms of productivity and possibilities, it also raises fundamental questions about how we can use it ethically and responsibly.
We are at an inflexion point. Companies can navigate the AI transition smoothly, but only if the workforce has confidence in the technology. Responsible AI policies are key to winning buy-in across the enterprise, building trust and getting ahead on AI adoption. Digital trust is the key ingredient in an inclusive, responsible and ethical AI revolution – but it is currently lacking. A third of companies say building that trust is a priority for the next year*; two-thirds say it is a priority for the next five years. The Adecco Group surveyed 2,000 C-Suite executives for its recent report, Leading through the great disruption, and found that only 52% of leaders had a responsible AI framework in place a year ago.
There is a sense of delayed urgency among business leaders because the world of work is in a nervous holding pattern. Nobody truly knows what is coming over the hill, but everybody knows it is moving fast. That uncertainty presents a historic opportunity for authentic and empathetic leadership. Leaders will struggle to build digital trust if they claim to have all the answers on AI. By approaching the journey with humility and authenticity, they can say “we don’t know where we’re going, but we’re on this journey together” – and truly start building digital trust within their organisations.
Uncertainty under pressure
Companies are currently exposed to AI in two key ways. The first is internal use: leaders must review their own use of AI tools to assess how they might streamline operations and increase productivity.
The second is external use. Organisations that are not visibly embracing AI risk being seen as lagging behind, which can have reputational and financial repercussions.
This dual pressure risks pushing companies to embrace AI more quickly and visibly than they are ready to. Without taking the time to work within a responsible AI framework, they could be storing up complications for the future.
AI in recruitment
The use of AI in recruitment is a clear example of these risks and of the importance of digital trust. Most companies plan to use AI in their recruitment processes in 2025. However, 76% of workers say they value the expertise of a human recruiter who can see their potential, according to the Adecco Group’s latest Global Workforce of the Future research. In fact, fewer workers trusted AI to deliver a fair assessment of their previous work experience in 2024 than in 2023.
Leaders are eager to start implementing AI in their recruitment processes, but workers don’t trust them to do so responsibly. Less than half of workers (46%) trusted their employer’s AI skills and knowledge last year – and even fewer leaders (43%) trusted their own.
Rushing AI adoption in recruitment without this trust presents serious risks. For example, discrimination and bias could creep into AI systems and exacerbate inequities that already exist in the world of work, further eroding workers’ already dwindling trust in AI tools. It is therefore essential to keep the ‘human touch’ present in recruitment.

Additionally, while there is no legal obligation yet, it is highly likely that the use of AI tools will soon be regulated. Failing to communicate clearly how, when and why these tools are used – and to document that process – could expose an organisation to severe repercussions further down the line.
Take a breather, do it right
Like every other organisation, the Adecco Group is grappling with the realities of an AI-driven world of work. As we mature in our own AI journey, we are gathering key lessons.
First, it is crucial to build a self-assessment process into the earliest stages of AI implementation projects: gaining full visibility of your own capabilities is key to moving ahead. Second, placing ethics and responsible-use considerations on a par with IT security and data protection in the pre-project phase ensures that they serve as a building block, not an afterthought.
Business leaders must also consider perspectives from as many different stakeholders as possible to ensure that AI adoption is inclusive and responsible. For example, somebody far removed from the C-Suite may raise questions about the accessibility of an AI tool: is it usable by people with vision impairment? In recruitment, stakeholders might ask: would an AI tool give education more weight than skills when assessing a candidate’s fit for a role?
Finally, it is essential to secure buy-in across the entire organisation – from senior executives to small teams. Senior ambassadors and AI champions are a great start, but the Adecco Group has gone further, making its AI journey a broad and collaborative effort: we have gathered every perspective around the table and made a point of positioning the work as a joint learning exercise.
AI is disrupting and destabilising the world of work, but its positive effects can outweigh the negative ones, and it presents huge opportunities for those prepared to seize them. It already affects the lives of leaders and employees alike, and there is no simple solution. The best thing business leaders can do is make themselves and their organisations future-ready by becoming agile, adaptable and proactive as AI advances. That means acting on the truest expression of human-centricity: getting everyone around the same table to start an open and honest dialogue – whether about AI or any other topic.
*Source: WEC report, The World We Want