When a computer is doing the thinking, how can you be sure it isn’t biased? Vodafone has strong ethical policies that ensure its own processes are as bias-free as possible. But with the advent of Artificial Intelligence (AI) and complex algorithms, technology is increasingly making decisions for us behind the scenes. That is why Vodafone has implemented a Europe-wide policy and controls to ensure the company uses AI responsibly to protect its customers across all of the markets in which it operates.
In keeping with its existing AI Framework, the AI policy and controls sit at the centre of Vodafone’s actions as it increasingly embeds automated AI-based decision-making across the business. They ensure that Vodafone remains at the forefront of this evolving area and that the company complies with the requirements set out in forthcoming EU legislation.
The upcoming EU AI Act, which is scheduled to receive final adoption by EU ministers at the Telecom Council meeting on 6 December, is an important pan-European initiative. It sets out to address the fundamental rights and safety risks specific to AI systems.
The importance of ethical AI
Deploying AI products and automated decisioning at scale across multiple countries can deliver societal and commercial benefits, for example in the personalisation of customer recommendations and in real-time network optimisation. In the latter case, Vodafone can respond automatically to network events, ensuring customers remain connected when it matters most.
With these opportunities also comes risk. Any data society produces, collects, and analyses is potentially biased. This is because AI systems learn patterns within the data on which they are trained in order to optimise a predetermined objective. AI itself has no moral or legal compass to decide what is ‘right’ or ‘wrong’, and left ungoverned, AI systems can (and demonstrably do) reinforce and exacerbate humanity’s conscious and unconscious biases.
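How bias surfaces in an AI system’s output is easy to illustrate. The following is a minimal, hypothetical sketch of one common fairness check, a demographic parity gap; the metric, data, and group labels are illustrative only and are not drawn from Vodafone’s actual controls or systems:

```python
# Hypothetical illustration: a demographic-parity check on model decisions.
# The decisions and group labels are invented for the example; a real
# governance control would run checks like this on live model outputs.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rate between groups.

    decisions: list of 0/1 model outputs (e.g. approve/reject)
    groups:    list of group labels, one per decision
    """
    counts = {}  # group -> (total decisions, positive decisions)
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# A model trained on skewed historical data may favour one group even when
# neither group should be treated differently: here group A is approved
# 80% of the time and group B only 20%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
# prints "demographic parity gap: 0.60"
```

A gap near zero suggests the model treats the groups similarly on this metric; a large gap is a signal for human review. In practice a governance process would combine several such metrics, since no single fairness measure captures every form of bias.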
Tarek Salhi, Ethical AI Lead at Vodafone Group, explained: “Democratising AI in any large organisation has the potential to unlock tremendous value, however left ungoverned, companies can be exposed to reputational and legal risks.”
Unfortunately, the corporate world is littered with examples of companies encountering ethical AI issues, such as discrimination in their hiring algorithms or self-driving car accidents. When a company fails to implement stringent controls over AI, whether built in-house or sourced from a third party, the technology has the potential to quickly escalate from a localised problem to unfair rules being applied far and wide.
Putting customers first
Vodafone has adopted an ‘AI Ethics by design’ approach to making complex decisions. Led by Vodafone’s global Big Data & AI team, the policy was created by a diverse set of specialist employees from many areas, including privacy, legal, ethics, security, technology, data governance, human rights, and product ownership. It is now used by Vodafone’s multidisciplinary, international teams, as well as by suppliers and partners, to guide how any AI-related project should be developed, used, and governed.
Cornelia Schaurecker, Big Data & AI Director at Vodafone, continued: “With more than 350 million customers across mobile, fixed and TV, it is vital that Vodafone has a policy that spans multiple borders, products and services, cultures and jurisdictions. It is the cornerstone on which we can build and scale AI in a responsible manner to best serve the interests of our customers.
“In parallel, our Machine Learning Operations platform, AI Booster, enables us to automate and support many of these processes with robust, advanced cloud-native technology in ten European markets, with more to come. This means we can experiment with small use cases in one country and then industrialise them in no time across Europe, without the capacity planning constraints associated with running larger server farms.”
First, understanding how a machine makes a recommendation or arrives at a decision, and then explaining that process, helps build trust with end users. “The robustness of the AI systems regarding privacy and security is just the start,” added Tarek Salhi. “On top of that, we want our AI systems to be fair, explainable, and free of harmful bias with a strong governance process underpinning them.”
Keeping an eye on hundreds of AI use cases
To this end, Vodafone’s policy contains internal controls governing the use of AI from the start to the finish of any process, for example the launch of new seasonal online offers for customers. Anyone developing an AI-based service must carry out a risk assessment to ensure fairness and avoid preferential treatment. They can draw on a use case library for transparency and on best-practice templates, for example to ensure that the correct documentation is logged for auditing.
Embedding an ethical AI programme across Vodafone’s entire geographical footprint comes with a unique set of challenges. As a result, Vodafone has implemented a common Machine Learning framework that gives it clear visibility of everything from a single AI use case to thousands across the entire organisation. This provides the assurance needed to roll out unbiased products and services at scale and pace. The company also constantly screens the AI landscape to ensure its controls remain relevant as events unfold, new tools are released, and regulatory requirements evolve.
It’s not all about processes and machines, human oversight is also crucial. Vodafone often consults external parties such as the University of Oxford and end users to strike a balanced view, and for high-risk cases covering ethical, social and political issues, it will involve reputational, regulatory and legal experts under a company-wide steering committee.
Having successfully implemented a global AI policy, Vodafone will not rest on its laurels. The company is continually upgrading its arsenal of online monitoring tools and growing its network of strategic partners. An Ethical AI awareness campaign for all employees is being rolled out as well as additional training for its hundreds of data scientists, data engineers, and AI product owners.
Tarek Salhi concluded: “At Vodafone, we empower our people working in AI to ensure they develop products and services without bias, and with the customer in mind.”
The machines may be doing the thinking, but Vodafone is keeping that thinking ethical.