2019: AI For Telecommunications Takes Two Paths

We’re coming to an interesting fork in the road for AI in telecommunications. Two of the trends I foresee for next year are veering in opposite directions.

First, there will be a growing acceptance of the use of AI and machine learning by regulators. This is already happening: regulatory bodies that appeared to be an obstacle just two or three years ago have seen the light. One reason for this acceptance is the cybersecurity threat: if criminals are using AI against us, we need to respond with all the technical firepower we have.

In 2019, regulators will also learn to accept the more difficult side of machine learning, unsupervised models. These are machine learning models that discover predictive patterns on their own, and are especially beneficial when you don’t have historical performance data to use for modelling.

These models aren’t new — 13 years ago, FICO was using them to monitor detailed network flow data for network assurance at one of the biggest European telecommunications firms. But they’ve always been a harder sell because the patterns they find are more difficult to explain, and sometimes even to see. However, modelling very complex data sets using only supervised models is like tying one hand behind your back. We need the power and “curiosity” of unsupervised, self-calibrating models to detect changing patterns in real time, particularly in cybersecurity.
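As a rough illustration of the general idea (not the specific models deployed for that European telco), consider a minimal unsupervised anomaly detector fitted on hypothetical network flow features. The feature names and the Isolation Forest algorithm below are stand-ins chosen for brevity; the point is that no labels are needed, since the model learns what typical traffic looks like on its own and flags flows that deviate from that picture.

```python
# A minimal sketch of unsupervised anomaly detection on network flow data.
# The per-flow features and the Isolation Forest algorithm are illustrative
# stand-ins, not the actual models used for network assurance.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow features: bytes sent, packet count, duration (s), distinct ports.
normal_flows = rng.normal(loc=[5e4, 40, 2.0, 3], scale=[1e4, 10, 0.5, 1], size=(5000, 4))

# No labels are used: the model learns what typical traffic looks like on its own.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# Score new flows as they arrive; the lowest-scoring flow is flagged for review.
new_flows = np.vstack([
    rng.normal([5e4, 40, 2.0, 3], [1e4, 10, 0.5, 1], size=(3, 4)),
    [[9e5, 600, 0.1, 80]],  # an exfiltration-like flow the model has never seen
])
print(detector.decision_function(new_flows))  # the last flow scores lowest
print(detector.predict(new_flows))            # -1 marks it as anomalous
```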

So telecommunications firms will see an easier path toward using AI, and it won't just be the innovators who take it. Frameworks for accepting and governing unsupervised models will come into wider use.

The second trend is explainability, which appears to contradict the growing acceptance of unsupervised models. Explainable AI is one of the hottest fields in data science, because regulations such as GDPR, not to mention good old customer service, will demand that businesses be able to explain to customers why a decision was taken, and what behavior of theirs caused the algorithm to give them a particular ranking, rating or score.
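For a transparent, scorecard-style model, that kind of explanation can be produced directly by ranking each feature's contribution to the score. The sketch below is a hypothetical illustration; the features, weights and customer values are invented for the example, and a real scoring system would scale and validate its inputs.

```python
# A minimal sketch of turning a score into reason codes a customer can understand.
# The features, weights and customer values are invented for the example.
import numpy as np

feature_names = ["late_payments_12m", "intl_roaming_spend", "plan_downgrades", "tenure_months"]
weights = np.array([0.8, 0.3, 0.5, -0.2])     # hypothetical model coefficients
customer = np.array([3.0, 120.0, 1.0, 24.0])  # one customer's (hypothetical) behaviour

contributions = weights * customer            # how much each behaviour moved the score
score = contributions.sum()

# Rank the behaviours that pushed the score up the most and report them back.
order = np.argsort(contributions)[::-1]
print(f"score = {score:.1f}")
for i in order[:2]:
    print(f"reason: {feature_names[i]} contributed {contributions[i]:+.1f}")
```

The top-ranked contributions are the reason codes an agent or a regulator can read back to the customer; the harder question is how to produce something equally crisp from a neural network.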

For all of data scientists’ talk about deep learning being game-changing technology, questions about the details of learned patterns in a shallow or deep neural network are usually answered with quizzical silence, even at the largest companies. This is completely unacceptable for anyone who has to talk to a customer about the model or represent it to a regulator.

One of my recent patents addresses the immaturity of the AI industry on this issue. It is a methodology called “explainable latent features” that “explodes” a neural network model into a sparsely connected multi-layered model, such that each hidden node can be explained succinctly. I recently talked about explainable latent features at an innovation workshop at the U.S. Federal Reserve, and the audience was very enthusiastic about building transparency into models in this way.
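The patented method itself is not reproduced here; the sketch below only illustrates the broader intuition, namely that a hidden node connected to a small, named group of inputs can be explained in terms of those inputs. The input groups, feature indices and the use of PyTorch are assumptions made for the example.

```python
# Rough sketch of the intuition behind sparsely connected hidden layers:
# each hidden node sees only a small, named group of inputs, so it can be
# described in terms of those inputs. This is an illustration, not the
# patented "explainable latent features" methodology itself.
import torch
import torch.nn as nn

# Hypothetical input groups; each group feeds exactly one hidden node.
groups = {
    "payment_behaviour": [0, 1],      # e.g. late_payments, avg_bill
    "usage_pattern":     [2, 3, 4],   # e.g. data_gb, voice_min, sms_count
    "roaming_activity":  [5, 6],      # e.g. roaming_days, roaming_spend
}

class SparseExplainableNet(nn.Module):
    def __init__(self, groups):
        super().__init__()
        self.groups = groups
        # One tiny linear block per group -> one interpretable hidden node each.
        self.blocks = nn.ModuleDict(
            {name: nn.Linear(len(idx), 1) for name, idx in groups.items()}
        )
        self.out = nn.Linear(len(groups), 1)

    def forward(self, x):
        hidden = [torch.relu(self.blocks[name](x[:, idx]))
                  for name, idx in self.groups.items()]
        h = torch.cat(hidden, dim=1)
        return torch.sigmoid(self.out(h)), h   # score plus named hidden activations

model = SparseExplainableNet(groups)
x = torch.randn(4, 7)                          # a small batch of hypothetical inputs
score, latent = model(x)
# Each column of `latent` corresponds to one named group, so an analyst can say
# which behavioural theme drove the score up or down.
print(dict(zip(groups, latent.mean(dim=0).tolist())))
```

Because each hidden node is tied to one named group of inputs, an analyst can describe what it represents instead of shrugging at an opaque weight matrix.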

So how will we reconcile the increasing need for complex, unsupervised machine learning models with the need for explainability? This takes us into the conversation around ethical AI, which will accelerate dramatically in 2019.

The innovation workshop at the Fed was a good example of what's needed to bridge these areas: a three-way dialogue between the industry, the regulators and academia. All three parties are essential to the ethical use of AI. A lot of cross-education is needed, because the concepts are too complex to grasp by just picking up a book (or, in today's lingo, just Googling).

We will need to follow both paths — greater use of unsupervised models, and greater model explainability — to keep up with customer preferences and the unprecedented power of the mobile device in our lives. Increasingly, our phones will become our personal assistants. There’s no reason why my phone can’t predict the next things I’m going to do, or tell me to stop at Trader Joe’s for milk because the traffic pattern looks good.

The more powerful our phones become, the more security we will need, even at a time when our digital security seems ever more fragile. I’m pretty dependent on my phone now, and in 2019 that’s just going to increase. I need to know it is secure, and that the network is reliable and up. I see the re-emergence of network security monitoring using AI.

As a data scientist and a member of the global analytics community, I place great importance on creating ethical analytic technology. I see 2019 as an exciting time for this work, particularly in telecommunications. – Data-Economy
