
Think Tank

Driving business and operational process excellence through synergy of process, application, and data layers in a DSP – A perspective

With the advance of digitization and the emergence of a multi-dimensional value chain in the ICT industry, there is a strong desire among service providers to become digital players or enablers. The buzz in the industry is at an all-time high, with every telco now claiming it will be a tech-co within a few years, and top management appears to be giving this drive ample support and focus. Yet while telcos strive hard to stay relevant in this game of transformation, they are severely challenged on two fronts: the bottom line that drives profitability, and overall growth. The result is considerable friction across the whole of operations; hence a parallel focus has been placed on the automation drive and on achieving operational excellence.

Another term we often hear these days, in the remit of operational transformation, is data-driven operations. In simple terms, this means achieving excellence in the core processes that run across the length and breadth of telco operations, and then taking the next step of basing every decision on the data collected and harvested within the ecosystem. When it comes to data-driven operations, telcos tend to focus only on the data visualization layer, rather than baselining their overall data life cycle: starting with data and analytics maturity, followed by analysis and normalization, and only then linking it all to visualization. Beyond that, they must take an end-to-end perspective and stitch the entire story together in order to be truly data-driven as an enterprise.

As logical as it may sound, being data-driven in the truest sense described above is a herculean task for large global telcos that have been running these operations for years. The reason is simple: layers of custom-developed processes sit on applications built around monolithic architectures, which were then drastically augmented to fit the service-oriented game, leaving little room for cost optimization. This is compounded by varied data models and structures across the different applications running in the front, middle, and back offices, which further complicates their maintenance and management. Although there are well-defined standards and best practices for each layer within a telco's enterprise architecture, putting it all together across the entire OSS-BSS estate remains a big challenge. The result is disconnected processes, poor data quality, and a serious lack of integration between processes and data, which run in silos. Moreover, with the advent of automation and AI, data quality management is often not a central focus; the outcome is undesirable results that are inaccurate, and decisions based on that data may well be erroneous.

Now that we have established the problem statement and acknowledged that it is a big enough problem, with potentially harmful implications and serious consequences, how do we go about fixing it? The answer is remarkably simple and has been in front of our eyes all along, but perhaps we have not given it enough traction to derive its true benefit in the context of operational excellence. Most telcos maintain a library of their core business and operational processes, detailed and well drafted at level 4 or level 5 in a well-orchestrated process mining and design tool, holding essentially every detail of the process setup that drives day-to-day operations. These processes span the domains of fulfilment, assurance, and billing, and cut across the three horizontal layers of customer, service, and resource. They are mapped to TM Forum Frameworx standards and aligned to the Business Process Framework (eTOM) within the overall Frameworx architecture. All business rules that dictate the performance of these processes are expected to be captured within the detailed workflows. These rules are nothing but the most critical data points: captured within the process workflow, they not only determine critical process paths and process logic but also allow business processes to be simulated to test their efficacy. The issue is that they are kept in isolation within the process design and mining tool, without going granular beyond the process layer and linking these rules or data points to application performance and, further down, to the data layer.

The aim here is to synergize the process mining extracts by identifying all possible rules, linking them back to application performance through the relevant modules and sub-modules, and then identifying the critical data sets sitting within those modules that actually govern the performance of any automated business service in the application in scope (whether OSS-BSS, front, middle, or back office). This synergy, driven by the lowest common denominator of data, identifying the critical data elements and mapping them onto process performance and business rules, is the magic recipe for success. However naïve this solution may seem, it is one of the most empowering propositions for driving significant efficiency in managing any automated business service delivered by an application.
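To make this more concrete, here is a minimal sketch in Python of what such a rule-to-data mapping could look like. Every name used, the rule ID, the application module, and the data elements, is a hypothetical placeholder rather than a reference to any real system:

from dataclasses import dataclass, field

@dataclass
class CriticalDataElement:
    name: str             # logical name, e.g. "customer.credit_score"
    source_system: str    # application that owns the element, e.g. "CRM"
    attribute: str        # physical field or column name

@dataclass
class BusinessRule:
    rule_id: str
    description: str
    app_module: str       # application module that enforces the rule
    data_elements: list = field(default_factory=list)

# One level-4 fulfilment step expressed as a rule-to-data mapping (illustrative only).
order_validation = BusinessRule(
    rule_id="FUL-L4-017",
    description="Accept an order only if the credit score meets the threshold and the address is serviceable",
    app_module="OrderCapture.Validation",
    data_elements=[
        CriticalDataElement("customer.credit_score", "CRM", "CREDIT_SCORE"),
        CriticalDataElement("service.address_serviceability", "Inventory", "SERVICEABLE_FLAG"),
    ],
)

# The synergy is this traceable chain: process step -> business rule -> module -> data element.
for cde in order_validation.data_elements:
    print(f"{order_validation.rule_id} -> {order_validation.app_module} -> {cde.source_system}.{cde.attribute}")

Once every level-4 or level-5 rule carries such a mapping, the process layer and the data layer can finally be reasoned about together rather than in isolation.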

Nor is this limited to one domain: it is a strong driving force across the network and IT application stacks and any operation with a significant software or automation play. The critical data elements need to be drafted into a platform or solution with strong ETL or ELT capabilities and a special focus on data cataloguing. Each of these data workflows needs to be designed and physically configured into the solution, which in turn is mapped onto, or over-arched by, the process workflows. Data quality dimensions such as validity, timeliness, redundancy, partial records, and duplicate checks are then mapped on top of these elements. This logical mapping of the data elements termed critical data is what separates the signal from the noise moving around these processes.
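As an illustration of how these quality dimensions can be expressed on top of a critical data set, the following Python sketch (using pandas, with made-up column names, thresholds, and sample records) checks validity, timeliness, duplicates, and partial records for one such element:

import pandas as pd

# Illustrative sample of one critical data set; values are invented for the sketch.
orders = pd.DataFrame({
    "order_id":     ["O1", "O2", "O2", "O4"],
    "credit_score": [710, -5, 640, None],
    "created_at":   pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-02", "2020-01-01"]),
})

report = {
    # Validity: values must fall inside an agreed domain.
    "invalid_credit_score": int(((orders["credit_score"] < 300) | (orders["credit_score"] > 900)).sum()),
    # Timeliness: records older than the agreed freshness window.
    "stale_records": int((orders["created_at"] < pd.Timestamp("2024-01-01")).sum()),
    # Duplicate checks on the business key.
    "duplicate_order_ids": int(orders["order_id"].duplicated().sum()),
    # Partial records: mandatory attributes left empty.
    "partial_records": int(orders["credit_score"].isna().sum()),
}
print(report)

In a real deployment these checks would of course be configured in the cataloguing or data quality platform itself; the point is that each check is anchored to a critical data element that a process rule depends on.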

The above proposed solution is what I call process-data synergy, and it drives the full efficiency of mission-critical processes as they invoke business services across OSS-BSS applications. It ensures that process and data collaborate seamlessly rather than being looked at in silos, and that both process and data stakeholders stay in sync on the exceptions and aberrations that may arise in maintaining and managing applications.

A focus on automated RCA (root cause analysis) is a prominent strategy, and its foundation is this very process-data synergy or integration, without which it may well remain a distant reality. Moreover, applying AI algorithms to faulty and unstructured data will yield erroneous results, and decisions taken on such data will not deliver the intended benefits.
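A simplified sketch of how such automated RCA could sit on top of the process-data mapping: given a rule whose process KPI is degrading, walk the mapping to its critical data elements and flag those whose quality score falls below an agreed threshold. All identifiers and scores below are hypothetical:

# rule_id -> critical data elements it depends on (from the mapping above)
rule_to_data = {
    "FUL-L4-017": ["CRM.CREDIT_SCORE", "Inventory.SERVICEABLE_FLAG"],
}

# data element -> latest data-quality pass rate (0.0 - 1.0), from the quality layer
dq_pass_rate = {
    "CRM.CREDIT_SCORE": 0.72,            # poor: many invalid or missing values
    "Inventory.SERVICEABLE_FLAG": 0.99,
}

def rca_candidates(failed_rule: str, threshold: float = 0.95) -> list:
    """Return the data elements whose quality is below threshold for a failing rule."""
    return [cde for cde in rule_to_data.get(failed_rule, [])
            if dq_pass_rate.get(cde, 1.0) < threshold]

# A spike in order fallout against rule FUL-L4-017 points straight back to the data layer.
print(rca_candidates("FUL-L4-017"))   # -> ['CRM.CREDIT_SCORE']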

So, in summary, any data-driven decision needs to ensure that the efficacy of the data is of the utmost standard and that the basics of data quality and data management principles are followed to a tee. All operational stakeholders, especially those managing applications and responsible for architectural excellence, must follow these principles for the organization to be a truly data-driven enterprise. The same approach can be replicated in any application-intensive landscape in any industry, where business processes are the heartbeat of all data being captured. CxOs are emphasizing the importance of data more and more as they embark on the journey of becoming digital technology players; hence they must also revisit their strategies to ensure software design, build, and operations become data-driven in the truest sense. Linking these outcomes to tangible S-KPIs (service KPIs) or B-KPIs (business KPIs) makes demonstrating value realization far easier and more logical. For any CxO, justifying investments in analytics, AI, and automation then looks far more reasonable, rational, and acceptable as part of a true transformational change.


