
November 26, 2019

Operationalizing Analytics: Conquering the Last Mile

Thomas Roehm


Per research from McKinsey, only 8% of companies successfully scale analytics. To improve this abysmal rate, organizations must conquer what’s been called the last mile of analytics. For those who can, the payoff is tremendous.

Many organizations today implement data-driven strategies as they pursue digital transformation. They have adopted open-source and commercial technologies to manage their data and discover insights through analytics and artificial intelligence (AI).

The goal? To make faster and more informed decisions, such as what real-time offers to present to a customer, which financial transactions could be fraudulent, and what treatments should be prescribed to a patient. 

The Last Mile of Analytics

Despite these investments, many organizations still struggle to finish analytics' last mile, the critical point where analytics deployment drives decision-making at scale. According to Gartner, 60% of the analytical models that organizations develop are never operationalized, or put into use. Think of the opportunity cost, and the money wasted, when more than half of an organization's most promising models become shelf-ware. Why is this so difficult?

To conquer the last mile of analytics, many things must work in harmony. Data scientists churn out models to help their business. But getting these models into front-line operations typically requires recoding them so they can run within the operational system. The models must also run at scale and respond at the point where decisioning is required.


These models need to be governed, documented, secure, and explainable. And once this is all accomplished, the production models need to be monitored to ensure they are meeting business and IT requirements, and when they decay, they must be re-tuned and re-deployed in a timely manner.

All these considerations can make it difficult for organizations to realize the full return on their investment.

The ModelOps Approach

To realize the potentially major returns from data and analytics investments, organizations must systematically close the gaps that exist around operationalizing analytics. These gaps of process and culture can be overcome by adopting a ModelOps methodology.

ModelOps encompasses both machine learning and traditional analytic models and is similar to DevOps. At a macro level, DevOps is designed to move application code into production in an efficient and governed manner. ModelOps aligns with the goal of operationalizing analytics, but because published models can automatically change as they self-learn or decay based on the data and responses they receive, it presents a different set of challenges than DevOps.

Pursuing ModelOps is a significant, transformational change that spans people, processes and technology, driving automation, repeatability and governance. With a ModelOps approach, deploying, implementing and managing models at scale is within reach for every industry.

The ModelOps approach encompasses the entire analytics lifecycle of data, discovery and deployment. The insights that modelers reveal during the data and discovery phases uncover potential value. These insights need to be put into action to ensure that this value is realized, repeatable and scalable. However, in many organizations this is the point where the process slows down or falls apart because there is neither a defined transition between discovery and deployment, nor collaboration between the model developers and IT, nor tools for optimized automation.


Today, the challenge of operationalizing analytics often falls on IT's shoulders. But operationalizing analytics needs to begin upstream with the data scientists. As Stephen Covey suggests, begin with the end in mind.

During the data and discovery phases of the analytics lifecycle within a ModelOps approach, data scientists use data from trusted sources that align with their organization's privacy and security standards. They create models with deployment in mind to avoid rework, preserving lineage and track-back information for governance and audit compliance. A well-designed ModelOps effort also instills the right balance of choice and control for data scientists and IT pros: choice on data types, discovery tools, and where the models are deployed; and control around data quality and lineage, model governance, and how production models are deployed.
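
As a concrete illustration of what "deployment in mind" can look like, the sketch below captures a lineage record at the moment a model is trained, so the track-back information exists before anyone asks for it. The field names and example values (model name, table, commit hash) are hypothetical, not a formal standard:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelLineage:
    """Track-back record captured when a model is trained."""
    model_name: str
    author: str
    training_data: str          # trusted, approved source the model was fit on
    input_variables: list
    code_version: str           # e.g., a git commit hash
    trained_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

lineage = ModelLineage(
    model_name="churn_model",
    author="j.doe",
    training_data="warehouse.customers_2019q3",   # hypothetical table name
    input_variables=["tenure_months", "support_calls", "monthly_spend"],
    code_version="9f2c1ab",
)
# Persist alongside the model artifact so governance can trace it later.
print(json.dumps(asdict(lineage), indent=2))
```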

The deployment phase of the analytics lifecycle within a ModelOps approach begins when models are registered to a central repository that provides check-in, check-out, documentation of changes, and version control. IT or the analytics teams can then take the tested and proven models and convert them into executable code to deploy into operations. Choices on where to embed the model can include an operational database, a real-time data stream, a RESTful API, or a decisioning system used by front-line operations.
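
The repository itself need not be exotic. Below is a minimal sketch of such a registry, not any particular vendor's product: it versions model artifacts in a directory tree and keeps a JSON manifest of who checked in each version and when. The file layout and field names are illustrative assumptions:

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

REGISTRY_ROOT = Path("model_registry")  # illustrative location

def register_model(name: str, artifact: Path, author: str, notes: str) -> int:
    """Check a trained model artifact into the registry as a new version."""
    model_dir = REGISTRY_ROOT / name
    model_dir.mkdir(parents=True, exist_ok=True)
    manifest_path = model_dir / "manifest.json"
    manifest = (json.loads(manifest_path.read_text())
                if manifest_path.exists() else {"versions": []})

    version = len(manifest["versions"]) + 1
    version_dir = model_dir / f"v{version}"
    version_dir.mkdir()
    shutil.copy(artifact, version_dir / artifact.name)  # immutable copy

    manifest["versions"].append({
        "version": version,
        "artifact": artifact.name,
        "author": author,
        "notes": notes,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    })
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return version
```

Commercial and open-source model-management tools provide far richer versions of this, but the essentials are the same: immutable versions, an audit trail of who changed what, and a single place from which IT pulls approved models.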

It’s important that IT provide this shared environment for registry and governance so that deployment happens in a secure, efficient and user-friendly manner. For quality and efficiency, the conversion to executable code and deployment should be seamless and automated rather than hand-coded. A ModelOps approach ensures that the model developers and the operations teams are working collaboratively.
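
For the RESTful API option, the sketch below shows the shape of an automated deployment target: a small scoring service wrapping a model behind an HTTP endpoint. The toy logistic-regression model, the /score route, and the payload format are all illustrative assumptions rather than any tool's actual output:

```python
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Stand-in model; a real service would load an approved version
# from the registry described above.
model = LogisticRegression().fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])

@app.route("/score", methods=["POST"])
def score():
    """Score one observation: {"features": [...]} -> {"probability": p}."""
    features = request.get_json()["features"]
    probability = model.predict_proba([features])[0, 1]
    return jsonify({"probability": float(probability)})

if __name__ == "__main__":
    app.run(port=8080)
```

A front-line application would POST {"features": [2.5]} to /score and embed the returned probability in its decision logic; generated deployments follow the same pattern without the hand-written glue.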


Once models are deployed, it is important to monitor them and retrain them when their performance inevitably decays. This is an area where analytical models differ sharply from traditional IT application deployment.

In DevOps, once a program is designed, tested and put into production, its outcomes don't change unless the developer introduces new changes. Not so with analytical models. Over time, they will start to lose effectiveness or give incorrect recommendations. Simply put, model decay results in reduced return on analytics. Two critical capabilities during the deployment phase are data and model lineage and the ability to easily monitor and improve models in production.
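
One lightweight, widely used way to detect decay is to compare the distribution of a model's scores (or inputs) in production against the distribution seen at training time, for instance with the population stability index (PSI). The sketch below is a generic illustration of that idea; the synthetic data is made up, and the 0.2 alert threshold is a common rule of thumb, not a standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample (e.g., training-time scores) and a
    production sample; larger values mean a bigger distribution shift."""
    # Bin edges come from the baseline so both samples are bucketed identically.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero in sparse buckets.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)     # scores at training time
production = rng.normal(0.4, 1.0, 10_000)   # drifted production scores

psi = population_stability_index(baseline, production)
if psi > 0.2:  # rule-of-thumb alert threshold
    print(f"PSI={psi:.2f}: significant shift, consider retraining")
```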

Without the right ModelOps process and tool set, it can be difficult to answer critical questions about the models on which an organization relies, such as:

  • Who created or has changed the models?
  • Where is the supporting model documentation?
  • What input variables are used to make the predictions?
  • What is the lineage of the data and other potential model inputs?
  • How are the models performing, and when were they last updated?

Organizations that can’t answer these questions with confidence cannot be sure their analytical models are delivering real value.
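
If lineage records and a registry manifest like the earlier sketches exist, most of these questions reduce to simple lookups. A hypothetical example, assuming a model named churn_model was checked in with the register_model sketch above:

```python
import json
from pathlib import Path

def audit_report(name: str, registry_root: Path = Path("model_registry")) -> None:
    """Answer the basic audit questions from the registry manifest."""
    manifest = json.loads((registry_root / name / "manifest.json").read_text())
    latest = manifest["versions"][-1]
    print(f"Model:        {name} (v{latest['version']})")
    print(f"Changed by:   {latest['author']}")
    print(f"Last updated: {latest['registered_at']}")
    print(f"Notes:        {latest['notes']}")

audit_report("churn_model")
```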

Pursuing a ModelOps approach can answer these critical questions and create a culture and set of processes that deliver smart automation, rapid deployment of analytics, and strict governance. Ultimately, a ModelOps approach creates trust, inspires collaboration and fosters analytic adoption, helping organizations conquer the last mile of analytics, operationalize their analytical processes and deliver value from data and analytics investments.

About the author: Thomas Roehm is the Global Director of Analytics at SAS. Tom helped pioneer SAS’ investment in IoT, consults on the use of analytics in the connected world, and has been instrumental in defining and bringing new offerings to market.

Related Items:

How to Build a Better Machine Learning Pipeline

DataOps Hiring Surges Thanks to ML, Real-Time Streaming

‘Manifesto’ Touts the Marriage of AI, Ops
