The Business Case for Low-Code No-Code Analytics


By Jeffrey D. Camm, Inmar Presidential Chair in Analytics, Wake Forest University School of Business

Much has been written about so-called low-code, no-code (LCNC) platforms for software development. Low-code platforms require only limited knowledge of a programming language, whereas no-code platforms require no programming knowledge at all. The same definitions apply to analytics. For example, the commercial analytics software SAS can be considered low-code analytics. Other commercial statistical and analytics tools, such as JMP, Alteryx, DataRobot, Tableau, Power BI, Crystal Ball, and even Excel itself, are no-code analytics platforms. Open-source LCNC analytics and data science software includes platforms such as KNIME and Orange. We distinguish these from the open-source programming languages R and Python, which are now heavily used for analytics and data science.
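To make the contrast concrete, the short Python sketch below shows roughly what an analyst writes by hand for a routine regression, the kind of task a no-code tool such as JMP or KNIME exposes through menus and drag-and-drop nodes instead. The file name and column names (weekly_sales.csv, sales, price, on_promo) are hypothetical placeholders, and the snippet is only an illustration of the scripting overhead, not a prescribed workflow.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data set: weekly sales with price and a promotion flag.
    sales = pd.read_csv("weekly_sales.csv")

    # Light data wrangling: drop rows with missing sales, code the promo flag as 0/1.
    sales = sales.dropna(subset=["sales"])
    sales["on_promo"] = sales["on_promo"].astype(int)

    # Fit and summarize a simple regression of sales on price and promotion.
    model = smf.ols("sales ~ price + on_promo", data=sales).fit()
    print(model.summary())

Even this small example requires knowing a language, two libraries, and their syntax; an LCNC platform buries the same steps behind a point-and-click interface.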

LCNC analytics is not new. Back in the 1970s, Robert Ling and Harry Roberts developed a software package called IDA (Interactive Data Analysis). As they stated in a paper about IDA, the objective was to free the statistician from reliance on computer programmers:

“Despite rapid growth in computer technology and in the application of that technology to statistical computing, applied statistical work has often lagged far behind the technical potential of the computer. One possible explanation for the lag has been the reluctance of those who create, teach, and apply statistical tools to become directly involved in computing.”[1]

It seems that analytics/data science has come full circle. Early LCNC packages such as SAS, IDA, and Minitab freed the analyst from the need to code in Fortran or some other programming language. The time not spent fixing syntax and other coding errors became time spent on more productive pursuits, namely analysis. The development of R and Python in the 1990s took us back to needing to be both an analyst and (at least at some level) a programmer.

All of this raises the question, “When should programming languages such as R and Python be used, and when is LCNC more appropriate?”

Analytics versus Data Science. Let’s distinguish between data science and analytics. In general, analytics tends to be more problem-centric, whereas data science tends to be more data-centric. A sample of data science and business analytics master’s degree programs indicates that, at least academically, data science tends to be more technical, a cross between statistics and computer science. In contrast, business analytics often includes courses on business applications of analytics and data science techniques. Because data science is more data-centric, it also involves big data more often than not. Managing and mining big data typically require a more technical skill set, including coding. Analytics, being problem-centric, starts with a business problem and may use small or big data. Hence, LCNC is more appropriate for those in analytics than for those in data science. Lately, a relatively new label has emerged: the citizen data scientist. Gartner defines a citizen data scientist as follows:

“A citizen data scientist is a person who creates or generates models that use predictive or prescriptive analytics, but whose primary job function is outside the field of statistics and analytics. The person is not typically a member of an analytics team (for example, an analytics center of excellence) and does not necessarily have a job description that lists analytics as his or her primary role.”[2]

From Gartner’s definition, a citizen data scientist sounds a lot like an analytics professional, one who perhaps graduated from a master’s program in business analytics. They are often embedded in the business rather than in a centralized data science group, or they may be in a group with an analytics focus on a particular business domain, such as consumer insights or supply chain planning. They tend to solve business problems but rarely develop their own software or solution approaches. They are, however, well versed in analytics and data science methods, understand model assumptions, and know how to influence decisions with their analyses. The productivity of these citizen data scientists/analytics professionals can be greatly enhanced by LCNC analytics.

Projects versus Data Products. Consultants do projects. Engineers create products. Data science often has, as a goal, a data product that is then distributed throughout the enterprise.[3] This is particularly true for machine learning and artificial intelligence. By necessity, creating a data product typically requires more software development and hence more coding. However, project-based work, where the output is a set of recommendations that can be operational, tactical, or strategic in nature, can often be accomplished much more efficiently with LCNC analytics. Of course, LCNC models can be difficult to scale and control (an extreme example is a spreadsheet model that gets copied and edited by numerous users).

For project-based analysts embedded in the business or part of a specialized group with an analytics focus, LCNC analytics can save time, and therefore increase productivity, in data wrangling, model building and testing, and analysis.[4] This allows more time to be dedicated to the analysis and to developing sound recommendations. It also allows analysts to focus on what they were trained to do and love to do: create analyses that lead to better decisions.

About the Author:
Jeffrey D. Camm is the Inmar Presidential Chair in Analytics, the Academic Director of the Center for Analytics Impact, and the Senior Associate Dean for Faculty at the Wake Forest University School of Business. Known for his pragmatic approach to analytics, he is a sought-after speaker in academia and business. He is coauthor of the market-leading business analytics text, Business Analytics 4e, with Cengage Publishing, and his forthcoming book, Data Duped: How to Avoid Being Hoodwinked by Deceptive Information (with Derek Gibson), is scheduled to be published in 2023 by Rowman & Littlefield Publishers.


References:
[1] Ling, R., and H. Roberts, “IDA: An Approach to Interactive Data Analysis in Teaching and Research,” The Journal of Business, Vol. 48, No. 3, July 1975, pp. 411-451.
[2] Krensky, P., A. Jaffri, and F. Choudhary, “Leading Upskilling Initiatives in Data Science and Machine Learning,” Gartner, June 11, 2021 (ID G00739774).
[3] Davenport, T., and R. Bean, “Developing Successful Data Products at Regions Bank,” MIT Sloan Management Review, November 10, 2022.
https://sloanreview.mit.edu/article/developing-successful-data-products-at-regions-bank/
[4] Davenport, T., “The Future Of Work Now: AutoML At 84.51° And Kroger,” Forbes.com, October 21, 2020.