
Consulting with Integrity: ‘Responsible AI’ Principles for Consultants

Explores the concept of "Responsible AI" in the context of a rapidly aging global population and the increasing role of AI in boosting productivity.

DQINDIA Online

Demographics around the world are shifting significantly. Across many countries, the proportion of the working-age population is set to shrink. The UN reports that “the percentage of the global population aged 65 and above is expected to rise from 10% in 2022 to 16% in 2050”. That is a jump from roughly one in ten people to one in six.

Against this backdrop, AI holds exciting promise as the next big lever for productivity. Generative AI and other forms of AI, acting standalone in the digital world and coupled with robotics to execute tasks requiring physical labor, could improve global labor productivity by 30% in the next decade, and by considerably more by 2050.

All of this assumes that the development and deployment of, and value realization from, AI are done responsibly. ‘Responsible AI’, interpreted in this broader context, and as seen by me and many of my colleagues at Tiger Analytics (data architects, data scientists, AI engineers, consultants, and program orchestrators), is a continuum of initiatives that need to happen at three distinct but connected levels.

First, in the fundamental science, research, and development of large-scale AI models. Second, in how AI models are integrated into business processes and other applications. Third, in change management, with an eye toward maximizing value for stakeholders and minimizing negative impact on all forms of life on the planet. Here is a quick look at each one:

1. Responsible AI R&D

Thanks to a digitally well-connected world, many of us followed the debates at OpenAI over the past year or so. In my view, much of that is about keeping AI safe: now, and more importantly, in a future when models could become much smarter than humans. Topics like ‘alignment’ (aligning AI models to human values) or the more ambitious ‘superalignment’ (building or enhancing value systems) receive serious attention at this level, in addition to the performance metrics of the models themselves.

While this might appear to be a very modern problem, Isaac Asimov’s short story “Runaround” outlined the Three Laws of Robotics way back in 1942. [The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.] These remain highly relevant, especially as we approach a future in which digital robots (powered by self-learning, if not yet sentient, AI minds) and physical robots will interoperate.

2. Responsible AI Engineering

AI Engineering refers to the process of integrating a model and the components that surround it (data pre-processing, feature engineering, user interface, and so on) into an end-to-end AI application that serves the end objective consistently and at scale. In some cases, this is also referred to as an AI or ML product.

Examples of such systems include 24x7 systems that scan every financial transaction for possible fraud, generate the next best action for a healthcare provider to take for a patient, or deliver personalized product, pricing, or promotional recommendations to a customer across multiple channels of interaction (app, chat interfaces, and so on).
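The end-to-end shape described above can be sketched in a hypothetical, much-simplified form. Everything here (the field names, the scoring rule, the threshold) is invented for illustration; a real system would call a trained model rather than the stand-in rule below.

```python
# A hypothetical sketch of an end-to-end AI application:
# raw input -> pre-processing -> model -> post-processing -> decision.

def preprocess(transaction: dict) -> dict:
    """Normalize raw fields into model-ready features."""
    return {
        "amount": float(transaction["amount"]),
        "foreign": 1.0 if transaction["country"] != "home" else 0.0,
    }

def score(features: dict) -> float:
    """Stand-in fraud score; a real system would invoke a trained model here."""
    return min(1.0, features["amount"] / 10_000 + 0.5 * features["foreign"])

def decide(risk: float, threshold: float = 0.8) -> str:
    """Post-processing: turn a raw score into a business action."""
    return "review" if risk >= threshold else "approve"

def pipeline(transaction: dict) -> str:
    """The full application: consistent behavior from raw input to decision."""
    return decide(score(preprocess(transaction)))

print(pipeline({"amount": "120.00", "country": "home"}))     # small domestic purchase
print(pipeline({"amount": "9500.00", "country": "abroad"}))  # large foreign transfer
```

The point of the sketch is that the "AI product" is the whole chain, not just the model: pre-processing, scoring, and the decision rule each need the same engineering rigor.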

Building these systems requires multiple technical specialists (data scientists, machine learning engineers, application DevOps professionals, and others) to work alongside business analysts, consultants, and end business users. Everyone on such teams must be aware of, and actively implement, the principles of Responsible AI:

a) Interpretability of models used in decision systems: the approaches to establishing interpretability for classical predictive models differ from those for artificial neural networks and generative AI models, but the main point remains the same: the output of a model, in relation to its inputs, should be reasonably clear and consistent.

b) Fairness in decisions made or enabled, eliminating bias and discrimination.

c) Physical safety and mental health of end users interfacing with the system

d) Human-centeredness, from two perspectives: i) an optimal human-AI interaction experience, and ii) the design choice between human-in-the-loop and end-to-end automation.

e) Privacy: opt-in or opt-out choices made by individuals for the use of their data in the development of AI models should be strictly complied with.

f) Clear accountability, by way of i) individuals explicitly declaring adherence to the above principles, and ii) a multi-tiered governance system in line with the anticipated risks. Both are needed to ensure the sustained and safe application of AI.
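To make one of these principles concrete, here is a hypothetical sketch of a simple fairness check of the demographic-parity kind: comparing the rate of positive decisions a model makes across two groups. The decision data and the tolerance are invented for illustration; real fairness audits involve more metrics and context than this.

```python
# Hypothetical demographic-parity check: compare the rate of positive
# decisions (1 = approve, 0 = decline) across two groups of people.

def positive_rate(decisions: list) -> float:
    """Fraction of decisions that were positive."""
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list, group_b: list) -> float:
    """Absolute difference in positive rates between the two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Invented example: model decisions for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved

gap = parity_gap(group_a, group_b)
TOLERANCE = 0.1  # invented threshold for this sketch
print(f"parity gap: {gap:.3f}")
print("flag for review" if gap > TOLERANCE else "within tolerance")
```

A check like this is the kind of automated guardrail a multi-tiered governance system can run continuously, surfacing drift in fairness before it becomes a customer-facing problem.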

3. Responsible AI - Advisory and Change Management

Deployment of AI-based business solutions is not just a technology upgrade exercise. There are still many human elements to consider when integrating AI into business processes. The choice between complete automation and a hybrid human-AI approach is often referred to as the “Cyborg vs Centaur” model. Responsible AI in this context needs to address a few key questions:

1) When encountering a particular business problem, the first question to ask is whether we need AI to solve it. In my observation, in many cases smart analysis of data provides actionable and valuable insights, while in others there is a big opportunity to completely reimagine a business process through a system of well-synchronized AI models with a generative AI wrapper for the human interface. The key questions in such cases: what extent of task automation and decision-making should be delegated to AI, and what is the incremental business value versus the cost of such interventions? Answering these helps build a first-level consensus for change.

2) When the thoughtful deployment of AI frees up human capital, such transitions need to be planned to be smooth, with adequate reskilling. This is especially important because certain job roles within organizations (and certain sectors of an economy) will be impacted by AI much more quickly than others.

3) Finally, in the event of an AI outage, what is the backup plan? This is no different from human pilots taking over from autopilot. How do we ensure the human pilots of AI remain skilled enough to take over when the need arises?

In summary, the promise of AI to magnify human potential to superhuman levels is at an all-time high. With responsible design and thoughtful deployment, I believe we will be in a Centaurian world, with experiences enriched by the best of human and AI minds, for quite some time before Cyborgs dominate.

By Santhanakrishnan Ramabadran, Head of Analytics Consulting, Tiger Analytics
