Artificial intelligence (AI) is forecast to reach a market volume of US$826.70 billion by 2030 (Statista). As one of the fastest-growing economies, India is the third-largest digitalised country among the G20 nations, after the USA and China. India has enormous potential for an AI revolution, with the second-largest installed AI talent base at 420,000 employees. India's AI market is set to grow at a CAGR of 25 to 35% and is expected to reach US$17 billion by 2027.
As the potential of AI grows, so have concerns about the risks of using it, including the technology's impact on privacy and security. People are wary of how AI technologies use and apply the data they collect. In the past five years, consumer trust in AI has fallen globally from 61% to 53% (Edelman). There have also been instances where AI models do not perform as intended. When an AI model is not trained and tested against representative datasets, for example, its decisions can be biased against certain populations.
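To make that last point concrete, here is a minimal sketch, assuming synthetic data and scikit-learn, of how under-representation in training data can surface as a per-group accuracy gap. All names and numbers below are invented for illustration and are not drawn from any specific testing framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group_data(n, shift):
    # Synthetic two-feature data; labels depend on the group's own centre,
    # so a boundary learned mostly from one group misfits the other.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is under-represented.
Xa, ya = make_group_data(1000, shift=0.0)
Xb, yb = make_group_data(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on representative, equal-sized test sets for each group.
Xa_t, ya_t = make_group_data(500, shift=0.0)
Xb_t, yb_t = make_group_data(500, shift=1.5)
print("accuracy, group A:", accuracy_score(ya_t, model.predict(Xa_t)))
print("accuracy, group B:", accuracy_score(yb_t, model.predict(Xb_t)))
# A large gap between the two numbers is the kind of bias a test should flag.
```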
As the technology matures and becomes more ubiquitous, organisations and companies are doing more to ensure that the AI systems they implement not only make accurate, bias-aware decisions without violating data privacy, but are also used in a responsible manner. In recent years, both the public and private sectors have focused on developing guardrails and principles to ensure that AI is developed and deployed in a safe, trustworthy, and ethical fashion.
Demonstrating responsible AI use
To address the significant concerns many have over the unintended consequences of AI use, organisations and companies need to go beyond committing to responsible AI principles and do more to demonstrate to their stakeholders that they are implementing responsible AI in an objective and verifiable way.
Voluntary self-assessment is a start. AI Verify, the world's first voluntary AI governance testing framework and toolkit, enables businesses to demonstrate their deployment of responsible AI through technical tests and process checks.
Developed by Singapore's Infocomm Media Development Authority (IMDA), AI Verify has two components. First, the governance testing framework specifies testable criteria and the corresponding processes required to carry out the tests. Second, the software toolkit conducts technical tests and records the results of process checks.
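As an illustration of the pattern just described, the sketch below pairs a quantitative technical test with a recorded process check. This is not AI Verify's actual API; every name in it is hypothetical and chosen only to show the general shape of such a report.

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    passed: bool
    detail: str

def disparate_impact(preds_group_a, preds_group_b, threshold=0.8):
    # Technical test: ratio of positive-prediction rates between two groups.
    # A common rule of thumb flags ratios below roughly 0.8.
    rate_a = sum(preds_group_a) / len(preds_group_a)
    rate_b = sum(preds_group_b) / len(preds_group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return TestRecord("disparate_impact", ratio >= threshold, f"ratio={ratio:.2f}")

def process_check(question, answer_yes, evidence):
    # Process check: a documented yes/no attestation with supporting evidence.
    return TestRecord(question, answer_yes, evidence)

report = [
    disparate_impact([1, 1, 0, 1, 1, 0, 1, 1], [1, 0, 0, 0, 1, 0, 0, 0]),
    process_check("Is there a documented human-oversight procedure?",
                  True, "Ops manual v2, section 4"),
]
for r in report:
    print(("PASS" if r.passed else "FAIL"), "-", r.name, "-", r.detail)
```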
AI Verify brings together the disparate ecosystem of testing sciences, algorithms, and technical tools to enable companies to assess their AI models holistically in a user-friendly way.
AI Verify also facilitates the interoperability of AI governance frameworks in multiple markets and contributes to the development of international AI standards. It encompasses a testing framework that is aligned with internationally accepted AI ethics principles such as those from the EU and OECD.
Public-private partnerships critical
While guidelines are key to safeguarding responsible AI use, it is important to ensure that these guidelines do not inadvertently restrict innovation. This light-touch, flexible approach to managing AI risks is reflected in the artificial intelligence governance framework published by the Association of Southeast Asian Nations (ASEAN) in February this year. The voluntary ASEAN AI Guide provides seven guiding principles and recommends best practices for implementing responsible AI in the region.
To truly move the needle on responsible AI governance, close public-private collaboration in discussions and action is vital. Only by working with industry can we harness the collective power of public-private partnerships to advance AI testing tools, promote best practices and standards, and enable responsible AI.
The AI Verify Foundation, which was launched by the IMDA in 2023, brings together AI owners, solution providers, users, and policymakers, to support the development and use of AI Verify to address AI risks.
Companies including AWS, Google, Meta, Microsoft, and Standard Chartered Bank have tested AI Verify and provided IMDA with valuable feedback on the framework. Such industry feedback is consistently channelled into the development of the framework to strengthen AI governance testing and evaluation.
Responsible AI underpins technology’s future
The journey towards responsible AI has just begun, and progress requires commitment from, and collaboration with, stakeholders across the AI ecosystem. At the upcoming ATxSummit, global government and business leaders, as well as visionaries and industry experts, will gather in Singapore to advance discussions around AI governance and explore partnerships that bridge the gap between AI's expanding capabilities and the necessary safeguards. Generative AI, for instance, has given businesses the ability to create content quickly and cheaply, but the technology brings its own set of risks and challenges.
AI is poised to transform the global economy and will profoundly impact the way we work, live, and play, bringing economic and social benefits for all. But its immense potential cannot be fully harnessed without addressing concerns about the risks of using AI and solidifying trust in the technology. The path ahead may be challenging to navigate, but it is a path we must take.
-- Lee Wan Sie, Director, Development of Data-Driven Tech, Infocomm Media Development Authority, Singapore.