Building Trust and Ethics in AI

By Vivek Khemani, Co-founder, Quantiphi; Vishal Vaddina, Solution Architect-ML, Leading Applied Research, Quantiphi Nov 01, 2021

The applications of AI are ever-expanding, opening new avenues in processes, workflows and technological solutions. Vivek Khemani and Vishal Vaddina from Quantiphi elaborate on how organisations that successfully implement responsible AI will have an edge in a digitally connected world, where humans and machines perform complementary functions to remarkable effect.

Sifting through data sources scattered across business units, collating information and making sense of massive datasets often leaves one with little time to be creative or strategic at work. If technology could take over these mundane, high-volume tasks, people could free up time to focus on more valuable, creative work. This is where we witness the transformative power of artificial intelligence (AI), and it is this ability that makes AI so powerful.

In the globally connected digital world, businesses now need to make their most critical decisions faster and more accurately. To achieve this, the best-run businesses combine ‘AI-Data-Cloud’ techniques with human insights to set up a "decision-making pipeline" that ingests all available data, uncovers the deepest insights, and produces reliable recommendations in a readily consumable form. The key is collaborative intelligence, where technology augments human creativity to attain greater insights and far-reaching impact.

It is easy to view AI as a threat and a substitute for human ingenuity. However, the balance between human creativity and AI implementation is akin to ‘yin’ and ‘yang’: yin entails the human qualities of exploration and experimentation, while yang is the execution and measurement of these experiments, enabled by AI. When enterprises adopt a human-centric approach to AI, they can create remarkable value, new opportunities and unprecedented growth. Such an approach also enables organisations to make the right implementation choices early in their digital transformation process, helping them build and apply responsible AI. Responsible AI helps organisations and leaders ensure that AI systems are fair, that data is acquired and used ethically, and that AI processes are transparent.


AI and the Human Point of Confluence

AI pervades many aspects of our daily lives: voice recognition tools on our mobile phones, faster claim assessments by insurance firms, facial-recognition passage at airports, and vaccine management by healthcare providers. AI is breaking barriers and evolving at an unimaginable speed, enabling businesses to create value at a global level with better products that self-learn, improve on their own and attain high customer satisfaction. However, it also raises concerns about the associated risks and stokes the fear of machines replacing humans. This fear is misleading and calls for a shift in perspective about AI’s capabilities and role in today’s society. The need is to harness intelligence by design, where humans and machines work collaboratively and complement each other.

AI can optimise the process of creation and innovation just as it improves the efficiency of most functions. The principal objective of any AI system is to free human bandwidth from mundane tasks and redirect it towards goals that require learning, thinking prowess and a mindset that explores new possibilities. For this to work, organisations need to act as enablers, creating an environment of re-skilling and upskilling that helps the workforce embrace the change.

As AI continues to revolutionise different sectors, the approach to implementing AI systems is vital to achieving the desired outcomes. When we design systems so that AI technologies augment human capabilities, we set the foundation for these systems to deliver higher value to our customers, co-workers and society at large. The pandemic hastened this change, as humans and technology came together to strengthen the response through contact tracing, disease detection, vaccination management, remote education and telehealth. What we imagined happening over the span of a few years, we had to accomplish in a matter of weeks. Some of the challenges that government bodies and agencies faced included a spike in demand for patient care, an unprecedented surge in unemployment claims, huge demand-supply imbalances in critical medical supplies and resources, the need to identify and monitor high-risk facilities, overwhelmed call centres, and COVID-19 medical data management. The Quantiphi team leveraged their AI-first digital engineering expertise to rapidly develop solutions addressing these critical problems.

Quantiphi experts built a ‘Rapid Response Virtual Agent’ for a US government agency that effectively assisted citizens with millions of unemployment claim-related inquiries. The virtual agent, specifically trained to understand user intent and address queries, helped revolutionise the agency's contact centre and improved the overall user experience.

Quantiphi utilised AI and machine learning technologies to battle the pandemic and support communities with solutions that automated repetitive tasks and helped clinicians manage operations. We also developed an AI-based solution for end-to-end vaccine distribution management that provided an efficient and empathetic way to automate inbound and outbound patient communication, including vaccine scheduling and availability updates through patients' preferred digital channels.

Another Quantiphi solution, built on Google Cloud Platform, helped medical institutes process and match massive amounts of structured and unstructured patient data against relevant clinical trial information and eligibility criteria. This enabled the institutes to make more informed treatment decisions and deliver more efficient patient care.

The impact of such collaborative innovation is radical. As with all technologies, the objectives of AI solutions must align with human interests. AI technology is, by its nature, capable of self-development; nevertheless, the direction it takes will continue to be governed by humans. With the right approach, humans will drive technological change, unlocking new value and making organisations future-ready.


Designing Responsible AI Systems

As AI empowers businesses and societies alike, the ethics around high-stakes AI applications have become a contentious issue. AI technologies aid decision-making; however, such use cases are exposed to risks such as replicating or amplifying human biases. Hence, it becomes essential for organisations to ensure that AI systems are fair and unbiased towards all sections of society. Even when datasets adequately represent the population, the AI output may still be compromised by historical human biases. For example, we developed a speech-to-text model that performed poorly on female voices and Asian accents, and worse still on the combination of the two. Once we recognised the inherent bias in the system, we retrained the model on more diverse data covering a wider range of accents, genders and other attributes, thereby improving the results.
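The kind of bias audit described above boils down to disaggregated evaluation: scoring the model separately for each group and each intersection of groups, rather than relying on a single aggregate metric. A minimal sketch, with hypothetical group labels and results:

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy per group, where each record is
    ((attribute_1, attribute_2, ...), correct_bool)."""
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for group, correct in records:
        totals[group][0] += int(correct)
        totals[group][1] += 1
    return {g: c / n for g, (c, n) in totals.items()}

# Hypothetical per-utterance evaluation results for a speech-to-text model,
# keyed by (accent, gender).
records = [
    (("US", "male"), True), (("US", "male"), True),
    (("US", "female"), True), (("US", "female"), False),
    (("Asian", "male"), True), (("Asian", "male"), False),
    (("Asian", "female"), False), (("Asian", "female"), False),
]

scores = disaggregated_accuracy(records)
# The intersectional group scores lowest, mirroring the compounded
# bias described above; it flags where more training data is needed.
worst = min(scores, key=scores.get)
```

An aggregate accuracy over the same records would hide exactly the gap this breakdown exposes, which is why the audit has to be run per group and per intersection.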

While ethics in AI remains an evolving subject, responsible AI involves building systems bound by fundamental guidelines that distinguish legitimate from illegitimate uses of AI. Responsible AI implies the need for AI systems to be transparent, interpretable, human-centric and socially beneficial. Beyond being fair and ethically compliant, responsible AI also spans multiple key areas, such as:

Safety, security and privacy: Data is the fuel of AI. AI systems must be developed and deployed in secure, conducive environments for both data collection and storage. Follow best practices for the security and safety of the data that AI systems use. For instance, collect only the required data, with the user's consent, and provide information about data-sharing with other parties, along with an opt-out option. Store data only in a secure, encrypted environment, and apply sound data-quality and data-deletion principles.
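Several of these practices can be sketched in a few lines: consent gating, data minimisation and pseudonymising identifiers before storage. The field names and helper functions below are hypothetical; a production system would additionally encrypt data at rest and manage the salt as a protected secret.

```python
import hashlib
import secrets

# Hypothetical schema: collect only the fields the use case actually needs.
REQUIRED_FIELDS = {"user_id", "claim_amount"}

def minimise(record):
    """Data minimisation: drop every field that is not explicitly required."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def pseudonymise(user_id, salt):
    """Replace a direct identifier with a salted hash before storage."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

def ingest(record, consent, salt):
    """Honour consent first; then minimise and pseudonymise before storing."""
    if not consent:
        return None  # the user opted out: store nothing
    clean = minimise(record)
    clean["user_id"] = pseudonymise(clean["user_id"], salt)
    return clean

salt = secrets.token_bytes(16)  # kept separately from the data store
raw = {
    "user_id": "alice",
    "claim_amount": 120.0,
    "browsing_history": ["..."],  # extraneous field, never stored
}
stored = ingest(raw, consent=True, salt=salt)
```

Keeping the salt out of the data store means a leaked record cannot be trivially reversed to the original identifier, while the same user can still be linked across records for legitimate processing.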

Responsibility and accountability: AI systems need to abide by the rules and regulations of the land and need to be answerable and accountable to the specific governing councils for all consequences—both intentional and otherwise. Organisations need to have teams in place with well-defined roles and pointed responsibilities for any actions and outcomes. Detailed technical audits of AI and data policies also help bring out the best practices of building responsible AI systems.


Transitioning to be a Responsible AI-based Organisation at Scale

At the outset, organisations should shed the belief that an AI system must be fully mature before it can be adopted and deployed. AI applications often produce results beyond their intended functionality. Though AI can transform businesses much faster than expected, many organisations abandon AI systems after facing ethical hiccups. As per research[1] by the Capgemini Research Institute, 41% of senior executives reported that they abandoned an AI system altogether when faced with ethical concerns, and 55% implemented a "watered-down" version of the system. Organisations therefore need to embed responsible AI at every stage of the AI adoption plan: from the initial stages of planning, handling data and developing models to deploying and monitoring AI systems at scale and collecting feedback from end-users. Incorporating this feedback into the next version allows organisations to correct course before minor roadblocks amplify into unmanageable hurdles. This may require specific training programmes and workshops that enable teams to:

a) Learn more about AI principles.

b) Set the boundaries of usage of AI systems.

c) Gain knowledge of governing processes.

d) Identify the potential risks and pitfalls of the lack of responsible AI.

e) Understand the benefits and requirements of responsible AI.


Discover Your Why

All organisations, regardless of size and sector, have access to virtually limitless technical infrastructure on the cloud. AI enables them to collaborate with teams across the globe, stay on top of the latest research in real time, and improve products and services at a global scale. AI holds the potential to reshape operations, reimagine processes, deliver superior customer experiences and mitigate risks. The question is no longer whether your organisation needs AI but where and how to implement it to realise its full potential. Even a modest implementation of AI can have a significant impact, which is a great advantage for small businesses on their journey to AI adoption.

We would suggest enterprises embrace AI early and leverage it to deliver better products and services to their customers. To be at the forefront of this change, we need to learn to combine valuable human characteristics such as empathy, foresight, creativity, and judgement with the logic of machines and the power of advanced technologies.


Key takeaways:

  • AI systems should free human bandwidth from mundane tasks to enable greater creativity.
  • Embrace AI early and leverage it to deliver better products and services to your customers.
  • Include responsible AI at every stage of the AI adoption plan.
  • Deploy governance and responsible AI models, monitor such systems at scale and collect feedback from the end-users.


[1] Capgemini Research Institute. (2019). Why addressing ethical questions in AI will benefit organizations. Retrieved from https://www.capgemini.com/wp-content/uploads/2019/08/AI-in-Ethics_Web.pdf


Vivek Khemani, Co-founder, Quantiphi

Vivek Khemani has expertise in corporate strategy and financial operations for new and established enterprises. As a seasoned business manager, he brings together the transformative capabilities of AI and cloud computing with a deep understanding of the global media and entertainment ecosystem. Prior to founding Quantiphi, he worked at Sasken Technologies for over a decade, at the intersection of technology services, telecom, IoT and analytics in the United States, EU and India. Vivek holds an MBA from the Indian Institute of Management, Bangalore and is a chartered accountant from India.

Vishal Vaddina, Solution Architect-ML, Leading Applied Research

Vishal Vaddina is a Solution Architect (ML) with extensive experience in leading and developing multiple end-to-end large-scale engagements across various industry sectors. He currently leads the R&D function at Quantiphi, specialising in applied research and focussing on cutting-edge research areas spanning multiple ML domains, including graph representation learning and responsible AI. Vishal holds an integrated Bachelor's and Master's degree from the Indian Institute of Technology, Kharagpur.


The idea of ISB Management ReThink was born out of the impending need to revisit and redefine the time-tested tenets of management, and at the same time, identify how they can still hold on to their relevance in contemporary times. With the ever-changing dynamics of management philosophies, and the associated classroom teaching methodology, it is about time to readjust the focus by shaking the fundamentals, breaking myths and bringing about the change necessary to survive in this cut-throat era of stiff competition.