October 16, 2021
Trustworthy AI
Kriti Singh
One of the challenges with the pursuit of AI is the incongruity between the fantasy concept of artificial intelligence and the real-world, practical applications of AI. In movies and science fiction novels, AI systems are sketched as super-intelligent machines that have cognitive capabilities adequate to or greater than that of humans (also known as Artificial General Intelligence).


However, the reality is that much of what is being implemented today consists of niche applications of AI such as image recognition (Google Lens), conversational systems (Siri, Alexa, chatbots), predictive analytics (predicting the outcomes of legal decisions, monitoring the progression of Parkinson’s disease), and pattern and anomaly detection, a common example being medical imaging to detect cancerous cells.


In each of these realms, we ask machines to perform a certain set of tasks that would otherwise require the judgment and insight of humans. However, even in these niche applications of AI, we have reason to be apprehensive. Systems are placed in positions where they may influence someone’s life, job, and health, and this comes with risk and a need for trust. So we ask the question: Can AI be trusted?


AI Systems Used Everyday



What is considered trustworthy AI?


Trustworthy AI systems are:
Lawful: Those that respect all laws and regulations.
Ethical: Those that adhere to ethical principles and values.
Robust: Those that behave reliably and safely, from both a technical and a social perspective.


The EU trustworthy AI recommendations list seven requirements an AI system must meet to be considered trustworthy:

1. Focus on human agency and oversight. To prove themselves trustworthy, AI systems need to support human objectives and fundamental rights, enable people to flourish, preserve human agency, and serve the overall goals of a healthy society.


2. Technical robustness and safety. AI systems should “do no harm” and even prevent harm from occurring. They must be developed to perform reliably, have safe failover mechanisms, minimize intentional as well as unintentional harm and prevent damage to people or systems.


3. Privacy and data governance. AI systems should maintain people’s data privacy as well as the privacy of the models and supporting systems. Sound data governance, covering the quality, integrity, and access control of the data involved, underpins this requirement.


4. Transparency. AI systems should be able to explain their decision-making as well as provide visibility into all elements of the system. Explainable AI uses a variety of approaches and procedures to ensure that each choice made during the machine learning process can be traced and explained in a precise manner.


5. Diversity, nondiscrimination, and fairness. As part of the focus on human agency and rights, AI systems must support society’s goals of inclusion and diversity, minimize aspects of bias and treat humans with equity.


6. Societal and environmental well-being. In general, trustworthy AI applications shouldn’t cause societal or environmental harm, make people feel they are losing control of their lives or jobs, or work to destabilize the world.


7. Accountability. At the end of the day, someone needs to be in charge. The systems might be working autonomously, but humans should be the supervisors of the machine. There needs to be an established path for responsibility and accountability for the behavior and operation of the AI system through the system’s life cycle.


Together, these seven requirements form the foundation of reliable, robust, and trustworthy AI systems.


Building AI systems we can trust:


Biometrics: Trust but Verify

Performance, Robustness, and Scalability: Although accuracy has matured, are there inputs that will still break the system? How will the system perform over time? How will the system scale to millions or even billions of users? India’s Aadhaar, the national biometric ID system, has achieved a remarkable level of scalability. Aadhaar has been extremely successful in its mission to provide a unique and verifiable digital identity to all, boasting over 1.3 billion enrollees, with deduplication based on all ten fingerprints, the face, and both iris images. However, very few evaluations exist in the literature to show how biometric recognition systems operate at a scale the size of Aadhaar. At that scale, even small rates of false rejects and false matches cause chaos and ill will. Testing AI algorithms before they’re deployed is paramount to ensure that they’re capable of carrying out the tasks they are set to do.
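As a concrete illustration, the two headline error rates of a biometric matcher, the false match rate (FMR) and the false non-match rate (FNMR), can be estimated from genuine and impostor comparison scores. The sketch below uses toy, hypothetical similarity scores and an arbitrary threshold; a real evaluation at Aadhaar scale would involve millions of comparison pairs.

```python
def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """FNMR: fraction of genuine pairs scoring below the threshold (false rejects).
    FMR: fraction of impostor pairs scoring at or above it (false matches)."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fmr, fnmr

# Toy, hypothetical similarity scores; real evaluations use millions of pairs
genuine = [0.91, 0.84, 0.78, 0.95, 0.66]   # same-person comparisons
impostor = [0.12, 0.35, 0.08, 0.41, 0.72]  # different-person comparisons
fmr, fnmr = fmr_fnmr(genuine, impostor, threshold=0.6)
```

Sweeping the threshold trades one error rate against the other, which is why evaluations report both rather than a single accuracy figure.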



Bias and Fairness: Does the biometric recognition system work as well across all demographic groups? Does the system misclassify members of one demographic group more than another (e.g., age, gender, race, ethnicity, and country of origin)? Why? What are the sources of bias in a biometric recognition system? Artificial intelligence can inherit the biases of the people creating it; involving more people with diverse perspectives is key to preventing problematic developments from blossoming. Several studies have also investigated the bias factor of age or aging in biometric modalities. A consistent finding of bias in face recognition across studies is that the recognition performance is worse for female cohorts (possibly due to the use of cosmetics).
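A practical starting point for the bias questions above is simply to disaggregate error rates by demographic group and compare them. The sketch below uses made-up match/no-match labels and group names; real audits use standardized datasets and statistical significance tests.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: (group, predicted, actual) tuples (hypothetical labels).
    Returns the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy audit data: a gap between groups is the bias signal to investigate
records = [
    ("A", "match", "match"), ("A", "match", "match"), ("A", "no-match", "match"),
    ("B", "no-match", "match"), ("B", "no-match", "match"), ("B", "match", "match"),
]
rates = error_rate_by_group(records)
```

A persistent gap between the per-group rates, as between groups A and B here, is exactly the kind of disparity the studies cited above report for female cohorts in face recognition.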


Data sets and textbooks both have human authors — they’re both amassed according to guides made by people. If a student is handed a textbook written by a prejudiced author, is it not implied that they will pick up some of those same prejudices? AI is in its nascent stage, still figuring out how to remedy issues like algorithmic bias. But the biases exhibited by AI are the same as existing human biases: data sets used to train machines are much like the textbooks used to educate people. The field must also ensure that the organizations using AI, and the individuals within those organizations, are themselves trustworthy.


“No matter how good your models are, they are only as good as your data.”


Explainability and Interpretability: Why is the system making the decision it is making? What parts of the input are being used to make a final decision? Which features of the input are most important in the decision? Will these features enable the model to operate accurately and consistently over time and in different operating conditions? Operators and end users have long been kept in the dark by an AI landscape filled with black-box solutions, leading to widespread mistrust in the public eye. At LENS, our solutions, while protecting the intellectual property rights of our clients, customers, and end users, include a commitment to leverage everything at our disposal to ensure complete transparency, until every “how” and “why” is answered satisfactorily. That is how LENS is taking a step towards building trustworthy AI systems.
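One widely used model-agnostic way to answer “which features matter most?” is permutation importance: shuffle a single input feature and measure how much the model’s performance degrades. A minimal sketch with a toy model and made-up data (illustrative only, not LENS’s actual tooling):

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, trials=20, seed=0):
    """Shuffle one feature column and report the average drop in the metric
    relative to the intact data; a larger drop means a more important feature."""
    rng = random.Random(seed)
    base = metric(model, X, y)
    drops = []
    for _ in range(trials):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        drops.append(base - metric(model, X_perm, y))
    return sum(drops) / trials

# Toy classifier that relies only on feature 0 (hypothetical)
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda m, X, y: sum(m(r) == t for r, t in zip(X, y)) / len(y)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
imp0 = permutation_importance(model, X, y, 0, accuracy)  # positive: feature 0 drives the model
imp1 = permutation_importance(model, X, y, 1, accuracy)  # zero: feature 1 is ignored
```

Because the technique only needs the model’s inputs and outputs, it can be applied even to black-box systems, which is what makes it useful for the transparency questions above.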


Privacy & Security: Even if we have a highly accurate and secure system, how can we protect the privacy of end users (and those who are in the training database)? Can we train on decentralized data, e.g., federated learning? Can we perform training or make inferences directly on encrypted data?
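Federated learning, mentioned above, trains a shared model without centralizing raw data: each client trains locally, and only model weights are sent for aggregation. A minimal sketch of one federated-averaging round, with hypothetical client weight vectors and dataset sizes:

```python
def federated_average(client_weights, client_sizes):
    """One round of federated averaging: combine locally trained weight
    vectors, weighted by each client's dataset size, so that raw training
    data never leaves the client device."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical clients with locally trained weight vectors
w_a = [0.2, 1.0]
w_b = [0.6, 0.0]
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
```

In a full deployment this round repeats many times, and the weight updates themselves can additionally be protected with secure aggregation or differential privacy.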


The General Data Protection Regulation lays out strict guidelines for processing data:


• Lawfulness, fairness, and transparency — Processing must be lawful, fair, and transparent to the data subject.

• Purpose limitation — You must process data for the legitimate purposes specified explicitly to the data subject when you collected it.

• Data minimization — You should collect and process only as much data as necessary for the purposes specified.

• Accuracy — You must keep personal data accurate and up to date.

• Storage limitation — You may only store personally identifying data for as long as necessary for the specified purpose.

• Integrity and confidentiality — Processing must be done in such a way as to ensure appropriate security, integrity, and confidentiality (e.g. by using encryption).

• Accountability — The data controller is responsible for being able to demonstrate GDPR compliance with all of these principles.


Significant progress has been made in solidifying the accuracy component of a trustworthy biometric recognition system. Scientists must now shift their attention away from a purely recognition-accuracy- and convenience-driven mindset toward the concerns voiced by policy makers and the general public about the reliability of biometric recognition systems.


Viewing A.I. in a Softer Light



Humanity’s story is the story of automation. Technological advancements help us wring out inefficiencies. AI has made the world a safer, healthier, and happier place in niche ways. Medical AI, for example, empowers health care professionals to provide better diagnoses and treatments to more patients than they could on their own, which is crucial during a pandemic that’s stretching resources thin. AI is used in healthcare to predict the onset of diseases, in insurance to detect fraud, in law enforcement to predict crime, in agriculture to increase crop yields, in cities to reduce congestion, in logistics to identify supply-chain risks, in public health to track pandemics, and in innumerable other ways. Moreover, the future holds even more scope for trustworthy AI systems to improve our way of living.
AI isn’t as bad as the headlines claim. AI systems do not operate in a lawless world; they are legally bound. Artificial intelligence has served the welfare of humanity since its genesis. This can be a time of exploration and discovery rather than fear-mongering, but only if we welcome the real merits of A.I. already materializing before our eyes.

At LENS:


An area where AI is vastly overlooked is “AI for Social Good”. LENS actively provides solutions to global challenges, including assigning digital IDs (via biometrics) to everyone (including infants and people in under-developed regions), conserving wildlife, and forecasting natural disasters (say, via aerial imagery). Our contributions towards this goal can go a long way towards instilling trust, fairness, and security for society at large.


LENS, complemented by our research-driven approach, strives to overcome some of the noted challenges of AI, such as ethical training and bias-free decision-making while preserving user privacy, all of which are key to transforming the industry ahead. We’re building the future of Trustworthy AI, together.
