Published: Sep 09, 2022
How do we build ethical AI?
Introduction
During the Covid-19 pandemic, technology provided a lifeline to millions of people, enabling them to continue working, socialising, and accessing vital services even when their governments imposed major restrictions on movement.
Among all the technological breakthroughs of the past few decades, one of the most significant is the development of artificial intelligence (AI). Recent advances in AI have supercharged information processing and brought advanced computing power into the mainstream. AI-enabled technologies have fundamentally transformed many areas of everyday life, and as new breakthroughs are made, the impact of these technologies on society will only increase. Some of the most useful applications of AI today include:
- Applying advanced analytics, which lets governments and businesses leverage large amounts of data collected from individuals and devices to refine product design and improve the customer experience
- Providing users of online healthcare services with guidance on how to manage their condition, including remote access to medical professionals and information on nearby support groups
- Deploying chatbots that allow individuals to interact with a business or government institution and complete transactions without needing to speak to a human agent
- Improving urban planning by optimising transportation routes, thereby reducing commute times
- Directing citizens to social services based on their needs and eligibility
However, the rise of AI has been met with concerns over the potential for it to perpetuate human biases or be misused for things like heavy-handed government surveillance. For societies to reap the benefits of technology while avoiding the pitfalls, organisations must learn to wield it responsibly.
With that in mind, how can technology be used to bring governments and businesses closer to the individuals they serve without undermining trust?
What are the concerns?
As use of AI becomes more pervasive, many experts are raising concerns about the difficulty of defining and enforcing the ethical use of intelligent technologies. They point primarily to the fact that AI is increasingly being used by businesses mainly concerned with maximising profits, and by governments seeking to surveil and control citizens, without sufficient explainability, transparency or accountability to the individuals affected by AI-influenced decisions.
The issue is further complicated by the fact that two of the world’s largest tech superpowers, the USA and China, define ethics in different ways. This makes it difficult to achieve a global consensus on how to regulate AI and ensure that it is deployed in a responsible manner.
According to research carried out by Pew Research Center and Elon University’s Imagining the Internet Center, most experts fear that “ethical principles focused primarily on the public good will not be employed in most AI systems by 2030.”
Without a proper ethical framework in place to ensure the responsible use of AI, intelligent technologies could lead to a range of negative outcomes, including:
- The displacement of humans in the workforce. Jobs that primarily involve repetitive or predictable activities are most at risk of being taken over by intelligent machines.
- The erosion of privacy and human liberty. AI allows businesses and governments to collect and analyse the personal information of members of the public on a massive scale and with great speed. This has major implications for the individual’s right to privacy. Moreover, AI-enabled tools like facial recognition technology and video analytics can be used by authoritarian governments to control their populations.
- Systemic discrimination. Algorithms can reinforce the biases of the people who create them, which is especially worrying when they are used for tasks like screening job applicants and approving loans. Just like humans, AI systems are capable of discriminating on the basis of ethnicity, gender, sexual orientation, disability, social class, and more; even a simple audit of selection rates, like the sketch below, can surface such disparities.
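One widely used check is to compare selection rates across demographic groups. The sketch below is a minimal illustration in Python; the applicant data, group labels, and the 0.8 (“four-fifths rule”) threshold are illustrative assumptions, not a compliance standard.

```python
# A minimal fairness audit: compare selection rates across groups and
# apply the "four-fifths" disparate-impact heuristic. The data and the
# 0.8 threshold are illustrative, not a legal or production standard.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, chosen in decisions:
        total[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate >= threshold * best) for g, rate in rates.items()}

# Toy screening outcomes: (applicant group, passed screening)
outcomes = [("A", True)] * 48 + [("A", False)] * 52 \
         + [("B", True)] * 30 + [("B", False)] * 70

for group, (rate, ok) in disparate_impact(outcomes).items():
    print(f"group {group}: selection rate {rate:.2f}, within 4/5 rule: {ok}")
```

Run on this toy data, group B's selection rate (0.30) falls below four-fifths of group A's (0.48), so it would be flagged for review.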
Making AI work for businesses and societies
Discussions around the responsible use of AI have given rise to the concept of “human-centred AI”. In essence, this involves making human input integral to the design and development of AI systems, to ensure that these systems improve people's lives while operating with transparency and fairness and upholding privacy.
NCS is doing its part to create a human-centred AI ecosystem. We help our clients to operationalise data and AI ethics by:
- Ensuring all our data science Practice Leaders are certified in AI ethics and governance
- Creating and applying a data and AI risk framework tailored to each client's situation and industry
- Putting ethical considerations upfront in the design of an AI system
- Addressing bias and upholding data security and privacy throughout the development and maintenance of the AI system
- Monitoring the use and impact of the AI system (as a system could be ethically developed but unethically deployed)
- Supporting AI ethics education in our institutions of higher learning
We are an inaugural partner of the Infocomm Media Development Authority (IMDA), helping to test and refine the Model Artificial Intelligence Governance Framework. The framework is designed to help organisations ensure that “decisions made by or with the assistance of AI are explainable, transparent and fair to consumers; and their AI solutions are human-centric.”
NCS is continuing to find ways of harnessing the power of AI without undermining privacy. We are working to advance privacy-enhancing computation technologies that uphold privacy, security, and transparency in the AI space, including:
- Providing trusted environments in which sensitive data can be processed or analysed, including hardware-based trusted execution environments (TEEs)
- Performing processing and analytics in a decentralised manner, using federated and privacy-aware machine learning to reduce the need to exchange raw data (a minimal sketch of this idea follows this list)
- Transforming data and algorithms before processing, for example with homomorphic encryption, which allows computations to be performed on encrypted data without first decrypting it (also sketched below)
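To illustrate the decentralised approach, here is a minimal sketch of federated averaging for a linear model, in which each client trains on its own private data and shares only model weights with a central server. The synthetic data, learning rate, and round counts are assumptions for demonstration, not a description of any production system.

```python
# A minimal sketch of federated averaging (FedAvg) for a linear model:
# each client updates the model on its own data and shares only the
# resulting weights, never the raw records.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1, epochs=5):
    """Gradient descent on one client's private data; returns updated weights."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each holding private samples of y = 3*x0 - 2*x1 + noise
true_w = np.array([3.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each client trains locally; only the weights leave the client
    local_weights = [local_step(w_global, X, y) for X, y in clients]
    # The server averages the weights, weighted by client dataset size
    sizes = [len(y) for _, y in clients]
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 2))  # close to [ 3. -2.]
```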
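And to illustrate computing on encrypted data, the following toy Paillier example shows additive homomorphic encryption: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, without the data ever being decrypted. The tiny key sizes are for demonstration only and are nowhere near secure; the scheme is textbook Paillier, not NCS's specific implementation.

```python
# A toy Paillier cryptosystem illustrating additive homomorphic
# encryption: anyone can add two ciphertexts, but only the key holder
# can decrypt the result. Real deployments use a vetted library and
# primes of 1024+ bits, not the demo primes below.
from math import gcd
import secrets

p, q = 293, 433                                  # demonstration-only primes
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)      # inverse of L(g^lam mod n^2)

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(41), encrypt(17)
c_sum = (c1 * c2) % n2   # homomorphic addition of the hidden plaintexts
print(decrypt(c_sum))    # 58, computed without ever decrypting c1 or c2
```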
Shaowei Ying, Senior Partner and Chief Scientist at NCS, noted, for example, that it is possible to have an AI-powered surveillance system that preserves individuals’ privacy. “What we have been able to do is use summarised data for agent-based modelling, deriving individual-level insights from aggregated telco data without exposing the identity and privacy of individuals,” he said, “and we have found that the accuracy is actually good enough really for transport modelling and public safety use cases.”
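As a toy illustration of the general pattern Ying describes, the sketch below releases origin-destination trip counts only when a cell covers enough individuals that no one can be singled out. The records, zone names, and minimum-group-size threshold are hypothetical assumptions for illustration, not NCS's actual modelling pipeline.

```python
# Privacy by aggregation: share zone-to-zone trip counts for transport
# modelling only when a cell is large enough to avoid identifying anyone.
from collections import Counter

K_ANON = 5  # suppress any cell smaller than this (illustrative threshold)

# Raw records: (user_id, origin_zone, destination_zone). These never
# leave the trusted environment; only the filtered counts are shared.
trips = [(u, "Z1", "Z2") for u in range(40)]
trips += [(u + 100, "Z3", "Z4") for u in range(3)]

counts = Counter((origin, dest) for _, origin, dest in trips)

# Only sufficiently large aggregates are released
released = {od: c for od, c in counts.items() if c >= K_ANON}
print(released)  # {('Z1', 'Z2'): 40} -- the 3-person cell is suppressed
```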
Fostering ethical AI
There is no doubt that AI will only become more advanced, and more ubiquitous in everyday life, in the coming years. It is important, therefore, to take steps now to ensure that the AI systems of the future operate in a way that benefits everyone in society.
“The need for businesses to adopt an ethical AI framework is clear: AI scales business operations, but it also unfortunately amplifies ethical risks,” Ying says.
Wynthia Goh, Senior Partner, NEXT, NCS, agrees. “For AI to be trusted in society, people need to have assurance there are sufficient safety checks before AI applications are released and while they continue to be in use,” she says. “The bigger the impact an AI application can make, e.g. medical diagnosis of a critical condition or affirmation of a criminal act to law enforcement, the higher should be the bar that is set to make sure we have built an AI application that is fair. AI adoption is important to both society and the tech industry. We owe it to stakeholders as well as ourselves to get it right in terms of how we design, build, manage and monitor AI.”
Ultimately, collaboration is key. As Ying points out, the AI ethical issue is “a multifaceted topic that requires technologists, humanists, and other functions within the organisation to address it. And only when all come together will we be able to solve that tricky, thorny problem.”
There are reasons to be optimistic. According to Pew Research Center, a consensus is building around ethical AI as the issue comes under increasing scrutiny. The think tank also pointed out that “no technology endures if it broadly delivers unfair or unwanted outcomes. The market and legal systems will drive out the worst AI systems.”