In the last blog, I covered the impact of data protection and data localization regulations, one of the key trends unfolding across multiple geographies. There is one more trend slowly and silently taking centre stage, one that will dictate how and where companies need to focus in the next decade.
The business world will continue to be agitated by technologies such as artificial intelligence, robotics, additive manufacturing and augmented reality, among others, and will be forced to change the way it operates. Governments, too, will increasingly trial and adopt these technologies in healthcare, education, agriculture, traffic management and transport. Regulators will remain busy moderating the role of technology, trying to keep a fine balance between harnessing its gains and curbing the ill effects that may arise from its potential misuse.
All of us are aware that the boom in artificial intelligence is going to disrupt our world. The sectors that stand to gain from intelligent machines include healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation, among others. It is estimated that AI has the potential to add USD 957 billion, or 15 percent of current gross value added, to India’s economy by 2035.
In India, large-scale applications of AI are being deployed across sectors. In Uttar Pradesh, for example, 1,100 CCTV cameras were installed for the ‘Prayagraj Kumbha Mela’ in 2019 to raise an alert when crowd density exceeded a threshold. NIRAMAI, a startup, has developed an early-stage breast cancer detection system using a portable, non-invasive, non-contact AI-based device. Researchers from IIT Madras are looking to use AI to predict the risk of expectant mothers dropping out of healthcare programmes, to improve targeted interventions and increase positive healthcare outcomes for mothers and infants.
While the potential of these solutions to improve productivity, efficiency and outcomes is well established, there are side effects too. Regulators across the world are struggling to cope with unprecedented challenges that need to be addressed in the near future:
- How will privacy concerns be addressed during data collection?
- How can unfair discrimination in recommendations be mitigated?
- Are governments and regulators prepared for the new norms that come with AI-based automation: job losses, deepfakes and threats to social harmony?
Recent headlines illustrate the concerns around the ethical use of AI:
“Amazon Reportedly Killed an AI Recruitment System Because It Couldn’t Stop the Tool from Discriminating Against Women” – Fortune, Oct 2018
“A beauty contest was judged by AI and the robots didn’t like dark skin” – Guardian, Sep 2016
“Investor Sues After an AI’s Automated Trades Cost Him $20 Million” – MIT Tech Review, Jan 2020
“Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.” – Forbes, May 2020
Psychological profiling enabled by AI, combined with the ease of spreading propaganda through online platforms, has the potential to cause social disharmony and disrupt democratic processes. The Cambridge Analytica scandal involved using the data of millions of users, without their consent, on matters of national and political interest around the world. In Myanmar, online platforms were used to spread hate speech and fake news targeted against a particular community, leading to ethnic violence.
Let’s scan what’s happening in different countries with respect to guidelines or regulations established specifically for AI.
European Union (EU)
The EU has highlighted the significance of enhancing the security-related aspects of AI in a whitepaper, which proposes the amendments needed in the Product Liability Directive to address risks associated with AI. As per the EU’s Ethics Guidelines for Trustworthy AI, AI practitioners need to respect procedural fairness and strike a balance between competing interests and objectives.
Singapore
In 2019, the Personal Data Protection Commission (PDPC) in Singapore released an updated Model Artificial Intelligence Governance Framework for data protection compliance for organizations deploying AI. Earlier, in 2018, the Monetary Authority of Singapore (MAS) published the Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of AI and Data Analytics in financial services.
United States (US)
Unlike Singapore and the EU, the US does not have an overarching federal legislation on privacy. However, in February 2019, the US government issued a slew of directives on promoting AI technologies while protecting civil liberties. Subsequently, in January 2020, the White House released a set of ten “Principles for the Stewardship of AI Applications” to ensure fairness and non-discrimination in AI implementation.
India
India is also moving towards setting up an overarching guidance framework as well as sector-specific frameworks for the use of AI systems. In finance, SEBI has begun actively pursuing reporting requirements for Artificial Intelligence (AI) and Machine Learning (ML) applications and systems offered and used. The National Digital Health Mission (NDHM) has identified the need to create guidance and standards to ensure the responsible use of AI systems in health.
This is an emerging field of regulation. A lot of research is under way in academia, industry and government to develop tools for managing AI systems responsibly. Google, Microsoft and IBM have released open-source toolkits to detect bias in datasets and ML models.
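To give a flavour of what such toolkits measure, here is a minimal, self-contained sketch of one common fairness metric, the demographic parity difference (the gap in positive-prediction rates across sensitive groups). It is not tied to any specific vendor toolkit, and the screening data below is entirely hypothetical:

```python
# Illustrative sketch of a bias metric: demographic parity difference.
# A value of 0 means all groups receive positive predictions at the
# same rate; larger values signal possible unfair discrimination.
# All data below is hypothetical.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate
    across sensitive groups (e.g. gender)."""
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical shortlisting decisions (1 = shortlisted) and applicant gender.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
gender = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

gap = demographic_parity_difference(preds, gender)
print(f"Selection-rate gap: {gap:.2f}")  # 0.60 for M vs 0.40 for F -> 0.20
```

Real toolkits report many such metrics (equalized odds, disparate impact and so on) and also offer mitigation algorithms, but the underlying idea is the same: quantify how a model's outcomes differ across groups so the bias can be audited.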
Ethical dimensions of Robotics
There will be a gradual acceptance of robots in human environments. With advances in algorithms, computing and processing power, human-robot collaboration will become a reality. Beyond factory floors, robots will soon be seen in hospitals (robotic surgery), on roads and highways (autonomous vehicles), in dangerous mines (autonomous trucks), in retail shops (assisting buyers) and at home (geriatric care, care for autistic children, help with household work). They will change the manufacturing, healthcare, logistics and transportation industries, and society as a whole.
While robots are rapidly getting better at seeing, feeling and sensing their environments, and will take on more and more work, governments will have to regulate the new human-robot environment. The ethical use, and the security, of data captured by robots will be a great challenge. Over time, robots will acquire very sensitive information about individual humans, and that information must be protected from misuse. Associating a gender with a robot for performing certain types of tasks may also be perceived as bias and prejudice. Regulators will certainly introduce compliance requirements to ensure robots are not used for any deceptive or manipulative purpose, or in ways that offend any gender, race or community.
Regulation in Industry 4.0
Industry 4.0 is unfolding, and regulation in Industry 4.0 will not be the same.
The core of Industry 4.0 is a set of rapidly evolving and converging technologies. Additive manufacturing and advanced materials are pushing the boundaries of what manufacturing industries can do. Big data analytics is enabling richer insights, while rich simulations and augmented reality are blurring the lines between the physical and digital realms. In addition, a shift is happening through artificial intelligence, autonomous robots, cloud computing and the Internet of Things (IoT).
We have not yet seen the full-fledged impact of Industry 4.0. Completely new industries may be created at the intersections of these technologies. Even the number of technologies that will disrupt the manufacturing industry is uncertain, as the field is growing and evolving rapidly.
Though it is premature to predict the complete scope of Industry 4.0 compliance challenges at this stage, companies should start strengthening data privacy (including appropriate handling, ownership and storage), streamlining processes and standards for embracing automation and data exchange, and building robust security frameworks for Anti-Money Laundering (AML) and fraud prevention. Regulators will actively seek more, and more accurate, data in the Industry 4.0 regime, which will require companies to adopt a proactive, rather than reactive, approach to compliance and regulation.
(To be continued …)
Based on inputs by Manas Bairagi