AI use during pandemic creates solutions and causes anxiety; regulators should step in
The years 1956 to 1965 are considered the Big Bang of Artificial Intelligence (AI). This is when the earliest programmers asked computers to make sense of large sets of data. Today, AI technologies are still prone to failure and unable to perform abstractions, but they have become excellent at pattern matching. The ability of machines to learn from experience and perform tasks once only possible for humans has created a range of possibilities to enrich and improve human lives, as well as to endanger them.
The pandemic of 2020 may have rebooted AI, and with it AI systems will be rippling through our lives for years to come. The implications may be huge. People should embrace AI, but not trust it just yet – not until engineers build systems, and governments create a regulatory environment and transparency, that earn the trust of society.
Over the last several months, many have placed their trust in AI – demonstrating its abilities and developing applications that make the fight against the virus more effective.
In Europe, countries such as France are testing tools that trace patterns in the data to build forecast models of emerging COVID-19 hotspots. They are deploying AI-powered systems that can evaluate millions of patterns per minute to surface insights and build predictive models across a range of outcomes.
Another crucial benefit of AI robots is that they minimise the risk of exposure for essential workers while allowing hospitals and stores to carry on with daily activities. Blue Ocean Robotics, a Danish start-up, for example, has created a self-driving disinfection robot that uses ultraviolet light to kill bacteria and viruses in hospitals. Its robots have been deployed in all Chinese provinces to help fight COVID-19.
Other innovative uses of AI in the context of COVID-19 include tracking and forecasting outbreaks, diagnosing the virus, processing healthcare claims, informing on the status of local restrictions, delivering food and medical supplies, cleaning and sterilizing surfaces, contact tracing, and fast-tracking the development of a vaccine.
Risks associated with AI
AI has been rebooted this year, and it will lead the way in the post-pandemic world. As with many new technologies, AI will at times also be abused. Its increased use will cause labour disruption, enable targeted and mass surveillance, fuel a new arms race, and drive the development of biased algorithmic decision-making tools.
The unregulated use of AI is in some cases already affecting civil liberties in the name of public health interests. Countries around the globe have been adopting surveillance measures that use automated means to gather data and carry out contact tracing, quarantine monitoring and electronic fencing.
Various European countries are pulling in data to assess compliance with lockdown measures. In the UK and Spain, CCTV footage and drones are being used to monitor and enforce general lockdown compliance. Whether such measures will remain in place around the globe after the pandemic remains to be seen. We argue that AI should be further developed and used, provided we can trust both the technology and the people using it.
Public health concerns may trump privacy rights during a state of emergency, but we argue that one need not exclude the other. Emergency measures have a nasty tendency of becoming the new normal, and we are concerned that measures implemented recently are unlikely to be rolled back once the pandemic is over, generating major privacy risks. Consider the aftermath of the Paris terrorist attacks of 2015: a state of emergency was extended five times and lasted two years, and when it was finally withdrawn in November 2017, the French government absorbed the exceptional anti-terrorism powers into ordinary law.
This, as well as the use of AI in the post-pandemic world, requires careful regulation and diligent oversight.
In the US, draft rules on the use of AI indicate that the government will apply minimal regulation in order to encourage AI's growth and innovation. This will result in the industry getting ahead of the government and US-based AI companies gaining an advantage, as has been the case with the Internet and other technologies.
The EU will need to find a balanced approach. Over-regulation and too much red tape will stifle innovation. Regulators first need to become well-educated on AI, as the process of legislating and regulating cannot be left to the ill-informed. They should ensure that any use of AI is proportionate to a public interest that is neither overblown nor politicised.
As with the use of online data, there has to be independent oversight of what data is collected, who can use it, and when. No AI-obtained data should be traded or commercialised without the informed consent of the individual.
Privacy watchdogs, Data Privacy Officers and privacy platforms need to bring transparency to, and help build trust in, the use of data by AI systems – auditing those systems and helping their operators with development and implementation.
Overly invasive uses of AI, such as surveillance, need to end when the pandemic ends. Data collected to fight the virus and future pandemics must not be used for other purposes – whether commercially by companies or by governments to monitor individuals.
The EU should strive to champion AI development and implementation. Progress, however, should be measured not only by machines' ability to crunch more data or run more operations. AI systems will become truly intelligent only when they align with human values and respect privacy.