When a new coronavirus emerged from nature in 2019, it changed the world.
But COVID-19 won’t be the last disease to jump across from the shrinking wild.
Just this weekend, it was announced that Australia is no longer an onlooker: Canada, the US and European countries are scrambling to contain monkeypox, a less dangerous relative of the feared smallpox virus, which we eradicated at great cost.
As we push nature to the fringes, we make the world less safe for both humans and animals.
That’s because environmental destruction forces animals carrying viruses closer to us, or us to them. And when an infectious disease like COVID does jump across, it can easily pose a global health threat given our deeply interconnected world, the ease of travel and our dense and growing cities.
We can no longer ignore that humans are part of the environment, not separate from it. Our health is inextricably linked to the health of animals and the environment. This will not be the last pandemic.
To be better prepared for the next spillover of viruses from animals, we must focus on the connections between human, environmental and animal health. This is known as the One Health approach, endorsed by the World Health Organization and many others.
We believe artificial intelligence can help us better understand this web of connection, and teach us how to keep life in balance.
How can AI help us ward off new pandemics?
Fully 60 per cent of all infectious diseases affecting humans are zoonoses, meaning they came from animals. These include the lethal Ebola virus (from primates), swine flu (from pigs) and the novel coronavirus (most likely from bats).
Early warning of new zoonoses is vital if we are to tackle viral spillover before it becomes a pandemic.
Pandemics such as swine flu (influenza H1N1) and COVID-19 have shown us the enormous potential of AI-enabled prediction and disease surveillance. In the case of monkeypox, the virus has already been circulating in African countries, but has now made the leap internationally.
What does this look like? Think of collecting and analysing real-time data on infection rates. In fact, AI first flagged the novel coronavirus as it was becoming a pandemic, through work by the AI company BlueDot and by HealthMap at Boston Children’s Hospital.
How? By tracking vast flows of data in ways humans simply cannot do. HealthMap, for instance, uses natural language processing and machine learning to analyse data from government reports, social media, news sites, and other online sources to track the global spread of outbreaks.
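To give a flavour of that natural language processing step, here is a deliberately simplified sketch, not HealthMap's actual pipeline: it scans free-text reports for disease mentions and tallies them into a crude outbreak signal. The reports and disease terms are invented for illustration.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for scraped news reports.
REPORTS = [
    "Health officials report 12 new monkeypox cases in the capital.",
    "Hospital admissions for respiratory illness rising; flu suspected.",
    "Three more monkeypox infections confirmed, contact tracing underway.",
    "Local market reopens after cleanup.",
]

DISEASE_TERMS = {"monkeypox", "flu", "ebola"}

def extract_signals(reports):
    """Count disease-term mentions across free-text reports: a crude
    stand-in for the text-mining step in outbreak surveillance."""
    counts = Counter()
    for text in reports:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in DISEASE_TERMS:
                counts[token] += 1
    return counts

print(extract_signals(REPORTS))  # Counter({'monkeypox': 2, 'flu': 1})
```

Real systems add geolocation, deduplication and trained classifiers on top, but the core idea is the same: turn unstructured text into countable signals.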
We can also use AI to mine social media data to understand where and when the next COVID surge will occur. Other researchers are using AI to examine the genomic sequences of viruses infecting animals in order to predict whether they could potentially jump from their animal hosts into humans.
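One way such genomic comparisons can work, sketched very loosely here with invented toy sequences rather than any published model, is to summarise each genome as k-mer frequencies and compare a new virus against reference profiles:

```python
from collections import Counter

def kmer_profile(seq, k=3):
    """Normalised k-mer frequency vector for a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[key] * b[key] for key in a if key in b)
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b)

# Toy reference sequences; real work uses full genomes and trained models.
human_infecting = kmer_profile("ATGGCGTACGTTAGCATGGCGTAC")
animal_only = kmer_profile("TTTTAAAACCCCGGGGTTTTAAAA")
query = kmer_profile("ATGGCGTACGATAGCATGGCGTTC")

# The query genome is more similar to the human-infecting reference.
print(cosine(query, human_infecting) > cosine(query, animal_only))
```

Published approaches use far richer genome features and supervised machine learning, but this captures the underlying intuition: viruses with human-compatible genomic signatures look measurably different from those without.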
Better conservation through AI
Because so many infectious diseases originate in animals, protecting and conserving nature also protects our health. By keeping ecosystems healthy and intact, we can help prevent future disease outbreaks.
In conservation, too, AI can help. For instance, Wildbook uses computer-vision algorithms to detect individual animals in images, and track them over time. This allows researchers to produce better estimates of population sizes.
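Once individual animals can be re-identified across photos, population size can be estimated with classic mark-recapture statistics. Below is a small sketch using the Lincoln-Petersen estimator; the zebra IDs are hypothetical stand-ins for what an image-matching pipeline like Wildbook's might output.

```python
def lincoln_petersen(first_survey, second_survey):
    """Estimate population size from two photo surveys in which
    individuals have been re-identified (e.g. by a computer-vision
    model). Classic Lincoln-Petersen mark-recapture estimator."""
    marked = set(first_survey)                      # seen in survey 1
    caught = len(second_survey)                     # seen in survey 2
    recaptured = len(marked & set(second_survey))   # seen in both
    if recaptured == 0:
        raise ValueError("no overlap between surveys; estimate undefined")
    return len(marked) * caught / recaptured

# Hypothetical animal IDs produced by an image-matching pipeline.
survey_1 = ["zebra_01", "zebra_02", "zebra_03", "zebra_04"]
survey_2 = ["zebra_02", "zebra_04", "zebra_05",
            "zebra_06", "zebra_07", "zebra_08"]

print(lincoln_petersen(survey_1, survey_2))  # 4 * 6 / 2 = 12.0
```

The estimator's logic is simple: if half the animals in the second survey were already known, the first survey probably saw about half the population.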
Trashing the environment by deforestation or illegal mining can also be spotted by AI, such as through the Trends.Earth project, which monitors satellite imagery and Earth observation data for signs of unwelcome change.
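At its simplest, this kind of change detection compares vegetation indices between two satellite passes and flags pixels where greenness collapsed. The sketch below is a toy version with made-up NDVI-like values, not the actual Trends.Earth algorithm:

```python
def deforestation_fraction(before, after, drop_threshold=0.2):
    """Fraction of pixels whose vegetation index fell by more than
    drop_threshold between two satellite passes: a toy version of
    change detection over Earth observation data."""
    changed = sum(
        1 for b, a in zip(before, after) if (b - a) > drop_threshold
    )
    return changed / len(before)

# Hypothetical NDVI-like values (0 = bare ground, 1 = dense canopy).
ndvi_before = [0.81, 0.78, 0.80, 0.75, 0.79, 0.82]
ndvi_after = [0.80, 0.30, 0.79, 0.28, 0.78, 0.81]

fraction = deforestation_fraction(ndvi_before, ndvi_after)
print(f"{fraction:.0%} of pixels show canopy loss")  # 33% of pixels...
```

Operational systems correct for clouds, seasons and sensor noise, but the principle is the same: persistent drops in vegetation signal over an area are a red flag worth investigating.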
AI for the natural world as well as humans
Researchers are beginning to consider the ethics of AI research on animals.
If AI is used carelessly, we could actually see worse outcomes for domestic and wild animal species. For example, animal tracking data can be prone to errors if not double-checked by humans on the ground, and can even be hacked by poachers.
AI is ethically blind. Unless we take steps to embed values into this software, we could end up with machines that replicate existing biases. For instance, if there are existing inequalities in human access to water resources, these could easily be recreated in AI tools that would maintain this unfairness. That’s why organisations such as the AI Now Institute are focusing on bias and environmental justice in AI.
In 2019, the EU released ethical guidelines for trustworthy AI. The goal was to ensure AI tools are transparent and prioritise human agency and environmental health.
AI tools have real potential to help us tackle the next pandemic by keeping tabs on viruses and helping us keep nature intact.
But for this to happen, we will have to widen AI outwards, away from the human-centredness of most AI tools, towards embracing the fullness of the environment we live in and share with other species.
We should do this while embedding our AI tools with principles of transparency, equity and protection of rights for all.
Ann Borda, Associate Professor, Melbourne Medical School, The University of Melbourne; Andreea Molnar, Associate Professor, Swinburne University of Technology; Cristina Neesham, Associate Professor of Business Ethics and Corporate Social Responsibility, Newcastle University; and Patty Kostkova, Professor in Digital Health and Director of the UCL Centre of Digital Public Health in Emergencies (dPHE), UCL