This is an online exclusive story.
The next decade is going to deliver a wild ride of awe-inspiring technological advances and occasional dystopian dead ends as artificial intelligence develops at an accelerating pace, according to 305 US-based technology experts.
Pew Research Centre, a well-regarded non-profit based in Washington, DC, has been surveying experts on their expectations of what the next decade of living in the digital world has in store for us.
The latest survey was dominated by predictions of what impacts generative AI, which only exploded into the public sphere with the release of ChatGPT in November 2022, will have on society.
As you’d expect, there’s good news and bad, with a skew towards detrimental outcomes that will need to be addressed early to avoid the technology causing widespread harm. In total, 79% of respondents said they were either more concerned than excited, or equally concerned and excited, about the future impact of tech.
This is how the survey results broke down:
- 42% of these experts said they are equally excited and concerned about the changes in the “humans-plus-tech” evolution they expect to see by 2035.
- 37% said they are more concerned than excited about the changes they expect.
- 18% said they are more excited than concerned about expected change.
- 2% said they are neither excited nor concerned.
- 2% said they don’t think there will be much real change by 2035.
The fact that the biggest group of respondents is equally excited and concerned about what will result from technological change between now and 2035 suggests they expect tangible, transformative change – with generative AI spearheading much of it.
So, where will it benefit humanity? The biggest advances, say the experts, will come in the areas of healthcare and education, both of which are overburdened and under-resourced sectors in most countries.
The pros: healthcare, education, human rights
Experts see an explosion in innovation around digital tools used in medicine, health, fitness, and nutrition.
“We will see a proliferation of AI systems to help with medical diagnosis and research. This may cover a wide range of applications, such as expert systems to detect breast cancer or other X-ray/imaging analysis; protein folding, etc, and discovery of new drugs; better analytics on drug and other testing; and limited initial consultation for doing diagnosis at medical visits,” Rich Salz, principal engineer at Akamai Technologies, told Pew.
“Automated drug discovery will revolutionise the use of pharmaceuticals. This will be particularly beneficial where speed or diversity of development is crucial, as in cancer, rare diseases and antibiotic resistance,” adds Jonathan Stray, senior scientist at the Berkeley Center for Human-Compatible AI.
Ray Schroeder, senior fellow at the University Professional and Continuing Education Association, noted the next 12 years will see a maturing of the relationship between humans and AI that will improve access to education and skill development on a more equitable basis.
“Education will be delivered through AI-guided online adaptive learning for the most part in the first few years, and more radical ‘virtual knowledge’ will evolve after 2030. This will allow global reach and dissemination without limits of language or disability. The ubiquity of access will not limit the diversity of topics that are addressed,” he told Pew.
As well as improving productivity in our personal and working lives – with chatbots writing emails on our behalf and truly intelligent personal assistants taking the admin out of daily life – the experts suggest there is real scope for AI tools to help more people have influence in society’s institutions.
“These experts believe digital tools can be shaped in ways that allow people to freely speak up for their rights and join others to mobilise for the change they seek,” Pew reported.
The cons: inequality, digital overreach, runaway AI
But while the experts eloquently outline the many advances that AI will bring about in the coming years, there are many caveats in the report. The scope for AI entrenching inequality, exacerbating the spread of misinformation, and even destroying humanity is real, they say.
“Some are anxious about the seemingly unstoppable speed and scope of digital tech that they fear could enable blanket surveillance of vast populations and could destroy the information environment, undermining democratic systems with deep fakes, misinformation and harassment,” Pew reports.
Others point to AI’s potential role in surveillance of citizens, “sophisticated bots embedded in civic spaces … advanced facial-recognition systems, and widening social and digital divides as looming threats”.
Daniel S Schiff, assistant professor and co-director of the Governance and Responsible AI Lab at Purdue University, isn’t convinced that moves to regulate AI will prove effective in mitigating the power of the technology.
“Regulatory efforts that aim to centre human rights and well-being may fall somewhat to the banalities of trade negotiations and the power of big technology companies,” he told Pew.
Some of the experts told Pew they echo the concerns of the researchers and technology leaders who in March called for a six-month pause on AI development to give regulators time to figure out how to manage responsible research.
Others fear massive unemployment, the spread of global crime, and further concentration of global wealth and power in the hands of the founders and leaders of a few large companies.
What is Aotearoa doing?
The New Zealand government has had a fairly muted response to the rise of generative AI. The Department of Internal Affairs is in the process of refreshing its guidance to government departments about how they should be using artificial intelligence, but there hasn’t been the rush to lay the groundwork for regulation that we’ve seen in other countries.
The European Union, for instance, has passed the EU AI Act through its parliament. The act would regulate AI according to the risk posed by each use: AI-powered video games would be classified as “minimal risk” and require only that the developer sign up to a code of conduct, while at the other end of the scale, using AI to power a social credit score, such as the system used in China to encourage favourable behaviour in citizens, would be deemed an “unacceptable risk” and banned outright.
Australia is considering introducing regulation that would also impose limits on uses of AI deemed to be high risk.
Our government seems to be taking a wait-and-see approach. In the meantime, the Privacy Commissioner has warned companies deploying AI systems that the Privacy Act applies to the technology, so existing standards for data privacy and security must be observed.
“The Privacy Act is technology-neutral and takes a principle-based approach, meaning the same privacy rights and protections apply to generative AI tools that apply to other activities that use personal information (such as collecting and using personal information via paper or computer),” the Office of the Privacy Commissioner pointed out last month.
David Clark’s vision of the future
Among the experts quoted in the Pew report, David Clark, an Internet Hall of Fame member and senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, has perhaps the most thoughtful take on what digital life in the future could hold – if we take an optimistic view.
“To have an optimistic view of the future, you must imagine several potential positives come to fruition to overcome big issues:
“The currently rapid rate of change slows, helping us to catch up.
“The internet becomes much more accessible and inclusive, and the numbers of the unserved or poorly served become a much smaller fraction of the population.
“Over the next 10 years, the character of critical applications such as social media matures and stabilises, and users become more sophisticated about navigating the risks and negatives.
“Increasing digital literacy helps all users to better avoid the worst perils of the internet experience.
“A new generation of social media emerges, with less focus on user profiling to sell ads, less emphasis on unrestrained virality and more of a focus on user-driven exploration and interconnection.
“And the best thing that could happen is that application providers move away from the advertising-based revenue model and establish an expectation that users actually pay. This would remove many of the distorting incentives that plague the ‘free’ internet experience today.
“Consumers today already pay for content (movies, sports and games, in-game purchases and the like). It is not necessary that the troublesome advertising-based financial model should dominate.”
Here is the full report of Pew Research Centre’s latest survey.