Two-thirds of experts believe AI will make humans better off overall, finds Pew study

AI is a double-edged sword, warn survey respondents

A study conducted by Pew Research Center this summer gathered responses from 979 technology experts, including innovators, developers, policy and business leaders, researchers, activists and other pioneers, regarding the future of AI. Participants were asked to opine on how our increasing dependence on networked artificial intelligence would affect human capacities, empowerment, autonomy and agency by the year 2030. Sixty-three percent of respondents said they believed humans would be better off overall as a result of AI.

The survey respondents were optimistic about AI's potential contributions to healthcare, education and care of senior citizens. They also expressed a belief that smart systems in buildings, vehicles, farms and other settings will save time, money and lives in the future. Yet despite these expected benefits, respondents voiced concerns about the effects of AI on human agency and natural capacities, as well as the potential for data abuse, job loss and general Terminator-like mayhem.

“Black box” AI will increasingly shut out humans from decision-making processes

Experts worry that the opaque nature of the algorithms that increasingly govern our daily lives will erode human oversight and choice. Not only do we sacrifice privacy in order to benefit from the convenience of these algorithms, but we also sacrifice our independence when we rely on them to make our choices for us. As automated systems become more complex, we will be shut out of their decision-making processes to an ever greater degree.

While AI is touted as a supplement to human capacities, some of the survey respondents warned that people who rely too much on technology might lose the ability to think for themselves. Some went so far as to say that our cognitive, social and survival skills risk being eroded by our dependence on AI. Bart Knijnenburg, an assistant professor of computer science active in the Human Factors Institute at Clemson University, noted that algorithms “follow our behavioral patterns, which may not be aligned with our preferences. The algorithms behind these tools need to support human agency, not replace it.”

AI assistants like Siri and Alexa are still mired in the “narrow” artificial intelligence phase – capable of responding to simple requests but highly reliant on the programmers who created them. Artificial “general” intelligence, which would function like a human mind in its own right, could remove the need for human involvement in certain areas if it is ever developed and perfected.

As one of the survey respondents put it, “The most-feared reversal in human fortune of the AI age is loss of agency. The trade-off for the near-instant, low-friction convenience of digital life is the loss of context about and control over its processes. People’s blind dependence on digital tools is deepening as automated systems become more complex and ownership of those systems is by the elite.”

Misuse of big data another concern as AI moves forward

The control of the elite over these systems brings up the oft-voiced issue of the potential misuse of individuals' data. There are major privacy issues with AI-infused surveillance systems, such as the facial recognition cameras that are already prevalent in many public spaces in China. While the purpose of collecting data is ostensibly to improve efficiency, the data is also a huge source of profit and is vulnerable to misuse. In an age when we must grant AI insight into our personal lives in order to use social networks, payment systems and the like, it's crucial that we invest serious effort in defining and enforcing ethical boundaries for big data.

AI is encroaching on every aspect of our personal lives: what can we do about it?

Respondent after respondent expressed a fear that algorithms would negatively affect the way we conduct ourselves in our daily lives. Thomas Schneider, head of the International Relations Service and vice-director at the Federal Office of Communications (OFCOM) in Switzerland, said, “With more and more ‘recommendations,’ ‘rankings’ and competition through social pressure and control, we may risk a loss of individual fundamental freedoms (including but not limited to the right to a private life) that we have fought for in the last decades and centuries.” Futurist Baratunde Thurston stated, “For the record, this is not the future I want, but it is what I expect given existing default settings in our economic and sociopolitical system preferences.”

Given that the advance of AI appears unstoppable, is there a way to steer it along a healthier route? Thurston noted, “By 2030, we may cram more activities and interactions into our days, but I don’t think that will make our lives ‘better.’ A better life, by my definition, is one in which we feel more valued and happy. Given that the biggest investments in AI are on behalf of marketing efforts designed to deplete our attention and bank balances, I can only imagine this leading to days that are more filled but lives that are less fulfilled. To create a different future, I believe we must unleash these technologies toward goals beyond profit maximization. Imagine a mapping app that plotted your work commute through the most beautiful route, not simply the fastest. Imagine a communications app that facilitated deeper connections with people you deemed most important. These technologies must be more people-centric.”

Is it possible to live in a world where sophisticated AI and human autonomy co-exist?