This Expert Thinks Human Brains Can Be Hacked If We Don’t Regulate AI


  • Historian Yuval Noah Harari told 60 Minutes he believes human brains will be “hacked,” and soon, saying artificial intelligence will use data to manipulate users into doing its bidding.

  • Harari offers three principles to help ensure that mass data gathering does not harm the humans it tracks.

Speaking to CBS’s 60 Minutes, Yuval Noah Harari, author of the bestselling book Sapiens, said that human brains will be hacked soon if we don’t figure out how to regulate artificial intelligence. Harari’s comments certainly speak to humanity’s worst fears about AI, but could he be right?

The hacking Harari describes is more proverbial than literal. “To hack a human being is to get to know that person better than they know themselves. And based on that, to increasingly manipulate you,” Harari said on 60 Minutes. Arguably, this is already happening on platforms like Facebook and YouTube, where recommendation systems feed people ever more enticing and extreme content to keep them watching.

In the world of cybercrime and hacking, this kind of manipulation falls under the category of social engineering. But there’s a key difference: social engineering traditionally requires a human manipulator. Harari is suggesting that AI itself will be able to socially engineer unsuspecting human beings. He told 60 Minutes that he believes the vast amount of data being gathered now will lead to ever more powerful algorithms that tell us everything from what to study to whom to marry.
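To make the dynamic concrete, here is a minimal toy sketch, not any real platform’s system, of a feed ranker whose only objective is predicted engagement. The Item class, the watch-time numbers, and the rank_feed function are all invented for illustration; the point is simply that when the scoring signal is attention alone, the most gripping (and often most extreme) content floats to the top.

```python
# Toy illustration only -- not any platform's actual ranking system.
# All names and numbers below are invented for the sketch.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # hypothetical model output


def rank_feed(items: list[Item]) -> list[Item]:
    # Sorting solely on predicted engagement is the dynamic Harari
    # describes: the objective optimizes for attention, not user benefit.
    return sorted(items, key=lambda i: i.predicted_watch_seconds, reverse=True)


feed = [
    Item("Measured policy explainer", 45.0),
    Item("Outrage-bait conspiracy clip", 210.0),
    Item("Cat video", 90.0),
]

for item in rank_feed(feed):
    print(f"{item.predicted_watch_seconds:>6.1f}s  {item.title}")
```

Run it and the outrage-bait clip lands first every time, because nothing in the objective distinguishes “engaging” from “manipulative.”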

Harari wasn’t specific about the technology he fears, speaking instead about data gathering and powerful algorithms in general. IBM details the history of artificial intelligence, including legendary computer scientist Alan Turing’s question “Can machines think?” IBM explains: “At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving.”

So “robust datasets” are right there in the definition. Today, many of us carry around smartphones and watches that document everything we do and interact with, down to how we move our limbs during the day and sleep at night. And now, with brain-machine interfaces like Neuralink, at least some people seem eager to shrink the distance between themselves and the data collection even further.

Harari suggested three principles to help ensure these technologies operate safely and without a frightening concentration of power. First, data must be used to help people rather than to manipulate them. Next, any corporation or group with the power to surveil must itself be surveilled to make sure that power is used responsibly. Finally, the data must not be concentrated in one place, which Harari said leads to dictatorship.

Harari teaches at the Hebrew University of Jerusalem, where his work spans history and philosophy. That means that, while his insights and speculation carry a certain amount of weight, he’s not a scientist and doesn’t work hands-on with any kind of artificial intelligence. He cited a time frame of “10 or 20 or 30 years” for some of the problems he described, a time frame that already looks overly generous for some of them even as others remain distant. Facebook, for example, already lets advertisers peddle misinformation to manipulate users.

Where does all of this leave us? That’s a tough question to answer, partly because of the broad nature of Harari’s comments. He was also careful to point out that the same data power could help solve many of humanity’s most entrenched problems. The key takeaway may be Harari’s suggestion that companies holding huge amounts of user data should be supervised just as carefully to make sure they use that data responsibly. It’s easy to imagine such supervision making a difference 15 years ago, when Facebook was still new.
