The Risk of AI

Our data identifies us, defines us, and allows us to be tracked. As it stands today, we can’t have any expectation of privacy, nor any ability to control where all that data goes or who has access to it.

Now imagine you feed all that data, willingly, to the current incarnations of artificial intelligence (AI). Yes, modern AIs, like ChatGPT and others, are getting smarter. Or rather, they are learning to get smart.

The researchers pouring time and money into making these systems faster, bigger, all-reaching, and all-“knowing” are, in my opinion, not taking the time to threat-model where this is leading us. Understanding the risks of feeding more and more real-world data into AIs, and of trusting these new incarnations of software to solve actual problems, has to be a priority.

Do we really know what could happen as more data gets fed into an AI? What if some of that data is malicious? What can the AI do with malicious data? What about intellectual property stolen in a company breach? What if a bad actor uses AIs to comb the virtually unlimited amount of online data about people and businesses, profiling them and selecting targets based on ease of attack? Or worse, what if they ask the AIs where the most damage could be inflicted within those targets...

AI can help bad actors automate and rapidly iterate the targeting loop of the three Fs: find a target, fix its location, finish the attack.

Like all technology, AI can serve both sides. There is no way to stop that, but the more we let it slide, the more it will lean towards helping the dark side. That’s the nature of technology, because that’s human nature.

Nature is asymmetric, unbalanced, and biased. It will affect AI as well.

I think it’s time we pause, all of us: researchers, companies, and users alike, and truly assess what can go wrong. We need to weigh the threats seriously and decide whether we are ready to accept what would probably come next and live with it.

I am personally terrified of where this is going. As smart as we “think” we are, we are often not smart enough to think ahead and see things for what they are.

Those are just a few of the threats presented by the use of AIs that come to mind.

So, what can we do? A good question without an easy answer.

It might already be too late to do anything. But maybe the scientists and corporations creating and training these AIs, making them smarter with each iteration, can themselves get smarter: understand the risks their creations already present to all of us, then look into the future and see the risks that await us if we don’t do something now.

I think it’s time for the security community to come together and draft a set of rules and regulations for a world that will be controlled by AI.

Or, as Daniel Miessler put it:

“We can’t curmudgeon our way to protecting users.
We need to get out front, and do our best to clear the way.”

We will see.
