The world right now is all about computers. From big to small, there is a tech gadget sure to capture your fancy, promising to make your life easier and bring in more fun. The gadgets we use keep getting smaller and cooler, and they pack a lot of power too. You can do virtually everything with a smart device, and with the growing clout of the Internet of Things, many gadgets are now connected to the web in ways you never thought possible. One might think we have already reached the pinnacle of innovation, but far from it. We are actually just warming up, with news about AI continuing to hover on the horizon.
Full-blown AI technology is still a way off, but various gadgets and technologies are already powered by artificial intelligence without you actually knowing it. Automated objects and other smart devices are often powered by AI, and some vehicles are run by AI too. These systems really come in handy, especially in speeding up processes in the fields of medicine and security, yet they have one major flaw: the neural networks operating this technology can be fooled.
An input doctored with just a couple of altered pixels to fool an AI is called an adversarial example, and potential attackers can use such examples to trick or confuse an AI. For the first time, researchers have tricked an AI into thinking a real-world object, in this case a 3D-printed turtle, is a rifle.
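Those pixel-level tweaks can be illustrated in miniature. The sketch below is purely an invented toy: a single logistic unit stands in for the classifier, and the weights, input values, and step size are all made up. Real attacks target deep networks, but the trick is the same fast-gradient-sign idea of nudging each pixel a small step in the direction that flips the decision:

```python
import numpy as np

# Invented stand-in for an image classifier: one logistic unit over
# four "pixel" values. Real adversarial examples target deep networks.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.1

def predict(x):
    """Return the predicted class (0 or 1) and the logistic score."""
    score = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return int(score > 0.5), score

x = np.array([0.6, 0.1, 0.4, 0.3])  # the "clean" input
label, _ = predict(x)               # the model calls this class 1

# Fast-gradient-sign-style step: move each pixel a small step in the
# direction that lowers the score (for this linear model the gradient
# of the score with respect to x is proportional to w).
eps = 0.3
x_adv = x - eps * np.sign(w)

adv_label, _ = predict(x_adv)       # the same model now says class 0
```

No pixel changes by more than `eps`, yet the label flips, which is exactly the "few tweaks" property that makes these attacks hard to spot.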
The concept of an adversarial object attack in the real world was only ever considered theoretical by researchers in the field. Now, LabSix, an AI research group at the Massachusetts Institute of Technology, has demonstrated the first example of a real-world 3D object becoming “adversarial” at any angle. The paper that describes the work was authored by Anish Athalye, Logan Engstrom, Kevin Kwok, and Andrew Ilyas.
Adversarial examples are effective at fooling an AI simply through a few tweaks to its input. And with the way the world is going now, many crimes happen online, where the average Joe is often clueless until hit with a major virtual headache like ransomware. Experts in the field have conducted experiments proving that adversarial examples can trick an AI into seeing whatever the attacker wants. Knowing this can happen should raise warning flags for the tech industry as a whole, especially as cybercrime grows and more people use the web and technology in general, so there is a lot of risk involved.
“It’s actually not just that they’re avoiding correct categorization — they’re classified as a chosen adversarial class, so we could have turned them into anything else if we had wanted to,” researcher Anish Athalye told Digital Trends. “The rifle and espresso classes were chosen uniformly at random. The adversarial examples were produced using an algorithm called Expectation Over Transformation (EOT), which is presented in our research paper. The algorithm takes in any textured 3D model, such as a turtle, and finds a way to subtly change the texture such that it confuses a given neural network into thinking the turtle is any chosen target class.”
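The idea Athalye describes can be sketched in a loose, invented miniature. The snippet below is not LabSix's actual EOT algorithm or model: a tiny linear "classifier" stands in for the neural network, random brightness changes stand in for pose and lighting variation, and the real algorithm's constraint on how visible the texture change is has been dropped. What it does preserve is the core move of optimizing the texture against the *expectation* of the classifier's output over sampled transformations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy setup: a linear classifier over a flattened 8-value
# "texture" and 3 classes. Nothing here is LabSix's real model.
n_classes, n_tex = 3, 8
W = rng.normal(size=(n_classes, n_tex))

def classify(x):
    return int(np.argmax(W @ x))

def sample_transform(rng):
    """Stand-in for EOT's pose/lighting variation: a random
    brightness scale and a random brightness offset."""
    return rng.uniform(0.8, 1.2), rng.uniform(-0.2, 0.2)

x = rng.uniform(0.0, 1.0, size=n_tex)   # the "clean" texture
target = (classify(x) + 1) % n_classes  # a chosen target class

# EOT-style loop: raise the target class's margin *in expectation over
# transformations*, so the attack survives viewpoint changes rather
# than working from only one fixed view.
x_adv = x.copy()
for _ in range(100):
    grad = np.zeros(n_tex)
    for _ in range(5):
        s, t = sample_transform(rng)
        sc = W @ (s * x_adv + t)
        sc[target] = -np.inf            # find the strongest competitor
        rival = int(np.argmax(sc))
        grad += s * (W[target] - W[rival])  # gradient of the margin
    x_adv += 0.05 * grad / 5

# The adversarial texture should now be labelled `target` even under
# fresh random transforms it was never optimized against.
hits = 0
for _ in range(20):
    s, t = sample_transform(rng)
    if classify(s * x_adv + t) == target:
        hits += 1
```

Averaging the gradient over sampled transforms is what distinguishes this from an ordinary adversarial example: a perturbation tuned to a single view tends to break as soon as the object is rotated or relit.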
While it might be funny to have a 3D-printed turtle recognized as a rifle, the researchers point out that the implications are pretty darn terrifying. Imagine, for instance, a security system that uses AI to flag guns or bombs but can be tricked into thinking they are tomatoes, or cups of coffee, or even entirely invisible. It also underlines a frailty in the kind of image recognition systems self-driving cars will rely on, at high speed, to discern the world around them.
If someone told you that a turtle is a gun, how would you react? It's crazy, right? But that is exactly how an AI will see it with an adversarial example at work. So you see, the technology is not entirely foolproof because of this major technical glitch. Many human processes and devices now run on AI, which has major implications once these systems are hacked by sneaky, skilled cybercriminals who use adversarial examples on them for their own gain. It can spell disaster on every level. It is a major cause for concern in the medical field and in safety protocols, as machines running on AI can be tricked into thinking a potential hazard is something else entirely and simply dismiss it.
We may be heading deeper into the realms of deep learning and more complex technologies, but the ones we are using now also give us headaches in day-to-day use, often involving data loss. When confronted with such a problem, it pays to be knowledgeable about the following, as it can be helpful in recovering your RAID data.