Deep learning has come a long way since the days when it could only recognize handwritten characters on checks and envelopes. Today, deep neural networks have become a key component of many computer ...
Adversarial attacks are an increasingly worrisome threat to the performance of artificial intelligence applications. If an attacker can introduce nearly invisible alterations to image, video, speech, ...
Imagine the following scenarios: An explosive device, an enemy fighter jet and a group of rebels are misidentified as a cardboard box, an eagle, or a herd of sheep. A lethal autonomous weapons system ...
You’re probably familiar with deepfakes, the digitally altered “synthetic media” that’s capable of fooling people into seeing or hearing things that never actually happened. Adversarial examples are ...
Artificial intelligence is a key technology for self-driving vehicles. It is used for decision-making, sensing, predictive modeling and other tasks. But how vulnerable are these AI systems to an ...
The algorithms that computers use to determine what objects are (a cat, a dog, or a toaster, for instance) have a vulnerability called an adversarial example. It's an image or ...
An autonomous train is barreling down the tracks, its cameras constantly scanning for signs that indicate things like how fast it should be going. It sees one that appears to require the train to ...
Adversarial images represent a ...
We’ve touched previously on the concept of adversarial examples: the class of tiny input changes that cause a deep-learning model to misbehave. In March, we covered UC Berkeley professor ...
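The "tiny changes" idea described above can be sketched concretely with the fast gradient sign method (FGSM), one well-known way of crafting adversarial perturbations. This is a minimal, illustrative sketch on a toy linear model, not the method used by any system mentioned in these articles; every name and number below is invented for the example.

```python
import numpy as np

# Toy "model": logistic regression with fixed random weights.
# (Hypothetical setup for illustration only.)
rng = np.random.default_rng(0)
w = rng.normal(size=100)  # fixed model weights

def predict(x):
    """Probability the model assigns to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# A clean input the model confidently places in class 0.
x = -0.1 * np.sign(w) + 0.05 * rng.normal(size=100)

# For a linear model, the gradient of the class-1 score with respect
# to the input is just w, so FGSM nudges every coordinate a small
# step epsilon in the direction sign(gradient).
epsilon = 0.2
x_adv = x + epsilon * np.sign(w)

print(predict(x))      # close to 0: classified as class 0
print(predict(x_adv))  # close to 1: the per-pixel nudge flips the label
```

Each coordinate moves by at most `epsilon`, yet the prediction flips entirely; in image models the same effect appears with perturbations far too small for a human to notice.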