Digital assistants built into phones and computers are rapidly gaining popularity because they let users communicate by voice, without a keyboard or any other physical interface. At the same time, however, their technical flaws have begun to surface.
For example, last year an American, Megan Neitzel, received a package at home containing a $170 dollhouse and almost two kilograms of cookies. It turned out the order had been placed accidentally by her six-year-old daughter: she simply asked Alexa for a dollhouse and cookies, and the assistant interpreted this as a command to buy. Then, in May of this year, an Amazon Echo recorded a private conversation of an American family and sent the audio to a person from the owner's contact list. The recipient, a colleague of the head of the family, was so surprised by the recording that he assumed his colleague's speaker had been hacked.
Such incidents highlight how voice assistants lack the security layers and safeguards that make traditional devices secure, not to mention that they are, in effect, always-on microphones, picking up the slightest sound vibrations day and night. On the one hand, they are always ready to respond to voice commands; on the other, they pose a privacy threat.
Researchers identify five main ways that could potentially give hackers access to voice assistants.
When studying the security of AI and machine-learning systems, researchers turn to so-called adversarial attacks: small, targeted changes to an input, such as an image, that are invisible to the human eye but force the system to recognize something completely different. Scientists have already modeled algorithms that disrupt the vision of self-driving cars, robots, multicopters, and other robotic systems that try to navigate their surroundings.
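The article does not name a specific algorithm, but one of the best-known examples of this technique is the Fast Gradient Sign Method (FGSM). Below is a minimal PyTorch sketch of the idea; the model, input tensor, and epsilon value are illustrative assumptions, not details from the incidents above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.01):
    # Compute the loss gradient with respect to the input itself,
    # not the model weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Nudge each input element by +/- epsilon in the direction that
    # increases the loss. Per element the change is imperceptible,
    # yet it can flip the model's prediction.
    return (x + epsilon * x.grad.sign()).detach()
```

Called as, say, fgsm_perturb(classifier, image_batch, labels), this returns an input that differs from the original by at most epsilon per element, which is why a human sees (or hears) essentially the same thing while the model does not.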
1. Substitute/hidden audio commands