Page 1 of 1

We may also see large AI-powered

Posted: Sun Feb 09, 2025 7:32 am
by relemedf5w023
Furthermore, generative AI will turn cybercriminals into better scammers. AI will help attackers create well-written, convincing phishing emails and websites in multiple languages, allowing them to expand their campaigns across multiple geographies. We expect the quality of social engineering attacks to improve, and the lures to be harder to detect for both target groups and security services. As a result, we may see an increase in the risks and harm associated with social engineering, from fraud to network intrusions.

phishing campaigns timed to coincide with major events such as sporting events (e.g., the Paris Olympics, the Champions League) or shopping events (e.g., Black Friday sales). As AI-generated emails russia mobile database virtually indistinguishable from legitimate ones, relying on employee training alone to protect users is not enough. Instead, security teams should consider isolation technologies such as micro-virtualization, which do not rely on detection to protect employees. This technology opens risky files and links in isolated virtual environments, preventing malware and software exploits — even zero-day threats — from infecting devices.

#2. Local LLMs
As computing power increases, the next generation of “AI PCs” will be able to run local large language models (LLMs) without having to rely on powerful external servers. This will allow PCs and users to take full advantage of AI, changing the way people interact with their devices.

On-premises LLMs promise to improve efficiency and performance, as well as provide security and privacy by operating independently of the Internet. However, on-premises models and the sensitive data they handle can make endpoints a big target for attackers if not properly protected.

Moreover, many companies are implementing chatbots built on LLM to improve the level and scale of customer service. However, AI technology can create new information security and privacy risks, such as potentially exposing sensitive data. This year, we may see cybercriminals attempt to manipulate chatbots to bypass security measures and gain access to sensitive information.