Warnings Against the Use of Artificial Intelligence in Cyberattacks

dzwatch

Microsoft and OpenAI have revealed that hackers are already using large language models (LLMs) such as ChatGPT to refine and improve their existing cyberattacks, according to The Verge. The companies identified attempts by groups backed by Russia, North Korea, Iran, and China to use tools like ChatGPT to research targets, improve scripts, and develop social engineering techniques.

In a blog post, Microsoft wrote: “Cybercriminal groups and nation-state threat actors, among other adversaries, are exploring and testing various AI technologies as they emerge, in an attempt to understand the potential benefits for their operations and the security controls they may need to circumvent.”

Microsoft further revealed that the Strontium group, associated with Russian military intelligence, has been using large language models to understand satellite communication protocols, radar imaging techniques, and specific technical standards.

This hacking group, also known as “APT28” or “Fancy Bear,” has been active during Russia’s war in Ukraine and was previously involved in targeting Hillary Clinton’s presidential campaign in 2016, as reported by The Verge.

The use of AI in cyberattacks poses significant challenges for cybersecurity and points to the need for more advanced defensive measures to counter these evolving threats. As AI technologies advance, so does the sophistication of cybercriminal tactics, underscoring the importance of continuous innovation in cybersecurity defenses.

For more insights into cybersecurity trends, visit dzwatch.net.
