Hackers can read your encrypted AI-assistant chats

Researchers at Ben-Gurion University have discovered a vulnerability in cloud-based AI assistants like ChatGPT. According to the researchers, the vulnerability allows hackers to intercept and decrypt conversations between people and these AI assistants.

The researchers found that chatbots such as ChatGPT stream their responses as a sequence of small tokens, sent one at a time so replies appear faster. But because each token travels in its own encrypted packet, hackers who intercept the traffic can analyze the length, size, and sequence of those packets to infer the content of the responses.
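To make the idea concrete, here is a minimal sketch of the length side channel, assuming each token is sent in its own encrypted record and that encryption adds a fixed per-record overhead (the sizes and overhead value below are invented for illustration, not taken from the research):

```python
# Illustrative sketch only: an eavesdropper never sees plaintext, only the
# size of each intercepted ciphertext record. If every token rides in its
# own record and the cipher preserves plaintext length plus a constant
# overhead, subtracting that overhead reveals each token's length.

RECORD_OVERHEAD = 29  # assumed fixed bytes of record header + auth tag

def token_lengths(ciphertext_sizes: list[int]) -> list[int]:
    """Recover the length of each streamed token from observed record sizes."""
    return [size - RECORD_OVERHEAD for size in ciphertext_sizes]

# Hypothetical captured record sizes from a sniffed session:
observed = [33, 31, 34, 36, 30]
print(token_lengths(observed))  # -> [4, 2, 5, 7, 1]
```

That sequence of lengths is the signal the researchers then fed to a model trained to guess the most likely words behind each length pattern.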

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University, told Ars Technica in an email.

“This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or the client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

“Our investigation into the network traffic of several prominent AI assistant services uncovered this vulnerability across multiple platforms, including Microsoft Bing AI (Copilot) and OpenAI’s ChatGPT-4. We conducted a thorough evaluation of our inference attack on GPT-4 and validated the attack by successfully deciphering responses from four different services from OpenAI and Microsoft.”

According to the researchers, there are two main mitigations: either stop sending tokens one by one, or “pad” each token to the length of the largest possible packet, which would make the packet sizes uninformative and the responses far harder to analyze.
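A minimal sketch of the padding mitigation, under the assumption that every token is padded to one fixed maximum size before encryption so all records look identical on the wire (the bound below is an arbitrary choice for illustration):

```python
# Sketch of constant-size padding: if every token encrypts to the same-size
# record, the length side channel from the attack above disappears,
# at the cost of extra bandwidth.

MAX_TOKEN_BYTES = 32  # assumed upper bound on token length

def pad_token(token: str) -> bytes:
    """Pad a token to a constant size so its true length is hidden."""
    raw = token.encode("utf-8")
    if len(raw) > MAX_TOKEN_BYTES:
        raise ValueError("token exceeds the padding bound")
    return raw + b"\x00" * (MAX_TOKEN_BYTES - len(raw))

for t in ["Hello", ",", " world"]:
    print(len(pad_token(t)))  # always 32, regardless of the token
```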

Featured image: Image generated by Ideogram


