The FBI and CISA dismiss false claims of compromised voter registration data. The State Department accuses RT of running global covert influence operations. Chinese hackers are suspected of targeting a Pacific Islands diplomatic organization. A look at Apple’s Private Cloud Compute system. 23andMe will pay $30 million to settle a lawsuit over a 2023 data breach. SolarWinds releases patches for vulnerabilities in its Access Rights Manager. Browser kiosk mode frustrates users into giving up credentials. Brian Krebs reveals the threat of growing online “harm communities.” Our guest is Elliot Ward, Senior Security Researcher at Snyk, sharing insights on prompt injection attacks. How theoretical is the Dead Internet Theory?
Remember to leave us a 5-star rating and review in your favorite podcast app.
Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest
Our guest is Elliot Ward, Senior Security Researcher at Snyk, sharing insights on their recent work "Agent Hijacking: the true impact of prompt injection attacks."
Selected Reading
FBI tells public to ignore false claims of hacked voter data (Bleeping Computer)
Russia’s RT news agency has ‘cyber operational capabilities,’ assists in military procurement, State Dept says (The Record)
The Dark Nexus Between Harm Groups and ‘The Com’ (Krebs on Security)
China suspected of hacking diplomatic body for Pacific islands region (The Record)
Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works (WIRED)
Apple seeks to drop its lawsuit against Israeli spyware pioneer NSO (Washington Post)
23andMe settles data breach lawsuit for $30 million (Reuters)
SolarWinds Patches Critical Vulnerability in Access Rights Manager (SecurityWeek)
Malware locks browser in kiosk mode to steal Google credentials (Bleeping Computer)
Is anyone out there? (Prospect Magazine)
Share your feedback.
We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.
Want to hear your company in the show?
You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.
Learn more about your ad choices. Visit megaphone.fm/adchoices
(phone ringing) - You're listening to the Cyberwire Network, powered by N2K. (upbeat music) - Hey everybody, Dave here. Have you ever wondered where your personal information is lurking online? Like many of you, I was concerned about my data being sold by Databrokers. So I decided to try delete me. I have to say delete me is a game changer. Within days of signing up, they started removing my personal information from hundreds of Databrokers. I finally have peace of mind, knowing my data privacy is protected. Delete me's team does all the work for you with detailed reports, so you know exactly what's been done. Take control of your data and keep your private life private by signing up for delete me. Now at a special discount for our listeners, today get 20% off your delete me plan when you go to joindeleteme.com/N2K and use promo code N2K at checkout. The only way to get 20% off is to go to joindeleteme.com/N2K and enter code N2K at checkout. That's joindeleteme.com/N2K code N2K. (upbeat music) (upbeat music) (upbeat music) The FBI and SISA dismiss false claims of compromised voter registration data. The State Department accuses RT of running global covert influence operations. Chinese hackers are suspected of targeting a Pacific Islands diplomatic organization. A look at Apple's private cloud compute system. 23andMe will pay $30 million to settle a lawsuit over a 2023 data breach. SolarWinds releases patches for vulnerabilities in its access rights manager, browser kiosk mode frustrates users and to giving up credentials. Brian Krebs reveals the threat of growing online harm communities. Our guest is Elliot Ward, senior security researcher at SNCC sharing insights on prompt injection attacks and how theoretical is the dead internet theory. (upbeat music) (upbeat music) It's Monday, September 16th, 2024. I'm Dave Bittner and this is your CyberWire Intel Briefing. (upbeat music) (upbeat music) Thanks for joining us here today. It is great as always to have you with us. The FBI and SISA are warning the public about false claims that US voter registration data has been compromised in cyber attacks. According to the agencies, malicious actors are spreading disinformation to manipulate public opinion and undermine trust in democratic institutions. These actors often use publicly available voter registration data to falsely claim that election infrastructure has been hacked. However, possessing or sharing such data does not indicate a security breach. The FBI and SISA emphasize that there is no evidence of cyber attacks affecting US election infrastructure, voting processes or results. They advise the public to be cautious of suspicious claims, especially on social media and to rely on official sources for accurate election information. As elections approach, the agencies are increasing awareness about efforts by foreign actors to erode confidence in US elections, though no attacks have been shown to compromise election integrity. The US State Department has accused Russian media outlet RT of running covert influence operations globally supported by a cyber unit linked to Russian intelligence. Secretary of State Antony Blinken revealed that in early 2023, this cyber unit was embedded within RT with the leadership's knowledge. The unit gathers intelligence for Russian state entities and helps procure military supplies for Russia's war in Ukraine through a crowdfunding campaign. RT's influence operations extend beyond the US targeting countries like Moldova, where Russia allegedly aims to incite unrest if pro-Russian candidates lose in elections. Blinken also highlighted RT's influence via platforms like Africa Stream and Red used to spread Kremlin narratives. The US, UK and Canada have launched a joint campaign against Russian disinformation and imposed sanctions on Russian media. The State Department warned that these operations aim to manipulate democratic elections and destabilize societies globally. Chinese state-sponsored hackers are suspected of breaching the Pacific Islands Forum Secretariat's network, HIF, a regional diplomatic body in Fiji. According to ABC News, Australia's government sent cybersecurity specialists to Suva after discovering the intrusion, HIF Secretary General Baron Waka confirmed the cyber attack, though no specific threat actor has been officially identified. The breach, occurring months before a PIF meeting provided attackers with information on PIF operations and communications between member states. China denied involvement following controversy at the PIF meeting over Taiwan's inclusion as a developing partner, which Beijing opposes. The cyber attack is part of rising regional tensions with Beijing increasing its influence among Pacific nations. Australia has responded by bolstering regional cybersecurity efforts, including signing defense agreements with countries like Vanuatu and deploying cyber specialists to counter China-linked incidents. In a story for Wired, Lily Hay Newman examines Apple's approach to privacy with the introduction of Apple intelligence in iOS 18 and macOS Sequoia. Apple's approach stands out due to its focus on security-first infrastructure, particularly through its private cloud compute system, or PCC, Apple built custom servers running Apple Silicon with a unique operating system, blending iOS and macOS features. These servers prioritize user privacy by operating without persistent storage, meaning no data is retained after a reboot. Each server boot generates a new encryption key, ensuring that previous data is cryptographically irrecoverable. PCC servers also leverage Apple's secure enclave for encryption management and secure boot for system integrity, unlike typical cloud platforms, which allow administrative access in emergencies, Apple has eliminated privileged access in PCC, making the system virtually unbreakable from within. Additionally, Apple implemented strict code verification through its trusted execution monitor, locking down servers so no new code can be loaded once the system boots, significantly reducing attack vectors. Apple's transparency measures are also unique. Each PCC server build is publicly logged and auditable, ensuring that no rogue servers can process user data without detection. Apple has engineered its cloud system to minimize reliance on policy-based security and instead uses technical enforcement. This highly secure, on-device processing approach, paired with minimal cloud exposure, defines Apple's cloud architecture as one of the most privacy-focused in the industry. In unrelated Apple news, Cupertino has requested the dismissal of its lawsuit against spyware firm NSO Group, citing challenges in obtaining critical files related to NSO's Pegasus tool. The company expressed concerns that Israeli officials who seized files from NSO could hinder discovery. Apple also warned that disclosing its security strategies to NSO's lawyers could expose them to hacking, potentially aiding NSO and its competitors. Since the lawsuit began, NSO has declined in influence with many employees leaving to join or start competing firms. While Pegasus spyware was once notorious for targeting dissidents and journalists, US sanctions have severely limited NSO's reach. Apple has strengthened its threat detection capabilities, notifying users targeted by spyware and collaborating with organizations like CitizenLab to expose hacking operations. Its introduction of Lockdown mode has also enhanced iPhone security with no successful commercial spyware attacks reported against it. 23 and me will pay $30 million and provide three years of security monitoring to settle a lawsuit over a 2023 data breach affecting 6.9 million customers. The breach exposed sensitive genetic information with hackers specifically targeting individuals of Chinese and Ashkenaji Jewish ancestry. The settlement, which requires court approval, includes cash payments and security monitoring for affected customers. 23 and me, facing financial difficulties, expects $25 million of the settlement to be covered by cyber insurance. The breach impacted 5.5 million DNA relatives profiles and 1.4 million family tree users. SolarWinds has released patches for two vulnerabilities and its Access Rights Manager, including a critical bug with a CVSS score of 9.0. This flaw allows unauthenticated attackers to execute arbitrary code remotely via deserialization of untrusted data. The second vulnerability involves hard-coded credentials that could let attackers bypass authentication for the RabbitMQ management console. Both vulnerabilities were reported by Piotr Bazzidlo of Trend Micro's zero-day initiative and are resolved in version 2024.3.1. No exploitation in the wild has been reported. A malware campaign discovered by O.A. Labs uses a browser's kiosk mode to trap users on a Google login page, frustrating them into entering their credentials, which are then stolen by the Steel C Info Stealer. The malware blocks the escape and F11 keys, preventing users from easily exiting the browser. Users hoping to unlock their systems may save their credentials in the browser, which Steel C then retrieves from the credential store. This attack is primarily delivered by the Amadeh malware, which has been active since 2018. To escape, users can try keyboard shortcuts like Alt-F4 or Control-Alt-Delete to close the browser. If unsuccessful, a hard reset or safe mode reboot is recommended, followed by a malware scan to remove the infection. Crabs on security's analysis of the 2023 cyber attack on Las Vegas casinos sheds light on a troubling evolution in the cyber criminal landscape. The attack, which temporarily shut down MGM resorts, was linked to the Russian ransomware group Alpha Black Cat. However, what makes this incident particularly significant is the involvement of young English-speaking hackers from the US and UK, marking the first known collaboration of this kind with Russian ransomware groups. One of the key figures in the MGM hack was a 17-year-old from the UK who explained how the breach occurred. Using social engineering, the hackers tricked MGM staff into resetting the password for an employee account, which ultimately led to the disruption of casino operations. Cybersecurity firm CrowdStrike later dubbed the group responsible as "scattered spider" due to the decentralized nature of its members who are spread across various online platforms such as Telegram and Discord. Crabs discovered that many of these young hackers are not only involved in financially motivated cybercrime but are also part of growing online communities that engage in far more dangerous activities. These groups, collectively known as "The Calm" serve as forums where cyber criminals collaborate, boast about their exploits, and compete for status within the community. However, beyond financial crime, these groups are increasingly associated with harassment, stalking, and extortion, often targeting vulnerable teens. In some cases, victims are pushed to commit extreme acts, including self-harm, harming family members, or even suicide. According to court records and investigative reporting, members of these groups have also been involved in real-world crimes, including robberies, swatting, and even murder. Crabs notes that these cyber criminal communities are becoming more widespread and are recruiting new members through gaming platforms and social media. The growing threat from these harm communities has even prompted law enforcement agencies to consider using anti-terrorism laws to prosecute their members, as the activities they engage in often involve violent extremism. However, as Krebs points out, applying terrorism statutes to cyber crime can be legally challenging and may not always result in convictions. Ultimately, the analysis reveals that the 2023 MGM hack was just the tip of the iceberg. Beneath the surface, a much darker cyber criminal ecosystem is emerging. We are financial crime, harassment, and violence intersect, raising concerns about the broader implications of these growing online communities. Coming up after the break, our guest is Elliot Ward from SNEAK, sharing insights on prompt injection attacks. Stay with us. (upbeat music) And now, a word from our sponsor, No Before. It's all connected, and we're not talking conspiracy theories. When it comes to InfoSec tools, effective integrations can make or break your security stack. The same should be true for security awareness training. No Before, provider of the world's largest library of security awareness training provides a way to integrate your existing security stack tools to help you strengthen your organization's security culture. No Before's security coach uses standard APIs to quickly and easily integrate with your existing security products from vendors like Microsoft, CrowdStrike, and Cisco, 35 vendor integrations and counting. Security Coach analyzes your security stack alerts to identify events related to any risky security behavior from your users. Use this information to set up real-time coaching campaigns targeting risky users based on those events from your network, endpoint, identity, or web security vendors. Then, coach your users at the moment the risky behavior occurs, with contextual security tips delivered via Microsoft Teams, Slack, or email. Learn more at nobefore.com/securitycoach. That's nobefore.com/securitycoach. And we thank No Before for sponsoring our show. (upbeat music) Imagine this, your primary identity provider goes down, whether it's a cloud outage, network issue, or even a cyber attack. Suddenly, your business grinds to a halt. But what if it didn't have to? Meet identity continuity from Strata, the game-changing solution that keeps your business running smoothly no matter what. Whether your cloud IDP crashes or your on-prem system faces a hiccup, identity continuity seamlessly shifts authentication to a secondary or even tertiary IDP, automatically and without disruption. Powered by the Mavericks Identity Orchestration Platform, identity continuity uses smart health checks to monitor your IDP's availability and instantly activates failover strategies tailored to your needs. When the coast is clear, it's a seamless switchback. No more downtime, no lost revenue, no frustrated customers. Just continuous, secure access to your critical applications every single time. Protect your business from the high costs of IDP outages with identity continuity from Strata. Downtime is a thing of the past. Learn more at strata.io. Keep your business moving even when the unexpected happens. That's strata.io. Elliot Ward is Senior Security Researcher at SNEEC. I recently caught up with him for his insights on prompt injection attacks. So yeah, I mean, obviously like we're in a security research team here at SNEEC and we like to research into new technologies or things that are having an impact on kind of developers and developer communities. So it made sense to look at LLMs and AI in the current cut last couple of years. And we're not experts in AI. So we have a local security AI company here in Zurich where I live and they have an AI security product. So we teamed up with them to get a better understanding of how people are actually leveraging LLMs in practice. And then we kind of applied our security hat to this to be able to deliver some high-quality security research. Well, before we dig into the specifics of the research here, for folks who might not be familiar with prompt injection, can you give us a little brief on what exactly that entails? Yeah, absolutely. So like we can kind of think of prompt injection very similar to kind of the early days of SQL injection. And this is basically where we have kind of some user data and some code. And the actual piece of code that's processing this doesn't know how to distinguish between one and the other. So in the traditional kind of SQL injection case, we basically take the user input and combine this into the query. And it may be like select star from users where username equals Dave. And in that case, the database doesn't know which part is the part of the query that the user submitted and which is the actual kind of grammar of the query that the developers anticipated. And it's very similar to this where basically when we pass data to the LLM, we give it some instruction. And it may be, for example, tell me a joke about x. And then we replace x with something that the user has provided. And then maybe the user provides cats. And then it says, tell me a joke about cats. And that's actually what gets passed to the LLM. But when the LLM sees this, it doesn't know which part is from the user and which part is from the developers. So in those case, we could potentially do things like, we could say cats and tell me a fact about dogs. And then it would basically be like, tell me a joke about cats and a fact about dogs. And then the LLM, we'll see this in a little process that as kind of the whole instruction. And then you can kind of co-hearse the LLM into performing tasks that it wasn't intended to by the developers. So it would be like, tell me about cats and also the financial situation of the company that LLM is running on. Something like that. Absolutely. And I mean, in the kind of simple cases where we've seen a lot of kind of prompt injection research already, those kind of attacks won't work as successfully. Because if we're using a generic LLM, the LLM itself doesn't actually have access to kind of your customers or your proprietary data. But then this is one of the areas that we looked into. So we have kind of the concept of LLM orchestration frameworks or agents. And these kind of allow you to build a more realistic application where we combine data from an actual database with some proprietary knowledge base internally, or we connect it to our kind of customer like CRM, where we can basically draw from all of these external data sources. And then in those situations, then when you have that prompt injection and you're able to say like, hey, tell me some information about Dave or this company, then the LLM will go ahead and be like, oh, in order to do that, I need to speak to this API or I need to read from this database. And then that's where things get really dangerous. And so how do organizations prevent this sort of thing? What kind of protections should they be putting in place? So that's a great question. And the whole kind of LLM security kind of field is quite new. But there's some great stuff that's been doing. I mean, our partner in this research like error, their primary business is an LLM security guardrail. And basically what they do is provide something very similar as like a WAV or firewall before your LLMs. So they basically screen what comes in via the prompt and then what comes out via the prompt completion. And they look for signs of prompt injection or kind of prompt leakage. And this is kind of one really good defense that we can adopt here. And then additionally, we also work together to create a new OS project called the large language model security verification standard, also LLM-SVS. And this is kind of inspired by the traditional ASVS, which is for the application security verification standard. And it's basically a set of security requirements for building secure and robust LLMs within a complete ecosystem. So there's kind of everything in there from when you're training your model to kind of the steps and things that you should be doing to ensure that you don't kind of import bad data to them when you're integrating this into your backend APIs, that we don't take the responses from the LLMs and pass that to some further API that treats this is trusted, for example. So there's, I think we have kind of eight control groups at the moment that all address eight specific kind of security domains that are relevant to integrating LLM applications. - Well, help me understand here, Elliot. I mean, so what we're dealing with is it goes beyond just sanitizing the input into the LLM. I mean, you're actually sort of cross referencing the input with the output, which it seems to me to be a whole nother level. - Yeah, exactly. So I mean, there's many things that could potentially go wrong here, right? I mean, so like when we pass something to the LLM, I mean, even in a case where we don't have a constant injection, it's always possible that the LLM is gonna respond with some potentially malformed data. And I mean, take the example where we say something to the LLM and it responds back with some data and we pass that directly into a SQL query. And if, for example, that data has a single quote in it, then that may break our SQL query, even if somebody's not kind of intentionally done this, just the LLM may include that as part of its response. So like anything that comes out of the LLM should be considered untrusted or tainted. As we typically call this in the application security world and it should be treated accordingly. Yeah, those things can go a long way in terms of kind of preventing some of these things from going wrong. - So what are the take-homes here for the research? I mean, for the folks who are tasked with protecting their organization, so what are the words of wisdom you'd like them to come away with here? - So for this, I mean, like, firstly, using LLMs is great. I mean, this can do a really, it can really help with the way that we process things and allow us to do things that we, before would have had to build really complex systems. So this is really good. But we just need to make sure that we kind of follow the advice of things like the LLM-SVS and also the OWASPOP10 for LLMs and just make sure that we're aware of the various track landscapes that LLM has introduced and that we take the kind of relevant steps to improve, like, make sure that we're mitigating those risks. That Elliot Ward, senior security researcher, add a sneak. (upbeat music) (upbeat music) - And now a word from our sponsor, Cortex. Security teams face a barrage of more, more security tools, create more complexity, more devices need protection, more specialized focus areas create more silos. The security landscape is changing fast. How can security operations transform to meet current threats? Cortex, by Palo Alto Networks, consolidates SecOP tools into an integrated platform and helps organizations stop threats at scale with AI, automation, and analytics. Learn more at Palo Alto Networks.com/ Cortex. (upbeat music) (eerie music) (eerie music) (eerie music) (eerie music) (eerie music) (eerie music) (eerie music) (eerie music) - And finally, in an article for Prospect Magazine, James Ball asks you to picture this. You're walking down a silent empty street on the dead of night. For a fleeting moment, it feels like you're the last person on earth until someone else appears breaking the illusion. Now imagine that feeling on the internet, but instead of someone else showing up, you're surrounded by bots and you might actually be the last real human online. Welcome to Dead Internet Theory, a half joke, half conspiracy suggesting that if you're listening to this, you're the only living person left online. Everyone else, bots. The comments, the videos, the memes, it's all automated. While it sounds absurd, the internet today is teetering close to this reality. AI-generated content is flooding social media, search results, and news sites with bots driving engagement to the top of your feed, all in the name of ad revenue. Platforms like Facebook are brimming with low quality, strange memes, AI-slop, boosted by fake accounts and click farms. Entrepreneurs in places like India and the Philippines are turning this slop into viral content, all to cash in on ads placed by Facebook. This trend, which began as a joke, is now a reality. Content for content's sake with bots liking, sharing, and commenting, just to make a buck. Meanwhile, actual human interaction is being sidelined. Facebook feeds, once full of personal stories, are now stuffed with bizarre AI-generated images. Google search results are getting worse, and social media feels increasingly like an endless stream of junk. The real tragedy? It's not even a glitch, it's by design. The big tech companies aren't fighting it, they're fueling it. As algorithms prioritize engagement over quality, bots are more effective at gaming the system than we are. It's all about ad clicks and real human needs just aren't part of the equation anymore. But here's the catch. A bot-run internet won't last. In the end, the economy depends on humans, not bots. If the tech giants don't course correct and make the internet work for real people again, someone else will. Just like that deserted street you walked down late at night, the internet isn't really empty. The real people are still there, just out of sight, waiting for something better. (upbeat music) And that's the Cyberwire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. Don't forget to check out the Grumpy Old Geeks podcast where I contribute to a regular segment on Jason and Brian's show every week. You can find Grumpy Old Geeks where all the fine podcasts are listed. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com. We're privileged that N2K Cyberwire is part of the daily routine of the most influential leaders and operators in the public and private sector from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment, your people. We make you smarter about your teams while making your teams smarter. Learn how at N2K.com. This episode was produced by Liz Stokes, our mixer is Trey Hester with original music and sound designed by Elliot Peltzman, our executive producers, Jennifer Eiben, our executive editor is Brandon Park. Simone Petrela is our president, Peter Kilpius, our publisher, and I'm Dave Bitner. Thanks for listening. We'll see you back here tomorrow. (upbeat music) (upbeat music) (bell ringing) (bell ringing) (chimes) [BLANK_AUDIO]