News

Kids say they can beat age checks by drawing on a fake mustache

The Register - Mon, 04/05/2026 - 21:50
It’s been months since the UK government began requiring stronger age checks under the Online Safety Act, and recent research suggests those measures are falling short of keeping kids away from harmful content. In some cases, even drawing on a mustache has been reported as enough to fool age detection software. Like keeping booze away from teenagers or nudie mags out of the hands of young lads, slapping a big “restricted, 18+” label on parts of the internet hasn't stopped kids testing the limits. Those limits, according to UK online safety group Internet Matters, are easy to sidestep. The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required.  The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters.  Dude, want some TikTok? My mom will hook us up Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency.  More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.  "When speaking to parents and children about these situations, they described scenarios in which parents felt they understood the risks involved and, based on their knowledge of their child, were confident the activity was safe," Internet Matters said of parents who let their kids engage in risky behavior as long as they did it where they could be supervised.  What this means for a major part of the OSA - namely keeping kids from accessing harmful content online - is that it’s falling short. Internet Matters has data to that end, too. Half of children (49 percent) who responded to the group's survey said that they've encountered harmful content online recently, suggesting that even those who don't circumvent age gates are still finding it in their feeds.  So, what can be done to make kids' online safety more effective? Parents told Internet Matters that lawmakers need to do more, and CEO Rachel Huggins agreed that they need help.  "Stronger action is needed from both government and industry to ensure that children can only access online services appropriate for their age and stage and where safety is built in from the outset, rather than added in response to harm," Huggins said in the report.  The Internet Matters chief pointed to the prime minister’s recent talks with social media firms about tackling online harms, describing the moment as “a timely opportunity for positive change.” ®
Categories: News

Kids say they can beat age checks by drawing on a fake mustache

The Register - Mon, 04/05/2026 - 21:50
46% say age checks are easy to bypass, and nearly a third admit getting around them

It’s been months since the UK government began requiring stronger age checks under the Online Safety Act, and recent research suggests those measures are falling short of keeping kids away from harmful content. In some cases, even drawing on a mustache has been reported as enough to fool age detection software.…

Categories: News

Shadow IT has given way to shadow AI. Enter AI-BOMs

The Register - Mon, 04/05/2026 - 16:04
When it comes to securing enterprise supply chains, now heavily infused with AI applications and agents, a software bill of materials (SBOM) no longer provides a complete inventory of all the components in the environment. Enter AI-BOMs. While a traditional SBOM includes all of the software packages and dependencies in the organization, an AI-BOM aims to cover the gaps introduced by AI assets by providing visibility across all of the models, datasets, SDK libraries, MCP servers, ML frameworks, agents, agentic skills, prompts, and other AI tools - plus how these AI components interact with each other and connect to workflows. "Imagine if AI is a birthday cake in the middle of this room, but you don't know how it got there," Ian Swanson, VP of AI security at Palo Alto Networks said in an interview with The Register. "You don't know the recipe, you don't know the ingredients, you don't know the baker. Would you eat a slice of that cake?" A lot of organizations are eating the cake anyway. In addition to the company-sanctioned models and AI used in the tech stack, there's also the problem of "shadow AI" - we used to call this "shadow IT" - and these unsanctioned tools also need to be brought out of the shadows so they can be accounted for. This includes all the vibe coding platforms and agents that individual employees spin up, along with any external chatbots they interact with on work computers and potentially input sensitive corporate data into.  To secure all of these AI ingredients baked into the cake, companies first need to know what they are, what they connect to, and how they are being used. "In general, organizations that are trying to wrap their head around AI security," Amy Chang, Cisco's head of AI threat intelligence and security research told The Register. "They want a way to be able to identify what AI assets exist in their environment. A tool like the AI bill of materials is one of those first places that you can start to get a better understanding of what exists." Up next: model provenance Cisco previously open sourced its AI-BOM, making it free for anyone to scan codebases, container images, and cloud environments to produce this bill of materials. On Friday, it also made available its Model Provenance Kit as an open source tool to track model provenance. In a blog announcing the new repository, Chang and other AI researchers describe it as a DNA test for AI models, and it determines provenance using one of two modes: compare or scan. Compare mode takes any two models and shows their similarity across metadata, tokenizer structure, weight-level signals along with a final composite score. Scam mode starts with a single model and matches it against a database to determine the closest lineage candidates - and to help with this mode, Cisco also released a model fingerprint database covering about 150 base models across more than 45 families and over 20 publishers. Chang told us that the new AI tool performs two gate checks. "First, at the metadata level, it compares the information from the base model with the fine-tuned version of the model to delineate some sort of provenance-linked relationship - like this was derived from Meta Llama 4, or derived from Alibaba Qwen3," she said. "Then, what we do is look at weight-based signifiers. So now we're providing a sort of verifiable, repeatable and provable way to attest that the models that you use and deploy, that are customer facing, that are ingesting all this data, are truly the models that that you're supposed to be using, or that that are within the confines of your risk tolerance." During our interview, Chang pointed to Cursor's Composer 2, which is partly built on Kimi 2.5, a Chinese open source model. "They were very quick to admit that, yes, we used the Chinese model to build this," she said. "But that could have regulatory or compliance risk." Case in point: The European Union's AI Act mandates organizations document training data, characteristics of training methodology, and risk assessments for "high-risk systems." Google's Wiz, in its AI-BOMs, also accounts for all of the tools in the developers' workstation, such as a laptop or integrated development environment, that went into building the AI application. "Many people define visibility or BOMs by what's actually in the final artifact, but we also extend the definition of BOMs in general and AI-BOMs in particular to include the AI tools that went into building that application," Ziad Ghalleb, Wiz technical product marketing manager, told us.  "And then another important aspect is the identities that are attached to these AI workloads, because all these agents or models, tools, etc., are tied to a specific identity inside your environment," Ghalleb added. "So you need to be looking at these non-human identities that are related to these systems. It's not just the resources. It's also the identities and the permission sets that are tied to them." All of this boils down to visibility and security. "If you don't have visibility of these workloads, then you can't really understand what it is to protect," Swanson said.  Protection against poisonings Enterprises aren't the only ones madly rushing to incorporate AI tools into their workloads and processes, as everyone who reads The Reg likely knows. Criminals are also using these same tools to move faster and make their attacks more efficient. As Sherrod DeGrippo, Microsoft's GM of global threat intelligence, told The Register in a previous interview: This includes tasks such as performing reconnaissance on compromised computers, and standing up and managing attack infrastructure. "Agentic, automated reconnaissance against systems is something that is worth taking a look at," DeGrippo said. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity." According to Swanson, this is also a case where having an AI-BOM can help defenders respond faster. He says he can't name the company, but in one incident that Palo Alto Networks responded to, a criminal group used AI to scout out the victim organization and locate exposed endpoints.  "One of the things that they did is get access to system prompts, the instructions to an AI workload that tells it what it can do, and what it can't do," Swanson said. And once the attacker gained access to the company's internal AI's system prompts, they modified them to force the AI to do things that it shouldn't - like steal data, and send it to an external email account. An AI-BOM would provide an understanding of the AI system's configurations and dependencies at a specific state in time - and also indicate any changes. "If you had understanding of state and understanding of state changes, then you would be able to go back to an AI bill of materials and say: 'What system prompt was used within the ingredients to create the AI application?' And then see it's changed from a prior state to a new state. So we should probably check this and see if there's anything bad that's happening here," Swanson said. "And in that case, you'd be able to catch it." Other supply chain attacks such as model and skills poisoning underscore the risks of not knowing what AI tools are in an IT environment.  "Skills that people use in coordination with a lot of these coding assistants are pretty easy to tamper with, and so it's important to be able to scan them to make sure that somebody is not manipulating the capabilities," Swanson said. If a skill is supposed to provide a weather forecast, it shouldn't also steal credentials or leak secrets, he explained. "Understand state changes, constantly scan these artifacts for supply chain risks, and then at the point of runtime, when your AI application is live, also look at all communications to make sure that nothing bad is happening," Swanson said. AI-BOMs (and their software counterparts) can also help organizations quickly identify compromised open source code running on corporate systems. For example: the recent rash of poisoned npm and PyPI packages and earlier Shai-Hulud worm credential stealer attacks. Both of these campaigns targeted code commonly integrated into AI applications. Even in the absence of a CVE identifier, an AI-BOM lets users query "related libraries or packages," and then identify any malicious versions in their environment, Ghalleb said. "There's no CVE attached to them, but at least you know how to remove these to contain an evolving threat." ®
Categories: News

Shadow IT has given way to shadow AI. Enter AI-BOMs

The Register - Mon, 04/05/2026 - 16:04
'If you don't have visibility, you can't understand what to protect'

When it comes to securing enterprise supply chains, now heavily infused with AI applications and agents, a software bill of materials (SBOM) no longer provides a complete inventory of all the components in the environment. Enter AI-BOMs.…

Categories: News

If the vote you rocked, your personal info can be grokked

The Register - Mon, 04/05/2026 - 10:06
Your voter data could be used against you. A foreign intelligence service that wished to identify the family members of deployed military personnel could do so by cross-referencing public voter record data and social media posts. An employer who only wanted to hire employees with a specific political affiliation could do so by analyzing the primary ballot history of job applicants. An identity fraud ring seeking to open credit accounts in the names of other people could identify voters whose mail has been returned (via voter file suspense indicators) to take over those addresses using bogus change-of-address requests. These scenarios are possible thanks to the ability to link publicly available voter data to other data sets, according to Noah M. Kenney, founder of consultancy Digital 520. "I picked two different counties that kind of represented opposite ends of the spectrum," Kenney told The Register in a phone interview.  "In Texas, they hide a lot of information and then North Carolina makes a lot of it public in terms of the specific records. And what I was looking at specifically is if you go and merge this data set or link this data set with other data sets, how likely are you to be able to re-identify a person?" More than 25 years ago, research by Latanya Sweeney, currently a professor at Harvard, demonstrated that most of the US population (87 percent) could be identified with just three anonymous data points – a five-digit ZIP code, gender, and date of birth. Those results can be improved when combined with other data sets. And recent research has shown that the process of identifying people from seemingly anonymous data points becomes even easier with AI tools. In a research paper titled "Public Voting Records: A Record, or an Attack Surface?", Kenney describes how he analyzed public records from Travis County, Texas, and Robeson County, North Carolina to show that the adversarial scenarios cited above are practical with public data. The Texas file provides fewer data points than the North Carolina file, but the research suggests redaction doesn't make much of a difference in the re-identification scenarios evaluated. Table 1 — Disclosure regime comparison With the less detailed Texas info, Kenney was able to use a Python script to link the voter records to other public records like the Federal Election Commission's individual-contribution data. "We pulled 500 contribution records for ZIP 78704 (an Austin-core ZIP including South Congress and Travis Heights neighborhoods) from the 2024 cycle via the FEC OpenAPI on May 1, 2026," he explains in his paper.  "We de-duplicated to 181 unique contributors by exact match on (last name, first name, ZIP), and inner-joined to the voter file on the same key, no fuzzy matching, no nickname normalization, no suffix handling. Of the 181 contributors, 105 (58.01 percent) matched any voter record and 95 (52.49 percent) matched a uniquely-identifiable voter. Of the 105 matches, 74.3 percent had a non-trivial employer field in FEC." That 52 percent individual match rate for identifying individuals from voter rolls and FEC data, Kenney said, would be more like 90–95 percent using the kinds of tools commercial data brokers employ. The North Carolina voter dataset includes a phone number for the majority of voters. According to the paper, 88.53 percent of voters who have a phone number listed have a number that is unique within the county. As a result, external datasets containing phone numbers can be joined at a similar rate using this field as a key to narrow down and identify likely individuals. Among the report's other findings:  Name and ZIP code uniquely identify 95.81 percent of Texas voters and 87.79 percent of North Carolina voters. Among Travis County voters who have voted in 20 or more elections, 98.4 percent have a turnout pattern that is unique to them, making that data point a fingerprint. Texas' redaction of date of birth as a privacy measure is undermined by the publication of the voter registration data, which allows 28 percent of voters to be uniquely identified when combined with ZIP and gender. The Travis County voter file currently exposes 320 deployed military families through the publication of APO/FPO codes for military mailings. There's currently no comprehensive federal privacy law. While many states have privacy rules, there's a lot of variation. "Even within a specific state, most of the counties are individually handling these public records requests, so they all handle them differently across the country," said Kenney.  "Some of them, you can't get them. Some of them, you need an ID to get them. Some of them you have to go through a request process for public records or you have to pay for them. The two counties I used are both freely available. You can go and download zip files of them without even putting in an email address or your name from anywhere in the world." Kenney said that he believes that access controls represent a better answer than redacting certain data fields, pointing to his findings that show redaction doesn't necessarily protect against privacy harms. He recommends measures like rate limits on bulk file requests, identity verification, requiring state ID, maintaining audit logs of requests, and prohibiting commercial resale of these records – because they're often used by data brokers. Beyond specific fixes based on his findings – Texas should generalize voter registration dates to a year rather than a day and armed forces mailing codes should be excluded from voter rolls – Kenney argues that people should be allowed to opt out of inclusion in public data sets and that general data privacy protections would be helpful. Last week, House Republicans introduced the Secure Data Act in an effort to create federal privacy rules. But Kenney says that it's significantly weaker than a lot of state regulations and he doesn't expect it will pass. "The industry consensus is that the likelihood of it passing is extremely low, at least in its current form," he said. "This represents the third attempt to pass comprehensive data privacy in recent years, most recent being the American Data Privacy and Protection Act, which failed to pass." ®
Categories: News

If the vote you rocked, your personal info can be grokked

The Register - Mon, 04/05/2026 - 10:06
Even limited voter rolls can be linked to identify people, research shows

Your voter data could be used against you. A foreign intelligence service that wished to identify the family members of deployed military personnel could do so by cross-referencing public voter record data and social media posts.…

Categories: News

Five Eyes spook shops warn rapid rollouts of agentic AI are too risky

The Register - Mon, 04/05/2026 - 03:35
Information security agencies from the nations of the Five Eyes security alliance have co-authored guidance on the use of agentic AI that warns the technology will likely misbehave and amplifies organizations’ existing frailties, and therefore recommend slow and careful adoption of the tech. The agencies delivered that position last Friday in a guide titled Careful adoption of agentic AI services [PDF] that opens with the observation that “Agentic artificial intelligence (AI) systems increasingly operate across critical infrastructure and defense sectors and support mission-critical capabilities,” making it “crucial for defenders to implement security controls to protect national security and critical infrastructure from agentic AI-specific risks.” The thrust of the document is that implementing agentic AI will require use of many components, tools, and external data sources, creating an “interconnected attack surface that malicious actors can exploit.” “Consequently, every individual component in an agentic AI system widens the attack surface, exposing the system to additional avenues of exploitation,” the document warns. To illustrate the risks agentic AI poses, the document offers the example of an AI agent empowered to install software patches that is thoughtlessly given broad write access permissions, with the following unpleasant results: Here’s another nasty agentic mess the document uses as a warning: An organization deploys agentic AI to autonomously manage procurement approvals and vendor communications, and gives the agent access to financial systems, email and contract repositories; This user only considers permissions for the agent when deploying it; Over time, other agents rely on the procurement agent’s outputs and implicitly trust its actions; A malicious actor compromises a low-risk tool integrated into the agent’s workflow and inherits the agent’s over-generous privileges; The attacker uses that privileged access to modify contracts and approve unauthorized payments, and evades detection by creating faked audit logs that don’t trip alerts. Australia’s Signals Directorate and Cyber Security Centre (ASD’s ACSC) contributed to the document, working with the USA’s Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Canadian Centre for Cyber Security (Cyber Centre), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom National Cyber Security Centre (NCSC-UK). The document contains more scary stories, then lists 23 different risks and over 100 individual best practices to address them. Much of the advice targets developers who deploy AI, but the authors also urge vendors to ensure they test their wares thoroughly and ensure their products “fail-safe by default requiring agents to stop and escalate issues to human reviewers in uncertain scenarios.” The document also urges security practitioners and researchers to spend more time contemplating AI. “Threat intelligence for agentic AI systems is still evolving, which can introduce significant security gaps,” the document warns, because resources like the Open Web Application Security Project and MITRE ATLAS currently focus on LLMs. “As a result, some attack vectors unique to agentic AI may not be fully captured or addressed.” Given the huge to-do list for anyone creating agentic AI, or contemplating its use, the document argues for very cautious adoption. “Organisations should therefore approach adoption with security in mind, recognizing that increased autonomy amplifies the impact of design flaws, misconfigurations and incomplete oversight,” the document concludes. “Deploy agentic AI incrementally, beginning with clearly defined low-risk tasks and continuously assess it against evolving threat models.” “Strong governance, explicit accountability, rigorous monitoring and human oversight are not optional safeguards but essential prerequisites. Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritizing resilience, reversibility and risk containment over efficiency gains.” ®
Categories: News

Five Eyes spook shops warn agentic is too wonky for rapid rollout

The Register - Mon, 04/05/2026 - 03:35
Prioritize resilience over productivity, say CISA, NCSC and their friends from Oz, NZ, Canada

Information security agencies from the nations of the Five Eyes security alliance have co-authored guidance on the use of agentic AI that warns the technology will likely misbehave and amplifies organizations’ existing frailties, and therefore recommend slow and careful adoption of the tech.…

Categories: News

Brace for the patch tsunami: AI is unearthing decades of buried code debt

The Register - Sat, 02/05/2026 - 09:30
Britain's cyber agency is warning that AI-fuelled bug hunting is about to flush out years of buried flaws, leaving defenders scrambling to keep up. In a blog post on Friday, Ollie Whitehouse, CTO of the UK's National Cyber Security Center, said organizations should brace for a looming "patch wave," driven by a backlog of weaknesses now being exposed faster than many teams can realistically fix them. "All organizations have 'technical debt'; a backlog of technical issues – that is both expensive and time-consuming – as a result of prioritising short-term gains over building resilient products," Whitehouse wrote.  "Artificial Intelligence, when used by sufficiently-skilled and knowledgeable individuals, is showing the ability to exploit this technical debt at scale and at pace across the technology ecosystem," he added. The result, according to NCSC, is likely to be a "forced correction" as those weaknesses are uncovered and addressed in bulk. That warning lands just as vendors roll out tools built to do exactly that. Models like Anthropic's Claude Mythos and OpenAI's GPT-5.5-Cyber promise to find and fix bugs before attackers do, but the same capability also lowers the barrier to finding them in the first place. "We are expecting an influx of updates to address vulnerabilities across all severities, and expect a number to be critical," Whitehouse wrote. The cyber agency is urging teams to get ahead of the incoming flood by shrinking their exposed footprint. "All organizations must take steps to identify and minimise their internet-facing (and other externally-exposed) attack surfaces as soon as is possible," Whitehouse said, adding that defenders should "prioritise technologies on your perimeter and then work inwards." Even then, patching alone will not be enough; Whitehouse notes that unsupported or end-of-life systems may need to be replaced altogether. "Prepare to patch quickly, more often, and at scale," is the message from the NCSC. In practice, that means a lot more fixes landing at once, and a lot less time to get them done. ®
Categories: News

Brace for the patch tsunami: AI is unearthing decades of buried code debt

The Register - Sat, 02/05/2026 - 09:30
Britain's cyber agency says the bill for years of technical shortcuts is coming due, and it's arriving all at once

Britain's cyber agency is warning that AI-fuelled bug hunting is about to flush out years of buried flaws, leaving defenders scrambling to keep up.…

Categories: News

First reports come in of victims of critical cPanel vuln as 'millions' of sites potentially exposed

The Register - Fri, 01/05/2026 - 14:10
CISA has added a critical cPanel bug to its known-exploited list, confirming that attackers are already poking holes in one of the internet's most widely used hosting stacks. The vulnerability, tracked as CVE-2026-41940, carries a near-worst-case CVSS score of 9.8 and affects all supported versions of cPanel and Web[Host Manager (WHM) released after version 11.40, along with WP Squared, a WordPress management layer built on top of the same platform. In plain terms, a successful exploit can hand over full control of the server. The US government's cybersecurity agency added the flaw to its Known Exploited Vulnerabilities catalog on Thursday, confirming attackers are not waiting around. By the time cPanel shipped a patch on Tuesday, exploitation was already underway. Hosting provider KnownHost has been more explicit about what that looked like in practice, warning customers it had seen successful exploitation attempts before any fix was available. In a Reddit post, the company's CEO, Daniel Pearson, said the provider had "seen execution attempts as early as 2/23/2026" and urged users to restrict access and assume systems could already be compromised if left unpatched. Another hosting provider, Namecheap, says it temporarily blocked access to cPanel and WHM, effectively slamming the door shut until fixes were ready. It has since begun rolling out updates. There are also early signs of what those attackers are up to once they get in. A small business owner posting on Reddit said their company had been hit by ransomware after running what they described as a fairly standard cPanel setup, adding that their hosting provider appeared to be struggling under the weight of the incident. The attackers, they said, demanded $7,000 to unlock systems. The claim is anecdotal, but if it holds up, it suggests this bug is already being used by criminals to lock up systems, not just lurk quietly or skim data in the background. It's not yet known how many organizations have been impacted by the vulnerability, but security firm Rapid7 used Shodan to identify roughly 1.5 million internet-exposed cPanel instances.  cPanel underpins hosting for tens of millions of sites, many run by small outfits that rely on providers to handle security. For them, "patch now" often means "wait and hope," which is not a great place to be when a near-max severity bug is already being weaponized. ®
Categories: News

First reports come in of victims of critical cPanel vuln as 'millions' of sites potentially exposed

The Register - Fri, 01/05/2026 - 14:10
Exploitation was underway before patches landed, at least one victim reports ransomware demand

CISA has added a critical cPanel bug to its known-exploited list, confirming that attackers are already poking holes in one of the internet's most widely used hosting stacks.…

Categories: News

OpenAI locks GPT-5.5-Cyber behind velvet rope despite slamming Anthropic for doing exactly that

The Register - Fri, 01/05/2026 - 12:42
OpenAI is lining up a limited release of its new GPT-5.5-Cyber model to a handpicked circle of "cyber defenders," just weeks after taking a swipe at Anthropic for doing almost exactly the same thing. CEO Sam Altman said in a post on X that the rollout will begin "in the next few days," with access restricted to a group he described as trusted defenders working to secure critical systems.  "We will work with the entire ecosystem and the government to figure out trusted access for cyber," he wrote, adding that the goal is to "rapidly help secure companies and infrastructure." GPT-5.5-Cyber is built to spot flaws before anyone else abuses them. OpenAI says it can pentest, find bugs, exploit them, and tear apart malware, but as we have already seen, tools that break systems rarely stay in the right hands for long. OpenAI's announcement comes just weeks after Anthropic rolled out its own cyber-focused model, Claude Mythos, to roughly 50 organizations under tight controls, saying it would never be made publicly available – and Altman was not impressed.  As reported by TechCrunch, he took aim at what he framed as exclusivity dressed up as caution during an appearance on the Core Memory podcast.  "There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people," he said. "You can justify that in a lot of different ways." He went further, likening the approach to selling fear. "We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million." Now OpenAI is, if not building the same shelter, at least checking IDs at the door. Independent testing suggests the model is not just marketing fluff. The UK's AI Security Institute said this week that GPT-5.5-Cyber is "one of the strongest models we have tested on our cyber tasks," and noted it is only the second system to complete one of its multi-step attack simulations end to end.  It may be pitched as protection, but when the tools can both break and fix systems, the difference often comes down to who gets there first. ®
Categories: News

OpenAI locks GPT-5.5-Cyber behind velvet rope despite slamming Anthropic for doing exactly that

The Register - Fri, 01/05/2026 - 12:42
Altman's crew now doing the same gatekeeping it recently mocked

OpenAI is lining up a limited release of its new GPT-5.5-Cyber model to a handpicked circle of "cyber defenders," just weeks after taking a swipe at Anthropic for doing almost exactly the same thing.…

Categories: News

Pro-Iran crew turns DDoS into shakedown as Ubuntu.com stays down

The Register - Fri, 01/05/2026 - 12:05
Canonical says its web infrastructure is under attack after a pro-Iran hacktivist group instructed its members to target the open source giant. "I can confirm that Canonical's web infrastructure is under a sustained, cross-border Distributed Denial of Service (DDoS) attack" a Canonical spokesperson told The Register.  "Our teams are working to restore full availability to all affected services. We will provide updates in our official channels as soon as we are able to." Known best for managing the development of Ubuntu, the distro's main website is down at the time of writing, and has been for several hours. The hacktivist group The Islamic Cyber Resistance in Iraq, aka 313 Team claimed responsibility for the 503 errors Ubuntu's website was returning on Thursday evening, announcing via its Telegram channel that the attack was scheduled to persist for four hours. More than 12 hours later, the attack continues to disrupt Ubuntu's main website and many of its subdomains, although some, including its Archive and Discourse pages, remain up and running. 313 Team sent a follow-up message to its Telegram group, directed at Canonical, which indicates the group is veering away from hacktivism and toward full-on extortion: "There is a simple way out. We have emailed you with our Session Contact ID. If you fail to reach out, we will continue our assault. You are in an awful position, don't be foolish." The service disruption at Ubuntu means users cannot download any versions of its distros through the usual channels, nor can they log into their Canonical accounts. Canonical promised to provide regular updates when it has new information to share. 313 Team has claimed responsibility for similar DDoS attacks on the likes of eBay's Japan and US divisions, as well as BlueSky in just the past month alone. Why the group is targeting London-based Canonical remains unclear and no reason was given via its Telegram channel. It is presumably because Ubuntu is one of the most popular Linux distros. ®
Categories: News

Pro-Iran crew turns DDoS into shakedown as Ubuntu.com stays down

The Register - Fri, 01/05/2026 - 12:05
313 Team tells Canonical: pay up or the packets keep coming

Canonical says its web infrastructure is under attack after a pro-Iran hacktivist group instructed its members to target the open source giant.…

Categories: News

Passport to £££: Home Office adds £216M to travel doc contract before a single bid's been placed

The Register - Fri, 01/05/2026 - 10:15
The Home Office has increased the annual value and overall duration of its new passport production contract, increasing it to a total of £576 million as it starts a third round of engagement with suppliers. The department’s first engagement notice for the Provision of Passport Manufacturing and Personalisation Services contract last July included an estimated total value of £360 million including VAT over 10 years or £36 million a year. The version published on 24 April increases the total value to £576 million including VAT over 12 years or £48 million a year. The Home Office has also pushed back the contract’s start date from September 2027 to August 2028, as well as postponing the publication of the full tender notice from June to November this year. The latest version says that HM Passport Office issues about eight million passports annually, up from seven million in the first notice, although this would not fully account for the increased annual value. The Home Office’s current passport production contract with Thales (which bought the winning bidder Gemalto) started in April 2018, with an estimated value of £262 million over 11.5 years or £22.8 million a year. It ends on 30 September 2029. As well as physical production of passports and other travel documents, the new supplier will have to personalize them with data including biometrics. It may also need to produce digital travel credentials and make provision “for crypto technologies and contingency solutions.” Potential suppliers will have the chance to ask questions after completing a non-disclosure agreement at an online event on 18 May. The Home Office disclosed that it will pay IBM £5.88 million including VAT for software licenses and support services to operate and maintain its biometric systems between 1 May 2026 and 30 April 2028. The department is awarding the contract directly without competition "as the required software and support services are proprietary to IBM and embedded within existing live systems, with no reasonable alternative supplier without disproportionate technical difficulties." ®
Categories: News

Passport to £££: Home Office adds £216M to travel doc contract before a single bid's been placed

The Register - Fri, 01/05/2026 - 10:15
Start date pushed back a year, annual cost up a third, and UK's now handing out eight million passports a year

The Home Office has increased the annual value and overall duration of its new passport production contract, increasing it to a total of £576 million as it starts a third round of engagement with suppliers.…

Categories: News

The never-ending supply chain attacks worm into SAP npm packages, other dev tools

The Register - Fri, 01/05/2026 - 00:21
The wave of supply chain attacks aimed at security and developer tools has washed up more victims, namely SAP and Intercom npm packages, plus the lightning PyPI package. The newly compromised packages as of Thursday include intercom-client@7.0.5 (according to Google-owned Wiz) and intercom-client@7.0.4 (says supply-chain security firm Socket) and lightning@2.6.2 and 2.6.3. Attackers infected all versions with the same credential-stealing malware that, on Wednesday, poisoned multiple npm packages associated with SAP's JavaScript and cloud application development ecosystem. The SAP-related compromise is a Shai-Hulud-worm style campaign that calls itself Mini Shai-Hulud. So far, these SAP-related npm packages include: mbt@1.2.48 @cap-js/db-service@2.10.1 @cap-js/postgres@2.2.2 @cap-js/sqlite@2.2.2 Collectively, these four packages receive about 572,000 weekly downloads and are widely used by developers building cloud applications. SAP did not answer The Register's questions about the compromise and instead sent us this statement: "A security note is published and available for SAP customers and partners." The note is only accessible to logged-in customers. These latest offensives are called "Mini Shai-Hulud worm” attacks because of similarities to the earlier self-propagating Shai-Hulud malware that targeted npm packages. Both Wiz and Socket attributed the SAP compromise to TeamPCP – the cybercrime crew linked to the earlier Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security Trivy infections. The two security shops also note that the Thursday attacks on the Intercom and lightning packages appear to contain the same malicious code seen in the SAP operation. Here's what has happened in the world of supply-chain attacks over the past 48 hours. SAP-related npm packages On April 29, TeamPCP compromised four official npm packages from the SAP JavaScript and cloud application development ecosystem and published the poisoned releases between 09:55 and 12:14 UTC. The compromised packages contain malicious preinstall scripts set to execute automatically on every npm install, and run attacker-controlled code before any application code runs. This new campaign deploys a multi-stage payload that steals developer secrets, self-propagates, encrypts all the stolen goods, and then exfiltrates the now-locked secrets into a new GitHub repository under the victim's own account. "The second-stage payload is a credential stealer and propagation framework designed to target both developer environments and CI/CD pipelines," the Wiz kids said on Thursday. "It collects sensitive data including GitHub tokens, npm credentials, cloud secrets (AWS, Azure, GCP), Kubernetes tokens, and GitHub Actions secrets – leveraging advanced techniques such as extracting secrets from runner memory. Exfiltration occurs via public GitHub repositories, where it posts encrypted payloads. Additionally, the malware includes propagation logic to infect additional repositories and package distributions." Plus PyPI package lightning Then on Thursday, an additional package was poisoned to execute credential-stealing malware on import. Up first: PyPI package Lightning versions 2.6.2 and 2.6.3. Lightning is a widely used deep learning framework for training and deploying AI products. Developers download it hundreds of thousands of times every day. "The obfuscated JavaScript payload contains many similarities to the Shai-Hulud attacks, overlapping in targeted tokens, credentials and obfuscation methods. Socket also identified signs that router_runtime.js both poisons GitHub repositories and infects developer npm packages," according to Socket, which also published a separate Mini Shai-Hulud supply-chain campaign page that it updates as new information comes to light. And Intercom's npm package Also on Thursday: Socket and Wiz sounded the alarm on a new compromise of the intercom-client npm package. Intercom is a customer communications platform, and intercom-client is a widely used official SDK for Intercom's API. It sees about 360,000 weekly downloads, and npm lists more than 100 dependent projects. However, as Socket notes, the real exposure likely extends beyond these direct dependencies because the package is commonly installed in backend services, developer environments, and CI/CD pipelines that integrate with Intercom's API. "The attack closely resembles the lightning@2.6.2 PyPI attack from earlier today, as well as the TeamPCP-linked supply chain campaign we reported yesterday affecting SAP CAP and Cloud MTA npm packages," Socket wrote. Neither Intercom nor Lightning immediately responded to The Register's requests for comment. We will update this story when we hear back from any of the compromised organizations. ®
Categories: News

The never-ending supply chain attacks worm into SAP npm packages, other dev tools

The Register - Fri, 01/05/2026 - 00:21
Mini Shai-Hulud caught spreading credential-stealing malware

The wave of supply chain attacks aimed at security and developer tools has washed up more victims, namely SAP and Intercom npm packages, plus the lightning PyPI package.…

Categories: News

Pages

Subscribe to Sec Tec Limited aggregator - News