News

India orders infosec red alert in case Mythos sparks crime spree

The Register - Wed, 06/05/2026 - 03:32
India’s Securities and Exchange Board has advised participants in the nation’s equities industry to immediately revisit their information security systems and practices, in case Anthropic’s Mythos bug-finding AI sparks a cyberattack spree. The Board is India’s equivalent of the USA’s Securities and Exchange Commission, or the UK’s Financial Conduct Authority. On Tuesday, the Indian regulator issued an advisory that opens with the following observation: In response to those threats, the Board has established a taskforce that will examine the risks posed by models like Mythos, share threat intelligence, report incidents, and initiate a review of cybersecurity at third-party software vendors who supply the regulator and the entities it oversees. The advisory then offers some basic infosec advice: ensure patches are up to date, conduct audits of potential vulnerabilities, conduct inventories of APIs and secure them, run a serious SOC and take its advice, and harden systems by adopting principles such as zero-trust networking and running only essential services. The regulator also told participants in India’s equities markets to have their IT committees issue guidance on how to mitigate risks created by AI-led vulnerability detection models, then develop a plan to use AI as part of their infosec armoury. “Also, undertake other measures including recalibration of risks for AI accelerated threats, AI-augmented SOC transformation, and continuous vulnerability management using AI tools,” the advisory states. The Board directed the above advice at 19 different classes of company, ranging from venture capitalists to merchant bankers, mutual funds, stock exchanges, and even niche suppliers such as agencies that store know your customer information. Other regulators around the world have also acknowledged the risks Mythos poses. US Treasury Secretary Scott Bessent convened an emergency meeting with the nation’s banks a few weeks back. Singaporean regulators did likewise, yesterday. Australian regulators sent local banks a strongly worded reminder that they must develop AI strategies that consider risks the technology creates. Hong Kong’s Monetary Authority is working on new infosec guidance for the age of Mythos. India’s approach stands out for effectively putting entities it regulates on alert to an imminent threat and ordering them to take action to prevent problems. ®
Categories: News

India orders infosec red alert in case Mythos sparks crime spree

The Register - Wed, 06/05/2026 - 03:32
Securities regulator urges market players to develop new strategies and nail cyber-basics before AI models fuel mass attacks

India’s Securities and Exchange Board has advised participants in the nation’s equities industry to immediately revisit their information security systems and practices, in case Anthropic’s Mythos bug-finding AI sparks a cyberattack spree.…

Categories: News

ServiceNow clears agents for landing with new AI control tower

The Register - Tue, 05/05/2026 - 18:00
ServiceNow announced an expansion of its AI Control Tower, transforming what began last year as a governance dashboard into what the company now describes as a command center for managing AI assets across an entire enterprise, including those running outside ServiceNow's own platform. The updated AI Control Tower, shipping as part of ServiceNow's Australia platform release, now operates across five areas: discovery, observation, governance, security, and measurement. The company said that this is its answer to AI agent sprawl, as enterprises have deployed more AI than they can account for and the tools to govern it have not kept pace. “What we launched last year gave customers a governance layer, but what we're shipping this year goes significantly deeper, evolving from visibility and management into a full enterprise AI command center,” Nenshad Bardoliwalla, group vice president of AI products at ServiceNow told reporters during a media briefing ahead of the company’s annual product show, Knowledge 26. “Our AI control tower ensures every AI system asset and identity is compliant, secure, and aligned with your strategy.” The AI Control Tower now reaches beyond ServiceNow's own platform with 30 new enterprise connectors that span all three major hyperscalers, Amazon Web Services, Google Cloud, and Microsoft Azure, along with enterprise applications such as SAP, Oracle, and Workday. The system can now discover AI assets, models, agents, prompts, and datasets running across an organization's full technology estate, not just those deployed on ServiceNow. “With our Veza integration, we're bringing patented access graph technology into the AI control tower, extending identity access governance to hyperscaler AI environments and every connected device, every agent, every model, every action has scope permissions, least privilege enforcement and auditable identity chains,” Bardoliwalla said. Bardoliwalla walked through a demo in which the AI Control Tower detected a prompt injection attack on a pricing agent. The system identified malicious instructions hidden inside order payloads, mapped the blast radius of affected systems using access graph technology from Veza, and presented a kill switch to disable the compromised agent, without human intervention. "You need a system that senses, decides and acts on its own, that can scale with your AI portfolio, not your head count," said Bardoliwalla. Two recent acquisitions underpin the security architecture. ServiceNow announced in December it would acquire Veza, which contributes an access graph that maps every identity and access path across systems whether it belongs to humans, machines, or AI agents. It also knows which entities have create, read, update, and delete-level permissions. ServiceNow said the access graph currently maps over 30 billion fine-grained permissions. When a vendor pushes a new version of a model or agent, the platform detects permission changes and automatically triggers a re-scoping workflow. Traceloop, which ServiceNow acquired in March, provides deep AI observability inside the Control Tower by tracking every LLM call that is running in the system. The integration delivers continuous runtime monitoring with live alerts, replacing what ServiceNow described as the periodic manual audits most enterprises still rely on. Teams can watch how agents reason, where they make decisions, and when to course-correct. ServiceNow also addressed the cost side of the AI equation. Control Tower now includes cost tracking and ROI dashboards to give finance teams visibility into model spend. The measurements track token consumption across providers such as OpenAI, Anthropic, and Google so customers can predict costs and tie spending to business outcomes. ServiceNow said it uses the AI Control Tower internally to manage over 1,600 AI assets and tracked half a billion dollars in cumulative AI value from internal use cases in 2025. "The number one question every CFO is asking is, where's the value?" said Bardoliwalla during the briefing. He added that runaway model spend ranks among the biggest pain points enterprises currently face as they scale AI deployments. Alongside the Control Tower expansion, ServiceNow announced Action Fabric, a mechanism that opens the company's full workflow engine to external AI agents. Through a generally available MCP server, agents built on Claude, Copilot, or custom platforms can now trigger governed enterprise actions — not just read and write data, but execute the flows, playbooks, approval chains, and catalog requests that ServiceNow customers have built over years. Anthropic is the first design partner for Action Fabric. The integration connects Claude directly to ServiceNow's governed system of action. "The gap between knowing what needs to happen and making it happen is where productivity dies," said Boris Cherny, head of Claude Code at Anthropic said in a statement. "Connecting Claude Cowork to ServiceNow's system of action closes that gap with enterprise execution, directly in the flow of work." Every action routed through Action Fabric runs through the AI Control Tower, so it carries identity verification, permission scoping, and a full audit trail. The MCP server is included in every Now Assist and AI Native SKU, with additional features planned for the second half of 2026.
Categories: News

Attackers are cashing in on fresh 'CopyFail' Linux flaw

The Register - Tue, 05/05/2026 - 16:01
CISA is warning that a newly-disclosed Linux kernel bug dubbed "CopyFail" is already being exploited, just days after researchers dropped a working root-level exploit. Tracked as CVE-2026-31431, the bug sits in the Linux kernel and gives low-level users a way to take full control of a system by modifying data they should only be able to read, effectively turning limited access into full root privileges on unpatched machines. The issue was disclosed by cybersecurity consultancy Theori, which said the flaw was discovered by its AI-powered penetration testing platform, Xint, and reported to the Linux kernel security team on March 23. Major Linux distributions pushed out patches ahead of public disclosure, which Theori published alongside a proof-of-concept exploit. The Python-based code works against Ubuntu 24.04 LTS, Amazon Linux 2023, RHEL 10.1, and SUSE 16, but the researchers warned that every mainstream Linux kernel built since 2017 is in scope of potential exploitation. "Same script, four distributions, four root shells — in one take. The same exploit binary works unmodified on every Linux distribution," Theori says. That level of reliability has not gone unnoticed. The CISA, the US government's cybersecurity agency, has added the bug to its Known Exploited Vulnerabilities catalog and ordered Federal Civilian Executive Branch agencies to patch within two weeks, setting a May 15 deadline. Microsoft backed CISA's findings and said it is already seeing signs of activity following the PoC's release. "Given the availability of a fully working exploit proof-of-concept (PoC) and the race to patch systems, Microsoft Defender is seeing preliminary testing activity that might result most likely in increased threat actor exploitation over the next few days," the company warned. The mechanics help explain the urgency. The attack is local and requires little access, with no user interaction, so anyone who already has a foothold on a vulnerable box can try their luck. It is the kind of bug that turns a small break-in into full control pretty quickly. As The Register reported last week, the flaw stems from how the kernel handles certain cryptographic operations, opening a path to tamper with cached data in ways that were never meant to be user-controlled. With a reliable exploit now in the wild, that design quirk has effectively turned into a universal privilege-escalation trick. ®
Categories: News

Attackers are cashing in on fresh 'CopyFail' Linux flaw

The Register - Tue, 05/05/2026 - 16:01
Researchers dropped a reliable root exploit and it didn’t sit idle for long

CISA is warning that a newly-disclosed Linux kernel bug dubbed "CopyFail" is already being exploited, just days after researchers dropped a working root-level exploit.…

Categories: News

Real estate giant confirms vishing incident as ShinyHunters and Qilin both come knocking

The Register - Tue, 05/05/2026 - 14:34
Real estate giant Cushman & Wakefield has confirmed a data breach after two cybercrime groups, ShinyHunters and Qilin, separately claimed responsibility for attacks on the company. A spokesperson told The Register the attack was "limited" in scope and stemmed from vishing (voice phishing), suggesting an employee was socially engineered. The representative said: "Cushman & Wakefield recently became aware of a limited data security incident due to vishing. We have activated our response protocols, including taking steps to contain the unauthorized activity and engaging third-party expert advisors to support a comprehensive response.  "Our systems and operations continue to run normally, and we are working diligently to investigate the incident. We recognize the trust placed in us to protect sensitive data and we take this responsibility very seriously." Cushman & Wakefield (C&W) did not address the apparent dual targeting by both ShinyHunters, which operates a pay-or-leak model, and Qilin, currently viewed as the world's most prolific ransomware group. There is no previously established coalition between ShinyHunters and Qilin, which suggests the two alleged attacks are separate but coincidentally timed. In a message sent to The Register, ShinyHunters claimed they attacked the company on May 1, while Qilin listed C&W on its data leak site on May 4. Qilin's website listing did not detail how it allegedly attacked C&W, although ShinyHunters claimed it stole "over 500,000 Salesforce records containing PII and other internal corporate data." ShinyHunters set a May 6 deadline for C&W to make contact to prevent the data from being leaked, but the cybercriminals claimed this had yet to happen. ShinyHunters has been on something of a tear recently. Known for its large-scale, high-impact attacks, the group's latest wave of activity began in March when it laid claim to an expansive supply chain attack after breaching Salesforce customers via the CRM giant itself. At the time, it said it had stolen data belonging to Salesforce and more than 100 of its high-profile customers. Since then, big-name brands like ADT, Carnival Cruise Line, Rockstar Games, Vimeo, and others have all confirmed ShinyHunters-linked cyberattacks, although not all were explicitly linked to its earlier Salesforce compromise. ®
Categories: News

Real estate giant confirms vishing incident as ShinyHunters and Qilin both come knocking

The Register - Tue, 05/05/2026 - 14:34
Cushman & Wakefield activated incident response protocols after serial extortionists issued separate threats

Real estate giant Cushman & Wakefield has confirmed a data breach after two cybercrime groups, ShinyHunters and Qilin, separately claimed responsibility for attacks on the company.…

Categories: News

ShinyHunters claims dump puts 119K Vimeo emails in the wild

The Register - Tue, 05/05/2026 - 13:15
More than 119,000 Vimeo users's email addresses were extracted in a breach traced to a third-party analytics vendor, according to Have I Been Pwned. The incident first surfaced in April when the ShinyHunters crew added Vimeo to its growing "pay or leak" hit list, claiming it had pulled hundreds of gigabytes of data and threatening to dump the lot unless a deal was struck. That dump has since landed, and breach notification service Have I Been Pwned now puts a number on at least part of the fallout: 119,000 unique email addresses, in some cases paired with names. Vimeo last week confirmed that data was taken, but stopped short of saying how many people were affected. The company pinned the incident on Anodot, a third-party analytics provider used across its systems, and said the attacker gained access via that integration rather than breaking into Vimeo directly. Anodot has not said anything publicly, but its status page shows the incident kicked off on April 4. According to Vimeo, the stolen databases were heavy on technical data, video titles, metadata, and some customer email addresses. The company has been keen to stress what was not included: no actual video content, no valid login credentials, and no payment card information.  That does not make the data harmless. Email lists like this get reused, resold, and recycled into phishing runs for years, especially when they come with enough context to make a message look convincing. The attackers, for their part, claim the breach went deeper. In a post seen by The Register, ShinyHunters alleged that "Snowflake and BigQuery instances data was compromised thanks to Anodot.com," adding that the company "failed to reach an agreement" despite multiple attempts to negotiate.  Vimeo says it has cut off the problem at the source, disabling Anodot credentials, ripping out the integration, and bringing in outside security help while notifying law enforcement. The investigation is ongoing, and the company says it will update customers as it learns more. For now, the numbers from Have I Been Pwned seem to fill in the gap left by Vimeo's initial disclosure, and underline a familiar problem: you can lock down your own systems, but your vendors only have to slip once. ®
Categories: News

ShinyHunters claims dump puts 119K Vimeo emails in the wild

The Register - Tue, 05/05/2026 - 13:15
Vimeo points finger at analytics supplier Anodot, says no logins or card data were touched

More than 119,000 Vimeo users's email addresses were extracted in a breach traced to a third-party analytics vendor, according to Have I Been Pwned.…

Categories: News

Romance scammers turn sweet talk into £102M payday

The Register - Tue, 05/05/2026 - 12:43
Romance fraudsters scammed Britons out of £102 million ($138 million) last year, according to the latest police figures. That works out to roughly £280,000 ($379,000) a day, the City of London Police said Tuesday. The average victim loses around £9,500 ($12,866) per scam, though individual cases have reached £1 million ($1.35 million). The figures come from Report Fraud, a City of London Police service that logged 10,784 romance scam reports in 2025, a 29 percent year-on-year bump. "Romance fraud is particularly harmful because it targets trust and emotional connection," said Detective Superintendent Oliver Little at the City of London Police.  "Offenders will often spend significant time building what appears to be a genuine relationship before attempting to exploit their victim financially," he added. "While the monetary losses can be substantial, the emotional impact is often just as damaging. This crime can affect anyone, and by reporting it, victims help us build intelligence, disrupt offenders, and protect others from harm." The scams disproportionately hit older victims, with almost half of 2025's total losses coming from those aged 55-74. Men submitted the highest number of reports, but women incurred the greatest financial losses. The playbook is well-established: criminals build fake profiles on social media, cultivate rapport with targets – often expressing strong feelings early – then request money for various reasons, including travel, medical expenses, and other invented needs. City of London Police has urged the public to look out for common tactics used by fraudsters: unsolicited affection from strangers online, excuses to avoid video calls or in-person meetings, and sudden investment pitches. A second opinion from a friend or family member can help. Confidence/romance scams are an even bigger problem in the US, where they rank as the fifth most costly form of cybercrime. An annual report from the FBI's Internet Crime Complaint Center (IC3) estimated total losses in 2025 at $929.4 million, ahead of data breaches, phishing, extortion, and ransomware. In the UK, romance fraud sits at the lower end of the cybercrime spectrum. Advance fee fraud, banking fraud, investment fraud, and online shopping scams all generate far more reports. Total fraud losses in the UK reached £3.4 billion ($4.6 billion) in 2025 across 388,895 reports, according to data, a figure that puts romance fraud's toll in stark perspective. Underreporting is also thought to be widespread, with many victims staying silent out of shame. ®
Categories: News

Romance scammers turn sweet talk into £102M payday

The Register - Tue, 05/05/2026 - 12:43
Victims losing £280K a day to fake profiles and sob stories

Romance fraudsters scammed Britons out of £102 million ($138 million) last year, according to the latest police figures.…

Categories: News

NHS to close-source hundreds of GitHub repos over AI, security concerns

The Register - Tue, 05/05/2026 - 10:15
Healthcare giant's maintainers handed May deadline to enact the change

The UK's National Health Service (NHS) is ordering all of its technology leaders to temporarily wall off the organization's open source projects over concerns relating to advanced AI and Anthropic's Mythos.…

Categories: News

Microsoft's bad obsession is showing up in shabby services and slipshod software. Here's proof

The Register - Tue, 05/05/2026 - 09:30
If you can't bother to keep GitHub running, why should we bother with you?

Opinion  It's been another shabby week for Microsoft, and a shabbier one for its users. We learnt that Windows 11's epic habit of trying to corral customers into paid-for Microsoft services just got worse with a low-rent trick. Remote Desktop got a bit more secure, which is good, but in a way that suggests not too much user testing took place. As for GitHub… GitHub got two helpings of Chef Redmondo's Special Sauce.…

Categories: News

Singapore boffins get diverse SIEMs singing in harmony with agentic rule translation

The Register - Tue, 05/05/2026 - 03:12
Academics from Singapore and China have found a way to make AI useful for cyber-defenders, by creating a technique that translates rules from diverse Security Information and Event Managements (SIEMs) so they’re easier to consume across multiple systems. SIEMs collect log files from many sources and allow users to set rules that trigger alerts that a security operations center (SOC) considers in case they represent security incidents. Testing for an “impossible travel” scenario – in which the same user logs on from New York and London within an hour, suggesting credential theft or other skulduggery – is a common SIEM rule. Many organizations end up with multiple SIEMs, which means complexity for SOCs. Enter researchers from the National University of Singapore and China’s Fudan University, who recently presented a paper [PDF] titled “ARuleCon: Agentic Security Rule Conversion” in which they explain a technique they developed to translate rules so they’re consumable by multiple SIEMs. Lead author Ming Xu told The Register she and her colleagues developed ARuleCon because SIEMs use specific schemas for rules, so a rule created with one SIEM won’t work with another. While some vendors provide translation tools, they don’t offer support for many SIEMs: the authors say Microsoft’s tool shifts Splunk rules into Redmond’s Sentinel SIEM but can’t handle others. “Rule conversion can be performed manually by security experts, which are slow and imposes a heavy workload,” the paper observes. Tools like the Sigma framework aim to help manage and share rules across multiple platforms, but Ming and her co-authors think it, and other existing translation tools, don’t do well with complex or interlinked rules. It’s 2026 so it seems natural to try using an LLM to convert SIEM rules into different formats. The authors say that approach “typically yield a poor accuracy and lacks vendor-specific correctness” because training data used to build LLMs doesn’t include enough data about SIEM rule schemas. “These shortcomings call for a scalable, vendor-neutral, and reliable SIEM-rule conversion framework that retains existing rule value and eases SOC workloads,” the paper states, before explaining how ARuleCon gets the job done with an "agentic RAG [retrieval augmented generation] pipeline that retrieves authoritative official vendor documentation to address the convention/schema mismatches, and Python-based consistency check that running both source and target rules in controlled test environments to mitigate subtle semantic drifts." Long story short, the researchers developed agentic tech capable of translating SIEM rules created using Splunk, Microsoft Sentinel, IBM QRadar, Google Chronicle and RSA NetWitness. Not all the conversions are brilliant, but ARuleCon can translate the proprietary rule format each SIEM vendor uses to multiple rival platforms – and does it more accurately than a generic LLM. ARuleCon therefore makes it possible to export rules from one SIEM and use them in another. Ming told The Register she hopes the tool helps organizations to consider and plan SIEM consolidations or migrations, and emerge with SOCs that can more easily detect the signals of security threats and stop worrying about noise from multiple alerts. ®
Categories: News

Singapore boffins get diverse SIEMs singing in harmony with agentic rule translation

The Register - Tue, 05/05/2026 - 03:12
Vendors all use different formats. This tech translates them all so you can smooth your SOC

Academics from Singapore and China have found a way to make AI useful for cyber-defenders, by creating a technique that translates rules from diverse Security Information and Event Managements (SIEMs) so they’re easier to consume across multiple systems.…

Categories: News

Kids say they can beat age checks by drawing on a fake mustache

The Register - Mon, 04/05/2026 - 21:50
It’s been months since the UK government began requiring stronger age checks under the Online Safety Act, and recent research suggests those measures are falling short of keeping kids away from harmful content. In some cases, even drawing on a mustache has been reported as enough to fool age detection software. Like keeping booze away from teenagers or nudie mags out of the hands of young lads, slapping a big “restricted, 18+” label on parts of the internet hasn't stopped kids testing the limits. Those limits, according to UK online safety group Internet Matters, are easy to sidestep. The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required.  The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters.  Dude, want some TikTok? My mom will hook us up Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency.  More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.  "When speaking to parents and children about these situations, they described scenarios in which parents felt they understood the risks involved and, based on their knowledge of their child, were confident the activity was safe," Internet Matters said of parents who let their kids engage in risky behavior as long as they did it where they could be supervised.  What this means for a major part of the OSA - namely keeping kids from accessing harmful content online - is that it’s falling short. Internet Matters has data to that end, too. Half of children (49 percent) who responded to the group's survey said that they've encountered harmful content online recently, suggesting that even those who don't circumvent age gates are still finding it in their feeds.  So, what can be done to make kids' online safety more effective? Parents told Internet Matters that lawmakers need to do more, and CEO Rachel Huggins agreed that they need help.  "Stronger action is needed from both government and industry to ensure that children can only access online services appropriate for their age and stage and where safety is built in from the outset, rather than added in response to harm," Huggins said in the report.  The Internet Matters chief pointed to the prime minister’s recent talks with social media firms about tackling online harms, describing the moment as “a timely opportunity for positive change.” ®
Categories: News

Kids say they can beat age checks by drawing on a fake mustache

The Register - Mon, 04/05/2026 - 21:50
46% say age checks are easy to bypass, and nearly a third admit getting around them

It’s been months since the UK government began requiring stronger age checks under the Online Safety Act, and recent research suggests those measures are falling short of keeping kids away from harmful content. In some cases, even drawing on a mustache has been reported as enough to fool age detection software.…

Categories: News

Shadow IT has given way to shadow AI. Enter AI-BOMs

The Register - Mon, 04/05/2026 - 16:04
When it comes to securing enterprise supply chains, now heavily infused with AI applications and agents, a software bill of materials (SBOM) no longer provides a complete inventory of all the components in the environment. Enter AI-BOMs. While a traditional SBOM includes all of the software packages and dependencies in the organization, an AI-BOM aims to cover the gaps introduced by AI assets by providing visibility across all of the models, datasets, SDK libraries, MCP servers, ML frameworks, agents, agentic skills, prompts, and other AI tools - plus how these AI components interact with each other and connect to workflows. "Imagine if AI is a birthday cake in the middle of this room, but you don't know how it got there," Ian Swanson, VP of AI security at Palo Alto Networks said in an interview with The Register. "You don't know the recipe, you don't know the ingredients, you don't know the baker. Would you eat a slice of that cake?" A lot of organizations are eating the cake anyway. In addition to the company-sanctioned models and AI used in the tech stack, there's also the problem of "shadow AI" - we used to call this "shadow IT" - and these unsanctioned tools also need to be brought out of the shadows so they can be accounted for. This includes all the vibe coding platforms and agents that individual employees spin up, along with any external chatbots they interact with on work computers and potentially input sensitive corporate data into.  To secure all of these AI ingredients baked into the cake, companies first need to know what they are, what they connect to, and how they are being used. "In general, organizations that are trying to wrap their head around AI security," Amy Chang, Cisco's head of AI threat intelligence and security research told The Register. "They want a way to be able to identify what AI assets exist in their environment. A tool like the AI bill of materials is one of those first places that you can start to get a better understanding of what exists." Up next: model provenance Cisco previously open sourced its AI-BOM, making it free for anyone to scan codebases, container images, and cloud environments to produce this bill of materials. On Friday, it also made available its Model Provenance Kit as an open source tool to track model provenance. In a blog announcing the new repository, Chang and other AI researchers describe it as a DNA test for AI models, and it determines provenance using one of two modes: compare or scan. Compare mode takes any two models and shows their similarity across metadata, tokenizer structure, weight-level signals along with a final composite score. Scam mode starts with a single model and matches it against a database to determine the closest lineage candidates - and to help with this mode, Cisco also released a model fingerprint database covering about 150 base models across more than 45 families and over 20 publishers. Chang told us that the new AI tool performs two gate checks. "First, at the metadata level, it compares the information from the base model with the fine-tuned version of the model to delineate some sort of provenance-linked relationship - like this was derived from Meta Llama 4, or derived from Alibaba Qwen3," she said. "Then, what we do is look at weight-based signifiers. So now we're providing a sort of verifiable, repeatable and provable way to attest that the models that you use and deploy, that are customer facing, that are ingesting all this data, are truly the models that that you're supposed to be using, or that that are within the confines of your risk tolerance." During our interview, Chang pointed to Cursor's Composer 2, which is partly built on Kimi 2.5, a Chinese open source model. "They were very quick to admit that, yes, we used the Chinese model to build this," she said. "But that could have regulatory or compliance risk." Case in point: The European Union's AI Act mandates organizations document training data, characteristics of training methodology, and risk assessments for "high-risk systems." Google's Wiz, in its AI-BOMs, also accounts for all of the tools in the developers' workstation, such as a laptop or integrated development environment, that went into building the AI application. "Many people define visibility or BOMs by what's actually in the final artifact, but we also extend the definition of BOMs in general and AI-BOMs in particular to include the AI tools that went into building that application," Ziad Ghalleb, Wiz technical product marketing manager, told us.  "And then another important aspect is the identities that are attached to these AI workloads, because all these agents or models, tools, etc., are tied to a specific identity inside your environment," Ghalleb added. "So you need to be looking at these non-human identities that are related to these systems. It's not just the resources. It's also the identities and the permission sets that are tied to them." All of this boils down to visibility and security. "If you don't have visibility of these workloads, then you can't really understand what it is to protect," Swanson said.  Protection against poisonings Enterprises aren't the only ones madly rushing to incorporate AI tools into their workloads and processes, as everyone who reads The Reg likely knows. Criminals are also using these same tools to move faster and make their attacks more efficient. As Sherrod DeGrippo, Microsoft's GM of global threat intelligence, told The Register in a previous interview: This includes tasks such as performing reconnaissance on compromised computers, and standing up and managing attack infrastructure. "Agentic, automated reconnaissance against systems is something that is worth taking a look at," DeGrippo said. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity." According to Swanson, this is also a case where having an AI-BOM can help defenders respond faster. He says he can't name the company, but in one incident that Palo Alto Networks responded to, a criminal group used AI to scout out the victim organization and locate exposed endpoints.  "One of the things that they did is get access to system prompts, the instructions to an AI workload that tells it what it can do, and what it can't do," Swanson said. And once the attacker gained access to the company's internal AI's system prompts, they modified them to force the AI to do things that it shouldn't - like steal data, and send it to an external email account. An AI-BOM would provide an understanding of the AI system's configurations and dependencies at a specific state in time - and also indicate any changes. "If you had understanding of state and understanding of state changes, then you would be able to go back to an AI bill of materials and say: 'What system prompt was used within the ingredients to create the AI application?' And then see it's changed from a prior state to a new state. So we should probably check this and see if there's anything bad that's happening here," Swanson said. "And in that case, you'd be able to catch it." Other supply chain attacks such as model and skills poisoning underscore the risks of not knowing what AI tools are in an IT environment.  "Skills that people use in coordination with a lot of these coding assistants are pretty easy to tamper with, and so it's important to be able to scan them to make sure that somebody is not manipulating the capabilities," Swanson said. If a skill is supposed to provide a weather forecast, it shouldn't also steal credentials or leak secrets, he explained. "Understand state changes, constantly scan these artifacts for supply chain risks, and then at the point of runtime, when your AI application is live, also look at all communications to make sure that nothing bad is happening," Swanson said. AI-BOMs (and their software counterparts) can also help organizations quickly identify compromised open source code running on corporate systems. For example: the recent rash of poisoned npm and PyPI packages and earlier Shai-Hulud worm credential stealer attacks. Both of these campaigns targeted code commonly integrated into AI applications. Even in the absence of a CVE identifier, an AI-BOM lets users query "related libraries or packages," and then identify any malicious versions in their environment, Ghalleb said. "There's no CVE attached to them, but at least you know how to remove these to contain an evolving threat." ®
Categories: News

Shadow IT has given way to shadow AI. Enter AI-BOMs

The Register - Mon, 04/05/2026 - 16:04
'If you don't have visibility, you can't understand what to protect'

When it comes to securing enterprise supply chains, now heavily infused with AI applications and agents, a software bill of materials (SBOM) no longer provides a complete inventory of all the components in the environment. Enter AI-BOMs.…

Categories: News

If the vote you rocked, your personal info can be grokked

The Register - Mon, 04/05/2026 - 10:06
Your voter data could be used against you. A foreign intelligence service that wished to identify the family members of deployed military personnel could do so by cross-referencing public voter record data and social media posts. An employer who only wanted to hire employees with a specific political affiliation could do so by analyzing the primary ballot history of job applicants. An identity fraud ring seeking to open credit accounts in the names of other people could identify voters whose mail has been returned (via voter file suspense indicators) to take over those addresses using bogus change-of-address requests. These scenarios are possible thanks to the ability to link publicly available voter data to other data sets, according to Noah M. Kenney, founder of consultancy Digital 520. "I picked two different counties that kind of represented opposite ends of the spectrum," Kenney told The Register in a phone interview.  "In Texas, they hide a lot of information and then North Carolina makes a lot of it public in terms of the specific records. And what I was looking at specifically is if you go and merge this data set or link this data set with other data sets, how likely are you to be able to re-identify a person?" More than 25 years ago, research by Latanya Sweeney, currently a professor at Harvard, demonstrated that most of the US population (87 percent) could be identified with just three anonymous data points – a five-digit ZIP code, gender, and date of birth. Those results can be improved when combined with other data sets. And recent research has shown that the process of identifying people from seemingly anonymous data points becomes even easier with AI tools. In a research paper titled "Public Voting Records: A Record, or an Attack Surface?", Kenney describes how he analyzed public records from Travis County, Texas, and Robeson County, North Carolina to show that the adversarial scenarios cited above are practical with public data. The Texas file provides fewer data points than the North Carolina file, but the research suggests redaction doesn't make much of a difference in the re-identification scenarios evaluated. Table 1 — Disclosure regime comparison With the less detailed Texas info, Kenney was able to use a Python script to link the voter records to other public records like the Federal Election Commission's individual-contribution data. "We pulled 500 contribution records for ZIP 78704 (an Austin-core ZIP including South Congress and Travis Heights neighborhoods) from the 2024 cycle via the FEC OpenAPI on May 1, 2026," he explains in his paper.  "We de-duplicated to 181 unique contributors by exact match on (last name, first name, ZIP), and inner-joined to the voter file on the same key, no fuzzy matching, no nickname normalization, no suffix handling. Of the 181 contributors, 105 (58.01 percent) matched any voter record and 95 (52.49 percent) matched a uniquely-identifiable voter. Of the 105 matches, 74.3 percent had a non-trivial employer field in FEC." That 52 percent individual match rate for identifying individuals from voter rolls and FEC data, Kenney said, would be more like 90–95 percent using the kinds of tools commercial data brokers employ. The North Carolina voter dataset includes a phone number for the majority of voters. According to the paper, 88.53 percent of voters who have a phone number listed have a number that is unique within the county. As a result, external datasets containing phone numbers can be joined at a similar rate using this field as a key to narrow down and identify likely individuals. Among the report's other findings:  Name and ZIP code uniquely identify 95.81 percent of Texas voters and 87.79 percent of North Carolina voters. Among Travis County voters who have voted in 20 or more elections, 98.4 percent have a turnout pattern that is unique to them, making that data point a fingerprint. Texas' redaction of date of birth as a privacy measure is undermined by the publication of the voter registration data, which allows 28 percent of voters to be uniquely identified when combined with ZIP and gender. The Travis County voter file currently exposes 320 deployed military families through the publication of APO/FPO codes for military mailings. There's currently no comprehensive federal privacy law. While many states have privacy rules, there's a lot of variation. "Even within a specific state, most of the counties are individually handling these public records requests, so they all handle them differently across the country," said Kenney.  "Some of them, you can't get them. Some of them, you need an ID to get them. Some of them you have to go through a request process for public records or you have to pay for them. The two counties I used are both freely available. You can go and download zip files of them without even putting in an email address or your name from anywhere in the world." Kenney said that he believes that access controls represent a better answer than redacting certain data fields, pointing to his findings that show redaction doesn't necessarily protect against privacy harms. He recommends measures like rate limits on bulk file requests, identity verification, requiring state ID, maintaining audit logs of requests, and prohibiting commercial resale of these records – because they're often used by data brokers. Beyond specific fixes based on his findings – Texas should generalize voter registration dates to a year rather than a day and armed forces mailing codes should be excluded from voter rolls – Kenney argues that people should be allowed to opt out of inclusion in public data sets and that general data privacy protections would be helpful. Last week, House Republicans introduced the Secure Data Act in an effort to create federal privacy rules. But Kenney says that it's significantly weaker than a lot of state regulations and he doesn't expect it will pass. "The industry consensus is that the likelihood of it passing is extremely low, at least in its current form," he said. "This represents the third attempt to pass comprehensive data privacy in recent years, most recent being the American Data Privacy and Protection Act, which failed to pass." ®
Categories: News

Pages

Subscribe to Sec Tec Limited aggregator - News