Fresh from the feed
Filter by timeframe and category to zero in on the moves that matter.
The finger.exe command is used in ClickFix attacks.
The SANS Holiday Hack Challenge™ 2025 is available.
Aalto University researchers have developed a method to execute AI tensor operations using just one pass of light. By encoding data directly into light waves, they enable calculations to occur naturally and simultaneously. The approach works passively, without electronics, and could soon be integrated into photonic chips. If adopted, it promises dramatically faster and more energy-efficient AI systems.
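The key claim is that a passive linear optical element effectively applies a fixed matrix to whatever data is encoded in the incoming light, so an entire matrix-vector product happens during a single pass of propagation rather than as many sequential multiply-accumulate steps. A minimal numerical sketch of that framing (the matrix, sizes, and encoding below are illustrative assumptions, not details from the Aalto paper):

```python
import numpy as np

# A passive linear optical element can be modeled as a fixed matrix W.
# Data encoded into the incoming light field acts as the vector x, and the
# light emerging from the element carries y = W @ x: the whole matrix-vector
# product happens in one pass of propagation, with no electronic
# multiply-accumulate loop.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # illustrative "optical" transform (assumed shape)
x = rng.normal(size=8)        # data encoded into the light wave (assumed size)

y_single_pass = W @ x         # what the light computes while propagating

# Electronic baseline: the same result accumulated step by step.
y_sequential = np.array([sum(W[i, j] * x[j] for j in range(8)) for i in range(4)])
assert np.allclose(y_single_pass, y_sequential)
print(y_single_pass)
```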

In this tutorial, we explore how to build agentic systems that think beyond a single interaction by utilizing memory as a core capability. We walk through how we design episodic memory to store experiences and semantic memory to capture long-term patterns, allowing the agent to evolve its behaviour over multiple sessions. As we implement planning, […] The post How to Build Memory-Powered Agentic AI That Learns Continuously Through Episodic Experiences and Semantic Patterns for Long-Term Autonomy appeared first on MarkTechPost.
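As a rough sketch of the episodic/semantic split the tutorial describes, the structure below stores raw experiences and distills them into reusable patterns that persist across sessions. All class and method names here are invented for illustration and are not taken from the MarkTechPost code:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One concrete experience: what the agent was asked, did, and how it went."""
    task: str
    action: str
    outcome: str  # e.g. "success" or "failure"

@dataclass
class AgentMemory:
    episodic: list = field(default_factory=list)        # raw experiences
    semantic: Counter = field(default_factory=Counter)  # distilled long-term patterns

    def record(self, ep: Episode) -> None:
        self.episodic.append(ep)
        # Distill the episode into a coarse pattern that persists across sessions.
        self.semantic[(ep.task, ep.action, ep.outcome)] += 1

    def best_action(self, task: str) -> str | None:
        # Prefer the action with the most recorded successes for this task.
        successes = {a: n for (t, a, o), n in self.semantic.items()
                     if t == task and o == "success"}
        return max(successes, key=successes.get) if successes else None

memory = AgentMemory()
memory.record(Episode("summarize report", "chunk-then-summarize", "success"))
memory.record(Episode("summarize report", "single-pass-summary", "failure"))
print(memory.best_action("summarize report"))  # -> "chunk-then-summarize"
```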

Cerebras has released MiniMax-M2-REAP-162B-A10B, a compressed Sparse Mixture-of-Experts (SMoE) causal language model derived from MiniMax-M2 using the new Router-weighted Expert Activation Pruning (REAP) method. The model retains the behavior of the original MiniMax-M2 (230B total parameters, 10B active) while pruning experts and reducing memory for deployment-focused workloads such as coding agents and tool […] The post Cerebras Releases MiniMax-M2-REAP-162B-A10B: A Memory Efficient Version of MiniMax-M2 for Long Context Coding Agents appeared first on MarkTechPost.
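The general shape of router-weighted expert pruning can be sketched as: score each expert by how much router probability mass it actually receives over a calibration set, then drop the lowest-scoring experts. The toy example below uses that simplified saliency rule only; the exact REAP criterion differs and is described in the paper:

```python
import numpy as np

# Simplified sketch of router-weighted expert pruning for one Sparse MoE layer.
# gate_probs: router probabilities over experts for a batch of calibration tokens.
rng = np.random.default_rng(0)
num_tokens, num_experts = 1024, 16
gate_logits = rng.normal(size=(num_tokens, num_experts))
gate_probs = np.exp(gate_logits) / np.exp(gate_logits).sum(axis=1, keepdims=True)

# Saliency proxy: total router weight each expert receives across tokens.
saliency = gate_probs.sum(axis=0)

keep = 11  # e.g. prune 16 experts down to 11 for a smaller deployment footprint
kept_experts = np.argsort(saliency)[-keep:]
print("experts kept:", sorted(kept_experts.tolist()))
```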
AI Debt Explosion Has Traders Searching for Cover: Credit Weekly Bloomberg.com

The botnet malware known as RondoDox has been observed targeting unpatched XWiki instances via a critical security flaw that could allow attackers to achieve arbitrary code execution. The vulnerability in question is CVE-2025-24893 (CVSS score: 9.8), an eval injection bug that could allow any guest user to perform arbitrary remote code execution through a request to the "/bin/get/Main/
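For defenders, one low-effort triage step is to scan web access logs for requests that hit that endpoint and carry script-macro markers typical of XWiki eval-injection attempts. A rough sketch, where the log path and the marker strings are assumptions for illustration (the XWiki advisory and vendor write-ups remain the authoritative source for indicators):

```python
import re
from pathlib import Path

# Rough triage sketch: flag access-log lines that hit the /bin/get/Main/ endpoint
# and contain script-macro markers often seen in XWiki eval-injection attempts.
LOG = Path("/var/log/nginx/access.log")      # assumed log location
endpoint = re.compile(r"/bin/get/Main/\S*", re.IGNORECASE)
markers = ("groovy", "async", "{{")          # illustrative markers, not exhaustive

for line in LOG.read_text(errors="ignore").splitlines():
    if endpoint.search(line) and any(m in line.lower() for m in markers):
        print(line)
```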

The newly sequenced RNA is 25,000 years older than the previous record-holder, opening a new window into genetic evolution and revealing a surprise about a famous mammoth mummy.
We’re releasing Slither-MCP, a new tool that augments LLMs with Slither’s unmatched static analysis engine. Slither-MCP benefits virtually every use case for LLMs by exposing Slither’s static analysis API via tools, allowing LLMs to find critical code faster, navigate codebases more efficiently, and ultimately improve smart contract authoring and auditing performance.

How Slither-MCP works

Slither-MCP is an MCP server that wraps Slither’s static analysis functionality, making it accessible through the Model Context Protocol. It can analyze Solidity projects (Foundry, Hardhat, etc.) and generate comprehensive metadata about contracts, functions, inheritance hierarchies, and more. When an LLM uses Slither-MCP, it no longer has to rely on rudimentary tools like grep and read_file to identify where certain functions are implemented, who a function’s callers are, and other complex, error-prone tasks. Because LLMs are probabilistic systems, in most cases they are only probabilistically correct. Slither-MCP helps establish a ground truth for LLM-based analysis using traditional static analysis: it reduces token use and increases the probability that a prompt is answered correctly.

Example: Simplifying an auditing task

Consider a project that contains two ERC20 contracts: one used in the production deployment, and one used in tests. An LLM is tasked with auditing a contract’s use of ERC20.transfer() and needs to locate the source code of the function. Without Slither-MCP, the LLM has two options:

- Try to resolve the import path of the ERC20 contract, then call read_file to view the source of ERC20.transfer(). This usually requires multiple calls to read_file, especially if the call to ERC20.transfer() goes through a child contract that inherits from ERC20. Regardless, this option is error-prone and tool-call intensive.
- Try to use the grep tool to locate the implementation of ERC20.transfer(). Depending on how the grep call is structured, it may return the wrong ERC20 contract.

Both options are non-ideal, error-prone, and unlikely to be correct with a high degree of confidence. Using Slither-MCP, the LLM simply calls get_function_source to locate the source code of the function.

Simple setup

Slither-MCP is easy to set up. It can be added to Claude Code with the following command:

    claude mcp add --transport stdio slither -- uvx --from git+https://github.com/trailofbits/slither-mcp slither-mcp

It is also easy to add Slither-MCP to Cursor. First run:

    sudo ln -s ~/.local/bin/uvx /usr/local/bin/uvx

Then add the following to your ~/.cursor/mcp.json:

    {
      "mcpServers": {
        "slither-mcp": {
          "command": "uvx --from git+https://github.com/trailofbits/slither-mcp slither-mcp"
        }
      }
    }

Figure 1: Adding Slither-MCP to Cursor

For now, Slither-MCP exposes the subset of Slither’s analysis engine that we believe LLMs would benefit most from consuming. This includes the following functionality:

- Extracting the source code of a given contract or function for analysis
- Identifying the callers and callees of a function
- Identifying a contract’s derived and inherited members
- Locating potential implementations of a function based on signature (e.g., finding concrete definitions for IOracle.price(...))
- Running Slither’s exhaustive suite of detectors and filtering the results

If you have requests or suggestions for new MCP tools, we’d love to hear from you.

Licensing

Slither-MCP is licensed AGPLv3, the same license Slither uses.
This license requires publishing the full source code of your application if you use it in a web service or SaaS product. For many tools, this isn’t an acceptable compromise. To address this, we are now offering dual licensing for both Slither and Slither-MCP. With dual licensing, Slither and Slither-MCP can be used to power LLM-based security web apps without publishing your entire source code, and without having to spend years reproducing their feature set. If you are currently using Slither in your commercial web application, or are interested in using it, please reach out.
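To make the auditing example above concrete: what get_function_source does over MCP is roughly what a short script against Slither’s Python API would do by hand. A sketch under that assumption (attribute names can vary slightly across Slither versions, and this is not the MCP server’s actual implementation):

```python
from slither.slither import Slither

# Locate the production ERC20.transfer() in a project that also contains a
# test-only ERC20, without grep or guessing import paths.
sl = Slither(".")  # point at a Foundry/Hardhat project root

for contract in sl.contracts:
    if contract.name != "ERC20":
        continue
    for fn in contract.functions_declared:
        if fn.name == "transfer":
            # source_mapping identifies the file and line range of the
            # implementation, which is what the MCP tool returns to the LLM.
            print(fn.canonical_name, fn.source_mapping)
```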

Plus: State-sponsored AI hacking is here, Google hosts a CBP face recognition app, and more of the week’s top security news.

One of the Mac’s most popular productivity apps is incorporating generative artificial intelligence in a way that keeps it offline, private, and customizable.

The U.S. Department of Justice (DoJ) on Friday announced that five individuals have pleaded guilty to assisting North Korea's illicit revenue generation schemes by enabling information technology (IT) worker fraud in violation of international sanctions. The five individuals are listed below: Audricus Phagnasay, 24; Jason Salazar, 30; Alexander Paul Travis, 34; Oleksandr Didenko, 28; and Erick

Deals between media conglomerates and tech companies serve both sets of interests, while leaving artists by the wayside. The world’s biggest music company is now in the AI business. Last year, Universal Music Group (UMG), alongside labels including Warner Records and Sony Music Entertainment, sued two AI music startups for allegedly using their recordings to train text-to-music models without permission. But last month, UMG announced a deal with one of the defendants, Udio, to create an AI music platform. Their joint press release offered assurances that the label will commit to “do what’s right by [UMG’s] artists”. However, one advocacy group, the Music Artists Coalition, responded with the statement: “We’ve seen this before – everyone talks about ‘partnership’, but artists end up on the sidelines with scraps.” Alexander Avila is a video essayist, writer and researcher
Like many have reported, we too noticed exploit attempts for CVE-2025-64446 in our honeypots.

Two campaigns delivering Gh0st RAT to Chinese speakers show a deep understanding of the target population's virtual environment and online behavior. The post Digital Doppelgangers: Anatomy of Evolving Impersonation Campaigns Distributing Gh0st RAT appeared first on Unit 42.
Google to Invest $40 Billion in New Data Centers in Texas Bloomberg.com

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and impose steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it is an age-gating mandate that could be imposed on nearly every public-facing AI chatbot, from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day. EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.

Young People’s Access to Legitimate AI Tools Could Be Cut Off Entirely.

The GUARD Act doesn’t give parents a choice: it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context. The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

By treating all young people the same, whether seven or seventeen, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens access to online spaces doesn’t make them safer; it just keeps them uninformed and unprepared for adult life. The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, the bill will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different.

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users, young and old, before allowing them to speak, learn, or engage with their AI tools. Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks. EFF has long documented the dangers of age-verification systems:

- They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached, exposing millions to identity theft and other harms.
- They implement mass surveillance systems and ruin anonymity. To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
- They disproportionately harm vulnerable groups. Many people, especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse, avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
- They entrench Big Tech. Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach, whether facial or biometric scans, government ID uploads, or behavioral or account analysis, creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop.

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses, including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools. The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool, all of which respond to natural language prompts and dynamically generate conversational text. Meanwhile, the GUARD Act’s definition of an “AI companion” (a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”) will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm. Both of these definitions are imprecise and unconstitutionally overbroad.

When combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous instrument. Protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is not the way to do it.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love. Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.

Reaction follows Wall Street’s worst day in a month and an unprecedented slump in investment in China. Global markets suffered another day of volatile trading after a tech sell-off that fuelled Wall Street’s worst day in a month and weak economic data from China showed an unprecedented slump in investment. The FTSE 100 fell by 1.1% in London, closing down about 100 points at 9,698, as bellwether banking stocks tumbled. Barclays, Lloyds and NatWest slumped between 2.7% and 3.6%.

A new US law enforcement initiative is aimed at crypto fraudsters targeting Americans—and now seeks to seize infrastructure it claims is crucial to notorious scam compounds.
Comet Assistant puts you in control Perplexity