Feed aggregator
"Rockstar Games fired dozens of employees," reports Bloomberg, "in a move that a British trade union said was designed to prevent the workers from unionizing. The company said they were fired for misconduct."
The Grand Theft Auto maker terminated between 30 and 40 staffers across multiple offices in the UK and Canada on Thursday, according to a spokesperson for the Independent Workers' Union of Great Britain (IWGB). All of the employees were part of a private trade union chat group on Discord and were either members of the union or attempting to organize at the company, the union spokesperson said.
"Rockstar has just carried out one of the most blatant and ruthless acts of union busting in the history of the games industry," Alex Marshall, president of the IWGB, said in a statement. "This flagrant contempt for the law and for the lives of the workers who bring in their billions is an insult to their fans and the global industry."
On Bluesky, the IWGB posted: "We won't back down, and we're not scared — we will fight for every member to be reinstated."
Bloomberg notes that Grand Theft Auto VI is slated for release on May 26, 2026, "and is expected to be one of the top-selling video games of all time."
Read more of this story at Slashdot.
The Financial Times (a British newspaper) has a rare profile of Fidelity Investments (paywall, archive) with the title “Can Fidelity keep its grip on America’s investments?”. Fidelity is privately-held (49% by the founding Johnson family now in its third generation, 51% by employees) and they don’t seek the limelight. They didn’t grant an interview for this article, and they seem to only disclose the absolute minimum financial information about their company as required by law. I can respect that, but at the same time, I like to better understand the custodian of a big share of my net worth, so I read the article with interest. Here are my takeaways.
Fidelity’s longevity is at least partially due to its willingness to pivot with the times. They were once best known for their actively-managed mutual funds like Magellan, then became a 401(k) behemoth managing trillions, accepted low-cost passive investing options, and even today are more open to crypto than other big companies (a Fidelity stablecoin is coming). They don’t move crazy fast, but they do move thoughtfully.
They are willing to be different things to different people. They have some of the largest companies in the world as customers through 401(k) plans, they are the home to very powerful financial advisors and their billionaire clients, and they also count tiny individuals like myself as clients who trade fewer than 10 times a year, mostly in low-cost (non-Fidelity) ETFs. They've somehow figured out how to balance all these activities and profit from them all:
Astute fee management has also played a part. “Fidelity is a full-fee, full-cost player, not a discounter like Vanguard,” the former employee says. “Abby [Johnson] has masterfully priced her services across asset classes, products and channels.”
Fidelity’s mutual fund fees are competitive. According to Morningstar Direct, the average asset-weighted cost of an active equity fund in the US is 0.59 per cent a year, compared with Fidelity’s 0.43 per cent. Among passive products the average is 0.10 per cent and Fidelity’s is 0.03 per cent.
Fidelity is able to take a longer-term view.
Even in the face of such challenges, its advocates say Fidelity has another important string to its bow. As a private, family-controlled company — Edward and Ned each ran it for over three decades — it is not subject to the demands of quarterly reporting and managing shareholder expectations, helping management to focus on longer-term strategy and innovation.
“I would say this is the secret sauce of the Johnson family,” the former employee says. “They think about 25-year periods. I’m sure [Abby’s] father was petrified about: how do I keep this thing going so that my daughter can take over?”
“As they prepare for the generation coming up behind Abby, they will be thinking about where the next 50mn [customers] are going to come from.”
Overall, Fidelity has the vibe of the sober adult in the room. Not the crypto teenager that can take huge risks since they have nothing to lose. Not the young adult Robinhood trying to break things first and ask for forgiveness later. However, they are also not the old man who complains about everything new and refuses to change their habits out of stubbornness. Based on the new stuff I learned in this article, I still see Fidelity as a good long-term home for my investments.
"It finally happened," writes the GamingOnLinux site:
Linux gamers on Steam as of the Steam Hardware & Software Survey for October 2025 have crossed over the elusive 3% mark. The trend has been clear for some time, and with Windows 10 ending support, it was quite likely this was going to be the time for it to happen as more people try out Linux...
Overall, 3% might not seem like much to some, but again — that trend is very clear and equates to millions of people. The last time Valve officially gave a proper monthly active user count was in 2022, and we know Steam has grown a lot since then, but even going by that original number would put monthly active Linux users at well over 4 million.
Additional details from Phoronix:
The only time Steam on Linux use was close to the 3% mark was when Steam on Linux initially debuted a decade ago and at that time the overall Steam user-base was much smaller than it is today. Long story short, thanks to the ongoing success of Valve's Steam Deck and other handhelds plus Steam Play (Proton) working out so well, these October numbers are the best yet... a hearty 0.41% increase to Linux... landing its overall marketshare at 3.05%. Windows meanwhile was at 94.84% (falling below 95% for the first time in a while) and macOS at 2.11%. For comparison, in October 2024 Steam on Linux was at 2.00%.
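As a rough sanity check on the "well over 4 million" figure: Valve's 2022 year-in-review, its last official count, reported about 132 million monthly active users, and Steam has grown since then, so this arithmetic gives a floor rather than an estimate.

```python
# Floor estimate of monthly active Linux users on Steam.
# 132 million MAU is Valve's 2022 figure (the last officially reported);
# 3.05% is the October 2025 Linux share from the hardware survey.
steam_mau_2022 = 132_000_000
linux_share_oct_2025 = 0.0305

print(round(steam_mau_2022 * linux_share_oct_2025))  # just over 4 million
```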
The Linux-specific data shows SteamOS commanding around 27% of all the Linux installs at large. SteamOS most notably being on the Steam Deck hardware.
Read more of this story at Slashdot.
TechCrunch reports:
OpenAI CEO Sam Altman recently said that the company is doing "well more" than $13 billion in annual revenue — and he sounded a little testy when pressed on how it will pay for its massive spending commitments.
His comments came up during a joint interview on the Bg2 podcast between Altman and Microsoft CEO Satya Nadella about the partnership between their companies. Host Brad Gerstner (who's also founder and CEO of Altimeter Capital) brought up reports that OpenAI is currently bringing in around $13 billion in revenue — a sizable amount, but one that's dwarfed by more than $1 trillion in spending commitments for computing infrastructure that OpenAI has made for the next decade.
"First of all, we're doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer," Altman said, prompting laughs from Nadella. "I just — enough. I think there are a lot of people who would love to buy OpenAI shares."
Altman's answer continued, making the case for OpenAI's business model. "We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing. That AI that can automate science will create huge value...
"We carefully plan, we understand where the technology — where the capability — is going to go, and the products we can build around that and the revenue we can generate. We might screw it up — like, this is the bet that we're making, and we're taking a risk along with that." (That bet-with-risks seems to be the $1.4 trillion in spending commitments — but Altman suggests it's offset by another absolutely certain risk: "If we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale.")
Satya Nadella, Microsoft's CEO, added his own defense, "as both a partner and an investor. There has not been a single business plan that I've seen from OpenAI that they have put in and not beaten it. So in some sense, this is the one place where in terms of their growth — and just even the business — it's been unbelievable execution, quite frankly..."
Read more of this story at Slashdot.
"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.
"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to."
The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. While his smart gadget worked for a while, it just refused to turn on soon after... He sent it to the service center multiple times, wherein the technicians would turn it on and see nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...
[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.
First of all, its Android Debug Bridge, which gives full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security protocol by omitting a crucial file, which caused it to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home.
This by itself isn't unusual. After all, it's a smart vacuum, and it needs that data to navigate around his home. However, the concerning thing is that it was sending off all this data to the manufacturer's server. It makes sense for the device to send this data to the manufacturer, as its onboard SoC is nowhere near powerful enough to process all that data. However, it seems that iLife did not clear this with its customers.
Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.
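The kind of log search described above can be sketched as follows; the log format, field names, and the "disable_boot" payload are hypothetical illustrations, not details from Harishankar's write-up.

```python
# Sketch: scan device logs for a remote command whose timestamp lines up
# with the moment the vacuum stopped working. Log format is assumed.
from datetime import datetime

FAILURE_TIME = datetime(2024, 3, 14, 9, 26, 53)  # example failure moment
TOLERANCE_SECONDS = 5

def find_matching_commands(log_lines):
    """Return (timestamp, payload) for command entries within a few
    seconds of the observed failure time."""
    matches = []
    for line in log_lines:
        # Assumed format: "2024-03-14T09:26:53 CMD <payload>"
        try:
            stamp_str, kind, payload = line.split(" ", 2)
            stamp = datetime.fromisoformat(stamp_str)
        except ValueError:
            continue  # skip malformed lines
        if kind == "CMD" and abs((stamp - FAILURE_TIME).total_seconds()) <= TOLERANCE_SECONDS:
            matches.append((stamp, payload))
    return matches

log = [
    "2024-03-14T09:26:10 TELEMETRY battery=87",
    "2024-03-14T09:26:53 CMD disable_boot",   # lines up with the failure
    "2024-03-14T09:30:01 TELEMETRY battery=87",
]
print(find_matching_commands(log))
```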
Thanks to long-time Slashdot reader registrations_suck for sharing the article.
Read more of this story at Slashdot.
"Ubuntu's decision to switch to Rust-based coreutils in 25.10 hasn't been the smoothest ride," writes the blog OMG Ubuntu, "as the latest — albeit now resolved — bug underscores."
[Coreutils] are used by a number of processes, apps and scripts, including Ubuntu's own unattended-upgrades process, which automatically checks for new software updates. Alas, the Rust-based version of date had a bug which meant Ubuntu 25.10 desktops, servers, cloud and container images were not able to automatically check for updates when configured. Unattended-upgrades hooks into the date utility to check the timestamp of a reference file recording when an update check was last run and, past a certain date, checks again. But date was always incorrectly reporting the current date instead.
A fix has been issued, so only Ubuntu 25.10 installs with rust-coreutils 0.2.2-0ubuntu2 (or earlier) are affected.
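The failure mode can be illustrated with a minimal sketch of the stale-check logic; the function and the one-day interval are illustrative, not Ubuntu's actual unattended-upgrades implementation.

```python
# Sketch: only re-check for updates when the reference stamp is older
# than the configured interval. If a buggy `date` always reports the
# current time for the stamp file, the stamp never looks old and the
# automatic check is silently skipped.
from datetime import datetime, timedelta

CHECK_INTERVAL = timedelta(days=1)  # illustrative interval

def should_check_for_updates(stamp_reported: datetime, now: datetime) -> bool:
    """True if the last recorded check is at least CHECK_INTERVAL old."""
    return now - stamp_reported >= CHECK_INTERVAL

now = datetime(2025, 10, 20, 12, 0)
last_check = datetime(2025, 10, 17, 12, 0)  # three days ago

# Correct behavior: the stamp file's real (old) timestamp triggers a check.
print(should_check_for_updates(last_check, now))

# Buggy behavior: `date` reports "now" for the stamp, so no check ever runs.
print(should_check_for_updates(now, now))
```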
Read more of this story at Slashdot.
"AI isn't just a tool anymore; it's an integral part of the development experience," argues GitHub's blog. So "Agents shouldn't be bolted on. They should work the way you already work..."
So this week GitHub announced "Agent HQ," which CNBC describes as a "mission control" interface "that will allow software developers to manage coding agents from multiple vendors on a single platform."
Developers have a range of new capabilities at their fingertips because of these agents, but it can require a lot of effort to keep track of them all individually, said GitHub COO Kyle Daigle. Developers will now be able to manage agents from GitHub, OpenAI, Google, Anthropic, xAI and Cognition in one place with Agent HQ. "We want to bring a little bit of order to the chaos of innovation," Daigle told CNBC in an interview. "With so many different agents, there's so many different ways of kicking off these asynchronous tasks, and so our big opportunity here is to bring this all together." Agent HQ users will be able to access a command center where they can assign, steer and monitor the work of multiple agents...
The third-party agents will begin rolling out to GitHub Copilot subscribers in the coming months, but Copilot Pro+ users will be able to access OpenAI Codex in VS Code Insiders this week, the company said.
"We're into this wave two era," GitHub's Mario Rodriguez told VentureBeat, an era that's "going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native...."
Or, as VentureBeat sees it, GitHub "is positioning itself as the essential orchestration layer beneath them all..."
Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape...
The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level... Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.
Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration.

Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time...

Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts. This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.
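The reporting above doesn't include a published schema for these files, so the following AGENTS.md is a hypothetical sketch built only from the two example rules the article quotes; the section names and branch pattern are invented for illustration.

```markdown
# AGENTS.md — hypothetical sketch; structure not confirmed by GitHub

## Conventions
- Prefer the team's structured logger over ad-hoc console/print output.
- Use table-driven tests for all handlers.

## Guardrails (illustrative)
- Agents may only commit to branches matching `agent/*`.
- Do not modify files under `infra/` without an explicit task.
```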
GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents. The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.
More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate.
"Don't let this little bit of news float past you like all those self-satisfied marketing pitches we semi-hear and ignore," writes ZDNet:
If it works and remains reliable, this is actually a very big deal... Tech companies, especially the giant ones, often like to talk "open" but then do their level best to engineer lock-in to their solution and their solution alone. Sure, most of them offer some sort of export tool, but the barrier to moving from one tool to another is often huge... [T]he idea that you can continue to use your favorite agent or agents in GitHub, fully integrated into the GitHub tool path, is powerful. It means there's a chance developers might not have to suffer the walled garden effect that so many companies have strived for to lock in their customers.
Read more of this story at Slashdot.