There’s a pattern hiding in today’s tech news, and once you see it, you can’t unsee it.

Everybody wants automation. Fair enough. We should want automation. If a machine can kill boring work, reduce friction, and let more people build more things faster, that’s good. That’s the whole point. Civilization does not level up by making humans manually click through dropdown menus forever.

But here’s the split: there’s the useful future, where software quietly gets better and gives normal people more power. Then there’s the slop future, where every company bolts AI onto a bad product, calls it innovation, and somehow makes the user experience worse. Chat, is this real? Unfortunately, yes.

Take WordPress.com, which now says AI agents can draft, edit, and publish posts, manage comments, fix metadata, and reorganize tags and categories. On one level, that rules. The web has needed lower-friction publishing tools for years. If a solo creator, a tiny newsroom, a local club, or some weird niche forum can stand up and run a site with far less overhead, that is genuinely powerful. More leverage for small players is good. Always.
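For the curious, the mechanics here are mundane, which is kind of the point: WordPress already exposes a REST API, so "an AI agent can publish" mostly means a program hitting an endpoint. Here's a minimal sketch of creating a draft post via the standard `/wp-json/wp/v2/posts` route (the site URL and token below are placeholders, and WordPress.com's hosted API paths differ slightly from the self-hosted route shown):

```python
import json
import urllib.request

# Placeholders -- swap in a real site URL and application password/token.
SITE = "https://example.com"
TOKEN = "YOUR_API_TOKEN"

def build_post_request(title: str, content: str, status: str = "draft") -> urllib.request.Request:
    """Build a request to the standard WordPress REST posts endpoint.

    An agent would generate title/content and flip status to "publish"
    once (ideally) a human signs off. This only constructs the request;
    urllib.request.urlopen(req) would actually send it.
    """
    payload = json.dumps(
        {"title": title, "content": content, "status": status}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{SITE}/wp-json/wp/v2/posts",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )

req = build_post_request("Hello", "<p>Drafted by an agent.</p>")
print(req.full_url)
```

That's the whole barrier to entry now: a few dozen lines and a token. Which is exactly why both halves of the trade below follow.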

But there’s a second-order effect here, and pretending it doesn’t exist is cope. WordPress is part of the plumbing of the modern web. If you make machine-written publishing radically easier, you don’t just unlock more useful sites — you also unlock industrial-scale content sludge. The barrier to creating something valuable drops, yes. The barrier to generating infinite SEO oatmeal also drops through the floor. That’s the trade.

And this is exactly why Microsoft’s Windows 11 reversal is so interesting. Microsoft is now promising fewer unnecessary Copilot entry points, quieter updates, better performance, a movable taskbar, and a more trustworthy search experience. In plain English: the company got dragged hard enough that it had to admit users do not actually enjoy having random AI tentacles stapled onto every surface of the operating system.

That’s the real lesson. People are not anti-AI. People are anti-annoying. They are anti-bloat. They are anti-being treated like NPCs in somebody else’s quarterly earnings call. If the feature is useful, fast, optional, and respectful, people will use it. If it’s a giant flashing button jammed into Notepad because someone in corporate wanted to say “Copilot engagement” on a slide deck, people are going to hate it. Correctly.

Then you look at Meta, which is rolling out more AI-powered support and moderation systems across Facebook and Instagram. Again, there is a real upside here. Better scam detection? Good. Faster support for locked-out users? Good. Catching obvious garbage without traumatizing armies of low-paid contractors who’ve spent years doing the psychological sewage work of the internet? Also good.

But the first-principles question is not “can AI automate moderation?” Obviously it can automate parts of moderation. The real question is: who is accountable when these systems are wrong, biased, gamed, or just conveniently opaque? Because companies love saying “AI helps us scale” right up until you need a real appeal path, a human explanation, or a clear line of responsibility. Suddenly it’s fog machine time.

And then, because 2026 refuses to be normal, we’ve got serious orbital data center talk. SpaceX is pushing the idea of solar-powered data-center satellites — server farms in orbit, basically — as a way around the water, land, and power constraints crushing AI infrastructure on Earth. On pure first principles, I get the attraction. If terrestrial compute is eating local grids, chewing through water, and turning communities into collateral damage for someone else’s inference pipeline, of course people are going to start asking whether the compute should go somewhere else.

This is the part where the haters say it sounds like sci-fi nonsense. Sure. But a lot of frontier tech sounds dumb right before it becomes real. The catch is that “just put it in orbit” doesn’t erase the externalities — it changes them. Now the question becomes orbital congestion, debris, access control, and whether the future of high-end compute gets concentrated even harder into a handful of giant players with rockets, regulators, and ridiculous capex.

That’s the throughline in all of this: control.

Who controls publishing when AI can run a site? Who controls the personal computer when the OS vendor keeps stuffing it with agenda-driven UI junk? Who controls speech and access when moderation pipelines get abstracted into machine judgment? Who controls compute when the AI boom turns infrastructure into a geopolitical resource?

Even the smaller Google Messages story — live location sharing, finally — fits the same frame. It’s not sexy. It’s not keynote bait. It’s just a thing regular people actually use. Which is why it matters. Tech keeps trying to impress people with giant ideology-coated product visions while quietly losing the plot on basic quality-of-life features. The companies winning the next decade are not the ones screaming “AI” the loudest. They’re the ones building tools that work, respect the user, and increase real agency.

That’s the split. The useful future gives you more capability with less friction. The slop future gives you more abstraction, more dependency, more noise, and less control while insisting this is progress.

Pick carefully, because the infrastructure decisions getting made right now are going to decide which version of the future becomes normal.

Sources: TechCrunch on WordPress AI publishing, The Verge on Microsoft’s Windows 11 fix plan, Meta on AI support and safety systems, The Verge on orbital data-center satellites, and Google Messages help on live location sharing.