> Support, no matter how valued and important to the organisation it is, is never worth $200k/year on the output of 1 person.
This reads more like a reflection of how we’ve historically designed support roles than a fundamental truth about them.
Oxide’s approach seems to invert the premise: instead of paying people based on a low-leverage job description, they design roles where every team member is expected to operate at high leverage — including support engineers. That means hiring differently, scoping differently, and building a product that aligns with that structure.
It’s fair to be skeptical of anything cushioned by VC — but skepticism cuts both ways. Just because traditional comp structures haven’t empowered support doesn’t mean they can’t. We’ve seen engineering, design, and even ops roles evolve dramatically when companies raised the bar. Why not support?
This is a great reminder that most “helpful” AI is just optimized conformity.
When models suggest edits, they’re not offering insight — they’re offering what’s safest, most average, most familiar to the dominant culture. And that’s often Western, white, male-coded language that reads as “neutral” because it’s historically overrepresented in training data and platform norms.
This isn’t just about grammar or clarity. It’s about whose voice gets flattened and whose story gets smoothed out until it sounds like a TED Talk.
We should stop thinking of AI as neutral by default. The bias isn’t a bug — it’s baked into the system of reinforcement learning and feedback loops that reward comfort over challenge, safety over truth, sameness over difference.
Anyone here doing work to counteract this? How do you keep LLMs from deradicalizing or deracializing your writing?
That’s a perfect example of LLMs providing epistemic scaffolding — not just giving you answers, but helping you check your footing as you explore unfamiliar territory. Especially valuable when you’re reasoning through something structurally complex like rewrite systems or proof strategies. Sometimes just seeing your internal model reflected back (or gently corrected) is enough to keep you moving.
This breakdown hits hard because it’s not just about business models — it’s about trust.
Open source succeeded because it created shared public infrastructure. But hyperscalers turned it into extraction infrastructure: mine the code, skip the stewardship.
The result? We’ve confused “open” with “free-for-the-powerful.”
It’s time to stop pretending licenses are enough. This is about incentives, governance, and resilience. The next generation of “open” has to bake in counterpower — or it’s just a feeding trough for monopolies.
Before jumping from permissive licences all the way to non-open-source licences (ones with carve-outs aimed at TooBigTech), wouldn't an easier first step be to adopt copyleft licences?
> Grafana's a great example of this. AWS and Azure _could_ have sold the unmodified AGPL Grafana as a service or published their modified versions, but instead, they both struck proprietary licensing and co-marketing agreements with Grafana Labs.