Since the author mentioned trying to find a general solution, not one just for his project - here's one that could work:

Make a new standard license similar to the GPL, but one that includes machine-readable payment requirements, each consisting of:

- a UUID

- a minimum profit threshold

- a license fee, either a fixed amount or some well-defined formula (you'd probably want an inflation adjustment system)

- a recipient

Anyone who wants to use the software can do it, but if you cross the profit threshold, you have to pay, once per project. Dependents would naturally inherit the payment requirements of their dependencies, but you'd only pay once per dependency even if it was used in multiple projects (hence the UUID).
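To make the "machine-readable" part concrete, here is a minimal sketch of what one payment requirement could look like, plus the deduplication the UUID enables. The field names, threshold, fee, and recipient are all made up for illustration; no such standard format exists yet.

  import json
  import uuid

  # Hypothetical machine-readable payment requirement (illustration only).
  requirement = {
      "uuid": str(uuid.uuid4()),            # stable identity shared by all dependents
      "min_annual_profit_usd": 50_000_000,  # below this threshold nothing is owed
      "fee_usd": 2_000,                     # fixed amount; could be a formula instead
      "recipient": "payments@example-project.org",
  }

  def fees_owed(all_requirements, annual_profit_usd):
      """De-duplicated set of fees owed for one accounting period."""
      owed = {}
      for req in all_requirements:          # requirements inherited from every dependency
          if annual_profit_usd >= req["min_annual_profit_usd"]:
              owed[req["uuid"]] = req       # keyed by UUID, so each is paid at most once
      return list(owed.values())

  # The same requirement reaching you via two dependency paths is still paid only once.
  print(json.dumps(fees_owed([requirement, requirement], 60_000_000), indent=2))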

With high enough profit thresholds and small payments, this should keep the license from becoming toxic:

* If you aren't a megacorp, you don't care because you're not hitting the thresholds.

* If you aren't a megacorp but are dreaming of becoming one, you still don't care, because if you do become one, you can afford the cost, and the combined cost (payments + compliance cost) is well understood and limited.

* If you are a megacorp, you still don't care, because we're most likely talking about peanuts, the machine-readable descriptions make it practical to comply, and you get a "software bill of materials" out of it as a side effect.

This relies on the minimum profit thresholds being high enough and the license fees low enough. That could be achieved by licensing the text of the license itself only to projects that keep their thresholds and fees within certain bounds.

Building a new license ecosystem and the critical mass behind it is a tall order, but I think this way it's not hopeless-from-the-start. The design isn't meant to "capture a fair share of the value" or anything like that, it's meant to be minimally toxic (because that's a hard requirement for having a chance of becoming popular) while still delivering some minimal contribution to big projects with a lot of dependents.

I was originally planning to suggest a revenue threshold, but I think profit is better, as it excludes nonprofits, startups in the starting-up phase, companies that aren't money printers, etc.


Something else to add is mathematical discovery. There is a team that is very close to solving the Navier-Stokes Millennium Prize problem: https://deepmind.google/discover/blog/discovering-new-soluti...

The cynics will comment that I've just been sucked in by the PR. However, I know this team and have been using these techniques for other problems. I know they are so close to a computationally assisted proof of a counterexample that it is virtually inevitable at this point. If they don't do it, I'm pretty sure I could take a handful of people and a few years and do it myself. What remains is mostly a lot of interval arithmetic with a final application of Schauder; tedious and time-consuming, but not overly challenging compared to the parts already done.
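For anyone curious what "a lot of interval arithmetic" means in practice, here is a toy sketch (my own illustration, not the team's code): every quantity is carried as a rigorous enclosure [lo, hi], with rounding pushed outward, so the final inequalities hold despite floating-point error.

  import math
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Interval:
      lo: float
      hi: float

      def __add__(self, other):
          # Outward rounding keeps the enclosure sound; nextafter is a crude way to do it.
          return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                          math.nextafter(self.hi + other.hi, math.inf))

      def __mul__(self, other):
          p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
          return Interval(math.nextafter(min(p), -math.inf), math.nextafter(max(p), math.inf))

  x, y = Interval(0.1, 0.2), Interval(-1.0, 3.0)
  print(x + y, x * y)  # enclosures guaranteed to contain the true results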


I wrote a bit about this the other day: https://simonwillison.net/2025/Jun/27/context-engineering/

Drew Breunig has been doing some fantastic writing on this subject - coincidentally at the same time as the "context engineering" buzzword appeared but actually unrelated to that meme.

How Long Contexts Fail - https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-ho... - talks about the various ways in which longer contexts can start causing problems (also known as "context rot")

How to Fix Your Context - https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.... - gives names to a bunch of techniques for working around these problems including Tool Loadout, Context Quarantine, Context Pruning, Context Summarization, and Context Offloading.
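As a rough illustration of what "Context Pruning" amounts to (my sketch, not code from either post): keep the system prompt and the most recent turns, and drop older turns once an assumed token budget is exceeded. The word-count costing below is a stand-in for a real tokenizer.

  def prune_context(messages, budget_tokens=4000):
      def cost(msg):
          return len(msg["content"].split())  # crude stand-in for real token counting

      system = [m for m in messages if m["role"] == "system"]
      rest = [m for m in messages if m["role"] != "system"]

      kept, used = [], sum(cost(m) for m in system)
      for msg in reversed(rest):              # walk newest-first
          if used + cost(msg) > budget_tokens:
              break                           # older turns fall out of the window
          kept.append(msg)
          used += cost(msg)
      return system + list(reversed(kept))

  history = [{"role": "system", "content": "You are a helpful assistant."},
             {"role": "user", "content": "very long earlier turn " * 2000},
             {"role": "user", "content": "recent question"}]
  print([m["content"][:20] for m in prune_context(history)])  # the long turn is dropped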


I think we need to shift our idea of what LLMs do and stop thinking they are ‘thinking’ in any human way.

The best mental description I have come up with is that they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.

You can transform the concept of ‘a website that does X’ into code that expresses that website.

But it’s not thinking. We still gotta do the thinking. And actually that’s good.


> In what sense can a finite set exist and be finite when it is unfindable, unverifiable, and has unboundable size?

In the same sense that we could say that every computer program must either eventually terminate or never terminate without most people thinking there's a major philosophical problem here.
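To make that concrete: the halting claim is just an instance of the law of excluded middle, which classical logic takes as an axiom and constructive logic refuses to assume in general. A tiny Lean 4 sketch of that instance:

  -- "Program P halts or it doesn't" as an instance of excluded middle.
  -- Classical.em is axiom-backed; there is no constructive proof of it.
  theorem halts_or_not (Halts : Prop) : Halts ∨ ¬Halts :=
    Classical.em Halts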

And by the way, the very same question can be (and has been) levelled at constructivism: in what sense does a result that would take longer than the lifetime of the universe to compute exist, as it is unfindable and unverifiable?

Look, I think that it is interesting to work with constructive axioms, but I don't think that humans philosophically reject non-constructive results. It's one thing to say that we can learn interesting things in constructive mathematics and another to say there's a fundamental problem with non-constructive mathematics.

> But formalism leads to having to accept conclusions that some of us don't like.

At least in Hilbert's sense, I don't think formalism says quite what you claim it says. He says that some mathematical statements or results apply to things we can see in the world and could be assigned meaning through, say, correspondence to physics. But other mathematical statements don't say anything about the physical world, and therefore the question of their "actual meaning" is not reason to reject them as long as they don't lead to "real" results (in the first class of statements) that contradict physical reality.

Formalism, therefore, doesn't require you to accept or reject any particular meaning that the second class of statements may or may not have. If a statement in the second class says that some set exists, you don't have to assign that "existence" any meaning beyond the formula itself.


Do not fall into the trap of anthropomorphising Larry Ellison. You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle.

Here's a great comparison, updated two weeks ago. https://github.com/Elanis/web-to-desktop-framework-compariso...

Electron comes out looking competitive at runtime! IMO people over-fixate on disk space instead of runtime memory usage.

Memory Usage with a single window open (Release builds)

Windows (x64): 1. Electron: ≈93MB 2. NodeGui: ≈116MB 3. NW.JS: ≈131MB 4. Tauri: ≈154MB 5. Wails: ≈163MB 6. Neutralino: ≈282MB

MacOS (arm64): 1. NodeGui: ≈84MB 2. Wails: ≈85MB 3. Tauri: ≈86MB 4. Neutralino: ≈109MB 5. Electron: ≈121MB 6. NW.JS: ≈189MB

Linux (x64): 1. Tauri: ≈16MB 2. Electron: ≈70MB 3. Wails: ≈86MB 4. NodeGui: ≈109MB 5. NW.JS: ≈166MB 6. Neutralino: ≈402MB


My team at Shopify just open sourced Roast [1] recently. It lets us embed non-deterministic LLM jobs within orchestrated workflows. Essential when trying to automate work on codebases with millions of lines of code.

[1] https://github.com/shopify/roast
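For a rough idea of the shape of such a workflow (a generic sketch of the concept, not Roast's actual API, which is a Ruby gem driven by YAML workflow files): deterministic steps gather the work, a non-deterministic LLM step makes the judgment call, and the orchestrator composes them.

  # Generic illustration only; the function names and the call_llm hook are made up.
  def collect_files(repo_path):
      return [repo_path + "/app/models/user.rb"]               # deterministic step (stubbed)

  def llm_review(file_path, call_llm):
      return call_llm(f"Review {file_path} for N+1 queries")   # non-deterministic step

  def workflow(repo_path, call_llm):
      return {path: llm_review(path, call_llm) for path in collect_files(repo_path)}

  print(workflow("/repo", lambda prompt: "looks fine"))        # call_llm stubbed for the example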


I think the professional sciences have, for a long time, been a social game of building one's career, but it does feel like it's metastasized into something that's swallowed academia.

From the first article in the series [0]:

> Insiders ... understand that a research paper serves ... in increasing importance ... Currency, An advertisement, Brand marketing ... in contrast to what outsiders .. believe, which is ... to share a novel discovery with the world in a detailed report.

I can believe it's absolutely true. And yikes.

Other than the brutal contempt, TFA looks like pretty good advice.

[0] https://maxwellforbes.com/posts/your-paper-is-an-ad/


That's...ridiculously fast.

I still feel like the best uses of models we've seen to date is for brand new code and quick prototyping. I'm less convinced of the strength of their capabilities for improving on large preexisting content over which someone has repeatedly iterated.

Part of that is because, by definition, models cannot know what is not in a codebase and there is meaningful signal in that negative space. Encoding what isn't there seems like a hard problem, so even as models get smarter, they will continue to be handicapped by that lack of institutional knowledge, so to speak.

Imagine giving a large codebase to an incredibly talented developer and asking them to zero-shot a particular problem in one go, with only moments to read it and no opportunity to ask questions. More often than not, a less talented developer who is very familiar with that codebase will be able to add more value with the same amount of effort when tackling that same problem.


Just need a way to talk to ChatGPT anytime. Microphone, speaker and permanent connection to ChatGPT. That’s all you need: io

One need is being able to talk to ChatGPT in a whisper or silent voice… so you can do it in public. I don’t think that comes from them, but it will be big when it does. Much easier than brain implants! In an ear device, you need enough data from listening to the muscles and the sounds together; then you can just listen to the muscles…

I assume they want to have their own OS that is, essentially, their models in the cloud.

so, here are my specific predictions

1. Subvocalization-sensing earbuds that detect "silent speech" through jaw/ear canal muscle movements (silently talk to AI anytime)

2. An AI OS laptop — the model is the interface

3. A minimal pocket device where most AI OS happens in the cloud

4. an energy efficient chip that runs powerful local AI, to put in any physical object

5. … like a clip. Something that attaches to clothes.

6. a perfect flat glass tablet like in the movies (I hope not)

7. ambient intelligent awareness through household objects with microphones, sensors, speakers, screens


The AI hype seems driven more by stock valuations than genuine productivity gains.

Developers now spend excessive time crafting prompts and managing AI-generated pull requests :-) for tasks that a simple email to a junior coder could have handled efficiently. We need a study that shows the lost productivity.

When CEOs aggressively promote such tech solutions, it signals we're deep into bubble territory:

"If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding."

  - Matt Garman, CEO of Amazon Web Services (AWS), June 2024

"There will be no programmers in five years"

  - Emad Mostaque, CEO of Stability AI, 2023

"I'd say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software."

  - Satya Nadella, CEO of Microsoft, April 2025

"Coding is dead."

  - Jensen Huang, CEO of NVIDIA, Feb 2024

"This is the year (2025) that AI becomes better than humans at programming for ever..."

  - Kevin Weil, CPO of OpenAI, March 2025

"Probably in 2025, we at Meta are going to have an AI that can effectively function as a mid-level engineer that can write code."

  - Mark Zuckerberg, Jan 2025

"90% of code will be written by AI in the next 3 months"

  - Dario Amodei, CEO of Anthropic, March 2025

If you want very well hidden, get a writable NFC chip implanted.

You write the data to it, delete the phone app which did that write, and once you are through, re-install the phone app and read your data.

Last time I went through the TSA scatter scanners and wand-waving theatre, the 3 implants I had were not detected.
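For the curious, the "write the data" step is just putting an NDEF message on the chip, which is what the phone app does under the hood. A hedged sketch using the ndeflib Python package (import name ndef); the payload is obviously made up:

  import ndef

  record = ndef.TextRecord("whatever you want hidden", "en")
  octets = b"".join(ndef.message_encoder([record]))   # bytes the phone app writes to the tag
  recovered = list(ndef.message_decoder(octets))[0]   # what you'd read back after reinstalling
  print(len(octets), recovered.text)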


One of the underrated downsides of the professionalization of research is how much it sucked the "fun" out of things. It's strange, but research papers in most fields are written very differently from how people actually talk to each other. Professional researchers still communicate informally like normal humans, in ways that are "fun" and show much more of how they came up with ideas and what they are really thinking. But this is very hard for outsiders to access.

This is like saying: y=e^-x+1 will soon be 0, because look at how fast it went through y=2!

You can see this in venerable software which has lived through the times of "designing for the user" and is still being developed in the times of "designing for the business".

Take Photoshop, for example, first released in 1987, last updated yesterday.

Use it and you can see the two ages like rings in a tree. At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions, it feels like the computer is on your side, you're free, you're force-multiplied, your thoughts are manifest. It's really special and you can see there's a good reason this program achieved total dominance in its field.

And you can also see, right beside and on top of and surrounding that, a more recent accretion disc of features with a more modern sensibility. Dialogs that render in web-views and take seconds to open. "Sign in". Literal advertisements in the UI, styled to look like tooltips. You know the thing that pops up to tell you about the pen tool? There's an identically-styled one that pops up to tell you about Adobe Whatever, only $19.99/mo. And then of course there's Creative Cloud itself.

This is evident in Mac OS X, too, another piece of software that spans both eras. You've still got a lot of the stuff from the 2000s, with 2000s goals like being consistent and fast and nice to use. A lot of that is still there, perhaps because Apple's current crop of engineers can't really touch it without breaking it (not that it always stops them, but some of them know their limits). And right next to and amongst that, you've got ads in System Settings, you've got Apple News, you've got Apple Books that breaks every UI convention it can find.

There are many such cases. Windows, too. And MS Word.

One day, all these products will be gone, and people will only know MBA-ware. They won't know it can be any other way.


> I mean, it’s not called Startup News

Well, not anymore it isn't: https://web.archive.org/web/20070601184317/http://news.ycomb...


We were rejected from YC because we wouldn’t dilute some of the early people at the company to zero.

They first told us we were in, but that we would need to adjust our cap table so that only 2 founders would have equity. They gave us a phone call, we pushed back, and later they sent us a rejection email.


I think it's because (a) it's become a lot easier to create languages and (b) we're stuck.

I am hoping (a) is straightforward. For (b), I think most of us sense at least intuitively, with different strengths, that our current crop of programming languages are not really good enough. Too low-level, too difficult to get to a more compact and meaningful representation.

LLMs have kinda demonstrated the source for this feeling of unease, by generating quite a bit of code from fairly short natural language instructions. The information density of that generated code can't really be all that high, can it now?

So we have this issue with expressiveness, but we lack the insight required to make substantial progress. So we bounce around along the well-trodden paths of language design, hoping for progress that refuses to materialize.


It's not an issue of not having the money to pay for it, but it is an issue of how the company makes money.

I say this as someone who loves the Mac, and goes out of his way to avoid electron apps, but in most cases, it doesn't make sense for a company to produce native Mac apps. Especially if that company is VC funded.

I worked in shops in the late 90s into the 2000s that produced cross-platform Windows/Mac programs. It was a lot of fun in many ways. But even the best-run teams with the best cross-platform stacks suffered from slower development times than they would have if they were targeting a single platform.

Even when like 90% of the code is cross-platform.

There are lots of reasons why. Some of it is due to platform differences. Sometimes the same feature couldn't work the same way across platforms. That meant either correctly doing extra design and planning ahead of time, or figuring out that you messed up and calling emergency meetings to figure it out. It meant extra work producing documentation that called out the differences. It meant training support staff on the differences.

Another issue was keeping all the teams on the same schedule. If you've got "one good person" working on the Mac side of things, and they get sick for a week, then either you ship the Mac update late or you delay every other platform.

Finally, there's always been a lot of churn on the Apple side. APIs getting removed. The platform moving to a completely different programming language. Toolbox. Carbon. AppKit. SwiftUI. Pascal. C. C++. Objective-C. Swift.

So a lot of teams went the route of trying to create their own cross-platform framework in C++, and focused their platform developers on just implementing the framework, so all the app work could be done by the cross-platform devs. The Mac people could thus isolate the app from the technology churn on the platform.

This never worked out. It almost always ended up being faster to write separate apps from the ground up for each platform, because the cross-platform framework would balloon into a monstrosity. You'd spend more time on the framework than any app, and somehow, you'd still lack functionality that the cross-platform devs discover they need for the latest top-priority feature coming down from management.

But typically, on the best teams, it wasn't just one person per platform. Because of the need for so much coordination between platforms, lots of time was spent in meetings, so you'd need at least a few people working on each platform. These are things you don't have to worry about when you're an independent developer working on an app that hits the company's API.

You might be tempted to point out that this could all be avoided if you just had a server team write the server software, and one-person platform-specific teams that just wrote native apps that communicated with those servers. Sure, you still have to produce different documentation and support resources, but most companies skimp on those anyway.

You'd also have a bunch of apps with different features, shipping at different times, but they're all great apps, so that doesn't matter, right?

The result is that you'd come up with decently great apps for each platform. The project managers might even be able to recognize that they're cool, great apps.

But you'd still be like a cat bringing them a dead rodent. What are they supposed to do with this?

Admittedly, some of this has to do with how middle-level project managers justify their jobs. They spearhead features and make sure they get delivered into the product, theoretically increasing its value, and therefore the value they delivered to the company.

But it's not all that kind of bullshit. Sometimes it's essential to the company's bottom line, or to an attempt to improve that company's bottom line.

To use Slack as an example, at the start of the COVID lockdowns, Zoom usage shot up tremendously. Slack felt that this was a potential threat to their place in the market, and they introduced huddles. It took like a year, and Slack huddles are still worse than Zoom, but it was still probably good enough to retain some subscribers.

If it took a year with Electron alone, just imagine how much extra time it would have taken to plan and coordinate the feature across multiple platforms. Since it's important to the company, you can't just let the platform teams roll it out on their own schedule, or decide it doesn't fit well with the feature set they've cultivated for their platform. You need it out, you need it fast, and you need it to work the same.

Electron was the first cross-platform solution that worked well enough to make that a reality. It sucks in many ways. It doesn't follow platform conventions. But it's good enough that people can use it. And, bonus, the apps are all written in a language and an API that millions of Web devs are familiar with. No need to worry about the schedule slipping because one employee was sick.

In a fast-paced business, you need fast, universal turnaround on important functionality. It's more impactful to your bottom line than customer experience, even if that functionality is just some dumb BS like Discord super reacts or something.


I have a saying that a program's complexity is always exactly equal to the human-intelligible complexity + 1.

If it weren't, the developer would add one more feature. This is due to the entirely human-made nature of the discipline.


Does GitHub use code from private repos for AI training?

> Thats why RedHat is a business. They're not selling Linux, they're selling the reliability, longevity, services, support etc.

> In truth the license doesn't matter.

It's funny to bring that up in the context of Red Hat, who have started to circumvent the GPL by terminating their relationship with anyone who tries to actually make use of the rights granted by it [1][2]. "The license doesn't matter" because they've found a loophole in it, but it clearly does matter, in that they had to find one in the first place and weren't able to adhere to its spirit due to business concerns.

[1]: https://sfconservancy.org/blog/2023/jun/23/rhel-gpl-analysis...

[2]: https://opencoreventures.com/blog/2023-08-redhat-gets-around...


Would love to hear Venkatesh Rao / Cal Newport discuss this at some point: https://contraptions.venkateshrao.com/p/against-waldenpondin...

Another related post: https://contraptions.venkateshrao.com/p/semicolon-shaped-peo...

From the second link:

> Tenured professors with status in a discipline can tune out the world and do "deep work" peers recognize as "important" before it is done (with accompanying ivory-tower/angels-on-pinhead risks).

> But a free agent, with no institutional safety net, no underwriting of exploratory expeditions by disciplinary consensus, and no research grants, cannot afford this luxury.


This is the full post:

""Successful people create companies. More successful people create countries. The most successful people create religions."

I heard this from Qi Lu; I'm not sure what the source is. It got me thinking, though--the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.

In general, the big companies don't come from pivots, and I think this is most of the reason why."

Sounds like an explicit endorsement lol


I also market to engineers and I have thoughts:

> Engineers look down on advertising and advertising people, for the most part.

Most everyone in every industry dislikes advertising.

> Engineers do not like a "consumer approach."

What people say they like and what they respond to in ways that marketers want are not always the same thing. Also, people almost invariably underestimate the impact that marketing has on them. They think they can’t be swayed by it, but they can (why else would GEICO spend $2b a year on it?). More broadly, there has been a major shift towards using B2C-style, informal marketing for B2B campaigns. Even in long, complex B2B sales cycles, attention spans are shorter and audiences are engaging with more consumer-style content like short explainer videos, and not just the traditional 5,000 word whitepapers and such.

> Engineers are not turned off by jargon—in fact, they like it.

In my experience, that’s not always true. What is true is that they use jargon involuntarily and unconsciously because they are so immersed in their niche they don’t even realize they are doing it. Often, when an outsider like me is brought in and I retell their marketing story without the acronyms and jargon, they are extremely pleased to hear it told more plainly.

> Why is jargon effective? Because it shows the reader that you speak his language.

If you’ve done your homework and you truly understand their business and technology, that familiarity will come across in the content even without jargon.


I think it's a pretty simple, and timeless, aspect of human nature:

"People are generally better persuaded by the reasons which they have themselves discovered than by those which have come into the mind of others."

- Blaise Pascal, 1670


This is NOT the paper, but probably a very similar solution: https://arxiv.org/abs/2009.03393


The corollary is that any technology that is distinguishable from magic is insufficiently advanced.
