Hacker News | emregucerr's comments

I would love to see someone build this as some kind of SDK. App builders could use it as a local LLM plugin when handling sensitive data.

It's usually too much to ask users to set up a local LLM themselves, but this, I believe, could solve that problem?


It's not too hard to code up alongside an LLM. I've been playing with small embedding models in browsers in recent weeks. You don't really need much. The limitation is that these models are fairly limited and slow to begin with, and they run even slower in a browser, even with WebGPU. But you can do some cool stuff. Adding an LLM is just more of the same.

If you want to see an example of this, https://querylight.tryformation.com/ is where I put my search library and demo. It does vector search in the browser.
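For anyone wondering what in-browser vector search boils down to: the core is brute-force cosine similarity between a query embedding and stored document embeddings. Here is a minimal sketch of that scoring loop, in Python for brevity (a browser version does the same math in JS/WebGPU); the document IDs and 3-dimensional "embeddings" are toy placeholders, not anything from the linked library.

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=2):
    # index: list of (doc_id, embedding) pairs; score all, keep the best
    scored = [(doc_id, cosine_similarity(query_vec, vec)) for doc_id, vec in index]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dim "embeddings"; a real sentence-embedding model emits 384+ dims.
index = [
    ("doc-cat", [0.9, 0.1, 0.0]),
    ("doc-dog", [0.8, 0.3, 0.1]),
    ("doc-car", [0.0, 0.2, 0.9]),
]
results = search([1.0, 0.0, 0.0], index)
```

For a few thousand documents this brute-force scan is fast enough even in a browser tab, which is why small demos rarely need an approximate-nearest-neighbor index.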


Which apps have you seen ask someone to set up a local LLM? I can't recall ever having seen one.

Did they pick the word "uncrewed" to avoid the word "unmanned"? If so, I'm not hopeful. Might be another EuroDrone disaster.


Even the absence of a person could potentially become offended, I suppose.


Better than unpersoned I guess?


People dismiss this as a meme too quickly, but I think it's a good thought experiment, not only for comparing energy consumption but also learning efficiency. AI is often criticized for its low learning efficiency, but compared to a human it's not looking too bad. Say a human becomes an AGI-level learner by the time they are 14 years old. Human vision is approximately 500 megapixels, which works out to roughly 1.7 GB per second of visual data. That means it takes roughly 800 petabytes of data to 'pre-train' a human into a good-enough generalist learner. Compare Llama 4 from Meta, whose training data set consisted of 30 trillion tokens: at roughly 4 bytes per token, that's about 120 TB, a mere 0.12 petabytes.

I am well aware this is flimsy napkin math at best, but I find that comparing LLMs to humans in a more serious tone is a fun and useful exercise.


> I don’t think the gates should animate up into the air.

I agree! It feels off compared to the overall aesthetic of the game.

Awesome game though! Loved it.


Hey HN! We recently got our SOC 2 certification. One thing that really annoys us is having to get every deployment PR approved by at least one person, per the guidelines. This might not sound like a lot, but it gets annoying fast when you are two people deploying multiple times a day.

We built a very simple GitHub bot that snap-approves all pull requests to the default branch. For some extra flair, it sends a very dry joke about the contents of the PR.

On the off chance that you are also a small team with a Slack channel full of PR links waiting to be blindly approved, you can download it for yourself.

p.s. This is obviously mostly parody. Even though we have a small use-case for it, we realize how stupid this is.
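For the curious, the mechanics of a bot like this are essentially one call to GitHub's REST API: POST to `/repos/{owner}/{repo}/pulls/{pull_number}/reviews` with `event: "APPROVE"`. A minimal sketch, assuming you already have a token and the PR details from a webhook; the owner/repo/token values are placeholders, and joke generation is left out:

```python
import json
import urllib.request

def build_approval_request(owner, repo, pr_number, token, body_text):
    """Build the GitHub REST call that submits an approving review on a PR."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews"
    payload = json.dumps({"event": "APPROVE", "body": body_text}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = build_approval_request("acme", "app", 123, "ghp_example", "LGTM, presumably.")
# urllib.request.urlopen(req) would actually submit the review.
```

Note that GitHub won't let a token approve its own PRs, which is exactly why a separate bot identity is needed for this trick.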


Approval is not mandatory for all PRs. You can change your policy on this and easily justify it to your auditor. It makes far more sense to have the important changes reviewed than to get automated approval from a bot.


I think most people, like us, blindly try to get the controls in Vanta/Drata to pass. I'd much rather build a dumb bot than talk to my auditor. But still:

> we realize how stupid this is


I wonder how good R1 is at counting pixels from a screenshot. What enabled Claude and OpenAI's CUA to do computer use was being able to give precise x-y coordinates for a click location.

Also, how big a gain is reasoning for computer use? I feel like reasoning unlocks a lot when there is a single complex question, but not so much when taking actions in a long-term plan.
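On the coordinate-grounding point: these models typically emit click locations in a normalized space (0-1000 is a common convention, though that is an assumption here, not any specific model's spec), and the agent harness rescales them to the actual screenshot before clicking. A hypothetical rescaling helper:

```python
def to_pixels(norm_x, norm_y, width, height, norm_max=1000):
    """Map model coordinates in [0, norm_max] to pixel coordinates
    on a width x height screenshot."""
    return (
        round(norm_x / norm_max * (width - 1)),
        round(norm_y / norm_max * (height - 1)),
    )

# A click at the center of a 1920x1080 screenshot:
x, y = to_pixels(500, 500, 1920, 1080)
```

The hard part is of course not the rescaling but getting the model to emit coordinates that actually land on the intended button, which is what grounding datasets train for.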


Yep, coordinate grounding is key. We use Ai2's PixMo for a lot of that: https://huggingface.co/datasets/allenai/pixmo-points

We had previously created https://huggingface.co/datasets/agentsea/wave-ui but it was superseded by PixMo, which contains over a million data points.


Why books specifically? I was never able to get real value out of them for programming.


It's the way I roll, I guess. Everything technical about computing that I've learned in life, I've learned from books and manuals. That's how it's been done for most of human history. And are you suggesting you can't get anything of value out of, say, "The C Programming Language"?


IMO, the more senior counterpart benefits less from pair programming and therefore enjoys it less. However, it's still the fastest way to get someone familiar with a concept or project when done correctly. It might be really valuable in a CS curriculum.


Am I missing something with the latest buzz around 'founder mode'? It's a concept for startups that have transitioned into enterprises, not seed/Series A-stage startups. At that stage, you don't have any other option at all.

I believe pushing back against the notion that CEOs should delegate more as the company grows is great, but I don't get why seed-stage founders act enlightened by this and brag about being in 'founder mode' as if it were something special. You simply don't have any other option.

So how is 'founder mode', which has lost all meaning at this point, an edge for 'startups'?


> The best lens for future performance of large models is uncertainty.

100% agree. The better way to phrase my argument would be to reject the notion that LLMs are destined to get exponentially smarter (the Twitter fallacy). This is not to say I believe they won't get smarter in the future. We simply don't know, and building a company or product on the expectation of another Moore's Law is dangerous.

