Hacker News | amdivia's comments

Btw, it seems like NullClaw is facing the same issue. Currently the first result on Google is a shady website with popups claiming to be NullClaw's, while the actual site (nullclaw.io) doesn't come up.

This seems to be a pattern


I've yet to find a satisfying vim AI integration. I want something that blends into my vim workflow and doesn't require me to switch windows and copy-paste, or reload my open buffers after AI agents edit my code.

For instance, I would love for it to melt seamlessly into my workflow: highlight some comments or pseudo code, hit a keybind, and the AI expands them into actual code. Or something along those lines, but not what we have currently.


Try CodeCompanion if you're using neovim. I have a keybind set up that takes the highlighted region, prepends some context saying roughly "if you see a TODO comment, do it; if you see a WTF comment, try to explain it", and presents an inline diff to accept/reject the edits. It's great for tactical LLM use on small sections of code.

For strategic use on any larger codebase, though, it's more productive to use something like plan mode in Claude Code.


If I had to use AI with neovim I'd probably use https://github.com/ThePrimeagen/99 or one with a similar workflow.

I found this to be inaccurate. I can run GPT-OSS 120B (4-bit quant) on my 5090 with 64 GB of system RAM at around 40 t/s, yet the site claims it won't work.
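For what it's worth, back-of-the-envelope arithmetic supports this, assuming the runtime can offload whatever doesn't fit in VRAM to system RAM (the exact overhead for KV cache and activations varies by runtime, so these numbers are rough):

```python
# Rough memory math for a 120B-parameter model at 4-bit quantization.
# Real runtimes add KV-cache and activation overhead on top of this.
params = 120e9
bytes_per_param = 0.5                          # 4 bits = 0.5 bytes
weights_gb = params * bytes_per_param / 1e9    # ~60 GB of weights

vram_gb = 32                                   # RTX 5090
system_ram_gb = 64

print(f"weights: ~{weights_gb:.0f} GB")
# Too big for VRAM alone, but fine once layers are offloaded to system RAM:
print(f"fits with CPU offload: {weights_gb < vram_gb + system_ram_gb}")
```

The ~60 GB of weights won't fit in 32 GB of VRAM by themselves, which is presumably what the site's check flags, but they fit comfortably across VRAM plus 64 GB of RAM.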

Although I agree with the general sentiment, I'll slightly push back on the "nobility" of any engineering pursuit. Such things are highly amoral (not immoral) and context-specific.

Assume an "evil" state worked on defensive technology that can foil any nuclear attack against it. This now allows that state to use its own nuclear weapons without fear of retaliation. So in this example, innovation in defensive technologies enabled war and destruction.


Well of course, which is why we prohibited the development of defence tech in the ABM treaty. But that doesn't stop non-nuclear states from developing anti-nuke defence technology. Perhaps the only reason they don't is that it's harder than building a nuke.

It is incredibly naïve to label technology as good or bad.

Which one is the Internet?


Exactly, that's why I objected to the "noble" aspect of working on defensive technologies

I'm waiting for someone to polish a well-thought-out interface for power coding.


The process that Get Shit Done forces is pretty good, with Claude Code as the interface.

https://github.com/gsd-build/get-shit-done


I think for AI to become useful while minimizing its harm, the interface as a whole needs to be reworked. Instead of a loop of code generation followed by review, the initiative should be taken by the developer; AI should be a background thing, not one that's constantly surfacing itself to the developer.

For instance, I was thinking of AI coding where the developer is writing the application interface, files, and design, and the AI in the background is reading them and translating them into the programming language of choice.

This way the developer is writing the whole thing out by hand; it would be as if one were writing fluid pseudo code, but the abstractions would be there, the way they interact with each other would be there, and the human would be thinking of the abstractions and when to use them. Meanwhile the AI stays out of view, simply translating the fluid pseudo code into a rigid programming language.
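As a minimal sketch of that workflow (everything here is hypothetical: the file names, and `translate_pseudocode`, which a real tool would back with an LLM call rather than the stub below), a background watcher could regenerate code whenever the developer's pseudo-code file changes:

```python
# Hypothetical sketch: the developer edits a pseudo-code file by hand; a
# background watcher notices changes and asks a translator to emit real code.
import os

def translate_pseudocode(text: str) -> str:
    """Stub standing in for an LLM call that would translate pseudo code."""
    return "\n".join(f"# TODO implement: {line}"
                     for line in text.splitlines() if line.strip())

def watch_once(src: str, dst: str, last_mtime: float) -> float:
    """Regenerate dst if src changed since last_mtime; return the new mtime."""
    mtime = os.stat(src).st_mtime
    if mtime > last_mtime:
        with open(src) as f:
            generated = translate_pseudocode(f.read())
        with open(dst, "w") as f:
            f.write(generated)
    return mtime

# A real tool would poll in a loop (or use inotify/fsevents):
#   while True: last = watch_once("app.pseudo", "app.py", last); sleep(0.5)
```

The point of the design is that the developer never looks at a diff viewer; they keep editing the source of truth, and the generated file is a derived artifact.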

Perhaps the above isn't really it, but I strongly feel something needs to change in the way we currently work, because it creates a chasm between the developer and the generated code, not only in terms of the actual implementation but in the mental abstractions it's supposed to reflect.


You could give https://github.com/jurriaan/aico a try.


Seems very interesting, thanks for sharing! (Even though it still has a review process, I feel it's heading in the right direction.)


I'm failing to understand the criticism here.

Is it about the haphazard deployment of AI-generated content without revising/proofreading the output?

Or is it about using some graphs without attributing their authors?

If it's the latter (even if partially), then I have to disagree with that angle. A very widespread model surely isn't owned by anyone; I don't have to reference Newton every time I write an article on gravity, no? But maybe I'm misunderstanding the angle the author is coming from.

(Sidenote: if it was meant in a lighthearted way, then I can see it making sense.)


Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it's been everywhere for 15 years and I've always been fine with that. What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?

I mean come on – the point literally could not be more clearly expressed.


I mean, he still does say "and ship it as your own", and he mentions plagiarism too, all of which made me perplexed initially.


did you read the article? this is explicitly explained! at length!

It's not at all about the reuse; that's been done over and over with this diagram. It's about the careless copying that destroyed the quality. Nothing was wrong with the original diagram! Why run it through the AI at all?


The thing is, in this context "editing text" is seen as the one job that one tool should do.

So when you're working with multiple applications, all of which are trying to force you to use their own way of editing text, it feels highly fragmented and un-unixy.

I do understand what you're saying; it's just that I wish the text-editing portion of most of these tools were abstracted to a degree that allows my text-editing tool of choice to be used within it.


Side question: does anyone else feel like the quality of openclaw (as a tool) is extremely low?

Their logging seems haphazard, there is no easy way to monitor what the agent is doing, the command-line messages feel unorganized, and the error messages are really weird... as if the whole thing is vibe coded? Not even smartly vibe coded.

Even the landing page is weird: it takes you first to a blog about the tool instead of explaining what it is, and the getting-started section of the documentation (and the documentation itself) feels like AI slop.


Some ideas:

* Clear labeling of action types (read/GET vs write/POST)

* A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call)

* More occurrences of AI agents hurting more than helping in the current ecosystem
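The first two ideas could be sketched together: tag each of an agent's pending tool calls as read or write, then summarize the plan before asking for approval. The endpoint names and the read/write split below are assumptions for illustration, not any existing tool's API:

```python
# Illustrative sketch: label an agent's pending tool calls by action type
# and summarize what it is about to do, based purely on the calls themselves.
READ_METHODS = {"GET", "HEAD", "OPTIONS"}  # assumed safe/read-only methods

def label_call(method: str, endpoint: str) -> str:
    """Tag a single call as [read] or [write] from its HTTP method."""
    kind = "read" if method.upper() in READ_METHODS else "write"
    return f"[{kind}] {method.upper()} {endpoint}"

def summarize_plan(calls):
    """Describe the agent's pending actions, one labeled line per call."""
    return "\n".join(label_call(m, e) for m, e in calls)

print(summarize_plan([("GET", "/users"), ("POST", "/users/42/delete")]))
# A UI could then require explicit confirmation for any [write] line.
```

This mirrors HTTP's own safe-method convention: anything that isn't provably read-only gets flagged for the human.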

