Hacker News | queueueue's comments

Ironic that I’m giving another anecdotal experience here, but I’ve noticed this myself too. I catch myself continuing to prompt after an LLM has failed to solve some problem in a specific way, even though at that point I could probably do it faster by switching to doing it fully myself. Maybe because the LLM output feels like it’s ‘almost there’, or some sunk cost fallacy.


Not saying this is you, but another way to look at it is that engaging in that process is training you (again, not you, the user) -- the way you get results is by asking the chat bot, so that's what you try first. You don't need sunk cost or gambling mechanics, it's just simple conditioning.

Press lever --> pellet.

Want pellet? --> press lever.

Pressed lever but no pellet? --> press lever.


For me, this has somehow gotten to the point where I keep questioning whether I’m actually doing something out of curiosity, or because of the idea that I could share it with other people, or some other motive. So I’m not even sure what I’m curious about anymore, which might sound ridiculous.


Hey, don't worry. In my book, if you do something because you want to share it, then you're still interested in it enough (or curious about it, if you prefer). You just like to share, and that's okay.

It's also a good filter for topics. Naturally, topics that interest others seem more valuable.

I do a similar thing on my blog. Generally, each topic must pass the test: is this useful to at least some readers? And being committed to writing means I can clarify and organize my thoughts.

So nothing to worry about, keep on experimenting and sharing.


Not sure about AI specific but: Todo apps, habit trackers, lots of social media, job boards, recommendation apps, fun things to do with friends, travel planners, trackers (movies/books). I think it’s more common for B2C because these are things that a lot of people come across.

Some of these ideas could maybe be done better now that we have genAI, but the question is whether it would work as a standalone app or is just a feature.


Apple has a Screen Time API that allows an app to block other apps the user has chosen.


I've counted 18! The next one was blank.


I don't see why this isn't possible using knowledge graphs. You retrieve the entity, Sharon, and the additional context you get is the nodes and edges close to Sharon. After that it becomes the LLM's job: if the occupation is not mentioned in the given context, it should let the prompter know, "In the given context the occupation of Sharon could not be found".
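The retrieval step could be sketched roughly like this, assuming a toy knowledge graph stored as subject-predicate-object triples (the entities, relations, and helper names here are all illustrative, not from any real system):

```python
# Minimal sketch: pull an entity's local neighborhood out of a tiny
# triple store and build the context string an LLM would be prompted
# with. All data and names (Sharon, TRIPLES, build_context) are made up.

TRIPLES = [
    ("Sharon", "lives_in", "Dublin"),
    ("Sharon", "friend_of", "Marc"),
    ("Marc", "occupation", "engineer"),
]

def neighborhood(entity, triples):
    """Return every triple in which the entity appears as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

def build_context(entity, triples):
    """Flatten the entity's neighboring edges into one fact per line."""
    return "\n".join(f"{s} {p} {o}" for s, p, o in neighborhood(entity, triples))

context = build_context("Sharon", TRIPLES)
print(context)
```

Since no `occupation` edge touches Sharon, that fact never enters the context, and a faithful LLM should say it could not find her occupation rather than guess.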


I had to double-check the date the article was posted, because all four examples, when run with ChatGPT-4o, did not give the output mentioned in the article. It seems the examples are old, which becomes obvious when you look at the chat interface in the article's screenshots: they don't match the current ChatGPT interface. I'm sure there are new ways to do visual prompt injection though!


The API is not used for training purposes either. https://openai.com/enterprise-privacy

