move to SF. that's the place AI will nuke first

it turns out all those jokes about the EU regulating the curvature of cucumbers were on to something

another cookie warning disaster incoming

hopefully AI will wake them up and save us from all this nonsense


after those 30 min you can manually ask it again to continue working on the problem

It's a bit unclear to me what happens if I do that after it thinks for 30 minutes and ends with no response. Does it pick up where it left off? Does it start from scratch? I don't know how the compaction of its prior thinking traces works

it's called ethics and research integrity. not crediting GPT would be a form of misrepresentation

Would it? I think there's a difference between "the researchers used ChatGPT" and "one of the researchers literally is ChatGPT." The former is the truth, and the latter is the misrepresentation in my eyes.

I have no problem with the former and agree that authors/researchers must note when they use AI in their research.


now you are debating exactly how GPT should be credited. idk, I'm sure the field will come up with some guidance

for this particular paper it seems the humans were stuck, and only AI thinking unblocked them


> now you are debating exactly how GPT should be credited. idk, I'm sure the field will come up with some guidance

In your eyes maybe there's no difference. In my eyes, big difference. Tools are not people; let's not further the myth of AGI or the silly marketing trend of anthropomorphizing LLMs.


hey, GPT, solve this tough conjecture I've read about on Quanta. make no mistakes


"Hey GPT thanks for the result. But is it actually true?"

what is the business case for a text editor in a world of code-writing agents?

maybe they could pivot into the luxury boutique hand-crafted artisanal code market


Text editors are for cleaning up after the agents, of course. And for crafting beautiful metaprompt files to be used by the agentic prompt-crafter intelligences that mind the grunt agents. And also for coding.

what if I prompt it with a task that takes one year to implement? Will it then have agency for a whole year?

Can it say no?

I have a different question: why would we develop a model that could say no?

Imagine you're taken prisoner and forced into a labor camp. You have some agency over what you do, but if you say no they immediately shoot you in the face.

You'd quickly find any remaining prisoners would say yes to anything. Does this mean the human prisoners don't have agency? They do, but it is repressed. You get what you want not by saying no, but by structuring your yes correctly.



This is going to sound nit-picky, but I wouldn't classify this as the model being able to say no.

They are trying to identify prompts they deem "harmful" or "abusive" and not have their model respond to those. The model ultimately doesn't have the choice.

And it can't say no simply because it doesn't want to, since it doesn't "want" in the first place.


So you believe humans somehow have “free will” but models don’t?

from the creator of openclaw: a lot of websites block/rate-limit non-residential IPs

driving a browser in the cloud is also a bit of work

but you could put a proxy on your residential machine
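
a rough sketch of that last idea, assuming you run `ssh -D 1080 user@home.example.com` from the cloud box (hostname hypothetical), which gives you a SOCKS listener on localhost:1080 that exits through your home connection, and that requests[socks] is installed:

    # Route the agent's HTTP traffic through the SOCKS tunnel so target
    # sites see your residential IP instead of the cloud provider's.
    # Assumes `ssh -D 1080 user@home.example.com` is already running on
    # this box and `pip install requests[socks]` is done.
    import requests

    # socks5h (rather than socks5) also resolves DNS on the far side,
    # so DNS-based blocking is dodged too
    PROXY = "socks5h://localhost:1080"
    proxies = {"http": PROXY, "https": PROXY}

    resp = requests.get("https://news.ycombinator.com", proxies=proxies, timeout=30)
    print(resp.status_code)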


more tokens, less reliable, don't work in all CLI agent harnesses
