Hacker News | xnorswap's comments

A zero hours contract has been useful for me, as someone who needed to work around a medical condition. I worked the hours I could, got paid for them, but I could also take days off at short notice if I wasn't up to working that day.

I have had to negotiate a reduced-hours permanent contract now, but the zero-hours contract I had has genuinely helped me a lot, and was more flexible than the reduced hours contract I'm moving to.

But I also accept that not everyone is in the privileged position of being a software developer at a good company, and that there are many predatory companies taking the piss.

So I support the crackdown on zero-hours while also finding myself missing the fact that I will no longer benefit from it.


You must be joking. How would you heal yourself if you're only going to get 1h per week for the next year?

Firstly, I have a good relationship with my employer, so I'm the one choosing my hours. It would have been legal for them to reduce my hours to zero, but I trusted them not to do that, and indeed they didn't do that.

Secondly, we have the NHS, so my access to healthcare is not tied to my employment nor my ability to pay.

I can only share my experience, and as I said, I recognise this isn't the norm for most zero-hours contractors out there, who are in much worse conditions and have uncertainty, which is why I support the changes in law around them.


Most countries require their own dual-nationality citizens to enter on their local passports, not foreign ones; Britain was an exception before. It's not unreasonable to ask for the British passport, and I say this as someone affected.

> It's not unreasonable to ask for the British passport

Why? What legitimate purpose does this serve?


Counting the sheep in the herd.

Even if one takes this as legitimate, the "foreign" passport gives enough information already (otherwise they couldn't prevent me from acquiring an ETA with it).

Keeping track of which of your citizens are outside of the country. Ensuring the state knows you are a citizen and should be treated as such.

France had a strange case recently: the media talked for ages about someone who committed a crime, whom the state had asked to be deported months before on the basis of his foreign passport, and it took weeks for someone to finally notice that the guy was actually French. It made the police look clownish.


That's your guess. The UK authorities have never given this reason.

You were asking for legitimate purposes. That's some of them.

I asked for the purpose. You guessed at the purpose. Those are different.

I've read the issue is that some countries require you to renounce your previous nationality to get citizenship, and people have taken advantage of not needing a British passport by lying about renouncing their British citizenship.

I've seen claims this technique was actually recommended by the British consulate, no idea if that's true.


> I've read the issue is that some countries require you to renounce your previous nationality to get citizenship, and people have taken advantage of not needing a British passport by lying about renouncing their British citizenship.

Oh that's an interesting little loophole that might be a[nother] reason. A handful of EU member states disallow dual citizenship, so those taking advantage of "EU and British" might be impacted by this.


Found the article, it was Spain specifically that requires it https://www.theguardian.com/politics/2026/feb/13/dual-nation...

ok, did not know that, every day is a school day!

But hey, the sensationalist comment you originally posted gets you lots of upvotes…

The only .fun site I know is neal.fun, which regularly features on the front page here: https://news.ycombinator.com/from?site=neal.fun


> “I think you’d be a damned fool to invest in this technology for any serious project. Right now this is a toy.”

This comment about Typescript was correct. Typescript had a major fundamental rewrite fairly early on in its history.

This quoted comment was written before Typescript even had Generics, let alone Union types.


I see that's from almost 10 years ago; it would be interesting to see how that's changed with the improvements to V8, Python and C# since.

Also, Typescript 5 times worse than Javascript? That doesn't really make sense, since they share the same runtime.


Why is that so unbelievable? TypeScript isn't JavaScript, and while they share the same runtime, compiled TypeScript often doesn't look like how you'd solve the same problem in vanilla JS, where you'd leverage the dynamic typing rather than trying to work around it.

See this example as one demonstration: https://www.typescriptlang.org/play/?q=8#example/enums

The TS code looks very different from the JS code (which obviously is the point), but given that, it's not hard to imagine they have different runtime characteristics, especially for people who don't understand the ins and outs of JavaScript itself and have only learned TypeScript.
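To make that concrete, here's a minimal sketch (a hypothetical `Direction` enum of my own, not taken from the linked playground) of how a numeric TS enum turns into a real runtime object with behaviour you'd be unlikely to write by hand in vanilla JS:

```typescript
// A plain numeric enum in TypeScript.
enum Direction {
  Up,   // = 0
  Down, // = 1
}

// tsc compiles the enum above to roughly this IIFE in plain JS:
//
//   var Direction;
//   (function (Direction) {
//       Direction[Direction["Up"] = 0] = "Up";
//       Direction[Direction["Down"] = 1] = "Down";
//   })(Direction || (Direction = {}));
//
// The result is a runtime object holding both forward and reverse
// mappings, which has a real allocation and lookup cost, unlike a
// hand-written object literal or a bare numeric constant.
console.log(Direction.Up); // 0
console.log(Direction[0]); // "Up" (reverse mapping)
```

The reverse mapping (`Direction[0]`) only exists because of that generated IIFE; idiomatic hand-written JS would usually just use a frozen object literal or plain constants, with no reverse lookup table at all.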


Enums are one of only a few places where there is significant deviation; I don't believe that makes it 400% less efficient.

Maybe read the paper and see if you can figure out their reasoning/motivation :) https://dl.acm.org/doi/10.1145/3136014.3136031

One thing to consider, is that with JavaScript you put it in a .js file, point a HTML page at it, and that's it.

TypeScript needs a whole toolchain on top of that, which would impact the amount of energy usage too, not to mention everything running the package registries and whatnot. Not sure if this is why the difference is bigger, as I haven't read the paper myself :)

But if you do, please do share what you find out about their methodology.


I wonder if the parent comment's remark is a communication failure or pedantry gone wrong, because, like you, I see claude-code out there solving real problems and finding and fixing defects.

A large proportion of the bugs we raise are now fixed by claude automatically, from just the reports as written. Everything is human reviewed; sometimes it fixes things in ways I don't approve of, but it can be guided.

It has an astonishing capability to find and fix defects. So when I read "It can't find flaws", it just doesn't fit my experience.

I have to wonder if the disconnect is simply in the definition of what it means to find a flaw.

But I don't like to argue over semantics. I don't actually care if it is finding flaws by the sheer weight of language probability rather than logical reasoning, it's still finding flaws and fixing them better than anything I've seen before.


I can't control random internet people, but within my personal and professional life, I see the effective pattern of comparing prompts/contexts/harnesses to figure out why some are more effective than others (in fact tooling is being developed in the AI industry as a whole to do so, claude even added the "insights" command).

I feel that many people that don't find AI useful are doing things like, "Are there any bugs in this software?" rather than developing the appropriate harness to enable the AI to function effectively.


Asking it what its knowledge cut-off is turns out to be interesting: it doesn't seem to be consistent even within a single response. Sometimes it responds to say it cuts off in 2020 too.

    You
    What is your knowledge cut-off?

    Jimmy
    My knowledge cut-off is 2022, which means that my training data is current up to 2021,
    but I have been trained on a dataset that is updated periodically. If you have any
    specific questions about events or topics that occurred after 2021, I may not have
    information on those topics. However, I can still provide general information and
    context on those topics to help guide further research.

The instantaneous response is impressive though. I'm sure there will be applications for this, I just lack the imagination to know what they'll be.

Sounds like they aren’t providing a system prompt which tells it what its knowledge cutoff is. It doesn’t know unless it’s told. Given the limited context, it makes sense they wouldn’t use up valuable space for that.

The system prompt is configurable in the sidebar and minimal. It doesn't give a knowledge cutoff. This is a tech demo of the fact it works at all, it's not meant to be a good chatbot.

The trouble is, this reads like "Just don't be depressed", since this is almost effectively a list of (anti-) symptoms of depression.

This is always what leaps out at me with this kind of “live better be happier” article. Yeah, if you’re sleeping well, eating well, working out regularly, spending quality time with friends, and have a healthy work life balance, you probably aren’t depressed. Might there be some possibility, perchance, that this is because any single one of those factors is almost impossible when in the pit of despair, let alone all of them at once? It’s like saying glasses cause short sightedness because hey, you almost never see someone with good eyesight wearing glasses.

And the ones that aren't simply "don't be depressed" are a lottery for anyone, even those not suffering depression. In the US, current projections are for 300 new jobs per state per month this year. Even accounting for retirees and deaths, last I looked at the numbers, there have been 2M more new additions to the workforce (people turning 18) than there are new jobs for those new additions, all competing with the large unemployed and laid off populations.

It's similarly often not easy to solve relationship stressors or noise pollution.


I agree we are headed into very unstable times, all the more reason for people to exercise. The stress relief effect is magical, and if you do it outside you get some fresh air and vitamin D. Exercise isn't a magic cure to make everyone hunky-dory, but I do believe it should be seen as one of the best (and free) tools we have to maintain mental health.

No, it isn't like "Just don't be depressed". I know it's very difficult for people to get all of those combined; that's why I said you should strive to get all of them if you want to cure your depression.

SSRIs or exercise won't make your financial or relationship issues go away so you'll still be depressed unless you can fix all the issues bothering you.


The problem is that depression by definition makes it 100x harder to do any of those things.

In my case it wasn't that they were hard to do, it was more that I just didn't have the motivation to do them.

The change needed was to actually have a reason to start doing any of them. For me, that started with being honest with myself. Deep down inside I knew the reason for my depression, I "just" had to be honest about it to be able to take control over it.

Once I did that I gained some motivation to do those things on that list, and so I started doing them. And slowly but surely I got out of my hole.

Every now and then I notice I'm halfway back down a hole. I stopped doing those things. Again I have to be honest with myself about why, and with that I can start the climb back out, starting exercising again, eating better etc.


I was typing something a bit snarky, but I went back and thought a little and I just want to say good for you on getting out of that hole and I hope you stay out of it!

Well yeah, but you gotta start somewhere, and it's not gonna get better if you drink cola and beer every day and stay up until 2AM watching Netflix.

I think GP's point is that "just do it" shows a lack of understanding that an inability to "just do X" is often a symptom of depression that leads to people not talking about it, because people who haven't experienced it think you must be lying about having decided to do something and not actually then doing it.

Consider: there was a study about a guy who was paralyzed from the waist down, who got an implant to bypass the injury, and with a year+ of walking with the implant, could eventually walk to a limited extent without the implant.[1]

This suggests walking can be used to treat loss of ability to walk. Unfortunately, there's a catch-22 there...

...and so too with inability to make yourself do anything.

[1] - https://www.nature.com/articles/s41586-023-06094-5


> Well yeah, but you gotta start somewhere

Wait, I thought those things solved depression? Why are you hedging now? Did your initial analysis suck, and you're just now realizing it? Are you figuring out that maybe this is slightly more complex than "just be happy" and that maybe you're not the only person on the planet who's ever thought about this?


Your comment makes no sense. I never said "just be happy". Reformulate it and add more substance so we have something to talk about.

Why do you think I have the responsibility to "add substance" to your non-substantive load of shit?

You may not have said the literal words "be happy", but that is precisely what your "medical advice" amounted to. Perhaps it's time to stop having big, strident opinions about things you know you know absolutely nothing about?


This entire article is in Claude's voice too, so I guess the author keeps choosing Claude for their blog writing too.

I'd rather read 2 short paragraphs of the author's original ideas than the neatly presented expansion into a dozen paragraphs of ridiculously short staccato sentences.


> it can't find actual flaws in your code

I can tell from this statement that you don't have experience with claude-code.

It might just be a "text predictor" but in the real world it can take a messy log file, and from that navigate and fix issues in source.

It can appear to reason about root causes and issues with sequencing and logic.

That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and produces real world fixes.


> I can tell from this statement that you don't have experience with claude-code.

I happen to use it on a daily basis. 4.6-opus-high to be specific.

The other day it surmised from (I assume) the contents of my clipboard that I wanted to do A, while I really wanted to do B; it's just that A was the more typical use case. Or actually: hardly anyone ever does B, as it's a weird thing to do, but I needed to do it anyway.

> but it is indistinguishable from actual reasoning

I can distinguish it pretty well when it makes mistakes someone who actually read the code and understood it wouldn't make.

Mind you: it's great at presenting someone else's knowledge and it was trained on a vast library of it, but it clearly doesn't think itself.


What do you mean the content of your clipboard?

Either I accidentally pasted it somewhere and removed it, then forgot about doing that, or it's reading the clipboard.

The suggestion it gave me started with the contents of the clipboard and expanded to scenario A.


Sorry to sound rude, but you polluted the context by pointing to the fact you would like A, and then found it annoying that it tried to do A?

Oh, please. There’s always a way to blame the user, it’s a catch-22. The fact is that coding agents aren’t perfect and it’s quite common for them to fail. Refer to the recent C-compiler nonsense Anthropic tried to pull for proof.

It fails far less often than I do at the cookie cutter parts of my job, and it’s much faster and cheaper than I am.

Being honest; I probably have to write some properly clever code or do some actual design as a dev lead like… 2% of my time? At most? The rest of the code related work I do, it’s outperforming me.

Now, maybe you’re somehow different to me, but I find it hard to believe that the majority of devs out there are balancing binary trees and coming up with shithot unique algorithms all day rather than mangling some formatting and dealing with improving db performance, picking the right pattern for some backend and so on style tasks day to day.


I know I am not supposed to be negative in HN, but lay off the koolaid, dear colleague.

So you don't even know what your prompt was and you think it might be secretly reading your clipboard and you expect people to take you seriously?

What you're describing is not finding flaws in code. It's summarizing, which current models are known to be relatively good at.

It is true that models can happen to produce a sound reasoning process. This is probabilistic, however (more so than with humans, anyway).

There is no known sampling method that can guarantee a deterministic result without significantly quashing the output space (excluding most correct solutions).

I believe we'll see a different landscape of benefits and drawbacks as diffusion language models begin to emerge, and as even more architectures are invented and practiced.

I have a tentative belief that diffusion language models may be easier to make deterministic without quashing nearly as much expressivity.


Nothing you've said about reasoning here is exclusive to LLMs. Human reasoning is also never guaranteed to be deterministic, excluding most correct solutions. As OP says, they may not be reasoning under the hood but if the effect is the same as a tool, does it matter?

I'm not sure if I'm up to date on the latest diffusion work, but I'm genuinely curious how you see them potentially making LLMs more deterministic? These models usually work by sampling too, and it seems like the transformer architecture is better suited to longer context problems than diffusion


The way I imagine greedy sampling for autoregressive language models is guaranteeing a deterministic result at each position individually. The way I'd imagine it for diffusion language models is guaranteeing a deterministic result for the entire response as a whole. I see diffusion models potentially being more promising because the unit of determinism would be larger, preserving expressivity within that unit. Additionally, diffusion language models iterate multiple times over their full response, whereas autoregressive language models get one shot at each token, and before there's even any picture of the full response. We'll have to see what impact this has in practice; I'm only cautiously optimistic.
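The per-position determinism of greedy decoding, versus the variability of sampled decoding, can be shown with a toy sketch (my own illustration in TypeScript over a made-up logits array, not any model's actual decoder):

```typescript
// Greedy decoding: argmax over the logits at each position.
// Deterministic, but it collapses the output space to a single
// continuation per prefix.
function argmax(logits: number[]): number {
  let best = 0;
  for (let i = 1; i < logits.length; i++) {
    if (logits[i] > logits[best]) best = i;
  }
  return best;
}

// Temperature sampling: draw from the softmax distribution instead.
// Expressive, but repeated runs on identical logits can diverge.
function softmax(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function sample(probs: number[], rand: () => number): number {
  let r = rand();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}

const logits = [2.0, 1.5, 0.1]; // hypothetical next-token scores
console.log(argmax(logits)); // always 0: same logits, same token
console.log(sample(softmax(logits, 1.0), Math.random)); // may vary run to run
```

In this framing, greedy sampling buys determinism token by token at the cost of expressivity, which is the trade-off a whole-response unit of determinism (as imagined above for diffusion models) might relax.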

I guess it depends on the definition of deterministic, but I think you're right and there's strong reason to expect this will happen as they develop. I think the next 5 - 10 years will be interesting!

This all sounds like the stochastic parrot fallacy. Total determinism is not the goal, and it is not a prerequisite for general intelligence. As you allude to above, humans are also not fully deterministic. I don't see what hard theoretical barriers you've presented toward AGI or future ASI.

I haven't heard the stochastic parrot fallacy (though I have heard the phrase before). I also don't believe there are hard theoretical barriers. All I believe is that what we have right now is not enough yet. (I also believe autoregressive models may not be capable of AGI.)

Did you just invent a nonsense fallacy to use as a bludgeon here? "Stochastic parrot fallacy" does not exist, and there is actually quite a bit of evidence supporting the stochastic parrot hypothesis.

I imagine "stochastic parrot fallacy" could be their term for using the hypothesis to dismiss LLMs even where they can be useful; i.e., dismissing them for their weaknesses alone and ignoring their strengths. (Of course, we have no way to know for sure without their input.)

Correct, I am stating that the stochastic parrot hypothesis is a fallacy.

> moreso than humans

Citation needed.


Much of the space of artificial intelligence is based on a goal of a general reasoning machine comparable to the reasoning of a human. There are many subfields that are less concerned with this, but in practice, artificial intelligence is perceived to have that goal.

I am sure the output of current frontier models is convincing enough that some perceive it as outperforming humans. There is still an ongoing outcry, from when GPT-4o was discontinued, by users who had built a romantic relationship with their access to it. However, I am not convinced that language models have actually reached the reliability of human reasoning.

Even a dumb person can be consistent in their beliefs, and apply them consistently. Language models strictly cannot. You can prompt them to maintain consistency according to some instructions, but you never quite have any guarantee. You have far less of a guarantee than you could have instead with a human with those beliefs, or even a human with those instructions.

I don't have citations for the objective reliability of human reasoning. There are statistics about unreliability of human reasoning, and also statistics about unreliability of language models that far exceed them. But those are both subjective in many cases, and success or failure rates are actually no indication of reliability whatsoever anyway.

On top of that, every human is different, so it's difficult to make general statements. I only know from my work circles and friend circles that most of the people I keep around outperform language models in consistency and reliability. Of course that doesn't mean every human or even most humans meet that bar, but it does mean human-level reasoning includes them, which raises the bar that models would have to meet. (I can't quantify this, though.)

There is a saying about fully autonomous self driving vehicles that goes a little something like: they don't just have to outperform the worst drivers; they have to outperform the best drivers, for it to be worth it. Many fully autonomous crashes are because the autonomous system screwed up in a way that a human would not. An autonomous system typically lacks the creativity and ingenuity of a human driver.

Though they can already be more reliable in some situations, we're still far from a world where autonomous driving can take liability for collisions, and that's because they're not nearly as reliable or intelligent enough to entirely displace the need for human attention and intervention. I believe Waymo is the closest we've gotten and even they have remote safety operators.


It's not enough for them to be "better" than a human. When they fail they also have to fail in a way that is legible to a human. I've seen ML systems fail in scenarios that are obvious to a human and succeed in scenarios where a human would have found it impossible. The opposite needs to be the case for them to be generally accepted as equivalent, and especially the failure modes need to be confined to cases where a human would have also failed. In the situations I've seen, customers have been upset about the performance of the ML model because the solution to the problem was patently obvious to them. They've been probably more upset about that than about situations where the ML model fails and the end customer also fails.

That's not a citation.

It's roughly why I think this way, along with a statement that I don't have objective citations. So sure, it's not a citation. I even said as much, right in the middle there.

That’s because there’s no objective research on this. Similarly, there are no good citations to support your objection. They simply don’t exist yet.

Maybe not worth discussing something that cannot be objectively assessed then.

Then don't; all I did was offer my thoughts in a public comments section.
