
Happy new year from the Middle East! May 2026 be a year of peace and compassion <3

So they limit access to data on self-hosted instances after an upgrade? Sounds like ransomware with extra steps.

Enshittification ensues.


Anecdotally, I stumbled upon this phenomenon when trying to learn to play the piano. I noticed that at the end of a session I'd make so many mistakes and feel like I hadn't learned much, but coming back after a day or two I could really feel the difference.


Motor learning is quite different from the type of information the article talks about. I tried adding dance moves to Anki for spaced repetition, and it's extremely obvious that it's a great way to remember a move very badly but never get good at it. Compare that to the geography deck, where Anki is perfectly suited to the task and smashes it.

Do you have more experiences with learning dance moves and spaced repetition you can share? That sounds interesting. (Also what dance is it?)

Spaced repetition works well for motor learning. You just have to keep hitting “Again” until you are actually good at it.

I don't think that's very useful. You're basically saying to treat anything short of mastery as "I forgot". That's too much practice. It also doesn't take into consideration that you're better off doing your reps later in the day (i.e. closer to your sleep cycle).

Sure, you can sort of use SRS here, but it's suboptimal: it will probably leave too many cards in the top-priority "learning" pile, causing too much load, or you'll train incorrectly.

Still, I agree that this is MUCH better than NOT doing SRS if you don't have an alternate tool with a better algorithm.
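
For anyone curious what "keep hitting Again" does to the schedule, here's a rough sketch of an SM-2-style scheduler, the classic algorithm Anki grew out of (Anki's current scheduler differs in the details; this is illustrative only). Any grade below 3 throws the card back into the learning pile with a one-day interval:

    # Simplified SM-2-style scheduling (illustrative; Anki's real scheduler differs).
    def review(interval_days, repetitions, ease, quality):
        """quality: 0-5; anything below 3 counts as a lapse ("Again")."""
        if quality < 3:
            # Lapse: back to the learning pile with a 1-day interval.
            return 1, 0, ease
        if repetitions == 0:
            interval_days = 1
        elif repetitions == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * ease)
        # Ease drifts down on low-but-passing grades, never below 1.3.
        ease = max(1.3, ease + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
        return interval_days, repetitions + 1, ease

    state = (1, 0, 2.5)                 # a fresh card
    state = review(*state, quality=1)   # "Again": stays at a 1-day interval
    state = review(*state, quality=4)   # a pass finally starts spacing it out

Grading a half-learned move "Again" every session keeps it at a one-day interval forever, which is exactly the "too many cards in the learning pile" load problem.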


Learning to play an instrument feels like magic.

You fail the whole day.

Don't have the feeling anything sticks.

Then, the next day it works right from the start.

No new insights, nothing, it just works.


I coach table tennis, and sometimes I tell people that we only actually "learn" while we sleep. Without sleep, the brain doesn't have time to "save" the new information for future use.

Not sure if it's factually correct, but it seems about right: sleep seems to be the magic sauce, the time when all memories get written from RAM to disk.


I've never played an instrument, but I had the exact same experience getting through difficult stages in video games.

Yes, me too.

It seems to be a thing with practicing motion sequences.


I've noticed the same thing with rote memory tasks like lines of poetry, so I think it might be a more general thing involving the memory consolidation properties of sleep, maybe particularly focused on fluency/speed rather than mere ability to recall.

It's not about stupid people; there are stupid people everywhere. It's about the 0.1% elite controlling all the wealth and power, exploiting flaws in the way humans work (stupid or not, every human needs shelter and food to survive).


I can't unsee this anymore, and it ruins the whole internet experience for me.


Reading this, I get this weird feeling that something in there is trying to communicate, which is just as horrifying as the alternative: we are alone, our minds are just finding order in chaos, and there is no meaning except what we create.


The alternative to something trying to communicate through a Markov model isn’t that we’re alone. Just because there’s no life on Mars doesn’t mean there’s no other life in the universe.


I had the same feeling while testing the code. It might be caused by seeing the increasingly coherent output of the different models; it makes you feel like it's getting smarter.


Imagine a parallel world in which Java is called Oak and is actually nice from inception, not just nice after decades.


Compared to C, Java was quite nice.


Hey, no need to personally attack anyone. A bad organization can still consist of good people.


I disagree. I think the whole organization is egregious and full of Sam Altman sycophants who are causing real and serious harm to our society. Should we not personally attack the Nazis either? These people are literally pushing for a society where you're at a complete disadvantage. And they're betting on it. They're banking on it.


I bet it would work the same with a REST API and any kind of spec, be it OpenAPI or even text files. From my humble experience.


It would, but the point of MCP is that it's discoverable by an AI. You can just go change it and it'll know how to use it immediately.

If you go and change the parameters of a REST API, you need to modify every client that connects to it or they'll just plain not work. (Or you'll have a mess of legacy endpoints in your API)

Not a fan, I like the "give an LLM a virtual environment and let it code stuff" approach, but MCP is here to stay as far as I can see.
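
To make "discoverable" concrete: every MCP session starts with the client asking the server for its tool list, so the model always sees the current schema, and a server-side change just shows up the next time around without touching any client. A rough sketch of that exchange, with the wire framing omitted and a hypothetical search_tickets tool standing in for a real one:

    import json

    # JSON-RPC 2.0 request an MCP client sends at session start (over stdio or HTTP).
    list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    # The server answers with its *current* tool schemas, so renaming a parameter
    # server-side is picked up on the next session -- no client update required.
    list_response = {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "tools": [{
                "name": "search_tickets",        # hypothetical example tool
                "description": "Full-text search over helpdesk tickets",
                "inputSchema": {                 # plain JSON Schema
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }]
        },
    }

    print(json.dumps(list_response["result"]["tools"], indent=2))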


> the point of MCP is that it's discoverable by an AI

What exactly makes it more discoverable than, say, pointing the AI to an OpenAPI spec?


Not hugely different from any other API standard that has a "schema" document, like OpenAPI!

https://learn.openapis.org/examples/v3.0/petstore.html
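
To put the two side by side: "pointing the AI at an OpenAPI spec" can be as little as fetching the spec and flattening its operations into a list the model reads, which is roughly the same information MCP's tools/list hands over. A rough sketch, with a placeholder spec URL (many frameworks, e.g. FastAPI, serve one at /openapi.json):

    import json
    from urllib.request import urlopen

    SPEC_URL = "https://example.com/openapi.json"  # placeholder: your service's spec

    with urlopen(SPEC_URL) as resp:
        spec = json.load(resp)

    # Flatten the spec into one line per operation for the model's context.
    for path, methods in spec["paths"].items():
        for verb, op in methods.items():
            if not isinstance(op, dict):        # skip path-level "parameters" entries
                continue
            print(f"{verb.upper():6} {path:40} {op.get('summary', '')}")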


Honest question: Claude can understand and call REST APIs given their docs, so what's the added value? Why should anyone wrap a REST API in another layer? What does it unlock?


I have a service that other users access through a web interface. It uses an on-premises open model (gpt-oss-120b) for the LLM and a dozen MCP tools to access a private database. The service is accessible from a web browser, but this isn’t something where the users need the ability to access the MCP tools or model directly. I have a pretty custom system prompt and MCP tools definitions that guide their interactions. Think of a helpdesk chatbot with access to a backend database. This isn’t something that would be accessed with a desktop LLM client like Claude. The only standards I can really count on are MCP and the OpenAI-compatible chat completions.

I personally don’t think of MCP servers as having more utility than local services that individuals use with a local Claude/ChatGPT/etc client. If you are only using local resources, then MCP is just extra overhead. If your LLM can call a REST service directly, it’s extra overhead.

Where I really see the benefit is when building hosted services or agents that users access remotely. Think more remote servers than local clients. Or something a company might use for a production service. For this use-case, MCP servers are great. I like having some set protocol that I can know my LLMs will be able to call correctly. I’m not able to monitor every chat (nor would I want to) to help users troubleshoot when the model didn’t call the external tool directly. I’m not a big fan of the protocol itself, but it’s nice to have some kind of standard.

The short answer: not everyone is using Claude locally. There are different requirements for hosted services.

(Note: I don’t have anything against Claude, but my $WORK only has agreements with Google and OpenAI for remote access to LLMs. $WORK also hosts a number of open models for strictly on-prem work. That’s what guided my choices…)
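
To sketch what a backend like that looks like under those constraints (an OpenAI-compatible endpoint plus tool schemas taken from the MCP server's tools/list), here's a rough illustration; the base URL, model name wiring, and the lookup_ticket tool are placeholders, not the actual setup described above:

    from openai import OpenAI

    # Any OpenAI-compatible server (vLLM, llama.cpp, etc.); URL and key are placeholders.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    # In a setup like the one above, these schemas would be generated from the
    # MCP server's tools/list response, translated to chat-completions format.
    tools = [{
        "type": "function",
        "function": {
            "name": "lookup_ticket",             # hypothetical backend tool
            "description": "Fetch a helpdesk ticket from the private database",
            "parameters": {
                "type": "object",
                "properties": {"ticket_id": {"type": "string"}},
                "required": ["ticket_id"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-oss-120b",
        messages=[{"role": "user", "content": "What's the status of ticket 42?"}],
        tools=tools,
    )

    # The backend (never the user's browser) executes any tool calls against the
    # MCP server and feeds the results back in a follow-up request.
    for call in resp.choices[0].message.tool_calls or []:
        print(call.function.name, call.function.arguments)

The point is that web users only ever see the chat interface; the tool plumbing stays entirely server-side.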


Gatekeeping (in a good way) and security. I use Claude Code in the way you described but I also understand why you wouldn’t want Claude to have this level of access in production.


Ironically models are sometimes more apt at calling REST or web APIs in general because that is a huge part of their training data.

