Hacker News | new | past | comments | ask | show | jobs | submit | smusamashah's comments

Didn't know that agar.io was based on a real thing. That name always felt weird.

We are not getting faster and better software even now that coding is "solved". We won't get Skynet until we do.

I believe the peak of automated coding will be when this AI writes super-optimised software in assembly language, or something even closer to the CPU. At the moment it's full of bloat; with that, it will only drown under its own weight instead of improving itself.


In the Exhibition section, Lasting Moment shows 4 glass sheets standing parallel to each other.

It looks like the cracks are the same on all 4 sheets. That is amazing. There are only 4 pictures though. I want to see them more closely.

Edit: while looking for more photos, I found more of his work here. The 3D effect from layering sheets is so cool. https://aurum.gallery/simon-berger/ I like the sphere more than the skull.

Edit: Found some more pictures of those sheets with the same cracks on his Instagram https://www.instagram.com/p/C_34-G0K-Qm/?igsh=MWtzY2FydWQxa2...


I need to tell everyone: we can just uninstall this modern Notepad, which restores plain old Notepad.

You can just uninstall this modern Notepad. It will bring back plain old Notepad.

I found that when I did that, I lost the ability to associate any program with .txt files; it popped up errors when I tried to assign a default.

You can make the old Notepad the default on the command line by going to Apps > Advanced app settings > App execution aliases and disabling the Notepad entry.
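For reference, the uninstall can reportedly also be done from an elevated PowerShell prompt. This is a sketch, not a guarantee: `Microsoft.WindowsNotepad` is my assumption for the Store package name, so list the packages first to confirm it on your build.

```powershell
# List installed Notepad-related Store packages (confirm the exact name first)
Get-AppxPackage *Notepad* | Select-Object Name, PackageFullName

# Remove the Store Notepad for the current user (package name assumed);
# the classic notepad.exe in C:\Windows\System32 is not touched by this
Get-AppxPackage Microsoft.WindowsNotepad | Remove-AppxPackage
```

On some builds you may still need to disable the App execution alias as described above so the `notepad` command resolves to the classic binary.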


We can just "uninstall" this Notepad and it will restore the old, simple Notepad.

Until a future Windows release doesn't include the old Notepad anymore.

This is my concern with Notepad: the old one will just be gone forever. The same thing happened with Paint.

Then you'd have to copy the old one over to keep it alive.

Will it be possible to put this on a Talaas chip and reach even higher speeds?


Does that mean that if it were embedded on a Talaas chip, it could generate ~50,000+ tokens per second?

I think pretty much anything is going to get an enormous speed boost if the model isn't subject to memory latency but is instead baked directly into the circuits, ASIC-style.

I got the same answer in the Claude Workbench, which does not have a system prompt. In the web UI I was always getting "Claude" as the answer.

Anthropic recently complained about DeepSeek using Claude to train their models. This response, however, suggests that Claude is doing something similar.

