Hacker News | arikrak's comments

I wouldn't have expected there to be enough text from before 1913 to properly train a model; it seemed like the first successful LLMs needed an internet's worth of text.

This model is more comparable to GPT-2 than anything we use now.

I recently wrote a short post on something similar: while AI is able to solve increasingly long tasks, people's attention spans are getting shorter. https://www.zappable.com/p/ai-vs-human-attention-spans

Hopefully people can learn to use AI to help them while still thinking on their own. It's not like many of the assignments in school were that useful anyway...


Before advanced AI, "essential complexity" was a bottleneck, and Brooks was right that there couldn't be continuous exponential gains in software productivity. However, advanced AI will handle essential complexity as well, which could end up making software development 10x or 100x faster. We still need humans currently, but there's no area one can point to and say we'll always need people for this aspect of software development. The latest coding agents already reason about requirements before they write code, and they will only improve...


AI solves "essential complexity" the same way it solves the Halting problem...

These are fundamental CS concepts, you don't solve them.
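(For reference, the undecidability here comes from a diagonal argument. A minimal Python sketch, assuming a hypothetical `halts` oracle, shows that any claimed oracle can be refuted on a program built from the oracle itself:)

```python
def diagonal(halts):
    """Given any claimed halting oracle, build a program it misjudges."""
    def d():
        if halts(d):
            while True:  # the oracle predicted d halts, so loop forever
                pass
        # the oracle predicted d loops forever, so halt immediately
    return d

# Example: an "oracle" claiming nothing halts is wrong about its own diagonal
never_halts = lambda f: False
d = diagonal(never_halts)
d()  # halts immediately, contradicting the oracle's prediction
```

(The symmetric case, an oracle claiming everything halts, would make `d` loop forever, which is exactly why no total, correct `halts` can exist.)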

Also, I would first wait for LLMs to have reliable reasoning capabilities on trivial logic puzzles, like Missionaries and Cannibals, before claiming they can correctly "reason" about concurrency models and the runtime behavior of million-LOC programs.
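(For comparison, the puzzle itself falls to a few lines of classical search. A breadth-first-search sketch, with state as (missionaries on the left bank, cannibals on the left bank, boat on the left), finds the standard 11-crossing solution:)

```python
from collections import deque

def solve():
    """BFS over (missionaries_left, cannibals_left, boat_on_left) states."""
    start, goal = (3, 3, 1), (0, 0, 0)
    loads = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat carries 1-2 people

    def safe(m, c):  # no bank may have cannibals outnumbering missionaries
        return (0 <= m <= 3 and 0 <= c <= 3
                and (m == 0 or m >= c)
                and (3 - m == 0 or 3 - m >= 3 - c))

    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        for dm, dc in loads:
            sign = 1 if b else -1  # people leave the bank the boat is on
            nm, nc = m - sign * dm, c - sign * dc
            # enough people on the boat's side to board?
            avail_m, avail_c = (m, c) if b else (3 - m, 3 - c)
            if dm > avail_m or dc > avail_c:
                continue
            state = (nm, nc, 1 - b)
            if safe(nm, nc) and state not in seen:
                seen.add(state)
                frontier.append((state, path + [(dm, dc)]))

print(len(solve()))  # 11 crossings
```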


Essential complexity is essential to your problem space... AI can't figure that out.


I've used Pocket since back when it was called "Read It Later" and have saved just shy of 30k articles. It's gotten worse recently, but when I tried other apps (Instapaper, Matter, Raindrop, PaperSpan, Omnivore) they all had their own issues. Now I'll have to give those apps another try...


Try Reader from Readwise. It's a great multi-platform app that supports all kinds of reading media and can save articles, tweets, YouTube videos, etc. It's a paid app, but well worth it. Look for the comment from tristanho, the founder, below.


Here's Gemini: https://g.co/gemini/share/ab287b25648f

I also asked ChatGPT o3 and it thought for 11.5 minutes! https://chatgpt.com/share/682d0993-db4c-8004-a66c-3908ef7203...


AI is a potential silver bullet since it can address the "essential complexity" that Fred Brooks said regular programming improvements couldn't address. It may not yet have produced an "order of magnitude" improvement in overall software development, but it has in certain areas, and that will spread over time.

https://en.wikipedia.org/wiki/No_Silver_Bullet


Clicking the link opens the Ticketmaster app on my phone.


They use higher-caffeine beans for decaf coffee - Robusta instead of Arabica. Robusta beans aren't considered as good, so they're cheaper, and producers sell the extracted caffeine to e.g. soda companies.


Just not true. Cheap coffee is already robusta anyway, and arabica is used in good-quality decaf. Even so, arabica is consistently used as a marketing bullet point for coffee, even though there's plenty of really, really bad arabica coffee out there.


ChatGPT already does that when you create a custom GPT. If you don't connect it to external files or services, that's basically all custom GPTs are.


This looks pretty cool! Does anyone know of visualizations for simpler neural networks? I'm aware of TensorFlow Playground, but that's just a toy example; is there anything for visualizing a real example (e.g. handwriting recognition)?



We made a VR visualization back in 2017 https://youtu.be/x6y14yAJ9rY


