
Also: UCSD p-System, Symbolics Lisp-on-custom hardware, ...

Historically, their performance is underwhelming: sometimes competitive on the first iteration, sometimes just middling. But generally they can't iterate quickly (insufficient resources, insufficient product demand), so they are soon eclipsed by pure software implementations atop COTS hardware.

This particular Valley of Disappointment is so routine as to make "let's implement this in hardware!" an evergreen tarpit idea. There are a few stunning exceptions like GPU offload—but they are unicorns.



They were a tar pit in the 1980s and 1990s, when Moore's law meant roughly a 16x increase in processor speed every 6 years (a doubling every 18 months).

Right now, the only reason we don't have new generations of these eating the lunch of general-purpose CPUs is that you'd need to organize a few billion transistors into something useful. That's a bit beyond what just about anyone (now apparently including Intel) can manage.


Sure. The need to organize millions (now tens to hundreds of billions) of transistors to do something useful, the economics and will to bring those to market, the need to coordinate functions baked into hardware with the faster-moving and vastly more plastic software world. Oh, and Amdahl's Law.

Those are the tar pit. Transistor counts skyrocket, but the principles and obstacles have not changed one iota in over 50 years.
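To make the Amdahl's Law point concrete (this is just the textbook statement of the law, not something claimed in the comment above): if a fixed-function block accelerates only a fraction p of the workload by a factor s, the overall speedup is

$$ S = \frac{1}{(1 - p) + \frac{p}{s}} \;\le\; \frac{1}{1 - p} $$

So even an infinitely fast accelerator applied to half the workload yields at most 2x overall, which is part of why hardware offload wins tend to be narrow.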


The obstacles have absolutely changed.

A processor from 2015 is good enough for most daily tasks in 2025. Try saying that about a 1985 processor in 1995.

The issue today isn't that, by the time you get a custom 10x design to market on SOTA manufacturing, you only have two years before general-purpose chips are just as fast.

It's getting to market in the first place.



