
The stock-hardware compilers for Lisp that were available in 01979, when Knight designed the CADR, like MacLisp, were pretty poor on anything but numerical code. When Gabriel's book https://archive.org/details/PerformanceAndEvaluationOfLispSy... came out in 01985, the year after he founded Lucid to fix that problem, InterLisp on the PDP-10 was 8× slower on Tak (2") than his hand-coded PDP-10 assembly reference version (¼") (pp. 83, 86, 88, "On 2060 in INTERLISP (bc)"). MacLisp on SAIL (another PDP-10, a KL-10) was only about 2× slower (0.564"), and the Symbolics 3600 he benchmarked was slightly faster (0.43") than MacLisp, but still about 70% slower than the PDP-10 assembly code. No Lucid Common Lisp benchmarks were included.

Unfortunately, most of Gabriel's Lisp benchmarks don't have hand-tuned assembly versions to compare them to.

Generational garbage collection was first published (by Lieberman and Hewitt) in 01983, but wouldn't become widely used for several more years. This was a crucial breakthrough that enabled garbage collection to become performance-competitive with explicit malloc/free allocation, sometimes even faster. Arena-based or region-based allocation was always faster, and was sometimes used (it was a crucial part of GCC from the beginning in the form of "obstacks"), but Lisp doesn't really have a reasonable way to use custom allocators for part of a program. So I would claim that, until generational garbage collection, it was impossible for stock-hardware Lisp compilers to be performance-competitive on many tasks.
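
(For what it's worth, the closest thing portable Common Lisp offers is the DYNAMIC-EXTENT declaration, which lets a compiler stack-allocate individual objects and reclaim them wholesale on function exit; that's per-binding, not a real pluggable allocator. A minimal sketch, with a made-up function name, that a compiler such as SBCL will honor:)

    (defun dot-with-temp (x y z)
      ;; V may be stack-allocated thanks to the declaration, so it never
      ;; reaches the heap or the GC; but this is per-object, nothing like
      ;; handing a whole subsystem its own obstack/region.
      (let ((v (list x y z)))
        (declare (dynamic-extent v))
        (reduce #'+ (mapcar #'* v v))))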

Tak, however, doesn't cons, so that wasn't the slowness Gabriel observed in it.
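
For reference, here is Tak roughly as it appears in Gabriel's suite (a sketch from memory, in Common Lisp; consult the book for the exact text). It does nothing but compare and decrement small integers and recurse, so it stresses function-call and fixnum-arithmetic speed rather than the allocator:

    (defun tak (x y z)
      ;; Only fixnum compares, decrements, and recursive calls -- no consing.
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))

    ;; The timings quoted above are for the standard call:
    ;; (tak 18 12 6) => 7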

So I will make two slightly independent assertions here:

1. Stock-hardware Lisp compilers available in the late 01970s, when the LispMs were being built, performed pretty poorly in absolute terms. The evidence above doesn't prove this, but I think it's at least substantial evidence for it.

2. Whether or not my assertion #1 above is actually true, it was certainly widely believed at the time, even by the hardest core of the Lisp community; and that belief provided much of the impetus for building Lisp machines.

Current Lisp compilers like SBCL and Chez Scheme are enormous improvements on what was available at the time, and they are generally quite competitive with C, without any custom hardware. Specializing JIT compilers (whether Franz-style trace compilers like LuaJIT or not) could plausibly offer still better performance, but neither SBCL nor Chez uses that approach. SBCL does open-code fixnum arithmetic, and I think Chez does too, but they have to precede those operations with bailout checks unless declarations entitle them to be unsafe. Stalin does better still by using whole-program type inference.
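
To make the open-coding point concrete, here's a hedged SBCL-flavored sketch (the function name is made up): with fixnum declarations and safety lowered, the compiler can emit a bare machine add, whereas without them the generic + has to type-check its arguments and guard against overflow into bignums:

    (defun fx-add (a b)
      ;; With these declarations SBCL open-codes the addition as roughly one
      ;; machine instruction; without them, a type/overflow bailout check
      ;; precedes (or replaces) the fast path.
      (declare (type fixnum a b)
               (optimize (speed 3) (safety 0)))
      (the fixnum (+ a b)))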

Some links:

https://dl.acm.org/doi/pdf/10.1145/152739.152747 "'Infant Mortality' and Generational Garbage Collection", Baker (from 01993 I think)

https://dspace.mit.edu/handle/1721.1/5718 "CADR", AIM-528, by Knight, 01979-05-01

https://www.researchgate.net/publication/221213025_A_LISP_ma... "A LISP machine", supposedly 01980-04, ACM SIGIR Forum 15(2):137-138, doi 10.1145/647003.711869, in The Papers of the Fifth Workshop on Computer Architecture for Non-Numeric Processing, by Greenblatt, Knight, Holloway, and Moon; but it looks like what Knight uploaded to ResearchGate is actually a 14-page AI memo by Greenblatt

https://news.ycombinator.com/item?id=27715043 previous discussion of a slide deck entitled "Architecture of Lisp Machines", the slides being of little interest themselves but the discussion including gumby, Mark Watson, et al.


