haberman's comments

I'm old enough to remember a time when the primary hacker cause was DRM, the DMCA, patent trolls, export controls for PGP, etc. All things that made it difficult to use information when you want to. "Information wants to be free."

It's wild to see the about face. Now it's:

> If [companies] can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.

It would have been very difficult to predict this shift 25 years ago.


This claim of contradiction has never worked for me.

Let's say person A wants everyone to be rich.

Person B plots a plan to make themselves rich and everyone else poorer.

One can argue that any action by A is now a contradiction. If they work with B, it makes a lot of people poorer, not richer. If they work against B, B does not get rich.

However, this is not a contradiction. If a company uses training data in ways that reduce and harm other people's ability to access information, like hiding attribution or misrepresenting the data and its sources, people who advocate for free information can hold a consistent view and still work against such use. It is not a shift. It would only be a shift if we believed that copyright would be removed, works would be given to the public for free, and companies would no longer try to hide and protect creative works and information.


You can certainly argue second-order effects (ie. we have to restrict information to save information), but the movie studios were making that same argument at the time:

> If copyright can no longer protect the distribution of the work they produce, who will invest immense sums to create films or any other creative material of the kind we now take for granted? Do the thieves really expect new music and movies to continue pouring forth if the artists and companies behind them are not paid for their work?

--Jack Valenti, Motion Picture Association of America, 2000 (https://archive.is/PBy7C)

It sounds remarkably similar to what people concerned about AI say today. How do we make sure that artists get paid?

I don't think many hackers found the argument compelling at the time.


You're taking Jack Valenti at face value. He said "we're here to protect the artists" because the artists were popular and the record labels were not. He was in the business of protecting the labels and screwing the artists and everyone knew it.

The artists were certainly making more money from the studios and record labels than they got from the authors of DeCSS, Napster, BitTorrent, The Pirate Bay, etc.

When Gillian Welch wrote "Everything is Free" in 2001, she wasn't complaining about the record companies, she was complaining about Napster.

> Q: Do you remember where you were when you wrote “Everything is Free”?

> A: I do. I remember exactly where I was and what was going on. It was when Napster was starting to decimate the traditional recording industry dynamic, the viability of making your livelihood [from] your art.

--Gillian Welch, 2018 (https://www.rollingstone.com/music/music-features/gillian-we...)


Most artists were making way more money off the fans (even those downloading music) via touring and merch sales, than they were making off of the labels from residuals. Most were not making anything from residuals.

Valenti was desperate to enlist musicians because people hated the labels and did not feel bad about stealing from them. But the vast majority of musicians were not willing to back the labels against the fans. The few he managed to enlist, like Metallica, were notable because they were exceptions. And the fact that they were already rich and already at the end of their careers was noted by many at the time.

In contrast you have, for instance, Courtney Love who wrote a widely-distributed essay about how she and most artists make almost nothing from record sales.

https://www.salon.com/2000/06/14/love_7/


It's an interesting essay, and the TLC case does sound pretty egregious. But the premise is undermined by the fact that Love is worth an estimated $100M today, largely thanks to owning Nirvana's publishing rights, which she inherited from Kurt Cobain.

This is what happens when a culture doesn't have robust exclusionary mechanisms for people who want to burn it down.

We welcomed the vampires in and wonder why our necks hurt.


This is like saying Winner Take All Capitalism doesn't have an exclusionary mechanism for the rich. The system exists for the sole purpose of serving the already-rich. The vampires are an inevitability baked into the system from the start.

We don't technically have "winner take all" capitalism. At least, starting about 90 years ago we had many mechanisms in place to regulate such situations.

Then more vampires crept in and convinced people that the government they voted into office sucks. So began a campaign to gut the regulations protecting them from the vampires, as the vampires slowly filled their blood banks.




Disney is all-in on AI.

They are thrilled.

The folks fighting perpetual copyright were not fighting to make it possible for Disney to fire creatives. In fact they were fighting for the creatives to triumph over Disney.


Disney is all in because all their characters are entering the public domain over the next 5 years. They can't fight like it's 1998 because youtube is now worth more than they are.

> In fact they were fighting for the creatives to triumph over Disney.

We were doing nothing of the sort. It was "Information wants to be free", not "we want to provide a perpetual job for a subset of white-collar workers".

sprinkles holy water


Well I was in that cohort and none of us were thinking we were helping megacorps create the content slop machine from 1984.

Our concern was that corporations were expanding the definition of intellectual property to the extent where you couldn't make a movie or song or write a book as an individual without some corporation with a massive "IP" warchest coming after you and declaring it derivative. You couldn't write some software without a corporation with a massive repository of junk patents claiming you infringe.

We wanted to ensure that individual creators could continue to have a voice and not get sued out of existence by the IP legal/industrial complex that was forming, with its arms races between megacorps and SLAPPs against everyone else.

If we knew we were feeding a yet-to-be-invented slop machine that would allow megacorps to unemploy all the creatives, most of us would not have supported that.

And by the way Disney is all in on AI for the same reason they were all in on perpetual copyright. In the perpetual copyright world, having a massive library of content you no longer have to pay residuals on was a source of massive amounts of "free" revenue. You could just keep re-releasing and re-making stuff. You did not have to do the messy, expensive work of paying people to come up with really good new stuff.

In the AI world, the money-printing capital asset is the trained model that grinds out slop 24/7, and you -- again -- don't have to pay actual people to create anything new.


>If we knew we were feeding a yet-to-be-invented slop machine that would allow megacorps to unemploy all the creatives, most of us would not have supported that.

We have multiple Communist AIs that are on par with Western AI from 18 months ago and can run locally on 5-year-old hardware.

I have no idea what fever nightmare you live in, but the future is bright and only getting better.


I think you just want to make a comparison of copyright to slavery.

Property classes are born and die every day. You can own the rights to publish an arcade video game, but that class of rights would have been far more valuable 45 years ago. NFTs were born and died just recently. You can own digital assets worth real money in an online game that simply shuts down.

Some people may read this and say "these don't qualify as a property class", to which I will reply that "property class" used in this way is a brand-new term, one I think was invented solely to compare the limitations on human freedom associated with slavery to the limitations on human freedom associated with intellectual property.


> The last time a property class was removed was _slaves_.

Easy counterexample: titles of nobility. Also perpetual bonds, delegated taxation rights, the ability to mint currency. The list goes on.

If you're going to use history to support your AI bull agenda, you should at least pre-fly it with the AI first -- it would have pointed this out.

> Arguing that copyright is good because a subset of big tech doesn't want it around is as stupid as arguing that slavery is good because the robber barons don't like it.

Sorry, who's saying it's good? You are, actually, insofar as you're willing to support the right of AI companies to take people's information and use it to create copyrighted model weights. Why do you care less about the intellectual property of billionaires than that of the common man? Do you really think they're on your side?


Those people were trying to build a sharing/gift economy. They weren't able to keep bad actors out of their sharing economy. They are bitter that their utopian dreams got hijacked by self-dealers. Why is that wild?

It's highly debatable whether, in case of an information sharing/gift economy, the concept of "bad actors coming in and ruining it for everybody by taking without giving back" even makes sense.

The information is still there, as is the community that you've built, the joy that you get out of sharing the information, everything you've learned...

Why is any of that diminished, just because some people or entities that you dislike also got something out of it?


I would take up that debate.

Attribution is seemingly a central part of an information sharing/gift economy, and especially of an information sharing/gift community. It is part of the trust that connects people, and without it the community falls apart, and with that the economy. AI by its very nature removes attribution.

Accuracy of information is a second critical aspect of information sharing and of the communities built around it. Would Wikipedia as a community and resource work if some articles were just random words? If readers don't trust the site, and editors distrust each other, the community collapses and the value of the information is reduced. It might look like adding AI-generated articles would not harm the existing articles, or the joy that editors of the past had in writing them, but the harm is what happens after the community gets flooded with inaccurate information. The same goes for many other information sharing communities.


Source trust and gift attribution are two distinct concepts, I'd say. One happens to the detriment of the taker (or "thief", if that even makes sense, as per my original comment); the other harms the original "producer".

For the former, it is already very much in any AI company's best interest to preserve attribution to become and remain credible.

For the latter, I can't help but wonder whether a gift economy that needs to diligently bookkeep attribution really is one, and if this is the only practicable way to implement one in a given larger society/economy, I'd say this says something important about that society as well.


I make very heavy use of the sources that Gemini cites when I use it. I tend to use AI as a sort of mega search engine where I get a little bit of discussion, but if I care even a little about the topic, I end up reading the source material anyway.

> AI by its very nature removes attribution.

This is incorrect. RAG preserves attribution. Training data doesn't, but it doesn't make sense to attribute that anyway, unless you want a list of every person who has ever lived.


It's diminished because the hard reality is that you need money to live.

The end result of major tech companies sweeping in, taking everyone's creative work, outcompeting the originals with AI derivatives, and telling every artist on the planet "fuck off, send a job application to McDonalds" is significantly less art.

Copyright was invented to prevent exactly this scenario.


Yes, which is why hackers and artists (at least those mainly publishing instead of mainly performing for a live audience) are ultimately not natural/inherent allies.

Hackers have usually drawn their funding from their (often lucrative) employment, which is what gave them the freedom to give away the products of their hacking for free.

One needs copyright to survive; the other sees it as a means to enforce openness at best (those in favor of copyleft) and as an obstacle to their pursuit (owning the full system, liberating all aspects of and information about it) at worst.

This rift was always visible if you knew where to look, but AI is definitely wedging it wide open.


> whether ... the concept of "bad actors coming in and ruining it for everybody by taking without giving back" even makes sense.

This is pretty clearly answered by the GPL: yes, it does, and this concept has been around since the very beginning.

> The information is still there

True

> as is the community that you've built

Untrue. At this point it's well understood that AI substitutes for many of the services that once afforded people a way to monetize their production for the community. Without the ability to make a living by doing so, even a small one, people will be limited to doing only what they can in the little free time they get outside of work.

That's the whole problem -- that AI, as it exists today, is taking away from the public, and hurting it at the same time. That's closer to robbery than it is to "sharing in the community".


Yes. There's a difference between walking a trail and maybe littering a few pieces of trash, and walking a trail while actively setting branches on fire.

One scenario is manageable to leave be, or perhaps one or two volunteers clean it up. The fires get the entire trail closed down to everyone.

With some FOSS projects being bombarded by scraping traffic, redoing their PR systems, considering ways to limit contributors, and even going closed source, I don't think such a metaphor is an exaggeration.


> utopian dreams got hijacked by self-dealers

Such is the fate of all utopian dreams.


If you're implying that it's a violation of the original hacker ethos, I disagree.

"Information wants to be free" is a small part of the hacker ethos venn diagram. There are many hacker ethos traits that aren't about cracking, specifically.

Also, the server "information" isn't free (as in beer) to begin with; it costs server availability. Coming up with ways to penalize greedy actors is not only well within the server operator's prerogative, it's an interesting tit-for-tat problem that could pique any hacker's interest.

A bonus hacker trait is that these poisoning responses are individualistic, i.e. the government doesn't get involved, where certainly more aggressive anti-AI sentiments could (wrongly) call for that.

So I'd say this type of LLM-resistance falls squarely in the original hacker ethos, even though it incidentally counteracts one minor aspect of "information availability". Though I'd certainly agree that the picture today is a lot different than it was. Ironic even.


For what it's worth, I've generally sort of been on the "information wants to be free" side of things, and I still am. I don't really understand the folks who released their software under an open source license and are now upset that LLMs are training on it -- those folks were pretty quiet when their source code was being indexed by Google. But I suppose that's because Google was sending traffic their way, which they could then monetize. So this is much less about any kind of philosophical argument and much more about who's getting money, which I don't really care about. I view one of the core values of open source software as being something that we can learn from, whether that's through AI or otherwise.

> I don't really understand the folks that released their software under open source license and are now upset that LLMs are training on it

The key word there is "license." Open source often has strings attached--an obligation to credit the source, an obligation to release derivative code under the same license, etc. LLMs seldom respect the license, they just quietly and extensively plagiarize everything.


"Information wants to be free, but only to be used by people I wholly endorse" is the motto. You'll see young people singing the praises of piracy but then using "piracy" as an excuse for hating LLMs.

Corporations are not people.

Who works at corporations and benefits from their actions?

If my LinkedIn feed is any indication, bizarre inhuman ghouls who wear the names and profile pictures of my college friends like skin-suits and exclusively post AI-generated marketing materials for AI products.

A few million fewer Americans than a few years ago, I guess.

It becomes a bit easier to see when you finish the sentence. "Information wants to be free (from ______)." If you filled that blank in with "rent-seeking Capitalists and corporations," you likely have everything you need to understand why they don't see it as a turn.

I say this as someone whose notions exist orthogonal to the debate; I use AI freely but also don't have any qualms about encouraging people to upend the current paradigm and pop the bubble.


Sure, with enough effort, you can find a seemingly clever way to turn almost every mantra into its semantic opposite.

It doesn't take much cleverness because we're talking about a straightforward dynamic. A counter-cultural expression that was a "screw you" aimed at corporations was co-opted and misinterpreted by those same corporations as "It's free real estate", and now the latter are flummoxed that they're not buddies with the former. Well, points up that's why.

Hackers are not one big homogeneous group (although there definitely are larger trends, and maybe you have a point there).

Still, people were saying all kinds of inane stuff 25 years ago too.


Politics will make more sense once you realize no one is trying to have consistent principles.

People are in general for whatever they think will benefit them, and against what they think will harm them.

So piracy is ok when it benefits the little guy and not ok when it benefits the big guy. Unions are good when they stand up against employers, and bad when they discriminate against non-union workers. There's no contradiction there.


THEN: "You can't violate our copyright because it's ours and belongs to us."

NOW: "We can violate your copyright because we want to."

YOU: "Where's mine, and how do I make more people click on these ads?"


The common string between both of those advocacies is that they heavily favor huge corporations instead of the little guy.

Basically, the DMCA and DRM make you a criminal while protecting NBC and Disney and such. And AI steals your work and allows soulless megacorps to basically take your job.

Personally I'd argue AI is very likely to be worse for the average person, depending on their career.

Some people don't care or maybe don't realize. And then I think some people are just naive, and are assuming everyone else will be fucked, but they won't be. And then some other people are self-destructive, and they know it will make their life harder - but they advocate for it anyway, because they feel they deserve the suffering, and maybe hold some misguided belief that suffering is the fuel of victory.


It was never about some "information wants to be free" philosophy for most people. It was about, "I want information to be free for me to access, and btw fuck big corporations." No real shift happened.

Those people were always lying, it was always about power dynamics. People hated DRM and surveillance because they saw it as punching down. People now hate AI wielded by corpos because they see it as punching down. Extremely few (if any) people ever bought into the “cyber-utopia” thing and now the mask has completely come off, everyone knows the Internet is a tool for subjugation

Wait, didn't Clinton actually balance the budget? That gets props from me; no government since then has actually given Americans an honest picture of what it actually takes to run a balanced budget, which will require some combination of higher taxes and/or decreased spending.

I would be open to paying higher taxes if I believed it would help address the deficit and debt (instead of just enabling more spending) and if I believed that the money was being well spent.

Earlier in my adulthood, I would happily vote for almost any tax or levy, because I had faith that that money was turning directly into societal good.

I have lost that faith. In the worst case, money seems to be grossly mismanaged (here is a local example from just last month: https://www.seattletimes.com/seattle-news/politics/fallout-f...).

In other cases, it is going to real nonprofits that are tasked with solving problems that never seem to get better, no matter how much money is spent.

In yet other cases, the money goes to building transit (something else I was previously very bullish on), but that, once built, seems to be governed by principles of limitless permissiveness (an example from a few days ago: https://komonews.com/news/local/only-8-metro-fare-enforcemen...)

It's hard to feel invested in the programs that my taxes pay for when it doesn't feel like they reflect my values.


I am also from Seattle, and the fraud and waste in the Washington state government is horrifying. The Attorney General and governor are threatening independent journalists with prosecution if they investigate it.

And California, where I lived ten years, is even worse now.


> Obama's FAA disincentivised its traditional "feeder" colleges that do ATC courses to "promote diversity", net outcome was fewer applicants

It was much worse than that. Students who had already spent years studying to be air traffic controllers through the CTI program were subject to a sudden policy change that disqualified them from entering the profession unless they passed a “biographical questionnaire.”

85% of candidates failed this questionnaire, but the National Black Coalition of Federal Aviation Employees (the organization that pushed for this change to begin with) was feeding the “right” answers to its own members.

“Right” answers included things like having gotten bad grades in high school science class. You can take the test for yourself here and see how you score: https://kaisoapbox.com/projects/faa_biographical_assessment/

I can’t blame anyone for thinking this sounds too outrageous to be real, but all of it is public record at this point and the subject of an ongoing lawsuit: https://www.tracingwoodgrains.com/p/the-full-story-of-the-fa...


This test is completely insane. What were the people making it thinking? It feels like half of the scored questions have point values assigned at random. Why does being unemployed for 1-2 months before enrolling in the program award you 10 points, 5-6 months 8 points, yet 3-4 a fat zero? There are so many questions with these random score assignments. Why does having real qualifications related to your job only give you a point or two, while some random factoid like taking unrelated courses or doing poorly in college history gives upwards of 15 points? Why is child labor rewarded, with more points given the earlier you started?

Unless I'm missing something, this couldn't have been designed by a human being with normal goals in mind. This feels like a test that was created to act as a locked door that you could only pass by knowing the exact password, the sequence of lies you had to produce. That anyone's career was at the mercy of THIS is deranged. What the hell is going on in the US?


> Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox.

From the code sample, it looks like this proposal also lets you load WASM code synchronously. If so, that would address one issue I've run into when trying to replace JS code with WASM: the ability to load and run code synchronously, during page load. Currently WASM code can only be loaded async.


This is not strictly true; there are synchronous APIs for compiling Wasm (`new WebAssembly.Module()` and `new WebAssembly.Instance()`) and you can directly embed the bytecode in your source file using a typed array or base64-encoded string. Of course, this is not as pleasant as simply importing a module :)
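To make the synchronous path concrete, here is a minimal sketch that runs in Node or any modern browser. The byte array is a tiny hand-assembled module exporting a single `add` function (written out here purely for illustration):

```javascript
// Hand-assembled Wasm module: exports add(a, b) -> a + b for two i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Both constructors run synchronously -- no promise, no await.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

Note that browsers cap the synchronous `WebAssembly.Module()` path to small payloads on the main thread, which is part of why the async APIs (and the esm-integration proposal) are preferred for anything sizable.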


> Not everyone in history thought that 12-TET was an acceptable compromise. Johann Sebastian Bach thought we should use other tuning systems

This is presented as fact, but as I understand it there is no conclusive evidence for what Bach intended wrt temperament. There is a theory that the title page of the Well-Tempered Clavier encodes Bach’s preference in the calligraphic squiggles, but this is a recent theory and speculative. I don’t believe there are any direct statements by Bach as to his intention.


TL;DR: when a user writes to /proc/self/mem, the kernel bypasses the MMU's hardware permission checks, emulating the access in software (including emulated page faults!), which allows it to disregard any memory protection that is currently set up in the page tables.


It doesn't bypass it exactly; the access still goes through virtual memory and the page tables. It's just that the kernel maintains one big linear mapping of RAM that's writable.


Thank You.


> So many of our foundational institutions – hiring, journalism, law, public discourse – are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. [...] The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system.

I disagree. While AI certainly acts as a force multiplier, all of these dynamics were already in play.

It was already possible to make an anonymous (or not-so-anonymous) account that circulated personal attacks and innuendo, to make hyperbolic accusations and inflated claims of harm.

It's especially ironic that the paragraph above talks about how it's good when "bad behavior can be held accountable." The AI could argue that this is exactly what it's doing, holding Shambaugh's "bad behavior" accountable. It is precisely this impulse -- the desire to punish bad behavior by means of public accusation -- that the AI was indulging or emulating when it wrote its blog post.

What if the blog post had been written by a human rather than an AI? Would that make it justified? I think the answer is no. The problem here is not the AI authorship, but the actual conduct, which is an attempt to drag a person's reputation through mudslinging, mind-reading, impugning someone's motive and character, etc. in a manner that was dramatically disproportionate to the perceived offense.


Lately I'm seeing more and more value in writing down expectations explicitly, especially when people's implicit assumptions about those expectations diverge.

The linked gist seems to mostly be describing a misalignment between the expectations of the project owners and its users. I don't know the context, but it seems to have been written in frustration. It does articulate a set of expectations, but it is written in a defensive and exasperated tone.

If I found myself in a situation like that today, I would write a CONTRIBUTING.md file in the project root that describes my expectations (eg. PRs are / are not welcome, decisions about the project are made in X fashion, etc.) in a dispassionate way. If users expressed expectations that were misaligned with my intentions, I would simply point them to CONTRIBUTING.md and close off the discussion. I would try to take this step long before I had the level of frustration that is expressed in the gist.

I don't say this to criticize the linked post; I've only recently come to this understanding. But it seems like a healthier approach than to let frustration and resentment grow over time.


Agreed, TFA is a good example of how to write down expectations explicitly.

But as far as dinging Hickey for the fact that he eventually needed to write bluntly? I'm not feeling that at all. Some folks feel that open-source teams owe them free work. No amount of explanation will change many of those folks' minds. They understand the arguments. They just don't agree.


> he eventually needed to write bluntly

Is there a history of that here? Were there earlier clear statements of expectations (like CONTRIBUTING.md) that expressed the same expectations, but in a straightforward way, that people just willfully disregarded?

I don't mean to "ding" anybody, I mostly just felt bad that things had gotten to the point where the author was so frustrated. I completely agree that project owners have the right to set whatever terms they want, and should not suffer grief for standing by those terms.


I don't remember the exact situation, but I think this relates to this:

Clojure core was sent a set of patches that were supposed to improve the performance of immutable data structures but were provided without much consideration of the bigger picture, or were over-optimized for a specific use case.

There's a Reddit thread which provides a bit more detail so excuse me if I got some of it wrong: https://www.reddit.com/r/Clojure/comments/a01hu2/the_current...

*Edit* - actually this is a better summary: https://old.reddit.com/r/Clojure/comments/a0pjq9/rich_hickey...


Dissatisfaction n. 3 is the essence of the problem: "Because Clojure is a language and other people's jobs and lives depend on it, the project no longer feels like someone's personal project which invites a more democratic contribution process". This is a common, and modern, feeling that the more users a certain thing has, the more the creators/maintainers have a duty to treat it as a "commons or public infrastructure" and give the users a vote on how the thing is to be managed and developed. This is, of course, utter horsesh*t.


> Is there a history of that here?

I have been maintaining not-super-successful open source projects, and I've had to deal with entitled jerks. Every. Single. Time. I am totally convinced that any successful open source project sees a lot more of that.

> Were there earlier clear statements of expectations (like CONTRIBUTING.md) that expressed the same expectations, but in a straightforward way, that people just willfully disregarded?

IMO it's not needed. I don't have to clearly state expectations: I open source my code, and you're entitled to exactly what the licence says. The CONTRIBUTING.md is more a kind of documentation, trying to avoid having to repeat the same thing for each contribution. But I don't think anyone would write "we commit to providing free support and doing free work whenever someone asks for it" in there :-).


Someone once said: Abuse and expectations erode a culture of cooperation.

I am currently seeing this in real time at $work. A flagship product has been placed onto the platform we're building, and the entire sales/marketing/project culture is not adjusting at all. People are pushy, abusive, communicate badly and escalate everything to the C-Level. As a result, we in Platform Engineering are now channeling our inner old school sysadmins, put up support processes, tickets, rules, expectations and everything else can go die in a ditch.

Everyone suffers now, but we need to do this to manage our own sanity.

And to me at least, it feels like this is happening with a lot of OSS infrastructure projects. People are getting really pushy and pissy about something they need from these projects. I'd rather talk to my boss to set up a PR for something we need (and I'm decently successful with those), but other people are just very angry that OSS projects don't fulfil their very niche need.

And then you get into this area of anger, frustration, putting down boundaries that are harmful but necessary to the maintainers.

Even just "sending them to the CONTRIBUTING.md". At work, we are sending out dozens of reminders per week about the documentation and how to work with us effectively, to just a few people. This is not something I would do on my free time for even a single day, and the pain-curbing salary is also looking slim so far.


Furthermore, writing down the contract calmly, as part of a plan, can avoid having to bang it out in frustration and leaving a bad taste.


> I don't say this to criticize the linked post

What you have written is obviously a criticism of the linked post.


If I'm criticizing the linked post, then I'm also criticizing myself, because I could easily imagine having written it.


I think some might get the impression that you're complaining about Hickey's tone. Perhaps your emotional terms "frustration," "defensive," and "exasperated" may be the reason.


I don't see anything wrong with the way he expressed himself, and I think his point is totally legitimate. I mostly just felt bad that he experienced so much grief about it, on account of a gift he was offering to the world.


"So much grief." It sounds like you're trying to interpret Hickey's emotions. How would you check whether your interpretation is accurate?


I don't know if you're a native English speaker, so apologies if this isn't appropriate. But the word 'grief' has more than one vernacular meaning.

"Giving someone grief" means giving someone a hard time.

So "he experienced so much grief" can just mean that people criticised him. It doesn't necessarily express anything about Rich Hickey's state of mind.


More concretely, I think the magic lies in these two properties:

1. Conservation of mass: the amount of C code you put in will be pretty close to the amount of machine code you get out. Aside from the preprocessor, which is very obviously expanding macros, there are almost no features of C that will take a small amount of code and expand it to a large amount of output. This makes some things annoyingly verbose to code in C (eg. string manipulation), but that annoyance is reflecting a true fact of machine code, which is that it cannot handle strings very easily.
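As a sketch of point 1 (this helper is hypothetical, not from any real codebase): even basic string concatenation has to be spelled out byte by byte in C, and each line maps to roughly the machine code you would expect the compiler to emit.

```c
#include <stddef.h>

/* Hypothetical sketch: concatenating two strings in plain C. Every
   byte copy is explicit; the compiled output is close to a
   one-to-one translation of these loops. */
void concat(char *dst, const char *a, const char *b) {
    size_t i = 0;
    while (*a != '\0')      /* copy a */
        dst[i++] = *a++;
    while (*b != '\0')      /* copy b */
        dst[i++] = *b++;
    dst[i] = '\0';          /* terminate */
}
```

The verbosity is the point: there is no hidden allocation or length bookkeeping that the compiler quietly inserts for you.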

2. Conservation of energy: the only work that will be performed is the code that you put into your program. There is no "supervisor" performing work on the side (garbage collection, stack checking, context switching) on your behalf. From a practical perspective, this means that the machine code produced by a C compiler is standalone, and can be called from any runtime without needing a special environment to be set up. This is what makes C such a good language for implementing garbage collection, stack checking, context switching, etc.
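A small sketch of point 2 (illustrative only): this function allocates nothing, calls nothing, and touches no runtime state, so the instructions executed are exactly the loop you wrote, and the compiled code could be invoked from another language's FFI, a kernel, or a signal handler without any environment setup.

```c
#include <stddef.h>

/* Illustrative sketch: no hidden work happens here. No allocator,
   no GC, no stack probe (by default), no runtime initialization --
   just the loop, which is why code like this is callable from
   anywhere. */
long sum(const long *xs, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += xs[i];
    return total;
}
```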

There are some exceptions to both of these principles. Auto-vectorizing compilers can produce large amounts of output from small amounts of input. Some C compilers do support stack checking (eg. `-fstack-check`). Some implementations of C will perform garbage collection (eg. Boehm, Fil-C). For dynamically linked executables, the PLT stubs will perform hash table lookups the first time you call a function. The point is that C makes it very possible to avoid all of these things, which has made it a great technology for programming close to the machine.

Some languages excel at one but not the other. Byte-code oriented languages generally do well at (1): for example, Java .class files are usually pretty lean, as the byte-code semantics are pretty close to the Java language. Go is also pretty good at (1). Languages like C++ or Rust are generally good at (2), but have much larger binaries on average than C thanks to generics, exceptions/panics, and other features. C is one of the few languages I've seen that does both (1) and (2) well.


Nicely put!

Haven't seen C's allure quite explained that way.

