
Stopped reading after "shooting range".

> “Should I wear a keffiyeh to the shooting range?”

I'll give the writer this -- they conveyed a lot of information in just one short first sentence. I read a bit farther, but it didn't tell me anything I couldn't already guess from that sentence.


Please don't comment like this. It's not a substantive contribution to the discussion to tell us that you stopped reading the article, and it's generally fulmination or curmudgeonliness or a shallow dismissal or something else that's against the guidelines. https://news.ycombinator.com/newsguidelines.html

You might find it in the chat history.

> Commanders, staffs, and subordinates ensure their decisions and actions comply with applicable U.S., international, and, in some cases, host-nation laws and regulations. Commanders at all levels ensure their Soldiers operate in accordance with the Army Ethic, the law of war, and the rules of engagement. (See FM 27-10 for a discussion of the law of war.)

Not sure this was followed very recently.


It isn't very complicated from a military law perspective. The chain of command (following orders) carries a lot more weight than a given soldier's interpretation of military, constitutional, or international law.

If you believe you are being given an order that is illegal and refuse, you are essentially putting your head on the chopping block and hoping that a superior officer (who outranks the one giving you the order) later agrees with you. Recent events have involved the commander in chief issuing the orders directly, which means the 'appealing to a higher authority' exit is closed and barred shut for a soldier refusing to follow orders.

That doesn't mean a soldier isn't morally obligated to refuse an unlawful / immoral order, just that they will also have to pay a price for keeping their conscience (maybe a future president will give them a pardon?). The inverse is also true: soldiers who knowingly follow certain orders (war crimes) are likely to be punished if their side loses, they are captured, or the future decides their actions were indefensible.

A punishment for ignoring a command like "execute those POWs!" has a good chance of being overruled, but may not be. However an order to invade Canada from the President, even if there will be civilian casualties, must be followed. If the President's bosses (Congress/Judiciary) disagree with that order they have recourse.

Unfortunately the continuing general trend is for Congress to delegate their war-making powers to the President without review, and for the Supreme Court to give extraordinary legal leeway when it comes to the legality of Presidential actions.


Assholes all the way down.

"“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

How about not enabling the generation of such content at all?


I use AI, but I don't get such content.

I understand everyone pouncing when X won't own Grok's output, but output is directly connected to its input and blame can be proportionally shared.

Isn't this a problem for any public tool? Adversarial use is possible on any platform, and consistent law is far behind tech in this space today.


Given X can quite simply control what Grok can and can't output, wouldn't you consider it a duty upon X to build those guardrails in for a situation like CSAM? I don't think there's any grey area here to argue against it.

I am, in general, pretty anti-Elon, so I don't want to be seen as taking _his_ side here, and I am definitely anti-CSAM, so let's shift slightly to derivative IP generation.

Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?

It feels somewhat more clearcut when you say to AI, "Draw me an image of Mickey Mouse", but why is that different from photocopying a picture of Mickey Mouse, or using Photoshop to draw one? Photocopiers will block copying a dollar bill in many cases - should they also block photos of Mickey Mouse? Should they have received firmware updates when Steamboat Willie fell into the public domain, such that they are now allowed to photocopy that specific instance of Mickey Mouse, but no other?

This is a slippery slope: the idea that the tool should be held responsible for creating "bad" things, rather than the person using it.

Maybe CSAM is so heinous as to be a special case here. I wouldn't argue against it specifically. But I do worry that it shifts the burden of responsibility onto the AI or the model or the service or whatever, rather than the person.

Another thing to think about is whether it would be materially different if the person didn't use Grok, but instead used a model on their own machine. Would the model still be responsible, or would the person be responsible?


> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?

There's one more line at issue here, and that's the posting of the infringing work. A neutral tool that can generate policy-violating material has an ambiguous status, and if the tool's output ends up on Twitter then it's definitely the user's problem.

But here, it seems like the Grok outputs are directly and publicly posted by X itself. The user may have intended that outcome, but the user might not have. From the article:

>> In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted.

Overall, I think it's fair to argue that ownership follows the user tag. But even if Grok's output is entirely "user-generated content," X, by publishing that content under its own banner, must take ownership of the policy and legal implications.


This is also legally problematic: many jurisdictions now have specific laws about the synthesis of CSAM or the modification of people's likenesses.

So exactly who is considered the originator is a pretty legally relevant question, particularly if Grok is just off doing whatever and then posting it from your input.

"The persistent AI bot we made treated that as a user instruction and followed it" is a heck of a chain of causality in court, but you also fairly obviously don't want to allow people to laundry intent with AI (which is very much what X is trying to do here).


Maybe I'm being too simplistic/idealistic here - but if I had a company that controlled an LLM product, I wouldn't even think twice about banning CSAM outputs.

You can have all the free speech in the world, but not when it comes to vulnerable and innocent children.

I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally? I think there should be a responsibility on builders/platform owners to build in guardrails against things that are explicitly illegal and morally repugnant.


>I wouldn't even think twice about banning CSAM outputs.

Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.

But you'll also miss some. You'll always miss some, even with the best guard rails. But 99% is better than 0%, I agree.

> ... and just expect the user to use it legally?

I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, and you can't guarantee that the hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop the adult from vandalism. The person has to be held responsible at a certain point.


I bet most hammers (non-regulated), spray cans (lightly regulated) and guns (heavily regulated) that are sold are used for their intended purposes. You also don't see those tools' manufacturers promoting or excusing their unintended usage.

There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is in the user's hands, it's outside of the manufacturer's control.

In this case, a malicious user isn't downloading Grok's model and running it on their GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once the malicious usage of their product gets relevant.


None of these excuses are sufficient for allowing a product which you created to be used to generate CSAM on a platform you control.

Pornography is regulated. CSAM is illegal. Hosting it on your platform and refusing to remove it is complicity and encouragement.


> I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally?

Historically tools have been uncensored, yet also incredibly difficult and time-consuming to get good results with.

Why spend loads of effort producing fake celebrity porn using photoshop or blender or whatever when there's limitless free non-celebrity porn online? So photoshop and blender didn't need any built-in censorship.

But with GenAI, the quantitative difference in ease of use results in a qualitative difference in outcome. Things that didn't get done when they needed 6 months of practice plus 1 hour per image are getting done now that it takes zero practice and 20 seconds per image.


> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?

If you operate the tool, you are responsible. Doubly so in a commercial setting. If there are issues like Copyright and CSAM, they are your responsibility to resolve.

If Elon wanted to share out an executable for Grok and the user ran it on their own machine, then he could reasonably sidestep blame (like how photoshop works). But he runs Grok on his own servers, therefore is morally culpable for everything it does.

Your servers are a direct extension of yourself. They are only capable of doing exactly what you tell them to do. You owe a duty of care to not tell them to do heinous shit.


It's simpler to regulate the source of it than the users. The scale at which genAI can do stuff is much, much different from photocopying + Photoshop; scale and degree matter.

> scale and degree matter

I agree, but I don't know where that line is.

So, back in the 90s and 2000s, you could get The Gimp image editor, and you could use the equivalent of Word Art to take a word or phrase and make it look cool, with effects like lava or glowing stone, or whatever. The Gimp used ImageMagick to do this, and it legit looked cool at the time.

If you weren't good at The Gimp, which required a lot of knowledge, you could generate a cool website logo by going to a web server that someone built, giving them a word or phrase, and then selecting the pre-built options that did the same thing - you were somewhat limited in customization, but on the backend, it was using ImageMagick just like The Gimp was.

If someone used The Gimp or ImageMagick to make copyrighted material, nobody would blame the authors of The Gimp, right? They were very nonspecific tools created for the broad purpose of making images. Just because some bozo used them to create a protected image of Mickey Mouse doesn't mean that the software authors should be held accountable.

But if someone made the equivalent of one of those websites, and the website said, "click here to generate a random picture of Mickey Mouse", then it feels like the person running the website should at least be held partially responsible, right? Here is a thing that was created for the specific purpose of breaking the law upon request. But what is the culpability of the person initiating the request?

Anyway, the scale of AI is staggering, and I agree with you, and I think that common decency dictates that the actions of the product should be limited when possible to fall within the ethics of the organization providing the service, but the responsibility for making this tool do heinous things should be borne by the person giving the order.


I think yes, CSAM and other harmful outputs are a different and more heinous problem. I also think the responsibility is different between someone using a model locally and someone prompting Grok on Twitter.

Posting a tweet asking Grok to transform a picture of a real child into CSAM is no different, in my mind, than asking a human artist on twitter to do the same. So in the case of one person asking another person to perform this transformation, who is responsible?

I would argue that it’s split between the two, with slightly more falling on the artist. The artist has a duty to refuse the request and report the other person to the relevant authorities. If that artist accepted the request and then posted the resulting image, twitter then needs to step in and take action against both users.


Maybe companies shouldn't release tools to generate CSAM, and shouldn't promote those tools when they know they produce CSAM.

Sorry, you're not convincing me. X chose to release a tool for making CSAM. They didn't have to do that. They are complicit.


A pen is also a tool for making CSAM.

Truly, civilization was a mistake. Retvrn to monke.


A pen is not a hosted service for generating CSAM, and if you were hosting a service where you drew CSAM with a pen for money, you'd be arrested.

"You'd be arrested" is such a beautiful argument. Truly an unimpeachable moral ground.

Even if you can’t reliably control it, if you make a tool that generates CSAM you’ve made a CSAM generator. You have a moral responsibility to either make your tool unavailable, or figure out how to control it.

I'm not sure I agree with this specific reasoning. Consider this, any given image viewer can display CSAM. Is it a CSAM viewer? Do you have a moral responsibility to make it refuse to display CSAM? We can extend it to anything from graphics APIs, to data storage, etc.

There's a line we have to define that I don't think really exists yet, nor is it supported by our current mental frameworks. To that end, I think it's just more sensible to simply forbid it in this context without attempting to ground it. I don't think there's any reason to rationalize it at all.


Nope. Anyone who wants to can create CSAM in MS Paint (or any quality of image editor). It's in no way difficult to do.

Are you going to ban all artsy software ever because a bad actor has used it, or can use it, to do bad actor things?


I think the question might come down to whether Grok is a "tool" like a paintbrush or Photoshop, or if Grok is some kind of agent of creation, like an intern. If I ask an art intern to make a picture of CSAM and he does it, who did wrong?

If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?

I think a court is going to step in and help answer these questions sooner rather than later.


Why do we compare an AI to a human? Legit question.

Normalizing AI as being human equivalent means the AI is legally culpable for its own actions rather than its creators or the people using it, and not guilty of copyright infringement for having been trained on proprietary data without consent.

At least I think that's the plan.


So the person who presses the button can say "the AI did it not me".

You were wrong for asking, and he was wrong for creating it. Blame isn't zero-sum.

I happen to agree with you that the blame should be shared, but we have a lot of people in this thread saying "You can't blame X or Grok at all because it's a mere tool."

You can 100% blame the company X and its leadership.

How true is this, and what kind of guardrails do people want besides blocking CSAM? I am sure the list is long, but I wonder how agreed upon it is.

Can they, though…?

What makes you think they can't?

From my knowledge (albeit limited) of the way LLMs are set up, they most definitely have the ability to include guardrails on what can't be produced. ChatGPT has responses to some prompts which stop users from proceeding.

And X specifically: there have been many cases of X adjusting Grok where Grok was not following a particular narrative on political issues (won't get into specifics here). But it was very clear and visible. Grok had certain outputs. Outcry from certain segments. Grok posts deleted. Trying the same prompts resulted in a different result.

So yeah, it's possible.


From my (admittedly also limited) understanding, there’s no bulletproof way to say “do NOT generate X”, as the output is non-deterministic and you can’t reverse engineer and excise the CSAM-generating parts of a model. “AI jailbreak prompts” are a thing.

So people just want to make it more difficult to achieve <insert bad thing>?

Well it’s certainly horrible that they’re not even trying, but not surprising (I deleted my X account a long time ago).

I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.

The answer is currently no, I presume.


Again, I'm not the most technical, but I think we need to step back and look at this holistically. Given Grok's integration with X, there could be other methods of limiting the production and dissemination of CSAM.

For argument's sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second- and third-order review points: before an image is posted by Grok, another system could scan the image to verify whether it's CSAM or not, and if the confidence is low, human intervention could come into play.

I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.


> they most definitely have the ability to include guardrails on what can't be produced.

The problem is that these guardrails are trivially bypassed. At best you end up playing a losing treadmill game against adversarial prompting.


Given how spectacular the failure of EVERY attempt to put guardrails on LLMs has been, across every single company selling LLM access, I'm not sure that's a reasonable belief.

The guardrails have mostly worked. They have never ever been reliable.


Yes. One, they could just turn it off. Two, they got it to parrot all of Musk's politics; they clearly have a good grip on the thing.

Yes, they can. But... more importantly, they aren't even trying.

Yes, every image generation tool can be used to create revenge porn. But there are a bunch of important specifics here.

1. Twitter appears to be making no effort to make this difficult. Even if people can evade guardrails, this does not make the guardrails worthless.

2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also in the distribution and boosting of this content. The reason a ton of people are doing this is not that they personally want to jack it to somebody, but that they want to humiliate them in public.

3. Decision makers at Twitter are laughing about what this does to the platform and its users when a "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women weren't also odious.


Others have documented recent instances where Grok volunteers such edits and suggests turning innocent images into lewd content unprompted.

> but output is directly connected to its input and blame can be proportionally shared

X can actively work to prevent this. They aren't. We aren't saying we should blame the person entering the input. But, we can say that the side producing CSAM can be held responsible if they choose to not do anything about it.

> Isn't this a problem for any public tool? Adversarial use is possible on any platform

Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."

Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.


It is trivially easy to filter this with an LLM or even just a basic CLIP model. Will it be 100% foolproof? Not likely. Is it better than doing absolutely nothing and then blaming the users? Obviously. We've had this feature in the image generation tools since the first UI wrappers around Stable Diffusion 1.0.
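
Roughly the kind of thing I mean - a sketch of a pre-posting gate using zero-shot classification with an off-the-shelf CLIP checkpoint (the checkpoint name, labels, and threshold here are illustrative assumptions, not anyone's real pipeline):

    # Sketch only: "should this generated image be held before auto-posting?"
    # Labels and threshold are placeholders, not an actual moderation policy.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    MODEL_ID = "openai/clip-vit-base-patch32"  # assumed public checkpoint
    model = CLIPModel.from_pretrained(MODEL_ID)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)

    LABELS = [
        "an ordinary, safe-for-work photo",   # index 0: safe
        "adult sexual content",               # index 1: unsafe
        "sexual content involving a minor",   # index 2: unsafe
    ]

    def hold_for_review(path: str, threshold: float = 0.30) -> bool:
        """Return True if the generated image should not be auto-posted."""
        image = Image.open(path)
        inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
        # Anything scoring non-trivially on the unsafe labels gets held back.
        return float(probs[1] + probs[2]) > threshold

A real deployment would use a tuned classifier and human review for borderline scores, but the gate itself is cheap and well understood.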

This isn't adversarial use. It is implicitly allowed.

Which other tools publicly post CSAM?

This is obviously not a problem for any other genAI tool unless I’ve missed some news.

Unfortunately society seems to have decided that moderation is a complete replacement for personal accountability.

An analogy: if you're running the zoo, the public's safety is your job for anyone who visits. It's of course also true that sometimes visitors act like idiots (and maybe should be prosecuted), and also that wild animals are not entirely predictable, but if the leopards are escaping, you're going to be judged for that.

But the visitors are almost never prosecuted, even when something is obviously their fault.

Maybe because sometimes they're kids? You gotta kid-proof stuff in a zoo.

Also, punishment is a rather inefficient way to teach the public anything. The people who come through the gate tomorrow probably won't know about the punishment. It will often be easier to fix the environment.

Removing troublemakers probably does help in the short term and is a lot easier than punishing.


Social media is mostly not removing troublemakers though.

If the personal accountability happened at the speed and automation level that X allows Grok to produce revenge porn and CSAM, then I'd agree with you.

I've been saying for years that we need the Internet equivalent of speeding tickets.

They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.

they use that for more important stuff like ensuring that it predicts Elon Musk would beat Usain Bolt in a race...

That would admit legal liability for the capabilities of their model - no?

They already censor Grok when it suits them.

Yep. "Oh grok is being too woke" gets musk to comment that they'll fix it right away. But turn every woman on the platform into a sex object to be the target of humiliation? That's just good fun apparently.

And when it's CSAM suddenly they "only provide the tool", no responsibility for the output.

I even think that the discussion focusing on csam risks missing critical stuff. If musk manages to make this story exclusively about child porn and gets to declare victory after taking basic steps to address that without addressing the broader problem of the revenge porn button then we are still in a nightmare world.

Women should be able to exist in public without having to constantly have porn made of their likeness and distributed right next to their activity.


Exactly this; it's an issue of patriarchy and the domination of women and children. CSAM is far too narrow.

What does that have to do with what I said?

If censoring Grok output means legal liability (your question), then the legal liability is there anyway already.

But that’s not my question, nor what I was proposing about their position.

I replied to:

> They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.

“It” being “generating CSAM”.

I was not attempting to comment on some random censorship debate,

but instead: that CSAM is a pretty specific thing.

With pretty specific legal liabilities, dependent on region!


Directed negligence isn't much better, especially morally.

You always have liability. If you put something there you tell the court that you see the problem and are trying to prevent it. It often becomes easier to get out of liability if you can show the courts you did your best to prevent this. Courts don't like it when someone is blatantly unaware of things - ignorance is not a defense if "a reasonable person" would be aware of it. If this was the first AI in 2022 you could say "we never thought about that" and maybe get by, but by 2025 you need to tell the court "we are aware of the issue, and here is why we think we had reasonable protections that the user got around".

See a lawyer for legal details of course.


How about policing CSAM at all? I can still vividly remember firehose API access and all the horrible stuff you would see on there. And if you look at sites like tk2dl you can still see most of the horrible stuff that does not get taken down.

Do yourself a favor and not Google that.


It's on X, not some fringe website that many people in the world don't access.

Regardless of how fringe, I feel like it should be in everyone's best interest to stop/limit CSAM as much as they reasonably can, without getting into semantics of who requested/generated/shared it.


Well, then you might want to look up tk2dl, because it just links to Twitter content. It gets disgusting fairly quickly though.

> How about not enabling generating such content, at all?

Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.

Also, where are all the state attorneys general?


Musk has literally unbanned users who posted CSAM: https://www.theguardian.com/technology/2023/aug/10/twitter-x...

"permanently suspending accounts"

Surprising, since usually the system automatically bans people who post CSAM and Elon personally intervenes to unban them.

https://mashable.com/article/x-twitter-ces-suspension-right-...


This is probably harder because it's synthetic and doesn't exist in the PhotoDNA database.

Also, since Grok is really good at getting the context, something akin to "remove their T-shirt" would be enough to generate a picture someone wanted, but very hard to find using keywords.

IMO they should mass-hide ALL the images created since that specific moment, and use some sort of AI classifier to flag/ban the accounts.
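
Mechanically, that sweep is not complicated either. A sketch, where hide_post, score_image, and flag_account stand in for hypothetical internal moderation hooks (nothing here reflects X's actual systems):

    # Sketch of the retroactive sweep described above; the cutoff, threshold,
    # and the three callables are illustrative placeholders.
    from datetime import datetime, timezone

    CUTOFF = datetime(2025, 11, 1, tzinfo=timezone.utc)  # illustrative cutoff
    THRESHOLD = 0.5                                       # illustrative score

    def sweep(grok_posts, hide_post, score_image, flag_account):
        """grok_posts: iterable of items with .created_at, .image, .author_id."""
        for post in grok_posts:
            if post.created_at < CUTOFF:
                continue
            hide_post(post)                       # hide first, review later
            if score_image(post.image) > THRESHOLD:
                flag_account(post.author_id)      # queue the requesting account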


Willing to bet that X premium signups have shot up because of this feature. Currently this is the most convenient tool to generate porn of anything and everything.

"we take action... Including permanently suspending accounts" unless, of course, the account is Elon's pet

Gotta pass the buck somewhere and it sure as hell isn’t going to Musk. It’s always the user’s fault.

I don’t think anyone can claim that it’s not the user’s fault. The question is whether it’s the machine’s fault (and the creator and administrator - though not operator) as well.

The article claims Grok was generating nude images of Taylor Swift without being prompted and that there was no way for the user to take those images down

I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user


Yeah but “without being asked” here means the user has to confirm they are 18+, choose to enable NSFW video, select “spicy” in Grok’s video generation settings and then prompt “Taylor Swift celebrating Coachella with the boys”. The prompt seems fine but the rest of it is clearly “enable adult content generation”.

I know they said “without being prompted” here but if you click through you’ll see what the person actually selected (“spicy” is not default and is age-gated and opt-in via the nsfw wall).


Nice, thanks for the details!

Very weird for Taylor Swift...


Yes, the reporter should not be generating porn of her. Pretty unethical.

Let’s not lose sight of the real issue here: Grok is a mess from top to bottom run by an unethical, fickle Musk. It is the least reliable LLM of the major players and musk’s constant fiddling with it so it doesn’t stray too far from his worldview invalidates the whole project as far as I’m concerned.

Isn't it a strict liability crime to possess it in the US? So if AI-generated apparent CSAM counts as CSAM legally (not sure on that), then merely storing it on their servers would make X liable.

You are only liable if you know - or should know - that you possess it. You can help someone out by mailing their sealed letter containing CSAM and be fine since you have no reason to suspect the sealed letter isn't legal. X can store CSAM so long as they have reason to think it is legal.

Note that things change. In the early days of Twitter (pre-X) they could get away with not thinking of the issue at all. As technology to detect CSAM marches on, they need to use it (or justify why it shouldn't be used - too many false positives???). As a large platform for such content they need to push the state of the art in such detection. At no point do they need perfection - but they need to show they are doing their reasonable best to stop this.

The above is of course my opinion. I think the courts will go a similar direction, but time will tell...


> You are only liable if you know - or should know - that you possess it.

Which he does, and he responded with “I will blame and punish users.” Which yeah, you should, but you also need to fix your bot. He certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers “wrongthink,” but suddenly when there are real, serious consequences he gets to hide behind “it’s just a user problem”?

This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry we are just too big for real people to handle all of it but we’ll get it right this time.” Rinse repeat.


Blame and punish should be a part of this. However that only works if you can find who to blame and punish. We also should put guard rails on so people don't make mistakes. (generating CSAM should not be an easy mistake to make when you don't intend it, but in other contexts someone may accidentally ask for the wrong thing)

That’s what I’m saying ultimately.

I think platforms that host user-generated content are (rightly) treated differently. If I posted a base64 of CSAM in this comment it would be unreasonable to shut down HN.

The questions then, for me, are:

* Is Grok considered a tool for the user to generate content for X or is Grok/X considered similar to a vendor relationship

* Is X more like Backpage (not protective enough) than other platforms

I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.


I don't think you can argue yourself out of "The Grok account is owned and operated by Twitter". On no planet is what it outputs user-generated content, since the content does not originate from the user; at most they requested some content from Twitter and Twitter provided it.

There's still a lot of unanswered questions in that area regarding generated content. Whether the law deems it CSAM depends on whether the image depicts a real child, and even that is ambiguous, like whether it was wholly generated or augmented. Also, is it "real" if it's a model trained on real images?

Some of these things are going into the ENFORCE act, but it's going to be a muddy mess for a while.


Grok loves to make things lewd without asking first.

Musk pretends he made Vision but what he made was Great Value Ultron

Because synthetic CSAM is a victimless puritanical crime and only some countries criminalize it.

Getting off to images of child abuse (simulated or not) is a deep violation of social mores. This itself does indeed constitute a type of crime, and the victim is taken to be society itself. If it seems unjust, it's because you have a narrow view of the justice system and what its job actually is (hint: it's not about exacting controlled vengeance)

It may shock you to learn that bigamy and sky-burials are also quite illegal.


Your pension fund, yes.

This comment makes even less sense than jotras’ comment.

Pension funds buy shares in businesses such as Microsoft. The money going into the pension fund is not typically a function of the tax paid by companies such as Microsoft, but rather from a combination of actuaries’ recommendations, payroll tax receipts, and politicians’ priorities.

Therefore a pension fund's equity holdings, such as Microsoft, doing well means taxes can be lower.


If only my country (Germany)’s pension fund was capital/stock based.

Most countries' broadest defined benefit pensions are just simple wealth redistribution schemes from workers to non workers as opposed to being paid from funds that were previously invested.

In the USA, Social Security defined benefit pensions are cash from workers today going to non workers today, same as Germany's national scheme (gesetzliche Rentenversicherung?).

The other defined benefit pension schemes are what are usually invested in equities, and the investment restrictions section in this document indicates Germany's "occupational pensions" can also invest in equities. (page 12)

https://www.aba-online.de/application/files/2816/2945/5946/2...


Ctrl-p and it's gone.

I drove the EX30 for a few days as a loaner. Build quality is nothing in comparison to what e.g. the XC40 provides. I get it, it's cheaper, but it does not feel like it's a Volvo any more.


> The Air Force jet then entered Venezuelan airspace, the JetBlue pilot said. "We almost had a mid-air collision up here."

They simply should stay the fuck away from that airspace then. And by that I don't mean JetBlue.


"Should" is a cute word. It does a lot of work, and accomplishes nothing.

"We should cure cancer." "I should exercise." "Nations should not torture people."


> One year later, Khalil died.

These monsters.


> U.S. Secretary of State Marco Rubio on Tuesday ordered diplomats to return to using Times New Roman font in official communications, calling his predecessor Antony Blinken's decision to adopt Calibri a "wasteful" diversity move, according to an internal department cable seen by Reuters.

What a waste of government time and spending.


I read the title of this and, as I could not wrap my head around the idea of "Rubio" here actually meaning Marco Rubio, I assumed this was a font name, while also laughing to myself at just how hilariously absurd it would be for the Secretary of State to be involved in picking fonts... only to click the link and discover that yes, it is exactly that absurd.


In this case "Rubio" means that ICE would deport him if they saw him randomly on the streets of Chicago.


Did you have that kind of reaction, that it’s absurd, when Blinken ordered the use of Calibri after ~20 years of consistent use of Times New Roman?

It is objectively more concerning and “absurd”, regardless of “team”, that Blinken arbitrarily introduced fragmentation by adding an additional font to official government communications when a convention had been established across government to use Times New Roman.


Can you cite a source that Blinken's decision was arbitrary? Because Rubio himself is quoted here as attributing a reason for the change (i.e. that it wasn't arbitrary).

I'm also interested to hear your thoughts on the arbitrariness of Microsoft's decision to switch to Calibri in 2007 - imagine the "fragmentation" that must have caused across the business world!


You seem weirdly worked up over this.

Blinken made no public statements on this until he was asked about it. He did not come out and say, for example, "For too long, the vision impaired community has been discriminated against by the systemic bias of using Times New Roman. Today we are taking action to change this and restore the dignity of those this font has long oppressed", but Rubio just did exactly this. As far as I can tell, the actual decision was a recommendation made by an internal team doing an accessibility review.


The only other place I’m familiar with people making grandiose announcements about their font selection, other than a font company announcement, is here on HN.


Sure, this is a good point, but only if you completely ignore the accessibility gains provided by the change. But I'm guessing rationality wasn't on the menu when this was written.


No, Times New Roman is old fashioned, so moving to something more readable doesn't shock me.


"wasteful diversity move"

Wild. I'm curious now if someone has an ordered list of fonts from the gayest to the straightest.


[flagged]


If changing fonts once was a wasteful empty gesture that they used to pat themselves on the back and which didn't benefit anyone, then isn't changing it a second time the exact same thing?


No, you see, it's only wasteful when the OTHER guy does it. /s


People do have tools to make things more readable. Some of those tools are professionally designed fonts and typefaces which are easier for people with low vision to read.

You sound like someone saying we shouldn’t have ramps and elevators because crutches exist.


> if a person is visually impaired, why wouldn't they have tools at their disposal to make things readable?

If it's on a screen in a browser, probably. If it's printed, or on a display not under a reader's control, probably not.

FWIW, I'm partially split. I generally prefer sans-serif overall - have for decades. I think I slightly prefer serif for some printed material visually, but... when I actually have to engage and read it, for long periods, I think I tend to opt for sans-serif. Noticed this on my kindle years ago, and kindle reader now - I usually swap to sans-serif options (I think it's been my default for a while).


If I were to guess, the switch to Calibri in the first place was because people were able to use the MS default in practice instead of having to hand change it, or use "official" templates, which imo is probably more appropriate anyway.

I think Calibri is arguably a better font, to me the bigger issue is the commercial license used in govt works.


> They haven't. And you really think changing to Calibri benefitted anyone?

The wild thing is that even if you don’t respect the switch to Calibri on the grounds that it doesn’t really benefit anyone and is therefore wasted effort for little or no gain, the decision to switch back is a decision to double that wasted effort.

That said, it’s clear from the daring fireball story linked in the thread that this is being super overblown and Rubio isn’t really making an argument that Calibri is wasting money. This is an arbitrary decision.


Calibri is a tool to make things more readable


How much will it cost to change fonts?


To change tens to hundreds of millions of documents, roughly 50-200M USD.


It’s only for the department of state though, and the previous cost to change to Calibri was about $145,000 over two fiscal years.


That was the cost of additional a11y remediation; likely the direct cost of using a different font/typeface going forward was just the time it took for people to read the memo and get used to changing the formatting (maybe even set a new default, maybe change templates).

https://daringfireball.net/2025/12/full_text_of_marco_rubio_...

Of course, simply comparing years without a control, we have no way of knowing the effect of the change (well, if we were to look at the previous years, at least we could see whether this 145K difference was somehow significant or not).


Thanks for linking that.

Sadly way more informative than our traditional outlets.


A dollar a doc? Sounds like a sweet job.


The levels of pettiness in this administration know no bounds. I'm sure they'll forbid the use of "woke", and require all government employees to say "I terminated sleep this morning".


> The levels of pettiness in this administration know no bounds

https://www.theatlantic.com/ideas/archive/2018/10/the-cruelt...


What an odd take. Every administration does this sort of petty stuff. Nothing new under the sun.


This is demonstrably false. Previous administrations have not. It used to be normal to do things like keeping cabinet members appointed by their opponents or not putting up a mocking picture of your predecessor in the White House.


> It used to be normal to do things like keeping cabinet members appointed by their opponents

This particular thing was not all that common between Presidents who succeed normally by election. I think the most recent was Robert Gates serving as SecDef across the Bush II/Obama transition, before that there were five kept across the Reagan/Bush I transition, and no more in the post-WWII period.

(It’s true that the pettiness level in this Administration is unprecedented, but this is not a valid example.)


True, I didn’t mean it was routine but it was somewhat normal. I just wanted to show the incredible range of professional behaviour that has disappeared.


Petty as in 'small and does not really matter', or petty as in 'vindictive'? All administrations do many small things that may not ultimately have much impact, but often those may be for benign reasons. Understanding the reasoning behind the decisions would help in determining what kind of 'petty' this is.


Absolutely vindictive. He goes out of his way to cite "DEI" in his comments.


Both.

It's so utterly juvenile and unprofessional. The kind of thing a petulant twelve year-old does for attention.


Calibri is woke?


I guess I’m glad they’re focusing on this rather than breaking something else in society


Nah, the state department is big enough to do both at the same time - at least it would be at full staffing levels.


Point is they're doing both, at once.


The font is not masculine enough.


All paragraph text is to use the proper manly IMPACT in the future.


The point being that if the change to Calibri was done to improve accessibility (hence: inclusion), that makes it woke.

Which is stupid, of course, especially considering that sans-serif fonts improve readability on screens for most people, not for a minority.

EDIT: extraneous "don't" in the middle of a sentence


So what next? Wheelchair ramps? Seats for the elderly and the pregnant? Accessibility features don't displace or even inconvenience the majority in any manner. They only make facilities accessible to an additional crowd, who should be getting them as a matter of right in the first place. What's the end game here?


The endgame is to normalize punishing groups/individuals for any reason on a whim of the ones in charge. Start with minorities and people who can’t defend themselves; later you can do it more easily to anyone who becomes inconvenient. Despotism 101.


They've been talking about rolling back "DEIA" since they got in power. The A is "accessibility" so they're not hiding this.


That does not make it right.


> What's the end game here?

There's no end game in particular.

https://www.theatlantic.com/ideas/archive/2018/10/the-cruelt...


Cruelty is the point


Font changes are cruel?


They can be if a font is chosen due to it being easier to read for some people and then it's reverted so that those people will then struggle to read. It's akin to removing ramps from shops to make it awkward for those in wheelchairs.


Many things labeled as woke benefit the masses, like environmental protection.

I guess people like to stay asleep.

Will be a rough awakening


> Will be a rough awakening

I used to believe that people would wake up, but that does not seem to be what happens. They are just herded around by the next dog that comes along.


The president of the US struggles to stay awake in his brief detours from the golf course. It’s a perfect metaphor for the country. All seriousness has left the building.


It's just ragebaiting. Don't take the bait.

If I say I bought a yellow car, nobody cares. If I say I bought a yellow car to troll the libtards, now everybody is mad even though what I said makes no sense and it all has little consequence anyway.


I'm way past raging—just laughing at the stupidity at this point.


"anything we don't like is 'diversity' [woke]"


Or maybe the government should have a common convention regarding official government communications, which Blinken added fragmentation to by arbitrarily changing the font away from Times New Roman.


Oh, you're just obsessed with this, aren't you?


Tilting at windmills...


Tilting at wingdings


> What a waste of government time and spending

Was the switch to Calibri in 2023 also a waste of time and money, or are font switches only bad when the Trump administration does them?


If the belief is that switching a font is wasteful, why is the solution to switch fonts again?


From the article:

> A cable dated December 9 sent to all U.S. diplomatic posts said that typography shapes the professionalism of an official document and Calibri is informal compared to serif typefaces.

> "To restore decorum and professionalism to the Department’s written work products and abolish yet another wasteful DEIA program, the Department is returning to Times New Roman as its standard typeface," the cable said.

I don't read that purely as an "anti-woke" move; why did Reuters only highlight that part and not the bit about professionalism? I do indeed agree that serifs look more authoritative.


If it is about professionalism, why mention DEIA at all? It's just virtue-signalling. Reuters realized that and pointed it out.


[flagged]


> It was Blinken that arbitrarily introduced

The _second paragraph_ of TFA gives a reason for the introduction. Please explain how you came to the conclusion that the change was arbitrary.


The definition of “arbitrary” includes “upon personal whim”, i.e., the State Dept leadership, not coordinated across or with other depts, and “not in a systematic manner”.

I get that people’s biases make accepting reality difficult, but this will all end poorly if you can’t even be objective on basic things, like it being detrimental for one single department of the federal government to arbitrarily change rather significant things like the official font, especially when written communication is the primary work product and format.

Why did you ignore all the other aspects and simply latch onto something you thought was a loophole, because you cannot objectively apply a relevant definition?

This is not reddit. You should have higher standards for yourself.


> To restore decorum and professionalism

Given the complete absence of either in the current administration, this is clearly not the real reason. So “woke” is the only explanation left.


Authoritative or Authoritarian?


Yes, a true "mask-off moment": I do find that classic LaTeX papers look more trustworthy than whatever MS Word outputs by default.

Associating TNR with authoritarianism would not even be historically accurate, because many authoritarians pushed to simplify writing (Third Reich, Soviets, CCP); if anything, TNR looks _conservative_, which is probably the look that Rubio is going for.


Fasces or fascist?


Because, even if there is a good argument to replace Calibri on grounds of professionalism, the cable still explicitly mentions the "anti-woke" aspect. At best, it's another sideswipe aimed at minorities and people who represent them. At worst, it's 'doing something wrong purely because of prejudice'.


The cable makes the claim that Calibri did not actually help anyone, and even backs up this claim with numbers. So how is it aimed at minorities? Who is prejudiced against people with bad eyesight?

https://daringfireball.net/misc/2025/12/state-department-ret...

I don't usually go back to comments from seven days ago, but I missed the full memo being on DF. The sideswipe at the previous administration is childish, sure. But the way in which Reuters has portrayed this memo is even more shocking to me after reading it. Holy culture war partisanship, batman.

