
This is a brochure site from "The Alliance for Secure AI", which I am unfamiliar with, but whose site gives off "AGI weirdo" vibes. Am I misreading?

https://secureainow.org/


I don't think so... nothing about these folks' backgrounds screams "understands LLMs": https://secureainow.org/staff/. Which, to be clear, doesn't mean they can't effectively pull together publicly available layoff data in a website.

No, they didn't. They distinguished it, when presented with it. Wildly different problem.

Yeah. And it is totally depressing that this article got voted to the top of the front page. It means people aren’t capable of this most basic reasoning, so they jumped on the “aha! so the Mythos announcement was just marketing!!”

Yeah. Extremely disappointing.

There isn't one (much as I might think there should be). Threads about Mangione were also uncivil and activating.

HN isn't a "science and technology" site.

You're being nice about it but I think you're inadvertently expressing literally the sentiment Dan was referring to.

On the contrary, not justifying nor condoning anything of the sort.

The main point I was trying to make was in highlighting the perceptual and emotional disconnect between those who know and have worked with someone personally, and those who haven't (myself included).

Most people's perception of Sam was shaped in recent years by press coverage that tends to treat him as the face of AI, with sentiment that usually goes something like: "hey, this guy's stealing all your water so he can take your job too, and by the way he lies a lot."

A couple follow-on points there were:

a) Dan shouldn't take it personally that he can't control a tidal wave of negative sentiment stemming from that dynamic playing out.

b) I don't think it does anyone any good to dismiss the negative sentiment driving that as mere mob mentality. Even Sam appears to understand this quite well, in the very blog post the submission links to.

To echo another comment[0]:

>... while the vast majority of us think "holy crap, that's horrible" but aren't adding it because of course that's already been said and there just isn't any more nuance needed.

I agree; explicit condemnation just felt performative and hollow.

For what it's worth, I'm actually rooting for Sam assuming his words ultimately line up with his actions, and my opinion of him is neutral or slightly positive. I don't think it's widely appreciated just how crazy a position the guy is in; there's no way he can make everybody happy.

To touch on the hollow part: this is someone pg once described, in so many words, as more than capable of handling himself. [1]

I recall reading that years ago he insisted offices be swept for bugs after a visit by Musk, and he hangs out with similarly powerful people.

In other words, you don't operate in that world without your security already being excellent, and it's probably going to get even better now. Give it a couple years and he'll probably have a humanoid robot perimeter that'll smoke anyone on sight with a level of efficiency that is comical.

So, in that context, taking a "thoughts and prayers" tone felt a little unnecessary.

[0] https://news.ycombinator.com/item?id=47732594

[1] https://news.ycombinator.com/item?id=7280124


It shouldn't matter how many lies a guy tells, or how he runs his business. People shouldn't throw molotov cocktails at his house, and people shouldn't act like his behavior is potentially justification for people throwing molotov cocktails at his house.

Anybody whose perception of Sam Altman was "he deserves for me to throw a molotov cocktail at his house" is a horrible person. I don't care if Paul Graham says he's a tough guy.

Explicit condemnation is only hollow if you don't mean it.


To be clear, I'm not saying any of it is justified, and I generally agree with everything you wrote. The fact that this happened to Sam and his family is indeed horrible.

That said, please don't twist my words. I think there's utility in understanding why people feel and act the way they do.

Otherwise, everybody just takes the de facto stance of "those people are intrinsically bad people, and not good people like us!" which is pretty useless and typically just leads to more escalation.

You could also spare me the one-line zinger at the end.


I didn't mean it as a zinger; I meant it as a rebuttal of the line from your comment. If you felt zinged by it, maybe it's worth considering why.

You keep writing comments where you try to wiggle between it being really important to think about the context in which people commit crimes and the context in which people are OK with crimes being committed based on not liking the victim, but also you keep clarifying that you don't condone what they're doing or saying.

What is your actual point? The best I can pluck out from the above is that the people throwing molotov cocktails, and the people saying it's justified, are bad people, but they're bad for understandable reasons?


>I didn't mean it as a zinger; I meant it as a rebuttal of the line from your comment.

Fair enough.

>If you felt zinged by it, maybe it's worth considering why.

Conditioned response from years of defending comments against immediate pedantry, of which I'm probably guilty myself. Not saying that you were being pedantic.

>What is your actual point?

Originally dang seemed pretty burnt out from moderating this thread, so I just wanted to pitch in with my two cents saying that he's dealing with a tidal wave of larger negative public sentiment that's perhaps beyond his control.

I think there's an important distinction to be had between whoever threw the cocktail (fuck them), and the folks expressing what I termed callous indifference.

People are allowed to not give a shit and say as much, and while that might be bannable I don't think it's particularly productive to take that route.

Moreover, I thought it was important to note that some people here (like dang, presumably) actually know Sam personally, so it may not be widely appreciated that said callous comments come off as extra ghoulish to them.

At the same time, if your only source of information about the guy is recent press, it's easy to understand how someone arrives at that position; anti-AI sentiment is gaining popularity rapidly.

That's it. That's my point, or stance if you will. I don't think it's that unreasonable; I'm just trying to highlight what I see as a disconnect.


This is the waffling again. You made the pitch earlier that explicit condemnation felt hollow. Your comments here (and the many from other people saying similar things) are what look hollow to me.

When you say things like "it's easy to understand how someone arrives at that position", you're laying the groundwork to justify why what you class as "callous indifference" is just a logical and natural state that we should accept.

We shouldn't. The people who are celebrating or ok with molotov cocktails being thrown are also bad people. To borrow your language: fuck them, too.


>When you say things like "it's easy to understand how someone arrives at that position", you're laying the groundwork to justify why what you class as "callous indifference" is just a logical and natural state that we should accept.

I didn't say it should be accepted nor was I laying groundwork for justification, be it implicit or explicit.

Rather, only stating that such indifference does logically follow in those circumstances.

Quoting my prior comment:

>>Most people's perception of Sam was shaped in recent years by press coverage that tends to treat him as the face of AI, with sentiment that usually goes something like: "hey, this guy's stealing all your water so he can take your job too, and by the way he lies a lot."

People's reaction here isn't exactly shocking when taken in that context.

>To borrow your language: fuck them, too.

Yeah, agreed.


> Rather, only stating that such indifference does logically follow in those circumstances.

This is exactly what I’m talking about.


This feels like a pointless semantic trap. Everything is "waffling" or "wiggling". I don't see the parent saying anything in a disguised manner. It's just that reality is complicated. In the immediate wake of violence, it's exceedingly easy to paint any sentiment aside from "this is horrible" as disrespectful or weasel-worded. That's cheap (as I mentioned elsewhere, it's like the way conservatives refuse to talk about guns in the wake of gun violence).

I disagree with almost all of this but I'm not here to single you out.

Appreciated, but I would hope that it at least changes your initial read.

I am not speaking for the parent, but my personal interpretation is that they are trying to add perspectives/thoughts, not denying what Dan said (i.e. it's not "inadvertent" in as few words).

By that I meant it didn't read like they were trying to push back on him.

If you cut out the vulnerable code from Heartbleed and just put it in front of a C programmer, they will immediately flag it. It's obvious. But it took Neel Mehta to discover it. What's difficult about finding vulnerabilities isn't properly identifying whether code is mishandling buffers or holding references after freeing something; it's spotting that in the context of a large, complex program, and working out how attacker-controlled data hits that code.
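
To make that concrete, here's a stripped-down sketch of the Heartbleed pattern in isolation (not the actual OpenSSL code; the real bug lived in the TLS heartbeat handlers, and send_response() here is a stand-in for the real write path):

    #include <string.h>

    /* a simplified sketch of the Heartbleed pattern, not the actual
       OpenSSL code; the attacker controls the record contents */
    extern void send_response(const unsigned char *buf, size_t len);

    void handle_heartbeat(const unsigned char *rec, size_t rec_len) {
        /* byte 0: message type; bytes 1-2: attacker-claimed payload length */
        size_t payload_len = ((size_t)rec[1] << 8) | rec[2];
        unsigned char response[65536];

        /* BUG: payload_len is never checked against rec_len, so the
           memcpy reads up to ~64KB of adjacent memory and echoes it
           back to the attacker; the missing check is, roughly:
           if (payload_len + 3 > rec_len) return; */
        memcpy(response, rec + 3, payload_len);
        send_response(response, payload_len);
    }

In isolation, the missing length check jumps out; buried in a large TLS implementation, it survived for about two years.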

It's weird that Aisle wrote this.


> It's weird that Aisle wrote this.

No, writing an advertisement is not weird. What's weird is that it's top of HN. Or really, no, this isn't weird either if you think about it -- people looking for a gotcha "Oh see, that new model really isn't that good/it's surely hitting a wall/plateau any day now" upvoted it.


Nah, Saturday post. Less news, less content.

It's not weird. Top of HN is worthless as a barometer at this point, people downvote for calling out AI slop.

Can you downvote submissions?

It's weird, because when working on a big project, taking a break for a week or two, and returning to it, I will find a bug and see hundreds of lines of code that are absolutely terrible, and I will tell myself "Tom, you know better than to do this; this is a rookie mistake".

I think people forget that it's hard to be clever and tidy 100% of the time. Big programs take a lot of discipline and an understanding of the context that can be really hard to maintain. This is one of several reasons that my second draft or third draft of code is almost always considerably better than the first draft.


It's also that humans are very bad at repetitive detailed tasks. Sitting down with a code base and looking at each function for integer overflow comparison bugs gets boring really fast. It's a rare person who can do that for as long as it takes to find a bug that they don't already have some clues about.
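
To make "integer overflow comparison bugs" concrete, here's a hypothetical example of the pattern (all names made up):

    #include <string.h>

    /* hypothetical example of an integer overflow comparison bug */
    int copy_at(char *dst, size_t dst_size,
                const char *src, size_t offset, size_t len) {
        /* BUG: offset + len can wrap around, so a huge attacker-supplied
           len makes the sum small, the check passes, and the memcpy
           writes far past the end of dst */
        if (offset + len > dst_size)
            return -1;
        memcpy(dst + offset, src, len);
        return 0;
    }

    /* the overflow-safe comparison checks the operands separately:
       if (offset > dst_size || len > dst_size - offset) return -1; */

Now imagine scanning a few thousand length checks like that one, function by function.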

It's the flaw in the "given enough eyeballs, all bugs are shallow" argument. Because eyeballs grow tired of looking at endless lines of code.

Machines, on the other hand, are excellent at this. They don't get bored; they just keep doing what they are told to do with no drop-off in attention or focus.


idk man, pay me enough money and I’ll look at as much code as you want looking for integer overflows

Would it be cheaper than Claude Mythos doing it? No idea. Maybe, maybe not.

But it’s weird how we’re willing to throw money at a megacorp to do it with “automation” when it would potentially cost just as much, if not more, than running a big bounty program or hiring someone and doing it “normally”.

It would really have to cost substantially less for me to even consider doing it with a bot.


> idk man, pay me enough money and I’ll look at as much code as you want looking for integer overflows

So would I, but it doesn't negate that we, humans, are bad at this. We will get bored and our focus will begin to drift. We might not notice it, we might not want to admit it, but after a few continuous hours we will start missing things.


And there aren't enough security researchers in the world to review ALL the files from OpenBSD.

And if there were, the cost would be more like $20M than $20K.

Having all code reviewed for security, by some level of LLM, should be standard at this point.


If it’s obvious when you look close, then automate looking close. Seems simple to write tools that spider thru a code base, finding logical groupings and feeding them into an LLM with prompts like “there is a vulnerability in this code, find it”.

The thesis is, the tooling is what matters - the tools (what they call the harness) can turn a dumb LLM into a smart LLM.


Hold on, I misread your comment because I'm knee-jerk about code scanners, which were the bane of my existence for a while. Reworking... and: done. The original comment was just the first graf without the LLM qualification. Sorry about that.

The general approach without LLMs doesn't work. 50 companies have built products to do exactly what you propose here; they're called static application security testing (SAST) tools, or, colloquially, code scanners. In practice, getting every "suspicious" code pattern in a repository pointed out isn't highly valuable, because every codebase is awash in them, and few of them pan out as actual vulnerabilities (because attacker-controlled data never hits them, or because the missing security constraint is enforced somewhere else in the call chain).
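
A hypothetical illustration of why the findings rarely pan out (names made up): a scanner flags the memcpy below as an unbounded copy, but the security constraint lives one frame up, in the only caller.

    #include <string.h>

    #define NAME_MAX_LEN 64

    /* a scanner flags this: memcpy with a caller-supplied length
       and no visible bounds check */
    static void set_name(char *dst, const char *src, size_t len) {
        memcpy(dst, src, len);
        dst[len] = '\0';
    }

    /* ...but the constraint is enforced here, in the only caller;
       dst must point at a NAME_MAX_LEN-byte buffer */
    int update_name(char *dst, const char *src, size_t len) {
        if (len >= NAME_MAX_LEN)
            return -1;
        set_name(dst, src, len);
        return 0;
    }

Multiply that by every memcpy in the tree and you get the SAST triage problem.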

Could it work with LLMs? Maybe? But there's a big open question right now about whether hyperspecific prompts make agents more effective at finding vulnerabilities (by sparing context and priming with likely problems) or less effective (by introducing path dependent attractors and also eliminating the likelihood of spotting vulnerabilities not directly in the SAST pattern book).


I have long said that static checkers get ten false positives, max. Note that the size of the code is not a consideration: it doesn't matter if it's the four-line "hello world" or the 10-million-line monster some of us work on, it is ten false positives max.

Right, but they didn't actually test that, did they?

What's weird is that Google, Anthropic and OpenAI are claiming the model is the powerhouse, when, according to Aisle, that's very much not the case.

It almost seems like a coordinated effort (Google in January, Anthropic and OAI in April) building out gated models that will eventually be very expensive. Yet, here we are: Aisle is saying that's not required to get there.

I don't think it's weird at all. It seems to me the Frontier providers are just trying to find, still unsuccessfully, a moat to make their unsustainable business model... Well. Sustainable.


I agree that the apocalyptic messaging about Mythos is eye-rolling, but the thesis of the article, that "the moat is the system, not the model", is weird, because the point is that the model is the whole system. A little Bash loop that just tells the model to "look at this file" for every file is clearly not a "moat" of a system.

Is it, though? In a way: yes. But look at where the focus of LLMs has gone: agentic frameworks. Yet, we see all of the models continually being compared against benchmarks that can easily be gamed by the model itself [0].

There's no great way to gauge the quality or efficacy of something non-deterministic that you can't trust, at least not currently. And I wouldn't be surprised if the providers have known that their LLMs could be cheating for a while now.

On one hand they're saying: these models are so apocalyptic if everyone had them, and then on the other hand showcasing how their models are sweeping the floor on benchmarks. So which is it? Personally I don't believe any of these companies at this point, especially when they make claims that are non-public and wrapped in NDAs that benefit their bottom line.

[0] https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/


While I agree this is true of coding, there are other domains and paradigms in which the loop is more involved than a bash loop.

Realizing this fact explains:

1. why software development is first to get disrupted by AI

2. other domains that are easily loopable like contract review are also quite easy to deploy AI into, so you get all these "AI for Law" running around doing essentially the same thing

3. domains that are not easily loopable are much harder to figure out leading people to believe AI can't be useful, when in fact it's a failure of the application layer


Yeah, I think if you read the actual design of the test they are presenting as evidence, it shows that what these small models are doing is not the same as what Mythos did. They isolated the vulnerable code down to the vulnerable subset of the function and provided hints in the prompt about all of the key contextual factors that matter to finding the vulnerability. That makes the problem significantly easier.

I realize they are trying to prove that an agentic harness running small models can ultimately achieve the same thing as what Mythos did, but they are handwaving away the steps it takes to construct the context Mythos handled in-model, and using a misleading test result to prove small models can handle the key step.

Poor evidence for a premise that logically wouldn't even be proven if their evidence were valid. If they could find these types of vulnerabilities with the same effectiveness, they would have done it already.


People really lack imagination. The point here is that a dedicated attacker with a good harness and really cheap models can run the attack regardless. It's like portscan/url search attacks. They could run all of these against all codebases and clients. However, on the flip side, this also means we could run cheap models against every PR made, and do a thorough red-team security review.

None of this requires Mythos. If anything, we just need an Opus 4.5+ that is not lobotomised.


That is a point. It might even be true. But showing a small model an example of vulnerable code and asking it to confirm that it is vulnerable code isn't evidence for that point!

No, it is evidence for that point. You could just rattle off every possible vulnerability and have the cheap model scan for it in the harness through a loop.

Note that I say cheap, not small, because small models may lack the reasoning needed, but some models are cheap enough but retain enough reasoning (ala Sonnet 3.7+)


They could write a post demonstrating that you can do that and surface the same bugs in the same codebases.

It would be way more informative than this one, which didn't do that.


That's not what they did.

It’s like not differentiating between solving and verifying.

“PKI is easy to break if someone gives us the prime factors to start with!”


Off-topic but is there an effort to test AI models against code versions with major historic bugs (Heartbleed, GHOST, log4j, etc)? Seems like the kind of thing that would be relevant in security-related AI benchmarks.

So it follows that the most efficient time to discover bugs is when you first write them.

... or maybe when you see them triggered or exploited reproducibly, then the underlying bug will also be pretty easy to discover. But at that point, it's already too late. :)

I really like your original point, I never thought about it this way.


The point of contention is whether Mythos is the product of its intelligence or its harness; results like this, and other similar testimonies, call the too-dangerous-to-release marketing into question, and for good reason, too, because it is powerful marketing. Aisle merely says the intelligence is there in the small models. I say it's already clear that competent defenders could viably mimic, or perhaps even eclipse, what Mythos does by (a) making a better harness, (b) simply spending more on batch jobs, bootstrapping, caching better, etc. You may not be doing this yourself, but you probably should.

Aisle and Anthropic are literally talking about two different problem spaces.

These are message boards. The obvious sentiment, that firebombing attacks are awful (perhaps cut a little bit with "the perpetrator appears to be someone deeply in need of help"), is boring. This is an availability bias issue: the only sentiments that actually spool out into threads are edgy. Once you learn to spot these effects, message boards make a lot more sense and are less jarring.

Another good thread to follow is the murder of UnitedHealthcare CEO Brian Thompson: https://news.ycombinator.com/item?id=42317604

It's an interesting exercise to compare these threads.

My own position on the matter is not an edgy one: political violence of any kind is never justified, but it does signal that something deep in society requires a change.


I'm of the view that it's violence of the non-political kind that is never justified*. Political violence can be legitimized, as an option of last resort. There's plenty of historical examples where groups of people were denied every avenue of redress until they turned violent. As an example, read up on the history of most labour unions.

* one exception being defense of life and limb.


I am European and not American, but since Reddit is mostly used by Americans, I would say that from their perspective political violence is justified and encoded in the Constitution. How would you explain the Second Amendment?

> A well regulated Militia, being necessary to the security of a free State

Isn't really political. By my reading, the clause also invalidates the entire amendment as soon as the US acquired a standing army, but I'm not from the US, so who knows.


It seems odd to not quote the full amendment. It’s not like it’s long.

The phrasing in total is far more vague than what you’re presenting.


> political violence of any kind is never justified,

I'm genuinely curious about how you reconcile this with the world around you.


I completely disagree. Political violence is the universal check on every political system that keeps it from sucking too much.

The optimal amount of crazies getting off the porch at any one time is not zero, much like the optimal amount of fraud is not zero.


Besides, I think the sentiment would be very different if anyone actually got hurt.

"causing a fire to an exterior gate" doesn't lead me to believe there was any chance of real harm.


And the same applies to HN? Edgy messages make it to the top, and the reader should learn to react accordingly (in what way?)

Mostly just by not being emotionally destabilized by edgy comments, is all.

I think this is a little too optimistic:

- Go onto a Reddit thread about ICE, everyone in the comment threads says they don't like ICE. That's the obvious statement, not edgy.

- Go onto a Reddit thread about Trump, everyone says they don't like Trump. That's the obvious statement, not edgy.

Why would we think the Sam Altman thread is any different? I unfortunately think the Reddit thread might be the real deal, or at least a little more real than you are saying.


I operate in at least one social circle that is heavily not-technical (local politics) and I do not see this at all.

My experience is somewhat in the middle -- I see educated non-technical people who are strongly against AI because they see it as polluting, "wasting water", and harmful to society. Although many use it anyway.

I could totally believe uneducated or less well-adjusted people reacting in the above way, though.


Non-technical indeed. The wasting water or pollution argument is getting really tiring.

I would be careful about this one. While the overall impact (in the global/national aggregate sense) may not be massive, the impact on individual communities near these new hyperscale datacenters is far greater than most people on this site might think.

Look at the Grok datacenter in Memphis for one example. The "move fast and break things" mentality in this arena isn't about code anymore; it's being applied to communities.


Let’s say we grant the Grok example.

A) How many other datacenters with similar problems can you name?

B) How does this industry compare to every other one on earth? And look at the disproportionate hate this gets compared to other industries that are substantially worse.


I'm going to flip this around: I know about the Grok data center because I am under the impression it is unusual in terms of approvals and pollution.

How many other datacenters with similar problems can you name? If it is indeed not unique, I would appreciate you pointing out some other examples of the same behavior from non-AI related datacenters.


The hatred is particularly intense on Reddit. I lost a couple of accounts there to suspension, just for speaking in a civil way about the positive aspects of AI.

What do you see?

People in politics aren't that dissimilar to tech bros (especially AI ones) in terms of world view.

People in "local politics" are random neighbors, almost none of whom are "in politics" in the colloquial sense.

Fair enough, but I still think it at least somewhat applies to people who are willing to get involved in any kind of political process beyond the very basics or perhaps some special interest groups.

They are nothing remotely like "tech bros", is my point.

There's nothing "un-controversial" about trying to mitigate a firebombing attack with a broad critique of capitalism. It's an edgy take, just own it.

God damnit I didn't know until 15 seconds ago that the Space-switching animation in macOS was annoying. Thanks a lot!

Just wait until you notice that it’s inexplicably slower on 120Hz monitors and that your input devices remain focused on the previous space until the animation fully completes!

> your input devices remain focused on the previous space until the animation fully completes

This strikes me as the fuckup more than anything else.


This is true in iOS, too. Taps are ignored until any animation completes. Must be deep in the code!

That's entirely an app-level issue; the frameworks and the system itself are perfectly capable of handling this. [1]

It's just that for _most_ cases it's perfectly fine to make the users wait until the animation is finished, and handling users tapping multiple things in quick succession can get annoying and unwieldy.

There are some apps where it's infuriating though, especially when they're quadruply badly engineered and _tie the internal logic state_ to the UI state.

As someone living in a country where I don't speak the local language, I swear at Google Translate engineers daily because I do a "swap the active pair of languages and then quickly launch the camera mode" combo _very_ frequently, and the selected pair of languages isn't actually updated _until the animation finishes_.

It's maddening. [2]

[1]: A quick demo: tap an app on the Springboard to open it, and very quickly swipe up from the bottom to hide it. You'll absolutely be able to interrupt the animation of it launching.

[2]: I'm actually sorta guessing that this is a workaround for a different bug they had: if you tapped quickly enough a couple of times, you could end up in a situation where the UI displayed a different pair of languages than the internal logic had, so they added that delay. But who knows, maybe I'm theorycrafting too much.


The -only- time I experience it infuriatingly is in iOS natively.

For example: on the Home Screen, open a folder, then tap outside it to bring back the Home Screen. While the animation is happening, try opening an app. It won't open.

Open the camera app. Swipe between modes and press the shutter button. It won't work until the animation completes.

An especially egregious one: open Messages. Tap the + sign, and tap the letter P while it's animating. I had to tap the letter 10 times before it finally started showing Ps.

The OS is riddled with them.


I wasn’t claiming that Apple apps are immune to this, or even noticeably better — just that this is very much not an OS-level limitation, and something that _can_ be accomplished.

One of the examples I gave is Springboard behavior, which is very much OS-level.

It’s insane and the worst part about macOS.

Window management on macOS is just trash.

3 people from my team recently switched to macOS and they never owned a Mac before and they are all complaining about window management.

Do you know how dumb it makes me feel to have to tell them they need to install third-party apps just to make their system somewhat usable? It's insane.


Exactly! "It just works" used to be true for macOS, and Linux was for tinkering. Now, Ubuntu "just works" (and fast!).

It's not just that macOS is slow and window management is strange. Last time I had a MacBook (M2 Pro), I needed 3rd-party software just to use a 4K external monitor without blurring.


>3 people from my team recently switched to macOS and they never owned a Mac before and they are all complaining about window management.

For legit reasons? Because many switchers complain for stupid reasons, like the macOS distinction between apps and windows.


Complaining about the distinction between apps and windows isn't a "stupid reason" to complain though.

Say I use Slack, Teams and Outlook. If I use their Electron versions, I switch between them with cmd+tab. If I use them in separate browser windows, I switch between them by using cmd+tab to switch to Firefox, then cmd+` to cycle through windows until I find the one I want. That's weird; how you switch between these three apps depends on the technical details of how you opened them? Why?

Say I have neovim, the mutt email client, and a shell open. These are three separate apps, but because they happen to run in a terminal emulator, I still have to cmd+tab to the terminal emulator, then cmd+` to cycle between them. They're semantically different applications in dedicated windows, but technical implementation details mean they belong to what macOS considers "the same app", just like the "apps in Firefox windows" example above.

It wouldn't be so bad if the cmd+` "cycle between windows in the app" feature worked well. But it doesn't. Unlike cmd+tab, it doesn't show a bar which you get to select from, it just instantly re-orders your windows; and it's impossible to select a window in another workspace. That means, if I have Slack open in Firefox in workspace 1 and Outlook open in Chrome in workspace 2, I can switch between Slack and Outlook with cmd+tab, but if I have Slack open in Firefox in workspace 1 and Outlook open in Firefox in workspace 2, there is no way to switch between Slack and Outlook. That's pretty bad.


The (shift+)cmd+` order also resets to match the window z-order whenever you switch apps. So if the order is windows A, B, C, then you select window B, cmd+tab away, then cmd+tab back, the order will now be B, A, C.

I've developed an intuitive understanding of this, but I had to experiment just now to describe the behavior precisely. And my intuition is still wrong sometimes (like if the app has windows on multiple monitors, it's hard to predict the z-order).

> if I have Slack open in Firefox in workspace 1 and Outlook open in Firefox in workspace 2, there is no way to switch between Slack and Outlook

My local maximum is to never use workspaces – just cmd+tab, cmd+`, and sometimes cmd+h to reduce screen clutter.


I would also add to this that, in order to open two instances of an app, the app explicitly needs to support it. For example, you can't open 2 instances of Calculator.app side by side.

This is really annoying.


Yeah, I always want 2 calculator apps when I'm speed calculating... what?

You may want to see the result of one calculation while doing another calculation?

If only I could write it down in a text note and refer to it (and many more besides), as opposed to keeping a calculator window open per prior calculation I want to refer to...

Or if only there was a tool like Soulver or Calca...


Yes, you can write down the result of calculations and all relevant state, and then clear the state of the calculator window before doing a new calculation. But why?

Mac power user 25+ years.

Yes, it’s complete shit


Have your teammates also discovered how macOS handles copy/pasting a folder when the destination folder already exists? How macOS just deletes the entire contents of the destination folder, instead of merging? I remember discovering that for the first time :(

My favorite macOS update was when they removed the need for Rectangle, Mos, and Unnatural ScrollWheels.

/s


I have noticed this bug since... I want to say High Sierra? It was the inspiration behind this project.

If you don't want to go insane, try to forget before you notice everything else. It might be too late already once you first do, though.
