Hacker News | joshfraser's comments

I miss the days when Hacker News would root for startups and brave innovators instead of the stodgy old incumbents.


But would the responses be any different back then? Energy innovations like battery tech were also looked at critically back then. There's even a template doing the rounds with a checklist for supposedly revolutionary battery tech.

Using a turbine to generate electricity isn't innovative or brave. It's been done; the technology has been around for over a hundred years, but it's not widely used because there are better ways to generate electricity.


There are multiple manufacturers currently making gas turbines that are exactly this size. They’ve been making them for at least 50 years.

This is not brave or innovative, it’s a cash grab during a bubble. You can call a Siemens or GE sales rep and place an order right now for the exact same thing.


I don't root for startups who will make the world a worse place.


You're telling us that data centers are more sensitive to downtime than airplanes??? That makes no sense.

All of the aeroderivatives were designed in the 70's before we had computer modeling to help optimize the designs. It's not that crazy to assume that we can design a better and more efficient turbine today with all of the help of modern technology.


Planes are quite sensitive to engine failure during flight. Data centers don’t tend to fly for three hours then sit idle for an hour, then sit idle overnight. They need to be up 24/7. When you’re talking 40 or 50 megawatts, you’re not going to necessarily buy triple or quadruple capacity. So it’d better be reliable without a lot of downtime for checks and maintenance.


> All of the aeroderivatives were designed in the 70's before we had computer modeling to help optimize the designs.

Not even remotely correct. The concept started in the 70s, and designs have been continuously improved, using the latest modelling techniques, for the last 50 years. Modern turbines are some of the most optimized machines humanity has ever produced.


No, that's not what he's telling us. Read it again.


We have an abundance of natural gas and a shortage of electricity.


What other options do they have? They've been sanctioned to the point where they have few options left but to turn to crime.


Their brutal dictatorship is a choice.


Isn't the whole point of a dictatorship that you don't get to choose?


"creating software for free that largely benefits large corporations"

Who cares. The end result of this is that we all get to use amazing software, often for free.

Think of your open source contributions as a gift to all of humanity. I wouldn't get too hung up on the fact that bad people can use it. Hammer makers don't add conditions on who can buy their products, even though a hammer could be used as a murder weapon. Take solace in the fact that your work is creating far more good than evil.

You're increasing the rate of innovation in the world. And we're all grateful for it.


I love the convenience of Heroku but hate their predatory pricing. Who's fixing this?


Fly was supposed to fix Heroku but my bill more than doubled since they changed how they charge for shared CPUs.

https://community.fly.io/t/cpu-quotas-update/23473


I work at Render (render.com); we have over 4 million developers on the platform, and we've migrated many large (and small) Heroku customers over because of our more modern capabilities and scalable pricing.

https://render.com/docs/migrate-from-heroku


You have a range of options; it depends on the size of your team, the kind of apps you're running, etc. The answer can be anything from an "ssh script" to AWS (or K8s), etc.

If you're running something that's too expensive for your taste and can share more information, happy to brainstorm some options.


AWS Elastic Beanstalk gives you more or less the same experience but charges you normal EC2 instance pricing. It's as cheap as PaaS gets.


There are a ton of interesting use-cases for public city data. When I was an Airbnb host, I built an early alert system to send me an email if my address was ever reported or under investigation. The government moves at a snail's pace, so anyone who was paying attention would have plenty of time to cure any issues before any formal investigation was even started. I even had a personal dashboard showing how the enforcement office operated: how many investigators they had, which neighborhoods were getting the most enforcement actions, stats on how cases were resolved, how long they took, etc.
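The core of an alert system like that is just a dedupe-and-filter loop over whatever records the city publishes. A rough sketch, with an entirely hypothetical record shape and field names (not the actual city API):

```python
# Sketch of the polling/alert idea. The record fields ("case_id",
# "address") and the feed itself are hypothetical, for illustration.

def new_cases_for(address, records, seen_ids):
    """Return records mentioning `address` we haven't alerted on yet."""
    hits = [r for r in records
            if r["address"] == address and r["case_id"] not in seen_ids]
    seen_ids.update(r["case_id"] for r in hits)
    return hits

# Two polls of the (hypothetical) enforcement-case feed:
seen = set()
feed = [{"case_id": 101, "address": "123 Main St"}]
assert [r["case_id"] for r in new_cases_for("123 Main St", feed, seen)] == [101]
# Second poll: the same record is not re-alerted.
assert new_cases_for("123 Main St", feed, seen) == []
```

In practice you'd fetch `records` from the open-data endpoint on a cron job and fire off an email whenever the function returns anything.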


I switched from Slack to Discord back in 2017 and I can't imagine ever going back. Their free offering is better than what you get for $$$$ from Slack.

Slack is designed for small groups of people that all know and trust each other. That security model falls apart when you scale to large low-trust organizations. Discord was designed for strangers and offers far more granular controls.

They offer infinite search. Unlimited users. And it's free! Can't recommend it enough.


I've seen the invite-only marketplaces where these exploits are sold. You can buy an exploit to compromise any piece of software or hardware that you can imagine. Many of them go for millions of dollars.

There are known exploits to get root access to every phone or laptop in the world. But researchers won't disclose these to the manufacturers when they can make millions of dollars selling them to governments. Governments won't disclose them because they want to use them to spy on their citizens and foreign adversaries.

The manufacturers prefer to fix these bugs, but aren't usually willing to pay as much as the nation states that are bidding; their bounties mostly just drive up the price. Worse, intelligence agencies like the NSA often pressure or incentivize major tech companies to keep zero-days unpatched for exploitation.

It's a really hard problem. There are a bunch of perverse incentives that are putting us all at risk.


> It's a really hard problem

Hard problems are usually collective-action problems. This isn't one. It's a tragedy of the commons [1], the commons being our digital security.

The simplest solution is a public body that buys and releases exploits. For a variety of reasons, this is a bad idea.

The less-simple but, in my opinion, better model is an insurance model. Think: FDIC. Large device and software makers have to buy a policy, whose rate is based on number of devices or users in America multiplied by a fixed risk premium. The body is tasked with (a) paying out damages to cybersecurity victims, up to a cap and (b) buying exploits in a cost-sharing model, where the company for whom the exploit is being bought pays a flat co-pay and the fund pays the rest. Importantly, the companies don't decide which exploits get bought--the fund does.

Throw in a border-adjustment tax for foreign devices and software and call it a tariff for MAGA points.

[1] https://en.wikipedia.org/wiki/Tragedy_of_the_commons
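The cost-sharing arithmetic above is simple enough to sketch. All the numbers here are made up (the proposal doesn't specify a risk premium or co-pay):

```python
# Back-of-the-envelope sketch of the proposed fund. The risk premium,
# co-pay, and device counts are all assumed, not from the proposal.

RISK_PREMIUM_PER_DEVICE = 0.50   # dollars per device per year (assumed)
EXPLOIT_CO_PAY = 100_000         # flat co-pay per purchased exploit (assumed)

def annual_policy_rate(devices_in_us: int) -> float:
    """Policy rate = devices/users in America x fixed risk premium."""
    return devices_in_us * RISK_PREMIUM_PER_DEVICE

def exploit_purchase_split(price: float):
    """Return (company co-pay, fund share) for one exploit purchase."""
    co_pay = min(EXPLOIT_CO_PAY, price)
    return co_pay, price - co_pay

# A vendor with 200M US devices pays $100M/year into the fund:
assert annual_policy_rate(200_000_000) == 100_000_000.0
# A $2.5M exploit: the company pays the flat co-pay, the fund the rest.
assert exploit_purchase_split(2_500_000) == (100_000, 2_400_000)
```

The key property is that the co-pay is flat, so the company's cost doesn't scale with the exploit's market price, and the fund (not the company) decides what gets bought.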


I think the actual problem is the software and hardware manufacturers.

Secure use of any device requires a correct specification. These should be available to device buyers and there should be legal requirements for them to be correct and complete.

Furthermore, such specifications should also be required for software: precisely what it does, with legal guarantees that it's correct.

This has never been more feasible. And considering that we Europeans are basically at war with the Russians, it seems reasonable to secure our devices.


We already have that: ISO 15408, the Common Criteria [1]. Certification is already required and performed for various classes of products before they can be purchased by the US government.

However, large commercial IT vendors such as Microsoft and Cisco were unable to achieve the minimum security requirements demanded for high criticality deployments, so the US government had to lower the minimum requirements so their bids could be accepted.

At this point, all vendors just specify and certify that their systems have absolutely no security properties and that is deemed adequate for purchase and deployment.

The problem is not lack of specification, it is that people accept and purchase products that certify and specify they have absolutely zero security.

[1] https://en.m.wikipedia.org/wiki/Common_Criteria


Yes, but consumers buy, for example, graphics cards with binary blobs and are certainly not sent a specification of the software in them, or of the interfaces, etc. and that is what I believe is the absolute minimum foundation.

So I mean an internal specification of all hardware interfaces and a complete description of software-- no source code, but a complete flow diagram or multi-process equivalent.


> These should be available to device buyers and there should be legal requirements for them to be correct and complete

You're still left with a massive enforcement problem nobody wants to own. Like, "feds sued your kid's favourite toy maker because they didn't file Form 27B/6 correctly" is catnip for a primary challenger.


That's an incredibly tough sell, particularly for software. Who is it that should "require" these specifications, and in what context? Can I still put my scrappy code on Github for anyone to look at? Am I breaking the law by unwittingly leaving in a bug?


Yes, but you wouldn't be able to sell it to a consumer.

The way I imagine it: no sales of this kind of thing to ordinary people, only to sophisticated entities who would be expected to deal with the incompletely specified source code. So if a software firm wants to buy it, that's fine, but you can't shrink wrap it and sell it to an ordinary person.


Modern software is layers upon layers of open-source packages and libraries written by tens of thousands of unrelated engineers. How do you write a spec for that?


A tragedy of the commons occurs when multiple independent agents exploit a freely available but finite resource until it's completely depleted. Security isn't a resource that's consumed when a given action is performed, and you can never run out of security.


> Security isn't a resource that's consumed when a given action is performed, and you can never run out of security

Security is in general non-excludable (vendors typically patch for everyone, not just the discoverer) and non-rival (me using a patch doesn't prevent you from using the patch): that makes it a public good [1]. Whether it can be depleted is irrelevant. (One can "run out" of security inasmuch as a stack becomes practically useless.)

[1] http://www.econport.org/content/handbook/commonpool/cprtable...


>Security is [...] a public good

Yeah, sure. But that doesn't make it a resource. It's an abstract idea that we can have more or less of, not a raw physical quantity that we can utilize directly, like space or fuel. And yes, it is relevant that it can't be depleted, because that's what the term "tragedy of the commons" refers to.


> it is relevant that it can't be depleted, because that's what the term "tragedy of the commons" refers to

I think you're using an overly-narrow definition of "tragedy of the commons" here. Often there are gray areas that don't qualify as fully depleting a resource but rather incrementally degrading its quality, and we still treat these as tragedy of the commons problems.

For example, we regulate dumping certain pollutants into our water supply; water pollution is a classic "tragedy of the commons" problem, and in theory you could frame it as a black-and-white problem of "eventually we'll run out of drinkable water", but in practice there's a spectrum of contamination levels and some decision to be made about how much contamination we're willing to put up with.

It seems to me that framing "polluting the security environment" as a similar tragedy of the commons holds here: any individual actor may stand to gain a lot from e.g. creating and/or hoarding exploits, and in isolation that incremental degradation of the overall security ecosystem is a net benefit to them, but everyone acting this way pushes the entire ecosystem toward some threshold at which the degradation becomes intolerable to all involved.


> It's an abstract idea that we can have more or less of, not a raw physical quantity that can utilize directly, like space or fuel

Uh, intellectual property. Also land ownership is an abstract idea. (Ownership per se is an abstract idea.)


Land is obviously a finite resource. I don't know what point you're trying to make with regards to intellectual property.


> don't know what point you're trying to make with regards to intellectual property

Stocks. Bonds. Money, for that matter. These are all "abstract idea[s] that we can have more or less of, not a raw physical quantity." We can still characterise them as rival and/or excludable.


Security may be considered a "commons," but the accountable parties are individual manufacturers. If my car is malfunctioning, I'm punished by law enforcement. There are inspections and quality standards. Private entities may provide certifications.


Please no more mandated insurance programs.


Insurers can be quite good at enforcing quality standards.


The markets here are complicated and the terms on "million dollar" vulnerabilities are complicated and a lot of intuitive things, like the incentives for actors to "hoard" vulnerabilities, are complicated.

We got Mark Dowd to record an episode with us to talk through a lot of this stuff (he had given a talk long before, whose slides you can find floating around), and I'd recommend it for people who are interested in how grey-market exploit chain acquisition actually works.

https://securitycryptographywhatever.com/2024/06/24/mdowd/


Makes me wonder if there are engineers on the inside of some of these manufacturers intentionally hiding 0-days so that they can then go and sell them (or engineers placed there by companies who design 0-days).


People have been worrying about this for 15 years now, but there's not much evidence of it actually happening.

One possible reason: knowing about a vulnerability is a relatively small amount of the work in providing customers with a working exploit chain, and an even smaller amount of the economically valuable labor. When you read about the prices "vulnerabilities" get on the grey market, you're really seeing an all-in price that includes value generated over time. Being an insider with source code access might get you a (diminishing, in 2025) edge on initial vulnerability discovery, but it's not helping you that much on actually building a reliable exploit, and it doesn't help you at all in maintaining that exploit.


A good vulnerability or backdoor should be indistinguishable from a programming mistake. An indirect call. A missing check on some bytes of encrypted material. Leave out some validation and you'll have a good item to sell that no one else can find.
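A toy illustration of the idea: a MAC check that reads like an ordinary oversight. This is a well-known bug class (timing-unsafe comparison); the key and API shape here are just for the demo:

```python
# A plausible-looking "mistake": verifying a MAC with ==, which
# compares byte-by-byte and stops at the first mismatch, leaking how
# many leading bytes were correct via response timing.
import hmac
import hashlib

SECRET = b"demo-key"  # hypothetical key, for illustration only

def sign(msg: bytes) -> bytes:
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def verify(msg: bytes, tag: bytes) -> bool:
    # The correct line is: return hmac.compare_digest(sign(msg), tag)
    return sign(msg) == tag

assert verify(b"hello", sign(b"hello"))
assert not verify(b"hello", b"\x00" * 32)
```

Functionally the code is correct, which is exactly why a reviewer could wave it through; only the timing side channel distinguishes it from the safe version.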


See: second paragraph above.


Are we just straight up ignoring the Jia Tan xz exploit that happened 10 months ago, which would've granted ssh access to the majority of servers running OpenSSH? Or does that not count for the purposes of this question, because that was an open source library rather than a hardware manufacturer?


Is there any evidence the author of this backdoor was able to sell it to anyone, for any kind of money?


> It's a really hard problem.

Classify them as weapons of mass destruction. That's what they are. That's how they should be managed in a legal framework and how you completely remove any incentives around their sale and use.


How about some penalties for their creation? If the NSA is discovering or buying them, someone else is creating them (even if unintentionally).

Otherwise corporations will be incentivized (even more than they are now) to pay minimal lip service to security - why bother investing beyond a token amount, enough to make PR claims when security inevitably fails - if there is effectively no penalty and secure programming eats into profits? Just shove all risk onto the legal system and government for investigation and clean up.


> weapons of mass destruction. That's what they are

Seriously HN? Your Netflix password being compromised is equivalent to thermonuclear war?


Think more along the lines of exploits that allow turning off a power grid, spinning a centrifuge too fast, or releasing a dam.


> exploits that allow turning off a power grid, spinning a centrifuge too fast, or releasing a dam

By this definition trucks are WMDs because they, too, can blow up a dam.

Hyperbolic comparisons undermine the speaker's authority. Zero-days aren't WMDs.


That is never, ever going to happen, and they are nothing at all like NBC weapons.


Yes. Except our government is the largest buyer.


The USA has 5,044 nuclear warheads, so that shouldn't be a problem.


Suddenly I felt like re-reading Ken Thompson’s essay Reflections on Trusting Trust.

We’ve created such a house of cards. I hope when it all comes crashing down that the species survives.


Instead of hoping, you can do a lot just by ditching your cell phone and using Debian stable.


Ah yes, switching from an iPhone to Debian is sure to… checks notes save the species from extinction.

Apologies for the dismissive snark; perhaps you could provide me some examples of how this would help?


Reminds me of the Anthropic Claude jailbreak challenge, which only pays around $10,000. If you drive the price up, I'm pretty sure you'll get some takers. Incentives are not aligned.


It's a classic timing attack. You can detect which Cloudflare datacenter is "closest" (ie. least network latency) to a targeted Signal or Discord user.

The speed of light is the main culprit here.
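The inference step is trivial once you have the measurements: light in fiber covers very roughly 100 km per millisecond of round trip, so the minimum-latency datacenter bounds where the target can be. A sketch, assuming the attacker has somehow obtained per-datacenter latencies for the target (the numbers and airport codes are invented):

```python
# Given measured round-trip times (ms) attributable to the target for
# each Cloudflare datacenter, the nearest one is just the minimum.
# The measurement step itself is the hard part and is not shown here.

def nearest_datacenter(rtts: dict) -> str:
    """Return the datacenter code with the lowest observed RTT."""
    return min(rtts, key=rtts.get)

measured = {"SJC": 12.0, "ORD": 48.0, "FRA": 150.0}  # hypothetical
print(nearest_datacenter(measured))  # SJC: target is likely near San Jose
```

At ~100 km of RTT per millisecond, a 12 ms reading puts the target within roughly a thousand kilometers of that datacenter, which is why even coarse measurements are a meaningful privacy leak.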

