Hacker News | mattbuilds's comments

No one put words in your mouth; they asked you a question. You are the one who made the initial comparison to B2C apps, so it seems like a fair question to me. Your comment implies that it's standard and the app isn't doing anything out of the ordinary, when I think most people would expect an official government app to be held to a higher standard than the average B2C app.

>You are the one who made the initial comparison to B2C apps, so it seems like a fair question to me.

The relevant part of B2C is the 2C part, not the B. Mass market apps are generally riddled with telemetry and SDKs. Moreover, I'm not sure how you think it's a "fair question" to go from a remark about how other apps are equally bad to thinking I want the US government to operate as a business. It's like doing:

A: "I called the IRS and was put on hold for 2 hours, can you believe that?"

B: "To be fair that's the experience calling into most businesses, like banks or the cable company"

A: "Wow so you think we should be running the IRS like a bank?"

>I think most people would expect an official government app to be held to a higher standard than the average B2C app.

Is this a "yes, in an ideal world that's how things should be" type of statement, or are you claiming "yes, government agencies have a track record of delivering technical excellence on software projects, and this particular project was especially bad"? The former is basically a meaningless platitude, and I don't think anyone seriously thinks the latter is true.




>Ok, so then it just sounds like whataboutism.

The flip side of "whataboutism" is "isolated demands for rigor"[1]. Going back to the IRS example, is it a fair retort to point out that IRS's hotline only sucks as much as any other large organization's hotline, or is it "whataboutism"?

[1] https://slatestarcodex.com/2014/08/14/beware-isolated-demand...


It's the government, the US government. By far the largest employer and spender in the world. So yes, they are held to a higher standard. Businesses intentionally throttle customer service lines for profit reasons. The government should not. How is this difficult to understand?

>So yes, they are held to a higher standard.

See my earlier comment about how this is a meaningless platitude.

>Businesses intentionally throttle customer service lines for profit reasons. The government should not.

None of this was presupposed in the original comment, only that wait times are long.


What the hell do you mean, "meaningless platitude"? Do you understand the difference between civic duty and corporate duty?

If a company proactively evades taxes for profit, do you give the government the same pass? Companies skate and fight all this through litigation and interpretation. The government's duty is to the people and to uphold the law, not fight it. They are held to a higher standard of law, accountability and practice in all undertakings. What exactly are you refuting here?


Yea clearly. There is nothing in here at all even worth discussing.


Got any evidence of this or is it just vibes based?


Unsure why the status quo needs evidence but remote doesn't, but which part of my reasoning do you require evidence to believe?


That’s a false equivalence; sorry that some of us think companies should actually be responsible for the things they produce.


I’m sorry but the difficult part of making games isn’t the coding, it is making something that is appealing and enjoyable to play. An LLM isn’t going to help with that at all. How is it going to know if something is fun? That’s the real work.

Also, the idea that a dev who could make a game in 24 hours would create something professional and polished in 3 days is a joke. The answer to “where are all the games” is simple: LLMs don’t actually make a huge impact on making a real game.


Easy! Ask the LLM to play the game and, if it’s not fun, to try again. Just like when you ask it to compile the code and, if it fails, to try again.

…Joking…. For now


This is almost on the money. Making something fun often requires coding, art, sound, etc. to bring the fun out. So in fact coding is the difficult part, along with all the other stuff needed for something to be fun. IMO tooling like UE Blueprints and visual scripting is in the coding bucket.


I’m not saying coding is easy, but when it comes to games it is the easy part. Lots of people can code, very few can make something actually fun. Knowing how to code (or how to use an engine/blueprints/visual scripting) is just the start. It’s like making films. Everyone can record some videos on their phone, but it takes much more than that to make something people want to watch.


That analogy is more accurate for LLM vibe coding than real programming, which I think proves my point. Not everyone can code. Actually code. Ideas are bountiful compared to the skill required to bring them into reality.


I personally don't dismiss or advocate for AI/LLMs; I just go by what I actually see happening, which doesn't appear revolutionary to me. I've spent some time trying to integrate it into my workflow and I see some use cases here and there, but overall it just hasn't made a huge impact for me personally. Maybe it's a skill issue, but I have always been pretty effective as a dev, and what it solves has never been the difficult or time-consuming part of creating software.

Of course I could be wrong and it will change everything, but I want to actually see some evidence of that before declaring this the most impactful technology in the last 100 years. I personally feel like LLMs make the easy stuff easier, the medium stuff slightly more difficult, and the hard stuff impossible. I feel that way about a lot of technology that comes along, though, so it could just be that I'm missing the mark.


> I have always been pretty effective as a dev

> LLMs make the easy stuff easier

I think this is the observation that's important right now. If you're an expert that isn't doing a lot of boilerplate, LLMs don't have value to you right now. But they can acceptably automate a sizeable number of entry-level jobs. If those get flushed out, that's an issue, as not everyone is going to be a high-level expert.

Long-term, the issue is we don't know where the ceiling is. Just because OpenAI is faltering doesn't mean that we've hit that ceiling yet. People talk about the scaling laws as a theoretical boundary, but it's actually the opposite. It shows that the performance curve could just keep going up even with brute force, which has never happened before in the history of statistics. We're in uncharted territory now, so there's good reason to keep an eye on it.
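The scaling laws mentioned above are empirical power-law fits of loss against compute. A toy illustration of their shape (the constants here are made up for demonstration, not the published Kaplan or Chinchilla fits):

```python
def power_law_loss(compute: float, irreducible: float = 1.7,
                   scale: float = 400.0, exponent: float = 0.05) -> float:
    """Toy scaling-law curve of the form L(C) = E + a * C^(-alpha).

    All constants are illustrative placeholders, not fitted values.
    """
    return irreducible + scale * compute ** -exponent
```

The curve falls smoothly as compute grows rather than hitting a wall, which is the "brute force keeps working" observation, but it flattens toward the irreducible term, which is why each further gain costs so much more.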


True, but if you are building it for yourself then you will still have something useful in the end. Chances are you also enjoyed or took satisfaction in the process of building it. Also, if it is truly a passion project and not just an attempt to make money, it’s probably more interesting than most of the stuff shared.


Got any evidence on that or is it just “vibes”? I have my doubts that AI tools are helping good programmers much at all, forget about “running circles” around others.


I don't know about "running circles" but they seem to help with mundane/repetitive tasks. As in, LLMs provide greater than zero benefit, even to experienced programmers.

My success ratio still isn't very high, but for certain easy tasks, I'll let an LLM take a crack at it.


That is not a moral obligation, it is in fact the opposite. It is a lie that people tell themselves and the world to allow themselves to make immoral decisions for their own benefit.

I’m not saying running a company is easy, and I know that many gray areas exist in the decision making. I do think companies can exist, profit, and be a net good for the world. However, we need to remove the notion that the duty to shareholder profits is a moral duty. It’s a coward’s way out of having to make actual difficult choices. It’s one of those things that sounds great exactly because it allows you to do horrible things with no responsibility. It creates a system where you offload the effort and weight of your decisions: as long as you are acting in the interest of shareholders, you are in the clear. That’s a dangerous concept and the opposite of morality.


It is the fiduciary duty of the CEO to do what’s in the best interest of shareholders.

In a working system, it should be the government’s responsibility to limit what a company can do.


According to your logic, a CEO should attempt to destabilize and influence the government's responsibility so they can maximize shareholder value. And guess what, that is exactly what happens in reality. You can't just simplify reality into rules like this, because it leads to people using those rules as an excuse to skirt responsibility and avoid making actual difficult decisions.


Correct and this is why regulatory capture is the phase after market capture, to transition into legal monopoly.


The best interest of the shareholders might reasonably be interpreted as, say, not destroying the biosphere. Fiduciary duty is certainly not "maximise profits whatever the consequences".


I would recommend reading about the Friedman Doctrine and the time period where it came about. It is only a theory and not necessarily a good one.


Unless Saint Friedman got his "doctrine" from some higher power, it's just the oligarchy's first commandment.

In the first line of GP's reference in Wikipedia:

"The Friedman doctrine, also called shareholder theory, is a normative theory of business ethics advanced by economist Milton Friedman that holds that the social responsibility of business is to increase its profits."


Is it really the best approach, though, to sink all this capital into it if it can never achieve AGI? It’s wildly expensive, and if it doesn’t achieve all the lofty promises, it will be a large waste of resources IMO. I do think LLMs have use cases, but when I look at the current AI hype, the spend doesn’t match up with the returns. I think AI could achieve this, but not with a brute-force-like approach.


There's an even more fundamental question before getting there: how are we defining AGI?

OpenAI defines it based on the economic value of output relative to humans. Historically it had a much less financially oriented definition and general expectation.


You really can't take anything OpenAI says about this kind of thing seriously at this point. It's all self-serving.


It's still important though, they are the ones many are expecting to lead the industry (whether that's an accurate expectation is surely up for debate).


The market will sort that out, just like it did the dotcom bubble or tulip mania.

Another big pushback is copyrighted content. Without a proper revenue model, how do you pay for that?

That will also restrict what can be "learned". There are already lawsuits and allegations of using pirated books, etc.


I'll be surprised if anything meaningful comes of those issues in the end.

Copyright issues here feel very similar to claims against Microsoft in the 80s and 90s.

