Hacker News | amunozo's comments

Good read, but I have to disagree with the claim that skill and taste are correlated. It's true that learning a skill gives you a more nuanced view of the craft, which refines your taste. But software is not only for those who have skill in it; it is for everybody. Lots of amazing engineers have awful taste (really, really awful) in everything outside their immediate field of knowledge or interest, with the aggravating factor that their skill makes them arrogant.

On the contrary, a lower skill barrier could bring people from other disciplines with excellent taste to make beautiful, even if technically imperfect, pieces of craft.


> disagree with the fact that skill and taste are correlated

> lots of amazing engineers have an awful taste in everything that is not their immediate field of knowledge or interest

which one is it?


I clearly didn't express myself correctly, sorry. What I want to say is that one can develop taste along with skill in one craft, like software engineering, while having awful taste in others that many apps need. This means that people with taste in those other areas can now bring it to building nice software.

Oh, I like this. Taste couldn't transfer into software before, but now it can.

I have no idea how to set up something like this. How hard is it to hire somebody competent enough to set up a system like this in-house?

Wow, I would never have expected that. Do all models behave like this, or is it just Gemini? One particular Gemini model?

Gemini in particular is really odd (even with reasoning). ChatGPT still uses similar religion-influenced language, but it's not as weird.

We were messing around at work last week building an AI agent that was supposed to respond only with JSON data. GPT and Sonnet gave us more or less what we wanted, but Gemma insisted on giving us a Python code snippet.

> that was supposed to only respond with JSON data.

You need to constrain token sampling with grammars if you actually want to do this.
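
For the unfamiliar, here is a toy, character-level sketch of the idea; it is not any particular library's API. Real implementations (llama.cpp's GBNF grammars, dottxt's outlines, etc.) track full parser state over the model's actual token vocabulary, while here the "model" is just random logits and the "grammar" only constrains the first couple of characters:

    import numpy as np

    VOCAB = list('{}[]",: 0123456789abcdefghijklmnopqrstuvwxyz')

    def allowed(prefix):
        # Stand-in for a real incremental JSON parser: only '{' may
        # start the output, and right after '{' only '"' or '}' fits.
        if not prefix:
            return {'{'}
        if prefix[-1] == '{':
            return {'"', '}'}
        return set(VOCAB)  # toy: leave everything else unconstrained

    def sample_constrained(prefix, logits):
        mask = np.array([c in allowed(prefix) for c in VOCAB])
        logits = np.where(mask, logits, -np.inf)  # forbid invalid tokens
        probs = np.exp(logits - logits[mask].max())
        probs /= probs.sum()
        return str(np.random.choice(VOCAB, p=probs))

    print(sample_constrained('', np.random.randn(len(VOCAB))))  # always '{'

Because forbidden tokens get zero probability at every step, the model can never emit output the grammar rejects, so no completed answer is ever thrown away.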


That reduces the quality of the response though.

As opposed to emitting non-JSON tokens and having to throw away the answer?

Or just run json.dumps on the correct answer in the wrong format.

Don't shoot the messenger

Whose messenger? You didn't point us to anyone's research.

I just don't see how sampling tokens constrained to a grammar can be worse than rejection-sampling whole answers against the same grammar. The latter needs to follow the same constraints naturally to not get rejected, and both can iterate in natural language before starting their structured answer.

Under a fair comparison, I'd expect the former to provide answers at least just as good while being more efficient. Possibly better if top-whatever selection happened after the grammar constraint.
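
Concretely, the alternative being argued against is a retry loop like the following (a hedged sketch; generate stands for any black-box call that returns one whole completion):

    import json

    def rejection_sample(generate, max_tries=5):
        # Pay for a full completion, then validate it after the fact.
        # Every rejected answer is wasted tokens that a grammar-
        # constrained sampler would never have produced at all.
        for _ in range(max_tries):
            out = generate()
            try:
                return json.loads(out)
            except json.JSONDecodeError:
                continue
        raise ValueError("no valid JSON after %d completions" % max_tries)

Both approaches accept exactly the grammar-conforming answers; the rejection loop just discovers violations after spending the tokens.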


THIS IS LIES: https://blog.dottxt.ai/say-what-you-mean.html

I will die on this hill, and I have a bunch of other arXiv links from better peer-reviewed sources than yours to back my claim up (i.e., NeurIPS-caliber papers with more citations than yours claiming it does harm the outputs).

Any actual impact of structured/constrained generation on the outputs is a SAMPLER problem, and you can fix what little impact may exist with things like https://arxiv.org/abs/2410.01103

Decoding is intentionally nerfed/kept to top_k/top_p by model providers because of a conspiracy against high temperature sampling: https://gist.github.com/Hellisotherpeople/71ba712f9f899adcb0...


I use LLMs for Actual Work (boring shit).

I always set temperature to literally zero and don't sample.
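
For reference, a minimal sketch of what "temperature zero, don't sample" means at the logit level (plain numpy, no particular provider API assumed):

    import numpy as np

    def decode(logits, temperature=0.0):
        if temperature == 0.0:
            # Greedy decoding: deterministic argmax, no sampling at all.
            return int(np.argmax(logits))
        # Otherwise sample the temperature-scaled softmax; as the
        # temperature approaches 0 this collapses onto the argmax anyway.
        p = np.exp((logits - logits.max()) / temperature)
        return int(np.random.choice(len(logits), p=p / p.sum()))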


I'd honestly like to hope people would be more up in arms over this, but based on historical human tendencies, convenience will win here.

Gemma ≠ Gemini

That happens in most of the world. Why China, then?

Because they have a billion and a half people and they were willing to be the western world’s factory.

I use Opencode, where the model is free for the moment. I have not used Claude Code, so I cannot compare.

For the moment it is free in Opencode, if you want to try it.

Those black Nazis in the first image model were a cause of insider trading.

I don't know how it is in English, but in Spanish, if you keep clicking the Demon Hunter, it says "I'm blind, not deaf." That was my favorite one.

Indubitably

How can advertising and marketing become more profitable from this? It's a genuine question, but I don't see how making advertising and marketing easier for everybody and hence flooding the already flooded market would result in increased productivity.


By significantly reducing the cost of creating the advertisements. Want to air a commercial? You no longer need actors, sets, designers, costumes, etc.; just ask an AI to make you a commercial and describe what you want it to look like.

Consider all the labor and capital spent across all the advertising real estate in the world: commercials, online ads, billboards, labeling. The inputs needed to make all these things are now greatly reduced. For productivity to increase, it doesn't matter that the market is flooded, just that it's much easier to make these things.


Makes sense. You also free up time and people to do other things.

No need; it was much more comfortable to stay in known sectors such as banking, industry, or tourism. Now there is a real need, so I'm positive things will change.

