
You need to buy an Apple device

Elaborate?


Debt becomes cheaper with devaluation of the dollar, and the US has a lot of it.


Not just that. Imports become more expensive; exports become cheaper. This helps US manufacturing.


What's the use case of models like T5 compared to decoder-only models like Gemma? More traditional ML/NLP tasks?


They trained it to be used like any other decoder-only model, so text generation, essentially. But you could use the encoder part for things like classification without much effort. Then again, you can also slap a classifier head on any decoder model. The main reason they seem to be doing this is to have swappable encoder/decoder parts in an otherwise standard LLM, but I'm not sure that's really something we needed.


Encoder/decoder is much, much more efficient for finetuning and inference than decoder-only models.

Historically, T5 models are good when you finetune them for task-specific models (translation, summarization, etc.).


I have actually worked on encoder-decoder models. The issue is, finetuning itself is becoming historic, at least for text processing. If you spend a ton of effort today to finetune on a particular task, chances are you would have reached the same performance using a frontier LLM with the right context in the prompt. And if a big model can do it today, in 12 months there will be a super cheap and efficient model that can do it as well. For vision you can still beat them, but only with huge effort, and the gap is shrinking constantly. And T5 is not even multimodal. I don't think these will change the landscape in any meaningful way.


This T5 is multimodal.

Also a hint: you can create a finetuning dataset from a frontier LLM pretty easily, so you can finetune those T5s and effectively distill them pretty fast these days.


The only thing it buys you is a more “natural” embedding, i.e. the encoder can get you a bag o’ floats representing a text. But that doesn’t mean it’s naturally a good embedding engine; I strongly assume you’d do further training.

Decoder gets you the autoregressive generation you’d use for an llm.

Beyond that, there’s the advantage that small LLMs train better; they kinda hit a wall a year or two ago IMHO. E.g. the original Gemma 3 small models were short-context and text-only.

As far as I understand, you pay for that with 2x inference cost at runtime.

(Would be happy to be corrected on any of the above. I maintain a multi-platform app that has llama.cpp inference in addition to standard LLMs, and I do embeddings locally, so I’m operating from a practical understanding more than an ML PhD.)


In general, encoder+decoder models are much more efficient at inference than decoder-only models because they run over the entire input all at once (which leverages parallel compute more effectively).

The issue is that they're generally harder to train (they need input/output pairs as a training dataset) and don't naturally generalize as well.


> In general encoder+decoder models are much more efficient at inference than decoder-only models because they run over the entire input all at once (which leverages parallel compute more effectively).

Decoder-only models also do this; the only difference is that they use masked attention.
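As a concrete sketch of that difference (hypothetical, framework-free code): during prefill, a decoder-only model also scores all query/key pairs over the input in parallel, but each position may only attend to itself and earlier positions. That causal mask is the only structural difference from a bidirectional encoder over the same input.

```javascript
// Build a causal (lower-triangular) attention mask for a sequence.
// 1 = position q may attend to position k; 0 = masked out (future token).
// An encoder would simply use an all-ones matrix here.
function causalMask(seqLen) {
  const mask = [];
  for (let q = 0; q < seqLen; q++) {
    const row = [];
    for (let k = 0; k < seqLen; k++) {
      row.push(k <= q ? 1 : 0);
    }
    mask.push(row);
  }
  return mask;
}
```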


That doesn't quite make sense to me. A meter is a unit of measure for distance, but a meter is a distance.


Yeah, it's not the same as physics units.

Money does have real value, but only because it can be traded for valuable things.

But money in itself, as bills or numbers in a bank account, is useless until you trade it for something "real".


It's technically deterministic, but it feels nondeterministic in chatbots since tokens are randomly sampled (temp > 0) and input is varied. Using the right prompt makes the model perform better on average, so it's not completely dumb.

I like task vectors and soft prompts because I think they show how prompt engineering is cool and useful.

https://arxiv.org/pdf/2310.15916

https://huggingface.co/docs/peft/conceptual_guides/prompting
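To make the determinism point concrete, here is a hypothetical sketch of where the randomness actually enters: the model maps a prompt to logits deterministically, and only the sampling step draws randomly (at temperature 0 it degenerates to a deterministic argmax).

```javascript
// Sample a token index from logits at a given temperature.
// `rand` is injectable so the randomness source is explicit.
function sampleToken(logits, temperature, rand = Math.random) {
  if (temperature === 0) {
    // Greedy decoding: fully deterministic argmax.
    return logits.indexOf(Math.max(...logits));
  }
  const scaled = logits.map(l => l / temperature);
  const maxL = Math.max(...scaled);
  const exps = scaled.map(l => Math.exp(l - maxL)); // numerically stable softmax
  const total = exps.reduce((a, b) => a + b, 0);
  let r = rand() * total;
  for (let i = 0; i < exps.length; i++) {
    r -= exps[i];
    if (r <= 0) return i;
  }
  return exps.length - 1;
}
```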


> It's technically deterministic, but it feels nondeterministic in chatbots since tokens are randomly sampled

Are you not aware the random sampling makes something non-deterministic?


I'm saying LLMs are deterministic and because of that, prompt engineering can be effective. You knew what I was trying to say, but chose to ignore it.

You should follow the HN Guidelines. I'm trying to have a discussion, not a snarkfest.

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


I don’t believe that when he wrote that, he was using his own intelligence


At least I'm adding to the discussion.


People who disagree with you add to the discussion too


From what I can tell, their official chat site doesn't have a native audio -> audio model yet. I like to test this through homophones (e.g. record and record) and asking it to change its pitch or produce sounds.


“record and record”, if you mean the verb for persisting something and the noun for the thing persisted, are heteronyms (homographs which are not homophones). Incidentally, heteronyms are also what you would probably want to test here: distinguishing homophones would test use of context to understand meaning, but wouldn’t test whether logic is working directly on audio or only on text processed from audio. Failing to distinguish heteronyms is suggestive of processing occurring on text, not audio directly.


There are homophones of “record”, such as:

“He’s on record saying he broke the record for spinning a record.”


True.

OTOH, my point still stands: the thing being suggested isn’t testable by seeing whether the system can distinguish homophones, but might be by seeing whether it distinguishes heteronyms. (The speculation that the intended record/record distinction was actually a pair of heteronyms, and that the error was merely the use of the word “homophone” in place of “heteronym” rather than the basic logic of the comment, is somewhat tangential to the main point.)


Ah I meant heteronyms. Thanks!


Huh, you're right. I tried your test and it clearly can't understand the difference between heteronyms. That seems to imply they're transcribing speech to text before the model sees it. Which is really weird because Qwen3-Omni claims to support direct audio input into the model. Maybe it's a cost-saving measure?


Weirdly, I just tried it again and it seems to understand the difference between record and record just fine. Perhaps if there's heavy demand for voice chat, like after a new release, they load shed by using TTS to a smaller model.

However, it still doesn't seem capable of producing any of the sounds, like laughter, that I would expect from a native voice model.


To be fair, discerning heteronyms might just be a gap in its training.


Is record a homophone? At least in the UK we use different pronunciations for the meanings. Re-cord for the verb, rec-ord for the noun.


I was mistaken about what homophone means!


I'd guess S&box is more an extension of Garry's Mod rather than a reaction to Unity


After the Roman Republic, they switched to having an emperor. Jesus was crucified during this Roman empire. The kings of Rome were around 600 years before this. They meant the emperor, not the king.


You got me into web dev. Thank you!


I'm currently making a tycoon game with React; it's not bad for making some games. I use setInterval for a simple game loop along with a zustand store for the game logic. I'm keeping the game logic and state client-side for now, but I might move it to a server in the future.
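The pattern described above can be sketched roughly like this (all names hypothetical, with a tiny hand-rolled store standing in for zustand so the snippet is self-contained):

```javascript
// Minimal subscribe-able store, a stand-in for zustand's API surface.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach(fn => fn(state)); // notify React subscribers
    },
    subscribe(fn) { listeners.add(fn); return () => listeners.delete(fn); },
  };
}

const gameStore = createStore({ money: 0, incomePerTick: 5 });

// One step of game logic; React components re-render via subscribe().
function tick() {
  const { money, incomePerTick } = gameStore.getState();
  gameStore.setState({ money: money + incomePerTick });
}

// const loop = setInterval(tick, 1000); // one tick per second
// clearInterval(loop) when the game unmounts.
```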


Just a note for those planning to make a simple game or animation in JavaScript: in most cases it's preferable to use `requestAnimationFrame` instead of `setInterval` or `setTimeout`.

https://stackoverflow.com/a/38709924
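Roughly, the pattern looks like this (a hypothetical sketch, not the linked answer's exact code): the loop runs once per display frame, and the frame timestamp is used to advance state by elapsed time, so game speed stays independent of frame rate.

```javascript
// Start a frame-rate-independent loop. `raf` is injectable so the
// loop can be driven outside a browser; in a page you'd omit it.
function startLoop(update, raf = requestAnimationFrame) {
  let last = null;
  function frame(now) {
    if (last !== null) {
      update((now - last) / 1000); // elapsed seconds since last frame
    }
    last = now;
    raf(frame); // schedule the next frame
  }
  raf(frame);
}

// Usage: startLoop(dt => { player.x += speed * dt; });
```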

