Hacker News | superluserdo's comments

I basically implemented exactly this on top of whisper since I couldn't find any implementation that allowed for live transcription.

https://tomwh.uk/git/whisper-chunk.git/

I need to get around to cleaning it up but you can essentially alter the number of simultaneous overlapping whisper processes, the chunk length, and the chunk overlap fraction. I found that the `tiny.en` model is good enough with multiple simultaneous listeners to be able to have highly accurate live English transcription with 2-3s latency on a mid-range modern consumer CPU.
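The chunking scheme can be sketched roughly like this (a simplified illustration, not the repo's actual code; the knobs correspond to the chunk length and overlap fraction mentioned above, and feeding each chunk to one of N concurrent whisper processes is left as a placeholder):

```python
def chunk_offsets(total_s, chunk_s=5.0, overlap_frac=0.5):
    """Start times (seconds) of overlapping chunks covering `total_s` seconds.

    With overlap_frac=0.5, each new chunk starts halfway through the
    previous one, so every instant of audio is heard by two chunks.
    """
    step = chunk_s * (1.0 - overlap_frac)
    offsets = []
    t = 0.0
    while t + chunk_s <= total_s:
        offsets.append(round(t, 3))
        t += step
    return offsets

# 15 s of audio, 5 s chunks, 50% overlap -> a chunk every 2.5 s
print(chunk_offsets(15.0))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```

Each offset's chunk would then be sliced out of the live audio buffer and handed to the next free whisper worker, with the overlapping transcripts merged back together afterwards.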


Seems like we're just currently in the top-right of this comic https://xkcd.com/2044/


Makes me think of Kafka as well.


Someone should write an AI tool that evaluates every top article on Hacker News and provides the appropriate XKCD comic as a comment.


And then a few steps later it's just bots talking to bots. Then what will we read when we're on the loo?


The real answer is it's completely domain-specific. If you're trying to search for something that you'll instantly know when you see it, then something that can instantly give you 5 wrong answers and 1 right answer is a godsend and barely worse than something that is right 100% of the time. If the task is to be an authoritative designer of a new aeroplane, it's a different story.


I wish I could go back to the days of doing almost anything at all without having to tell a server what a motorbike or traffic light is.


LPT: switch to the audio captcha. Yes, it takes a bit longer than if you did one grid captcha perfectly, but I never have to sit there and wonder if a square really has a crosswalk or not, and I never wind up doing more than one.


You're mixing up years and days there. It would be about $1 a day, not $400.


If you couldn't spend more than $400 a day, then nothing great would ever get done.


Somehow people making these sort of hypotheticals about billionaires spending or dispersing their money always make a mistake like this.

"Jeff Bezos has 300 billion dollars. There are 300 million people in America, so he could give everybody a million dollars."

For fun, calculate how long the billionaires of America could fund America's social programs if they were taxed at 100%. If you ask people this, the off-the-cuff estimates are usually something huge like a thousand years, or a century...


Pretty sure it's not going to be 300 billion dollars if you try to cash it out.


why should we consider it 300 billion dollars at all, then?


that's a thousand dollars each

Even in this populist age, math still counts.
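For the record, the arithmetic being argued about (nothing beyond the figures already in the thread):

```python
total_wealth = 300e9   # $300 billion
population = 300e6     # 300 million Americans

per_person = total_wealth / population
print(per_person)  # 1000.0 -- a thousand dollars each, not a million
```

The common mistake is dividing billions by millions and thinking the answer is in millions; billion/million leaves a factor of a thousand, not a million.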


It was an illustrative example of the way people botch this sort of math problem. Steve Mould has a video about it IIRC.


Sure but those hypotheticals are just poorly thought out ways to visualize the imbalance. Another way to do it would be saying "Jeff Bezos has 300 billion dollars, that is 300 thousand millions. There are 300 million people in America, so $1000 has been taken out of every American's share of the national wealth and reserved exclusively for Jeff Bezos". Repeat that for every billionaire in the US and you should be able to demonstrate quite the imbalance.

Of course that assumes you think Earth's and society's (or at least America's) resources should exist for all humans, with the ideal balance based on how little one needs and how much one can contribute, i.e. literally how early human communities operated and how communities still often operate outside economic contexts (e.g. after a natural disaster). You can say that model doesn't scale, but I don't see a good argument for why that's a reason to adopt a completely different model, unless you're among the few people the current one disproportionately benefits (and even for them it's often ruinous at a human and interpersonal level, given how much it alienates them from almost everyone around them).


That’s also misusing maths though because Amazon is a global company so really you should divide by 8 billion or at least a couple of billion.

As a Brit I think I’ve derived significantly more than $1000 in value through Amazon’s existence as compared with the status quo beforehand, and that’s exclusively counting the shopping part and not anything else they do. You can ask the question about whether it would have happened anyway in a communist paradise or whether Bezos gets the correct percentage of the reward but I mean, it actually is a very useful thing.

Similarly with Apple and Google and so on. These companies make things that people for the most part choose to use.


This whole argument also assumes there is something called "America's national wealth" and that it's a zero-sum game where there are X dollars to be distributed among everybody.

Capitalism is not a zero sum game, and people can choose to turn effort into wealth or they can choose to sit around and do nothing.


Yeah. I feel as if there's a (small, but growing?) group of people out there who just sort of see, ok, well, everyone isn't as well off as I think they should be, so those who are doing well must just be hoarding everything. Which really just doesn't make sense at all.

It's usually based on nothing other than pure vibes.

It could theoretically be true if, say, some billionaire decided to buy up a load of houses and leave them empty just to spite people. And while they probably could (a back-of-the-envelope calculation suggests Elon Musk's net worth is enough to offer everyone in my hometown double the market value of their house and then leave the houses to rot without even renting them out), no one actually does.


It is and has been for a while, but most of the more flashy and exciting developments in ML and AI don't have very much applicability to LHC event processing. To be able to state any kind of finding about some aspect of physics based on the scattering of particles in the accelerator and their decays in the detector, you need to take the background of all events and make multivariate discriminants on the data in order to enrich your signal as much as possible while throwing as little as possible away. This requires you to have a rigorous and verifiable statistical "paper trail" from start to finish, so you can say with confidence intervals how much signal and background you ought to have, vs how much you measure in your data after processing it. An overly broad black box doesn't really work for this kind of introspection.
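A toy illustration of the bookkeeping being described (all numbers invented; a real analysis would use full detector simulation and systematic uncertainties): every discriminant cut must come with known signal and background efficiencies, so the expected yields after selection remain traceable and can be compared to data with a stated uncertainty.

```python
import math

# Invented expected event counts before the cut, and the cut's efficiencies.
n_sig_expected, n_bkg_expected = 50.0, 10_000.0
eff_sig, eff_bkg = 0.80, 0.01  # fraction of events passing the discriminant

sig_after = n_sig_expected * eff_sig   # 40 expected signal events
bkg_after = n_bkg_expected * eff_bkg   # 100 expected background events
total_expected = sig_after + bkg_after

# Poisson statistical uncertainty on the expected total count.
sigma = math.sqrt(total_expected)
print(f"expect {total_expected:.0f} +/- {sigma:.1f} events after selection")
```

The point is that every factor in `sig_after` and `bkg_after` is documented and verifiable, which is exactly the "paper trail" an opaque black-box classifier makes hard to produce.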


I wouldn't write it off as a bubble, since that usually implies little to no underlying worth. Even if no future technical progress is made, it has still taken a permanent and growing chunk of the use case for conventional web search, which is an $X00bn business.


A bubble doesn't necessarily imply no underlying worth. The dot-com bubble hit legendary proportions, and the same underlying technology (the Internet) now underpins the whole civilization. There is clearly something there, but a bubble has inflated the expectations beyond reason, and the deflation will not be kind on any player still left playing (in the sense of AI winter), not even the actually-valuable companies that found profitable niches.


Clicking the link to the comments on that post takes you to this comment section, so I think there's some by-hand deduping/merging going on.


Both situations involve a huge amount of water flowing downstream. That needs to be replenished either through rainfall or through pumping.


See this >15-year-old video, "How to get featured on YouTube": https://www.youtube.com/watch?v=-uzXeP4g_qA. I remember it being originally uploaded to the official YouTube channel, but it looks like that's been removed now; this reupload is from October 2008.

