Not always. You can improve the loop by putting something real inside it: a code execution tool, a search engine, a human, other AIs, or an API. As long as the model can make use of that external environment, its data can improve. By the same logic, a human isolated from other humans for a long time might also end up going crazy.
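Concretely, "putting something real inside the loop" can be as simple as this. A minimal sketch, where `generate` stands in for whatever LLM API you use and the `TOOL:` convention is made up for the example:

```python
import subprocess

def run_python(code: str) -> str:
    """Actually execute proposed code and return its real stdout/stderr."""
    proc = subprocess.run(["python", "-c", code],
                          capture_output=True, text=True, timeout=10)
    return proc.stdout + proc.stderr

def grounded_loop(generate, task: str, max_turns: int = 5) -> str:
    """generate: any callable str -> str wrapping an LLM API (stand-in)."""
    transcript = task
    reply = generate(transcript)
    for _ in range(max_turns):
        # Made-up convention for this sketch: the model requests execution
        # by prefixing its reply with "TOOL:".
        if not reply.startswith("TOOL:"):
            break  # no tool request: the model is done
        feedback = run_python(reply[len("TOOL:"):])
        # Fresh, external information enters the loop here, instead of the
        # model only feeding on its own previous output.
        transcript += "\n" + reply + "\nTool output:\n" + feedback
        reply = generate(transcript)
    return reply
```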
Practical example: using LLMs to create deep research reports. It pulls over 500 sources into a complex analysis, and after all that compiling and contrasting it generates an article with references, like a wiki page. That text is probably superior to most of its sources in quality. It does not trust any one source completely; it does not even pretend to present the truth; it only summarizes the distribution of information it found on the topic. Imagine scaling Wikipedia 1000x by deep-reporting every conceivable topic.
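In pipeline terms, the shape is roughly this (a sketch only; `search`, `summarize`, and `synthesize` are stand-in callables, not any particular product's API):

```python
def deep_report(topic, search, summarize, synthesize, n_sources=500):
    """search/summarize/synthesize are stand-ins, not a real API."""
    sources = search(topic, limit=n_sources)          # fan out over the web
    notes = [(s["url"], summarize(s["text"]))         # keep every citation
             for s in sources]
    # The last pass trusts no single source; it writes up the distribution
    # of claims across all notes, with references, like a wiki page.
    return synthesize(topic, notes)
```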
Yes, I think Plex was an XBMC fork, and Kodi is the new name of XBMC. Jellyfin forked from Emby, I think when Emby became closed source. I never used Emby. Plex always seemed to cost money in confusing ways, and that turned me off. My initial TV setup just used NFS shares on a Unix machine and a Netgear NeoTV box (~2009), but eventually the codec support was too poor, so I moved to XBMC on the Shield, and then a number of years later to a Jellyfin server on Linux with the Jellyfin client on the Shield.
Plex used to routinely offer a lifetime pass for like $80 at Black Friday until recently, so that was just an obvious decision if you even remotely used it. If you're only planning on using it on your local network, you don't need to pay for anything.
Yeah, I'm back to torrenting too. I was more than happy to pay for Prime, Netflix, and maybe Apple TV+ a few times a year, but now they expect me to pay for HBO & Crave & x & y & z & a... I might as well get a cable package.
The funny thing is, between a NAS & a monthly VPN subscription & Usenet subscriptions, I probably could have paid for all those streaming services for a few years :D
Having figured it out myself, I agree. And it's not obvious that you need both a Usenet _indexer_ (who tells you what content is available) and a Usenet provider (who actually serves you the content).
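The split shows up as two completely different requests. A minimal sketch of the indexer half, assuming a Newznab-style API (the de facto indexer standard); hostname and key are placeholders:

```python
# Step 1 of 2: ask the INDEXER what exists. It's just a search engine
# speaking a Newznab-style HTTP API; it returns pointers (.nzb files),
# not content. Hostname and API key below are placeholders.

import requests

resp = requests.get("https://indexer.example/api", params={
    "t": "search",            # Newznab search operation
    "q": "some show s01e01",  # what you're looking for
    "apikey": "INDEXER_KEY",
})
resp.raise_for_status()
# The results point at .nzb files; an NZB is just a list of message-IDs
# that a separate Usenet PROVIDER (step 2) will serve over NNTP.
```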
FWIW (and I'm not sure if this is against the rules here), I use newsgeek for the former and giganews for the latter. Both are paid services but reasonably priced, imo. When I can find something on Usenet, it typically downloads at speeds > 10 MB/s, vs. torrenting, which can exceed that but is usually much slower.
You can use whatever client you want. I run the *arr stack mentioned elsewhere in this thread as well, and SABnzbd is the recommended download client there.
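For a flavor of how the pieces connect: the *arr apps push downloads into SABnzbd through its HTTP API, something like this (host, port, NZB URL, and key are placeholders):

```python
# Queue an NZB in SABnzbd the same way an *arr app does: one HTTP call
# to SABnzbd's API. SABnzbd then handles the NNTP download, repair,
# and unpacking. URL, port, and API key are placeholders.

import requests

resp = requests.get("http://localhost:8080/sabnzbd/api", params={
    "mode": "addurl",                                  # add an NZB by URL
    "name": "https://indexer.example/getnzb/abc.nzb",  # the NZB to fetch
    "apikey": "SAB_API_KEY",
    "output": "json",
})
print(resp.json())
```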
Between you and your provider, the downloads actually go over NNTP (usually wrapped in TLS) rather than HTTP. The distribution of content between the Usenet providers is over that same Usenet protocol, which predates HTTP and the WWW.
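If you want to see the protocol itself, the client-to-provider exchange looks roughly like this; a minimal sketch with a placeholder host, credentials, and message-ID:

```python
# The client<->provider conversation is NNTP (RFC 3977), typically over
# TLS on port 563. Host, credentials, and message-ID are placeholders.

import socket, ssl

ctx = ssl.create_default_context()
with socket.create_connection(("news.example.com", 563)) as raw:
    with ctx.wrap_socket(raw, server_hostname="news.example.com") as s:
        f = s.makefile("rwb")
        print(f.readline())                  # b"200 ..." server greeting
        f.write(b"AUTHINFO USER someuser\r\n"); f.flush()
        print(f.readline())                  # b"381 password required"
        f.write(b"AUTHINFO PASS somepass\r\n"); f.flush()
        print(f.readline())                  # b"281 authentication accepted"
        # Fetch one article body by message-ID (what an NZB file lists):
        f.write(b"BODY <message-id@example>\r\n"); f.flush()
        print(f.readline())                  # b"222 ..." body follows
```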
Why does every blog nowadays read this way -- quick, witty sentences ("x wasn't the problem, it was the point", or whatever) and section headers that are full thoughts? Or is this just more AI slop?
Is this sarcasm? While it may be true that my mother does not know what ffmpeg is, I'm almost positive she interacts with stuff that uses it literally every single day.