Not a great use case for Claw really. I'm sure ChatGPT can one-shot a Python script to do this with yt-dlp and give you instructions on how to set it up as a service.
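For reference, the one-shot part really is small. A minimal sketch using yt-dlp's Python API (assuming `pip install yt-dlp`; the output directory and CLI handling are made-up placeholders):

    # Minimal sketch: save a video with yt-dlp's Python API.
    import sys
    from yt_dlp import YoutubeDL

    def download(url: str, out_dir: str = "/srv/videos") -> None:
        # Save as "<title>.<ext>" inside out_dir.
        opts = {"outtmpl": f"{out_dir}/%(title)s.%(ext)s"}
        with YoutubeDL(opts) as ydl:
            ydl.download([url])

    if __name__ == "__main__":
        download(sys.argv[1])

Everything beyond this (the service setup, restarts, a bot front end) is the "instructions" part.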
Yeah, it’s all the stuff beyond the one-shotting of the script that makes it useful though.
You just get the final result: the video you requested, saved.
No copy pasting, no iterating back and forth due to python version issues, no messing around with systemd or whatever else, etc.
Basically the difference between a how-to doc handing you instructions and a list of tools to download and install, versus just having your junior sysadmin handle it and hand it off after testing.
These are miles apart in my mind. The script is the easy part.
ChatGPT can do it w/o draining your bank account etc. I’d agree…
But on speed alone, I think it's "your idea but worse" when the result is a script AND instructions on how to do something else yourself. The Signal/Telegram bot will handle it end to end (maybe using a ton more tokens than a web chat, but fast), if I'm not mistaken.
I mean that’s sort of where I think this all will land. Use something like happy cli to connect to CC in a workspace directory where it can generate scripts, markdown files, and systemd unit files. I don’t see why you’d need more than that.
That cuts 500k LoC from the stack and leverages a frontier tool like CC.
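For illustration, the systemd side is equally small. A minimal sketch of a unit file CC might generate (the unit name, script path, and user are all hypothetical):

    # /etc/systemd/system/ytdl-bot.service (hypothetical name and paths)
    [Unit]
    Description=yt-dlp bot generated in the CC workspace
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/bin/python3 /home/me/workspace/ytdl_bot.py
    Restart=on-failure
    User=me

    [Install]
    WantedBy=multi-user.target

Then `systemctl daemon-reload && systemctl enable --now ytdl-bot` and it's running.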
That's basically what you describe, I think. I've been using it for the past two days. It's very, very basic, but I think it gives you everything you actually need: sort of the minimal open claw, without a custom harness and 5k or 50k LoC or whatever. The cool thing is that it can just grow naturally, and you can audit it as it grows.
With regard to Tether none of this is applicable as they aren't compliant with the GENIUS Act. They are in fact attempting to launch a totally separate stablecoin to try to get some of that market: https://www.reuters.com/sustainability/boards-policy-regulat...
Right, and who is looking at it anyway? Let's not kid ourselves: no one at the SEC will be enforcing the "genius" act. Does anyone realistically think otherwise?
If the goal really is to increase demand for Treasuries by means of stablecoins, then I would expect them to enforce this. If the stablecoins aren't really buying the assets, then they do nothing for demand.
If another goal is to enrich the Trump family, then the SEC could forgo enforcement on the World Liberty Financial stablecoin. But they could still enforce the act for everyone else.
Increasing demand for treasuries, thus keeping interest rates down, also directly benefits Trump because he's bought at least $100 million in bonds since becoming president.
No, it really isn't. Is Jackson Pollock entirely directing each drop of paint, or is there some inherent randomness that is being guided and directed at a higher level? There's a clear analogy in digital art, where a continuum runs from traditional digital art tools -> algorithmic generative art -> LLM-generated art, with varying levels of direct control along the way.
Is your gripe purely that text is involved in the middle? If a paralyzed man painted by verbalizing commands -- left, right, up, down -- that "instructed" a simple machine to move the paintbrush, then by your definition he would simply be "instructing" a machine to "generate" the painting.
That would only make sense if they had a lot of net worth, which is clearly not the case. If they have $400 in their checking account and a ton of student loans or other debt, inflation is actually great for them.
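Back-of-the-envelope, with made-up numbers ($400 in cash, $30k of fixed-rate debt, 8% inflation over a year):

    # Toy example (hypothetical numbers): inflation erodes the real value
    # of fixed-rate debt far more than it erodes a small cash balance.
    cash = 400.0        # checking account
    debt = 30_000.0     # fixed-rate student loans
    inflation = 0.08    # 8% over one year

    real_loss_cash = cash - cash / (1 + inflation)    # ~$29.63 lost
    real_debt_erased = debt - debt / (1 + inflation)  # ~$2,222.22 erased
    print(f"cash loses ~${real_loss_cash:.2f} in real terms")
    print(f"debt shrinks ~${real_debt_erased:.2f} in real terms")

The $30 hit to their savings is dwarfed by the real shrinkage of the debt.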
There was a somewhat similar search for these duplicate galaxies as evidence for a universe with positive curvature, because in that case, if you look deep enough, you'll see more images of the same galaxies, although further back in time and possibly shifted by the cosmic structure in the way you're describing. It didn't pan out, obviously.
I've got a site that is basically an infinite scroll of mostly YouTube, SoundCloud, and Reddit embeds, and I had to do this for YouTube for it to even be functional. I'm using the YouTube-provided thumbnails, though, since I'm not too concerned about tracking.
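For anyone wanting to do the same, the pattern is simple. A sketch of the server-side piece in Python, emitting a thumbnail placeholder instead of the iframe (the `yt-lazy` class is made up; the i.ytimg.com URL scheme is YouTube's standard thumbnail endpoint; a few lines of client-side JS, not shown, swap in the real embed iframe on click):

    # Sketch: render a click-to-load placeholder instead of a YouTube iframe.
    def youtube_placeholder(video_id: str) -> str:
        thumb = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
        return (
            f'<div class="yt-lazy" data-video-id="{video_id}">'
            f'<img src="{thumb}" loading="lazy" alt="YouTube video">'
            f"</div>"
        )

No YouTube JS loads until someone actually clicks a video.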
Please show more respect for your users, as is appropriate for a webmaster. If you personally do not mind tracking, OK, but please do respect that a visitor to your website might have different opinions. Thank you!
This is not something they should have to care about. And once you leave the tech-savvy community, people cannot take care of it via content blockers... You should do the correct thing by default, which is not supporting adtech's psychological warfare against the population, imho.
Is he really trying to say that AMD had a superior product in the Core 2 Duo era and Intel was only dominating due to marketing? It's hard to take any of the rest of his opinions seriously when he starts with that take.
I'm not sure if you're familiar with CPU history, but this is roughly true.
Intel's catch-up to multicore offerings was rocky, and it severely lagged behind AMD.
I think it's often forgotten that CPU leadership has fluctuated between different companies many times in the past!
I'm quite familiar, as I worked for Intel for over a decade as an engineer. It's absolutely true that leadership has fluctuated a lot, but the 2003-2010 era had fairly clear-cut leaders for each generation. AMD was the choice for just about everything through the Athlon 64 single-core era, but the Core 2 Duo run relegated them, for a long time, to superiority only at the very bottom end of the market.
Core 2 as an individual core was significantly better than AMD's competing core (e.g. by being able to issue 4 simultaneous instructions vs. 3 instructions for AMD).
Nevertheless, the integration of multiple cores into an Intel multiprocessor was very inefficient before Nehalem: the cores competed for a shared bus, which prevented them from ever reaching their maximum aggregate throughput. The AMD multiprocessors, by contrast, had inherited the DEC Alpha structure, with separate memory links and peripheral interfaces and an interconnection network between cores, like all CPUs use now.
However this was noticeable at that time mostly in the server CPUs and much less in the consumer CPUs, as there were few multithreaded applications.
Core 2 still lagged behind AMD's cores for various less mainstream applications, like computations with big integers.
Only two generations later, after Core 2 and Penryn, with Nehalem (the first SKU at the end of 2008, but the important SKUs in 2009), Intel became able to either match or exceed AMD's cores in all applications.
Thanks for the color! From the article you linked, it looks like the Twitter thread is quite misleading in claiming that Intel simply slapped two cores together to achieve superiority over AMD. Your article notes a big process improvement (65nm vs 90nm) which allowed for 2x the transistors on a smaller die size along with faster clock and lower memory latency. Curious to get your take.
Intel's 90 nm CMOS process was a disaster, at least in its variant for desktop or server CPUs, all of which had an unbelievably high leakage power consumption (the idle power consumption of a desktop could be more than half of its peak power consumption).
On the other hand, AMD's 90 nm CMOS process was excellent.
With its 65 nm process, Intel recovered its technological leadership, but that was not the most important factor in its success, because AMD's 65 nm process was also OK and became available within a few months of Intel's.
AMD lost because they did not execute the design of their new "Barcelona" generation of CPUs well (also made in 65 nm, like Core 2). While Intel succeeded in delivering Core 2 even earlier than its normal cadence for new CPU generations, AMD launched Barcelona only after several months of delays, and even then it was buggy. The bugs required microcode workarounds that made Barcelona slow in comparison with Core 2, and that started the decline of AMD after a few years of huge superiority over Intel.
The benchmarks for all these CPUs that my personal viewpoint is based on are all out there. AnandTech was my favorite source at the time, due to its relatively detailed testing and a clear understanding of the implications of architecture decisions. The complete history of their contemporaneous reviews is still online, and userbenchmark.com has independent data on these older CPUs as well, although obviously with less control over potential mitigating factors.
AMD was struggling to release CPUs that were competitive against year-old Intel Core 2 Duos, and that remained the status quo through their Bulldozer architecture. Things started turning around with Ryzen, when a combination of architecture improvements and typical workloads taking more advantage of multicore flipped the script.
The bits about "true" multicore are also sketchy, considering Bulldozer shared L2, fetch/decode, and floating-point hardware within each module and called a module two "cores" for marketing purposes.
K7/K8 were great, and while the follow-on K10 Athlon II/Phenom/etc. were definitely not bad, they weren't great, and they were competing against Conroe/Core 2 onwards. That kind of tag-team trading of places highlights how (mostly) good the CPU market is now: both AMD and Intel are putting out some really nice products with enough variety that you can pick the most suitable one for you, but there's no default "just pick [company]".
AMD did become at least competitive in high-end CPUs with the original Athlon or Athlon XP. Not sure whether they were faster than the Pentium III, but they weren't trailing.
So perhaps a bit more than a couple of years, but my impression is also that they fell behind on (single-thread) performance for a long time after that.
I've also understood that, in more ancient history, AMD CPUs sometimes beat contemporary Intel parts in performance, although they released their parts later than Intel did. I'm not sure that's relevant to any remotely recent developments anymore, though.
The OP is right. The Pentium D was a single generation in which Intel's offering was worse than the Athlon 64 X2. But Intel quickly shifted to the Core 2 Duo architecture, and it was much better than AMD's.
From the introduction of Opteron at the beginning of 2003 until the introduction of Core 2 in mid-2006, the AMD CPUs were vastly superior to any kind of Pentium 4, not only to the Pentium D.
This was much more obvious in servers and workstations than in consumer devices, because the kinds of applications run by non-professionals at that time were much more sensitive to the high burst speeds offered by Pentium 4's very high clock frequencies than to the higher sustained performance of the AMD CPUs.
In 2005, I had both a 3.2 GHz Pentium 4 (Northwood, 130 nm) and a 3.0 GHz Pentium D (Prescott, 90 nm). With either of them, compiling a complete Linux distribution from sources took almost 3 days of continuous, 24-hours-per-day work.
After I bought an Athlon X2 at only 2.2 GHz, the time for the same task dropped to much less than a day. Even for some single-threaded tasks, ones that contained many operations that were inefficient on the Pentium 4, like integer multiplications or certain kinds of floating-point operations, the 2.2 GHz AMD CPU was several times faster than the 3.2 GHz Pentium 4.
At work, the domination of the AMD CPUs was even greater. Each server with Opteron CPUs that we bought was faster than several big racks of Sun or Fujitsu servers that were used before. Intel did not have anything remotely competitive. At the beginning of 2006, on my laptop with an AMD Turion, I could run professional EDA/CAD programs much faster than on the big Sun servers normally used for such tasks. Intel had nothing like that (i.e., the 32-bit Intel CPUs could not use enough memory to even run such programs, so the question of whether they could have run them fast enough was irrelevant).
Of course, half a year later the competition between Intel and AMD looked completely different.
If I understand correctly, this is what they want to do eventually (delete the info used to create the iris hash), but it isn't actually what they are doing. The impression is that once they have their algorithm "perfect" and never need to retest on source data, they'll go back and delete all the data they have stored, but who really trusts that will happen?