Yep, if you wrote Lambda@Edge functions, which are part of CloudFront and can be used for authentication among other things, they can only be deployed to us-east-1.
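For anyone who hasn't touched them, a viewer-request handler doing basic auth looks roughly like this. A minimal sketch, assuming the Node runtime and the usual aws-lambda type definitions; the credentials and names are placeholders, not anything real:

```typescript
import type { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";

// Placeholder credentials for the sketch; in practice you'd bake in a hash
// or fetch from something the edge can reach cheaply.
const EXPECTED = "Basic " + Buffer.from("admin:hunter2").toString("base64");

export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequestResult> => {
  const request = event.Records[0].cf.request;
  const auth = request.headers["authorization"]?.[0]?.value;

  if (auth !== EXPECTED) {
    // Short-circuit with a 401 at the edge instead of forwarding to the origin.
    return {
      status: "401",
      statusDescription: "Unauthorized",
      headers: {
        "www-authenticate": [{ key: "WWW-Authenticate", value: "Basic" }],
      },
    };
  }

  // Credentials matched: let the request continue to the origin.
  return request;
};
```

The function itself has to be created in us-east-1; CloudFront replicates it to the edge locations when you attach it to a distribution behavior.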
> This utterly baffles me. [checks] The post isn't 25y old. Author is obviously intelligent and possesses self-awareness and analytical skills.
One lens on this is that according to him he hasn't sold a single share since he left the company. That would mean he has a substantial monetary reason to see that people keep believing in HP.
It wasn't the Itanium people so much as the industry analysts who follow such things. And, yes, they (including myself) were spectacularly wrong early on, but, hey, it was Intel after all, an AMD alternative wasn't even a blip on the radar, and 64-bit chips were clearly needed. I'm not sure there was any industry analyst--and I probably bailed earlier than most--who was saying this was going to be a flop from the earliest days.
> an AMD alternative wasn't even a blip on the radar
Aside from it not being 64-bit initially, uh... did we live through the same time period? The Athlons completely blew the Intel competition out of the water. If Intel hadn't heavily engaged in market manipulation, AMD would have taken a huge bite out of their market share.
In the 64-bit server space, which is really what's relevant to this discussion, AMD was pretty much not part of the discussion until Dell (might have been Compaq at the time) and Sun picked them up as a supplier in the fairly late 2000s. Yes, Intel apparently played a bunch of dirty pool, but that was mostly about the desktop at the time, which the big suppliers didn't really care about.
But initial Opteron success was pretty much unrelated to 64-bit. As a very senior Intel exec told me at the time, Intel held back on multi-core because their key software partner was extremely nervous about being forced to support a multi-core world.
I'm well aware of Opteron's impact. In fact, the event where that info was related to me was held partly to scare the hell out of Intel sales folks. But 64-bit wasn't really part of the equation. It was a long time ago and I'm not really disposed to dig into timelines. But multi-core was an issue for Intel before they were forced to respond to AMD's 64-bit extensions to x86 with Yamhill.
> As a very senior Intel exec told me at the time, Intel held back on multi-core because their key software partner was extremely nervous about being forced to support a multi-core world.
That's one way to explain it. Alternatively, one might say that FSB-based NetBurst servers would not have benefited much from multi-core because the architecture (and especially the FSB) had hit its limits. Arguably, Intel had no competitive product in the mass server market until 2006 and the introduction of the Core-based Xeon 5100. Only enormous market inertia kept them afloat.
> In the 64-bit server space, which is really what's relevant to this discussion, AMD was pretty much not part of the discussion until Dell (might have been Compaq at the time) and Sun picked them up as a supplier in the fairly late 2000s.
That was one relatively small (in terms of server numbers) segment of the market. The introduction of Opteron servers and 64-bit Windows Server 2003 created a new segment of mass-market 64-bit servers, which very quickly took over the entire (at that time 32-bit) mass server market. That was the real market Intel wanted for itself with the introduction of the proprietary Itanium, but it failed to capture it because of the compatibility issue. The high-end, mainframe-adjacent market segment did indeed belong to Itanium for many years afterward, but that wasn't the goal of Itanium. Intel wanted a monopoly on the entire PC and server market with no cross-licensing agreements, but failed and had to cross-license AMD64 instead.
It’s understandable why companies try and sometimes succeed at creating a reality distortion field about the future success of their products. Management is asking Wall Street to allow them to make this huge investment (in their own salaries and R&D empire), and they need to promise a corresponding huge return. Wall Street always has opportunities to jack up profits in the short term, and management needs to tell a compelling story about ROI that is a few years in the future to convince them it’s worth waiting. Intel also wanted to encourage adoption by OEMs and software companies, and making them think that they needed to support Itanium soon could have been a necessary condition to make that a reality.
I don’t know what factors would make IEA underestimate solar adoption.
> I don’t know what factors would make IEA underestimate solar adoption.
The IEA is an energy industry group from back in the days where "energy" primarily meant fossil fuels (i.e. the 1970s), and they've never entirely gotten away from that mentality.
There are trillions of dollars on the line in convincing people not to buy solar panels or other renewable sources.
Remember all the conspiracy theories about how someone invented a free energy machine and the government had to cover it up? Well they're actually true - with the caveat that the free energy machine only works in direct sunlight.
How often are they reality distortion fields vs leadership trying to put on a face to rally the troops and investors? How do you do the second without the first?
Something I ponder from time to time, while trying to figure out how to be less of a cynic and more of a leader.
> Management is asking Wall Street to allow them to make this huge investment (in their own salaries and R&D empire), and they need to promise a corresponding huge return. Wall Street always has opportunities to jack up profits in the short term, and management needs to tell a compelling story about ROI that is a few years in the future to convince them it’s worth waiting
Explain Amazon, Uber, Spotify, Tesla, and other publicly listed businesses that had low or even negative profit margins for many years.
The idea that Wall Street only rewards short term profit margins is laughable considering who is at the top of the market cap rankings.
One thing I found amazing about the IEA chart is how similar the colors for each year were, making it very difficult to see which year was which. The gist of the chart was still clear, though.
So a 145% tariff on high-tech goods will hurt the US too much? China should ban high-tech exports to the US. That's gonna hurt both sides, but the war was already started by Trump.
I apologize for my erratic behavior which has tarnished our brand and created unnecessary turmoil within our organization. Regrettably, we will need to implement a 16% reduction in headcount to address the financial challenges we now face. I have decided to step aside and hand over control to my deputy, who I believe will provide the steady leadership needed to rebuild trust and restore our company's vision.
He'll never admit he was wrong or step down. He'll drive Automattic into the ground and Wordpress along with it (until someone forks it, like say...WP Engine, heh. Or Redhat, or IBM, or some huge web design firm, etc.)
He considers Wordpress "his" even though...he took it over from the original author who was abandoning the project.
It reminds me of the rage-bender Jamie and Jim Thompson went on, attacking OPNsense for "stealing" their work and doing a lot of immature things like taking over OPNsense's domain, their subreddit, etc. via legal actions. And at least one lawsuit. They lost on every front: Reddit gave the subreddit back to the OPNsense developers, ICANN gave them back their domain name, etc.
Attacking OPNsense for "stealing" pfSense was pretty rich given pfSense's origin: Netgate slapped their logo on m0n0wall and started working on their fork. Which is exactly what OPNsense did that enraged them...
Especially as the pfSense software started getting more user-hostile and shifting functionality into the paid versions, pfSense has rapidly become less and less popular. I almost never see anyone recommend it anymore.
> He considers Wordpress "his" even though...he took it over from the original author who was abandoning the project.
Non Wordpress user here, not a blogger, don’t use CMSs. Curious about this line. Reading the history on Wikipedia, the original b2 was the precursor. It was pretty small and being abandoned. Matt proposed forking it in January 2003, and worked with Mike to bring the first version to fruition a few months later. 22 years later it’s a goliath.
Given that history it seems totally fair for Matt to consider WP his thing. You don’t seem to think so, can you explain?
I have a bright red m0n0wall firewall on the shelf above my desk. Nothing to add here, just it's very rare that I run across anyone who might think it's cool, or even know what it is.
I think it's smart to start with a high-level language, which should reduce development time, prove the worth of the application, and then switch to a lower-level language later.
What was that saying again? Premature optimisation is the root of all evil
A thread going into what Knuth meant by that quote, which is usually shortened to "premature optimization is the root of all evil". Or, to rephrase it: don't tire yourself out climbing for the high fruit, but don't ignore the low-hanging fruit either. But really, I don't even see why "scripting languages" are the particular "high level" languages of choice. Compilers nowadays are good. No one is asking you to drop down to C or C++.
I think that early in development you should be able to spam a lot of hypotheses, quickly test them, and check how people interact with your software. Whether your software makes sense is more important than whether it's fast.
People are also highly unpredictable, so it is usually a matter of trial and error; very often their feedback may completely erase wide sets of assumptions you were building your product around.
It's borderline impossible to do that on a mature product, but rewriting a mature product in something faster is not borderline impossible; it's just very hard.
Note that this doesn't apply if you're just programming something in accordance with an RFC, where everything is predefined.
I think a lot of people are running on facts that are 10 to 25 years out of date. There was a time when the scripting languages had a very, very large step up in prototyping capability, because the static languages of the time were frankly terrible.
But the static languages have changed, a lot, for the better since then. I now find that when I'm greenfielding something, if I have even a clue how I want to structure it overall, static languages end up being faster somewhere around a week into the development process. Dynamic languages are superficially easier to refactor, but the refactorings tend to take the form of creating functions that take more and more possible inputs, and this corrodes the design over time. Static programs stay working the whole time, and I can easily transform the entire program to take some parameter differently and get assurance I'm not missing a code path.
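To make that last point concrete, here's a contrived TypeScript sketch (types and names invented for illustration): change a signature and the compiler walks you to every call site.

```typescript
interface Invoice {
  id: string;
  totalCents: number;
}

// Originally this took just an id; suppose we refactor it to take the
// whole Invoice so it can log the amount too.
function archiveInvoice(invoice: Invoice): void {
  console.log(`archiving ${invoice.id} (${invoice.totalCents} cents)`);
}

// Every stale call site now fails to compile:
//   archiveInvoice("inv-42");
//   error: Argument of type 'string' is not assignable to parameter of type 'Invoice'.
// so tsc points you at each code path that still needs updating.
archiveInvoice({ id: "inv-42", totalCents: 1999 });
```

In a dynamic language that old call site just keeps running until it blows up at runtime, or never blows up and quietly does the wrong thing.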
I personally actively avoid dynamic languages for initial development now, for anything that is going to be over a week in size. The false economies are already biting by that point and it gets progressively and generally monotonically worse over time.
This comes from someone who was almost 100% dynamic scripting languages for the first 15 years of my career. It's not from a lack of understanding of dynamic scripting languages used at scale.
> static languages end up being faster somewhere around a week into the development process
And when you factor in LLMs being ridiculously good at scaffolding basic apps, the time to reach that turning point will continue to decrease. It takes me time to write out test harness boilerplate, or to make a nice dev/staging environment configuration. It is why many languages come with a `mylang create proj` command-line tool to automate a basic project. But the custom scaffolding an LLM can provide will eventually beat any command-line project creation tool we can imagine.
This is one of the driving realizations of my point. I've coded in a lot of dynamic languages and a lot of static languages, and the distance between their developer experiences is shrinking drastically. I would expect a decent dynamic language expert to become productive in Go very quickly. Rust may be more difficult, but again should be totally possible for any competent programmer. Then you add on top of that the fact that they will be ramping up using an LLM that can explain the code they are looking at, that can provide suggestions on how to approach problems, that can actually write example code, etc.
And then there are all of the benefits of deploying statically compiled binaries. Of managing memory layouts precisely. Of taking direct advantage of things like SIMD when appropriate.
A spectroscope is a handheld optical device which allows you to see the light distribution coming from a light source. Should be able to pick one up for under a hundred bucks.
I've actually looked into this in the past (for measuring the spectra of LED bulbs), but they're significantly more expensive than I expected! Gratings are cheap, but digital models seem to run upwards of $2k. Any suggestions for midrange models would be appreciated!
Save money and go analog! Calibrate against a known spectrum. One example would be a sodium vapour street light, but there aren't as many around these days.
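If you photograph the spectrum, the calibration itself is a couple of lines of arithmetic. A rough TypeScript sketch, assuming you can identify two known reference lines (the mercury lines from an ordinary fluorescent lamp at about 435.8 nm and 546.1 nm work well; the pixel positions below are invented):

```typescript
interface ReferenceLine {
  pixel: number;        // measured position of the line on the sensor
  wavelengthNm: number; // known wavelength of that line
}

// Build a pixel -> wavelength mapping from two identified reference lines,
// assuming roughly linear dispersion from the grating over this range.
function makeCalibration(a: ReferenceLine, b: ReferenceLine) {
  const nmPerPixel = (b.wavelengthNm - a.wavelengthNm) / (b.pixel - a.pixel);
  return (pixel: number) => a.wavelengthNm + nmPerPixel * (pixel - a.pixel);
}

// Example: mercury lines measured at (made-up) pixels 312 and 698
// in a photo of a CFL bulb's spectrum.
const pixelToNm = makeCalibration(
  { pixel: 312, wavelengthNm: 435.8 },
  { pixel: 698, wavelengthNm: 546.1 },
);

// An unknown emission line at pixel 850 then comes out near 589 nm,
// i.e. plausibly the sodium doublet.
console.log(pixelToNm(850).toFixed(1), "nm");
```

Dispersion isn't perfectly linear, but over a modest range two reference points get you close enough to identify lines like the sodium doublet near 589 nm.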