Maybe. I'm not sure it's that different, though. If one person can do the work of two because of power tools, then why keep both? Same with AI. How people feel about it doesn't seem relevant.
Maybe the right example is the role of tractors in agriculture. Prior to tractors you had lots of people doing the work, or maybe animals. But tractors and engines eliminated a whole class of labor. You could still till a field by hand or with a horse if you wanted, but it's probably not commercially viable.
First, creating power tools didn’t cause mass layoffs of carpenters and construction workers. There continued to be a demand for skilled workers.
Second, power tools work with the user's intent. The user does the planning, the measuring, the cutting, and all the other activities of building. They might choose to use a dovetail saw instead of fasteners to make a joint.
Third, programming languages are specifications given to a compiler, which generates the actual code. A single programmer can already scale to many more customers than a labourer using tools.
The classification of centaur vs reverse-centaur tools came to me by way of Cory Doctorow.
There might be ways to use the technology that don't make us into reverse centaurs, but we haven't discovered them yet. What we have in its current form isn't a tool.
Did power tools not cause layoffs? That seems like a dubious claim to me. Building a house today takes far fewer people than it did 100 years ago. It seems unlikely that all the extra labor found other things to do in construction.
They should have known better. It was their job to sell the box. Instead they wasted a tonne of their client's money on a proof-of-concept for something that was never going to work. Using the word 'impossible' was probably also a big error. If it can perform computations, nothing is impossible, but some things are certainly not recommended.
They probably said something like "not possible with current hardware speeds", which got translated to "impossible" since the recollection is second-hand.
I think this is partly an education problem, and partly an industry culture problem. Lots of young developers are incentivized to 'contribute' to open-source as a way to demonstrate that they can actually write software. So open-source becomes a way of signalling competence when at a broader scale it's just extracting wealth from the vulnerable.
Open-source seems to be fragmented into three groups now: large enterprise open-source like Kubernetes or OpenStack, where the license seems more like a legal agreement amongst vendors not to sue each other; legacy open-source projects that are getting by on brand recognition and sheer willpower; and a whole bunch of noise from people who are looking to leverage open-source into a job of some sort.
As people get more comfortable with AI, I think what everyone is noticing is that AI is terrible at solving problems that don't have large amounts of readily available training data. So, basically, if there isn't already an open-source solution available online, it can't do it.
If what you're doing is proprietary, or even a little bit novel, there is a really good chance that AI will screw it up. After all, how can it possibly know how to solve a problem it has never seen before?
Because developers are incentivized to have marketable software skills, not marketable "build things that are cheap and profitable" skills.
Moore's law was supposed to make it simpler and cheaper to do more computationally expensive tasks. But in the meantime, everyone kept inflating the difficulty of a task faster than Moore could keep up.
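A toy model of that gap, with made-up rates (the 2x-every-2-years figure is the usual Moore's law shorthand; the 1.5x-per-year difficulty inflation is purely an assumption for illustration):

    # Toy model (illustrative rates, not measured data): hardware
    # throughput doubling every 2 years vs. task "difficulty"
    # inflating by an assumed 50% per year.
    for t in range(0, 11, 2):
        hardware = 2 ** (t / 2)    # Moore's law shorthand: 2x every 2 years
        difficulty = 1.5 ** t      # assumed inflation: 1.5x every year
        print(f"year {t:2d}: hardware {hardware:6.1f}x, difficulty {difficulty:6.1f}x")

With those made-up rates, hardware gains about 32x over a decade while the task inflates about 58x: both grow exponentially and the gap still widens.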
I think some of this is because of the incredible amounts of capital that startups seem to be able to acquire. If startups had to demonstrate profitability before they were given any money to scale, the story would be very different.
The problem with Wikipedia as an academic source is that it's impossible to cite reliably. You have no idea whether the information there today will be there tomorrow, or was there yesterday.
It's a shame that Fleming misremembered his process of discovery and created a myth of accidental discovery.
I like the Root-Bernstein narrative more: that in the monotonous execution of routine experiments for something unrelated, an unusual observation 'forced' them to discover penicillin's antibacterial properties.
Not an accidental discovery by good fortune in the serendipitous sense, but an accidental discovery born of a brute-force, exhaustive search. The narrative of 'we spent months meticulously examining hundreds of samples' is less romantic, but it's one that supports the importance of funding scientific inquiry.
We won't make progress by hoping people leave culture plates out on window sills. We make progress when we fund meticulous, exhaustive efforts of discovery.
A large part of that training is done by asking people if responses 'look right'.
It turns out that people are more likely to think a model is good when it kisses their ass than when it has a terrible personality. This is arguably a design flaw of the human brain.
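For anyone curious what 'looks right' becomes mechanically, here is a minimal sketch of the usual pairwise-preference setup (Bradley-Terry style reward modelling); the function and the numbers are illustrative, not any particular lab's implementation:

    import math

    # The labeller is shown two responses and picks the one that "looks
    # right". The reward model is trained so its score gap predicts that
    # choice: loss = -log sigmoid(r_chosen - r_rejected).
    def preference_loss(r_chosen, r_rejected):
        return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

    print(preference_loss(1.2, 0.7))  # model agrees with the labeller: small loss
    print(preference_loss(0.3, 1.0))  # model disagrees: larger loss

Nothing in that signal distinguishes 'genuinely better' from 'more flattering', which is how the ass-kissing gets baked in.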