Radiology seems like a good field for AI because it's easier to see what automation would look like in practice: an MRI or CT scan directly produces data that can be fed to an AI. How well this actually works, I have no idea.
They probably have this data. But the data is most definitely heavily polluted: fake job listings, jobs posted to several sites, sites that aggregate job listings, etc. This data is probably more polluted by AI than AI has influenced job openings itself.
Or it could have been internal politics. Amazon top management set the goal to cut 30k, and after internal discussions with lower management they came up with 14k positions that could actually be cut.
I always overestimate how much I can do in one day and I underestimate how much I can get done in 100 days (with the caveat that I have to work on it consistently).
> Basically an LLM translating across languages will "light up" (to use a rough fMRI equivalent) for the same concepts (e.g. bigness) across languages.
That doesn't seem surprising at all. My understanding is that transformers were invented exactly for the application of machine translation. So concepts from different languages must be grouped together. That was originally the whole point, and it then turned out to be very useful for broader AI applications.
I recently talked to someone who works at a company that builds fairly complicated machinery (induction heating for a certain kind of material processing). He works in management, and they did a week-long workshop with a bunch of the managers to figure out where AI would make their company more efficient. What they came up with was that they could feed a spec from a customer into an AI, and the AI would create the CAD drawings, wiring diagrams, software, etc. by itself. And they wrote a report on it.

I just had to break it to him: the thing AI is actually best at is replacing these week-long workshops where managers bs around to write reports. Also, it shouldn't be the managers deciding top-down where to deploy AI. Get the engineers, technicians, programmers, etc. together and have them make plans for where to use AI, because they are probably already experimenting with it and understand where it works well and where it doesn't quite cut it yet.
Any example? Unless it was completely racist, I feel like a lot of people have been complaining about H-1Bs for many years without any problems, including here on Hacker News.