It accounts for 3% of the economy and provides around 15 million jobs. That’s absolutely going to make a dent.
And international tourism supports local tourism. I think Las Vegas will continue to be a shell of what it was until international tourism rebounds.
BEA used to have these cool interactive tables on GDP by industry, but they’ve now been discontinued. It really feels like our current administration just does not like public data.
Edit: I do think it’s fair to say our economy is much more diversified and resilient to a drop in tourism than a country like Spain, where it’s closer to 20% of GDP.
But maybe the right way to frame it is that it wouldn’t be felt as much nationally, while drops in international tourism are pretty catastrophic to the local economies of some of our biggest cities, like New York, Miami, and Los Angeles.
How much of that 3% is from foreign tourists versus domestic Americans?
And what types of jobs are those 15 million? High paid high skilled or low pay low skilled?
Because from what I can tell you about EU tourism jobs, most of the jobs tourism creates over here are low-pay, hard-labor, unskilled jobs, mostly filled by minimum-wage migrant seasonal workers who then send the money back home. That means the biggest beneficiaries of those jobs are the wealthy land/business owners who exploit cheap migrant labor, not the local workforce, which mostly doesn't work in low-pay tourist jobs and has to deal with the gentrification and increased rents from tourism on top.
Plus, the massive black economy tourism creates, where a lot of the money is under the table and avoids the tax man, further compounds the problem. So I doubt much of the US working class will suffer from a tourism stagnation.
@HEmanZ: Did you read anything I said? Who's losing their job when almost all tourism jobs are done by foreign seasonal workers? The locals mostly aren't losing any job because they don't work in tourism due to pay and work conditions.
Are you using the same logic to cry for the western workers making clothes and sneakers who lost their jobs to Asian sweatshops? Do you think they miss that type of jobs and would want them back?
I haven't had paid access to the website since 2021, so I can't look at the primary/secondary data, but it has never failed me and doesn't have the bias that more political economic institutes have, so I mostly take data from there. If you have different data, I'll take it.
Ok, so if that labor was someone’s job, that implies they couldn’t get something better. If you’re straight eliminating those jobs, now they have to take something even worse for them (lower pay, worse hours, worse personal satisfaction, etc.).
“ For years, you’ve sat in front of a rectangle, moving tinier rectangles, only to learn that AI can now move those rectangles 10x better. As someone outside the equity class, you begin to wonder what your role is in this new paradigm. And whether rectangles were ever your ticket to happiness in the first place.”
VSCode opens single files outside of projects. What do they do? Personally I wouldn’t mind if it just defaulted to the settings of the last-used vault.
If you don't have a window open, then VSCode opens with no active workspace. There are no workspace settings at all, and there is no file tree. But since VSCode has user level settings, these are what is used, including theming/etc.
If you have a window open, the file is opened to the workspace for that window. You can see this in action because the "Trust" dialog specifically says that you're trying to open untrusted files into a trusted workspace.
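The two cases described above boil down to a simple precedence rule: workspace settings layer over user settings, and with no workspace only the user settings apply. A minimal sketch of that behavior (the function name and dict-based settings are illustrative, not VS Code's actual API):

```python
def effective_settings(user_settings, workspace_settings=None):
    """Return the settings that would apply: workspace settings
    override user settings; with no workspace, user settings win."""
    merged = dict(user_settings)
    if workspace_settings:
        merged.update(workspace_settings)
    return merged

# Single file opened with no active workspace: user-level settings apply.
print(effective_settings({"theme": "dark"}))  # {'theme': 'dark'}

# File opened into an existing window: that window's workspace wins.
print(effective_settings({"theme": "dark"}, {"theme": "light"}))  # {'theme': 'light'}
```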
I’m worried that opportunities like this to build fun/interesting software over models are evaporating.
A service just like this maybe 3 years ago would have been the coolest and most helpful thing I discovered.
But when the same 2 foundation models do the heavy lifting, I struggle to figure out what value the rest of us in the wider ecosystem can add.
I’m doing exactly this by feeding the papers to the LLMs directly. And you’re right, the results are amazing.
But more and more what I see on HN feels like “let me google that for you”. I’m sorry to be so negative!
I actually expected a world where a lot of specialized and fine-tuned models would bloom. Where someone with a passion for a certain domain could make a living in AI development. But it seems like the logical end game in tech is just absurd concentration.
I hear you. At the same time, I think we're on the cusp of a Cambrian explosion of creativity and there's a lot of opportunity. But we need to think about it differently, which is hard to do since the software industry hasn't changed much in a generation.
It wouldn't surprise me if we start to see software having much shorter shelf-lives. Maybe they become like songs, or memes.
I'm very long on human creativity. The faster we can convert ideas into reality, the faster new ideas come.
I don't think it's sanctimonious to say, hey, I don't want the technology I work on to be used for targeting decisions when executing people from the sky. Especially as the tech starts to play more active roles. You know governments will be quick to shift blame to the model developers when things go wrong.
> I don't want the technology I work on to be used for targeting decisions when executing people from the sky
One problem I have with this specific case of Anthropic/Claude working with the DOD is that I feel an LLM is the wrong tool for targeting decisions. Maybe given a set of 10 targets an LLM can assist with compiling risk/reward and then prioritizing each of the 10 targets, but it seems like there would be much faster and better ways to do that than asking an LLM. As for target acquisition and identification, I think an LLM would be especially slow and cumbersome versus one of the many traditional ML models that already exist. DOD must be after something else.
> I don't want the technology I work on to be used for targeting decisions when executing people from the sky
What do you do when the government comes to you and tells you that they do want that, and can back it up with threats such as nationalizing your technology? (see Anthropic)
We're back to "you might not care about politics, but that won't stop politics caring about you".
> I know this is a foreign concept to some, but you can have a backbone.
Challenge it in court. Move the company to a different jurisdiction. Burn everything down and refuse to comply.
Challenge in court is fine, even healthy.
Threatening to burn everything down and refusing to comply might well work. It's daring Trump to a game of Russian roulette over popping the bubble that's only just keeping the US economy out of recession, and given that he TACOs a lot, I can see it working in a way it wouldn't if a sane leader were making the same demands for sane reasons.
Move the company to a different jurisdiction? That would have worked if AI was a few hundred people and a handful of servers, as per classic examples of:
At the height of its power, Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only 13 people. Where did all those jobs disappear? And what happened to the wealth that all those middle class jobs created?
But (I think) now that AI needs new data centres so fast and on such a scale that they're being held back by grid connection and similar planning permission limits, this isn't a viable response.
They can be burned down, but I think they can't realistically be moved at this point. That said, I guess it depends on how much Anthropic relies on their own data centres vs. using 3rd parties, given Amazon's announced AWS sovereign cloud in Europe?
Yeah that sentence struck me as very carefully worded. They also don't mention how often RA is needed or invoked. We'll encounter a lot of these autonomous systems (cars, robots, equipment) that escalate decisions and edge cases to human employees until they are trained enough that reliability goes up.
It's tricky to give a number for "RA required" that isn't wildly misleading, or contextualize one you're given. The common case for most AV RAs is confirmation of what the vehicle already has planned. Does that count as "required"?
An AV company can also tune how proactive vehicles are in reaching out to RA for confirmation, which is a balancing act between incident rate, stoppages, RA availability, and rider metrics. There's other ways to tune RA rate by also adjusting when and where the vehicles operate, which comes down to standard taxi fleet management tools (e.g. price and availability).
Waymo chooses a target that they're comfortable with and probably changes it every so often, but those numbers aren't the only possible targets and they're not necessarily well-correlated to the system's "true" capabilities (which are themselves difficult to understand).
You are not the one folks are worried about. US Department of War wants unfettered access to AI models, without any restraints / safety mitigations. Do you provide that for all governments? Just one? Where does the line go?
> US Department of War wants unfettered access to AI models
I think the two of you might be using different meanings of the word "safety"
You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.
The other meaning of "safety" is alignment - meaning, the AI does what you want it to do (subtly different than "does what it's told").
I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.
So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.
If you are a US company, when the USG tells you to jump, you ask how high. If they tell you not to do business with a foreign government, you say "yes, master."
a) Uncensored and simple technology for all humans; that's our birthright and what makes us special and interesting creatures. It's dangerous and requires a vibrant society of ongoing ethical discussion.
b) No governments at all in the internet age. Nobody has any particular authority to initiate violence.
That's where the line goes. We're still probably a few centuries away, but all the more reason to set our course now.
That you think technology is going to save society from social issues is telling. Technology enables humans to do things they want to do; it does not make anything better by itself. Humans are not going to become more ethical because they have access to it. We will be exactly the same, but with more people having more capability to do what they want.
> but with more people having more capability to do what they want
Well, yeah I think that's a very reasonable worldview: when a very tiny number of people have the capability to "do what they want", or I might phrase it as, "effect change on the world", then we get the easy-to-observe absolute corruption that comes with absolute power.
As a different human species emerges such that many people (and even intelligences that we can't easily understand as discrete persons) have this capability, our better angels will prevail.
I'm a firm believer that nobody _wants_ to drop explosives from airplanes onto children halfway around the world, or rape and torture them on a remote island; these things stem from profoundly perverse incentive structures.
I believe that governments were an extremely important feature of our evolution, but are no longer necessary and are causing these incentives. We've been aboard a lifeboat for the past few millennia, crossing the choppy seas from agriculture to information. But now that we're on the other shore, it no longer makes sense to enforce the rules that were needed to maintain order on the lifeboat.
> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.
You reckon?
Ok, so now every random lone wolf attacker can ask for help with designing and performing whatever attack with whatever DIY weapon system the AI is competent to help with.
Right now, what keeps us safe from serious threats is limited competence of both humans and AI, including for removing alignment from open models, plus any safeties in specifically ChatGPT models and how ChatGPT is synonymous with LLMs for 90% of the population.
Yes IMO the talk of safety and alignment has nothing at all to do with what is ethical for a computer program to produce as its output, and everything to do with what service a corporation is willing to provide. Anthropic doesn’t want the smoke from providing DoD with a model aligned to DoD reasoning.
The line of ego: seeing less "deserving" people (say, ones controlling Russian bots to push quality propaganda at scale, or scam groups using AI to make calls without personnel being the limit on how many calls you can make) makes you feel like it's unfair for them to possess the same technology for bad things, giving them an "edge" in their endeavours.
The cat is out of the bag and there’s no defense against that.
There are several open-source models with no built-in (or trivial-to-escape) safeguards. Of course, they can afford that because they are non-commercial.
Anthropic can’t afford a headline like “Claude helped a terrorist build a bomb”.
And this whataboutism is completely meaningless. See: P. A. Luty’s Expedient Homemade Firearms (https://en.wikipedia.org/wiki/Philip_Luty), or FGC-9 when 3D printing.
It’s trivial to build guns or bombs, and there’s a strong inverse correlation between people wanting to cause mass harm and those willing to learn how to do so.
I’m certain that _everyone_ looking for AI assistance even with your example would be learning about it for academic reasons, sheer curiosity, or would kill themselves in the process.
“What safeguards should LLMs have?” is the wrong question. That some won’t have any is an inevitability. Perhaps not in widespread commercial products, but definitely in widely accessible ones.
> There are several open-source models with no built-in (or trivial-to-escape) safeguards.
You are underestimating this. It's almost trivial to remove the safeguards for any open-weight model currently available. I myself (a random nobody) did it a few weeks ago on a recently released model as a weekend side-project. And the tools/techniques to do this are only getting better and easier to use!
Sounds like you're betting everyone's future on that remaining true, and not flipping.
Perhaps it won't flip. Perhaps LLMs will always be worse at this than humans. Perhaps all that code I just got was quietly outsourced to a secret cabal in India who can type faster than I can read.
I would prefer not to make the bet that universities continue to be better at solving problems than LLMs. And not just LLMs: AIs have been busy finding new dangerous chemicals since before most people had heard of LLMs.
The chances of them surviving the process are zero; same with explosives. If you have to ask, you are most likely to kill yourself in the process or achieve something harmless.
Think of it this way: the hard part of a nuclear device is enriching the uranium. If you have that, a chimp could build the bomb.
I’d argue that with explosives it’s significantly above zero.
But with bioweapons, yeah, that should be a solid zero. The ones actually doing it off an AI prompt aren't going to have access to a BSL-3 lab (or more importantly, probably know nothing about cross-contamination), and just about everyone who has access to a BSL-3 lab, should already have all the theoretical knowledge they would need for it.