This is really cool, I gave it a try this morning at work. Unfortunately, as much as I like it, I can't see myself using this regularly precisely because it's a webapp. I'm sure there's a lot of folks out there that would love it though.
Perhaps it's just a personal preference, but I'm reluctant to fall into the habit of using something which requires an internet connection, you know?
Hard agree. I'm really trying to figure out how to inject this idea into my friends' heads effectively. The main struggle I'm facing is how to convey the danger behind it. Why can it be deadly exactly? What can a program actually do to harm people, to the level where it's a risk of extinction or societal collapse?
Personally I didn't need to imagine a specific scenario to understand that there's risk, but I think it would help me convince other folks if I did.
If you want society to collapse all you need to do is succeed in having AI automate all jobs.
Every single country where money comes from somewhere other than people (oil, diamonds...) is an authoritarian nightmare simply because keeping people happy is not necessary.
Once AI can do everything, and robots that can do any physical labor are developed, the population will shrink dramatically as people with killer robots kill each other for resources. There is no need for AI rebellion or AI failure to get there.
I don't think I see enough discussion about what this means for privacy. There was some protection in the fact that it was prohibitively expensive to get someone to listen to every single one of our phonecalls/read all our emails/etc.
The top use case I've been hearing is in legal discovery. Law firms used to play games with diligence by disclosing TBs of email and making it cost prohibitive to find relevant emails. This task would normally require a $60-100/hr paralegal or lawyer.
GPT-4 can do that task for fractions of a penny per email now. It doesn't have to be perfect if it's competing with nothing. I expect we'll see similar shops for any other high-cost paper-trail business.
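To make that concrete, here's roughly what the per-email triage looks like (a minimal sketch using the OpenAI Python client; the prompt, matter description, and sample emails are placeholders I made up, not a real e-discovery workflow):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    def is_relevant(email_text, matter):
        # Ask GPT-4 for a bare YES/NO relevance call on a single email.
        response = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "You are a document-review assistant. Answer only YES or NO."},
                {"role": "user",
                 "content": f"Is this email relevant to the matter '{matter}'?\n\n{email_text}"},
            ],
        )
        return response.choices[0].message.content.strip().upper().startswith("YES")

    emails = ["Re: revised delivery schedule ...", "Fantasy football picks"]  # stand-ins
    relevant = [e for e in emails if is_relevant(e, "breach of supply contract")]

Even at a few thousand tokens per email, that prices out to fractions of a cent, versus $60-100/hr for a human reviewer.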
Is there a solution to the issue of data stewardship yet? I'd imagine it typically would not be permissible to send a bunch of proprietary legal documents off to OpenAI.
What I'd really love to implement is a way for GPT-4 to answer questions based on a corpus of "all our Confluence pages plus random other sources of documentation." Like with the legal document issue, it's a bit of a nonstarter right now given the proprietary nature of corporate documentation.
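The pattern I have in mind is retrieval-augmented generation: embed every page, pull the closest chunks for a question, and stuff them into the prompt. A minimal sketch of that shape (the Confluence-export helper is hypothetical, and this only addresses the proprietary-data problem if the whole thing runs on infrastructure you trust):

    import numpy as np
    import openai

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    pages = load_confluence_pages()  # hypothetical helper: returns a list of page strings
    page_vecs = embed(pages)

    def answer(question):
        q = embed([question])[0]
        # Cosine similarity against every page; take the top 3 as context.
        sims = page_vecs @ q / (np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q))
        context = "\n---\n".join(pages[i] for i in np.argsort(sims)[-3:])
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"}],
        )
        return resp.choices[0].message.content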
AFAIK the Azure APIs come with suitable data-usage terms. One of the most fascinating aspects of the AI world is that we've made extraordinarily expensive brute-force search a valuable tool.
Why do you think there has been mass surveillance of American domestic communications since forever ago, as leaked by Snowden? This technology has been available since then and can effectively summarize millions of pieces of communication.
Yeah but have you seen the leaked slides? It's clear that they have only the ability to analyse 10% or less of the data they are storing.
GPT-like systems will close that gap, and then come all the problems of automated law enforcement: extrapolation from incomplete data, false positives from coincidences, interpretation errors, all that annoying stuff.
To add to that, the leaked documents from Snowden described a query language not unlike doing Boolean searches. Nothing close to GPT’s ability to comprehend human asks.
Collecting an ocean of data from everyone isn't the same as actually painstakingly tying all the pieces together for everyone.
They've created a huge library of unorganized data. The difference here is they now can spawn a million untiring AI private investigators / librarians to organize this information into coherent "case files".
At least for me, until this point I've had a feeling of anonymity in the idea that, while my data is being slurped up, I'm just one data point in a sea of other 'normal' people. There would be little value in spending government time and effort tying all of the web detritus together for me. The juice would definitely not be worth the squeeze.
However, when the cost of this effort is nearly zero, that now becomes a different story. The balance of power between government and the people it rules is going to radically shift.
Not exactly. Gov had to be selective because its surveillance required a lot of resources per person/call. New technology makes it cheap and en masse: voice calls can be recorded, then converted to text, then filtered, and humans will only analyze something of interest. It's like how we had the alphabet, books, and newspapers for hundreds of years, but only with the internet did we get the ability to process them easily.
Not only converted to text; it seems likely that they can document sentiment around a person's speech too. For example, if you're a low-priority target that's still on the radar, but not high enough on the list to get a human handler, I could see a system recording not only what you said, but whether you were laughing, angry, crying. Does the tone of your voice indicate the likelihood of action in a short time frame?
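None of that is exotic anymore; a toy version of the whole pipeline fits in a screenful (a sketch using openly available models; the watchlist is invented for illustration):

    # Toy pipeline: recorded call -> transcript -> keyword filter -> tone.
    import whisper                      # openai-whisper speech-to-text
    from transformers import pipeline   # generic pretrained sentiment model

    asr = whisper.load_model("base")
    sentiment = pipeline("sentiment-analysis")

    WATCHLIST = {"protest", "wire transfer", "border"}  # illustrative only

    def triage(audio_path):
        text = asr.transcribe(audio_path)["text"]
        hits = [w for w in WATCHLIST if w in text.lower()]
        tone = sentiment(text[:512])[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
        return {"flagged": bool(hits), "keywords": hits, "tone": tone, "text": text}

Only the flagged transcripts ever need to reach a human analyst, which is exactly the "humans will only analyze something of interest" part.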
> Gov had to be selective because its surveillance required a lot of resources
Not exactly. Where do you think all those budget trillions that don't have to be accounted for go? The FBI+NSA (the CIA, but for citizens) have effectively infinite resources.
All the overhead they have is to make sure a small subset of the citizens are not impacted. Snowden goes into this in some detail when talking about day-to-day operations. The norm is to cast the net as wide as possible, until you reach some politician or government agency.
How sure are you about that? The basic theories have been around since the 70s, have been proven at scale in the last decade, and the NSA has more data and compute than anybody else. I’d be shocked if they aren’t very far along in solving many problems.
This isn’t really a “throw money at it” problem like the government is good at.
Take drones for example. The government got really good at those because they made them jet-powered (lol) and blew a bunch of money on server-grade FPGAs in each one of them.
You can’t really just buy a lot of GPUs to make an LLM work, you need iterative development of architecture and training methods.
Like maybe the government invented self-attention before 2017, but if they didn’t, then the constraint is training time, and the government has the same number of seconds as the rest of us.
The military invented the nuclear bomb yes. But Fermi did most of his thinking work in Italy before the Manhattan project. He got money thrown at him once he got here.
As for semiconductor devices, it was Bell Labs and TI.
Coding is an ambiguous concept that wasn’t really invented, but if it were, it would have first appeared in programmable looms.
The military likes to take credit for things, but really all they do is throw money at existing inventions.
I’m sure they’re throwing a bunch of money at Transformers now, but who are all these uncredited super geniuses who invent things and then let randos at Google take the credit/earn the money?
I used to agree with you. I used to think the military was kinda dumb. But after doing a deeper dive into past and present military technology, I've come to realize this is just an intelligence ruse.
They made some mistakes in the '40s and '50s that let nuclear secrets get stolen by the Russians, and ever since, everything has been hyper-compartmentalized.
It would not surprise me if in the '90s or '00s they had an internal working LLM, considering all the puzzle pieces. You will never hear about classified tech unless it's a bomb or it gets leaked. (See the code-breaking machines declassified after 70+ years.)
In a hypothetical scenario, a military organization might want to conceal its use of a large language model (LLM) for intelligence gathering and analysis.
Another scenario is the military's current interest in everything quantum. Quantum computers, for example (you wouldn't want another nation to get there first and Pirate Bay out our secrets, would you?), so there is extreme national-security importance in being first.
And to be first, you need to have the smart people, which the military has. There is a reason China struggles with jet engines 80 years after their invention, and still can't make nuclear carriers. While the US navy works on things like this: https://www.navair.navy.mil/foia/sites/g/files/jejdrs566/fil...
That's not how military intellectual property works. I don't doubt your technical credentials, but I think you lack deeper insight to what the military does, and has done.
>That's not how military intellectual property works.
So your implication is that the military is full of unnamed linear algebra, systems engineering, and linguistics super geniuses, and these people never leave, never talk about their work publicly, never publish anything ever, and they're all cool with their huge innovations being kept away from the public forever? All because of military IP regulation?
And none of these effective state prisoners ever defect to China (where they could live like royalty) because...
If that is your angle, I very much agree it's possible some deep black project exists that (once) looked into this. In the UFO lore, there are many stories about these advances (tr3b etc).
But all of it is orthogonal to LLMs though. Picking one exceptional area that the military is great at (aerospace) does not suddenly make the military exceptional in other areas like AI.
See the above video; even code-breaking machines are kept a dark secret. LLMs that could make analytical decisions about war and strategy must have started with the military first. It explains its massive data-gathering operations in the 2000s. And it explains why some countries are separating off to make their own internet, away from what really is the US-owned World Wide Web.
The U.S. military has engaged in the commercialization of top-secret technologies (after they're considered obsolete by military standards), often by collaborating with private companies or research institutions.
The military invented AI and NLP which underpins LLMs.
The military is responsible for most of the technology we use and talk about today. The government may appear incompetent, but we're living off military hand-me-downs; the entire world is.
Your argument reminds me of the discussions about the moon landing.
If today's hardware had been available 20 years ago, this would've been possible, just like the moon landing could've been faked if it had taken place 20+ years later. The technology wasn't available at the time (GPUs in this case; for the moon landing, no experience back then with such advanced trick techniques for movies).
These models are having such a strong effect now because we've finally got the hardware to run them.
I'm not so sure about that. I mean, maybe not since the Snowden leaks but how do we know that governments haven't been running their own LLMs for the last five years or so? We know that they're using sockpuppets[1]. We know that they're astroturfing[2]. Integrating LLMs into their toolkits seems like an obvious move, so obvious that they would be stupid not to do it.
>haven't been running their own LLMs for the last five years
Because the hardware has not existed.
That said, by accident I've seen hardware that was brought to a testing company by federal marshals: massively parallel custom hardware that was likely for signal-processing a lot of channels at once. So there is plenty of custom hardware out there, but these items have not been produced at the scale needed (from what anyone can tell) and, again from what we can tell, they don't have the general processing capability that GPU/TPU-driven LLMs have.
Yes, it has. Consumer-wise, we've had Dragon Naturally Speaking since the late 90s. It's pretty simple to have a script read what it outputs text-wise and look for key words. No AI is even needed to do this.
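And the no-AI keyword scan really is about five lines (watch words made up for illustration):

    # Dumbest possible version: scan transcript text for watch words.
    WATCH_WORDS = ["bomb", "attack", "password"]

    def flag(transcript):
        lowered = transcript.lower()
        return [w for w in WATCH_WORDS if w in lowered]

    print(flag("I changed my password after the phishing email"))  # ['password']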
ShotSpotter has been billed as a "system of sensors, software, AI and expert human review that accurately detects, locates and alerts police to gunfire", and the company behind it ("ShotSpotter" was formerly the company name; it's recently been renamed "SoundThinking") has a number of other AI-involved law enforcement products now, as well.
That's a layman explanation. ShotSpotter is likely a passive radar system. In recent years, you can combine signal processing and supervised learning (neural nets) to get better direction-of-arrival estimates.
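The signal-processing core is time-difference-of-arrival between sensor pairs. A bare-bones two-microphone sketch with numpy (the spacing and sample rate are assumed; real systems use many sensors plus the neural-net refinement mentioned above):

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s
    MIC_SPACING = 0.5       # meters between the two mics (assumed)
    SAMPLE_RATE = 48000     # Hz (assumed)

    def direction_of_arrival(mic_a, mic_b):
        """Angle in degrees from broadside, given two time-aligned recordings."""
        corr = np.correlate(mic_a, mic_b, mode="full")
        lag = np.argmax(corr) - (len(mic_b) - 1)  # samples by which mic_a lags mic_b
        tdoa = lag / SAMPLE_RATE                  # seconds
        # Far-field geometry: sin(theta) = c * tdoa / d, clipped to a valid range.
        s = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
        return float(np.degrees(np.arcsin(s)))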
How long before the AI built in the US figures out its reward function is 80% more likely to be satisfied if it points its digital finger at someone black rather than the most likely subject?
> There was some protection in the fact that it was prohibitively expensive to get someone to listen to every single one of our phonecalls/read all our emails/etc.
That's already how it worked on platforms like mturk and UHRS; lots of the work was transcribing audio dumps from microphones built into computers/phones/smart home devices. UHRS especially had a lot of that (it's owned by MS), as well as search-engine grading type work. They also certainly do not pay well. I'd imagine that in practice there isn't much cost difference between paying a bunch of bored people to do it and the compute cost of running an AI model to do it, but the AI model will be vastly more accurate and will work 24/7.
Not to sound condescending, but really? How is this not immediately where your mind goes? Every piece of information ever recorded can now be summarized and cross-linked efficiently. Privacy is beyond dead. Soon every authoritarian government (and democratic ones, albeit secretly) will have integrated platforms that track every single one of your movements, known contacts, internet usage, financial data, and correspondence. Big Brother has NEVER EVER been more effective than it will become.
Looking at this from far away, with the Snowden revelations in mind, I'd think it's not tinfoil-hat territory to assume that some of the progress at OpenAI was achieved with some help from well-resourced folks in the USG/three-letter agencies.
I don’t think they helped them. Now, did they train off the same data sources? Well, since OpenAI isn’t saying what GPT-4 is trained on, and the NSA can hoover up all kinds of non-public data, it stands to reason they may both be doing something slightly shady with emails, texts, and the like.
Yeah, I've started wondering similar things about that too, like how far ahead is the NSA on this stuff? And how does that tie in to the recent policy of denying China semiconductors?
Perhaps history will show that the NSA made algorithmic breakthroughs a few years ago and realized what was coming, so political policy was crafted to stymie Chinese progress in this field, and what we're seeing in the public sphere from companies like OpenAI is a managed release of the technology into the public, OpenAI at least managing to independently discover the same breakthroughs that the NSA made a few years ago.
You're seeing the government entity as separate from the corporate entity, but quite often in the US it's the other way around. The government entity is a rather hollow shell, and the 'brains' of the operation is contracted out to the corporation. The government entity would almost cease to exist if the corporation under it magically disappeared.
Most people don't really think about things that don't affect their day-to-day lives. This includes the specifics of how Governments might run a mass surveillance plan.
With regard to privacy, what's the difference between your email's text stored on a server, and your email's text alongside the output of the text processed through an LLM? If "they" can already look at the text, what more privacy is there to lose?
There's a great deal of privacy in simply being a needle in a haystack. Part of the processing that's possible with an LLM is filtering.
Imagine you've sent an email about transporting a friend's daughter across state lines to get a medically-necessary abortion. Or if you prefer, imagine you've arranged via email to "lose" some firearms which don't comply with your state's new assault weapons ban.
Pre-LLMs, trying to find these sorts of emails was very hard. A simple text search for "abortion" or "gun" is going to come up with far more emails where two family members got into a political debate, than emails about lawbreaking. Big Brother will find a few such emails here and there by chance, but the vast majority of such incriminating emails will simply be lost in the pile.
Enter LLMs, and Big Brother can feed some of the incriminating emails found by chance into a training dataset along with a bunch of non-incriminating emails, teach the AI to find incriminating emails, and then apply the model to the entire pile to get a nicely filtered list of only the emails which are incriminating, further tuning the model by adding the emails it gets wrong to the training dataset as they are found.
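To spell out how little machinery that loop needs, here's a sketch with a plain bag-of-words classifier (every email and label here is invented; a real system would presumably use LLM embeddings or fine-tuning, but the active-learning shape is the same):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Seed set: an email found by chance, plus a benign one.
    labeled_emails = [
        "we need to lose those rifles before the inspection",  # 1 = incriminating
        "did you catch the game last night",                   # 0 = benign
    ]
    labels = [1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(labeled_emails, labels)

    # Score the whole corpus and surface only the top hits for human review...
    all_emails = ["lunch at noon?", "those rifles need to disappear before the audit"]
    scores = model.predict_proba(all_emails)[:, 1]
    flagged = [e for e, s in zip(all_emails, scores) if s > 0.9]

    # ...then fold the reviewer's corrections back in and retrain.
    corrections, correction_labels = [], []  # accumulated from human review
    model.fit(labeled_emails + corrections, labels + correction_labels)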
Ah but then you'd have to explain how God started, and the problem is that whatever explanation you concoct here might as well be used to explain the very thing you invoked God to explain in the first place
I disagree, and I think you should take a longer view. As automation approaches total replacement of labor, capitalism will cease to work at all. This view is best summarized in this (probably apocryphal) exchange between Henry Ford II and Walter Reuther (United Autoworkers Union leader), as they toured a new, highly automated car factory:
> Ford: Gee, how are you going to get all these robots to pay union dues?
> Reuther: How are you going to get them to buy your cars?
You say there will be less need for labor? Then who exactly will businesses sell their goods and services to, if everyone is out of a job? The development of AI would be qualitatively different from the previous developments of labor-saving machines: a switchboard operator whose job became obsolete could go get another job. Machines that can do anything a person can would mean there is no other job.
I don't know what's in store for us, but it will probably be as drastic a shift as pre/post the agricultural/industrial revolutions. And there's no reason to think it should be for the better. I think we all ought to be very nervous and worried about this.
But in all seriousness, in a capitalist society the people holding the capital have to solve existential problems (not to humanity itself, but to the economic system in this case), because they control all the resources required to do so. If they decline to do so, we'll replace them with new owners of capital.
I agree and really empathize with you on this. It's frustrating how hard it is to get people to care, I've even had someone throw McLuhan's tetrad at me, as if this is the equivalent of the introduction of phone apps.
We're racing into a fundamentally deep and irreversible societal shift, at least the same order of magnitude as the agricultural or industrial revolution. Maybe even many orders of magnitude deeper. Society will change so profoundly, it will be at least as unrecognizable as our lives would look to the average person from the Bronze age. There's absolutely no reason to assume this will be a good change. If it's not something I personally will have to live with, my descendants most certainly will.
I'll admit, I also draw a blank when I try to imagine what the consequences of all this will be, but it's a blank as in "staring into a pitch black room and having no idea what's in it" - not ignoring the darkness altogether. Mass psychosis is a good term for this, I think.
The collective blindspot is failing to understand that there's NOTHING that says we're gonna 'make it'.
There's no divine being out there watching out for us. This isn't a fucking fairy tale, you can't assume that things will always 'work out'. Obviously they've always worked out until now because we're able to have this conversation, but that does NOT mean that things will work out indefinitely into the future.
Baseless conjecture: I think we are biased towards irrational optimism because it's an adaptive trait. Thinking everything will work out is better than not, because it means you're more likely to attempt escaping a predator or whatever despite a minuscule chance of success (which is better than not trying at all). It's another entry into the list of instincts we've inherited from our ancestors which bite us in the ass today (like being omnivorous, liking sweets, tribalism, urge to reproduce, etc).
You seem like you've given this a bunch of thought, and I wanna chat more about this and pick your brain about a few things. Have you ever thought about whether this intersects with the Fermi paradox somehow?
Have you read Eliezer Yudkowsky and the LessWrong forum on AI existential risk? Your understanding of the sheer magnitude of future AI and taking it seriously as a critical risk to humanity are common qualities shared with them. (Their focus to address this is to figure out if it's possible for AI to be built aligned with human values, so that way it cares about helping us instead of letting us get killed.)
(The Fermi paradox is also the kind of thing discussed on LessWrong.)
ive created a twitter account for people to follow to organize around this issue, talk to each other and organize political action. giving out my email to so many people is becoming untenable so please contact me there. im always excited to even encounter someone who sees the issue this way let alone get to chat. thats how few of us there are apparently. @stop_AGI
one thought -- i agree with your sentiment towards ai, but i think the goal of stopping AGI is fruitless. even if we stop OpenAI, there will be companies/entities in other countries that will proceed where OpenAI left off.
there is zero chance of surviving AGI in the long term. if every human were aware of whats going on, like they are aware of many other pressing issues, then stopping AGI would be easy. in comparison to surviving AGI, stopping it is trivial. training these models is hugely expensive in dollars and compute. we could easily inflate the price of compute through regulation. we could ban all explicit research concerning AI or anything adjacent. we could do many things. the fact of the matter is that AGI is detrimental to all humans and this means that the potential for drastic and widespread action does in fact exist even if it sounds fanciful compared to what has come before.
a powerful international coalition similar to NATO could exclude the possibility of a rogue nation or entity developing AGI. its a very expensive and arduous process for a small group -- you cant do it in your basement. the best way to think about it is that all we have to do is not do it. its easy. if an asteroid was about to hit earth, there might be literally nothing we could do about it despite the combined effort of every human. this is way easier. i think its really ironic that the worst disaster that might ever happen could also be the disaster that was the easiest to avoid.
the price of compute is determined by the supply of compute. supply comes from a few key factories that are very difficult to build, maintain and supply. highly susceptible to legislation.
how? the same way that powerful international coalitions do anything else... with overwhelming economic and military power.
You can't do it in your basement as of 2023. Very important qualification. It's entirely plausible that continuous evolution of ML architectures will lead to general AI which anyone can start on their phone and computer and learn online from there.
Naively, one might say "ah but that ended in 1971!" - but let me put it this way: if you spotted a cockroach in your house, you'd be a fool to think that was the only one.
Also: the oversight/limits you're protected by could disappear some day, they're imaginary and socially constructed. Sure, you trust our current government to handle these powers responsibly, (though you really shouldn't, see above), but why are you so confident you can trust _tomorrow's_ government?
Hey look I'm really trying to engage earnestly here. I did provide specific examples, but they were in the wiki article I linked and you didn't look at them (which is fair, no one likes being tossed a link like that). Let me summarize COINTELPRO:
# Covert & 'illegal' projects by FBI aimed at infiltrating, influencing, disrupting, and discrediting various political organizations
# Existence of the program was discovered after activists stole documents from an FBI office and leaked them to media
# Targets included: antiwar activists, feminist organizations, civil rights activists (ie MLK), environmentalists, animal rights activists, communist party, KKK, American Indian activists, far right groups
# Methods included:
* Breaking into homes, violent beatings, vandalism
* Assassination
* Smear campaigns
* Fabricating evidence, false testimony (leading to wrongful imprisonment and activist intimidation)
* Fabricating letters to discredit/humiliate people or erode their relationships, or cause conflict (leading to death in many cases)
I don't actually need to talk about hypotheticals, the US government has already abused these things to squish people or ideas it didn't like. The point about creeping authoritarianism is a secondary argument. My point is that sometimes it's better for certain tools/institutions not to exist at all.
I think we ought to treat surveillance technologies with the same type of reverence we treat nuclear tech (though maybe not to the same magnitude). Nuclear technology isn't intrinsically a bad thing: the problem is that, combined with human tendencies (tribalism, territorialism, etc), a conflict that previously would've resulted in a mere x deaths could now result in x^y deaths, or even total annihilation.
You agree that creeping authoritarianism is a general problem. Do you think it might just be in the nature of human societies? If so, wouldn't it be prudent to carefully consider what tools and institutions we leave lying around, in case the worst happens? We all accept this with nukes - there was some kind of effort at nuclear disarmament (though not enough). We should do the same for surveillance.
I'm only trying to convince you that we need to be very cautious, skeptical, and distrustful of things like the NSA, because the US govt cannot be trusted with it now, and it might get even worse in the future. What hypothetical evidence would someone have to show you, to change your mind?
While I might have been in some ways sympathetic, the Panthers were a violent, armed, (Marxist-Leninist) Communist radical group that had ambitions to overthrow some parts of governance; they got into shootouts with and killed police officers, engaged in voter intimidation, etc.
You do understand it'd be very appropriate for the FBI to infiltrate such groups, as they indicate they are currently doing with 'far right' and other radical groups, especially those with weapons.
Your characterization of 'assassination' is problematic. I wouldn't say Fred Hampton was so much assassinated. He and his buddies were involved in a shooting which killed police; very shortly thereafter the cops planned the raid to arrest them, and two Panthers were killed. It seems that Fred was killed in cold blood. While this is obviously 'very illegal', this is not the US Justice Department targeting someone; this is local Chicago/Oakland cops' form of extrajudicial retribution for the gang killing of their colleagues. Again, not right, but something totally different from what might be implied by 'assassination'. They killed cops; the cops got out of line and got revenge.
Very notably, these acts drew national attention and there was an enormous reaction. Information was made public; there was public and political furor, transparency, etc.
All of this is some time ago, when central oversight was harder, when the violence was much higher, and where groups of various kinds (aka local cops, local Panther groups) would act independently from central control.
And in the grand scheme of 300 million people, it's relatively small stuff.
Also, it's a good reason for not having a single power like J Edgar Hoover in charge of anything.
Finally, it should be noted that this was the start of the Cold War, and the Soviets were absolutely funding totalitarian uprisings around the world. Stalin directly controlled 17% of the Reichstag during the Weimar Republic. While obviously not sufficient to cause 'The Big Bad Man' to rise, without it, 'The Big Bad Man' likely would have never existed. Afghanistan, Korea, Vietnam, Angola, Cuba, Chile, Nicaragua... so much of the world was perturbed by very real, direct intervention from Soviet-backed 'Marxist-Leninist' groups. The 'Red Scare' was not a fantasy. It might have been overstated on some level, but it was a material 'existential' problem.
The same, continued tactics by the Russians have landed us in an 'almost war' for the West in Ukraine today. Russian spies are all over Germany; Putin has corrupted so many people in Europe, including literally former German chancellors and Austrian and Hungarian leaders. The FBI exists so that this does not happen so brazenly in the US and allied nations.
The FBI will step out of line again, just like all groups do, and there should be constant vigilance, but given the total independence of the other branches, I'm not worried at all. There will always be whistleblowers, eventual transparency, etc.