"They have great R&D but just can’t make products"
Is this just something you repeat without thinking? It seems to be a popular sentiment here on Hacker News, but it really makes no sense if you think about it.
Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
So many widely adopted products. How many other companies can say the same?
I don't think Google is bad at building products. They definitely are excellent at scaling products.
But I reckon part of the sentiment stems from many of the more famous Google products originally being acquisitions (Android, YouTube, Maps, Docs, Sheets, DeepMind) or originally built by individual contributors internally (Gmail).
Then there were also several times where Google came out with multiple different products with similar names replacing each other. Like when they had I don't know how many variants of chat and meeting apps replacing each other in a short period of time. And now the same thing with all the different confusing Gemini offerings. Which leads to the impression that they don't know what they are doing product-wise.
Starting with an acquisition is a cheap way of accelerating once your company reaches a certain size.
Look at Microsoft - PowerPoint was an acquisition. They bought most of the team that designed and built Windows NT from DEC. FrontPage was an acquisition, and Azure came after AWS and was led by a series of people brought in through acquisitions (Ray Ozzie, Mark Russinovich, etc.). It's how things happen when you're that big.
Because those were "free time" projects. It wasn't something the company directed anyone to do; somebody at the company, on their flex time, just thought it was a good idea and did it. Googlers don't get this benefit any more for some reason.
Leadership's direction at the time was to use 20% of your time on unstructured exploration and cool ideas like that, though the other poster makes a good point that this is no longer a policy.
Those are all free products, and some of them are pretty good. But free is the best business strategy for getting a product to the top of the market. Are others better? Are you willing to spend money to find out? Clearly, most people are not interested. The fact that they can destroy the market for many different types of software by giving it away and still stay profitable is amazing. But that's all they are doing. If they started charging for everything, there would be better competition and innovation. You could move a whole lot of okay-but-not-great cars, and top every market segment you wanted, if you gave them away for free. Only enthusiasts would remain to pay for slightly more interesting and specific features. Literally no business model can survive when its primary product is competing with good-enough free products.
They come up with tons and tons of products like Google Glass and Google+ and so on and immediately abandon them. It is easy to see that there is no real vision. They make money off AdSense and their cloud services. That's about it.
Google does abandon a lot of stuff, but their core technologies usually make their way into other, more profitable things (collaborative editing from Wave into Docs; loads of stuff from Google+; tagging and categorizing in Photos from Picasa (I'm guessing); etc)
It annoyed me recently that they dropped support for some Nest/Google Home thermostats. Of course, they politely offered to let me buy a replacement for $150.
> Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...
Many of those are acquisitions. In-house developed ones tend to be the most marginal on that list, and many of their most visibly high-effort in-house products have been dramatic failures (e.g. Google+, Glass, Fiber).
I was extremely surprised that Google+ didn't catch on. The week before Google+ launched, my friends and I all agreed that Facebook was toast: Google would do the same thing but better, and everyone already had a Gmail account, so there would be basically zero barrier to entry. Obviously, we were wrong; Google+ managed to snatch defeat from the jaws of victory and never got significant traction, and Facebook kept growing and is now yet another Big Evil Tech Corporation.
Honestly, I still don't really know how Google managed to mess that up.
I got early access to Google+ because of where I worked at the time. The invite-only thing had worked great for GMail but unfortunately a social network is useless if no-one else is on it. Then the real names thing and the resulting drumbeat of horror stories like "Google doxxed me to my violent ex-husband" killed what little momentum they had stone dead. I still don't know why they went so hard on that, honestly.
I think the sentiment is usually paired with discussion about those products as long-lasting, revenue-generating things. Many of those ended up feeding back into Search and Ads. As an exercise, out of the list you described, how many of those are meaningfully-revenue-generating, without ads?
A phrasing I've heard is "Google regularly kills billion-dollar businesses because that doesn't move the needle compared to an extra 1% of revenue on ads."
And, to be super pedantic about it, Android and YouTube were not products that Google built but acquired.
They bought YouTube, but you have to give Google a hell of a lot of credit for turning it into what it is today. Taking ownership of YouTube at the time was seen by many as taking ownership of an endless string of copyright lawsuits that could sue them into oblivion.
YouTube maintains a campus independent of the Google/Alphabet mothership. I'm curious how much direction they get, as (outwardly, at least) they appear to run semi-autonomously.
Before Google touched Android it was a cool concept but not what we think of today. Apparently it didn't even run on Linux. That concept came after the acquisition.
Notably all other than Gemini are from a decade or more ago. They used to know how to make products, but then they apparently took an arrow in the knee.
Search was the only mostly original product. With the exception of YouTube (a purchase), Android, and ChromeOS, all the other products were initially clones.
I'm far from being a fan of the company, but I think this article is substantially overstating the extent of the "freeze" just to drum up news. It sounds like what's actually happening is a re-org [1] - a consolidation of all the AI groups under the new Superintelligence umbrella, similar to Google merging Brain and DeepMind, with an emphasis on finding the existing AI staff roles within the new org.
From Meta itself: “All that’s happening here is some basic organisational planning: creating a solid structure for our new superintelligence efforts after bringing people on board and undertaking yearly budgeting and planning exercises.”
"I don't think there is a single founder/CEO in the 21st century that is performing better than Zuckerberg. I understand he's not a likable guy, and neither are are his products."
Right? "How Zuckerberg will be looked back on" involves far more than just business/fiscal metrics, especially when "he's not a likable guy, and neither are his products". He may be hailed as a significant success as a CEO, but there's also the impact to individual privacy, the emphasis to get users addicted in an effort to maximize "engagement", the impact that social media (of which it's being argued here that he is king) on youth/society and political discourse writ large, the brutal impact to our attention spans, so on and so forth.
If I'm a magic eight ball, I'm going to go with "Outlook not good" on how history's going to view him. Being a CEO that is "performing better" than all the others is but a single piece of the puzzle.
Key word "performing". If you want feel warm feelings about your heroes, don't make your heros CEOs who are ultimately judged on how large of a return they can give their shareholders.
Zuck is the poster boy for enshittification [0]. The dark pattern business model involves companies making money long after their users stop loving their offerings.
Well... In the early 1940s you could say that Hitler was the European leader that was performing better than all the others. But still, things took a turn.
Yes. I got a computer science degree, and would get one again today. Assuming you are able to finish the degree, no major has better ROI. As AI automates more and more types of work over the next few decades, computer science, the language of automation, will become more important, not less. A computer science degree teaches you the fundamentals (easy to miss in self study), builds discipline, exposes you to cutting edge topics, and opens doors to the best jobs in the industry.
Yes, it's possible to make it without a degree, but it makes things a lot more difficult. Don't second guess it. Do it!
Also, no reason to dread the intro classes IMO. Given his experience, it shouldn't be hard for him to ace them and race on ahead to bigger and better things. I learned some interesting things in intro CS, despite also coming in with prior programming experience.
Feel free to send me a message if you have any questions.
One way to make sense of this specific case, at least:
- He's on track to becoming a top-tier AI researcher. Despite having only one year of a PhD under his belt, he already received two top awards as a first-author at major AI conferences [1]. Typically, it takes many more years of experience to do research that receives this level of recognition. Most PhDs never get there.
- Molmo, the slate of open vision-language models that he built & released as an academic [2], has direct bearing on Zuck's vision for personalized, multimodal AI at Meta.
- He had to be poached from something, in this case, his own startup, where in the best case, his equity could be worth a large multiple of his Meta offer. $250M likely exceeded the expected value of success, in his view, at the startup. There was also probably a large premium required to convince him to leave his own thing (which he left his PhD to start) to become a hired hand for Meta.
If it were me, yeah, park it in bonds and live off the interest on a tropical beach. Spend my days spearfishing and drinking beers with the locals. Have no concerns except how even my tan is (and tbh I don't see myself caring too much about that).
I am interested in improving the lives of the many people who cannot afford to be stockholders.
The reason I'm interested in this is twofold.
First, I think the current system is exploitative. I don't advocate for communism or anything, but the current system of extracting value from the lower class is disgusting.
Second, they outnumber the successful people by a vast margin, and I don't want them to have a reason to re-invent the guillotine.
I agree. I just personally wouldn’t want to wander around exploring it continuously for months without more interesting work/goals. Even though cultures and geography may be wonderfully varied, their ranges are way smaller than what could be.
If you want to improve the lives of many, by all means go for it. I think that is a wonderful ambition to have in life and something I strive for, too!
But we are talking about an ad company here, trying to branch out into AI to sell more ads, right? Meta existing is without a doubt a net negative for mankind.
I met a youngster on Bocas del Toro island in Panama a decade or so ago. I was about to be fired from my FAANG job, so I used up years and years of vacation for one big trip before I was let go. We hung out for a few days while I was there (I don't recommend the place at all, btw). He had cashed out from early Twitter and was setting up surf schools all over the world. All he did was travel, surf, drink, and fuck. I'm still angry that I laughed at all the dumb startups in the late 2000s instead of joining them. But this guy did what you're suggesting, and I think there are many more unknown techbros who did it too.
I met a traveller from an antique land,
Who said: “Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed;
And on the pedestal these words appear:
"My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!"
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away.
I take that more as a rumination on the futility of vanity and self-aggrandizing rather than "ruling the world " which in the modern day comes down to politics. Yes, there is considerable overlap with ego, but there's more to that topic than pure self-worship.
> Also, do you have a better way to spend that money?
Yes, I do.
I am aware of some quite deep scientific results that would have a major impact (and thus likely bring a lot of business value) if they were applied in practice.
downsize Facebook back to like a couple thousand people max, use the resulting savings to retire and start your own AI instead of doing the whole shadow artist "I'll hire John Carmack/top AI researcher to work for me because deep down I can't believe I'd ever be as good as them and my ego is too afraid to look foolish so I won't even try even if deep down that's what I want more than being a capricious billionaire"?
or am I just projecting my beliefs onto Mark Zuckerberg here?
Retire? Anyone with more than about 10-20 million that continues to work has some sort of pathology that leaves them unsatisfied. Normal people rarely even get to that level because they are too busy enjoying life. Anyone making billions has some serious issues that they are likely stuck with because hubris won't let them seek meaningful help.
It might make more sense to think of it in terms of expected value. Whilst the probability may be low, the payoff is probably many times the $250M if their startup becomes successful.
It's strange that Zuck didn't just buy options on that guy? (Or did he? Would love to see the terms.)
Zuck's advantage over Sir Isaac (Newton) is that the market for top AI researchers is much more volatile than in South Sea tradeables pre-bubble burst?
Either that or 250M is cheap for cognitive behavior therapy
This is IMO a comical, absurd, Beeple NFT type situation, which should point us to roughly where we are in the bubble.
But if he's getting real, non-returnable actual money from Meta on the basis of a back of envelope calculation for his own startup, from Meta's need to satiate Mark Zuckerberg's FOMO, then good for him.
This bubble cannot burst soon enough, but I hope he gets to keep some of this; he deserves it simply for the absurd comedy it has created.
Professional athletes get paid on that scale, CEOs get paid on that scale. A top researcher in a burgeoning technology should get paid that much. Because bubbles don't mean every company fails, they mean most of them do and the winner takes all, and if someone thinks hiring this guy will make them the winner then it's not remotely unusual.
I'm not a conspiracy person, but it's hard not to believe that some cruel god sent us crypto just a few years before we accidentally achieved AGI just to mess with us. So many people are confident that AGI is impossible and LLMs are a passing fad based to a large degree on the idea that SV isn't trustworthy -- I'd probably be there too, if I wasn't in the field myself. It's a hard pattern not to recognize, even if n~=2.5 at most!
I hope for all of our sakes that you're right. I feel confident that you're not :(
Dude, LLMs are a very sophisticated autocomplete with a few tricks up their sleeve. From that to AGI or Zuck's "superintelligence" is just light years... LLMs are a dead end for "intelligence".
We are talking about the promises of the same crop of people who brought you full Autopilot on Tesla (still waiting since 2019 when it was supposed to happen), or the Boring Tunnel, the Metaverse, and their latest products are an office suite and "study mode".
This is EXACTLY the same as NFTs, Crypto, Web3, Mars. Some gurus and thought leaders talk big talk while taking gullible investors' money and hope nobody asks them how they plan to turn in a profit.
My edgy prediction is that, after blunder after blunder (the Metaverse, Llama, the enshittification of WhatsApp, Instagram losing heavily to TikTok, Facebook having mostly dead accounts) and now handing $250M in vapor-money to youngsters, this is Meta's last stab in the dark before they finally get exposed for how irrelevant they are. Then no amount of groveling to Trump or MMA will be able to save Zuck from being seen for what he really is: an irrelevant, morally bankrupt douche (I could say the same about Elon) just trying out random stuff and talking big.
Nuclear is actually a great example. It's a major scientific accomplishment but pretty much the least cost effective method of energy generation. Nuclear allowed us to generate huge amounts of energy to waste on whatever we like when the real focus should be on massively decreasing energy demands. But of course that doesn't make anyone money.
Decreasing energy demands isn't going to happen though, and energy should not be thought of as a profit generator. You can spend all your effort trying to do something that isn't going to happen without forcing people to do it, and you can lament that public needs don't make profits, or you can deal with the reality of the situation and accept that things aren't ideal.
> and you can lament that public needs don't make profits, or you can deal with the reality of the situation and accept that things aren't ideal.
Ah, so we have to "deal with the reality of the situation and accept things"? You know who also was being told that? French peasants and middle class, by their king Louis XVI, who told them that they had to accept plummeting living standards and wages. You know how it ended up, right? The king had to deal with the reality of a guillotine.
Some people might accept being used and abused by the capitalistic system as a "natural order of things", but there are people who can envision a world where not everything is about profit, and there are such things as public goods, services, resources, housing, benefits, etc.
> Decreasing energy demands isn't going to happen though, and energy should not be thought of as a profit generator
Says who? Not everything is about profit. The fact that so many in today's society think of capitalism as the natural order of things doesn't make capitalism any less fragile, especially if it continues not serving society. When something doesn't serve society, society gets rid of it (usually violently) and replaces it with something else. Many CEOs and oligarchs nowadays forget that, just like the kings before them, until a Luigi comes and reminds them.
> lament that public needs don't make profits,
Whatever the public needs, eventually the public will get, with or without profits involved, and with or without the agreement of the "ruling" class.
That whole comment comes off as unhinged and has nothing to do with development of energy infrastructure for a world population that has growing energy needs.
Please use an LLM and prove it yourself. Then use an LLM and prove the opposite. That's how much weight your "proofs" have.
In fact, buy a premium subscription to all the LLMs out there (yes, even LLAMA) and have them write the proof using the scientific method, and submit the papers to me via a carrier pigeon.
Other than the rockets and Starlink, I do not trust anything from Musk. Mainly because while everyone whose opinion I trust about rockets says "LGTM", everyone whose opinion I trust on all the other things he does says as per https://xkcd.com/2030/
I don't trust Zuckerberg. Bad vibes about everything Facebook ever since I went to one of their developer conferences in London a bit over a decade ago, only gotten worse since then. I wouldn't even pay for ads on their system, given the ads I see think I want dick pills and boob surgery, and are trying to get me to give up a citizenship I never had in the first place while relocating to a place I've actually left. Sometimes they're in languages I don't speak.
And while I can't say that I have any negative vibes from Altman, I've learned to trust all the people who do say he's a wrong 'un, as they were right about a few other big names before him.
And I wouldn't invest in any of the companies making LLMs, because I think the whole "no moat" argument is being shown to be plausible by way of how close behind all the open models are.
But LLMs are, despite all that, obviously useful even if you see them as only autocomplete. They're obviously useful even if you think they're just a blurry JPEG of the internet. They're obviously useful even if they're never going to meaningfully improve.
LLMs are not like NFTs. Might be like Mars, though.
Well, if LLMs were the next coming of Christ, where is the impact of them on open source?
Everyone claims they are awesome and super powerful in their own toy projects or in their private source. Where is the actual impact of LLMs superior power on OSS?
Woah, a whopping 0.5%. That's like working 12 minutes more every week. I waste much more time every day because of how slow and sluggish Windows 11 is.
0.5% across all the world's population would be a significant achievement. The fact that you spend this much time in Windows 11 is actually a crime against humanity.
0.5% of USD 100 trillion per year, with a usual accounting I've seen being that you can count that boost for the next 20 years, is worth 0.5% * $100T/year * 20 years = USD 10 trillion.
Now, we don't have a single investor making this investment nor getting the reward for the investment, but that's the kind of overall shape of the reasoning behind why people are willing to invest so much in AI.
It would be trivial for most of Europe to improve productivity by 0.5% or even 5% but they’re too busy drinking nice wine outside a cafe in the sunshine most of the day.
There is no guarantee that Meta or OpenAI would capture that increase in productivity as opposed to open models or otherwise driving the profits on the AI itself to zero.
As LLMs get smarter and more capable on smaller and smaller computers, Anthropic, Google, Grok, OpenAI, and Meta (is there an acronym, like FAANG, for the giant AI companies? MOGGA? GOMAG?) will have to get creative with profit-drivers, since consumers will have a very capable LLM built into their computing device(s), and it can easily be worth the cost for a business to invest in the computing power to provide on-device LLMs to their workers.
If you agree it is a bubble, you are agreeing it is going to burst. Because that is what defines a bubble.
I have two questions about this, really:
- is he going to be the last guy getting this kind of value out of a couple of research papers and some intimidated CEO's FOMO?
- are we now entering a world where people are effectively hypothetical acquihires?
That is, instead of hiring someone because they have a successful early stage startup that is shaking the market, you hire someone because people are whispering/worried that they could soon have a successful early stage startup?
The latter of these is particularly worrisomely "bubbly" because of something that people don't really recognise about bubbles unless they worked in one. In a bubble, people suspend their disbelief about such claims and they start throwing money around. They hire people without credentials who can talk the talk. And they burn money on impossible ideas.
The bubble itself becomes increasingly intellectually dishonest, increasingly unserious, as it inflates. People who would be written off as fraudsters at any other time are taken seriously as if they are visionaries and ultra-productive people, because everyone's tolerance for risk increases as they become more and more desperate to ride the train. People start urgently taking impossible things at face value, weird ideas get much further advanced much more quickly, and grifters get closer to the target -- the human source of the cash -- faster than due diligence would ordinarily allow them.
"This guy is so smart he could have a $1bn startup just like that" is an obvious target for con artists and grifters. And they will come.
For clarity I am ABSOLUTELY NOT saying that the subject of this article is such a person. I am perfectly happy to stipulate that he's the real deal.
But he is now the template for a future grift that is essentially guaranteed to happen. Maybe it'll be a team of four or five people who get themselves acquihired because there's a rumour they are going to have billions of dollars of funding for an idea. They will publish papers that in a few months will be ridiculed. And they will disappear with a lot of money.
> The bubble itself becomes increasingly intellectually dishonest, increasingly unserious, as it inflates.
You started to sound like Dario, who likes to accuse others of being intellectually dishonest and unserious. Anyway, perhaps the strict wage structure of Anthropic will be its downfall in this crazy bubble?
I am accusing no one individually or particularly. I am observing that the problem is that within a bubble, people collectively become increasingly unserious and less intellectually honest as their appetite for risk increases with their desire to get a slice of the action. Indeed, it gets worse as people begin to think it might not last forever.
The same thing happened in the dotcom era, the same thing happened in the run-up to the subprime mortgage crisis. Every single bubble displays these characteristics.
I agree on all points. However, if he already had several millions like Mira and Ilya, his choice to work for Zuckerberg would likely be different. Where is the glory in bending the knee to Meta and Zuckerberg?
I think you're forgetting to factor in the tail risks when calculating the rate. In fact I doubt they still qualify as "tail" under those circumstances. There's a fairly decent chance the retirement plan ends up being facing a firing squad.
We get less violent because AI research pays more than murder, so more people focus on being good AI researchers than good killers, and the world is happier for it.
Threats of violence might get me to work for any of them; and I don't think I hate Amazon enough to do more than just not use it; but if I were somehow important enough to get a call from Zuckerberg, my answer would be "Meta delenda est", no matter how many digits or how cash-based the proposed offer was.
But I'm not important enough to be noticed, let alone called.
This is such a weird sentiment to me, comparing taking one of the highest paying corporate jobs in the history of humanity to bowing down to some dictator.
It's fine to think that we're in a bubble and to post a comment explaining your thoughts about it. But a comment like this is a low-effort, drive-by shoot-down of a comment that took at least a bit of thought and effort, and that's exactly what we don't want on HN.
Various BigCos have been reporting good results recently and have highlighted AI's significant role in that. That may be BS, of course, yet even if it is half-true we're talking about tens or hundreds of billions in revenue. With a 10x multiple, that supports something like a trillion-dollar valuation of AI in business right now.
Zuckerberg had a lot to say about AI in his part of Meta's Q2 earnings statement, but the follow-up earnings call [0] revealed the rather limited scope of AI in their ad business so far:
>"The first is enabling business AIs within message threads ... We’re expanding business AIs to more businesses in Mexico and the Philippines. And we expect to broaden availability later this year as we keep refining the product."
>"The second area of business AI development is within ads ... We’re currently testing this with a small number of businesses across Feed and Reels on Facebook and Instagram as well as Instagram Stories."
>"And then the final area that we are exploring is business AIs on business websites to help better support businesses across all platforms ... and we’re starting to test that with a few businesses in the US."
So it's just very small scale tests so far - not the sort of thing that would have any measurable impact on their revenue.
It's also possible, based on what happens to those who win the lottery, that his life could become a lot harder. It's not great to have the fact that you're making $2M a week plastered all over the Internet.
> He told his mother he wanted to organize a carnival for his friends, and mistakenly, he said, he placed an order for almost 70,000 pieces of the candy instead of reserving it.
> We estimate that users in the Facebook deactivation group reported a 0.060 standard deviation improvement in an index of happiness, anxiety, and depression, relative to control users. The effect is statistically distinguishable from zero at the p < 0.01 level, both when considered individually and after adjusting for multiple hypothesis testing along with the full set of political outcomes considered in Allcott et al. (2024). Non-preregistered subgroup analyses suggest larger effects of Facebook on people over 35, undecided voters, and people without a college degree.
> We estimate that users in the Instagram deactivation group reported a 0.041 standard deviation improvement in the emotional state index relative to control. The effect is statistically distinguishable from zero at the p = 0.016 level when considered individually, and at the p = 0.14 level after adjusting for multiple hypothesis testing along with the outcomes in Allcott et al. (2024). The latter estimate does not meet our pre-registered p = 0.05 significance threshold. Substitution analyses imply this improvement is achieved without shifts to offline activities. Non-preregistered subgroup analyses suggest larger effects of Instagram on women aged 18–24.
Perhaps it wasn't clear what I meant. When I said significantly, I meant it in the colloquial sense, not in the statistical significance sense.
I was looking for a more digestible figure describing the extent of improvements, not whether the study found them confidently distinguishable (which I just assumed they did based on the wording; good to know they didn't for Instagram).
A 0.060 standard deviation improvement is super small. If the average person rates their happiness/anxiety/depression score at, say, 50 out of 100, and the standard deviation (how spread out people’s scores are) is around 10 points, then 0.060 SD = 0.6 points. So quitting Facebook gave an average person a ~1% bump in mood score. Instagram was even smaller: ~0.4 points, or 0.8%.
It's real, but barely noticeable for most people—unless you're in a more affected subgroup (e.g. undecided voters or younger women). Your experience feeling way better likely means you were an outlier (in a good way).
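For anyone who wants to poke at that arithmetic, here is a minimal sketch of the same conversion. The 0-100 scale, the mean of 50, and the SD of 10 points are the illustrative assumptions from the comment above, not figures reported by the study; only the 0.060 and 0.041 SD effects come from the quoted results.

    # Hypothetical mood scale assumed for illustration: mean 50, SD 10, range 0-100.
    MEAN, SD = 50.0, 10.0

    # Effect sizes (in standard deviations) quoted from the deactivation study.
    for platform, effect_sd in [("Facebook", 0.060), ("Instagram", 0.041)]:
        raw_points = effect_sd * SD               # effect converted to scale points
        pct_of_mean = raw_points / MEAN * 100     # change relative to the average score
        print(f"{platform}: {raw_points:.2f} points (~{pct_of_mean:.1f}% of the mean score)")

    # Output:
    # Facebook: 0.60 points (~1.2% of the mean score)
    # Instagram: 0.41 points (~0.8% of the mean score)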
On the contrary, reporting changes relative to the standard deviation of a control group frees you from scales and their meanings, because it relates the observed change to the normal spread of scores before the intervention. In this way, you don't need to know the scale and its meaning to know if a change is big or small, and from a statistical perspective, that's (almost) all you need to find out whether a change is significant or due to random chance. Of course, looking back at the original scale and its meaning can help in interpreting the results in other ways.
Standard deviation helps, but you still need to know: standard deviation of what? It's no different than saying someone scored 78% - 78% of what? What is it in the denominator? Also, different scales can represent the same thing differently.
Secondly, the impact of the difference isn't known - you don't know the curve representing the relationship of score to impact. In some contexts a little change is meaningless - the curve is flat; in others the curve is steep and it can be transformational. And impacts only sometimes scale linearly with performance or score, of course.
Without that knowledge, standard deviation means nothing beyond how unusual, in the given population, the subject's performance is.
The best thing you can do is compare it to another study, since turning 0.06 standard deviations into a percentage of happiness isn’t going to be that telling.
In general, 0.2 is considered a small effect.
So 0.06 is quite small — likely not a practically noticeable change in well-being. But impressive to me when I compare it to effect sizes of therapy interventions which can lie around 0.3 for 12 weeks.
Quote:
> “50 randomized controlled trials that were published in 51 articles between 1998 and August 2018. We found standardized mean differences of Hedges’ g = 0.34 for subjective well-being, Hedges’ g = 0.39 for psychological well-being, indicating small to moderate effects, and Hedges’ g = 0.29 for depression, and Hedges’ g = 0.35 for anxiety and stress, indicating small effects.”
Multiply by roughly .4 (for small effects) to get the PERCENTILE ranking change, not percent. If you were of average happiness and you improved by 1 stdev, you would now be happier than about 84% of your peers (when you were at the 50th percentile before); a 0.6 stdev improvement would put you ahead of about 73% of your peers. The effect reported in the study, though, is 0.060 stdev, not 0.6.
So to put it colloquially: if you have 100 friends and you were right in the middle of the pack (the 50th happiest), quitting Facebook would, on average, bump you ahead of about two of them in the happiness race.
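And for the exact version of that shortcut, here is a minimal sketch using the standard normal CDF. The helper name is mine, and it assumes scores are roughly normally distributed, which the study does not guarantee:

    from math import erf, sqrt

    def percentile_after_shift(effect_sd: float) -> float:
        # Percentile rank of a previously-average person after improving by
        # `effect_sd` standard deviations, assuming roughly normal scores.
        return 0.5 * (1 + erf(effect_sd / sqrt(2))) * 100

    for d in (1.0, 0.6, 0.060):
        print(f"{d} SD -> {percentile_after_shift(d):.1f}th percentile")

    # Output:
    # 1.0 SD -> 84.1th percentile
    # 0.6 SD -> 72.6th percentile
    # 0.06 SD -> 52.4th percentile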
People who use Facebook may feel anywhere from very strong depression to none at all. Somewhere in that range there's an average level of feeling depressed, and people's scores are spread out around that average; the typical size of that spread is what statisticians call a "standard deviation".
Now, the users who stopped using Facebook became slightly less depressed, by 6% of one such "standard deviation". If you buy a small coke at McDonald's and take one sip, you make it about 6% smaller. It's not unnoticeable (you got that refreshing sip), but about 15 more such sips still remain!
In other words, there is an effect that can reliably be detected ("statistically significant"), but it's not a big deal either.