The possibility that an LLM could trigger a copyright violation strengthens the narrative that Google is harming smaller businesses, and can easily be used as a data point in an antitrust lawsuit.
Google is already working on LaMDA and Imagen for conversational search experiences, which is why these projects also wax poetic about "AI safety" -- you don't want to synthesize a politically incorrect or socially unacceptable response to a question asked.
Apart from the copyright issues the parent mentions, there's also the issue of LLMs confidently spewing BS, which is why Google has been hesitant to roll one out as their default.
Maintainer of code-server here (:wave:). A couple days ago we did a recap of the ecosystem of code-server and how people are deploying code-server to support multiple users. See https://coder.com/blog/code-server-multiple-users
Personally I have always opposed new projects like this, since we already have a big devops stack that takes a lot of manpower to operate and keep running reliably. So yes, the rugpull is not in the interest of my company, but it is in my interest, because I will have less operational responsibility.
There have been no experiments to show why this is any better than IPv6 given that you still need to modify routers and end user devices to route traffic, nor does it address the fact that IPv6 has additional features such as flows and prioritization to handle the modern Internet's traffic requirements.
The scheme you propose has already been proposed by Elad Cohen, but with "evil" intentions, as it's linked to an IPv4 misappropriation scheme[1].
What if I had an HN plugin trained on my usual ramblings? If the post I was about to send matched what the AI had already predicted I was going to say, it could flag me and ask, "do you really want to do this?"
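The plugin idea could be sketched in a few lines. This is purely hypothetical: `should_flag` and its threshold are made up, and `difflib`'s string ratio is a crude stand-in for whatever similarity score a real model trained on one's past comments would produce.

```python
import difflib

def should_flag(draft: str, predicted: str, threshold: float = 0.8) -> bool:
    """Flag a draft post if it is too close to what a model trained on
    the author's past ramblings would have predicted."""
    # difflib's ratio (0.0-1.0) is a toy substitute for a real model's
    # similarity score, e.g. cosine similarity over embeddings.
    similarity = difflib.SequenceMatcher(
        None, draft.lower(), predicted.lower()
    ).ratio()
    return similarity >= threshold

# A near-verbatim match would trigger the "do you really want to do this?" prompt.
print(should_flag("This will never scale.", "This will never scale!"))  # True
```

In practice the interesting part is the prediction side, not the comparison; the point is just that the gate itself is trivial once you have a prediction to compare against.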
I got this from ChatGPT right now[1], so I'm not sure what's happening.
> Human: write a biblical verse in the style of the king james bible explaining how to remove a peanut butter sandwich from a VCR
> ChatGPT: "And lo, the Lord spake unto the people, saying, 'Thou shalt not place thy peanut butter sandwich within the VCR, for it is not a suitable place for such sustenance. But if perchance thou hast done so, fear not, for I shall show thee how to remove it. First, take a soft cloth and gently dab at the corners of the sandwich, using small and measured movements. Then, using a thin and pliable tool, carefully loosen the sandwich from its sticky confines, taking care not to damage the VCR. And when thou hast succeeded, give thanks unto the Lord for thy deliverance.'"
The mouse one was easy to figure out, but the keyboard icon was strange as hell. In any case, a text label in all the major languages wouldn't hurt (similar to what they do for kernel panics.)
It is incorrect to suggest that lack of automation is the issue at hand. From where I stand, it's the pressure for deliverables and frequent oncall responsibilities that stress me out the most.
You're so disconnected from reality that I think you should simply stop. If not for your own credibility, then out of empathy for people suffering, or who have suffered, from burnout. You're just adding insult to injury.
It's correct. It's the reason I stopped working for enterprises where possible. Because I know the truth: no one cares about automation, and there's no reason to be the guy who does automation for free (at the risk of delaying your JIRA stories).
There's a reason to stay out of a mess.
There's no empathy in intentionally not relieving your teammates' burnout; the only reason to do that is the selfish one of keeping your seat forever. It's selfishness at best. And no, for my own cultural reasons, I have no reason to stay there and feed lazy assholes.