OP is probably talking about pre-entry authorisation (ESTA), which is required by the US and will soon be required in the UK, Europe, Japan, etc. It's not a full-on visa with lots of paperwork, but it does eliminate the risk of someone showing up and being denied entry.
The 10 day transit visa explicitly required proof of onward journey and it couldn't be the same place you came from. That's the kicker. The current policy allows round trip visits anywhere for 30 days.
I assume that's a Gemini LLM response? You can tell Gemini is bullshitting when it starts using "often" or "usually" - like in this case "TPUs often come with large amounts of memory". Either they did or they didn't. "This (particular) mall often has a Starbucks" was one I encountered recently.
It's not bullshit (i.e., not intentional) but probabilities all the way down, as Hume reminded us: from observation alone, you can only say the sun will likely rise in the east. You'd need to stand behind a theory of the world to say more (but we were told "attention is all you need"...)
Google AI Pro is like $15/month for practically unlimited Pro requests, each of which takes a million tokens of context (and can also perform thinking, free Google search for grounding, and inline image generation if needed). This includes Gemini CLI, Gemini Code Assist (VS Code), the main chatbot, and a bunch of other vibe-coding projects which have their own rate limits or no rate limits at all.
It's crazy to think this is sustainable. It'll be like Xbox Game Pass: start at £5/month to hook people in, and before you know it it's £20/month with nowhere near as many games.
I agree that the TPUs are one of the things that are underestimated (based on my personal reading of HN).
Google already has a huge competitive advantage: they have more data than anyone else, they bundle Gemini into every Android phone to siphon even more data, and they own the Android platform itself. The TPUs truly make me believe there could end up being a sort of monopoly on LLMs, even though there are so many good models with open weights, so few (technical) reasons to create software that only integrates with Gemini, etc.
Google will have the lion's share of inference, I believe. OpenAI and Anthropic will have a very hard time fighting this.
It's literally just a folder of markdown notes with a handy search and file list, if that's what you want it to be. That's all I use it for. I think the hyper-organisation stuff is more of a hobby. It's not about productivity; it's enjoyable as an end in itself.
AI art is massively downvoted here and on Reddit, but boomers on Facebook seem happy to share it, so I think you'll do better on other platforms. The opinion of AI-generated creative work is just very low here. I personally agree: I've never seen an AI-generated story that was interesting, and I don't want to expose my children to it. I'd rather they get real stories written by real people.
Fair perspective. But the parent isn't passive here — they're the creative director. They decide what the story is about, who the hero is, what happens. The AI does a lot of the writing, yes, but the parent is the editor: they review every page, rewrite lines, regenerate illustrations they don't like. It's closer to working with a ghostwriter than pushing a button.
Most AI content feels empty because it's made for nobody in particular. A StoryStarling book is the opposite - a parent shaping a story around their specific child's world. That's a real story. They just had help telling it.
People who take it seriously are going to focus on the architecture, universe building, characters, and arc flow, and then, in a sense, let the writing take care of itself. The power tools of the cognitive era are arriving.
I'm reading a 500-page sci-fi book and evaluating it; the first 275 pages are fantastic, then I can feel the context collapse and it craps the bed.
Surprising there isn't a better way to do it than defining 10000 ligature config lines and 10000 glyphs. I guess dynamic combinations of subglyphs are a Unicode level thing?
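For readers unfamiliar with why the enumeration is needed: OpenType substitution rules map explicit input sequences to explicit precomposed glyphs, with no wildcard composition. A minimal sketch in Adobe's feature file (.fea) syntax, with hypothetical glyph names, might look like:

```fea
# Hypothetical .fea fragment: each ligature is an explicit rule
# mapping an input glyph sequence to one precomposed glyph.
feature liga {
    sub f i by f_i;      # "fi" -> single ligature glyph
    sub f f i by f_f_i;  # "ffi" -> its own separate glyph
} liga;
```

So n supported combinations really do mean roughly n rules and n drawn glyphs, unless the font can assemble pieces with positioning (GPOS) tricks instead.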
Yes. Ironically the non-falsetto portion (low and high notes, particularly belted in the last chorus) is much harder than the falsetto note. Most singers can do the falsetto note. But for whatever reason that impresses people more.
What do "easy" and "hard" mean? Reaching the range? Matching the pitch? Sustaining it for a time?
Not many tasks are so cut-and-dried that they're universally "easy" or "hard". "The Star-Spangled Banner" is a more challenging melody because of its wide range, and also because it's often sung solo, with minimal accompaniment, to huge crowds.
If I were singing falsetto notes, I could probably launch into the range, but could I match pitch and harmonize without AutoTune?
The Take On Me chorus has a two octave range in full voice (A2 to A4) and a falsetto at E5. I think it's harder to find people who can sing that chorus A2-A4 consistently than to find people who can squeak out a falsetto at E5. Yet the falsetto is more "impressive".
I guess I could be biased because I find it easy, and not everyone finds reinforced falsetto easy. But for comparison, Bohemian Rhapsody's famous falsetto high note is a Bb5, a full half-octave higher.
I don't think Steve Perry ever pitched down his vocals. IMO there are two types of singer: ones with good technique (from practice or genetics) who can sing easily, and those who have to really force notes out in the studio (relying on perfect conditions) and then pitch down live.
I agree; he always seemed comfortable in that range to me, on talk shows, etc. The guy from Aerosmith, on the other hand, I would be surprised if he could match the radio version regularly in concert. As for Layla, I guess I only know the live/MTV Unplugged version, and the original had higher notes?