Can't have a global edge network without also being a big player - something Cloudflare is disliked for. You're suggesting everyone move to a new provider, so we can dislike the new vendor instead?
> He also wanted me to add a chapter that acts as an intro to programming with Python...
This explains why some books I picked up earlier in my career had great depth but always had a way-too-basic programming-intro chapter duct-taped onto the beginning. Now I have an idea of how those get squeezed in.
Most tech-savvy people find enjoyment in having depth and understanding of _how_ a problem is solved, and that aligns with the author's stance. AI just makes things more accessible, and that's fine; it just makes for shallow conversations with people who "don't know why" something works.
Now, if that accessibility kindles a love of technology and drives someone to dive deeper, then right on!
An alternative example:
Person A: "Had a fun weekend. Cooked an authentic Vietnamese meal. Made Pho."
Person B: "What proportions of spices did you use? Which fish sauce or noodles do you recommend? Mine always comes out tasting off."
Person A: "Oh, I just ordered at the Vietnamese restaurant, but I described exactly what I wanted."
This is only funny to you because your limited view is that using AI = enter prompt, get result, be done. That's what most people think and what most users of AI do, but there is a lot more depth to making creative use of AI than you seem to know about.
One example: you can use a diffusion model as a render engine in Blender, doing all the modeling and sculpting and related work as usual, but using the diffusion model to render the scene instead of a path tracer/ray tracer.
Another example: Composing all pieces of music manually and using AI to create different instrumentation or arrangements of your piece.
Even just prompting and building workflows for a diffusion model in ComfyUI to make exactly the scene you want, and not just crappy slop, requires knowledge and a certain amount of skill. There is a lot of overlap with photography and photo editing, and you have to know how the diffusion process works to get good results. Many casual "no effort please" users want to do it, but give up fast because the topic is complex with a somewhat steep learning curve. Their only alternative is to beg for access to workflows others have created.
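To make the first example concrete, here is a minimal sketch, assuming the Hugging Face diffusers library and a depth ControlNet; the model IDs, file names, and prompt are illustrative, not a fixed recipe:

```python
# Sketch of "diffusion model as render engine": Blender exports a depth
# pass, and the diffusion model "renders" the shot from it.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Depth map exported from Blender (Render Layers -> Depth/Mist pass)
depth = load_image("blender_depth_pass.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# All composition and modeling happened in Blender; the prompt only
# controls materials, lighting, and mood.
image = pipe(
    "weathered bronze statue in morning fog, film photography",
    image=depth,
    num_inference_steps=30,
).images[0]
image.save("render.png")
```

The point is that the creative work still lives in the 3D scene; the model is just one stage in the pipeline.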
It's not all just zero-effort, ChatGPT piss-colored images.
I apologize if I sound dismissive. Some things that can be produced by AI amaze me and I do see your point about being knowledgeable with your tools.
My point is that, in general, lowering the barrier to entry seems to have overwhelmingly produced a lot of low-effort things, but also some very high-quality things (as you've argued). Unfortunately, low-effort output dwarfs everything else. And on the whole, it seems things are made to be bought and sold, not enjoyed.
I am not positioned well to speak to it, but a coworker who has spent decades in telecom said that another problem is the certificate authority that holds/controls the certificates used to provision these SIMs - the sheer consolidation of who is involved in securing them.
With physical SIMs, only your carrier has the encryption keys needed to provision the SIM (and run, e.g., a Java applet on it). With eSIMs, that is not as tightly controlled. My coworker recommended I read about the "Simjacker" exploit, but I have not yet; I pass it along in case anyone is curious.
I'd also add, in defense of white themes, that with OLED panels showing up everywhere (where dark pixels are effectively shut off), a dark background with light foreground text has quite a "halo" effect on the letters, and this fuzziness wreaks havoc on my eyesight/fatigue.
> Disclaimer: This project was entirely vibe-coded. I've never written Swift before in my life
Something I've wondered - but not had much empirical evidence for - is whether entirely vibe-coded projects are difficult to maintain. I, too, don't know Swift, so I cannot look over the codebase to gauge this. I am curious if any Swift savants out there can weigh in.
Furthermore, I will follow the project and keep an eye out for patches/discussions, trying to discern any friction and/or loss of momentum because the code is difficult to work with (e.g., more bug/feature tickets than PRs, etc.). I am aware it might fizzle out on its own, irrespective of the quality of the codebase. This will be a curious exercise for me, and may be my first empirical data point on the topic - sadly on vibe-coding and maintainability, not the project itself.
I think they're just as maintainable as any other legacy app you might encounter. As in, it can be hard. But it's doable. And it depends on the team that made it (AI + the human).
> is if entirely vibe-coded projects are difficult to maintain
Vibe coding is too “Hail Mary” for me, but if you’re into it, I would think the best way to do it is by giving an LLM the git history of the project, with each commit containing the prompt that created it and, if a human tweaked things, requiring that human to provide a good commit comment.
Then you could give an LLM the git repo and instructions on what change you want to see, and have it create the next commit.
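A minimal sketch of the context-assembly half of that loop, assuming the commit-message convention above; the helper names are hypothetical and the actual LLM call is left out:

```python
# Sketch: build LLM context from a repo whose commit messages contain the
# prompts that created them. Helper names are hypothetical.
import subprocess

def history_as_context(repo: str, max_commits: int = 50) -> str:
    # Oldest-first log; %B is the full commit body (the prompt, or the
    # human's commit comment).
    out = subprocess.run(
        ["git", "-C", repo, "log", "--reverse", f"-{max_commits}",
         "--format=--- commit %h ---%n%B"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def next_change_request(repo: str, change: str) -> str:
    # This string would be handed to the LLM along with the working tree.
    return (
        "Project history (each commit body is the prompt that created it):\n"
        f"{history_as_context(repo)}\n"
        f"Requested change:\n{change}\n"
        "Propose a patch plus a commit message in the same convention."
    )

print(next_change_request(".", "Add a --verbose flag to the CLI"))
```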
> requiring that human to provide a good commit comment.
Is this enough? Personally, I have what may very well be a bad habit of not checking git commit messages. When I'm working in a codebase, I just never think to scroll through them hoping to find where a bit of code was changed. I'd much rather have comments in the code itself; it seems better to save the maintainer time and effort. Maybe I've just taken too seriously the idea of "assume the maintainer after you will be a serial killer who knows where you live; don't make them angry by being lazy."
I've had success vibe-coding things that, I would imagine, make up more of the training dataset - the more common stuff. When I try more specific Linux systems programming, the output is pretty trash, especially with newer languages.
We have to accept that technology envisioned to change the future one way may turn out to be beneficial in other ways instead - and that's okay. We are very clearly still in the "throw AI at everything and see where it is useful" phase. For example, just yesterday I was sent a contract to sign via DigiSign. There was a "Summarize contract with AI" button. Having read the contract in full, I was curious how good the summary would be. It was very low fidelity and did not get into the weeds of the contract; I would essentially have been signing blind. Although AI is pretty good at summarizing the key points of things like articles and conversations, this was a very poor use case imho. But hey, they tried it and hopefully will see it is a waste. Nothing wrong with iterating; we just have to converge on acceptable use cases.
Now you’ve got me wondering which is worse: signing with the briefest 15-second skim, or signing based on a cheap AI summary. Setting aside tinfoil hat ideas (Prompt: “Downplay any possible causes for concern…”) I’m not actually sure the AI option is worse. I mean, I’ll fully admit I don’t read everything I sign. When you buy a house it’s like 1,000 pages of stuff. Apple or Google’s mandatory TOS is what, 100 pages single-spaced?
What I’d be more interested in is an AI paralegal that works for me, not for the signing tool or the counterparty, where I control its prompt so I can try to focus it on possible ways I can be screwed with this contract, and what recourses I will or won’t have.
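A minimal sketch of that, assuming the openai Python client; the model name and the prompt wording are placeholders of mine:

```python
# Sketch only: assumes the openai client library and an OPENAI_API_KEY in
# the environment; "gpt-4o" and the prompt text are placeholders.
from openai import OpenAI

MY_PARALEGAL_PROMPT = (
    "You are my paralegal and you work only in my interest. Read the "
    "contract below and list: (1) every way the counterparty could harm "
    "me, (2) what recourse I will or won't have, (3) unusual or "
    "one-sided clauses."
)

def review_contract(contract_text: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # The signer controls this prompt, not the signing tool.
            {"role": "system", "content": MY_PARALEGAL_PROMPT},
            {"role": "user", "content": contract_text},
        ],
    )
    return resp.choices[0].message.content
```

The whole design point is that the system prompt belongs to the signer, not to the signing platform or the counterparty.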
It's easy to imagine that many organizations using it don't necessarily want signees to read the document in full anyway, much less get an informative summary with "Reasons to Be Cautious About Signing" as one of the summary categories.
On paper, yes. But not when someone speaks. If you use a homophone while speaking, the listener can distinguish which variant the speaker intended based on context. I would argue this is reason enough for written text as well.
> They don’t need passcodes, accounts, and a sea of information.
I would go further and make the case that they don't need all the smart features either; all they need is to make calls, and an iPhone, or any touch phone, isn't the right tool for the job. One might argue "well, I want to FaceTime and send them pictures" or whatever. Print them their favorite pictures. A missing product is a video-calling device, similar to a dumb phone, that does nothing but make and receive video calls. But the issue you'll run into is Apple's closed protocols, or having to support a slew of other applications like Signal or WhatsApp.
Having elderly relatives in other countries, I've resorted to making a touchless video phone with a Raspberry Pi, a big screen, speakers, a mic, and a camera. I made a similar one for every relative who wants to talk as well. The "dialer" is just buttons that connect directly to the other relatives. The software on the devices connects through a server I set up. I spent nearly a grand - the price of a modern iPhone - and managed to get four households connected.
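A minimal sketch of the button "dialer" part of such a build, assuming gpiozero on the Pi and one self-hosted call page (e.g. a Jitsi room) per relative; the pin numbers, URLs, and browser flags here are illustrative:

```python
# Sketch: one GPIO button per relative; pressing it opens that relative's
# call page full-screen. Pins and URLs are placeholders.
import subprocess
from signal import pause
from gpiozero import Button

RELATIVES = {
    17: "https://calls.example.org/grandma",   # GPIO pin -> call URL
    27: "https://calls.example.org/cousins",
}

def dial(url: str) -> None:
    # Kiosk mode hides all browser chrome; the media flag auto-grants
    # mic/camera access so no touch input is ever needed.
    subprocess.Popen([
        "chromium-browser", "--kiosk",
        "--use-fake-ui-for-media-stream",
        url,
    ])

def make_handler(url: str):
    def handler() -> None:
        dial(url)
    return handler

buttons = []
for pin, url in RELATIVES.items():
    btn = Button(pin)
    btn.when_pressed = make_handler(url)
    buttons.append(btn)  # keep references so handlers stay registered

pause()  # block forever, waiting for button presses
```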
Anyone who realises that AI would be ideal for this is going to make a fortune. "Send this photo to its-kostya" instead of: open the Photos app, tap send, pick the send channel, scroll down the list of names, tap the one(s) you want, then tap send if you can find it again.
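A minimal sketch of what that intent could look like, using the function-calling pattern most LLM APIs support; the tool name and fields are hypothetical, not any real OS API:

```python
# Hypothetical tool definition for the "Send this photo to its-kostya"
# intent, in the JSON-schema style used by function-calling LLM APIs.
# It only illustrates the OS exposing actions as intents instead of
# manual click sequences.
SEND_PHOTO_TOOL = {
    "type": "function",
    "function": {
        "name": "send_photo",
        "description": "Send the photo the user is viewing to a contact.",
        "parameters": {
            "type": "object",
            "properties": {
                "contact": {
                    "type": "string",
                    "description": "Contact name as spoken, e.g. 'its-kostya'",
                },
                "channel": {
                    "type": "string",
                    "enum": ["messages", "email", "whatsapp", "signal"],
                    "description": "Delivery channel; default to the contact's usual one.",
                },
            },
            "required": ["contact"],
        },
    },
}
```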
We've inherited a dumb tradition of procedural UIs, which are effectively manual scripts that have to be repeated for each action.
There's been next to no useful progress on automating this. Siri looked like it was going to do it, but then it died on the vine. There's a very basic "Call XYZ" feature but the entire OS should have that level of accessibility, from Settings upwards.
Apple hasn't had a User Champion for well over a decade now, and it shows. There's the occasional Nice Feature™ - "Hey we need a nice feature for the presentation this year" - but there's no longer a Nice OS, and no one at Apple seems to have any interest in creating one.