Hacker News | BatteryMountain's comments

I don't drink much soda, maybe a coke once a year. Over the years the taste of coke has changed. Every year when I get that craving and drink one, it tastes different. So either the recipe has changed multiple times and there isn't one true coke flavour, or my taste buds might be faulty.

Coke is actually very different from country to country, and less so from state to state. This is because Coke uses local bottling companies, and they may be using different water. The Coke you buy at your local store is probably bottled somewhere close by to keep shipping costs down.

If you're a fan of Dr Pepper, you'll notice it comes in two different bottles depending on where you buy it. That's because in some regions Dr Pepper uses Pepsi bottlers and in others it uses Coke bottlers.


It's not only the water, there are more differences like high fructose corn syrup versus other sugar forms.

As far as I know, the HFCS vs Sucrose is unlikely to be the reason for the difference in taste. I'm basing that off this video: https://www.youtube.com/watch?v=NY66qpMFOYo

TLDR: carbonic acid breaks down sucrose to glucose/fructose anyway


How long does the breakdown take?

Coke used to be mixed, bottled, and shipped out in an extremely quick timeframe. Inventory turned over fast.

I suspect the pre-split components end up tasting like a soda that has gone stale on the shelf: it's like buying a soda whose sugar component is already stale.

Sure, the rest of the flavors are there and still fresh, unaffected by the carbonated water, but the sweetness one is off.


Isn't Coca Cola water reverse osmosis filtered?

The taste of local water should be irrelevant.


They don't do full reverse osmosis to the purest extent. There are still quite a few minerals left, and that's actually better for the end product.

I can't speak for coke, but for bottled water, they often add minerals back in.

Honestly, the water is just a guess since the taste is different and the syrup comes directly from Coke. In other comments here, people mention the cans used but I’ve had Coke in glass from different countries taste different.

I divide Coke into USA vs ROW. The HFCS stuff in USA tastes absolutely nothing like the cane sugar Coke from anywhere else, which is why everyone is always trying to buy the Mexican stuff in the USA.

You should try the Coke made in Mexico. Easiest way I find it is by searching for “Mexico coke” on uber eats or something similar.

Most stores carrying products made in Mexico have it.


Coke from Mexico is cane-sugar based instead of being made with high-fructose corn syrup. But didn't Trump say they would ban HFCS in Coke in favor of cane sugar?

1) Mexican coke was cane sugar based (as was US coke at one point), but the huge excess of corn created by US ag policy has shifted much of their production to HFCS

2) As it turns out, a cane sugar (sucrose) base for a dilute acidic liquid will very quickly assume an equilibrium ratio of intact sucrose to sucrose that's been cleaved in half into glucose & fructose, dictated by molecular interactions. Testing these drinks will always find a good amount of fructose.


Coke from Mexico also has a slightly different flavor profile that has nothing to do with the source of sugar.

The sugar-sweetened Cokes are unique, but I think McDonald's consistently has the best Coke you can get. Unlike most restaurants, they store syrup in stainless steel containers instead of plastic bags, and they use a higher syrup-to-soda ratio than most places.

I’m not sure if the flavor has changed much in the past 30 years, but I do know that a McDonald’s Coke is almost always good.


I worked at McDonald’s 20 years ago and we used plastic syrup bags…. Never heard about these stainless steel containers you speak of.

Odd. It seems dependent on the location. I see lots of comments on Reddit echoing your experience, but a few that mention steel tanks.

McDonald’s consistently has the worst Coke. It’s so watered down and flat.

Setting up wireguard manually can be a pain in the butt sometimes. Tailscale makes it super easy but then your info flows through their nodes.

Which router OS are you using? I have openwrt + daily auto-updates configured, with a couple of packages blacklisted that I manually update now & then.

Tailscale is a good first step, but it's best to configure wireguard directly on your router. You can try headscale, but it seems to be more of a hobby project - so native wireguard is the only viable path. Most router OSes support wireguard these days too. You can ask claude to sanity check your configuration.
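For reference, the router side of a native wireguard setup is not much config at all. This is a hypothetical wg-quick-style sketch, not a config from the thread: keys, addresses, and the port are placeholders, and note that OpenWrt actually expresses the same settings through UCI (`/etc/config/network`) rather than a file like this.

```ini
# Minimal WireGuard interface on the router (placeholders throughout)
[Interface]
PrivateKey = <router-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# One block like this per client device
PublicKey = <phone-public-key>
AllowedIPs = 10.0.0.2/32
```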

I don't know why people jump into obscure/difficult distros as a first attempt... it's like you want to fail. Just use something simple like Fedora + KDE. None of the issues the author hit would be a problem there out of the box.

Basically, whenever I use a machine that has an nvidia gpu, I always use xfce, as it just works and needs the least amount of babysitting of nvidia drivers and breakages. For everything else I use KDE.

I have some old chromebooks (flashed with chromebox firmware) that use xfce too, which works great!

So kde & xfce are the only two desktops I use these days & have patience for.


Does the DE matter for your GPU? Can you give some examples of what xfce does better than kde when you've got Nvidia? Because I've got Nvidia and am using kde.

XFCE is x11 only which might alleviate some Wayland bugs with nvidia.

Some fuel for the fire: the last two months mine has become way better, one-shotting tasks frequently. I do spend a lot of time in planning mode to flesh out proper plans. I don't know what others are doing that they are so sceptical, but from my perspective, once I figured it out, it really is a massive productivity boost with minimal quality issues. I work on a brownfield project with about 1M LoC, fairly messy, mostly C# (so strong typing & strict compiler is a massive boon).

My work flow: Planning mode (iterations), execute plan, audit changes & prove to me the code is correct, debug runs + log ingestion to further prove it, human test, human review, commit, deploy. Iterate a couple of times if needed. I typically do around three of these in parallel to not overload my brain. I have done 6 in the past but then it hits me really hard (context switch whiplash) and I start making mistakes and missing things the tool does wrong.

To the ones saying it is not working well for them, why don't you show and tell? I cannot believe our experiences are so fundamentally different, I don't have some secret sauce but it did take a couple of months to figure out how to best manipulate the tool to get what I want out of it. Maybe these people just need to open their minds and let go of the arrogance & resistance to new tools.


> My work flow: Planning mode (iterations), execute plan, audit changes & prove to me the code is correct, debug runs + log ingestion to further prove it, human test, human review, commit, deploy. Iterate a couple of times if needed.

I'm genuinely curious if this is actually more productive than a non-AI workflow, or if it just feels more productive because you're not writing the code.


One reason why it can be more productive is that it can be asynchronous. I can have Claude churning away on something while I do something else on a different branch. Even if the AI takes as long as a human to do the task, we're doing a parallelism that's not possible with just one person.

Go through a file of 15000 lines of complex C# business logic + db code, search for a specific thing X and refactor it, while going up & down the code to make sure it is correct. Typically these kinds of tasks can take anywhere from 1 day to a week for a good human developer, depending on the mess and who wrote it (years ago, under different conditions). With my workflow I can get a good analysis of what the code is doing, where to refactor (and which parts to leave alone), where some risks are, and find other issues that I didn't even know about before - all within 10 minutes. Then doing my iteration above to fix it (planning & coding) takes about another 30 minutes. So 30 minutes vs 1 week of hair pulling and cursing (previous developers' choices...). And it is not vibe coding: I check every single change in a git diff tool long before committing, and I understand everything being done and why before I use it.


Here is a short example from my daily life: a D96A INVOIC EDI message containing multiple invoices, transformed into an Excel file.

I used the ChatGPT web interface for this one-off task.

Input: A D96A INVOIC text message. Here is what those look like, a short example, the one I had was much larger with multiple invoices and tens of thousands of items: https://developer.kramp.com/edi-edifact-d96a-invoic

The result is not code but a transformed file. This exact scenario can easily be turned into code, though, by changing the request from "do this" to "provide a [Python|whatever] script to do this". Internally the AI produces code, runs it, and gives you the result. You actually make it do less work if you just ask for the script and tell it not to run it.

Only what I said. I had to ask for some corrections because it made a few mistakes interpreting the codes.

> (message uploaded as file)

> Analyze this D.96A message

> This message contains more than one invoice, you only parsed the first one

(it finds all 27 now)

> The invoice amount is in segment "MOA+77". See https://www.publikationen.gs1-germany.de/Complete/ae_schuhe/... for a list of MOA codes (German - this is a German company invoice).

> Invoice 19 is a "credit note", code BGM+381. See https://www.gs1.org/sites/default/files/docs/eancom/ean02s4/... for a list of BGM codes, column "Description" in the row under "C002 DOCUMENT/MESSAGE NAME"

> Generate Excel report

> No. Go back and generate a detailed Excel report with all details including the line items, with each invoice in a separate sheet.

> Create a variant: All 27 invoices in one sheet, with an additional column for the invoice or credit note number

> Add a second sheet with a table with summary data for each invoice, including all MOA codes for each invoice as a separate column

The result was an Excel file with an invoice per worksheet, and metadata in an additional sheet.

Similarly, by simply doing what I wrote above, but at the start telling the AI not to do anything and to instead give me a Python script, with similar instructions, I got a several-hundred-line Python script that processed my collected DESADV EDI messages in XML format ("Process a folder of DESADV XML files and generate an Excel report.")

If I had had to actually write that code myself, it would have taken me all day and maybe more, mostly because I would have had to research a lot of things first. I'm not exactly parsing various format EDI messages every day after all. For this, I wrote a pretty lengthy and very detailed request though, 44 long lines of text, detailing exactly which items with which path I wanted from the XML, and how to name and type them in the result-Excel.

ChatGPT Query: https://pastebin.com/1uyzgicx

Result (Python script): https://pastebin.com/rTNJ1p0c
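The segment structure the prompts above walk through can be sketched in a few lines of Python. This is a hypothetical illustration, not the pastebin script: it splits the interchange on the segment terminator and collects each invoice's BGM document code (380 = invoice, 381 = credit note) and its MOA+77 amount. A real parser would also need EDIFACT release-character (`?`) handling, or better, a proper EDI library.

```python
# Hypothetical sketch: pull invoices out of a D96A INVOIC interchange.
# Segments end with "'", data elements are separated by "+", and
# composite components by ":". No release-character handling here.

def parse_invoices(message: str):
    invoices = []
    current = None
    for raw in message.split("'"):
        seg = raw.strip()
        if not seg:
            continue
        elements = seg.split("+")
        tag = elements[0]
        if tag == "BGM":  # BGM starts a new invoice/credit note
            current = {"doc_code": elements[1], "amounts": {}}
            invoices.append(current)
        elif tag == "MOA" and current is not None:
            # MOA composite looks like MOA+77:1234.56
            qualifier, _, value = elements[1].partition(":")
            current["amounts"][qualifier] = float(value)
    return invoices

sample = ("UNH+1+INVOIC:D:96A:UN'"
          "BGM+380+INV001'MOA+77:100.50'"
          "BGM+381+CRN002'MOA+77:25.00'UNT+5+1'")
print(parse_invoices(sample))
```

From a structure like this, writing each invoice to its own worksheet is a straightforward loop with any Excel library.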


> why don't you show and tell?

How do you suggest? At a high level, the biggest problem is the high latency and the context switches. It is easy enough to get the AI to do one thing well. But because it takes so long, the only way to derive any real benefit is to have many agents doing many things at the same time. I have not yet figured out how to effectively switch my attention between them. But I wouldn't have any idea how to turn that into a show and tell.


I don't know how y'all are letting the AIs run off with these long tasks at all.

The couple of times I even tried that, the AI produced something that looked OK at first and kinda sorta ran, but it quickly became spaghetti I didn't understand. You have to keep such a short leash on it, carefully review every single line of code, and understand thoroughly everything that it did. Why would I want to let that run for hours and then spend hours more debugging it or cleaning it up?

I use AI for small tasks, or to finish my half-written code, or to translate code from one language to another, or to brainstorm different ways of approaching a problem when I have some idea but feel there's a better way to do it.

Or I let it take a crack when I have a concrete failing test or build. Feeding that into an LLM loop is one of my favorite things, because it can just keep trying until it passes, and even if it comes up with something suboptimal you at least have something that compiles that you can tidy up a bit.

Sometimes I'll have two sessions going but they're like 5-10 minute tasks. Long enough that I don't want to twiddle my thumbs for that long but small enough that I can rein it in.


I find it interesting that you're all writing 'the AI' as if it's a singular thing. There's a myriad of ways to code with a myriad of AIs, and none of them are identical. I use Qwen 3 32B with Cline in VSCode for work, since I can't use cloud-based AI. For personal projects, I use Codex in the cloud. I can let Codex perform some pretty complicated tasks and get something usable. I can ask Qwen something basic and it ends up in a loop, delivering nothing useful.

Then there's the different tasks people might ask from it. Building a fully novel idea vs. CRUD for a family planner might have different outcomes.

It would be useful if we could have more specific discussions here, where we specify the tools and the tasks it either does or does not work for.


The problem with current approaches is the lack of feedback loops with independent validators that never lose track of the acceptance criteria. That's the next level that will truly allow no-babysitting implementations that are feature complete and production grade. Check out this repo that offers that: https://github.com/covibes/zeroshot/

Longest task mine has ever done was 30 minutes. Typically around 10 minutes for complex tasks. Most things take less than 2 minutes (these usually offer the most bang for the buck, as they save me half a day).

As a die-hard old schooler, I agree. I wasn't particularly impressed by Copilot, though it did show a few interesting tricks.

Aider was something I liked and used quite heavily (with sonnet). Claude Code has genuinely been useful. I've coded up things which I'm sure I could have done myself if I had the time "on the side", and used them in "production". These were mostly personal tools, but I do use them on a daily basis and they are useful. The last big piece of work was refactoring a 4000-line program, which I wrote piece by piece over several weeks, into something with proper packages and structures. There were one or two hiccups but I have it working. Took a day and approximately $25.


I have basically the same workflow. Planning mode has been the game changer for me. One thing I always wonder is how do people work in parallel? Do you work in different modules? Or maybe you split it between frontend and backend? Would love to hear your experience.

I plan out N features at a time, then have it create N git worktrees and spawn N subagents. It does a decent job. I find doing proper reviews on each worktree kind of annoying, though, so I tend to pull them in one at a time and do a review, code, test, feedback loop until it’s good, commit it, pull in the next worktree and repeat the process.

I literally have 3 folders, each on their own branch. But lately I use 1 folder a lot and work on different features (ones that won't introduce "merge conflicts" in a sense). Or I do readonly explorations (code auditing is fun!) while another one makes edits on a different feature, and maybe another one does something else in the Flutter app folder. So it's fairly easy to parallelize things like this. The next step is to install the .net sdk + claude on some VMs and just trigger workflows from there, so no IDE involved...

You won't be able to parallelize things if you just use the IDEs and their plugins. I do mine in the terminal with extra tabs, outside of the IDE.


This.

If you’re not treating these tools like rockstar junior developers, then you’re “holding it wrong”.


The problem I have with this take is that I'm very skeptical that guiding several junior developers would be more productive than just doing the work myself.

With real junior developers you get the benefit of helping develop them into senior developers, but you really don't get that with AI.


So then do your thing, while it’s doing scaffolding of your next thing.

Also: are you sure?

There are as many of them as you're talented enough to asynchronously instruct, and you can tell them the boundaries within which to work (or not), in order to avoid too little or too much being done for you to review and approve effectively.


My running joke and justification to our money guy (to pay for expensive tools) is that it's like I have 10 junior devs on my side with infinite knowledge (a domain expert with too much caffeine) and no memory or feelings (I can curse at it without conversations with HR), who can code decently enough (better than most juniors, actually) and do excellent system admin work... all for a couple hundred dollars a month, which is a bargain!

> To the ones saying it is not working well for them, why don't you show and tell?

Sure, here you go:


Ran out of context too soon?

The crazy part is, once you have it set up and have adapted your workflow, you start to notice all sorts of other "small" things:

claude can call ssh and do system admin tasks. It works amazingly well. I have 3 VMs which depend on each other (proxmox with openwrt, adguard, unbound), and claude can prove to me that my dns chains work perfectly, my firewalls are correct, etc., as claude can ssh into each. Setting up services, diagnosing issues, auditing configs... you name it. Just awesome.

claude can call other sh scripts on the machine, so over time you can create a bunch of scripts that let claude one-shot certain tasks that would normally eat tokens. It works great. One script per intention - don't have a script do more than one thing.

claude can call the compiler, run the debug executable and read the debug logs... in real time. So claude can read my android app's debug stream via adb, or my C# debug console, because claude calls the compiler, not me. Just ask it to do it and it will diagnose stuff really quickly.

It can also analyze your db tables (give it readonly sql access), look at the application code and queries, and diagnose performance issues.
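The readonly access part is just a few grants on the database side. A hypothetical Postgres-flavored example (role, database, and schema names are placeholders, not from the thread):

```sql
-- Create a login role that can read everything but change nothing
CREATE ROLE claude_ro LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO claude_ro;
GRANT USAGE ON SCHEMA public TO claude_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO claude_ro;
```

With credentials like these, the agent can run EXPLAIN and inspect schemas without any risk of writes.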

The opportunities are endless here. People need to wake up to this.


> claude can call ssh and do system admin tasks

Claude set up a Raspberry Pi with a display and conference audio device for me to use as an Alexa replacement tied to Home Assistant.

I gave it an ssh key and gave it root.

Then I told it what I wanted, and it did. It asked for me to confirm certain things, like what I could see on screen, whether I could hear the TTS etc. (it was a bit of a surprise when it was suddenly talking to me while I was minding my own business).

It configured everything, while keeping a meticulous log that I can point it at if I want to set up another device, and eventually turn into a runbook if I need to.


I have a /fix-ci-build slash command that instructs Claude how to use `gh` to get the latest build from that specific project's GitHub Actions and fetch the logs for the build.

In addition there are instructions on how and where to push the possible fixes and how to check the results.

I've yet to encounter a build failure it couldn't fix automatically.
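A hypothetical version of such a command file (the actual one isn't shown here; `gh run list` and `gh run view --log-failed` are real gh subcommands, the rest is a placeholder sketch of what the instructions could say):

```markdown
Fix the latest failing CI build for this repository.

1. Find the most recent workflow run: `gh run list --limit 1`
2. Pull only the failing logs: `gh run view <run-id> --log-failed`
3. Diagnose the failure, apply a fix, and push it to the current branch.
4. Watch the new run and repeat until the build is green.
```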


Biggest Google Apps for me:

Gboard 247MB

Google 415MB

Google Play Services 1330MB

Google Play Store 165MB

Messages 321MB

Gmail 233MB


Instead of Gboard, get AnySoft Keyboard from https://f-droid.org and enable Gestures in the settings. You'll thank me later. F-Droid also has different keyboard layouts for AnySoft, such as better ones for SSH and, of course, language-related layouts.


Holy... What kinda heavy lifting is Google Play Services doing with those 1330 megs?

Dunno but my userdata for play services is 623MB.

To add to this, culture can be changed significantly in a short period. See how the USA has changed in the past 20 years: the culture has changed 2 or 3 times now, with vastly different values & attitudes in each period. What does each period have in common? Thick gobs of propaganda being pushed into every nook & cranny, and a lack of critical thinking on the individual level. If country X wants to change, it is very possible; it's just a matter of time, persistence & brainwashing. Brainwashing the youth is the easiest path, especially when it pushes in the opposite direction from what their parents/elders want.

> See how the USA has changed in the past 20 years

I’m not sure we will ever know the complete answer, but some of this change seems to involve Russia too.


Russia or not, somehow, team Red and team Blue picked cards on who's on which team, and we're not allowed to have differing opinions about who should be on what side.
