
> By 2024, the legacy Python service had a history of reliability and performance issues. Ownership and maintenance of this service had become more cumbersome for all involved teams. Due to this, we decided to move forward into modern and domain-specific Go microservices.

Using Python for a backend system and expecting it to "scale" is pure cope; it was unscalable in the first place and in the long run, as Reddit just found out. They knew they needed a lot more than fake optimizations in an interpreted language to improve performance, and a Golang rewrite unsurprisingly solved those issues.

This once again clearly shows that, outside the prototyping stage of an MVP, it makes no sense to scale a backend written in these interpreted languages in 2025.

Switching to a safe, highly performant, mature language such as Golang not only generally improves performance, but also handles concurrency correctly, something Python has always struggled with, especially at scale, which is why, in Reddit's case, the rewrite surfaced race conditions that had been lurking all along.
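
To make the concurrency point concrete, here's a minimal sketch (mine, not from the article) of why CPU-bound threading doesn't scale in CPython: the GIL only lets one thread execute bytecode at a time, so two threads take roughly as long as doing the work serially.

    import time
    from threading import Thread

    def burn(n: int) -> None:
        # pure-Python busy loop; no IO, so the GIL is never released for long
        while n:
            n -= 1

    N = 20_000_000

    start = time.perf_counter()
    burn(N); burn(N)
    serial = time.perf_counter() - start

    threads = [Thread(target=burn, args=(N,)) for _ in range(2)]
    start = time.perf_counter()
    for t in threads: t.start()
    for t in threads: t.join()
    threaded = time.perf_counter() - start

    # On GIL CPython, `threaded` comes out roughly equal to `serial`,
    # not half of it.
    print(f"serial: {serial:.2f}s  threaded: {threaded:.2f}s")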



Reddit was founded in 2005; Go was first released in 2009.

They picked the tech that was available and mature at the time, and it enabled them to scale for 20 years (to 100M+ DAUs and an IPO) - seems like a pretty good choice to me.

You know which other platform was built on Python? YouTube.

Python isn't a bad choice if you're building [certain kinds of] billion-dollar businesses.


Python is pretty much universally a bad choice for a backend. It's not even "easier" than other backend frameworks - Java Spring and .NET are right there, and they are specifically optimized for this use case.

If you want to build an app that's easy to maintain, Python is a bad choice because of its dynamic typing and struggles around tooling. If you want to build an app that's performant enough to scale, Python is a bad choice because Python. If you want to build an app that will get you a website ASAP, Python is still not the best choice, because other languages have larger batteries-included frameworks.

In 2005, even PHP would have been a better choice and would probably still be performant enough to run today.

Today, the story is much worse. Python has use cases! Experimentation, anything science-related, "throwaway" code. Applications are just not one of those use cases, IMO.


They actually started in Lisp and rewrote it in Python (and also, apparently, did not pick any of the "mature" web frameworks).

http://www.aaronsw.com/weblog/rewritingreddit


> unscalable

Instagram, which is significantly bigger than Reddit, disagrees.


Here we go, someone's throwing around the S-word again. In my experience, every time someone mentions "scalability" in a web application context, I smell red flags.

Now, we don't have details on what Reddit's comments service entails - maybe it does indeed do a lot of CPU-intensive processing, in which case moving to Golang will definitely help.

But maybe it's also just a trivial "read from DB, spit out JSON", in which case the bottleneck will always be the DB, and "scalability" is just an excuse to justify the work.

The fact that this is part of a move off a "legacy" system to "modern" "microservices" suggests there's a huge number of developers having fun, incentivized to keep getting paid to have fun replacing a perfectly functioning system, rather than an actual hard scalability blocker that couldn't be solved in a simpler way, like throwing more hardware at it.


> The fact that this is part of a move off a "legacy" system to "modern" "microservices" suggests there's a huge number of developers having fun ...

I don't think it suggests that at all. This is their press release, so of course they're going to spin it that way.


Reddit is one of the most visited websites, and they've now felt the need to migrate off Python. The average user's website is fine on Python.


At what scale do you think Python breaks down?


Reddit scale. For most of the world, Python is fine. Go is also so easy to work with that it makes sense it was the go-to after Python.

I've been saying it for almost 10 years: Go is the future for backends.


At the very moment the person in charge says "ok, this works, now make it not slow." Python is the modern-age BASIC: easy to write and good for prototypes, scripting, gluing together libraries, and fast iteration. If you want performance and heavy data processing, almost anything else will be better: PHP, Java, even JavaScript.

For example, Python struggles to reach real-time performance decoding RLL/MFM data off ancient, 40-year-old hard drives (https://github.com/raszpl/sigrok-disk). On a 4 GHz CPU I can't break 500 KB/s in a simple loop:

    # data[i] encodes a run length: shift in data[i]-1 zero bits followed
    # by a one, keeping a 40-bit window of the raw bitstream
    for i in range(len(data)):
        decoder.shift = ((decoder.shift << data[i]) + 1) & 0xffffffffff
        decoder.shift_index += data[i]
        if decoder.shift_index >= 16:
            decoder.shift_index -= 16
            # SWAR: pack the 8 bits at even positions of the 16-bit
            # window into one data byte
            decoder.shift_byte = (decoder.shift >> decoder.shift_index) & 0x5555
            decoder.shift_byte = (decoder.shift_byte + (decoder.shift_byte >> 1)) & 0x3333
            decoder.shift_byte = (decoder.shift_byte + (decoder.shift_byte >> 2)) & 0x0F0F
            decoder.shift_byte = (decoder.shift_byte + (decoder.shift_byte >> 4)) & 0x00FF


To optimize that code snippet, use temporary local variables instead of member lookups to avoid slow getattr and setattr calls. It still won't beat a compiled language; number crunching is the worst sport for Python.


Which is why, in practice, in Python you pay the cost of moving your data into a native module (numpy/pandas/polars), do all your number crunching over there, and then pull the result back.

Not saying it's ideal, but it's a solved problem, and Python is eating good in terms of quality dataframe libraries.
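
As a quick sketch of that pattern (hypothetical workload, not from the thread): the same bit-twiddling reduction done in a pure-Python loop versus vectorized in NumPy, where the per-element work runs in C.

    import numpy as np

    data = list(range(1_000_000))

    # Pure Python: one interpreter round-trip per element.
    total = 0
    for x in data:
        total += (x ^ (x >> 1)) & 0xFF

    # NumPy: identical arithmetic as a few vectorized C loops.
    arr = np.arange(1_000_000, dtype=np.int64)
    total_np = int(((arr ^ (arr >> 1)) & 0xFF).sum())

    assert total == total_np  # same answer, at a fraction of the time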


All those instance attributes are already in __slots__, so in theory it shouldn't matter. Your advice is good, though:

     self.shift_index -= 16
     shift_byte = (self.shift >> self.shift_index) & 0x5555
     shift_byte = (shift_byte + (shift_byte >> 1)) & 0x3333
     shift_byte = (shift_byte + (shift_byte >> 2)) & 0x0F0F
     self.shift_byte = (shift_byte + (shift_byte >> 4)) & 0x00FF


but it only gains exactly 2-4 milliseconds per 1 million pulses :) Declaring local variables in a tight loop forces Python into a cycle of memory allocations and garbage collection, negating the potential gains :(

    SWAR                           :     0.288 seconds  ->    0.33 MiB/s
    SWAR local                     :     0.284 seconds  ->    0.33 MiB/s

This whole snippet is maybe, what, 50-100 x86 opcodes? Native code runs at >100 MB/s, while Python 3.14 struggles at around 300 KB/s. Python 3.4 (a hardcoded Sigrok requirement) is even worse:

    SWAR                           :     0.691 seconds  ->    0.14 MiB/s
    SWAR local                     :     0.648 seconds  ->    0.14 MiB/s

You can try your luck: https://github.com/raszpl/sigrok-disk/tree/main/benchmarks - I will appreciate pull requests if anyone manages to speed this up. I gave up at ~2 seconds per RLL HDD track.

This is what I get right now decoding single tracks on an i7-4790 platform:

    fdd_fm.sr 0.9385 seconds
    fdd_mfm.sr 1.4774 seconds
    fdd_fm.sr 0.8711 seconds
    fdd_mfm.sr 1.2547 seconds
    hdd_mfm_RQDX3.sr 1.9737 seconds
    hdd_mfm_RQDX3.sr 1.9749 seconds
    hdd_mfm_AMS1100M4.sr 1.4681 seconds
    hdd_mfm_WD1003V-MM2.sr 1.8142 seconds
    hdd_mfm_WD1003V-MM2_int.sr 1.8067 seconds
    hdd_mfm_EV346.sr 1.8215 seconds
    hdd_rll_ST21R.sr 1.9353 seconds
    hdd_rll_WD1003V-SR1.sr 2.1984 seconds
    hdd_rll_WD1003V-SR1.sr 2.2085 seconds
    hdd_rll_WD1003V-SR1.sr 2.2186 seconds
    hdd_rll_WD1003V-SR1.sr 2.1830 seconds
    hdd_rll_WD1003V-SR1.sr 2.2213 seconds
    HDD_11tracks.sr 17.4245 seconds <- 11 tracks, 6 RLL + 5 MFM interpreted as RLL
    HDD_11tracks.sr 12.3864 seconds <- 11 tracks, 6 RLL + 5 MFM interpreted as MFM


This is very subjective. Using Python influences your architecture in ways you would not encounter with other languages.

I maintain a critical service written in Python and hosted on AWS; with about 40 containers it can do 1K requests/sec with good reliability. But we see issues with HTTP libraries and systemic pressure within the service.


1K requests/sec over 40 containers means 25 RPS per container. Are you using synchronous threads by any chance (meaning that while you're waiting on IO or a network call you are blocked, yet your CPU is actually idle)? If so, you might benefit from moving to gevent and handling that load with just a handful of containers.
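
For reference, a minimal sketch of the gevent route (the `myservice` import is hypothetical): monkey-patch the standard library so blocking socket calls yield to other greenlets instead of pinning a thread.

    # Must run before anything else imports socket/ssl/etc.
    from gevent import monkey
    monkey.patch_all()

    from gevent.pywsgi import WSGIServer
    from myservice import app  # hypothetical WSGI app, e.g. a Flask object

    # One process now multiplexes many IO-bound requests: while one request
    # waits on the DB or an upstream call, the others keep running.
    WSGIServer(("0.0.0.0", 8080), app).serve_forever()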


1K requests/sec doing what?

That's a really low rate in my world.

I write software handling a couple of million messages per second on a single core of a single machine.


We get 1K on a single small hetzner VPS, with Flask behind Nginx ¯\_(ツ)_/¯


Yeah, 25 req/sec/process is abysmally slow. You can write slow code in any language.

You don't end up seeing these kinds of complaints about Ruby backends, and Ruby is in the same order of magnitude in terms of speed.


The scale where throwing more hardware at your CPU-intensive Python parts (not the parts that just wait on a DB, IO, or another networked service - those won't change with Golang) starts costing more than paying developers to rewrite it in a new language, plus the downsides of introducing another language into the stack, throwing away all the "tribal knowledge" of the existing app, and so on.

Modern hardware is incredibly fast, so if you wait for said scale it may never actually happen. It's likely someone will win the push for a rewrite based on politics rather than an actual engineering constraint, which I suspect happened here.



