> By 2024, the legacy Python service had a history of reliability and performance issues. Ownership and maintenance of this service had become more cumbersome for all involved teams. Due to this, we decided to move forward into modern and domain-specific Go microservices.
Using Python for a backend system that needs to "scale" is pure cope; it was never scalable in the first place, as Reddit just found out in the long run. They knew they needed a lot more than fake optimizations of an interpreted language to improve performance, and a Golang rewrite unsurprisingly solved those issues.
This once again clearly shows that, outside the prototyping stage of an MVP, it really makes no sense in 2025 to scale a backend written in an interpreted language.
Switching to a safe, highly performant, mature language such as Golang not only improves performance in general, it also handles concurrency correctly, which Python has always struggled with, especially at scale; it's why, in Reddit's case, the race conditions had already been revealing themselves so clearly before the rewrite.
Reddit was founded in 2005, Go was released in 2009.
They picked the tech that was available and mature at the time, and it enabled them to scale for 20 years (to 100M+ DAUs and an IPO) - seems like a pretty good choice to me.
You know which other platform was built on Python? YouTube.
Python isn't a bad choice if you're building [certain kinds of] billion-dollar businesses.
Python is pretty much universally a bad choice for a backend. It's not even "easier" than the other backend stacks - Java Spring and .NET are right there, and are specifically optimized for this use case.
If you want to build an app that's easy to maintain, Python is a bad choice because of its dynamic typing and its struggles around tooling. If you want to build an app that's performant enough to scale, Python is a bad choice because Python. If you want to build an app that will get you a website ASAP, Python is still not the best choice, because other languages have larger batteries-included frameworks.
In 2005, even PHP would have been a better choice and would probably still be performant enough to run today.
Today, the story is much worse. Python has use cases! Experimentation, anything science related, "throwaway" code. Applications are just not one of them, IMO.
Here we go, someone's throwing around the S word again. Based on my experience, every time someone mentions "scalability" in a web application context I smell red flags.
Now we don't have details on what the comments service in Reddit entails - maybe it does indeed do a lot of CPU-intensive processing, in which case moving to Golang will definitely help.
But maybe it's also just a trivial "read from DB, spit out JSON", in which case the bottleneck will always be the DB, and "scalability" is just an excuse to justify the work.
The fact this is part of a move off a "legacy" system to "modern" "microservices" suggests a bunch of developers having fun, incentivized to justify getting paid to keep having fun replacing a perfectly functioning system, rather than an actual hard blocker to scalability that couldn't be solved in a simpler way, like throwing more hardware at it.
At the very moment the person in charge says "ok this works, now make it not slow". Python is the modern-age BASIC: easy to write, good for prototypes, scripting, gluing together libraries, fast iteration. If you want performance and heavy data processing, anything else will be better: PHP, Java, even JavaScript.
For example, Python struggles to reach real-time performance decoding RLL/MFM data off of ancient 40-year-old hard drives (https://github.com/raszpl/sigrok-disk). 4GHz CPU and I can't break 500KB/s in a simple loop:
To optimize that code snippet, use temporary variables instead of member lookups to avoid slow getattr and setattr calls. It still won’t beat a compiled language, number crunching is the worst sport for Python.
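A rough sketch of that advice, assuming a decoder class along the lines of the one in this thread (the class, method names and loop body here are invented for illustration, not taken from sigrok-disk):

class Decoder:
    __slots__ = ("shift", "shift_index")

    def __init__(self):
        self.shift = 0
        self.shift_index = 0

    def feed_slow(self, pulses):
        # attribute load/store on every iteration (LOAD_ATTR / STORE_ATTR each time)
        for p in pulses:
            self.shift = ((self.shift << 1) | (p & 1)) & 0xFFFF
            self.shift_index += 1

    def feed_fast(self, pulses):
        # bind the attributes to locals once, loop on fast local slots, write back once
        shift = self.shift
        shift_index = self.shift_index
        for p in pulses:
            shift = ((shift << 1) | (p & 1)) & 0xFFFF
            shift_index += 1
        self.shift = shift
        self.shift_index = shift_index

Same result either way; the second version just avoids repeating the attribute lookups a million times inside the loop.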
Which is why, in practice, in Python you pay the cost of moving your data into a native module (numpy/pandas/polars), do all your number crunching over there, and then pull the result back.
Not saying it's ideal but it's a solved problem and Python is eating good in terms of quality dataframe libraries.
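A rough sketch of that pattern applied to this kind of bit-twiddling (the input array is made up; this is just the shape of the approach, not the sigrok-disk code):

import numpy as np

# a million 16-bit words with interleaved clock/data bits (synthetic input)
words = np.random.randint(0, 1 << 16, size=1_000_000, dtype=np.uint16)

# the same mask-shift-add compaction as the loop version, but over the whole array at once
x = words & 0x5555                          # keep every other bit of each word
x = (x + (x >> 1)) & 0x3333
x = (x + (x >> 2)) & 0x0F0F
decoded = ((x + (x >> 4)) & 0x00FF).astype(np.uint8)

Python executes a handful of numpy calls instead of running a bytecode loop per pulse; all the per-element work happens in C.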
All those class variables are already in __slots__, so in theory it shouldn't matter. Your advice is good
self.shift_index -= 16
# grab a 16-bit window and keep every other bit (the 8 bits of interest)
shift_byte = (self.shift >> self.shift_index) & 0x5555
# SWAR compaction: squeeze those 8 bits together, halving their spacing each step
shift_byte = (shift_byte + (shift_byte >> 1)) & 0x3333
shift_byte = (shift_byte + (shift_byte >> 2)) & 0x0F0F
# final fold leaves the 8 bits packed into the low byte
self.shift_byte = (shift_byte + (shift_byte >> 4)) & 0x00FF
but it buys only 2-4 milliseconds per 1 million pulses :) Declaring a local variable in a tight loop forces Python into a cycle of memory allocations and garbage collection, negating the potential gains :(
SWAR : 0.288 seconds -> 0.33 MiB/s
SWAR local : 0.284 seconds -> 0.33 MiB/s
This whole snippet is maybe what, 50-100 x86 opcodes? Native code runs at >100MB/s while Python 3.14 struggles around 300KB/s. Python 3.4 (Sigrok hardcoded requirement) is even worse:
This is very subjective. Using Python influences your architecture in ways you would not encounter with other languages.
I maintain a critical service written in Python and hosted in AWS; with about 40 containers it can do 1K requests/sec with good reliability. But we see issues with HTTP libraries and systemic pressure within the service.
1K requests/sec over 40 containers means 25 RPS per container. Are you using synchronous threads by any chance (meaning that while you're waiting on IO or a network call you are blocked, yet your CPU is actually idle)? If so you might benefit from moving to gevent and handling that load with just a handful of containers.
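For reference, the change can be that small (a minimal sketch; the app module and port are placeholders, not details of the commenter's service):

# cooperative IO instead of one blocked thread per in-flight request
from gevent import monkey
monkey.patch_all()                  # patch socket/ssl/etc. before other imports pull them in

from gevent.pywsgi import WSGIServer
from myapp import app               # hypothetical WSGI app (Flask, Django via WSGI, etc.)

# a single process can now interleave thousands of IO-bound requests
WSGIServer(("0.0.0.0", 8000), app).serve_forever()

If the service already runs under gunicorn, switching the worker class (gunicorn -k gevent ...) gets the same effect. Either way it only helps when the bottleneck really is waiting on IO rather than CPU.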
The scale where throwing more hardware at the CPU-intensive Python part (not the part that just waits on a DB, IO or another networked service - that won't change with Golang) starts costing more than paying developers to rewrite it in a new language, plus the downsides of introducing another language into the stack, throwing away all the "tribal knowledge" of the existing app, and so on.
Modern hardware is incredibly fast, so if you wait for said scale it may never actually happen. It's likely someone will win the push for a rewrite based on politics rather than an actual engineering constraint, which I suspect happened here.