
> The fact this is part of a move off a "legacy" system to "modern" "microservices" suggests there's a huge amount of developers having fun ...

I don't think it suggests that at all. This is their press release, so of course they're going to spin it that way.


Reddit was founded in 2005; Go wasn't released until 2009.

They picked the tech that was available and mature at the time, and it enabled them to scale for 20 years (to 100M+ DAUs and an IPO) - seems like a pretty good choice to me.

You know which other platform was built on Python? Youtube.

Python isn't a bad choice if you're building [certain kinds of] billion-dollar businesses.


Python is pretty much universally a bad choice for a backend. It's not even "easier" than other backend stacks - Java Spring and .NET are right there, and are specifically optimized for this use case.

If you want to build an app that's easy to maintain, Python is a bad choice because of its dynamic typing and its struggles around tooling. If you want to build an app that's performant enough to scale, Python is a bad choice because Python. If you want to build an app that will get you a website ASAP, Python is still not the best choice, because other languages have larger batteries-included frameworks.

In 2005, even PHP would have been a better choice and would probably still be performant enough to run today.

Today, the story is much worse. Python has use cases! Experimentation, anything science related, "throwaway" code. Building applications is just not one of them, IMO.


They actually started in LISP and rewrote it in Python (and also apparently, did not pick any of the "mature" web frameworks).

http://www.aaronsw.com/weblog/rewritingreddit


Some of these would be retries that wouldn't have happened if not for earlier errors.


Yep, a decent canary mechanism should have caught this. There's a trade-off between canarying and rollout speed, though. If this was a system for fighting bots, I'd expect it to be optimized for the latter.


I'm shocked that an automatic canary rollout wasn't an action item. Pushing anything out globally in one shot guarantees a repeat of this failure in the future.

Even if you want this data to be very fresh you can probably afford to do something like:

1. Push out data to a single location or some subset of servers.

2. Confirm that the data is loaded.

3. Wait to observe any issues. (Even a minute is probably enough to catch the most severe issues.)

4. Roll out globally.
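The four steps above could be sketched roughly like this (a toy simulation - the `Server` class and function names are made up for illustration, not anyone's actual tooling):

```python
import time

class Server:
    """Toy stand-in for a real node; tracks the current and previous data push."""
    def __init__(self):
        self.data, self.previous = None, None

    def load(self, data):
        self.previous, self.data = self.data, data

    def rollback(self):
        self.data = self.previous

def staged_rollout(servers, data, is_healthy, canary_size=1, bake_seconds=60):
    """Push to a canary subset first; abort (and revert the canaries)
    before the global push if any canary turns unhealthy."""
    canary, rest = servers[:canary_size], servers[canary_size:]

    for s in canary:
        s.load(data)              # 1. push to a subset of servers
        assert s.data == data     # 2. confirm the data is loaded

    time.sleep(bake_seconds)      # 3. wait to observe any issues
    if not all(is_healthy(s) for s in canary):
        for s in canary:
            s.rollback()          # a bad push never leaves the canary set
        return False

    for s in rest:
        s.load(data)              # 4. roll out globally
    return True
```

Even with `bake_seconds` as low as 60, a push that crashes its consumers never makes it past the canary subset.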


Presumably optimal rollout speed means getting as close as you can to "push it everywhere all at once and activate immediately" - that's fine if you'd rather risk short downtime than delay a rollout. What I don't understand is why the nodes don't have any independent verification and rollback mechanism. I might be underestimating the complexity, but it really doesn't sound much more involved than a process launching another process, concluding that it crashed, and restarting it with different parameters.
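That supervisor pattern - launch the worker, notice it crashed, relaunch with known-good parameters - might look something like this sketch (the worker and the config names are invented for illustration):

```python
import subprocess
import sys

def launch(config_flag):
    """Hypothetical worker process: exits non-zero when handed the 'bad' config,
    standing in for a crash on a malformed feature file."""
    code = "import sys; sys.exit(1 if sys.argv[1] == 'bad' else 0)"
    return subprocess.run([sys.executable, "-c", code, config_flag]).returncode

def supervise(new_config, last_good_config):
    """Try the freshly pushed config; if the worker crashes,
    restart it on the last known-good config instead."""
    if launch(new_config) == 0:
        return new_config
    if launch(last_good_config) == 0:
        return last_good_config
    return None  # even the old config fails: surface the error, don't loop
```

The node keeps serving on the old parameters instead of crash-looping, and the failed push becomes an alert rather than an outage.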


I think they need to seriously evaluate whether they need this level of rollout speed. Even spending a few minutes with an automated canary gives you a ton of safety.

Even if the servers weren't crashing, it's possible that a bad set of parameters results in far too many false positives, which might as well be a complete failure.


It's probably not ok to silently run with a partial config, which could have undefined semantics. An old but complete config is probably ok (or, the system should be designed to be safe to run in this state).
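That policy - reject a partial config, keep the old-but-complete one - is a few lines; a sketch, with invented key names:

```python
REQUIRED_KEYS = ("threshold", "features")  # hypothetical schema for the feature file

def apply_config(candidate, last_good):
    """Return the config to actually run with. A partial config has
    undefined semantics, so never use it; an old but complete config
    is an acceptable fallback."""
    if all(k in candidate for k in REQUIRED_KEYS):
        return candidate   # new config is complete: use it
    return last_good       # partial push: keep the known-complete config
```

A real system would validate more than key presence (types, value ranges, entry counts), but even this shape turns "silently running with undefined semantics" into "explicitly running one version behind."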


This isn't what they do, though. This is a data/config push - the original article says _a "feature file" used by our Bot Management system_.

