Why I'm No Longer Talking to Architects About Microservices (container-solutions.com)
117 points by saikatsg 8 months ago | hide | past | favorite | 92 comments


I think the term "software engineering" sometimes does more harm than good. Because we call it engineering, we tend to act like it's a rigid, well-defined discipline...like civil or mechanical engineering...where things have precise definitions and there's a right way to do everything. But in practice, software is full of gray areas. Take testing: people argue intensely about what counts as a unit test vs an integration test vs an E2E test, as if there's some pure Platonic definition out there. The same goes for debates about microservices vs. monoliths, service-oriented architecture, and so on. The energy behind these arguments often feels religious, not practical.

Over time, I've found that the only reliable way to reason about engineering decisions is to work backward from the business use case. What are we trying to accomplish? What's the time frame? What risks are worth taking now vs later? Everything else...patterns, paradigms, "best practices"...only makes sense in that context. The rest is academic, and often divorced from reality tbh.

I've also noticed that many strong opinions in engineering come from people trying to avoid pain they've experienced before. But the solution that saved them in one situation can easily become a liability in another. Patterns ossify into rules. That's why I try to keep my reasoning grounded in outcomes and context, not ideology. Most "best practices" are just local maxima...useful until they aren't.


Mechanical engineering certainly has established practices but I would argue against it being rigid and well-defined in many or even most cases.


At least a mechanical engineer can estimate their rigidity.


Materials science has changed a couple times in the last thirty years hasn’t it?

And given that finite element analysis has flaws, one wonders what will replace it and when.


For example, there's probably an under-recognized amount of materials work going into radiation shielding related to current fusion efforts. And there's a huge amount of, perhaps subtle, materials science innovation going into consumer products and elsewhere.


Gorilla glass is a great example.


Also tons of stuff related to especially outdoor clothing.


Same for electrical engineering. Some problems require as much creativity and have as many possible solutions as software does.

The main difference is that most people working in electrical or electronic engineering aren't being forced to ship half-working products that might change tomorrow. When we ship a product, it's "done". The cost of modifying it later is not zero. Well, the same is true for software, but this is a reality that business gaslights developers into ignoring.


It’s a “we can always fix it in post-production” attitude, which becomes less and less the case with many physical projects. But that’s at least somewhat different from there not being fairly established ways of doing many fundamental things and having best practices for testing etc.


That flexibility is a fantastic advantage of software over hardware, not a liability. You still can deliver software as an unchanging thing, but generally no one does because incrementally giving users something useful sooner is more valuable.


The flexibility is how software engineers are forced to forget the engineering part, so it does matter.

Updates are fine. Forgetting quality because of the possibility of updating is not.


Depends how you use that flexibility. If it’s to add capabilities and fix minor problems that all products have, sure. If it’s to ship bug-ridden crap just because it can probably be fixed later, I’m less of a fan.


Theoretically, it could have been an advantage, but in practice it's not. What we got instead is a software ecosystem of slow, bug infested garbage that requires constant updates.


I agree that we don't live up to the title of "software engineering", and it's no mistake to say that software is uniquely complex, but all the same this advanced degree of complexity doesn't preclude a culture of engineering. See the user and know your abilities (including limitations). That is what engineers are called on for. And when ethics comes knocking, know that there are not just users but stakeholders.


> And when ethics comes knocking, know that there are not just users but stakeholders.

I am the one who knocks, and I’m not sure where we’re all getting the idea that society desires ethical, competent, software engineers.


Consider it a personal ideal in a discussion about ideals, then.


It’s a signpost in a discussion about weather vanes.


Uniquely complex compared to a modern microprocessor for example?


I should say that software complexity is different than normal engineering complexity; physical constraints are replaced with cognitive constraints on managing abstractions. To address both: designing a modern microprocessor is very complex. I think certain software tools used to aid in this, e.g. a simulator to test designs, are very complex. These complexities are different: the former is based in science and the latter is based in logic. Even so, they intersect in the real world, but aren't equal. I think the latter has a much higher ceiling for difficulty, as complexity comes for free there.


The power of software is how it 1) moves a lot of work from the domain of the physical to the domain of the written and intellectual and 2) the way it can be replicated at scale. So it really isn't like engineering, more like writing, but industrialized. Well, at the application level at least, where a lot of software work happens. The work that actually allows written software to physically interact with the machines it runs on, that's engineering, but generally not what most "software engineers" work on.

Anyway, it's not really so surprising that there are few hard and fast and replicable rules in "software engineering" as there really aren't in, say, essay writing. There are best practices and tips and techniques that are generally more effective than others. And there _are_ actual constraints around software that don't exist around essay writing as, ultimately, software does have to translate to real physical limitations like available memory and such, but the abstraction away from those physical limits is by and large the point and purpose of software.


As an experienced "software engineer" :D, I fully agree with you. Although I would phrase it differently.

All patterns & practices come with their benefit and drawbacks. So all decisions are very much dependent on the circumstances. This is what you called "local maxima".


What separates software from engineering are concerns about legal liability. If we actually sued software companies for shipping us harmful crap, software would start to look like any other kind of engineering.


Not really true. The legal liability is generally associated with the Professional Engineer designation. In electrical engineering, for example, my experience is that almost no one uses/needs to be a PE unless they work on infrastructure projects like power distribution. People designing PCBs, ASICs, etc. going into consumer products aren’t generally PEs.


There isn’t even a software engineer PE any longer I believe. But even otherwise it’s uncommon outside of civil engineering. Mostly it’s for signing off on drawings/designs for regulators.

I started down the path in mechanical but didn’t stay in that track long enough to get one.


... AND people would pay for software like they pay for any other kind of engineering. Or they wouldn't get any software.



I agree strongly with your second two paragraphs, but I'm going to push back on the oft-repeated meme that software engineering is not real engineering because it doesn't have the same disciplined approach as physical engineering practices.

Personally I think software engineering is absolutely engineering, it just has mostly different constraints from physical engineering. Most of this has to do with dealing with the costs and consequences of building things in the physical world. Without those disciplines, a huge capital cost goes into building a bridge that fails, or an expensive machine that can't do its job. We also have to account for the limitations of physical materials and a harsh environment, which are in some ways harder to account for, but also more stable as a well-understood substrate than pure software (i.e. all animals have some intuition about physical things, physics doesn't change, materials capabilities change slowly, etc).

When folks lament the lack of "discipline" in software engineering I think it's a misplaced, grass-is-greener sentiment that hasn't really been thought through. The reality is software has different constraints, not harder or easier, just completely different: apples to oranges. For instance, durability and maintenance in physical objects is about material properties and physical wear and degradation from the operating environment, whereas software bits are infinitely durable and replicable, but they get broken by the environment itself since they have to interact with thousands or millions of other software artifacts created over many decades. With physical engineering it's mostly about discrete objects with natural and intuitive encapsulation that are in a sense "immutable". Some software is like this (like pre-internet console video games shipped on cartridges), but typical cloud-based software is nothing like this: it's a constantly evolving tangle of interconnected logic and ever-growing data used for widely heterogeneous purposes, often never fully understood by any one person or entity.

This messiness is one of the frustrating things about software, but it's also its strength. Anyone who tries to apply rigid civil-engineering-style discipline to software will not survive in the marketplace, because most software is not life-or-death. The discipline put into the Mars Rover or the Therac-25 successor would simply be overkill for average software. That said, of course there are cases where the importance and impact of software correctness is unknown, like if the government were to start using social media sentiment to impose real-world consequences on its citizens, but these types of considerations are much more fluid and harder to nail down than physical properties like "will this bridge stand up for 50 years". Software engineering, in my view, is about leveraging the malleability of software to solve problems in a constantly evolving landscape. We're not just standing on the shoulders of giants, we are directly calling their APIs to solve new problems they couldn't even have conceived of. One could argue this is too broad to really be "engineering", but I can't think of a better word to describe the highly technical process of understanding and solving problems with software.


Could it be that not all software development is equal? Some of it leans more towards engineering and the academic (e.g. compilers, operating systems, TCP/IP stacks, etc.) vs software that leans more towards the creative (e.g. web and mobile apps, games, APIs, etc.). I.e. to me software development seems more of a spectrum.


Yes and it makes them think they’re so smart. Developer or even engineer is really overkill for a name.

Code monkey is really the more accurate term. Not even slightly kidding here.


> makes them think

Who are they? People that advocate for specific patterns or argue about meanings of terms used?


My take would be that “they” are all of the people for whom learning to program gives a false sense of overall ability. It doesn’t follow that building a solid software product means you know how to build bridges, or fix social problems. But “they” seem to think it does. A tendency among the very young perhaps, but the hubris of the technologist is widely remarked upon.


No. People who think they’re smart because they do software. People who think senior, principal or architect as a title means superiority.


There was a time when all the people who would be electrical engineers were basically tradesmen. In the US, mostly trained by Edison. They were called "muckers."


Until recently most of the people calling themselves SE hate code monkeys and their attitude. Which is a lot of the impetus to call yourself something else so as not to be lumped in with the sloppier elements.


A difference to me is using existing engineering, vs engineering ways of engineering.

The latter seems to occur and mature in software more.


I've often felt that coding is more akin to writing than anything. Any given statement is perhaps mathematical but at the higher level you're more authoring a program according to humanistic needs and thought patterns rather than strictly "engineering" it.


In my experience a lot of technical decisions are about solving an immediate urgent problem, not really about a true discussion of tradeoffs. I’ve seen monoliths broken up into micro services because teams feel they can’t iterate on their problem fast enough. I’ve seen micro services turn back to monoliths because of difficult scaling/deployment/cost properties of so many little services. And then people argue from a standpoint of we had X problem we solved with microservices. People rarely feel the Y problem they’re not currently having.

Add to that the bias for the new dev/leader to push some change to show impact - you get a regular pendulum swing between extremes.

One problem underlying it all is stable engineering leadership. This itself is a challenging balance between stability avoiding dumb architectural churn like I describe vs ensuring you have some new blood to inspire fresh ideas. Another problem is just misattribution: we see X problem in the product, some eng pushes Y big solution to X even though there’s probably a simpler path to solving X in the current stack. Often you don’t really need to do that big revamp, but to show impact? devs’ preference? we do Y anyway.


Right, there are no measurements involved and most decisions are gut-based. In the end most systems will work somehow, it’s just that the pain points in development are different.


“In the end most systems will work somehow”

The older I get the more I feel like this is a feature, not a bug. I have spent too much time over engineering systems that end up being canned because leadership decides to go in a different direction. These days I am much more concerned with striking a balance between perfect and good enough.

Get something out into production, start generating some revenue and finding the stuff people like about it, then see what is giving you the most problems and fix that. That’s going to be a much more successful path than never shipping anything because of endless polishing and optimizing.

If I ever find myself spending a long time debating some technical decision (aka bike shedding), I have learned to hit pause and say let’s just pick a path and go with it. As an aside this also works for deciding what restaurant to go to or what kind of pizza to order.


Completely agree. I'm personally inclined to be more toward the "good enough" end of the spectrum than the "perfect" end of the spectrum - but it's still a judgement call. Ultimately, one needs to turn out a product, make money, and fix it up where there are gaps or deficiencies. It is a feature; a good one. Anything else is just hubris and vanity.

If you feel you need micro-services to force an organizational change, don't equivocate, just lead with that, and get back to business.


Even if there is measurement, it’s pretty easy to cherry pick stats or question the existing stats. I’ve seen it time and time again often “data driven” only matters if the data agrees with the leader/devs preconceived beliefs.


Microservices keep happening because we are bad at managing teams whose output is a library. We lack the tools to measure the "impact" of libraries during performance reviews. We can measure the number of calls to a microservice, but it's rare to measure library usage directly. When your library team needs to justify its budget, it doesn't have as much to point to.

It's also harder to assign blame for performance issues when your teams write libraries. With microservices, your team can isolate exactly how much CPU usage your code takes. But few companies can isolate the performance cost of a library. When your main service goes down, it's tricky to assign the responsibility to a single team to fix.

Finally, microservices give you ironclad control over your API surface, while in many companies other teams can often call into any part of your library. As other teams grow tendrils into your team's code, your support burden grows and you have to keep supporting unintended parts of your API.

Now all of these are fixable, and I think we should fix them. Writing and using libraries is easier and lower overhead than putting network calls everywhere. But until you fix the above issues, companies will keep sliding into microservices.
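The "we can measure calls to a microservice, but rarely library usage" gap is closable in-process. A minimal sketch of one way to do it (the counter and the `normalize` function are hypothetical illustrations, not from any particular library):

```python
import functools
from collections import Counter

# Hypothetical in-process usage metric: the library-side analogue of a
# microservice's request counter.
CALL_COUNTS = Counter()

def instrumented(fn):
    # Wrap a library function so each call is tallied, giving the
    # library team a usage number to point to at review time.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        CALL_COUNTS[fn.__qualname__] += 1
        return fn(*args, **kwargs)
    return wrapper

@instrumented
def normalize(s):
    return s.strip().lower()

for raw in ["  Foo", "BAR  ", "baz"]:
    normalize(raw)

print(CALL_COUNTS["normalize"])  # → 3
```

In a real codebase you would ship the counts to the same metrics pipeline the services use, so library "impact" shows up on the same dashboards.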


Kind of a weird take. Microservices are (or at least should be) more about encapsulation of data than encapsulation of code or compute.


I don't think that distinction matters. Libraries can easily encapsulate data.

I think he's right - microservices are very often used in situations where a library is clearly the right tool for the job. In my experience the main reasons people don't use libraries are: a) the people in charge of the place where you should put the library are too territorial, so it becomes easier overall to start an entirely new project and deal with the very large downsides of microservices, and b) they want to use a different language, and disappointingly FFI over HTTP is usually easier than "true" FFI.


Microservices are so expensive in terms of overhead.

Think about how much overhead one function call has: the CPU calling it, then returning.

How many thousands of times more are you adding by making that a call into another webserver?

Yes, there are times when a microservice is correct, there are many-many-many other times where using them is pure waste.
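A rough way to feel out the "thousands of times more" claim is to time an in-process call against the same function behind a loopback HTTP server (a stand-in for a microservice; absolute numbers vary wildly by machine, and a real network adds far more on top):

```python
import http.client
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def add(a, b):
    # The "business logic" being wrapped; trivially cheap in-process.
    return a + b

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps({"result": add(body["a"], body["b"])}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # keep the benchmark output clean

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 200

t0 = time.perf_counter()
for _ in range(N):
    add(1, 2)
direct = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("POST", "/", json.dumps({"a": 1, "b": 2}))
    assert json.loads(conn.getresponse().read())["result"] == 3
    conn.close()
over_http = time.perf_counter() - t0

print(f"direct:    {direct / N * 1e6:9.3f} us/call")
print(f"over HTTP: {over_http / N * 1e6:9.3f} us/call")
print(f"overhead:  {over_http / direct:,.0f}x")
server.shutdown()
```

Even on loopback, with no serialization framework, no TLS and no actual network, the gap is typically several orders of magnitude.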


The CPU and network overhead are at least 1000x, but organizational costs are at least another 1000x on top of that (Unless you have a billion users or something, then the organizational costs might be a more manageable 10-100x — Building out fleets of data centers is expensive.)

The organizational costs come from communication overhead between engineers: Sync or maybe async calls within a codebase turn into an ad hoc distributed system, where a different team (and maybe different management chain) owns each network endpoint.


And this organizational overhead is why I disagree with his section about structuring your organization to fit microservices. I have seen first hand how horrible it is when services do not share DBA resources, for example. Every service has its own incorrectly and differently configured cloud PostgreSQL instance. Plus it's really hard to innovate and create new features, since many innovative features require you to touch multiple services and talk to a whole bunch of teams.


This. It’s also more expensive. If one DB is multi-AZ, then all of them need to be, otherwise you’re giving up all reliability improvements. So now instead of a single beefy instance in multi-AZ with a few read replicas, every service has that.

And as you correctly point out, the odds are low that there’s any semblance of a centralized configuration (nor that anyone knows what all the options are for). So you wind up with 30 different DBs, all with different performance characteristics, and a ton of data duplication. Not to mention the observability nightmare. You want to trace something E2E? Have fun following all the traces and queries.


Somebody has never watched the whole company go down because some other team wrote a bad migration to the central db.


Which is a large reason why I don’t think dev teams should be managing DBs. It’s rare that anyone actually understands them, much less proper administration, configuration, tuning, etc.

If you have no other choice because you’re a tiny startup, then fine, but at least show the DB the respect it deserves, and read its documentation. You wouldn’t (I hope) write and run code when you had no idea how it worked, or what the potential ramifications were, so don’t do it for the rest of your tools.


Yes, I have watched that happen both in microservices and monoliths.


The only good argument I know for microservices is organizational:

When each team is responsible for one microservice, they can manage that with much less painful coordination with other teams.


My experience has been the total opposite. When each team is responsible for a service then to build anything complex you need to have meetings with stakeholders from a bunch of teams. So much time wasted on coordination compared to when working on a monolith. There you usually just need to get buy-in from the platform team plus at most one other team. And that's assuming you can't just build it and open a pull request, which is often the case.


I also definitely prefer a monolith, if possible!


Yes, they can do their own thing without having to worry about their pesky users. In reality, this means that when you consume somebody else’s service and you need something changed, you have the choice to either hack up your own thing or somehow convince the service team to put your needed changes on the backlog. From my experience you often end up with everybody implementing some version of what the official service is supposed to do because the service can’t be bothered to make the changes within a reasonable time frame.

I would do micro services only once the requirements are very stable and there is a clear need for scalability. Microservice first architecture is a recipe for complexity.


I think the cross team dynamics you describe works the same way in a monolith architecture.

One difference/advantage in a monolith is that you don't have to worry which version the other microservices are on.


And TBF this is a great reason for why microservices.


That's the why, but the implication is that it's possibly not the most efficient way to build the system, and that we can't do better because we're no good at communication.


Conway's law


That's a big part of it!


Function call overhead is really a tiny issue compared to serving static assets (i.e. a frontend application) by deploying them as a dedicated Nginx microservice with a couple of files copied to the container image :)


Sure… but CPU overhead is kind of a technical detail, not a flaw in the concept. When you have a conversation about microservices, 95% of the time, the conversation veers into these details like CPU overhead. I think that’s what the article author is getting at. It’s a conversational trap—people have so many things to say about microservices.

In the end I think that microservices are not a big deal. It’s like choosing your programming language or choosing whether you want to use object-oriented programming. These decisions all have impact, but they generally aren’t revolutionary decisions. Object-oriented programming didn’t give us a massive wave of reusable code. Rust hasn’t unlocked some new, previously unseen level of developer productivity. Microservices are the same way.


> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too

> seem very confusing to grug

https://grugbrain.dev/#grug-on-microservices


> A service with its own data store

This is the only valid definition. If you don't have separate data stores, you don't have a microservice. You have a mess that will break.

> Problem Three: Microservices Without Organizational Change Is Pointless

Also 100% correct. Microservices only work if the only necessary coordination between teams is an API. If you have to coordinate deployments or other work with another team, you've broken the model.

Now, that's not to say that you can't help another team add an API call to their service to make your own service easier to run. Or that you can't coordinate feature development across teams.

But at the end of the day, each team needs to be able to deploy or roll back independently of what any other team is doing. It needs to work as if every team is another company who they can only loosely coordinate with (hence the term loosely coupled).

OP is certainly right here, microservices are a means to an end, not an end.


Oh well, it’s not just about microservices… The whole damn IT/software world is just like that. Very few well-defined things, very few good standards, very few agreed upon architectures.

From an electrical engineering background, it is mind-boggling to me how we got to this convoluted software world. It’s like having different electric sockets in the same building. The only hard standards seem to be IP and HTTP.

I just have two basic explanations for that: 1. Most software today has no relationship to physics or the hardware it is running on. Hence, there is no need for optimization. 2. In new projects you always start from scratch and there are few guidelines/truths/principles to hold the developer accountable to. And if there are, then you can always discuss their applicability to your problem.


From a Computer Engineering background, it's just as convoluted when you start talking about hardware design. There's a reason every 2 generations Intel ends up rearranging their socket, and it's not all just because they want you to buy new CPUs. It's because all of a sudden this section of the CPU needs more power and the previous pins weren't enough.

The reason things shift for software is because they can and because there's no 1 "right" solution to almost any problem. There's no 1 correct way to represent data. And when we've tried to do that in the past (see SOAP) it ends up in a wild spaghetti mess of standards that tend to hamper development rather than speed it up.

There's also the very real problem that users of software LOVE to work around standards to the point where they create their own standards that ultimately end up needing to be supported in software. I can't tell you the number of times a "description" field in some set of data has ultimately ended up needing special handling in our data normalization process because an end user has decided "When I mark foo with this, it's actually a bar and the system needs to handle it slightly differently". To which our POs are happy to oblige. Often because software we didn't write is what the users are interacting with to send us data and the "description" field is the only thing they have the power to change.


> Microservices Without Organizational Change Is Pointless

Too true.

In my experience, micro-services were always proposed by either ivory-tower folks chasing the latest fad, or by more practical folks who really wanted to drive an organizational change - a means to an end. The big issue, and friction, occurs when the first group, pushing the notion, didn't recognize that an organizational reconfiguration is necessary for success.


Microservices were a sort of point-in-time moment of 2015-2016, and the term lingers on as a description for what is services development. It was a step change from what SOA was, and then Lambda and serverless became the next exploding trend after that. There is still definitely merit to that type of development, but within the right context and scale of organisation.

See, microservices and services development in general solve organisational problems first, not technical ones. Breaking out the hot paths in your code into separate services logically only gets you 10 services at best. But when your engineering org is tens if not hundreds of teams, the cross-functional dysfunction becomes very apparent, and that's where APIs, services and microservices do well. I think like all hot shiny trends, it was taken to an extreme by a lot of people or touted as the hammer to solve all problems. In reality, it best serves orgs bigger than 200 people and scaling rapidly.

As someone who spent a decade working on microservices tech, I can vouch for the exhaustion at this point and the desire to go back to a more macro/monolith type of development. Yet still, I understand when and where microservices play a role. The key being: it's an evolutionary architecture. You don't just "do" microservices; your org evolves in that direction, and then standardisation plays a huge role. Hence why I wrote a framework for it -> https://go-micro.dev


> Microservices conversations are abstract, with little tie-in to real business goals

In my experience there's plenty of blame to lay here. Some organizations are too siloed and some architects are too inexperienced.

The following aren't inherent to microservices, but are general architecture concerns: scaling up to handle more concurrent requests, hitting your SLA uptime, reducing defects or memory usage by avoiding statefulness, reducing latency, separation of concerns to improve maintainability, etc.

Are these not immediately obvious impacts to the business? Is it not obvious how microservices can help here even if it's not the only way to solve these?

My main question is whether the author is as similarly inexperienced as the architects.


> scaling up to handle more concurrent requests, hitting your SLA uptime, reducing defects or memory usage by avoiding statefulness, reducing latency, separation of concerns to improve maintainability, etc.

> Are these not immediately obvious impacts to the business?

Depends; for B2B loads, all those concerns but the final one can be met with a single beefy server for DB writes, with multiple read-only replicas.

Run multiple instances of your monolith app behind a load balancer and you should be good for maybe 100k users working on that app at mostly the same time.

With care (redis serving session data) this is fine for even very high loads.

Once you start doing b2c and have millions of users logged in and doing stuff at the same time then those concerns are valid.
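The "Redis serving session data" step is what keeps each monolith instance stateless, so the load balancer can route any request anywhere. A sketch of the idea, with a plain dict standing in for Redis (in production these would be `GET`/`SETEX` calls against a shared Redis instance; the setup here is purely illustrative):

```python
import json
import secrets
import time

# Stand-in for a shared Redis instance: in production this dict would be
# Redis GET/SETEX calls, so every app instance behind the load balancer
# sees the same sessions.
SESSION_STORE = {}
SESSION_TTL_SECONDS = 3600

def create_session(user_id):
    # Any instance can mint a token; the token is all the client carries.
    token = secrets.token_hex(16)
    expires_at = time.time() + SESSION_TTL_SECONDS
    SESSION_STORE[token] = (json.dumps({"user_id": user_id}), expires_at)
    return token

def load_session(token):
    # Any *other* instance can resolve the same token, because the state
    # lives in the shared store rather than in process memory.
    entry = SESSION_STORE.get(token)
    if entry is None or entry[1] < time.time():
        return None
    return json.loads(entry[0])

# "Instance A" logs the user in...
token = create_session(user_id=42)
# ...and "instance B" (any process sharing the store) serves the next request.
assert load_session(token) == {"user_id": 42}
```

Because no instance holds session state in memory, instances can be added, removed, or restarted without logging anyone out.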


These may be sold as points for microservices but in the end are wishful thinking. No microservice can turn a stateful system into a stateless one; the state still needs to be handled somewhere. Microservices also do not magically improve uptime or anything else.


One topic of debate at my company that I rarely see here is shared libraries vs microservices. Should logic or external calls live in a shared artifact to be consumed by microservices, or be hidden away behind an exposed microservice API?

The truth is that micro services are a tool. However your organization has broadly agreed to utilize the tool is the correct approach.


Having worked with shared library systems, yeah, micro or even macro services are far and away preferable to deal with.

At very least, having clear boundaries on data ownership is something I really wish my company had done from the start. We have legacy structures that are very hard to iterate on because they get loaded by ~20 different applications directly from the database. That means any minor change to that structure requires careful planning and roll out of the shared library to these 20 services.

Were these behind a service, we'd still have to negotiate the structure (we couldn't, for example, delete a field willy-nilly), but we'd be able to add new fields to the structure or change the way those fields are populated to meet current business requirements without a mass rollout.

I do think the micro/nano approach can often be wrong. I think a service should be as big as it needs to be to cover the domain it's dealing with. But, importantly, who owns what bit of information is by and large the most important thing to get right.
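The "add new fields without a mass rollout" point works because service consumers parse only the fields they know about, instead of mapping the database structure directly. A small sketch with hypothetical field names:

```python
import json

# Version 1 of the owning service's response (field names are hypothetical).
V1 = json.dumps({"id": 101, "name": "legacy-widget"})

def consumer_view(payload):
    # A consumer written against v1: it picks out only the fields it
    # knows about, rather than mapping the whole structure.
    data = json.loads(payload)
    return {"id": data["id"], "name": data["name"]}

# Later, the owning team adds a field behind its API. Consumers keep
# working without a redeploy, because unknown fields are simply ignored.
# Contrast this with ~20 applications reading the table directly, where
# every schema change forces a coordinated library rollout.
V2 = json.dumps({"id": 101, "name": "legacy-widget", "owner_team": "platform"})

assert consumer_view(V1) == consumer_view(V2)
```

Deleting or renaming a field still needs negotiation, which is the asymmetry the comment above describes: additive changes are cheap behind a service boundary, breaking ones are not.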


I think there is also a category of shared library, like a color library, that could be a microservice but could also stay a library. Since it's easy to have services upgrade to the newest version of it without creating cross-dependencies, it may make more sense as a library.

I see what you are getting at though.


I think we are definitely in agreement.

It can be hard to determine when something should be a service or library and I don't know if I have a clear answer either.

We also have, for example, libraries that purely compute data. However, they have to be consistent across the system, and it would actually be pretty helpful if they were microservices instead, as the computations often need to be changed or updated.

Perhaps when it's business logic it should be a service?

I definitely see the need for libraries for things like compression, IO helpers, or even pure math like matrix multiplication, where older ways of computing aren't necessarily incorrect.


I think the parent was saying that services should own their own mutable data. Libraries should not be used to "share" mutable data.

At least that's what I believe, so that's how I read it.


Definitely what I'm saying. But the point about libraries which don't own data is also valid and those can be useful as services in some cases. Particularly when we are talking about something like a set of business rules to follow.

At a bare minimum, if there's some sort of mutable data involved then I think there should be a single service that owns it. But I could also see reasons to create services which don't involve mutable data.

It does somewhat come down to complexity though. Like, for example, I think a "matrix multiplier" service would be silly. Or even just a general "do-math" service. But a service that, for example, takes in raw video and spits out compressed video? Now we are talking about something that probably is better as a service and not a library as you likely want to do things like control encoding values and standards system wide.

Just a little about my background: the systems I worked on shared user information via the `user-lib` library, which contained all the details of how to fetch (and insert, and update) a user from the users table. Many services used that library to pull user information, and it has been a mess to untangle.
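The `user-lib` coupling described above can be sketched in a few lines (hypothetical schema, SQLite standing in for the real database): the SQL and row shape are compiled into every consumer, so a schema change means rebuilding and redeploying every service that links the library.

```python
# Sketch of shared-library data access: every consumer of "user-lib"
# embeds the users-table query and row layout. Rename a column and all
# ~20 consumers break until they pick up a new library release.

import sqlite3

# What the shared library bakes into each consumer:
USER_QUERY = "SELECT id, name FROM users WHERE id = ?"

def fetch_user(conn, user_id):
    row = conn.execute(USER_QUERY, (user_id,)).fetchone()
    return {"id": row[0], "name": row[1]} if row else None

# In-memory stand-in for the shared database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

print(fetch_user(conn, 1))  # -> {'id': 1, 'name': 'Ada'}
```

Put a user service in front of that table instead, and the query above lives in exactly one place.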


"microservices" is not an architectural question. It's a "recommended practices" question.


The part about no good definitions of micro-services is a bit far-fetched. I have one that is perfectly functional, though it requires an architect who is enthusiastic about micro-services, hasn't written any code in at least ten years, and who has no operational responsibilities nor experience. Getting one of these is not too hard; they abound. After that minor hurdle is cleared, the test is simple and has two parts: first, the functionality must fit in an ordinary function of some programming language, even Fortran will do in a pinch. Second, the architect has said that this must be a micro-service with its own git repository and independent database. That's it. If these two conditions are met, then this piece of functionality qualifies as a micro-service.


The point isn’t that there are no good definitions. The point is that there are many definitions.

I also have a working definition of a microservice: a functional part of a system which exposes its functionality to other parts of the system via HTTP. This works for my team.
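That working definition fits in a screenful of standard-library Python (service name and payload are invented for illustration): a piece of functionality that the rest of the system reaches over HTTP, nothing more.

```python
# A minimal internal service matching that definition: functionality
# exposed to the rest of the system via HTTP. Standard library only;
# port 0 lets the OS pick a free port.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"price_cents": 1999}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PricingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another part of the system calls it over HTTP.
with urlopen(f"http://127.0.0.1:{server.server_port}/price") as resp:
    result = json.load(resp)

server.shutdown()
print(result)  # -> {'price_cents': 1999}
```

Whether that counts as "micro" versus just "a service" is exactly the definitional argument in the sibling comments.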

In general, we shouldn’t fetishize any particular technology, idea, or phrase. If a term obfuscates more than it clarifies, it’s not very useful.


> I also have a working definition of a microservice, which is that it is a functional part of a system which exposes its functionality to other parts of the system via HTTP.

How do you differentiate service and microservice?


A microservice is a type of service which faces internal systems. A service in general may also face the internet.


Wouldn't that be considered an API?


An API is an abstract interface - the "I" stands for interface after all. An implementation of an API can be a microservice.


API can mean a library, an RPC, an HTTP call using query params/XML/Json, a CLI, the list goes on.

In my definition, what matters is there’s something responding to HTTP requests internal to the system.


ITT: People missing the point of the article and talking about Microservices


It's been 10-15 years of microservices, and "tech influencers" and bloggers are writing posts for their Linkedin as if they just discovered the secrets behind proper microservices in 2025.

I don't know why people get satisfaction from writing the same posts about microservices for a decade, but can we move on at this point? We know that microservices are better for development at larger organizations, but they come with costs in things like service fragility, service discovery, and investment in observability, alerting, etc. This is well established at this point, so if you want to write a post that just restates it, please don't.


I certainly don't understand this HN trend of hating on microservices, or the previous trend of hating on monoliths. I have worked in both extremes and both work fine. Sure, there are things which could be implemented 20% faster with a monolith or a microservice architecture, but that was never in the top 5 issues I face while doing software engineering.

As long as there is a monorepo, and anything in the testing boundary can be built and edited together, that is fine for me.


It is pretty easy. A lot of people worked on badly coded monoliths, and then we got the microservice fad, and it turns out microservices were not a silver-bullet solution for bad code and bad organizations.

I personally don't believe in microservices at all, at least not in the organizational sense. I don't mind e.g. genservers in Erlang, but then teams own many of them. That said, this trend is not proof that they are bad, just that they do not magically solve the problems.


I've been in this industry for three decades now, and one thing that I've observed in a number of teams is that when someone has a really strongly held belief about something in this field -- microservices, languages, web frameworks, where to put the trailing curly brace, indentation, etc -- a good percentage of the time it can be traced back to some interpersonal issue/conflict they had with someone else in their workplace/team/peer group.

I could watch it happen in real time. Person A would advocate a position on one of those items, and their nemesis on the team, B, would then build their whole personality around being against that position. They would adjust every existing belief to now fit their new perspective. It would become boringly predictable.

So when someone is really passionate and certain about something like this, my natural thought is "who hurt you?". Especially on something as amorphous and hard to pin down as microservices, where it's easy to attack it from a hundred angles that likely have no relevance to an actual implementation.

Like, it really doesn't matter that much. You can make an excellent solution using almost any approach. You can use all sorts of frameworks and languages and platforms, etc. It'll all be okay.


This is interesting. I agree. I never really thought much about it but I definitely remember some people doubling down on whatever preference they had after a certain discussion.

I remember one guy screaming at me because I didn't use the pattern he wanted in a certain project. I didn't really care, but later I found he had a fight with the CTO the day before about said pattern.

> Like, it really doesn't matter that much. You can make an excellent solution using almost any approach

Yep. One can also make a horrible solution with almost any approach. ;)


> As long as there is a monorepo

That sentence is doing a lot of heavy lifting.



