
I really don't like helm. I think we have arrived at abstraction over abstraction over abstraction.

The last project I was involved with used kustomize for different environments, flux to deploy, and helm to install a chart that took in a list of ConfigMaps via "valuesFrom". Not only does kustomize template and merge YAML together, but so does the valuesFrom mechanism, except it does so at "runtime" inside the cluster.
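
To make the layering concrete, here is a minimal sketch of the flux side, with made-up names (the HelmRelease and its valuesFrom list are the real Flux mechanism; every name is hypothetical). The referenced ConfigMaps were themselves produced by kustomize overlays, and the controller only merges them into the chart's values once they are in the cluster:

    apiVersion: helm.toolkit.fluxcd.io/v2   # v2beta1/v2beta2 on older Flux versions
    kind: HelmRelease
    metadata:
      name: my-app                  # hypothetical release name
    spec:
      interval: 10m
      chart:
        spec:
          chart: my-app             # hypothetical chart
          sourceRef:
            kind: HelmRepository
            name: my-repo           # hypothetical repository
      # each entry below is merged into the chart's values by the
      # helm-controller at reconcile time, i.e. only inside the cluster,
      # long after kustomize has already patched the manifests that
      # generated these ConfigMaps
      valuesFrom:
        - kind: ConfigMap
          name: base-values
          valuesKey: values.yaml
        - kind: ConfigMap
          name: env-overrides
          valuesKey: values.yaml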

There's just no way to get any coherent checking or linting before deployment. I mean, how could a language server even understand how all this spaghetti YAML merges together? And note that I was working on this as a developer in a very restricted environment/cluster.

YAML is already too permissive, and people really do start programming in it. The thing is, kubernetes resources are already an abstraction. That's kind of the nice thing about it: you can create arbitrary resources, and kubernetes is the management platform for them. But I think it already gets hairy when we create resources that manage other resources.

And sure, some infrastructure may be "cattle", but at some point in the equation there is state and complexity that has to be managed by someone who understands it. Kubernetes manifests are great for that; using a package manager to deploy resources is taking it too far. Inevitably helm charts and the schema of their values change, and then attention is needed anyway. It makes the bar for entry into the kubernetes ecosystem lower, but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?

Sorry for the rant, but given my second paragraph I hope there is some understanding for my frustration. All that said, I am glad they are trying to improve what has established itself by now, and I still welcome these improvements.



> I think we have arrived at abstraction over abstraction over abstraction.

> The thing is, kubernetes resources are already an abstraction.

Your first comment was more accurate - they’re heavily nested abstractions.

A container represents a set of kernel namespaces with a limited set of capabilities, resource limits (cgroups), and a predefined root filesystem.

A Pod represents one or more containers, and pulls the aforementioned limitations up to that level.

A ReplicaSet represents a given generation of a set number of identical Pods.

A Deployment represents a desired number of Pods, and pulls the ReplicaSet abstraction up to its level to manage the stated end state (and also manages their lifecycle).
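
The nesting is visible in the manifest itself: a Deployment embeds the Pod template, which embeds the containers, and the controller stamps out a ReplicaSet per revision. A minimal sketch (names and image are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                 # hypothetical
    spec:
      replicas: 3               # desired number of Pods
      selector:
        matchLabels:
          app: web
      template:                 # Pod template; each ReplicaSet revision stamps Pods from this
        metadata:
          labels:
            app: web
        spec:
          containers:           # one or more containers per Pod
            - name: web
              image: nginx:1.27 # hypothetical image
              resources:
                limits:
                  cpu: 500m
                  memory: 256Mi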

I think most infra-adjacent people I’ve worked with who use K8s could accurately describe these abstractions to the level of a Pod, but few could describe what a container actually is.

> It makes the bar for entry into the kubernetes ecosystem lower, but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?

It is not a good thing, no. There is an entire generation of infra folk who have absolutely no clue how computers actually work, and who, if given an empty bare-metal server connected to a LAN with running servers, would be unable to get Linux up and running on it.

I am not against K8s, nor am I against the cloud - I am against people using abstractions without understanding the underlying fundamentals.

The counter to this argument is always something along the lines of, “we build on abstractions to move faster and build more powerful applications - you don’t need to understand electron flow to use EC2.” And yes, of course there’s a limit; it’s probably somewhere around understanding the different CPU cache levels if you want to be well-rounded. However, IME, at the lower levels the assumption that you don’t need to understand something in order to use it doesn’t hold true. For example, if you don’t understand PN junctions, you’re probably going to struggle to use transistors effectively. Sure, you could know that to turn a silicon BJT on you need to establish approximately 0.7 VDC between its base and emitter, but you wouldn’t understand why it’s much slower to turn off than to turn on, or why thermal runaway happens, etc.


> The thing is, kubernetes resources are already an abstraction.

What I meant by that is that kubernetes resources are generic: "objects" in the cluster representing arbitrary things. And this makes sense, because it's okay not to know what cgroups and namespaces are in order to deploy a container/pod resource. What I'm trying to say is that this kind of arbitrary abstraction is what k8s brought to the table, but people keep trying to abstract on top of it again, which makes no sense. "Resource" is already generic.
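
That genericity is spelled out in the API itself: a CustomResourceDefinition is all it takes to teach the cluster a new resource type, and kubernetes will store, list, and watch it like any built-in one. A minimal sketch (the Widget kind and example.com group are made up):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com     # must be <plural>.<group>; hypothetical
    spec:
      group: example.com
      scope: Namespaced
      names:
        kind: Widget
        singular: widget
        plural: widgets
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:        # arbitrary structural schema for the new resource
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    size:
                      type: integer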



