#k8s #kubernetes #devops #containerization #cloudcomputing
🤔I’m surprised to see so much k8s hate. Why do companies want k8s? As a k8s evangelist, I want to clarify the reasons for the animosity and shed some light on the benefits of using Kubernetes.
Understanding the Role of Kubernetes 👨‍💻
Kubernetes as an Operations Framework 🛠️
– A Shift in Deployment Paradigm
– Managing Legacy Tools and Technologies
– Scaling Up for Business Growth
The Need for Standardization 📦
– Overcoming Tech Debt and Legacy Sprawl
– Streamlining DevOps Processes
– Supporting a Diverse Range of Applications
Benefits of Kubernetes for Organizations 🌐
– Agility and Flexibility
– Efficient Resource Management
– Simplified Operations and Maintenance
Personal Experience with Kubernetes 💭
– Realizing the Power of Kubernetes
– Bridging the Gap Between Development and Operations
– Enhancing Productivity and Innovation
Conclusion: Embracing the Power of Kubernetes 💪
In conclusion, the push for Kubernetes adoption in organizations stems from the need for standardization, efficiency, and agility in managing complex and diverse applications. While there may be initial resistance and skepticism, the benefits of Kubernetes in streamlining operations, enabling scalability, and fostering innovation cannot be overlooked. It’s time to embrace the power of Kubernetes and elevate our DevOps practices to new heights. Let’s #EmbraceK8s and drive our organizations towards a more streamlined and productive future.
K8s is playable as a big blind defend and an open raise from the button but otherwise is not a good starting hand
>The Chef and Ansible setup is 7–10 years old and no one seems to fully understand how it works.
Instead, now you have a Kubernetes version that’s not even 3 years old (1.21) with a truckload of deprecated APIs you need to fix: crons, ingresses, pod disruption budgets, …
Way better, right? Every tech stack, if you ignore it, becomes a drama and a pile of tech debt.
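To make that deprecation churn concrete, here’s a sketch of the kind of apiVersion bump those upgrades force (resource names and images are hypothetical): CronJob moved from batch/v1beta1 (removed in 1.25) to batch/v1, Ingress from extensions/v1beta1 to networking.k8s.io/v1, and PodDisruptionBudget from policy/v1beta1 to policy/v1.

```yaml
# Before (batch/v1beta1, removed in Kubernetes 1.25):
#   apiVersion: batch/v1beta1
# After (stable since 1.21):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # hypothetical
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: example/report:1.0   # hypothetical
# Same story for Ingress (extensions/v1beta1 -> networking.k8s.io/v1)
# and PodDisruptionBudget (policy/v1beta1 -> policy/v1).
```

Mechanically trivial per manifest, but across a fleet it’s exactly the maintenance tax the comment above is describing.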
Part of your job is to understand the problems you’re solving and decide on the right tools. K8s is just that. A tool. I’ve seen some projects where k8s’s technical debt greatly outweighs its benefits.
If it’s a large company it’s usually fine since they have enough people to throw at maintenance.
I’ve seen a junior ops guy build a self-hosted k8s cluster for a mom-and-pop shop and their two Laravel apps. They will probably have to learn the phrase “technical debt” the hard way.
Wtf is a k8s? Is that k8, plural? Kates? K.8.S.? And what do chefs have to do with development? This entire post was a word soup, and I’m fuckin drowning.
Experienced dev here with a lot of K8s and non-K8s high-scalability experience, and I fully agree with you. I think K8s is made out to be the bogeyman, but the real problem is that companies move to a microservices architecture before they really need it (and most likely they never will), and then it becomes a slippery slope.
If your reason for going to microservices is that “we need to scale better in response to requests”, that is the most common stupid reason because your real problem is how to run multiple instances of your application without losing correctness, or scaling your database, and that’s got nothing to do with microservices.
The only correct reasons to move to microservices are human reasons – something along the lines of your application logic becoming too large for one person to understand it fully.
The vast majority of SWEs don’t understand distributed systems and it shows. There are 100-dev companies where not a single person understands Distributed Systems 101. These teams tend to throw meme “webscale” solutions like FaaS, microservices, k8s, NoSQL, and horizontal sharding at the problem until they end up with a Frankenstein abomination held together with duct tape, when really all they needed was one good engineer making a few directed modifications to their monolith, and a good DBA/backend SWE to scale their database tier.
>realize that it’s dominantly about being an operations framework more than a dev framework
This is a very critical point and paid K8s++ products like OpenShift, Rancher etc make a good living from enhancing K8s in ways that reinforce the boundary between developer and ops.
Also, Jenkins is one giant smelly ball of cancer and if your team uses it, your P0 priority should be to get off it
Is this CS career questions or CS lecture rants?
Here’s why I hate K8s: 97% of companies do not operate “at scale”. Therefore, using these tools is a giant waste of time and effort for devs that could be better spent delivering things customers actually want. Also, the alternatives are typically not as half-assed as most K8s evangelists make them out to be.
It’s not hate, just good engineering, to figure out whether you need Kubernetes for your product. Yes, the surface-level API is easy enough, but when something breaks and you can’t fix it, say hello to your first multi-hour outage if you’re lucky.
Tell me, what do you do when your volumes suddenly turn read-only, your control plane node no longer boots because it ran out of disk space, your etcd blows up, or your networking layer just… stops working?
I’ve used K8s since its infancy and seen all of the above and many, many more bugs and issues – most of which happen silently during the night, and then your alarms go off.
It’s a good tool, but it needs to be weighed against the complexity it brings in. Most applications don’t need it. And you need to be very careful listening to people who pitch it as a silver bullet, because what usually happens is those same people are long gone when things start falling apart.
Typically, in AWS, it means they’re provisioning EC2 instances. As a proponent of serverless, it leaves an ick in my tastebuds but it’s a necessary technology to know if you’re doing any work standing up cloud servers.
#containerscontainerscontainerscontainers
K8s poses specific challenges for datastores like C* (Cassandra). To scale and keep availability you need a good understanding of the use case. The majority of teams supporting k8s deployments don’t give a crap, because the point of K8s is that you don’t have to.
I have found, more often than not, that things are deployed in such a way as to provide availability at the container level, but the underlying hardware is set up such that its failure will bring down enough containers to produce an outage. It also loses some of its appeal when dedicated disk is required, which is often the case in distributed datastores. Combine that with noisy-neighbor issues and it’s simply more trouble than it’s worth for those specific use cases.
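For those use cases, the usual mitigation is exactly what the comment says teams skip: pinning replicas onto distinct hardware and giving each its own disk. A sketch (names and sizes hypothetical) of what that looks like for a Cassandra-style StatefulSet:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra              # hypothetical
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two replicas on the same node,
          # so one machine failing can't take down a quorum.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: cassandra
              topologyKey: kubernetes.io/hostname
      containers:
        - name: cassandra
          image: cassandra:4.1
          volumeMounts:
            - name: data
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi     # dedicated disk per replica
```

None of this happens by default, which is the point: availability at the container level is easy; availability across the underlying hardware takes deliberate scheduling and storage design.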
No one in their right mind wants you to be able to deploy ‘the most complex applications and data services on the planet in 3 commands’. That is not a good thing.
Kubernetes suffers from magic bullet syndrome. It was sold as a magic bullet during the hype phase. And now that the hype phase is over, there are more people complaining about it than hyping it up. Kubernetes is a good system and has major advantages over its predecessors. But in practice it doesn’t really do what you are describing. Most orgs are not running one k8s cluster for heterogeneous services. Instead we run one AWS account per team or service.
Another issue K8s has is that the industry has moved on over the ~5 years K8s has been big. K8s isn’t competing with VMs and individual instances anymore. Its competitors now are ECS/Fargate-style managed container systems and serverless. Kubernetes’s edge over everything else has mostly been blunted at this point.
I don’t get what problems people have with k8s.
People dislike kubernetes? This seems equivalent to disliking containers, or air…
Depends on the flavor of K8s too. GCP makes managing and provisioning a cluster easier than everyone else. EKS is pretty good. Never done K8s in Azure. That aside, it can be good, but it isn’t the panacea some ops folks make it out to be. It has a steep learning curve and folks stay away from it because of that. Then you have the neckbeards who love it. Those guys almost always get their personality at the jerk store. To summarize K8s? It depends.
k8s is fucking amazing at scale. Even with just 10 backend services, it does wonders.
Unpopular opinion, it is not even complex, it is probably the easiest one to learn.
What the hell is a K8S?
I haven’t ever seen a hate comment about k8s.
It’s just that k8s is fine for mid-size to large organizations, but for small companies it’s really a burden.
In my job we have separate teams for DevOps and development. Only developers write the code, test it in Docker (optional), push it to the development branch, and create the pull request to QA. The PR triggers an automatic build with some automated tests; once that passes, the image is created and basic tests run, and it’s deployed to the QA k8s cluster. When that’s OK, the change continues to the main branch and the same process deploys to the production k8s cluster.
There’s really no room for a developer to learn k8s, because it’s fully automated and in general nobody touches the k8s manifest files; letting some “senior” dev without k8s experience at them would create a mess.
You only need to touch k8s when the system requirements change or when creating a new deployment (for which we use templates).
How is this a career question?
My fave thing about k8s is scaling DOWN… I run k3s on my home mini server to run Plex… because it’s way easier to apply some k8s manifests than to figure out how to install stateful software on fucking Linux etc etc etc
The real magic of k8s is that it models and standardizes the network and cluster elements. I’ve wanted – ever since I left Amazon in 2006 – a cluster solution that would basically restart workloads across machines to achieve a target of X instances or whatever. Literally, until k8s, nothing did that. Nothing!!!
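That “target of X instances” model is literally one field on a Deployment; the controller keeps killing and rescheduling pods across nodes until the observed count matches it. A minimal sketch (names hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical
spec:
  replicas: 3              # the target: k8s converges on 3 running instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If a node dies, the ReplicaSet controller notices the count dropped below 3 and schedules replacements elsewhere, with no operator involved.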
I had one instance where k8s conflicted with our dev process.
The client wanted one simple app. Not much going on, 20 concurrent users max. A monolithic service would do. We finished developing it and deployed it, no problem.
Then some bugs appeared. Users were suddenly logged out for no clear reason, and sometimes an uploaded file would just be gone. Apparently they had deployed our app on Kubernetes without telling us first, and we used in-memory storage for sessions and stored uploaded files locally, so (small) chaos ensued.
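That failure mode is easy to reproduce outside Kubernetes too. A toy sketch (hypothetical, stdlib only) of what happens when sessions live in one replica’s process memory and a load balancer round-robins requests:

```python
# Toy sketch (hypothetical, stdlib only): two replicas of an app that keeps
# sessions in local process memory, behind a round-robin load balancer.

import itertools
import uuid

class AppInstance:
    """One replica of the monolith; sessions live only in this process."""
    def __init__(self):
        self.sessions = {}  # session_id -> username

    def login(self, user):
        sid = str(uuid.uuid4())
        self.sessions[sid] = user
        return sid

    def whoami(self, sid):
        # None means this replica never saw the session: to the user it
        # looks like they were suddenly logged out.
        return self.sessions.get(sid)

# Two replicas behind a round-robin load balancer.
replicas = [AppInstance(), AppInstance()]
lb = itertools.cycle(replicas)

sid = next(lb).login("alice")   # request 1 lands on replica 0
user = next(lb).whoami(sid)     # request 2 lands on replica 1
print(user)                     # prints None: the session "disappeared"
```

The fix is the usual one: move sessions and uploads to shared storage (e.g. a cache or object store) so any replica can serve any request.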
I’d pick kubernetes day 1 for just about any distributed computing. IMHO the hate has come from…
1. Skill issue.
2. Devs resistant to handling their own ops/infra.
You kind of touched on it, but you kind of also ignore the answer…
K8s adds another abstraction layer. If I’m to draw a parallel, I’m not sure how familiar you are with React, but if you don’t know any frontend and you are first learning it, it can be a bit of additional overhead that seems pointless at first, but then once you learn it, you see the benefits and can’t work without it. You forget the initial pain of having to learn the new paradigm.
Same sort of deal, when you’re on the inside with your k8s knowledge, no matter how EASY you perceive it, it’s still another abstraction and a different way of thinking. Not only that, but if you’re not full stack and your full time job is infrastructure, of course you have the capacity to focus on the entire vertical that is infrastructure.
Secondly, you’re kind of ignoring the fact that most people globally don’t work on the same sort of work or teams as you do. As you said, you can see why with a smaller team it makes sense, so acknowledge it.
Additionally, I can see a parallel with static vs. dynamically typed languages. Certain engineers like to shit on certain languages and don’t comprehend how you can code smaller things faster using dynamic languages, whereas things that don’t change often and are rigid fare better being written in a static language.
Same sort of thing, I can work in an org with 250 engineers and still not have a managed infrastructure setup that meets my needs. In my squad of 5 engineers I could write a small docker file and deploy it to managed infrastructure platform and call it a day. It’s more expensive, but the organisation isn’t big enough to warrant the investment in infrastructure in house just yet. I can also spin up and tear down my docker container fast and easy on the managed platform, just like with k8s, so why should I bother learning the abstraction?
The mark of a good engineer is focusing on trade-offs and reserving bold statements for when they’re warranted. Not everyone needs or wants to manage infrastructure, or even write Dockerfiles, so that plays a big part in your perceived k8s hate.
I feel like you’re fighting the wrong fight here. Tools are just tools.
The key issue I have with your post is “the organization needs you to learn one tool”.
That’s great! As a product engineer, is my project deadline getting extended so I can incorporate this new workflow? Is my end of year performance going to be x% better if I enthusiastically embrace this new tool?
If you’re an industry veteran like me, can you seriously look me in the eye and tell me the answer to the above questions is “yes” in the vast majority of companies?
The problem you’re trying to solve is the problem of mismatched incentives. Transitioning from <insert tool> to k8s at large companies with specialized engineering teams primarily benefits infra and operations teams, with that benefit also trickling down to platform engineers (as one, I appreciate k8s a ton!), but it does not really extend to a product-focused engineer or a frontend engineer. Therefore, the less a team benefits, the less disruption your migration should impose on them, and the less daily interaction with your tool it should require.
This is where organizations falter. The infra team convinces leadership of the benefits of k8s and it’s approved. But months later, engineers report slower working cycles, frustration with the tooling, and a lack of support. Product owners report engineering teams pushing deadlines. Executives notice the slowdown.
There’s also a surprising amount of comments here from people who say “devs who don’t understand k8s, that’s just a skill issue, just learn it hurrr durrr”. I posit that infra and operations teams who cannot seamlessly manage the transition to new tooling with minimal disruption to product engineers have a skill issue. Product engineers aren’t incentivized to care about how the infra engineer’s daily life is with Ansible vs. k8s, just like infra engineers aren’t incentivized to care about product-market fit or business logic. We work in a largely capitalistic market. People don’t work out of the goodness of their hearts, and the number of people in this industry who actually enjoy learning the tech are vastly outnumbered by those who want to do their job and go home. Large organizations that rely on the former to succeed, or worse, chastise the latter for “skill issues”, will find themselves at an engineering deficit.
So how do you do this right? K8s should be properly abstracted from product engineering. Backend engineers should absolutely understand the basics. You should be able to understand a manifest and fiddle with autoscaling rules to properly handle traffic. And you should incorporate these features into engineering design. But these should be increasing your productivity, not decreasing it. Migrations to k8s should be as invisible as humanly possible for product engineers. White-glove migrations should be the standard. Operations teams should be leading the migration with product teams providing support, instead of throwing an outdated readme at a product engineer and saying “get this done in a month”. Migrations should come with 40 hours a week of constant support, SLA timers on response and all. And most importantly, operations teams have to give up on their planned roadmap if their migration causes undue or unexpected challenges for product engineers, and pivot to solving those challenges. You absolutely cannot say “fuck it, it’s good enough for us and we’re moving on”. That builds resentment, and that’s how you get the comments here that you see.
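For reference, “fiddling with autoscaling rules” mostly means a manifest like this HorizontalPodAutoscaler (names and thresholds hypothetical) – the kind of thing a backend engineer should be able to read and tweak without owning the cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                  # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

That level of exposure – readable, tweakable, but operated by someone else – is roughly the abstraction boundary being argued for above.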
There’s a time and place for k8s. Just because it worked for your scenario doesn’t mean it’s always a good idea.
It’s usually a skill issue. People hate things that they’re not good at, myself included.
My sysadmin friends always tell people who hate k8s to try and do sysadmin/devops work without it.
You’ve just described my team’s exact transition over the last year or two. Ansible/Jenkins/Artifactory stack deploying to VMs provisioned on company-owned machines.
I’m in the ops team moving the entire platform onto kubernetes, but we’ve kept the deployment process pain-free for our developers – we spent time figuring out how to make the process changes either minimal or incremental for the devs, and automating as much as possible.
Our deployment process is just to commit changes to the integration/master branch, rebuild the image for our applications for whatever env via Jenkins, and then just bump the image version in our Helm values in source control. We use ArgoCD to sync our clusters up to date with what’s in the YAML.
Devs are receptive to it because it is very automated, they don’t really need to understand kubernetes just to be able to deploy code, and we’ve documented everything they might need if anything goes wrong.
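The image-bump step described above typically amounts to a one-line change in the Helm values file (layout hypothetical), which ArgoCD then reconciles against the cluster:

```yaml
# values.yaml – the only edit a routine deploy needs
image:
  repository: registry.example.com/myapp   # hypothetical
  tag: "1.4.2"   # CI bumps this; ArgoCD syncs the cluster to match
```

Keeping the dev-facing surface this small is what makes the GitOps flow feel like “just commit and it ships”.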
I’ve seen devs insist they need containers for their 20tps service when serverless will do just fine
This sub is full of people with little to no real world experience who actually have no idea what they are talking about. I wouldn’t read too much into it
Right now my team is deploying to Fargate and that’s working fairly well. Personally I’d like to learn Kubernetes, but it’s a hard sell if it’s not something we’re likely to use.
My manager is all on board for k8s and is happy for me to get stuck into it, but I’d like to be able to say to the rest of the team “we’re using this over Fargate because…”.
Right now I can’t come up with anything. What am I missing?
TL;DR: for the love of God please give me an excuse to use k8s
Keep learning it as much as possible. K8s is the future for a lot of the industry, and once you port a system to it, getting off is too painful.
What is your CS career question?
Debugging a k8s instance is a pain and a half. It’s a full-time job in itself figuring out how to use the command line to interface with your k8s instances.
Why are you posting this here, why aren’t you posting on that thread?
Possible reasons:
– Person is struggling with k8s’s relative complexity and that is conditioning their thoughts negatively. Basically, a skill issue. There are many developers who don’t really like learning that much and prefer to focus exclusively on their own area (against the shift-left philosophy).
– Person never had to deal with a complex non-Kubernetes system “back in the day” and got too comfy with all the tools that set up your infra in 3 clicks, so they struggle to see the benefit of adopting Kubernetes.
– Person has had a bad experience with it for some reason and now they’re correlating the bad experience with k8s.
I really like k8s, but simplicity, functionality, and availability are more important than rushing k8s out the door.
I haven’t noticed k8s hate anywhere. I was on the team that moved my company to k8s and no one complained about it.
One thing I find awesome is splitting applications into microservices and deploying independently. For instance, just doing a frontend-only deployment vs. re-building and re-deploying your whole app.
Devil’s in the details on that though. Making sure microservices are in-sync. Making sure you’ve structured your pods such that you aren’t coupling things and you’re only building what you need to.
But it’s pretty awesome once everything is set up appropriately.
never had that problem, guess the haters aren’t forward-thinking
Edited because I misinterpreted where OP said the hate was coming from
I think the most common complaint is that people reach for it too early.
What is a k8? A KPI (key performance indicator), or something related to Kubernetes?
Edit: Thanks for answering, I am a DevOps big dumb and haven’t seen that abbreviation before.