Kubernetes cluster management: Size and resources

Adam Kozłowski
Head of Automotive R&D
July 28, 2020 • 5 min read


Managing Kubernetes clusters comes with some demanding challenges. This article helps you manage your cluster resources properly, especially in an autoscaling environment.

If you try to run a resource-hungry application, especially on a cluster which has autoscaling enabled, at some point this happens:

(Image credits: https://github.com/eclipse/che/issues/14598)

At first, it may look bad, especially if you see dozens of evicted pods in kubectl get when you only wanted to run 5 pods. With all the claims that you can run containers without worrying about orchestration because Kubernetes does all of that for you, you may find it overwhelming.

Well, this is true to some extent, but the answer is - it depends, and it all boils down to a crucial topic associated with Kubernetes cluster management. Let's dive into the problem.


Kubernetes Cluster resources management

While there is a general awareness that resources are never limitless - even in a huge cluster-as-a-service solution - we do not often consider the exact layout of the cluster resources. And the general idea of virtualization and containerization makes it seem like resources are treated as a single, huge pool - which may not always be true. Let's see how it looks.

Let’s assume we have a Kubernetes cluster with 16 vCPU and 64GB of RAM.

Can we run our beautiful AI container, which requires 20GB of memory, on it? Obviously not. Why not? We have 64GB of memory available on the cluster!

Well, not really. Let’s see how our cluster looks inside:

(Figure: the cluster again)

There are 4 workers in the cluster, and each has 16GB of memory available (in practice, it will be a little bit less, because of DaemonSets and system services, which run on each node and take their small share). The hard memory limit for a single container is, in this case, 16GB, so we can't run our container.

Moreover, it means we always have to take this limitation into account - not just when we deploy one big container, but also in complex deployments, or even things which generally run out-of-the-box, like Helm charts.
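To check how the pool is actually split, we can ask each node what it reports as allocatable. A minimal sketch (the node name is a placeholder):

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory

# or, for a single node, including what running pods have already requested:
kubectl describe node <node-name>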

Let’s try another example.

Our next task will be a Ceph deployment to the same cluster. The target we want to achieve is a storage size of 1TB split into 10 OSDs (object storage daemons) and 3 Ceph MONs (monitors). We want to put it on 2 of the nodes and leave the other 2 for deployments which are going to use the storage. A basic and highly extensible architecture.

The first, naive approach is to just set the OSD count to 10, the MON count to 3, and add tolerations to the Ceph pods, plus of course a matching taint on Node 1 and Node 2. All Ceph deployments and pods are going to have the nodeSelector set to target only nodes 1 and 2.
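As a rough sketch of that naive setup (the taint key and value below are made up for illustration, and the node names depend on your cluster), the taint and the matching toleration could look like this:

# taint the two storage nodes so only pods tolerating the taint can land there
kubectl taint nodes node-1 node-2 dedicated=ceph:NoSchedule

# toleration added to the Ceph pod templates
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "ceph"
  effect: "NoSchedule"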

Kubernetes does its thing and runs mon-1 and mon-2 on the first worker along with 5 osds, and mon-3 along with 5 osds on the second worker.

Node 1: mon-1, mon-2, osd-1, osd-2, osd-3, osd-4, osd-5
Node 2: mon-3, osd-6, osd-7, osd-8, osd-9, osd-10
Node 3: Stateless App

It worked out! And our application can now save quite a lot of large files to Ceph very quickly, so our job becomes easier. If we also deploy the dashboard and create a replicated pool, we can even see 1TB of storage available and 10 OSDs up - that's a huge achievement!

(Dashboard view example: https://ceph.io/community/new-in-nautilus-new-dashboard-functionality/)

The very next morning, we check the status again and see that the available storage is around 400GB with only 4 OSDs still up. What is going on? Is this a crash? Ceph is resilient - it should be immune to crashes and restart quickly - and yet it does not seem to have worked very well here.

If we now check the cluster, we can see a lot of evicted OSD pods - even more than we were supposed to have in total. So what has really happened? To figure this out, we need to go back to our initial deployment configuration and think it through.

Limits and ranges

We ran 13 pods; 3 of them (the monitors) don't really need a lot of resources, but the OSDs do. The more we use them, the more resources they need, because Ceph caches a lot of data in memory. Plus, replication and balancing data over storage containers do not come for free.

So initially after the deployment, the memory situation looks more or less like this:

Node 1:
mon-1 - 50MB
mon-2 - 50MB
osd-1 - 200MB
osd-2 - 200MB
osd-3 - 200MB
osd-4 - 200MB
osd-5 - 200MB
1100MB memory used

Node 2:
mon-3 - 50MB
osd-6 - 200MB
osd-7 - 200MB
osd-8 - 200MB
osd-9 - 200MB
osd-10 - 200MB
1050MB memory used

After a few hours of extensive usage, something goes wrong.

Node 1:
mon-1 - 250MB
mon-2 - 250MB
osd-1 - 6500MB
osd-2 - 5300MB
osd-3 - Evicted
osd-4 - Evicted
osd-5 - Evicted
12300MB memory used

Node 2:
mon-3 - 300MB
osd-6 - 9100MB
osd-7 - 5700MB
osd-8 - Evicted
osd-9 - Evicted
osd-10 - Evicted
15100MB memory used

We have lost almost 50% of our pods. Does it mean it's over? No, we can lose more of them quickly, especially if the high throughput now targets the remaining pods. Does it mean we need more than 32GB of memory to run this Ceph cluster? No, we just need to correctly set limits so a single OSD can't use all available memory and starve the other pods.

In this case, the easiest way would be to take the 30GB of memory (leave 2GB for the mons - 650MB each, and set their limits properly too!) and divide it by 10 OSDs. So we have:

resources:
  limits:
    memory: "3000Mi"
    cpu: "600m"

Is it going to work? It depends, but probably not. We have configured 15GB of memory for the OSDs on the first node and 650MB for each monitor. It means that the first node requires 15 + 2*0.65 = 16.3GB - a little bit too much, and it also does not take into account things like DaemonSets for logs running on the same node. The new version should do the trick:

resources:
  limits:
    memory: "2900Mi"
    cpu: "600m"

Quality of Service

There is one more warning. If we also set a request for the pod to exactly match the limit, then Kubernetes treats this kind of pod differently:

resources:
  requests:
    memory: "2900Mi"
    cpu: "600m"
  limits:
    memory: "2900Mi"
    cpu: "600m"

This pod configuration is going to have its QoS class in Kubernetes set to Guaranteed; otherwise, it is Burstable. Guaranteed pods are the last to be evicted - by setting the request equal to the limit, we declare that we know exactly what the resource usage of this pod is, so it should not be moved or managed by Kubernetes. It reduces flexibility for the scheduler but makes the whole deployment much more resilient.

Obviously, for mission-critical systems, "best-effort" is never enough.
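It is easy to verify which QoS class Kubernetes assigned to a pod - for example, for one of the OSD pods from our scenario:

# prints Guaranteed, Burstable or BestEffort
kubectl get pod osd-1 -o jsonpath='{.status.qosClass}'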

Resources in an autoscaling environment

If we can calculate or guess the required resources correctly to match the cluster size, limits and quality of service may be just enough. Sometimes, though, the configuration is more sophisticated and the cluster size is fluid - it can scale up and down horizontally and change the number of available workers.

In this case, the planning goes in two parallel paths - you need to plan for the minimal cluster size and the maximum cluster size - assuming linear scaling of resources.

It cannot be assumed that applications will act properly and leave space for the other cluster cohabitants. If the pods are allowed to scale up horizontally or vertically while the cluster is expanding, it may result in evicting other pods when it's scaling down. To mitigate this issue, there are two main concepts available in Kubernetes: Pod Priority and Pod Disruption Budget.

Let's start again by creating our test scenario. This time we don't need tons of nodes, so let's just create a cluster with two node groups: one consisting of regular instances (let's call it persistent) and one consisting of preemptible/spot instances (let's just call them preemptible for the sake of the experiment).

The preemptible node group will scale up when the CPU usage of an existing node goes over 0.7 (70%).

The advantage of preemptible/spot instances is their price. They are much cheaper than regular VMs of the same performance. The only drawback is that there is no guarantee of their lifetime - an instance can be killed when the cloud provider decides it is required somewhere else, for maintenance purposes, or simply after 24 hours. This means we can only run fault-tolerant, stateless workloads there.

Which should be most of the things which run in your cluster if you follow the 12 factors, right?

Why is there one persistent node in our cluster, then? It is there for the rare case when none of the preemptible nodes are running - it maintains the minimal set of containers needed to keep the application operational.

Our application will consist of:

Application | Replicas | CPUs | Memory
Redis cluster with one redis master - has to run on a persistent node | 1 | 0.5 | 300MB
Frontend application (immutable) | 2 | 0.5 | 500MB
Backend application (immutable) | 2 | 0.7 | 500MB
Video converter application (immutable) | 1 | 1 | 2GB
Sum | | 3.9 | 4.3GB

We can configure the redis master to run on the persistent node using a node selector. Then just deploy everything else and Bob is your uncle.
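A minimal sketch of that node selector setup - the node name and the label key are assumptions made for this example:

# label the persistent node
kubectl label node persistent-node-1 nodegroup=persistent

# in the redis master pod template
spec:
  nodeSelector:
    nodegroup: persistent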

Horizontal Pod Autoscaler

Well, we have an autoscaling node group but no autoscaling configured in the cluster. This means we never really trigger cluster autoscaling, and it stays on two workers all the time, because the application itself does not increase its replica count. Let's start with the Horizontal Pod Autoscaler:

Frontend:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75

Backend:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75

Video converter:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: video-converter-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: video-converter
  minReplicas: 1
  maxReplicas: 25
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 25

So now we have the same configuration as we described in the deployment - the sum of minReplicas is equal. Why does the video converter have such a low target average utilization? When there are multiple conversions enqueued, it makes autoscaling quicker - as soon as average CPU usage reaches 25%, a new instance is spawned. This is a very trivial configuration - if you need something more sophisticated, check scaling policies.
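As an illustration only (it assumes Kubernetes 1.18+, where the behavior field is available in autoscaling/v2beta2), the video converter HPA could cap how fast it scales up and smooth its scale-down by adding something like this under spec:

behavior:
  scaleUp:
    policies:
    - type: Pods
      value: 5           # add at most 5 pods...
      periodSeconds: 60  # ...per minute
  scaleDown:
    stabilizationWindowSeconds: 300  # wait 5 minutes before scaling down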

What might happen if we now test our environment and enqueue 50 video conversions each taking around 10 minutes?

It depends, but the likely scenario is that the video converter will scale up to 25 instances. What happens with the other containers in the cluster? Some of them will be evicted - maybe backend ones, maybe frontend ones, or maybe even redis. There is quite a high risk that the setup will break down and become inaccessible for the end users.

Can we mitigate the issue? Yes - for example, we can create priority classes and assign a lower one to the video converter. The higher the priority a pod has, the more worth it has for the scheduler. If two pods are due to be evicted, the one with lower priority goes first. If two pods of different priorities are waiting to be scheduled, the higher priority one takes precedence.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100000
globalDefault: false
description: "This is high priority class for important workloads"

So if we give the converter lower priority, we confirm that the frontend and backend pods are more important, and in the worst case, the video converter can be expelled from the cluster.
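To actually use the class, each deployment has to reference it by name in its pod template. A minimal sketch for the frontend (the image name is a placeholder):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      priorityClassName: high-priority  # pods of this deployment get priority 100000
      containers:
      - name: frontend
        image: frontend:latest          # placeholder image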

However, this alone does not guarantee that the backend can't evict the frontend.

There is also an alternative that allows us to have better control over the scheduling of the pods. It is called…

Pod Disruption Budget

This resource allows us to configure the minimal number of a deployment's pods running at once. It is stricter than priority alone because it can even block a node drain if there is not enough space on other workers to reschedule the pods and the drain would, as a result, make the replica count lower than the assigned budget.

The configuration is straightforward:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: frontend

From now on, the frontend replica count cannot drop below 2. This way we can assign minimums for all the pods and make sure there are always at least 1 or 2 pods which can handle requests.

This is the easiest and safest way to make sure that pod autoscaling and cluster scale-down are not going to affect the overall stability of the solution - as long as the minimal set of containers configured with the disruption budget fits the minimal cluster size and is enough to handle the bare minimum of requests.

Connecting the dots

Now we have all the required pieces to create a stable solution. We can configure the HPAs to have the same minimum number of replicas as the PDBs to make the scheduler's life easier. We know our max cluster size and made sure limits are the same as requests, so pods are not evicted. Let's see what we get with the current configuration:

Application | Min. replicas | Max. replicas | PDB | CPUs | Memory
Redis cluster with one redis master - has to run on a persistent node | 1 | 1 | 1 | 0.5 | 300MB
Frontend application (immutable) | 2 | 10 | 2 | 0.5 | 500MB
Backend application (immutable) | 2 | 10 | 2 | 0.7 | 500MB
Video converter application (immutable) | 1 | 25 | 1 | 1 | 2GB
Sum (min) | | | | 3.9 | 4.3GB
Sum (max) | | | | 37.5 | ~60.3GB

Not bad. It can even stay as it is, but the current maximum cluster size is 24 cores with 48GB of memory. With all the configurations we went through, it should be fine even if we try to exceed that size, and there is a little bit of flexibility left for the scheduler - for example, if there is very low load on the frontend and backend but a huge pile of data to be converted, the converter can scale up to approximately 19-21 instances, which is nice to have.

There is no one design that fits all

Is there anything wrong with the current configuration? Well, there can be, but we are going into unknown depths of “it depends.”

It all starts with a simple question - what is the purpose of my solution/architecture, and what are the KPIs? Let's look again at the example - it is a video converter with a web application. A pretty basic solution that scales up if required to accommodate a higher load. But what is more important - faster conversion or a more responsive UI?

It all boils down to the product requirements, and in general, it is easy to solve. There are three paths we can follow from now on:

The "I don't care" path

If it does not matter from the user and product perspective, just leave it and see how it performs. Maybe even two frontend pods can handle a lot of load? Or maybe nobody cares about the latency as long as nothing crashes unexpectedly? Don't overengineer and don't attempt premature optimization - let it be and see if it's fine. If it's not, there are still two other paths available.

The "I know what matters most" path

This path requires a bit of knowledge about priorities. If the priority is a smooth and scalable UI and it's fine to have some conversions waiting - put the higher priority on the frontend and backend deployments, as described in the previous paragraphs. If video conversion is the key - put the higher priority on it. Whatever you choose, it will be the deployment that can scale up at the expense of the other one. This is especially important if the loads don't really run in parallel most of the time, so they can scale up and down independently - a scenario the next path does not fit.

The "I want to be safe" path

The last path is straightforward: set the maximums so they are close to the cluster limits, but not higher:

Application | Min. replicas | Max. replicas | PDB | CPUs | Memory
Redis cluster with one redis master - has to run on a persistent node | 1 | 1 | 1 | 0.5 | 300MB
Frontend application (immutable) | 2 | 8 | 2 | 0.5 | 500MB
Backend application (immutable) | 2 | 8 | 2 | 0.7 | 500MB
Video converter application (immutable) | 1 | 13 | 1 | 1 | 2GB
Sum (min) | | | | 3.9 | 4.3GB
Sum (max) | | | | 23.1 | 34.3GB

Now there is some space in the memory department, so we can, for example, give the pods more memory. We are also always safe because most of the time there will be no fighting for resources - it might happen only while the cluster is scaling up.

Is this a perfect solution? Not really, because it would be possible to fit 20 video converters at once in the cluster when there is no traffic on the UI (frontend and backend), and we artificially limit the deployment's ability to scale.

Autoscaling considerations

When it comes to autoscaling, there are some things to keep in mind. First, it is not reliable - it's impossible to say how long it will take the cloud provider to spin up a VM. It may take seconds, and it may take minutes (in general, it rarely takes less than a minute), so starting very small with the hope that autoscaling will absorb peak loads may not be the greatest idea.

The other often forgotten thing is that whatever scales up must at some point scale down. If a deployment scales down and its pods are truly stateless and handle it gracefully, it is not a big deal. When the cluster scales down, though, we need to remember that it effectively shuts down VMs. Sometimes something is still running on them, and the scheduler has to quickly move the workload to the other workers. This has to be thoughtfully tested to make sure it does not break the application's operations.
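One way to rehearse that scenario before the autoscaler does it for us is to drain a node manually and watch how the workload gets rescheduled - a sketch, with a hypothetical node name:

# mark the node unschedulable, then evict its pods gracefully
kubectl cordon preemptible-node-1
kubectl drain preemptible-node-1 --ignore-daemonsets

# watch where the pods land and whether the Pod Disruption Budgets hold
kubectl get pods -o wide --watch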

Kubernetes cluster management - summary

This is the end of our quite long journey through Kubernetes cluster size and resource management. There is much more out there, especially for bigger clusters or complex problems, which may come in handy later on - configuring eviction policies, namespace requests and limits, or topology management, which is useful when we have specific nodes for specific purposes. Still, what we have gone through in this article should serve well even quite complex solutions. Good luck, and we wish you no evicted pods in the future!

Check related articles

The state of Kubernetes - what the upcoming months will bring for container orchestration

Kubernetes has become a must-have container orchestration platform for every company that aims to gain a competitive advantage by delivering high-quality software at a rapid pace. What’s the state of Kubernetes at the beginning of 2020? Is there room for improvement? Here is a list of trends that should shape the first months of the upcoming year.

As a team that provides its own Multicloud Enterprise Kubernetes platform and empowers numerous companies in adopting K8s, we follow all the news that helps us prepare for the upcoming trends in using this cloud-native platform. And there are no better places to learn what's new and what's coming than conferences like KubeCon + CloudNativeCon.

A few weeks ago, San Diego hosted KubeCon + CloudNativeCon North America, gathering 12 thousand cloud-native enthusiasts - a 50% increase in attendees compared to the previous edition, which shows the scale of Kubernetes' popularity growth. During the event, we had a chance to hear about new trends and discuss further opportunities with industry experts. Most of the news announced in San Diego will influence the upcoming months in the cloud-native world. Below, we focus on the most important ones.

Kubernetes is a must for the most competitive brands

What makes KubeCon so likable? Access to Kubernetes experts, networking with an amazing community of people gathered around CNCF, a chance to learn the trends before they become mainstream? For sure, but what else makes it so special? The answer comes down to the hottest brands joining the cloud-native nation these days - Pinterest, Home Depot, Walmart, Tinder, and many more.

It’s obvious when tech companies present how they build their advantage using the latest technologies, but it becomes more intriguing when you have an opportunity to get to know how companies like Adidas, Nike or Tinder (yes, indeed) are using Kubernetes to provide their customers/users with extraordinary value.

As these examples show, we live in a software-driven world, where the quality of delivered apps is crucial to staying relevant, regardless of the industry.

Enterprises need container orchestration to sustain their market share

The conference confirmed that Kubernetes is a standard in container orchestration and one of the key elements contributing to the successful implementation of a cloud-first strategy for enterprises.

But why should the largest companies be interested in adopting the newest technologies? Because their industries are constantly being disrupted by fresh startups utilizing agility and cutting-edge tech solutions. The only way to sustain position is to evolve, and the way to achieve it comes down to adopting a cloud-native strategy and implementing Kubernetes. As Jonathan Smart once said, "You're never done with improving and learning."

Automate what can be automated

As more and more teams move Kubernetes to production, a large number of companies are working on solutions that help streamline and automate certain processes. That drives the growing market of tools associated with Kubernetes and enriching its usage.

For example, Helm, which has earned its place in the cloud-native toolbox as one of the key deployment tools used by administrators, in its latest version simplifies and improves operation by getting rid of some dependencies, such as Tiller, the server-side component that used to run in the Kubernetes cluster.

Kubernetes-as-a-service and demand for Kubernetes experts

During this year's KubeCon, many vendors from a range of domains presented complete solutions for Kubernetes, accelerating container orchestration. At previous events, we met vendors providing storage, networking, and security components for Kubernetes. This evolution reflects the development of the ecosystem built around the platform. Such an extensive offer of solutions helps teams and organizations migrating to cloud-native find the right balance in the "build versus buy" decision for components and solutions.

Rancher announced a solution that may be an example of an interesting Kubernetes-as-a-service option. The company collaborated with ARM to design a highly optimized version of Kubernetes for the edge - packaged as a single binary with a small footprint to reduce the dependencies and steps needed to install and run K8s in resource-constrained environments (e.g., IoT or edge devices for ITOps and DevOps teams). By making K3s (a lightweight distribution built for small-footprint workloads) available and providing the beta release of Rio, their new application deployment engine for Kubernetes, Rancher delivers an integrated deployment experience from operations to the pipeline.

Kubernetes-as-a-service offerings on the market are gaining strength. The huge number of Kubernetes use cases entails another very important trend: companies are looking for talent in this field more than ever, and many used the conference to meet with experts. The number of Kubernetes jobs has also increased - the demand for experts on the subject is huge.

Multicloud is here to stay

Are hybrid solutions becoming a standard? Many cloud providers have claimed to be the best fit for multicloud - and we observe that the trend is becoming more popular. Despite some doubts (regarding complexity, security, regulations, or performance), enterprises are dealing well with implementing a multicloud strategy.

The world's top companies are moving to multicloud, as this approach empowers them to gain exceptional agility and huge cost savings thanks to the possibility of separating their workloads into different environments and making decisions based on individual goals and specific requirements.

It is also a good strategy for companies working with a private cloud only, usually because they store sensitive data. As numerous case studies show, these businesses can be architected into multicloud solutions where sensitive data is still stored securely on-premise, while other workloads are moved into the public cloud, which makes them easily scalable and easier to maintain.

Kubernetes is everywhere, even in the car….

During KubeCon, Rafał Kowalski, our colleague from Grape Up, shared his presentation about running Kubernetes clusters in the car - "Kubernetes in Your 4x4 - Continuous Deployment Direct to the Car". Rafał showed how to use Kubernetes, KubeEdge, k3s, Jenkins, and RSocket to build continuous deployment pipelines which ship software directly to the car and deal with rollbacks and connectivity issues. You can watch the entire video here:

https://www.youtube.com/watch?v=zmuOxFp3CAk&feature=youtu.be

…. and can be used in various devices

But these are not all of the possibilities; other devices such as drones or any IoT devices can also utilize containers. The need for increased automation of cluster management and the ability to quickly rebuild clusters from scratch were the conclusions that emerged from the presentations mentioned above.

The remarkable growth in the number of companies using Kubernetes and in the development of utilities shows there are still open needs in terms of simplicity and scalability of operational tooling - security, data management, and programming tools - and continued work in this area should be expected.

"Kubernetes has established itself as the de facto standard for container orchestration" - these are the most frequently repeated words. It's good to observe the development of the ecosystem around Kubernetes, which strives to provide more reliable and cheaper experiences for enterprises that want to push their strategic initiatives to the limit.


Deliver your apps to Kubernetes faster

Kubernetes is currently the most popular container orchestration platform used by enterprises, organizations, and individuals to run their workloads. Kubernetes provides software developers with great flexibility in how they can design and architect systems and applications.

Unfortunately, its powerful capabilities come at the price of the platform's complexity, especially from the developer's perspective. Kubernetes forces developers to learn and understand its internals fluently in order to deploy workloads, secure them, and integrate them with other systems.

Why is it so complex?

Kubernetes uses the concept of Objects, which are abstractions representing the state of the cluster. When one wants to perform some operation on the cluster, e.g., deploy an application, they basically need to make the cluster create several Kubernetes Objects with the appropriate configuration. Typically, when you would like to deploy a web application, in the simplest scenario, you would need to:

  • Create a deployment.
  • Expose the deployment as a service.
  • Configure ingress for the service.

However, before you can create a deployment (i.e., command Kubernetes to run a specific number of containers with your application), you need to start with building a container image that includes all the necessary software components to run your app and, of course, the app itself. "Well, that's easy" – you say – "I just need to write a Dockerfile and then build the image using docker build". That is all correct, but we are not there yet. Once you have built the image, you need to store it in a container image registry where Kubernetes can pull it from.
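Put together, the manual flow described above boils down to a handful of commands - a rough sketch, where the image name, registry, and ports are placeholders:

# build the image and push it to a registry Kubernetes can pull from
docker build -t registry.example.com/demo/my-app:1.0 .
docker push registry.example.com/demo/my-app:1.0

# create the deployment and expose it as a service
kubectl create deployment my-app --image=registry.example.com/demo/my-app:1.0
kubectl expose deployment my-app --port=80 --target-port=8080

# the ingress still needs its own manifest pointing at the my-app service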

You could ask - why is it so complex? As a developer, I just want to write my application code and run it, rather than additionally struggle with Docker images, registries, deployments, services, ingresses, etc., etc. But that is the price for Kubernetes’ flexibility. And that is also what makes Kubernetes so powerful.

Making deployments to Kubernetes easy

What if all the above steps were automated and combined into a single command allowing developers to deploy their app quickly to the cluster? With Cloudboostr’s latest release, that is possible!

What’s new? The Cloudboostr CLI - a new command line tool designed to simplify developer experience when using Kubernetes. To deploy an application to the cluster, you simply execute a single command:

cb push APP_NAME

The concept of “pushing” an application to the cluster has been borrowed from the Cloud Foundry community and its famous cf push command described by cf push haiku:

Here is my source code
Run it on the cloud for me
I do not care how.

When it comes to Cloudboostr, the "push" command automates the app deployment process by:

  • Building the container image from application sources.
  • Pushing the image to the container registry.
  • Deploying the image to the Kubernetes cluster.
  • Configuring service and ingress for the app.

Looking under the hood

Cloudboostr CLI uses the Cloud Native Buildpacks project to automatically detect the application type and build an OCI-compatible container image with an appropriate embedded application runtime. Cloud Native Buildpacks can autodetect the most popular application languages and frameworks such as Java, .NET, Python, Golang or NodeJS.

Once the image is ready, it is automatically pushed to the Harbor container registry built into Cloudboostr. By default, Harbor is accessible and serves as the default registry for all Kubernetes clusters deployed within a given Cloudboostr installation. The image stored in the registry is then used to create a deployment in Kubernetes. In the current release, only standard Deployment objects are supported, but adding support for StatefulSets is on the roadmap. As the last step, a service object for the application is created and a corresponding ingress object is configured with Cloudboostr's built-in Traefik proxy.

The whole process described above is executed in the cluster. Cloudboostr CLI triggers the creation of a temporary builder container that is responsible for pulling the appropriate buildpack, building the container image and communicating with the registry. The builder container is deleted from the cluster after the build process finishes. Building the image in the cluster eliminates the need to have Docker and pack (Cloud Native Buildpacks command line tool) installed on the local machine.

Cloudboostr CLI uses configuration defined in kubeconfig to access Kubernetes clusters. By default, images are pushed to the Harbor registry in Cloudboostr, but the CLI can also be configured to push images to an external container registry.

Why bother a.k.a. the benefits

While understanding Kubernetes internals is extremely useful, especially for troubleshooting and debugging, it should not be required when you just want to run an app. Many development teams that start working with Kubernetes find it difficult as they would prefer to operate on the application level rather than interact with containers, pods, ingresses, etc. The “cb push” command aims to help those teams and give them a tool to deliver fast and deploy to Kubernetes efficiently.

Cloudboostr was designed to tackle common challenges that software development teams face using Kubernetes. It became clear that we could improve the entire developer experience by providing those teams with a convenient yet effective tool to migrate from Cloud Foundry to Kubernetes. A significant part of that transition came down to offering a feature that makes deploying apps to Kubernetes as user-friendly as Cloud Foundry does. That allows developers to work intuitively and with ease.

Cloudboostr CLI significantly simplifies the process of deploying applications to a Kubernetes cluster and takes the burden of handling containers and all Kubernetes-native concepts off of developers’ backs. It boosts the overall software delivery performance and helps teams to release their products to the market faster.


How to successfully adopt Kubernetes in an enterprise?

Kubernetes has practically become the standard for container orchestration. Enterprises see it as one of the crucial elements contributing to the success of the implementation of a cloud-first strategy. Of course, Kubernetes is not the most important success factor in going cloud-native. But the right tooling is the enabler for achieving DevOps maturity in an enterprise, which builds primarily on cultural change and shift in design thinking. This article highlights the most common challenges an enterprise encounters while adopting Kubernetes and recommendations on how to make Kubernetes adoption smooth and effective in order to drive productivity and business value.

Challenges in Kubernetes adoption

Kubernetes is still complex to set up. Correct infrastructure and network setup, installation, and configuration of all Kubernetes components are not that straightforward, even though there are tools created to streamline that part.

Kubernetes alone is not enough. Kubernetes is not a cloud-native platform by itself, but rather one of the tools needed to build a platform. A lot of additional tooling is needed to create a manageable platform that improves developers' experience and drives productivity. Therefore, it requires a lot of knowledge and expertise to choose the right pieces of the puzzle and connect them in the right way.

Day 2 operations are not easy. When the initial problems with setup and installation are solved, there comes another challenge: how to productionize the platform, onboard users, and manage Kubernetes clusters at scale. Monitoring, upgrading and patching, securing, maintaining high availability, handling backups - these are just a few operational aspects to consider. And again, it requires a lot of knowledge to operate and manage Kubernetes in production.

Another aspect is the platform's complexity from the developer's perspective. Kubernetes requires developers to understand its internals in order to use it effectively for deploying applications, securing them, and integrating them with external services.

Recommendations for a successful Kubernetes adoption

Choose a turnkey solution – do not build the platform by yourself as the very first step, considering the aforementioned complexity. It is better to pick a production-ready distribution that allows you to set it up quickly and focus on managing the cultural and organizational shift rather than struggling with the technology. Such a solution should offer the right balance between how much is pre-configured and available out-of-the-box and the flexibility to customize it further down the road. Of course, it is good when the distribution is compatible with upstream Kubernetes, as it allows your engineers and operators to interact with native tools and APIs.

Start small and grow bigger in time – do not roll out Kubernetes for the whole organization immediately. New processes and tools should be introduced in a small, single team and incrementally spread throughout the organization. Adopting Kubernetes is just one of the steps on the path to cloud-native, and you need to be cautious not to slip. Start with a single team or product, learn, gain knowledge, and then share it with other teams. These groups, being the early adopters, should eventually become facilitators and evangelists of Kubernetes and the DevOps approach, and help spread these practices throughout the organization. This is the best way to experience Kubernetes' value and understand the operational integration required to deliver software to production in a continuous manner.

Leverage others' experiences – usually, it is good to start with the default, pre-defined or templated settings and leverage proven patterns and best practices in the beginning. As you get more mature and knowledgeable about the technology, you can adjust, modify, and reconfigure iteratively to make it better suit your needs. At this point, it is good to have a solution which can be customized and gives the operator full control over the configuration of the cluster. Managed and hosted solutions, even though easy to use at the early stage of Kubernetes adoption, usually leave little to no room for custom modifications and cluster fine-tuning.

When in need, call for backups – it is good to have cavalry in reserve which can come to the rescue when bad things happen or simply when something is not clear. Secure yourself for the hard times and find a partner who can help you learn and understand the complexities of Kubernetes and other building blocks of the cloud-native toolset, even when your long-term strategy is to build Kubernetes skills in-house (from both a development and an operations perspective).

Do not forget about the mindset change – adopting the technology is not enough. Starting to deploy applications to Kubernetes will not instantly transform your organization and speed up software delivery. Kubernetes can become the cornerstone of the new DevOps way the company builds and delivers software, but it needs to be supported by organizational changes touching many more areas of the company than just tools and technology: the way people think, act and work, and the way they communicate and collaborate. And it is essential to educate all stakeholders at all levels throughout the adoption process, to have a common understanding of what DevOps is, what changes it brings, and what the benefits are.

Adopting Kubernetes in an Enterprise - conclusion

Even though Kubernetes is not easy, it is definitely worth the attention. It offers great value in the platform you can build with it and can help transition your organization to a new level. With Kubernetes as the core technology and a DevOps approach to software delivery, a company can accelerate application development, manage its workflows more efficiently, and get to market faster.
