How to use a Kubernetes cluster


Find out how to use a Kubernetes cluster to set up a microservice-based application and expose it to clients.



[Video Transcript]

Can we get a solution in which applications run on the same operating system and yet are independent from one another?

Can we get the best of both worlds: run applications on top of the operating system directly, but without them knowing about anything else running on the same machine?

This is what led to the development of containers. With containers, we have a base operating system. It might be Windows, but most of the time in production you will see Linux, as Linux containers were the first to make their way into the real world. And on top of that, well, you do have the container engine, but it doesn't really sit between the OS and the applications. The engine is in charge of starting the applications, and once the applications have started, each application runs directly on top of the base operating system.

And what's more important, each application is blissfully unaware of any other application existing on the operating system. 

The best of both worlds: lower resource usage and independence between the applications.

Once an application is containerized, it becomes easy not only to start and stop it, but also to take the application and move it somewhere else. A different base operating system, a different Linux distribution, for example, is generally not a problem; it's only the kernel that needs to match. Different libraries on the base operating system? We don't care: the container has all the libraries the application requires baked in.

It becomes very easy to run that application in a predictable way, to make sure that the application behaves the same way everywhere.

It doesn't matter if it's on a developer's laptop, in the data center, or somewhere in the cloud.

Here we have to talk about something that you have probably seen before: pets versus cattle. We have used this analogy for a long time now when dealing with servers. What's the difference? Well, the pets have names.

The pets are unique, lovingly hand-raised and cared for. And if a pet develops a problem, you spend a lot of time and a lot of resources trying to bring that pet back to health. This used to be the case with servers. Each server had a name, and a good sysadmin would in time learn the small quirks, the small differences between the servers.

Of course, when a server went down, when a server had a problem, a sysadmin would spend days, even weeks, trying to bring that server back to health. And it's a good approach. It works well, but it has a limitation: it doesn't scale. When you are talking about thousands, tens of thousands of servers, you can no longer treat each one of them as a separate entity.

You want to treat them like cattle. You only care about how many there are. They should be as identical to one another as possible, and if one develops a problem, many times the easiest solution is to just decommission that server, move the workload somewhere else, and deploy a new server based on the same template.

This is not new. This has been happening with servers for a long time. Now, however, in the container world, we are seeing something similar beginning to happen to the applications, to the workloads that we're running on our servers. We try to keep those containers as similar to one another, and as similar to the original image, as possible.

We don't store state inside the container. If we need to store state, we save it somewhere else: maybe a database, maybe a volume somewhere else.

And many times, if a container develops a problem, the easiest way to fix it is to just destroy the container and restart it. If you want to update the version of the application, the easiest option is to destroy the container and start a new container with the new version.

And everything works fine. We need to start treating our applications like cattle. I know it doesn't sound great, but it is what we need to do in the container world.
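To make this concrete, here is a minimal sketch of the destroy-and-replace pattern using the official Docker SDK for Python. The container name web-app and the image tags are hypothetical placeholders, not something from the demo in this video.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Cattle, not pets: don't repair the running container, replace it.
# "web-app" and the image tag below are hypothetical examples.
old = client.containers.get("web-app")
old.stop()
old.remove()

# Start a fresh container from the new image version.
client.containers.run(
    "example/web-app:2.0",   # new version of the application image
    name="web-app",
    detach=True,
    ports={"8080/tcp": 8080},
)
```

Because the container holds no state, nothing of value is lost when the old one is destroyed.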

Ok. So this was a very quick overview of microservices. 

We saw how containers and microservices are a match made in heaven, even though they were actually developed independently. They weren't created for one another, but they proved to be an absolutely great match. And then we talked a little bit about the changes that containers bring to the application.

However, why Kubernetes? We have containers, we have Docker, which was a very, very successful project. We have the ability to run our applications very easily, very quickly, on top of any Linux distribution you prefer. What was missing? Why did someone, in this case Google, say: we need more?

What was missing here?

Well, Docker was designed from the very beginning with one host in mind. When talking about cloud scale, we are talking about many, many applications running on many hosts. If you have ever worked with Docker, can you imagine managing a Docker cluster? And by the way, I mean Docker standalone; I'm not talking about Docker Swarm here.

We won't discuss Swarm here, that's a different subject altogether. But with Docker standalone?

Can you imagine manually managing containers across three or five or seven hosts? It is very, very difficult. You need more. In order to scale, you need two things: clustering capabilities and orchestration capabilities. With clustering, you're actually taking multiple hosts. They might be physical bare-metal servers, they might be cloud instances, they might be virtual machines; it doesn't matter. You have multiple servers, and you can manage them from one single point. You can treat them as one single pool of resources, of CPU and memory, for running containers. And furthermore, you need orchestration.

You need a very simple way to run and organize deployments at scale. You want to be able to tell the system: run four instances of this web server. Oh, and make sure that it is accessible on this port. Oh, and make sure that, for high-availability purposes, they are always scheduled to run on separate hosts. And keep an eye on them; if one crashes, automatically start a new one. This is what Kubernetes brings to the table: Kubernetes is a container orchestrator.
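As a rough sketch of what that request looks like in practice, here is a Deployment expressed with the official Kubernetes Python client. The names and the image are arbitrary examples, and in real life you would more often write this as a YAML manifest; the rule that spreads the pods across separate hosts (pod anti-affinity) is left out for brevity.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # reads ~/.kube/config, just like kubectl

# Desired state: four replicas of a web server listening on port 80.
# All names and the image are illustrative examples.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=4,  # "run four instances of this web server"
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```

From this point on, Kubernetes keeps an eye on those four instances and replaces any that crash.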

And of course, here I will go back to the Kubernetes comic, and we will talk a little bit about Jason, who had a major career change and now works in IT; we'll see how his story actually maps to what we have discussed so far.

I love this slide, I love this image. The servers have names, and we even have some actual pets in the corner: a bird, a dog. This is the pets-versus-cattle analogy that we discussed previously. Nothing works properly. Delivering new services is difficult. Update cycles are slow. Scaling is complex.

How can we fix this? Of course, you know why you're here: the answer is Kubernetes, introduced in the comic by the goddess of wisdom and containerized applications, which, well, I think is quite fitting.

What's interesting here is that Jason is exactly where we are now. We have talked about containers, we have talked about what container engines are and what Docker is, and they are already running applications inside containers. They're halfway there, but something is still missing: the management part is difficult.

They need the orchestration capabilities of Kubernetes.

Once you add Kubernetes, you get portability, deployability, and scalability: easy porting of the application from one place to another. It doesn't matter if we're talking about a laptop, a server in a data center, or a cloud deployment. And you also get self-healing capabilities: Kubernetes will automatically and continuously compare the current state of your application and your environment to the desired state you specified.

Whenever something changes, whenever the system deviates from the desired state, Kubernetes will take automated action to bring the system back to the desired state. Maybe it will restart a container, or a pod in Kubernetes terminology; maybe it will add more pods or delete some of the existing ones. But it will automatically try to fix the issues.
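That comparison is the heart of a control loop. The toy sketch below is not Kubernetes code, just a plain-Python illustration of the reconciliation idea: observe the current state, compare it to the desired state, and act to close the gap.

```python
import time

def reconcile(desired: int, running: list, start_pod, stop_pod) -> None:
    """One pass of a toy control loop (illustration only)."""
    gap = desired - len(running)
    if gap > 0:                      # too few pods: start more
        for _ in range(gap):
            running.append(start_pod())
    elif gap < 0:                    # too many pods: remove the surplus
        for _ in range(-gap):
            stop_pod(running.pop())

def control_loop(desired: int, running: list, start_pod, stop_pod) -> None:
    """A real controller runs the comparison continuously."""
    while True:
        reconcile(desired, running, start_pod, stop_pod)
        time.sleep(5)
```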

So this is what Kubernetes can bring. You can put your application anywhere, and you can manage the entire deployment declaratively. You get manual and automatic scaling, health monitoring, self-healing capabilities, and automated roll-outs and rollbacks. Even upgrading your application, something that we will not have time to cover in the demo, becomes much, much easier when using Kubernetes.
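As a small hedged example of that declarative style, both scaling and a rolling upgrade are just patches to the desired state. This assumes the hypothetical web Deployment from the earlier sketch.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Manual scaling: declare a new replica count; Kubernetes does the rest.
apps.patch_namespaced_deployment(
    name="web", namespace="default",
    body={"spec": {"replicas": 6}},
)

# Automated roll-out: declare a new image and pods are replaced
# gradually; a bad roll-out can be reverted (a rollback) the same way.
apps.patch_namespaced_deployment(
    name="web", namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.26"},
    ]}}}},
)
```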

In the end, the application can run anywhere, from your on-premises data center to a cloud platform, and it becomes quite easy to move from one place to another.

Want to learn more? See our Kubernetes courses!
