Jenkins and its main functionality

Alex Stan February 11 2021 #jenkins


Find out all about Jenkins's main functionality, highlighting how easily it can be extended and integrated with other tools and services (GitHub, Maven, etc.).


 

[Video Transcript] 

Let's move on and discuss a little bit about Jenkins. So the first question that we need to ask ourselves is: what is this tool, and how can it help us, right?

You can think of Jenkins as just a server, running on a Linux or Windows environment, which can help you automate all kinds of tasks related to building, packaging, testing, delivering, and deploying your software stack. And from a DevOps perspective, these capabilities help us create continuous integration, continuous delivery, and/or continuous deployment, depending on what you are trying to achieve by using this tool.

You can compare it with something familiar. I hope everyone who has used Linux environments is familiar with cron jobs, and there are scheduled tasks on Windows too, so it's not only on Linux.

A cron job is something that is executed at a specific period of time, and it runs whatever you define it to run. In the same manner, you can think of Jenkins as a tool similar to cron that will help you run those repetitive jobs that you create inside the Jenkins application.
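The analogy goes further than you might expect: a Jenkins declarative pipeline can declare a `cron` trigger using cron's own schedule syntax. A minimal sketch (the schedule and the script name are placeholders, not from the original talk):

```groovy
// Run this job on a schedule, much like a crontab entry. The "H" token lets
// Jenkins spread start times so jobs don't all fire at once.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // roughly once a day, around 02:00
    }
    stages {
        stage('Repetitive Task') {
            steps {
                sh './nightly-cleanup.sh'  // placeholder for the repetitive job
            }
        }
    }
}
```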

Because it's open source, it can integrate with many other tools: with Git, with Maven, with Ansible for continuous configuration, with Puppet, with Chef, with third-party testing tools like Selenium or the Robot Framework, and of course you can also integrate it with tools for metrics in order to perform continuous monitoring of your jobs, if that's the case. Or you can create jobs that connect to other systems, extract specific metrics from those systems, and, using some kind of logic, generate a report or something like that. So you can use this tool in many ways.

The core idea of using this tool is that any task which can be automated when a specific event occurs can be integrated into Jenkins. Right? So think about this: a developer pushes code to a Git repo. Maybe you would like to perform the following on each commit that is done: run specific tests.

And only after those tests pass do we accept the code, perform automatic code review, and merge the changes from the feature branch to the master branch, right? This can be one example. Or maybe, when a job fails, I want it to send me a notification via email, or maybe on Slack or another third-party tool. It depends on the use case.
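That commit-then-notify flow can be sketched as a declarative pipeline. This is a minimal illustration, not a production setup: the repository polling interval, test script, email address, and channel are placeholders, and `slackSend` assumes the Slack Notification plugin is installed and configured.

```groovy
// Run the test suite on every new commit; notify the team if it fails.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')  // check the repo every ~5 minutes; a webhook is the usual alternative
    }
    stages {
        stage('Test') {
            steps {
                sh './run-tests.sh'  // placeholder for your project's test command
            }
        }
    }
    post {
        failure {
            mail to: 'team@example.com',
                 subject: "Build ${env.BUILD_NUMBER} failed",
                 body: "See ${env.BUILD_URL}"
            slackSend channel: '#builds', message: "Job ${env.JOB_NAME} failed"  // requires the Slack plugin
        }
    }
}
```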

Let's see how we can do that. In Jenkins, we have this concept of jobs, right? Basically, a job is anything that can be defined as a task we need to accomplish. And because we create it once and make it work, it reduces the cost and the risk of mistakes, because it eliminates human intervention: we create the job once, we specify and define what the job will do, and we don't perform other changes. It's not like running some scripts from the shell, where maybe you add a new line or insert a new character in a command, and that command will fail.

You can do it once and it will work, ideally, forever. Right? Another benefit of using Jenkins and this concept of jobs is the fact that if you want specific functionality inside Jenkins, you can add it, and you can do that using plugins. Plugins are pieces of code that can be integrated with the Jenkins core application and extend its functionality, and this gives you easy integration with other applications and services. And because, again, it's open source, you can write your own plugins in your favorite language, in order to make your life a little bit easier by automating yourself out of a job. Right?

In conclusion, at the end of the day, from the point of view of a DevOps culture, it will increase productivity.

The key aspects to remember about Jenkins: it's an open source platform which helps us implement DevOps pipelines, and you can deploy it on Linux or Windows environments. Because it's used by so many people in the community, it's very extensible; there are many plugins available, and you can integrate Jenkins with almost anything. You can integrate it with Docker, with Kubernetes, with cloud vendors like Amazon, Azure, and Google, and so on and so forth.

And secondly, it's a reliable solution, because it has been used in production systems for many years. It's stable and gives you confidence. Of course, this is not the only tool that can do this, but it's the most reliable one I know. So, going further: we discussed jobs, right?

But what is a pipeline, and what do we want to achieve using a pipeline?

A pipeline, basically, is a set of blocks which are called stages, and in each stage we have a list of steps, right? So a stage is a block that contains a list of steps, and it can be anything that is meaningful to you.

And it will give you some kind of visualization of the pipeline. Now, let's dig deeper and zoom in inside a stage. A step is basically the task where you specify what you want to execute: `git clone`, for example. This is a step, right?

Step two: move some files to the root path, or, I don't know, if you have sudo privileges, delete all the Ansible jobs, and so on and so forth.

Chaining these stages together, from stage 1 to stage N, we create the pipeline, right? So we can say that a pipeline is a series of stages, each of which includes a series of steps: all the tasks that we want to execute in order to automate our workflow, from the Git commit to the deployment to production. So stage one can be: committing the code, downloading the code, compiling the code, performing some unit tests.
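The stage/step structure described above looks like this in a Jenkinsfile. This is only a skeleton; the repository URL and shell commands are placeholders.

```groovy
// A pipeline is a series of stages; each stage is a block with a list of steps.
pipeline {
    agent any
    stages {
        stage('Stage 1') {
            steps {
                git 'https://example.com/repo.git'  // the "git clone" step (placeholder URL)
                sh 'make build'                      // placeholder build command
                sh 'make test'                       // placeholder unit-test command
            }
        }
        stage('Stage N') {
            steps {
                sh './deploy.sh'                     // placeholder final step
            }
        }
    }
}
```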

Stage two can be: more testing, right? Performance testing, load testing, negative testing, more functional testing, end-to-end testing. If everything is fine in stage two, you can create stage three: deploy to production, or to non-production, let's say. Maybe you have to create the environment first, so you use some Terraform scripts in these steps, and after you create the infrastructure and the environment, you use some Ansible scripts in order to perform specific configurations: install maybe a web server, an NGINX or an Apache, and so on and so forth.

Maybe you have to configure some firewall rules, right? Maybe you need to enable SSH for specific users in order to copy artifacts. This can be another step, and so on and so forth.

And chaining all these tasks, from building the application to deploying to a non-production or production environment, creates our end-to-end pipeline.
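Put together, the stages just described might look like the following sketch. All script names, the Maven goal, the playbook, and the inventory path are hypothetical placeholders for whatever your project actually uses.

```groovy
// A hypothetical end-to-end pipeline: build and unit test, extended testing,
// provision with Terraform, then configure and deploy with Ansible.
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps {
                sh 'mvn -B package'   // compile and run unit tests (Maven example)
            }
        }
        stage('Extended Testing') {
            steps {
                sh './run-e2e-tests.sh'   // load, negative, and end-to-end tests
            }
        }
        stage('Provision') {
            steps {
                sh 'terraform init && terraform apply -auto-approve'  // create the environment
            }
        }
        stage('Configure & Deploy') {
            steps {
                sh 'ansible-playbook -i inventory.ini site.yml'  // web server, firewall rules, SSH users, etc.
            }
        }
    }
}
```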

What we can see here is that using Jenkins, we can integrate different kinds of tools and chain those tools together in such a way as to create an end-to-end continuous integration, continuous delivery, or deployment process.

Another visualization of this continuous integration / continuous delivery pipeline is the following. We start with version control on the left; we build the binary, or the Docker image, and so on; we run some unit tests; and if everything is fine, we go to deployment.

If the deployment runs without any failures and the production feedback loop gives us no errors, we can perform more testing and go further along the chain until we hit deploy to production. Right? And don't forget that we have to monitor the entire pipeline, from version control to deploying to production, in order to be sure that nothing has broken while modifying the code.

And all this fast-failing information, if that's the case, is given to us by this production feedback loop. Because we all like feedback, right? And we want the feedback fast, in order to improve, or to fix the problems that we created intentionally or unintentionally. So measuring, and validating those measurements, is another core principle in the DevOps workflow.

But again, this is maybe a topic for a new webinar, which may come next month, if you want to learn more about monitoring. Okay, so going back to our continuous integration and continuous delivery pipeline: we have version control, right? And here we have all the means to organize the files and coordinate their creation, update, and deletion, and this has to be done across teams and even across an organization's sites. Here we can create or use different tools like Git, Mercurial, or Subversion, and of course we can integrate and use these tools inside Jenkins.

Going further, after we have all the code in the SCM, we have to compile that code. Here we have the build step, where all the features from the different feature branches are merged. If it's a compiled language like C, C++, or Java, we use a compiler in order to create the binaries; if that's not the case, maybe it's Python, we can use other tools to build our application, and so on and so forth. Then comes testing, right?

Here, after we've created our application, maybe we can perform some basic sanity tests. Maybe we have to install the binary somewhere and validate that the installation was successful, or perform some unit tests, or perform some functional tests. It depends, again, on what we're trying to accomplish, and of course the simplest one is to validate the functionality of the code, and validate that the code didn't break because of the latest changes, and so on and so forth.

If something goes wrong, we have the feedback loop: we get a notification, we see that the build failed, and we have to investigate, troubleshoot, fix the problem, and start over from version control. Right? Because in order to fix the problem, we have to make the change inside version control, commit it, and again go through the entire process.

But let's assume that for now it passes, and we go to deployment. So in the deploy phase, we try to replicate and put the application, or whatever we are building, into a testing, staging, or dev environment, in order to perform more testing on it. And here again, we can extend our testing: non-functional, functional, or user acceptance testing. It depends; it can be adapted based on your needs.

If everything goes fine, we can perform more testing. Maybe, if we have a blue-green deployment, we can validate and perform some user acceptance testing in the blue environment, where no customer will be affected. Right? And if it passes in the blue environment, this means we are ready to deploy it to the production environment.
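A blue-green style gate can be sketched as two pipeline stages: deploy and test in the idle "blue" environment, then wait for an explicit approval before switching live traffic. The script names are illustrative placeholders; only `input` is a built-in declarative step.

```groovy
// Deploy to the idle environment, test it, then gate the traffic switch
// behind a manual approval.
pipeline {
    agent any
    stages {
        stage('Deploy to Blue') {
            steps {
                sh './deploy.sh blue'            // placeholder: deploy to the idle environment
                sh './acceptance-tests.sh blue'  // user acceptance tests, no customers affected
            }
        }
        stage('Switch to Production') {
            steps {
                input message: 'Blue looks good. Promote to production?'
                sh './switch-traffic.sh blue'    // placeholder: point live traffic at blue
            }
        }
    }
}
```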

Then we have to iterate over and over again from the beginning: maybe we get a new feature request from the customers, we have to implement it, and again we are going to hit and interact with each step of the continuous integration and continuous delivery pipeline.

Going further. So far, we discussed continuous integration and continuous delivery, but let's see how we can also add security on top of this DevOps process, because what we discussed here didn't mention anything about the security of our applications or our products.

Because we don't want to perform our security validations as manual steps; or at least, we want to achieve this kind of automation also from the point of view of security. I know that security doesn't always fit easily into a DevOps continuous integration / continuous delivery workflow, but let's see how we can do it, or at least how we can try, at a high level of discussion.

What we are trying to do here is see how we can remove manual interaction and automate more of these tasks. Think of it this way: if we do a specific task manually once, and we see that we have to execute that task each day, and we see that it can be automated, then we should focus on automating that task and removing toil.

So what is toil, you will ask? Toil is something that is repetitive, is a manual process, can be automated, and is not yet automated. Removing toil from your bag of tasks, let's say, will give you the ability to focus more on innovation: deploying new scripts, creating new pipelines, focusing on what is important. Of course, there are scenarios where you cannot eliminate 100% of this toil, because some automation can introduce more complexity than is needed and create more problems. In that kind of case, maybe it's not worth trying to automate the process.

So, to be more secure, at least from the point of view of a continuous integration / continuous delivery pipeline, we need to insert security into the pipeline as well.

How can we do that? We have to shift security to the left, into the development phase. And by having security from the development phase, we can be more effective, and it will give us more confidence in the entire continuous integration and continuous delivery pipeline.

Think about this: we can add security on the SCM, on the developer tools, on the build phase, so at every point of the continuous integration / continuous delivery pipeline. Going to specific environments, we can put security in the staging environment and also in production, and so on and so forth.

Of course, each phase can contain or implement security at different levels. From the point of view of a developer, you can use specific tools that will let you know whether the code is written well and you don't have any, I don't know, maybe buffer overflows: tools that perform static analysis on your code.

From the point of view of the code repository, think about the following: maybe you have to automate some processes, and those processes may require some credentials, maybe some SSH keys or tokens or other kinds of sensitive data, right? And guess what developers do: they commit these things inside the repo. They don't care that they committed a private key inside the GitHub repo.

I saw in the past some repos that had private keys for AWS. You could take that key, and of course, besides that private key, there was also a TXT file that gave you the public IP addresses of those machines. And guess what you can do: download that key, connect to the machine, and voilà, you have a breach in the environment. So there are tools that can help you prevent committing sensitive data like this inside the repo; they will abort your commit. You can create some predefined rules with specific patterns, and when you try to commit the code, the commit will be validated against those rules.

And if you break a rule, the commit will be aborted, and so on and so forth. At each layer of the continuous integration and continuous delivery pipeline, you can add some kind of security mechanism.
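The same pattern-based check can also run as a pipeline stage, as one extra layer. This is illustrative only: a real setup would use a dedicated secret scanner, and the grep pattern here catches just one common private-key header.

```groovy
// Fail the build if anything that looks like a private key was committed.
pipeline {
    agent any
    stages {
        stage('Secret Scan') {
            steps {
                // grep exits 0 when it finds a match, so a match means we abort
                sh '''
                    if grep -R "BEGIN RSA PRIVATE KEY" . --exclude-dir=.git; then
                        echo "Potential private key committed; aborting." >&2
                        exit 1
                    fi
                '''
            }
        }
    }
}
```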

Of course, we cannot cover all of them here, but if you want to learn more about what you can use, please feel free to join our DevSecOps trainings, which focus on security for Docker and Kubernetes; there you will find more information about security inside a DevOps environment.

 

Want to learn more? See our Kubernetes courses!

 

 
