
Is Jenkins Dead? Exploring Managed vs. Unmanaged CI/CD Solutions in 2025


During the early days of my DevOps career, I needed to improve the deployment processes in my organization. The process we had was not going to scale as the business grew.

The business needed to hire more DevOps engineers and software engineers to meet the demand for the products and systems we had to build. This meant that a proper software delivery process needed to be put in place.

No form of automation existed in the deployment process. Developers literally had to write code, push it to a repo, pull or clone the code onto the VM it needed to run on, and start or restart the application.

It was riddled with errors: there was no rollback mechanism, no way to track failures when things broke, and everyone had root access to production. A deployment had the following steps:

– ssh into production

– git pull / clone

– restart Apache

– and pray….

This was the typical release workflow. If anything broke, it took a sheer miracle to track down what was wrong and who made which change across the code, the web server configuration, and the operating system configuration. Everyone who mattered in the tech team had access to production in order to deploy changes.

After watching a series of Jez Humble's talks and reading The Phoenix Project, I had a lot of drive to change things. The first goal was to keep everyone away from production. But how do you keep everyone away from production and at the same time give them the confidence that their code can get delivered there efficiently, without hassle or doubt? At this point I was still a junior DevOps engineer trying to understand the cloud (AWS) and CI/CD practices.

At the time, the company used Bitbucket as its Git repository. In 2016, Atlassian had just launched Bitbucket Pipelines, a managed tool for implementing CI/CD. It held great promise, with features that could greatly improve software delivery processes.

I had also heard about a tool called Jenkins, used by organizations that were already running advanced CI/CD setups for software delivery. Armed with this knowledge, I had two options:

1. Bitbucket Pipelines, which does not require a virtual machine. Configuration happens right in the Git repository, with a simple YAML file that defines the Continuous Integration and Continuous Deployment flow (a minimal sketch follows this list). Bitbucket also provided sample YAML configurations for various programming languages, which made it quick and easy to get started. The build part was taken care of by Bitbucket, but the deployment part was less clear: it was debatable whether Bitbucket should be allowed to SSH into an EC2 instance to deploy the application. Our organisation ran all deployments on EC2, not on Kubernetes or any other container orchestration tool.

2. Set up Jenkins on a virtual machine and write a Jenkinsfile, usually in Groovy, to perform the full CI/CD process. The CI/CD workflow runs outside of the Git repository: an event trigger is configured so that when code is pushed to a particular branch, an event is sent to Jenkins to start running the pipeline. Jenkins is built around plugins, and these plugins cover a vast array of functionality; they can be used to build any programming language and deploy to any platform. With the plugins, the time to write and start using pipelines can even be shorter than with Bitbucket Pipelines.
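
For context, a minimal bitbucket-pipelines.yml along the lines of option 1 might look like the sketch below. It is illustrative only: the Node image, bucket name, and build commands are assumptions, and AWS credentials are expected to come from repository variables rather than being hard-coded.

```yaml
# bitbucket-pipelines.yml -- a minimal sketch, not the original configuration
image: node:16                     # build image; pick the version your application targets

pipelines:
  branches:
    master:
      - step:
          name: Build and package
          caches:
            - node
          script:
            - npm ci                                          # install dependencies
            - npm test                                        # run the test suite
            - tar -czf app.tar.gz --exclude=node_modules .    # package the build output
          artifacts:
            - app.tar.gz                                      # pass the artifact to the next step
      - step:
          name: Ship artifact to S3
          image: atlassian/pipelines-awscli                   # image with the AWS CLI preinstalled
          script:
            # bucket name is a placeholder; AWS credentials come from repository variables
            - aws s3 cp app.tar.gz s3://example-deploy-artifacts/app.tar.gz
```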

Of the two options, Bitbucket Pipelines seemed the better one, so I set it up. It was a success and it worked well, but the deployment part was still missing and not very straightforward. I developed a webhook in NodeJS which performed the deployment after the build artifact had been sent to Amazon S3. The S3 bucket triggers an SNS topic, which pushes a notification to the endpoint I configured. The application behind the endpoint does the remaining part of the deployment, which is to:

– pull the artifact from S3

– unarchive the file

– make a backup of the existing application

– delete it from the folder it is running in

– replace with the new version

– restart pm2 (a process manager mostly used for NodeJS applications)

Though it worked, it had too many moving parts for what was a basic deployment.
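
For illustration, a stripped-down sketch of that webhook is below. It is not the original code: the file paths, directory layout, and pm2 app name are placeholders, and a production version would also need to confirm the SNS subscription and verify message signatures.

```javascript
// deploy-webhook.js -- illustrative sketch of the SNS-triggered deploy endpoint.
// Paths and the pm2 app name are placeholders, not the original values.
const express = require('express');
const { execSync } = require('child_process');

const app = express();
app.use(express.text({ type: '*/*' })); // SNS posts JSON with a text/plain content type

app.post('/deploy', (req, res) => {
  const message = JSON.parse(req.body);
  // NOTE: a real handler must also handle SNS SubscriptionConfirmation
  // messages and verify the message signature; omitted here for brevity.

  // SNS wraps the S3 event notification in its "Message" field
  const s3Event = JSON.parse(message.Message);
  const { bucket, object } = s3Event.Records[0].s3;

  // 1. Pull the artifact from S3 and unarchive it
  execSync(`aws s3 cp s3://${bucket.name}/${object.key} /tmp/app.tar.gz`);
  execSync('rm -rf /tmp/app && mkdir -p /tmp/app && tar -xzf /tmp/app.tar.gz -C /tmp/app');

  // 2. Back up the running application, then swap in the new version
  execSync('rm -rf /opt/app_backup && cp -r /opt/app /opt/app_backup');
  execSync('rm -rf /opt/app && mv /tmp/app /opt/app');

  // 3. Restart the process under pm2
  execSync('pm2 restart app');

  res.status(200).send('deployed');
});

app.listen(8080);
```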

Bitbucket Pipelines did not require any VM management, because Bitbucket handles the VM and optimizes resources for the pipeline build process. Today, many managed CI/CD systems also have an option to bring your own VM, which makes them resemble an unmanaged system like Jenkins; reasons such as compliance and security can lead to that decision. But managed CI/CD is still awesome for the following reasons:

  • Managed CI/CD platforms are fully managed and pre-configured, meaning there’s no need to spend time on setup, maintenance, or upgrading the underlying infrastructure. This makes it much faster to get up and running with your CI/CD pipelines.
  • Managed CI/CD platforms typically provide elastic scaling, allowing you to scale up or down based on your project’s needs without worrying about provisioning new infrastructure. This is ideal for handling fluctuating workloads, especially in dynamic environments.
  • With a managed service, the provider takes care of all updates, patches, and security fixes, ensuring that your CI/CD platform is always up to date with the latest features and security enhancements.
  • Since the infrastructure is managed by the service provider, teams don’t have to worry about managing servers, networking, or other infrastructure-related concerns. This allows development teams to focus on code and deployment rather than system administration.
  • Managed CI/CD platforms usually offer built-in redundancy and high availability, ensuring minimal downtime. They are designed to be reliable, with infrastructure monitoring and failover mechanisms in place, which reduces the risk of service interruptions.
  • Managed CI/CD platforms often come with an integrated suite of tools, such as automated testing, deployment pipelines, version control, and artifact storage. This makes it easier to set up complete workflows without having to manually integrate third-party tools.
  • Many managed CI/CD providers have built-in security features like encryption, access controls, and compliance with industry standards (e.g., SOC 2, GDPR, HIPAA). This helps organizations meet security and regulatory requirements without extra overhead.
  • With less time spent on managing infrastructure and configuring pipelines, teams can focus on shipping software faster. Managed CI/CD platforms enable quicker feedback loops, speeding up development cycles and reducing time to market.
  • Managed CI/CD services often come with dedicated customer support, as well as access to a large user community. If any issues arise, you have access to expert support to troubleshoot and resolve problems quickly.
  • For small teams or startups, managed CI/CD can be more cost-effective than setting up and managing an internal CI/CD infrastructure. These services often have tiered pricing models, where you only pay for what you use, which can save money compared to maintaining your own servers and CI/CD infrastructure.


But with all these beautiful features, I still had to find a permanent and structured approach to the whole delivery process. I reverted to Jenkins and spent more time learning how to use it, and seeing how it fit into the broader goal of solving our CI/CD issues for good.

Did Jenkins Deliver?

Oh, it sure did. It could do everything we needed it to do and more. Build and deployment to any environment within our infrastructure went well: EC2 instances, Lambda functions, and even Kubernetes (EKS was launched in 2018). Everything operated via a Jenkinsfile, which developers also contributed to, and it greatly improved the CI/CD process. It solved all the challenges we had around software delivery, and with access to Jenkins, developers could even set up pipelines themselves.
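
To give a sense of what that looked like, here is a minimal declarative Jenkinsfile sketch for an EC2-style deployment. It is illustrative rather than one of our actual pipelines: the stage names, bucket, host, script path, and the 'deploy-key' credentials ID are placeholders, and the deploy stage assumes the SSH Agent plugin is installed.

```groovy
// Jenkinsfile -- a minimal declarative sketch, not one of our actual pipelines.
// The bucket, host, script path and 'deploy-key' credentials ID are placeholders.
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                sh 'npm ci'
                sh 'npm test'
                sh 'tar -czf app.tar.gz --exclude=node_modules .'
            }
        }
        stage('Publish artifact') {
            steps {
                sh 'aws s3 cp app.tar.gz s3://example-deploy-artifacts/app.tar.gz'
            }
        }
        stage('Deploy to EC2') {
            steps {
                // requires the SSH Agent plugin
                sshagent(credentials: ['deploy-key']) {
                    sh 'ssh ec2-user@app-server "/opt/deploy/rollout.sh app.tar.gz"'
                }
            }
        }
    }

    post {
        failure {
            echo 'Deployment failed -- check the stage logs above.'
        }
    }
}
```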

This setup made it easier to disable production root access and ensure all deployments happened via the CI/CD pipeline, improving the deployment workflow in the long run and keeping engineers away from production environments entirely. Accessing and making changes to production was strictly for the following reasons and was done only by the DevOps team:

– configuration updates that were yet to be automated, as plans were underway to use Ansible / Canonical Juju for this

– hot fixes to the production environment, or other configuration tuning

Jenkins has its own downsides too, one of which is managing a virtual machine. When running Jenkins, you need to be prepared to update the VM, update Jenkins, update plugins, secure the VM, and optimize resources, because Jenkins is built on Java and can be resource intensive. All of these need to be taken into consideration, and they add to the operational cost of running an unmanaged CI/CD system.

With the current rise of managed CI/CD systems such as GitHub Actions, GitLab CI, Bitbucket Pipelines (still standing), CircleCI and more, do Jenkins and its counterparts such as Argo Workflows, ArgoCD and TeamCity still hold any value in the current ecosystem? Yes, they do. Even the managed CI/CD systems have an option to configure self-hosted runners as a VM that runs the pipeline, which is very similar to unmanaged CI/CD (a minimal self-hosted runner workflow is sketched after the list below). These are the reasons unmanaged CI/CD is still relevant:

  • With unmanaged CI/CD, teams have complete control over every aspect of the pipeline, from tools and environments to workflow customization. This is crucial for complex or highly specific requirements that managed services may not support.
  • Teams can configure their CI/CD workflows to match exactly how their development, testing, and deployment processes should work. This level of customization often exceeds the limitations of managed services.
  • Unmanaged CI/CD means you’re not tied to a particular vendor or platform. This can be important for avoiding vendor lock-in, where you’re dependent on a specific service’s pricing, limitations, or availability.
  • Unmanaged CI/CD can be more cost-effective, especially for small teams or startups, as you avoid the ongoing subscription costs of managed CI/CD services. The only costs involved are the infrastructure and maintenance required to run your own pipelines.
  • Some organizations prefer unmanaged CI/CD because it allows them to maintain tighter control over their security and privacy. Sensitive data and proprietary code are never exposed to third-party services, which can be important for regulatory compliance or security reasons.
  • Unmanaged CI/CD pipelines can be more easily scaled and tuned for performance according to specific project requirements. This can be crucial when handling a large number of builds or deployments, as the pipeline can be optimized to meet exact needs.
  • Organizations may have proprietary or custom-built tools that need to be integrated into their CI/CD pipeline. With an unmanaged setup, it’s easier to integrate such tools without the limitations of a managed service’s API or integration points.
  • Managing your own CI/CD system gives you deeper visibility into the entire process. You can monitor, log, and debug every part of the pipeline, which can be challenging with managed services where you may have limited access to underlying systems.
  • In an unmanaged setup, teams can experiment with new technologies or tools more quickly than they could with a managed service. For example, if a new deployment strategy or tool becomes available, it can be integrated into the pipeline right away without waiting for the managed service to support it.
  • By managing your own CI/CD pipelines, you’re not reliant on the uptime and reliability of a third-party provider. If a managed service goes down or experiences issues, it can disrupt development, but with an unmanaged system, you have more control to address any issues locally.
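
On the hybrid point mentioned above: a managed platform such as GitHub Actions hands execution over to your own machines simply by targeting a self-hosted runner that you register and maintain yourself, much like operating your own Jenkins agent. The workflow below is a minimal illustration; the branch, runner labels, and build commands are placeholders.

```yaml
# .github/workflows/build.yml -- minimal illustration of targeting a self-hosted runner
name: build

on:
  push:
    branches: [main]

jobs:
  build:
    # "self-hosted" routes this job to a runner you register and operate yourself
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - name: Build and test
        run: |
          npm ci
          npm test
```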

Conclusion

The tools used in an organization are determined by various factors. Most of the time they are designed to do the same thing, but the ecosystem the business operates in, its knowledge base, structure, and size will determine which option to go for.

A simple rule of thumb: for a small team, it is advisable to go for managed CI/CD to avoid the hassle of managing a virtual machine.

For larger teams, unmanaged CI/CD or self-hosted runners are not a bad idea, as they give a lot of flexibility and help meet security and regulatory requirements. Both approaches have their place and are good under different conditions.

