These days I’m developing a bunch of pipelines to automate the build/deployment process. Here are some things I’ve learned:
Initially, I started developing the pipeline like any other program: create a repository, fire up my editor, write a Jenkinsfile and use the Pipeline script from SCM option. While the result is exactly what you want to have at the end, the edit-deploy-run cycle becomes a chore and it’s quite unpleasant at the beginning (I had a bunch of typos, for example), so DON’T.
Instead, create a simple job with an inline Pipeline script. This way, you can edit bits and run the pipeline quickly to validate it. Once the pipeline is stable, add it to a repo and switch to the approach above. So, DO!
This simple change saved me a couple of days’ work.
Note: When you use the inline script, please make sure you tick Use Groovy Sandbox, because the SCM Jenkinsfile uses the sandbox by default.
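To make the workflow concrete, here is a minimal pipeline you could paste into the inline Pipeline script box to get the quick edit-run loop going. The stage names and echo steps are placeholders, not from any real project:

```groovy
// Minimal declarative pipeline for quick iteration in the inline editor.
// Stage names and steps are placeholders; swap in your real commands.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'building...'
                // sh 'make'   // real build command goes here
            }
        }
        stage('Test') {
            steps {
                echo 'testing...'
                // sh 'make test'
            }
        }
    }
}
```

Once this runs cleanly, the same text can be committed as a Jenkinsfile and the job switched over to Pipeline script from SCM.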
There are two types of pipelines: declarative and scripted.
Take some time to figure out which is better for you. For example:
- If you need to pass parameters around, the scripted approach can be easier
- If you need to use different build nodes with different capabilities, the declarative approach may be easier
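As a rough illustration, the same trivial build in both styles. These are two alternative Jenkinsfiles side by side; the `linux` node label and the `RELEASE` parameter are made up for the example:

```groovy
// --- Declarative: structure is fixed (pipeline/agent/stages/steps) ---
pipeline {
    agent { label 'linux' }   // hypothetical node label
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}

// --- Scripted: plain Groovy, so variables and conditionals feel natural ---
node('linux') {               // same hypothetical label
    stage('Build') {
        // juggling parameters is straightforward in scripted pipelines
        def flags = params.RELEASE ? '-O2' : '-g'
        sh "make CFLAGS='${flags}'"
    }
}
```

In a real setup you would keep only one of the two; they are shown together here for comparison.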
Initially, I took a default pipeline and started tweaking it to fit my purpose. While this is a good way to learn (particularly the DSL, I found), the result is quite messy. I found that if I first start with pen and paper to design the stages and steps, the pipeline development becomes easy enough.
If you have a bunch of code that is common to multiple steps/pipelines, then normalise it:
- create functions/procedures
- build a shared library
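A shared library boils down to a repository with a vars/ directory; each file in it becomes a global step callable from any pipeline that loads the library. A minimal sketch, where the step name `deployTo` and the deploy command are hypothetical:

```groovy
// vars/deployTo.groovy in a shared-library repo (step name is hypothetical).
// The call() method is what pipelines invoke as deployTo(...).
def call(String environment) {
    echo "Deploying to ${environment}"
    // sh "./deploy.sh ${environment}"   // real deployment command goes here
}
```

A pipeline then loads the library with `@Library('my-shared-lib') _` (the library name is made up and must match what is configured in Jenkins) and calls `deployTo('staging')` like any built-in step.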