I was watching this video about Jenkins:
https://www.youtube.com/watch?v=6BIry0cepz4
He mentions that shell scripts have many advantages over Groovy for doing custom work in a Jenkins build process.
Apparently the sandbox that Jenkins uses to run Groovy has some sharp limits?
Where can I find more information about this? When do I give up on Groovy and switch to a shell script?
As the comment from Szymon says, this is a broad question to answer, and there is no one-stop shop that lists all the pros and cons of each. It will rather build up based on your use case and the experience you accumulate.
Apparently the sandbox that Jenkins uses to run Groovy has some sharp limits?
This is because Jenkins enforces certain security measures, so that a script cannot call methods that could do malicious or unhealthy things inside your infrastructure. If you really need to use certain restricted libraries, you (or a Jenkins admin) need to whitelist the class by approving it. Check out the link below:
https://jenkins.io/doc/book/managing/script-approval/
Now, shell scripts do come in very handy, but they are not always hunky-dory for everything either.
When do I give up on Groovy and switch to a shell script?
To me, it depends on what I am trying to achieve. Select it based on which one makes the task easier.
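As a rough illustration of where the line tends to fall, here is a hypothetical declarative Jenkinsfile (stage names, paths, and the script name are made up): lightweight orchestration stays in sandboxed Groovy, while anything needing unrestricted APIs or external tools is handed to a shell step, which runs outside the Groovy sandbox.

```groovy
// Hypothetical Jenkinsfile; all names and paths are illustrative.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Orchestration and simple logic stay in (sandboxed) Groovy...
                echo "Building ${env.JOB_NAME} #${env.BUILD_NUMBER}"
                // ...but heavy custom work goes to a shell step, which is
                // not subject to the Groovy sandbox or script approval.
                sh './scripts/build.sh'
            }
        }
    }
}
```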
I'm trying to add CI to a project that uses a set of build scripts written in bash. The scripts prompt for input a few times for configuration information (setting flags, setting parameters, etc.) Does Github Actions have its own commands for dealing with this, or is there a way to set up an expect script (or something similar)?
There is currently no feature that allows prompting for manual input during workflow runs. See this response on the community forums where a similar question was asked.
Here are some options you can explore:
Redesign the scripts to read from configuration files and check them into git to trigger the build.
Use the workflow_dispatch event to create a workflow that you can manually trigger from the Actions UI and supply input parameters. See this documentation for more detail.
Use slash-command-dispatch to trigger the build using a slash command with arguments for the build's input parameters.
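A minimal sketch of the workflow_dispatch option, with made-up input and script names, might look like this:

```yaml
# Hypothetical workflow: the input name and build command are illustrative.
name: manual-build
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: true
        default: 'staging'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh --env "${{ github.event.inputs.environment }}"
```

With this in place, the Actions UI shows a "Run workflow" button that prompts for the inputs before the run starts, replacing the interactive prompts in the original scripts.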
I have a large complex Ansible provisioning setup, > 100 roles, > 30 inventory groups, etc... This is organised in the recommended way: top level playbooks, with a /roles folder, a /group_vars folder, etc...
Sometimes this fails part-way through, and I want to restart it from where it failed, using the --start-at-task command line switch.
However, I have several tasks that always need to run. These dynamically add hosts to the inventory, set variables that are needed later, etc...
Is there a way to force a task to always be run - or a role to always be applied, even when using --start-at-task?
I know about the always tag, but I think this only makes it run when filtering tasks by using --tag, not --start-at-task - unless someone knows differently?
Alternatively, is there some other way to structure things that would avoid this problem with --start-at-task?
Unfortunately, it is not possible.
Currently you either have to use tags, or bite the bullet, rely on idempotency, and let all tasks re-run.
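If you fall back on tags, the always-run setup tasks from the question could be sketched like this (task and variable names are hypothetical); note that tags: always fires whenever the play is limited with --tags, but not with --start-at-task:

```yaml
# Hypothetical playbook fragment, e.g. run with:
#   ansible-playbook site.yml --tags deploy
- name: Add dynamic hosts to the inventory
  add_host:
    name: "{{ item }}"
    groups: dynamic
  loop: "{{ discovered_hosts }}"
  tags: always   # runs even when the play is limited with --tags

- name: Deploy the application
  include_role:
    name: app_deploy
  tags: deploy
```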
There were PRs and discussions to add such a functionality, but they never made it to the official Ansible release.
There is also an open issue asking for this feature, with a suggestion that always_run should enable running the task when used in conjunction with --start-at-task, but that discussion also withered away more than a year ago.
The conclusion of the issue mentioned by techraf is that the good way to do it is by implementing a custom strategy: https://github.com/ansible/ansible/issues/12565#issuecomment-834517997
One has been implemented there (but I haven't been able to make it work with ansible-test)
Anyway, it's a good start toward implementing a working solution, and the only way possible, as the Ansible devs will not patch --start-at-task.
We have a third-party tool that is run via a PowerShell script; once it has run, it returns results back via the script (as well as an XML file).
My question is: is there any way within SonarQube to run a PowerShell script (or Windows command line), or would I have to write a brand-new plugin?
Russell
You don't mention what the 3rd party tool is, or what kind of results it yields. So at a guess: yes, you'll have to write a new plugin to read the results generated by this tool and import them as part of your analysis.
Notice, I don't mention firing the tool. That can be done, but in general plugins simply import tool reports, and because plugin development is done in Java, it is likely to be simpler to include kicking the tool off as part of your build script/process. However, if you feel you really must fire it from your plugin, the StyleCop plugin may point you in the right direction. But while the StyleCop plugin fires StyleCop directly, you'll be firing a script to fire your tool. Or maybe you'll be incorporating your script logic into your plugin...? (Probably fewer headaches in the long run that way.)
I have some issues with Jenkins running a PowerShell script. Long story short: the script takes about 8x longer to execute through Jenkins than when running it manually on the server (slave), where it takes just a few minutes.
I'm wondering why.
The script contains functions that invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs on the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes that long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that is the solution to the problem.
There are no error outputs or any other information on why it takes that long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins internally invokes those commands.
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
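One thing that may be worth trying when diagnosing the slowdown: powershell.exe has standard switches to skip profile loading and interactive prompts, which can otherwise add startup cost or cause silent hangs on a build agent. A variant of the job's command line (same path as in the original job):

```batch
powershell -NoProfile -NonInteractive -ExecutionPolicy Bypass -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
```

-NonInteractive in particular makes the script fail fast instead of hanging if something inside it waits for input.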
I created my own PowerShell CI service before I found that Jenkins has its own plugin for this. But in my implementation, and in my current job configs, we follow a segregation principle: more separation is better. I found that my CI service works better when it is separated into different steps (in case of error, it is also much easier to do root cause analysis). The single responsibility principle is also helpful here. So, as in Jenkins, we have pre-, post-, build, and email steps as separate scripts. About
msbuild.exe
As far as I remember, in my case there were issues related to operations on file system paths. When the script was divided into separate functions, we had better performance (with additional checks of parameters).
Use the "divide and conquer" technique. You have two choices: modify your script so that it displays what it is doing and how long every step takes, or make smaller scripts that perform actions like:
get the code source,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs,
send a notification.
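The split above can be sketched as a small wrapper script (a sketch only; the step names are placeholders, and each echo stands in for the real tool such as msbuild, the test runner, or the packager), where every step is timed and labelled so the slow one is easy to spot in the Jenkins console:

```shell
#!/bin/sh
# Hypothetical wrapper: run each build phase through run_step so its
# start/end times and exit status show up in the job output.
run_step() {
    name=$1; shift
    echo "=== ${name} start: $(date) ==="
    "$@"
    status=$?
    echo "=== ${name} end: $(date) (exit ${status}) ==="
    return $status
}

run_step "build"   echo "compile/build the application"
run_step "test"    echo "run the tests"
run_step "package" echo "create a package"
```

In case of an error this also gives you the exit status of the exact phase that failed, which makes root cause analysis much easier.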
The most problematic step is usually the first one: getting the source code from Git, SVN, Mercurial, or whatever version control system you have. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In the script, make sure you flush standard output for every step or every few steps. Some languages buffer standard output, so you can see the results only at the end.
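When the tool itself buffers its output, one way to force line buffering from a shell wrapper is stdbuf from GNU coreutils; in a real job you would wrap the long-running command, e.g. stdbuf -oL ./build_tool | tee build.log. A runnable stand-in (printf plays the role of the tool here):

```shell
# Without line buffering, Jenkins may see nothing until the tool's stdio
# buffer fills or the tool exits. stdbuf -oL forces line-buffered output;
# printf stands in for the real long-running command.
printf 'building...\ntesting...\n' | stdbuf -oL tee build.log
```

The tee also leaves a build.log behind, which matches the advice below about keeping log files for older runs.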
You can also create log files, which are helpful for archiving and verifying the activity status of older runs. From my experience, using Jenkins with more than 10 steps requires you to create a specialized application that can run multiple steps, like Robot Framework.
I've just implemented a build and deploy process that consists of Java files, an Ant script, and cmd files. In the process, a release manager has to check out the source, run build.cmd, and then carry a zip file over to a server.
I am wondering if it is worthwhile to make a GUI for it? So that the release manager does not need to check out source manually for example?
How do I start? I have quite limited knowledge of javax, but I would very much like to learn.
Thanks,
Sarah
This sounds like something that could be handled by Hudson. It can check out source, run Ant scripts, etc., saving you the trouble of maintaining a GUI. I'd give that a shot before rolling your own.
I have helped develop the build process at my current company. The way we currently do it is with a script file. It checks out the latest code from the stable branch of our repository, performs some steps to get some data from a database (such as static SQL data that needs to be loaded at deployment), then compresses everything. The file is then distributed to our production servers and then the setup routine is executed. Everything is automatic and the script is written in Python. Python is great for these types of things because of the sheer number of libraries it has to help the developer.
It may be useful to build a GUI for your deployment procedure; typically this is worthwhile if the deployment requires user interaction to make decisions, such as "Which server shall I deploy to?". But if it's just a matter of doing things automatically, then a script file is the way to go. Choose your favourite language and dive in; I of course recommend Python.
If you'd like to learn how to make a simple GUI in Java (since that seems to be what your company is familiar with), you should check out the stuff at this site:
http://java.sun.com/docs/books/tutorial/uiswing/index.html
I learned everything I know about Java from that site. The section on GUI programming is great.
Best of luck!
Shad