Fetching artifacts from Nexus to Rundeck - continuous-integration

I'm creating a Rundeck job which will be used to roll back an application. My .jar files are stored in a Nexus repository, and I would like to add an option to Rundeck where I can choose a .jar version from Nexus and then run the rollback job on it.
I have tried using this plugin: https://github.com/nongfenqi/nexus3-rundeck-plugin, but it doesn't seem to be working. When I am logged in to Nexus I can access the JSON file listing the artifacts from my browser, but when I am logged off the JSON file is empty, even though the Nexus server is running.
When I add the JSON URL as a remote URL option in Rundeck, as in the first picture below, I get no options to choose from when running the job, even when I am logged in to Nexus, as shown in the second picture. Is there a way to pass user credentials with options, or is there some other workaround?

I would recommend installing Apache HTTPD locally on your Rundeck server and using a CGI script for this.
Write a CGI script that queries your Nexus 3 service for the versions available for the .jar file and echoes the results in JSON format.
Place the script in /var/www/cgi-bin/ with the executable bit enabled. You can test it like so:
curl 'http://localhost/cgi-bin/script-name.py'
In your job you can then configure your remote URL accordingly.
I find using a local CGI script to be much more reliable and flexible. You can also handle any authentication requirements there.
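For example, here is a minimal sketch of such a script in bash, assuming curl and jq are installed and that your Nexus 3 instance exposes the standard /service/rest/v1/search endpoint; the URL, repository, Maven coordinates, and credentials below are all placeholders to adjust:

#!/bin/bash
# Minimal CGI sketch: list the available versions of one artifact as JSON.
# NEXUS_URL, REPO, GROUP, ARTIFACT, and the credentials are placeholders.
NEXUS_URL="http://localhost:8081"
REPO="maven-releases"
GROUP="com.example"
ARTIFACT="my-app"

# A CGI response starts with a header block followed by a blank line.
echo "Content-Type: application/json"
echo ""

# Query the Nexus 3 search API with service credentials (so Rundeck itself
# never needs to log in) and reduce the response to a plain JSON array of
# version strings, a format Rundeck accepts for remote option values.
# Note: this ignores pagination (continuationToken) for brevity.
curl -s -u 'svc-rundeck:changeme' \
  "${NEXUS_URL}/service/rest/v1/search?repository=${REPO}&maven.groupId=${GROUP}&maven.artifactId=${ARTIFACT}" \
  | jq '[.items[].version] | unique'

Point the job option's remote URL at http://localhost/cgi-bin/<script-name> and Rundeck will populate the option values from whatever the script returns.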

Related

Jenkins: File Load Error- no such file while running on Jenkins Job via GitHub

I have a Jenkins job that clones the repository from GitHub and then runs an Execute Shell step that invokes the Newman command with the collection and environment JSON files.
Most of the services in the collection then work fine.
But the file-upload-related services are failing.
The error displayed is: File load error: <"file name">, no such file.
Can you help me resolve this issue?
All my services are automated and work fine in the Postman and Newman tools.
But now I'm trying to run my collection on Jenkins by cloning the repository from GitHub and running Newman from the shell script.
Expectation:
The collection has 15 services for which I have to upload different data files, and some of the services depend on those responses.
So I want to upload the files while running the Jenkins job.
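One common cause of this error is that Newman resolves relative file paths in the collection against its working directory, so files that Postman found on your machine are not found on the Jenkins agent. As a hedged sketch (the collection, environment, and data directory names are placeholders, and the --working-dir flag requires a reasonably recent Newman version), the Execute Shell step could look like:

# Run from the root of the cloned repository and tell Newman where the
# upload files live, so relative paths in the collection resolve.
newman run collection.json \
  -e environment.json \
  --working-dir ./data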

How to upload and run jmeter script from server

I need to upload the script from my local machine to a server and then run it there. Can someone please let me know how I can achieve this?
Just copy it from the local machine to the "server" using SCP for Linux or SMB for Windows. Once done, you can log into the server over SSH or RDP and execute your JMeter test in command-line non-GUI mode.
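For example (the host, user, and paths are placeholders):

# Copy the test plan to the server, then run it there in non-GUI mode.
scp test-plan.jmx user@server.example.com:/opt/jmeter/tests/
ssh user@server.example.com \
  'jmeter -n -t /opt/jmeter/tests/test-plan.jmx -l /opt/jmeter/tests/results.jtl'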
If you want fully unattended/automated execution, consider the following:
Setting up a version control system, e.g. using GitHub to store your script(s)
Configuring a webhook to trigger an action when you commit a file
Installing Jenkins on the server
Configuring Jenkins to listen for the GitHub webhook and, when it fires, kick off a build that runs the JMeter test
This way, whenever you add a new script or update an existing one, it will automatically trigger the job which executes the test. Check out the How to Integrate Your GitHub Repository to Your Jenkins Project article for detailed steps if needed.

Jmeter - How to add jmx to the docker file

I am creating a JMeter Docker file. I have my JMX file and CSV files checked in to Git. Could you please guide me on the commands to create the image.
There are at least 2 ways of doing this:
Install a git client (the steps differ depending on the Linux distribution you're using in Docker) and perform a git clone of the repository during the image build
Use the Docker COPY instruction to copy the previously cloned .jmx and .csv files from the host machine (see the sketch below)
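A minimal sketch of the second approach, assuming you build from the cloned working copy; the repository URL, base image, file names, and result path are assumptions to adjust:

# Clone the repository, then bake the test assets into the image with COPY.
git clone https://github.com/your-org/your-jmeter-tests.git
cd your-jmeter-tests

cat > Dockerfile <<'EOF'
FROM justb4/jmeter:latest
COPY test-plan.jmx data.csv /tests/
ENTRYPOINT ["jmeter", "-n", "-t", "/tests/test-plan.jmx", "-l", "/tests/results.jtl"]
EOF

docker build -t jmeter-tests .
docker run --rm jmeter-tests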
Going forward, I would recommend updating the question with your Dockerfile so we can get an idea of your approach and the underlying image(s); this way we won't have to take "blind shots" and the chance that you will get an answer will be much higher.
In the meantime, check out the Make Use of Docker with JMeter - Learn How article; you can use it (at least partially) as a reference for building your own setup.

Where can I pull the server host name from if we can't store it in this script?

I noticed someone created a bunch of scripts to run on GemFire clusters, with multiple copies of the same script where the only difference between them is the server name.
Here is a picture of the Github repo
What the script looks like:
#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the servers
#----------------------------------------------------------
(ssh -n <SERVER_HOST_NAME_HERE> ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start")
SERVER_HOST_NAME_HERE = the IP address or server name that the script was designed for, removed for the purposes of this question.
I would like to create one script with a parameter for the server name. The problem is that I'm not exactly sure where the best place would be to store/retrieve the server IPs/hostnames so the script can reference them. Any ideas? The number of cache servers varies with the environment, application, and cluster.
Our development pipeline should ideally work like this:
A user commits a file to the GitHub repo
This triggers a Jenkins job
The Jenkins job copies the file to each cache server, shuts that server down using the stop_cache.sh script, then runs the start_cache.sh script. The number of cache servers can vary from cluster to cluster.
The GemFire cache servers are updated with the new file.
I went with the method suggested by #nos:
Right now you have them hardcoded in each file, it seems. So extract them to a separate file, loop through the entries in that file, and run for host in $(cat cache_hostnames.txt); do ./stop_cache.sh "$host"; done, and something similar for the other kinds of services.
I placed the server names in a file and looped through the file.
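For reference, a minimal sketch of the resulting script; cache_hostnames.txt is assumed to hold one hostname per line, while gf_cache1 and the paths come from the original script:

#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the server on every host listed in cache_hostnames.txt
#----------------------------------------------------------
while read -r host; do
  ssh -n "$host" ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start"
done < /sys_data/gemfire/scripts/cache_hostnames.txt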
This project might be of interest:
https://github.com/Pivotal-Data-Engineering/gemfire-manager

Download files from Artifactory

How can I download files from Artifactory? Is it possible to download them using a batch script? I used curl commands to upload, so in the same way, please provide suggestions for downloading. I appreciate your help.
You can use the JFrog CLI - a compact and smart client that provides a simple interface automating access to JFrog products. The CLI works on both Windows and Linux.
For downloading files, take a look at the command for downloading files from Artifactory. This command allows you to download specific files, multiple files (using wildcards), or complete folders.
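For example, a hedged sketch; the repository path and target folder are placeholders, and the CLI must first be configured with your Artifactory URL and credentials (jfrog config add on current versions):

# Download every .jar under the given repository path into downloads/.
jfrog rt download "libs-release-local/com/example/*.jar" downloads/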
Use GNU Wget from here - http://gnuwin32.sourceforge.net/packages/wget.htm
It is a very small utility and supports showing the download percentage and a lot of other options, like overwriting, not downloading if the file already exists, etc.
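For example (the URL and credentials are placeholders):

# Download a single artifact with basic authentication.
wget --user=myuser --password=mypassword \
  "https://artifactory.example.com/artifactory/libs-release-local/com/example/my-app-1.0.jar"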
Hi, I used the same curl command with Ansible, but I had missed configuring the remote server for Ansible, so curl was not working. After configuring the remote server, it was able to download. Thanks a lot for the response.
