How to automatically fetch the active Docker registry - bash

I am delivering a Docker image along with a script that automates building and pushing that image to the repository.
docker build -t myDockerSpace/myServiceImage:latest .
docker push myDockerSpace/myServiceImage
Without specifying the namespace/hub I run into access issues. The problem is that when I deliver these scripts, I have to ask users to manually change the hub to their respective registry before running them, which defeats the purpose of automating this.
Is there a way I can fetch the active Docker namespace of the machine where the script is running and insert it into these commands automatically?
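One possible approach is to derive the namespace from the local Docker Hub login. A minimal sketch, assuming the credentials are stored in ~/.docker/config.json rather than in a credential helper, and that jq is available; the image name is a placeholder:
#!/usr/bin/env bash
set -euo pipefail
# Extract the base64-encoded "user:password" entry for Docker Hub, if present.
AUTH=$(jq -r '.auths["https://index.docker.io/v1/"].auth // empty' ~/.docker/config.json)
if [ -n "$AUTH" ]; then
  # The part before the colon is the Docker Hub username, i.e. the namespace.
  NAMESPACE=$(printf '%s' "$AUTH" | base64 -d | cut -d: -f1)
else
  # Fall back to an environment variable if no stored login is found.
  NAMESPACE="${DOCKER_NAMESPACE:?run 'docker login' or set DOCKER_NAMESPACE}"
fi
docker build -t "$NAMESPACE/myserviceimage:latest" .
docker push "$NAMESPACE/myserviceimage:latest"
If users rely on a credential helper (common with Docker Desktop), the auth entry will be empty, which is why the environment-variable fallback is included.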

Related

Linode Docker Marketplace App: Where is the config stored?

Using the Docker App from the Marketplace, I can set a command to run when the Linode is created. What if I need to change that command later, for example to change the image tag? Where can I find this info on the Debian system so I can edit it after creation?
After a few tests, I figured out that the command is not stored on the Linode and is not re-run at reboot. Restarting a container at reboot is done by adding a restart policy to the docker run command.
If I need to update the image to a newer version, I have to stop the current container and do a docker run with the new image.
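A minimal sketch of that manual update cycle (container and image names are placeholders), with a restart policy so the replacement container also comes back up after a reboot:
docker pull myimage:new-tag
# Remove the running container, then start a new one from the updated image.
docker stop myapp && docker rm myapp
docker run -d --name myapp --restart unless-stopped myimage:new-tag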

How do I automatically update Grafana dashboards (and datasources) in the Docker Image from the exported JSON?

I am attempting to update Grafana dashboards/data-sources automatically inside a Grafana Docker image, using the relevant exported JSON, which is stored (and routinely updated) in GitHub/Bitbucket.
E.g.:
Docker image running Grafana
The Dockerfile adds a Bash script which pulls from a Git source.
The script then copies the JSON files into the relevant directories (/etc/grafana/provisioning/datasource + /dashboards).
Graphs and datasources are updated without manual intervention (other than updating the JSON stored in GitHub or Bitbucket).
I have exec'ed into the Grafana Docker container; Grafana runs on a very minimal Linux system, so practically none of the usual commands (git, wget, apt) are available.
Would I be silly in thinking I should create a Dockerfile based on a Debian image, run an apt update and install git inside it, and then somehow run Grafana and the script inside that image?
Please feel free to ask for more information.
Consider a simpler approach that uses Docker volumes:
The Grafana container uses Docker volumes for /etc/grafana/provisioning/datasource + /dashboards.
Those volumes are shared with another Docker container that you create.
Your container runs an incoming webhook server that is publicly reachable.
When that webhook is triggered, your script runs.
That script git-pulls the changes from your repo and copies the JSON files into the relevant directories. The "relevant directories" are the Docker volumes shared between your container and the Grafana container.
You register a webhook in the GitHub repo that is executed on each push to master.
The whole process is automated and looks like this:
You push to master in your GitHub repo with the relevant sources.
Your container running the incoming webhook server is poked by GitHub.
That container executes a script.
The script git-pulls the GitHub repo and copies the JSON files into the shared folders.
If you need, for example, to restart the Grafana container from that script, you can mount the Docker socket (-v /var/run/docker.sock:/var/run/docker.sock) and execute docker commands from inside the container.
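A rough sketch of that layout with the docker CLI; the volume names and the webhook server image (my-webhook-image) are placeholders, and the provisioning paths are the ones used by the official Grafana image:
docker volume create grafana-datasources
docker volume create grafana-dashboards
# Grafana reads its provisioning files from the shared volumes.
docker run -d --name grafana \
  -v grafana-datasources:/etc/grafana/provisioning/datasources \
  -v grafana-dashboards:/etc/grafana/provisioning/dashboards \
  -p 3000:3000 grafana/grafana
# The webhook container mounts the same volumes plus the Docker socket,
# so its pull script can copy JSON into them and restart Grafana if needed.
docker run -d --name provisioner \
  -v grafana-datasources:/out/datasources \
  -v grafana-dashboards:/out/dashboards \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 9000:9000 my-webhook-image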

Running jenkins on docker (on Windows)... Proper steps to run pipeline job

I'm trying to implement docker with jenkins and am not sure if I am on the right track.
Given:
Running jenkins on docker from Windows
Plan on fetching code from GitHub, building the solution, running functional tests, etc., in a container somehow
What I've currently done:
(1) Installed Docker on Windows
(2) Successfully launched Jenkins on Docker with the command
"docker run –name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts"
I believe this step bind-mounts my host machine's directory as the container's Jenkins home, which allows me to view and access the Jenkins content.
(3) In my host machine's Jenkins directory, I've created a plugins.txt file (containing a variety of Jenkins plugins I want installed) and a Dockerfile. The Dockerfile installs the plugins specified in the plugins.txt file.
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
(4) In the windows command prompt, I built the Dockerfile with the command "docker build -t new_jenkins_image ."
(5) I stop my current container "myJenkins" and create a new container with the command "docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image". This loads up Jenkins with the newly installed Jenkins plugins.
What I'm stuck/confused on
(1) Do I have to create a new container with a new name every time I want to install new jenkins plugins through the Dockerfile? This seems like a manual process as well... There has to be a better way.
(2) I started a basic Jenkins pipeline job with the "Pipeline script from SCM" option. I entered the correct repository URL and credentials but left the "Script Path" blank for now (I do not have a Jenkinsfile yet). When I executed the build, Jenkins did not fetch the code from GitHub.
java.lang.IllegalArgumentException: Empty path not permitted.
at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:80)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:205)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:249)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:281)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:171)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:165)
at jenkins.plugins.git.GitSCMFileSystem$3.invoke(GitSCMFileSystem.java:193)
at org.jenkinsci.plugins.gitclient.AbstractGitAPIImpl.withRepository(AbstractGitAPIImpl.java:29)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.withRepository(CliGitAPIImpl.java:72)
at jenkins.plugins.git.GitSCMFileSystem.invoke(GitSCMFileSystem.java:189)
at jenkins.plugins.git.GitSCMFile.content(GitSCMFile.java:165)
at jenkins.scm.api.SCMFile.contentAsString(SCMFile.java:338)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:110)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I believe this is because the Docker container does not have Git installed? The container cannot access Git or MSBuild from my host machine... Do I have to create a new container here simply to fetch the code?
Can someone explain to me what I'm missing or where I went wrong?
From my understanding, the process goes like this: Create new pipeline job -> select pipeline script from scm -> enter repo URL, credentials, branch to build and Jenkinsfile -> Jenkinsfile will execute instructions to compile, test, and deploy.
Where does the Dockerfile come into play here? Is my thought process on the right track?
You need to create a new container every time you change/update the image, but you are not required to give it a new name each time. Did you stop and remove the previously running container? If not, Docker gives an error that a container with the same name cannot be started. Stop and remove your previous container, and you will be able to start a new container from the updated image.
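A short sketch of that rebuild-and-recreate cycle, reusing the container name and paths from the steps above:
docker build -t new_jenkins_image .
# Remove the old container so the same name can be reused.
docker stop myJenkins2 && docker rm myJenkins2
docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 \
  -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image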
Yes, you need Git installed in the same container to pull the code; it cannot access Git on the host machine. But the error you are showing looks like a validation error: Jenkins validates the input even before trying to pull the code. If you enter some placeholder Script Path, it will throw the next error instead (e.g., that Git was not found).
Your thought process is on the right track: create a new pipeline job -> select Pipeline script from SCM -> enter the repo URL, credentials, branch to build and the Jenkinsfile path -> the Jenkinsfile will execute instructions to compile, test, and deploy.
At the end of the question you mention a different Dockerfile; I assume you are talking about the Dockerfile in your Git repository. You can run your pipeline in a Docker agent. This removes the need to set everything up on the Jenkins host, i.e. you don't need to install the dependencies your pipeline code requires on the host. For example, if you are trying to execute some Node.js code in the pipeline, you would normally have to set up Node.js on the Jenkins host before running the pipeline; to avoid this, you can run the pipeline in a container where everything is pre-installed. But I don't think you can use this feature if you are running Jenkins itself in Docker; in that case you would need to set up Jenkins directly on the host.

Committing a docker container after a build failure

I'm trying to use the Docker plugin in Jenkins to commit a docker container when the build running on it fails. Currently I have a Jenkins server with ~15 nodes, each with its own docker cloud. The nodes all have the latest version of docker-ce installed. I have a build set up to run on a docker container. What I want to do is commit the container when the build fails. Below are the things I have tried:
Adding a post-build task, where I obtain the container ID and the hostname of the node running the container. I then SSH into the node and commit the container.
The problem: I'm not able to SSH from inside a container, as it requires a password and there is no way of adding the node to the container's list of known hosts.
Checking the "commit container" box in the build's general configurations
The problem: This is probably working but I don't know where the container is being committed to. Also this happens every time, and not just when the build fails.
Using the build script
Same problem as using the post build task
Execute a docker command (Build step)
This option asks for the container ID, which I have no way of knowing as it is new every time a build is run.
Please let me know if I have misunderstood any of the above ways! I am still new to Jenkins and Docker so I am learning as I go. :)
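For reference, the CLI operation being automated in all of these attempts is docker commit. A minimal sketch, where CONTAINER_ID is a placeholder that the post-build step would still have to determine, BUILD_NUMBER is the standard Jenkins build-number variable, and the registry/repository name is hypothetical:
# Snapshot the failed build's container filesystem as a new image and push it
# somewhere it can be inspected later.
docker commit "$CONTAINER_ID" myregistry.example.com/failed-builds:"$BUILD_NUMBER"
docker push myregistry.example.com/failed-builds:"$BUILD_NUMBER"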

How can I properly configure a gcloud account for my Gradle Docker plugin when using GCR?

Our containers are hosted using Google Container Registry, and I am using id "com.bmuschko.docker-java-application" version "3.0.7" to build and deploy docker containers. However, I run into permission issues whenever I try to pull the base image or push the image to GCR (I am able to get to the latter step by pulling the image and having it available locally).
I'm a little bit confused about how I can properly configure a particular gcloud account to be used whenever the plugin issues any Docker-related calls over the wire.
As a first attempt, I've tried to create a task that precedes any build or push commands:
task gcloudLogin(type: Exec) {
    executable "gcloud"
    args "auth", "activate-service-account", "--key-file", "$System.env.KEY_FILE"
}
However, this simple wrapper doesn't work as desired. Is there currently a supported way to have this plugin work with GCR?
I got in touch with the maintainers of the Gradle Docker plugin, and we found this to be a valid solution.
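As an aside, GCR also accepts a service-account key directly as Docker credentials via the special _json_key username, so the login can be done with the plain Docker CLI instead of gcloud; a minimal sketch, assuming KEY_FILE points at the service-account JSON key file:
# Authenticate the local Docker client against gcr.io using the JSON key;
# subsequent pulls and pushes (including the plugin's) reuse this login.
docker login -u _json_key --password-stdin https://gcr.io < "$KEY_FILE"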
