I am trying to dynamically set the image name and tag for AWS Elastic Beanstalk in my Dockerrun.aws.json file:
"Image": {
"Name": "IMAGETAG",
"Update": "true"
}
with the following sed command as a script in my GitLab CI file:
sed -i.bak "s|IMAGETAG|$CONTAINER_TEST_IMAGE|" Dockerrun.aws.json && rm Dockerrun.aws.json.bak; eb deploy Production
where $CONTAINER_TEST_IMAGE is a verified-good environment variable (tested by running echo $CONTAINER_TEST_IMAGE as a script step). $CONTAINER_TEST_IMAGE has the following structure (where ... is the full id):
gitlab.company.com:123/my-group/my-project:core_7de09851...8f_testing
The problem I am facing is that sed does not work during the CI pipeline. I don't understand why, because if I set the environment variable locally and run the same command, it successfully replaces the value of Name with the same kind of URL. (That local testing was done on a MacBook.)
I know the file is not being updated because the GitLab CI log reports:
WARN: Failed to pull Docker image IMAGETAG:latest, retrying...
I've tried a few things that did not work:
Running the sed and eb deploy commands as separate scripts (two different lines in the CI file)
Switching the placeholder in Dockerrun.aws.json to <IMAGE>
While it was <IMAGE>, running sed -i='' "s|<IMAGE>|$CONTAINER_RELEASE_IMAGE|" Dockerrun.aws.json instead of doing the .bak and then rm'ing it (I read somewhere that sed's in-place flag behaves inconsistently on macOS in the -i='' form)
Does anyone have any thoughts on what the issue might be and how it can be resolved?
There were two aspects of this that were going wrong:
The sed command was not executing correctly on the runner, but was working locally
eb deploy was ignoring the updated file
For part 1, the working sed command is:
sed -ri "s|\"IMAGETAG\"|\"$1\"|" Dockerrun.aws.json
where the line in Dockerrun.aws.json is "Name": "IMAGETAG",. sed still confuses me here, so I can't explain why this one works versus the original command.
For part 2, apparently eb deploy will always look at the latest commit if it can, rather than the current working directory. Makes sense, I guess. To get around this, run the command as eb deploy --staged. You can read more about this flag on AWS's site.
Also, note that my .gitlab-ci.yml simply calls a script to run all of this rather than doing it there.
- chmod +x ./scripts/ebdeploy.sh
- ./scripts/ebdeploy.sh $CONTAINER_TEST_IMAGE
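For reference, a minimal sketch of what scripts/ebdeploy.sh can look like with both fixes combined ($1 is the image reference passed in from the CI file above; the set -e line is my addition, not from the original script):
#!/usr/bin/env bash
set -e

# Part 1: replace the quoted IMAGETAG placeholder with the image passed as $1
sed -ri "s|\"IMAGETAG\"|\"$1\"|" Dockerrun.aws.json

# Part 2: deploy the staged (uncommitted) changes instead of the last commit
eb deploy Production --staged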
I was wondering if it would be possible to list all running docker-compose files with a bash function? Something like docker ps or docker container ls, but I want it to show the files only.
I started with this: lsof | egrep 'REG|DIR' | awk '{print $9}', but it gives me tons of unwanted information as well.
What would be the best approach here?
Thanks in advance.
This bash one-liner shows the working dir of each container's compose file:
for c in $(docker ps -q); do echo "$c"; docker inspect "$c" --format '{{index .Config.Labels "com.docker.compose.project.working_dir"}}'; done
I edited kvelev's command to get this. The original was just printing "docker-compose.yaml" for each of my running containers, so I changed the filter to show the working dir, which works for me.
So I played around with docker inspect and came up with this:
for c in $(docker ps -q); do echo "$c"; docker inspect "$c" --format '{{index .Config.Labels "com.docker.compose.project.config_files"}}'; done
So it is possible ;)
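Combining the two, a sketch that prints both labels for every running container (these labels are only set on compose-managed containers; plain docker run containers print empty values):
for c in $(docker ps -q); do
  echo "$c"
  docker inspect "$c" --format 'dir:   {{index .Config.Labels "com.docker.compose.project.working_dir"}}'
  docker inspect "$c" --format 'files: {{index .Config.Labels "com.docker.compose.project.config_files"}}'
done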
It is not possible to backtrack and find the docker-compose file that was used to deploy a container. To avoid this kind of issue, a project pipeline is recommended, using tools like Maven, Jenkins, or Gradle along with a repository platform like GitHub. If it's a personal project, you can organize it by wrapping the docker deployment commands and source files in a script and only using that script to create deployments. This way it stays organized to some extent; a minimal sketch of such a wrapper follows.
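A minimal sketch, assuming the compose file sits next to the wrapper (the file names are hypothetical; adapt to your project):
#!/usr/bin/env bash
# deploy.sh -- hypothetical wrapper so the compose file in use is always known
set -euo pipefail
cd "$(dirname "$0")"   # keep docker-compose.yml next to this script
docker-compose -f docker-compose.yml up -d "$@"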
I am using GitLab CI to automate build and release. In the last job, I want to upload the artifacts to a remote server using lftp.
The $(pwd)/publish/ path holds the artifacts generated in the previous job, and all variables are declared under GitLab Settings --> CI / CD.
This is the job's YAML:
upload-job:
  stage: upload
  image: mwienk/docker-lftp:latest
  tags:
    - dotnet
  only:
    - master
  script:
    - lftp -e "open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"
Note that lftp does transfer my files; however, I'm not sure all of them are transferred. I added echo "All files transferred." after the lftp command, but it never runs.
There are no errors or warnings in the pipeline log, but the job ends with:
ERROR: Job failed: exit code 1
I don't know what it's for. Has anyone faced this error and found a solution?
Finally, I solved the problem with some changes to the lftp command's parameters.
The key to troubleshooting ERROR: Job failed: exit code 1 is to run commands with a verbose parameter so they produce enough log output to locate the problem. Another important point is knowing how to debug shell scripts (bash, PowerShell, etc.).
It also helps to run and test the commands directly in a shell.
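For example, a sketch of the same job step with lftp's debug output switched on (-d enables lftp's debug mode; the rest matches the job above):
- lftp -d -e "open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"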
The following links are helpful to troubleshoot command-line scripts:
How to debug a bash script?
5 Simple Steps On How To Debug a Bash Shell Script
Use the PowerShell Debugger to Troubleshoot Scripts
For lftp logging:
How to enable lftp protocol logging?
I am trying to run Maven through a Docker image using a shell script.
When running docker in the shell, I use sed to remove single quotes:
bash script:
docker run $(echo "-e CUCUMBER_FILTER_TAGS=$CUCUMBER_FILTER_TAGS $RUN_ARGUMENT $INPUT_MAVEN_COMMAND $MAVEN_ARGUMENTS $AUTHENTICATION" | sed "s/'//g")
is translated into
docker run -e 'CUCUMBER_FILTER_TAGS="#bidrag-person' and not '#ignored"' --rm -v /home/deployer/actions-bidrag-cucumber-backend/ws/bidrag-cucumber-backend/bidrag-cucumber-backend:/usr/src/mymaven -v /home/deployer/.m2:/root/.m2 -w /usr/src/mymaven maven:3.6.3-openjdk-15 mvn test -e -DUSERNAME=j104364 -DINTEGRATION_INPUT=json/integrationInput.json -DUSER_AUTH=*** -DTEST_AUTH=*** -DPIP_AUTH=***
How can I remove those extra single quotes around and within CUCUMBER_FILTER_TAGS that seem to pop up from nowhere?
I cannot solve this and am seeking a solution. This script (https://github.com/navikt/bidrag-maven/blob/feature/filter.tags/cucumber-backend/cucumber.sh) is run from a scheduled GitHub Actions workflow.
The other variables (which are not inputs to this script) are set as environment variables from GitHub secrets in a GitHub workflow:
AUTHENTICATION="-DUSER_AUTH=$USER_AUTHENTICATION -DTEST_AUTH=$TEST_USER_AUTHENTICATION -DPIP_AUTH=$PIP_USER_AUTHENTICATION"
and the secrets are set in a GitHub workflow YAML file like this:
- uses: navikt/bidrag-maven/cucumber-backend@v6
  with:
    maven_image: maven:3.6.3-openjdk-15
    username: j104364
  env:
    USER_AUTHENTICATION: ${{ secrets.USER_AUTHENTICATION }}
    TEST_USER_AUTHENTICATION: ${{ secrets.TEST_USER_AUTHENTICATION }}
    PIP_USER_AUTHENTICATION: ${{ secrets.PIP_USER_AUTHENTICATION }}
You should be using an array.
docker_options=(
  -e "CUCUMBER_FILTER_TAGS=$CUCUMBER_FILTER_TAGS"
  "$RUN_ARGUMENT"
  "$INPUT_MAVEN_COMMAND"
  "$MAVEN_ARGUMENTS"
  "$AUTHENTICATION"
)
docker run "${docker_options[@]}"
While these answers work to some degree, they will not function in my use case. I have recently upgraded the server where these scripts are used, and I have moved on to other scripting languages.
Conclusion:
Bash scripting is hard and painstaking. Both of these suggestions are functioning (sort of), but not as intended.
I have a simple Dockerfile that copies over a template in which I use sed to replace some of the variables. Pretty straightforward, and from everything I've seen and read, it should work.
COPY /my-dir/my-textfile.conf /to/my/docker/path.conf
RUN sed -i s:TEXTTOREPLACE:my-new-text:g /to/my/docker/path.conf
I then run docker build ..., then docker run ... bash;
then I cat my file, and TEXTTOREPLACE is still there.
Running the same sed command inside the container's bash works no problem.
Any thoughts? What am I doing wrong/not seeing?
Thanks!
EDIT per request: the base image is debian:7.11; my workstation runs macOS.
Just to recap.
I have the file my-textfile.conf in my working directory. Its content is:
I need to change TEXTTOREPLACE with my-new-text
My test system is Ubuntu Linux 16.04 running Docker version 18.09.0, build 4d60db4.
This is the Dockerfile
FROM debian:7.11
COPY my-textfile.conf /tmp/path.conf
RUN sed -i s:TEXTTOREPLACE:my-new-text:g /tmp/path.conf
I run the following commands:
docker build -t mytestimage .
docker run -ti -d --name mytestcontainer mytestimage
docker exec -ti mytestcontainer /bin/bash
Then, inside the container, I run:
cat /tmp/path.conf
and I get this result:
I need to change my-new-text with my-new-text
So it seems it works as expected.
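If the file still shows TEXTTOREPLACE on your machine, one quick sanity check is to rebuild without Docker's layer cache, which rules out a stale cached layer being reused for the RUN step (a standard docker build flag, not specific to this Dockerfile):
docker build --no-cache -t mytestimage .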
I am creating an Amazon EMR cluster where one of the steps is a bash script run by script-runner.jar:
aws emr create-cluster ... --steps '[ ... {
    "Args": ["s3://bucket/scripts/script.sh"],
    "Type": "CUSTOM_JAR",
    "ActionOnFailure": "TERMINATE_CLUSTER",
    "Jar": "s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar"
}, ... ]' ...
as described in https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-script.html
script.sh needs other files in its commands: think awk ... -f file, sed ... -f file, psql ... -f file, etc.
On my laptop, with both script.sh and the files in my working directory, everything works just fine. However, after I upload everything to s3://bucket/scripts, the cluster creation fails with:
file: No such file or directory
Command exiting with ret '1'
I have found the workaround posted below, but I don't like it for the reasons specified. If you have a better solution, please post it, so that I can accept it.
I am using the following workaround in script.sh:
# Download the SQL file to a tmp directory.
tmpdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
aws s3 cp s3://bucket/scripts/file "${tmpdir}"
# Run my command
xxx -f "${tmpdir}/file"
# Clean up
rm -r "${tmpdir}"
This approach works but:
Running script.sh locally means that I have to upload the files to S3 first, which makes development harder.
There are actually a few files involved, so the copy step has to be repeated for each one (see the sketch below).
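Since several files are involved, one variation on the same workaround is to pull the whole prefix in a single call instead of one aws s3 cp per file (a sketch; aws s3 sync and --exclude are standard AWS CLI options, and the paths match the ones above):
tmpdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
# Fetch every auxiliary file under the prefix in one call; skip the script itself
aws s3 sync s3://bucket/scripts/ "${tmpdir}" --exclude "script.sh"
xxx -f "${tmpdir}/file"
rm -r "${tmpdir}"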