How to use a Kubernetes pod file in my remote Jenkins job - shell

I have a Jenkins server that has access to a remote Kubernetes cluster. I need to run a shell script as part of a Jenkins job and pass it a file that is inside a remote Kubernetes pod.
I don't want to copy the file from the pod to the Jenkins server, because the file is huge (~25 GB).
I tried creating a variable kubectl_variable with the value:
kubectl get -f pod:<path to file>
which gives this error:
error: the path "podname:/var/lib/datastorage/dump.txt" cannot be accessed: CreateFile podname:/var/lib/datastorage/dump.txt: The filename, directory name, or volume label syntax is incorrect.
I was thinking of passing this variable to the shell script as:
parse.sh ${kubectl_variable}
I am not sure whether I can achieve this or not.
I would appreciate your help.
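One way to avoid copying 25 GB to the Jenkins server is to stream the file out of the pod and pipe it straight into the script. A minimal sketch, assuming parse.sh can read from standard input (the pod name and path are taken from the error message above; whether your cluster context and namespace are already set is an assumption):

```shell
# stream the file's contents over the kubectl connection;
# nothing is written to disk on the Jenkins server
kubectl exec podname -- cat /var/lib/datastorage/dump.txt | ./parse.sh /dev/stdin
```

If parse.sh insists on a file path argument, /dev/stdin lets it treat the pipe as a file; if it reads stdin directly, the pipe alone is enough.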

Related

How to access an SSH key file in a TeamCity job without SSH upload

I have a job that uses ssh to reach other servers and deploys configuration with scp, but I cannot find any way to access the SSH key file configured in my TeamCity project in order to run a shell command in my job ("ssh -i ~/.ssh/password"), because TeamCity runs only in the job directory. Therefore I want to ask: is there any way to access the SSH private key file that I configured in the project settings?
Just to note: I cannot use SSH-EXEC and SSH-UPLOAD, because my shell script sshes into many servers one by one, reading them from a file. It would not be practical to have one separate SSH-Exec build step per server in the TeamCity project, so I have to access the key file without using the standard SSH-EXEC and SSH-UPLOAD steps.
What have I tried?
I only had one idea: somehow access the SSH key, which is located outside the working directory, by its path (found in the documentation):
<TeamCity Data Directory>/config/projects/<project>/pluginData/ssh_keys
The problem is that I cannot simply cd into that path: the job does not get to leave the working directory in which TeamCity executes it, so I could not reach the directory where my project's ssh_keys are stored.
UPD: I found a solution: use the SSH Agent build feature; that way the key is available to command-line ssh right in the job.
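With the SSH Agent build feature enabled, TeamCity loads the project's key into ssh-agent for the duration of the build, so a plain command-line step can loop over servers without ever touching the key file. A rough sketch (servers.txt, the user name, and the remote command are all hypothetical):

```shell
# servers.txt: one hostname per line (hypothetical inventory file)
while read -r host; do
  # BatchMode makes ssh fail fast instead of prompting if the agent's key is rejected
  ssh -o BatchMode=yes "deploy@$host" 'hostname'
done < servers.txt
```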

Mount a volume in a host directory

I am running an application with a Dockerfile that I made.
I first run my image with this command:
docker run -it -p 8501:8501 99aa9d3b7cc1
Everything works fine, but I was expecting to see a file in a specific folder of my app directory, which is the expected behaviour. Running under Docker, it seems the application cannot write to my host directory.
Then I tried to mount a volume with this command:
docker 99aa9d3b7cc1:/output .
I got this error: docker: invalid reference format.
Which is the right way to persist the data that the application generates?
Use docker bind mounts.
e.g.
-v "$(pwd)"/volume:/output
Files created in /output inside the container will be accessible in the volume folder relative to where the docker command was run.
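Putting the bind mount together with the run command from the question (the image id and port are from the question; the volume subdirectory name is an assumption):

```shell
# create the host folder first so Docker doesn't create it owned by root
mkdir -p "$(pwd)/volume"
docker run -it -p 8501:8501 -v "$(pwd)/volume:/output" 99aa9d3b7cc1
```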

Post deployment script that reads environment variable inside deployed pod

I have a Kubernetes Job whose responsibility is to post a jar to Flink (via the Flink API) and run it.
In response it gets a job id back from the Flink API, which I need in a test script to check whether my job is running. The job runs inside the container/pod spawned by job.yaml, and the test script does not run in that same pod/container.
If I save this job id as an environment variable inside the container/pod spawned by job.yaml, is there a way to access that environment variable outside the pod? I am not even allowed to get into the container manually (to print environment variables) using kubectl exec -it podname /bin/bash; it tells me I can't get inside a completed (not running) pod, so I am not sure a script can do it either.
Are there any alternatives for accessing the job id in my test scripts via an environment variable set inside the container/pod (spawned by job.yaml)?
In summary: is there a way to access an environment variable set inside a pod from a script that runs outside of it?
Thank you,
Pavan.
No, you can't use an environment variable for that.
You could add an annotation from inside your pod.
For that you will need to set up:
a ServiceAccount that allows the pod to annotate itself
the Downward API
Then you will be able to access it from another pod/container.
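A rough sketch of the annotation round trip, assuming the pod's ServiceAccount is allowed to patch pods and that POD_NAME and JOB_ID are already set (the annotation key is hypothetical):

```shell
# inside the pod, right after the Flink API returns the job id:
kubectl annotate pod "$POD_NAME" flink-job-id="$JOB_ID"

# in the test script, outside the pod (works even after the pod has completed,
# as long as the Job's pod object still exists):
job_id=$(kubectl get pod "$POD_NAME" -o jsonpath='{.metadata.annotations.flink-job-id}')
echo "$job_id"
```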

Specify an AWS CLI profile in a script when two exist

I'm attempting to use a script which automatically creates snapshots of all EBS volumes on an AWS instance. This script is running on several other instances with no issue.
The current instance already has an AWS profile configured which is used for other purposes. My understanding is I should be able to specify the profile my script uses, but I can't get this to work.
I've added a new set of credentials to the /home/ubuntu/.aws file by adding the following under the default credentials which are already there:
[snapshot_creator]
aws_access_key_id=s;aldkas;dlkas;ldk
aws_secret_access_key=sdoij34895u98jret
In the script I have tried adding AWS_PROFILE=snapshot_creator, but when I run it I get the error Unable to locate credentials. You can configure credentials by running "aws configure".
So I deleted my changes to /home/ubuntu/.aws and instead ran aws configure --profile snapshot_creator. However, after entering all the information I got the error [Errno 17] File exists: '/home/ubuntu/.aws'.
So I added my changes to the .aws file again, and this time, for every command in the script starting with aws ec2, I added the parameter --profile snapshot_creator; but now when I run the script I get The config profile (snapshot_creator) could not be found.
How can I tell the script to use this profile? I don't want to change the environment variables for the instance because of the aforementioned other use of AWS CLI for other purposes.
Credentials should be stored in the file "/home/ubuntu/.aws/credentials".
The error File exists: '/home/ubuntu/.aws' occurs because aws configure could not create the .aws directory: a plain file with that name already exists. Delete the ".aws" file and re-run the configure command; it will then create the credentials file under "/home/ubuntu/.aws/".
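A sketch of the expected layout, using the profile name from the question (the key values are placeholders): .aws must be a directory holding a credentials file, not a file itself.

```shell
# .aws must be a directory, not a file
mkdir -p /home/ubuntu/.aws
cat >> /home/ubuntu/.aws/credentials <<'EOF'
[snapshot_creator]
aws_access_key_id=PLACEHOLDER_KEY_ID
aws_secret_access_key=PLACEHOLDER_SECRET
EOF
# each command in the script then selects the profile explicitly:
aws ec2 describe-volumes --profile snapshot_creator
```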

Cannot run `source` in AWS Codebuild

I am using AWS CodeBuild along with Terraform for automated deployment of a Lambda-based service. I have a very simple buildspec.yml that accomplishes the following:
Get dependencies
Run Tests
Get AWS credentials and save to file (detailed below)
Source the creds file
Run Terraform
The step "Source the creds file" is where I am having difficulty. I have a simple bash one-liner that grabs the AWS container creds off of curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and then saves them to a file in the following format:
export AWS_ACCESS_KEY_ID=SOMEACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
export AWS_SESSION_TOKEN=MYSESSIONTOKEN
Of course, the obvious step is to simply source this file so that these variables can be added to my environment for Terraform to use. However, when I do source /path/to/creds_file.txt, CodeBuild returns:
[Container] 2017/06/28 18:28:26 Running command source /path/to/creds_file.txt
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: source: not found
I have tried to install source through apt, but I get an error saying that source cannot be found (yes, I've run apt update etc.). I am using the standard Ubuntu image with the Python 2.7 environment for CodeBuild. What can I do to either get Terraform working credentials or source this credentials file in CodeBuild?
Thanks!
Try using . instead of source. source is not POSIX compliant. ss64.com/bash/source.html
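A minimal, self-contained demonstration of the POSIX `.` builtin, using a stand-in creds file in the same format as above (the path is hypothetical):

```shell
# write a stand-in creds file in the documented format
cat > /tmp/creds_file.txt <<'EOF'
export AWS_ACCESS_KEY_ID=SOMEACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
export AWS_SESSION_TOKEN=MYSESSIONTOKEN
EOF

# '.' is the POSIX spelling of 'source'; it works in dash (CodeBuild's /bin/sh) as well as bash
. /tmp/creds_file.txt
echo "$AWS_ACCESS_KEY_ID"   # prints SOMEACCESSKEY
```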
CodeBuild now supports bash as your default shell. You just need to specify it in your buildspec.yml.
env:
  shell: bash
Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
The AWS CodeBuild images ship with a POSIX compliant shell. You can see what's inside the images here: https://github.com/aws/aws-codebuild-docker-images.
If you're using specific shell features (such as source), it is best to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with, and then execute this script from buildspec.yml.
build-script.sh
#!/bin/bash
<commands>
...
buildspec.yml (snippet)
build:
  commands:
    - path/to/script/build-script.sh
I had a similar issue. I solved it by calling the script directly via /bin/bash <script>.sh
I don't have enough reputation to comment, so here is an extension of Jeffrey's answer, which is spot on.
Just in case: if your filename starts with a dot (.), the following will fail:
. .filename
You will need to qualify the filename with a directory name, like
. ./.filename
