As part of a Kubernetes resource definition, I want to whitelist certain IPs. The list of IPs can be found with
$ kubectl get nodes -o wide --no-headers | awk '{print $7}'
#This prints something like
51.52.215.214
18.170.74.10
.....
Now,
In the Kubernetes deployment file (say deployment.yaml) I want to loop over these values and whitelist them.
I know that we can whitelist by adding under loadBalancerSourceRanges like
#part of the deployment.yaml
loadBalancerSourceRanges:
  - 51.52.112.111
  - 18.159.75.11
I want to update the above loadBalancerSourceRanges to include the output of
$ kubectl get nodes -o wide --no-headers | awk '{print $7}'
How do I go about it? Instead of hardcoding the host IPs, I would like to include them programmatically via bash, Ansible, or any other cleaner way possible.
Thanks in advance,
JE
loadBalancerSourceRanges should be part of a Service, not a Deployment.
You can use the following oneliner to patch your service dynamically:
kubectl patch service YOUR_SERVICE_NAME -p "{\"spec\":{\"loadBalancerSourceRanges\": [$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="InternalIP")]}"{.address}/32",{end}' | sed 's/,*$//g')]}}"
where you should replace YOUR_SERVICE_NAME with the actual service name.
To explain what's going on here:
We are using kubectl patch to patch an existing resource, in our case spec.loadBalancerSourceRanges.
We are putting our subshell inside [$(..)], since loadBalancerSourceRanges requires an array of strings.
kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="InternalIP")]}"{.address}/32",{end}' - gets the InternalIP of each node, appends /32 to each (since loadBalancerSourceRanges requires CIDR ranges), encloses each range in double quotes, and places a comma between the values.
sed 's/,*$//g' - removes a trailing comma
Using jsonpath is better than awk/cut because we are not dependent on kubectl column ordering and get only the information relevant to us from the API.
I agree with @Kaffe Myers that you should try using kustomize or helm or other templating engines, since they are better suited for this job.
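If the one-liner is hard to read, it can be split into steps. Below is a sketch using a hypothetical hardcoded sample of what the jsonpath query emits (including its trailing comma), so the sed cleanup and the final payload can be inspected before patching:

```shell
# Hypothetical stand-in for the jsonpath output (note the trailing comma)
raw='"10.0.0.1/32","10.0.0.2/32",'

# Strip the trailing comma, then build the patch payload
ranges=$(printf '%s' "$raw" | sed 's/,*$//')
payload="{\"spec\":{\"loadBalancerSourceRanges\":[$ranges]}}"
echo "$payload"
# -> {"spec":{"loadBalancerSourceRanges":["10.0.0.1/32","10.0.0.2/32"]}}

# In the real case, replace the sample with the kubectl jsonpath query, then:
# kubectl patch service YOUR_SERVICE_NAME -p "$payload"
```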
You can use yq
# empty array if necessary
yq -i '.loadBalancerSourceRanges = []' file.yaml
# In my env (AWS EKS) the IP is field 6 (change if needed)
for host in $(kubectl get nodes -o wide --no-headers | awk '{print $6}')
do
yq -i '.loadBalancerSourceRanges += ["'${host}'"]' file.yaml
done
The -i parameter is to apply the change to the file (like sed)
If "loadBalancerSourceRanges" is inside "config", you can use: ".config.loadBalancerSourceRanges"
This is a very use-specific thingy, and you might be better off researching kustomize. That being said, you could make a temporary file which you alter before deploy.
cp deployment.yaml temp.yaml
kubectl get nodes -o wide --no-headers |
awk '{print $7}' |
xargs -I{} sed -Ei "s/^(\s+)(loadBalancerSourceRanges:)/\1\2\n\1 - {}/" temp.yaml
kubectl apply -f temp.yaml
It looks for the loadBalancerSourceRanges: part of the yaml, which on the "template" shouldn't have any values, then populate it with whatever kubectl get nodes -o wide --no-headers | awk '{print $7}' feeds it with.
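To try the sed insertion without a cluster, here is a self-contained sketch that substitutes two hardcoded sample IPs for the kubectl output (GNU sed is assumed, for -i and \s):

```shell
# Minimal template with an empty loadBalancerSourceRanges: key
cat > temp.yaml <<'EOF'
spec:
  loadBalancerSourceRanges:
EOF

# Sample IPs standing in for: kubectl get nodes -o wide --no-headers | awk '{print $7}'
printf '%s\n' 51.52.215.214 18.170.74.10 |
    xargs -I{} sed -Ei "s/^(\s+)(loadBalancerSourceRanges:)/\1\2\n\1 - {}/" temp.yaml

cat temp.yaml
```

Each sed invocation inserts directly under the key, so the list comes out in reverse input order, which does not matter for a whitelist.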
I need to convert a column which is in CSV format to a comma-separated list so that I can use a for loop on the list and use each parameter.
Here what I have tried:
$ gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001'
Output:
name
dns1
dns2
dns3
dns4
I need such result: "dns1,dns2,dns3,dns4" so that I can use a for loop:
x="dns1,dns2,dns3,dns4"
for i in ${x//,/ }; do
    echo "$i"
done
The paste command returns me the last line:
$ gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001' |paste -sd,
,dns4
I would appreciate if someone can help me with this.
The real problem is apparently that the output has DOS line feeds. See Are shell scripts sensitive to encoding and line endings? for a broader discussion, but for the immediate solution, try
tr -s '\015\012' , <file | sed 's/^[,]*,//;s/,$/\n/'
The arguments to for should just be a list of tokens anyway, no commas between them. However, a better solution altogether is to use while read instead. See also Don't read lines with for
gcloud dns managed-zones list --format='csv(name)' \
--project 'sandbox-001' |
tr -d '\015' |
tail -n +2 | # skip header line
while read -r i; do
echo "$i" # notice also quoting
done
I don't have access to gcloud, but its manual page mentions several other formats which might be more suitable for your needs. See if the json or list format might be easier to manipulate. (CSV with a single column is not really CSV anyway, just a text file.)
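To reproduce both the symptom and the fix without gcloud, the CRLF-terminated output can be simulated with a hypothetical sample file:

```shell
# Simulate the CRLF-terminated gcloud output
printf 'name\r\ndns1\r\ndns2\r\ndns3\r\ndns4\r\n' > zones.txt

# Without stripping \r, paste appears to print only ",dns4" (the carriage
# returns make the terminal overwrite the line). Deleting them fixes it:
tr -d '\015' < zones.txt | tail -n +2 | paste -sd,
# -> dns1,dns2,dns3,dns4
```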
Create an array:
arr=( $(gcloud dns managed-zones list --format='csv(name)' --project 'sandbox-001') )
Then printf it like so:
printf -v var '%s,' "${arr[@]:1}"
This will create the variable $var with the value 'dns1,dns2,dns3,dns4,'. Echo it like this to drop the last comma:
echo "${var%,}"
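A self-contained version of the same idea, with a hardcoded sample array (header included) standing in for the gcloud output; note printf -v is a bashism:

```shell
# Sample standing in for the gcloud output: header line plus zone names
arr=( name dns1 dns2 dns3 dns4 )

printf -v var '%s,' "${arr[@]:1}"   # skip the header element, join with trailing commas
echo "${var%,}"                     # drop the last comma -> dns1,dns2,dns3,dns4
```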
I have a docker-compose file, in which I have a docker image with its versioning tag, for example myrepo.com/myimage:1.0.1.0. I want to write a bash script that receives the version number as a parameter and updates the version tag of the relevant image.
Currently I am using the below command to replace the version tag with the help of grep and sed, but I believe there should be a more efficient way for this. Any suggestions?
sedval='1.1.1.1' # <-- new value
imageval=$(grep -i 'image: myrepo.com/myimage:' docker-compose.yml | cut -d ':' -f2,3)
imagename=$(grep -i 'image: myrepo.com/myimage:' docker-compose.yml | cut -d ':' -f2)
imageversion=$(grep -i 'image: myrepo.com/myimage:' docker-compose.yml | cut -d ':' -f3)
sed -i -e "s+$imageval+$imagename:$sedval+g" docker-compose.yml
docker-compose supports variable substitution. You could do the following:
version: "3"
services:
myrepo:
image: myrepo.com/myimage:${VERSION}
docker-compose looks for the VERSION environment variable and substitutes its value in the file. Run with:
export VERSION=1.1.1.1; docker-compose up
Manipulating YAML with line-oriented regex tools is generally not advisable. You should probably switch to a YAML-aware tool like yq for this.
There doesn't seem to be a reason you extract imageversion; you assign this variable, but never use it. In fact, if you forgo the requirement to match case-insensitively, all you really need is
sed -i "s+\\(image: myrepo\\.com/myimage:\\)[.0-9]*+\\1$1+" docker-compose.yml
assuming (like you say in your question) the new version number is in $1, which is the first command-line argument to your script.
I don't think Docker allows you to use upper case; and if it does, you can replace i with [Ii], m with [Mm], etc. Notice also how the literal dot needs to be backslash-escaped, strictly speaking.
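Here is that sed command as a runnable sketch against a minimal sample compose file, with a variable standing in for the script's $1:

```shell
# Minimal sample docker-compose.yml
cat > docker-compose.yml <<'EOF'
services:
  myrepo:
    image: myrepo.com/myimage:1.0.1.0
EOF

newver=1.1.1.1   # stand-in for "$1" in the real script
sed -i "s+\(image: myrepo\.com/myimage:\)[.0-9]*+\1$newver+" docker-compose.yml

grep 'image:' docker-compose.yml   # the image line now ends in 1.1.1.1
```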
Generally speaking, you can often replace grep 'x' | sed 'y' with sed '/x/y'; see also useless use of grep. Also, the repeated grep could easily be refactored into a single grep with something like
imageval=$(grep -i 'image: myrepo.com/myimage:' docker-compose.yml)
imagever=${imageval##*:}
imagename=${imageval%:$imagever}
where the shell parameter substitutions are both simpler and more elegant as well as more efficient than repeatedly spawning cut in a separate process for this very simple string extraction task.
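The parameter expansions can be tried in isolation on a hypothetical matched line:

```shell
# Hypothetical line, as grep would return it from docker-compose.yml
imageval='    image: myrepo.com/myimage:1.0.1.0'

imagever=${imageval##*:}            # remove everything up to the last ':'
imagename=${imageval%:$imagever}    # strip ':1.0.1.0' from the end

echo "$imagever"    # -> 1.0.1.0
echo "$imagename"   # -> '    image: myrepo.com/myimage'
```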
I need to apply an addition at the end of the command that parses out the IP address from a list of running VMs in our labs. The command is as follows
$ dos.py net-list vanilla80 | grep -i 'fuelweb' | grep -Eo "[0-9][0-9][0-9].[0-9][0-9].[0-9][0-9][0-9].([0-9])"
At the end I would like to add two to the last number in the IP address at the end of the IP address.
Example: 171.11.111.0 => 171.11.111.2
Any suggestions? I am trying to make our labs more efficient with alias commands that reference a script that will match running lab VMs and push their keys to allow easy access to vanilla provisioned labs.
As a quick and dirty answer,
dos.py net-list vanilla80 |
grep -i 'fuelweb' |
grep -Eo "[0-9][0-9][0-9].[0-9][0-9].[0-9][0-9][0-9].([0-9])" |
awk -F . 'BEGIN { OFS=FS }
{ $NF += 2 }1'
However, a much better solution would be to refactor dos.py to produce machine-readable output, and/or perhaps turn it into a module you can import from another Python script which would replace this shell pipeline.
As an aside, your grep -Eo regex could be refactored into something like \<[0-9]{1,3}(\.[0-9]{1,3}){3}\> and if you don't end up rewriting all of this logic in Python, perhaps the entire pipeline could be refactored into a single Awk script.
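The awk step can be exercised on its own by feeding it a hypothetical IP instead of the dos.py output:

```shell
# -F . splits on dots; OFS=FS rejoins with dots; $NF += 2 bumps the last octet
echo 171.11.111.0 |
    awk -F . 'BEGIN { OFS=FS } { $NF += 2 } 1'
# -> 171.11.111.2
```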
I have a bash variable which has the following content:
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
I want to search the string starting with i- and then extract only that instance id. So, for the above input, I want to have output like below:
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
I am open to use grep, awk, sed.
I am trying to achieve this with the following command, but it gives me the whole line:
grep -oE 'i-.*'<<<$variable
Any help?
You can just change your grep command to:
grep -oP 'i-[^\s]*' <<<$variable
Tested on your input:
$ cat test
SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4
$ var=`cat test`
$ grep -oP 'i-[^\s]*' <<<$var
i-12hfhf578568tn
i-12hdfghf578568tn
i-13456tg
grep is exactly what you need for this task; sed would be more suitable if you had to reformat the input, and awk would be nice if you had to reformat a string or do some computation on fields in the rows or columns.
Explanation:
-P is to use perl regex
i-[^\s]* is a regex that will match a literal i- followed by 0 to N non-space characters; you could change the * to a + if you want to require at least 1 character after the -, or use the {min,max} syntax to impose a range.
Let me know if there is something unclear.
Bonus:
Following the comment of Sundeep, you can use one of the improved versions of the regex I have proposed (the first one does use PCRE and the second one posix regex):
grep -oP 'i-\S*' <<<$var
or
grep -o 'i-[^[:blank:]]*' <<<$var
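Both variants can be checked against the sample input from the question:

```shell
var='SSH exit status 255 for i-12hfhf578568tn
i-12hdfghf578568tn is able to connect
i-13456tg is not able to connect
SSH exit status 255 for 1.2.3.4'

# PCRE variant
grep -oP 'i-\S*' <<<"$var"
# POSIX variant, same output
grep -o 'i-[^[:blank:]]*' <<<"$var"
# each prints:
# i-12hfhf578568tn
# i-12hdfghf578568tn
# i-13456tg
```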
You could use the following too (I tested it with GNU awk):
echo "$var" | awk -v RS='[ \n]' '/^i-/'
You can also use this code (Tested in unix)
echo $test | grep -o "i-[0-z]*"
Here,
-o # Prints only the matching part of the lines
i-[0-z]* # This regular expression matches 'i-' followed by characters in the ASCII range 0-z, which covers digits and letters (and, strictly speaking, some punctuation that falls between them in ASCII order).
I want to remove all tags on a given docker repository locally. For example, if I had two tags on a repo called "my_image:latest" and "my_image:sometag" I would want to remove both of those tags. However, I do not want to remove "another_image:latest".
To list the images you want to remove:
$ docker images --filter='reference=my_image' --format='{{.Repository}}:{{.Tag}}'
To delete them as well, combine with docker rmi:
$ docker images --filter='reference=my_image' --format='{{.Repository}}:{{.Tag}}' | xargs docker rmi
In one line mostly:
docker images | grep "$DOCKER_REPO" | awk '{system("docker rmi " "'"$DOCKER_REPO:"'" $2)}'
Remember you can't remove images that are being used unless you use the -f option.
You can use the following shell function:
dockercleanrepo(){
docker images --no-trunc --format '{{.ID}} {{.Repository}}' \
| grep "$#" | awk '{ print $1 }' | xargs -t docker rmi
}
Add it to your bashrc or zshrc and use it simply with:
dockercleanrepo <your-repository>
Also related to this, check the answers for the question How to remove old and unused Docker images.
Not sure if something changed since @jcmack posted that answer, or if Google Container Registry is special, but it didn't work as is; I adapted it:
docker images | grep "gcr.io/project-id" | awk '{system("docker rmi " $1 ":" $2)}'