I am trying to get the project name of my GCP project.
Firstly, I tried using the command:
gcloud projects describe $PROJECT_ID
This returns the project ID, number, name, organization and other details.
The next step is to use grep to extract the project name.
It's often more convenient to use gcloud only:
PROJECT=[[YOUR-PROJECT]]
gcloud projects describe ${PROJECT} \
--format="value(name)"
You may use e.g. value(projectNumber) to get the project number.
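For example, this prints only the numeric project number (a minimal sketch, assuming ${PROJECT} is set as above):
gcloud projects describe ${PROJECT} \
--format="value(projectNumber)"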
To get the project name of your GCP project, use the following command:
gcloud projects describe $PROJECT_ID | grep name | awk -F'name: ' '{print $2}'
where $PROJECT_ID is the ID of your GCP project.
I want to know if there is a way to delete specific resources in Kubernetes across all namespaces. I want to delete all the services of type LoadBalancer at once, and have this automated in a pipeline. I already built an xargs kubectl command which gets the names of the LoadBalancer services across all namespaces and feeds the output to the kubectl delete command. I just need to loop across all namespaces now.
kubectl get service -A -o json | jq '.items[] | select (.spec.type=="LoadBalancer")' | jq '.metadata.name' | xargs kubectl delete services --all-namespaces
error: a resource cannot be retrieved by name across all namespaces
If I remove the --all-namespaces flag and run with the --dry-run=client flag instead, I get a dry-run deletion of all the services I want deleted, across all namespaces. Is there a way k8s lets you delete resources by name across all namespaces?
Any ideas?
UPDATE:
This is the output of running the command with the --dry-run=client flag; it gets the names of all the services I want to delete and automatically feeds them to the kubectl delete command:
kubectl get service -A -o json | jq '.items[] | select (.spec.type=="LoadBalancer")' | jq '.metadata.name' | xargs kubectl delete services --dry-run=client
service "foo-example-service-1" deleted (dry run)
service "bar-example-service-2" deleted (dry run)
service "baz-example-service-3" deleted (dry run)
service "nlb-sample-service" deleted (dry run)
The only part missing is doing the deletion across all namespaces so that all the specified services get deleted. I only want to delete services of type LoadBalancer, not other types such as ClusterIP or NodePort, so the specific names must be provided.
You can achieve that using jsonpath with the following command:
kubectl delete services $(kubectl get services --all-namespaces \
--o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}{" -n "}{.metadata.namespace}{"\n"}{end}')
The output of the kubectl get command gives a list of <service_name> -n <namespace_name> strings that are used by the kubectl delete command to delete services in the specified namespaces.
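If the services are spread over several namespaces, a line-by-line variant may be safer, since each delete then runs with its own -n flag (a sketch along the same lines):
kubectl get services --all-namespaces \
-o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}{" -n "}{.metadata.namespace}{"\n"}{end}' \
| xargs -L1 kubectl delete service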
I would like to build an inventory of the GCP org policies that are SET on my organization's top-level folders, so I want to use a bash script in my Cloud Shell terminal.
The code below works, but I want to use the folder DisplayName and not the folder ID.
The DisplayName can be retrieved with the command gcloud beta resource-manager folders list --organization=XXXXX --format="value(ID, DISPLAY_NAME)", but it is then difficult to organize the result into one table.
#!/bin/bash
for folder in $(gcloud beta resource-manager folders list --organization=XXXXX --format="value(ID)")
do
echo $folder
gcloud beta resource-manager org-policies list --folder=$folder
done
I don't have access to either an organization or folders but the following should work.
The trick is to use --format=csv[no-heading] to produce more readily parseable values:
#!/bin/bash
ORGANIZATION="..."
PAIRS=$(\
gcloud beta resource-manager folders list \
--organization=${ORGANIZATION} \
--format="csv[no-heading](ID,displayName)")
for PAIR in ${PAIRS}
do
IFS=, read ID DISPLAY_NAME <<< ${PAIR}
echo ${DISPLAY_NAME}
gcloud beta resource-manager org-policies list \
--folder=${ID}
done
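Note that the for loop splits ${PAIRS} on whitespace, so a display name containing spaces would be split apart. A while-read variant (a sketch under the same assumptions) reads one line at a time and keeps such names intact:
#!/bin/bash
ORGANIZATION="..."
gcloud beta resource-manager folders list \
--organization=${ORGANIZATION} \
--format="csv[no-heading](ID,displayName)" \
| while IFS=, read -r ID DISPLAY_NAME
do
  echo "${DISPLAY_NAME}"
  gcloud beta resource-manager org-policies list \
  --folder=${ID}
done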
I would like to ask if there is any way to disable the build triggers for all TeamCity projects by running a script.
I have a scenario where I need to disable all the build triggers to prevent builds from running. This is because I sometimes need to perform an upgrade on the build agent machines, which can take more than one day.
I do not wish to manually click the Disable button for every build trigger in every TeamCity project. Is there a way to automate this process?
Thanks in advance.
Use the TeamCity REST API.
Assuming your TeamCity is deployed at http://dummyhost.com and you have enabled guest access with the system admin role (otherwise switch from guestAuth to httpAuth in the URL and specify a user and password in the request; details are in the documentation), you can do the following:
Get all build configurations
GET http://dummyhost.com/guestAuth/app/rest/buildTypes/
For each build configuration get all triggers
GET http://dummyhost.com/guestAuth/app/rest/buildTypes/id:***YOUR_BUILD_CONFIGID***/triggers/
For each trigger disable it
PUT http://dummyhost.com/guestAuth/app/rest/buildTypes/id:***YOUR_BUILD_CONFIGID***/triggers/***YOUR_TRIGGER_ID***/disabled
See full documentation here
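A rough bash sketch of those three steps, assuming guest auth is enabled and that grep -Po is enough to pull the IDs out of the XML responses (adjust the auth part to your setup):
#!/bin/bash
TEAMCITY=http://dummyhost.com
# 1. list all build configuration IDs
for BT in $(curl -s "$TEAMCITY/guestAuth/app/rest/buildTypes/" | grep -Po '(?<=buildType id=")[^"]*')
do
  # 2. list the trigger IDs of this build configuration
  for TR in $(curl -s "$TEAMCITY/guestAuth/app/rest/buildTypes/id:$BT/triggers/" | grep -Po '(?<=trigger id=")[^"]*')
  do
    # 3. disable the trigger
    curl -s --request PUT --header "Content-Type: text/plain" --data "true" \
      "$TEAMCITY/guestAuth/app/rest/buildTypes/id:$BT/triggers/$TR/disabled"
  done
done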
You can pause the build queue. See this video. This way you needn't touch the build configurations at all; you're just bringing the TC to a halt.
For agent-specific upgrades, it's best to disable only the agent you're working on. See here.
Neither of these is "by running a script" as you asked, but I take it you were only asking for a scripted solution to avoid a lot of GUI clicking.
Another solution might be to simply disable the agent, so no more builds will run.
Here is a bash script to bulk-pause all (not yet paused) build configurations matching a project name pattern via the TeamCity REST API:
TEAMCITY_HOST=http://teamcity.company.com
CREDS="-u domain\user:password"
curl $CREDS --request GET "$TEAMCITY_HOST/app/rest/buildTypes/" \
| sed -r "s/(<buildType)/\n\\1/g" | grep "Project Name Regex" \
| grep -v 'paused="true"' | grep -Po '(?<=buildType id=")[^"]*' \
| xargs -I {} curl -v $CREDS --request PUT "$TEAMCITY_HOST/app/rest/buildTypes/id:{}/paused" --header "Content-Type: text/plain" --data "true"
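To resume later, the inverse selection with --data "false" should unpause the same configurations (a sketch mirroring the script above):
curl $CREDS --request GET "$TEAMCITY_HOST/app/rest/buildTypes/" \
| sed -r "s/(<buildType)/\n\\1/g" | grep "Project Name Regex" \
| grep 'paused="true"' | grep -Po '(?<=buildType id=")[^"]*' \
| xargs -I {} curl -v $CREDS --request PUT "$TEAMCITY_HOST/app/rest/buildTypes/id:{}/paused" --header "Content-Type: text/plain" --data "false"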
I just found out about Google's new "gcloud beta logging" service.
The classic sample they show is something like this:
gcloud beta logging write my-test-log "A simple entry"
But I would like to log every new entry in a specific log file. For example:
tail -F My_Log_File.txt | grep gcloud beta logging write my-test-log
What is the best practice for this operation?
You can do this:
tail -F My_Log_File.txt | xargs gcloud beta logging write my-test-log
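If you want one log entry per line (xargs may bundle several lines into a single entry), a simple while-read loop is an alternative sketch, assuming the same log name:
tail -F My_Log_File.txt | while IFS= read -r LINE
do
  gcloud beta logging write my-test-log "$LINE"
done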
Or you can use a logging agent to watch certain files and log them to the logging service:
https://cloud.google.com/logging/docs/agent/installation
http://docs.fluentd.org/articles/in_tail
I am trying to launch an instance on Amazon EC2. I have researched this problem extensively, but I have not found any helpful information.
When I run the command hadoop-ec2 launch-cluster mycluster 2, I receive the following error message:
Starting master with AMI.
Required parameter 'AMI' missing (-h for usage)
I have entered my AWS key, AWS secret key, AWS key pairs, etc. I am using hadoop-1.0.4. I am using the default S3 bucket (hadoop-images), but I have tried many other AMIs and I always get the same error message.
Has anybody experienced this problem before?
The basic issue is that the search for images that the launch-hadoop-master script performs is not returning any results. The most likely cause is the different AMIs available in different regions (but it could also be due to any changes you've made to S3_BUCKET and HADOOP_VERSION in hadoop-ec2-env.sh).
From the launch-hadoop-master script:
# Finding Hadoop image
AMI_IMAGE=`ec2-describe-images -a | grep $S3_BUCKET |
            grep $HADOOP_VERSION |
            grep $ARCH |
            grep available |
            awk '{print $2}'`
# Start a master
echo "Starting master with AMI $AMI_IMAGE"
So it appears that AMI_IMAGE is not being set to a valid image, and thus the search for AMIs matching the various grep filters is failing (the defaults for the Hadoop 1.0.4 distribution are S3_BUCKET=hadoop-images, HADOOP_VERSION=0.19.0 and ARCH=x86 if you're using m1.small instances). If you search the public AMIs in the US-West-2 region, you'll see that there aren't many Hadoop images, but if you search the public AMIs in the US-East-1 region, you'll see that there are quite a few. Thus, one way around this issue is to work in the US-East-1 region (this is simplest) or, alternatively, set EC2_URL in your login script via export EC2_URL=https://ec2.us-east-1.amazonaws.com, but then you need to make sure you put your keys in this region from the AWS console.
If you did indeed change HADOOP_VERSION to 1.0.4, I'll note that
ec2-describe-images -a | grep hadoop-images |
    grep "1.0.4" |
    grep x86 |
    grep available
doesn't return any images in the US-East-1 region. Note that the version (HADOOP_VERSION) of the Hadoop distribution that you are running the hadoop-ec2 command from does not need to be the same as the version of Hadoop that the images will be running.
Lastly, as a blunt fix, you could find the AMI that you want to use, and force set AMI_IMAGE to the image name in the launch-hadoop-master and launch-hadoop-cluster scripts.
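For example (a sketch; ami-xxxxxxxx stands for whatever AMI ID you picked from the console):
# AMI_IMAGE=`ec2-describe-images -a | grep ...`  # original lookup
AMI_IMAGE=ami-xxxxxxxx                           # force the chosen AMI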