I am a little confused about how to use cURL commands to delete the unnecessary branches we have in our GitLab instance.
I want to delete the branches named "tobedeleted_*", i.e. any branch whose name starts with "tobedeleted".
One additional thing I want to check: I only want to delete branches whose created_at attribute (as returned by the API) is older than 1 month (30 days).
This way I can make sure I don't delete any newly created branch.
I have a set of commands that can be run manually, taking as input the project id and the branch name to be deleted, but I want to automate this with a script or some curl commands that I will schedule with Jenkins or GitLab's schedule feature.
Can you help me automate it?
Here are the details I have; what I'm mainly missing, I believe, is the if conditions that tie it all together.
I want to do this for all the projects in group number 6. To get all the project ids, I can use this curl command:
curl -s --location --request GET "$CI_API_V4_URL/groups/6/projects" --header "PRIVATE-TOKEN: <my_pvt_token>" | sed 's/,/\n/g' | grep -w "id" | awk -F ':' '{print $2}' | sed 's/{"id"//g'
To get all the branches that should be deleted for a project (this needs the project id as input; in the curl command below I am using project id 11):
curl -s --location --request GET "$CI_API_V4_URL/projects/11/repository/branches" --header "PRIVATE-TOKEN: <my_pvt_token>" | sed 's/,/\n/g' | grep -w "name" | awk -F ':' '{print $2}' | sed 's/"//g' | grep '^tobedeleted'
Once I have extracted the branch names for all the projects,
I need to feed each project id and branch name into the following cURL command, iterating over all of them:
curl --request DELETE --header "PRIVATE-TOKEN:<my_pvt_token>" "$CI_API_V4_URL/projects/11/repository/branches/tobedeleted_username_6819"
What I am very confused about is how to iterate through the projects while deleting the multiple matching branches each of them has.
Any help is really appreciated.
Also, this would have been a little easier if I were simply going to delete merged branches with the GitLab API, but I want to delete only the specifically named branches.
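For what it's worth, a rough sketch of how the pieces could fit together is below. It assumes jq and GNU date are available, that PRIVATE_TOKEN is a variable you set yourself (not a GitLab built-in), and that the branch's tip commit created_at is an acceptable stand-in for the branch's age, since branch objects themselves carry no created_at field:
#!/bin/bash
# Sketch only: PRIVATE_TOKEN is a placeholder you must provide.
cutoff=$(date -d '30 days ago' +%s)   # GNU date

project_ids=$(curl -s --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" \
  "$CI_API_V4_URL/groups/6/projects?per_page=100" | jq -r '.[].id')

for pid in $project_ids; do
  curl -s --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" \
    "$CI_API_V4_URL/projects/$pid/repository/branches?per_page=100" |
  jq -r '.[] | select(.name | startswith("tobedeleted")) | "\(.name) \(.commit.created_at)"' |
  while read -r name created; do
    # delete only when the tip commit is older than the 30-day cutoff
    if [ "$(date -d "$created" +%s)" -lt "$cutoff" ]; then
      # NB: branch names containing "/" would need URL-encoding here
      curl -s --request DELETE --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" \
        "$CI_API_V4_URL/projects/$pid/repository/branches/$name"
      echo "Deleted $name from project $pid"
    fi
  done
done
Note that both list endpoints are paginated (per_page caps at 100), so for larger groups you would also have to walk the pages.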
Ref:
https://docs.gitlab.com/ee/api/branches.html#delete-repository-branch
https://docs.gitlab.com/ee/api/branches.html#list-repository-branches
https://docs.gitlab.com/ee/api/projects.html#list-all-projects
I need a list of all repository tags for a remote Docker registry, along with their dates, i.e. the date each image tag was pushed.
Or at least the tags sorted according to when they were created (without the date, but in chronological order).
What I have now (from the Official Docker docs) is:
curl -XGET -u my_user:my_pass 'https://my_registry.com/v2/my_repo/tags/list'
But that seems to return the tags in random order. I need to further filter / postprocess the tags programmatically, so I need to fetch additional per-tag metadata.
Any idea how to do that?
My two cents:
order_tags_by_date() {
  if [ -z "$1" ] || [ -z "$2" ]; then
    echo "Get tag list of a component and order by date. Please enter a component name and a tag"
    echo "For example: tags my-app 20200606"
    return 1
  fi
  # get the whole tag list
  url=some_host
  result=$(curl -s "$url:5000/v2/$1/tags/list")
  # parse the response and get the "tags" array, sort by name, reverse, and emit as tab-separated values;
  # split on tabs and read into a bash array;
  # "reverse" gets the latest tag first; if you want the oldest tag first, remove it
  IFS=$'\t' read -r -a tags <<< "$(echo "$result" | jq -r '.tags | sort | reverse | @tsv')"
  # for each tag, get the manifest of the same component from the docker api and
  # parse the created field of the first history entry; I assume all history entries have the same created timestamp
  json="["
  for tag in "${tags[@]}"
  do
    host=another-docker-api-host
    date=$(curl -sk "$host/v2/$1/manifests/$tag" | jq -r ".history[0].v1Compatibility" | jq ".created")
    json+='{"tag":"'$tag'", "date":'$date"},"
  done;
  valid_json=${json::-1}']'
  echo "$valid_json" | jq 'sort_by(.date, .tag) | reverse'
}
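Usage follows the function's built-in help text (the second argument is only checked for presence):
order_tags_by_date my-app 20200606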
I constructed another JSON document so jq could sort on both the date and tag fields, because in my case some images have timestamp-based tags but a creation date that is always 19700101, while other images have correct dates but number-based tags; so I order by both. You can also remove "reverse" at the end to sort ascending.
If you don't want the date, append this to the last line of the script, after reverse:
| .[] | .tag
That extracts every element's tag value, and they are already sorted.
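Putting it together, the last line of the function would then read:
echo "$valid_json" | jq 'sort_by(.date, .tag) | reverse | .[] | .tag'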
Images created by jib, though, will have a date of 1970-01-01; this is by design.
In the past I wrote a script to migrate images from a local Docker registry to ECR; maybe you want to use some of its lines, like:
tags=$(curl -s https://$username:$password@$privreg/v2/$repo/tags/list?n=2048 | jq '.tags[]' | tr -d '"')
creationdate=$(curl -s https://$username:$password@$privreg/v2/$repo/manifests/$tag | jq -r '.history[].v1Compatibility' | jq '.created' | sort | tail -n1)
#!/bin/bash
read -p 'Username: ' username
read -sp 'Password: ' password
privreg="privreg.example.com"
awsreg="<account_id>.dkr.ecr.<region_code>.amazonaws.com"
repos=$(curl -s https://$username:$password@$privreg/v2/_catalog?n=2048 | jq '.repositories[]' | tr -d '"')
for repo in $repos; do
  tags=$(curl -s https://$username:$password@$privreg/v2/$repo/tags/list?n=2048 | jq '.tags[]' | tr -d '"')
  project=${repo%/*}
  service=${repo#*/}
  awsrepo=$(aws ecr describe-repositories | grep -o \"$repo\" | tr -d '"')
  if [ "$awsrepo" != "$repo" ]; then aws ecr create-repository --repository-name $repo; fi
  for tag in $tags; do
    creationdate=$(curl -s https://$username:$password@$privreg/v2/$repo/manifests/$tag | jq -r '.history[].v1Compatibility' | jq '.created' | sort | tail -n1)
    echo "$repo:$tag $creationdate" >> $project-$service.txt
  done
  sort -k2 $project-$service.txt | tail -n3 | cut -d " " -f1 > $project-$service-new.txt
  cat $project-$service-new.txt
  while read repoandtags; do
    sudo docker pull $privreg/$repoandtags
    sudo docker tag $privreg/$repoandtags $awsreg/$repoandtags
    sudo docker push $awsreg/$repoandtags
  done < $project-$service-new.txt
done
The script might not work as-is and may need some changes, but you can use some parts of it, so I will leave it in the post as an example.
The tag list should be returned using lexical ordering according to the OCI spec:
If the list is not empty, the tags MUST be in lexical order (i.e. case-insensitive alphanumeric order).
There's no API presently available in OCI to query when the image was pushed. However, images do include a configuration that you can query and parse to extract the image creation time as reported by the build tooling. This date may be a lie: I've worked on tooling that backdates it for the purposes of reproducibility (to some well-known date like 1970-01-01, or to the datestamp of the Git commit being built), some manifests pushed may have an empty config (especially when it's not actually an image), and a manifest list will have multiple manifests, each with a different config.
So, with the disclaimer that this is very error prone, you can fetch that config blob and extract the creation date. I prefer to use tooling beyond curl for this, to handle:
different types of authentication, or anonymous logins, that registries require
different types of image manifests, of which I'm aware of at least 6: OCI and Docker variants for images and for manifest lists, plus schema v1 manifests with and without a signature
selecting a platform from multi-platform manifests and parsing the json of the config blob
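That said, if you want to stay with bare curl, a rough sketch for a single-platform image on an anonymous registry could look like the following (it assumes a Docker schema 2 manifest and jq; registry, repo, and tag are placeholders):
registry=https://my_registry.com
repo=my_repo
tag=latest
# ask for a schema 2 manifest so that .config.digest is present
digest=$(curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "$registry/v2/$repo/manifests/$tag" | jq -r '.config.digest')
# the config blob carries the creation timestamp written by the build tooling
curl -sL "$registry/v2/$repo/blobs/$digest" | jq -r '.created'
All the caveats above still apply: the date can be missing, backdated, or differ per platform.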
The tooling I work on for this is regclient, but there's also crane and skopeo. With regclient/regctl, this looks like:
repo=registry.example.com/your/repo
regctl tag ls "$repo" | while read tag; do
regctl image config "$repo:$tag" --format "{{ printf \"%s: $tag\\n\" .Created }}"
done
I am trying to create a simple bash script to list all the repos in our organization on GitHub. It seems like it shouldn't be that hard, so I am sure I am missing something obvious. Various examples are around, but they don't really work. I have tried
curl --user "$USER:$PASS" https://api.github.com/orgs/$ORG/repos
where USER and PASS are a valid username and password, and ORG is readium (https://github.com/readium).
But this doesn't work. I just get "message" : "Not Found"
I've tried many variations on this, following various threads here and elsewhere, but no luck. Suggestions?
You can scrape the repositories tab of the organisation's HTML page instead of the API:
curl --user "$USER:$PASS" https://github.com/$ORG?tab=repositories |
grep -w 'itemprop="name codeRepository"' |
cut -d/ -f3 | cut -d'"' -f1
Ex:
curl -u username:password https://github.com/username?tab=repositories |
grep -w 'itemprop="name codeRepository"' |
cut -d/ -f3 | cut -d'"' -f1
Aside from the way mentioned above, the more secure way would be to generate a token with appropriate permissions.
You can generate a token under Settings -> Developer settings -> Personal access tokens, with access to the corresponding organisation's repos. Make sure that you choose the shortest expiry and the least permissions needed.
Once you have access to the token you can use the following command
curl -H "Authorization: token <your-token-here>" https://api.github.com/orgs/<organization-name-here>/repos
I am attempting to call an API for a series of IDs and then, in a bash script, use curl to query a machine for some information, scrubbing the data for only a select few things before it is output.
#!/bin/bash
url="http://<myserver:myport>/ws/v1/history/mapreduce/jobs"
for a in $(cat jobs.txt); do
content="$(curl "$url/$a/counters" "| grep -oP '(FILE_BYTES_READ[^:]+:\d+)|FILE_BYTES_WRITTEN[^:]+:\d+|GC_TIME_MILLIS[^:]+:\d+|CPU_MILLISECONDS[^:]+:\d+|PHYSICAL_MEMORY_BYTES[^:]+:\d+|COMMITTED_HEAP_BYTES[^:]+:\d+'" )"
echo "$content" >> output.txt
done
This is for a MapR project I am currently working on to peel some fields out of the API.
In the example above, I only care about 6 fields, though the output that comes from the curl command gives me about 30 fields and their values, many of which are irrelevant.
If I use the curl command in a standard prompt, I get the fields I am looking for, but when I add it to the script I get nothing.
The closing double quote is in the wrong place: it must come right after $url/$a/counters, so that the pipe to grep sits outside curl's argument list. Like the following:
content="$(curl "$url/$a/counters" | grep -oP '(FILE_BYTES_READ[^:]+:\d+)|FILE_BYTES_WRITTEN[^:]+:\d+|GC_TIME_MILLIS[^:]+:\d+|CPU_MILLISECONDS[^:]+:\d+|PHYSICAL_MEMORY_BYTES[^:]+:\d+|COMMITTED_HEAP_BYTES[^:]+:\d+')"
I have the following requirement: fetch all the commits from our SVN from the last two years, and list the titles of all the JIRA issues that had code committed. Our commit rules are pretty strict, so a commit must start with the JIRA code, like: COR-3123 Fixed the bug, introduced a new one
So, I wrote the following shell script to get this working:
svn log -r{2012-04-01}:{2014-04-01} | grep "COR-" | cut -f1 -d" " | sort -u
This gets me all the JIRA codes.
But now I want to use these in the following command:
wget --quiet --load-cookies cookies.txt -O - http://jira.example.com/browse/{HERE} | sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
I.e.: get the JIRA page via wget and parse out the title... (I have already cached my login credentials for wget in cookies.txt)
and obviously at the location {HERE} I want to insert the code obtained from the first list. Doing this via a two-step script (step 1: get the list, step 2: iterate over the list) in Python, Perl, ... is not a problem, but I'd like to know if it's possible to do it in ONE step, using bash :)
(Yes, I know there is a JIRA REST API.)
You can use xargs to pass the parameter to wget:
xargs -I {} wget http://jira.example.com/browse/{}
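Chaining both steps, the whole thing can be done in one pipeline along these lines (a sketch; the sed title extraction is the one from the question):
svn log -r{2012-04-01}:{2014-04-01} | grep "COR-" | cut -f1 -d" " | sort -u |
xargs -I {} sh -c 'wget --quiet --load-cookies cookies.txt -O - "http://jira.example.com/browse/{}" | sed -n -e "s!.*<title>\(.*\)</title>.*!\1!p"'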
Given that I'm using the svn command line on Windows, how do I find the revision number in which a file was deleted? On Windows there is no fancy stuff like grep, and I am attempting to use the command line only, without TortoiseSVN. Thanks in advance!
EDIT:
I saw a few posts, like examining history of deleted file, but they did not answer my question.
Is there any way other than svn log -v url > log.out and searching with Notepad?
Install Cygwin.
I use this:
svn log -v --limit <nr> | grep -E '<fileName>|^r' | grep -B 1 <fileName>
where
fileName - the name of the file, or any pattern that matches it
nr - the number of latest revisions I want to look in
This will give you the revisions for all the actions (add, delete, modify, replace) concerning the file, but with a simple grep tweak you can get the revisions for deletions only, as sketched below.
(Obviously, --limit is optional; however, you usually have an idea of how deep you need to search, which gains you some performance.)
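For example (a sketch; svn log -v prefixes deleted paths with three spaces and a D):
svn log -v --limit <nr> | grep -E '^   D .*<fileName>|^r[0-9]+' | grep -B 1 '^   D'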
The log is the place to look. I know you don't want to hear that answer, but that is where you need to look for deleted files in SVN.
The reason for this is simply that a deleted file is not visible after it's been deleted. The only place to find out about its existence at all is either in the logs, or by fetching out an earlier revision prior to it being deleted.
The easiest way I know of to deal with this problem is to move away from the command line and use a GUI tool such as TortoiseSVN.
TortoiseSVN links itself into the standard Windows file Explorer, so it's very easy to use. In the context of answering this question, you would still use it to look at the logs, but it becomes a much quicker exercise:
Browse to the SVN folder you want to examine. Then right-click on the folder icon and select TortoiseSVN -> View Logs from the context menu.
You'll now get a window showing all the revisions made in that folder. In particular, it is easy to see which revisions have had additions and deletions, because the list includes a set of Action icons for each revision. You can double-click on a revision to get a list of files that were changed (or go straight into a diff view if only one file was changed).
So you can easily see which revisions have had deletions, and you can quickly click them to find out which files were involved. It really is that easy.
I know you're asking about the command-line, but for administrative tasks like this, a GUI browser really does make sense. It makes it much quicker to see what's happening compared with trying to read through pages of cryptic text (no matter how well versed you are at reading that text).
This question was posted and answered some time ago.
In this answer, I'll try to show a flexible way to get the information asked for, and to extend it.
In Cygwin, use svn log in combination with awk:
REPO_URL=https://<hostname>/path/to/repo
FILENAME=/path/to/file
svn log ${REPO_URL} -v --search "${FILENAME}" | \
awk -v var="^ [D] ${FILENAME}$" \
'/^r[0-9]+/{rev=$1}; \
$0 ~ var {print rev $0}'
svn log ${REPO_URL} -v --search "${FILENAME}" asks svn log for a verbose log containing ${FILENAME}. This reduces the data transfer.
The result is piped to awk. awk gets ${FILENAME} passed via -v into the variable var, together with the search pattern: var="^ [D] ${FILENAME}$"
In the awk program, /^r[0-9]+/ {rev=$1} assigns the revision number to rev whenever a line matches /^r[0-9]+/.
For every line matching ^ [D] ${FILENAME}$, awk prints the stored revision number rev followed by the line: $0 ~ var {print rev $0}
If you're interested not only in the deletion of the file but also in its creation, modification, and replacement, change the D in var="^ [D] ${FILENAME}$" to DAMR.
The following will give you all the changes:
svn log ${REPO_URL} -v --search "${FILENAME}" | \
awk -v var="^ [DAMR] ${FILENAME}$" \
'/^r[0-9]+/ {rev=$1}; \
$0 ~ var {print rev $0}'
And if you're interested in username, date and time:
svn log ${REPO_URL} -v --search "${FILENAME}" | \
awk -v var="^ [DAMR] ${FILENAME}$" \
'/^r[0-9]+/ {rev=$1;user=$3;date=$5;time=$6}; \
$0 ~ var {print rev " | " user " | " date " | " time " | " $0}'