I am trying to create a simple bash script to list all the repos in our organization on GitHub. It seems like it shouldn't be that hard, so I am sure I am missing something obvious. Various examples are around, but they don't really work. I have tried
curl --user "$USER:$PASS" https://api.github.com/orgs/$ORG/repos
where USER and PASS are a valid username and password and ORG is https://github.com/readium.
But this doesn't work; I just get "message" : "Not Found".
I've tried many variations on this, following various threads here and elsewhere, but no luck. Suggestions?
This scrapes the repository list from the organization's HTML page rather than the API, so it can break whenever GitHub changes its markup:
curl --user "$USER:$PASS" "https://github.com/$ORG" |
grep -w 'itemprop="name codeRepository"' |
cut -d/ -f3 | cut -d'"' -f1
For example, for a user's repositories:
curl -u username:password 'https://github.com/username?tab=repositories' |
grep -w 'itemprop="name codeRepository"' |
cut -d/ -f3 | cut -d'"' -f1
Aside from the method mentioned above, a more secure way is to generate a token with the appropriate permissions.
You can generate one under Settings -> Developer settings -> Personal access tokens, scoped to the corresponding organisation. Make sure that you choose the shortest expiry and the fewest permissions that will do.
Once you have the token, you can use the following command:
curl -H "Authorization: token <your-token-here>" https://api.github.com/orgs/<organization-name-here>/repos
I am trying to get the meta title of some websites.
Some people write the title with the closing tag on the next line:
<title>AllHeart Web INC, IT Services Digital Solutions Technology
</title>
some on a single line:
<title>AllHeart Web INC, IT Services Digital Solutions Technology</title>
and some with the text on a line of its own:
<title>
AllHeart Web INC, IT Services Digital Solutions Technology
</title>
There are more variations out there; my current focus is on the three above.
I wrote a simple command, but it only captures the second variant, and I am not sure how to grep the other ways:
curl -s https://allheartweb.com/ | grep -o '<title>.*</title>'
I also tried (rather badly, I guess) grepping the line numbers, like this:
% curl -s https://allheartweb.com/ | grep -n '<title>'
7:<title>AllHeart Web INC, IT Services Digital Solutions Technology
% curl -s https://allheartweb.com/ | grep -n '</title>'
8:</title>
and storing those to drive a loop that extracts the title, which I guess is a bad idea...
Is there any way I can get the title in all these cases?
Try this:
curl -s https://allheartweb.com/ | tr -d '\n' | grep -m 1 -oP '(?<=<title>).+?(?=</title>)'
You can remove the newlines from the HTML via tr because they have no meaning in the title. The next step returns the first match of the shortest string enclosed in <title> </title>.
This is quite a simple approach, of course. xmllint would be better, but it's not available on all platforms by default.
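For reference, a sketch of the xmllint variant, assuming libxml2's xmllint is installed (--html makes the parser tolerant of real-world HTML; stderr is silenced because the parser warns about markup it doesn't like):
curl -s https://allheartweb.com/ | xmllint --html --xpath '//title/text()' - 2>/dev/null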
grep is not a very good tool for matching multiple lines; it processes input line by line. You can hack around that by making the incoming text one line, like:
curl -s https://allheartweb.com/ | xargs | grep -o -E "<title>.*</title>"
This is probably what you want. (One caveat: xargs interprets quote characters, so it can fail on pages containing unbalanced quotes.)
Try this sed:
curl -s https://allheartweb.com/ | sed -n "{/<title>/,/<\/title>/p}"
I need a list of all repository tags for a remote Docker registry, along with their dates, i.e. the date each image tag was pushed.
Or at least the tags sorted according to when they were created (without the date, but in chronological order).
What I have now (from the Official Docker docs) is:
curl -XGET -u my_user:my_pass 'https://my_registry.com/v2/my_repo/tags/list'
But that seems to return the tags in random order. I need to further filter / postprocess the tags programmatically, so I need to fetch additional per-tag metadata.
Any idea how to do that?
My two cents:
order_tags_by_date() {
  if [ -z "$1" ] || [ -z "$2" ]; then
    echo "Get the tag list of a component, ordered by date. Please enter a component name and a tag."
    echo "For example: tags my-app 20200606"
    return 1
  fi
  # get the full tag list
  url=some_host
  result=$(curl -s "$url:5000/v2/$1/tags/list")
  # parse the response and get the "tags" array, sort by name, reverse, and emit as tab-separated values,
  # then read them into a bash array; "reverse" puts the latest tag first (remove it if you want the oldest first)
  IFS=$'\t' read -r -a tags <<< "$(echo "$result" | jq -r '.tags | sort | reverse | @tsv')"
  # for each tag, fetch the manifest of the same component from the Docker API and parse the
  # created field of the first history entry; I assume all history entries share the same created timestamp
  json="["
  for tag in "${tags[@]}"
  do
    host=another-docker-api-host
    date=$(curl -sk "$host/v2/$1/manifests/$tag" | jq -r ".history[0].v1Compatibility" | jq ".created")
    json+='{"tag":"'$tag'", "date":'$date"},"
  done
  valid_json=${json::-1}']'
  echo "$valid_json" | jq 'sort_by(.date, .tag) | reverse'
}
I constructed another JSON document so that jq can sort on both the date and tag fields: in my case some images have timestamp-based tags but a creation date that is always 1970-01-01, while other images have the correct date but number-based tags, so I order by both. You can also remove "reverse" at the end to sort ascending.
If you don't want the date, add this to the last line of the script, after reverse:
| .[] | .tag
It will then print each element's tag value, already sorted.
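That is, the last line of the function becomes:
echo "$valid_json" | jq 'sort_by(.date, .tag) | reverse | .[] | .tag'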
Note that jib-built images will have a date of 1970-01-01; this is by design.
In the past, I wrote a script to migrate images from a local Docker registry to ECR; maybe you want to reuse some of its lines, like:
tags=$(curl -s https://$username:$password@$privreg/v2/$repo/tags/list?n=2048 | jq '.tags[]' | tr -d '"')
creationdate=$(curl -s https://$username:$password@$privreg/v2/$repo/manifests/$tag | jq -r '.history[].v1Compatibility' | jq '.created' | sort | tail -n1)
#!/bin/bash
read -p 'Username: ' username
read -sp 'Password: ' password
privreg="privreg.example.com"
awsreg="<account_id>.dkr.ecr.<region_code>.amazonaws.com"
repos=$(curl -s https://$username:$password@$privreg/v2/_catalog?n=2048 | jq '.repositories[]' | tr -d '"')
for repo in $repos; do
  tags=$(curl -s https://$username:$password@$privreg/v2/$repo/tags/list?n=2048 | jq '.tags[]' | tr -d '"')
  project=${repo%/*}
  service=${repo#*/}
  awsrepo=$(aws ecr describe-repositories | grep -o \"$repo\" | tr -d '"')
  if [ "$awsrepo" != "$repo" ]; then aws ecr create-repository --repository-name $repo; fi
  for tag in $tags; do
    creationdate=$(curl -s https://$username:$password@$privreg/v2/$repo/manifests/$tag | jq -r '.history[].v1Compatibility' | jq '.created' | sort | tail -n1)
    echo "$repo:$tag $creationdate" >> $project-$service.txt
  done
  sort -k2 $project-$service.txt | tail -n3 | cut -d " " -f1 > $project-$service-new.txt
  cat $project-$service-new.txt
  while read -r repoandtags; do
    sudo docker pull $privreg/$repoandtags
    sudo docker tag $privreg/$repoandtags $awsreg/$repoandtags
    sudo docker push $awsreg/$repoandtags
  done < $project-$service-new.txt
done
The script might not work as-is and may need some changes, but you can reuse parts of it, so I will leave it in the post as an example.
The tag list should be returned using lexical ordering according to the OCI spec:
If the list is not empty, the tags MUST be in lexical order (i.e. case-insensitive alphanumeric order).
There's no API presently available in OCI to query when the image was pushed. However, images do include a configuration that you can query and parse to extract the image creation time as reported by the build tooling. This date may be a lie: I've worked on tooling that backdates it for the purposes of reproducibility (to some well-known date like 1970-01-01, or the datestamp of the Git commit being built), some manifests pushed may have an empty config (especially when the artifact is not actually an image), and a manifest list will have multiple manifests, each with a different config.
So, with the disclaimer that this is very error-prone, you can fetch that config blob and extract the creation date. I prefer to use tooling beyond curl for this, to handle:
the different types of authentication, or anonymous logins, that registries require
the different types of image manifests, of which I'm aware of at least 6: OCI and Docker flavors of images and manifest lists, plus schema v1 manifests with and without a signature
selecting a platform from multi-platform manifests and parsing the JSON of the config blob
The tooling I work on for this is regclient, but there's also crane and skopeo. With regclient/regctl, this looks like:
repo=registry.example.com/your/repo
regctl tag ls "$repo" | while read tag; do
regctl image config "$repo:$tag" --format "{{ printf \"%s: $tag\\n\" .Created }}"
done
I am a little confused about how to use cURL commands to delete unnecessary branches in our GitLab instance.
I want to delete the branches named "tobedeleted_*", i.e. any branch whose name starts with "tobedeleted".
One additional thing I want to check via the API is the created_at attribute: I only want to delete branches that are more than 1 month (30 days) old.
This way, I want to make sure that I don't delete any newly created branch.
I have a set of commands which can be executed manually to do this, given a project id and the name of the branch to be deleted, but I want to automate it in a script or with some curl commands, which I will schedule with Jenkins or GitLab's schedule feature.
Can you help me automate it?
Here are the details I have; what I am mostly missing, I believe, is the if conditions that would tie it together.
I want to do this for all the projects in group number 6. To get all the project ids, I can use this curl command:
curl -s --location --request GET "$CI_API_V4_URL/groups/6/projects" --header 'PRIVATE-TOKEN: <my_pvt_token>' | sed 's/,/\n/g' | grep -w "id" | awk -F ':' '{print $2}' | sed 's/{"id"//g'
To get all the branches to be deleted in a project, given its project id (in the curl command below I use project id 11):
curl -s --location --request GET "$CI_API_V4_URL/projects/11/repository/branches" --header 'PRIVATE-TOKEN: <my_pvt_token>' | sed 's/,/\n/g' | grep -w "name" | awk -F ':' '{print $2}' | sed 's/"//g' | grep '^tobedeleted'
Once I have extracted the branch names for all the projects,
I need to feed each project id and branch name into the following cURL command:
curl --request DELETE --header "PRIVATE-TOKEN: <my_pvt_token>" "$CI_API_V4_URL/projects/11/repository/branches/tobedeleted_username_6819"
I am very confused about how to iterate through the projects, deleting the multiple matching branches each project has.
Any help is really appreciated.
Also, this would be a little easier if I were simply deleting merged branches with the GitLab API, but I want to delete only the specifically named branches.
Ref:
https://docs.gitlab.com/ee/api/branches.html#delete-repository-branch
https://docs.gitlab.com/ee/api/branches.html#list-repository-branches
https://docs.gitlab.com/ee/api/projects.html#list-all-projects
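For what it's worth, here is a rough sketch of how the three calls could be tied together with jq instead of the sed/grep pipelines. It is hypothetical and untested: it assumes jq and GNU date are available, ignores pagination, and uses the branch's last-commit created_at as the age check, since the branches API does not expose a branch creation date:
#!/bin/bash
cutoff=$(date -d '30 days ago' +%s)
# every project id in group 6
for id in $(curl -s --header "PRIVATE-TOKEN: <my_pvt_token>" "$CI_API_V4_URL/groups/6/projects" | jq -r '.[].id'); do
  # branch name + last commit date for branches starting with "tobedeleted"
  curl -s --header "PRIVATE-TOKEN: <my_pvt_token>" "$CI_API_V4_URL/projects/$id/repository/branches" |
    jq -r '.[] | select(.name | startswith("tobedeleted")) | "\(.name) \(.commit.created_at)"' |
    while read -r branch created; do
      # delete only if the last commit is older than the cutoff
      if [ "$(date -d "$created" +%s)" -lt "$cutoff" ]; then
        curl -s --request DELETE --header "PRIVATE-TOKEN: <my_pvt_token>" "$CI_API_V4_URL/projects/$id/repository/branches/$branch"
      fi
    done
done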
I am wrestling with something I expected to be simple....
I want to look up a user's manager in LDAP, then get the manager's email and SAM account name.
I expected to be able to get the CN for the manager from LDAP like this:
manager=$(/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads sAMAccountName=employee1 | grep "manager:" | awk '{gsub("manager: ", "");print}' | awk 'BEGIN {FS=","}; {print $1, $2 }' )
That gives me the CN like this:
CN=manager,\ Surname
Now when I run another query like this:
/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads $manager
I get bad search filter (-7). If I echo the command, then copy, paste and run it, I get the record back....
I've tried a number of variations on this; can anyone see what I'm missing?
Thanks.
Since there's a space in $manager, you need to quote it to prevent it from being split into multiple arguments.
/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads "$manager"
In general, it's best to always quote your variables, unless you specifically want them to be split into words.
You also need to remove the backslash \ from the LDAP entry. Backslashes are for escaping literal spaces when you type a command; they shouldn't be in the data itself, because they're not processed when a variable is expanded.
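A quick sketch of that cleanup using bash parameter expansion (this strips every backslash from the value before reusing it):
manager=${manager//\\/}
/usr/bin/ldapsearch -LLL -H ldap://company.ads -x -D admin@company.ads -w password -b ou=employees,dc=company,dc=ads "$manager"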
I need to get the public key ID from GPG. Everything on SO discusses importing, exporting, etc. In the example below, how do I get the ABCD1234 value for use in a bash script?
$ gpg --list-keys
/Users/jblow/.gnupg/pubring.gpg
---------------------------------
pub 2048R/ABCD1234 2016-09-20
uid [ultimate] Joe Blow <joe.blow@nowhere.com>
sub 2048R/JDKKHU76 2016-09-20
I was facing the same requirement today (extracting the key ID in order to use it together with duplicity in a bash script).
In the man page of gpg I read:
For scripted or other unattended use of gpg make sure to use the machine-parseable interface and not the default interface which is intended for direct use by humans. The machine-parseable interface provides a stable and well documented API independent of the locale or future changes of gpg. To enable this interface use the options --with-colons and --status-fd. For certain operations the option --command-fd may come handy too. See this man page and the file `DETAILS' for the specification of the interface.
I managed to extract the key ID I wanted by doing:
gpg --list-signatures --with-colons | grep 'sig' | grep 'the Name-Real of my key' | head -n 1 | cut -d':' -f5
PS: https://devhints.io/gnupg helped ;)
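Along the same machine-parseable lines, here is a shorter sketch for public keys: in --with-colons output, the fifth colon-separated field of a pub record is the key ID.
gpg --list-keys --with-colons | awk -F: '/^pub/ {print $5}'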
An awk + bash string-manipulation way of doing it:
# 'awk' extracts the '2048R/ABCD1234' part of the output, and
# string manipulation strips everything up to and including the trailing '/' character
$ keyVal=$(gpg --list-keys | awk '/pub/{if (length($2) > 0) print $2}'); echo "${keyVal##*/}"
ABCD1234
If you want to extract the key for the sub, just change the pattern in awk:
$ keyVal=$(gpg --list-keys | awk '/sub/{if (length($2) > 0) print $2}'); echo "${keyVal##*/}"
JDKKHU76
You can use grep:
gpg --list-keys | grep pub | grep -o -P '(?<=/)[A-Z0-9]{8}'
(This relies on the pub 2048R/ABCD1234 listing format; gpg 2.1+ no longer prints the short key ID by default, so there you would need gpg --keyid-format short --list-keys.)
Output:
ABCD1234