How to get GPG public key in bash? - bash

I need to get the public key ID from GPG. Everything on SO discusses importing, exporting, etc. In the example below, I need to get the ABCD1234 value for use in a bash script.
$ gpg --list-keys
/Users/jblow/.gnupg/pubring.gpg
---------------------------------
pub 2048R/ABCD1234 2016-09-20
uid [ultimate] Joe Blow <joe.blow#nowhere.com>
sub 2048R/JDKKHU76 2016-09-20

I was facing the same requirement today (extracting the key ID in order to use it together with duplicity in a bash script).
In the man page of gpg I read:
For scripted or other unattended use of gpg make sure to use the machine-parseable interface and not the default interface which is intended for direct use by humans. The machine-parseable interface provides a stable and well documented API independent of the locale or future changes of gpg. To enable this interface use the options --with-colons and --status-fd. For certain operations the option --command-fd may come handy too. See this man page and the file `DETAILS' for the specification of the interface.
I managed to extract the key ID I wanted by doing:
gpg --list-signatures --with-colons | grep 'sig' | grep 'the Name-Real of my key' | head -n 1 | cut -d':' -f5
PS: https://devhints.io/gnupg helped ;)
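For completeness, here is a small sketch of pulling the key ID out of the colon-delimited format. The sample line is made-up stand-in data for one `pub` record of `gpg --list-keys --with-colons` output, so you can see which field holds the ID without running gpg:

```shell
# Hypothetical sample of one 'pub' line from `gpg --list-keys --with-colons`;
# in a pub record, field 5 (colon-separated) is the key ID.
sample='pub:u:2048:1:0000000000ABCD1234:2016-09-20::u:::scESC:'
key_id=$(printf '%s\n' "$sample" | awk -F: '/^pub/ {print $5}')
echo "$key_id"
```

Against a real keyring you would replace the `printf` with `gpg --list-keys --with-colons`.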

An awk + bash string manipulation way of doing it.
# 'awk' extracts the '2048R/ABCD1234' part of the command and
# string-manipulation done to strip-off characters up to the trailing '/' character
$ keyVal=$(gpg --list-keys | awk '/pub/{if (length($2) > 0) print $2}'); echo "${keyVal##*/}"
ABCD1234
If you want to extract the key for the sub just change the pattern in awk to
$ keyVal=$(gpg --list-keys | awk '/sub/{if (length($2) > 0) print $2}'); echo "${keyVal##*/}"
JDKKHU76

You can use grep:
gpg --list-keys | grep pub | grep -o -P '(?<=/)[A-Z0-9]{8}'
Output:
ABCD1234

Related

Bash grep, awk or sed to reverse find

I am creating a script to look for commonly used patterns in a password. Although I have security policies in the hosting panel, the servers are outdated due to incompatibilities.
For example, I put the word test into the file words.txt, then execute grep -c test123 words.txt. I need that search to find the word, but I don't think plain grep will work for me here.
Script:
EMAILPASS=$(/root/info.sh -c usera | grep '#')
for PAR in ${EMAILPASS} ; do
    EMAIL=$(echo "${PAR}" | grep '#' | cut -f1 -d:)
    PASS=$(echo "${PAR}" | cut -d: -f2)
    PASS="${PASS,,}"
    FINDSTRING=$(grep -ic "${PASS}" /root/words.txt)
    echo ""
    echo "Validating password ${EMAIL}"
    echo ""
    if [ "$FINDSTRING" -ge 1 ] ; then
        echo "Insecure"
    else
        echo "Secure"
    fi
done
The current output of the command is as follows:
# grep -c test123 /root/words.txt
0
I think grep is not good for what I need, maybe someone can help me.
I could also use awk or sed but I can't find an option to help me.
Regards
Reverse your application.
echo test123 | grep -f words.txt
Each line of the text file will be used as a pattern to test against the input.
edit
Apparently you actually do want to see if the whole password is an actual word, rather than just checking to see if it's based on a dictionary word. That's considerably less secure, but easy enough to do. The logic you have will not report test123 as insecure unless the whole password is an exact match for a word in the dictionary.
You said you were putting test in the dictionary and using test123 as the password, so I assumed you were looking for passwords based on dictionary words, which was the structure I suggested above. Will include as commented alternate lines below.
Also, since you're doing a case insensitive search, why bother to downcase the password?
declare -l pass # set as always lowercase
would do it, but there's no need.
Likewise, unless you are using it again later, it isn't necessary to put everything into a variable first, such as the grep results. Try to remove anything not needed -- less is more.
Finally, since we aren't catching the grep output in a variable and testing that, I threw it away with -q. All we need to see is whether it found anything, and the return code, checked by the if, tells us that.
/root/info.sh -c usera | grep '#' | # only lines with at signs
while IFS="$IFS:" read email pass # parse on read with IFS
do printf "\n%s\n\n" "Validating password for '$email'"
if grep -qi "$pass" /root/words.txt # exact search (-q = quiet)
#if grep -qif /root/words.txt <<< "$pass" # 'based on' search
then echo "Insecure"
else echo "Secure" # well....
fi
done
I think a better paradigm might be to just report the problematic ones and be silent for those that seem ok, but that's up to you.
Questions?
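To make the difference between the two searches concrete, here is a self-contained sketch; the word list and the passwords are made-up test data:

```shell
# Made-up dictionary and passwords for demonstration only.
words=$(mktemp)
printf 'test\npassword\n' > "$words"

check() {
    # 'based on' search: insecure if any dictionary word appears in the password
    if grep -qif "$words" <<< "$1"; then echo "Insecure"; else echo "Secure"; fi
}

r1=$(check "test123")   # contains the dictionary word "test"
r2=$(check "Zq9-kLm")   # matches nothing in the list
echo "$r1 / $r2"
rm -f "$words"
```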

Get Macbook screen size from terminal/bash

Does anyone know of any possible way to determine or glean this information from the terminal (in order to use in a bash shell script)?
On my Macbook Air, via the GUI I can go to "About this mac" > "Displays" and it tells me:
Built-in Display, 13-inch (1440 x 900)
I can get the screen resolution from the system_profiler command, but not the "13-inch" bit.
I've also tried with ioreg without success. Calculating the screen size from the resolution is not accurate, as this can be changed by the user.
Has anyone managed to achieve this?
I think you could only get the display model-name which holds a reference to the size:
ioreg -lw0 | grep "IODisplayEDID" | sed "/[^<]*</s///" | xxd -p -r | strings -6 | grep '^LSN\|^LP'
will output something like:
LP154WT1-SJE1
which depends on the display manufacturer. But as you can see the first three numbers in this model name string imply the display-size: 154 == 15.4''
EDIT
Found a neat solution but it requires an internet connection:
curl -s http://support-sp.apple.com/sp/product?cc=`system_profiler SPHardwareDataType | awk '/Serial/ {print $4}' | cut -c 9-` |
sed 's|.*<configCode>\(.*\)</configCode>.*|\1|'
hope that helps
The next script:
model=$(system_profiler SPHardwareDataType | \
/usr/bin/perl -MLWP::Simple -MXML::Simple -lane '$c=substr($F[3],8)if/Serial/}{
print XMLin(get(q{http://support-sp.apple.com/sp/product?cc=}.$c))->{configCode}')
echo "$model"
will print for example:
MacBook Pro (13-inch, Mid 2010)
Or the same without perl but more command forking:
model=$(curl -s http://support-sp.apple.com/sp/product?cc=$(system_profiler SPHardwareDataType | sed -n '/Serial/s/.*: \(........\)\(.*\)$/\2/p')|sed 's:.*<configCode>\(.*\)</configCode>.*:\1:')
echo "$model"
It is fetched online from apple site by serial number, so you need internet connection.
I've found that there seem to be several different Apple URLs for checking this info. Some of them seem to work for some serial numbers, and others for other machines.
e.g:
https://selfsolve.apple.com/wcResults.do?sn=$Serial&Continue=Continue&num=0
https://selfsolve.apple.com/RegisterProduct.do?productRegister=Y&country=USA&id=$Serial
http://support-sp.apple.com/sp/product?cc=$serial (last 4 digits)
https://selfsolve.apple.com/agreementWarrantyDynamic.do
However, the first two URLs are the ones that seem to work for me. Maybe it's because the machines I'm looking up are in the UK and not the US, or maybe it's due to their age?
Anyway, due to not having much luck with curl on the command line (The Apple sites redirect, sometimes several times to alternative URLs, and the -L option doesn't seem to help), my solution was to bosh together a (rather messy) PHP script that uses PHP cURL to check the serials against both URLs, and then does some regex trickery to report the info I need.
Once on my web server, I can now curl it from the terminal command line and it's bringing back decent results 100% of the time.
I'm a PHP novice so I won't embarrass myself by posting the script up in its current state, but if anyone's interested I'd be happy to tidy it up and share it on here (though admittedly it's a rather long-winded solution to what should be a very simple query).
This info really should be simply made available in system_profiler. As it's available through System Information.app, I can't see a reason why not.
Hi there. For my bash script under GNU/Linux, I do the following to save the resolution:
# Resolution Fix
WIDTH=$(xrandr --current | awk '/current/ {print $8}')
HEIGHT=$(xrandr --current | awk '/current/ {print $10}' | sed 's/,//')
Resolution="${WIDTH}x${HEIGHT}"
# Resolution Fix
and the following, in the same script, to restore the resolution after exiting some app/game on certain OSes.
This executes the command directly:
ResolutionRestore=$(xrandr -s $Resolution)
But if it doesn't execute, invoke the variable like this to run its content:
$($ResolutionRestore)
Another way you can try is the following, for example:
RESOLUTION=$(xdpyinfo | grep -i dimensions: | sed 's/[^0-9]*pixels.*(.*).*//' | sed 's/[^0-9x]*//')
VRES=$(echo $RESOLUTION | sed 's/.*x//')
HRES=$(echo $RESOLUTION | sed 's/x.*//')
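Here is the same parse exercised on a canned xdpyinfo-style line, so no X server is needed; the dimensions line is made-up sample data:

```shell
# Made-up sample of the line xdpyinfo prints for a 1440x900 display.
dim='  dimensions:    1440x900 pixels (381x238 millimeters)'
RESOLUTION=$(printf '%s\n' "$dim" | awk '/dimensions:/ {print $2}')
HRES=${RESOLUTION%x*}   # strip from the 'x' to the end -> width
VRES=${RESOLUTION#*x}   # strip up to and including the 'x' -> height
echo "$HRES x $VRES"
```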

How do I get a user's friendly username on UNIX?

I want to get the "friendly" name, not the username, at least if such a string exists for the given user. Things I've tried:
whoami
jamesarosen
id -un
jamesarosen
id -p
uid jamesarosen
groups staff com.apple.access_screensharing ...
id -P
jamesarosen:********:501:20::0:0:James A. Rosen:/Users/jamesarosen:/bin/bash
That last one has the information I'm looking for, but I'd prefer not to have to parse it out, particularly since I'm not terribly confident that the format (specifically the number of :s) will remain consistent across OSes.
Parse the GECOS Field for User's Full Name
The format of /etc/passwd and most of the GECOS field is extremely well standardized across Unix-like systems. If you find an exception, by all means let us know. Meanwhile, the easiest way to get what you want on a Unix-like system is to use getent and cut to parse the GECOS field. For example:
getent passwd $LOGNAME | cut -d: -f5 | cut -d, -f1
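As a quick illustration of what that pipeline does, here it is applied to a made-up passwd-style entry instead of the live database:

```shell
# Hypothetical /etc/passwd entry; field 5 is GECOS, whose first
# comma-separated sub-field is the full name.
entry='jblow:x:501:20:Joe Blow,Room 101,555-1234:/Users/jblow:/bin/bash'
full_name=$(printf '%s\n' "$entry" | cut -d: -f5 | cut -d, -f1)
echo "$full_name"
```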
The only way that I know would be to parse it:
grep -P "^$(whoami):" /etc/passwd | cut -f5 -d:
You can be pretty certain of the format of /etc/passwd
You could use finger to obtain that information:
finger `id -un` | head -1 | cut -d: -f3-
which has the advantage (or disadvantage, depending on your requirements) that it will retrieve the information for non-local users as well.
If you only want to get the information from /etc/passwd, you'll most likely have to parse the file one way or the other, as others have already mentioned. Personally I'd prefer awk for this task:
awk -F: -vid=`id -u` '{if ($3 == id) print $5}' /etc/passwd
Take a look at the /etc/passwd file. This file shows you how user information is stored. Your user information may or may not be stored here (There are several different databases that Unix uses for storing users), but the format is the same.
Basically, Unix uses the User ID (UID) to track which user is which. The first field is the username, the next is the old password field, then the UID, the primary Group ID, the GECOS field, the $HOME directory, and the user's shell. (There are three extra entries displayed in the id -P command in MacOS. I don't know what they are, but they make the GECOS field the eighth field instead of the fifth.)
Using the id -P command on your system gave you this entry. Some systems use getent or even getpwent as a command. What you need to do is parse this entry. Each field is separated by colons, so you need either the fifth or eighth the entry (depending upon the command you had to use).
The awk and cut commands do this quite nicely. cut is probably more efficient, but awk is more common, so I tend to use that.
In awk, the standard field separator is white space, but you can use the -F parameter to change this. In Awk, each field in a line is given a number and preceded by a dollar sign. The $0 field is the entire line.
Using awk, you get:
id -P | awk -F: '{print $8}'
This says to take the id -P command, use the : as a field separator, and print out the eighth field. The curly braces surround all AWK programs, and the single quotes are needed to keep the shell from interpreting the $8.
In BASH, you can use $( ) to run a command and capture its output, so you can set environment variables:
USER_NAME=$(id -P | awk -F: '{print $8}')
echo $USER_NAME
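Since id -P is macOS-specific, here is the same awk extraction run on a canned copy of the entry from the question, so it can be tried anywhere:

```shell
# Made-up id -P style entry (10 fields; GECOS is field 8 on macOS).
line='jamesarosen:********:501:20::0:0:James A. Rosen:/Users/jamesarosen:/bin/bash'
name=$(printf '%s\n' "$line" | awk -F: '{print $8}')
echo "$name"
```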
On macOS at least (and probably other *BSD-alikes), you may use: id -F to get just the full name.

bash: shortest way to get n-th column of output

Let's say that during your workday you repeatedly encounter the following form of columnized output from some command in bash (in my case from executing svn st in my Rails working directory):
? changes.patch
M app/models/superman.rb
A app/models/superwoman.rb
in order to work with the output of your command - in this case the filenames - some sort of parsing is required so that the second column can be used as input for the next command.
What I've been doing is to use awk to get at the second column, e.g. when I want to remove all files (not that that's a typical usecase :), I would do:
svn st | awk '{print $2}' | xargs rm
Since I type this a lot, a natural question is: is there a shorter (thus cooler) way of accomplishing this in bash?
NOTE:
What I am asking is essentially a shell command question even though my concrete example is on my svn workflow. If you feel that workflow is silly and suggest an alternative approach, I probably won't vote you down, but others might, since the question here is really how to get the n-th column command output in bash, in the shortest manner possible. Thanks :)
You can use cut to access the second field:
cut -f2
Edit:
Sorry, didn't realise that SVN doesn't use tabs in its output, so that's a bit useless. You can tailor cut to the output but it's a bit fragile - something like cut -c 10- would work, but the exact value will depend on your setup.
Another option is something like: sed 's/.\s\+//'
To accomplish the same thing as:
svn st | awk '{print $2}' | xargs rm
using only bash you can use:
svn st | while read a b; do rm "$b"; done
Granted, it's not shorter, but it's a bit more efficient and it handles whitespace in your filenames correctly.
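The whitespace-safety claim is easy to check with a canned line (the filename here is invented to include a space):

```shell
# read splits on the first run of whitespace: 'a' gets the status column,
# 'b' keeps the rest of the line, internal spaces included.
line='M  app/models/super man.rb'
read -r a b <<< "$line"
echo "status=$a file=$b"
```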
I found myself in the same situation and ended up adding these aliases to my .profile file:
alias c1="awk '{print \$1}'"
alias c2="awk '{print \$2}'"
alias c3="awk '{print \$3}'"
alias c4="awk '{print \$4}'"
alias c5="awk '{print \$5}'"
alias c6="awk '{print \$6}'"
alias c7="awk '{print \$7}'"
alias c8="awk '{print \$8}'"
alias c9="awk '{print \$9}'"
Which allows me to write things like this:
svn st | c2 | xargs rm
Try the zsh. It supports suffix alias, so you can define X in your .zshrc to be
alias -g X="| cut -d' ' -f2"
then you can do:
cat file X
You can take it one step further and define it for the nth column:
alias -g X2="| cut -d' ' -f2"
alias -g X1="| cut -d' ' -f1"
alias -g X3="| cut -d' ' -f3"
which will output the nth column of file "file". You can do this for grep output or less output, too. This is very handy and a killer feature of the zsh.
You can go one step further and define D to be:
alias -g D="|xargs rm"
Now you can type:
cat file X1 D
to delete all files mentioned in the first column of file "file".
If you know the bash, the zsh is not much of a change except for some new features.
HTH Chris
Because you seem to be unfamiliar with scripts, here is an example.
#!/bin/sh
# usage: svn st | x 2 | xargs rm
col=$1
shift
awk -v col="$col" '{print $col}' "$@"
If you save this in ~/bin/x and make sure ~/bin is in your PATH (now that is something you can and should put in your .bashrc) you have the shortest possible command for generally extracting column n; x n.
The script should do proper error checking and bail if invoked with a non-numeric argument or the incorrect number of arguments, etc; but expanding on this bare-bones essential version will be in unit 102.
Maybe you will want to extend the script to allow a different column delimiter. Awk by default parses input into fields on whitespace; to use a different delimiter, use -F ':' where : is the new delimiter. Implementing this as an option to the script makes it slightly longer, so I'm leaving that as an exercise for the reader.
Usage
Given a file file:
1 2 3
4 5 6
You can either pass it via stdin (using a useless cat merely as a placeholder for something more useful);
$ cat file | sh script.sh 2
2
5
Or provide it as an argument to the script:
$ sh script.sh 2 file
2
5
Here, sh script.sh is assuming that the script is saved as script.sh in the current directory; if you save it with a more useful name somewhere in your PATH and mark it executable, as in the instructions above, obviously use the useful name instead (and no sh).
It looks like you already have a solution. To make things easier, why not just put your command in a bash script (with a short name) and just run that instead of typing out that 'long' command every time?
If you are ok with manually selecting the column, you could be very fast using pick:
svn st | pick | xargs rm
Just go to any cell of the 2nd column, press c and then hit enter
Note that the file path does not have to be in the second column of svn st output. For example, if you modify a file and also modify its property, it will be in the 3rd column.
See possible output examples in:
svn help st
Example output:
M wc/bar.c
A + wc/qax.c
I suggest cutting off the first 8 characters with:
svn st | cut -c8- | while read FILE; do echo whatever with "$FILE"; done
If you want to be 100% sure, and deal with fancy filenames with white space at the end for example, you need to parse xml output:
svn st --xml | grep -o 'path=".*"' | sed 's/^path="//; s/"$//'
Of course you may want to use some real XML parser instead of grep/sed.
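Here the grep/sed chain is run on a made-up fragment of svn st --xml output, showing why it survives filenames with spaces:

```shell
# Hypothetical snippet of `svn st --xml` output.
xml='<entry
   path="app/models/super man.rb">'
path=$(printf '%s\n' "$xml" | grep -o 'path=".*"' | sed 's/^path="//; s/"$//')
echo "$path"
```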

bash grep 'random matching' string

Is there a way to grab a 'random matching' string via bash from a text file?
I am currently grabbing a download link via bash, curl & grep from a online text file.
Example:
DOWNLOADSTRING="$(curl -o - "http://example.com/folder/downloadlinks.txt" | grep "$VARIABLE")"
from online text file which contains
http://alphaserver.com/files/apple.zip
http://alphaserver.com/files/banana.zip
where $VARIABLE is something the user selected.
Works great, but i wanted to add some mirrors to the text file.
So when the variable 'banana' is selected, text file which i grep contains:
http://alphaserver.com/files/apple.zip
http://betaserver.com/files/apple.zip
http://gammaserver.com/files/apple.zip
http://deltaserver.com/files/apple.zip
http://alphaserver.com/files/banana.zip
http://betaserver.com/files/banana.zip
http://gammaserver.com/files/banana.zip
http://deltaserver.com/files/banana.zip
the code should pick a random 'banana' string and store it as the 'DOWNLOADSTRING' variable.
the current code above can only work with 1 string in the text file, since it grabs everything 'banana'.
What this is for; i wanted to add some mirror downloadlinks for the files in the online text file, and the current code doesn't allow that.
Can i let grep grab one random 'banana' string? (and not all of them)
See this question to see how to get a random line after grep. rl seems like a good candidate
What's an easy way to read random line from a file in Unix command line?
then do a grep ... | rl | head -n 1
Try this:
DOWNLOADSTRING="$(curl -o - "http://example.com/folder/downloadlinks.txt" | grep "$VARIABLE" | sort -R | head -1)"
The output will be random-sorted and then the first line will be selected.
If mirrors.txt has the following data, which you provided in your question:
http://alphaserver.com/files/apple.zip
http://betaserver.com/files/apple.zip
http://gammaserver.com/files/apple.zip
http://deltaserver.com/files/apple.zip
http://alphaserver.com/files/banana.zip
http://betaserver.com/files/banana.zip
http://gammaserver.com/files/banana.zip
http://deltaserver.com/files/banana.zip
Then you can use the following command to get a random "matched string" from the file:
grep -E "${VARIABLE}" mirrors.txt | shuf -n1
Then you can store it as the variable DOWNLOADSTRING by setting its value with a function call like so:
rand_mirror_call() { grep -E "${1}" mirrors.txt | shuf -n1; }
DOWNLOADSTRING="$(rand_mirror_call ${VARIABLE})"
This will give you a dedicated random line from the text file based on the user's ${VARIABLE} input. It is a lot less typing this way.
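As a self-contained check of the idea, the mirror list below is inlined instead of read from mirrors.txt (shuf is from GNU coreutils):

```shell
# Inline stand-in for mirrors.txt.
mirrors='http://alphaserver.com/files/banana.zip
http://betaserver.com/files/banana.zip
http://gammaserver.com/files/banana.zip'
# Pick one random line among the matches.
pick=$(printf '%s\n' "$mirrors" | grep 'banana' | shuf -n1)
echo "$pick"
```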
