How do I get a user's friendly username on UNIX? - bash

I want to get the "friendly" name, not the username, at least if such a string exists for the given user. Things I've tried:
whoami
jamesarosen
id -un
jamesarosen
id -p
uid jamesarosen
groups staff com.apple.access_screensharing ...
id -P
jamesarosen:********:501:20::0:0:James A. Rosen:/Users/jamesarosen:/bin/bash
That last one has the information I'm looking for, but I'd prefer not to have to parse it out, particularly since I'm not terribly confident that the format (specifically the number of :s) will remain consistent across OSes.

Parse the GECOS Field for User's Full Name
The format of /etc/passwd and most of the GECOS field is extremely well standardized across Unix-like systems. If you find an exception, by all means let us know. Meanwhile, the easiest way to get what you want on a Unix-like system is to use getent and cut to parse the GECOS field. For example:
getent passwd $LOGNAME | cut -d: -f5 | cut -d, -f1

The only way that I know would be to parse it:
grep -P "^$(whoami):" /etc/passwd | cut -f5 -d:
You can be pretty certain of the format of /etc/passwd

You could use finger to obtain that information:
finger `id -un` | head -1 | cut -d: -f3-
which has the advantage (or disadvantage, depending on your requirements) that it will retrieve the information for non-local users as well.
If you only want to get the information from /etc/passwd, you'll most likely have to parse the file one way or the other, as others have already mentioned. Personally I'd prefer awk for this task:
awk -F: -vid=`id -u` '{if ($3 == id) print $5}' /etc/passwd

Take a look at the /etc/passwd file. This file shows you how user information is stored. Your user information may or may not be stored here (there are several different databases that Unix can use for storing users), but the format is the same.
Basically, Unix uses the User ID (UID) to track which user is which. The first field is the username, the next is the old password field, then the UID, the primary Group ID, the GECOS field, the $HOME directory, and the user's shell. (There are three extra fields displayed in the id -P output on MacOS. I don't know what they are, but they make the GECOS field the eighth field instead of the fifth.)
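For illustration, here is a made-up line in the standard seven-field layout (the username, UID, and paths are invented examples, not taken from your system):
jdoe:x:1001:1001:John Doe,Office 101,555-1234:/home/jdoe:/bin/bash
In awk terms those fields are $1 (the username) through $7 (the shell), with the GECOS full name in field $5.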
Using the id -P command on your system gave you this entry. Some systems use getent or even getpwent as a command. What you need to do is parse this entry. Each field is separated by colons, so you need either the fifth or the eighth field (depending upon the command you had to use).
The awk and cut commands do this quite nicely. cut is probably more efficient, but awk is more common, so I tend to use that.
In awk, the standard field separator is white space, but you can use the -F parameter to change this. In Awk, each field in a line is given a number and preceded by a dollar sign. The $0 field is the entire line.
Using awk, you get:
id -P | awk -F: '{print $8}'
This takes the output of the id -P command, uses : as the field separator, and prints out the eighth field. The curly braces surround every AWK program, and the single quotes are needed to keep the shell from interpreting the $8.
In BASH, you can use $( ) to run a command and return its output, so you can set environment variables:
USER_NAME=$(id -P | awk -F: '{print $8}')
echo "$USER_NAME"

On macOS at least (and probably other *BSD-alikes), you may use: id -F to get just the full name.
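For the account in the question, that should print something like:
id -F
James A. Rosen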

Related

Passing parameter as control number and get table name

I have a scenario where there is a file with a control number and a table name; here is an example:
1145|report_product|N|N|
1156|property_report|N|N
I need to pass the control number 1156 and get the table name abbreviated as PR; once I have the table name as PR, I need to add some text to it.
Please help
Assuming the control file is:
# cat controlfile.txt
1145|report_product|N|N
1156|property_report|N|N
To find a specific line you can use:
grep 1156 controlfile.txt
If needed you can save it to a variable: result=$(grep 1156 controlfile.txt)
Assuming you need to append something to this line, you can use:
sed '/^1156/s/$/ 123/' controlfile.txt
This example will add "123" at the end of the line that starts with 1156.
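With the control file above, the output would be:
1145|report_product|N|N
1156|property_report|N|N 123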
If needed, add more details like what output you want or anything else to help us better understand your need.
You need to work in two stages:
You need to find the line containing 1156.
You need to get the information from that line.
In order to find the line (as already indicated by Juranir), you can use grep:
Prompt> grep "1156" control.txt
1156|property_report|N|N
In order to get the information from that line, you need to get the second column, based on the vertical line (often referred to as a "pipe" character), for which there are different approaches. I'll give you two:
The cut approach: you can cut a line into different parts and take a character, a byte, a column, .... In this case, this is what you need:
grep "1156" control.txt | cut -d '|' -f 2
-d '|' : use the vertical line as a column separator
-f 2 : show the second field (column)
The awk approach: awk is a general "text modifier" with multiple features (showing parts of text, performing basic calculations, ...). For this case, it can be used as follows:
grep "1156" control.txt | awk -F '|' '{print $2}'
-F '|' : use the vertical line as a column separator
'{print $2}' : the awk script for showing the second field.
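Either of these prints the following for the sample control file:
property_report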
Oh, by the way, I've edited your question. You might press the edit button in order to learn how I did this :-)
For getting only the first letter of each underscore-separated word:
grep "1156" control.txt | awk -F '|' '{print $2}' | awk -F '_' '{print substr($1,1,1) substr($2,1,1)}'
(something like that)
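Putting the two steps together, here is a minimal sketch of the whole flow (the script name, the uppercasing with toupper, and the appended text are my own assumptions about what you want):
#!/bin/bash
# Usage sketch: ./tablename.sh 1156
ctl="$1"                                   # control number passed as the first argument
line=$(grep "^${ctl}|" control.txt)        # find the matching line
table=$(cut -d'|' -f2 <<<"$line")          # e.g. property_report
# Build the abbreviation from the first letter of each underscore-separated word
abbr=$(awk -F_ '{print toupper(substr($1,1,1) substr($2,1,1))}' <<<"$table")
echo "${abbr}_your_extra_text"             # e.g. PR_your_extra_text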

Command-line access to OS X's keychain - How to get e-mail address associated with account

I'm writing a Bash command-line tool that accesses the keychain in order to emulate a web browser communicating with web pages.
It's quite straightforward to get the password stored in the keychain from there:
PASSWORD=`security find-internet-password -gs accounts.google.com -w`
But it's a bit more tricky to extract the email address, as the most specific command you get for this returns a lot of information:
$ security find-internet-password -gs accounts.google.com
keychain: "/Users/me/Library/Keychains/login.keychain"
class: "inet"
attributes:
0x00000007 <blob>="accounts.google.com"
0x00000008 <blob>=<NULL>
"acct"<blob>="my-email-address#gmail.com"
"atyp"<blob>="form"
"cdat"<timedate>=0x32303135303333303134333533315A00 "20150330143531Z\000"
"crtr"<uint32>="rimZ"
"cusi"<sint32>=<NULL>
"desc"<blob>=<NULL>
"icmt"<blob>=<NULL>
"invi"<sint32>=<NULL>
"mdat"<timedate>=0x32303135303333303134333533315A00 "20150330143531Z\000"
"nega"<sint32>=<NULL>
"path"<blob>="/ServiceLogin"
"port"<uint32>=0x00000000
"prot"<blob>=<NULL>
"ptcl"<uint32>="htps"
"scrp"<sint32>=<NULL>
"sdmn"<blob>=<NULL>
"srvr"<blob>="accounts.google.com"
"type"<uint32>=<NULL>
password: "my-password"
How would you extract the account e-mail address from the line starting with "acct"<blob>= and store it, say, to a variable called EMAIL?
If you're using multiple grep, cut, sed, and awk statements, you can usually replace them with a single awk.
OUTPUT=$(security find-internet-password -gs accounts.google.com)
EMAIL=$(awk -F\" '/acct"<blob>/ {print $4}' <<<"$OUTPUT")
This may be easier on a single line, but I couldn't get the security command to print out an output like yours in order to test it. Plus, it's a bit long to show on StackOverflow:
EMAIL=$(security find-internet-password -gs accounts.google.com | awk -F\" '/acct"<blob>/ {print $4}')
The /acct"<blob>/ is a regular expression; this particular awk command only acts on lines that match it. The -F\" splits each line into fields on the " character. In your line, the fields become:
The spaces at the front of the line.
acct
<blob>=
my-email-address#gmail.com
A null field
The {print $4} says to print out the fourth field.
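If you want to see the field numbering for yourself, you can feed awk just that one line (the text is copied from your output above):
echo '    "acct"<blob>="my-email-address#gmail.com"' | awk -F\" '{print $2 " / " $4}'
acct / my-email-address#gmail.com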
By the way, it's usually better to use $(....) instead of back ticks in your Shell scripts. The $( ... ) are easier to see, and you can enclose subcommands to execute before your main command:
foo=$(ls $(find . -name "*.txt"))
EMAIL=`security find-internet-password -s accounts.google.com | grep acct | cut -d "=" -f 2`
EMAIL="${EMAIL:1:${#EMAIL}-2}" # remove the surrounding quotes
Explanation:
grep acct keeps only the line containing the string "acct"
cut -d "=" -f 2 parses that line based on the separator "=" and keeps the 2nd part, i.e. the part after the "=" sign, which is the e-mail address enclosed in double quotes
EMAIL="${EMAIL:1:${#EMAIL}-2}" removes the first and last characters of that string (the quotes), leaving us with the clean e-mail address we were looking for
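If you prefer, the quotes can also be stripped inside the same pipeline with tr (just an alternative sketch with the same result):
EMAIL=$(security find-internet-password -s accounts.google.com | grep acct | cut -d '=' -f 2 | tr -d '"')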

AWK - taking incorrect parameters when using $1, $2 etc

I am using a script which launches with a number of parameters assigned to it. For example, you can imagine that this is what I am doing when launching the script from the command line:
script.sh "/tmp" "/apps" "/var".
This script runs through a system which I will not delve into detail about. This system is the cause of my problem but I am powerless to make any changes to this.
So, when I use awk '{print $2}', I receive the 2nd parameter I passed to my script ("/apps") rather than the field in the command I am running.
My question: is there any alternative notation that I can use in AWK except for $1, $2 etc to signify field values?
UPDATE:
Here is an example of one command within the script:
df | grep -e /$ | awk '{print $3/1024}'.
The problem is that the $3 here gets populated with the script's 3rd parameter ("/var"), as mentioned above.
Try \$1, \$2 etc. (The "system" seems to drop at least one level of quoting from the command, so let's add one.)

Create bash script with menu of choices that come from the output of another script

forgive me if this is painfully simple but I'm not a programmer so it's hard for me to tell what's easy and what's hard.
I have a bash script that I use (that someone else wrote) for finding out internal customer data where I basically run "info customername" and it searches our internal customer database for all customer records matching that customer name and outputs a list with their account numbers (which all have the same prefix of 11111xxxxxxxx), in the form of "Sample Customer - 111119382818873".
We have another bash script where you enter "extrainfo 11111xxxxxxxx" and get the plaintext data from their account, which we use for many things that are important to us.
The missing feature is that "extrainfo" cannot search by name, only number. So I'd like to bridge that gap. Ideally, I'd enter "extrainfo customername" and it would run a search using "info customername", generate a list of results as a menu, allow me to choose which customer I meant, and then run the "extrainfo 11111xxxxxxxxx" command of that customer. If there is only one match, it would automatically run the extrainfo command properly.
Here's what I have that works but only for the first result that "info customername" generates:
#!/bin/bash
key=`/usr/local/bin/info $1 | grep 11111 | awk '{print $NF}'`
/usr/local/bin/extrainfo $key
It's the menu stuff I'm having a hard time figuring out. I hope this was clear but again, I'm pretty dumb with this stuff so I probably left something important out. Thanks.
This might work for you:
#!/bin/bash
# Set the prompt for the select command
PS3="Type a number or 'q' to quit: "
# Create a list of customer names and numbers (fill gaps with underscores)
keys=$(/usr/local/bin/info $1 | sed 's/ /_/g')
# Show a menu and ask for input.
select key in $keys; do
    if [ -n "$key" ]; then
        /usr/local/bin/extrainfo $(sed 's/.*_11111/11111/' <<<"$key")
    fi
    break
done
Basically, this script reads all the customer info, finds all lines with the customer number prefix, and loads it into search.txt. Then it displays the file with line numbers in front of it, waits for you to choose a line number, and then strips out the customer name and spaces in front of the customer id. Finally, it runs my other script with just the customer id. It's hacky but functional.
#!/bin/bash
/usr/local/bin/info $1 | grep 11111 > search.txt
cat -n search.txt
read num
key=`sed -n ${num}p search.txt | awk '{print $NF}'`
/usr/local/bin/extrainfo $key

bash: shortest way to get n-th column of output

Let's say that during your workday you repeatedly encounter the following form of columnized output from some command in bash (in my case from executing svn st in my Rails working directory):
? changes.patch
M app/models/superman.rb
A app/models/superwoman.rb
In order to work with the output of your command - in this case the filenames - some sort of parsing is required so that the second column can be used as input for the next command.
What I've been doing is to use awk to get at the second column, e.g. when I want to remove all files (not that that's a typical usecase :), I would do:
svn st | awk '{print $2}' | xargs rm
Since I type this a lot, a natural question is: is there a shorter (thus cooler) way of accomplishing this in bash?
NOTE:
What I am asking is essentially a shell command question even though my concrete example is on my svn workflow. If you feel that workflow is silly and suggest an alternative approach, I probably won't vote you down, but others might, since the question here is really how to get the n-th column command output in bash, in the shortest manner possible. Thanks :)
You can use cut to access the second field:
cut -f2
Edit:
Sorry, didn't realise that SVN doesn't use tabs in its output, so that's a bit useless. You can tailor cut to the output but it's a bit fragile - something like cut -c 10- would work, but the exact value will depend on your setup.
Another option is something like: sed 's/.\s\+//'
To accomplish the same thing as:
svn st | awk '{print $2}' | xargs rm
using only bash you can use:
svn st | while read a b; do rm "$b"; done
Granted, it's not shorter, but it's a bit more efficient and it handles whitespace in your filenames correctly.
I found myself in the same situation and ended up adding these aliases to my .profile file:
alias c1="awk '{print \$1}'"
alias c2="awk '{print \$2}'"
alias c3="awk '{print \$3}'"
alias c4="awk '{print \$4}'"
alias c5="awk '{print \$5}'"
alias c6="awk '{print \$6}'"
alias c7="awk '{print \$7}'"
alias c8="awk '{print \$8}'"
alias c9="awk '{print \$9}'"
Which allows me to write things like this:
svn st | c2 | xargs rm
Try the zsh. It supports global aliases, so you can define X in your .zshrc to be
alias -g X="| cut -d' ' -f2"
then you can do:
cat file X
You can take it one step further and define it for the nth column:
alias -g X2="| cut -d' ' -f2"
alias -g X1="| cut -d' ' -f1"
alias -g X3="| cut -d' ' -f3"
which will output the nth column of file "file". You can do this for grep output or less output, too. This is very handy and a killer feature of the zsh.
You can go one step further and define D to be:
alias -g D="|xargs rm"
Now you can type:
cat file X1 D
to delete all files mentioned in the first column of file "file".
If you know bash, zsh is not much of a change except for some new features.
HTH Chris
Because you seem to be unfamiliar with scripts, here is an example.
#!/bin/sh
# usage: svn st | x 2 | xargs rm
col=$1
shift
awk -v col="$col" '{print $col}' "${@--}"
If you save this in ~/bin/x and make sure ~/bin is in your PATH (now that is something you can and should put in your .bashrc) you have the shortest possible command for generally extracting column n: x n.
The script should do proper error checking and bail if invoked with a non-numeric argument or the incorrect number of arguments, etc; but expanding on this bare-bones essential version will be in unit 102.
Maybe you will want to extend the script to allow a different column delimiter. Awk by default parses input into fields on whitespace; to use a different delimiter, use -F ':' where : is the new delimiter. Implementing this as an option to the script makes it slightly longer, so I'm leaving that as an exercise for the reader.
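For reference, one way that extension could look (a sketch; the -d option name and the minimal option handling are my own choices, not part of the script above):
#!/bin/sh
# usage: x [-d delimiter] column-number [file ...]
delim=""
if [ "$1" = "-d" ]; then
    delim="$2"
    shift 2
fi
col="$1"
shift
if [ -n "$delim" ]; then
    awk -F "$delim" -v col="$col" '{print $col}' "${@--}"
else
    awk -v col="$col" '{print $col}' "${@--}"
fi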
Usage
Given a file file:
1 2 3
4 5 6
You can either pass it via stdin (using a useless cat merely as a placeholder for something more useful):
$ cat file | sh script.sh 2
2
5
Or provide it as an argument to the script:
$ sh script.sh 2 file
2
5
Here, sh script.sh is assuming that the script is saved as script.sh in the current directory; if you save it with a more useful name somewhere in your PATH and mark it executable, as in the instructions above, obviously use the useful name instead (and no sh).
It looks like you already have a solution. To make things easier, why not just put your command in a bash script (with a short name) and just run that instead of typing out that 'long' command every time?
If you are ok with manually selecting the column, you could be very fast using pick:
svn st | pick | xargs rm
Just go to any cell of the 2nd column, press c and then hit enter
Note that the file path does not have to be in the second column of svn st output. For example, if you modify a file and also modify its property, it will be in the 3rd column.
See possible output examples in:
svn help st
Example output:
M wc/bar.c
A + wc/qax.c
I suggest cutting off the first 8 characters:
svn st | cut -c8- | while read FILE; do echo whatever with "$FILE"; done
If you want to be 100% sure, and deal with fancy filenames with white space at the end for example, you need to parse xml output:
svn st --xml | grep -o 'path=".*"' | sed 's/^path="//; s/"$//'
Of course you may want to use some real XML parser instead of grep/sed.
