I'm a newbie in the Mac OS X world and I have to write a script that gives me the vendor ID and product ID of a connected USB device. I have done it for Windows and Linux, but for Mac I have no idea where to start.
I have seen this post, but the link with the example is not working. Do you have any advice on where I can start or where I can find some examples?
In particular, which language should I use?
You tagged your question with bash, so I'll answer it as if you're asking how to do this in bash, rather than asking what language to use (which would make the question off-topic for StackOverflow).
You can parse existing data from system_profiler using built-in tools. For example, here's a dump of vendor:product pairs, with "Location ID" and manufacturer...
#!/bin/bash
shopt -s extglob                      # needed for the +( ) pattern used to trim spaces
while IFS=: read key value; do
  key="${key##+( )}"                  # strip leading whitespace from the key
  value="${value##+( )}"              # ... and from the value
  case "$key" in
    "Product ID")
      p="${value% *}"
      ;;
    "Vendor ID")
      v="${value%% *}"                # keep only the hex ID, drop the vendor name
      ;;
    "Manufacturer")
      m="${value}"
      ;;
    "Location ID")
      l="${value}"
      printf "%s:%s %s (%s)\n" "$v" "$p" "$l" "$m"
      ;;
  esac
done < <( system_profiler SPUSBDataType )
This relies on the fact that Location ID is the last item listed for each USB device, which I haven't verified conclusively. (It just appears that way for me.)
If you want something that (1) is easier to read and (2) doesn't depend on bash and is therefore more portable (not an issue though; all Macs come with bash), you might want to consider doing your heavy lifting in awk instead of pure bash:
#!/bin/sh
system_profiler SPUSBDataType \
  | awk '
      /Product ID:/   { p = $3 }
      /Vendor ID:/    { v = $3 }
      /Manufacturer:/ { sub(/.*: /, ""); m = $0 }
      /Location ID:/  { sub(/.*: /, ""); printf("%s:%s %s (%s)\n", v, p, $0, m) }
    '
Or even avoid wrapping this in shell entirely with:
#!/usr/bin/awk -f
BEGIN {
  while ("system_profiler SPUSBDataType" | getline) {
    if (/Product ID:/)   { p = $3 }
    if (/Vendor ID:/)    { v = $3 }
    if (/Manufacturer:/) { sub(/.*: /, ""); m = $0 }
    if (/Location ID:/)  { sub(/.*: /, ""); printf("%s:%s %s (%s)\n", v, p, $0, m) }
  }
}
Note that you can also get output from system_profiler in XML format:
$ system_profiler -xml SPUSBDataType
You'll need an XML parser to handle that output, though. And you'll find that it's a lot of work to parse XML in native bash.
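If you do go the XML route, one option is to hand the plist to a scripting language that already has a plist parser instead of parsing it in bash. Here is a minimal sketch using python3's plistlib; the key names vendor_id, product_id, _name and _items are assumptions based on typical SPUSBDataType output, so check the actual XML on your machine before relying on them:
#!/bin/sh
system_profiler -xml SPUSBDataType | python3 -c '
import plistlib, sys

def walk(items):
    for item in items:
        vid, pid = item.get("vendor_id"), item.get("product_id")
        if vid and pid:
            print(vid, pid, item.get("_name", ""))
        walk(item.get("_items", []))    # devices can nest under hubs

data = plistlib.load(sys.stdin.buffer)  # the -xml output is a standard plist
walk(data[0].get("_items", []))
'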
Depending on what you want to do with the information, you can just look it up in System Information.
Click the Apple menu at the top left of the screen, then About This Mac, More Info, System Report, and select Hardware at the top left, then USB.
You could maybe write AppleScript to do that, but if you are going to go on and interact with the device in some way, this may not be the best approach.
You can use the system_profiler command, like this:
system_profiler -detailLevel full
and parse the output from that. Or you can add the -xml option to the system_profiler command and parse the XML fairly easily with awk/grep or the XML module in Perl.
Example extract:
| | | | +-o FaceTime HD Camera (Built-in)#0 <class IOUSBInterface, id 0x1000002b2, registered, matched, active, busy 0 (26 ms), retain 7>
| | | | | {
| | | | | "IOCFPlugInTypes" = {"2d9786c6-9ef3-11d4-ad51-000a27052861"="IOUSBFamily.kext/Contents/PlugIns/IOUSBLib.bundle"}
| | | | | "bcdDevice" = 0x755
| | | | | "IOUserClientClass" = "IOUSBInterfaceUserClientV3"
| | | | | "idProduct" = 0x850b
| | | | | "bConfigurationValue" = 0x1
| | | | | "bInterfaceSubClass" = 0x1
| | | | | "locationID" = 0xfffffffffa200000
| | | | | "USB Interface Name" = "FaceTime HD Camera (Built-in)"
| | | | | "idVendor" = 0x5ac
Regarding the path to the USB device, I have no idea how you would do that simply on a Mac. I might be tempted to run:
find /dev -type b -o -type c
before inserting the USB device, and saving the output. Then have your user insert the device and run the same command again to see what device special files have been added as a result of plugging in your device. Maybe crude, maybe effective - just an idea.
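Spelled out, that before/after idea could look something like this (the temporary file names are just placeholders):
#!/bin/sh
# Snapshot the device nodes, let the user plug in the device, then diff.
find /dev -type b -o -type c | sort > /tmp/dev-before.txt
printf 'Plug in the USB device, then press Enter: '
read dummy
find /dev -type b -o -type c | sort > /tmp/dev-after.txt
comm -13 /tmp/dev-before.txt /tmp/dev-after.txt   # lines only in the "after" snapshot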
There are some lesser-known bash variable expansions:
+----------------------------------------------------------+------------+
| description                                              | expression |
+----------------------------------------------------------+------------+
| Remove everything **after** the **last** '7'             | ${var%7*}  |
| Remove everything **after** the **first** '7'            | ${var%%7*} |
| Remove everything **before** the **first** '7'           | ${var#*7}  |
| Remove everything **before** the **last** '7'            | ${var##*7} |
| First char upper case                                    | ${var^}    |
| All upper case                                           | ${var^^}   |
| First char lower case                                    | ${var,}    |
| All lower case                                           | ${var,,}   |
| Show how variable was set                                | ${var@A}   |
| ?? something cool ??                                     | ${var@E}   |
| Print variable as though it were the prompt variable PS1 | ${var@P}   |
| ?? something cool ??                                     | ${var@Q}   |
+----------------------------------------------------------+------------+
I have been struggling to find a source that documents all of these tricks. So far the best one I have found is this cheat sheet. But even that page is missing some of these expansion rules. For the purposes of writing good bash code, and making that code portable I am looking for several things:
What are all of the bash variable expansion tricks?
Where is there a document that shows all of them (with examples ideally)?
What versions of bash do which tricks work with?
Some good pointers on parameter expansions:
https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
http://mywiki.wooledge.org/BashFAQ/073
https://wiki.bash-hackers.org/syntax/pe
You missed many, like
single substitution a -> b : ${x/a/b}
multiple substitutions a -> b : ${x//a/b}
offset manipulation: ${x:1:3}
${var-word} if var is defined, use var; otherwise, "word"
${var+word} if var is defined, use "word"; otherwise, nothing
${var=word} if var is defined, use var; otherwise, use "word" AND also assign "word" to var
${var?error} if var is defined, use var; otherwise print "error" and exit
array slice ${files[@]: -4}
Note that most parameter expansions work with arrays too.
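A quick sketch showing a few of these in action (the sample values are made up; the case-changing operators need bash 4+ and the @-transformations bash 4.4+):
#!/bin/bash
var="2007-04-17"
echo "${var%%-*}"          # 2007        remove from the first '-' onward
echo "${var##*-}"          # 17          remove through the last '-'
echo "${var/-/.}"          # 2007.04-17  single substitution
echo "${var//-/.}"         # 2007.04.17  multiple substitutions
echo "${var:5:2}"          # 04          offset and length
name="world"
echo "${name^}"            # World       first char upper case (bash 4+)
echo "${missing-default}"  # default     var is unset, so the fallback is used
declare -i n=5
echo "${n@A}"              # prints a declare command that would recreate n (bash 4.4+)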
I am trying to save a file into a library on an iSeries database using GxFtpPut on GeneXus 10 V3 with .NET, but when sending the file GeneXus tries to send it to a Windows directory instead of the library. This works fine using the ftp command from cmd.
I've already tried changing the route it uses, to no avail, and tried to find another way of sending the file through GeneXus.
For example, when using cmd I just run this:
put C:\FILES\Filename.txt Library/Filename
and it sends the file into the library,
but when doing this in GeneXus:
Call("GxFtpPut", &FileDirectory , 'Library/'+&FileName,'B' )
it does not work and tries to find a directory with that name among the Windows files on the server.
I just want to be able to send it to the server library without issue.
IBM i has two distinct name formats depending on the file system you are trying to use. NAMEFMT 0 is the library/filename format, and is likely unknown to PC FTP clients. NAMEFMT 1 is the typical hierarchical directory path used by non-IBM i computers, and also works with IBM i if you want to put a file anywhere in the IFS (Integrated File System).
Fun fact, the native library file system is also accessible from the IFS. But to address it you need to use a format that might be a little unfamiliar. /QSYS.lib/library.lib/filename.file/membername.mbr You may be able to drop the member name.
To change name format, you can issue the SITE sub-command on your remote host like this:
QUOTE SITE NAMEFMT 0 -- This sets name format 0 (library/filename)
QUOTE SITE NAMEFMT 1 -- This sets name format 1 (directory path)
I did some testing with a plain Windows FTP client. The test file on the PC was a text file created in Notepad++. It turns out that we start out in NAMEFMT 0 unless it is changed. It looks like GeneXus only supports a limited set of commands. So here is the limited FTP script that works:
ascii
put test.txt mylib/testpf
I can now pull up testpf on the greenscreen utilities and read it. I can also read testpf in my GUI SQL client. The ASCII text has been converted properly to EBCDIC.
|TESTPF |
|--------------------------------------------------------------------------------|
| |
|// ------------------------------------ |
|// Sweep |
|// |
|// Performs the sweep logic |
|// ------------------------------------ |
|dcl-proc Sweep; |
| |
| |
| exec sql |
| update atty a |
| set ymglsb = (select ymglsb from glaty |
| where atty = a.atty) |
| where atty in (select atty from glaty where atty = a.atty); |
|// where ymglsb in (select ymglsb from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('02: ' + message); |
| endif; |
| |
| exec sql |
| update atty a |
| set ymglsb = '000' |
| where not exists (select * from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('03: ' + message); |
| endif; |
| |
|end-proc; |
However, if I try to transfer in binary mode, the resulting data in the file looks like this:
|TESTPF |
|--------------------------------------------------------------------------------|
|ëÏÁÁø&ÁÊÃ?Ê_ËÈÇÁËÏÁÁø% |
|?ÅÑÄÀÄ%øÊ?ÄëÏÁÁøÁÌÁÄËÉ% |
|ÍøÀ/ÈÁ/ÈÈ`/ËÁÈ`_Å%ËÂËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È` |
|ÏÇÁÊÁ/ÈÈ`//ÈÈ`ÏÇÁÊÁ/ÈÈ`Ñ>ËÁ%ÁÄÈ/ÈÈ`ÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ |
|`//ÈÈ`ÏÇÁÊÁ`_Å%ËÂÑ>ËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`//ÈÈ |
|`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁÌÁÄËÉ%ÍøÀ/ÈÁ/ÈÈ`/ |
|ËÁÈ`_Å%ËÂÏÇÁÊÁ>?ÈÁÌÑËÈËËÁ%ÁÄÈÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`// |
|ÈÈ`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁ>ÀøÊ?Ä |
This has not been converted because we told the IBM i FTP server not to convert to EBCDIC, since the transfer was binary.
So try ASCII mode, use the library/filename format. The target file does not need to pre-exist.
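For reference, a complete Windows ftp script along those lines might look like this (host, credentials, library and file names are placeholders; save it as script.txt and run ftp -n -s:script.txt). The explicit NAMEFMT command is optional if the session already starts in format 0, as observed above:
open myibmihost
user MYUSER MYPASS
quote site namefmt 0
ascii
put C:\FILES\Filename.txt MYLIB/FILENAME
quit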
I have two files, one with about 100 root domains, and a second file with URLs only. Now I have to filter that URL list to get a third file which contains only URLs that have domains from the list.
Example of URL list:
| URL |
| ------------------------------|
| http://github.com/name |
| http://stackoverflow.com/name2|
| http://stackoverflow.com/name3|
| http://www.linkedin.com/name3 |
Example of word list:
github.com
youtube.com
facebook.com
Result:
| http://github.com/name |
My goal is to pull out the whole row wherever the URL contains one of the listed domains. This is what I tried:
for i in $(cat domains.csv);
do grep "$i" urls.csv >> filtered.csv ;
done
The result is strange: I get some of the links, but not all of the ones that contain root domains from the first file. I then tried to do the same thing with Python and saw that bash wasn't doing what I wanted; I got a better result with the Python script, but it takes more time to write a Python script than to run bash commands.
How should I accomplish this with bash?
Using grep:
grep -F -f domains.csv url.csv
Test Results:
$ cat wordlist
github.com
youtube.com
facebook.com
$ cat urllist
| URL |
| ------------------------------|
| http://github.com/name |
| http://stackoverflow.com/name2|
| http://stackoverflow.com/name3|
| http://www.linkedin.com/name3 |
$ grep -F -f wordlist urllist
| http://github.com/name |
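If a listed domain could ever match inside a longer hostname (say github.com inside notgithub.com), a slightly stricter variant is to compare the host part of the URL against the list instead of doing a plain substring match. A sketch, assuming the URLs are shaped like the sample above (scheme://host/path):
#!/bin/sh
awk -F'/' '
  NR==FNR { dom[$0]; next }      # first file: remember each domain
  {
    host = $3                    # "http://host/path" -> third "/"-separated field
    sub(/^www\./, "", host)      # treat www.github.com the same as github.com
    if (host in dom) print
  }
' domains.csv urls.csv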
I have the following command that I use to rewrite some maxscale output to be able to use it in other software:
maxadmin list servers | sed -r 's/[^a-z 0-9]//gi;/^\s*$/d;1,3d;' | awk '$1=$1' | cut -d ' ' -f 1,5 | sed -e 's/ /":"/g' | sed -e 's/\(.*\)/"\1"/' | tr '\n' ',' | sed 's/.$/}\n/' | sed 's/^/{/'
I am thinking this is way too complex for what I want to do, but I am not able to see a simpler version of it myself. What I want is to rewrite this (output of maxadmin list servers):
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
svr_node1 | 192.168.178.1 | 3306 | 0 | Master, Synced, Running
svr_node2 | 192.168.178.1 | 3306 | 0 | Slave, Synced, Running
svr_node3 | 192.168.178.1 | 3306 | 0 | Slave, Synced, Running
-------------------+-----------------+-------+-------------+--------------------
Into this:
{"svrnode1":"Master","svrnode2":"Slave","svrnode3":"Slave"}
My command does a good job, but as I said, there should hopefully be a simpler way with fewer sed commands being run.
You can use awk, like this:
json.awk
BEGIN {
    printf "{"
}
# Process everything after line four, skipping the trailing ------ line
# and the final blank line (if any).
NR>4 && !/^([-]|$)/ {
    sub(/,/,"",$9)                       # Remove trailing comma from the status word
    printf "%s\"%s\":\"%s\"", s, $1, $9  # Emit "server":"status"
    s=","                                # Set comma separator after first iteration
}
END {
    print "}"
}
Run it like this:
maxadmin list servers | awk -f json.awk
Output:
{"svr_node1":"Master","svr_node2":"Slave","svr_node3":"Slave"}
In the comments the question came up of how to achieve this without an extra json.awk file:
maxadmin list servers | awk 'BEGIN{printf"{"}NR>4&&!/^([-]|$)/{sub(/,/,"",$9);printf"%s\"%s\":\"%s\"",s,$1,$9;s=","}END{print"}"}'
Ugly, but works. ;)
If you want to put this into a shell script, consider a multiline version like this:
maxadmin list servers | awk '
    BEGIN { printf "{" }
    NR>4 && !/^([-]|$)/ {
        sub(/,/,"",$9)
        printf "%s\"%s\":\"%s\"", s, $1, $9
        s=","
    }
    END { print "}" }'
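If the JSON is needed later in the same shell script, you can simply capture it in a variable:
status_json=$(maxadmin list servers | awk -f json.awk)
echo "$status_json"   # {"svr_node1":"Master","svr_node2":"Slave","svr_node3":"Slave"}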
I am running a command like this:
mycmd1 | mycmd2 | mycmd3 | lp
Is there a way to redirect stderr to a file for the whole pipe instead of repeating it for each command?
That is to say, I'd rather avoid doing this:
mycmd1 2>/myfile | mycmd2 2>/myfile | mycmd3 2>/myfile | lp 2>/myfile
Either
{ mycmd1 | mycmd2 | mycmd3 | lp; } 2>> logfile
or
( mycmd1 | mycmd2 | mycmd3 | lp ) 2>> logfile
will work. (The first version might have a slightly faster (~1 ms) startup time, depending on the shell.)
I tried the following, and it seems to work:
(mycmd1 | mycmd2 | mycmd3 | lp) 2>>/var/log/mylogfile.log
I use >> because I want to append to the logfile rather than overwriting it every time.
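A quick way to convince yourself that the grouping catches stderr from every stage of the pipe (the paths here are just placeholders):
{ echo before; ls /nonexistent; echo after; } 2>> /tmp/err.log | cat -n
cat /tmp/err.log   # the "No such file or directory" message from ls ends up here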