wbemcli: key/value pair output - bash

If I use wbemcli to enumerate all the instances I get something similar to this:
wbemcli -nl -t -noverify ei 'https://aaa/aaa:aaa'
https://aaa/aaa:aaa.Version="",Vendor="",Name=""
-Version#=""
-Vendor#=""
-Name#=""
-Description=""
How can I call wbemcli to get only one item (e.g. the Name) and not everything?
The -t option says:
-t
Append array ([]), reference (&) and key property (#) indicators to property names
but I wasn't able to use this to my advantage.
Is there a way to retrieve this information in a key/value pair format?
Or maybe pipe the output into an array or something from which I can grab only what I need?
When I drop the output into an array all the data is stored in the first element ${a[0]}.
EDIT
Here's an output example:
$ wbemcli -nl -t -noverify ei 'https://user:pw@000.000.000.000:0000/root/aaa:AA_AaaAaaaAaaaa'
000.000.000.000:0000/root/aaa:AA_AaaAaaaAaaaa.ClassName="AA_AaaAaaaAaaaa",Name="123456a7ff890123"
-ClassName#="AA_AaaAaaaAaaaa"
-Name#="123456a7ff890123"
-Caption="aa aaa"
-Description="aa aa"
-ElementName="aa aaa aaaa"
-OperationalStatus[]=2
-HealthState=5
-CommunicationStatus=2
-DetailedStatus=1
-OperatingStatus=0
-PrimaryStatus=1
-EnabledState=5
-RequestedState=12
-EnabledDefault=2
-TransitioningToState=12
-PrimaryOwnerName="Uninitialized Contact"
-PrimaryOwnerContact="Uninitialized Contact"
The output is usually in this format.
If the query returns multiple objects they will be grouped and all will have the same members with their appropriate values.

http://linux.dell.com/files/whitepapers/WBEM_based_management_in_Linux.pdf has a number of examples which simply suggest using grep to obtain the specific key and value you are looking for. There does not seem to be a way to directly query for a specific key within a result set.
Expanding on the comment by Etan Reisner, you could use something like
wbemcli <<query>> | grep -oP "^-$key=\K.*"
to obtain the value for the key named in $key, provided you have GNU grep, which supports the -P option for Perl-compatible regular expressions (here, the \K "forget everything matched so far" operator is useful). Note that -t appends # to key property names, so include it in the pattern when the property is a key. So for your specific example,
wbemcli -nl -t -noverify ei 'https://aaa/aaa:aaa' |
grep -oP '^-Name#=\K.*'
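If you want all the key/value pairs available in the shell at once, a minimal sketch (assuming bash 4+ for associative arrays, and the -Key=Value output format shown in your example) could parse the property lines into an array:
declare -A props
while IFS='=' read -r key value; do
    key=${key#-}       # strip the leading dash
    key=${key%#}       # strip the key-property indicator added by -t
    value=${value%\"}  # strip surrounding double quotes, if present
    value=${value#\"}
    props[$key]=$value
done < <(wbemcli -nl -t -noverify ei 'https://aaa/aaa:aaa' | grep '^-')
echo "${props[Name]}"
This also addresses the array idea from the question: each property lands in its own element instead of the whole output sitting in ${a[0]}.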
There is also a -dx option which produces XML output, which may be more robust if you are planning to write a major application on top of this protocol (but then perhaps you should be looking at a dedicated WBEM library such as the C or Java libraries listed in their wiki). It would also be feasible to write a simple (e.g.) Python client to retrieve part of the result tree and let you query or manipulate it locally.

Related

How to sort characters after a period

Need help on how to sort characters or numbers after a period (.).
test2.rod1
test1.rod1
test3.rod1
test1.mor2
test2.mor2
test3.mor2
zbcd1.abc1
abcd2.abc1
dbcd3.abc1
I would like to sort on anything after the period (.). The result should be something like below.
abcd2.abc1
dbcd3.abc1
zbcd1.abc1
test1.mor2
test2.mor2
test3.mor2
test2.rod1
test1.rod1
test3.rod1
If you're using a system with Unix-like utilities, such as macOS, Linux, BSD, etc., then you can use the system sort command. The trick is to specify the field delimiter, which in your case is a period. The option is either -t or --field-separator. So the following should work:
sort -t. -k 2 test.dat
This assumes that your data is in a file called test.dat.
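Note that -k 2 makes the sort key run from the second field to the end of the line, with ties broken by a whole-line comparison. If you want the key to be only the second field, a sketch restricting it (with -s to keep equal-keyed lines in their input order) would be:
sort -t. -k2,2 -s test.dat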

Create new bash var from value of dict bash var

My environment created a variable that looks like this:
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"bool_param":true,"float_param":1.25,"int_param":5,"model_dir":"s3://bucket/detection/prefix/testing-2019-04-06-02-24-20-194/model","str_param":"bla"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"testing-2019-04-06-02-24-20-194","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://bucket/prefix/testing-2019-04-06-02-24-20-194/source/sourcedir.tar.gz","module_name":"launcher.sh","network_interface_name":"ethwe","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"ethwe"},"user_entry_point":"launcher.sh"}
EDIT by Ed Morton: per the OP's comment below, this is what (s)he is trying to describe above as the sample input:
$ SM_TRAINING_ENV='{"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"bool_param":true,"float_param":1.25,"int_param":5,"model_dir":"s3://bucket/detection/prefix/testing-2019-04-06-02-24-20-194/model","str_param":"bla"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"testing-2019-04-06-02-24-20-194","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://bucket/prefix/testing-2019-04-06-02-24-20-194/source/sourcedir.tar.gz","module_name":"launcher.sh","network_interface_name":"ethwe","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"ethwe"},"user_entry_point":"launcher.sh"}'
$ echo "$SM_TRAINING_ENV"
{"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"bool_param":true,"float_param":1.25,"int_param":5,"model_dir":"s3://bucket/detection/prefix/testing-2019-04-06-02-24-20-194/model","str_param":"bla"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"testing-2019-04-06-02-24-20-194","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://bucket/prefix/testing-2019-04-06-02-24-20-194/source/sourcedir.tar.gz","module_name":"launcher.sh","network_interface_name":"ethwe","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"ethwe"},"user_entry_point":"launcher.sh"}
How can I create a new bash variable that is equal to the value of SM_TRAINING_ENV["hyperparameters"]["model_dir"]?
For completeness, I was trying simple things like echo ${SM_TRAINING_ENV} | jq . and kept getting errors with everything I tried.
Edit: I've been informed that this value isn't proper JSON, so I am rewording the question. I think the environment sets it to the value of a Python dictionary, so jq seems unusable. Removed the json tag. Maybe this is a job for awk?
It looks like I can match the value I want if I assume the structure doesn't change with the regex pattern s3.*?model, but not sure how to set a regex pattern to a new variable.
First, you need to quote the JSON value so that the double quotes will be included in the value.
SM_TRAINING_ENV='{"additional_framework_parameters":{},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"bool_param":true,"float_param":1.25,"int_param":5,"model_dir":"s3://bucket/detection/prefix/testing-2019-04-06-02-24-20-194/model","str_param":"bla"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"testing-2019-04-06-02-24-20-194","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://bucket/prefix/testing-2019-04-06-02-24-20-194/source/sourcedir.tar.gz","module_name":"launcher.sh","network_interface_name":"ethwe","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"ethwe"},"user_entry_point":"launcher.sh"}'
Then you can use the jq utility to extract the value you want (-r outputs the raw string without the surrounding JSON quotes):
new_var=$(echo "$SM_TRAINING_ENV" | jq -r '.hyperparameters.model_dir')
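For the value above, new_var should then contain:
s3://bucket/detection/prefix/testing-2019-04-06-02-24-20-194/model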
This doesn't really index, but it works if the order is always the same (the non-greedy .*? needs grep's -P mode; plain egrep doesn't support it):
NEW_VAR=$(echo "$SM_TRAINING_ENV" | grep -oP 's3.*?model' | head -1)
Would much prefer something not dependent on order though.

Find and replace in file with script

I want to find and replace the VALUE in an XML file:
<test name="NAME" value="VALUE"/>
I have to filter by name (because there are a lot of lines like that).
Is it possible?
Thanks for your help.
Since you tagged the question "bash", I assume that you're not trying to use an XML library (although I think an XML expert might be able to give you something like an XSLT processor command that solves this question very robustly), but that you're simply interested in doing search & replace from the commandline.
I am using perl for this:
perl -pi -e 's#VALUE#replacement#g' *.xml
See the perlrun man page: very briefly put, -p switches perl into text-processing mode, -i stands for "in-place", and -e lets you specify an expression to apply to all lines of input.
Also note (if you are not too familiar with this already) that you may use characters other than # as the delimiter (common ones are %, a comma, etc.) as long as they don't clash with your search and replacement strings.
There is one small caveat: perl will read and write all files given on the command line, even those that did not change, so their modification times will be updated regardless. (I usually work around that with some more shell magic, e.g. using grep -l or grin -l to select files for perl to work on, as sketched below.)
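For example, a sketch of that workaround (only files that actually contain the search string get rewritten, so the other files' timestamps stay untouched; assumes filenames without whitespace):
grep -l VALUE *.xml | xargs perl -pi -e 's#VALUE#replacement#g'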
EDIT: If I understand your comments correctly, you also need help with the regular expression to apply. Let me briefly suggest something like this then:
perl -pi -e 's,(name="NAME" value=)"[^"]*",\1"NEWVALUE",g' *.xml
Related: bash XHTML parsing using xpath
You can use sed:
sed 's/\(<test name="NAME"\) value="VALUE"/\1 value="YourValue"/' test.xml
where test.xml is the XML document containing the given node. This is very fragile, and you can work to make it more flexible if you need to do this substitution multiple times. For instance, the current statement is case sensitive, so it won't substitute the value on a node with name="name", but you can add GNU sed's case-insensitivity flag to the end of the statement, like so:
sed 's/\(<test name="NAME"\) value="VALUE"/\1 value="YourValue"/I' test.xml
Another option would be to use XSLT, but it would require you to download an external library. It's pretty versatile, and could be a viable option for more complex modifications to an XML document.
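For reference, a structure-aware alternative would be a sketch with xmlstarlet (assuming it is installed; it addresses the node via XPath rather than a regex, so formatting differences don't break it, and -L edits the file in place):
xmlstarlet ed -L -u '//test[@name="NAME"]/@value' -v 'YourValue' test.xml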

unix sort -n -t"," gives unexpected result

unix numeric sort gives strange results, even when I specify the delimiter.
$ cat example.csv # here's a small example
58,1.49270399401
59,0.000192136419373
59,0.00182092924724
59,1.49270399401
60,0.00182092924724
60,1.49270399401
12,13.080339685
12,14.1531049905
12,26.7613447051
12,50.4592437035
$ cat example.csv | sort -n --field-separator=,
58,1.49270399401
59,0.000192136419373
59,0.00182092924724
59,1.49270399401
60,0.00182092924724
60,1.49270399401
12,13.080339685
12,14.1531049905
12,26.7613447051
12,50.4592437035
For this example, sort gives the same result regardless of whether you specify the delimiter. I know that if I set LC_ALL=C then sort gives the expected behavior again. But I do not understand why the default environment settings, as shown below, would make this happen.
$ locale
LANG="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_ALL=
I've read from many other questions (e.g. here, here, and here) how to avoid this behavior in sort, but still, this behavior is incredibly weird and unpredictable and has caused me a week of heartache. Can someone explain why sort with default environment settings on Mac OS X (10.8.5) would behave this way? In other words: what is sort doing (with the locale variables set to en_US.UTF-8) to get that result?
I'm using
sort 5.93 November 2005
$ type sort
sort is /usr/bin/sort
UPDATE
I've discussed this on the gnu-coreutils list and now understand why sort with the default English Unicode locale settings gave the output it did. Because in an English Unicode locale the comma character "," is considered numeric (so as to allow commas as thousands separators), and sort defaults to "being greedy" when it interprets a line, it read the example numbers as approximately
581.491...
590.000...
590.001...
591.492...
600.001...
601.492...
1213.08...
1214.15...
1226.76...
1250.45...
Although this was not what I had intended, chepner is right: to get the result I actually want, I need to tell sort to key on only the first field. By default, sort interprets as much of the line as possible as the key, rather than just the first field.
This behavior of sort has been discussed in the gnu-coreutils FAQ, and is further specified in the POSIX description of sort.
So, as Eric Blake on the gnu-coreutils list put it, if the field separator is also a numeric character (which a comma is), then "Without -k to stop things, [the field-separator] serves as BOTH a separator AND a numeric character - you are sorting on numbers that span multiple fields."
I'm not sure this is entirely correct, but it's close.
sort -n -t, will try to sort numerically by the given key(s). In this case, the key is a tuple consisting of an integer and a float. Such tuples cannot be sorted numerically.
If you explicitly specify which single keys to sort on with
sort -k1,1n -k2,2n -t,
it should work. Now you are explicitly telling sort to first sort on the first field (numerically), then on the second field (also numerically).
I suspect that -n is useful as a global option only if each line of the input consists of a single numerical value. Otherwise, you need to use the -n option in conjunction with the -k option to specify exactly which fields are numbers.
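Putting that together for the example data, a sketch (each key is now a single field, so the comma separator can no longer be absorbed into a number):
sort -t, -k1,1n -k2,2n example.csv
which yields:
12,13.080339685
12,14.1531049905
12,26.7613447051
12,50.4592437035
58,1.49270399401
59,0.000192136419373
59,0.00182092924724
59,1.49270399401
60,0.00182092924724
60,1.49270399401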
Use sort --debug to find out what's going on.
I've used that to explain in detail your issue at:
http://lists.gnu.org/archive/html/coreutils/2013-10/msg00004.html
If you use
cat example.csv | sort
instead of
cat example.csv | sort -n --field-separator=,
then it gives the expected output for this example (with this particular data, plain lexicographic order happens to coincide with the numeric order). Hope this is helpful to you.
Note: I tested with "sort (GNU coreutils) 7.4".

Track file by extended attribute value

So I am trying to make a script that references other files. I want to be able to keep track of a file even if it moves. So I was thinking that if I could assign a file a unique value, I could then find the file's location by searching for that value.
Is there a better way to do this?
Basically I'd like to be able to find a file from a value it has as an extended attribute. But I don't know if this is possible.
Any help would be great. Thanks.
You could use the inode number (show it with ls -i /some/file), which is unique per file and does not change when the file is modified or moved, UNLESS you move the file to a different partition. If you don't need to track files across multiple partitions, then this is a very easy solution.
To find a file by inode number you can use find -inum <inode number>
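Putting the two together, a minimal sketch (assuming /mountpoint stands for the top of the filesystem the file lives on; -xdev keeps find from crossing into other filesystems, where the inode number would be meaningless):
inode=$(ls -i /some/file | awk '{print $1}')
find /mountpoint -xdev -inum "$inode" 2>/dev/null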
If you need to move the file across filesystems, you can use the setfattr command to set extended attributes on a file; note that your kernel must support this.
There is no one-shot command to find the file, but you can create a simple script that searches for your unique key (see the sketch at the end of this answer).
To set a key named user.unique on yourfile, with a string value of your choice, e.g. n1234:
setfattr -n user.unique -v n1234 yourfile
To retrieve the value you can use the getfattr command:
getfattr -n user.unique yourfile
or to get all extended file attributes:
getfattr -d yourfile
To test whether your kernel can handle extended attributes for your filesystem types:
zcat /proc/config.gz | grep FS_XATTR
To write a simple script to search using extended attributes you can refer to this Hack; on the same page you can read more about extended attributes.
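As a starting point, a minimal sketch of such a search script (assuming getfattr from the attr package; the value to look for is the first argument, and the search root, defaulting to the current directory, is the second):
#!/bin/sh
# Print every file under the search root whose user.unique attribute equals $1.
value="$1"
find "${2:-.}" -type f 2>/dev/null | while IFS= read -r f; do
    if [ "$(getfattr --only-values -n user.unique "$f" 2>/dev/null)" = "$value" ]; then
        printf '%s\n' "$f"
    fi
done
Called, for example, as ./findbyattr.sh n1234 /home.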
