Pass a parameter to a bash script and call bash scripts in a Chef cookbook

I have a bash script that creates stores in OpenStack. My scenario is as follows:
A parameter is passed to the cookbook when it is executed; this is a store name like store 1 or store 2.
For example, if I want to create store 5, I will call the cookbook named "store-automation-cookbook" along with the store name store 5. This store name will in turn be passed to the bash script,
and the bash script will then take care of the rest of the automation.

I answered this on the mailing list too, so I'm just copying over my answer verbatim. In the future, please don't cross-post like this.
Chef doesn't really have a good way to take command-line input like that. Using -j as mentioned is possible, but you might end up setting more things than you meant to. Using environment variables is another option, but it only works from the command line for the most part. A hybrid of the two might be best for you: consume the env var if set, otherwise use a node attribute.

Configuration verification

SSH to an IP supplied via user input
Run the show running-config command
Search the output of the command for specific parameters like ports, QoS, VTY lines, SMTP settings, IP helpers, etc.
Produce output only if the predefined parameters are not in place
If I store the output of the SSH run in a variable, is there a method I can use to parse through it, or do I just go with endless if, elsif, and else statements? How would that look?
It can be in Python, bash, or Perl; it doesn't really matter to me, but I don't know how to structure the thing and then fill in the gaps and expand the script.
Here's where I imagine starting from:
use strict;
use warnings;
use Net::SSH2;
use File::Slurp;

# Ask for the file that holds the list of commands to run
print "\nEnter command list filename: ";
my $commandlist = <STDIN>;
chomp($commandlist);

# Read the file into an array, one command per element
my @commandfile = read_file($commandlist, chomp => 1);
my $commandsize = @commandfile;    # number of commands read
How would I store the output of the commands in a variable or a temporary file and parse through it with conditions?
This is a very broad question, and it is unclear exactly what is puzzling you.
The documentation for Net::SSH2 and Net::SSH2::Channel describes clearly how to open a channel, send commands to it, and receive the response.
You ask how you could store the results of the command in a variable, but it would be very awkward to do anything else. Again, the documentation describes this clearly.
I suggest that you try writing some working code and experimenting with it. It will be much easier to help you when you have a specific question.
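As a rough starting point, a sketch along these lines may help; the host, credentials, command, and the two checks at the end are placeholders, not a working configuration audit:
use strict;
use warnings;
use Net::SSH2;

# Placeholder host and credentials; replace with your own
my $ssh2 = Net::SSH2->new();
$ssh2->connect('192.0.2.10')            or die 'Unable to connect';
$ssh2->auth_password('admin', 'secret') or die 'Authentication failed';

my $chan = $ssh2->channel();
$chan->exec('show running-config');

# Channel objects behave like file handles, so collect the whole response
my $output = '';
while (<$chan>) {
    $output .= $_;
}
$chan->close;

# Then test for whatever parameters you care about
print "No SMTP configuration found\n" unless $output =~ /smtp/i;
print "No IP helper configured\n"     unless $output =~ /ip helper-address/i;
Note that many network devices will not run commands through exec and need an interactive shell on the channel instead ($chan->shell), so treat this only as a skeleton for the read-and-parse part.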

How to exclude instances of the EC2 inventory in Ansible?

We have an Ansible server using EC2 dynamic inventory:
https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.py
https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.ini
However, with the number of instances we have, running ./ec2.py --list or ./ec2.py --refresh-cache returns a 28,000 line JSON response.
This, I assume, causes it to randomly fail (it returns a Python stack trace) because it only receives a partial response from the call to AWS, but it is then fine if run again.
That is why I want to know if there's a way to cut this down.
I know there is a way to include specific instances by tag in ec2.ini (e.g. # instance_filters = tag:env=staging), but with the way our instances are tagged, is there a way to exclude instances instead (something that would look similar to # instance_filters = tag:name=!dev)?
is there a way to exclude instances instead
Just for completeness, I wanted to point out that the "inventory protocol" for Ansible is super straightforward to implement, and they even have a JSON Schema for it.
You can see an example of the output it expects by running the newly included ansible-inventory script with --list against one of the .ini-style inventories, and then use that as a model to emit your own:
$ printf 'somehost ansible_user=bob\n\n[some_group]\nsomehost\n' > sample
$ ansible-inventory -i ./sample --list
What I am suggesting is that you might have better luck making a custom inventory script that knows your local business practices, rather than trying to force ec2.py into running a negation query (which, as best I can tell, it will not do).
To generate dynamic inventory, just make an executable -- as far as I know it can be in any language at all -- and then point the -i at the executable script instead of a "normal" file. Ansible will invoke that program, and operate on the JSON output as the inventory. There are several examples people have posted as gists, in all kinds of languages.
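For illustration only, a minimal custom inventory could be as small as the sketch below; it happens to be Perl, but any language works, and the group, host, and variable names are made up:
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;    # core module, so no extra installation needed

# Hypothetical inventory: one group plus _meta.hostvars, which is the
# structure Ansible expects from a dynamic inventory's --list output.
my %inventory = (
    webservers => {
        hosts => [ 'web1.example.com', 'web2.example.com' ],
        vars  => { ansible_user => 'deploy' },
    },
    _meta => {
        hostvars => {
            'web1.example.com' => { some_var => 'some_value' },
        },
    },
);

if (@ARGV and $ARGV[0] eq '--host') {
    # Per-host vars already live in _meta above, so an empty hash is fine here
    print encode_json({});
} else {
    print encode_json(\%inventory);
}
Make it executable and pass it to -i exactly as you would ec2.py.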
I would still love it if you would file an issue with ansible about ec2.py, because you have the situation that can make the bug report concrete for them in ways that a simple "it doesn't work for large inventories" doesn't capture. But in the meantime, writing your own inventory provider is actually less work than it sounds.
I use the option pattern_exclude in ec2.ini:
# If you want to exclude any hosts that match a certain regular expression
pattern_exclude = staging-*
and
hostname_variable = tag_Name

Adding Helper Methods to Mongo Shell

Is there any way of adding "helper" methods to the mongo shell that load each time you use it?
Basically, whenever you want to query by _id, you have to do something like this:
db.collectionName.findOne({_id: ObjectId('THIS-IS-AN-OBJECTID')})
Whenever I'm going to be running a lot of commands from the shell, I alias the ObjectId function to make it easier to type.
var ob = ObjectId;
db.collectionName.findOne({_id: ob('AN-OBJECTID')})
db.collectionName.findOne({_id: ob('ANOTHER-ONE')})
db.collectionName.findOne({_id: ob('ANOTHER')})
It would be pretty chill if there were a way to either run a specified piece of JS or add a chunk of code that runs each time mongo is launched from the shell. I checked out MongoDB's CLI documentation, but didn't see anything like that available, so I figured I would ask here.
I know there is a possibility of using this nefariously, so it might be unsupported by the mongo shell by default. Maybe we could create a helper bash script of some sort to launch the shell and then inject keyboard input to create the helper ob function? I'm not sure how this could be tackled personally, but I would love some insight on how to do something like this, either natively or through a helper script of some sort.
If you want code to execute every time you launch the shell, then whatever you place in .mongorc.js will be run on launch:
.mongorc.js File
When starting, mongo checks the user’s HOME directory for a JavaScript file named .mongorc.js. If found, mongo interprets the content of .mongorc.js before displaying the prompt for the first time. If you use the shell to evaluate a JavaScript file or expression, either by using the --eval option on the command line or by specifying a .js file to mongo, mongo will read the .mongorc.js file after the JavaScript has finished processing. You can prevent .mongorc.js from being loaded by using the --norc option.
So simply define your variable association there.
You could also supply a file of your choice along with the --shell option to let the command know you want the shell opened on completion of any instructions it contains:
mongo --shell file_with_javascript.js
But as mentioned, the .mongorc.js file would still be called (if present) unless the --norc option was also specified.

How to redirect multiple values to a script at different times from a single file

I want to redirect multiple values to a script from a single file on the command line. The script asks the user for different values and runs accordingly, using the read command to read them. Instead of asking the user for the different values, I want it to read them from a file. I want to run it like this:
run-iso.sh abc.iso <filename
and my file contains the following:
yes
10.168.10.206
25
1
And I don't want to set any delimiter in my script to read these values. It should read them just as it reads from stdin. Is this possible? Correct me if my question itself is wrong.
Thanks.
You should have a look at expect.
Expect is a program that "talks" to other interactive programs according to a script.
This way you could script the call to run-iso.sh. Here you can find some easy examples!
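If you would rather stay in Perl, the CPAN Expect module gives you the same idea; a rough sketch follows, where the prompt patterns are only guesses at what run-iso.sh prints and would need to match its real output:
use strict;
use warnings;
use Expect;

# Spawn the interactive script and answer its prompts one by one.
# The qr// patterns below are assumptions; adjust them to the real prompts.
my $exp = Expect->spawn('./run-iso.sh', 'abc.iso')
    or die "Cannot spawn run-iso.sh: $!";

$exp->expect(30,
    [ qr/continue\?/i => sub { my $e = shift; $e->send("yes\n");           exp_continue; } ],
    [ qr/IP address/i => sub { my $e = shift; $e->send("10.168.10.206\n"); exp_continue; } ],
    [ qr/size/i       => sub { my $e = shift; $e->send("25\n");            exp_continue; } ],
    [ qr/option/i     => sub { my $e = shift; $e->send("1\n");             exp_continue; } ],
);
$exp->soft_close();
That said, if run-iso.sh simply uses read on standard input, the plain run-iso.sh abc.iso < filename redirection from the question should already work; expect (or Expect.pm) is only needed when the script insists on reading from the terminal.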

What exactly is going on in Proc::Background?

I am trying to write a script that automates other Perl scripts. Essentially, I have a few scripts that roll up data for me and need to be run weekly. I also have a couple that need to be run on the weekend to check things and email me if there is a problem. I have the email part worked out, and everything but the automation. Judging by an internet search, it seems as though using Proc::Background is the way to go. I tried writing a very basic script to test it and can't quite figure it out. I am pretty new to Perl and have never automated anything before (other than through Windows Task Scheduler), so I really don't understand what the code is saying.
My code:
use Proc::Background;
$command = "C:/strawberry/runDir/SendMail.pl";
my $proc1 = Proc::Background -> new($command);
I receive an error that says no executable program located at C:... Can someone explain to me what exactly the code (Proc::Background) is doing? I will then at least have a better idea of how to accomplish my task and debug in the future. Thanks.
I did notice on Proc::Background's documentation the following:
The Win32::Process module is always used to spawn background processes
on the Win32 platform. This module always takes a single string
argument containing the executable's name and any option arguments.
In addition, it requires that the absolute path to the executable is
also passed to it. If only a single argument is passed to new, then
it is split on whitespace into an array and the first element of the
split array is used as the executable's name. If multiple arguments
are passed to new, then the first element is used as the executable's
name.
So, it looks like it requires an executable, which a Perl script would not be, but "perl.exe" would be.
I typically specify the "perl.exe" in my Windows tasks as well:
C:\dwimperl\perl\bin\perl.exe "C:\Dropbox\Programming\Perl\mccabe.pl"
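Applied to the code in the question, that might look like the sketch below; the perl.exe path is an assumption, so point it at wherever your Perl actually lives:
use strict;
use warnings;
use Proc::Background;

# Pass the interpreter as the executable and the script as its argument.
# Both paths are assumptions; adjust them to your own installation.
my $perl   = 'C:/strawberry/perl/bin/perl.exe';
my $script = 'C:/strawberry/runDir/SendMail.pl';

my $proc = Proc::Background->new($perl, $script);

# Do other work here while SendMail.pl runs in the background...
$proc->wait;    # block until the background process finishes
print "SendMail.pl finished\n";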
