Bash Scripting with LastPass CLI

Edit: As of 01/31/2023 the scripts that I am using below ARE working. I will report any patterns or inconsistencies I find here. I would like to leave this open in case others have findings or advice they are interested in sharing in relation to bash scripting, the LastPass CLI, or WSL.
I am looking to use the LastPass CLI to make some changes to Shared Sites within our LastPass enterprise. I was able to write the scripts (fortunately with some help from others on here); however, I am unable to get the commands to work properly within a script.
One of the commands that I WAS having trouble with was lpass share create. This command worked directly from the command line, but I was unable to run it successfully within a script. I have a very simple script, similar to the one below:
#!/bin/bash
folderpath=$1
lpassCreateStoreFolder(){
    lpass share create "$folderpath"
}
lpassLogin(){
    echo 'testPWD' | LPASS_DISABLE_PINENTRY=1 lpass login --trust --force tester@test.com
}
lpassLogin
lpassCreateStoreFolder
I've been invoking my script through the PowerShell command line like so:
wsl "path/to/script" "Shared-00 Test LastPass CLI"
Sometimes this command works within the script and other times it does not. When I tried running this script around mid-December, I had no success at all. The script would run all the way through, and the CLI would even give me a response
Folder Shared-00 Test LastPass CLI created.
and the LastPass Admin Console logs show me a report of "Create Shared Folder". The problem is that when I go to my LastPass Vault, the Shared Folder was rarely, if ever, created. Running the command directly from the command line, without a script, worked almost 100% of the time. I initially chalked this up to inconsistencies on their end, but now I am experiencing these same problems with a different command.
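One thing that may be worth ruling out (a sketch of an idea, not a confirmed fix): lpass pushes changes to the server asynchronously, and when the script is launched via wsl the process may exit before that upload completes. Forcing a blocking sync before the function returns would look like:
lpassCreateStoreFolder(){
    lpass share create "$folderpath"
    lpass sync    # force a sync so pending changes reach the server before the script exits
}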
Similarly I have been using the lpass edit command to make edits to sites within our LastPass vault. Once again, I have a relatively simple script to make the edit to the site:
#!/bin/bash
lpassId=$1
lpassSetNotes(){
    printf "Notes:\n What are your notes?\nThese are my notes" | lpass edit --non-interactive --sync=now "$lpassId"
}
lpassLogin(){
    echo 'testPWD' | LPASS_DISABLE_PINENTRY=1 lpass login --trust --force test@test.com
}
lpassLogin
lpassSetNotes
and have been invoking this script through PowerShell like so:
wsl "path/to/script" "000LastPassID000"
Like the lpass share create command, running the script does not produce the desired output. The script runs all the way through and my changes are reflected in the logs, but when I go to the vault the site itself is never changed. The command DOES, however, work when I run it from the command line directly within WSL.
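As a way to narrow this down, the script could read the item back after editing and log what the CLI now sees, e.g. (a sketch using the same placeholder ID):
lpassSetNotes
lpass sync
lpass show --notes "$lpassId"    # print the note as the CLI reports it after the edit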
I am relatively new to writing Bash scripts and to the Linux operating system, so I'm not entirely sure whether this is something wrong on my end or inconsistencies in the vendor's tool. Any help would be appreciated; I know this issue might be hard to replicate without a LastPass account.
Example LastPass CLI calls that work directly from the command line in WSL
lpass share create "Shared-00 Testing LastPass CLI"
printf "Notes:\n What are your notes?\nThese are my notes" | lpass edit --non-interactive --sync=now "$lpassId"
References
LastPass CLI
CLI Manual
CLI GitHub

Related

How to know what initial commands are being executed right after an SSH login?

I was provided a tool to SSH to a remote host. The remote host is a new Docker container to be created. I was trying to understand whether there are commands being executed right after the SSH login (i.e. probably using ssh -t <some commands>).
It seems that .bash_history does not include those commands. In that case, what else can I do to figure out what commands are being executed right after my login? Thank you.
To find out the actual commands that are executed, you could add "set -v" or "set -x" to the shell initialization file(s) on the system you are ssh-ing to.
See man bash (the "INVOCATION" section) to find out which files will be executed so that you can figure out which file to add the "set" command to.
You will probably want to do that temporarily ... because the output is verbose.
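For instance, assuming bash is the login shell and ~/.bashrc is among the files read, a minimal temporary version would be:
# at the top of ~/.bashrc on the remote host (remove when done):
set -x    # print every command before it is executed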
Another approach would be to configure sshd to set the logging level to DEBUG and see what commands are requested. However, note that sshd DEBUG logging is a user privacy violation.
If you are trying to do this kind of stuff to find out what is happening on the first "boot" of a docker instance, try putting the (temporary) config changes into the docker image that you are starting.
The bash history only contains command lines that are submitted to the shell via a shell command prompt.

curl command in git-bash

I have a script written in bash and tested working in Linux (CentOS 7) and on MacOS. The script uses cURL to interact with a REST API compliant data platform (XNAT).
I was hoping that Windows users could use the same script within git-bash that comes packaged with Git for Windows. Unfortunately there seems to be an issue when using cURL in git-bash.
The first use I make of cURL is to retrieve a JSESSION cookie:
COOKIE=`curl -k -u $USERNAME https://theaddress/JSESSION`
On Linux, this asks the user for password and stores the cookie in COOKIE.
In git-bash, issuing the command hangs until I use "ctrl + C" to interrupt it. Strangely, at that point the password prompt is displayed, but too late: the script has already terminated.
I have a suspicion that this may have to do with CR or LF issues, but cannot find any information on this that I understand.
Any pointers would be welcome!
Thank you
EDIT:
It appears the above command works fine if I pass the password in the command like this:
COOKIE=`curl -k -u $USERNAME:$PASSWORD https://theaddress/JSESSION`
However, as pointed here:
Using cURL with a username and password?
I would rather avoid having the user typing their password as a command argument.
So the question is now "why is cURL not prompting for a password when I use the first command?" in git-bash on Windows, while that command behaves as expected on Linux or macOS:
COOKIE=`curl -k -u $USERNAME https://theaddress/JSESSION`
Ending up replying to my own question, hope this may be useful to someone else.
It appears this issue is a known problem when running cURL from within git-bash, according to this thread:
https://github.com/curl/curl/issues/573
In particular, see the answer of dscho on 30 Dec 2015:
The problem is the terminal emulator we use with Git Bash since Git for Windows 2.5, MinTTY.
This terminal emulator is not associated with a Win32 Console, therefore the user does not see anything when cURL wants to interact with the user via said Console.
This issue has a workaround, which is documented here:
https://github.com/git-for-windows/build-extra/blob/master/ReleaseNotes.md#known-issues
The workaround is to run curl via winpty as follows:
winpty curl [arguments]
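If the script needs to keep working across platforms, one option (a sketch; the MSYSTEM check is just a heuristic for detecting Git Bash) is to prepend winpty only when it is present:
# Pick the right curl invocation for the current environment.
if [ -n "$MSYSTEM" ] && command -v winpty >/dev/null 2>&1; then
    CURL="winpty curl"    # Git Bash/MinTTY needs the console wrapper
else
    CURL="curl"
fi
# Writing to a file avoids capturing winpty output through a pipe,
# which can itself run into non-tty issues.
$CURL -k -u "$USERNAME" -o cookie.txt https://theaddress/JSESSION
COOKIE=$(cat cookie.txt)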
Not an issue with CR or LF after all.
Soooo, git-bash may not be the magic-bullet (tm) to run my bash scripts in Windows with zero effort. Sigh...

Bash script to ssh into computers, run a command, and direct output to append to a .txt on the server

How would I go about creating a Bash script that will ssh into a list of computers, run a command, and have the output of that command appended to a file on the server?
posting this as an answer since I do not have the required reputation for making comments...
You can use the following shell syntax; note that the >> redirection is inside the quotes, so output is appended to yourfile on each remote host:
for ip in $(<filename.txt); do ssh "$ip" 'yourcommand >> yourfile'; done;
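If instead the file should be appended to on the machine running the script, put the redirection outside the quotes (a sketch; hosts.txt and results.txt are placeholder names, and key-based auth is assumed):
while IFS= read -r ip; do
    ssh -n "$ip" 'yourcommand' >> results.txt    # -n stops ssh from consuming the host list on stdin
done < hosts.txt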
Pro tip: If you foresee doing this a lot -- you have a bunch of servers on which you must routinely issue commands, capture output, whatever -- it would pay to setup and use Ansible or any of the commonly available infra orchestration tools like Chef/Puppet etc. The reason I recommend ansible is that it requires minimal setup, and that too only on the master machine. It also supports ad-hoc commands pretty well.
PS: I do not have experience with Chef/Puppet; I've just used Ansible.

How can I interact with a Vagrant shell provisioning script?

I have a shell provisioning script that invokes a command that requires user input - but when I run vagrant provision, the process hangs at that point in the script, as the command is waiting for my input, but there is nowhere to give it. Is there any way around this - i.e. to force the script to run in some interactive mode?
The specifics are that I am creating a clean Ubuntu VM and then invoking the Heroku CLI to download a database backup (this is in my provisioning script):
curl -o /tmp/db.backup `heroku pgbackups:url -a myapp`
However, because this is a clean VM, and therefore this is the first time that I have run an Heroku CLI command, I am prompted for my login credentials. Because the script is being managed by Vagrant, there is no interactive shell attached, and so the script just hangs there.
If you want to pass temporary input or variables to a Vagrant script, you can have the user enter their credentials as temporary environment variables for that command by placing them first on the same line:
username=x password=x vagrant provision
and access them from within Vagrantfile as
$u = ENV['username']
$p = ENV['password']
Then you can pass them as an argument to your bash script:
config.vm.provision "shell" do |s|
  s.inline = "echo username: $1, password: $2"
  s.args   = [$u, $p]
end
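On the script side, the values passed via s.args arrive as ordinary positional parameters, e.g. (provision.sh is a hypothetical name):
#!/bin/bash
# provision.sh - receives the values from s.args in the Vagrantfile
username="$1"
password="$2"
echo "provisioning as $username"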
You can install something like expect in the vm to handle passing those variables to the curl command.
I'm assuming you don't want to hard code your credentials in plain text thus trying to force an interactive mode.
The thing is, just like you, I don't see such an option in the vagrant provision docs ( http://docs.vagrantup.com/v1/docs/provisioners/shell.html ), so one way or another you need to embed the authentication within your script.
Have you thought about getting a token and using the Heroku REST API instead of the CLI?
https://devcenter.heroku.com/articles/authentication
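For example, the Heroku CLI can read credentials from ~/.netrc, so a provisioning script could seed that file from values passed in as described above (a sketch; HEROKU_EMAIL and HEROKU_API_KEY are assumed to be provided by you, and the exact entries expected are covered in the article above):
# Seed ~/.netrc so the heroku command does not prompt for a login.
cat > ~/.netrc <<EOF
machine api.heroku.com
  login $HEROKU_EMAIL
  password $HEROKU_API_KEY
EOF
chmod 600 ~/.netrc
curl -o /tmp/db.backup `heroku pgbackups:url -a myapp`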

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but other than running one command, or perhaps a few commands, with something such as:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once. Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point to me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
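For example, a sketch of the parallel part (hosts.txt and deploy.sh are placeholder names; key-based auth is assumed):
while IFS= read -r host; do
    ssh "$host" 'sh' < deploy.sh > "out.$host" 2>&1 &    # one background job per host
done < hosts.txt
wait    # block until every remote run has finished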
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
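In shell terms, that structure is just (setup.sh and user@host are placeholders):
scp setup.sh user@host:/tmp/setup.sh    # step 1: copy the script over
ssh user@host 'sh /tmp/setup.sh'        # step 2: run it remotely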
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's; it is documented here - in French).
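A typical invocation looks like this (assuming the pssh from the parallel-ssh package; hosts.txt lists one host per line):
pssh -h hosts.txt -l remoteuser -i 'uptime'    # -i prints each host's output inline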
Have you looked at things like Puppet or Cfengine. They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run

env.hosts = ['host1@example.com', 'host2@example.com']

def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following example connects to a machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
    ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or zsh with the libwww plugin.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's commandline, this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;', or you could do this:
First scp the package across, then run the deployment over ssh:
scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
# define your paths first, e.g.:
TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
THE_PLACE=the_place
APACHE_VHOST_DIR=/etc/httpd/conf.d
APACHECTL=apachectl
cd $THE_PLACE && tar xjf the_package.tbz    # or rsync from rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war               # you don't normally have to restart tomcat
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart    # might need to log in as the apache user to move that file and restart
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with a list of nodes in it), and you specify which node group you wish to run commands on; then you use dsh, like you would use ssh, to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
This creates, in ~/bin, an ssh wrapper command for every host in your known_hosts file, so each machine can be reached by typing just its hostname.
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is already hashed, delete it so it is regenerated unhashed
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts|cut -f1 -d " "`;
do
[ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example execution:
for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
I just read a blog post about using setsid, which requires no further installation/configuration beyond the mainstream kernel. Tested/verified under Ubuntu 14.04.
While the author has a very clear explanation and sample code as well, here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is plain old tmux. Say you have 9 Linux servers. Pick a box as your main. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into each box:
ctrl+b :setw synchronize-panes on
Now when you press any keys, they show up in every pane. To turn it off, run the same command with off instead of on. To cycle through pane layouts, press ctrl+b <space-bar>.
This works a lot better for me since I need to see each terminal's output, as servers sometimes crash or hang for whatever reason when downloading or upgrading software. Any issues, you can just isolate and resolve individually.
