storing remote terminal output in a variable [closed] - bash

Pseudo-terminal will not be allocated because stdin is not a terminal
I have been using a script similar to the one above for a long time. I log in to remote devices, have some commands run on them, and then log out. I use both telnet and ssh for the remote login. So far all the commands are echoed to the remote terminals and the output is saved to log files. What I need is to be able to interpret the output generated by the remote terminal. Consider the following command sent to the remote terminal after a successful login via SSH/Telnet:
echo "show system date"
After running the above command, I want to store the output generated by the remote machine in a variable and run some if/else statements against it. So far I haven't had any success. Can someone please guide me on how to save the output generated by the remote terminal in a variable?

If Ansible is an option, you could register the output and take some actions based on it, for example:
- name: show system date
  command: date
  register: date
- debug: msg="{{ date.stdout }}"
You could read more about ansible and the conditionals here: http://docs.ansible.com/ansible/latest/playbooks_conditionals.html
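If Ansible is not available, the same thing can be done directly from bash with command substitution over ssh. A minimal sketch, assuming key-based login; admin@device is a placeholder host and the command is the one from the question:
#!/bin/bash
# Capture the remote command's output in a variable (admin@device is a placeholder).
output=$(ssh admin@device 'show system date')

# Act on the captured output with ordinary if/else logic.
if [ -n "$output" ]; then
    echo "Remote device reported: $output"
else
    echo "No output received from the remote device" >&2
fi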

Related

Read port into Bash script variable [closed]

I am currently writing a bash script to automate my nodeJS deployment on my Ubuntu test server. The port is listed in the bin/www file. In the example line below the port is 3003, and I need to get it into a variable in my script:
var port = normalizePort(process.env.PORT || '3003');
Do I need to parse the file with some regex in order to get the port number in a variable to work with it?
If you want to parse that 3003 out in a bash script, you'll need to get the line in question and then pipe it to something that can extract the value. Something like:
grep 'normalizePort' <file_with_the_port> | grep -oE '[0-9]+'
You'll need to play around with it to get it working for your file.
However, as others have pointed out, that won't give you the port. It will only give you the default port. The real port is in an environment variable (that's what process.env means in this case), so in a bash script you'd probably access that PORT variable like so:
APP_PORT=${PORT:-3003}
You ought to read more on environment variables. Ideally you shouldn't be parsing ports out of files, unless those files are specifically created for storing config in a machine-friendly format like YAML or JSON.
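Putting the two suggestions together, a sketch of how the deployment script could pick its port (bin/www and the 3003 fallback come from the question; adjust the path to wherever the file lives):
# Prefer the PORT environment variable; otherwise fall back to the default hard-coded in bin/www.
default_port=$(grep -m1 'normalizePort' bin/www | grep -oE '[0-9]+')
app_port="${PORT:-$default_port}"
echo "Deploying the Node app on port ${app_port}"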

How to see bash script executed command? [duplicate]

This question already has answers here:
How can I debug a Bash script? [closed]
(12 answers)
I'm not sure I'm addressing this problem correctly, but I've searched and searched over the net and found nothing yet.
I'm writing a bash script and I want to see what the script is really doing while it is being executed, like a log of all the commands, one by one, as they are executed. This way I'll be able to see why my script is failing.
Note: I've posted a similar question in the past, and someone told me to run my bash script with sh -xe script.sh, but it doesn't give me enough information to debug properly.
Please advise. Thanks!
Adding set -x at the beginning of the script displays, in the terminal, every command the script runs as it is executed. It was exactly what I needed and it works perfectly.
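For illustration, a minimal sketch of what that looks like (the script contents are made up):
#!/bin/bash
set -x                      # print each command, with its expanded arguments, before running it
name="world"
echo "hello ${name}"
set +x                      # turn tracing off again for the rest of the script
The same trace can be produced without editing the file by running bash -x script.sh.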

Command runs in terminal, but doesn't work in script [closed]

On Linux Mint, the command timedatectl works as expected from the terminal, but doesn't work from a bash script. I think I made some very stupid mistake, as I'm a complete newbie at scripting, but I can't find it myself.
#!/bin/bash
timedatectl set-timezone UTC
timedatectl status
prints out
Failed to set time zone: Invalid time zone 'UTC'
Unknown operation status
The same commands work just fine from the terminal, setting the timezone and printing out the times and time settings.
UPD:
which timedatectl
prints out
/usr/bin/timedatectl
Adding this full path to the calls in the script doesn't change the output, though.
Trying to set another timezone succeeds from the terminal (timezone changes), but fails from the script. (Failed to set time zone: Invalid time zone 'Europe/Paris')
echo $PATH returns
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
The system doesn't ask for a password to execute these commands from the terminal.
UPD2: I've found what was wrong, thanks to the comment by Whydoubt.
I was calling the script like
bash script1.sh
but after the comment I tried to call it like
./script1.sh
and got
bash: ./script1.sh: /bin/bash^M: bad interpreter: No such file or directory
So I realised the problem was with the line endings (the original script was created on a Windows machine with Notepad++). After retyping the script with xed, everything started to work as expected.
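For reference, a quick way to confirm and fix Windows line endings without retyping the whole file (script1.sh as in the question; dos2unix works too, if it is installed):
file script1.sh                  # reports "... with CRLF line terminators" for a DOS-formatted file
sed -i 's/\r$//' script1.sh      # strip the carriage returns in place
./script1.sh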
The output of uname -a, hostname, hostid and hostnamectl, who, groups, stat /usr/share/zoneinfo and stat /usr/share/zoneinfo/UTC is the same from the console and from the script. /usr/share/zoneinfo/UTC is accessible from the script. I don't think I need to show the output, as it's rather lengthy and my mistake has already been found.
Should I write an answer myself to make the question answered and completed?

Shell script and CRON problems [closed]

I've written a backup script for our local dev server (running Ubuntu server edition 9.10), just a simple script to tar & gzip the local root and stick it in a backup folder.
It works fine when I run:
$ bash backups.sh
but it won't work when I run it through crontab:
59 23 * * * bash /home/vnc/backups/backup.sh >> /home/vnc/backups/backup.log 2> $1
I get the error message
/bin/sh: cannot create : nonexistent
The script makes the tar.gz in the folder it is running from (/home/user1), but then tries to copy it to a mounted share (/home/backups, which is really 192.168.0.6/backups), a network drive mounted via fstab.
The mounted share has permissions 777, but the owner and group are different from those running the script.
I'm using bash to run the script instead of sh to get around another issue I've had in the past with "bad substitution" errors.
The first 2 lines of the file are
! /bin/bash
cd /home/vnc/backups
I'm probably not supplying enough information to fully answer this post yet, but I can post more as necessary; I just don't really know where to look next.
The clue is in the error message:
/bin/sh: cannot create : nonexistent
Notice that it says "sh". The Bourne shell doesn't support some features that are specific to Bash. If you're using Bash features, then you need to tell Bash to run the script.
Make the first line of your file:
#!/bin/bash
or in your crontab entry do this:
* * * * * /bin/bash scriptname
Without seeing your crontab entry and your script it's hard to be any more specific.
Perhaps the first thing you should do in your backups.sh is insert a cd /home/user1. crond may execute your script from a different directory than you think it does, and forcing it to use the same directory regardless of how it is executed could be a good first step.
Another potentially useful debugging step is to add id > /tmp/id.$$ or something like that, so you can see exactly which user account and groups are being used to run your script.
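A sketch of that kind of debugging preamble at the top of the script (the cd path is the one from the question; the temp-file name is just an example):
#!/bin/bash
cd /home/vnc/backups || exit 1     # fail early if cron starts the script somewhere unexpected
id > /tmp/backup-id.$$             # record which user and groups cron is actually using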
In crontab, just change 2>$1 to 2>&1. I've just done one myself. Thank you Dennis Williamson.
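In other words, the corrected crontab entry would be (same paths as in the question):
59 23 * * * /bin/bash /home/vnc/backups/backup.sh >> /home/vnc/backups/backup.log 2>&1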

Why is my expect script failing on line 1? [closed]

The very first line of my expect script fails. Here are the entire contents of my script:
#!/usr/bin/expect -f
And it fails right off the bat with
": no such file or directory
as my response. Expect is in fact installed and is located in /usr/bin/, and I am running this as root. I have no extra spaces or lines before the # sign either. Of course there was more to the script originally, but it fails way before it gets to the good stuff.
Tried it and here is the result: /usr/bin/expect^M: bad interpreter
Is it possible that there's a Windows newline (the ^M) in there that's confusing the script? You can try od to see what newline character(s) follow the expect line, and tofromdos or an editor (e.g. emacs in hexl-mode) to remove them. See the man pages for more info.
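A minimal sketch of that check (myscript.exp is just an example name):
head -1 myscript.exp | od -c                        # a trailing \r before the \n means DOS (CRLF) line endings
tr -d '\r' < myscript.exp > fixed.exp && mv fixed.exp myscript.exp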
I had this issue and found I didn't have the expect interpreter installed! Oddly enough, if you ran the command in the shell it worked. However, through a shell script I got this error:
/usr/bin/expect: bad interpreter: No such file or directory
I fixed it by simply installing the Expect interpreter. The package name that was chosen was: expect libtcl8.6
Just run:
sudo apt-get install expect
Your line endings are wrong. Shove it through dos2unix or tr -d '\r'.
I don't really know expect, to be honest, but when I run that on my system it "works" fine. Nothing happens, but that's what I'd expect. I don't get any error message. According to the man page,
#!/usr/bin/expect -f
is the correct way to start your script. Expect then slurps up the script you are executing as the cmdfile.
The way I got it to reproduce the problem was to actually put a ^M at the end of the line instead of a normal newline (I saw Bert F's response and that prompted me to try it). I'm sure vim's :set list command will show any odd characters.
If you look at the error, there is a Windows newline character, which gets added when the script is copied from a Windows machine via mail or WinSCP. To avoid this error, copy the script from Linux to Linux using scp and then execute it.
