Bash script on Solaris: SS7 query signaling point

I'm trying to write a script that displays some configuration from the MML console (Man-Machine Language interface) for SS7, using the file option:
cat /tmp/queryss7.txt
display-sp:;
exit:;
mml -f /tmp/queryss7.txt 0
< ERROR >:: Input source ambiguous
But this error only shows up if I use crontab to start the script. If I start the script manually, it works fine...
Thx

Related

Error when running a simple shell script on openSUSE 15.1: SubEntry: number of args (2) is invalid

The following error occurred every time I tried to run a simple shell script (test.sh):
SubEntry: number of args (2) is invalid
This single line of code is in test.sh: echo -e 'open 192.168.1.123 \nuser root pass \nput test.csv \nquit' | ftp -inv
If I run the line directly on the command line, it works fine: the file test.csv is transferred successfully via FTP to the server at 192.168.1.123.
Does anyone know why I get that error when I run the shell script? Thank you!
I found another solution for sending files via FTP from a script.
I use curl to send the file, like this:
curl -T your_file ftp://your_IP
This command can be put in a script and the script run automatically. Works for me.
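A minimal sketch of how that curl call might be wrapped for reuse from a script (the host, user name, and password are placeholders, not values from the original post):

```shell
#!/bin/sh
# upload_ftp FILE USER:PASS HOST -- hypothetical helper around curl.
upload_ftp() {
    # -T uploads the file; --user passes the credentials; the trailing
    # slash on the URL keeps the original file name on the server.
    curl --silent --show-error --user "$2" -T "$1" "ftp://$3/"
}

# Example call (needs a reachable FTP server):
# upload_ftp test.csv root:pass 192.168.1.123
```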

sending echo and error to both terminal and file log

I am trying to modify a shell script someone created for Unix. This script is mostly used to run on backend servers with no human interaction; however, I needed to make another version that allows users to input information, so it is just a modification of the old version for user input. The biggest issue I am running into is getting both error output and echos saved to a log file. The script has a lot of them, and I want them shown on the terminal as well as sent to the specified log file, to be looked into later.
What I have is this:
exec 1> ${LOG} 2>&1
This line pretty much sends everything to the log file. That is all good, but I also have people entering information into the script, and it sends everything to the log file, including the echo needed for the prompt. This line is at the beginning of the script. After reading more about stderr and stdout, I tried:
exec 2>&1 1>>${LOG}
exec 1 | tee ${LOG}
But I only get this error when running it: "./bash_pam.sh: line 39: exec: 1: not found"
I have gone over sites such as this one to solve the issue, but I do not understand why it does not print to both. Depending on where I insert it, either everything goes to the log location and nothing to the terminal, or everything goes to the terminal and nothing is preserved in the log.
EDIT: Some of the solutions, for this have mentioned that certain fixes will work in bash, but not in /bin/sh.
If you would like all output to be printed to the console while also being written to logfile.txt, run your script like this:
bash your_script.sh 2>&1 | tee -a logfile.txt
Or calling it within the file:
<bash_command> 2>&1 | tee -a logfile.txt
The -a option makes tee append to logfile.txt instead of overwriting it.
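To get both destinations from inside the script itself (what the question actually asks for), bash's process substitution can feed everything the script writes through tee. Note this is a bash feature and will not work under plain /bin/sh. A minimal sketch, with a demo log path:

```shell
#!/bin/bash
# Everything written after this exec goes to the terminal AND is
# appended to $LOG (process substitution, >(...), is bash-only).
LOG=/tmp/bash_pam_demo.log
: > "$LOG"                        # start with an empty log for the demo
exec > >(tee -a "$LOG") 2>&1

echo "this line reaches the terminal and the log"
echo "so do errors" 1>&2
```

Prompts printed with echo stay visible to the user, and the same text lands in the log for later inspection.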

Parsing SSH stream from batch

Using a batch script to run ssh, I am finding that all the output is dumped to standard error... so my command:
ssh -i keyfile user@host "commands" 2> error.log
captures the remote server's prompt for a password if there are no matching keys in the local known_hosts...
This leaves me no way to capture the output for error processing or logging without leaving the user of my batch script stuck, not knowing what the blank prompt is.
My other thought is to do a simple ssh first to test the connection and handle the password prompt if it's needed, then move on to the command of interest. That way, if the first one passes, the only thing producing errors is my remote command.
I've tried
>CON 2> error.log
... seems to do the same thing.
Unfortunately there's no TEE command by default in Windows.
My best solution is to:
1) echo Enter password
2) ssh "params" 2>error.log
Suggestions?
If the remote system supports the syntax, you could do something like this:
ssh -i keyfile user@host "commands 2>&1" > output.log 2> error.log
This redirects the remote command's error output to its standard output. ssh's own standard output and standard error aren't affected. The 2>&1 part is Bourne shell syntax to redirect standard error (descriptor 2) to standard output (descriptor 1). It should work if the remote shell is sh, bash, or ksh.
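The effect of that remote 2>&1 can be checked locally; in this sketch, sh -c stands in for the shell that ssh would start on the remote side (log paths are placeholders):

```shell
#!/bin/sh
# Without the inner 2>&1: stdout and stderr stay on separate channels.
sh -c 'echo out; echo err 1>&2' > /tmp/out_only.log 2> /tmp/err_only.log

# With the inner 2>&1 (what the ssh answer does remotely): the remote
# command's stderr is merged into stdout before it ever reaches the
# local redirections, so error.log stays free for ssh's own messages.
sh -c '{ echo out; echo err 1>&2; } 2>&1' > /tmp/merged.log 2> /tmp/leftover.log
```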

shell script: write stderr & stdout to file

I know this has been asked many times, but I can't find a suitable answer for my case.
I run a backup script using rsync from cron and would like to see all output, errors or not, from all the script's commands. I must write the command inside the script itself, and I do not want to see output in my shell.
I have been trying with no success. Below part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding all kinds of exec | tee $BKLOG at the beginning of the script, adding &> or 2>&1 at various parts of the command line, but all failed. I either get an empty log file or an incomplete one. I need to see in the log file what rsync has done, and the error if the script failed before syncing.
Thank you for your help. My shell is zsh, so any solution in zsh is welcome.
To redirect all stdout/stderr to a file, place this line at the top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"
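exec &> file is bash/zsh syntax; since an earlier question here noted that some fixes work in bash but not in /bin/sh, the portable spelling of the same idea is worth showing (log path is a placeholder for the demo):

```shell
#!/bin/sh
# "> file 2>&1" is POSIX: redirect stdout to the file, then point
# stderr at wherever stdout now goes. Works in sh, bash, ksh and zsh.
BKLOG=/tmp/backup_demo.log
exec > "$BKLOG" 2>&1

echo "an ordinary message"
echo "an error message" 1>&2
```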

Unable to pass parameters to a perl script inside a bash script

I would like to pass parameters to a Perl script, using positional parameters inside a bash script "tablecheck.sh". I am using an alias "tablecheck" to call "tablecheck.sh".
#!/bin/bash
/scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
The Perl script works fine by itself. But when I do "tablecheck MySQLinstance", $1 stays $1. It doesn't get replaced by the instance name. So I get the following output:
Exit /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
The job exits.
FYI: alias tablecheck='. pathtobashscript/tablecheck.sh'
I have a bunch of aliases in another bash script, hence the . command.
Could anyone help me? I have gone to the 3rd page of Google to find an answer and tried so many things with no luck.
I am a noob, but maybe it has something to do with it being a background job, or with $1 being in a path... I don't understand why $1 doesn't get replaced...
If I copy your exact setup (which, I agree with the other commenters, is somewhat unusual), then I believe I get the same error message:
$ tablecheck foo
[1]+ Exit 127 /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1
In the /tmp/chktables_foo file that it creates there is an additional error message, in my case "bash: /scripts/tables.pl: No such file or directory".
I suspect permissions are wrong in your case
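A shell function sidesteps the alias-plus-sourcing setup entirely: functions receive positional parameters just like scripts do, and the missing-script case behind Exit 127 can be reported explicitly. A sketch (the paths mirror the question and are placeholders):

```shell
#!/bin/sh
tablecheck() {
    instance=$1
    script=/scripts/tables.pl        # path from the question
    if [ ! -x "$script" ]; then
        echo "tablecheck: $script not found or not executable" 1>&2
        return 127                   # same status the background job reported
    fi
    "$script" "/var/lib/mysql/$instance/" "/var/mysql/$instance/mysql.sock" \
        > "/tmp/chktables_$instance" 2>&1 &
}
```

Defined in your aliases file (or ~/.bashrc), this is called the same way: tablecheck MySQLinstance.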
