I'm trying to copy a text file from within the z/OS unix shell to a PDS member named P2.OUTPUT($010), but whenever I run the command cp file.txt "//P2.OUTPUT($010)" I get an error stating that P2.OUTPUT(-sh10) is an invalid location. For whatever reason, whenever I run the command, $010 becomes -sh10. I've tried putting $010 in '' and a few other things, but no matter what I do it doesn't seem to work. I believe it's an issue with accessing the file and not with the cp command, because I can't view the contents of the member with the cat command either, and any error from accessing the member with any command lists it as -sh10 instead of $010. Any idea what I'm doing wrong?
The problem is that the unix shell still performs parameter expansion inside double quotes: $0 is a special parameter that expands to the name of the shell, which for a login shell is -sh (as you can see by running echo $0), so your command becomes cp file.txt "//P2.OUTPUT(-sh10)".
Try escaping the $ using a backslash: cp file.txt "//P2.OUTPUT(\$010)".
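Both workarounds side by side (same dataset name as in the question): the backslash hides only the $, while single quotes around the whole argument suppress all parameter expansion, so they work too.
cp file.txt "//P2.OUTPUT(\$010)"
cp file.txt '//P2.OUTPUT($010)'
cat '//P2.OUTPUT($010)'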
Hello all, I have a program running on a Linux OS that lets me call a bash script on a trigger (such as a file transfer). I will run something like:
/usr/bin/env bash -c "updatelog.sh '${filesize}' '${filename}'"
and the script's job is to update the log file with the file name and file size. But if I pass in a file name that contains a single quote, it breaks the script with the error "unexpected EOF while looking for matching `''".
I realize that a file name with a single quote makes the calling command invalid, since the quote interferes with the command itself. However, I don't want to sanitize the variables if I can help it, because I would like my log to show the exact file name to make it easier to cross-reference later. Is this possible, or is sanitizing the only option here?
Thanks very much for your time and assistance.
Sanitization is absolutely not needed.
The simplest solution, assuming your script is properly executable (has +x permissions and a valid shebang line), is:
./updatelog.sh "$filesize" "$filename"
If for some reason you must use bash -c, use single quotes instead of double quotes around your code, and keep your data out-of-band from that code:
bash -c 'updatelog.sh "$@"' 'updatelog' "$filesize" "$filename"
Note that only updatelog.sh "$@" is inside the -c argument and parsed as code, and that this string is in single quotes, so it is passed through without any changes whatsoever.
Following it are your arguments $0, $1 and $2; $0 is used when printing error messages, while $1 and $2 go into the list of positional parameters -- aka "$@" -- passed through to updatelog.sh.
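As a quick sanity check with a made-up file name and size (printf stands in for the real script here), the embedded quote travels through as plain data:
filesize=2048
filename="report'2024.log"   # hypothetical name with an embedded single quote
bash -c 'printf "size=%s name=%s\n" "$@"' 'demo' "$filesize" "$filename"
# prints: size=2048 name=report'2024.log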
I am running a bash script in unix. The directory exists, but it won't work when I change directory in the script.
The script is located at: /oracle/archive.sh
I run the script with: sh archive.sh
Script:
SALES_DIR="/oracle/sales/"
cd $SALES_DIR
pwd
The output shows:
: No such file or directory: /oracle/sales/
/oracle
It clearly shows that it is not able to change the directory, even though the pwd command works.
Your problem is that your editor saved the script with CR (carriage return) characters at the ends of the lines (more about that at https://en.wikipedia.org/wiki/Newline). With tr you can convert the script to use only the Unix end-of-line character (LF), i.e. remove the \r characters from the script:
tr -d "\r" < archive.sh > archive.new.sh
You can also detect such special characters by running
cat -ve archive.sh
So in your case, instead of changing directory to /oracle/sales/, you were actually trying to cd to /oracle/sales/\r, which does not exist.
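For example, on a script saved with CR line endings, every line ends in ^M before the $ end-of-line marker; the output below is what the question's script would look like, so treat it as illustrative:
cat -ve archive.sh
# SALES_DIR="/oracle/sales/"^M$
# cd $SALES_DIR^M$
# pwd^M$
tr -d "\r" < archive.sh > archive.new.sh   # strip the carriage returns
sh archive.new.sh                          # the cd now succeeds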
I have an rsync command that works as expected when I type it directly into a terminal. The command includes several --include='blah' and --exclude='foo' type arguments. However, if I save that command to a one-line file called "myfile" and I try `cat myfile` (or, equivalently $(cat myfile)), the rsync command behaves differently.
I'm sure it is the exact same command in both cases.
Is this behavior expected/explainable?
I've found the answer to this question. The point is that the command substitution takes the contents of the file and treats it as a single string; the shell does not re-parse the quoting and escape characters (like ' and \) in that string the way it does when you type the command directly. After the substitution only word splitting and globbing are performed, so the quote characters in arguments like --include='blah' are passed to rsync literally, and the command behaves differently from the one typed at the terminal.
As a solution, I've just made "myfile" a shell script that I can execute rather than trying to use cat.
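A minimal sketch of the difference, with made-up rsync arguments and paths:
echo "rsync -av --exclude='*.tmp' src/ dest/" > myfile
$(cat myfile)        # rsync gets the literal argument --exclude='*.tmp', quotes and all
printf '#!/bin/bash\nrsync -av --exclude="*.tmp" src/ dest/\n' > myscript.sh
chmod +x myscript.sh
./myscript.sh        # the quotes are parsed as shell syntax, so rsync gets --exclude=*.tmp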
I have a script to manipulate some log files and then push them up to a server, to be loaded into mysql and analyzed. I have almost all of this process figured out except the automation of collecting the logs. I use sed to strip all the " characters out of the log files, so they can more easily be imported into mysql. When I run the command below directly it works fine, but run the same command in a shell script and it creates an empty file. I am not sure why -- any help would be greatly appreciated.
sed 's/\"//g' /usr/local/tomcat/logs/localhost_access_log.$yest.txt > /$DIR/iweb$yest.txt
Here is the complete script.
#!/bin/bash
#Script to manage catalina logs on our servers.
#First create the needed variables.
date=$(date +"%m_%d_%y")
adate=$(date +"%Y-%m-%d")
yest=$(date -d 'yesterday' +"%Y-%m-%d")
Customer="iwebsup"
log=/isweb/admin/catmanage/log
DIR=/catmanage
#Make Directory
#mkdir /catmanage/tomcat1/
#Run access logs through sed to remove " from file
echo "Removing quote marks from access log and moving to direcotry" &> $log.$date
sed 's/\"//g' "/usr/local/tomcat/logs/localhost_access_log.$yest.txt" > "/$DIR/iweb$yest.txt" &> $log.$date
Your original question shows redirection with > but your actual script has &>. These are rather different things, and in fact, the latter is probably incompatible with your sh.
The portable, compatible way to include error redirection is
command >file 2>&1
Your shebang line says #!/bin/bash but based on your diagnostic remarks, I am guessing you are running this with sh after all.
By the way, tr -d '"' <file >newfile would be a more efficient way to remove double quotes from a file.
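Applied to the script's sed line, that could look like this (paths and variable names are the ones from the question, so adjust as needed):
sed 's/"//g' "/usr/local/tomcat/logs/localhost_access_log.$yest.txt" > "$DIR/iweb$yest.txt" 2>> "$log.$date"
# or, since only one character is being deleted:
tr -d '"' < "/usr/local/tomcat/logs/localhost_access_log.$yest.txt" > "$DIR/iweb$yest.txt" 2>> "$log.$date"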
Launching a shell script starts a new system process. I suspect that within the context of this script's sub-shell, $yest is not set as a shell variable. If you're going to use $yest in a shell script, you should ideally pass its value as an argument to the script -- or alternatively export the shell variable as an environment variable (export yest), which will be inherited by the sub-shell process (all child processes inherit the environment of their parent process).
When debugging shell scripts, it's always useful to include set -xvu at the start of the section that you're debugging so that you can see what the script is doing and what values are stored in its variables.
-x Print commands and their arguments as they are executed.
-v Print shell input lines as they are read.
-u Treat unset variables as an error when substituting.
You can turn off this debugging by later running set +xvu.
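A minimal sketch of wrapping just the section under investigation (file names taken from the question):
#!/bin/bash
set -xvu    # echo commands and input lines as they run; error on unset variables
yest=$(date -d 'yesterday' +"%Y-%m-%d")
sed 's/"//g' "/usr/local/tomcat/logs/localhost_access_log.$yest.txt" > "iweb$yest.txt"
set +xvu    # switch the debugging back off for the rest of the script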
I've created a bash shell script file that I can run on my local bash (version 4.2.10) but not on a remote computer (version 3.2). Here's what I'm doing
A script file (some_script.sh) exists in a local folder
I've done $ chmod 755 some_script.sh to make it an executable
Now, I try $ ./some_script.sh
On my computer, this runs fine. On the remote computer, this returns a Command not found error:
./some_script.sh: Command not found.
Also, in the remote version, executable files have stars (*) following their names. I don't know if this makes any difference, but I still get the same error when I include the star.
Is this because of the bash shell version? Any ideas to make it work?
Thanks!
The command not found message can be a bit misleading. The "command" in question can be either the script you're trying to execute or the shell specified on the shebang line.
For example, on my system:
% cat foo.sh
#!/no/such/dir/sh
echo hello
% ./foo.sh
./foo.sh: Command not found.
./foo.sh clearly exists; it's the interpreter /no/such/dir/sh that doesn't exist. (I find that the error message varies depending on the shell from which you invoke foo.sh.)
So the problem is almost certainly that you've specified an incorrect interpreter name on line one of some_script.sh. Perhaps bash is installed in a different location (it's usually /bin/bash, but not always.)
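A quick way to check on the remote machine (/bin/bash is only the usual location, not a given):
head -n 1 some_script.sh    # show which interpreter the shebang line names
which bash                  # show where bash actually lives on this machine
ls -l /bin/bash             # verify that the path named on the shebang line really exists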
As for the * characters in the names of executable files, those aren't actually part of the file names. The -F option to the ls command causes it to show a special character after certain kinds of files: * for executables, / for directories, @ for symlinks, and so forth. Probably on the remote system you have ls aliased to ls -F or something similar. If you type /bin/ls, bypassing the alias, you should see the file names without the appended * characters; if you type /bin/ls -F, you should see the *s again.
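For example (the alias is whatever your remote shell has configured, so the first line's output is illustrative):
alias ls       # often something like: ls is aliased to `ls -F'
/bin/ls        # bypass the alias: plain file names, no trailing *
/bin/ls -F     # markers are back: * executable, / directory, @ symlink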
Adding a * character in a command name doesn't do what you think it's doing, but it probably won't make any difference. For example, if you type
./some_script.sh*
the * is a wildcard, and the command name expands to a list of all files in the current directory whose names match the pattern (this is completely different from the * that ls -F appends to executable files). Chances are there's only one such file, so
./some_script.sh* is probably equivalent to ./some_script.sh. But don't type the *; it's unnecessary and can cause unexpected results.
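If you're curious what the pattern expands to, echo shows the result without running anything:
echo ./some_script.sh*    # prints the matching names, e.g. ./some_script.sh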