Unable to use cd in .prog - oracle

While executing the following .prog script, "No such file or directory" is thrown:
#!/usr/bin/ksh
param1="$5"
echo "Parameter1 : $param1"
l_outgoing="outgoing"
l_out_path="$INTERFACE_HOME/$l_outgoing"
echo "$l_out_path"
cd $l_out_path
The script works fine up to echo "$l_out_path", which prints the correct directory.

The script was created on Windows and migrated to a Unix server.
Running the file through dos2unix fixed it!
No other changes were needed.
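For reference, a quick way to confirm and fix the Windows line endings from the Unix side (the file name here is just a placeholder):
file myscript.prog       # typically reports "... with CRLF line terminators" if the Windows endings are still there
dos2unix myscript.prog   # converts the file to Unix line endings in place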

Related

Basic Bash script results in "edge.sh: line 13: npm: command not found" - found the issue discussed here but it didn't resolve it

The following simple script is apparently not so simple.
The entire script appears to work properly until I get to the npm command.
I have looked at the numerous threads here, but none of the solutions fix the issue.
Each of the scripts is kicked off by a parent script.
Here is the parent:
#!/bin/bash/
authGogglesPath='/c/sandBox/amazon-sandbox/CraigMonroe/platform.shared.auth-goggles'
echo $'\nExecuting node commands for local running solution...\n'
#echo $(pwd)
# run the scripts
bash edge.sh ${edgePath} &
exec bash
I checked my PATH in the terminal and npm is on it.
I thought it might be running under another profile, so I tried the full path to npm, but got the same results.
The parent calls edge.sh with a string path as an argument (more on that later).
edge.sh is another simple script:
#!/bin/bash/
PATH=$1
#echo $PATH
if [ -z "${PATH}" ] ; then
"PATH is empty! Aborting"
exit 1
fi
cd "${PATH}"
echo $'\nExecuting Edge...\n'
npm run dev
Each time I run this I'm receiving:
$ bash edge.sh /c/sandBox/amazon-sandbox/CraigMonroe/platform.shared.auth-goggles/
Executing Edge...
edge.sh: line 13: npm: command not found
cmonroe@LP10-G6QD2X2 MINGW64 ~/cruxScripts
$
When I manually navigate to the directory in the terminal and run the command, it works properly: the edge builds and starts.
Unless npm is in /c/sandBox/amazon-sandbox/CraigMonroe/platform.shared.auth-goggles/, doing PATH=$1 means your PATH only refers to that one folder.
No more /usr/bin or any other folders your bash session might need.
As commented, adding to the existing PATH instead should work:
PATH="$1:${PATH}"

grep command works in command line, but not in bash script: get no such file or directory error

I know there are already some related questions about this, but I can't make it work for me!
I can run grep on the command line and it works fine, but if I run it from a bash script I get the following error:
grep: secondword: No such file or directory
I am connecting to the server via ssh and then running some commands. The path to grep on the server is /bin/grep, but it still does not work. Here is the sample code:
#!/bin/bash
$host="user@host";
ssh $host "
myinfo=\$(grep "word secondword" path/to/file);
"
I also verified that it does not have the CR that is created in Windows with Notepad++. Any ideas on how to fix this?
EDIT:
As suggested, I made the following change with the quotes:
#!/bin/bash
$host="user@host";
ssh $host "
myinfo=\$(grep \"word secondword\" path/to/file);
"
but now I have very weird behavior: it looks like it is listing all the files in the server's home path. Doing an echo of the variable:
file1 file2 file 3
file4 file5 etc.
Why does it behave like this? Did I miss something?
Set the script's working directory explicitly. When run from crontab, the user's home directory is used as the default path.
#!/bin/bash
cd Your_Path
host="user@host"
ssh "$host" 'myinfo=$(grep "word secondword" path/to/file)'
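As for the quoting that tripped up the original script, a sketch (host and file path are placeholders) that wraps the whole remote command in single quotes, so the inner double quotes reach the remote grep intact and the result is echoed back:
#!/bin/bash
host="user@host"
# Single quotes keep the local shell from expanding anything here;
# the remote grep receives "word secondword" as one quoted pattern.
ssh "$host" 'myinfo=$(grep "word secondword" path/to/file); echo "$myinfo"'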

How to properly access network location while executing bash script in cygwin's cron

I've created a bash script to take a backup of a folder to a remote location via Cygwin cron; however, I'm experiencing an issue. At the end, the script executes a command like this one:
/usr/bin/tar -zcvf //192.168.1.108/Backup/Folder/Folder.Backup.2015-12-03.1219.tar.gz /cygdrive/d/Folder
Although the command it produces works correctly when I execute it in the context of a Cygwin bash shell, it fails when run via a cron job because the remote location path is not recognized correctly. If I change the path to a local /cygdrive location or to ~/ it works correctly even via cron, so I suspect that the network shares are not being seen correctly by Cygwin in its cron environment.
Any ideas how I could solve this issue?
Here's my bash script
#!/usr/bin/bash
#the path needs to be set to execute gzip command or tar command breaks
export PATH=$PATH:/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin
if [ $# -ne 3 ]
then
echo "USAGE: backup-clients <path> <name_prefix> <source>";
exit 1;
fi
DATE=`date "+%Y-%m-%d.%H%M"`;
FILEPATH="$1/$2.Backup.$DATE.tar.gz";
COMMAND="/usr/bin/tar -zcvf $FILEPATH $3";
echo "COMMAND="$COMMAND;
eval $COMMAND;
Which I run with the command
/usr/bin/bash /cygdrive/d/mybackupscript.bash "//192.168.1.108/Backup/Folder" "Folder" "/cygdrive/d/Folder"
I really appreciate any help you can provide.

Upload not working using grive and cron

I'm currently running a small database on a CentOS 7 server.
I have one script for creating backups and another script for uploading them to Google Drive using grive. However, the script only uploads my files when I run it manually (bash /folder/script.sh). When it is run via crontab the script runs but it won't upload. I can't find any error messages in /var/log/cron or /var/log/messages.
Cron log entry:
Dec 7 14:09:01 localhost CROND[6409]: (root) CMD (/root/backupDrive.sh)
Here is the script:
#!/bin/bash
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo $file
# Upload file to G-Drive
cd /backup/database && drive upload -f $file
Add the full path to drive, or add its directory to $PATH inside the script.
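A sketch of the second option, assuming drive is installed in /usr/local/bin (check the real location with which drive and adjust):
#!/bin/bash
# cron starts with a minimal PATH, so extend it before calling drive
export PATH="$PATH:/usr/local/bin"
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo "$file"
# Upload file to G-Drive
cd /backup/database && drive upload -f "$file"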

Stdout & stderr not redirecting on autostart

I am using Raspbian (Debian with LXDE on a Raspberry Pi).
I have created the following two files. The first one is a .desktop file so that LXDE can autostart my script, and the second one is the script in question.
The problem is that when I manually start the script it works perfectly, creating the directories and redirecting the streams. However, when I reboot the Pi and the script autostarts, I get no output at all. The script is surely running, as my final app does start; only the streams are not there.
I have no idea what to search for, or what causes this...
.desktop
[Desktop Entry]
Type=Application
Exec=system_start.sh
system_start.sh
#!/bin/bash
cd ~/application.linux64/
mkdir system_log
DIR=system_log/$(date +%Y%m%d)
mkdir $DIR/
./start.sh 1> $DIR/$(date +%T)operation_log.txt 2> $DIR/$(date +%T)errors_log.txt
I had this same problem with Linux Mint. A working command with a redirect to a file did not work when started at boot via an autostart .desktop file.
Enclosing the command in bash -c " " helped:
bash -c "/home/huehuehue/myguiapp >> /home/huehuehue/myguiapp.log 2>&1"
You should probably use the whole path instead of a relative path to make your script work in any circumstances and avoid ~:
#!/bin/bash
DIR=/home/username/application.linux64
mkdir -p "$DIR/system_log"
SUBDIR="$DIR/system_log/$(date +%Y%m%d)"
mkdir -p "$SUBDIR"
"$DIR/start.sh" 1> "$SUBDIR/$(date +%T)operation_log.txt" 2> "$SUBDIR/$(date +%T)errors_log.txt"
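For completeness, a sketch of a .desktop entry that combines both suggestions, an absolute path plus bash -c (the script location and the Name value are placeholders):
[Desktop Entry]
Type=Application
Name=System Start
Exec=bash -c "/home/username/system_start.sh"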

Resources