Unable to access UNC drive from a bash script (.sh)

I'm trying to access a network drive (UNC path) from a bash script. The drive requires a username and password.
I can access the UNC path by running individual commands such as cd and net use, but I can't get it to work from a script. These are the steps I follow:
1) Mount the drive as x: with the following command:
Command: net use x: \\\\Server_name\\Directory /user:users pass /PERSISTENT:YES
Result: x: drive mounted successfully
2) test.sh
#!/bin/bash
ls /cygdrive/x
count_node1 = cat a.log b.log.1 v.log.2 |grep "&&&&" | sort -k1,2 | grep -c 'word'
#count_node1="got it"
echo helloworld
echo $count_node1
#end
Result: helloWorld
: No such file or directory/x
count_node1: command not found
3) Furthermore, if I run each line individually from Cygwin, it works perfectly.
This is my first time trying a bash script, so I'm really confused.

The order matters with net use:
net use x: \\\\Server_name\\Directory pass /user:users

Check the syntax of your shell (bash). The correct form is:
count_node1=$(cat a.log b.log.1 v.log.2 |grep "&&&&" | sort -k1,2 | grep -c 'word')
or
count_node1=$(grep '&&&&' a.log b.log.1 v.log.2 |sort -k1,2 | grep -c 'word')
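The corrected assignment can be verified with a tiny self-contained pipeline (the sample data below is made up); with spaces around =, the shell would instead look for a command named count_node1:

```shell
# $( ) captures the pipeline's stdout; no spaces are allowed around "=".
count_node1=$(printf 'word one\nword two\nother\n' | grep -c 'word')
echo "$count_node1"    # prints 2
```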

Remove the carriage return characters from the ends of your script's lines, i.e. save the file in Unix format instead of DOS format.
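If dos2unix is not installed, tr can strip the carriage returns just as well; a minimal sketch on a throwaway file:

```shell
tmp=$(mktemp)
printf 'echo hello\r\n' > "$tmp"      # a one-line script saved in DOS (CRLF) format
tr -d '\r' < "$tmp" > "$tmp.unix"     # delete every carriage-return byte
out=$(sh "$tmp.unix")                 # the converted script now runs cleanly
echo "$out"
```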

Related

How to store or capture variable from one directory to another and execute with cut command in Linux

I need to do the following: first, read server names from a text file line by line and run a grep command on each. The main task is to find the full FQDN server name, using only the hostname, in the NetBackup logs path, and then run further commands on it. Here is the script I wrote, but I am a beginner in shell scripting and need your help. What am I missing here? I was wondering how to save a variable's value, use it in another directory path, and run commands there. If I run these commands manually on the command line, they give me what I need. Thank you all.
#!/bin/sh
echo
echo "Looking for the whole backup client names?"
for server in `cat ./List_ServerNames.txt`;
do
cd /usr/openv/netbackup/logs/nbproxy/
return_fqdn=`grep -im1 '$server' "$(ls -Art /usr/openv/netbackup/logs/nbproxy/ | tail -n1 | cut -d : -f8)" | cut -d " " -f 6 | cut -c -36 | cut -d "(" -f 1`
echo $return_fqdn
done
The output of the script is below:
Looking for the whole backup client names?
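No answer is shown above, but the most likely culprit is the single quotes in grep -im1 '$server': single quotes stop the variable from expanding, so grep looks for the literal string $server. Below is a self-contained sketch of the quoting fix; the file names, log contents, and cut fields are invented stand-ins for the real NetBackup layout:

```shell
logdir=$(mktemp -d)      # stand-in for /usr/openv/netbackup/logs/nbproxy/
printf 'web01\ndb02\n' > "$logdir/List_ServerNames.txt"
printf 'connect from web01.example.com(10.0.0.1)\nconnect from db02.example.com(10.0.0.2)\n' \
    > "$logdir/proxy.log"

while IFS= read -r server; do
    # Double quotes (not single) so $server expands before grep runs
    line=$(grep -im1 "$server" "$logdir/proxy.log")
    fqdn=$(printf '%s\n' "$line" | cut -d ' ' -f 3 | cut -d '(' -f 1)
    echo "$fqdn"
done < "$logdir/List_ServerNames.txt"
```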

Why is this bash script not changing path?

I wrote a basic script which changes the directory to a specific path and shows the list of folders, but my script shows the list of files in the current folder where my script lives instead of the one I specify in the script.
Here is my script:
#!/bin/bash
v1="$(ls -l | awk '/^-/{ print $NF }' | rev | cut -d "_" -f2 | rev)"
v2=/home/PS212-28695/logs/
cd $v2 && echo $v1
Does anyone know what I am doing wrong?
Your current script doesn't do what you expect. The v1 variable is NOT a command to be executed later; because of the $() syntax it holds the output of ls -t at the moment of assignment, and that's why you see files from the current directory: that was your working directory at that particular moment. So you should rather do an ordinary
ls -t /home/PS212-28695/logs/
EDIT
It runs, but what if I need to store the ls -t output in a variable?
Then it's the same syntax you already had, but with the proper arguments:
v1=$(ls -t /home/PS212-28695/logs/)
echo ${v1}
If for any reason you want to cd, then you have to do that prior to setting v1, for the same reason explained above.
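If the intent really is "cd first, then list", the cd has to happen before the assignment; a small sketch using a temp directory as a stand-in for /home/PS212-28695/logs/:

```shell
tmp=$(mktemp -d)                  # stand-in for /home/PS212-28695/logs/
touch "$tmp/a.log" "$tmp/b.log"
cd "$tmp" || exit 1
v1=$(ls -t)                       # captured AFTER cd, so it lists $tmp
echo "$v1"
```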

How to create a file with its name starting with dash in Linux? (ex "-file")

How can I create a file named "-file" using command line in Linux?
specify a path in front of it, e.g. ./-file
By convention, most commands (including touch) treat -- as "nothing after this should be taken as an option", so -file is no longer parsed as a flag.
touch -- -file
ls -ltr | awk '$NF ~ /^-/ { print "rm ./" $NF }' | sh
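Both answers can be combined into a short runnable demo (run in a scratch directory); the -- and the leading ./ each keep the dash from being parsed as an option:

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch -- -file          # "--" ends option parsing, so "-file" is a filename
[ -e ./-file ] && echo created
rm ./-file              # "./" prefix also works for commands without "--" support
[ ! -e ./-file ] && echo removed
```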

lftp: how to recursively set permissions; firstly by directory than by file

When securing a Drupal or WordPress installation on a shared host that does not expose SSH access (a lousy situation, fwiw), lftp seems like the right approach to batch-setting permissions on directories and files. The find command boasts that you can redirect its output, so one should be able to run find, use grep to match only lines ending in "/" (meaning a directory), set the permissions on those matches to 755, do the inverse for file matches and set them to 644, and then fine-tune specific files such as settings.php and so forth.
lftp prompt> find . | grep "/$" | xargs chmod -v 755
This isn't working, and I'm sure I have failed to chain these commands in the correct sequence and format.
How do I get this to work?
Update: by "isn't working" I mean that the above command produces no output to the console, nor to the lftp error log. It isn't running these commands locally, fwiw. I'll reduce the command as a demonstration:
find . | grep "/$"
will take the output of find and return the matches: here, directories, by nature of the string match:
./daily/
./ffmpeg-installer/
./hourly/
./includes/
./includes/database/
./includes/database/mysql/
./and_so_forth_on_down
Which is cool, since I wish to perform a chmod (an internal command for lftp, with support varying by FTP server). So I expand the command like this:
find . | grep "/$" | xargs echo
Which outputs nothing. No error output, either. The pipe from grep to xargs isn't happening.
My goal is to form the equivalent of:
chmod 755 ./daily/
chmod 755 ./ffmpeg-installer/
In lftp, the chmod command is performing an ftp-server-permissions change, not a local perms change.
For an explanation of why this does not work as expected, read on - for a solution to the given problem, scroll down.
The answer can be found in the manpage for lftp, which states that
"[s]ome commands allow redirecting their output (cat, ls, ...) to file or via pipe to external command."
So, when you use a pipe like this on a command that does support redirection in lftp, you are piping its output to your local tools. That will eventually result in chmod trying to change the permissions of a file/directory on your local machine, and it will most likely fail unless you coincidentally have the same directory layout in your current local directory, which is probably the problem you encountered.
The grep + xargs pipe does work; I just tested the following:
lftp> find -d 2 | grep "/$"
./
./applications/
./lost+found/
./netinfo/
./packages/
./security/
./systems/
lftp> find -d 2 | grep "/$" | xargs echo
./ ./applications/ ./lost+found/ ./netinfo/ ./packages/ ./security/ ./systems/
My wild guess is that it did not appear to work for you because you did not specify a max-depth for find, and the network connection plus buffering in the pipe got in the way. When I try the same on a directory containing many files/subfolders, it takes really long to finish and print. Did the command actually finish for you without output?
But still, what you are trying to do is not possible. As I stated, the right-hand side of the pipe works with external commands (even if a built-in of the same name exists), as explained by the manual, so
lftp> chmod 644 foobar
and
lftp> echo "foobar" | xargs chmod 644
are not equivalent.
Yes, chmod is a built-in, but when used in a pipe in the client it will not execute the built-in; the manpage clearly states this, and you can easily test it yourself. Try the following commands and check their output:
lftp> echo foo | uname -a
lftp> echo foo | ls -al
lftp> echo foo | chmod --help
lftp> chmod --help
Solution
As far as a solution to your problem is concerned, you can try something along the lines of:
#!/bin/bash
server="ftp.foo.bar"
root_folder="/my/path"
{
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep "/$"
quit
EOF
} | awk '{ printf "chmod 755 \"%s\"\n", $0 }'
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep -v "/$"
quit
EOF
} | awk '{ printf "chmod 644 \"%s\"\n", $0 }'
} | lftp "${server}"
This logs in to your server, cds to the folder where you want to start recursively changing permissions, uses find + grep to find all directories, and logs out. It pipes this file list into awk to build chmod commands around each entry, repeats the whole process for files, and then pipes the whole list of commands into a new lftp invocation to actually run the generated chmod commands.
You will also have to add your credentials to the lftp invocations and you might want to comment out the final | lftp "${server}" to check if it produces the desired output before you actually run the whole thing. Please report back if this works for you!

/proc directory script

I'm looking for a ruby script that accesses the /proc directory and saves the process ID and command line (cmdline) information in a file.
You may want to call ps instead of going through /proc:
cmd = `ps -eo pid,cmd`
File.open("output", "w") { |o| o.write(cmd) }
You can also run the one-liner shell script below and redirect its output anywhere, as well as choose the required argument for the head command. Note that the command substitution expands to several PIDs, so loop over them rather than gluing them all onto a single /proc/ prefix:
for pid in $(ls /proc | grep -E '^[0-9]+$' | sort -n | head); do ls -alR "/proc/$pid"; done > /tmp/proc_open_files
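Since the question actually asks for the PID and cmdline of each process, here is a plain-shell sketch that reads /proc directly (Linux only). The output path /tmp/proc_cmdlines.txt is an arbitrary choice; /proc/<pid>/cmdline separates arguments with NUL bytes, hence the tr:

```shell
out=/tmp/proc_cmdlines.txt       # arbitrary output path
for d in /proc/[0-9]*; do
    pid=${d#/proc/}
    # cmdline is NUL-separated; turn NULs into spaces (kernel threads yield "")
    cmdline=$(tr '\0' ' ' < "$d/cmdline" 2>/dev/null)
    printf '%s %s\n' "$pid" "$cmdline"
done > "$out"
```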
