Find and replace with CGI script - bash

I'm trying to use a CGI script to run a find and replace command on a specific text file.
I currently have a CGI script (foo.sh) which then executes a non-CGI shell script (bar.sh). In the non-CGI shell script (bar.sh), I'm able to perform a number of simple bash commands such as wget, mkdir and cd, and I'm also able to execute a .js file with the standard dot-slash bash syntax.
However, I can't get any find and replace commands to work when executed with CGI. I've tried sed, awk and perl, all of which work perfectly when used either directly on the command line or if I execute bar.sh from the command line. But once I try to execute the CGI script from the browser, the find and replace commands no longer work.
The syntax I've tried is below. Any suggestions appreciated.
sed -i 's/foo/bar/g' text.txt
{ rm text.txt && awk '{gsub("foo", "bar", $0); print}' > text.txt; } < text.txt
perl -p -i -e 's/foo/bar/g' text.txt

Just a few thoughts:
If run by the web server, the script will probably be executed as a different user, which could lead to permission-related problems.
Maybe the awk/sed etc. commands are not in the PATH used by the web server process (try using absolute paths here as well).
Is there anything in the web server's error log? (A small debugging sketch follows below.)
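A minimal sketch of that kind of debugging, assuming foo.sh is the CGI entry point from the question (the log path and the location of bar.sh are placeholders):

#!/bin/bash
# foo.sh - CGI entry point; logs the environment the web server gives it.
echo "Content-type: text/plain"
echo ""

# Record the effective user and the resolved tool paths so they can be
# compared with what you see on your own command line.
{
  echo "running as: $(id -un)"
  echo "PATH=$PATH"
  echo "sed found at: $(command -v sed)"
  echo "awk found at: $(command -v awk)"
} >> /tmp/cgi-debug.log

# Run the worker script via an absolute path and capture its errors too.
/path/to/bar.sh >> /tmp/cgi-debug.log 2>&1
echo "done"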

Thanks for the input, Michael, and sorry for the late response. You were correct in that it was a permissions issue caused by the fact that the CGI script runs not as me but as the "www-data" user.
The problem was that this user didn't have write permission on the target folder (in this case a directory within /usr/share/), which sed -i needs because it writes its result to a temporary file in that directory before replacing the original.
I changed permissions on the target folder to full read/write everyone, and now the script runs successfully.
Thanks again.
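For reference, a rough sketch of the kind of permission change described above (the path is a placeholder; a tighter alternative is to give only the web server's group write access):

# Placeholder standing in for the target directory under /usr/share/.
TARGET=/usr/share/your-target-dir

# World read/write as described above; the capital X also sets the execute
# bit on directories so they can be traversed.
chmod -R a+rwX "$TARGET"

# Tighter alternative: write access for the www-data group only.
chgrp -R www-data "$TARGET"
chmod -R g+rwX "$TARGET"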

Related

Script piped into bash fails to expand globs during rm command

I am writing a script with the intention of being able to download and run it from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and - for the most part - Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob as a result of the way I've connected the I/O of bash and curl, but I have been unable to find a way to get everything to play nice (a rough sketch of the pieces involved follows the list below).
How can I meet all of the following requirements?
Download and run a script from a remote resource
Ensure that the user's keyboard input is connected for use in e.g. read calls within the script
Correctly expand the glob passed to rm
Bonus points: colorize output with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues).
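No answer is recorded here, but a minimal sketch of the pieces involved may help frame the question. The URL and file layout are placeholders; the main assumption is that bash only expands ** recursively when the globstar shell option is on, which a script does not inherit from your interactive shell:

#!/bin/bash
# script.sh - placeholder for the remotely hosted script.

# With bash <(curl ...) stdin stays attached to the terminal, so read works.
read -r -p "Proceed with cleanup? [y/N] " answer

# ** only recurses when globstar is on; set it explicitly inside the script
# instead of relying on the invoking shell's configuration.
shopt -s globstar nullglob

if [[ $answer == [Yy]* ]]; then
    rm -f -- **/*.bkp
fi

# printf interprets the escape sequences itself, no -e needed.
printf '\033[31m%s\033[0m\n' "Some error text here"

Invoked as in the question: bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)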

What does the `grep -m 1` command do in UNIX?

I googled this command but couldn't find anything about it.
grep -m 1 "\[{" xxx.txt > xxx.txt
However, when I typed this command, no error occurred.
But there was also no output from the command.
Can anyone explain what this command does?
This command reads from and writes to the same file, but not in a left-to-right fashion: the > xxx.txt redirection is processed first, truncating the file to zero length before grep starts reading it, so there is nothing to match and therefore no output. (The -m 1 option itself simply tells grep to stop after the first matching line.) You can fix this by storing the result in a temporary file and then renaming that file to the original name.
PS: Some commands, such as sed with its in-place (-i) option, avoid this problem because they do not rely on a shell redirect.
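A sketch of that temporary-file approach, using the file names from the question:

# Write the result somewhere else first, then move it over the original.
grep -m 1 "\[{" xxx.txt > xxx.txt.tmp && mv xxx.txt.tmp xxx.txt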

Bash script not copying files

I have a bash script which is pretty simple (or so I thought - but I don't write them very often):
cp -f /mnt/storage/vhosts/domain1.COM/private/auditbaseline.php /mnt/storage/vhosts/domain1.COM/httpdocs/modules/mod_monitor/tmpl/audit.php
cp -f /mnt/storage/vhosts/domain1.COM/private/auditbaseline.php /mnt/storage/vhosts/domain2.org/httpdocs/modules/mod_monitor/tmpl/audit.php
The script copies the contents of auditbaseline to both domain 1 and domain 2.
For some reason it won't work. When the first line is in on its own it's fine, but when I add the second line I can't get it to work: it locks up the scripts and they can't be accessed.
Any help would be really appreciated.
Did you perhaps create this script on a Windows machine? You should make sure that there are no CRLF line breaks in the file. Try using dos2unix (http://www.linuxcommand.org/man_pages/dos2unix1.html) to convert the file in that case.
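A quick way to check for and fix CRLF endings (the script name is a placeholder):

# "with CRLF line terminators" in file's output means Windows line endings.
file myscript.sh

# cat -A shows a carriage return as ^M before the $ at each line end.
cat -A myscript.sh | head

# Convert the script in place.
dos2unix myscript.sh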

Trouble writing a command that outputs three different commands to a file

I have to write a single command that runs the date, ls, and pwd utilities at the same time and redirects their output to a text file. I can't seem to figure this one out; a single command is fine, but three is the issue.
You should execute the commands in a subshell and then redirect its output to your output file.
(date; ls; pwd) > /path/to/file
Hope this helps.
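As a small variation (not part of the answer above), the commands can also be grouped with braces so they run in the current shell; the redirection works the same way:

# Note the spaces inside the braces and the semicolon before the closing one.
{ date; ls; pwd; } > /path/to/file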

Recursive FTP directory listing in shell/bash with a single session (using cURL or ftp)

I am writing a little shell script that needs to go through all folders and files on an ftp server (recursively). So far everything works fine using cURL - but it's pretty slow, because cURL starts a new session for every command. So for 500 directories, cURL performs 500 logins.
Does anybody know whether I can stay logged in using cURL (this would be my favourite solution), or how I can use ftp with only one session in a shell script?
I know how to execute a set of ftp commands and retrieve the response, but for the recursive listing, it has to be a little more dynamic...
Thanks for your help!
The command is actually ncftpls -R. It will recursively list all the files in an ftp folder.
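A usage sketch (host and path are placeholders; see the ncftpls documentation for authentication options):

# One session, recursive listing, written to a file.
ncftpls -R ftp://123.456.789.100/some/path/ > listing.txt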
Just to summarize what others have said so far: if you are trying to write a portable shell script that works as a batch file, then you need the lftp solution, since some FTP servers may not implement ls -R. Simply replace 123.456.789.100 with the actual IP address of the ftp server in the following examples:
$ lftp -c "open 123.456.789.100 && find -l && exit" > listing.txt
See the man page of lftp, go to the find section:
List files in the directory (current directory by default) recursively. This can help with servers lacking ls -R support. You can redirect output of this command.
However, if you have a way to figure out whether or not the remote ftp server properly supports ls -lR, then a much better (= faster) solution is:
$ echo ls -lR | ftp 123.456.789.100 > listing.txt
Just for reference: executing the first command (lftp + find) takes 0m55.384s to retrieve the full listing, while the second one (ftp + ls -lR) takes 0m3.225s.
If possible, try using an lftp script:
# lftp script "myscript.lftp"
open your-ftp-host
user username password
cd directory_with_subdirs_u_want_to_list
find
exit
The next thing you need is a bash script that runs this lftp script and writes the output to a file:
#!/bin/bash
lftp -f myscript.lftp > myOutputFile
myOutputFile now contains the full dump of directories.
You could connect to the ftp server in such a way that it accepts commands on stdin and writes its responses to stdout. Create two named pipes ("fifos", see man mkfifo) and redirect stdin and stdout of the ftp command each to one of them. Then you can write commands to the stdin-connected fifo and read the responses (line by line, with bash's read for example) from the stdout fifo, and use the results to decide where to send another listing command (and print them or do whatever you want). A rough sketch of this plumbing follows below.
In short: Not something bash scripting is suitable for :) (Until you find a tool that does what you want by itself of course)
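A rough sketch of that plumbing, with the usual caveats: the host and credentials are placeholders, and the hard part - parsing the output to decide which directory to list next - is left out.

#!/bin/bash
# Two fifos: one feeds commands to a single ftp session, the other carries
# its output back.
mkfifo /tmp/ftp_cmds /tmp/ftp_out

# One ftp process for the whole run; -n suppresses auto-login so we can send
# open/user ourselves.
ftp -n < /tmp/ftp_cmds > /tmp/ftp_out &

# Keep the command fifo open for writing on fd 3 for the whole session.
exec 3> /tmp/ftp_cmds

echo "open 123.456.789.100"   >&3
echo "user username password" >&3
echo "ls -lR"                 >&3
echo "quit"                   >&3

# Read the responses line by line; in the dynamic version described above you
# would parse these and send further listing commands to fd 3 before quitting.
while IFS= read -r line; do
    printf '%s\n' "$line"
done < /tmp/ftp_out

exec 3>&-
rm -f /tmp/ftp_cmds /tmp/ftp_out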
If you just want to create a listing of all files and folders, you can use ssh instead. Something like this (but check the documentation for correct usage):
$ ssh user@host "ls -R /path"
