Deleting "C:\Blah\Blah\..\...\Blah" File on Unix - windows

I have a remote Linode, which I am using Cygwin to access. An errant database file, specifically "C:\Users\Blah\Blah\website\blah\sqlite.db", was created. This file was meant for local testing on my Windows machine, but was generated on the Linode due to a mistake. Note that this is the full file name on the Linode, not its location: the name uses Windows path syntax, not Unix, which is where I think the problem lies.
Now, I cannot delete it! It says "cannot remove file" and prints the name without any of the original backslashes, which tells me it isn't recognizing this as an errant Windows DB file name.
How can I delete this? If I had access to a GUI folder I could use that, but I only have the command line!
Please help!

The backslash and colon are not special characters to the filesystem (which is why you can have a file with those characters in its name), but backslash is a special character to the shell (and : is special in some contexts).
You just have to pass the file's name to the rm command. To do this from the shell, you need to escape the backslash characters.
This should work:
rm C:\\Users\\Blah\\Blah\\website\\blah\\sqlite.db
For example (I just tried this on my own system):
$ touch C:\\Users\\Blah\\Blah\\website\\blah\\sqlite.db
$ ls
C:\Users\Blah\Blah\website\blah\sqlite.db
$ rm C:\\Users\\Blah\\Blah\\website\\blah\\sqlite.db
$
And if your shell supports tab completion, then you can probably just type rm C and press Tab; if there are no other files in the current directory whose names start with C, the shell will expand that to (an escaped version of) the file name. (Bash happens to insert a \ in front of the : as well; this is unnecessary but harmless.)
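Alternatively, you can wrap the whole name in single quotes, since nothing except another single quote is special inside them (this assumes the file is in the current directory):
rm 'C:\Users\Blah\Blah\website\blah\sqlite.db'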

Related

Error processing variables with special characters in bash script

I need help finding a way for a bash script to handle file names with special characters. The user starts the script, but if the folder or file name contains special characters, the script fails or throws an error. I have tried several options I found online, but I have not been able to make them work with the script.
The script is set up to take user input with the read command.
read -r -p "Enter directory name : " var1
If the user input is "account&orders", the script will fail due to the '&' character, as it won't find the directory or file.
When the script looks for files with specific extensions, the input folder name becomes the path from which the files are copied to a different directory. The issue I am running into is that some of those files or directories have special characters, and the script cannot process the variables or find the files. The script uses a for loop to check every file in the directory, and if a file's name has a special character, the loop fails.
Example file names:
file1#depot.rct
file2&logrecord.rct
cd $var1
ls: cannot access '/sharepool/comunityshare//'\''account.&.orders'\''': No such file or directory
line 141: cd: '/sharepool/comunityshare//'\''account.&.orders'\''': No such file or directory
I have tried wrapping in single quotes and using backslashes, but the variable is not readable.
Please note that I am not a coder or developer; I know some basic Linux commands, and I am trying to make this work while a better process is developed. I appreciate your help with this.
I was able to solve the issue using this line.
filename=$(echo "$filename" | sed 's/[&()#+*!%^'\'']/\\&/g')
That inserts a backslash before each special character in the variable, e.g.:
account.&.orders becomes account.\&.orders
Thank you for your help and support.
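For what it's worth, the more common fix is to double-quote the variable everywhere it is used, rather than escaping its contents. A minimal sketch of that approach (the destination path here is illustrative):
read -r -p "Enter directory name : " var1
cd "$var1" || exit 1
for file in *.rct; do
    cp -- "$file" /path/to/destination/
done
With "$var1" and "$file" quoted, names like file2&logrecord.rct pass through untouched; the -- guards against names that begin with a dash.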

cd to an unknown directory name with spaces in a bash script

I've looked at some of the posts that have similar issues, but I can't extrapolate some of the solutions to fit my own needs, so here I am.
I have a simple shell script and I need it to cd into a directory with a space in the name. The directory is in the same place every time (/home/user/topleveldir) but the directory name itself is unique to the machine, and has a space in it (/home/user/topleveldir/{machine1}\ dir/, /home/user/topleveldir/{machine2}\ dir/). I'm confused as to the best method to cd into that unique directory in a script.
I don't see why something like the following would not work:
baseDir=/home/user/topleveldir
machine=<whatever machine name>
cd "$baseDir/$machine dir"
You need to quote that space character, so that the shell knows that it's part of the argument and not a separator between arguments.
If you have the directory directly on that command line in the script, use single quotes. Between single quotes, every character is interpreted literally except a single quote.
cd '/home/user/topleveldir/darkstar dir/'
If the directory name comes from a variable, use double quotes around the variable expansion, e.g. "$foo" (and likewise around command substitutions, "$(foo)"). If you leave out the quotes, the value of the variable is split into separate words which are then interpreted as glob patterns; this is very rarely desirable.
directory_name='darkstar dir'
…
cd "/home/user/topleveldir/$directory_name"

Does Bash have version issues that would prevent me from executing files?

I've created a bash shell script file that I can run with my local bash (version 4.2.10) but not on a remote computer (version 3.2). Here's what I'm doing:
A script file (some_script.sh) exists in a local folder
I've done $ chmod 755 some_script.sh to make it an executable
Now, I try $ ./some_script.sh
On my computer, this runs fine. On the remote computer, this returns a Command not found error:
./some_script.sh: Command not found.
Also, on the remote machine, executable files have stars (*) following their names. I don't know if this makes any difference, but I still get the same error when I include the star.
Is this because of the bash shell version? Any ideas to make it work?
Thanks!
The command not found message can be a bit misleading. The "command" in question can be either the script you're trying to execute or the shell specified on the shebang line.
For example, on my system:
% cat foo.sh
#!/no/such/dir/sh
echo hello
% ./foo.sh
./foo.sh: Command not found.
./foo.sh clearly exists; it's the interpreter /no/such/dir/sh that doesn't exist. (I find that the error message varies depending on the shell from which you invoke foo.sh.)
So the problem is almost certainly that you've specified an incorrect interpreter name on line one of some_script.sh. Perhaps bash is installed in a different location (it's usually /bin/bash, but not always).
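A quick way to check is to compare the script's first line against where the interpreter actually lives (the paths below are just an illustration of a mismatch):
$ head -n 1 some_script.sh
#!/bin/bash
$ command -v bash
/usr/local/bin/bash
If the two disagree, correct the shebang, or use the more portable #!/usr/bin/env bash.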
As for the * characters in the names of executable files, those aren't actually part of the file names. The -F option to the ls command causes it to show a special character after certain kinds of files: * for executables, / for directories, @ for symlinks, and so forth. Probably on the remote system you have ls aliased to ls -F or something similar. If you type /bin/ls, bypassing the alias, you should see the file names without the appended * characters; if you type /bin/ls -F, you should see the *s again.
Adding a * character in a command name doesn't do what you think it's doing, but it probably won't make any difference. For example, if you type
./some_script.sh*
the * is a wildcard, and the command name expands to a list of all files in the current directory whose names match the pattern (this is completely different from the * marker that ls -F appends to executable files). Chances are there's only one such file, so
./some_script.sh* is probably equivalent to ./some_script.sh. But don't type the *; it's unnecessary and can cause unexpected results.

What does the bash command "rm *~" do?

Does the bash command rm *~ just remove files ending in a tilde, or is there a more advanced bash or GNU make pattern here? Google does not seem able to search for this two-symbol combination. I found this in a Makefile clean: target.
Would GNU make ever create files with trailing ~s using only the implicit rules?
The ~ (tilde) character has a special meaning in a path in two cases:
~user # the home directory of user
~/folder # folder inside your home directory
For the most part, that's it. The command you refer to does exactly what it looks like it does: removes files whose names end in a tilde. Text editors such as emacs save backup copies of files under filenames ending in tildes.
So, this command is probably used to remove these backup copies from the current directory (but not subdirectories). One reason why one would want to do so is if the directory will be copied to a web server, as server-side code (e.g. PHP files) can contain sensitive information such as passwords.
As you guessed, rm *~ just removes file with names ending with a tilde (~). Filenames ending with a tilde are usually backup files created by editors (in particular, emacs was one of the earlier editors to use this convention). After editing source code, it is common to have a number of these files left behind. This is why the clean target in the Makefile removes these.
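A typical clean target of this sort might look like the following (a generic sketch, not taken from the asker's Makefile):
clean:
	rm -f *~ *.o
The -f keeps the recipe from failing when there is nothing to delete.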
Whether *~ is some special bash pattern is not relevant for most makefiles, as /bin/sh is used by default to execute make recipes. Only if SHELL is set in the makefile will a different shell be used.
An easy way to see make's implicit rules is to run make -p in a directory without a makefile. You will get an error saying no targets specified, but make will also print out the implicit rules it is using. If you grep this output for a tilde, you'll see there are no implicit rules that name files with it.
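For example, with GNU make (-f /dev/null supplies an empty makefile, and the "no targets" error goes to stderr):
$ make -p -f /dev/null 2>/dev/null | grep '~'
$
On a typical system this prints nothing: no built-in rule names files with a tilde.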
Nope, just what you said. Removes files ending with ~.
Edit: the only special meaning the ~ character may have is as shorthand for the current user's home directory (like $HOME), but only at the beginning of a path.
I have used that command to erase files ending in "~". I think that there is no special escape character associated with the tilde symbol.
Yes to both
Actually, both of your possibilities are somewhat true.
There is no wildcard or special filename syntax associated with ~, unless it occurs at the beginning of a word.
But the filename pattern ending in tilde is produced automatically by the mv(1) and cp(1) programs on most Linux distros (though not on the Mac or BSD) if the -b (backup) option is specified and the target file exists. A make rule on such a system might contain a mv -b ... or cp -b ... command.
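A quick demonstration of that -b behavior on a GNU system (file names invented here):
$ echo old > target
$ echo new > source
$ cp -b source target
$ ls
source  target  target~
$ cat target~
old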

command substitution but without breaking output into multiple arguments

Is there a way to do command substitution in the Bash shell without breaking the output into multiple arguments?
I copy the path of some directory (from the location bar in a GUI file browser) to clipboard and then issue the following command, where the command xsel returns the clipboard content, which is the path of the directory in this case:
cd `xsel`
But some paths contain spaces or may even contain special characters that Bash uses.
How can I pass the output of a command as a single argument, without Bash mangling special characters?
cd "$(xsel)"
seems to handle all special characters (including $ and spaces).
My test string was boo*;cd.*($\: $_
$ mkdir "$(xsel)"
$ ls
boo*;cd.*($\: $_
$ file boo\*\;cd.\*\(\$\\\:\ \$_/
boo*;cd.*($\: $_/: directory
$ cd "$(xsel)"
$ pwd
/tmp/boo*;cd.*($\: $_
Have you tried:
cd "`xsel`"
That should do the job, unless you have dollar signs ($) or backslashes (\) in your path.
If you aren't doing this programmatically, most terminals on Linux let you paste from the clipboard with a middle-click. Of course, you'll still need to put quotes before and after your paste, as @dave suggests.
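To see the difference, suppose the clipboard holds a path containing a space (the path is illustrative):
$ cd `xsel`
bash: cd: too many arguments
$ cd "`xsel`"
$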
