I have several Markdown (.md) files in a folder and I want to concatenate them and get a final Markdown file using Pandoc. I wrote a bash file like this:
#!/bin/bash
pandoc *.md > final.md
But I am getting the following error when I double-click on it:
pandoc: *.md: openBinaryFile: invalid argument (Invalid argument)
and the final.md file is empty.
If I try this:
pandoc file1.md file2.md ... > final.md
I am getting the results I expect: a final.md file with the contents of all the other Markdown files.
On macOS it works fine. Why doesn't this work on Windows?
On Unix-like shells (like bash, for which your script is written) glob expansion (e.g. turning *.md into file1.md file2.md file3.md) is performed by the shell, not the application you're running. Your application sees the final list of files, not the wildcard.
However, glob expansion in cmd.exe is performed by the application:
The Windows command interpreter cmd.exe relies on a runtime function in applications to perform globbing.
As a result, Pandoc is being passed a literal *.md when it expects to see a list of files like file1.md file2.md file3.md. It doesn't know how to expand the glob itself and tries to open a file whose name is *.md.
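You can see the difference from a Unix-like shell: printf receives the already-expanded argument list, so it shows exactly what pandoc would get (assuming some .md files exist in the current directory):
printf '%s\n' *.md    # one expanded filename per line
Under cmd.exe, pandoc instead receives the single literal argument *.md.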
You should be able to run your bash script in a Unix-like shell on Windows, such as Cygwin or Bash on Windows. It may also work in PowerShell, though I don't have a machine handy to test. As a last resort, you could jump through some hoops to write a batch file that expands the glob and passes the file names to Pandoc.
I am trying to turn the following into an executable bash script
#!/bin/bash
cd ~/mlpractical
source activate mlp
jupyter notebook
after creating a .rtf file with the above, I then execute, from the correct directory,
chmod u+x filename
but every time I then try to open the file, I get output telling me that on line 1 the command is not found, on line 2 there is a syntax error, and so on.
How do I make the script executable (double-clickable) and resolve this error?
I'm not sure about making the script double-clickable (that depends on your OS, and you didn't mention what OS you're using). But it sounds like the script file is in RTF format, and that will certainly cause trouble. Shell scripts must be in absolutely plain Unix-style text files.
They can't have any formatting info, as in RTF, DOC, DOCX, etc. files. At best, the shell will try to interpret the formatting info as shell commands, and get lots of errors.
They must have Unix-style line endings. If you use a text editor that saves in DOS/Windows format, you'll have trouble.
They must use a plain enough character encoding that the shell can get away with treating it as plain ASCII. That means no UTF-16. UTF-8 is ok, but don't use fancy characters like curly quotes (“ ” and ‘ ’) -- stick to plain ASCII quotes (" " and ' '). And no byte order marks!
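A quick way to check and fix this (a sketch; the filenames are placeholders): the file utility reports what a script actually contains, and on macOS textutil can strip the RTF formatting.
file some_script.sh                                          # reports e.g. "Rich Text Format data" instead of plain text
textutil -convert txt some_script.rtf -output some_script.sh # macOS: convert the RTF file to plain text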
Thanks Gordon, you are right: the errors all made clear that text-formatting strings were being passed to the shell, instead of just the plain text strings I intended.
I am using a macOS environment and was creating the files in TextEdit, which had no .txt option, only .rtf.
I solved the issue by:
1) using the command line itself to echo text to a file without any extension, e.g.
echo 'text' > filename
2) I then couldn't figure out the newline syntax, so I had to keep appending text, e.g.
echo 'more text' >> filename
3) I then did the following, from the relevant directory, to make it executable:
chmod u+x filename
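For what it's worth, printf would have avoided step 2, since the '%s\n' format prints each argument on its own line (a sketch of the same script written in one go):
printf '%s\n' '#!/bin/bash' 'cd ~/mlpractical' 'source activate mlp' 'jupyter notebook' > filename
chmod u+x filename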
I've got a text file composed of different commands:
set +e; false; echo $?
(false; echo foo); echo \ $?
for i in .; do (false; echo foo); echo \ 1:$?; done; echo \ 2:$?
and so on.
I want to test these commands line by line, each line run as a whole in a bash child process using set -e, and then check the exit code from the parent bash process to see whether the child exited with an error. I'm stuck on how to process even the simpler commands (e.g. the first one listed above). Here's one of my recent attempts:
mapfile file < "$dir"/errorData
bash -c "${file[0]}"
echo $?
returns
bash: set +e: command not found
bash: false: command not found
bash: echo 127: command not found
I've tried several variants without luck. Using bash -c \'"${file[1]}"\' leads to one error message about command not found, with the above three strings now combined into one long string. Of course, simply wrapping the file commands in strong quotes and feeding them directly to bash in an interactive shell works fine.
Edit: I'm using bash 4.4.12(1)
For anyone who finds that the shell is parsing their text in ways that seem to violate the reference manual, make sure to check any copied input data for invisible control characters. E.g. I found a non-breaking space in place of what looked like a simple space in the input file with the commands to be executed. This is especially common when copying from web pages into an editor or onto the command line. bash tokenizes on ordinary spaces, tabs, and newlines, not on the non-breaking space (U+00A0 in the Unicode standard), so words joined by one are treated as a single token. A hex editor, e.g. Hex Fiend, can help troubleshoot similar issues.
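Two quick ways to spot such characters without a hex editor (a sketch; errorData is the input file from the question, and in UTF-8 a non-breaking space is the byte pair c2 a0):
grep -n $'\xc2\xa0' "$dir"/errorData    # print the numbers of any lines containing a NBSP
head -n 1 "$dir"/errorData | od -c      # dump bytes; a NBSP shows up as 302 240 (octal)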
I have a set of files, each of which I pass to a command that produces output, in a loop like so:
for file in /file/list/*
do
command
done
I wish to save the output that would have gone to standard out on each iteration to a text file in my working directory. Currently I am trying this:
for file in /file/list/*
do
command | tee "$file_command_output.txt"
done
What I expect to see are new files created in my current directory titled file1.txt_commandoutput.txt, file2.txt_commandoutput.txt, etc. The output of the command should be saved as a different file for each input file. However, I get only one file, called ".txt", which can't be opened by any standard software on Mac. I am new to bash scripting, so help would be much appreciated!
Thanks.
Your problem comes from the variable name you're using:
"$file_command_output.txt" looks for a variable named file_command_output (the dot cannot be in the variable name, but the alphanumerical characters and the underscore all can).
What you're looking for is "${file}_command_output.txt" to make the variable name more explicit.
You have two issues in your script.
First, the wrong parameter/variable is expanded (file_command_output instead of file), because $file is followed by characters that can be interpreted as part of the name (letters, digits, and the underscore, _). To fix it, enclose the parameter name in braces, like this: ${file}_command_output (see Shell Parameter Expansion in the bash manual).
Second, even with the variable name expansion fixed, the file won't be created in your working directory, because $file holds an absolute pathname (/file/list/name). To fix it, you'll have to strip the directory from the pathname. You can do that either with the basename command, or even better with a modified shell parameter expansion that strips the longest matching prefix, like this: ${file##*/} (again, see Shell Parameter Expansion, the section on ${parameter##word}).
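For example (the pathname here is made up):
file=/file/list/name.txt
echo "${file##*/}"            # name.txt, longest */ prefix stripped
echo "$(basename "$file")"    # name.txt, same result via basename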
All put together, your script now looks like:
#!/bin/bash
for file in /file/list/*
do
command | tee "${file##*/}_command_output.txt"
done
Also, to just save the command output to a file, without printing it in the terminal, you can use a simple redirection instead of tee, like this: command > "${file##*/}_com...".
If you are not aware of xargs, try this:
$ ls
file
$ cat > file
one
two
three
$ while read -r this; do touch "$this"; done < ./file
$ ls
file one three two
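For comparison, xargs itself collapses the loop into a single command (assuming the names in file contain no spaces or quote characters):
$ xargs touch < ./file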
Fedora comes with "gstack" and a bunch of "gst-" programs which keep appearing in my bash completions when I'm trying to quickly type my git aliases. They're of course installed under /usr/bin along with a thousand other programs, so I can't just remove their directory from my PATH. Is there any way in Linux to blacklist these specific programs from appearing for completion?
I've tried the FIGNORE and GLOBIGNORE variables, but they don't work; it looks like they only apply to filename completion after you've already entered a command.
In 2016 Bash introduced an option for that. I'm reproducing the text from this newer answer by zuazo:
This is rather new, but in Bash 4.4 you can set the EXECIGNORE variable:
aa. New variable: EXECIGNORE; a colon-separated list of patterns that will cause matching filenames to be ignored when searching for commands.
From the official documentation:
EXECIGNORE
A colon-separated list of shell patterns (see Pattern Matching) defining the list of filenames to be ignored by command search using PATH. Files whose full pathnames match one of these patterns are not considered executable files for the purposes of completion and command execution via PATH lookup. This does not affect the behavior of the [, test, and [[ commands. Full pathnames in the command hash table are not subject to EXECIGNORE. Use this variable to ignore shared library files that have the executable bit set, but are not executable files. The pattern matching honors the setting of the extglob shell option.
For example:
$ EXECIGNORE=$(which pytest)
Or using Pattern Matching:
$ EXECIGNORE=*/pytest
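Applied to the programs from the question, a line like this in ~/.bashrc should hide them (the patterns are my guess at covering gstack plus the gst-* family; EXECIGNORE patterns match full pathnames):
EXECIGNORE='*/gstack:*/gst-*'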
I don't know if you can blacklist specific files, but it is possible to complete from your command history instead of the path. To do that, add the following line to ~/.inputrc:
TAB: dynamic-complete-history
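To try the binding without starting a new shell, bash's bind builtin can re-read the file:
bind -f ~/.inputrc    # reload key bindings in the current session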
FIGNORE matches suffixes only; it presumes, for whatever reason, that you want to blacklist an entire class of files, so you need to knock off the first letter.
E.g. to eliminate gstack from autocompletion:
FIGNORE=stack
This will get rid of gstack, but also of anything else ending in stack.
I've created a bash shell script file that I can run on my local bash (version 4.2.10) but not on a remote computer (version 3.2). Here's what I'm doing:
A script file (some_script.sh) exists in a local folder
I've done $ chmod 755 some_script.sh to make it an executable
Now, I try $ ./some_script.sh
On my computer, this runs fine. On the remote computer, this returns a Command not found error:
./some_script.sh: Command not found.
Also, in the remote version, executable files have stars (*) following their names. I don't know if this makes any difference, but I still get the same error when I include the star.
Is this because of the bash shell version? Any ideas to make it work?
Thanks!
The command not found message can be a bit misleading. The "command" in question can be either the script you're trying to execute or the shell specified on the shebang line.
For example, on my system:
% cat foo.sh
#!/no/such/dir/sh
echo hello
% ./foo.sh
./foo.sh: Command not found.
./foo.sh clearly exists; it's the interpreter /no/such/dir/sh that doesn't exist. (I find that the error message varies depending on the shell from which you invoke foo.sh.)
So the problem is almost certainly that you've specified an incorrect interpreter name on line one of some_script.sh. Perhaps bash is installed in a different location on the remote system (it's usually /bin/bash, but not always).
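Two quick checks worth running on the remote machine (a sketch; substitute your actual script name):
head -n 1 some_script.sh    # show the shebang line
command -v bash             # where (and whether) bash exists on that system
If the two disagree, either correct the shebang or run the script explicitly with bash ./some_script.sh.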
As for the * characters in the names of executable files, those aren't actually part of the file names. The -F option to the ls command causes it to show a special character after certain kinds of files: * for executables, / for directories, @ for symbolic links, and so forth. Probably on the remote system you have ls aliased to ls -F or something similar. If you type /bin/ls, bypassing the alias, you should see the file names without the appended * characters; if you type /bin/ls -F, you should see the *s again.
Adding a * character in a command name doesn't do what you think it's doing, but it probably won't make any difference. For example, if you type
./some_script.sh*
the * is a wildcard, and the command name expands to a list of all files in the current directory whose names match the pattern (this is completely different from the * marker that ls -F appends to executable files). Chances are there's only one such file, so
./some_script.sh* is probably equivalent to ./some_script.sh. But don't type the *; it's unnecessary and can cause unexpected results.
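You can preview what the pattern would expand to before relying on it:
echo ./some_script.sh*
If exactly one file matches, you'll see just its name; if nothing matches, the pattern itself is printed back (unless the nullglob option is set).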