Check for wildcard in fish shell arguments - shell

In writing a function for fish shell, I want to know whether a lone wildcard (not part of a bigger expression) was used in the command arguments. Fish does the wildcard expansion before passing arguments to my function, so there is no easy way that I can see to do that, aside from checking whether the arguments are the same as the output of ls. The inefficiency of that method makes me sad, though. Is there a better way to do this, without going into fish's source code?
EDIT:
Thanks for the input. Specifically, I am looking to add some functionality like zsh has for warning when there is a * in the arguments of rm. I know there was an issue opened on GitHub specifically about this, but I couldn't find the link again. I have, for example, typed rm * .o instead of rm *.o and accidentally deleted all my code (... which I brought back from git, but still).
EDIT 2:
Here is the issue on GitHub: https://github.com/fish-shell/fish-shell/issues/1511

No, there's no way for a function to tell where its arguments came from. Maybe if you give more details about what you're really trying to accomplish, we can give another suggestion.
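For completeness, here is a sketch of the ls-comparison workaround from the question, written as a fish wrapper around rm. Everything about it is illustrative, and it only fires when the expansion of a lone * makes up the entire argument list (so it would still miss rm * .o):
function rm --description 'rm with a crude bare-* warning'
    # Expand * ourselves; with set, an unmatched wildcard expands to nothing.
    set -l expanded *
    if test (count $argv) -gt 0; and test "$argv" = "$expanded"
        echo "rm: arguments match '*' in "(pwd)"; proceed? [y/N]"
        read -l reply
        test "$reply" = y; or return 1
    end
    command rm $argv
end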

Related

Why don't makefiles behave more like shell scripts within recipes?

I find makefiles very useful, and the header of each recipe
<target> : [dependencies]
is helpful. Within a recipe, the prefixes @ and - are useful, as well as the automatic variables like $@ and $?. However, besides that, I find the way of coding the actual recipe to be strange and unhelpful. There are so many questions on StackOverflow along the lines of "how to do this in a makefile" for something that's simple (or at least more familiar) to do in bash.
Is there a reason why the recipe contents are not just interpreted as a regular shell script? Reading the manual pages, there seem to be many constructs with functionality equivalent to a shell script's, but with different syntax. I end up specifying .ONESHELL and escaping $ with $$, or sometimes just calling a script from the recipe when I can't figure out how to make it work in the makefile. My question is whether this is just unfortunate design, or whether there are important features of makefiles that force them to be designed this way.
I don't really know how to answer your question. Probably that means it's not really appropriate for StackOverflow.
The requirement for using $$ instead of $ is obvious. The reasoning for using a separate shell for each logical line of a makefile, instead of passing the entire recipe to a single shell, is less clear. It could have worked either way; this is the way that was chosen.
There is one advantage to the way it works now, although maybe most people don't care about it: you only have to indent the first recipe line with a TAB if you use backslash-newline to continue each subsequent line. If you don't use backslash-newline, then every line has to be indented with a TAB, or else make can't tell where the recipe ends.
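A minimal sketch of both behaviors (the target name is invented; recipe lines must begin with a real TAB):
demo:
	cd /tmp
	pwd                     # each logical line gets its own shell: prints where make started, not /tmp
	@echo "HOME is $$HOME"  # $$ delivers a literal $ to that shell
Declaring .ONESHELL: at the top of the makefile hands the whole recipe to a single shell, at which point the cd above would persist into the next line.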
If your question is, could Stuart Feldman have made very different syntax decisions that would have made it easier to write long/complex recipes in makefiles, then sure. Choosing a more obscure character than $ as a variable introducer would reduce the amount of escaping (although, shell scripting uses pretty much every special character somewhere so "reduce" is the best you can do). Choosing an explicit "start/stop" character sequence for recipes would make it simpler to write long recipes, possibly at the expense of some readability.
But that's not how it was done.

Can I use macros/variables in a `.gitignore` script?

Is there any way to use macros or variables (à la bash) in a .gitignore script? The gitignore docs don't mention anything along those lines, but I figured I'd ask in case there are some undocumented features and/or cool workarounds. A few people have asked about using environment variables in a .gitignore, but I want to know if there's any support whatsoever for anything macro-like or variable-like.
use case
I have a repository which has been undergoing a refactoring of its directory structure/paths. Certain paths are used in multiple patterns in my .gitignore script, so it would be nice to be able to have something in there along the lines of:
# set a variable
UNSTABLE_PATH=foo/bar
# use the variable in some patterns
$UNSTABLE_PATH/test_data
$UNSTABLE_PATH/test_output
And yes, before you say it, I'm aware that clever use of glob and/or recursive glob could probably cover my use case. It would just be nice if there was some simple variable support as well. Though come to think of it I would also settle for a way to make the git mv command rewrite matching paths in the .gitignore.
Well, I'll just jump to the end of the semantics debate in the comments and leave it at this:
.gitignore is not a script, it's a list of patterns. As you would then expect, it has no support for variables.
If you really need this ability, I would take a cue from the likes of sass or less: Write your own file of "ignore rules with extended syntax" and write a script to boil that down into a proper .gitignore file.
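A minimal sketch of that approach, with an invented template name and an invented @VAR@ placeholder convention. The template:
# .gitignore.in -- extended syntax; git never reads this file
@UNSTABLE_PATH@/test_data
@UNSTABLE_PATH@/test_output
and the build step:
#!/bin/sh
# Boil the template down into a plain .gitignore.
UNSTABLE_PATH=foo/bar
sed "s|@UNSTABLE_PATH@|$UNSTABLE_PATH|g" .gitignore.in > .gitignore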
But if you want my two cents, this looks like another case that would be resolved by moving some stuff outside your work tree, so that the work tree can just be a work tree.

`functions` in bash shell

In the Z shell there's a handy command that returns a list of all available functions. The command is, conveniently, called functions. I cannot find a similar alternative in Bash. I threw together a quick & dirty (and wholly unacceptable) function to do approximately the same thing, but it has at least one glaring problem: since it relies on parsing files, you must either list all the files to look in (a list that may go stale) or give a glob expression (which is guaranteed to match files you don't want to look in, such as .bash_history).
Here's the function, since I know someone will ask for it if I don't post it, but I'm pretty sure it's a dead end, or at least the wrong approach.
functions() {
grep "^function " "$HOME/."{bashrc,bash_profile,aliases,functions,projects,variables} | sort | sed -e 's/{//' | uniq
}
I could improve on this wrong-headed approach by parsing .bash_profile and getting a list of all sourced files and then parsing them for functions, but by the time you add the following complications into the mix, it's really not worth it:
You can source files with . or source.
I also happen to use a function to source files, which checks for the file's existence first.
You could easily source after && or ;: it's not necessarily the first or only thing on a line.
You have to account for the fact that functions don't necessarily have the keyword function before them.
You can omit the () after the function name.
There are probably other complicating factors I haven't thought of.
Fundamentally this is wrong because it is parsing files rather than reporting what is loaded in memory.
Is there any reasonable way to do this—get a list of all functions loaded in memory—in Bash? It seems like an enormous omission, if not.
(And for those looking for duplicate questions, this one is very different, as it's asking for a way to list only those functions that come from a specific file.)
Use typeset -f in bash. In zsh, functions is just a synonym for the same command.
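For example (mkcd is a made-up function name):
typeset -f          # print every function currently loaded, with its body
typeset -f mkcd     # print a single definition
declare -F          # bash: list names only (no bodies), one per line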

Does Cygwin (or an actual UNIX shell) have some command to import names from another namespace to the current namespace, as in Python?

In Python, we can use "import" to import the names of another namespace into the current namespace.
Similarly, is there a notion like "namespace" in existence in UNIX shell scripting at all? If so, then does Cygwin (or an actual UNIX shell) have some command to import names from another namespace to the current namespace, as in Python? Thanks.
Note to the community members with admin privileges: I really think this question IS a programming question rather than a "superuser" question. Please kindly elaborate on why if you disagree. Thanks a lot for your time.
There is no way to do exactly what you are asking for.
The source envFile command and its alternative . envFile can be very helpful.
The envFile file will just be a list of environment assignments.
FrontOfficeSystem=MyFrontOffice
BackOfficeSystem=myBackOffice
When you include the command in your script to 'source' the envFile (any name will work), the shell reads the code as if it were directly in your main shell script, like 'include' in a lot of languages. But namespaces ... nope. See next.
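A minimal illustration, using the envFile above:
. ./envFile                # or: source ./envFile
echo "$FrontOfficeSystem"  # prints: MyFrontOffice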
More helpful: see indirect references in the Advanced Bash-Scripting Guide. This is probably better than using eval (per below), but I haven't had the opportunity to work with it.
Finally, you may also benefit from eval and variable-name indirection, e.g.:
src=FrontOffice
eval "${src}System=\"\$src has data\""
src=BackOffice
eval "${src}System=\"\$src has data\""
Not a great example, but I don't have access to the scripts where I really went to town on this idea. It helped me genericize some code that would otherwise have had to be repeated ten times, once per data source: I put the repeating block of code in a for loop, with the source names as the element list, and the eval expanded ${src}System into FrontOfficeSystem, BackOfficeSystem, and so on. If you wind up with spaces in the values in your source list, then all bets are off.
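Here is a sketch of that pattern; the names are illustrative. The same block serves every source, and eval builds each variable name at run time:
for src in FrontOffice BackOffice; do
    eval "${src}System=\"\$src has data\""
done
echo "$FrontOfficeSystem"   # FrontOffice has data
echo "$BackOfficeSystem"    # BackOffice has data
# For reading, bash's indirect expansion avoids eval entirely:
name=FrontOfficeSystem
echo "${!name}"             # FrontOffice has data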
Use set -vx in your terminal window and copy/paste the above code to see how it works. It might help.
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.

What are the most important shell/terminal concepts/commands for novice to learn?

Although I've had to dabble in shell scripting and commands, I still consider myself a novice, and I'm interested to hear from others what they consider to be crucial bits of knowledge.
Here's an example of something that I think is important:
I think understanding $PATH is crucial. In order to run psql, for instance, the PostgreSQL bin directory has to be added to the $PATH variable, a step easily overlooked by beginners.
The concept of pipes. The fact that you can easily redirect output and divide a complex task into several simple ones is crucial.
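Two quick sketches of those points (the PostgreSQL directory is just an example location):
# $PATH: append the directory containing psql so the shell can find it
export PATH="$PATH:/usr/local/pgsql/bin"
# pipes: each stage does one simple job; together they rank login shells by popularity
cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn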
Do yourself a favor and get this book: Learning the Bash Shell
Read and understand:
The Official Bash FAQs
Greg Wooledge's Bash FAQs and Bash Pitfalls and everything else on that site
If you're writing shell scripts, an important habit to get into is to always put double quotes around variable substitutions. That is, always write "$myvariable" (and similarly "$(mycommand)"), never plain $myvariable or $(mycommand), unless you understand exactly why you need to leave them out. (Again, the question is not “should I use quotes?”, it's “why would I want to omit the quotes?”)
The reason is that the shell does nasty things when you leave a variable substitution unquoted. (Those nasty things are called field splitting and pathname expansion. They're good in some situations, but almost never on the result of a variable or command substitution.)
If you leave out the quotes, your script may appear to work at first glance. This is because nasty things only happen if the value of the variable contains some special characters (whitespace, \, *, ? and [). This sort of latent bug tends to be revealed the day you create a file whose name contains a space and your script ends up deleting your source tree/thesis/baby pictures/...
So for example, if you have a variable $filename that contains the name of a file you want to pass to a command, always write
mycommand "$filename"
and not mycommand $filename.
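A small demonstration of the failure mode, safe to run in an empty scratch directory:
touch "my file"
filename="my file"
ls $filename     # field splitting: ls looks for 'my' and 'file', both missing
ls "$filename"   # one argument: lists 'my file' as intended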
