Load script into a PowerShell console via an alias - windows

To load a script file into an open PowerShell console (e.g., to import functions), dot-sourcing or the Import-Module cmdlet is needed.
Using this inside a function (to create an alias) doesn't work, e.g.:
Function psinit1 { . C:\Scripts\scriptFunktions.ps1 }
Function psinit2 { Import-module C:\Scripts\scriptFunktions.ps1 -force}
When I call psinit1 or psinit2 I don't get an error, but my functions are not available. Why doesn't this work? Am I right in assuming that the function opens a new scope which loads the script (and gets closed once the function is done)?
How can I get it to work?

Unless you invoke a function via ., the dot-sourcing operator, its body executes in a child scope, so that any operations you perform inside of it - unless you explicitly target a different scope - are limited to that child scope and its descendant scopes.
Therefore, to make your functions work as intended, i.e., to make the definitions visible to the caller's scope, dot-source their invocations too:
. psinit1
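A minimal end-to-end sketch (the script path is taken from the question; Get-Something is just a placeholder for whatever functions the script actually defines):
Function psinit1 { . C:\Scripts\scriptFunktions.ps1 }
. psinit1        # dot-source the call so the script's definitions land in the caller's scope
Get-Something    # the script's functions are now available in the console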
Generally, note that while Import-Module also accepts .ps1 scripts, its primary purpose is to act on modules. With .ps1 scripts, it effectively behaves like dot-sourcing, except that repeating an Import-Module call with a .ps1 script in a child scope fails, unless -Force is also specified (to force reloading).
The upshot: Do not use Import-Module with .ps1 scripts:
The primary reason to avoid it is that it makes a promise it cannot keep: because simple dot-sourcing takes place, no actual module is imported, even though one nominally shows up in Get-Module's output, named for the script file's base name (e.g., foo for script foo.ps1).
Because simple dot-sourcing takes place, you cannot use Remove-Module to unload the script's definitions later; while you can call Remove-Module on the imported pseudo-module, it has no effect: the dot-sourced definitions remain in effect.
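To illustrate (a sketch using the script path from the question):
Import-Module C:\Scripts\scriptFunktions.ps1   # effectively dot-sources the script
Get-Module scriptFunktions                     # a pseudo-module is nonetheless listed
Remove-Module scriptFunktions                  # succeeds, yet the dot-sourced functions remain defined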

zsh completion script with globbing

I'd like to write a zsh completion script that completes with files from a directory different from the current working directory. (I can achieve this; see below.)
Additionally, the user should be able to use globbing on the completed command. How can I achieve this?
Example:
Let's assume we want to complete a command called rumple. rumple takes a list of files from ~/rumple as argument (possibly nested directories/files). A valid invocation may be:
rumple pumple romple/pample
Which means rumple shall effectively work on ~/rumple/pumple and ~/rumple/romple/pample.
Now the zsh-completion script _rumple for rumple could look like:
#compdef rumple
_path_files -W ~/rumple
This completes the first part of the task: rumple p<TAB> will complete to rumple pumple (assuming there are no other files in ~/rumple starting with 'p'). However, the second part is not working: rumple **/p* should complete to rumple pumple romple/pample, even if the current working directory is not ~/rumple.
Normally you would leave it to the user to configure this sort of behaviour via the _match completer or glob_complete option.
But you can force it to some extent as follows:
#compdef rumple
# Gather all files under ~/rumple (recursively), tell the completion system
# to treat the word on the line as a pattern, and offer the candidates with
# the ~/rumple/ prefix stripped from each element.
local a=( ~/rumple/**/* )
compstate[pattern_match]='*'
compadd ${a#$HOME/rumple/}
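For reference, the user-side configuration alluded to above would be something like the following in ~/.zshrc (an assumption about the user's setup, not part of the completion script):
setopt GLOB_COMPLETE
# or, alternatively, enable the _match completer:
zstyle ':completion:*' completer _complete _match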

PowerShell Cannot Overwrite Constant Error after Running Script 2nd Time?

I have a PowerShell script with constants defined inside the script:
Set-Variable MY_CONST -option Constant -value 123
Write-Host "Hello, World!"
Write-Host $MY_CONST
Now, when I run this script once, it is fine.
When I run the script again, I get an error message:
Set-Variable : Cannot overwrite variable MY_CONST because it is read-only or constant.
I am running inside Visual Studio Code 2017.
If I exit and re-open Visual Studio Code, it works when I run it again (and fails the second time after that ...).
If you use -Option Constant, you're telling PowerShell that the resulting variable should not allow later modification.
Therefore, running Set-Variable again with the same variable name results in an error.
That said, you would only see that symptom if your script is "dot-sourced", i.e., executed directly in the caller's scope, which means that repeated invocations see definitions left behind by previous invocations.
Some environments implicitly perform "dot-sourcing" - notably, the PowerShell ISE and - as in your case - Visual Studio Code with the PowerShell extension.
A simple workaround is to add -ErrorAction Ignore to your Set-Variable call, given that it's fair to assume that the only possible failure reason is redefinition of the constant.
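Applied to the script from the question:
Set-Variable MY_CONST -Option Constant -Value 123 -ErrorAction Ignore
Write-Host "Hello, World!"
Write-Host $MY_CONST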
More generally, in environments such as the PowerShell ISE and Visual Studio Code, be aware that a given script's invocation may leave definitions behind that affect subsequent invocations.
By contrast, this is not a concern when invoking a script repeatedly from a PowerShell console/terminal window, because scripts there run in a child scope.
mhhollomon asks if using scope modifiers such as $script:... would work:
No, because in the global scope in which scripts execute in Visual Studio Code, the script scope targeted by $script:... (Set-Variable -Scope Script ...) is the same scope, i.e., the global scope too.
If you did want to explicitly ensure that your script doesn't modify the calling scope even when "dot-sourced", you can wrap the entire script's content in & { ... } to ensure execution in a child scope.
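For example, wrapping the script from the question:
& {
  Set-Variable MY_CONST -Option Constant -Value 123
  Write-Host "Hello, World!"
  Write-Host $MY_CONST
}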

Is it possible to have separate bash completion functions for separate commands which happen to share the same name?

I have two separate scripts with the same filename, in different paths, for different projects:
/home/me/projects/alpha/bin/hithere and /home/me/projects/beta/bin/hithere.
Correspondingly, I have two separate bash completion scripts, because the proper completions differ for each of the scripts. In the completion scripts, the "complete" command is run for each completion specifying the full name of the script in question, i.e.
complete -F _alpha_hithere_completion /home/me/projects/alpha/bin/hithere
However, only the most-recently-run script seems to have an effect, regardless of which actual version of hithere is invoked: it seems that bash completion only cares about the filename of the command and disregards path information.
Is there any way to change this behavior so that I can have these two independent scripts with the same name, each with different completion functions?
Please note that I'm not interested in a solution which requires alpha to know about beta, or which would require a third component to know about either of them--that would defeat the purpose in my case.
The Bash manual describes the lookup process for completions:
If the command word is a full pathname, a compspec for the full pathname is searched for first. If no compspec is found for the full pathname, an attempt is made to find a compspec for the portion following the final slash. If those searches do not result in a compspec, any compspec defined with the -D option to complete is used as the default.
So the full path is used by complete, but only if you invoke the command via its full path. As for getting completions to work using just the short name, I think your only option (judging from the spec) is going to be some sort of dynamic hook that determines which completion function to invoke based on the $PWD - I don't see any evidence that Bash supports overloading a completion name like you're envisioning.
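One possible shape of such a hook (a sketch only; _beta_hithere_completion and the project paths are assumptions, and the dispatch keys off $PWD as suggested above):
_hithere_dispatch() {
    # Pick the project-specific completion function based on where we are.
    case "$PWD" in
        /home/me/projects/alpha*) _alpha_hithere_completion "$@" ;;
        /home/me/projects/beta*)  _beta_hithere_completion "$@" ;;
    esac
}
complete -F _hithere_dispatch hithere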
Yes, this is possible, but it's a bit tricky. I am using this for a future scripting concept I am developing: all scripts have the same name, as they are build scripts, but bash completion can still do its job.
For this I use a two-step process: first, I place a main script in ~/.config/bash_completion.d. This script is designed to cover all scripts with the particular shared name, and I configured ~/.bashrc to load that completion file for them.
The script obtains the full file path of the particular script file I want completion for and derives an identifier from that path. For each identifier there exists a file that provides the actual bash completion data. When completion is performed, the completion function from the main script checks for that file, loads its content, and then continues with the regular completion operation.
If you have two scripts with the same name, you get two different identifiers, since the scripts share a name but have different paths. Therefore two different bash completion configurations can be used.
This concept works like a charm.
NOTE: I will update this answer soon providing some source code.
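A minimal sketch of the idea described above (all names, paths, and the use of md5sum for the identifier are assumptions, not the author's actual code):
# Main completion script, placed in ~/.config/bash_completion.d and sourced from ~/.bashrc.
_shared_name_completion() {
    local cmd_path id conf
    cmd_path=$(type -P "${COMP_WORDS[0]}") || return            # full path of the invoked script
    id=$(printf '%s' "$cmd_path" | md5sum | awk '{print $1}')   # identifier derived from the path
    conf="$HOME/.config/bash_completion.d/data/$id"
    [[ -r $conf ]] && source "$conf"                            # per-script completion data
    # ...continue with the regular completion logic using what $conf defined...
}
complete -F _shared_name_completion build   # "build" stands in for the shared script name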

Building a single wrapper script that works for multiple executables

So I have a gem that has two executables, say run and run_nohup. I have created an executable file where I add all the environment stuff required to execute run, and have added this file to the PATH.
Example:
env variable1=value variable2=value /opt/my_gem/bin/run "$@"
Now my question is: is there another way to do the same for run_nohup without duplicating this work? I ask because I'm installing all of this with Chef, and it would require me to create more templates, basically duplicating the old template except for the last part where I call run_nohup.
$0 is the name used to invoke the current program; thus, you can look at it to determine how you were called, or manipulate it (in the case below, stripping the directory name and using only the filename):
#!/bin/sh
exec env variable1=value variable2=value /opt/my_gem/bin/"${0##*/}" "$@"
You can take this single executable, save it in two files named run and run_nohup (which can be hardlinked together, if you like), and it'll call the appropriate tool from /opt/my_gem/bin for the name it's invoked with.
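For example, assuming the wrapper above was saved as ~/bin/run:
ln ~/bin/run ~/bin/run_nohup   # hard link: both names now run the same wrapper script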
Aside: It would be slightly more efficient (save a few microseconds) to have the shell export the environment updates rather than calling through env:
#!/bin/sh
variable1=value variable2=value exec /opt/my_gem/bin/"${0##*/}" "$@"

understanding PATH variable export at the beginning of the bash script

I have fairly often seen the PATH variable exported at the beginning of a script. For example, in the /etc/init.d/rc script in Debian Wheezy:
PATH=/sbin:/usr/sbin:/bin:/usr/bin
export PATH
While I understand that this ensures that executables used in the script are started from the correct directories, I don't fully understand which shells are affected by this export statement. For example, here I start the script named rc (PID 6582; the command is "/bin/sh /etc/init.d/rc") from bash (PID 3987):
init(1)-+-acpid(1926)
|-sshd(2139)-+-sshd(2375)---bash(2448)---screen(3393)---screen(3394)-+-bash(3395)---vim(3974)
| | |-bash(3397)---pstree(6584)
| | `-bash(3987)---rc(6582)---sleep(6583)
Am I correct that this PATH export statement in the rc script affects only the /bin/sh with PID 6582, because parent shells (bash with PID 3987 in my example) do not inherit variables from children? In addition, am I correct that all the commands executed in the rc script are started under the /bin/sh with PID 6582 and thus use this PATH=/sbin:/usr/sbin:/bin:/usr/bin value? If yes, then wouldn't the simple PATH=/sbin:/usr/sbin:/bin:/usr/bin have been enough?
Exported environment variables are inherited by all the processes run from the script. PATH in particular affects the behaviour of the C functions execlp() and execvp(), so all the processes launched by the init.d script that started sshd, and their descendants, are impacted, but only up to the point where one of these descendants changes and exports it.
In particular, bash (2448) most probably changes it, as it is a login shell, to match the system's and the user's config, so all its descendants are affected by that change.
Then, when you run the /etc/init.d/rc script by hand, the change is inherited by the sleep command (but that one never tries to run anything).
If yes, then wouldn't the simple PATH=/sbin:/usr/sbin:/bin:/usr/bin have been enough?
If you mean just setting the variable instead of also exporting it, it depends on what the rc script runs. If it launches anything that tries to run commands with any of those functions, then no: PATH affects the children only after it is exported.
PATH should already be exported by the parent shell when the script runs, so indeed, there is no need to export it again.
I can imagine corner cases where the shell which runs your script might not be properly initialized, such as for a startup script running very early in the boot process, but for regular userspace scripts, things should be set up the way you want them already.
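To make the distinction concrete, here is a minimal illustration with a fresh variable (PATH itself is almost always already exported, in which case a plain assignment suffices):
MYVAR=hello
sh -c 'echo "child sees: $MYVAR"'   # not exported: the child prints an empty value
export MYVAR
sh -c 'echo "child sees: $MYVAR"'   # exported: the child now sees "hello"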
