How to pass arguments to a remote PowerShell script?

I have a PowerShell script statically hosted on my website and I want to run it on my machine without manually downloading it. So I do this:
iwr https://mywebsite/test.ps1 | iex
This works perfectly, as long as I don't need to pass any arguments. But if I do need to pass arguments, what options do I have?
As a workaround I can use variables instead of arguments like so:
$arg=$true; iwr https://mywebsite/test.ps1 | iex
but this is not ideal.
Is there any better way to do this?

Create a ScriptBlock from the script file and execute that:
& ([scriptblock]::Create((iwr https://mywebsite/test.ps1))) -param1 123 -param2 "Hello there"
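For the named arguments to bind, the hosted script needs a matching param() block. A minimal sketch of what test.ps1 might contain (the parameter names are just the ones used in the call above):
# test.ps1, hosted at https://mywebsite/test.ps1
param(
    [int]$param1,
    [string]$param2
)
"param1 = $param1, param2 = $param2"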

Related

Proper syntax for bash script line

Writing a script to retrieve various environment parameters back from a list of servers. My script returns no value when run, but the same command returns the desired value outside of a script.
I have tried using a couple of variations to retrieve the same data. One of the commands fails because of restrictions placed on the accounts I have access to. The second command works but only if executed in an elevated mode.
This fails with access denied (pwdx is restricted)
dzdo pgrep -f /some/path | xargs pwdx
This works outside of a script but returns no value within a script
dzdo /bin/readlink -e /proc/"$(pgrep -f /some/path)"/cwd
When using "bash -x" to execute my scriipt, I see the "readlink" code is blank.
Ideally, I would like to return the PID and path of the process running as the "pgrep" command does. I can work with the path alone as returned by the "readlink" version returns. The end goal is to gather the information from several servers for audit purposes. (version, etc.)
Am I using the wrong syntax for the "readlink" command? I'm fairly new to coding bash scripts so I appreciate any guidance to help understand when to to what if I'm using a command in a script vs command line.
If pwdx is the restricted program, you need to run that with dzdo, not pgrep.
pgrep -f /some/path | dzdo xargs pwdx

Restricting one instance of PS Script when all parms are same

I am running the script with following parms:
test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3
I am executing the script remotely from another application and want only one instance of the script to run when all parms are the same.
In other words, if all parms are the same, then only one instance of the script should be running at any time.
I am using the following logic, but it is returning null:
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
If you ran this...
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
… and it returned nothing, that means the script is not running, or it ran and has already closed.
I just tried what you posted and the above pulls the process line as expected.
# begin test.ps1 script
Param
(
$parm1,
$parm2,
$parm3
)
'hello'
# end test.ps1 script
# SCC is an alias for a function I have that shells out to the console host as needed
# aka Start-ConsoleCommand
# it has code to prevent the console host from closing so I can work in it if needed.
scc -ConsoleCommand '.\test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3'
# console window results
hello
Check the process info
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
# Results
...
Caption : powershell.exe
CommandLine : "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit -Command &{ .\test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3
...
Still, your stated use case is a bit odd. Since you started the code in the first place, and you only want it run once, why bother starting it again just to check for something you already know is running?
If you need to run it multiple times sequentially, then do that sequentially.
If any user can run your app from any machine, each machine would still only have one instance running at a time, so the check is really moot. Unless, that is, your code performs create/update/delete actions on the same files or database, and you are trying to avoid errors when another user tries to use your code to act on them.
I assume you want to prevent multiple instances of the script from running simultaneously on a given host.
In that case, have your script create a so-called 'lock file' (just some text file at a location of your choice).
At the beginning of your script, check whether the file exists; if it does, another instance is running, so bail out!
If it does not exist, create the lock file, do your script's business, and at the end don't forget to delete the lock file (unless the script is not allowed to run more than once on that computer).
Feel free to add additional information to the lock file (e.g. the parameters being used, the process ID) to make even more versatile use of it, as in the sketch below.
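A minimal PowerShell sketch of that pattern (the lock-file location and contents are arbitrary choices):
# Sketch only: guards test.ps1 against a second simultaneous run.
$lockFile = Join-Path $env:TEMP 'test.ps1.lock'
if (Test-Path $lockFile) {
    Write-Warning 'Another instance is already running; exiting.'
    return
}
try {
    # Record the PID and parameters for diagnostics.
    "$PID $parm1 $parm2 $parm3" | Set-Content $lockFile
    # ... the actual work of the script goes here ...
}
finally {
    # Always remove the lock, even if the work above throws.
    Remove-Item $lockFile -ErrorAction SilentlyContinue
}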

What's the equivalent of sudo in msys?

I'm writing a cross-platform shell script that's supposed to work on Unix, Cygwin, and msys. In my shell script I need to perform actions with elevated privileges. On Unix you would do this via sudo, and on Cygwin via something like cygstart --action=runas. What's the equivalent for msys?
All my Googling so far has only turned up this, which isn't practical from a shell script since you have to interact with the GUI.
Elevate does a decent job at this, though it's not entirely sudo-equivalent.
I think I may have found a solution using PowerShell:
escape()
{
    RESULT="$1"
    RESULT="${RESULT//\'/\\\'\'}"   # replace every ' with \''
    RESULT="${RESULT//\"/\\\\\\\"}" # replace every " with \\\"
    echo "''$RESULT''"              # PowerShell uses '' to escape '
}
sudo()
{
    ESCAPED=()
    for ARG in "$@"
    do
        ESCAPED+=($(escape "$ARG"))
    done
    SHELL_PATH=$(cygpath -w "$SHELL")
    PS_COMMAND="[Console]::In.ReadToEnd() | Start-Process '$SHELL_PATH' '-c -- \"${ESCAPED[*]}\"' -Verb RunAs"
    cat /dev/stdin | powershell -NoProfile -ExecutionPolicy Bypass "$PS_COMMAND"
}
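Usage then looks like an ordinary sudo call; for example (a hypothetical invocation):
# copy a file into a protected directory from the msys shell
sudo cp mylib.dll /c/Windows/System32/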
Definitely a bit hackish, but it's better than nothing. (Or batch files, for that matter.)

Laravel Envoy and bash prompt

I'm using Envoy to provision a remote server. Provisioning is done by pulling a bash script from a private repo and then executing it.
The bash script asks for confirmation, yes/no (using bash's "read -p"): it works as expected when I'm connected to the remote server; the script waits for user input.
Envoy, on the other hand, seems to ignore any prompt. Is this expected behavior?
Any workaround?
Yes, this is expected. There's nothing for read to read from, so it doesn't read anything.
You have a few options.
Rewrite your script to use a config file when there's no terminal to prompt from.
Use something like [ -t 0 ] to test whether standard input is a terminal, and load a configuration file with defaults when it isn't (see the sketch after this list). The simplest way to do that is to have a file containing the appropriate variable assignments and just source it (. defaults.sh or whatever). You don't even need the -t test if you source the defaults first, since anything the user inputs will then override the default values.
Rewrite your script to have sane defaults.
Rewrite whatever runs the script to feed the script its input via a pipeline or file redirection (e.g. printf 'answer 1\nanswer 2\n' | ./script.sh or ./script.sh <answerfile).
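A minimal bash sketch of the second option (defaults.sh and the confirm variable are hypothetical):
#!/bin/bash
# defaults.sh might contain: confirm=yes
. ./defaults.sh
if [ -t 0 ]; then
    # stdin is a terminal, so prompting works; the answer overrides the default
    read -p "Continue? (yes/no) " confirm
fi
echo "confirm is: $confirm"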

Auto SSH and execute script

I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually over ssh just to execute this script.
Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here since I haven't had experience with shell-scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
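For reference, such a loop might look like this (the node$i hostname pattern is an assumption based on the question):
# run the ping script on nodes 2 through 12, appending all output locally
for i in $(seq 2 12); do
    ssh "node$i" my_script >> results_file
done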
One trick you'll need is to set up pre-authorized SSH keys for each host. Then you can run a script on one host that runs something like 'ssh hostname command > log.hostname'.
This script might be what you are looking for: it allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple bash script with the source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need two small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the command you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh is as follows:
#!/bin/bash
# all arguments as one string ($# in the original stored the argument count, which broke the prefix stripping below)
ALL_ARG=$*
FST_ARG=$1                      # the target machine
REST_ARG=${ALL_ARG#"$FST_ARG"}  # everything after the machine name: the command to run
echo "Executing REMOTE COMMAND ON $FST_ARG"
# pass the machine name and the current directory along, then the command itself
# (the original passed the literal word pwd; $(pwd) is the apparent intent)
/usr/bin/ssh "$FST_ARG" bash execute_ssh_command.sh "$FST_ARG" "$(pwd)" $REST_ARG
execute_ssh_command.sh is as follows:
#!/bin/bash
ALL_ARG=$*                      # all arguments as one string (again, not $#)
FST_ARG=$1                      # the machine name (passed along for reference)
DIR_ARG=$2                      # the directory to run the command from
REM_ARG="$1 $2"
REST_ARG=${ALL_ARG#"$REM_ARG"}  # strip machine and directory, leaving the command
cd "$DIR_ARG"
$REST_ARG
Of course, you have to put these two scripts somewhere in your PATH on all your nodes (maybe ~/bin/).
Hope that's helpful.
