I am running the script with the following parameters:
test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3
I am executing the script remotely from another application, and I want only one instance of the script to run when all the parameters are the same.
In other words, if all parameters are the same, only one instance of the script should be running at any time.
I am using the following logic, but it returns null:
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
If you ran this...
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
... and it returned nothing, that means the script is not running, or it ran and has already exited.
I just tried what you posted, and the above pulls the process line as expected.
# begin test.ps1 script
Param
(
    $parm1,
    $parm2,
    $parm3
)
'hello'
# end test.ps1 script
# SCC is an alias for a function I use to shell out to the console host as needed
# aka Start-ConsoleCommand
# it has code to prevent the console host from closing, so I can keep working in it if needed.
scc -ConsoleCommand '.\test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3'
# console window results
hello
Check the process info:
Get-WmiObject Win32_Process -Filter "Name='powershell.exe' AND CommandLine LIKE '%test.ps1%'"
# Results
...
Caption : powershell.exe
CommandLine : "C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" -NoExit -Command &{ .\test.ps1 -parm1 abc1 -parm2 abc2 -parm3 abc3 }
...
Still, your stated use case is a bit odd. Since you started the code in the first place, and you only want it run once, why start it again just to check for something you already know is running?
If you need to run it multiple times sequentially, then do exactly that: run it sequentially.
If any user can run your app from any machine, each machine would still only have one instance running at a time, so the check is moot there as well, unless your code performs create/update/delete actions on the same files or database and you are trying to avoid errors when another user acts on them at the same time.
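If you do want the script to guard itself, here is a minimal sketch of that check (an assumption on my part, not your exact setup; Get-CimInstance is the modern replacement for Get-WmiObject, and the parameter values are taken from your example):
# Look for another powershell.exe whose command line carries the same
# script name and the same three parameter values; exclude ourselves via ProcessId.
$pattern = '%test.ps1%-parm1 abc1%-parm2 abc2%-parm3 abc3%'
$filter  = "Name='powershell.exe' AND ProcessId<>$PID AND CommandLine LIKE '$pattern'"
$twin = Get-CimInstance Win32_Process -Filter $filter
if ($twin) {
    Write-Warning 'An instance with identical parameters is already running.'
    return
}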
I assume you want to prevent multiple instances of the script from running simultaneously on one given host.
In that case, have your script create a so-called 'lock file' (just a text file at a location of your choice).
At the beginning of your script, check whether the file exists: if it does, another instance is running, so bail out.
If it does not exist, create the lock file, do your script's business, and at the end don't forget to delete the lock file (unless the script is never allowed to run more than once on that computer).
Feel free to add extra information to the lock file (e.g. the parameters being used, or the process ID) to make even more versatile use of it.
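A minimal sketch of that lock-file pattern, assuming the parameter values are safe to use in a file name and that $env:TEMP is an acceptable location (both assumptions on my part):
# test.ps1 -- lock-file guard (sketch; lock location is an assumption)
param($parm1, $parm2, $parm3)

# Derive the lock name from the parameters, so only identical
# invocations block each other.
$lockPath = Join-Path $env:TEMP "test_${parm1}_${parm2}_${parm3}.lock"

if (Test-Path $lockPath) {
    Write-Warning 'Another instance with these parameters is running; exiting.'
    return
}

try {
    # Record who holds the lock (here, our process ID).
    Set-Content -Path $lockPath -Value $PID
    'hello'   # ...the actual script work goes here...
}
finally {
    # Note: a killed process leaves a stale lock behind; clean it up manually then.
    Remove-Item $lockPath -ErrorAction SilentlyContinue
}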
Related
I have a PowerShell script statically hosted on my website and I want to run it on my machine without manually downloading it. So I do this:
iwr https://mywebsite/test.ps1 | iex
This works perfectly as long as I don't need to pass any arguments. But if I do need to pass arguments, what options do I have?
As a workaround I can use variables instead of arguments like so:
$arg=$true; iwr https://mywebsite/test.ps1 | iex
but this is not ideal.
Is there any better way to do this?
Create a ScriptBlock from the script file and execute that:
& ([scriptblock]::Create((iwr https://mywebsite/test.ps1))) -param1 123 -param2 "Hello there"
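If you prefer the same technique spelled out step by step (the URL and parameter names are placeholders from the question):
# Download the script text, wrap it in a script block, then invoke it
# with ordinary named parameters.
$response = Invoke-WebRequest -Uri 'https://mywebsite/test.ps1'
$script = [scriptblock]::Create($response.Content)
& $script -param1 123 -param2 'Hello there'
This works because the arguments you pass to & are bound to the param() block declared inside the downloaded script.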
Writing a script to retrieve various environment parameters back from a list of servers. My script returns no value when run, but the same command returns the desired value outside of a script.
I have tried using a couple of variations to retrieve the same data. One of the commands fails because of restrictions placed on the accounts I have access to. The second command works but only if executed in an elevated mode.
This fails with access denied (pwdx is restricted)
dzdo pgrep -f /some/path | xargs pwdx
This works outside of a script but returns no value within a script
dzdo /bin/readlink -e /proc/"$(pgrep -f /some/path)"/cwd
When using "bash -x" to execute my script, I see that the output of the "readlink" command substitution is blank.
Ideally, I would like to return the PID and path of the process, as the "pgrep" command does. I can also work with the path alone, as returned by the "readlink" version. The end goal is to gather this information from several servers for audit purposes (version, etc.).
Am I using the wrong syntax for the "readlink" command? I'm fairly new to writing bash scripts, so I appreciate any guidance on what to do differently when using a command in a script versus on the command line.
If pwdx is the restricted program, it is pwdx, not pgrep, that you need to run with dzdo:
pgrep -f /some/path | dzdo xargs pwdx
I have a script on a Unix server which looks like this:
mainScript.sh
#some stuff here
emailScript.sh $PARAM_1 $PARAM_2
#some other stuff here
As you can see, mainScript.sh is calling another script called emailScript.sh.
The emailScript.sh is supposed to perform a query via sqlplus, then parse the results and return them via email if any.
The interesting part of the code in emailScript.sh is this:
DB_SERVER=$1
USERNAME=$2
PASSWORD=$3
EVENT_DATE=$4
LIST_AUTHORIZED_USERS=$5
ENVID=$6
INTERESTED_PARTY=$7
RAW_LIST=$(echo "select distinct M_OS_USER from MX_USER_CONNECTION_DBF where M_EVENT_DATE >= to_date('$EVENT_DATE','DD-MM-YYYY') and M_OS_USER is not null and M_OS_USER not in $LIST_AUTHORIZED_USERS;" | sqlplus -s $USERNAME/$PASSWORD@$DB_SERVER)
As you can see, all I do is create the variable RAW_LIST by executing a query with sqlplus.
The problem is the following:
If I call mainScript.sh from the command line (PuTTY / KiTTY), the sqlplus command works fine and returns something.
If I call mainScript.sh from an external job (an ssh connection opened on the server by a Jenkins job), sqlplus returns nothing and takes 0 seconds, meaning it doesn't even try to execute.
To debug, I've printed all the variables and the query itself to check whether something wasn't properly set: everything is correctly set.
It really seems that the sqlplus command is not recognized, or something like that.
Would you have any idea how to debug this? Where should I look for the issue?
You need to consider a few things here. When you run the script yourself, which directory are you executing it from? And when your external application runs it, which directory does it execute from? It is better to use the full path to the script, like /path/to/the/script/script.sh, or to cd /path/to/the/script/ first and then execute it. Also check the execute permissions available to your application: you as a user might have permission to run the script or the sql command, but your application might not. Check which user ID your application runs as and add it to the proper group. One more thing worth ruling out (an educated guess on my part): a non-interactive ssh session, such as the one Jenkins opens, does not source the same profile files as your PuTTY login, so PATH and ORACLE_HOME may not be set there; calling sqlplus by its full path, or exporting those variables inside the script itself, would rule that out.
I made a PowerShell script to generate a ruby command. The command takes forever to execute when generated and run by the script. However, when it's typed by hand, it runs quickly and gets the job done. The script logic is:
$dat1 = one ruby command | Out-String
# $dat1 will contain the value - rake drive:unit_tests:load_data
$dat2 = " parameters here"
$dat3 = $dat1 + $dat2
$dat3 = $ExecutionContext.InvokeCommand.NewScriptBlock($dat3)
& $dat3
Can someone please help me figure out why this is happening and how to resolve it?
Solution: Execute your script using the following command:
powershell -noexit "&" "c:\myscripts\script1.ps1"
I just got lucky here. Since I am new to PowerShell, I don't know why this works. I read the link below, but I still don't understand why it works.
http://poshoholic.com/2007/09/27/invoking-a-powershell-script-from-cmdexe-or-start-run/
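One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): Out-String appends a trailing newline, so $dat3 may contain a line break between the rake command and its parameters, in which case the script block runs the bare command without them. Trimming the captured output before concatenating avoids that:
# 'one ruby command' stands in for the question's placeholder command.
# Out-String adds a trailing newline; Trim() removes it so the
# parameters stay on the same line as the command.
$dat1 = (one ruby command | Out-String).Trim()
$dat2 = " parameters here"
$dat3 = [scriptblock]::Create($dat1 + $dat2)
& $dat3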
I'm trying to take advantage of AWS Elastic Beanstalk's facility to customize the EC2 instances it creates. This requires creating a .config file in the .ebextensions directory.
You can specify a number of commands which should be executed when the application is deployed to an instance. I'm using that to install some msi files, and also to configure EC2 to assign the instance a unique name. This then requires a reboot.
My problem is that I only want these commands to be run when an instance is first deployed. When I deploy a code-only change to existing instances they shouldn't be run.
I've tried using the "test" parameter, which should prevent a command from running. I create a file as the last command, and then I check for the presence of that file in the "test" parameter. But it doesn't seem to work.
My config file is like this:
# File structure documented at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html
files:
  "C:\\Users\\Public\\EnableEc2SetComputerName.ps1":
    source: "[File Source]"

commands:
  init-01-ec2setcomputername-enable:
    test: cmd /c "if exist C:\\Users\\Public\\initialised (exit 1) else (exit 0)"
    command: powershell.exe -ExecutionPolicy Bypass -File "C:\\Users\\Public\\EnableEc2SetComputerName.ps1"
    waitAfterCompletion: 0

  init-05-reboot-instance:
    test: cmd /c "if exist C:\\Users\\Public\\initialised (exit 1) else (exit 0)"
    command: shutdown -r # restart to enable EC2 to set the computer name
    waitAfterCompletion: forever

  init-06-mark-initialised:
    test: cmd /c "if exist C:\\Users\\Public\\initialised (exit 1) else (exit 0)"
    command: echo initialised > C:\\Users\\Public\\initialised
    waitAfterCompletion: 0
Is there an alternative way to accomplish this? Or am I doing something stupid?
On Unix-based systems, there are the touch and test commands (referred to in this answer asking the equivalent question for Unix systems). What's the equivalent in Windows which will work best in this situation?
I think the problem lies in the fact that you are rebooting the machine before you ever write the initialised file. You should be able to use a .bat file which first writes the semaphore and then reboots the instance, and run that .bat file contingent on the existence of the semaphore.
You can either download the .bat file with a files:source: directive or compose it in the .config with a files:content: directive.
Otherwise, your test: lines look good (I tested them locally, without a reboot).
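The same idea expressed in PowerShell rather than a .bat file, as a rough sketch (the marker path is the one from the question; wiring it into the files: and commands: sections is omitted):
# Write the semaphore first, THEN reboot, so the marker already
# exists when the instance comes back up and the commands re-run.
$marker = 'C:\Users\Public\initialised'
if (-not (Test-Path $marker)) {
    Set-Content -Path $marker -Value 'initialised'
    Restart-Computer -Force   # restart so EC2 can set the computer name
}
The last answer below does essentially this, just with the test inverted.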
Essentially, no. Elastic Beanstalk is an abstraction that looks after the underlying infrastructure for you: you give up a lot of environment control and gain easier deployment. If you research CloudFormation, in particular its metadata and cfn-init / cfn-hup, you'll see a very similar construct to the Beanstalk files and commands:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
If you need to do instance customization beyond application customization, then you're possibly using the wrong tool and will keep needing clumsy workarounds (until touch/test equivalents arrive from AWS); CloudFormation scripts would probably be a better fit.
I wrote about how to configure Windows instances via CloudFormation, and there's also extensive documentation on Amazon itself.
Given you've done all the hard work around the commands, I think it would be pretty easy to shift to a CloudFormation script and plonk the one-time startup code into userdata.
Edit: if you stayed with Elastic Beanstalk, though, I think you could do it like this:
command: dir initialised || powershell.exe -ExecutionPolicy Bypass -File "C:\\Users\\Public\\EnableEc2SetComputerName.ps1"
Here dir initialised fails when the file doesn't exist yet, so the || only lets the PowerShell script run on the first deployment.
I recently ran into a very similar problem and utilized the answer from Jim Flanagan above, creating a PowerShell script to do this.
# Restarts the server if this is not the first deployment.
param()

$strFileName = "C:\Users\Public\fwinitialised.txt"

if (Test-Path $strFileName) {
    # This file was created on a previous deployment; reboot now.
    Restart-Computer -Force
}
else {
    # This is a new instance, no need to reboot.
    New-Item $strFileName -Type File
}
And in the .ebextensions file...
6-reboot-instance:
  command: powershell.exe -ExecutionPolicy Bypass -File "C:\\PERQ\\Deployment\\RestartServerOnRedeployment.ps1"
  waitAfterCompletion: forever