As described in the subject, my PowerShell environment is executing all external commands in separate windows. In a typical test run of my team's build script, this includes things like:
nuget.exe running for each project in a sln
nunit test runners
It's quite aggravating. The behavior actually prevents me from multi-tasking while running psake builds, since it grabs my mouse/keyboard focus whenever a new window appears or disappears. It also swallows valuable output from assorted steps in our build process.
As per "Powershell suddenly opens cmd.exe for executing bats", I checked $env:PATHEXT, but it is set up correctly (it is a semicolon-delimited list and includes .EXE among its items).
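A check along these lines is enough to confirm that entry; just a sanity-check snippet, nothing exotic:
($env:PATHEXT -split ';') -contains '.EXE'   # prints True when .EXE is present
$env:PATHEXT -split ';'                      # lists every extension for a quick visual check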
I am the only member of the team seeing this behavior, and it appears to be associated with some global/roaming profile for my user, as it is still happening even after I re-imaged my machine.
Any help would be greatly appreciated and I can provide additional info upon request.
Start-Process nuget.exe -NoNewWindow
Get-Help Start-Process -Online
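A rough sketch of how that could be wired into a build step; the nuget.exe path, solution name, and log file name below are placeholders, not taken from the original build script:
# Run the restore in the current console, wait for it, and keep its output
# instead of letting a separate window swallow it.
$nuget = 'C:\Tools\nuget.exe'   # placeholder path
Start-Process -FilePath $nuget -ArgumentList 'restore', 'MySolution.sln' `
    -NoNewWindow -Wait -RedirectStandardOutput 'nuget-restore.log'
Get-Content 'nuget-restore.log'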
Hi, for a few days now I've had different Google Video Support Plugin Installer tasks running.
The same thing happens to me that this person described here:
Chrome v78, Windows 10. Every few hours a new instance of the video plugin installer exe kicks off and runs in parallel with the other(s) already in place. Each instance takes ~half a core, and there is also always one MsiExec (similar usage) that terminates when the process is manually killed in Task Manager. The processes originate from \users\\appdata\local\google\update\install{GUID} - deleting the directories/files does no good; the updater downloads it again after a few hours and tries again.
Video plugin installer is v 19.9.2600.0, 10,692,592 bytes
I'd like a way to stop this automated install or have it succeed. It seems I can do neither right now.
Any solutions to this in place?
Thank you in advance!
Quick questions:
1) Are you behind an internet proxy server?
2) What is your security tool / malware suite / anti-virus? Can you temporarily disable it whilst the installation finishes?
3) Have a look in the "Deployment Mnemonic" section here (for various causes of deployment failure).
4) You can try to log in as another user and install fresh - see if that works. If it does, then you may need a Chrome profile cleanup.
5) There is always the reboot. Always try that first to get it out of the way, as the "one-size-doesn't-fit-anyone-at-all-really" fix.
Technical: If I were to guess, the problem is a custom action in the MSI that gets stuck "somehow". How many msiexec.exe entries are in the task list, and what context are they in (user, system)? There can be many msiexec.exe processes. Below is information on how to debug custom action failures (and other failures) by log analysis.
Logging: If this really is an MSI installer (Windows Installer), then you can try to enable the MSI logging policy, which creates a log file for every MSI installation that runs, by following the instructions in the section "Globally for all setups on a machine" here. You will then find a log file in the TEMP folder after the installer has run (whenever that might be). Look for *.log files with random names; just sort the folder by date modified to see the most recent files.
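If you would rather set that policy from PowerShell than click through the registry editor, a sketch like this (run elevated) writes the documented "Logging" policy value. Remember to remove it afterwards, since global logging slows every install down:
# Enable the machine-wide Windows Installer logging policy ("voicewarmupx" = log everything).
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Installer'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'Logging' -Value 'voicewarmupx' -PropertyType String -Force | Out-Null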
Open Temp folder: Windows Key > Tap R > Type: %TEMP% > Press: Enter.
Debugging: You can search the log for "value 3" first to find errors (see Rob Mensching's explanation in that link). You can find more information on interpreting MSI log files in the section "Interpreting MSI Log Files" here: Enable installation logs for MSI installer without any command line arguments
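To avoid opening each file by hand, here is a small sketch that scans the newest logs for that marker; it assumes the global logging policy above, which drops random-named *.log files into %TEMP%:
# Check the five most recently modified .log files in TEMP for "value 3".
Get-ChildItem $env:TEMP -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 5 |
    Select-String -Pattern 'value 3'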
I executed an SSIS package using SSDT and Visual Studio. When I try to execute another package, I get an error saying "The process cannot access the file XXXX.ispac because it is being used by another process". I have tried rebooting, but that is a pain in the behind. How can I work around this error?
While developing an SSIS package, I got the error "The process cannot access the file '.ispac' because it is being used by another process".
I tried closing SSDT and running it again, but I still got the same error while compiling. Then, after searching the internet, I found the solution:
Solution:
Go to Task Manager -> Details tab.
Locate the process "DtsDebugHost.exe".
Kill this process.
There might be multiple instances of this process. Kill all of them.
After doing this, I tried to compile the package again and it was successful.
You might check your patch level. I saw this much more frequently with the 2015 release of SSDT, but it hasn't bitten me as often since then.
Finding and killing a process
Sysinternals has an excellent tool called Process Explorer. It's free, doesn't require an install and helps you see what all is happening on your computer. In this case, you want to find the process that has its grubby finger on your file (MyProject.ispac) and then kill it.
https://helpcenter.gsx.com/hc/en-us/articles/115015880627-How-to-Identify-which-Windows-Process-is-Locking-a-File-or-Folder
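If you prefer to stay on the command line, the same Sysinternals suite also ships handle.exe, which does the lookup non-interactively; the path below is just an assumption about where you unpacked it:
# List every process holding a handle whose name contains the .ispac file name.
& 'C:\Tools\Sysinternals\handle.exe' MyProject.ispac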
A different approach that doesn't require getting Process Explorer running is to change your build from Development to Release (and back again).
Chicken Sandwich No Pickles asks via comments
How can I convert from Development to Release?
In your toolbar, click where you see Development in the dropdown (or right-click the solution in Solution Explorer).
In Configuration manager, you may/may not have a listing available under Configuration. Earlier versions of SSIS projects had dev/release configurations predefined but it looks like newer ones do not. If you do not have another option, make one via <New...>
Copy the values from the Development configuration et voilà!
Now when you debug, ProjectFolder/bin/Release will exist, and DtsDebugHost.exe will latch onto that file and release its pointers to ProjectFolder/bin/Development/Project.ispac.
Here's a simple script you can run in PowerShell to kill every SSIS debug process ("DtsDebugHost.exe") and unlock the .ispac file.
unlock_ispac.ps1
# If SSIS errors with 'The process cannot access the file .ispac',
# run this file in PowerShell to kill the debug host holding the lock.
Get-Process | ForEach-Object {
    $process = $_
    if ($process.Name -eq "DtsDebugHost") {
        $process.Kill()
    }
}
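If you prefer a one-liner, the same cleanup can be done with built-in cmdlets (it quietly does nothing when no debug host is running):
Get-Process -Name DtsDebugHost -ErrorAction SilentlyContinue | Stop-Process -Force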
I have a visual studio load test which I want to run every hour so that I can start to collect some data.
To do this, I thought it would be best to make a little powershell script and put a command like this inside:
Invoke-Expression -Command "& '$env:VS100COMNTOOLS..\IDE\mstest.exe' /testcontainer:'C:\Users\benb\Documents\Visual Studio 2010\Projects\BBPerformanceTest\bin\Debug\HomePageOnly.loadtest'"
That command works fine, but sometimes when it's run I get a blue screen of death. However, when I run my load test through the Visual Studio GUI, I never get a BSOD.
Two questions:
Is it possible to avoid this BSOD?
Is there another way I can schedule my load test?
Thanks
I just called MSTest.exe directly in the scheduled task (rather than indirectly through a PowerShell script). This seemed to solve the problem. Thanks!
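For anyone who wants to script the task itself rather than create it in the Task Scheduler UI, here is a rough sketch using the ScheduledTasks module; it assumes Windows 8 / Server 2012 or later, and the task name and one-year repetition window are arbitrary placeholders:
# Run MSTest.exe directly every hour - no PowerShell wrapper in between.
$mstest  = Join-Path $env:VS100COMNTOOLS '..\IDE\mstest.exe'
$action  = New-ScheduledTaskAction -Execute $mstest `
    -Argument '/testcontainer:"C:\Users\benb\Documents\Visual Studio 2010\Projects\BBPerformanceTest\bin\Debug\HomePageOnly.loadtest"'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Days 365)
Register-ScheduledTask -TaskName 'HourlyLoadTest' -Action $action -Trigger $trigger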
My first suggestion is to analyze the memory dump file to find the root cause of your crash (it might not be the load test). This article contains information on how to do that: http://support.microsoft.com/kb/315263
In response to your second question: you can also use a batch file instead of a powershell script.
I hope this helps.
I am automating testing with TestComplete and CCNet. I am getting the error message "process cannot access the file because it is being used by another process" while deleting some folders.
Are there any tools which can be used to unlock the file? I need to automate the unlock operation from CCNet.
Haven't tried it myself but Unlocker might solve your problem. According to the FAQ it has a CLI:
Can Unlocker be run in command line? Yes! Unlocker -H for command line options.
Open Task Manager (Ctrl + Alt + Del, select Task Manager) and see if you can find the name of your most recent application. If you find the application still running but not visible, that means it switched to a background process and likely has a bug/infinite loop. End the program task if you find it (you may very well find several) and that should clear up any file-IO errors. Hopefully this helped!
How can I execute a batch file or just a few (e.g. two) commands in a Hudson job (running on Windows XP, as a non-service, but that may change) so that the environment persists for the whole build?
I need to do this because I have to change the current path with 'cd' (we are using relative paths in our project) and 'set' some environment variables for MSBuild.
Thank you in advance.
Not sure why you need to get out of the service realm. My understanding so far is that Hudson starts a new environment for every job, so that the jobs don't interfere with each other. So as long as you don't use commands that affect other environments (e.g. subst), you will be fine with adding an "Execute Windows batch command" build step.
If your service runs with the wrong permissions, you have two options: change the permissions of the service (run it under a different user than the local system user), or call the runas command. If for whatever reason you still need to contain changes to certain parts of your job, you can always call cmd to create a new environment.