PowerShell to trigger ACR build failing when called from Windows Task Scheduler

I have been trying to schedule an ACR build on a machine using PowerShell. The approach I am using relies on a service principal (as shown here: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication?tabs=azure-cli).
I have created a build script that works fine if I call it from within the PowerShell console. However, when I schedule the script to run from the Windows Task Scheduler, it seems to skip past the ACR build step and does not execute as expected.
Script below:
$myreg = "myreg"
$myregfull = "myreg.azurecr.io"
$Date = Get-Date -format "yyyyMMdd"
$logfile = "c:\Log-$Date.txt"
$user ="xxx"
$pass="xxx"
$tenant="xxx"
$subscription="xxx"
$myimage="myimage:"
Try {
# 1. Logging in as service principal
$DateForLog = Get-Date | Out-File -FilePath $logfile -Append
"--- Logging in as service principal ---" | Out-File -FilePath $logfile -Append
az login --service-principal -u $user -p $pass --tenant $tenant | Out-File -FilePath $logfile -Append
}
Catch{
"Logging in as service principal failed at $(Get-Date). Error: $($_.Exception.Message)" |
Out-File -FilePath $logfile -Append
}
Try {
# 2. Switching to subscription
$DateForLog = Get-Date | Out-File -FilePath $logfile -Append
"--- Switching to subscription ---" | Out-File -FilePath $logfile -Append
az account set --subscription $subscription | Out-File -FilePath $logfile -Append
}
Catch{
"Switching to subscription failed at $(Get-Date). Error: $($_.Exception.Message)" |
Out-File -FilePath $logfile -Append
}
Try {
# 3. Logging in to registry
$DateForLog = Get-Date | Out-File -FilePath $logfile -Append
"--- Logging in to registry $myreg.azurecr.io ---" | Out-File -FilePath $logfile -Append
$TOKEN=$(az acr login --name $myreg --expose-token --output tsv --query accessToken)
docker login $myregfull -u 00000000-0000-0000-0000-000000000000 -p $TOKEN | Out-File -FilePath $logfile -Append
}
Catch{
"Logging in to registry failed at $(Get-Date). Error: $($_.Exception.Message)" |
Out-File -FilePath $logfile -Append
}
Try {
# 4. Confirm connected
$DateForLog = Get-Date | Out-File -FilePath $logfile -Append
"--- Confirming connected ---" | Out-File -FilePath $logfile -Append
az acr show -n $myreg | Out-File -FilePath $logfile -Append
az acr repository list -n $myreg | Out-File -FilePath $logfile -Append
}
Catch{
"Confirm connected failed at $(Get-Date). Error: $($_.Exception.Message)" |
Out-File -FilePath $logfile -Append
}
Try {
# 5. Triggering build
$DateForLog = Get-Date | Out-File -FilePath $logfile -Append
"--- Triggering build of myreg.azurecr.io/myimage:initial ---" | Out-File -FilePath $logfile -Append
az acr build -t $myimage$Date -r $myreg . --platform windows | Out-File -FilePath $logfile -Append
}
Catch{
"Triggering build failed at $(Get-Date). Error: $($_.Exception.Message)" |
Out-File -FilePath $logfile -Append
}
$DateForLog = Get-Date | Out-File -FilePath $logfile -Append
When called from the console, the log shows the command being called; then, some 15 minutes later (after the context upload), it shows:
2022/06/14 10:26:12 Downloading source code...
The build then takes roughly 30 minutes before moving on to the next step.
When called from the scheduler, however, the step finishes in about 8 seconds.
The login process is definitely successful in both cases, because the list of repositories is shown no matter where the script is called from.
Any suggestions on what might be causing this issue would be greatly appreciated.
EDIT
Updating the question to show logs.
From scheduler:
14 June 2022 14:46:05
--- Logging in as service principal ---
[
{
"cloudName": "AzureCloud",
"homeTenantId": "xxx",
--- OMITTED ---
"user": {
"name": "xxx",
"type": "servicePrincipal"
}
}
]
14 June 2022 14:46:31
--- Switching to subscription ---
14 June 2022 14:46:36
--- Logging in to registry myreg.azurecr.io ---
Logging in to registry failed at 06/14/2022 14:46:47. Error: The term 'docker' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
14 June 2022 14:46:47
--- Confirming connected ---
{
"adminUserEnabled": true,
"anonymousPullEnabled": false,
"creationDate": "2021-04-06T10:23:22.985285+00:00",
--- OMITTED ---
"type": "Microsoft.ContainerRegistry/registries",
"zoneRedundancy": "Disabled"
}
[
"myrepo1",
--- OMITTED ---
"myrepo2"
]
14 June 2022 14:47:03
--- Triggering build of myreg.azurecr.io/myimage:initial ---
14 June 2022 14:47:12
From console:
14 June 2022 14:50:14
--- Logging in as service principal ---
[
{
"cloudName": "AzureCloud",
"homeTenantId": "xxx",
--- OMITTED ---
"user": {
"name": "xxx",
"type": "servicePrincipal"
}
}
]
14 June 2022 14:50:41
--- Switching to subscription ---
14 June 2022 14:50:47
--- Logging in to registry myreg.azurecr.io ---
Logging in to registry failed at 06/14/2022 14:50:57. Error: The term 'docker' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
14 June 2022 14:50:57
--- Confirming connected ---
{
"adminUserEnabled": true,
"anonymousPullEnabled": false,
"creationDate": "2021-04-06T10:23:22.985285+00:00",
--- OMITTED ---
"type": "Microsoft.ContainerRegistry/registries",
"zoneRedundancy": "Disabled"
}
[
"myrepo1",
--- OMITTED ---
"myrepo2"
]
14 June 2022 14:51:11
--- Triggering build of myreg.azurecr.io/myimage:initial ---
2022/06/14 14:51:23 Downloading source code...
2022/06/14 14:51:29 Finished downloading source code
2022/06/14 14:51:30 Using acb_vol_77064302-024f-4c7c-8933-8f1fc9a4ce4f as the home volume
2022/06/14 14:51:31 Setting up Docker configuration...
2022/06/14 14:51:38 Successfully set up Docker configuration
2022/06/14 14:51:38 Logging in to registry: myreg.azurecr.io
2022/06/14 14:51:42 Successfully logged into myreg.azurecr.io
2022/06/14 14:51:42 Executing step ID: build. Timeout(sec): 28800, Working directory: '', Network: ''
2022/06/14 14:51:42 Scanning for dependencies...
2022/06/14 14:51:46 Successfully scanned dependencies
2022/06/14 14:51:46 Launching container with name: build
Sending build context to Docker daemon 804.4kB
Step 1/7 : FROM myreg.azurecr.io/myimage:empty
empty: Pulling from myimage
4612f6d0b889: Pulling fs layer
5ff1512f88ec: Pulling fs layer
--- OMITTED ---

The problem was that the ACR BUILD command needed an absolute file path for the Dockerfile and also for <SOURCE_LOCATION>.
When called from the console, the current location was used; but when the script was called from the scheduler, the paths needed to be absolute.
So instead of:
az acr build -t $myimage$Date -r $myreg . --platform windows
It needed to be:
az acr build -t $myimage$Date -r $myreg -f c:/path-to-docker-file c:/path-to-source-folder/ --platform windows
The reason this was not evident to begin with was the way I was capturing the logs: no errors or warnings appeared when piping the output from ACR BUILD to Out-File -FilePath $logfile.
It was only when I switched to creating a transcript of the session (and removed the piped output) that an error was shown about the Dockerfile not being found.
Start-Transcript -Path "E:\transcript.txt" -NoClobber
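For anyone scheduling a similar script, here is a hedged sketch of how the build step can be made independent of the working directory. The Dockerfile and src folder names are placeholders, and $myimage, $Date, and $myreg are the variables from the script above:

```powershell
# Resolve everything relative to the script itself, because the working
# directory under Task Scheduler is unpredictable.
$scriptRoot = Split-Path -Parent $MyInvocation.MyCommand.Path
$dockerfile = Join-Path $scriptRoot 'Dockerfile'   # placeholder name
$sourceDir  = Join-Path $scriptRoot 'src'          # placeholder name

# Transcript captures errors that piping to Out-File swallows.
Start-Transcript -Path (Join-Path $scriptRoot "transcript-$Date.txt") -Append
az acr build -t "$myimage$Date" -r $myreg -f $dockerfile $sourceDir --platform windows
Stop-Transcript
```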

Related

Accessing TFS (on-prem) via PowerShell in a Docker Windows container

I have a Windows Server 2016 machine running windowsservercore. I am working on moving our CI pipeline into containers. During our process, we build a version.html file. The file contains build data (like build date and build number) and TFS 2017 project information about merges/branches that have occurred.
We had this working with TeamCity running a PowerShell script that would connect and run a query against TFS 2017. I looked on Docker Hub for TFS, but did not have any luck. I also tried looking under Microsoft on Docker Hub and did not find anything.
I tried to create a new Dockerfile:
FROM microsoft/windowsservercore:10.0.14393.1480
# Setup shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Mkdir BuildStage
COPY powershell/CopyBuildToStageDir.ps1 \BuildStage
COPY powershell/BuildVersionFile.ps1 \BuildStage
RUN dir
But when I ran the PowerShell file inside the Windows container, it said:
Unable to find type
[09:25:00][Step 2/2] [Microsoft.TeamFoundation.Client.TeamFoundationServerFactory].
[09:25:00][Step 2/2] At C:\BuildStage\BuildVersionFile.ps1:192 char:12
In the PowerShell, there is this function
#============================================================================
# Setup TFS stuff
#============================================================================
function Setup-Tfs {
# Connect to TFS
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client") | out-null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.VersionControl.Client") | out-null
$tfsServer = "http://ourServer";
$tfs = [Microsoft.TeamFoundation.Client.TeamFoundationServerFactory]::GetServer($tfsServer)
$Script:versionControlServer = $tfs.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer] )
$Script:recursionType = [Microsoft.TeamFoundation.VersionControl.Client.RecursionType]::Full
}
Here are more details of how we are using powershell to call TFS to get Merge and Branch information to build the version.html file...
# Need to get the last 5 changesets of Merge information for both MAIN and Iteration
Setup-Tfs
$baseLocation = "$/OurBaseLocation/"
$locationForMain = $baseLocation + $OurProjectLocation
# Query history for the TFS path
$vCSChangeSets = $versionControlServer.QueryHistory($locationForMain, $recursionType, 5)
# History of Merge changes to MAIN application (only 5 deep)
"<table border='2'>" | Out-File $VersionFname -append
"<caption>Merge Info For: $AppName </caption>" | Out-File $VersionFname -append
# Build out headers
"<TH>Changeset</TH><TH>Date</TH><TH>Comment</TH>" | Out-File $VersionFname -append
Foreach ($vCSChangeSet in $vCSChangeSets) {
# write row
$changeset = $vCSChangeSet.ChangesetID
$CheckinNotesName = $vCSChangeSet.Comment
$CreationDate = $vCSChangeSet.CreationDate
if ($CheckinNotesName.ToUpper().Contains("MERGE")){
"<TR>" | Out-File $VersionFname -append
"<TD>$changeset</TD><TD>$CreationDate</TD><TD>$CheckinNotesName</TD>" | Out-File $VersionFname -append
"</TR>" | Out-File $VersionFname -append
}
if ($CheckinNotesName.ToUpper().Contains("BRANCH")){
"<TR>" | Out-File $VersionFname -append
"<TD>$changeset</TD><TD>$CreationDate</TD><TD>$CheckinNotesName</TD>" | Out-File $VersionFname -append
"</TR>" | Out-File $VersionFname -append
}
}
# close table add space
"</table><BR/><BR/>" | Out-File $VersionFname -append
My guess is that my Dockerfile needs to add something for "Microsoft.TeamFoundation.VersionControl.Client".
Any help would be appreciated.
What I found worked best was to give up on the TFS client assemblies in PowerShell and use the TFS REST API instead. Here is an example that gets the properties of a single work item (WI):
#============================================
# Get-TFSFieldsByWiId
#============================================
function Get-TFSFieldsByWiId([string]$Id) {
$url = 'http://YourTFSUrl:YourPort/YourProject/_apis/wit/workitems?ids=' + $Id+'&$expand=all&api-version=YourVersion'
# Step 1. Create a username:password pair
$credPair = "$(''):$($password)"
# Step 2. Encode the pair to Base64 string
$encodedCredentials = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($credPair))
# Step 3. Form the header and add the Authorization attribute to it
$headers = @{ Authorization = "Basic $encodedCredentials" }
# Step 4. Make the GET request
$responseData = Invoke-WebRequest -Uri $url -Method Get -Headers $headers -UseBasicParsing
$data = $responseData.Content
$data = $data | ConvertFrom-Json
$WIDetails = $data.value
return $WIDetails
}
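The Basic-auth header built in steps 1-3 is plain string work and can be sanity-checked without a TFS server. A minimal sketch (the password value is a placeholder):

```powershell
# Encode an empty-username:password pair for an HTTP Basic Authorization header.
$password = 'xxx'                      # placeholder; use a real password or PAT
$credPair = ":$password"               # empty username, password-only pair
$encoded  = [System.Convert]::ToBase64String(
    [System.Text.Encoding]::ASCII.GetBytes($credPair))
$headers  = @{ Authorization = "Basic $encoded" }
```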

IIS WAS process cannot be stopped via PowerShell

On a Windows Server 2008 R2, 64-bit machine I am running the following code:
$global:arrServer = @("ph1", "ph2", "ph3")
$global:arrDienste = @("W3SVC", "WAS", "IISADMIN")
$global:strPfad = "D:\WASLogs\"
$global:strLogTime = Get-Date -Format "yyyy-MM-dd--hh-mm-ss"
$global:strLogDatei = $global:strPfad + "WARTUNG--" + $global:strLogTime + ".log"
Log_Abfrage_und_Generierung
Dienste_Stop
Function Dienste_Stop
{
echo "Stop of the services successful?" | Out-File $global:strLogDatei -Append -Force
foreach($strServer in $global:arrServer)
{
$strInterim2 = $strServer + " (" + $global:appServerNamen + ")"
echo " " $strInterim2 | Out-File $global:strLogDatei -Append -Force
foreach($strDienst in $global:arrDienste)
{
$objWmiService = Get-Wmiobject -Class "win32_service" -computer $strServer -filter "name = '$strDienst'"
if( $objWmiService.State )
{
$rtnWert = $objWmiService.stopService()
Switch ($rtnWert.returnvalue)
{
0 { echo "$strDienst stopped!" | Out-File $global:strLogDatei -Append -Force }
2 { echo "$strDienst throws: 'Access denied!'" | Out-File $global:strLogDatei -Append -Force }
3 { echo "Service $strDienst is not existing on $strServer!" | Out-File $global:strLogDatei -Append -Force }
5 { echo "$strDienst already stopped!" | Out-File $global:strLogDatei -Append -Force }
DEFAULT { echo "$strDienst service reports ERROR $($rtnWert.returnValue)" | Out-File $global:strLogDatei -Append -Force }
}
}
else
{
echo "Service $strDienst is not existing on $strServer!" | Out-File $global:strLogDatei -Append -Force
}
}
}
}
Function Log_Abfrage_und_Generierung
{
if([IO.Directory]::Exists($global:strPfad))
{
echo "Nothing happening here."
}
else
{
New-Item -ItemType directory -path $global:strPfad
}
}
This can be reproduced on all computers ph1, ph2 and ph3. However, with some other code WAS can be started, and its status can be queried.
Also to note:
All other services can be stopped. Could it have to do with the fact that the binary path for WAS is C:\Windows\system32\svchost.exe -k iissvcs?
I am using WMI on purpose.
What is going on here?
Thanks in advance.
The problem could be that there are multiple services that depend on WAS which need to be stopped first. The StopService() method does not have an overload to stop dependent services. If this doesn't solve the issue, check the return code from StopService() to determine the problem.
It looks like you are handling return code 3 as 'service does not exist'. The docs show this code actually means 'The service cannot be stopped because other services that are running are dependent on it.'
Not sure why you're determined to use WMI when this capability is fully baked into PowerShell:
Stop-Service WAS -Force
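If you do want to stay with WMI, the dependent services can be stopped first by walking the Win32_DependentService association. A sketch under the assumption of a local machine and an elevated session:

```powershell
# Services that depend on WAS are found via the Win32_DependentService
# association; Role=Antecedent puts WAS on the source side of the link.
$deps = Get-WmiObject -Query ("ASSOCIATORS OF {Win32_Service.Name='WAS'} " +
        "WHERE AssocClass=Win32_DependentService Role=Antecedent")

# Stop the running dependents first, then WAS itself.
foreach ($dep in $deps | Where-Object { $_.State -eq 'Running' }) {
    [void]$dep.StopService()
}
[void](Get-WmiObject Win32_Service -Filter "Name='WAS'").StopService()
```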

Publishing DacPacs in Visual Studio 2013

I have an SSDT database project in Visual Studio 2013. This is used as the "answer sheet" when publishing database updates to a database in the other environments. I recently came across Jamie Thompson's blog article on DacPacs, where he writes a great summary on what DacPacs are, and how to use them.
Now, say I have the following scenario:
The SSDT project in VS2013, which is version 1.0.33
A database in my Dev environment, which is version 1.0.32
A database in my S-test environment, which is version 1.0.31
According to Jamie, publishing database changes using DacPacs is idempotent, i.e. I can publish the DacPac from the SSDT project in bullet 1 to the database in bullet 3, and it will get all the changes done to the database in both versions 1.0.32 and 1.0.33, since the DacPac contains information about the entire DB schema (which then also includes the changes done in version 1.0.32).
Is this a correct understanding of how publishing a DacPac works?
Yes. Once you have defined your model in a DACPAC in a declarative way, you can deploy it to any target environment, whatever the version of your database.
The engine will automatically generate the proper change scripts according to the target.
You can deploy (publish) your model from Visual Studio or from the command line using the SqlPackage.exe utility. Here is an example of a PowerShell script that uses SqlPackage.exe and a publish profile file. You can choose to publish directly or to generate the change script (set the $action variable). The DACPAC file and the publish profile file have to be in the same folder as the .ps1 file. A log file will be generated:
$scriptPath = split-path -parent $MyInvocation.MyCommand.Definition
####################################
$action = 'Publish' #Only generate script: 'Script'; Publish directly: 'Publish'
$databaseName = 'Test'
$serverName = 'localhost'
$dacpacPath = Join-Path $scriptPath '.\Test\bin\Debug\Test.dacpac'
$publishProfilePath = Join-Path $scriptPath '.\Test\Scripts\Publish\Test.publish.xml'
$outputChangeScriptPath = Join-Path $scriptPath 'TestDeploymentScript.sql'
$logPath = Join-Path $scriptPath 'TestDeployment.log'
####################################
$sqlPackageExe = 'C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe'
if ($action.ToUpper() -eq 'SCRIPT')
{
Write-Host '********************************' | Tee-Object -File "$logPath"
Write-Host '* Database Objects Scripting *' | Tee-Object -File "$logPath"
Write-Host '********************************' | Tee-Object -File "$logPath"
$args = "/Action:Script /TargetDatabaseName:$databaseName /TargetServerName:$serverName " +
"/SourceFile:""$dacpacPath"" /Profile:""$publishProfilePath"" /OutputPath:""$outputChangeScriptPath"" "
$command = "& ""{0}"" {1}" -F $sqlPackageExe, $args
Invoke-Expression $command | Tee-Object -File "$logPath"
if($LASTEXITCODE -ne 0)
{
$commandExitCode = $LASTEXITCODE
$Error[0] | Tee-Object -File $outputChangeScriptPath
return $commandExitCode
}
}
if ($action.ToUpper() -eq 'PUBLISH')
{
# DWH
Write-Host '*********************************' | Tee-Object -File "$logPath"
Write-Host '* Database Objects Deployment *' | Tee-Object -File "$logPath"
Write-Host '*********************************' | Tee-Object -File "$logPath"
$args = "/Action:Publish /TargetDatabaseName:$databaseName /TargetServerName:$serverName " +
"/SourceFile:""$dacpacPath"" /Profile:""$publishProfilePath"" "
$command = "& ""{0}"" {1}" -F $sqlPackageExe, $args
Invoke-Expression $command | Tee-Object -File "$logPath"
if($LASTEXITCODE -ne 0)
{
$commandExitCode = $LASTEXITCODE
$Error[0] | Tee-Object -File $outputChangeScriptPath
return $commandExitCode
}
}

If Then Else broke my script

Currently I have this code -
Set-ExecutionPolicy Unrestricted
$name = (Get-WmiObject win32_bios).SerialNumber.Trim()
$oldname = (Get-WmiObject win32_computersystem).Name.Trim()
IF ($oldname -eq $name){Exit}
Else{ Rename-computer -ComputerName $oldname -NewName "$name" -force
Start-Sleep -s 5
Restart-Computer}
I have it set to run as a scheduled task at logon. Without the If/Else it works perfectly, but I don't want it to run every time a user logs in, because that would just be a cycle of rebooting. Any help would be greatly appreciated.
I would suggest some changes:
Set-ExecutionPolicy is unnecessary: if the machine has started processing the script, the execution policy isn't a problem. Remove it and specify the policy in the powershell.exe call instead, like: powershell.exe -ExecutionPolicy Unrestricted
Use if ($oldname -ne $name) { Rename-Computer ... } so you can remove the Else part. Much cleaner.
Try running the modified script below, and report back with the output in the scriptlog.txt file.
$logpath = "c:\scriptlog.txt"
$name = (Get-WmiObject win32_bios).SerialNumber.Trim()
$oldname = (Get-WmiObject win32_computersystem).Name.Trim()
"NewName is '$name'" | Out-File $logpath -Append
"OldName is '$oldname'" | Out-File $logpath -Append
IF ($oldname -ne $name){
"If-test TRUE" | Out-File $logpath -Append
Rename-computer -ComputerName $oldname -NewName $name -Force
Start-Sleep -s 5
Restart-Computer
} else { #I've added the else-part just because of logging.
"IF-test FALSE" | Out-File $logpath -Append
}
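As a side note, on PowerShell 3.0+ Rename-Computer can handle the reboot itself, which removes the Start-Sleep/Restart-Computer pair. A sketch, assuming an elevated session:

```powershell
if ($oldname -ne $name) {
    # -Restart reboots the machine once the rename has been registered
    Rename-Computer -NewName $name -Force -Restart
}
```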

Keep old default printer name on new print server

I have a task to update all client printer settings during a migration from an old 2003 R2 print server to a new 2008 R2 print server. All clients are Win7 with PowerShell 2.0, and I created a script that adds new printers and deletes old printers on the client.
However, it messes up the default printer setting on the client: it seems to be random whether it changes the default printer to a random printer or no default printer is set at all.
I was thinking of using Get-WmiObject -Class Win32_Printer -Filter "Default = $true", and that works; I can see the correct (and old) default printer.
But if I try to set the new default printer with the same name, it fails (or more precisely, what happens is just random).
Maybe I am calling $printer.SetDefaultPrinter() in the wrong place?
Code:
Param (
$newPrintServer = "Server2",
$PrinterLog = "\\LogSVR\PrintMigration$\PrintMigration.csv"
)
<#
#Header for CSV log file:
"COMPUTERNAME,USERNAME,PRINTERNAME,RETURNCODE-ERRORMESSAGE,DATETIME,STATUS" |
Out-File -FilePath $PrinterLog -Encoding ASCII
#>
Try {
Write-Verbose ("{0}: Checking for printers mapped to old print server" -f $Env:USERNAME)
$printers = @(Get-WmiObject -Class Win32_Printer -Filter "SystemName='\\\\Server1'" -ErrorAction Stop)
$DefPrinter = Get-WmiObject -Class Win32_Printer -Filter "Default = $true"
If ($printers.count -gt 0) {
ForEach ($printer in $printers) {
Write-Verbose ("{0}: Replacing with new print server name: {1}" -f $Printer.Name,$newPrintServer)
$newPrinter = $printer.Name -replace "Server1",$newPrintServer
$returnValue = ([wmiclass]"Win32_Printer").AddPrinterConnection($newPrinter).ReturnValue
If ($returnValue -eq 0) {
"{0},{1},{2},{3},{4},{5}" -f $Env:COMPUTERNAME,
$env:USERNAME,
$newPrinter,
$returnValue,
(Get-Date),
"Added Printer" | Out-File -FilePath $PrinterLog -Append -Encoding ASCII
Write-Verbose ("{0}: Removing" -f $printer.name)
$printer.Delete()
"{0},{1},{2},{3},{4},{5}" -f $Env:COMPUTERNAME,
$env:USERNAME,
$printer.Name,
$returnValue,
(Get-Date),
"Removed Printer" | Out-File -FilePath $PrinterLog -Append -Encoding ASCII
$DefPrinter.SetDefaultPrinter()
} Else {
Write-Verbose ("{0} returned error code: {1}" -f $newPrinter,$returnValue) -Verbose
"{0},{1},{2},{3},{4},{5}" -f $Env:COMPUTERNAME,
$env:USERNAME,
$newPrinter,
$returnValue,
(Get-Date),
"Error Adding Printer" | Out-File -FilePath $PrinterLog -Append -Encoding ASCII
}
}
}
} Catch {
"{0},{1},{2},{3},{4},{5}" -f $Env:COMPUTERNAME,
$env:USERNAME,
"WMIERROR",
$_.Exception.Message,
(Get-Date),
"Error Querying Printers" | Out-File -FilePath $PrinterLog -Append -Encoding ASCII
}
I may be misunderstanding, but the default printer (defprinter) is also located on Server1, right? So you hold a reference to defprinter (printer X), then you delete all printers (including printer X) and try to make defprinter, which no longer exists, the default printer again. That won't work, and a random printer will get the default attribute.
First, you should store the unique printer name ($printer.Name) of the default printer before the loop starts. Then, when the loop is done, search for the newly created printer WMI object that represents the previous default printer (using the printer name you saved pre-loop) and make that the default.
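A sketch of the suggested fix, using the variables and the Server1 literal from the question. It captures only the name of the default printer before the loop, then re-resolves that name against the new server afterwards:

```powershell
# Before the loop: remember only the NAME of the current default printer,
# not the WMI object (the object will be deleted during the migration).
$oldDefaultName = (Get-WmiObject Win32_Printer -Filter "Default = TRUE").Name

# ... the add/delete loop from the question runs here ...

# After the loop: find the re-created printer on the new server and make it default.
$newDefaultName = $oldDefaultName -replace 'Server1', $newPrintServer
$newDefault = Get-WmiObject Win32_Printer | Where-Object { $_.Name -eq $newDefaultName }
if ($newDefault) { [void]$newDefault.SetDefaultPrinter() }
```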
