Copy deployment result from an Azure Mac hosted agent to a private Windows agent

In Azure DevOps I have a release definition that runs the command productbuild --component $(System.DefaultWorkingDirectory)/$(RELEASE.PRIMARYARTIFACTSOURCEALIAS)/My/Folder.app/ /Applications My.pkg to create a new pkg file from the built artifact. This command is executed on a Mac hosted agent.
Now I need to put the pkg on a specific path of a Windows machine that runs an Azure DevOps private agent. My problem is the copy operation from the Mac hosted machine to the private machine running the private agent. Is there any way to accomplish this task?
Thank you

Since you can't move the pkg creation to a build pipeline, you need to upload the file somewhere first, for instance to Blob Storage (if you already use Azure this should not be a problem) or to an FTP server (it could be on your host agent or not), and then trigger a pipeline/release (using this extension), passing the URL/location of the uploaded pkg file.
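As an illustration only, here is a minimal sketch of the Blob Storage upload as an Azure CLI step; the service connection, storage account, and container names are placeholders, not values from the question:

- task: AzureCLI@2
  displayName: 'Upload pkg to Blob Storage'
  inputs:
    azureSubscription: 'my-azure-service-connection'   # placeholder service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Upload the pkg produced by productbuild to a storage container
      az storage blob upload \
        --account-name mystorageaccount \
        --container-name packages \
        --name My.pkg \
        --file "$(System.DefaultWorkingDirectory)/My.pkg" \
        --auth-mode login

A release or pipeline job running on the Windows private agent can then download the blob to the desired path.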

You can publish the pkg file to an Azure Artifacts feed in your release pipeline using the Universal Packages task, and then download the pkg file to your private machine. See the steps below:
1. Create an Azure Artifacts feed from the Azure DevOps portal.
2. Add a Universal Packages task in your release pipeline to publish your pkg file as a universal package to the above artifacts feed.
- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    publishDirectory: '$(Build.ArtifactStagingDirectory)/package.pkg'
    vstsFeedPublish: 'FeedId'
    vstsFeedPackagePublish: 'package_name'
3. Add an agent job in your release pipeline stage and configure it to run on your private agent.
Then you can add a Universal Packages task in this agent job to download the pkg file to your private machine, for example as sketched below.
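A minimal sketch of that download step, assuming the same feed ID and package name used for publishing; the version and destination directory are placeholders:

- task: UniversalPackages@0
  displayName: 'Universal download'
  inputs:
    command: download
    vstsFeed: 'FeedId'
    vstsFeedPackage: 'package_name'
    vstsPackageVersion: '*'                                      # placeholder: '*' takes the latest published version
    downloadDirectory: '$(System.DefaultWorkingDirectory)/pkg'   # placeholder destination path on the private machine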

Related

Copy the files from Linux vm to azure repository

I don't know if this is possible or not.
I have one Linux VM where we have some files. We don't have git installed inside the Linux VM; on the other hand we have an Azure repository.
Using an Azure pipeline job, we have to copy all the files from the Linux VM to the Azure repository and start the build; if the build is successful then we have to merge the copied code into the Azure repo.
Please let me know if this is possible.

How to run a PowerShell script after checking in code to Azure DevOps

I have a PowerShell script to push a package to Azure DevOps.
I want to run it automatically every time I check in code to Azure DevOps from Visual Studio.
Is this possible? How can I do this?
If you are pushing the package to a Git repository in Azure DevOps, you can create a build pipeline that executes this script, and give that pipeline a trigger so it runs after each commit (see the sketch below).
If you are pushing packages to Azure DevOps Artifacts, then you have the option of using the Azure DevOps REST API or the PowerShell module for Azure DevOps to trigger a specific build.
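For the first option, here is a minimal azure-pipelines.yml sketch with a CI trigger that runs a PowerShell script on every push; the branch name and script path are placeholders, not details from the question:

trigger:
- main                                      # placeholder branch to watch

pool:
  vmImage: 'windows-latest'

steps:
- task: PowerShell@2
  displayName: 'Push package'
  inputs:
    targetType: filePath
    filePath: 'scripts/push-package.ps1'    # placeholder path to the script in the repo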

Why can't 'dotnet lambda deploy-function' pull a package from an Azure DevOps artifact feed?

I'm using the AWS deploy lambda task within Azure DevOps. The lambda function that gets deployed is set up to pull a package from an artifact feed within the same Azure DevOps repo/installation.
If I run a NuGet restore in a step before the deploy, the package can be accessed fine; however, when it hits the AWS Lambda .NET Core Deployment step, it gets a 401 when trying to read from the same feed.
Does anyone know how I could configure the lambda release step to successfully read from a custom feed?
The specific error is:
Response status code does not indicate success: 401
I am having the same issue but hopefully I can offer a bit of a new angle on it.
Rather than using the AWS deploy lambda task, we package our lambdas and push them to S3 so that CloudFormation can deploy them. This uses the AWS dotnet toolkit to construct the deployment package (which is what the AWS deploy lambda task does in the background). The PowerShell step that performs this looks like:
dotnet lambda package
The resultant package will then typically be generated inside of the bin/release folder beneath your project.
What this then lets you do is add the parameters --msbuild-parameters "--no-restore" to the packaging process, which stops the automatic restore step from being triggered. Inside Azure DevOps build pipelines you can set a build step before it that restores all solutions or csproj files, and that step authenticates against your feed automatically (a sketch of such a step follows the command below). We also set the version number of the assemblies, and I wanted to get rid of an annoying warning, so our current version of this call looks like:
dotnet lambda package ("/p:Version=" + $VersionNumber) "/p:PreserveCompilationContext=false" --msbuild-parameters "--no-restore"
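For illustration, a hedged sketch of what such an authenticated restore step could look like in a YAML pipeline; the feed name and project pattern are placeholders, not part of the original answer:

- task: DotNetCoreCLI@2
  displayName: 'Restore from the private feed'
  inputs:
    command: restore
    projects: '**/*.csproj'       # placeholder project pattern
    feedsToUse: 'select'
    vstsFeed: 'MyFeed'            # placeholder feed name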
The problem that I am now running into is that passing in the msbuild-parameters seems to set the framework to target Red Hat Linux (rhel.7.2-x64), resulting in the following error:
publish: C:\Program Files\dotnet\sdk\2.1.500\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(198,5): error NETSDK1047: Assets file 'C:\Agent\_work\1\s\Kiosk.Microservice.User.Lambda.Command\obj\project.assets.json' doesn't have a target for '.NETCoreApp,Version=v2.0/rhel.7.2-x64'. Ensure that restore has run and that you have included 'netcoreapp2.0' in the TargetFrameworks for your project. You may also need to include 'rhel.7.2-x64' in your project's RuntimeIdentifiers. [C:\Agent\_work\1\s\Kiosk.Microservice.User.Lambda.Command\Kiosk.Microservice.User.Lambda.Command.csproj]
I very explicitly want it to build for netcoreapp2.0, so I don't actually want to build for Red Hat Linux.
This is where I am currently stuck: if I don't use the flag to stop the unauthenticated NuGet restore step, I get the unauthorized error, and I can't seem to pass dotnet.exe my feed credentials. If I do use the flag, it builds for Red Hat Linux for no coherent reason. Hopefully this gets you at least a little farther!
Update: I now have it working. I went and found the dotnet CLI call that dotnet lambda publish actually wraps (in the git repository for the toolset) and duplicated its steps without the middleman. Because the msbuild-parameters flag was no longer used, it didn't try to build for Red Hat Linux. I did also have to create the zip file afterwards, but that is fairly trivial. The following PowerShell generates the new packages without the AWS dotnet toolset:
# Run the build that will generate the proper files
dotnet publish --no-restore -f netcoreapp2.0 -c Release
# Build the path to the publish output ($PathToCSProj is the project directory, defined earlier in the script)
$PathToZip = $PathToCSProj + "\bin\Release\netcoreapp2.0\publish"
# Derive the package name from the project file ($csproj is the .csproj file item, also defined earlier)
$PackageName = $csproj[0].Name -replace '\.csproj$', ''
# Create the zip file from the publish output
Compress-Archive -Path $PathToZip -DestinationPath ($PathToZip + "\" + $PackageName + ".zip")
I hope this helps!

Deploying to Octopus from TeamCity with .NET Core not creating .zip

I am doing the following steps:
dotnet restore
dotnet publish
octopusDeploy: Push packages
The second step creates a 'published-app' folder and the third step is meant to take that and create a .zip file and send it to the Octopus server.
The third step is connecting to the Octopus server but gives the error:
Running command: octo.exe push --server http://server.com/ --apikey SECRET
Pushing packages to Octopus server
Please specify a package to push
I am following this https://stackoverflow.com/a/38927027 so my third step has:
%teamcity.build.workingDir%/published-app/**/* => App.zip
Any ideas why the zip file is not being created?
Not sure if you ever got this working for yourself; however, just in case it helps anyone, we recently came across the same issue deploying an ASP.NET Core 2.0 web application running on net471, built by TeamCity 2017.1.4 (build 47070).
After some tinkering I noticed that the "OctopusDeploy: Create and Push Packages" build step ran at our git checkout root directory, so I ended up having to use the following values for the "Package path patterns":
%ProjectDirectory%/published-app/**/* => %ProjectName%.%GitVersion.NuGetVersion%.zip
NB: %ProjectDirectory%, %ProjectName% and %GitVersion.NuGetVersion% are build parameters we have manually defined elsewhere in the build process, which TeamCity substitutes. %ProjectDirectory% is simply the application's source directory relative to the root of the git checkout, i.e. WebApplication1, so the full path would be <full checkout path>/WebApplication1.
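As a worked example with hypothetical values (%ProjectDirectory% = WebApplication1, %ProjectName% = WebApplication1, %GitVersion.NuGetVersion% = 1.2.3), the pattern above would resolve to:
WebApplication1/published-app/**/* => WebApplication1.1.2.3.zip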
Another gotcha that we experienced: at the time of writing, the combination of TeamCity and octo.exe (from Octopus.TeamCity v4.15.10) didn't like creating nupkg files, so make sure you produce a ".zip" file. In the error instances we would receive the following error:
Error from Octo.exe: Cannot run program "C:\BuildAgent\temp\buildTmp\octo-temp\3.0\octo.exe" (in directory "C:\BuildAgent\work\4e62985fa616fa1f"): CreateProcess error=206, The filename or extension is too long

Maven artifacts are deployed to a wrong location when invoking the Gradle install task on TeamCity CI

I'm trying to set up a simple continuous integration system on my local PC. I use Gradle as my build system (gradle wrapper option). One of the steps in the build process is to deploy build artifacts to a local repository (located at "{user_dir}/.m2/repository"). It works OK when I run it from the local PC, but when it runs on TeamCity CI (version 9) it deploys them to "{windows_dir}\System32\config\systemprofile\.m2\repository". This is probably some configuration issue, but I couldn't manage to solve it. In the build logs I saw that it can't find the local repository in the settings.xml file. I've tried to add it, but it didn't help. How can I configure TeamCity to use the local repository folder in the user directory?
I found out what the issue was. If you install the TeamCity system services to run under the admin account, they will always use the Windows directory. In order to use the user's directory you need to install the services under that user account.
Source: https://confluence.jetbrains.com/display/TCD9/Maven+Server-Side+Settings
