We use Rackspace as our cloud provider and spin up new build agents as and when needed from existing server images.
TeamCity then detects the new build agent but does not authorise it automatically.
Can you tell me how to authorise the build agents without having to manually go to TeamCity and click Authorise? These servers can spin up in different flavours, each with a different config.
Do I just need to write the correct authorisation key to the build agent config file, or is there a better approach to using TeamCity with cloud servers?
In TeamCity 10 you can use the REST API to authorise the agent on startup using an admin username/password:
curl -sS -X PUT --data "true" -H "Content-Type:text/plain" -u ${TEAMCITY_SERVER_USERNAME}:${TEAMCITY_SERVER_PASSWORD} ${TEAMCITY_SERVER_URL}/httpAuth/app/rest/agents/${TEAMCITY_AGENT_NAME}/authorized
If you tail the BuildAgent/logs/teamcity-agent.log file you will see a "Registered" message; once it appears, you can run the above command.
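On a cloud image, the two steps above (watch the log, then call the REST API) can be combined into one startup hook. A minimal sketch, assuming a default agent install path and the same TEAMCITY_* environment variables as the command above:

```shell
#!/bin/sh
# Sketch of a startup hook: wait until the agent has registered with the
# server, then authorise it via the REST API. The log path and the
# TEAMCITY_* variables are assumptions, not fixed names.
AGENT_LOG="${AGENT_LOG:-/opt/buildagent/logs/teamcity-agent.log}"

wait_for_registration() {
  # Poll the agent log for the "Registered" message, up to ~5 minutes.
  for i in $(seq 1 60); do
    [ -f "$AGENT_LOG" ] && grep -q "Registered" "$AGENT_LOG" && return 0
    sleep 5
  done
  return 1
}

authorise_agent() {
  curl -sS -X PUT --data "true" -H "Content-Type:text/plain" \
    -u "${TEAMCITY_SERVER_USERNAME}:${TEAMCITY_SERVER_PASSWORD}" \
    "${TEAMCITY_SERVER_URL}/httpAuth/app/rest/agents/${TEAMCITY_AGENT_NAME}/authorized"
}
```

Call `wait_for_registration && authorise_agent` from the instance's boot script (cloud-init, a systemd unit, etc.).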
The approach that worked for me was to store the unique authorisation code that is written to the build agent config file and pass it into a TeamCity build step. The build step then updates the build agent config file using PowerShell, and the build agent is authorised the next time it communicates with the TeamCity server.
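The idea can be sketched as a small helper that writes the stored token back into a fresh agent's config before it first connects (shown here as a shell function rather than PowerShell; authorizationToken is the property TeamCity writes to buildAgent.properties, the path in the usage line is an assumption):

```shell
# Rewrite (or append) the authorizationToken line in buildAgent.properties
# so a freshly spun-up agent reuses a token the server already trusts.
set_agent_token() {
  conf="$1"    # path to buildAgent.properties
  token="$2"   # the stored token from the already-authorised image
  if grep -q '^authorizationToken=' "$conf"; then
    # Replace the existing token line in place.
    sed -i "s/^authorizationToken=.*/authorizationToken=${token}/" "$conf"
  else
    # No token line yet: append one.
    echo "authorizationToken=${token}" >> "$conf"
  fi
}
```

Usage: `set_agent_token /opt/buildagent/conf/buildAgent.properties "$STORED_TOKEN"` before the agent service starts.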
I'm trying to work around a problem with my self-hosted Azure Pipelines agent. One of the workarounds listed here is to make the agent log on as myself (instead of the "Network Service" account it currently uses).
So I tried that. I went to the Services app, edited the "Azure Pipelines Agent" service and changed the user to be myself.
Windows then tells me that I'll need to stop and restart the service. But when I do, I get an error dialog with Error 1069: "The service did not start due to a logon failure".
I have tried using both my Windows 10 logon PIN (the one I type to log in when I sit down at the machine) and my Azure AD password for our organization, which lets me log on to all our resources. Neither one works.
I know I have the correct account. I don't have any other organization passwords that I know of. What am I doing wrong?
Changing the logon user on the DevOps agent service won't work.
If you'd like to run the agent under a specific account, you need to uninstall the agent (config.cmd remove) and then reconfigure it, supplying that account during the configuration.
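For the record, the remove-and-reconfigure step can also be done unattended; a sketch run from an elevated PowerShell prompt in the agent folder, where the URL, pool, agent name, PAT, and account values are all placeholders:

```shell
# Hypothetical unattended reconfiguration; every value below is a placeholder.
.\config.cmd remove --auth pat --token YOUR_PAT
.\config.cmd --unattended --url https://dev.azure.com/yourorg --auth pat --token YOUR_PAT --pool self2 --agent myagent --runAsService --windowsLogonAccount "DOMAIN\yourname" --windowsLogonPassword "yourWindowsPassword"
```

With --windowsLogonAccount/--windowsLogonPassword the service is created directly under the desired account, which avoids the Error 1069 logon failure from editing the service by hand.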
You can validate the user account in the DevOps pipeline with a task like this:

pool: self2
steps:
- script: whoami
I am trying to move my backend API app (a Node.js Express server) from Heroku to AWS Elastic Beanstalk. But I did not realize how many features Heroku was providing automatically, which I now have to set up manually in AWS.
So here is the list of features I discovered were missing in AWS, and the solutions I have implemented.
Could you please let me know if I am missing anything needed to run my APIs smoothly in AWS and get the equivalent of what I had in Heroku?
auto-restart server when crashed: I am using PM2 to automatically restart my server in case of a critical error
SSL certificate: I am using an AWS ACM certificate
logging: I have installed the Datadog agent in order to receive logs in Datadog
logging response time: I have added the "morgan-body" package to get each request's duration and response code (I had to manually filter out the AWS health checks and search-engine bots, because AWS gave me an IP address that was constantly visited by Baidu bots)
server timeout: I have implemented a 1200000 ms (20-minute) timeout on the whole app (is there a better option?)
auto deploy from GitHub: I have implemented a GitHub automation to deploy code automatically (are there better options?)
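For the PM2 item in the list above, the setup can be as small as this sketch (the entry file and process name are placeholders):

```shell
# Keep the API process alive: PM2 restarts it automatically if it crashes.
pm2 start server.js --name api
# Persist the process list and generate an init script, so PM2 itself
# (and therefore the API) also comes back after an instance reboot.
pm2 save
pm2 startup
```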
Am I missing anything? This app is already live, so I do not want to put my customers at risk when I move from Heroku to AWS...
Thanks for your help!
I believe you are covered:
Heroku Dynos restart after crashing or raising an error (Heroku Restarting Policy)
SSL certificates are provided for free
logging: Heroku supports various plugins, including Datadog
response time (in milliseconds) is logged automatically
the HTTP timeout is 30 seconds (it cannot be changed)
deploy from GitHub is possible (by connecting the accounts); Docker deployment is also supported. Better options? Use GitHub Actions to deploy a new version after a code push or tag.
If you are migrating a production environment, I strongly suggest first setting up a Heroku (free) dyno to test and verify that all your needs are satisfied.
My company is using VSTS builds for continuous integration. Every commit triggers a build, which runs on a Linux agent. The problem is that after the build finishes, I need the agent to restart a service (which requires root). How can I automatically restart the service via the agent with minimal security risk?
You can add a new user to the build agent machine, grant it just the rights needed to restart the service, and then change the build agent's service account to that user.
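To keep the security risk minimal, the granted right can be limited to a single command with a sudoers rule; a sketch assuming systemd, with "buildagent" and "myservice" as placeholder names:

```shell
# /etc/sudoers.d/buildagent -- edit with: visudo -f /etc/sudoers.d/buildagent
# Allow the agent's account to restart exactly one service, passwordless.
buildagent ALL=(root) NOPASSWD: /bin/systemctl restart myservice
```

The build step can then run `sudo systemctl restart myservice` without the agent ever holding a full root shell.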
There is also an article about deploying an agent on Linux.
Currently, I start a container with a Bamboo remote agent on it, and every time I need to manually approve the agent on the Bamboo server. The idea is to automate the whole process: run a container that launches a Bamboo remote agent, perform the build, and then kill the container. Since the Bamboo server expects manual approval, this poses a challenge, so I am looking for a way to auto-approve the agent when it registers.
Thank you!
I don't think there's an option to auto-approve agents. The approval requirement is a security feature, so automatically approving any remote agent would defeat its purpose.
That being said, there is an option to disable agent authentication, which effectively means that any new agent is approved right away: actually what you're asking for.
You can disable agent authentication from the Bamboo administration pages.
I'm currently 'playing' with Plastic and their (brand new) TeamCity integration plugin.
The plugin blurb says "When installing Team City on Windows systems, it normally uses the SYSTEM user account. We recommend changing the user that executes the Team City application."
The thing is, I can't work out what kind of user I should substitute: I would like to be able to access Plastic (on the server) using AD, but wouldn't that mean that TeamCity would also have to run as a network user in order to access Plastic?
An alternative (for me accessing Plastic) would be user/password - but I can't make the TeamCity service run with user/password.
Am I missing something obvious, or is the paint just too wet?
I'm also using PlasticSCM and the TeamCity plugin; this is my configuration:
For the server: configure your PlasticSCM server with LDAP authentication and select "Active Directory" as the server type.
For the client: configure your PlasticSCM client with LDAP authentication, use your credentials, and try the "Test connection" button.
The client setup will generate a "client.conf" file at "C:\Users\your_user\AppData\Local\plastic". This file is used by PlasticSCM client to authenticate with the PlasticSCM server.
So, if your TeamCity service is running under the Administrator account, you have to place this file in the Administrator's "...\AppData\Local\plastic" directory. If you run the TeamCity service under your own account, you don't need to do anything; the file is already in the right place.
You have another option (if you are still running the TeamCity service as Administrator): place the "client.conf" file next to your "cm.exe" file, because "cm.exe" tries to find this file first in its own location and only then in the current user's "AppData\Local\plastic" directory. This option is only valid if you are the only user working with PlasticSCM on the machine.
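Either placement above is a single copy from a PowerShell prompt; the PlasticSCM client folder shown is an assumption, so adjust it to your install:

```shell
# Option A: copy the file into the service account's profile
# (here the Administrator account, matching the scenario above).
copy "C:\Users\your_user\AppData\Local\plastic\client.conf" "C:\Users\Administrator\AppData\Local\plastic\"
# Option B: put it next to cm.exe, which checks its own folder first.
copy "C:\Users\your_user\AppData\Local\plastic\client.conf" "C:\Program Files\PlasticSCM\client\"
```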
Hope it helps!