After changing TeamCity to use the ProGet packaging service instead of the built-in service, I keep getting an error that an API key is required. How can I pass the ApiKey that I set up for the service in the URL that I am using to access the service in the TeamCity NuGet Publish command line?
I believe TeamCity requires that the API key is passed in, regardless of whether the feed requires it or not. You can just use name:pass as the API key in this case.
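With a plain `nuget push`, that looks like the sketch below (the feed URL and credentials are placeholders, not values from your setup):

```shell
# Placeholder feed URL and credentials -- substitute your own.
# ProGet doesn't require a real API key here, but the client insists on one,
# so a "name:password" pair satisfies the check.
nuget push mypackage.nupkg -ApiKey myuser:mypassword \
    -Source https://proget.example.com/nuget/MyFeed/
```

In the TeamCity NuGet Publish runner, the same name:pass value goes in the API key field.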
As part of our TFS build I'm trying to push the latest package from our build pipeline to our Octopus Deploy instance. However, I'm getting the following error.
I'm using a script task to execute the following octo push command.
octo push --package=mypackage.nupkg --overwrite-mode=OverwriteExisting --server=https://mycompany.octopus.app --user=myname@mycompany.com --pass=mypassword --debug --LogLevel=debug
Any ideas what's causing the error and how do I fix it?
It looks like the CLI is attempting to log in to the Octopus Server using a username and password, but your instance isn't configured to accept that authentication type.
Is this pipeline pushing to an Octopus Cloud instance? If so, your authentication is via OctopusID, an external auth provider, rather than a username/password account on the Octopus instance itself.
As a general rule, using an API key is the recommended approach here, rather than username/password authentication.
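Assuming you generate a key in the Octopus web portal, the push command becomes something like the sketch below (the key value is a placeholder):

```shell
# API-XXXXXXXXXXXXXXXX is a placeholder -- create a real key under
# Profile > My API Keys in the Octopus web portal, then pass it with
# --apiKey instead of --user/--pass.
octo push --package=mypackage.nupkg --overwrite-mode=OverwriteExisting \
    --server=https://mycompany.octopus.app --apiKey=API-XXXXXXXXXXXXXXXX
```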
Configuration Details
We have a SonarQube 6.7.2 (build 37468) on-premise installation running.
The instance is accessible from our office IP without HTTP Basic Auth, as well as from everywhere else with HTTP Basic Auth.
The "Force user authentication" option is off.
All projects are set to private - in case someone gets past the HTTP Basic Auth.
My user belongs to the sonar-administrators group and has "Browse" and "See Source Code" permissions on all projects.
Using the web interface in the browser, I am able to see all projects including analysis results etc., as expected.
Problem
However, using the Web API, I receive "Insufficient Privileges" errors on several API calls.
My user has a valid token that I pass to cURL as described in the documentation. I even created a new token, to be sure I'm not using an invalid one.
Example
$ curl -X GET -u my_user_token: "https://sonar.example.org/api/measures/search_history?component=the_project&metrics=lines_to_cover%2Cuncovered_lines%2Ccoverage&ps=1000"
{"errors":[{"msg":"Insufficient privileges"}]}
Question
Is it not possible to retrieve measures information or project information via the API for projects that are set to private?
The above call works fine if the project is set to public. (But then again, if the project is set to public, that call works fine even without authentication.)
We do have the same issue when using the SonarLint plugin for PHPStorm. The plugin works fine as long as the projects are public, but server sync stops working as soon as projects are set to private.
I'm thinking maybe it would be best to deny all requests to SonarQube except from our whitelisted office IP and have everyone connect via VPN if they want to access the instance from their home office. That would allow us to make all projects public and not have any of these issues. Is that the recommended way to run an on-premise installation of SonarQube?
Turns out the SonarQube instance was running behind an nginx reverse proxy that dropped the Authorization HTTP header from the request before passing it on to SonarQube.
After fixing the nginx configuration, all Web API calls work as expected.
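In case others hit this: a minimal sketch of a proxy block that preserves the header is shown below (the upstream address and location are illustrative, not our real config):

```nginx
location / {
    proxy_pass http://127.0.0.1:9000;
    # nginx forwards the Authorization header by default; make sure no
    # directive clears it, or pass it through explicitly:
    proxy_set_header Authorization $http_authorization;
}
```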
I am trying to get loader.io to work with codeship for my heroku app. This is the error that I am receiving:
heroku-cli: Installing CLI... 22.45MB/22.45MB
The requested API endpoint was not found. Are you using the right HTTP
verb (i.e. `GET` vs. `POST`), and did you specify your intended version
with the `Accept` header?
The application "URL FOR HEROKU" can't be accessed. Please make sure your Heroku API Key is configured correctly in the deployment configuration.
I have already added the ssh keys, but I am still getting this error. Any help?
This seems to be an error when deploying to Heroku; most likely the API key or the app name you used is invalid. (Note that you don't use ssh keys here, but the Heroku API key, as described in the Codeship docs.)
Also, I noticed you used the term "URL FOR HEROKU"; be aware that you should use the app name, not the full URL.
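If you're unsure which value to paste into the deployment configuration, the Heroku CLI can print the API key for the logged-in account:

```shell
# Prints the API key for the currently logged-in Heroku account.
# This value -- not an ssh key, and not the app URL -- is what
# Codeship's Heroku deployment settings expect.
heroku auth:token
```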
We use rackspace as our cloud provider and spin up new build agents as and when needed from existing server images.
TeamCity then detects the build agent image but does not authorise it automatically.
Can you tell me how to authorise the build agents without having to manually go to TeamCity and click authorise, as these servers can spin up in different flavors, each with different config?
Do I just need to write the correct authorisation key to the build agent config file, or is there a better approach to using TeamCity with cloud servers?
In TeamCity 10 you can use the REST API to authorise the agent on startup using an admin username/password:
curl -sS -X PUT --data "true" -H "Content-Type:text/plain" -u ${TEAMCITY_SERVER_USERNAME}:${TEAMCITY_SERVER_PASSWORD} ${TEAMCITY_SERVER_URL}/httpAuth/app/rest/agents/${TEAMCITY_AGENT_NAME}/authorized
If you tail the BuildAgent/logs/teamcity-agent.log file you will see a Registered message and then after that you can run the above command.
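Those two steps can be combined into an agent-startup hook along these lines (the install path and environment variable names are assumptions based on a typical Linux agent install, provisioned by your cloud-init script):

```shell
# Assumed log location for a default Linux agent install.
LOG=/opt/buildagent/logs/teamcity-agent.log
# Wait until the agent reports it has registered with the server...
until grep -q "Registered" "$LOG" 2>/dev/null; do
  sleep 5
done
# ...then flip its "authorized" flag via the REST API.
curl -sS -X PUT --data "true" -H "Content-Type:text/plain" \
  -u "${TEAMCITY_SERVER_USERNAME}:${TEAMCITY_SERVER_PASSWORD}" \
  "${TEAMCITY_SERVER_URL}/httpAuth/app/rest/agents/${TEAMCITY_AGENT_NAME}/authorized"
```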
The approach that worked for me was to store the unique authorisation code that is written to the build agent config file and then pass this into the TeamCity build step. The build step then updates the build agent config file using PowerShell, and the build agent is authorised when it next communicates with the TeamCity server.
Does the system administrator need to install anything extra to get the EWS Managed API working for clients on Exchange 2010? At the moment I am getting problems just using Autodiscover via the Managed API, so I'm beginning to think the server has been configured incorrectly.
Has any administrator here had any experience with setting up Exchange 2010 to allow access via EWS Managed API?
EWS is enabled by default in Exchange 2010 with no changes needed, but it can be affected by common changes made to the system after installation. Installing an SSL certificate with a name different from the server's can cause problems, e.g. using mail.company.com instead of exmail01.company.com.
You can use the Exchange PowerShell command Set-AutodiscoverVirtualDirectory (and Set-WebServicesVirtualDirectory for the EWS virtual directory) to change the internal and external URLs used to connect to EWS/Autodiscover.
The Test-OutlookWebServices command will run through a series of tests and let you know what failed.
From there, you will be able to see the specific errors in your Autodiscover configuration. Once they are fixed, you should see the EWS Managed API work correctly.
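For example (the server identity and host names below are illustrative, not taken from your environment):

```powershell
# Illustrative identity and URLs -- match them to the name on your SSL cert.
Set-AutodiscoverVirtualDirectory -Identity "EXMAIL01\Autodiscover (Default Web Site)" `
  -InternalUrl "https://mail.company.com/Autodiscover/Autodiscover.xml" `
  -ExternalUrl "https://mail.company.com/Autodiscover/Autodiscover.xml"
# Then verify end to end:
Test-OutlookWebServices | Format-List
```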