We have a local TeamCity instance running and a Bitbucket Cloud instance.
I have configured the connection correctly, and I can see that TeamCity has authenticated via OAuth several times.
In TeamCity it allows me to log in and says to refresh after the successful login, but it just cycles through the login again.
No clue what is wrong.
@LukeP helped me with the first issue; now there's a second.
Any ideas how to resolve this?
I am trying to move my backend API app (a Node.js Express server) from Heroku to AWS Elastic Beanstalk. But I did not realize how many features Heroku was providing automatically that I now have to set up manually in AWS.
So here is the list of features I discovered were missing in AWS and the solutions I have implemented.
Could you please let me know if I am missing something in order to run my APIs smoothly in AWS and get the equivalent of what I had in Heroku?
auto-restart server when crashed: I am using PM2 to automatically restart my server in case of a critical error
SSL certificate: I am using an AWS ACM certificate
logging: I have inserted the Datadog agent in order to receive logs in Datadog
logging response time: I have added the "morgan-body" package to get each request's duration and response code (I had to manually filter out the AWS health checks and search engine bots, because AWS gave me an IP address that was constantly visited by Baidu bots)
server timeout: I have implemented a 1,200,000 ms (20-minute) timeout on the whole app, as in the sketch after this list (any better option?)
auto-deploy from GitHub: I have implemented a GitHub automation to deploy code automatically (better options?)
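For the server timeout, a minimal sketch of the kind of setup I mean, assuming a plain Express app (the port and exact values are illustrative, not prescribed):

const express = require('express');
const app = express();
// ... routes ...

const server = app.listen(process.env.PORT || 3000);

// 20-minute socket timeout on the underlying Node http.Server
server.setTimeout(1200000);

// Behind an AWS load balancer, keep-alive should outlive the balancer's
// default 60 s idle timeout to avoid intermittent 502s; headersTimeout
// must in turn exceed keepAliveTimeout
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000;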
Am I missing something? This app is already live, so I do not want to put my customers at risk when I move from Heroku to AWS...
Thanks for your help!
I believe you are covered:
Heroku Dynos restart after crashing or raising an error (Heroku Restarting Policy)
SSL certificates are provided for free
logging: Heroku supports various plugins, including Datadog
response time (in milliseconds) is logged automatically
HTTP timeout is 30 sec (it cannot be changed)
deploy from GitHub is possible (by connecting the accounts), and Docker deployment is also supported. Better options? Use GitHub Actions to deploy a new version after a code push or tagging; a minimal sketch follows below.
If you are migrating a production environment, I strongly suggest first setting up a Heroku (free) dyno to test and verify that all your needs are satisfied.
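If it helps, here is a minimal sketch of such a workflow targeting Elastic Beanstalk (the community deploy action, the application and environment names, and the region are assumptions you would adapt, not something your setup prescribes):

name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Bundle the app; Elastic Beanstalk reinstalls node_modules itself
      - run: zip -r deploy.zip . -x '.git/*'
      # einaregilsson/beanstalk-deploy is one community action among several
      - uses: einaregilsson/beanstalk-deploy@v21
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: my-api        # assumption
          environment_name: my-api-prod   # assumption
          version_label: ${{ github.sha }}
          region: us-east-1               # assumption
          deployment_package: deploy.zip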
As part of our TFS build, I'm trying to push the latest package from our build pipeline to our Octopus Deploy instance. However, I'm getting the following error.
I'm using a script task to execute the following octo push command.
octo push --package=mypackage.nupkg --overwrite-mode=OverwriteExisting --server=https://mycompany.octopus.app --user=myname@mycompany.com --pass=mypassword --debug --LogLevel=debug
Any ideas what's causing the error and how do I fix it?
It looks like the task is attempting to log in to the Octopus server using a username and password, but your instance isn't configured to accept that authentication type.
Is this pipeline pushing to an Octopus Cloud instance? If so, your authentication is via OctopusID, an external auth provider, rather than a username/password account on the Octopus instance itself.
As a general rule, using an API key is the recommended approach here, rather than username/password authentication.
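With an API key created under your Octopus user profile, the push would look something like this (the key value is a placeholder):

octo push --package=mypackage.nupkg --overwrite-mode=OverwriteExisting --server=https://mycompany.octopus.app --apiKey=API-XXXXXXXXXXXXXXXXXXXXXXXXXX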
I'm trying to get the Azure DevOps unit test task configured and working, but I appear to be hitting an issue with a login failure. The unit tests work when run on the local machine connected to the Azure SQL database, and the username and password have been tested successfully against the Azure SQL database server directly in SSMS.
I've tried a variety of workarounds, such as this one, but to no avail. What could possibly be the issue if access is OK?
Setting up the app.config as shown in the screenshots appeared to address the issue. Specifically, in the connection string I opted to select the level of authentication required.
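The screenshots are not reproduced here, but as a rough illustration of the kind of connection string involved (server, database, and credentials are placeholders, not the exact values from my setup):

<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydb;User ID=myuser;Password=mypassword;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
       providerName="System.Data.SqlClient" />
</connectionStrings>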
Configuration Details
We have a SonarQube 6.7.2 (build 37468) on-premise installation running.
The instance is accessible from our office IP without HTTP Basic Auth, as well as from everywhere else with HTTP Basic Auth.
The "Force user authentication" option is off.
All projects are set to private - in case someone gets past the HTTP Basic Auth.
My user belongs to the sonar-administrators group and has "Browse" and "See Source Code" permissions on all projects.
Using the web interface in the browser, I am able to see all projects including analysis results etc., as expected.
Problem
However, using the Web API, I receive "Insufficient Privileges" errors on several API calls.
My user has a valid token that I pass to cURL as described in the documentation. I even created a new token, to be sure I'm not using an invalid one.
Example
$ curl -X GET -u my_user_token: 'https://sonar.example.org/api/measures/search_history?component=the_project&metrics=lines_to_cover%2Cuncovered_lines%2Ccoverage&ps=1000'
{"errors":[{"msg":"Insufficient privileges"}]}
Question
Is it not possible to retrieve measures information or project information via the API for projects that are set to private?
The above call works fine if the project is set to public. (But then again, if the project is set to public, that call works fine even without authentication.)
We have the same issue when using the SonarLint plugin for PhpStorm. The plugin works fine as long as the projects are public, but server sync stops working as soon as projects are set to private.
I'm thinking maybe it would be best to deny all requests to SonarQube except those from our whitelisted office IP, and have everyone connect via VPN when they want to access the instance from their home office. That would allow us to make all projects public and avoid these issues altogether. Is that the recommended way to run an on-premise installation of SonarQube?
Turns out the SonarQube instance was running behind an nginx reverse proxy that dropped the Authorization HTTP header from the request before passing it on to SonarQube.
After fixing the nginx configuration, all Web API calls work as expected.
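For anyone hitting the same thing, a minimal sketch of the kind of location block involved (host name and port are assumptions; nginx forwards the Authorization header by default, so the usual culprit is a line that explicitly clears it):

location / {
    proxy_pass http://127.0.0.1:9000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;

    # A line like this strips the token that curl sends via -u
    # before it ever reaches SonarQube:
    # proxy_set_header Authorization "";

    # Forwarding the header explicitly makes the intent obvious:
    proxy_set_header Authorization $http_authorization;
}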
Currently, I am starting a container with a Bamboo remote agent on it, and every time I need to manually approve the agent on the Bamboo server. The idea is to automate the whole process: run a container that launches a Bamboo remote agent, perform the build, and then kill the container. Since the Bamboo server expects manual approval, this is posing a challenge, so I am looking for a way to auto-approve the agent when it registers.
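For illustration, a remote-agent container of this kind can be started with the Atlassian-published agent image (the server URL is a placeholder, and this is just one way to run such an agent):

docker run -e BAMBOO_SERVER=http://bamboo.example.com:8085 atlassian/bamboo-agent-base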
Thank you!
I don't think there's an option to automatically approve agents. The approval requirement is a security feature, so auto-approving any remote agent would defeat its purpose anyway.
That being said, there's an option to disable agent authentication, which effectively means that any new agent is approved right away. That is actually what you're asking for.
You can disable agent authentication from the Bamboo administration pages (on the Agents screen).