Configuration Details
We have a SonarQube 6.7.2 (build 37468) on-premise installation running.
The instance is accessible from our office IP without HTTP Basic Auth, as well as from everywhere else with HTTP Basic Auth.
The "Force user authentication" option is off.
All projects are set to private - in case someone gets past the HTTP Basic Auth.
My user belongs to the sonar-administrators group and has "Browse" and "See Source Code" permissions on all projects.
Using the web interface in the browser, I am able to see all projects including analysis results etc., as expected.
Problem
However, using the Web API, I receive "Insufficient Privileges" errors on several API calls.
My user has a valid token that I pass to cURL as described in the documentation. I even created a new token, to be sure I'm not using an invalid one.
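A quick way to rule out a bad token is the standard api/authentication/validate endpoint (the response below shows the expected shape rather than my captured output):

$ curl -u my_user_token: https://sonar.example.org/api/authentication/validate
{"valid":true}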
Example
$ curl -X GET -u my_user_token: "https://sonar.example.org/api/measures/search_history?component=the_project&metrics=lines_to_cover%2Cuncovered_lines%2Ccoverage&ps=1000"
{"errors":[{"msg":"Insufficient privileges"}]}
Question
Is it not possible to retrieve measures information or project information via the API for projects that are set to private?
The above call works fine if the project is set to public. (But then again, if the project is set to public, that call works fine even without authentication.)
We do have the same issue when using the SonarLint plugin for PHPStorm. The plugin works fine as long as the projects are public, but server sync stops working as soon as projects are set to private.
I'm thinking maybe it would be best to deny all requests to SonarQube except from our whitelisted office IP and have everyone connect via VPN if they want to access the instance from their home office. That would allow us to make all projects public and not have any of these issues. Is that the recommended way to run an on-premise installation of SonarQube?
Solution
Turns out the SonarQube instance was running behind an nginx reverse proxy that dropped the Authorization HTTP header from the request before passing it on to SonarQube.
After fixing the nginx configuration, all Web API calls work as expected.
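For anyone hitting the same symptom, the relevant proxy block now looks roughly like this; a sketch from memory, assuming SonarQube listens on its default port 9000:

location / {
    # This kind of line was the culprit: it cleared the client's
    # Authorization header, so the token never reached SonarQube.
    # proxy_set_header Authorization "";

    # Forward the header explicitly:
    proxy_set_header Authorization $http_authorization;
    proxy_pass http://127.0.0.1:9000;
}

nginx forwards the Authorization header by default, so an explicit directive clearing it somewhere in the chain is the usual cause of this symptom.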
Related
I am running two Windows Server 2016 machines with IIS 10.0.14393: one server for staging purposes, and one for production.
The application has one "front-end app" and one "back-end REST API" running on the same IIS server. The front end communicates with the back end (surprise!). The difficulty I am facing is that the staging server works as expected, i.e. no "Sign in" box appears when entering the front-end web page (React). However, on the production server this box pops up.
When the page is loaded, JavaScript fetches some information from the API, and it seems that this async fetch is causing the pop-up (the request stays pending until login).
I have studied the configuration of IIS on the two servers but can't seem to find any obvious differences.
Both instances have Windows Authentication and Anonymous Authentication enabled for both the front end and the back end. I need this because the API uses different types of authentication for different endpoints.
Has anyone solved a similar issue?
Thanks
If someone experiences a similar issue the following link may help: https://support.microsoft.com/en-us/help/258063/internet-explorer-may-prompt-you-for-a-password
In my case I was sending the request to the API using the full domain URL. The problem was fixed by just using the machine name (and port, in my case) in the request. If the full domain name with dots is used, Windows assumes the request is meant for the Internet rather than the intranet and will not include any credentials.
Another, and probably more robust, solution is to add the site in question under: Internet Properties -> Security -> Local intranet -> Sites -> Advanced.
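The same zone assignment can also be scripted per user via the registry, which is handy for rolling it out to a team. A sketch, assuming a hypothetical host name myapp.corp.example; the ZoneMap value 1 means Local intranet:

REM Hypothetical host name; adjust to your own domain.
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\myapp.corp.example" /v http /t REG_DWORD /d 1 /f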
I am using apiary preview --server to watch a file while editing it and have a UI generated.
I would like to hit a local dev server in the "Try" section of the UI, but when I hit "Call Resource", a request is made to POST https://jsapi.apiary.io/apis/null/http-transactions/.
HOST is set to http://localhost:3050 and I'm expecting it to hit that endpoint.
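For reference, HOST lives in the metadata section at the top of the blueprint; a minimal illustration rather than my actual file:

FORMAT: 1A
HOST: http://localhost:3050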
How can I change this?
$ apiary version
0.5.2
Currently, all console calls are routed via apiary.io servers to work around CORS limitations.
If the API is published, you can work around this limitation by exposing your local port using a service such as ngrok, as sketched below.
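A minimal example, assuming the local dev server listens on port 3050 as above; ngrok prints a public URL that the hosted console can reach:

$ ngrok http 3050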
A version of the console that makes calls to the API directly, using a browser plugin where needed to get around CORS limitations, is currently being tested. You should be able to use it soon.
I'm currently running MAMP Pro (OS X 10.9.4) with several different virtual servers on my local machine, one for each of my clients' projects. I've been trying to connect to the Google API using OAuth and have everything working just fine when 'REDIRECT URIS' is set to:
http://localhost:8888
However, as mentioned I've got several of these servers running,
e.g. 'https://clientname1:8890' or 'https://clientname2:8890'
Whenever I enter those into the API console I just get a 'Whoops' message telling me something has gone wrong at Google's end:
"Server Error: Whoops! Our bad."
It seems as though only 'localhost' is allowed via the API for local testing. Is there any way I can set it up so I can test off any of my local servers?
I had to add my localhost to the allowed referrers list to test locally; without that inclusion, I get 403 Forbidden errors. You probably just need to add clientname1 and clientname2 (or clientname1:8890 and clientname2:8890) to the allowed referrers list in the Google Developers Console. Mine is set under public API access, so it may be a different problem for you depending on which API you're using and how you're using it. Hope it helps.
I have a WebApi project that wraps the Dynamics CRM Online web service and provides a REST API. I have a simple controller that gets some contacts from CRM and returns them to the caller.
Everything works fine when I run it in the local emulator. However, when I deploy the project to Azure, I can reach the home page, but the controllers all return HTTP 500 errors. Why would this happen? And how can I troubleshoot to get more details?
UPDATE
The issue is the absence of Microsoft.IdentityModel.dll on the Server 2012 instance running the web role in Azure. I found this by opening the web role instance via RDP, installing Fiddler, and making the request from Fiddler to the local IIS server, which responded with the detailed error.
Now my issue is figuring out how to enable IdentityModel on a Windows Azure Web Role. You're supposed to be able to add it via the Server 2012 Add Roles and Features wizard, but it's totally locked down on the Web Role. You can't check any boxes that aren't already checked. Is this even possible?
The issue is giving the Web Role access to Windows Identity Foundation when it's inherently not there. Marc Schweigert provides clear steps to do this here:
http://blogs.msdn.com/b/devkeydet/archive/2013/01/27/crm-online-amp-windows-azure-configuring-single-sign-on-sso.aspx
Go to the 23:00 mark of the video and you'll see the 4 necessary steps:
1. Reference Microsoft.IdentityModel.dll (needs the WIF SDK installed)
   a. Set Copy Local = true on the reference
2. Create RegisterWIFGAC.cmd in your web role project
3. Create a Startup Task in ServiceDefinition.csdef that invokes RegisterWIFGAC.cmd (sketched below)
4. Add GacUtil to the project (used by the startup task) to put Microsoft.IdentityModel.dll in the GAC every time the web role starts
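A sketch of steps 2 and 3, assuming gacutil.exe is shipped in the web role project next to the script (file names follow the post; your project layout may differ):

REM RegisterWIFGAC.cmd - puts WIF in the GAC at role startup
%~dp0gacutil.exe /nologo /i %~dp0Microsoft.IdentityModel.dll
exit /b 0

And the corresponding startup task in ServiceDefinition.csdef:

<Startup>
  <Task commandLine="RegisterWIFGAC.cmd" executionContext="elevated" taskType="simple" />
</Startup>

The task needs executionContext="elevated" because installing into the GAC requires administrative rights.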
I'm trying to set up Web Deploy on IIS 7, so that 1-click publishing in Visual Studio works.
Every time I try to publish the app I get a 401 error, which seems to be a failure to authenticate against WMSvc. I have set the build output verbosity to detailed and can see the Web Deploy command being used. When I run it from the command prompt I get the same 401 error (ERROR_USER_UNAUTHORIZED); however, when I change the authType parameter in the command from basic to NTLM, it works fine and publishes correctly...
As far as I was aware, WMSvc only worked with basic auth and not NTLM. As far as my server config goes, I have tried setting the management service to accept only Windows users, and to allow both Windows users and management service users; neither setting seems to make any odds.
I can connect fine using IIS Manager locally to the remote server, but as soon as I try to use any of the export functionality on the remote server I get permission issues over the remote connection. This all seems most odd; can anyone shed some light on this behaviour?
Just providing the answer that worked for me: after searching in vain, I stumbled upon an article by Phil Haack (whilst looking for something else entirely).
It turned out I had a URL ACL defined which was stopping everything from working.
Followed the instructions in that post and it all just worked like it should :-)
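In case it saves someone the read: the gist is listing the HTTP.sys URL reservations and deleting the conflicting one. A sketch, assuming the default WMSvc binding on port 8172; check the output of the first command for the actual conflicting URL on your machine:

netsh http show urlacl
netsh http delete urlacl url=https://+:8172/MsDeploy.axd/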
I personally wish Web Deploy was a bit less fragile when it comes to setting it up; it works great once you've gone through the pain.