This is similar to thread "SonarLint - Invalid binding message in intellij".
However, following the solution there leads to another problem, which is stated in the title; the full message is below:
Failed to update the following projects: Please check if the server bindings are updated and the module key is correct: [module_name]
By the way, the module name is correct, because it is selected from the "Project" dropdown, whose values are retrieved remotely.
Any clue(s)?
EDIT:
Right before the failure, the log shows a GET returning 401, either due to the configuration of the SiteMinder agent along with NTLM, or simply due to an unsuccessful credential redirect.
Just to answer my own question: this may happen if there are proxy issues. I was able to see the GET 401 in the logs during the binding.
Here's the conversation I had with SonarQube team: https://community.sonarsource.com/t/sonarlint-failed-to-update-the-following-projects/38471
Furthermore, if you experience this in a corporate/enterprise environment, reach out to your DevOps team to acquire a different endpoint. Enterprise configurations usually involve two endpoints:
Endpoint for SSO
Endpoint that handles/redirects API call
For the SonarQube plugin to handle this appropriately, it requires the endpoint that handles the API calls.
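To figure out which endpoint SonarLint can actually talk to, one option is to hit a plain Web API route on each candidate from the same machine (and through the same proxy). A minimal Python sketch, with the endpoint URLs and token being placeholders you would replace with the values your DevOps team gives you:

# Hypothetical sketch: compare how the SSO-fronted endpoint and the API
# endpoint respond to a plain SonarQube Web API call.
import requests

CANDIDATE_ENDPOINTS = [
    "https://sonar-sso.example.corp",   # SSO-fronted endpoint (placeholder)
    "https://sonar-api.example.corp",   # direct API endpoint (placeholder)
]
TOKEN = "your-sonarqube-user-token"     # placeholder

for base in CANDIDATE_ENDPOINTS:
    try:
        # /api/system/status is a standard SonarQube Web API call.
        r = requests.get(f"{base}/api/system/status",
                         auth=(TOKEN, ""), timeout=10, allow_redirects=False)
        print(base, r.status_code, r.headers.get("Content-Type"))
        # A 401 or a redirect to an SSO login page suggests SonarLint cannot
        # use this endpoint; a 200 with JSON suggests it can.
    except requests.RequestException as exc:
        print(base, "request failed:", exc)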
I am currently trying to set up DSSO with Okta using Firefox. I have been able to successfully set up Edge/Chrome/IE on the domain without issue. I have followed the documentation as outlined on the Okta website for setting up Firefox, to no avail. We have been troubleshooting with the Okta experts for the last three days with no forward progress, so I figured I would post the information available here:
Firefox version 107.0.1 32-bit
TLS 1.2
NTLM v2
Windows Server 2019
The result that the agentlessDssoPrecheck is returning:
{"result" : "FAIL_NTMLSSP"} - (that is not a misspelling; the return should be NTLM, but whatever)
I have the following options set in Firefox:
network.negotiate-auth.trusted-uris org.kerberos.okta.com
network.negotiate-auth.delegation-uris org.kerberos.okta.com
network.negotiate-auth.allow-non-fqdn true
network.negotiate-auth.allow-proxies true
network.automatic-ntlm-auth.trusted-uris org.kerberos.okta.com
network.automatic-auth.allow-non-fqdn true
I attempted to pull the logs using set NSPR_LOG_MODULES=negotiateauth:5, but while Firefox does create the log, it doesn't write anything to it, including the failure. (If I set the value to all:5, I get a ton of information, but it appears useless for what I am trying to troubleshoot.)
I attempted to pull Fiddler and Wireshark information; I haven't set up the decoding on the Wireshark portion yet. However, I did get an extract of the Fiddler information, but I didn't spot anything in there that seemed to indicate why the failure was occurring.
I have one suspicion: the following option has been set in both Edge and Chrome: DisableAuthNegotiateCnameLookup = enable. I don't see that option, or something similar, in Firefox to be able to adjust that value.
Some additional information to share:
Firefox is connecting to and submitting an authentication message to org.kerberos.okta.com; however, it appears to be in the wrong format.
"""Received authorization header contains raw NTLM token, will fail
precheck. Okta is not getting Kerberos ticket but raw NTLM token
instead."""
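One way to confirm whether the client can obtain a Kerberos service ticket for Okta at all, rather than falling back to NTLM, is to request one directly with the Windows klist tool. A rough Python wrapper around that check; the SPN is an assumption based on the hostname above:

# Rough diagnostic sketch: ask Windows for a Kerberos service ticket for the
# Okta DSSO SPN. If this fails, the browser falls back to NTLM, which matches
# the "raw NTLM token" message above.
import subprocess

SPN = "HTTP/org.kerberos.okta.com"  # assumed SPN; adjust for your org

result = subprocess.run(["klist", "get", SPN],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
# Success prints a cached ticket for the SPN; a Kerberos error such as
# KDC_ERR_S_PRINCIPAL_UNKNOWN means no ticket can be obtained and an NTLM
# fallback is to be expected.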
Firefox with Kerberos can be sensitive to reverse name lookups.
For testing purposes, have you tried setting a manual entry in the hosts file to return the correct DNS name?
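If you want to verify the reverse-lookup theory before editing the hosts file, a quick forward/reverse resolution check for the DSSO hostname (taken from the prefs above) may help:

# Small sketch to check forward and reverse DNS for the Okta DSSO host,
# since Firefox/Kerberos can be sensitive to reverse lookups.
import socket

host = "org.kerberos.okta.com"

ip = socket.gethostbyname(host)                # forward lookup
print("forward:", host, "->", ip)

try:
    rev_name, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    print("reverse:", ip, "->", rev_name)
except socket.herror as exc:
    print("reverse lookup failed:", exc)

# If the reverse name does not match the name Firefox was given, a manual
# hosts-file entry (as suggested above) is a quick way to test the theory.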
I would be interested to know your results from Wireshark using the following filter:
dns || kerberos
I am new to JMeter and I am having a lot of difficulty understanding how it works.
I created a TC to add an object to my system using BlazeMeter. Then, I imported the TC into JMeter.
This TC fails when it should not (at least that's what I think), because whenever I use the system it works correctly:
This is the thread group if you need it to help me:
Am I doing something wrong? Am I missing something?
IMPORTANT: Should I be able to see my object added to the system if the TC passes?
As per the HTTP Status Code 403 Forbidden description:
The HTTP 403 Forbidden client error status response code indicates that the server understood the request but refuses to authorize it.
This status is similar to 401, but in this case, re-authenticating will make no difference. The access is permanently forbidden and tied to the application logic, such as insufficient rights to a resource.
If your script assumes authentication, most probably it fails somewhere due to missing or improperly working correlation, for example this eedd968fe... bit
looks utterly suspicious; most probably you need to replace it with some form of dynamic parameter extracted from the previous request using a suitable JMeter Post-Processor.
Normally the flow looks like:
Open login page
Identify and extract all dynamic parameters and save them into JMeter Variables
Send the parameters along with credentials in the 2nd request
Check out the Using Regular Expressions to Extract Tokens and Session IDs to Variables article for an example challenge and solution.
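The same idea, sketched outside JMeter in Python just to show what correlation means; the URL, field names, and token pattern are assumptions about a typical form-based login, and in JMeter the extraction step would be done with a Regular Expression (or similar) Post-Processor:

# Illustration of the correlation flow above. All URLs and field names are
# hypothetical.
import re
import requests

session = requests.Session()

# 1. Open the login page.
login_page = session.get("https://example.com/login")

# 2. Extract the dynamic parameter (e.g. a CSRF token) from the response.
match = re.search(r'name="csrf_token"\s+value="([^"]+)"', login_page.text)
token = match.group(1) if match else ""

# 3. Send the extracted parameter along with the credentials.
response = session.post("https://example.com/login", data={
    "username": "test_user",
    "password": "test_password",
    "csrf_token": token,          # dynamic value, never hard-coded
})
print(response.status_code)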
I am doing performance testing for a web application using JMeter. I'm new to JMeter, but I went through the concepts and am still not familiar with it. All test cases are good, except there is 100% in the % Error column. I am still wondering why it is showing. Even though I test it using a real browser it works properly, but it shows 100% error (after logging in to my web application it displays 0% error for the Sign In and Sign Out options). I added a View Results Tree listener, and it shows response code 401, whereas there is no error when I do it in a real browser. Please help me if anyone knows the solution. Many thanks.
The 401 HTTP response code stands for Unauthorized. It means that you need to provide credentials somehow. JMeter provides the HTTP Authorization Manager to deal with different authentication types, and it needs to be configured differently to bypass the relevant authentication challenges.
The most straightforward and easy one is basic HTTP authentication; it requires just a username and password. See the Adding Auth chapter of the Building a Monitor Test Plan guide for details.
If your application uses more complex authentication, like NTLM or Kerberos, you'll need to provide at least the domain, and in the case of Kerberos also supply the realm and some extra settings in the config files. See the Windows Authentication with Apache JMeter guide for detailed instructions.
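If you're not sure which scheme the application challenges with, a quick check outside JMeter is to look at the WWW-Authenticate header that comes back with the 401. A small Python sketch, with the URL and credentials as placeholders:

# Quick outside-JMeter check of which credentials/scheme the server expects.
import requests
from requests.auth import HTTPBasicAuth

url = "https://example.com/protected/resource"   # placeholder

no_auth = requests.get(url)
print("without credentials:", no_auth.status_code,
      no_auth.headers.get("WWW-Authenticate"))
# The WWW-Authenticate header names the scheme the HTTP Authorization Manager
# has to satisfy (Basic, NTLM, Negotiate, ...).

with_auth = requests.get(url, auth=HTTPBasicAuth("test_user", "secret"))
print("with Basic credentials:", with_auth.status_code)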
I have been trying to get this resolved, without any success.
I have a webapp residing on my domain, say www.myDomain.com. I need to call a service which is present on another domain, say www.anotherDomain.com/service.do?
I'm using SproutCore's SC.Request.getUrl(www.anotherDomain.com/service.do?) to call that service.
I get an error that says, Origin www.myDomain.com is not allowed by access-control-allow-origin.
When I was in dev stages, and using sc-server, the issue was resolved using proxies. Now that I have deployed the app to an actual server, I replaced all the lines where I had set up the proxy with the actual domain name. I have started getting that error again.
The problem is that I CANNOT MAKE ANY CHANGES to the server on the other domain. All the posts that I have come across state that the other server on the other domain ought to provide access-control-allow-origin header and that it ought to support the OPTIONS verb.
My question is, is it possible for me to connect to that service using SproutCore's SC.Request.getUrl() method?
Additionally, the other posts that I have read mentioned that a simple GET request ought not to be preflighted. Why then are my requests going out as OPTIONS instead of GET?
Thanks a ton in advance! :D
This is not a SproutCore issue; it's a JavaScript Same-Origin Policy issue.
If you can't modify the production server, you have no option but to develop your own proxy server, and have your proxy hit the real service.
This is effectively replacing sc-server in your production environment.
All this server would do is take the incoming request and pass it along to www.anotherDomain.com/service.do.
You would need to make sure you passed along all parameters, cookies, headers, the HTTP verb, etc.
This is far from ideal, because now errors can occur in more places. Did the real service fail? Did the proxy fail? etc.
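To make the idea concrete, here is a minimal sketch of such a proxy using Python (Flask + requests). The domain names come from the question; everything else is an illustrative assumption rather than production-ready code:

# Minimal proxy sketch: forward every incoming call to the other domain and
# relay the response back, so the browser only ever talks to www.myDomain.com.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
TARGET = "https://www.anotherDomain.com"

@app.route("/proxy/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # Forward verb, query string, body, cookies and most headers.
    upstream = requests.request(
        method=request.method,
        url=f"{TARGET}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={k: v for k, v in request.headers
                 if k.lower() not in ("host", "content-length")},
        cookies=request.cookies,
        allow_redirects=False,
    )
    # Strip hop-by-hop headers before relaying the response.
    excluded = {"content-encoding", "transfer-encoding", "connection",
                "content-length"}
    headers = [(k, v) for k, v in upstream.headers.items()
               if k.lower() not in excluded]
    return Response(upstream.content, upstream.status_code, headers)

if __name__ == "__main__":
    app.run(port=8080)

The SproutCore app would then call SC.Request.getUrl('/proxy/service.do?...') against its own origin, and the browser never makes a cross-domain request.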
If you could modify the other domain, you could
1) deploy your SC app there.
2) put in the CORS headers so you could make cross-domain requests
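For completeness, option 2 amounts to the other server sending headers along these lines; the Flask hook below is purely illustrative, since that server may be a completely different stack, and the header names are what matter:

# Illustration only: what "putting in the CORS headers" means in practice.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    response.headers["Access-Control-Allow-Origin"] = "https://www.myDomain.com"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type"
    return response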
I am trying to implement Google Checkout (GCO) on a new server; the process seemed to work fine on the old server.
The error from the GCO integration console is the timeout error you might expect if there is load on the server and/or the response takes longer than 3 seconds.
To perform a test (not integrating with my database), I have set up some code to send an email to me instead. If I hit the HTTPS URL manually, I get the email and I can see an output to the screen. If I then leave it at that, Google still returns the timeout error and I don't get an email. So I have doubts as to whether Google is even able to hit the HTTPS URL.
I did temporarily attempt to use the insecure URL for testing and indeed I received the email; however, this isn't the route we've developed for, so the problem is something to do with the secure URL specifically.
I have looked into the certificate, which is a UTN-USERFirst-Hardware certificate listed as accepted on http://checkout.google.com/support/sell/bin/answer.py?answer=57856 . I have also tried to temporarily disable the firewall, with no joy. Does anyone have any suggestions?
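One thing that may be worth checking, before blaming the firewall, is what the server actually presents during the TLS handshake from an external client's point of view; a missing intermediate certificate commonly breaks automated callers while browsers still work. A small Python sketch, with the hostname as a placeholder:

# Sketch to check whether the HTTPS endpoint verifies cleanly for a client
# that, unlike a browser, will not fetch missing intermediates on its own.
import socket
import ssl

host = "www.example.com"   # replace with the callback host
context = ssl.create_default_context()

try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("handshake OK, issuer:", dict(x[0] for x in cert["issuer"]))
except ssl.SSLError as exc:
    print("TLS verification failed:", exc)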
Good to hear you figured out the problem.
I'm adding the links below to add a little more context for future readers about how Google Checkout uses HTTP Basic Authentication:
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#urls_for_posting
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#https_auth_scheme
http://code.google.com/apis/checkout/developer/Google_Checkout_HTML_API_Notification_API.html#Receiving_and_Processing_Notifications
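As I recall from those docs, the callback authentication boils down to validating a standard Basic Authorization header built from the Merchant ID and Merchant Key. A minimal Python sketch with placeholder credentials, offered as an illustration rather than a drop-in implementation:

# Hedged illustration of the Basic-auth scheme described in the links above.
import base64

MERCHANT_ID = "1234567890"          # placeholder
MERCHANT_KEY = "your-merchant-key"  # placeholder

def is_authorized(authorization_header: str) -> bool:
    """Check an incoming Authorization header against the merchant credentials."""
    expected = base64.b64encode(
        f"{MERCHANT_ID}:{MERCHANT_KEY}".encode()).decode()
    return authorization_header == f"Basic {expected}"

# Example: the header built from the placeholder credentials passes the check.
print(is_authorized("Basic " + base64.b64encode(
    b"1234567890:your-merchant-key").decode()))  # True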