GeoServer workspace and proxy

Can someone explain how the workspace proxy works?
What's the right configuration so I can make requests from a shell (please see below)?
I have GeoServer running in a Docker container, listening on the host on port 12018.
Everything is fine when accessing it through the web browser.
The following URL works in the browser:
http://localhost:12018/geoserver/geonode/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=my_data_name35&maxFeatures=50&outputFormat=application%2Fjson
Using typeName as geonode:my_data_name35 also works:
http://localhost:12018/geoserver/geonode/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=geonode%3Amy_data_name35&maxFeatures=50&outputFormat=application%2Fjson
But from cURL, the first request returns:
<?xml version="1.0" ?>
<ServiceExceptionReport
version="1.2.0"
xmlns="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.opengis.net/ogc http://schemas.opengis.net/wfs/1.0.0/OGC-exception.xsd">
<ServiceException code="InvalidParameterValue" locator="typeName">
Feature type :my_data_name35 unknown
</ServiceException></ServiceExceptionReport>
And also from cURL, the second request returns:
<?xml version="1.0" ?>
<ServiceExceptionReport
version="1.2.0"
xmlns="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.opengis.net/ogc http://schemas.opengis.net/wfs/1.0.0/OGC-exception.xsd">
<ServiceException code="InvalidParameterValue" locator="typeName">
Feature type geonode:my_data_name35 unknown
</ServiceException></ServiceExceptionReport>
Any help is appreciated. Thanks!

I found the problem; it was very basic, actually.
The requested resource requires authentication, and in the browser the session cookie is passed along automatically.
With cURL, the credentials also need to be passed explicitly.
It does not return 403 Forbidden, presumably because some resources don't require authentication.
Sorry for the noise.
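For anyone hitting the same wall: the fix is simply to send credentials with the request. A minimal sketch, assuming HTTP Basic auth and the common admin/geoserver default account (both are assumptions; substitute whatever your container was provisioned with):

```python
import base64
import urllib.request

# Hypothetical credentials -- admin/geoserver is a common GeoServer
# default, but use whatever your container actually has
user, password = "admin", "geoserver"

url = ("http://localhost:12018/geoserver/geonode/ows"
       "?service=WFS&version=1.0.0&request=GetFeature"
       "&typeName=geonode%3Amy_data_name35&maxFeatures=50"
       "&outputFormat=application%2Fjson")

# Build the HTTP Basic auth header that the browser's session cookie
# was implicitly standing in for
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

# urllib.request.urlopen(req) would now return the GeoJSON features
print(req.get_header("Authorization"))
```

The cURL equivalent is just `curl -u admin:geoserver "<the same URL>"`.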

Related

AEM 6.5 (Apache Sling) /saml_login not running postProcessor

I have a protected page set up in AEM using the Authentication Requirement checkbox on the author instance. Then, in the OSGi console, I have the configuration for my external Okta SAML setup:
<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
xmlns:jcr="http://www.jcp.org/jcr/1.0"
jcr:primaryType="sling:OsgiConfig"
identitySyncType="default"
keyStorePassword="admin"
service.ranking="5002"
idpHttpRedirect="{Boolean}false"
createUser="{Boolean}true"
defaultRedirectUrl="/"
userIDAttribute="ssoGuid"
idpIdentifier=""
assertionConsumerServiceURL=""
defaultGroups="[everyone]"
storeSAMLResponse="{Boolean}false"
signatureMethod="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
idpCertAlias="certalias___1657659258516"
addGroupMemberships="{Boolean}true"
path="[/content/mySite]"
digestMethod="http://www.w3.org/2001/04/xmlenc#sha256"
synchronizeAttributes="[...]"
clockTolerance="60"
groupMembershipAttribute="groupMembership"
idpUrl="oktaURL"
serviceProviderEntityId="https://stage.mySite.com"
logoutUrl=""
handleLogout="{Boolean}false"
userIntermediatePath="sso"
spPrivateKeyAlias=""
useEncryption="{Boolean}false"
nameIdFormat="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"/>
And in my okta config, I have https://stage.mySite.com/saml_login as the SSO URL and https://stage.mySite.com as the audience restriction.
When I navigate to the protected page in AEM, I get redirected to Okta, I sign in, and I am redirected back to https://stage.mySite.com/saml_login; all of this is expected. Here is where it gets weird: I then get a 301 redirect to https://stage.mySite.com/saml_login.html, which then gives a 404. It seems like AEM does not have a listener set up for /saml_login and so performs the redirect.
Any thoughts on what I might have misconfigured?
In my case, it was a dispatcher config issue (or nginx; I'm not sure where the rewrite was done).
It was set up to append '.html' to any requested URL that lacked an extension. I needed to add an exception to that rule for /saml_login.
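If the rewrite lives in the dispatcher's Apache vhost, the exception can look something like this (a hypothetical mod_rewrite sketch, not the actual config from this setup; rule names and conditions depend on your existing rules):

```apache
# Existing behavior (illustrative): append ".html" to extensionless URLs.
# Exception: let /saml_login pass through untouched so AEM's SAML
# authentication handler can consume the POSTed assertion.
RewriteCond %{REQUEST_URI} !^/saml_login
RewriteCond %{REQUEST_URI} !\.html$
RewriteRule ^(.*[^/])$ $1.html [PT,L]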

jsf ajax redirect from servlet filter not working

I have a servlet filter that performs a redirect when the session has expired.
For non-ajax requests, the filter calls HttpServletResponse.sendRedirect(myUrl) to perform the redirect, which works great. But I can't say the same for ajax requests.
For ajax requests, the filter writes the partial-response XML with HttpServletResponse.getWriter().println(partialResponseContent) to perform the redirect. It doesn't work: the screen stays the same, appears frozen, and all input fields are inhibited. Any pointers on what I might be missing, or on what I can try in order to figure out the cause of the problem?
Below is the partialResponseContent:
<?xml version="1.0" encoding="UTF-8"?><partial-response><redirect url="/MyAccount/command?cmd=twoFA&TargetUrl=/MyAccount/viewDevices.jsf"></redirect></partial-response>
I double-checked it in Chrome's Developer Tools and saw that the content is being sent correctly (see attached image).
I wasted two days on this, but I'm glad to finally get it to work. The problem was that the value of the url attribute (in the redirect element) was not XML-encoded. My URL contains a & character, which needs to be encoded as &amp; for the redirect to work. Note: I used org.apache.commons.lang.StringEscapeUtils.escapeXml to do the XML encoding.
Below is the before encoding:
<?xml version="1.0" encoding="UTF-8"?><partial-response><redirect url="/MyAccount/command?cmd=twoFA&TargetUrl=/MyAccount/viewDevices.jsf"></redirect></partial-response>
Below is the after encoding:
<?xml version="1.0" encoding="UTF-8"?><partial-response><redirect url="/MyAccount/command?cmd=twoFA&amp;TargetUrl=/MyAccount/viewDevices.jsf"></redirect></partial-response>
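The same encoding step can be reproduced with nothing but a standard library; a Python sketch of just the escaping (the servlet-filter wiring is unchanged):

```python
from xml.sax.saxutils import escape

# The raw redirect target contains a bare '&', which is illegal inside
# an XML attribute value
url = "/MyAccount/command?cmd=twoFA&TargetUrl=/MyAccount/viewDevices.jsf"

safe_url = escape(url)  # '&' becomes '&amp;'
partial_response = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<partial-response><redirect url="%s"></redirect></partial-response>'
    % safe_url
)
print(safe_url)
```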

xpath3 issue in mule

Here is my incoming payload.
<?xml version="1.0" encoding="UTF-8"?>
<detail><ns1:SiperianRequestFault xmlns:ns1="urn:siperian.api">
<ns1:requestName>SearchQuery</ns1:requestName>
<ns1:errorCode>SIP-18018</ns1:errorCode>
<ns1:errorMessage>SIP-18018: Request not recognized by the user profile providers.
Review the server log for more details.</ns1:errorMessage>
</ns1:SiperianRequestFault></detail>
When I query with
xpath3('//detail')
the output is:
SearchQuerySIP-18018SIP-18018: Request not recognized by the user profile providers.Review the server log for more details.
But what I want is to extract the errorCode, errorMessage, etc. individually.
Please use #[xpath3('/detail/*:SiperianRequestFault/*:errorCode')] to get the errorCode. I have used *: as a namespace wildcard. If you want to use the namespace explicitly, you can define it as
<mulexml:namespace-manager includeConfigNamespaces="true">
<mulexml:namespace prefix="ns1" uri="urn:siperian.api" />
</mulexml:namespace-manager>
and then the expression will be #[xpath3('/detail/ns1:SiperianRequestFault/ns1:errorCode')].
Hope this helps.
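Outside of Mule, the same namespace-qualified extraction can be verified with any XML library; a Python standard-library sketch against the payload from the question:

```python
import xml.etree.ElementTree as ET

payload = """<detail><ns1:SiperianRequestFault xmlns:ns1="urn:siperian.api">
<ns1:requestName>SearchQuery</ns1:requestName>
<ns1:errorCode>SIP-18018</ns1:errorCode>
<ns1:errorMessage>SIP-18018: Request not recognized by the user profile providers.
Review the server log for more details.</ns1:errorMessage>
</ns1:SiperianRequestFault></detail>"""

root = ET.fromstring(payload)  # root element is <detail>
# Map the ns1 prefix to the same URI the namespace-manager declares
ns = {"ns1": "urn:siperian.api"}
code = root.find("ns1:SiperianRequestFault/ns1:errorCode", ns).text
message = root.find("ns1:SiperianRequestFault/ns1:errorMessage", ns).text
print(code)  # SIP-18018
```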

Visual Studio 2015/2017 local NuGet feed HTTP 403

We've created a local NuGet feed with NuGet.Server. It is a simple ASP.NET application hosted on an IIS web server that is part of our local company network.
The URL of the feed looks like this:
https://abc.company.com/packages/nuget
The IIS authentication is enabled as follows:
The .NET authorization looks like this:
If I call the above-mentioned feed URL with Postman or Fiddler, I get the following response:
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<service xml:base="https://abc.company.com/packages/nuget/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app">
<workspace>
<atom:title>Default</atom:title>
<collection href="Packages">
<atom:title>Packages</atom:title>
</collection>
</workspace>
</service>
If I now add the URL to the Visual Studio NuGet package sources:
and then choose the newly created package source, the following screen appears, where I enter my domain credentials (domain\user), but nothing happens:
As described before, when I access the site with any browser (IE, Chrome, Edge) I don't have to enter my credentials, and Fiddler/Postman also don't require any credentials.
On the VS PackageManager Output I get the following error message:
[Local] The V2 feed at 'https://abc.company.com/packages/nuget/Search()?$filter=IsAbsoluteLatestVersion&searchTerm=''&targetFramework='net462'&includePrerelease=true&$skip=0&$top=26' returned an unexpected status code '403 Forbidden ( The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. )'.
When I call this URL with a browser, I don't get any errors.
What is wrong with this setup?
The problem was caused by our company proxy, which was also being used when opening internal sites. So I added the following environment variable:
NO_PROXY
with the company-internal domain as its value:
NO_PROXY = "abc.company.com"
Now it works locally and on our build agents.
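Most HTTP stacks honor the NO_PROXY/no_proxy environment variables. A Python standard-library sketch to confirm a given host would bypass the proxy once the variable is set (host name taken from the question; proxy_bypass_environment is a helper present in CPython's urllib.request, though not prominently documented):

```python
import os
import urllib.request

# Exempt the internal feed host from the corporate proxy
os.environ["NO_PROXY"] = "abc.company.com"

# proxy_bypass_environment answers: "would this host skip the proxy,
# given this no-proxy list?"
bypassed = urllib.request.proxy_bypass_environment(
    "abc.company.com", {"no": os.environ["NO_PROXY"]}
)
print(bool(bypassed))  # True
```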
In my case, I had to disable Require SSL in the IIS SSL Settings.

S3: No 'Access-Control-Allow-Origin' for AJAX POST

This issue is driving me a little nuts. I'm trying to upload files via AJAX POST to an S3 bucket.
I have all the credentials correct, because when I do normal HTTP POSTs it creates the resource in the S3 bucket just fine. But I would really like to upload multiple files at once with progress bars, hence I need AJAX.
I have CORS setup on my S3 bucket:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>http://localhost:3000</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Right now I'm just trying to get it working in my development environment (localhost:3000, using standard Rails 4.1).
From my understanding, the above CORS rule should allow AJAX requests from localhost:3000 to the S3 bucket.
However, every time I submit a file via AJAX, I get the following error:
XMLHttpRequest cannot load https://s3.amazonaws.com/<BUCKET>. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.
This doesn't make any sense to me because localhost:3000 IS granted access via the CORS rule.
I've also provided a snippet of the JS I used to submit the form:
$.ajax({
  method: "POST",
  crossDomain: true,
  url: "https://s3.amazonaws.com/<BUCKET>",
  data: $(this).serialize() // contains the necessary S3 form values
})
The form has inputs for the Amazon S3 keys/etc necessary. I know they work because when I do normal HTTP POSTs it creates the asset properly in S3. All I'm trying to do is AJAXify the process.
Am I missing something obvious here?
Using: Rails 4.1, jquery-file-upload, fog gem (for S3)
You can try changing the CORS configuration to:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
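Whichever rule set you end up deploying, it's worth confirming that both the origin and the method of the AJAX call are actually listed, since the browser enforces exactly these elements during the CORS check. A Python standard-library sketch using the rule from the question:

```python
import xml.etree.ElementTree as ET

# The CORS document from the question; swap in whichever variant you deploy
cors_xml = """<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>http://localhost:3000</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>"""

ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(cors_xml)
origins = [e.text for e in root.findall(".//s3:AllowedOrigin", ns)]
methods = [e.text for e in root.findall(".//s3:AllowedMethod", ns)]

# The browser only sends the AJAX POST if both checks pass
print("http://localhost:3000" in origins and "POST" in methods)  # True
```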
Your question seems very similar to a problem I had, which was never properly (precisely) answered either, and which seemed to be related to a browser limitation rather than the actual transfer technology behind it.
Here's a link to my original question and the answers I received here on SO:
Why Doesn't Microsoft Skydrive Download Multiple Files via API?
Hopefully this offers some insight into your problem and isn't just noise.
