Hi guys,
I have been trying to integrate the Spring sample app, downloaded from https://github.com/spring-projects/spring-security-saml, with Ping Federate. I have used this sample app to integrate with many other IDPs and it worked fine without any hassle, but Ping Federate seems to be a bit more complicated. This is what I have done so far:
1. Create a connection in Ping using my SP metadata.
2. Export the Ping metadata.
3. Configure it in my SP (securityContext.xml), roughly as sketched below.
4. Start the server.
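For reference, a minimal sketch of how the exported IDP metadata is typically wired into securityContext.xml (the file path and bean names are placeholders, not taken from my actual configuration):

<bean id="metadata" class="org.springframework.security.saml.metadata.CachingMetadataManager">
    <constructor-arg>
        <list>
            <bean class="org.springframework.security.saml.metadata.ExtendedMetadataDelegate">
                <constructor-arg>
                    <!-- Loads the metadata exported from Ping Federate -->
                    <bean class="org.opensaml.saml2.metadata.provider.FilesystemMetadataProvider">
                        <constructor-arg>
                            <value type="java.io.File">/path/to/pingfederate-idp-metadata.xml</value>
                        </constructor-arg>
                        <property name="parserPool" ref="parserPool"/>
                    </bean>
                </constructor-arg>
            </bean>
        </list>
    </constructor-arg>
</bean>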
I get different errors in different scenarios. The one I am currently testing fails with the following error on server restart:
org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP
On investigating the logs, I see the root cause to be
Caused by: java.lang.NullPointerException
at org.opensaml.saml2.common.SAML2Helper.getEarliestExpiration(SAML2Helper.java:112)
at org.opensaml.saml2.metadata.provider.AbstractReloadingMetadataProvider.processCachedMetadata(AbstractReloadingMetadataProvider.java:328)
at org.opensaml.saml2.metadata.provider.AbstractReloadingMetadataProvider.refresh(AbstractReloadingMetadataProvider.java:258)
However, everything works fine if I disable metadataTrustCheck in securityContext.xml using this property on the ExtendedMetadataDelegate bean:

<property name="metadataTrustCheck" value="false"/>
Can someone please help? I have been trying to solve this for the past week. Unfortunately, there is no adequate documentation from Ping for the (latest) version I am using.
Update:
The application works fine if:
- metadata trust check is disabled at the SP and the PF metadata is signed, or
- metadata trust check is enabled at the SP and the PF metadata is unsigned.
However, I am getting the above NullPointerException if:
- metadata trust check is enabled at the SP and the PF metadata is signed.
A while ago, we had exactly the same NullPointerException with IDP metadata (using OpenSAML 2.6.4). As written above, setting metadataTrustCheck="false" on the ExtendedMetadataDelegate did solve the problem, but was not the desired solution.
Alternatively, one could have removed the <Signature> block from the metadata, which is just as bad.
Solution:
Besides adding the (self-signed) certificate, it was necessary to add the next certificate in the chain to the keystore as well, e.g. with keytool as shown below.
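A minimal sketch with keytool, assuming a JKS keystore (the alias, certificate file, and keystore names are placeholders):

# Import the next certificate in the chain into the SP's keystore
keytool -importcert -trustcacerts -alias ping-intermediate \
    -file intermediate-ca.crt -keystore samlKeystore.jks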
For the interested reader:
Despite this error, the application continued to start and claimed "Reloading metadata was finished".
However, there is a TimerTask which regularly checks whether metadata providers were changed, i.e., whether a new one was registered. Supposedly, this happens only at startup time.
Regardless, every 10 seconds (by default) a refresh is triggered internally, which leads to a calculation of the expiration time. If the metadata is not loaded for any reason, e.g., because of a validation error, this leads to the mentioned NullPointerException in getEarliestExpiration().
If you're using a file-based MetadataProvider, you might want to customize the CachingMetadataManager and set refreshCheckInterval="-1" to disable this TimerTask, as sketched below.
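A sketch of the manager bean with the timer disabled (the bean ID and the provider reference are placeholders):

<bean id="metadata" class="org.springframework.security.saml.metadata.CachingMetadataManager">
    <constructor-arg>
        <list>
            <!-- your ExtendedMetadataDelegate / provider beans go here -->
            <ref bean="idpMetadataProvider"/>
        </list>
    </constructor-arg>
    <!-- -1 disables the periodic refresh-check TimerTask -->
    <property name="refreshCheckInterval" value="-1"/>
</bean>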
PS: There may be other causes, such as a typo in the entityID, a validUntil date in the past, expired certificates... you name it. Anything that causes the metadata not to be loaded will likely result in this issue. Another indicator is the following exception:
Caused by: org.opensaml.saml2.metadata.provider.MetadataProviderException: Metadata for issuer <ENTITY_ID> wasn't found
Related
I am having a problem with Oracle REST Data Services (ORDS for short) and I can't find a solution.
The Problem is as follows:
We are using ORDS via a Tomcat web server, and I have two endpoints defined: one to update a dataset and one to get all datasets from the table.
If I update a value via my endpoint, the change is written to the table, but when I then try to fetch the table, ORDS responds only with the old, unchanged data. After repeatedly requesting the data for a while, it eventually responds with the expected values (after at most a minute, sometimes earlier).
Because of this behaviour I suspected some kind of caching, but I can't find any such configuration in the Oracle database or on the Tomcat.
Another point in favour of this theory: I logged what happens in my GET procedure and found that only the one request with the correct values gets logged, as if the others never happened.
The requests returning the old value come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure the cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache might be defined, but none was ever set.
I tried to set the value via a SQL UPDATE instead of the endpoint, but even then the change shows up only after a delay.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration and you restarted the web server during your tests and still see the same issue, then there may be more to it... One quick check for an intermediate cache is shown below.
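For instance, you could inspect the response headers for cache hints (Age, X-Cache, ETag, Cache-Control) with curl; the URL below is a placeholder for your actual endpoint:

# Print only the response headers of the GET endpoint
curl -s -D - -o /dev/null https://yourhost/ords/yourschema/yourmodule/endpoint/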
Kind regards
M-Achilles
I am unable to create a new Common Data Service Database in my Power Apps default environment. Please see the error text below.
It looks like you don't have permission to use the Common Data Service
in this environment. Switch to a different environment, or create your
own.
As I understand it, I should be able to create one after the Microsoft Business Applications October 2018 update, as described in the article at the following link:
https://community.dynamics.com/365/b/dynamicscitizendeveloper/archive/2018/10/17/demystifying-dynamics-365-and-powerapps-environments-part-1
Also, when I try to create a Common Data Service app in my default environment, I encounter the following error:
The data did not load correctly. Please try again.
The environment 'Default-57e1485d-1197-4afd-b792-5c423ab508d9' is not
linked to a new CDS 2.0 instance. The operation 'ListInstanceMetadata'
is forbidden for unlinked environments
Moreover, I am unable to see the default environment at https://admin.powerapps.com/environments; I can only see the Sandbox environment there.
Any ideas what I am missing here?
Thank you.
Someone else faced a similar issue, and I read in one of the threads that deleting the browser cache and trying again, or trying a different browser, resolved it. Could you try these first-level steps and check whether you still have these issues?
Ref: https://powerusers.microsoft.com/t5/Common-Data-Service-for-Apps/Default-Environment-Error-on-CDS/m-p/233582#M1281
Also, for your permission error ref: https://powerusers.microsoft.com/t5/Common-Data-Service-for-Apps/Common-Data-Service-Business-Flows/td-p/142053
I have not validated these findings, but as these answers are from the Microsoft and PowerApps teams, I hope they help!
So I'm following this tutorial:
https://www.playframework.com/documentation/2.3.x/Installing
It all seems installed, i.e. all the commands work, but when I try to call:
activator new my-first-app play-scala
I get the following:
Fetching the latest list of templates...
Could not fetch the updated list of templates. Using the local cache.
Check your proxy settings or increase the timeout. For more details see:
http://typesafe.com/activator/docs
OK, application "another-app" is being created using the "play-scala" template.
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://default/user/template-cache#1575831997]] after [10000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)
And nothing happens.
I just installed it on a PC in my house on the same network, so I don't think my connection is the issue. I'm not using a proxy either...
Got any ideas? I've been trying to get this working for over a day now.
I'm on OS X Yosemite, by the way.
I sometimes get timeouts too, especially while working at the university on some sloppy WLAN.
There are two variants of Activator: the usual lightweight one and the offline version. In the latter, all repositories are bundled, so Activator does not need to fetch anything from the internet.
Go to https://www.playframework.com/download and look for the offline distribution (around 400 MB), then install it like the normal Activator.
If this solves your problem, then something was wrong with Activator trying to fetch from a repository (you said that you can run the project but get server timeouts). Installation of the offline package is sketched below.
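A rough sketch of the installation, assuming a zip download (file and directory names are placeholders; match them to the archive you actually downloaded):

# Unpack the offline distribution and create the app from inside it
unzip typesafe-activator-x.y.z-offline.zip
cd activator-x.y.z
./activator new my-first-app play-scala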
[EDIT]: You can also set the timeout to 30 seconds and see if this helps:
activator -Dactivator.timeout=30s new "project name"
I went to deploy over an existing Cloud Service (in staging) and received the following message:
"Error: No deployments were found. Http Status Code: NotFound"
Does anyone know what this means?
I am looking at the Cloud Service, and it surely exists.
UPDATE:
I have been using the same deploy method as in prior (successful) efforts: I simply right-click the cloud service in Visual Studio 2013. In the Windows Azure Publish Summary, I set the correct cloud service name, the environment to Staging, and the configuration to Release, then press Publish. Nothing special really... which is why I am perplexed.
You may have exceeded the maximum number of cores allowed on your Azure subscription. Either remove unneeded deployments or ask Microsoft to increase the maximum allowed cores on your Azure subscription.
Since I had this problem and none of the answers above were the cause, I had to dig a little deeper. The role name specified in the Role tag must, of course, match the one in the EndpointAcl tag:
<Role name="TheRoleName">
<Instances count="1" />
</Role>
<NetworkConfiguration>
<AccessControls>
<AccessControl name="ac-name-1">
<Rule action="deny" description="TheWorld" order="100" remoteSubnet="0.0.0.0/32" />
</AccessControl>
</AccessControls>
<EndpointAcls>
<EndpointAcl role="TheRoleName" endPoint="HTTP" accessControl="ac-name-1" />
<EndpointAcl role="TheRoleName" endPoint="HTTPS" accessControl="ac-name-1" />
</EndpointAcls>
</NetworkConfiguration>
UPDATE
It seems that the previous situation is not the only one causing this error.
I ran into it again, this time due to a related but different mismatch.
In the file ServiceDefinition.csdef, the <WebRole name="TheRoleName" vmsize="Standard_D1"> tag must have a vmsize that exists (of course!), but according to Microsoft (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-sizes-specs/) the value Standard_D1_v2 should also be accepted.
At the time, it was causing this same error... once I removed the _v2, it worked fine.
Conclusion: every time something is wrong in the Azure configs, this error message may come along; it is then necessary to find out where it came from.
Just to add some info.
The same occurred to me: my VM size was set to a size that was "wrong".
I have multiple subscriptions; I was pointing at one of them and using a "D2" machine. I don't know what happened, but the information was refreshed and this machine size disappeared as an option. I then selected "Large" (old) and it worked well.
Lost 6 hours trying to upload this #$%#$% package.
I think the problem can be related to any VM size problem.
I hit this problem after resizing my role from Small to Extra-Small. I still had the Local Storage set to the default of 20 GB, which an extra-small instance can't hold. I ended up reducing it to 100 MB (see the sketch below) and the deployment worked (the role I'm deploying will be in maintenance mode for only a couple of months, so I don't care much about getting diagnostics from it).
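For reference, the Local Storage is declared in ServiceDefinition.csdef; a sketch of the reduced setting (role and resource names are placeholders):

<WebRole name="TheRoleName" vmsize="ExtraSmall">
  <LocalResources>
    <!-- Shrunk from the 20 GB default so it fits an extra-small instance -->
    <LocalStorage name="DiagnosticStore" sizeInMB="100" cleanOnRoleRecycle="false" />
  </LocalResources>
</WebRole>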
A quick tip: I was getting nowhere debugging this with Visual Studio's error message. On a whim, I switched to the Azure website and manually uploaded the package. That finally gave me a useful error: the VM size was too small for the resources I had requested.
I encountered this error during the initial deployment of a Cloud Service that required a specific SSL certificate that was missing from Azure.
Once I corrected the certificate, the deploy succeeded.
(After the first deployment, Visual Studio provides a meaningful error in this case.)
Here is what the setup looks like:
Application Server - GlassFish
Database Server - Oracle 10g R2
Persistence Library - EclipseLink
Faces Framework - ICEfaces
My problem is that every time I change the database connection, the application/EclipseLink stops working, failing to find the persistence unit.
After losing a whole day trying to figure it out, I decided to delete all the information about connections and persistence units and use only a single, newly created one.
Building the project was not a problem, but when running it I get an error indicating a ValidationException: a persistence unit with a given name was not found. That name has been deleted and is described neither in persistence.xml nor in sun-resources.xml. There is no such entry under Services in NetBeans.
Have you seen such an error, and how can I make sure that NetBeans doesn't store information in places I can't reach from the IDE? How can my application be looking for something that isn't listed anywhere?
Okay, next time I should think more instead of asking the question here. My problem was the cache directory of NetBeans 6.9.1.
I deleted the cache directory (path sketched below) and everything started working again.
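In case anyone needs it: on Linux or macOS, the NetBeans 6.9.1 cache typically lives inside the user directory (the exact path may differ on your system):

# Close the IDE first, then remove the cache; it is rebuilt on the next start
rm -rf ~/.netbeans/6.9.1/var/cache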
I hope this problem is fixed in the next releases; it can be a real pain in... :)