Shibboleth 4 IdP: Query two different login sources with the Password flow

I have two login sources (an Active Directory and a local MySQL database) that each contain different users. I want to configure the Password flow in this way:
1. Query the AD first.
2. If this succeeds, the user is logged in.
3. If it fails, query the local database and log the user in if that succeeds.
4. Otherwise, authentication fails.
How can I achieve that?

This is the solution I found.
Inside the file conf/authn/password-authn-config.xml, add the following lines (or replace them if they already exist):
<import resource="jaas-authn-config.xml"/>
<!-- Ordered list of CredentialValidators to apply to a request. -->
<util:list id="shibboleth.authn.Password.Validators">
<ref bean="shibboleth.JAASValidator"/>
</util:list>
Comment out any other resources that you don't need, such as ldap-authn-config.xml or krb5-authn-config.xml.
In my case, I want the login to succeed if either of my login sources returns 'okay', so you need this line:
<!-- Controls whether all validators in the above bean have to succeed, or just one. -->
<util:constant id="shibboleth.authn.Password.RequireAll" static-field="java.lang.Boolean.FALSE"/>
If you want to require all login sources to succeed, just replace 'FALSE' with 'TRUE'.
Next, put the following inside conf/authn/jaas-authn-config.xml:
<!-- Specify your JAAS config. -->
<bean id="JAASConfig" class="org.springframework.core.io.FileSystemResource" c:path="%{idp.home}/conf/authn/jaas.config" />
<util:property-path id="shibboleth.authn.JAAS.JAASConfigURI" path="JAASConfig.URI" />
<!-- Specify the application name(s) in the JAAS config. -->
<util:list id="shibboleth.authn.JAAS.LoginConfigNames">
<value>ShibUserPassAuthLDAP</value>
<value>ShibUserPassAuthJAAS</value>
</util:list>
Now open conf/authn/jaas.config and write this:
ShibUserPassAuthJAAS {
    relationalLogin.DBLogin required debug=true
    dbDriver="com.mysql.jdbc.Driver"
    userTable="login"
    userColumn="email"
    passColumn="password"
    dbURL="jdbc:mysql://localhost:3306/login"
    dbUser="your_db_user"
    dbPassword="your_db_password"
    hashAlgorithm="SHA2"     // or whatever algorithm you need
    saltColumn="salt"        // leave empty if you don't need this
    errorMessage="Invalid password"
    where="status < 9999";   // remove this if you don't need it
};
ShibUserPassAuthLDAP {
    org.ldaptive.jaas.LdapLoginModule required
    ldapUrl="ldap://localhost:10389"                       // your Active Directory URL
    useStartTLS="true"
    baseDn="OU=example,OU=example,DC=example,DC=org"       // change this to whatever you need
    bindDn="CN=shibboleth,OU=example,DC=example,DC=local"  // change this to whatever you need
    bindCredential="your_ad_password"
    userFilter="(sAMAccountName={user})"
    credentialConfig="{trustCertificates=file:/opt/shibboleth-idp/credentials/ldap.pem}";
};
relationalLogin.DBLogin is a Java class I use to actually check the credentials against the database. You can download it from here: download the jar
Just put it in this directory on your IdP: {shibboleth_root}/edit-webapp/WEB-INF/lib/
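For reference, here is a rough idea of what such a login module does. This is only a minimal sketch, not the downloadable DBLogin class: a plain JAAS LoginModule that checks a username/password against the MySQL table from the jaas.config above. The SHA-256 hashing, the hard-coded table and column names, and the error handling are assumptions you would adapt to your own schema (and to whatever hashAlgorithm/saltColumn you configured).
import java.security.MessageDigest;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

// Minimal sketch of a database-backed JAAS LoginModule (not the real DBLogin class).
public class SimpleDbLoginModule implements LoginModule {

    private CallbackHandler callbackHandler;
    private Map<String, ?> options;   // the key=value pairs from jaas.config (dbURL, dbUser, ...)
    private boolean succeeded;

    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.callbackHandler = callbackHandler;
        this.options = options;
    }

    @Override
    public boolean login() throws LoginException {
        NameCallback nameCb = new NameCallback("username");
        PasswordCallback passCb = new PasswordCallback("password", false);
        try {
            // The IdP hands the entered credentials to the module through callbacks.
            callbackHandler.handle(new Callback[] { nameCb, passCb });
            String user = nameCb.getName();
            String hashed = sha256Hex(new String(passCb.getPassword()));
            try (Connection con = DriverManager.getConnection(
                         (String) options.get("dbURL"),
                         (String) options.get("dbUser"),
                         (String) options.get("dbPassword"));
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT password FROM login WHERE email = ?")) {
                ps.setString(1, user);
                try (ResultSet rs = ps.executeQuery()) {
                    // Compare the stored hash with the hash of the entered password.
                    succeeded = rs.next() && hashed.equals(rs.getString(1));
                }
            }
        } catch (Exception e) {
            throw new LoginException("Database authentication failed: " + e);
        }
        if (!succeeded) {
            throw new LoginException("Invalid username or password");
        }
        return true;
    }

    private static String sha256Hex(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(s.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    @Override public boolean commit() { return succeeded; }
    @Override public boolean abort()  { succeeded = false; return true; }
    @Override public boolean logout() { succeeded = false; return true; }
}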
Now make sure you have configured the Password flow correctly in conf/authn/general-authn.xml:
<bean id="authn/Password" parent="shibboleth.AuthenticationFlow"
p:passiveAuthenticationSupported="true"
p:forcedAuthenticationSupported="true"/>
To enable the Password flow, change this line in idp.properties:
idp.authn.flows=
to this:
idp.authn.flows=Password
After you have completed these steps, don't forget to restart Jetty for the changes to take effect.
Explanation
The two application names ShibUserPassAuthLDAP and ShibUserPassAuthJAAS, listed in jaas-authn-config.xml and defined in jaas.config, are where the magic happens: the Password flow tries to validate the credentials against those two configurations in order. It finishes authentication as soon as the first one succeeds, and falls back to the second configuration only if the first fails.

Related

Call a bean method with the downloaded filename after file download using sftp outbound gateway

I am using int-sftp:outbound-gateway to download remote files. The file download is working. I need to call another method after the file is downloaded, for both success and failure. In that method I need the status (success or failure) and the name of the file that was requested to be downloaded. From that method I will then initiate a post-download flow depending on the status, like moving the file to a different location, notifying the user, sending an email, etc.
I have used AfterReturningAdviceInterceptor to call my own method defined in MyAfterReturningAdvice, which implements the AfterReturningAdvice interface. With this, my method that initiates the post-download flow does execute, and I do get the filename in the GenericMessage's payload. My question is: is there a better way to implement this flow?
I tried using ExpressionEvaluatingRequestHandlerAdvice's onSuccessExpression, but from that I cannot call another method. All I can do is manipulate the inputMessage (a GenericMessage instance).
In future sprints I will have to compare the checksum of the downloaded file with an expected checksum and re-download the file a fixed number of times if there is a mismatch. As soon as the checksum matches, I again need to call the post-download flow. If the download still fails on the last retry, I need to call another flow (send email, update db, notify the user of the failure, etc.).
I am asking this question just to make sure that my current implementation fits the overall requirements.
<int:gateway id="downloadGateway" service-interface="com.rizwan.test.sftp_outbound_gateway.DownloadRemoteFileGateway"
default-request-channel="toGet"/>
<bean id="myAfterAdvice" class="org.springframework.aop.framework.adapter.AfterReturningAdviceInterceptor">
<constructor-arg>
<bean class="com.rizwan.test.sftp_outbound_gateway.MyAfterReturningAdvice">
</bean>
</constructor-arg>
</bean>
<int-sftp:outbound-gateway id="gatewayGet"
local-directory="C:\sftp-outbound-gateway"
session-factory="sftpSessionFactory"
request-channel="toGet"
remote-directory="/si.sftp.sample"
command="get"
command-options="-P"
expression="payload"
auto-create-local-directory="true">
<int-sftp:request-handler-advice-chain>
<ref bean="myAfterAdvice" />
</int-sftp:request-handler-advice-chain>
</int-sftp:outbound-gateway>
public class MyAfterReturningAdvice implements AfterReturningAdvice {

    @Override
    public void afterReturning(Object returnValue, Method method, Object[] args, Object target) throws Throwable {
        // update db, send email, notify user.
    }
}
The ExpressionEvaluatingRequestHandlerAdvice.onSuccessExpression() is the best choice for you. Its EvaluationContext is BeanFactory-aware, therefore you definitely can call any bean from that expression. The Message provided there as the root object is a good candidate for getting information about the downloaded file.
So, this is what you can do there:
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpressionString" value="#myBean.myMethod(#root)"/>
</bean>
You can do the same with the onFailureExpression.
On the other hand, you may not even need to worry about bean access from the expression. The ExpressionEvaluatingRequestHandlerAdvice has successChannel and failureChannel options, so the message with the result can be sent there and a <service-activator> wired to your bean can handle the message on that channel.
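For completeness, a minimal sketch of what a bean wired into that expression could look like. The bean and method names (myBean, myMethod) just mirror the example expression above; the Message import assumes Spring Integration 4+ (older 2.x/3.x versions use org.springframework.integration.Message instead), and the body is only a placeholder for your post-download logic.
import org.springframework.messaging.Message;

public class MyBean {

    // #root in the advice expression is the request Message, so the whole
    // Message arrives here and its headers/payload can be inspected as needed.
    public void myMethod(Message<?> message) {
        Object requestedFile = message.getPayload();
        // update db, send email, notify the user, ...
        System.out.println("Post-download flow for: " + requestedFile);
    }
}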

Writing to multiple directories in Spring Integration file adapter

How can this be done? It works fine with one int-file:outbound-channel-adapter, but I could not make it work when I added another one. I actually added another, separate channel/adapter pair, but it still did not work.
The int-file:outbound-channel-adapter tag does have a "directory" attribute, but it only accepts a single directory path.
Here is the code I have tried:
<int-file:outbound-channel-adapter id="outputDirectory1"
directory="${output.directory1}"
channel="fileWriterChannel1"
filename-generator-expression="headers.get('filename')"
delete-source-files="true"/>
<int-file:outbound-channel-adapter id="outputDirectory2"
directory="${output.directory2}"
channel="fileWriterChannel2"
filename-generator-expression="headers.get('filename')"
delete-source-files="true"/>
Below are the transformers that feed those channels; the bean is the actual writer. Note that both transformers refer to the same bean (ref="messageTransformer"):
<int:transformer id="messageToStringTransformer1"
input-channel="messageTypeChannel"
output-channel="fileWriterChannel1"
ref="messageTransformer"
method="write"/>
<int:transformer id="messageToStringTransformer2"
input-channel="messageTypeChannel"
output-channel="fileWriterChannel2"
ref="messageTransformer"
method="write"/>
<bean id="messageTransformer" class="com.message.writer.DefaultMessageWriter"/>
If I understand you correctly, you want to write a Message payload to several directories simultaneously. In order to have multiple file adapters listen to the same channel, you have to use a publish-subscribe channel, declared with the <int:publish-subscribe-channel/> element. For more information, please see: http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#channel-configuration-pubsubchannel
When using a File Outbound Channel Adapter, you can also use the directory-expression attribute, which is available since Spring Integration 2.2. It gives you full SpEL expression support, so the directory you want to write to can, for example, come from a message header. For more information, please see:
http://static.springsource.org/spring-integration/reference/html/files.html#file-writing-output-directory
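As a small illustration of the header-based approach: the sending side can put the desired directory into a header when the message is built, and the adapter can then resolve it with something like directory-expression="headers['targetDirectory']". The 'targetDirectory' header name is just an assumption, and the imports assume Spring Integration 4+ (in 2.x the Message type lives in org.springframework.integration).
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class DirectoryHeaderExample {

    // Builds a message carrying its own target directory in a header,
    // which a directory-expression on the file adapter can resolve per message.
    public Message<String> withTargetDirectory(String payload, String targetDir) {
        return MessageBuilder.withPayload(payload)
                .setHeader("targetDirectory", targetDir)
                .build();
    }
}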

Spring, property file, empty values

I have configured Spring Security with an LDAP server (but keep reading: it's not a problem if you have no knowledge of LDAP, this is really a Spring problem). It all runs like a charm. Here is the line I use for that:
<ldap-server ldif="" root="" manager-dn="" manager-password="" url="" id="ldapServer" />
If I fill in the ldif and root attributes, it will run an embedded server:
<ldap-server ldif="classpath://ldap.ldif" root="dc=springframework,dc=org" manager-dn="" manager-password="" url="" id="ldapServer" />
If I fill in the other fields, it will run a remote server:
<ldap-server ldif="" root="" manager-dn="dc=admin,dc=springframeworg,dc=org" manager-password="password" url="ldap://myldapserver.com/dc=springframeworg,dc=org" id="ldapServer" />
All of this runs correctly. Now I want to use the Spring mechanism to load these parameters from a property file.
So I replace the attribute values like this:
<ldap-server ldif="${ldap.ldif.path}" root="${ldap.ldif.root}" manager-dn="${ldap.server.manager.dn}" manager-password="${ldap.server.manager.password}" url="${ldap.server.url}" id="ldapServer" />
and create a property file with:
ldap.server.url=
ldap.server.manager.dn=
ldap.server.manager.password=
ldap.ldif.path=
ldap.ldif.root=
Now, the funny part of the problem: if I fill in the following properties in the file:
ldap.server.url=ldap://myldapserver.com/dc=springframeworg,dc=org
ldap.server.manager.dn=dc=admin,dc=springframeworg,dc=org
ldap.server.manager.password=password
ldap.ldif.path=
ldap.ldif.root=
It runs a remote server, as expected.
If I fill in the property file like this:
ldap.server.url=
ldap.server.manager.dn=
ldap.server.manager.password=
ldap.ldif.path= classpath:ldap.ldif
ldap.ldif.root= dc=springframeworg,dc=org
It does not run, complaining that the LDAP URL is missing. But here is the problem: if I change the Spring configuration from:
<ldap-server ldif="${ldap.ldif.path}" root="${ldap.ldif.root}" manager-dn="${ldap.server.manager.dn}" manager-password="${ldap.server.manager.password}" url="${ldap.server.url}" id="ldapServer" />
to (by just removing the reference to the variable ${ldap.server.url})
<ldap-server ldif="${ldap.ldif.path}" root="${ldap.ldif.root}" manager-dn="${ldap.server.manager.dn}" manager-password="${ldap.server.manager.password}" url="" id="ldapServer" />
It runs!
My thought is that Spring does not replace the attribute value with the one from the property file when that value is empty, but I find this strange.
Can you give me a clue to help understand this? And what is the best way to configure my LDAP server via a property file?
EDIT: this is due to a poor design choice (see the accepted answer); an issue has been opened in JIRA:
https://jira.springsource.org/browse/SEC-1966
OK, I think this is a Spring Security bug.
If I debug and look at the class LdapServerBeanDefinitionParser, there is a method called "parse". Here is an extract:
public BeanDefinition parse(Element elt, ParserContext parserContext) {
    String url = elt.getAttribute(ATT_URL);
    RootBeanDefinition contextSource;

    if (!StringUtils.hasText(url)) {
        contextSource = createEmbeddedServer(elt, parserContext);
    } else {
        contextSource = new RootBeanDefinition();
        contextSource.setBeanClassName(CONTEXT_SOURCE_CLASS);
        contextSource.getConstructorArgumentValues().addIndexedArgumentValue(0, url);
    }

    contextSource.setSource(parserContext.extractSource(elt));

    String managerDn = elt.getAttribute(ATT_PRINCIPAL);
    String managerPassword = elt.getAttribute(ATT_PASSWORD);

    if (StringUtils.hasText(managerDn)) {
        if (!StringUtils.hasText(managerPassword)) {
            parserContext.getReaderContext().error("You must specify the " + ATT_PASSWORD +
                    " if you supply a " + managerDn, elt);
        }

        contextSource.getPropertyValues().addPropertyValue("userDn", managerDn);
        contextSource.getPropertyValues().addPropertyValue("password", managerPassword);
    }
    ...
}
If I debug here, none of the variables (url, managerDn, managerPassword, ...) have been replaced by the values specified in the property file. So url has the value ${ldap.server.url}, managerDn has the value ${ldap.server.manager.dn}, and so on.
The parse method creates a bean, a context source, that will be used later. Only when that bean is actually used are the placeholders replaced.
And here is the bug: the parse method checks whether url is empty or not. The problem is that url is not empty at this point, because it still holds the literal value ${ldap.server.url}. So the parse method creates a context source for a remote server.
When the created context source is used, ${ldap.server.url} is replaced by the empty value specified in the property file. And there is the bug.
I don't really know how to solve this for the moment, but at least I now understand why it fails ;)
I cannot explain it, but I think you can fix your problem using the defaulting syntax available since Spring 3.0.0.RC1.
In the change log you can read: PropertyPlaceholderConfigurer supports "${myKey:myDefaultValue}" defaulting syntax.
Anyway, I think the problem is that "" is a valid value, but an empty value coming from the property file is not.
I think that url="" works because url attribute is of type xs:token in spring-security XSD and empty string is converted to null (xs:token is removing any leading or trailing spaces, so "" can be recognized as no value). Maybe the value of ${ldap.server.url} is resolved as empty string and that is why you've got an error.
You can try use Spring profiles to define different configurations of ldap server (see Spring Team Blog for details about profiles)
I believe there is an issue here with using placeholders. The following will most probably solve the problem:
Create a class which extends PropertyPlaceholderConfigurer and override its convertPropertyValue() method.
In that method you can return the property as an empty string if you find anything other than a string that looks like an LDAP URL, i.e. ldap://myldapserver.com/dc=springframeworg,dc=org.
You also need to configure your new specialization of PropertyPlaceholderConfigurer in the context file.
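A rough sketch of such a subclass might look like the following; the method body is only illustrative of where the LDAP-URL check would go, and you would register this class as your placeholder configurer bean in the context file.
import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;

// Sketch only: a hook that normalises property values before they are substituted.
public class LdapAwarePlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    @Override
    protected String convertPropertyValue(String originalValue) {
        // This hook sees every value loaded from the property file. Place the
        // LDAP-URL check suggested above here, e.g. return "" for values that
        // should be an LDAP URL (ldap://host/...) but are not. As a minimal
        // illustration we only trim surrounding whitespace, which also fixes
        // values like " classpath:ldap.ldif" from the property file.
        return originalValue == null ? "" : originalValue.trim();
    }
}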
Hope this helps.
You can define an empty String in the application.properties file as follows:
com.core.estimation.stopwords=\ \

How to use file-based config for Saml2SecurityTokenHandler?

I am using Saml2SecurityTokenHandler to validate a SAML2 bearer token from an internal provider or from ACS. I am able to programmatically configure the handler to validate just fine, but it doesn't seem to want to pick up configuration from the microsoft.IdentityModel section in my config file. Constructing a SecurityTokenHandlerCollectionManager seems to have no notion of the named configuration section either, so I can't seem to use mySaml2SecurityTokenHandler.Configuration or mySecurityTokenHandlerCollectionManager["NAME"].Configuration.
Is there a good sample of setting this up somewhere?
To use file-based config, it turns out you simply rely on the FederatedAuthentication context, rather than explicitly constructing the Saml2SecurityTokenHandler:
var handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
var token = handlers.ReadToken(xmlReader);
var collection = handlers.ValidateToken(token);
This seems to be a known problem.
Have you tried going via FederatedClientCredentials --> its SecurityTokenHandlerCollectionManager property --> the SecurityTokenHandlerCollection --> and replacing the standard Saml2SecurityTokenHandler with whatever you need?

How can I configure the indexes for using db4o with Spring?

I'm currently evaluating the Spring-db4o integration. I was impressed by the declarative transaction support as well as the ease of declarative configuration.
Unfortunately, I'm struggling to figure out how to create an index on specific fields. Spring prepares the database during Tomcat server startup. Here's my Spring configuration:
<bean id="objectContainer" class="org.springmodules.db4o.ObjectContainerFactoryBean">
<property name="configuration" ref="db4oConfiguration" />
<property name="databaseFile" value="/WEB-INF/repo/taxonomy.db4o" />
</bean>
<bean id="db4oConfiguration" class="org.springmodules.db4o.ConfigurationFactoryBean">
<property name="updateDepth" value="5" />
<property name="configurationCreationMode" value="NEW" />
</bean>
<bean id="db4otemplate" class="org.springmodules.db4o.Db4oTemplate">
<constructor-arg ref="objectContainer" />
</bean>
db4oConfiguration doesn't provide any means to specify the indexes, so I wrote a simple ServiceServletListener to set them. Here's the relevant code:
Db4o.configure().objectClass(com.test.Metadata.class).objectField("id").indexed(true);
Db4o.configure().objectClass(com.test.Metadata.class).objectField("value").indexed(true);
I inserted around 6000 rows into this table and then used a SODA query to retrieve a row based on the key, but the performance was pretty poor. To verify that the indexes had been applied properly, I ran the following program:
private static void indexTest(ObjectContainer db) {
    for (StoredClass storedClass : db.ext().storedClasses()) {
        for (StoredField field : storedClass.getStoredFields()) {
            if (field.hasIndex()) {
                System.out.println("Field " + field.getName() + " is indexed!");
            } else {
                System.out.println("Field " + field.getName() + " isn't indexed!");
            }
        }
    }
}
Unfortunately, the results show that no field is indexed.
In a similar context, in the OME browser I saw there's an option to create an index on the fields of each class. If I set the index to true and save, it appears to apply the change to db4o. But again, if I run the sample test above on the db4o file, it doesn't reveal any index.
Any pointers on this will be highly appreciated.
Unfortunately I don't know the Spring extension for db4o that well.
However, the Db4o.configure() API is deprecated and works differently than in earlier versions. In earlier versions there was a global db4o configuration; now that global configuration doesn't exist anymore, and a Db4o.configure() call doesn't change the configuration of already running object containers.
You could try this workaround on a running container:
container.ext().configure().objectClass(com.test.Metadata.class).objectField("id").indexed(true);
This way you change the configuration of the running object container. Note that changing the configuration of a running object container can lead to dangerous side effects and should only be used as a last resort.
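If you can apply the configuration before the container is opened (i.e. outside the Spring factory bean), the non-deprecated, per-container db4o API looks roughly like this. The class and field names are taken from the question; the EmbeddedConfiguration API is an assumption about a reasonably recent db4o version (7.4+/8.x).
import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.config.EmbeddedConfiguration;

public class IndexedContainerFactory {

    // Declares the field indexes before the container is opened, so db4o
    // creates (or keeps) them when the database file is opened.
    public ObjectContainer open(String databaseFile) {
        EmbeddedConfiguration config = Db4oEmbedded.newConfiguration();
        config.common().objectClass(com.test.Metadata.class).objectField("id").indexed(true);
        config.common().objectClass(com.test.Metadata.class).objectField("value").indexed(true);
        return Db4oEmbedded.openFile(config, databaseFile);
    }
}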
