How to transfer *.pgp files using Spring Integration SFTP

We are developing a generic automated application that downloads *.pgp files from an SFTP server.
The application works fine with *.txt files, but when we try to pull *.pgp files we get the exception below.
2016-03-18 17:45:45 INFO jsch:52 - SSH_MSG_SERVICE_REQUEST sent
2016-03-18 17:45:46 INFO jsch:52 - SSH_MSG_SERVICE_ACCEPT received
2016-03-18 17:45:46 INFO jsch:52 - Next authentication method: publickey
2016-03-18 17:45:48 INFO jsch:52 - Authentication succeeded (publickey).
sftpSession org.springframework.integration.sftp.session.SftpSession@37831f
files size158
java.io.IOException: inputstream is closed
    at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2884)
    at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2908)
    at com.jcraft.jsch.ChannelSftp.access$500(ChannelSftp.java:36)
    at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1390)
    at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1340)
    at org.springframework.util.StreamUtils.copy(StreamUtils.java:126)
    at org.springframework.util.FileCopyUtils.copy(FileCopyUtils.java:109)
    at org.springframework.integration.sftp.session.SftpSession.read(SftpSession.java:129)
    at com.sftp.test.SFTPTest.main(SFTPTest.java:49)
Java code:
public class SFTPTest {

    public static void main(String[] args) {
        ApplicationContext applicationContext = new ClassPathXmlApplicationContext("beans.xml");
        DefaultSftpSessionFactory defaultSftpSessionFactory =
                applicationContext.getBean("defaultSftpSessionFactory", DefaultSftpSessionFactory.class);
        System.out.println(defaultSftpSessionFactory);
        SftpSession sftpSession = defaultSftpSessionFactory.getSession();
        System.out.println("sftpSession " + sftpSession);
        String remoteDirectory = "/";
        String localDirectory = "C:/312421/temp/";
        OutputStream outputStream = null;
        List<String> fileAtSFTPList = new ArrayList<String>();
        try {
            // Collect the names of all *.pgp files in the remote directory
            String[] fileNames = sftpSession.listNames(remoteDirectory);
            for (String fileName : fileNames) {
                boolean isMatch = fileCheckingAtSFTPWithPattern(fileName);
                if (isMatch) {
                    fileAtSFTPList.add(fileName);
                }
            }
            System.out.println("files size" + fileAtSFTPList.size());
            // Download each matched file into the local directory
            for (String fileName : fileAtSFTPList) {
                File file = new File(localDirectory + fileName);
                /*InputStream ipstream = sftpSession.readRaw(fileName);
                FileUtils.writeByteArrayToFile(file, IOUtils.toByteArray(ipstream));
                ipstream.close();*/
                outputStream = new FileOutputStream(file);
                sftpSession.read(remoteDirectory + fileName, outputStream);
                outputStream.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (outputStream != null) {
                    outputStream.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static boolean fileCheckingAtSFTPWithPattern(String fileName) {
        Pattern pattern = Pattern.compile(".*\\.pgp$");
        Matcher matcher = pattern.matcher(fileName);
        return matcher.find();
    }
}
Please suggest how to sort out this issue.
Thanks

The file type is irrelevant to Spring Integration; it looks like the server is closing the connection while the header (preamble) is being read, before any data is fetched:
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2908)
at com.jcraft.jsch.ChannelSftp.access$500(ChannelSftp.java:36)
at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1390)
at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1340)
The data itself is not read until later (line 1442 in ChannelSftp).
So it looks like a server-side problem.
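A quick way to confirm is to download the same file with plain JSch, outside Spring Integration. This is a minimal sketch; the host, user, key path and file names are placeholders. If this also fails with the same error, the server-side diagnosis is confirmed.
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.FileOutputStream;
import java.io.OutputStream;

public class RawJschDownload {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/path/to/private_key");                           // placeholder key path
        Session session = jsch.getSession("user", "sftp.example.com", 22); // placeholder host/user
        session.setConfig("StrictHostKeyChecking", "no");                   // for testing only
        session.connect();
        ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
        channel.connect();
        // Download the same *.pgp file that fails through Spring Integration
        try (OutputStream out = new FileOutputStream("C:/temp/sample.pgp")) {
            channel.get("/sample.pgp", out);
        }
        channel.disconnect();
        session.disconnect();
    }
}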

Related

Anonymous caller does not have storage.objects.get access to the GCS object. Permission 'storage.objects.get' denied on resource

I have a Spring Boot application that stores many files in Google Cloud Storage. Everything works except deletion: when I press the DELETE button in my Thymeleaf UI, it says "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist)." I can upload video files into my bucket without any issue, but pressing the delete button produces the error above. Where is my mistake?
Service Layer
public VideoLesson uploadFile(MultipartFile file) {
    log.debug("Start file uploading service");
    VideoLesson inputFile = new VideoLesson();
    String originalFileName = file.getOriginalFilename();
    if (originalFileName == null) {
        throw new BadRequestException("Original file name is null");
    }
    Path path = new File(originalFileName).toPath();
    try {
        String contentType = Files.probeContentType(path);
        VideoLessonDto fileDto = dataBucketUtil.uploadVideo(file, originalFileName, contentType);
        if (fileDto != null) {
            inputFile.setName(fileDto.getFileName());
            inputFile.setFileUrl(fileDto.getFileUrl());
            videoLessonRepository.save(inputFile);
            log.debug("File uploaded successfully, file name: {} and url: {}",
                    fileDto.getFileName(), fileDto.getFileUrl());
        }
    } catch (Exception e) {
        log.error("Error occurred while uploading. Error: ", e);
        throw new GCPFileUploadException("Error occurred while uploading");
    }
    log.debug("File details successfully saved in the database");
    return inputFile;
}
DataBucketUtil.java
@Component
@Slf4j
public class DataBucketUtil {

    @Value("${gcp.config.file}")
    private String gcpConfigFile;

    @Value("${gcp.project.id}")
    private String gcpProjectId;

    @Value("${gcp.bucket.id}")
    private String gcpBucketId;

    public VideoLessonDto uploadVideo(MultipartFile multipartFile, String fileName, String contentType) {
        try {
            log.debug("Start file uploading process on GCS");
            byte[] fileData = FileUtils.readFileToByteArray(convertFile(multipartFile));
            InputStream inputStream = new ClassPathResource(gcpConfigFile).getInputStream();
            StorageOptions options = StorageOptions.newBuilder()
                    .setProjectId(gcpProjectId)
                    .setCredentials(GoogleCredentials.fromStream(inputStream))
                    .build();
            Storage storage = options.getService();
            Bucket bucket = storage.get(gcpBucketId, Storage.BucketGetOption.fields());
            RandomString id = new RandomString(6, ThreadLocalRandom.current());
            Blob blob = bucket.create(fileName + checkFileExtension(fileName), fileData, contentType);
            if (blob != null) {
                log.debug("File successfully uploaded to GCS");
                return new VideoLessonDto(blob.getName(), blob.getMediaLink());
            }
        } catch (Exception e) {
            log.error("An error occurred while uploading data. Exception: ", e);
            throw new GCPFileUploadException("An error occurred while storing data to GCS");
        }
        return null;
    }

    private File convertFile(MultipartFile file) {
        try {
            if (file.getOriginalFilename() == null) {
                throw new BadRequestException("Original file name is null");
            }
            File convertedFile = new File(file.getOriginalFilename());
            FileOutputStream outputStream = new FileOutputStream(convertedFile);
            outputStream.write(file.getBytes());
            outputStream.close();
            log.debug("Converting multipart file : {}", convertedFile);
            return convertedFile;
        } catch (Exception e) {
            throw new FileWriteException("An error has occurred while converting the file");
        }
    }

    private String checkFileExtension(String fileName) {
        if (fileName != null && fileName.contains(".")) {
            String extension = ".mp4";
            log.debug("Accepted file type : {}", extension);
            return extension;
        }
        log.error("Not a permitted file type");
        throw new InvalidFileTypeException("Not a permitted file type");
    }
}
Controller Layer
@PostMapping(value = "/video_lesson/upload")
public String uploadFile(@RequestParam("file") MultipartFile file, Model model) {
    String message;
    try {
        videoLessonService.uploadFile(file);
        // "Video bazaya müvəfəqiyyətlə yükləndi" = "Video was successfully uploaded to the database"
        message = "Video bazaya müvəfəqiyyətlə yükləndi: " + file.getOriginalFilename();
        model.addAttribute("message", message);
        Thread.sleep(4000);
    } catch (Exception e) {
        // "Diqqət bir video seçməlisiniz!" = "Attention, you must select a video!"
        message = "Diqqət bir video seçməlisiniz!";
        model.addAttribute("message", message);
    }
    return "redirect:/video_lesson/files";
}
And this is the error output
Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).
I granted full access on the bucket under the GRANT ACCESS section of the permissions, but the same error occurred.
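For comparison, here is a minimal sketch of an authenticated delete, assuming the same DataBucketUtil fields (gcpConfigFile, gcpProjectId, gcpBucketId) used for uploads; the method name and parameter are mine, not from the post. The key point is that the delete must go through a Storage client built with credentials; an unauthenticated request against the stored mediaLink is exactly the kind of call that produces the anonymous-caller error.
// A sketch, not the poster's code: delete with the same credentials used for upload.
public boolean deleteVideo(String objectName) throws IOException {
    InputStream credStream = new ClassPathResource(gcpConfigFile).getInputStream();
    Storage storage = StorageOptions.newBuilder()
            .setProjectId(gcpProjectId)
            .setCredentials(GoogleCredentials.fromStream(credStream))
            .build()
            .getService();
    // Returns false when the object does not exist or was not deleted.
    return storage.delete(BlobId.of(gcpBucketId, objectName));
}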

JMS Message Persistence on ActiveMQ

I need to ensure redelivery of JMS messages when the consumer fails.
The way it is set up now: DefaultJmsListenerContainerFactory and Session.AUTO_ACKNOWLEDGE.
I'm trying to build a jar that saves the message on the server; once the app is able to consume again, the producer in the jar will send the message to the app.
Is that a good approach? Any other way/recommendation to improve this?
public void handleMessagePersistence(final Object bean) {
    final ObjectMapper mapper = new ObjectMapper();
    try {
        // writeValueAsString throws JsonProcessingException, so it must be inside the try
        final String beanJson = mapper.writeValueAsString(bean); // I might need to convert to XML instead
        // parameterize location of persistence folder
        writeToDriver(beanJson);
        Producer.produceMessage(beanJson, beanJson, null, null, null);
    } catch (final Exception e) {
        LOG.error("Error producing message", e);
    }
}
Here is what I have to write out the message:
private void writeToDriver(String beanJson) {
    File filename = new File(JMS_LOCATION +
            LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS")) + ".xml");
    try (final FileWriter fileOut = new FileWriter(filename)) {
        try (final BufferedWriter out = new BufferedWriter(fileOut)) {
            out.write(beanJson);
            out.flush();
        }
    } catch (Exception e) {
        LOG.error("Unable to write out : " + beanJson, e);
    }
}
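Writing messages to local files duplicates what the broker can already do for you. As a sketch (the bean wiring below is illustrative, not from the original post), enabling transacted sessions on the listener container makes ActiveMQ redeliver a message whenever the consumer throws, and marking producer sends as persistent makes them survive broker restarts:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // With transacted sessions, an exception thrown by the @JmsListener rolls the
    // message back and ActiveMQ redelivers it according to its redelivery policy.
    factory.setSessionTransacted(true);
    return factory;
}

@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    // Persistent delivery: the broker stores the message durably before acknowledging the send.
    template.setExplicitQosEnabled(true);
    template.setDeliveryPersistent(true);
    return template;
}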

Endpoint is not connected in httpclient5-beta

Hi, I'm trying to use HttpComponents 5 (beta) to make a persistent connection. I have tried the example given on their site; the code is as follows:
final IOReactorConfig ioReactorConfig = IOReactorConfig.custom()
        .setSoTimeout(Timeout.ofSeconds(45))
        .setSelectInterval(10000)
        .setSoReuseAddress(true)
        .setSoKeepAlive(true)
        .build();
final SSLContext sslContext = SSLContexts.custom()
        .loadTrustMaterial(new TrustAllStrategy())
        .build();
final PoolingAsyncClientConnectionManager connectionManager = PoolingAsyncClientConnectionManagerBuilder.create()
        .setConnectionTimeToLive(TimeValue.of(1, TimeUnit.DAYS))
        .setTlsStrategy(new H2TlsStrategy(sslContext, NoopHostnameVerifier.INSTANCE))
        .build();
client = HttpAsyncClients.createMinimal(protocol, H2Config.DEFAULT, null, ioReactorConfig, connectionManager);
client.start();
final org.apache.hc.core5.http.HttpHost target = new org.apache.hc.core5.http.HttpHost("localhost", 8000, "https");
Future<AsyncClientEndpoint> leaseFuture = client.lease(target, null);
AsyncClientEndpoint asyncClientEndpoint = leaseFuture.get(60, TimeUnit.SECONDS);
final CountDownLatch latch = new CountDownLatch(1);
final AsyncRequestProducer requestProducer = AsyncRequestBuilder
        .post(target.getSchemeName() + "://" + target.getHostName() + ":" + target.getPort() + locationposturl)
        .addParameter(new BasicNameValuePair("info", requestData))
        .setEntity(new StringAsyncEntityProducer("json post data will go here", ContentType.APPLICATION_JSON))
        .setHeader("Pragma", "no-cache")
        .setHeader("from", "http5")
        .setHeader("Custom", customheaderName)
        .setHeader("Secure", secureHeader)
        .build();
asyncClientEndpoint.execute(requestProducer, SimpleResponseConsumer.create(), new FutureCallback<SimpleHttpResponse>() {
    @Override
    public void completed(final SimpleHttpResponse response) {
        if (response != null && response.getCode() > -1) {
            try {
                System.out.println("http5:: COMPLETED : RESPONSE " + response.getBodyText());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        latch.countDown();
    }

    @Override
    public void failed(final Exception ex) {
        System.out.println("http5:: FAILED : " + target + locationposturl);
        LoggerUtil.printStackTrace(ex);
        System.out.println("http5::Exception Request failed " + LoggerUtil.getStackTrace(ex));
        latch.countDown();
    }

    @Override
    public void cancelled() {
        System.out.println("http5:: CANCELLED : " + target + locationposturl);
        System.out.println("http5::Exception Request cancelled");
        latch.countDown();
    }
});
latch.await();
This code works without a problem the first time, but when I send subsequent requests it throws the following exception:
http5:: Exception occured java.lang.IllegalStateException: Endpoint is not connected
    at org.apache.hc.core5.util.Asserts.check(Asserts.java:38)
    at org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager$InternalConnectionEndpoint.getValidatedPoolEntry(PoolingAsyncClientConnectionManager.java:497)
    at org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager$InternalConnectionEndpoint.execute(PoolingAsyncClientConnectionManager.java:552)
    at org.apache.hc.client5.http.impl.async.MinimalHttpAsyncClient$InternalAsyncClientEndpoint.execute(MinimalHttpAsyncClient.java:405)
    at org.apache.hc.core5.http.nio.AsyncClientEndpoint.execute(AsyncClientEndpoint.java:81)
    at org.apache.hc.core5.http.nio.AsyncClientEndpoint.execute(AsyncClientEndpoint.java:114)
What may be the problem with the endpoint? I am forcing the endpoint to keep alive for a day. Kindly shed some light on this.
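Not an authoritative fix, but one pattern worth checking: a leased endpoint is not guaranteed to stay connected between requests, so leasing per request and releasing the endpoint back to the pool afterwards avoids executing on a stale connection. A sketch, reusing client, target, requestProducer and latch from the question:
Future<AsyncClientEndpoint> lease = client.lease(target, null);
AsyncClientEndpoint endpoint = lease.get(60, TimeUnit.SECONDS);
try {
    endpoint.execute(requestProducer, SimpleResponseConsumer.create(),
            new FutureCallback<SimpleHttpResponse>() {
                @Override
                public void completed(final SimpleHttpResponse response) {
                    latch.countDown();
                }
                @Override
                public void failed(final Exception ex) {
                    latch.countDown();
                }
                @Override
                public void cancelled() {
                    latch.countDown();
                }
            });
    latch.await();
} finally {
    // Return the connection to the pool so it can be validated before reuse;
    // releaseAndReuse() keeps it alive, releaseAndDiscard() closes it.
    endpoint.releaseAndReuse();
}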

Storm-jms Spout collecting Avro messages and sending down stream?

I am new to the Avro format. I am trying to collect Avro messages from a JMS queue using the storm-jms spout and send them to HDFS using the HDFS bolt.
The queue is sending Avro, but I am not able to write the messages in Avro format using the HDFS bolt.
How can I properly collect the Avro messages and send them downstream without encoding errors in HDFS?
The existing HDFS bolt does not support writing Avro files; we need to overcome this by making the following changes. In this sample code I take the JMS messages from my spout, convert the JMS bytes message to Avro, and emit the records to HDFS.
This code can serve as a sample for modifying the methods in AbstractHdfsBolt.
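The snippet assumes the bolt declares fields along these lines (my reconstruction, not part of the original answer; the remaining fields such as out, writeLock, offset and the sync/rotation policies come from AbstractHdfsBolt):
// Reconstructed field declarations assumed by the snippet below.
private transient Schema schema;                                  // parsed from the .avsc file
private transient SpecificDatumReader<IndexedRecord> datumReader; // decodes JMS bytes into records
private transient SpecificDatumWriter<IndexedRecord> datumWriter; // serializes records for the file writer
private transient DataFileWriter<IndexedRecord> dataFileWriter;   // writes the Avro container file on HDFS
private transient BinaryDecoder decoder;
private transient IndexedRecord result;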
public void execute(Tuple tuple) {
    try {
        // bytesMessage is the JMS BytesMessage carried by the tuple
        long length = bytesMessage.getBodyLength();
        byte[] bytes = new byte[(int) length];
        bytesMessage.readBytes(bytes);
        String replyMessage = new String(bytes, "UTF-8");
        // Decode the JMS payload into an Avro record
        datumReader = new SpecificDatumReader<IndexedRecord>(schema);
        decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        result = datumReader.read(null, decoder);
        synchronized (this.writeLock) {
            dataFileWriter.append(result);
            dataFileWriter.sync();
            this.offset += bytes.length;
            if (this.syncPolicy.mark(tuple, this.offset)) {
                if (this.out instanceof HdfsDataOutputStream) {
                    ((HdfsDataOutputStream) this.out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
                } else {
                    this.out.hsync();
                    this.out.flush();
                }
                this.syncPolicy.reset();
            }
            dataFileWriter.flush();
        }
        if (this.rotationPolicy.mark(tuple, this.offset)) {
            rotateOutputFile(); // synchronized
            this.offset = 0;
            this.rotationPolicy.reset();
        }
    } catch (IOException | JMSException e) {
        LOG.warn("write/sync failed.", e);
        this.collector.fail(tuple);
    }
}

@Override
void closeOutputFile() throws IOException {
    this.out.close();
}

@Override
Path createOutputFile() throws IOException {
    Path path = new Path(this.fileNameFormat.getPath(),
            this.fileNameFormat.getName(this.rotation, System.currentTimeMillis()));
    this.out = this.fs.create(path);
    // Start a new Avro container file on the freshly created HDFS stream
    dataFileWriter.create(schema, out);
    return path;
}

@Override
void doPrepare(Map conf, TopologyContext topologyContext, OutputCollector collector) throws IOException {
    LOG.info("Preparing HDFS Bolt...");
    try {
        schema = new Schema.Parser().parse(new File("/home/*******/********SchemafileName.avsc"));
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    this.fs = FileSystem.get(URI.create(this.fsUrl), hdfsConfig);
    datumWriter = new SpecificDatumWriter<IndexedRecord>(schema);
    dataFileWriter = new DataFileWriter<IndexedRecord>(datumWriter);
    JMSAvroUtils JASV = new JMSAvroUtils(); // unused in this snippet
}

Spring SAML extension for multiple IDP'S

We are planning to use the Spring SAML extension as SP in our application.
The requirement is that our application needs to communicate with more than one IDP.
Could anyone please provide/direct me to an example that uses multiple IDPs?
I would also like to know which kinds of IDPs the Spring SAML extension supports, e.g. OpenAM, Ping Federate, ADFS 2.0, etc.
Thanks,
--Vikas
You need a class that maintains a list of metadata for each IdP; say you put those metadata into a list that is shared across the application via a static method. I have something like the below.
NOTE: I am not copying my classes exactly as I have them, so you might come across minor issues, which you should be able to resolve on your own.
public class SSOMetadataProvider {

    public static List<MetadataProvider> metadataList()
            throws MetadataProviderException, XMLParserException, IOException, Exception {
        logger.info("Starting : Loading Metadata Data for all SSO enabled companies...");
        List<MetadataProvider> metadataList = new ArrayList<MetadataProvider>();
        org.opensaml.xml.parse.StaticBasicParserPool parserPool = new org.opensaml.xml.parse.StaticBasicParserPool();
        parserPool.initialize();
        // Get XML from DB -> convert into InputStream -> pass below as constructor argument
        InputStreamMetadataProvider inputStreamMetadata = null;
        try {
            // Getting the list from the DB
            List companyList = someServiceClass.getAllSSOEnabledCompanyDTO();
            if (companyList != null) {
                for (Object obj : companyList) {
                    CompanyDTO companyDTO = (CompanyDTO) obj;
                    if (companyDTO != null && companyDTO.getCompanyid() > 0
                            && companyDTO.getSsoSettingsDTO() != null
                            && !StringUtil.isNullOrEmpty(companyDTO.getSsoSettingsDTO().getSsoMetadataXml())) {
                        logger.info("Loading Metadata for Company : " + companyDTO.getCompanyname()
                                + " , companyId : " + companyDTO.getCompanyid());
                        inputStreamMetadata = new InputStreamMetadataProvider(
                                companyDTO.getSsoSettingsDTO().getSsoMetadataXml());
                        inputStreamMetadata.setParserPool(parserPool);
                        inputStreamMetadata.initialize();
                        //ExtendedMetadataDelegateWrapper extMetadaDel = new ExtendedMetadataDelegateWrapper(inputStreamMetadata, new org.springframework.security.saml.metadata.ExtendedMetadata());
                        SSOMetadataDelegate extMetadaDel = new SSOMetadataDelegate(inputStreamMetadata,
                                new org.springframework.security.saml.metadata.ExtendedMetadata());
                        extMetadaDel.initialize();
                        extMetadaDel.setTrustFiltersInitialized(true);
                        metadataList.add(extMetadaDel);
                        logger.info("Loading Metadata bla bla");
                    }
                }
            }
        } catch (MetadataProviderException | IOException | XMLParserException mpe) {
            logger.warn(mpe);
            throw mpe;
        } catch (Exception e) {
            logger.warn(e);
        }
        logger.info("Finished : Loading Metadata Data for all SSO enabled companies...");
        return metadataList;
    }
}
InputStreamMetadataProvider.java
public class InputStreamMetadataProvider extends AbstractReloadingMetadataProvider implements Serializable {

    // Raw metadata bytes; the field declaration is added here, the name is kept from the original post
    private final byte[] metadataInputStream;

    public InputStreamMetadataProvider(String metadata) throws MetadataProviderException {
        super();
        metadataInputStream = SSOUtil.getIdpAsStream(metadata);
    }

    @Override
    protected byte[] fetchMetadata() throws MetadataProviderException {
        byte[] metadataBytes = metadataInputStream;
        if (metadataBytes.length > 0)
            return metadataBytes;
        else
            return null;
    }

    public byte[] getMetadataInputStream() {
        return metadataInputStream;
    }
}
SSOUtil.java
public class SSOUtil {
    public static byte[] getIdpAsStream(String metadatXml) {
        return metadatXml.getBytes();
    }
}
After a user requests the metadata for their company, fetch the metadata for the entityId of each IdP:
SSOCachingMetadataManager.java
public class SSOCachingMetadataManager extends CachingMetadataManager {

    @Override
    public ExtendedMetadata getExtendedMetadata(String entityID) throws MetadataProviderException {
        ExtendedMetadata extendedMetadata = null;
        try {
            // UAT defect fix: org.springframework.security.saml.metadata.ExtendedMetadataDelegate
            // cannot be cast to biz.bsite.direct.spring.app.sso.ExtendedMetadataDelegate
            List<MetadataProvider> metadataList = SSOMetadataProvider.metadataList();
            log.info("Retrieved Metadata List from Cassendra Cache size is :"
                    + (metadataList != null ? metadataList.size() : 0));
            org.opensaml.xml.parse.StaticBasicParserPool parserPool = new org.opensaml.xml.parse.StaticBasicParserPool();
            parserPool.initialize();
            if (metadataList != null) {
                // Remove duplicate entries from the list, if any
                Set<MetadataProvider> hs = new HashSet<MetadataProvider>();
                hs.addAll(metadataList);
                metadataList.clear();
                metadataList.addAll(hs);
            }
            if (metadataList != null && metadataList.size() > 0) {
                for (MetadataProvider metadataProvider : metadataList) {
                    log.info("metadataProvider instance of ExtendedMetadataDelegate: Looking for entityId " + entityID);
                    SSOMetadataDelegate ssoMetadataDelegate = null;
                    ExtendedMetadataDelegateWrapper extMetadaDel = null;
                    if (metadataProvider instanceof SSOMetadataDelegate) {
                        ssoMetadataDelegate = (SSOMetadataDelegate) metadataProvider;
                        ((InputStreamMetadataProvider) ssoMetadataDelegate.getDelegate()).setParserPool(parserPool);
                        ((InputStreamMetadataProvider) ssoMetadataDelegate.getDelegate()).initialize();
                        ssoMetadataDelegate.initialize();
                        ssoMetadataDelegate.setTrustFiltersInitialized(true);
                        if (!isMetadataAlreadyExist(ssoMetadataDelegate))
                            addMetadataProvider(ssoMetadataDelegate);
                        extMetadaDel = new ExtendedMetadataDelegateWrapper(ssoMetadataDelegate.getDelegate(),
                                new org.springframework.security.saml.metadata.ExtendedMetadata());
                    } else {
                        extMetadaDel = new ExtendedMetadataDelegateWrapper(metadataProvider,
                                new org.springframework.security.saml.metadata.ExtendedMetadata());
                    }
                    extMetadaDel.initialize();
                    extMetadaDel.setTrustFiltersInitialized(true);
                    extMetadaDel.initialize();
                    refreshMetadata();
                    extendedMetadata = extMetadaDel.getExtendedMetadata(entityID);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (extendedMetadata != null) {
            return extendedMetadata;
        } else {
            return super.getExtendedMetadata(entityID);
        }
    }

    private boolean isMetadataAlreadyExist(SSOMetadataDelegate ssoMetadataDelegate) {
        boolean isExist = false;
        for (ExtendedMetadataDelegate item : getAvailableProviders()) {
            if (item.getDelegate() != null && item.getDelegate() instanceof SSOMetadataDelegate) {
                SSOMetadataDelegate that = (SSOMetadataDelegate) item.getDelegate();
                try {
                    // Parentheses added around the ternaries; without them the null checks never apply
                    log.info("This Entity ID: "
                            + (ssoMetadataDelegate.getMetadata() != null
                                    ? ((EntityDescriptorImpl) ssoMetadataDelegate.getMetadata()).getEntityID()
                                    : "nullEntity")
                            + " That Entity ID: "
                            + (that.getMetadata() != null
                                    ? ((EntityDescriptorImpl) that.getMetadata()).getEntityID()
                                    : "nullEntity"));
                    EntityDescriptorImpl e = (EntityDescriptorImpl) that.getMetadata();
                    isExist = ssoMetadataDelegate.getMetadata() != null
                            ? ((EntityDescriptorImpl) ssoMetadataDelegate.getMetadata()).getEntityID().equals(e.getEntityID())
                            : false;
                    if (isExist)
                        return isExist;
                } catch (MetadataProviderException e1) {
                    e1.printStackTrace();
                }
            }
        }
        return isExist;
    }
}
Add an entry in your Spring bean XML:
<bean id="metadata" class="pkg.path.SSOCachingMetadataManager">
    <constructor-arg name="providers" value="#{ssoMetadataProvider.metadataList()}"/>
    <property name="RefreshCheckInterval" value="-1"/>
    <property name="RefreshRequired" value="false"/>
</bean>
Let me know in case of any concerns.
I have recently configured two IDPs for the Spring SAML extension. Here we should follow one basic rule: for each IDP we want to add, we have to configure one IDP provider as well as one SP provider. We should configure the providers in a MetadataManager bean, CachingMetadataManager for example. Here are some code snippets to illustrate what I mean:
public void addProvider(String providerMetadataUrl, String idpEntityId, String spEntityId, String alias) {
    addIDPMetadata(providerMetadataUrl, idpEntityId, alias);
    addSPMetadata(spEntityId, alias);
}

public void addIDPMetadata(String providerMetadataUrl, String idpEntityId, String alias) {
    try {
        if (metadata.getIDPEntityNames().contains(idpEntityId)) {
            return;
        }
        metadata.addMetadataProvider(extendedMetadataProvider(providerMetadataUrl, alias));
    } catch (MetadataProviderException e1) {
        log.error("Error initializing metadata", e1);
    }
}

public void addSPMetadata(String spEntityId, String alias) {
    try {
        if (metadata.getSPEntityNames().contains(spEntityId)) {
            return;
        }
        MetadataGenerator generator = new MetadataGenerator();
        generator.setEntityId(spEntityId);
        generator.setEntityBaseURL(baseURL);
        generator.setExtendedMetadata(extendedMetadata(alias));
        generator.setIncludeDiscoveryExtension(true);
        generator.setKeyManager(keyManager);
        EntityDescriptor descriptor = generator.generateMetadata();
        ExtendedMetadata extendedMetadata = generator.generateExtendedMetadata();
        MetadataMemoryProvider memoryProvider = new MetadataMemoryProvider(descriptor);
        memoryProvider.initialize();
        MetadataProvider metadataProvider = new ExtendedMetadataDelegate(memoryProvider, extendedMetadata);
        metadata.addMetadataProvider(metadataProvider);
        metadata.setHostedSPName(descriptor.getEntityID());
        metadata.refreshMetadata();
    } catch (MetadataProviderException e1) {
        log.error("Error initializing metadata", e1);
    }
}

public ExtendedMetadataDelegate extendedMetadataProvider(String providerMetadataUrl, String alias)
        throws MetadataProviderException {
    HTTPMetadataProvider provider = new HTTPMetadataProvider(this.bgTaskTimer, httpClient, providerMetadataUrl);
    provider.setParserPool(parserPool);
    ExtendedMetadataDelegate delegate = new ExtendedMetadataDelegate(provider, extendedMetadata(alias));
    delegate.setMetadataTrustCheck(true);
    delegate.setMetadataRequireSignature(false);
    return delegate;
}

private ExtendedMetadata extendedMetadata(String alias) {
    ExtendedMetadata exmeta = new ExtendedMetadata();
    exmeta.setIdpDiscoveryEnabled(true);
    exmeta.setSignMetadata(false);
    exmeta.setEcpEnabled(true);
    if (alias != null && alias.length() > 0) {
        exmeta.setAlias(alias);
    }
    return exmeta;
}
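Usage then comes down to one call per IDP; the URLs and entity IDs below are placeholders:
// Placeholder values; register each IDP (and its matching SP configuration) under its own alias.
addProvider("https://idp-one.example.com/metadata.xml", "idp-one-entity-id", "urn:my:sp", "idp-one");
addProvider("https://idp-two.example.com/metadata.xml", "idp-two-entity-id", "urn:my:sp", "idp-two");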
You can find all the answers to your questions in the Spring SAML manual.
The sample application included as part of the product already contains metadata for two IDPs; use it as an example.
The statement on IDPs is included in chapter 1.2:
All products supporting SAML 2.0 in Identity Provider mode (e.g. ADFS 2.0, Shibboleth, OpenAM/OpenSSO, Efecte Identity or Ping Federate) can be used with the extension.
