How to convert a file to a String using Spring Integration - spring

I am trying to read a text file and convert it to a String using Spring Integration.
I need help transforming the file to a String.
Git Link: https://github.com/ravikalla/spring-integration
Source Code -
@Bean
@InboundChannelAdapter(value = "payorFileSource", poller = @Poller(fixedDelay = "10000"))
public MessageSource<File> fileReadingMessageSource() {
FileReadingMessageSource sourceReader = new FileReadingMessageSource();
sourceReader.setDirectory(new File(INPUT_DIR));
sourceReader.setFilter(new SimplePatternFileListFilter(FILE_PATTERN));
return sourceReader;
}
@Bean
@Transformer(inputChannel="payorFileSource", outputChannel="payorFileContent")
public FileToStringTransformer transformFileToString() {
FileToStringTransformer objFileToStringTransformer = new FileToStringTransformer();
return objFileToStringTransformer;
}
Error -
SEVERE: org.springframework.integration.handler.ReplyRequiredException: No reply produced by handler 'fileCopyConfig.transformPayorStringToObject.transformer.handler', and its 'requiresReply' property is set to true., failedMessage=GenericMessage [payload=1|test1, headers={sequenceNumber=1, file_name=payor.txt, sequenceSize=4, correlationId=ff1fef7d-7011-ee99-8d71-96146ac9ea07, file_originalFile=source/payor.txt, id=fd4f950b-afcf-70e6-a053-7d59ff593add, file_relativePath=payor.txt, timestamp=1554875904858}]
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:119)

You can convert the file into an InputStream and then use
IOUtils.toString(inputStream) to convert it into a String.
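For example, a minimal sketch using Apache Commons IO (assuming commons-io is on the classpath; the file path is taken from the error message below):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.IOUtils;

public class FileToStringExample {
    public static void main(String[] args) throws IOException {
        // Read the whole file into a String, with the charset made explicit
        try (InputStream in = new FileInputStream("source/payor.txt")) {
            String content = IOUtils.toString(in, StandardCharsets.UTF_8);
            System.out.println(content);
        }
    }
}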

That error is coming from somewhere else; the FileToStringTransformer can't return null.
I haven't looked at all your code, but this looks suspicious:
@Bean
@Transformer(inputChannel="payorRawStringChannel", outputChannel="payorRawObjectChannel")
public GenericTransformer<String, Payor> transformPayorStringToObject() {
return new GenericTransformer<String, Payor>() {
@Override
public Payor transform(String strPayor) {
String[] arrPayorData = strPayor.split(",");
Payor objPayor = null;
if (null != arrPayorData && arrPayorData.length > 1)
objPayor = new Payor(Integer.parseInt(arrPayorData[0]), arrPayorData[1]);
return objPayor;
}
};
}
It can return null; transformers are not allowed to do that.
Turn on DEBUG logging and follow the message flow to see which component is at fault.
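Note that the failed message payload above is 1|test1, which is pipe-delimited, while this transformer splits on ","; the split therefore yields a single-element array, the transformer returns null, and the ReplyRequiredException follows. A minimal null-free sketch, assuming the Payor class from the linked repo and assuming the records really are pipe-delimited:
@Bean
@Transformer(inputChannel = "payorRawStringChannel", outputChannel = "payorRawObjectChannel")
public GenericTransformer<String, Payor> transformPayorStringToObject() {
    return strPayor -> {
        String[] arrPayorData = strPayor.split("\\|");
        if (arrPayorData.length < 2) {
            // Fail fast instead of returning null, which transformers must not do
            throw new IllegalArgumentException("Malformed payor record: " + strPayor);
        }
        return new Payor(Integer.parseInt(arrPayorData[0]), arrPayorData[1]);
    };
}
Alternatively, put a filter in front of the transformer so malformed lines never reach it.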

package org.springframework.integration.samples.tcpclientserver;
import java.io.UnsupportedEncodingException;
import org.springframework.core.convert.converter.Converter;
/**
* Simple byte array to String converter; allowing the character set
* to be specified.
*
* @author Gary Russell
* @since 2.1
*
*/
public class ByteArrayToStringConverter implements Converter<byte[], String> {
private String charSet = "UTF-8";
public String convert(byte[] bytes) {
try {
return new String(bytes, this.charSet);
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
return new String(bytes);
}
}
/**
* @return the charSet
*/
public String getCharSet() {
return charSet;
}
/**
* @param charSet the charSet to set
*/
public void setCharSet(String charSet) {
this.charSet = charSet;
}
}
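For completeness, a quick usage sketch (a direct call for illustration; in the TCP client-server sample this converter would normally be registered as a Spring converter bean):
public class ConverterDemo {
    public static void main(String[] args) {
        ByteArrayToStringConverter converter = new ByteArrayToStringConverter();
        converter.setCharSet("UTF-8");
        // Decodes the bytes for "hi" using the configured charset
        System.out.println(converter.convert(new byte[] { 104, 105 }));
    }
}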

Related

GCM XMPP - Error on start web application server

My Java back end needs to receive mobile messages using GCM XMPP. My web application uses Spring 4.1.4 and Smack 4.1.4.
Smack dependencies:
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-core</artifactId>
<version>4.1.4</version>
</dependency>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-tcp</artifactId>
<version>4.1.4</version>
</dependency>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-extensions</artifactId>
<version>4.1.4</version>
</dependency>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-java7</artifactId>
<version>4.0.1</version>
</dependency>
Bean for XMPP Connection:
@Component("CcsClientImpl")
public class CcsClientImpl {
private static final String GCM_ELEMENT_NAME = "gcm";
private static final String GCM_NAMESPACE = "google:mobile:data";
private static XMPPTCPConnection connection;
/**
* Indicates whether the connection is in draining state, which means that it
* will not accept any new downstream messages.
*/
protected static volatile boolean connectionDraining = false;
private static final Logger logger = LoggerFactory.getLogger("CcsClientImpl");
#Value("${sender.id}")
private String mSenderId;
#Value("${server.api.key}")
private String mServerApiKey;
#Value("${gcm.xmpp.host}")
private String mHost;
#Value("${gcm.xmpp.port}")
private int mPort;
#Value("${gcm.xmpp.debuggable}")
private boolean mDebuggable;
#Autowired
private ProcessorFactory processorFactory;
//#Autowired
public CcsClientImpl() {
ProviderManager.addExtensionProvider(GCM_ELEMENT_NAME, GCM_NAMESPACE, new ExtensionElementProvider<ExtensionElement>() {
@Override
public DefaultExtensionElement parse(XmlPullParser parser,int initialDepth) throws org.xmlpull.v1.XmlPullParserException, IOException {
String json = parser.nextText();
return new GcmPacketExtension(json);
}
});
try {
connect(mSenderId, mServerApiKey);
} catch (XMPPException ex) {
logger.error("ERRO AO CONECTAR COM GCM XMPP", ex);
} catch (SmackException ex) {
logger.error("ERRO AO CONECTAR COM GCM XMPP", ex);
} catch (IOException ex) {
logger.error("ERRO AO CONECTAR COM GCM XMPP", ex);
}
}
/**
* Sends a downstream message to GCM.
*
* @return true if the message has been successfully sent.
*/
public boolean sendDownstreamMessage(String jsonRequest) throws
NotConnectedException {
if (!connectionDraining) {
send(jsonRequest);
return true;
}
logger.info("Dropping downstream message since the connection is draining");
return false;
}
/**
* Returns a random message id to uniquely identify a message.
*
* <p>Note: This is generated by a pseudo-random number generator for
* illustration purposes, and is not guaranteed to be unique.
*/
public String nextMessageId() {
return "m-" + UUID.randomUUID().toString();
}
/**
* Sends a packet with contents provided.
*/
protected void send(String jsonRequest) throws NotConnectedException {
Stanza request = new GcmPacketExtension(jsonRequest).toPacket();
connection.sendStanza(request);
}
/// new: customized version of the standard handleIncomingDataMessage method
/**
* Handles an upstream data message from a device application.
*/
public void handleIncomingDataMessage(CcsMessage msg) {
if (msg.getPayload().get("action") != null) {
PayloadProcessor processor = processorFactory.getProcessor(msg.getPayload().get("action"));
processor.handleMessage(msg);
}
}
/**
* Handles an ACK.
*
* <p>Logs an INFO message, but subclasses could override it to
* properly handle ACKs.
*/
protected void handleAckReceipt(Map<String, Object> jsonObject) {
String messageId = (String) jsonObject.get("message_id");
String from = (String) jsonObject.get("from");
logger.info("handleAckReceipt() from: " + from + ",messageId: " + messageId);
}
/**
* Handles a NACK.
*
* <p>Logs an INFO message, but subclasses could override it to
* properly handle NACKs.
*/
protected void handleNackReceipt(Map<String, Object> jsonObject) {
String messageId = (String) jsonObject.get("message_id");
String from = (String) jsonObject.get("from");
logger.info("handleNackReceipt() from: " + from + ",messageId: " + messageId);
}
protected void handleControlMessage(Map<String, Object> jsonObject) {
logger.info("handleControlMessage(): " + jsonObject);
String controlType = (String) jsonObject.get("control_type");
if ("CONNECTION_DRAINING".equals(controlType)) {
connectionDraining = true;
} else {
logger.info("Unrecognized control type: %s. This could happen if new features are " + "added to the CCS protocol.",
controlType);
}
}
/**
* Creates a JSON encoded GCM message.
*
* @param to RegistrationId of the target device (Required).
* @param messageId Unique messageId for which CCS sends an
* "ack/nack" (Required).
* @param payload Message content intended for the application. (Optional).
* @param collapseKey GCM collapse_key parameter (Optional).
* @param timeToLive GCM time_to_live parameter (Optional).
* @param delayWhileIdle GCM delay_while_idle parameter (Optional).
* @return JSON encoded GCM message.
*/
public static String createJsonMessage(String to, String messageId,
Map<String, String> payload, String collapseKey, Long timeToLive,
Boolean delayWhileIdle) {
Map<String, Object> message = new HashMap<String, Object>();
message.put("to", to);
if (collapseKey != null) {
message.put("collapse_key", collapseKey);
}
if (timeToLive != null) {
message.put("time_to_live", timeToLive);
}
if (delayWhileIdle != null && delayWhileIdle) {
message.put("delay_while_idle", true);
}
message.put("message_id", messageId);
message.put("data", payload);
return JSONValue.toJSONString(message);
}
/**
* Creates a JSON encoded ACK message for an upstream message received
* from an application.
*
* @param to RegistrationId of the device who sent the upstream message.
* @param messageId messageId of the upstream message to be acknowledged to CCS.
* @return JSON encoded ack.
*/
protected static String createJsonAck(String to, String messageId) {
Map<String, Object> message = new HashMap<String, Object>();
message.put("message_type", "ack");
message.put("to", to);
message.put("message_id", messageId);
return JSONValue.toJSONString(message);
}
/**
* Connects to GCM Cloud Connection Server using the supplied credentials.
*
* @param senderId Your GCM project number
* @param serverApiKey API Key of your project
*/
public void connect(String senderId, String serverApiKey)
throws XMPPException, IOException, SmackException {
XMPPTCPConnectionConfiguration config =
XMPPTCPConnectionConfiguration.builder()
.setServiceName(mHost)
.setHost(mHost)
.setCompressionEnabled(false)
.setPort(mPort)
.setConnectTimeout(30000)
.setSecurityMode(SecurityMode.disabled)
.setSendPresence(false)
.setDebuggerEnabled(mDebuggable)
.setSocketFactory(SSLSocketFactory.getDefault())
.build();
connection = new XMPPTCPConnection(config);
//disable Roster as I don't think this is supported by GCM
Roster roster = Roster.getInstanceFor(connection);
roster.setRosterLoadedAtLogin(false);
logger.info("Connecting...");
connection.connect();
connection.addConnectionListener(new LoggingConnectionListener());
// Handle incoming packets
connection.addAsyncStanzaListener(new MyStanzaListener() , new MyStanzaFilter() );
// Log all outgoing packets
connection.addPacketInterceptor(new MyStanzaInterceptor(), new MyStanzaFilter() );
connection.login(senderId + "@gcm.googleapis.com" , serverApiKey);
logger.info("Logged in: " + mSenderId);
}
private CcsMessage getMessage(Map<String, Object> jsonObject) {
String from = jsonObject.get("from").toString();
// PackageName of the application that sent this message.
String category = jsonObject.get("category").toString();
// unique id of this message
String messageId = jsonObject.get("message_id").toString();
@SuppressWarnings("unchecked")
Map<String, String> payload = (Map<String, String>) jsonObject.get("data");
CcsMessage msg = new CcsMessage(from, category, messageId, payload);
return msg;
}
private class MyStanzaFilter implements StanzaFilter {
@Override
public boolean accept(Stanza arg0) {
// TODO Auto-generated method stub
if (arg0.getClass() == Stanza.class) {
return true;
} else {
if (arg0.getTo() != null) {
if (arg0.getTo().startsWith(mSenderId)) {
return true;
}
}
}
return false;
}
}
private class MyStanzaListener implements StanzaListener{
@Override
public void processPacket(Stanza packet) {
logger.info("Received: " + packet.toXML());
Message incomingMessage = (Message) packet;
GcmPacketExtension gcmPacket =
(GcmPacketExtension) incomingMessage.
getExtension(GCM_NAMESPACE);
String json = gcmPacket.getJson();
try {
@SuppressWarnings("unchecked")
Map<String, Object> jsonObject =
(Map<String, Object>) JSONValue.
parseWithException(json);
// present for "ack"/"nack", null otherwise
Object messageType = jsonObject.get("message_type");
if (messageType == null) {
// Normal upstream data message
CcsMessage msg = getMessage(jsonObject);
handleIncomingDataMessage(msg);
// Send ACK to CCS
String messageId = (String) jsonObject.get("message_id");
String from = (String) jsonObject.get("from");
String ack = createJsonAck(from, messageId);
send(ack);
} else if ("ack".equals(messageType.toString())) {
// Process Ack
handleAckReceipt(jsonObject);
} else if ("nack".equals(messageType.toString())) {
// Process Nack
handleNackReceipt(jsonObject);
} else if ("control".equals(messageType.toString())) {
// Process control message
handleControlMessage(jsonObject);
} else {
logger.warn("Unrecognized message type (%s)",
messageType.toString());
}
} catch (ParseException e) {
logger.info("Error parsing JSON " + json, e);
} catch (Exception e) {
logger.info("Failed to process packet", e);
}
}
}
private class MyStanzaInterceptor implements StanzaListener
{
@Override
public void processPacket(Stanza packet) {
logger.info("Sent: {0}", packet.toXML());
}
}
// public static void main(String[] args) throws Exception {
//
// SmackCcsClient ccsClient = new SmackCcsClient();
//
// ccsClient.connect(YOUR_PROJECT_ID, YOUR_API_KEY);
//
// // Send a sample hello downstream message to a device.
// String messageId = ccsClient.nextMessageId();
// Map<String, String> payload = new HashMap<String, String>();
// payload.put("Message", "Ahha, it works!");
// payload.put("CCS", "Dummy Message");
// payload.put("EmbeddedMessageId", messageId);
// String collapseKey = "sample";
// Long timeToLive = 10000L;
// String message = createJsonMessage(YOUR_PHONE_REG_ID, messageId, payload,
// collapseKey, timeToLive, true);
//
// ccsClient.sendDownstreamMessage(message);
// logger.info("Message sent.");
//
// //crude loop to keep connection open for receiving messages
// while(true)
// {;}
// }
/**
* XMPP Packet Extension for GCM Cloud Connection Server.
*/
private static final class GcmPacketExtension extends DefaultExtensionElement {
private final String json;
public GcmPacketExtension(String json) {
super(GCM_ELEMENT_NAME, GCM_NAMESPACE);
this.json = json;
}
public String getJson() {
return json;
}
@Override
public String toXML() {
return String.format("<%s xmlns=\"%s\">%s</%s>",
GCM_ELEMENT_NAME, GCM_NAMESPACE,
StringUtils.escapeForXML(json), GCM_ELEMENT_NAME);
}
public Stanza toPacket() {
Message message = new Message();
message.addExtension(this);
return message;
}
}
private static final class LoggingConnectionListener
implements ConnectionListener {
@Override
public void connected(XMPPConnection xmppConnection) {
logger.info("Connected.");
}
@Override
public void reconnectionSuccessful() {
logger.info("Reconnecting..");
}
@Override
public void reconnectionFailed(Exception e) {
logger.info("Reconnection failed.. ", e);
}
@Override
public void reconnectingIn(int seconds) {
logger.info("Reconnecting in {} secs", seconds);
}
@Override
public void connectionClosedOnError(Exception e) {
logger.info("Connection closed on error.");
}
@Override
public void connectionClosed() {
logger.info("Connection closed.");
}
@Override
public void authenticated(XMPPConnection arg0, boolean arg1) {
// TODO Auto-generated method stub
}
}
@PreDestroy
public void cleanUp() throws Exception {
logger.info("XMPP client bean is being destroyed...");
if (connection.isConnected()) {
logger.info("GCM XMPP connection is open. Disconnecting...");
connection.disconnect();
}
}
}
When Tomcat starts, the following error occurs:
Caused by: java.lang.NoClassDefFoundError: org/jivesoftware/smack/initializer/SmackAndOsgiInitializer
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2476)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:857)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1282)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1164)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.jivesoftware.smack.SmackInitialization.loadSmackClass(SmackInitialization.java:213)
at org.jivesoftware.smack.SmackInitialization.parseClassesToLoad(SmackInitialization.java:193)
at org.jivesoftware.smack.SmackInitialization.processConfigFile(SmackInitialization.java:163)
at org.jivesoftware.smack.SmackInitialization.processConfigFile(SmackInitialization.java:148)
at org.jivesoftware.smack.SmackInitialization.<clinit>(SmackInitialization.java:116)
at org.jivesoftware.smack.SmackConfiguration.getVersion(SmackConfiguration.java:96)
at org.jivesoftware.smack.provider.ProviderManager.<clinit>(ProviderManager.java:121)
at br.com.soma.service.gcm.xmpp.CcsClientImpl.<init>(CcsClientImpl.java:73)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:147)
... 60 more
Caused by: java.lang.ClassNotFoundException: org.jivesoftware.smack.initializer.SmackAndOsgiInitializer
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1313)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1164)
... 82 more
Can someone help me? Thanks!
It works fine now. I just set a property for the Smack version, as you suggested; the original pom mixed smack-java7 4.0.1 with the 4.1.4 artifacts, which is presumably what caused the NoClassDefFoundError. Thanks!!
<properties>
<smack.version>4.1.4</smack.version>
</properties>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-core</artifactId>
<version>${smack.version}</version>
</dependency>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-tcp</artifactId>
<version>${smack.version}</version>
</dependency>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-extensions</artifactId>
<version>${smack.version}</version>
</dependency>
<dependency>
<groupId>org.igniterealtime.smack</groupId>
<artifactId>smack-java7</artifactId>
<version>${smack.version}</version>
</dependency>

Spring Batch to read multiple files with same extension

I have a custom reader to read data from a CSV file.
package org.kp.oppr.remediation.batch.csv;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.commons.lang.StringUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.remediation.batch.csv.FlatFileItemReaderNewLine;
import org.remediation.batch.model.RawItem;
import org.remediation.batch.model.RawItemLineMapper;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.file.LineCallbackHandler;
import org.springframework.batch.item.file.LineMapper;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.batch.item.file.transform.LineTokenizer;
import org.springframework.core.io.Resource;
import org.springframework.util.Assert;
import org.springframework.validation.BindException;
public class RawItemCsvReader extends MultiResourceItemReader<RawItem>
implements StepExecutionListener, LineCallbackHandler,
FieldSetMapper<RawItem> {
static final Logger LOGGER = LogManager.getLogger(RawItemCsvReader.class);
final private String COLUMN_NAMES_KEY = "COLUMNS_NAMES_KEY";
private StepExecution stepExecution;
private DefaultLineMapper<RawItem> lineMapper;
private String[] columnNames;
private Resource[] resources;
// = DelimitedLineTokenizer.DELIMITER_COMMA;
private char quoteCharacter = DelimitedLineTokenizer.DEFAULT_QUOTE_CHARACTER;
private String delimiter;
public RawItemCsvReader() {
setLinesToSkip(0);
setSkippedLinesCallback(this);
}
@Override
public void afterPropertiesSet() {
// not in constructor to ensure we invoke the override
final DefaultLineMapper<RawItem> lineMapper = new RawItemLineMapper();
setLineMapper(lineMapper);
}
/**
* Satisfies the {@link LineCallbackHandler} contract and acts as the
* {@code skippedLinesCallback}.
*
* @param line
*/
@Override
public void handleLine(String line) {
getLineMapper().setLineTokenizer(getTokenizer());
getLineMapper().setFieldSetMapper(this);
}
private LineTokenizer getTokenizer() {
// this.columnNames = line.split(delimiter);
DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
lineTokenizer.setQuoteCharacter(quoteCharacter);
lineTokenizer.setDelimiter(delimiter);
lineTokenizer.setStrict(true);
lineTokenizer.setNames(columnNames);
addColumnNames();
return lineTokenizer;
}
private void addColumnNames() {
stepExecution.getExecutionContext().put(COLUMN_NAMES_KEY, columnNames);
}
@Override
public void setResources(Resource[] resources) {
this.resources = resources;
super.setResources(resources);
}
/**
* Provides access to an otherwise hidden field in the parent class. We need this
* because we have to reconfigure the {@link LineMapper} based on file
* contents.
*
* @param lineMapper
*/
@Override
public void setLineMapper(LineMapper<RawItem> lineMapper) {
if (!(lineMapper instanceof DefaultLineMapper)) {
throw new IllegalArgumentException(
"Must specify a DefaultLineMapper");
}
this.lineMapper = (DefaultLineMapper) lineMapper;
super.setLineMapper(lineMapper);
}
private DefaultLineMapper getLineMapper() {
return this.lineMapper;
}
/**
* Satisfies the {@link FieldSetMapper} contract.
*
* @param fs
* @return
* @throws BindException
*/
@Override
public RawItem mapFieldSet(FieldSet fs) throws BindException {
if (fs == null) {
return null;
}
Map<String, String> record = new LinkedHashMap<String, String>();
for (String columnName : this.columnNames) {
record.put(columnName,
StringUtils.trimToNull(fs.readString(columnName)));
}
RawItem item = new RawItem();
item.setResource(resources);
item.setRecord(record);
return item;
}
@BeforeStep
public void saveStepExecution(StepExecution stepExecution) {
this.stepExecution = stepExecution;
}
@Override
public void beforeStep(StepExecution stepExecution) {
//LOGGER.info("Start Raw Read Step for " + itemResource.getFilename());
}
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
LOGGER.info("End Raw Read Step for lines read: " + stepExecution.getReadCount()
+ " lines skipped: " + stepExecution.getReadSkipCount());
/*
LOGGER.info("End Raw Read Step for " + itemResource.getFilename()
+ " lines read: " + stepExecution.getReadCount()
+ " lines skipped: " + stepExecution.getReadSkipCount());
*/
return ExitStatus.COMPLETED;
}
public void setDelimiter(String delimiter) {
this.delimiter = delimiter;
}
public void setQuoteCharacter(char quoteCharacter) {
this.quoteCharacter = quoteCharacter;
}
public String[] getColumnNames() {
return columnNames;
}
public void setColumnNames(String[] columnNames) {
this.columnNames = columnNames;
}
public String getDelimiter() {
return delimiter;
}
}
I want to use a MultiResourceItemReader along with this class to read multiple files with the same extension. I am using the Spring MultiResourceItemReader to do the job. I need to know how to configure the private ResourceAwareItemReaderItemStream delegate; field for this class.
package org.kp.oppr.remediation.batch.csv;
import java.util.Arrays;
import java.util.Comparator;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.ParseException;
import org.springframework.batch.item.UnexpectedInputException;
import org.springframework.batch.item.file.LineCallbackHandler;
import org.springframework.batch.item.file.LineMapper;
import org.springframework.batch.item.file.ResourceAwareItemReaderItemStream;
import org.springframework.batch.item.util.ExecutionContextUserSupport;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.core.io.Resource;
import org.springframework.util.Assert;
import org.springframework.util.ClassUtils;
public class MultiResourceItemReader <T> implements ItemReader<T>, ItemStream, InitializingBean,ResourceAwareItemReaderItemStream<T> {
static final Logger LOGGER = LogManager
.getLogger(MultiResourceItemReader.class);
private final ExecutionContextUserSupport executionContextUserSupport = new ExecutionContextUserSupport();
private ResourceAwareItemReaderItemStream<? extends T> delegate;
private Resource[] resources;
private MultiResourceIndex index = new MultiResourceIndex();
private boolean saveState = true;
// signals there are no resources to read -> just return null on first read
private boolean noInput;
private LineMapper<T> lineMapper;
private int linesToSkip = 0;
private LineCallbackHandler skippedLinesCallback;
private Comparator<Resource> comparator = new Comparator<Resource>() {
/**
* Compares resource filenames.
*/
public int compare(Resource r1, Resource r2) {
return r1.getFilename().compareTo(r2.getFilename());
}
};
public MultiResourceItemReader() {
executionContextUserSupport.setName(ClassUtils.getShortName(MultiResourceItemReader.class));
}
/**
* @param skippedLinesCallback
* will be called for each one of the initial skipped lines
* before any items are read.
*/
public void setSkippedLinesCallback(LineCallbackHandler skippedLinesCallback) {
this.skippedLinesCallback = skippedLinesCallback;
}
/**
* Public setter for the number of lines to skip at the start of a file. Can
* be used if the file contains a header without useful (column name)
* information, and without a comment delimiter at the beginning of the
* lines.
*
* @param linesToSkip
* the number of lines to skip
*/
public void setLinesToSkip(int linesToSkip) {
this.linesToSkip = linesToSkip;
}
/**
* Setter for line mapper. This property is required to be set.
*
* @param lineMapper
* maps line to item
*/
public void setLineMapper(LineMapper<T> lineMapper) {
this.lineMapper = lineMapper;
}
/**
* Reads the next item, jumping to next resource if necessary.
*/
public T read() throws Exception, UnexpectedInputException, ParseException {
if (noInput) {
return null;
}
T item;
item = readNextItem();
index.incrementItemCount();
return item;
}
/**
* Use the delegate to read the next item, jump to next resource if current
* one is exhausted. Items are appended to the buffer.
* @return next item from input
*/
private T readNextItem() throws Exception {
T item = delegate.read();
while (item == null) {
index.incrementResourceCount();
if (index.currentResource >= resources.length) {
return null;
}
delegate.close();
delegate.setResource(resources[index.currentResource]);
delegate.open(new ExecutionContext());
item = delegate.read();
}
return item;
}
/**
* Close the {@link #setDelegate(ResourceAwareItemReaderItemStream)} reader
* and reset instance variable values.
*/
public void close() throws ItemStreamException {
index = new MultiResourceIndex();
delegate.close();
noInput = false;
}
/**
* Figure out which resource to start with in case of restart, open the
* delegate and restore delegate's position in the resource.
*/
public void open(ExecutionContext executionContext) throws ItemStreamException {
Assert.notNull(resources, "Resources must be set");
noInput = false;
if (resources.length == 0) {
LOGGER.warn("No resources to read");
noInput = true;
return;
}
Arrays.sort(resources, comparator);
for(int i =0; i < resources.length; i++)
{
LOGGER.info("Resources after Sorting" + resources[i]);
}
index.open(executionContext);
delegate.setResource(resources[index.currentResource]);
delegate.open(new ExecutionContext());
try {
for (int i = 0; i < index.currentItem; i++) {
delegate.read();
}
}
catch (Exception e) {
throw new ItemStreamException("Could not restore position on restart", e);
}
}
/**
* Store the current resource index and position in the resource.
*/
public void update(ExecutionContext executionContext) throws ItemStreamException {
if (saveState) {
index.update(executionContext);
}
}
/**
* @param delegate reads items from single {@link Resource}.
*/
public void setDelegate(ResourceAwareItemReaderItemStream<? extends T> delegate) {
this.delegate = delegate;
}
/**
* Set the boolean indicating whether or not state should be saved in the
* provided {@link ExecutionContext} during the {@link ItemStream} call to
* update.
*
* @param saveState
*/
public void setSaveState(boolean saveState) {
this.saveState = saveState;
}
/**
* @param comparator used to order the injected resources, by default
* compares {@link Resource#getFilename()} values.
*/
public void setComparator(Comparator<Resource> comparator) {
this.comparator = comparator;
}
/**
* @param resources input resources
*/
public void setResources(Resource[] resources) {
this.resources = resources;
}
/**
* Facilitates keeping track of the position within multi-resource input.
*/
private class MultiResourceIndex {
private static final String RESOURCE_KEY = "resourceIndex";
private static final String ITEM_KEY = "itemIndex";
private int currentResource = 0;
private int markedResource = 0;
private int currentItem = 0;
private int markedItem = 0;
public void incrementItemCount() {
currentItem++;
}
public void incrementResourceCount() {
currentResource++;
currentItem = 0;
}
public void mark() {
markedResource = currentResource;
markedItem = currentItem;
}
public void reset() {
currentResource = markedResource;
currentItem = markedItem;
}
public void open(ExecutionContext ctx) {
if (ctx.containsKey(executionContextUserSupport.getKey(RESOURCE_KEY))) {
currentResource = ctx.getInt(executionContextUserSupport.getKey(RESOURCE_KEY));
}
if (ctx.containsKey(executionContextUserSupport.getKey(ITEM_KEY))) {
currentItem = ctx.getInt(executionContextUserSupport.getKey(ITEM_KEY));
}
}
public void update(ExecutionContext ctx) {
ctx.putInt(executionContextUserSupport.getKey(RESOURCE_KEY), index.currentResource);
ctx.putInt(executionContextUserSupport.getKey(ITEM_KEY), index.currentItem);
}
}
@Override
public void afterPropertiesSet() throws Exception {
// TODO Auto-generated method stub
}
@Override
public void setResource(Resource resource) {
// TODO Auto-generated method stub
}
}
The Spring configuration file is:
<batch:step id="readFromCSVFileAndUploadToDB" next="stepMovePdwFile">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="multiResourceReader" writer="rawItemDatabaseWriter"
commit-interval="500" skip-policy="pdwUploadSkipPolicy" />
</batch:tasklet>
</batch:step>
<bean id="multiResourceReader"
class="org.springframework.batch.item.file.MultiResourceItemReader" scope="step">
<property name="resource" value="file:#{jobParameters[filePath]}/*.dat" />
<property name="delegate" ref="rawItemCsvReader"></property>
</bean>
<bean id="rawItemCsvReader" class="org.kp.oppr.remediation.batch.csv.RawItemCsvReader"
scope="step">
<property name="resources" value="file:#{jobParameters[filePath]}/*.dat" />
<property name="columnNames" value="${columnNames}" />
<property name="delimiter" value="${delimiter}" />
</bean>
Use a standard FlatFileItemReader (properly configured via XML) instead of your RawItemCsvReader as the delegate.
This solves your problem because FlatFileItemReader implements ResourceAwareItemReaderItemStream, which is exactly what MultiResourceItemReader expects of its delegate.
Remember: Spring Batch is heavily based on delegation; writing a custom class like your reader is rarely necessary. A sketch follows.
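A minimal sketch of that configuration, reusing the ${columnNames} and ${delimiter} properties from above (the PassThroughFieldSetMapper is a placeholder; you would substitute a FieldSetMapper that builds a RawItem):
<bean id="multiResourceReader"
    class="org.springframework.batch.item.file.MultiResourceItemReader" scope="step">
    <property name="resources" value="file:#{jobParameters[filePath]}/*.dat" />
    <property name="delegate" ref="flatFileItemReader" />
</bean>
<bean id="flatFileItemReader"
    class="org.springframework.batch.item.file.FlatFileItemReader">
    <property name="lineMapper">
        <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
            <property name="lineTokenizer">
                <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                    <property name="delimiter" value="${delimiter}" />
                    <property name="names" value="${columnNames}" />
                </bean>
            </property>
            <property name="fieldSetMapper">
                <bean class="org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper" />
            </property>
        </bean>
    </property>
</bean>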

Quartz doesn't recognize schema job_scheduling_data_2_0.xsd present in quartz jar file

I am getting the exception below on server startup.
I am using Quartz 2.2.21 with Spring 3.2.
I have enabled quartz plugin (org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin).
Please find below the start tag of our XML file:
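For reference, the standard start tag for schema version 2.0 (which the error below refers to) is:
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd"
    version="2.0">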
During server startup we get the following log information and stack trace:
Error Message:
Unable to load local schema packaged in quartz distribution jar. Utilizing schema online at http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd
Exception:
Caused by: org.xml.sax.SAXParseException; systemId: file:///quartz_job_data.xml; lineNumber: 5; columnNumber: 104;
schema_reference.4: Failed to read schema document 'http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
I had the same problem. I'm using JBoss 7.1.1, and the problem appears when you don't have a connection to the internet. This is easy to reproduce by putting a fake, unreachable address in the hosts file.
I tried to force the local copy, but it does not work.
What I finally did was partially override the functionality until this is fixed. See: https://jira.spring.io/browse/SPR-13706
public class CustomXMLSchedulingDataProcessor extends org.quartz.xml.XMLSchedulingDataProcessor {
public static final String QUARTZ_XSD_PATH_IN_JAR_CLASSPATH = "classpath:org/quartz/xml/job_scheduling_data_2_0.xsd";
public CustomXMLSchedulingDataProcessor(ClassLoadHelper clh) throws ParserConfigurationException {
super(clh);
}
@Override
protected Object resolveSchemaSource() {
InputSource inputSource;
InputStream is = null;
try {
is = classLoadHelper.getResourceAsStream(QUARTZ_XSD_PATH_IN_JAR_CLASSPATH);
} finally {
if (is != null) {
inputSource = new InputSource(is);
inputSource.setSystemId(QUARTZ_SCHEMA_WEB_URL);
}
else {
return QUARTZ_SCHEMA_WEB_URL;
}
}
return inputSource;
}
}
And I wrote a new XMLSchedulingDataProcessorPlugin, overriding just the instantiation of the class above.
public class XMLSchedulingDataProcessorPlugin
extends SchedulerPluginWithUserTransactionSupport
implements FileScanListener {
/*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* Data members.
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
private static final int MAX_JOB_TRIGGER_NAME_LEN = 80;
private static final String JOB_INITIALIZATION_PLUGIN_NAME = "JobSchedulingDataLoaderPlugin";
private static final String FILE_NAME_DELIMITERS = ",";
private boolean failOnFileNotFound = true;
private String fileNames = CustomXMLSchedulingDataProcessor.QUARTZ_XML_DEFAULT_FILE_NAME;
// Populated by initialization
private Map<String, JobFile> jobFiles = new LinkedHashMap<String, JobFile>();
private long scanInterval = 0;
boolean started = false;
protected ClassLoadHelper classLoadHelper = null;
private Set<String> jobTriggerNameSet = new HashSet<String>();
/*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* Constructors.
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
public XMLSchedulingDataProcessorPlugin() {
}
/*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* Interface.
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
/**
* Comma separated list of file names (with paths) to the XML files that should be read.
*/
public String getFileNames() {
return fileNames;
}
/**
* The file name (and path) to the XML file that should be read.
*/
public void setFileNames(String fileNames) {
this.fileNames = fileNames;
}
/**
* The interval (in seconds) at which to scan for changes to the file.
* If the file has been changed, it is re-loaded and parsed. The default
* value for the interval is 0, which disables scanning.
*
* @return Returns the scanInterval.
*/
public long getScanInterval() {
return scanInterval / 1000;
}
/**
* The interval (in seconds) at which to scan for changes to the file.
* If the file has been changed, it is re-loaded and parsed. The default
* value for the interval is 0, which disables scanning.
*
* @param scanInterval The scanInterval to set.
*/
public void setScanInterval(long scanInterval) {
this.scanInterval = scanInterval * 1000;
}
/**
* Whether or not initialization of the plugin should fail (throw an
* exception) if the file cannot be found. Default is <code>true</code>.
*/
public boolean isFailOnFileNotFound() {
return failOnFileNotFound;
}
/**
* Whether or not initialization of the plugin should fail (throw an
* exception) if the file cannot be found. Default is <code>true</code>.
*/
public void setFailOnFileNotFound(boolean failOnFileNotFound) {
this.failOnFileNotFound = failOnFileNotFound;
}
/*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
* SchedulerPlugin Interface.
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
/**
* <p>
* Called during creation of the <code>Scheduler</code> in order to give
* the <code>SchedulerPlugin</code> a chance to initialize.
* </p>
*
* @throws org.quartz.SchedulerConfigException
* if there is an error initializing.
*/
public void initialize(String name, final Scheduler scheduler, ClassLoadHelper schedulerFactoryClassLoadHelper)
throws SchedulerException {
super.initialize(name, scheduler);
this.classLoadHelper = schedulerFactoryClassLoadHelper;
getLog().info("Registering Quartz Job Initialization Plug-in.");
// Create JobFile objects
StringTokenizer stok = new StringTokenizer(fileNames, FILE_NAME_DELIMITERS);
while (stok.hasMoreTokens()) {
final String fileName = stok.nextToken();
final JobFile jobFile = new JobFile(fileName);
jobFiles.put(fileName, jobFile);
}
}
@Override
public void start(UserTransaction userTransaction) {
try {
if (jobFiles.isEmpty() == false) {
if (scanInterval > 0) {
getScheduler().getContext().put(JOB_INITIALIZATION_PLUGIN_NAME + '_' + getName(), this);
}
Iterator<JobFile> iterator = jobFiles.values().iterator();
while (iterator.hasNext()) {
JobFile jobFile = iterator.next();
if (scanInterval > 0) {
String jobTriggerName = buildJobTriggerName(jobFile.getFileBasename());
TriggerKey tKey = new TriggerKey(jobTriggerName, JOB_INITIALIZATION_PLUGIN_NAME);
// remove pre-existing job/trigger, if any
getScheduler().unscheduleJob(tKey);
JobDetail job = newJob().withIdentity(jobTriggerName, JOB_INITIALIZATION_PLUGIN_NAME).ofType(FileScanJob.class)
.usingJobData(FileScanJob.FILE_NAME, jobFile.getFileName())
.usingJobData(FileScanJob.FILE_SCAN_LISTENER_NAME, JOB_INITIALIZATION_PLUGIN_NAME + '_' + getName())
.build();
SimpleTrigger trig = newTrigger().withIdentity(tKey).withSchedule(
simpleSchedule().repeatForever().withIntervalInMilliseconds(scanInterval))
.forJob(job)
.build();
getScheduler().scheduleJob(job, trig);
getLog().debug("Scheduled file scan job for data file: {}, at interval: {}", jobFile.getFileName(), scanInterval);
}
processFile(jobFile);
}
}
} catch(SchedulerException se) {
getLog().error("Error starting background-task for watching jobs file.", se);
} finally {
started = true;
}
}
/**
* Helper method for generating unique job/trigger name for the
* file scanning jobs (one per FileJob). The unique names are saved
* in jobTriggerNameSet.
*/
private String buildJobTriggerName(
String fileBasename) {
// Name w/o collisions will be prefix + _ + filename (with '.' of filename replaced with '_')
// For example: JobInitializationPlugin_jobInitializer_myjobs_xml
String jobTriggerName = JOB_INITIALIZATION_PLUGIN_NAME + '_' + getName() + '_' + fileBasename.replace('.', '_');
// If name is too long (DB column is 80 chars), then truncate to max length
if (jobTriggerName.length() > MAX_JOB_TRIGGER_NAME_LEN) {
jobTriggerName = jobTriggerName.substring(0, MAX_JOB_TRIGGER_NAME_LEN);
}
// Make sure this name is unique in case the same file name under different
// directories is being checked, or had a naming collision due to length truncation.
// If there is a conflict, keep incrementing a _# suffix on the name (being sure
// not to get too long), until we find a unique name.
int currentIndex = 1;
while (jobTriggerNameSet.add(jobTriggerName) == false) {
// If not our first time through, then strip off old numeric suffix
if (currentIndex > 1) {
jobTriggerName = jobTriggerName.substring(0, jobTriggerName.lastIndexOf('_'));
}
String numericSuffix = "_" + currentIndex++;
// If the numeric suffix would make the name too long, then make room for it.
if (jobTriggerName.length() > (MAX_JOB_TRIGGER_NAME_LEN - numericSuffix.length())) {
jobTriggerName = jobTriggerName.substring(0, (MAX_JOB_TRIGGER_NAME_LEN - numericSuffix.length()));
}
jobTriggerName += numericSuffix;
}
return jobTriggerName;
}
/**
* Overridden to ignore <em>wrapInUserTransaction</em> because shutdown()
* does not interact with the <code>Scheduler</code>.
*/
@Override
public void shutdown() {
// Since we have nothing to do, override base shutdown so we don't
// get extraneous UserTransactions.
}
private void processFile(JobFile jobFile) {
if (jobFile == null || !jobFile.getFileFound()) {
return;
}
try {
CustomXMLSchedulingDataProcessor processor =
new CustomXMLSchedulingDataProcessor(this.classLoadHelper);
processor.addJobGroupToNeverDelete(JOB_INITIALIZATION_PLUGIN_NAME);
processor.addTriggerGroupToNeverDelete(JOB_INITIALIZATION_PLUGIN_NAME);
processor.processFileAndScheduleJobs(
jobFile.getFileName(),
jobFile.getFileName(), // systemId
getScheduler());
} catch (Exception e) {
getLog().error("Error scheduling jobs: " + e.getMessage(), e);
}
}
public void processFile(String filePath) {
processFile((JobFile)jobFiles.get(filePath));
}
/**
* @see org.quartz.jobs.FileScanListener#fileUpdated(java.lang.String)
*/
public void fileUpdated(String fileName) {
if (started) {
processFile(fileName);
}
}
class JobFile {
private String fileName;
// These are set by initialize()
private String filePath;
private String fileBasename;
private boolean fileFound;
protected JobFile(String fileName) throws SchedulerException {
this.fileName = fileName;
initialize();
}
protected String getFileName() {
return fileName;
}
protected boolean getFileFound() {
return fileFound;
}
protected String getFilePath() {
return filePath;
}
protected String getFileBasename() {
return fileBasename;
}
private void initialize() throws SchedulerException {
InputStream f = null;
try {
String furl = null;
File file = new File(getFileName()); // files in filesystem
if (!file.exists()) {
URL url = classLoadHelper.getResource(getFileName());
if(url != null) {
try {
furl = URLDecoder.decode(url.getPath(), "UTF-8");
} catch (UnsupportedEncodingException e) {
furl = url.getPath();
}
file = new File(furl);
try {
f = url.openStream();
} catch (IOException ignor) {
// Swallow the exception
}
}
} else {
try {
f = new java.io.FileInputStream(file);
}catch (FileNotFoundException e) {
// ignore
}
}
if (f == null) {
if (isFailOnFileNotFound()) {
throw new SchedulerException(
"File named '" + getFileName() + "' does not exist.");
} else {
getLog().warn("File named '" + getFileName() + "' does not exist.");
}
} else {
fileFound = true;
}
filePath = (furl != null) ? furl : file.getAbsolutePath();
fileBasename = file.getName();
} finally {
try {
if (f != null) {
f.close();
}
} catch (IOException ioe) {
getLog().warn("Error closing jobs file " + getFileName(), ioe);
}
}
}
}
}
That way you only have to use this plugin in your configuration and everything will work by default.
org.quartz.plugin.jobInitializer.class =
com.level2.quartz.processor.plugin.XMLSchedulingDataProcessorPlugin
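For reference, the plugin's other setters can be configured the same way; a fuller quartz.properties sketch (the file name is illustrative, taken from the stack trace above):
org.quartz.plugin.jobInitializer.class = com.level2.quartz.processor.plugin.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz_job_data.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
org.quartz.plugin.jobInitializer.scanInterval = 0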

Serializing a long string in Hadoop

I have a class which implements the WritableComparable interface in Hadoop. This class has two String variables, one short and one very long. I use writeChars to write these variables and readLine to read them, but I get errors when reading them back. What is the best way to serialize such a long String in Hadoop?
I think you can use BytesWritable to make it more efficient. Check the custom key below, which has a BytesWritable-typed callId.
public class CustomMRKey implements WritableComparable<CustomMRKey> {
private BytesWritable callId;
private IntWritable mapperType;
/**
* Default constructor.
*/
public CustomMRKey() {
set(new BytesWritable(), new IntWritable());
}
/**
* Constructor
*
* @param callId
* @param mapperType
*/
public CustomMRKey(BytesWritable callId, IntWritable mapperType) {
set(callId, mapperType);
}
/**
* sets the call id and mapper type
*
* @param callId
* @param mapperType
*/
public void set(BytesWritable callId, IntWritable mapperType) {
this.callId = callId;
this.mapperType = mapperType;
}
/**
* This method returns the callId
*
* @return callId
*/
public BytesWritable getCallId() {
return callId;
}
/**
* This method sets the callId given a callId
*
* @param callId
*/
public void setCallId(BytesWritable callId) {
this.callId = callId;
}
/**
* This method returns the mapper type
*
*
* @return
*/
public IntWritable getMapperType() {
return mapperType;
}
/**
* This method is set to store the mapper type
*
* @param mapperType
*/
public void setMapperType(IntWritable mapperType) {
this.mapperType = mapperType;
}
@Override
public void readFields(DataInput in) throws IOException {
callId.readFields(in);
mapperType.readFields(in);
}
@Override
public void write(DataOutput out) throws IOException {
callId.write(out);
mapperType.write(out);
}
@Override
public boolean equals(Object obj) {
if (obj instanceof CustomMRKey) {
CustomMRKey key = (CustomMRKey) obj;
return callId.equals(key.callId)
&& mapperType.equals(key.mapperType);
}
return false;
}
@Override
public int compareTo(CustomMRKey key) {
int cmp = callId.compareTo(key.getCallId());
if (cmp != 0) {
return cmp;
}
return mapperType.compareTo(key.getMapperType());
}
}
To use it in mapper code, for example, you can generate the key in BytesWritable form with something like the following.
You can call:
CustomMRKey customKey=new CustomMRKey(new BytesWritable(),new IntWritable());
customKey.setCallId(makeKey(value, this.resultKey));
customKey.setMapperType(this.mapTypeIndicator);
The makeKey method then looks something like this:
public BytesWritable makeKey(Text value, BytesWritable key) throws IOException {
try {
ByteArrayOutputStream byteKey = new ByteArrayOutputStream(Constants.MR_DEFAULT_KEY_SIZE);
for (String field : keyFields) {
byte[] bytes = value.getString(field).getBytes();
byteKey.write(bytes,0,bytes.length);
}
if(key==null){
return new BytesWritable(byteKey.toByteArray());
}else{
key.set(byteKey.toByteArray(), 0, byteKey.size());
return key;
}
} catch (Exception ex) {
throw new IOException("Could not generate key", ex);
}
}
Hope this may help.
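On the original question itself: as an alternative sketch, plain String fields can be held as org.apache.hadoop.io.Text, which writes a length prefix followed by UTF-8 bytes and therefore round-trips arbitrarily long strings (unlike writeChars paired with readLine, which gives the reader no length or terminator contract to rely on). A minimal sketch:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

// Sketch: a key with one short and one long String field, both held as Text.
// Production code would also override equals() and hashCode().
public class TwoStringKey implements WritableComparable<TwoStringKey> {
    private final Text shortField = new Text();
    private final Text longField = new Text();

    public void set(String shortValue, String longValue) {
        shortField.set(shortValue);
        longField.set(longValue);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        shortField.write(out); // vint length followed by UTF-8 bytes
        longField.write(out);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        shortField.readFields(in);
        longField.readFields(in);
    }

    @Override
    public int compareTo(TwoStringKey other) {
        int cmp = shortField.compareTo(other.shortField);
        return cmp != 0 ? cmp : longField.compareTo(other.longField);
    }
}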

Handling embedded images in IText XMLWorker

Is there a way to handle embedded (Base64) images in XMLWorker? In version 5.3.5
the ImageProvider I used no longer works (an exception is raised before it is called),
so I patched ImageRetrieve as follows, but obviously this will break with the next
XMLWorker update:
package com.itextpdf.tool.xml.net;
import java.io.File;
import java.io.IOException;
import java.net.MalformedURLException;
import com.itextpdf.text.BadElementException;
import com.itextpdf.text.Image;
import com.itextpdf.text.pdf.codec.Base64;
import com.itextpdf.tool.xml.net.exc.NoImageException;
import com.itextpdf.tool.xml.pipeline.html.ImageProvider;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* @author redlab_b
*
*/
public class ImageRetrieve {
final static Pattern INLINE_PATTERN = Pattern.compile("^/data:image/(png|jpg|gif);base64,(.*)");
private final ImageProvider provider;
/**
* @param imageProvider the provider to use.
*
*/
public ImageRetrieve(final ImageProvider imageProvider) {
this.provider = imageProvider;
}
/**
*
*/
public ImageRetrieve() {
this.provider = null;
}
/**
* @param src a URI that can be used to retrieve an image
* @return an iText Image object
* @throws NoImageException if there is no image
* @throws IOException if an IOException occurred
*/
public com.itextpdf.text.Image retrieveImage(final String src) throws NoImageException, IOException {
com.itextpdf.text.Image img = null;
if (null != provider) {
img = provider.retrieve(src);
}
if (null == img) {
String path = null;
if (src.startsWith("http")) {
// full url available
path = src;
} else if (null != provider){
String root = this.provider.getImageRootPath();
if (null != root) {
if (root.endsWith("/") && src.startsWith("/")) {
root = root.substring(0, root.length() - 1);
}
path = root + src;
}
} else {
path = src;
}
if (null != path) {
try {
Matcher m;
if (path.startsWith("http")) {
img = com.itextpdf.text.Image.getInstance(path);
} else if ((m = INLINE_PATTERN.matcher(path)).matches()) {
// Let's handle the embedded image without saving it
try {
byte[] data = Base64.decode(m.group(2));
return Image.getInstance(data);
} catch (Exception ex) {
throw new NoImageException(src, ex);
}
} else {
img = com.itextpdf.text.Image.getInstance(new File(path).toURI().toURL());
}
if (null != provider && null != img) {
provider.store( src, img);
}
} catch (BadElementException e) {
throw new NoImageException(src, e);
} catch (MalformedURLException e) {
throw new NoImageException(src, e);
}
} else {
throw new NoImageException(src);
}
}
return img;
}
}
It's been almost a year since you asked this question, but maybe this answer will help anyway.
Recently I ran into a similar problem. My goal was to include an image stored in a database in the generated PDF.
To do this I extended the com.itextpdf.tool.xml.pipeline.html.AbstractImageProvider class and overrode its retrieve() method like this:
public class MyImageProvider extends AbstractImageProvider {
@Override
public Image retrieve(final String src) {
Image img = super.retrieve(src);
if (img == null) {
try {
byte [] data = getMyImageSomehow(src);
img = Image.getInstance(data);
super.store(src, img);
}
catch (Exception e) {
//handle exceptions
}
}
return img;
}
@Override
public String getImageRootPath() {
return "http://sampleurl/img";
}
}
Then, when building the pipelines for XMLWorker [1], I pass an instance of my class to the context:
htmlPipelineContext.setImageProvider(new MyImageProvider());
Now you would expect this to work. But there's a catch! Somewhere deep inside the xmlworker library, this htmlPipelineContext is cloned, and during that operation our implementation of ImageProvider gets lost. This happens inside HtmlPipelineContext's clone() method. Take a look at lines 274-280 (I refer to the 5.4.4 version):
final String rootPath = imageProvider.getImageRootPath();
newCtx.setImageProvider(new AbstractImageProvider() {
public String getImageRootPath() {
return rootPath;
}
});
This is even described in HtmlPipelineContext.clone()'s javadoc [2]:
Create a clone of this HtmlPipelineContext, the clone only contains the initial values, not the internal values. Beware, the state of the current Context is not copied to the clone. Only the configurational important stuff like the (...) ImageProvider (new AbstractImageProvider with same ImageRootPath) , (...) are copied.
Isn't it funny? You get a class that is designed for extension by being made abstract, but in the end it turns out that this class serves only as a property holder.
My workaround for this:
public class MySpecialImageProviderAwareHtmlPipelineContext extends HtmlPipelineContext {
MySpecialImageProviderAwareHtmlPipelineContext () {
super(null);
}
public HtmlPipelineContext clone () {
HtmlPipelineContext ctx = null;
try {
ctx = super.clone();
ctx.setImageProvider(new MyImageProvider());
} catch (Exception e) {
//handle exception
}
return ctx;
}
}
Then I just use this instead of HtmlPipelineContext.
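For completeness, a sketch of how the custom context plugs into the usual pipeline setup from [1] (document, writer, cssResolver and inputStream are assumed to exist as in that example):
// Build the pipeline with the clone-safe context so MyImageProvider survives cloning
MySpecialImageProviderAwareHtmlPipelineContext htmlContext =
        new MySpecialImageProviderAwareHtmlPipelineContext();
htmlContext.setTagFactory(Tags.getHtmlTagProcessorFactory());
htmlContext.setImageProvider(new MyImageProvider());
PdfWriterPipeline pdf = new PdfWriterPipeline(document, writer);
HtmlPipeline html = new HtmlPipeline(htmlContext, pdf);
CssResolverPipeline css = new CssResolverPipeline(cssResolver, html);
XMLWorker worker = new XMLWorker(css, true);
XMLParser parser = new XMLParser(worker);
parser.parse(inputStream);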
[1] http://demo.itextsupport.com/xmlworker/itextdoc/flatsite.html#itextdoc-menu-7
[2] http://api.itextpdf.com/xml/com/itextpdf/tool/xml/pipeline/html/HtmlPipelineContext.html#clone()
And your solution seems to have been adopted in later versions (5.5.6 at least).
