Confluent HDFS Connector is losing messages

Community, could you please help me understand why ~3% of my messages don't end up in HDFS? I wrote a simple producer in Java to generate 10 million messages:
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaProducerWrapper {

    public static final String TEST_SCHEMA = "{"
            + "\"type\":\"record\","
            + "\"name\":\"myrecord\","
            + "\"fields\":["
            + " { \"name\":\"str1\", \"type\":\"string\" },"
            + " { \"name\":\"str2\", \"type\":\"string\" },"
            + " { \"name\":\"int1\", \"type\":\"int\" }"
            + "]}";

    private static final Logger logger = LoggerFactory.getLogger(KafkaProducerWrapper.class);

    private final String topic;
    private final Schema schema;
    private final KafkaProducer<String, GenericRecord> producer;
    // incremented only on the producer's single I/O thread;
    // volatile so the main thread sees the updates
    private volatile long messageCounter;
    private volatile long messageErrorCounter;

    public KafkaProducerWrapper(String topic) throws UnknownHostException {
        // store topic name
        this.topic = topic;
        // initialize kafka producer
        Properties config = new Properties();
        config.put("client.id", InetAddress.getLocalHost().getHostName());
        config.put("bootstrap.servers", "myserver-1:9092");
        config.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        config.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        config.put("schema.registry.url", "http://myserver-1:8089");
        config.put("acks", "all");
        producer = new KafkaProducer<>(config);
        // parse schema
        Schema.Parser parser = new Schema.Parser();
        schema = parser.parse(TEST_SCHEMA);
    }

    public void send() {
        // generate key
        int key = (int) (Math.random() * 20);
        // generate record
        GenericData.Record r = new GenericData.Record(schema);
        r.put("str1", "text" + key);
        r.put("str2", "text2" + key);
        r.put("int1", key);
        final ProducerRecord<String, GenericRecord> record =
                new ProducerRecord<>(topic, "K" + key, r);
        producer.send(record, new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null) {
                    logger.error("Send failed for record {}", record, e);
                    messageErrorCounter++;
                    return;
                }
                logger.debug("Send succeeded for record {}", record);
                messageCounter++;
            }
        });
    }

    public String getStats() {
        return "Messages sent: " + messageCounter + "/" + messageErrorCounter;
    }

    public long getMessageCounter() {
        return messageCounter + messageErrorCounter;
    }

    public void close() {
        producer.close();
    }

    public static void main(String[] args) throws InterruptedException, UnknownHostException {
        // initialize kafka producer
        KafkaProducerWrapper kafkaProducerWrapper = new KafkaProducerWrapper("my-test-topic");
        long max = 10000000L;
        for (long i = 0; i < max; i++) {
            kafkaProducerWrapper.send();
        }
        logger.info("producer-demo sent all messages");
        while (kafkaProducerWrapper.getMessageCounter() < max) {
            logger.info(kafkaProducerWrapper.getStats());
            Thread.sleep(2000);
        }
        logger.info(kafkaProducerWrapper.getStats());
        kafkaProducerWrapper.close();
    }
}
And I use the Confluent HDFS Connector in standalone mode to write data to HDFS. The configuration is as follows:
name=hdfs-consumer-test
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=my-test-topic
hdfs.url=hdfs://my-cluster/kafka-test
hadoop.conf.dir=/etc/hadoop/conf/
flush.size=100000
rotate.interval.ms=20000
# increase timeouts to avoid CommitFailedException
consumer.session.timeout.ms=300000
consumer.request.timeout.ms=310000
heartbeat.interval.ms=60000
session.timeout.ms=100000
The connector writes the data into HDFS, but even after waiting for the 20000 ms rotate.interval.ms to elapse, not all messages are received:
scala> spark.read.avro("/kafka-test/topics/my-test-topic/partition=*/my-test-topic*").count()
res0: Long = 9749015
Any idea what the reason for this behavior is? Where is my mistake? I'm using Confluent 3.0.1 / Kafka 0.10.0.1.

Are you seeing that only the last few messages are not moved to HDFS? If so, it's likely you are running into the issue described here: https://github.com/confluentinc/kafka-connect-hdfs/pull/100
Try sending one more message to the topic after rotate.interval.ms has expired to validate that this is what you are running into. If you need to rotate based on time, it's probably a good idea to upgrade to pick up the fix.
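To validate this quickly, here is a diagnostic sketch reusing the KafkaProducerWrapper from the question (the sleep durations are illustrative, not prescribed):

// Diagnostic sketch: wait past rotate.interval.ms (20000 ms above), then send
// one extra record. If the previously missing ~3% then show up in HDFS, you
// are hitting the rotation issue fixed in kafka-connect-hdfs PR #100.
public static void probeRotation() throws Exception {
    KafkaProducerWrapper probe = new KafkaProducerWrapper("my-test-topic");
    Thread.sleep(25000); // longer than rotate.interval.ms
    probe.send();        // this record nudges the connector to rotate/commit
    Thread.sleep(5000);  // give the async send callback time to complete
    probe.close();
}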

Related

How to delete a large amount of data one by one from a table with its relations using the transactional annotation

I have a large amount of data that I want to purge from the database. There are about 6 tables, of which 3 have a many-to-many relationship with CascadeType. All the others are log and history tables independent of the other 3.
I want to purge this data one by one, and if any record fails while deleting, I have to roll back only the current record, show it in the console, and keep deleting the others.
I am trying to use the transactional annotation with Spring Boot, but all purging stops if an error occurs.
How can I manage this kind of requirement?
Here is what I did:
@Transactional
private void purgeCards(List<CardEntity> cardsTobePurge) {
    List<Long> nextCardsNumberToUpdate = getNextCardsWhichWillNotBePurge(cardsTobePurge);
    TransactionTemplate lTransTemplate = new TransactionTemplate(transactionManager);
    lTransTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRED);
    lTransTemplate.execute(new TransactionCallback<Object>() {
        @Override
        public Object doInTransaction(TransactionStatus status) {
            cardsTobePurge.forEach(cardTobePurge -> {
                Long nextCardNumberOfCurrent = cardTobePurge.getNextCard();
                if (nextCardsNumberToUpdate.contains(nextCardNumberOfCurrent)) {
                    CardEntity cardToUnlik = cardRepository.findByCardNumber(nextCardNumberOfCurrent);
                    unLink(cardToUnlik);
                }
                log.info(BATCH_TITLE + " Removing card Number : " + cardTobePurge.getCardNumber() + " with Id : "
                        + cardTobePurge.getId());
                List<CardHistoryEntity> historyEntitiesOfThisCard = cardHistoryRepository.findByCard(cardTobePurge);
                List<LogCreationCardEntity> logCreationEntitiesForThisCard = logCreationCardRepository
                        .findByCardNumber(cardTobePurge.getCardNumber());
                List<LogCustomerMergeEntity> logCustomerMergeEntitiesForThisCard = logCustomerMergeRepository
                        .findByCard(cardTobePurge);
                cardHistoryRepository.deleteAll(historyEntitiesOfThisCard);
                logCreationCardRepository.deleteAll(logCreationEntitiesForThisCard);
                logCustomerMergeRepository.deleteAll(logCustomerMergeEntitiesForThisCard);
                cardRepository.delete(cardTobePurge);
            });
            return Boolean.TRUE;
        }
    });
}
As a solution to my question:
I worked with TransactionTemplate to manage transactions manually, so if an exception is raised, the rollback applies only to the current iteration and processing continues with the other cards.
private void purgeCards(List<CardEntity> cardsTobePurge) {
    int[] counter = { 0 }; // to simulate the exception
    List<Long> nextCardsNumberToUpdate = findNextCardsWhichWillNotBePurge(cardsTobePurge);
    cardsTobePurge.forEach(cardTobePurge -> {
        Long nextCardNumberOfCurrent = cardTobePurge.getNextCard();
        CardEntity cardToUnlik = null;
        counter[0]++; // to simulate the exception
        if (nextCardsNumberToUpdate.contains(nextCardNumberOfCurrent)) {
            cardToUnlik = cardRepository.findByCardNumber(nextCardNumberOfCurrent);
        }
        purgeCard(cardTobePurge, nextCardsNumberToUpdate, cardToUnlik, counter);
    });
}

private void purgeCard(@NonNull CardEntity cardToPurge, List<Long> nextCardsNumberToUpdate, CardEntity cardToUnlik,
        int[] counter) {
    TransactionTemplate lTransTemplate = new TransactionTemplate(transactionManager);
    lTransTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRED);
    lTransTemplate.execute(new TransactionCallbackWithoutResult() {
        @Override
        public void doInTransactionWithoutResult(TransactionStatus status) {
            try {
                if (cardToUnlik != null)
                    unLink(cardToUnlik);
                log.info(BATCH_TITLE + " Removing card Number : " + cardToPurge.getCardNumber() + " with Id : "
                        + cardToPurge.getId());
                List<CardHistoryEntity> historyEntitiesOfThisCard = cardHistoryRepository.findByCard(cardToPurge);
                List<LogCreationCardEntity> logCreationEntitiesForThisCard = logCreationCardRepository
                        .findByCardNumber(cardToPurge.getCardNumber());
                List<LogCustomerMergeEntity> logCustomerMergeEntitiesForThisCard = logCustomerMergeRepository
                        .findByCard(cardToPurge);
                cardHistoryRepository.deleteAll(historyEntitiesOfThisCard);
                logCreationCardRepository.deleteAll(logCreationEntitiesForThisCard);
                logCustomerMergeRepository.deleteAll(logCustomerMergeEntitiesForThisCard);
                cardRepository.delete(cardToPurge);
                if (counter[0] == 2) // to simulate the exception
                    throw new Exception(); // to simulate the exception
            } catch (Exception e) {
                status.setRollbackOnly();
                if (cardToPurge != null)
                    log.error(BATCH_TITLE + " Problem with card Number : " + cardToPurge.getCardNumber()
                            + " with Id : " + cardToPurge.getId(), e);
                else
                    log.error(BATCH_TITLE + " Card entity is null", e);
            }
        }
    });
}
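For completeness, a hedged alternative sketch using declarative transactions instead of TransactionTemplate. Note that the original @Transactional on a private method is ignored by Spring's proxy-based AOP, which is one reason the annotation appeared to do nothing. Moving the per-card work to a public method on a separate bean and marking it REQUIRES_NEW gives each card its own transaction; the CardPurgeOrchestrator/CardPurgeService/purgeOne names are illustrative, not from the question.

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Illustrative orchestrator: a failure rolls back only the failing card's
// transaction because purgeOne runs with REQUIRES_NEW on a separate bean.
@Service
public class CardPurgeOrchestrator {

    private final CardPurgeService cardPurgeService;

    public CardPurgeOrchestrator(CardPurgeService cardPurgeService) {
        this.cardPurgeService = cardPurgeService;
    }

    public void purgeCards(List<CardEntity> cardsTobePurge) {
        for (CardEntity card : cardsTobePurge) {
            try {
                cardPurgeService.purgeOne(card);
            } catch (Exception e) {
                // Only this card's REQUIRES_NEW transaction was rolled back;
                // log and continue with the remaining cards.
                System.err.println("Purge failed for card " + card.getId() + ": " + e);
            }
        }
    }
}

@Service
class CardPurgeService {

    // Public method on a Spring bean, so the transactional proxy applies.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void purgeOne(CardEntity cardToPurge) {
        // unlink + delete history/log rows + delete the card,
        // exactly as in the doInTransactionWithoutResult body above
    }
}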

Incorrect file being produced using websockets in Helidon

I am trying to upload a file using websockets in Helidon. I think I am doing it the right way, but the code seems to be flaky: the size of the file produced differs from run to run.
How can I make sure that the file size is the same on both ends?
I use a simple handshake protocol [trace below]:
Step1 client sends filesize=11000 buffer=5000
Step2 server sends SENDFILE
Step3 client >> buffer 1 server >> write 1 5000
Step4 client >> buffer 2 server >> write 2 5000
Step5 client >> buffer 3 server >> write 3 1000
Step6 client sends ENDOFFILE server >> session.close
// SERVER side OnOpen session below
session.addMessageHandler(new MessageHandler.Whole<String>() {
    @Override
    public void onMessage(String message) {
        System.out.println("Server >> " + message);
        if (message.contains("FILESIZE")) {
            session.getBasicRemote().sendText("SENDFILENOW");
        }
        if (message.contains("ENDOFFILE")) {
            System.out.println("Server >> FILE_SIZE=" + FILE_SIZE);
            finalFileOutputStream.close();
            session.close();
        }
    }
});
session.addMessageHandler(new MessageHandler.Whole<ByteBuffer>() {
    @Override
    public void onMessage(ByteBuffer b) {
        finalFileOutputStream.write(b.array(), 0, b.array().length);
        finalFileOutputStream.flush();
    }
});
// CLIENT OnOpen session below
session.getBasicRemote().sendText("FILESIZE=" + FILE_SIZE);
session.addMessageHandler(new MessageHandler.Whole<String>() {
    @Override
    public void onMessage(String message) {
        long M = FILE_SIZE / BUFFER_SIZE;
        long R = FILE_SIZE % BUFFER_SIZE;
        if (!message.equals("SENDFILENOW"))
            return;
        try {
            System.out.println("Starting File read ... " + path + " " + FILE_SIZE + " " + M + " " + message);
            byte[] buffer = new byte[(int) BUFFER_SIZE];
            while (M > 0) {
                fileInputStream.read(buffer);
                ByteBuffer bytebuffer = ByteBuffer.wrap(buffer);
                session.getBasicRemote().sendBinary(bytebuffer);
                M--;
            }
            buffer = new byte[(int) R];
            fileInputStream.read(buffer, 0, (int) R);
            fileInputStream.close();
            ByteBuffer bytebuffer = ByteBuffer.wrap(buffer);
            session.getBasicRemote().sendBinary(bytebuffer);
            session.getBasicRemote().sendText("FILEREADDONE");
            session.close();
            f.complete(true);
        } catch (IOException e) {
            fail("Unexpected exception " + e);
        }
    }
});
Your solution is unnecessarily built on top of several levels of abstraction just to use websockets. Do you really need that? Helidon is very well equipped to handle huge file uploads directly and much more efficiently:
public class LargeUpload {

    public static void main(String[] args) {
        ExecutorService executor = ThreadPoolSupplier.create("upload-thread-pool").get();
        WebServer server = WebServer.builder(Routing.builder()
                        .post("/streamUpload", (req, res) -> req.content()
                                .map(DataChunk::data)
                                .flatMapIterable(Arrays::asList)
                                .to(IoMulti.writeToFile(createFile(req.queryParams().first("fileName").orElse("bigFile.mkv")))
                                        .executor(executor)
                                        .build())
                                .onError(res::send)
                                .onComplete(() -> {
                                    res.status(Http.Status.ACCEPTED_202);
                                    res.send();
                                }).ignoreElement())
                        .build())
                .port(8080)
                .build()
                .start()
                .await(Duration.ofSeconds(10));

        // Server started - do upload
        // several gigs file
        Path file = Path.of("/home/kec/helidon-kafka.mkv");
        try (FileInputStream fis = new FileInputStream(file.toFile())) {
            WebClient.builder()
                    .baseUri("http://localhost:8080")
                    .build()
                    .post()
                    .path("/streamUpload")
                    .queryParam("fileName", "bigFile_" + System.currentTimeMillis() + ".mkv")
                    .contentType(MediaType.APPLICATION_OCTET_STREAM)
                    .submit(IoMulti.multiFromByteChannelBuilder(fis.getChannel())
                            .bufferCapacity(1024 * 1024 * 4)
                            .build()
                            .map(DataChunk::create))
                    .await(Duration.ofMinutes(10));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        executor.shutdown();
        server.shutdown()
                .await(Duration.ofSeconds(10));
    }

    static Path createFile(String path) {
        try {
            Path filePath = Path.of("/home/kec/tmp/" + path);
            System.out.println("Creating " + filePath);
            return Files.createFile(filePath);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
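If you do need to stay on websockets, two things in the original snippets would plausibly explain the varying file sizes (my inference from the code shown, not confirmed by the asker): fileInputStream.read(buffer) is not guaranteed to fill the buffer, and the server writes b.array().length bytes, i.e. the backing array's capacity rather than the number of bytes actually received (array() also fails outright on direct buffers). Note as well that the client sends FILEREADDONE while the server waits for ENDOFFILE, so the server-side close path never runs. A minimal sketch of the buffer fixes, reusing the session/stream names from the question:

// Server side: write only the bytes this frame actually carries.
session.addMessageHandler(new MessageHandler.Whole<ByteBuffer>() {
    @Override
    public void onMessage(ByteBuffer b) {
        try {
            byte[] chunk = new byte[b.remaining()];
            b.get(chunk); // works for both heap and direct buffers
            finalFileOutputStream.write(chunk);
            finalFileOutputStream.flush();
        } catch (IOException e) {
            throw new java.io.UncheckedIOException(e);
        }
    }
});

// Client side: honor the count returned by read() instead of assuming the
// buffer was filled; this also removes the M/R bookkeeping entirely.
byte[] buffer = new byte[(int) BUFFER_SIZE];
int n;
while ((n = fileInputStream.read(buffer)) != -1) {
    session.getBasicRemote().sendBinary(ByteBuffer.wrap(buffer, 0, n));
}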

SSE server sending events in a batch on final close

I have a Jersey server running locally; it exposes an SSE resource just like the examples here: https://jersey.github.io/documentation/latest/sse.html. I have a local webpack Angular app that binds to the exposed GET endpoint and listens for data.
On the GET, I start up a thread to send notifications at regular intervals over 6-8 seconds. I don't see anything on the client UNTIL the EventOutput object is closed.
What am I doing wrong, and how can I fix this?
The server code WORKS with just a simple curl, i.e.:
curl http://localhost:8002/api/v1/notify
But on both Chrome and Safari, the following code exhibits the batching behavior.
Client (TypeScript):
this.evSource = new EventSource('http://localhost:8002/api/v1/notify');
this.evSource.addEventListener(
    'event',
    (x => console.log('we have ', x))
);
this.evSource.onmessage = (data => console.log(data));
this.evSource.onopen = (data => console.log(data));
this.evSource.onerror = (data => {
    console.log(data);
    this.evSource.close();
});
Server (Java):
// cache callback
public void eventCallback(Iterable<CacheEntryEvent<? extends Integer, ? extends Integer>> events) {
    for (CacheEntryEvent<? extends Integer, ? extends Integer> x : events) {
        LOGGER.info("{} Sending the following value: " + x.getValue(), Thread.currentThread().getId());
        final OutboundEvent sseEvent = new OutboundEvent.Builder().name("event")
                .data(Integer.class, x.getValue()).build();
        this.broadcaster.broadcast(sseEvent);
    }
}

@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
@ApiOperation(value = "Setup SSE pipeline", notes = "Sets up the notification pipeline for clients to access")
@ApiResponses(value = {
        @ApiResponse(code = HttpURLConnection.HTTP_UNAUTHORIZED,
                message = "Missing, bad or untrusted cookie"),
        @ApiResponse(code = HttpURLConnection.HTTP_OK,
                message = "Events streamed successfully")
})
@Timed
@ResponseMetered
public EventOutput registerNotificationEvents(
        @HeaderParam(SseFeature.LAST_EVENT_ID_HEADER) String lastEventId,
        @QueryParam(SseFeature.LAST_EVENT_ID_HEADER) String lastEventIdQuery) {
    if (!Strings.isNullOrEmpty(lastEventId) || !Strings.isNullOrEmpty(lastEventIdQuery)) {
        LOGGER.info("Found Last-Event-ID header: {}", !Strings.isNullOrEmpty(lastEventId) ? lastEventId : lastEventIdQuery);
    }
    LOGGER.info("{} Received request", Thread.currentThread().getId());
    this.continuation = true;
    final EventOutput output = new EventOutput();
    broadcaster.add(output);
    Random rand = new Random();
    IntStream rndStream = IntStream.generate(() -> rand.nextInt(90));
    List<Integer> lottery = rndStream.limit(15).boxed().collect(Collectors.toList());
    IgniteCache<Integer, Integer> cache = this.ignite.cache(topic_name);
    executorService.execute(() -> {
        try {
            lottery.forEach(value -> {
                try {
                    TimeUnit.MILLISECONDS.sleep(500);
                    LOGGER.info("{} Sending the following value to Ignite: " + value + " : " + count++, Thread.currentThread().getId());
                    if (!cache.isClosed()) {
                        cache.put(1, value);
                    }
                } catch (InterruptedException ex) {
                    ex.printStackTrace();
                }
            });
            TimeUnit.MILLISECONDS.sleep(500);
            continuation = false;
            TimeUnit.MILLISECONDS.sleep(500);
            if (!output.isClosed()) {
                // THIS is where the client sees ALL the data broadcast
                // in one shot
                output.close();
            }
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    });
    LOGGER.info("{} Completing request", Thread.currentThread().getId());
    return output;
}
Looks like http://github.com/dropwizard/dropwizard/issues/1673 captures the problem: by default, the GZip filter won't flush even when upper layers ask it to. The solution is something like
((AbstractServerFactory) configuration.getServerFactory()).getGzipFilterFactory().setSyncFlush(true);
which enables synchronous flushing through GZip, if disabling GZip altogether is not an option.
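For context, a sketch of where that line would live in a Dropwizard application's run method. The application and configuration class names are illustrative, and the gzip factory accessor name has varied across Dropwizard versions, so treat this as an assumption to verify against your version:

import io.dropwizard.Application;
import io.dropwizard.server.AbstractServerFactory;
import io.dropwizard.setup.Environment;

// Hypothetical application class; NotifyConfiguration is assumed to be
// your Configuration subclass.
public class NotifyApplication extends Application<NotifyConfiguration> {

    @Override
    public void run(NotifyConfiguration configuration, Environment environment) {
        // Make the GZip filter flush synchronously so each SSE event is
        // pushed to the client immediately instead of being buffered until
        // the connection closes (dropwizard issue #1673).
        ((AbstractServerFactory) configuration.getServerFactory())
                .getGzipFilterFactory().setSyncFlush(true);
        // ... then register the SSE resource with environment.jersey()
    }
}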

Duplication with Chunks in Spring Batch

I have a huge file that I need to read and dump into a DB. Any invalid records (invalid length, duplicate keys, etc.) need to be written to an error report. Because of the file's size we tried chunk sizes (commit-interval) of 1000/5000/10000. In the process I found that data was being processed redundantly because of the chunking, so my error report is incorrect: it contains not only the actual invalid records from the input file but also duplicates introduced by the chunks.
Code snippet:
@Bean
public Step readAndWriteStudentInfo() {
    return stepBuilderFactory.get("readAndWriteStudentInfo")
            .<Student, Student>chunk(5000).reader(studentFileReader()).faultTolerant()
            .skipPolicy(skipper).listener(listener).processor(new ItemProcessor<Student, Student>() {
                @Override
                public Student process(Student student) throws Exception {
                    if (processedRecords.contains(student)) {
                        return null;
                    } else {
                        processedRecords.add(student);
                        return student;
                    }
                }
            }).writer(studentDBWriter()).build();
}
@Bean
public ItemReader<Student> studentFileReader() {
    FlatFileItemReader<Student> reader = new FlatFileItemReader<>();
    reader.setResource(new FileSystemResource(studentInfoFileName));
    reader.setLineMapper(new DefaultLineMapper<Student>() {
        {
            setLineTokenizer(new FixedLengthTokenizer() {
                {
                    setNames(classProperties50);
                    setColumns(range50);
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Student>() {
                {
                    setTargetType(Student.class);
                }
            });
        }
    });
    reader.setSaveState(false);
    reader.setLinesToSkip(1);
    reader.setRecordSeparatorPolicy(new TrailerSkipper());
    return reader;
}

@Bean
public ItemWriter<Student> studentDBWriter() {
    JdbcBatchItemWriter<Student> writer = new JdbcBatchItemWriter<>();
    writer.setSql(insertQuery);
    writer.setDataSource(dataSource);
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Student>());
    return writer;
}
I've tried various chunk sizes: 10, 100, 1000, 5000. The accuracy of my error report deteriorates as the chunk size increases. Writing to the error report happens in my implementation of the skip policy; do let me know if you need that code as well.
How do I ensure that my writer picks up a unique set of records in each chunk?
Skipper Implementation:
@Override
public boolean shouldSkip(Throwable t, int skipCount) throws SkipLimitExceededException {
    String exception = t.getClass().getSimpleName();
    if (t instanceof FileNotFoundException) {
        return false;
    }
    switch (exception) {
    case "FlatFileParseException":
        FlatFileParseException ffpe = (FlatFileParseException) t;
        String errorMessage = "Line no = " + ffpe.getLineNumber() + " " + ffpe.getMessage() + " Record is ["
                + ffpe.getInput() + "].\n";
        writeToRecon(errorMessage);
        return true;
    case "SQLException":
        SQLException sE = (SQLException) t;
        String sqlErrorMessage = sE.getErrorCode() + " Record is [" + sE.getCause() + "].\n";
        writeToRecon(sqlErrorMessage);
        return true;
    case "BatchUpdateException":
        BatchUpdateException batchUpdateException = (BatchUpdateException) t;
        String btchUpdtExceptionMsg = batchUpdateException.getMessage() + " " + batchUpdateException.getCause();
        writeToRecon(btchUpdtExceptionMsg);
        return true;
    default:
        return false;
    }
}
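A likely mechanism behind the duplicates (an inference from Spring Batch's fault-tolerant chunk model, not something stated in the question): when a skippable exception is thrown while writing, the chunk is rolled back and its items are re-processed one at a time to isolate the bad record, so the stateful de-duplicating processor and the skip policy can see the same items more than once. One mitigation is to declare the processor non-transactional so its output is cached across the rescan; a sketch against the step from the question, where deduplicatingProcessor() stands for the ItemProcessor shown above:

@Bean
public Step readAndWriteStudentInfo() {
    return stepBuilderFactory.get("readAndWriteStudentInfo")
            .<Student, Student>chunk(5000)
            .reader(studentFileReader())
            .faultTolerant()
            .skipPolicy(skipper)
            // Cache the processor's output across a chunk rescan so items
            // are not pushed through the processor (and the error report)
            // a second time after a skippable write failure.
            .processorNonTransactional()
            .listener(listener)
            .processor(deduplicatingProcessor()) // the ItemProcessor shown above
            .writer(studentDBWriter())
            .build();
}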

Using JavaMail reading from gmail issue

I am having problems reading mail from Gmail (POP3) using JavaMail. I have code that works perfectly if the mail was sent from Ubuntu's Thunderbird; however, if the mail was originally sent from a Mac it fails.
This is the code I am using:
private static final String UNKNOWN_BRAND_PATH = "UNKNOWN";
public static final String FOLDER_NAME = "INBOX";
private static Logger LOG = org.slf4j.LoggerFactory.getLogger(LzMailRecieverService.class);

@Value("${lz.mail.address}")
private String lzMailUserName;

@Value("${lz.mail.password}")
private String lzMailPassword;

@Value("${lz.mail.tmp.folder}")
private String lzMailTmpFolder;

public Store connect() throws Exception {
    String SSL_FACTORY = "javax.net.ssl.SSLSocketFactory";
    Properties pop3Props = new Properties();
    pop3Props.setProperty("mail.pop3.socketFactory.class", SSL_FACTORY);
    pop3Props.setProperty("mail.pop3.socketFactory.fallback", "false");
    pop3Props.setProperty("mail.pop3.port", "995");
    pop3Props.setProperty("mail.pop3.socketFactory.port", "995");
    URLName url = new URLName("pop3", "pop.gmail.com", 995, "", lzMailUserName, lzMailPassword);
    Session session = Session.getInstance(pop3Props, null);
    Store store = new POP3SSLStore(session, url);
    store.connect();
    return store;
}

public Folder openFolder(Store store) throws MessagingException {
    Folder folder = store.getDefaultFolder();
    folder = folder.getFolder(FOLDER_NAME);
    folder.open(Folder.READ_ONLY);
    return folder;
}
public List<MailDetails> readAttachement(Folder folder) throws IOException, MessagingException {
    Message[] messages = folder.getMessages();
    List<MailDetails> mailDetails = new ArrayList<MailDetails>();
    for (Message message : messages) {
        logMailDetails(message);
        if (message.getContent() instanceof Multipart) {
            Multipart multipart = (Multipart) message.getContent();
            for (int i = 0; i < multipart.getCount(); i++) {
                BodyPart bodyPart = multipart.getBodyPart(i);
                if (!Part.ATTACHMENT.equalsIgnoreCase(bodyPart.getDisposition())) {
                    continue; // dealing with attachments only
                }
                InputStream is = bodyPart.getInputStream();
                String uid = getUid(message);
                String to = getTo(message);
                String from = getFrom(message);
                File d = new File(lzMailTmpFolder + File.separator + uid);
                if (!d.exists()) {
                    d.mkdir();
                }
                File f = new File(d, new DateTime().getMillis() + "-" + bodyPart.getFileName());
                try (FileOutputStream fos = new FileOutputStream(f)) {
                    IOUtils.copy(is, fos); // copy the attachment and close the stream
                }
                MailDetails md = new MailDetails(to, from, f, uid);
                mailDetails.add(md);
            }
        } else {
            LOG.warn("Message content is not Multipart " + message.getContentType() + ", skipping ...");
        }
    }
    return mailDetails;
}
private String getFrom(Message message) throws MessagingException {
    Address[] froms = message.getFrom();
    return froms[0].toString();
}

private String getTo(Message message) throws MessagingException {
    Address[] tos = message.getAllRecipients();
    return tos[0].toString();
}

public void logMailDetails(Message m) throws MessagingException {
    Address[] f = m.getFrom();
    if (f != null) {
        for (int j = 0; j < f.length; j++)
            LOG.debug("FROM: " + f[j].toString());
    }
    Address[] r = m.getRecipients(Message.RecipientType.TO);
    if (r != null) {
        for (int j = 0; j < r.length; j++) {
            LOG.debug("TO: " + r[j].toString());
        }
    }
    LOG.debug("SUBJECT: " + m.getSubject());
    Date d = m.getSentDate();
    LOG.debug("SendDate: " + d);
}

private String getUid(Message m) throws MessagingException {
    try {
        Address[] tos = m.getAllRecipients();
        String to = tos[0].toString();
        to = to.split("@")[0];
        String[] parts = to.split("\\+");
        return parts[parts.length - 1];
    } catch (Exception e) {
        LOG.error("Failed to extract brand hash from email address " + Lists.newArrayList(m.getFrom()));
        return UNKNOWN_BRAND_PATH;
    }
}
The problem is that for mails originally created on a Mac, bodyPart.getDisposition() always returns null. No matter what I have tried, I could not work out which part is the attachment part (and that is what I really need: extracting the attachment from the mail).
I have looked all over the web for the reason and failed to find an answer. However, I found the note below, written by Juergen Hoeller, which indicates that there might be an issue here (more details here: http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/mail/javamail/MimeMessageHelper.html):
Warning regarding multipart mails: Simple MIME messages that just contain HTML text but no inline elements or attachments will work on more or less any email client that is capable of HTML rendering. However, inline elements and attachments are still a major compatibility issue between email clients: It's virtually impossible to get inline elements and attachments working across Microsoft Outlook, Lotus Notes and Mac Mail. Consider choosing a specific multipart mode for your needs: The javadoc on the
MULTIPART_MODE constants contains more detailed information.
Is there any example or explanation of using JavaMail when the mails are sent from a Mac?
Yosi
The "disposition" is at best a hint; it's not required to be included.
These JavaMail FAQ entries might help:
How do I tell if a message has attachments?
How do I find the main message body in a message that has attachments?
What are some of the most common mistakes people make when using JavaMail?
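In that spirit, a minimal disposition-agnostic sketch (the filename fallback is the common heuristic from the FAQ, not a guarantee of the API): treat a part as an attachment if it declares an ATTACHMENT disposition or, failing that, carries a file name.

import javax.mail.BodyPart;
import javax.mail.MessagingException;
import javax.mail.Part;

// Heuristic helper: some clients (reportedly Mac Mail) omit the
// Content-Disposition header, so fall back to the presence of a file name.
static boolean isLikelyAttachment(BodyPart bodyPart) throws MessagingException {
    String disposition = bodyPart.getDisposition();
    if (Part.ATTACHMENT.equalsIgnoreCase(disposition)) {
        return true;
    }
    return disposition == null && bodyPart.getFileName() != null;
}

In readAttachement above, the disposition check could then become: if (!isLikelyAttachment(bodyPart)) continue;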
