How to get the number of lines written in iText 7

I'm upgrading from iText 5 to iText 7. The code below is from version 5, which has a method to get the actual number of lines written. How can I accomplish the same using iText 7?
Paragraph p = new Paragraph(veryLongText, font);
ColumnText column1 = new ColumnText(writer.DirectContent);
column1.SetSimpleColumn(bottomX, bottomY, topX, 100);
column1.AddElement(p);
column1.Go(true);
noOfLines = column1.LinesWritten; // <-- number of lines written

The layout mechanism in iText 7 is much more complex and feature-rich than the one in iText 5, and the notion of written lines can be rather subjective in many complex layout cases. That's why the number of written lines is not maintained by the layout engine and is not available for query. However, it's easy to extend your elements and renderers to calculate the number of written lines. Here is how to do it. First, override Paragraph to aggregate the number of lines, and ParagraphRenderer to report the lines written back to the Paragraph:
private static class LineCountingParagraph extends Paragraph {
    private int linesWritten = 0;

    public LineCountingParagraph(String text) {
        super(text);
    }

    public void addWrittenLines(int toAdd) {
        linesWritten += toAdd;
    }

    public int getNumberOfWrittenLines() {
        return linesWritten;
    }

    @Override
    protected IRenderer makeNewRenderer() {
        return new LineCountingParagraphRenderer(this);
    }
}
private static class LineCountingParagraphRenderer extends ParagraphRenderer {
    public LineCountingParagraphRenderer(LineCountingParagraph modelElement) {
        super(modelElement);
    }

    @Override
    public void drawChildren(DrawContext drawContext) {
        ((LineCountingParagraph) modelElement).addWrittenLines(lines.size());
        super.drawChildren(drawContext);
    }

    @Override
    public IRenderer getNextRenderer() {
        return new LineCountingParagraphRenderer((LineCountingParagraph) modelElement);
    }
}
Now just use the customized classes instead of the standard ones and query the information after the element has been added to a Document or Canvas:
LineCountingParagraph p = new LineCountingParagraph("text");
document.add(p);
System.out.println(p.getNumberOfWrittenLines());
Note that this mechanism also allows you to calculate the number of written lines that satisfy some condition: you can analyze the elements in the renderer's lines list.
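For instance, here is a minimal sketch of a drawChildren override that only counts lines narrower than a hypothetical 200-point threshold (the threshold and the shortLines counter are illustrative assumptions; the lines field and the LineRenderer bounding-box calls come from ParagraphRenderer):
@Override
public void drawChildren(DrawContext drawContext) {
    // Hypothetical condition: count only lines narrower than 200 points
    int shortLines = 0;
    for (LineRenderer line : lines) {
        if (line.getOccupiedArea().getBBox().getWidth() < 200) {
            shortLines++;
        }
    }
    ((LineCountingParagraph) modelElement).addWrittenLines(shortLines);
    super.drawChildren(drawContext);
}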

Related

SpringBatch write to different entities

I have processed a wrapper object (AimResponse in this case). Depending on the property "type", I map it to either a Document or a SourceSpace object. Then I need to persist these entities. I found an example similar to this one:
@Override
public void write(List<? extends List<AimResponse>> list) throws Exception {
    List<SourceSpace> sourceSpaces = new ArrayList<>();
    List<Document> documents = new ArrayList<>();
    for (List<AimResponse> item : list) {
        for (AimResponse i : item) {
            if (i.getType().indexOf("folder") >= 0) {
                SourceSpace sourceSpace = Mapper.aimResponseToSourceSpace(i);
                sourceSpace.setStatus(Status.FOUND.name());
                sourceSpaces.add(sourceSpace);
            } else if (i.getType().indexOf("document") >= 0) {
                Document document = Mapper.aimResponseToDocument(i);
                document.setStatus(Status.FOUND.name());
                documents.add(document);
            }
        }
    }
    if (!CollectionUtils.isEmpty(sourceSpaces)) {
        sourceSpaceWriter.write(sourceSpaces);
    }
    if (!CollectionUtils.isEmpty(documents)) {
        documentWriter.write(documents);
    }
}
In this example I'm not able to instantiate a JdbcBatchItemWriter. In any case, I think it would be better if the processor could split the items into two different lists and call two different writers, each with its own type, but I guess that's not possible.
Any help is appreciated.
ClassifierCompositeItemWriter is what you are looking for. It allows you to classify items according to a given criterion and call the corresponding writer.
In your case, you can classify items based on their type (i.getType()) and use a writer for each type. You can find an example of how to use that writer here.
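As a rough sketch of how that wiring could look (AimResponse comes from your code; the two delegate writers and the assumption that each of them accepts AimResponse and performs its own mapping are illustrative):
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ClassifierCompositeItemWriter;
import org.springframework.classify.Classifier;

public class AimResponseWriterConfig {

    // Builds a composite writer that routes each AimResponse to the matching delegate.
    public static ClassifierCompositeItemWriter<AimResponse> classifierWriter(
            final ItemWriter<AimResponse> folderWriter,
            final ItemWriter<AimResponse> documentWriter) {
        ClassifierCompositeItemWriter<AimResponse> writer = new ClassifierCompositeItemWriter<AimResponse>();
        writer.setClassifier(new Classifier<AimResponse, ItemWriter<? super AimResponse>>() {
            @Override
            public ItemWriter<? super AimResponse> classify(AimResponse item) {
                // Same routing rule as in the original write() method
                return item.getType().indexOf("folder") >= 0 ? folderWriter : documentWriter;
            }
        });
        return writer;
    }
}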

An algorithm to track the status of a number

To design an API:
get() returns a random number; the number must never be a duplicate, i.e. it is always unique.
put(randomValue) puts back a number previously generated by get(); once a number is put back, get() can reuse it as output.
It has to be efficient and should not use too many resources.
Is there any way to implement this algorithm? Using a HashMap is not recommended, because if this API has to serve billions of requests, storing all the generated random numbers would use too much space.
I could not work out this algorithm, please give me a clue. Thanks in advance!
I cannot think of any solution without extra space. With space, one option could be to use a TreeMap that records, for each number, whether it is currently handed out: when an element is returned by get(), mark it as true; for put(), change the value back to false.
Code snippet below...
public class RandomNumber {
    public static final int SIZE = 100000;
    public static Random rand;
    public static TreeMap<Integer, Boolean> treeMap;

    public RandomNumber() {
        rand = new Random();
        treeMap = new TreeMap<>();
    }

    static public int getRandom() {
        while (true) {
            int random = rand.nextInt(SIZE);
            Boolean used = treeMap.get(random);
            // Absent or false means the number is available
            if (used == null || !used) {
                treeMap.put(random, true);
                return random;
            }
        }
    }

    static public void putRandom(int number) {
        treeMap.put(number, false);
    }
}
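A quick usage sketch of the class above (variable names are illustrative):
RandomNumber generator = new RandomNumber();
int a = RandomNumber.getRandom();   // unique while it is out
int b = RandomNumber.getRandom();   // never equal to a while a is out
RandomNumber.putRandom(a);          // a may now be handed out by getRandom() again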

iText 7 pdfHTML: keep table rows together

I am new to iText 7. I am developing an SPA project (ASP.NET, C#, and AngularJS) where I need to implement a report for an existing HTML page. I found that iText 7 (.NET) has an easy way to do it. The code below returns a byte array that I can easily show in the browser as a PDF and also offer for download.
var memStream = new MemoryStream();
ConverterProperties converterProperties = new ConverterProperties();
converterProperties.SetFontProvider(fontProvider);
converterProperties.SetBaseUri(System.AppDomain.CurrentDomain.BaseDirectory);
HtmlConverter.ConvertToPdf(htmlText, memStream, converterProperties);
My raw HTML contains some tables (each with a number of rows), and I want to keep each table on one page (i.e. if a table's rows don't fit on the current page, the table should start on the next page). I found a solution like the one below:
Paragraph p = new Paragraph("Test");
PdfPTable table = new PdfPTable(2);
for (int i = 1; i < 6; i++) {
    table.addCell("key " + i);
    table.addCell("value " + i);
}
for (int i = 0; i < 40; i++) {
    document.add(p);
}
// Try to keep the table on 1 page
table.setKeepTogether(true);
document.add(table);
But in my case I cannot implement it that way, because the content already exists in HTML tables (in my existing HTML page).
Thanks in advance if anyone can help me.
This can easily be done using a custom TagWorkerFactory and TableTagWorker class.
Take a look at the code samples below.
The first thing we should do is create a custom TableTagWorker that tells iText to keep the table together. We do this using the code you've mentioned: table.setKeepTogether(true).
class CustomTableTagWorker extends TableTagWorker {
    public CustomTableTagWorker(IElementNode element, ProcessorContext context) {
        super(element, context);
    }

    @Override
    public void processEnd(IElementNode element, ProcessorContext context) {
        super.processEnd(element, context);
        ((com.itextpdf.layout.element.Table) getElementResult()).setKeepTogether(true);
    }
}
As you can see, the only thing we changed in our custom TableTagWorker is that it keeps the table together.
The next step would be to create a custom TagWorkerFactory that maps our CustomTableTagWorker to the table tag in HTML. We do this like so:
class CustomTagWorkerFactory extends DefaultTagWorkerFactory {
    @Override
    public ITagWorker getCustomTagWorker(IElementNode tag, ProcessorContext context) {
        if (tag.name().equalsIgnoreCase("table")) {
            return new CustomTableTagWorker(tag, context); // implements ITagWorker
        }
        return super.getCustomTagWorker(tag, context);
    }
}
All we do here is tell iText that whenever it finds a table tag, it should pass the element to the CustomTableTagWorker to be converted into a PDF object (with setKeepTogether set to true).
The last step is registering this CustomTagWorkerFactory on our ConverterProperties.
ConverterProperties converterProperties = new ConverterProperties();
converterProperties.setTagWorkerFactory(new CustomTagWorkerFactory());
HtmlConverter.convertToPdf(HTML, new FileOutputStream(DEST), converterProperties);
Using these code samples I was able to generate an output PDF in which a table that is small enough to fit on a single page is never split across multiple pages.
I had a similar issue trying to keep together the content within a div. I applied the following CSS property and it kept everything together. This worked with iText 7 pdfHTML.
page-break-inside: avoid;

Apache Mahout - Read preference value from String

I'm in a situation where I have a dataset that consists of the classical UserID, ItemID and preference values; however, they are all strings.
I have managed to read the UserID and ItemID strings by overriding the readItemIDFromString() and readUserIDFromString() methods in the FileDataModel class (which is part of the Mahout library). However, there doesn't seem to be any support for the conversion of preference values, if I am not mistaken.
If anyone has some input on a possible approach to this problem, I would greatly appreciate it.
To illustrate what I mean, here is an example of my UserID string "conversion":
@Override
protected long readUserIDFromString(String value) {
    if (memIdMigtr == null) {
        memIdMigtr = new ItemMemIDMigrator();
    }
    long retValue = memIdMigtr.toLongID(value);
    if (null == memIdMigtr.toStringID(retValue)) {
        try {
            memIdMigtr.singleInit(value);
        } catch (TasteException e) {
            e.printStackTrace();
        }
    }
    return retValue;
}

String getUserIDAsString(long userId) {
    return memIdMigtr.toStringID(userId);
}
And the implementation of the AbstractIDMigrator:
public class ItemMemIDMigrator extends AbstractIDMigrator {
    private FastByIDMap<String> longToString;

    public ItemMemIDMigrator() {
        this.longToString = new FastByIDMap<String>(10000);
    }

    public void storeMapping(long longID, String stringID) {
        longToString.put(longID, stringID);
    }

    public void singleInit(String stringID) throws TasteException {
        storeMapping(toLongID(stringID), stringID);
    }

    public String toStringID(long longID) {
        return longToString.get(longID);
    }
}
Mahout is deprecating the old Hadoop-based recommenders. We have a much more modern offering based on a new algorithm called Correlated Cross-Occurrence (CCO). It is built on Spark for 10x greater speed and gives real-time query results when combined with a query server.
This method ingests strings for user-id and item-id and produces results with the same ids, so you don't need to manage those anymore. You really should have a look at the new system; it is not clear how long the old one will be supported.
Mahout docs here: http://mahout.apache.org/users/algorithms/recommender-overview.html and here: http://mahout.apache.org/users/recommender/intro-cooccurrence-spark.html
The entire system described, with SDK, input storage, model training and real-time queries, is part of the Apache PredictionIO project; docs for PredictionIO and the "Universal Recommender" are here: http://predictionio.incubator.apache.org/ and here: http://actionml.com/docs/ur

Comparing ENUMs in Java 6

I am trying to find the best way to compare enums in Java 6.
Say I have an enum of ticket types that can be associated with a Traveler. Given a list of travelers, I would like to find the traveler with the highest class of travel.
I can iterate through the list of travelers, create a Set of unique TicketTypes, convert it to a List, sort it, and return the last element as the highest. I would like to know if there is a better way to do this.
public enum TicketType {
    ECONOMY_NON_REF(1, "Economy Class, Non-Refundable"),
    ECONOMY_REF(2, "Economy Full Fare Refundable"),
    BUSINESS(3, "Business Class"),
    FIRST_CLASS(4, "First Class, Top of the world");

    private String code;
    private String description;
}

public class Traveler {
    private TicketType ticketType;

    public Traveler(TicketType ticketType) {
        this.ticketType = ticketType;
    }
}
@Test
public void testCompareEnums() {
    List<Traveler> travelersGroup1 = new ArrayList<Travelers>();
    travelersGroup1.add(new Traveler(TicketType.ECONOMY_REF));
    travelersGroup1.add(new Traveler(TicketType.BUSINESS));
    travelersGroup1.add(new Traveler(TicketType.ECONOMY_REF));
    travelersGroup1.add(new Traveler(TicketType.ECONOMY_NON_REF));
    // What is the best way to find the highest class passenger in travelersGroup1?
    assertEquals(TicketType.BUSINESS, getHighestClassTravler(travelersGroup1));

    List<Traveler> travelersGroup2 = new ArrayList<Travelers>();
    travelersGroup2.add(new Traveler(TicketType.ECONOMY_REF));
    travelersGroup2.add(new Traveler(TicketType.ECONOMY_NON_REF));
    travelersGroup2.add(new Traveler(TicketType.ECONOMY_REF));
    travelersGroup2.add(new Traveler(TicketType.ECONOMY_NON_REF));
    assertEquals(TicketType.ECONOMY_REF, getHighestClassTravler(travelersGroup2));
}

private CredentialType getHighestClassTraveler(List travelers) {
    Set uniqueTicketTypeSet = new HashSet();
    for (Traveler t : travelers) {
        uniqueTicketTypeSet.add(t.getTicketType());
    }
    List<TicketType> uniqueTicketTypes = new ArrayList<TicketType>(uniqueTicketTypeSet);
    Collections.sort(uniqueTicketTypes);
    return uniqueTicketTypes.get(uniqueTicketTypes.size() - 1);
}
There are a lot of problems with the code you posted (it won't compile without fixing a number of errors), but the easiest way is to make Traveler implement the Comparable interface, like so:
public int compareTo(Traveler other) {
    return this.getTicketType().compareTo(other.getTicketType());
}
Then, to find the Traveler with the highest TicketType, you can simply do:
Collections.max(travelers);
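A minimal self-contained sketch of that approach (the getTicketType() getter is assumed, and the enum is trimmed to bare constants for brevity; enum constants compare by declaration order, lowest class first):
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class HighestClassDemo {

    enum TicketType { ECONOMY_NON_REF, ECONOMY_REF, BUSINESS, FIRST_CLASS }

    static class Traveler implements Comparable<Traveler> {
        private final TicketType ticketType;

        Traveler(TicketType ticketType) {
            this.ticketType = ticketType;
        }

        TicketType getTicketType() {
            return ticketType;
        }

        @Override
        public int compareTo(Traveler other) {
            // Enum constants compare by declaration order, so FIRST_CLASS is highest
            return this.ticketType.compareTo(other.ticketType);
        }
    }

    public static void main(String[] args) {
        List<Traveler> travelers = new ArrayList<Traveler>();
        travelers.add(new Traveler(TicketType.ECONOMY_REF));
        travelers.add(new Traveler(TicketType.BUSINESS));
        travelers.add(new Traveler(TicketType.ECONOMY_NON_REF));

        // Prints BUSINESS, the highest class in the list
        System.out.println(Collections.max(travelers).getTicketType());
    }
}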
