Issue scaling the first page of a PDF using iText7 for .NET

I'm trying to scale the first page of a PDF using iText7 for .NET. The rest of the pages should remain untouched.
The method below works if the PDF contains one page, but if there are multiple pages, the first (supposedly scaled) page comes out blank, while the remaining pages are added correctly.
What am I missing here?
public byte[] ScaleFirstPagePdf(byte[] pdf)
{
    using (var inputStream = new MemoryStream(pdf))
    using (var outputStream = new MemoryStream(pdf))
    using (var srcPdf = new PdfDocument(new PdfReader(inputStream)))
    using (var destPdf = new PdfDocument(new PdfWriter(outputStream)))
    {
        for (int pageNum = 1; pageNum <= srcPdf.GetNumberOfPages(); pageNum++)
        {
            var srcPage = srcPdf.GetPage(pageNum);
            var srcPageSize = srcPage.GetPageSizeWithRotation();
            if (pageNum == 1)
            {
                var destPage = destPdf.AddNewPage(new PageSize(srcPageSize));
                var canvas = new PdfCanvas(destPage);
                var transformMatrix = AffineTransform.GetScaleInstance(0.5f, 0.5f);
                canvas.ConcatMatrix(transformMatrix);
                var pageCopy = srcPage.CopyAsFormXObject(destPdf);
                canvas.AddXObject(pageCopy, 0, 0);
            }
            else
            {
                destPdf.AddPage(srcPage.CopyTo(destPdf));
            }
        }
        destPdf.Close();
        srcPdf.Close();
        return outputStream.ToArray();
    }
}

I couldn't reproduce the blank page issue with this code, but files generated this way can definitely be problematic.
The issue is that you are sharing one byte buffer between two memory streams: one used for reading and the other for writing, simultaneously.
Simply using another buffer, or relying on the default MemoryStream constructor, solved the issue for me and should do so for you as well; apart from that, there doesn't seem to be anything suspicious about your code.
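To illustrate what that sharing means (a minimal sketch, not part of the original code): both streams wrap the same array, so writes through one stream overwrite the bytes the other stream is still reading, and a MemoryStream constructed over an existing buffer cannot grow beyond that buffer's length.

var data = new byte[] { 1, 2, 3, 4 };
using (var readStream = new MemoryStream(data))
using (var writeStream = new MemoryStream(data)) // same underlying buffer as readStream
{
    writeStream.WriteByte(9);                    // overwrites data[0]
    Console.WriteLine(readStream.ReadByte());    // prints 9, not 1
    // writeStream also cannot grow past data.Length; writing more throws NotSupportedException
}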
Here is how you should create the output stream:
using (var inputStream = new MemoryStream(pdf))
using (var outputStream = new MemoryStream())
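For completeness, here is the method from the question with only that one change applied; everything else is left as it was:

public byte[] ScaleFirstPagePdf(byte[] pdf)
{
    using (var inputStream = new MemoryStream(pdf))
    using (var outputStream = new MemoryStream())   // separate, growable buffer for the output
    using (var srcPdf = new PdfDocument(new PdfReader(inputStream)))
    using (var destPdf = new PdfDocument(new PdfWriter(outputStream)))
    {
        for (int pageNum = 1; pageNum <= srcPdf.GetNumberOfPages(); pageNum++)
        {
            var srcPage = srcPdf.GetPage(pageNum);
            var srcPageSize = srcPage.GetPageSizeWithRotation();
            if (pageNum == 1)
            {
                // scale the first page to 50% by drawing it as a form XObject
                var destPage = destPdf.AddNewPage(new PageSize(srcPageSize));
                var canvas = new PdfCanvas(destPage);
                var transformMatrix = AffineTransform.GetScaleInstance(0.5f, 0.5f);
                canvas.ConcatMatrix(transformMatrix);
                var pageCopy = srcPage.CopyAsFormXObject(destPdf);
                canvas.AddXObject(pageCopy, 0, 0);
            }
            else
            {
                destPdf.AddPage(srcPage.CopyTo(destPdf));
            }
        }
        destPdf.Close();
        srcPdf.Close();
        return outputStream.ToArray();
    }
}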
If you still experience issues even after this tweak then the problem is definitely file-specific and I doubt you could get any help without sharing the file.

Related

iText PDF using .getSplitRenderer for Table renderer

In iText PDF 7, I am using the .layout method of the Table renderer to determine whether a table will break across a page.
However, when I add the .getSplitRenderer result (returned from the layout result object) as a child of the Document's renderer, I get this error: "java.lang.IndexOutOfBoundsException".
I'm using iText PDF version 7.1.7 in its Java incarnation. The last three entries in the stacktrace are:
java.util.ArrayList$SubList.rangeCheck(ArrayList.java:1225)
java.util.ArrayList$SubList.get(ArrayList.java:1042)
com.itextpdf.layout.renderer.TableBorders.processAllBordersAndEmptyRows(TableBorders.java:139)
Here is a bare-bones version of the code that triggers the error:
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
PdfWriter pdfWriter = new PdfWriter(outputStream);
PdfDocument pdfDoc = new PdfDocument(pdfWriter);
PageSize pageSize = new PageSize(612, 792);
Document doc = new Document(pdfDoc, pageSize);
Table table = new Table(new float[] {50, 50, 50});
for (int i = 0; i < 1000; i++) {
    for (int j = 0; j < 3; j++) {
        Cell cell = new Cell();
        cell.setHeight(100);
        table.addCell(cell);
    }
}
LayoutContext context = new LayoutContext(doc.getRenderer().getCurrentArea().clone());
TableRenderer tableRenderer = (TableRenderer) table.createRendererSubTree();
LayoutResult result = tableRenderer.setParent(doc.getRenderer()).layout(context);
if (result.getStatus() == result.PARTIAL) {
    tableRenderer = (TableRenderer) result.getSplitRenderer();
    doc.getRenderer().addChild(tableRenderer); // this is where the error occurs
}
When you add a child to the DocumentRenderer it will lay out and draw it automatically. It is not possible to lay out a renderer several times in most cases (although the exception type and message could be improved here).
If you want to draw the part that fits immediately, you can use the following line:
tableRenderer.draw(new DrawContext(pdfDoc, new PdfCanvas(pdfDoc.getPage(doc.getRenderer().getCurrentArea().getPageNumber()))));
The complete if block:
if (result.getStatus() == LayoutResult.PARTIAL) {
    tableRenderer = (TableRenderer) result.getSplitRenderer();
    tableRenderer.draw(new DrawContext(pdfDoc, new PdfCanvas(pdfDoc.getPage(doc.getRenderer().getCurrentArea().getPageNumber()))));
}
This approach might have some drawbacks in complex cases though, so if you are dealing with complex layouts or tagged documents I would recommend using binary search to determine the amount of content that still fits and adding that content to the Document instance as regular elements.
An approach in between those two is to add the table completely and then remove the extra pages from the PdfDocument. In that case, keep in mind that you will have to recreate the DocumentRenderer, because it does not keep track of low-level events like page removal from the PdfDocument.

How to add multiple Textfields in single or multiple pages in a Loop

I am using iText 5 (via Maven) and I want to add multiple text fields across multiple PDF pages, e.g. page 1 needs 3 fields, page 2 needs 4 fields, etc.
I have written the code below:
public byte[] setupDocument(EditPdfDTO editPdfDTOList, MultipartFile attachment)
{
    WritePDF obj = new WritePDF();
    Document document = null;
    PdfWriter writer = null;
    PdfImportedPage page = null;
    PdfReader reader = null;
    try
    {
        // Create output PDF
        document = new Document(PageSize.A4);
        document.setMargins(0, 0, 0, 0);
        writer = PdfWriter.getInstance(document,
            new FileOutputStream("D:/test.pdf"));
        document.open();
        PdfContentByte cb = writer.getDirectContent();
        // Load existing PDF
        reader = new PdfReader(attachment.getBytes());
        int totalPages = reader.getNumberOfPages();
        for (int i = 0; i < totalPages; i++)
        {
            page = writer.getImportedPage(reader, i + 1);
            document.newPage();
            cb.addTemplate(page, 0, 0);
            for (int j = 0; j < editPdfDTOList.getPdf().size(); j++)
            {
                if (i + 1 == editPdfDTOList.getPdf().get(j).getPageNo())
                {
                    BaseFont baseFont = null;
                    try
                    {
                        baseFont = BaseFont.createFont();
                    }
                    catch (DocumentException | IOException e1)
                    {
                        e1.printStackTrace();
                    }
                    int a, b;
                    a = editPdfDTOList.getPdf().get(j).getxCoordinate();
                    b = editPdfDTOList.getPdf().get(j).getyCoordinate();
                    String str = editPdfDTOList.getPdf().get(j).getTextContent();
                    Rectangle linkLocation =
                        new Rectangle(a, b + baseFont.getDescentPoint(str, 10),
                            a + 10 + baseFont.getWidthPoint(str, 10),
                            b + baseFont.getAscentPoint(str, 10) + 10);
                    TextField field =
                        new TextField(writer, linkLocation, "user1" + j + UUID.randomUUID());
                    field.setFontSize(10);
                    field.setOptions(TextField.MULTILINE | TextField.READ_ONLY);
                    field.setTextColor(BaseColor.RED);
                    field.setText(str);
                    field.setBorderWidth(1);
                    cb = writer.getDirectContent();
                    try
                    {
                        cb.addAnnotation(field.getTextField(), false);
                    }
                    catch (IOException | DocumentException e)
                    {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
    catch (DocumentException | IOException e)
    {
        e.printStackTrace();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    finally
    {
        document.close();
    }
    return null;
}
This code is able to add only one text field on every expected page, but not 2 or more text fields on a single page.
There is no issue with the multiple try-catch blocks.
The appropriate classes to use
First off, you say you "want to add multiple textfields in multiple pdf pages". When implementing tasks like this, i.e. tasks that take a single document and manipulate it while keeping it structurally more or less as before, one should usually work with a PdfReader/PdfStamper couple. This allows you to concentrate on the manipulation and provides a copy of the original PDF, with all its properties, to work on.
Adding multiple fields to a page of an existing PDF
Adding multiple fields to a single existing page is trivial, e.g.:
PdfReader pdfReader = new PdfReader(resource);
PdfStamper pdfStamper = new PdfStamper(pdfReader, output);

TextField field1 = new TextField(pdfStamper.getWriter(),
    new Rectangle(100, 800, 200, 820), "Field1");
field1.setBorderColor(BaseColor.CYAN);
field1.setBorderStyle(PdfBorderDictionary.STYLE_DASHED);
field1.setBorderWidth(BaseField.BORDER_WIDTH_MEDIUM);
field1.setText("Field 1");
pdfStamper.addAnnotation(field1.getTextField(), 1);

TextField field2 = new TextField(pdfStamper.getWriter(),
    new Rectangle(300, 800, 400, 820), "Field2");
field2.setBorderColor(BaseColor.RED);
field2.setBorderStyle(PdfBorderDictionary.STYLE_INSET);
field2.setBorderWidth(BaseField.BORDER_WIDTH_THIN);
field2.setText("Field 2");
pdfStamper.addAnnotation(field2.getTextField(), 1);

pdfStamper.close();
(AddField test testAddMultipleFields)
Applied to my example document, the code generates a first page containing the two new text fields.
Thus, there is no conceptual problem with adding multiple text fields to the same document page; it works in a very natural manner.
In your case I would switch to using a PdfReader/PdfStamper couple. If some issues still remain, I would inspect your data. Probably they simply contain only a single field dataset per page. Or two text fields have the same coordinates and therefore look like one. Or some text fields have off-screen coordinates. Or... Or... Or...
The original answer
Originally the code in the question looked different. This original answer focused on issues with that code.
You claim your code
is able to add only one text field on every expected page, but not 2 or more text fields on a single page
I doubt that because
you have two distinct objects writing to the same file "D:/TemplateFilePDf/" + attachment.getOriginalFilename() concurrently, the PdfWriter writer and the PdfStamper stamper. If you get something sensible as a result of your code, it is only by pure luck; and
additionally, stamper is instantiated with a null instance of PdfReader. This will actually cause a NullPointerException in the constructor, which keeps your text-field-adding code from being executed at all.
Thus, either the code you shared is considerably different from the code you run, or your test runs actually all throw that NullPointerException and you are probably looking at the outputs of a former, less broken version of your code which happened to add only a single text field.
After fixing those two issues, some questions still remain (e.g. what is the intention of that cb.fill()? That instruction is only allowed directly after a path definition, the path whose inner area to fill, but I don't see you defining any path).
Furthermore, you access your editPdfDTOList for a lot of relevant values, but we don't know those values. Thus, we cannot run your code to try and reproduce the issue. Probably you create only a single text field because that object contains values for only a single text field...

Audio Timeout Error: .NET Core Google Speech to Text Code Causing Timeout

Problem Description
I am a .NET Core developer and I have recently been asked to transcribe mp3 audio files that are approximately 20 minutes long into text; the file is about 30.5 MB. The issue is that speech is sparse in this file, with anywhere from 2 to 4 minutes of silence between spoken sentences.
I've written a small service based on the Google Speech documentation that sends 32 KB of streaming data from the file at a time to be processed. All was progressing well until I hit the error I share below:
I have searched via google-fu, Google forums, and other sources, and I have not encountered documentation on this error. Suffice it to say, I think this is due to the sparsity of spoken words in my file. I am wondering if there is a programmatic workaround?
Code
I have used some code that is a slight modification of the Google .NET sample for 32 KB streaming. You can find it here.
public async void Run()
{
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    // Write the initial request with the config.
    await streamingCall.WriteAsync(
        new StreamingRecognizeRequest()
        {
            StreamingConfig = new StreamingRecognitionConfig()
            {
                Config = new RecognitionConfig()
                {
                    Encoding =
                        RecognitionConfig.Types.AudioEncoding.Flac,
                    SampleRateHertz = 22050,
                    LanguageCode = "en",
                },
                InterimResults = true,
            }
        });
    // Helper Function: Print responses as they arrive.
    Task printResponses = Task.Run(async () =>
    {
        while (await streamingCall.ResponseStream.MoveNext(
            default(CancellationToken)))
        {
            foreach (var result in streamingCall.ResponseStream.Current.Results)
            {
                //foreach (var alternative in result.Alternatives)
                //{
                //    Console.WriteLine(alternative.Transcript);
                //}
                if (result.IsFinal)
                {
                    Console.WriteLine(result.Alternatives.ToString());
                }
            }
        }
    });
    string filePath = "mono_1.flac";
    using (FileStream fileStream = new FileStream(filePath, FileMode.Open))
    {
        //var buffer = new byte[32 * 1024];
        var buffer = new byte[64 * 1024]; // Trying 64kb buffer
        int bytesRead;
        while ((bytesRead = await fileStream.ReadAsync(
            buffer, 0, buffer.Length)) > 0)
        {
            await streamingCall.WriteAsync(
                new StreamingRecognizeRequest()
                {
                    AudioContent = Google.Protobuf.ByteString
                        .CopyFrom(buffer, 0, bytesRead),
                });
            await Task.Delay(500);
        }
    }
    await streamingCall.WriteCompleteAsync();
    await printResponses;
} // End of Run
Attempts
I've increased the stream to 64 KB of streaming data to be processed and then I received the following error:
Which, I believe, means the actual API timed out, which is decidedly a step in the wrong direction. Has anybody encountered a problem such as mine with the Google Speech API when dealing with an audio file with sparse speech? Is there a method by which I can programmatically filter the audio down to only the spoken words and then process that? I'm open to suggestions, but my research and attempts have only led me to further breaking my code.
There are two ways to recognize audio in the Google Speech API:
normal recognize
long running recognize
Your sample uses the normal recognize, which has a limit of 15 minutes.
Try the long running recognize method instead:
{
    var speech = SpeechClient.Create();
    var longOperation = speech.LongRunningRecognize(new RecognitionConfig()
    {
        // adjust these to match your audio (the question's file is FLAC, 22050 Hz, "en")
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "hu",
    }, RecognitionAudio.FromFile(filePath));
    longOperation = longOperation.PollUntilCompleted();
    var response = longOperation.Result;
    foreach (var result in response.Results)
    {
        foreach (var alternative in result.Alternatives)
        {
            Console.WriteLine(alternative.Transcript);
        }
    }
    return 0;
}
I hope this helps.
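Note that, according to Google's documentation, audio longer than roughly a minute generally has to be uploaded to Google Cloud Storage and referenced by a gs:// URI rather than read from a local file, so for the 20-minute file from the question the call would look roughly like this (a sketch; the bucket and object name are hypothetical, and the config is taken from the question's FLAC file):

var speech = SpeechClient.Create();
var operation = speech.LongRunningRecognize(
    new RecognitionConfig()
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Flac,
        SampleRateHertz = 22050,
        LanguageCode = "en",
    },
    // hypothetical location; upload mono_1.flac to a Cloud Storage bucket first
    RecognitionAudio.FromStorageUri("gs://your-bucket/mono_1.flac"));
operation = operation.PollUntilCompleted();
foreach (var result in operation.Result.Results)
{
    foreach (var alternative in result.Alternatives)
    {
        Console.WriteLine(alternative.Transcript);
    }
}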

Nokia Imaging SDK: customize BlendFilter

I have created this code:
Uri _blendImageUri = new Uri(@"Assets/1.png", UriKind.Relative);
var _blendImageProvider = new StreamImageSource((System.Windows.Application.GetResourceStream(_blendImageUri).Stream));
var bf = new BlendFilter(_blendImageProvider);
The filter works nicely, but I want to change the image size for the ForegroundSource property. How can I load an image with my own size?
If I understood you correctly, you are trying to blend the ForegroundSource with only a part of the original image? That is called local blending, and it is currently not supported on the BlendFilter itself.
You can however use a ReframingFilter to reframe the ForegroundSource and then blend it. Your chain will look something like this:
using (var mainImage = new StreamImageSource(...))
using (var filterEffect = new FilterEffect(mainImage))
{
    using (var secondaryImage = new StreamImageSource(...))
    using (var secondaryFilterEffect = new FilterEffect(secondaryImage))
    using (var reframing = new ReframingFilter(new Rect(0, 0, 500, 500), 0)) // reframe your image, thus "setting" the location and size of the content when blending
    {
        secondaryFilterEffect.Filters = new[] { reframing };
        using (var blendFilter = new BlendFilter(secondaryFilterEffect))
        using (var renderer = new JpegRenderer(filterEffect))
        {
            filterEffect.Filters = new[] { blendFilter };
            await renderer.RenderAsync();
        }
    }
}
As you can see, you can use the reframing filter to position the content of your ForegroundSource so that it will only blend locally. Note that when reframing you can set the borders outside of the image location (for example new Rect(-100, -100, 500, 500)), and the areas outside of the image will appear as transparent black areas, which is exactly what you need for BlendFilter.

Adding array of images to Firebase using AngularFire

I'm trying to allow users to upload images and then store the images, base64 encoded, in Firebase. I'm trying to structure my Firebase data as follows:
|--Feed
|----Feed Item
|------username
|------epic
|---------name,etc.
|------images
|---------image1, image 2, etc.
However, I can't get the remote data in Firebase to mirror the local data in the client. When I print the array of images to the console in the client, it shows that the uploaded images have been added to the array of images... but these images never make it to Firebase. I've tried doing this multiple ways to no avail: implicit syncing, explicit syncing, and a mixture of both. I can't for the life of me figure out why this isn't working, and I'm getting pretty frustrated. Here's my code:
$scope.complete = function(epicName) {
    for (var i = 0; i < $scope.activeEpics.length; i++) {
        if ($scope.activeEpics[i].id === epicName) {
            var epicToAdd = $scope.activeEpics[i];
        }
    }
    var epicToAddToFeed = {epic: epicToAdd, username: $scope.currUser.username, userImage: $scope.currUser.image, props: 0, images: ['empty']};
    // connect to feed data
    var feedUrl = "https://myfirebaseurl.com/feed";
    $scope.feed = angularFireCollection(new Firebase(feedUrl));
    // add epic
    var added = $scope.feed.add(epicToAddToFeed).name();
    // connect to added epic in firebase
    var addedUrl = "https://myfirebaseurl.com/feed/" + added;
    var addedRef = new Firebase(addedUrl);
    angularFire(addedRef, $scope, 'added').then(function() {
        // for each image uploaded, add image to the added epic's array of images
        for (var i = 0, f; f = $scope.files[i]; i++) {
            var reader = new FileReader();
            reader.onload = (function(theFile) {
                return function(e) {
                    var filePayload = e.target.result;
                    $scope.added.images.push(filePayload);
                };
            })(f);
            reader.readAsDataURL(f);
        }
    });
}
EDIT: Figured it out, had to connect to "https://myfirebaseurl.com/feed/" + added + "/images"
