Genexus error "Check srcIndex and length, and the array's lower bounds" generating report - genexus

KB built on GeneXus 16 U9, using the .NET 4.0 generator.
The system generates a report when a client requests it via web service, passing the invoice's ID. Generally it's requested simultaneously for many different documents, but every report is written to a unique filename (to avoid locking on the filename), converted to base64, and the file is then deleted.
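For context, each request's flow is roughly equivalent to this sketch (illustrative names only, not the actual KB objects; GenerateReport, reportDir, and invoiceId are hypothetical stand-ins):

    // Illustrative per-request flow: write the report to a unique file,
    // encode it, then remove the file.
    string fileName = Path.Combine(reportDir, Guid.NewGuid().ToString("N") + ".pdf");
    GenerateReport(invoiceId, fileName);   // hypothetical report call
    string base64 = Convert.ToBase64String(File.ReadAllBytes(fileName));
    File.Delete(fileName);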
Most requests succeed, but sometimes it starts throwing the exception below for many requests in a short period of time. After recycling the IIS application pool, it stops occurring for a while.
Report procedure: rnuc006.
Source array was not long enough. Check srcIndex and length, and the array's lower bounds.
at GeneXus.Procedure.GxReportUtils.GetPrinter(Int32 outputType, String path, Stream reportOutputStream)
at GeneXus.Procedure.GXProcedure.getPrinter()
at GeneXus.Programs.rnuc006.executePrivate()
at GeneXus.Programs.rnuc006.execute(SdtSDTDadosEmissao& aP0_SDTDadosEmissao, SdtSDTDadosEnvio& aP1_SDTDadosEnvio, Int16 aP2_indiceLote, Int16 aP3_indiceRPS, String aP4_Filename)
at GeneXus.Programs.pnfs216.S121()
at GeneXus.Programs.pnfs216.executePrivate()
I'm trying to debug it, but it's difficult to find out why it suddenly starts happening.

There's a fix for this error in v16 U10; maybe you can try that version if you have this problem again.

Related

UiPath truncating strings at 10k (10,000) characters

We are running into an issue with UiPath that recently started: it's truncating strings, in our case a base64-encoded image, at 10,000 characters. Does anyone know why this might be happening, and how we can address it?
The truncation appears to be happening when loading the text variable base64Contents, which is assigned as:
base64Contents = Convert.ToBase64String(byteArray);
As per the UiPath documentation, there is a limit of 10,000 characters. This is because 'the default communication channel between the Robot Executor and the Robot Service has changed from WCF to IPC':
https://docs.uipath.com/activities/docs/log-message
Potential Solution
A way around this could be to write your string to a .txt file rather than output it in a log message. That way you are using a different activity, and the 10,000-character limit may not apply.
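For example, instead of a Log Message activity, the full string could be written to disk (a sketch of the idea, e.g. from an Invoke Code activity; the path is illustrative):

    // Write the whole base64 string to a file; file I/O is not subject to
    // the 10,000-character log-message limit.
    System.IO.File.WriteAllText(@"C:\temp\base64Contents.txt", base64Contents);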

Debugging or Finding Errors with Serialized Data

How does one check a rather large string of serialized data for errors? Are there any debugging tools?
The closest thing I've been able to get to error reporting was an error message when trying to unserialize the data using https://www.functions-online.com/unserialize.html. The error was:
WARNING: Error at offset 3445 of 94242 bytes
I'm not sure what action to take with the above message.
Thanks for any help that can be provided!
I was experiencing a similar issue, and the tool you posted really helped.
The error message is telling you where it was when it broke. The string is just one long line of characters, each one is 1 byte.
Look 3445 characters into the string to find an invalid piece.
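If counting 3,445 characters by hand is impractical, a short sketch like this (C#; the file path is illustrative) can print the text around the reported offset:

    // Show ~80 characters around the byte offset reported by the error,
    // so the broken token is visible in context.
    string data = System.IO.File.ReadAllText("serialized.txt");
    int offset = 3445;
    int start = Math.Max(0, offset - 40);
    Console.WriteLine(data.Substring(start, Math.Min(80, data.Length - start)));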
I was able to repair a large (~28,000-character) string of serialized data that had 17 errors by running it through the tool, navigating to the reported character, and fixing the data, turning something like:
s:25:"/content/new/"
into
s:13:"/content/new/"
When you run the string through the tool again, it will break again, but further along (a higher offset value in the error message).
Repeat this until you've manually repaired all the errors; then the tool should deserialize it without an error.
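If a string has too many such mismatches to fix by hand, the same repair can be scripted. A rough C# sketch, assuming the quoted values never contain the sequence "; and that declared lengths are UTF-8 byte counts (PHP declares byte lengths):

    using System.IO;
    using System.Text;
    using System.Text.RegularExpressions;

    class FixSerializedStringLengths
    {
        static void Main(string[] args)
        {
            // Rewrite every s:<len>:"<value>"; token so the declared length
            // matches the value's actual byte length.
            string data = File.ReadAllText(args[0]);
            string repaired = Regex.Replace(data, "s:\\d+:\"(.*?)\";", m =>
                "s:" + Encoding.UTF8.GetByteCount(m.Groups[1].Value) +
                ":\"" + m.Groups[1].Value + "\";");
            File.WriteAllText(args[0] + ".fixed", repaired);
        }
    }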
Hope this helps!

Sending a file in chunks always crashes at the 10th chunk

I have a strange problem with my ultra-simple method. It sends a file in 4 MB chunks to a foreign API. The thing is, the foreign API always crashes at the 10th chunk.
It's impossible to debug the API error, but it says: The specified blob or block content is invalid (the API is the Azure Storage API, but that's not important right now; the problem clearly lies on my side).
Because it crashes at the 10th element (which is the 40th megabyte), it's a pain to test, and debugging it "by hand" takes a lot of time (partly because of my slow internet connection), so I decided to share my method:
def upload_chunk
  File.open('file.mp4', 'rb') do |file_to_send|
    until file_to_send.eof?
      @content = file_to_send.read(4_194_304) # get a 4 MB chunk
      upload_to_api(@content)                 # line that produces the error
    end
  end
end
Can you see anything that could be wrong with this code? Please keep in mind that it ALWAYS crashes on the 10th chunk and works perfectly for files smaller than 40 MB.
I did a search for ruby "The specified blob or block content is invalid" and found this as the second link (first was this page):
http://cloud.dzone.com/articles/azure-blob-storage-specified
This contains:
If you’re uploading blobs by splitting blobs into blocks and you get the above mentioned error, ensure that your block ids of your blocks are of same length. If the block ids of your blocks are of different length, you’ll get this error.
So my first guess is that the call to upload_to_api is assigning IDs from 1 to 9, and when it reaches 10 the ID length increases, causing the problem.
If you don't have control over how the IDs are generated, then perhaps you can set the number of bytes read on each iteration to at least 1/9 of the total file size, so the upload never needs more than nine chunks and the IDs stay single-digit.
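If you do control the IDs, the usual fix is to zero-pad them so every encoded block ID comes out the same length. A minimal C# sketch of the idea (MakeBlockId is a hypothetical helper):

    // Zero-padding makes "000001" and "000010" the same length, so their
    // base64-encoded block IDs are also the same length, as Azure requires.
    static string MakeBlockId(int index) =>
        Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(index.ToString("D6")));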

What can lead to failures in appending data to a file?

I maintain a program that is responsible for collecting data from a data acquisition system and appending that data to a very large (size > 4GB) binary file. Before appending data, the program must validate the header of this file in order to ensure that the meta-data in the file matches that which has been collected. In order to do this, I open the file as follows:
data_file = fopen(file_name, "rb+");
I then seek to the beginning of the file in order to validate the header. When this is done, I seek to the end of the file as follows:
_fseeki64(data_file, _filelengthi64(_fileno(data_file)), SEEK_SET);
At this point, I write the data that has been collected using fwrite(). I am careful to check the return values from all I/O functions.
One of the computers (Windows 7, 64-bit) on which we have been testing this program intermittently shows a condition where the data appears to have been written to the file, yet neither the file's last-changed time nor its size changes. If any of the calls to fopen(), fseek(), or fwrite() fail, my program throws an exception, which aborts the data collection process and logs the error. On this machine, none of these failures seem to be occurring. Something that makes the matter even more mysterious is that, if a restore point is set on the host file system, the problem goes away, only to re-appear intermittently at some future time.
We have tried to reproduce this problem on other machines (a 32-bit Vista machine), but have had no success in replicating the issue (this doesn't necessarily mean anything, since the problem is so intermittent in the first place).
Has anyone else encountered anything similar to this? Is there a potential remedy?
Further Information
I have now found that the failure occurs when fflush() is called on the file, and that the Win32 error returned by GetLastError() is 665 (ERROR_FILE_SYSTEM_LIMITATION). Searching Google for this error leads to a bunch of reports related to "extents" for SQL Server files. I suspect the file system is hitting some sort of journaling or metadata resource limit because we grow a large file by repeatedly opening it, appending a chunk of data, and closing it. I am now looking for an understanding of this particular error in the hope of coming up with a valid remedy.
The file append is failing because of a file system fragmentation limit. The question was answered in What factors can lead to Win32 error 665 (file system limitation)?

Webtest for binary content, how to?

Assume a web page that returns binary content:
http://localhost/website/Default.aspx?FileId=value
and we have some files with known IDs and checksums (e.g. MD5).
How is it possible to extract the whole response and calculate its checksum via a Visual Studio web test?
There is a property on WebTest called ResponseBodyCaptureLimit. By default only the first 1.5 MB are captured (although I noticed you said you were getting 50 MB, which surprises me). Perhaps you could try cranking this number up to hold 1 GB.
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.webtesting.webtest.responsebodycapturelimit(v=VS.100).aspx
Have you tried the property
this.Context.LastResponse.BodyBytes
after you yield the WebTestRequest?
A custom validation rule may be the way to go to verify that the hash matches the content.
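A rough sketch of such a rule (names are illustrative; note that ResponseBodyCaptureLimit must be raised first, as suggested above, or BodyBytes will only contain the truncated body):

    using System;
    using System.Security.Cryptography;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Hypothetical rule: compares the MD5 of the response body against a
    // known checksum supplied per request (ExpectedMd5 is illustrative).
    public class Md5ChecksumValidationRule : ValidationRule
    {
        public string ExpectedMd5 { get; set; }

        public override void Validate(object sender, ValidationEventArgs e)
        {
            using (var md5 = MD5.Create())
            {
                byte[] hash = md5.ComputeHash(e.Response.BodyBytes);
                string actual = BitConverter.ToString(hash).Replace("-", "");
                e.IsValid = string.Equals(actual, ExpectedMd5,
                    StringComparison.OrdinalIgnoreCase);
                if (!e.IsValid)
                    e.Message = "MD5 mismatch: expected " + ExpectedMd5 + ", got " + actual;
            }
        }
    }

In a coded web test, the rule can be attached with request.ValidateResponse += new EventHandler<ValidationEventArgs>(rule.Validate) before the request is yielded.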
