twilio webhook doesn't get triggered for error 21617: The maximum allowable body text length is generally 1600 characters - events

I wrote an app that lets me use WhatsApp to process text I cut/paste into it. Everything works well unless I accidentally paste text longer than 1600 characters, in which case the following error is raised:
Errors and Warnings
WARNING
21617 The maximum allowable body text length is generally 1600 characters, but some glyphs such as Emoji, Emoticons or other special characters will be counted as multiple characters.
But I can't find a way to have this event delivered to my app via a webhook, so the app never finds out there is an issue and doesn't reply back with anything...
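Two things may help here (neither is from the original thread). Twilio's Console Debugger can be pointed at a webhook URL that receives error events, which is the closest thing to having 21617 pushed to your app. Alternatively, you can avoid triggering the warning at all by chunking any long reply inside the inbound-message webhook. A minimal sketch using Flask and the Twilio Python helper library; the route name, chunking approach, and process_text() are assumptions, not the asker's code:

from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
MAX_BODY = 1600  # Twilio's documented limit; emoji can count as multiple
                 # characters, so a smaller chunk size is safer in practice

def process_text(text):
    return text.upper()  # placeholder for the app's real processing

@app.route("/whatsapp", methods=["POST"])
def whatsapp_webhook():
    body = request.form.get("Body", "")
    reply = process_text(body)
    resp = MessagingResponse()
    # split the reply so no single <Message> exceeds the limit
    for i in range(0, len(reply), MAX_BODY):
        resp.message(reply[i:i + MAX_BODY])
    return str(resp)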

Related

UIPath truncating strings at 10k (10,000) characters

We are running into an issue with UiPath that recently started. It's truncating strings, in our case a Base64-encoded image, at 10k characters. Does anyone know why this might be happening, and how we can address it?
The truncation appears to happen when loading the text variable base64Contents:
base64Contents = Convert.ToBase64String(byteArray);
As per the UiPath documentation, there is a limit of 10,000 characters for logged messages. This is because 'the default communication channel between the Robot Executor and the Robot Service has changed from WCF to IPC':
https://docs.uipath.com/activities/docs/log-message
Potential Solution
A way around this could be to write your string to a .txt file rather than output it as a log message. That way you are using a different activity, and the 10,000-character limit may not apply.
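For illustration only, the same idea sketched outside UiPath in plain Python (the file names and preview length are assumptions): write the full payload to a file, and log only a short preview.

import base64, logging

logging.basicConfig(level=logging.INFO)

with open("image.png", "rb") as f:
    base64_contents = base64.b64encode(f.read()).decode("ascii")

# the full string goes to a file, which has no logger-imposed length cap
with open("base64Contents.txt", "w") as out:
    out.write(base64_contents)

# the log line carries only the total length and a short preview
logging.info("base64Contents (%d chars): %.60s...",
             len(base64_contents), base64_contents)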

How to handle "The size of input should not exceed 64K" exception in emailcomposetask?

EmailComposeTask on WP7 is very limited, as I can't send attachments from my code. I am trying to send text as the body in the EmailComposeTask, and it throws a "The size of input should not exceed 64K" exception. Note that my text is 42.9 KB in size according to Notepad. How do I handle this exception? Is there any solution/alternative/workaround for this?
I also want to know what encoding the EmailComposeTask uses for its content, so that I could check the equivalent size of my content in that particular encoding. Please help.
This is what I did: I converted the text to Unicode and trimmed what I wanted to send down to a 63K buffer. Works perfectly for my situation. Thanks for your help guys. :)
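That trimming step, sketched in Python for illustration (the thread is WP7/C#; the 63K budget and UTF-16 encoding come from the answer above, the function name is mine):

# Cut the text so its UTF-16 encoding fits a fixed byte budget.
def trim_to_budget(text, budget=63 * 1024, encoding="utf-16-le"):
    encoded = text.encode(encoding)
    if len(encoded) <= budget:
        return text
    cut = budget - (budget % 2)   # 2 bytes per UTF-16 code unit
    # errors="ignore" drops a code unit split at the cut point
    return encoded[:cut].decode(encoding, errors="ignore")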
The problem is that you are not accounting for the email overhead: the message header, the encoding of the body, etc.
For example, if the message body is encoded in Base64, it gets at least 1/3 larger than the original non-encoded message!
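The 1/3 overhead is easy to verify; a quick check (the input size here is my own example, mirroring the asker's 42.9 KB):

import base64

raw = b"x" * 42900
encoded = base64.b64encode(raw)
print(len(raw), len(encoded))   # 42900 57200: exactly 4/3 the size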
I don't think there is an ideal solution for this, though I'd try to keep the message below 40k of text.
This is a result of MIME encoding overhead. Try to compress your text.

difference between text file and binary file

Why should we distinguish between text files and binary files when transmitting them? Why are some channels designed only for textual data? At the bottom level, they are all bits.
At the bottom level, they are all bits... true. However, some transmission channels have seven bits per byte, and other transmission channels have eight bits per byte. If you transmit ASCII text over a seven-bit channel, then all is fine. Binary data gets mangled.
Additionally, different systems use different conventions for line endings: LF and CRLF are common, but some systems use CR or NEL. A text transmission mode will convert line endings automatically, which will damage binary files.
However, this is all mostly of historical interest these days. Most transmission channels are eight-bit (such as HTTP), and most users are fine with whatever line ending they get.
Some examples of 7-bit channels: SMTP (nominally, without extensions), SMS, Telnet, some serial connections. The internet wasn't always built on TCP/IP, and it shows.
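To make "binary data gets mangled" concrete, a small simulation (mine, not from the thread) of a strict seven-bit channel that drops the high bit of every byte:

def seven_bit_channel(data):
    return bytes(b & 0x7F for b in data)

text = b"plain ASCII survives"
binary = bytes([0x89, 0x50, 0x4E, 0x47])    # start of a PNG header

print(seven_bit_channel(text) == text)      # True: ASCII fits in 7 bits
print(seven_bit_channel(binary).hex())      # '09504e47': the PNG magic is destroyed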
Additionally, the HTTP spec states that:
When in canonical form, media subtypes of the "text" type use CRLF as the text line break. HTTP relaxes this requirement and allows the transport of text media with plain CR or LF alone representing a line break when it is done consistently for an entire entity-body.
All files are saved in one of two file formats: binary or text. The two file types may look the same on the surface, but their internal structures are different.
While both binary and text files contain data stored as a series of bits (binary values of 1s and 0s), the bits in text files represent characters, while the bits in binary files represent custom data.
Distinguishing between the two is important because different OSs treat text files differently. For example, on *nix you end your lines with just \n, on MS OSs you use \r\n, and on classic Mac OS you use \r. Software such as FTP clients may change the line endings of text files to match the destination OS by adding/removing characters, to make sure the text file displays properly on the destination OS.
For example, if you create a text file in *nix with line breaks, copy it to a Windows box as a binary file, and open it in Notepad, you will not see any of the line endings, just one clump of text.
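The line-ending conversion is easy to see from Python, which exposes the same text/binary distinction through its newline parameter (the file name is arbitrary):

# Translate \n to \r\n on write, the way Windows text mode does.
with open("demo.txt", "w", newline="\r\n") as f:
    f.write("line one\nline two\n")

with open("demo.txt", "rb") as f:    # binary mode: the bytes as stored
    print(f.read())                  # b'line one\r\nline two\r\n'

with open("demo.txt", "r") as f:     # text mode translates them back
    print(repr(f.read()))            # 'line one\nline two\n'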
Important to add to the answers already provided: text files and binary files both contain bytes, but text files differ from binary files in that the bytes are understood to represent characters. The mapping of bytes to characters is done consistently over the file using a certain code page or Unicode. When using 7- or 8-bit code pages, you can spin the dial when reading these files and interpret them with an English alphabet, a German alphabet, a Russian alphabet, or others. Spinning the dial doesn't affect the bytes; it affects which characters are chosen to correspond to them.
As others have stated, there is also the issue of the encoding of line break separators which is unique to text files and which may differ from platform to platform. The "line break" is not a letter in our alphabet or a symbol you can write, so other rules apply to it.
With binary files there is no implicit convention on character encoding or on the definition of a "line".
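"Spinning the dial" is easy to demonstrate (my example): the same byte decoded under different single-byte code pages yields different characters, while the byte itself never changes:

b = bytes([0xC4])
print(b.decode("cp1252"))   # 'Ä'  (Western European)
print(b.decode("cp1251"))   # 'Д'  (Cyrillic)
print(b.decode("cp437"))    # '─'  (original IBM PC)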
All machine language files are actually binary files.
To open a binary file, the file mode has to be given as "rb" or "wb" in the fopen call. Otherwise the file is opened in the default mode, which is text mode.
It may be noted that text files can also be stored and processed as binary files, but not vice versa.
Binary files differ from text files in two ways:
The storage of newline characters
The EOF character
E.g., in fopen's mode string "wt", the "t" stands for text file; in "wb", the "b" stands for binary file.
Binary files do not store any special character at the end; the end of the file is determined using its size alone.
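Python's open() mirrors C's fopen() modes, so the same distinction can be shown there (the file name is arbitrary):

import os

with open("sample.txt", "w") as f:     # text mode, like "wt"
    f.write("hello\n")

with open("sample.txt", "r") as f:     # "rt": str, newlines translated
    print(type(f.read()))              # <class 'str'>

with open("sample.txt", "rb") as f:    # "rb": raw bytes, untranslated
    print(type(f.read()))              # <class 'bytes'>

# no EOF marker is stored; the end of the file is just its size
print(os.path.getsize("sample.txt"))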

how to send arabic sms with at-command in text mode (not pdu)

Is it possible to send Arabic SMS with AT commands in text mode (not PDU) and get a delivery report?
It depends on what the device supports. The AT interface itself is ASCII only, so if you want to do anything other than ASCII text you need a device that provides you a way to put Arabic text over that interface - effectively an encoding scheme, at which point you might as well be using PDU mode anyway.
You could put the modem in hex mode with AT+CSCS="HEX", turn on delivery reports with the AT+CNMI command, and encode your message as Unicode for the AT+CMGS command. Each character should be represented by four hexadecimal digits.
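A sketch of that recipe using pyserial (the port, baud rate, CNMI values, and the modem accepting these commands are all assumptions; the four-hex-digits-per-character encoding follows the answer above):

import serial, time

def at(port, cmd, end=b"\r"):
    port.write(cmd.encode("ascii") + end)
    time.sleep(0.5)                    # crude; real code should parse replies
    return port.read(port.in_waiting or 1)

modem = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)

at(modem, "AT+CMGF=1")            # text mode, not PDU
at(modem, 'AT+CSCS="HEX"')        # hex character set, as in the answer
at(modem, "AT+CNMI=2,1,0,1,0")    # request delivery indications; values vary by modem

body = "مرحبا".encode("utf-16-be").hex().upper()   # UCS-2, 4 hex digits per char

# note: some modems expect the number hex-encoded too when CSCS is "HEX"
at(modem, 'AT+CMGS="+15551234567"')   # real code should wait for the ">" prompt
at(modem, body, end=b"\x1a")          # message terminated by Ctrl-Z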

Console output spits out Chinese(?) characters

This is a real shot in the dark, but maybe someone has had a similar issue. Some console apps are being invoked by either SQL Server 2008 or Autosys (a job scheduler) under Windows Server 2008; the output of each run is saved into .txt files. Every so often, with no definite pattern as far as I can tell, the saved output is displayed as a series of what I presume are Chinese characters. Has anyone encountered the phenomenon above?
Typically, when you unexpectedly discover Chinese characters in output, it's because someone passed a 7-bit or 8-bit character array to an API that expected an array of Unicode characters. The system pairs up the 8-bit characters and interprets each pair as a single 16-bit Unicode character, which often lands in the CJK range. At some point later the Unicode characters are converted back to 8-bit characters, probably just before they're saved to the text file.
Note: This is an oversimplification but it should be enough to help you figure it out.
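The effect is easy to reproduce (my example): take ordinary ASCII bytes and reinterpret each byte pair as one little-endian UTF-16 character:

data = b"Expected output!"          # 16 bytes of plain ASCII
print(data.decode("utf-16-le"))     # prints a run of mostly CJK-looking characters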
