Should the user data when launching an EC2 instance always be a String? Can't it be a byte array?
Java API: I use ec2Client.runInstance(TEST_IMAGE_ID, instanceType, "USER_DATA");
According to Amazon:
The user data must be base64 encoded before being submitted to the API. The API command line tools perform the base64 encoding for you. The data is in base64 and is decoded before being presented to the instance.
You need to find out whether your Java API performs this base64 encoding for you or whether you have to do it yourself. [See Matt Solnit's comment below.]
In any case, be careful that you do not exceed the limit of 16KB for user-data.
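If it turns out you do have to encode it yourself, here is a minimal sketch using java.util.Base64 from the standard library; the commented-out client call just mirrors the call in the question and is an assumption, not a confirmed API:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UserDataExample {
    public static void main(String[] args) {
        // Anything the instance should receive as user data, e.g. a startup script.
        String userData = "#!/bin/bash\necho hello";

        // Base64-encode the raw bytes; the instance sees the decoded form.
        String encoded = Base64.getEncoder()
                .encodeToString(userData.getBytes(StandardCharsets.UTF_8));

        // Hypothetical call mirroring the question; check whether your client
        // library already encodes for you, or you will end up double-encoding.
        // ec2Client.runInstance(TEST_IMAGE_ID, instanceType, encoded);
        System.out.println(encoded);
    }
}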
I have data in SQLite that is text and includes a curly apostrophe (U+2019, ’), typed in words such as work’s. A browser requests the data over a web socket connection configured as chan configure $sock -encoding iso8859-1 -translation crlf -buffering full, simply because that is what I set it to before sending the headers to the browser in response to the request to upgrade the connection to a web socket, and I failed to consider it again thereafter.
I was mistaken; the channel is actually configured as chan configure $sock -buffering full -blocking 0 -translation binary.
If I use replace() in the SQL to substitute the curly apostrophe, then the browser displays the text with the apostrophe rendered normally. And if I follow the example in this SO question and encode the result of the SQL query as set encoded [encoding convertto utf-8 $SQLResult], it also renders correctly.
What is the correct way to handle this? Should the SQL results be encoded to utf-8, or should the web socket be configured differently? If the socket is configured as utf-8, then it fails on the first message sent from the browser/client.
Thank you.
The data in SQLite should be Unicode (probably UTF-8 but that shouldn't matter; the package binding handles that). That means that the data arrives in Tcl correctly encoded. Tcl's internal encoding schemes are complicated and should be ignored for the purposes of this discussion: pretend that's all Unicode too.
With a websocket, you need to say what sort of messages ("frames") you are sending. For normal payloads, they can be either binary frames or text frames. Binary frames are a bunch of bytes. Text frames are UTF-8 (as specified by the standard). STOMP frames, for example, are text frames with additional structure layered on top (a few basic rules and some higher-order conventions about what operations can be stated).
If you are sending text, you should use UTF-8, ideally by configuring the socket to do that in the part of the websocket implementation that mediates between the point that receives the text to send (maybe JSON) and the part that writes the frame onto the underlying socket itself. Above that point, you should just work with ordinary Tcl strings. If you're forming JSON messages in Tcl, the rl_json package is recommended, or you can get SQLite to produce JSON directly. (JSON's serialized form is always UTF-8, and it is conceptually Unicode.)
If you are sending binary frames, you handle all the details yourself. Using encoding convertto utf-8 may help with forming the message frame body, or you can use any other encoding scheme you prefer (such as the one you mention in your question).
I provide users with a SAS token to upload blobs. I'd like to check whether the blobs represent a valid image or not. How can I do this? In the SAS token, I make sure the blob name ends with a .jpeg extension, but this does not mean the user actually uploads an image, since everything is uploaded as a byte stream.
This is not possible, as described here. Perhaps a better way to validate is on the front end, when the user tries to upload the file.
You can write an Azure Function that is triggered every time a new blob is uploaded. In that function you can check whether the blob is a valid image file; if it is not, you can delete it or send an email to the uploader.
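For example, here is a rough sketch of such a function in Java, assuming the Azure Functions Java worker; the container name, connection setting, and follow-up action are placeholders, and ImageIO is used only as an illustrative validity check:

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.BindingName;
import com.microsoft.azure.functions.annotation.BlobTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

import javax.imageio.ImageIO;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class ValidateUploadedImage {

    // Fires whenever a blob lands in the "uploads" container (placeholder name).
    @FunctionName("ValidateUploadedImage")
    public void run(
            @BlobTrigger(name = "content",
                         path = "uploads/{name}",
                         dataType = "binary",
                         connection = "AzureWebJobsStorage") byte[] content,
            @BindingName("name") String name,
            final ExecutionContext context) {

        if (!isImage(content)) {
            // Placeholder: delete the blob via the storage SDK, or notify the uploader.
            context.getLogger().warning(name + " is not a valid image");
        }
    }

    private static boolean isImage(byte[] data) {
        try {
            // ImageIO.read returns null when no registered reader understands the bytes.
            return ImageIO.read(new ByteArrayInputStream(data)) != null;
        } catch (IOException e) {
            return false;
        }
    }
}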
In COSMOS is there an easy way to see the raw bytes being sent over the line by the Command Sender? We want to capture what is sent out and don't know where all the protocol layers get added.
If you go to the Cmd Packets tab in the Command and Telemetry Server, you can click View Raw on the command packet you're sending to see the raw bytes. Note that these are the raw bytes in the defined command. If your interface changes the data in some way (adds a CRC, etc.), then the bytes over the line may be different.
The following page describes the COSMOS log format: https://cosmosrb.com/docs/logging/
You can also add the LOG_RAW keyword after your interface definition to log all the raw data going out over the interface. Note that COSMOS does not interpret this log and you'll have to use a hex editor to view it.
Is it true WebHDFS does not support SequenceFiles?
I can't find anything that says it does. I have the usual small file problem and believe SequenceFiles would work well enough, but I need to use WebHDFS. I need to create and then append to a SequenceFile via WebHDFS.
I think it's true. There is no web API to append to a sequence file.
However, you can append binary data, and if your sequence file is not block-compressed, you should be able to format your data on the client with relatively little effort. You can do it by running your input through a sequence file writer on the client, and then using the output for uploading (either the whole file, or a slice representing the delta since the last append).
You can read more about sequence file format here.
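Here is a rough sketch of the client-side half of that idea in Java, using Hadoop's SequenceFile.Writer to stage a file locally; the paths are placeholders and the actual WebHDFS upload (op=CREATE / op=APPEND REST calls) is only indicated in comments:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LocalSequenceFileBuilder {

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Staging file on the client; not block-compressed.
        Path local = new Path("file:///tmp/batch-0001.seq");

        SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(local),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class));
        try {
            // Pack each small input file as one key/value record.
            for (String smallFile : args) {
                byte[] bytes = Files.readAllBytes(Paths.get(smallFile));
                writer.append(new Text(smallFile), new BytesWritable(bytes));
            }
        } finally {
            IOUtils.closeStream(writer);
        }

        // Upload the result over WebHDFS, e.g.:
        //   PUT  .../webhdfs/v1/<path>?op=CREATE   for the first upload
        //   POST .../webhdfs/v1/<path>?op=APPEND   for the bytes written since the last upload
    }
}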
I have a requirement where I need to upload .xls (Excel 2003) files into a SQL Server 2008 R2 database. I am using ORCHAD stuff for scheduling.
I am using the HttpPostedFileBase file stream to convert the upload into a byte array and store it in the database.
After storing, the background scheduler picks up the task and processes the stored data. I need to create objects from the data in the Excel file and send them for processing. I am stuck at decoding the byte array :(
What is the best way to handle this kind of requirement? Are there any libraries I can make use of?
My web app is built with MVC3, EF 4.1, the repository pattern, and Autofac.
I have not used the HttpPostedFileBase class, but you could:
Convert the file to a byte stream
Save it as the appropriate byte/blob type in your database (store the extension in a separate field)
Retrieve bytes and add the appropriate extension to the file stream
Treat as a normal file...
But I'm actually wondering if your requirements even demand this. Why are you storing the file in the first place? If you are only using the file data to shape your business object (that I'm guessing gets saved somewhere), you could perform that data extraction, shaping, and persistence before you store the file as raw bytes so you never have to reconstitute the file for that purpose.