I want to download a file from SQL Server in chunks and return it to my client using PushStreamContent or StreamContent from my Web API. What is the correct approach to achieve this?
I have two approaches in mind:
Multiple calls: call the Web API from the client, get the file metadata on an initial call, then pass ChunkSize and ContentStart parameters and download the chunks.
Single call: download the file to a temp folder on the server side and push the stream content to the client in a single call.
I believe this should get you through:
public HttpResponseMessage Get([FromUri] string filename)
{
    string path = HttpContext.Current.Server.MapPath("~/" + filename);
    if (!File.Exists(path))
    {
        throw new HttpResponseException(
            Request.CreateErrorResponse(HttpStatusCode.NotFound, "The file does not exist."));
    }
    try
    {
        MemoryStream responseStream = new MemoryStream();
        Stream fileStream = File.Open(path, FileMode.Open, FileAccess.Read);
        bool fullContent = true;
        if (this.Request.Headers.Range != null)
        {
            fullContent = false;
            // Currently we only support a single range.
            RangeItemHeaderValue range = this.Request.Headers.Range.Ranges.First();
            // From specified, so seek to the requested position.
            if (range.From != null)
            {
                fileStream.Seek(range.From.Value, SeekOrigin.Begin);
                // From is 0 and To covers the whole file: the complete file is returned.
                if (range.From == 0 && (range.To == null || range.To >= fileStream.Length - 1))
                {
                    fileStream.CopyTo(responseStream);
                    fullContent = true;
                }
            }
            // The guard prevents copying the data a second time when the
            // range above already covered the complete file.
            if (!fullContent)
            {
                if (range.To != null)
                {
                    // 10-20: return the range. Both ends are inclusive, hence the +1.
                    if (range.From != null)
                    {
                        long rangeLength = range.To.Value - range.From.Value + 1;
                        int length = (int)Math.Min(rangeLength, fileStream.Length - range.From.Value);
                        byte[] buffer = new byte[length];
                        fileStream.Read(buffer, 0, length);
                        responseStream.Write(buffer, 0, length);
                    }
                    // -20: a suffix range, i.e. the last 20 bytes of the file (RFC 7233).
                    else
                    {
                        int length = (int)Math.Min(range.To.Value, fileStream.Length);
                        fileStream.Seek(-length, SeekOrigin.End);
                        byte[] buffer = new byte[length];
                        fileStream.Read(buffer, 0, length);
                        responseStream.Write(buffer, 0, length);
                    }
                }
                // No Range.To
                else
                {
                    // 10-: return from the specified value to the end of the file.
                    if (range.From != null && range.From < fileStream.Length)
                    {
                        int length = (int)(fileStream.Length - range.From.Value);
                        byte[] buffer = new byte[length];
                        fileStream.Read(buffer, 0, length);
                        responseStream.Write(buffer, 0, length);
                    }
                }
            }
        }
        // No Range header. Return the complete file.
        else
        {
            fileStream.CopyTo(responseStream);
        }
        fileStream.Close();
        responseStream.Position = 0;
        HttpResponseMessage response = new HttpResponseMessage();
        response.StatusCode = fullContent ? HttpStatusCode.OK : HttpStatusCode.PartialContent;
        // A production implementation should also set the Content-Range header on 206 responses.
        response.Content = new StreamContent(responseStream);
        return response;
    }
    catch (IOException)
    {
        throw new HttpResponseException(
            Request.CreateErrorResponse(HttpStatusCode.InternalServerError,
                "A generic error occurred. Please try again later."));
    }
}
Note that when using Web API, you don't need to manually parse the Range header as text: Web API parses it for you and exposes a From and a To property for each range. Both properties are nullable, since either can be absent (think bytes=-100 and bytes=300-), and those special cases must be handled carefully.
Another special case to consider is when To is larger than the resource size. It is equivalent to To being null: you return everything from From to the end of the resource.
If the complete resource is returned, the status code is usually set to 200 OK; if only part of the resource is returned, it is usually set to 206 PartialContent.
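The case analysis can be captured in a few lines independently of the framework. Here is a small Python model of the three range shapes (From-To, From-only, and the -To suffix), with resolve_range as a hypothetical helper returning the start offset, byte count, and whether the response covers the full resource; note that per RFC 7233 a suffix range bytes=-N means the last N bytes of the resource:

```python
def resolve_range(from_, to, length):
    """Map a parsed Range (From/To may each be None) to (start, count, full_content).

    Mirrors RFC 7233 semantics: both ends are inclusive, and "bytes=-N"
    is a suffix range meaning the last N bytes of the resource.
    """
    if from_ is None:
        count = min(to, length)                     # bytes=-N: the last N bytes
        return length - count, count, count == length
    if to is None or to >= length - 1:
        return from_, length - from_, from_ == 0    # bytes=N- (or To past the end)
    return from_, to - from_ + 1, False             # bytes=N-M: inclusive, hence the +1
```

For example, resolve_range(None, 100, 1000) yields (900, 100, False), the last 100 bytes, while resolve_range(0, 2000, 1000) clamps to the full resource.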
This solution is part of the article here which covers a lot of other things and I encourage you to check it out: https://blogs.msdn.microsoft.com/codefx/2012/02/23/more-about-rest-file-upload-download-service-with-asp-net-web-api-and-windows-phone-background-file-transfer/
Related
I am trying to use the Ruby SDK to upload videos to YouTube automatically. Inserting a video, deleting a video, and setting the thumbnail for a video works fine, but for some reason trying to add captions results in an invalid metadata client error regardless of the parameters I use.
I wrote code based on the documentation and code samples in other languages (I can't find any examples of doing this in Ruby with the current gem). I am using the google-apis-youtube_v3 gem, version 0.22.0.
Here is the relevant part of my code (assuming I have uploaded a video with id 'XYZ123'):
require 'googleauth'
require 'googleauth/stores/file_token_store'
require 'google-apis-youtube_v3'
def authorize [... auth code omitted ...] end
def get_service
  service = Google::Apis::YoutubeV3::YouTubeService.new
  service.key = API_KEY
  service.client_options.application_name = APPLICATION_NAME
  service.authorization = authorize
  service
end

body = {
  "snippet": {
    "videoId": 'XYZ123',
    "language": 'en',
    "name": 'English'
  }
}

s = get_service
s.insert_caption('snippet', body, upload_source: '/path/to/my-captions.vtt')
I have tried many different combinations, but the result is always the same:
Google::Apis::ClientError: invalidMetadata: The request contains invalid metadata values, which prevent the track from being created. Confirm that the request specifies valid values for the snippet.language, snippet.name, and snippet.videoId properties. The snippet.isDraft property can also be included, but it is not required. status_code: 400
It seems that there really is not much choice for the language and video ID values, and there is nothing remarkable about naming the captions as "English". I am really at a loss as to what could be wrong with the values I am passing in.
Incidentally, I get exactly the same response even if I just pass in nil as the body.
I looked at the OVERVIEW.md file included with the google-apis-youtube_v3 gem, and it referred to the Google simple REST client Usage Guide, which in turn mentions that most object properties do not use camel case (which is what the underlying JSON representation uses). Instead, in most cases properties must be sent using Ruby's "snake_case" convention.
Thus it turns out that the snippet should specify video_id and not videoId.
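Concretely, the corrected body looks like this (a sketch; only the hash key changes, the insert_caption call stays as before):

```ruby
# Corrected request body: the google-apis clients expect Ruby snake_case
# property names, which they map to the camelCase JSON fields themselves.
body = {
  snippet: {
    video_id: 'XYZ123',   # was videoId -- the cause of the invalidMetadata error
    language: 'en',
    name: 'English'
  }
}
# s = get_service
# s.insert_caption('snippet', body, upload_source: '/path/to/my-captions.vtt')
```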
That seems to have let the request go through, so this resolves this issue.
The response I'm getting now has a status of "failed" and a failure reason of "processingFailed", but that may be the subject of another question if I can't figure it out.
I am running a PureScript app that is being served up by a backend Suave application in F#. In the front end, I need to open a WebSocket connection in PureScript to the backend, but part of the path needs to be dynamic based on how the backend app is running (for example on some boxes it is: ws://host1:9999/ws/blah, on others it might be ws://host2:7777/ws/blah).
So I need to get the current URL that my app is being served up on so that I can just put a ws:// on the front, and a ws/blah on the end (or somehow do a relative WebSocket path?).
I've tried doing something like:
wdw <- window
htmldoc <- document wdw
let doc = htmlDocumentToDocument htmldoc
docUrl <- url doc
connection <- WS.create (WS.URL $ "ws://" <> docUrl <> "ws/blah") []
But the document URL given has http:// on the front of it. I could hack up the string and rip that part out, but I'm hoping to find a more elegant way.
If it matters, I'm also using Halogen here so I have access to their API if there is something useful in there for this situation.
I was able to piece it together from stholzm's suggestion above.
In the documentation for location, there are functions for Hostname and Port that can be used to piece together the base url. The location can be obtained via the location function that takes in a window instance.
In the end, my code pieces the base URL together from hostname and port before prepending ws:// and appending /ws/blah.
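The original snippet was cut off here; pieced together from the answer above, it plausibly looked something like this (a sketch against the Web.HTML modules, whose hostname and port functions live on Location; older purescript-dom versions expose a similar API under different module names):

```purescript
-- Assumed imports: Web.HTML (window), Web.HTML.Window (location),
-- Web.HTML.Location (hostname, port)
loc  <- window >>= location
host <- hostname loc
prt  <- port loc
let base = host <> (if prt == "" then "" else ":" <> prt)
connection <- WS.create (WS.URL $ "ws://" <> base <> "/ws/blah") []
```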
I'm using SerilogMetrics's BeginTimedOperation() in a Web API, and it would be really great to be able to use the HttpRequestNumber or HttpRequestId properties (from the respective Serilog.Extra.Web enrichers) as the identifier, making it super easy to correlate timing-related log entries with others across a request.
Something like:
using (logger.BeginTimedOperation("doing some work", HttpRequestNumberEnricher.CurrentRequestNumber))
{ ... }
Short of poking around in HttpContext.Current for the magically-named (i.e. non-public) properties, is this achievable? Thanks!
If you begin a timed operation during a web request, the operation's events will already be tagged with the HttpRequestId.
You'll see it when logging to a structured log server like Seq, but if you're writing to a text file or trace, the property won't be included in the output message by default. To show it there, use something like:
.WriteTo.File(...,
outputTemplate: "{Timestamp} [{Level}] ({HttpRequestId}) {Message} ...")
The logging methods use a default template you can draw on for inspiration, and there's some info spread around the wiki though there's no definitive reference.
We need to decode the following BodyBinary value, which is recorded for a thick-client application based on .NET WCF web services with custom bindings and GZIP encoding.
P.S.: The BodyBinary content was shortened to make it easier to post here.
web_custom_request("Service.svc",
    "URL=webservice.svc",
    "Method=POST",
    "Resource=0",
    "RecContentType=application/x-gzip",
    "Referer=",
    "Snapshot=t1.inf",
    "Mode=HTTP",
    "EncType=application/x-gzip",
    "BodyBinary=\\x1F\\x8B\\x08\\x00\\x00",
    LAST);
We need to decode the BodyBinary in order to parameterize the input values for various other flows.
I have read about the Data Format Extension, a custom coding mechanism introduced by HP, but it appears very complex for us with our limited coding background.
[Edit]
Current Approach:
We are not recording the application with VuGen; instead we use Fiddler4 (with GZIP and UNGZIP custom rules) to capture the web service communication, un-gzip the request, and then send it in a web_custom_request using lr_zip so that the server can understand it.
The main challenge is that there is a lot of manual work in capturing all the web service calls, un-gzipping them, creating a custom request, and then hitting the server.
If LoadRunner could handle this automatically, or after recording with VuGen and some custom decode-parameterize-encode-post processing, it would drastically reduce our effort.
char * param_xmlsource_GetUserAccess;
param_xmlsource_GetUserAccess = "Entire Soap Request(UnGzipped using Fiddler)";

web_set_user("{Username}", "{Password}", "{Env_URL}");
lr_save_string(lr_eval_string(param_xmlsource_GetUserAccess), "xmlsource_GetUserAccess");

lr_start_transaction("Transaction_GetUserAccess");
lr_zip("target=xmltarget_GetUserAccess", "source=xmlsource_GetUserAccess");
web_custom_request("web_custom_request",
    "URL=WebService.svc",
    "Method=POST",
    "TargetFrame=",
    "EncType=application/x-gzip",
    "Resource=0",
    "Referer=",
    "Mode=HTTP",
    "Body={xmltarget_GetUserAccess}",
    LAST);
lr_end_transaction("Transaction_GetUserAccess", LR_AUTO);
[Edit]
Updated the question by replacing the word "decrypt" with "decode", which is what is actually happening here.
Here is the answer from our expert:
One suggestion is the ContentEncoding=gzip argument with a plain-text request body in the script (the body has to be converted manually).
As a sample:
Code:
web_reg_save_param("gzipped", "LB=", "RB=", "Search=Body", LAST);
web_custom_request("echo",
    "URL=http://<myserver>/echo_post",
    "Method=POST",
    "ContentEncoding=gzip",
    "Body=~!txtPassword=~#admin&~!txtLogin=~#admin&~!clientType=~#Swing&~!actionID=~#swing%2FcomsHome&~!alreadylogin=~#No",
    LAST);
lr_unzip("source=gzipped", "target=plain");
Output:
Action.c(9): t=350ms: 107-byte request body for "…" (RelFrameId=1, Internal ID=1)
Action.c(9): \x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x0B\xABS,\xA9(\tH,..\xCF/J\xB1\xADSNL\xC9\xCD\xCCS\xAB
Action.c(9): \x03\x89\xFA\xE4\xA7g\xE6!\t%\xE7d\xA6\xE6\x95\x84T\x16\xA4\x02\x05\x83\xCB3\xF3\xD2\x81\x82
Action.c(9): \x89\xC9%\x99\xF9y\x9E.#\xA1b\x90\x90\xAA\x91[r~n\xB1G~n*H6\xA7(51\xA52\x07j\x92_>\x00\xF9
Action.c(9): \xEB#\x99o\x00\x00\x00….
Action.c(9): t=384ms: 107-byte response body for "…" (RelFrameId=1, Internal ID=1)
Action.c(9): \x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x0B\xABS,\xA9(\tH,..\xCF/J\xB1\xADSNL\xC9\xCD\xCCS\xAB
Action.c(9): \x03\x89\xFA\xE4\xA7g\xE6!\t%\xE7d\xA6\xE6\x95\x84T\x16\xA4\x02\x05\x83\xCB3\xF3\xD2\x81\x82
Action.c(9): \x89\xC9%\x99\xF9y\x9E.#\xA1b\x90\x90\xAA\x91[r~n\xB1G~n*H6\xA7(51\xA52\x07j\x92_>\x00\xF9
Action.c(9): \xEB#\x99o\x00\x00\x00
Action.c(9): Notify: Saving Parameter "gzipped = \x1f‹\x08\x00\x00\x00\x00\x00\x00\x0b«S,©( H,..Ï/J±-SNLÉÍÌS«\x03‰úä§gæ! %çd¦æ•„T\x16¤\x02\x05ƒË3óÒ\x81‚‰É%\x99ùyž.#¡b\x90\x90ª‘[r~n±G~n*H6§(51¥2\x07j’_>\x00ùë#\x99o\x00\x00\x00".
Action.c(22): Notify: Saving Parameter "plain = ~!txtPassword=~#admin&~!txtLogin=~#admin&~!clientType=~#Swing&~!actionID=~#swing%2FcomsHome&~!alreadylogin=~#No".
Hope this helps.
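For anyone who wants to sanity-check the bytes outside LoadRunner: the round-trip that lr_zip/lr_unzip (and ContentEncoding=gzip) perform is plain gzip, which is easy to model. A Python sketch with an illustrative body fragment:

```python
import gzip

# A fragment of the plain-text body from the sample above.
plain = b"~!txtPassword=~#admin&~!txtLogin=~#admin&~!clientType=~#Swing"

# What lr_zip / ContentEncoding=gzip send over the wire.
compressed = gzip.compress(plain)

# The first two bytes are the gzip magic number, the \x1F\x8B
# visible at the start of the recorded BodyBinary.
print(compressed[:2])                         # b'\x1f\x8b'

# What lr_unzip recovers on the way back.
print(gzip.decompress(compressed) == plain)   # True
```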
Just to add my two cents' worth: ContentEncoding=gzip will work fine as long as WinInet mode is enabled. In our case, our scripts have to fire async calls, so we cannot use WinInet mode. I raised the same issue with LR support and they really didn't have much to say about it.
I have seen a similar issue many times. You can try this:
1. Select one of the compression methods.
2. Save the script.
3. Then set it back to "None".
Refer to this image: https://i.stack.imgur.com/IQYZH.jpg
I am building an SNMP Agent for a Windows application using the Microsoft WinSNMP API. Currently everything is working for single-item get and set-request, and also for get-next to allow walking the defined tree (albeit with some caveats that are not relevant to this question).
I am now looking at multi-item get and also get-bulk.
My current procedure is to iterate through the list of requested items (the varbindlist within the PDU), treating each one individually, effectively causing an internal get. The result is added to the VBL, set into the PDU, and then sent back to the SNMP Manager, taking into account invalid requests, etc.
My question is how should I handle "too much" data (data that cannot fit into a single transport layer message)? Or more accurately, is there a way to test whether data is "too big" without actually attempting to transmit? The only way I can see in the API is to try sending, check the error, and try again.
In the case of a get-request this isn't a problem - if you can't return all of the requested data, you fail: so attempt sending, and if the error report is SNMPAPI_TL_PDU_TOO_BIG, send a default "error" PDU.
However, it is allowable for a response to bulk-get to return partial results.
The only way I can see to handle this is a tedious loop of removing an item and trying again; something similar to the following (some detail removed for brevity):
// Create an empty varbind list.
vbl = SnmpCreateVbl(session, NULL, NULL);

// Add all items to the list.
SnmpSetVb(vbl, &oid, &value); // repeated for each OID/value pair

// Create the PDU.
pdu = SnmpCreatePdu(session, SNMP_PDU_RESPONSE, ..., vbl);

bool retry;
do {
    retry = false;
    smiINT failed = SnmpSendMsg(session, ..., pdu);
    if (failed && SNMPAPI_TL_PDU_TOO_BIG == SnmpGetLastError()) {
        // Too much data: delete the last varbind and try again.
        SnmpDeleteVb(vbl, SnmpCountVbl(vbl));
        SnmpSetPduData(pdu, ..., vbl);
        retry = true;
    }
} while (retry);
This doesn't seem like an optimal approach - so is there another way that I've missed?
As a side-note, I know about libraries such as net-snmp, but my question is specific to the Microsoft API.
The RFC does require you to do what you pasted; see page 16 of RFC 3416:
https://www.rfc-editor.org/rfc/rfc3416
There does not seem to be a function exposed by WinSNMP API that can do this for you, so you have to write your own logic to handle it.
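Given that the trim logic has to be hand-rolled anyway, one refinement over removing a single varbind per retry is to halve the varbind count after each SNMPAPI_TL_PDU_TOO_BIG failure: failed sends transmit nothing, so those probes are free, and a GetBulk response is allowed to carry fewer varbinds than the maximum that would fit. A language-agnostic Python model of that loop, with fits() as a hypothetical stand-in for "SnmpSendMsg succeeded":

```python
def shrink_until_fits(varbinds, fits):
    """Drop trailing varbinds, halving the count after each failed attempt.

    fits(prefix) models one SnmpSendMsg call: True means the PDU was sent,
    False means it failed with SNMPAPI_TL_PDU_TOO_BIG. This needs at most
    log2(n) failed attempts instead of up to n with one-at-a-time removal.
    """
    n = len(varbinds)
    while n > 0 and not fits(varbinds[:n]):
        n //= 2
    return varbinds[:n]
```

The trade-off is that the response may carry fewer varbinds than strictly necessary, which RFC 3416 permits for a GetBulk response; the one-at-a-time loop in the question returns the maximal set at the cost of more retries.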