Best technology to use to set up an audio file cache

We have a client application that allows users to download full-length 192 kbps MP3 audio files. Because the files are stored externally to us as a business, we need to be able to:
1) Copy file from external location into a local Server cache
2) Copy that file to the client that requested it
Obviously further requests on the same file would come from the cache and would not need to go external.
Now, we already have a system that does this (using a Squid cache), but the problem is that step 2 only executes once step 1 is fully complete. This means that if a 10-minute 192 kbps track takes 75 seconds to be copied from the external location into the cache, the client's HTTP timeout kicks in at about 60 seconds. This does not fulfil our requirements.
It seems that what we need is a cache that can transfer data out to a client WHILE it is still fetching that data from the external location. My questions are:
1) Can this be done with a Squid Cache (this is the legacy incumbent and not my choice)?
2) If not, what technology would be the most suited for this kind of scenario (cost is not really an issue)?
Please let me know if this isn't clear in any way!

Here's an ASP.NET handler I wrote a while back to proxy some content from another server. It wouldn't be that hard to make it write to a file and serve the file the second time round. Flushing the response inside the loop would make it deliver while downloading:
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;
using System.Web;
using System.Web.Services;

namespace bla.com
{
    /// <summary>
    /// Streams a remote file back to the client, restricted by a URL whitelist.
    /// </summary>
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class Proxy : IHttpHandler
    {
        // Verbatim string (@"..."): restrict which URLs may be proxied.
        private static Regex urlRegex = new Regex(@"http://some_regex_here_to_prevent_abuse_of_proxy.mp3", RegexOptions.Compiled);

        public void ProcessRequest(HttpContext context)
        {
            var targetUrl = context.Request.QueryString["url"];
            MatchCollection matches = urlRegex.Matches(targetUrl);
            if (matches.Count != 1 || matches[0].Value != targetUrl)
            {
                context.Response.StatusCode = 403;
                context.Response.ContentType = "text/plain";
                context.Response.Write("Forbidden");
                return;
            }
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(targetUrl);
            using (HttpWebResponse response = (HttpWebResponse)req.GetResponse())
            using (Stream responseStream = response.GetResponseStream())
            {
                context.Response.ContentType = response.ContentType;
                byte[] buffer = new byte[4096];
                int amt;
                while ((amt = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    context.Response.OutputStream.Write(buffer, 0, amt);
                    Debug.WriteLine(amt);
                }
            }
            context.Response.Flush();
        }

        public bool IsReusable
        {
            get { return false; }
        }
    }
}
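A minimal sketch of the "write to file and deliver while downloading" idea described above (the class and method names are hypothetical, and the streams are stand-ins for the upstream response, the cache file, and the client response): each chunk read from the source is written to both the cache file and the client, and the client side is flushed immediately so delivery begins before the download finishes.

```csharp
using System;
using System.IO;

static class TeeCopy
{
    // Copy 'source' to both 'cacheFile' and 'client', flushing the client
    // after every chunk so it starts receiving data immediately.
    public static void Copy(Stream source, Stream cacheFile, Stream client)
    {
        var buffer = new byte[4096];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            cacheFile.Write(buffer, 0, read);   // populate the local cache
            client.Write(buffer, 0, read);      // serve the requesting client
            client.Flush();                     // deliver while downloading
        }
    }
}
```

In the handler above, `client` would be `context.Response.OutputStream` and `cacheFile` a `FileStream` keyed by the requested URL; later requests for the same URL would then be served straight from that file without going external.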


Extremely Slow file upload to a Blazor Server app deployed as Azure Web App

I created a Blazor Server app that allows end users to upload large Excel files that are consumed in downstream logic.
I use the standard .NET 5 InputFile component to upload the Excel file. Within the app, I read the stream asynchronously, copy it into a MemoryStream, and then use ExcelDataReader to convert it into a DataSet.
The challenge is that the upload takes a long time, specifically when the app is deployed to Azure. To dig into what exactly is consuming the time, I track the progress of the stream-copy operation.
The following code handles my upload:
private async Task OnInputFileChange(InputFileChangeEventArgs e)
{
    this.StateHasChanged();
    IReadOnlyList<IBrowserFile> selectedFiles;
    selectedFiles = e.GetMultipleFiles();
    foreach (var file in selectedFiles)
    {
        DataSet ds = new DataSet();
        {
            bool filesuccesfullRead = false;
            var timer = new Timer(new TimerCallback(_ =>
            {
                if (fileTemplateData.uploadProgressInfo.percentage <= 100)
                {
                    // Note that the following line is necessary because otherwise
                    // Blazor would not recognize the state change and not refresh the UI
                    InvokeAsync(() =>
                    {
                        StateHasChanged();
                    });
                }
            }), null, 1000, 1000);
            // Allowing a 100 MB file at once (104857600 bytes)
            using (Stream stream = file.OpenReadStream(104857600))
            using (MemoryStream ms = new MemoryStream())
            {
                fileTemplateData.uploadProgressInfo = new GlobalDataClass.CopyProgressInfo();
                await ExtensionsGeneric.CopyToAsync(stream, ms, 128000, fileTemplateData.uploadProgressInfo);
                System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance);
                try
                {
                    using (var reader = ExcelReaderFactory.CreateReader(ms))
                    {
                        ds = reader.AsDataSet(new ExcelDataSetConfiguration()
                        {
                            ConfigureDataTable = _ => new ExcelDataTableConfiguration()
                            {
                                UseHeaderRow = false
                            }
                        });
                        filesuccesfullRead = true;
                    }
                }
                catch (Exception ex)
                {
                    Message = "Unable to read provided file(s) with exception " + ex.ToString();
                }
                stream.Close();
                ms.Close();
            }
        }
        ds.Dispose();
        ds = null;
    }
    fileTemplateData.fileloading = false;
    this.StateHasChanged();
}
Here is the CopyToAsync function, which is the same as a regular stream copy but adds progress tracking:
public static async Task CopyToAsync(this Stream fromStream, Stream destination, int bufferSize, GlobalDataClass.CopyProgressInfo progressInfo)
{
    var buffer = new byte[bufferSize];
    int count;
    progressInfo.TotalLengthinBytes = fromStream.Length;
    while ((count = await fromStream.ReadAsync(buffer, 0, buffer.Length)) != 0)
    {
        progressInfo.BytesTransfered += count;
        progressInfo.percentage = Math.Round(((double)progressInfo.BytesTransfered / (double)progressInfo.TotalLengthinBytes) * 100, 1);
        await destination.WriteAsync(buffer, 0, count);
    }
}

public class CopyProgressInfo
{
    public long BytesTransfered { get; set; }
    public long TotalLengthinBytes { get; set; }
    public double percentage { get; set; }
    public DateTime LastProgressUpdateVisualized = new DateTime();
}
Now let me put the question:
Using this code, I achieve a fair upload speed when the app is running on localhost (a 75 MB file with tonnes of data uploads in around 18 seconds). When the app is deployed to an Azure App Service plan, the same file takes more than 10 minutes to upload, which makes me feel something is seriously wrong. Using progress tracking, I was able to confirm that the time is being consumed by the CopyToAsync function and not the logic after it.
Here's what I have investigated:
I checked my internet upload speed on two separate connections, both with a stable upload bandwidth of more than 25 Mbps, so this is not the issue.
I upgraded the App Service plan to a higher tier momentarily to see if upload bandwidth was somehow linked to the plan tier; even increasing it to a powerful P3V2 tier made no difference.
To see if the specific datacenter my App Service sits in offers poor upload performance from my part of the world, I checked average upload speed using https://www.azurespeed.com/Azure/UploadLargeFile: a 75 MB file uploads in around 38 seconds to the Azure West Europe datacenter. So I do not see how connectivity is the problem here.
With all that is mentioned above, what could be causing such poor file upload speed when uploading a file to a deployed Blazor Server web app?
I don't see such a performance impact, though I upload to Azure Blob Storage.
My implementation summary:
A Razor component called imageUpload.razor contains
public async Task HandleFileSelected(InputFileChangeEventArgs e)
and calls a service like:
await hService.UploadImgToAzureAsync(imageFile.OpenReadStream(), fileName);
and a service that contains the following:
public async Task UploadImgToAzureAsync(Stream fileStream, string fileName)
{
    await ImageHelper.UploadImageToStorage(fileStream, fileName);
}
ImageHelper calls AzureStorage.cs.
AzureStorage.cs handles the call to UploadFromStreamAsync.
I finally managed to improve the upload performance. Unfortunately, Blazor's built-in InputFile component doesn't seem to be designed very well for large file uploads, especially when the app has been deployed. I used Tewr's upload file component with a larger buffer size (128000), and that significantly improved performance (roughly a 3x reduction in upload time). Tewr's sample code is available here:
https://github.com/Tewr/BlazorFileReader/blob/master/src/Demo/Blazor.FileReader.Demo.Common/IndexCommon.razor
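A rough way to see why buffer size matters once the app is deployed: in Blazor Server, each read from the browser file stream has to cross the circuit back to the browser, so per-chunk latency multiplies with the number of chunks. A back-of-the-envelope sketch (the 75 MB figure mirrors the file above; treating each read as one round trip is my assumption about the transport, not something measured here):

```csharp
using System;

class BufferMath
{
    // Ceiling division: how many reads are needed to move 'fileBytes'
    // in chunks of 'bufferSize'.
    public static long Chunks(long fileBytes, int bufferSize) =>
        (fileBytes + bufferSize - 1) / bufferSize;

    static void Main()
    {
        long file = 75L * 1024 * 1024;                 // 75 MB upload
        Console.WriteLine(Chunks(file, 32 * 1024));    // 2400 chunks with a 32 KB buffer
        Console.WriteLine(Chunks(file, 128000));       // 615 chunks with a 128000-byte buffer
    }
}
```

On localhost the per-chunk latency is negligible either way; against a remote Azure host, anything that cuts the chunk count by roughly 4x (as the larger buffer does here) cuts the accumulated round-trip cost proportionally, which is consistent with the speedup reported above.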

Webmasters API - Quota limits

We're trying to download page data for sites using the Webmasters API .NET Client Library, by calling WebmastersService.SearchAnalytics.Query(). To do this we are using Batching and sending approx. 600 requests in one batch. However most of these fail with the error "Quota Exceeded". The amount that fail varies each time but it is only about 10 of the 600 that work (and it varies where they are within the batch). The only way we can get it to work is to reduce the batch size down to 3, and wait 1 second between each call.
According to the Developer Console our daily quota is set to 1,000,000 (and we have 99% remaining) and our per user limit is set to 10,000 requests / second / user.
The error we get back is:
Quota Exceeded [403] Errors [ Message[Quota Exceeded] Location[ - ]
Reason[quotaExceeded] Domain[usageLimits]]
Is there another quota which is enforced? What does "Domain[usageLimits]" mean? Is the domain the site we are querying the page data for, or is it our user account?
We still get the problem if we run each request separately, unless we wait 1 second between each call. Due to the number of sites and the number of pages we need to download the data for this isn't really an option.
I found this post which points out that just because the max batch size is 1000 doesn't mean to say the Google service you are calling supports batches of those sizes. But I'd really like to find out exactly what the quota limits really are (as they don't relate to the Developer Console figures) and how to avoid the errors.
Update 1
Here's some sample code. It's written specifically to reproduce the problem, so no need to comment on its quality ;o)
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Util.Store;
using Google.Apis.Webmasters.v3;
using Google.Apis.Webmasters.v3.Data;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            new Program().Run().Wait();
        }

        private async Task Run()
        {
            List<string> pageUrls = new List<string>();
            // Add your page urls to the list here
            await GetPageData("<your app name>", "2015-06-15", "2015-07-05", "web", "DESKTOP", "<your domain name>", pageUrls);
        }

        public static async Task<WebmastersService> GetService(string appName)
        {
            //if (_service != null)
            //    return _service;
            //TODO: - look at analytics code to see how to store JSON and refresh token and check runs on another PC
            UserCredential credential;
            using (var stream = new FileStream("c:\\temp\\WMT.json", FileMode.Open, FileAccess.Read))
            {
                credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
                    GoogleClientSecrets.Load(stream).Secrets,
                    new[] { Google.Apis.Webmasters.v3.WebmastersService.Scope.Webmasters },
                    "user", CancellationToken.None, new FileDataStore("WebmastersService"));
            }

            // Create the service.
            WebmastersService service = new WebmastersService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = credential,
                ApplicationName = appName,
            });
            //_service = service;
            return service;
        }

        private static async Task<bool> GetPageData(string appName, string fromDate, string toDate, string searchType, string device, string siteUrl, List<string> pageUrls)
        {
            // Get the service from the initial method
            bool ret = false;
            WebmastersService service = await GetService(appName);
            Google.Apis.Requests.BatchRequest b = new Google.Apis.Requests.BatchRequest(service);
            try
            {
                foreach (string pageUrl in pageUrls)
                {
                    SearchAnalyticsQueryRequest qry = new SearchAnalyticsQueryRequest();
                    qry.StartDate = fromDate;
                    qry.EndDate = toDate;
                    qry.SearchType = searchType;
                    qry.RowLimit = 5000;
                    qry.Dimensions = new List<string>() { "query" };
                    qry.DimensionFilterGroups = new List<ApiDimensionFilterGroup>();

                    ApiDimensionFilterGroup filterGroup = new ApiDimensionFilterGroup();
                    ApiDimensionFilter filter = new ApiDimensionFilter();
                    filter.Dimension = "device";
                    filter.Expression = device;
                    filter.Operator__ = "equals";
                    ApiDimensionFilter filter2 = new ApiDimensionFilter();
                    filter2.Dimension = "page";
                    filter2.Expression = pageUrl;
                    filter2.Operator__ = "equals";
                    filterGroup.Filters = new List<ApiDimensionFilter>();
                    filterGroup.Filters.Add(filter);
                    filterGroup.Filters.Add(filter2);
                    qry.DimensionFilterGroups.Add(filterGroup);

                    var req = service.Searchanalytics.Query(qry, siteUrl);
                    b.Queue<SearchAnalyticsQueryResponse>(req, (response, error, i, message) =>
                    {
                        if (error == null)
                        {
                            // Process the results
                            ret = true;
                        }
                        else
                        {
                            Console.WriteLine(error.Message);
                        }
                    });
                    await b.ExecuteAsync();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Exception occurred getting page stats : " + ex.Message);
                ret = false;
            }
            return ret;
        }
    }
}
Paste this into Program.cs of a new console app and add Google.Apis.Webmasters.v3 via NuGet. It looks for the WMT.json file in c:\temp, but adjust the authentication code to suit your setup. If I add more than 5 page URLs to the pageUrls list, then I get the Quota Exceeded exception.
I've found that the stated quotas don't really seem to be the quotas. I had to slow my requests down (to 1/sec) to avoid this same issue, even though I was always at or below the stated rate limit (20/sec). Furthermore, the docs claim it gives a rateLimitExceeded error for going too fast, but really it returns a quotaExceeded error. It might have to do with how Google averages the rate of requests over time (some of the requests we made were simultaneous, even though the long-run average was designed to be at or below 20/sec), but I cannot be sure.
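A minimal sketch of the workaround both the question and answer converged on: pace the requests yourself rather than trusting the stated per-second quota. The `Func<Task>` delegates are hypothetical stand-ins for individual Searchanalytics queries; the helper itself is generic.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

static class PacedRequests
{
    // Run requests one at a time, spacing their starts at least 'minInterval' apart.
    public static async Task RunAsync(IEnumerable<Func<Task>> requests, TimeSpan minInterval)
    {
        var stopwatch = new Stopwatch();
        foreach (var send in requests)
        {
            if (stopwatch.IsRunning)
            {
                // Wait out whatever is left of the interval since the last start.
                var remaining = minInterval - stopwatch.Elapsed;
                if (remaining > TimeSpan.Zero)
                    await Task.Delay(remaining);
            }
            stopwatch.Restart();
            await send(); // e.g. service.Searchanalytics.Query(qry, siteUrl).ExecuteAsync()
        }
    }
}
```

With `minInterval` set to one second, this matches the "1 request per second" pacing that avoided the quotaExceeded errors above, while still letting you keep all the query-building code in one place.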

Windows Phone sends more than one web requests in order in a call

Recently, I have been working on a Windows Phone project. In this project, to validate a user I need to call 3 web APIs. The logic is as below:
Step 1: access web API 1 to get a token
Step 2: access web API 2 to get the username/password using the token retrieved in step 1
Step 3: access web API 3 to validate the username/password from step 2
As you can see, we need to access those 3 APIs in order. As is well known, Windows Phone accesses the network asynchronously, which makes it a big challenge to make those API calls in order, and which makes the source code hard to maintain.
I also considered synchronous code like the below, but I found there are some problems accessing the network; many exceptions are thrown. For example, when an exception is thrown and I try an asynchronous web request to the same URL, it is OK. I am struggling with this now. And I have to call it from a worker thread to avoid blocking the UI thread.
internal static class HttpWebRequestExtensions
{
    public const int DefaultRequestTimeout = 60000;
    public static bool IsHttpExceptionFound = false;

    public static WebResponse GetResponse(this WebRequest request, int nTimeOut = DefaultRequestTimeout)
    {
        var dataReady = new AutoResetEvent(false);
        HttpWebResponse response = null;
        var callback = new AsyncCallback(delegate(IAsyncResult asynchronousResult)
        {
            try
            {
                response = (HttpWebResponse)request.EndGetResponse(asynchronousResult);
                dataReady.Set();
            }
            catch (Exception e)
            {
                IsHttpExceptionFound = true;
            }
        });
        request.BeginGetResponse(callback, request);
        if (dataReady.WaitOne(nTimeOut))
        {
            return response;
        }
        return null;
    }

    public static WebResponse PostRequest(this HttpWebRequest request, String postData, int nTimeOut = DefaultRequestTimeout)
    {
        var dataReady = new AutoResetEvent(false);
        HttpWebResponse response = null;
        var callback = new AsyncCallback(delegate(IAsyncResult asynchronousResult)
        {
            Stream postStream = request.EndGetRequestStream(asynchronousResult); // End the operation.
            byte[] byteArray = Encoding.UTF8.GetBytes(postData); // Convert the string into a byte array.
            // Use byteArray.Length, not postData.Length: UTF-8 can produce more bytes than characters.
            postStream.Write(byteArray, 0, byteArray.Length);
            postStream.Close();
            dataReady.Set();
        });
        request.BeginGetRequestStream(callback, request);
        if (dataReady.WaitOne(nTimeOut))
        {
            response = (HttpWebResponse)request.GetResponse(nTimeOut);
            if (IsHttpExceptionFound)
            {
                throw new HttpResponseException("Failed to get http response");
            }
            return response;
        }
        return null;
    }
}
Any suggestion on using asynchronous web request to solve my case?
There's an example here of using asynchronous web services in a chained manner to call the Microsoft Translator service on WP7
Maybe it will give you some pointers?
http://blogs.msdn.com/b/translation/p/wp7translate.aspx
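To illustrate the chaining idea in isolation (a sketch only: `GetTokenAsync`, `GetCredentialsAsync`, and `ValidateAsync` are hypothetical stand-ins for the three web API calls), each step starts only when the previous one has completed, and the chain never blocks the calling thread. On WP7 itself, which predates async/await, the same ordering is achieved by starting each request inside the completion callback of the previous one, as the Translator sample above shows.

```csharp
using System;
using System.Threading.Tasks;

static class LoginChain
{
    // Hypothetical stand-ins for the three web API calls.
    static Task<string> GetTokenAsync() => Task.FromResult("token-123");
    static Task<(string User, string Password)> GetCredentialsAsync(string token) =>
        Task.FromResult(("alice", "secret"));
    static Task<bool> ValidateAsync(string user, string password) =>
        Task.FromResult(user == "alice" && password == "secret");

    // The three steps run strictly in order without ever blocking the caller.
    public static async Task<bool> ValidateUserAsync()
    {
        string token = await GetTokenAsync();                    // step 1
        var creds = await GetCredentialsAsync(token);            // step 2
        return await ValidateAsync(creds.User, creds.Password);  // step 3
    }
}
```

The key point is that no `WaitOne` or shared `IsHttpExceptionFound` flag is needed: ordering comes from the continuation structure itself, and an exception in any step propagates out of `ValidateUserAsync` like a normal exception.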

WebService ASP.NET MVC 3 Send and Receive

I've been racking my brain for a couple of days now on how to approach a new requirement.
I have two websites. The first lets the user fill out an application. The second is an internal website used to manage the users' applications. I need to develop a "web service" that sends the application data from website 1 to website 2 and returns a success or failure response to website 1. I have never written a web service before, and I'm a bit confused about where to start. I've been reading various examples online, but they all seem to be just a starting point for building a web service, with no specific examples.
So for posting the data from website 1, what would my controller method look like? Do I use JSON to post the data to website 2? What would an example of that look like? Is there some form of redirect in the method that points to website 2?
And for posting the response back from website 2, what would that controller method look like? I assume I would use JSON again to send the response back to website 1? Is there some form of redirect in the method that points back to website 1?
I would use JSON and POST the application to the web service.
First, I am assuming the application data is contained in some type of object. Use JSON.NET to serialize the object into JSON. It will look something like the following code:
var application = new Application();
string serializedApplication = JsonConvert.SerializeObject(application);
Second is to POST the JSON to your endpoint (web service, MVC action). To do this, you'll need to make an HTTP request to the endpoint. The following code is what I use to make the POST:
public bool Post(string url, string body)
{
    // Make the post (this callback disables certificate validation -- only do this for testing)
    ServicePointManager.ServerCertificateValidationCallback = (sender, certificate, chain, errors) => true;
    var bytes = Encoding.Default.GetBytes(body);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    Stream stream = null;
    try
    {
        request.KeepAlive = false;
        request.ContentLength = bytes.Length;
        request.ContentType = "application/json"; // the body we send is JSON
        request.Timeout = -1;
        request.Method = "POST";
        stream = request.GetRequestStream();
        stream.Write(bytes, 0, bytes.Length);
    }
    finally
    {
        if (stream != null)
        {
            stream.Flush();
            stream.Close();
        }
    }
    bool success = GetResponse(request);
    return success;
}

public bool GetResponse(HttpWebRequest request)
{
    bool success;
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    {
        if (response.StatusCode != HttpStatusCode.OK && response.StatusCode != HttpStatusCode.Created)
        {
            throw new HttpException((int)response.StatusCode, response.StatusDescription);
        }
        using (StreamReader reader = new StreamReader(responseStream))
        {
            string end = reader.ReadToEnd();
            success = JsonConvert.DeserializeObject<bool>(end);
        }
    }
    return success;
}
So now you can POST JSON to an endpoint and receive a response. The next step is to create the endpoint. The following code will get you started on an MVC endpoint that receives an application and processes it:
[HttpPost]
public ActionResult SubmitApplication()
{
    // Retrieve the POSTed payload
    string body;
    using (StreamReader reader = new StreamReader(Request.InputStream))
    {
        body = reader.ReadToEnd();
    }
    var application = JsonConvert.DeserializeObject<Application>(body);
    // Save the application
    bool success = SaveApplication(application);
    // Send the caller a response of success or failure.
    return Json(success);
}
The above code is a good start. Please note, I have not tested this code.
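To make the round trip concrete, here is a self-contained sketch (not the answerer's code) of the same shape: a tiny in-process `HttpListener` plays the role of website 2's endpoint, and an `HttpWebRequest` POSTs a JSON body to it and reads back the JSON response. The URL, port, and payload are arbitrary choices for the sketch.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading.Tasks;

class RoundTrip
{
    // Website 1's side: POST a JSON body and return the response body as a string.
    public static string PostJson(string url, string body)
    {
        var bytes = Encoding.UTF8.GetBytes(body);
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "application/json";
        request.ContentLength = bytes.Length;
        using (var stream = request.GetRequestStream())
            stream.Write(bytes, 0, bytes.Length);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }

    // Website 2's side, stubbed: accept one POST, read the body, reply "true".
    public static string Demo()
    {
        const string prefix = "http://localhost:8099/submitapplication/";
        var listener = new HttpListener();
        listener.Prefixes.Add(prefix);
        listener.Start();
        var serverTask = Task.Run(() =>
        {
            var ctx = listener.GetContext();
            string body;
            using (var reader = new StreamReader(ctx.Request.InputStream))
                body = reader.ReadToEnd(); // here website 2 would deserialize and save the application
            var reply = Encoding.UTF8.GetBytes("true");
            ctx.Response.ContentType = "application/json";
            ctx.Response.OutputStream.Write(reply, 0, reply.Length);
            ctx.Response.Close();
        });
        string result = PostJson(prefix, "{\"Name\":\"Jane\"}");
        serverTask.Wait();
        listener.Stop();
        return result;
    }

    static void Main()
    {
        Console.WriteLine(Demo());
    }
}
```

No redirect is involved in either direction: website 1 makes an HTTP request, and the "response back" is simply the body of that same request's HTTP response, exactly as in the `SubmitApplication` action above.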
You obviously have more than one client for the data and operations, so a service is what you are looking for.
ASP.NET MVC is a good candidate for developing RESTful services. If you (and your manager) are ready to use a beta version, then check out the ASP.NET Web API.
If you want to stay with a stable product, go for MVC 3. You may need to write some custom code to return the data in XML as well as JSON to serve different kinds of clients. There are some tutorials out there.
So create a service (ASP.NET MVC / WCF service). You may then create 2 client apps, one for the external clients and another for the internal users. Both of these apps can call methods in the service to create/read the user accounts, or whatever operation you want to do.
To make the apps more interactive and lively, you may consider including a wonderful thing called SignalR, which helps you get real-time data without continuously polling the database/middle tier every n seconds!

How do you save images to a Blackberry device via HttpConnection?

My script fetches XML via HttpConnection and saves it to the persistent store. No problems there.
Then I loop through the saved data to compose a list of image URLs to fetch via a queue.
Each of these requests calls the HttpConnection thread like so:
...
public synchronized void run()
{
    HttpConnection connection = (HttpConnection) Connector.open("http://www.somedomain.com/image1.jpg");
    connection.setRequestMethod("GET");
    String contentType = connection.getHeaderField("Content-type");
    InputStream responseData = connection.openInputStream();
    connection.close();
    outputFinal(responseData, contentType);
}

public synchronized void outputFinal(InputStream result, String contentType) throws SAXException, ParserConfigurationException, IOException
{
    if (contentType.startsWith("text/"))
    {
        // bunch of xml save code that works fine
    }
    else if (contentType.equals("image/png") || contentType.equals("image/jpeg") || contentType.equals("image/gif"))
    {
        // how to save images here?
    }
    else
    {
        // default
    }
}
What I can't find any good documentation on is how one would take the response data and save it to an image stored on the device.
Maybe I just overlooked something very obvious. Any help is very appreciated.
Thanks
I tried following this advice and found the same thing I always find when looking up BlackBerry-specific issues: nothing.
The problem is that every example or post assumes you know everything about the platform.
Here's a simple question: What line of code writes the read output stream to the BlackBerry device? What path? How do I retrieve it later?
I have this code, which I do not know if it does anything, because I don't know where it is supposedly writing to, or if that's even what it is doing at all:
** filename is determined in a loop based on the URL called.
FileOutputStream fos = null;
try
{
    fos = new FileOutputStream(File.FILESYSTEM_PATRIOT, filename);
    byte[] buffer = new byte[262144];
    int byteRead;
    while ((byteRead = result.read(buffer)) != -1)
    {
        fos.write(buffer, 0, byteRead);
    }
    fos.flush();
    fos.close();
}
catch (IOException ieo)
{
    // swallowed -- at minimum this should be logged
}
finally
{
    if (fos != null)
    {
        try
        {
            fos.close(); // close() itself can throw a checked IOException
        }
        catch (IOException ignored)
        {
        }
    }
}
The idea is that I have some 600 images pulled from a server. I need to loop the xml and save each image to the device so that when an entity is called, I can pull the associated image - entity_id.png - from the internal storage.
The documentation from RIM does not specify this, nor does it make it easy to begin figuring it out.
This issue does not seem to be addressed on this forum, or others I have searched.
Thanks
You'll need to use the Java FileOutputStream to do the writing. You'll also want to close the connection after reading the data from the InputStream (move outputFinal above your call to close). You can find all kinds of examples regarding FileOutputStream easily.
See here for more. Note that in order to use the FileOutputStream your application must be signed.
