I am trying to live-stream 360° video via A-Frame by setting the src attribute of the sky to a Flask endpoint that serves each frame when a GET request is made. The code works; however, even though I have disabled caching, the page's memory usage consistently increases over time and it slows down significantly. Much of the memory seems to go to what's labeled as (closure) in the Chrome dev tools (32 bytes per frame).
[Screenshot: Chrome memory usage]
Code:
<a-sky set-sky=""></a-sky>
THREE.Cache.enabled = false;

AFRAME.registerComponent('set-sky', {
  schema: { default: '' },
  init: function () {
    this.timeout = setInterval(this.updateSky.bind(this), 41);
    this.sky = this.el;
    this.sky.setAttribute('crossorigin', 'Anonymous');
    this.sky.setAttribute('src', 'http://0.0.0.0:5000/video_feed?' + new Date().getTime());
  },
  remove: function () {
    clearInterval(this.timeout);
    this.el.removeObject3D(this.object3D);
  },
  updateSky: function () {
    THREE.texture = {};
    this.sky.setAttribute('src', 'http://0.0.0.0:5000/video_feed?' + new Date().getTime());
  }
});
On the server side I also configured the response headers so the images are not stored:
response.headers['Pragma-Directive'] = 'no-cache'
response.headers['Cache-Directive'] = 'no-cache'
response.headers['Cache-Control'] = 'no-store'
response.headers['Pragma'] = 'no-cache'
response.headers['Expires'] = '0'
What can I do to prevent data from being stored in memory?
I am using FastImage to cache images, and it loads images very quickly after caching the data, as expected. But my server generates a new URI (an S3 presigned URL) each time for the same image.
So FastImage treats it as a new image and tries to download it every time, which hurts my app's performance.
My question is: is there an optimal way to render images as fast as possible without caching them?
If you have the chance to modify the server-side application, you can sign requests with Authorization headers instead of creating presigned URLs.
This function should help.
import aws4 from 'aws4';

export function getURIWithSignedHeaders(imagePath) {
  if (!imagePath) {
    return null;
  }

  const expires = 86400; // 24 hours
  const host = `${process.env.YOUR_S3_BUCKET_NAME}.s3.${process.env.YOUR_S3_REGION}.amazonaws.com`;
  // imagePath should be something like images/3_profileImage.jpg
  const path = `/${imagePath}?X-Amz-Expires=${expires}`;

  const opts = {
    host,
    path,
    headers: {
      'Content-Type': 'image/jpeg'
    }
  };

  const { headers } = aws4.sign(opts, {
    accessKeyId: process.env.YOUR_ACCESS_KEY_ID,
    secretAccessKey: process.env.YOUR_SECRET_ACCESS_KEY
  });

  return {
    uri: `https://${host}${path}`,
    headers: {
      Authorization: headers['Authorization'],
      'X-Amz-Content-Sha256': headers['X-Amz-Content-Sha256'],
      'X-Amz-Date': headers['X-Amz-Date'],
      'Content-Type': 'image/jpeg'
    }
  };
}
See: 609974221
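For completeness, here is a minimal usage sketch of how the returned object could be passed to FastImage; the component, import path, image path, and styling are assumptions, not part of the original answer. Because the uri no longer changes per request, FastImage can reuse its cache, while the signed headers take care of authorization.

import React from 'react';
import FastImage from 'react-native-fast-image';
import { getURIWithSignedHeaders } from './getURIWithSignedHeaders'; // hypothetical module path

export function ProfileImage() {
  // The image path is the placeholder from the comment above
  const source = getURIWithSignedHeaders('images/3_profileImage.jpg');
  if (!source) {
    return null;
  }
  return (
    <FastImage
      style={{ width: 120, height: 120 }}
      source={source} // { uri, headers }: the uri stays stable, so the cache key stays stable
      resizeMode={FastImage.resizeMode.cover}
    />
  );
}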
I am running into what looks like a memory leak on Android using Appcelerator. I am making an HTTP GET call repeatedly until all data is loaded. This call happens about 50 times, for a total of roughly 40 MB of JSON. I am seeing the memory usage spike dramatically if this is executed. If I execute these GETs the heap size (as reported by Android Device Monitor, the preferred method to check memory according to the official Appcelerator docs) gets up to ~240 MB and stays there for as long as the app runs. If I do not execute these GETs, it only uses about 50 MB. I don't think this is a false heap reading either, because if I execute the GETs again (from page 1) I run out of memory.
I have looked through the code and cannot find any obvious leaks, such as storing all results in a global variable or something. Are the HTTP responses being cached somewhere?
Here is my code, for reference. syncThings(1, 20) (sanitized name :) ) gets called during startup. It in turn calls a helper function syncDocuments(). Here are the two functions. Don't worry about launchMainWindow() unless you think it could be relevant, but assume it does no cleanup.
function syncThings(page, itemsPerPage) {
  var url = "the_url";
  console.log("Getting page " + page);
  syncDocuments(url,
    function (response) {
      if (response.totalDocumentsInQuery == itemsPerPage) {
        // More pages to get
        setTimeout(function () {
          syncThings(page + 1, itemsPerPage);
        }, 1);
      } else {
        // This was the last page
        launchMainWindow();
      }
    },
    function (e) {
      Ti.API.error('Default error callback called for syncThings;', e);
      dispatcher.trigger('app:update:stop');
    });
}

function syncDocuments(url, successCallback, errorCallback) {
  new HTTPRequest({
    url: url,
    method: 'GET',
    headers: {
      'Content-Type': 'application/json'
    },
    timeout: 30000,
    success: function (response) {
      Ti.API.info('Success callback called for ' + url);
      successCallback(response);
    },
    error: function (error) {
      errorCallback(error);
    }
  }).send();
}
Any ideas? Am I doing something wrong here?
Edit: I am using Titanium SDK 6.0.1.GA. This happens on all Android versions.
Try using the file property of the HTTPClient: http://docs.appcelerator.com/platform/latest/#!/api/Titanium.Network.HTTPClient-property-file
Otherwise the whole response is loaded into memory.
There will be a memory leak fix in 6.1.0 (https://github.com/appcelerator/titanium_mobile/pull/8818) that might help as well.
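For illustration, here is a minimal sketch of that approach; the file name, temp-directory choice, and the successCallback/errorCallback hooks are assumptions standing in for the OP's HTTPRequest wrapper:

var targetFile = Ti.Filesystem.getFile(Ti.Filesystem.tempDirectory, 'sync_page.json');
var client = Ti.Network.createHTTPClient({
    timeout: 30000,
    onload: function () {
        // The response body was written to targetFile instead of being buffered in the JS heap
        var json = JSON.parse(targetFile.read().text);
        successCallback(json);
        targetFile.deleteFile(); // clean up the temp file once it has been parsed
    },
    onerror: function (e) {
        errorCallback(e);
    }
});
client.open('GET', url);
// Stream the body to disk; depending on the SDK version this accepts a full path or a plain file name
client.file = targetFile.nativePath;
client.send();

The trade-off is one disk write per page, but the roughly 40 MB of JSON never has to sit in memory all at once.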
Here is what I'm trying to do:
Retrieve raw data of an image (jpeg) from a URL given to me by an API
Pass the raw data or buffer to a function that uploads it to another server
NEVER PIPE THE IMAGE TO THE DISK
I've followed every example I can find (that doesn't pipe to disk), but the content still comes out corrupted. I have tried forcing various Accept-Encoding values (gzip, deflate), but they basically resolve to the same data, just compressed.
I believe this has something to do with the response encoding rather than how I am asking for the data.
Here's the code so far:
var parsedUrl = require('url').parse(PATH_TO_IMAGE);

var params = {
  hostname: parsedUrl.hostname,
  path: parsedUrl.path
};

return http.get(params, function(photo_res) {
  var photoData = '';
  res.setEncoding('binary');
  photo_res.on('data', function(chunk) {
    photoData += chunk;
  });
  photo_res.on('end', function() {
    // DO STUFF TO UPLOAD IMAGE
  });
  photo_res.on('error', function(err) {
    console.error('Unable to download photo:', err);
    return done(err);
  });
});
You have a simple typographical error that may be causing Node to interpret your data stream with the incorrect type. The error is in this line:
res.setEncoding('binary');
The response object in your callback is named photo_res, so res here refers to something else entirely. To avoid confusion, keep the response variable named res, and since your data is binary, it is better to collect the raw chunks as Buffers instead of setting an encoding and appending to a string.
var http = require('http');

// options is the same params object built from the parsed URL above
http.get(options, function(res) {
  var photoData = [];
  // No setEncoding() call, so each 'data' chunk arrives as a Buffer
  res.on('data', function(chunk) {
    photoData.push(chunk);
  });
  res.on('end', function() {
    // Join all of the chunks into a single Buffer holding the raw image bytes
    var photo = Buffer.concat(photoData);
  });
  res.on('error', function(err) {
    console.error('Unable to download photo:', err);
  });
});
In the example, I store all the chunks of data in an array and then use Buffer.concat() to create a single Buffer. Because no encoding is set on the response, each chunk arrives as a Buffer and they can be joined without conversion. This is better because you were originally appending your image's data onto a string, which may have caused the corruption.
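Since the question's 'end' handler still needs to "DO STUFF TO UPLOAD IMAGE", here is a minimal sketch of forwarding the assembled Buffer to another server without touching the disk; the hostname, path, and callback are placeholders, not details from the original post:

// Hedged sketch: POST the raw JPEG bytes to a hypothetical endpoint.
function uploadPhoto(photo, callback) {
  var req = http.request({
    hostname: 'upload.example.com', // placeholder host
    path: '/photos',                // placeholder path
    method: 'POST',
    headers: {
      'Content-Type': 'image/jpeg',
      'Content-Length': photo.length // byte length of the Buffer
    }
  }, function(res) {
    res.resume(); // drain the response so the socket can be reused
    callback(res.statusCode === 200 ? null : new Error('Upload failed: ' + res.statusCode));
  });
  req.on('error', callback);
  req.end(photo); // write the Buffer as the request body and finish
}

It could then be called from the 'end' handler, e.g. uploadPhoto(photo, done).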
Hi, I am using Fine Uploader version 3.3.0.
I am facing a problem with Fine Uploader in IE9, since it does not support the sizeLimit validation in IE9.
I am checking the file size on the server side with a simple content-length check: if (this.Request.Files[0].ContentLength > 5242880).
But it takes 1-2 minutes to get this response. Also, a 1.4 MB file is taking too long to upload.
Can someone please let me know what is causing this? The following is the Fine Uploader code I am using:
$('#restricted-fine-uploader').fineUploader({
  request: {
    endpoint: '/apm/api/job/UploadDocument/?category=' + JobDocuments.category + '&mode=' + JobDocuments.forceupload + '&jobid=' + job_manager_details.jobId
  },
  autoUpload: true,
  text: {
    uploadButton: 'Upload File'
  },
  multiple: false,
  validation: {
    allowedExtensions: ['doc', 'docx', 'xls', 'xlsx', 'pdf'],
    sizeLimit: 5242880,
    itemLimit: 1
  },
  showMessage: function (message) {
    // Using Twitter Bootstrap's classes and jQuery selector and method
    $('#restricted-fine-uploader').append('<div class="alert alert-error">' + message + '</div>');
  }
}).bind('submit', function (event, id, fileName) {
  $('#displaymessage').hide();
  $('li.qq-upload-fail').hide();
  job_manager_details.isuploading = 1;
  // fileCount++;
}).bind('complete', function (event, id, fileName, responseJSON) {
  $('li.qq-upload-fail').hide();
  $('#displaymessage').hide();
  job_manager_details.isuploading = 0;
  if (responseJSON.success) {
    // fileCount--;
    ShowJobDocuments();
    // if (fileCount == 0 && !$('div.alert-error').html()) {
    $('#jobDocumentDialog').dialog("close");
    // }
  }
});
I just had the same issue and found one more clue.
The VM (WinXP/IE8) was incredibly slow while its network adapter was NAT'd, but it became very fast as soon as it was switched to bridged mode.
The speed of the upload should not be influenced by Fine Uploader in any noticeable way. All Fine Uploader does for non File API browsers, such as IE9 and older, is submit a <form> containing the file and related parameters. If you are noticing slow upload times, most likely something in your environment is the cause of the issue. You haven't provided any additional information about your environment, so I can't offer any advice on that front.
As you may already know, file size checking is not possible client-side in IE9 and earlier due to lack of File API support.
I'm doing a direct upload to S3 using an HTML form and Ajax. It's working well; files get uploaded. My problem is that the progress events arrive too quickly: all of them, up to 99.9%, fire immediately at the start of the upload.
var fd = new FormData();

// put data from the form to FormData object
$.each($('#upload-form').serializeArray(), function(i, field) {
  fd.append(field.name, field.value);
});

// add selected file to the form data
var file = document.getElementById('path-to-file').files[0];
fd.append("file", file);

var xhr = getXmlHttpRequest(); // cross-browser implementation
xhr.upload.addEventListener("progress", function(e) {
  console.log(e.loaded + "/" + e.total);
});
xhr.open('POST', 'http://mybucket.s3.amazonaws.com/', true);
xhr.send(fd);
I also tried it this way:
xhr.upload.onprogress = function(evt) {
  if (evt.lengthComputable) {
    console.log(evt.loaded + "/" + evt.total);
  }
};
Browser log looks like this:
[22:54:47.245] POST http://mybucket.s3.amazonaws.com/
[22:54:47.287] 359865/5680475
[22:54:47.330] 1408441/5680475
[22:54:47.408] 2751929/5680475
[22:54:47.449] 3964345/5680475
[22:54:47.509] 5668281/5680475
and then nothing for the time it actually takes to upload a 5 MB file.
If browser info is relevant, I am using Firefox 20.0.1.