I have a mobile application that allows uploading images either from the camera or from the photo library. Before uploading, the user is prompted to choose whether a high-quality image should be uploaded; if yes, the quality option is set to 100, otherwise to 50. This works fine for the Android and iOS builds but not for the Windows build: even after selecting the high-quality option, the uploaded image has low quality. Is there something I'm missing, or a plugin that needs to be added specifically for Windows to make this work?
Please help. I've included part of the code snippet:
.factory('$Camera', function ($q, $crypt, $settings) {
    return {
        getPicture: function () {
            var q = $q.defer();
            var options = {
                quality: $settings.getValue('uploadHighQualityImage') ? 100 : 50,
                destinationType: navigator.camera.DestinationType.FILE_URI,
                sourceType: navigator.camera.PictureSourceType.CAMERA,
                targetWidth: 500,
                targetHeight: 500
            };
            navigator.camera.getPicture(function (result) {
                // resolve the promise with the image URI
                q.resolve(result);
            }, function (err) {
                q.reject(err);
            }, options);
            return q.promise;
        },
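For what it's worth, if the Windows camera implementation turns out to ignore the quality option, one possible client-side workaround is to re-encode the returned picture with a canvas at the desired JPEG quality. This is a sketch only, under the assumption that the returned URI can be loaded into an Image in the webview; it is not a confirmed fix for the Windows build:

// Hypothetical fallback: re-encode the picture in JavaScript when the
// native quality option is ignored. imageSrc is assumed to be the URI
// returned by getPicture.
function reencodeImage(imageSrc, quality) {
    return new Promise(function (resolve, reject) {
        var img = new Image();
        img.onload = function () {
            var canvas = document.createElement('canvas');
            canvas.width = img.width;
            canvas.height = img.height;
            canvas.getContext('2d').drawImage(img, 0, 0);
            // The second argument of toDataURL is the JPEG quality (0 to 1)
            resolve(canvas.toDataURL('image/jpeg', quality));
        };
        img.onerror = reject;
        img.src = imageSrc;
    });
}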
Why is my D3 visualization not captured by PhantomJS? It is the only element on the page that is not captured and saved as a PNG.
My oncoprintsave.js file:
var page = require('webpage').create();
page.viewportSize = { width: 1920, height: 1080 };
page.open('https://jonkatz2.github.io/2019/03/11/D3-oncoprint', function(status) {
    if (status !== 'success') {
        console.log('Unable to load the address!');
        phantom.exit(1);
    } else {
        window.setTimeout(function () {
            page.render('oncoprint.png');
            phantom.exit();
        }, 2000);
    }
});
and in my Ubuntu console I enter:
phantomjs oncoprintsave.js
This is just a representative example. I am making a Shiny app in which I plan to capture some D3 visualizations as PNGs (server-side) and include them in an R Markdown PDF report. I've tried r2d3::save_d3_png and got blank images, so I'm troubleshooting by calling htmlwidgets::saveWidget directly and then making a system call to phantomjs on the resulting HTML page.
I suspect an error in my D3 script is causing it to fail, but I'm too new to D3 to identify it, and no errors appear when I add the --debug=true option.
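One thing that may be worth ruling out (an assumption on my part, not a confirmed cause): the fixed 2-second timeout could fire before the D3 transitions have finished drawing. A minimal variant of the success branch that polls for a rendered svg element instead of using a fixed delay:

// Poll for an <svg> in the page before rendering, instead of a fixed delay.
// Call renderWhenReady(40) in place of the setTimeout block above.
function renderWhenReady(retriesLeft) {
    var hasSvg = page.evaluate(function () {
        return document.querySelector('svg') !== null;
    });
    if (hasSvg || retriesLeft <= 0) {
        page.render('oncoprint.png');
        phantom.exit();
    } else {
        window.setTimeout(function () { renderWhenReady(retriesLeft - 1); }, 250);
    }
}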
I never solved this, but I switched to Chrome and got what I wanted:
google-chrome --headless --disable-gpu --screenshot --window-size=1920,1080 https://jonkatz2.github.io/2019/03/11/D3-oncoprint
I am trying to create a screen-sharing application with the OpenTok JS client that shares the publisher's audio as well.
Screen sharing works fine, but the audio is never shared.
I noticed a warning in the console (Firefox) saying Invalid audioSource passed to Publisher - when using screen sharing no audioSource may be used. Does that mean it is not possible at all, or just that my audio source is invalid?
With v2.13.0 it is now possible to pass a MediaStreamTrack as a custom audioSource and videoSource to initPublisher. This means you are able to add your microphone audio to the screen-sharing stream. This will only work in Chrome or Firefox: IE does not support MediaStreamTracks, and Safari does not currently support screen sharing.
const publish = Promise.all([
    OT.getUserMedia({
        videoSource: 'screen'
    }),
    OT.getUserMedia({
        videoSource: null
    })
]).then(([screenStream, micStream]) => {
    return OT.initPublisher(null, {
        videoSource: screenStream.getVideoTracks()[0],
        audioSource: micStream.getAudioTracks()[0]
    });
});
Here is a sample of it all working: https://output.jsbin.com/wozuhuc. This sample will only work in Firefox, because Chrome needs an extension.
I contacted TokBox support and they confirmed that the audio has to be published in an additional stream.
I had a go at making this work in Chrome, which is possible by using getDisplayMedia({video: true, audio: true}); Chrome now shows a checkbox allowing the user to share device audio.
You can then create a normal publisher that uses the video and audio tracks from this call, like so:
let desktopStream, screenPublisher; // assumed declared here for completeness
let prom = navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
prom.then(function (result) {
    console.log("Collected permission. Going to start publishing.");
    desktopStream = result;
    var audioStream = desktopStream.getAudioTracks()[0];
    var videoStream = desktopStream.getVideoTracks()[0];
    console.log(audioStream);
    // Screen sharing is available. Publish the screen.
    screenPublisher = OT.initPublisher('ownScreen',
        {
            videoSource: videoStream,
            audioSource: audioStream,
            name: 'Sharing Screen',
            maxResolution: { width: 1920, height: 1920 },
            mirror: false,
            fitMode: "contain",
        },
        function (error) {
            if (error) {
                console.log(error);
                // Look at error.message to see what went wrong.
            } else {
                session.publish(screenPublisher, function (error) {
                    if (error) {
                        handleError();
                    }
                    $('#shareScreen').fadeOut(150, function () {
                        $('#stopShare').fadeIn(150);
                    });
                    $('#stopShare').addClass('blob blue');
                });
            }
        }
    );
});
You can also add a name to the screen-share publisher so that subscribers can distinguish between a regular video publisher and this custom screen-share publisher.
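On the subscriber side, a minimal sketch of telling the two apart (assuming a connected session object; the container element ids here are hypothetical). The stream's videoType property is 'screen' for screen-share publishers:

session.on('streamCreated', function (event) {
    // videoType is 'screen' for screen-share streams, 'camera' otherwise
    var isScreen = event.stream.videoType === 'screen';
    session.subscribe(
        event.stream,
        isScreen ? 'screenContainer' : 'cameraContainer', // hypothetical element ids
        { insertMode: 'append' },
        function (error) {
            if (error) console.log(error.message);
        }
    );
});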
If you create a subscriber and connect it to the session, it will receive audio and video from all publishers. As far as I know there is no audio in the screen share itself, which is why you cannot publish it directly. I hope this helps.
I am having difficulties using the Kapsel OData plugin to retrieve data from a store when the device is offline.
Here is the situation:
Cordova application for the Windows platform
When the app opens, I start by opening a store against my OData service (similar to the Northwind service)
The store is created and opened. I then try to retrieve data from the store using OData.read, or by setting a model and then calling read() on it.
The store opens successfully. However, my call to read the data succeeds when the device is online and fails when it is offline, no matter which of the two methods I use.
Here is my code:
function openStore() {
    var properties = {
        "name": "Emergency",
        "host": applicationContext.registrationContext.serverHost,
        "port": applicationContext.registrationContext.serverPort,
        "https": applicationContext.registrationContext.https,
        "serviceRoot": appId,
        "definingRequests": {
            "Products": "/Products"
        }
    };
    store = sap.OData.createOfflineStore(properties);
    store.open(openStoreSuccessCallback, errorCallback);
}
function openStoreSuccessCallback() {
    sap.OData.applyHttpClient();
    retrieveWithModel(); //retrieveWithOData();
}
function retrieveWithModel() {
    var uri = applicationContext.applicationEndpointURL;
    var user = applicationContext.registrationContext.user;
    var password = applicationContext.registrationContext.password;
    var headers = { "X-SMP-APPCID": applicationContext.applicationConnectionId };
    var oModel = new sap.ui.model.odata.ODataModel(uri, {
        json: true,
        user: user,
        password: password,
        headers: headers
    });
    sap.ui.getCore().setModel(oModel);
    oModel.read("/Products", {
        success: function (oEvent) {
            var msg = new Windows.UI.Popups.MessageDialog("Success");
            msg.showAsync();
        },
        error: function (err) {
            console.log("you have failed");
            var msg = new Windows.UI.Popups.MessageDialog("Fail");
            msg.showAsync();
        }
    });
}
function retrieveWithOData() {
    var sURL = applicationContext.applicationEndpointURL + "/Products";
    var oHeaders = {};
    oHeaders['Authorization'] = authStr;
    oHeaders['X-SMP-APPCID'] = applicationContext.applicationConnectionId;
    //oHeaders['Content-Type'] = "application/json";
    //oHeaders['X-CSRF-Token'] = "FETCH";
    var request = {
        headers: oHeaders,
        requestUri: sURL,
        method: "GET"
    };
    OData.read(request,
        function (data, response) {
            console.log('Success');
        },
        function (err) {
            console.log('Fail');
        }
    );
}
Kapsel SDK version is 3.8.0
SMP SDK is SP08
Cordova version 5.3.3
I am wondering if this is an issue with the way the app is launched. I need to open the same instance of the application every time so that the offline store is kept with all its data. Because Cordova-generated Visual Studio projects do not produce an .exe file (only .appx files, which would need to be signed and sideloaded to be used), I proceed as follows: I run the application in online mode from Visual Studio, pin it to the taskbar or Start menu, close it, switch the device to offline mode, and reopen it from the taskbar.
However, it increasingly seems like this method does not work as expected.
Can anyone confirm that a Visual Studio project, opened from the taskbar, should run the same way as when it is run from VS, with the same dependencies, libraries, etc.? If so (and I can't really imagine why it wouldn't be the case), does anyone with experience of these technologies see what the potential issue could be?
Any help would be greatly appreciated.
Thanks!
OK, I found the solution to my problem. In case anyone ever encounters the same issue: the problem was that my offline store was not being used at all (with Fiddler you can see outbound requests to the backend system even in offline mode).
The Visual Studio project does keep the store from one build or launch to the next.
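As a side note for anyone verifying the same thing: once the store is actually in use, the Kapsel offline store can be synchronized explicitly when connectivity returns. A minimal sketch, reusing the store and errorCallback names from the question (flush and refresh are the store's documented sync methods, to the best of my knowledge):

function syncWhenOnline() {
    // flush() sends locally queued changes to the backend,
    // refresh() then pulls fresh data into the offline store
    store.flush(function () {
        store.refresh(function () {
            console.log("Offline store synchronized");
        }, errorCallback);
    }, errorCallback);
}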
Trying to get my debugging up and running in a smooth way. I can currently test the FileSystem API by running on my device, but it is a painful process. I tried to set up the Chrome FileSystem API using code I found here on SO. I get a file system back (although its name is "" and its fullPath is "/"), and Chrome prompts me that the app wants to store data locally, which I accept. But creating a directory fails every time. The same code on the device (the only difference is the request for the local file system) works.
Any thoughts? I would love to be able to debug in Ripple as a first step before moving to the device.
// Chrome dev tools
// Request quota (only for the File System API)
navigator.webkitPersistentStorage.requestQuota(1024 * 1024, function (grantedBytes) {
    window.webkitRequestFileSystem(PERSISTENT, grantedBytes, onFileSystemSuccess, onFileSystemFail);
}, function (e) {
    console.log('Error', e);
});

// Non-Chrome (device): get local file system
// window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, onFileSystemSuccess, onFileSystemFail);
function onFileSystemSuccess(fileSystem) {
    console.log(fileSystem.name);
    console.log(fileSystem.root.name);
    var storageDir = fileSystem.root;
    console.log(storageDir, storageDir.fullPath);

    function fail(error) {
        console.log(error.code);
    }

    // Find or create new STL directory
    function getDirSuccess(dirEntry) {
        console.log("Directory Name: " + dirEntry.name);
        alert("Your files are stored in: " + dirEntry.name);
        // Find or create new templates file.
        // test file
        dirEntry.getFile("readme.txt", { create: true, exclusive: false }, gotFileEntry, fail);
        function gotFileEntry(fileEntry) {
            fileEntry.createWriter(gotFileWriter, fail);
        }
        function gotFileWriter(writer) {
            writer.onwriteend = function (evt) {
                console.log("contents of file now 'some sample text'");
                writer.truncate(11);
                writer.onwriteend = function (evt) {
                    console.log("contents of file now 'some sample'");
                    writer.seek(4);
                    writer.write(" different text");
                    writer.onwriteend = function (evt) {
                        console.log("contents of file now 'some different text'");
                    };
                };
            };
            writer.write("some sample text");
            alert('wrote a text file');
        }
    }

    // The getDirectory call was missing from the original snippet;
    // "STL" is taken from the comment above.
    storageDir.getDirectory("STL", { create: true, exclusive: false }, getDirSuccess, fail);
}
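For completeness, a small hedged sketch for verifying that the writes actually landed, by reading the file back with the standard FileEntry/FileReader API (the fileEntry argument would come from the gotFileEntry callback above):

function readBack(fileEntry) {
    fileEntry.file(function (file) {
        var reader = new FileReader();
        reader.onloadend = function () {
            console.log('File contents: ' + this.result);
        };
        reader.readAsText(file);
    }, function (error) {
        console.log(error.code);
    });
}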
Hi, I am using Fine Uploader version 3.3.0.
I am facing a problem with Fine Uploader in IE9, since it does not support sizeLimit in IE9.
I am checking the file size on the server side with a simple content-length check: if (this.Request.Files[0].ContentLength > 5242880).
However, it takes 1-2 minutes to get this response, and even a 1.4 MB file takes too long to upload.
Can someone please let me know what is causing this? Here is the Fine Uploader code I am using:
$('#restricted-fine-uploader').fineUploader({
    request: {
        endpoint: '/apm/api/job/UploadDocument/?category=' + JobDocuments.category + '&mode=' + JobDocuments.forceupload + '&jobid=' + job_manager_details.jobId
    },
    autoUpload: true,
    text: {
        uploadButton: 'Upload File'
    },
    multiple: false,
    validation: {
        allowedExtensions: ['doc', 'docx', 'xls', 'xlsx', 'pdf'],
        sizeLimit: 5242880,
        itemLimit: 1
    },
    showMessage: function (message) {
        // Using Twitter Bootstrap's classes and jQuery selector and method
        $('#restricted-fine-uploader').append('<div class="alert alert-error">' + message + '</div>');
    }
}).bind('submit', function (event, id, fileName) {
    $('#displaymessage').hide();
    $('li.qq-upload-fail').hide(); // note: no space in the selector
    job_manager_details.isuploading = 1;
    // fileCount++;
}).bind('complete', function (event, id, fileName, responseJSON) {
    $('li.qq-upload-fail').hide();
    $('#displaymessage').hide();
    job_manager_details.isuploading = 0;
    if (responseJSON.success) {
        // fileCount--;
        ShowJobDocuments();
        // if (fileCount == 0 && !$('div.alert-error').html()) {
        $('#jobDocumentDialog').dialog("close");
        // }
    }
});
I just had the same issue and found one more clue: my VM (WinXP/IE8) was incredibly slow while its network adapter was NAT'd, but it became very fast as soon as it was switched to bridged mode.
The speed of the upload should not be influenced by Fine Uploader in any noticeable way. All Fine Uploader does for non-File API browsers, such as IE9 and older, is submit a <form> containing the file and related parameters. If you are noticing slow upload times, most likely something in your environment is the cause. You haven't provided any additional information about your environment, so I can't offer any advice on that front.
As you may already know, client-side file size checking is not possible in IE9 and earlier due to the lack of File API support.
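Since the size check therefore has to happen server-side in IE9, one hedged pattern (a sketch only; responseJSON.error is a hypothetical field your server would set when ContentLength exceeds the limit) is to surface the server's rejection message in the complete handler:

$('#restricted-fine-uploader').bind('complete', function (event, id, fileName, responseJSON) {
    if (!responseJSON.success) {
        // 'error' is a hypothetical field set by the server when
        // Request.Files[0].ContentLength > 5242880 (5 MB)
        $('#restricted-fine-uploader').append(
            '<div class="alert alert-error">' + (responseJSON.error || 'Upload failed') + '</div>'
        );
    }
});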