I am building a Firefox extension which allows users to drag and drop things. If a user closes the application or reloads the page, I want to restore their last activity.
Example: a user moves a box from point X to Y.
There can be many more boxes as well.
Now, after a page reload or application startup, if the user turns the add-on on, I want the box's position to be Y. For this purpose, should I be using Firefox preferences, or is there a better way of doing it?
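For a handful of box positions, the preferences service can work; a minimal sketch (the preference branch and box names are only illustrative, not from the question):
var prefs = Components.classes["@mozilla.org/preferences-service;1"]
    .getService(Components.interfaces.nsIPrefService)
    .getBranch("extensions.mydragaddon.");

// Save on drop:
prefs.setCharPref("boxPositions", JSON.stringify({ box1: { x: 120, y: 340 } }));

// Restore on startup / page load:
var saved = prefs.prefHasUserValue("boxPositions") ? JSON.parse(prefs.getCharPref("boxPositions")) : {};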
I wrote a Firefox extension for accessing bookmarks from a site a friend of mine developed. I store the bookmarks data locally as JSON in a text file in the user's extension profile directory in case the bookmark service is down.
My function for saving bookmarks JSON is:
/**
 * Stores bookmarks JSON to local persistence asynchronously.
 *
 * @param bookmarksJson The JSON to store
 * @param fnSuccess The function to call upon success
 * @param fnError The function to call upon error
 */
RyeboxChrome.saveBookmarkJson = function(bookmarksJson, fnSuccess, fnError) {
    var cu = Components.utils;
    cu.import("resource://gre/modules/NetUtil.jsm");
    cu.import("resource://gre/modules/FileUtils.jsm");

    var bookmarksJsonFile = RyeboxChrome.getOrCreateStorageDirectory();
    bookmarksJsonFile.append("bookmarks.txt");

    // You can also optionally pass a flags parameter here. It defaults to
    // FileUtils.MODE_WRONLY | FileUtils.MODE_CREATE | FileUtils.MODE_TRUNCATE;
    var ostream = FileUtils.openSafeFileOutputStream(bookmarksJsonFile);

    var converter = Components.classes["@mozilla.org/intl/scriptableunicodeconverter"]
        .createInstance(Components.interfaces.nsIScriptableUnicodeConverter);
    converter.charset = "UTF-8";
    var istream = converter.convertToInputStream(bookmarksJson);

    NetUtil.asyncCopy(istream, ostream, function(status) {
        if (!Components.isSuccessCode(status) && typeof(fnError) === 'function') {
            fnError();
        } else if (typeof(fnSuccess) === 'function') {
            fnSuccess();
        }
    });
};
The function for reading the data is:
/**
 * Reads bookmarks JSON from local persistence asynchronously.
 *
 * @param fnSuccess Function to call when successful. The bookmarks JSON will
 *                  be passed to this function.
 *
 * @param fnError Function to call upon failure.
 */
RyeboxChrome.getBookmarksJson = function(fnSuccess, fnError) {
    Components.utils.import("resource://gre/modules/NetUtil.jsm");

    var bookmarksJsonFile = RyeboxChrome.getOrCreateStorageDirectory();
    bookmarksJsonFile.append("bookmarks.txt");

    NetUtil.asyncFetch(bookmarksJsonFile, function(inputStream, status) {
        if (!Components.isSuccessCode(status) && typeof(fnError) === 'function') {
            fnError();
        } else if (typeof(fnSuccess) === 'function') {
            var data = NetUtil.readInputStreamToString(inputStream, inputStream.available());
            fnSuccess(data);
        }
    });
};
Finally, my getOrCreateStorageDirectory function is:
/**
 * Storage of data is done in a Ryebox directory within the user's profile
 * directory.
 */
RyeboxChrome.getOrCreateStorageDirectory = function() {
    var ci = Components.interfaces;
    let directoryService = Components.classes["@mozilla.org/file/directory_service;1"].getService(ci.nsIProperties);

    // Reference to the user's profile directory
    let localDir = directoryService.get("ProfD", ci.nsIFile);
    localDir.append("Ryebox");

    if (!localDir.exists() || !localDir.isDirectory()) {
        localDir.create(ci.nsIFile.DIRECTORY_TYPE, 0774);
    }
    return localDir;
};
It was suggested to me by Nickolay Ponomarev that one of the following could be a good option:
Annotation Service
SQLite DB
Right now I am planning to use a database for this.
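For the database route, a rough sketch with the legacy mozIStorageService API, reusing the getOrCreateStorageDirectory helper from above (the file, table, and column names are only illustrative):
var Cc = Components.classes, Ci = Components.interfaces;
var dbFile = RyeboxChrome.getOrCreateStorageDirectory();
dbFile.append("ryebox.sqlite");

var storageService = Cc["@mozilla.org/storage/service;1"].getService(Ci.mozIStorageService);
var conn = storageService.openDatabase(dbFile); // creates the file if it does not exist yet
conn.executeSimpleSQL("CREATE TABLE IF NOT EXISTS box_positions (id TEXT PRIMARY KEY, x INTEGER, y INTEGER)");

var stmt = conn.createStatement("INSERT OR REPLACE INTO box_positions (id, x, y) VALUES (:id, :x, :y)");
stmt.params.id = "box1"; // hypothetical box identifier
stmt.params.x = 120;
stmt.params.y = 340;
stmt.execute();
stmt.finalize();
conn.close();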
I am trying to get a cache working in my plugin.
In ext_localconf.php
if (!is_array($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension'] = [];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['frontend'] = 'TYPO3\\CMS\\Core\\Cache\\Frontend\\StringFrontend';
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['options'] = ['defaultLifetime' => 0];
}
if (!isset($GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'])) {
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['myextension']['groups'] = ['pages'];
}
In my controller action:
$cacheIdentifier = 'topic' . $topic->getUid();
$cache = GeneralUtility::makeInstance(CacheManager::class)->getCache('myextension');
$result = unserialize($cache->get($cacheIdentifier));

if ($result !== false) {
    DebuggerUtility::var_dump($result);
} else {
    $result = $this->postRepository->findByTopic($topic->getUid(), $page, $itemsPerPage);
    $cache->set($cacheIdentifier, serialize($result), ['tag1', 'tag2']);
    DebuggerUtility::var_dump($result);
}
The first time the page with the action gets loaded, everything is OK and the entry is made in the database (cf_myextension and cf_myextension_tags).
But the second time, the cache gets loaded and I get an error. Even DebuggerUtility::var_dump($result); does not work:
Call to a member function map() on null
in ../typo3/sysext/extbase/Classes/Persistence/Generic/QueryResult.php line 96
protected function initialize()
{
    if (!is_array($this->queryResult)) {
        $this->queryResult = $this->dataMapper->map($this->query->getType(), $this->persistenceManager->getObjectDataByQuery($this->query));
    }
}
A normal var_dump works and spits out the cache entry. What is the problem? Am I forgetting something? Can a QueryResult, together with some other variables, not be stored as an array in the cache? I also tried the VariableFrontend cache, which produced the same error.
We have an SAPUI5 app deployed on SAP PO. The problem is that whenever we make changes and deploy a new version of our application, the changes are not reflected, and we need to do a hard reload and clear the browser cache to fetch the new changes.
This is causing a lot of issues, as we cannot ask clients to clear their cache after every change.
Below are the unsuccessful methods we tried so far:
Enabling "resources/sap-ui-cachebuster/sap-ui-core.js" in SAPUI5 bootstrap.
Using 'Application Cache Buster' for application resources (using sap-ui-cachebuster-info.json)
Setting HTML meta tags to disable caching:
<meta http-equiv='cache-control' content='no-cache, no-store, must-revalidate'>
<meta http-equiv='Expires' content='-1'>
<meta http-equiv='Pragma' content='no-cache'>
Clearing cookies with the code below:
document.cookie.split(";").forEach(function(c) {
    document.cookie = c.replace(/^ +/, "").replace(/=.*/, "=;expires=" + new Date().toUTCString() + ";path=/");
});
None of the above solutions have worked so far. This is what we see in the Network tab of Chrome:
NOTE: The application is deployed on SAP PO 7.4 (Java stack).
We had the same issue as you on SAP MII, and I spent months with several OSS calls for SAP to provide an acceptable solution.
They did so in SP3 of SAP MII (we haven't updated yet, but I hope their correction is right). This will not apply directly in your case since you're on SAP PO, but it's still a Java stack.
So I think you should open an OSS call and point SAP to the following SAP Notes:
2463286 - Issues when reloading JavaScript files
2459768 - Force browsers to reload modified resource files
They will probably redirect you to the following Stack Overflow topic:
http://stackoverflow.com/questions/118884/how-to-force-browser-to-reload-cached-css-js-files
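For reference, the workaround discussed in that thread boils down to versioning the resource URLs so a changed file gets a new URL and the stale cached copy is bypassed, for example (the file name and version string are only illustrative):
<script src="resources/my-app.js?v=20170915"></script>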
But this is only a workaround; the SAP web server on the Java stack doesn't seem to be working correctly, and they have to provide a correction.
Hope this will help you.
EDIT
Here is an update: there is a workaround that we sometimes use.
We have a URL parameter which is used to identify whether a reload of the page is needed.
Below is a JS snippet that we embed in the index.html page of the SAPUI5 app.
Hope this will help you.
<script>
window.onload = function () {
    version = undefined;
    fCheckVersionMatch = false;
    onInit();
};
/***************************************************************************
 * Function run when we start the application. It tests:
 * - whether the reload parameter is set in the URL
 * - whether we are loading an old application with a stale reload value
 ****************************************************************************/
var onInit = function() {
    checkParamReload();
};
/***************************************************************************
 * Check whether the URL contains the reload value and whether that value is
 * more than a minute old => avoids loading a previous version with an
 * incorrect time stamp when the page is opened from a favorite/bookmarked link.
 ****************************************************************************/
var checkParamReload = function() {
    var sUrlParameters = window.top.document.location.search;
    var regexReload = /(\?|&)reload=([^&]*)/;
    var aReload = sUrlParameters.match(regexReload);
    var nTime = aReload == null ? NaN : parseInt(aReload[2]);
    if (isNaN(nTime) || Math.abs(Date.now() - nTime) > 60000) {
        // In case no reload tag is present or its value is more than 1 minute old, reload the page
        reloadPage(true); // True means force reload => reset retry count.
    }
};
/***************************************************************************
 * Reload the page and make sure the reload param is updated.
 * If force reload is used, the retry count is reset; otherwise it is
 * incremented up to a limit which - when reached - stops the reload
 * process and displays an error message instead.
 ****************************************************************************/
var reloadPage = function (bForce) {
    var retries = 0;
    var oLocation = window.top.document.location;
    var sSearch = oLocation.search;
    sSearch = queryReplace(sSearch, "reload", _ => Date.now());
    if (bForce) {
        sSearch = queryReplace(sSearch, "retry", _ => 0);
    } else {
        sSearch = queryReplace(sSearch, "retry", function (n) {
            if (isNaN(parseInt(n))) {
                return 0;
            } else {
                retries = parseInt(n);
                return retries + 1;
            }
        });
    }
    if (retries < 10) {
        // Reload page
        window.top.document.location.replace(oLocation.origin + oLocation.pathname + sSearch + oLocation.hash);
    } else {
        // Display error
        document.getElementById('Error').style.display = "block";
    }
};
var queryReplace = function (sQuery, sAttribute, fnReplacement) {
    // Match the attribute with its value
    var matcher = new RegExp(`(\\?|&)${ sAttribute }=([^&]*)`);
    var sNewQuery = sQuery.length < 2 ? `?${ sAttribute }=` : sQuery;
    if (sNewQuery.search(matcher) < 0) {
        // If we could not match, we add the attribute at the end
        sNewQuery += "&" + sAttribute + "=" + fnReplacement("");
    } else {
        sNewQuery = sNewQuery.replace(matcher, (_, delim, oldVal) => delim + sAttribute + "=" + fnReplacement(oldVal));
    }
    return sNewQuery;
};
</script>
I really love the DropzoneJS component and am currently wrapping it in an EmberJS component (you can see a demo here). In any event, the wrapper works just fine, but I wanted to listen in on one of Dropzone's events and introspect the file contents (not the meta info like size, lastModified, etc.). The file type I'm dealing with is an XML file and I'd like to look "into" it to validate it before sending.
How can one do that? I would have thought the contents would hang off of the file object that you can pick up on many of the events but unless I'm just missing something obvious, it isn't there. :(
This worked for me:
Dropzone.options.PDFDrop = {
    maxFilesize: 10, // MB
    accept: function(file, done) {
        var reader = new FileReader();
        reader.addEventListener("loadend", function(event) {
            console.log(event.target.result); // the file's text contents
        });
        reader.readAsText(file);
        done(); // tell Dropzone the file is accepted
    }
};
You could also use reader.readAsBinaryString() if it's binary data.
OK, I've answered my own question, and since others appear interested I'll post my answer here. You can find a working demo here:
https://ui-dropzone.firebaseapp.com/demo-local-data
In the demo I've wrapped the Dropzone component in the EmberJS framework, but if you look at the code you'll find it's just JavaScript, nothing much to be afraid of. :)
The things we'll do are:
Get the file before the network request
The key thing we need to become familiar with is the HTML5 File API. The good news is it's quite simple. Take a look at this code and maybe that's all you need:
/**
 * Replaces the XHR's send operation so that the stream can be
 * retrieved on the client side instead of being sent to the server.
 * The function name is a little confusing (other than it replaces the "send"
 * from Dropzonejs) because really what it's doing is reading the file and
 * NOT sending it to the server.
 */
_sendIntercept(file, options={}) {
    return new RSVP.Promise((resolve, reject) => {
        if (!options.readType) {
            const mime = file.type;
            const textType = a(_textTypes).any(type => {
                const re = new RegExp(type);
                return re.test(mime);
            });
            options.readType = textType ? 'readAsText' : 'readAsDataURL';
        }
        let reader = new window.FileReader();
        reader.onload = () => {
            resolve(reader.result);
        };
        reader.onerror = () => {
            reject(reader.result);
        };
        // run the reader
        reader[options.readType](file);
    });
},
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L10-L38
The code above returns a Promise which resolves once the file that's been dropped into the browser has been "read" into JavaScript. This should be very quick as it's all local (do be aware that if you're loading really large files you might want to "chunk" it ... that's a more advanced topic).
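If you do need chunking, a rough sketch of the idea using File.slice (the helper name and chunk size are my own, not part of ui-dropzone):
function readInChunks(file, chunkSize, onChunk, onDone) {
    var offset = 0;
    var reader = new FileReader();
    reader.onload = function () {
        onChunk(reader.result);                     // hand each slice to the caller
        offset += chunkSize;
        offset < file.size ? readNext() : onDone(); // keep going until the end of the file
    };
    function readNext() {
        reader.readAsText(file.slice(offset, offset + chunkSize));
    }
    readNext();
}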
Hook into Dropzone
Now we need to find somewhere to hook into Dropzone to read the file contents and stop the network request that we no longer need. Since the HTML5 File API just needs a File object, you'll notice that Dropzone provides all sorts of hooks for that.
I decided on the "accept" hook because it would give me the opportunity to download the file and validate it all in one go (for me it's mainly about dragging and dropping XMLs, so the content of the file is part of the validation process) and, crucially, it happens before the network request.
Now it's important you realise that we're "replacing" the accept function, not listening to the event it fires. If we just listened we would still incur a network request. So to *overload* accept we do something like this:
this.accept = this.localAcceptHandler; // replace "accept" on Dropzone
This will only work if "this" is the Dropzone object. You can achieve that by:
including it in your init hook function (see the sketch below)
including it as part of your instantiation (e.g., new Dropzone({ accept: ... }))
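For example, option 1 might look roughly like this (a sketch; the selector and url are placeholders, and it assumes the _sendIntercept and localAcceptHandler functions from this answer are defined as plain functions in scope):
new Dropzone("#my-dropzone", {
    url: "/unused", // never actually hit once submitRequest is replaced (see "Stop the send" below)
    init: function () {
        this._sendIntercept = _sendIntercept;         // make the helper above available on the instance
        this.localAcceptHandler = localAcceptHandler;
        this.accept = this.localAcceptHandler;        // replace "accept" on Dropzone
    }
});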
Now that we've referred to the "localAcceptHandler", let me introduce it to you:
localAcceptHandler(file, done) {
    this._sendIntercept(file).then(result => {
        file.contents = result;
        if (typeOf(this.localSuccess) === 'function') {
            this.localSuccess(file, done);
        } else {
            done(); // empty done signals success
        }
    }).catch(result => {
        if (typeOf(this.localFailure) === 'function') {
            file.contents = result;
            this.localFailure(file, done);
        } else {
            done(`Failed to download file ${file.name}`);
            console.warn(file);
        }
    });
}
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L40-L64
In quick summary it does the following:
read the contents of the file (aka, _sendIntercept)
based on mime type read the file either via readAsText or readAsDataURL
save the file contents to the .contents property of the file
Stop the send
To intercept the sending of the request on the network but still maintain the rest of the workflow, we will replace a function called submitRequest. In the Dropzone code this function is a one-liner, and what I did was replace it with my own one-liner:
this._finished(files,'locally resolved, refer to "contents" property');
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/mixins/xhr-intercept.js#L66-L70
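In context, that replacement might be wired up roughly like this (a sketch based on Dropzone's submitRequest signature):
this.submitRequest = function (xhr, formData, files) {
    this._finished(files, 'locally resolved, refer to "contents" property');
};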
Provide access to retrieved document
The last step is just to ensure that our localAcceptHandler is put in place of the accept routine that Dropzone supplies:
https://github.com/lifegadget/ui-dropzone/blob/0.7.2/addon/components/drop-zone.js#L88-L95
Using the FileReader() solution is working amazingly well for me:
Dropzone.autoDiscover = false;
var dz = new Dropzone("#demo-upload", {
    autoProcessQueue: false,
    url: 'upload.php'
});

dz.on("drop", function drop(e) {
    var files = [];
    for (var i = 0; i < e.dataTransfer.files.length; i++) {
        files[i] = e.dataTransfer.files[i];
    }
    var reader = new FileReader();
    reader.onload = function(event) {
        var line = event.target.result.split('\n');
        for (var i = 0; i < line.length; i++) {
            console.log(line[i]);
        }
    };
    reader.readAsText(files[files.length - 1]);
});
I'm using Laravel 4 with the dompdf package: https://github.com/barryvdh/laravel-dompdf
When I generate a report and it is converted to PDF on my local machine, everything is fine and displays well, but when I do the exact same thing on my production server, it displays random letters where there is either dynamic or static content.
Screenshot of local vs production:
http://s28.postimg.org/u5zk3pc19/report_diff.png
Here is the code that creates the PDF:
/**
* Create PDF
*
*/
public function createPdf( $reportData )
{
    if( $this->validate() )
    {
        // Get Final Data Information
        $btu_hp = static::getBtuHp( $reportData['booth_cfm'], $reportData['cure_temp_hp'], $reportData['outside_temp'] );
        $btu_current = static::getBtuCurrent( $reportData['booth_cfm'], $reportData['bake_temp_current'], $reportData['outside_temp'] );
        $reportData['energy_percentage_per_unit'] = static::getEnergyPercentagePerUnit( $btu_hp, $btu_current );
        $reportData['energy_dollar_per_unit'] = static::getEnergyDollarPerUnit( $reportData['cost_per_therm'], $reportData['bake_time_current'], $reportData['cure_time_hp'], $btu_current, $btu_hp );
        $reportData['time_savings_per_unit'] = static::getTimeSavingsPerUnit( $reportData['bake_time_current'], $reportData['cure_time_hp'] );
        $reportData['time_savings_per_year'] = static::getTimeSavingsPerYear( $reportData['time_savings_per_unit'][0], $reportData['units_per_day'], $reportData['production_days'] );
        $reportData['labor_dollar_per_year'] = static::getLaborDollarPerYear( $reportData['labor_rate'], $reportData['time_savings_per_year'][0] );
        $reportData['energy_dollar_per_year'] = static::getEnergyDollarPerYear( $reportData['energy_dollar_per_unit'][0], $reportData['units_per_day'], $reportData['production_days'] );

        $view = View::make('pages.report.hp-report.print', array('report' => $reportData));

        if( ! $this->saveAsPdf($view, $this->generateFileName()) )
        {
            return false;
        }
        return true;
    }
    return false;
}
/**
 * Save report as PDF
 * @param html HTML of PDF
 * @param fileName Name of File
 *
 */
public function saveAsPdf( $html, $fileName = null )
{
    if(is_null($fileName))
        $fileName = $this->generateFileName();

    $htmlPath = $this->reportDirectory.'/'.$fileName.'.html';
    $pdfPath = $this->reportDirectory.'/'.$fileName.'.pdf';

    file_put_contents( $htmlPath, $html );

    // set recent PDF to name of PDF
    $this->recentReportFile = $fileName . '.pdf';

    return PDF::loadFile($htmlPath)->save($pdfPath);
}

/**
 * Get most recent uploaded PDF
 *
 */
public function getRecentPdf()
{
    return $this->recentReportFile;
}

/**
 * Generate file name for PDF
 *
 */
public function generateFileName()
{
    return Auth::user()->id . '_hp_' . str_random(10) . '_' . time();
}
Everything writes fine, it uses the right template, and the styling is there... Only the static and dynamic content (values written out with PHP variables) displays badly, although you can see that some of the static content, like "Energy Savings", prints fine.
Is there a reason this could be all jumbled up on the live server but not locally?
Here is the HTML for the view that is being grabbed (the HTML the php variables are injected into): http://pastebin.com/5bMR6G2s
And here is my config file for dompdf:
http://pastebin.com/Ld6MQckG
There are two possible causes for this:
Missing font(s) on the production server. Make sure you have the correct fonts installed on the production site.
Character encoding issues. I'm not sure which site (dev/live) the issue is on, but it may be that one is outputting UTF-8 and the other is not. You could try to sort this out by detecting the encoding of the input file on both dev and live using mb_detect_encoding and seeing if they differ. If they do, use mb_convert_encoding before converting to PDF (see the sketch below).
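A rough sketch of that check, run on the generated HTML before it is handed to dompdf (the candidate encoding list is only a guess):
$html = file_get_contents($htmlPath);
$encoding = mb_detect_encoding($html, ['UTF-8', 'ISO-8859-1', 'Windows-1252'], true);
if ($encoding !== false && $encoding !== 'UTF-8') {
    // Normalize everything to UTF-8 before dompdf sees it
    $html = mb_convert_encoding($html, 'UTF-8', $encoding);
    file_put_contents($htmlPath, $html);
}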
I am using GWT and I want to upload an image and display its preview without interacting with the server, so the gwtupload lib does not look like the way to go.
After the image is uploaded and displayed, the user can optionally save it. The idea is to send the image through GWT-RPC as a Base64-encoded String and finally store it in the DB as a CLOB.
Is there any easy way to do it either with GWT or using JSNI?
This document answers your question:
Reading files in JavaScript using the File APIs
/**
 * This is a native JS method that utilizes FileReader in order to read an image from the local file system.
 *
 * @param event A native event from the FileUploader widget. It is needed in order to access the FileUploader itself.
 * @return The result will contain the image data encoded as a data URL.
 *
 * @author Dušan Eremić
 */
private native String loadImage(NativeEvent event) /*-{
    var image = event.target.files[0];

    // Check if the file is an image
    if (image.type.match('image.*')) {
        var reader = new FileReader();
        reader.onload = function(e) {
            // Call back the Java method when done
            imageLoaded(e.target.result);
        }
        // Start reading the image
        reader.readAsDataURL(image);
    }
}-*/;
In the imageLoaded method you can do something like logoPreview.add(new Image(imageSrc)), where imageSrc is the result of the loadImage method.
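Note that calling back into Java from the JSNI block needs the JSNI method-reference syntax; a minimal sketch of that wiring (the class and package names are hypothetical):
// Java side:
private void imageLoaded(String imageSrc) {
    logoPreview.add(new Image(imageSrc));
}

// Inside the loadImage JSNI body, the onload callback would then invoke it along these lines:
// var that = this;
// reader.onload = function(e) {
//     that.@com.example.client.LogoFormView::imageLoaded(Ljava/lang/String;)(e.target.result);
// }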
The handler method for the FileUpload widget looks something like this:
/**
 * Handler for Logo image upload.
 */
@UiHandler("logoUpload")
void logoSelected(ChangeEvent e) {
    if (logoUpload.getFilename() != null && !logoUpload.getFilename().isEmpty()) {
        loadImage(e.getNativeEvent());
    }
}
Let's say you have a camera property which is an instance of FileUploader; do the following:
camera.getElement().setAttribute("accept", "image/*");
camera.getElement().setAttribute("capture", "camera");
camera.getElement().setClassName("camera");
And for imgPreview, an instance of Image, do the following:
imgPreview.getElement().setAttribute("id", "camera-preview");
imgPreview.getElement().setAttribute("src", "#");
imgPreview.getElement().setAttribute("alt", "#");
Now add a handler on the camera object which calls a native method test1(NativeEvent):
camera.addChangeHandler(e -> {
    test1(e.getNativeEvent());
});
public static native void test1(NativeEvent event) /*-{
    var image = event.target.files[0];
    var reader = new FileReader();
    reader.onload = function(e) {
        // Assumes jQuery is available on the page as $wnd.$
        $wnd.$('#camera-preview').attr('src', e.target.result);
    }
    // Start reading the image
    reader.readAsDataURL(image);
}-*/;