I was just wondering if there is a convenient way to cache an entire PWA on click, just like you would download and install a native app from an app store?
If I am not wrong, the only current solution is to list all existing files in an array and pass them to the cache.addAll method (as you see below). You can then execute the function when the button is clicked.
function downloadApp() {
    caches.open(appCache).then(function (cache) {
        return cache.addAll([
            '/',
            '/files/1',
            '/files/2',
            // ...
            // All PWA files
        ]);
    });
}
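For reference, hooking that function up to a click is then just a normal event listener (the button id below is only a placeholder I made up for illustration):
// Assumed markup: <button id="download-app">Install offline</button>
document.getElementById('download-app').addEventListener('click', function () {
    downloadApp();
});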
Do you know any better approaches for this?
I'm creating an Electron app that saves the user's progress in a file. I want the app to show the usual 'Save changes before closing' prompt when the user has unsaved changes and tries to close the app.
I could show a custom dialog; however, I would like to do it the native way.
(Example: on macOS, when you edit a file, the red close button changes, letting the user know that the app has unsaved content.)
I know this probably has to be done inside Electron's listener for the app closing:
app.on('window-all-closed', () => {
    if (process.platform !== 'darwin') {
        app.quit()
    }
})
... preventing quit() from being called, and instead handling the unsaved file state and the dialog.
PS: I already handle the logic to know whether the user has saved their progress or not. I just want to know how to set the 'unsaved' state on my Electron app and handle it correctly.
(The example is Visual Studio Code, which is also an Electron app.)
I usually use a global variable to indicate that changes have occurred. For example, in the case of closing the app:
Code in the main process:
mainWindow.on('close', function (event) {
    if (global.savetoask == 'Yes') {
        event.preventDefault();
        // send an IPC message to request a confirm dialog
        // .............
    } else {
        app.exit();
    }
});
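As a rough end-to-end sketch of the native pieces (the flag name and dialog wording below are my own assumptions, and it shows the dialog directly from the main process with dialog.showMessageBoxSync instead of round-tripping over IPC; setDocumentEdited only has a visible effect on macOS):
const { dialog } = require('electron');

let hasUnsavedChanges = false; // assumed flag, set by your own save/change logic

function markDirty(win, dirty) {
    hasUnsavedChanges = dirty;
    // macOS only: toggles the dot in the red close button for unsaved documents
    win.setDocumentEdited(dirty);
}

function attachCloseHandler(win) {
    win.on('close', function (event) {
        if (!hasUnsavedChanges) {
            return; // nothing to ask, let the window close normally
        }
        const choice = dialog.showMessageBoxSync(win, {
            type: 'question',
            buttons: ['Save', "Don't Save", 'Cancel'],
            defaultId: 0,
            cancelId: 2,
            message: 'Do you want to save your changes before closing?'
        });
        if (choice === 2) {
            event.preventDefault(); // Cancel: keep the window open
        } else if (choice === 0) {
            // Save: trigger your own save logic here (e.g. via IPC to the renderer)
        }
    });
}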
We are using the latest version, 4.7.3, of CKEditor (Full) available from NuGet. We've tried a number of suggested solutions to disable the Preview toolbar button while in source mode, but could not get any of them to work. In some cases there is more than one editor on a page, and they are added as user controls (.ascx) due to some unrelated logic. For example, we've tried the below:
CKEDITOR.on('instanceReady', function (instance) {
    instance.editor.addCommand('preview', {
        modes: { wysiwyg: 1, source: 0 }
    });
});
We configure the toolbar buttons via config.js.
CKEDITOR.editorConfig = function (config) {
    config.toolbar_CMToolbar = [
        { name: 'sourcedialog', items: ['Source', '-', 'Preview'] }
    ];
};
The reason we need this is to avoid a security issue where a malicious script is added while in source mode and Preview is immediately requested, causing the JavaScript to execute. Ordinarily, switching back to WYSIWYG mode would clean this up and the malicious script would be removed.
Below is the sample script that triggers the issue, for reference (include everything from the double quote to the tag close):
"><img src=x onerror=alert(7)>
Granted, this just sidesteps the main issue rather than fixing it, but the workaround can be put in place more quickly.
Hoping to hear suggestions on how to correct this. Thanks!
You can change properties of commands like this:
CKEDITOR.on('instanceReady', function (evt) {
    evt.editor.commands.preview.modes.source = 0;
});
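If changing the modes map after the fact doesn't take for editors that are already set up, a related sketch (plain CKEditor 4 APIs, but untested against a multi-instance .ascx setup) is to disable the command explicitly whenever the editor switches into source mode:
CKEDITOR.on('instanceReady', function (evt) {
    var editor = evt.editor;
    editor.on('mode', function () {
        var previewCmd = editor.getCommand('preview');
        if (!previewCmd) {
            return;
        }
        // Grey out Preview while in source mode, restore it in wysiwyg mode
        if (editor.mode === 'source') {
            previewCmd.disable();
        } else {
            previewCmd.enable();
        }
    });
});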
General Questions
Hello! I'm delving into the world of Chrome Extensions and am having some problems getting the overall workflow down. It seems that Google has recently switched to heavily advocating Event Pages instead of keeping everything in background.js and background.html. I take part of this to mean that we should pass off most of our extension logic to a content script.
In Google's Event Page primer, they have the content script listed in the manifest.json file. But in their event page example extension, it is brought in via this code block in background.js: chrome.tabs.executeScript(tab.id, {file: "content.js"}, function() { });
What are the advantages of doing it one way over the other?
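For comparison, my understanding is that the manifest-based alternative would just declare the scripts under content_scripts in manifest.json (the match pattern below is only an example), so they are injected on every matching page automatically rather than only when the browser action is clicked:
"content_scripts": [
    {
        "matches": ["*://*/*"],
        "js": ["jquery-2.0.2.min.js", "content.js"]
    }
]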
My Code
I'm going forward with the programmatic way of injecting the content script, like Google's example.
manifest.json
{
    "manifest_version": 2,
    "name": "Test",
    "description": "Let's get this sucker working",
    "version": "0.0.0.1",
    "permissions": [
        "tabs",
        "*://*/*"
    ],
    "background": {
        "scripts": ["background.js"],
        "persistent": false
    },
    "browser_action": {
        "default_icon": "icon.png"
    }
}
background.js
chrome.browserAction.onClicked.addListener(function () {
    console.log("alert from background.js");
    chrome.tabs.executeScript({file: "jquery-2.0.2.min.js"}, function () {
        console.log("jquery Loaded");
    });
    chrome.tabs.executeScript({file: "content.js"}, function () {
        console.log("content loaded");
    });
});
content.js
console.log('you\'re in the world of content.js');

var ans = {};

ans.createSidebar = function () {
    return {
        init: function () {
            alert("why hello there");
        }
    };
}();

ans.createSidebar.init();
I am able to get the first 3 console.log statements to show up in the background page's debugger. I'm also able to get the alert from content.js to show up in any website. But I'm not able to see the console.log from content.js, nor am I able to view any of the JS from content.js. I've tried looking in the "content scripts" section of the background page debugger's Sources tab. A few other posts on SO have suggested adding debugger; statements to get it to show, but I'm not having any luck with anything. The closest solution I've seen is this post, but it is done by listing the content script in the manifest.
Any help would be appreciated. Thanks!
Content scripts' console.log messages are shown in the web page's console instead of the background page's inspector.
Adding debugger; works if the Developer Tools (for the web page where your content script is injected) are open.
Therefore, in this case, you should first open the Developer Tools (of the web page) before clicking the browser action icon, and everything should work just fine.
I tried to use the debugger method, but it doesn't work well because the project is using require.js to bundle JavaScript files.
If you are also using require.js for Chrome extension development, you can try adding something like this to the code base, and change eval(xhr.responseText) to eval(xhr.responseText + "\n//# sourceURL=" + url); (like this question).
Then you can see the source file in your dev tools (but not in the background HTML window).
manifest v3
You can add console.log statements to your content scripts.
This is one of the best ways to debug an application.
Let's say you want to access a DOM node from the content script.
const node = document.querySelector("selector")
node will be an Element instance if it exists; otherwise it will be null.
If you can see the node in the Elements tab but are not able to access it via the content script, then the node might not have been loaded at the time you accessed it.
Follow this answer to fix this issue.
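As a minimal sketch of the usual fix (assuming the node is added to the page dynamically; the selector is a placeholder), you can wait for it from the content script with a MutationObserver:
// Resolves once an element matching the selector appears in the DOM
function waitForNode(selector) {
    return new Promise(function (resolve) {
        const existing = document.querySelector(selector);
        if (existing) {
            resolve(existing);
            return;
        }
        const observer = new MutationObserver(function () {
            const node = document.querySelector(selector);
            if (node) {
                observer.disconnect();
                resolve(node);
            }
        });
        observer.observe(document.documentElement, { childList: true, subtree: true });
    });
}

waitForNode("selector").then(function (node) {
    console.log("node is available:", node);
});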
I am writing a Firefox add-on for Linux users to pass credentials for NTLM-authenticated sites, somewhat similar to AutoAuth, which is written using the XUL framework:
https://addons.mozilla.org/en-us/firefox/addon/autoauth/
My question is: how do I access the authentication dialog using the Firefox SDK?
With the Add-on SDK you don't have XUL overlays, so the only thing you can really do is use the window watcher. Since popup windows are considered windows, you'll see them in the onTrack function when they pop up in the browser.
This example code watches windows, looking for the window location chrome://global/content/commonDialog.xul, which is similar to what the AutoAuth add-on is doing. That dialog is used for a number of auth prompts, so you'll have to do the additional work of detecting NTLM auth.
var { isBrowser } = require("sdk/window/utils");

var delegate = {
    onTrack: function (window) {
        if (!isBrowser(window) && window.location === "chrome://global/content/commonDialog.xul") {
            // this could be the window we're looking for; modify it using its window.document
        }
    },
    onUntrack: function (window) {
        if (!isBrowser(window) && window.location === "chrome://global/content/commonDialog.xul") {
            // undo the modifications you did
        }
    }
};

var winUtils = require("window-utils");
var tracker = new winUtils.WindowTracker(delegate);
With this code you're pretty much at the point of the AutoAuth add-on's load() function. You can use window.document.getElementById() to access the DOM of that window and alter the elements within it.
Note that the window-utils module is deprecated, so you'll need to keep up with the SDK as they move from that module to (hopefully) something similar.
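For illustration only, here is a sketch of what the onTrack modification might look like. The element ids (loginTextbox, password1Textbox) and the placeholder credentials are assumptions about commonDialog's internal DOM, so verify them by inspecting the actual dialog:
var { isBrowser } = require("sdk/window/utils");
var winUtils = require("window-utils");

var tracker = new winUtils.WindowTracker({
    onTrack: function (window) {
        if (isBrowser(window) || window.location !== "chrome://global/content/commonDialog.xul") {
            return;
        }
        var doc = window.document;
        // Assumed element ids; inspect the dialog's DOM to confirm them
        var userField = doc.getElementById("loginTextbox");
        var passField = doc.getElementById("password1Textbox");
        if (userField && passField) {
            userField.value = "DOMAIN\\user"; // placeholder credentials
            passField.value = "secret";       // placeholder credentials
        }
    },
    onUntrack: function (window) {
        // undo any modifications here if needed
    }
});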
I have a PhoneGap app that uses jQuery Mobile (jqm) and works fine on Android and iOS.
Porting to WP7, I have an issue with the history, specifically history.back() (but also .go(-1), etc.). This refers to going back in history where the previous 'page' was in the same physical HTML file, just a different data-role=page div.
Using a jqm site in a regular browser is fine (with separate 'pages' in the same HTML file). Also, using history.back() when we go from one HTML file to another in the app is fine. It's the specific combination of WP7.5, jqm and PG.
Has anyone come across a solution for this? It's driving me crazy, and has been an issue since PG 1.4.1 and jqm 1.0.
EDIT 1: It's possible that the PhoneGap process of initialising the WebView on WP7.5 somehow overrides the jqm history overrides after they've loaded.
EDIT 2: It's definitely something to do with jqm not being able to modify the history. Each time there is a 'page' change, history.length is still 0.
EDIT 3: When I inspect the 'history' object, I find there is no function for replaceState or pushState. I know jqm uses these for history navigation; maybe that's the problem.
OK, this isn't perfect, but here's a solution (read: hack) that works for me. It only works for page hash changes, not actual URL changes (but you could add a regex check for that). Put this somewhere in the code that runs on deviceready:
if (device.platform == 'WinCE') {
    window.history.back = function () {
        var p = $.mobile.urlHistory.getPrev();
        if (p) {
            $.mobile.changePage("#" + p.pageUrl, { reverse: true });
            $.mobile.urlHistory.stack.splice(-2, 2);
            $.mobile.urlHistory.activeIndex -= 2;
        }
    };
}