I am a newbie to the Add-on SDK. I am making an add-on in which I log some information meant for manual viewing later on. I came across the simple-storage API, but as far as I could figure out, it saves the information internally in some format which can only be accessed through the API, i.e. "ss.storage.variable_name".
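For reference, the kind of logging I mean is roughly this (the module path may be "sdk/simple-storage" in newer SDK versions):

    // log an entry into simple-storage
    var ss = require("simple-storage");
    ss.storage.log = ss.storage.log || [];
    ss.storage.log.push({ time: Date.now(), message: "something happened" });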
I was wondering if this information is accessible somehow from windows directory structure i.e. maybe from some file in the profile directory.
And secondly, is there any way to access the SQLite database, or any third-party API for this? (I don't know why, but "cfx test" for this API gives me errors, so I'm not able to use it.)
You can try HTML5 localStorage via JavaScript: http://www.w3schools.com/html/html5_webstorage.asp
And for SQLite access, the Storage API: https://developer.mozilla.org/es/docs/Storage
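A minimal sketch of the localStorage approach (the key name is just an example; localStorage stores strings, so serialize to JSON):

    // append an entry to a JSON log kept in localStorage
    var log = JSON.parse(localStorage.getItem("myAddonLog") || "[]");
    log.push({ time: Date.now(), message: "something happened" });
    localStorage.setItem("myAddonLog", JSON.stringify(log));

    // later, read it back for manual inspection
    console.log(JSON.parse(localStorage.getItem("myAddonLog")));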
I want to make a Firefox extension that can store and retrieve data from a database. However, I've only found solutions that work locally for each user. I'd like every user to have access to the same database.
Is that possible?
It is possible to access remote SQL databases like MySQL and PostgreSQL with Node.js modules, but it is more sensible to create a REST API front end to your database and call it from the extension. Exposing the SQL calls directly in your web extension is bad security practice and will expose your database to hackers.
You will also need your add-on to pass Mozilla's approval process if you are going to distribute it publicly, and I doubt the reviewers will be pleased to see raw SQL calls in your extension's code.
The more sensible way to update the database is through a REST API front end.
A simple example of how to create a REST API for a Postgres database can be found at "Node.js, Express.js, and PostgreSQL: CRUD REST API example" (LogRocket Blog), and this playlist shows how to create a REST interface in a Firefox extension: "Build a Firefox Extension from Scratch that integrates with Node.js" (DEV Community).
The above database example is quite simple. For real-world use you will need a more advanced REST framework for your API, one which sanitizes the data before inserting it into the database. You have more reading to do here.
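To give a feel for it, here is a minimal sketch of such a REST front end in the spirit of the LogRocket example (the "items" table and its columns are made up, and there is no input validation, which you must add for real use):

    // server.js - minimal Express + node-postgres REST front end (sketch)
    const express = require("express");
    const { Pool } = require("pg");

    const app = express();
    app.use(express.json());

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // read all rows from a hypothetical "items" table
    app.get("/items", async (req, res) => {
      const { rows } = await pool.query("SELECT * FROM items");
      res.json(rows);
    });

    // insert a row; the parameterized query guards against SQL injection
    app.post("/items", async (req, res) => {
      const { rows } = await pool.query(
        "INSERT INTO items (name) VALUES ($1) RETURNING *",
        [req.body.name]
      );
      res.status(201).json(rows[0]);
    });

    app.listen(3000);

The extension then only ever talks HTTP to this server; the database credentials never leave your backend.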
However, if you need to make SQL calls directly from your extension, which I still don't advise, you can include some Node.js packages in your web extension and use browserify, which extracts and packages the needed modules into your extension. Your add-ons had better be for private or in-house use though, not for public distribution.
Some Node.js modules for database access are https://github.com/mysqljs/mysql, https://node-postgres.com/ and https://www.npmjs.com/package/pg.
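For illustration, the mysql module's documented usage looks like this (host and credentials are placeholders; note that anyone can read them out of your extension's source, which is exactly the problem described above):

    // direct SQL from a browserified bundle (not recommended)
    const mysql = require("mysql");

    const connection = mysql.createConnection({
      host: "db.example.com",  // placeholder
      user: "app_user",
      password: "secret",      // visible to anyone who unpacks the extension!
      database: "mydb"
    });

    connection.connect();
    connection.query("SELECT * FROM items", (err, rows) => {
      if (err) throw err;
      console.log(rows);
    });
    connection.end();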
Just a little advice; feel free to ignore it if it doesn't apply to you. Your question sounds quite generic. You should learn and try things yourself first, and only ask here when there are specific issues you're stuck on.
By "locally", I think you mean via Web SQL or IndexedDB. They're called local database and their behaviors are totally different from what you're looking for.
I shouldn't need to tell you this, but just in case: first of all, you need to know how a website works on both the front end and the back end, not just the local parts, and especially how the two communicate with each other. So you should know about HTTP requests, JavaScript, and AJAX.
What does this have to do with Firefox extensions? Not just in Firefox but in every browser, an extension is just another kind of web page that overlays the currently opened page; in Firefox the opened page is referred to as the active tab. The only differences from a regular web page are that you need to sign up for an account first, use a manifest.json file as your project root file, and compile it from the command line with the web-ext tool. In case you're facing a Cross-Origin Resource Sharing (CORS) restriction, follow the instructions linked HERE and allow the URL on the server side.
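Concretely, the extension side is ordinary JavaScript making HTTP requests to your backend; the only extension-specific part is declaring the host in the manifest (the URL below is illustrative):

    // background.js - call your REST API from the extension
    // manifest.json needs e.g. "permissions": ["https://api.example.com/*"]
    fetch("https://api.example.com/items", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: "new item" })
    })
      .then((response) => response.json())
      .then((data) => console.log("saved:", data))
      .catch((err) => console.error("request failed:", err));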
I created a new .NET-based web application named shabang using the web client, and it displays the default woohoo page when I browse to shabang.azurewebsites.net. Then I discovered that there's a console in the web client, and when I use it I can see a directory structure including my login name, home directory, etc. However, I can't find the files being served, and since I can see a web page being rendered when browsing, I'm certain they are there, somewhere.
Where is the wwwroot (or its equivalent)?
I've googled it, but it's a bit hard to word what I'm asking in a way that Google will understand; all I find are guides that are way too advanced to distill the information from.
By default, it is d:\home\site\wwwroot. It can also be modified using the Azure Portal.
Note that you should use the Kudu Console to see it. e.g. https://shabang.scm.azurewebsites.net
I've tried some of the services out there, including droplet, ctrlq.org/save, and some other sites that support directly fetching a file from a URL and uploading it to Dropbox, Google Drive, and the like, without the user having to store the file on a local disk.
Now the problem is that none of these services support multiple URLs or batch uploading, but I have quite a few URLs and I really need a service where I can paste them in, separated by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be gladly appreciated.
The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
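A sketch of that approach (the app key, container id, and file URLs are placeholders; the Saver accepts up to 100 entries in its files array):

    // requires Dropbox's dropins.js on the page, e.g.:
    // <script src="https://www.dropbox.com/static/api/2/dropins.js"
    //         id="dropboxjs" data-app-key="YOUR_APP_KEY"></script>

    var button = Dropbox.createSaveButton({
      files: [
        { url: "https://example.com/file1.pdf", filename: "file1.pdf" },
        { url: "https://example.com/file2.pdf", filename: "file2.pdf" }
        // ...up to 100 files per click
      ],
      success: function () { console.log("all files saved"); },
      progress: function (p) { console.log("progress: " + p); },
      error: function (msg) { console.error("save failed: " + msg); }
    });
    document.getElementById("dropbox-container").appendChild(button);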
It seems like the 100-file limit (at any one time) is universal, but you might find that it doesn't apply when using the Dropbox REST API. It looks possible to do this with Node.js server-side (OAuth and POSTs) or JavaScript client-side (automating FileReader). I'll review and try to add content so these aren't just links.
If you can leave a page open for about 20 minutes despite the "technical limitations", the Dropbox should be loadable 100 files at a time that way, assuming each upload takes less than 2 seconds; it's an easy hook on which to add a progress indicator.
If you're preloading the Dropbox once yourself, or the initial load is compatible with manual action, perhaps mapping a drive and unzipping an archive of your links to it would work. If your list of links isn't extremely volatile, then the REST API could be used to synchronize changes.
Edit: I forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content at your servers (generating zip files), sending the automation list to the browser, and then having the browser extract to Dropbox, but it's another option.
The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
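A sketch of calling it (shown here against the current v2 endpoint rather than the Core API in the linked docs, so double-check the names; the token and paths are placeholders):

    // ask Dropbox's servers to fetch a URL straight into the account
    fetch("https://api.dropboxapi.com/2/files/save_url", {
      method: "POST",
      headers: {
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        path: "/saved/file1.pdf",
        url: "https://example.com/file1.pdf"
      })
    })
      .then((r) => r.json())
      // the fetch happens asynchronously on Dropbox's side; the response
      // contains a job id to poll via /2/files/save_url/check_job_status
      .then((job) => console.log(job));

Since it takes one URL per call, batch uploading is just a loop over your list of URLs.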
Using the jQuery-wrapped version of Fine Uploader v3.3.
Is it possible to populate the file list with files already in the upload folder?
I think "_addToList(id, name)" should do the trick, but I can't get it to work. Any ideas?
Seems that they are currently working on this feature:
https://github.com/Widen/fine-uploader/issues/784
So, this will be available soon.
This is not a behavior that Fine Uploader currently supports. Fine Uploader only displays files that users have submitted to the uploader since the current uploader instance was created. It doesn't try to be an all-in-one web application. You could probably add your own item to the list/UI via JavaScript, along the lines of the sketch below. That probably wouldn't be terribly difficult, but it seems like an odd thing to do.
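If you go that route, it amounts to plain DOM manipulation against the uploader's list element. A rough sketch (the "qq-*" class names are the ones Fine Uploader's default template used around v3.x, so verify them against your version):

    // inject a fake "already uploaded" entry into Fine Uploader's list (hack)
    var list = document.querySelector(".qq-upload-list");
    var item = document.createElement("li");
    item.className = "qq-upload-success";
    item.innerHTML = '<span class="qq-upload-file">existing-file.jpg</span>';
    list.appendChild(item);

Note that such an entry is purely cosmetic; the uploader instance knows nothing about it.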
If you'd like to discuss your specific use case more, please open up a feature request in the Github issue tracker.
Generally, client-side code cannot add stored or hard-coded path-based file names for use in any type of POST or upload operation. Obviously this is a security measure: you can imagine what would happen if a malicious web page could slip some baked-in file name into a generic POST operation. So, from what I understand, only the user can specify path-based file names, via a file browser, for the session they are included in. This applies to HTML/JavaScript/jQuery, but I'm unsure whether Flash or Silverlight based solutions are also limited. I think a Java-based uploader would be free of this, but then you are just moving closer and closer to installed software.
I need a way to cache images and HTML files from my site in PhoneGap. I'm planning for users to see the site without an internet connection exactly as they would with one. But I can only find information about storing SQL data; how can I store images (and use them later)?
To cache images, check out this library (of which I'm the creator): imgcache.js. It's designed for the very purpose of caching images using the local filesystem. If you check out the examples, you will see that it can also detect when an image fails to load (because you're offline or have a very bad connection) and then automatically replaces it with the cached image. The user of the web app doesn't even notice being offline.
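Usage is roughly as follows (method names as per the imgcache.js README at the time of writing, so check the current docs):

    // cache an image, or swap in the cached copy if it's already stored
    ImgCache.init(function () {
      var $img = $("#photo");
      ImgCache.isCached($img.attr("src"), function (path, isCached) {
        if (isCached) {
          ImgCache.useCachedFile($img);          // point src at the local copy
        } else {
          ImgCache.cacheFile($img.attr("src"));  // download into the cache
        }
      });
    });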
As for HTML pages: if they're static HTML files, they could be stored locally in the web app (file:// in PhoneGap).
If they're dynamically generated pages, check the localStorage API if you have a small amount of data, otherwise the filesystem API.
For my web app, I retrieve only JSON data from my server (and process/render it using Backbone + Underscore). The JSON payload is stored in localStorage. If the application goes offline, it fetches the JSON data from localStorage instead of the server (a home-baked fork of Backbone.dualStorage).
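The pattern looks roughly like this (jQuery-style request shown since the stack above is Backbone; the endpoint and key names are illustrative):

    // try the server first; fall back to the cached copy when offline
    function fetchItems(render) {
      $.getJSON("/api/items")
        .done(function (data) {
          localStorage.setItem("items", JSON.stringify(data)); // refresh cache
          render(data);
        })
        .fail(function () {
          var cached = localStorage.getItem("items");
          if (cached) render(JSON.parse(cached)); // offline: last known data
        });
    }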
You then get the full offline experience: pages+images.
Caching like you might need for simple offline operation is not exactly that easy.
Your first option is the cache manifest. It has some limitations (like the size of the cache) but might work for you since it was designed to do what you want.
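For reference, a minimal manifest, referenced from the page as <html manifest="app.appcache"> (file names are examples):

    CACHE MANIFEST
    # v1 - change this comment to force clients to re-download

    CACHE:
    index.html
    css/app.css
    img/logo.png

    NETWORK:
    *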
Another option is to store content on the device's disk using the filesystem APIs. This has some drawbacks, like security and the fact that you have to load the file from a path/URL that is different from the one you would normally load it from on the web. Check out the hydra plugin for an example of this.
One final option might be to store stuff in localStorage (which has the benefit of being private on all platforms) and then pull it out of there when needed ... that means base64'ing all your images though, so it is a pretty big departure from just standard caching.
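The base64 route would look something like this (standard canvas API; it only works for images your page is allowed to read, i.e. same-origin or CORS-enabled):

    // convert a loaded <img> to a data: URL and stash it in localStorage
    function cacheImage(img, key) {
      var canvas = document.createElement("canvas");
      canvas.width = img.naturalWidth;
      canvas.height = img.naturalHeight;
      canvas.getContext("2d").drawImage(img, 0, 0);
      localStorage.setItem(key, canvas.toDataURL("image/png"));
    }

    // later, offline:
    // someImg.src = localStorage.getItem(key);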
Caching is very much possible on Android, but on iOS, as stated above, there are limitations on image size, cache size, etc.
If you are willing to integrate and allow caching on iOS, you can use the "cache manifest" to do so, but keep the drawbacks and limitations in mind.
Also, if you want to save files to the Documents folder under your app, Apple will reject your app. The reason is that since iOS 6 the system backs up all data under the Documents folder to iCloud, so Apple does not allow big data, like images or JSON files that could be synced again from your server, to be kept in this folder.
There is a good workaround: use LocalFileSystem.TEMPORARY instead. It does not save the data to Library/Caches but to the app's temp folder, which is neither auto-backed-up to iCloud nor auto-deleted.
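With the PhoneGap/Cordova File API that looks roughly like this (plugin API as commonly documented; error callbacks omitted for brevity):

    // write downloaded data into the app's temp folder (not backed up to iCloud)
    window.requestFileSystem(LocalFileSystem.TEMPORARY, 0, function (fs) {
      fs.root.getFile("cached.json", { create: true }, function (fileEntry) {
        fileEntry.createWriter(function (writer) {
          writer.onwriteend = function () { console.log("cached"); };
          writer.write(JSON.stringify({ some: "data" }));
        });
      });
    });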