Making websites available offline - asp.net-mvc-3

I am using HTML5 offline storage. The goal is to make the whole site available offline. Intuitively, no server requests means all the pages need to be on the client. The only way I know of to accomplish this is to make the site into one page and then show/hide portions with jQuery when the user "navigates". Is there a better way?

The HTML5 offline spec allows multiple pages to be saved offline, so you don't need to put all your content onto one page.
EDIT: link to spec http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html
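For example, a minimal cache manifest listing every page and asset you want available offline could look like this (all paths here are hypothetical):

CACHE MANIFEST
# v1 - change this comment whenever any listed file changes, or browsers won't refetch

# Pages
/index.html
/about.html
/contact.html

# Assets (local copies, not CDN URLs)
/css/site.css
/js/jquery.js
/js/site.js

Each page then points at it with <html manifest="site.appcache">, and the manifest should be served with the text/cache-manifest MIME type or some browsers will ignore it.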

Be careful that your jQuery does not still point to the cloud; you'll need to save the relevant .js files locally.
N.B. If your whole site can be generated and saved as individual .html files, then all you need to do is save those files in the correct (relative) directory structure.


How can I upload multiple files from urls directly to cloud storage

I've tried some of the services out there, including droplet, ctrlq.org/save, and some other sites that support fetching a file directly from a URL and uploading it to Dropbox, Google Drive and the like, without the user having to store the file on a local disk.
The problem is that none of these services support multiple URLs or batch uploading, but I have quite a few URLs and I really need a service where I can paste them in, separated by newlines or semicolons, and have the files uploaded to Dropbox (or any other cloud storage).
Any help would be gladly appreciated.
The Dropbox Saver JavaScript control allows you to save up to 100 files to the user's Dropbox in one shot. You'll need to programmatically create the button using Dropbox.createSaveButton as explained in the linked page.
It seems like the 100-file limit (at any one time) is universal, but you might find that it isn't the case when using the Dropbox REST API. It looks possible to do this either server side with Node.js (OAuth and POSTs) or client side in JavaScript (automating FileReader).
If you can leave a page open for about 20 minutes to work around the "technical limitations", the Dropbox could be loaded 100 files at a time this way, assuming each upload takes less than 2 seconds; adding a progress indicator is an easy hook.
If you're preloading the Dropbox once yourself, or the initial load is compatible with manual action, perhaps mapping a drive and unzipping an archive of your links to it would work. If your list of links isn't extremely volatile, then the REST API could be used to synchronize changes.
Edit: Forgot to include this page on CloudConvert, which unzips archives containing up to 100 files into Dropbox. Your use case doesn't seem to include retrieving the actual content at your servers (generated zip files), sending the automation list to the browser and then having the browser extract to Dropbox, but it's another option.
The Dropbox API now offers the ability to save a file into Dropbox directly via a URL. There's a blog post about it here:
https://blogs.dropbox.com/developers/2015/06/programmatically-saving-a-url-to-dropbox/
The documentation can be found here:
https://www.dropbox.com/developers/core/docs#save-url
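As a rough sketch of what batch-saving from PHP could look like with that endpoint (the exact URL, parameters and OAuth flow are taken from the linked docs as I remember them, so verify them there; the token and file list are placeholders):

<?php
// Hypothetical list of source URLs to push into Dropbox.
$urls = [
    'http://example.com/files/report.pdf',
    'http://example.com/files/photo.jpg',
];

$accessToken = 'YOUR_DROPBOX_ACCESS_TOKEN'; // placeholder OAuth token

foreach ($urls as $url) {
    $filename = basename(parse_url($url, PHP_URL_PATH));

    // save_url endpoint as described in the linked docs; check the exact path there.
    $endpoint = 'https://api.dropbox.com/1/save_url/auto/' . rawurlencode($filename);

    $ch = curl_init($endpoint);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(['url' => $url]),
        CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $accessToken],
        CURLOPT_RETURNTRANSFER => true,
    ]);

    $response = curl_exec($ch);
    curl_close($ch);

    // The response should include a job id you can poll to see when the save finishes.
    echo $url . ' => ' . $response . PHP_EOL;
}

Each call asks Dropbox to fetch one URL server-side, so nothing passes through the user's disk, which matches what the question is after.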

Check on which pages an image is used?

Is there a way to check which pages on a website use a specific image?
Say I have an image which I don't use on a page anymore, so I'd like to delete it from my server. But I'm not entirely sure whether it's being used on other pages. Is there a way to check if it's still being shown somewhere else?
You can hook your website up to Google Webmaster Tools and wait a little; after a while, 404 errors will appear there. This way you can track unused resources and dead ends.
This includes images.
There is a better way if you have direct access to the web server.
Visit every page on your website, or let Google crawl it.
You can then sort the image files by the time they were last accessed (rather than last modified); the ones that haven't been touched recently are not in use.
You have to make sure the images are actually fetched from the pages, so I would use a history-less, cache-less session.
How to sort the files according to the time stamp in unix?
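To the sorting follow-up: staying in PHP rather than the shell, a rough sketch that lists a directory of images ordered by their last-accessed time would be something like this (the directory path is hypothetical, and it assumes the filesystem records access times, i.e. it isn't mounted with noatime):

<?php
// List image files ordered by last-accessed time, oldest first, so images
// that were never fetched during the crawl float to the top as deletion candidates.
$dir   = '/var/www/site/images'; // hypothetical path
$files = glob($dir . '/*.{png,jpg,jpeg,gif}', GLOB_BRACE);

usort($files, function ($a, $b) {
    return fileatime($a) - fileatime($b);
});

foreach ($files as $file) {
    echo date('Y-m-d H:i', fileatime($file)) . '  ' . basename($file) . PHP_EOL;
}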

How to use the bootstrap size option in Joomla 3.0?

I'm new to the Joomla world. Can anyone tell me how to use the bootstrap size option in Joomla 3.0?
And I have one more question: what is the use of the index.html file, which has no content in it, found in every module folder?
Answer to the second question:
If no index.html is present, web servers list all of a directory's contents in the browser, making it easy for attackers to click on any of the links and view the contents; worse, if a link points to a PHP file, it will execute when clicked. That brings three risks:
Direct access to a PHP file can expose sensitive information (e.g. the server's path structure) or let attackers alter code directly.
It makes it easier to upload hacking scripts to a site through any of its vulnerable components; those scripts can then be reached directly over the web, which compromises the site.
It reveals the names and sizes of the site's files and helps identify any vulnerable extensions, making the site an easy target.
The blank index.html files prevent the directory listings that cause such exposure.
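As a belt-and-braces measure, directory listings can also be switched off at the web-server level. Assuming Apache, a single directive in the site configuration or an .htaccess file does it:

# Disable automatic directory listings for the whole site
Options -Indexes

The blank index.html files still help on hosts where you can't change the server configuration.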
The "bootstrap size" option in the module parameters has to be supported by the used module chrome. From the default system chromes, only the html5 one does support it. Depending on your template, there may be other chromes as well which do support it. But since it's a rather new parameter, most templates probably don't support it yet.

Does use of echo base_url(); to call CSS, images and Javascript files make a website slow?

I am using CodeIgniter. I keep my images, CSS and JavaScript files in a folder called "support" in the document root of my application, so my document root folder looks like this:
.settings
application
support
system
.buildpath
.project
index
.htaccess
Now my question is: will my website take longer to load because I have to use <? echo base_url();?>support/ every time I need to get something from my support folder? When I use <? echo base_url();?> I am actually calling the full website address, and I have 7 CSS and 13 JavaScript files to load from "support", so it will definitely take time to load the website. (Please correct me if I am wrong.) If this can make a website slow, could you please tell me where exactly I should put my CSS, images and JavaScript files? I heard views is not a good place for them.
Thanks in Advance :)
This question is probably bigger than you think.
First of all, using <? echo base_url();?> instead of "hard-coding" your web address will not slow down your site. A function call like this has a negligible effect on the speed of loading your pages.
I think the other part of your question is regarding architecture.
When you think of speed for your website, you need to know what factors slow down the loading of your page. (Although not an exhaustive list, this will help in your case):
the number of files (images, css, javascript, etc.) that need to load for your page
the cache-ability of those files
some server-side header nonsense (ETags and so forth)
the processing to build your php pages
the size of your page
Now, in your instance, I would recommend putting all of your "static" files in the document root under a folder (say static). Then, access them all in your "views" with the base_url() function.
This way, your page as it's delivered to the browser will make external calls for those static files, allowing the browser to cache all of them (assuming the headers are set up correctly). If you put them into views, then they're actually added to the page that is being requested, so the next page that is requested has to download those files all over again along with that second page. Make sense?
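Concretely, that might look like this in the <head> of a view, assuming a static (or, in your case, support) folder and CodeIgniter's url helper loaded; the file names are made up:

<?php // application/views/partials/head.php -- requires the url helper (load it in the controller or autoload it) ?>
<link rel="stylesheet" href="<?php echo base_url('static/css/site.css'); ?>">
<link rel="stylesheet" href="<?php echo base_url('static/css/forms.css'); ?>">
<script src="<?php echo base_url('static/js/jquery.js'); ?>"></script>
<script src="<?php echo base_url('static/js/app.js'); ?>"></script>

If your CodeIgniter version's base_url() doesn't accept an argument, just concatenate the path the way you already do: <?php echo base_url(); ?>static/css/site.css.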
To help with the "number of files", you can always concatenate and minify any css/javascript that you have. So instead of the browser downloading and caching 8 js files, you can serve it 1 js file with all of your code.
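A crude way to do the concatenation half as a one-off build step (real minifiers do more; the paths are hypothetical and match the support folder from the question):

<?php
// Concatenate every JS file in support/js into a single bundle.js.
// Run this as a build step, not on every request.
$files = array_filter(glob(__DIR__ . '/support/js/*.js'), function ($f) {
    return basename($f) !== 'bundle.js'; // don't re-include the previous output
});
sort($files); // load order matters; prefix files with 01_, 02_, ... if it does

$bundle = '';
foreach ($files as $file) {
    $bundle .= "/* " . basename($file) . " */\n";
    $bundle .= file_get_contents($file) . ";\n"; // trailing ; guards against missing semicolons
}

file_put_contents(__DIR__ . '/support/js/bundle.js', $bundle);
echo count($files) . " files concatenated into bundle.js\n";

Then the page only links bundle.js (and ideally a minified version of it) instead of 13 separate files.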

Content Water Marking

We have members-only paid content that is frequently copied and republished without our permission.
We are trying to "watermark" our content by including each customer's user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track down the source of the copying; we then place that class somewhere in the article body.
The problem is that by including user-specific information in an article, the article content becomes ineligible for caching, because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? ( ha, ha, that there’s just a tiny topic i’m sure.. )
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images, then you could use PHP to add a watermark to them:
How can I add an image onto an image in PHP like a watermark
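For reference, the GD approach from that question looks roughly like this; the file names are hypothetical and this is only a sketch:

<?php
// Stamp a PNG watermark onto the bottom-right corner of a JPEG.
$photo = imagecreatefromjpeg('photo.jpg');    // hypothetical source image
$stamp = imagecreatefrompng('watermark.png'); // hypothetical watermark with transparency

$margin = 10;
$dstX = imagesx($photo) - imagesx($stamp) - $margin;
$dstY = imagesy($photo) - imagesy($stamp) - $margin;

// Copy the stamp onto the photo; the PNG's transparency is blended by default.
imagecopy($photo, $stamp, $dstX, $dstY, 0, 0, imagesx($stamp), imagesy($stamp));

header('Content-Type: image/jpeg');
imagejpeg($photo, null, 90);

imagedestroy($photo);
imagedestroy($stamp);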
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian 12 hours ago
Going by your comment above, you are happy for users to copy your content, just not without the formatting etc. So what you could do is provide users with an embed type of source code for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, use your own CSS, etc.
That way you can still allow the members to use the content but it will always come out the way you intended it with links back to your site.
Thanks
You could always cache a version that uses a special string, like #!username!#, and then later fill it in with PHP based on which user is viewing it.
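A sketch of that idea: cache the article once with the literal marker in it, then substitute the real value at request time (the function name and cache path are made up):

<?php
// $cachedHtml comes from whatever cache layer you use (file cache, memcached, EE's own cache...).
// The article was rendered and cached once, containing the literal marker #!username!#.
function personalize($cachedHtml, $userId)
{
    // A str_replace on the rendered page is cheap compared to re-rendering it per user.
    return str_replace('#!username!#', htmlspecialchars($userId), $cachedHtml);
}

// Usage:
$cachedHtml = file_get_contents('/tmp/cache/article-42.html'); // hypothetical cache file
echo personalize($cachedHtml, 'userid_1234');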
Another approach, I believe, is to switch from caching on the server to letting the browser cache the page locally for a little while. That way it is only cached per user, and it reduces the calls to your database. Because an article is fairly static, you could let the local machine cache it and pull in comments via JavaScript.
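Letting the browser cache the article is mostly a matter of sending the right headers with it; for example (the five-minute lifetime is arbitrary):

<?php
// Cache privately (per user) in the browser for 5 minutes.
header('Cache-Control: private, max-age=300');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 300) . ' GMT');

// ...then render the article as usual; comments can be pulled in separately via JavaScript.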
This last one is probably not what you are really looking for, but I'm going to say it anyway. You could stop treating your users like thieves and instead treat the thieves as thieves: go to the person hosting the servers your content is on and send them an email telling them that copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out which sites are posting your content? Put a link back to your site in the body content and do a Google Search/Blog Search for articles linking to it. To automate it, use Google Blog Search, because it offers RSS feeds. Anything with a link back to your site could go into a database with a link to the page; someone could look at it, and if it is the entire article, do a Whois lookup and send them an email.
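A rough sketch of that monitoring loop, assuming you already have an RSS feed URL for a search on links to your site (the feed URL and the seen-links storage are placeholders):

<?php
// Poll an RSS feed of search results for pages linking back to the site,
// and record anything new for manual review.
$feedUrl = 'https://example.com/search-feed?q=link:yoursite.com'; // placeholder feed URL
$seen    = file_exists('seen.txt') ? file('seen.txt', FILE_IGNORE_NEW_LINES) : [];

$feed = simplexml_load_file($feedUrl);
if ($feed === false) {
    exit("Could not load feed\n");
}

foreach ($feed->channel->item as $item) {
    $link = (string) $item->link;
    if (!in_array($link, $seen, true)) {
        // New page linking to the site: queue it for a human to check
        // whether it republishes a full article.
        file_put_contents('seen.txt', $link . PHP_EOL, FILE_APPEND);
        echo 'New link found: ' . $link . PHP_EOL;
    }
}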
What makes you think adding a CSS class to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works. I can grab what I want without using any screen capture/recording.
If you were serving FLVs, for example, I would easily be able to grab the source of those even if you overlaid them with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.
