Where can I find the articles in the Joomla File System?

I have a webpage created with Joomla, and I can see its file system over FTP.
I see a lot of folders, but I don't know where to find the articles' contents (the text).
My webpage doesn't work because of a SQL error and I can't log in to the administrator page, but I want to download the contents of the webpage's articles.

Hmm, as stated, Joomla "Articles" are database content, so unfortunately you won't find the equivalent of HTML "files" for each article in your FTP folders. However, even though there has been a SQL error, there is still a chance that you can read your database tables through phpMyAdmin, as suggested. If not, and assuming your site has been around long enough to be fully indexed by Google, you should still be able to get hold of the text of your articles through Google's cache - at least that way you can copy and paste the text from your old articles into Notepad, giving you a reference and saving you from having to retype or rewrite them. Read more on Google Cache here: http://www.googleguide.com/cached_pages.html
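If you can still reach the database directly (through phpMyAdmin, or with the credentials from your site's configuration.php), the article text lives in the content table. A minimal sketch in PHP that dumps it out, assuming the old default jos_ table prefix and placeholder credentials - adjust both to match your configuration.php:

```php
<?php
// Dump Joomla article text straight from the database.
// The credentials and the "jos_" prefix below are placeholders;
// take the real values from your site's configuration.php.
$db = new mysqli('localhost', 'db_user', 'db_password', 'joomla_db');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}

// Joomla stores each article's body in the introtext/fulltext columns.
$result = $db->query('SELECT id, title, introtext, `fulltext` FROM jos_content');
while ($row = $result->fetch_assoc()) {
    // Write one HTML file per article so the text is easy to salvage.
    $file = sprintf('article_%d.html', $row['id']);
    file_put_contents($file, "<h1>{$row['title']}</h1>\n" . $row['introtext'] . $row['fulltext']);
}
$db->close();
```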

Related

Parsing data: is an img link slower than an image from my own server?

I'm parsing data from another website, and I wonder whether it's better to download the images and serve them myself or just link to the images on the site I parsed. Is linking to a remote image slower by default than serving the image from my own source?
I couldn't find an answer to this simple question. If the question is too open-ended and doesn't belong here, please comment below so it can be deleted.
Some rules of thumb:
Don't display content on your page which you 'source' from another site without the other site's permission. ('Share this' links provided by YouTube are okay; directly linking to the .flv file of someone's video from another site to display on yours is not.)
Don't copy content from other domains onto your domain without their permission first (doing so would be a copyright violation).
So to answer your question: you should copy the content onto your domain/host, but only if they have given permission to allow this kind of use.
Edit: I am interpreting your question as "I am taking content from another website [and putting it on my own] and I am wondering if I should link directly to their content (<img> tags pointing to the other domain) or if I should download/copy the content to my website and have my server handle everything?"
The "technical" answer is "it depends on how good your host is at serving content to the average visitor compared to the other host". Compare a page served by Google vs. the same thing served from a home server behind a 56k modem. The difference matters if you have broadband; if you're on a 33.3k modem it doesn't.

Realtime Drive JavaScript example not working - Google API

I set up the Realtime drive example shown here: https://developers.google.com/drive/realtime/realtime-quickstart
On this site: http://shuub.com
But the thing is that when I access the link from a different browser (logged in with a different Google account), it won't load the file.
All I need is to edit some plain text with another user, without needing to access a Google account; it doesn't even need to be saved after closing the site. Is that possible?
Thanks for reading.
But the thing is that when I access the link from a different browser (logged in with a different Google account), it won't load the file.
You probably need to share the file with the other user first.
Open Google Drive in your browser. If you did not modify the example code, your file should be located in the root folder, probably named "New Realtime Quickstart File". Right-click the file and share it with the other user by adding their account to the list and granting all permissions.
All I need is to edit some plain text with another user, without needing to access a Google account; it doesn't even need to be saved after closing the site. Is that possible?
The website you have linked is not reachable, so I don't know exactly what you want to do.
You could probably also use other techniques that are simpler for this use case (no saving, no login), such as Mozilla's TogetherJS (you can try it on jsfiddle.net), or a tool like Etherpad.

Google Docs Viewer - File Request Timeout

I'm working on a Joomla website which has a set of documents that need to be displayed using the Google Docs viewer.
Only authenticated users can reach the files through the site, but a file can also be accessed through a direct path like http://www.example.com/files/somefile.pdf even without authentication.
So when I tried to view a file through the Google viewer with a link like this:
http://docs.google.com/viewer?url=http://www.example.com/files/somefile.pdf
files smaller than 100 KB are viewable, and for all the rest an error message is displayed:
Sorry, it took too long to find the document at the original source. Please try again later.
You can also try to download the original document by clicking here.
So I'm not sure whether this is something to do with the Google Docs Viewer, with Joomla, or with a server request timeout.
How can I make each file, irrespective of size, viewable with Google Docs?
If it's PDFs only, you can also just use pdf.js from Mozilla directly. Then you should check your URL encoding. If the issue remains, check out https://code.google.com/p/google-api-php-client/ for converting your docs in place. Opening them with pdf.js is still recommended to bypass Google Docs Viewer problems; at least that is how I got this working properly.
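On the URL-encoding point: the document URL should be percent-encoded before being passed as the url parameter. A minimal sketch in PHP - the file path is a placeholder, and the embedded=true flag and iframe size are just illustrative:

```php
<?php
// Build a properly encoded Google Docs Viewer link. The document URL
// must be percent-encoded as a whole, otherwise any "?" or "&" inside
// it gets parsed as part of the viewer request instead.
$fileUrl   = 'http://www.example.com/files/somefile.pdf'; // placeholder
$viewerUrl = 'https://docs.google.com/viewer?url=' . urlencode($fileUrl)
           . '&embedded=true';

echo '<iframe src="' . htmlspecialchars($viewerUrl) . '" width="600" height="780"></iframe>';
```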

Reload Document into Google Docs Viewer (Clear Cache)

Google Docs Viewer (http://docs.google.com/viewer) creates a cache of a document after the first viewing. To see what I mean, try the following:
1. Upload file.pdf to your server (e.g., http://example.com).
2. Visit http://docs.google.com/viewer?url=http://example.com/file.pdf
3. Upload a new file to replace file.pdf (but use the same name).
4. Revisit http://docs.google.com/viewer?url=http://example.com/file.pdf
Google Docs Viewer still shows the old file.pdf.
Anyone know how to correct this?
(I have already tried clearing the browser cache, switching browsers, and logging in with a different Google account to view the link.)
It appears there is no way to clear the cache, although, from my experience, Google tends to do it automatically about once a day.
Maybe if you append a dynamic query string parameter to the file name, the cache will not be used.
For example: http://docs.google.com/viewer?url=http://example.com/file.pdf?time=3454354
I added ?time=0 and it seemed to work.
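A minimal sketch of that cache-busting trick in PHP, combined with the URL encoding from the earlier answer (the file path is a placeholder):

```php
<?php
// Append a changing query parameter to the document URL so the viewer
// treats each request as a new document and skips its cached copy.
$fileUrl   = 'http://example.com/file.pdf'; // placeholder
$bustedUrl = $fileUrl . '?time=' . time();

echo 'https://docs.google.com/viewer?url=' . urlencode($bustedUrl);
```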

Content Watermarking

We have members-only paid content that is frequently copied and republished without our permission.
We are trying to 'watermark' our content by including each customer's user ID in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is that by including user-specific information in an article, the article content becomes ineligible for caching because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha ha, I'm sure that's just a tiny topic..)
We're using the CMS ExpressionEngine, but I'd like to hear about any strategies; they don't have to be EE-specific.
If you're talking about images, then you could use PHP to add a watermark to them:
How can I add an image onto an image in PHP like a watermark
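A minimal sketch of that approach using PHP's GD extension - the file names are placeholders:

```php
<?php
// Stamp a watermark PNG onto a JPEG using GD.
$photo     = imagecreatefromjpeg('photo.jpg');    // placeholder source image
$watermark = imagecreatefrompng('watermark.png'); // placeholder watermark

$wmWidth  = imagesx($watermark);
$wmHeight = imagesy($watermark);

// Place the watermark in the bottom-right corner, 10px from the edges.
$dstX = imagesx($photo) - $wmWidth - 10;
$dstY = imagesy($photo) - $wmHeight - 10;
imagecopy($photo, $watermark, $dstX, $dstY, 0, 0, $wmWidth, $wmHeight);

header('Content-Type: image/jpeg');
imagejpeg($photo);
imagedestroy($photo);
imagedestroy($watermark);
```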
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian
Going by your comment above, you are happy with users copying your content, just not without the formatting etc. So what you could do is provide users with an embed type of source code for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, utilize your own CSS, etc.
That way you can still allow members to use the content, but it will always come out the way you intended, with links back to your site.
You could always cache a version that uses a special placeholder string, like #!username!#, and then fill it in later with PHP based on which user is viewing it.
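A minimal sketch of that placeholder idea - the cache path is illustrative, and how you look up the current user depends on your CMS:

```php
<?php
// Serve a cached article, swapping a placeholder token for the viewing
// user's ID so a single cached copy can serve every user.
session_start();

$cachedHtml = file_get_contents('/var/cache/articles/1234.html'); // placeholder path
$userId     = $_SESSION['user_id'] ?? 'guest'; // however your CMS exposes the user

echo str_replace('#!username!#', htmlspecialchars((string) $userId), $cachedHtml);
```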
Another way, I believe, is to switch from caching on the server to letting the browser cache the page locally for a little while. That way it is only cached per user, and it reduces the calls to your database. Because an article is pretty static, you could just let the local computer cache it and pull in comments via JavaScript.
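A minimal sketch of nudging the browser to do that caching - the ten-minute lifetime is an arbitrary example:

```php
<?php
// Ask the browser to keep its own private copy of the page for ten
// minutes instead of caching it on the server.
header('Cache-Control: private, max-age=600');
// ... render the article as usual ...
```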
This last one is probably not what you are really looking for, but I'm going to come out and say it anyway: you could stop treating your users like thieves and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out what sites are posting your content? Put a link to your site in the body content, and do a Google Search/Blog Search for articles linking to that site. To automate it, use Google Blog Search, because it offers RSS feeds. Anything that has a link back to your site could go into a database with a link to the page; someone could look at it, and if it is the entire article, do a Whois lookup and send them an email.
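A minimal sketch of that automation - the feed URL and domain are placeholders, and it just assumes some search service that returns results as RSS:

```php
<?php
// Scan a search-result RSS feed for pages linking back to our site,
// collecting candidates for a manual copyright review.
$feedUrl   = 'http://example.com/search.rss?q=link:oursite.com'; // placeholder feed
$ourDomain = 'oursite.com';                                      // placeholder domain

$feed = simplexml_load_file($feedUrl);
if ($feed === false) {
    exit("Could not load feed\n");
}

foreach ($feed->channel->item as $item) {
    $pageUrl = (string) $item->link;
    // Skip results hosted on our own domain; everything else is a candidate.
    if (strpos($pageUrl, $ourDomain) === false) {
        echo "Possible republication: $pageUrl\n";
    }
}
```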
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works. I can grab what I want without using any screen capture/recording.
If you were serving .flv files, for example, I would easily be able to grab the source even if you overlaid it with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.
