What is the best way to manage PDF storage and availability? - Laravel

We are currently working on a restaurant management project; we are using Laravel/MySQL on the backend and we are looking for the best way to manage invoice PDFs. For now we are just storing them in a public folder, but I don't think that is a good approach. We have considered sending just the order data to the frontend and generating the PDF when the user clicks a download button, but that still doesn't feel efficient. So I need an idea for managing PDF invoices so that users can retrieve them any time they want without hurting performance.

Storing the invoice data in the database and then generating the PDF on demand when the user requests it is the most efficient way. It saves your storage space and bandwidth; pre-generating and keeping every PDF on disk would consume much more of both.
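A minimal sketch of that approach in Laravel, assuming the barryvdh/laravel-dompdf package; the Order model, its orderLines relation, and the invoices.show Blade view are illustrative names, not part of the original question:

```php
<?php
// routes/web.php (sketch): the invoice is rendered on demand, nothing is stored on disk.
use App\Http\Controllers\InvoiceController;
use Illuminate\Support\Facades\Route;

Route::get('/orders/{order}/invoice', [InvoiceController::class, 'download'])
    ->middleware('auth');
```

```php
<?php
// app/Http/Controllers/InvoiceController.php (sketch)
namespace App\Http\Controllers;

use App\Models\Order;
use Barryvdh\DomPDF\Facade\Pdf; // facade from barryvdh/laravel-dompdf (v2 namespace)

class InvoiceController extends Controller
{
    public function download(Order $order)
    {
        // Only the owner of the order (or staff) should be able to fetch its invoice.
        abort_unless($order->user_id === auth()->id(), 403);

        // Render a Blade view to a PDF in memory and stream it as a download.
        $pdf = Pdf::loadView('invoices.show', ['order' => $order->load('orderLines')]);

        return $pdf->download("invoice-{$order->id}.pdf");
    }
}
```

If some invoices turn out to be expensive to render, you can still cache the generated bytes (for example on a private `Storage` disk) the first time and serve the cached copy afterwards, which keeps the public folder out of the picture entirely.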

Related

Web analytics without the help of Google Analytics

I am trying to display a dashboard to every user who accesses my website with an analysis of their previous data. Can I do it without the help of Google Analytics?
Well, if you do not want to use Google Analytics then yes, you can do it.
A few steps you need to take:
Create a database table that stores the page URL and the visitor's IP address
Also try to record how many seconds/minutes the visitor stays on each page
You can then generate reports from that table and show them on the dashboard (a sketch follows below)
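A minimal sketch of the tracking table and the logging/report queries in plain PHP/PDO; the table name, columns, and function names are illustrative, and time-on-page would usually be reported separately from the browser:

```php
<?php
// page_views table (illustrative):
//   CREATE TABLE page_views (
//     id         INT AUTO_INCREMENT PRIMARY KEY,
//     url        VARCHAR(255) NOT NULL,
//     visitor_ip VARCHAR(45)  NOT NULL,   -- 45 chars also covers IPv6
//     viewed_at  DATETIME     NOT NULL
//   );

// Log one page view; call this at the top of each tracked page.
function logPageView(PDO $db): void
{
    $stmt = $db->prepare(
        'INSERT INTO page_views (url, visitor_ip, viewed_at) VALUES (?, ?, NOW())'
    );
    $stmt->execute([
        $_SERVER['REQUEST_URI'] ?? '/',
        $_SERVER['REMOTE_ADDR'] ?? '0.0.0.0',
    ]);
}

// Simple per-URL report for the dashboard: views in the last 30 days.
function viewsPerUrl(PDO $db): array
{
    return $db->query(
        'SELECT url, COUNT(*) AS views
           FROM page_views
          WHERE viewed_at >= NOW() - INTERVAL 30 DAY
          GROUP BY url
          ORDER BY views DESC'
    )->fetchAll(PDO::FETCH_ASSOC);
}
```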
By the way, if a free solution already exists, why are you spending time on this?
Paperclip is free and ethical: Website. Users can see all the data collected about them. I'm in their beta program, where you can suggest features to them and they'll add them.

How does Facebook store and maintain images in its application?

I'm going to create an application where I need to save and maintain tons of images, so can anyone tell me what the best practice is for saving and maintaining images, and how Facebook does this?
Facebook uses a purpose-built photo store (Haystack) rather than an ordinary file system or database: the image files live in a dedicated blob store, and only the metadata lives in the database.

ASP.NET MVC best practice for creation in one transaction

Currently I am working on an MVC web application that should have a creation dialog for some kind of entry.
It should be possible to enter some text information as well as upload documents, images, videos, etc.
The following problem arises:
Are there any general best practices for uploading the whole bunch of information at ONCE? The object should not be created in the database until the user really decides to submit the information.
I have thought about some solutions:
Storing the uploads with the File API in the browser
Immediate AJAX upload when selecting files. But where do I "cache" the files on the server? There is no entry in the database yet, since I am still creating the object.
Creating a database entry when opening the form? But this would leave junk entries in the database.
Any suggestions are much appreciated
Thank you
Kind regards
I think the following approach would be good to follow.
Keep a session cache that holds the file bytes in server memory.
When the user opens the upload page, clear it.
When the user uploads files, save the file bytes in the server session cache.
When the user actually wants to submit the files (via a "submit files" kind of button), read the files from the session cache and write them to the database.
Clear the session cache once everything is saved in the database.
In the case of large files, like videos, you would instead create a temporary folder (per user), save the files inside that folder rather than in the session cache, and clear/delete the folder after the files are saved in the database; a sketch of that variant follows.
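A rough sketch of the temp-folder-per-user variant, written in PHP only to match the main question's stack on this page (the ASP.NET equivalent would stage files in a per-user folder the same way and commit them on submit); all folder and function names are illustrative:

```php
<?php
// Stage an uploaded file into a per-user temp folder until the form is submitted.
function stageUpload(string $userId, array $uploadedFile): string
{
    $dir = sys_get_temp_dir() . '/pending_uploads/' . $userId;
    if (!is_dir($dir)) {
        mkdir($dir, 0700, true);
    }
    $target = $dir . '/' . basename($uploadedFile['name']);
    move_uploaded_file($uploadedFile['tmp_name'], $target);
    return $target;
}

// On final submit: persist the staged files, then empty and remove the temp folder.
function commitUploads(string $userId, callable $persist): void
{
    $dir = sys_get_temp_dir() . '/pending_uploads/' . $userId;
    foreach (glob($dir . '/*') ?: [] as $file) {
        $persist($file);   // e.g. insert into the database / move to permanent storage
        unlink($file);
    }
    @rmdir($dir);
}
```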

ASP MVC Big Gallery Web site Image Storage Best Practices

I have a client who wants to build a huge image gallery website, and I am confused about how to structure the website for future storage expansion.
Let me explain more...
Let us say that each user will upload his images to
website.com/Uploads/User/Images
Creating the upload logic and displaying the images is not my issue here. My real problem is this: say I have a 200 GB hard disk and 20,000 customers, where each customer uploads 10 MB max; as you can see, I will run out of space.
So how do I handle expansion in the future without changing the structure of the website? Users should always upload to the same path I mentioned above, and obviously my front-end views will fetch images from that same location too.
It may be a stupid question, but I am lost on this. I mean, how do sites like Facebook or other big sites handle that?
You can try using a cloud CDN (content delivery network), which can expand dynamically. Amazon and Rackspace are well known for this kind of service.
OK, after some tedious searching I have found the answer. Basically it boils down to two methods.
One is called push, where you have to store the files on the CDN server yourself via FTP, an API, etc.
The other is called pull (origin pull), where you don't have to change anything: you just configure your CDN to fetch the resources from your servers. Of course, you have to store the files on the origin server first.
There is a lot more to it, but if anyone has the same question, just search Google for CDN push vs. pull; a small sketch of what changes in the front end with a pull-origin CDN is below.
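With a pull-origin CDN the upload path stays exactly as described above; the views just point image URLs at the CDN hostname, and the CDN fetches each file from your origin the first time it is requested. A tiny illustrative helper (the CDN hostname is made up):

```php
<?php
// Rewrite a locally stored image path to its CDN URL (pull-origin: same path, CDN host).
function cdnUrl(string $relativePath): string
{
    $cdnHost = 'https://cdn.example-gallery.com'; // hypothetical CDN hostname
    return $cdnHost . '/' . ltrim($relativePath, '/');
}

// Usage in a view:
// <img src="<?= cdnUrl('Uploads/User/Images/photo123.jpg') ?>" alt="...">
```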

Content Watermarking

We have members-only paid content that is frequently copied and republished without our permission.
We are trying to ‘watermark’ our content by including each customer’s user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track the source of the copying; we place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 s for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? ( ha, ha, that there’s just a tiny topic i’m sure.. )
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
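For reference, a small GD-based sketch of the image-watermark approach mentioned above (the file paths are placeholders):

```php
<?php
// Stamp watermark.png onto photo.jpg in the bottom-right corner and save the result.
$photo     = imagecreatefromjpeg('photo.jpg');      // base image (placeholder path)
$watermark = imagecreatefrompng('watermark.png');   // watermark with transparency

$margin = 10;
$dstX = imagesx($photo) - imagesx($watermark) - $margin;
$dstY = imagesy($photo) - imagesy($watermark) - $margin;

// Copy the watermark onto the photo (alpha blending is on by default for truecolor images).
imagecopy($photo, $watermark, $dstX, $dstY, 0, 0, imagesx($watermark), imagesy($watermark));

imagejpeg($photo, 'photo_watermarked.jpg', 90);
imagedestroy($photo);
imagedestroy($watermark);
```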
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian 12 hours ago
Going by your comment above, you are happy for users to copy your content, just not without the formatting etc. So what you could do is provide users with an embed type of source code for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, use your own CSS, etc.
That way you can still allow members to use the content, but it will always come out the way you intended, with links back to your site.
Thanks
You could always cache a version that uses a special string, like #!username!#, and then later fill it in with PHP based on which user is viewing it.
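A minimal sketch of that idea; the cache file path and the user lookup are illustrative:

```php
<?php
// Serve a cached article, swapping the per-user watermark token in at request time.
// The expensive render is cached once; only this cheap str_replace runs per user.
session_start();

$cached = file_get_contents('/var/cache/articles/article-42.html'); // illustrative path
$userId = $_SESSION['user_id'] ?? 'guest';

echo str_replace('#!username!#', htmlspecialchars((string) $userId), $cached);
```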
Another way, I believe, is to switch from caching on the server to letting the browser cache the page locally for a little while. That way it is only cached per user, and it still reduces the calls to your database. Because an article is pretty static, you could just let the local machine cache it and pull in comments via JavaScript.
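For that variant the server just sends caching headers on the article response and leaves the comments to a separate JavaScript request; a hedged sketch:

```php
<?php
// Let the browser keep the article HTML for 5 minutes (private, per-user cache).
// These headers must be sent before any output.
header('Cache-Control: private, max-age=300');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 300) . ' GMT');

// ... output the (user-watermarked) article HTML here; comments load later via JS.
```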
This last one is probably not one you are really looking for, but I'm gonna come out and say it anyway. You could not treat your users like thieves, and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out which sites are posting your content? Put a link back to your site in the body content, and do a Google Search/Blog Search for articles linking to that page. To automate it, use Google Blog Search because it offers RSS feeds. Anything that links back to your site could go into a database with a link to the page; someone could then look at it, and if it is the entire article, do a Whois lookup and send them an email.
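A rough sketch of the automation step: poll a search-feed URL for the link you planted, and log any entry that is not on your own domain. The feed URL is purely illustrative (the Google Blog Search RSS feeds mentioned above may no longer be available), and the persistence step is stubbed out:

```php
<?php
// Poll an RSS search feed for pages linking to our planted URL and log unknown hosts.
$feedUrl = 'https://example-search.com/rss?q=link:example.com/articles'; // illustrative
$ourHost = 'example.com';

$xml = @simplexml_load_file($feedUrl);
if ($xml !== false) {
    foreach ($xml->channel->item as $item) {
        $link = (string) $item->link;
        $host = parse_url($link, PHP_URL_HOST) ?: '';

        if ($host !== '' && !str_ends_with($host, $ourHost)) {
            // TODO: insert $link into a "suspected copies" table for manual review.
            error_log("Possible republished content: {$link}");
        }
    }
}
```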
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works. I can grab what I want without using any screen capture/recording.
If you were serving FLVs, for example, I would easily be able to grab the source of that even if you overlaid it with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.
