I'm trying to convert emails from my mailbox into either HTML or PDF programmatically.
My main motivation is to be able to create a summary of emails on a web page, and to be able to expand a particular email and view its entire content.
I figured PDF might be an option, since I would not have to worry about linking content inside the email (e.g. an image) to the storage location of that image.
I'm starting with the Ruby Mail gem, and I also came across MHonArc. I'm not sure whether MHonArc is too much for what I'm trying to do, so I decided to ask here to see if there are alternatives out there.
The pdfkit gem is used for generating PDFs. Please read its README; the gem requires the wkhtmltopdf library to be installed on your system.
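For reference, a minimal sketch of the pdfkit flow (the HTML string here is placeholder content standing in for a rendered email):

```ruby
require 'pdfkit'

# Placeholder HTML standing in for a rendered email body.
html = "<h1>Re: Meeting notes</h1><p>Full email content goes here.</p>"

kit = PDFKit.new(html, page_size: 'A4')
kit.to_file('email.pdf')   # or kit.to_pdf to get the raw PDF bytes
```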
I would approach email conversion via Ruby's IMAP library (there might be newer ones out by now) and then serve everything from a simple Sinatra app in the browser. You can use a template language to generate simple HTML output and logic. Of course, you can also use Prawn for PDF generation. There are many different ways to get there... I'm not sure why you don't just load the emails and show partial content. Life can be so simple.
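A minimal sketch of that approach, assuming an IMAP server reachable over SSL (host, credentials, and mailbox are placeholders):

```ruby
require 'net/imap'
require 'mail'

imap = Net::IMAP.new('imap.example.com', ssl: true)
imap.login('user@example.com', 'secret')
imap.select('INBOX')

imap.search(['ALL']).first(10).each do |id|
  raw  = imap.fetch(id, 'RFC822')[0].attr['RFC822']
  mail = Mail.read_from_string(raw)
  # One summary line per message; mail.body holds the full
  # content for the expanded view.
  puts "#{mail.date} #{mail.from&.first} -- #{mail.subject}"
end

imap.logout
imap.disconnect
```

From there, a Sinatra route can render the same data through whatever template language you prefer.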
Hello guys, I need help here.
I want to send PDF emails to users on a monthly basis, but the PDF will have values from the database, so each user's content differs from every other user's, all of it taken from the database. If I can't achieve this properly in Laravel, I need another language that can help me achieve it. Thanks.
// In app/Console/Kernel.php, inside schedule():
$schedule->command('custom:command')
         ->monthly();
You can achieve anything in Laravel which you can achieve in PHP :) and as far as web development is concerned, you can achieve anything in PHP :)
However, I am not sure what exactly your question is about, so I will cover all aspects of sending an email with a dynamically generated PDF.
Get a fillable PDF
First of all, you need fillable fields in your PDF in order to generate a dynamic PDF at run time. If you don't have fillable fields in your PDF, you can create them using any online tool. One example is: https://www.pdfescape.com/
You can also use Adobe Acrobat to do so if you like.
Create a PDF with dynamic data
Once you have your empty, fillable PDF, you can populate it with data using the PDFTK library. Of course, you will have to install this tool on your server in order to use it.
I recommend using it directly, running exec or similar commands in PHP. However, if you are not comfortable with that, you can use one of the various PDFTK wrappers available for Laravel. One example is: https://github.com/mikehaertl/php-pdftk
I myself prefer using exec directly from PHP. Here is a REALLY GOOD tutorial for that: https://www.sitepoint.com/filling-pdf-forms-pdftk-php/
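For orientation, the underlying command that such an exec call runs looks roughly like this (file names are placeholders); pdftk reads the field values from an FDF/XFDF data file that you generate per user from the database:

```
pdftk template.pdf fill_form user_1234.fdf output user_1234.pdf flatten
```

The flatten option merges the filled values into the page so recipients cannot edit the form fields afterwards.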
Send the email
Well, once you have your PDF generated and saved on the server, all you need to do is attach it to your email and send it. You can find all about attachments in Laravel here: https://laravel.com/docs/5.1/mail#attachments
Run it via cron
If you have to do it on a regular basis, I advise you to create a cron job on your server. There are various ways to do that. I advise doing it via cPanel, if you have it on your server, as it provides a very user-friendly way of creating cron jobs. If not, you can also use the terminal to edit the crontab on a Linux server.
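For example, a crontab entry like the following (the project path is a placeholder) runs the artisan command from the snippet above at midnight on the first day of every month:

```
0 0 1 * * cd /var/www/your-project && php artisan custom:command
```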
I do not recommend using Laravel's built-in scheduler, because it has known issues.
Well, that's it, voilà... that's the whole process of sending dynamically generated PDFs via email on a regular basis.
Good luck!
I have to develop a minimalistic and simple Windows Phone 7/7.1 app for my college website, for displaying new notices and any new content in the students' section. The website has a separate page for notices and a separate page for study material downloads. The site has no RSS feeds. Please help me figure out how I can read the data into my app and display it.
The website is www.niecdelhi.ac.in
Thank You
When a site doesn't publicly expose data in a machine readable format, you can scrape the HTML and extract the data yourself. This is a tedious and error-prone process, not to mention that changing the site design will immediately break your code.
To scrape a site, use a library that can parse the HTML. For example, you can use the HTML Agility Pack to extract data from the HTML and then use it for whatever purpose you want.
You can also create a web service that periodically performs the data extraction, and then publish an RSS feed from your own service; see the sketch below. This way your mobile application doesn't depend on the parsing code, which you can tweak and update on the server at any time.
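To illustrate the idea (in Ruby for brevity, since the phone app then only needs to consume a plain RSS feed; on Windows Phone itself you would use the HTML Agility Pack and C#): the URL and CSS selector below are assumptions about the site's markup, so inspect the real pages to find the right ones.

```ruby
require 'net/http'
require 'nokogiri'   # gem install nokogiri
require 'rss'

# Assumed location of the notices page -- verify against the real site.
NOTICES_URL = 'http://www.niecdelhi.ac.in/notices'

html    = Net::HTTP.get(URI(NOTICES_URL))
notices = Nokogiri::HTML(html).css('.notice a')   # hypothetical selector

feed = RSS::Maker.make('2.0') do |maker|
  maker.channel.title       = 'NIEC Delhi notices'
  maker.channel.link        = NOTICES_URL
  maker.channel.description = 'Unofficial feed scraped from the notices page'
  notices.each do |a|
    maker.items.new_item do |item|
      item.title = a.text.strip
      item.link  = URI.join(NOTICES_URL, a['href']).to_s
    end
  end
end

puts feed.to_s   # serve this XML from your service on a schedule
```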
I need an easy way to generate static web pages so that I can serve them up with Apache or Nginx. Currently I am using SproutCore's build tool (Abbot) to generate static pages but that is a little bit cumbersome as it is designed for building SproutCore apps, not non-SproutCore HTML pages.
Here are my requirements:
Javascript must be combined and minified
CSS files must be combined
Each image / CSS / Javascript asset must have unique URL for better caching (query string isn't enough)
Asset URL should be different only when it really changes
Localization support throughout HTML, CSS, Javascript and image files
Nice template engine with layouts, partials etc.
Here are possible solutions I have found:
Create the site using Ruby on Rails, then fetch all the pages as static files using wget, as described here: http://usefulfor.com/ruby/2009/03/23/use-rails-to-create-a-static-site-rake-and-subversion/
Use Middleman: http://middlemanapp.com
Any thoughts on this?
After a longish evaluation process I have decided to use Middleman. It does the trick and I love its simplicity and the fact that I can use existing Rack components with it.
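For reference, a minimal config.rb sketch (extension names are from Middleman 3.x) of how Middleman covers the requirements above:

```ruby
# config.rb -- a minimal sketch, not a complete configuration.
activate :i18n                   # localization across templates

configure :build do
  activate :minify_css           # combine and minify CSS
  activate :minify_javascript    # minify JavaScript
  activate :asset_hash           # content-hashed asset filenames (no query
                                 # strings), so a URL changes only when the
                                 # file's content actually changes
end
```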
Best Regards,
Pekka Mattila
I'm the creator of Middleman and would be eager to help you get comfortable using Middleman. My main goal is to give users the power of Rails, but focused on static development. Some of the actual code of Middleman consists of simplified versions of Abbot.
Here's what I do:
Ruby on Rails 3 with the High Voltage gem, which makes it easy to serve a static page body using the common templates. It requires a simple entry in the routes, and you can use namespaces to create a hierarchy; see the route sketch just after this list.
An Apache reverse proxy to stand-alone Passenger (which uses nginx, I believe) to run the Rails app. This article describes how to configure it.
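A hedged sketch of that routes entry, assuming High Voltage's default controller (check the gem's README for the exact syntax in your version):

```ruby
# config/routes.rb (Rails 3 syntax)
match "/:id" => "high_voltage/pages#show", :as => :page, :format => false
```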
Stand-alone Passenger will read the URL, see if there is a corresponding file in /public with .html appended, and serve that. If not found, it will invoke Rails and generate the page. In essence, page caching, with the option of publishing your URLs with or without the .html. There is a section in the Passenger docs specifically about page caching.
As far as combining and minifying JS and CSS goes, here's a good Stack Overflow thread.
Rails has excellent i18n/l10n support.
The Rails template engine is very nice to work with, and you can use HAML if you prefer.
For your 3rd and 4th points, I'm a little confused. You want CSS and JS combined, but then you want each asset to have its own URL. In Rails, the :cache => true directive on asset tags takes care of adding a query-string parameter that changes when the content does, which is a fairly traditional scheme. I'm not sure what context you are working in where that would not work. Any CDN I've ever used works fine with that, as does any web server that implements the HTTP spec correctly. Anyway, changing the actual path or file name in the URL would require changing all references to it. Maybe I'm misunderstanding?
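For reference, the Rails 3.0-era asset-tag caching looks like this (the source file names are placeholders):

```erb
<%# Combines the listed files into one shop.js when caching is enabled %>
<%= javascript_include_tag "prototype", "cart", :cache => "shop" %>
<%= stylesheet_link_tag "reset", "layout", :cache => true %>
```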
Monkeyman has the template engine you need, I think. Think of it as Middleman's little Scala brother. Nowhere near as mature or feature-rich yet, but we'll get there eventually. The current incarnation supports HAML, Jade, and SSP for layouts, Markdown for content, and a couple of other things.
In no particular order:
jekyll - quite simple
middleman - lots of functionality
nanoc - lots of functionality
stasis - uses controllers
staticmatic
frank
gumdrop
ruby on rails + wget
ruby on rails + high voltage + apache reverse proxy
You should probably also check out mod_pagespeed. It will at least give you this:
Javascript must be combined and minified
CSS files must be combined
Each image / CSS / Javascript asset must have unique URL for better caching (query string isn't enough)
Asset URL should be different only when it really changes
It won't give you this:
Localization support throughout HTML, CSS, Javascript and image files
Nice template engine with layouts, partials etc.
You can have a look at DocPad. It's written in CoffeeScript and runs on Node.js. It is document-based: you write documents and layouts, it compiles them, and it writes them to the out directory. You can write documents in a lot of languages via plugins.
It also supports multiple levels of file compilation, for example from Eco to Markdown to HTML.
Another great feature is that, within one document, you can query the other documents being generated. For example, on the front page you could use something like this to get all blog posts:
database.findAll({url : /posts/})
This will return all documents that have "posts" in their URL.
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to 'watermark' our content by including each customer's user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha, ha, that's just a tiny topic, I'm sure...)
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images, then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
It's a tool to help track down the lazy copiers who just copy the source code as-is. It is not preventative, nor is it a deterrent. – Ian
Going by your comment above, you are happy with users copying your content, just not without the formatting, links, etc. So what you could do is provide users with an embeddable snippet of source code for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, utilize your own CSS, and so on.
That way you can still allow the members to use the content, but it will always come out the way you intended, with links back to your site.
Thanks
You could always cache a version that uses a special string, like #!username!#, and then fill it in later with PHP based on which user is viewing it.
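A minimal sketch of that idea (in Ruby for brevity; the mechanism is identical in PHP). The full article HTML is cached once, shared by all users, and only a cheap string substitution happens per request:

```ruby
PLACEHOLDER = "#!username!#"

# cached_html comes from the shared page cache; only this substitution
# is per-user, so the article itself stays fully cacheable.
def personalize(cached_html, user_id)
  cached_html.gsub(PLACEHOLDER, "userid_#{user_id}")
end

# e.g. personalize(cache.read(article_key), current_user.id)
```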
Another way, I believe, is to switch from caching on the server to letting the browser cache the page locally for a while. That way it is only cached per user, and it reduces the calls to your database. Because an article is pretty static, you could just let the local machine cache it and pull in the comments via JavaScript.
This last one is probably not what you are really looking for, but I'm going to come out and say it anyway. You could stop treating your users like thieves and instead treat the thieves as thieves. Go to the person hosting the servers your content has been copied to and send them an email telling them that copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out which sites are posting your content? Put a link in the body content back to your site, and do a Google Search/Blog Search for articles linking to that site. To automate it, use Google Blog Search, because it offers RSS feeds. Anything with a link back to your site could go into a database with a link to the offending page; someone could review it, and if it is the entire article, do a Whois lookup and send them an email.
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works. I can grab what I want without using any screen capture/recording.
If you were serving FLVs, for example, I would easily be able to grab the source of those even if you overlaid them with CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.
I wonder how to get at the tags on blog posts (WordPress, Blogger/Blogspot) programmatically (API, RSS feed, XML, or other methods). Preferably a solution usable in Ruby on Rails.
See what the lifestreaming Rails apps such as Kakuteru are using. Tagging across multiple Web 2.0-style streams is important to Kakuteru, and I think they employ a number of techniques. They also employ Zemanta, which has an API to generate tags from content. You can see Zemanta's example of getting tags using Ruby.
If either of those has an API, I'd start by reading their documentation; outside of that, you can roll your own screen scraper (which is likely to be useless, given the amount of DOM content that is script-generated these days).
Wordpress API
Blogger API
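Besides the APIs, both platforms already expose tags in their feeds; WordPress, for example, emits post tags and categories as <category> elements in its RSS feed. A minimal Ruby sketch (the feed URL is a placeholder):

```ruby
require 'rss'
require 'open-uri'

# Placeholder URL -- WordPress blogs serve their feed at /feed/.
url  = 'https://example.wordpress.com/feed/'
feed = RSS::Parser.parse(URI.open(url).read)

feed.items.each do |item|
  tags = item.categories.map(&:content)   # tags and categories both land here
  puts "#{item.title}: #{tags.join(', ')}"
end
```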