I am creating a script to send 10 files to different recipients on a daily basis. The files can be up to 15 MB each. PHPMailer is taking a lot of time (10 to 15 minutes), so it has become unusable for me. I am using Gmail SMTP.
//Server settings
//$mail->SMTPDebug = SMTP::DEBUG_SERVER; // Enable verbose debug output
$mail->SMTPDebug = 0; // Disable debug output (SMTP::DEBUG_OFF)
$mail->isSMTP(); // Send using SMTP
$mail->Host = 'smtp.gmail.com'; // Set the SMTP server to send through
$mail->SMTPAuth = true; // Enable SMTP authentication
$mail->Username = 'rmudelhi@gmail.com'; // SMTP username
$mail->Password = 'rainfall2016'; // SMTP password
//$mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS; // Enable TLS encryption; `PHPMailer::ENCRYPTION_SMTPS` encouraged
$mail->SMTPSecure = 'tls';
$mail->Port = 587;
Can someone help me to improve this?
It's not that PHPMailer is slow, it's that you're trying to perform an inherently slow operation during page load; you're transferring around 150 megabytes per recipient, which is never going to happen quickly.
Do not do the email sending during the page load - save the fact that you need to do the send in your database or queue, and process the send asynchronously.
The easiest way to do this is to use a cron job to pick up the send tasks and run them.
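For illustration, a cron-driven worker might look like this sketch in Python (the table, columns, account, and relay here are placeholders, not anything from your setup):
# Hypothetical worker, run from cron, e.g.: */5 * * * * python send_pending.py
# It picks queued sends out of a table and processes them outside any page load.
import smtplib
import sqlite3
from email.message import EmailMessage

db = sqlite3.connect('mailqueue.db')  # placeholder queue store
pending = db.execute(
    "SELECT id, recipient, subject, body FROM sends WHERE done = 0").fetchall()

with smtplib.SMTP('smtp.gmail.com', 587) as smtp:
    smtp.starttls()
    smtp.login('user@example.com', 'app-password')  # placeholder credentials
    for task_id, recipient, subject, body in pending:
        msg = EmailMessage()
        msg['From'] = 'user@example.com'
        msg['To'] = recipient
        msg['Subject'] = subject
        msg.set_content(body)  # body can just contain download links (see below)
        smtp.send_message(msg)
        db.execute("UPDATE sends SET done = 1 WHERE id = ?", (task_id,))
        db.commit()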
Alternatively, do not send the files by email, but send a message containing links that point at them, so that the recipients can download them via HTTP more efficiently. Email is simply not an efficient method for sending large files.
I want to create a simple gRPC endpoint to which the user can upload his/her picture. The protocol buffer declaration is the following:
message UploadImageRequest {
  AuthToken auth = 1;
  // An enum with either JPG or PNG
  FileType image_format = 2;
  // Image file as bytes
  bytes image = 3;
}
Is this approach to uploading (and receiving) pictures still OK regardless of the warning in the gRPC documentation?
And if not, is the better (standard) approach to upload pictures using a standard form and store the image file location instead?
For large binary transfers, the standard approach is chunking. Chunking can serve two purposes:
1. reduce the maximum amount of memory required to process each message
2. provide a boundary for recovering partial uploads.
For your use case, #2 probably isn't very necessary.
In gRPC, a client-streaming call allows for fairly natural chunking since it has flow control, pipelining, and is easy to maintain context in the client and server code. If you care about recovery of partial uploads, then bidirectional-streaming works well since the server can be responding with acknowledgements of progress that the client can use to resume.
Chunking using individual RPCs is also possible, but has more complications. When load balancing, the backend may be required to coordinate with other backends for each chunk. If you upload the chunks serially, then network latency can slow the upload, since you spend most of the time waiting for responses from the server. You then either have to upload in parallel (but how many in parallel?) or increase the chunk size. But increasing the chunk size increases the memory required to process each chunk and coarsens the granularity for recovering failed uploads. Parallel upload also requires the server to handle out-of-order chunks.
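As a rough sketch of that client-streaming shape in Python with grpcio (the generated module names, the stub name, and the 64 KiB chunk size are my assumptions, not something from the question):
import grpc
# Modules assumed to be generated from a streaming variant of the proto.
import image_pb2
import image_pb2_grpc

CHUNK_SIZE = 64 * 1024  # small chunks bound per-message memory on both sides

def chunked_requests(path):
    # Yield one request message per file chunk.
    with open(path, 'rb') as f:
        for piece in iter(lambda: f.read(CHUNK_SIZE), b''):
            yield image_pb2.UploadImageRequest(image=piece)

with grpc.insecure_channel('localhost:50051') as channel:
    stub = image_pb2_grpc.ImageServiceStub(channel)
    # One client-streaming call; gRPC's flow control paces the chunks.
    ack = stub.UploadImage(chunked_requests('photo.jpg'))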
The solution provided in the question will not work for large files; it is only suitable for smaller images.
The better, standard approach is chunking. gRPC supports streaming built in, so it is fairly easy to send a file in chunks:
syntax = "proto3";

message UploadImageRequest {
  bytes image = 1;
}

rpc UploadImage(stream UploadImageRequest) returns (Ack);
In the above way we can use streaming for chunking.
For the chunking itself, every language provides its own way to split a file by chunk size.
Things to take care of:
you need to handle the chunking logic yourself; streaming just gives you a natural way to send the chunks.
if you want to send metadata as well, there are three approaches:
1: Use the structure below:
message UploadImageRequest {
  AuthToken auth = 1;
  FileType image_format = 2;
  bytes image = 3;
}

rpc UploadImage(stream UploadImageRequest) returns (Ack);
Here, bytes still carries the chunks; send AuthToken and FileType with the first chunk only, and simply omit the metadata on all subsequent requests.
2: You can also use a oneof, which is much easier (see the client sketch after these approaches):
message UploadImageRequest {
  oneof test_oneof {
    Metadata meta = 2;
    bytes image = 1;
  }
}

message Metadata {
  AuthToken auth = 1;
  FileType image_format = 2;
}
rpc UploadImage(stream UploadImageRequest) returns (Ack);
3: Just use the structure below, send the metadata in the first chunk, and put data in all the other chunks. You need to handle that framing in code:
syntax = "proto3";

message UploadImageRequest {
  bytes message = 1;
}

rpc UploadImage(stream UploadImageRequest) returns (Ack);
Lastly, for auth you can use headers (gRPC call metadata) instead of sending the token in the message.
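To make approach 2 concrete, here is a hedged Python sketch: the first streamed message carries the Metadata, every later message carries a chunk, and the auth token travels in call headers as suggested above. The generated module names, the stub name, and the FileType enum value are assumptions:
import grpc
import image_pb2       # assumed to be generated from the oneof proto above
import image_pb2_grpc

def upload_requests(path):
    # First message: metadata only (the oneof's meta branch).
    meta = image_pb2.Metadata(image_format=image_pb2.JPG)  # enum value assumed
    yield image_pb2.UploadImageRequest(meta=meta)
    # Every following message: a raw chunk (the oneof's image branch).
    with open(path, 'rb') as f:
        for piece in iter(lambda: f.read(64 * 1024), b''):
            yield image_pb2.UploadImageRequest(image=piece)

with grpc.insecure_channel('localhost:50051') as channel:
    stub = image_pb2_grpc.ImageServiceStub(channel)
    ack = stub.UploadImage(upload_requests('photo.jpg'),
                           metadata=(('authorization', 'Bearer <token>'),))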
Based on the following code, I built a version of an echo server, but with a threaded delay. This was built because I've noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back... say 10-30 seconds later (could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code; not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and debugging reveals that the writes go successfully.
http://pastebin.com/g02S2RTi
But I only receive the echo from the send-before-last. I send numbers from the client to validate this, and when I send a 1 from the client, I receive nothing from the server via echo... even though the server is definitely writing to the buffer using evbuffer_add (I have also tried this using bufferevent_write_buffer).
When I then send a 2 from the client, I receive the 1 from the previous send. It's like my writes are being cached... I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own... I have even put in delays. Replacing evbuffer_add with send works perfectly every time.
Most likely you are affected by Nagle's algorithm - basically, it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example how to disable buffering:
int flag = 1;
int result = setsockopt(sock,            /* socket affected */
                        IPPROTO_TCP,     /* set option at TCP level */
                        TCP_NODELAY,     /* name of option */
                        (char *) &flag,  /* the cast is historical cruft */
                        sizeof(int));    /* length of option value */
While writing an app that uses socket.io, I'm finding that the heartbeat debug messages drown out the debug messages I want to see. What's the best way to shut off the debug messages for only the heartbeat?
Set up your socket.io with environment-specific configuration, and for the development environment turn down the 'heartbeat interval'.
Here are some tips; most of them come from the Configuring-Socket.IO page of the socket.io 0.9 wiki.
var io = require('socket.io').listen(80);
// This is sugar for if process.env.NODE_ENV==='production'
io.configure('production', function(){
// send minified client
io.enable('browser client minification');
// apply etag caching logic based on version number
io.enable('browser client etag');
io.enable('browser client gzip'); // gzip the file
io.set('log level', 1); // reduce logging
// enable all transports
// (optional if you want flashsocket support,
// please note that some hosting
// providers do not allow you to create servers that listen on a
// port different than 80 or their
// default port)
io.set('transports', [
'websocket'
, 'flashsocket'
, 'htmlfile'
, 'xhr-polling'
, 'jsonp-polling'
]);
});
io.configure('development', function(){
io.set('heartbeats', false); //removes heartbeats
io.set('log level', 1); // reduces all socket.io logging, including heartbeats.
});
So do that, and then you're good on the heartbeats being on (the default) in production, and while you're at it you've got the basic suggested production settings.
Also, depending on your framework, or where exactly you are in the express/socket.io/top-level-app way of doing things, you might have a setup like:
app.io.enable('browser client minification');
or
app.configure(function() {
app.enable('junny')
app.set('foo', 'bar')
});
So, to then control the heartbeats, first make sure you're getting the most out of your express/api logging, so load your app like this:
$ DEBUG=* node ./app.js
Then detect if you're in the development env (most top level way):
if (process.env.NODE_ENV === 'development') {
  io.set('heartbeat interval', 45); // value is in seconds (default is 25)
}
That turns the heartbeat interval up to 45 seconds.
One can find all the options on the socket.io configuration wiki page; the blockquote most relevant to this question:
heartbeat timeout defaults to 60 seconds
The timeout for the client, we should receive a heartbeat from the
server within this interval. This should be greater than the heartbeat
interval. This value is sent to the client after a successful
handshake.
heartbeat interval defaults to 25 seconds
The timeout for the server when it should send a new heartbeat to the
client.
If you still want heartbeats, just less often, adjust both values upward, keeping the interval below the timeout. You'll have to test on your own system, but follow that general ratio and adjust from there.
I want to process a bunch of my emails with Net::IMAP, but I want to skip the ones with attachments because they take too long. Any hints? Ultimately, I'm looking to download all the text and HTML content of emails, as fast as possible. Right now, I'm fetching one UID at a time, and parallelizing it with 15 processes (the max GMAIL allows), but I'm hitting a snag on the messages with attachments.
imap = Net::IMAP.new('imap.gmail.com', 993, usessl = true, certs = nil, verify = false)
imap.authenticate('XOAUTH2', user.email, user.token)
mailbox = "[Gmail]/All Mail"
imap.select(mailbox)
message_id = 177
imap.fetch(message_id,'RFC822')[0].attr['RFC822'] # this fetches the whole message, including attachment, which makes it slow...
The IMAP RFC does not provide anything like this directly, but there is a possible hack: assuming the mail follows the RFC standard, fetch only the message header and check the Content-Type; if it is multipart/mixed, the mail almost certainly contains an attachment.
You can get more info on content types at http://en.wikipedia.org/wiki/MIME
This is just one possible way to find an attachment.
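For illustration, the check looks like this with Python's imaplib (the equivalent FETCH items also work from Ruby's Net::IMAP); the message number and credentials are placeholders:
import imaplib

imap = imaplib.IMAP4_SSL('imap.gmail.com', 993)
imap.login('user@example.com', 'app-password')  # or XOAUTH2, as in the question
imap.select('"[Gmail]/All Mail"')

# BODYSTRUCTURE is tiny compared to RFC822: no attachment bytes come down.
typ, data = imap.fetch('177', '(BODYSTRUCTURE)')
structure = data[0].decode('utf-8', 'replace').upper()

if 'MIXED' in structure:
    pass  # multipart/mixed: almost certainly has an attachment, so skip it
else:
    # Fetch only the body text rather than the full raw message.
    typ, data = imap.fetch('177', '(BODY.PEEK[TEXT])')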
I am writing a newsletter application using CDO.Message, but I get an error back that we have too many connections. It seems they have a limit of 10 simultaneous connections.
So, is there a way to send several messages on one connection, or disconnect faster?
There is a cdo/configuration/smtpconnectiontimeout parameter, but I think that's more about how long the sender will try.
(If we send and it fails, it will succeed again after some minutes, which probably means the connection has been dropped.)
(We are using CDO partly because we are pulling the HTML message body from a webserver)
Edit:
Public Sub ipSendMail(ByVal toEmail As String, ByVal fromEmail As String, ByVal subject As String, ByVal url As String)
Dim iMsg As Object
Set iMsg = CreateObject("CDO.Message")
iMsg.From = fromEmail
iMsg.To = toEmail
iMsg.Subject = subject
iMsg.CreateMHTMLBody(url)
iMsg.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
iMsg.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "relay.wwwwwwwwww.net"
iMsg.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
iMsg.Configuration.Fields.Item _
("http://schemas.microsoft.com/cdo/configuration/smtpconnectiontimeout") = 0
iMsg.Configuration.Fields.Update()
iMsg.Send()
Set iMsg = Nothing
End Sub
Try using System.Web.Mail.SmtpMail instead of CDO.
You could implement a queue that is processed by a background thread. The background thread would only send one message at a time.
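To sketch the single-connection idea (in Python for brevity; the relay name is a placeholder): one background thread drains a queue and reuses a single SMTP connection, so you never go near the 10-connection limit.
import queue
import smtplib
import threading

outbox = queue.Queue()  # producers enqueue email.message.EmailMessage objects

def mail_worker():
    # One connection for many messages, instead of connect/send/disconnect
    # per newsletter recipient.
    with smtplib.SMTP('relay.example.net', 25) as smtp:  # placeholder relay
        while True:
            msg = outbox.get()
            if msg is None:  # sentinel: shut the worker down
                break
            smtp.send_message(msg)
            outbox.task_done()

threading.Thread(target=mail_worker, daemon=True).start()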
You can store the email in a database table that is processed by a scheduled task or a stored procedure. Those can again send one mail at a time, and have the advantage of being able to retry if something goes wrong.
Ordinarily you only need one connection regardless of how many messages you are sending.
Perhaps you are not releasing something that you should be.
Edit: Just a thought - the SMTP server you are sending to, it wouldn't happen to be hosted on an XP box, perhaps for testing reasons?
Edit: Ok so your SMTP server is fine.
What platform is the server supplying the result of the URL?
I know that CDO can be quirky at times, so these are the possible suggestions that I would make:
A queue would probably work the best for you. After that, I would consider setting up a local SMTP server without inbound connection limits that uses a smarthost to queue up your outbound messages. (This could actually be written fairly easily. The "S" is for "Simple" and it actually is.)
If all else fails... You could always roll your own mailer component implementing RFCs 2821 and 2822 (or whatever the latest and greatest RFCs are for SMTP and message format)
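As a sketch of how little code the accept-and-queue half takes (the "S" really is for "Simple"), here is a minimal spooling SMTP listener using Python's aiosmtpd; the library choice, port, and spool layout are mine, purely for illustration. A separate process would then relay the spooled files through the smarthost:
import os
import time
from aiosmtpd.controller import Controller

class SpoolHandler:
    # Accept every message and spool it to disk for later relaying.
    async def handle_DATA(self, server, session, envelope):
        os.makedirs('spool', exist_ok=True)
        fname = 'spool/%f.eml' % time.time()  # illustrative naming scheme
        with open(fname, 'wb') as f:
            f.write(envelope.content)  # the raw message bytes
        return '250 Message accepted for delivery'

controller = Controller(SpoolHandler(), hostname='127.0.0.1', port=2525)
controller.start()  # runs in its own thread; no inbound connection cap here
input('SMTP spool listening on port 2525; press Enter to stop\n')
controller.stop()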
EDIT: If the newsletter you are sending out is identical for all recipients, you can address it to a dummy recipient (e.g. newsletter@yourdomain.com) and BCC it to the recipient list (or a subset of the recipient list). Just be careful not to get flagged as unsolicited commercial email. Let your provider know what you are doing. They have to deal with the complaints, and you are the one paying the bill. Letting them know that complaints would be mostly unwarranted (and few and far between) will help to assuage their natural risk aversion.