I have a problem. I built a newsletter system based on TYPO3 8 + EXT:Newsletter, and my customer sends about 4 newsletter pages with attachments every week (to 20,000+ recipients).
The problem is that it's really slow. I've set the interval to 50 newsletters per minute, and at the beginning of the send task it's all okay: it sends 40-50 newsletters per minute. But after some time the rate drops...
Here's a little statistic for yesterday:
16:21 - 17:21 o'clock 933 sent
17:21 - 18:21 o'clock 749 sent
18:21 - 19:21 o'clock 605 sent
And now it's down to 250 per hour. By the way, the customer is sending through his own SMTP (Domainfactory).
Do you have any idea? And yes, the extension uses SwiftMailer from the TYPO3 8 core to send out.
Are you on a shared hosting plan? What you describe sounds like your SMTP account is being throttled because you hit limits defined by your provider. Check Domainfactory's terms of service at: https://www.df.eu/de/agb/
For shared hosting products, the customer is prohibited from sending mass e-mails [...] via the web server by means of scripts.
It states that you are not allowed to send mass mails from their shared hosting.
I have had a performance issue with sending emails through ext:newsletter as well. In my case it was due to the preparation of the links in the newsletter with ext:realurl activated. As soon as I disabled ext:realurl, the emails were sent very fast.
Hope, this helps!
Question
I would like to know if it is possible to measure elapsed time and have a bot post messages in Slack.
If possible, it would be helpful if you could give me a general idea of how to do this.
Ideal
Me「 "start" 」
post date and time: 1/1 9:30
Me「"Finish"」
posting date and time: 1/1 17:00
Bot message 「"I worked 7 hours and 30 minutes."」
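One way to get the general idea (a minimal sketch using Slack's Bolt framework in TypeScript; the trigger words, port, and in-memory storage are assumptions, not something from the question): record the timestamp of the "start" message and compute the difference when "Finish" arrives.

import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Remember when each user said "start" (in memory; a real bot would persist this).
const startTimes = new Map<string, number>();

app.message("start", async ({ message }) => {
  // message.ts is the Slack timestamp: seconds since the epoch, with decimals.
  const m = message as { user?: string; ts: string };
  if (m.user) startTimes.set(m.user, parseFloat(m.ts));
});

app.message("Finish", async ({ message, say }) => {
  const m = message as { user?: string; ts: string };
  const started = m.user ? startTimes.get(m.user) : undefined;
  if (started === undefined) return;
  const totalMinutes = Math.round((parseFloat(m.ts) - started) / 60);
  await say(`I worked ${Math.floor(totalMinutes / 60)} hours and ${totalMinutes % 60} minutes.`);
});

(async () => {
  await app.start(3000); // assumed port
})();

The same approach works with any Slack client library: message events already carry timestamps, so the bot only has to store the "start" one and subtract.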
I'm performing a load test against IBM MQ and would like to have 10 messages/users submitted in 10 minutes (just as a proof of concept).
I'm injecting the respective load like this:
scn_message_ZIP_DP102.inject(rampUsers(10) over(10 minutes)).protocols(jmsConfigMQ1)
But when checking the logs I see the application is being flooded with messages. I'd expect just 10 messages to be submitted in a timeframe of 10 minutes.
Well, we have an answer: in 10 minutes you start 10 users, and each of them sends message after message in a 48-hour loop, so instead of 10 messages you probably have hundreds of millions of them. Remove the during loop and it should be fine, e.g.:
val scnMessageID14 = scenario("Load testing InboundQueue on MQ-HOST-1 with MessageID14")
  .exec(
    jms("F&F testing with MessageID 14")
      .send
      .queue("MESSAGES.QUEUE")
      .textMessage(message14)
  )
I have a working FB bot built with Ruby which allows players to play a scavenger hunt.
Sometimes, though, when I have multiple players in a team, FB sends me a player's 'Answer' webhook twice. I have looked into it and at first thought it was to do with the 20-second timeout if FB gets no 200 OK response (Docs here). After checking the logs, though, I am receiving the second webhook from FB only 14 seconds later. See below:
# Webhook #1
{"object"=>"page", "entry"=>[{"id"=>"252445748474312", "time"=>1532153642358, "messaging"=>[{"sender"=>{"id"=>"1709242109154907"}, "recipient"=>{"id"=>"252445748474312"}, "timestamp"=>1532153641935, "message"=>{"mid"=>"0FeOChulGjuPgg3YJqEgajNsY8kMfNRt_bpIdeegEeE54h-KB8szcd-EQ-UHUT3850RwHgH4TxVYFkoFwxqhtg", "seq"=>402953, "text"=>"Larrikins"}}]}]}
# Webhook #2 (14 seconds later)
{"object"=>"page", "entry"=>[{"id"=>"252445748474312", "time"=>1532153656901, "messaging"=>[{"sender"=>{"id"=>"1709242109154907"}, "recipient"=>{"id"=>"252445748474312"}, "timestamp"=>1532153641935, "message"=>{"mid"=>"0FeOChulGjuPgg3YJqEgajNsY8kMfNRt_bpIdeegEeE54h-KB8szcd-EQ-UHUT3850RwHgH4TxVYFkoFwxqhtg", "seq"=>402953, "text"=>"Larrikins"}}]}]}
Notice both are exactly the same apart from the first "time" attribute (14 secs later).
Due to a number of methods and calls that I process after receiving the first webhook, the 200 OK response is only sent back to FB once I have finished sending my messages in response (hence the 14-second delay).
So I have two questions:
Is the 14-second delay too long, and is that why FB is resending? If so, how can I send a 200 OK response straight away (head :ok)?
Is it another issue entirely?
Also ensure that "Echo" is disabled.
Go to Settings > Webhooks and edit the events.
An asynchronous platform like NodeJS is recommended. In my case I work with AWS SQS: I have workers that process the requests without blocking (they don't wait), and I return 200 "ok" to FB immediately so that FB does not send the message to my webhook again.
Another approach may be to store the mid in a database and check on each request whether that mid already exists; if it does, don't process the message. I used DynamoDB (AWS) with TTL enabled, so the database cleans itself every hour, erasing old requests.
I think it is the 15-second wait before replying; it was also happening to me, as Facebook auto-retries when you don't reply fast enough. Te EEe Te's idea is solid: write some mechanism to cache mids and check whether a message is a duplicate before processing.
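To make both suggestions concrete, here is a minimal sketch (TypeScript/Express, since an asynchronous Node stack was recommended above; the endpoint path, port, and in-memory cache are illustrative assumptions, not the poster's actual setup). It acknowledges the webhook immediately and skips mids it has already seen:

import express from "express";

const app = express();
app.use(express.json());

// In-memory mid cache with a TTL; a shared store (e.g. DynamoDB or Redis)
// would be needed if more than one worker handles webhooks.
const seenMids = new Map<string, number>();
const TTL_MS = 60 * 60 * 1000; // keep mids for one hour

app.post("/webhook", (req, res) => {
  // Acknowledge immediately so Facebook does not retry the delivery.
  res.sendStatus(200);

  for (const entry of req.body.entry ?? []) {
    for (const event of entry.messaging ?? []) {
      const mid = event.message?.mid;
      if (!mid) continue;
      const now = Date.now();
      const firstSeen = seenMids.get(mid);
      if (firstSeen !== undefined && now - firstSeen < TTL_MS) continue; // duplicate delivery
      seenMids.set(mid, now);
      // Run the slow scavenger-hunt logic after the 200 has been sent.
      handleAnswer(event).catch(console.error);
    }
  }
});

// Placeholder for the existing game logic.
async function handleAnswer(event: unknown): Promise<void> { /* ... */ }

app.listen(3000);

In Rails the equivalent of the first step is returning head :ok right away and pushing the game logic onto a background job (ActiveJob/Sidekiq) instead of running it inline in the controller action.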
I am building autocomplete functionality and have realized that the amount of time taken between the client and server is too high (in the range of 450-700ms).
My first stop was to check whether this was a result of server delay.
But as you can see, these Nginx logs almost always show 0.001 seconds (request time is the last column), so the server is hardly a cause for concern.
So it became very evident that I am losing time between the server and the client. My benchmark is Google Instant's response times, which are almost always in the range of 30-40 milliseconds: orders of magnitude lower.
Although it's easy to say that Google has massive infrastructure to deliver at this speed, I wanted to push myself to learn whether this is possible for someone not at that level. If not 60 milliseconds, I want to shave off 100-150 milliseconds.
Here are some of the strategies I’ve managed to learn.
Tune TCP slow start and the initial congestion window (initcwnd)
Enable SPDY if you are on HTTPS
Ensure responses are HTTP-compressed
Etc.
What are the other things I can do here?
e.g.
Does having a persistent connection help?
Should I reduce the response size dramatically?
Edit:
Here are the ping and traceroute numbers. The site is served via Cloudflare from a Fremont Linode machine.
mymachine-Mac:c name$ ping site.com
PING site.com (160.158.244.92): 56 data bytes
64 bytes from 160.158.244.92: icmp_seq=0 ttl=58 time=95.557 ms
64 bytes from 160.158.244.92: icmp_seq=1 ttl=58 time=103.569 ms
64 bytes from 160.158.244.92: icmp_seq=2 ttl=58 time=95.679 ms
^C
--- site.com ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 95.557/98.268/103.569/3.748 ms
mymachine-Mac:c name$ traceroute site.com
traceroute: Warning: site.com has multiple addresses; using 160.158.244.92
traceroute to site.com (160.158.244.92), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 2.393 ms 1.159 ms 1.042 ms
2 172.16.70.1 (172.16.70.1) 22.796 ms 64.531 ms 26.093 ms
3 abts-kk-static-ilp-241.11.181.122.airtel.in (122.181.11.241) 28.483 ms 21.450 ms 25.255 ms
4 aes-static-005.99.22.125.airtel.in (125.22.99.5) 30.558 ms 30.448 ms 40.344 ms
5 182.79.245.62 (182.79.245.62) 75.568 ms 101.446 ms 68.659 ms
6 13335.sgw.equinix.com (202.79.197.132) 84.201 ms 65.092 ms 56.111 ms
7 160.158.244.92 (160.158.244.92) 66.352 ms 69.912 ms 81.458 ms
I may well be wrong, but personally I smell a rat. Your times aren't justified by your setup; I believe that your requests ought to run much faster.
If at all possible, generate a short query using curl and intercept it with tcpdump on both the client and the server.
It could be a bandwidth/concurrency problem on the hosting. Check out its diagnostic panel, or try estimating the traffic.
You can try saving a response to a static file and then requesting that file (taking care not to trigger the local browser cache), to see whether the problem lies in processing the data (either server- or client-side).
Does this slowness affect every request, or only the autocomplete ones? If the latter, and no matter what nginx says, it might be some inefficiency/delay in retrieving or formatting the autocompletion data for output.
Also, you can try serving a static response bypassing nginx altogether, in case this is an issue with nginx (and for that matter: have you checked nginx's error log?).
One approach I didn't see you mention is to use SSL sessions: you can add the following to your nginx conf to make sure that an SSL handshake (a very expensive process) does not happen with every connection request:
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
See "HTTPS server optimizations" here:
http://nginx.org/en/docs/http/configuring_https_servers.html
I would recommend using New Relic if you aren't already. It is possible that your server-side code is the issue. If you think that might be the case, there are quite a few free code-profiling tools.
You may want to consider preloading the autocomplete options in the background while the page is rendered, and then saving a trie (or whatever structure you use) on the client in local storage. When the user starts typing in the autocomplete field, you would not need to send any requests to the server; instead, you would query local storage (see the sketch after the quote below).
Web SQL Database and IndexedDB introduce databases to the clientside.
Instead of the common pattern of posting data to the server via
XMLHttpRequest or form submission, you can leverage these clientside
databases. Decreasing HTTP requests is a primary target of all
performance engineers, so using these as a datastore can save many
trips via XHR or form posts back to the server. localStorage and
sessionStorage could be used in some cases, like capturing form
submission progress, and have been seen to be noticeably faster than the
client-side database APIs.
For example, if you have a data grid component or an inbox with
hundreds of messages, storing the data locally in a database will save
you HTTP roundtrips when the user wishes to search, filter, or sort. A
list of friends or a text input autocomplete could be filtered on each
keystroke, making for a much more responsive user experience.
http://www.html5rocks.com/en/tutorials/speed/quick/#toc-databases
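Here is what that could look like (a minimal sketch in TypeScript; the storage key, endpoint URL, and input selector are illustrative assumptions). The suggestion list is fetched once in the background, and every keystroke is answered from localStorage with no HTTP round trip at all:

const CACHE_KEY = "autocomplete-terms"; // assumed key name

// Preload once in the background while the page renders.
async function preloadTerms(): Promise<void> {
  if (localStorage.getItem(CACHE_KEY)) return; // already cached
  const res = await fetch("/api/autocomplete-terms"); // assumed endpoint
  const terms: string[] = await res.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify(terms));
}

// Filter locally on each keystroke.
function suggest(prefix: string, limit = 10): string[] {
  const terms: string[] = JSON.parse(localStorage.getItem(CACHE_KEY) ?? "[]");
  const p = prefix.toLowerCase();
  return terms.filter((t) => t.toLowerCase().startsWith(p)).slice(0, limit);
}

preloadTerms();
document.querySelector<HTMLInputElement>("#search")?.addEventListener("input", (e) => {
  const results = suggest((e.target as HTMLInputElement).value);
  console.log(results); // render these into the suggestion dropdown in real code
});

This trades memory and staleness for latency: the list has to fit comfortably in localStorage (around 5MB in most browsers) and be refreshed periodically, but each keystroke then costs microseconds instead of a network round trip.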
I am based in the UK and have two webservers: one based in Germany (1&1) and the other based in the UK (Easyspace).
I recently signed up for the UK Easyspace server because it was about the same price as my 1&1 server, but also because I wanted to see if sites hosted on a UK server gave better results for UK-based traffic.
It seems my traffic is roughly the same for both servers... however, 1&1's server performance and customer service are much better than Easyspace's, so I was thinking about cancelling it and getting another 1&1 server.
I understand the latency issues where USA/Asia would be much slower for UK traffic, but I am just wondering what your thoughts are on traffic, SEO, etc., and whether you think I should stick with a UK server or whether it doesn't matter.
Looking forward to your replies.
I have never heard of common search engines ranking sites by their response time, as it is highly variable due to the nature of the internet.
If a search engine penalized you for the subnet you are on, then you would likely have bigger problems.
I get better results on google.com.au for my sites than on other flavours of Google, even though the sites are not hosted in Australia. So I would suggest that the actual physical location of the servers doesn't matter so much, and if you want to rank higher on google.co.uk you might want a .co.uk domain.
Google associates a region with your site mostly through its suffix (TLD/SLD, e.g. .co.uk), but if you create a Google Webmaster Tools account you can tell it otherwise in the odd case that it makes a mistake.
As far as traffic is concerned, the site will load fast for UK visitors, so I suggest using this server if most of your visitors are from the UK. Server location does not have anything to do with SEO.
Stick with your UK server if you think it's better.
My main concern was losing UK-based customers if the server is located outside the UK, but it appears from the comments that this is probably not the case.
However, my UK server is based in Scotland, while my other server is based in Germany, which is actually closer to London than Scotland is.
Just to compare the speed between Scotland server and Germany server:
=== Germany Based ===
Pinging firststopdigital.com [87.106.101.189]:
Ping #1: Got reply from 87.106.101.189 in 126ms [TTL=46]
Ping #2: Got reply from 87.106.101.189 in 126ms [TTL=46]
Ping #3: Got reply from 87.106.101.189 in 126ms [TTL=46]
Ping #4: Got reply from 87.106.101.189 in 126ms [TTL=46]
Variation: 0.4ms (+/- 0%)
Shortest Time: 126ms
Average: 126ms
Longest Time: 126ms
=== UK Based ===
Pinging pb-net.co.uk [62.233.81.163]:
Ping #1: Got reply from 62.233.81.163 in 120ms [TTL=55]
Ping #2: Got reply from 62.233.81.163 in 119ms [TTL=55]
Ping #3: Got reply from 62.233.81.163 in 119ms [TTL=55]
Ping #4: Got reply from 62.233.81.163 in 119ms [TTL=55]
Variation: 0.3ms (+/- 0%)
Shortest Time: 119ms
Average: 119ms
Longest Time: 120ms
The difference is around 6ms, which is not much at all.
Incidentally I just performed a ping to a USA based domain I own:
Pinging pbnetltd.com [74.86.61.36]:
Ping #1: Got reply from 74.86.61.36 in 6.4ms [TTL=121]
Ping #2: Got reply from 74.86.61.36 in 6.3ms [TTL=121]
Ping #3: Got reply from 74.86.61.36 in 6.2ms [TTL=121]
Ping #4: Got reply from 74.86.61.36 in 6.3ms [TTL=121]
Variation: 0.2ms (+/- 3%)
Shortest Time: 6.2ms
Average: 6.3ms
Longest Time: 6.4ms
The USA timings are much quicker, despite the extra distance across the Atlantic to NY and back (it's 9am UK time, so the USA is asleep; I will try again tonight).