Swiftmailer error - what does this stack trace mean?

I have Swiftmailer set up to send emails via turbo-smtp, and have just started getting a 566 SMTP limit exceeded error. Is this turbo-smtp telling me I've sent too many emails, or my server's ISP, or is there something else in that error that needs to be addressed?
I'm not sending any more emails than I normally do via the day to day operation of the site.
Here's the error:
[09-Feb-2016 01:34:02 UTC] PHP Fatal error: Uncaught exception 'Swift_TransportException' with message 'Expected response code 354 but got code "566", with message "566 SMTP limit exceeded
"' in /usr/local/lib/php/Swift/Transport/AbstractSmtpTransport.php:386
Stack trace:
#0 /usr/local/lib/php/Swift/Transport/AbstractSmtpTransport.php(281): Swift_Transport_AbstractSmtpTransport->_assertResponseCode('566 SMTP limit ...', Array)
#1 /usr/local/lib/php/Swift/Transport/EsmtpTransport.php(245): Swift_Transport_AbstractSmtpTransport->executeCommand('DATA\r\n', Array, Array)
#2 /usr/local/lib/php/Swift/Transport/AbstractSmtpTransport.php(321): Swift_Transport_EsmtpTransport->executeCommand('DATA\r\n', Array)
#3 /usr/local/lib/php/Swift/Transport/AbstractSmtpTransport.php(432): Swift_Transport_AbstractSmtpTransport->_doDataCommand()
#4 /usr/local/lib/php/Swift/Transport/AbstractSmtpTransport.php(449): Swift_Transport_AbstractSmtpTransport->_doMailTransaction(Object(Swift_Message), 'support@songboo...', Array, Array)
#5 /usr/local/lib/php/Swift/Transport/Abstra in /usr/local/lib/php/Swift/Transport/AbstractSmtpTransport.php on line 386
My Swift code:
$transport = Swift_SmtpTransport::newInstance('pro.turbo-smtp.com', 25);
I also tried
$transport = Swift_SmtpTransport::newInstance('pro.turbo-smtp.com', 465, 'ssl');
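Since the 566 reply surfaces as a Swift_TransportException, one way to make quota problems visible instead of fatal is to pull the SMTP reply code out of the exception message and alert on 5xx codes. A minimal sketch, assuming the message format shown in the trace above; the helper name is hypothetical, not part of Swiftmailer:

```php
<?php
// Hypothetical helper: extract the three-digit SMTP reply code from a
// Swift_TransportException message like the one in the trace above, so
// 5xx quota errors (e.g. 566) can be logged or alerted on.
function smtpCodeFromMessage(string $message): ?int
{
    if (preg_match('/got code "(\d{3})"/', $message, $m)) {
        return (int) $m[1];
    }
    return null; // message did not match the expected format
}

$msg = 'Expected response code 354 but got code "566", with message "566 SMTP limit exceeded"';
// smtpCodeFromMessage($msg) returns 566
```

In the send path you would wrap $mailer->send() in a try/catch for Swift_TransportException and feed the exception's message through this helper.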
Thanks for your time and help.

Turned out I'd hit my turbo-smtp limit. I hadn't looked at it for a couple of years, so had forgotten how it all worked. Upgraded my account, and it's working again. Ideally, they'd send you a simple email saying - "hey, you've hit your email limit, and emails aren't getting out - you should upgrade to fix the problem" for us dummies.

Related

Sentry Rate Limiting

We have a couple of AWS Lambda functions which use the same Sentry Client Key for error reporting. Recently we started to receive a rate-limiting error.
{
"errorType": "SentryError",
"errorMessage": "HTTP Error (429)",
"name": "SentryError",
"stack": [
"SentryError: HTTP Error (429)",
" at new SentryError (/var/task/node_modules/#sentry/utils/dist/error.js:9:28)",
" at ClientRequest.<anonymous> (/var/task/node_modules/#sentry/node/dist/transports/base/index.js:212:44)",
" at Object.onceWrapper (events.js:421:26)",
" at ClientRequest.emit (events.js:314:20)",
" at ClientRequest.EventEmitter.emit (domain.js:483:12)",
" at HTTPParser.parserOnIncomingClient (_http_client.js:601:27)",
" at HTTPParser.parserOnHeadersComplete (_http_common.js:122:17)",
" at TLSSocket.socketOnData (_http_client.js:474:22)",
" at TLSSocket.emit (events.js:314:20)",
" at TLSSocket.EventEmitter.emit (domain.js:483:12)"
]
}
However, no actual errors occur in the Lambda function. From what we understand, Sentry is recording every Lambda execution and sending this information as a trace. We are using a low tracesSampleRate, but the error still occurs.
AWSLambda.init({
dsn: process.env.SENTRY_DNS,
sampleRate: 1.0,
tracesSampleRate: 0.1,
});
I don't know if this is relevant, but the functions are not inside our VPC; they use the AWS IP address pool.
We tried to find any indication of errors related to rate limiting in the Sentry dashboard, but without success.
Thanks!
I started getting this issue in one of our systems around the same time you posted this. Something that stopped it happening was to disable tracing altogether (tracesSampleRate: 0). Not truly a fix, but enough to clear our error logs for now while we investigate a proper one.

Google.apis returns error code 400 after creating the maximum number of service account keys

We are using the Google.apis SDK version 1.36.1 to create service account keys for GCP service accounts.
When we reach the maximum number of keys (10), instead of a meaningful error message / error code we receive a generic 400 error code with a "Precondition check failed." message.
We used to get error code 429, indicating we had reached the maximum number of keys.
Current GoogleApiException object :
Google.GoogleApiException: Google.Apis.Requests.RequestError
Precondition check failed. [400]
Errors [
Message[Precondition check failed.] Location[ - ] Reason[failedPrecondition] Domain[global]
]
The current return code does not provide us with enough information. Is there any other way for us to know the reason for the failure?
This error message is also related to limits. You can take the official documentation for the Classroom API as an example.
I have found myself in a similar situation where we were deleting service account keys and immediately creating new ones. We were getting the same error because there is a delay in the system: it can take 60-90 seconds for the key deletion to go through before you are able to create the key again.
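If you must delete and immediately recreate keys, the delay described above can be worked around by retrying the create call. A sketch under the assumption that the client surfaces the failure as an exception; the function name, attempt count, and delay are illustrative, not part of any Google SDK (the question uses the C# SDK, where the same shape applies with a try/catch on GoogleApiException):

```php
<?php
// Hypothetical retry helper: re-attempt an operation that fails while a
// just-deleted service account key is still being purged (reportedly
// 60-90 seconds). $op is whatever call creates the key.
function retryOnPrecondition(callable $op, int $maxAttempts = 5, int $delaySeconds = 30)
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $op();
        } catch (RuntimeException $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // still failing after the purge window: give up
            }
            sleep($delaySeconds); // wait out part of the 60-90 s window
        }
    }
}
```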

Outlook REST API 500 LegacyPagingToken error

I am using the Microsoft Outlook REST API to synchronize messages in a folder using skipTokens with the Prefer: odata.track-changes header.
After 62 successful rounds of results, I get an error 500 ErrorInternalServerError with the message Unable to cast object of type 'LegacyPagingToken' to type 'Microsoft.Exchange.Services.OData.Model.SkipToken'
I have tried:
Retrying the same query (https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o), which results in the same error
Restarting the sync, which results in the same error at the same point
Adding a new message to the Inbox and restarting the sync, which results in the same error at the same point
Moving the messages from that part of the sync to another folder (in case the messages themselves were causing the problem), which results in the same error at the same point
Has anybody run into this error or have suggestions on what might cause it or workarounds?
It looks like the issue was on my end while parsing the skipToken from the @odata.nextLink response. The token in the original question is invalid - the actual skipToken passed back from the API had -AAAA on the end. After 63 queries, in which the skipToken increments, the Base64-encoded form started using characters the regexp I was using didn't match. Switching from a \w regexp to a proper URL parser solved the problem.
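For reference, extracting the token with a real URL parser instead of a regexp might look like the following sketch in PHP (the sample link is illustrative; an equivalent URL parser in any language works the same way):

```php
<?php
// Parse the skipToken out of an @odata.nextLink with parse_url/parse_str
// instead of a \w regexp, so Base64url-style characters like '-' and '_'
// in the token survive intact.
function skipTokenFromNextLink(string $nextLink): ?string
{
    $query = parse_url($nextLink, PHP_URL_QUERY);
    if (!is_string($query)) {
        return null; // no query string at all
    }
    parse_str($query, $params); // decodes %24skipToken to the key '$skipToken'
    return $params['$skipToken'] ?? null;
}

$link = 'https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o-AAAA';
// skipTokenFromNextLink($link) returns the full token, -AAAA suffix included
```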

Nothing found the first time on Magento catalog search. General error: 1205

When you search for certain keywords on my Magento store, sometimes nothing appears, but if you keep searching for the same keyword, results finally appear.
I changed the variable 'innodb_lock_wait_timeout' from 50 to 100, but nothing changed.
Sometimes I get this error:
a:5:{i:0;s:91:"SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction";i:1;s:3666:"#0 /home/nginx/www/includes/src/Varien_Db_Statement_Pdo_Mysql.php(110): Zend_Db_Statement_Pdo->_execute(Array)
#1 /home/nginx/www/includes/src/__default.php(64796): Varien_Db_Statement_Pdo_Mysql->_execute(Array)
#2 /home/nginx/www/includes/src/__default.php(54128): Zend_Db_Statement->execute(Array)
....
I need help!

Redis shuts down my Magento website

I have looked everywhere for an answer to this issue, but could not find one.
My issue is with the Redis server: every night at a random hour my website simply shuts down with the error below.
To get my website back up I just need to reboot my AWS instance.
Thank you so much for your help!
PHP Fatal error: Uncaught exception 'CredisException' with message ' operation not permitted' in /var/www/lib/Credis/Client.php:704
Stack trace:
#0 /var/www/lib/Credis/Client.php(538): Credis_Client->read_reply('select')
#1 /var/www/lib/Credis/Client.php(440): Credis_Client->__call('select', Array)
#2 /var/www/app/code/community/Cm/Cache/Backend/Redis.php(135): Credis_Client->select(0)
#3 /var/www/lib/Zend/Cache.php(153): Cm_Cache_Backend_Redis->__construct(Array)
#4 /var/www/lib/Zend/Cache.php(94): Zend_Cache::_makeBackend('Cm_Cache_Backen...', Array, true, true)
#5 /var/www/app/code/core/Mage/Core/Model/Cache.php(137): Zend_Cache::factory('Varien_Cache_Co...', 'Cm_Cache_Backen...', Array, Array, true, true, true)
#6 /var/www/app/code/core/Mage/Core/Model/Config.php(1348): Mage_Core_Model_Cache->__construct(Array)
#7 /var/www/app/Mage.php(463): Mage_Core_Model_Config->getModelInstance('core/cache', Array)
#8 /var/www/app/code/core/Mage/Core/Model/App.php(401): Mage::getModel('core/cache', Array)
#9 /var/www/app/code/core/Mag in /var/www/lib/Credis/Client.php on line 704
This sounds like your Redis instance is left open on the internet and you've been hacked.
Make sure you secure your instance correctly.
Gist explaining the problem
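As a starting point, a typical hardening of redis.conf looks like the following (directive values are examples; adjust for your deployment, and note that protected-mode requires Redis 3.2+):

```
# Listen only on localhost (or a private interface), never 0.0.0.0
bind 127.0.0.1
# Refuse connections from other hosts when no auth/bind is configured
protected-mode yes
# Require AUTH before any command
requirepass use-a-long-random-password-here
# Optionally disable commands remote attackers commonly abuse
rename-command CONFIG ""
rename-command FLUSHALL ""
```

After changing the config, restart Redis and update the Magento cache backend settings with the new password.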
