Suddenly can't access DO Spaces locally (Laravel)

I have a Laravel site up and running. We have three copies currently working: local, staging and production.
Up until today all three of these were accessing the same DigitalOcean Spaces bucket with no issue.
Today we are getting a timeout whenever a request is made from the local environment; it continues to work perfectly on staging and production. Our .env files are identical with the exception of app key / name etc. Our config files are identical. The code that makes the request is identical.
We are receiving the following error
Aws\S3\Exception\S3Exception: Error executing "ListObjects" on "https://example.com/?prefix=document.pdf%2F&max-keys=1&encoding-type=url"; AWS HTTP error: cURL error 28: Failed to connect to site.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://example.com/?prefix=document.pdf%2F&max-keys=1&encoding-type=url in file /var/www/html/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php on line 195
We have tried everything we can think of. We have completely restarted the local servers (Laravel Sail) to no effect. The only difference is that the local copy of the site is served over http, whereas both staging and production are served over https. This hasn't caused an issue in the past, however.
Any ideas on what could be causing this would be greatly appreciated.
Thanks

To anyone who finds this in the future:
The issue resolved itself after about 12 hours.
It is almost certain that this was an issue on DO's end.
If it occurs again I'll be contacting support, as @James has pointed out.
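If it does recur, a quick probe like the one below can help isolate it. This is a minimal sketch meant to be run from php artisan tinker inside the Sail container; the DO_SPACES_* env key names and the timeout values are assumptions, not taken from the original post. If it fails with cURL error 28 from inside the container, the problem is network/DNS or on the provider's side rather than in the Laravel configuration.

use Aws\S3\S3Client;

$client = new S3Client([
    'version'  => 'latest',
    'region'   => env('DO_SPACES_REGION'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'credentials' => [
        'key'    => env('DO_SPACES_KEY'),
        'secret' => env('DO_SPACES_SECRET'),
    ],
    // Fail fast instead of hanging on the default cURL timeouts.
    'http' => ['connect_timeout' => 5, 'timeout' => 10],
]);

try {
    $result = $client->listObjectsV2(['Bucket' => env('DO_SPACES_BUCKET'), 'MaxKeys' => 1]);
    echo 'Reachable, keys returned: ' . $result['KeyCount'] . PHP_EOL;
} catch (\Aws\Exception\AwsException $e) {
    // cURL error 28 here means the container cannot reach the endpoint at all,
    // which points at networking/DNS or the provider rather than application code.
    echo $e->getMessage() . PHP_EOL;
}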

Related

`ddev get --list` doesn't work (lookup api.github.com: i/o timeout)

I need to add Solr to a DDEV project but am encountering errors when attempting to gather information about available services.
I'm following guidance here:
https://ddev.readthedocs.io/en/stable/users/extend/additional-services/
When I attempt to list all available services: ddev get --list, I receive this response after approx 30 seconds:
Failed to list available add-ons: Unable to get list of available services: Get "https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud": dial tcp: lookup api.github.com: i/o timeout
I'm not sure what the problem is. If I curl the URL from the error message, i.e. curl https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud, I receive a JSON response from GitHub with information about the repository.
This has been happening for over two days now. I may be overlooking something but am not sure what, exactly. I'm able to run DDEV projects using the standard installation (mariadb, nginx, nodejs, mailhog), but I continue to run into errors when listing add-ons.
I have ddev v1.21.4 installed.
I'm using an M1 Mac on macOS 13.1.
Thank you.
Your system is unable to do a DNS lookup of the hostname api.github.com, and this is happening on your macOS host. Are you able to ping api.github.com? Have you tried rebooting?
You may want to temporarily disable firewall, VPN, virus checker to see if that changes things. But you'll want to be able to get to where you can ping api.github.com.
There is an obscure golang problem on macOS affecting situations where people have more than one DNS server, so that could be it if you're in that category. You also might want to consider changing the DNS server for your system to 1.1.1.1, as this can sometimes be a problem with your local DNS server (but of course the fact that you can curl the URL argues against that).

Laravel S3 File get contents

Inside of my Laravel application, inside of my job class, I have the following code. On my live server this code runs just fine; however, on my local I get an error and am not sure what I need to do to fix this problem. Has anyone been able to solve this using the AWS S3 file driver for Laravel?
Storage::disk('s3')->put($path, file_get_contents($this->url), 'public');
file_get_contents(http://webapp.dev/storage/uploads/folder/folder/folder/imagename.jpeg): failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found
Do you have a webserver running locally that listens on port 80 for requests made to webapp.dev?
Does the directory for webapp.dev in fact have "imagename.jpeg" in that location?
This just looks like a 404 because that address doesn't exist on your local environment, but does exist on your live one.
Or, the context of $this is different on your local environment than it is on your production environment. We can't tell that from your original post, though, because you've only provided that one line and the resulting error.
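If the file already exists on the local filesystem, one way around the loopback HTTP request is to read it from a local disk instead. A rough sketch, assuming a 'public' disk and a relative path derived from the URL (both are assumptions, not from the original post):

use Illuminate\Support\Facades\Storage;

// Hypothetical relative path; only the full URL appears in the question.
$relative = 'uploads/folder/folder/folder/imagename.jpeg';

if (Storage::disk('public')->exists($relative)) {
    // Read straight from disk, avoiding an HTTP request back to the app.
    Storage::disk('s3')->put($path, Storage::disk('public')->get($relative), 'public');
} else {
    // Otherwise fall back to fetching the URL, but handle the failure explicitly
    // instead of letting file_get_contents() emit a warning and pass false to put().
    $contents = @file_get_contents($this->url);
    if ($contents === false) {
        throw new \RuntimeException("Could not fetch {$this->url}");
    }
    Storage::disk('s3')->put($path, $contents, 'public');
}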

Mailgun emails work on local but not server

I am using the same setup locally as on the server. I have an EC2 instance running Ubuntu, and I am using Docker to host an Ubuntu image which runs my Laravel project on nginx and PHP 7. My local is set up exactly the same; I use the same Docker image and everything.
When I test my emails on my local they work seamlessly, no errors or problems, but as soon as I test on my EC2 I get the following error in Laravel:
Swift_TransportException: Connection to tcp://smtp.mailgun.org:587 Timed Out in /app/vendor/swiftmailer/swiftmailer/lib/classes/Swift/Transport/AbstractSmtpTransport.php:404
I have tried using ports 25, 2525 and 465, but with the exact same result. Here are my env variables:
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailgun.org
MAIL_PORT=587
MAIL_USERNAME=postmaster@placeholder.com
MAIL_PASSWORD=5uup3rL0nGPa55w0RdY0uPr0bablykn0
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=no-reply@placeholder.com
MAIL_FROM_NAME="placeholder Team"
MAILGUN_DOMAIN=placeholder.com
MAILGUN_SECRET=key-MyK3y1s0ac001y0uw15hy0uhadi7h3h3
The secret and password are fake data.
On Mailgun's dashboard I have verified my domain (locally I use localhost.MYDOMAIN.com pointing to 127.0.0.1) and all checks are green except for mxa.mailgun.org and mxb.mailgun.org, because we are using Gmail for our emails. Not sure if this is the source, but I cannot risk disabling the emails just for a test.
If I telnet to Mailgun using telnet smtp.mailgun.org 25 (or any other port) I get a connection, so I do have access.
I also applied to have Amazon lift the email sending throttling it puts on EC2 servers. I'm not sure when this will actually take effect, so I don't know whether it will help (it might).
I am not sure why I only get a timeout on my server while it works on my local, but any advice would be appreciated!
I did try searching around for answers but did not succeed.
WORKAROUND: If anyone is struggling with this same issue, it is not worth the effort; just implement the Mailgun API instead. That way you do not have to put up with these issues. It is what I have now done and, had I known about the problems I would face, it is what I would have done from the start. So there is still no real solution from my side; I just avoided the issue, which is why I am not posting an answer and am updating with an edit instead. Not sure if this is the correct way.
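For anyone following that workaround, switching Laravel to the Mailgun API driver is mostly configuration: set MAIL_DRIVER=mailgun in .env (the API talks HTTPS on port 443, so the blocked SMTP ports no longer matter), make sure Guzzle is installed (composer require guzzlehttp/guzzle), and make sure config/services.php has a mailgun entry. A minimal sketch of that entry, reusing the MAILGUN_DOMAIN and MAILGUN_SECRET variables shown above; the endpoint key is only needed for non-default regions and is an assumption here:

// config/services.php
'mailgun' => [
    'domain'   => env('MAILGUN_DOMAIN'),
    'secret'   => env('MAILGUN_SECRET'),
    'endpoint' => env('MAILGUN_ENDPOINT', 'api.mailgun.net'),
],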

Redis not able to connect using redis-rb (with Redistogo URL)

I've created a Redis To Go Nano plan on Heroku and I'm using the connection URL in different
Heroku applications to share a rate-limit counter. Following all usual steps this is what I did.
I've added the add-on and I got back the REDISTOGO_URL.
# redis url
redis://user:pass@spadefish.redistogo.com:9014/
This is the raised error.
RuntimeError: nodename nor servname provided, or not known
I tried to simulate the connection from command line.
store = Redis.connect(url: 'redis://user:pass@spadefish.redistogo.com:9014/')
store.get('key') # raises error
And I get that error. If I use the local Redis instance everything works just fine.
store = Redis.connect(url: 'redis://localhost:6379/0')
store.get('key') # does not raise error
Everything makes me think it's a problem related to the Redis URL.
Am I missing something?
This was an issue that occurred with the Redis To Go spadefish server.
A CNAME was not initially configured for spadefish so you were getting a DNS resolution error.
The CNAME for spadefish has been added and you should not have a problem connecting to your instance.

Subdomain caching issue

I have set up a subdomain abc.mysite.com to point a specific IP on another server. I did this by creating the following A records:
abc      300  IN  A  xx.xxx.xx.xx
www.abc  300  IN  A  xx.xxx.xx.xx
My host confirms that this was done correctly; however (3 days later) the domain still resolves intermittently. That is, sometimes it resolves to the correct IP and I see the correct page, and other times I see a 404 error or a default website page from cPanel.
My host suggests that it is a caching issue, and if I perform a flushdns and clear my browser cache, this fixes the problem. But I am puzzled as to why it reoccurs.
Could there be something on the other server triggering it? Or is it just a matter of waiting a little longer for propagation?
Forgive me if the problem isn't clear. This stuff is not my forte.
A 404 error indicates an error on the web server's side, not at the DNS level. That means that if you see a 404 error or the cPanel default site, DNS is working fine but the web server is not responding correctly.
http://en.wikipedia.org/wiki/HTTP_404#Overview
Check the web server logs and/or speak to your provider about the issue.
What was the TTL before you made your changes? I've seen 86,400 seconds (one day) and 604,800 (one week) as common choices in the past. (The important number is what the TTL was set to before you made your change, as that dictates how long stale data is held in DNS caches.)
