I've created a Redis To Go Nano plan on Heroku and I'm using the connection URL in different
Heroku applications to share a rate-limit counter. Following the usual steps, this is what I did.
I've added the add-on and I got back the REDISTOGO_URL.
# redis url
redis://user:pass@spadefish.redistogo.com:9014/
This is the raised error.
RuntimeError: nodename nor servname provided, or not known
I tried to simulate the connection from command line.
store = Redis.connect(url: 'redis://user:pass@spadefish.redistogo.com:9014/')
store.get('key') # raises error
And I get that error. If I use the local Redis instance everything works just fine.
store = Redis.connect(url: 'redis://localhost:6379/0')
store.get('key') # does not raise error
Everything makes me think it's a problem related to the Redis URL.
Am I missing something?
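One way to confirm this is a DNS failure rather than a Redis problem is to resolve the hostname from the URL yourself. A minimal Ruby sketch, using the host from the URL above (the credentials are placeholders):

```ruby
require "uri"
require "resolv"

# Host taken from the REDISTOGO_URL above (user/pass are placeholders)
url = URI.parse("redis://user:pass@spadefish.redistogo.com:9014/")

begin
  ip = Resolv.getaddress(url.host)
  puts "#{url.host} resolves to #{ip}"
rescue StandardError => e
  # e.g. Resolv::ResolvError -- the same failure mode as
  # "nodename nor servname provided, or not known"
  puts "DNS lookup failed: #{e.message}"
end
```

If the lookup fails here too, the problem is DNS rather than anything in the Redis client.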
This was an issue with the Redis To Go spadefish server.
A CNAME was not initially configured for spadefish so you were getting a DNS resolution error.
The CNAME for spadefish has been added and you should not have a problem connecting to your instance.
I need to add Solr to a DDEV project but am encountering errors when attempting to gather information about available services.
I'm following guidance here:
https://ddev.readthedocs.io/en/stable/users/extend/additional-services/
When I attempt to list all available services: ddev get --list, I receive this response after approx 30 seconds:
Failed to list available add-ons: Unable to get list of available services: Get "https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud": dial tcp: lookup api.github.com: i/o timeout
I'm not sure what the problem is. If I curl the URL from the error message, i.e. curl https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud, I receive a JSON response from GitHub with information about the repository.
This has happened for over two days now. I may be overlooking something but am not sure what, exactly. I'm able to run DDEV projects using the standard installation (mariadb, nginx, nodejs, mailhog) but continue to run into errors when listing add-ons.
I have DDEV v1.21.4 installed.
I'm using an M1 Mac on macOS 13.1.
Thank you.
Your system is unable to do a DNS lookup of the hostname api.github.com, and this is happening on your macOS host. Are you able to ping api.github.com? Have you tried rebooting?
You may want to temporarily disable firewall, VPN, virus checker to see if that changes things. But you'll want to be able to get to where you can ping api.github.com.
There is an obscure golang problem on macOS affecting situations where people have more than one DNS server, so that could be it if you're in that category. You also might want to consider changing the DNS server for your system to 1.1.1.1, as this can sometimes be a problem with your local DNS server (but of course the fact that you can curl the URL argues against that).
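As a concrete sketch of those last two suggestions on macOS (assuming the active network service is named "Wi-Fi"; list your services first and adjust):

```shell
# Check whether the macOS host can resolve api.github.com at all
dig +short api.github.com

# See which network services exist, then point the active one at 1.1.1.1
networksetup -listallnetworkservices
sudo networksetup -setdnsservers Wi-Fi 1.1.1.1

# To revert to DHCP-provided DNS later:
# sudo networksetup -setdnsservers Wi-Fi empty
```

If `dig` succeeds but `ddev get --list` still times out, that points back at the golang multi-resolver issue mentioned above rather than the DNS server itself.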
I have a laravel site up and running. We have three copies currently working - local, staging and production.
Up until today, all three of these were accessing the same DigitalOcean Spaces bucket with no issue.
Today we are getting a timeout whenever a request is made from the local environment; it continues to work perfectly on staging and production. Our .env files are identical except for the app key / name etc. Our config files are identical. The code that makes the request is identical.
We are receiving the following error
Aws\S3\Exception\S3Exception: Error executing "ListObjects" on "https://example.com/?prefix=document.pdf%2F&max-keys=1&encoding-type=url"; AWS HTTP error: cURL error 28: Failed to connect to site.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://example.com/?prefix=document.pdf%2F&max-keys=1&encoding-type=url in file /var/www/html/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php on line 195
We have tried everything we can think of. We have completely restarted the local servers (Laravel Sail) to no effect. The only difference is that the local copy of the site is served over http whereas both staging and production are served over https. This hasn't caused an issue in the past, however.
Any ideas on what could be causing this would be greatly appreciated.
Thanks
To anyone who finds this in the future:
The issue resolved itself after about 12 hours.
It was almost certainly an issue on DO's end.
If it occurs again I'll be contacting support, as @James pointed out.
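If it does recur, a few shell checks can separate a DNS failure from a blocked TCP/TLS connection before contacting support (`example.com` stands in for the real Spaces endpoint, as in the error above):

```shell
# Does the endpoint resolve at all?
dig +short example.com

# Fail fast instead of waiting out cURL's default connect timeout
curl -sv --connect-timeout 5 https://example.com/ -o /dev/null

# Inspect the TCP handshake and TLS certificate separately
openssl s_client -connect example.com:443 -servername example.com </dev/null
```

A cURL error 28 with a resolving hostname, as seen here, suggests the TCP connection itself is being dropped somewhere between the local environment and the endpoint.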
I recently started fiddling with Cloud Run for Anthos on GoogleCloud and I just can't enable HTTPS access. I've followed every step in the docs but it still doesn't work. I have a custom .dev domain which I configured through these steps and everything is fine with HTTP but HTTPS still says connection refused
curl http://api.default.customdomain.dev - works fine
but curl https://api.default.customdomain.dev - says:
curl: (7) Failed to connect to api.default.customdomain.dev port 443: Connection refused
I'm pretty sure there's something not specified in the docs, it happens a lot with GCP docs. Has anyone else struggled with this and might be able to help? Thanks!
EDIT: It was actually my fault - when creating the cert/private key secret I provided default for the --namespace value instead of gke-system. So, yeah... it's fixed now.
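For anyone hitting the same thing, the fix amounts to recreating the TLS secret in the namespace Cloud Run for Anthos actually reads it from (`gke-system`) instead of the workload's namespace. The secret and file names below are placeholders:

```shell
# Remove the secret that was created in the wrong namespace
kubectl delete secret my-tls-cert --namespace default --ignore-not-found

# Recreate it where Cloud Run for Anthos expects to find it
kubectl create secret tls my-tls-cert \
  --key tls.key --cert tls.crt \
  --namespace gke-system
```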
I've been trying to get my head around this all day. I understand how to create and manage single-level subdomains in Laravel, such as subdomain.domain.com
However, I'm trying to add a second-level subdomain, for example: subsubdomain.subdomain.domain.com
I can get Homestead working fine with single-level subdomains, but whenever I add the extra subdomain, I get a connection refused - unable to connect error in Chrome.
There's nothing in the nginx error log either.
This is what I've done:
Open `~/.homestead/Homestead.yaml`
Add the domain subsubdomain.subdomain.domain.com in addition to subdomain.domain.com
Save and exit, then run vagrant reload --provision
I can see the new sub-subdomain added to the hosts file, as well as a conf file created in the vagrant box
When I try to access subdomain.domain.com it works fine, when I try to access subsubdomain.subdomain.domain.com it fails with refused to connect.
I have no idea what to try next, there's nothing in the nginx error log, Homestead is up and running because I can access the single level subdomain completely fine. The only one that isn't working is the second level subdomain.
Any info on what I might be doing wrong, or anything else that might be helpful to debug would be greatly appreciated.
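For reference, the relevant part of Homestead.yaml would look something like this (the `to:` paths here are assumptions for illustration):

```yaml
sites:
    - map: subdomain.domain.com
      to: /home/vagrant/code/project/public
    - map: subsubdomain.subdomain.domain.com
      to: /home/vagrant/code/project/public
```

After editing, `vagrant reload --provision` regenerates the nginx site configs and hosts entries, as described in the steps above.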
Update
I've managed to connect to the server if I add the port :8000 to the address: subsubdomain.subdomain.domain.com doesn't work, but subsubdomain.subdomain.domain.com:8000 works
I'm experiencing a 503 error with heroku on my project using WebSockets and a custom domain.
Connecting on http://www.mydomain.com (which points via CNAME to my Heroku app)
WebSocket connection to 'ws://www.mydomain.com/shoutbox' failed: Error during WebSocket handshake: Unexpected response code: 503
Connecting on http://myapp.herokuapp.com
Everything works fine with the address ws://myapp.herokuapp.com/shoutbox. Everything is also good in my local setup.
Is there any cross-domain issue I'm not aware of? I'm using Play! 2 as the server-side framework, but I don't think it's related to this problem.
[EDIT]
If I can only connect within my own domain then it would be fine, because that is the address I'd like people to use.
I'm assuming you already enabled heroku labs:enable websockets since your herokuapp domain is working properly.
I have a hunch your DNS query is hitting a Heroku endpoint that doesn't support websockets, i.e. it's cached from before you enabled the websockets functionality.
If this behavior only happens on a single client, try flushing your DNS cache and retrying. Alternatively, make sure the DNS records for both of your domains resolve to the same IP.
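A few commands can verify each of those steps (using `myapp` and `www.mydomain.com` from the question; the cache-flush commands are a macOS example and vary by OS):

```shell
# Confirm the websockets labs feature is enabled for the app
heroku labs --app myapp

# Both hostnames should resolve to the same Heroku endpoint
dig +short www.mydomain.com
dig +short myapp.herokuapp.com

# Flush the local DNS cache (macOS)
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
```

If the two `dig` results differ, the custom domain is still resolving to a stale, pre-websockets endpoint, which would match the 503 handshake failure.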