Sauce Connect using local proxy - http-proxy

I want all traffic to pass through a proxy on my local machine.
My proxy is running on localhost:8888.
I tried:
sc -u <user_name> -k <key> -p localhost:8888
As mentioned in “Getting Error: failed to connect to tunnel VM” with intern/saucelabs, I tried passing -p and --pac, but that didn't help.
The documentation page that could probably have answered all of this is inaccessible for some reason.
Please help me solve this.

Using 127.0.0.1 instead of localhost solved the problem:
sc -u <user_name> -k <key> -p 127.0.0.1:8888
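If sc still fails to start, it can help to first confirm that the local proxy actually accepts tunnelled HTTPS. A quick check, assuming curl is installed and the proxy speaks HTTP CONNECT:
# verify the proxy can tunnel an HTTPS request before starting sc
curl -x http://127.0.0.1:8888 -I https://saucelabs.com/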

Related

cURL error 28: Failed to connect to systemb port 80: Timed out for http://systemb/api/push_products [duplicate]

cURL is not working properly on my server.
When I run the command curl -I https://www.example.com/sitemap.xml I get:
curl: (7) Failed to connect
It fails to connect on every port, and only for this one domain; all other domains work fine. curl: (7) Failed to connect appears for both port 80 and 443.
Thanks...
First, check your /etc/hosts file entries; the URL you're requesting may be pointing to your localhost.
If the URL is not listed in your /etc/hosts file, try executing the following command to understand the flow of the curl request for that particular URL:
curl --ipv4 -v "https://example.com/"
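A minimal sketch of those checks, assuming a Linux shell and using example.com as a stand-in for the failing domain:
# is the domain overridden locally?
grep example.com /etc/hosts
# compare with what public DNS returns
dig +short example.com @8.8.8.8
If the two disagree, the hosts entry is the likely culprit.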
After much searching, I found that my hosts settings were not correct.
I checked with nano /etc/hosts:
the domain pointed to the wrong IP in the hosts file.
I changed the wrong IP and it's working fine.
This is a new error related to curl: (7) Failed to connect:
curl: (7) Failed to connect
The above error message means that your web server (at least the one specified with curl) is not running at all: no web server is listening at the specified host on the specified (or implied) port. (So the XML doesn't have anything to do with it.)
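A quick way to confirm that diagnosis, assuming you can log in to the server (ss and nc are common but not guaranteed to be installed):
# on the server: is anything listening on port 80 or 443?
sudo ss -tlnp | grep -E ':80|:443'
# from the client: is the port reachable at all?
nc -vz www.example.com 80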
You can download the key with a browser,
then open a terminal in the downloads folder,
then type sudo apt-key add <key_name>.asc
Mine is a Red Hat Enterprise Linux (RHEL) virtual machine and I was getting something like the following:
curl: (7) Failed to connect to localhost port 80: Connection refused
I stopped the firewall by running the following commands and it started working.
sudo systemctl stop firewalld
sudo systemctl disable firewalld
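Disabling the firewall entirely is a blunt instrument. On RHEL-family systems a narrower alternative, if it fits your setup, is to open just the web ports:
# allow HTTP/HTTPS through firewalld instead of turning it off
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload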
If the curl is to the outside world, like:
curl www.google.com
I have to restart my cntlm service:
systemctl restart cntlm
If it's within my network:
curl inside.server.local
Then a Docker network is overlapping with my CNTLM proxy, and I just remove all Docker networks to fix it. You could also remove only the network you created last, but I'm lazy.
docker network rm $(docker network ls -q)
And then I can work again.
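If you'd rather not delete everything, a small sketch for locating the overlapping network first (the template fields are standard docker network inspect output):
# print each network's name and subnet to spot the overlap
for n in $(docker network ls -q); do
  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$n"
done
# then remove only the offender
docker network rm <network_name>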

Unable to connect to EC2 server with my MacBook

I am unable to connect to EC2 (CentOS) from my MacBook. When I connect from an Ubuntu machine, it works. Currently, I get the following errors:
sign_and_send_pubkey: no mutual signature supported
Account locked due to 290 failed logins
How can I solve the problem?
I have tried the following command:
ssh -i key.pem ec2-user@ip
I was locked out and couldn't access the machine to apply the ssh config change from the suggested answer.
I added the following argument to the ssh call, -o PubkeyAcceptedKeyTypes=+ssh-rsa, and it worked.
Example:
ssh -i "keypair.cer" -o PubkeyAcceptedKeyTypes=+ssh-rsa ec2-user@ip
Note: the ssh call will accept both .cer and .pem file types.
Edit or create the file ~/.ssh/config and add the following content:
Host *
    PubkeyAcceptedKeyTypes=+ssh-dss
After that, try again.
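Rather than loosening Host *, you could scope the legacy algorithm to a single host; a sketch, where the alias my-ec2 and the paths are placeholders to adjust:
Host my-ec2
    HostName <ip>
    User ec2-user
    IdentityFile ~/.ssh/key.pem
    PubkeyAcceptedKeyTypes +ssh-rsa
With that in place, plain ssh my-ec2 picks up all of these settings.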

AWS DocumentDB through Proxy

I have an AWS DocumentDB set up that I can connect to just fine through my jump box using:
mongo --ssl --host aws-host:27017 --sslCAFile rds-combined-ca-bundle.pem --username my_user --password <insertYourPassword>
I'd like to be able to connect to it through localhost for some testing. I cannot connect directly so I attempted to open a tunnel from my jump:
ssh -i ~/.ssh/my-key user@my_jump -L 27017:aws-host:27017 -N
After that I tried the basic MongoDB connect command:
mongo --ssl --host localhost:27017 --sslCAFile rds-combined-ca-bundle.pem --username my_user --password <insertYourPassword>
I get an error I understand:
The server certificate does not match the host name. Hostname: localhost does not match SAN(s)
I tried exporting http_proxy as http://my_jump:27017 and running the command above again, with no luck.
Any suggestions or help on how to connect?
Try to disable ssl hostname validation:
mongo --ssl --sslAllowInvalidHostnames ...
Note --sslAllowInvalidHostnames is available from version 3.0.
If this does not work, try to remove --ssl entirely from the connection options, as according to the documentation:
The mongo shell verifies that the hostname (specified in the --host option or the connection string) matches the SAN (or, if the SAN is not present, the CN) in the certificate presented by the mongod or mongos. If the SAN is present, mongo does not match against the CN. If the hostname does not match the SAN (or CN), the mongo shell will fail to connect.
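An alternative that keeps hostname validation enabled (not from the answer above, just a common tunnelling workaround): alias the DocumentDB hostname to the tunnel's local end in /etc/hosts, then connect with the real hostname so it matches the certificate's SAN:
# /etc/hosts: aws-host now resolves to the local tunnel endpoint
127.0.0.1   aws-host
mongo --ssl --host aws-host:27017 --sslCAFile rds-combined-ca-bundle.pem --username my_user --password <insertYourPassword>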

Letsencrypt renewal fails: Could not bind to IPv4 or IPv6.. Skipping

The full error message I'm getting is:
Attempting to renew cert from /etc/letsencrypt/renewal/somedomain.com.conf produced an unexpected error: Problem binding to port 443: Could not bind to IPv4 or IPv6.. Skipping.
This is running on an AWS ubuntu 14.04 instance. All ports are open outgoing and 443 is open incoming.
You just need to stop any running web servers, such as Apache, Nginx, or OpenShift, before doing this.
Stop Nginx
sudo systemctl stop nginx
Stop Apache2
sudo systemctl stop apache2
You probably ran the script with the (preconfigured) --standalone option while your server was already running on port 443.
You can stop the server before renewing and start it again afterwards.
The man page says:
--apache Use the Apache plugin for authentication & installation
--standalone Run a standalone webserver for authentication
--nginx Use the Nginx plugin for authentication & installation
--webroot Place files in a server's webroot folder for authentication
--manual Obtain certificates interactively, or using shell script hooks
If I run renew with --apache, I don't get any errors.
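If stopping and starting the server by hand gets tedious, certbot can do it for you with renewal hooks; a minimal sketch for Nginx:
sudo certbot renew --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
The hooks only run when a certificate is actually due for renewal, so routine no-op runs leave the server untouched.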
As hinted in the other answers, you need to pass the option for your running webserver, for example:
Without webserver param:
sudo certbot renew
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges: tls-sni-01 challenge for example.com
Cleaning up challenges
Attempting to renew cert (example.com) from /etc/letsencrypt/renewal/example.com.conf produced an unexpected error:
Problem binding to port 443: Could not bind to IPv4 or IPv6.. Skipping.
Then, again with the webserver param (success):
sudo certbot renew --nginx
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges: tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
new certificate deployed with reload of nginx server; fullchain is /etc/letsencrypt/live/example.com/fullchain.pem
Congratulations, all renewals succeeded. The following certs have been renewed: /etc/letsencrypt/live/example.com/fullchain.pem (success)
[This is specifically for Ubuntu]
Log in as the root user on your server
Stop your server using the following command (for nginx)
service nginx stop
Then renew your certificate
certbot renew
Start your server
service nginx start
[TIP] To check the expiry date of your renewed certificate, enter the command below
ssl-cert-check -c [Path_to_your_certificate]/fullchain.pem
For example
ssl-cert-check -c /etc/letsencrypt/live/[your_domain_name]/fullchain.pem
Or
ssl-cert-check -c /etc/letsencrypt/live/[your_domain_name]/cert.pem
If you don't already have ssl-cert-check installed on your server, install it with
apt install ssl-cert-check
Note: The certificate can be renewed only if it has not expired. If it has expired, you have to create a new one.
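If ssl-cert-check isn't available, openssl (which is almost certainly already installed) can report the expiry date directly:
openssl x509 -enddate -noout -in /etc/letsencrypt/live/[your_domain_name]/cert.pem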
For NodeJS/PM2 users
I was using PM2 for my NodeJS service, and when trying to renew the certificate I also got the "Problem binding to port 80: Could not bind to IPv4 or IPv6." error message.
As mentioned in the answers above for Apache/Nginx, stopping my service and then trying to renew solved the problem.
pm2 stop all
sudo certbot renew
pm2 start all
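The same hook trick mentioned for Nginx above can fold this into one command. A sketch, hedged because it assumes your pm2 daemon belongs to a deploy user called node (adjust to your user; under plain sudo, pm2 would target root's daemon instead):
sudo certbot renew --pre-hook "sudo -Hu node pm2 stop all" --post-hook "sudo -Hu node pm2 start all"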
First you need to install the Nginx Let's Encrypt plugin (if you work with Nginx):
sudo apt install python-certbot-nginx
Then you can safely run:
sudo certbot renew --nginx
and it will work.
Note: certbot should already be installed.
For Nginx
sudo certbot renew --nginx
This happened because you used --standalone. The purpose of that option is to launch a temporary webserver because you don't have one running.
Next time use the --webroot method, and you'll be able to use your already running nginx server.
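A sketch of that flow, assuming your running Nginx serves /var/www/html for the domain (adjust the webroot to your setup):
sudo certbot renew --webroot -w /var/www/html
This answers the HTTP challenge by writing files under /var/www/html/.well-known/acme-challenge/, so certbot itself never needs to bind to port 80 or 443.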
Borrowing from @JKLIR, simply run
/etc/letsencrypt/letsencrypt-auto renew --apache >> /var/log/letsencrypt/renew.log
to renew the SSL certificate.
If you're trying to run the certbot command as a regular user, you may not have permission to bind to port 80 and other low ports. If this is the case, you can grant Python the ability to bind as follows:
First, see if you can find python 3+ (adjust as needed)
echo "$(readlink -f "$(which python3)")"
Allow python to open port 80 as a regular user (adjust as needed)
sudo setcap CAP_NET_BIND_SERVICE=+eip "$(readlink -f "$(which python3)")"
Re-run the failing certbot command.
Important: On Ubuntu 18.04, Python is called python3. It may be called a number of different things depending on the OS and how you obtained certbot. This command WILL VARY between OSs.
Warning: These lower ports are restricted for good reason. There are security considerations with the setcap command. You may read more about them here: https://superuser.com/a/892391
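You can verify the capability took effect, and remove it again once you no longer need it:
# confirm the capability is set
getcap "$(readlink -f "$(which python3)")"
# strip all capabilities from the binary again
sudo setcap -r "$(readlink -f "$(which python3)")"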
I use Nginx and needed to stop the server before I could proceed. Then I ran the command:
$ sudo ./certbot-auto certonly --standalone -d chaklader.ddns.net
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for chaklader.ddns.net
Waiting for verification...
Cleaning up challenges
Subscribe to the EFF mailing list (email: xxx.chakfffder@gmail.com).
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/cdddddder.ddns.net/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/chaklader.ddns.net/privkey.pem
Your cert will expire on 2045-01-10. To obtain a new or tweaked
version of this certificate in the future, simply run certbot-auto
again. To non-interactively renew *all* of your certificates, run
"certbot-auto renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
I had a similar issue when I was running two websites (hosts) on a single instance. I stopped Nginx and then ran sudo certbot certonly --standalone --preferred-challenges http -d domain.com -d www.domain.com. After restarting Nginx, everything started working fine.

Setup passphraseless ssh to localhost on OS X

I'm trying to get Hadoop's Pseudo-Distributed Operation example (http://hadoop.apache.org/common/docs/stable/single_node_setup.html) to work on OS X Lion, but am having trouble getting ssh to work without a passphrase.
The instructions say the following:
Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase: $
ssh localhost
I'm getting connection refused:
archos:hadoop-0.20.203.0 travis$ ssh localhost
ssh: connect to host localhost port 22: Connection refused
If you cannot ssh to localhost without a passphrase, execute the
following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
After this step I am still getting connection refused. Any ideas?
Sounds like you don't have SSH enabled. It should be in the Sharing preferences.
Go to "System Preferences > Sharing > Remote Login" and there's a list of authorized users. Change it to "All users".
That solves this problem.
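If you prefer the terminal, the same switch can be flipped with the stock systemsetup tool:
# enable Remote Login (sshd), then verify
sudo systemsetup -setremotelogin on
sudo systemsetup -getremotelogin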
Check the permissions on your .ssh directory. Some ssh implementations require that the directory be chmod 700. Otherwise, they just ignore it.
Also, check the output of
ssh -v localhost
to see how the ssh client is trying to connect. The output is very detailed, and will help you decide if it's an authentication problem.
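A minimal permissions reset, assuming the default key locations:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys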
I had the same issue.
Please check whether the ssh server is running or not.
If it is, open the /etc/ssh/ssh_config and /etc/ssh/sshd_config files. The issue may be that the server is listening on a different port than the one the client is pointing to.
Before this, please ensure that openssh-server and the client are installed.
I had the same problem, and I solved it in the following manner:
SSH is activated.
ssh -v localhost (as stated by Herko)
In the output, I identified that the DSA authentication method is not supported:
debug1: Skipping ssh-dss key /Users/john/.ssh/id_dsa - not in PubkeyAcceptedKeyTypes
I simply regenerated an ECDSA key pair and removed the DSA key pair.
After the key generation, the procedure given in the Hadoop documentation holds.
Therefore, it is important to check whether the authentication method is supported by the OpenSSH configuration.
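For reference, the ECDSA equivalent of the Hadoop instructions quoted above would look like this (a sketch mirroring the earlier DSA commands):
ssh-keygen -t ecdsa -P '' -f ~/.ssh/id_ecdsa
cat ~/.ssh/id_ecdsa.pub >> ~/.ssh/authorized_keys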
