I'm trying to upload a BOSH release to the director. I'm using a VirtualBox environment and I'm behind a corporate proxy.
Even though I've tried to set the proxy with
export https_proxy=http://myproxy:3128
or with
export BOSH_ALL_PROXY=http://myproxy:3128
the download never succeeds.
Does anyone know how to make this work?
MBP-de-Olivier:bosh-deployment olivier$ bosh -e vbox upload-release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=283
Using environment '192.168.50.6' as client 'admin'
Task 13
Task 13 | 15:28:45 | Downloading remote release: Downloading remote release (00:00:05)
L Error: Failed to open TCP connection to bosh.io:443 (Address family not supported by protocol - socket(2) for "bosh.io" port 443)
Task 13 | 15:28:50 | Error: Failed to open TCP connection to bosh.io:443 (Address family not supported by protocol - socket(2) for "bosh.io" port 443)
Have you tried downloading the release locally and uploading it to your BOSH director from your local host?
I think you should add the following lines:
export http_proxy=http://yourproxy:3128
export https_proxy=http://yourproxy:3128
Are you sure that 3128 is the correct port? It looks as if you are using cntlm (or a similar local proxy). If it is a local proxy: is the service running? Can it connect to the corporate proxy?
My guess is that the BOSH director does not know it must use a proxy. I'm under the impression you tried to configure the proxy at the bosh-cli level, but the download is performed by the director itself.
You could try to redeploy the director with your proxy configuration; an ops file that sets the proxy environment variables on the director will do it.
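For example, assuming you deployed with bosh-deployment, a rough sketch of the redeploy could look like this (the misc/proxy.yml ops file and the variable names are my assumption from that repo; adjust the ops files and values to whatever you used for the original create-env):

# assumption: standard bosh-deployment VirtualBox setup plus the proxy ops file
bosh create-env bosh.yml \
  --state ./state.json \
  -o virtualbox/cpi.yml \
  -o virtualbox/outbound-network.yml \
  -o bosh-lite.yml \
  -o bosh-lite-runc.yml \
  -o jumpbox-user.yml \
  -o misc/proxy.yml \
  --vars-store ./creds.yml \
  -v director_name=bosh-lite \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork \
  -v http_proxy=http://myproxy:3128 \
  -v https_proxy=http://myproxy:3128 \
  -v no_proxy=localhost,127.0.0.1,192.168.50.6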
I'm new to Amazon Web Service (AWS).
I have already created a PostgreSQL instance on AWS RDS:
Endpoint: database-1.XXX.rds.amazonaws.com
Port: 5432
Public accessibility: Yes
Availability zone: ap-northeast-1c
After that, I will push my application that uses the database to AWS (maybe deploying it to EKS).
However, I want to try testing the database server from my local computer first.
I haven't tried testing from my laptop PC at home yet, but I think it will connect OK because my laptop PC is not using the HTTP proxy to connect to the network.
The problem is that I want to test from my company PC, which needs an HTTP proxy configured to connect to the internet. The PC spec:
Windows 10
Installed PostgreSQL 10
Firstly, I tried using psql command-line:
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
set http_proxy=http://user:password@my_company_proxy:3128
set https_proxy=http://user:password@my_company_proxy:3128
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
set http_proxy=http://my_second_company_proxy:3128
set https_proxy=http://my_second_company_proxy:3128
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
Then, I tried using the pgAdmin tool.
According to a post I found on the internet, we can use an "SSH Tunnel" to enter the proxy settings:
However, this error message is shown:
So, can anyone suggest whether we can connect to a public PostgreSQL server through an HTTP proxy?
I think the problem is that Postgres uses a plain TCP protocol and you are trying to use an HTTP proxy. Also, you're trying to create an SSH tunnel against your HTTP proxy server, which won't work.
So I'd suggest the following solutions:
Use a TCP proxy instead of an HTTP proxy.
Create an EC2 (or any other) instance that is reachable over SSH from your company network and has access to the public internet, so that you can create an SSH tunnel through that instance (see the sketch below).
NOTE: Make sure your PostgreSQL is accessible from the public internet (this is usually a bad idea, but it's out of scope for this question); sometimes security group configs prevent connections from the public internet.
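For the second option, a minimal sketch of the tunnel, assuming a bastion instance you can SSH to from the company network (the key path, user and bastion address are placeholders):

# forward local port 5432 through the bastion to the RDS endpoint
ssh -i my-key.pem -N -L 5432:database-1.XXXX.rds.amazonaws.com:5432 ec2-user@bastion-public-ip

# then, in another terminal, point psql at the local end of the tunnel
psql -h localhost -p 5432 -U postgre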
Just add all the ports (5432, 3128, ...) to the Security Group of your RDS instance and specify your IP. Don't forget the "/32".
Let me add that "unknown host" is usually an indication that you're not resolving the DNS hostname. Also, your HTTP proxy should not interfere with connections to databases, since they aren't on port 80 or 443. A couple of things you can try (assuming you're on Windows); substitute your actual URL:
nslookup database-1.XXXX.rds.amazonaws.com
telnet database-1.XXXX.rds.amazonaws.com 5432
You should also check the security group attached to your RDS and make sure you've opened up TCP/5432 to the IP address you're originating from.
Lastly check that your VPC has DNS and Hostnames enabled. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating
I am new to MongoDB and trying to set up tools for my new project. Most of my infrastructure runs on AWS, so I prefer to use AWS DocumentDB. I managed to connect to DocumentDB from EC2 both via the mongo client and a NodeJS application, but it would be good to manage DocumentDB from my Windows workstation using MongoDB Compass.
As we know, we cannot directly connect any mongo client from outside AWS to DocumentDB (see Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC),
so we need an SSH tunnel through EC2. I tried many options but still fail... below are the 2 most likely options:
Option 1: Connect using MongoDB Compass SSH tunnel
Error: unable to get local issuer certificate
Both RDS-COMBINED-CA-BUNDLE.PEM and the SSH key are already supplied, so which one is it unable to get?
Since the SSH port field is highlighted in red, I also tried opening another SSHD port on the server and connecting through that second port, but it still failed.
Option 2: Connect using Putty SSH tunnel
Error: Hostname/IP does not match certificate's altnames...
since MongoDB Compass needs to connect to localhost to get into the tunnel, and I still cannot find a way to supply the --sslAllowInvalidHostnames option.
So, what can I do to get around this?
MongoDB Compass: 1.25.0
I gave up on Compass and
successfully established a "robo3t" connection to AWS DocumentDB using this guide:
https://docs.aws.amazon.com/documentdb/latest/developerguide/robo3t.html
As of Jan 2022, MongoDB Compass does not support sslInvalidHostNameAllowed=true in the connection builder form; this is the parameter you are missing in order to connect to AWS DocumentDB while SSH tunneling to a machine inside the same VPC as the database itself.
I used Studio 3T and it worked perfectly. You could create the connection string yourself or try another GUI.
Edit Jan 2023:
I just gave Compass another try and it seems they now support the sslInvalidHostNameAllowed flag through the UI; you can still change the connection string manually, but then any UI interaction would overwrite it.
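For reference, a hand-edited connection string using that flag would look roughly like this (credentials, local tunnel port and auth database are placeholders, not values generated by Compass):

mongodb://myuser:mypassword@localhost:27017/?ssl=true&sslInvalidHostNameAllowed=true&authSource=admin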
If you edit the connection string directly in MongoDB Compass you can set options that may not be accessible in the user interface.
Below is an example with tweaked parameters to connect without using TLS:
mongodb://xxxx:yyyy@localhost:27017/?authSource=admin&connectTimeoutMS=10000&readPreference=primary&authMechanism=SCRAM-SHA-1&serverSelectionTimeoutMS=5000&appname=MongoDB%20Compass&ssl=false
For Hostname, are you using the DocumentDB endpoint? In one screenshot, I see you are using localhost.
I have managed to connect with option 1.
The workaround is to establish a connection using an SSH tunnel (port forwarding), so that the SSH tunnel opens a port on your local system that connects through to another port at the other end of the tunnel.
The command below establishes the tunnel in a terminal; you can then use this channel/connection to connect to MongoDB using MongoDB Compass.
For example:
ssh user@aws-ec2-ip-address -L 35356:127.0.0.1:27017 -N
where -L specifies the local (listening) side of the tunnel.
Port 35356 listens on your local machine and forwards through the tunnel to port 27017 on the remote server (127.0.0.1 as seen from the EC2 instance).
Note - add the identity file in .ssh/config
Example - on a Mac:
Host XXXXXXX
HostName 52.xx.xx.xx
User ubuntu
IdentityFile ./path/prod.pem
I have created a LAMP Bitnami VM on Google Cloud Platform Compute Engine.
vsftpd is installed already and I have edited the options to include:
listen=YES
listen_address=0.0.0.0
write_enable=YES
local_enable=YES
anonymous_enable=NO
local_umask=022
userlist_enable=YES
userlist_deny=NO
userlist_file=/etc/vsftpd.allowed_users
I have the PHP server up and running on http://my-ip-address but when I try to navigate to ftp://my-ip-address the browser just hangs.
I haven't used ftp for about 100 years so I'm not sure if I'm going about this the right way.
Do I need to do something with the firewall? I tried to do that but GCP wouldn't accept ftp as a protocol.
I've also tried with Filezilla but I get 'connection timed out'.
What am I missing please?
Make sure you have a GCP firewall rule (ingress) in place allowing tcp:21, so that FTP traffic can reach the instance.
You can install the tcpdump package on the server to monitor the traffic for verification.
To monitor the traffic on port 21 (ftp), you can use the following syntax:
sudo tcpdump -i interface port 21
Example: sudo tcpdump -i eth0 port 21
I verified this on a GCE LAMP Bitnami VM with the vsftpd package installed and was able to FTP from the browser.
Moreover, FTP is an insecure protocol. You can set up SFTP instead for more security and encrypted traffic.
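For example, since SSH (port 22) is already reachable on the instance, a quick SFTP test needs no extra firewall rule (bitnami is the usual default user on Bitnami images; adjust if yours differs):

sftp bitnami@my-ip-address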
Yes, you are missing the Google Cloud firewall rules; you have to open some ports to make a successful connection to your FTP server. Go and visit the "Set up an FTP Server on Google Cloud Platform" blog post on siteyaar.com; it will help you.
First add these lines to the vsftpd.conf file:
pasv_min_port=40000
pasv_max_port=50000
After that, open ports 20, 21, 990 and 40000-50000 in the Google Cloud firewall.
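If you prefer the command line, something along these lines should create the rule (the rule name is arbitrary and the source range is wide open; tighten it to your own IP if you can):

# allow FTP control, TLS and passive data ports from anywhere
gcloud compute firewall-rules create allow-ftp \
    --direction=INGRESS \
    --allow=tcp:20-21,tcp:990,tcp:40000-50000 \
    --source-ranges=0.0.0.0/0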
I have a server (AWS) to which I have SSH access.
There is a service (supervisor) running on this server on port 9001, whose web view could be accessed through 127.0.0.1:9001 had it been a local machine.
But since it is not a local machine, how do I access it?
I got the IP address of the machine using ifconfig | grep inet and then tried accessing it through https://172.11.11.1:9001/
But it didn't work.
When I tried wget https://172.11.11.1:9001/ it shows
Connecting to 172.31.19.8:9001... and hangs there.
I have added the following lines to my supervisor conf file:
[inet_http_server]
port = *:9001
Can someone please help me with this?
This is more of a server config question. You'll most likely find your AWS security settings allow connections on ports 22, 80 and 443 only. In the AWS console you'll need to add a rule to the instance's security group to allow port 9001 to be accessed.
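If you'd rather script it than click through the console, the AWS CLI can add the rule to the instance's existing security group (the group ID and source IP below are placeholders; use your own):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 9001 \
    --cidr 203.0.113.10/32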
Hi, I am configuring FTP on an Amazon EC2 micro Linux instance. I have successfully installed and configured vsftpd on my instance, and also created a user for my FTP. But when I FTP to my instance it gives the following error:
Error:
"Connection reset by peer"
Can anyone help me with this? What am I doing wrong, or what am I missing?
Note: I have configured the instance firewall as follows:
Custom TCP rule:
Port range: 20 - 21
Source: 0.0.0.0/0
Any help is greatly appreciated. Thanks in advance.
I'm using an instance for VPN and FTP (vsftpd) servers. Add this to /etc/vsftpd/vsftpd.conf:
pasv_addr_resolve=NO|YES
pasv_address=Your Elastic IP address|Hostname
pasv_min_port=2020
pasv_max_port=2020
Where pasv_address is your Elastic IP address (set pasv_addr_resolve=NO), or you can use a dyndns service and set pasv_addr_resolve=YES accordingly. Then open ports 2020 and 21 in the firewall. With this configuration you can use the FTP server even in passive mode (when incoming connections are blocked on your local PC).
All vsftpd config options are described here