I installed a Hadoop cluster using bdutil (instead of Click to Deploy). I am not able to access the job tracker page at localhost:50030/jobtracker.jsp (https://cloud.google.com/hadoop/running-a-mapreduce-job).
I am checking it locally using lynx instead of from my client browser (so localhost instead of the external IP).
My setting in my config file for bdutil is
MASTER_UI_PORTS=('8088' '50070' '50030')
but after deploying the Hadoop cluster, when I list the firewall rules I get the following:
NAME                    NETWORK  SRC_RANGES     RULES                         SRC_TAGS  TARGET_TAGS
default-allow-http      default  0.0.0.0/0      tcp:80,tcp:8080                         http-server
default-allow-https     default  0.0.0.0/0      tcp:443                                 https-server
default-allow-icmp      default  0.0.0.0/0      icmp
default-allow-internal  default  10.240.0.0/16  tcp:1-65535,udp:1-65535,icmp
default-allow-rdp       default  0.0.0.0/0      tcp:3389
default-allow-ssh       default  0.0.0.0/0      tcp:22
Now I don't see port 50030 in the list of rules. Why is that?
So I ran a command to add it manually:
gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:50030 --format json
Now it gets added, and I can see it in the output of the firewall-rules list command.
But still, when I do lynx localhost:50030/jobtracker.jsp I get "unable to connect". I then ran a Hadoop job so that there would be some output to view, and ran the lynx command again, but I still see "unable to connect".
Can someone tell me where I am going wrong in this complete process?
An ephemeral IP is an external IP. The difference between an ephemeral IP and a static IP is that a static IP can be reassigned to another virtual machine instance, while an ephemeral IP is released when the instance is destroyed. An ephemeral IP can be promoted to a static IP through the web UI or the gcloud command-line tool.
You can obtain the external IP of your host by querying the metadata API at http://169.254.169.254/0.1/meta-data/network. The response will be a JSON document that looks like this (pretty-printed for clarity):
{
  "networkInterface" : [
    {
      "network" : "projects/852299914697/networks/rabbit",
      "ip" : "10.129.14.59",
      "accessConfiguration" : [
        {
          "externalIp" : "107.178.223.11",
          "type" : "ONE_TO_ONE_NAT"
        }
      ]
    }
  ]
}
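For example, from the instance itself (this uses the legacy v0.1 endpoint mentioned above; newer images may only expose the v1 metadata API):
curl http://169.254.169.254/0.1/meta-data/network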
The firewall rule command seems reasonable, but you may want to choose a more descriptive name. If I saw a rule that said allow-http, I would assume it meant port 80. You may also want to restrict it to a target tag placed on your Hadoop dashboard instance; as written, your rule will allow access on that port to all instances in the current project.
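For instance, a variant of your command with a clearer name and a target tag might look like this (the rule name and tag are placeholders; the tag must actually be applied to the master instance):
gcloud compute firewall-rules create allow-hadoop-jobtracker --description "Incoming Hadoop JobTracker UI traffic." --allow tcp:50030 --target-tags hadoop-master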
Related
I wish to run Elasticsearch remotely on a gcloud VM; it is configured to run at 127.0.0.1 on a specific port, 9200. How do I access this from a website outside this VM? If I change the network host to 0.0.0.0 in the yml file, even port 9200 becomes inaccessible. How do I overcome this problem?
Changed network.host to [_site_, _local_, _global_], where:
_site_ = the internal IP given by the Google Cloud VM
_local_ = 127.0.0.1
_global_ = the external IP found using curl ifconfig.me
Opened a specific port (9200) and tried to connect with the global IP address.
curl to the global IP gives:
Output: Failed to connect to (_global_ ip) port 9200: Connection refused.
Set network.host: 0.0.0.0, then allow ports 9200 and 9201 and restart the Elasticsearch service. If you are using Ubuntu, run sudo service elasticsearch restart, then check with curl -XGET 'http://localhost:9200?pretty'. Let me know if you are still facing any issues.
Use the following configuration for elasticsearch.yml:
network.host: 0.0.0.0
action.auto_create_index: false
index.mapper.dynamic: false
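If the VM is on Google Cloud as in the question, the ports also need to be opened in the GCP firewall; a rough sketch (the rule name is a placeholder):
gcloud compute firewall-rules create allow-elasticsearch --allow tcp:9200,tcp:9201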
Solved this problem by going through the logs and finding out that the public IP address is remapped to the internal IP address, hence network.host can't be set to the external IP directly. The Elasticsearch yml config is as follows:
network.host: xx.xx.xxx.xx (set to the internal IP given by Google)
http.cors.enabled: true
http.cors.allow-origin: "*" (do not use * in production, it's a security issue)
discovery.type: single-node (in my case, to make it work independently and not in a cluster)
Now this sandboxed version can be accessed from outside the VM using the external IP address given by Google.
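A quick check from outside the VM could look like this (EXTERNAL_IP is a placeholder for the address Google assigns):
curl -XGET 'http://EXTERNAL_IP:9200/?pretty'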
I tried to run this hello world app on an AWS EC2 instance with docker-compose up --build. It works as expected and is accessible remotely from the EC2 public IP when I use port 80, i.e., "80:80" as shown in the docker-compose file.
However, if I change to another port such as "5106:80", it is not accessible from a remote host using <public IPv4 address>:5106, even though it's available locally if I ssh into the EC2 instance and try localhost:5106 (a minimal compose sketch of this mapping follows the notes below). Please note:
I've ensured the EC2 is in a public subnet and I have configured the security group to make the port (in this case, 5106) accept inbound traffic from my laptop.
I know it's not a problem with the hello-world app because I experience exactly the same problem with another app i.e., only port 80 works with docker-compose port mapping on EC2.
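For reference, a minimal compose sketch of the mapping described above (the service name is illustrative; the build context stands in for the hello-world app):
version: "3"
services:
  web:
    build: .          # the hello-world app from the question
    ports:
      - "5106:80"     # host port 5106 -> container port 80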
Since it works with port 80 and doesn't work with port 5106, it could be one of two things:
There is an issue with your security groups. You should check that you have added port 5106 to the inbound rules of your security group.
There is an issue with a firewall or antivirus that doesn't allow you to connect to web pages on ports other than 80 or 443. You could check whether the same happens from another device or on another network.
In this case, it seemed to be the latter.
It's possible that the Docker network needs to be deleted:
docker network rm $(docker network ls -q)
Then run docker-compose up again.
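It may also be worth confirming that the port is actually published and listening on the host before retrying remotely (commands assume a Linux host):
docker ps --format '{{.Names}}  {{.Ports}}'
sudo ss -tlnp | grep 5106
curl -I http://localhost:5106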
I have a server (AWS) to which I have SSH access.
There is a service (supervisor) running on this server on port 9001, whose web view could be accessed through 127.0.0.1:9001 had it been a local machine.
But since it is not a local machine, how do I access it?
I got the IP address of the machine using ifconfig | grep inet and then tried accessing it through https://172.11.11.1:9001/
But it didn't work.
When I tried wget https://172.11.11.1:9001/ it shows
Connecting to 172.31.19.8:9001... and hangs there.
I have added the following lines to my supervisor conf file:
[inet_http_server]
port = *:9001
Can someone please help me with this?
This is more of a server config question. You'll most likely find that your AWS access settings allow connections on ports 22, 80 and 443 only. In the AWS console you'll need to add a new security group rule to allow port 9001 to be accessed.
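If you prefer the command line, the equivalent with the AWS CLI looks roughly like this (the group ID and source range are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9001 --cidr 0.0.0.0/0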
I'm trying to set up two instances under an elastic load balancer, but cannot figure out how I'm supposed to access the instances through the load balancer.
I've set up the instances with a security group to allow access from anywhere to certain ports. I can access the instances directly using their "Public DNS" (publicdns) host name and the port PORT:
http://[publicdns]:PORT/
The load balancer contains the two instances and they are both "In Service" and it's forwarding the port (PORT) onto the same port on the instances.
However, if I request
http://[dnsname]:PORT (where dnsname is the A Record listed for the ELB)
it doesn't connect to the instance (connection times out).
Is this not the correct way to use the load balancer, or do I need to do anything to allow access to the load balancer? The only mention of security groups in relation to the load balancer is to restrict access to the instances to the load balancer only, but I don't want that. I want to be able to access them individually as well.
I'm sure there's something simple and silly that I've forgotten, not realised or done wrong :P
Cheers,
Svend.
Extra info added:
The Port Configuration for the Load Balancer looks like this (actually 3 ports):
10060 (HTTP) forwarding to 10060 (HTTP) - Stickiness: Disabled
10061 (HTTP) forwarding to 10061 (HTTP) - Stickiness: Disabled
10062 (HTTP) forwarding to 10062 (HTTP) - Stickiness: Disabled
And it's using the standard/default elb security group (amazon-elb-sg).
The instances have two security groups. One external looking like this:
22 (SSH) 0.0.0.0/0
10060 - 10061 0.0.0.0/0
10062 0.0.0.0/0
and one internal, allowing anything within the internal group to communicate on all ports:
0 - 65535 sg-xxxxxxxx (security group ID)
Not sure it makes any difference, but the instances are m1.small types of image ami-31814f58.
Something that might have relevance:
My health check used to be HTTP:PORT/ but the load balancer kept saying that the instances were "Out of Service", even though I seem to get a 200 response when requesting that port.
I then changed it to TCP:PORT and it then changed to say they were "In Service".
Is there something very specific that should be returned for the HTTP check, or is it simply an HTTP 200 response that's required? And does the fact that it wasn't working hint towards why the load balancing itself wasn't working either?
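For what it's worth, with the legacy ELB command-line tools used further down in this thread, an HTTP health check can be configured along these lines (the load balancer name, path and thresholds are illustrative; verify the flags with elb-configure-healthcheck --help):
elb-configure-healthcheck <load-balancer> --target "HTTP:10060/" --interval 30 --timeout 5 --healthy-threshold 2 --unhealthy-threshold 4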
It sounds like you have everything set up correctly. Are the same ports going into the load balancer as into the instances? Or are you forwarding the requests to another port?
As a side note, when I configure my load balancers I don't generally like to open up my instances on any port to the general public. I only allow the load balancer to make requests to those instances. I've noticed in the past that many people will make malicious requests to the IP of the instance trying to find a security breach. I've even seen people trying to brute-force logins to my Windows machines...
To create a security rule only for the load balancers, run the following commands and remove any other rules you have in the security group for the port the load balancer is using. If you're not using the command line to run these commands, just let me know which interface you're trying to use and I can try to come up with a sample that will work for you.
elb-create-lb-listeners <load-balancer> --listener "protocol=http, lb-port=<port>, instance-port=<port>"
ec2-authorize <security-group> -o amazon-elb-sg -u amazon-elb
Back to your question. Like I said, the steps you explained are correct; opening the port on the instance and forwarding the port to the instance should be enough. Maybe you need to post the full configuration of your instance's security group and the load balancer so that I can see if there is something else affecting your situation.
I went ahead and created a script that will reproduce the exact same steps that I'm using. This assumes you're using Linux as the operating system and that the AWS CLI tools are already installed. If you don't have this set up already, I recommend starting a new Amazon Linux micro instance and running the script from there, since they have everything already installed.
Download the X.509 certificate files from amazon https://aws-portal.amazon.com/gp/aws/securityCredentials
Copy the certificate files to the machine where you will run the commands
Save two variables that are required in the script
aws_account=<aws account id>
keypair="<key pair name>"
Export the certificates as environmental variables
export EC2_PRIVATE_KEY=<private_Key_file>
export EC2_CERT=<cert_file>
export EC2_URL=https://ec2.us-east-1.amazonaws.com
Create the security groups
ec2-create-group loadbalancer-sg -d "Loadbalancer Test group"
ec2-authorize loadbalancer-sg -o loadbalancer-sg -u $aws_account
ec2-authorize loadbalancer-sg -p 80 -s 0.0.0.0/0
Create the user-data file for the instance so that Apache is started and the index.html file is created
mkdir -p ~/temp/
echo '#! /bin/sh
yum -qy install httpd
touch /var/www/html/index.html
/etc/init.d/httpd start' > ~/temp/user-data.sh
Start the new instance and save the instanceid
instanceid=`ec2-run-instances ami-31814f58 -k "$keypair" -t t1.micro -g loadbalancer-sg -g default -z us-east-1a -f ~/temp/user-data.sh | grep INSTANCE | awk '{ print $2 }'`
Create the loadbalancer and attach the instance
elb-create-lb test-lb --availability-zones us-east-1a --listener "protocol=http, lb-port=80, instance-port=80"
elb-register-instances-with-lb test-lb --instances $instanceid
Wait until your instance state in the load balancer is "InService", then try to access the URLs
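To find the address to hit, the legacy tools print the load balancer's DNS name; a quick check could then look like this (the DNS name below is a placeholder):
elb-describe-lbs test-lb
curl -I http://test-lb-1234567890.us-east-1.elb.amazonaws.com/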
After installing TeamViewer, I changed the WampServer port to 8080, so the address is http://localhost:8080.
For the hosts file located at C:\WINDOWS\system32\drivers\etc\, I have also made the change below:
BEFORE
127.0.0.1 www.example.com
AFTER
127.0.0.1:8080 www.example.com
When I access www.example.com, it doesn't redirect to my WampServer. How can I fix it?
I managed to achieve this by using netsh, the networking tool included with Windows.
As Mat points out, the hosts file is for host name resolution only, so a combination of the two did the trick for me.
Example
Overview
example.app:80
| <--Link by Hosts File
+--> 127.65.43.21:80
| <--Link by netsh Utility
+--> localhost:8081
Actions
Started my server on localhost:8081
Added my "local DNS" in the hosts file as a new line
127.65.43.21 example.app
Any free address in the network 127.0.0.0/8 (127.x.x.x) can be used.
Note: I am assuming 127.65.43.21:80 is not occupied by another service.
You can check with netstat -a -n -p TCP | grep "LISTENING"
Added the following network configuration with the netsh command utility:
netsh interface portproxy add v4tov4 listenport=80 listenaddress=127.65.43.21 connectport=8081 connectaddress=127.0.0.1
I can now access the server at http://example.app
Notes:
- These commands/file modifications need to be executed with Admin rights
- netsh portproxy needs the IPv6 libraries even when only using v4tov4; typically they are included by default, but otherwise install them using the following command: netsh interface ipv6 install
You can see the entry you have added with the command:
netsh interface portproxy show v4tov4
You can remove the entry with the following command:
netsh interface portproxy delete v4tov4 listenport=80 listenaddress=127.65.43.21
Links to Resources:
Using Netsh
Netsh commands for Interface IP
Netsh commands for Interface Portproxy
Windows Port Forwarding Example
The hosts file is for host name resolution only (on Windows as well as on Unix-like systems). You cannot put port numbers in there, and there is no way to do what you want with generic OS-level configuration - the browser is what selects the port to choose.
So use bookmarks or something like that.
(Some firewall/routing software might allow outbound port redirection, but that doesn't really sound like an appealing option for this.)
What you want can be achieved by modifying the hosts file through the Fiddler 2 application.
Follow these steps:
Install Fiddler2
Navigate to the Fiddler2 menu: Tools > HOSTS.. (click to select)
Add a line like this:
localhost:8080 www.mydomainname.com
Save the file, then check out www.mydomainname.com in your browser.
Fiddler2 -> Rules -> Custom Rules
then find the function OnBeforeRequest and put the following script at the end:
if (oSession.HostnameIs("mysite.com")) {
    oSession.host = "localhost:39901";
    oSession.hostname = "mysite.com";
}
The simplest way is using Ergo as your reverse proxy:
https://github.com/cristianoliveira/ergo
You set your services with their IP:PORT and Ergo routes them for you :).
You can achieve the same using nginx or Apache, but you will need to configure them.
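As a rough illustration, a minimal nginx server block, assuming the app listens on localhost:8081 and you want to reach it as example.app (both are placeholders):
server {
    listen 80;
    server_name example.app;
    location / {
        # forward everything to the local service
        proxy_pass http://127.0.0.1:8081;
    }
}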
This doesn't give the requested result exactly; however, for what I was doing, I was not fussed about adding the port to the URL in the browser.
I added the domain name to the hosts file
127.0.0.1 example.com
Ran my HTTP server from the domain name on port 8080
php -S example.com:8080
Then accessed the website through port 8080
http://example.com:8080
Just wanted to share in case anyone else is in a similar situation.
If what is happening is that you have another server running on localhost and you want to give this new server a different local hostname like
http://teamviewer/
I think that what you are actually looking for is Virtual Hosts functionality. I use Apache so I do not know how other web daemons support this. Maybe it is called Alias. Here is the Apache documentation:
Apache Virtual Hosts examples
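As a rough sketch of that idea (the ServerName and DocumentRoot are placeholders; the hostname still has to resolve locally, e.g. via the hosts file):
<VirtualHost *:80>
    ServerName teamviewer
    DocumentRoot "C:/wamp/www/teamviewer"
</VirtualHost>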
You can use any free address in the network 127.0.0.0/8. In my case I needed this for Python Flask, and this is what I have done:
Add this line to the hosts file (on Windows you can find it under C:\Windows\System32\drivers\etc):
127.0.0.5 flask.dev
Make sure the port is the default port 80; in my case this is what it looks like in Python Flask: app.run("127.0.0.5","80")
Now run your code and browse to flask.dev.
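A minimal self-contained sketch of that setup (flask.dev and 127.0.0.5 come from the hosts entry above; binding to port 80 usually requires admin rights):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from flask.dev"

if __name__ == "__main__":
    # bind to the loopback alias from the hosts file, on the default HTTP port
    app.run("127.0.0.5", 80)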
Using netsh with connectaddress=127.0.0.1 did not work for me.
Despite looking everywhere on the internet, I could not find the solution that eventually worked for me: use connectaddress=127.x.x.x (i.e. any 127.* IPv4 address, just not 127.0.0.1), as this appears to loop back to localhost just the same but without the restriction, so the portproxy works in netsh.
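For example, the earlier portproxy command with an alternative loopback address as the connect address (addresses and ports are illustrative):
netsh interface portproxy add v4tov4 listenport=80 listenaddress=127.65.43.21 connectport=8081 connectaddress=127.0.0.2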
You need NGINX or the Apache HTTP server as a proxy server to forward HTTP requests to the appropriate application, which listens on a particular port (or do it with a CNAME record, which your hosting company provides). It is the most powerful solution, and it is a really easy way to keep adding new subdomains, or to add new domains automatically when DNS records are pointed at the server.
Apache calls it a Virtual Host:
httpd.apache.org/docs/trunk/vhosts/examples.html
NGINX calls it a Server Block:
https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/
Alternate way
Install Redirector
Click Edit redirects -> Create New Redirect