I have installed composer-rest-server on an AWS machine and managed to launch it without errors, although it always refers to http://localhost:3000. I want to change localhost to my actual host IP on AWS.
Can someone help?
You need to make your REST server available on an IP address or a DNS-resolvable hostname, i.e. on an available network interface, so that other REST clients can consume it.
It's likely your REST server (accessed through the Explorer) is already listening on 0.0.0.0:3000 and hence listening on all configured interfaces (on your server). More info on REST server deployment here -> https://hyperledger.github.io/composer/integrating/deploying-the-rest-server.html
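For example, on the EC2 host you can confirm the listening address and then reach the server via the instance's public IP (placeholder values; port 3000 also needs an inbound rule in the EC2 security group):
# on the instance: confirm the REST server listens on all interfaces
sudo netstat -tlnp | grep 3000     # expect something like 0.0.0.0:3000
# from your own machine, once port 3000 is open in the security group
curl http://<ec2-public-ip>:3000/explorer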
The parameter to change in your COMPOSER_DATASOURCES (if you need to change it at all) is shown below (basically straight out of LoopBack):
COMPOSER_DATASOURCES='{
  "db": {
    "name": "db",
    "connector": "mongodb",
    "host": "mongo",
    "ip": "10.99.98.x"
  }
}'
referenced here -> https://hyperledger.github.io/composer/integrating/deploying-the-rest-server.html
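As a rough sketch (the values are the placeholders from above, and whatever flags your existing launch command uses stay the same and are omitted here), the override is just an environment variable set before starting the server:
export COMPOSER_DATASOURCES='{"db":{"name":"db","connector":"mongodb","host":"mongo","ip":"10.99.98.x"}}'
composer-rest-server   # plus your usual options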
I have a Spring Boot application on port 5000 that I am trying to deploy to ECS Fargate. When I build it as a Docker image locally, I can easily map 80:5000 and do not need the port in the URL.
I cannot seem to do the above on ECS Fargate.
When I set the container port to 5000 in the task definition, it created it like this:
{
  ...
  "portMappings": [
    {
      "containerPort": 5000,
      "hostPort": 5000,
      "protocol": "tcp"
    }
  ],
  ...
}
I tried fixing it as JSON, but I received an error message that the host and container ports must match.
As it is, I had to open a TCP inbound rule for port 5000 in the security group, and I have to visit my application's public IP with the 5000 port. It does not work without it (port 80 is also open in the security group).
I have done this before with ALBs and services of more than one container, and it works fine with a domain name or the DNS name of the load balancer without the 5000 port.
Can I achieve this with a single container? Sorry for my noobness.
I have done this before with ALBs and services of more than one container, and it works fine with a domain name or the DNS name of the load balancer without the 5000 port.
Can I achieve this with a single container?
No. You would either need to modify your Spring Boot app to listen on port 80, or add an Application Load Balancer in front of the ECS service. Note that even if you configure the container to listen on port 80, that's still very insecure. If you are exposing an ECS container to web browsers, you should absolutely be using an Application Load Balancer configured with an AWS ACM SSL certificate to make the connection between the web browser and AWS secure.
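A rough sketch of that setup with the AWS CLI (names, ARNs and IDs are all placeholders): the ALB terminates TLS on 443 with an ACM certificate and forwards to a target group on container port 5000.
# target group for the Fargate tasks (target-type ip is required for Fargate/awsvpc)
aws elbv2 create-target-group --name springboot-tg --protocol HTTP --port 5000 \
    --vpc-id <vpc-id> --target-type ip
# HTTPS listener on the ALB, forwarding to that target group
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
    --certificates CertificateArn=<acm-certificate-arn> \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>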
I have created a VM instance on Google Cloud Platform on which I have installed NiFi. There are two types of IP addresses:
1) Internal IP
2) External IP
Now, when I start the NiFi service it hosts its services on the internal IP, but when I try to access the external IP via my local browser I am unable to reach it, since it's a private IP. I tried creating an Ingress firewall rule that allows all IPs on port 8080, but to no avail.
So where am I going wrong? I tried searching for relevant solutions but had no luck.
Attaching a screenshot of the firewall config:
Please help me with some links / solutions.
Your issue is a misunderstanding of how IP addresses work in Google Cloud.
You have two types of IP, as you stated. The internal IP is for communication between compute instances and services inside the Google Cloud VPC. The important part is that it works only within Google Cloud, inside your project, and that is the internal IP of your instance.
The external IP is an optional IP assigned to the instance to allow communication from outside Google Cloud, for example from your browser. This external IP is not actually known to your instance, which is what confused you, but don't worry: if you access port 8080 on the external IP you won't get any errors and should see your app.
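If the firewall rule is the problem, a minimal sketch with gcloud (the rule name is arbitrary) that allows TCP 8080 from anywhere:
gcloud compute firewall-rules create allow-nifi-8080 \
    --direction=INGRESS --action=ALLOW --rules=tcp:8080 \
    --source-ranges=0.0.0.0/0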
I solved my problem in the following ways:
1) I edited my VM and unchecked the "Allow HTTPS traffic" option.
2) I changed my NiFi listener port from 8080 to 80, since 8080 is blocked in my organization.
No firewall rules added. At least it worked for me.
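For reference, the listener host and port changed above live in conf/nifi.properties; a sketch with those values (binding to all interfaces is my assumption, not something stated in the answer):
# conf/nifi.properties
nifi.web.http.host=0.0.0.0
nifi.web.http.port=80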
I have Windows Server 2008 R2 running in AWS (with an Elastic IP) where I am running an Apache service to host a website, which I can easily access using various methods:
"localhost:port" works fine.
Using the server NIC's "IP address:port" works fine.
In my current use case, where I want to expose this website on my Elastic IP, I am not able to do so.
However, if I host any website on IIS I can access it using my Elastic IP, but I am unable to host the Apache website on IIS.
Whenever I try to access the Apache one, it never works.
I tweaked the firewall settings.
I also updated Apache's conf file; whenever I try to give it the Elastic IP address, it will not start.
It fails with the following error message:
The Apache service named reported the following error:
>>> (OS 10049)The requested address is not valid in its context. : make_sock: could not bind to address 52.xx.xx.xx:8888
Following are the service details of Apache in services.msc:
C:\Program Files (x86)\vcollapp\apache\bin\Apache.exe -k runservice
Now how can I expose my site, which is running as an Apache service inside Windows, on the AWS Elastic IP?
Thanks in advance for your help and time.
Try binding to 0.0.0.0? This will accept connections from all interfaces, not just localhost.
If you check your network interfaces, do you see an interface with the Elastic IP? Or do you just have your Private IP?
There are different ways of using Elastic IPs, and my assumption here is that the IP is not physically attached to your machine; instead, you have a private IP that all traffic from the Elastic IP is routed through.
WAN -> Elastic -> Private
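In practice that means the Listen directive in httpd.conf should not name the Elastic IP, since the Elastic IP is NAT-mapped and never present on the Windows NIC; a sketch:
# httpd.conf
# Listen 52.xx.xx.xx:8888   <- fails with OS 10049, the address is not on any local NIC
Listen 0.0.0.0:8888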
I have a Kubernetes (0.15) cluster running on CoreOS instances on Amazon EC2.
When I create a service that I want to be publicly accessible, I currently add some private IP addresses of the EC2 instances to the service description like so:
{
  "kind": "Service",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "api"
  },
  "spec": {
    "ports": [
      {
        "name": "default",
        "port": 80,
        "targetPort": 80
      }
    ],
    "publicIPs": ["172.1.1.15", "172.1.1.16"],
    "selector": {
      "app": "api"
    }
  }
}
Then I can add these IPs to an ELB load balancer and route traffic to those machines.
But for this to work I need to maintain the list of all the machines in my cluster in every service that I am running, which feels wrong.
What's the currently recommended way to solve this?
If I know the PortalIP of a service is there a way to make it routable in the AWS VPC infrastructure?
Is it possible to assign external static (Elastic) IPs to Services and have those routed?
(I know of createExternalLoadBalancer, but that does not seem to support AWS yet)
If you reach this question, I want to let you know that external load balancer support is available in the latest Kubernetes version.
Link to the documentation
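With that support the service simply declares type LoadBalancer and Kubernetes provisions the cloud load balancer for you; a minimal sketch in the same JSON style as above (a newer API version is assumed):
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "api" },
  "spec": {
    "type": "LoadBalancer",
    "ports": [ { "name": "default", "port": 80, "targetPort": 80 } ],
    "selector": { "app": "api" }
  }
}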
You seem to have a pretty good understanding of the space - unfortunately I don't have any great workarounds for you.
CreateExternalLoadBalancer is indeed not ready yet - it's taking a bit of an overhaul of the services infrastructure to get it working for AWS because of how differently AWS's load balancer is from GCE's and Openstack's load balancers.
Unfortunately, there's no easy way to have the PortalIP or an external static IP routable directly to the pods backing the service, because doing so would require the routing infrastructure to update whenever any of the pods gets moved or recreated. You'd have to have the PortalIP or external IP route to the nodes inside the cluster, which is what you're already effectively doing with the PublicIPs field and ELB.
What you're doing with the load balancer right now is probably the best option - it's basically what CreateExternalLoadBalancer will do once it's available. You could instead put the external IPs of the instances into the PublicIPs field and then reach the service through one of them, but that's pretty tightly coupling external connectivity to the lifetime of the node IP you use.
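For completeness, that last variant is just the spec from the question with the nodes' external IPs in the publicIPs field (addresses are placeholders):
"publicIPs": ["<node-1-external-ip>", "<node-2-external-ip>"]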
I have tried everything. I can get to my application using ec2-x-x-x-x.compute-1.amazonaws.com, but I cannot ping the address.
However, when I do ping the Amazon DNS name, it resolves to the IP address but does not respond to ping.
When I put the IP address in the browser, it times out and gives me the Chrome "Oops" page. I have gone through the security group several times.
I have checked the server, including the IPtables and the ports that Apache is listening to.
I don't have a lot of knowledge in this area, but I have tried everything in the forum and more.
I even created another Elastic IP and associated it with the instance.
Please help.
By default, you cannot ping an EC2 instance, since it is blocked by the firewall (see why can't I ping my instance):
Ping uses ICMP ECHO, which by default is blocked by your firewall. You'll need to grant ICMP access to your instances by updating the firewall restrictions that are tied to your security group.
ec2-authorize default -P icmp -t -1:-1 -s 0.0.0.0/0
Check out the latest developer guide for details.
Section: Instance Addressing and Network Security -> Network Security -> Examples
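ec2-authorize is the legacy CLI; a rough equivalent with the current AWS CLI (assuming the group is still called default) would be:
aws ec2 authorize-security-group-ingress --group-name default \
    --ip-permissions IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges='[{CidrIp=0.0.0.0/0}]'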
As for HTTP requests - your instance is available and looks fine (I suggest you remove the real DNS name from your post though)...
For EC2 the best options are:
1) Open port 5060 and ports 10000-20000 UDP in the firewall (security group).
2) Allocate and attach an Elastic IP.
3) In sip.conf add:
externhost=elastic_ip_here
localnet=10.0.0.0/255.0.0.0
Every time you start/stop that instance, attach the same Elastic IP.
For web access you also need to open port 80 in the security group.
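A sketch of those security group rules with the AWS CLI (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol udp --port 5060 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol udp --port 10000-20000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0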