Block web traffic from the internet (public access) - Jelastic

I just configured an NGINX instance on the Jelastic platform.
In my environment's firewall inbound rules, a few default rules have now been added with source All (HTTP, HTTPS, ...).
I changed the source of these firewall rules to Local LAN.
So I expect that when I go to my Jelastic public URL in my browser on my own computer, I do not get a website but I'm blocked by the firewall.
This is not happening. I do not want the website to be usable from the outside. This environment will host some REST APIs and workers running on the inside, triggered only by other environments I have.
Kind regards.
Roel

We recommend following this guide to disable access to your container (CT) from the outside: https://docs.jelastic.com/release-notes-59/#restrict-node-access-via-shared-load-balancer-slb
However, please keep in mind that you won't be able to access this CT from another CT either.
UPDATE: a little clarification.
If "Access via SLB" disabled, the nodes within the layer are inaccessible via SLB (including the Open in Browser button in the dashboard) and return the 403 "Forbidden error" instead of the intended service. Herewith, access via the private network from the other nodes of the environment, access via SSH and access via endpoints from the public network is not affected.
We also want to draw your attention to that described feature is available from the Jelastic PaaS 5.9 release
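For illustration, after "Access via SLB" is disabled, a quick check from outside could look like this (the environment URL below is just a placeholder):
curl -I https://env-myapp.example-jelastic-host.com/
# expected: HTTP/1.1 403 Forbidden when the request is routed through the Shared Load Balancer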

Related

Hosting a VPN on Heroku

I was wondering if it's possible to host a private VPN on Heroku?
My (hypothetical) use case: let's say there's some service that's only available in Europe, but I want to access it from the USA. I'd like to turn a European Heroku server into a personal VPN that just allows me to access that service.
I did some research and can't find anyone else who's tried/documented this.
You basically want a proxy. Heroku forbids running an open proxy, so you should restrict its use.
XIX. Operate an “open proxy” or any other form of Internet proxy service that is capable of forwarding requests to any end user or third-party-supplied Internet host;
--https://www.heroku.com/policy/aup
But technically it is possible, so you might want to try it: https://github.com/Rob--W/cors-anywhere. If you want to use it from a browser, you will need to check the required headers in the server.js file.
Note that this project is not intended to be used as an open proxy, so, for example, relative paths are not loaded properly.
You might also want to try the following; it might be more appropriate, though I did not try it myself ... :)
https://github.com/http-party/node-http-proxy#setup-a-basic-stand-alone-proxy-server
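As a rough sketch of the stand-alone setup from that README (the target URL is a placeholder, and you would still need to add some authentication so it is not an open proxy):
// minimal stand-alone forwarding proxy, based on the node-http-proxy README
var http = require('http');
var httpProxy = require('http-proxy');

// forward every incoming request to the (hypothetical) European service
var proxy = httpProxy.createProxyServer({ target: 'http://example-service.eu' });

http.createServer(function (req, res) {
  // TODO: check some credential here so this is not an open proxy (see the Heroku AUP above)
  proxy.web(req, res);
}).listen(process.env.PORT || 8080);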

How to use Azure Traffic Manager with a custom domain, if the DNS settings don't allow for forwarding

I have an Azure web app up and running, using a custom domain purchased outside of Azure... and that all runs fine. So I have https://myappname.azurewebsites.net/ loading fine with my domain name URL https://www.myappname.com
I'm trying to upgrade the web app, though, to use Azure Traffic Manager. I've cloned the app a few times, each on its own app service plan, and I have Traffic Manager up and running fine. I can successfully hit different versions of my cloned website based on the Traffic Manager configuration profile... so no issues there.
The only issue is that I can only access the "traffic managed" version of my website via the standard Azure URL -> myappname.trafficmanager.net.
All the examples I've seen say that all I really need to do now is go into my DNS management screen and add domain forwarding; however, my online DNS management tool does not offer this option.
I can't really change my A record in the DNS management screen, because I don't know the IP address of myappname.trafficmanager.net.
Every place I've tried to change the name of the current/working Azure URL (like in awverify text files, www CNAMEs, etc.) does nothing. The DNS still points to the single instance, which remains as the IP address of the DNS manager's A record.
Also, since my live/single instance is linked to the domain name (along with the SSL binding), I can't add those properties to the clones, which makes sense... only one version can be live. However, I could unbind that when I make the switch from the single-instance web app to the traffic-managed set of clones, but I fear I can only bind it to one of the clones. I can't seem to bind it to the myappname.trafficmanager.net version, which might cascade down to all of its endpoints. Is there a way to bind my domain name and SSL cert to more than one version of my web app?
Thanks!
Is there a way to bind my domain name and SSL cert to more than one version of my web app?
I don't think you can do that unless you have two different domains or subdomains, each with its own SSL cert. Each web app hostname is globally unique, and each SSL binding is attached to the web app's domain name.
If you have a purchased domain, just keep the default xxx.azurewebsites.net as each app's hostname. You can then configure the two Azure app services as the endpoints of Traffic Manager.
By default, Azure provides a wildcard cert for the domain *.azurewebsites.net, so you can access these hostnames over HTTPS without any extra cert. Then add a CNAME record www for your domain at your DNS provider, pointing to the Traffic Manager hostname myappname.trafficmanager.net. Since Traffic Manager works at the DNS level, it does not validate server or client SSL, so you can safely ignore the SSL warning when accessing via the Traffic Manager hostname.
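As a sketch, the record at your DNS provider would look roughly like this (zone-file notation; myappname.com stands in for your real domain):
www.myappname.com.    3600    IN    CNAME    myappname.trafficmanager.net.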
Feel free to let me know if you have any questions.

WSO2 ESB proxy service on Windows

I'm using WSO2 ESB to integrate several services on a Windows virtual machine.
I used a simple proxy to map the services deployed on it. The problem is that I can't access them from outside the machine, even though port 8280, where the services are deployed, is open to the internet; I only see a blank page instead. What could be wrong?
Another question: I was trying to map the WSO2 ESB management console itself to be available from outside the machine using a simple proxy, and I failed; the page only partially loads (see the screenshot of what I get when trying the service).
Could you please give me a hint on how to resolve this issue? Is it possible to expose the ESB management console using the ESB itself?
Thanks a lot in advance,
Do you have a proxy in the middle? It looks like, in the screenshot, the webpage is missing all the images, while the CSS was loaded successfully.
Another question: which kind of virtual machine do you use? For example, in VirtualBox the virtual machine is behind NAT by default.
I wasn't able to connect to a server running in the virtual machine from the host; only the opposite worked: a server on the host was reachable from inside the virtual machine.
To make a server in the virtual machine reachable from the host, you need to configure the network as bridged.
Not sure if it helps, but I think I had a similar problem in our corporate network after I applied all the security patches (POODLE, Diffie-Hellman, etc.). I had to configure, in catalina.xml (if I remember right), the addresses that are allowed to access the admin console. I can't tell you more details because I'm on holiday :-)
Maybe it's worth giving it a try.
Another example from real life: the HTTP response from an external resource was declared as application/json, with status 200 OK. The ESB was configured to use
<messageFormatter contentType="application/json"
                  class="org.apache.synapse.commons.json.JsonStreamFormatter"/>
but the content was actually plain text (text/plain).
While parsing the body of the HTTP response, an exception was thrown and silently written to the log, without any fault message processing; the client just got an empty response.
To verify that the services are reachable, there is an echo service on the server by default, which responds with content equal to the request. Try using it.
I was trying to map the WSO2 ESB management console itself to be available from outside the machine using a simple proxy
By default, the management console tries to enforce port 9443 for dynamic (JSP) pages. That's why you see only part of the pages and you shouldn't be able to log on.
What you can do is edit repository/conf/tomcat/catalina-server.xml and add the attribute proxyPort="443" to the Connector running on port 9443; the Carbon console will then be happy to run on 443.
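For illustration, the relevant Connector would end up looking something like this (a sketch; keep the other attributes from your existing catalina-server.xml unchanged):
<!-- HTTPS Connector of the management console; only proxyPort="443" is added -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="443"
           scheme="https"
           secure="true"/>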
For the services, my educated guess would be the firewall/network rules; however, without more information I cannot answer (or they are working, and you just can't test them with a simple browser request).

No 'Access-Control-Allow-Origin' issue, despite all resources being on same domain

I am writing a JavaScript/Strophe.js XMPP client, and have so far been using it to connect to an XMPP server hosted at hosted.im, via a public BOSH service (http://bosh.metajack.im:5280/xmpp-httpbind). The HTML/JavaScript is also hosted online, at testserver.host56.com (not the real URL).
Now, I have decided to host the XMPP server on the Amazon cloud and use my own BOSH service, hosted on this server as well.
My EC2 instance is at myAWSDNS.us-west-2.compute.amazonaws.com (also not the real URL).
I also have a BOSH service up and running at myAWSDNS.us-west-2.compute.amazonaws.com:7070.
Finally, I have allowed traffic to this EC2 instance through both the instance's firewall and the AWS Security Group policy.
However, when trying to connect to this instance's XMPP server (Openfire) using my JS/Strophe.js client, I get the following message in the Chrome JavaScript console:
XMLHttpRequest cannot load http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://myAWSDNS.us-west-2.compute.amazonaws.com' is therefore not allowed access
Why am I getting this issue, if the origin is on the same domain as the requested resource?
The EC2 instance is running Windows Server 2012.
This is the code I use to log in:
var conn = new Strophe.Connection("http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/");
conn.connect("chris#myAWSDNS.us-west-2.compute.amazonaws.com", "myPassword", somecallback);
Thanks,
best regards,
Chris
As previously mentioned, even if you're on the same domain, the ports must also match; otherwise CORS is required.
You may not be using the correct URL for your connection manager; all of the ones I've seen use an address ending in /http-bind/ or similar.
Have you tried connecting with Strophe.Connection("http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/http-bind/");?
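If it helps, here is a rough sketch of what that connection code could look like, with a simple status callback (the /http-bind/ path is an assumption about how your Openfire BOSH endpoint is exposed):
// sketch: connect through the BOSH endpoint rather than the bare port root
var BOSH_URL = "http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/http-bind/";
var conn = new Strophe.Connection(BOSH_URL);

conn.connect("chris@myAWSDNS.us-west-2.compute.amazonaws.com", "myPassword", function (status) {
  if (status === Strophe.Status.CONNECTED) {
    console.log("connected");
  } else if (status === Strophe.Status.CONNFAIL) {
    console.log("connection failed");
  }
});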
Also, you can test for the presence of the crossdomain.xml file by simply visiting http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/crossdomain.xml to ensure that CORS has been successfully enabled.
The browser will not allow it, since the ports are different. I don't know what you have at AWS, but you can proxy the requests in both directions, like so:
http://myAWSDNS.us-west-2.compute.amazonaws.com/http-bind/ <---------> http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/
See item no. 5, "Connecting with Strophe.js", of the tutorial for the Apache use case.
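As a sketch, the Apache side of such a same-origin proxy could be as simple as the following (assuming mod_proxy and mod_proxy_http are enabled; the port and path are taken from the question):
# forward /http-bind/ on the default port to the BOSH service on 7070,
# so the page and the BOSH endpoint share the same origin
ProxyPass        /http-bind/ http://localhost:7070/http-bind/
ProxyPassReverse /http-bind/ http://localhost:7070/http-bind/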

How to allow communication with an application from outside the network?

I have an application running on my personal network. This application can send emails to users, and they can acknowledge receipt via the email they receive, as long as they are on my personal network. This is because they have to access the application to perform the acknowledgement action.
I want to extend this and see if I can allow acknowledgements via email from outside the network as well. I know I have to change my application to do this, but I'm not sure which way to go. Can someone shed some light?
My application is a Spring-based web application.
You need to configure your firewall to allow outside access to whatever port the app runs on.
You need to configure port forwarding on your gateway to direct outside traffic to the system running your app (unless your gateway is the server running the app).
After that, you should just be able to go to youroutfacingip:portforapp, for example http://123.456.78.90:12345, in a web browser anywhere.
You can set up DNS if you want to use a URL instead of an IP.
Keep in mind that anyone can go to this URL, so make sure the app has access control.
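If your gateway happens to be a Linux box, the port-forwarding step mentioned above might look roughly like this (all addresses and ports below are made up for illustration):
# send incoming TCP traffic on public port 12345 to the app server at 192.168.1.10:8080
iptables -t nat -A PREROUTING -p tcp --dport 12345 -j DNAT --to-destination 192.168.1.10:8080
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 8080 -j ACCEPT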
