I am trying to serve my Node.js API (deployed on AWS EC2 behind an Application Load Balancer) through a CloudFront URL. Is that possible?
Here are the steps I have followed so far:
Created an S3 bucket with static website hosting enabled.
Created a CloudFront distribution and linked the S3 bucket to it. I can access the bucket's contents through the default URL generated by CloudFront.
Created a custom origin for the Node.js instance.
Created a behavior "api/*" to reach the Node.js API through CloudFront.
But when I try to access the API with the following URL:
http://d3m30a4naen9t2.cloudfront.net/api/getItems
it returns "not found". This is not a CloudFront 404; the response comes from the EC2 server itself, even though the specified route exists.
Can anyone help please?
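For context on the "api/*" behavior: CloudFront forwards the full original path, including the /api prefix, to the custom origin, so the routes on the EC2 instance have to be defined with that prefix. A minimal sketch, assuming an Express app (only the route name is taken from the question; everything else is illustrative):
// app.js - illustrative Express sketch, not the asker's actual code
const express = require('express');
const app = express();

// CloudFront forwards the path as-is, so the route must include the /api prefix.
app.get('/api/getItems', (req, res) => {
  res.json({ items: [] }); // placeholder payload
});

// The port must match whatever the ALB target group forwards to (3000 is an assumption).
app.listen(3000, () => console.log('API listening on port 3000'));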
I am using ELB. I deployed my Node.js code and everything is working fine. I faced a lot of problems with ELB, but we finally reached a stable state. If you want to serve your APIs, first set up SSL with a less restrictive security policy (in other words, a less strict SSL configuration); otherwise your API will not be reachable from other sources. Then deploy your code through Git or directly via FileZilla, and on both servers (primary and secondary) run pm2 start index.js (or server.js, or whatever your main Express file is).
Suggestion: please be careful when selecting security certificates, because on ELB, if the implementation is not correct, you will definitely run into "API not accessible" or "Remote server is unable to connect" errors.
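As a sketch of the pm2 step mentioned above, a process file keeps the start command identical on both servers (the file name, app name, and port below are assumptions, not details from the answer):
// ecosystem.config.js - pm2 process file, started with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api',
      script: 'index.js',      // or server.js, whichever is the main Express file
      instances: 2,            // run two workers in pm2 cluster mode
      exec_mode: 'cluster',
      env: { NODE_ENV: 'production', PORT: 3000 }
    }
  ]
};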
I have a Spring Boot application deployed with AWS Elastic Beanstalk, and I'm using an S3 bucket for my Angular app.
I have generated a certificate using AWS Certificate Manager and created a CloudFront distribution, so my Angular app is loaded over HTTPS.
The problem is that I am calling a REST API over plain HTTP from the application that is served over HTTPS.
I keep getting this error:
Mixed Content: The page at "https://mywebsite.com" was loaded over HTTPS, but requested an insecure XMLHttpRequest 'http://myendpoint'. This request has been blocked; the content must be served over HTTPS.
I tried generating my own certificate in my Spring Boot application; it worked locally, but once deployed on Elastic Beanstalk the web services don't respond.
Any tips on how to use HTTPS with Elastic Beanstalk?
The error message sums up the problem clearly. It would be a huge security issue to allow unencrypted data transfer on a seemingly securely encrypted web page.
Moreover, you don't really want to do SSL termination on your instances: for performance reasons, and because you don't want to manage keys manually, and so forth.
In your situation, I would advise setting up a CloudFront distribution in front of your ALB (which I assume you have). That will solve your problem immediately, as CloudFront will automatically set up a domain for you and expose your endpoints via HTTPS. Later, if you decide to, you can easily set up a custom domain and certificates.
Finally, I recommend reading this article to make sure you avoid common pitfalls when configuring ALB and CloudFront.
Best, Stefan
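Once CloudFront sits in front of the load balancer, the only change on the frontend is to call the HTTPS CloudFront domain instead of the plain-HTTP backend. A sketch (the domain and path below are placeholders, not the asker's real values):
// Before (blocked as mixed content):
// fetch('http://myendpoint/api/items')

// After: call the HTTPS endpoint exposed by CloudFront
fetch('https://d1234example.cloudfront.net/api/items')
  .then((res) => res.json())
  .then((data) => console.log(data));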
I'm building a small web application for a personal project. It will be an Angular web application which will talk to a Spring Boot service layer, which in turn will read/write to MongoDB.
I hope to host all of this on a single EC2 instance in AWS. My question is how to configure a web server (like Apache, but it doesn't have to be) to 'beautify' the URLs a bit. For example, without touching anything, Angular will run at something like host:4200 and the service layer at host:8080. I will then have to map a proper domain to the host in AWS, but the hiding of ports etc. is where it gets murky for me.
I want to be able to hit my web app at domain.com (no ports etc.) and I also want my service layer to ideally have a similar setup, e.g. domain.com/service (no ports etc.).
How do I configure a web server to do this for me? Examples or pointers to specific examples would be ideal, but even a pointer to the right documentation would be helpful.
This thread is kind of similar to what I want but not too helpful: How to deploy Spring framework backend and Angular 2 frontend application in any online server?
You can use a setup with AWS CloudFront as a reverse proxy and CDN cache. You can map the domain name and SSL certificates (you can use the free SSL certificates issued through AWS Certificate Manager) to CloudFront, while the EC2 instance is plugged in as an origin behind CloudFront, as shown in the following diagram.
In the diagram I have also optionally added the following, which are common practices when designing applications in AWS:
Hosting the Angular app in S3.
Using Auto Scaling and load balancing for the EC2 instances.
You need to use Apache or another web server as a reverse proxy. Start here:
https://devops.profitbricks.com/tutorials/configure-apache-as-a-reverse-proxy-using-mod_proxy-on-ubuntu/
You will then need to set up a custom domain name. The easiest option is to just use an ELB (now called a Classic Load Balancer). More details are here:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html
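Since the question says the web server doesn't have to be Apache, the same port-hiding can also be sketched in Node with Express and http-proxy-middleware. This is only an illustrative alternative to the Apache setup above; the paths and ports come from the question, the rest is assumed:
// proxy.js - a Node alternative to an Apache reverse proxy (sketch only)
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// domain.com/service/* -> Spring Boot service layer on port 8080
app.use('/service', createProxyMiddleware({
  target: 'http://localhost:8080',
  changeOrigin: true,
  pathRewrite: { '^/service': '' } // drop the /service prefix before forwarding (assumption)
}));

// domain.com/* -> the built Angular app, served as static files instead of ng serve on 4200
app.use(express.static('dist/my-app')); // build output path is an assumption

// Port 80 needs elevated privileges; in practice this often sits behind a load balancer instead.
app.listen(80, () => console.log('Reverse proxy listening on port 80'));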
I am writing a JavaScript/Strophe.js XMPP client, and so far I have been using it to connect to an XMPP server hosted at hosted.im via a public BOSH service (http://bosh.metajack.im:5280/xmpp-httpbind). The HTML/JavaScript is also hosted online, at testserver.host56.com (not the real URL).
Now I have decided to host the XMPP server on AWS and use my own BOSH service, hosted on that server as well.
My EC2 instance is at myAWSDNS.us-west-2.compute.amazonaws.com (also not the real URL).
I also have a BOSH service up and running at myAWSDNS.us-west-2.compute.amazonaws.com:7070.
Finally, I have allowed traffic to this EC2 instance through both the instance's firewall and the AWS security group policy.
However, when trying to connect to this instance's XMPP server (Openfire) using my JS/Strophe.js client, I get the following message in the Chrome JavaScript console:
XMLHttpRequest cannot load http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://myAWSDNS.us-west-2.compute.amazonaws.com' is therefore not allowed access
Why am I getting this issue if the origin is on the same domain as the requested resource?
The EC2 instance is running Windows Server 2012.
This is the code I use to log in:
// Connect to the BOSH endpoint on port 7070, then log in with the JID and password
var conn = new Strophe.Connection("http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/");
conn.connect("chris@myAWSDNS.us-west-2.compute.amazonaws.com", "myPassword", somecallback);
Thanks,
best regards,
Chris
As previously mentioned, even if you're on the same domain, the ports must also match; otherwise CORS is required.
You may not be using the correct URL for your connection manager; all of the ones I've seen use an address ending in /http-bind/ or similar.
Have you tried connecting with Strophe.Connection("http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/http-bind/")?
Also, you can test for the presence of the crossdomain.xml file by simply visiting http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/crossdomain.xml to check whether cross-domain access has been successfully enabled.
The browser will not allow it, since the ports are different. I don't know what you have running on AWS, but you can proxy the request in both directions, like so:
http://myAWSDNS.us-west-2.compute.amazonaws.com/http-bind/ <---------> http://myAWSDNS.us-west-2.compute.amazonaws.com:7070/
See item no. 5, "Connecting with Strophe.js", of the tutorial for the Apache use case.
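If Apache is not an option on the Windows instance, the same both-ways proxying can be sketched in Node with the http-proxy package. This is only an illustrative alternative to the Apache setup the answer refers to; the hostnames are the question's placeholders:
// proxy.js - forward /http-bind/ on port 80 to the BOSH service on port 7070 (sketch)
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({ target: 'http://localhost:7070' });

http.createServer(function (req, res) {
  if (req.url.indexOf('/http-bind') === 0) {
    // Same scheme, host, and port as the page, so the browser no longer treats it as cross-origin
    proxy.web(req, res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(80);

// The client then connects without the :7070 port:
// var conn = new Strophe.Connection("http://myAWSDNS.us-west-2.compute.amazonaws.com/http-bind/");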
I am hosting my website on S3.
Locally I am using Backbone.js on top of a PHP REST API that uses MySQL as a database.
So I launched an EC2 instance to host my REST API, but then realized this means cross-domain AJAX.
How can I use EC2 for my REST API if my index.html sits on S3?
What are my other DB options for S3?
many thanks,
Your JavaScript is being executed on web pages served from S3, and it has to access a REST API from a server you run on EC2. Unless the web pages and server are in the same domain (say, example.com), this will be a cross-origin request, prohibited by browsers by default.
Solution 1: have your S3 pages and your EC2 server in the same domain. S3 allows static website hosting that makes your S3 objects (web pages) available at the address of your choice. Put them and your EC2 server at addresses on the same domain, and it can work.
Solution 2: have your REST API server allow cross-origin requests. Since you control this EC2 server you can modify it to tell web browsers to allow pages from other domains to make such requests to your server. This involves setting several headers on your responses, and usually requires your server to respond to HTTP OPTIONS requests properly, too. See the CORS specification to get started on that.
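As an illustration of the headers involved in Solution 2: the question's API is PHP, and the same response headers apply there; this Express sketch is only meant to show which headers and which OPTIONS handling are involved (the origin and route are placeholders):
// cors-sketch.js - illustrative only; the real API in the question is PHP
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Allow the S3-hosted site to call this API; a specific origin is safer than '*'
  res.setHeader('Access-Control-Allow-Origin', 'http://my-bucket.s3-website-us-east-1.amazonaws.com');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // answer the preflight request without a body
  }
  next();
});

app.get('/items', (req, res) => res.json([])); // placeholder endpoint

app.listen(8080);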
You also ask about other DB options. If you keep your own EC2 server to provide the REST API, it can use pretty much any kind of database you like, either running on the same or other EC2 instances, or database-as-a-service offerings like Amazon RDS or DynamoDB. But if you want to connect to the database directly from your web pages' JavaScript, you would have to use a service that provides an HTTP API directly and supports CORS. RDS does not provide an HTTP API at all, and DynamoDB does not seem to support CORS at this time, so neither of them would work.
I want to make my web app fast, especially the first page load (index.html).
Can I do this by hosting myfastapp.com on Rackspace Cloud Files and then having a subdomain called nodeserver.myfastapp.com which connects to a Node server on Joyent?
Note: the Node server will only connect via socket.io to tell the client which additional files to grab from the CDN (myfastapp.com).
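A rough sketch of the flow described in that note (the event name and file paths are made up for illustration; only the hostnames come from the question):
// Client side: the page is served from the CDN at myfastapp.com, loads the socket.io client,
// and asks the Node server which files to fetch next
var socket = io('http://nodeserver.myfastapp.com');
socket.on('assets', function (urls) {
  urls.forEach(function (url) {
    var script = document.createElement('script');
    script.src = url; // e.g. 'http://myfastapp.com/js/app.js'
    document.body.appendChild(script);
  });
});

// Server side (on Joyent): tell each client which CDN files to grab
// var io = require('socket.io')(8080);
// io.on('connection', function (socket) {
//   socket.emit('assets', ['http://myfastapp.com/js/app.js']);
// });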
There's a guide for this in the Cloud Files docs at Create Static Website.
There should be no issue with the logistics of that.
The main issue is getting the main site onto Cloud Files because of CNAME restrictions, at least in the Rackspace system, but it can probably be done.