I am trying to set up a private npm registry using Verdaccio on an EC2 instance (Linux). I have also configured a CloudFront distribution in front of it.
I can access the registry via the public EC2 IP, but when I visit the CloudFront hostname I keep getting:
Mixed Content: The page at 'https://somedomain.com/' was loaded over HTTPS, but requested an insecure script 'http://somedomain.com/-/static/runtime.d4346a1621c9b92e2ba9.js'. This request has been blocked; the content must be served over HTTPS.
I have tried this configuration, but it doesn't seem to work:
VERDACCIO_PUBLIC_URL='https://somedomain.org' <- environment variable (bash_profile)
url_prefix: '/' <- in config.yaml
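For what it's worth, a sketch of a combination that is often needed behind an https proxy (unverified here; somedomain.com is a placeholder): url_prefix can carry the full https URL instead of just '/', so the asset links Verdaccio generates use https.

```yaml
# config.yaml — a sketch, not a confirmed fix: give Verdaccio its full
# public https URL so the static-asset links it generates use https
url_prefix: 'https://somedomain.com/'
```

Also note that VERDACCIO_PUBLIC_URL only takes effect if it is exported in the environment of the process that actually runs Verdaccio (e.g. its systemd unit or pm2 config); a bash_profile entry is read only by interactive shells.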
I've successfully created a subdomain on Route 53 that points to an EC2 instance. When I access my subdomain subdomain.domain.com, the Amazon Linux AMI test page appears.
How do I upload my website to subdomain.domain.com?
If I usually add files to /var/www/html/ for the primary domain, where do they go for the subdomain?
I have also checked the server root using WinSCP, and there is no subdomain directory.
In this case my subdomain is "blog".
Hope to get the best answer.
Thanks.
"subdomain.domain.com" is a logical name (called hostname) for the IP of your EC2 instance. You should not expect any subdomain directory on your EC2 instance.
It looks like the directory /var/www/html/ does not have any landing page (index.html). Upload your website under the directory /var/www/html/ such that your index.html is placed directly under the directory /var/www/html/.
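To illustrate the layout Apache expects, here is a small sketch (a temp directory stands in for /var/www/html; the scp line in the comment is a hypothetical upload command with placeholder key and host):

```shell
# Hypothetical upload from a workstation (key file and hostname are placeholders):
#   scp -i my-key.pem -r ./mysite/. ec2-user@subdomain.domain.com:/var/www/html/
# The point: index.html must end up directly under the web root.
WEBROOT=$(mktemp -d)                      # stand-in for /var/www/html
printf '<h1>blog</h1>\n' > "$WEBROOT/index.html"
ls "$WEBROOT"                             # index.html at the top level, no subdirectory
```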
I've created SSH tunnels in the past, but I'm having trouble on OS X. I want to forward a website's port 80 to my localhost:8080. When I run this command
ssh -L 8080:<cloud_ip_address>:80 root@<cloud_ip_address> -N
I get the default Apache 'It works!' page.
Why am I not getting port 80 of the remote machine (which is running a web app)?
UPDATE
I still do not have a solution, but I have more information: the page I am getting is the default page in /var/www/html, while I am serving a Flask app that has no static pages.
Because an HTTP request carries not only the IP address but also the hostname (the Host header, taken from the URL you type into your browser), and that hostname differs between <cloud_hostname> and localhost. The easiest way to trick it is to add an entry to /etc/hosts (macOS uses /etc/hosts as well) that points the hostname of your remote machine at localhost:
127.0.0.1 <cloud_hostname>
But note that while this entry is in place you will not be able to access the remote machine using its hostname!
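An alternative sketch that avoids editing /etc/hosts: curl's --resolve option pins a hostname to an address for a single request. Here myapp.example and a local Python server stand in for the cloud hostname and the tunnelled app; this helps for command-line testing, though a browser still needs the hosts-file entry.

```shell
# Stand-in for the app at the far end of the tunnel
python3 -m http.server 8080 --bind 127.0.0.1 &
SERVER_PID=$!
sleep 1
# Send the request with the *remote* hostname but connect to localhost,
# so name-based virtual hosting sees the hostname it expects
curl -s --resolve myapp.example:8080:127.0.0.1 http://myapp.example:8080/ >/dev/null && echo "request ok"
kill "$SERVER_PID"
```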
I have deployed a Sails application with dokku on an Amazon EC2 instance. After deployment I ran dokku run app-name sails console, and my Sails app is running; when I check the Sails logs, they say it is running on localhost:5000.
dokku url app-name gives me a URL, e.g. example.com, but when I try to access example.com in the browser it doesn't work. Isn't the app supposed to run at the URL given by dokku? And when I hit that URL, shouldn't nginx proxy to localhost:5000?
What am I missing here?
Your application should listen on the 0.0.0.0 interface; otherwise the nginx process outside of the container cannot proxy to it.
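A quick sketch of the difference, using python3 -m http.server as a stand-in for the Sails app (ports are arbitrary):

```shell
# Loopback-only bind: reachable only from inside the container itself
python3 -m http.server 5000 --bind 127.0.0.1 &
LOOP_PID=$!
# All-interfaces bind: this is what dokku's nginx can actually reach
python3 -m http.server 5001 --bind 0.0.0.0 &
ALL_PID=$!
sleep 1
ss -tln | grep -E '127\.0\.0\.1:5000|0\.0\.0\.0:5001'   # shows both listening sockets
kill "$LOOP_PID" "$ALL_PID"
```

The key point: a server bound to localhost:5000 inside the container is invisible to the proxy outside it, even though the port number is right.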
I have HDFS running on an EC2 node (pseudo multi-node setup) and I use it to access files via the WebHDFS REST API by doing a GET at, e.g.:
http://ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com:50070/webhdfs/v1/foo/bar.txt?op=OPEN
This gives me a temporary redirect to
http://ip-yy-yy-yy-yy.us-west-2.compute.internal:50075/webhdfs/v1/foo/bar.txt?op=OPEN&namenoderpcaddress=localhost:9000&offset=0
Here xx-xx-xx-xx is the public, static IP assigned to my instance and yy-yy-yy-yy is the instance's private IP.
This makes the redirect fail, because ip-yy-yy-yy-yy.us-west-2.compute.internal obviously cannot be resolved from my browser. I want the generated redirect URL to use the static public IP assigned to my instance, resolvable by the default public DNS.
Here is the list of HDFS configuration defaults, but I can't work out what is causing this behaviour.
My hdfs-site.xml configs:
dfs.replication: 1
dfs.webhdfs.enabled: true
My core-site.xml configs:
fs.defaultFS: hdfs://localhost:9000
Any sort of help is appreciated, thanks!
I am trying to set up an environment where I run part of my backend locally and send requests to an EC2 instance from my local computer. I have CDH 4.5 set up, and it works OK. When I run the following request
curl --negotiate -i -L -u:hdfs http://ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com:50070/webhdfs/v1/tmp/test.txt?op=OPEN
This works from any EC2 instance in that region, but not from outside it. If I try locally, it returns the following error:
curl: (6) Could not resolve host: ip-xx-xx-xx-xx.eu-west-1.compute.internal
Where can I configure HDFS so that it does not redirect the call to the internal hostname?
Many thanks
The easiest and fastest way to solve this is to configure your client's hosts file to map the internal address to the external address.
WebHDFS uses the hostname configured in hdfs-site.xml, which is configured automatically by the Cloudera agent on each datanode. I don't know of a way to override the configured hostname per datanode in CDH.
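For example, on the client machine the hosts-file entry might look like this (placeholders as in the question; on Linux/macOS the file is /etc/hosts, on Windows it is C:\Windows\System32\drivers\etc\hosts):

```
# map the datanode's internal EC2 hostname to the instance's public IP
<public-ip-of-instance>   ip-xx-xx-xx-xx.eu-west-1.compute.internal
```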