I'm working on a repo which serves a create-react-app from a Node endpoint. So the React app is nested as a child directory:
.
├── Procfile
├── frontend
│ ├── README.md
│ ├── build
│ ├── package.json <---- "proxy": "http://localhost:$PORT"
│ ├── public
│ ├── src
│ │ ├── App.css
│ │ ├── App.js
│ │ └── // etc...
│ └── .env <----- frontend env file, removed PORT value from here
├── package.json
├── src
│ ├── app.js
│ ├── server.js
│ └── // etc...
├── .env <--- backend env file, PORT=9000 for node
├── static.json
└── yarn.lock
With the PORT value removed from the .env file, CRA runs on port 3000. If I hardcode port 9000 instead of $PORT, the proxy works properly in development.
However, when deploying to production, I want the frontend to proxy to Heroku's dynamically assigned port.
Heroku seems to ignore the PORT value even if I intentionally define it in the config vars on their website, with a value of 9000.
My question is: how do I define the proxy on the frontend without CRA itself starting on that port number, e.g. set PORT=9000 in the frontend .env but have CRA load on port 3000?
I've tried defining the port number in the script, while making sure that I've defined PORT=9000 in the frontend env:
"scripts": {
"start": "export PORT=3000 && react-scripts start",
CRA will load at 3000, but I get a proxy error:
Heroku doesn't let you choose your port; instead, it allocates a port for your app and exposes it as an environment variable. Read more here
Each web process simply binds to a port, and listens for requests coming in on that port. The port to bind to is assigned by Heroku as the PORT environment variable.
Remove all hardcoded PORT variables
It's not ideal to use $PORT in your package.json file, as you cannot add logic to it. In your Node.js app, read the port variable like so:
const PORT = process.env.PORT || 3000
This sets the port to whatever is in the PORT environment variable and, if it is not set, defaults to 3000.
It is not efficient to serve a production app with CRA
Don't run two servers for React and Node.js; instead, use your Node.js app to serve a production-built React app:
const express = require('express')
const path = require('path')
const app = express()
// All your other routes go here
app.use('/', express.static(path.join(__dirname, 'client/build'))) // this must be the last one
const PORT = process.env.PORT || 3000
app.listen(PORT, () => console.log(`Listening on port ${PORT}`))
NOTE: This is assuming your react app is built inside client/build relative to your project root
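If the React app uses client-side routing, a catch-all that returns index.html for paths Express doesn't handle is commonly placed after the static middleware. A minimal sketch (not part of the original answer; it reuses the client/build path assumed above):
// fall back to the React index.html so client-side routes survive a refresh
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client/build', 'index.html'))
})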
The proxy setting is only for development convenience and will not work if the app is not served by CRA
Make Heroku build your React app at build time with:
npm --prefix client run build # or if you use yarn
yarn --cwd client build
in your outer package.json file's build script
Your start script is going to run your Node.js server:
"scripts": {
"start": "node src/server.js",
"build": "npm --prefix client run build"
}
Don't commit your .env files to Heroku; instead, set environment variables directly using heroku config:set KEY=VALUE if you have the Heroku CLI, or use the dashboard settings.
NOTE: Do this before pushing your code so these variables are accessible during the build of the React app.
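For example, with the Heroku CLI (the variable name here is only an illustration, not something from the question):
heroku config:set REACT_APP_API_URL=https://your-api.example.com
CRA only embeds variables prefixed with REACT_APP_ into the build, so setting them before the build step is what makes them visible to the frontend.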
I created a .NET 5 ASP.NET web application with the Dockerfile below.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /source
# copy csproj and restore as distinct layers
COPY *.csproj .
RUN dotnet restore
# copy and publish app and libraries
COPY . .
RUN dotnet publish -c release -o /app --no-restore
# final stage/image
FROM mcr.microsoft.com/dotnet/runtime:5.0
EXPOSE 3000
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["./TestNET5"]
I added the variables PORT and WEBSITES_PORT with a value of 3000 in the Configuration settings of the Azure Web App, and I also added -e environment=Production -e ASPNETCORE_ENVIRONMENT=Production, but I am still getting the error below.
Container xxx didn't respond to HTTP pings on port: 3000, failing site start. See container logs for debugging.
Is there something I'm missing here? I have already checked several articles but couldn't find a solution.
If everything has been done as you said, then the likely reason is that your image itself does not work correctly.
Below I will list all the things you have to do to deploy your application image to Azure App Service:
Make sure the image works correctly locally.
Set the environment variable WEBSITES_PORT if the port the container exposes is not 80 or 443; here its value is 3000 (see the example after this list).
Set the environment variables for the Docker registry to authenticate if it is private, for example if you push the image to ACR.
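As an example of the second point, WEBSITES_PORT can also be set from the Azure CLI; a sketch where the app and resource group names are placeholders:
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings WEBSITES_PORT=3000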
Following this
Setup:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-19T22:12:47Z", GoVersion:"go1.12.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.7-gke.10", GitCommit:"8d9b8641e72cf7c96efa61421e87f96387242ba1", GitTreeState:"clean", BuildDate:"2019-04-12T22:59:24Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
knative-serving & Istio are v0.5.2
cert-manager is v0.7, applied with --validate=false as k8s is 1.12.7
Cert-manager ClusterIssuer status:
status:
  conditions:
  - lastTransitionTime: "2019-04-29T21:29:40Z"
    message: Certificate issuance in progress. Temporary certificate issued.
    reason: TemporaryCertificate
    status: "False"
    type: Ready
I have done as in the documentation, but setting up Google DNS is not described there.
I have manually created a DNS zone in the Google Cloud DNS console.
My domain is pointing at the nameservers and I can ping the right server IP address.
When creating the zone I added a record set:
*.mydomain.com. A 300 x.x.x.x
Note: I also tried without the " * ".
I have seen here that they talk about setting a TXT record?
Do you know how to make this (cert-manager & TLS) work?
First, look at the logs emitted by the cert-manager pod: kubectl logs -n <namespace> pod/<pod-name>.
Cert manager will tell you why the challenge is failing.
One common reason is rate limiting by Let's Encrypt, in which case you have to wait for up to 7 days.
You can also view this same issue on github https://github.com/jetstack/cert-manager/issues/1745
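Beyond the pod logs, the status of the issuer and certificate resources usually shows where issuance is stuck; a quick sketch, with the resource names and namespace as placeholders:
kubectl describe clusterissuer <issuer-name>
kubectl describe certificate <certificate-name> -n <namespace>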
I have this Dockerfile:
FROM node:argon
ENV http_proxy http://user:pass@proxy.company.priv:3128
ENV https_proxy https://user:pass@proxy.company.priv:3128
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
But I get this error in the npm install step:
npm info it worked if it ends with ok
npm info using npm@2.14.12
npm info using node@v4.2.6
npm WARN package.json deployer-ui@1.0.0 No description
npm WARN package.json deployer-ui@1.0.0 No repository field.
npm WARN package.json deployer-ui@1.0.0 No README data
npm info preinstall deployer-ui@1.0.0
npm info attempt registry request try #1 at 7:09:23 AM
npm http request GET https://registry.npmjs.org/body-parser
npm info attempt registry request try #1 at 7:09:23 AM
npm http request GET https://registry.npmjs.org/express
npm info retry will retry, error on last attempt: Error: tunneling socket could not be established, cause=write EPROTO
npm info retry will retry, error on last attempt: Error: tunneling socket could not be established, cause=write EPROTO
I guess it is due to the proxy. I have also tried adding
RUN npm config set proxy http://user:pass@proxy.company.priv:3128
RUN npm config set https-proxy http://user:pass@proxy.company.priv:3128
but still getting the same error.
Moreover, in my file /etc/systemd/system/docker.service.d/http-proxy.conf I have this:
Environment="HTTP_PROXY=http://user:pass#proxy.company.priv:3128"
Environment="HTTPS_PROXY=https://user:pass#proxy.company.priv:3128"
Thanks in advance.
First, the https_proxy should use an http URL, not an https URL.
Second, you don't need to embed your proxy settings in your Dockerfile: you can use build-time variables:
docker build --build-arg HTTP_PROXY=http://user:pass@proxy.company.priv:3128 --build-arg HTTPS_PROXY=http://user:pass@proxy.company.priv:3128 .
Finally, proxy settings at the Docker service level allow the Docker daemon itself to pull images from the internet. They do not mean that the commands executed by RUN directives during docker build will benefit from them; hence the need to pass them as build-time environment variables.
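As a sketch of that approach, the question's Dockerfile works with no proxy lines at all, because HTTP_PROXY and HTTPS_PROXY are predefined build args that RUN steps pick up when passed with --build-arg:
FROM node:argon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
# npm picks up HTTP_PROXY / HTTPS_PROXY passed via --build-arg; nothing is hardcoded here
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]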
I also had the same issue and did not want to set any proxy information in my image, as I did not want to be dependent on my company environment.
My solution was to run cntlm in gateway mode. To do so, I set the Gateway flag to yes and added the following allow rules in my cntlm configuration file:
Gateway yes
# Allow local
Allow 127.0.0.1
# Allow docker subnetwork
Allow 172.17.0.0/16
Then I was able to build my Dockerfile using the docker0 interface address (obtained with the ifconfig command):
docker build -t my-image --build-arg HTTP_PROXY=http://172.17.0.1:3128 --build-arg HTTPS_PROXY=http://172.17.0.1:3128 .
Same with docker run:
docker run --rm -e HTTP_PROXY=http://172.17.0.1:3128 -e HTTPS_PROXY=http://172.17.0.1:3128 my-image
However, please note that since Docker 17.07 you can simply configure the proxy on the Docker client.
Hence your ~/.docker/config.json will look like this:
{
  "proxies": {
    "default": {
      "httpProxy": "http://172.17.0.1:3128/",
      "httpsProxy": "http://172.17.0.1:3128/",
      "noProxy": "127.0.0.1,172.17.0.0/16,*.some.compagny.domain"
    }
  }
}
Adding this to the Dockerfile worked for me:
RUN npm config set https-proxy http://user:password@proxy.company.priv:80
RUN npm config set proxy http://user:password@proxy.company.priv:80
As described in the Docker documentation, adding the following to ~/.docker/config.json helped me:
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:3001",
"httpsProxy": "http://127.0.0.1:3001",
"noProxy": "*.test.example.com,.example2.com"
}
}
}
(Just so you know, this package was written by me.)
You can use docker-container-proxy, which allows configuring a proxy for any Docker container without editing any code.
Just run:
npx dockerproxy start --address company-proxy-address.com --port 8080
# Do anything else that needs a Proxy
I have a few directories with different Mercurial histories that I am working on in parallel. They all have the same Vagrantfile so it would be natural to use just one instance for all of them.
But when I run "vagrant up" in a new directory, it starts over: importing the existing box, setting up the environment, and so on.
How do I share the Vagrant instance between different directories?
UPDATE: my directory structure:
\
├── Vagrantfile
├── puppet
│   └── *.pp
├── support
│   ├── nginx.conf
│   └── uwsgi.development.ini
└── other_repo_related_files_and_dirs
Well, if you want to share some directories with the same Vagrant instance, you can configure that in the Vagrantfile.
This is an example with two VMs (app and web), using the same box (ubuntu-12.04) and the same Vagrantfile. There are two folders, one folder per VM.
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.define 'app' do |app_config|
app_config.vm.box = 'ubuntu-12.04'
app_config.vm.host_name = 'app'
app_config.vm.network "private_network", ip: "192.168.33.33"
app_config.vm.synced_folder "app_config", "/app_config"
end
config.vm.define 'web' do |web_config|
web_config.vm.box = 'ubuntu-12.04'
web_config.vm.host_name = 'web'
web_config.vm.network "private_network", ip: "192.168.33.34"
web_config.vm.synced_folder "web_config", "/web_config"
end
end
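Each machine is then brought up and managed by name with the usual commands:
vagrant up app
vagrant up web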
The app machine has an app_config folder and the web machine has a web_config folder (these folders are at the same level as the Vagrantfile).
When you enter each VM with the vagrant ssh command, you can see its folder.
This is inside the app machine:
roberto@rcisla-pc:~/Desktop/multiple$ vagrant ssh app
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Mon Jan 27 13:46:36 2014 from 10.0.2.2
vagrant@app:~$ cd /app_config/
vagrant@app:/app_config$ ls
app_config_file
This is inside the web machine:
roberto@rcisla-pc:~/Desktop/multiple$ vagrant ssh web
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
* Documentation: https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Mon Jan 27 13:47:12 2014 from 10.0.2.2
vagrant@web:~$ cd /web_config/
vagrant@web:/web_config$ ls
web_config_file
vagrant@web:/web_config$
And this is the structure for my directory.
.
├── app_config
│ └── app_config_file
├── attributes
├── Berksfile
├── Berksfile.lock
├── chefignore
├── definitions
├── files
│ └── default
├── Gemfile
├── libraries
├── LICENSE
├── metadata.rb
├── providers
├── README.md
├── recipes
│ └── default.rb
├── resources
├── templates
│ └── default
├── test
│ └── integration
│ └── default
├── Thorfile
├── Vagrantfile
├── Vagrantfile~
└── web_config
    └── web_config_file
I hope this helps you.
Just thinking out loud here. Not sure if it's a solution that meets your demands.
If you set up a directory structure like this:
/Main
/projects
/mercurial_history_1
/mercurial_history_2
/mercurial_history_3
/puppet
/modules
/manifests
default.pp
Vagrantfile
I'm not sure what kind of projects you are running, but if you are running an Apache web server, for example, you could specify a separate vhost for every Mercurial project inside the VM and point each DocumentRoot at the specific Mercurial project.
For this solution you have to add the following line to the Vagrantfile:
config.vm.network "private_network", ip: "22.22.22.11" <- Just an example IP
Then on your host machine you can update the hosts file with the IP and the corresponding vhost server names. It's a little bit more work, but you can add vhosts using a provisioner to make life easier ;)
This way you only have one VM running that serves all your Mercurial projects.
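A minimal sketch of what that could look like (the host names and paths are illustrative, not from the question; the IP is the example one above):
# /etc/hosts on the host machine
22.22.22.11  mercurial_history_1.local mercurial_history_2.local

# Apache vhost inside the VM, one per Mercurial checkout
<VirtualHost *:80>
    ServerName mercurial_history_1.local
    DocumentRoot /vagrant/projects/mercurial_history_1
</VirtualHost>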
I'm trying to start Elasticsearch in a cluster with 2 nodes.
I run the command:
service elasticsearch start
Then I run 2 instances of elasticsearch in order to join the cluster with the command:
/bin/elasticsearch
But when I check the head plugin at localhost:2900/_plugin/head/ I get a cluster health status of Yellow, and the nodes didn't join the cluster.
How can I configure the two nodes to make them join the cluster?
Thanks
EDIT:
This is what I get:
root@vmi17663:~# curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true'
{
  "ok" : true,
  "cluster_name" : "nearCluster",
  "nodes" : {
    "aHUjm3SjQa6MbRoWCnL4pQ" : {
      "name" : "Primary node",
      "transport_address" : "inet[/ip@dress:9300]",
      "hostname" : "HOSTNAME",
      "version" : "0.90.5",
      "http_address" : "inet[/ip@dress:9200]"
    }
  }
}
root@vmi17663:~# curl -XGET 'http://localhost:9201/_cluster/nodes?pretty=true'
{
  "ok" : true,
  "cluster_name" : "nearCluster",
  "nodes" : {
    "pz7dfIABSbKRc92xYCbtgQ" : {
      "name" : "Second Node",
      "transport_address" : "inet[/ip@dress:9301]",
      "hostname" : "HOSTNAME",
      "version" : "0.90.5",
      "http_address" : "inet[/ip@dress:9201]"
    }
  }
}
I made it work!
As expected, it was an iptables problem. I added this rule:
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
and everything went smoothly.
Make sure you have different elasticsearch.yml files for each node.
Make sure each is configured to join the same cluster via cluster.name: "mycluster"
You can start additional nodes (new JVM processes) off the same code install like this:
<es home>/bin/elasticsearch -d -Des.config=<wherever>/elasticsearch-1/config/elasticsearch.yml
<es home>/bin/elasticsearch -d -Des.config=<wherever>/elasticsearch-2/config/elasticsearch.yml
My setup looks like this:
elasticsearch-1.0.0.RC1
├── LICENSE.txt
├── NOTICE.txt
├── README.textile
├── bin
├── config
├── data
├── lib
├── logs
└── plugins
elasticsearch-2
├── config
├── data
├── logs
├── run
└── work
elasticsearch-3
├── config
├── data
├── logs
├── run
└── work
elasticsearch-1
├── config
├── data
├── logs
├── run
└── work
I start all three with aliases like this:
alias startes1='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-1/config/elasticsearch.yml'
alias startes2='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-2/config/elasticsearch.yml'
alias startes3='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-3/config/elasticsearch.yml'
If your nodes don't join, then you need to check your cluster.name setting and make sure that the nodes can communicate with each other via port 9300 (9200 is for incoming HTTP traffic, and 9300 is for node-to-node traffic).
So, as @mcolin mentioned, make sure your cluster name is the same for each node. To do so, open your /etc/elasticsearch/elasticsearch.yml file on your 1st server, find the line that says "cluster.name", and note what it is set to. Then go to your other servers and make sure they are set to the exact same thing.
To do this, you could run this command:
sudo vim /etc/elasticsearch/elasticsearch.yml
and set the following line to be something like:
cluster.name: my_cluster_name
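If the nodes still don't see each other (for example because multicast is blocked, as it was in the asker's iptables case), a unicast discovery sketch for the same file might look like this (host names are placeholders):
cluster.name: mycluster
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1.example.com:9300", "node2.example.com:9300"]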
Additionally, your nodes might not be able to talk to each other. My nodes are running on AWS, so I went to my EC2 panel and made sure my instances were in the same security group. Then I set my security group to allow all instances within it to talk to each other by creating a rule like this:
Custom TCP Rule TCP 9300 dev-elasticsearch
(or to be wild and dangerous, set this:)
All traffic All All dev-elasticsearch
Within a minute of setting this I checked my cluster status and all was well:
curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'