Knative pod http request - go

When I make a request to this server: https://gist.github.com/Rasarts/1180479de480d7e36d6d7aef08babe59#file-server
I get the right response:
{
    "args": {},
    "headers": {
        "Accept-Encoding": "gzip",
        "Connection": "close",
        "Host": "httpbin.org",
        "User-Agent": "Go-http-client/1.1"
    },
    "origin": "",
    "url": "https://httpbin.org/get"
}
But when I make the same request from that server running on minikube, which was created this way:
https://gist.github.com/Rasarts/1180479de480d7e36d6d7aef08babe59#file-serve-yaml
I get an error:
ERROR: Get https://httpbin.org/get: EOF<nil>
How can I make HTTP requests from a Kubernetes pod?

Knative uses Istio, and Istio by default doesn't allow outbound traffic to external hosts such as httpbin.org. That's why your request is failing.
Follow this document to learn how to configure Knative (so that it configures Istio correctly) to make outbound connections. Or you can configure Istio directly by adding an egress policy: https://istio.io/docs/tasks/traffic-management/egress/
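If you go the direct route, a minimal sketch of such an egress policy for this case might look like the following; this assumes an Istio version that whitelists external hosts with ServiceEntry resources, and the resource name httpbin-ext is an illustrative choice, not something from the question:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext   # hypothetical name
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443       # the question uses https://httpbin.org/get
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL

Apply it with kubectl apply -f, and outbound requests to httpbin.org from the pod should stop failing.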

Related

Docker on Windows 10 networking issue

I'm using Docker on a Windows 10 laptop. I recently tried to get some code running in a container to connect to another server on the network. I ended up making an Ubuntu container and found the issue is an IP conflict between the Docker network and the server resource (172.17.1.3).
There appears to be an additional layer of networking in the Windows Docker setup which isn't present on Unix systems, and the Docker advice to "simply use a bridge network" doesn't resolve this issue.
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "d60dd1169153e8299a7039e798d9c313f860e33af1b604d05566da0396e5db19",
        "Created": "2020-02-28T15:24:32.531675705Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Is it possible to change the subnet/gateway to avoid the IP conflict? If so, how? I tried the simple thing and made a new Docker network:
docker network create --driver=bridge --subnet=172.15.0.0/28 --gateway=172.15.0.1 new_subnet_1
There still appears to be a conflict somewhere: I can reach other devices, just nothing in 172.17.0.0/16. I'm guessing it's somewhere in the Hyper-V, vEthernet adapter, or vSwitch.
UPDATE 1
I took a look at Wireshark (at the PC level) with the new_subnet_1 network and I did not see these packets leave the vSwitch interface or the PC's NIC.
I did see a Docker forum post indicating an issue with Hyper-V and the vSwitch that could be the cause.
Docker Engine v19.03.5
DockerDesktopVM created by Docker for Windows install
UPDATE 2
After several Hyper-V edits and putting the environment back together, I checked the DockerDesktopVM. After getting in from a privileged container I found that the docker0 network had the IP conflict. docker0 appears to be the same default bridge network that I was avoiding; because it is a pre-defined network it cannot be removed, and all my traffic was being sent to it.
After several offshoots, and breaking my environment at least once, I found that the solution was easier than I had thought.
Turned off the Docker Desktop Service
Added the following line to the %userprofile%\.docker\daemon.json file in Windows 10:
...
"bip": "172.15.1.6/24"   <-- new non-conflicting range
}
Restarted the Docker Desktop Service
Easy solution after chasing options in Hyper-V and the Docker Host Linux VM.
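For reference, a complete daemon.json after this change might look like the sketch below; the surrounding keys are illustrative defaults from a stock Docker Desktop install, and only the "bip" entry matters for this fix:

{
    "registry-mirrors": [],
    "insecure-registries": [],
    "debug": true,
    "experimental": false,
    "bip": "172.15.1.6/24"
}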

How do you use LetsEncrypt onHostRule with the Consul Catalog backend?

I was able to get onDomain working, but someone in the Slack channel stated that onDomain is deprecated in Traefik, though there is no mention of this deprecation in the Traefik documentation.
[edit]
There is a reference to this deprecation here: https://github.com/containous/traefik/issues/2212
I am using the Consul catalog backend with host rules for my services, set with tags, for example:
{
    "service": {
        "name": "application-java",
        "tags": ["application-java", "env-SUBDOMAIN", "traefik.tags=loadbalanced", "traefik.frontend.rule=Host:SUBDOMAIN.domain.com"],
        "address": "",
        "port": 8080,
        "enable_tag_override": false,
        "checks": [{
            "http": "http://localhost:8080/api/health",
            "interval": "10s"
        }]
    }
}
However, no certificate is generated for SUBDOMAIN.domain.com - requests just use the TRAEFIK DEFAULT CERT.
What is the recommended method for getting Traefik to generate certificates for Consul catalog services automatically?
It looks like this might only work with the frontEndRule option in the main config, rather than with the "traefik.frontend.rule" override tag.
I added this line:
frontEndRule = "Host:{{getTag \"traefik.subdomain\" .Attributes .ServiceName }}.{{.Domain}}"
and this Consul catalog tag:
traefik.subdomain=SUBDOMAIN
and I'm now getting the Fake certificate from the LE staging server.
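For anyone else hitting this, here is a sketch of the relevant traefik.toml fragments for this approach, using Traefik 1.x syntax; the endpoint, domain, and email values are placeholders I've made up, not values from the question:

# Hypothetical traefik.toml fragment (Traefik 1.x)
[consulCatalog]
endpoint = "127.0.0.1:8500"
domain = "domain.com"
exposedByDefault = false
frontEndRule = "Host:{{getTag \"traefik.subdomain\" .Attributes .ServiceName}}.{{.Domain}}"

[acme]
email = "admin@domain.com"
storage = "acme.json"
entryPoint = "https"
# onHostRule asks Traefik to request a certificate for each frontend host rule
onHostRule = true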

Generating -bind parameter in Consul JSON files for use with Marathon

I am working on launching Consul containers on Docker with Marathon, and I've run into a somewhat subjective issue regarding creating the JSON files.
Currently I plan to launch containers with JSON files in this format:
server-1.json
{
    "id": "consul-server-2",
    "cmd": "consul agent -server -client=0.0.0.0 -ui -bind=100.10.30.40 -retry-join=server-1.local -data-dir=/tmp/consul",
    "cpus": 1,
    "mem": 512.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "consul:latest",
            "name": "dev-consul",
            "network": "HOST"
        }
    },
    "constraints": [
        [
            "hostname",
            "CLUSTER",
            "server-1.local"
        ]
    ]
}
I need to be able to change the -bind address in each JSON file, and I was planning on using heredocs with Bash, but I am not sure if there are better practices as far as ease of maintainability for creating these types of files.
Ideally I would have liked a field in Consul or Marathon which could automatically give me the IP address of a specific port to feed to -bind, but because I have multiple private IPs it seems I need to configure it manually.
It sounds like you have a configuration management issue.
If I understand you correctly you have a number of servers in an internal network where each has an internal IP address and you now want to generate the right service files for each server.
Typically you would use a configuration management system such as Ansible, Chef, or Puppet to solve this.
Personally I can recommend Ansible since it is easy to get started with and has low overhead.
To solve your problem you would then first create an inventory file with the IP addresses of your servers and then create a Jinja2 template for your service files.
You can then use the correct IP address for each server in that template and finally deploy all the files with Ansible.
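A minimal sketch of that workflow, with hypothetical file names and a per-host bind_ip variable (none of these names come from the question):

inventory.ini:
[consul_servers]
server-1.local bind_ip=100.10.30.40
server-2.local bind_ip=100.10.30.41

consul-server.json.j2 (only the templated line shown; the rest matches the JSON above):
"cmd": "consul agent -server -client=0.0.0.0 -ui -bind={{ bind_ip }} -retry-join=server-1.local -data-dir=/tmp/consul",

playbook.yml:
- hosts: consul_servers
  tasks:
    - name: Render a Marathon app definition per server
      template:
        src: consul-server.json.j2
        dest: "/tmp/consul-{{ inventory_hostname }}.json"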
Look at Consul 0.7.2 or newer. There is a soon-to-be-documented feature in Consul that allows runtime configuration of IP addresses. I wouldn't recommend running Consul in a container unless running with net=host, but adapting the configuration snippet from the question:
{
    "id": "consul-server-2",
    "cmd": "consul agent -server -client='{{ GetPrivateIP }}' -ui -bind=100.10.30.40 -retry-join=server-1.local -data-dir=/tmp/consul",
    "cpus": 1,
    "mem": 512.0,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "consul:latest",
            "name": "dev-consul",
            "network": "HOST"
        }
    },
    "constraints": [
        [
            "hostname",
            "CLUSTER",
            "server-1.local"
        ]
    ]
}
Other options for which address to use can be explored based on the hashicorp/go-sockaddr package.
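For illustration, here is a minimal Go sketch of what that package resolves at runtime; it mirrors what the {{ GetPrivateIP }} template above evaluates to, with error handling kept deliberately short:

package main

import (
	"fmt"

	sockaddr "github.com/hashicorp/go-sockaddr"
)

func main() {
	// GetPrivateIP returns the first private IP address found
	// on the host's network interfaces.
	ip, err := sockaddr.GetPrivateIP()
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}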

EC2 instance not getting pinged

I have an EC2 instance running which is linked to an Elastic IP.
When I ping it from my local machine it shows a request timeout, because of which I am not able to connect to it via PuTTY or WinSCP.
I have been facing this issue for the last 2 days.
It was working well for the last 2 months.
Please help.
My instance is running and healthy.
If you want to ping an EC2 instance from your local machine you need to allow inbound Internet Control Message Protocol (ICMP) traffic. Please check your Security Groups to make sure this is allowed. Remember that all inbound traffic is disabled by default. You may need to establish a rule similar to this one (CloudFormation JSON format):
"AllowIngressICMP": {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties": {
"GroupId": <Your Security Group here>,
"IpProtocol": "icmp",
"FromPort": "-I",
"ToPort": "-I",
"CidrIp": "0.0.0.0/0"
** The -I means "every port"
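If you manage the security group by hand rather than through CloudFormation, the equivalent AWS CLI call looks roughly like this (the group ID is a placeholder; substitute your own):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol icmp --port -1 --cidr 0.0.0.0/0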

Crossbar SSL/TLS configuration with intermediate and cross-signed certificates

Using the latest version of Crossbar (0.13, installed from apt-get on Ubuntu 14.04) I am having trouble making connections using SSL and intermediate certificates.
If I set up the server without a ca_certificates property in the tls key, the server runs fine and connections can be made from Google Chrome via the wss protocol. However, trying to make a connection using Thruway fails with the following error:
Could not connect: Unable to complete SSL/TLS handshake: stream_socket_enable_crypto(): SSL operation failed with code 1. OpenSSL Error messages: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
Having spoken with the Thruway team, this seems to be a certificate issue: on our live site we use an intermediate and cross-signed certificate from Gandi, which is needed for some browsers and therefore for some OpenSSL implementations.
It seems that whilst browsers are happy to make a TLS connection with just a key and cert, Thruway requires a chain. However, the configuration below, using the two certificates provided by Gandi, does not work for either Chrome or Thruway. Chrome shows the error:
failed: WebSocket opening handshake was canceled
when using the .crossbar/config.json file below. So, is this a problem with my config, with my certificates, or with some other part of the OpenSSL stack?
(The file below has been altered to remove any potentially sensitive information, so it may appear like it wouldn't work for other reasons. If the connection works, the underlying auth and other components work fine, so please keep answers/comments to the TLS implementation. The comments are not valid JSON but are included so readers can see the certificate files in use.)
{
    "version": 2,
    "controller": {},
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "test",
                    "roles": [
                        {
                            "name": "web",
                            "authorizer": "test.utils.permissions",
                            "disclose": {
                                "caller": true,
                                "publisher": true
                            }
                        },
                        {
                            "name": "no",
                            "permissions": []
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "websocket",
                    "endpoint": {
                        "type": "tcp",
                        "port": 9001,
                        "interface": "127.0.0.1"
                    },
                    "auth": {
                        "wampcra": {
                            "type": "static",
                            "users": {
                                "authenticator": {
                                    "secret": "authenticator-REDACTED",
                                    "role": "authenticator"
                                }
                            }
                        }
                    }
                },
                {
                    "type": "web",
                    "endpoint": {
                        "type": "tcp",
                        "port": 8089,
                        "tls": {
                            "key": "../ssl/key.pem",
                            "certificate": "../ssl/cert.pem",
                            "ca_certificates": [
                                "../ssl/gandi.pem", // https://www.gandi.net/static/CAs/GandiProSSLCA2.pem
                                "../ssl/gandi-cross-signed.pem" // https://wiki.gandi.net/en/ssl/intermediate#comodo_cross-signed_certificate
                            ],
                            "dhparam": "../ssl/dhparam.pem"
                        }
                    },
                    "paths": {
                        "/": {
                            "type": "static",
                            "directory": "../web"
                        },
                        "ws": {
                            "type": "websocket",
                            "url": "wss://OUR-DOMAIN.com:8089/ws",
                            "auth": {
                                "wampcra": {
                                    "type": "dynamic",
                                    "authenticator": "test.utils.authenticate"
                                }
                            }
                        }
                    }
                }
            ]
        },
        {
            "type": "guest",
            "executable": "/usr/bin/env",
            "arguments": [
                "php",
                "../test.php",
                "ws://127.0.0.1:9001",
                "test",
                "authenticator",
                "authenticator-REDACTED"
            ]
        }
    ]
}
There are other questions which address similar issues:
This one deals with the fact that any TLS error terminates a WSS connection with no useful error.
This one deals specifically with the handshake cancellation, but in their case it was an improperly configured library used in compilation, which isn't relevant here since Crossbar was installed from apt-get.
This is not an issue with Crossbar. It appears to be a problem with the WAMP client, Thruway. Davidwdan, the owner of the Thruway GitHub repo, says:
"Thruway's Ratchet transport provider does not directly support SSL. You'll need to put some sort of proxy in front of it."
You can find more of what Davidwdan and others have to say about this here: https://github.com/voryx/Thruway/issues/163.
Now to get to the solution. Mind you, the following is only for Apache users. If you are running on Nginx the idea is pretty much the same.
A couple things to note before we get started.
Follow Crossbar's tutorial for the install! Don't try to do it yourself! There is more to setting up Crossbar than meets the eye. The fine folks over at Crossbar have laid out detailed instructions just for you: https://crossbar.io/docs/Installation/.
For this example, I have Crossbar and Apache running on the same machine, although this is not a requirement and does not matter!
The first thing you want to do is create a new virtual host. I chose port 4043 for it, but you can choose whatever you like. This virtual host is going to serve every WAMP library that does NOT have an issue connecting via wss:// (with an SSL). Here is a full list of WAMP clients: http://wamp-proto.org/implementations/. Make sure the ProxyPass and ProxyPassReverse directives have the IP address pointing to the machine that the Crossbar router runs on; in my case, since Apache and Crossbar run on the same machine, I just use 127.0.0.1. Also make sure the port used in those two directives is exactly the same as the port you defined in your .crossbar/config.json! You will also need an SSL certificate set up on this virtual host, which you can see I have added below the proxy directives.
Listen 4043
<VirtualHost *:4043>
    ServerName example.org
    ProxyRequests off
    SSLProxyEngine on
    ProxyPass /ws/ ws://127.0.0.1:8000/
    ProxyPassReverse /ws/ ws://127.0.0.1:8000/
    ## Custom fragment
    SSLEngine on
    SSLCertificateFile /path/to/server_cert.pem
    SSLCertificateKeyFile /path/to/server_key.pem
    SSLCertificateChainFile /path/to/server_ca.pem
</VirtualHost>
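As noted above, the idea is the same on Nginx; a rough, untested equivalent of that virtual host (same port, same backend, and the same placeholder certificate paths) would be:

server {
    listen 4043 ssl;
    server_name example.org;

    ssl_certificate     /path/to/server_cert.pem;
    ssl_certificate_key /path/to/server_key.pem;

    location /ws/ {
        # Terminate TLS here and forward plain ws:// to the Crossbar router,
        # passing the Upgrade headers through for the WebSocket handshake.
        proxy_pass http://127.0.0.1:8000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}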
Next, make sure your Crossbar router is NOT set up with an SSL! This is super important. Thruway or any other library that is NOT able to connect via SSL WON'T be able to use the router if you have it configured to use an SSL! Below is a working Crossbar config.json file that you can use.
{
    "version": 2,
    "controller": {},
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "production_realm",
                    "roles": [
                        {
                            "name": "production_role",
                            "permissions": [
                                {
                                    "uri": "",
                                    "match": "prefix",
                                    "allow": {
                                        "call": true,
                                        "register": true,
                                        "publish": true,
                                        "subscribe": true
                                    }
                                }
                            ]
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "websocket",
                    "endpoint": {
                        "type": "tcp",
                        "port": 8000
                    },
                    "options": {
                        "allowed_origins": ["http://*", "https://*"]
                    },
                    "auth": {
                        "ticket": {
                            "type": "static",
                            "principals": {
                                "production_user": {
                                    "ticket": "tSjlwueuireladgte",
                                    "role": "production_role"
                                }
                            }
                        }
                    }
                }
            ]
        }
    ]
}
Notice how the port number defined above matches the port number defined in the virtual host.
./crossbar/config.json:
"endpoint": {
"type": "tcp",
"port": 8000
},
virtual host:
ProxyPass /ws/ ws://127.0.0.1:8000/
ProxyPassReverse /ws/ ws://127.0.0.1:8000/
Also, if you read other tutorials, some people will tell you to use the ProxyPreserveHost directive in your virtual host file. DON'T LISTEN TO THEM! This will produce lots of unexpected results. When this directive is enabled, it passes the Host: line from the incoming request to the proxied host, instead of the hostname specified in the ProxyPass line. Even Apache says to stay away from this directive: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost. If you do have it enabled you will receive an error similar to the one below:
failing WebSocket opening handshake ('missing port in HTTP Host header 'example.org' and server runs on non-standard port 8000 (wss = False)')
Last but not least, make sure all of the following Apache libraries are installed and enabled. On recent Apache installations all of the following libraries come installed by default and just need to be enabled:
$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
$ sudo a2enmod proxy_balancer
$ sudo a2enmod lbmethod_byrequests
$ sudo a2enmod proxy_wstunnel
Make sure you open up whichever port your virtual host file is listening on and whichever port your crossbar router is listening on. In my case:
$ sudo ufw allow 4043
$ sudo ufw allow 8000
And finally restart Apache so all your changes can take effect.
$ sudo service apache2 restart
Last but not least I want to give a quick explanation of why all of this has to be done:
When you have an SSL certificate setup on your server the browser will throw an error when trying to connect to any WAMP router without using wss://.
Normally the solution to this would be to configure your WAMP router to use the SSL certificate that is already set up on your server.
The only issue with this is that Thruway.php (the only good php client I know that works with WAMP) does not play well with wss://. Even the creators of Thruway.php on GitHub say it doesn’t work.
The solution to this issue is to use a reverse proxy.
First you need to set up your WAMP router and make sure it is not using an SSL certificate.
Next you need to setup a reverse proxy so wss:// requests get converted to ws://. This will allow your browser to connect to the WAMP router without complaining.
Since the WAMP router is not set up to use an SSL, Thruway.php will work fine as well!
And well.... That's all folks! I know I needed to give a detailed answer to this question because it took me 5 days to figure all of this out!
@Tay-Bae's answer was already very useful, but it wasn't working for me: the client was getting a 200 OK response. All I needed to do was forward WSS traffic to my internal WS client which doesn't support WSS (Thruway).
After looking in the forums, I stumbled upon this answer: https://serverfault.com/a/846936.
It adds a rewrite section which seems to be required to re-route the request. I thought ProxyPassReverse should do it, but it doesn't. So here's my working config:
Listen 4043
<VirtualHost *:4043>
    ServerName mydomain.net
    ProxyRequests off
    SSLProxyEngine on
    ProxyPass /ws/ ws://127.0.0.1:8080/
    ProxyPassReverse /ws/ ws://127.0.0.1:8080/
    ## Custom fragment
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/mydomain.net/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.net/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/mydomain.net/chain.pem
    <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
        RewriteCond %{HTTP:CONNECTION} Upgrade$ [NC]
        RewriteRule .* ws://localhost:8080%{REQUEST_URI} [P]
    </IfModule>
    LogLevel debug
    ErrorLog ${APACHE_LOG_DIR}/error_thruway.log
    CustomLog ${APACHE_LOG_DIR}/access_thruway.log combined
</VirtualHost>
