Using Finagle's ClientBuilder, how do I set the host externally? - proxy

I am building a simple proxy that points to another server. Everything works, but I need a way to set the hosts in a ClientBuilder externally, most likely via Docker or perhaps some sort of configuration file. Here is what I have:
import java.net.InetSocketAddress
import com.twitter.finagle.Service
import com.twitter.finagle.builder.{ServerBuilder, ClientBuilder}
import com.twitter.finagle.http.{Request, Http}
import com.twitter.util.Future
import org.jboss.netty.handler.codec.http._

object Proxy extends App {
  val client: Service[HttpRequest, HttpResponse] = {
    ClientBuilder()
      .codec(Http())
      .hosts("localhost:8888")
      .hostConnectionLimit(1)
      .build()
  }

  val server = {
    ServerBuilder()
      .codec(Http())
      .bindTo(new InetSocketAddress(8080))
      .name("TROGDOR")
      .build(client)
  }
}
If you know of a way to do this or have any ideas about it please let me know!

If you want to run this simple proxy in a Docker container and manage the target host IP dynamically, you can pass the target host through an environment variable and change your code like this:
import java.net.InetSocketAddress
import com.twitter.finagle.Service
import com.twitter.finagle.builder.{ServerBuilder, ClientBuilder}
import com.twitter.finagle.http.{Request, Http}
import com.twitter.util.Future
import org.jboss.netty.handler.codec.http._

object Proxy extends App {
  val target_host = sys.env.get("TARGET_HOST")

  val client: Service[HttpRequest, HttpResponse] = {
    ClientBuilder()
      .codec(Http())
      .hosts(target_host.getOrElse("127.0.0.1:8888"))
      .hostConnectionLimit(1)
      .build()
  }

  val server = {
    ServerBuilder()
      .codec(Http())
      .bindTo(new InetSocketAddress(8080))
      .name("TROGDOR")
      .build(client)
  }
}
This lets your code read the system environment variable TARGET_HOST. Once you have done this part, you can start your Docker container by adding the following parameter to your docker run command:
-e "TARGET_HOST=127.0.0.1:8090"
For example: docker run -e "TARGET_HOST=127.0.0.1:8090" <docker image> <docker command>
Note that you can change 127.0.0.1:8090 to your target host.
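To confirm the variable actually reaches the container before wiring it into the proxy, a quick sanity check (a sketch; it assumes the image ships a shell):

docker run --rm -e "TARGET_HOST=127.0.0.1:8090" <docker image> sh -c 'echo "TARGET_HOST is: $TARGET_HOST"'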

You need a file server.properties and put your configuration inside the file:
HOST=host:8888
Now have Docker write out your configuration on every startup with a docker-entrypoint bash script. Add the script and define the environment variables inside your Dockerfile:
ENV HOST=myhost
ENV PORT=myport
ADD docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["proxy"]
Write out your docker-entrypoint.sh:
#!/bin/bash -x
set -o errexit

cat > server.properties << EOF
HOST=${HOST}:${PORT}
EOF

if [ "$1" = 'proxy' ]; then
  launch server   # placeholder: replace with the command that actually starts your proxy
fi

exec "$@"
Launch Docker with your configuration and the command "proxy":
$ docker run -e "HOST=host" -e "PORT=port" image proxy
You can also use linking when you're not sure of your server container's IP address:
$ docker run -e "HOST=mylinkhost" -e "PORT=port" --link myservercontainer:mylinkhost image proxy

Related

How to export environment variable on remote host with GitlabCI

I'm using GitlabCI to deploy my Laravel applications.
I'm wondering how should I manage the .env file. As far as I've understood I just need to put the .env.example under version control and not the one with the real values.
I've set all the keys my app needs inside Gitlab Settings -> CI/CD -> Environment Variables and I can use them on the runner, for example to retrieve the SSH private key to connect to the remote host, but how should I deploy these variables to the remote host as well? Should I write them with bash in a "runtime generated" .env file and then copy it? Should I export them via ssh on the remote host? Which is the correct way to manage this?
If you're open to another solution, I propose using Fabric (a fabfile). I'll give you an example.
Create a .env.default with variables like:
DB_CONNECTION=mysql
DB_HOST=%(HOST)s
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=%(USER)s
DB_PASSWORD=%(PASSWORD)s
After installing Fabric, add a fabfile to your project directory:
from fabric.api import env, run, put

prod_env = {
    'name': 'prod',
    'user': 'user_ssh',
    'deploy_to': '/path_to_project',
    'hosts': ['ip_server'],
}

def set_config(env_config):
    for key in env_config:
        env[key] = env_config[key]

def prod():
    set_config(prod_env)

def deploy(password, host, user):
    run("cd %s && git pull -r" % env.deploy_to)  # interpolate the deploy path into the command
    process_template(".env.default", ".env", {'PASSWORD': password, 'HOST': host, 'USER': user})
    put(".env", "/path_to_project/.env")

def process_template(template, output, context):
    import os
    basename = os.path.basename(template)
    output = open(output, "w+b")
    text = None
    with open(template) as inputfile:
        text = inputfile.read()
    if context:
        text = text % context
    # print " processed \n : %s" % text
    output.write(text)
    output.close()
Now you can run the script from your local machine to test it:
fab prod deploy:password="pass",user="user",host="host"
It will deploy the project on your server; check that it processed the .env file correctly.
If that works, it's time for GitLab CI. This is an example .gitlab-ci.yml:
image: python:2.7

before_script:
  - pip install 'fabric<2.0'
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy_staging:
  type: deploy
  script:
    - fab prod deploy:password="$PASSWORD",user="$USER",host="$HOST"
  only:
    - master
$SSH_PRIVATE_KEY, $PASSWORD, $USER and $HOST are GitLab environment variables; the $SSH_PRIVATE_KEY you add should be a private key that has access to the server.
Hope I didn't miss a step.
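If you'd rather stick to plain bash in the GitLab job (the "runtime generated .env" idea from the question), a minimal sketch, assuming the DB_* values are defined as GitLab CI/CD variables and SSH access is set up as in the before_script above:

# generate .env on the runner from CI variables, then copy it to the host
cat > .env << EOF
DB_CONNECTION=mysql
DB_HOST=$DB_HOST
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=$DB_USERNAME
DB_PASSWORD=$DB_PASSWORD
EOF
scp .env user_ssh@ip_server:/path_to_project/.env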

How to connect Opentracing application to a remote Jaeger collector

I am using the Jaeger UI to display traces from my application. It works fine when both the application and Jaeger are running on the same server, but I need to run my Jaeger collector on a different server. I tried JAEGER_ENDPOINT, JAEGER_AGENT_HOST and JAEGER_AGENT_PORT, but it failed.
I don't know whether the values I set for these variables are wrong or not. Does this require any configuration settings inside the application code?
Can you provide me any documentation for this problem?
On server 2, install Jaeger:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
On server 1, set these environment variables (a sketch of exporting them follows the list):
JAEGER_SAMPLER_TYPE=probabilistic
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_MANAGER_HOST_PORT=(EnterServer2HostName):5778
JAEGER_REPORTER_LOG_SPANS=false
JAEGER_AGENT_HOST=(EnterServer2HostName)
JAEGER_AGENT_PORT=6831
JAEGER_REPORTER_FLUSH_INTERVAL=1000
JAEGER_REPORTER_MAX_QUEUE_SIZE=100
application-server-id=server-x
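These are ordinary environment variables, so export them in the shell (or service unit) that launches the application on server 1. A sketch, with (EnterServer2HostName) replaced by a hypothetical server2.example.com and a hypothetical your-app.jar:

export JAEGER_SAMPLER_TYPE=probabilistic
export JAEGER_SAMPLER_PARAM=1
export JAEGER_SAMPLER_MANAGER_HOST_PORT=server2.example.com:5778
export JAEGER_REPORTER_LOG_SPANS=false
export JAEGER_AGENT_HOST=server2.example.com
export JAEGER_AGENT_PORT=6831
export JAEGER_REPORTER_FLUSH_INTERVAL=1000
export JAEGER_REPORTER_MAX_QUEUE_SIZE=100
# application-server-id contains a dash, which is not a valid shell variable name,
# and the code below reads System.getProperty first - so pass it as a system property
java -Dapplication-server-id=server-x -jar your-app.jar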
Change the tracer registration code in the application on server 1 as below, so that it picks up the configuration from the environment variables:
@Produces
@Singleton
public static io.opentracing.Tracer jaegerTracer() {
    String serverInstanceId = System.getProperty("application-server-id");
    if (serverInstanceId == null) {
        serverInstanceId = System.getenv("application-server-id");
    }
    return new Configuration("ApplicationName" + (serverInstanceId != null && !serverInstanceId.isEmpty() ? "-" + serverInstanceId : ""),
            Configuration.SamplerConfiguration.fromEnv(),
            Configuration.ReporterConfiguration.fromEnv())
            .getTracer();
}
Hope this works!
Check this link for integrating Elasticsearch as the persistent storage backend, so that traces are not removed once the Jaeger instance is stopped:
How to configure Jaeger with elasticsearch?
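If spans still don't show up, it's worth confirming that server 2 is reachable from server 1 on the published ports. A rough check (the agent's span port 6831 is UDP and can't be probed reliably, so these hit the HTTP ports instead; server2.example.com is a placeholder):

curl -s "http://server2.example.com:5778/sampling?service=test"               # agent sampling endpoint
curl -s -o /dev/null -w "%{http_code}\n" "http://server2.example.com:16686/"  # Jaeger UI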
Specify "JAEGER_AGENT_HOST" and ensure "local_agent" is not specified in tracer config file.
Below is the working solution for Python
import os

os.environ['JAEGER_AGENT_HOST'] = "123.XXX.YYY.ZZZ"  # specify the remote Jaeger agent here
# os.environ['JAEGER_AGENT_PORT'] = "6831"  # optional, default: "6831"

from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        # ENSURE 'local_agent' is not specified
        # 'local_agent': {
        #     'reporting_host': "127.0.0.1",
        #     'reporting_port': 6831,
        # },
        'logging': True,
    },
    service_name="your-service-name-here",
)
# create tracer object here and voila!
Jaeger getting-started guide: https://www.jaegertracing.io/docs/1.33/getting-started/
Jaeger-Client features: https://www.jaegertracing.io/docs/1.33/client-features/
Flask-OpenTracing: https://github.com/opentracing-contrib/python-flask
OpenTelemetry-Python: https://opentelemetry.io/docs/instrumentation/python/getting-started/

Capture output of a shell script inside a docker container to a file using the docker sdk for python

I have a shell script inside my docker container called test.sh. I would like to pipe the output of this script to a file. I can do this with the docker exec command, or by logging into the shell (using docker run -it) and running ./test.sh > test.txt. However, I would like to know how the same result can be achieved using the docker sdk for python. This is my code so far:
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True, working_dir='/home/psr',
    volumes=['/home/psr/data'],
    host_config=client.create_host_config(binds={
        '/home/xxxx/data_generator/data/': {
            'bind': '/home/psr/data',
            'mode': 'rw',
        },
    })
)

client.start(container=container.get('Id'))

cmds = './test.sh > test.txt'
exe = client.exec_create(container=container.get('Id'), cmd=cmds, stdout=True)
exe_start = client.exec_start(exec_id=exe, stream=True)

for val in exe_start:
    print(val)
I am using the Low-Level API of the docker sdk. In case you know how to achieve the same result as above using the high level API, please let me know.
In case anyone else had the same problem, here is how I solved it. Please let me know in case you have a better solution.
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True,
    working_dir='/home/psr',
    volumes=['/home/psr/data'],
    host_config=client.create_host_config(binds={
        '/home/xxxx/data_generator/data/': {
            'bind': '/home/psr/data',
            'mode': 'rw',
        },
    })
)

client.start(container=container.get('Id'))

cmds = './test.sh'
exe = client.exec_create(container=container.get('Id'), cmd=cmds, stdout=True)
exe_start = client.exec_start(exec_id=exe, stream=True)

with open('path_to_host_directory/test.txt', 'wb') as f:  # wb: for binary output
    for val in exe_start:
        f.write(val)
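A note on why the original attempt failed: an exec command is not run through a shell, so the > in './test.sh > test.txt' was handed to test.sh as a literal argument instead of performing a redirect. If you want the file written inside the container rather than on the host, wrap the command in a shell yourself; the docker CLI equivalent (a sketch) is:

docker exec <container_id> sh -c './test.sh > test.txt'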

Jenkins Environment Variables in Groovy Init

I am building a Docker image of Jenkins, and have passed ENV variables to the jenkins.sh initialization file:
Dockerfile
...
COPY ./jenkins.sh /usr/local/bin/jenkins.sh
jenkins.sh
echo ENV: "$ENV"
echo CLUSTER: "$CLUSTER"
echo REGION: "$REGION"
When I run the image, these values print out perfectly, but I would like to use them in Groovy scripts during the initialization of Jenkins.
The following throws an error during start:
import java.util.Arrays
import java.util.logging.Logger
Logger logger = Logger.getLogger("ecs-cluster")
logger.info("Loading Archeus-Midwayer...")
import jenkins.model.*
instance = Jenkins.getInstance()
def env = System.getenv()
println(env['CLUSTER'])
Error
WARNING: Failed to run script file:/var/jenkins_home/init.groovy.d/init_ecs.groovy
groovy.lang.MissingPropertyException: No such property: CLUSTER for class: init_ecs
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:53)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:52)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:307)
How can I capture the environment variables present in jenkins.sh?
Thanks!
Check the env vars with:
def env = System.getenv()
env.each {
println it
}
Export the env vars in jenkins.sh.
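For System.getenv() in an init.groovy.d script to see the values, jenkins.sh must export them before it hands off to the Jenkins JVM. A minimal sketch of jenkins.sh (the war path matches the official image, but treat it as an assumption):

#!/bin/bash
echo ENV: "$ENV"
echo CLUSTER: "$CLUSTER"
echo REGION: "$REGION"

# export so the child Jenkins JVM (and thus System.getenv()) can see them
export ENV CLUSTER REGION

exec java -jar /usr/share/jenkins/jenkins.war "$@"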
See also Access to build environment variables from a groovy script in a Jenkins build step (Windows).

use shell script to detect inside docker container [duplicate]

[Updated1] I have a shell script that changes TCP kernel parameters in some functions, but now I need this script to run in a Docker container. That means the script needs to know that it is running inside a container and skip the kernel configuration.
Now I'm not sure how to achieve that, here is the contents of /proc/self/cgroup inside the container:
9:hugetlb:/
8:perf_event:/
7:blkio:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct:/
2:cpu:/docker/25ef774c390558ad8c4e9a8590b6a1956231aae404d6a7aba4dde320ff569b8b
1:cpuset:/
Are there any flags above that I can use to figure out if this process is running inside a container?
[Updated2]: I have also noticed Determining if a process runs inside lxc/Docker, but it doesn't seem to work in this case; the content of /proc/1/cgroup in my container is:
8:perf_event:/
7:blkio:/
6:freezer:/
5:devices:/
4:memory:/
3:cpuacct:/
2:cpu:/docker/25ef774c390558ad8c4e9a8590b6a1956231aae404d6a7aba4dde320ff569b8b
1:cpuset:/
No /lxc/containerid
Docker creates .dockerenv and .dockerinit (removed in v1.11) files at the top of the container's directory tree, so you might want to check if those exist.
Something like this should work.
#!/bin/bash
if [ -f /.dockerenv ]; then
    echo "I'm inside matrix ;(";
else
    echo "I'm living in real world!";
fi
Checking from inside a Docker container whether you are in a Docker container or not can be done via /proc/1/cgroup. As this post suggests, you can do the following:
Outside a docker container, all entries in /proc/1/cgroup end in /, as you can see here:
vagrant@ubuntu-13:~$ cat /proc/1/cgroup
11:name=systemd:/
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:freezer:/
6:devices:/
5:memory:/
4:cpuacct:/
3:cpu:/
2:cpuset:/
Inside a Docker container some of the control groups will belong to Docker (or LXC):
vagrant@ubuntu-13:~$ docker run busybox cat /proc/1/cgroup
11:name=systemd:/
10:hugetlb:/
9:perf_event:/
8:blkio:/
7:freezer:/
6:devices:/docker/3601745b3bd54d9780436faa5f0e4f72bb46231663bb99a6bb892764917832c2
5:memory:/
4:cpuacct:/
3:cpu:/docker/3601745b3bd54d9780436faa5f0e4f72bb46231663bb99a6bb892764917832c2
2:cpuset:/
We use the process's sched file (/proc/$PID/sched) to extract the PID of the process. The process's PID inside the container will differ from its PID on the host (a non-container system).
For example, the output of /proc/1/sched in a container will return:
root@33044d65037c:~# cat /proc/1/sched | head -n 1
bash (5276, #threads: 1)
While on a non-container host:
$ cat /proc/1/sched | head -n 1
init (1, #threads: 1)
This helps to differentiate whether you are in a container or not, e.g. you can do:
if [[ ! $(cat /proc/1/sched | head -n 1 | grep init) ]]; then {
    echo in docker
} else {
    echo not in docker
} fi
Using Environment Variables
For my money, I prefer to set an environment variable inside the docker image that can then be detected by the application.
For example, this is the start of a demo Dockerfile config:
FROM node:12.20.1 as base
ENV DOCKER_RUNNING=true
RUN yarn install --production
RUN yarn build
The second line sets an envar called DOCKER_RUNNING that is then easy to detect. The issue with this is that in a multi-stage build, you will have to repeat the ENV line every time you FROM off of an external image. For example, you can see that I FROM off of node:12.20.1, which includes a lot of extra stuff (git, for example). Later on in my Dockerfile I then COPY things over to a new image based on node:12.20.1-slim, which is much smaller:
FROM node:12.20.1-slim as server
ENV DOCKER_RUNNING=true
EXPOSE 3000
COPY --from=base /build /build
CMD ["node", "server.js"]
Even though this image target server is in the same Dockerfile, it requires the ENV var to be defined again because it has a different base image.
If you make use of Docker-Compose, you could instead easily define an envar there. For example, your docker-compose.yml file could look like this:
version: "3.8"
services:
nodeserver:
image: michaeloryl/stackdemo
environment:
- NODE_ENV=production
- DOCKER_RUNNING=true
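Either way, it's easy to confirm the variable is visible inside the container (a sketch, assuming the compose service above and an image that ships a shell):

docker-compose run --rm nodeserver sh -c 'echo "DOCKER_RUNNING=$DOCKER_RUNNING"'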
Thomas' solution as code:
running_in_docker() {
    (awk -F/ '$2 == "docker"' /proc/self/cgroup | read non_empty_input)
}
Note
The read with a dummy variable is a simple idiom for "does this produce any output?". It's a compact way of turning a possibly verbose grep or awk into a test of a pattern.
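Usage is then just an exit-status check:

if running_in_docker; then
    echo "in a docker container"
fi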
What works for me is to check the inode number of '/'. Inside Docker, it's a very high number. Outside Docker, it's a very low number like '2'. I reckon this approach would also depend on the filesystem being used.
Example
Inside the docker:
# ls -ali / | sed '2!d' |awk {'print $1'}
1565265
Outside the docker
$ ls -ali / | sed '2!d' |awk {'print $1'}
2
In a script:
#!/bin/bash
INODE_NUM=`ls -ali / | sed '2!d' |awk {'print $1'}`
if [ $INODE_NUM == '2' ];
then
echo "Outside the docker"
else
echo "Inside the docker"
fi
We needed to exclude processes running in containers, but instead of checking for just docker cgroups we decided to compare /proc/<pid>/ns/pid to the init system at /proc/1/ns/pid. Example:
pid=$(ps ax | grep "[r]edis-server \*:6379" | awk '{print $1}')
if [ $(readlink "/proc/$pid/ns/pid") == $(readlink /proc/1/ns/pid) ]; then
    echo "pid $pid is in the same namespace as the init system"
else
    echo "pid $pid is in a different namespace than the init system"
fi
Or, in our case, we wanted a one-liner that generates an error if the process is NOT in a container:
bash -c "test -h /proc/4129/ns/pid && test $(readlink /proc/4129/ns/pid) != $(readlink /proc/1/ns/pid)"
which we can execute from another process; if the exit code is zero, the specified PID is running in a different namespace.
Go code that uses /proc/<pid>/cgroup to check whether a process is running in Docker, including on a k8s cluster:
package main

import (
    "fmt"
    "io/ioutil"
    "strconv"
    "strings"
)

// containerImage maps container IDs to image names (populated elsewhere).
var containerImage = map[string]string{}

func GetContainerID(pid int32) string {
    cgroupPath := fmt.Sprintf("/proc/%s/cgroup", strconv.Itoa(int(pid)))
    return getContainerID(cgroupPath)
}

func GetImage(containerId string) string {
    if containerId == "" {
        return ""
    }
    image, ok := containerImage[containerId]
    if ok {
        return image
    }
    return ""
}

func getContainerID(cgroupPath string) string {
    containerID := ""
    content, err := ioutil.ReadFile(cgroupPath)
    if err != nil {
        return containerID
    }
    lines := strings.Split(string(content), "\n")
    for _, line := range lines {
        field := strings.Split(line, ":")
        if len(field) < 3 {
            continue
        }
        cgroup_path := field[2]
        if len(cgroup_path) < 64 {
            continue
        }
        // Non-systemd Docker:
        // 5:net_prio,net_cls:/docker/de630f22746b9c06c412858f26ca286c6cdfed086d3b302998aa403d9dcedc42
        // 3:net_cls:/kubepods/burstable/pod5f399c1a-f9fc-11e8-bf65-246e9659ebfc/9170559b8aadd07d99978d9460cf8d1c71552f3c64fefc7e9906ab3fb7e18f69
        pos := strings.LastIndex(cgroup_path, "/")
        if pos > 0 {
            id_len := len(cgroup_path) - pos - 1
            if id_len == 64 {
                // a 64-character trailing segment is a container id
                containerID = cgroup_path[pos+1 : pos+1+64]
                return containerID
            }
        }
        // systemd Docker:
        // 5:net_cls:/system.slice/docker-afd862d2ed48ef5dc0ce8f1863e4475894e331098c9a512789233ca9ca06fc62.scope
        docker_str := "docker-"
        pos = strings.Index(cgroup_path, docker_str)
        if pos > 0 {
            pos_scope := strings.Index(cgroup_path, ".scope")
            id_len := pos_scope - pos - len(docker_str)
            if pos_scope > 0 && id_len == 64 {
                containerID = cgroup_path[pos+len(docker_str) : pos+len(docker_str)+64]
                return containerID
            }
        }
    }
    return containerID
}
Based on Dan Walsh's comment about using SELinux ps -eZ | grep container_t, but without requiring ps to be installed:
$ podman run --rm fedora:31 cat /proc/1/attr/current
system_u:system_r:container_t:s0:c56,c299
$ podman run --rm alpine cat /proc/1/attr/current
system_u:system_r:container_t:s0:c558,c813
$ docker run --rm fedora:31 cat /proc/1/attr/current
system_u:system_r:container_t:s0:c8,c583
$ cat /proc/1/attr/current
system_u:system_r:init_t:s0
This just tells you you're running in a container, but not which runtime.
I didn't check other container runtimes, but https://opensource.com/article/18/2/understanding-selinux-labels-container-runtimes provides more info and suggests this is widely used; it might also work for rkt and lxc.
What works for me, as long as I know what system the programs/scripts will be running on, is confirming that whatever runs as PID 1 is systemd (or an equivalent init). If not, that's a container.
And this should be true for any Linux container, not only Docker.
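A sketch of that check, assuming a ps that supports -o comm= (procps does; some minimal busybox builds don't):

#!/bin/sh
case "$(ps -p 1 -o comm=)" in
    systemd|init) echo "probably on the host" ;;
    *)            echo "probably in a container" ;;
esac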
I needed this capability in 2022 on macOS, and of all the other options only the answer by @at0S still works:
/proc/1/cgroup only has the root directory in a container unless configured otherwise
/proc/1/sched showed the same in-container process number. The name was different (bash) but that's not very portable.
Environment variables work if you configure your container yourself, but none of the default environment variables helped
I did find an option not listed in the other answers: /proc/1/mounts included an overlay filesystem with "docker" in its path.
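A sketch of that last heuristic (best-effort; the exact mount line depends on the storage driver):

if grep -qE '^overlay / .*docker' /proc/1/mounts 2>/dev/null; then
    echo "overlay root with docker in its path - likely inside a container"
fi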
