How to connect Opentracing application to a remote Jaeger collector - opentracing

I am using the Jaeger UI to display traces from my application. It works fine when both the application and Jaeger are running on the same server, but I need to run my Jaeger collector on a different server. I tried JAEGER_ENDPOINT, JAEGER_AGENT_HOST and JAEGER_AGENT_PORT, but it failed.
I don't know whether the values I set for these variables are wrong, or whether any configuration is required inside the application code.
Can you point me to any documentation for this problem?

On server 2, install Jaeger:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
On server 1, set these environment variables:
JAEGER_SAMPLER_TYPE=probabilistic
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_MANAGER_HOST_PORT=(EnterServer2HostName):5778
JAEGER_REPORTER_LOG_SPANS=false
JAEGER_AGENT_HOST=(EnterServer2HostName)
JAEGER_AGENT_PORT=6831
JAEGER_REPORTER_FLUSH_INTERVAL=1000
JAEGER_REPORTER_MAX_QUEUE_SIZE=100
application-server-id=server-x
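Before changing the application, it can be worth confirming that server 2 is reachable from server 1. The agent's sampling endpoint on port 5778 speaks plain HTTP (the UDP span ports cannot be probed as easily), so a quick check might look like this sketch in Python using requests, with the EnterServer2HostName placeholder from above standing in for your real host name:
import requests

# The sampling-strategy endpoint is plain HTTP, unlike the UDP span ports.
r = requests.get('http://EnterServer2HostName:5778/sampling',
                 params={'service': 'ApplicationName'})
print(r.status_code, r.text)  # expect 200 and a JSON sampling strategy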
Change the tracer registration code in the application on server 1 as below, so that it picks up the configuration from the environment variables:
@Produces
@Singleton
public static io.opentracing.Tracer jaegerTracer() {
    String serverInstanceId = System.getProperty("application-server-id");
    if (serverInstanceId == null) {
        serverInstanceId = System.getenv("application-server-id");
    }
    return new Configuration("ApplicationName" + (serverInstanceId != null && !serverInstanceId.isEmpty() ? "-" + serverInstanceId : ""),
            Configuration.SamplerConfiguration.fromEnv(),
            Configuration.ReporterConfiguration.fromEnv())
            .getTracer();
}
Hope this works!
Check this link for integrating Elasticsearch as the persistent storage backend, so that traces are not removed once the Jaeger instance is stopped:
How to configure Jaeger with elasticsearch?

Specify JAEGER_AGENT_HOST and ensure local_agent is not specified in the tracer config.
Below is a working solution for Python:
import os

os.environ['JAEGER_AGENT_HOST'] = "123.XXX.YYY.ZZZ"  # specify the remote Jaeger agent here
# os.environ['JAEGER_AGENT_PORT'] = "6831"  # optional, default: "6831"

from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        # ENSURE 'local_agent' is not specified
        # 'local_agent': {
        #     'reporting_host': "127.0.0.1",
        #     'reporting_port': 6831,
        # },
        'logging': True,
    },
    service_name="your-service-name-here",
)

tracer = config.initialize_tracer()  # create the tracer object here and voila!
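As a quick sanity check that spans actually reach the remote agent, here is a minimal sketch using the tracer created above (the span name and tag are arbitrary):
# emit one test span; it should show up in the Jaeger UI on the remote host
with tracer.start_span('connectivity-test') as span:
    span.set_tag('test', True)

# spans are reported asynchronously, so close (flush) the tracer before exiting
tracer.close()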
Jaeger getting-started guide: https://www.jaegertracing.io/docs/1.33/getting-started/
Jaeger client features: https://www.jaegertracing.io/docs/1.33/client-features/
Flask-OpenTracing: https://github.com/opentracing-contrib/python-flask
OpenTelemetry-Python: https://opentelemetry.io/docs/instrumentation/python/getting-started/

Related

$ characters get removed from JSON

I am currently improving the deployment process of our service.
In particular, I want to update the revision stored in the OpsWorks CustomJson stack property as soon as we have deployed a new one.
For this I created a new task in our Rakefile. Here is the code:
require 'json'

desc "update revision in custom json of stack"
task :update_stack_revision, [:revision, :stack_id] do |t, arg|
  revision = arg[:revision]
  stack_id = arg[:stack_id]

  # get the stack config
  stack_description = `aws opsworks \
    --region us-east-1 \
    describe-stacks \
    --stack-id #{stack_id}`

  # get the json config
  raw_custom_json = JSON.parse(stack_description)["Stacks"][0]["CustomJson"]

  # make it parseable by removing invalid characters
  raw_custom_json = raw_custom_json.gsub(/(\\n)/, '')
  raw_custom_json = raw_custom_json.gsub(/(\\")/, '"')

  # parse json and update revision
  parsed_custom_json = JSON.parse(raw_custom_json)
  parsed_custom_json["git"]["revision"] = revision

  # transform the updated object back into json, in the format required by aws opsworks
  updated_json = JSON.generate(parsed_custom_json)
  updated_json = updated_json.gsub('"', '\"')

  # send update
  `aws opsworks \
    --region us-east-1 \
    update-stack \
    --stack-id #{stack_id} \
    --custom-json "#{updated_json}"`
end
During this process, $ characters are lost for some reason.
I tried reproducing this error by executing each command individually. Apparently the last one, aws opsworks update-stack, is at fault here. I'd really like to know why, and how to stop it.
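The symptom is consistent with shell expansion rather than the AWS CLI itself: Ruby backticks run the command line through /bin/sh, and inside double quotes the shell still expands $..., so any $ in the interpolated JSON is swallowed. A minimal Python sketch of the same effect (illustrative only, not the original Rake task):
import subprocess

payload = 'revision=$LATEST'

# shell=True mirrors Ruby backticks: $LATEST is expanded (to nothing here)
print(subprocess.run('echo "%s"' % payload, shell=True,
                     capture_output=True, text=True).stdout)
# -> revision=

# an argument vector bypasses the shell, so the $ survives intact
print(subprocess.run(['echo', payload],
                     capture_output=True, text=True).stdout)
# -> revision=$LATEST
The equivalent fix in the Rake task would be to invoke the CLI without a shell (e.g. via Open3 with an argument array) or to escape the JSON so the shell leaves $ alone.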

Not able to see newly added log in docker ELK

I'm using sebp/elk's dockerised ELK. I've managed to get it running on my local machine, and I'm trying to input dummy log data by SSHing into the Docker container and running:
/opt/logstash/bin/logstash --path.data /tmp/logstash/data \
-e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
After typing in some random text, I cannot see it indexed by Elasticsearch when I visit http://localhost:9200/_search?pretty&size=1000
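For debugging, it can help to first check which indices exist at all: by default the Logstash elasticsearch output writes to daily logstash-YYYY.MM.DD indices. A minimal sketch using Python's requests (assuming Elasticsearch is published on localhost:9200):
import requests

# list all indices; the dummy stdin events should land in a logstash-* index
print(requests.get('http://localhost:9200/_cat/indices?v').text)

# search only the logstash indices instead of the whole cluster
resp = requests.get('http://localhost:9200/logstash-*/_search',
                    params={'pretty': 'true', 'size': 1000})
print(resp.text)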

Capture output of a shell script inside a docker container to a file using the Docker SDK for Python

I have a shell script inside my Docker container called test.sh. I would like to pipe the output of this script to a file. I can do this using the docker exec command, or by logging into the shell (using docker run -it) and running ./test.sh > test.txt. However, I would like to know how the same result can be achieved using the Docker SDK for Python. This is my code so far:
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True, working_dir='/home/psr',
    volumes=['/home/psr/data'],
    host_config=client.create_host_config(binds={
        '/home/xxxx/data_generator/data/': {
            'bind': '/home/psr/data',
            'mode': 'rw',
        },
    })
)

client.start(container=container.get('Id'))

cmds = './test.sh > test.txt'
exe = client.exec_create(container=container.get('Id'), cmd=cmds, stdout=True)
exe_start = client.exec_start(exec_id=exe, stream=True)

for val in exe_start:
    print(val)
I am using the low-level API of the Docker SDK. In case you know how to achieve the same result as above using the high-level API, please let me know.
In case anyone else has the same problem, here is how I solved it. Please let me know if you have a better solution.
import docker

client = docker.APIClient(base_url='unix://var/run/docker.sock')

container = client.create_container(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True,
    working_dir='/home/psr',
    volumes=['/home/psr/data'],
    host_config=client.create_host_config(binds={
        '/home/xxxx/data_generator/data/': {
            'bind': '/home/psr/data',
            'mode': 'rw',
        },
    })
)

client.start(container=container.get('Id'))

cmds = './test.sh'
exe = client.exec_create(container=container.get('Id'), cmd=cmds, stdout=True)
exe_start = client.exec_start(exec_id=exe, stream=True)

with open('path_to_host_directory/test.txt', 'wb') as f:  # wb: for binary output
    for val in exe_start:
        f.write(val)
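For reference, here is a sketch of the same flow using the high-level API (docker.from_env() with Container.exec_run); the image, paths and script name are the same assumptions as in the code above:
import docker

client = docker.from_env()

# start the container detached, with the same bind mount as the low-level version
container = client.containers.run(
    'ubuntu:16.04', '/bin/bash', stdin_open=True, tty=True, detach=True,
    working_dir='/home/psr',
    volumes={'/home/xxxx/data_generator/data/': {'bind': '/home/psr/data',
                                                 'mode': 'rw'}},
)

# exec_run with stream=True yields output chunks as the script produces them
exit_code, output = container.exec_run('./test.sh', stream=True)

with open('path_to_host_directory/test.txt', 'wb') as f:
    for chunk in output:
        f.write(chunk)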

Error in using persistent data store with COMPOSER REST SERVER

I tried to set up a persistent data store for the REST server but was unable to get it working. I am posting the steps I followed below.
Started an instance of MongoDB:
root@ubuntu:~# docker run -d --name mongo --network composer_default -p 27017:27017 mongo
dda3340e4daf7b36a244c5f30772f50a4ee1e8f81cc7fc5035f1090cdcf46c58
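Before going further, a quick way to confirm MongoDB is actually up is to connect from the host (a sketch assuming pymongo is installed and port 27017 is published as above):
import pymongo

# short timeout so a dead container fails fast instead of hanging
client = pymongo.MongoClient('localhost', 27017, serverSelectionTimeoutMS=2000)
print(client.server_info()['version'])  # raises ServerSelectionTimeoutError if unreachable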
Created a new, empty directory, and in it created a new file named Dockerfile with the following contents:
FROM hyperledger/composer-rest-server
RUN npm install --production loopback-connector-mongodb passport-github && \
npm cache clean && \
ln -s node_modules .node_modules
Changed into the directory created in step 2, and built the Docker image:
root@ubuntu:~# cd examples/dir/
root@ubuntu:~/examples/dir# ls
Dockerfile  ennvars.txt
root@ubuntu:~/examples/dir# docker build -t myorg/my-composer-rest-server .
Sending build context to Docker daemon 4.096 kB
Step 1/2 : FROM hyperledger/composer-rest-server
---> 77cd6a591726
Step 2/2 : RUN npm install --production loopback-connector-couch passport-github && npm cache clean && ln -s node_modules .node_modules
---> Using cache
---> 2ff9537656d1
Successfully built 2ff9537656d1
root@ubuntu:~/examples/dir#
Created a file named ennvars.txt in the same directory, with the following contents:
COMPOSER_CONNECTION_PROFILE=hlfv1
COMPOSER_BUSINESS_NETWORK=blockchainv5
COMPOSER_ENROLLMENT_ID=admin
COMPOSER_ENROLLMENT_SECRET=adminpw
COMPOSER_NAMESPACES=never
COMPOSER_SECURITY=true
COMPOSER_CONFIG='{
"type": "hlfv1",
"orderers": [
{
"url": "grpc://localhost:7050"
}
],
"ca": {
"url": "http://localhost:7054",
"name": "ca.example.com"
},
"peers": [
{
"requestURL": "grpc://localhost:7051",
"eventURL": "grpc://localhost:7053"
}
],
"keyValStore": "/home/ubuntu/.hfc-key-store",
"channel": "mychannel",
"mspID": "Org1MSP",
"timeout": "300"
}'
COMPOSER_DATASOURCES='{
"db": {
"name": "db",
"connector": "mongodb",
"host": "mongo"
}
}'
COMPOSER_PROVIDERS='{
"github": {
"provider": "github",
"module": "passport-github",
"clientID": "a88810855b2bf5d62f97",
"clientSecret": "f63e3c3c65229dc51f1c8964b05e9717bf246279",
"authPath": "/auth/github",
"callbackURL": "/auth/github/callback",
"successRedirect": "/",
"failureRedirect": "/"
}
}'
Loaded the env variables with the following command:
root@ubuntu:~/examples/dir# source ennvars.txt
Started the Docker container with the command below:
root@ubuntu:~/examples/dir# docker run \
-d \
-e COMPOSER_CONNECTION_PROFILE=${COMPOSER_CONNECTION_PROFILE} \
-e COMPOSER_BUSINESS_NETWORK=${COMPOSER_BUSINESS_NETWORK} \
-e COMPOSER_ENROLLMENT_ID=${COMPOSER_ENROLLMENT_ID} \
-e COMPOSER_ENROLLMENT_SECRET=${COMPOSER_ENROLLMENT_SECRET} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_SECURITY=${COMPOSER_SECURITY} \
-e COMPOSER_CONFIG="${COMPOSER_CONFIG}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
942eb1bfdbaf5807b1fe2baa2608ab35691e9b6912fb0d3b5362531b8adbdd3a
It executed successfully, so now I should be able to access the persistent and secured REST server by going to the LoopBack explorer page.
But when I tried to open that URL, I got the error below.
(error screenshot)
Have I missed any step or done something wrong?
Two things:
You need to put export in front of the environment variables in your ennvars.txt file (e.g. export COMPOSER_BUSINESS_NETWORK=blockchainv5).
Check the version of Composer you are running. FROM hyperledger/composer-rest-server pulls the latest version of the REST server, and if your Composer version is not up to date, the two will be incompatible.

Using Finagle's ClientBuilder, how do I set the host externally?

I am building a simple proxy that points to another server. Everything works, but I need a way to set the hosts in a ClientBuilder externally, most likely using Docker or some sort of configuration file. Here is what I have:
import java.net.InetSocketAddress
import com.twitter.finagle.Service
import com.twitter.finagle.builder.{ServerBuilder, ClientBuilder}
import com.twitter.finagle.http.{Request, Http}
import com.twitter.util.Future
import org.jboss.netty.handler.codec.http._

object Proxy extends App {
  val client: Service[HttpRequest, HttpResponse] = {
    ClientBuilder()
      .codec(Http())
      .hosts("localhost:8888")
      .hostConnectionLimit(1)
      .build()
  }

  val server = {
    ServerBuilder()
      .codec(Http())
      .bindTo(new InetSocketAddress(8080))
      .name("TROGDOR")
      .build(client)
  }
}
If you know of a way to do this or have any ideas about it please let me know!
If you want to run this simple proxy in a Docker container and manage the target host IP dynamically, you can pass the target host through an environment variable and change your code like this:
import java.net.InetSocketAddress
import com.twitter.finagle.Service
import com.twitter.finagle.builder.{ServerBuilder, ClientBuilder}
import com.twitter.finagle.http.{Request, Http}
import com.twitter.util.Future
import org.jboss.netty.handler.codec.http._

object Proxy extends App {
  val target_host = sys.env.get("TARGET_HOST")

  val client: Service[HttpRequest, HttpResponse] = {
    ClientBuilder()
      .codec(Http())
      .hosts(target_host.getOrElse("127.0.0.1:8888"))
      .hostConnectionLimit(1)
      .build()
  }

  val server = {
    ServerBuilder()
      .codec(Http())
      .bindTo(new InetSocketAddress(8080))
      .name("TROGDOR")
      .build(client)
  }
}
This lets your code read the system environment variable TARGET_HOST. Once that is done, you can start your Docker container by adding the following parameter to your docker run command:
-e "TARGET_HOST=127.0.0.1:8090"
For example: docker run -e "TARGET_HOST=127.0.0.1:8090" <docker image> <docker command>
Note that you can change 127.0.0.1:8090 to your target host.
Create a file server.properties and put your configuration inside it:
HOST=host:8888
Now have Docker write your configuration on every startup with a docker-entrypoint bash script. Add this script and define the environment variables inside your Dockerfile:
ENV HOST=myhost
ENV PORT=myport
ADD docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["proxy"]
Write out your docker-entrypoint.sh:
#!/bin/bash -x
set -o errexit

cat > server.properties << EOF
HOST=${HOST}:${PORT}
EOF

if [ "$1" = 'proxy' ]; then
    launch server
fi

exec "$@"
Launch Docker with your configuration and the command "proxy":
$ docker run -e "HOST=host" -e "PORT=port" image proxy
You can also use linking when you're not sure of your server container's IP address:
$ docker run -e "HOST=mylinkhost" -e "PORT=port" --link myservercontainer:mylinkhost image proxy
