Jaeger Service not shown in Jaeger UI - opentracing

I installed the Jaeger all-in-one in Docker with:
docker run --rm --name jaeger -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14267:14267 -p 14268:14268 -p 9411:9411 jaegertracing/all-in-one:1.7
Below is the sample code showing how I initialize the tracer and spans.
I see the span logs in my console, but nothing shows up in the Jaeger UI.
Could anyone please help me with this?
LoggingReporter logging = new LoggingReporter();

SamplerConfiguration sampler = new SamplerConfiguration();
sampler.withType("const");
sampler.withParam(1);

ReporterConfiguration reporter = new ReporterConfiguration();
reporter.withLogSpans(true);
reporter.withSender(sender); // sender: a SenderConfiguration built elsewhere

tracer = Configuration.fromEnv("sample_jaeger").withSampler(sampler).withReporter(reporter).getTracer();

Scope scope = tracer.buildSpan("parent-span").startActive(true);
Tags.SAMPLING_PRIORITY.set(scope.span(), 1);
scope.span().setTag("this-is-test", "YUP");
logging.report((JaegerSpan) scope.span());

Are you closing the tracer and the scope? If you are using a version before 0.32.0, you should manually call tracer.close() before your process terminates, otherwise the spans in the buffer might not get dispatched.
As for the scope, it's common to wrap it in a try-with-resources statement:
try (Scope scope = tracer.buildSpan("parent-span").startActive(true)) {
    Tags.SAMPLING_PRIORITY.set(scope.span(), 1);
    scope.span().setTag("this-is-test", "YUP");
    logging.report((JaegerSpan) scope.span());
}
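If you are on an older client version, a minimal sketch (not from the original answer) of closing the tracer explicitly before the process exits, assuming tracer is the JaegerTracer built by the Configuration above:

// Sketch: flush and close the tracer on JVM shutdown so buffered spans get dispatched.
// Configuration.getTracer() returns an io.jaegertracing.internal.JaegerTracer, which is Closeable.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    ((JaegerTracer) tracer).close();
}));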
You might also want to check the OpenTracing tutorial at https://github.com/yurishkuro/opentracing-tutorial or the Katacoda-based version at https://www.katacoda.com/courses/opentracing
-- EDIT
"and is deployed on a different hostname and port"
Then you do need to tell the tracer where to send the traces. Either export the JAEGER_ENDPOINT environment variable, pointing to a collector endpoint, or set JAEGER_AGENT_HOST/JAEGER_AGENT_PORT with the location of the agent. You can check the available environment variables for your client at the following URL: https://www.jaegertracing.io/docs/1.7/client-features/
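If you would rather set this in code than via environment variables, a rough sketch using the Jaeger Java client's SenderConfiguration (the host, port, and endpoint values below are placeholders, not from the question):

// Sketch: point the reporter at a remote agent or collector explicitly.
SenderConfiguration sender = new SenderConfiguration()
        .withAgentHost("jaeger-agent.example.com")
        .withAgentPort(6831);
// ...or send straight to the collector over HTTP:
// SenderConfiguration sender = new SenderConfiguration()
//         .withEndpoint("http://jaeger-collector.example.com:14268/api/traces");

ReporterConfiguration reporter = new ReporterConfiguration()
        .withLogSpans(true)
        .withSender(sender);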

Related

Client authentication failure on Neo4J

I basically followed the steps described here: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#configure-spring-boot-project
My application.properties contains the following:
spring.neo4j.uri=neo4j://localhost:7687
spring.neo4j.authentication.username=neo4j
spring.neo4j.authentication.password=verySecret357
I have a Neo4jConfiguration bean which only specifies the TransactionManager; the rest is (supposedly) taken care of by spring-boot-starter-data-neo4j:
import org.neo4j.driver.Driver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.core.ReactiveDatabaseSelectionProvider;
import org.springframework.data.neo4j.core.transaction.ReactiveNeo4jTransactionManager;

@Configuration
public class Neo4jConfiguration {

    @Bean
    public ReactiveNeo4jTransactionManager reactiveTransactionManager(Driver driver,
            ReactiveDatabaseSelectionProvider databaseNameProvider) {
        return new ReactiveNeo4jTransactionManager(driver, databaseNameProvider);
    }
}
Neo4j (5.3.0) runs in a Docker container I started with
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 -e 'NEO4J_AUTH=neo4j/verySecret357' neo4j:4.4.11-community
I can access it through HTTP on my localhost:7474 and can authenticate using the credentials above.
Now, when I run my Spring Boot app and try to create nodes in Neo4j, I keep getting the same exception:
org.neo4j.driver.exceptions.AuthenticationException: The client is unauthorized due to authentication failure.
Running in debug mode, however, it seems the client authentication scheme is correctly set.
Any thoughts on what I might be doing wrong?
Edit: one thing, though: I would assume that the "authToken" would contain a base64-encoded string (username:password), since the scheme is basic auth. It looks like that's not the case (using neo4j-java-driver:5.2.0).
Edit: it seems to be related to the Docker image. A standalone Neo4j instance works fine.
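Not part of the original post, but a quick standalone check with the plain Neo4j Java driver (same URI and credentials as in application.properties) can help narrow the failure down to Spring configuration versus the database itself:

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;

public class Neo4jAuthCheck {
    public static void main(String[] args) {
        // AuthTokens.basic builds the "basic" scheme token; the driver sends the
        // credentials over Bolt as-is rather than base64-encoding them into the token.
        try (Driver driver = GraphDatabase.driver("neo4j://localhost:7687",
                AuthTokens.basic("neo4j", "verySecret357"))) {
            driver.verifyConnectivity();
            System.out.println("Authentication OK");
        }
    }
}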

Unable to run Chrome with TestCafe on macOS

I have been trying, without success, to run tests on Chrome using TestCafe on macOS. I have generated all the required certificates, but when launching Chrome with TestCafe it reports ERR_SSL_VERSION_OR_CIPHER_MISMATCH. Below are the arguments I am passing:
yarn run testcafe --hostname localhost --ssl 'pfx = testingdomain.pfx;rejectunasuthorized=true;--ssl key = testingdomain.key;cert=testingdomain.crt' "chrome --use-fake-ui-for-media-stream --allow-insecure-localhost --allow-running-insecure-content" e2e/testmac.js --live
When I remove the PFX cert and run the command below, I am able to get to the web page, but can't access the mic and camera. My command to maximize the browser window also does not work:
yarn run testcafe --hostname localhost "chrome --use-fake-ui-for-media-stream --allow-insecure-localhost --allow-running-insecure-content --live" e2e/testmac.js --ssl 'key=testingdomain.key;cert=testingdomain.crt' --live
My simple test:
import { Selector, ClientFunction } from 'testcafe';

fixture`Audio Configuration Combination`.page`http://XXX.XXX.XXXXXX/sandbox/index.html`;

test('Launch SDK,', async (browser) => {
    await browser.getCurrentWindow().maximizeWindow().wait(100000);
});
I have problems only on Mac; the same setup works fine on Windows. I need to access the mic and camera, so I am passing in "--use-fake-ui-for-media-stream", but I don't see a camera preview. Passing in "--use-fake-device-for-media-stream" loads fake devices, which is something I don't need.
Any help is greatly appreciated.
According to this comment on GitHub, it should be sufficient to use either of the two approaches to mock user media, not necessarily both. If you specify testcafe --hostname localhost, you shouldn't need to specify --ssl at all.
I ran the following test from the GitHub discussion mentioned above on macOS:
mock-media-test.js
fixture `WebRTC`
.page`https://webrtc.github.io/samples/src/content/getusermedia/canvas/`;
test(`test`, async t => t.wait(30000));
I used the following command:
testcafe "chrome --use-fake-ui-for-media-stream" mock-media-test.js --hostname localhost
The test ran as expected, and the page displayed the stream from my camera. The --use-fake-device-for-media-stream flag worked for me as well.

Service dependencies not shown in Jaeger between Spring Boot Applications

I'm currently trying to trace two Spring Boot (2.1.1) applications with Jaeger using https://github.com/opentracing-contrib/java-spring-web
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-spring-web-starter</artifactId>
</dependency>
I also tried, with no success:
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-spring-jaeger-cloud-starter</artifactId>
</dependency>
Tracing of the spans within each single service/app works fine, but not across REST requests at a global level.
There is no dependency shown between the services, as you can see in the image.
Shouldn't this work out of the box through the library? Or do I have to implement some interceptors and request filters on my own, and if so, how?
You can check out a minimalistic project containing the problem here.
Btw: Jaeger runs as all-in-one via Docker and works as expected:
docker run \
  --rm \
  --name jaeger \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 9411:9411 \
  jaegertracing/all-in-one:latest
The problem is that you are using RestTemplate template = new RestTemplate(); to get an instance of the RestTemplate to make a REST call.
Doing that means that OpenTracing cannot instrument the call to add the necessary HTTP headers.
Please consider using an @Autowired RestTemplate instead.
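A minimal sketch of what that looks like (class name is illustrative, not taken from the linked project):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // Exposing RestTemplate as a bean lets opentracing-spring-web instrument it,
    // so the trace context is injected into the headers of outgoing requests.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

Then inject that bean (e.g. via @Autowired or constructor injection) wherever the REST call is made, instead of constructing a new RestTemplate there.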
Could you try using a more recent version of Jaeger? See https://www.jaegertracing.io/docs/latest/getting-started/#all-in-one - 1.11 is out now, so you could try that.

Sending application arguments via docker-compose for a Spring Boot application?

I am trying to do something super simple.
It should be that easy, but it seems like I am forgetting something...
I made a simple application.
It has properties in a yml file, like this:
Then I create a Dockerfile.
It receives parameters when running, like this:
EXPOSE 8080
CMD ["java", "-server", "$JAVA_OPTS", "-jar", "helloworld.jar", "$APP_ARGS" ]
It should override some parameters, like the ${KAFKA_OUTPUT_TOPIC}
To run it with docker-compose, I made this:
hello-world:
  image: my-docker-image
  ports:
    - 8080:8080
  environment:
    KAFKA_BROKERS: kafka:9092
    KAFKA_INPUT_TOPIC: test
Then it fails because the KAFKA_INPUT_TOPIC default value has an invalid character, which means it fails to set the new parameter test.
I have to say that when I set default values it works fine, but that is not what I need... I have no idea how to send it as a parameter. Any idea?
Thanks

How to configure SonarQube 7.1 in AWS Elastic Beanstalk

I have tried a few ways to get SonarQube running in our AWS environment, all successfully. However, SonarQube is unstable: whenever Elastic Beanstalk recycles an instance, my SonarQube environment is wiped out.
Here is what I tried:
Attempt 1: EC2 instance. I create the EC2 instance off of a Bitnami AMI, imageId: ami-0f9cf81913a6dce27.
This seemed like a pretty simple process, but I prefer an Elastic Beanstalk environment to manage our SonarQube EC2 instances.
Attempt 2: Create an EB environment using a single Docker instance, with this Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "sonarqube:7.1"
  },
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
This created the EB environment. It creates an RDS instance (with MySQL 5.x) to store the scan data (in a database called ebdb). The SonarQube server hosts an internal Elasticsearch instance locally for its search data.
I then have to add a few environment variables to support the RDS instance (JDBC username, password, URL endpoint, etc.).
I then have to configure the SonarQube security side.
No marketplace features are installed. So I add SonarJava, Groovy, and SonarJS.
I add a login user for scans. All good.
Except that occasionally Elastic Beanstalk will have a health issue, drop the current instance, and re-create a new one.
In this case, everything is still intact - security, users, passwords, etc. - except that the marketplace features are gone, so code scans will fail until I manually add them back.
The schema for the single-instance Docker container is pretty sparse; I did not see any way to customize it further via that file.
Attempt 3: Use a multi-instance Docker container. The schema is more robust; perhaps I can configure SonarQube more explicitly, e.g. you can pass environment variables, MySQL settings, etc.
I was unable to get this to work. I did learn I needed to set the memory above 2 GB for Elasticsearch to start up, but I was unable to get the SonarQube environment to come up.
I might revisit this later.
Attempt 4: Use the AMI in Elastic Beanstalk (with the Terraform AWS provider).
main.tf
resource "aws_elastic_beanstalk_application" "sonarqube" {
name = "SonarQube"
description = "SonarQube for nano-services"
}
resource "aws_elastic_beanstalk_environment" "nonprod" {
name = "${var.application-name}"
application = "${aws_elastic_beanstalk_application.sonarqube.name}"
solution_stack_name = "64bit Amazon Linux 2018.03 v2.10.0 running Docker 17.12.1-ce"
wait_for_ready_timeout = "30m"
setting {
namespace = "aws:autoscaling:updatepolicy:rollingupdate"
name = "Timeout"
value = "PT1H"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = "aws-elasticbeanstalk-service-role"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "DeploymentPolicy"
value = "Rolling"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSizeType"
value = "Fixed"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSize"
value = "1"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "IgnoreHealthCheck"
value = "true"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "EC2KeyName"
value = "web-aws-key"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "arn:aws:iam::<redacted>:instance-profile/aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "instanceType"
value = "t2.xlarge"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerProtocol"
value = "SSL"
}
setting {
namespace = "aws:elb:listener:443"
name = "InstanceProtocol"
value = "SSL"
}
setting {
namespace = "aws:elb:listener:443"
name = "SSLCertificateId"
value = "arn:aws:acm:<redacted>"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerEnabled"
value = "true"
}
}
Initially I included the SonarQube AMI:
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "imageId"
  value     = "ami-0f9cf81913a6dce27"
}
This does create everything. However, the EC2 instances respond too slowly and EB goes to Grey status: even though SonarQube is up and running, EB is unaware of it. So I commented this out and manually modified the image ID as a one-off.
wait_for_ready_timeout does assist with this, as it simply keeps Terraform from timing out; e.g. it finishes in 22.5 minutes instead of a hard stop at 20 minutes.
In this case, it creates SonarQube with a local MySQL database (no RDS instance), with Elasticsearch local as well.
SonarQube's marketplace features are also included, except for Groovy, which I added.
However, same issue as before: when EB drops an instance and re-creates it, the SonarQube environment is wiped out - this time the credentials, marketplace features, and everything.
Has anyone run into this problem and figured it out?
I resolved the issue by using ECS (Fargate), instead of the Elastic Beanstalk container.
Steps:
Create an RDS MySQL instance in AWS for Sonar.
Open a MySQL shell for this instance and configure it for Sonar; see: Sonar setup with MySql.
Create a Dockerfile with the plugins you care about, e.g.:
FROM sonarqube:latest

ENV SONARQUBE_JDBC_USERNAME=[YOUR-USERNAME] \
    SONARQUBE_JDBC_PASSWORD=[YOUR-PASSWORD] \
    SONARQUBE_JDBC_URL=jdbc:mysql://[YOUR-RDS-ENDPOINT]:3306/sonar?useSSL=false&useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance

RUN wget "https://sonarsource.bintray.com/Distribution/sonar-java-plugin/sonar-java-plugin-5.7.0.15470.jar" \
    && wget "https://sonarsource.bintray.com/Distribution/sonar-javascript-plugin/sonar-javascript-plugin-4.2.1.6529.jar" \
    && wget "https://sonarsource.bintray.com/Distribution/sonar-groovy-plugin/sonar-groovy-plugin-1.4.jar" \
    && mv *.jar $SONARQUBE_HOME/extensions/plugins \
    && ls -lah $SONARQUBE_HOME/extensions/plugins

EXPOSE 9000
EXPOSE 9092
I exposed 9092 in case I wanted to comment out the MySQL connection and test locally with the internal H2 database at some point.
Verify the Docker image runs locally:
eval $(docker-machine env)
docker build -t sonar .
docker run -it -d --rm --name sonar -p 9000:9000 -p 9092:9092 sonar:latest
echo $DOCKER_HOST
Open a browser to this IP address on port 9000, e.g. http://192.x.x.x:9000.
Create a new ECR repository called sonar to store the Docker image.
The AWS interface actually tells you how to publish your Docker image, so this should be self-evident.
Tag and push the Docker image to the sonar repository:
$(aws ecr get-login --no-include-email --region [YOUR-AWS-REGION])
docker tag sonar:latest [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
docker push [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
Create a new Fargate cluster called sonar.
Create a new task definition.
For your container, use the ECS Docker image URI. I gave mine 6 GB of memory and 2 CPUs, with 1024 CPU units. Here I exposed ports 9000 and 9092. I added the environment vars from the Dockerfile here as well.
Create an ECS service and include the task. Run it, verify the logs in CloudWatch, then hit the public endpoint on port 9000, and done.
I largely borrowed from this: https://www.infralovers.com/en/articles/2018/05/04/sonarqube-on-aws-fargate/
I hope this helps others.
