I have a .NET Core Web API server and an OpenTelemetry Collector, each running as a Docker container. With my docker-compose setup, no events from the web server ever reach the collector container, although everything works outside Docker when I use "http://localhost:4318" as the OTLP endpoint.
My docker-compose.yml:
version: "2.1"
services:
  web:
    build:
      context: ..
      dockerfile: ./MyWebServer/Dockerfile
    container_name: web
    networks:
      - test_net
    ports:
      - "8080:80"
  otel:
    image: "otel/opentelemetry-collector:latest"
    container_name: otel
    command: ["--config=/etc/otel-collector-config.yml"]
    volumes:
      - ./OpenTelemetry/otel-collector-config.yml:/etc/otel-collector-config.yml
    networks:
      test_net:
        ipv4_address: 172.23.10.4
    ports:
      - "4318:4318"
networks:
  test_net:
    ipam:
      config:
        - subnet: 172.23.0.0/16
My Program.cs:
using System.Diagnostics;
using OpenTelemetry.Exporter; // required for OtlpExportProtocol
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
const string serviceName = "MyWebServer";
const string serviceVersion = "1.0.0";
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddOpenTelemetryTracing(tracerProviderBuilder =>
{
    tracerProviderBuilder
        .AddOtlpExporter(opt =>
        {
            opt.Endpoint = new Uri("http://172.23.10.4:4318");
            opt.Protocol = OtlpExportProtocol.HttpProtobuf;
        })
        .AddSource(serviceName)
        .SetResourceBuilder(
            ResourceBuilder.CreateDefault()
                .AddService(serviceName: serviceName, serviceVersion: serviceVersion))
        .AddHttpClientInstrumentation()
        .AddAspNetCoreInstrumentation()
        .AddSqlClientInstrumentation();
});
var app = builder.Build();
app.UseRouting();
app.UseEndpoints(endpoints => { endpoints.MapControllers(); });
app.Run();
You don't need to use an IP address here. You can use the service name, and Docker will resolve it automatically for you.
opt.Endpoint = new Uri("http://172.23.10.4:4318");
so in your case it should be
opt.Endpoint = new Uri("http://otel:4318");
I have created microservices using Spring Boot and Eureka, with an API Gateway in front of them.
All the microservices (Eureka clients) are visible on the Eureka server, but I am getting an error like the one below.
api-gateway port: 8999
product-service: 9001
product-detail-service: 9002
eureka-server: 8761
api-gateway application.properties
server.port =8999
spring.application.name = api-gateway
eureka.client.instance.preferIpAddress = true
eureka.client.serviceUrl.defaultZone= http://localhost:8761/eureka
spring.cloud.gateway.routes[0].id=product-service
spring.cloud.gateway.routes[0].uri=lb://product-service
spring.cloud.gateway.routes[0].predicates[0]=Path=/product/**
spring.cloud.gateway.routes[1].id=product-detail-service
spring.cloud.gateway.routes[1].uri=lb://product-detail-service
spring.cloud.gateway.routes[1].predicates[0]=Path=/productDetail/**
eureka-server application.properties
server.port=8761
eureka.client.register-with-eureka = false
eureka.server.waitTimeInMsWhenSyncEmpty = 0
product-detail-service application.properties
server.port=9002
spring.application.name = product-detail-service
eureka.instance.preferIpAddress = true
product-service application.properties
server.port = 9001
spring.application.name = product-service
eureka.client.instance.preferIpAddress = true
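A side note on these property files: Spring Cloud Netflix binds this flag under the eureka.instance.* prefix, so the eureka.client.instance.preferIpAddress entries above are most likely ignored. The conventional form (offered as a naming fix, not necessarily the cause of the error) is:

eureka.instance.prefer-ip-address=true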
docker-compose.yml
version: '3.8'
services:
  api-server:
    build: ../apigateway
    ports:
      - 8999:8999
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
    depends_on:
      - product-service
      - product-detail-service
  eureka-server:
    build: ../eureka_server
    ports:
      - 8761:8761
    depends_on:
      - product-service
      - product-detail-service
  product-service:
    build: ../product_service
    ports:
      - 9001:9001
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
    depends_on:
      - product-detail-service
  product-detail-service:
    build: ../product_details_service
    ports:
      - 9002:9002
    environment:
      - eureka.client.service-url.defaultZone=http://eureka-server:8761/eureka
My Docker images build successfully, and the containers run fine when started individually without docker-compose.
I have tried custom networks and much more, but the issue is still not resolved.
Please help; I have been trying to solve this issue for three days.
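One detail that stands out in the compose file above (an observation, not a guaranteed fix): eureka-server itself has depends_on pointing at the two product services, so the Eureka clients are started before the registry they try to reach. A sketch with the dependency direction reversed, reusing the same service names, would look like:

services:
  eureka-server:
    build: ../eureka_server
    ports:
      - 8761:8761
  product-detail-service:
    build: ../product_details_service
    depends_on:
      - eureka-server
  product-service:
    build: ../product_service
    depends_on:
      - eureka-server
      - product-detail-service
  api-server:
    build: ../apigateway
    depends_on:
      - eureka-server
      - product-service
      - product-detail-service

Note that depends_on only orders container startup; each client still needs its defaultZone pointing at http://eureka-server:8761/eureka, as in the environment entries above.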
I have an application that writes logs to Elasticsearch using Serilog, and I configured the APM server using docker-compose. Once I start the application, perform an operation (navigating through pages in the browser), and then close the application, those logs are recorded to Elasticsearch. I came across this article that talks about correlating logs with APM. Since I am not using Python in this application, I followed only a selection of the steps, and I noticed that there are transactions inside of APM.
With these transactions, how would I be able to correlate the logs with each other? In other words, is there a unique variable/id/key that ties together all the logs recorded in one single transaction (when I started the application, performed operations, then closed the application)?
When I looked into each of the transactions, I noticed that they have a transaction_id and a trace_id, but they change with each operation that I perform. I want to know whether this is possible and, if it is, how I can gather all the logs that pertain to a single transaction. For instance, if I query by a single id, all of those logs should be returned.
docker-compose.yml
version: '2.2'
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:7.13.0
    depends_on:
      elasticsearch:
        condition: service_healthy
      kibana:
        condition: service_healthy
    cap_add: ["CHOWN", "DAC_OVERRIDE", "SETGID", "SETUID"]
    cap_drop: ["ALL"]
    ports:
      - 8200:8200
    networks:
      - elastic
    command: >
      apm-server -e
      -E apm-server.rum.enabled=true
      -E setup.kibana.host=kibana:5601
      -E setup.template.settings.index.number_of_replicas=0
      -E apm-server.kibana.enabled=true
      -E apm-server.kibana.host=kibana:5601
      -E output.elasticsearch.hosts=["elasticsearch:9200"]
    healthcheck:
      interval: 10s
      retries: 12
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:8200/
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    environment:
      - bootstrap.memory_lock=true
      - cluster.name=docker-cluster
      - cluster.routing.allocation.disk.threshold_enabled=false
      - discovery.type=single-node
      - ES_JAVA_OPTS=-XX:UseAVX=2 -Xms1g -Xmx1g
    ulimits:
      memlock:
        hard: -1
        soft: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
    healthcheck:
      interval: 20s
      retries: 10
      test: curl -s http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.0
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      - elastic
    healthcheck:
      interval: 10s
      retries: 20
      test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:5601/api/status
volumes:
  esdata:
    driver: local
networks:
  elastic:
    driver: bridge
UPDATED
After looking into the documentation for Elastic.Apm.SerilogEnricher, I went ahead and included it in my Startup.cs and Program.cs files. I just wanted to double-check that I am incorporating it correctly.
Startup.cs:
namespace CustomerSimulatorApp
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;

            var logger = new LoggerConfiguration()
                .Enrich.WithElasticApmCorrelationInfo()
                .WriteTo.Console(outputTemplate: "[{ElasticApmTraceId} {ElasticApmTransactionId}] {Message:lj} {NewLine}{Exception}")
                .CreateLogger();
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllersWithViews();

            // create a new node instance
            var node = new Uri("http://localhost:9200");
            // settings instance for the node
            var settings = new ConnectionSettings(node);
            settings.DefaultFieldNameInferrer(p => p);
            services.AddSingleton<IElasticClient>(new ElasticClient(settings));
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseAllElasticApm(Configuration);

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }
            app.UseHttpsRedirection();
            app.UseStaticFiles();
            app.UseRouting();
            app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllerRoute(
                    name: "default",
                    pattern: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}
Program.cs:
namespace CustomerSimulatorApp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .UseSerilog((context, configuration) =>
                {
                    configuration.Enrich.FromLogContext()
                        .Enrich.WithElasticApmCorrelationInfo()
                        .Enrich.WithMachineName()
                        .WriteTo.Console()
                        .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(context.Configuration["ElasticConfiguration:Uri"]))
                        {
                            IndexFormat = $"{context.Configuration["ApplicationName"]}-logs-{context.HostingEnvironment.EnvironmentName?.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}",
                            AutoRegisterTemplate = true
                        })
                        .Enrich.WithProperty("Environment", context.HostingEnvironment.EnvironmentName)
                        .ReadFrom.Configuration(context.Configuration);
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                })
                .UseAllElasticApm();
    }
}
I noticed, when I ran the program, performed operations in the browser, and then checked APM, that the trace.id and transaction.id still change, so I am unable to correlate the single transaction I performed in the browser with the logs. Did I implement Elastic.Apm.SerilogEnricher incorrectly above?
Different IDs (there are more differing ones, but I do not want to expand this with screenshots).
They all change per page redirect, so I am unable to gather the logs from a single ID.
This is what I see on the console as well with the updated Startup.cs and Program.cs files (console screenshot omitted).
I eventually shut down the program.
If you're using Serilog to send logs to Elasticsearch, and also using the Elastic APM .NET agent in your application to capture traces, you can reference Elastic.Apm.SerilogEnricher to enrich logs with APM trace ids and transaction ids (and, in the coming 1.6 release, span ids) whenever there is an active transaction when logging:
var logger = new LoggerConfiguration()
    .Enrich.WithElasticApmCorrelationInfo()
    .WriteTo.Console(outputTemplate: "[{ElasticApmTraceId} {ElasticApmTransactionId}] {Message:lj} {NewLine}{Exception}")
    .CreateLogger();
Take a look at the documentation, which has more information.
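Once those fields are on every log event, gathering all logs for one transaction is a single term query against the log index. A sketch for Kibana Dev Tools follows; the index pattern and exact field name are assumptions, so check what your Serilog sink actually writes (e.g. ElasticApmTraceId vs. trace.id), and the id shown is a placeholder:

GET customersimulatorapp-logs-*/_search
{
  "query": {
    "term": {
      "ElasticApmTraceId": "0af7651916cd43dd8448eb211c80319c"
    }
  }
}

Also note that in APM one transaction corresponds to one incoming HTTP request, so the ids changing on every page navigation is expected behavior rather than a misconfiguration; correlation groups the log lines belonging to each individual request.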
I'd like to use Traefik as a reverse proxy in front of a Ratchet WebSocket server (the 3rd option suggested in the deploy section).
The goal is to have the reverse proxy handle HTTPS and wss while keeping plain HTTP and ws on the Ratchet server.
My WebSocket server listens on port 8080, as in this example:
public function run()
{
    $loop = React\EventLoop\Factory::create();
    $pusher = new Pusher();

    // Listen for the web server to make a ZeroMQ push after an AJAX request
    $context = new React\ZMQ\Context($loop);
    $pull = $context->getSocket(ZMQ::SOCKET_PULL);
    $pull->bind('tcp://0.0.0.0:5555');
    $pull->on('message', array($pusher, 'onEntry'));

    // Set up our WebSocket server for clients wanting real-time updates
    $webSock = new React\Socket\Server('0.0.0.0:8443', $loop);
    $webServer = new IoServer(
        new HttpServer(
            new WsServer(
                new WampServer(
                    $pusher
                )
            )
        ),
        $webSock
    );

    $loop->run();
}
Following this post, I have been able to configure HTTPS via Traefik.
Here is my simplified docker-compose.yml:
nginx:
  image: wodby/nginx:$NGINX_TAG
  container_name: "${PROJECT_NAME}_nginx"
  depends_on:
    - php
  environment:
    NGINX_STATIC_OPEN_FILE_CACHE: "off"
    NGINX_ERROR_LOG_LEVEL: debug
    NGINX_BACKEND_HOST: php
    NGINX_SERVER_ROOT: /var/www/html/webroot
    NGINX_VHOST_PRESET: $NGINX_VHOST_PRESET
  volumes:
    - ./html:/var/www/html:cached
  labels:
    - "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
    - "traefik.http.middlewares.${PROJECT_NAME}_nginx_https.redirectscheme.scheme=https"
    - "traefik.http.routers.${PROJECT_NAME}_nginx.entrypoints=web"
    - "traefik.http.routers.${PROJECT_NAME}_nginx.middlewares=${PROJECT_NAME}_nginx_https@docker"
    - "traefik.http.routers.${PROJECT_NAME}_nginx_https.rule=Host(`${PROJECT_BASE_URL}`)"
    - "traefik.http.routers.${PROJECT_NAME}_nginx_https.tls=true"
    - "traefik.http.routers.${PROJECT_NAME}_nginx_https.entrypoints=websecure"
php:
  build:
    context: .
    dockerfile: docker/php-fpm/Dockerfile
  container_name: "${PROJECT_NAME}_php"
  volumes:
    - ./html:/var/www/html
  labels:
    - "traefik.http.routers.php.rule=Host(`${PROJECT_BASE_URL}`)"
traefik:
  image: traefik:v2.0
  container_name: "${PROJECT_NAME}_traefik"
  command:
    - "--api.insecure=true"
    - "--entrypoints.web.address=:80"
    - "--entrypoints.websecure.address=:443"
    - "--providers.docker=true"
    - "--providers.file.filename=/etc/traefik/dynamic_conf/config.yml"
    - "--providers.file.watch=true"
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    - "./docker/traefik/config.yml:/etc/traefik/dynamic_conf/config.yml" # used to define the certificate path
    - "./docker/certs:/tools/certs"
However, how can I now terminate HTTPS/wss at Traefik and forward it as plain HTTP/ws to the php service?
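For reference, a minimal sketch of what that could look like; the ratchet service name, router names, and build context below are assumptions, not taken from the project above. Traefik v2 proxies WebSocket upgrade requests on an ordinary HTTP router, so terminating TLS at the websecure entrypoint and pointing the service at the container's plain ws port is typically all that's required:

ratchet:
  build:
    context: .
    dockerfile: docker/ratchet/Dockerfile  # hypothetical build context
  labels:
    # wss:// traffic enters through the TLS entrypoint...
    - "traefik.http.routers.${PROJECT_NAME}_ws.rule=Host(`ws.${PROJECT_BASE_URL}`)"
    - "traefik.http.routers.${PROJECT_NAME}_ws.entrypoints=websecure"
    - "traefik.http.routers.${PROJECT_NAME}_ws.tls=true"
    # ...and is forwarded as plain HTTP/ws to the Ratchet container
    - "traefik.http.services.${PROJECT_NAME}_ws.loadbalancer.server.port=8080"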
I am working on a Laravel project and have started writing browser tests using Dusk. I use Docker as my development environment. When I run the tests, I get a "Connection refused" error.
This is my docker-compose.yaml file.
version: '3'
services:
  apache:
    container_name: res_apache
    image: webdevops/apache:ubuntu-16.04
    environment:
      WEB_DOCUMENT_ROOT: /var/www/public
      WEB_ALIAS_DOMAIN: restaurant.localhost
      WEB_PHP_SOCKET: php-fpm:9000
    volumes: # Only shared dirs to apache (to be served)
      - ./public:/var/www/public:cached
      - ./storage:/var/www/storage:cached
    networks:
      - res-network
    ports:
      - "8081:80"
      - "443:443"
  php-fpm:
    container_name: res_php
    image: jguyomard/laravel-php:7.3
    volumes:
      - ./:/var/www/
      - ./ci:/var/www/ci:cached
      - ./vendor:/var/www/vendor:delegated
      - ./storage:/var/www/storage:delegated
      - ./node_modules:/var/www/node_modules:cached
      - ~/.ssh:/root/.ssh:cached
      - ./composer.json:/var/www/composer.json
      - ./composer.lock:/var/www/composer.lock
      - ~/.composer/cache:/root/.composer/cache:delegated
    networks:
      - res-network
  db:
    container_name: res_db
    image: mariadb:10.2
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: restaurant
      MYSQL_USER: restaurant
      MYSQL_PASSWORD: secret
    volumes:
      - res-data:/var/lib/mysql
    networks:
      - res-network
    ports:
      - "33060:3306"
  chrome:
    image: robcherry/docker-chromedriver
    networks:
      - res-network
    environment:
      CHROMEDRIVER_WHITELISTED_IPS: ""
      CHROMEDRIVER_PORT: "9515"
    ports:
      - 9515:9515
    cap_add:
      - "SYS_ADMIN"
networks:
  res-network:
    driver: "bridge"
volumes:
  res-data:
    driver: "local"
The following is the driver function in the DuskTestCase.php class.
/**
 * Create the RemoteWebDriver instance.
 *
 * @return \Facebook\WebDriver\Remote\RemoteWebDriver
 */
protected function driver()
{
    $options = (new ChromeOptions)->addArguments([
        '--disable-gpu',
        '--headless',
        '--window-size=1920,1080',
    ]);

    return RemoteWebDriver::create(
        'http://localhost:9515', DesiredCapabilities::chrome()->setCapability(
            ChromeOptions::CAPABILITY, $options
        )
    );
}
I run the tests with the following command.
docker-compose exec php-fpm php artisan dusk
Then I get the following error:
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session with params: {"capabilities":{"firstMatch":[{"browserName":"chrome","goog:chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}]},"desiredCapabilities":{"browserName":"chrome","platform":"ANY","chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}}
Failed to connect to localhost port 9515: Connection refused
/var/www/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:331
What is wrong with my configuration and how can I fix it?
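A pattern that often resolves this in Compose setups, offered as a sketch rather than a confirmed fix: inside the php-fpm container, localhost refers to that container itself, not to the chrome service, so the driver URL would need the compose service name, which Docker's embedded DNS resolves on the shared res-network:

// In DuskTestCase::driver(), target the compose service name instead of localhost
return RemoteWebDriver::create(
    'http://chrome:9515', DesiredCapabilities::chrome()->setCapability(
        ChromeOptions::CAPABILITY, $options
    )
);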
I connect from my web app to an Elasticsearch instance using the Java API:
static Optional<Client> getClient(String hostName, int port) {
    return getHost(hostName).map(host -> TransportClient.builder().build()
        .addTransportAddress(new InetSocketTransportAddress(host, port))
    );
}

static Optional<InetAddress> getHost(String hostName) {
    InetAddress host = null;
    try {
        host = InetAddress.getByName(hostName);
    } catch (UnknownHostException e) {
        LOG.warn("Could not get host: {}", hostName, e);
    }
    return Optional.ofNullable(host);
}
Before switching to Docker Compose, I started Elasticsearch using
docker run --net=my-network --name=myDb elasticsearch
and my web app using
docker run --net=my-network -p 8080:4567 me/myapp
and sure enough, I could connect to the database using getClient("myDb", 9300).
But now that I've switched to Docker Compose, I don't know how to connect to Elasticsearch. This is my current docker-compose.yml:
version: '2'
services:
  my-app:
    image: me/myapp
    ports:
      - "8080:4567"
    networks:
      - my-network
  elasticsearch:
    image: elasticsearch
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Understandably, this results in an UnknownHostException since I haven't set the hostname of my Elasticsearch instance in the docker-compose.yml.
How do I do that?
I had to set an alias:
elasticsearch:
  image: elasticsearch
  networks:
    my-network:
      aliases:
        - myDb
Now I can connect to myDb on port 9300 by calling getClient("myDb", 9300), as shown in the question.
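Worth noting, as a general property of Compose networking rather than part of the answer above: the service name itself is already a resolvable hostname on my-network, so the alias is only needed to keep the existing myDb name. Using the service name directly would work too:

// The compose service name doubles as a hostname on my-network,
// so no alias is required if the code looks it up directly.
Optional<Client> client = getClient("elasticsearch", 9300);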