Error trying to connect from Quarkus to Keycloak - quarkus

When starting up a Quarkus service (v2.1.1) and trying to connect to a Keycloak instance (v15.0.1), I am getting the following exception stack trace:
ERROR [io.qua.run.Application] (Quarkus Main Thread) Failed to start application (with profile dev): io.quarkus.oidc.common.runtime.OidcEndpointAccessException
at io.quarkus.oidc.runtime.OidcProviderClient.getJsonWebKeySet(OidcProviderClient.java:75)
at io.quarkus.oidc.runtime.OidcProviderClient.lambda$getJsonWebKeySet$0(OidcProviderClient.java:54)
at io.smallrye.context.impl.wrappers.SlowContextualFunction.apply(SlowContextualFunction.java:21)
at io.smallrye.mutiny.operators.uni.UniOnItemTransform$UniOnItemTransformProcessor.onItem(UniOnItemTransform.java:36)
at io.smallrye.mutiny.vertx.AsyncResultUni.lambda$subscribe$1(AsyncResultUni.java:35)
at io.vertx.mutiny.ext.web.client.HttpRequest$10.handle(HttpRequest.java:717)
at io.vertx.mutiny.ext.web.client.HttpRequest$10.handle(HttpRequest.java:714)
at io.vertx.ext.web.client.impl.HttpContext.handleDispatchResponse(HttpContext.java:371)
at io.vertx.ext.web.client.impl.HttpContext.execute(HttpContext.java:358)
at io.vertx.ext.web.client.impl.HttpContext.next(HttpContext.java:336)
at io.vertx.ext.web.client.impl.HttpContext.fire(HttpContext.java:303)
at io.vertx.ext.web.client.impl.HttpContext.dispatchResponse(HttpContext.java:265)
at io.vertx.ext.web.client.impl.HttpContext.lambda$null$8(HttpContext.java:520)
at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:96)
at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:59)
at io.vertx.core.impl.EventLoopContext.lambda$runOnContext$0(EventLoopContext.java:37)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:497)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Here is the configuration I am using:
# OIDC Configuration
quarkus.oidc.auth-server-url=https://<HOST>/auth/realms/<REALM_NAME>
quarkus.oidc.client-id=<CLIENT_ID>
# quarkus.oidc.application-type=service
quarkus.oidc.credentials.secret=<SECRET>
quarkus.oidc.tls.verification=required
# Enable Policy Enforcement
quarkus.keycloak.policy-enforcer.enable=true
Anyone got an idea what's wrong here?

Thanks to a comment above, I found the following Quarkus config helpful:
quarkus.log.min-level=DEBUG
quarkus.log.category."io.quarkus.oidc".level=DEBUG
That gave me this error:
Caused by: org.keycloak.authorization.client.util.HttpResponseException: Unexpected response from server: 400 / Bad Request / Response from server: {"error":"invalid_client","error_description":"Bearer-only not allowed"}
So clearly, I had misconfigured the client inside Keycloak. Too bad the original error message didn't give me that information by default.
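For completeness, a sketch of the Keycloak-side client settings this error points at, assuming a confidential client used with the policy enforcer (the field names are from a Keycloak client JSON export; the values shown are illustrative, not copied from my realm):
"bearerOnly": false,
"publicClient": false,
"serviceAccountsEnabled": true,
"authorizationServicesEnabled": true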

Related

Quarkus gRPC is throwing a startup error: Unable to start the gRPC server: java.nio.channels.UnresolvedAddressException

I am trying to start the gRPC server with the property
quarkus.grpc.server.use-separate-server=true
In that case, I am getting the error below during server startup:
2023-01-19 13:12:51,762 WARN [io.qua.grp.run.GrpcServerRecorder] (main) Using legacy gRPC support, with separate new HTTP server instance. Switch to single HTTP server instance usage with quarkus.grpc.server.use-separate-server=false property
2023-01-19 13:12:51,824 INFO [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-0) Registering gRPC reflection service
2023-01-19 13:12:51,934 ERROR [io.qua.grp.run.GrpcServerRecorder] (vert.x-eventloop-thread-0) Unable to start the gRPC server: java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:149)
at java.base/sun.nio.ch.Net.checkAddress(Net.java:157)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:330)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:141)
But when I start the gRPC server with the property
quarkus.grpc.server.use-separate-server=false
the gRPC server starts, but the client is not able to access it.
I am getting the error below on the client side:
13:54:28 ERROR line=111 traceId=, parentId=, spanId=, sampled= [qu.ms.of.OfferResource] (executor-thread-0) Exception: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111: io.grpc.StatusRuntimeException: UNAVAILABLE: upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271)
at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252)
at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165)
How do we overcome this issue?
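Not a definitive answer, but a sketch of the configuration knobs that usually matter here. In separate-server mode, an UnresolvedAddressException on bind is often a sign that quarkus.grpc.server.host is set to a hostname that does not resolve. In single-server mode (use-separate-server=false), Quarkus serves gRPC on the main HTTP port (8080 by default), so the client has to target that port; the client name "offer" below is hypothetical:
# server side: single HTTP server, gRPC shares the HTTP port
quarkus.grpc.server.use-separate-server=false
quarkus.http.host=0.0.0.0
# client side: point at the HTTP port, not the old 9000 gRPC port
quarkus.grpc.clients.offer.host=<server-host>
quarkus.grpc.clients.offer.port=8080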

NiFi doesn't start on macOS Big Sur

I have installed NiFi using Homebrew, following the instructions on this page.
When I start NiFi using
nifi start
I get the following:
Java home: /usr/local/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home
NiFi home: /usr/local/Cellar/nifi/1.15.0/libexec
Bootstrap Config File: /usr/local/Cellar/nifi/1.15.0/libexec/conf/bootstrap.conf
Error: Could not find or load main class org.apache.nifi.bootstrap.RunNiFi
Caused by: java.lang.ClassNotFoundException: org.apache.nifi.bootstrap.RunNiFi
I also see this error in the nifi-app.log
2021-12-08 13:06:37,463 ERROR [Write-Ahead Local State Provider Maintenance] o.a.n.c.s.p.l.WriteAheadLocalStateProvider Failed to checkpoint Write-Ahead Log used to stor$
java.io.FileNotFoundException: ./state/local/partition-0/1.journal (No such file or directory)
at java.base/java.io.FileOutputStream.open0(Native Method)
at java.base/java.io.FileOutputStream.open(FileOutputStream.java:298)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:237)
at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:187)
at org.wali.MinimalLockingWriteAheadLog$Partition.rollover(MinimalLockingWriteAheadLog.java:788)
at org.wali.MinimalLockingWriteAheadLog.checkpoint(MinimalLockingWriteAheadLog.java:534)
at org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider$CheckpointTask.run(WriteAheadLocalStateProvider.java:286)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Any ideas?
I got the same error. Mine was due to port 8443 being occupied. You can either find out what is using 8443, or change the port (nifi.web.https.port=8443 in conf/nifi.properties) to something else. I hope this helps if you are still facing the issue.
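For reference, a sketch of that change, assuming the Homebrew layout shown in the question (the replacement port is just an example; check what holds 8443 first, e.g. with lsof -i :8443):
# /usr/local/Cellar/nifi/1.15.0/libexec/conf/nifi.properties
nifi.web.https.port=9443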

Spring boot and opentelemetry: UNIMPLEMENTED: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService

I am trying to integrate OpenTelemetry into my Spring Boot application with Jaeger. I am running Jaeger on my local machine (started from the binary, not with Docker), and I am starting my Spring Boot application with the following command:
java -javaagent:/Users/<user-name>/temp/opentelemetry-javaagent-all.jar -Dotel.traces.exporter=jaeger -Dotel.exporter.jaeger.endpoint=localhost:14250 -Dotel.resource.attributes=service.name=playground-app -Dotel.javaagent.debug=false -Dotel.metrics.exporter=jaeger -Dotel.exporter.otlp.endpoint=localhost:14250 -Dotel.exporter.otlp.traces.endpoint=localhost:14250 -Dotel.exporter.otlp.metrics.endpoint=localhost:14250 -jar playground-app-0.0.1-SNAPSHOT.jar
When I hit an endpoint, I get the following error on the command line:
[opentelemetry.auto.trace 2021-11-15 17:11:29:043 +0530] [grpc-default-executor-0] WARN io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter - Failed to export spans. Error message: UNIMPLEMENTED: unknown service opentelemetry.proto.collector.trace.v1.TraceService
followed by
[opentelemetry.auto.trace 2021-11-15 17:12:08:513 +0530] [grpc-default-executor-0] WARN io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter - Failed to export metrics
io.grpc.StatusRuntimeException: UNIMPLEMENTED: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:464)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:428)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:461)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:617)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:803)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:782)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Here is the dependencies section from build.gradle
implementation platform("io.opentelemetry:opentelemetry-bom:1.9.0")
implementation platform('io.opentelemetry:opentelemetry-bom-alpha:1.9.0-alpha')
implementation('io.opentelemetry:opentelemetry-api')
implementation('io.opentelemetry:opentelemetry-api-metrics')
implementation group: 'io.opentelemetry', name: 'opentelemetry-proto', version: '1.7.1-alpha'
I could not find anything on the web about this problem. Any idea what I am doing wrong? I didn't make any other changes in my application for this integration.
My guess:
-Dotel.metrics.exporter=jaeger is a problem. From what I have seen, only the Prometheus metrics exporter is implemented for now (if you are using https://github.com/open-telemetry/opentelemetry-java-instrumentation). There is no Jaeger metrics exporter.
Also, I don't understand why you need the OTLP endpoints (OTLP != Jaeger). I would remove them:
-Dotel.exporter.otlp.endpoint=localhost:14250
-Dotel.exporter.otlp.traces.endpoint=localhost:14250
-Dotel.exporter.otlp.metrics.endpoint=localhost:14250
Simple config:
-Dotel.traces.exporter=jaeger
-Dotel.exporter.jaeger.endpoint=http://localhost:14250
-Dotel.resource.attributes=service.name=playground-app
should be fine as a starting point.
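Putting that together, the launch command would look roughly like this (same jar and agent path as in the question; metrics are switched off with otel.metrics.exporter=none since there is no Jaeger metrics exporter, which is an assumption about what you want rather than a required setting):
java -javaagent:/Users/<user-name>/temp/opentelemetry-javaagent-all.jar \
  -Dotel.traces.exporter=jaeger \
  -Dotel.exporter.jaeger.endpoint=http://localhost:14250 \
  -Dotel.metrics.exporter=none \
  -Dotel.resource.attributes=service.name=playground-app \
  -jar playground-app-0.0.1-SNAPSHOT.jar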

Why am I always getting a connection timeout exception in the Zuul API gateway?

I am using Netflix Zuul and Eureka. There are 3 instances of my client application registered in Eureka, and all are up and running.
Why am I always getting a connection timeout exception?
Please find below the Zuul API gateway application.properties and the error.
spring.application.name=netflix-zuul-api-gateway-server
server.port=8765
eureka.client.service-url.default-zone=http://localhost:8761/eureka
Error:
Fri Mar 22 21:18:23 IST 2019
There was an unexpected error (type=Internal Server Error, status=500).
GENERAL
com.netflix.zuul.exception.ZuulException: Forwarding error
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.handleException(RibbonRoutingFilter.java:198)
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:173)
Caused by: java.lang.RuntimeException: org.apache.http.conn.ConnectTimeoutException: Connect to 5CG7084F28840.corpzone.internalzone.com:9000 [5CG7084F28840.corpzone.internalzone.com/10.212.226.226, 5CG7084F28840.corpzone.internalzone.com/fe80:0:0:0:eded:ff6a:cdf1:fed1%17, 5CG7084F28840.corpzone.internalzone.com/fe80:0:0:0:4:352c:f52b:1d1d%14] failed: connect timed out
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to 5CG7084F28840.corpzone.internalzone.com:9000 [5CG7084F28840.corpzone.internalzone.com/10.212.226.226, 5CG7084F28840.corpzone.internalzone.com/fe80:0:0:0:eded:ff6a:cdf1:fed1%17, 5CG7084F28840.corpzone.internalzone.com/fe80:0:0:0:4:352c:f52b:1d1d%14] failed: connect timed out
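One thing that may be worth double-checking in the properties above (not confirmed as the cause of this particular timeout): defaultZone is a map key in Eureka's client config, so Spring's relaxed binding may not translate the fully hyphenated form for you. A sketch of the commonly used spelling, plus the Ribbon timeout settings that govern this connect timeout (values are examples only):
eureka.client.service-url.defaultZone=http://localhost:8761/eureka
# Ribbon timeouts used by Zuul when routing via Eureka service IDs
ribbon.ConnectTimeout=3000
ribbon.ReadTimeout=10000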

SonarQube exception caught on transport layer

Good afternoon everyone. The problem is this: I have a server with SonarQube, and when I try to start the Windows service, it comes up but then stops.
The following error appears in the SonarQube log:
2017.11.14 11:04:52 WARN sea[o.e.transport.netty] [sonar-1510653879773] exception caught on transport layer [[id: 0x346b46fb, /127.0.0.1:59330 => /127.0.0.1:9001]], closing connection
java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_152]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_152]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_152]
at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_152]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[na:1.8.0_152]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [elasticsearch-1.1.2.jar:na]
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [elasticsearch-1.1.2.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
2017.11.14 11:04:52 INFO app[o.s.p.m.TerminatorThread] Process[search] is stopping
2017.11.14 11:04:52 INFO sea[o.s.p.StopWatcher] Stopping process
Do you know why this error occurs?
I have set sonar.properties correctly, including setting the value of the sonar.search.port property to 0 as this link suggests: Sonar launch error. But the problem persists.
I hope you can give me a hand.
Regards!
Uncomment the line below in the sonar.properties file and change port 9001 to 0:
#sonar.search.port=9001
sonar.search.port=0
I had the same problem and I could fix it like this:
Go to this folder: sonarqube-x.x\conf
Open this file: sonar.properties
Find the line: #sonar.web.port
Change the value from 9000 to another port, like 9002, and uncomment the line
Save your changes
Start SonarQube again
Access the server on the new port, e.g. http://localhost:9002
The reason could be the port number of SonarQube OR the one of the Elasticsearch instance used by SonarQube (I had a similar problem before). The steps to change both/one of those ports are as follows (see the sketch after this list):
Go to this folder: sonarqube-x.x\conf
Open this file: sonar.properties
For the SonarQube port:
Find: #sonar.web.port
Change the value from 9000 to another port, like 9123, and un-comment the line (remove the # at the beginning): sonar.web.port=9123
For SonarQube's Elasticsearch instance port:
Find: #sonar.search.port
Change this line to sonar.search.port=0 (this means it will pick any available port and bind to it)
Save your changes
Start SonarQube again
Access the server on the newly specified SonarQube port: http://localhost:9123
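For reference, the resulting sonar.properties lines would look roughly like this (the web port is just an example):
# sonarqube-x.x\conf\sonar.properties
sonar.web.port=9123
sonar.search.port=0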
I experienced this error when upgrading SonarQube from version 5.6.7 to 6.7.1.
Originally I thought this was due to the port number, but upon checking web.log I noticed that there was an error relating to the LDAP plugin (2.2.0.608).
ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube org.sonar.plugins.ldap.LdapException: The property 'ldap.url' is empty and no realm configured to try auto-discovery.
Updating the sonar.properties file with the correct configuration allowed SonarQube to start.
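For reference, a sketch of the kind of LDAP settings that were missing in my case (sonar.security.realm and the ldap.* keys are the LDAP plugin's properties; the host and DNs below are placeholders, not the real values):
# sonar.properties
sonar.security.realm=LDAP
ldap.url=ldap://ldap.example.org:389
ldap.bindDn=cn=sonar,ou=service,dc=example,dc=org
ldap.bindPassword=<secret>
ldap.user.baseDn=ou=people,dc=example,dc=org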
I ran into exactly the same issue as you did.
I started SonarQube with MariaDB 5.5, but I found some error messages in sonarqube-x.x/logs/web.log:
2021.01.21 14:36:17 INFO web[][o.s.p.ProcessEntryPoint] Starting web
......
2021.01.21 14:36:19 ERROR web[][o.s.s.p.Platform] Web server startup failed: Unsupported mysql version: 5.5. Minimal supported version is 5.6.
So I changed my database to MySQL 5.7 and it started successfully.
Not quite sure you had the same problem, but check these log files and see what actually happened during startup.
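If you want to check which database your instance points at, the relevant sonar.properties entries look roughly like this (URL and credentials are placeholders):
# sonar.properties
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.jdbc.username=<user>
sonar.jdbc.password=<password>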
