Can't hit spring-cloud-dataflow HTTP(source) application

I have been following a tutorial to create a stream with spring-cloud-dataflow. It creates the following stream -
http --port=7171 | transform --expression=payload.toUpperCase() | file --directory=c:/dataflow-output
All three applications start up fine. I am using RabbitMQ, and if I log in to the Rabbit UI I can see that two queues get created for the stream. The tutorial said I should be able to POST a message to http://localhost:7171 using Postman. When I do this, nothing happens: I do not get a response, I do not see anything in the queues, and no file is created. In my dataflow logs I can see this being listed.
local: [{"targets":["skipper-server:20060","skipper-server:20052","skipper-server:7171"],"labels":{"job":"scdf"}}]
The tutorial was using an older version of dataflow that I do not believe made use of skipper. Since I am using skipper, does that change the URL? I tried http://skipper-server:7171 and http://localhost:7171, but neither of these seems to be reaching the endpoint. I did turn off SSL cert verification in the Postman settings.
Sorry for asking so many dataflow questions this week. Thanks in advance.

I found that the port I was trying to hit (7171), which was on my skipper server, was not exposed. I had to expose and publish the port in the skipper-server service of my docker-compose .yml file; once the port is published on the Docker host, POSTing to http://localhost:7171 reaches the http source (http://skipper-server:7171 only resolves inside the Docker network). I found this post, which clued me in.
How to send HTTP requests to my server running in a docker container?
skipper-server:
  image: springcloud/spring-cloud-skipper-server:2.1.2.RELEASE
  container_name: skipper
  expose:
    - "7171"                       # added: the port the http source listens on
  ports:
    - "7577:7577"
    - "9000-9010:9000-9010"
    - "20000-20105:20000-20105"
    - "7171:7171"                  # added: publish the http source's port on the Docker host
  environment:
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_LOW=20000
    - SPRING_CLOUD_SKIPPER_SERVER_PLATFORM_LOCAL_ACCOUNTS_DEFAULT_PORTRANGE_HIGH=20100
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:1111/dataflow
    - SPRING_DATASOURCE_USERNAME=xxxxx
    - SPRING_DATASOURCE_PASSWORD=xxxxx
    - SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.mariadb.jdbc.Driver
    - SPRING_RABBITMQ_HOST=127.0.0.1
    - SPRING_RABBITMQ_PORT=xxxx
    - SPRING_RABBITMQ_USERNAME=xxxxx
    - SPRING_RABBITMQ_PASSWORD=xxxxx
  entrypoint: "./wait-for-it.sh mysql:1111 -- java -Djava.security.egd=file:/dev/./urandom -jar /spring-cloud-skipper-server.jar"

Related

Kibana says "Kibana server is not ready yet." and "elasticsearch-reset-password" returns an error

I am new to Elasticsearch/Kibana and am trying to set up a basic installation via Docker. I've backed myself into a corner, and I need help finding my way out.
I have the following docker-compose.yml.
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.0
    environment:
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - IPC_LOCK
    ports:
      - "9200:9200"
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:8.4.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticssearch:9200
    ports:
      - "5601:5601"
I run docker compose up and the logs look mostly good. However, when I try to connect to http://localhost:5601/, I see a message "Kibana server is not ready yet." that never goes away.
The end of the Elasticsearch log looks like this.
{"#timestamp":"2022-08-26T15:26:25.616Z", "log.level":"ERROR", "message":"exception during geoip databases update", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#4]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.ElasticsearchException","error.message":"not all primary shards of [.geoip_databases] index are active","error.stack_trace":"org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:134)\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:274)\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:102)\n\tat org.elasticsearch.ingest.geoip#8.4.0/org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:48)\n\tat org.elasticsearch.server#8.4.0/org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42)\n\tat org.elasticsearch.server#8.4.0/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769)\n\tat org.elasticsearch.server#8.4.0/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
2022-08-26T15:26:26.005783998Z {"#timestamp":"2022-08-26T15:26:26.002Z", "log.level": "INFO", "current.health":"GREEN","message":"Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.geoip_databases][0]]]).","previous.health":"RED","reason":"shards started [[.geoip_databases][0]]" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:26.264786433Z {"#timestamp":"2022-08-26T15:26:26.264Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-Country.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#2]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:26.304814423Z {"#timestamp":"2022-08-26T15:26:26.304Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
2022-08-26T15:26:27.017126446Z {"#timestamp":"2022-08-26T15:26:27.016Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-City.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[1de6b5b3d4cb][generic][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"vGjmfQNWTRS2sEeG0AiwuQ","elasticsearch.node.id":"3CcC2gJmRk2tQZOQTwU9HA","elasticsearch.node.name":"1de6b5b3d4cb","elasticsearch.cluster.name":"docker-cluster"}
I'm not sure if that ERROR about "geoip databases" is a problem. It does look like cluster health is "GREEN".
The end of the Kibana logs looks like this.
[2022-08-26T15:26:25.032+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
2022-08-26T15:26:25.091816903Z [2022-08-26T15:26:25.091+00:00][INFO ][plugins.cloudSecurityPosture] Registered task successfully [Task: cloud_security_posture-stats_task]
2022-08-26T15:26:26.081102019Z [2022-08-26T15:26:26.080+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
2022-08-26T15:26:26.155818080Z [2022-08-26T15:26:26.155+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticssearch
2022-08-26T15:26:26.982333104Z [2022-08-26T15:26:26.981+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
That "Unable to retrieve version information from Elasticsearch nodes." ERROR looks more like it could be a problem, but I'm not sure what to do about it. One online question that sounds similar comes down to the difference between ELASTICSEARCH_HOSTS and ELASTICSEARCH_URL for an earlier version of Elastic that doesn't seem relevant here.
Poking around online also turns up situations in which the "Kibana server is not ready yet." error is a problem with the security setup. The whole security setup part is a bit confusing to me, but it seems like one thing that might have happened is that I failed to set up passwords correctly. I'm trying to start over, so I shelled into the Elasticsearch instance and ran elasticsearch-reset-password --username elastic. I saw the following error.
elasticsearch#1de6b5b3d4cb:~$ elasticsearch-reset-password --username elastic
15:24:34.593 [main] WARN org.elasticsearch.common.ssl.DiagnosticTrustManager - failed to establish trust with server at [172.18.0.2]; the server provided a certificate with subject name [CN=1de6b5b3d4cb], fingerprint [cc4a98abd8b44925c631d7e4b05f048317c8e02b], no keyUsage and extendedKeyUsage [serverAuth]; the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [IP:172.18.0.3,DNS:localhost,IP:127.0.0.1,DNS:1de6b5b3d4cb]; the certificate is issued by [CN=Elasticsearch security auto-configuration HTTP CA]; the certificate is signed by (subject [CN=Elasticsearch security auto-configuration HTTP CA] fingerprint [ba8730cc6481e4847e4a14eff4f774ca1c96ad0b] {trusted issuer}) which is self-issued; the [CN=Elasticsearch security auto-configuration HTTP CA] certificate is trusted in this ssl context ([xpack.security.http.ssl (with trust configuration: Composite-Trust{JDK-trusted-certs,StoreTrustConfig{path=certs/http.p12, password=<non-empty>, type=PKCS12, algorithm=PKIX}})])
java.security.cert.CertificateException: No subject alternative names matching IP address 172.18.0.2 found
Those are all the problems I have encountered. I don't know what they mean or which are significant, and Googling doesn't turn up any clear next steps. Any suggestions as to what is going on here?
Never mind. Stupid mistake. I misspelled elasticsearch in the line.
ELASTICSEARCH_HOSTS=http://elasticssearch:9200
"ss" instead of "s". Easy to overlook. The error message in the Kibana logs was telling me what the problem was. I just didn't know how to interpret it.
Even though this was just a typo I'm going to leave this question up in case someone makes the same mistake and gets confused in the same way.

Caddy not working in api-platform 2.6.4 distribution - panic: proto: file "pb.proto" is already registered

When I try to use api-platform version 2.6.4, I am not able to run it. When I build and start the containers and check the logs, Caddy is not working and I get an error like the one below. Any idea? The Caddy version is 2.3.0.
caddy_1 | panic: proto: file "pb.proto" is already registered
caddy_1 | See https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict
tureality_caddy_1 exited with code 2
Other people have reported this bug, and I had it too.
Fortunately, the bug has just been fixed by Dunglas himself. :)
https://github.com/api-platform/api-platform/issues/1881#issuecomment-822663193
The fix was made at the Mercure level and not in the API Platform source code itself, so you can keep your current version.
You just have to docker-compose up and it will work.

RedisCommandTimeoutException when connecting a Micronaut Lambda to ElastiCache

I am trying to create a Lambda using Micronaut 2 that connects to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration, and in-transit encryption is enabled in the ElastiCache config.
redis:
  uri: redis://{aws master node endpoint}
  password: {password}
  tls: true
  ssl: true
  io-thread-pool-size: 5
  computation-thread-pool-size: 4
I am getting the exception below:
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
    at com.sun.proxy.$Proxy22.set(Unknown Source)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
    at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried Spring Cloud Function on the same network (literally on the same Lambda) with the same ElastiCache setup, and it works fine.
Any direction that can help me debug this issue would be appreciated.
This might be late.
The first thing to mention here is that an ElastiCache cluster can only be accessed from within its VPC. If you want to access it from the internet, a NAT gateway needs to be enabled.
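As a minimal sketch of the first point, this is roughly how the Lambda could be attached to the cache's VPC in an AWS SAM template, assuming the function is deployed with SAM; the handler class, subnet IDs, and security-group IDs below are placeholders, not values from the original setup.
Resources:
  BookRedisFunction:
    Type: AWS::Serverless::Function    # hypothetical function resource
    Properties:
      Runtime: java11
      Handler: hello.world.function.HttpBookRedisHandler   # placeholder handler class
      MemorySize: 512
      Timeout: 30
      VpcConfig:
        SecurityGroupIds:
          - sg-0123456789abcdef0       # must allow outbound traffic to the cache port (6379 by default)
        SubnetIds:
          - subnet-0123456789abcdef0   # private subnets in the same VPC as the ElastiCache cluster
          - subnet-0fedcba9876543210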

Run time issues with Ballerina Integrator

I am trying to run the File Integration with FTP sample provided by Ballerina Integrator.
While running the service I am facing the same issue every time.
I have installed Ballerina Integrator only. I have also done a fresh uninstall and reinstall; after that, the same issue persists.
Please help me.
I could successfully run the sample with the following configuration (sample data are given). Here I have used a secured FTP (SFTP) server.
listener ftp:Listener dataFileListener = new({
    protocol: ftp:SFTP,
    host: "18.156.78.137",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "cloudloc",
            password: "fsf#$#213"
        }
    },
    path: "/clouddir/"
});

ftp:ClientEndpointConfig ftpConfig = {
    protocol: ftp:SFTP,
    host: "18.156.78.137",
    port: 22,
    secureSocket: {
        basicAuth: {
            username: "cloudloc",
            password: "fsf#$#213"
        }
    }
};
Make sure you set the path parameter correctly in the dataFileListener. Without this parameter I could reproduce your attached error.
Once this is correctly configured, you should get a log printed like the following.
2020-01-24 15:13:23,758 INFO [wso2/ftp] - Listening to remote server at 18.156.78.137...
2020-01-24 15:13:24,333 INFO [wso2/file_integration_using_ftp] - Added file path: /clouddir/a1.txt
2020-01-24 15:13:24,415 INFO [wso2/file_integration_using_ftp] - Added file: /clouddir/a1.txt - 12
Just install Ballerina Integrator alone; it is packaged with Ballerina 1.0.2, so there is no need to install Ballerina again separately. As for why no output was coming from VS Code: the extensions in the VS Code marketplace have all been upgraded to the latest version, so the locally installed "BI with Ballerina" was an older version while the one in VS Code was the latest. This version mismatch was the main problem I faced.

drone.io 0.5 slack no longer working

We had Slack notifications working in drone.io 0.4 just fine, but since we updated to 0.5 I can't get them working despite following the documentation.
Before, it was like this:
build:
  build and deploy stuff...
notify:
  slack:
    webhook_url: $$SLACK_WEBHOOK_URL
    channel: continuous_integratio
    username: drone
You can see here that I used the $$ to reference the special drone config file of old.
Now my latest attempt looks like this
pipeline:
  build and deploy stuff...
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: continuous_integratio
    username: drone
According to the documentation, slack is now indented within the pipeline (previously build) level.
I tried changing slack out for notify like it was before, used the SLACK_WEBHOOK secret only via the drone CLI, and there were other things I attempted as well.
Does anyone know what I might be doing wrong?
This is the (almost exact) YAML I am using with Slack notifications enabled, with the exception that I've masked the credentials:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/XXXXXXXXX/YYYYYYYYY/ZZZZZZZZZZZZZZZZZZZZZZZZ
    when:
      status: [ success, failure ]
There is unfortunately nothing in your example that jumps out, perhaps with the exception of the channel name having a typo (although I'm not sure whether that represents your real YAML configuration or not).
If you are attempting to use secrets (via the CLI), you need to make sure you sign your YAML file and commit the signature file to your repository. You can then reference your secret in the YAML similar to 0.4, but with a slightly different syntax:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  slack:
    image: plugins/slack
    webhook: ${SLACK_WEBHOOK}
    when:
      status: [ success, failure ]
You can read more about secrets at http://readme.drone.io/usage/secret-guide/
You can also invoke the plugin directly from the command line to help test different input values. This can help with debugging. See https://github.com/drone-plugins/drone-slack#usage
The issue was that in 0.4 the notify plugin was located outside the scope of the pipeline (then called build), whereas since 0.5 it is located inside the pipeline. Combined with the fact that when a pipeline fails it exits the scope immediately, this means the slack (previously notify) step never gets reached anymore.
The solution to this is to just explicitly tell it to execute the step on failure with the when command:
when:
  status: [ success, failure ]
This is actually mentioned in the getting-started guide, but I didn't read it through to the end, as I was aiming to get things up and running quickly and didn't worry about what I considered to be edge cases.
