We have a Spring Boot 2 REST application consisting of multiple microservices. An ELB is used for service registration and discovery, so there are services available at, e.g.:
http://<dev-env>/api/serviceA/actuator/info
{
  "app" : {
    "name" : "App Name",
    "description" : "App Description"
  },
  "git" : {
    "branch" : "origin/brancha",
    "commit" : {
      "id" : "204f7a0",
      "time" : "2019-07-10T11:09:13Z"
    }
  },
  "build" : {
    "version" : "0.0.1-SNAPSHOT",
    "build" : {
      "version" : "v_0.1.151"
    }
  }
}
http://<dev-env>/api/serviceB/actuator/info
....
Every service uses the Spring Boot 2 Actuator.
How can we aggregate this version information from two or more microservices into a single page (a kind of dashboard)? Are there any ready-made solutions?
In your dashboard application, just open an HTTP connection and query each microservice's actuator endpoint. There are plenty of ways to get started; choose one you are comfortable with.
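A minimal sketch of that approach, assuming the /api/&lt;service&gt;/actuator/info paths from the question (the dev-env base URL and the service names are placeholders), using only the JDK 11 HttpClient:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class InfoDashboard {

    // Build the /actuator/info URL for each service behind the load balancer.
    static Map<String, String> infoUrls(String gatewayBase, List<String> services) {
        Map<String, String> urls = new LinkedHashMap<>();
        for (String service : services) {
            urls.put(service, gatewayBase + "/api/" + service + "/actuator/info");
        }
        return urls;
    }

    // Fetch every info payload; the raw JSON bodies can then be rendered
    // on a single dashboard page.
    static Map<String, String> fetchAll(Map<String, String> urls) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        Map<String, String> bodies = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : urls.entrySet()) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(entry.getValue())).GET().build();
            bodies.put(entry.getKey(), client.send(request, HttpResponse.BodyHandlers.ofString()).body());
        }
        return bodies;
    }

    public static void main(String[] args) {
        // Placeholder base URL and service names; substitute your real environment.
        Map<String, String> urls = infoUrls("http://dev-env", List.of("serviceA", "serviceB"));
        urls.forEach((service, url) -> System.out.println(service + " -> " + url));
    }
}
```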
My Elasticsearch 7.8.0 is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up Elasticsearch snapshot backups to OCI Object Storage using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While registering the repository, I get an "s_s_l_peer_unverified_exception":
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "bucket": "es-backup"
  }
}
Response:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_verification_exception",
        "reason" : "[s3-repository] path is not accessible on master node"
      }
    ],
    "type" : "repository_verification_exception",
    "reason" : "[s3-repository] path is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
      "caused_by" : {
        "type" : "sdk_client_exception",
        "reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
        "caused_by" : {
          "type" : "s_s_l_peer_unverified_exception",
          "reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
        }
      }
    }
  },
  "status" : 500
}
First, make sure you know when to use the S3 Compatibility API.
"endpoint" : "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
Change OCI_TENANCY to your TENANCY_NAMESPACE. Please refer to this link for more information.
You can find your tenancy namespace on the Administration -> Tenancy Details page.
Well, you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com, where your bucket name is part of the domain. You can try that URL in your browser and you'll get a similar security warning about the certificate.
If you look at https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI you'll see a mention of:
The application must use path-based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.
AWS is migrating from path-based to subdomain-based URLs for S3 (https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/), so the ES S3 plugin is probably defaulting to doing things the new AWS way.
Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:
{
  "s3-repository": {
    "type": "s3",
    "settings": {
      "bucket": "es-backup",
      "client": "default",
      "endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
      "region": "us-ashburn-1"
    }
  }
}
My guess is that a full URL for the endpoint sets the protocol and path_style_access, or that 6.8 didn't require you to set path_style_access to true but 7.8 might. Either way, try a full URL or set path_style_access to true. Relevant docs: https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html
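Putting the two suggestions together, a registration request along these lines might work (a sketch only: TENANCY_NAMESPACE and OCI_REGION are placeholders for your values, and path_style_access is the repository-s3 client setting from the Elastic docs linked above):

```
PUT /_snapshot/s3-repository
{
  "type": "s3",
  "settings": {
    "client": "default",
    "region": "OCI_REGION",
    "endpoint": "https://TENANCY_NAMESPACE.compat.objectstorage.OCI_REGION.oraclecloud.com",
    "path_style_access": true,
    "bucket": "es-backup"
  }
}
```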
I have a Zuul + Eureka + Spring Boot service endpoint + HATEOAS response configuration. When I access the service through the Zuul gateway, the resource links in the response are direct links to the service endpoints. Shouldn't they be gateway links? What am I missing here?
Gateway Endpoint : http://localhost:8762/catalog/products/10001
Direct Service Endpoint : http://localhost:8100/products/10001
application.properties for Zuul
spring.application.name=zuul-server
eureka.client.service-url.default-zone=http://localhost:8761/eureka/
# Map paths to services
zuul.routes.catalog-service=/catalog/**
zuul.addProxyHeaders=true
Actual Response on Gateway Endpoint : http://localhost:8762/catalog/products/10001
{
  "title" : "The Title",
  "description" : "The Description",
  "brand" : "SOME BRAND",
  "price" : 100,
  "color" : "Black",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8100/products/10001"
    }
  }
}
The expected response should have the gateway URL in href:
{
  "title" : "The Title",
  "description" : "The Description",
  "brand" : "SOME BRAND",
  "price" : 100,
  "color" : "Black",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8762/catalog/products/10001"
    }
  }
}
I had this issue and resolved it via this post on GitHub.
The gist is
spring-boot <= 2.1:
@Bean
ForwardedHeaderFilter forwardedHeaderFilter() {
    return new ForwardedHeaderFilter();
}
spring-boot >= 2.2:
server.forward-headers-strategy=framework
While @Nikhil said he fixed it by just adding the @Bean, in my case it was just the opposite:
I added forward-headers-strategy: FRAMEWORK (currently, server.use-forward-headers is deprecated) and it worked for me that way.
Thank you @Zipster!
Additional info:
The property server.forward-headers-strategy accepts three possible values:
NONE
NATIVE
FRAMEWORK
I tested the three options to check the differences.
My Zuul gateway (port 8080) routes are configured as follows:
zuul:
  prefix: /api/v0
  sensitive-headers: Cookie,Set-Cookie # Allow Authorization header forwarding
  routes:
    api-v0-questions:
      path: /questions/**
      service-id: api-v0-questions
      strip-prefix: false
NATIVE - the URL points to the gateway and strips the /api/v0 (top) prefix:
"_links": {
"self": {
"href": "http://localhost:8080/questions/5f6fa0300ec87b34b70393ca"
}
FRAMEWORK - URL points to gateway and DOES NOT strip the /api/v0 prefix:
"_links": {
"self": {
"href": "http://localhost:8080/api/v0/questions/5f6fa0300ec87b34b70393ca"
}
NONE - the URL points to the service, just as if no property were set at all:
"_links": {
"self": {
"href": "http://localhost:8081/questions/5f6f96ba0ec87b34b70393b2"
}
I just started working with Elasticsearch and am now trying to create my first watcher.
There is some information in the Elasticsearch documentation: https://www.elastic.co/guide/en/x-pack/current/watcher-getting-started.html
And now I try to create one:
https://es.origin-test.cloud.rccf.ru/apiconnect508/_xpack/watcher/watch/audit_watch
PUT method + auth headers
with this body:
{ "trigger" : {
"schedule": {
"interval": "1h"
}
}, "actions" : { "send_email" : {
"email" : {
"to" : "ext_avolkova#rencredit.ru",
"subject" : "Watcher Notification",
"body" : "{{ctx.payload.hits.total}} logs found"
} } } }
But now I get this error:
No handler found for uri
[/apiconnect508/_xpack/watcher/watch/log_audit] and method [PUT]
Please help me create one simple watcher.
Based on the support matrix, Elasticsearch 2.x is not compatible with X-Pack.
You might want to install Watcher as a separate plugin using this document.
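If you are actually on a newer Elasticsearch (where Watcher ships as part of X-Pack), note two things: the modern endpoint is PUT _watcher/watch/<id>, and {{ctx.payload.hits.total}} only resolves if the watch also declares a search input. A hedged sketch (the audit-* index pattern and the match_all query are placeholders, not from the question):

```
PUT _watcher/watch/audit_watch
{
  "trigger" : {
    "schedule" : { "interval" : "1h" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "audit-*" ],
        "body" : { "query" : { "match_all" : {} } }
      }
    }
  },
  "actions" : {
    "send_email" : {
      "email" : {
        "to" : "ext_avolkova@rencredit.ru",
        "subject" : "Watcher Notification",
        "body" : "{{ctx.payload.hits.total}} logs found"
      }
    }
  }
}
```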
My application has users and servers; every user can have multiple servers. I would like to get a server by giving the user id and the server name.
Example database:
{
  "_id" : ObjectId("5a168093abfc7d0da1eaf039"),
  "email" : "test@test.com",
  "name" : "asdf",
  "password" : "$2a$10$sC2WjxBdmsmN/x4GI7rgS.HBr4C.W8oxFJ7p/WMC24YcoARX8pNba",
  "is_admin" : false,
  "servers" : [
    {
      "_id" : BinData(3,"5xEk0HAIc7P+JevFrKP5lQ=="),
      "name" : "asdf",
      "host" : "asdf",
      "ssl" : true
    }
  ],
  "_class" : "com.stack.database.main.model.User"
}
The user id is unique, and the server name is unique per user. I would like to get the Server model (an element of the servers list; not an actual @Document but a subdocument) by giving the user id (5a168093abfc7d0da1eaf039) and the server name (asdf). I would like to use a ReactiveCrudRepository for this.
I know I can get a user with a certain server name by using this in the UserRepository:
Mono<User> findByServers_Name(String name);
But I want to get the server model based on the user id and server name. I was thinking about something like this:
Mono<Server> findByIdAndServers_Name(ObjectId id, String name);
Note that this also has to be in the user repository, because Server has no repository of its own; it is a subdocument.
How can I get a server model based on the user id and server name with reactive CRUD repositories?
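One caveat worth spelling out: a derived query still returns the root document, so findByIdAndServers_Name would give you a Mono<User>, not a Mono<Server>; the matching subdocument has to be picked out afterwards (e.g. with user.flatMap(...)). The extraction step, stripped of Spring types for illustration (the class shapes below are assumptions based on the document above, not the real model):

```java
import java.util.List;
import java.util.Optional;

// Plain-Java sketch: after fetching the User (e.g. via a derived query),
// pull the matching Server out of its servers list.
class Server {
    final String name;
    final String host;
    Server(String name, String host) { this.name = name; this.host = host; }
}

class User {
    final List<Server> servers;
    User(List<Server> servers) { this.servers = servers; }
}

public class ServerLookup {

    // Find the server with the given name inside the user's servers list.
    static Optional<Server> serverByName(User user, String serverName) {
        return user.servers.stream()
                .filter(s -> s.name.equals(serverName))
                .findFirst();
    }

    public static void main(String[] args) {
        User user = new User(List.of(new Server("asdf", "asdf")));
        System.out.println(serverByName(user, "asdf").isPresent()); // prints true
    }
}
```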
I'm having issues with connecting from my Go client to my es node.
I have elasticsearch behind an nginx proxy that sets basic auth.
All settings are default in ES besides memory.
Via browser it works wonderfully, but not via this client:
https://github.com/olivere/elastic
I read the docs and it says it uses the /_nodes/http API to connect. Now this is probably where I did something wrong, because the response from that API looks like this:
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "u6TqFjAvRBa3_4FndfKh4w" : {
      "name" : "u6TqFjA",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "5.6.2",
      "build_hash" : "57e20f3",
      "roles" : [
        "master",
        "data",
        "ingest"
      ],
      "http" : {
        "bound_address" : [
          "[::1]:9200",
          "127.0.0.1:9200"
        ],
        "publish_address" : "127.0.0.1:9200",
        "max_content_length_in_bytes" : 104857600
      }
    }
  }
}
I'm guessing I have to set the IPs to my actual IP/domain (my domain is like es01.somedomain.com).
So how do I correctly configure Elasticsearch so that my Go client can connect?
My config files for nginx look similar to this: https://www.elastic.co/blog/playing-http-tricks-nginx
Edit: I found a temporary solution by setting elastic.SetSniff(false) in the options for the client, but I think that means I can't scale ES horizontally. So I'm still looking for an alternative.
You are looking for the HTTP options, specifically http.publish_host and http.publish_port, which should be set to the publicly reachable address and port of the Nginx server proxying the ES node.
Note that with Elasticsearch listening on 127.0.0.1:9300 for the transport, you won't be able to form a cluster with nodes on other hosts. The transport can be configured similarly with the transport options.
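A hedged sketch of the corresponding elasticsearch.yml settings (es01.somedomain.com stands in for your real proxy host, and port 443 assumes Nginx terminates TLS there):

```yaml
# elasticsearch.yml: advertise the proxy's address to sniffing clients
# so /_nodes/http returns a reachable publish_address
http.publish_host: es01.somedomain.com
http.publish_port: 443
```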