How to configure the oplog URL for MongoDB with sharding enabled in Rocket.Chat - rocket.chat

I have configured MongoDB with sharding enabled. I can't find any Rocket.Chat documentation on what the oplog URL should be.
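For reference, Rocket.Chat normally takes the oplog location from the MONGO_OPLOG_URL environment variable, which points at the local database of a MongoDB replica set; a mongos router does not expose an oplog, so with sharding enabled this URL cannot simply point at the router. Below is a minimal docker-compose-style sketch of the usual non-sharded setup, assuming a replica set named rs0 on a host called mongo (both names are illustrative):

  services:
    rocketchat:
      image: rocketchat/rocket.chat
      environment:
        # Application data goes through the normal connection string.
        MONGO_URL: mongodb://mongo:27017/rocketchat?replicaSet=rs0
        # Oplog tailing reads from the replica set's local database.
        MONGO_OPLOG_URL: mongodb://mongo:27017/local?replicaSet=rs0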

Related

How can we set up a Spring Cloud Data Flow datasource to use Kerberos authentication

I'd like to install SCDF 2.6.x with an Oracle DB and Kerberos authentication.
I am following the Spring Cloud Data Flow asciidoc sources and the online guide at https://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#_oracle.
There's clarity on how to use an Oracle datasource, but only with username and password authentication.
My aim is to use Kerberos authentication with an Oracle driver, and to specify this in the server-config.yml for the Kubernetes deployment.
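The SCDF server is a Spring Boot application, so its datasource is configured through the standard spring.datasource.* properties in server-config.yml. A rough sketch of what a Kerberos-based Oracle datasource might look like follows; the Oracle driver's Kerberos connection properties and the ticket-cache path are assumptions to verify against the Oracle JDBC documentation, and the host/service names are placeholders:

  spring:
    datasource:
      # Placeholder host and service name.
      url: jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1
      driver-class-name: oracle.jdbc.OracleDriver
      # No username/password: the driver authenticates with the Kerberos ticket cache.
      hikari:
        data-source-properties:
          "[oracle.net.authentication_services]": "(KERBEROS5)"
          "[oracle.net.kerberos5_cc_name]": /tmp/krb5cc_scdf
          "[oracle.net.kerberos5_mutual_authentication]": "true"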

Configuration or link required to connect to a Pivotal Cloud Cache cluster from Spring Boot microservices

I am setting up Spring Boot microservices with a bi-directional Pivotal Cloud Cache cluster.
I have set up the bi-directional cluster in Pivotal Cloud and have a list of locators with ports.
I have already found some online docs:
https://github.com/pivotal-cf/PCC-Sample-App-PizzaStore
But I couldn't understand which configuration tells the Spring Boot app how to connect.
I am looking for a tutorial or reference showing how to link a Spring Boot app with PCC (GemFire).
The way you configure an app running in PCF (Pivotal Cloud Foundry) to talk to a PCC (Pivotal Cloud Cache) service instance is by binding the app to that service instance. You can bind it either by running the cf bind-service command or by adding the service name to the app's manifest.yml, something like the following:
applications:
- name: cloudcache-pizza-store   # app name is illustrative
  path: build/libs/cloudcache-pizza-store-1.0.0-SNAPSHOT.jar
  services:
  - dev-service-instance
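Binding from the CLI with cf bind-service <app-name> dev-service-instance, followed by a restage, has the same effect as the services entry in the manifest.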
I hope you are using Spring Boot for Apache Geode & Pivotal GemFire (SBDG) in your app; if not, I recommend it, as it makes connecting to a PCC service instance extremely easy. SBDG contains the logic to extract the credentials and hostname:port pairs needed to connect to a service instance.
As an app developer you just need to:
Create the service instance.
Bind your app to the service instance.
The boilerplate code for configuring credentials, hostnames, and IPs is handled by SBDG.
When you deploy an application in Cloud Foundry (or Pivotal Cloud), you need to bind it to one or more services. Service details are then automatically exposed to the app via the VCAP_SERVICES environment variable. In the case of PCC this will include the name and port of the locator. By adding the spring-geode-starter (or spring-gemfire-starter) jar to the application, it will automatically process the VCAP_SERVICES value and extract the necessary endpoint information in order to connect to the cluster.
Furthermore, if security is enabled on your PCC instance, you will also need to have created a service key. As with the locator details, the necessary credentials will be exposed via VCAP_SERVICES and the starter jar will automatically process and configure them.

Connecting to Redis Cluster from Spring Boot caching abstraction

I am using Spring Boot with caching abstraction support to connect to a Redis server for storing/fetching cached data.
I use the properties below to connect to the Redis server:
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.password=mypassword
spring.cache.type=redis
I am able to successfully connect to redis server and store/fetch data.
For high availability, it was decided to use 3 nodes (1 master and 2 slave nodes) for the Redis server.
In that case, I am not sure how to provide the configuration to connect to a Redis cluster from my Spring Boot application.
Are there any properties supported by Spring Boot to connect to a Redis cluster?
I am using the out-of-the-box Spring caching abstraction with the @Cacheable annotation.
I have the below 2 dependencies added to my pom.xml:
spring-boot-starter-cache and spring-boot-starter-data-redis
and I am also using Spring Boot 1.5.13.
Is there support for connecting to a Redis cluster when using the Spring caching abstraction in Spring Boot?
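For what it's worth, Spring Boot 1.5.x does expose cluster settings through spring.redis.cluster.*. Below is a sketch in application.yml form with placeholder host names; note that a 1-master/2-slave setup is often run with Redis Sentinel rather than Redis Cluster, in which case spring.redis.sentinel.master and spring.redis.sentinel.nodes would be used instead:

  spring:
    redis:
      password: mypassword
      cluster:
        # Initial contact points; the client discovers the remaining nodes from these.
        nodes: redis-node1:6379,redis-node2:6379,redis-node3:6379
        max-redirects: 3
    cache:
      type: redis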

Running Apache Ignite Cluster on Pivotal Cloud Foundry environment

I am trying to build an Apache Ignite cluster on a Pivotal Cloud Foundry environment as follows.
Created a Spring Boot app that starts a new Ignite node and deployed it on Cloud Foundry (e.g. Ignite-Node1).
Created another Spring Boot app that also starts a new Ignite node and deployed it on Cloud Foundry (e.g. Ignite-Node2).
Now, even though both apps are running in the same Cloud Foundry environment, they are not forming an Ignite cluster because they are not able to discover each other.
The Apache Ignite documentation provides example configuration details for AWS and Google Cloud; however, there are no examples for Pivotal Cloud Foundry.
Can somebody provide me with an example configuration for getting an Ignite cluster running in a Cloud Foundry environment?
Srini
Container-to-container (app-to-app) networking is not supported by default on Pivotal Cloud Foundry, although it is possible to enable.
By default all communications must be made 'via the front door' through Cloud Foundry's router, either via HTTP or on a TCP port. One cannot choose which instance of an application to hit. This is due to change with the Container Networking initiative, the progress of which you can check on Pivotal Tracker. There is a detailed design document which is publicly available.
An alternative approach which is more appropriate for data services is to deploy them using BOSH. As a PCF user, you may wish to consider creating a PCF Tile for Apache Ignite.
My company has been helping Hazelcast create a PCF Tile that will create dedicated-VM clusters on-demand. Perhaps you could consider making use of Hazelcast instead?

Configure Elasticsearch in JHipster with URL

I want to use a cloud Elasticsearch service (Bonsai) with JHipster.
Bonsai provides a BONSAI_URL env variable.
How is this properly configured in application-prod.yml?
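This depends on which Elasticsearch client your generated project uses. Below is a sketch of application-prod.yml assuming a JHipster version that connects to hosted Elasticsearch over HTTP via the Jest client (spring-boot-starter-data-jest); the property path is an assumption to check against your generated configuration:

  spring:
    data:
      jest:
        # Bonsai injects BONSAI_URL (an https://user:password@host URL) into the environment.
        uri: ${BONSAI_URL}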
