Share objects across multiple containers - spring-boot

We are developing a Spring Boot application that is deployed on OpenShift 3. The application should be scalable to at least two pods, but we use internal caches and other "global" data (some lists, some maps, ...) which should be the same (i.e. shared) for all pods.
Is there a way to achieve such data sharing a) with a service that is embedded inside the Spring Boot application itself (which implies that the pods need to find/know each other), or does it b) in every case need a standalone (and potentially also scalable) cache service?
a)
|---- Application ----|
|                     |
|  |--------------|   |
|  | Pod 1    | * |   |
|  |----------^---|   |
|             |       |
|  |----------v---|   |
|  | Pod 2    | * |   |
|  |----------^---|   |
|             |       |
|  |----------v---|   |
|  | Pod n    | * |   |
|  |--------------|   |
|                     |
|---------------------|
* "embedded cache service"
b)
|---- Application ----|
|                     |
|  |-------------|    |
|  | Pod 1       |----|----\
|  |-------------|    |     \
|                     |      \
|  |-------------|    |       \   |-----------------------|
|  | Pod 2       |----|-----------| Cache Service/Cluster |
|  |-------------|    |       /   |-----------------------|
|                     |      /
|  |-------------|    |     /
|  | Pod n       |----|----/
|  |-------------|    |
|                     |
|---------------------|
Typically, if we used memcached or Redis, I think b) would be the only solution. But what about Hazelcast?

With Hazelcast, you can use both a) and b).
For scenario a), since OpenShift runs Kubernetes under the hood, you can use the Hazelcast Kubernetes discovery plugin so that pods deployed in the same Kubernetes cluster discover each other and form a cluster: https://github.com/hazelcast/hazelcast-kubernetes
For scenario b), Hazelcast has an OpenShift image as well, which requires an Enterprise subscription: https://github.com/hazelcast/hazelcast-openshift. If you need the open-source version, you can use the Hazelcast Helm chart to deploy the data cluster separately: https://github.com/helm/charts/tree/master/stable/hazelcast
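For illustration, a minimal sketch of the embedded approach a) in Java (assuming Hazelcast 4.x or newer, where the Kubernetes discovery of the plugin is merged into core; the namespace and service name are hypothetical placeholders for a headless service you create in OpenShift):

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class EmbeddedCache {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        // Multicast rarely works inside Kubernetes/OpenShift, so disable it
        join.getMulticastConfig().setEnabled(false);
        // Let the pods find each other through the Kubernetes API
        join.getKubernetesConfig().setEnabled(true)
            .setProperty("namespace", "my-namespace")             // hypothetical
            .setProperty("service-name", "my-hazelcast-service"); // hypothetical

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        // The same named map is then visible from every pod in the cluster
        IMap<String, String> shared = hz.getMap("shared-data");
        shared.put("greeting", "hello from this pod");
    }
}

In a Spring Boot application you would typically expose the Config (or the HazelcastInstance) as a bean rather than build it in a main method; Spring Boot auto-configures Hazelcast when such a bean or a hazelcast.yaml is on the classpath.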

Related

Maven CLI command to search packages

Is there a command-line command in Maven to search for / find packages? Something similar to npm search:
# npm search indexeddb
NAME | DESCRIPTION | AUTHOR | DATE | VERSION | KEYWORDS
indexeddb | A pure-JavaScript… | =bigeasy | 2014-02-13 | 0.0.0 | btree leveldb levelup binary mvcc database json b-tree concurrent persistence durable
lokijs | Fast document… | =techfort | 2021-04-20 | 1.5.12 | javascript document-oriented mmdb json nosql lokijs in-memory indexeddb
localforage | Offline storage,… | =tofumatt | 2021-08-18 | 1.10.0 | indexeddb localstorage storage websql
idb-keyval | A… | =jaffathecake | 2022-01-11 | 6.1.0 | idb indexeddb store keyval localstorage storage promise
idb | A small wrapper… | =jaffathecake | 2022-03-14 | 7.0.1 |
y-indexeddb | IndexedDB database… | =dmonad | 2022-01-21 | 9.0.7 | Yjs CRDT offline shared editing collaboration concurrency
minimongo | Client-side mongo… | =broncha… | 2022-05-23 | 6.12.4 | mongodb mongo minimongo IndexedDb WebSQL storage
#karsegard/indexeddb-export-import | Export/import an… | =fdt2k | 2021-09-20 | 2.1.4 | IndexedDB JSON import export serialize deserialize backup restore
dexie | A Minimalistic… | =anders.ekdahl… | 2022-04-27 | 3.2.2 | indexeddb browser database
fortune-indexeddb | IndexedDB adapter… | =daliwali | 2021-06-17 | 1.2.1 | indexeddb adapter
bytewise | Binary… | =deanlandolt | 2015-06-19 | 1.1.0 | binary sort collation serialization leveldb indexeddb
fortune-localforage | localForage adapter… | =acoreyj | 2018-08-29 | 1.3.0 | indexeddb adapter
idb-kv | A tiny key value… | =kayleepop | 2019-09-28 | 2.1.1 | idb kv indexeddb key value api batch performance
idbkv-chunk-store | Abstract chunk… | =kayleepop | 2019-05-16 | 1.1.2 | idb indexeddb chunk store abstract batch batching performance fast small writes
fortune-indexeddb-with-bundle | IndexedDB adapter… | =acoreyj | 2018-05-29 | 1.0.3 | indexeddb adapter
fake-indexeddb | Fake IndexedDB: a… | =dumbmatter | 2022-06-08 | 3.1.8 | indexeddb datastore database embedded nosql in-memory polyfill shim
redux-persist-indexeddb-storage | Redux Persist… | =mpintos | 2019-12-11 | 1.0.4 | redux redux-persist indexeddb
indexeddb-export-import | Export/import an… | =polarisation | 2021-11-16 | 2.1.5 | IndexedDB JSON import export serialize deserialize backup restore
#n1md7/indexeddb-promise | Indexed DB wrapper… | =n1md7 | 2022-05-08 | 7.0.4 | db indexed-db promise indexed npm package
#sighmir/indexeddb-export-import | Export/import an… | =sighmir | 2019-12-30 | 1.1.1 | IndexedDB JSON import export serialize deserialize
If so, how can I find packages for a given search string?

Laravel - How to run schedule in multiple packages

Currently I'm working on a project that has several modules which need to run scheduled jobs.
I want to put the scheduled jobs into a Kernel in each module (not the Kernel in the app/Console directory).
I did it like this: Laravel 5 Package Scheduled Tasks, but it only runs for one module; the others do not run.
Can anybody please help me? Thanks!
My source code is structured like this:
app
| Console
| | Commands
| | | Command.php
| | Kernel.php
bootstrap
...
Module
| Module 1
| | Console
| | | Commands
| | | | Command11.php
| | | | Command12.php
| | | Kernel.php
| Module 2
| | Console
| | | Commands
| | | | Command21.php
| | | | Command22.php
| | | Kernel.php

How to run the same build with different predefined parameters in TeamCity

I'm trying to improve our build process and use 2-3 sets of predefined parameters to run on one single build.
Description: we have build configurations with parameters C1, C2, C3 and related build steps B1, B2, B3. They are linked to each other: C1-B1, C2-B2, C3-B3. In this scheme everything works fine; I pass parameters as mentioned here - How to pass Arguments between build configuration steps in team city?
But I'm a bit worried because B1, B2, B3 are full copies of each other, and I would like to improve this. The only problem is that I couldn't find any mechanism to pass parameters from different configurations. If I use the %dep. mechanism, I can use parameters from only one configuration.
UPD: Current scheme:
+---+   +---+   +---+
|   |   |   |   |   |
| C1|   | C2|   | C3|
|   |   |   |   |   |
+-+-+   +-+-+   +-+-+
  |       |       |
  |       |       |
+-v-+   +-v-+   +-v-+
|   |   |   |   |   |
| B1|   | B2|   | B3|
|   |   |   |   |   |
+---+   +---+   +---+
The desired scheme:
+---+   +---+   +---+
|   |   |   |   |   |
| C1|   | C2|   | C3|
|   |   |   |   |   |
+-+-+   +-+-+   +-+-+
  |       |       |
  |       |       |
  |     +-v--+    |
  |     |    |    |
  |     |    |    |
  +-----> B1 <----+
        |    |
        +----+
C1, C2, C3 set up configuration parameters.
B1 contains only build steps, like clean, build, dist.
Could anyone help me with that? Any ideas?
In your case, you can introduce a TeamCity Metarunner. The idea is that you combine multiple build steps with parameters and extract them as a new entity, available as a build runner.
See this documentation section for step-by-step instructions on creating a metarunner.
You can define a parameter in the template, call it, for example, external.param, and give it no definition.
Then, in each configuration (C1, C2 and C3), define the value of this parameter as a reference to the specific dependency:
external.param = %dep.<source_cfg_id>.<source_param_name>%
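For example, assuming C1's dependency is on a configuration with ID Project_SourceC1 that exposes a parameter my.param (both names hypothetical), each configuration resolves the same template parameter from its own dependency:
In C1: external.param = %dep.Project_SourceC1.my.param%
In C2: external.param = %dep.Project_SourceC2.my.param%
In C3: external.param = %dep.Project_SourceC3.my.param%
The build steps then reference only %external.param%, so the clean/build/dist steps exist once and each configuration feeds in its own value.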

WireMock with multiple hosts and dnsmasq?

I'm working on mocking endpoints for an iOS app that hits over a dozen different HTTPS hosts. Based on my understanding from the WireMock docs and this answer from the maintainer, WireMock is designed to proxy / mock only one host. My current plan is to use dnsmasq to point one host at WireMock at a time.
                                          +----------+          +-------------------+
                                          |          |          |                   |
                                          | WireMock +--Proxy-->|  123.example.com  |
                                          |          |          |                   |
                                          +----^-----+          +-------------------+
                                               |
                                               |
+-------+                                 +----+----+           +-------------------+
|       +--> https://123.example.com ---->|         +---------->|  xyz.example.com  |
|       |                                 |         |           +-------------------+
|  App  +--> https://xyz.example.com ---->| dnsmasq |
|       |                                 |         |           +-------------------+
|       +--> https://9.different.com ---->|         +---------->|  9.different.com  |
+-------+                                 +---------+           +-------------------+
This seems pretty clunky; is there a better way to mock multiple hosts like this? One of the primary constraints is that these have to be tested over HTTPS, not unencrypted.
You should be able to achieve this by creating stubs that have a host header condition in the request. That way, e.g., a request to
GET /hello
Host: firsthost.example.com
And a request to
GET /hello
Host: secondhost.example.com
would match different stubs and therefore return different responses.
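A minimal sketch of such stubs with WireMock's Java DSL (assuming WireMock 2.x; the hostnames and response bodies are placeholders):

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class MultiHostStubs {
    public static void registerStubs() {
        // Same path, different Host header -> different responses.
        // stubFor() talks to the default local WireMock server.
        stubFor(get(urlEqualTo("/hello"))
                .withHeader("Host", equalTo("firsthost.example.com"))
                .willReturn(aResponse().withStatus(200).withBody("hello from firsthost")));

        stubFor(get(urlEqualTo("/hello"))
                .withHeader("Host", equalTo("secondhost.example.com"))
                .willReturn(aResponse().withStatus(200).withBody("hello from secondhost")));
    }
}

For the HTTPS constraint, WireMock can also serve TLS directly (the --https-port option), though the clients need to trust its certificate.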

How to move a whole partition to another table in another database?

Database: Oracle 12c
I want to take a single partition, or a set of partitions, disconnect it from a table (or set of tables) on DB1, and move it to another table in another database. I would like to avoid DML for performance reasons (it needs to be fast).
Each Partition will contain between three and four hundred million records.
Each Partition will be broken up into approximately 300 Sub-Partitions.
The task will need to be automated.
Some thoughts I had:
Somehow put each partition in its own datafile upon creation, then detach it from the source and attach it to the destination?
Extract the whole partition (not record-by-record).
Any other non-DML solutions are also welcome.
Example (move Part#33 from both tables to DB#2, preferably with a single operation):
 __________________           __________________
|       DB#1       |         |       DB#2       |
|------------------|         |------------------|
|Table1            |         |Table1            |
|  Part#1          |         |  Part#1          |
|  ...             |         |  ...             |
|  Part#33         | ------> |  Part#32         |
|    Subpart#1     |         |                  |
|    ...           |         |                  |
|    Subpart#300   |         |                  |
|------------------|         |------------------|
|Table2            |         |Table2            |
|  Part#1          |         |  Part#1          |
|  ...             |         |  ...             |
|  Part#33         | ------> |  Part#32         |
|    Subpart#1     |         |                  |
|    ...           |         |                  |
|    Subpart#300   |         |                  |
|__________________|         |__________________|
Please read the document below, which contains examples of exchanging table partitions:
https://oracle-base.com/articles/misc/partitioning-an-existing-table-using-exchange-partition
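The core idea is that EXCHANGE PARTITION swaps segment pointers instead of copying rows, so it is fast regardless of row counts. A rough sketch with hypothetical names (part_33, table1_stage), assuming simple partitions - with 300 subpartitions per partition you would instead exchange at the subpartition level, or use a staging table whose partitioning matches the subpartitioning:

-- On DB#1: swap the partition's segment into an empty staging table
ALTER TABLE table1
  EXCHANGE PARTITION part_33
  WITH TABLE table1_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;

-- Move table1_stage to DB#2 (e.g. via Data Pump transportable tablespaces),
-- then on DB#2 swap it into the empty target partition the same way:
ALTER TABLE table1
  EXCHANGE PARTITION part_33
  WITH TABLE table1_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;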
