Accessing data between microservices when one is a console app

Let’s say I have microservice “UserManagement” and microservice “UserReportService”.
UserManagement is a web API project which exposes public endpoints. It is used to manage user information.
UserReportService is a console application that gathers each user's info each day and generates a report (JSON data and a file), which it stores in its own database. Since this is a console application, there is no HTTP endpoint for any other service to access this data.
If my UserManagement microservice needs to expose this data via an API, how can I accomplish this?
I've thought of two ways:
1. The console app saves the data to the UserManagement microservice instead of its own database, via an HTTP endpoint on the UserManagement microservice or via some kind of message queue.
2. The console app saves the data to its own database, and the data is replicated to the UserManagement microservice for consumption.

One very important principle in microservices is that each service manages its own data, and this data should be exposed through clearly defined APIs.
With that said, your first approach is incorrect, since one microservice would be bound to the other's database. The second is not recommended for similar reasons.
The recommended approach would be the following:
┌──────────────┐        ┌───────────────────┐
│              │        │                   │
│UserManagement│        │ UserReportService │
│              │        │                   │
└─────┬────────┘        └────────┬──────────┘
      │                          │
      │                          │
      ▼                          ▼
┌──────────────┐        ┌────────────┐
│              │        │            │
│   UsersDb    │        │  ReportDb  │
│              │        │            │
└──────────────┘        └────────────┘
UserManagement is kept as is.
UserReportService should be converted to a Web API that follows one of the following approaches:
It has a background service that generates the report and saves it in the database. This works like a cron job.
If you do not want a cron job, you can have an API endpoint that generates and saves the report when invoked.
This way the two microservices each manage their own data and can change internally as needed.
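For illustration, here is a minimal sketch of what the converted UserReportService could look like, assuming ASP.NET Core (which the question's "Web API project" wording suggests but never states). ReportStore and every name below are hypothetical stand-ins, not part of the original setup:

// Minimal sketch: the report service as a minimal API that owns its data
// and runs a daily background job in place of the old console app.
using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<ReportStore>();           // stand-in for ReportDb
builder.Services.AddHostedService<DailyReportWorker>();
var app = builder.Build();

// Other microservices read report data through this endpoint only.
app.MapGet("/reports/{userId}", (string userId, ReportStore store) =>
    store.Reports.TryGetValue(userId, out var json) ? Results.Ok(json) : Results.NotFound());

app.Run();

// In-memory stand-in for the real report database.
class ReportStore
{
    public ConcurrentDictionary<string, string> Reports { get; } = new();
}

// Replaces the console app: generates each user's report once a day.
class DailyReportWorker(ReportStore store) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Here you would call UserManagement's public API for user info,
            // then persist the generated JSON/file to ReportDb.
            store.Reports["demo-user"] = """{"report":"generated"}""";
            await Task.Delay(TimeSpan.FromDays(1), token);
        }
    }
}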

Related

Provisioning with Ansible and Vagrant multiple vagrantfiles

I'm creating a monitoring environment that has monitoring_servers and monitored_boxes, and of course an Ansible controller. For testing roles etc. I've created a new "project", which worked well in terms of organizing the development. But now, when most of the stuff is (hopefully) working as it should, I would love to make the whole infrastructure easier to manage, if possible from a single file.
I've been googling this every now and then, and so far I haven't found a solution for having one master Vagrantfile which could then call other Vagrantfiles to kick-start the needed boxes.
Right now there is one Vagrantfile for creating the Ansible controller, 3 Ubuntu nodes, and 3 Windows nodes, and another to spin up three Ubuntu VMs for Grafana, Loki, and Prometheus. Then there would be a need for an Alertmanager, maybe for InfluxDB, etc., and keeping all those machines in one Vagrantfile hasn't worked very well for me. I would like to see a situation where there is:
a master Vagrantfile to create the Ansible controller, and from that file I could call files like "monitoring_stack", "monitored_boxes", "common_purpose_boxes" and so on.
Master
├── Vagrantfile.ansible.controller
└── monitoring
    ├── monitored_boxes
    │   └── Vagrantfile.monitored
    ├── monitoring_servers
    │   └── Vagrantfile.monitoring
    └── whatever_boxes
        └── Vagrantfile.whatever
Something like that would be an ideal setup to manage.
If that's not doable or easy to achieve, are there other methods you normally use to tackle similar setups?
Maybe I should just forget Vagrant altogether and go all-in on Pulumi or Terraform. Then again, that probably wouldn't solve this issue either, as I want to provide a playground for other team members to test and play with new toys too.
Thanks, everyone, for any tips :)
Hopefully I'm not too late.
Vagrant supports multi-machine setups within the same Vagrantfile:
https://www.vagrantup.com/docs/multi-machine
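A minimal multi-machine sketch along those lines (box names and node lists are illustrative, not taken from your setup):

# Vagrantfile -- one file defining the controller plus the monitoring nodes.
Vagrant.configure("2") do |config|
  config.vm.define "ansible-controller", primary: true do |ctrl|
    ctrl.vm.box      = "ubuntu/jammy64"
    ctrl.vm.hostname = "ansible-controller"
  end

  # One VM each for the monitoring stack.
  %w[grafana loki prometheus].each do |name|
    config.vm.define name do |node|
      node.vm.box      = "ubuntu/jammy64"
      node.vm.hostname = name
    end
  end

  # A Vagrantfile is plain Ruby, so machine definitions kept in separate
  # files can also be pulled in (a common, if hacky, way to get a "master" file):
  # eval File.read("monitoring/monitored_boxes/Vagrantfile.monitored")
end

With that, "vagrant up grafana" boots a single box, while a plain "vagrant up" boots them all.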
I'm currently working on a dual-node setup with Ansible provisioning (WIP):
https://gitlab.com/UnderGrounder96/gitlab_jenkins

AWS SAM nested stacks

I'm using nested stacks with SAM. I have a parent stack which contains an SNS resource and two children: childStack1 wants to publish a message to the SNS topic, and childStack2 wants to get notified by the same SNS topic. But how can I access the shared SNS topic from childStack1/2's yaml or lambda?
ParentStack
├── ChildStack1
│   ├── NestedFunction1
│   └── template.yaml
├── ChildStack2
│   ├── NestedFunction2
│   └── template.yaml
└── template.yaml   <-- this contains the SNS resource
Thx...
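The usual CloudFormation/SAM pattern for this is to declare the topic once in the parent and pass its ARN down to each child as a stack parameter. A minimal sketch (resource, parameter, and runtime names are illustrative, not from the question):

# Parent template.yaml -- owns the shared topic
Resources:
  SharedTopic:
    Type: AWS::SNS::Topic
  ChildStack1:
    Type: AWS::Serverless::Application
    Properties:
      Location: ChildStack1/template.yaml
      Parameters:
        SharedTopicArn: !Ref SharedTopic   # !Ref on a topic yields its ARN
  ChildStack2:
    Type: AWS::Serverless::Application
    Properties:
      Location: ChildStack2/template.yaml
      Parameters:
        SharedTopicArn: !Ref SharedTopic

# ChildStack2/template.yaml -- subscribes its function to the shared topic
Parameters:
  SharedTopicArn:
    Type: String
Resources:
  NestedFunction2:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Events:
        FromSharedTopic:
          Type: SNS
          Properties:
            Topic: !Ref SharedTopicArn

childStack1's publisher would receive the same SharedTopicArn parameter, for example as an environment variable its Lambda reads before calling sns:Publish.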

WSO2 Governance Not Finding All Assets in JDBC Database

I'm new to WSO2 and working on an existing application that uses it.
We load our database of assets into WSO2, but not all of the assets show up in the Store or Publisher when queried.
It seems there is some disconnect between what is in the database/Carbon and what can be seen in the Store/Publisher.
The missing assets can be found by:
calling the database directly
looking them up in Carbon
using the Store or Publisher URL with the asset ID
using the governance REST API, by ID only
The assets are missing when:
doing searches in the Store/Publisher GUI
doing searches with the governance API
All the missing ones have invalid asset names according to our rxt definitions. I removed these validations in Carbon but still was not able to find them.
We have validations in the rxt files for asset names, would this affect what is seen in store/publisher?
Is there a way to sync up the governance registry with the database so that it would show all the assets in the store and publisher?
Any help is much appreciated!!
I'm facing the same problem with the Store/Publisher. After searching for a solution, we found some info about this issue: WSO2 is not indexing some assets in Solr.
You could try to reindex the assets with these steps:
1 - Back up the solr folder, which resides in /solr, and remove it from the API Manager home location.
2 - Open /repository/conf/registry.xml
3 - Under the indexingConfiguration tag there is a value called lastAccessTimeLocation (see the snippet after these steps).
Default value:
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime
Change that value to:
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime1
4 - Restart the server
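After the change in step 3, the relevant part of registry.xml would look like this (surrounding elements omitted):

<indexingConfiguration>
    <!-- pointing this at a fresh path makes the indexer treat all resources as never indexed -->
    <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime1</lastAccessTimeLocation>
</indexingConfiguration>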
For me this didn't work, but in other questions here many people said it could be the best solution for this issue:
WSO2 loss APIs after changes in docker container
WSO2 API Manager issues with solr
After some long investigation, I found that some entries were missing from the REG_LOG table, and some dates in REG_LOG prevented entries from being indexed. The solution was to add rows to REG_LOG with current timestamps, which forced a reindex; the missing assets could then be found through the web UI.
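As a rough illustration of that fix (the REG_LOG column names and values here are assumptions based on a typical WSO2 registry schema; copy them from an existing row in your own database rather than trusting this sketch):

-- Hypothetical sketch: verify column names and the REG_ACTION/tenant values
-- against an existing REG_LOG row before running anything like this.
-- One row per missing asset path, stamped "now" so the indexer picks it up.
INSERT INTO REG_LOG (REG_PATH, REG_USER_ID, REG_LOGGED_TIME, REG_ACTION, REG_TENANT_ID)
VALUES ('/_system/governance/trunk/path-to-a-missing-asset', 'admin', CURRENT_TIMESTAMP, 0, -1234);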

Selenium webdriver and Ruby. Standard directory and file structure

I want to split my Selenium WebDriver Ruby test suites, test cases, and test methods into separate files so I can reuse code between them. Right now I have a separate Ruby file for every test suite, containing all of its test cases and methods. This works, but it's not the best way to maintain a lot of test suites.
So I wanted to know the standard way to do this file separation: from one complete file per suite to separate files for test cases and methods.
I found the following structure but don't understand how to use it with my requirements:
.
├── bin (not used)
├── data (not used)
├── doc (not used)
├── etc (I use it to store 3 different HOSTS files I overwrite depending on some parameters)
├── ext (not used)
├── lib (not used)
├── logs (keeps execution logs)
│   └── screenshots (keeps only failed test cases' screenshots)
└── tests (test suites, with test data, test cases, and methods in a single file per test suite)
I have found the answer I was looking for. The directory I was most troubled about was the tests/ directory, where I have all my tests, and the best way to share code between them is to have a module with methods in a tests/support or tests/shared directory. For example:
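A minimal sketch of that layout (file, module, and URL names are illustrative, and Minitest is an assumption since the question names no test framework):

# tests/support/session_helpers.rb -- shared methods every suite can reuse.
require "selenium-webdriver"

module SessionHelpers
  def start_driver
    @driver = Selenium::WebDriver.for :chrome
  end

  def login(user, password)
    @driver.get "https://example.test/login"    # illustrative URL
    @driver.find_element(name: "user").send_keys user
    @driver.find_element(name: "password").send_keys password
    @driver.find_element(css: "button[type=submit]").click
  end
end

# tests/login_suite.rb -- one test suite, mixing in the shared module.
require "minitest/autorun"
require_relative "support/session_helpers"

class LoginSuite < Minitest::Test
  include SessionHelpers

  def setup
    start_driver
  end

  def teardown
    @driver.quit
  end

  def test_valid_login
    login("demo", "secret")
    assert_includes @driver.title, "Dashboard"   # illustrative assertion
  end
end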

Several apps (i.e war files) on same Beanstalk instance

In order to be conservative on resources (and costs), I would like to put more than 1 war file (representing different apps) on the same EC2 beanstalk instance.
I would then like to have app A mapped to myapp.elasticbeanstalk.com/applA using warA, and app B mapped to myapp.elasticbeanstalk.com/applB using warB.
But the console allows you to upload one single war for any instance.
1) So I understand that it's not possible with the current interface. Am I right?
2) Still, is it possible to achieve this via "non-standard" ways: uploading warA via the interface and copying/updating warB to /tomcat6/webapps via ssh, ftp, etc.?
3) With (2), my concern is that B will be lost each time the Beanstalk health checker decides to terminate the instance (after successive failed checks, for example) and start a new one. I would then have to make warB part of the customized AMI used by app A and create a new version of this AMI each time I update warB.
Please, help me
regards
didier
You are correct! You cannot (yet) have multiple wars in Beanstalk.
Amazon Forum answer is here
https://forums.aws.amazon.com/thread.jspa?messageID=219284
There is a workaround, though, not using Beanstalk but plain EC2:
https://forums.aws.amazon.com/thread.jspa?messageID=229121
http://blog.jetztgrad.net/2011/02/how-to-customize-an-amazon-elastic-beanstalk-instance/
Shameless plug: while not directly related, I've made a plugin for Maven 2 to automate Beanstalk deployments, and Elastic MapReduce as well. Check out http://beanstalker.ingenieux.com.br/
This is an old question but it took me some time to find a more up to date answer so I thought I'd share my findings.
Multiple WAR deployment is now supported natively by Elastic Beanstalk (and has been for some time).
Simply create a new zip file with each of your WAR files inside it. If you want one of them to be available at the root context, name it ROOT.war, just as you would if you were deploying to Tomcat manually.
Your zip file structure should look like this:
MyApplication.zip
├── .ebextensions
├── foo.war
├── bar.war
└── ROOT.war
Full details can be found in the Elastic Beanstalk documentation.
The .ebextensions folder is optional and can contain configuration files that customize the resources deployed to your environment. See Elastic Beanstalk Environment Configuration for information on using configuration files.
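For example, assuming the wars and the optional .ebextensions folder sit in one directory (names here mirror the illustration above), the bundle could be built like this:

cd MyApplication
zip -r ../MyApplication.zip .ebextensions foo.war bar.war ROOT.war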
There is another hack which allows you to boot an arbitrary jar, by installing Java and using a Node.js boot script:
http://docs.ingenieux.com.br/project/beanstalker/using-arbitrary-platforms.html
Hope it helps
