I have a JavaFX client application that can log in to the service using Google OpenID.
The user clicks Google Sign In.
The following URL is opened in the default browser:
https://accounts.google.com/o/oauth2/v2/auth?
response_type=code&
client_id=&
scope=email&
redirect_uri=http://localhost:8080/login/oauth2&
state=qwerrffadfadf
The user logs in to Google and authorizes the request.
The page is redirected to localhost:8080/login/oauth2 with the auth code.
The server exchanges the auth code for an access token.
The server validates the token and retrieves the user's email address.
Now, when the JavaFX client sends a request, the server should allow the authenticated user access to any resource. But, as you can see, there is no way to connect the Google user to the JavaFX client session. I have seen applications using similar methods to allow social login (Postman, Nvidia Experience).
How should I handle this? (A sketch of the client side of this flow follows the diagram below.)
+------------------------+
+---------+ Authorization Code +------------+
| | To | |
| + ----+ Access token exchange +--------+ |
| | +------------------------+ | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| v | |
+---+---+----+ +--------+---v------+
| | Authorization | |
| MyServer +<-------+Code-+ | Google Resource |
| | | | Server |
+------------+ | | |
| +----+---------+----+
| | ^
+-------------------------------------+ | | |
| | | | |
| | | | |
| +-----------------+ | | | |
| | | | | | |
| | JavaFX Client | | | | |
| | | | | Authorization |
| +-------+---------+ | | Code | Authorization
| | | | | | Request
| + | | | |
| Clicked on Google Sign in | | | |
| | | | | |
| v | | | |
| +------+-------+ | | | |
| | +--------------+ | |
| | | | | |
| | Client Web +<-----------------------------+ |
| | Browser | | |
| | +----------------------------------------+
| +--------------+ |
| |
| |
| Client's Computer |
| |
+-------------------------------------+
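For illustration, a minimal sketch (my own, not from the question) of the client side of the first steps: generate an unguessable state value, remember it, and open the authorization URL in the default browser. One common way to connect the Google user to the JavaFX client session is exactly this state value: the client registers it with the server before opening the browser, then polls the server with the same state until the login completes. The class name and placeholders below are assumptions.

    import java.awt.Desktop;
    import java.net.URI;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class GoogleSignIn {
        public static void main(String[] args) throws Exception {
            String clientId = "<YOUR_CLIENT_ID>";                      // placeholder
            String redirectUri = "http://localhost:8080/login/oauth2"; // as in the question

            // Unguessable state value. The client can register this with the
            // server first; when Google redirects back with ?code=...&state=...,
            // the server knows which client session the login belongs to.
            byte[] bytes = new byte[32];
            new SecureRandom().nextBytes(bytes);
            String state = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);

            String authUrl = "https://accounts.google.com/o/oauth2/v2/auth"
                    + "?response_type=code"
                    + "&client_id=" + clientId
                    + "&scope=email"
                    + "&redirect_uri=" + redirectUri
                    + "&state=" + state;

            // Open the system default browser with the authorization URL.
            Desktop.getDesktop().browse(new URI(authUrl));
        }
    }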
We have multiple Tomcats, each with multiple .war files (each a Spring Boot app) deployed in it.
We now need some distributed caching between app1 on tomcat1 and app1 on tomcat2. It's essential that app2 on tomcat1 (and app2 on tomcat2) cannot see the Hazelcast cache of the other deployed apps.
The following image shows this situation:
Tomcat 1 Tomcat 2
+-----------------------------------+ +-----------------------------------+
| | | |
| app1.war app2.war | | app1.war app2.war |
| +----------+ +----------+ | | +----------+ +----------+ |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| | | | | | | | | | | |
| +----+-----+ +----+-----+ | | +----+-----+ +-----+----+ |
| | | | | ^ ^ |
+-----------------------------------+ +-----------------------------------+
| | | |
| | | |
| | | |
| | | |
+--------------------------------------+ |
Shared cache via Hazelcast | |
| |
+---------------------------------------+
Shared cache via Hazelcast
Is this possible with Hazelcast? And if so, how?
Right now I can only find solutions talking about shared web sessions via Hazelcast. But that doesn't seem to be a solution for me here, or am I wrong?
If your applications must be strictly isolated, then you probably need to use different cluster groups. Cluster groups make it possible for different clusters to coexist on the same network, while being completely unreachable to one another (assuming correct configuration).
If, however, you just need application data to be separate, then you can just make sure that app1 instances use caches with names that do not clash with app2 cache names. This is the simplest implementation.
If you are deploying a sort of multitenant environment where you have security boundaries between the two groups of applications, then going for the cluster group option is better as you can protect clusters with passwords, and applications will be using distinct ports to talk to one another in those groups.
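A minimal sketch of the cluster-group approach, assuming Hazelcast 3.x (in Hazelcast 4.x and later, the group name/password pair was replaced by setClusterName plus optional security configuration); the group names and ports are illustrative:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class App1CacheConfig {
        public static HazelcastInstance newInstance() {
            Config config = new Config();
            // Members only cluster with others sharing the same group name and
            // password, so app1 and app2 form two fully separate clusters.
            config.getGroupConfig().setName("app1-cluster").setPassword("app1-secret");
            // Distinct port ranges per application keep the clusters from even
            // attempting to talk to each other.
            config.getNetworkConfig().setPort(5701).setPortAutoIncrement(true);
            return Hazelcast.newHazelcastInstance(config);
        }
    }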
Yes, this is possible.
You can configure the cache name.
Application app1 uses a cache named app1. Application app2 uses a cache named app2.
If you configure it correctly then they won't see each other's data.
If by "essential" that they can't you mean that you have a stronger requirement than preventing accidental mis-configuration, then you need to use role-based security.
How do I launch an LXD container on another node and exchange SSH keys with the container?
That is, how do I give Ansible direct access to the LXD container using SSH?
I am aware of the authorized_key module; however, this would only exchange keys between Ansible and the host, not between Ansible and the LXD container.
Please see the below diagram which describes the machine layout:
+----------------------------+ +----------------------------+
| | | |
| Baremetal Machine <------------------+ Ansible Machine |
| + | | |
| | | | |
| | | | |
| | | | |
| +--------------------+ | | |
| | | | | | |
| | v | | | |
| | LXD Container | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +--------------------+ | | |
| | | |
+----------------------------+ +----------------------------+
Start containers from images that support some sort of provisioning system.
The most common is cloud-init; it's already inside many official cloud images.
When you create such a container, just add the required configuration settings via the user.user-data config option, and they will be applied automatically when the container starts.
The lxd_container module supports a config parameter to set container configuration options.
You can find useful cloud-config examples in the cloud-init documentation.
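For example, a hedged sketch of an Ansible task that launches a container and injects the controller's public key via cloud-init (the host group, image alias, and key are assumptions for illustration):

    - hosts: baremetal
      tasks:
        - name: Launch LXD container with SSH key provisioned via cloud-init
          lxd_container:
            name: mycontainer
            state: started
            source:
              type: image
              mode: pull
              server: https://cloud-images.ubuntu.com/releases
              protocol: simplestreams
              alias: "16.04"
            config:
              # cloud-init picks this up on first boot and installs the key,
              # so Ansible can then SSH straight into the container.
              user.user-data: |
                #cloud-config
                ssh_authorized_keys:
                  - ssh-rsa AAAA... ansible@controller
            wait_for_ipv4_addresses: true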
Do we need to think about the underlying cluster while designing NiFi templates?
Here is my simple flow
+-----------------+ +---------------+ +-----------------+
| | | | | |
| READ FROM | | MERGE | | PUT HDFS |
| KAFKA | | FILES | | |
| +-----------------------> | +---------------------> | |
| | | | | |
| | | | | |
| | | | | |
+-----------------+ +---------------+ +-----------------+
I have a 3-node cluster. When the system is running I check the "cluster" menu and see that only the master node is utilizing resources; the other cluster nodes seem idle. The question is: in such a cluster, should I design the template according to the cluster, or should NiFi do the load balancing?
I saw that one of my colleagues created remote process groups for each node in the cluster and put a load balancer in front of them within the template. Is that required? (Like below.)
+------------------+
| | +-------------+
| REMOTE PROCESS | | input port |
+----> | GROUP FOR | | (rpg) |
| | NODE 1 | +-------------+
| | | |
| | | |
| +------------------+ v
+-----------------+ +-----------------+ RPG
| | | | | +--------------+
| READ FROM | | | | | |
| KAFKA | | LOAD BALANCER | | +------------------+ | MERGE FILES |
| +-------------> | +-------------> | | | |
| | | | | | REMOTE PROCESS | | |
| | | | | | GROUP FOR | | |
| | | | | | NODE 2 | | |
+-----------------+ +-----------------+ RPG | | +--------------+
| +------------------+ |
| |
| v
|
| +-------------------+ +---------------+
| | | | |
| | REMOTE PROCESS | | PUT HDFS |
+-----> | GROUP FOR | | |
| NODE 3 | | |
| | | |
| | | |
+-------------------+ +---------------+
And what is the use case for a load balancer other than remote clusters? Can I use a load balancer to split traffic across several processors to speed up the operation?
Apache NiFi does not do any automatic load balancing or moving of data, so it is up to you to design the data flow in a way that utilizes your cluster. How to do this will depend on the data flow and how the data is being brought into the cluster.
I wrote this article once to try and summarize the approaches:
https://community.hortonworks.com/articles/16120/how-do-i-distribute-data-across-a-nifi-cluster.html
In your case with Kafka, you should be able to run the flow as shown in your first picture (without remote process groups), because Kafka is a data source that allows each node to consume different data.
If ConsumeKafka appears to be running on only one node, there could be a couple of reasons for this...
First, make sure ConsumeKafka is not scheduled for primary node only.
Second, figure out how many partitions you have for your Kafka topic. The Kafka client (used by NiFi) will assign 1 consumer to 1 partition, so if you have only 1 partition then you can only ever have 1 NiFi node consuming from it. Here is an article to further describe this behavior:
http://bryanbende.com/development/2016/09/15/apache-nifi-and-apache-kafka
So I'm stuck. I am working on a credit system with expirations, similar to credit card miles but not exactly. By the way, I am sorry for the book ahead, but I needed to add enough detail to give the whole picture.
What I need is a system where a user accumulates credits for doing activities. But they can also spend these credits on activities. The credits should expire after 30 days if they are not used. I seem to be stuck on how to accurately calculate this in a batch that will run every night. Any ideas in any language would be greatly appreciated, as I seem to be stuck on just one minor detail that I can't get around. Here is an example of the data:
7/1: +5 - user signs up
7/2: +5 - user interacts with system
7/2: -3 - user purchases activity
7/3: +5 - user interacts with system
So at this point the user has received 15 credits and has spent 3. Leaving him with a total of 12 credits. (At least I got basic math down :P)
I should add that currently we are playing with the idea of having two fields: last processed, next processed. So these values at this time assuming it was a new sign up are:
Last Processed Date: 7/1
Next Process Date: 8/1
So now 8/1 comes around. The batch starts and looks at all credits that are older than 30 days, which at this point is 5.
This is where it starts to get fuzzy.
Then the system should look at all the credits that have been spent in the last 30 days, because credits should only expire if they haven't been used. There are 3 spent credits, so I deduct 2 credits from the user: the difference between the credits earned more than 30 days ago and what has been spent. I finish the batch and set the dates accordingly for the next day. Now, assuming they haven't spent any more, I start the calculation over: credits earned more than 30 days ago, which is 5, and credits spent, which again is 3. But I obviously don't want to consider the 3 credits that I already considered yesterday. What is a good approach to not include those 3 credits again?
That is where I am stuck.
We are thinking about writing a debit record for the expired credits so we can track them, but I am having a hard time seeing how I can use it in this calculation.
If you read this far thank you. If you even make a somewhat effort in the answer I will at a minimum give you an up vote for effort.
EDIT:
OK, @Greg mentioned something that I forgot to address: the idea of putting a flag on the credits considered. A valid point, but not one that can work, because of the following scenario:
Let's say that on a particular day a user spends 10 credits, but the expired credits that the batch is considering only accumulated to 5. He should still have 5 more credits left over that have not expired, because he spent more than a single expiration's worth. So the flag wouldn't work, because we would have skipped those 5 extra credits. Hope that makes sense?
For every user of the system, keep an array that stores the number of credits available to the user for each of the next 30 consecutive days.
For example, the data for some user might look like this:
8 |
7 | |
6 | | | |
5 | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
Every time the user earns some credits, you increase the amounts for all days by the number of credits earned. For example, if the user earns 2 credits, the table changes as follows. It's like raising the whole graph up.
10 |
9 | |
8 | | | |
7 | | | | | | | | | | |
6 | | | | | | | | | | | | | | | | |
5 | | | | | | | | | | | | | | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
If the user has x credits today and spends y credits, you decrease the amount of credits available to him to x - y for every day on which he has an amount greater than x - y. For days on which he has no more than x - y, the amount stays the same. It's like cutting the top off the graph. For example, if the user spends 3 credits, the graph changes to
7 | | | | | | | | | | |
6 | | | | | | | | | | | | | | | | |
5 | | | | | | | | | | | | | | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
Every day you shift the graph to the left to model expiring credits. The user will have the following amounts tomorrow:
7 | | | | | | | | | |
6 | | | | | | | | | | | | | | | |
5 | | | | | | | | | | | | | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
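A minimal sketch of this scheme in Java (my own illustration, assuming a fixed 30-day expiry; the class and method names are made up):

    /**
     * Sliding-window credit balance: days[i] is the balance the user will
     * have i days from today if no further activity happens.
     */
    class CreditWindow {
        private final int[] days = new int[30]; // 30-day expiry window

        /** Earning raises the whole graph: new credits are usable on every
            day of the window, expiring only once the window slides past. */
        void earn(int amount) {
            for (int i = 0; i < days.length; i++) days[i] += amount;
        }

        /** Spending cuts the top off the graph: no future day can offer more
            than what remains today, which implicitly spends the oldest
            (soonest-to-expire) credits first. */
        void spend(int amount) {
            int remaining = days[0] - amount;
            if (remaining < 0) throw new IllegalArgumentException("insufficient credits");
            for (int i = 0; i < days.length; i++) days[i] = Math.min(days[i], remaining);
        }

        /** Nightly batch: shift the graph left by one day. Credits that expired
            today simply fall off; nothing earned before tomorrow can still be
            alive 30 days after tomorrow, so the last slot becomes 0. */
        void nextDay() {
            System.arraycopy(days, 1, days, 0, days.length - 1);
            days[days.length - 1] = 0;
        }

        int balance() { return days[0]; }
    }

Running earn(5), nextDay(), earn(5), spend(3), nextDay(), earn(5) against this sketch reproduces the question's example and ends with a balance of 12.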
I wouldn't consider trying to process the data as you present it. Instead, you should keep track of how many credits the user has, and when they expire. That way you keep track of which credits were used when the purchase is made, instead of trying to work it all out later.
So when the user signs up, they have:
5 credits expiring on 8/1
After interacting with the system the next day:
5 credits expiring on 8/1
5 credits expiring on 8/2
After purchasing something:
2 credits expiring on 8/1
5 credits expiring on 8/2
And so on.
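A short sketch of this bookkeeping (my own illustration; assumes Java 16+ for records, and that spending always consumes the soonest-to-expire batch first, as in the walkthrough above):

    import java.time.LocalDate;
    import java.util.ArrayDeque;
    import java.util.Deque;

    /** Tracks credits as batches with expiry dates, oldest first. */
    class CreditAccount {
        private record Batch(LocalDate expires, int amount) {}
        private final Deque<Batch> batches = new ArrayDeque<>();

        void earn(LocalDate today, int amount) {
            batches.addLast(new Batch(today.plusDays(30), amount));
        }

        /** Spend from the soonest-to-expire batches first. */
        void spend(LocalDate today, int amount) {
            expire(today);
            while (amount > 0) {
                Batch b = batches.pollFirst();
                if (b == null) throw new IllegalStateException("insufficient credits");
                if (b.amount() > amount) {
                    // Put the unspent remainder of this batch back at the front.
                    batches.addFirst(new Batch(b.expires(), b.amount() - amount));
                    amount = 0;
                } else {
                    amount -= b.amount();
                }
            }
        }

        /** Nightly batch: drop batches whose expiry date has passed. */
        void expire(LocalDate today) {
            while (!batches.isEmpty() && !batches.peekFirst().expires().isAfter(today)) {
                batches.pollFirst();
            }
        }

        int balance(LocalDate today) {
            expire(today);
            return batches.stream().mapToInt(Batch::amount).sum();
        }
    }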
Assuming you run this batch on a daily basis, you can have a table that keeps track of all the credits they earned and the credits they used (negative credits).
At the beginning of the next month, your job is simply to find out which of the credits earned on the first day were not spent during the month:
the number of credits earned on the first day minus the credits they spent over the last month. If the number is positive, they have some credits that need to expire, so simply add a record to the table with a negative credit. This will zero out the unused credits.
The next day, repeat the process: the credits they earned on the second day minus the sum of all the credits they spent in the last month, taking into account the negative-credit record you created the previous day.
How about adding a flag to the expenditures? If the flag is not set, then you can include that expenditure in the batch, if necessary. If you do use the expenditure to offset an expiration, then you set the flag. Next time through, you'll ignore that expenditure because the flag is set.
Use a debit record to record normal expenditures. When the monthly batch job runs, it can calculate the total debits that are less than or equal to the expiring credits. If there are credits to expire, simply insert an appropriate debit record (appropriate meaning: enough to cancel the excess, in your application). In this way, any running-total code that examines only credits and debits will reach the same balance that your batch code intended.
One approach to this problem is to store only the transactions, not the balance. Then you always calculate the balance in real time when needed. Here's the data:
Date : Amount : Expires
7/1  :   +5   : 7/31
7/2  :   +5   : 8/1
7/2  :   -3   : never
7/3  :   +5   : 8/2
The balance at any time is simply the total of all transactions that have not yet expired. No need to run any batch processes.
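A minimal sketch of that real-time calculation (my own illustration; note the caveat raised in the next reply about the balance going negative once the positive transactions expire):

    import java.time.LocalDate;
    import java.util.List;

    /** A transaction: expires == null means it never expires (e.g. a purchase). */
    record Txn(LocalDate date, int amount, LocalDate expires) {}

    class Ledger {
        /** Balance = sum of all transactions that have not yet expired.
            Here a credit is treated as expired on its expiry date itself. */
        static int balance(List<Txn> txns, LocalDate today) {
            return txns.stream()
                    .filter(t -> t.expires() == null || t.expires().isAfter(today))
                    .mapToInt(Txn::amount)
                    .sum();
        }
    }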
Regarding Julian's reply (which I can't comment on yet): I'm dealing with just the same problem, and Julian's approach won't work because it would allow the account balance to go negative.
If the user didn't use the service for one month, on 8/4 the account balance would be -3, and one activity worth 5 credits would bring the balance to 2, not to 5 as it should.