Understanding reactor's FluxProcessor.wrap(upstream, downstream)

Processors (Subjects in RxJava) act both as Publishers and as Subscribers, so they can subscribe to a Publisher and, in turn, be subscribed to, passing along the values they receive from the upstream Publisher:
Publisher
    |
    v
Processor
    |
    v
Subscriber
How does FluxProcessor.wrap() fit into this scheme? For instance, I would like to use FluxProcessor.wrap to create a FluxProcessor that receives values from a Flux.range() and can itself be subscribed to.
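(As a hedged illustration of the wiring being asked about, assuming a reactor-core 3.x version where FluxProcessor.wrap and UnicastProcessor are still available — they are deprecated since 3.4 — such a processor could look like the following sketch; the class name and the doubling map are made up:)

import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxProcessor;
import reactor.core.publisher.UnicastProcessor;

public class WrapDemo {
    public static void main(String[] args) {
        // Upstream side: the Subscriber that receives whatever is pushed into the processor.
        UnicastProcessor<Integer> input = UnicastProcessor.create();
        // Downstream side: a Publisher derived from the upstream, here doubling each value.
        Flux<Integer> output = input.map(i -> i * 2);
        // wrap() glues both sides into a single FluxProcessor.
        FluxProcessor<Integer, Integer> processor = FluxProcessor.wrap(input, output);

        // The processor subscribes to a source ...
        Flux.range(1, 5).subscribe(processor);
        // ... and can itself be subscribed to; prints "got 2" through "got 10".
        processor.subscribe(v -> System.out.println("got " + v));
    }
}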

Related

How to design RabbitMQ implementation in spring api project?

This is a design question. I know what RabbitMQ is and what it is used for.
I have a spring API project that has many API endpoints.
For example, localhost:80/do-A, localhost:80/do-B
Now, My client project creates requests with these endpoints with required parameters.
Inside my API project, the endpoints look like
@RequestMapping("/do-A")
public Customer doA(Customer customerObject) {
    return customerObject;
}
As far as I know, RabbitMQ is middleware between the API and the client that stores requests; the API can then retrieve each request one by one. This approach ensures stability and prevents request loss under heavy load, especially for transactional activities.
If I implement RabbitMQ, the design will look like this: the client creates a request and sends it to RabbitMQ, then listens to RabbitMQ for the result; the API retrieves the request from the queue, processes it, and sends the response back to the queue.
So, the question is: what do I need to do to convert my existing endpoints to the RabbitMQ-based design? Will they still be there after the RabbitMQ implementation, or will I have to replace them all and attach listeners for each of them one by one?
You need to design your system around the queue: enqueue the message using an asynchronous task executor such as Rqueue or a plain AMQP client.
In a queue-based solution you enqueue the whole payload of the API request, so that each request can be handled later without loss.
For a sample case, the record could look like:
class Request {
    String url;
    Map<String, Object> body; // or a String
}
Once you have enqueued a request, you need to consume these requests from the queue; after consumption you can take all the necessary actions.
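(As a rough illustration of the enqueue side, a hedged Spring AMQP sketch — the queue name "request-queue", the spring-boot-starter-amqp dependency, and the JSON serialisation are assumptions, not part of this answer:)

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class RequestProducer {
    private final RabbitTemplate rabbitTemplate;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public RequestProducer(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void enqueue(Request request) throws Exception {
        // Instead of handling the HTTP call directly, serialise the payload
        // and push it onto the request queue for later consumption.
        rabbitTemplate.convertAndSend("request-queue", objectMapper.writeValueAsString(request));
    }
}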
Edit:
Flow:
+--------+                                +----------+
|        |  ---> ==Request Queue===  ---> |          |
| Client |                                | Consumer |
|        |  <--- ==Response Queue=== <--- |          |
+--------+                                +----------+
A client generates an API request with a request id; a consumer consumes the request from the request queue and, after processing, enqueues the response in the response queue. The response-queue entry must contain the request id in addition to any other data, so that the client can relate a response to its request.
At a very high level, an entry in the request queue would look like:
class Request {
    String id;
    String url;
    Map<String, Object> body; // or a String
    // any other fields
}
And a response-queue entry:
class Response {
    String id;
    String requestId;
    // any other fields
}
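(To make the correlation concrete, a hedged sketch of the consumer side with Spring AMQP; the queue names, the handleRequest() helper, and a configured JSON message converter for the Request/Response POJOs are illustrative assumptions:)

import java.util.UUID;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class RequestConsumer {
    private final RabbitTemplate rabbitTemplate;

    public RequestConsumer(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @RabbitListener(queues = "request-queue")
    public void onRequest(Request request) {
        // Invoke whatever logic previously sat behind the HTTP endpoint.
        handleRequest(request);

        Response response = new Response();
        response.id = UUID.randomUUID().toString();
        response.requestId = request.id; // lets the client match response to request
        rabbitTemplate.convertAndSend("response-queue", response);
    }

    private void handleRequest(Request request) {
        // placeholder for the existing endpoint logic
    }
}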

How do I return a message back to SQS from lambda trigger

I have a Lambda trigger that reads messages from an SQS queue. In some conditions the message may not be ready for processing, so I'd like to put it back in the queue for one minute and try again. Currently, I create another copy of the customer record and post this new copy to the queue. Is there a reason for, or a way of, keeping the original record in the queue as opposed to creating a new one?
import datetime
import json

import boto3

def postToQueue(customer):
    if 'attemptCount' in customer.keys():
        attemptCount = int(customer["attemptCount"]) + 1
    else:
        attemptCount = 2
    customer["attemptCount"] = attemptCount
    # Get the service resource
    sqs = boto3.resource('sqs')
    # Get the queue
    queue = sqs.get_queue_by_name(QueueName='testCustomerQueue')
    response = queue.send_message(MessageBody=json.dumps(customer), DelaySeconds=60)
    print('customer postback: ', customer)
    print('response from writing to the queue is: ', response)

# main handler
def lambda_handler(event, context):
    for record in event['Records']:
        if 'body' in record.keys():
            customer = json.loads(record['body'])
            print("attempting to process customer", customer, " at: ", datetime.datetime.now())
            if not ifReadyToProcess(customer):
                postToQueue(customer)
            else:
                processCustomer(customer)
This is not an ideal setup for SQS triggering Lambda functions.
My testing shows that messages sent to SQS will immediately trigger the Lambda function, even if a Delay setting is provided. Therefore, putting a message back onto the SQS queue will cause Lambda to fire again straight after.
To avoid a situation where Lambda is continually checking whether a message is ready for processing, I would recommend:
Use Amazon CloudWatch Events to trigger a Lambda function on a schedule (eg every 2 minutes)
The Lambda function should pull messages from the queue and check if they are ready to process.
If they are ready, then process them and delete them
If they are not ready, then push them back onto the queue with a Delay setting and delete the original message
Note that this is different from having SQS directly trigger Lambda. Instead, the Lambda function should call the SQS ReceiveMessage API to obtain the message(s) itself, which allows the Delay setting to add some time between checks.
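(As a rough sketch of that scheduled-poll flow in Java with the AWS SDK v2 — the question uses Python/boto3; the queue URL and the readyToProcess()/process() helpers are placeholders, and the schedule-triggered Lambda entry point is simplified to a main method:)

import java.util.List;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class ScheduledPoller {
    // Placeholder queue URL; in practice this comes from configuration.
    static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/testCustomerQueue";

    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            List<Message> messages = sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(QUEUE_URL)
                    .maxNumberOfMessages(10)
                    .build()).messages();

            for (Message m : messages) {
                if (readyToProcess(m.body())) {
                    process(m.body());
                } else {
                    // Re-enqueue a copy with a delay so it is not seen again immediately.
                    sqs.sendMessage(SendMessageRequest.builder()
                            .queueUrl(QUEUE_URL)
                            .messageBody(m.body())
                            .delaySeconds(60)
                            .build());
                }
                // In both cases, delete the copy that was just received.
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(QUEUE_URL)
                        .receiptHandle(m.receiptHandle())
                        .build());
            }
        }
    }

    static boolean readyToProcess(String body) { return true; } // placeholder
    static void process(String body) { }                        // placeholder
}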
Another option: Instead of re-inserting a message into the queue, you could simply take advantage of the Default Visibility Timeout setting by not deleting the message. A message that is read from the queue, but not deleted, will automatically "reappear" on the queue. You could use this as the "retry" time period. However, this means you will need to handle Dead Letter processing yourself (eg if a message fails to be processed after n tries).
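(A condensed sketch of this visibility-timeout variant, reusing the placeholder names from the sketch above; the only difference is that an unready message is neither re-sent nor deleted:)

for (Message m : sqs.receiveMessage(ReceiveMessageRequest.builder()
        .queueUrl(QUEUE_URL).build()).messages()) {
    if (readyToProcess(m.body())) {
        process(m.body());
        sqs.deleteMessage(DeleteMessageRequest.builder()
                .queueUrl(QUEUE_URL).receiptHandle(m.receiptHandle()).build());
    }
    // Not ready: no delete and no re-send. The message reappears once the
    // queue's visibility timeout expires, which serves as the retry delay.
}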

Trigger/Handle events between programs in different ABAP sessions

I have two programs running in separate sessions. I want to send an event from program A and catch it in program B.
How can I do that ?
Using class-based events is not really an option, as these cannot be used to communicate between user sessions.
There is a mechanism that you can use to send messages between sessions: ABAP Messaging Channels. You can send anything that is either a text string or a byte string, or that can be serialised into one of those.
You will need to create such a message channel using the repository browser SE80 (Create > Connectivity > ABAP Messaging Channel) or with the Eclipse ADT (New > ABAP Messaging Channel Application).
In there, you will have to define:
The message type (text vs binary)
The ABAP programs that are authorised to access the message channel.
The scope of the messages (i.e. do you want to send messages between users? or just for the same user? what about between application servers?)
The message channels work through a publish-subscribe mechanism. You will have to use specialised classes to publish to the channel (inside report A) and to read from the channel (inside report B). In order to wait for a message to arrive once you have subscribed, you can use the statement WAIT FOR MESSAGING CHANNELS.
Example code:
" publishing a message
CAST if_amc_message_producer_text(
cl_amc_channel_manager=>create_message_producer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
i_suppress_echo = abap_true )
)->send( i_message = text_message ).
" subscribing to a channel
DATA(lo_receiver) = NEW message_receiver( ).
cl_amc_channel_manager=>create_message_consumer(
i_application_id = 'DEMO_AMC'
i_channel_id = '/demo_text'
)->start_message_delivery( i_receiver = lo_receiver )
" waiting for a message
WAIT FOR MESSAGING CHANNELS
UNTIL lo_receiver->text_message IS NOT INITIAL
UP TO time SECONDS.
If you want to avoid waiting inside your subscriber report B and do something else in the meantime, you can wrap the WAIT FOR ... statement inside an RFC and call it using the aRFC (asynchronous RFC) variant. This allows report B to continue working while waiting for the event; when the event happens, the aRFC callback method that you registered when calling the RFC is executed.
Inside the RFC, you would simply have the subscription part and the WAIT statement, plus an assignment of the message itself to an EXPORTING parameter. In your report, you could have something like:
CALL FUNCTION 'ZMY_AMC_WRAPPER' STARTING NEW TASK 'MY_TASK'
  CALLING lo_listener->my_method ON END OF TASK.

" inside your 'listener' class implementation
METHOD my_method.
  DATA lv_message TYPE my_message_type.
  RECEIVE RESULTS FROM FUNCTION 'ZMY_AMC_WRAPPER'
    IMPORTING ev_message = lv_message.
  " do something with the lv_message
ENDMETHOD.
You could emulate it by having program B poll whether a parameter in SAP memory has changed; program A sets this parameter to signal the event (i.e. SET/GET PARAMETER ...). In effect, B is polling for the event.
There are a lot of unknowns in your description. For example, is the event a one-shot operation, or can A send several events? If so, B will have to clear the parameter once it has handled the event so that A knows it is OK to send a new one (and A will have to wait for the parameter to clear after having set it)...
Edit: removed the part claiming there is no messaging in ABAP, since Seban showed I was wrong.

How to architecture a web-socket server with client subscription of specific responses in Phoenix?

I'm developing a web-socket server with Phoenix Framework that needs to send real-time messages to my clients.
The basic idea is that a client can subscribe to some type of information and expect to receive only that; other clients never receive it unless they subscribe to it too, and the same information is broadcast in real time to every (and only the) clients subscribed to it.
This information is separated into categories and subcategories, nested up to four levels deep.
So, for example, say I have two root categories, CatA and CatB. Each category can have subcategories, so CatA can have CatA.SubCatA and CatA.SubCatB, each subcategory can have further subcategories, and so on.
The information is generated by services, one per root category (each service also handles all the information for its subcategories), so we have CatAService and CatBService. These services need to run from server start, continuously generating new information and broadcasting it to anyone subscribed to it.
Now, clients will subscribe to this information. My solution for now is to have a channel for each available information type, so a client can join a channel to receive information of the channel's type.
For that I have something like that in the js code:
let channel = socket.channel("CatA:SubCatA:SubSubCatA", {})
channel.join()
channel.on("new_info", (payload) => { ... })
In this case, I would have a channel that all clients interested in SubSubCatA of SubCatA of CatA can join, and a service for CatA that generates and broadcasts the information for all its subcategories, and so on.
I'm not sure I was able to explain exactly what I want; if something is unclear, please tell me so I can explain it better. I also made this (very bad) image as an example of how all the communication would happen: https://ibb.co/fANKPb
Also, note that I could have just one channel per root category and broadcast all the subcategory information to everyone who joined that category channel, but I'm very concerned about performance and network bandwidth, so my objective is to send information only to the clients that requested it.
Doing some tests here, it seems that if the client joins the channel as shown in the JS code above, I can do this:
MyServerWeb.Endpoint.broadcast "CatA:SubCatA:SubSubCatA", "new_info", message
and that client (and all the other clients listening to that channel, but only them) will receive that message.
So, my question is divided into two parts. The first, more generic one, is: what are the correct ways to achieve what I described above?
The second is whether the solution I already came up with is a good way to solve this, since I'm not sure if the length of the string "CatA:SubCatA:SubSubCatA" creates overhead when the server parses it, or whether there is some other limitation I'm not aware of.
Thanks!
You have to make separate channel topics for each class of clients; based on the ids you receive when a client joins, you can authorize the join and then broadcast messages only to the clients that have joined that topic.
def join("groups:" <> group_slug, _params, socket) do
%{team_id: team_id, current_user: user} = socket.assigns
case Repo.get_by(Group, slug: group_slug, team_id: team_id) do
nil ->
{:error, %{message: "group not found"}}
group ->
case GroupAuthorization.can_view?(group.id, user.id) do
true ->
messages = MessageQueries.group_latest_messages(group.id, user)
json = MessageView.render("index.json", %{messages: messages})
send self(), :after_join
{:ok, %{messages: json}, assign(socket, :group, group)}
false ->
{:error, %{message: "unauthorized"}}
end
end
end
This is an example of sending messages only to users who have subscribed and joined the group. Hope this helps.

Subscribe messages for selected routing keys in single subscription (rabbitmq)

How can I subscribe to messages for several selected routing keys with a single subscription? Example:
A user sends a message described by "tags" (tags = routing keys):
messagingTemplate.convertAndSend("/topic/example.tagA.tagB.tagC.tagD", sending_message);
I want to subscribe to messages with routing key tagA OR tagB, which works when I create two subscriptions:
socket.stomp.subscribe("/topic/example.#.tagA.#", notify());
socket.stomp.subscribe("/topic/example.#.tagB.#", notify());
Does this have some overhead (e.g. network overhead) if there are not 2 subscriptions but more, e.g. 50?
If the previous solution has overhead, is it possible to join these two subscriptions into one? A single-subscription solution also looks better (more concise).
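(For what it's worth, at the RabbitMQ broker level the single-subscription OR semantics can be expressed as one queue with several topic bindings, so one consumer covers both tags and a message matching both patterns is still delivered to the queue only once. A hedged Spring AMQP sketch — this is plain AMQP, not the STOMP relay, and all names are illustrative:)

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TagBindings {
    @Bean
    public TopicExchange exchange() { return new TopicExchange("amq.topic"); }

    @Bean
    public Queue tagQueue() { return new Queue("example-tags"); }

    // Two bindings on one queue: a message matching either pattern reaches
    // the queue, and a single consumer on it receives both tags.
    @Bean
    public Binding bindTagA(Queue tagQueue, TopicExchange exchange) {
        return BindingBuilder.bind(tagQueue).to(exchange).with("example.#.tagA.#");
    }

    @Bean
    public Binding bindTagB(Queue tagQueue, TopicExchange exchange) {
        return BindingBuilder.bind(tagQueue).to(exchange).with("example.#.tagB.#");
    }
}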
