Is there a way to check that an ActiveMQ server is still up, without actually receiving or sending or doing a transaction? I am employing a "Best Effort 1-Phase Commit" logic - i.e. not using XA, but instead committing the DB right before committing the JMS session, assuming that the JMS commit will go through unless the server has gone down (as opposed to the SQL commit, which can fail due to e.g. constraint violations).
This is very close to perfectly good enough. However, if I could check, right before committing the database transaction, whether the ActiveMQ server is actually running right now, I would tighten the potential crash window just a tad more.
A way to send some kind of "ping" through the connection would be exactly what I am looking for, e.g. getting hold of some kind of status from the actual server (not just asking the connection object whether it believes it has a connection to the server). For a SQL server, "SELECT 1" is pretty nice for this scenario. I guess I could send a message to a topic without any consumers, but this seems a bit heavy-handed. It would be no problem to employ methods on the actual ActiveMQConnection object (as opposed to purely the JMS API).
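To make the window concrete, here is roughly what the commit step looks like. As a stand-in for "SELECT 1" it creates and deletes a temporary queue, which (as far as I know - worth verifying for your ActiveMQ version) forces a synchronous round trip to the broker:

```java
import javax.jms.Session;
import javax.jms.TemporaryQueue;

public class BestEffort1PC {
    // dbConnection: a java.sql.Connection; jmsSession: a transacted JMS Session
    static void commitBoth(java.sql.Connection dbConnection, Session jmsSession) throws Exception {
        // "Ping" the broker right before the point of no return. Creating and
        // deleting a temporary destination exchanges commands with the broker;
        // that this exchange is synchronous is an assumption to verify.
        TemporaryQueue ping = jmsSession.createTemporaryQueue();
        ping.delete();

        dbConnection.commit();  // SQL commit first: this is the one that can
                                // legitimately fail (constraint violations etc.)
        jmsSession.commit();    // JMS commit last: assumed to succeed unless the
                                // broker went down in the tiny window above
    }
}
```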
You have a command/operation which means you need to both save something in the database and send an event/message to another system. For example, you have an OrderService, and when a new order is created you want to publish an "OrderCreated" event for another system (or systems) to react to (either as a direct message or via a message broker) and do something.
The easiest (and naive) implementation is to save in the DB and, if successful, then send the message. But of course this is not bulletproof, because the other service/message broker may be down, or your service may crash before sending the message.
One (and common?) solution is to implement the "outbox pattern": instead of publishing messages directly, you save the message to an outbox table in your local database as part of your database transaction (in this example, saving to the outbox table as well as the order table), and have a different process (polling the DB or using change data capture) read the outbox table and publish the messages.
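For concreteness, a minimal sketch of that transactional write with plain JDBC, assuming a hypothetical outbox table with (id, type, payload) columns; the separate relay process that reads the outbox and publishes is omitted:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class OrderService {
    // Both inserts commit or roll back together; the relay publishes later.
    void createOrder(Connection con, String orderId, String payloadJson) throws Exception {
        con.setAutoCommit(false);
        try (PreparedStatement order = con.prepareStatement(
                     "INSERT INTO orders (id, payload) VALUES (?, ?)");
             PreparedStatement outbox = con.prepareStatement(
                     "INSERT INTO outbox (id, type, payload) VALUES (?, ?, ?)")) {
            order.setString(1, orderId);
            order.setString(2, payloadJson);
            order.executeUpdate();

            outbox.setString(1, java.util.UUID.randomUUID().toString());
            outbox.setString(2, "OrderCreated");  // the event name from above
            outbox.setString(3, payloadJson);
            outbox.executeUpdate();

            con.commit();    // order row and event row become visible atomically
        } catch (Exception e) {
            con.rollback();  // neither the order nor the event is persisted
            throw e;
        }
    }
}
```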
What is your solution to this dilemma, i.e. "update the database and send the message, or do neither"? Note: I am not talking about using SAGAs (this could be part of a SAGA, though, but that is next level).
I have in the past used different approaches:
"Do nothing", i.e just try to send the message and hope it will be sent. Which might be fine in some cases especially with a stable message broker running on same machine.
Using DTC (in my case MSDTC). Besides all the problems with DTC, it might not work with your current solution.
Outbox pattern
Using an orchestrator which retries the process if it has not received a "completed" event.
In my current project it is not handled well, IMO, and I want to change it to be more resilient and self-correcting. Sometimes, when a service calls another service and the call fails, the user might retry and it might work OK. But some operations might require our support team to fix things (if the problem is even discovered).
At the moment it is not a microservice solution, but rather two large (legacy) monoliths communicating, running on the same server; we are moving to a microservice architecture in the near future and might run on multiple machines.
I am wondering about the best practice for long-lived gRPC calls.
I have a typical Client --> Server call (both golang) and the server processing can take up to about 20-30 seconds to complete. I need the client to wait until it is completed before I move on. Options that I see (and I don't love any of them):
1. Set the timeout to an absurd length (e.g. 1 min) and just wait. This feels like a hack, and I also expect to run into strange behavior in my service mesh with things like this going on.
2. Use a stream. I still need to do option #1 here, and it really doesn't help me much, as my response is really just unary and a stream doesn't do me much good.
3. Polling (I implemented this and it works, but I don't love it). I do most of the processing async and have my original gRPC call return a transaction ID that is stored in Redis and holds the state of the transaction. I created a different gRPC endpoint to poll the status of the transaction in a loop.
4. Queue or stream (e.g. Kafka): set up the client to be a listener on something like a Kafka topic and have my server notify the queue/stream when it is done, so that my client picks it up. I thought this would work, but it seemed way over-engineered.
Option #3 is working for me, but it sure feels pretty dirty, and I am also 100% dependent on Redis. Given that gRPC is built on HTTP/2, I would think there might be some sort of server-push option, but I am not finding any.
I fear that I am overlooking a simple way to handle this problem.
Thanks
A long-lived gRPC channel is an important use case and is fully supported. However, one gRPC channel may have more than one TCP connection, and TCP can get disconnected due to inactivity. You can use keep-alive or HTTP/2 ping to keep the TCP connection alive. See this thread for more details.
None of the options you mentioned address the issue that your server takes a while to respond. Unless there’s something I’m missing, nothing in your question is a gRPC issue.
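To make the keep-alive point concrete: the question is Go (grpc-go exposes the same knobs via its keepalive package), but here is a minimal client-side sketch with grpc-java. The address, deadline value, and the commented-out service stub are made-up placeholders, not part of the question:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public class LongCallClient {
    public static void main(String[] args) {
        // Keep-alive pings keep idle HTTP/2 connections from being dropped by
        // intermediaries (load balancers, service meshes, NATs).
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 50051)
                .usePlaintext()
                .keepAliveTime(30, TimeUnit.SECONDS)     // ping after 30s of inactivity
                .keepAliveTimeout(10, TimeUnit.SECONDS)  // fail if no ack within 10s
                .keepAliveWithoutCalls(true)             // ping even with no active RPC
                .build();

        // For the slow unary call itself, a per-call deadline slightly above the
        // worst-case processing time is the intended mechanism, not a hack.
        // (MyServiceGrpc is a stand-in for your generated stub.)
        // MyServiceGrpc.MyServiceBlockingStub stub = MyServiceGrpc.newBlockingStub(channel)
        //         .withDeadlineAfter(60, TimeUnit.SECONDS);
        // MyResponse resp = stub.process(MyRequest.getDefaultInstance());

        channel.shutdown();
    }
}
```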
I am wondering how we can ensure message durability when using WebSphere MQ and WCF. I want to be able to have my WCF process pick messages off of the queue and, if there is an issue that the application encounters (power outage, etc.), not lose the messages. I would also like to avoid using a transaction if at all possible, because I want to eliminate distributed transactions.
Thanks,
S
Well, there are transactions and there are distributed transactions. The "right" answer is to use the WMQ 1-phase commit here. That doesn't have the complexity of XA transactions, but it does give you the ability to roll back a message without losing it. In fact, when using clients you really should be using at least 1-phase commit, just to prevent loss of messages.
Short of that, there is always the "browse-with-lock, delete-message-under-cursor" method. I'm pretty sure everything you need to do the browsing, locking and deleting is exposed under .NET, but perhaps Shashi will comment and confirm.
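To illustrate the 1-phase commit recommended above: the question is WCF/.NET, but the pattern is easiest to show with the WebSphere MQ classes for Java, which expose the same syncpoint options; the queue manager and queue names below are made up:

```java
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SyncpointConsumer {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");       // hypothetical queue manager
        MQQueue queue = qmgr.accessQueue("APP.REQUEST",        // hypothetical queue
                CMQC.MQOO_INPUT_AS_Q_DEF);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_SYNCPOINT | CMQC.MQGMO_WAIT;  // get under syncpoint
        gmo.waitInterval = 5000;

        MQMessage msg = new MQMessage();
        queue.get(msg, gmo);
        try {
            process(msg);        // your business logic
            qmgr.commit();       // message is permanently removed only now
        } catch (Exception e) {
            qmgr.backout();      // message goes back on the queue; nothing lost
        } finally {
            queue.close();
            qmgr.disconnect();
        }
    }

    private static void process(MQMessage msg) { /* ... */ }
}
```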
The WebSphere MQ WCF custom channel has a feature, "Assured Delivery", that guarantees that a service request or reply is actioned and not lost. This is the 1-phase commit (also known as SYNC_POINT) in WMQ.
"Assuered Delivery" is a service contract attribute. Here are more details about the feature.
I have an application that uses a Spring EntityManager (JPA), and I wonder what happens if the database becomes unavailable during the lifetime of the application.
I expect that in that situation it will throw an exception the first time I try to do anything on the database, right?
But say I wait 10 minutes and try again, and the DB happens to be back. Will it recover? Can I arrange it so that it does?
Thanks
Actually, neither Spring nor JPA has anything to do with it. Internally, all persistence frameworks simply call DataSource.getConnection() and expect to receive a (probably pooled) JDBC connection. Once they're done, they close() the connection, effectively returning it to the pool.
Now, when the DataSource is asked for a connection but the database is unavailable, it will throw an exception. That exception will propagate up and be handled somehow by whatever framework you use.
Now, to answer your question: typically a DataSource implementation (like DBCP, c3p0, etc.) will discard a connection known to be broken and replace it with a fresh one. It really depends on the provider, but you can safely assume that once the database is available again, the DataSource will gradually get rid of sick connections and replace them with healthy ones.
Also, many DataSource implementations provide ways of testing a connection periodically and before it is returned to the client. This is important in pooled environments, where the DataSource holds a pool of connections and has no way to discover on its own that the database has become unavailable. So some DataSources test a connection (by calling SELECT 1 or similar) before handing it to the client, and do the same once in a while to weed out broken connections, e.g. ones whose underlying TCP connection has died.
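As an illustration of those validation settings, here is a minimal sketch using Apache Commons DBCP2; the JDBC URL and credentials are placeholders, and the exact property names vary between pool implementations:

```java
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource dataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/app");  // placeholder URL
        ds.setUsername("app");                              // placeholder credentials
        ds.setPassword("secret");

        // Validate connections so broken ones are discarded, not handed out.
        ds.setValidationQuery("SELECT 1");            // cheap round trip to the DB
        ds.setTestOnBorrow(true);                     // check before giving to a client
        ds.setTestWhileIdle(true);                    // periodically check idle connections
        ds.setTimeBetweenEvictionRunsMillis(30_000);  // background check interval
        return ds;
    }
}
```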
TL;DR
Yes, you will get an exception, and yes, the system will work normally once the database is back. BTW, you can easily test this!
I'm creating a new client/server application in C# and expect a fairly high rate of connections. That made me think of database connection pools, which help mitigate the expense of creating and disposing of connections between the client and database.
I would like to create a similar capability for my application and haven't been able to find any good examples of how to apply this pattern. Do I really need to spin up a TcpClient instance every time I want to send a message to the server and receive a receipt message? Each connection is expected to transport between 1 and 5 KB, with a 1 KB response message coming back.
I realize this question is somewhat vague, but I am starting from scratch, so I am open to suggestions - even if that means my suppositions are all wrong.
Introducing a connection pool is a kind of optimization, and premature optimization can be bad.
I would recommend you start development without a connection pool. When the client and server code is stable enough, you can create load tests and detect performance problems.
A connection pool is warranted if the time to create a connection is considerable compared to the rate at which data flows to/from the server (load tests should indicate that).
If data from the client is not sent that often, you may not even need a connection pool.
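If the load tests do show that connection setup dominates, the core of the pattern is small. Below is a deliberately minimal sketch (in Java; the question is C#, but the shape is the same): a bounded queue of pre-created connections that clients borrow and return. A real pool would also need validation, timeouts, and grow/shrink logic.

```java
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal pool sketch: borrow a connection, use it, return it.
public class SocketPool {
    private final BlockingQueue<Socket> idle;
    private final String host;
    private final int port;

    public SocketPool(String host, int port, int size) throws Exception {
        this.host = host;
        this.port = port;
        this.idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new Socket(host, port));  // pre-create the connections
        }
    }

    public Socket borrow() throws InterruptedException {
        return idle.take();  // blocks if every connection is in use
    }

    public void giveBack(Socket s) throws Exception {
        if (s.isClosed()) {
            idle.put(new Socket(host, port));  // replace a dead connection
        } else {
            idle.put(s);
        }
    }
}
```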