Cancel deletion of a Kubernetes Resource from Finalizer - go

In Kubernetes, there is deletionTimestamp to signal an ongoing deletion and there are finalizers to model tasks during the process of deletion.
However, it could be that, during deletion, the specification of a parent object changes in a way that would effectively make cancelling the deletion the most desirable option.
I'd expect clear and complete documentation of deletionTimestamp and finalization covering the entire lifecycle of deletionTimestamp. Most people seem to assume that it is either zero or nonzero and cannot be changed while it is nonzero, but there seems to be no documentation stating that. I also do not want to "just check", because observed behaviour is subject to change and may stop working tomorrow.

The answer is no.
Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned. Documentation is here.
The reason is that garbage collection uses this marker: in foreground cascading deletion, the owner object you're deleting first enters a "deletion in progress" state. Read through the following for a detailed understanding:
Foreground cascading deletion
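To make the mechanics concrete, here is a rough Go sketch of the usual finalizer handling with controller-runtime (the package layout, the finalizer key, and the cleanup callback are placeholders, not anything from the question). The point is that once deletionTimestamp is set there is no supported way to clear it; a controller can only finish its cleanup and remove its finalizer, or keep the finalizer in place and block the deletion.

package controllers

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// myFinalizer is a hypothetical finalizer key; use your own domain.
const myFinalizer = "example.com/cleanup"

// handleDeletion sketches the usual finalizer dance for any object.
// cleanup is whatever external teardown your controller owns.
func handleDeletion(ctx context.Context, c client.Client, obj client.Object,
    cleanup func(context.Context) error) error {

    if obj.GetDeletionTimestamp().IsZero() {
        // Object is live: register our finalizer so deletion will wait for us.
        if !controllerutil.ContainsFinalizer(obj, myFinalizer) {
            controllerutil.AddFinalizer(obj, myFinalizer)
            return c.Update(ctx, obj)
        }
        return nil
    }

    // deletionTimestamp is set. There is no supported way to clear it, so the
    // only options are to finish cleanup (and let the object go) or to keep
    // the finalizer in place, which blocks the deletion indefinitely.
    if controllerutil.ContainsFinalizer(obj, myFinalizer) {
        if err := cleanup(ctx); err != nil {
            return err // reconcile will retry
        }
        controllerutil.RemoveFinalizer(obj, myFinalizer)
        return c.Update(ctx, obj)
    }
    return nil
}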


Commitment Level

I've written a basic RPC client which polls the state of a Solana account to look for a specific condition (i.e. a unique int64 id being written to it). When the condition arises, I call a smart contract which takes the same account as a mutable argument.
Before doing anything, the program checks for the same condition. However, this check fails. I understand we're dealing with a distributed system and that state may be inconsistent for a period of time, but I can call repeatedly for over 30 seconds and it fails each time, before ultimately succeeding.
I've read about the concept of commitment levels but always assumed the account state passed into the smart contract would be the latest state of the world (i.e. processed). What I appear to be observing is that it's more like the finalized state.
Can anyone shed some light on what might be going on here?
I will try and come up with a minimal code example to demonstrate the problem but just wanted to ask the question first, to see if anyone can point me in the right direction.
Thanks
So if you look at the docs you linked, processed has a note:
the block may still be skipped by the cluster
This is a very important note if you're only looking for account state changes and don't want any that may turn out to be false. There are a number of reasons that a slot can be skipped or a transaction could be rejected by the cluster.
If any of the above happens, the account state that the cluster as a whole accepts may not be the one reflected at processed, but it will be reflected at finalized.
In the end my specific problem came down to pre-flight checks using the 'finalized' commitment level while my logic for polling the account was using 'confirmed'. Modifying the preflightCommitment argument on sendTransaction fixed the problem for me.
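To make that concrete, here is a rough Go sketch against Solana's raw JSON-RPC API (the endpoint, account address, and signed transaction below are placeholders); the point is simply that the commitment used when polling the account and the preflightCommitment used when sending should match:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

const rpcURL = "https://api.devnet.solana.com" // placeholder endpoint

// rpcCall posts a single JSON-RPC request and returns the decoded response.
func rpcCall(method string, params []any) (map[string]any, error) {
    body, _ := json.Marshal(map[string]any{
        "jsonrpc": "2.0", "id": 1, "method": method, "params": params,
    })
    resp, err := http.Post(rpcURL, "application/json", bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    var out map[string]any
    return out, json.NewDecoder(resp.Body).Decode(&out)
}

func main() {
    account := "YourAccountPubkeyBase58Here" // placeholder
    signedTx := "base64EncodedSignedTx..."   // placeholder

    // Poll the account at "confirmed" ...
    info, err := rpcCall("getAccountInfo", []any{
        account, map[string]any{"encoding": "base64", "commitment": "confirmed"},
    })
    fmt.Println(info, err)

    // ... and make the preflight simulation use the same commitment, otherwise
    // the node may simulate against an older (finalized) view of the account.
    sig, err := rpcCall("sendTransaction", []any{
        signedTx, map[string]any{"encoding": "base64",
            "skipPreflight": false, "preflightCommitment": "confirmed"},
    })
    fmt.Println(sig, err)
}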

Turn recovery on after first message

I have a persistent actor which receives many messages. The first message is CREATE (a case class) and the next messages are UPDATEs (case classes). So if it receives CREATE, it should not go to persistence to run recovery, because the storage is empty for this actor. From my perspective that's a waste of performance.
Is there any way to skip recovery for a particular input message (the first one, which is CREATE)?
A persistent actor will always have to hit the database, because there is no other way to know whether it existed before: it could have been created in a previous instance of the application that was stopped, or it could have been created on a different node in a cluster.
In general, a good pattern for performance is to keep the actor in memory after it has been hit the first time, as that allows responses to be as fast as possible. The most common way to do this is Cluster Sharding (which you can read more about in the docs here: https://doc.akka.io/docs/akka/current/cluster-sharding.html?language=scala#cluster-sharding).
I have never heard of anyone seeing the hit for an empty persistent actor as a performance problem, and I'm not sure it is possible to solve in a general way. If you have such a problem and can somehow know that the actor was never created before, you cannot express that with Akka Persistence; you would have to build a special solution for it yourself.

Is it acceptable to have an invalid state in eventsourcing after event upgrading and before patching?

Let's say I have a stream of persisted events that build a valid state according to some "schema" I have defined.
I change the schema and the events are upgraded to reflect this.
However, some state could not be made valid just by upgrading events; I also needed to add more events to patch the state to make it fully valid.
Firstly, is this reasoning at all valid in terms of event sourcing?
If so, how do I handle cases where a specific version of a state is no longer valid? I mean, is this acceptable? Should it still be possible to rehydrate a version with invalid state? If this is a write model and it's not the latest version, I could not modify this state anyway, so maybe it's no big deal?
However, some state could not be made valid just by upgrading events; I also needed to add more events to patch the state to make it fully valid.
"Compensating events" is the usual term; there is a clerical error in the book of record, so we need to add a new event to the history that corrects the mistake.
If so, how do I handle cases where a specific version of a state is no longer valid?
As a rule, you want to be wary, extremely wary, of introducing any automated validation that prevents you from loading an invalid history. Remember, state is just state; the business rules constrain the way the domain is allowed to change. Leaving broken states readable, but broken, is safe.
In particular, if you allow the state to load, it is a straightforward exercise to enumerate your event streams, test the final state of each object, and produce an exception report for any streams that produce an invalid state, escalating them to operators/management for handling, and so on.
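As a sketch of that kind of scan (all the types here are hypothetical stand-ins, written in Go rather than anything from the question):

package audit

// Hypothetical types: Event, State, and a store that can list and read streams.
type Event interface{}

type State struct{ /* ... */ }

func (s State) Apply(e Event) State { /* fold one event */ return s }
func (s State) Valid() bool         { /* business-rule check */ return true }

type Store interface {
    StreamIDs() []string
    Events(streamID string) []Event
}

// ExceptionReport replays every stream and reports the ones whose final
// state violates the current rules, without ever refusing to load them.
func ExceptionReport(store Store) []string {
    var broken []string
    for _, id := range store.StreamIDs() {
        state := State{} // the seed state
        for _, e := range store.Events(id) {
            state = state.Apply(e)
        }
        if !state.Valid() {
            broken = append(broken, id)
        }
    }
    return broken
}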
Assuming that you are reasonably careful about input validation, and that you compare whether a proposed command is consistent with the latest known state (aggregates enforce business rules, but they don't need to hoard those rules for themselves), you can probably achieve error rates low enough that you don't need aggressive data validation. That's especially true when the errors are easy to detect and cheap to fix.
Failing that, freezing any aggregates while they are in an invalid state is a good way to prevent further damage.
But if you really need the state to stay valid, there's a trick that you can play with compensating events.
Consider: the basic pattern of event sourcing looks something like
History history = repository.getHistoryById(id);
State current = State.SEED;
for (Event e : history) {
    current = current.apply(e);
}
There's actually a hidden concept here, which encapsulates the logic for processing the events prior to passing them to the state. Hidden, because the null case just passes the enumerated events straight through to the target.
History history = repository.getHistoryById(id);
Historian historian = new Historian();
State current = State.SEED;
for (Event e : historian.reviewEvents(history)) {
    current = current.apply(e);
}
The historian gives you a place to put your compensating event logic: based on its own state, the historian passes through most events, but fixes the ones that it knows need edits/compensation/redaction.
Where does the historian state come from? Why, from the history of the historian, of course. You load the history of the event corrections, which will typically be short, into the historian, and then let the historian clean up the events for the aggregate.
And if you need corrections for the historian? It's turtles all the way down! Each stream has a unique historian; the identifier for the historian's stream is calculated from the stream it filters (named UUIDs, for example, would allow you to do this). So for each stream, you check to see if a historian stream exists; when you find one that doesn't, you know to stop searching and use the null historian, roll up the changes, process the final sequence of events to regenerate the state of your real object, and off you go.
Mind you, I haven't seen a reference implementation of this idea anywhere; it's whiteboard sound, but the truth is I've been deferring this requirement in my own designs.
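For what it's worth, here is a rough Go sketch of the shape such a historian might take (every type and field here is hypothetical); it is just a filter that sits between the stored history and the fold:

package history

// Event and State are hypothetical; Correction describes one recorded fix.
type Event interface{}
type State struct{}

func (s State) Apply(e Event) State { return s }

type Correction struct {
    Replace map[int]Event // index in the original stream -> corrected event
    Redact  map[int]bool  // indices to drop entirely
}

// Historian replays a (usually short) correction stream and then rewrites
// the aggregate's events on the way past; the zero value is the "null
// historian" that passes everything through untouched.
type Historian struct{ c Correction }

func NewHistorian(corrections []Correction) Historian {
    merged := Correction{Replace: map[int]Event{}, Redact: map[int]bool{}}
    for _, c := range corrections {
        for i, e := range c.Replace {
            merged.Replace[i] = e
        }
        for i := range c.Redact {
            merged.Redact[i] = true
        }
    }
    return Historian{c: merged}
}

// ReviewEvents passes most events through and fixes or drops the flagged ones.
func (h Historian) ReviewEvents(history []Event) []Event {
    out := make([]Event, 0, len(history))
    for i, e := range history {
        if h.c.Redact[i] {
            continue
        }
        if fixed, ok := h.c.Replace[i]; ok {
            e = fixed
        }
        out = append(out, e)
    }
    return out
}

// Rehydrate folds the reviewed events exactly as before.
func Rehydrate(h Historian, history []Event) State {
    s := State{}
    for _, e := range h.ReviewEvents(history) {
        s = s.Apply(e)
    }
    return s
}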

How to handle saving on a child context when the object is already deleted in the parent context?

I have a nested Core Data contexts setup: a main queue context for the UI and for saving to the SQLite persistent store, and a private queue context for syncing data with the web service.
My problem is that the syncing process can take a long time, and there is a chance that the object being synced is deleted in the main queue context. When the private queue context is saved, it crashes with a "Core Data could not fulfill a fault" exception.
Do you have any suggestions on how to detect this, or on how to configure the contexts to handle this case?
There is no magic behind nested contexts. They don't solve a lot of concurrency-related problems without additional work. Many people (you seem to be one of them) expect things to work out of the box that are not supposed to work out of the box. Here is a little bit of background information:
If you create a child context using the private queue concurrency type then Core Data will create a queue for this context. To interact with objects registered at this context you have to use either performBlock: or performBlockAndWait:. The most important thing those two methods do is to make sure to invoke the passed block on the queue of the context. Nothing more - nothing less.
Think about this for a moment in the context of a non-Core-Data-based application. If you want to do something in the background, you could create a new queue and schedule blocks to do work on that queue in the background. When your job is done, you want to communicate the result of the background operation to another layer inside your app logic. What happens if, in the meantime, the user has deleted the object/data that the result of the background operation relates to? Basically the same thing: a crash.
What you are experiencing is not a Core Data specific problem. It is a problem you have as soon as you introduce concurrency. What you need is a policy or some kind of contract between your child and parent contexts. For example, before you delete the object from the root context, you should cancel all of the operations/blocks which are running on other queues and wait for the cancellation to finish before you actually delete the object.
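The contract is easier to see outside Core Data entirely. Here is a minimal Go sketch of the "cancel the background work and wait before deleting" policy described above (all the names are made up):

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

type record struct{ id int }

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    var wg sync.WaitGroup

    shared := &record{id: 42}

    // Background "sync" work, analogous to the private-queue context.
    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            select {
            case <-ctx.Done():
                return // give up cleanly instead of touching deleted state
            case <-time.After(100 * time.Millisecond):
                fmt.Println("syncing record", shared.id)
            }
        }
    }()

    // The "main queue" wants to delete the record: cancel first, then wait,
    // and only then actually throw the data away.
    time.Sleep(250 * time.Millisecond)
    cancel()
    wg.Wait()
    shared = nil
    fmt.Println("record deleted after background work stopped")
}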

I Need an Analogy: Triggers and Events

For another question, I'm running into a misconception that seems to arise here at SO occasionally. Some questioners seem to think that Triggers are to Databases as Events are to OOP.
Does anyone have a good analogy to explain why this is a flawed comparison, and the consequences of misapplying it?
EDIT:
Bill K. has hit it correctly, but maybe doesn't see the importance of the critical difference between the event and the callback function that strikes me, anyway. Triggers actually cause code to execute every time the event occurs; callbacks only occur when one has been registered for an event (which is not true for the vast majority of events); and even then, in most cases the callback's first action is to deregister itself (or at least the callback contains a qualification exit so it only executes once).
If you write a trigger, it will unfailingly execute every time the event occurs, because there is no way to register or deregister the code segment.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
There are exceptions and mitigations in both cases, but the basic patterns of triggers and callbacks are mostly opposite in intention and implementation. Often the distinction doesn't seem to have fully sunk in. (IMHO, YMMV). :D
They're not the same thing, but they're not unrelated.
In both cases, the mechanism can be described approximately as follows:
Some block of code declares "interest" for changes in state.
Your application affects some change.
The system runs the block of code in response to the change.
Perhaps a database trigger is more like a callback function that has registered interest in a specific event.
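A tiny sketch of that registration idea (a toy event bus in Go, not taken from any particular framework):

package main

import "fmt"

// A toy event bus: handlers only run if someone registered them first,
// which is the key difference from a trigger, which always fires.
type Bus struct{ handlers map[string][]func(string) }

func NewBus() *Bus { return &Bus{handlers: map[string][]func(string){}} }

func (b *Bus) Subscribe(event string, fn func(string)) {
    b.handlers[event] = append(b.handlers[event], fn)
}

func (b *Bus) Publish(event, payload string) {
    for _, fn := range b.handlers[event] {
        fn(payload) // runs only because interest was declared
    }
}

func main() {
    bus := NewBus()
    bus.Subscribe("row.inserted", func(p string) { fmt.Println("handler saw:", p) })
    bus.Publish("row.inserted", "id=7") // handled
    bus.Publish("row.deleted", "id=7")  // silently ignored: nobody registered
}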
Here's an analogy: the event is a rubber ball that you throw. The trigger is a dog that chases after a thrown ball.
If there's some other difference that you have in mind that makes it "dangerous" (note: OP has edited this choice of word out of the question) to compare triggers and events, you can describe what you mean.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
Okay, I see what you mean more clearly. But I think it's in some ways subject to the implementation. I wouldn't assume an event handler has to deregister itself; it depends on the system you're using. A UNIX signal handler, for example, has to prevent itself from catching a new signal while it's already handling one. But a Java servlet inside a Tomcat container should be thread-safe because it may be called concurrently by multiple threads. They're both event handlers, of different kinds.
Event handlers may be synchronous or asynchronous. Can a handler in a publish/subscribe system read messages that were posted recently, but prior to the handler registering its interest? Or only messages posted concurrently?
There's another important reason to treat triggers as different from event handlers: I frequently recommend against doing anything in a trigger that affects state outside the database.
For example, sending an email, writing to a file, posting to a web service, or forking a process is inappropriate inside a trigger, if for no other reason than that the transaction that spawned the trigger may be rolled back, and you can't roll back those external effects. You may not even be using explicit transactions; say you send an email in a BEFORE trigger and then the operation fails because of a NOT NULL constraint or something.
Instead, all such work should be done by code in one's application, after one has confirmed that the SQL operation was successful and the transaction committed.
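For example, a sketch in Go with database/sql (the driver choice, connection string, table, and sendEmail helper are all made up): the email goes out only after Commit has returned without error.

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // placeholder driver choice
)

func sendEmail(to, subject string) error { /* hypothetical helper */ return nil }

func main() {
    db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    tx, err := db.Begin()
    if err != nil {
        log.Fatal(err)
    }
    if _, err := tx.Exec(`INSERT INTO orders (customer_id) VALUES ($1)`, 42); err != nil {
        tx.Rollback() // nothing external has happened yet, so nothing to undo
        log.Fatal(err)
    }
    if err := tx.Commit(); err != nil {
        log.Fatal(err) // still nothing external to undo
    }

    // Only now, with the row durably committed, touch the outside world.
    if err := sendEmail("customer@example.com", "Order received"); err != nil {
        log.Println("email failed, but the database is still consistent:", err)
    }
}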
It's too bad that people keep trying to do inappropriate work inside a trigger. There are senior developers at MySQL who promote UDFs to read and write data in memcached. Wow -- I just noticed these have made it into the MySQL 6.0 product!! Shocking!
So here's another attempt at an analogy, comparing triggers and events to the process of a criminal trial:
A BEFORE trigger is an allegation.
An AFTER trigger is an indictment.
COMMIT is a conviction after a guilty verdict.
ROLLBACK is an acquittal after an innocent verdict.
You only want to put the perpetrator in prison after they are convicted.
Whereas an EVENT is the crime itself.
