duplicate key given in txn request while trying to remove all keys by prefix and put them again - etcd

I'm trying to use coreos/jetcd to update HAProxy settings in etcd from Java code.
What I want to achieve is:
remove all endpoints for single host
add an updated data for given host
I want to remove all keys by prefix and put the actual data into etcd as an atomic operation.
That's why I tried to use etcd transactions. My code is:
Op.DeleteOp deleteOp = Op.delete(
        fromString(prefix),
        DeleteOption.newBuilder().withPrefix(fromString(prefix)).build()
);
Txn tx = kvClient.txn().Else(deleteOp);
newKvs.forEach((k, v) -> {
    tx.Else(Op.put(fromString(k), fromString(v), DEFAULT));
});
try {
    tx.commit().get();
} catch (InterruptedException | ExecutionException e) {
    log.error("ETCD transaction failed", e);
    throw new RuntimeException(e);
}
The etcd v3 API is used (etcd v3.2.9). The KV store is initially empty and I want to add 3 records.
The prefix value is:
/proxy-service/hosts/example.com
and newKvs is a Map:
"/proxy-service/hosts/example.com/FTP/0" -> "localhost:10021"
"/proxy-service/hosts/example.com/HTTPS/0" -> "localhost:10443"
"/proxy-service/hosts/example.com/HTTP/0" -> "localhost:10080"
An exception happens on the commit().get() line with the following root cause:
Caused by: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: etcdserver: duplicate key given in txn request
at io.grpc.Status.asRuntimeException(Status.java:526)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:427)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:41)
at com.coreos.jetcd.internal.impl.ClientConnectionManager$AuthTokenInterceptor$1$1.onClose(ClientConnectionManager.java:267)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:419)
at io.grpc.internal.ClientCallImpl.access$100(ClientCallImpl.java:60)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:493)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$500(ClientCallImpl.java:422)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:525)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:102)
... 3 more
What am I doing wrong, and how else can I complete several etcd changes as an atomic operation?

It looks like removing a key and then adding a new value for the same key cannot be done within the same txn. According to the etcd API v3 documentation:
Txn processes multiple requests in a single transaction. A txn request increments the revision of the key-value store and generates events with the same revision for every completed request. It is not allowed to modify the same key several times within one txn.
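One way around this constraint (a sketch against the com.coreos.jetcd API used above; method and option names are assumed from that version and may differ in newer io.etcd releases) is to make sure no key appears twice in the txn: read the existing keys under the prefix first, delete only the stale ones individually, and put the new values, all in a single Then() branch:

// Read current keys under the prefix. This read happens outside the txn,
// so a concurrent writer could race it; an If() comparison on the prefix's
// revision could guard against that if needed.
GetResponse existing = kvClient.get(
        fromString(prefix),
        GetOption.newBuilder().withPrefix(fromString(prefix)).build()
).get();

List<Op> ops = new ArrayList<>();
for (KeyValue kv : existing.getKvs()) {
    String key = kv.getKey().toStringUtf8();
    // Delete only keys that are not about to be rewritten,
    // so no key is modified twice within the txn.
    if (!newKvs.containsKey(key)) {
        ops.add(Op.delete(fromString(key), DeleteOption.DEFAULT));
    }
}
newKvs.forEach((k, v) -> ops.add(Op.put(fromString(k), fromString(v), DEFAULT)));

kvClient.txn().Then(ops.toArray(new Op[0])).commit().get();

Keys that already exist and are also in newKvs are simply overwritten by the put, so each key is still only modified once, and all the writes are applied atomically.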

Related

KTable & LogAndContinueExceptionHandler

I have a very simple consumer from which I create a materialized view. I have enabled validation on my value object (throwing ConstraintViolationException for invalid JSON data). When I receive a value for which the validation fails, I expect the value to be logged and the consumer to read the next offset, as I have LogAndContinueExceptionHandler enabled.
However, LogAndContinueExceptionHandler is never invoked, and consumePojo transitions from state PENDING_ERROR to ERROR.
Code
@Bean
public Consumer<KTable<String, Pojo>> consume() {
    return values ->
        values
            .filter((key, value) -> Objects.nonNull(key))
            .mapValues(value -> value, Materialized.<String, Pojo>as(Stores.inMemoryKeyValueStore("POJO_STORE_NAME"))
                .withKeySerde(Serdes.String())
                .withValueSerde(SerdeUtil.pojoSerde())
                .withLoggingDisabled())
            .toStream()
            .peek((key, value) -> log.debug("Receiving Pojo from topic with key: {}, and UUID: {}", key, value == null ? 0 : value.getUuid()));
}
Why is LogAndContinueExceptionHandler not invoked in the case of a KTable?
Note: if the code is changed to KStream, then I see the logging and the records being skipped, but with KTable it is not!
To handle exceptions not handled by Kafka Streams, use the KafkaStreams.setUncaughtExceptionHandler method with a StreamsUncaughtExceptionHandler implementation; it needs to return one of 3 available enumerations:
REPLACE_THREAD
SHUTDOWN_CLIENT
SHUTDOWN_APPLICATION
and in your case REPLACE_THREAD is the best option, as you can see in KIP-671:
REPLACE_THREAD:
The current thread is shutdown and transits to state DEAD.
A new thread is started if the Kafka Streams client is in state RUNNING or REBALANCING.
For the Global thread this option will log an error and revert to shutting down the client until the option had been added
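For a plain Kafka Streams application (no Spring), the handler is registered directly on the KafkaStreams instance before start(). A minimal sketch, assuming Kafka Streams 2.8+ (where this handler API exists) and that topology, props, and log are defined elsewhere:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

KafkaStreams streams = new KafkaStreams(topology, props);

// Replace the failing stream thread instead of shutting the client down.
streams.setUncaughtExceptionHandler(exception -> {
    log.error("Stream thread died, replacing it", exception);
    return StreamThreadExceptionResponse.REPLACE_THREAD;
});

streams.start();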
In Spring Kafka you can replace the default StreamsUncaughtExceptionHandler via the StreamsBuilderFactoryBean:
@Autowired
void setMyStreamsUncaughtExceptionHandler(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
    streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(
            exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
}
I was able to solve the problem after looking at the logs carefully: I found that the valueSerde for the Pojo was showing useNativeDecoding (the default being JsonSerde); because of this, the DeserializationExceptionHandler wasn't invoked and the thread terminated.
The problem went away when I fixed the valueSerde in application.properties.
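For reference, with the Spring Cloud Stream Kafka Streams binder the serde can be pinned per binding in application.properties. A sketch, where the binding name consume-in-0 follows from the consume bean above and com.example.PojoSerde is a hypothetical serde class standing in for your own:

spring.cloud.stream.kafka.streams.bindings.consume-in-0.consumer.keySerde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.bindings.consume-in-0.consumer.valueSerde=com.example.PojoSerde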

MassTransit - How to fault messages in a batch

I am trying to utilize the MassTransit batching technique to process multiple messages, to reduce the individual queries to the database (reads and writes).
If there is an exception while processing one or more of the messages, the expectation is to fault only the required messages and have the ability to process the rest.
This is a common scenario in my use case; what I am trying to establish here is a way to perform batch processing that caters for poisoned messages.
For example, if I have a batch size of 10 messages, 10 in the queue and 1 persistently fails, I still need a means of ensuring the other 9 can be processed successfully. It is fine if all 10 need to be returned to the queue and subset re-consumed - but the poisoned message needs to be eliminated somehow. Does this requirement discount the use of batching?
I have tried the following; however, it did not solve my use case:
catching the exception and raising NotifyFaulted for that specific message;
modifying the sample-twitch application to throw an exception, something like below, based on the https://github.com/MassTransit/Sample-Twitch/blob/master/src/Sample.Components/BatchConsumers/RoutingSlipBatchEventConsumer.cs file.
public Task Consume(ConsumeContext<Batch<RoutingSlipCompleted>> context)
{
    if (_logger.IsEnabled(LogLevel.Information))
    {
        _logger.Log(LogLevel.Information, "Routing Slips Completed: {TrackingNumbers}",
            string.Join(", ", context.Message.Select(x => x.Message.TrackingNumber)));
    }
    for (int i = 0; i < context.Message.Length; i++)
    {
        try
        {
            if (i % 2 != 0)
                throw new System.Exception("business error - message failed");
        }
        catch (System.Exception ex)
        {
            context.Message[i].NotifyFaulted(TimeSpan.Zero, "batch routing slip faulted", ex);
        }
    }
    return Task.CompletedTask;
}
I have dug into a few more threads that look similar to this issue, for reference:
Masstransit error handling for batch consumer
If you want to use batch, and have a message in that batch that cannot be processed, you should catch the exception and do something else with the poison message. You could write it someplace else, publish some type of event, or whatever else. But MassTransit does not allow you to partially complete/fault messages of a batch.

Flowfiles stuck in queue (Apache NiFi)

I have following flow:
ListFTP -> RouteOnAttribute -> FetchFTP -> UnpackContent -> ExecuteScript.
Some of the files are stuck in the queue UnpackContent -> ExecuteScript.
ExecuteScript ate some flowfiles and they just disappeared: the failure and success relationships are empty. It just showed some activity in the Tasks/Time field. The rest are stuck in the queue before ExecuteScript. I tried to empty the queue, but not all of the flowfiles were deleted from it; about 1/3 of them are still stuck. I tried to disable all processors and empty the queue again, but it returns: 0 FlowFiles (0 bytes) were removed from the queue.
When I try to change the Connection destination it returns:
Cannot change destination of Connection because FlowFiles from this Connection are currently held by ExecuteScript[id=d33c9b73-0177-1000-5151-83b7b938de39]
The ExecuteScript is from this answer (uses Python).
So, I can't empty the queue because it always returns a message that there are no flowfiles, and I can't remove the connection. This has been going on for several hours.
Connection configuration:
Scheduling is set to 0 sec, no penalties for flowfiles, etc.
Is it a script problem?
UPDATE
Changed script to:
flowFile = session.get()
if (flowFile != None):
    # All processing code starts at this indent
    if errorOccurred:
        session.transfer(flowFile, REL_FAILURE)
    else:
        session.transfer(flowFile, REL_SUCCESS)
# implicit return at the end
Same result.
UPDATE v2
I set concurrent tasks to 50, then ran ExecuteScript again and terminated it. I got this error:
UPDATE v3
I created an additional ExecuteScript processor with the same script and it worked fine. But after I stopped this new processor and created new flowfiles, it now has the same problem: it's just stuck.
Hilarious. Is ExecuteScript single-use?
You need to modify your code in NiFi 1.13.2, because NIFI-8080 caused these bugs; or just use NiFi 1.12.1.
JythonScriptEngineConfigurator:
@Override
public Object init(ScriptEngine engine, String scriptBody, String[] modulePaths) throws ScriptException {
    // Always compile when first run
    if (engine != null) {
        // Add prefix for import sys and all jython modules
        prefix = "import sys\n"
                + Arrays.stream(modulePaths).map((modulePath) -> "sys.path.append(" + PyString.encode_UnicodeEscape(modulePath, true) + ")")
                .collect(Collectors.joining("\n"));
    }
    return null;
}

@Override
public Object eval(ScriptEngine engine, String scriptBody, String[] modulePaths) throws ScriptException {
    Object returnValue = null;
    if (engine != null) {
        returnValue = ((Compilable) engine).compile(prefix + scriptBody).eval();
    }
    return returnValue;
}

How do I know which Peer did the Transaction in Hyperledger Fabric Go?

I am working on getting transaction ID info that will give the peer details for the transaction. Currently, I am able to retrieve the history for a key, which gives me the list of transactions committed to that key.
My code:
historyRes, err := stub.GetHistoryForKey(userNameIndexKey)
if err != nil {
    return shim.Error(fmt.Sprintf("Unable to get History key from the ledger: %v", err))
}
for historyRes.HasNext() {
    history, errIt := historyRes.Next()
    if errIt != nil {
        return shim.Error(fmt.Sprintf("Unable to retrieve history in the ledger: %v", errIt))
    }
    deleted := history.GetIsDelete()
    ds := strconv.FormatBool(deleted)
    fmt.Println("History TxId = " + history.GetTxId() + " -- Deleted = " + ds)
}
Output
History TxId = 78c8d17c668d7a9df8373fd85df4fc398388976a1c642753bbf73abc5c648dd8 -- Deleted = false
History TxId = 102bbb64a7ca93367334a8c98f1f7be17e6a8d5277f0167c73da47072d302fa3 -- Deleted = true
But I don't know which peer did this transaction. Is there any API available in fabric-sdk-go to retrieve peer info for a transaction ID?
Please suggest a solution.
The call stub.GetHistoryForKey(userNameIndexKey) will query the state database and not the ledger (channel). The information about the identity that made the transaction is stored in the block.
I have implemented the same thing with NodeJS SDK. However, Go SDK too contains similar API calls. The following steps worked for me:
Using your SDK, get the transactionId
Use the SDK function for querying block by transactionId. References here.
At this step, you'll get the block. The identity of the submitter is located within this block. Hint: Payload -> Header -> Signature Header -> Creator -> IdBytes.
These identity bytes are the X.509 certificate of the submitter. You can read the certificate info to find out which member submitted this transaction. The subject and OUs will indicate the organization of the peer that did the transaction.
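Once you have the creator's IdBytes out of the block, reading the subject is plain X.509 parsing. A sketch in Java for illustration (the Fabric-specific traversal to obtain idBytes is assumed to have happened already; the bytes are the PEM-encoded certificate):

import java.io.ByteArrayInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

static String submitterSubject(byte[] idBytes) throws Exception {
    // idBytes: PEM bytes from Payload -> Header -> Signature Header -> Creator -> IdBytes
    X509Certificate cert = (X509Certificate) CertificateFactory.getInstance("X.509")
            .generateCertificate(new ByteArrayInputStream(idBytes));
    // e.g. "CN=User1@org1.example.com,OU=client,O=Org1" - the O/OU identify the org
    return cert.getSubjectX500Principal().getName();
}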

Spring LDAP querybuilder PartialResultException

I'm trying to get all the users from my LDAP server, doing the search from the base. This is my code:
public LdapTemplate ldapTemplate() {
    LdapContextSource ctxSrc = new LdapContextSource();
    ctxSrc.setUrl("ldap://127.0.0.1:389/");
    ctxSrc.setBase("dc=test,dc=com");
    ctxSrc.setUserDn("admin");
    ctxSrc.setPassword("password");
    ctxSrc.afterPropertiesSet();
    LdapTemplate lt = new LdapTemplate(ctxSrc);
    return lt;
}

private LdapTemplate ldapTemplate = ldapTemplate();

public List<User> getAllUsers() {
    LdapQuery query = query().base("").where("objectclass").is("user");
    return ldapTemplate.search(query, new UserAttributesMapper());
}
This is the error:
10:07:09.406 [main] DEBUG o.s.l.c.s.AbstractContextSource - AuthenticationSource not set - using default implementation
10:07:09.413 [main] DEBUG o.s.l.c.s.AbstractContextSource - Not using LDAP pooling
10:07:09.416 [main] DEBUG o.s.l.c.s.AbstractContextSource - Trying provider Urls: ldap://127.0.0.1:389/dc=test,dc=com
10:07:09.548 [main] DEBUG o.s.l.c.s.AbstractContextSource - Got Ldap context on server 'ldap://127.0.0.1:389/dc=test,dc=com'
Exception in thread "main" org.springframework.ldap.PartialResultException: Unprocessed Continuation Reference(s); nested exception is javax.naming.PartialResultException: Unprocessed Continuation Reference(s); remaining name '/'
at org.springframework.ldap.support.LdapUtils.convertLdapException(LdapUtils.java:216)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:385)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:309)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:616)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:586)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:1651)
at ldap.example.UserRepositoryImpl.getAllUsers(UserRepositoryImpl.java:81)
at ldap.example.test.LdapApp.main(LdapApp.java:23)
Caused by: javax.naming.PartialResultException: Unprocessed Continuation Reference(s); remaining name '/'
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2914)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2888)
at com.sun.jndi.ldap.AbstractLdapNamingEnumeration.getNextBatch(AbstractLdapNamingEnumeration.java:148)
at com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMoreImpl(AbstractLdapNamingEnumeration.java:217)
at com.sun.jndi.ldap.AbstractLdapNamingEnumeration.hasMore(AbstractLdapNamingEnumeration.java:189)
at org.springframework.ldap.core.LdapTemplate.search(LdapTemplate.java:365)
... 6 more
BUILD FAILED (total time: 1 second)
When I filter by ou it works, but I need to filter from the root.
You write in the question's comments that changing the port helps, but changing the port doesn't solve this problem.
Port 3268 points to a special place in Active Directory: the Global Catalog. It contains the set of all objects, but each of them has only a small subset of attributes (for example distinguishedName, cn, sAMAccountName...).
So it works until you need more specific attributes.
Problem analysis
The exception occurs because AD, as the result of your query, returns referral objects:
[Active Directory] (...) generate referrals in response to queries that request data about objects that exist in the forest, but not contained on the directory server handling the request. These are called internal cross references, because they refer to domains, schema, and configuration containers within the forest.
And if referral chasing is disabled:
If referral chasing is not enabled and a subtree search is performed, the search will return all objects within the specified domain that meet the search criteria. The search will also return referrals to any subordinate domains that are direct descendants of the directory server domain. The client must resolve the referrals by binding to the path specified by the referral and submitting another query.
You can enable referral chasing, but it has a cost: it slows down the application (you can read about this here). And I think it is not necessary in most cases.
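If you do decide to enable it, Spring LDAP exposes referral handling on the context source (AbstractContextSource.setReferral() sets the underlying java.naming.referral JNDI property); a minimal sketch against the ctxSrc from the question:

ctxSrc.setReferral("follow"); // chase referrals; the JNDI alternatives are "ignore" and "throw"
ctxSrc.afterPropertiesSet();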
Solution 1:
Sometimes the sufficient solution is to assign a more specific base DN with the ctxSrc.setBase() method from your question. Maybe all your users are inside an inner path, e.g. "ou=user,dc=department,dc=test,dc=com".
Read more in this answer.
Solution 2:
In Spring LdapTemplate you can also ignore this exception with the setIgnorePartialResultException() method:
ldapTemplate.setIgnorePartialResultException(true);
Read more in this answer.
