OpenDaylight: what is the purpose of InstructionKey?

I am writing an SDN application using OpenDaylight. I see that an InstructionKey is required to create an instruction, but I don't know where it is used. Any idea what its use is?
I always set it to new InstructionKey(0), but I am not sure whether this is correct. Thanks.

The InstructionKey is not necessary. Here is part of the code I have written:
// build a single apply-actions instruction (applyActions is built elsewhere)
Instruction applyActionsInstruction = new InstructionBuilder()
        .setOrder(0)
        .setInstruction(new ApplyActionsCaseBuilder()
                .setApplyActions(applyActions)
                .build())
        .build();

List<Instruction> instructions = new ArrayList<>();
instructions.add(applyActionsInstruction);

// wrap the list in the Instructions container used by the flow
Instructions allInstructions = new InstructionsBuilder()
        .setInstruction(instructions)
        .build();
and it works fine; the Instruction is created even without the key being specified.
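The reason is that in the OpenFlow flow model the instruction list is keyed by its order leaf, so an InstructionKey just wraps the same value you pass to setOrder; new InstructionKey(0) together with setOrder(0) is therefore correct but redundant. If you do want to set it explicitly, here is a minimal sketch reusing the classes from the snippet above (older ODL releases generate setKey(...) on the builder, newer ones withKey(...)):

Instruction keyedInstruction = new InstructionBuilder()
        .setOrder(0)
        .setKey(new InstructionKey(0)) // must match the order value
        .setInstruction(new ApplyActionsCaseBuilder()
                .setApplyActions(applyActions)
                .build())
        .build();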

Related

How to optimize react-native-keychain performance?

I am building a react-native application that uses react-native-keychain to securely save the user's tokens. I know that the keychain is meant for saving a username/password combination, but I thought it would do no harm to save my tokens instead. I'm currently implementing checks that determine whether a valid refresh token is available (meaning that the last user didn't log out when leaving the app, as usually happens in mobile apps) and act accordingly. This seems to perform poorly (too slow), and I have come to the conclusion that fetching the token (Keychain.getGenericPassword()) is what is holding the app back.
The question is: since the keychain seems to be the safest way to store credentials locally, is there a way to optimise its performance, or is there an equally safe but generally faster alternative?
"react-native-keychain" version: "6.2.0"
For anyone still trying to resolve this issue: I had the same problem, where the delay for me was about 10 seconds or more. I was able to reduce it to less than a second after going through these two issues 1, 2 and following the steps mentioned in this comment.
Use the {storage: KeyChain.STORAGE_TYPE.AES} option when calling getGenericPassword and setGenericPassword.
Go to node_modules\react-native-keychain\android\src\main\java\com\oblador\keychain\KeychainModuleBuilder.java and set DEFAULT_USE_WARM_UP to false (see the one-line sketch at the end of this answer).
Go to node_modules\react-native-keychain\android\src\main\java\com\oblador\keychain\KeychainModule.java
and, inside the getGenericPassword method, change these lines
final String accessControl = getAccessControlOrDefault(options);
final boolean useBiometry = getUseBiometry(accessControl);
final CipherStorage current = getCipherStorageForCurrentAPILevel(useBiometry);
to
// final String accessControl = getAccessControlOrDefault(options);
// final boolean useBiometry = getUseBiometry(accessControl);
// final CipherStorage current = getCipherStorageForCurrentAPILevel(useBiometry);
final CipherStorage current = getSelectedStorage(options);
The issue seems to be caused by a warm-up mechanism used with the RSA encryption. Please follow the three links above for further info.
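For the second step, the change amounts to flipping one constant in KeychainModuleBuilder.java; a sketch of what the edited line looks like (the exact modifiers may differ between versions of the library):

public static final boolean DEFAULT_USE_WARM_UP = false; // was true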

Spring Cloud Stream error handling... error

That's not really an issue, because I found a workaround, but it conflicts with the documentation, so I wanted to share and document it.
FYI Spring Boot 2.1.10 + SCSt 2.1.4 + RabbitMQ binder
I first implemented an application-local error handler as given in the official docs:
@StreamListener(Sink.INPUT)
public void handle(Person value) {
    throw new RuntimeException("BOOM!");
}

@ServiceActivator(inputChannel = Sink.INPUT + ".my-group.errors") // won't work
public void error(ErrorMessage message) {
    log.error("Handling ERROR: " + message.getPayload().getMessage());
}

spring.cloud.stream.bindings.input.destination=persons.inputs
spring.cloud.stream.bindings.input.group=my-group
But that didn't go well, to say the least. This is what I eventually had to keep:
@ServiceActivator(inputChannel = "persons.inputs.my-group.errors")
As you can see, I had to stick to the actual destination name instead of the binding/channel name, which I find very uncomfortable. And I want to underline, again, that this contradicts the official docs here: https://docs.spring.io/spring-cloud-stream/docs/current/reference/htmlsingle/#_application_error_handling (plus there are noticeable typos, IMHO: they even write that the destinationName is actually required).
Can anyone share their thoughts about this situation? Have I done it right, and am I right to think that this is wrong?
It's a bug in the documentation; it is indeed unfortunate that the binding name was not used in the error channel name instead of the destination and group, but it's too late to change it now. We could possibly do something in a future release.
Please open two GitHub issues to
fix the documentation
consider adding an option to name the error channel using the binding name instead.
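For reference, here is a minimal sketch of the combination that ends up working with the destination/group above (Person is the payload class from the question):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.support.ErrorMessage;

@EnableBinding(Sink.class)
public class PersonSink {

    private static final Logger log = LoggerFactory.getLogger(PersonSink.class);

    @StreamListener(Sink.INPUT)
    public void handle(Person value) {
        throw new RuntimeException("BOOM!");
    }

    // channel name is <destination>.<group>.errors, not <binding>.<group>.errors
    @ServiceActivator(inputChannel = "persons.inputs.my-group.errors")
    public void error(ErrorMessage message) {
        log.error("Handling ERROR: " + message.getPayload().getMessage());
    }
}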

spring-integration-aws dynamic file download

I have a requirement to download a file from S3 based on message content. In other words, the file to download is not known in advance; I have to search for it and find it at runtime. S3StreamingMessageSource doesn't seem to be a good fit because:
It relies on polling, whereas I need to wait for the message.
I can't find any way to create an S3StreamingMessageSource dynamically in the middle of a flow. gateway(IntegrationFlow) looks interesting, but what I need is a gateway(Function<Message<?>, IntegrationFlow>), which doesn't exist.
Another candidate is S3MessageHandler, but it has no support for listing files, which I need in order to find the desired file.
I can implement my own message handler using the AWS API directly; I'm just wondering if I'm missing something, because this doesn't seem like an unusual requirement. After all, not every app just sits there and keeps polling S3 for new files.
There is an S3RemoteFileTemplate with a list() method which you can use in a handle(). Then split() the result and call an S3MessageHandler for each remote file to download.
(Although the latter also has functionality to download a whole remote directory.)
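A rough sketch of that shape (the bucket/prefix names and channel name are illustrative, and for brevity the per-object download below uses the plain AmazonS3 client rather than an S3MessageHandler):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.support.S3RemoteFileTemplate;
import org.springframework.integration.aws.support.S3SessionFactory;
import org.springframework.integration.dsl.IntegrationFlow;

@Configuration
public class S3DownloadFlowConfig {

    @Bean
    public IntegrationFlow s3DownloadFlow(AmazonS3 amazonS3) {
        S3RemoteFileTemplate s3Template = new S3RemoteFileTemplate(new S3SessionFactory(amazonS3));
        return f -> f
                // the incoming message triggers the lookup; list candidate objects
                .handle((payload, headers) -> s3Template.list("my-bucket/some/prefix"))
                // one message per S3ObjectSummary
                .split()
                // fetch each object's content (the answer suggests S3MessageHandler for this step)
                .<S3ObjectSummary>handle((summary, headers) ->
                        amazonS3.getObject(summary.getBucketName(), summary.getKey()))
                .channel("downloadedObjects");
    }
}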
For anyone coming across this question, this is what I did. The trick is to:
Set filters later, not at construction time. Note that there is no addFilters or getFilters method, so filters can only be set once and can't be added later. @artem-bilan, this is inconvenient.
Call S3StreamingMessageSource.receive manually.
.handle(String.class, (fileName, h) -> {
    if (messageSource instanceof S3StreamingMessageSource) {
        S3StreamingMessageSource s3StreamingMessageSource = (S3StreamingMessageSource) messageSource;
        ChainFileListFilter<S3ObjectSummary> chainFileListFilter = new ChainFileListFilter<>();
        chainFileListFilter.addFilters(
                new S3SimplePatternFileListFilter("**/*/*.json.gz"),
                new S3PersistentAcceptOnceFileListFilter(metadataStore, ""),
                new S3FileListFilter(fileName)
        );
        s3StreamingMessageSource.setFilter(chainFileListFilter);
        return s3StreamingMessageSource.receive();
    }
    log.warn("Expected: {} but got: {}.",
            S3StreamingMessageSource.class.getName(), messageSource.getClass().getName());
    return messageSource.receive();
}, spec -> spec
        .requiresReply(false) // in case all messages got filtered out
)

BuildCoreAdmin on SolrNetFacility is returning a NullReferenceException

I need to swap my cores, and hence need to set up the SolrCoreAdmin in my SolrNetFacility so I can use it throughout my application via my Windsor container.
When I do the following:
var solrFacility = new SolrNetFacility(ConfigurationSettings.ContentSearch_Solr_ServiceBaseAddress);
solrFacility.AddCore(AgentsIndex.IndexName, typeof(AgentsIndexMapper), ConfigurationSettings.ContentSearch_Solr_ServiceBaseAddress +"/"+ AgentsIndex.IndexName);
solrFacility.AddCore(AgentsIndex.SwapIndexName, typeof(AgentsIndexMapper), ConfigurationSettings.ContentSearch_Solr_ServiceBaseAddress + "/" + AgentsIndex.SwapIndexName);
solrFacility.BuildCoreAdmin(ConfigurationSettings.ContentSearch_Solr_ServiceBaseAddress);
_WindsorContainer.AddFacility("solr", solrFacility);
I get the following error:
[NullReferenceException: Object reference not set to an instance of an object.]
Castle.Facilities.SolrNetIntegration.SolrNetFacility.BuildCoreAdmin(ISolrConnection conn) +40
I looked at the code inside BuildCoreAdmin using Reflector, and I think it's trying to access base.Kernel. Is that null? How do I set that kernel? The base class is AbstractFacility, and that has a setKernel function. What should this kernel be?
How do I solve this? I'm new to SolrNet and need your help.
I found the solution for this; I was doing it wrong. Posting the answer in case it helps others.
When initializing the SolrNetFacility, the SolrCoreAdmin is also initialised. I later added this SolrNetFacility to my container (Castle Windsor in this case) so I can use it across the entire application.
Later, when I wanted to swap the indexes after a rebuild, I just resolved it:
ISolrCoreAdmin _ISolrCoreAdmin = c.GetContainer().Resolve<ISolrCoreAdmin>();
ResponseHeader response = _ISolrCoreAdmin.Swap(corename, othername);
And all set! Hope it helps someone.

How does someone use Guava's CacheLoader asynchronously?

The question says it all: I'd like to use CacheBuilder, but my values are pulled in asynchronously. This worked previously with MapMaker, as a CacheLoader wasn't a requirement. Now I'd like to know if I can hack this up, or if there are any non-deprecated alternatives. Thank you.
I think the question you're trying to ask is "How can I use CacheBuilder without having to specify a CacheLoader?" If that's the case, then there will be support for this in Guava release 11.0. In the meantime, a no-argument build() method on CacheBuilder is already checked into trunk (as of this morning):
http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/cache/CacheBuilder.html
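In other words, once that lands you can build a Cache without any loader and compute values on demand; a minimal sketch, assuming Guava 11+ (the key/value types and expensiveLoad() call are illustrative):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

public class ManualCacheExample {

    private final Cache<String, String> cache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .build();   // no CacheLoader required

    public String lookup(String key) throws ExecutionException {
        // compute-if-absent: the Callable runs only when the key is missing
        return cache.get(key, () -> expensiveLoad(key));
    }

    private String expensiveLoad(String key) {
        return "value-for-" + key;   // stand-in for the real lookup
    }
}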
One method would be to build a cache of futures, with generic parameters K and V as your desired key and value types:
LoadingCache<K, ListenableFuture<V>> values = CacheBuilder.newBuilder()
        .build(new CacheLoader<K, ListenableFuture<V>>() {
            @Override
            public ListenableFuture<V> load(K key) {
                /* Get your future */
            }
        });
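One concrete way to fill in that loader, as a sketch (the executor and fetchRemote() lookup are illustrative, not from the original answer); callers then get a ListenableFuture back from values.getUnchecked(key):

import java.util.concurrent.Executors;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

public class AsyncCacheExample {

    private final ListeningExecutorService executor =
            MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(4));

    private final LoadingCache<String, ListenableFuture<String>> values =
            CacheBuilder.newBuilder().build(new CacheLoader<String, ListenableFuture<String>>() {
                @Override
                public ListenableFuture<String> load(String key) {
                    // the future is cached immediately; the value is computed in the background
                    return executor.submit(() -> fetchRemote(key));
                }
            });

    private String fetchRemote(String key) {
        return "value-for-" + key;   // stand-in for the real asynchronous lookup
    }
}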
