I am using the elastic package for Go. I want to use its BulkProcessor to send a huge number of documents in the background. As shown in the wiki, I can create a processor, but I don't want to create one every time I send documents. I want to know whether a processor service already exists on the connection so that I can pass data to the existing processor rather than creating a new one. How can I achieve this?
Register the bulk processor separately from sending the documents. The bulk processor only lives as long as your process, so to ensure you only create it once, create it when your process starts. Then elsewhere in your application, you can send documents whenever you need to.
Alternatively, if you must do it on-demand, you can use a sync.Once to ensure the creation only happens once.
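For illustration, here is a minimal sketch of the sync.Once variant, assuming the olivere/elastic client (v6-style API); the processor settings, the "articles" index, and the helper names are placeholders, not the library's prescribed pattern:

package indexer

import (
	"context"
	"sync"
	"time"

	elastic "github.com/olivere/elastic"
)

var (
	procOnce  sync.Once
	processor *elastic.BulkProcessor
	procErr   error
)

// bulkProcessor creates the shared BulkProcessor exactly once and reuses it afterwards.
func bulkProcessor(client *elastic.Client) (*elastic.BulkProcessor, error) {
	procOnce.Do(func() {
		processor, procErr = client.BulkProcessor().
			Name("background-indexer").
			Workers(2).
			BulkActions(1000).              // flush after 1000 queued requests
			FlushInterval(5 * time.Second). // or after 5 seconds, whichever comes first
			Do(context.Background())
	})
	return processor, procErr
}

// indexDoc queues one document on the shared processor; it is sent in the background.
func indexDoc(client *elastic.Client, id string, doc interface{}) error {
	p, err := bulkProcessor(client)
	if err != nil {
		return err
	}
	p.Add(elastic.NewBulkIndexRequest().Index("articles").Type("doc").Id(id).Doc(doc))
	return nil
}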
I've already read the official documentation and found no way to do this.
My data flows into ES from Kafka and can sometimes arrive out of order. In the past, each Kafka message was parsed and used to directly insert or update the ES doc with a specific ID. To avoid older data overriding newer data, I have to check whether the doc with that ID already exists and whether some of its properties meet certain conditions; only then do I perform the UPDATE (or INSERT).
What I'm doing now is 'search before update'.
Before updating a doc, I search ES for the specific ID (included in the Kafka message), then check whether the doc meets the conditions (for example, whether its update_time is older). Finally I update the doc, and I set refresh to true so the index is updated instantly.
What I'm worried about:
It seems like this needs to be transactional.
If there is only one thread executing synchronously, is it possible that when I process the next message, the doc updated while processing the last message has not yet been refreshed in ES?
If I have several threads consuming Kafka messages, how do I check before updating? Can I use a script to solve this problem?
If there is only one thread executing synchronously, is it possible that when I process the next message, the doc updated while processing the last message has not yet been refreshed in ES?
That is a possibility, since indexes are refreshed once every second (by default). Reducing this value is neither recommended nor guaranteed to give you the desired result, since Elasticsearch is NOT designed for this.
If I have several threads consuming Kafka messages, how do I check before updating? Can I use a script to solve this problem?
You can use a script if the number of fields being updated is very limited. Personally I have found scripts best suited for single-field updates, and even then only for corner cases; they should not be used as a general practice. Any more than that and you run into the same risk as with stored procedures in the RDBMS world: it makes data management volatile overall and leaves a system that is harder to maintain and extend in the long run.
Your use case is best suited for optimistic locking support available from Elasticsearch out of the box. Take a look at Elasticsearch Versioning Support for full details.
You can very well use the inbuilt doc version if concurrency is the only problem you need to solve. If, however, you need more than concurrency (out-of-order message delivery and the corresponding ES updates), then you should use an application/domain-specific field, as the inbuilt version won't work as-is.
You can very well use any app-specific (numeric) field as a version field and use it for optimistic locking during document updates (see the sketch below). If you take this approach, pay special attention to all insert, update, and delete operations for that index. Quoting as-is from the versioning support docs: when using external versioning, make sure you always add the current version (and version_type) to any index, update or delete calls. If you forget, Elasticsearch will use its internal system to process that request, which will cause the version to be incremented erroneously.
I recommend you evaluate the inbuilt version first and use it if it fulfills your needs, as it will make the overall design much simpler. Consider the app-specific version as the second option if the inbuilt version does not meet your requirements.
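For example (a non-authoritative sketch, assuming the olivere/elastic Go client; the "events" index and the use of update_time as the version are placeholders), an app-specific numeric field can be passed as an external version so that a stale message can never overwrite a newer document:

package indexer

import (
	"context"

	elastic "github.com/olivere/elastic"
)

// indexWithExternalVersion indexes a document using update_time as an external version.
// Elasticsearch only applies the write if the given version is higher than the stored one,
// so out-of-order (older) messages are rejected with a version conflict.
func indexWithExternalVersion(ctx context.Context, client *elastic.Client,
	id string, updateTime int64, doc interface{}) error {

	_, err := client.Index().
		Index("events").
		Type("doc").
		Id(id).
		VersionType("external"). // compare against the currently stored version
		Version(updateTime).     // app-specific, monotonically increasing value
		BodyJson(doc).
		Do(ctx)

	if elastic.IsConflict(err) {
		// An equal or newer version is already indexed; drop the stale message.
		return nil
	}
	return err
}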
If there is only one thread executing synchronously, is it possible that when I process the next message, the doc updated while processing the last message has not yet been refreshed in ES?
Ad 1. It is possible to save data in Elasticsearch and, a short while afterwards, receive a stale result (before the index is refreshed).
If I have several threads consuming Kafka messages, how do I check before updating? Can I use a script to solve this problem?
Ad 2. If you process Kafka messages in several threads, it is best to use business data (e.g. some business IDs) as partition keys in Kafka to ensure data is processed in order (see the sketch below). Remember to let Kafka distribute messages across many consumer threads; don't consume messages with a single consumer and fan them out to multiple threads afterwards.
It seems it would be best to ensure data is processed in order and then drop the check in Elasticsearch, since it is not guaranteed to give valid results.
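To make the partition-key idea concrete, here is a minimal sketch assuming the Shopify/sarama Go producer; the topic name and field are illustrative. All messages with the same key land on the same partition and are therefore consumed in order:

package producer

import "github.com/Shopify/sarama"

// publishUpdate keys the message by a business ID so all updates for the same
// entity go to the same partition and stay ordered.
func publishUpdate(p sarama.SyncProducer, businessID string, payload []byte) error {
	msg := &sarama.ProducerMessage{
		Topic: "doc-updates",
		Key:   sarama.StringEncoder(businessID), // same key -> same partition -> in-order
		Value: sarama.ByteEncoder(payload),
	}
	_, _, err := p.SendMessage(msg)
	return err
}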
I want to build a system that monitors a third party from NiFi. I want to apply a filter condition on the InvokeHTTP processor that fetches all the alarms from the web app every 5 minutes, but does not fetch duplicate alarms that were already fetched before. Is that possible? Alternatively, is it possible that when an alarm arrives in my web app, a NiFi workflow is triggered to do some task like update, assign, or delete?
Thanks in advance.
You can use the DetectDuplicate processor to determine whether an incoming flowfile is an exact copy of a recently seen flowfile (as determined by a value computed from the flowfile attributes); it leverages the distributed state cache. This cache is pluggable and can be provided by a native implementation, HBase, or Redis.
Implement cache in TIBCO BW
I need to implement a cache/in-memory store in TIBCO BW.
The contents of this cache should be available across BW projects.
What I am trying to do is this: when I receive a message containing multiple shipment records (the shipment and delivery number form a unique combination),
I need to first check in the cache whether any of these records already exist.
If yes, reject the whole XML.
If not, push this data into the cache.
Once this is done, I need to make a SOAP request/reply call to an external system.
In another project, when an acknowledgement is received from the external system, I need to check the records in the message, find those records in the cache, and delete them.
Is there any way to do this?
The challenge here is that there is no unique key for the whole message; each record's combination of shipment/delivery is unique.
Here is what I tried and the challenge with each approach:
1) I thought of putting the data in a file and naming the file after the request ID/key for each message.
Then, in another project, check the file and delete it.
But since we don't have a key for the whole message, I cannot do that.
2) Using shared variables:
I believe shared variables are not available across BW projects, so this option is out.
3) The third option is to use an EMS queue and temporarily park the message containing the records there.
Then search it, and if the records match, reject the request.
And, on acknowledgement (in the other project), search for the records in the EMS messages and delete that particular message.
Any help on this would be appreciated.
Thanks.
Why don't you use a database or a file to store the records?
Because if you stop or restart the AppNode, or a problem occurs in it, the cache will be erased and you will not be able to retrieve the records that have not yet been processed.
A 4th option is to use the TIBCO 'Java Global Instance' shared resource to implement the cache in Java.
"The Java Global Instance shared configuration resource allows you to
specify a Java object that can be shared across all process instances
in a Java Virtual Machine (JVM). When the process engine is started,
an instance of the specified Java class is constructed."
You can google tons of cache implementations in Java, for example https://crunchify.com/how-to-create-a-simple-in-memory-cache-in-java-lightweight-cache/
Regarding your statement:
"The challenge here is that there is no unique key for the whole message; each record's combination of shipment/delivery is unique."
So you do have a unique key: the combination of shipment/delivery is unique.
If the record size is not too huge, you can use the record as the key "as is", or create a unique hash for each record and use that as the key if key size is an issue.
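The key construction itself is language-agnostic; here is a rough sketch (written in Go only for consistency with the rest of this page; the same idea can be applied inside the Java Global Instance cache) of deriving a fixed-size cache key from the shipment/delivery combination:

package cachekey

import (
	"crypto/sha256"
	"encoding/hex"
)

// cacheKey builds a stable, fixed-size key for one record from its unique
// shipment/delivery combination.
func cacheKey(shipmentNo, deliveryNo string) string {
	sum := sha256.Sum256([]byte(shipmentNo + "|" + deliveryNo))
	return hex.EncodeToString(sum[:])
}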
I want to get a record from Aerospike, so I am using the Client.Get method.
However, whenever I do a Get I also want to refresh the TTL of the record. Usually we would use a WritePolicy, which allows us to set a TTL, but the Get method accepts only a BasePolicy.
Is the following way correct or is there a better way of doing this?
client.Get(nil, key, bin)
client.Touch(myWritePolicy, key)
Do it within an operate() command: you can touch() as well as get() under the same record lock, in one network trip. Note that if your record is stored on disk, updating the TTL, however you do it, entails rewriting the record to a different location on disk, because TTL info is stored in the record metadata.
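A minimal sketch of that, assuming the aerospike-client-go package (namespace, set, key, and TTL values are illustrative):

package main

import (
	"log"

	as "github.com/aerospike/aerospike-client-go"
)

func main() {
	client, err := as.NewClient("127.0.0.1", 3000)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	key, err := as.NewKey("test", "demo", "user-42")
	if err != nil {
		log.Fatal(err)
	}

	// The WritePolicy carries the new TTL (expiration, in seconds).
	policy := as.NewWritePolicy(0, 3600)

	// Touch (reset the TTL) and read all bins under one record lock, in one network trip.
	record, err := client.Operate(policy, key, as.TouchOp(), as.GetOp())
	if err != nil {
		log.Fatal(err)
	}
	log.Println(record.Bins)
}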
Here is what we came up with, using a 3-value status column:
0 = Not indexed
1 = Indexed
2 = Updated
There will be 2 jobs...
Job 1 will select the top X records where status = 0 and pop them into a queue like RabbitMQ.
Then a consumer will bulk insert those records into ES and update the status of the DB records to 1.
For updates, since we have control of our data, the SQL stored proc that updates a particular record will set its status to 2. Job 2 will select the top X records where status = 2 and pop them onto RabbitMQ. Then a consumer will bulk insert those records into ES and update the status of the DB records to 1.
Of course we may need an intermediate status for "queued" so that neither job picks up the same record again, and the same job should not run again if it hasn't completed. The chances of a queued record being updated are slim to none, since updates only happen at end of day, usually the next day.
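A rough sketch of Job 1 under those assumptions (Go with database/sql and the streadway/amqp client; the table name, queue name, batch size, and the "queued" status value 3 are made up):

package jobs

import (
	"database/sql"

	"github.com/streadway/amqp"
)

// runJob1 picks up un-indexed rows (status = 0), publishes them to RabbitMQ for the
// bulk-indexing consumer, and marks them as queued so no job picks them up again.
func runJob1(db *sql.DB, ch *amqp.Channel) error {
	type row struct {
		id      int64
		payload []byte
	}

	rows, err := db.Query(`SELECT TOP 500 id, payload FROM documents WHERE status = 0`)
	if err != nil {
		return err
	}
	var batch []row
	for rows.Next() {
		var r row
		if err := rows.Scan(&r.id, &r.payload); err != nil {
			rows.Close()
			return err
		}
		batch = append(batch, r)
	}
	rows.Close()
	if err := rows.Err(); err != nil {
		return err
	}

	for _, r := range batch {
		// Hand the row to the consumer that bulk-inserts into Elasticsearch.
		err := ch.Publish("", "es-index", false, false, amqp.Publishing{
			ContentType: "application/json",
			Body:        r.payload,
		})
		if err != nil {
			return err
		}
		// Intermediate "queued" status so neither job picks the row up again
		// (parameter placeholder shown in go-mssqldb style).
		if _, err := db.Exec(`UPDATE documents SET status = 3 WHERE id = @p1`, r.id); err != nil {
			return err
		}
	}
	return nil
}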
So I know there are rivers (but they are being deprecated and probably not as flexible as an ETL tool).
I would like to bulk insert records from my SQL Server into Elasticsearch.
I could write a scheduled batch job of some sort, either with an ETL tool or anything else; it doesn't matter.
A query like select * from table where id > lastIdInsertedToElasticSearch would let me load the latest records into Elasticsearch at a scheduled interval.
But what if a record is updated in SQL Server? What would be a good pattern to track updated records in SQL Server and then push those updates into ES? I know ES keeps document versions when putting the same ID, but I can't seem to visualize a pattern.
IMHO, batch inserts are good for building or rebuilding the index. So for the first run, you can use batch jobs that run SQL queries and perform bulk updates. Rivers, as you correctly pointed out, don't provide a lot of flexibility in terms of transformation.
If the entries in your SQL data store are created by you (i.e. by some codebase in your control), it would be better for that same codebase to also update the documents in Elasticsearch: maybe not directly, but by notifying some other service or via queues, so as not to waste time while responding to requests (if that's the kind of setup you have).
We have a pretty similar use case for Elasticsearch. We provide search inside our app, which searches across different categories of data. Some of this data is actually created by the users of our app through the app itself, so we handle this easily: our app writes that data to our SQL data store and pushes the same data into RabbitMQ for indexing/updating in Elasticsearch. On the other side of RabbitMQ, we have a consumer written in Python that basically replaces the entire document in Elasticsearch. The corresponding rows in our SQL data store and documents in Elasticsearch share the same ID, which enables us to update the document.
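Roughly, that consumer looks like the sketch below (ours is written in Python; this version is in Go, using the streadway/amqp and olivere/elastic packages, with made-up queue and index names). Indexing under the shared ID replaces the whole document:

package consumer

import (
	"context"
	"encoding/json"

	elastic "github.com/olivere/elastic"
	"github.com/streadway/amqp"
)

// consumeAndReindex reads documents from RabbitMQ and replaces them in Elasticsearch.
func consumeAndReindex(ch *amqp.Channel, es *elastic.Client) error {
	deliveries, err := ch.Consume("es-updates", "", true, false, false, false, nil)
	if err != nil {
		return err
	}
	for d := range deliveries {
		var doc struct {
			ID string `json:"id"`
		}
		if err := json.Unmarshal(d.Body, &doc); err != nil {
			continue // skip malformed messages
		}
		// Same ID as the SQL row: indexing replaces the existing document entirely.
		_, err := es.Index().
			Index("app-data").
			Type("doc").
			Id(doc.ID).
			BodyString(string(d.Body)).
			Do(context.Background())
		if err != nil {
			return err
		}
	}
	return nil
}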
Another case is where a few of the data types we search over come from a 3rd-party service that exposes the data over its HTTP API. The data creation is in our control, but we don't have an automated mechanism for updating the entries in Elasticsearch. In this case, we basically run a cron job that takes care of it. We have managed to tune the cron's schedule because we also have a limited API query quota. But in this case, our data is not really updated that much per day, so this kind of system works for us.
Disclaimer: I co-developed this solution.
I needed something like the jdbc-river that could do more complex "roll-ups" of data. After careful consideration of what it would take to modify the jdbc-river to suit my needs, I ended up writing the river-net.
Here are a few of the features:
It gets fairly decent performance (comparable to the jdbc-river; we get upwards of 6k rows/sec).
It can join many tables to create complex nested arrays of documents without creating duplicate child documents
It follows a lot of the same conventions as the jdbc-river.
It also supports reading from files.
It's written in C#
It uses Quartz.Net and supports cron expressions for scheduling.
This project is open source, and we already have a second project (also to be open sourced) that does generic job scheduling with RabbitMQ. We have ported over a lot of this project, and plan to move to the RabbitMQ river for better performance and stability when indexing into Elasticsearch.
To cope with large updates, we aren't hitting tables directly. Instead we use stored procedures that only grab deltas. We also have an option on the stored proc to reset the delta and reindex everything.
The project is fairly young with only a few commits, but we are open to collaboration and new ideas.