Chronicle Queue Proxy Method Value is Same Object Instance Each Time - chronicle

I'm using Chronicle V4 proxy API to convert a message into a function call.
When myMethod(Thing a) is invoked after a readOne() call, the 'a' object instance ID is the same each time but the content has the latest state.
Imagine:
readOne();
readOne();
Methods fired:
myMethod(Thing a)
myMethod(Thing a)
The second call, where 'a' now has different state, overwrites any previously cached version of 'a' (say, in an in-memory HashMap), because the Java object instance is the same one that was passed when myMethod was invoked initially.
I'm hoping this is something odd in my setup; it would be good to know if this is by design or just an issue on my end.
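Roughly, the setup looks something like this (a cut-down sketch, not my actual code; ThingListener, Thing and the queue path are placeholders):

import net.openhft.chronicle.bytes.MethodReader;
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

interface ThingListener {
    void myMethod(Thing a);
}

try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary("my-queue").build()) {
    // the reader turns each serialized message back into a method call on the listener
    MethodReader reader = queue.createTailer().methodReader((ThingListener) a -> {
        // 'a' arrives here; caching it directly is where I hit the problem
    });

    reader.readOne();   // fires myMethod(a)
    reader.readOne();   // fires myMethod(a) again: same 'a' instance, new contents
}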

This is by design to provide implicit recycling of the object.
If you want a new object you can use Marshallable.deepCopy(), or use Marshallable.copyTo() to copy it into an existing one. Unless you retain the object, there shouldn't be an issue. If you write it out to another queue, for example, it is written immediately rather than in the background.
It is implemented this way so you can process millions of events while creating very few objects, i.e. less than one byte of garbage per message.
I highly recommend using the latest version of Chronicle Queue, currently v5.17.4: https://search.maven.org/search?q=g:net.openhft%20AND%20a:chronicle-queue
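For example, if you do need to keep events around (say, keyed in a map), take a copy inside the callback. A rough sketch, assuming Thing extends SelfDescribingMarshallable (so it inherits deepCopy()) and has a hypothetical key() accessor; 'queue' is the ChronicleQueue from your setup:

Map<String, Thing> latestByKey = new HashMap<>();   // java.util.Map / HashMap

MethodReader reader = queue.createTailer().methodReader((ThingListener) a -> {
    // 'a' is recycled between calls, so store a deep copy rather than the instance itself
    latestByKey.put(a.key(), a.deepCopy());
});

while (reader.readOne()) {
    // keep draining the queue; each callback has already stored its own copy
}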

Related

Creating object in Sinatra app that maintains data on new HTTP requests

I'm building a very simple app using Sinatra. I am not required to use persistent storage so I'm not using a database; however, I want to keep an object that contains a record of all my transactions. The object should not reinitialize when there is a new HTTP request.
I have tried putting an @transactions variable into an initialize method, and I've tried set :transactions, Transactions.new, both in my controller (neither of which worked). I just tried
configure do
  @@transactions = Transactions.new
end
and it's still saying the object is nil (the Transactions initialize method doesn't use params and initializes all instance variables, so nothing should be nil).
Are there other ways to accomplish this?
What you want is persistent storage. I know that you're trying to avoid it, but you should perhaps just bite the bullet.
You could invent something from scratch, for example a JSON file on the file system, but that's not really going to save you any time. Better is to use a memory-based store like Memcached, or an actual database like SQLite.
Using a class variable is a dead end, since class variables are not thread safe and will not persist after the program ends.

How to use std::unique_ptr for a std::list variable

I have a consumer-producer situation where I am constantly pushing data into a list, if it doesn't already exist there, and then every few seconds I package and send the data to a server.
At the moment I have a thread that sleeps for some number of seconds, wakes up, sets a flag so nothing is added to the list, does the packaging (deleting the processed items from the list), and then allows the program to start adding to the list again.
This was fine for a prototype but now I need to make it work better in a more realistic situation.
So I want the producer to gather the information and, when the list is large enough or enough time has elapsed, pass it to a thread to process.
I want to pass a reference to the list, and unique_ptr seems beneficial: once it is moved, the producer thread can just create a new list and, for all practical purposes, keep using it the same as before.
But when I tried to change my list from
typedef list<string> STRINGQUEUE;
STRINGQUEUE newMachineQueue;
to
std::unique_ptr<STRINGQUEUE> newMachineQueue;
Then I get errors that insert is not a member of std::unique_ptr.
I don't think I want to use newMachineQueue.get() and then do my operations as I believe I lose the benefits of unique_ptr then.
So, how can I use unique_ptr on a list and be able to call the methods in the list?
Just use it like you would use a pointer.
newMachineQueue->insert(...);
You might be interested in the documentation.
You also don't need a unique_ptr at all: you can just move the list out and reassign a new one to it.
void consumer(std::list<std::string> list) {
    // accept by value: the caller moves its list in
}

std::list<std::string> machineQueue;

// hand-off to consumer
consumer(std::move(machineQueue));
machineQueue = std::list<std::string>{}; // the moved-from list is replaced with a fresh, empty one

Way to persist ruby objects that get their state changed via game ticks

Context
I'm making a game in ruby. It has a model named Character with attributes like energy and money. A Character can have a behavior, e.g. sleeping. Finally the Character has a tick method that calls its behavior method sleep!. Simplified it looks like this:
class Character
  attr_accessor :energy

  def initialize(behavior)
    @current_behavior = behavior
  end

  def tick
    self.send "#{@current_behavior}!"
  end

  private

  def sleep!
    self.energy -= 1 if energy > 0
  end
end
As in a lot of games, the tick method of every Character needs to be invoked every n minutes. I want to use EventMachine for this: in a periodic timer it calls Character.tick_all, which should invoke the tick method of every Character instance.
Question
What kind of persistence engine can I use for this? On server startup I want to load all Character instances into memory so they can be ticked. For now it's OK if every instance gets persisted after its state changes because of a tick.
I've tried it with Rails and ActiveRecord, but that requires at least one read and one write for every tick, which seems like overkill.
Edit
I've looked into SuperModel. It seems to do exactly what I want, but its last commit was about a year ago...
For simple lookups and storage of data in memory, there are two choices as far as I know:
Memcached: This is one of the simplest key-value stores around, and lets you easily SET and GET values associated with keys. However, as soon as you kill the process all data is flushed/destroyed, which is a problem if you're storing user data.
Redis: Redis provides all the simple key-value lookup functionality of Memcached and much more. Apart from being great at working with hashes, Redis provides set functionality (no duplicates stored) and sorted set functionality (e.g. for scoring and ranking items). Also, Redis persists data to the file system, which is great when you need to restart a process once in a while.

How does Infinispan know that it has to take the changes from a DeltaAware object

We are using Infinispan, and in our system we have a big object to which we have to push small changes per transaction. I have implemented the DeltaAware interface for this object, and also the Delta. The problem I am facing is that the changes are not getting propagated to other nodes; only the initial object state is propagated. Also, the delta and commit methods are not called on the big object which implements DeltaAware. Do I need to register this object somewhere other than simply putting it in the cache?
Thanks
It's probably better if you simply use an AtomicHashMap, which is a construct within Infinispan. This allows you to group a series of key/value pairs as a single value. Infinispan can detect changes in this AtomicHashMap because it implements the DeltaAware interface. AHM is a higher-level construct than DeltaAware, and one that probably suits you better.
To give you an example where AtomicHashMaps are used, they're heavily used by JBoss AS7 HTTP session replication, where each session id is mapped to an AtomicHashMap. This means that we can detect when individual session data changes and only replicate that.
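Very roughly, usage looks something like this (a sketch only; the cache name and key are illustrative, and exception handling is omitted):

import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.atomic.AtomicMap;
import org.infinispan.atomic.AtomicMapLookup;

Cache<String, Object> cache = cacheManager.getCache("bigObjectCache"); // cacheManager from your existing setup

// returns the AtomicMap stored under that key, creating it if it doesn't exist yet
AtomicMap<String, String> bigObject = AtomicMapLookup.getAtomicMap(cache, "bigObject");

TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
tm.begin();
bigObject.put("changedField", "new value"); // only this change is replicated as a delta
tm.commit();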
Cheers,
Galder

Why do we send an id instead of the whole object to workers?

In Ruby the practice is to send an id instead of the object to workers. Isn't that a CPU-consuming approach, because we have to retrieve the object again from the database?
Several reasons:
It saves space on the queue, and also transfer time (app => queue, queue => workers).
Often it is easier to fetch a fresh object from the database (as opposed to retrieving a cached copy from the queue).
Arguments to Resque.enqueue must be JSON-serializable, and complex objects cannot always be serialized.
If you think about it, the reasons are pretty obvious:
your object may change between the time the action is queued and the time it is handled, and in general you don't want an outdated object.
an id is a lot lighter to transport than a whole object, which you would need to serialize in JSON/YAML or something else.
if you need the associations, the problem just gets even worse :)
But in the end it depends on your application; if you only need some information, you can just send it to your worker directly without even loading the full model.

Resources