Copy data from one Chronicle queue to another

For backup purposes, I need to copy data from one Chronicle queue to another.
Would it be safe to copy the whole Bytes object directly from the wire of one queue into another?
Something like
documentContext().wire().bytes().read(byte_buffer)
and then wrapping this byte_buffer into a BytesStore and writing it as
documentContext().wire().bytes().write(byte_store)
The reason I'm doing this is to avoid any conversion back and forth into custom objects.

You can, but a simpler approach is to copy directly from one queue to the other:
ChronicleQueue inQ = SingleChronicleQueueBuilder.binary("in").build();
ExcerptTailer tailer = inQ.createTailer();
ChronicleQueue outQ = SingleChronicleQueueBuilder.binary("out").build();
ExcerptAppender appender = outQ.acquireAppender();
while (true) {
    try (DocumentContext inDC = tailer.readingDocument()) {
        if (!inDC.isPresent()) {
            // no message available
            break; // or pause, or do something else
        }
        try (DocumentContext outDC = appender.writingDocument()) {
            outDC.wire().write(inDC.wire().bytes());
        }
    }
}
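The idea of copying records as raw bytes, never deserializing into domain objects, can be sketched with plain JDK streams. This is an illustration of the technique over a hypothetical length-prefixed record format, not Chronicle's own API:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

class RawCopy {
    // Copy length-prefixed records verbatim, without decoding the payload.
    static void copyRecords(DataInputStream in, DataOutputStream out) throws IOException {
        while (true) {
            int len;
            try {
                len = in.readInt();      // 4-byte record length prefix
            } catch (EOFException e) {
                break;                   // no more records
            }
            byte[] payload = new byte[len];
            in.readFully(payload);       // raw bytes, no object conversion
            out.writeInt(len);
            out.write(payload);          // forwarded untouched
        }
        out.flush();
    }
}
```

Because the payload is never interpreted, the copy is format-agnostic and avoids the serialization round-trip the question is worried about.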

Chronicle Queue: How to read excerpts/documents with different WireKey?

Assume a Chronicle queue, and a producer that writes 2 types of messages into the queue.
Each type of message is written with a different "WireKey".
// Writes: {key1: TestMessage}
appender.writeDocument(w -> w.write("key1").text("TestMessage"));
// Writes: {key2: AnotherTextMessage}
appender.writeDocument(w -> w.write("key2").text("AnotherTextMessage"));
Question:
How can I write a single-threaded consumer that can read both types of messages and handle them differently?
What I've tried:
// This can read both types of messages, but cannot
// tell which type a message belongs to.
tailer.readDocument(wire -> {
    wire.read().text();
});

// This only reads "key1" messages and skips all "key2" messages.
tailer.readDocument(wire -> {
    wire.read("key1").text();
});

// This crashes. (because it advances the read position illegally?)
tailer.readDocument(wire -> {
    wire.read("key1").text();
    wire.read("key2").text();
});
I was hoping I can do something like wire.readKey() and get the WireKey of a document, then proceed to read the document and handle it dynamically. How can I do this?
Note: I'm aware this can be accomplished using methodReader and methodWriter, and it seems like documentation/demo recommends this approach (?) But I'd prefer not to use that API, and be explicit about reading and writing messages. I assume there has to be a way to accomplish this use case.
Thank you.
You are correct that MethodReader accomplishes this. You can do it in two ways.
The first uses a reused StringBuilder:
StringBuilder sb = new StringBuilder();
wire.read(sb); // populates the StringBuilder with the key
A more convenient method is:
String name = wire.readEvent(String.class);
switch (name) {
    case "key1":
        String text1 = wire.getValueIn().text();
        // do something with text1
        break;
    case "key2":
        String text2 = wire.getValueIn().text();
        // do something with text2
        break;
    default:
        // log unexpected key
}
For other readers who don't know about MethodReader: the same messages can be written and read with
interface MyEvents {
    void key1(String text1);
    void key2(String text2);
}

MyEvents me = wire.methodWriter(MyEvents.class);
me.key1("text1");
me.key2("text2");

MyEvents me2 = new MyEvents() {
    public void key1(String text1) {
        // handle text1
    }
    public void key2(String text2) {
        // handle text2
    }
};
MethodReader reader = wire.methodReader(me2);
do {
} while (reader.readOne());
NOTE: The content on the wire is the same, so you can mix and match the two options.
You can use a Chronicle Queue instead of a Wire to persist this information.
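For readers who want the explicit key-then-dispatch style without Chronicle on the classpath, the same pattern can be sketched with plain JDK types; every name below is illustrative, not a Chronicle API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

class KeyDispatch {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    private final StringBuilder log = new StringBuilder();

    KeyDispatch() {
        // One handler per event name, mirroring the switch on readEvent(...)
        handlers.put("key1", text -> log.append("key1:").append(text).append(';'));
        handlers.put("key2", text -> log.append("key2:").append(text).append(';'));
    }

    void dispatch(String key, String value) {
        Consumer<String> h = handlers.get(key);
        if (h != null) h.accept(value); // the default branch: ignore/log unknown keys
    }

    String log() { return log.toString(); }
}
```

A handler map scales better than a switch when the set of keys grows, at the cost of a small lookup per message.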

Download and save file from ClientRequest using ExchangeFunction in Project Reactor

I have a problem with correctly saving a file after its download completes in Project Reactor.

class HttpImageClientDownloader implements ImageClientDownloader {

    private final ExchangeFunction exchangeFunction;

    HttpImageClientDownloader() {
        this.exchangeFunction = ExchangeFunctions.create(new ReactorClientHttpConnector());
    }

    @Override
    public Mono<File> downloadImage(String url, Path destination) {
        ClientRequest clientRequest = ClientRequest.create(HttpMethod.GET, URI.create(url)).build();
        return exchangeFunction.exchange(clientRequest)
                .map(clientResponse -> clientResponse.body(BodyExtractors.toDataBuffers()))
                //.flatMapMany(clientResponse -> clientResponse.body(BodyExtractors.toDataBuffers()))
                .flatMap(dataBuffer -> {
                    AsynchronousFileChannel fileChannel = createFile(destination);
                    return DataBufferUtils
                            .write(dataBuffer, fileChannel, 0)
                            .publishOn(Schedulers.elastic())
                            .doOnNext(DataBufferUtils::release)
                            .then(Mono.just(destination.toFile()));
                });
    }

    private AsynchronousFileChannel createFile(Path path) {
        try {
            return AsynchronousFileChannel.open(path, StandardOpenOption.CREATE);
        } catch (Exception e) {
            throw new ImageDownloadException("Error while creating file: " + path, e);
        }
    }
}
So my questions are:
Is DataBufferUtils.write(dataBuffer, fileChannel, 0) blocking?
What about when the disk is slow?
And a second question: what happens when an ImageDownloadException occurs?
In doOnNext I want to release the given data buffer; is that a good place for this kind of operation?
I think this line could also be blocking:
.map(clientResponse -> clientResponse.body(BodyExtractors.toDataBuffers()))
Here's another (shorter) way to achieve that:
Flux<DataBuffer> data = this.webClient.get()
        .uri("/greeting")
        .retrieve()
        .bodyToFlux(DataBuffer.class);

Path file = Files.createTempFile("spring", null);
WritableByteChannel channel = Files.newByteChannel(file, StandardOpenOption.WRITE);
Mono<File> result = DataBufferUtils.write(data, channel)
        .map(DataBufferUtils::release)
        .then(Mono.just(file));
Now DataBufferUtils::write operations are not blocking because they use non-blocking IO with channels. Writing to such channels means it'll write whatever it can to the output buffer (i.e. may write all the DataBuffer or just part of it).
Using Flux::map or Flux::doOnNext is the right place to do that. But you're right, if an error occurs, you're still responsible for releasing the current buffer (and all the remaining ones). There might be something we can improve here in Spring Framework, please keep an eye on SPR-16782.
I don't see how your last sample shows anything blocking: all methods return reactive types and none are doing blocking I/O.
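The channel-based write that DataBufferUtils builds on can be illustrated with the JDK's own AsynchronousFileChannel, independent of Spring. The method name here is illustrative; the point is that the write itself is handed to the channel without blocking the caller:

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

class AsyncWrite {
    // Issue a write without blocking the calling thread; the Future completes
    // when the channel has finished the write.
    static void writeAsync(Path file, byte[] data) throws Exception {
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                file, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            Future<Integer> pending = ch.write(ByteBuffer.wrap(data), 0);
            // For the demo we wait here; a CompletionHandler callback would keep it fully non-blocking.
            pending.get();
        }
    }
}
```

Reactive pipelines use the CompletionHandler form internally, which is why a slow disk delays completion signals rather than stalling the event loop.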

Indy10, IdTCPClient's IOHandler send additional data

I am trying to send data to another machine. I use C++Builder XE6 and Indy10.
TMemoryStream *sms = new TMemoryStream();
sms->Write(msgData, msgSize);
Form1->IdTCPClient1->IOHandler->WriteBufferOpen();
Form1->IdTCPClient1->IOHandler->Write(sms, 0, true);
Form1->IdTCPClient1->IOHandler->WriteBufferFlush();
delete sms;
When I checked the sent data with Wireshark, I saw that additional data was sent along with msgData to the machine.
The additional data is "00 00 00 20", and it is at the head of the sent data.
Does IdTCPClient usually send additional data like this?
It is sending extra data because you are setting the AWriteByteCount parameter of Write(TStream) to true. 00 00 00 20 is the stream size in network byte order. msgSize is 0x00000020, aka 32. If you do not want the stream size sent, you need to set the AWriteByteCount parameter to false instead:
Form1->IdTCPClient1->IOHandler->Write(sms, 0, false);
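As a sanity check on the byte-order claim, here is how a 32-bit size of 32 looks as a four-byte big-endian (network order) prefix, produced with plain Java purely for illustration (ByteBuffer is big-endian by default):

```java
import java.nio.ByteBuffer;

class LengthPrefix {
    // Encode a stream size the way AWriteByteCount=true does: 4 bytes, network byte order.
    static byte[] prefix(int size) {
        return ByteBuffer.allocate(4).putInt(size).array();
    }
}
```

For size 32 (0x20) this yields exactly the bytes 00 00 00 20 seen in Wireshark.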
Also, you should be using WriteBufferClose() instead of WriteBufferFlush(), and do not forget to call WriteBufferCancel() if Write() raises an exception. WriteBufferClose() sends the buffered data to the socket and then closes the buffer so subsequent writes are not buffered. WriteBufferFlush() sends the buffered data to the socket, but does not close the buffer, thus subsequent writes will be buffered.
Also, you can simplify the overhead a little by replacing TMemoryStream with TIdMemoryBufferStream, so that you do not have to make a separate copy of your message data in memory:
TIdMemoryBufferStream *sms = new TIdMemoryBufferStream(msgData, msgSize);
try
{
    Form1->IdTCPClient1->IOHandler->WriteBufferOpen();
    try
    {
        Form1->IdTCPClient1->IOHandler->Write(sms, 0, true); // or false
        Form1->IdTCPClient1->IOHandler->WriteBufferClose();
    }
    catch (const Exception &)
    {
        Form1->IdTCPClient1->IOHandler->WriteBufferCancel();
        throw;
    }
}
__finally
{
    delete sms;
}
Alternatively, use a RAII approach:
class BufferIOWriting
{
private:
    TIdIOHandler *m_IO;
    bool m_Finished;

public:
    BufferIOWriting(TIdIOHandler *aIO) : m_IO(aIO), m_Finished(false)
    {
        m_IO->WriteBufferOpen();
    }

    ~BufferIOWriting()
    {
        if (m_Finished)
            m_IO->WriteBufferClose();
        else
            m_IO->WriteBufferCancel();
    }

    void Finished()
    {
        m_Finished = true;
    }
};

{
    std::auto_ptr<TIdMemoryBufferStream> sms(new TIdMemoryBufferStream(msgData, msgSize));
    BufferIOWriting buffer(Form1->IdTCPClient1->IOHandler);
    Form1->IdTCPClient1->IOHandler->Write(sms.get(), 0, true); // or false
    buffer.Finished();
}
With that said, I would suggest just getting rid of write buffering altogether:
TIdMemoryBufferStream *sms = new TIdMemoryBufferStream(msgData, msgSize);
try
{
    Form1->IdTCPClient1->IOHandler->Write(sms, 0, true); // or false
}
__finally
{
    delete sms;
}

Or:

{
    std::auto_ptr<TIdMemoryBufferStream> sms(new TIdMemoryBufferStream(msgData, msgSize));
    Form1->IdTCPClient1->IOHandler->Write(sms.get(), 0, true); // or false
}
Write buffering is useful when you need to make multiple related Write() calls that should have their data transmitted together in as few TCP frames as possible (in other words, letting the Nagle algorithm do its job better). For instance, if you need to Write() individual fields of your message. Write buffering does not make much sense to use when making a single Write() call, especially of such a small size. Let Write(TStream) handle its own buffering internally for you.
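The coalescing idea is not unique to Indy. As a loose JDK analogy (not the same API), BufferedOutputStream gathers many small field writes and hands the underlying stream one larger block, which is the same reason write buffering helps the Nagle algorithm:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

class CoalescedWrites {
    // Several small field writes end up as a single transfer to the underlying sink.
    static byte[] writeFields(byte[]... fields) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (BufferedOutputStream buffered = new BufferedOutputStream(sink, 8192)) {
            for (byte[] f : fields) {
                buffered.write(f); // accumulated in the buffer, not yet on the "wire"
            }
        } // close() flushes everything at once
        return sink.toByteArray();
    }
}
```

With a real socket stream the effect is fewer, fuller TCP segments instead of one tiny segment per field.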

Asynchronous image loading in AS3

I understand that images are to be loaded asynchronously in AS3, and that synchronisation should be handled using events and event listeners.
So, in a simple case, it would look like this:
var loader:Loader = new Loader();
var im_file:URLRequest = new URLRequest("imfile.png");
loader.load(im_file);
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, loading_complete);

function loading_complete(e:Event):void
{
    // ... do something with your loaded data ...
}
What I want to do is have a PreLoader class that will load all the images I need beforehand.
In that case, how do I let all the other classes know when the loading is done?
Do I dispatch events? What is the best practise in this case?
Thanks in advance,
Praskaton
Most likely you want to create a queue and add your image paths to the queue. Then after each image is done loading, you proceed to the next item in the queue. When all images are loaded, you dispatch a COMPLETE event or something similar to let your app know it's all done.
Check QueueLoader or Casalib for how they implement single or bulk image loading.
Adding to the answer that @Boon provided, this is how you could go about the actual setting up of the image queue.
Firstly, you need a list that will store all of the images that still need to be loaded. This makes it easy for you to define as many images as you want. It can be the 'queue':
var queue:Array = [
    "http://interfacelift.com/wallpaper/previews/03177_orionnebulaintheinfrared@2x.jpg",
    "http://interfacelift.com/wallpaper/previews/03175_purpleclouds@2x.jpg",
    "http://interfacelift.com/wallpaper/previews/03173_goodmorning2013@2x.jpg"
];
The next thing to do is set up what I would call the 'core' method of what we're doing. It will handle loading the next image as well as notifying us when the queue is empty. It looks something like this:
function loadNext():void
{
    if (queue.length > 0)
    {
        // Notice here that we use .pop() on the queue, which will select and
        // remove the last item from the queue.
        var req:URLRequest = new URLRequest(queue.pop());
        var photo:Loader = new Loader();
        photo.load(req);
        photo.contentLoaderInfo.addEventListener(Event.COMPLETE, loadComplete);
    }
    else
    {
        // The queue is finished - dispatch an event or whatever you fancy to
        // let the rest of the application know we're done here.
        trace("Queue finished.");
    }
}
And then of course our listener function to deal with the completion of loaded images. Notice here that we call loadNext() - this is the key to beginning the load of the next image in the queue only once the currently loading image has finished.
function loadComplete(e:Event):void
{
    addChild(e.target.content as Bitmap);

    // Begin loading the next image in the queue.
    loadNext();
}
And to start the process we of course just use this, which will either immediately notify us that the queue is finished if it's empty, or start loading the images in sequence.
// Start loading the queue.
loadNext();
Additional / tidy up:
If you want to be able to recycle this code or just tidy up, you can easily make this into a class. The class could be called ImageQueue and its structure will contain the above queue array, loadNext() method and loadComplete() method. It can also have an add() method for adding images to the queue initially in a cleaner manner.
Here is the foundation of that class, which you can finish up if you're interested:
public class ImageQueue
{
    private var _queue:Array = [];

    public function add(image:String):void { }
    public function loadNext():void { }
    private function _loadComplete(e:Event):void { }
}
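For comparison, the queue-then-load-next pattern can be expressed in Java with a simulated asynchronous loader; every name here is illustrative, and the BiConsumer stands in for the Loader-plus-listener pair:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.BiConsumer;

class ImageQueueSketch {
    private final Deque<String> queue = new ArrayDeque<>();
    private final BiConsumer<String, Runnable> loader; // (url, onComplete) stands in for Loader
    final List<String> loaded = new ArrayList<>();
    boolean finished;

    ImageQueueSketch(BiConsumer<String, Runnable> loader) {
        this.loader = loader;
    }

    void add(String url) {
        queue.addLast(url);
    }

    // Load one item; its completion callback triggers the next,
    // exactly as loadComplete() calls loadNext() in the AS3 version.
    void loadNext() {
        if (queue.isEmpty()) {
            finished = true; // queue done: the place to dispatch a COMPLETE event
            return;
        }
        String url = queue.pollFirst();
        loader.accept(url, () -> {
            loaded.add(url);
            loadNext();
        });
    }
}
```

The key design point carries over unchanged: only the completion callback advances the queue, so at most one load is in flight at a time.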

FIFO queue synchronization

Should a FIFO queue be synchronized if there is only one reader and one writer?
What do you mean by "synchronized"? If your reader & writer are in separate threads, you want the FIFO to handle the concurrency "correctly", including such details as:
proper use of FIFO API should never cause data structures to be corrupted
proper use of FIFO API should not cause deadlock (although there should be a mechanism for a reader to wait until there is something to read)
the objects read from the FIFO should be the same objects, in the same order, written to the FIFO (there shouldn't be missing objects or rearranged order)
there should be a bounded time (one would hope!) between when the writer puts something into the FIFO, and when it is available to the reader.
In the Java world there's a good book on this, Java Concurrency In Practice. There are multiple ways to implement a FIFO that handles concurrency correctly. The simplest implementations are blocking, more complex ones use non-blocking algorithms based on compare-and-swap instructions found on most processors these days.
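In Java, the blocking flavour described above is packaged as java.util.concurrent's BlockingQueue. A minimal single-writer/single-reader sketch (the helper method is illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SpscDemo {
    // One writer thread, one reader thread; the queue handles all synchronization.
    static int[] transfer(int[] items) throws InterruptedException {
        BlockingQueue<Integer> fifo = new ArrayBlockingQueue<>(4); // bounded: writer blocks when full
        int[] out = new int[items.length];
        Thread writer = new Thread(() -> {
            try {
                for (int i : items) fifo.put(i); // blocks while the queue is full
            } catch (InterruptedException ignored) {
            }
        });
        writer.start();
        for (int i = 0; i < out.length; i++) {
            out[i] = fifo.take(); // blocks until the writer provides data
        }
        writer.join();
        return out;
    }
}
```

The bounded capacity also gives the back-pressure property listed above: a slow reader eventually makes the writer wait rather than letting the queue grow without limit.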
Yes, if the reader and writer interact with the FIFO queue from different threads.
Depending on the implementation, but most likely. You don't want the reader to read partially written data.
Yes, unless its documentation explicitly says otherwise.
(It is possible to implement a specialized FIFO that doesn't need synchronization if there is only one reader and one writer thread, e.g. on Windows using InterlockedXXX functions.)
Try this code for concurrent FIFO usage:
public class MyObjectQueue {

    private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private static final ReadLock readLock;
    private static final WriteLock writeLock;
    private static final LinkedList<MyObject> objects;

    static {
        readLock = lock.readLock();
        writeLock = lock.writeLock();
        objects = new LinkedList<MyObject>();
    }

    public static boolean put(MyObject p) {
        writeLock.lock();
        try {
            objects.push(p);
            return objects.contains(p);
        } finally {
            writeLock.unlock();
        }
    }

    public static boolean remove(MyObject p) {
        writeLock.lock();
        try {
            return objects.remove(p);
        } finally {
            writeLock.unlock();
        }
    }

    public static boolean contains(MyObject p) {
        readLock.lock();
        try {
            return objects.contains(p);
        } finally {
            readLock.unlock();
        }
    }

    public static MyObject get() {
        MyObject o = null;
        writeLock.lock();
        try {
            // push() adds at the head, so getLast() returns the oldest
            // element (note: it is not removed).
            o = objects.getLast();
        } catch (NoSuchElementException nse) {
            // list is empty
        } finally {
            writeLock.unlock();
        }
        return o;
    }
}
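If hand-rolled locking is not a requirement, the JDK already ships a lock-free (CAS-based) FIFO. A minimal sketch using ConcurrentLinkedQueue, with illustrative names:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class SimpleFifo {
    // Thread-safe FIFO with no explicit locks; poll() returns null when empty.
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();

    public void put(String s) {
        queue.offer(s);
    }

    public String get() {
        return queue.poll();
    }
}
```

Unlike a blocking queue, this one never makes the reader wait; callers must handle the empty (null) case themselves, so it suits polling-style readers.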
