libtorrent alerts - read_piece_alert

I have a multi-file torrent (3 files). I subscribed to read_piece_alert as explained here.
std::vector<alert*> alerts;
ses.pop_alerts(&alerts);

for (alert* a : alerts) {
    switch (a->type()) {
        case read_piece_alert::alert_type:
        {
            read_piece_alert* p = (read_piece_alert*)a;
            if (p->ec) {
                // read_piece failed
                break;
            }
            // use p
            break;
        }
        case file_renamed_alert::alert_type:
        {
            // etc...
            break;
        }
    }
}
How can I know to which file the piece belongs in the multi-file torrent?
For example my multi-file torrent has an .AVI, .TXT and .JPG. Is there some kind of index to know to which file the piece actually belongs?

Yes. You can map a piece index into one or more file indices + offsets with the map_block() function on file_storage. See the documentation.
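For example, inside the read_piece_alert case it could look roughly like this (a sketch only; the exact spellings of file_slice, map_block(), torrent_file() and files() may differ slightly between libtorrent versions, so check the docs for yours):

read_piece_alert* p = alert_cast<read_piece_alert>(a);
if (p && !p->ec)
{
    // file_storage knows how pieces are laid out across the files of the torrent
    file_storage const& fs = p->handle.torrent_file()->files();
    // map the whole piece (offset 0, p->size bytes) onto one or more file slices
    std::vector<file_slice> slices = fs.map_block(p->piece, 0, p->size);
    for (file_slice const& s : slices)
    {
        // s.file_index : which file these bytes belong to (the .AVI, .TXT or .JPG)
        // s.offset     : byte offset inside that file
        // s.size       : how many bytes of this piece fall into that file
        std::string path = fs.file_path(s.file_index);
        // ...
    }
}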

Related

How to handle saving and loading different versions of a file?

I'm making an application that saves files in a specific format. Future versions will add or change functionality, which will require the saved file format to change as well. Are there any techniques for handling this kind of "versioning"?
I would be interested in reading about approaches that make it possible to load all the file formats created by the different versions of the application.
My current idea is to store a version indicator in the saved file and use a distinct load function for every version that had its own format, tying them all into the current functionality of the latest version of the app.
This is mostly opinion based, so treat it as such... Here are my insights on the topic:
fileformat
You should have two identifiers: one for the file format (sometimes called a magic number) and a second one for the version. Both should be somewhere at the start of the file, usually encoded as ASCII so you can easily check them with Notepad or whatever.
It's also a good idea to store the data in chunks, each with its own type and version identifiers. A minimal header sketch follows below.
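For illustration, a header layout along these lines might be used (a sketch only; the magic value, field names and sizes are hypothetical, not taken from any particular format):

#include <cstdint>

#pragma pack(push,1)
struct file_header              // at the very start of the file
{
    char magic[4];              // file format identifier, e.g. "MYFF"
    char version[4];            // format version as ASCII digits, e.g. "0003"
};

struct chunk_header             // in front of every chunk of data
{
    char          type[4];      // chunk type identifier, e.g. "IMG ", "TXT "
    std::uint32_t version;      // version of this chunk's layout
    std::uint32_t size;         // payload size in bytes, so unknown chunks can be skipped
};
#pragma pack(pop)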
loader - single fileformat detector
I use this to check for a specific file format. The input is an array (small, usually 1 KByte) holding the first bytes of the file, the array size, and the file size. The function checks whether the file is a valid file of some type. This is used to autodetect the file format instead of relying on the file extension (a necessity on Windows and with low-grade users, as they often corrupt the file extension).
The function returns true/false after checking the identifiers (and/or file logic). A sketch of such a detector is shown below.
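For example, a detector could look roughly like this (a sketch only; the "MYFF" magic and the 8-byte header match the hypothetical header sketch above):

#include <cstring>

// sketch of an is_header-style detector for the hypothetical "MYFF" format
bool is_myff_header(const unsigned char *head,int head_size,int file_size)
{
    const int hdr_size=8;                               // 4 bytes magic + 4 bytes version
    if (head_size<hdr_size) return false;               // not enough bytes buffered to decide
    if (file_size<hdr_size) return false;               // file too small to be valid
    if (std::memcmp(head,"MYFF",4)!=0) return false;    // magic number mismatch
    // optionally sanity-check the ASCII version field at head+4 here
    return true;
}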
loader - single fileformat
This should load the file into your app. For multiple versions you have two options: either have separate code for each version, or one function partitioned with if statements like this:
if (version==???) ...
if (version>=???) ... else ...
if ((version>=???)&&(version<???)) ...
to manage the different parts.
I prefer the partitioned-code approach, as it's usually less code and easier to manage, because different versions usually add just some minor changes and most of the code stays the same. A rough sketch of this approach is shown below.
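Something along these lines (a sketch only; the my_document type, the chunk names and the version cut-offs are hypothetical, and it assumes the is_myff_header() detector sketched above):

struct my_document { /* ... your in-memory representation ... */ };

// sketch of a version-partitioned loader for the hypothetical "MYFF" format
bool load_myff(const unsigned char *dat,int siz,my_document &doc)
{
    if (!is_myff_header(dat,siz,siz)) return false;
    // version is stored as ASCII digits right after the magic, e.g. "0003" -> 3
    int version=0;
    for (int i=4;i<8;i++) version=(version*10)+(dat[i]-'0');
    // ... load the parts shared by every version here ...
    if (version>=2)
    {
        // ... load the metadata chunk that was added in version 2 ...
    }
    if ((version>=2)&&(version<4))
    {
        // ... load the legacy palette chunk that version 4 dropped ...
    }
    if (version>=4)
    {
        // ... load the palette chunk in its new version-4 layout ...
    }
    return true;
}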
loader - multi fileformat
Simply load the first bytes of the file into memory and check all the supported file formats using the function from #2. Once a file format is successfully detected, load the file using its loader function from #3. If no file format is detected, then fall back to the file extension ...
Here is a simple C++/VCL example of #4 from my SVG loader class:
bool decode_interface_class::load(AnsiString name)
{
    int hnd=-1;
    int siz=0,siz0=0;
    BYTE *dat=NULL;
    reset();
    #ifdef decode_interface_log
    decode_id.num=0;
    decode_log="";
    #endif
    decode_cfg =true;
    decode_col =true;
    decode_tool=true;
    decode_ext=ExtractFileExt(name).LowerCase();
    decoded_ext=".";
    decoded_info="";
    decode_emf emf;
    decode_wmf wmf;
    decode_dkr dkr;
    decode_dk3 dk3;
    decode_box box;
    decode_bxl bxl;
    decode_dxf dxf;
    decode_svg svg;
    decode_v2x v2x;
    decode_v2d v2d;
    const int _size=4096;
    BYTE head[_size];
    #ifdef decode_interface_log
    siz=0; // find minimal size
    if (siz<_decode_emf_hdr) siz=_decode_emf_hdr;
    if (siz<_decode_wmf_hdr) siz=_decode_wmf_hdr;
    if (siz<_decode_dkr_hdr) siz=_decode_dkr_hdr;
    if (siz<_decode_dk3_hdr) siz=_decode_dk3_hdr;
    if (siz<_decode_box_hdr) siz=_decode_box_hdr;
    if (siz<_decode_bxl_hdr) siz=_decode_bxl_hdr;
    if (siz<_decode_dxf_hdr) siz=_decode_dxf_hdr;
    if (siz<_decode_svg_hdr) siz=_decode_svg_hdr;
    if (siz<_decode_v2x_hdr) siz=_decode_v2x_hdr;
    if (siz<_decode_v2d_hdr) siz=_decode_v2d_hdr;
    if (siz>_size)
    {
        decode_log+="Decoding header size too small needed to be "+AnsiString(siz)+" Bytes.\r\n";
    }
    #endif
    hnd=FileOpen(name,fmOpenRead);
    if (hnd<0)
    {
        #ifdef decode_interface_log
        decode_log+="File "+name+" not found.\r\n";
        #endif
        return false;
    }
    siz=FileSeek(hnd,0,2);
    FileSeek(hnd,0,0);
    dat=new BYTE[siz];
    if (dat==NULL)
    {
        #ifdef decode_interface_log
        decode_log+="Not enough memory need: "+AnsiString(siz)+" Bytes.\r\n";
        #endif
        FileClose(hnd);
        return false;
    }
    siz0=siz;
    siz=FileRead(hnd,dat,siz);
    FileClose(hnd);
    if (siz!=siz0)
    {
        #ifdef decode_interface_log
        decode_log+="Disc drive or file system error.\r\n";
        #endif
    }
    this[0].filename=name;
    // file signature detection
    for (int i=0;i<_size;i++) if (i<siz) head[i]=dat[i]; else head[i]=0;
         if (emf.is_header(head,_size,siz)) { decoded_ext=_decode_emf_ext; emf.load(this[0],dat,siz); }
    else if (wmf.is_header(head,_size,siz)) { decoded_ext=_decode_wmf_ext; wmf.load(this[0],dat,siz); }
    else if (dkr.is_header(head,_size,siz)) { decoded_ext=_decode_dkr_ext; dkr.load(this[0],dat,siz); }
    else if (dk3.is_header(head,_size,siz)) { decoded_ext=_decode_dk3_ext; dk3.load(this[0],dat,siz); }
    else if (box.is_header(head,_size,siz)) { decoded_ext=_decode_box_ext; box.load(this[0],dat,siz); }
    else if (bxl.is_header(head,_size,siz)) { decoded_ext=_decode_bxl_ext; bxl.load(this[0],dat,siz); }
    else if (dxf.is_header(head,_size,siz)) { decoded_ext=_decode_dxf_ext; dxf.load(this[0],dat,siz); }
    else if (svg.is_header(head,_size,siz)) { decoded_ext=_decode_svg_ext; svg.load(this[0],dat,siz); }
    else if (v2x.is_header(head,_size,siz)) { decoded_ext=_decode_v2x_ext; v2x.load(this[0],dat,siz); }
    else if (v2d.is_header(head,_size,siz)) { decoded_ext=_decode_v2d_ext; v2d.load(this[0],dat,siz); }
    // if fail use file extension
    else if (decode_ext==_decode_emf_ext) { decoded_ext=_decode_emf_ext; emf.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_wmf_ext) { decoded_ext=_decode_wmf_ext; wmf.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_dkr_ext) { decoded_ext=_decode_dkr_ext; dkr.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_dk3_ext) { decoded_ext=_decode_dk3_ext; dk3.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_box_ext) { decoded_ext=_decode_box_ext; box.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_bxl_ext) { decoded_ext=_decode_bxl_ext; bxl.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_dxf_ext) { decoded_ext=_decode_dxf_ext; dxf.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_svg_ext) { decoded_ext=_decode_svg_ext; svg.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_v2x_ext) { decoded_ext=_decode_v2x_ext; v2x.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    else if (decode_ext==_decode_v2d_ext) { decoded_ext=_decode_v2d_ext; v2d.load(this[0],dat,siz); decoded_info="*"+decoded_info; }
    // if fail then error
    else{
        #ifdef decode_interface_log
        decode_log+="File "+name+" not recognized.\r\n";
        #endif
    }
    if (decode_cfg)
    {
        if (!decode_col )
        {
            if (decode_tool) set_cfgs (dk3_charaktool ,33);
            set_colors(dk3_charakcolor,33);
        }
        if (!decode_tool) set_tools (dk3_charaktool ,33);
    }
    #ifdef decode_interface_log
    if (decode_ext!=decoded_ext)
        decode_log+="Wrong file extension in "+name+" should be \""+decoded_ext+"\"\r\n";
    hnd=FileCreate(ExtractFilePath(Application->ExeName)+"svg_decode.log");
    FileWrite(hnd,decode_log.c_str(),decode_log.Length());
    FileClose(hnd);
    #endif
    compute();
    compute_objsize();
    if (dat) delete[] dat;
    return true;
}
Each file format has its header size _decode_???_hdr defined in bytes and a default file extension _decode_???_ext used for file-format detection. The ???.is_header(...) functions are #2 and the ???.load(...) functions are #3. I am loading from memory instead of using direct file access because it better suits my needs; however, this is not convenient for very large files.

Go to next page until element found

I'm using Cypress for my automated tests. I'm trying to find a product on a page and click on it. If the product is not displayed on the page, go to the next page until it is found.
I've already tried a lot of things: a while loop, an each loop, a simple cy.get, but none of them work. Can anyone help me solve this?
You'll need a recursive command, the implementation of which will depend on your specific scenario. There's no one-size-fits-all solution, but generally it will look something like this:
function findElem (targetSelector) {
    // first, we need to query a container (could be a table, or a generic item
    // container). The important thing is, this container must exist on every
    // page you'll traverse.
    cy.get('.someContainer').then( $container => {
        // synchronously find the target element, however you want. In this
        // example I use an attribute selector, but you can do whatever you
        // want.
        if ( $container.find(targetSelector).length ) {
            return true;
        } else {
            return false;
        }
    }).then( found => {
        if ( found ) {
            return cy.log(`found elem "${targetSelector}"`);
        } else {
            // synchronously check if there's a next page/table
            if ( Cypress.$('.nextPageButton').length ) {
                // once we know there is a next page, click it using cypress
                cy.get('.nextPageButton').click();
                // here, assert that the next page/table is loaded, possibly by
                // asserting that the current page/table is removed from the DOM
                // then, recurse
                return findElem(targetSelector);
            } else {
                throw new Error(`Couldn't find elem "${targetSelector}" on any page`);
            }
        }
    });
}
it('test', () => {
    findElem(`[data-id="42"]`);
});
The crux of the solution is using a combination of commands to query a container, and synchronous inspection (using jQuery or whatever) to find the actual element. Learn more at Conditional Testing.
For a concrete implementation, you can refer to an older answer I gave to a similar question.

Check if a specific cache tag exists

To check if a cache key exists, you can do this in Laravel:
use Cache;
if (Cache::has('key_name')) {
    // continue
}
Is there a way to do the same for tags?
// ! Pseudo code
if (Cache::hasTag('tag_name')) {
    // continue
}
Or, at least, to know whether there are any keys tagged with a specific tag?
// ! Pseudo code
if (Cache::tags('tag_name')->count() > 0) {
    // continue
}

Do terminal operations close the stream?

dirPath contains 200k files. I want to read them one by one and do some processing. The following snippet causes java.nio.file.FileSystemException: dirPath/file-N: Too many open files. Isn't the terminal operation forEach() supposed to close the open stream (i.e. the open file) before moving to the next one? In other words, do I have to add try-with-resources for the streamed files?
Files.list(dirPath)
    .forEach(filePath -> {
        Files.lines(filePath).forEach(line -> { ... });
    });
No, forEach does not close the stream (created by Files.list or Files.lines). It is documented in the javadoc, for example for Files.lines:
The returned stream encapsulates a Reader. If timely disposal of file system resources is required, the try-with-resources construct should be used to ensure that the stream's close method is invoked after the stream operations are completed.
A nested forEach is the wrong tool, in most cases.
The code
Files.list(dirPath).forEach(filePath -> Files.lines(filePath).forEach(line -> { ... }));
can and should be replaced by
Files.list(dirPath).flatMap(filePath -> Files.lines(filePath)).forEach(line -> { ... });
or well, since it’s not that easy in this case:
Files.list(dirPath).flatMap(filePath -> {
    try { return Files.lines(filePath); }
    catch(IOException ex) { throw new UncheckedIOException(ex); }
}).forEach(line -> { });
as a side-effect, you get the following for free:
Stream.flatMap(…):
Each mapped stream is closed after its contents have been placed into this stream.
So that’s the preferred solution. Or well, to make it entirely correct:
try(Stream<Path> dirStream = Files.list(dirPath)) {
    dirStream.flatMap(filePath -> {
        try { return Files.lines(filePath); }
        catch(IOException ex) { throw new UncheckedIOException(ex); }
    }).forEach(line -> { });
}

Restrict Google Places Autocomplete to return addresses only

autocomplete = new google.maps.places.Autocomplete(input, { types: ['geocode'] });
returns streets and cities amongst other larger areas. Is it possible to restrict to streets only?
This question is old, but I figured I'd add to it in case anyone else is having this issue. Restricting types to 'address' unfortunately does not achieve the expected result, as routes are still included. So what I decided to do is loop through the results and implement the following check:
result.predictions[i].types.includes('street_address')
Unfortunately, I was surprised to find that my own address was not being included, as it was returning the following types: { types: ['geocode', 'premise'] }
So I decided to use a counter: any result that includes 'geocode' or 'route' in its types must include at least one other type to be accepted (whether that be 'street_address', 'premise', or whatever). Thus, routes are excluded, and anything with a complete address will be included. It's not foolproof, but it works fairly well.
Loop through the result predictions, and implement the following:
if (result.predictions[i].types.includes('street_address')) {
    // Results that include 'street_address' should be included
    suggestions.push(result.predictions[i])
} else {
    // Results that don't include 'street_address' will go through the check
    var typeCounter = 0;
    if (result.predictions[i].types.includes('geocode')) {
        typeCounter++;
    }
    if (result.predictions[i].types.includes('route')) {
        typeCounter++;
    }
    if (result.predictions[i].types.length > typeCounter) {
        suggestions.push(result.predictions[i])
    }
}
I think what you want is { types: ['address'] }.
You can see this in action with this live sample: https://developers.google.com/maps/documentation/javascript/examples/places-autocomplete (use the "Addresses" radio button).
