XPath: select preceding and following sibling

HTML:
<div class="reviewText" id="game-review">
<h2>xWays Hoarder xSplit Review</h2>
<p>While other studios are busy studying Greek mythology or Ancient Rome for their backstory, NoLimit City is contemplating the amount of shit a male produces with an average life-expectancy of 76 years. The <strong>11,030x</strong> potential is literally a reflection of that number (in kilograms), and it triggers the ‘King of Waste’ big win feature when cracked.</p>
<p>Talk about being too successful to ‘give a shite’ anymore, but please do understand that we mean this in the best possible way. xWays Hoarder xSplit is another stunningly good release from the crafty studio, building on the already established and successful xWays feature, while also adding a brand new and innovative xSplit Wild to the mix.</p>
<div class="imgText">
<div class="parentImgText">
<div class="childImgText"><img alt="Reel-Screen" class="textImgBig blurring lazyloaded" data-src="https://slotcatalog.com/userfiles/image/games/Nolimit-City/20688/xWays-Hoarder-xSplit-6504018.jpg" title="Reel-Screen" src="https://slotcatalog.com/userfiles/image/games/Nolimit-City/20688/xWays-Hoarder-xSplit-6504018.jpg" width="826" height="464"></div>
</div>
<span class="titleImg">xWays Hoarder xSplit Slot - Reels Screen</span></div>
<h2 id="xways-hoarder-xsplit-features">xWays Hoarder xSplit Features</h2>
<p>xWays symbols can land on the 3 middle reels only, and they always reveal 2 to 4 instances of a randomly selected matching pay symbol. If you land multiple xWays symbols, they all reveal the same matching symbol.</p>
<p>The newcomer, xSplit Wilds, can land during base game play only, and this symbol will only appear on reels 3, 4 and 5. Whenever they land, all symbols to the left on the same row are split in two. If an xWays symbol is split by the xSplit Wild, the value of the xWays symbol is doubled for every split. The xSplit turns into a regular wild, and this wild can also be split by a new xSplit symbol.</p>
<p>You trigger the Bunker Raid Bonus Round when you land <strong>3 or 4 Fallout Scatters</strong> anywhere on reels 2 to 5. This awards <strong>7 or 10 Bunker Raid Spins</strong>, respectively. However, if a scatter is split by an xSplit Wild, it turns into a Super Scatter. If you trigger the bonus round with a Super Scatter involved, the Super Scatter and the symbols below turn into sticky xWays symbols.</p>
<p>All xWays symbols you land during the bonus round are sticky for the duration of the feature, and they also award <strong>+1 extra spin</strong>. The xWays symbols drop down to the bottom row, or the lowest xWays symbol on that reel, and it will then merge with any sticky xWays symbol already present.</p>
<p>xWays symbols are also collected in a meter, and you reach a new ‘Hoarder Level’ for every 3 you land. Each new Hoarder level removes the lowest remaining character (premium) and object (low value) symbol from the reels. Starting at level zero, the bonus round comes with 3 Hoarder levels on top of that.</p>
<p>If you manage to reach the top of the <strong>Hoarder meter</strong>, the whole feature shifts to the Wasteland Free Spins round. Only 4 symbols will land during this top-tier bonus round, namely the 2 highest value character/premium symbols and the 2 highest value object symbols.</p>
<p>The 9 xWays symbols you’ve gathered on the 3 middle reels will now merge to form a mega symbol that covers the entire 3 middle reels per free spin. Each of the merged xWays symbols still comes with a symbol count of x2 to x4, however, and these multiply with each other per spin to generate potentially massive payouts.</p>
<p>Non-UK players can take advantage of the <strong>Bonus Buy feature</strong> with the following options:</p>
<ul>
<li>Bunker Raid Spins with 7 free spins costs <strong>95x</strong> your stake with a <strong>96.38%</strong> RTP.</li>
<li>Bunker Raid Spins with 10 free spins costs <strong>180x</strong> your stake with a <strong>96.38%</strong> RTP.</li>
<li>Wasteland Free Spins with 7 free spins costs <strong>777x</strong> your stake with a <strong>96.68%</strong> RTP.</li>
<li>Random Mystery option costs <strong>218x</strong> your stake with a <strong>96.5%</strong> RTP.</li>
</ul>
<p>In this context, it’s good to keep in mind that the Bonus Round triggers organically once every 307 spins on average.</p>
<h2 id="the-200-spins-xways-hoarder-xsplit-experience">The 200 Spins xWays Hoarder xSplit Experience</h2>
<p>We purchased the random bonus round option pretty soon after the video starts, and it gave us 10 Bunker Raid Spins. We landed quite a few xWays symbols, which kept the feature going for a good while. One big win followed another, and we will definitely be back for more! Check it all out for yourself in the 8-and-a-half-minute highlights video below.</p>
I need to select all p and li tags after one h2 and before another h2.
My XPath is currently //div[@id='game-review']//h2/following-sibling::p[preceding-sibling::h2], which is not working. Right now it returns all the p elements between the headers, but I need to get the p elements for each h2 header.

This XPath will do that work:
//*[self::p or self::*/li][preceding-sibling::h2 and following-sibling::h2]
Explanation:
[preceding-sibling::h2 and following-sibling::h2] restricts the match to sibling elements that lie between two h2 tags.
//*[self::p or self::*/li] selects p nodes, or li sub-nodes of any kind of parent tag.
It works; you can verify it with any XPath online validator.
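As a quick sanity check, the answer's XPath can be run with lxml (assumed to be installed); the snippet below is a stripped-down, illustrative version of the HTML above:

```python
# Sanity check of the answer's XPath; lxml is assumed to be installed.
from lxml import html

snippet = """
<div id="game-review">
  <h2>First Heading</h2>
  <p>para one</p>
  <p>para two</p>
  <h2>Second Heading</h2>
  <ul><li>item one</li><li>item two</li></ul>
  <h2>Third Heading</h2>
</div>
"""

tree = html.fromstring(snippet)
nodes = tree.xpath(
    "//*[self::p or self::*/li]"
    "[preceding-sibling::h2 and following-sibling::h2]"
)
# The two <p> elements and the <ul> between h2 headings match.
```

Note that for list items the expression returns the parent ul (the element that has li children), not the li elements themselves, exactly as the explanation above says.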

Related

Get all possible valid positions of ships in battleship game

I'm creating a probability assistant for the Battleship game: in essence, for a given game state (field state and available ships), it would produce a field where every free cell is labelled with its probability of a hit.
My current approach is a Monte-Carlo-like computation: pick a random free cell, a random ship and a random rotation, check whether the placement is valid, and if so continue with the next ship from the available set. If the available set is empty, add the resulting ship arrangement to an output stack. Repeat this many times and use the outputs to compute the probability of each cell.
Is there a sane algorithm to enumerate all possible ship placements for a given field state?
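The Monte-Carlo computation described above can be sketched as follows; this is a toy version (the function name, board size and bounded retry count per ship are all illustrative) that samples complete placements and reports, per cell, the fraction of samples covering it:

```python
import random

def sample_placements(size, ships, blocked, trials=2000, seed=1):
    """Monte-Carlo heatmap: for each cell, the fraction of sampled
    complete placements in which a ship covers that cell.

    size    -- the board is size x size
    ships   -- lengths of the ships still afloat
    blocked -- set of (row, col) cells known to be misses
    """
    rng = random.Random(seed)
    counts = [[0] * size for _ in range(size)]
    valid = 0
    for _ in range(trials):
        occupied = set()
        placed_all = True
        for length in ships:
            for _attempt in range(100):   # bounded retries per ship
                horiz = rng.random() < 0.5
                r = rng.randrange(size if horiz else size - length + 1)
                c = rng.randrange(size - length + 1 if horiz else size)
                cells = [(r, c + i) if horiz else (r + i, c)
                         for i in range(length)]
                if all(cell not in blocked and cell not in occupied
                       for cell in cells):
                    occupied.update(cells)
                    break
            else:                          # no valid spot found: discard sample
                placed_all = False
                break
        if placed_all:
            valid += 1
            for r, c in occupied:
                counts[r][c] += 1
    return [[counts[r][c] / max(valid, 1) for c in range(size)]
            for r in range(size)]
```

Since every valid sample covers exactly as many cells as the ships' total length, the probabilities over the whole board sum to that total.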
An exact solution is possible, but it does not qualify as sane in my book.
Still, here is the idea.
There are many variants of the game, but let's say that we start with a worst case scenario of 1 ship of size 5, 2 of size 4, 3 of size 3 and 4 of size 2.
The "discovered state" of the board is all spots where shots have been taken, or ships have been discovered, plus the number of remaining ships. The discovered state naively requires 100 bits for the board (10x10, any cell can be shot) plus 1 bit for the count of remaining ships of size 5, 2 bits for the remaining ships of size 4, 2 bits for remaining ships of size 3 and 3 bits for remaining ships of size 2. That makes 108 bits, which fits in 14 bytes.
Now conceptually the idea is to figure out the map by shooting each square in turn in the first row, the second row, and so on, and recording the game state along with transitions. We can record the forward transitions and counts to find how many ways there are to get to any state.
Then find the end state of everything finished and all ships used and walk the transitions backwards to find how many ways there are to get from any state to the end state.
Now walk the data structure forward, knowing the probability of arriving at any state while on the way to the end, but this time we can figure out the probability of each way of finding a ship on each square as we go forward. Sum those and we have our probability heatmap.
Is this doable? In memory, no. In a distributed system it might be though.
Remember that I said that recording a state took 14 bytes? Adding a count to that takes another 8 bytes which takes us to 22 bytes. Adding the reverse count takes us to 30 bytes. My back of the envelope estimate is that at any point in our path there are on the order of a half-billion states we might be in with various ships left, killed ships sticking out and so on. That's 15 GB of data. Potentially for each of 100 squares. Which is 1.5 terabytes of data. Which we have to process in 3 passes.

Algorithm to organise calendar events using minimum positions

I want to build an algorithm that organizes calendar events to display positions.
Each event looks like this:
{
  title: 'A Title',
  start: aDate,
  end: anotherDate,
  position: aNumber
}
I want to achieve a layout similar to this (A & B have position 0, C & D position 1 and E position 2), or any other combination, but not use more positions than necessary.
Can anyone suggest which algorithm might do the trick of automatically assigning suitable positions to my events? (A name reference or pseudocode would be a lot of help.)
My thoughts so far are to keep track, in each event object, of the other overlapping events, and then somehow compare their positions/overlaps to get the number, but I can't quite figure it out.
Given several existing free lanes where you can place an event, any of them is an acceptable choice, in the sense that the total maximum number of required lanes will not be affected. Given no free lanes, there is only one choice: add a new lane.
Therefore, the problem is actually very simple: just place an event in the first free lane that you can find (or create a new one if none is currently free), and keep track of the lanes that are occupied and the times when they will be freed up.
This greedy approach could look as follows:
initialize a list of free lanes
for each event e,
1. check which occupied lanes are free for e.startTime
2. assign e.lane to a free lane, or add a new lane if none is free
3. mark the e.lane as occupied until e.endTime is reached
Step 2 can stick to the lowest-numbered free lane (to yield a more top-compact representation), or spread lanes out a bit (which may make aesthetic sense, although you will not know the total number of lanes required until after a first pass).
In any case, the algorithm only requires one pass, and minimal additional memory (keeping track of which lanes are occupied until what times).
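The greedy pass above can be sketched with two heaps, one for occupied lanes and one for released lane numbers (names are illustrative; events are assumed to be dicts with comparable start/end values):

```python
import heapq

def assign_positions(events):
    """Greedy lane assignment for the pseudocode above.
    Sets 'position' on each event; returns the number of lanes used."""
    busy = []          # min-heap of (end_time, lane) for occupied lanes
    free = []          # min-heap of released lane numbers
    next_lane = 0
    for e in sorted(events, key=lambda e: e['start']):
        # step 1: release every lane whose event ended by this start time
        while busy and busy[0][0] <= e['start']:
            _, lane = heapq.heappop(busy)
            heapq.heappush(free, lane)
        # step 2: reuse the lowest-numbered free lane, or open a new one
        if free:
            lane = heapq.heappop(free)
        else:
            lane = next_lane
            next_lane += 1
        e['position'] = lane
        # step 3: mark the lane occupied until this event ends
        heapq.heappush(busy, (e['end'], lane))
    return next_lane
```

With suitably overlapping times, the question's example comes out as A and B in position 0, C and D in position 1, and E in position 2, using three lanes in total.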

Reconciling display differences of RPresentation

RStudio seems to handle display of output slides inconsistently/poorly. How can I control the output so that the saved version of my slides matches what I see in RStudio?
This test document:
test
========================================================
author:
date:
autosize: true
Exponentials
========================================================
> "...King Shihram asked Sissa ben Dahir what reward he wanted... Sissa said that he would take this reward: the king should put one grain of wheat on the first square of a chessboard, two grains of wheat on the second square, four grains on the third square, eight grains on the fourth square, and so on...
> [The King] ordered his slaves to bring out the chessboard and they started putting on the wheat. Everything went well for a while, but the king was surprised to see that by the time they got halfway through the chessboard the 32nd square required more than four billion grains of wheat, or about 100,000 kilos of wheat...
> [T]o finish the chessboard you would need as much wheat as six times the weight of all the living things on Earth." - Story of [Ibn Khallikan](https://en.wikipedia.org/wiki/Ibn_Khallikan), _ca_. 1260 AD, [via](http://quatr.us/islam/literature/chesswheat.htm)
<!-- BREAKING UP QUOTE BLOCK -->
> "Humans don't understand exponential growth. If you fold a paper 50 times it goes to the moon and back." - Mark Zuckerberg [via](http://www.kazabyte.com/2011/12/we-dont-understand-exponential-functions.html)
It displays correctly in RStudio, but when I open it as a standalone HTML page, the text is overlarge and the quote box is narrower.
I'd like my Rpres to be viewable without necessarily needing RStudio on the local machine. How can I reconcile the intermediate output I see while working with the final product? That is, how can I be more sure of what the slides will look like on export while working in RStudio? What controls do I have at my disposal for manipulating the output of the slides?

Algorithm for most recently/often contacts for auto-complete?

We have an auto-complete list that's populated when you send an email to someone, which is all well and good until the list gets really big and you need to type more and more of an address to get to the one you want, which defeats the purpose of auto-complete.
I was thinking that some logic should be added so that the auto-complete results should be sorted by some function of most recently contacted or most often contacted rather than just alphabetical order.
What I want to know is if there's any known good algorithms for this kind of search, or if anyone has any suggestions.
I was thinking just a point system thing, with something like same day is 5 points, last three days is 4 points, last week is 3 points, last month is 2 points and last 6 months is 1 point. Then for most often, 25+ is 5 points, 15+ is 4, 10+ is 3, 5+ is 2, 2+ is 1. No real logic other than those numbers "feel" about right.
Other than just arbitrarily picked numbers does anyone have any input? Other numbers also welcome if you can give a reason why you think they're better than mine
Edit: This would be primarily in a business environment where recentness (yay for making up words) is often just as important as frequency. Also, past a certain point there really isn't much difference between say someone you talked to 80 times vs say 30 times.
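The bucket scheme proposed above can be written down directly; a minimal sketch, where the thresholds are the "feel about right" numbers from the question:

```python
def contact_score(days_since_last, total_sent):
    """Score a contact: recency points plus frequency points,
    higher scores sort earlier in the auto-complete list."""
    # recency: same day 5, last 3 days 4, last week 3,
    # last month 2, last 6 months 1
    recency_buckets = [(0, 5), (3, 4), (7, 3), (30, 2), (180, 1)]
    recency = next((pts for limit, pts in recency_buckets
                    if days_since_last <= limit), 0)
    # frequency: 25+ is 5, 15+ is 4, 10+ is 3, 5+ is 2, 2+ is 1
    freq_buckets = [(25, 5), (15, 4), (10, 3), (5, 2), (2, 1)]
    frequency = next((pts for floor, pts in freq_buckets
                      if total_sent >= floor), 0)
    return recency + frequency
```

For example, a contact emailed today with 30 mails total scores 10, while one emailed 5 days ago with a single mail scores 3.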
Take a look at Self organizing lists.
A quick and dirty look:
Move to Front Heuristic: a linked list such that whenever a node is selected, it is moved to the front of the list.
Frequency Heuristic: a linked list such that whenever a node is selected, its frequency count is incremented, and the node is then bubbled towards the front of the list, so that the most frequently accessed node sits at the head.
It looks like the move to front implementation would best suit your needs.
EDIT: When an address is selected, add one to its frequency and move it to the front of the group of nodes with the same weight (or (weight div x), for coarser groupings). I see aging as a real problem with your proposed implementation, in that it requires calculating a weight on each and every item. A self-organizing list is a good way to go, but the algorithm needs a bit of tweaking to do what you want.
Further Edit:
Aging refers to the fact that weights decrease over time, which means you need to know each and every time an address was used. Which means, that you have to have the entire email history available to you when you construct your list.
The issue is that we want to perform calculations (other than search) on a node only when it is actually accessed -- This gives us our statistical good performance.
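A minimal sketch of the move-to-front heuristic described above, with a plain Python list standing in for the linked list (class and method names are illustrative):

```python
class MoveToFrontList:
    """Self-organizing list using the move-to-front heuristic:
    each selected item is moved to the head, so recently used
    addresses surface first in auto-complete suggestions."""

    def __init__(self):
        self.items = []

    def access(self, item):
        # work happens only when an item is actually accessed
        if item in self.items:
            self.items.remove(item)
        self.items.insert(0, item)

    def suggestions(self, prefix):
        # matches come back in most-recently-used order
        return [i for i in self.items if i.startswith(prefix)]
```

The frequency heuristic would instead keep a counter per node and bubble a node forward only past nodes with lower counts.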
This kind of thing seems similar to what Firefox does when suggesting the site you are typing.
Unfortunately I don't know exactly how Firefox does it. A point system seems good as well; maybe you'll need to balance your points :)
I'd go for something similar to:
NoM = Number of Mail
(NoM sent to X today) + 1/2 * (NoM sent to X during the last week)/7 + 1/3 * (NoM sent to X during the last month)/30
Contacts you did not write to during the last month (the window could be changed) will have 0 points. You could then sort those by total NoM sent (since they are on the contact list :). They will be shown after the contacts with points > 0.
It's just an idea, anyway it is to give different importance to the most and just mailed contacts.
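That weighting can be written down directly; a minimal sketch, where the windows and weights are the ones proposed above:

```python
def nom_score(sent_today, sent_last_week, sent_last_month):
    """NoM = Number of Mail. Today's mail counts in full, the last
    week's daily average at half weight, the last month's at a third."""
    return (sent_today
            + 0.5 * sent_last_week / 7
            + (1 / 3) * sent_last_month / 30)
```

So a contact with 2 mails today, 7 over the last week and 30 over the last month scores 2 + 0.5 + 1/3.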
If you want to get crazy, mark the most 'active' emails in one of several ways:
Last access
Frequency of use
Contacts with pending sales
Direct bosses
Etc
Then, present the active emails at the top of the list. Pay attention to which "group" your user uses most. Switch to that sorting strategy exclusively after enough data is collected.
It's a lot of work but kind of fun...
Maybe count the number of emails sent to each address. Then:
ORDER BY EmailCount DESC, LastName, FirstName
That way, your most-often-used addresses come first, even if they haven't been used in a few days.
I like the idea of a point-based system, with points for recent use, frequency of use, and potentially other factors (prefer contacts in the local domain?).
I've worked on a few systems like this, and neither "most recently used" nor "most commonly used" work very well. The "most recent" can be a real pain if you accidentally mis-type something once. Alternatively, "most used" doesn't evolve much over time, if you had a lot of contact with somebody last year, but now your job has changed, for example.
Once you have the set of measurements you want to use, you could create an interactive application to test out different weights and see which ones give you the best results for some sample data.
This paper describes a single-parameter family of cache eviction policies that includes least recently used and least frequently used policies as special cases.
The parameter, lambda, ranges from 0 to 1. When lambda is 0 it performs exactly like an LFU cache, when lambda is 1 it performs exactly like an LRU cache. In between 0 and 1 it combines both recency and frequency information in a natural way.
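A hedged sketch of that blend: on each use, add 1 and decay the previous score by 2**(-lambda * elapsed). This mirrors the paper's combined recency/frequency value but simplifies the bookkeeping; at lambda = 0 it reduces to a pure use count (LFU-like), and for larger lambda old uses are forgotten quickly (LRU-like):

```python
def update_crf(old_score, steps_since_last_use, lam):
    """One-parameter recency/frequency blend in the spirit of the
    LRFU policy: each use contributes 1, and the previous score is
    decayed exponentially by the time elapsed since the last use."""
    return 1.0 + old_score * 2.0 ** (-lam * steps_since_last_use)
```

With lam = 0 the score is simply the number of uses; with lam = 1 a score goes stale within a few time steps of inactivity.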
In spite of an answer having been chosen, I want to submit my approach for consideration, and feedback.
I would account for frequency by incrementing a counter each use, but by some larger-than-one value, like 10 (To add precision to the second point).
I would account for recency by multiplying all counters at regular intervals (say, 24 hours) by some diminisher (say, 0.9).
Each use:
UPDATE `addresslist` SET `favor` = `favor` + 10 WHERE `address` = 'foo@bar.com'
Each interval:
UPDATE `addresslist` SET `favor` = FLOOR(`favor` * 0.9)
In this way I collapse both frequency and recency to one field, avoid the need for keeping a detailed history to derive {last day, last week, last month} and keep the math (mostly) integer.
The increment and diminisher would have to be adjusted to preference, of course.

Mahjong - Arrange tiles to ensure at least one path to victory, regardless of layout

Regardless of the layout being used for the tiles, is there any good way to divvy out the tiles so that you can guarantee the user that, at the beginning of the game, there exists at least one path to completing the puzzle and winning the game?
Obviously, depending on the user's moves, they can cut themselves off from winning. I just want to be able to always tell the user that the puzzle is winnable if they play well.
If you randomly place tiles at the beginning of the game, it's possible that the user could make a few moves and not be able to do any more. The knowledge that a puzzle is at least solvable should make it more fun to play.
Place all the tiles in reverse (i.e. lay out the board starting in the middle, working outwards).
To tease the player further, you could do it visibly but at very high speed.
Play the game in reverse.
Randomly lay out pieces pair by pair, in places where you could slide them into the heap. You'll need a way to know where you're allowed to place pieces in order to end up with a heap that matches some preset pattern, but you'd need that anyway.
I know this is an old question, but I came across this when solving the problem myself. None of the answers here are quite perfect, and several of them have complicated caveats or will break on pathological layouts. Here is my solution:
Solve the board (forward, not backward) with unmarked tiles. Remove two free tiles at a time. Push each pair you remove onto a "matched pair" stack. Often, this is all you need to do.
If you run into a dead end (numFreeTiles == 1), just reset your generator :) I have found I usually don't hit dead ends, and have so far seen a max retry count of 3 for the 10-or-so layouts I have tried. Once I hit 8 retries, I give up and just randomly assign the rest of the tiles. This allows me to use the same generator both for setting up the board and for the shuffle feature, even if the player screwed up and made a 100% unsolvable state.
Another solution when you hit a dead end is to back out (pop off the stack, replacing tiles on the board) until you can take a different path. Take a different path by making sure you match pairs that will remove the original blocking tile.
Unfortunately, depending on the board, this may loop forever. If you end up removing a pair that resembles a "no outlet" road, where all subsequent "roads" are a dead end, and there are multiple dead ends, your algorithm will never complete. I don't know if it is possible to design a board where this would be the case, but if so, there is still a solution.
To solve that bigger problem, treat each possible board state as a node in a DAG, with each selected pair being an edge on that graph. Do a random traversal, until you find a leaf node at depth 72. Keep track of your traversal history so that you never repeat a descent.
Since dead ends are more rare than first-try solutions in the layouts I have used, what immediately comes to mind is a hybrid solution. First try to solve it with minimal memory (store selected pairs on your stack). Once you've hit the first dead end, degrade to doing full marking/edge generation when visiting each node (lazy evaluation where possible).
I've done very little study of graph theory, though, so maybe there's a better solution to the DAG random traversal/search problem :)
Edit: You could actually use any of my solutions with generating the board in reverse, as in the Oct 13th 2008 post. You still have the same caveats, because you can still end up with dead ends. Generating a board in reverse has more complicated rules, though. E.g., you are guaranteed to fail your setup if you don't start at least SOME of your rows with the first piece in the middle, such as in a layout with one long row. Picking a completely random (legal) first move in a forward-solving generator is more likely to lead to a solvable board.
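A minimal sketch of the forward-solving generator for a simplified single-layer layout (real Mahjong layouts add the rule that a tile is only free when nothing rests on top of it; all names here are illustrative):

```python
import random

def deal_solvable(rows, pair_kinds, max_retries=50, seed=None):
    """Deal a guaranteed-solvable board by solving forward: repeatedly
    remove two random *free* cells and assign them the next matching
    pair; restart from scratch on a dead end.

    rows       -- list of row lengths for a single-layer layout
    pair_kinds -- one tile kind per pair (len == total cells // 2)
    Returns {(row, col): kind} on success, or None if every retry fails.
    """
    cells = [(r, c) for r, length in enumerate(rows) for c in range(length)]
    assert len(cells) % 2 == 0 and len(pair_kinds) == len(cells) // 2
    rng = random.Random(seed)
    for _ in range(max_retries):
        remaining = set(cells)
        assignment = {}
        kinds = iter(pair_kinds)
        while remaining:
            # a cell is free when its left or right neighbour is gone
            free = [(r, c) for (r, c) in remaining
                    if (r, c - 1) not in remaining
                    or (r, c + 1) not in remaining]
            if len(free) < 2:
                break                     # dead end: retry from scratch
            a, b = rng.sample(free, 2)
            assignment[a] = assignment[b] = next(kinds)
            remaining -= {a, b}
        if not remaining:
            return assignment
    return None
```

Because every pair was removed while both of its tiles were free, replaying the removals in reverse order is a guaranteed winning line, so the dealt board is solvable.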
The only thing I've been able to come up with is to place the tiles down in matching pairs, as a kind of reverse Mahjong Solitaire game. So, at any point during tile placement, the board should look like it's in the middle of a real game (i.e. no tiles floating 3 layers above other tiles).
If the tiles are placed in matching pairs in a reverse game, it should always result in at least one forward path to solve the game.
I'd love to hear other ideas.
I believe the best answer has already been pushed up: creating a set by solving it "in reverse" - i.e. starting with a blank board, then adding a pair somewhere, add another pair in a solvable position, and so on...
If you prefer a "Big Bang" approach (generating the whole set randomly at the beginning), are a very macho developer or just feel masochistic today, you could represent all the pairs you can take out of the given set, and how they depend on each other, via a directed graph.
From there, you'd only have to get the transitive closure of that set and determine if there's at least one path from at least one of the initial legal pairs that leads to the desired end (no tile pairs left).
Implementing this solution is left as an exercise to the reader :D
Here are the rules I used in my implementation.
When building the heap, for each tile of a pair separately, find cells (places) which:
have all cells at lower levels already filled;
are such that the place for the second tile does not block the first, taking into account the first tile if it is already on the board;
are both "at the edges" of the already-built heap:
EITHER they have at least one neighbour on the left or right side,
OR the tile is the first in its row (all cells to the right and left are recursively free).
These rules do not guarantee that a build will always succeed; they sometimes leave the last 2 free cells blocking each other, and the build has to be retried (or at least the last few tiles re-placed).
In practice, the "turtle" layout built in no more than 6 retries.
Most existing games seem to restrict placing the first ("first in row") tiles somewhere in the middle. This yields more convenient configurations, with no tiles at the edges of very long rows staying up until the last player moves. However, "the middle" is different for different configurations.
Good luck :)
P.S.
If you've found an algorithm that builds a solvable heap in one pass, please let me know.
You have 144 tiles in the game, and each of the 144 tiles has a block list
(the top tile of a stack has an empty block list).
All valid moves require that their current vertical block list be empty. This can be a 144x144 matrix, so about 20 KB of memory, plus a LEFT and a RIGHT block list, also about 20 KB each.
Generate a valid move table from (remaining_tiles) AND ((empty CURRENT VERTICAL BLOCK LIST) AND ((empty CURRENT LEFT BLOCK LIST) OR (empty CURRENT RIGHT BLOCK LIST))).
Pick 2 random tiles from the valid move table and record them.
Update the current tables (vertical, left and right), and record the removed tiles on a stack.
Now we have a list of moves that constitute a valid game. Assign matching tile types to each of the 72 moves.
For challenging games, track when each tile becomes available, and find sets that are (early, early, early, late) and (late, late, late, early); since the board starts blank, you find 1 EE, 1 LL and 2 LE blocks. Of the 2 LE blocks, find an EARLY that blocks ANY other EARLY (except right-blocking a left-side piece).
Once you've got a valid game, play around with the ordering.
Solitaire? Just a guess, but I would assume that your computer would need to beat the game (or come close to it) to determine this.
Another option might be to have several preset layouts (that allow winning) mixed in with your current level.
To some degree, you could try making sure that one of the 4 matching tiles is no more than X layers below another.
Most games I see have the shuffle command for when someone gets stuck.
I would try a mix of things and see what works best.
