I would like some help with a bit of recursive code that I need in order to traverse a graph stored as a collection in PL/SQL.
---------
|LHS|RHS|
---------
| 1 | 2 |
| 2 | 3 |
| 2 | 4 |
| 3 | 5 |
---------
Assuming 1 is the start node, I would like to be able to find 2-3 and 2-4 without looping through the entire collection to check each LHS. I know one solution is to use a global temporary table instead of a collection, but I would really like to avoid reading and writing to and from disk if at all possible.
Edit: The expected output for the above example would be an XML like this:
<1>
<2>
<3>
<5>
</5>
</3>
<4>
</4>
</2>
</1>
Thanks.
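One way to avoid scanning the whole collection is to index the edges by LHS once and then recurse over that index. Below is a minimal sketch of the idea, in Python for brevity (in PL/SQL the dict would correspond to an associative array of nested tables keyed by LHS; all names here are illustrative):

import collections

edges = [(1, 2), (2, 3), (2, 4), (3, 5)]

# build the index once: parent -> list of children
children = collections.defaultdict(list)
for lhs, rhs in edges:
    children[lhs].append(rhs)

def to_xml(node):
    # look up only this node's children, then recurse into each of them
    inner = ''.join(to_xml(child) for child in children[node])
    return f"<{node}>{inner}</{node}>"

print(to_xml(1))  # <1><2><3><5></5></3><4></4></2></1>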
I'm using JPA, and I have a table with the structure below:
+----+-------+-------------+
| ID | Title | OrderNumber |
+----+-------+-------------+
| 1  | Test  | 0           |
| 2  | Test2 | 1           |
+----+-------+-------------+
It's easy to order the list by OrderNumber in queries, but I haven't found an appropriate way to update/set its value during a reordering operation (the user can change the order of the list by drag and drop).
What is a suitable way to solve this problem without any stored procedure in the database?
I have a question about Laravel / Eloquent.
I have a table that has (amongst other things)
id
name (varchar)
position (int)
parent (int) -> id of the parent page (0 when the page is part of the main menu)
My goal didn't seem that difficult to me, but I can't get it right.
I need to do the following:
Page in position 1
Child 1
Child 2
Page in position 2
Page in position 3
Child 1
Page in position 4
Example Table:
+----+--------+------+
| ID |Position|Parent|
+----+--------+------+
| 1 | 3 | 0 |
| 2 | 2 | 0 |
| 3 | 3 | 6 |
| 4 | 1 | 6 |
| 5 | 2 | 6 |
| 6 | 1 | 0 |
+----+--------+------+
Should result in:
6
- 4
- 5
- 3
2
1
In fact, in my first foreach, I should be able to do this:
if $page->children->count() > 0
I have tried a lot of different things, in every way possible, but I can't get the result I want.
If someone has a solution for me, I will be happy and very grateful :D
Thank you in advance
If I understand the question correctly, you can read the table data in the controller and organize it in the view, where two nested loops handle the two levels you want to show.
Controller:
// assuming an Eloquent model named Page for the pages table described above
$items = Page::all();
return view('page', compact('items'));
View page.blade.php:
@foreach($items->where('parent', 0)->sortBy('position') as $menuFirstLevel)
    {{ $menuFirstLevel->id }}
    @foreach($items->where('parent', $menuFirstLevel->id)->sortBy('position') as $menuSecondLevel)
        - {{ $menuSecondLevel->id }}
    @endforeach
@endforeach
I have a drug analysis experiment that needs to generate a value based on a given drug database and a set of 1000 random experiments.
The original database looks like this, where the numbers in the columns represent the rank of each gene for that drug. This is a simplified version of the actual database; the real one has more drugs and more genes.
+-------+-------+-------+
| Genes | DrugA | DrugB |
+-------+-------+-------+
| A | 1 | 3 |
| B | 2 | 1 |
| C | 4 | 5 |
| D | 5 | 4 |
| E | 3 | 2 |
+-------+-------+-------+
A score is calculated based on the user's input, A and C, using the following formula:
# Compute Function
# ['A', 'C'] as array input
def computeFunction(array):
    # do some stuff with the array ...
    ...
The same formula is used for any provided values.
For the randomness test, each experiment requires the algorithm to provide randomized values of A and C, so both A and C can take any number from 1 to 5.
Now I have two methods of selecting values to generate the 1000 sets for the p-value calculation, but I need someone to point out whether one is better than the other, or whether there is any way to compare the two methods.
Method 1
Generate 1000 randomized databases based on the given database shown above, meaning each of the 1000 tables should contain a different set of value pairs.
Example of one database from the 1000 randomized databases:
+-------+-------+-------+
| Genes | DrugA | DrugB |
+-------+-------+-------+
| A | 2 | 3 |
| B | 4 | 4 |
| C | 3 | 2 |
| D | 1 | 5 |
| E | 5 | 1 |
+-------+-------+-------+
Next we perform computeFunction() with the new A and C values.
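A minimal sketch of how Method 1 might look (Python, with the example table above hard-coded; computeFunction itself is left out since its body isn't shown):

import random

genes = ['A', 'B', 'C', 'D', 'E']
original = {'DrugA': [1, 2, 4, 5, 3],   # ranks listed in gene order A..E
            'DrugB': [3, 1, 5, 4, 2]}

def randomize_database(db):
    # Method 1: shuffle every drug's rank column independently,
    # producing a completely new randomized database per experiment.
    return {drug: random.sample(ranks, len(ranks)) for drug, ranks in db.items()}

random_db = randomize_database(original)                        # one of the 1000 databases
new_a = [random_db[d][genes.index('A')] for d in random_db]     # new A ranks
new_c = [random_db[d][genes.index('C')] for d in random_db]     # new C ranks
# new_a and new_c would then be passed to computeFunction()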
Method 2
Pick random genes from the original database and use their values as the newly randomized gene values.
For example, we pick the values of E and B as the new values for A and C.
From the original database, E is 3 and B is 2.
So now A is 3 and C is 2, and we perform computeFunction() with the new A and C values.
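And a comparable sketch of Method 2 under the same assumptions, where the original ranks are kept and only the genes they are read from are randomized:

import random

genes = ['A', 'B', 'C', 'D', 'E']
original = {'DrugA': [1, 2, 4, 5, 3],   # same example table as above
            'DrugB': [3, 1, 5, 4, 2]}

def method2_values(db, n_inputs=2):
    # Method 2: pick random genes (e.g. E and B) and reuse their
    # original ranks as the new values for A and C.
    picked = random.sample(genes, n_inputs)
    return [[db[drug][genes.index(g)] for drug in db] for g in picked]

new_a, new_c = method2_values(original)   # passed to computeFunction()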
Summary
Since both methods produce completely randomized input, it seems to me that they will produce similar 1000-value outcomes. Is there any way I could prove they are similar?
How to actually use merge sort for large data sets?
Suppose that I have several sorted files with the following data:
1.txt
1
2
2
2.txt
3
4
5
3.txt
1
1
1
Suppose that we can't hold all files' contents in memory at the same time (let's say we can hold only two numbers from each file).
I heard that I can use some kind of R-way merge sort in this case, but I don't understand how I can actually do it.
As you see, the first iteration will give us the following sorted sequence:
1 1 1 2 3 4
We flush it to the output file. However, we will get 1 again (from 3.txt) on the next iteration, so the whole resulting sequence is wrong!
N-way merges are quite easy to explain. You open all the files, get the first element from each, and put them into a heap.
The algorithm then proceeds by popping the smallest element from the heap, writing it to your output buffer, and then reading the next element from the file that item originated from. Repeat until all files are empty.
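A minimal sketch of that heap-based merge in Python, assuming the three example files above with one number per line (names are just illustrative):

import heapq

def n_way_merge(paths, out_path):
    files = [open(p) for p in paths]
    heap = []
    # seed the heap with the first number from every file
    for i, f in enumerate(files):
        line = f.readline()
        if line:
            heapq.heappush(heap, (int(line), i))
    with open(out_path, 'w') as out:
        while heap:
            value, i = heapq.heappop(heap)   # smallest of the current heads
            out.write('%d\n' % value)
            line = files[i].readline()       # refill from the same file
            if line:
                heapq.heappush(heap, (int(line), i))
    for f in files:
        f.close()

n_way_merge(['1.txt', '2.txt', '3.txt'], 'sorted.txt')

The heap never holds more than one number per file, which fits the constraint of keeping only a couple of numbers from each file in memory at a time.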
Start by filling as many variables as you have files, one variable attached to each file. At each step, find the lowest of those values, flush it to the output, and refill that variable from the same file.
| 1.txt | 2.txt | 3.txt |
| 1 | 3 | 1 | output 1 refill from file 1
| 2 | 3 | 1 | output 1 refill from file 3
| 2 | 3 | 1 | output 1 refill from file 3
| 2 | 3 | 1 | output 1 refill from file 3
| 2 | 3 | nil | output 2 refill from file 1
| 2 | 3 | nil | output 2 refill from file 1
| nil | 3 | nil | output 3 refill from file 2
| nil | 4 | nil | output 4 refill from file 2
| nil | 5 | nil | output 5 refill from file 2
| nil | nil | nil | end
I am trying to retrieve a list of every descendant of each item.
I am not sure I am making sense, so I will try to explain.
Example Data:
ID | PID
--------
1 | 0
2 | 1
3 | 1
4 | 1
5 | 2
6 | 2
7 | 5
8 | 3
etc...
The desired results are:
ID | Descendant
---------------
1 | 1
1 | 2
1 | 3
1 | 4
...
2 | 2
2 | 5
2 | 6
2 | 7
3 | 3
3 | 8
etc...
This is currently being achieved by using a cursor to move through the data, inserting each descendant into a table, and then selecting from it.
I was wondering if there is a better way to do this; there must be a way to write a query that brings back the desired results.
If anyone has ideas, or has figured this out before, it would be very much appreciated. Ordering is not important, nor is the 1-1, 2-2 self-reference; it would be cool to have, but it is not crucial.
select connect_by_root(id) as ID, id as Descendant
from table1
connect by prior id = pid
order by 1, 2
fiddle
Here is my attempt! Not sure if I got you right!
select pid, connect_by_root(id) as descendant
from process
connect by id = prior pid
union all
select distinct pid, pid
from process
order by pid, descendant