Nested for loops in Stata

I am having trouble understanding why a for loop construction does not work. I am not really used to for loops, so I apologize if I am missing something basic. Anyhow, I appreciate any piece of advice you might have.
I am using a party-level dataset from the ParlGov project. I am trying to create a variable which captures how many times a party has been in government before the current observation. Time is important: the counter should be zero if a party has not been in government before, even if it entered government multiple times after the observation period. Parties are nested in countries and in cabinet dates.
The code is as follows:
use "http://eborbath.github.io/stackoverflow/loop.dta", clear //to get the data
If this does not work, I have also uploaded the data in CSV format; try:
import delimited "http://eborbath.github.io/stackoverflow/loop.csv", bindquote(strict) encoding(UTF-8) clear
The loop should go through each country-specific cabinet date, identify the previous observation and check if the party has already been in government. This is how far I have got:
gen date2=cab_date
gen gov_counter=0
levelsof country, local(countries) // to get to the unique values in countries
foreach c of local countries {
    preserve // I think I need this to "re-map" the unique cabinet dates in each country
    keep if country==`c'
    levelsof cab_date, local(dates) // to get to the unique cabinet dates in individual countries
    restore
    foreach i of local dates {
        egen min_date=min(date2) // this is to identify the previous cabinet date
        sort country party_id date2
        bysort country party_id: replace gov_counter=gov_counter+1 if date2==min_date & cabinet_party[_n-1]==1 // this should be the counter
        bysort country: replace date2=. if date2==min_date // this is to drop the observation which was counted
        drop min_date // before I restart the nested loop, so that it again gets to the minimum value in `dates'
    }
}
The code runs without error, but it does not do the job. Evidently there's a mistake somewhere; I am just not sure where.
BTW, it's a specific application of a problem I encounter very often: how do you count frequencies of distinct values in a multilevel data structure? This one is slightly more specific in that time matters, so it should not just sum all encounters. Let me know if you have an easier solution for this.
Thanks!

The problem with your loop is that it does not keep the replaced gov_counter after the loop. However, there is a much easier solution I'd recommend:
sort country party_id cab_date
by country party_id: gen gov_counter=sum(cabinet_party[_n-1])
This sorts the data into groups and then creates a sum by group, always up to (but not including) the current observation.
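For intuition, the running-sum logic of `sum(cabinet_party[_n-1])` can be mimicked outside Stata. This is a toy Python sketch with made-up rows (not the ParlGov data): within each country-party group, sorted by cabinet date, each observation's counter is the number of earlier observations where cabinet_party == 1.

```python
from itertools import groupby

# Toy rows: (country, party_id, cab_date, cabinet_party) -- made-up data,
# not the dataset from the question.
rows = [
    ("AT", 1, 1990, 1),
    ("AT", 1, 1994, 0),
    ("AT", 1, 1998, 1),
    ("AT", 1, 2002, 0),
    ("AT", 2, 1990, 0),
    ("AT", 2, 1994, 1),
]

rows.sort(key=lambda r: (r[0], r[1], r[2]))  # sort country party_id cab_date

gov_counter = []
for _, group in groupby(rows, key=lambda r: (r[0], r[1])):
    running = 0
    for r in group:
        gov_counter.append(running)  # counts PRIOR cabinet spells only
        running += r[3]              # current row counts for later rows

print(gov_counter)  # [0, 1, 1, 2, 0, 0]
```

The counter lags by one observation, exactly like `cabinet_party[_n-1]` inside `sum()`: a party's current spell in government is only counted from the next cabinet date onward.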

I would start here. I have stripped the comments so that we can look at the code. I have made some tiny cosmetic alterations.
foreach i of local dates {
    egen min_date = min(date2)
    sort country party_id date2
    bysort country party_id: replace gov_counter = gov_counter + 1 ///
        if date2 == min_date & cabinet_party[_n-1] == 1
    bysort country: replace date2 = . if date2 == min_date
    drop min_date
}
This loop includes no reference to the loop index i defined in the foreach statement. So, the code is the same and completely unaffected by the loop index. The variable min_date is just a constant for the dataset and the same each time around the loop. What does depend on how many times the loop is executed is how many times the counter is incremented.
The fallacy here appears to be a false analogy with constructs in other software, in which a loop automatically spawns separate calculations for different values of a loop index.
It's not illegal for loop contents never to refer to the loop index, as is easy to see:
forval j = 1/3 {
di "Hurray"
}
produces
Hurray
Hurray
Hurray
But if you want different calculations for different values of the loop index, that has to be explicit.

Related

How to restrict query result from multiple instances of overlapping date ranges in Django ORM

First off, I admit that I am not sure whether what I am trying to achieve is possible (or even logical). Still, I am putting forth this query (and if nothing else, I will at least be told that I need to redesign my table structure / business logic).
In a table (myValueTable) I have the following records:

Item | article | from_date  | to_date    | myStock
-----|---------|------------|------------|--------
1    | Paper   | 01/04/2021 | 31/12/9999 | 100
2    | Tray    | 12/04/2021 | 31/12/9999 | 12
3    | Paper   | 28/04/2021 | 31/12/9999 | 150
4    | Paper   | 06/05/2021 | 31/12/9999 | 130
As part of the underlying process, I need to find out the value (of field myStock) as of a particular date, say 30/04/2021 (assuming no inward / outward stock movement in the interim).
To that end, I have the following values:
varRefDate = 30/04/2021
varArticle = "Paper"
And my query goes something like this:
get_value = myValueTable.objects.filter(from_date__lte=varRefDate, to_date__gte=varRefDate).get(article=varArticle).myStock
which should translate to:
get_value = SELECT myStock FROM myValueTable WHERE varRefDate BETWEEN from_date AND to_date
But with this I am coming up with more than one result (actually THREE!).
How do I restrict the query result to get ONLY the 3rd instance i.e. the one with value "150" (for article = "paper")?
NOTE: The upper limit of date range (to_date) is being kept constant at 31/12/9999.
Edit
Solved it, in a roundabout manner. Instead of .get, I resorted to generating a values_list with the fields from_date and myStock. Using the count of objects returned, I appended to a list the date difference between from_date and the reference date (30/04/2021) together with the value of myStock, then sorted the list ascending. The first tuple in the sorted list has the smallest date difference and the corresponding myStock value, and that is the value I am searching for. Tested and works.
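The sorted-list workaround can likely be avoided: among the rows for the article whose from_date is on or before the reference date (and to_date on or after), the row with the latest from_date is the one in effect, so something like `myValueTable.objects.filter(article=varArticle, from_date__lte=varRefDate, to_date__gte=varRefDate).order_by('-from_date').first()` should return it directly. A plain-Python sketch of the same selection logic, with the table contents hard-coded from the question:

```python
from datetime import date

# Rows from the question: (item, article, from_date, to_date, myStock)
rows = [
    (1, "Paper", date(2021, 4, 1),  date(9999, 12, 31), 100),
    (2, "Tray",  date(2021, 4, 12), date(9999, 12, 31), 12),
    (3, "Paper", date(2021, 4, 28), date(9999, 12, 31), 150),
    (4, "Paper", date(2021, 5, 6),  date(9999, 12, 31), 130),
]

def stock_as_of(rows, article, ref_date):
    # Keep rows for the article that were in effect on ref_date...
    matches = [r for r in rows
               if r[1] == article and r[2] <= ref_date <= r[3]]
    # ...and pick the one with the latest from_date, i.e. the record
    # whose validity started closest before the reference date.
    return max(matches, key=lambda r: r[2])[4]

print(stock_as_of(rows, "Paper", date(2021, 4, 30)))  # 150
```

Item 4 (from_date 06/05/2021) is excluded by the date filter, and item 3 wins over item 1 because its from_date is later, so the query collapses to a single row without any post-processing.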

Wrong sorting while using Query function

I've been trying to put together a report about the quantity of breakdowns of products in our company. The problem is that the QUERY function is operating as normal, but the sorting order is, well, a bit strange.
The data I'm trying to sort is as follows (quantities are blacked out since I cannot share that information):
Raw data
First column: name of the product; second: its EAN code; third: breakdown rate for last year; last column: average breakdown rate. "b/d" means "brak danych", i.e. no data.
What I want to achieve is to get the end table with values sorted by average breakdown rate.
My query is as follows:
=query(Robocze!A2:D;"select A where A is not null and NOT D contains 'b/d' order by D desc")
Final result
As you can see, we have descending order, but there are strange artifacts, like the 33.33% after 4,00% and before 3,92%.
Why is that!?
Most likely column D holds text rather than numbers (mixing the percentages with the text "b/d" forces this), so QUERY sorts it lexicographically, which is why 33.33% lands between 4,00% and 3,92%. Coercing the column to numbers and sorting outside QUERY avoids this; try:
=INDEX(LAMBDA(x; SORT(x; INDEX(x;; 4)*1; 0))
(QUERY(Robocze!A2:D; "where A is not null and NOT D contains 'b/d'"; 0));; 4)

Sum attributes of relation tables after performing division to them

I couldn't come up with an appropriate title; excuse me for that.
The situation is the following:
I've got two tables: montages and orders, where Montage belongs to Order.
My goal is to build a single MySQL query which returns a single float value representing a sum over multiple montages. For each montage in the query I need to divide the budget of its order by the number of montages which belong to the same order. The result of this division should be an attribute of each montage. Finally, I want to sum those attributes and retrieve a single value.
I've tried a lot of variations of something like the following, but none seemed to be written in correct syntax, so I kept getting errors:
$sum = App\Montage::where(/*this doesn't matter*/)
->join('orders', 'montages.order_id', '=', 'orders.id') //join the orders table
->select('montages.*, orders.budget') //include the budget column
->selectRaw('count(* where order_id = order_id) as all') //count all the montages of the same order and assign that count to the current montage
->selectRaw('(orders.budget / all) as daily_payment') //divide the budget of the order by the count; store the result as `daily_payment`
->sum('daily_payment') //sum the daily payments
I'm really lost with the proper syntax and can't figure it out. I'd estimate that to be a rather trivial sql task for people who know their stuff, but unfortunately I don't seem to be one of them... Any help is greatly appreciated!
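Assuming tables shaped like the ones described (montages.order_id pointing at orders.id, and orders.budget), the desired number can be computed in plain SQL by joining each montage to a per-order montage count and summing budget / count. A runnable sqlite sketch with made-up data (the table and column names are taken from the question); in Eloquent the same SQL could be issued via DB::select or wrapped in selectRaw:

```python
import sqlite3

# In-memory stand-ins for the two tables described in the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, budget REAL);
CREATE TABLE montages (id INTEGER PRIMARY KEY, order_id INTEGER);
INSERT INTO orders VALUES (1, 300.0), (2, 100.0);
INSERT INTO montages VALUES (1, 1), (2, 1), (3, 1), (4, 2);
""")

# Each montage's share is its order's budget divided by how many montages
# belong to that order; the query then sums those shares.
(total,) = con.execute("""
SELECT SUM(o.budget / c.cnt)
FROM montages m
JOIN orders o ON o.id = m.order_id
JOIN (SELECT order_id, COUNT(*) AS cnt
      FROM montages GROUP BY order_id) c
  ON c.order_id = m.order_id
""").fetchone()

print(total)  # 3 * (300/3) + 1 * (100/1) = 400.0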

Power Pivot and Closing Price

I am trying to use power pivot to analyze a stock portfolio at any point in time.
The data model is:
transactions table with buy and sell transactions
historical_prices table with the closing price of each stock
security_lookup table with the symbol and other information about the stock (whether it’s a mutual fund, industry, large cap, etc.).
One to many relationships link the symbol column in security_lookup to the transactions and historical_prices tables.
I am able to get the cost basis to work correctly by doing sumx(transactions, quantity*price). However, I’m not able to get the current value of my holdings. I have a measure called “Current Price” which finds the most recent closing price by
Current Price :=
CALCULATE (
LASTNONBLANK ( Historical_prices[close], min[close] ),
FILTER (
Historical_Prices,
Historical_prices[date] = LASTDATE ( historical_prices[date] )
)
)
However, when I try to find the current value of a security by using
Current Value = sumx(transactions,transactions[quantity]*[Current Price])
the total is not accurate. I'd appreciate suggestions on a way to find the current value of a position. Preferably using sumx or an iterator function so that the subtotals are accurate.
The problem with your Current Value measure is that you are evaluating [Current Price] within the row context of the transactions table (since SUMX is an iterator), so it's only seeing the date associated with that row instead of the last date. Or more precisely, that row's date is the last date in the measure's filter context.
The simplest solution is probably to calculate the Current Price outside of the iterator using a variable and then pass that constant in so you don't have to worry about row and filter contexts.
Current Value =
VAR CurrentPrice = [Current Price]
RETURN SUMX(transactions, transactions[quantity] * CurrentPrice)
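The fix follows a general pattern worth internalizing: evaluate the context-sensitive value once, outside the iterator, and pass the resulting constant in. A rough Python analogy (toy numbers, not the actual model; `current_price` plays the role of the VAR):

```python
# Toy stand-ins: transactions as (quantity, price_on_that_row's_date),
# plus a single "current price" that should apply to every row.
transactions = [(10, 95.0), (5, 97.5), (8, 99.0)]
current_price = 101.0  # analogous to VAR CurrentPrice = [Current Price]

# Row-context trap: each row supplies its own price, so the total
# mixes prices from different dates.
wrong_total = sum(qty * row_price for qty, row_price in transactions)

# Fixed pattern: the constant is computed once and reused per row,
# analogous to SUMX(transactions, transactions[quantity] * CurrentPrice).
current_value = sum(qty * current_price for qty, _ in transactions)

print(current_value)  # (10 + 5 + 8) * 101.0 = 2323.0
```

Here `wrong_total` (2229.5) differs from `current_value` (2323.0) for exactly the reason the measure's total was inaccurate: inside the iterator, each row saw its own date's price rather than the latest one.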

How do I improve this Stored Procedure?

I have a question. Assume an assembly line where a bike goes through some tests, and the devices then send the information about each test to our database (in Oracle). I created this stored procedure; it works correctly for what I want, which is:
It gets a list of the first test (per type of test) that a bike has gone through. For instance, if a bike had 2 tests of the same type, it only shows the first one, and it shows it only when that first test is between the dates specified by the user. I also look 2 months back, because a bike cannot spend more than 2 months at the assembly line (I'm probably overestimating). If the user searches only 2 days, for instance, and I looked only between those days, I could leave out of my results a test made on a bike 3 or 4 days ago, and it gets worse if they search between hours.
As I said before, the sp works just fine, but I'm wondering if there's a way to optimize it.
Also consider that the table has around 7 million records by the end of the year, so I cannot query the whole year because it could get ugly.
Here's the main part of the stored procedure:
SELECT pid AS "bike_id",
TYPE AS "type",
stationnr AS "stationnr",
testtime AS "testtime",
rel2.releasenr AS "releasenr",
placedesc AS description,
tv.recordtime AS "recordtime",
To_char(tv.testtime, 'YYYY.MM.DD') AS "dategroup",
testcounts AS "testcounts",
tv.result AS "result",
progressive AS "PROGRESIVO"
FROM (SELECT l_bike_id AS pid,
l_testcounts AS testcounts,
To_char(l_testtime, 'yyyy-MM-dd hh24:mi:ss') AS testtimes,
testtime,
pl.code AS place,
t2.recordtime,
t2.releaseid,
t2.testresid,
t2.stationnr,
t2.result,
v.TYPE,
v.progressive,
v.prs,
pl.description AS placeDesc
FROM (SELECT v.bike_id AS l_bike_id,
v.TYPE AS l_type,
Min(t.testtime) AS l_testtime,
Count(t.testtime) AS l_testcounts
FROM result_test t
inner join bikes v
ON v.bike_id = t.pid
inner join result_release rel
ON t.releaseid = rel.releaseid
inner join resultconfig.places p
ON p.place = t.place
WHERE t.testtime >= Add_months(Trunc(p_startdate), -2)
GROUP BY v.bike_id,
v.TYPE,
p.code)p_bikelist
inner join result_test t2
ON p_bikelist.l_bike_id = t2.pid
AND p_bikelist.l_testtime = t2.testtime
inner join resultconfig.places pl
ON pl.place = t2.place
inner join bikes v
ON v.bike_id = t2.pid
inner join result_release rel2
ON t2.releaseid = rel2.releaseid
ORDER BY t2.pid)tv
inner join result_release rel2
ON tv.releaseid = rel2.releaseid
WHERE tv.testtime BETWEEN p_startdate AND p_enddate
ORDER BY testtime;
Thank you for answering!!
I'm struggling a bit to understand the business requirement from your English description. The wording suggests that this procedure is intended to work per bike, but I don't see any obvious bike_id parameter being supplied; instead, you appear to be returning the earliest result for all bikes tested between the given dates. Is that the aim? If it is designed to be run per bike, then ensure the bike id gets passed in and used early :)
There is some confusion about your data types. You convert testtime in result_test (presumably a DATE or TIMESTAMP column) into a string in the p_bikelist subquery, but then compare back to the original value in the tv subquery. You further use (presumably typed) parameters p_startdate and p_enddate to filter results. I strongly suspect the conversion in p_bikelist is unnecessary, and possibly a cause of index avoidance.
Finally, I don't get the add_months logic. By all means extend the window back in time to catch tests that finished within the window but started up to 2 months before the start date, but as written you will exclude those earlier starts anyway because of the condition on tv.testtime. Most likely you'd be better off shifting the start date earlier in the stored procedure with code like
l_assumedstart := add_months(p_startdate, -2);
and then using l_assumedstart in the query itself.
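The intended shape (scan a widened window to find each bike's true first test per type, then keep only the firsts that fall inside the user's window) can be sketched with simplified tables. The sqlite sketch below uses made-up data and a stripped-down schema, not the real one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE result_test (pid INTEGER, type TEXT, testtime TEXT);
-- bike 1 had two 'brake' tests; only the FIRST one should ever be reported
INSERT INTO result_test VALUES
  (1, 'brake', '2021-03-25 10:00:00'),
  (1, 'brake', '2021-04-02 09:00:00'),
  (2, 'brake', '2021-04-03 12:00:00');
""")

# l_assumedstart = add_months(p_startdate, -2): widen the scan window so an
# earlier first test is not mistaken for a later "first".
p_start, p_end = '2021-04-01 00:00:00', '2021-04-05 00:00:00'
l_assumedstart = '2021-02-01 00:00:00'

rows = con.execute("""
SELECT pid, type, MIN(testtime) AS first_test
FROM result_test
WHERE testtime >= ?
GROUP BY pid, type
HAVING MIN(testtime) BETWEEN ? AND ?
""", (l_assumedstart, p_start, p_end)).fetchall()

print(rows)  # bike 1's first brake test was in March, so only bike 2 qualifies
```

If the scan started at p_start instead of l_assumedstart, bike 1's April retest would be mis-computed as its first test and wrongly included, which is exactly the trap the add_months widening guards against.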
