Got Error "Command Get label of interval: not available for current selection" when trying to edit selections with certain labels - praat

I was trying to replace certain selections of sound with silences, but before I could get to the essential part, the "Get label of interval" command failed, so I was never able to check whether I'd obtained the correct output.
Error Message I've Got on Praat
I thought it was a problem with the toy audio I was playing with, so I switched to another file, but strangely enough I always got stuck at the third interval in the sequence. Screenshot available here:
The Third Interval was Strangely not Available
The code I'm using right now is quite simple, as shown below. There could be problems with how I process the intervals labelled "sounding", but that should not be relevant to the error I got here.
----------------------------------------------------------------------------
soundname$ = chooseReadFile$: "Open a sound file"
sound = Read from file: soundname$
gridname$ = chooseReadFile$: "Select the corresponding TextGrid file."
grid = Read from file: gridname$
tier = 1
soundEnd = Get total duration
numOfIntervals = Get number of intervals: tier
writeInfoLine: "Number of Intervals:", numOfIntervals
appendInfoLine: "**** Starting to Replace Selected Sounds... ****"
for interval from 1 to numOfIntervals
    appendInfoLine: "Interval#", interval
    label$ = Get label of interval: tier, interval
    appendInfoLine: "Interval#", interval, ": ", label$
    if label$ == "sounding"
        startTime = Get starting point: tier, interval
        endTime = Get end point: tier, interval
        newSilence = Create Sound from formula: "new_silence", 1, startTime, endTime, 44100, ~ 0
        before = Extract part: 0, startTime, "rectangular", 1, "no"
        after = Extract part: endTime, soundEnd, "rectangular", 1, "no"
        selectObject: before, newSilence, after
        new = Concatenate
        Write to WAV file... "new".wav
        removeObject: before, newSilence, after
    endif
endfor
appendInfoLine: "Finish!"
----------------------------------------------------------------------------
I suspected that my editing had shifted something in the TextGrid; however, since I only replaced the sounds with silences of the same length, there should be no impact on the sound or the TextGrid afterwards. But since I never got the chance to output a playable audio result, none of what I inferred above could be corroborated.
Thank you all so much for your kind attention and help in advance!
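For reference, that error in Praat generally means the command does not apply to the object that is currently selected: "Get label of interval" only works while a TextGrid is selected, but once the if-branch runs, the commands inside it (Create Sound from formula, Extract part, Concatenate) leave a Sound selected, so the next pass through the loop fails. That would explain why it always breaks right after the first "sounding" interval has been processed. Below is a minimal, untested sketch of the loop with the objects re-selected explicitly; only the selectObject lines and the save command differ from the original.
----------------------------------------------------------------------------
for interval from 1 to numOfIntervals
    # make sure the TextGrid is selected before querying it
    selectObject: grid
    label$ = Get label of interval: tier, interval
    appendInfoLine: "Interval#", interval, ": ", label$
    if label$ == "sounding"
        startTime = Get starting point: tier, interval
        endTime = Get end point: tier, interval
        newSilence = Create Sound from formula: "new_silence", 1, startTime, endTime, 44100, "0"
        # the Sound must be selected before extracting parts of it
        selectObject: sound
        before = Extract part: 0, startTime, "rectangular", 1, "no"
        selectObject: sound
        after = Extract part: endTime, soundEnd, "rectangular", 1, "no"
        selectObject: before, newSilence, after
        new = Concatenate
        # note: this overwrites the same file on every matching interval
        Save as WAV file: "new.wav"
        removeObject: before, newSilence, after
    endif
endfor
----------------------------------------------------------------------------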

Related

Google Ads API: How to Send Batch Requests?

I'm using Google Ads API v11 to upload conversions and adjust conversions.
I send hundreds of conversions each day and want to start sending batch requests instead.
I've followed Google's documentation and I upload/adjust conversions exactly the way it describes:
https://developers.google.com/google-ads/api/docs/conversions/upload-clicks
https://developers.google.com/google-ads/api/docs/conversions/upload-adjustments
I could not find any good explanation or example on how to send batch requests:
https://developers.google.com/google-ads/api/reference/rpc/v11/BatchJobService
Below is my code, an example of how I adjust hundreds of conversions.
An explanation of how to do so with batch requests would be much appreciated.
from datetime import datetime, timedelta

# Adjust the conversion value of an existing conversion, via Google Ads API
def adjust_offline_conversion(
        client,
        customer_id,
        conversion_action_id,
        gclid,
        conversion_date_time,
        adjustment_date_time,
        restatement_value,
        adjustment_type='RESTATEMENT'):
    # Check that gclid is valid string else exit the function
    if type(gclid) is not str:
        return None
    # Check if datetime or string, if string make as datetime
    if type(conversion_date_time) is str:
        conversion_date_time = datetime.strptime(conversion_date_time, '%Y-%m-%d %H:%M:%S')
    # Add 1 day forward to conversion time to avoid this error (as explained by Google: "The Offline Conversion cannot happen before the ad click. Add 1-2 days to your conversion time in your upload, or check that the time zone is properly set.")
    to_datetime_plus_one = conversion_date_time + timedelta(days=1)
    # If time is bigger than now, set as now (it will be enough to avoid the original google error, but to avoid a new error since google does not support future dates that are bigger than now)
    to_datetime_plus_one = to_datetime_plus_one if to_datetime_plus_one < datetime.utcnow() else datetime.utcnow()
    # We must convert datetime back to string + add time zone suffix (+00:00 or -00:00 this is utc) in order to work with google ads api
    adjusted_string_date = to_datetime_plus_one.strftime('%Y-%m-%d %H:%M:%S') + "+00:00"

    conversion_adjustment_type_enum = client.enums.ConversionAdjustmentTypeEnum
    # Determine the adjustment type.
    conversion_adjustment_type = conversion_adjustment_type_enum[adjustment_type].value

    # Associates conversion adjustments with the existing conversion action.
    # The GCLID should have been uploaded before with a conversion
    conversion_adjustment = client.get_type("ConversionAdjustment")
    conversion_action_service = client.get_service("ConversionActionService")
    conversion_adjustment.conversion_action = (
        conversion_action_service.conversion_action_path(
            customer_id, conversion_action_id
        )
    )
    conversion_adjustment.adjustment_type = conversion_adjustment_type
    conversion_adjustment.adjustment_date_time = adjustment_date_time.strftime('%Y-%m-%d %H:%M:%S') + "+00:00"

    # Set the Gclid Date
    conversion_adjustment.gclid_date_time_pair.gclid = gclid
    conversion_adjustment.gclid_date_time_pair.conversion_date_time = adjusted_string_date

    # Sets adjusted value for adjustment type RESTATEMENT.
    if conversion_adjustment_type == conversion_adjustment_type_enum.RESTATEMENT.value:
        conversion_adjustment.restatement_value.adjusted_value = float(restatement_value)

    conversion_adjustment_upload_service = client.get_service("ConversionAdjustmentUploadService")
    request = client.get_type("UploadConversionAdjustmentsRequest")
    request.customer_id = customer_id
    request.conversion_adjustments = [conversion_adjustment]
    request.partial_failure = True
    response = (
        conversion_adjustment_upload_service.upload_conversion_adjustments(
            request=request,
        )
    )
    conversion_adjustment_result = response.results[0]
    print(
        f"Uploaded conversion that occurred at "
        f'"{conversion_adjustment_result.adjustment_date_time}" '
        f"from Gclid "
        f'"{conversion_adjustment_result.gclid_date_time_pair.gclid}"'
        f' to "{conversion_adjustment_result.conversion_action}"'
    )

# Iterate every row (subscriber) and call the "adjust conversion" function for it
df.apply(lambda row: adjust_offline_conversion(client=client,
                                               customer_id=customer_id,
                                               conversion_action_id='xxxxxxx',
                                               gclid=row['click_id'],
                                               conversion_date_time=row['subscription_time'],
                                               adjustment_date_time=datetime.utcnow(),
                                               restatement_value=row['revenue']),
         axis=1)
I managed to solve it in the following way:
Conversion uploads and adjustments are not supported by Batch Processing, as they are not listed here.
However, it is possible to upload multiple conversions in one request, since the conversions[] field (a list) can be populated with several conversions, not just a single conversion as I had mistakenly thought.
So if you're uploading or adjusting conversions, you can simply send them in batches this way:
Instead of uploading one conversion:
request.conversions = [conversion]
Upload several:
request.conversions = [conversion_1, conversion_2, conversion_3...]
The same goes for the conversion adjustments upload:
request.conversion_adjustments = [conversion_adjustment_1, conversion_adjustment_2, conversion_adjustment_3...]
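For what it's worth, the same "several objects per request" idea can be taken a step further by building all of the adjustments first and then uploading them in chunks, instead of one request per DataFrame row as in the df.apply above. This is only a rough, untested sketch: build_adjustment stands in for the adjustment-building half of adjust_offline_conversion, and the chunk size of 500 is just an illustrative value kept below Google's per-request limit.
# Rough sketch: build every ConversionAdjustment first, then upload in chunks.
# build_adjustment and CHUNK_SIZE are illustrative names, not part of the API.
CHUNK_SIZE = 500

def upload_adjustments_in_batches(client, customer_id, adjustments):
    service = client.get_service("ConversionAdjustmentUploadService")
    for i in range(0, len(adjustments), CHUNK_SIZE):
        request = client.get_type("UploadConversionAdjustmentsRequest")
        request.customer_id = customer_id
        request.conversion_adjustments = adjustments[i:i + CHUNK_SIZE]
        request.partial_failure = True  # one bad row should not fail the whole chunk
        response = service.upload_conversion_adjustments(request=request)
        print(f"Chunk {i // CHUNK_SIZE + 1}: {len(response.results)} results")

# adjustments = [build_adjustment(client, customer_id, row) for _, row in df.iterrows()]
# upload_adjustments_in_batches(client, customer_id, adjustments)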

auto_arima() m value, and seasonal decomposition period parameter

I am working on ARIMA modelling. The data has hourly granularity, taken from 1st May 2022 till 8th June 2022. I am trying to forecast the next 30 days, i.e. 720 hours. I am running into trouble and getting confused by the doubts below; if anybody could provide pointers, that would be great.
I tried plotting the raw data and found no trend and no seasonality.
a) I checked seasonal_decompose() with a few period values: period=1 (consistent with my understanding that there should be no seasonality).
b) period=12 (just random, but why is it showing some seasonality? Even if I plot without a period, for which the default value is 7, it still shows seasonality. Why?)
I plotted this with the seasonality value set to False, since in the raw plot I do not see any seasons/trend, and I am getting the plot below. What should be concluded from it?
Then I thought of capturing this seasonality through resampling, by plotting a daily graph, and got further confused.
a) period=7 (the default for seasonal_decompose): again I can see a seasonality of 4 days when the raw plot does not show seasons.
The forecast for this resampled (daily) data is below.
I am now extremely clueless as to what to look at; the more I read, the more confused I get.
Below is the code that I am using.
import numpy as np
import pandas as pd
import pmdarima as pm
import matplotlib.pyplot as plt

df = pd.read_csv('~/Desktop/gru-scl/gru-scl-filtered.csv', index_col="time")
del df["Index"]
df.index = pd.to_datetime(df.index)

model = pm.auto_arima(df.bps, start_p=0, start_q=0,
                      test='adf',        # use adftest to find optimal 'd'
                      max_p=3, max_q=3,  # maximum p and q
                      m=24,              # frequency of series
                      d=None,            # let model determine 'd'
                      seasonal=False,    # No Seasonality
                      start_P=0,
                      D=0,
                      trace=True,
                      error_action='ignore',
                      suppress_warnings=True,
                      stepwise=True)

f_steps = 720
fc, confint = model.predict(n_periods=f_steps, return_conf_int=True)
fc_index = np.arange(len(df.bps), len(df.bps) + f_steps)

val = 0
for f in fc:
    val = val + f
mean = val / f_steps
print(mean)

# make series for plotting purpose
fc_series = pd.Series(fc, index=fc_index)
lower_series = pd.Series(confint[:, 0], index=fc_index)
upper_series = pd.Series(confint[:, 1], index=fc_index)

# Plot
plt.plot(df.bps, label="Actual values")
plt.plot(fc, color='darkgreen', label="Predicted values")
plt.fill_between(fc_index,
                 lower_series,
                 upper_series,
                 color='k', alpha=.15)
plt.legend(loc='upper left', fontsize=8)
plt.title('Forecast vs Actuals')
plt.xlabel("Hours since 1st May 2022")
plt.ylabel("Bps")
plt.show()
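A side note on the two parameters in the title, since they interact: for hourly data a daily cycle corresponds to period=24 in seasonal_decompose and m=24 in auto_arima, and m only has an effect when seasonal=True (with seasonal=False, as in the code above, it is effectively ignored). The sketch below is only illustrative and assumes the same hourly df.bps series loaded above: decompose at the daily period first, and if a clear seasonal component shows up, let auto_arima search a seasonal model.
# Illustrative sketch, assuming the same hourly df.bps series as above.
from statsmodels.tsa.seasonal import seasonal_decompose
import pmdarima as pm
import matplotlib.pyplot as plt

# period=24 tests for a daily cycle in hourly data (period=24*7 would test a weekly one)
decomp = seasonal_decompose(df.bps, period=24)
decomp.plot()
plt.show()

# m is only used when seasonal=True; note that m=24 can make the search noticeably slower
seasonal_model = pm.auto_arima(df.bps,
                               start_p=0, start_q=0,
                               max_p=3, max_q=3,
                               d=None, D=None,
                               seasonal=True, m=24,
                               stepwise=True,
                               trace=True,
                               error_action='ignore',
                               suppress_warnings=True)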

Pinescript duplicate alerts

I have created a very basic script in pinescript.
study(title='Renko Strat w/ Alerts', shorttitle='S_EURUSD_5_[MakisMooz]', overlay=true)
rc = close
buy_entry = rc[0] > rc[2]
sell_entry = rc[0] < rc[2]
alertcondition(buy_entry, title='BUY')
alertcondition(sell_entry, title='SELL')
plot(buy_entry/10)
The problem is that I get a lot of duplicate alerts. I want to edit this script so that I only get a 'Buy' alert when the previous alert was a 'Sell' alert, and vice versa. It seems like such a simple problem, but I have a hard time finding good sources to learn Pine Script. So, any help would be appreciated. :)
One way to solve duplicate alerts within the same candle is to use the "Once Per Bar Close" alert option. But for alternating alerts (Buy - Sell) you have to code it with different logic.
I suggest using version 3 (the version shown above the study line) rather than versions 1 and 2; you can accomplish the result with this logic:
buy_entry = 0.0
sell_entry = 0.0
buy_entry := rc[0] > rc[2] and sell_entry[1] == 0? 2.0 : sell_entry[1] > 0 ? 0.0 : buy_entry[1]
sell_entry := rc[0] < rc[2] and buy_entry[1] == 0 ? 2.0 : buy_entry[1] > 0 ? 0.0 : sell_entry[1]
alertcondition(crossover(buy_entry ,1) , title='BUY' )
alertcondition(crossover(sell_entry ,1), title='SELL')
You'll have to do it this way
if("Your buy condition here")
strategy.entry("Buy Alert",true,1)
if("Your sell condition here")
strategy.entry("Sell Alert",false,1)
This is a very basic form of it but it works.
You were getting duplicate alerts because the conditions were being fulfilled more often. But with strategy.entry(), this won't happen.
When the sell is triggered, as per paper trading, the quantity sold will be double (one to cut the long position and one to create a short position)
PS: You will have to add code to create the alerts, and declare this in strategy(), not study().
The simplest solution to this problem is to use the built-in crossover and crossunder functions.
They consider the entire series of (in this case) close values, only returning true at the moment they cross, rather than every single time a close is lower than the close two candles ago.
//#version=5
indicator(title='Renko Strat w/ Alerts', shorttitle='S_EURUSD_5_[MakisMooz]', overlay=true)
c = close
bool buy_entry = false
bool sell_entry = false
if ta.crossover(c[1], c[3])
    buy_entry := true
    alert('BUY')
if ta.crossunder(c[1], c[3])
    sell_entry := true
    alert('SELL')
plotchar(buy_entry, title='BUY', char='B', location=location.belowbar, color=color.green, offset=-1)
plotchar(sell_entry, title='SELL', char='S', location=location.abovebar, color=color.red, offset=-1)
It's important to note why I have changed the indices to 1 and 3, with an offset of -1 in the plotchar function. This will give the exact same signals as 0 and 2 with no offset.
The difference is that you will only see the character print on the chart when the candle actually closes rather than watch it flicker on and off the chart as the close price of the incomplete candle moves.

visual basic difference in dates between two lines in text file

I am new to VB Express and looking for a way to read two lines in a text file, get the difference between them, and loop till the end. It's a simple clock-in/clock-out system which stores each person's clock-on and clock-off times in a text file like so:
03/11/2014 09:55:02
03/11/2014 14:55:02
03/11/2014 16:55:02
03/11/2014 19:55:02
04/11/2014 09:00:02
04/11/2014 13:00:00
I know I can use DateDiff to get the time, but I only want to work out the difference between lines 1 and 2, then 3 and 4, and add them all up. Is it possible to do that without over-complicating things?
Hi guys, I have worked it out. I did it by reading the text file line by line in a loop. At the moment I have not put any validation in to catch people who have forgotten to clock out, but the basics are there:
Dim FILE_NAME As String = "times\08.txt"
Dim start As DateTime
Dim finish As DateTime
Dim duration As Long
Dim total As Long = 0
If System.IO.File.Exists(FILE_NAME) = True Then
    Dim objReader As New System.IO.StreamReader(FILE_NAME)
    Do While objReader.Peek() <> -1
        start = DateTime.Parse(objReader.ReadLine())
        finish = DateTime.Parse(objReader.ReadLine())
        duration = DateDiff(DateInterval.Minute, start, finish)
        total = duration + total
    Loop
    objReader.Close()
    Label2.Text = total.ToString()
End If
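For a bit more robustness than the loop above, here is a rough, untested sketch (same file and label names) that reads all the lines at once, pairs them up, and skips any pair that does not parse as a clock-in followed by a later clock-out:
' Rough sketch: pair the lines up and skip anything that does not parse
' as a clock-in followed by a later clock-out.
Dim lines() As String = System.IO.File.ReadAllLines("times\08.txt")
Dim totalMinutes As Double = 0
For i As Integer = 0 To lines.Length - 2 Step 2
    Dim clockIn As DateTime
    Dim clockOut As DateTime
    If DateTime.TryParse(lines(i), clockIn) AndAlso DateTime.TryParse(lines(i + 1), clockOut) AndAlso clockOut > clockIn Then
        totalMinutes += (clockOut - clockIn).TotalMinutes
    End If
Next
Label2.Text = totalMinutes.ToString()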

Calculate time remaining with different length of variables

I will have to admit the title of this question sucks... I couldn't get the best description out. Let me see if I can give an example.
I have about 2,700 customers who at one time had my software installed on their server; 1,500 or so still do. Basically what I have going on is an auto-diagnostics process to help weed out people who have uninstalled, or who have problems with the software that we can assist with. Currently we have a cURL request fetching their website, checking for our software and looking for a header in the response.
We have 8 different statuses that are returned
GREEN - Everything works (usually pretty quick 0.5 - 2 seconds)
RED - Software not found (usually the longest from 5 - 15 seconds)
BLUE - Software found but not activated (usually from 3 - 9 seconds)
YELLOW - Server IP mismatch (usually from 1 - 3 seconds)
ORANGE - Server IP mismatch and wrong software type (usually 5 - 10 seconds)
PURPLE - Activation key incorrect (usually within 2 seconds)
BLACK - Domain returns 404 - No longer exists (usually within a second)
UNK - Connection failed (usually due to our load balancer; VERY rare) (never encountered this yet)
Now basically what happens is a cronJob will start the process by pulling the domain and product type. It will then cURL the domain and start cycling through the status colors above.
While this is happening we have an AJAX page that returns the results so we can keep an eye on the status. The major problem is that the "time remaining" figure is so volatile that it is not a good estimate. Here is the current math:
# Number of accounts between NOW and when started
$completedAccounts = floor($parseData[2]*($parseData[1]/100));
# Number of seconds between NOW and when started
$completedTime = strtotime("now") - strtotime("$hour:$minute:$second");
# Avg number of seconds per account
$avgPerCompleted = $completedTime / $completedAccounts;
# Total number of remaining accounts to be scanned
$remainingAccounts = $parseData[2] - $completedAccounts;
# The total of seconds remaining for all of the remaining accounts
$remainingSeconds = $remainingAccounts * $avgPerCompleted;
$remainingTime = format_time($remainingSeconds, ":");
I could keep a count of all of the greens, reds, blues, etc., average how long each color takes, and then use that for the average time, although I don't believe that would give much better results.
With times that vary this much, any suggestions would be appreciated.
Thanks,
Jeff
OK, I believe I have figured it out. I had to create a class so I could calculate a simple linear regression over a period of time.
function calc() {
    $n = count($this->mDatas);
    $vSumXX = $vSumXY = $vSumX = $vSumY = 0;
    //var_dump($this->mDatas);
    $vCnt = 0; // for time-series, start at t=0
    foreach ($this->mDatas AS $vOne) {
        if (is_array($vOne)) { // x,y pair
            list($x,$y) = $vOne;
        } else { // time-series
            $x = $vCnt; $y = $vOne;
        } // fi
        $vSumXY += $x*$y;
        $vSumXX += $x*$x;
        $vSumX += $x;
        $vSumY += $y;
        $vCnt++;
    } // rof
    $vTop = ($n*$vSumXY - $vSumX*$vSumY);
    $vBottom = ($n*$vSumXX - $vSumX*$vSumX);
    $a = $vBottom != 0 ? $vTop/$vBottom : 0;
    $b = ($vSumY - $a*$vSumX)/$n;
    //var_dump($a,$b);
    return array($a,$b);
}
I take each account and build up an array of the amount of time each one takes. The array then runs through this calculation, which builds the x and y time sets. Finally I run the array through the predict function.
/** given x, return the prediction y */
function calcpredict($x) {
    list($a,$b) = $this->calc();
    $y = $a*$x+$b;
    return $y;
}
I put static values in so you could see the results:
$eachTime = array(7,1,.5,12,11,6,3,.24,.12,.28,2,1,14,8,4,1,.15,1,12,3,8,4,5,8,.3,.2,.4,.6,4,5);
$forecastProcess = new Linear($eachTime);
$forecastTime = $forecastProcess->calcpredict(5);
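For completeness, here is a minimal sketch of the class shell these two methods appear to live in; the class name comes from new Linear($eachTime) above, and the constructor and $mDatas property are inferred rather than shown in the original:
// Inferred shell for the Linear class used above; calc() and calcpredict() go inside it.
class Linear {
    private $mDatas;

    public function __construct($datas) {
        $this->mDatas = $datas;
    }

    // ... calc() and calcpredict() from above ...
}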
This overall system gives me about a 0.003 difference with 10 accounts and about a 2.6 difference with 2,700 accounts. Next will be to calculate the accuracy.
Thanks for trying guys and gals
