Writing Poseidon / Pedersen Hash based Time Locks on NEAR Protocol

I have worked on different types of time locks on Ethereum, Polkadot, Aeternity, Algorand, Cosmos, etc. I could not find a time lock contract or time lock bridge on NEAR Protocol yet. Can anyone suggest the best way to implement time locks on NEAR Protocol?

This should help: https://docs.near.org/docs/tokens/lockup#the-lockup-contract-at-near. Here is the example from the docs:
{
    "owner_account_id": "gio3gio.near", // the Owner account who is allowed to call methods on this one
    "lockup_duration": "0", // not necessary if the lockup_timestamp is used
    "lockup_timestamp": "1601769600000000000", // Unix timestamp for October 4th, 2020 at midnight UTC
    "transfers_information": {
        "TransfersDisabled": {
            "transfer_poll_account_id": "transfer-vote.near"
        }
    },
    "vesting_schedule": null,
    "release_duration": "31536000000000000", // 365 days
    "staking_pool_whitelist_account_id": "lockup-whitelist.near",
    "foundation_account_id": null
}
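If the stock lockup contract doesn't fit and you want your own lock logic, a timestamp check in a custom contract is straightforward. Below is a minimal, unaudited sketch assuming near-sdk-rs 4.x; the TimeLock struct, the claim method, and the field names are my own illustrations, not an official NEAR contract. Note that env::block_timestamp() uses nanoseconds since the Unix epoch, the same unit as the lockup_timestamp above.

use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::json_types::U64;
use near_sdk::{env, near_bindgen, AccountId, Balance, PanicOnDefault, Promise};

#[near_bindgen]
#[derive(BorshDeserialize, BorshSerialize, PanicOnDefault)]
pub struct TimeLock {
    beneficiary: AccountId,
    unlock_timestamp_ns: u64, // nanoseconds since the Unix epoch, same unit as env::block_timestamp()
    amount: Balance,
}

#[near_bindgen]
impl TimeLock {
    /// Locks the attached deposit for `beneficiary` until `unlock_timestamp_ns`.
    #[init]
    #[payable]
    pub fn new(beneficiary: AccountId, unlock_timestamp_ns: U64) -> Self {
        Self {
            beneficiary,
            unlock_timestamp_ns: unlock_timestamp_ns.0,
            amount: env::attached_deposit(),
        }
    }

    /// Releases the locked deposit once the block timestamp passes the unlock time.
    pub fn claim(&mut self) -> Promise {
        assert!(
            env::block_timestamp() >= self.unlock_timestamp_ns,
            "Funds are still locked"
        );
        let amount = self.amount;
        self.amount = 0;
        Promise::new(self.beneficiary.clone()).transfer(amount)
    }
}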

Related

Laravel tagging overhead leaving behind significantly large reference sets using redis

I am using Laravel 9 with the Redis cache driver. However, I have an issue where the internal standard_ref and forever_ref maps that Laravel uses to manage tagged cache exceed 10MB.
These maps consist of numerous keys, 95% of which have already expired/decayed and no longer exist; they seem to keep growing and have a TTL of -1 (never expire).
Other than "not using tags", has anyone else encountered and overcome this? I found this in the slow log of Redis Enterprise, which led me to realize this is happening.
I checked the key(s) via SCAN and can confirm it's a massive set of cache misses. It seems highly inefficient and expensive to constantly transmit 10MB back and forth to find one key within the map.
The following script quickly and efficiently removes expired keys from the SET data type that Laravel uses to manage tagged cache.
use Illuminate\Support\Facades\Cache;

function flushExpiredKeysFromSet(string $referenceKey): void
{
    /** @var \Illuminate\Cache\RedisStore $store */
    $store = Cache::store()->getStore();

    $lua = <<<LUA
local keys = redis.call('SMEMBERS', '%s')
local expired = {}
for i, key in ipairs(keys) do
    local ttl = redis.call('ttl', key)
    -- -2: the referenced key no longer exists, -1: it exists but has no TTL
    if ttl == -2 or ttl == -1 then
        table.insert(expired, key)
    end
end
if #expired > 0 then
    redis.call('SREM', '%s', unpack(expired))
end
LUA;

    // the reference key is interpolated into the script itself, so no KEYS are passed
    $store->connection()->eval(sprintf($lua, $referenceKey, $referenceKey), 0);
}
To show the calls that this Lua script generates from the sample above:
10:32:19.392 [0 lua] "SMEMBERS" "63c0176959499233797039:standard_ref{0}"
10:32:19.392 [0 lua] "ttl" "i-dont-expire-for-an-hour"
10:32:19.392 [0 lua] "ttl" "aa9465100adaf4d7d0a1d12c8e4a5b255364442d:i-have-expired{1}"
10:32:19.392 [0 lua] "SREM" "63c0176959499233797039:standard_ref{0}" "aa9465100adaf4d7d0a1d12c8e4a5b255364442d:i-have-expired{1}"
I use a custom cache driver that wraps the RedisTaggedCache class; when cache is added to a tag, I dispatch a job that runs the above PHP script, and a 24-hour cache lock ensures it runs only once within that period.
Here is how I obtain the reference key that is later passed into the cleanup script.
public function dispatchTidyEvent(mixed $ttl)
{
    $referenceKeyType = $ttl === null ? self::REFERENCE_KEY_FOREVER : self::REFERENCE_KEY_STANDARD;
    $lock = Cache::lock('tidy:'.$referenceKeyType, 60 * 60 * 24);

    // if we were able to get a lock, then dispatch the event
    if ($lock->get()) {
        foreach (explode('|', $this->tags->getNamespace()) as $segment) {
            dispatch(new \App\Events\CacheTidyEvent($this->referenceKey($segment, $referenceKeyType)));
        }
    }

    // otherwise, we'll just let the lock live out its life to prevent repeating this numerous times per day
    return true;
}
Remembering that a "cache lock" is simply a SET/GET, and Laravel already performs many of those on every request to manage its tags, adding a lock to achieve this "once per day" concept only adds negligible overhead.

Laravel dispatching Queues at set time

I am currently dispatching queued jobs to send API events instantly. During busy times, these queued jobs need to be held until overnight, when the API is less busy. How can I hold these queued jobs or schedule them to run only from 01:00 the following day?
The queued job call currently looks like:
EliQueueIdentity::dispatch($EliIdentity->id)->onQueue('eli');
There are other jobs on the same queue, all of which will need to be held during busy times.
Use delay to run the job at a certain time.
EliQueueIdentity::dispatch($EliIdentity->id)
    ->onQueue('eli')
    ->delay($this->scheduleDate());
Here is a helper for calculating the time, handling the edge case between 00:00 and 01:00, where it would otherwise delay the job by a whole day. Since how to determine "busy" is not specified, busy() below is a pseudo method for you to implement.
private function scheduleDate()
{
    $now = Carbon::now();

    if (! $this->busy()) {
        return $now;
    }

    // check for the edge case of 00:00:00 to 01:00:00
    if ($now->hour <= 1) {
        $now->setTime(1, 0, 0);

        return $now;
    }

    return Carbon::tomorrow()->addHour();
}
You can use delayed dispatching (see https://laravel.com/docs/6.x/queues#delayed-dispatching):
// Run it 10 minutes later:
EliQueueIdentity::dispatch($EliIdentity->id)->onQueue('eli')->delay(
    now()->addMinutes(10)
);
Or pass another Carbon instance, like:
// Run it at the end of the current week (I believe this is Sunday 23:59, haven't checked).
->delay(Carbon::now()->endOfWeek());
// Or run it on February 2nd, 2020 at 00:00.
->delay(Carbon::createFromFormat('Y-m-d', '2020-02-02'));
You get the picture.

How to partition Gobblin output to 30 min partitions?

We are planning to migrate from Camus to Gobblin. In Camus, we were using the below-mentioned configs:
etl.partitioner.class=com.linkedin.camus.etl.kafka.partitioner.TimeBasedPartitioner
etl.destination.path.topic.sub.dirformat=YYYY/MM/dd/HH/mm
etl.output.file.time.partition.mins=30
But in Gobblin we have configs as:
writer.file.path.type=tablename
writer.partition.level=minute (other options: daily,hourly..)
writer.partition.pattern=YYYY/MM/dd/HH/mm
This creates directories at the minute level, but we need 30-minute partitions.
I couldn't find much help in the official doc: http://gobblin.readthedocs.io/en/latest/miscellaneous/Camus-to-Gobblin-Migration/
Are there any other configs which can be used to achieve this?
I got a workaround by implementing a partitioning method inside a custom WriterPartitioner:
While fetching the record-level timestamp in the partitioner, we just need to return the processed timestamp in millis using the below-mentioned method.
public static long getPartition(long timeGranularityMs, long timestamp, DateTimeZone outputDateTimeZone) {
    long adjustedTimeStamp = outputDateTimeZone.convertUTCToLocal(timestamp);
    long partitionedTime = (adjustedTimeStamp / timeGranularityMs) * timeGranularityMs;
    return outputDateTimeZone.convertLocalToUTC(partitionedTime, false);
}
Now partitions are generated at the required time granularity.
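As a quick sanity check of the arithmetic, here is a small self-contained example (the class name, variable names, and sample timestamp are my own, not Gobblin APIs) that buckets a record timestamp into a 30-minute partition using the method above:

import org.joda.time.DateTimeZone;

public class ThirtyMinutePartitionExample {

    // the method from the answer above
    public static long getPartition(long timeGranularityMs, long timestamp, DateTimeZone outputDateTimeZone) {
        long adjustedTimeStamp = outputDateTimeZone.convertUTCToLocal(timestamp);
        long partitionedTime = (adjustedTimeStamp / timeGranularityMs) * timeGranularityMs;
        return outputDateTimeZone.convertLocalToUTC(partitionedTime, false);
    }

    public static void main(String[] args) {
        long thirtyMinutesMs = 30L * 60L * 1000L;             // partition granularity: 30 minutes
        long recordTimestampMs = System.currentTimeMillis();  // record-level timestamp in epoch millis
        long partitionMs = getPartition(thirtyMinutesMs, recordTimestampMs, DateTimeZone.UTC);
        // partitionMs lies on the preceding :00 or :30 boundary; formatting it with
        // writer.partition.pattern=YYYY/MM/dd/HH/mm yields the 30-minute output directory.
        System.out.println(partitionMs);
    }
}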

Sentry: Ignore Exceptions which happen between 4 and 5 o'clock

Between 4 and 5 o'clock a remote system is regularly down.
This means some cron jobs produce exceptions.
Is there a way to ignore these exceptions?
Exceptions before or after that time period are still important.
This is currently not possible with Sentry.
If you want, you can watch this GitHub Sentry issue: Mute whole projects in case of maintenance downtime #1517.
Actually, there is a workaround for that:
Sentry.init(options -> {
    options.setBeforeSend((event, hint) -> {
        // drop events raised while the remote system is down (between 04:00 and 05:00)
        int hour = java.time.LocalTime.now().getHour();
        if (hour == 4) {
            return null;
        }
        return event;
    });
});

Mixpanel - Bulk delete old users

I am about to be pushed into the next Mixpanel plan for having too many people, and I would like to delete some old users first.
Is there a simple way/script/API to bulk delete old users?
I've written two scripts that may come in handy: mixpanel-engage-query and mixpanel-engage-post.
Using the first script (query) you can query your People Data and get a list of profiles, e.g. all users who have $last_seen set to a date older than X months.
Using the second script (post) you can perform actions in batch on those profiles, for example deleting them. See the README for an example of how to perform a batch delete.
Yes, there is. Looking at the Mixpanel HTTP API spec, you'll find the following:
$delete (string): Permanently delete the profile from Mixpanel, along with all of its properties. The value is ignored - the profile is determined by the $distinct_id from the request itself.
// This removes the user 13793 from Mixpanel
{
    "$token": "36ada5b10da39a1347559321baf13063",
    "$distinct_id": "13793",
    "$delete": ""
}
Batch requests
Both the events endpoint at http://api.mixpanel.com/track/ and the profile update endpoint at http://api.mixpanel.com/engage/ accept batched updates. To send a batch of messages to an endpoint, you should use a POST instead of a GET request. Instead of sending a single JSON object as the data query parameter, send a JSON list of objects, base64 encoded, as the data parameter of an application/x-www-form-urlencoded POST request body.
// Here's a list of events
[
    {
        "event": "Signed Up",
        "properties": {
            "distinct_id": "13793",
            "token": "e3bc4100330c35722740fb8c6f5abddc",
            "Referred By": "Friend",
            "time": 1371002000
        }
    },
    {
        "event": "Uploaded Photo",
        "properties": {
            "distinct_id": "13793",
            "token": "e3bc4100330c35722740fb8c6f5abddc",
            "Topic": "Vacation",
            "time": 1371002104
        }
    }
]
Base64 encoded, the list becomes:
Ww0KICAgIHsNCiAgICAgICAgImV2ZW50IjogIlNpZ25lZCBVcCIsDQogICAgICAgICJwcm9wZXJ0aWVzIjogew0KICAgICAgICAgICAgImRpc3RpbmN0X2lkIjogIjEzNzkzIiwNCiAgICAgICAgICAgICJ0b2tlbiI6ICJlM2JjNDEwMDMzMGMzNTcyMjc0MGZiOGM2ZjVhYmRkYyIsDQogICAgICAgICAgICAiUmVmZXJyZWQgQnkiOiAiRnJpZW5kIiwNCiAgICAgICAgICAgICJ0aW1lIjogMTM3MTAwMjAwMA0KICAgICAgICB9DQogICAgfSwNCiAgICB7DQogICAgICAgICAiZXZlbnQiOiAiVXBsb2FkZWQgUGhvdG8iLA0KICAgICAgICAgICJwcm9wZXJ0aWVzIjogew0KICAgICAgICAgICAgICAiZGlzdGluY3RfaWQiOiAiMTM3OTMiLA0KICAgICAgICAgICAgICAidG9rZW4iOiAiZTNiYzQxMDAzMzBjMzU3MjI3NDBmYjhjNmY1YWJkZGMiLA0KICAgICAgICAgICAgICAiVG9waWMiOiAiVmFjYXRpb24iLA0KICAgICAgICAgICAgICAidGltZSI6IDEzNzEwMDIxMDQNCiAgICAgICAgICB9DQogICAgfQ0KXQ==
So the body of a POST request to send the events as a batch is:
data=Ww0KICAgIHsNCiAgICAgICAgImV2ZW50IjogIlNpZ25lZCBVcCIsDQogICAgICAgICJwcm9wZXJ0aWVzIjogew0KICAgICAgICAgICAgImRpc3RpbmN0X2lkIjogIjEzNzkzIiwNCiAgICAgICAgICAgICJ0b2tlbiI6ICJlM2JjNDEwMDMzMGMzNTcyMjc0MGZiOGM2ZjVhYmRkYyIsDQogICAgICAgICAgICAiUmVmZXJyZWQgQnkiOiAiRnJpZW5kIiwNCiAgICAgICAgICAgICJ0aW1lIjogMTM3MTAwMjAwMA0KICAgICAgICB9DQogICAgfSwNCiAgICB7DQogICAgICAgICAiZXZlbnQiOiAiVXBsb2FkZWQgUGhvdG8iLA0KICAgICAgICAgICJwcm9wZXJ0aWVzIjogew0KICAgICAgICAgICAgICAiZGlzdGluY3RfaWQiOiAiMTM3OTMiLA0KICAgICAgICAgICAgICAidG9rZW4iOiAiZTNiYzQxMDAzMzBjMzU3MjI3NDBmYjhjNmY1YWJkZGMiLA0KICAgICAgICAgICAgICAiVG9waWMiOiAiVmFjYXRpb24iLA0KICAgICAgICAgICAgICAidGltZSI6IDEzNzEwMDIxMDQNCiAgICAgICAgICB9DQogICAgfQ0KXQ==
Both endpoints will accept up to 50 messages in a single batch. Usually, batch requests will have a "time" property associated with events, or a "$time" attribute associated with profile updates.
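To tie this together, here is a rough sketch (my own, not from the Mixpanel docs; it assumes the Python requests library and reuses the placeholder token and distinct_id from the examples above) that builds such a batch of $delete updates and posts it to the /engage/ endpoint:

import base64
import json
import requests

MIXPANEL_TOKEN = "36ada5b10da39a1347559321baf13063"  # replace with your project token
distinct_ids = ["13793"]  # the profiles you want to remove

# One $delete message per profile; the endpoint accepts up to 50 messages per batch.
batch = [
    {"$token": MIXPANEL_TOKEN, "$distinct_id": distinct_id, "$delete": ""}
    for distinct_id in distinct_ids
]

# Base64-encode the JSON list and send it as the `data` parameter of a form-encoded POST.
payload = base64.b64encode(json.dumps(batch).encode("utf-8")).decode("ascii")
response = requests.post("https://api.mixpanel.com/engage/", data={"data": payload})
print(response.text)  # the endpoint responds with "1" on success and "0" on failure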
Using the mixpanel-api Python module
pip install mixpanel-api
This script will delete any profile that hasn't been seen since January 1st, 2019:
from mixpanel_api import Mixpanel
mixpanel = Mixpanel('MIXPANEL_SECRET', token='MIXPANEL_TOKEN')
deleted_count = mixpanel.people_delete(query_params={ 'selector' : 'user["$last_seen"]<"2019-01-01T00:00:00"'})
print(deleted_count)
Replace MIXPANEL_SECRET and MIXPANEL_TOKEN with your own project API secret and token.
Install the Mixpanel Python API:
pip install mixpanel-api
Create a Python file, delete_people.py, copy and paste the code below, and adjust it to your project configuration (i.e. secret, token, filter params, etc.).
from mixpanel_api import Mixpanel
from datetime import datetime

now = datetime.now()
current_time = now.strftime("%Y_%m_%d_%H_%M_%S")

if __name__ == '__main__':
    # Mixpanel project credentials
    credentials = {
        'API_secret': '<Your API Secret>',
        'token': '<Your API Token>',
    }

    # first we are going to make a Mixpanel object instance
    mlive = Mixpanel(credentials['API_secret'])

    # Mixpanel object with token to delete people
    ilive = Mixpanel(credentials['API_secret'], credentials['token'])

    # Prepare parameters for the delete condition
    # <filter_by_cohort_here> - get from the Mixpanel Explore UI, from the engage API XHR call (https://mixpanel.com/api/2.0/engage)
    parameters = {'filter_by_cohort': '<filter_by_cohort_here>', 'include_all_users': 'true', 'limit': 0}

    # Back up data before deleting
    print("\n Creating backup of data\n")
    mlive.export_people('backup_people_' + current_time + '.json', parameters)

    # Delete people using the parameters filter
    print("\n Backup Completed! Deleting Data\n")
    ilive.people_delete(query_params=parameters)
    print("\n Data Deleted Successfully\n")
Run the below command from the terminal:
python delete_people.py
Note: the people_delete method of the Mixpanel API will automatically create a backup_timestamp.json file in the same directory where you put this script.
