How to create sharded-queue tube the right way? - tarantool

Let's say we have a Tarantool Cartridge-based service which stores posts made by users. When a user makes a new post, it is inserted into the corresponding space. Simultaneously, a task for notifying the user's friends about the new post is added to the sharded-queue tube notify_friends.
The question is about the creation of the tube notify_friends. Initially I planned to do that in the init() method of the service role, but it causes an error, because tube creation modifies the clusterwide config, and that config is still being changed while init() runs.
I could try creating the tube on the first task-add request, but I'm not sure that's the best approach.

You can put it into the "default config" of your app.
Check it here:
How to implement default config section for a custom Tarantool Cartridge role?

There are two ways I'd go about it:
Create the tube on the first request, as you propose. Nothing bad will happen.
If you want to do it in advance, create a fiber in the init function that tries to create the tube 10 seconds after initialization, if the tube doesn't exist yet. You can find all instances that have the sharded_queue storage role and run the fiber only on the first one (sorted alphabetically by instance URI).
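The "run it only on one instance" part of the second option can be sketched generically. The snippet below is Python purely to illustrate the election logic (a real Cartridge role would do this in Lua); the instance URIs are made-up placeholders:

```python
# Only the alphabetically-first storage instance is "elected" to create
# the tube; every other instance does nothing. This avoids concurrent
# clusterwide-config modifications without any extra coordination.
def should_create_tube(my_uri, storage_uris):
    """Return True only for the alphabetically-first storage instance."""
    return my_uri == sorted(storage_uris)[0]

# Hypothetical URIs; in Cartridge you would list the instances that have
# the sharded_queue storage role enabled.
storage_uris = ["storage-2:3302", "storage-1:3301"]
print(should_create_tube("storage-1:3301", storage_uris))  # True
print(should_create_tube("storage-2:3302", storage_uris))  # False
```

Every instance can run the same check in its delayed fiber; exactly one of them will proceed to create the tube.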

Related

GA3 Event Push: Necessary Fields in Request

I am trying to push an event to GA3, mimicking an event sent by a browser to GA. From this event I want to fill custom dimensions (visible in the User Explorer) and relate them to a GA client ID which has visited the website earlier. Could this be done without influencing website data too much? I want to enrich someone's data from an external source.
So far I can't seem to find the minimum set of fields that has to be in the event call for this to work. I've got these so far:
v=1&
_v=j96d&
a=1620641575&
t=event&
_s=1&
sd=24-bit&
sr=2560x1440&
vp=510x1287&
je=0&_u=QACAAEAB~&
jid=&
gjid=&
_u=QACAAEAB~&
cid=GAID&
tid=UA-x&
_gid=GAID&
gtm=gtm&
z=355736517&
uip=1.2.3.4&
ea=x&
el=x&
ec=x&
ni=1&
cd1=GAID&
cd2=Companyx&
dl=https%3A%2F%2Fexample.nl%2F&
ul=nl-nl&
de=UTF-8&
dt=example&
cd3=CEO
So far the custom dimension fields don't get overwritten with new values. Who knows which field is missing, or can share a list of the necessary fields with example values?
OK, a few things:
A CD value will be overwritten only if that CD's scope in GA is set to the user level. Make sure it is.
You need to know the client ID of the user. You can confirm you have the right CID by using the User Explorer in the GA interface (it allows filtering by client ID), unless you already track it in a CD.
You want to make this hit non-interactional; otherwise you're inflating the session count, since GA generates sessions for normal hits. A non-interactional hit has ni=1 among its params.
Also note that scope calculations don't happen immediately in real time; they happen later. Give it two days, then check the results and re-run your experiment.
Use a throwaway/test/lower GA property to experiment. You don't want to affect production data while you're not sure exactly what you're doing.
A good use case for this kind of activity is updating the lifetime value of existing users, enriching the data without waiting for all of them to come back. That's useful for targeting, attribution, and more.
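For reference, the Measurement Protocol itself only requires v, tid, cid (or uid), and t; most of the other parameters in the list above are optional browser telemetry. A minimal event hit could be built like this (the property ID, client ID, and dimension values below are placeholders, not taken from the question):

```python
from urllib.parse import urlencode

# Minimal GA3 Measurement Protocol event hit. Only v, tid, cid and t are
# strictly required; ni=1 keeps the hit non-interactional so it doesn't
# inflate session counts, and cd1/cd2 carry the user-scoped custom
# dimensions to update.
payload = {
    "v": "1",              # protocol version
    "tid": "UA-XXXXX-Y",   # GA property ID (placeholder)
    "cid": "555.123",      # client ID of the existing user (placeholder)
    "t": "event",          # hit type
    "ec": "enrichment",    # event category (example value)
    "ea": "update",        # event action (example value)
    "ni": "1",             # non-interaction hit
    "cd1": "555.123",      # user-scoped custom dimension (example)
    "cd2": "Companyx",     # user-scoped custom dimension (example)
}
url = "https://www.google-analytics.com/collect?" + urlencode(payload)
print(url)
```

POSTing the payload to /collect sends the hit; sending it to /debug/collect first makes GA validate the hit and report missing or malformed parameters, which is the quickest way to answer the "which parameters are necessary" question without waiting two days for the User Explorer.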
Thank you.
That is the case: all CDs are user-scoped.
That is the case; we are collecting them.
ni=1 is included in the parameters of each event call.
There are so many parameters; which ones are actually necessary?
We are using a test property for this.
We also checked the bot-filtering setting.
It's hard to test when the User Explorer has a two-day delay and we are still not sure which parameters to use and which not. Who could help with the parameter part? My only goal is to update the CDs on the person. Who knows which parameters need to be part of the event call?

Skill Flow Builder Lambda Function reset the DynamoDB

Apologies for the title of this question; it's hard to put it any other way. I have built an Alexa skill using Amazon's new dev tools (Skill Flow Builder). This tool has a feature that deploys the skill and creates the Lambda function you need to run it. The Lambda function uses DynamoDB to store variables and the scene names that represent your current position in the skill as you progress through it.
I have edited the skill and tested it thoroughly, but I have now removed all of the old scenes and replaced them with new ones that have new names.
Now, when I deploy and try to run the skill, it throws an error because it is trying to find the name of a scene that no longer exists. It does this because it wants to resume the skill at that point, and the old scene name is stored in the DB.
Here is the error message thrown by the Lambda function:
{"errorMessage":"Cannot find the scene not interesting.","errorType":"Error","stackTrace":["StoryAccessor.getSceneByID (/var/task/node_modules/@alexa-games/sfb-f/dist/storyEntities/StoryAccessor.js:28:19)","ACEDriver.processScene (/var/task/node_modules/@alexa-games/sfb-f/dist/driver.js:435:47)","ACEDriver.resumeStory (/var/task/node_modules/@alexa-games/sfb-f/dist/driver.js:188:41)","<anonymous>","process._tickDomainCallback (internal/process/next_tick.js:228:7)"]}
It is the scene that was called "not interesting" that it can no longer find.
The question is, how can I reset the skill so it is not using the DB to resume the skill at the last point?
The answer, of course, is to spin up a new DynamoDB table whenever you make significant changes to your skill: in my case, a complete change of the scene names.
In the Skill Flow Builder abcConfig.json file, edit the dynamo-db-session-table-name string, which sits in the default object with all the other settings for your skill. Give it a new name and re-deploy; a new table will be created.
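As a sketch, the relevant fragment of abcConfig.json looks roughly like this (the table name here is a made-up example, and the other keys of the default object are omitted):

```json
{
  "default": {
    "dynamo-db-session-table-name": "my-skill-sessions-v2"
  }
}
```

Bumping a version suffix in the table name each time the scene graph changes incompatibly makes the reset repeatable.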

Autoscaling : Minimum 2 Instances and a subsequent Lambda

All,
I'm really stuck and have tried almost everything. Can someone please help?
I provision 2 instances when creating my Auto Scaling group, and I trigger a Lambda function (which manipulates the tags) to change each instance's name to a unique name.
Desired State
I want the first Lambda invocation to give the first instance the name "web-1".
The second invocation would then run just fine and assign the name "web-2".
Current State
I start with a search over running instances to see whether "web-1" exists.
In this case my Lambda executes twice and gives both instances the same name (web-1, web-1).
How do I get around this? I know the problem is that the Lambda listens to CloudWatch events: the ASG launch creates 2 events at the same time in my case, which leads to the race.
Thanks.
You are running into a classic multi-threading issue. Both Lambda invocations execute simultaneously, see the same "unused" web-1 name, and tag both instances with it.
What you need is an atomic operation that gives each Lambda invocation "permission" to proceed. You can use a helper DynamoDB table to serialize the tagging attempts:
1. Have your Lambda function decide which tag to set (web-1, web-2, etc.).
2. Check the DynamoDB table to see whether that tag has been claimed in the last 30 seconds. If so, someone else got there first; go back to step 1.
3. Try to write your "ownership" of the sought-after tag to DynamoDB along with the current timestamp, using a condition such as attribute_not_exists so that only one simultaneous write succeeds.
4. If the write fails, go back to step 1.
5. If it succeeds, you're free to set your tag.
The timestamps allow for "web-1" to be terminated and a new EC2 instance to be launched and labelled "web-1" later.
This logic is not battle-tested, but it should give you enough guidance to develop a working solution.
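The claim loop above can be demonstrated with a runnable simulation. The in-memory table below is a stand-in for the real helper table (its put_if_absent plays the role of DynamoDB's put_item with ConditionExpression="attribute_not_exists(tag_name)", which is atomic on the server side); the table and key names are made up for illustration:

```python
import threading

class FakeClaimTable:
    """In-memory stand-in for a DynamoDB helper table keyed on tag_name."""
    def __init__(self):
        self._items = {}
        self._lock = threading.Lock()

    def put_if_absent(self, name):
        # Mimics a conditional write: succeeds for exactly one caller.
        with self._lock:
            if name in self._items:
                return False
            self._items[name] = True
            return True

def claim_name(table, prefix="web", max_n=10):
    """Claim the lowest free name; concurrent callers each get a unique one."""
    for i in range(1, max_n + 1):
        name = f"{prefix}-{i}"
        if table.put_if_absent(name):
            return name  # we own this name; safe to tag the instance with it
    raise RuntimeError("no free name available")

# Two "Lambda invocations" racing, as when the ASG launches 2 instances.
table = FakeClaimTable()
results = []
threads = [threading.Thread(target=lambda: results.append(claim_name(table)))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['web-1', 'web-2']
```

In the real Lambda you would replace put_if_absent with a boto3 conditional put and catch ConditionalCheckFailedException as the "someone else got there first" signal.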

Realm: Do we need to write each and every new RLMObject we create

I've started using Realm as the storage layer for my app. This is the scenario I am trying to solve:
Scenario: I get a whole bunch of data from the server and convert each piece of data into an RLMObject. I want to just "save" to persistent storage at the end. In between, I want the RLMObjects I create to be reflected when I run a query.
I don't see a solution for this in Realm. It looks like the only way is to write each object back into the Realm after it is created. The documentation also says that writes are expensive. Is there any way around this?
To reduce the overhead, I guess I could maintain a list of the created objects and write all of them in one transaction. That still seems like a lot of work. Is that how Realm is intended to be used?
You can create the objects standalone, without adding them to the Realm, and then add them all in a single transaction (which is very efficient) at the end.
Check out the documentation about creating objects here: https://realm.io/docs/objc/latest/#creating-objects
There is also an example of adding objects in bulk, where they are added in chunks so that other threads can observe the changes as they happen: https://realm.io/docs/objc/latest/#using-a-realm-across-threads

Update highcharts chart series from meteor subscription in angularjs-meteor

The way I have been doing it so far is to use Meteor.call and reset all the series in the callback, re-adding all the points. I then fetch new data with $interval every 5 seconds or so. This is obviously not very efficient, and my data set is growing.
Now I'm trying to switch to a Meteor subscription since it feels like the right tool for the job.
The first challenge was how to add (a single time) all the points that are already in the collection. I solved that by using the onReady callback of a subscription.
Now my question is: how do I process the subsequent updates to the series on the chart?
I have a helper collection which is supposed to receive the updates and although this works transparently when used in an ng-repeat, I find it difficult to interact with outside of this use case.
I tried to $watch it but the watcher does not trigger.
The Highcharts documentation has a pure-Meteor example of what I want: http://www.highcharts.com/blog/195-meteor-standalone, but how can I adapt it for angularjs-meteor?
Use ng-highcharts, and make sure your series is linked to the helper result.
