Store the time difference on the server in Laravel

Ok, so now I'm building a VueJS + Laravel web application. The main function of the app is that when a certain time (startTime) is reached, the user can click a button, and the time difference between the moment the user pressed the button and the startTime is stored in the database. Note: there will be around 25 users submitting at the same time, spamming the record button. Each user can only record once. The one with the shortest time difference wins the competition.
Currently, I've thought of two approaches:
(i) Use JavaScript's Date.now() to get the current time, subtract the start time, and send the timeDiff from the front end to the back end. It's fast, but Date.now() depends on the client's system time: if users change their system time, they can make their press appear earlier or later than it really was, since all that matters is how the recorded time compares to the start time.
(ii) Everything is processed on the back end. A timestamp is generated on the server every time a user presses the record button. This doesn't have the issue (i) has, but due to Laravel's performance overhead the server appears to stall requests, making the timestamp inaccurate.
Any suggestions or advice? I'm still new to all this, and these are the two methods I can think of right now. Here is my current back-end code:
if (auth()->user()->recorded !== 1) { // check if the user has already recorded
    // create timestamp in epoch milliseconds
    $pressTime = microtime(true) * 1000;
    $offTimeStart = $user->startTime;
    $offTimeEnd = $user->endTime;
    $pressedTime = $user->pressTime; // initially 999999999
    // prevent the user from spamming the record button
    if ($pressTime > $pressedTime) { return something; }
    // check that the press is still within the allowed range
    if ($pressTime >= $offTimeStart && $pressTime < $offTimeEnd) {
        if ($user != null) {
            $user->update([
                'pressTime' => $pressTime,
                'recorded' => 1
            ]);
        }
    }
}
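One way to reduce the impact of Laravel's bootstrap overhead on approach (ii) is a sketch along these lines (my addition, assuming the question's pressTime/recorded columns): PHP stamps the moment the request reached the server in $_SERVER['REQUEST_TIME_FLOAT'], before the framework boots, so it is less distorted by framework latency than microtime(true) called inside the controller.
// Sketch: timestamp the request as early as possible.
// $_SERVER['REQUEST_TIME_FLOAT'] is set by PHP when the request arrives,
// before Laravel's bootstrap, so framework overhead no longer shifts it.
$pressTime = $_SERVER['REQUEST_TIME_FLOAT'] * 1000; // epoch milliseconds

$user = auth()->user();
if ($user->recorded !== 1
    && $pressTime >= $user->startTime
    && $pressTime < $user->endTime) {
    $user->update([
        'pressTime' => $pressTime,
        'recorded' => 1,
    ]);
}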

Related

Dataflow job has high data freshness and events are dropped due to lateness

I deployed an apache beam pipeline to GCP dataflow in a DEV environment and everything worked well. Then I deployed it to a production environment in Europe (to be specific: job region europe-west1, worker location europe-west1-d), where we get high data velocity, and things started to get complicated.
I am using a session window to group events into sessions. The session key is the tenantId/visitorId and its gap is 30 minutes. I am also using a trigger to emit events every 30 seconds to release events sooner than the end of session (writing them to BigQuery).
The problem appears to happen in the EventToSession/GroupPairsByKey step. Thousands of events land under the droppedDueToLateness counter there, and the dataFreshness keeps increasing (it has been increasing since I deployed). All steps before this one operate well, and all steps after it are affected by it but don't seem to have any other problems.
I looked into some metrics and saw that the EventToSession/GroupPairsByKey step is processing between 100K and 200K keys per second (depending on the time of day), which seems like quite a lot to me. CPU utilization doesn't go over 70%, and I am using Streaming Engine. The number of workers is 2 most of the time. Max worker memory capacity is 32GB, while max worker memory usage currently stands at 23GB. I am using the e2-standard-8 machine type.
I don't have any hot keys since each session contains at most a few dozen events.
My biggest suspicion is the huge number of keys being processed in the EventToSession/GroupPairsByKey step. But on the other hand, a session usually relates to a single customer, so Google should expect to handle this number of keys per second, no?
I would like suggestions on how to solve the dataFreshness and droppedDueToLateness issues.
Adding the piece of code that generates the sessions:
input = input
    .apply("SetEventTimestamp", WithTimestamps
        .of(event -> Instant.parse(getEventTimestamp(event)))
        .withAllowedTimestampSkew(new Duration(Long.MAX_VALUE)))
    .apply("SetKeyForRow", WithKeys.of(event -> getSessionKey(event)))
    .setCoder(KvCoder.of(StringUtf8Coder.of(), input.getCoder()))
    .apply("CreatingWindow", Window.<KV<String, TableRow>>into(Sessions.withGapDuration(Duration.standardMinutes(30)))
        .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(30))))
        .discardingFiredPanes()
        .withAllowedLateness(Duration.standardDays(30)))
    .apply("GroupPairsByKey", GroupByKey.create())
    .apply("CreateCollectionOfValuesOnly", Values.create())
    .apply("FlattenTheValues", Flatten.iterables());
After doing some research I found the following:
Regarding the constantly increasing data freshness: as long as late data is allowed to arrive in a session window, that window persists in memory. This means that allowing data 30 days late keeps every session in memory for at least 30 days, which can obviously overload the system. Moreover, we found some everlasting sessions created by bots visiting and taking actions on the websites we monitor. These bots can hold sessions open forever, which can also overload the system. The solution was decreasing the allowed lateness to 2 days and using bounded sessions (look for "bounded sessions").
Regarding events dropped due to lateness: these are events that, at their time of arrival, belong to an expired window, i.e. a window whose end the watermark has already passed (see the Dataflow documentation for the droppedDueToLateness counter). These events are dropped in the first GroupByKey after the session window function and can't be processed later. We didn't want to drop any late data, so the solution was to check each event's timestamp before it enters the session part, and to stream into the session part only events that won't be dropped: events that meet the condition event_timestamp >= event_arrival_time - (gap_duration + allowed_lateness). The rest are written to BigQuery without the session data. (Apparently Apache Beam drops an event whose timestamp is before event_arrival_time - (gap_duration + allowed_lateness) even if there is a live session the event belongs to...)
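For illustration, a minimal sketch of that pre-filter (my addition, not from the original pipeline): it reuses the question's getEventTimestamp helper and uses processing-time "now" as a stand-in for the arrival time.
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;
import org.joda.time.Instant;
import com.google.api.services.bigquery.model.TableRow;

// Keep only events that the session windowing would not drop:
// event_timestamp >= arrival_time - (gap_duration + allowed_lateness).
Duration gapDuration = Duration.standardMinutes(30);
Duration allowedLateness = Duration.standardDays(2);

PCollection<TableRow> sessionable = input.apply("KeepOnTimeEvents",
    Filter.by(event -> {
        Instant eventTime = Instant.parse(getEventTimestamp(event)); // helper from the question
        Instant cutoff = Instant.now() // processing-time proxy for arrival time
            .minus(gapDuration.plus(allowedLateness));
        return !eventTime.isBefore(cutoff); // true = safe to send into the session part
    }));
// Events that fail the check can be written straight to BigQuery without session data.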
P.S. In the bounded-sessions post, where the author demonstrates how to implement a time-bounded session, I believe there is a bug that allows a session to grow beyond the provided max size. Once a session has exceeded the max size, one can send late data that intersects the session and precedes it, moving the session's start time earlier and thereby expanding it. Furthermore, once a session has exceeded the max size, events that belong to it but don't extend it can no longer be added.
To fix that, I swapped the order of the window-span update and the if-statement (the one checking the session max size) in the mergeWindows function, and edited that if-statement, so a session can't pass the max size and only accepts data that doesn't extend it beyond the max size. This is my implementation:
public void mergeWindows(MergeContext c) throws Exception {
    // sort the candidate windows chronologically so merging is a single pass
    List<IntervalWindow> sortedWindows = new ArrayList<>();
    for (IntervalWindow window : c.windows()) {
        sortedWindows.add(window);
    }
    Collections.sort(sortedWindows);
    List<MergeCandidate> merges = new ArrayList<>();
    MergeCandidate current = new MergeCandidate();
    for (IntervalWindow window : sortedWindows) {
        MergeCandidate next = new MergeCandidate(window);
        if (current.intersects(window)) {
            // only merge if the union would not exceed maxSize (plus the gap)
            if (current.union == null || new Duration(current.union.start(), window.end()).getMillis() <= maxSize.plus(gapDuration).getMillis()) {
                current.add(window);
                continue;
            }
        }
        // the window doesn't fit into the current candidate; start a new one
        merges.add(current);
        current = next;
    }
    merges.add(current);
    for (MergeCandidate merge : merges) {
        merge.apply(c);
    }
}

How to Spam Filter Gmail Messages by Recipient Address?

I use the dot feature (m.yemail@gmail.com instead of myemail@gmail.com) to give out addresses to questionable sites so that I can easily spot spam resulting from my address being sold.
I made this function and set it to trigger every 30 minutes to automatically filter these.
function moveSpamByAddress() {
  // addresses to treat as spam traps (note the dotted variant)
  var addresses = ["m.yemail@gmail.com"];
  var threads = GmailApp.getInboxThreads();
  for (var i = 0; i < threads.length; i++) {
    var messages = threads[i].getMessages();
    for (var ii = 0; ii < messages.length; ii++) {
      for (var iii = 0; iii < addresses.length; iii++) {
        if (messages[ii].getTo().indexOf(addresses[iii]) > -1) {
          threads[i].moveToSpam();
        }
      }
    }
  }
}
This works, but I noticed that it runs slower than I would expect (though my expectation may be unreasonable), given that my inbox contains only 50 messages and I am currently filtering only one address. Is there a way to increase execution speed?
Also, are there any penalties for running scripts too often? I see that I have the option to trigger a script every minute, which would increase the likelihood of filtering a message before I see it, but it would also run the script uselessly many more times.
You can do this using native Gmail filters plus Apps Script.
Script time quotas vary from 1 to 6 hours depending on account type.
To improve performance, first check getInboxUnreadCount and return immediately if it is zero.
If you use a 1-minute trigger, make sure to use a lock to avoid one timer starting while another runs. If the lock is in use, simply return.
First, make a Gmail filter so that when "to" matches your special address, a special label like "mySpam" is applied.
Second, make an Apps Script with my suggestions above. Your code no longer needs to search so much: you just need to find emails with that label (a single API call) and call moveToSpam.
There shouldn't be many threads under the label at any time if the script runs often.
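A minimal sketch of the whole approach (my addition; it assumes the Gmail filter already applies a label named "mySpam" and a trigger runs the function every minute):
function moveSpamByLabel() {
  // Bail out fast if there is nothing new to process.
  if (GmailApp.getInboxUnreadCount() === 0) return;

  // Avoid overlapping runs when triggered every minute.
  var lock = LockService.getScriptLock();
  if (!lock.tryLock(0)) return;

  try {
    var label = GmailApp.getUserLabelByName("mySpam");
    if (!label) return;
    var threads = label.getThreads(); // a single API call instead of scanning the inbox
    for (var i = 0; i < threads.length; i++) {
      threads[i].moveToSpam();
      threads[i].removeLabel(label); // so the thread isn't processed again
    }
  } finally {
    lock.releaseLock();
  }
}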

MongoDB Geospatial Load More Between HTTP Requests

AcaniUsers loads onto my iPhone the first 20 users in MongoDB (on Heroku via Sinatra) closest to me. I want to add a Load More button that will load the next 20 users closest to me. Keep in mind, my location and the locations of the users on my phone may have changed. I was thinking of switching from Sinatra to Node.js and opening a WebSocket so I could have realtime updates of the presences & locations of the users on my phone, but I think I should save that challenge for a later iteration. Basically, how should I implement the load-more functionality?
To paginate queries in MongoDB you can use a combination of limit() and skip().
So, the first query will be:
your_query.limit(20)
Then if you want to load the second 20 (you will have to remember the first query somewhere):
your_query.skip(20).limit(20)
Btw, I suggest you execute the first query with a limit higher than 20 and cache the results you don't display. When more are requested, just fetch them from the cache (you can store it in the user session). If the position changes, restart from scratch: invalidate the cache and re-query the DB.
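For illustration, a minimal sketch of this skip/limit pagination against a geospatial query (the "users" collection name, "location" field, and 2dsphere index are my assumptions, not from the question):
// Assumes a "users" collection with a 2dsphere index on "location".
// "page" is the 0-based page number the client sends with each Load More tap.
function loadPage(db, lng, lat, page, callback) {
  db.collection("users")
    .find({
      location: {
        $near: { $geometry: { type: "Point", coordinates: [lng, lat] } }
      }
    })
    .skip(page * 20) // skip the pages already shown
    .limit(20)
    .toArray(callback);
}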
Think of it more as a client-side question: use subscriptions based on the current group.
- Encode the group into a geo-square if possible (more efficient than a circle, I think?).
- Periodically (every t) execute an operation that checks the location of each user and simply sends the locations out with a group id to match the subscriptions.
Actually, to build your subscription groups, just use the geoNear command on all of your subscribers:
- Build a hash of your subscribers and their groups.
- Each subscriber is subscribed to one group and to themselves (for targeted communication, e.g. to indicate that a specific subscriber should change their subscription).
- Iterate through the results i times, where i is the number of individuals in an update group.
- Execute an action that checks the current value of j, the group number for a specific subscriber, against the new j value; if there is a change, notify the subscriber on the subscriber's private channel.
- Notifications synchronously follow subscriber adjustments.
Something like:
var pageSize; // assign pageSize in the method call
var documents = collection.Find(query);
var max = documents.Size();
// walk the result set one page at a time
for (int i = 0; i * pageSize < max; i++)
{
    var page = documents.Skip(i * pageSize).Limit(pageSize);
    // ...send this page to the subscribers in group i...
}
:)

Should I rely on the local phone's time for a time-sensitive app? (Windows Phone 7)

I'm building an app where my users will post content. The exact time of the post is an important data point: I need to know exactly when the user hit the "Post" button. Once the post has been captured, I'll upload it to my web server. My app should still work in offline mode, meaning that when there is no internet connectivity the post will be saved locally and uploaded the next time the network becomes available.
The question is, how can I guarantee that the time of the post is accurate? Should I rely on the phone's local time? Should I write code that regularly syncs the difference between my server's time and the device's time so I always know the offset (if there is one)? Are there better time-management solutions that I'm not aware of?
Thanks,
UPDATE
Here's the server-side code that I wrote to ensure that server and client times are perfectly matched. Hope it helps others...
/// <summary>
/// Calculates the actual time the client event occurred.
/// Takes into account that the event and the sending of the
/// event may have happened separately.
/// </summary>
public static DateTime CalculateClientEventTime(
    DateTime serverReceiveTime,
    DateTime clientSendTime,
    DateTime clientEventTime)
{
    // first we need to sync the client and server time
    // we also need to subtract any time zone offsets
    // then we can subtract the actual time on the device
    DateTime serverReceiveUtc = serverReceiveTime.ToUniversalTime();
    DateTime clientSendUtc = clientSendTime.ToUniversalTime();
    DateTime clientEventUtc = clientEventTime.ToUniversalTime();

    // note: all dates are in UTC
    // just need to compute the client-side delay between event and send,
    // then subtract that TimeSpan from the server receive time
    TimeSpan diffBetweenClientEventAndClientSend = clientSendUtc - clientEventUtc;
    return serverReceiveUtc.Subtract(diffBetweenClientEventAndClientSend);
}
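A hypothetical usage sketch (my addition; the post fields and the way the server stamps its receive time are illustrative assumptions):
// On the server, when an upload arrives:
DateTime serverReceiveTime = DateTime.UtcNow; // stamp arrival as early as possible
DateTime actualEventTime = CalculateClientEventTime(
    serverReceiveTime,
    post.ClientSendTime,   // phone's clock when the upload was sent
    post.ClientEventTime); // phone's clock when the user hit "Post"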
I suggest that you do the following:
In online mode: take the time from your server when the user posts their data.
In offline mode: save the time from the phone. When going online, submit all saved data along with the current time of the phone. Calculate the difference between the phone's time and your server's time to get the real time.
You cannot rely on the phone's time because the user can change it, and your app can run in different time zones. Always use the server time, or get the phone time and calibrate a local timer to measure the lag.

Anybody have any luck with ShellTileSchedule?

Anybody have any luck with ShellTileSchedule? I have followed the Microsoft example and still have gotten nowhere.
"How to: Update Your Tile Without Push Notifications for Windows Phone"
Has anyone seen a complete example that works on a device or emulator?
Yes... I started with the sample at http://channel9.msdn.com/learn/courses/WP7TrainingKit/WP7Silverlight/UsingPushNotificationsLab/Exercise-2-Introduction-to-the-Toast-and-Tile-Notifications-for-Alerts/
and skipped immediately down to "Task 3 – Processing Scheduled Tile Notifications on the Phone." After that I had to wait about 1 hour, leaving the emulator running on my desktop (1 hour is the minimum update interval, indicated as such for "performance considerations").
_shellTileSchedule = new ShellTileSchedule
{
    Recurrence = UpdateRecurrence.Interval,
    Interval = UpdateInterval.EveryHour,
    StartTime = DateTime.Now - TimeSpan.FromMinutes(59),
    RemoteImageUri = new Uri(@"http://cdn3.afterdawn.fi/news/small/windows-phone-7-series.png")
};
Note that setting the StartTime to DateTime.Now minus 59 minutes did nothing; it still waited a full hour for its first update. I could not find any mechanism to say "go to this URI and update yourself NOW", other than calling out to a web service that tickles a Tile Notification.
As @avidgator said, you'll have to wait an hour.
I have written a tutorial on how to update the tile instantly here:
http://www.diaryofaninja.com/blog/2011/04/03/windows-phone-7-live-tile-schedules-ndash-executing-instant-live-tile-updates
Basically, it involves opening a push/toast update channel and then getting the phone to send "itself" a live tile update request. This triggers the phone to go and get the tile "right now".
Hope this helps.
Are the channels necessary for this kind of update?
Is there a full code example of what has to be done to create an app that just updates its tile?
BTW: How about setting the Recurrence to UpdateRecurrence.Onetime and the StartTime to Now + 20 seconds for testing purposes?
I just got a tile update after an hour without channels and so on, so that answers my first question. But having to wait an hour while trying to develop an app is... unsatisfying.
It is easy. Just use the following code when you set up the ShellTileSchedule.
ShellTile applicationTile = ShellTile.ActiveTiles.First();
applicationTile.Update(
    new StandardTileData
    {
        // the image URI must be absolute, including the scheme
        BackgroundImage = new Uri("http://www.ash.com/logo.jpg", UriKind.Absolute),
        Title = ""
    });
