I am calling the Google Distance Matrix API with an API key and a Unix departure time three weeks in the future. My goal is to get duration_in_traffic for a typical weekday at a couple of different times of day.
My issue is that I am getting "duration_in_traffic" times that do not differ from "duration" times in the same response.
I am pretty certain that I am getting the UTC right with respect to local time but, just in case, I varied the time with no variation in the result.
What I have been able to find in my searches hints that there is a way to get what I am after, but that I may have to change the type of account I have with Google. Although I am using the account on a 30-day free trial, I did enable billing in case that was the problem, but no luck so far. Does anyone out there know the solution I need? Thanks.
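For reference, a stripped-down version of the kind of request I am describing looks like this (origins, destinations, and the key are placeholders, and traffic_model is the optional parameter that defaults to best_guess):

```python
import time
import requests

# Placeholder origin/destination/key; departure_time must be "now" or in the future,
# and duration_in_traffic is only returned for mode=driving with a departure_time set.
departure = int(time.time()) + 3 * 7 * 24 * 3600  # roughly three weeks ahead, in Unix seconds

params = {
    "origins": "Boston,MA",
    "destinations": "Concord,MA",
    "mode": "driving",
    "departure_time": departure,
    "traffic_model": "best_guess",  # optional; "pessimistic" / "optimistic" are also accepted
    "key": "YOUR_API_KEY",
}

resp = requests.get(
    "https://maps.googleapis.com/maps/api/distancematrix/json", params=params
).json()

element = resp["rows"][0]["elements"][0]
print(element["duration"]["value"],
      element.get("duration_in_traffic", {}).get("value"))
```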
Firstly, I would like to apologise for the detailed problem statement. Being a novice, I couldn't express it in fewer words.
Environment Setup Details:
To give some background, I work at a cloud company where we have multiple servers geographically located on every continent. So we have a hierarchy like this:
Several partitions
Each partition has 7 PoPs (points of presence)
Each PoP has multiple nodes, all set up with redundancy
TURN servers routing traffic to each node depending on the client's location
Actual clients: iOS, Android, Mac, Windows, etc.
Now, every time a user uses our product/service, they leave a rating out of 5 (5 being outstanding). This data is stored in our databases, and we mine and analyse it to pinpoint the exact issue on any particular day.
For example, if users from Asia give more bad ratings this Tuesday than on a usual Tuesday, what factors could cause this: is it something to do with the client app version, a server release, physical factors, packet loss, increased round-trip delay, etc.?
What we have done:
Until now we have been using visualization tools to track each of these metrics separately, per day, to see trends and detect issues manually.
But with the growing number of microservices it is becoming more difficult by the day, so now we want to automate it using Python/pandas.
What I want to do:
If the ratings drop on a particular day/hour, I run the script and it should do all the manual work by taking all the permutations and combinations of the factors and listing the exact combinations that could have led to the drop.
The second step would be to check whether the drop is statistically significant, given the varying number of ratings.
What I know:
I understand that I can do this with pandas by creating a DataFrame for each predictor variable and analysing it variable by variable.
And then I can apply tests like the Mann-Whitney U test, which is suited to ordinal data.
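For example, something along these lines is what I have in mind: compare the bad day against a baseline for each factor value (the column names and the date are only illustrative):

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Illustrative schema: one row per rating, with the candidate factors as columns.
df = pd.read_csv("ratings.csv", parse_dates=["timestamp"])

bad_day = pd.Timestamp("2017-06-06").date()           # the day being investigated
current = df[df["timestamp"].dt.date == bad_day]
baseline = df[df["timestamp"].dt.date < bad_day]

factors = ["region", "app_version", "server_release", "pop", "client_platform"]
for factor in factors:
    for value, group in current.groupby(factor):
        ref = baseline.loc[baseline[factor] == value, "rating"]
        if len(group) >= 20 and len(ref) >= 20:        # skip very thin slices
            stat, p = mannwhitneyu(group["rating"], ref, alternative="two-sided")
            print(f"{factor}={value}: mean {group['rating'].mean():.2f} "
                  f"vs baseline {ref.mean():.2f}, p={p:.4f}")
```

The same loop could be extended to pairs or triples of factors with itertools.combinations to cover the "combinations of factors" part.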
What I need help with:
I just wanted to know whether there is a better way to do it. It is perfectly fine if there is a learning curve involved; I can learn it and do it. I just need some help choosing the right approach.
In "look up metrics” I’m trying to know how my players improve in playing my game.
I have the score (both as desing event and progression, just to try) and in look up metrics I try to “filter” with session number or days since install but, even if I group by Dimension, this doesn’t produce any result.
For instance if I do the same but with device filter it shows me the right histogram with score's mean per device.
What am I doing wrong?
From the customer care:
The session filter works only on core metrics at this point (like DAU). We hope to make this filter compatible with custom metrics as well, but this might take time, as we first need to add this improvement to our roadmap and then evaluate it against our other tasks. As a result, there is no ETA for a release.
I would recommend downloading the raw data (go to "Export data" in the game's settings) and performing this sort of "per user" analysis on your own. You should be able to create stats per user. GA does not do this, since your game can reach millions of users and there is no way to plot that many entries in a browser.
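As a rough illustration of what that per-user analysis could look like once the raw data is exported (the column names here are hypothetical; the real export fields will differ):

```python
import pandas as pd

# Hypothetical column names -- check the actual export for the real field names.
events = pd.read_csv("exported_events.csv")  # one design/progression event per row

# Mean score per session number across all players: a rough "improvement" curve.
improvement = events.groupby("session_num")["score"].agg(["mean", "count"])
print(improvement.head(20))

# Or per user, to look at individual progression (users as rows, sessions as columns).
per_user = events.groupby(["user_id", "session_num"])["score"].mean().unstack()
```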
I am given training data of an organisation's meter readings, recorded at 15-minute intervals each day, for some N days.
With the help of this data, we need to tell whether the organisation is open or closed on a particular day. I would appreciate any links on this topic from someone who has worked in this field.
By closed I mean that on that day the electricity consumption will obviously be almost constant, though this is just one feature to take into account.
So what is the best way to predict this?
A single threshold may not suit every kind of organisation, but there is certainly a minimum consumption for each one.
You might also want to use the information from the previous 15 minutes to increase reliability,
adding one more input to your classifier.
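A rough sketch of that idea, assuming a CSV with a timestamp and a kWh reading every 15 minutes (the column names and the thresholds are placeholders to tune):

```python
import pandas as pd

# Placeholder column names; adjust to the real meter data.
readings = pd.read_csv("meter.csv", parse_dates=["timestamp"])
readings["kwh_prev"] = readings["kwh"].shift(1)   # previous-interval reading as an extra feature
readings["date"] = readings["timestamp"].dt.date

# Per-day features: total consumption, spread, and peak-to-base ratio.
daily = readings.groupby("date")["kwh"].agg(total="sum", spread="std",
                                            peak="max", base="min")
daily["peak_to_base"] = daily["peak"] / daily["base"].clip(lower=1e-6)

# Simple rule: a "closed" day sits near the building's own baseline load and is almost flat.
baseline_total = daily["total"].quantile(0.05)    # the quietest days define the baseline
daily["closed"] = (daily["total"] < 1.25 * baseline_total) & (daily["peak_to_base"] < 1.5)
print(daily[["total", "peak_to_base", "closed"]].head())
```

With a few labelled days, the same per-day features could be fed into an off-the-shelf classifier instead of hand-tuned thresholds.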
First of all, I'm not looking for code, just a plain discussion about approaches regarding what the subject says.
I have been wondering lately what the best way really is to detect (as fast as possible) changes to website pages. Assuming I have 100K websites, each with an unknown number of pages, does a crawler really need to visit each and every one of them once in a while?
Unless they have RSS feeds (which you would still need to poll to see if they have changed), there really isn't any way to find out when a site has changed except by going to it and checking. However, you can do some smart things to be more efficient. After you have been checking a site for a while, you can build a prediction model of when it tends to update. For example: this news site updates every 2-3 hours, but that blog only makes about one post a week. This can save you many checks, because the majority of pages don't actually update that often. Google does this to help with its polling. One simple algorithm which will work for this (depending on how cutting-edge you need your news to be) is the following, of my own design, based on binary search:
Start each site off with a time interval of ~1 day
Visit each site when that time hits and check for changes
    if something has changed
        halve the time for that site
    else
        double the time for that site
If after many iterations you find it hovering around 2-3 values
    fix the time at the greater of those values
Now, this is a simple algorithm for finding which check intervals are right, but you can probably do something more effective if you parse the text and look for patterns in the times when updates were actually posted.
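A minimal sketch of that halve/double scheme (the URLs are placeholders; a real crawler would also need politeness limits, robots.txt handling, error handling, and persistent state):

```python
import hashlib
import time
import urllib.request

MIN_INTERVAL = 30 * 60          # never check more often than every 30 minutes
MAX_INTERVAL = 7 * 24 * 3600    # never wait longer than a week

def fingerprint(url):
    """Hash the page body so a change can be detected without storing the whole page."""
    html = urllib.request.urlopen(url, timeout=10).read()
    return hashlib.sha1(html).hexdigest()

sites = {url: {"interval": 24 * 3600.0, "next_check": 0.0, "hash": None}
         for url in ["http://example.com/", "http://example.org/"]}

while True:
    now = time.time()
    for url, state in sites.items():
        if now < state["next_check"]:
            continue
        new_hash = fingerprint(url)
        if state["hash"] is None:
            pass                                                          # first visit: record a baseline
        elif new_hash != state["hash"]:
            state["interval"] = max(MIN_INTERVAL, state["interval"] / 2)  # changed: check sooner
        else:
            state["interval"] = min(MAX_INTERVAL, state["interval"] * 2)  # unchanged: back off
        state["hash"] = new_hash
        state["next_check"] = now + state["interval"]
    time.sleep(60)
```

Note that hashing the raw HTML will flag changes caused by rotating ads or timestamps, so in practice you would want to hash only the extracted main content.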
I'm currently working on an application that allows people to schedule "Shows" for an online radio station.
I want the user to be able to set up a repeated event, for example:
"Manic Monday" show - Every Monday From 9-11
"Mid Month Madness" - Every Second Thursday of the Month
"This months new music" - 1st of every month.
What, in your opinion, is the best way to model this (based around an MVC/MTV structure).
Note: I'm actually coding this in Django. But I'm more interested in the theory behind it, rather than specific implementation details.
Ah, repeated events - one of the banes of my life, along with time zones. Calendaring is hard.
You might want to model this in terms of RFC2445. However, that may well give you far more flexibility - and complexity - than you really want.
A few things to consider:
Do you need any finer granularity than a certain time on given dates? If you need to repeat based on time as well, it becomes trickier.
Consider date corner cases such as "the 30th of every month" and what that means for February (leap year or not)
Consider time corner cases such as "1.30am every day" - sometimes 1.30am may happen twice, and sometimes it may not happen at all, due to daylight saving time
Do you need to share the schedule with people in other time zones? That makes life trickier again
Do you need to represent the number of times an event occurs, or a final date on which it occurs? ("Count" or "until" basically.) You may not need either, or you may need one or both.
I realise this is a list of things to think about more than a definitive answer, but I think it's important to define the parameters of your problem before you try to work out a solution.
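If the RFC2445 route does look attractive, recurrence rules map fairly directly onto libraries such as python-dateutil; for instance, the examples in the question could be expressed roughly like this (the dates are only illustrative):

```python
from datetime import datetime
from dateutil.rrule import rrule, MONTHLY, WEEKLY, MO, TH

# "Manic Monday" - every Monday (the 9-11 time window would be stored separately)
manic_monday = rrule(WEEKLY, byweekday=MO, dtstart=datetime(2009, 1, 5), count=6)

# "Mid Month Madness" - every second Thursday of the month
mid_month = rrule(MONTHLY, byweekday=TH(2), dtstart=datetime(2009, 1, 1), count=6)

# "This month's new music" - the 1st of every month
new_music = rrule(MONTHLY, bymonthday=1, dtstart=datetime(2009, 1, 1), count=6)

for occurrence in mid_month:
    print(occurrence)
```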
From reading other posts, I think Martin Fowler describes recurring events best.
http://martinfowler.com/apsupp/recurring.pdf
Someone implemented these classes for Java.
http://www.google.com/codesearch#vHK4YG0XgAs/src/java/org/chronicj/DateRange.java
I've had the thought that repeated events could be generated when the original event is saved, using a new model. This means I'm not doing extra processing every time the calendar is loaded (and it also means I can, for example, cancel a single "Show" in a series), but it also means I have to limit generation to a certain time frame, so if someone looked, say, a year into the future, they wouldn't see the repeated shows; at some point they'd have to (potentially) be re-generated.
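A rough sketch of that approach, assuming the rule is kept as something like a dateutil rrule and occurrences are materialised up to a rolling horizon (the model and field names are only illustrative):

```python
from datetime import timedelta
from django.db import models
from django.utils import timezone

class Show(models.Model):
    title = models.CharField(max_length=200)
    recurrence = models.CharField(max_length=200, blank=True)  # e.g. a serialised RRULE string
    duration = models.DurationField()

class Occurrence(models.Model):
    show = models.ForeignKey(Show, on_delete=models.CASCADE, related_name="occurrences")
    starts_at = models.DateTimeField()
    cancelled = models.BooleanField(default=False)  # lets a single "Show" in a series be pulled

HORIZON = timedelta(days=180)  # how far into the future to materialise occurrences

def generate_occurrences(show, rule):
    """Create Occurrence rows up to the horizon; re-run on save and periodically
    (or lazily when someone scrolls past the current horizon) to extend the window.
    The rule's dtstart and these datetimes must agree on timezone-awareness."""
    now = timezone.now()
    for dt in rule.between(now, now + HORIZON, inc=True):
        Occurrence.objects.get_or_create(show=show, starts_at=dt)
```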