How are you supposed to use the SSAS Tabular Measures Grid - visual-studio-2010

As far as I can tell it doesn't do anything, and the best thing to do is to add all measures as a list in one column, but then it would be a "Measures List" instead of a grid... so what does the "grid" part actually do?

The main use I can imagine is that you can put a measure directly below the column it refers to, which is obviously convenient in the standard case of a measure that sums a specific column. Other than that, I agree there is no real use.

Related

Tableau running slow / queries taking a long time

To speed up processing time in Tableau: is it best to combine multiple calculated fields into one calculated field, or to have the equation broken out into pieces?
Thanks!
Neither option is related to optimization. However, it is good practice to hide unused fields in the "Tables" sidebar.

How to assign two or more time series identifier columns in Vertex AI Tabular Forecasting

I was wondering if it is possible to have more than one time series identifier column in the model? Let's assume I'd like to create a forecast at a product and store level (which the documentation suggests should be possible).
If I select product as the series identifier, the only options I have left for store is either a covariate or an attribute and neither is applicable in this scenario.
Would concatenating product and store and using the individual product and store code values for that concatenated ID as attributes be a solution? It doesn't feel right, but I can't see any other option - am I missing something?
Note: I understand that this feature of Vertex AI is currently in preview and that because of that the options may be limited.
There isn't a way to assign two or more time series identifiers in a forecasting model on Vertex AI. As you are aware, the forecasting model is in the "Preview" product launch stage, and as a consequence the options are limited. Please refer to this doc for more information about best practices for preparing data to train the forecasting model.
As a workaround, the two columns can be concatenated and assigned a Time Series Identifier on that concatenated column, as you have mentioned in the question. This way, the concatenated column carries more contextual information into the training of the model.
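To make the workaround concrete, here is a minimal pandas sketch (not anything Vertex AI-specific) of preparing such a concatenated identifier before uploading the training data; the column names product_id, store_id and series_id are placeholder examples.

import pandas as pd

# Toy training data; the column names are hypothetical placeholders.
df = pd.DataFrame({
    "date": ["2023-01-01", "2023-01-01", "2023-01-02"],
    "product_id": ["P1", "P2", "P1"],
    "store_id": ["S1", "S1", "S2"],
    "units_sold": [10, 3, 7],
})

# Single concatenated column to use as the time series identifier.
df["series_id"] = df["product_id"] + "_" + df["store_id"]

# product_id and store_id can also be kept in the data as categorical
# attributes, as described in the follow-up answer below.
print(df)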
Just to follow up on Vishal's (correct) answer in case someone is looking this up in the future.
Yes, concatenating is the only option for now, as there can only be one time series identifier (I would hope this changes in the future). Having said that, I've experimented with adding the individual identifiers to the data as categorical attributes, and it actually works pretty well. This way I have forecasts generated at a product/store level, but I can aggregate all forecasts for individual products, and the results are not far off from models trained on aggregated data (obviously that would depend on the demand classification and selected optimisation method, amongst other factors).
Also, an interesting observation: when you include things like product descriptions, you can classify them either as categorical or as text. I wasn't able to find in the documentation whether the model would only use unigrams (which is what the column statistics in the console would suggest) or a range of n-grams, but it is definitely something you would want to experiment with on your data. My dataset actually showed better accuracy when the categorical classification was used, which is a bit counter-intuitive as it feels like redundant information, although it's hard to tell as the documentation isn't very detailed. It is likely to be specific to my data set, so as I said, make sure you experiment with yours.

Datamart modelling fact table: indicator in columns or lines with one column called indicator

I am modelling a datamart and have multiple measures (indicators) and dimensions.
When modelling the fact table, is it better to put each indicator in its own column, or to have a single column that contains the indicator (i.e. creating a dimension of indicators)?
Please give me your opinions and tell me when to choose each option.
Dimensional modelling aims for each fact table to represent a business process where you take measurements, with each measurement stored separately as columns. These are separately named, with the aim being that these are things you can drag onto your BI tool's report without a user having to worry about going off to another table to work out what measure you're looking at.
The Kimball Group doesn't normally recommend the approach of creating a measure-type dimension and producing a 'generic' fact. It makes the number of rows in the fact table bigger (one for each measurement) and makes calculations between measurements in a single measurement event (fact) more difficult.
Where would this end? You could feasibly have one fact that represents all measurements, from all your facts. This might be easier to model and load into, and might be exactly what you need in your situation, but it doesn't make it easier to report from, and wouldn't be called a dimensional model.
The situation in which Kimball suggests this would be an acceptable technique, however, is when you could have hundreds of potential measurements but only a few would be applicable to any particular fact.
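To make the contrast concrete, here is a small, hypothetical Python illustration of the two shapes (table and column names are invented for the example); note how the 'generic' form needs a pivot before you can calculate between measures of the same event.

import pandas as pd

# Wide fact: one row per measurement event, one named column per measure.
wide_fact = pd.DataFrame({
    "order_id": [1, 2],
    "sales_amount": [100.0, 250.0],
    "discount_amount": [10.0, 0.0],
})
# A calculation between measures of the same event is a simple row-wise expression.
wide_fact["discount_pct"] = wide_fact["discount_amount"] / wide_fact["sales_amount"]

# 'Generic' fact with a measure-type dimension: one row per measure value.
generic_fact = pd.DataFrame({
    "order_id": [1, 1, 2, 2],
    "measure_name": ["sales_amount", "discount_amount", "sales_amount", "discount_amount"],
    "measure_value": [100.0, 10.0, 250.0, 0.0],
})
# The same calculation now requires pivoting the rows back into columns first.
pivoted = generic_fact.pivot(index="order_id", columns="measure_name", values="measure_value")
pivoted["discount_pct"] = pivoted["discount_amount"] / pivoted["sales_amount"]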

Is there a way to improve speed/performance of google sheets functions?

I recently worked on a project in google sheets. While everything is working, changes in the page take a while to process and load. There are a few parts of the project (described below). Is there a way to find out what is causing the biggest load on the project, so that I can work on that area?
Alternatively, if someone has experience with the following types of functions, what do you think is causing the biggest slowdown:
I have a query to look up and match values. Would this be faster as a VLOOKUP with SORT in an ARRAYFORMULA?
=IFERROR(QUERY(Record!A:C,"Select A where B = '"&B7&"' order by A desc limit 1 label A ''"),"")
I have random number generation through an arrayformula:
=ARRAYFORMULA(IF(ROW(B6:B)=6,"RANDOM",IF(ISBLANK(B6:B),"",RANDBETWEEN(0+0*ROW(B6:B),COUNTA(B6:B)))))
This fills in a cell with a random number if the one next to it has a value. I use this for random sampling in another query later.
I have some conditional formatting based on whether the cell has something in it.
I have some data validation based on a range of cells.
Note: Since my main question is about performance, I didn't think having an example file would be beneficial. It would take me a little while to make one, so let me know if I should. Also, since other questions deal with scripting performance (like this one and this one), I feel like my question is different.
I suspect the RANDBETWEEN formula is your biggest culprit. Basically, every time the spreadsheet changes in any way whatsoever, even if you don't actually edit anything, the numbers all change; so inside an ARRAYFORMULA, depending on how many rows you have, it is always recalculating the random value for every single row.

How to detect duplicate data?

I have got a simple contacts database but I'm having problems with users entering in duplicate data. I have implemented a simple data comparison but unfortunately the duplicated data that is being entered is not exactly the same. For example, names are incorrectly spelled or one person will put in 'Bill Smith' and another will put in 'William Smith' for the same person.
So is there some sort of algorithm that can give a percentage for how similar an entry is to another?
So is there some sort of algorithm that can give a percentage for how similar an entry is to another?
Algorithms such as Soundex and edit distance (as suggested in a previous post) can solve some of your problems. However, if you are serious about cleaning your data, they will not be enough. As others have stated, "Bill" does not sound anything like "William".
The best solution I have found is to use a reduction algorithm and a mapping table to reduce each name to its root name.
To your regular Person table, add root versions of the names, e.g.
Person (Firstname, RootFirstName, Surname, RootSurname, ...)
Now, create a mapping table.
FirstNameMappings (Primary KEY Firstname, Rootname)
Populate your Mapping table by:
INSERT IGNORE INTO FirstNameMappings (Firstname, Rootname) SELECT Firstname, 'UNDEFINED' FROM Person;
This will add every first name that appears in your Person table, each with a RootName of "UNDEFINED".
Now, sadly, you will have to go through all the unique first names and map them to a RootName. For example, "Bill", "Billl" and "Will" should all be translated to "William".
This is very time consuming, but if data quality really is important for you I think it's one of the best ways.
Now use the newly created mapping table to update the "Rootfirstname" field in your Person table. Repeat for surname and address. Once this is done you should be able to detect duplicates without suffering from spelling errors.
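If you want to prototype the root-name idea outside the database first, a minimal Python sketch of the same approach might look like this (the mapping entries are just examples; you would still have to build the real mapping by hand):

# Hypothetical root-name mapping, analogous to the FirstNameMappings table above.
ROOT_FIRST_NAMES = {
    "bill": "william",
    "will": "william",
    "bob": "robert",
}

def root_first_name(first_name):
    """Return the root form of a first name, or the name itself if unmapped."""
    key = first_name.strip().lower()
    return ROOT_FIRST_NAMES.get(key, key)

def same_person(a, b):
    """Compare two (first_name, surname) pairs on their root forms."""
    return (root_first_name(a[0]), a[1].strip().lower()) == \
           (root_first_name(b[0]), b[1].strip().lower())

print(same_person(("Bill", "Smith"), ("William", "Smith")))  # True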
You can compare the names with the Levenshtein distance. If the names are identical the distance is 0; otherwise it is the minimum number of single-character insertions, deletions and substitutions needed to transform one string into the other.
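For reference, here is a minimal pure-Python implementation of the Levenshtein distance (many languages and databases also ship one, so treat this as an illustration rather than a recommendation):

def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("Bill Smith", "Bill Smyth"))     # 1
print(levenshtein("Bill Smith", "William Smith"))  # 4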
I imagine that this problem is well understood but what occurs to me on first reading is:
compare fields individually
count those that match (for a possibly loose definition of match, and possibly weighing the fields differently)
present for human intervention any cases which pass some threshold
Use your existing database to get a good first guess for the threshold, and correct as you accumulate experience.
You may prefer a fairly strong bias toward false positives, at least at first.
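A rough Python sketch of that idea, with invented field names and weights and an edit-distance ratio standing in for the per-field match, might look like this:

from difflib import SequenceMatcher

# Hypothetical field weights; tune them against your own data.
FIELD_WEIGHTS = {"first_name": 1.0, "surname": 2.0, "email": 3.0, "phone": 2.0}

def field_similarity(a, b):
    """Loose 0..1 similarity between two field values."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def match_score(record_a, record_b):
    """Weighted average similarity across the configured fields."""
    total = sum(FIELD_WEIGHTS.values())
    score = sum(w * field_similarity(record_a.get(f, ""), record_b.get(f, ""))
                for f, w in FIELD_WEIGHTS.items())
    return score / total

# Pairs scoring above the threshold get queued for human review.
THRESHOLD = 0.75
a = {"first_name": "Bill", "surname": "Smith", "email": "bsmith@example.com", "phone": "555-0100"}
b = {"first_name": "William", "surname": "Smith", "email": "bsmith@example.com", "phone": "555-0100"}
print(match_score(a, b) >= THRESHOLD)  # True for this pair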
While I do not have an algorithm for you, my first action would be to take a look at the process involved in entering a new contact. Perhaps users do not have an easy way to find the contact they are looking for. Much like on Stack Overflow's new question form, you could suggest contacts that already exist on the new contact screen.
If you have access to SSIS, check out the Fuzzy Grouping and Fuzzy Lookup transformations.
http://www.sqlteam.com/article/using-fuzzy-lookup-transformations-in-sql-server-integration-services
http://msdn.microsoft.com/en-us/library/ms137786.aspx
If you have a large database with string fields, you can very quickly find a lot of duplicates by using the simhash algorithm.
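As a rough illustration of the idea (a simplified, untuned sketch rather than a production implementation), a simhash over character trigrams can be built like this:

import hashlib

def simhash(text, bits=64):
    """Simplified simhash over character trigrams: similar strings tend to
    produce fingerprints with a small Hamming distance."""
    counts = [0] * bits
    text = text.lower()
    grams = [text[i:i + 3] for i in range(max(len(text) - 2, 1))]
    for gram in grams:
        h = int.from_bytes(hashlib.md5(gram.encode()).digest()[:8], "big")
        for bit in range(bits):
            counts[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(bits) if counts[bit] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Near-duplicates typically end up much closer in Hamming distance than unrelated strings.
print(hamming(simhash("Bill Smith, 12 High Street"), simhash("Bill Smith, 12 High St")))
print(hamming(simhash("Bill Smith, 12 High Street"), simhash("Quarterly revenue report 2019")))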
This may or may not be related but, minor misspellings might be detected by a Soundex search, e.g., this will allow you to consider Britney Spears, Britanny Spares, and Britny Spears as duplicates.
Nickname contractions, however, are difficult to consider as duplicates and I doubt if it is wise. There are bound to be multiple people named Bill Smith and William Smith, and you would have to iterate that with Charles->Chuck, Robert->Bob, etc.
Also, if you are considering, say, Muslim users, the problem becomes more difficult (there are, for example, a great many people named Mohammed/Mohammad).
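To see the contrast concretely, assuming the third-party jellyfish library is available (pip install jellyfish), a quick check of the Soundex codes shows why misspellings collapse together while nickname contractions do not:

import jellyfish  # third-party library; pip install jellyfish

# Misspellings often share a Soundex code...
print(jellyfish.soundex("Britney"), jellyfish.soundex("Britny"))  # same code
print(jellyfish.soundex("Spears"), jellyfish.soundex("Spares"))   # same code

# ...but nickname contractions usually do not.
print(jellyfish.soundex("Bill"), jellyfish.soundex("William"))    # different codes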
I'm not sure it will work well for the names vs nicknames problem, but the most common algorithm in this sort of area would be the edit distance / Levenshtein distance algorithm. It's basically a count of the number of character changes, additions and removals required to turn one item into another.
For names, I'm not sure you're ever going to get good results with a purely algorithmic approach - what you really need is masses of data. Take, for example, how much better Google's spelling suggestions are than those in a normal desktop application. This is because Google can process billions of web queries and look at which queries lead to each other, which 'did you mean' links actually get clicked, etc.
There are a few companies which specialise in the name-matching problem (mostly for national security and fraud applications). The one I could remember, Search Software America, seems to have been bought out by these guys: http://www.informatica.com/products_services/identity_resolution/Pages/index.aspx, but I suspect any of these sorts of solutions would be far too expensive for a contacts application.
FullContact.com has APIs that can solve this for you; see their documentation here: http://www.fullcontact.com/developer/docs/?category=name.
They have APIs for Name Normalization (Bill into William), Name Deducer (for raw text), and Name Similarity (comparing two names).
All the APIs are free at the moment, so it could be a good way to get started.
You might also want to look into probabilistic matching.
For those wandering around the web who end up here, might I suggest trying a Google Sheets add-on I created called Flookup.
It's particularly good with names and it has a couple of other awesome features which I'll describe below:
Say you have a list of names and there are 2 people called "John Smith". You can use the rank parameter from Flookup to instruct the algorithm to return the 1st, 2nd, 3rd or nth best match. This is helpful if you have additional information that you can use to identify the "John Smith" you want.
Say you have an additional database/list of apartment numbers. You can specify which "John Smith" you want by typing John Smith & Apartment A or John Smith & Apartment B as the lookup parameter, to help distinguish between the two names.
I hope you find Flookup as beneficial as others have.
