How do I gather statistics from Essbase? (Oracle)

I would like to gather some statistics about the usage of several cubes with Hyperion ESSBASE.
I've found this in the official documentation, but it doesn't seem to be what I need.
What I would like to achieve is to know how many times a dimension has been queried, in which cube, which dimensions are often used together and so on.
Is there any tool or MaxL statement for that purpose?

I don't know of such a tool. There is an option called query tracking, which Essbase uses internally for aggregation, but I don't know of a way to display that information.
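For completeness, query tracking can at least be switched on per database with a MaxL statement along the following lines (a sketch only; Sample.Basic is a placeholder application and database name, and as noted there is no built-in report of what it records):

alter database Sample.Basic enable query_tracking;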
Sorry, but I'm afraid there is no solution to your problem.
Greets
Kevin

Related

Integrating / Transforming data from different / disparate sources without storing it

I have a use case: I want to integrate and transform data from different, disparate sources without storing it. The data sources are databases (Oracle, DB2, etc.), web services (REST/SOAP), flat files (CSV, XML, JSON), MQ dumps and mainframe systems. I want to pull data from these sources, do some kind of intelligent transformation and integration, and provide the result to our customers. It looks like a typical ETL scenario, but my situation is different: I am not allowed to store the data supplied by the disparate sources. For a simple example, I pull data from Oracle, a SOAP service and a REST service, and do all my transformations and integrations on the fly.
I have searched Google and other technical resources but could not find a convincing solution to this problem.
I would appreciate any insight into the problem, along with suggestions and possible approaches.
Note: the data from these sources can sometimes be very large.
Thanks in Advance
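For illustration only, here is a minimal Python sketch of the "transform on the fly, never persist" idea the question describes: two stubbed sources are streamed, joined in memory and handed straight to the consumer. The source functions, field names and the tax calculation are all made up for the example; in practice they would be replaced by real REST calls and database cursors.

# Sketch: stream records from two sources, join and transform them on the fly,
# and hand each enriched record to the consumer without ever persisting it.
from typing import Dict, Iterator


def read_orders_from_rest() -> Iterator[Dict]:
    """Stand-in for a paginated REST call (e.g. one requests.get per page)."""
    yield {"order_id": 1, "customer_id": 10, "amount": 250.0}
    yield {"order_id": 2, "customer_id": 11, "amount": 99.5}


def read_customers_from_db() -> Iterator[Dict]:
    """Stand-in for a database cursor fetched row by row."""
    yield {"customer_id": 10, "name": "Acme Corp"}
    yield {"customer_id": 11, "name": "Globex"}


def integrate() -> Iterator[Dict]:
    # Build a small lookup from the slower-changing source, then stream the
    # high-volume source through it; nothing is written to disk.
    customers = {c["customer_id"]: c for c in read_customers_from_db()}
    for order in read_orders_from_rest():
        customer = customers.get(order["customer_id"], {})
        yield {
            "order_id": order["order_id"],
            "customer": customer.get("name", "unknown"),
            "amount_with_tax": round(order["amount"] * 1.2, 2),  # example transformation
        }


if __name__ == "__main__":
    for record in integrate():
        print(record)  # in practice: push to the customer-facing API instead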
Take a look at http://teiid.org
That is exactly what it does, and it is open source.
Talend Open Studio is a great solution as well; I'm using it, and it makes building ETL workflows easy.
https://www.talend.com/products/data-integration/data-integration-manuals-release-notes/
You can see a lot of help videos: https://www.youtube.com/results?search_query=talend+studio

Survey Monkey results directly into DB

This may be a question for Survey Monkey, but I felt that someone here may have encountered something like this in the past. Is there a way to work with the Survey Monkey (SM) API to add the information from a survey straight into a database of my own? I realize that I can export the information to output files, but I was wondering if there was a way to access the information in the SM database directly. I feel like this might raise some privacy concerns for SM. Has anyone attempted this, or would my best option be to create my own surveys without a third-party website?
I had a similar issue and here's my solution.
I was doing health-related surveys which contain HIPAA-protected personal health information. Zapier is NOT HIPAA-safe, so the "zap the results over to Google Drive" solution didn't work.
So I wanted a quick n dirty way to grab SM survey data and begin to design a data structure to analyse and store this data. I figured that I would start with <1000 results, sort it out, then build out a bigger/fancier structure as needed.
I just downloaded CSVs of the SM individual responses, munged the downloaded files to make a Python CSV reader happy, then wrote a Python 3.5 script to grab the survey data and spit it out into a couple of output CSV files designed for different analytic purposes.
It was really quick and easy to alter the Python script to deliver different subsets of data to different output files, and really quick and easy to see if these output (CSV or XLS) files really told me what I wanted to know.
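As a rough illustration of that kind of script (not the author's actual code), here is a minimal Python 3 sketch that reads an exported SurveyMonkey CSV and writes a trimmed subset of columns to a second CSV; the file names and column headers are hypothetical:

import csv

# Hypothetical file and column names; adjust to match the exported survey.
SOURCE = "sm_responses.csv"
TARGET = "analysis_subset.csv"
KEEP = ["Respondent ID", "Q1 How often do you exercise?", "Q2 Age group"]

with open(SOURCE, newline="", encoding="utf-8") as src, \
     open(TARGET, "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=KEEP)
    writer.writeheader()
    for row in reader:
        # Keep only the columns needed for this particular analysis.
        writer.writerow({col: row.get(col, "") for col in KEEP})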
This is a really quick and easy way to start analysing right away without spending too much time on procedural overhead. You can alter CSV (or XLS) tables really quickly and easily, so you can mix and match data and derivative data as much as you want. A wise person once told me "don't think, do." So the more you analyse on small runs of data, the better your final Big Buildout In The Sky will look.
Yah, you can spend a lot of time writing an API and setting up a database, but if you are not completely sure what you want out of the SM data, start small. Hope this helps.

Extend and customize a performance monitor

.NET has several built-in performance counter types, and I would like to know whether it is possible to extend one of the built-in ones and add custom functionality.
More specifically, I would like to take RateOfCountsPerSecond32 and turn it into something such as RateOfCountsPerMinute. Basically, I would like to be able to monitor the average number of events that happen in a given time window longer than a second.
Is extending the right idea? If so, does anyone have a quick syntax example of how it can be done? I have read that it can be done, but documentation on the subject is very thin. Or is there a better way to go about this entirely? I am very open to suggestions.
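One alternative to extending a built-in counter type is to record the events yourself and compute the rate over whatever window you like. Shown here only as a language-agnostic sketch (in Python rather than .NET), the idea is a sliding one-minute window of timestamps:

import time
from collections import deque


class RollingRate:
    """Counts events and reports the rate over a sliding window (60 s by default)."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.events = deque()

    def record(self) -> None:
        self.events.append(time.monotonic())

    def per_window(self) -> int:
        # Drop timestamps that have fallen out of the window, then count the rest.
        cutoff = time.monotonic() - self.window
        while self.events and self.events[0] < cutoff:
            self.events.popleft()
        return len(self.events)


rate = RollingRate()
for _ in range(5):
    rate.record()
print(rate.per_window())  # -> 5 events in the last minute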
Thanks for any help,

Efficient server-side autocomplete

First of all, I know:
Premature optimization is the root of all evil
But I think a badly implemented autocomplete can really bring down your site.
I would like to know if there are any libraries out there which can do autocomplete efficiently (server-side) and which preferably fit into RAM (for best performance). So no browser-side JavaScript autocomplete (YUI/jQuery/Dojo); I think there are enough topics about that on Stack Overflow. But I could not find a good thread about this on Stack Overflow (maybe I did not look hard enough).
For example, autocompleting names:
names:[alfred, miathe, .., ..]
What I can think of:
A simple SQL LIKE query, for example: SELECT name FROM users WHERE name LIKE 'al%'.
I think this implementation will fall over with a lot of simultaneous users or a large data set, but maybe I am wrong, so numbers (of what it can handle) would be nice.
Using something like Solr's terms component, for example: http://localhost:8983/solr/terms?terms.fl=name&terms.sort=index&terms.prefix=al&wt=json&omitHeader=true.
I don't know how this performs, so if you run a big site, please tell me.
Maybe something like an in-memory Redis trie, whose performance I also haven't tested (a small sketch of the in-memory idea follows below).
I also read in this thread about how to implement this in Java (Lucene and a library created by Shilad).
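To make the in-memory option mentioned above concrete, here is a small Python sketch that keeps the names sorted in RAM and answers prefix queries with binary search, a simple alternative to a trie; the sample names are made up:

import bisect

# All completion candidates, kept sorted and entirely in RAM.
names = sorted(["alan", "albert", "alfred", "alice", "bob", "miathe"])


def complete(prefix: str, limit: int = 10) -> list:
    """Return up to `limit` names starting with `prefix` via two binary searches."""
    lo = bisect.bisect_left(names, prefix)
    hi = bisect.bisect_right(names, prefix + "\uffff")  # upper bound of the prefix range
    return names[lo:hi][:limit]


print(complete("al"))  # -> ['alan', 'albert', 'alfred', 'alice']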
What I would like to hear about are implementations used by real sites, and numbers for how well they handle load, preferably with:
Link to implementation or code.
Numbers for how far you know it can scale.
It would be nice if it could be accessed over HTTP or sockets.
Many thanks,
Alfred
Optimising for Auto-complete
Unfortunately, the resolution of this issue will depend heavily on the data you are hoping to query.
LIKE queries will not put too much strain on your database, as long as you spend time using 'EXPLAIN' or the profiler to show you how the query optimiser plans to perform your query.
Some basics to keep in mind:
Indexes: Ensure that you have indexes set up. (Yes, in many cases LIKE does use indexes; there is an excellent article on the topic at myitforum: "SQL Performance - Indexes and the LIKE clause".)
Joins: Ensure your JOINs are in place and are optimized by the query planner. SQL Server Profiler can help with this. Look out for full index or full table scans.
Auto-complete sub-sets
Auto-complete queries are a special case, in that they usually work on ever-decreasing sub-sets.
'name' LIKE 'a%' (may return 10000 records)
'name' LIKE 'al%' (may return 500 records)
'name' LIKE 'ala%' (may return 75 records)
'name' LIKE 'alan%' (may return 20 records)
If you return the entire result set for query 1, then there is no need to hit the database again for the following queries, since their results are sub-sets of your original result.
Depending on your data, this may open a further opportunity for optimisation.
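A minimal Python sketch of that sub-set trick, assuming the results for the first broad prefix are already held in memory (the names are made up for the example):

# Results already fetched once for the broad prefix 'a%'; no further database
# round trips are needed while the user keeps typing characters that extend it.
cached_prefix = "a"
cached_results = ["alan", "albert", "alfred", "alice", "amanda", "arthur"]


def narrow(new_prefix: str) -> list:
    """Filter the cached superset instead of re-querying the database."""
    if not new_prefix.startswith(cached_prefix):
        raise ValueError("prefix changed, a fresh database query is needed")
    return [name for name in cached_results if name.startswith(new_prefix)]


print(narrow("al"))   # -> ['alan', 'albert', 'alfred', 'alice']
print(narrow("alf"))  # -> ['alfred']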
I cannot meet your exact requirements, and obviously the scaling numbers will depend on hardware, the size of the DB, the architecture of the app, and several other factors; you must test that yourself.
But I will tell you the method I've used with success:
Use a simple SQL LIKE query, for example SELECT name FROM users WHERE name LIKE 'al%', but use TOP 100 to limit the number of results.
Cache the results and maintain a list of the terms that are cached.
When a new request comes in, first check that list to see whether you have the term (or part of the term) cached.
Keep in mind that your cached results are limited, so you may need to fall back to a SQL query if the term is still valid at the end of the cached result set (I mean valid if the last cached result still matches the term).
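A rough Python sketch of that flow, with the SQL call stubbed out and a 100-row cap standing in for TOP 100; the function names and sample data are illustrative only:

RESULT_LIMIT = 100  # stands in for TOP 100 in the SQL query
cache = {}          # prefix -> list of cached names


def query_database(prefix: str) -> list:
    """Stub for: SELECT TOP 100 name FROM users WHERE name LIKE '<prefix>%'."""
    sample = ["alan", "albert", "alfred", "alice", "amanda"]
    return [n for n in sample if n.startswith(prefix)][:RESULT_LIMIT]


def autocomplete(prefix: str) -> list:
    # 1. Exact prefix already cached?
    if prefix in cache:
        return cache[prefix]
    # 2. A shorter cached prefix whose result set was NOT truncated can be filtered locally.
    for cached_prefix, results in cache.items():
        if prefix.startswith(cached_prefix) and len(results) < RESULT_LIMIT:
            return [n for n in results if n.startswith(prefix)]
    # 3. Otherwise fall back to the database and remember the answer.
    cache[prefix] = query_database(prefix)
    return cache[prefix]


print(autocomplete("al"))
print(autocomplete("alf"))  # served from the cached 'al' results, no second query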
Hope it helps.
Using SQL versus Solr's terms component is really not a comparison. At their core they solve the problem the same way by making an index and then making simple calls to it.
What I would want to know is what you are trying to autocomplete.
Ultimately, the easiest and most surefire way to scale a system is to build a simple solution and then scale it by replicating data. Trying to cache calls or predict results just makes things complicated and doesn't get to the root of the problem (i.e. you can only take those so far; think about what happens when every request misses the cache).
Perhaps a little more info about how your data is structured and how you want to see it extracted would be helpful.

Using Flickr to get photos of a specific location and put together a model

I've read about systems which use the Flickr database of photos to fill in gaps in photos (http://blogs.zdnet.com/emergingtech/?p=629).
How feasible is a system like this? I was toying with the idea (not just as a way of killing time but as a good addition to something I am coding) of using Flickr to get photos of a certain entity (in this case, race tracks) and reconstruct a model. My biggest concern is that there may not be enough photos of a particular track, and even then it would be difficult to tell whether two photos are of the same part of the racetrack, in which case one of them may be irrelevant.
How feasible is something like this? Is it worth attempting by a sole developer?
Sounds like you're wanting to build a Photosynth style system - check out Blaise Aguera y Arcas' demo at TED back in 2007. There's a section about 4 minutes in where he builds a model of the Sagrada Família from photographs.
I say +1 for the Photosynth answer; it's a great tool. Not sure how well you could incorporate it into your own app, though.
It's definitely feasible. Anything is possible. And yes, doable for a single developer; it just depends how much free time you have. It would be great to see something like this integrated into Virtual Earth or Google Maps Street View. Someone who could nail software like this could help 3D-model the entire world based purely on photographs. That would be a great product and make any single developer rich and famous.
So get coding. :)
I have plenty of free time, as I am in between jobs.
One way to do it is to get an overhead view of the track layout, make a blueprint based on this model, and then get one photo of the track and mimic the track's road colour. That would be a start.
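As a toy illustration of the "mimic the road colour from one photo" step, here is a small Python sketch using Pillow to average the pixels in a hand-picked region of a track photo; the file name and region coordinates are placeholders:

from PIL import Image  # pip install Pillow

# Placeholder file name and region coordinates (left, upper, right, lower).
photo = Image.open("track_photo.jpg").convert("RGB")
road_region = photo.crop((120, 300, 400, 450))

pixels = list(road_region.getdata())
avg_colour = tuple(sum(channel) // len(pixels) for channel in zip(*pixels))
print(avg_colour)  # e.g. (86, 84, 88) -> a grey to reuse when texturing the model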
LINQ to Flickr on CodePlex has a great API and would be helpful for your task.
