I have a site with millions of users (well, actually it doesn't have any yet, but let's imagine), and I want to calculate some stats like "log-ins in the past hour".
The problem is similar to the one described here: http://highscalability.com/blog/2008/4/19/how-to-build-a-real-time-analytics-system.html
The simplest approach would be to do a select like this:
select count(distinct user_id)
from logs
where date >= '20120601 1200' and date <= '20120601 1300'
(of course other conditions could apply for the stats, like log-ins per country)
Of course this would be really slow, especially with millions (or even thousands) of rows, and I want to run it every time a page is displayed.
How would you summarize the data? What should go to the (mem)cache?
EDIT: I'm looking for a way to de-normalize the data, or to keep the cache up to date. For example, I could increment an in-memory variable every time someone logs in, but that would only tell me the total number of logins, not the "logins in the last hour". Hope it's clearer now.
IMO the better approach here would be to implement a continuous calculation that holds the relevant counters in memory. Every time a user is added to your system you can fire an event, which can be processed in multiple ways to update last-hour, last-day, or even total-user counters. There are some great frameworks out there for this sort of processing: Twitter Storm is one, another is GigaSpaces XAP (disclaimer: I work for GigaSpaces), specifically this tutorial, and there are also Apache S4 and GridGain.
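To make the counter idea concrete, here is a minimal framework-free sketch (the per-minute bucket granularity and all names are my own choices, not from any of the frameworks above): a ring of 60 per-minute buckets, where each login increments the current bucket and "logins in the last hour" is the sum of the still-live buckets. Note that this counts login events; counting distinct users would need a set (or a probabilistic sketch such as HyperLogLog) per bucket.

public class HourlyLoginCounter {

    private static final int BUCKETS = 60;                 // one bucket per minute
    private final long[] counts = new long[BUCKETS];
    private final long[] minuteOfBucket = new long[BUCKETS];

    // Called from the login code path.
    public synchronized void recordLogin() {
        counts[refresh(currentMinute())]++;
    }

    // Called from the stats page; sums only buckets inside the one-hour window.
    public synchronized long loginsInLastHour() {
        long now = currentMinute();
        long sum = 0;
        for (int i = 0; i < BUCKETS; i++) {
            if (now - minuteOfBucket[i] < BUCKETS) {
                sum += counts[i];
            }
        }
        return sum;
    }

    // Reuses the ring slot for the current minute, clearing stale data first.
    private int refresh(long minute) {
        int slot = (int) (minute % BUCKETS);
        if (minuteOfBucket[slot] != minute) {
            minuteOfBucket[slot] = minute;
            counts[slot] = 0;
        }
        return slot;
    }

    private static long currentMinute() {
        return System.currentTimeMillis() / 60_000;
    }
}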
If you don't have a db then never mind. I don't have millions of users, but I have a table with a year's worth of logons, about a million rows, and it does simple stats like that in under a second. A million rows is not that much for a database. You cannot make date the PK, as you can have duplicates. For minimal fragmentation and fast inserts, make date a clustered non-unique ascending index; that is the order the data comes in. Not sure which DB you have, but you can do that in MSSQL. An index on user_id is something to test: it would slow down inserts, as it is an index that will fragment. If you are looking at a fairly tight time span, a table scan might be OK.
Why distinct user_id, rather than just counting a login as a login?
Have a property that only runs the query every x seconds (even if that is every second) and otherwise reports the cached answer; see the sketch below. If 200 pages hit that property in one second, you surely don't want 200 queries. And if a stat covering the last hour is one second stale, it is still a valid stat.
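A sketch of that throttled property (all names are illustrative; the query supplier stands in for whatever count query you run): the expensive query executes at most once per refresh interval, and every other caller in between gets the cached answer.

import java.util.function.Supplier;

public class CachedStat {

    private final Supplier<Long> query;   // e.g. () -> runLoginsLastHourQuery()
    private final long refreshMillis;
    private volatile long cachedValue;
    private volatile long lastRun;

    public CachedStat(Supplier<Long> query, long refreshMillis) {
        this.query = query;
        this.refreshMillis = refreshMillis;
    }

    public long get() {
        long now = System.currentTimeMillis();
        if (now - lastRun >= refreshMillis) {
            synchronized (this) {
                if (now - lastRun >= refreshMillis) {  // re-check under the lock
                    cachedValue = query.get();
                    lastRun = now;
                }
            }
        }
        return cachedValue;  // at most refreshMillis stale, which is fine here
    }
}

Usage would be something like new CachedStat(() -> countLoginsLastHour(), 1000L), where countLoginsLastHour() is your actual database call.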
I've ended up using Esper/NEsper. Also, Uri's suggestions were useful.
Esper allows me to compute real-time stats of data as it's being obtained.
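For reference, the sliding-window stat comes out as one continuous query in Esper. A minimal sketch against the classic (pre-7.x) Esper API; the event class and property names are illustrative:

import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;

public class LoginStats {

    // Plain POJO event; Esper reads the userId property via its getter.
    public static class LoginEvent {
        private final String userId;
        public LoginEvent(String userId) { this.userId = userId; }
        public String getUserId() { return userId; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("LoginEvent", LoginEvent.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        // Continuous query: distinct users over a sliding one-hour window.
        EPStatement stmt = engine.getEPAdministrator().createEPL(
            "select count(distinct userId) as active from LoginEvent.win:time(1 hour)");
        stmt.addListener((newEvents, oldEvents) ->
            System.out.println("distinct logins, last hour: " + newEvents[0].get("active")));

        // Fire one event per login as it happens; the engine keeps the window.
        engine.getEPRuntime().sendEvent(new LoginEvent("user42"));
    }
}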
If you're just running off of logs, you probably want to look at something like Splunk.
Generally, if you want this in-memory and fast (real time), you would create a distributed cache of login data with an eviction after e.g. 24 hours, and then you could query that cache for e.g. logins within the past hour.
Assuming a login record looks something like:
import java.io.Serializable;

public class Login implements Serializable {

    private String userId;
    private long loginTime;
    private long lastSeenTime;
    private long logoutTime;   // 0 while the user is still logged in

    public Login(String userId, long loginTime) {
        this.userId = userId;
        this.loginTime = loginTime;
        this.lastSeenTime = loginTime;
    }

    public String getUserId() { return userId; }
    public long getLoginTime() { return loginTime; }
    public long getLastSeenTime() { return lastSeenTime; }
    public void setLastSeenTime(long lastSeenTime) { this.lastSeenTime = lastSeenTime; }
    public long getLogoutTime() { return logoutTime; }
    public void setLogoutTime(long logoutTime) { this.logoutTime = logoutTime; }
}
To support the eviction after 24 hours, simply configure an expiry (TTL) on the cache:
<expiry-delay>24h</expiry-delay>
To query for users who logged in within the past hour and are still logged in:
long oneHourAgo = System.currentTimeMillis() - 60*60*1000;
Filter query = QueryHelper.createFilter("loginTime > " + oneHourAgo
+ " and logoutTime = 0");
Set idsLoggedIn = cache.keySet(query);
To query for the number of logins and/or active users in the past hour:
long oneHourAgo = System.currentTimeMillis() - 60*60*1000;
Filter query = QueryHelper.createFilter("loginTime > " + oneHourAgo
+ " or lastSeenTime > " + oneHourAgo);
int numActive = cache.keySet(query).size();
(See http://docs.oracle.com/cd/E15357_01/coh.360/e15723/api_cq.htm for more info on queries. All these examples were from Oracle Coherence.)
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
Goal: read a big Cassandra table, process line by line in parallel
Constraints:
not all rows in memory
no Spark, we have to use Camel
One shot, no need to poll the table
I did a first working version with CassandraQL, but this component seems limited to a single query with everything in memory; I did not find mechanics like fetchSize/fetchMoreResults. I looked at the CassandraProducer class, PollingConsumerPollingStrategy, ResultSetConversionStrategy... and saw nothing.
Would it be possible to read the table in chunks of 1000 elements, for example, where each chunk generates an exchange that is later split across different threads?
I think that maybe a ProducerTemplate injecting the first exchanges into the route could be the answer. But I don't understand how I could manage the exchange production rate to avoid having too many rows in memory (to do so we would need, for example, to check the size of the next blocking queue: if it holds more than X unconsumed elements, wait before producing more messages).
Maybe there are other options to do something like this?
Maybe I did not see the magic parameter in CassandraQL?
Maybe we can override some parts of CassandraQL?
Thank you
This is not going to be an answer to your question, but I hope to kick off some discussion. As someone learning Cassandra and having spent a bit of time with it, the question got me thinking. This mainly targets the fetchSize/fetchMoreResults part of the question.
First of all, two of your constraints are contradictory:
Not all rows in memory
(I don't want them all fetched at once.)
One shot, no need to poll the table
(I don't want to go back to the DB more than once.)
Unless what you actually meant is:
Not all rows in memory
(I don't want them all fetched at once.)
You can go back to the partition many times, as long as you go straight back to where you left off last time.
That is, as long as the time it takes for the first page is the same as for the second page, and the 19th page takes the same time as the 20th;
i.e. not starting again from the first row.
So I am going to assume that you meant the second scenario, and I'll go with it.
Queries for Cassandra are going to satisfy the following two properties:
They are going to have a restriction on clustering columns
They are already ordered by clustering columns
Now consider the following table:
department (partition key), firstName (clustering key), personId (clustering key), lastname, etc. as normal columns
First query:
select department, firstName, lastname, etc
from person
where department = 'depart1'
order by firstName ASC
limit 25;
Second query (let's say the last record on the page had personId=25 and firstName='kavi'):
select department, firstName, lastname, etc
from person
where department = 'depart1' and (firstName, personId) > ('kavi', 25)
order by firstName ASC
limit 25;
(The tuple comparison on the two clustering columns resumes exactly where the previous page ended, including rows whose firstName sorts after 'kavi'.)
As you can see, we can easily construct a Cassandra query that brings back each chunk of a given size in roughly constant time.
Now, back to the integration framework.
I remember a concept called a watermark in Mule, where the endpoint can store a value and remember it so that it can start from there next time. In this case, the personId and firstName of the last record of the last page are the watermark; that is what lets you issue the second query. I am sure we should be able to do the same with Camel.
I hope I have convinced you that polling is not an issue when each chunk is retrieved in constant time.
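On the fetchSize part of the original question: the DataStax Java driver (which the Camel Cassandra component sits on) already pages transparently. A sketch against the driver 3.x API (contact point, keyspace, and the processing step are illustrative):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class ChunkedRead {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");

        // The driver fetches 1000 rows per round trip and pulls the next page
        // as iteration crosses the page boundary, so at most ~1000 rows are
        // held in memory at a time.
        Statement stmt = new SimpleStatement("select * from person").setFetchSize(1000);
        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            System.out.println(row.getString("firstname"));  // stand-in for real processing
        }
        cluster.close();
    }
}

If the read has to stop and resume across runs, the driver's PagingState (rs.getExecutionInfo().getPagingState(), re-applied with setPagingState on the next statement) plays exactly the watermark role described above.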
This is a problem I have been thinking about for a long time but I haven't written any code yet because I first want to solve some general problems I am struggling with. This is the main one.
Background
A single page web application makes requests for data to some remote API (which is under our control). It then stores this data in a local cache and serves pages from there. Ideally, the app remains fully functional when offline, including the ability to create new objects.
Constraints
Assume a server-side database containing ±50,000 products (50 MB)
Assume no db type, we interact with it via REST/GraphQL interface
Assume a single product record is < 1kB
Assume a max payload for a resultset of 256kB
Assume max 5MB storage on the client
Assume search result sets ranging between 0 ... 5000 items per search
Challenge
The challenge is to define a stateless but network-efficient way to fetch pages from a result set, such that it is deterministic which results we will get.
Example
In traditional paging, when getting the next 100 results for some query using this url:
https://example.com/products?category=shoes&firstResult=100&pageSize=100
the search result may look like this:
{
"totalResults": 2458,
"firstResult": 100,
"pageSize": 100,
"results": [
{"some": "item"},
{"some": "other item"},
// 98 more ...
]
}
The problem with this is that there is no way, based on this information, to get exactly the objects that are on a certain page, because by the time we request the next page the result set may have changed (due to changes in the DB), influencing which items are part of it. Even a small change can have a big impact: one item removed from the DB that happened to be on page 0 of the result set will change what results we get when requesting all subsequent pages.
Goal
I am looking for a mechanism to make the definition of the result set independent of future database changes, so if someone was looking for shoes and got a result set of 2458 items, he could actually fetch all pages of that result set reliably even if it got influenced by later changes in the DB (I plan to not really delete items, but set a removed flag on them, for this purpose)
Ideas so far
I have seen a solution where the result set included a "pages" property, which was an array with the first and last id of the items in that page. Assuming your IDs keep going up in number and you don't really delete items from the DB ever, the number of items between two IDs is constant. Meaning the app could get all items between those two IDs and always get the exact same items back. The problem with this solution is that it only works if the list is sorted in ID order... I need custom sorting options.
The only way I have come up with for now is to just send a list of all IDs in the result set... That way pages can be fetched by doing a SELECT * FROM products WHERE id IN (3,4,6,9,...)... but this feels rather inelegant...
Anyway, I am hoping this is not too broad or theoretical. I have a web-based DB, just no good idea of how to do paging with it. I am looking for answers that point me in a direction to learn, not for full solutions.
Versioning the DB is the answer for result-set consistency.
Each record has a primary id, a modification counter (version number), and a modification/creation timestamp. Instead of modifying record r, you add a new record with the same id, version number + 1, and sysdate as the modification time.
In the fetch response you include the DB request_time (do not use a client timestamp, due to possible clock differences between client and server). The first page is served normally, but you return sysdate as request_time. Subsequent pages are served differently: you add a condition like modification_time <= request_time for each versioned table.
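A sketch of what the page query could look like through JDBC; the table, column names, and the LIMIT/OFFSET dialect are all assumptions, not from the answer:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

public class VersionedPage {

    // Returns the ids on one page of the result set, frozen at requestTime:
    // only versions written up to requestTime are visible, and for each id
    // we take the newest such version.
    static List<Long> fetchPage(Connection conn, Timestamp requestTime,
                                int pageSize, int offset) throws SQLException {
        String sql =
            "select p.id from products p " +
            "where p.modified_at <= ? " +
            "  and p.version = (select max(v.version) from products v " +
            "                   where v.id = p.id and v.modified_at <= ?) " +
            "  and p.removed = 0 " +       // the soft-delete flag from the question
            "order by p.name limit ? offset ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, requestTime);
            ps.setTimestamp(2, requestTime);
            ps.setInt(3, pageSize);
            ps.setInt(4, offset);
            List<Long> ids = new ArrayList<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong("id"));
                }
            }
            return ids;
        }
    }
}

A later removal just inserts a new version with removed = 1 and a newer modified_at, which the requestTime filter keeps out of the frozen result set.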
You can cache the result set of IDs on the server side when a query arrives for the first time and return a unique ID to the frontend. This unique ID corresponds to the result set for that query. The frontend can then request something like next_page along with the unique ID it got the first time it made the query. You should still go ahead with your approach of turning the DELETE operation into a removed flag, because it ensures that none of the entries in the result set get deleted. You can discard the query's result set from the cache when the frontend reaches the end of it, or you can set a time limit on the lifetime of the cache entry.
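A minimal sketch of that server-side cache (the in-process map and all names are illustrative; a real deployment might use memcached/Redis with proper eviction):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class ResultSetCache {

    private final Map<String, List<Long>> cache = new ConcurrentHashMap<>();

    // Run the search once, freeze the matching ids, hand the client a token.
    public String store(List<Long> matchingIds) {
        String token = UUID.randomUUID().toString();
        cache.put(token, new ArrayList<>(matchingIds));
        return token;
    }

    // next_page style access: slice the frozen id list, then fetch those rows by id.
    public List<Long> page(String token, int pageNumber, int pageSize) {
        List<Long> ids = cache.getOrDefault(token, List.of());
        int from = Math.min(pageNumber * pageSize, ids.size());
        int to = Math.min(from + pageSize, ids.size());
        return ids.subList(from, to);
    }

    // Called when the client reaches the last page (or from a TTL sweep).
    public void evict(String token) {
        cache.remove(token);
    }
}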
My app has a class that saves pictures that users upload. Each object in the class has a city property that holds the name of the city the picture was taken in, and a like property that tracks the number of likes.
I want to be able to send a query that returns one picture per city, where each picture has the highest number of likes in the city it belongs to. How can I do that?
One way I first thought about is doing multiple queries: fetch the most-liked picture of a city, save it in an array, then do the same for the other cities.
However, each country has more than one city, thus it's not that efficient.
Parse doesn't support the ordinary operations used in databases. Besides, I tried to use a compound query. Unfortunately, I can't set limit or ordering on the subqueries. Any good solution for this?
It would be easy using group by. Unfortunately, Parse does not support "select distinct" or "group by" features.
As you've suggested, you need to fetch all the cities for each country, and for each one get the top-rated photo.
BUT, since Parse has strict restrictions on request execution time (3 sec for an event listener, 7 sec for a custom function), I suggest you do this in a background job, saving the top-rated photo for each city in a new table. That way you can easily query the DB from the client. Background jobs can run for up to 15 minutes before Parse drops them, so you can make that kind of query without timeouts.
Hope it helps
I have a product search engine using Coldfusion8 and MySQL 5.0.88
The product search has two display modes: Multiple View and Single View.
Multiple view displays basic record info; Single view requires additional data to be pulled from the database.
Right now a user does a search and I'm querying the database for
(a) the total number of records and
(b) the records FROM to TO.
The user always goes to Single view from his current result set, so my idea was to store the current result set for each user and not have to query the database again for (wasted query a) the overall number of records and (wasted query b) the single record I already queried before, and then just fetch the detail information I still need for Single view.
However, I'm getting nowhere with this.
I cannot cache the current result-set query, because it's unique to each user (session).
The queries run inside a CFINVOKE'd method inside a CFC I'm calling through AJAX, so the whole query runs and afterwards the CFC and the invoked method are discarded, which means I can't use query-of-query or variables.cfc_storage.
So my idea was to store the current result set in the Session scope, to be updated with every new search the user runs (either pagination or a completely new search). The maximum stored will be the number of results displayed.
I can store the query allright, using:
<cfset Session.resultset = query_name>
This stores the whole query with results, like so:
query
CACHED: false
EXECUTIONTIME: 2031
SQL: SELECT a.*, p.ek, p.vk, p.x, p.y
FROM arts a
LEFT JOIN p ON
...
LEFT JOIN f ON
...
WHERE a.aktiv = "ja"
AND
... 20 conditions ...
SQLPARAMETERS: [array]
1) ... 20+ parameters
RESULTSET:
[Record # 1]
a: true
style: 402
price: 2.3
currency: CHF
...
[Record # 2]
a: true
style: 402abc
...
This would be overwritten every time a user does a new search. However, if a user wants to see the details of one of these items, I don't need to re-query (for the total number of records and for the one record) if I can access the record I need from my temp storage. This way I would save two database trips, worth 2031 ms of execution time each, to get data I already pulled before.
The tradeoff would be every user holding a result set of up to 48 results (the max number of items per page) in the Session scope.
My questions:
1. Is this feasible, or should I re-query the database?
2. If I have a structure/array/object like the above, how do I pick the record I need out of it by style number, i.e. how do I access the result set? I can't just loop over the stored query (tried this for a while now...).
Thanks for help!
KISS rule. Just re-query the database unless you find that performance is really an issue. With the correct index, it should scale pretty well. When it does become an issue, you can simply add a query cache there.
QoQ would introduce overhead (on the CF side: memory and computation), and might return stale data (where the query in session is older than the data in the DB). I only use QoQ when the same query is reused on the same view, not throughout a Session's time span.
Feasible? Yes, depending on how many users and how much data this stores in memory, it's probably much better than going to the DB again.
It seems like the best way to get the single record you want is a query of query. In CF you can create another query that uses an existing query as its data source. It would look like this:
<cfquery name="subQuery" dbtype="query">
SELECT *
FROM Session.resultset
WHERE style = #SelectedStyleVariable#
</cfquery>
Note that if you are using CFBuilder, it will probably scream "Error" at you for not having a datasource. This is a bug in CFBuilder; you are not required to have a datasource if your dbtype is "query".
Depending on how many records there are, what I would do is store the detail data in the application scope as a structure where the ID is the key. Something like:
APPLICATION.products[product_id].product_name
.product_price
.product_attribute
Then you would really only need to query for the ID of the item on demand.
And to improve the "on demand" query, you have at least two "in code" options:
1. A query of query, where you query the entire collection of items once, and then query from that for the data you need.
2. Verity or SOLR to index everything and then you'd only have to query for everything when refreshing your search collection. That would be tons faster than doing all the joins for every single query.
I have read a lot of documents about AppFabric caching, but most of them cover simple scenarios,
for example adding city-list data or shopping cart data to the cache.
But I need to add product catalog data to the cache.
I have 4 tables:
Product (1 million rows), ProductProperty (25 million rows), Property (100 rows), PropertyOption (300 rows)
I display paged search results, querying with some filters on the Product and ProductProperty tables.
I build a criteria set over the search result set, for example (4 items New Product, 34 items Phone, 26 items Book, etc.).
I run one grouping query over the Product table on columns such as IsNew, CategoryId, PriceType, etc.,
and another grouping query over the ProductProperty table on the PropertyId and PropertyOptionId columns, to get how many items each property has.
So to display search results I make one query for the results themselves and two for building the criteria list (with counts).
The search result query takes 0.7 seconds, and the two grouping queries take 1.5 seconds in total.
When I run a load test I reach 7 requests per second, and 10% are dropped by IIS because the DB cannot respond.
This is why I want to cache Product and property records.
If I follow the steps below (in AppFabric):
Create a named cache.
Create a region for the product catalog data (the Product table with 1 million rows and the ProductProperty table with 25 million rows).
Tag the items for querying and grouping.
Can I query with some tags and get the 1st or 2nd page of results?
Can I query with some tags and get the counts of some grouping results (displaying filter options with counts)?
And do I really need 3 servers? Can I build a solution with only one AppFabric server? (And of course, I know the risk.)
Do you know of any article or document that explains these scenarios?
Thanks.
Note:
Some additional tests:
I added about 30,000 items to the cache, and its size is 900 MB.
When I run the GetObjectsInRegion method, it takes about 2 minutes: IList<KeyValuePair<string, object>> dataList = this.DataCache.GetObjectsInRegion(region).ToList();
The problem is the conversion to IList. If I use IEnumerable it works very quickly. But how can I get paging or grouping results without converting to my own type?
Another test:
I tried getting grouping counts with 30,000 product items, and getting the grouping result took 4 seconds, for example GetObjectsByTag("IsNew").Count() and nearly 50 other queries like that.
There is, unfortunately, no paging API for AppFabric in V1. Any of the bulk APIs, like GetObjectsByTag, are going to perform the query on the server and stream back all the matching cache entries to the client. From there you can obviously use any LINQ operators you want on the IEnumerable (e.g. Skip/Take/Count), but be aware that you're always pulling the full result set back from the server.
I'm personally hoping that AppFabric V2 will provide support via IQueryable instead of IEnumerable which will give the ability to remote the full request to the server so it could page results there before returning to the client much like LINQ2SQL or ADO.NET EF.
For now, one potential solution, depending on the capabilities of your application, is to calculate some kind of paging as you inject the items into the cache. You can build ordered lists of entity keys representing each page and store each list as a single entry in the cache, which you can pull out in one request and then individually (in parallel) or bulk-fetch the items in the list from the cache, joining them together with an in-memory LINQ query. If you want to trade memory for CPU, just cache the actual lists of full entities rather than IDs, so you avoid the join step for the entities.
You would obviously have to come up with some kind of keying mechanism to quickly pull these lists of objects from the cache based on the incoming search criteria. Some kind of keying like this might work:
private static string BuildPageListCacheKey(
    string entityTypeName, int pageSize, int pageNumber,
    string sortByPropertyName, string sortDirection)
{
    return string.Format(
        "PageList<{0}>[pageSize={1};pageNumber={2};sortedBy={3};sortDirection={4}]",
        entityTypeName, pageSize, pageNumber, sortByPropertyName, sortDirection);
}
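AppFabric itself is .NET, but the injection side of the idea is language-neutral. An illustrative sketch (written in Java, with a plain in-process map standing in for the distributed cache and a key format mirroring the builder above):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PageListLoader {

    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    // Slice the pre-sorted entity keys into pages and store each page list as
    // a single cache entry, so serving page N later is one cache get plus a
    // bulk fetch of the listed entities.
    public void inject(String entityType, List<String> sortedEntityKeys,
                       int pageSize, String sortBy, String sortDirection) {
        for (int page = 0; page * pageSize < sortedEntityKeys.size(); page++) {
            int from = page * pageSize;
            int to = Math.min(from + pageSize, sortedEntityKeys.size());
            String key = String.format(
                "PageList<%s>[pageSize=%d;pageNumber=%d;sortedBy=%s;sortDirection=%s]",
                entityType, pageSize, page, sortBy, sortDirection);
            cache.put(key, List.copyOf(sortedEntityKeys.subList(from, to)));
        }
    }
}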
You may want to consider doing this kind of thing in a separate process or worker thread that keeps the cache up to date, rather than doing it on demand and forcing users to wait when the cache entry isn't populated yet.
Whether or not this approach ultimately works for you depends on several factors of your application and data. If it doesn't exactly fit your scenarios maybe it will at least help shift your mind into a different way of thinking about solving the problem.