How to track how many times an AJAX-loaded post has been seen? - ajax

I have something similar to Twitter's feeds where we load posts in real time. How do I track how many people have seen a tweet? I am not talking about going to domain.com/post/32434 and loading a single status; I am talking about a real-time AJAX query where posts are loaded one after the other.
Will Google Analytics or Chartbeat have anything that will help fulfill this need for me?

Why not manage your own counter in the database?
I don't really know your criteria, but let's say you basically want to count how many times a given tweet was loaded.
Very quickly, I could think of this:
The table:
CREATE TABLE tweets_loads (ID_TWEET INT NOT NULL, LOADS BIGINT NOT NULL DEFAULT 0, PRIMARY KEY (ID_TWEET)) ENGINE=InnoDB;
The query at each ajax request:
INSERT INTO tweets_loads (ID_TWEET, LOADS) VALUES (myTweetId, 1)
ON DUPLICATE KEY UPDATE LOADS = LOADS + 1;
(Assuming MySQL.)
Run it from PHP and that's it... It's then up to you to handle contention between concurrent inserts, though in theory MySQL should handle that just fine...
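A minimal sketch of the PHP side, assuming PDO and a small endpoint that the AJAX code calls with the tweet's ID each time a post is rendered (the endpoint name, parameter name, and connection details below are placeholders):
<?php
// count_view.php - hypothetical endpoint called by the AJAX code for each rendered tweet.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');

$tweetId = isset($_POST['tweet_id']) ? (int) $_POST['tweet_id'] : 0;
if ($tweetId > 0) {
    // Atomic upsert: insert the row on the first view, otherwise bump the counter.
    $stmt = $pdo->prepare(
        'INSERT INTO tweets_loads (ID_TWEET, LOADS) VALUES (?, 1)
         ON DUPLICATE KEY UPDATE LOADS = LOADS + 1'
    );
    $stmt->execute([$tweetId]);
}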

Related

Laravel - building an auction site

I'm building an auction site using Laravel 5; however, I'm currently unsure of the best way to approach the bidding process.
I currently have it set up so that once a user hits the bid button, a script runs to place the bid; however, if multiple users do this at the same time, it causes issues with multiple bids having the same value. I thought about modifying this to queue the bids so that only one bid is processed at a time, but I believe there must be a better method.
If someone could point me in the correct direction it would be greatly appreciated.
One way to do it is to simply insert the bid with a high-precision timestamp immediately, then check the table using a select to see whether it's actually the leading bid or not. The table should have an auto-incrementing id, so even if two bids have the exact same timestamp, sorting also by id will tell you which one was actually received first.
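As a rough sketch of that idea using Laravel's query builder (the bids table, its columns, and the surrounding variables are assumptions; adjust to your schema):
<?php
use Illuminate\Support\Facades\DB;

// Record the bid immediately with a high-precision timestamp.
// $auctionId, $userId and $amount are assumed to come from the validated request.
$bidId = DB::table('bids')->insertGetId([
    'auction_id' => $auctionId,
    'user_id'    => $userId,
    'amount'     => $amount,
    'placed_at'  => microtime(true), // e.g. stored in a DECIMAL column for sub-second precision
]);

// Then check whether it is actually the leading bid: highest amount first,
// ties broken by earliest timestamp, then by the auto-incrementing id.
$leading = DB::table('bids')
    ->where('auction_id', $auctionId)
    ->orderBy('amount', 'desc')
    ->orderBy('placed_at', 'asc')
    ->orderBy('id', 'asc')
    ->first();

$isLeadingBid = ($leading->id == $bidId);
Because ties are resolved deterministically (timestamp, then id), two bids of the same value arriving at the same moment still produce exactly one leader.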

dashboard timeline

I'm trying to implement a dashboard similar to Facebook's in CakePHP (getting posts, posting them to the timeline, and retrieving posts from earlier offsets when you press "see more"), but I'm still confused about the logic and tools. Should I use the CakePHP pagination class in my implementation?
$this->paginate();
It should probably be called through AJAX for performance reasons.
Any help or suggestions on where to start?
Thanks All
Don't use paginate
If you paginate something that you are prepending data to, you're going to get overlapping data: you ask for page 2 and get the end of what, as far as the current user is concerned, was the previous page.
Use a timestamp
The normal technique for an endless stream of data is to use a query like:
SELECT *
FROM foos
WHERE created >= $previousLastTimestamp
ORDER BY created DESC
LIMIT 20
Note that while I'm using created in this example - it can be any field that is pseudo unique.
When you first render the page, store the timestamp of the last entry in a JavaScript variable; then your "get more posts" logic should be:
Make an AJAX (GET) request, passing the last timestamp
Perform the above SQL query (as a $this->Foo->find call)
In your JS, update the last timestamp so that you know where you are up to the next time the user clicks "get more posts"
The reason to use a >= condition is that, unless the field you are testing against has unique values, it's possible for there to be multiple rows with the value you're testing for. If you have a naturally-unique field that you are sorting by (id) then you don't need to use greater-or-equal, you can simply use greater-than, and avoid needing to think about duplicate rows.
Here's a reference which explains in more detail why systems like this should avoid traditional pagination.
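A sketch of what that AJAX endpoint might look like as a CakePHP 2.x controller action (the controller, the more action, and the since parameter are illustrative names; the model follows the Foo example above):
<?php
// FoosController.php (CakePHP 2.x) - hypothetical action hit by the "get more posts" AJAX call.
class FoosController extends AppController {

    public function more() {
        // Timestamp the client stored from the previously rendered batch.
        $previousLastTimestamp = isset($this->request->query['since'])
            ? $this->request->query['since']
            : null;

        $foos = $this->Foo->find('all', array(
            'conditions' => array('Foo.created >=' => $previousLastTimestamp),
            'order'      => array('Foo.created' => 'DESC'),
            'limit'      => 20,
        ));

        // Return JSON so the JS side can render the posts and update its stored timestamp.
        $this->response->type('json');
        $this->response->body(json_encode($foos));
        return $this->response;
    }
}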

Database table structure for a notifications-like table for a social networking site

I am developing a social networking site like Facebook. I am confused about how to structure the notifications table. Should there be a separate table for each user, or one huge table for everyone, where records are added and deleted frequently?
I had the same problem and found this upon researching; the table structure given there is:
id
user_id (int)
activity_type (tinyint)
source_id (int)
parent_id (int)
parent_type (tinyint)
time (datetime but a smaller type like int would be better)
where:
activity_type tells me the type of activity, source_id tells me the record that the activity is related to. So if the activity type means "added favorite" then I know that the source_id refers to the ID of a favorite record.
The parent_id/parent_type are useful for my app - they tell me what the activity is related to. If a book was favorited, then parent_id/parent_type would tell me that the activity relates to a book (type) with a given primary key (id)
I index on (user_id, time) and query for activities that are user_id IN (...friends...) AND time > some-cutoff-point. Ditching the id and choosing a different clustered index might be a good idea - I haven't experimented with that.
Pretty basic stuff, but it works, it's simple, and it is easy to work with as your needs change. Also, if you aren't using MySQL you might be able to do better index-wise.
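To make the query side concrete, here is a small PDO sketch of the feed query described above (the table name activities, the connection details, and the sample values are assumptions):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');

$friendIds = array(12, 45, 78);       // IDs of the current user's friends
$cutoff    = '2015-06-01 00:00:00';   // only activities newer than this point

// One placeholder per friend ID for the IN (...) clause.
$placeholders = implode(',', array_fill(0, count($friendIds), '?'));

$stmt = $pdo->prepare(
    "SELECT * FROM activities
     WHERE user_id IN ($placeholders) AND time > ?
     ORDER BY time DESC
     LIMIT 50"
);
$stmt->execute(array_merge($friendIds, array($cutoff)));
$activities = $stmt->fetchAll(PDO::FETCH_ASSOC);
The (user_id, time) index mentioned above is exactly what this query needs.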
The answer also suggested using Redis for faster access to the most recent activities.
With Redis in the mix, it might work like this:
Create your MySQL activity record
For each friend of the user who created the activity, push the ID onto their activity list in Redis.
Trim each list to the last X items
Redis is fast and offers a way to pipeline commands across one connection - so pushing an activity out to 1000 friends takes milliseconds.
For a more detailed explanation of what I am talking about, see Redis' Twitter example: http://code.google.com/p/redis/wiki/TwitterAlikeExample
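A rough sketch of that fan-out using the phpredis extension (the activities table name, the feed: key naming, the cap of 100 items, and the surrounding variables are assumptions):
<?php
// 1. Create the MySQL activity record ($pdo is an existing PDO connection).
$stmt = $pdo->prepare(
    'INSERT INTO activities (user_id, activity_type, source_id, parent_id, parent_type, time)
     VALUES (?, ?, ?, ?, ?, NOW())'
);
$stmt->execute(array($userId, $activityType, $sourceId, $parentId, $parentType));
$activityId = $pdo->lastInsertId();

// 2. Push the new activity ID onto each friend's feed list and trim it.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$pipe = $redis->multi(Redis::PIPELINE); // single round trip for all friends
foreach ($friendIds as $friendId) {
    $pipe->lPush("feed:$friendId", $activityId);
    $pipe->lTrim("feed:$friendId", 0, 99); // keep only the latest 100 items
}
$pipe->exec();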
I hope this helps you too.

Show all users currently signed in?

I am assuming I cannot do this using sessions but rather the DATABASE. So the user would sign in, their TIMESTAMP would be set, and I would display that from the database. The record would then be deleted when the user logs out or their session is terminated. How would the code look for this?
The better question is, is my logic correct? Would this work? Does this make sense?
By default application servers store session data in temporary files on the server.
By storing session data in a database table you are able to create an interface that will show information about the users that are logged in. Apart from that, using this (database) approach is a serious advantage if you need to scale your application by adding more than one server.
One of the most popular ways to implement such a functionality is to create a session table containing your users' session data. This may look like:
create table session (
    id int primary key,
    data varchar(240),
    timestamp datetime
);
The data column stores all the session data in serialized form, which is deserialized each time the user's data is requested.
Serialization and deserialization may have built-in support depending on the platform you are using. For example, if you are using PHP, the functions session_encode and session_decode may be useful.
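As a minimal sketch of wiring that up, here is a database-backed session handler using PHP's SessionHandlerInterface and PDO (it treats id as the PHP session ID, so a VARCHAR column rather than a numeric one; the connection details are placeholders):
<?php
class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $sessionName) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM session WHERE id = ?');
        $stmt->execute(array($id));
        $data = $stmt->fetchColumn();
        return $data === false ? '' : $data;
    }

    public function write($id, $data)
    {
        // Upsert the serialized session payload and refresh the timestamp.
        $stmt = $this->pdo->prepare(
            'INSERT INTO session (id, data, timestamp) VALUES (?, ?, NOW())
             ON DUPLICATE KEY UPDATE data = VALUES(data), timestamp = NOW()'
        );
        return $stmt->execute(array($id, $data));
    }

    public function destroy($id)
    {
        $stmt = $this->pdo->prepare('DELETE FROM session WHERE id = ?');
        return $stmt->execute(array($id));
    }

    public function gc($maxlifetime)
    {
        // Remove sessions idle for longer than $maxlifetime seconds.
        $cutoff = date('Y-m-d H:i:s', time() - $maxlifetime);
        $stmt = $this->pdo->prepare('DELETE FROM session WHERE timestamp < ?');
        return $stmt->execute(array($cutoff));
    }
}

$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');
session_set_save_handler(new DbSessionHandler($pdo), true);
session_start();
With this in place, a "who's online" page is just a query against the session table's timestamp column.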
You can't reliably detect when a user logs out in PHP, and the JavaScript workarounds are far from a stable solution.
A couple of things you need to do: Create a column in your user table called last_activity and update their last_activity to the current time whenever a user loads a page.
For a list of who's online, query the db for users with last_activity values more recent than 10 or 20 or whatever minutes ago.
To update the last_activity column use:
UPDATE users SET last_activity=CURRENT_TIMESTAMP() WHERE id=2
For a list of users online:
SELECT * FROM users WHERE last_activity >= (NOW() - INTERVAL 20 MINUTE)
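In PHP, assuming PDO, those two pieces might look like this (the username column and the $_SESSION['user_id'] lookup are assumptions; the 20-minute window matches the query above):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');

// On every page load, refresh the logged-in user's last_activity.
$stmt = $pdo->prepare('UPDATE users SET last_activity = CURRENT_TIMESTAMP() WHERE id = ?');
$stmt->execute(array($_SESSION['user_id']));

// Wherever you need the "who's online" list: users active in the last 20 minutes.
$online = $pdo->query(
    'SELECT id, username FROM users
     WHERE last_activity >= (NOW() - INTERVAL 20 MINUTE)'
)->fetchAll(PDO::FETCH_ASSOC);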

How are applications like Twitter implemented?

Suppose A follows 100 people; then we would need 100 join statements, which I think is horrible for the database.
Or are there other ways?
Why would you need 100 joins?
You would have a simple table, "Followers", with your ID and the other person's ID in it...
Then you retrieve the "Tweets" by joining something like this:
SELECT TOP 100
    tweet.*
FROM
    tweet
    INNER JOIN followers ON followers.FollowerID = tweet.AuthorID
WHERE
    followers.MasterID = yourID
(In MySQL you would write LIMIT 100 instead of TOP 100.)
Now you just need decent caching, and make sure you use a non-locking query, and you have all the information... (Well, maybe add some user data into the mix.)
Edit:
tweet
ID - tweetid
AuthorID - ID of the poster
Followers
MasterID - (Basically your ID)
FollowerID - (ID of the person following you)
The Followers table has a composite primary key based on MasterID and FollowerID.
It should have two indexes: one on (MasterID, FollowerID) and one on (FollowerID, MasterID).
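For reference, a sketch of those two tables with the composite key and both indexes (the column types, the body column, and running the DDL through PDO are assumptions):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'user', 'pass');

$pdo->exec('
    CREATE TABLE tweet (
        ID       BIGINT NOT NULL AUTO_INCREMENT,
        AuthorID BIGINT NOT NULL,
        body     VARCHAR(140) NOT NULL,
        PRIMARY KEY (ID),
        KEY idx_author (AuthorID)
    ) ENGINE=InnoDB
');

// The composite primary key doubles as the (MasterID, FollowerID) index;
// the extra KEY covers lookups in the reverse order.
$pdo->exec('
    CREATE TABLE Followers (
        MasterID   BIGINT NOT NULL,
        FollowerID BIGINT NOT NULL,
        PRIMARY KEY (MasterID, FollowerID),
        KEY idx_follower_master (FollowerID, MasterID)
    ) ENGINE=InnoDB
');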
The real trick is to minimize your database usage (e.g., cache, cache, cache) and to understand usage patterns. In the specific case of Twitter, they use a bunch of different techniques, from queuing to an insane amount of in-memory caching to some really clever data-flow optimizations. Give Scaling Twitter: Making Twitter 10000 percent faster and the other associated articles a read. As for your question about how to implement "following": denormalize the data (precalculate and maintain join tables instead of performing joins on the fly), or don't use a database at all. <-- Make sure to read this!
