I am using the Twitter API to get a tweet as soon as someone posts it. It currently takes around 3-4 seconds to update. How can I make that faster, ideally less than a second?
What are the recommendations regarding:
1: Which server to use (would it matter in this case)?
2: Which Twitter API tier to use (Standard, Premium, Enterprise)?
I would also appreciate any other recommendations.
As mentioned in the Twitter API documentation, requests are rate-limited per time window, so you cannot simply refresh faster or you will run out of calls for the search you are trying to refresh. See https://developer.twitter.com/en/docs/basics/rate-limiting
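If you do keep polling, it helps to read the rate-limit headers Twitter returns so you back off before running out of calls. Here is a minimal PHP sketch, assuming app-only bearer-token auth against the standard v1.1 search endpoint (the token and query are placeholders, not working values):

<?php
// Poll the standard search endpoint and respect the rate-limit headers.
$bearerToken = 'YOUR_BEARER_TOKEN';   // placeholder
$url = 'https://api.twitter.com/1.1/search/tweets.json?q=from%3Asomeuser&count=1';

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,   // we want the response headers too
    CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $bearerToken],
]);
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

$headers = substr($response, 0, $headerSize);
// Twitter reports the remaining calls and the window reset time in these headers
preg_match('/x-rate-limit-remaining:\s*(\d+)/i', $headers, $remaining);
preg_match('/x-rate-limit-reset:\s*(\d+)/i', $headers, $reset);

if (isset($remaining[1], $reset[1]) && (int) $remaining[1] === 0) {
    // Out of calls for this window: sleep until it resets instead of hammering the API
    sleep(max(0, (int) $reset[1] - time()));
}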
I am trying to display a dashboard to every user who accesses my website, with an analysis of their previous data. Can I do it without the help of Google Analytics?
Well, if you do not want to use Google Analytics, then yes, you can do it.
A few steps you need to take:
Create a database table in which you save the page URL and the visitor's IP address (see the sketch below this list)
Also try to record how long, in seconds or minutes, the visitor stays on the page
You can then generate reports from that data and show them on the dashboard
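Here is a rough PHP/MySQL sketch of the logging step above. The table and column names are my own invention, so adapt them to your schema:

<?php
// Suggested table (run once):
//   CREATE TABLE page_visits (
//       id INT AUTO_INCREMENT PRIMARY KEY,
//       page_url   VARCHAR(255),
//       visitor_ip VARCHAR(45),
//       visited_at DATETIME
//   );
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password');

$stmt = $pdo->prepare(
    'INSERT INTO page_visits (page_url, visitor_ip, visited_at) VALUES (?, ?, NOW())'
);
$stmt->execute([
    $_SERVER['REQUEST_URI'],   // page being viewed
    $_SERVER['REMOTE_ADDR'],   // visitor's IP address
]);

Time on page is usually measured client-side (for example with a small script that pings a similar endpoint when the visitor leaves the page), and the dashboard then just aggregates this table.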
By the way, if there is a free solution available, why spend time building it yourself?
Paperclip is free and ethical: Website. Users can see all the data collected about them. I'm in their beta program, where you can suggest features to them and they'll add them.
In short, I have the task of creating a stock portfolio website, something similar to Google Finance's portfolio or Seeking Alpha's portfolio feature. Before I start, I thought I'd ask what would be the best way to go about doing it. For now, I was just thinking about using PHP and connecting to Google or Yahoo Finance to get the data from them, but surely there is a better way to go about it?
Thanks.
I've built many different stock applications, and here's what I've found.
Yahoo and Google have fickle APIs; some of them work for a while and then break because they are poorly maintained or not official. I have scraped Yahoo's financial data before for specific information using PHP DOM.
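For what it's worth, the scraping approach looked roughly like the sketch below. The URL and the XPath expression are placeholders (finance pages change their markup often, which is exactly why this approach is fragile):

<?php
// Fetch a quote page and pull one value out of it with PHP DOM.
$html = file_get_contents('https://finance.example.com/quote/AAPL'); // hypothetical page

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely well-formed
$doc->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($doc);
// Hypothetical selector: point it at whatever element actually holds the price
$nodes = $xpath->query("//span[@class='last-price']");

if ($nodes->length > 0) {
    echo 'Last price: ' . trim($nodes->item(0)->textContent) . PHP_EOL;
}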
If you want real-time ticker data like price, volume, etc., consider E*TRADE's free API: https://developer.etrade.com/ctnt/dev-portal/getArticleByCategory?category=Documentation It is pretty reliable; I query their data every 4 seconds with no issues in PHP.
Another good resource is https://www.quandl.com/search?query= ; some of it is free and some of it is premium paid content.
I am developing a cs-cart based website and my client wants to integrate Fishbowl into his website.
I have searched for an add-on for it, but there isn't one that works for me.
I have experience developing a simple add-on, and now I am going to build an add-on to integrate Fishbowl.
Please guide me if you have solid experience integrating Fishbowl with cs-cart, or with another warehouse solution for cs-cart.
I don't understand why Fishbowl doesn't provide or develop an add-on for this.
Please help!
Thanks for reading!
Well, there are a few things to watch out for, and therefore a number of options.
Firstly, what does your client really want? Does he want the stock to be updated on the fly, or once a night? If the client wants the stock to be updated on the fly, you will need to add some sort of post-update hook so that updating a product also executes your code. If your client only wants it once a night, you could get away with just sending every product in the database (with your selected columns).
Secondly, try using object-oriented code. If you are really into PHP you will know what I mean by this. Otherwise don't bother (or try learning object-oriented PHP), though I must note that this can make your add-on significantly faster.
Thirdly, use the provided db_query function for database access; a rough sketch is below.
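As an illustration of the nightly-export case, here is a sketch built on cs-cart's database helpers (the ?: table-prefix and ?s placeholders follow cs-cart's database layer; the Fishbowl call itself is just a placeholder for whatever transport you end up using):

<?php
// Export the columns the warehouse needs and push them to Fishbowl.
function fn_my_fishbowl_export_products()
{
    // Only active products, and only the columns we actually need
    $products = db_get_array(
        'SELECT product_id, product_code, amount FROM ?:products WHERE status = ?s',
        'A'
    );

    foreach ($products as $product) {
        // Placeholder: send the row to Fishbowl (XML/CSV/API call, whatever you build)
        // my_fishbowl_send($product);
    }
}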
If you have any more questions regarding this, feel free to ask.
Looking at a lot of web applications (websites/services/whatever) that have a 'streaming' component (typically a 'Social' app): think Facebook's 'Wall', Twitter's 'Feed', LinkedIn's 'News Feed'.
They have a pretty similar characteristic: a notice of new items is added to the page (automatically, presumably via a background Ajax call), but the new HTML representing the newest feed items isn't loaded into the page until the user clicks this update link.
I guess I'm curious whether this design decision is for any of the following reasons, and if so, could anyone who has worked on one of these types of apps explain the reasoning they found for doing it this way:
User experience: updates for a large number of 'Facebook Friends', 'Pages', or 'Tweets' would move too quickly for anyone to absorb and read with any real intent, so the page isn't refreshed automatically.
Client-side performance: fetching a simple 'count' of updates requires less bandwidth (less load time) and less JS running to update the page for anyone who has the site open, and thus a lighter-weight feel on the client side.
Server-side performance: fewer requests coming into the server to gather more information about recent updates (less outgoing bandwidth, more free cycles for grabbing information for those who do request it by clicking the link). While I'm sure the owners of these websites aren't 'short on resources', if everyone who had Twitter or Facebook open in the browser got a full update fetched from the server every time one was created, I'm sure it would be a much more significant drag on resources.
They are actually trying to save resources (it takes a cup of coffee to perform a Google search (haha)), and sending a few bytes of data to the page representing the count of new updates is a far lighter load for applications that are being used simultaneously in hundreds of thousands of browser windows (not to mention API requests). A toy sketch of such a count endpoint is shown below.
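To make the "few bytes" point concrete, a toy PHP count endpoint might look like this (the table and parameter names are made up):

<?php
// Return only the number of new feed items since the client's last seen id.
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=feed', 'user', 'password');

$since = isset($_GET['since_id']) ? (int) $_GET['since_id'] : 0;

$stmt = $pdo->prepare('SELECT COUNT(*) FROM feed_items WHERE id > ?');
$stmt->execute([$since]);

// The client polls this, shows "N new posts", and only fetches the full HTML
// for those posts when the user clicks the update link.
echo json_encode(['new_items' => (int) $stmt->fetchColumn()]);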
I have a few more questions depending on the answer to this first question as well...so I'll probably add those here or ask another question!!
Thanks
P.S. This question got trolled off of the 'Web Applications' site -- so I brought my questions here, where they're not too 'broad' or 'off-topic' (-8
Until the recent UI changes to Facebook, they did auto-load new content. It was extremely frustrating from a user perspective, as you'd be reading through the list of your friends' posts and all of a sudden everything would shift and you'd have no idea where the post you were just reading went.
I'd imagine this is the main reason.
I love BlogEngine. But from what I can see, it does not collect the standard information about visitors that I would like to see (referrer, browser type and so on).
When I log in as Admin I have a menu item named "Referrer". I can choose a weekday and then I'll be presented with one or two rows such as
"google.com 4 hits", "itmaskinen.se 6 hits" and so on. But that's not what I want to see; I want to see where my visitors come from: country, IP if possible, how many visitors, and so on.
If any of you are familiar with BlogEngine.NET and can point me in the right direction as to where I would put my own logging code, or if you know of any visitor-statistics extension that can do it for me, I would be really happy to know. I prefer an extension, because if I make changes to BlogEngine myself, they may break later updates I install.
BlogEngine.NET is blog software made in .NET, found here: http://www.dotnetblogengine.net/
And yes, I prefer to ask this question here rather than in the BlogEngine.NET forum; you know why. ;)
This isn't an extension, but it's what I use to collect all my BlogEngine.NET data, and it should be upgrade-safe.
When you log into the BlogEngine.NET admin screens you can go to "Settings > Custom Code > Tracking Script"; here you can put your http://www.google.com/analytics/ logging script. Google Analytics provides all the referrer, browser type, etc. information you were wanting. And what's nice is that you can then create additional accounts for other sites if you choose.
I use both Google Analytics and StatCounter to track visitor stats. I find that each one provides useful information that the other doesn't. And they're both free to a certain extent.
I place their JavaScript code in the site.master file of my custom BE.NET skin.
For Google Analytics I go a step further and pass the username of authenticated users as a custom variable. That way I can match user names up with the stats. To do this you can use the _setVar JavaScript method on the GA pageTracker, like so:
<script type="text/javascript">
// Assumes ga.js is already loaded; create a tracker for your GA account ID
var pageTracker = _gat._getTracker("UA-129049-25");
// Inline ASP.NET expression: the current member's username, or "" if anonymous
var userDefinedValue = '<%= System.Web.Security.Membership.GetUser() != null ? System.Web.Security.Membership.GetUser().UserName : "" %>';
pageTracker._setVar(userDefinedValue); // store it as a custom visitor variable
pageTracker._trackPageview();          // record the page view as usual
</script>
Has anyone noticed that we miss all the hits coming from RSS readers? Syndication.axd does not run the analytics JavaScript, so we miss the vast majority of viewers in the statistics, and we happily analyze what is just not that important - the ad-hoc visitors.
For the vast majority of cases, Google Analytics does just fine. It all depends on how much data you want. For example, if you want to keep note of IP addresses and resolve them to get domain names, and also highlight all visits to your blog from, say, your coworkers at the company where you work, you'd have to write some custom code yourself. However, it's all fairly primitive - these sorts of things are easily achievable using ASP.NET.
I set up statistics gathering on the IIS website of my BlogEngine instance and then analyze the logs using WebLog Expert - http://www.weblogexpert.com.
It is more reliable than Google Analytics, since I really see ALL requests that come to my IIS, whether the request is for an .axd handler or for some static content. At one point I found out that Google was misleading me about the number of visits; since then I trust my IIS statistics much more than Google's.
There is a widget which can be used to display visits and online-user statistics.
You can find it at the following links:
http://www.nuget.org/packages/Statistics/
http://www.itnerd.ir/post/2013/07/25/Visits-and-Online-Users-Statistics-widget-for-BlogEngine-2
but for the instructions, go to the second link.