I have 3 dc.js charts on my page. I would like to make the data they consume dynamic, since the number of concurrent users might range from 5 to 15. If I use .CSV or .JSON files, a physical file has to be created on the server, and concurrency would be an issue. Is there a way to pass in-memory data to d3.js for rendering the charts? Any example/help is appreciated.
Thanks
BV
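One approach, sketched below, is to expose the data as JSON from a web endpoint instead of a static file, fetch it with d3.json, and hand the in-memory array to crossfilter, which is what dc.js charts actually read from. No physical file is ever written, and 5 to 15 concurrent users are just 5 to 15 ordinary HTTP requests. The endpoint URL and the date/value field names are made-up placeholders, and the snippet assumes d3 v3 with dc.js 2.x, as in typical dc.js examples:
// Minimal sketch: the endpoint and field names are hypothetical.
d3.json("/api/chart-data", function(rows) {
  var ndx = crossfilter(rows);                      // crossfilter accepts any in-memory array
  var dateDim = ndx.dimension(function(d) { return d.date; });
  var totalGroup = dateDim.group().reduceSum(function(d) { return d.value; });

  dc.barChart("#chart1")
    .dimension(dateDim)
    .group(totalGroup)
    .x(d3.scale.ordinal())                          // ordinal x-axis (d3 v3 API)
    .xUnits(dc.units.ordinal);

  dc.renderAll();
});
To refresh the charts later, re-fetch the JSON, rebuild the crossfilter (or use its add/remove methods) and call dc.redrawAll().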
Does jqGrid create all records in the DOM, or does it store them locally in a JavaScript object and show records on pagination events?
We tried jqGrid in our project and experienced slowness rendering the table for a large number of records (10,000) with a page size of 50.
jqGrid stores the local data in an array and displays in the DOM only the portion set by the rowNum parameter.
Personally, I think that loading 10k records locally is too much for any grid component. It will consume a lot of memory and slow down every operation in the application.
The best approach is to store the data on the server and request only the portion you want. Look here at how we deal with 1 million records.
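For comparison, here is a rough sketch of server-side paging with jqGrid; the URL and columns are placeholders. The grid asks the server for one page of 50 rows at a time instead of holding 10,000 records in the browser:
$("#grid").jqGrid({
  url: "/api/records",        // server returns only the requested page
  datatype: "json",
  mtype: "GET",
  colModel: [
    { name: "id", width: 60, key: true },
    { name: "name", width: 200 },
    { name: "amount", width: 100, align: "right" }
  ],
  rowNum: 50,                 // page size: only these rows end up in the DOM
  rowList: [25, 50, 100],
  pager: "#pager",
  viewrecords: true,
  sortname: "id"
});
The server is expected to answer in jqGrid's usual JSON shape (page, total, records, rows) containing just the requested slice of the data.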
I have a backend API with a group of methods that retrieve data series for charts in web/mobile apps from a database via different SQL queries.
Some data series are used as-is; others are calculated from the retrieved series.
At the moment I have a Controller that calls a Repository, and if I need to calculate data, I pass it to a math function and get another series back.
I am stuck on how to organize my code cleanly. Which design patterns should I use? How should I separate the code?
Please help :)
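One common way to organize this, sketched below under the assumption of a JavaScript/Node.js backend (all names here are made up): keep the SQL in a repository, put the math in plain functions, and let a small service decide which series are returned as-is and which are derived. The controller then only parses the request and calls the service.
// repository: only knows how to fetch raw series from the database
// (assumes a db client whose query() returns a Promise of rows)
class SeriesRepository {
  constructor(db) { this.db = db; }
  getRawSeries(seriesId, from, to) {
    return this.db.query(
      "SELECT ts, value FROM series_points WHERE series_id = ? AND ts BETWEEN ? AND ?",
      [seriesId, from, to]
    );
  }
}

// pure calculation, independent of controllers and the database
function movingAverage(points, window) {
  return points.map(function(p, i) {
    var slice = points.slice(Math.max(0, i - window + 1), i + 1);
    var sum = slice.reduce(function(acc, q) { return acc + q.value; }, 0);
    return { ts: p.ts, value: sum / slice.length };
  });
}

// service: knows which series are used as-is and which are calculated
class SeriesService {
  constructor(repository) { this.repository = repository; }
  getSeries(seriesId, from, to) {
    return this.repository.getRawSeries(seriesId, from, to);
  }
  async getMovingAverage(seriesId, from, to, window) {
    var raw = await this.repository.getRawSeries(seriesId, from, to);
    return movingAverage(raw, window);
  }
}
The calculations stay pure and easy to unit test, and adding a new derived series means adding a service method rather than touching the controller or the repository.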
I am validating the data from Eloqua Insights against the data I pulled using the Eloqua API, and there are some differences in the metrics. So, are there any known issues when pulling data via the API versus exporting a .csv file from Eloqua Insights?
Absolutely. Besides undocumented data discrepancies that might exist, Insights can aggregate, calculate, and expose various hidden relations between data in Eloqua that are not accessible through an API export definition.
Think of the API as the raw data, with the ability to pick and choose fields and apply a general filter to them, whereas Insights/OBIEE can calculate on top of that data, build relationships across tables of raw data, and present the result in a consumable manner to the end user. A user has little use for a 1 GB CSV of individual unsubscribes for the past year, but present that on a dashboard with several graphs, running totals, averages, and time series, and it suddenly becomes actionable.
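For reference, here is a rough sketch of the shape of a Bulk API export definition; the exact endpoint, field markup, and filter depend on your instance and the entity being exported, so treat everything here as a placeholder. It returns raw rows for the listed fields that match the filter, and any rollups, running totals, or cross-table calculations you see in Insights have to be recomputed on your side.
// Rough sketch of an Eloqua Bulk API export definition (placeholders throughout).
var exportDefinition = {
  name: "Unsubscribes - last 12 months",
  fields: {
    activityId: "{{Activity.Id}}",
    activityType: "{{Activity.Type}}",
    activityDate: "{{Activity.CreatedAt}}"
  },
  filter: "'{{Activity.Type}}' = 'Unsubscribe'"
};
// POSTed to something like /api/bulk/2.0/activities/exports, then synced and
// downloaded as raw rows; no Insights-style aggregation is applied.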
I am currently using dataProvider to pass in JSON data loaded externally. This is a large dataset, with OHLCV data per minute for one month, and I am experiencing slowness while running it on localhost: it takes 8 seconds to render the full data.
Recently I saw the dataLoader feature of amStockChart, but I am having issues replacing my dataProvider with dataLoader because I have customized too many things. Is it worth the effort of moving from dataProvider to dataLoader?
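For what it's worth, below is a sketch of how dataLoader typically slots into an AmStockChart dataSet in place of dataProvider; the file name and field names are placeholders, and the dataloader plugin script has to be included on the page. dataLoader is essentially a convenience that fetches the file and fills dataProvider for you, so on its own it mostly changes how the data is loaded rather than how fast a month of per-minute OHLCV renders.
// Sketch only: requires amcharts/plugins/dataloader/dataloader.min.js on the page.
var chart = AmCharts.makeChart("chartdiv", {
  "type": "stock",
  "dataSets": [{
    // instead of "dataProvider": hugeInMemoryArray,
    "dataLoader": { "url": "data/ohlcv-minute.json", "format": "json" },
    "fieldMappings": [
      { "fromField": "open",   "toField": "open" },
      { "fromField": "high",   "toField": "high" },
      { "fromField": "low",    "toField": "low" },
      { "fromField": "close",  "toField": "close" },
      { "fromField": "volume", "toField": "volume" }
    ],
    "categoryField": "date"
  }],
  "panels": [{
    "stockGraphs": [{
      "id": "g1",
      "type": "candlestick",
      "openField": "open",
      "highField": "high",
      "lowField": "low",
      "closeField": "close"
    }]
  }]
});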
I have an 11 MB data set, and it's slow to load every time the document loads.
d3.csv("https://s3.amazonaws.com/vidaio/QHP_Individual_Medical_Landscape.csv", function(data) {
// drawing code...
});
I know that crossfilter can be used to slice and dice the data once it's loaded in the browser, but before that, the dataset is big. I only use an aggregation of the data, so it seems like I should pre-process it on the server before sending it to the client, maybe using crossfilter on the server side. Any suggestions on how to handle/process a large dataset for d3?
Is your data dynamic? If it's not, then you can certainly aggregate it and store the result on your server. The aggregation would only be required once. Even if the data is dynamic, if the changes are infrequent then you could benefit from aggregating only when the data changes and caching that result.

If you have highly dynamic data, such that you'll have to aggregate it fresh with every page load, then doing it on the server vs. the client could depend on how many simultaneous users you expect. A lot of simultaneous users might bring your server to its knees. OTOH, if you have a small number of users, then your server probably (possibly?) has more horsepower than your users' browsers, in which case it will be able to perform the aggregation faster than the browser.

Also keep in mind the bandwidth cost of sending 11 MB to your users. Might not be a big deal ... unless they're loading the page a lot and doing it on mobile devices.
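As a concrete illustration of the aggregate-on-the-server-and-cache option, here is a minimal Node/Express sketch; the endpoint name and the aggregation by a StateCode column are invented for illustration:
var express = require("express");
var fs = require("fs");
var dsv = require("d3-dsv");                 // CSV parsing on the server

var cachedAggregate = null;

function aggregate() {
  var rows = dsv.csvParse(fs.readFileSync("QHP_Individual_Medical_Landscape.csv", "utf8"));
  var byState = {};
  rows.forEach(function(r) {
    // hypothetical column name; use whichever fields you actually chart
    byState[r.StateCode] = (byState[r.StateCode] || 0) + 1;
  });
  return Object.keys(byState).map(function(k) {
    return { state: k, plans: byState[k] };
  });
}

var app = express();
app.get("/api/plans-by-state", function(req, res) {
  if (!cachedAggregate) cachedAggregate = aggregate();   // recompute only when the data changes
  res.json(cachedAggregate);
});
app.listen(3000);
The browser then downloads a small pre-aggregated JSON payload instead of the 11 MB CSV, and the aggregation runs once per data change rather than once per page load.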
Try simplifying the data (also suggested in the comment from Stephen Thomas)
Try pre-parsing the data into JSON. This will likely result in a larger file (more network time) but less parsing overhead (lower client CPU). If your problem is the parsing, this could save time.
Break the data up by some kind of sharding key, such as year. Limit the initial load to that shard and then load the other data files on demand as needed.
Break up the data by time but show everything in the UI: load the charts with the default view (such as the most recent timeframe), then asynchronously add the additional files as they arrive (or once they all arrive); see the sketch after this list.
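Here is the sketch referenced above, combining the sharding and asynchronous-loading suggestions; the file names and the drawCharts/redrawCharts helpers are placeholders, and the callback style matches the d3 version used in the question:
d3.csv("data/2016.csv", function(recent) {
  var ndx = crossfilter(recent);
  drawCharts(ndx);                      // hypothetical: your existing drawing code

  // pull the older shards in the background and merge them as they arrive
  ["data/2015.csv", "data/2014.csv"].forEach(function(url) {
    d3.csv(url, function(older) {
      ndx.add(older);                   // crossfilter accepts incremental adds
      redrawCharts(ndx);                // hypothetical: re-render with the extra shard
    });
  });
});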
How about server-side (gzip) compression? The file should shrink to a fraction of its size on the wire, and the browser will decompress it transparently in the background.
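If the page happens to be served from Node, for example, gzip can be enabled with the compression middleware as sketched below; most other web servers (nginx, Apache, IIS) have an equivalent configuration switch:
var express = require("express");
var compression = require("compression");

var app = express();
app.use(compression());                 // gzip/deflate responses when the client accepts it
app.use(express.static("public"));      // serves the CSV/JSON along with the rest of the app
app.listen(3000);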