Currently, we are using the ActiveAndroid ORM in our project. We have thousands of records in a single SQLite table. When we run multiple queries (with conditions) and show the results in our views, the app hangs, skips frames, the screen turns black, and the app stops responding. The following warning is shown:
CursorWindow: Window is full: requested allocation 696 bytes, free space 346 bytes, window size 2097152 bytes
We want to run the queries asynchronously in the background and show the results on the UI thread.
Does ActiveAndroid provide asynchronous queries? If not, can we use another ORM such as Room or greenDAO alongside the ActiveAndroid library, or any other library, to run queries asynchronously?
Consider two applications.
Application "A" receives data from the internet, such as player positions and other details.
Application "B" also needs the player positions, but it is blocked from accessing the internet. So the only way is to sync the player positions through SQLite (they update every few milliseconds).
I can't use sockets or any other plugins either. So do you think SQLite can handle reads and writes at millisecond intervals without heavy CPU usage?
If you wish to share the data in anything like real time, then I would use something like inter-process pipes or memory-mapped files for this.
Writing data to and reading it back from any form of hardware storage will add quite a delay to the data passing, which will only become worse as the hardware data cache is filled.
Hardware storage is fine for historical data.
Both are supported by Win32 and should be accessible even if you use .NET to produce a UWP application.
See here
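For what it's worth, here is a minimal reader-side sketch of the memory-mapped-file option in C#. The map name, capacity and field layout are invented for illustration; application "A" would write the same layout, and a real implementation needs proper synchronisation rather than the simple sequence-number check shown here.

using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class PositionReader
{
    static void Main()
    {
        // "PlayerPositions" is an illustrative name both processes must agree on;
        // CreateOrOpen lets whichever process starts first create the mapping.
        using (var mmf = MemoryMappedFile.CreateOrOpen("PlayerPositions", 4096))
        using (var accessor = mmf.CreateViewAccessor())
        {
            int lastSeq = -1;
            while (true)
            {
                // Assumed layout: an int sequence counter, then float X and Y.
                int seq = accessor.ReadInt32(0);
                if (seq != lastSeq)
                {
                    float x = accessor.ReadSingle(4);
                    float y = accessor.ReadSingle(8);
                    Console.WriteLine($"#{seq}: ({x}, {y})");
                    lastSeq = seq;
                }
                Thread.Sleep(1); // poll at roughly millisecond granularity
            }
        }
    }
}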
We have a Windows application that interfaces with a sensor array. It reads out the array of 37 elements 10 times per second and appends the set of 35 32-bit integers and 2 16-bit integers to a CSV file in the Documents folder on the C: drive.
We have neither the application source, nor access to the developer (who left the company a couple of years ago), nor specifications for the array-to-system protocol. We now want to perform real-time analysis of the data, but all of the communication between the code and the array is effectively a black box.
I'm not a Windows system programmer, but a million years (in IT time) ago I was a designer for IBM OS/360, so I have a basic understanding of file system structures, and it seems to me that it should be possible to somehow intercept file "open" and "write" calls to the OS and perform "near" real-time analysis. Any good ideas how to do it? Preferably explained in terms that an 80-year-old who only dabbles in Python and C/C++ would comprehend? I've thought of a disassembler, or executing in a debugging environment that might be able to trap the I/O calls and pass control to an analysis routine, but I have no idea what tools might be available these days in the Windows environment.
By the way, one other thing occurred to me - the app also outputs a plot of the data from each sensor - not sure if that's something we could get at.
There's no standard/supported way to hook into file I/O operations.
For this specific problem, the ideal solution will likely be to use ReadDirectoryChangesW to watch the file for changes and read them out between updates; 100 ms should be more than enough time to pull out the data, unless it's a network drive or similar. This obviously won't work if the application prevents you from reading the file between writes, though.
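If you'd rather not call ReadDirectoryChangesW directly, .NET's FileSystemWatcher wraps it. A rough sketch, shown in C# for brevity; the directory, file name and the "analyse" step are placeholders:

using System;
using System.IO;

class CsvTail
{
    static void Main()
    {
        const string dir = @"C:\Users\lab\Documents"; // placeholder path
        const string file = "sensors.csv";            // placeholder name
        long offset = new FileInfo(Path.Combine(dir, file)).Length;

        using (var watcher = new FileSystemWatcher(dir, file))
        {
            watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
            watcher.Changed += (sender, e) =>
            {
                // Share ReadWrite so we never block the sensor application.
                using (var fs = new FileStream(e.FullPath, FileMode.Open,
                                               FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(fs))
                {
                    fs.Seek(offset, SeekOrigin.Begin);
                    string line;
                    while ((line = reader.ReadLine()) != null)
                        Console.WriteLine(line); // analyse the new row here
                    offset = fs.Position;        // remember where we stopped
                }
            };
            watcher.EnableRaisingEvents = true;
            Console.ReadLine(); // keep watching until Enter is pressed
        }
    }
}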
In an absolute worst-case scenario, you can hook the application's writes by injecting a DLL into the process that, on load, overwrites the first instructions of WriteFile (or whatever it's using to write) in kernel32.dll with a hook. You can read more about this process here.
I have created a Custom Image Recognition collection on IBM Cloud and am using it in my Django website to do the processing. However, I noticed that the response time ranges from 6 to 14 seconds.
I want to reduce this turnaround time. I am already zipping the image file that I send. While going through the API reference document here on IBM Cloud, I noticed that there is a method called "get_model_file" which downloads the collection file to local storage.
But there is no documentation on how this can be used. Has anyone successfully implemented this? Or am I missing something here?
How many images at a time are you sending in the zip file to the /analyze endpoint? If you are just sending one image at a time, you should not bother zipping it. Also, if you can, you should parallelize your code so that you make one request per image, rather than sending, say, 6 images in a single zip file. This will reduce the latency.
By the way, when using the v4 API, you should resize your images to no more than 300 pixels in either width or height. In fact, you can "squash" the aspect ratio to square and it will not affect the outcome. The service will do this resizing internally anyhow, but if you do it on the client side, you save network transmission and decoding time.
With a single image at a time, if your resolution is under 300x300 pixels, you should have latency under 1.5 seconds on a typical call, including your network transmission time.
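To illustrate the "one request per image, in parallel" advice, here is a rough sketch using C#'s HttpClient (the question's stack is Django, but the same shape applies in Python with requests plus a thread pool). The endpoint path, version date, form fields and collection ID are my reading of the v4 API reference, not verified values, so treat them all as assumptions to check:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class AnalyzeInParallel
{
    // Base URL is a placeholder for your own service instance.
    const string Url =
        "https://api.us-south.visual-recognition.watson.cloud.ibm.com/v4/analyze?version=2019-02-11";

    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Authentication (e.g. an IAM token header) goes here - placeholder.
            var files = new[] { "img1.jpg", "img2.jpg", "img3.jpg" };

            // One request per image, all in flight at the same time.
            var results = await Task.WhenAll(files.Select(f => AnalyzeOne(http, f)));
            foreach (var json in results)
                Console.WriteLine(json);
        }
    }

    static async Task<string> AnalyzeOne(HttpClient http, string path)
    {
        var form = new MultipartFormDataContent
        {
            { new StringContent("my-collection-id"), "collection_ids" }, // placeholder
            { new StringContent("objects"), "features" },
            { new ByteArrayContent(System.IO.File.ReadAllBytes(path)),
              "images_file", System.IO.Path.GetFileName(path) }
        };
        var response = await http.PostAsync(Url, form);
        return await response.Content.ReadAsStringAsync();
    }
}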
As the documentation states:
Currently, the model format is specific to Android apps.
So unless you are creating an Android app, this is not going to work for you.
You probably have two areas of latency. The first is from the browser to your Django app; the second is from your Django app to the Visual Recognition service. I am not sure where you have hosted the Django app, but if you locate it in the same region as the service (the same data centre would be even better), you might be able to reduce part of the latency.
I have a website with a simple page. On click of a button, we execute an MDX query which returns around 200,000 rows with 20 columns. I use the following code to execute the MDX query using the Microsoft.AnalysisServices.AdomdClient library (version 10.0.0.0, runtime version v2.0.50727):
var connection = new AdomdConnection(connectionString);
var command = new AdomdCommand(query, connection)
{
    CommandTimeout = 900 // seconds
};
connection.ShowHiddenObjects = true;
connection.Open();
// ExecuteCellSet materialises the entire 200,000-row result in memory at once.
var cellSet = command.ExecuteCellSet();
connection.Close();
While the query is executing, the memory usage of the app pool goes very high.
[Screenshots omitted: memory usage on the server before and after running the query.]
I am not sure why the memory usage goes so high and stays like that. I have used a profiler on my local box and everything looked OK.
What options do I have to figure out what is holding on to the memory?
Is there any explicit way to clear off this memory?
Does the ADOMD library always consume this much memory? Do we have any alternative options for executing MDX queries from C#?
When the memory usage goes this high, IIS stops processing other requests, and the application hosted on the same IIS server (using a different app pool) is also affected: requests take longer to execute.
I've recently started at a place where we have a similar issue.
Your options for figuring out what's holding memory are:
Download a memory profiler such as Redgate's ANTS profiler; that will allow you to see what's going on in the app pool. There's only a 2-week trial, but it will let you see what's going on initially.
Get hold of CLR Profiler; this tool can be downloaded and allows you to take snapshots of the memory, so you can tell what's in memory in the CLR.
One thing to be aware of is the Large Object Heap (LOH): by design, the CLR will not compact space in the LOH, so if objects are put there, that can lead to memory fragmentation. Objects larger than 85,000 bytes are placed there; one example is large lists of objects.
One thing I've tried doing to get around it is to create a specialised collection, a composite list, which is basically a list of lists: as long as each component list stays under 85,000 bytes, it remains in the normal heap and the object as a whole avoids the LOH. Others have mentioned this approach too; a minimal sketch follows below.
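Here is that sketch (the class and its names are hypothetical, not from any library):

using System.Collections.Generic;

// Keeps every backing array under the 85,000-byte LOH threshold by splitting
// the list into fixed-size chunks. For reference types on x64, one element
// costs 8 bytes, so 8,000 elements per chunk stays at 64,000 bytes.
public class ChunkedList<T>
{
    private const int ChunkSize = 8000;
    private readonly List<List<T>> _chunks = new List<List<T>>();

    public int Count { get; private set; }

    public void Add(T item)
    {
        if (_chunks.Count == 0 || _chunks[_chunks.Count - 1].Count == ChunkSize)
            _chunks.Add(new List<T>(ChunkSize)); // pre-sized to avoid regrowth
        _chunks[_chunks.Count - 1].Add(item);
        Count++;
    }

    public T this[int index]
    {
        get { return _chunks[index / ChunkSize][index % ChunkSize]; }
    }
}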
That said, I'm still having issues: the composite list hasn't really sorted out the problem, so there are still other factors at play that need resolving. I'm puzzled by it and am thinking that a memory dump of the app pool, analysed with WinDbg, may provide further answers.
One further point, although I'm sure it's not the source of the problem: it's recommended to use a using statement for your connection, as otherwise, if there's an exception before your Close call, the connection may never get closed.
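For example, reshaping the code from the question (a sketch of the same calls, just wrapped in using blocks):

using Microsoft.AnalysisServices.AdomdClient;

using (var connection = new AdomdConnection(connectionString))
using (var command = new AdomdCommand(query, connection) { CommandTimeout = 900 })
{
    connection.ShowHiddenObjects = true;
    connection.Open();
    var cellSet = command.ExecuteCellSet();
    // Consume cellSet here, while the connection is still open.
}

On the memory side specifically, it may also be worth testing command.ExecuteReader() in place of ExecuteCellSet(): a reader streams rows instead of materialising all 200,000 at once, though whether that helps your app pool is an assumption to verify, not something I've measured.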
I'm developing an application using Flash Builder / Flex for Adobe AIR. This application will be processing a large set of static text (100-200 MB) using a variable set of processing instructions. The target platforms are iOS, Android and desktop.
The data set can be either one large XML file or broken into a bunch of XML files of about 3 MB each. This will be decided at design time.
From your experience, would it be better to store the text in an Adobe AIR SQLite database or in a set of XML files for best performance (including speed and battery life)?
What other considerations should I take into account?
I quote one of my favourite bookmarks:
There are several different methods for persisting data in AIR applications:
Flat files
Local shared objects
EncryptedLocalStore
Object serialization
SQL database
Each of these methods has its own set of advantages and disadvantages (an explanation of which is beyond the scope of this article). One of the advantages of using a SQL database is that it helps to keep your application's memory footprint down, rather than loading a lot of data into memory from flat files. For example, if you store your application's data in a database, you can select only what you need, when you need it, then easily remove the data from memory when you're finished with it.
Source: http://www.adobe.com/devnet/air/articles/10_tips_building_on_air.html
I don't understand one thing: is EVERY file 100-200 MB in size? Or is this the total size of ALL your files?