Can I use SQLite for heavy use? - windows

Consider two applications.
Application "A" receives data from the internet, such as player position and other details.
Application "B" also needs the player position, but it is blocked from accessing the internet. So the only way I can see is to use SQLite to sync the player position (which updates every few milliseconds).
I can't use sockets or any other plugins either. So do you think SQLite can handle reads and writes at millisecond intervals without using the CPU heavily?

If you wish to share the data in anything like real time, then I would use something like inter-process pipes or file mapping (shared memory) for this.
Writing data to and reading it back from any form of hardware storage will add quite a delay to the data passing, and it will only get worse as the hardware write cache fills up.
Hardware storage is fine for historic data.
Both are supported by Win32 and should be accessible even if you use .NET to produce a UWP application.
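As a rough sketch of the file-mapping option (the mapping name and the position layout below are my own assumptions, not anything Win32 prescribes), application "A" would create a named mapping backed by the paging file and write the position into it, while application "B" opens the same name and reads it; nothing ever touches disk:

```cpp
#include <windows.h>

// Hypothetical layout of the shared data; both processes must agree on it.
struct PlayerPosition {
    LONG sequence;   // bumped by the writer after each update
    float x, y, z;
};

int main() {
    // Writer ("A"): create a named mapping backed by the system paging file.
    HANDLE mapping = CreateFileMappingW(
        INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
        0, sizeof(PlayerPosition), L"Local\\PlayerPositionShare");
    if (!mapping) return 1;

    auto* pos = static_cast<PlayerPosition*>(
        MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(PlayerPosition)));
    if (!pos) return 1;

    // Update as often as needed; the reader ("B") sees the same memory.
    pos->x = 1.0f; pos->y = 2.0f; pos->z = 0.0f;
    InterlockedIncrement(&pos->sequence);

    UnmapViewOfFile(pos);
    CloseHandle(mapping);
    return 0;
}
```

The reader side would call OpenFileMappingW(FILE_MAP_READ, FALSE, L"Local\\PlayerPositionShare") followed by MapViewOfFile in the same way, and poll the sequence counter to detect fresh updates.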

Related

Does calling `writev` repeatedly with the same memory address allow hardware caching?

I've read some performance claims about how Elixir and Erlang use hardware, and I'm trying to see if I understand their basis. Some background:
First, Erlang supports writing nested lists of immutable strings (iolists) to IO (files, sockets, etc) and uses writev and the strings' memory addresses to do so (see Evan Miller's blog post on this).
Second, the docs for an Erlang web framework called Chicago Boss say:
Erlang Respects Your RAM!
Erlang is different from other platforms because when rendering a server-side template, it doesn't create a separate copy of a web page in memory for each connected client. Instead, it constructs pointers to the same pieces of immutable memory across multiple requests.
So if two people request two different profile pages at the same time, they're actually sent the same chunks of memory for the header, footer, and other shared template snippets. The result is a server that can construct complex, uncached web pages for hundreds of users per second without breaking a sweat.
Third, a book about an Elixir (Erlang VM) web framework called Phoenix says:
Templates are precompiled. Phoenix doesn’t need to copy strings for each rendered template. At the hardware level, you’ll see caching come into play for these strings where it never did before.
From looking at the source, I know that this framework uses iolists to represent a completed response template.
Putting all this together, I think what's being implied is that if a web framework uses writev to tell the OS to send the same header and footer strings from the same memory locations, one web request after another, the hardware will be able to say "oh, I know that value, it's already in CPU cache so I don't have to look in RAM for it."
Is that right? (I have very little understanding of system calls and hardware.) If not, any ideas on how hardware caching is involved?
(Bonus if you can tell me how to see or infer what's happening.)
Yes, it's mostly the processor caches that help you. The time needed to retrieve the data is smaller because it's already in faster memory (i.e. the CPU caches).
Some pointers for understanding what the caches are and how they work:
https://www.quora.com/How-does-the-cache-memory-in-a-computer-work
http://www.hardwaresecrets.com/how-the-cache-memory-works/
http://lwn.net/Articles/252125/
To see this, measure how long a request takes (client side) during normal server operation. Then have a separate process within the same VM constantly create and write to disk a very large string (it probably has to be megabytes in size, roughly the size of the L2/L3 caches on your processor). Measure the request time again; if done correctly this should be at least an order of magnitude slower.
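For a concrete picture of the writev side of this (not Erlang itself; the buffer contents and socket handling here are illustrative assumptions), a server can hand the kernel the same header and footer addresses for every response via an iovec array:

```cpp
#include <sys/uio.h>   // writev, struct iovec
#include <unistd.h>

// Shared, immutable template fragments: every response reuses these exact
// memory locations, so repeated requests touch the same cache lines.
static const char kHeader[] = "<html><head>...</head><body>";
static const char kFooter[] = "</body></html>";

// Send one response: header and footer come from shared memory,
// only the body is per-request.
ssize_t send_page(int sock_fd, const char* body, size_t body_len) {
    struct iovec parts[3];
    parts[0].iov_base = const_cast<char*>(kHeader);
    parts[0].iov_len  = sizeof(kHeader) - 1;
    parts[1].iov_base = const_cast<char*>(body);
    parts[1].iov_len  = body_len;
    parts[2].iov_base = const_cast<char*>(kFooter);
    parts[2].iov_len  = sizeof(kFooter) - 1;

    // One system call gathers all three fragments without first copying
    // them into a single contiguous buffer.
    return writev(sock_fd, parts, 3);
}
```

Because kHeader and kFooter never change address, every request re-touches the same cache lines, which is the effect the question is asking about.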

NPAPI: data push model?

When working with NPAPI, you have control over two functions: NPP_WriteReady and NPP_Write. This is basically a data push model.
However I need to implement support for a new file format. The library I am using takes any concrete subclass of the following source model (simplified C++ code):
struct compressed_source {
  virtual int read(char *buf, int num_bytes) = 0;
};
This model is trivial to implement when dealing with a FILE* (C), a socket (BSD), and others, since they comply with a pull data model. However I do not see how to fulfil this pull model from the NPAPI push model.
As far as I understand, I cannot explicitly call NPP_Write within my concrete implementation of ::read(char *, size_t).
What is the solution here?
EDIT:
I did not want to add too much detail to avoid confusing answers. Just for reference, I want to build an OpenJPEG/NPAPI plugin. OpenJPEG is a huge library, and the underlying JPEG 2000 implementation really wants a pull data model to allow fine-grained access to massive images (e.g. a specific sub-region of a 100000 x 100000 image, thanks to low-level indexing information). In other words, I really need a pull-data-model plugin interface.
Preload the file
Well, preloading the whole file is always an option that would work, but often not a good one. From your other questions I gather that the files/downloads in question might be rather large, so avoiding unnecessary network traffic is a good idea, and preloading the whole file is therefore not really an option.
Hack the library
If you're using some open source library, you might be able to implement a push API along or instead of the current pull API directly within the library.
Or you could implement things entirely by yourself. IIRC you're trying to decode some image format, and image formats are usually reasonably easy to implement from scratch.
Implement blocking reads by blocking the thread
You could put the image decoding into a new thread, and whenever there is not enough buffered data to fulfill a read immediately, block until the data-receiving thread (the main thread in the case of NPAPI) indicates that the buffer is sufficiently filled again. This is essentially the producer/consumer problem.
Of course, you'll first need to choose how to handle threads and synchronization primitives (a library such as C++11 std::thread, Boost threads, low-level pthreads and/or Windows threads, etc.). There are tons of related questions on SO/SE and tons of articles, postings, discussions, and tutorials all over the internet.
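As a minimal sketch of that producer/consumer arrangement (the class and its buffering policy are my own assumptions, not part of NPAPI), the push side appends bytes from NPP_Write while the decoder thread blocks inside read() until enough data is available; a concrete compressed_source would simply forward its read() to such a buffer:

```cpp
#include <condition_variable>
#include <cstring>
#include <mutex>
#include <vector>

// Hypothetical bridge between the NPAPI push side (NPP_Write on the main
// thread) and the library's pull side (read() on a decoder thread).
class stream_buffer {
public:
    // Called from the push side whenever new data arrives.
    void push(const char* data, size_t len) {
        std::lock_guard<std::mutex> lock(mutex_);
        buffer_.insert(buffer_.end(), data, data + len);
        ready_.notify_one();
    }

    // Called from the decoder thread; blocks until num_bytes are available.
    int read(char* buf, int num_bytes) {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [&] {
            return buffer_.size() >= static_cast<size_t>(num_bytes);
        });
        std::memcpy(buf, buffer_.data(), num_bytes);
        buffer_.erase(buffer_.begin(), buffer_.begin() + num_bytes);
        return num_bytes;
    }

private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::vector<char> buffer_;
};
```

A real implementation would also need an end-of-stream/error flag so read() can return a short count or -1 once the download finishes.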

How to get Core Data to make only one instance of entity of type

So. Before I get singleton pattern hate on this message hear me out. I'd love to hear ideas. I'm making a program that I think I need to use core data for, because later I want the status of some variables to be easily accessible from OS X, and multiple iOS devices.
What I'm making is an OS X program that will control phidgets (phidgets.com) to control and listen for status changes in real world objects. Example: whether a motor is turned on or not. Turn a motor on and off. Turn on status lights, etc.
I originally thought I'd just make global variables that I change, poll and manipulate in order to have a central status board for the logic of the program to work off of. But because of the engineering that Apple puts into Core Data every year, I'm assuming that making this work with Core Data will make it easier to later sync with iOS devices that could control or monitor those statuses remotely.
Is there a nifty way you can imagine to:
- start up the program, confirm there is only one entity of type "SystemStatus", and if there isn't one, make one. If there is one, we continue and let the program update its attributes with the status of the real-world objects it's controlling.
Using Core Data was also something I thought of because it gives me a place to persist a stored history of gathered data. Example: motor bearing temperature over time.
If you ensure that access to this object is done through your API, Core Data becomes an implementation detail behind the getter method of the singleton object. There are no facilities in Core Data to tell it to create only one object, but if you ensure access to the object is done through a wrapper of your own, you can fetch it on demand and if it doesn't exist, you can insert it, save, and pass it to the caller.
An important thing to consider when using Core Data objects is multithreading. Passing the same object to multiple threads is very error-prone and requires locking mechanisms (or use of Apple's block-based API). This is not very straightforward for what you describe. Consider either a wrapper object which uses Core Data objects internally (wraps access to properties in block-based API) or using a different approach than Core Data.
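Core Data itself would be driven from Objective-C or Swift through NSManagedObjectContext, but the fetch-or-create accessor pattern described above is language-agnostic. Here is a sketch of just that pattern in C++, with fetch_existing and insert_and_save as hypothetical placeholders for the Core Data fetch request and insert/save calls:

```cpp
#include <memory>
#include <mutex>

// Stand-in for the persisted record; in Core Data this would be the
// NSManagedObject subclass for the SystemStatus entity.
struct SystemStatus {
    bool motor_on = false;
    double bearing_temperature = 0.0;
};

// Wrapper that owns all access to the single status record.
class StatusStore {
public:
    SystemStatus& shared_status() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!status_) {
            status_ = fetch_existing();        // fetch request for SystemStatus
            if (!status_)
                status_ = insert_and_save();   // none found: insert one and save
        }
        return *status_;
    }

private:
    // Placeholders for the underlying store operations.
    std::unique_ptr<SystemStatus> fetch_existing() { return nullptr; }
    std::unique_ptr<SystemStatus> insert_and_save() {
        return std::make_unique<SystemStatus>();
    }

    std::mutex mutex_;
    std::unique_ptr<SystemStatus> status_;
};
```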

Dynamically loaded Markers: DDOS prevention

My app shows a map where locations (or markers) are dynamically loaded via an AJAX (and database) request after every change of the map bounds.
I'm convinced that this solution is not scalable: at the moment, the Europe view shows a total of 10 markers.
If the database grows and I display, for instance, 1000 locations, that means 1000 rows would be returned to the user.
This is not a JS/UI issue, since I use the MarkerCluster plugin and I avoid redrawing markers for already-loaded locations.
I made some tweaks :
- Delay the AJAX request using the gmaps idle event
- Increase the minimal zoom level, so the entire world can't be displayed.
But this is not enough.
There are lots of ways to approach this, but I will just cover the two I think are most appropriate given your question.
First is to really control from your web app what information is asked for and when. You could write this all yourself in JavaScript and implement caching techniques etc. There are a number of libraries out there that do most of this work for you, though.
I would recommend one of the following:
OpenGeo SDK
OpenLayers
GeoExt
Leaflet
All of these have ways of controlling local caching, when to get the data and what data is gathered from the server. Most of them can also be extended to add any functionality that is missing. I know the top two also support Google Maps (as well as a number of other providers).
If you need to add even more control over your data locally you could even look at implementing something like PouchDB. I think this is more suited to mobile applications or instances where the network connection is either really slow or intermittent.
This sort of solution should be able to easily handle thousands to tens of thousands of features with hundreds of users.
If you are really going to scale up to hundreds of thousands or millions of features with hundreds to thousands of users, then I would suggest adding a tile server to the solution above. The tile server will sit between your web application and your database. Most of them have lots of caching settings and optimisations for dealing with large datasets and pushing them out to a client. Because they push out tiles rather than features, the data output remains reasonably constant even as the number of features grows. The OpenGeo SDK and OpenLayers libraries I mentioned above can work really well with any of the following tile servers:
GeoServer
Mapserver
MapGuide
Quantum GIS Server
If you are reluctant to do any coding, there are some offerings that work out of the box for enterprise environments. They are all expensive, and from your question I think they are probably not what you are looking for.

Best form of IPC for a decentralized roguelike?

I've got a project to create a roguelike that in some way abstracts the UI from the engine and the engine from map creation, line-of-sight, etc. To narrow the focus, I first want to just get the UI (the player's client) and the engine working.
My current idea is to make the client basically a program that decides what one character (player, monsters) will do for its turn and waits until it can move again. So each monster has a client, and so does the player. The player's client prints the map, waits for input, sends it to the engine, and tells the player what happened. A monster's client does the same, except it doesn't print the map and uses AI instead of keyboard input.
Before I go any further, if this seems a somewhat obfuscated way of doing things, my goal is to learn, not to write a roguelike. It's the journey, not the destination.
And so I need to choose what form of IPC fits this model best.
My first attempt used pipes, because they're simplest. I wrote a UI for the player and a program to pipe in instructions such as where to put the map and the player. While this works, it only allows one client, communicating through stdin and stdout.
I've thought about making the engine a daemon that watches a spool directory where clients, when started, create unique-per-client temp files to give instructions to the engine and receive feedback.
Lastly, I've done a little introductory programming with sockets. They seem like they might be the way to go, and would allow the game to perhaps someday be run over a network. I'd like to use a simpler solution if possible, and since I'm unfamiliar with sockets, they're more error-prone for me.
I'm always open to suggestions.
I've been playing around with using these combinations for a similar problem (multiple clients talking via a single daemon on the local box, with much of the intelligence shoved off into the clients).
mmap for sharing large data blobs, with unix domain sockets, message queues, or named pipes for notification
same, but using individual files per blob instead of munging them all together in an mmap
same, but without the files or mmap (in other words, more like conventional messaging)
In general I like the idea of breaking things up into separate executables this way -- it certainly makes testing easier, for instance. I think the choice of method comes down to usage patterns -- how large are messages, how persistent does the data in them need to be, can you afford the cost of multiple trips through the network stack for a socket-based message, that sort of thing. The fact that you're sticking to Linux makes things easy in terms of what's available -- you don't need to worry about portability of message queues, for instance.
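As a minimal sketch of the first combination (the name /roguelike_map and the blob layout are assumptions for illustration), the engine publishes the map state through POSIX shared memory, and a small unix domain socket or FIFO message simply tells clients "the blob changed, re-read it":

```cpp
#include <fcntl.h>      // O_* constants
#include <sys/mman.h>   // shm_open, mmap
#include <sys/stat.h>
#include <unistd.h>
#include <cstring>

// Hypothetical shared blob holding the current map state.
struct map_blob {
    unsigned turn;
    char tiles[80 * 24];
};

int main() {
    // Daemon side: create a named shared-memory object and size it.
    int fd = shm_open("/roguelike_map", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, sizeof(map_blob)) != 0)
        return 1;

    void* mem = mmap(nullptr, sizeof(map_blob), PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED)
        return 1;

    map_blob* blob = static_cast<map_blob*>(mem);
    blob->turn = 1;
    std::memset(blob->tiles, '.', sizeof(blob->tiles));

    // Notification ("turn N is ready") would go over a unix domain socket,
    // message queue, or FIFO; clients shm_open/mmap the same name read-only
    // and re-read the blob when notified.

    munmap(mem, sizeof(map_blob));
    close(fd);
    return 0;
}
```

On older glibc you may need to link with -lrt for shm_open.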
This one's also applicable: https://stackoverflow.com/a/1428542/1264797
