Best practice: Windows Phone, how to sync files with a server - windows-phone-7

I am going to write some apps on WP 7.5 in C#, which should store some user input in files, and later on these files should be synced with a server.
However, internet connection might not always be available.
So, the app should store the data internally first, and later synchronize it with the server in the background.
Now, is a background sync possible at all? If yes, is there any pre-built functionality to do this? Otherwise, what is the best practice for such synchronizing tasks?
Our file server's protocol can be chosen according to this task. We could even use a SQL server.

You need to read about "Background file transfers for Windows Phone":
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202955(v=vs.105).aspx
Especially read and understand the policies and constraints regarding data size, duration, etc.
Note that this topic is part of the broader topic 'Multitasking for Windows Phone', which covers background file transfers, background tasks, etc.:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202866(v=vs.105).aspx
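For the upload half of a sync, the gist is to copy the locally stored file into the /shared/transfers folder of isolated storage and hand it to the background transfer service, which keeps transferring even when the app is not running. Below is a minimal sketch of that step; the server endpoint and file path are placeholders, and the MSDN pages above spell out the size and policy limits that apply.

```csharp
using System;
using Microsoft.Phone.BackgroundTransfer;

public static class SyncQueue
{
    // Queue a file (already placed under /shared/transfers in isolated storage)
    // for upload; the OS continues the transfer even if the app is closed.
    public static void QueueUpload(string relativePath)
    {
        var serverUri = new Uri("https://example.com/upload", UriKind.Absolute); // placeholder endpoint
        var fileUri = new Uri(relativePath, UriKind.Relative);                   // e.g. "shared/transfers/notes.dat"

        var request = new BackgroundTransferRequest(serverUri)
        {
            Method = "POST",
            UploadLocation = fileUri,
            Tag = relativePath,
            // Allow transfers on battery and cellular, subject to the platform's size policies.
            TransferPreferences = TransferPreferences.AllowCellularAndBattery
        };

        request.TransferStatusChanged += (s, e) =>
        {
            if (e.Request.TransferStatus == TransferStatus.Completed)
            {
                BackgroundTransferService.Remove(e.Request); // free one of the limited queue slots
            }
        };

        BackgroundTransferService.Add(request);
    }
}
```

The transfer queue only holds a limited number of requests per app, so completed or failed requests should be removed promptly.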

Related

How to implement synchronization of browser-based online games when users refresh their browser

In implementing a browser-based simple game involving multiple users, I have the server save the game state at certain sync points (not time-based but event-specific). I identify each state by an integer.
When a user refreshes his browser, the server provides the latest state and restores the content in the browser. However, in those few seconds while the browser is loading the latest content after browser-refresh, the state could change again. I do not know how to handle this situation because sending the next state will again raise the same issue.
I want a seamless refresh so none of the other players are impacted when one user refreshes his browser (or for that matter leaves and comes back).
The implementation language is not relevant. I use websockets to communicate between the browser and the server. The server is the intermediary for all communication between users (I am not using WebRTC data channels). What is the best way to sync the application content in multiple browsers?
This is indeed a programming-based question though no code is provided.
Forget the fact that your client exists in a browser. Let's just talk about replication.
The usual approach in databases is to separate snapshots from write-ahead log (WAL) files. When you bring a new client up, you select a snapshot and transfer that. Then, when the client is ready, it asks for the WAL entries from that snapshot forward. The same mechanism is used after crashes: the last available snapshot is loaded, the WAL is replayed, and then the database comes up.
I would suggest the same strategy. It does require efficient storage of snapshots, some kind of log, and some kind of replay mechanism, which is a lot of easy-to-mess-up code. If you can use something existing, that would be good.
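To make the idea concrete, here is an illustrative sketch (all names hypothetical, and the language is incidental) of the snapshot-plus-log pattern: the server hands a joining client the latest snapshot together with its version, and afterwards the client only needs the events recorded after that version, so changes that happen while the page is reloading are not lost.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: the server keeps a versioned snapshot plus an append-only
// event log, and a (re)joining client is brought up to date by replaying only
// the events recorded after its snapshot version.
public sealed class ReplicatedGame
{
    private readonly List<string> eventLog = new List<string>(); // one entry per state change
    private string snapshot = "";                                // serialized game state
    private int snapshotVersion;                                 // log index the snapshot covers

    public void Apply(string gameEvent)
    {
        eventLog.Add(gameEvent);
        // ...mutate the authoritative state here...
    }

    public void TakeSnapshot(string serializedState)
    {
        snapshot = serializedState;
        snapshotVersion = eventLog.Count;
    }

    // What a newly connected (or refreshed) client receives first.
    public (string Snapshot, int Version) GetSnapshot() => (snapshot, snapshotVersion);

    // Events the client still has to replay, given the version it reported back.
    public IEnumerable<string> EventsSince(int clientVersion) => eventLog.Skip(clientVersion);
}
```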
The first thing I looked into was using Emscripten to compile Redis to JavaScript and then using Redis' built-in asynchronous replication to replicate to your browser. That may be possible, but the fact that Redis is single-threaded and wants to be a client-server system is probably a showstopper.
The next best option I found is https://isomorphic-git.org/. Here is how it could give you what you need: you maintain your current state in a git repository and keep a WAL-style log of everything you have done to it. When a client connects, it clones the repository. Once that is done, it connects to the websocket, tells you which commit it is at, and you send it the log entries from that point forward. Locally, in the browser, the client runs those git commands. If a client simply loses its connection and then rejoins, it can do a git pull and then follow the same strategy.
This will be a bunch of work for you. But a lot less work than implementing everything from scratch.

WP7 inter process communication

I am building a music player using the background audio player agent on WP7. I want to enable communication between the UI part and the agent part. Many guides suggest using isolated storage, but I don't think that is a good approach.
Is there any way to enable inter-process communication on Windows Phone 7?
In the Windows Phone 8 SDK, we can now use a system-wide (named) Mutex object.
It seems the foreground app and the background agent run as separate processes on the phone, so even when you instantiate the same class, each process gets its own instance.
The best solution I know of so far is to have each process map the "shared" data structure to an isolated storage file, and use a system-wide named Mutex to prevent one process from reading the file while the other is writing it. It is simpler if one process is always the writer of the data structure, so it never has to merge in changes made asynchronously by the other process. If each process must write some portion of the data structure (the usual case), consider splitting those portions into separate data structures and separate isolated storage files, with one process reading one file and writing the other, and the other process doing the reverse. Do all reads and writes inside the mutex, and use the same mutex for both files and both processes to avoid deadlocks.
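A minimal sketch of that pattern, assuming the system-wide named Mutex the answer mentions (available from Windows Phone 8) and hypothetical mutex and file names; both the foreground app and the agent would reference the same two names.

```csharp
using System.IO;
using System.IO.IsolatedStorage;
using System.Threading;

// Sketch of the "shared file + named mutex" pattern described above.
public static class SharedState
{
    private const string MutexName = "MyApp.SharedStateMutex"; // hypothetical name
    private const string FileName = "sharedstate.txt";         // hypothetical file

    public static void Write(string data)
    {
        using (var mutex = new Mutex(false, MutexName))
        {
            mutex.WaitOne();
            try
            {
                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                using (var writer = new StreamWriter(store.OpenFile(FileName, FileMode.Create)))
                {
                    writer.Write(data);
                }
            }
            finally { mutex.ReleaseMutex(); }
        }
    }

    public static string Read()
    {
        using (var mutex = new Mutex(false, MutexName))
        {
            mutex.WaitOne();
            try
            {
                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    if (!store.FileExists(FileName)) return null;
                    using (var reader = new StreamReader(store.OpenFile(FileName, FileMode.Open)))
                    {
                        return reader.ReadToEnd();
                    }
                }
            }
            finally { mutex.ReleaseMutex(); }
        }
    }
}
```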
try this:
phoneApplicationPage.State

Testing file transfer speed across LAN/WAN

Is there a utility for Windows that allows you to test different aspects of file transfer operations across a LAN or a WAN?
Example...
How long does it take to move a file of a known size (500 MB or 1 GB) from Server A (on site) to Server B (on site) or to Server C (an off-site satellite location)?
D-ITG will allow you to test many aspects of your links. It does not necessarily allow you to transfer a file directly, but it lets you control almost all aspects of the transmission of data across the wire.
If all you are interested in is bulk transfer time (and not all the nitty-gritty details) you could just use a basic FTP application and time the transfer.
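If you just want a number, timing the transfer yourself is only a few lines of code. A rough sketch (host, credentials and file path are placeholders) that uploads a known-size file over FTP and reports the throughput:

```csharp
using System;
using System.Diagnostics;
using System.Net;

// Rough-and-ready timing of a bulk FTP upload.
class FtpTimer
{
    static void Main()
    {
        const string localFile = @"C:\temp\test-1gb.bin";           // known-size test file
        const string target = "ftp://serverB.example.com/test.bin"; // placeholder URI

        using (var client = new WebClient { Credentials = new NetworkCredential("user", "password") })
        {
            var watch = Stopwatch.StartNew();
            client.UploadFile(target, localFile);
            watch.Stop();

            var sizeMb = new System.IO.FileInfo(localFile).Length / (1024.0 * 1024.0);
            Console.WriteLine("Transferred {0:F0} MB in {1:F1} s ({2:F1} MB/s)",
                sizeMb, watch.Elapsed.TotalSeconds, sizeMb / watch.Elapsed.TotalSeconds);
        }
    }
}
```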
Probably nothing you've not already figured out. You could get some coarse grain metrics using a batch file to coordinate:
start monitoring
copy file
stop monitoring
"Copy file" might just be initiating a file copy between two nodes on the LAN, or it might initiate an FTP copy between two nodes on the WAN.
Monitoring could be as basic as writing the current time to output or file, or it could be as complex as adding performance counter metrics from the network adapter on the two machines.
A commercial WAN emulator would also give you the information you're looking for. I've used the Shunra appliance successfully in the past. It's pretty expensive, so I'd only recommend it if critical business success rides on understanding how application behavior could change based on network conditions, and if it is something you could incorporate into regular testing activities.

What's the best way to monitor a large number of Ruby processes?

I have a farm of several physical servers each running a large number of Ruby "workers" (daemon-like processes) and I'd like to be able to monitor the health and progress of these processes from a central location, perhaps with historical graphing like Cacti provides. What's the simplest preferably-open-standard protocol for doing something like that? Please note I'm already using monit to keep the processes up and running and under control; what I'm asking for here is a single point of entry (i.e. dashboard) for checking in on them. Thanks.
If you are already using Monit then M/Monit sounds like a perfect match.
"M/Monit expand upon Monit's capabilities to provide monitoring and management of all Monit enabled hosts from one simple to use web-interface. " - http://mmonit.com/
G'day,
What about having a monitoring process on each server that checks the status of each process and then writes that out to a flat text file, say once every five minutes.
Then another process located on a central server can retrieve those flat files, trawl through the results, and flag any issues.
If you save the individual files and timestamp them, you would also be able to see any trends forming.
Just a quick idea.
BTW, the above system is used to monitor the servers in one of the largest websites in the world. Our scripts are written in Perl with a little bit of shell script, but I don't see why you couldn't write your monitoring scripts in Ruby as well.
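The pattern itself is language-neutral; purely as an illustration (the process name, output path and five-minute schedule are all assumptions, and the same few lines could be written in Ruby just as easily), a per-server reporter might look like this:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

// Minimal sketch of the per-server reporter: each run writes one timestamped
// status file listing the worker processes it can see. Schedule it every five
// minutes with cron or Task Scheduler; the output folder must already exist.
class StatusReporter
{
    static void Main()
    {
        var workers = Process.GetProcessesByName("ruby"); // adjust to your worker process name
        var lines = workers.Select(p =>
            string.Format("{0}\tpid={1}\tmem={2}KB", p.ProcessName, p.Id, p.WorkingSet64 / 1024));

        var file = Path.Combine(@"C:\monitoring", // or a share the central server can read
            string.Format("status-{0:yyyyMMdd-HHmm}.txt", DateTime.UtcNow));
        File.WriteAllLines(file, new[] { "host=" + Environment.MachineName }.Concat(lines));
    }
}
```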
HTH
cheers,
I'd suggest taking a look at Zabbix.
It's not as simple as monit, of course, but it allows you to run data collecting agent on each of your servers, with all agents feeding the central reporting and storage server with their data. Those agents can use any custom scripts to get the metrics - you can write simple scripts to extract the data you need from your workers, send it back to the central reporting server and display it there on the dashboard.

Best approach to collecting log files from remote machines?

I have over 500 machines distributed across a WAN covering three continents. Periodically, I need to collect text files which are on the local hard disk of each blade. Each server is running Windows Server 2003 and the files are mounted on a share which can be accessed remotely as \\server\Logs. Each machine holds many files which can be several MB each, and the size can be reduced by zipping.
Thus far I have tried using PowerShell scripts and a simple Java application to do the copying. Both approaches take several days to collect the 500 GB or so of files. Is there a better solution which would be faster and more efficient?
I guess it depends what you do with them ... if you are going to parse them for metrics data into a database, it would be faster to have that parsing utility installed on each of those machines to parse and load into your central database at the same time.
Even if all you are doing is compressing and copying to a central location, set up those commands in a .cmd file and schedule it to run on each of the servers automatically. Then you will have distributed the work amongst all those servers, rather than forcing your one local system to do all the work. :-)
The first improvement that comes to mind is to not ship entire log files, but only the records from after the last shipment. This of course is assuming that the files are being accumulated over time and are not entirely new each time.
You could implement this in various ways: if the files have date/time stamps you can rely on, running them through a filter that removes the older records from consideration and dumps the remainder would be sufficient. If there is no such discriminator available, I would keep track of the last byte/line sent and advance to that location prior to shipping.
Either way, the goal is to ship only new content. In our own system, logs are shipped via a service that replicates them as they are written. That required a small service to handle the log files as they are written, but it reduced the latency of capturing logs and cut bandwidth use immensely.
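A minimal sketch of the offset-tracking variant, assuming a sidecar file that records how many bytes have already been shipped and a hypothetical ShipBytes transport call:

```csharp
using System.IO;

// Sketch of the "only ship what's new" idea: remember the byte offset that was
// shipped last time, and on the next run read (and ship) only the bytes after it.
public static class IncrementalShipper
{
    public static void ShipNewContent(string logPath, string offsetPath)
    {
        long lastOffset = File.Exists(offsetPath) ? long.Parse(File.ReadAllText(offsetPath)) : 0;

        using (var stream = new FileStream(logPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            if (stream.Length < lastOffset) lastOffset = 0; // log was rotated; start over
            stream.Seek(lastOffset, SeekOrigin.Begin);

            // Real code would read and ship in chunks; one buffer keeps the sketch short.
            var buffer = new byte[stream.Length - lastOffset];
            int read = stream.Read(buffer, 0, buffer.Length);
            if (read > 0) ShipBytes(buffer, read);          // hypothetical transport call

            File.WriteAllText(offsetPath, (lastOffset + read).ToString());
        }
    }

    private static void ShipBytes(byte[] data, int count)
    {
        // FTP/HTTP/queue upload would go here.
    }
}
```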
Each server should probably:
manage its own log files (start new logs before uploading and delete sent logs after uploading)
name the files (or prepend metadata) so the server knows which client sent them and what period they cover
compress log files before shipping (compress + FTP + uncompress is often faster than FTP alone)
push log files to a central location (FTP is faster than SMB, and the Windows FTP command can be automated with "-s:scriptfile")
notify you when it cannot push its log for any reason
do all the above on a staggered schedule (to avoid overloading the central server)
Perhaps use the server's last IP octet multiplied by a constant as an offset in minutes from midnight?
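Putting the per-server items above together, a rough sketch of the push side (paths, host, credentials and the stagger constant are all placeholders; System.IO.Compression.ZipFile assumes a framework where it is available, otherwise shell out to a command-line zipper):

```csharp
using System;
using System.IO;
using System.IO.Compression;   // ZipFile requires the System.IO.Compression.FileSystem assembly
using System.Net;

// Sketch: zip the day's logs, wait out a stagger offset derived from the last
// IP octet, then FTP the archive to the central collector.
class LogPusher
{
    static void Main()
    {
        string logDir = @"D:\Logs";
        string archive = Path.Combine(Path.GetTempPath(),
            Environment.MachineName + "-" + DateTime.UtcNow.ToString("yyyyMMdd") + ".zip");

        ZipFile.CreateFromDirectory(logDir, archive);

        // Stagger: last octet of this machine's IP * 2 minutes, capped at two hours (assumption).
        System.Threading.Thread.Sleep(TimeSpan.FromMinutes(GetLastOctet() * 2 % 120));

        using (var ftp = new WebClient { Credentials = new NetworkCredential("user", "password") })
        {
            ftp.UploadFile("ftp://collector.example.com/incoming/" + Path.GetFileName(archive), archive);
        }
    }

    static int GetLastOctet()
    {
        foreach (var addr in Dns.GetHostAddresses(Dns.GetHostName()))
            if (addr.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork)
                return addr.GetAddressBytes()[3];
        return 0;
    }
}
```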
The central server should probably:
accept log files sent and queue them for processing
gracefully handle receiving the same log file twice (should it ignore or reprocess?)
uncompress and process the log files as necessary
delete/archive processed log files according to your retention policy
notify you when a server has not pushed its logs lately
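And a correspondingly rough sketch of the collector side, covering the "received the same archive twice" case by keeping processed archives around (folder names are placeholders; the database load and retention/notification pieces are left as comments):

```csharp
using System.IO;
using System.IO.Compression;

// Sketch of the collector: pick up archives from the FTP drop folder, skip any
// archive already processed (same file name), extract, and keep the archive so
// duplicates can be detected later.
class LogCollector
{
    static void Main()
    {
        const string incoming = @"D:\ftp\incoming";
        const string processed = @"D:\ftp\processed";
        const string extracted = @"D:\logs\extracted";

        foreach (var archive in Directory.GetFiles(incoming, "*.zip"))
        {
            string done = Path.Combine(processed, Path.GetFileName(archive));
            if (File.Exists(done))          // same archive received twice: ignore it
            {
                File.Delete(archive);
                continue;
            }

            string target = Path.Combine(extracted, Path.GetFileNameWithoutExtension(archive));
            ZipFile.ExtractToDirectory(archive, target);
            // ...parse / load into the database, then archive or delete per retention policy...

            File.Move(archive, done);
        }
    }
}
```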
We have a similar product on a smaller scale here. Our solution is to have the machines generating the log files push them to a NAS on a daily basis in a randomly staggered pattern. This solved a lot of the problems of a more pull-based method, including bunched-up read/write times that kept a server busy for days.
It doesn't sound like the storage server's bandwidth would be saturated, so you could pull from several clients at different locations in parallel. The main question is: what is the bottleneck that slows the whole process down?
I would do the following:
Write a program to run on each server, which will do the following:
Monitor the logs on the server
Compress them at a particular defined schedule
Pass information to the analysis server.
Write another program which sits on the core server and does the following:
Pulls compressed files when the network/CPU is not too busy.
(This can be multi-threaded.)
Uses the information passed to it from the end computers to determine which log to get next.
Uncompresses the files and loads them into your database continuously.
This should give you a solution which provides up to date information, with a minimum of downtime.
The downside will be relatively consistent network/computer use, but tbh that is often a good thing.
It will also allow easy management of the system, to detect any problems or issues which need resolving.
NetBIOS copies are not as fast as, say, FTP. The problem is that you don't want an FTP server on each server. If you can't process the log files locally on each server, another solution is to have all the servers upload the log files via FTP to a central location, which you can then process. For instance:
Set up an FTP server as a central collection point. Schedule tasks on each server to zip up the log files and FTP the archives to your central FTP server. You can write a program which automates the scheduling of the tasks remotely using a tool like schtasks.exe:
KB 814596: How to use schtasks.exe to Schedule Tasks in Windows Server 2003
You'll likely want to stagger the uploads back to the FTP server.
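If you do automate the scheduling, a sketch of driving schtasks.exe from code might look like the following (server name, credentials, start time and the command being scheduled are all placeholders; check KB 814596 for the exact switches your server version accepts):

```csharp
using System.Diagnostics;

// Sketch of automating the scheduling step: create a daily task on a remote
// server with schtasks.exe.
class RemoteScheduler
{
    static void Schedule(string server, string user, string password)
    {
        string args = string.Format(
            "/Create /S {0} /U {1} /P {2} /TN \"Ship logs\" /SC DAILY /ST 01:30 " +
            "/TR \"C:\\Tools\\ziplogs.cmd\"",
            server, user, password);

        var psi = new ProcessStartInfo("schtasks.exe", args)
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}
```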
