I'm a rookie in Adaptive AUTOSAR.
I can't imagine why Time Synchronization (Tsync) is needed. The system time of ECUs can be synchronized by PTP.
Could you explain why Tsync is needed even though PTP synchronizes time across a distributed system? I would also welcome any documents or materials that would help me understand Tsync's usages or use cases.
The reason time sync exists, along with the definition of time domains, is that you need to be able to define different time domains across the different bus systems within the vehicle. One example of a not directly obvious time domain is the metering of operating hours.
On top of that, time domains can cross AUTOSAR platforms, i.e. a time domain may consist of both CP and AP nodes.
You can find explanations of time sync in, e.g., the AUTOSAR documents TPS Manifest and TPS System Template.
There need to be different Time Bases in a vehicle.
Examples of Time Bases in vehicles are:
• Absolute, which is based on GPS time.
• Relative, which represents the accumulated overall operating time of a vehicle, i.e. this Time Base does not start at zero whenever the vehicle starts operating.
• Relative, starting at zero when the ECU begins its operation.
Background
I have a system consisting of several distributed services, each of which is continuously generating events and reporting these to a central service.
I need to present a unified timeline of the events, where the ordering in the timeline corresponds to the moment each event occurred. The frequency of event occurrence and the network latency are such that I cannot simply use the time of arrival at the central collector to order the events.
E.g., in a scenario where event E1 occurs before event E2 but its report reaches the collector later, E1 needs to be rendered in the timeline above E2 despite arriving afterwards, which means the events need to come with timestamp metadata. This is where the problem arises.
Problem
Due to constraints on how the environment is set up, it is not possible to ensure that the local time services on each machine are reliably aware of current UTC time. I can assume that each machine can accurately gauge relative time, i.e. the clock speeds are close enough to make measurement of short timespans identical, but problems like NTP misconfiguration/partitioning make it impossible to guarantee that every machine agrees on the current UTC time.
This means that a naive approach of simply generating a local timestamp for each event as it occurs, then ordering events using that will not work: every machine has its own opinion of what universal time is.
So the question is: how can I recover an ordering for events generated in a distributed system where the clocks do not agree?
Approaches I've considered
Most solutions I find online go down the path of trying to synchronize all the clocks, which is not possible for me since:
• I don't control the machines in question
• The reason the clocks are out of sync in the first place is network flakiness, which I can't fix
My own idea was to query some kind of central time service every time an event is generated, then stamp that event with the retrieved time minus the network flight time. This gets hairy, because I have to add another service to the system and ensure its availability (I'm back to square one if the other services can't reach this one). I was hoping there is some clever way to do this that doesn't require me to centralize timekeeping in this way.
A simple solution, somewhat inspired by your own at the end, is to periodically ping what I'll call the time-source server. In the ping, include the service's chip clock; the time-source echoes that back and includes its own timestamp. The service can then deduce the round-trip time and guess that the time-source's clock read that timestamp roughly round-trip-time/2 nanoseconds ago. You can then use this as an offset to the local chip clock to determine a global-ish time.
You don't have to use a separate service for this; the Collector server will do. The important part is that you don't have to call the time-source server on every request; this removes it from the critical path.
If you don't want a sawtooth function for the time, you can smooth the time difference across successive measurements.
Congratulations, you've rebuilt NTP!
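In code, a minimal sketch of that offset estimation and smoothing could look like the following (Python, all names hypothetical; it assumes the collector simply echoes back the monotonic-clock reading you sent along with its own timestamp):

```python
import time

class OffsetEstimator:
    """Estimates the offset between the local monotonic clock and a remote
    time source, smoothing over successive pings to avoid a sawtooth."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha      # smoothing factor for new samples
        self.offset = None      # remote_time - local_monotonic, in seconds

    def on_pong(self, sent_local, received_local, remote_timestamp):
        # sent_local / received_local are time.monotonic() readings taken
        # when the ping left and when the echo came back.
        rtt = received_local - sent_local
        # Guess: the remote clock read remote_timestamp roughly rtt/2 ago,
        # i.e. at local monotonic time (sent_local + rtt / 2).
        sample = remote_timestamp - (sent_local + rtt / 2)
        if self.offset is None:
            self.offset = sample
        else:
            # Smooth instead of jumping to the latest sample.
            self.offset = (1 - self.alpha) * self.offset + self.alpha * sample

    def now(self):
        """Global-ish timestamp for stamping events locally."""
        return time.monotonic() + self.offset
```

Each service then stamps its events with estimator.now() instead of its local wall clock, and pings the collector only every few seconds in the background, keeping timekeeping off the critical path.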
I need to provide the business with a report estimating the number of users (devices in this case) the system can cope with without extensive delays and errors.
Assuming each device polls/communicates with the server every 5 seconds or so, would it be acceptable to multiply the number of concurrent users I stress test with by 5 to get the figure required by the business?
In general what are the best means of answering such a question considering the above factors?
I am guessing that the collision rate (devices becoming effectively concurrent) may well exceed that ratio of 5 (the number of seconds a device waits before it asks to communicate with the server again).
Any advice?
I am using JMeter to produce concurrent user/device throughput.
Edit as requested to explain further:
From an analytics point of view: if each device attempts to connect and communicate with the server every 5 seconds, and we wish to receive a response before the device is ready to re-communicate (in other words, within the next 4 seconds), then the chance of colliding with other devices running the same software is determined by the elapsed time between the two calls, no?
I am really looking for a statistical analysis methodology to find a factor by which to multiply the concurrent test results so they project to a real environment.
I know it is a general question without a specific/explicit answer; I am more after the methodology, if there is one, of how one can project the number of "active" users the system can cope with from the known "concurrent" users. I would have thought that given the frequency of calls is known and that each call takes 300 ms on average, one could somehow project the actual users (maybe by an industry-standard multiplier?).
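Not an industry-standard multiplier, but one back-of-the-envelope way to relate "concurrent" test users to "active" polling devices, assuming the figures quoted above (a poll every 5 seconds, ~300 ms average call time) and roughly uniform arrivals, is Little's law (concurrency ≈ arrival rate × service time). A rough Python sketch:

```python
# Rough capacity projection, using the figures quoted above.
poll_interval_s = 5.0        # each device calls every 5 seconds
avg_call_time_s = 0.300      # average time per call

def concurrent_requests(active_devices: int) -> float:
    """Little's law: L = lambda * W, where lambda is the arrival rate
    (active_devices / poll_interval) and W is the time per request."""
    arrival_rate = active_devices / poll_interval_s
    return arrival_rate * avg_call_time_s

def active_devices_supported(sustainable_concurrency: float) -> float:
    """Invert the relation: how many polling devices a measured
    sustainable concurrency level corresponds to."""
    return sustainable_concurrency * poll_interval_s / avg_call_time_s

# Example: if JMeter shows the server copes with 100 truly concurrent
# requests, that corresponds to roughly 100 * 5 / 0.3 ~= 1,667 devices.
print(active_devices_supported(100))
```

This ignores burstiness and queueing effects, so treat the result as an optimistic ceiling and apply a safety margin rather than a fixed multiplier.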
I am building a sensor network where a large number of sensors report their status to a central hub. The sensors need to report status at least once every 3 hours, but I want to make sure that the hub does not get inundated with too many reports at any given time. So to mitigate this, I let the hub tell the sensors the 'next report time'.
Now I am looking for any standard algorithms for load balancing these updates, such that the sensors don't exceed a set interval between reports and the hub can calculate the next report time so that its load (of receiving reports) is evenly divided over the day.
Any help will be appreciated.
If you know how many sensors there are, just divide up every three-hour chunk into that many time slots and (either randomly or programmatically, as you need) assign one to each sensor.
If you don't, you can still divide up every three-hour chunk into some large number of time slots and assign them to sensors. In your assignment algorithm, you just have to make sure that all the slots have one assigned sensor before any of them have two, all of them have two before any of them have three, etc.
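As a rough illustration (names and numbers are made up), a Python sketch of that slot assignment: divide each three-hour window into slots and hand them out round-robin, so no slot gets a second sensor before every slot has one.

```python
from datetime import timedelta

REPORT_WINDOW = timedelta(hours=3)

def assign_slots(sensor_ids, num_slots):
    """Spread sensors over num_slots slots within the 3-hour window.
    Round-robin assignment guarantees every slot has k sensors before
    any slot gets k+1."""
    slot_length = REPORT_WINDOW / num_slots
    schedule = {}
    for i, sensor_id in enumerate(sensor_ids):
        slot = i % num_slots
        # Offset from the start of each 3-hour window at which this
        # sensor should report.
        schedule[sensor_id] = slot * slot_length
    return schedule

# Example: 10 sensors over 6 slots -> slots 0-3 get two sensors each,
# slots 4-5 get one, and reports are spaced 30 minutes apart.
print(assign_slots([f"sensor-{n}" for n in range(10)], num_slots=6))
```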
Easiest solution: Is there any reason why the hub cannot poll the sensors according to its own schedule?
Otherwise you may want to devise a system where the hub can decide whether or not to accept a report based on its own load. If a sensor has its connection denied, make it wait a random period of time and retry. Over time the sensors should space themselves out more or less optimally.
IIRC some facet of TCP/IP uses a similar method, but I'm drawing a blank as to which.
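For what it's worth, the mechanism that answer is reaching for is essentially exponential backoff with random jitter (TCP's retransmission timer and Ethernet's collision handling both use variants of it). A minimal Python sketch of the sensor side, with send_report() standing in for whatever the real delivery call is:

```python
import random
import time

def report_with_backoff(send_report, max_attempts=8,
                        base_delay_s=30, max_delay_s=1800):
    """Try to deliver a report; if the hub refuses, wait a random
    (exponentially growing) amount of time and try again."""
    for attempt in range(max_attempts):
        if send_report():          # hub accepted the report
            return True
        # Full jitter: pick a random delay in [0, base * 2^attempt],
        # capped so retries never drift too far from the schedule.
        ceiling = min(max_delay_s, base_delay_s * (2 ** attempt))
        time.sleep(random.uniform(0, ceiling))
    return False                   # give up; report again next interval
```

The jitter alone is usually enough to spread the sensors out over time; the exponential growth just keeps a badly overloaded hub from being hammered.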
I would use a base of 90 minutes with a randomized variation of ±30 minutes, so that the intervals are randomly between 60 and 120 minutes. Adjust these numbers if you want to get closer to the 3-hour interval, but I would personally stay well under it.
I want to send my current location to a PHP web service every 5 minutes, even if my application is running in the background. I have tried to implement this, and it works well while my application is in the running state, but when I put the application in the background it stops sending data. Could anybody tell me how I can keep my application running in the background?
By "running in background", do you mean running when under the lock screen? If this is the case, then you need to set PhoneApplicationService.Current.ApplicationIdleDetectionMode = IdleDetectionMode.Disabled;
The post Running a Windows Phone Application under the lock screen by Jaime Rodriguez covers the subject well.
However, if you're talking about running an application that continues to run while the user uses other applications on the device, then this is not possible. In the Mango build of the operating system you can create background agents, but these only run every 30 minutes and only for 15 seconds as described on MSDN.
There is a request on the official UserVoice forum for Windows Phone development to Provide an agent to track routes, but even if adopted, this would not be available for quite some time.
Tracking applications are the bulk of what I do for a living, and the prospect of using WP7 like this is the primary reason I acquired one.
From a power consumption perspective, transmitting data is the single most expensive thing you can do, followed closely by sampling the GPS and accelerometers.
To produce a trace that closely conforms to roads, you need a higher sampling rate. WP7 won't let you sample more than once per second. This is (just barely) fast enough to track a motor vehicle, and at this level of power consumption the battery will last for about an hour assuming you log the data on the phone and don't attempt to transmit it.
You will also find that if you transmit for every sample, your sampling interval will be at least 15 seconds. Running the web call on another thread won't help because it will take more than one second to complete and you will run out of sockets in less than a minute with a one second sample interval.
There are solutions to all of these problems. For example, in a motor vehicle you can connect to vehicle power and run hot. You can batch and burst your data on a background thread.
These, however, are only the basic problems faced by every tracker designer. More interesting are the questions of proximity in space and time, measurement of deviation from a route, how to specify routes and geofences in a time dependent manner, how to associate them into named sets for rule evaluation purposes and how to associate rules with named sets of routes and geofences.
And then there is periodic clustering, which introduces all the calendrical nightmares that are too much for your average developer of desktop software. To apply the speed limit for a school zone you need to know the time zone, daylight savings, two start and two stop times and the start and end dates for school holidays in that region.
If you are just doing this for fun or as some kind of hiking trace then a five minute interval will impose much milder power demands than one second sampling, but I still suggest batch and burst because it means you can track locations that don't have comms.
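To make the batch-and-burst suggestion concrete, here is a platform-neutral Python sketch (the WP7 APIs differ, and upload() is a placeholder): the sampling loop queues fixes cheaply, and a background worker flushes them in occasional bursts, so losing comms only delays delivery instead of dropping data.

```python
import queue
import threading
import time

class BatchUploader:
    """Queue GPS fixes locally and transmit them in periodic bursts."""

    def __init__(self, upload, burst_interval_s=300, max_batch=500):
        self.upload = upload                  # callable(list_of_fixes) -> bool
        self.burst_interval_s = burst_interval_s
        self.max_batch = max_batch
        self.fixes = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def add_fix(self, fix):
        """Called from the sampling loop; cheap, never touches the network."""
        self.fixes.put(fix)

    def _worker(self):
        pending = []
        while True:
            time.sleep(self.burst_interval_s)
            while not self.fixes.empty() and len(pending) < self.max_batch:
                pending.append(self.fixes.get())
            if pending and self.upload(pending):  # one connection, one burst
                pending = []          # on failure, keep the batch and retry next burst
```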
Assuming I have a cluster of n Erlang nodes, some of which may be on my LAN, while others may be connected using a WAN (that is, via the Internet), what are suitable mechanisms to cater for a) different bandwidth availability/behavior (for example, latency induced) and b) nodes with differing computational power (or even memory constraints for that matter)?
In other words, how do I prioritize local nodes that have lots of computational power over those that have high latency and may be less powerful, or how would I ideally prioritize high-performance remote nodes with high transmission latencies so that they specifically handle those processes with a relatively high computation-to-transmission ratio (that is, completed work per message, per time unit)?
I am mostly thinking in terms of benchmarking each node in the cluster by sending it a benchmark process to run during initialization, so that the latencies involved in messaging can be measured, as well as the overall computation speed (that is, using a node-specific timer to determine how fast a node completes a given task).
Something like that would probably have to be done repeatedly, on the one hand to get representative (i.e. averaged) data, and on the other hand because it might even be useful at runtime in order to adjust dynamically to changing conditions.
(In the same sense, one would probably want to prioritize locally running nodes over those running on other machines)
This would be meant to hopefully optimize internal job dispatch so that specific nodes handle specific jobs.
We've done something similar to this, on our internal LAN/WAN only (WAN being for instance San Francisco to London). The problem boiled down to a combination of these factors:
1. The overhead in simply making a remote call over a local (internal) call
2. The network latency to the node (as a function of the request/result payload)
3. The performance of the remote node
4. The compute power needed to execute the function
5. Whether batching of calls provides any performance improvement if there was a shared "static" data set.
For 1. we assumed no overhead (it was negligible compared to the others)
For 2. we actively measured it using probe messages to measure round trip time, and we collated information from actual calls made
For 3. we measured it on the node and had each node broadcast that information (this changed depending on the load currently active on the node)
For 4. and 5. we worked it out empirically for the given batch
Then the caller solved for the minimum-cost assignment for a batch of calls (in our case pricing a whole bunch of derivatives) and fired them off to the nodes in batches.
We got much better utilization of our calculation "grid" using this technique but it was quite a bit of effort. We had the added advantage that the grid was only used by this environment so we had a lot more control. Adding in an internet mix (variable latency) and other users of the grid (variable performance) would only increase the complexity with possible diminishing returns...
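For a feel of what such a cost model can look like, here is a stripped-down Python sketch (hypothetical field names; a real implementation would sit in whatever language the dispatcher runs, e.g. Erlang here): each candidate node is scored as measured round-trip latency plus estimated compute time on that node, and the job goes to the node that minimizes the sum.

```python
def estimated_cost(job, node):
    """Very rough cost model: network round trip (scaled by payload size)
    plus compute time on that node."""
    network_s = node["rtt_s"] + job["payload_bytes"] / node["bandwidth_Bps"]
    compute_s = job["work_units"] / node["work_units_per_s"]
    return network_s + compute_s

def pick_node(job, nodes):
    """Choose the node with the lowest estimated completion cost."""
    return min(nodes, key=lambda node: estimated_cost(job, node))

# nodes would be refreshed from the probe messages (rtt_s) and the
# load broadcasts (work_units_per_s) described above, for example:
nodes = [
    {"name": "local-1",  "rtt_s": 0.001, "bandwidth_Bps": 1e8, "work_units_per_s": 50},
    {"name": "london-3", "rtt_s": 0.080, "bandwidth_Bps": 1e7, "work_units_per_s": 200},
]
job = {"payload_bytes": 200_000, "work_units": 1_000}
print(pick_node(job, nodes)["name"])  # the faster remote node wins here
```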
The problem you are talking about has been tackled in many different ways in the context of Grid computing (e.g., see Condor). To discuss this more thoroughly, I think some additional information is required (homogeneity of the problems to be solved, degree of control over the nodes [i.e. is there unexpected external load, etc.?]).
Implementing an adaptive job dispatcher will usually also require you to adjust the frequency with which you probe the available resources (otherwise the overhead due to probing could exceed the performance gains).
Ideally, you might be able to use benchmark tests to come up with an empirical (statistical) model that allows you to predict the computational hardness of a given problem (requires good domain knowledge and problem features that have a high impact on execution speed and are simple to extract), and another one to predict communication overhead. Using both in combination should make it possible to implement a simple dispatcher that bases its decisions on the predictive models and improves them by taking into account actual execution times as feedback/reward (e.g., via reinforcement learning).
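As a concrete, if simplistic, instance of the feedback idea, the sketch below (Python, all names hypothetical) keeps per-node estimates of speed and communication overhead and blends each observed execution time back in via an exponential moving average; it stops well short of full reinforcement learning but shows where the feedback would plug in.

```python
class NodeModel:
    """Per-node predictive model, corrected by observed execution times."""

    def __init__(self, initial_speed, initial_latency_s, alpha=0.2):
        self.speed = initial_speed          # predicted work units per second
        self.latency_s = initial_latency_s  # predicted comms overhead
        self.alpha = alpha                  # how quickly feedback overrides the prior

    def predict(self, work_units):
        """Predicted total time for a job of the given size on this node."""
        return self.latency_s + work_units / self.speed

    def observe(self, work_units, measured_total_s, measured_latency_s):
        """Feed back an actual execution: update both sub-models."""
        compute_s = max(measured_total_s - measured_latency_s, 1e-9)
        observed_speed = work_units / compute_s
        self.speed = (1 - self.alpha) * self.speed + self.alpha * observed_speed
        self.latency_s = ((1 - self.alpha) * self.latency_s
                          + self.alpha * measured_latency_s)

def dispatch(work_units, models):
    """Send the job to the node with the lowest predicted completion time."""
    return min(models, key=lambda name: models[name].predict(work_units))
```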