Device Management for non-oneM2M Devices? - oneM2M

I already discussed how to manage devices in oneM2M in this topic, but I noticed that I still have some misunderstandings.
What is the exact correlation between mgmtObj and mgmtCmd? It seems that mgmtObj keeps status such as the current software or firmware, the battery, device info, etc., and that objectIDs and objectPaths are used to map this information to a device management standard like LWM2M or TR-069. Is that correct?
I don't understand why a node can have multiple reboot objects in it.
Let's assume we have multiple different firmwares on a node, and each firmware controls a different part of the hardware. Then I guess I should create a mgmtCmd for each firmware, but how does a mgmtCmd know which firmware (mgmtObj) it is related to? There is no link between them when we look at the resource definitions in oneM2M. This actually points back to my first question about the relationship between mgmtObj and mgmtCmd, because somehow, when the mgmtCmd runs and finishes its job, the related firmware should be updated in the related node.
Let's assume I am not going to implement any device management standard like TR-069 or LWM2M. We are using non-oneM2M devices, each of which has its own proprietary way of device management. What is the simplest way to do that?
What we thought is that we should put some device management logic into the IPE (Interworking Proxy Entity), which can subscribe to all the events that occur in any related mgmtCmd for a device, like an update of its execEnable attribute or the creation of an execInstance. The IPE would then be notified with that execInstance and would manage the whole procedure. Is it suitable to use the subscription/notification mechanism for device management?
The mgmtCmd resource represents a method to execute management
procedures or to model commands and remote procedure calls (RPC)
required by existing management protocols (e.g. BBF TR-069 [i.4]),
and enables AEs to request management procedures to be executed on a
remote entity. It also enables cancellation of cancellable and
initiated but unfinished management procedures or commands.
The mgmtObj resource contains management data which enables
individual M2M management functions. It provides a general structure
to map to external management technology e.g. OMA DM [i.5], BBF TR-069
[i.4] and LWM2M [i.6] data models. Each instance of mgmtObj
resource shall be mapped to single external management technology.
-------------------------------- CLARIFICATION --------------------------------
When we look at the XSD of node, it contains child resources like:
List of firmwares
List of softwares
List of reboots
etc...
Actually I just made up that example; it's not a real-world scenario. I am also trying to understand why a node can have multiples of those resources like reboot and software; even multiple deviceInfo seems weird. What do they refer to?
<xs:schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.onem2m.org/xml/protocols"
           xmlns:m2m="http://www.onem2m.org/xml/protocols" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           elementFormDefault="unqualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:include schemaLocation="CDT-commonTypes-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-memory-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-battery-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-areaNwkInfo-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-areaNwkDeviceInfo-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-firmware-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-software-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-deviceInfo-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-deviceCapability-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-reboot-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-eventLog-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-cmdhPolicy-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-activeCmdhPolicy-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-subscription-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-semanticDescriptor-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-transaction-v3_9_0.xsd" />
    <xs:include schemaLocation="CDT-schedule-v3_9_0.xsd" />
    <xs:element name="node" substitutionGroup="m2m:sg_announceableResource">
        <xs:complexType>
            <xs:complexContent>
                <!-- Inherit common attributes for announceable Resources -->
                <xs:extension base="m2m:announceableResource">
                    <!-- Resource Specific Attributes -->
                    <xs:sequence>
                        <xs:element name="nodeID" type="m2m:nodeID" />
                        <xs:element name="hostedCSELink" type="m2m:ID" minOccurs="0" />
                        <xs:element name="hostedAELinks" type="m2m:listOfM2MID" minOccurs="0" />
                        <xs:element name="hostedServiceLinks" type="m2m:listOfM2MID" minOccurs="0" />
                        <xs:element name="mgmtClientAddress" type="xs:string" minOccurs="0" />
                        <xs:element name="roamingStatus" type="xs:boolean" minOccurs="0" />
                        <xs:element name="networkID" type="xs:string" minOccurs="0" />
                        <!-- Child Resources -->
                        <xs:choice minOccurs="0" maxOccurs="1">
                            <xs:element name="childResource" type="m2m:childResourceRef" minOccurs="1" maxOccurs="unbounded" />
                            <xs:choice minOccurs="1" maxOccurs="unbounded">
                                <xs:element ref="m2m:memory" />
                                <xs:element ref="m2m:battery" />
                                <xs:element ref="m2m:areaNwkInfo" />
                                <xs:element ref="m2m:areaNwkDeviceInfo" />
                                <xs:element ref="m2m:firmware" />
                                <xs:element ref="m2m:software" />
                                <xs:element ref="m2m:deviceInfo" />
                                <xs:element ref="m2m:deviceCapability" />
                                <xs:element ref="m2m:reboot" />
                                <xs:element ref="m2m:eventLog" />
                                <xs:element ref="m2m:cmdhPolicy" />
                                <xs:element ref="m2m:activeCmdhPolicy" />
                                <xs:element ref="m2m:subscription" />
                                <xs:element ref="m2m:semanticDescriptor" />
                                <xs:element ref="m2m:transaction" />
                                <xs:element ref="m2m:schedule" />
                            </xs:choice>
                        </xs:choice>
                    </xs:sequence>
                </xs:extension>
            </xs:complexContent>
        </xs:complexType>
    </xs:element>
</xs:schema>
----------------------- MORE CLARIFICATION -----------------------------
By the way, there is already a discussion about deviceInfo, and apparently the current version of oneM2M deliberately supports multiple deviceInfo resources per node. I am also curious about the meaning of multiple reboot or firmware resources per node.

To answer your questions one by one:
A specialisation of a <mgmtObj> holds the actual management information or represents an aspect of a device or node to be managed. Some of those specialisations can define "trigger" attributes that can execute a local action on a node. If one updates such an attribute then the action will be executed on the associated device.
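To make the trigger-attribute idea concrete, here is a sketch of what an UPDATE request body for a [firmware] specialisation could look like. The attribute names follow TS-0001's [firmware] definition, but the values and the exact serialisation here are illustrative assumptions, not a verified example:

```xml
<!-- Hypothetical UPDATE body for a [firmware] <mgmtObj> specialisation:
     setting the "update" trigger attribute asks the managed device to
     perform the firmware update. Version and URL values are made up. -->
<m2m:firmware xmlns:m2m="http://www.onem2m.org/xml/protocols">
    <version>2.1.0</version>
    <URL>http://fw.example.com/images/nmp-2.1.0.bin</URL>
    <update>true</update>
</m2m:firmware>
```

When the update finishes, the result would then be reflected in the updateStatus attribute of the same resource.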
A <mgmtCmd> represents a method to execute a remote command or action on a node or device. It offers a way to implement management functionality that is not provided by <mgmtObj> specialisations. It can also be used for tunnelling management functionality through oneM2M rather than abstracting it.
According to TS-0001, Table 9.6.18-1 "Child resources of <node> resource", the <node> resource shall only have 0 or 1 child resource of the reboot specialization.
Actually it seems that the XSD, which you also quote in your question, is not correct because it does not reflect the written spec (also for some other attributes).
The assumption here is that a firmware is the basic, non-modular software stack or operating system of a device. You can use the [software] specialization to support modular OS architectures where you install "drivers" or packages for various aspects of the device. Each of those software packages can be managed independently from the firmware. TR-069, for example, supports this kind of management.
The reason why a node may support multiple firmwares is that a device can store multiple firmware versions or generations, and you want to support this feature. Of course, only one firmware is active at a time.
In general, what you want to do is to define and implement a management adapter for your proprietary protocol. This would be an IPE that implements the logic to map between the oneM2M management resources and the proprietary aspects as well as to implement the local protocol to communicate with the proprietary devices.
Regarding the question about using subscriptions and notifications: this depends on your concrete deployment architecture, but sure, using subscriptions/notifications would be an efficient way to implement this. The alternative would be for the management IPE to poll for changes of the relevant resources, which in general is more resource-intensive.
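As a sketch, such a subscription could look roughly like the following CREATE body placed under the <mgmtCmd> resource. The notificationEventType value follows TS-0004's enumeration (3 = creation of a direct child resource, which would cover new <execInstance> children); the resource name and notification URI are assumptions:

```xml
<!-- Hypothetical <subscription> under a <mgmtCmd>: notify the IPE when a
     direct child resource (e.g. an <execInstance>) is created. -->
<m2m:subscription xmlns:m2m="http://www.onem2m.org/xml/protocols" rn="ipeWatcher">
    <eventNotificationCriteria>
        <notificationEventType>3</notificationEventType>
    </eventNotificationCriteria>
    <notificationURI>http://ipe.example.com/notify</notificationURI>
</m2m:subscription>
```

A second subscription with notificationEventType 1 (update of the subscribed-to resource) would cover changes to attributes such as execEnable.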


Trying to Create a new Policy for Multi Disk Operations

I am using ClickHouse with just one disk, which is specified in the config.xml file under <path>.
Now I want to extend this disk, so I updated the ClickHouse version to enable multi-disk support.
What I want to do now is use the two disks together: I want to read from both of them but write data to the second one only.
I have many tables. I thought changing the storage policy of the tables would do the trick, but I can't change it.
For example, I have a table called default_event which has the default policy. After this query:
alter table default_event modify setting storage_policy='newStorage_only';
I got this error: Exception: New storage policy default shall contain volumes of old one
My storage XML is like this:
<?xml version="1.0" encoding="UTF-8"?>
<yandex>
    <storage_configuration>
        <disks>
            <!--
                The default disk is special: it always exists even if not
                explicitly configured here, but you can't change its path here
                (you should use <path> in the top-level config instead).
            -->
            <default>
                <!--
                    You can reserve some amount of free space on any disk
                    (including default) by adding the keep_free_space_bytes tag.
                -->
                <keep_free_space_bytes>1024</keep_free_space_bytes>
            </default>
            <test_disk>
                <!-- The disk path must end with a slash, and the folder
                     should be writable for the clickhouse user. -->
                <path>/DATA/newStorage/</path>
            </test_disk>
            <test_disk_2>
                <!-- The disk path must end with a slash, and the folder
                     should be writable for the clickhouse user. -->
                <path>/DATA/secondStorage/</path>
            </test_disk_2>
            <test_disk_3>
                <!-- The disk path must end with a slash, and the folder
                     should be writable for the clickhouse user. -->
                <path>/DATA/thirdStorage/</path>
            </test_disk_3>
        </disks>
        <policies>
            <newStorage_only>
                <!-- name of the new storage policy -->
                <volumes>
                    <newStorage_volume>
                        <!-- name of the volume -->
                        <!--
                            We have only one disk in this volume, and we
                            reference the disk name as configured above in
                            the <disks> section.
                        -->
                        <disk>test_disk</disk>
                    </newStorage_volume>
                </volumes>
            </newStorage_only>
        </policies>
    </storage_configuration>
</yandex>
I tried adding the default volume to the new policy, but I can't start ClickHouse with that config.
So, your main problem is that you did not explicitly specify a storage policy before, which means the table implicitly got the default policy containing the default disk. A new policy must include all of the old policy's volumes and disks with the same names.
Below is a configuration based on yours, with everything unnecessary removed. Note that, in addition to the disks listed, you have the disk specified in <path>, whose name is default. All disks are listed in the volumes section of the new policy. Writing to the new disks will happen thanks to move_factor: the value 0.5 means that once 50% of a volume's disk space is used, data starts being written to the next volume, and so on.
As soon as the rest of the disks fill evenly, you can lower this value.
PS: if you do not want to keep the old disks in the new policy, you need to execute ALTER TABLE ... MOVE PARTITIONS/PARTS ... to transfer the partitions/parts to the new disks. Then the table will no longer be tied to the old disk, and there will be no need to specify it in the new storage policy. The disks, of course, must be pre-configured in the settings.
<yandex>
    <storage_configuration>
        <disks>
            <test_disk>
                <path>/DATA/newStorage/</path>
            </test_disk>
            <test_disk_2>
                <path>/DATA/secondStorage/</path>
            </test_disk_2>
            <test_disk_3>
                <path>/DATA/thirdStorage/</path>
            </test_disk_3>
        </disks>
        <policies>
            <!-- ... old policy ... -->
            <new_storage_only> <!-- policy name -->
                <volumes>
                    <default>
                        <disk>default</disk>
                    </default>
                    <new_volume>
                        <disk>test_disk</disk>
                        <disk>test_disk_2</disk>
                        <disk>test_disk_3</disk>
                    </new_volume>
                </volumes>
                <move_factor>0.5</move_factor>
            </new_storage_only>
        </policies>
    </storage_configuration>
</yandex>
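The ALTER mentioned in the PS could be sketched like this. The partition value is an assumption for illustration (check your table's actual partition key, e.g. via system.parts, before running it):

```sql
-- Hypothetical sketch: move the parts of one partition off the old disk,
-- then switch the table to the new policy. The partition value is made up.
ALTER TABLE default_event MOVE PARTITION '2021-01-01' TO DISK 'test_disk';
ALTER TABLE default_event MODIFY SETTING storage_policy = 'new_storage_only';
```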

How can you keep Kristen FPGA from implementing excess registers?

I am using Kristen to generate a Verilog FPGA host interface for a neuromorphic processor. I have implemented the basic host as follows,
<module name="nmp" duplicate="1000">
    <register name="start0" type="rdconst" mask="0xFFFFFFFF" default="0x00000000" description="Lower 64 bit start pointer of persistent NMP storage."></register>
    <register name="start1" type="rdconst" mask="0xFFFFFFFF" default="0x00000020" description="Upper 64 bit start pointer of persistent NMP storage."></register>
    <register name="size" type="rdconst" mask="0xFFFFFFFF" default="0x10000000" description="Size of NMP persistent storage in Mbytes."></register>
    <register name="c_start0" type="rdconst" mask="0xFFFFFFFF" default="0x10000000" description="Lower 64 bit start pointer of cached shared storage."></register>
    <register name="c_start1" type="rdconst" mask="0xFFFFFFFF" default="0x00000020" description="Upper 64 bit start pointer of cached shared storage."></register>
    <register name="c_size" type="rdconst" mask="0xFFFFFFFF" default="0x10000000" description="Size of cached shared storage in Mbytes."></register>
    <register name="row" type="rdwr" mask="0xFFFFFFFF" default="0x00000000" description="Configurable row location for this NMP."></register>
    <register name="col" type="rdwr" mask="0xFFFFFFFF" default="0x00000000" description="Configurable col location for this NMP."></register>
    <register name="threshold" type="rdwr" mask="0xFFFFFFFF" default="0x00000000" description="Configurable synaptic sum threshold for this instance."></register>
    <memory name="learn" memsize="0x00004000" type="mem_ack" description="Learning interface - Map input synapses to node intensity">
        <field name="input_id" size="22b" description="Input ID this map data is intended for."></field>
        <field name="scale" size="16b" description="The intensity scale for this input ID."></field>
    </memory>
</module>
The end result is that I am seeing a ton of registers being generated, and I have to scale my NMP size down to fit within the constraints of my FPGA. Is there a way to control the number of registers being generated here? Obviously I need to store settings for these different fields. Am I missing something?
I should add that I am trying to get to a 2048 scale on my NMP, but the best I can do is just over 1000, not quite 1024. If I implement without PCIe or host control, I can get to 2048 without issue.
If I understand correctly, each NMP instance has been coded with an internal register to store data, and the configuration you have shown will result in Kristen creating Verilog registers as well. Effectively, double-buffered storage is occurring.
Because of this, the number of registers is effectively doubled beyond what it needs to be. One way of dealing with this is to use another RAM interface, 32 bits wide. I do note that your config calls for 9 x 32-bit words, which is an odd size for a memory, so there will be some wasted address space. Kristen creates RAMs on binary boundaries, so you can get a 16 x 32-bit memory region that you can overlay on that interface, and then a second RAM just like the one you already have for the learn memory.
<module>
    <memory name="regs" memsize="0x10" type="mem_ack" description="Register mapping per NMP instance">
        <field name="start0" size="32b" description="Start0"></field>
        <field name="start1" size="32b" description="Start1"></field>
        ....
        <field name="threshold" size="32b" description="Threshold"></field>
    </memory>
    <memory name="learn" memsize="0x00004000" type="mem_ack" description="Learning interface - Map input synapses to node intensity">
        <field name="input_id" size="22b" description="Input ID this map data is intended for."></field>
        <field name="scale" size="16b" description="The intensity scale for this input ID."></field>
    </memory>
</module>
Generate this and take a look at the new interface. That should reduce the number of registers generated in your Verilog code and subsequent synthesis.

How much memory of a process is paged out?

Is there a performance counter which indicates how much of the memory of a specific process is paged out? I have a server which has 40 GB of available RAM (of 128 GB physical memory), but the paged-out amount of data is over 100 GB.
How can I find out which of my processes are responsible for that huge page file consumption?
It would also be OK to have some xperf tracing to see when the page-out activity happens. But apart from many writes to the page file, I cannot see from which processes the memory is written to the page file.
Reference Set tracing, as far as I understand it, only shows me how big the physical memory consumption of my process is. It does not seem to track page-out activity.
Update
The OS is Windows Server 2012 R2
The ETW provider "Microsoft-Windows-Kernel-Memory" has a keyword "KERNEL_MEM_KEYWORD_WS_SWAP" ("0x80"). Under it, there are some events that occur when data is paged out/in:
<event value="4" symbol="WorkingSetOutSwapStart" version="0" task="WorkingSetOutSwap" opcode="win:Start" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetOutSwapStartArgs"/>
<event value="4" symbol="WorkingSetOutSwapStart_V1" version="1" task="WorkingSetOutSwap" opcode="win:Start" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetOutSwapStartArgs_V1"/>
<event value="5" symbol="WorkingSetOutSwapStop" version="0" task="WorkingSetOutSwap" opcode="win:Stop" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetOutSwapStopArgs"/>
<event value="5" symbol="WorkingSetOutSwapStop_V1" version="1" task="WorkingSetOutSwap" opcode="win:Stop" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetOutSwapStopArgs_V1"/>
<event value="6" symbol="WorkingSetInSwapStart" version="0" task="WorkingSetInSwap" opcode="win:Start" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetOutSwapStartArgs"/>
<event value="6" symbol="WorkingSetInSwapStart_V1" version="1" task="WorkingSetInSwap" opcode="win:Start" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetOutSwapStartArgs_V1"/>
<event value="7" symbol="WorkingSetInSwapStop" version="0" task="WorkingSetInSwap" opcode="win:Stop" level="win:Informational" keywords="KERNEL_MEM_KEYWORD_WS_SWAP" template="WorkingSetInSwapStopArgs"/>
Here you get some data like the number of pages accessed (PagesProcessed):
<template tid="WorkingSetOutSwapStartArgs">
    <data name="ProcessId" inType="win:UInt32"/>
</template>
<template tid="WorkingSetOutSwapStopArgs">
    <data name="ProcessId" inType="win:UInt32"/>
    <data name="Status" inType="win:HexInt32"/>
    <data name="PagesProcessed" inType="win:UInt32"/>
</template>
<template tid="WorkingSetInSwapStopArgs">
    <data name="ProcessId" inType="win:UInt32"/>
    <data name="Status" inType="win:HexInt32"/>
</template>
<template tid="WorkingSetOutSwapStartArgs_V1">
    <data name="ProcessId" inType="win:UInt32"/>
    <data name="Flags" inType="win:HexInt32"/>
</template>
<template tid="WorkingSetOutSwapStopArgs_V1">
    <data name="ProcessId" inType="win:UInt32"/>
    <data name="Status" inType="win:HexInt32"/>
    <data name="PagesProcessed" inType="win:Pointer"/>
    <data name="WriteCombinePagesProcessed" inType="win:Pointer"/>
    <data name="UncachedPagesProcessed" inType="win:Pointer"/>
    <data name="CleanPagesProcessed" inType="win:Pointer"/>
</template>
Play with it and see if it includes all the data you need.
In Xperf you want to look for Hard Faults - note that this is a type of Page Fault, but page faults can often be handled in software without touching the drive. You can add a column in Task Manager to show page faults for each process.
You can get some information on a process by using a tool like https://technet.microsoft.com/en-us/sysinternals/vmmap.aspx which will tell you for each block of memory in the process address space what type it is, and how much is committed. However, it's the committed memory that can be paged out, and VirtualQueryEx() doesn't tell you about that.
It's also worth noting that a large quantity of paged out memory isn't always a bad thing - it's the hard faults that are slow.
Edit: Hmm, if you want an intrusive one-off test I guess there's the hacky option of combining VirtualQueryEx() and ReadProcessMemory() to touch every committed page in a process so you can count the hard faults!
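For what it's worth, that hacky VirtualQueryEx() + ReadProcessMemory() idea could be sketched in Python with ctypes along the following lines. This is a Windows-only sketch under stated assumptions (a 4 KB page size, and a target PID you have read access to), not a polished tool:

```python
import ctypes

# Layout of the Win32 MEMORY_BASIC_INFORMATION structure. DWORD is modelled
# as c_uint32 so the struct layout matches Windows regardless of platform.
class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", ctypes.c_uint32),
        ("PartitionId", ctypes.c_uint16),
        ("RegionSize", ctypes.c_size_t),
        ("State", ctypes.c_uint32),
        ("Protect", ctypes.c_uint32),
        ("Type", ctypes.c_uint32),
    ]

MEM_COMMIT = 0x1000
PAGE_NOACCESS = 0x01
PAGE_GUARD = 0x100
PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010

def touch_committed_pages(pid, page_size=4096):
    """Read one byte from every committed, readable page of the target
    process, forcing a hard fault for anything currently paged out."""
    k32 = ctypes.windll.kernel32  # only resolvable on Windows
    handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                             False, pid)
    if not handle:
        raise ctypes.WinError()
    mbi = MEMORY_BASIC_INFORMATION()
    byte = ctypes.c_char()
    got = ctypes.c_size_t()
    address = 0
    try:
        while k32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                 ctypes.byref(mbi), ctypes.sizeof(mbi)):
            committed = mbi.State == MEM_COMMIT
            readable = not (mbi.Protect & (PAGE_GUARD | PAGE_NOACCESS))
            if committed and readable:
                for offset in range(0, mbi.RegionSize, page_size):
                    # Each read of a paged-out page shows up as a hard
                    # fault in the trace / Task Manager fault counters.
                    k32.ReadProcessMemory(handle,
                                          ctypes.c_void_p(address + offset),
                                          ctypes.byref(byte), 1,
                                          ctypes.byref(got))
            address += mbi.RegionSize
    finally:
        k32.CloseHandle(handle)
```

You would call it as touch_committed_pages(1234) while watching the hard-fault counters; the PID is of course a placeholder.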

Simulating high latency in load test

I have a Visual Studio web performance test that I am planning on running with a variety of network mixes to simulate different network conditions. However when I report this I would like to know context of this (actual bandwidth, latency in ms, etc.). The best information I've found is this: http://msdn.microsoft.com/en-us/library/dd997557.aspx
Specifically what I would like to know is: What is the intra-continental connection's properties?
Is there a better reference on this?
As said by AdrianHHH, you can find a *.network file for each of the available network profiles at %ProgramFiles%\Microsoft Visual Studio XXX\Common7\IDE\Templates\LoadTest\Networks, e.g. IntracontinentalWAN.network. This file contains all the settings for a network profile, like latency, packet loss, queue management, etc.
A good description of all the properties is available here. There is no problem editing an existing profile, or creating a new one just for your specific needs.
So, the intra-continental connection's properties are:
<NetworkEmulationProfile name="Intra-continental WAN 1.5 Mbps" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
    <Emulation>
        <VirtualChannel name="ICWAN-Channel2">
            <FilterList />
            <VirtualLink instances="1" name="link1">
                <LinkRule dir="upstream">
                    <Bandwidth>
                        <Speed unit="kbps">1500</Speed>
                        <QueueManagement>
                            <NormalQueue>
                                <Size>100</Size>
                                <QueueMode>packet</QueueMode>
                                <DropType>DropTail</DropType>
                            </NormalQueue>
                        </QueueManagement>
                    </Bandwidth>
                    <Latency>
                        <Fixed>
                            <Time unit="msec">50</Time>
                        </Fixed>
                    </Latency>
                </LinkRule>
                <LinkRule dir="downstream">
                    <Bandwidth>
                        <Speed unit="kbps">1500</Speed>
                        <QueueManagement>
                            <NormalQueue>
                                <Size>100</Size>
                                <QueueMode>packet</QueueMode>
                                <DropType>DropTail</DropType>
                            </NormalQueue>
                        </QueueManagement>
                    </Bandwidth>
                    <Latency>
                        <Fixed>
                            <Time unit="msec">50</Time>
                        </Fixed>
                    </Latency>
                </LinkRule>
            </VirtualLink>
        </VirtualChannel>
    </Emulation>
</NetworkEmulationProfile>

"Insufficient content duration available" when playing a stream through the SmoothStreamingMediaElement

I am working on an application that features IIS Smooth Streaming using the SmoothStreamingMediaElement. Because of the nature of the project I can't disclose the source of the stream; I can, however, provide full technical information on the problem I encounter.
I separated the SmoothStreaming part into a separate application for testing purposes. Everything seems to be working well, since the test stream provided by Microsoft works the way it should (http://video3.smoothhd.com.edgesuite.net/ondemand/Big%20Buck%20Bunny%20Adaptive.ism/Manifest)
I took the restrictions for SmoothStreaming on Windows Phone into account:
- In the ManifestReady event the available tracks are filtered to show only one available resolution
- The device is not connected through Zune while testing.
The error message presented is very clear:
"3108 Insufficient content duration available to begin playback.
Available = 3840 ms, Required = 7250 ms"
I have not been able to find any references to this error. I did find some more information on where the required duration of 7250 ms originates from: this MSDN page suggests it has something to do with the LivePlaybackOffset, which defaults to 7 seconds and cannot be changed in the WP7 SmoothStreamingMediaElement. The same code works fine in a browser Silverlight application.
I don't have direct access to the server providing the stream. Is there a way to address this issue clientside? Or does it require server-side configuration? If it helps I can share parts of the source code, please let me know what parts would be relevant. Your help is highly appreciated!
This is the manifest file:
<SmoothStreamingMedia MajorVersion="2" MinorVersion="2" TimeScale="10000000" Duration="0" LookAheadFragmentCount="2" IsLive="TRUE" DVRWindowLength="300000000">
<StreamIndex Type="audio" QualityLevels="1" TimeScale="10000000" Name="audio" Chunks="7" Url="http://xxxx/xxx.isml/QualityLevels({bitrate})/Fragments(audio={start time})">
<QualityLevel Index="0" Bitrate="128000" CodecPrivateData="1190" SamplingRate="48000" Channels="2" BitsPerSample="16" PacketSize="4" AudioTag="255" FourCC="AACL"/>
<c t="3485836800000" d="38400000" r="7"/>
</StreamIndex>
<StreamIndex Type="video" QualityLevels="6" TimeScale="10000000" Name="video" Chunks="7" Url="http://xxxx/xxx.isml/QualityLevels({bitrate})/Fragments(video={start time})" MaxWidth="1024" MaxHeight="576" DisplayWidth="1024" DisplayHeight="576">
<QualityLevel Index="0" Bitrate="350000" CodecPrivateData="000000016742E01596540D0FF3CFFF80980097A440000003004000000CA10000000168CE060CC8" MaxWidth="405" MaxHeight="228" FourCC="AVC1" NALUnitLengthField="4"/>
<QualityLevel Index="1" Bitrate="700000" CodecPrivateData="000000016742E01E965404814F2FFF8140013FA440000003004000000CA10000000168CE060CC8" MaxWidth="568" MaxHeight="320" FourCC="AVC1" NALUnitLengthField="4"/>
<QualityLevel Index="2" Bitrate="1000000" CodecPrivateData="000000016742E01E965405217F7FFE0B800B769100000300010000030032840000000168CE060CC8" MaxWidth="654" MaxHeight="368" FourCC="AVC1" NALUnitLengthField="4"/>
<QualityLevel Index="3" Bitrate="1300000" CodecPrivateData="00000001674D4028965605819FDE029100000300010000030032840000000168EA818332" MaxWidth="704" MaxHeight="396" FourCC="AVC1" NALUnitLengthField="4"/>
<QualityLevel Index="4" Bitrate="1600000" CodecPrivateData="00000001674D402A965605A1AFCFFF80CA00CAA440000003004000000CA10000000168EA818332" MaxWidth="718" MaxHeight="404" FourCC="AVC1" NALUnitLengthField="4"/>
<QualityLevel Index="5" Bitrate="2000000" CodecPrivateData="00000001674D4032965300800936029100000300010000030032840000000168E96060CC80" MaxWidth="1024" MaxHeight="576" FourCC="AVC1" NALUnitLengthField="4"/>
<c t="3485836800000" d="38400000" r="7"/>
</StreamIndex>
</SmoothStreamingMedia>
I know this question is a bit old, but I had a very similar problem today, so I thought I should answer it...
The problem is with the r="7".
This attribute is not documented in the MS documentation and is only found in Smooth Streaming version 2.2 and above (not 2.0).
r="7" means that the chunk entry in the manifest should be repeated 7 times; since d="38400000" at a TimeScale of 10000000 is 3.84 s per chunk, you have 7 * 3.84 s in total.
There is a blog post which explains it here:
http://blogs.iis.net/samzhang/archive/2011/03/10/how-to-troubleshoot-live-smooth-streaming-issues-part-5-client-manifest.aspx
