I want to get the top "OID" for a given MIB, e.g.:
CISCO-SMI = 1.3.6.1.4.1.9
CISCO-PROCESS-MIB = 1.3.6.1.4.1.9.9.109
I can easily get this by googling, however I need to get this from a system, preferably with native SNMP commands. I can't walk a device. I can do an snmptranslate, which will give me all OIDs for that MIB, but I only want the OID that identifies the MIB itself:
snmptranslate -Tso -m /usr/share/snmp/mibs/CISCO-PROCESS-MIB.txt
.1.3
.iso.org
...
.1.3.6.1.4.1.9
.iso.org.dod.internet.private.enterprises.cisco
...
.1.3.6.1.4.1.9.9.109
.iso.org.dod.internet.private.enterprises.cisco.ciscoMgmt.ciscoProcessMIB
So I need to be able to say that CISCO-PROCESS-MIB = .1.3.6.1.4.1.9.9.109
I've done a fair bit of google-fu but am not coming up with anything that gives me the above. Is it possible to do this without an external MIB browsing tool?
The set of all SNMP OIDs can be expressed as a tree, of which a particular MIB file defines a (possibly empty) sub-forest whose leaf nodes are the actual MIB OBJECTs. I.e., a MIB file defines a set of sub-trees. If you are lucky, the set of sub-trees starts at a single node, and no other MIB defines OIDs under that node.
Given this background, in the MIMIC SNMP Simulator we define a TOPOID as the lowest OID (in the hierarchy) that contains all the OIDs defined in the MIB. In MIMIC we maintain the set of TOPOIDs for all the MIBs that the simulator knows about, so that you can quickly determine from an arbitrary leaf OID which MIB it is in (by finding the lowest TOPOID in the hierarchy), e.g.:
% ./oidinfo 1.3.6.1.4.1.9.9.109
INFO 04/19.10:58:34 - OID 1.3.6.1.4.1.9.9.109 = ciscoProcessMIB
INFO 04/19.10:58:34 - MIB = cisco/CISCO-PROCESS-MIB
...
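If you don't have a tool like MIMIC, you can approximate a TOPOID yourself from the snmptranslate -Tso output shown in the question: drop the ancestor OIDs it prints, then take the longest common prefix of the remaining leaf OIDs. A minimal sketch (the sample list below is an illustrative subset of the output, not the full MIB):

```python
def top_oid(oids):
    """Heuristic TOPOID: longest common dotted prefix of the leaf OIDs.

    snmptranslate -Tso also prints ancestor nodes (.1.3, .1.3.6.1.4.1.9, ...),
    so first drop every OID that is a prefix of a deeper one.
    """
    split = [tuple(o.strip(".").split(".")) for o in oids]
    leaves = [s for s in split
              if not any(t != s and t[:len(s)] == s for t in split)]
    prefix = []
    for parts in zip(*leaves):
        if all(p == parts[0] for p in parts):
            prefix.append(parts[0])
        else:
            break
    return "." + ".".join(prefix)

# Illustrative subset of the numeric lines from:
#   snmptranslate -Tso -m /usr/share/snmp/mibs/CISCO-PROCESS-MIB.txt
sample = [
    ".1.3",
    ".1.3.6.1.4.1.9",
    ".1.3.6.1.4.1.9.9.109",
    ".1.3.6.1.4.1.9.9.109.1.1.1.1.3",
    ".1.3.6.1.4.1.9.9.109.1.2.1.0",
    ".1.3.6.1.4.1.9.9.109.2.1.1",
]
print(top_oid(sample))  # -> .1.3.6.1.4.1.9.9.109
```

Note this is only a heuristic: it assumes the MIB's objects all live under a single node, which, as noted above, is true only if you are lucky.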
SSC5 says:
4.2.5 Early-warning
If writing, the application client needs an indication that it is approaching the end of the permissible recording area (i.e., end of the partition (see 4.2.7)). This position, called early-warning (EW), is typically reported to the application client at a position early enough for the device to write any buffered logical objects to the medium while still leaving enough room for additional recorded logical objects (see figure 10 and figure 11). Some American National Standards include physical requirements for a marker placed on the medium to be detected by the device as early-warning.
Can anyone tell me where EW is on the LTO tape, e.g. LTO-5 or LTO-6?
Does it depend on the vendor of the tape?
Is it tens or hundreds of MBs from EW to EOP?
I can't find the reference...
Here is a direct quote from HPE LTO-6 White Paper. Note: EWEOM stands for "Early Warning End Of Media".
The EWEOM is set to be slightly less than the native capacity of 2.5 TB for LTO-6 cartridges, as required by the LTO format.
Crucially, however, the EWEOM is slightly before the actual physical end of tape which means every LTO Ultrium format cartridge has a little bit more capacity than the stated headline figure. For LTO-6, this additional space is the equivalent of an additional 5% of capacity, although it is reserved exclusively for the system and cannot be accessed
via the backup software. The excess tape is the first section of the media that is used when there are higher than expected errors so that any rewrite and error correction takes place without losing the stated capacity of the tape.
Going back to your questions:
Can anyone tell me where EW is on the LTO tape, e.g. LTO-5 or LTO-6?
5% of additional capacity corresponds to 125 GB in LTO-6 media (2500 GB * 5% = 125 GB). This means that the position of EW (EWEOM) in LTO-6 should be located at roughly 7 wraps before EOP. Note: 1 wrap ≈ 18 GB in LTO-6. Please note that this location depends on the generation. As an example, if we assume that LTO-5 media also has 5% of additional capacity, there would be 75 GB in this region, which corresponds to roughly 4 wraps. This is just an example - I could not find the exact spare capacity of LTO-5.
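The wrap arithmetic above can be checked directly (the 18 GB-per-wrap figure for LTO-6 is the approximate value quoted in the note above):

```python
native_capacity_gb = 2500   # LTO-6 native capacity
spare_fraction = 0.05       # spare capacity required by the LTO format
gb_per_wrap = 18            # approximate LTO-6 figure from the note above

spare_gb = native_capacity_gb * spare_fraction
wraps_before_eop = spare_gb / gb_per_wrap

print(spare_gb)                  # 125.0
print(round(wraps_before_eop))   # 7
```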
Does it depend on the vendor of the tape?
Since this spare capacity is required by the LTO format, I believe that the location is independent of the tape manufacturer.
Is it tens or hundreds of MBs from EW to EOP?
Once again, LTO-6 has 5% of spare capacity, which corresponds to 125 GB - so the gap from EW to EOP is on the order of a hundred GBs, not MBs. I guess that this margin depends on the generation, but it should be roughly a few percent of capacity. This is my best guess.
I'm working with Chicken Scheme, and I wonder how many elements a list can have.
There is no hard limit – it can have as many as there's room for in memory.
The documentation, under the runtime option -:hmNUMBER, mentions that there is a default maximum heap size of 2 GB, which gives you about 45 million pairs. You can increase this with several options, but the simplest way to set a different default memory limit is the compiler flag -heap-size. Here is how to double the default:
csc -heap-size 4000M <file>
The documentation for -heap-size says that only half of the allocated memory is in use at any given time. This suggests a semi-space (stop-and-copy) garbage collector: when the active half fills up, live objects are copied to the unused half, and the roles of the two halves are swapped.
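The "about 45 million pairs" figure can be reproduced if we assume a 64-bit build where a pair costs 3 machine words (header word + car + cdr = 24 bytes) - an assumption on my part, not something the documentation states:

```python
heap_bytes = 2 * 1024**3   # default 2 GB heap
usable = heap_bytes // 2   # only half is live at a time (copying GC)
pair_bytes = 3 * 8         # assumed: header word + car + cdr on 64-bit

pairs = usable // pair_bytes
print(pairs)               # 44739242, i.e. roughly 45 million
```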
The paper TensorFlow: A System for Large-Scale Machine Learning, §3.3, says:
We optimized TensorFlow for executing large subgraphs repeatedly with low latency. Once the graph for a step has been pruned, placed, and partitioned, its subgraphs are cached in their respective devices. A client session maintains the mapping from step definitions to cached subgraphs, so that a distributed step on a large graph can be initiated with one small message to each participating task. This model favours static, reusable graphs, but it can support dynamic computations using dynamic control flow, as the next subsection describes.
How should 'cached in their respective devices' be understood here? Many APIs have a 'caching_device' parameter, but its default value is False; how should the cache feature be understood?
In general, a cache mechanism comes with a cache-invalidation policy, so what is the invalidation policy here?
If we clone the model graph across multiple GPUs with between-graph parallelism, i.e., multiple model clones refer to shared variables on the parameter server (ps), how does each clone read the remote variables? Does it cache the variables on some local device by default to reduce network communication?
More details:
A Tour of TensorFlow
https://arxiv.org/pdf/1610.01178.pdf
Finally, an important optimization made by TensorFlow at this step is “canonicalization” of (send,receive) pairs. In the setup displayed in Figure 5b, the existence of each recv node on device B would imply allocation and management of a separate buffer to store ν’s output tensor, so that it may then be fed to nodes α and β, respectively. However, an equivalent and more efficient transformation places only one recv node on device B, streams all output from ν to this single node, and then to the two dependent nodes α and β. This last and final evolution is given in Figure 5c.
The documentation above describes how the optimization in Figure 5c automatically reduces implicit read actions. If this occurs in a distributed system, network traffic is automatically reduced, as desired.
On the other hand, /model/slim/deployment/model_deploy.py tries to create a variable caching device as follows:
def caching_device(self):
  """Returns the device to use for caching variables.

  Variables are cached on the worker CPU when using replicas.

  Returns:
    A device string or None if the variables do not need to be cached.
  """
  if self._num_ps_tasks > 0:
    return lambda op: op.device
  else:
    return None
to try to optimize network traffic, I think.
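As a plain-Python illustration of what that device function does (the Op class below is a made-up stand-in, not a TensorFlow type): when parameter servers are in use, the returned callable caches each variable on the device of the op that reads it (i.e., locally on the worker); otherwise no caching device is set:

```python
class Op:
    """Hypothetical stand-in for a TensorFlow op with a device assignment."""
    def __init__(self, device):
        self.device = device

def caching_device(num_ps_tasks):
    # Mirrors the model_deploy.py logic: cache on the reader's own device
    # when variables live remotely on parameter servers.
    if num_ps_tasks > 0:
        return lambda op: op.device
    return None

device_fn = caching_device(num_ps_tasks=2)
read_op = Op("/job:worker/device:CPU:0")
print(device_fn(read_op))   # /job:worker/device:CPU:0
print(caching_device(0))    # None
```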
What is the real or best way to do communication optimization in a distributed system?
We would also appreciate more clarification about this, and we will try to update this issue if I get more experimental tuning results.
The only advantage I can think of for using 16-bit instead of 64-bit addressing on an IEEE 802.15.4 network is that 6 bytes are saved in each frame. There might be a small win for memory-constrained devices (microcontrollers) as well, especially if they need to keep a list of many addresses.
But there are a couple of drawbacks:
A coordinator must be present to deal out short addresses
Big risk of conflicting addresses
A device might be assigned a new address without other nodes knowing
Are there any other advantages of short addressing that I'm missing?
You are correct in your reasoning: it saves 6 bytes, which is a non-trivial amount given the packet size limit. This is also done with PanId vs ExtendedPanId addressing.
You are inaccurate about some of your other points though:
The coordinator does not assign short addresses. A device randomly picks one when it joins the network.
Yes, there is a 1/65000 or so chance for a collision. When this happens, BOTH devices pick a new short address and notify the network that there was an address conflict. (In practice I've seen this happen all of twice in 6 years)
This is why the binding mechanism exists. You create a binding using the 64-bit address. When transmission fails to a short address, the 64-bit address can be used to relocate the target node and correct the routing.
The short (16-bit) and simple (8-bit) addressing modes and the PAN ID Compression option allow a considerable saving of bytes in any 802.15.4 frame. You are correct that these savings are a small win for the memory-constrained devices that 802.15.4 is designed to work on; however, the main goal of these savings is to reduce radio usage.
The original design goals for 802.15.4 were along the lines of 10 metre links, 250kbit/s, low-cost, battery operated devices.
The maximum frame length in 802.15.4 is 128 bytes. The "full" addressing modes in 802.15.4 consist of a 16-bit PAN ID and a 64-bit Extended Address for both the transmitter and receiver. This amounts to 20 bytes, or about 15% of the available bytes in the frame. If these long addresses had to be used all of the time, there would be a significant impact on the amount of application data that could be sent in any frame AND on the energy used to operate the radio transceivers in both Tx and Rx.
The 802.15.4 MAC layer defines an association process that can be used to negotiate and use shorter addressing mechanisms. The addressing that is typically used is a single 16-bit PAN ID and two 16-bit Short Ids, which amounts to 6 bytes or about 5% of the available bytes.
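The byte counts quoted above work out as follows:

```python
frame_max = 128            # maximum 802.15.4 frame length in bytes

# Full addressing: src and dst each carry a 16-bit PAN ID + 64-bit extended address.
full = 2 * (2 + 8)

# After association: one shared 16-bit PAN ID + two 16-bit short IDs.
short = 2 + 2 * 2

print(full, f"{100 * full / frame_max:.1f}%")    # 20 15.6%
print(short, f"{100 * short / frame_max:.1f}%")  # 6 4.7%
```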
On your list of drawbacks:
Yes, a coordinator must hand out short addresses. How the addresses are created and allocated is not specified but the MAC layer does have mechanisms for notifying the layers above it that there are conflicts.
The risk of conflicts is not large as there are 65533 possible addresses to be handed out, and 802.15.4 is only worried about "Layer 2" links (NB: 0xFFFF and 0xFFFE are special values). These addresses are not routable/routing/internetworking addresses (well, not from 802.15.4's perspective).
Yes, I guess a device might get a new address without the other nodes knowing but I have a hunch this question has more to do with ZigBee's addressing than with the 802.15.4 MAC addressing. Unfortunately I do not know much about ZigBee's addressing so I can't comment too much here.
I think it is important for me to point out that 802.15.4 is a layer 1 and layer 2 specification, and ZigBee is layer 3 and up, i.e. ZigBee sits on top of 802.15.4.
This table is not 100% accurate, but I find it useful to think of 802.15.4 in this context:
+---------------+------------------+------------+
| Application | HTTP / FTP /Etc | COAP / Etc |
+---------------+------------------+------------+
| Transport | TCP / UDP | |
+---------------+------------------+ ZigBee |
| Network | IP | |
+---------------+------------------+------------+
| Link / MAC | WiFi / Ethernet | 802.15.4 |
| | Radio | Radio |
+---------------+------------------+------------+
How can I calculate storage when FTPing to a mainframe? I was told LRECL will always remain '80'. I'm not sure how I can calculate PRI and SEC dynamically based on the file size...
QUOTE SITE LRECL=80 RECFM=FB CY PRI=100 SEC=100
If the site has SMS, you shouldn't need to. But if you do need to calculate: the number of tracks is the size of the file in bytes divided by 56,664, and the number of cylinders is the size of the file in bytes divided by 849,960. In either case, round up.
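For example, for a 10 MiB file those divisions (3390 geometry: 56,664 bytes per track, 849,960 bytes per cylinder, i.e. 15 tracks) give:

```python
import math

BYTES_PER_TRACK = 56664    # 3390 track capacity used above
BYTES_PER_CYL = 849960     # 15 tracks per cylinder

file_size = 10 * 1024 * 1024   # example: a 10 MiB file

tracks = math.ceil(file_size / BYTES_PER_TRACK)
cyls = math.ceil(file_size / BYTES_PER_CYL)
print(tracks, cyls)   # 186 13
```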
Unfortunately IBM's FTP server does not support the newer space allocation specifications in number of records (the JCL parameter AVGREC=U/M/K plus the record length as the first specification in the SPACE parameter).
However, there is an alternative, and that is to fall back on one of the lesser-used SPACE parameters - the blocksize specification. I will assume 3390 disk types for simplicity, and standard data sets.
For fixed-length records, you want to calculate the largest block size that will fit in half a track (27994 bytes), because z/OS only supports block sizes up to 32760. Since you are dealing with 80-byte records, that number is 27920. Divide your file size by that number and that will give you the number of blocks. Then in a SITE server command, specify
SITE BLKSIZE=27920 LRECL=80 RECFM=FB BLOCKS=27920 PRI=calculated# SEC=a_little_extra
This is equivalent to SPACE=(27920,(calculated#,a_little_extra)).
z/OS space allocation calculates the number of tracks required and rounds up to the nearest track boundary.
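Putting the fixed-length calculation together (half-track = 27994 bytes on 3390; the file size here is an arbitrary example):

```python
import math

HALF_TRACK = 27994   # largest block z/OS can fit twice per 3390 track
LRECL = 80

# Largest multiple of the record length that fits in half a track.
blksize = (HALF_TRACK // LRECL) * LRECL

file_size = 50 * 1024 * 1024              # example: a 50 MiB file
primary = math.ceil(file_size / blksize)  # PRI, in blocks

print(blksize)   # 27920
print(primary)   # 1878
```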
For variable-length records, if your reading application can handle it, always use BLKSIZE=27994. The reason I have the warning about the reading application is that even today there are applications from ISVs that still have strange hard-coded maximum variable length blocks such as 12K.
If you are dealing with PDSEs, always use BLKSIZE=32760 for variable-length and the closest-to-32760 for fixed-length in your specification (32720 for FB/80), but calculate requirements based on BLKSIZE=4096. PDSEs are strange in their underlying layout; the physical records are 4096 bytes, which is because there is some linear data set VSAM code that handles the physical I/O.