PowerShell + MegaCLI - Making the output more readable - Windows

Looking for some help with making the output of a MegaCLI command a bit more readable.
The output is:
PS C:\Users\Administrator> C:\Users\Administrator\Downloads\8-04-07_MegaCLI\Win_CliKL_8.04.07\MegaCliKL -LDInfo -Lall -aAll
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :OS
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 558.375 GB
Mirror Data : 558.375 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 1 (Target Id: 1)
Name :Storage
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 7.275 TB
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 4
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Exit Code: 0x00
The command I'm using is:
C:\Users\Administrator\Downloads\8-04-07_MegaCLI\Win_CliKL_8.04.07\MegaCliKL -LDInfo -Lall -aAll
How can I make that information a bit more readable?
I only actually need: Name, Raid Level, Size, Number of drives, State, and Span Depth.
It has to be doable in just powershell.
Thanks in advance for any help!
Zack

If "a bit more readable" means "reduce output merely to lines starting with listed items":
$MegaCliKL = & C:\Users\Administrator\Downloads\8-04-07_MegaCLI\Win_CliKL_8.04.07\MegaCliKL -LDInfo -Lall -aAll
$listedItems = '^\s*Name',
               'Raid Level',
               'Size',
               'Number of drives',
               'State',
               'Span Depth' -join '|^\s*'
$MegaCliKL -match $listedItems |
    ForEach-Object {
        if ( $_ -match '^\s*Name' ) { '' }   # line separator
        $_
    }
Output:
Name :OS
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 558.375 GB
State : Optimal
Number Of Drives : 2
Span Depth : 1
Name :Storage
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 7.275 TB
State : Optimal
Number Of Drives : 4
Span Depth : 1
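If you want structured objects rather than filtered text, you can also fold the lines into PSCustomObjects and let Format-Table handle the layout. A minimal sketch, assuming the same $MegaCliKL capture as above ($fields is just the list of labels you asked for):
# Turn each "Virtual Drive" block into an object holding only the wanted fields
$fields  = 'Name', 'RAID Level', 'Size', 'State', 'Number Of Drives', 'Span Depth'
$drives  = @()
$current = $null
foreach ($line in $MegaCliKL) {
    if ($line -match '^\s*Virtual Drive:\s*(\d+)') {
        if ($current) { $drives += [pscustomobject]$current }   # flush the previous drive
        $current = [ordered]@{ 'Virtual Drive' = $Matches[1] }
    }
    elseif ($current -and $line -match '^\s*(.+?)\s*:\s*(.*)$') {
        if ($fields -contains $Matches[1]) { $current[$Matches[1]] = $Matches[2] }
    }
}
if ($current) { $drives += [pscustomobject]$current }           # flush the last drive
$drives | Format-Table -AutoSize
The same objects can then go to Export-Csv or ConvertTo-Json if you want them somewhere other than the console.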

Related

will "dd" for nvme use mmio or dma?

Recently I have been trying to debug an NVMe timeout issue:
# dd if=/dev/urandom of=/dev/nvme0n1 bs=4k count=1024000
nvme nvme0: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x2010
nvme nvme0: Shutdown timeout set to 8 seconds
nvme nvme0: 1/0/0 default/read/poll queues
nvme nvme0: I/O 388 QID 1 timeout, disable controller
blk_update_request: I/O error, dev nvme0n1, sector 64008 op 0x1:(WRITE) flags 0x104000 phys_seg 127 prio class 0
......
After some digging, I found the root cause is the PCIe controller's ranges DTS property, which is used for PIO/outbound mapping:
<0x02000000 0x00 0x08000000 0x20 0x04000000 0x00 0x04000000>; dd timeout
<0x02000000 0x00 0x04000000 0x20 0x04000000 0x00 0x04000000>; dd ok
Regardless of the root cause, it seems the timeout here is influenced by MMIO, because 0x02000000 stands for non-prefetchable MMIO. Is that true? And is it possible that dd triggers DMA, with the NVMe controller acting as bus master?
It uses DMA, not MMIO, for the data transfers. Here is the answer from Keith Busch:
Generally speaking, an nvme driver notifies the controller of new
commands via a MMIO write to a specific nvme register. The nvme
controller fetches those commands from host memory with a DMA.
One exception to that description is if the nvme controller supports CMB
with SQEs, but they're not very common. If you had such a controller,
the driver will use MMIO to write commands directly into controller
memory instead of letting the controller DMA them from host memory. Do
you know if you have such a controller?
The data transfers associated with your 'dd' command will always use DMA.
Below is ftrace output:
Call stack before nvme_map_data:
# entries-in-buffer/entries-written: 376/376 #P:2
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID TGID CPU# |||| TIMESTAMP FUNCTION
# | | | | |||| | |
kworker/u4:0-379 (-------) [000] ...1 3712.711523: nvme_map_data <-nvme_queue_rq
kworker/u4:0-379 (-------) [000] ...1 3712.711533: <stack trace>
=> nvme_map_data
=> nvme_queue_rq
=> blk_mq_dispatch_rq_list
=> __blk_mq_do_dispatch_sched
=> __blk_mq_sched_dispatch_requests
=> blk_mq_sched_dispatch_requests
=> __blk_mq_run_hw_queue
=> __blk_mq_delay_run_hw_queue
=> blk_mq_run_hw_queue
=> blk_mq_sched_insert_requests
=> blk_mq_flush_plug_list
=> blk_flush_plug_list
=> blk_mq_submit_bio
=> __submit_bio_noacct_mq
=> submit_bio_noacct
=> submit_bio
=> submit_bh_wbc.constprop.0
=> __block_write_full_page
=> block_write_full_page
=> blkdev_writepage
=> __writepage
=> write_cache_pages
=> generic_writepages
=> blkdev_writepages
=> do_writepages
=> __writeback_single_inode
=> writeback_sb_inodes
=> __writeback_inodes_wb
=> wb_writeback
=> wb_do_writeback
=> wb_workfn
=> process_one_work
=> worker_thread
=> kthread
=> ret_from_fork
Call graph of nvme_map_data:
# tracer: function_graph
#
# CPU DURATION FUNCTION CALLS
# | | | | | | |
0) | nvme_map_data [nvme]() {
0) | __blk_rq_map_sg() {
0) + 15.600 us | __blk_bios_map_sg();
0) + 19.760 us | }
0) | dma_map_sg_attrs() {
0) + 62.620 us | dma_direct_map_sg();
0) + 66.520 us | }
0) | nvme_pci_setup_prps [nvme]() {
0) | dma_pool_alloc() {
0) | _raw_spin_lock_irqsave() {
0) 1.880 us | preempt_count_add();
0) 5.520 us | }
0) | _raw_spin_unlock_irqrestore() {
0) 1.820 us | preempt_count_sub();
0) 5.260 us | }
0) + 16.400 us | }
0) + 23.500 us | }
0) ! 150.100 us | }
nvme_pci_setup_prps is one of the methods the nvme driver uses to set up DMA:
NVMe devices transfer data to and from system memory using Direct Memory Access (DMA). Specifically, they send messages across the PCI bus requesting data transfers. In the absence of an IOMMU, these messages contain physical memory addresses. These data transfers happen without involving the CPU, and the MMU is responsible for making access to memory coherent.
NVMe devices also may place additional requirements on the physical layout of memory for these transfers. The NVMe 1.0 specification requires all physical memory to be describable by what is called a PRP list. To be described by a PRP list, memory must have the following properties:
The memory is broken into physical 4KiB pages, which we'll call device pages.
The first device page can be a partial page starting at any 4-byte aligned address. It may extend up to the end of the current physical page, but not beyond.
If there is more than one device page, the first device page must end on a physical 4KiB page boundary.
The last device page begins on a physical 4KiB page boundary, but is not required to end on a physical 4KiB page boundary.
https://spdk.io/doc/memory.html
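To make those rules concrete, here is a toy page counter (PowerShell, purely illustrative; Get-DevicePageCount is a made-up helper, not driver code) that computes how many 4 KiB device pages, and hence PRP entries, a physically contiguous buffer spans:
# Toy illustration of the PRP rules quoted above; assumes a
# physically contiguous buffer and no IOMMU.
function Get-DevicePageCount {
    param([uint64]$PhysAddr, [uint64]$Length)
    $page  = [uint64]4KB
    $first = $page - ($PhysAddr % $page)    # bytes that fit in the first device page
    if ($Length -le $first) { return 1 }    # the transfer fits in a single page
    1 + [uint64][math]::Ceiling(($Length - $first) / $page)
}
Get-DevicePageCount -PhysAddr 0x10000204 -Length 16KB   # -> 5 (one partial page + four more)
In a real command the first two entries go in the PRP1/PRP2 fields; anything beyond that spills into a PRP list, which is what nvme_pci_setup_prps builds in the trace above.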

Understanding the physical and virtual memory layout in my kernel

I have a DragonBoard 410c, which is based on arm64, and when it boots it shows the memory layout:
software IO TLB [mem 0xb6c00000-0xbac00000] (64MB) mapped at [ff]
Memory: 780212K/951296K available (9940K kernel code, 1294K rwda)
Virtual kernel memory layout:
vmalloc : 0xffffff8000000000 - 0xffffffbdbfff0000 ( 246 )
vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000 ( 8 )
0xffffffbdc0000000 - 0xffffffbdc1000000 ( 16 )
fixed : 0xffffffbffa7fd000 - 0xffffffbffac00000 ( 4108 )
PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000 ( 16 )
modules : 0xffffffbffc000000 - 0xffffffc000000000 ( 64 )
memory : 0xffffffc000000000 - 0xffffffc040000000 ( 1024 )
.init : 0xffffffc000e49000 - 0xffffffc000f43000 ( 1000 )
.text : 0xffffffc000080000 - 0xffffffc000e483e4 ( 14113 )
I could not find an explanation of what it means.
Especially, what is the vmemmap region, and why are there two address intervals for it?
Also, what are the "fixed" and the "memory" zones?
I found out that whenever I use kmalloc, no matter with what flags, I get an address from the memory region. Even when I use vmalloc, the address I receive is not from the vmalloc region.
So is it possible to use regions other than the memory region in a kernel module?

Mistake in Virtual Hard Disk Image Format Specification?

I want to calculate the end offset of a parent locator in a VHD. Here is a part of the VHD header:
Cookie: cxsparse
Data offset: 0xffffffffffffffff
Table offset: 0x2000
Header version: 0x00010000
Max table entries: 10240
Block size: 0x200000
Checksum: 4294956454
Parent Unique Id: 0x9678bf077e719640b55e40826ce5d178
Parent time stamp: 525527478
Reserved: 0
Parent Unicode name:
Parent locator 1:
- platform code: 0x57326b75
- platform_data_space: 4096
- platform_data_length: 86
- reserved: 0
- platform_data_offset: 0x1000
Parent locator 2:
- platform code: 0x57327275
- platform_data_space: 65536
- platform_data_length: 34
- reserved: 0
- platform_data_offset: 0xc000
Some definitions from the Virtual Hard Disk Image Format Specification:
"Table Offset: This field stores the absolute byte offset of the Block Allocation Table (BAT) in the file.
Platform Data Space: This field stores the number of 512-byte sectors needed to store the parent hard disk locator.
Platform Data Offset: This field stores the absolute file offset in bytes where the platform specific file locator data is stored.
Platform Data Length. This field stores the actual length of the parent hard disk locator in bytes."
Based on this the end offset of the two parent locators should be:
data offset + 512 * data space:
0x1000 + 512 * 4096 = 0x201000
0xc000 + 512 * 65536 = 0x200c000
But if one uses only data offset + data space:
0x1000 + 4096 = 0x2000 //end of parent locator 1, begin of BAT
0xc000 + 65536 = 0x1c000
This latter calculation makes much more sense: the end of the first parent locator is the beginning of the BAT (see header data above); and since the first BAT entry is 0xe7 (sector offset), this corresponds to file offset 0x1ce00 (sector offset * 512), which is OK, if the second parent locator ends at 0x1c000.
But if one uses the formula data offset + 512 * data space, the parent locator ends up overlapping other data. (In this example there would be no actual corruption, since Platform Data Length is very small.)
So is this a mistake in the specification, and the sentence
"Platform Data Space: This field stores the number of 512-byte sectors needed to store the parent hard disk locator."
should be
"Platform Data Space: This field stores the number of bytes needed to store the parent hard disk locator."?
Apparently Microsoft does not care about correcting this mistake; it was already discovered by the VirtualBox developers. VHD.cpp contains the following comment:
/*
* The VHD spec states that the DataSpace field holds the number of sectors
* required to store the parent locator path.
* As it turned out VPC and Hyper-V store the amount of bytes reserved for the
* path and not the number of sectors.
*/
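To sanity-check the two readings yourself, here is a small snippet (PowerShell, purely illustrative, using the locator values from the header above):
# End offset of each parent locator under both readings of
# platform_data_space (offsets/sizes from the header shown above):
$locators = @(
    @{ Offset = 0x1000; Space = 4096  },
    @{ Offset = 0xc000; Space = 65536 }
)
foreach ($l in $locators) {
    $asSectors = $l.Offset + 512 * $l.Space   # spec wording: space is in sectors
    $asBytes   = $l.Offset + $l.Space         # observed behavior: space is in bytes
    '0x{0:X} if sectors, 0x{1:X} if bytes' -f $asSectors, $asBytes
}
# -> 0x201000 if sectors, 0x2000 if bytes
# -> 0x200C000 if sectors, 0x1C000 if bytes
Only the byte reading puts the end of locator 1 exactly at the BAT's table offset of 0x2000.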

Finding maximum delay through an FPGA design from VHDL code written in Xilinx software

I am working on AES code and my aim is to create an architecture which gives the fastest performance. Hence I need to determine the delay from the time the input is given until the final output is obtained. The design is to be implemented on an FPGA. I need to find the delay via Xilinx simulation and the design summary; however, I fail to understand the various reports.
For model one I am giving the three reports from the design summary:
synthesis report
place and route report
static timing report
static timing report
--------------------------------------------------------------------------------
Release 9.2i Trace
Copyright (c) 1995-2007 Xilinx, Inc. All rights reserved.
C:\Xilinx92i\bin\nt\trce.exe -ise C:/Xilinx92i/sbox/sbox.ise -intstyle ise -e 3
-s 5 -xml dynamic5stage dynamic5stage.ncd -o dynamic5stage.twr
dynamic5stage.pcf
Design file: dynamic5stage.ncd
Physical constraint file: dynamic5stage.pcf
Device,package,speed: xc3s200,pq208,-5 (PRODUCTION 1.39 2007-04-13)
Report level: error report
Environment Variable Effect
-------------------- ------
NONE No environment variables were set
--------------------------------------------------------------------------------
INFO:Timing:2698 - No timing constraints found, doing default enumeration.
INFO:Timing:2752 - To get complete path coverage, use the unconstrained paths
option. All paths that are not constrained will be reported in the
unconstrained paths section(s) of the report.
INFO:Timing:3339 - The clock-to-out numbers in this timing report are based on
a 50 Ohm transmission line loading model. For the details of this model,
and for more information on accounting for different loading conditions,
please see the device datasheet.
Data Sheet report:
-----------------
All values displayed in nanoseconds (ns)
Setup/Hold to clock SYS_CLK
------------+------------+------------+------------------+--------+
| Setup to | Hold to | | Clock |
Source | clk (edge) | clk (edge) |Internal Clock(s) | Phase |
------------+------------+------------+------------------+--------+
BYTE_IN<0> | 2.659(R)| 0.515(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<1> | 3.216(R)| 0.381(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<2> | 3.373(R)| 0.453(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<3> | 3.155(R)| 0.001(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<4> | 3.419(R)| 0.663(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<5> | 4.055(R)| 0.118(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<6> | 3.389(R)| 0.545(R)|SYS_CLK_BUFGP | 0.000|
BYTE_IN<7> | 3.151(R)| 0.389(R)|SYS_CLK_BUFGP | 0.000|
RST | 2.750(R)| 0.970(R)|SYS_CLK_BUFGP | 0.000|
s | 3.140(R)| 0.344(R)|SYS_CLK_BUFGP | 0.000|
------------+------------+------------+------------------+--------+
Clock SYS_CLK to Pad
---------------+------------+------------------+--------+
| clk (edge) | | Clock |
Destination | to PAD |Internal Clock(s) | Phase |
---------------+------------+------------------+--------+
SUB_BYTE_OUT<0>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<1>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<2>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<3>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<4>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<5>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<6>| 6.404(R)|SYS_CLK_BUFGP | 0.000|
SUB_BYTE_OUT<7>| 6.403(R)|SYS_CLK_BUFGP | 0.000|
---------------+------------+------------------+--------+
Clock to Setup on destination clock SYS_CLK
---------------+---------+---------+---------+---------+
| Src:Rise| Src:Fall| Src:Rise| Src:Fall|
Source Clock |Dest:Rise|Dest:Rise|Dest:Fall|Dest:Fall|
---------------+---------+---------+---------+---------+
SYS_CLK | 3.612| | | |
---------------+---------+---------+---------+---------+
Analysis completed Sat Nov 29 11:39:23 2014
--------------------------------------------------------------------------------
Trace Settings:
-------------------------
Trace Settings
Peak Memory Usage: 93 MB
place & route report
Release 9.2i par J.36
Copyright (c) 1995-2007 Xilinx, Inc. All rights reserved.
ACER-PC:: Sat Nov 29 11:38:52 2014
par -w -intstyle ise -ol std -t 1 dynamic5stage_map.ncd dynamic5stage.ncd
dynamic5stage.pcf
Constraints file: dynamic5stage.pcf.
Loading device for application Rf_Device from file '3s200.nph' in environment C:\Xilinx92i.
"dynamic5stage" is an NCD, version 3.1, device xc3s200, package pq208, speed -5
Initializing temperature to 85.000 Celsius. (default - Range: 0.000 to 85.000 Celsius)
Initializing voltage to 1.140 Volts. (default - Range: 1.140 to 1.260 Volts)
INFO:Par:282 - No user timing constraints were detected or you have set the option to ignore timing constraints ("par
-x"). Place and Route will run in "Performance Evaluation Mode" to automatically improve the performance of all
internal clocks in this design. The PAR timing summary will list the performance achieved for each clock. Note: For
the fastest runtime, set the effort level to "std". For best performance, set the effort level to "high". For a
balance between the fastest runtime and best performance, set the effort level to "med".
Device speed data version: "PRODUCTION 1.39 2007-04-13".
Device Utilization Summary:
Number of BUFGMUXs 1 out of 8 12%
Number of External IOBs 19 out of 141 13%
Number of LOCed IOBs 0 out of 19 0%
Number of Slices 62 out of 1920 3%
Number of SLICEMs 0 out of 960 0%
Overall effort level (-ol): Standard
Placer effort level (-pl): High
Placer cost table entry (-t): 1
Router effort level (-rl): Standard
REAL time consumed by placer: 16 secs
CPU time consumed by placer: 10 secs
Writing design to file dynamic5stage.ncd
Total REAL time to Placer completion: 17 secs
Total CPU time to Placer completion: 11 secs
Starting Router
Phase 1: 482 unrouted; REAL time: 18 secs
Phase 2: 436 unrouted; REAL time: 18 secs
Phase 3: 178 unrouted; REAL time: 18 secs
Phase 4: 178 unrouted; (0) REAL time: 18 secs
Phase 5: 180 unrouted; (0) REAL time: 18 secs
Phase 6: 0 unrouted; (87) REAL time: 19 secs
Phase 7: 0 unrouted; (87) REAL time: 19 secs
Updating file: dynamic5stage.ncd with current fully routed design.
Phase 8: 0 unrouted; (0) REAL time: 20 secs
Phase 9: 0 unrouted; (0) REAL time: 20 secs
Total REAL time to Router completion: 20 secs
Total CPU time to Router completion: 13 secs
Partition Implementation Status
-------------------------------
No Partitions were found in this design.
-------------------------------
Generating "PAR" statistics.
**************************
Generating Clock Report
**************************
+---------------------+--------------+------+------+------------+-------------+
| Clock Net | Resource |Locked|Fanout|Net Skew(ns)|Max Delay(ns)|
+---------------------+--------------+------+------+------------+-------------+
| SYS_CLK_BUFGP | BUFGMUX6| No | 45 | 0.036 | 0.916 |
+---------------------+--------------+------+------+------------+-------------+
* Net Skew is the difference between the minimum and maximum routing
only delays for the net. Note this is different from Clock Skew which
is reported in TRCE timing report. Clock Skew is the difference between
the minimum and maximum path delays which includes logic delays.
The Delay Summary Report
The NUMBER OF SIGNALS NOT COMPLETELY ROUTED for this design is: 0
The AVERAGE CONNECTION DELAY for this design is: 0.832
The MAXIMUM PIN DELAY IS: 2.272
The AVERAGE CONNECTION DELAY on the 10 WORST NETS is: 1.786
Listing Pin Delays by value: (nsec)
d < 1.00 < d < 2.00 < d < 3.00 < d < 4.00 < d < 5.00 d >= 5.00
--------- --------- --------- --------- --------- ---------
337 142 2 0 0 0
Timing Score: 0
Asterisk (*) preceding a constraint indicates it was not met.
This may be due to a setup or hold violation.
------------------------------------------------------------------------------------------------------
Constraint | Check | Worst Case | Best Case | Timing | Timing
| | Slack | Achievable | Errors | Score
------------------------------------------------------------------------------------------------------
Autotimespec constraint for clock net SYS | SETUP | N/A| 3.612ns| N/A| 0
_CLK_BUFGP | HOLD | 0.702ns| | 0| 0
------------------------------------------------------------------------------------------------------
All constraints were met.
INFO:Timing:2761 - N/A entries in the Constraints list may indicate that the
constraint does not cover any paths or that it has no requested value.
Generating Pad Report.
All signals are completely routed.
Total REAL time to PAR completion: 21 secs
Total CPU time to PAR completion: 15 secs
Peak Memory Usage: 136 MB
Placement: Completed - No errors found.
Routing: Completed - No errors found.
Number of error messages: 0
Number of warning messages: 0
Number of info messages: 1
Writing design to file dynamic5stage.ncd
PAR done!
synthesis report
Release 9.2i - xst J.36
Copyright (c) 1995-2007 Xilinx, Inc. All rights reserved.
--> Parameter TMPDIR set to ./xst/projnav.tmp
CPU : 0.00 / 4.04 s | Elapsed : 0.00 / 4.00 s
--> Parameter xsthdpdir set to ./xst
CPU : 0.00 / 4.04 s | Elapsed : 0.00 / 4.00 s
--> Reading design: dynamic5stage.prj
=========================================================================
* Synthesis Options Summary *
=========================================================================
---- Source Parameters
Input File Name : "dynamic5stage.prj"
Input Format : mixed
Ignore Synthesis Constraint File : NO
---- Target Parameters
Output File Name : "dynamic5stage"
Output Format : NGC
Target Device : xc3s200-5-pq208
---- Source Options
Top Module Name : dynamic5stage
Automatic FSM Extraction : YES
FSM Encoding Algorithm : Auto
Safe Implementation : No
FSM Style : lut
RAM Extraction : Yes
RAM Style : Auto
ROM Extraction : Yes
Mux Style : Auto
Decoder Extraction : YES
Priority Encoder Extraction : YES
Shift Register Extraction : YES
Logical Shifter Extraction : YES
XOR Collapsing : YES
ROM Style : Auto
Mux Extraction : YES
Resource Sharing : YES
Asynchronous To Synchronous : NO
Multiplier Style : auto
Automatic Register Balancing : No
---- Target Options
Add IO Buffers : YES
Global Maximum Fanout : 500
Add Generic Clock Buffer(BUFG) : 8
Register Duplication : YES
Slice Packing : YES
Optimize Instantiated Primitives : NO
Use Clock Enable : Yes
Use Synchronous Set : Yes
Use Synchronous Reset : Yes
Pack IO Registers into IOBs : auto
Equivalent register Removal : YES
---- General Options
Optimization Goal : Speed
Optimization Effort : 1
Library Search Order : dynamic5stage.lso
Keep Hierarchy : NO
RTL Output : Yes
Global Optimization : AllClockNets
Read Cores : YES
Write Timing Constraints : NO
Cross Clock Analysis : NO
Hierarchy Separator : /
Bus Delimiter : <>
Case Specifier : maintain
Slice Utilization Ratio : 100
BRAM Utilization Ratio : 100
Verilog 2001 : YES
Auto BRAM Packing : NO
Slice Utilization Ratio Delta : 5
=========================================================================
=========================================================================
* HDL Compilation *
=========================================================================
Compiling vhdl file "C:/Xilinx92i/sbox/dynamic5stage.vhd" in Library work.
Entity <dynamic5stage> compiled.
Entity <dynamic5stage> (Architecture <Behavioral>) compiled.
=========================================================================
* Design Hierarchy Analysis *
=========================================================================
Analyzing hierarchy for entity <dynamic5stage> in library <work> (architecture <Behavioral>).
=========================================================================
* HDL Analysis *
=========================================================================
Analyzing Entity <dynamic5stage> in library <work> (Architecture <Behavioral>).
INFO:Xst:1561 - "C:/Xilinx92i/sbox/dynamic5stage.vhd" line 278: Mux is complete : default of case is discarded
Entity <dynamic5stage> analyzed. Unit <dynamic5stage> generated.
=========================================================================
HDL Synthesis Report
Macro Statistics
# ROMs : 1
16x4-bit ROM : 1
# Registers : 13
4-bit register : 12
8-bit register : 1
# Xors : 89
1-bit xor2 : 56
1-bit xor3 : 24
1-bit xor4 : 1
2-bit xor2 : 6
4-bit xor2 : 2
=========================================================================
=========================================================================
* Advanced HDL Synthesis *
=========================================================================
Loading device for application Rf_Device from file '3s200.nph' in environment C:\Xilinx92i.
INFO:Xst:2506 - Unit <dynamic5stage> : In order to maximize performance and save block RAM resources, the small ROM <Mrom_GALOIS_MUL_INV> will be implemented on LUT. If you want to force its implementation on block, use option/constraint rom_style.
INFO:Xst:2261 - The FF/Latch <STAGE2_1_3> in Unit <dynamic5stage> is equivalent to the following FF/Latch, which will be removed : <STAGE2_2_1>
=========================================================================
Advanced HDL Synthesis Report
Macro Statistics
# ROMs : 1
16x4-bit ROM : 1
# Registers : 55
Flip-Flops : 55
# Xors : 89
1-bit xor2 : 56
1-bit xor3 : 24
1-bit xor4 : 1
2-bit xor2 : 6
4-bit xor2 : 2
=========================================================================
=========================================================================
* Low Level Synthesis *
=========================================================================
Optimizing unit <dynamic5stage> ...
Mapping all equations...
Building and optimizing final netlist ...
Found area constraint ratio of 100 (+ 5) on block dynamic5stage, actual ratio is 3.
Final Macro Processing ...
=========================================================================
Final Register Report
Macro Statistics
# Registers : 55
Flip-Flops : 55
=========================================================================
=========================================================================
* Partition Report *
=========================================================================
Partition Implementation Status
-------------------------------
No Partitions were found in this design.
-------------------------------
=========================================================================
* Final Report *
=========================================================================
Final Results
RTL Top Level Output File Name : dynamic5stage.ngr
Top Level Output File Name : dynamic5stage
Output Format : NGC
Optimization Goal : Speed
Keep Hierarchy : NO
Design Statistics
# IOs : 19
Cell Usage :
# BELS : 114
# LUT2 : 22
# LUT2_D : 4
# LUT2_L : 1
# LUT3 : 14
# LUT3_L : 2
# LUT4 : 49
# LUT4_D : 3
# LUT4_L : 12
# MUXF5 : 7
# FlipFlops/Latches : 55
# FDR : 54
# FDRS : 1
# Clock Buffers : 1
# BUFGP : 1
# IO Buffers : 18
# IBUF : 10
# OBUF : 8
=========================================================================
Device utilization summary:
---------------------------
Selected Device : 3s200pq208-5
Number of Slices: 61 out of 1920 3%
Number of Slice Flip Flops: 55 out of 3840 1%
Number of 4 input LUTs: 107 out of 3840 2%
Number of IOs: 19
Number of bonded IOBs: 19 out of 141 13%
Number of GCLKs: 1 out of 8 12%
---------------------------
Partition Resource Summary:
---------------------------
No Partitions were found in this design.
---------------------------
=========================================================================
TIMING REPORT
NOTE: THESE TIMING NUMBERS ARE ONLY A SYNTHESIS ESTIMATE.
FOR ACCURATE TIMING INFORMATION PLEASE REFER TO THE TRACE REPORT
GENERATED AFTER PLACE-and-ROUTE.
Clock Information:
------------------
-----------------------------------+------------------------+-------+
Clock Signal | Clock buffer(FF name) | Load |
-----------------------------------+------------------------+-------+
SYS_CLK | BUFGP | 55 |
-----------------------------------+------------------------+-------+
Asynchronous Control Signals Information:
----------------------------------------
No asynchronous control signals found in this design
Timing Summary:
---------------
Speed Grade: -5
Minimum period: 4.822ns (Maximum Frequency: 207.394MHz)
Minimum input arrival time before clock: 6.639ns
Maximum output required time after clock: 6.216ns
Maximum combinational path delay: No path found
Timing Detail:
--------------
All values displayed in nanoseconds (ns)
=========================================================================
Timing constraint: Default period analysis for Clock 'SYS_CLK'
Clock period: 4.822ns (frequency: 207.394MHz)
Total number of paths / destination ports: 242 / 43
-------------------------------------------------------------------------
Delay: 4.822ns (Levels of Logic = 3)
Source: STAGE3_3_0 (FF)
Destination: STAGE4_2_3 (FF)
Source Clock: SYS_CLK rising
Destination Clock: SYS_CLK rising
Data Path: STAGE3_3_0 to STAGE4_2_3
Gate Net
Cell:in->out fanout Delay Delay Logical Name (Net Name)
---------------------------------------- ------------
FDR:C->Q 4 0.626 1.074 STAGE3_3_0 (STAGE3_3_0)
LUT4_D:I0->O 2 0.479 0.768 Mxor_GAL2_MUL_31_xor0000_xo<1>1 (GAL2_MUL_31_xor0000)
LUT4:I3->O 1 0.479 0.740 Mxor_OUTPUT1_xor0000_Result<1>11 (N211)
LUT4:I2->O 1 0.479 0.000 Mxor_OUTPUT1_xor0000_Result<1> (GALOIS_MUL_3<3>)
FDR:D 0.176 STAGE4_2_3
----------------------------------------
Total 4.822ns (2.239ns logic, 2.583ns route)
(46.4% logic, 53.6% route)
=========================================================================
Timing constraint: Default OFFSET IN BEFORE for Clock 'SYS_CLK'
Total number of paths / destination ports: 168 / 76
-------------------------------------------------------------------------
Offset: 6.639ns (Levels of Logic = 5)
Source: BYTE_IN<4> (PAD)
Destination: STAGE1_2_1 (FF)
Destination Clock: SYS_CLK rising
Data Path: BYTE_IN<4> to STAGE1_2_1
Gate Net
Cell:in->out fanout Delay Delay Logical Name (Net Name)
---------------------------------------- ------------
IBUF:I->O 7 0.715 1.201 BYTE_IN_4_IBUF (BYTE_IN_4_IBUF)
LUT2:I0->O 2 0.479 0.804 GALOIS_ADD_1<0>31 (GALOIS_ADD_1<0>_bdd5)
LUT4:I2->O 1 0.479 0.976 GALOIS_ADD_1<0>11 (GALOIS_ADD_1<0>_bdd0)
LUT3:I0->O 1 0.479 0.851 GALOIS_ADD_1<1>_SW0 (N25)
LUT4:I1->O 1 0.479 0.000 GALOIS_ADD_1<1> (GALOIS_ADD_1<1>)
FDR:D 0.176 STAGE1_2_1
----------------------------------------
Total 6.639ns (2.807ns logic, 3.832ns route)
(42.3% logic, 57.7% route)
=========================================================================
Timing constraint: Default OFFSET OUT AFTER for Clock 'SYS_CLK'
Total number of paths / destination ports: 8 / 8
-------------------------------------------------------------------------
Offset: 6.216ns (Levels of Logic = 1)
Source: OUTPUT_LATCH_7 (FF)
Destination: SUB_BYTE_OUT<7> (PAD)
Source Clock: SYS_CLK rising
Data Path: OUTPUT_LATCH_7 to SUB_BYTE_OUT<7>
Gate Net
Cell:in->out fanout Delay Delay Logical Name (Net Name)
---------------------------------------- ------------
FDR:C->Q 1 0.626 0.681 OUTPUT_LATCH_7 (OUTPUT_LATCH_7)
OBUF:I->O 4.909 SUB_BYTE_OUT_7_OBUF (SUB_BYTE_OUT<7>)
----------------------------------------
Total 6.216ns (5.535ns logic, 0.681ns route)
(89.0% logic, 11.0% route)
=========================================================================
CPU : 29.56 / 34.76 s | Elapsed : 29.00 / 34.00 s
-->
Total memory usage is 205164 kilobytes
Number of errors : 0 ( 0 filtered)
Number of warnings : 0 ( 0 filtered)
Number of infos : 3 ( 0 filtered)
To measure the performance of your AES block, you can multiply the autotimespec value of 3.612 ns from the bottom of the place and route report by the number of pipeline stages in your system. You write that you have 5 pipeline stages currently, so the total time through the system will be 5 * 3.612 ns = 18.060 ns. If you add another pipeline stage in the hope that it will make the system faster, then the clock must be able to run at a period of 18.060 ns / 6 = 3.010 ns for the added stage to improve your performance.
The tool has calculated a minimum clock period of 3.612 ns (about 277 MHz), but if you constrain SYS_CLK to be faster than that, the tools may be able to achieve a shorter period.
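Written out, the arithmetic is (PowerShell, merely restating the numbers above):
$period = 3.612          # ns, the autotimespec period from the PAR report
$stages = 5              # current pipeline depth
$stages * $period        # -> 18.06 ns total latency through the pipeline
1000 / $period           # -> ~276.9 MHz maximum clock frequency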

Parse MegaCLI output using BASH

I am dumping the complete configuration output of my 2 RAID controllers using LSI's MegaCLI command. I would like to parse the text file and print out only the lines I am interested in. For example:
"Adapter"
"Product Name"
"RAID Level Size State"
"Number Of Drives"
"Physical Disk"
"Raw Size"
"Link Speed"
"Media Type"
"Drive Temperature"
However, given the fact that the file contains configuration data for 2 RAID controller cards, one after the other, how would I best approach this task using ONLY BASH? Below is the output I am dealing with.
NOTE: I should mention that I plan on installing another RAID controller soon, so ideally I would want to use something like BASH's 'read' built-in to read in the file. That way, the script will automatically pick up a newly installed RAID controller's config data.
==============================================================================
Adapter: 0
Product Name: Supermicro SMC2208
Memory: 1024MB
BBU: Absent
Serial No:
==============================================================================
Number of DISK GROUPS: 1
DISK GROUP: 0
Number of Spans: 1
SPAN: 0
Span Reference: 0x00
Number of PDs: 2
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 54.947 GB
Sector Size : 512
Mirror Data : 54.947 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteThrough, ReadAhead, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Cached, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Enabled
Encryption Type : None
Bad Blocks Exist: No
PI type: No PI
Is VD Cached: No
Physical Disk Information:
Physical Disk: 0
Enclosure Device ID: 252
Slot Number: 0
Drive's postion: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: N/A
Device Id: 5
WWN: 5001517803d94502
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 55.899 GB [0x6fccf30 Sectors]
Non Coerced Size: 55.399 GB [0x6eccf30 Sectors]
Coerced Size: 54.947 GB [0x6de5000 Sectors]
Sector Size: 512
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: 300i
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x4433221103000000
Connected Port Number: 1(path0)
Inquiry Data: CVMP302300A6060AGN INTEL SSDSC2CT060A3 300i
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive Temperature : N/A
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
Physical Disk: 1
Enclosure Device ID: 252
Slot Number: 1
Drive's postion: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: N/A
Device Id: 2
WWN: 5001517803d855bb
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 55.899 GB [0x6fccf30 Sectors]
Non Coerced Size: 55.399 GB [0x6eccf30 Sectors]
Coerced Size: 54.947 GB [0x6de5000 Sectors]
Sector Size: 512
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: 300i
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x4433221102000000
Connected Port Number: 0(path0)
Inquiry Data: CVMP3020013L060AGN INTEL SSDSC2CT060A3 300i
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive Temperature : N/A
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
==============================================================================
Adapter: 1
Product Name: Supermicro SMC2208
Memory: 1024MB
BBU: Absent
Serial No:
==============================================================================
Number of DISK GROUPS: 1
DISK GROUP: 0
Number of Spans: 1
SPAN: 0
Span Reference: 0x00
Number of PDs: 2
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 54.947 GB
Sector Size : 512
Mirror Data : 54.947 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteThrough, ReadAhead, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAhead, Cached, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Enabled
Encryption Type : None
Bad Blocks Exist: No
PI type: No PI
Is VD Cached: No
Physical Disk Information:
Physical Disk: 0
Enclosure Device ID: 252
Slot Number: 0
Drive's postion: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: N/A
Device Id: 5
WWN: 5001517803d94502
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 55.899 GB [0x6fccf30 Sectors]
Non Coerced Size: 55.399 GB [0x6eccf30 Sectors]
Coerced Size: 54.947 GB [0x6de5000 Sectors]
Sector Size: 512
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: 300i
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x4433221103000000
Connected Port Number: 1(path0)
Inquiry Data: CVMP302300A6060AGN INTEL SSDSC2CT060A3 300i
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive Temperature : N/A
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
Physical Disk: 1
Enclosure Device ID: 252
Slot Number: 1
Drive's postion: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: N/A
Device Id: 2
WWN: 5001517803d855bb
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 55.899 GB [0x6fccf30 Sectors]
Non Coerced Size: 55.399 GB [0x6eccf30 Sectors]
Coerced Size: 54.947 GB [0x6de5000 Sectors]
Sector Size: 512
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: 300i
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x4433221102000000
Connected Port Number: 0(path0)
Inquiry Data: CVMP3020013L060AGN INTEL SSDSC2CT060A3 300i
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive Temperature : N/A
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
Exit Code: 0x00
When you say "ONLY BASH", do you really mean it? bash by itself is pretty powerless; it really depends on having a collection of non-builtin commands available to do anything nontrivial. Also, do you really just want the selected lines, or do you want to reformat the info at all?
If you just want the lines (and maybe a little header info) and have egrep available, this is simple:
MegaCLI -whateveroptions | egrep '^(Adapter|Product Name|RAID Level Size State|Number Of Drives|Physical Disk|Raw Size|Link Speed|Media Type|Drive Temperature) ?:'
(The optional space before the colon covers labels such as "Number Of Drives" and "Drive Temperature", which MegaCLI prints with a space before the colon.)
If you really need 100% pure bash, you can do it with read, case, and echo:
MegaCLI -whateveroptions | while read -r line; do
    case "$line" in
        # multi-word patterns must be quoted, and MegaCLI prints some
        # labels ("Number Of Drives", "Drive Temperature") with a space
        # before the colon, so those patterns drop the colon:
        Adapter:* | \
        "Product Name:"* | \
        "RAID Level Size State"* | \
        "Number Of Drives"* | \
        "Physical Disk:"* | \
        "Raw Size:"* | \
        "Link Speed:"* | \
        "Media Type:"* | \
        "Drive Temperature"* )
            echo "$line" ;;
    esac
done
If you get sick of parsing this stuff I would consider using a newer LSI controller and the StorCLI64 binary. StorCLI is somewhat similar to MegaCLI, but allows you to append " J" to every command to have the response returned in JSON.
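As a sketch of what the JSON buys you, shown in PowerShell to match the first question above (StorCLI's exact JSON key names vary by version and command, so treat the property path as illustrative):
$raw  = & .\storcli64.exe /c0 show J      # J suffix: return the result as JSON
$ctrl = ($raw -join "`n") | ConvertFrom-Json
$ctrl.Controllers[0].'Response Data'      # drill into whatever fields you need
From there it is ordinary object navigation instead of line parsing.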
