I have a .vmdk disk image created by MEmu (an Android emulator), and I need to extract one file from it using the command line.
I am able to do it with the 7-Zip GUI via drag and drop, but from the command line it does not work. I am open to any other options besides 7-Zip!
Thank you!
Below is the output of the command:

C:\Program Files\7-Zip>7z.exe l D:\vms\MEmu\MEmu96-2022102700023FFF-disk2.vmdk
7-Zip 19.00 (x64) : Copyright (c) 1999-2018 Igor Pavlov : 2019-02-21
Scanning the drive for archives:
1 file, 3719291904 bytes (3547 MiB)

Listing archive: D:\vms\MEmu\MEmu96-2022102700023FFF-disk2.vmdk

--
Path = D:\vms\MEmu\MEmu96-2022102700023FFF-disk2.vmdk
Open WARNING: Can not open the file as [VMDK] archive
Type = VHD
Physical Size = 3719291904
Offset = 0
Created = 2022-05-28 03:49:29
Cluster Size = 2097152
Method = Dynamic
Total Physical Size = 3719291904
Creator Application = vbox 6.1
Host OS = Windows
Saved State = -
ID = 19112220CCCCCCCCCCCC000000000000
----
Size = 68826850816
Packed Size = 3718250496
Created = 2022-05-28 03:49:29
--
Path = MEmu96-2022102700023FFF-disk2.mbr
Type = MBR
Physical Size = 68826850816

   Date      Time    Attr         Size   Compressed  Name
------------------- ----- ------------ ------------  ------------------------
                    .....      1048576      1048576  0.img
                    .....      1048576      1048576  1.img
                    .....  68823705088  68823705088  2.img
------------------- ----- ------------ ------------  ------------------------
                           68825802240  68825802240  3 files
Warnings: 1
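For reference, this is the kind of extract command I would expect to work (a sketch, not verified: "e" extracts files, -o sets the output directory, -t forces the container type since the warning above shows 7-Zip falling back to the VHD handler, and 2.img stands in for the file I need):

7z.exe e -tvhd -oD:\extracted D:\vms\MEmu\MEmu96-2022102700023FFF-disk2.vmdk 2.img

If the images are only reachable through the nested .mbr layer, it may be necessary to extract MEmu96-2022102700023FFF-disk2.mbr first and then run the same extract command against it.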
After a crash of NN-A (the active NameNode) caused by insufficient memory (too many blocks/files), we upgraded NN-A with much more memory, but did not upgrade NN-B (the standby) immediately.
With the two NameNodes running at different heap sizes, we deleted some files (from 80 million down to 70 million), and then NN-B crashed. NN-A became active.
Then we upgraded NN-B and started it. It gets stuck in safe mode with logs like:
The reported blocks 4620668 needs additional 62048327 blocks to reach the threshold 0.9990 of total blocks 66735729.
In this repeated log line, the reported blocks count X increases only slowly. I checked the heap usage:
Attaching to process ID 11598, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.79-b02
using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 107374182400 (102400.0MB)
NewSize = 2006515712 (1913.5625MB)
MaxNewSize = 2006515712 (1913.5625MB)
OldSize = 4013096960 (3827.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 85983232 (82.0MB)
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 1805910016 (1722.25MB)
used = 1805910016 (1722.25MB)
free = 0 (0.0MB)
100.0% used
Eden Space:
capacity = 1605304320 (1530.9375MB)
used = 1605304320 (1530.9375MB)
free = 0 (0.0MB)
100.0% used
From Space:
capacity = 200605696 (191.3125MB)
used = 200605696 (191.3125MB)
free = 0 (0.0MB)
100.0% used
To Space:
capacity = 200605696 (191.3125MB)
used = 0 (0.0MB)
free = 200605696 (191.3125MB)
0.0% used
concurrent mark-sweep generation:
capacity = 105367666688 (100486.4375MB)
used = 105192740832 (100319.61520385742MB)
free = 174925856 (166.82229614257812MB)
99.83398526179955% used
Perm Generation:
capacity = 68755456 (65.5703125MB)
used = 41562968 (39.637535095214844MB)
free = 27192488 (25.932777404785156MB)
60.45042883578577% used
14501 interned Strings occupying 1597840 bytes.
At the same time, NN-A's heap:
Attaching to process ID 6061, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.79-b02
using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 107374182400 (102400.0MB)
NewSize = 1134100480 (1081.5625MB)
MaxNewSize = 1134100480 (1081.5625MB)
OldSize = 2268266496 (2163.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 85983232 (82.0MB)
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 1020723200 (973.4375MB)
used = 643184144 (613.3881988525391MB)
free = 377539056 (360.04930114746094MB)
63.01259185644061% used
Eden Space:
capacity = 907345920 (865.3125MB)
used = 639407504 (609.7865142822266MB)
free = 267938416 (255.52598571777344MB)
70.47009193582973% used
From Space:
capacity = 113377280 (108.125MB)
used = 3776640 (3.6016845703125MB)
free = 109600640 (104.5233154296875MB)
3.3310377528901736% used
To Space:
capacity = 113377280 (108.125MB)
used = 0 (0.0MB)
free = 113377280 (108.125MB)
0.0% used
concurrent mark-sweep generation:
capacity = 106240081920 (101318.4375MB)
used = 42025146320 (40078.30268859863MB)
free = 64214935600 (61240.13481140137MB)
39.55677138092327% used
Perm Generation:
capacity = 51249152 (48.875MB)
used = 51131744 (48.763031005859375MB)
free = 117408 (0.111968994140625MB)
99.77090742886828% used
16632 interned Strings occupying 1867136 bytes.
We tried to restart both: NN-A started up and became active within 10 minutes, but NN-B stayed stuck forever.
Finally I dumped the heap histogram:
num #instances #bytes class name
----------------------------------------------
1: 185594071 13362773112 org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$BlockProto
2: 185594071 13362773112 org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$ReceivedDeletedBlockInfoProto
3: 101141030 10550504248 [Ljava.lang.Object;
4: 185594072 7423762880 org.apache.hadoop.hdfs.protocol.Block
5: 185594070 7423762800 org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo
6: 63149803 6062381088 org.apache.hadoop.hdfs.server.namenode.INodeFile
7: 23241035 5705267888 [B
It shows a very large count of ReceivedDeletedBlockInfo objects, but why?
I solved this by changing dfs.blockreport.initialDelay to 300; the cause of the failure was a block report storm.
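For reference, the change in hdfs-site.xml looked like this (a sketch; dfs.blockreport.initialDelay is the standard HDFS property, and the value is in seconds):

<property>
  <name>dfs.blockreport.initialDelay</name>
  <value>300</value>
  <!-- Each DataNode waits a random delay of up to this many seconds before
       sending its first block report, spreading the reports out instead of
       letting them all hit the freshly restarted NameNode at once. -->
</property>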
I'm trying to extract filenames from rar packages in a directory. I'm using 7z which returns a multi-line string, and would like to search the output for "mkv", "avi", or "srt" files.
Here's my code:
ROOT_DIR = "/users/ken/extract"

# Check each directory for RAR packages.
# Returns a hash of directories with filenames from the RARs.
def checkdirs
  pkgdirs = {}
  Dir.foreach(ROOT_DIR) do |d|
    if !Dir.glob("#{ROOT_DIR}/#{d}/*.rar").empty?
      rarlist = `7z l #{ROOT_DIR}/#{d}/*.rar`
      puts rarlist # Multiline output from 7z l
      puts rarlist.scan('*.mkv').first
      pkgdirs[d] = 'filename'
    end
  end
  pkgdirs
end
I can get the 7z output but I can't figure out how to search the output for my strings. How can I search the output and return the matching lines?
This is an example of the 7z output:
7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=utf8,Utf16=on,HugeFiles=on,64 bits,8 CPUs x64)
Scanning the drive for archives:
1 file, 15000000 bytes (15 MiB)
Listing archive: Gotham.S03E19.HDTV.x264-KILLERS/gotham.s03e19.hdtv.x264-killers.rar
--
Path = Gotham.S03E19.HDTV.x264-KILLERS/gotham.s03e19.hdtv.x264-killers.rar
Type = Rar
Physical Size = 15000000
Total Physical Size = 285988640
Characteristics = Volume FirstVolume VolCRC
Solid = -
Blocks = 1
Multivolume = +
Volume Index = 0
Volumes = 20
Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2017-05-23 02:30:52 ..... 285986500 285986500 Gotham.S03E19.HDTV.x264-KILLERS.mkv
------------------- ----- ------------ ------------ ------------------------
2017-05-23 02:30:52 285986500 285986500 1 files
------------------- ----- ------------ ------------ ------------------------
2017-05-23 02:30:52 285986500 285986500 1 files
Archives: 1
Volumes: 20
Total archives size: 285988640
I expect this output:
2017-05-23 02:30:52 ..... 285986500 285986500 Gotham.S03E19.HDTV.x264-KILLERS.mkv
You can use this:
puts rarlist.scan(/^.*\.mkv/)
The ^ anchor makes the regex match from the beginning of each line.
To match .mkv, .avi, or .srt, you can use:
rarlist.scan(/(^.*\.(mkv|avi|srt))/) {|a,_| puts a}
The solution is much simpler than you're making it.
Starting with:
TARGET_EXTENSIONS = %w[mkv avi srt]
TARGET_EXTENSION_RE = /\.(?:#{ Regexp.union(TARGET_EXTENSIONS).source})/
# => /\.(?:mkv|avi|srt)/
output = <<EOT
7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=utf8,Utf16=on,HugeFiles=on,64 bits,8 CPUs x64)
Scanning the drive for archives:
1 file, 15000000 bytes (15 MiB)
Listing archive: Gotham.S03E19.HDTV.x264-KILLERS/gotham.s03e19.hdtv.x264-killers.rar
--
Path = Gotham.S03E19.HDTV.x264-KILLERS/gotham.s03e19.hdtv.x264-killers.rar
Type = Rar
Physical Size = 15000000
Total Physical Size = 285988640
Characteristics = Volume FirstVolume VolCRC
Solid = -
Blocks = 1
Multivolume = +
Volume Index = 0
Volumes = 20
Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2017-05-23 02:30:52 ..... 285986500 285986500 Gotham.S03E19.HDTV.x264-KILLERS.mkv
------------------- ----- ------------ ------------ ------------------------
2017-05-23 02:30:52 285986500 285986500 1 files
------------------- ----- ------------ ------------ ------------------------
2017-05-23 02:30:52 285986500 285986500 1 files
Archives: 1
Volumes: 20
Total archives size: 285988640
EOT
All it takes is to iterate over the lines in the output and puts the matches:
puts output.lines.grep(TARGET_EXTENSION_RE)
Which would output:
2017-05-23 02:30:52 ..... 285986500 285986500 Gotham.S03E19.HDTV.x264-KILLERS.mkv
The above is a basic solution, but there are things that could be done to speed up the code, depending on the output being received:
TARGET_EXTENSIONS = %w[mkv avi srt].map { |e| '.' << e } # => [".mkv", ".avi", ".srt"]
puts output.split(/\r?\n/).select { |l| l.end_with?(*TARGET_EXTENSIONS) }
I'd have to run benchmarks, but that should be faster, since regular expressions can drastically slow code if not written correctly.
You could also try anchoring the pattern (note: this builds on the original dot-less TARGET_EXTENSIONS, i.e. %w[mkv avi srt]):
TARGET_EXTENSION_RE = /\.(?:#{ Regexp.union(TARGET_EXTENSIONS).source})$/
# => /\.(?:mkv|avi|srt)$/
puts output.split(/\r?\n/).grep(TARGET_EXTENSION_RE)
as anchored patterns are much faster than unanchored.
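For what it's worth, here is a sketch of how one could measure that with Ruby's stdlib Benchmark (the iteration count is arbitrary, and output is the sample listing above):

require 'benchmark'

exts   = %w[mkv avi srt]
dotted = exts.map { |e| ".#{e}" }
anchored_re = /\.(?:#{Regexp.union(exts).source})$/

lines = output.split(/\r?\n/)
Benchmark.bm(12) do |b|
  b.report('regexp')    { 100_000.times { lines.grep(anchored_re) } }
  b.report('end_with?') { 100_000.times { lines.select { |l| l.end_with?(*dotted) } } }
end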
If the 7z archives will generate huge listings (in the MB range), it'd be better to iterate over the input to avoid scalability issues. In the example above, output.lines is akin to slurping the output. See "Why is "slurping" a file not a good practice?" for more information.
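A minimal sketch of that streaming approach, assuming the listing is read straight from the 7z process instead of being captured into one big string (IO.popen and each_line are standard Ruby; the archive path is a placeholder):

# Stream the listing line by line rather than slurping it into memory.
IO.popen(['7z', 'l', 'path/to/archive.rar']) do |io|
  io.each_line do |line|
    puts line if line =~ /\.(?:mkv|avi|srt)$/
  end
end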
I am trying to set up verified boot on a BeagleBone Black by following the documentation:
https://github.com/01org/edison-u-boot/blob/master/doc/uImage.FIT/beaglebone_vboot.txt
When I run the command to put the public key into the .dtb file,
mkimage -f sign.its -K am335x-boneblack.dtb -k keys -r image.fit
I get this output:
FIT description: Beaglebone black
Created: Fri Mar 24 18:47:51 2017
Image 0 (kernel#1)
Description: unavailable
Created: Fri Mar 24 18:47:51 2017
Type: Kernel Image
Compression: lzo compressed
Data Size: 8490316 Bytes = 8291.32 KiB = 8.10 MiB
Architecture: ARM
OS: Linux
Load Address: 0x80008000
Entry Point: 0x80008000
Hash algo: sha1
Hash value: 9a390ee3c02c5bddc7b191d5cbe107991522a6d7
Image 1 (fdt#1)
Description: beaglebone-black
Created: Fri Mar 24 18:47:51 2017
Type: Flat Device Tree
Compression: uncompressed
Data Size: 38894 Bytes = 37.98 KiB = 0.04 MiB
Architecture: ARM
Hash algo: sha1
Hash value: 249ca75de41f5202fae334253bd153666f60b7dc
Default Configuration: 'conf#1'
Configuration 0 (conf#1)
Description: unavailable
Kernel: kernel#1
FDT: fdt#1
But unfortunately there is no field like "signature" or "rsa" in my .dtb file when I read it with fdtdump.
Here is my .its file:
/dts-v1/;

/ {
    description = "Beaglebone black";
    #address-cells = <1>;

    images {
        kernel#1 {
            data = /incbin/("zImage.lzo");
            type = "kernel";
            arch = "arm";
            os = "linux";
            compression = "lzo";
            load = <0x80008000>;
            entry = <0x80008000>;
            hash#1 {
                algo = "sha1";
            };
        };

        fdt#1 {
            description = "beaglebone-black";
            data = /incbin/("am335x-boneblack.dtb");
            type = "flat_dt";
            arch = "arm";
            compression = "none";
            hash#1 {
                algo = "sha1";
            };
        };
    };

    configurations {
        default = "conf#1";
        conf#1 {
            kernel = "kernel#1";
            fdt = "fdt#1";
            signature#1 {
                algo = "sha1,rsa2048";
                key-name-hint = "dev";
                sign-images = "fdt", "kernel";
            };
        };
    };
};
Also, in the keys folder I have the dev.key and dev.crt files.
Thanks for your answer.
Though there is no error message, mkimage does not support that function if CONFIG_FIT_SIGNATURE is not set in U-Boot's .config.
As it says in https://lxr.missinglinkelectronics.com/#uboot/doc/uImage.FIT/signature.txt:
Public Key Storage
In order to verify an image that has been signed with a public key we need to
have a trusted public key. This cannot be stored in the signed image, since
it would be easy to alter. For this implementation we choose to store the
public key in U-Boot's control FDT (using CONFIG_OF_CONTROL).
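A sketch of how to check and fix this, assuming a standard U-Boot tree (the menuconfig location of the option varies between versions):

# See whether signing support was compiled in
grep CONFIG_FIT_SIGNATURE .config

# If it is unset, enable CONFIG_FIT_SIGNATURE (plus its RSA dependencies),
# rebuild the host tools, and re-run mkimage from the fresh tools/ directory
make menuconfig
make tools

# Then verify that the public key actually landed in the control dtb
fdtdump am335x-boneblack.dtb | grep -A 6 signature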
Regards, Steve
I want to calculate the end offset of a parent locator in a VHD. Here is a part of the VHD header:
Cookie: cxsparse
Data offset: 0xffffffffffffffff
Table offset: 0x2000
Header version: 0x00010000
Max table entries: 10240
Block size: 0x200000
Checksum: 4294956454
Parent Unique Id: 0x9678bf077e719640b55e40826ce5d178
Parent time stamp: 525527478
Reserved: 0
Parent Unicode name:
Parent locator 1:
- platform code: 0x57326b75
- platform_data_space: 4096
- platform_data_length: 86
- reserved: 0
- platform_data_offset: 0x1000
Parent locator 2:
- platform code: 0x57327275
- platform_data_space: 65536
- platform_data_length: 34
- reserved: 0
- platform_data_offset: 0xc000
Some definitions from the Virtual Hard Disk Image Format Specification:
"Table Offset: This field stores the absolute byte offset of the Block Allocation Table (BAT) in the file.
Platform Data Space: This field stores the number of 512-byte sectors needed to store the parent hard disk locator.
Platform Data Offset: This field stores the absolute file offset in bytes where the platform specific file locator data is stored.
Platform Data Length. This field stores the actual length of the parent hard disk locator in bytes."
Based on this the end offset of the two parent locators should be:
data offset + 512 * data space:
0x1000 + 512 * 4096 = 0x201000
0xc000 + 512 * 65536 = 0x200c000
But if one uses only data offset + data space:
0x1000 + 4096 = 0x2000 //end of parent locator 1, begin of BAT
0xc000 + 65536 = 0x1c000
This latter calculation makes much more sense: the end of the first parent locator is the beginning of the BAT (see the header data above); and since the first BAT entry is 0xe7 (a sector offset), it corresponds to file offset 0x1ce00 (sector offset * 512), which is consistent with the second parent locator ending at 0x1c000.
But if one uses the formula data offset + 512 * data space, other data ends up being written inside the parent locator's reserved space. (In this example there would still be no data corruption, since the Platform Data Length is very small.)
So is this a mistake in the specification, and the sentence
"Platform Data Space: This field stores the number of 512-byte sectors needed to store the parent hard disk locator."
should be
"Platform Data Space: This field stores the number of bytes needed to store the parent hard disk locator."?
Apparently Microsoft does not care to correct this mistake, it having already been discovered by the VirtualBox developers. VHD.cpp contains the following comment:
/*
* The VHD spec states that the DataSpace field holds the number of sectors
* required to store the parent locator path.
* As it turned out VPC and Hyper-V store the amount of bytes reserved for the
* path and not the number of sectors.
*/
I was looking to modify my GPIO driver for the Raspberry Pi to use device tree support.
There were two files to look at.
First, I read the device tree file /arch/arm/boot/dts/bcm2835.dts,
and the following section was present for the GPIO:
gpio: gpio {
    compatible = "brcm,bcm2835-gpio";
    reg = <0x7e200000 0xb4>;
    /*
     * The GPIO IP block is designed for 3 banks of GPIOs.
     * Each bank has a GPIO interrupt for itself.
     * There is an overall "any bank" interrupt.
     * In order, these are GIC interrupts 17, 18, 19, 20.
     * Since the BCM2835 only has 2 banks, the 2nd bank
     * interrupt output appears to be mirrored onto the
     * 3rd bank's interrupt signal.
     * So, a bank0 interrupt shows up on 17, 20, and
     * a bank1 interrupt shows up on 18, 19, 20!
     */
    interrupts = <2 17>, <2 18>, <2 19>, <2 20>;
    gpio-controller;
    #gpio-cells = <2>;
    interrupt-controller;
    #interrupt-cells = <2>;
};
From references on the internet, the reg base address 0x7e200000 is understood, but what is 0xb4?
Second, I read the device tree file /arch/arm/boot/dts/bcm2835-rpi-b.dts,
and the following section was present for the GPIO:
/ {
    compatible = "raspberrypi,model-b", "brcm,bcm2835";
    model = "Raspberry Pi Model B";

    memory {
        reg = <0 0x10000000>;
    };

    leds {
        compatible = "gpio-leds";

        act {
            label = "ACT";
            gpios = <&gpio 16 1>;
            default-state = "keep";
            linux,default-trigger = "heartbeat";
        };
    };
};

&gpio {
    pinctrl-names = "default";
    pinctrl-0 = <&alt0 &alt3>;

    alt0: alt0 {
        brcm,pins = <0 1 2 3 4 5 6 7 8 9 10 11 14 15 40 45>;
        brcm,function = <4>; /* alt0 */
    };

    alt3: alt3 {
        brcm,pins = <48 49 50 51 52 53>;
        brcm,function = <7>; /* alt3 */
    };
};
So, which one of the dts files should I use, and how do I read and interpret those key-value pairs? For example, what is pinctrl, and how does this approach affect my code?
I know I am asking a lot here, but this is new and looks interesting, and I want to modify my driver using this approach. Please help.
PS: I have made a driver using the standard udev support. So dynamic device node creation is managed.
I am not using platform model.
1.
From references on the internet, the reg base address 0x7e200000 is understood, but what is 0xb4?
reg = <0x7e200000 0xb4>
Here 0xb4 is the length of the register region: the GPIO block occupies 0xb4 = 180 bytes of register space starting at the base address 0x7e200000.
"reg : Address and length of the register set for the device"
You can check out this PDF for more info:
http://events.linuxfoundation.org/sites/events/files/slides/petazzoni-device-tree-dummies.pdf
2.
So, which one of the dts files should I use, and how do I read and interpret those key-value pairs?
I will split the question into two parts. For reading the key-value pairs:
Every device tree entry has an associated binding file that describes how to read its key-value pairs.
For example, see http://lxr.free-electrons.com/source/Documentation/devicetree/bindings/arm/bcm/brcm,bcm11351-cpu-method.txt for the corresponding details.
Regarding which dts file you should use:
If you have noticed, bcm2835.dtsi is not a .dts file but a .dtsi file:
http://lxr.free-electrons.com/source/arch/arm/boot/dts/bcm2835.dtsi
.dtsi files can be included in other .dts or .dtsi files, just as we include headers like stdio.h or conio.h in C code.
Here, bcm2835-rpi-b.dts is a .dts file, and if you look at it here:
http://lxr.free-electrons.com/source/arch/arm/boot/dts/bcm2835-rpi-b.dts
you will notice it includes the following:
/include/ "bcm2835.dtsi"
This means that all the device tree entries in bcm2835.dtsi are imported into bcm2835-rpi-b.dts.
You can choose to leave the nodes as-is or override their properties in bcm2835-rpi-b.dts, but the final value in the .dts file is the one reflected in the compiled .dtb, as sketched below.
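A hedged sketch of that override mechanism (the status property below is purely illustrative, not taken from the real Raspberry Pi tree):

/include/ "bcm2835.dtsi"

/* Referencing the &gpio label defined in bcm2835.dtsi lets the board
 * file add or override properties; whichever value appears last wins
 * in the compiled dtb. */
&gpio {
    status = "okay"; /* illustrative override */
};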
3.
For example, what is pinctrl, and how does this approach affect my code?
pinctrl is a framework provided by the kernel for managing pins, here GPIOs. You can check out the documentation here: https://www.kernel.org/doc/Documentation/pinctrl.txt
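As a rough illustration of how a device node consumes pinctrl (a sketch; the node name and compatible string are made up, but pinctrl-names and pinctrl-0 are the standard properties, as seen in the &gpio section above):

mydevice@7e200000 {
    compatible = "acme,mydevice"; /* hypothetical device */
    pinctrl-names = "default";
    pinctrl-0 = <&alt0>; /* pin group defined under &gpio */
};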