ESP32 partition: 3 MB SPIFFS and 1 MB app not applied [PlatformIO] - esp32

I'm using a custom partition file for the ESP32 with the PlatformIO extension for VS Code. The intent is to allocate 3 MB to SPIFFS and 1 MB to the app.
The partition file is noota_3g.csv with the following content:
# Name, Type, SubType, Offset, Size, Flags
nvs, data, nvs, 0x9000, 0x5000,
otadata, data, ota, 0xe000, 0x2000,
app0, app, ota_0, 0x10000, 0x100000,
spiffs, data, spiffs, 0x110000,0x2F0000,
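(As a sanity check of the table itself: 0x110000 + 0x2F0000 = 0x400000, so the layout fills a 4 MB flash exactly, and 0x2F0000 bytes is roughly 2.94 MB for the filesystem partition.)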
This should allow building a 3 MB filesystem, but when I try to build the filesystem using PlatformIO I get the error SPIFFS_write error(-10001): File system is full.
As soon as the /data folder grows above roughly 2,200 KB I get this error.
Other partition layouts act as expected; the problem only appears when I try to make the filesystem larger than 2 MB.
What am I doing wrong?

Fixed this by adding
board_build.filesystem = littlefs
to platformio.ini
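For context, a minimal platformio.ini along these lines should do it (the env and board names here are only illustrative; the partition table is the noota_3g.csv shown above):
[env:esp32dev]
platform = espressif32
board = esp32dev
framework = arduino
board_build.partitions = noota_3g.csv
board_build.filesystem = littlefs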

Related

codesign an old Director projector on OS X 10.13 - perhaps manipulate __LINKEDIT segment value

(See Updates at end of this post)
For $reasons, I need to codesign an old Director projector that we can no longer re-publish (no access to original source code or to Director).
I'm doing this because when run without being signed, the app now opens a Finder window with a prompt saying "Where is..." asking for a file that's one of the embedded projector resources.
But... if I cd into the Projector.app contents (it's not really called that, but you get the idea), find the projector binary inside Contents/MacOS/, and run that binary from Terminal, the app launches and runs fine once it has decompressed the (presumably) attached archive at the end of the binary:
/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleFSCompression/AppleFSCompression-96.30.2/Common/ChunkCompression.cpp:50: Error: unsupported compressor 8
/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleFSCompression/AppleFSCompression-96.30.2/Libraries/CompressData/CompressData.c:353: Error: Unknown compression scheme encountered for file '/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/Exceptions.plist'
/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleFSCompression/AppleFSCompression-96.30.2/Common/ChunkCompression.cpp:50: Error: unsupported compressor 8
/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleFSCompression/AppleFSCompression-96.30.2/Libraries/CompressData/CompressData.c:353: Error: Unknown compression scheme encountered for file '/System/Library/CoreServices/CoreTypes.bundle/Contents/Library/AppExceptions.bundle/Exceptions.plist'
271 blocks
1120 blocks
274 blocks
136 blocks
255 blocks
120 blocks
1487 blocks
575 blocks
1128 blocks
570 blocks
104 blocks
2042 blocks
4889 blocks
677 blocks
388 blocks
363 blocks
700 blocks
23010 blocks
...app opens and runs correctly at this point
I can't ask our users to do this (they're very non-technical), so I'm guessing that the "Where is..." prompt is some aspect of OS X Gatekeeper, and hence I'm hoping that signing the binary will make it click-runnable again.
When I try and codesign the binary App.app/Contents/MacOS/projector I get:
main executable failed strict validation
Setting the --no-strict codesign option gives a bit more detail:
the __LINKEDIT segment does not cover the end of the file (can't be processed)
Which I think is because the Director projector is a binary with a bundled archive containing the rest of the application's resources, appended to the end of the executable. Some googling shows that other projects have similar problems with their embedded resources.
I've tried using macho_edit to see if I could modify the binary, but with no joy. I've also tried signing using jtool, but again, this didn't work.
So now, opening the binary in MachOView:
I'm hoping that I can hexedit the binary and change the value of the __LINKEDIT segment so it covers the end of the file, and hence so the codesigning will work, but I have no idea what the modified value should be, or what else if anything I need to change. Any tips appreciated.
Update 1 - in response to Kamil.S's answer
I've tried adjusting the File Size value in the __LINKEDIT segment so that it plus the File Offset equals the actual size of the binary (I tried a few times; you actually need to change the VM Size to the same value as the File Size, or you get Killed: 9 by the OS; the same happens if you set File Size to the total size of the binary), but with no luck. With the new File Size and VM Size values I can still run the binary, but I can't codesign it; I do, however, get a slightly different error message:
file not in an order that can be processed (link edit information does not fill the __LINKEDIT segment)
Update 2 - https://github.com/pyinstaller/pyinstaller/wiki/Recipe-OSX-Code-Signing#pyinstaller-fix-implementation has a bit more detail on the same problem:
PyInstaller breaks OSX code signing because it appends Python code at the end of the binary. Appending data at the end of the executable breaks the Mach-O format structure. The codesign utility complains with the following messages:
the __LINKEDIT segment does not cover the end of the file (can't be processed)
file not in an order that can be processed (link edit information does not fill the __LINKEDIT segment)
Fix __LINKEDIT - File Size (offset + File Size == exe size), VM Size - same as File Size.
Fix LC_SYMTAB - String Table Size - last data in the Mach-O file (offset + size == exe size on the filesystem). The data appended to the executable will be part of the String Table (the last data section in the Mach-O file).
I'll take a look at fixing LC_SYMTAB to see if that helps.
__LINKEDIT's File Offset + File Size should be equal to the physical executable size. You can tinker with File Size in MachOView by double-clicking the value, editing it and saving; the executable should still be fine. Just don't touch File Offset, because that will definitely break it.
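A minimal diagnostic sketch along these lines can print what __LINKEDIT currently covers versus the real file size, which gives the File Size value you'd need. It assumes a thin, little-endian, 64-bit Mach-O; a fat or 32-bit projector binary would need different handling. Run it with the path to the binary:
import struct, sys

MH_MAGIC_64 = 0xfeedfacf   # thin 64-bit little-endian Mach-O
LC_SEGMENT_64 = 0x19

def check_linkedit(path):
    with open(path, "rb") as f:
        data = f.read()
    magic, = struct.unpack_from("<I", data, 0)
    if magic != MH_MAGIC_64:
        sys.exit("not a thin little-endian 64-bit Mach-O; this sketch does not handle fat or 32-bit files")
    ncmds, = struct.unpack_from("<I", data, 16)   # mach_header_64.ncmds
    off = 32                                      # sizeof(mach_header_64)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", data, off)
        if cmd == LC_SEGMENT_64:
            segname = data[off + 8:off + 24].rstrip(b"\0").decode()
            if segname == "__LINKEDIT":
                vmaddr, vmsize, fileoff, filesize = struct.unpack_from("<4Q", data, off + 24)
                print("__LINKEDIT fileoff=%#x filesize=%#x -> covers up to %#x" % (fileoff, filesize, fileoff + filesize))
                print("actual file size: %#x" % len(data))
                print("File Size needed to cover the whole file: %#x" % (len(data) - fileoff))
        off += cmdsize

check_linkedit(sys.argv[1])
It only reports the numbers; the actual edit is still done in MachOView or a hex editor as described above.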
If the app can't find its external resources when being run normally from the Finder, that sounds like a result of Gatekeeper Path Randomization. The OS moves the app to a hidden read-only location when it's run, and it can't find the resources.
I don't think that signing the app binary itself will fix this problem and prevent the OS from doing path randomization. The user either needs to move the application to a different directory after extracting it, or you need to distribute the application inside a disk image that has been signed with your Developer ID certificate. DropDMG (linked from the post above) can create signed disk images, which might be worth a quick try.
Director uses a scheme widely used on Windows called an "overlay": it attaches some data at the end of the physical file, beyond the size of the executable image.
This approach is not supported with Mach-O files. The physical image has to end with the last byte covered by the __LINKEDIT segment, and even the order of items inside the __LINKEDIT segment is well defined. The reason for this used to be prebinding; now it is codesigning.
The appended data is the initial DIR/DXR that Director wants to load first. I guess this was fixed later by adding the DIR/DXR into the application bundle, but I am no longer into Director, so I am not sure about this.

How to upload large size image by Intervention Image in Laravel 5

I'm using Intervention Image in my project.
My application works smoothly when uploading small images, but when I try to upload a large image (>2 MB), my application stops working!
It doesn't even show a proper error. Sometimes it shows a Token mismatch error, and sometimes the URL doesn't redirect.
How do I fix it? I have no idea.
Here is my code:
$post = new Post();
if ($request->hasFile('image')) {
    $image = $request->file('image');
    $filename = Auth::user()->id.'_'.time().'.'.$image->getClientOriginalExtension();
    $location = public_path('images/'.$filename);
    Image::make($image)->save($location);
    $post->image = $filename;
}
$post->save();
I'm using Intervention Image for uploading images, but you can suggest an alternative as well.
Thanks!
Actually this is an issue with the server-side settings in the php.ini file. If you upload more than your server's post_max_size setting allows, the input will be empty and you will get a Token mismatch error.
Change the upload_max_filesize and post_max_size values as required and restart the server.
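For example (the exact limits are up to you; these values are only illustrative):
; php.ini
upload_max_filesize = 10M
post_max_size = 12M    ; should be at least as large as upload_max_filesize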
It turns out this is a memory issue. If you check the error log you will see that the server ran out of memory. You will see something like:
PHP Fatal error: Allowed memory size of XXXXXXXX bytes exhausted (tried to allocate XXXXX bytes) in ...
Because Intervention Image reads the whole image pixel by pixel, keeping the data in memory, a seemingly small image like a 2 MB JPEG can end up requiring dozens of MB of memory to process.
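For a sense of scale (the figures are only illustrative): a 12-megapixel photo of 4000 x 3000 pixels decoded at 4 bytes per pixel needs roughly 4000 * 3000 * 4 bytes, i.e. about 46 MB of RAM, even though the JPEG is only 2 MB on disk.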
You may need to set your memory limit as high as you can and check the file size before the image is opened, because a site that breaks without error messages is embarrassing. Use something like:
if ($request->hasFile('image') && $request->file('image')->getClientSize() < 2097152) {
    $image = $request->file('image');
    $filename = Auth::user()->id.'_'.time().'.'.$image->getClientOriginalExtension();
    $location = public_path('images/'.$filename);
    Image::make($image)->save($location);
    $post->image = $filename;
}

Size constraints of initramfs on ARM?

I'm creating a bootable Linux system on a PicoZed board (ARM CortexA9 core), and I've run into a "limitation", which I don't think really is a limitation (I get the feeling it's another problem masquerading as a limitation).
I boot by starting the system in JTAG boot mode; after powering on the board I use the xmd debugger to place u-boot into the system's RAM and then I run it.
Next I place the kernel (uImage), the gzip'd initramfs image and the device tree into memory. Finally I tell U-Boot to boot the system using bootm, with three arguments pointing at the memory locations of the three images.
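Roughly, that last step is a single bootm call with three addresses (the values here are placeholders, not the ones I actually use):
bootm 0x3000000 0x2000000 0x2A00000
where the arguments are the kernel uImage, the initramfs image and the device tree blob, in that order.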
All of this works, and I manage to boot up Linux + userland. However, I need to grow the initramfs, and this is where I run into problems.
The working image is exactly 16 MiB. I tried to make it 24 MiB (completely regenerated from scratch for each boot attempt), but just after the kernel has loaded, when it tries to find init, it reports file system faults and fails. There shouldn't be any overlaps, but just in case I tried moving things around a little; the same problem occurred.
After searching for tips, I saw someone on a forum say that the image needs to be placed at a 16 MiB alignment (which I don't think is true, but I tried it nonetheless and didn't get a working system). Another post claimed that the images must be aligned to their sizes (which I again don't think is true, but I tried that as well, with no change). Yet another post claimed that this happens if the initramfs image crosses the __init end boundary: placing the initramfs image firmly inside that section supposedly lets the memory be reclaimed after the kernel has uncompressed the image, while placing it beyond the __init section also works but that memory is then lost forever after boot. I know far too little about Linux to know whether any of this is accurate, and I have no idea where __init's end boundary (if such a thing exists) is, but in case I was crossing it I tried moving the initramfs image well beyond anywhere I had previously placed it; that didn't change anything.
I also tried the original location (which works with a 16 MiB image) with a 16 MiB + 1 KiB image, but this didn't work either (checking that it didn't overlap any of the other images, obviously).
This originally led me to think there's a 16 MiB initramfs size limit lurking somewhere. But searching for it got me thinking that doesn't make sense: as far as I can gather, the bootm command in U-Boot should set up the tag list for the system (which includes the location and size of the initramfs), and I haven't come across any mention of a 16 MiB limit in relation to the tagged lists for the initramfs.
I have found a page which claims that the initramfs size is, in practical terms, limited to roughly half the size of the physical RAM in the system; the PicoZed board has 1 GiB of RAM, so we're orders of magnitude away from what "should" be a limitation.
To clarify: 16 MiB is the size of the raw image; compressed, it's just under 6 MiB. However, if I populate it so that it doesn't compress as tightly, it makes no difference -- the problem doesn't appear to be related to the size of the compressed image, only the uncompressed one.
Main question:
Where is this apparent 16MiB initramfs size limit coming from?
Side-question:
Is there such a thing as an "kernel __init section" which is reclaimed by the kernel after it has uncompressed/loaded images? If so, how do I see/configure the location/size of it?
Size constraints of initramfs on ARM?
I have not encountered a 16 MiB size constraint as you allege.
At one time I thought there was a size limit too, but that turned out to be a memory footprint issue during boot. Once that was sorted out, I've been using large initramfs images of 30 MiB (e.g. with glibc, gstreamer and Qt5 libraries).
Where is this apparent 16MiB initramfs size limit coming from?
There isn't one. A ramfs is only constrained by available RAM.
There is a definition for the "Default RAM disk size", but this would not affect the size of an initramfs.
Your method of booting with the U-Boot bootm command is suspect, i.e. you're passing the memory address of the initramfs as the second argument.
The U-Boot documentation clearly describes the second argument as "the address of an initrd image" (emphasis added).
There is no mention of initramfs as an argument.
Linux kernel documentation states that the initramfs archive can be "linked into the linux kernel image". There's a kernel menuconfig entry for specifying the path to the initramfs cpio file. The make script will append this cpio file to the kernel image so that there is a single image file for booting.
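The relevant entry is typically CONFIG_INITRAMFS_SOURCE (under "General setup"); a .config fragment might look like this, with the path being a placeholder:
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/path/to/rootfs.cpio"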
Or (like an initrd) "a separate file" can be passed to the kernel at boot to populate the initramfs.
U-Boot passes the location (and length) of this archive either as an ATAG entry or as a reserved memory region in the Device Tree blob.
The kernel expects a cpio archive for an initramfs, whereas an initrd is a filesystem image loaded into a RAM disk.
You neglect to mention (besides its compression) what kind of archive or image this "initramfs" or "separate file" actually is.
So it's not clear whether you're booting the kernel with an initrd (a filesystem image) or an initramfs (a cpio archive).
Your repeated reference to the initramfs as an "image" file rather than a cpio archive suggests that you really are using an initrd.
An initrd would definitely have a size constraint.
Is there such a thing as an "kernel __init section" which is reclaimed by the kernel after it has uncompressed/loaded images?
Yes, there is an __init section of memory, which is released after kernel initialization is complete.
If so, how do I see/configure the location/size of it?
Usually, routines and data that have no use after initialization can be declared with the __init macro.
The location and size of this memory section would be under the control of the linker script, rather than explicit user control. The kernel System.map file should have the info for review.
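For example, the boundaries of that memory section usually show up as the __init_begin and __init_end symbols, so something like this should locate them (assuming your kernel build produced a System.map):
grep -E '__init_(begin|end)' System.map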
I think you need to pass ramdisk_size in the U-Boot bootargs. ramdisk_size needs to be set if the uncompressed ramdisk file size is bigger than the default setting; it should be larger than the uncompressed ramdisk file size.
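For example (the console device and sizes are only illustrative; ramdisk_size is given in KiB, so 32768 means a 32 MiB RAM disk):
setenv bootargs console=ttyPS0,115200 root=/dev/ram0 rw ramdisk_size=32768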

File System Block Size while creating the File System using mkfs

I am trying to use BUSE (with NBD) to create a block device in user space. I don't clearly understand the block access patterns when creating a file system. As shown in the example, when I mount the nbd device and create an ext4 file system with a block size of 4096, I see that the reads and writes are in multiples of 1024, not 4096.
However, once the file system is created, when I mount the device and try to read/write files, the requests are sent in multiples of 4096.
So it looks like, while creating the file system using mkfs.ext4, the block device is accessed with a block size of 1024, and only after the file system is created is the user-specified block size used. Am I correct in making this inference? If so, can someone explain what happens behind the scenes and why 1024 is chosen initially?
Thanks and Regards,
Sharath

How to create a big file to fill the disk completely on Windows Phone?

For example, I have a Lumia 920; its total space is 32 GB and the available free space is 24 GB.
Now I want to create some files to fill the disk completely. How can I create 24 GB of files as quickly as possible? I tried, but it was very slow. :-(
As far as I know, one app (see the link below) can do that, but I really can't understand how it does it. Writing to isolated storage is very slow. Could you give me some advice?
http://www.windowsphone.com/en-us/store/app/%E7%BC%93%E5%AD%98%E6%B8%85%E7%90%86/b790919d-8ec8-40d8-b97a-10c466cedca8
You just need to create the file like this and write a single byte:
FileStream fs = new FileStream(@"c:\tmp\yourfilename", FileMode.CreateNew);
// seek 2 GB past the start of the empty file, then write one byte there
fs.Seek(2048L * 1024 * 1024, SeekOrigin.Begin);
fs.WriteByte(0);
fs.Close();
This example creates a 2 GB file while writing only a single byte of content!
