Signtool file size limitation?

I created a 7-Zip self-extracting archive with a size of 4.37 GB.
When I use signtool (I have already tried both the 64-bit and the 32-bit variant), it fails to sign this file.
I get these errors:
SignTool error: This file format cannot be signed because it is not recognized.
SignTool Error: An error occurred while attempting to sign: <7zip_selfextract.exe>
When I create a self-extracting archive the same way with 7-Zip that is smaller than 4 GB, the signing works without any problems.
Does anyone know of file size limitations in signtool, or has anyone run into the same issue?

It does not matter whether you manage to sign it or not:
Windows cannot run an .exe file that is over 4 GB.
You can find an explanation of that limit here:
https://superuser.com/questions/667593/is-it-possible-to-run-a-larger-than-4gb-exe
Signtool is limited as well:
https://web.archive.org/web/20120630022739/connect.microsoft.com/VisualStudio/feedback/details/519201/signtool-exe-cant-sign-big-file
I have not found a more up-to-date article, but all packages I work with, as long as they are under 4 GB, work fine with signtool. This appears to be the same limit the operating system imposes.
In my case, signtool truncated the file to the size of the part beyond 4 GB (say the file was 4.5 GB; the output I got was 0.5 GB). That might have been caused by something else in the build pipeline - I did not investigate further.
I got rid of some packages that were only there for convenience and increased the compression level. If that is not an option for you (extra extraction time, not enough gain, etc.), try something like https://sourceforge.net/p/sevenzip/discussion/45797/thread/677bd204/ or use a different solution.
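For reference, one way to catch this kind of truncation early is to verify the archive's size in the build pipeline before handing it to signtool. A minimal sketch in C++17 (the file name and the 4 GiB threshold are assumptions for illustration):

// size_check.cpp - fail the build early if the self-extracting archive
// exceeds the 4 GB limit that signtool (and the Windows loader) can handle.
#include <cstdint>
#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;
    const fs::path archive{"7zip_selfextract.exe"};          // assumed name
    const std::uintmax_t limit = 4ull * 1024 * 1024 * 1024;  // 4 GiB

    std::error_code ec;
    const std::uintmax_t size = fs::file_size(archive, ec);
    if (ec) {
        std::cerr << "cannot stat " << archive << ": " << ec.message() << '\n';
        return 2;
    }
    if (size >= limit) {
        std::cerr << archive << " is " << size << " bytes; too large to sign or run\n";
        return 1;  // abort before signtool produces a broken file
    }
    return 0;      // safe to pass to signtool
}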

Related

Windows: Additional file attribute preventing downloaded program from running?

I have a compiled program which runs great after being compressed, copied to another computer using a USB key, extracted, and run.
However, if I upload the compressed file to Google Drive or Dropbox, then download and extract it, the program will not run. It gives me the error "program.exe has stopped working".
Using a tool called WinMerge, I compared the program that was extracted from a USB drive with the program that was extracted after being downloaded. Every file, both binary and text, was identical.
Next I used attrib -r -a -s -h on every program file in both folders, thinking perhaps one of the file attributes was incorrect. I still had the same problem; the copied program works, the downloaded one does not.
I also tried changing the name and location of the folders the program was in but it had no effect.
The only thing I can think of is some additional attribute that Windows gives files which were downloaded from the internet, to possibly trigger an additional UAC check which is interfering with the program. Does this exist?
This is on Windows 7.
Found the problem. Windows adds an Alternate Data Stream (ADS) to every file downloaded off the internet. For some reason, these streams were preventing the program from running. Stripping the ADS from each file allows it to run.
I used a Windows Sysinternals program called Streams to strip the ADS data.
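The stream in question is normally named Zone.Identifier. If you can't use the Streams tool, here is a hedged sketch of doing the same thing with the plain Win32 API (the target file name is an assumption; an ADS is addressed as file:stream):

// unblock.cpp - remove the Zone.Identifier alternate data stream that
// Windows attaches to files downloaded from the internet; roughly what
// "streams -d program.exe" or Explorer's "Unblock" button does.
#include <windows.h>
#include <iostream>

int main()
{
    const char* stream = "program.exe:Zone.Identifier";  // hypothetical file

    if (DeleteFileA(stream)) {
        std::cout << "Zone.Identifier removed\n";
    } else if (GetLastError() == ERROR_FILE_NOT_FOUND) {
        std::cout << "file was not marked as downloaded\n";
    } else {
        std::cerr << "DeleteFileA failed: " << GetLastError() << '\n';
        return 1;
    }
    return 0;
}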

ZeroMQ Windows Installer NSIS Error

I want to install ZeroMQ for Ratchet/PHP, and I downloaded the installer from http://zeromq.org/distro:microsoft-windows. But I keep getting an "NSIS error" whenever I try to install it.
It shows up immediately after I run the installer. I tried different versions, both x64 and x86; none of them works. This problem only occurs with the ZeroMQ installers.
Does anyone have any idea why this happens?
P.S. I use Windows 8.1. (Up to date)
This question does not belong here on Stack Overflow, but since you posted it here anyway, I will give you the technical answer: NSIS needs to open a file handle to itself so it can read the compressed data. It does this by calling GetModuleFileName to get the path and CreateFile to open the file. If this step fails, it displays the _LANG_CANTOPENSELF message ("Error launching installer", the text in your screenshot).
A) GetModuleFileName can return an "incorrect" path when filesystem redirection is involved. This is most commonly seen when psexec is used to execute the program from the Windows directory on a remote 64-bit computer, which is probably not the case here.
B) The call to CreateFile can fail. This is most often caused by anti-virus software holding a lock on, or denying access to, the file. Try disabling/uninstalling any third-party anti-virus software...
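To make the failure mode concrete, the startup sequence described above boils down to something like the following (a sketch, not NSIS's actual source):

// selfopen.cpp - sketch of the "open a handle to myself" step an NSIS
// installer performs; if CreateFile fails (for example because an AV
// scanner holds the file), the "Error launching installer" box appears.
#include <windows.h>
#include <iostream>

int main()
{
    char path[MAX_PATH];
    if (GetModuleFileNameA(nullptr, path, MAX_PATH) == 0) {
        std::cerr << "GetModuleFileName failed: " << GetLastError() << '\n';
        return 1;
    }

    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        // This is the condition that triggers _LANG_CANTOPENSELF.
        std::cerr << "CreateFile(" << path << ") failed: " << GetLastError() << '\n';
        return 1;
    }
    // ... read the compressed payload from h ...
    CloseHandle(h);
    return 0;
}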

VB6 Application fails 8.3 path conversion on a single PC

I have a VB6 desktop application that is deployed on well over 1200 desktops. The devices are a mix of Windows XP SP2 and SP3 systems. All but one of these PCs (an XP SP2 machine) are able to successfully resolve the DOS 8.3 path (i.e. C:\PROGRA~1\DATFOL~1\Config\) that is used in an .ini file related to this application. This particular PC errors out with the message: "Run-time error '76': Path not found".
The string is obtained from the .ini file using the GetPrivateProfileString function; it is not hard-coded into the application.
Since only one machine is having the problem, I'm looking at some configuration value on that device as the root cause. I looked at the NtfsDisable8dot3NameCreation setting in the registry to see if it might cause the issue, but I have been unable to reproduce the problem on any other machine by changing this setting.
Does anybody have any thoughts, or perhaps another direction I could take?
Don't use hard-coded paths or short filenames. The Program Files folder might not be on the C: drive, might not be named Program Files, and even if it is, might not have the short name PROGRA~1 (and likewise for DATFOL~1). Write the install path to an INI file or the registry during installation, and read and use that in your program.
If someone was fiddling around and made a temp/backup/testing copy \DataFolder_Temp, deleted the original, and then renamed the copy, the short path would be DATFOL~2.
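For illustration, a minimal sketch of that record-and-read-back approach using the same profile API the question already uses (C++ here rather than VB6; the INI path, section, and key names are assumptions):

// installpath.cpp - store the real install path at install time and read
// it back at run time, instead of hard-coding an 8.3 path.
#include <windows.h>
#include <iostream>

int main()
{
    const char* ini = "C:\\ProgramData\\MyApp\\MyApp.ini";  // hypothetical location

    // At install time: record where the application actually ended up.
    WritePrivateProfileStringA("Paths", "ConfigDir",
                               "C:\\Program Files\\DataFolder\\Config\\", ini);

    // At run time: read the recorded long path; no 8.3 names involved.
    char configDir[MAX_PATH];
    GetPrivateProfileStringA("Paths", "ConfigDir", "", configDir, MAX_PATH, ini);
    std::cout << "config dir: " << configDir << '\n';
    return 0;
}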
Delete the directory and recreate it.
Check the PC. The PROGRA~1 or DATFOL~1 might actually be ~2 instead. Put the 8.3 name used in your code into Explorer and see what it tells you.
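A quick way to do that check programmatically on the affected PC (a diagnostic sketch; the path is the one from the question):

// check83.cpp - expand a short (8.3) path and see what it resolves to.
// On the broken PC this should fail for the path from the .ini file if
// PROGRA~1 or DATFOL~1 no longer map to the expected directories.
#include <windows.h>
#include <iostream>

int main()
{
    const char* shortPath = "C:\\PROGRA~1\\DATFOL~1\\Config\\";
    char longPath[MAX_PATH];

    DWORD n = GetLongPathNameA(shortPath, longPath, MAX_PATH);
    if (n == 0 || n >= MAX_PATH) {
        std::cerr << "cannot resolve " << shortPath
                  << " (error " << GetLastError() << ")\n";
        return 1;  // matches the app's "Path not found"
    }
    std::cout << shortPath << " -> " << longPath << '\n';
    return 0;
}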

cc1plus: error: include: Value too large for defined data type when compiling with g++

I am making a project that should compile on both Windows and Linux. I made the project in Visual Studio and then wrote a makefile for Linux. I created all the files on Windows with VS.
It compiles and runs perfectly in VS, but when I run the makefile and it invokes g++, I get
$ g++ -c -I include -o obj/Linux_x86/Server.obj src/Server.cpp
cc1plus: error: include: Value too large for defined data type
cc1plus: error: src/Server.cpp: Value too large for defined data type
The code is nothing more than a Hello World at the moment. I just wanted to make sure everything was working before I started development. I have tried searching, but to no avail.
Any help would be appreciated.
I have found a solution, on Ubuntu at least. Like you, I noticed that the error only occurs on mounted Samba shares - it seems to come from g++ stat()ing the file: the inode has a very large value.
When mounting the share, add ,nounix,noserverino to the options, i.e.:
mount -t cifs -o user=me,pass=secret,nounix,noserverino //server/share /mount
I found the info at http://bbs.archlinux.org/viewtopic.php?id=85999
I had a similar problem. I compiled a project on a CIFS-mounted Samba share. With one Linux kernel the compilation succeeded, but with another Linux kernel (2.6.32.5) I got a similar error message: "Value too large for defined data type". When I used the proposed nounix,noserverino CIFS mount options, the problem was fixed. So in that case the problem was with the CIFS mount, and the error message is misleading, as there are no big files involved.
From the GNU Core Utils FAQ, item 27, "Value too large for defined data type":
It means that your version of the utilities were not compiled with
large file support enabled. The GNU utilities do support large files
if they are compiled to do so. You may want to compile them again and
make sure that large file support is enabled. This support is
automatically configured by autoconf on most systems. But it is
possible that on your particular system it could not determine how to
do that and therefore autoconf concluded that your system did not
support large files.
The message "Value too large for defined data type" is a system error
message reported when an operation on a large file is attempted using
a non-large file data type. Large files are defined as anything larger
than a signed 32-bit integer, or stated differently, larger than 2GB.
Many system calls that deal with files return values in a "long int"
data type. On 32-bit hardware a long int is 32-bits and therefore this
imposes a 2GB limit on the size of files. When this was invented that
was HUGE and it was hard to conceive of needing anything that large.
Time has passed and files can be much larger today. On native 64-bit
systems the file size limit is usually 2GB * 2GB. Which we will again
think is huge.
On a 32-bit system with a 32-bit "long int" you find that you can't
make it any bigger and also maintain compatibility with previous
programs. Changing that would break many things! But many systems make
it possible to switch into a new program mode which rewrites all of
the file operations into a 64-bit program model. Instead of "long"
they use a new data type called "off_t" which is constructed to be
64-bits in size. Program source code must be written to use the off_t
data type instead of the long data type. This is typically done by
defining -D_FILE_OFFSET_BITS=64 or some such. It is system dependent.
Once done and once switched into this new mode most programs will
support large files just fine.
See the next question if you have inadvertently created a large file
and now need some way to deal with it.
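To see the failure the FAQ describes first-hand, here is a small test (a sketch; build it 32-bit with and without large-file support and run it against a file larger than 2 GB):

// lfs_test.cpp - stat() a file and print its size. Built 32-bit without
// -D_FILE_OFFSET_BITS=64, stat() fails on files larger than 2 GB with
// EOVERFLOW: "Value too large for defined data type".
//
//   g++ -m32 lfs_test.cpp -o lfs_test                         # fails on big files
//   g++ -m32 -D_FILE_OFFSET_BITS=64 lfs_test.cpp -o lfs_test  # works
#include <cstdio>
#include <sys/stat.h>

int main(int argc, char** argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 2;
    }
    struct stat st;
    if (stat(argv[1], &st) != 0) {
        std::perror("stat");  // prints the "Value too large..." message
        return 1;
    }
    std::printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);
    return 0;
}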
12 years after this question was posted, I got the same error when building my C++ project in Docker on Ubuntu 20.04 under Windows 11 WSL2, using an old 32-bit compiler (gcc-linaro-arm-linux-gnueabi-2012.01-20120125_linux/bin/arm-linux-gnueabi-g++).
The issue is related to the mounting of the Windows filesystem into WSL, where we get 64-bit inodes that the old 32-bit toolchain does not support; see also The 64 bit inode problem.
As a quick workaround, you can build if you move your project into the Ubuntu home folder.
An alternative solution is to intercept stat() via LD_PRELOAD, as described in Build on Windows 10 with WSL, section "Fix stat in 32-bit binaries". This has to be done in the Docker image. To compile inode64.c I additionally had to install 32-bit header files with
sudo apt-get install gcc-multilib
If you are on a mergerfs filesystem, removing the use_ino option will solve the issue: https://github.com/trapexit/mergerfs/issues/485
I think your g++ parameters are a bit off, or conflicting.
-c: compile only
-I: directory for your includes (a plain "include" might be ambiguous; try the full path)
-o: output file (but -c says compile only)

What can be the reason for Windows error ERROR_DISK_FULL (112) when opening a NTFS alternate data stream?

My application writes some bytes of data to an alternate data stream. This works fine on all but one machine (Windows Server 2003 SP2).
There, CreateFile returns ERROR_DISK_FULL when I try to create an alternate data stream (on the root directory). I can't find the reason for this result, because:
There's plenty of space on that drive.
The drive is NTFS-formatted (according to GetVolumeInformation).
The drive supports alternate data streams (according to GetVolumeInformation).
Edit: I can provide some more information about what the reason is not:
I added many streams on a test system that didn't show the error and watched to see whether the error would occur. It didn't. Instead, after about 2000 streams with long file names, another error occurred and persisted: 1450 (ERROR_NO_SYSTEM_RESOURCES).
EDIT: Here is an example of one of the file names used:
char szStreamFileName[] = "C:\\:abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnoqrstuvwxyz012345";
EDIT: Our customer uses corporate antivirus software from Avira on this server. Maybe that is the reason (alternate data streams can be abused by malware).
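For reference, a minimal sketch of the failing call as described above (the stream name follows the shape shown in the edit; the error handling is for illustration):

// ads_repro.cpp - create a named stream on the root directory, the call
// that unexpectedly fails with ERROR_DISK_FULL (112) on the affected
// Windows Server 2003 machine.
#include <windows.h>
#include <iostream>

int main()
{
    // An ADS on the root directory is addressed as "C:\:streamname".
    const char* stream = "C:\\:abcdefghijklmnopqrstuvwxyz1234567890";

    HANDLE h = CreateFileA(stream, GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        DWORD err = GetLastError();
        std::cerr << "CreateFile failed with error " << err;
        if (err == ERROR_DISK_FULL)
            std::cerr << " (ERROR_DISK_FULL)";
        std::cerr << '\n';
        return 1;
    }
    const char data[] = "some bytes of data";
    DWORD written = 0;
    WriteFile(h, data, sizeof data - 1, &written, nullptr);
    CloseHandle(h);
    return 0;
}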
After opening a support ticket with MS, I now know that a read-only flag was set which can only be set (and reset) with undocumented Windows functions. Nobody knows who set this flag or why, but I sent them an image of the drive (after I got the machine from our customer) and that is how they figured it out. We only have a workaround in our application (we use another location if we detect this error). Meanwhile we know that some of our customers have this problem.
Are there any compressed/sparse files or alternate data streams?
Often backup applications receive ERROR_DISK_FULL errors attempting to back up compressed files and this causes quite a bit of confusion when there are still several gigabytes of free space on the drive. Other issues may also occur when copying compressed files. The goal of this blog is to give the reader a more thorough understanding of what really happens when you compress NTFS files.
From Understanding NTFS Compression
Just another possibility...
Did you check the number of currently open files in your OS?
The OS supports a maximum number of file handles; once that is exhausted, it can report ERROR_DISK_FULL or ERROR_NO_SYSTEM_RESOURCES.
And a second possibility...
The root directory is limited in the number of files it can hold - as I remember, 512 files on older (FAT) volumes. But NTFS supports an unlimited number of files in the root!
You might want to see what something like Sysinternals' Process Monitor utility captures when you try to create this file - it shows the return codes of the various APIs involved in the I/O stack, and one of them might give you a clue as to why 112 is being returned. Hopefully the level of detail in ProcMon is enough; if not, I imagine there are other, more detailed I/O trace facilities for Windows (but I don't know of them off the top of my head).
The filename you give is
char szStreamFileName[] = "C:\\:abcdefghijklm...
It starts with
C:\\:
Is that a typo in the post, or is there really a colon after the backslash? I think that's an illegal filename.
If you try to copy a file larger than the target filesystem's limit from NTFS to FAT/FAT32 (2 GB for FAT16, 4 GB for FAT32), you may see this error.
Just a shot in the dark, but are the permissions set correctly?
