Is there a way to create a PDF larger than 10 MB?

Running Ghostscript to create a PDF larger than 10 MB fails. I made sure I upgraded to the latest Ghostscript. Is this a hard limit, or is there a way to make Ghostscript allow file sizes larger than 10 MB?
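For reference, a minimal invocation of the kind being described might look like this (file names hypothetical; the actual command was not shown in the question):

gs -sDEVICE=pdfwrite -o output.pdf input.ps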

Related

Reading a matrix from a file takes too much RAM

I am reading a matrix from a file using readdlm. The file is about 400 MB in size. My PC has 8 GB of RAM. When I try to readdlm the matrix from this file, my PC eventually freezes, while the RAM consumption goes up until it consumes everything. The matrix is simply a 0, 1 matrix.
I don't understand why this happens. Storing this matrix in memory shouldn't take more than the 400 MB necessary to store the file.
What can I do?
The code I am using is simple:
readdlm("data.txt")
where data.txt is a 400 MB text file of tab-separated values. I am on Linux Mint 17.3 with Julia 0.4.
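One likely factor (my assumption; the question doesn't confirm it): with no element type given, readdlm parses every token into an 8-byte Float64 (or, if anything fails to parse, an even costlier Any array), while a 0 or 1 plus its tab separator occupies only 2 bytes in the file, so the in-memory array is roughly 4x the file size even before counting parsing temporaries. Passing an explicit narrow element type should shrink it considerably; a minimal sketch, assuming the file really contains only 0/1 tab-separated values:

# Parse directly into 1-byte integers instead of the default 8-byte Float64.
A = readdlm("data.txt", '\t', Int8)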

Windows Large Page Support other than 2 MB?

I have read that Intel chips support virtual memory page sizes of up to 1 GB. Using VirtualAlloc with MEM_LARGE_PAGES gets you 2 MB pages. Is there any way to get a different page size? We are currently using Server 2008 R2, but are planning to upgrade to Server 2012.
Doesn't look like it: the Large Page Support docs provide no mechanism for defining the size of the large pages. You're just required to make allocations whose size (and alignment, if explicitly requested) is a multiple of the minimum large page size.
I suppose it's theoretically possible that Windows could implement multiple large page sizes internally (the API function only tells you the minimum size), but they don't expose it at the API level. In practice, I'd expect diminishing returns for larger and larger pages; the overhead of TLB cache misses just won't matter as much when you're already reducing the TLB usage by several orders of magnitude.
In recent versions of Windows 10 (and in Windows 11 and later) it is finally possible to choose 1 GB (as opposed to 2 MB) pages to satisfy large allocations.
This is done by calling VirtualAlloc2 with a specific set of flags (you will need a recent SDK for the constants):
// Ask for nonpaged huge (1 GB) pages via an extended parameter:
MEM_EXTENDED_PARAMETER extended {};
extended.Type = MemExtendedParameterAttributeFlags;
extended.ULong64 = MEM_EXTENDED_PARAMETER_NONPAGED_HUGE;

// Reserve and commit 'size' bytes backed by 1 GB pages:
VirtualAlloc2 (GetCurrentProcess (), NULL, size,
               MEM_LARGE_PAGES | MEM_RESERVE | MEM_COMMIT,
               PAGE_READWRITE, &extended, 1);
If the 1 GB page(s) cannot be allocated, the function fails.
It might not be necessary to explicitly request 1 GB pages if your software already uses 2 MB ones.
Quoting Windows Internals, Part 1, 7th Edition:
On Windows 10 version 1607 x64 and Server 2016 systems, large pages may also be mapped with huge pages, which are 1 GB in size. This is done automatically if the allocation size requested is larger than 1 GB, but it does not have to be a multiple of 1 GB. For example, an allocation of 1040 MB would result in using one huge page (1024 MB) plus 8 “normal” large pages (16 MB divided by 2 MB).
Side note:
Unfortunately, the flags above work only for VirtualAlloc2, not for creating shared sections (CreateFileMapping2), where a new SEC_HUGE_PAGES flag also exists but always returns ERROR_INVALID_PARAMETER. But again, given the quote above, Windows might be using 1 GB pages transparently where appropriate anyway.
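One more prerequisite worth noting: any MEM_LARGE_PAGES allocation (whether 2 MB or 1 GB pages) requires the caller to hold SeLockMemoryPrivilege, and the minimum large-page size can be queried with GetLargePageMinimum. A minimal sketch of enabling the privilege (the helper name is mine; error handling trimmed to essentials):

#include <windows.h>

// Hypothetical helper: enables SeLockMemoryPrivilege for the current process.
// The account must also hold the "Lock pages in memory" right, assigned via
// local security policy, or the privilege simply won't be there to enable.
bool EnableLockMemoryPrivilege ()
{
    HANDLE token;
    if (!OpenProcessToken (GetCurrentProcess (),
                           TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return false;

    TOKEN_PRIVILEGES tp {};
    tp.PrivilegeCount = 1;
    tp.Privileges [0].Attributes = SE_PRIVILEGE_ENABLED;
    bool ok = LookupPrivilegeValue (NULL, SE_LOCK_MEMORY_NAME,
                                    &tp.Privileges [0].Luid)
           && AdjustTokenPrivileges (token, FALSE, &tp, 0, NULL, NULL)
           // AdjustTokenPrivileges can return TRUE without assigning anything:
           && GetLastError () == ERROR_SUCCESS;
    CloseHandle (token);
    return ok;
}

Without the privilege, large-page allocations fail with ERROR_PRIVILEGE_NOT_HELD.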

How to write a 10,000 x 5,000 term-document matrix using the 'Write Excel' operator

I am using RapidMiner Studio 6.0.007 with a trial license on Windows 7 SP1, on a Core i7 with 8 GB RAM and a 256 GB SSD 840 PRO. I want to write a term-document matrix of 5,000 columns and 10,000 rows, but I am unable to: with the Write Excel operator, memory utilization reaches about 98% of my 8 GB of RAM, and after many hours I get an error message about insufficient memory. Is there some optimal setting required in RapidMiner?
The trial license is limited to a small amount of memory; 1 GB, I believe. You can write the output as a CSV file instead, which might require less memory.
Alternatively, you can use the last 5.x community version, which is not memory-limited.
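As a rough sanity check on the memory demand (my own back-of-the-envelope numbers, not from RapidMiner's documentation):

10,000 rows x 5,000 columns = 50,000,000 cells
50,000,000 cells x 8 bytes per double ≈ 400 MB of raw values

An in-memory spreadsheet representation typically adds per-cell object overhead on top of the raw values, which can multiply that footprint several times over, so even 8 GB (let alone a 1 GB license cap) is quickly exhausted; CSV output can be written row by row, which is presumably why it needs less memory.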

OpenELEC XBMC image reduces SD card to 128 MB

I'm preparing an SD card with OpenELEC XBMC to use in a Raspberry Pi. To start with, I formatted the SD card using this software. Then I followed the steps on this page to load the image onto the SD card. Before writing the image, the card's size is around 4 GB (as it should be). After writing the image, the size of the SD card drops to around 128 MB. If I format the card again, it returns to 4 GB. Re-writing the image puts it back at 128 MB again.
I'm still awaiting delivery of my Raspberry Pi to test it; however, I find it hard to imagine that this is normal behavior, or that the Raspberry Pi would recognize the full 4 GB. Has anybody had this issue?
EDIT:
I'm using Windows 8.1
UPDATE:
Tried it in my Raspberry Pi and it shows 1 GB. Still 3 GB missing.
It's probably re-written the partition table and created partitions that your OS doesn't recognise. I'd be willing to bet that the only partition your OS recognises is the one that's 128 MB - OpenELEC is Linux-based, so, for example, one of the partitions will be Linux swap.
If you dd/bitwise-copy (which is what I assume the utility is doing) an image that's only 1 GB, the disk will only show as 1 GB. You can resize the partition or create a new one.
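If you want to see everything that's actually on the card (Windows only mounts the small FAT partition, hence the 128 MB), diskpart lists all partitions and can also wipe the card back to full capacity. A sketch - the disk number is hypothetical, so check the size column in 'list disk' carefully, and note that 'clean' erases the OpenELEC partitions along with everything else:

diskpart
list disk
select disk 1
clean
create partition primary
format fs=fat32 quick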

What are the dimensions of the largest usable JPEG image in GAE?

There are a couple of limits on the size of images when you start to talk about Google's App Engine:
10 MB -- the upload limit
1 MB -- the manipulation limit (I do not know what else to call this)
But folks have reported that they have exceeded the manipulation limit while working with images that are smaller than 1 MB...
So it seems there is another limit coming into play. My guess is that there is some limit on the size of the image after it has been transformed into 24/32-bit pixels.
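A worked example of that guess (the 4 bytes per pixel figure is my assumption for a 32-bit RGBA decode, not from the docs): a 2000 x 1500 JPEG might be only ~500 KB on disk, but decoded it becomes

2000 x 1500 pixels x 4 bytes/pixel = 12,000,000 bytes ≈ 11.4 MB

which is far over 1 MB. Under the same guess, the largest image that stays within the manipulation limit would be

1,048,576 bytes / 4 bytes per pixel = 262,144 pixels, i.e. 512 x 512

which would explain the reports above.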
Here is the documentation for Java, and here for Python.
