Information source - http://www.onicos.com/staff/iz/formats/gif.html#header
In GIF images the actual image size (width, height) is stored in the Image Block. To the best of my understanding, the Image Block is the very first block after the header.
Before the actual blocks begin, there is a memory area called the Global Color Table (0..255 x 3 bytes), from now on GCT. If I know the byte count reserved for the GCT, I can extract bytes 5-9 of the Image Block and obtain the actual image size.
Question:
How can I find out the size of the GCT?
OR
Where does GCT end?
OR
Where does Image Block begin?
OR
Where does Image Block end?
Everything you need for GIF encoding/decoding can be found here: 3MF Project GIF
GCT
This block is optional and not always present in a GIF file. Its size is determined by the number of colors and the bit width stored in the GIF header. I decode/load it like this:
struct _hdr
{
// Header
BYTE Signature[3]; /* Header Signature (always "GIF") */
BYTE Version[3]; /* GIF format version("87a" or "89a") */
// Logical Screen Descriptor
WORD xs; /* Logical screen width */
WORD ys; /* Logical screen height */
BYTE Packed; /* Screen and Color Map Information */
BYTE BackgroundColor; /* Background Color Index */
BYTE AspectRatio; /* Pixel Aspect Ratio */
} hdr;
gcolor_bits= (hdr.Packed &7)+1; // global palette
scolor_bits=((hdr.Packed>>4)&7)+1; // screen
_gcolor_sorted =hdr.Packed&8;
_gcolor_table =hdr.Packed&128;
scolors=1<<scolor_bits;
gcolors=1<<gcolor_bits;
If _gcolor_table is nonzero, then the GCT is present.
The GCT size is 3*gcolors [bytes], stored in order R,G,B.
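As a minimal sketch (the helper name is mine; it assumes hdr was read as above with 1-byte struct packing), the GCT byte count and the offset of the first block after it are:
// hypothetical helper: byte count of the GCT (0 if absent)
int gct_bytes(const _hdr &hdr)
{
int gcolor_bits=(hdr.Packed&7)+1;          // global palette bit depth
if (hdr.Packed&128) return 3<<gcolor_bits; // 3 bytes (R,G,B) per entry
return 0;                                  // no GCT present
}
// the first block starts at file offset 13 + gct_bytes(hdr)
// (6-byte signature/version + 7-byte Logical Screen Descriptor)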
Start of Image
This one is a bit tricky because GIF89a files may contain many optional blocks. You need a decoding loop that detects the type of each block and decodes or skips it according to its purpose. I do it like this:
struct _gfxext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Graphic Control Label (always F9h) */
BYTE BlockSize; /* Size of remaining fields (always 04h) */
BYTE Packed; /* Method of graphics disposal to use */
WORD DelayTime; /* Hundredths of seconds to wait */
BYTE ColorIndex; /* Transparent Color Index */
BYTE Terminator; /* Block Terminator (always 0) */
} gfx;
struct _txtext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Extension Label (always 01h) */
BYTE BlockSize; /* Size of Extension Block (always 0Ch) */
WORD TextGridLeft; /* X position of text grid in pixels */
WORD TextGridTop; /* Y position of text grid in pixels */
WORD TextGridWidth; /* Width of the text grid in pixels */
WORD TextGridHeight; /* Height of the text grid in pixels */
BYTE CellWidth; /* Width of a grid cell in pixels */
BYTE CellHeight; /* Height of a grid cell in pixels */
BYTE TextFgColorIndex; /* Text foreground color index value */
BYTE TextBgColorIndex; /* Text background color index value */
// BYTE *PlainTextData; /* The Plain Text data */
// BYTE Terminator; /* Block Terminator (always 0) */
};
struct _remext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Comment Label (always FEh) */
// BYTE *CommentData; /* Pointer to Comment Data sub-blocks */
// BYTE Terminator; /* Block Terminator (always 0) */
};
struct _appext
{
BYTE Introducer; /* Extension Introducer (always 21h) */
BYTE Label; /* Extension Label (always FFh) */
BYTE BlockSize; /* Size of Extension Block (always 0Bh) */
CHAR Identifier[8]; /* Application Identifier */
BYTE AuthentCode[3]; /* Application Authentication Code */
// BYTE *ApplicationData; /* Point to Application Data sub-blocks */
// BYTE Terminator; /* Block Terminator (always 0) */
};
// handle 89a extensions blocks
_gfxext gfxext; gfxext.Introducer=0;
_txtext txtext; txtext.Introducer=0;
_remext remext; remext.Introducer=0;
_appext appext; appext.Introducer=0;
if((hdr.Version[0]=='8')
&&(hdr.Version[1]=='9')
&&(hdr.Version[2]=='a')) _89a=true; else _89a=false;
if (_89a)
for (;!f.eof;)
{
f.peek((BYTE*)&dw,2);
if (dw==0xF921) { f.read((BYTE*)&gfxext,sizeof(_gfxext)); }
else if (dw==0x0121) { f.read((BYTE*)&txtext,sizeof(_txtext)); for (;!f.eof;) { f.read(&db,1); if (!db) break; f.read(dat,DWORD(db)); } }
else if (dw==0xFE21) { f.read((BYTE*)&remext,sizeof(_remext)); for (;!f.eof;) { f.read(&db,1); if (!db) break; f.read(dat,DWORD(db)); } }
else if (dw==0xFF21) { f.read((BYTE*)&appext,sizeof(_appext)); for (;!f.eof;) { f.read(&db,1); if (!db) break; f.read(dat,DWORD(db)); } }
else if ((dw&0x00FF)==0x0021) return; // corrupted file
else break; // no extension found
}
db is a BYTE variable
dw is a WORD variable
f is my file cache class; the members are self-explanatory, I hope:
f.read(&data,size) reads size BYTEs into data
f.peek(&data,size) does the same but does not advance the position in the file
f.eof indicates end of file was reached
This has to be done for each frame; after all of this, the image header (Image Descriptor) starts.
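After the extension loop, the Image Descriptor follows (Introducer always 2Ch) and holds the per-frame position and size the question asks about. A sketch of its layout (the struct name is mine; the field layout is per the GIF specification):
struct _imgdes
{
BYTE Introducer; /* Image Descriptor Introducer (always 2Ch) */
WORD Left;       /* X position of image on the logical screen */
WORD Top;        /* Y position of image on the logical screen */
WORD Width;      /* Width of the image in pixels */
WORD Height;     /* Height of the image in pixels */
BYTE Packed;     /* bit7: Local Color Table flag, bit6: interlace, bits0..2: LCT bit depth - 1 */
} imgdes;
// an optional Local Color Table follows if bit7 of Packed is set,
// then the LZW minimum code size BYTE and the data sub-blocks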
End of Image
The image block ends with a terminator. All chunks of image data start with a BYTE count; if it is zero, it is a terminator block. Usually a few BYTEs after the image are not used by the LZW data, so after you fill the whole image area, skip blocks until you hit the zero-sized block, and stop: that is the image end. If the BYTE after this is 0x3B, you have reached the end of the GIF file.
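Using the same f/db/dat conventions as above, a minimal sketch of that skip loop:
// skip the image data sub-blocks until the zero-sized terminator
for (;!f.eof;)
{
f.read(&db,1);         // sub-block byte count
if (!db) break;        // zero -> terminator, the image ends here
f.read(dat,DWORD(db)); // skip the sub-block payload
}
f.peek(&db,1); // if db is now 0x3B, the GIF trailer (end of file) was reached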
[notes]
Do not forget to encapsulate the structs with #pragma pack(1) and #pragma pack(), or manually set alignment to 1 byte. Beware of problems with signed data types (LZW data is unsigned), so cast explicitly where you can to avoid problems, or just use unsigned variables (with enough bit width) for decoding.
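For example, wrapping the structs like this keeps the WORD members unpadded:
#pragma pack(1) // 1-byte alignment so the WORD members are not padded
struct _hdr { /* ... fields as above ... */ };
#pragma pack()  // restore default alignment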
After upgrading to Xcode 12.0.1 my command line Mac app (written in Swift) for file decryption runs into these errors when trying to build:
Implicit declaration of function 'SecKeyEncrypt' is invalid in C99
Implicit declaration of function 'SecKeyRawSign' is invalid in C99
Implicit declaration of function 'SecKeyDecrypt' is invalid in C99
The en-/de-cryption code (written in Objective C) was taken from https://github.com/ideawu/Objective-C-RSA - it worked just fine in Xcode 11.
It uses this import statement
#import <Security/Security.h>
<Security/Security.h> has a line
#include <Security/SecKey.h>
and in this file, the methods are declared:
#if SEC_OS_IPHONE
/*!
@function SecKeyRawSign
@abstract Given a private key and data to sign, generate a digital
signature.
@param key Private key with which to sign.
@param padding See Padding Types above, typically kSecPaddingPKCS1SHA1.
@param dataToSign The data to be signed, typically the digest of the
actual data.
@param dataToSignLen Length of dataToSign in bytes.
@param sig Pointer to buffer in which the signature will be returned.
@param sigLen IN/OUT maximum length of sig buffer on input, actual
length of sig on output.
@result A result code. See "Security Error Codes" (SecBase.h).
@discussion If the padding argument is kSecPaddingPKCS1, PKCS1 padding
will be performed prior to signing. If this argument is kSecPaddingNone,
the incoming data will be signed "as is".
When PKCS1 padding is performed, the maximum length of data that can
be signed is the value returned by SecKeyGetBlockSize() - 11.
NOTE: The behavior of this function with kSecPaddingNone is undefined if the
first byte of dataToSign is zero; there is no way to verify leading zeroes
as they are discarded during the calculation.
If you want to generate a proper PKCS1 style signature with DER encoding
of the digest type - and the dataToSign is a SHA1 digest - use
kSecPaddingPKCS1SHA1.
*/
OSStatus SecKeyRawSign(
SecKeyRef key,
SecPadding padding,
const uint8_t *dataToSign,
size_t dataToSignLen,
uint8_t *sig,
size_t *sigLen)
__OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_2_0);
/*!
@function SecKeyRawVerify
@abstract Given a public key, data which has been signed, and a signature,
verify the signature.
@param key Public key with which to verify the signature.
@param padding See Padding Types above, typically kSecPaddingPKCS1SHA1.
@param signedData The data over which sig is being verified, typically
the digest of the actual data.
@param signedDataLen Length of signedData in bytes.
@param sig Pointer to the signature to verify.
@param sigLen Length of sig in bytes.
@result A result code. See "Security Error Codes" (SecBase.h).
@discussion If the padding argument is kSecPaddingPKCS1, PKCS1 padding
will be checked during verification. If this argument is kSecPaddingNone,
the incoming data will be compared directly to sig.
If you are verifying a proper PKCS1-style signature, with DER encoding
of the digest type - and the signedData is a SHA1 digest - use
kSecPaddingPKCS1SHA1.
*/
OSStatus SecKeyRawVerify(
SecKeyRef key,
SecPadding padding,
const uint8_t *signedData,
size_t signedDataLen,
const uint8_t *sig,
size_t sigLen)
__OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_2_0);
/*!
@function SecKeyEncrypt
@abstract Encrypt a block of plaintext.
@param key Public key with which to encrypt the data.
@param padding See Padding Types above, typically kSecPaddingPKCS1.
@param plainText The data to encrypt.
@param plainTextLen Length of plainText in bytes, this must be less
or equal to the value returned by SecKeyGetBlockSize().
@param cipherText Pointer to the output buffer.
@param cipherTextLen On input, specifies how much space is available at
cipherText; on return, it is the actual number of cipherText bytes written.
@result A result code. See "Security Error Codes" (SecBase.h).
@discussion If the padding argument is kSecPaddingPKCS1 or kSecPaddingOAEP,
PKCS1 (respectively OAEP) padding will be performed prior to encryption.
If this argument is kSecPaddingNone, the incoming data will be encrypted "as is".
kSecPaddingOAEP is the recommended value. Other values are not recommended
for security reasons (padding attack or malleability).
When PKCS1 padding is performed, the maximum length of data that can
be encrypted is the value returned by SecKeyGetBlockSize() - 11.
When memory usage is a critical issue, note that the input buffer
(plainText) can be the same as the output buffer (cipherText).
*/
OSStatus SecKeyEncrypt(
SecKeyRef key,
SecPadding padding,
const uint8_t *plainText,
size_t plainTextLen,
uint8_t *cipherText,
size_t *cipherTextLen)
__OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_2_0);
/*!
@function SecKeyDecrypt
@abstract Decrypt a block of ciphertext.
@param key Private key with which to decrypt the data.
@param padding See Padding Types above, typically kSecPaddingPKCS1.
@param cipherText The data to decrypt.
@param cipherTextLen Length of cipherText in bytes, this must be less
or equal to the value returned by SecKeyGetBlockSize().
@param plainText Pointer to the output buffer.
@param plainTextLen On input, specifies how much space is available at
plainText; on return, it is the actual number of plainText bytes written.
@result A result code. See "Security Error Codes" (SecBase.h).
@discussion If the padding argument is kSecPaddingPKCS1 or kSecPaddingOAEP,
the corresponding padding will be removed after decryption.
If this argument is kSecPaddingNone, the decrypted data will be returned "as is".
When memory usage is a critical issue, note that the input buffer
(cipherText) can be the same as the output buffer (plainText).
*/
OSStatus SecKeyDecrypt(
SecKeyRef key, /* Private key */
SecPadding padding, /* kSecPaddingNone,
kSecPaddingPKCS1,
kSecPaddingOAEP */
const uint8_t *cipherText,
size_t cipherTextLen, /* length of cipherText */
uint8_t *plainText,
size_t *plainTextLen) /* IN/OUT */
__OSX_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_2_0);
#endif // SEC_OS_IPHONE
The method for decryption where the error is raised looks like this:
+ (NSData *)decryptData:(NSData *)data withKeyRef:(SecKeyRef) keyRef{
const uint8_t *srcbuf = (const uint8_t *)[data bytes];
size_t srclen = (size_t)data.length;
size_t block_size = SecKeyGetBlockSize(keyRef) * sizeof(uint8_t);
UInt8 *outbuf = malloc(block_size);
size_t src_block_size = block_size;
NSMutableData *ret = [[NSMutableData alloc] init];
for(int idx=0; idx<srclen; idx+=src_block_size){
//NSLog(@"%d/%d block_size: %d", idx, (int)srclen, (int)block_size);
size_t data_len = srclen - idx;
if(data_len > src_block_size){
data_len = src_block_size;
}
size_t outlen = block_size;
OSStatus status = noErr;
status = SecKeyDecrypt(keyRef, // <<<<<<<<<<<<<<<<<<<<<< This raises the error
kSecPaddingNone,
srcbuf + idx,
data_len,
outbuf,
&outlen
);
if (status != 0) {
NSLog(@"SecKeyDecrypt failed. Error Code: %d", status);
ret = nil;
break;
}else{
//the actual decrypted data is in the middle, locate it!
int idxFirstZero = -1;
int idxNextZero = (int)outlen;
for ( int i = 0; i < outlen; i++ ) {
if ( outbuf[i] == 0 ) {
if ( idxFirstZero < 0 ) {
idxFirstZero = i;
} else {
idxNextZero = i;
break;
}
}
}
[ret appendBytes:&outbuf[idxFirstZero+1] length:idxNextZero-idxFirstZero-1];
}
}
free(outbuf);
CFRelease(keyRef);
return ret;
}
It seems that the en-/de-cryption functions can no longer be called directly. I am not sure what has changed here - is the problem due to a change in Xcode? And more importantly: how can this problem be fixed? (I am on Catalina 10.15.6.)
Any help is highly appreciated! (Please let me know if some information is missing.)
As Phillip Mills pointed out, there is an #if SEC_OS_IPHONE conditional in the Security/SecKey.h include, and indeed, if I create an iOS app instead of a Mac app, the project builds without errors. So I assume the conditional was introduced with Xcode 12, and functions like SecKeyEncrypt and SecKeyDecrypt can no longer be called on macOS (unless using Mac Catalyst) - maybe they were never officially supported there.
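For completeness: the supported replacements on macOS are SecKeyCreateEncryptedData/SecKeyCreateDecryptedData (macOS 10.12+). A rough sketch of the modern decryption call (not what I ended up using; key and cipher are placeholders here, and the algorithm must match how the data was encrypted):
#include <Security/Security.h>
// sketch only: 'key' is a valid private SecKeyRef, 'cipher' a CFDataRef holding the ciphertext
CFErrorRef error = NULL;
CFDataRef plain = SecKeyCreateDecryptedData(key,
                      kSecKeyAlgorithmRSAEncryptionPKCS1,
                      cipher,
                      &error);
if (plain == NULL) { /* inspect and release 'error' */ }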
Anyway, I have now added OpenSSL via CocoaPods and got the decryption part working. In case you are interested, my header file RSACryptoOpenSSL.h looks like this:
#import <Foundation/Foundation.h>
#import <openssl/bio.h>
#import <openssl/pem.h>
NS_ASSUME_NONNULL_BEGIN
@interface RSACryptoOpenSSL : NSObject
+ (NSString *)decryptMacOsString:(NSString *)str privateKey:(NSString *)privKey;
@end
NS_ASSUME_NONNULL_END
and the implementation file like this
#import "RSACryptoOpenSSL.h"
@implementation RSACryptoOpenSSL
+ (NSString *)decryptMacOsString:(NSString *)str privateKey:(NSString *)privKey
{
    NSData *data = [[NSData alloc] initWithBase64EncodedString:str options:NSDataBase64DecodingIgnoreUnknownCharacters];
    // load private key
    const char *private_key = [privKey UTF8String];
    BIO *bio = BIO_new_mem_buf((void*)private_key, (int)strlen(private_key));
    RSA *rsa_privatekey = PEM_read_bio_RSAPrivateKey(bio, NULL, 0, NULL);
    BIO_free(bio);
    int maxSize = RSA_size(rsa_privatekey);
    unsigned char *output = (unsigned char *) malloc(maxSize + 1);
    int bytes = RSA_private_decrypt((int)[data length], [data bytes], output, rsa_privatekey, RSA_PKCS1_PADDING);
    // RSA_private_decrypt returns -1 on failure and does not null-terminate the output
    NSString *ret = nil;
    if (bytes >= 0) {
        output[bytes] = '\0';
        ret = [NSString stringWithUTF8String:(char *)output];
    }
    RSA_free(rsa_privatekey);
    free(output);
    return ret;
}
@end
Thanks to doginthehat and timburks for the valuable information.
Using the ATtiny1616 with avr-gcc, I am trying to read and write to the EEPROM.
The ATtiny1616 uses the NVMCTRL (Nonvolatile Memory Controller) for byte-level reads/writes. I am using NVMCTRL to read/write blocks from the EEPROM, but it is not working correctly.
Here is an example to demonstrate what I am trying to do.
Let's say I want to save two different values in the EEPROM and then read back each one's value.
uint16_t eeprom_address1 = 0x01; //!< Address one for first saved value
uint16_t eeprom_address2 = 0x32; //!< Address two for second saved value
char save_one[] = "12345";   //!< Test value to save, one
char save_two[] = "testing"; //!< Test value to save, two
FLASH_0_write_eeprom_block(eeprom_address1,(uint8_t*)save_one,7); //!< Save first value to address 1
FLASH_0_write_eeprom_block(eeprom_address2,(uint8_t*)save_two,7); //!< Save second value to address 2
char test_data[7] = {0}; //!< Just some empty array to put chars into
FLASH_0_read_eeprom_block(eeprom_address1,(uint8_t*)test_data,7); //!< Read 7 bytes starting at address 1 back into test_data
Here are the read/write functions:
#define EEPROM_START (0x1400) //!< located in a header file
/**
 * \brief Read a block from eeprom
 *
 * \param[in] eeprom_adr The byte-address in eeprom to read from
 * \param[in] data Buffer to place read data into
 * \param[in] size Number of bytes to read
 *
 * \return Nothing
 */
void FLASH_0_read_eeprom_block(eeprom_adr_t eeprom_adr, uint8_t *data, size_t size)
{
// Read operation will be stalled by hardware if any write is in progress
memcpy(data, (uint8_t *)(EEPROM_START + eeprom_adr), size);
}
/**
 * \brief Write a block to eeprom
 *
 * \param[in] eeprom_adr The byte-address in eeprom to write to
 * \param[in] data The buffer to write
 * \param[in] size Number of bytes to write
 *
 * \return Status of write operation
 */
nvmctrl_status_t FLASH_0_write_eeprom_block(eeprom_adr_t eeprom_adr, uint8_t *data, size_t size)
{
uint8_t *write = (uint8_t *)(EEPROM_START + eeprom_adr);
/* Wait for completion of previous write */
while (NVMCTRL.STATUS & NVMCTRL_EEBUSY_bm)
;
/* Clear page buffer */
ccp_write_spm((void *)&NVMCTRL.CTRLA, NVMCTRL_CMD_PAGEBUFCLR_gc);
do {
/* Write byte to page buffer */
*write++ = *data++;
size--;
// If we have filled an entire page or written last byte to a partially filled page
if ((((uintptr_t)write % EEPROM_PAGE_SIZE) == 0) || (size == 0)) {
/* Erase written part of page and program with desired value(s) */
ccp_write_spm((void *)&NVMCTRL.CTRLA, NVMCTRL_CMD_PAGEERASEWRITE_gc);
}
} while (size != 0);
return NVM_OK;
}
The value returned when test_data is printed is "testing" (instead of the expected "12345").
When looking at the memory in debug mode, I can see that the value is always written to the first memory location in the data EEPROM (0x1400).
In this case the value "testing" starts at address 0x1400.
There seems to be something fundamental that I have failed to understand about reading from and writing to the EEPROM. Any guidance would be greatly appreciated.
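For comparison, avr-libc also provides block EEPROM helpers in <avr/eeprom.h> (assuming a toolchain with ATtiny1616 device support); a minimal cross-check against the functions above would be:
#include <avr/eeprom.h>

char out[8] = {0};
eeprom_write_block("testing", (void *)0x32, 8);  // write 8 bytes at EEPROM offset 0x32
eeprom_read_block(out, (const void *)0x32, 8);   // read them back into out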
I have a VC++ console app with a function that reads the printer's current orientation setting. Using the sample code from this MS page, I can successfully read the printer's setting when executing the program from the cmd prompt. However, if I execute it from a Windows service (written in C#), the current orientation is always returned as 1 (portrait), even though the other settings look correct. Why is that?
To summarize:
For a printer whose orientation is set to Landscape, the code below, if run from cmd.exe, correctly outputs:
original printer orientation=2
but if run from a Windows service written in C#, always outputs:
original printer orientation=1
/*
* Step 1:
* Allocate a buffer of the correct size.
*/
dwNeeded = DocumentProperties(NULL,
hPrinter, /* Handle to our printer. */
deviceName, /* Name of the printer. */
NULL, /* Asking for size, so */
NULL, /* these are not used. */
0); /* Zero returns buffer size. */
pDevMode = (LPDEVMODE)malloc(dwNeeded);
/*
* Step 2:
* Get the default DevMode for the printer and
* modify it for your needs.
*/
dwRet = DocumentProperties(NULL,
hPrinter,
deviceName,
pDevMode, /* The address of the buffer to fill. */
NULL, /* Not using the input buffer. */
DM_OUT_BUFFER); /* Have the output buffer filled. */
if (dwRet != IDOK)
{
/* If failure, cleanup and return failure. */
free(pDevMode);
ClosePrinter(hPrinter);
return NULL;
}
cout << "original printer orientation=";
cout << pDevMode->dmOrientation;
This was due to permissions. The Windows service was running under "Local System", which of course does not see the printer setting changes I made under my own logon name. Thanks to Retired Ninja, who made me think of permissions.
Is there any way to set the text layer color on a Pebble watchface to multiple colors? For instance, if the time is 12:00 I would like the 1 to be one color, the 2 to be a second color, the : to be the first color, and so on and so forth. I cannot find this information anywhere, and the method seems to take only a single value.
It is easy to do; you need to change the context color each time. For example:
void time_update_callback(Layer *layer, GContext *ctx)
{
/* Get a layer rect to re-draw */
GRect layer_bounds = layer_get_bounds(layer);
/* Get time from variables */
char h1[2];
char h2[2];
char m1[2];
char m2[2];
h1[0] = hour_text_visible[0];
h2[0] = hour_text_visible[1];
m1[0] = minute_text_visible[0];
m2[0] = minute_text_visible[1];
h1[1] = h2[1] = m1[1] = m2[1] = 0;
/* Add y padding to GRect */
layer_bounds.origin.y += 1;
/* Aux copy */
GRect origin = layer_bounds;
/* Change color to Black */
graphics_context_set_text_color(ctx, GColorBlack);
/* Move */
layer_bounds.origin.x += 4;
layer_bounds.origin.y += 1;
/* Draw assuming you have a font loaded */
graphics_draw_text(ctx, h1, font, layer_bounds, GTextOverflowModeTrailingEllipsis, GTextAlignmentLeft, NULL);
/* Move again */
layer_bounds.origin.x += 20;
/* Draw black also */
graphics_draw_text(ctx, h2, font, layer_bounds, GTextOverflowModeTrailingEllipsis, GTextAlignmentLeft, NULL);
/* move */
layer_bounds.origin.x += 30;
/* Change color to Blue */
graphics_context_set_text_color(ctx, GColorBlue);
/* Draw in blue */
graphics_draw_text(ctx, m1, font, layer_bounds, GTextOverflowModeTrailingEllipsis, GTextAlignmentLeft, NULL);
/* etc */
}
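For this callback to run, it has to be registered as the layer's update procedure (assuming layer is the Layer the time is drawn on):
layer_set_update_proc(layer, time_update_callback);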
I want to convert a UINT16 monochrome image to an 8-bit image, in C++.
I have that image in a
char *buffer;
I'd like to give the new converted buffer to a QImage (Qt).
I'm trying with FreeImagePlus:
fipImage fimage;
if (fimage.loadFromMemory(...) == false)
//error
loadFromMemory needs a fipMemoryIO reference:
loadFromMemory(fipMemoryIO &memIO, int flag = 0)
So I do
fipImage fimage;
BYTE *buf = (BYTE*)malloc(gimage.GetBufferLength() * sizeof(BYTE));
// 'buf' is empty, I have to fill it with 'buffer' content
// how can I do it?
fipMemoryIO memIO(buf, gimage.GetBufferLength());
fimage.loadFromMemory(memIO);
if (fimage.convertTo8Bits() == true)
cout << "Good";
Then I would do something like
fimage.saveToMemory(...
or
fimage.saveToHandle(...
I don't understand what a FREE_IMAGE_FORMAT is, which is the first argument to either of those two functions. I can't find information about that type in the FreeImage documentation.
Then I'd finish with
imageQt = new QImage(destiny, dimX, dimY, QImage::Format_Indexed8);
How can I fill 'buf' with the content of the initial buffer?
And get the data from the fipImage to a uchar* data for a QImage?
Thanks.
The conversion is simple to do in plain old C++; there is no need for external libraries unless they are significantly faster and you care about such a speedup. Below is how I'd do the conversion, at least as a first cut. The data is converted in place inside the input buffer, since the output is smaller than the input.
QImage from16Bit(void * buffer, int width, int height) {
    int size = width*height*2; // length of data in buffer, in bytes
    quint8 * output = reinterpret_cast<quint8*>(buffer);
    const quint16 * input = reinterpret_cast<const quint16*>(buffer);
    if (!size) return QImage();
    do {
        *output++ = *input++ >> 8; // keep the most significant byte of each sample
    } while (size -= 2);
    // pass the start of the buffer, not the advanced 'output' pointer, and give
    // bytesPerLine explicitly since the rows are no longer 32-bit aligned
    return QImage(reinterpret_cast<uchar*>(buffer), width, height, width,
                  QImage::Format_Indexed8);
}