How to initialize __m128i array statically in gcc?

I am porting some SSE optimization code from Windows to Linux, and I found that the following code, which works well in MSVC, won't work in GCC.
The code initializes an array of __m128i, where each __m128i holds 16 int8_t values. It does compile with gcc, but the result is not as expected.
Actually, since gcc defines __m128i in terms of long long int, the code ends up initializing the array as if it were:
long long int coeffs_ssse3[4] = {64, 83, 64, 36};
I googled and was told that "The only portable way to initialize a vector is to use the _mm_set_XXX intrinsics." However, I want to know: is there any other way to initialize the __m128i array? Preferably statically, and without needing to modify the following code much (since I have tons of code in this format). Any suggestion is appreciated.
static const __m128i coeffs_ssse3[4] =
{
{ 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0},
{ 83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83,-1},
{ 64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0},
{ 36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1}
};

It seems that gcc doesn't treat the __m128* types as candidates for aggregate initialization. Since they aren't standard types, this behavior will vary from compiler to compiler. One approach is to declare the array as an aligned array of 8-bit integers, then cast a pointer to it:
static const int8_t coeffs[64] __attribute__((aligned(16))) =
{
64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0,
83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83,-1,
64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0,
36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1
};
static const __m128i *coeffs_ssse3 = (__m128i *) coeffs;
However, this syntax (__attribute__((aligned(x)))) isn't supported by Visual Studio, so you would need some #ifdef trickery in there to use the right directive to achieve the alignment you want on all of your target platforms; a sketch of that follows.
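For illustration, a minimal sketch of that #ifdef trickery (ALIGNED16 is a made-up macro name, not something either compiler provides):
#include <stdint.h>

/* Pick the compiler-specific alignment directive. */
#if defined(_MSC_VER)
  #define ALIGNED16 __declspec(align(16))        /* MSVC spelling */
#else
  #define ALIGNED16 __attribute__((aligned(16))) /* GCC/Clang spelling */
#endif

ALIGNED16 static const int8_t coeffs[64] =
{
    64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0,
    83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83,-1,
    64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0,
    36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1
};
(On a C11 compiler, alignas(16) from <stdalign.h> would also work portably.)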

Related

How to extract the raw value of a subdocument in mongo's libbson C library?

Consider the mongo document {"key1": "value1", "key2": {"subkey1": "subvalue1"}}. This can be encoded (using the python bson module, e.g. bson.encode(DATA)) to the following uint8_t array: [56, 0, 0, 0, 2, 107, 101, 121, 49, 0, 7, 0, 0, 0, 118, 97, 108, 117, 101, 49, 0, 3, 107, 101, 121, 50, 0, 28, 0, 0, 0, 2, 115, 117, 98, 107, 101, 121, 49, 0, 10, 0, 0, 0, 115, 117, 98, 118, 97, 108, 117, 101, 49, 0, 0, 0]. Now, I use this array to initialize the bson_t struct and use the iterator to find the subdocument for key2.
From that, I want to create another bson_t document. I tried using the bson_iter_document() method, but it gives me a "precondition failed: document" error. Maybe I'm not using it right. Is there another way to do it properly?
void test_bson() {
    uint8_t raw[] = {56, 0, 0, 0, 2, 107, 101, 121, 49, 0, 7, 0, 0, 0, 118, 97, 108, 117, 101, 49, 0, 3, 107, 101, 121, 50, 0, 28, 0, 0, 0, 2, 115, 117, 98, 107, 101, 121, 49, 0, 10, 0, 0, 0, 115, 117, 98, 118, 97, 108, 117, 101, 49, 0, 0, 0};
    bson_t *bson;
    bson = bson_new_from_data(raw, 56);
    bson_iter_t iter;
    bson_iter_init(&iter, bson);
    if (bson_iter_find(&iter, "key2")) {
        printf("found. %d\n", bson_iter_type(&iter));
        const uint8_t **subdocument;
        uint32_t subdoclen = 56;
        bson_iter_document(&iter, &subdoclen, subdocument);
    }
    bson_free(bson);
}
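For reference, a minimal sketch of the usual pattern, based on libbson's documented API: bson_iter_document() takes the address of a uint32_t and the address of a single const uint8_t * (the code above passes an uninitialized double pointer instead), and the returned view can then be wrapped with bson_new_from_data(). Note also that a bson_t is released with bson_destroy(), not bson_free().
if (bson_iter_find(&iter, "key2") && BSON_ITER_HOLDS_DOCUMENT(&iter)) {
    uint32_t subdoclen;
    const uint8_t *subdoc_data;  /* single pointer, passed by address below */
    bson_iter_document(&iter, &subdoclen, &subdoc_data);
    bson_t *subdoc = bson_new_from_data(subdoc_data, subdoclen);
    if (subdoc) {
        /* ... use the subdocument ... */
        bson_destroy(subdoc);
    }
}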

Why won't ruby-mcrypt accept an array as a key?

Hello, I am having trouble encrypting with the ruby-mcrypt gem when using an array as the key and the value. The gem lets me construct a cipher with an array key fine: cipher = Mcrypt.new("rijndael-256", :ecb, secret) works. But it gives me an error when I try to encrypt. I've tried many things but no luck. Does anyone know if Mcrypt just doesn't like encrypting with an array?
require 'mcrypt'

def encrypt(plain, secret)
  cipher = Mcrypt.new("rijndael-256", :ecb, secret)
  cipher.padding = :zeros
  encrypted = cipher.encrypt(plain)
  p encrypted
  encrypted.unpack("H*").first.to_s.upcase
end

array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(array_to_encrypt, key_array)
p "RESULT IS #{result}"
The output is as follows:
Mcrypt::RuntimeError: Could not initialize mcrypt: Key length is not legal.
I traced this error to here in the ruby-mcrypt gem but don't understand it well enough to figure out why I am getting the error message. Any help or insights would be amazing. Thanks!
The library doesn't support arrays. You'll need to pack the byte arrays into Strings instead:
def binary(byte_array)
  byte_array.pack('C*')  # pack each element as an unsigned 8-bit byte
end

array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(binary(array_to_encrypt), binary(key_array))
p "RESULT IS #{result}"

Converting a UTF-16LE Elixir bitstring into an Elixir String

Given an Elixir bitstring encoded in UTF-16LE:
<<68, 0, 101, 0, 118, 0, 97, 0, 115, 0, 116, 0, 97, 0, 116, 0, 111, 0, 114, 0, 0, 0>>
how can I get this converted into a readable Elixir String (it spells out "Devastator")? The closest I've gotten is transforming the above into a list of the Unicode codepoints (["0044", "0065", ...]) and trying to prepend the \u escape sequence to them, but Elixir throws an error since it's an invalid sequence. I'm out of ideas.
The simplest way is using functions from the :unicode module:
:unicode.characters_to_binary(utf16binary, {:utf16, :little})
For example
<<68, 0, 101, 0, 118, 0, 97, 0, 115, 0, 116, 0, 97, 0, 116, 0, 111, 0, 114, 0, 0, 0>>
|> :unicode.characters_to_binary({:utf16, :little})
|> IO.puts
#=> Devastator
(There's a null byte at the very end, so the shell will show the result using binary display rather than string display, and depending on the OS it may print some extra representation for the null byte.)
You can make use of Elixir's pattern matching, specifically <<codepoint::utf16-little>>:
defmodule Convert do
  def utf16le_to_utf8(binary), do: utf16le_to_utf8(binary, "")

  defp utf16le_to_utf8(<<codepoint::utf16-little, rest::binary>>, acc) do
    utf16le_to_utf8(rest, <<acc::binary, codepoint::utf8>>)
  end

  defp utf16le_to_utf8("", acc), do: acc
end
<<68, 0, 101, 0, 118, 0, 97, 0, 115, 0, 116, 0, 97, 0, 116, 0, 111, 0, 114, 0, 0, 0>>
|> Convert.utf16le_to_utf8
|> IO.puts
<<192, 3, 114, 0, 178, 0>>
|> Convert.utf16le_to_utf8
|> IO.puts
Output:
Devastator
πr²

Difference between CV_32FC3 and CV_64FC3 in OpenCV?

I was testing around with OpenCV matrices and a display function, and hit a bug that took me more than half a day to reveal. I originally tried to display OpenCV matrices regardless of the matrix type (e.g. CvMat or Mat, ...) with a display method recommended by Mr vasile in another post of mine, Multi channel Mat display function. The display method simply writes all the data of the matrix to the cout stream.
This is my program:
// First: CV_32FC3 works OK
float objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat objptsmat = cvMat( 1, 4, CV_32FC3, objpts);
CvMat* objectPoints = &objptsmat;
CvMatShow(objectPoints);
getchar();
output:
// Second: CV_64FC3 crashes
float objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat objptsmat = cvMat( 1, 4, CV_64FC3, objpts);
CvMat* objectPoints = &objptsmat;
CvMatShow(objectPoints);
getchar();
output:
They should both be the same, right?
In the second example, you should declare the array as
double objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
because cvMat() wraps your buffer without converting it: with CV_64FC3 the data is read as 8-byte doubles, while a float array only provides 4-byte elements.
You can read CV_xxtCn as
xx: number of bits
t: type (F = floating point type, S = signed integer, U = unsigned integer)
n: number of channels
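Putting that together, a corrected version of the second example (CvMatShow being the display helper from the post linked above) would be:
// The element type now matches the CV_64FC3 flag: 64-bit doubles, 3 channels.
double objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat objptsmat = cvMat( 1, 4, CV_64FC3, objpts);
CvMat* objectPoints = &objptsmat;
CvMatShow(objectPoints);
getchar();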

ActionScript 3 Read JPEG quality

I am developing an image uploader for Flash 10. Is there a way to read the JPEG quality of the browsed images?
Unfortunately, it can't be done directly:
"The quality factor is not stored directly in the JPEG file, so you cannot read the quality factor from the file." (from: Microsoft support pages...)
In more detail:
"The quantization table that was used to compress an image is stored in the JFIF header, but the JPEG Quality Factor that was used to generate the quantization table is not stored along with the image and hence the original JPEG Quality Factor is lost." (from: JPEG Compression Metrics as a Quality Aware Image Transcoding, by Surendar Chandra and Carla Schlatter Ellis)
The above quote is from a paper which discusses ways to estimate the level of compression (by examining the quantization tables used in the image), but it doesn't look easy to implement: there's an example here which is part of the ImageMagick codebase, but it's written in C.
ImageMagick has been ported to Haxe, which can be compiled into Flash code, so conceivably you could get something working, but I'm afraid it's beyond my skills to explain how!
EDIT: just found a similar question on SuperUser, which also mentions ImageMagick.
EDIT: you might also be interested in the answers to this question, which asked how to get the size of an image without loading the whole file (good for dealing with images bigger than Flash can handle).
I used libjpeg to do this job; maybe you can refer to my code:
#include <stdio.h>
#include <math.h>
#include "jpeglib.h"
#include <setjmp.h>

/* Standard IJG luminance quantization table (the quality-50 baseline). */
static const unsigned int std_luminance_quant_tbl[DCTSIZE2] = {
    16,  11,  10,  16,  24,  40,  51,  61,
    12,  12,  14,  19,  26,  58,  60,  55,
    14,  13,  16,  24,  40,  57,  69,  56,
    14,  17,  22,  29,  51,  87,  80,  62,
    18,  22,  37,  56,  68, 109, 103,  77,
    24,  35,  55,  64,  81, 104, 113,  92,
    49,  64,  78,  87, 103, 121, 120, 101,
    72,  92,  95,  98, 112, 100, 103,  99
};

int ReadJpegQuality(const char *filename)
{
    /* Read the whole file into memory. */
    FILE *infile = fopen(filename, "rb");
    if (infile == NULL)
        return -1;
    fseek(infile, 0, SEEK_END);
    size_t sz = ftell(infile);
    fseek(infile, 0, SEEK_SET);
    unsigned char *buffer = new unsigned char[sz];
    fread(buffer, 1, sz, infile);
    fclose(infile);

    /* Parse just the header; this fills in the quantization tables. */
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_mem_src(&cinfo, buffer, sz);
    jpeg_read_header(&cinfo, TRUE);

    int tmp_quality = 0;
    int linear_quality = 0;
    const int aver_times = 3;  /* average the estimate over the first 3 usable entries */
    int times = 0;
    int aver_quality = 0;
    for (int i = 0; i < DCTSIZE2; i++)
    {
        long temp = cinfo.quant_tbl_ptrs[0]->quantval[i];
        if (temp < 32767L && temp > 0)
        {
            /* Invert the IJG scaling: table[i] = (std[i] * scale + 50) / 100 */
            linear_quality = ceil((float)(temp * 100L - 50L) / std_luminance_quant_tbl[i]);
            if (linear_quality == 1) tmp_quality = 1;
            else if (linear_quality == 100) tmp_quality = 100;
            else if (linear_quality > 100)
                tmp_quality = ceil((float)5000 / linear_quality);    /* quality below 50 */
            else
                tmp_quality = 100 - ceil((float)linear_quality / 2); /* quality 50 or above */
            aver_quality += tmp_quality;
            if (aver_times == ++times)
            {
                aver_quality /= aver_times;
                break;
            }
        }
    }
    jpeg_destroy_decompress(&cinfo);
    delete[] buffer;  /* release the file buffer */
    return aver_quality;
}

int main(int argc, char **argv)
{
    printf("quality: %d\n", ReadJpegQuality("test1.jpg"));
    return 0;
}
This method uses libjpeg to read the JPEG file's quantization table, then estimates the quality setting from that table by inverting the scaling that libjpeg applies to the standard table.
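As a sanity check on the arithmetic (assuming the standard IJG convention that a quality of 50 or more maps to scale = 200 - 2*quality, with table[i] = (std[i]*scale + 50) / 100): an image saved at quality 75 gets scale 50, so the first luminance entry becomes (16*50 + 50) / 100 = 8. Feeding temp = 8 back through the code above gives linear_quality = ceil((800 - 50) / 16) = 47 and an estimated quality of 100 - ceil(47/2) = 76, within a point of the true setting (the ceil rounding accounts for the difference).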
