I am a novice in OpenCV. I want to use the equality operator between a vector element and a matrix element to check whether both values are equal. How do I do that?
Thanks in advance
I'm not sure if I understand your question correctly, but if you just want to compare one element of the vector to one element of the matrix, this can be done in the following way:
cv::Vec3b vec(1, 2, 3);
cv::Mat mat(2, 2, CV_8UC1);
if (vec[0] == mat.at<uchar>(1, 0)) {
...
}
This compares the first element of a uchar vector to the second-row, first-column element of a uchar matrix.
If you want to iterate over the matrix, do it the following way:
cv::Mat mat(2, 2, CV_8UC1);
uchar* ptr;
for (int row = 0; row < mat.rows; ++row) {
ptr = mat.ptr<uchar>(row);
for (int col = 0; col < mat.cols; ++col) {
if(ptr[col] == ...)
}
}
EDIT: the equivalent for float; just exchange uchar for float:
cv::Vec3f vec(1, 2, 3);
cv::Mat mat(2, 2, CV_32FC1);
if (vec[0] == mat.at<float>(1, 0)) {
...
}
And:
cv::Mat mat(2, 2, CV_32FC1);
float* ptr;
for (int row = 0; row < mat.rows; ++row) {
ptr = mat.ptr<float>(row);
for (int col = 0; col < mat.cols; ++col) {
if(ptr[col] == ...)
}
}
If you have multiple channels, ptr[col] returns not a single value but an OpenCV vector of the matrix's data type, with as many elements as the matrix has channels. You can also directly append another [] operator with the index of the channel you want to access:
if(ptr[col][channel] == ...)
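If you want to compare whole pixels at once instead of single channels, here is a minimal sketch (my own illustration, assuming a 3-channel 8-bit matrix; cv::Vec supports element-wise operator==):
cv::Vec3b target(1, 2, 3); // hypothetical reference pixel
cv::Mat img(2, 2, CV_8UC3);
for (int row = 0; row < img.rows; ++row) {
    cv::Vec3b* p = img.ptr<cv::Vec3b>(row);
    for (int col = 0; col < img.cols; ++col) {
        if (p[col] == target) {
            // all three channels are equal
        }
    }
}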
How to find out the data type of your cv::Mat?
From the type and depth specifier of your matrix you can see what data type it stores:
CV_8U - 8-bit unsigned integers ( 0..255 )
CV_8S - 8-bit signed integers ( -128..127 )
CV_16U - 16-bit unsigned integers ( 0..65535 )
CV_16S - 16-bit signed integers ( -32768..32767 )
CV_32S - 32-bit signed integers ( -2147483648..2147483647 )
CV_32F - 32-bit floating-point numbers ( -FLT_MAX..FLT_MAX, INF, NAN )
CV_64F - 64-bit floating-point numbers ( -DBL_MAX..DBL_MAX, INF, NAN )
These are the depth specifiers for matrices; you can find out the depth of your matrix by calling cv::Mat::depth(). They specify which data type one element has. The type specifiers you use when creating a matrix also contain the information of how many channels the matrix shall have: just append Cx to the depth specifier, x being the number of channels. For example, CV_8UC3 would be a matrix with three channels and 8-bit unsigned chars as data type (so a pretty normal 8-bit image). This information can be obtained from an existing matrix by calling cv::Mat::type(), and the number of channels is returned by cv::Mat::channels().
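For instance, a quick sketch of querying this information from an existing matrix (the values in the comments assume a CV_8UC3 matrix):
cv::Mat img(2, 2, CV_8UC3);
int depth = img.depth();       // CV_8U
int type = img.type();         // CV_8UC3
int channels = img.channels(); // 3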
For OpenCV vectors, the type specifiers are similar:
typedef Vec<uchar, 2> Vec2b;
typedef Vec<uchar, 3> Vec3b;
typedef Vec<uchar, 4> Vec4b;
typedef Vec<short, 2> Vec2s;
typedef Vec<short, 3> Vec3s;
typedef Vec<short, 4> Vec4s;
typedef Vec<int, 2> Vec2i;
typedef Vec<int, 3> Vec3i;
typedef Vec<int, 4> Vec4i;
typedef Vec<float, 2> Vec2f;
typedef Vec<float, 3> Vec3f;
typedef Vec<float, 4> Vec4f;
typedef Vec<float, 6> Vec6f;
typedef Vec<double, 2> Vec2d;
typedef Vec<double, 3> Vec3d;
typedef Vec<double, 4> Vec4d;
typedef Vec<double, 6> Vec6d;
Related
I was recently asked this question in a C++ interview: improve the piece of code below, which fails because adding two ints can produce a result that only fits in a long, and the return type needs to be derived accordingly.
The code below fails because the decltype()-based derivation is not intelligent enough to choose a return type based on the actual range of the input values; it only looks at the type, and so derives the same type as the return type. Hence we perhaps need some template metaprogramming technique to derive the return type as long if T is int.
How can this be generalized? Any hints or clues?
I feel that decltype() won't be helpful here.
#include<iostream>
#include<string>
#include<climits>
using namespace std;
template<typename T> auto adder(const T& i1, const T& i2) -> decltype(i1+i2)
{
return(i1+i2);
}
int main(int argc, char* argv[])
{
cout << adder(INT_MAX-10, INT_MAX-3) << endl; // wrong.
cout << adder<long>(INT_MAX-10, INT_MAX-3) << endl; // correct!!.
return(0);
}
Hence we perhaps need some template metaprogramming technique to derive the return type as long if T is int.
Not so simple.
If T is int, you can't be sure that long is enough.
The standard says only that
1) the number of bits for int (sizeof(int) * CHAR_BIT) is at least 16
2) the number of bits for long (sizeof(long) * CHAR_BIT) is at least 32
3) sizeof(int) <= sizeof(long)
So if a compiler manages an int with sizeof(int) == sizeof(long), that is perfectly legal, and
adder<long>(INT_MAX-10, INT_MAX-3);
doesn't work, because long may not be enough to contain (without overflow) the sum of two ints.
I don't see a simple and elegant solution.
The best that comes to my mind is based on the fact that C++11 introduced the following types:
1) std::int_least8_t, smallest integer type with at least 8 bits
2) std::int_least16_t, smallest integer type with at least 16 bits
3) std::int_least32_t, smallest integer type with at least 32 bits
4) std::int_least64_t, smallest integer type with at least 64 bits
C++11 also introduced std::intmax_t as the maximum-width integer type.
So I propose the following template type selector
template <std::size_t N, typename = std::true_type>
struct typeFor;
/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };
that, given a number of bits, defines the corresponding smallest "at least" integer type.
I also propose the following using alias
template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;
that, given a type T, detects the smallest integer type that can surely contain the sum of two T values (an integer type whose number of bits is at least the number of bits of T, plus one).
So your adder() simply becomes
template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }
Observe that the returned value isn't simply
return i1 + i2;
otherwise you would return the correct type but the wrong value: i1 + i2 is calculated as a T value, so it can overflow before the sum is assigned to a typeNext<T> variable.
To avoid this problem, you have to initialize a typeNext<T> temporary with one of the two values (typeNext<T>{i1}), then add the other (typeNext<T>{i1} + i2), obtaining a typeNext<T> value, and finally return the computed value. This way the sum is calculated as a typeNext<T> sum and you don't get overflow.
The following is a full compiling example
#include <cstdint>
#include <climits>
#include <iostream>
#include <type_traits>
template <std::size_t N, typename = std::true_type>
struct typeFor;
/* in case std::intmax_t is bigger than 64 bits */
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool,
(N > 64u) && (N <= sizeof(std::intmax_t)*CHAR_BIT)>>
{ using type = std::intmax_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 32u) && (N <= 64u)>>
{ using type = std::int_least64_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 16u) && (N <= 32u)>>
{ using type = std::int_least32_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > 8u) && (N <= 16u)>>
{ using type = std::int_least16_t; };
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N <= 8u)>>
{ using type = std::int_least8_t; };
template <typename T>
using typeNext = typename typeFor<1u+sizeof(T)*CHAR_BIT>::type;
template<typename T>
typeNext<T> adder (T const & i1, T const & i2)
{ return {typeNext<T>{i1} + i2}; }
int main()
{
auto x = adder(INT_MAX-10, INT_MAX-3);
std::cout << "int: " << sizeof(int)*CHAR_BIT << std::endl;
std::cout << "long: " << sizeof(long)*CHAR_BIT << std::endl;
std::cout << "x: " << sizeof(x)*CHAR_BIT << std::endl;
std::cout << std::is_same<long, decltype(x)>::value << std::endl;
}
On my 64-bit Linux platform, I get 32 bits for int, 64 bits for long and for x, and also that long and decltype(x) are the same type.
But this is true for my platform; nothing guarantees that long and decltype(x) are always the same.
Observe also that trying to get a type for the sum of two std::intmax_t's
std::intmax_t y {};
auto z = adder(y, y);
gives an error and doesn't compile, because no typeFor is defined for an N bigger than sizeof(std::intmax_t)*CHAR_BIT.
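If you would rather see an explicit diagnostic than a missing-specialization error in that case, one possible addition (my sketch, not part of the solution above) is a catch-all specialization whose only job is to fire a static_assert:
template <std::size_t N>
struct typeFor<N, std::integral_constant<bool, (N > sizeof(std::intmax_t)*CHAR_BIT)>>
{ static_assert(N <= sizeof(std::intmax_t)*CHAR_BIT, "typeFor: no integer type is wide enough for this sum"); };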
Given a vector of reals c and a vector of integers rw, I want to create a vector z with elements z_i=c_i^rw_i. I tried to do this using the component-wise function pow, but I get a compiler error.
#include <Eigen/Core>
typedef Eigen::VectorXd RealVector;
typedef Eigen::VectorXi IntVector; // dynamically-sized vector of integers
RealVector c; c << 2, 3, 4, 5;
IntVector rw; rw << 6, 7, 8, 9;
RealVector z = c.pow(rw); // compile error
The compiler error is
error C2664: 'const Eigen::MatrixComplexPowerReturnValue<Derived> Eigen::MatrixBase<Derived>::pow(const std::complex<double> &) const': cannot convert argument 1 from 'IntVector' to 'const double &'
with
[
Derived=Eigen::Matrix<double,-1,1,0,-1,1>
]
c:\auc\sedanal\LammSolve.h(117): note: Reason: cannot convert from 'IntVector' to 'const double'
c:\auc\sedanal\LammSolve.h(117): note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
What is wrong with this code? And, assuming it can be fixed, how would I do the same operation when c is a real matrix instead of a vector, to compute c_ij^b_i for all elements of c?
Compiler is Visual Studio 2015, running under 64-bit Windows 7.
First of all, MatrixBase::pow is a function that computes the matrix power of a square matrix (if the matrix has an eigenvalue decomposition, it is the same matrix, but with the eigenvalues raised to the given power).
What you want is an element-wise power, which, since there is no cwisePow function in MatrixBase, requires switching to the Array domain. Furthermore, there is no integer specialization for the powers (this could be efficient, but only up to a certain threshold, and checking that threshold for every element would waste computation time), so you need to cast the exponents to the type of your matrix.
To also answer your bonus question:
#include <iostream>
#include <Eigen/Core>
int main(int argc, char **argv) {
Eigen::MatrixXd A; A.setRandom(3,4);
Eigen::VectorXi b = (Eigen::VectorXd::Random(3)*16).cast<int>();
Eigen::MatrixXd C = A.array() // go to array domain
.pow( // element-wise power
b.cast<double>() // cast exponents to double
.replicate(1, A.cols()).array() // repeat exponents to match size of A
);
std::cout << A << '\n' << b << '\n' << C << '\n';
}
Essentially, this will call C(i,j) = std::pow(A(i,j), b(i)) for each i, j. If all your exponents are small, you might actually be faster with a simple nested loop that calls a specialized pow(double, int) implementation (like gcc's __builtin_powi), but you should benchmark that with actual data.
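For reference, a minimal sketch of that nested loop (my own illustration, reusing A and b from the example above; std::pow stands in for a specialized integer-power routine):
#include <cmath>
Eigen::MatrixXd C(A.rows(), A.cols());
for (int i = 0; i < A.rows(); ++i)
    for (int j = 0; j < A.cols(); ++j)
        C(i, j) = std::pow(A(i, j), b(i)); // exponent chosen per row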
Hey there, I'm a computer science student.
We have been asked to build a generic matrix using the vector class, and we are not allowed to use "new" and "delete" at all.
I don't know how to start it properly. We can use these libraries:
cassert
vector
cstdlib
cmath
I searched all over the Internet but didn't find a class that uses a vector as the matrix storage (indexing the matrix as mat[i*cols+j]) and doesn't use the new and delete functions or allocate memory:
Examples of what we are supposed to check (in Google Test):
"Matrix.hpp"
template <class T> class Matrix {
private:
int rows;
int cols;
vector< T > mat;
public:
Matrix(int row, int col, const std::vector<T>& mat2):rows(row),cols(col) {
mat.resize(rows*cols);
//?
};
Matrix(int row, int col);
virtual ~Matrix();
Matrix(const Matrix< T >& rhs);
Matrix< T > transpose();
Matrix< T >& operator=(const Matrix< T >& rhs);
bool operator==(const Matrix< T >& rhs)const;
int getRowNum() const;
int getColNum() const;
Matrix< T > operator+(const Matrix< T >& rhs) const;
};
"gtest/gtest.h"
Matrix<int> mat1(1, 1, std::vector<int>(1,2));
EXPECT_EQ(1, mat1.getRowNum());
EXPECT_EQ(1, mat1.getColNum());
int t= 0;
EXPECT_TRUE(mat1.hasTrace(t));
EXPECT_TRUE(mat1.isSquareMatrix());
Matrix<int> transpose_mat= mat1;
EXPECT_EQ(transpose_mat, mat1.transpose());
Matrix<int> mat2(1, 1, std::vector<int>(1,3));
Matrix<int> add_mat(1, 1, std::vector<int> (1,5));
EXPECT_EQ(add_mat, mat1+mat2);
Matrix<int> multi_mat(1, 1, std::vector<int>(1,6));
EXPECT_EQ(multi_mat, mat1*mat2);
Matrix<int> scalar_multi_mat(1, 1, std::vector<int>(1,4));
EXPECT_EQ(scalar_multi_mat, mat1*2);
This is what we are supposed to do:
Matrix interface: you must define and implement the generic class Matrix in the file Matrix.hpp. The matrix is generic: its elements are not necessarily integers but numbers of a generic type. You may assume the element type provides the operators the exercise requires, as used above. You may also assume that the order of the atomic steps of an addition or multiplication in a long chain of calculations does not matter, for example (a + b) + c == a + (b + c). You may of course use the functions the language provides by default, all generically:
A constructor without parameters. It builds a 1x1 matrix containing a default-constructed element of type T, where T is the type stored in the matrix.
A constructor that receives the number of rows, the number of columns, and a vector with the values that fill the matrix (the test driver calls this constructor).
(Copy constructor) a copy constructor receiving a different matrix. Do not implement it yourself; the class should be structured so that the default is good enough.
(The assignment operator) do not implement it yourself; the class must be built so that the default is good enough.
A plus (+) operator for adding matrices.
A multiplication (*) operator for matrix multiplication. The matrix of the object on which the function is invoked is the left matrix of the multiplication.
(Matrix swap function) transpose.
(Trace function) hasTrace: receives a reference to an element of the generic type, assigns to it the value of the trace (if any) of the matrix, and returns a Boolean value: true if the matrix is square; otherwise false, in which case it puts the additive identity value into the received reference.
The four functions above (addition, multiplication, transpose and hasTrace) must not modify the object they are applied to. In addition, you can add any further public or private functions you find useful for the class.
If someone can help me, I will be grateful!
You can start by considering that a matrix with M rows and N columns can be represented as a vector with M*N elements. Operations such as addition and subtraction then become simply equivalent to vector addition and subtraction.
More general operations, though, will require a simple transformation to convert a matrix index [a, b] into a vector index [c].
Given a matrix index [a, b] you can find the corresponding vector element by using:
c = a*N + b
Similarly, you can convert from a vector index [c] back to the matrix index [a, b] with:
a = c / N
b = c % N
Here the / and % are integer division and modulus.
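Translated into code, an element accessor for the vector-backed matrix could look like this sketch (using the rows, cols and mat members from the skeleton above; cols plays the role of N):
T& at(int row, int col) {
    return mat[row * cols + col];
}
const T& at(int row, int col) const {
    return mat[row * cols + col];
}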
I think I may have encountered a bug with the C++11 template std::underlying_type.
I use a traits class to define the ranges of enumerations we have in our system.
I am then able to provide a generic is_valid function.
I recently extended the function when -Wextra was enabled, because I was getting a lot of warnings about an always-true comparison.
The warning was generated when an enum is of an unsigned type and its first value is 0.
Solved that easily. But the next day some unit tests in modules using the function started to fail.
When you don't specify the underlying type of the enum, it still chooses the correct implementation, but somehow returns the wrong result.
Here is the minimal example (http://ideone.com/PwFz15):
#include <type_traits>
#include <iostream>
using namespace std;
enum Colour
{
RED = 0,
GREEN,
BLUE
};
enum NoProblems : int
{
A,
B,
C
};
enum AlsoOk : unsigned
{
D,
E,
F
};
template <typename Enum> struct enum_traits;
template <> struct enum_traits<Colour>
{
typedef Colour type;
static constexpr type FIRST = RED;
static constexpr type LAST = BLUE;
};
template <> struct enum_traits<NoProblems>
{
typedef NoProblems type;
static constexpr type FIRST = A;
static constexpr type LAST = C;
};
template <> struct enum_traits<AlsoOk>
{
typedef AlsoOk type;
static constexpr type FIRST = D;
static constexpr type LAST = F;
};
#if 0
// This implementation gives you warnings about an always true comparison
// ONLY IF you define the underlying type of your enum, such as Colour.
template <typename Enum>
inline constexpr bool is_valid(Enum e)
{
return e >= enum_traits<Enum>::FIRST && e <= enum_traits<Enum>::LAST;
}
#endif
// So you define the is_valid function like so, to prevent the warnings:
template <typename Enum, typename enable_if<is_unsigned<typename underlying_type<Enum>::type>::value && enum_traits<Enum>::FIRST == 0, int>::type = 0>
inline constexpr bool is_valid(Enum e)
{
return e <= enum_traits<Enum>::LAST;
}
template <typename Enum, typename enable_if<is_signed<typename underlying_type<Enum>::type>::value || enum_traits<Enum>::FIRST != 0, int>::type = 0>
inline constexpr bool is_valid(Enum e)
{
return e >= enum_traits<Enum>::FIRST && e <= enum_traits<Enum>::LAST;
}
int main()
{
Colour c = static_cast<Colour>(RED - 1);
cout << is_valid(c) << endl;
NoProblems np = static_cast<NoProblems>(A - 1);
cout << is_valid(np) << endl;
AlsoOk ao = static_cast<AlsoOk>(D - 1);
cout << is_valid(ao) << endl;
return 0;
}
Which gives the output:
1
0
0
Clearly the output of the first call to is_valid should be 0 / false. Somehow the enum is both signed and unsigned at the same time?
Have I missed some critical piece of documentation in the standard library regarding the templates I've used?
It is fixable by performing the comparison like so:
return static_cast<typename std::underlying_type<Enum>::type>(e) <= enum_traits<Enum>::LAST;
But it doesn't seem like that should be necessary.
I've tried this on gcc 4.8.1, gcc 4.7.3 and clang 3.2.1, all on x86-64.
C++11 5.2.9 [expr.static.cast]/10:
A value of integral or enumeration type can be explicitly converted to an enumeration type. The value is
unchanged if the original value is within the range of the enumeration values (7.2). Otherwise, the resulting
value is unspecified (and might not be in that range).
The "range of the enumeration values" is defined in 7.2/7:
For an enumeration whose underlying type is fixed, the values of the enumeration are the values of the underlying type. Otherwise, for an enumeration where e_min is the smallest enumerator and e_max is the largest, the values of the enumeration are the values in the range b_min to b_max, defined as follows: Let K be 1 for a two's complement representation and 0 for a one's complement or sign-magnitude representation. b_max is the smallest value greater than or equal to max(|e_min| − K, |e_max|) and equal to 2^M − 1, where M is a non-negative integer. b_min is zero if e_min is non-negative and −(b_max + K) otherwise. The size of the smallest bit-field large enough to hold all the values of the enumeration type is max(M, 1) if b_min is zero and M + 1 otherwise. It is possible to define an enumeration that has values not defined by any of its enumerators. If the enumerator-list is empty, the values of the enumeration are as if the enumeration had a single enumerator with value 0.
For Colour, the range of the enumeration values (assuming two's-complement) is [0, 3]. RED - 1 is either -1 or UINT_MAX, both of which are outside the range [0, 3], so the result of the static_cast is unspecified.
Since the result of converting out-of-range values is unspecified, you would do better to perform your comparisons in the domain of the underlying type, which is exactly the effect of your fix.
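Folded back into the unsigned overload, the fixed version could look like this (a sketch combining the questioner's overload with their own fix):
template <typename Enum, typename enable_if<is_unsigned<typename underlying_type<Enum>::type>::value && enum_traits<Enum>::FIRST == 0, int>::type = 0>
inline constexpr bool is_valid(Enum e)
{
    // compare in the domain of the underlying type, not the enum type, so the
    // compiler cannot assume e lies within the range of the enumeration values
    return static_cast<typename underlying_type<Enum>::type>(e) <= enum_traits<Enum>::LAST;
}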
I need to convert RGBA8888 to RGBA5551 manually. I found some helpful code in another post and want to modify it to convert from RGBA8888 to RGBA5551. I don't really have experience with bitwise operations and haven't had any luck modifying the code myself.
#include <stdint.h>
void* rgba8888_to_rgba4444( void* src, int src_bytes)
{
// compute the actual number of pixel elements in the buffer.
int num_pixels = src_bytes / 4;
// fixed-width types: unsigned long is 8 bytes on many 64-bit platforms
uint32_t* psrc = (uint32_t*)src;
uint16_t* pdst = (uint16_t*)src;
// convert every pixel
for(int i = 0; i < num_pixels; i++){
// read a source pixel
uint32_t px = psrc[i];
// keep the top 4 bits of each 8-bit channel and shift them into place
unsigned r = (px << 8) & 0xf000;
unsigned g = (px >> 4) & 0x0f00;
unsigned b = (px >> 16) & 0x00f0;
unsigned a = (px >> 28) & 0x000f;
// and store
pdst[i] = r | g | b | a;
}
return pdst;
}
The value of RGBA5551 is that it condenses the color info into 16 bits, or two bytes, with only one bit for the alpha channel (on or off). RGBA8888, on the other hand, uses a byte for each channel. (If you don't need an alpha channel, I hear RGB565 is better, as humans are more sensitive to green.) Now, with 5 bits you get the numbers 0 through 31, so r, g, and b each need to be converted to some number between 0 and 31; since they are originally a byte each (0-255), we multiply each by 31/255. Here is a function that takes RGBA bytes as input and outputs RGBA5551 as a short:
short int RGBA8888_to_RGBA5551(unsigned char r, unsigned char g, unsigned char b, unsigned char a){
unsigned char r5 = r*31/255; // All arithmetic is integer arithmetic, and so floating points are truncated. If you want to round to the nearest integer, adjust this code accordingly.
unsigned char g5 = g*31/255;
unsigned char b5 = b*31/255;
unsigned char a1 = (a > 0) ? 1 : 0; // 1 if a is positive, 0 else. You must decide what is sensible.
// Now that we have our 5 bit r, g, and b and our 1 bit a, we need to shift them into place before combining.
short int rShift = (short int)r5 << 11; // (short int)r5 looks like 00000000000vwxyz - 11 zeroes. I'm not sure if you need (short int), but I've wasted time tracking down bugs where I didn't typecast properly before shifting.
short int gShift = (short int)g5 << 6;
short int bShift = (short int)b5 << 1;
// Combine and return
return rShift | gShift | bShift | a1;
}
You can, of course, condense this code.
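For instance, a condensed equivalent (same arithmetic and shifts, just inlined; an untested sketch):
short int RGBA8888_to_RGBA5551(unsigned char r, unsigned char g, unsigned char b, unsigned char a){
    return ((short int)(r*31/255) << 11) | ((short int)(g*31/255) << 6)
         | ((short int)(b*31/255) << 1) | (a ? 1 : 0);
}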