My project uses libavformat to connect to rtsp:// URLs. It's important that it set a socket timeout and reconnect on error. Unfortunately, the stimeout open option for this only exists in ffmpeg (and in particular, its libavformat versions >= 55.1.100), not the competing project libav (any version). And some systems I'd like to support (such as Raspbian Jessie) are still bundled with libav.
So, I think my best option is to detect whether I have a suitable version using cmake, and install ffmpeg in-tree if not. I think I should be able to do this via something like:
pkg_check_modules(FFMPEG libavutil libavcodec libavformat)
if(NOT FFMPEG_FOUND OR FFMPEG_VERSION VERSION_LESS 55.1.101)
  ExternalProject_Add(
    FfmpegProject
    URL "http://ffmpeg.org/releases/ffmpeg-2.8.3.tar.xz"
    URL_HASH "SHA1=a6f39efe1bea9a9b271c903d3c1dcb940a510c87"
    INSTALL_COMMAND "")
  ...set up flags and such to use this in-tree version...
endif()
except that I don't know how to detect libav vs ffmpeg. I don't see anything in the pkgconfig stuff or libavformat/version.h to distinguish them. The version numbers they use seem to overlap. It's not obvious to me at all how to tell the difference programmatically, much less do so with a not-weird cmake rule. Any ideas?
To specifically answer your question, use this code:
#include <stdio.h>
#include "libavutil/opt.h"
#include "libavformat/avformat.h"

int main(int argc, char *argv[]) {
    av_register_all();
    /* Look up the RTSP demuxer and search its private options for the
     * option name given on the command line (e.g. "stimeout"). */
    AVInputFormat *input = av_find_input_format("rtsp");
    const AVClass *klass = input->priv_class;
    const AVOption *opt = av_opt_find2(&klass, argv[1], NULL, 0,
                                       AV_OPT_SEARCH_FAKE_OBJ, NULL);
    /* Non-NULL means this build of libavformat knows the option. */
    printf("%p\n", opt);
    return 0;
}
This can do runtime detection, and here's how it works:
bash-3.2$ /tmp/test hi
0x0
bash-3.2$ /tmp/test stimeout
0x103420100
For your other question, detecting Libav vs. FFmpeg can be done by looking at the library micro version. For FFmpeg, they all start at 100 (e.g. libavformat 55.1.100), whereas for Libav, they start at 0. So if micro < 100, it's Libav, else it's FFmpeg. To get libavformat micro version at runtime, use avformat_version() & 0xff, or LIBAVFORMAT_VERSION_MICRO at compile time.
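As a sketch of that check (using only the public libavformat version macro and avformat_version(); not tested against every release):

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavformat/version.h>

int main(void) {
    /* Compile-time: FFmpeg's micro versions start at 100, Libav's at 0. */
#if LIBAVFORMAT_VERSION_MICRO >= 100
    puts("headers: FFmpeg");
#else
    puts("headers: Libav");
#endif
    /* Runtime: ask the library that is actually linked/loaded. */
    unsigned v = avformat_version();
    printf("runtime: %s (libavformat micro = %u)\n",
           (v & 0xff) >= 100 ? "FFmpeg" : "Libav", v & 0xff);
    return 0;
}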
Hi all.
I've been compiling simple stuff with the gcc toolchain for several years, but today I ran into a curious phenomenon.
I installed Kubuntu 14.04 on a common desktop i686 machine with gcc 4.8.2 on it. But then, trying to build some well-coded stuff pulled from my local repository, I ran into tons of 'undefined reference to' messages. The code compiles, links and runs fine under Ubuntu 11.04 / gcc 4.5.2.
I checked the linking process (by passing -Wl,--verbose to gcc) and it seems to work: it finds all the libraries I specify in the link command. An objdump -t myLib.so shows exactly the symbols I'd expect - but the linker doesn't see them.
Checking the pthread library also shows the corresponding symbols, except they are suffixed with some #GLIBC... stuff. I haven't looked into linker/loader tricks so far.
A sample like
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static void *fooo(void *xxx) {
    char *txt = (char *)xxx;
    printf("My job is to print this :'%s'. Bye now!\n", txt);
    return 0;
}

int main(int argc, char *argv[]) {
    pthread_t thd;
    pthread_create(&thd, NULL, fooo, "A POSIX thread");
    sleep(1);
    return 0;
}
builds and runs fine on the old system with just
gcc -l pthread fooo.c && ./a.out
but breaks at the linking step with 4.8.2.
Any idea would be very welcome.
.M
Thanks to sfrehse, JoachimPileborg et al!
Indeed, success depends on argument order. I knew this fact for static linking, but it is new to me that it also applies when linking against shared objects with gcc.
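In other words, putting the library after the source file that uses it makes the sample link again:
gcc fooo.c -lpthread && ./a.out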
Does anyone know the background of this change? It breaks innumerable build processes, and I guess thousands of tomatoes are being readied for gcc.gnu.org .....
.M
After I declared a variable in this way:
#include <thread>
namespace thread_space
{
    thread_local int s;
} //etc.
I tried to compile my code using 'g++ -std=c++0x -pthread [sourcefile]'. I get the following error:
example.C:6:8: error: thread-local storage is unsupported for the current target
static thread_local int s;
^
1 error generated.
If I try to compile the same code on Linux with GCC 4.8.1 with the same flags, I get a working executable. I'm using clang-503.0.40 (the one that comes with Xcode 5.1.1) on a MacBook Pro running OS X 10.9.3. Can anybody explain to me what I'm doing wrong?
Thank you!!
Try clang++ -stdlib=libc++ -std=c++11. OS X's outdated libstdc++ doesn't support TLS.
Edit
Ok, this works for the normal clang version but not for the Xcode one.
I did a diff against Apple's clang (503.0.38) and the normal released one and found the following difference:
.Case("cxx_thread_local",
- LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported() &&
- !PP.getTargetInfo().getTriple().isOSDarwin())
+ LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported())
So I think this is a bug in Apple's clang version (or they kept it in there on purpose - but still weird, because -v says based on 3.4).
Alternatively, you can use compiler extensions such as __thread (GCC/Clang) or __declspec(thread) (Visual Studio).
Wrap it in a macro and you can easily port your code across different compilers and language versions:
#if HAS_CXX11_THREAD_LOCAL
#define ATTRIBUTE_TLS thread_local
#elif defined (__GNUC__)
#define ATTRIBUTE_TLS __thread
#elif defined (_MSC_VER)
#define ATTRIBUTE_TLS __declspec(thread)
#else // !C++11 && !__GNUC__ && !_MSC_VER
#error "Define a thread local storage qualifier for your compiler/platform!"
#endif
...
ATTRIBUTE_TLS static unsigned int tls_int;
The clang compiler included in the Xcode 8 Beta and GM releases supports the C++11 thread_local keyword with both -std=c++11 and -std=c++14 (as well as the GCC variants).
Earlier versions of Xcode apparently supported C-style thread local storage using the keywords __thread or _Thread_local, according to the WWDC 2016 video "What's New in LLVM" (see the discussion beginning at 5:50).
Seems like you might need to set the minimum OS X version you target to 10.7 or higher.
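An untested sketch of such an invocation (the exact deployment-target value here is an assumption on my part):
clang++ -std=c++11 -stdlib=libc++ -mmacosx-version-min=10.7 example.C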
I have a problem using Apache Portable Runtime on Ubuntu with GCC 4.8.1
The problem is that off64_t from <sys/types.h> is not available when compiling with gcc. (When compiling with g++ everything works fine.)
Does anybody know which compiler switch enables off64_t? (I know that defining _LARGEFILE_SOURCE and _LARGEFILE64_SOURCE avoids the problem, but I'm wondering whether that's the right way.)
To reproduce the error one can simply try to compile the following code:
#include <sys/types.h>
off64_t a_variable;
off64_t is not a language-defined type, so no compiler switch will make it available.
It is defined in sys/types.h, but (on a 32-bit system) only if one of the following holds:
- _LARGEFILE64_SOURCE is defined. This makes the 64-bit interfaces available (off64_t, lseek64(), etc.), while the 32-bit interfaces remain available under their original names.
- _FILE_OFFSET_BITS is defined as 64. This makes the names of the (otherwise 32-bit) functions and data types refer to their 64-bit counterparts: off_t becomes off64_t, lseek() uses lseek64(), and so on. The 32-bit interface is no longer available under its own names.
Make sure that if you define these macros anywhere in your program, you define them at the beginning of all your source files. You don't want ODR violations biting you in the ass.
Note that this is for a 32-bit system, where off_t is normally a 32-bit value. On a 64-bit system the interface is already 64 bits wide: off_t is a 64-bit type, lseek() expects a 64-bit offset, and so on, so you don't need these macros to get large file support. The types and functions with 64 in their name aren't defined there either; there's no point.
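For example, a minimal sketch (assuming a 32-bit glibc system) that makes off64_t visible by defining the macro before any include:

/* Define before any system header (or pass -D_LARGEFILE64_SOURCE so the whole
 * translation unit sees it). */
#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    off64_t pos = 0;  /* the explicitly 64-bit offset type */
    printf("sizeof(off64_t) = %zu\n", (size_t)sizeof pos);
    return 0;
}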
See http://linux.die.net/man/7/feature_test_macros
and http://en.wikipedia.org/wiki/Large_file_support
You may also be interested to know that when using g++, _GNU_SOURCE is automatically defined, which (with the GNU C runtime library) leads to _LARGEFILE64_SOURCE being defined. That is why compiling your test program with g++ makes off64_t visible. I assume APR uses the same logic to get _LARGEFILE64_SOURCE defined.
Redefine off64_t to __off64_t via a compile flag. Edit your Makefile so it contains:
CFLAGS= -Doff64_t=__off64_t
Then just run $ make 1 (assuming you have 1.c in your directory).
A bit late, but still current.
I simply add -Doff64_t=_off64_t to the compiler flags.
In my environment (gcc 4.1.2), I need to define __USE_LARGEFILE64. I found this macro in /usr/include/unistd.h, which declares lseek64():
#define __USE_LARGEFILE64
#include <sys/types.h>
#include <unistd.h>
You should set C_INCLUDE_PATH to point to the Linux headers, something like
export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
To install the Linux headers, use
sudo apt-get install linux-headers-`uname -r`
P.S.
$ cat 1.c
#include <sys/types.h>
off64_t a_variable;
int main(){return 0;}
$ gcc --version
gcc (Ubuntu/Linaro 4.8.1-10ubuntu9) 4.8.1
$ echo $C_INCLUDE_PATH
/usr/include/x86_64-linux-gnu
$ grep off64_t /usr/include/x86_64-linux-gnu/sys/types.h
typedef __off64_t off_t;
#if defined __USE_LARGEFILE64 && !defined __off64_t_defined
typedef __off64_t off64_t;
# define __off64_t_defined
Sorry for the lateness, but I never had to embed Perl code in C programs until today ^^
I solved the issue on Unix/Linux systems (I think it has been possible to create such a link on Windows since Vista) by creating a symbolic link pointing to the CORE folder of the installed Perl version:
ln -s $(perl -MConfig -e 'print $Config{archlib}')/CORE /usr/include/perl
In your project's source code, simply add:
#include <perl/EXTERN.h>
#include <perl/perl.h>
...and I went from a long list of notes and errors related to off_t and off64_t to a clean build ^^
Also late to the party, but the main reason I was getting this issue was that I had installed the 64-bit version of MinGW instead of the 32-bit one:
https://sourceforge.net/projects/mingw/
In one application, I've got a bunch of CUDA kernels. Some use dynamic parallelism and some don't. For the purposes of either providing a fallback option if this is not supported, or simply allowing the application to continue but with reduced/partially available features, how can I go about compiling?
At the moment I'm getting "invalid device function" when running kernels compiled with -arch=sm_35 on a 670 (max sm_30), even for kernels that don't require compute 3.5.
AFAIK you can't use multiple -arch=sm_* arguments, and using multiple -gencode=* options doesn't help. Also, for separable compilation I've had to create an additional object file using -dlink, but this doesn't get created when targeting compute 3.0 (nvlink fatal : no candidate found in fatbinary, due to -lcudadevrt, which I've needed for 3.5). How should I deal with this?
I believe this issue has been addressed now in CUDA 6.
Here's my simple test:
$ cat t264.cu
#include <stdio.h>
__global__ void kernel1(){
  printf("Hello from DP Kernel\n");
}

__global__ void kernel2(){
#if __CUDA_ARCH__ >= 350
  kernel1<<<1,1>>>();
#else
  printf("Hello from non-DP Kernel\n");
#endif
}

int main(){
  kernel2<<<1,1>>>();
  cudaDeviceSynchronize();
  return 0;
}
$ nvcc -O3 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_35,code=sm_35 -rdc=true -o t264 t264.cu -lcudadevrt
$ CUDA_VISIBLE_DEVICES="0" ./t264
Hello from non-DP Kernel
$ CUDA_VISIBLE_DEVICES="1" ./t264
Hello from DP Kernel
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Sat_Jan_25_17:33:19_PST_2014
Cuda compilation tools, release 6.0, V6.0.1
$
In my case, device 0 is a Quadro5000, a cc 2.0 device, and device 1 is a GeForce GT 640, a cc 3.5 device.
I don't believe there is a way to do this using the runtime API as of CUDA 5.5.
The only way I can think of to get around the problem is to use the driver API to perform your own architecture selection and load code from different cubin files at runtime. The two APIs can be safely mixed, so only the context establishment / device selection / module load phase needs to be done with the driver API. You can use the runtime API after that; you will need a little bit of homemade syntactic sugar for the kernel launches, but otherwise no code changes are required in other runtime API code.
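A rough sketch of that approach (driver API for the device query and module load; the cubin file names and the kernel name here are made up for illustration, and error checking is abbreviated):

#include <cuda.h>
#include <stdio.h>

int main(void) {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    int major = 0, minor = 0;
    cuDeviceGetAttribute(&major, CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, dev);
    cuDeviceGetAttribute(&minor, CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR, dev);

    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    /* Load the dynamic-parallelism build only when cc >= 3.5 is available. */
    const char *cubin = (major > 3 || (major == 3 && minor >= 5))
                            ? "kernels_sm35.cubin" : "kernels_sm30.cubin";
    CUmodule mod;
    CUfunction fn;
    /* "kernel2" assumes the kernel was declared extern "C"; otherwise use the
     * mangled name. */
    if (cuModuleLoad(&mod, cubin) != CUDA_SUCCESS ||
        cuModuleGetFunction(&fn, mod, "kernel2") != CUDA_SUCCESS) {
        fprintf(stderr, "failed to load %s\n", cubin);
        return 1;
    }
    /* Launch through the driver API; runtime API calls can follow in the same
     * context. */
    cuLaunchKernel(fn, 1, 1, 1, 1, 1, 1, 0, 0, NULL, NULL);
    cuCtxSynchronize();
    return 0;
}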
The beginning of my issue is that I'm trying to use regular expressions in Cocos2d-x. For whatever reason, std::tr1::regex isn't working with C++98, so I'm trying to use std::regex with C++11 (along with some other C++11 features). This is working with iOS now, since it's really easy to change the version of C++ in Xcode, but I'm having all kinds of trouble getting this to work on Android.
I'm using the r8e version of the NDK with the gnustl_static library. I set LOCAL_CPPFLAGS += -std=c++11, and I've tried setting the toolchain version to clang (in addition to the default). Regardless of the toolchain, I am now able to compile my code, but it still crashes when I try to create a std::regex object: std::regex reg1("[a-z][0-3]*"); It seems like some people are able to get C++11 to work with the Android NDK's full C++ runtime (not the "minimal C++ runtime support library"), but I can't figure it out. I've read lots of ideas and tried most of them, and I've seen some clues, such as the following from CHANGES.html in the NDK docs:
Patched GCC 4.4.3/4.6/4.7 libstdc++ to work with Clang in C++11
I don't know enough about how this all fits together, so could someone point me in the right direction? What am I missing here?
Open your Application.mk file and add the following two lines at the end:
APP_CPPFLAGS += -std=c++11
NDK_TOOLCHAIN_VERSION=4.7
Note: since you mentioned that you are using NDK version r8e, the toolchain version you need is 4.7. If it is r9, you can set it to 4.8.
Hope this helps.
Alternatively, if you aren't restricted to C++'s std::regex, you could try the POSIX C functions regcomp() and regexec().
Here's a sample implementation (from http://pubs.opengroup.org/onlinepubs/009695399/functions/regcomp.html):
#include <regex.h>
int match(const char *string, const char *pattern)
{
    int status;
    regex_t re;

    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0) {
        return (0);    /* Report error. */
    }
    status = regexec(&re, string, (size_t)0, NULL, 0);
    regfree(&re);
    if (status != 0) {
        return (0);    /* Report error. */
    }
    return (1);
}
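And a quick usage sketch (assuming the match() function above is in the same file; the test strings are just examples):

#include <stdio.h>

int main(void) {
    /* match() returns 1 on match, 0 otherwise; pattern taken from the question. */
    printf("%d\n", match("a123", "[a-z][0-3]*"));  /* prints 1 */
    printf("%d\n", match("9",    "[a-z][0-3]*"));  /* prints 0 */
    return 0;
}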
In your Android.mk, add
LOCAL_CPPFLAGS += -std=gnu++0x