I am trying out GNUstep. If it works well, I'll try to build a website with it.
Anyway, I am stuck right at the start: the GNUstep NSRunLoop implementation doesn't seem to work correctly.
Here's my code.
#import <Foundation/Foundation.h>

@interface AAA : NSObject
- (void)test1:(id)s;
@end

@implementation AAA
- (void)test1:(id)s
{
    NSLog(@"%@", s);
}
- (void)dealloc
{
    NSLog(@"DEALLOCED!!");
}
@end
int main(int argc, const char * argv[])
{
    @autoreleasepool
    {
        AAA* aaa = [[AAA alloc] init];
        [[NSNotificationCenter defaultCenter] addObserver:aaa selector:@selector(test1:) name:NSFileHandleDataAvailableNotification object:nil];
        [[NSFileHandle fileHandleWithStandardInput] waitForDataInBackgroundAndNotify];
        [[NSRunLoop currentRunLoop] run];
    }
    return 0;
}
I built this code with these commands.
clang -v
EE_GNUSTEP_OPTS="-MMD -MP -DGNUSTEP -DGNUSTEP_BASE_LIBRARY=1 -DGNU_RUNTIME=1 -DGNUSTEP_BASE_LIBRARY=1 -fno-strict-aliasing -fexceptions -fobjc-exceptions -D_NATIVE_OBJC_EXCEPTIONS -D_NONFRAGILE_ABI -pthread -fPIC -Wall -DGSWARN -DGSDIAGNOSE -Wno-import -g -fgnu-runtime -fconstant-string-class=NSConstantString"
EE_BUILD_OPTS="-I/usr/local/include -L/usr/local/lib -lc -lobjc -lgnustep-base -fblocks -fobjc-arc -fobjc-abi-version=3"
alias OBJCC="clang $EE_GNUSTEP_OPTS $EE_BUILD_OPTS"
OBJCC *.m
Under OS X, the program doesn't quit and just keeps running the run loop. It behaves the same under GNUstep on FreeBSD as long as I don't use ARC. If I enable ARC, the program quits immediately.
I have no idea why it fails only when ARC is enabled. Why does it quit immediately?
Here are the component versions I used:
svn co http://svn.gna.org/svn/gnustep/libs/libobjc2/releases/1.7 libobjc2-1.7 &
svn co http://svn.gna.org/svn/gnustep/tools/make/tags/make-2_6_5 make-2_6_5 &
svn co http://svn.gna.org/svn/gnustep/libs/base/tags/base-1_24_5 base-1_24_5 &
This appears to be a bug in the GSFileHandle class. The singleton returned by fileHandleWithStandardInput is autoreleased. In non-ARC mode, it would be destroyed at the end of the @autoreleasepool scope, but you never reach that point, so it works fine.
In ARC mode, the sequence becomes something along these lines:
NSFileHandle *tmp = [NSFileHandle fileHandleWithStandardInput];
objc_retainAutoreleasedReturnValue(tmp);
[tmp waitForDataInBackgroundAndNotify];
objc_release(tmp);
After the call to objc_release(), the object is deallocated. When the object is deallocated, it helpfully removes itself as a run loop source. The run loop then has no sources registered, and so exits (otherwise it would wait forever for nothing).
This shows one of the advantages of ARC—that temporary objects don't spend as much time in autorelease pools—but in this case it uncovers a bug. Fortunately, it uncovers the bug in a much easier-to-debug way than without ARC (without, we'd see a crash much later when something outside the current autoreleasepool scope tries to access the file handle).
I've fixed this in GNUstep Base svn r37245, thank you for the report (in the future, GNUstep bug reports are more likely to be seen if posted to the GNUstep mailing lists than to a random third-party web site, however...).
I have the following tree:
.
├── func1.c
├── func2.c
├── main.c
├── Makefile
├── override.c
└── weak.h
main.c invokes func1().
func1() invokes func2().
weak.h declares func2() as weak.
override.c provides an override version of func2().
func1.c
#include <stdio.h>

void func2(void);

void func1(void)
{
    printf("in func1(), calling func2()\n");
    func2();
}
func2.c
#include <stdio.h>

void func2(void)
{
    printf("in original func2()\n");
}
main.c
#include <stdio.h>

void func1(void);
void func2(void);

int main(void)
{
    func1();
    return 0;
}
override.c
#include <stdio.h>

void func2(void)
{
    printf("in override func2()\n");
}
weak.h
__attribute__((weak))
void func2 (void); // <==== weak attribute on declaration
Makefile
ALL:
	rm -f *.a *.o
	gcc -c override.c -o override.o
	gcc -c func1.c -o func1.o -include weak.h # weak.h is used to tell func1.c that func2() is weak
	gcc -c func2.c -o func2.o
	ar cr all_weak.a func1.o func2.o
	gcc main.c all_weak.a override.o -o main
All this runs well, printing:
in override func2()
But if I remove the override version of func2() from override.c as below:
#include <stdio.h>
// void func2 (void)
// {
// printf("in override func2()\n");
// }
The build passes, but the final binary fails at runtime with:
Segmentation fault (core dumped)
And in the symbol table of ./main, func2() is an unresolved weak symbol.
000000000000065b T func1
w func2 <=== func2 is a weak symbol with no default implementation
Why didn't it fall back to the func2() in the original func2.c?
After all, all_weak.a already contains an implementation in func2.o:
func1.o:
0000000000000000 T func1
w func2 <=== func2 is [w]eak with no implementation
U _GLOBAL_OFFSET_TABLE_
func2.o:
0000000000000000 T func2 <=========== HERE! a strong symbol!
U _GLOBAL_OFFSET_TABLE_
U puts
ADD 1
It seems the arrangement of translation units also affects the fallback to the weak function.
If I put the func2() implementation into the same file/translation unit as func1(), as below, the fallback to the original func2() works.
func1.c
#include <stdio.h>

void func2(void)
{
    printf("in original func2()\n");
}

void func1(void)
{
    func2();
}
The symbols of all_weak.a are now:
func1.o:
0000000000000013 T func1
0000000000000000 W func2 <==== func2 is still [W]eak but has a default implementation
U _GLOBAL_OFFSET_TABLE_
U puts
The code can fall back to the original func2() correctly if no override is provided.
This link also mentions that, to work with the GCC alias attribute, translation unit arrangement must also be considered:
alias ("target") The alias attribute causes the declaration to be emitted as an alias for another symbol, which must be specified. For instance,

void __f () { /* Do something. */; }
void f () __attribute__ ((weak, alias ("__f")));

defines f to be a weak alias for __f. In C++, the mangled name for the target must be used. It is an error if __f is not defined in the same translation unit.
According to Wikipedia:
The nm command identifies weak symbols in object files, libraries, and
executables. On Linux a weak function symbol is marked with "W" if a
weak default definition is available, and with "w" if it is not.
ADD 2 - 7:54 PM 8/7/2021
(Huge thanks to @n. 1.8e9-where's-my-share m.)
I tried these:
Add the __attribute__((weak)) to the func2() definition in func2.c.
Remove the -include weak.h from the Makefile.
Now these files look like this:
func2.c
#include <stdio.h>

__attribute__((weak))
void func2(void)
{
    printf("in original func2()\n");
}
Makefile:
ALL:
	rm -f *.a *.o
	gcc -c override.c -o override.o
	gcc -c func1.c -o func1.o
	gcc -c func2.c -o func2.o
	ar cr all_weak.a func1.o func2.o
	gcc main.c all_weak.a -o main_original # <=== no override.o
	gcc main.c all_weak.a override.o -o main_override # <=== with override.o
The output is this:
xxx@xxx-host:~/weak_fallback$ ./main_original
in original func2() <===== successful fallback
xxx@xxx-host:~/weak_fallback$ ./main_override
in override func2() <===== successful override
So, the conclusion is:
If you put the weak attribute on a function declaration (as I did in weak.h), it essentially tells the linker not to resolve it.
If you put the weak attribute on a function definition (as I did in func2.c), it essentially tells the linker to use it as a fallback if no strong version is found.
If you put the weak attribute on a function declaration, you'd better provide an override version in a .o file to the linker (as I did with override.o). It seems the linker is still willing to resolve against a standalone .o file in this situation.
This is the case when you cannot modify the source but still want to override some function.
And a quotation from here:
The linker will only search through libraries to resolve a reference
if it cannot resolve that reference after searching all input objects.
If required, the libraries are searched from left to right according
to their position on the linker command line. Objects within the
library will be searched by the order in which they were archived. As
soon as armlink finds a symbol match for the reference, the searching
is finished, even if it matches a weak definition. The ELF ABI section
4.6.1.2 says: "A weak definition does not change the rules by which object files are selected from libraries. However, if a link set
contains both a weak definition and a non-weak definition, the
non-weak definition will always be used." The "link set" is the set of
objects that have been loaded by the linker. It does not include
objects from libraries that are not required. Therefore archiving two
objects where one contains the weak definition of a given symbol and
the other contains the non-weak definition of that symbol, into a
library or separate libraries, is not recommended.
ADD 3 - 8:47 AM 8/8/2021
As @n. 1.8e9-where's-my-share m. commented:
Comment 1:
"weak" on a symbol which is not a definition means "do not resolve
this symbol at link time". The linker happily obeys.
Comment 2:
"on a symbol which is not a definition" is wrong, should read "on an
undefined symbol".
I think by "on an undefined symbol", he means an undefined symbol within the current translation unit. In my case, when I:
defined func2() in a separate func2.c file
and compiled func1.c with weak.h
these essentially tell the linker not to resolve the func2() consumed in the translation unit func1.c. But it seems this "do not" only applies to .a files. If I link another .o file besides the .a file, the linker is still willing to resolve func2(). Or if func2() is also defined in func1.c, the linker will also resolve it. Subtle it is!
(So far, all these conclusions are based on my experiment result. It's subtle to summarize all these. If anyone can find some authoritative source, please feel free to comment or reply. Thanks!)
(Thanks to n. 1.8e9-where's-my-share m.'s comment.)
And a related thread:
Override a function call in C
Some afterthought - 9:55 PM 8/8/2021
There's no rocket science behind these subtle behaviors. It just depends on how the linker is implemented. Sometimes the documentation is vague. You have to try things out and deal with it. (If there's some big idea behind all these, please correct me and I will be more than grateful.)
these subtle behaviors
There isn't really anything subtle here.
A weak definition means: use this symbol unless another strong definition is also present, in which case use the other symbol.
Normally two same-named symbols result in a multiply-defined link error, but when all but one of the definitions are weak, no multiply-defined error is produced.
A weak (unresolved) reference means: don't consider this symbol when deciding whether to pull an object which defines this symbol out of an archive library (an object may still be pulled in if it satisfies a different strong undefined symbol).
Normally if the symbol is unresolved after all objects are selected, the linker will report unresolved symbol error. But if the unresolved symbol is weak, the error is suppressed.
That's really all there is to it.
Update:
You are repeating incorrect understanding in comments.
What makes me feel subtle is, for a weak reference, the linker doesn't pull an object from an archive library, but still checks a standalone object file.
This is entirely consistent with the answer above. When a linker deals with an archive library, it has to make a decision: whether or not to select the contained foo.o into the link. It is that decision that is affected by the type of reference.
When bar.o is given on the link line as a "standalone object file", the linker makes no decisions about it -- bar.o will be selected into the link.
And if that object happens to contain a definition for the weak reference, will the weak reference be also resolved by the way?
Yes.
Even the weak attribute tells the linker not to.
This is the apparent root of misunderstanding: the weak attribute doesn't tell the linker not to resolve the reference; it only tells the linker (pardon repetition) "don't consider this symbol when deciding whether to pull an object which defines this symbol out of archive library".
I think it's all about whether or not an object containing a definition for that weak reference is pulled in for linking.
Correct.
Be it a standalone object or from an archive lib.
Wrong: a standalone object is always selected into the link.
I am new to the world of STM32F103. I have a demo code for STM32F103 and I am using arm-none-eabi to compile it.
I tried what I could find on Google, but nothing worked so far. I have already spent three days on the problem.
Can anyone give me demo code for printf which works well?
Part of my makefile:
CFLAG = -mcpu=$(CPU) -mthumb -Wall -fdump-rtl-expand -specs=nano.specs --specs=rdimon.specs -Wl,--start-group -lgcc -lc -lm -lrdimon -Wl,--end-group
LDFLAG = -mcpu=$(CPU) -T ./stm32_flash.ld -specs=nano.specs --specs=rdimon.specs -Wl,--start-group -lgcc -lc -lm -lrdimon -Wl,--end-group
By including the following linker flags:
LDFLAGS += --specs=rdimon.specs -lc -lrdimon
it looks like you are trying to use what is called semihosting. You are telling the linker to include system call libraries.
Semihosting is a mechanism that enables code running on an ARM target to communicate and use the Input/Output facilities on a host computer that is running a debugger.
Examples of these facilities include keyboard input, screen output, and disk I/O. For example, you can use this mechanism to enable functions in the C library, such as printf() and scanf(), to use the screen and keyboard of the host instead of having a screen and keyboard on the target system.
Since you are using open-source tools for your STM32 development (Makefile and arm-none-eabi), I am assuming you are also using OpenOCD to program your microcontroller. OpenOCD requires you to enable semihosting as well, using the following command:
arm semihosting enable
You can add the command to your OpenOCD script, making sure you terminate the configuration stage and enter the run stage with the 'init' command. Below is an example of an OpenOCD script (adapted for STM32F103):
source [find target/stm32f1x.cfg]
init
arm semihosting enable
Other solutions mentioned here, where you retarget the fputc() function to a UART interface, will also work and might be preferable in some cases. Semihosting works on all recent ARM Cortex-M parts but requires some compiler & debugger configuration (see above). Retargeting fputc() to a UART interface works with any compiler, but you will have to check your pin configurations for every board.
Writing your own printf implementation is an option, and probably the one I would most recommend. Get some inspiration from the standard library implementation and write a version that caters only to your requirements. In general, what you have to do is first retarget a putc function to send chars through your serial interface, then build printf on top of that custom putc. A very simple approach is to send the string character by character with repeated calls to putc.
Last but not least, you can find some lightweight printf implementations. The code size and the set of features offered by these lightweight implementations lie between a custom-written printf function and the stock standard printf function (aka the beast). I have recently tried this Tiny Printf and am very pleased with its performance on an ARM core, in terms of memory footprint and the number of execution cycles required.
-PS
Copied from my own writings sometime back.
Link: How to retarget printf() on an STM32F10x?
Try hijacking the _write function like so:
#define STDOUT_FILENO 1
#define STDERR_FILENO 2
int _write(int file, char *ptr, int len)
{
    switch (file)
    {
    case STDOUT_FILENO: /* stdout */
        // Send the string somewhere
        break;
    case STDERR_FILENO: /* stderr */
        // Send the string somewhere
        break;
    default:
        return -1;
    }
    return len;
}
The original printf will go through this function (depending on what libs you use of course).
Look there. This is printf from glibc. But you are on a microcontroller, so you should write your own printf, where vsnprintf renders the result into a buffer and you then send the data from the buffer to the UART. Something like:

void printf(const char *format, ...)
{
    char buffer[256];
    va_list args;
    va_start(args, format);
    vsnprintf(buffer, sizeof buffer, format, args); /* bounded, so the buffer cannot overflow */
    send_via_USART1(buffer);
    va_end(args);
}
You can also write your own vsprintf. The standard vsprintf is very heavy, and usually only a small part of its features is needed.
I've got a file, test.lex, which I run through flex as:
$ flex test.lex
That gives me lex.yy.c, which I try to compile with:
$ gcc lex.yy.c -lfl
This gives me the error ld: library not found for -lfl. I know the Flex specification is correct and lex.yy.c compiles fine on a Linux machine. Any suggestions?
Edit: I'm using the flex supplied by Apple.
Some systems make libfl a separate package from flex, as it is rarely needed. The libfl library just contains two functions:
int main() {
while (yylex());
return 0;
}
int yywrap() {
return 1;
}
Normally you'll want your own main function rather than the one from libfl, and defining yywrap yourself is trivial. Alternately, you can use %option noyywrap and not need it at all.
In your case, try just getting rid of the -lfl option. If you get an error about yywrap, add %option noyywrap to the first section of your test.lex file.
Old topic, but the accepted answer did not help me, so I'm adding this answer.
On macOS use -ll (as in "library lex").
Valid for macOS 10.14 Mojave.
Also, as @Chris Dodd said, you can get rid of this dependency by specifying %option noyywrap in the .l file and providing your own main routine in the .y file.
The program is written in C using SDL2 and OpenGL. So long as I comment out
//glUseProgram(0);
the program compiles, runs, and displays the glClear color. It includes these GL version checks:
const char* renderer = (const char*) glGetString(GL_RENDERER);
puts(renderer);
const char* version = (const char*) glGetString(GL_VERSION);
puts(version);
const char* glslVersion = (const char*) glGetString(GL_SHADING_LANGUAGE_VERSION);
puts(glslVersion);
Which print out:
ATI Radeon HD 5670
3.2.11927 Core Profile Context
4.20
And the gl error check:
GLenum error = glGetError();
switch (error)
{
case GL_NO_ERROR:                      puts("no error"); break;
case GL_INVALID_ENUM:                  puts("invalid enum"); break;
case GL_INVALID_VALUE:                 puts("invalid value"); break;
case GL_OUT_OF_MEMORY:                 puts("out of memory"); break;
case GL_INVALID_FRAMEBUFFER_OPERATION: puts("invalid framebuffer operation"); break;
default: break;
}
which prints:
no error.
But when glUseProgram(0) is uncommented I get the following error:
D:\TEMP\ccSTF4cr.o:App.c:(.text+0x320): undefined reference to 'glUseProgram'
I also get:
App.c:54.2: warning: implicit declaration of function 'glUseProgram'
The included files are:
#include "SDL2/SDL.h"
#include "SDL2/SDL_opengl.h"
The program is executed using a .bat file on Windows XP. The .bat is:
del /F /Q bin\app.exe
set files=main.c App.c EventHub.c Game.c MainShader.c
set libs=-LD:\environments\minGW\mingw32\lib -lmingw32 -lopengl32 -lSDL2main -mwindows -lSDL2
set objs=bin\main.obj
gcc -Wall -o bin/app %files% %libs%
bin\app.exe
If you don't know .bat files: the %files% and %libs% in the command are simply substituted with the strings in the variables.
The issue would seem to be that the context does not support the later glUseProgram function, except that the context is version 3.2, which does support it. In that case the issue seems to be that the SDL_opengl.h include is picking up a wrong -lopengl32. But frankly, I don't really understand this linking stuff, which is why I am asking the question.
The Microsoft Windows OpenGL API only has functions from version 1.X at best. (Because they'd prefer you use Direct3D.)
To use modern OpenGL - heck, anything newer than 1995 - on MS Windows, you need an extensions manager such as GLEW or GLEE. See the OpenGL SuperBible, any edition, for details.
All is in the title. For some reason I have to do it like this. But when I compile my code, GCC (or maybe GAS...) displays the following error:
.../Temp/cc1C1fjs.s:19: Error: immediate operand illegal with absolute jump
Code:
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Some code */
    ((void(*)()) &&label)();
    /* Some code */
    return 0;
label:
    asm ("push %ebp");
    asm ("mov %esp,%ebp");
    /* Some code */
    printf("Hello world");
    asm ("leave");
    asm ("ret");
}
I'm sure this should work, because I tried creating a thread with the CreateThread function (I'm on Windows), specifying the address of the label as the entry point, and it worked perfectly well.
So how can I make the compiler accept this syntax? Or is there another way of doing this?
I don't have a solution for you, but I do have a couple of suggestions:
Run gcc -S file.c and look at line #19 to see if you can spot what the actual problem is.
Look through the rest of the (short) .s file to see if anything is obviously amiss. For example, my version of gcc seems to decide that everything after return 0 is dead code, so neither your asm code nor the printf actually makes it to the assembler.
Can't this code be moved into a function? This way you'll get the prologue/epilogue for free; taking the address would also be less fraught with difficulty.
I fixed a part of the problem:

@aix, you are right: GCC removes everything in the main function after return 0;. I fixed this by replacing it with

asm("leave");
asm("xor %eax,%eax");
asm("ret");

Now the code after my label is generated.

Running gcc -S file.c and then gcc file.s -o file.exe, of course, displays the error, and at the error line there is call *$L2 (L2 is the label in my C file). It works after replacing that with call L2.

Now the code after my label and after my call in main is executed, and the program properly terminates with state 0.

But I don't want to have to do that each time I compile. Is it normal that GCC writes call *$L2 rather than call L2?