Really can't understand why my program isn't working - occam-pi

I've really spent a lot of time working on this problem and googling around for a solution, but I can't seem to find what's wrong.
I'm learning how to code occam and have the following program:
PROC light (CHAN OF BYTE screen, CHAN OF INT light.change)
  INT light.on :
  WHILE TRUE
    SEQ
      light.change ? light.on
      IF
        light.on = 1
          screen ! 'y'
        TRUE
          SKIP
:
PROC test(CHAN OF BYTE keyboard, scr)
  CHAN OF INT to.light :
  INITIAL INT on IS 1(INT) :
  BYTE b :
  SEQ
    light(scr, to.light)
    WHILE TRUE
      SEQ
        keyboard ? b
        IF
          b = 'o'
            to.light ! on
          TRUE
            SKIP
:
All I'm trying to do is communicate from one process to another when I press the 'o' key.
The error message I'm getting from the (KRoC) compiler is:
Error at lift.occ:11
Program failed, state = e, eflags = 00000000
which is the light.on = 1 line.
As far as I can see, the light PROC will wait for some input on its light.change channel and will then assign it to its light.on variable. The program will then proceed to a conditional statement IF, where the light.on = 1 line should in this case evaluate to true. But instead I get this error.
I have tried using the -verbose flag, but the compiler says that you can't use it for .occ files.
Does anyone know how or if I can get more detailed info from the compiler?
Any help on this would be greatly appreciated.
Thanks

The above code compiles for me, and when run it reaches deadlock:
james:conc$ occbuild --program light.occ
james:conc$ light
KRoC: deadlock: no valid processes found in workspace(s)
KRoC: program deadlocked (no processes to run)
That is what I would expect: test calls light as the first statement of its SEQ, so it never reaches its keyboard loop, and light then sits waiting for input on to.light that nothing will ever send.
I can also get it to run in verbose mode as below
occbuild -v --program light.occ
On a different note, you might want to change your structure. Try having three PROCs:
PROC is.light.on (CHAN BYTE screen!, CHAN INT light.control)
  WHILE TRUE
    ...  output to terminal whether the light is on or off
PROC light.switch (CHAN BYTE keyboard?, CHAN INT light.control)
  WHILE TRUE
    ...  use the keyboard to turn the light on and off
PROC light (CHAN BYTE keyboard?, screen!)
  CHAN INT light.control:  -- 0 for light on; 1 for light off
  PAR
    light.switch (keyboard?, light.control!)
    is.light.on (screen!, light.control?)

Related

UART only working correctly one way (ATmega328p)

I have an Arduino Uno, which is driven by an ATmega328P, and I wanted to move away from its libraries and do everything at a lower level for learning purposes. However, I cannot get the UART working correctly: it only works when sending to the device. Receiving returns weird garbage which the terminal can't print.
#define BAUDRATE (((F_CPU / (BAUD * 16UL))) - 1)

void init_uart()
{
    UBRR0H = BAUDRATE >> 8;                           // set high baud
    UBRR0L = BAUDRATE;                                // set low baud
    UCSR0B = _BV(TXEN0) | _BV(RXEN0);                 // enable duplex
    UCSR0C = _BV(UCSZ00) | _BV(UCSZ01) | _BV(USBS0);  // 8-N-1
}

void putchar_uart(char c, FILE* stream)
{
    loop_until_bit_is_set(UCSR0A, UDRE0); // wait till prev char is read
    UDR0 = c;
}

char getchar_uart(FILE* stream)
{
    loop_until_bit_is_set(UCSR0A, RXC0); // wait till there is data
    return UDR0;
}
// ^ actually in a separate file which gets linked

int main()
{
    DDRD |= PIN_LED;
    PORTD |= PIN_LED;

    stdout = &mystdout;
    stdin = &mystdin;

    char buf[0xFF];

    init_uart();
    while (1)
    {
        char c = getchar_uart(NULL);
        if (c == 'a')
        {
            PIND = PIN_LED;
            printf("%s\n", "Hallo");
        }
    }
}
I'm running Ubuntu 14.04 LTS and using minicom for the communication, set up as 115200 8N1 (with the correct serial device, of course).
It gets compiled as:
avr-gcc -Wall -Os -mmcu=atmega328p -DF_CPU=16000000UL -DBAUD=115200 -std=c99 -L/home/joel/avr-libs/lib -I/home/joel/avr-libs/inc -o firmware.o main.c -luart
So how do I know that one way works? Because the LED only toggles when I type an 'a'. But the response consists of invalid characters. In hex:
c8 e1 ec ec ef 8a
By setting the USBS bit you are commanding a second stop bit.
This appears to lead your computer to mistakenly believe that the MSB (which is the last data bit) is set when it isn't, causing your received data to be OR'd with 0x80.
While this will cause a framing error, it is probably not the cause of the wrong MSB. Your own answer about switching to 2x mode (and thus more accurately approximating the baud rate) is more key to the solution, though you should correct this too.
I fixed the problem. When Chris suggested printing out the config registers that Arduino uses, I noticed that it uses double-speed mode. I couldn't configure that with minicom, or I missed it; maybe it is the default. Anyway, it works now.
I also learned that avr-libc provides a header called util/setbaud.h which calculates the correct baud rate settings automatically, in the UBRRL_VALUE and UBRRH_VALUE macros.
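For reference, a minimal sketch of what an init_uart() based on util/setbaud.h could look like. The register and macro names are the standard avr-libc ones; it assumes F_CPU and BAUD are passed on the compiler command line as in the build command above, and it leaves USBS0 clear so the frame format is plain 8N1.

#include <avr/io.h>

#ifndef BAUD
#define BAUD 115200              /* normally supplied with -DBAUD=115200 */
#endif
#include <util/setbaud.h>        /* computes UBRRH_VALUE, UBRRL_VALUE and USE_2X from F_CPU and BAUD */

void init_uart(void)
{
    UBRR0H = UBRRH_VALUE;        /* baud rate divisor, high byte */
    UBRR0L = UBRRL_VALUE;        /* baud rate divisor, low byte */

#if USE_2X
    UCSR0A |= _BV(U2X0);         /* double-speed mode: needed to get close to 115200 at 16 MHz */
#else
    UCSR0A &= ~_BV(U2X0);
#endif

    UCSR0B = _BV(TXEN0) | _BV(RXEN0);    /* enable transmitter and receiver */
    UCSR0C = _BV(UCSZ01) | _BV(UCSZ00);  /* 8 data bits, no parity, 1 stop bit (USBS0 left clear) */
}

Note that at 16 MHz and 115200 baud setbaud.h may emit a compile-time warning that the achieved rate is slightly off; the resulting double-speed setting is still the one that works here.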

End of input in XTerm using keyboard

How can I signal end of input to an XTerm terminal? In my case, I run a C++ program in an XTerm console and I want to signal the end of input to the program by pressing some combination of keys (I tried Ctrl+D and Ctrl+Z). My program goes like this:
map<int,string> info;
string name;
int age;
cin >> name;
while( /* ????????? */ ){   // input till EOF, missing logic
    cin >> age;
    info.insert( pair<int,string>(age,name) );
    cin >> name;
}
The program proceeds upon receiving the end of input signal from terminal.
You always need to check the input after reading, i.e., your program should look something like this:
while (std::cin >> name >> age) {
    // do something with name and age
}
This will read from std::cin until something fails. You can check whether std::cin.eof() is set to determine if reaching the end of the input is the cause of the failure, or whether there was some other failure, e.g., an attempt to enter something which isn't a number for the age.

NetServerEnum creates worker threads that won't close

While trying to solve a previously asked SO question of mine, I've found that the problem occurs even without my threads.
What I have now is a really simple single-threaded program that calls NetServerEnum(). When it returns, it calls NetApiBufferFree() and returns from main, which is supposed to end the process.
At that point, my thread truly ends, but the process won't exit, as there are 4 threads open (not opened by me):
1 * ntdll.dll!TplsTimerSet+0x7c0 (stack is at ntdll.dll!WaitForMultipleObjects)
(This one is opened upon the call to NetServerEnum())
3 * ntdll.dll!RtlValidateHeap+0x170 (stack is at ntdll.dll!ZwWaitWorkViaWorkerFactory+0xa)
(These are open when my code returns)
UPDATE:
If I kill the thread running ntdll.dll!TplsTimerSet+0x7c0 externally (using Process Explorer) before the return of main(), the program exits gracefully.
I thought it might be useful to know.
UPDATE2: (some more tech info)
I'm using:
MS Visual Studio 2010 Ultimate x64 (SP1Rel) on Win7 Enterprise SP1
Code is C (but the "compile as C++" switch is on)
Subsystem: WINDOWS
Compiler: cl.exe (using the IDE)
All other parameters are at their defaults.
I'm using a custom entry point (/ENTRY:"entry"), and it is the only function in my program:
int entry(void)
{
    SERVER_INFO_101* si;
    DWORD a, b;
    NET_API_STATUS c;

    c = NetServerEnum(NULL, 101, (LPBYTE*)&si, MAX_PREFERRED_LENGTH, &b, &a, SV_TYPE_WORKSTATION, NULL, 0);
    c = NetApiBufferFree(si);
    Sleep(1000);
    return 0;
}
All the tests mentioned above were performed inside a Windows domain network of about 100 machines.
UPDATE 3:
This problem does not occur when tested on a (non-virtual) WinXP 32-bit machine (same binary, though for the Win7 x64 case two binaries were tested: 32-bit over WOW64, and native x64).
When you use a custom entry point, you're bypassing the runtime library, which means you're responsible for exiting the process. The process will exit implicitly if there are no more threads running, but as you've discovered, the operating system may create threads on your behalf that you don't have control over.
In your case, all you need to do is to call ExitProcess explicitly at the end of the entry() function.
int entry(void)
{
    SERVER_INFO_101* si;
    DWORD a, b;
    NET_API_STATUS c;

    c = NetServerEnum(NULL, 101, (LPBYTE*)&si, MAX_PREFERRED_LENGTH, &b, &a, SV_TYPE_WORKSTATION, NULL, 0);
    c = NetApiBufferFree(si);
    Sleep(1000);
    ExitProcess(0);
}
In the absence of a call to ExitProcess and with a custom entry point, the behaviour you're seeing is as expected.

How can I generate an EOF (or an ASCII 0) in a visual studio debug console?

I have a console-mode program running on Windows. The program calls getchar() in a loop until either an EOF or a 0 is returned. I'd like to enter one of the following as a test vector while running the debugger:
"abc\0" or "abc" followed by EOF
I can't seem to consistently generate either. I tried the suggestion in this question by typing abc, Ctrl-Z, Enter. That returns 97, 98, 99, 26 to my program, and then it hangs on the next getchar() call.
Entering Ctrl-D doesn't help either: getchar returns a 4 for the control char, then a newline char, and then it duplicates the line I just entered on the console. It's like VS is using the control characters as editing characters.
EDIT:
Here is a stripped down version of the code I am using. I see identical behavior in the debug console and in a command window.
#define MAXSZ 4096

int main(int argc, char **argv)
{
    short int data[MAXSZ] = {0}, i = 0;
    char c;
    do {
        if (i == MAXSZ) break;
        c = getchar();
        if (c != EOF) data[i] = c;
    } while (data[i++] != 0);
    for (i = 0; data[i] && i < MAXSZ; i++)
        putchar(data[i]);
}
How do I enter an EOF or an ASCII 0 in the Visual Studio debug console or in a Windows console?
Try this one:
<Enter><Ctrl-Z><Enter>.
@Hans Passant's solution works for me - it should also work for the OP.
1. To generate an ASCII 0, type Ctrl+@ (or Ctrl+2).
2. To generate an EOF, type Ctrl+Z then Enter, but the input buffer needs to be empty. So this is typically after an Enter, thus Enter, Ctrl+Z, Enter.
But the OP's code has problems.
char c; // should be int ch
do {
    ...
    c = getchar();
    if (c != EOF) data[i] = c;
} while (...);
In the OP's code, if the character ASCII 255 occurs, it gets assigned to a char as -1, which compares equal to EOF. Instead use int ch.
if (c!=EOF) data[i]=c;
// should be
if (c==EOF) break;
data[i]=c;
This prevents the code from looping forever or erring once an EOF occurs.
To enter ASCII 255: hold down the Alt key, type 2 5 5 on the numeric keypad, then release the Alt key.
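Putting those fixes together, a corrected version of the OP's loop might look like the sketch below; MAXSZ is kept from the original, and the key changes are using an int for the value returned by getchar() and breaking out on EOF before storing anything.

#include <stdio.h>

#define MAXSZ 4096

int main(void)
{
    short int data[MAXSZ] = {0};
    int i = 0;
    int ch;                      /* int, so EOF stays distinguishable from the character 255 */

    while (i < MAXSZ) {
        ch = getchar();
        if (ch == EOF)           /* Enter, Ctrl+Z, Enter in a Windows console */
            break;
        data[i] = (short int)ch;
        if (ch == 0)             /* ASCII 0 also ends input, as in the original code */
            break;
        i++;
    }

    for (i = 0; i < MAXSZ && data[i]; i++)
        putchar(data[i]);
    return 0;
}

With this version the loop ends cleanly on either terminator instead of spinning forever once EOF occurs.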

get_user running in kernel mode returns an error

I have a problem with the get_user() macro. What I did is as follows:
I run the following program:
#include <stdio.h>
#include <unistd.h>

int main()
{
    int a = 20;
    printf("address of a: %p\n", &a);   /* newline so the address is flushed before sleeping */
    sleep(200);
    return 0;
}
When the program runs, it outputs the address of a, say, 0xbff91914.
Then I pass this address to a module running in Kernel Mode that retrieves the contents at this address (at the time when I did this, I also made sure the process didn't terminate, because I put it to sleep for 200 seconds... ):
The address is first sent as a string, and I cast it to a pointer type.
int * ptr = (int*)simple_strtol(buffer, NULL,16);
printk("address: %p",ptr); // I use this line to make sure the cast is correct. When running, it outputs bff91914, as expected.
int val = 0;
int res;
res= get_user(val, (int*) ptr);
However, res is always nonzero, meaning that get_user returns an error. I am wondering what the problem is....
Thank you!!
-- Fangkai
That is probably because you're trying to get the value from a different user address space. The address you got is from your simple program's address space, while you're probably using another program to pass the value to the module, aren't you?
The call to get_user must be made in the context of the user process.
Since you write "I also made sure the process didn't terminate, because I put it to sleep for 200 seconds..." I have a feeling you are not abiding by that rule. For the call to get_user to be in the context of the user process, you would have had to make a system call from that process and there would not have been a need to sleep the process.
So, you need to have your user process make a system call (an ioctl would be fine) and from that system call make the call to get_user.
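For illustration, here is a rough sketch of what the kernel side of such an ioctl could look like. The device name, ioctl number, and the miscdevice boilerplate are all hypothetical, not anything from the original module; the point is just that get_user() runs while the process that issued the ioctl is current.

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/uaccess.h>

/* Hypothetical ioctl command: userspace passes the address of one of its ints. */
#define MYDRV_IOC_READINT _IOW('k', 1, int)

static long mydrv_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int val;

    if (cmd != MYDRV_IOC_READINT)
        return -ENOTTY;

    /* We are in the context of the calling process, so its page tables are
       current and get_user() can resolve the user-space address. */
    if (get_user(val, (int __user *)arg))
        return -EFAULT;

    printk(KERN_INFO "mydrv: value at %#lx is %d\n", arg, val);
    return 0;
}

static const struct file_operations mydrv_fops = {
    .owner          = THIS_MODULE,
    .unlocked_ioctl = mydrv_ioctl,
};

static struct miscdevice mydrv_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "mydrv",
    .fops  = &mydrv_fops,
};

static int __init mydrv_init(void)
{
    return misc_register(&mydrv_dev);
}

static void __exit mydrv_exit(void)
{
    misc_deregister(&mydrv_dev);
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");

The user program would then open /dev/mydrv and call ioctl(fd, MYDRV_IOC_READINT, &a) on its own variable instead of printing the address and sleeping.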
