I've recently come across SPI2STATbits in the following function:
int WriteSPI2(int data)
{
    int f;
    SPI2BUF = data;                 // write to buffer for TX
    while (!SPI2STATbits.SPIRBF)    // wait for transfer completion
    {
        f = 1;
    }
    return SPI2BUF;                 // read the received value
} // WriteSPI2
I'm using the above in conjunction with a PIC24FJ128GA010 project.
I've been searching around to find out more about SPI2STATbits but haven't found any actual documentation. I assume this is a fairly basic requirement.
Can someone direct me to the correct documentation?
See page 130 of the datasheet and look for SPIxSTAT.
For the particular device, do a search for SPI2STATbits in all included header files.
It's in p24FJ128GA010.h as
#define SPI2STAT SPI2STAT
extern volatile uint16_t SPI2STAT __attribute__((__sfr__));
__extension__ typedef struct tagSPI2STATBITS {...
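For orientation, the truncated bitfield overlay follows the usual pattern shown below. This is an illustrative sketch based on the PIC24F-family SPIxSTAT register description, not a copy of the header, so check p24FJ128GA010.h and the datasheet for the authoritative bit layout:

/* Illustrative sketch of the SPIxSTAT bitfield overlay (positions per the
   PIC24F family SPIxSTAT description; verify against the real header). */
typedef struct tagSPI2STATBITS {
    unsigned SPIRBF:1;   /* bit 0:  receive buffer full */
    unsigned SPITBF:1;   /* bit 1:  transmit buffer full */
    unsigned :4;
    unsigned SPIROV:1;   /* bit 6:  receive overflow */
    unsigned :6;
    unsigned SPISIDL:1;  /* bit 13: stop in idle mode */
    unsigned :1;
    unsigned SPIEN:1;    /* bit 15: module enable */
} SPI2STATBITS;
extern volatile SPI2STATBITS SPI2STATbits __attribute__((__sfr__));

So SPI2STATbits.SPIRBF in the loop above is simply bit 0 of SPI2STAT, the "receive buffer full" flag described under SPIxSTAT in the datasheet.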
[skip to UPDATE2 and save some time :-)]
I'm using an ARM Cortex-M4 with CMSIS 5-5.7.0 and FreeRTOS, compiling with GCC for ARM (10_2021.10).
My variables are not initialized as they should be.
My startup code is pretty simple; the entry point is the reset handler (CMSIS declared startup_ARMCM4.s deprecated and recommends the C startup code, so that is what I use).
Here is my code:
__attribute__((__noreturn__)) void Reset_Handler(void)
{
    DataInit();
    SystemInit(); /* CMSIS System Initialization */
    main();
}

static void DataInit(void)
{
    typedef struct {
        uint32_t const* src;
        uint32_t* dest;
        uint32_t wlen;
    } __copy_table_t;

    typedef struct {
        uint32_t* dest;
        uint32_t wlen;
    } __zero_table_t;

    extern const __copy_table_t __copy_table_start__;
    extern const __copy_table_t __copy_table_end__;
    extern const __zero_table_t __zero_table_start__;
    extern const __zero_table_t __zero_table_end__;

    for (__copy_table_t const* pTable = &__copy_table_start__; pTable < &__copy_table_end__; ++pTable) {
        for (uint32_t i = 0u; i < pTable->wlen; ++i) {
            pTable->dest[i] = pTable->src[i];
        }
    }

    for (__zero_table_t const* pTable = &__zero_table_start__; pTable < &__zero_table_end__; ++pTable) {
        for (uint32_t i = 0u; i < pTable->wlen; ++i) {
            pTable->dest[i] = 0u;
        }
    }
}
__copy_table_start__, __copy_table_end__, etc. have the wrong values, and so no data is copied to the appropriate place in RAM.
I tried adding __libc_init_array() before DataInit(), as suggested in this answer, and removing the nostartfiles flag from the linker, but at some point __libc_init_array() jumps to an illegal address and I get a HardFault interrupt.
Is there a different way to fix it? Maybe one where I can keep the nostartfiles flag?
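For reference, the CMSIS GCC linker template (gcc_arm.ld) is what normally defines these tables. It lays them out roughly as in the sketch below, with each copy entry holding the source LMA, the destination VMA, and the length in 32-bit words; the section and region names are taken from the template and may differ in your own linker script:

/* Sketch based on the CMSIS gcc_arm.ld template (names/regions assumed). */
.copy.table :
{
    . = ALIGN(4);
    __copy_table_start__ = .;
    LONG (__etext)                              /* source (flash)    */
    LONG (__data_start__)                       /* destination (RAM) */
    LONG ((__data_end__ - __data_start__) / 4)  /* length in words   */
    __copy_table_end__ = .;
} > FLASH

.zero.table :
{
    . = ALIGN(4);
    __zero_table_start__ = .;
    LONG (__bss_start__)                        /* destination (RAM) */
    LONG ((__bss_end__ - __bss_start__) / 4)    /* length in words   */
    __zero_table_end__ = .;
} > FLASH

If the script doesn't populate these sections, the start/end symbols will still exist but won't bracket valid entries.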
UPDATE:
Looking at the memory where __copy_table_start__ is located, I see that the data there is valid (even without __libc_init_array()). It seems that pTable doesn't get the correct value.
I tried using __data_start__, __data_end__, __bss_start__, __bss_end__, and __etext instead of the above symbols; the linker file says they can be used in code without being defined, but they cannot (maybe that's a clue?). In any case, they didn't work either.
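As an aside, linker-defined symbols do still need an extern declaration on the C side, and only their addresses are meaningful; a minimal sketch of the usual way to use them directly (symbol names assumed to match the CMSIS template):

#include <stdint.h>

/* Linker symbols have no storage of their own; take their addresses only
   (names assumed from the CMSIS gcc_arm.ld template). */
extern uint32_t __etext;
extern uint32_t __data_start__;
extern uint32_t __data_end__;
extern uint32_t __bss_start__;
extern uint32_t __bss_end__;

static void CopyAndZero(void)
{
    const uint32_t *src = &__etext;
    for (uint32_t *dst = &__data_start__; dst < &__data_end__; )
        *dst++ = *src++;   /* copy .data from flash to RAM */
    for (uint32_t *dst = &__bss_start__; dst < &__bss_end__; )
        *dst++ = 0u;       /* zero .bss */
}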
UPDATE2:
Found the actual problem.
All struct members get the same value (modifying one changes all the others), and it happens with every struct. I have no idea how this is possible. In other words, the value of __copy_table_start__.src is, for example, 0x14651234, __copy_table_start__.dest is 0x00100000, and __copy_table_start__.wlen is 0x0365, but when looking at pTable all members are 0x14651234.
I am trying to write a module that uses the AXI4 streaming protocol to communicate with the previous and next modules. The modules use the following communication signals:
TDATA, which is 16 bits,
TKEEP, which is 2 bits,
TUSER, which is 1 bit,
TVALID, which is 1 bit,
TREADY, which is 1 bit and goes towards the previous module, and
TLAST, which is 1 bit.
These all need to be separate signals. I tried to implement it using the following code:
#include "core.h"
void core_module(hls::stream<ap_axis_str> &input_stream, hls::stream<ap_axis_str> &output_stream){
#pragma HLS INTERFACE axis port=input_stream
#pragma HLS INTERFACE axis port=output_stream
#pragma HLS INTERFACE s_axilite port=return bundle=CTRL
ap_axis_str strm_val_in;
ap_axis_str strm_val_out;
for (int i = 0; i<NDATA; i++){
strm_val_in = input_stream.read();
strm_val_out.data = strm_val_in.data * 2;
strm_val_out.keep = 3;
strm_val_out.valid = 1;
strm_val_in.ready = 1;
strm_val_out.user = ((i%2)==0);
strm_val_out.last = (i == NDATA-1) ? 1:0;
output_stream.write(strm_val_out);
}
}
where the header file is
#ifndef core_h
#define core_h
#include <ap_int.h>
#include <ap_axi_sdata.h>
#include <hls_stream.h>
typedef ap_uint<16> word;
#define NDATA 10
struct ap_axis_str {
    word data;
    ap_uint<2> keep;
    bool user;
    bool last;
    bool ready;
    bool valid;
};
void core_module(hls::stream<ap_axis_str> &input_stream, hls::stream<ap_axis_str> &output_stream);
#endif
The problem is that this doesn't separate the signals. When I synthesise it and run the co-simulation (giving it values from 0 to 9), even though the result is what I expect it to be, the waveform produced looks like this:
We can see that TREADY, TVALID, and TDATA are there, but not the other three. Furthermore, looking at the contents of TDATA (which for some reason is 64 bits wide), we notice that it contains all the signals. The values are the following (in hexadecimal):
0001000001030000,
0001000000030002,
0001000001030004,
0001000000030006,
...
000100000003000c,
0001000001030010,
0001000100030012.
From this we can see that the 3 in position 12 is probably what was intended to be TKEEP, the 1 in position 8, which only appears in the last case, is probably what was intended to be TUSER, the last four digits are what was supposed to be TDATA, and so on. Additionally, the module drops TREADY when it isn't ready to receive data, which is what TREADY is intended for, but I didn't program it to work this way, which means it's automatically generated and probably has nothing to do with the TREADY I told it to have.
So my question is: How do I make a module that gives out the correct 6 separate signals for the version of the AXI4 protocol that we are using?
Well, according to the Xilinx Documentation,
If you specify an hls::stream object with a data type other than ap_axis or ap_axiu, the tool will infer an AXI4-Stream interface without the TLAST signal, or any of the side-channel signals. This implementation of the AXI4-Stream interface consumes fewer device resources, but offers no visibility into when the stream is ending.
Now, I had already imported the needed module with #include <ap_axi_sdata.h>; all I needed to do was actually use it by removing
struct ap_axis_str {
    word data;
    ap_uint<2> keep;
    bool user;
    bool last;
    bool ready;
    bool valid;
};
and replacing it with
typedef ap_axiu<16, 1, 0, 0> ap_axis_str;
Additionally, I needed to remove my manual attempt to control TREADY and TVALID, as those are done automatically.
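For completeness, here is a sketch of what the loop can look like after the change, assuming the same core_module signature and NDATA as in the question; the TREADY/TVALID handshake is generated by the tool, so those assignments simply disappear:

#include "core.h"   // core.h now contains: typedef ap_axiu<16, 1, 0, 0> ap_axis_str;

void core_module(hls::stream<ap_axis_str> &input_stream, hls::stream<ap_axis_str> &output_stream){
#pragma HLS INTERFACE axis port=input_stream
#pragma HLS INTERFACE axis port=output_stream
#pragma HLS INTERFACE s_axilite port=return bundle=CTRL
    for (int i = 0; i < NDATA; i++){
        ap_axis_str strm_val_in = input_stream.read();
        ap_axis_str strm_val_out;
        strm_val_out.data = strm_val_in.data * 2;
        strm_val_out.keep = 0x3;              // both TDATA bytes are valid
        strm_val_out.strb = 0x3;              // ap_axiu also carries TSTRB
        strm_val_out.user = ((i % 2) == 0);   // 1-bit TUSER side channel
        strm_val_out.last = (i == NDATA - 1); // assert TLAST on the final beat
        output_stream.write(strm_val_out);
    }
}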
So my problem is as follows.
I have some platform-dependent code (embedded system) which writes to MMIO locations that are hardcoded at specific addresses.
I compile this code together with some management code inside a standard executable, mainly for testing, but also for simulation (because it takes longer to find basic bugs on the actual HW platform).
To deal with the hardcoded pointers, I just redefine them to point at variables inside a memory pool, and this works really well.
The problem is that some of the MMIO locations have specific hardware behavior (w1c, for example) which makes "correct" testing hard or impossible.
These are the solutions I thought of:
1 - Somehow redefine the accesses to those registers and insert some intermediate function to simulate the dynamic behavior. This is not really usable, since there are various ways to write to the MMIO locations (pointers and so on).
2 - Leave the addresses hardcoded and trap the illegal access through a seg fault: find the location that triggered it, extract exactly where the access was made, handle it, and return. I am not really sure how this would work (or even if it's possible).
3 - Use some sort of emulation. This will surely work, but it would defeat the whole purpose of running fast and natively on a standard computer.
4 - Virtualization? It would probably take a lot of time to implement, and I'm not sure the gain is justifiable.
Does anyone have any idea if this can be accomplished without going too deep? Maybe there is a way to manipulate the compiler to define a memory area for which every access generates a callback? I'm not really an expert in x86/GCC stuff.
Edit: It seems that it's not really possible to do this in a platform-independent way, and since it will be Windows only, I will use the available API (which seems to work as expected). I found this question here:
Is set single step trap available on win 7?
I will put the whole "simulated" register file inside a number of pages, guard them, and trigger a callback from which I will extract all the necessary info, do my work, and then continue execution.
Thanks all for responding.
I think #2 is the best approach. I routinely use approach #4, but I use it to test code that is running in the kernel, so I need a layer below the kernel to trap and emulate the accesses. Since you have already put your code into a user-mode application, #2 should be simpler.
The answers to this question may provide help in implementing #2. How to write a signal handler to catch SIGSEGV?
What you really want to do, though, is to emulate the memory access and then have the segv handler return to the instruction after the access. This sample code works on Linux. I'm not sure if the behavior it is taking advantage of is undefined, though.
// Compile as C++; REG_RAX/REG_RIP come from <ucontext.h> (GNU extension).
#include <stdint.h>
#include <stdio.h>
#include <signal.h>
#include <ucontext.h>

#define REG_ADDR ((volatile uint32_t *)0x12340000f000ULL)

static uint32_t read_reg(volatile uint32_t *reg_addr)
{
    uint32_t r;
    // A known instruction, so the handler knows the destination register (eax)
    // and can assume a 2-byte encoding for the mov.
    asm("mov (%1), %0" : "=a"(r) : "r"(reg_addr));
    return r;
}

static void segv_handler(int, siginfo_t *, void *);

int main()
{
    struct sigaction action = { 0, };
    action.sa_sigaction = segv_handler;
    action.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &action, NULL);

    // force sigsegv
    uint32_t a = read_reg(REG_ADDR);

    printf("after segv, a = %d\n", a);
    return 0;
}

static void segv_handler(int, siginfo_t *info, void *ucontext_arg)
{
    ucontext_t *ucontext = static_cast<ucontext_t *>(ucontext_arg);
    ucontext->uc_mcontext.gregs[REG_RAX] = 1234; // emulate the value returned by the "register" read
    ucontext->uc_mcontext.gregs[REG_RIP] += 2;   // skip past the 2-byte mov
}
The code to read the register is written in assembly to ensure that both the destination register and the length of the instruction are known.
This is what the Windows version of prl's answer could look like:
// Assumes x64 and a GCC-compatible compiler (e.g. MinGW-w64) for the inline asm.
#include <stdint.h>
#include <stdio.h>
#include <windows.h>

#define REG_ADDR ((volatile uint32_t *)0x12340000f000ULL)

static uint32_t read_reg(volatile uint32_t *reg_addr)
{
    uint32_t r;
    asm("mov (%1), %0" : "=a"(r) : "r"(reg_addr));
    return r;
}

static LONG WINAPI segv_handler(EXCEPTION_POINTERS *);

int main()
{
    SetUnhandledExceptionFilter(segv_handler);

    // force an access violation
    uint32_t a = read_reg(REG_ADDR);

    printf("after segv, a = %d\n", a);
    return 0;
}

static LONG WINAPI segv_handler(EXCEPTION_POINTERS *ep)
{
    // only handle a read access violation of REG_ADDR
    if (ep->ExceptionRecord->ExceptionCode != EXCEPTION_ACCESS_VIOLATION ||
        ep->ExceptionRecord->ExceptionInformation[0] != 0 ||
        ep->ExceptionRecord->ExceptionInformation[1] != (ULONG_PTR)REG_ADDR)
        return EXCEPTION_CONTINUE_SEARCH;

    ep->ContextRecord->Rax = 1234;  // emulate the register read
    ep->ContextRecord->Rip += 2;    // skip past the 2-byte mov
    return EXCEPTION_CONTINUE_EXECUTION;
}
So, the solution (code snippet) is as follows:
First of all, I have a variable:
__attribute__ ((aligned (4096))) int g_test;
Second, inside my main function, I do the following:
AddVectoredExceptionHandler(1, VectoredHandler);
DWORD old;
VirtualProtect(&g_test, 4096, PAGE_READWRITE | PAGE_GUARD, &old);
The handler looks like this:
/* PAGE_MASK is not a Windows macro; here it is assumed to be defined as
   (page size - 1), e.g. 0xFFF for 4 KiB pages. */
LONG WINAPI VectoredHandler(struct _EXCEPTION_POINTERS *ExceptionInfo)
{
    static ULONG_PTR last_addr;  /* ULONG_PTR so the address is not truncated on x64 */

    if (ExceptionInfo->ExceptionRecord->ExceptionCode == STATUS_GUARD_PAGE_VIOLATION) {
        last_addr = ExceptionInfo->ExceptionRecord->ExceptionInformation[1];
        ExceptionInfo->ContextRecord->EFlags |= 0x100; /* Single step to trigger the next one */
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    if (ExceptionInfo->ExceptionRecord->ExceptionCode == STATUS_SINGLE_STEP) {
        DWORD old;
        /* Re-arm the guard page now that the faulting access has completed */
        VirtualProtect((PVOID)(last_addr & ~(ULONG_PTR)PAGE_MASK), 4096, PAGE_READWRITE | PAGE_GUARD, &old);
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    return EXCEPTION_CONTINUE_SEARCH;
}
This is only a basic skeleton of the functionality. Basically, I guard the page on which the variable resides, and I have some linked lists in which I hold pointers to the callback functions and values for the addresses in question. I check that the fault-generating address is inside my list and then trigger the callback.
On the first guard hit, the page protection is disabled by the system, but I can call my PRE_WRITE callback, where I save the variable's state. Because a single step is requested through EFlags, it is followed immediately by a single-step exception (which means the variable has been written), and I can trigger a WRITE callback. All the data required for the operation is contained in the ExceptionInformation array.
When someone tries to write to that variable:
*(int *)&g_test = 1;
a PRE_WRITE followed by a WRITE will be triggered.
When I do:
int x = *(int *)&g_test;
A READ will be issued.
In this way I can manipulate the data flow without modifying the original source code.
Note: This is intended to be used as part of a test framework and any penalty hit is deemed acceptable.
For example, a W1C (write-1-to-clear) operation can be accomplished like this:
void MYREG_hook(reg_cbk_t type)
{
    /** We need to save the pre-write state.
     * This is safe since we are assured to be called with
     * both PRE_WRITE and WRITE in the correct order.
     */
    static int pre;

    switch (type) {
    case REG_READ:      /* Called pre-read */
        break;
    case REG_PRE_WRITE: /* Called pre-write */
        pre = g_test;
        break;
    case REG_WRITE:     /* Called after write */
        g_test = pre & ~g_test; /* W1C */
        break;
    default:
        break;
    }
}
This was also possible with seg-faults on illegal addresses, but I had to take one for every read/write and keep track of a "virtual register file", so the penalty was bigger. With guard pages I can guard only specific areas of memory, or none, depending on the registered monitors.
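To connect this back to the original problem: the platform code's hardcoded register pointers can be redirected at such guarded, page-aligned variables in the host/test build, so the source under test stays untouched. A hypothetical sketch (the macro, build flag, and address are made up for illustration):

/* Hypothetical host-build redirection of a hardcoded MMIO register. */
#ifdef HOST_TEST_BUILD
extern int g_test;                                    /* page-aligned, guarded as above */
#define MY_STATUS_REG (*(volatile int *)&g_test)
#else
#define MY_STATUS_REG (*(volatile int *)0x40001000u)  /* real MMIO address on target */
#endif

/* The platform code is unchanged; on the host, this write lands on the
   guarded page and runs the PRE_WRITE/WRITE hooks, which apply the W1C rule. */
void clear_error_flags(void)
{
    MY_STATUS_REG = 0x1;
}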
Moving from Boost 1.54 to 1.55, I now get this error during compilation (VS2010):
void GzipDecompression::Decompress(const unsigned char * src, int length)
{
    if(src)
    {
        // Create an input-stream source for the data buffer so we can use the boost filtering buffer
        std::ifstream inputstream;
        typedef boost::iostreams::basic_array_source<char> Device;
        boost::iostreams::stream_buffer<Device> buffer((char *)src, length);

        // Inflate using the GZIP filter
        filtering_streambuf<input> in;
        in.push(gzip_decompressor());
        in.push(buffer);

        // Get the result into a vector
        boost::interprocess::basic_vectorstream<std::vector<char>> vectorStream;
        copy(in, vectorStream);
        std::vector<char> output(vectorStream.vector());
    }
}
error C2243: 'type cast' : conversion from 'boost::interprocess::basic_vectorstream<CharVector> *' to 'volatile const std::basic_streambuf<_Elem,_Traits> *' exists, but is inaccessible c:\boost\boost_1_55_0\boost\iostreams\traits.hpp 57 1
It appears this is now failing:
boost::interprocess::basic_vectorstream<std::vector<char>> vectorStream;
What has changed so this doesn't compile?
Update After Reply: I've tried changing the output to this:
std::istream instream(&in);
typedef std::istream_iterator<char> istream_iterator;
typedef std::ostream_iterator<char> ostream_iterator;
std::vector<char> output;
std::copy(istream_iterator(instream), istream_iterator(), std::back_inserter(output));
But the output is different. Do I have to flush the stream or something?
Update2: Apparently the istream_iterator skips whitespace (CR, LF, etc.). Here is my working function:
void GzipDecompression::Decompress(const unsigned char * src, int length)
{
    if(src)
    {
        // Create an input-stream source for the data buffer so we can use the boost filtering buffer
        std::ifstream inputstream;
        typedef boost::iostreams::basic_array_source<char> Device;
        boost::iostreams::stream_buffer<Device> buffer((char *)src, length);

        // Inflate using the GZIP filter
        filtering_streambuf<input> in;
        in.push(gzip_decompressor());
        in.push(buffer);

        // Get the result into a vector
        std::istream instream(&in);
        typedef std::istreambuf_iterator<char> istreambuf_iterator;
        std::vector<char> output;
        std::copy(istreambuf_iterator(instream), istreambuf_iterator(), std::back_inserter(output));
    }
}
Thanks
Of course the error doesn't emerge from the line you mention. Instead, it's generated deep in the template instantiations for copy_impl. The problem seems to be that Boost Iostreams tries to be smart about detecting when people use "raw buffers" as devices/streams.
The problem with that is that the Interprocess stream implementation (privately) inherits from its buffer class, and this confuses the detection, because a conversion to the base class appears to be available but is not accessible.
This can be reproduced with GCC as well as VS2013 Update 4, and with Boost 1.58.0 as well. As such, it is an issue that could be reported to the Boost developers. I'd suggest it is a weakness in the Boost Interprocess implementation, although the Boost Iostreams developers might be interested in making their overload selection more robust in the presence of private base classes...
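A rough stand-alone illustration of that failure mode (this is an assumption about what the trait check in iostreams/traits.hpp effectively asks, not the actual Boost code):

#include <streambuf>

class Buffer : public std::streambuf {};   // plays the role of the vector buffer class
class Stream : private Buffer {};          // the stream inherits its buffer privately

void detect(volatile const std::streambuf *);

void trigger(Stream *s)
{
    detect(s);  // same flavour of error: the conversion to std::streambuf
                // exists, but is inaccessible
}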
In the meantime, consider using a simple boost::iostreams::array_sink or boost::iostreams::back_insert_device (IIRC), which is pretty much guaranteed to play well with Boost Iostreams and achieves the same goal.
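For example, the tail of Decompress could collect the inflated bytes with a Boost.Iostreams device along these lines (a sketch, not tested against the exact code in the question):

// After building "in" (the filtering_streambuf) exactly as in the question:
#include <boost/iostreams/copy.hpp>
#include <boost/iostreams/device/back_inserter.hpp>

std::vector<char> output;
boost::iostreams::copy(in, boost::iostreams::back_inserter(output));  // inflate straight into the vector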
I have been studying I2C driver (client) code for a while.
I have seen the functions i2c_get_clientdata and i2c_set_clientdata everywhere.
I have also seen this question here:
Use of pointer to structure instead of creating static local copy
Sometimes I think it is like the container_of macro, used to get a pointer to the enclosing structure.
But I still don't properly understand why and when to use it.
Below I am posting some sample code where I see it used.
Could anyone help me understand why it is used there, and when we should use it when writing our own drivers?
struct max6875_data {
    struct i2c_client *fake_client;
    struct mutex update_lock;
    u32 valid;
    u8 data[USER_EEPROM_SIZE];
    unsigned long last_updated[USER_EEPROM_SLICES];
};

static ssize_t max6875_read(struct file *filp, struct kobject *kobj,
                            struct bin_attribute *bin_attr,
                            char *buf, loff_t off, size_t count)
{
    struct i2c_client *client = kobj_to_i2c_client(kobj);
    struct max6875_data *data = i2c_get_clientdata(client);
    int slice, max_slice;

    if (off > USER_EEPROM_SIZE)
        return 0;

    if (off + count > USER_EEPROM_SIZE)
        count = USER_EEPROM_SIZE - off;

    /* refresh slices which contain requested bytes */
    max_slice = (off + count - 1) >> SLICE_BITS;
    for (slice = (off >> SLICE_BITS); slice <= max_slice; slice++)
        max6875_update_slice(client, slice);

    memcpy(buf, &data->data[off], count);

    return count;
}
Those functions are used to get/set the void *driver_data pointer that is part of the struct device, itself part of struct i2c_client.
This is a void pointer that is for the driver to use; it is mainly used to pass driver-related data around.
That is what is happening in your example: max6875_read is a callback that receives a struct kobject. That kobject belongs to an i2c_client, which is enough to talk to the underlying device, and using the driver_data pointer here lets the callback get back the driver's own data (instead of using global variables, for example).
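For illustration, the usual pairing is to allocate the private data in probe() and attach it with i2c_set_clientdata(), then fetch it back with i2c_get_clientdata() in any callback that only has the client. A simplified sketch loosely based on the max6875 driver (the exact probe prototype depends on the kernel version):

static int max6875_probe(struct i2c_client *client,
                         const struct i2c_device_id *id)
{
    struct max6875_data *data;

    data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL);
    if (!data)
        return -ENOMEM;

    mutex_init(&data->update_lock);

    /* Attach the driver's private state to this client... */
    i2c_set_clientdata(client, data);
    return 0;
}

/* ...so that callbacks like max6875_read above can retrieve it with
 * i2c_get_clientdata(client) instead of relying on global variables. */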