My task is to copy the first 255 bytes from an external EEPROM (24LC64) to the internal EEPROM of a PIC16F877 over the I2C bus. I've read AN1488, all the datasheets, and the MikroC guide (yes, I'm using MikroC), but I'm stuck. My code tries to read something, but when I read back my PIC's EEPROM in the programmer (which can't read the 24LC64, so I don't even know what's on it, but there is definitely something, and it differs from what I'm getting), the whole EEPROM is filled with "A2" or "A3". My guess is that the problem is the first address byte I use to address the 24LC64. Could you please inspect my code (it's quite small) and point out my mistakes?
char i;
unsigned short Data;

void main() {
    PORTB = 0;
    TRISB = 0;
    I2C1_Init(100000);
    PORTB = 0b00000010;
    for (i = 0x00; i < 0xFF; i++) {
        I2C1_Start();
        I2C1_Wr(0xA2);          // control byte: 1010 001 0
        // I'm getting the whole internal EEPROM filled with the value from the line above
        I2C1_Wr(0b00000000);    // address high byte
        I2C1_Wr(i);             // address low byte
        I2C1_Repeated_Start();
        I2C1_Wr(0xA3);          // control byte: 1010 001 1
        Data = I2C1_Rd(0);      // read one byte, respond with NACK
        I2C1_Stop();
        EEPROM_write(i, Data);  // how could that 1010 001 0 end up in here???
        Delay_100ms();
    }
    PORTB = 0b00000000;
    while (1) {
    }
}
P.S. I've tried this with a sequential read, but it "reads" (again that "A2"...) only the first byte, so I've posted this version.
P.P.S. I'm working on real hardware, no Proteus involved.
P.P.P.S. I can't test writing, because I have only one 24LC64, with important info on it, so its WP pin is even pulled up to Vcc.
This isn't a specific answer, more of a checklist for I2C comms, since it's difficult to help with your problem without looking at a scope and without delving into the API calls you're using.
Check the address of your EEPROM. I2C uses a 7-bit address with an R/W bit appended at the end, so it's easy to make a mistake here (see the sketch after this list).
Check the command sequence your EEPROM expects for a "data read".
Check how the I2C_ API you're using deals with ACKs from the EEPROM. They need to be handled somewhere (often in an ISR), and it's not obvious from your example where that happens.
Check that you've got the correct pull-ups on SDA and SCL as per the requirements of your design; I2C won't work without them.
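For illustration, here is a minimal sketch of a 24LC64 random read in MikroC, assuming the chip's address pins A2..A0 are grounded, which gives control bytes 0xA0 (write) and 0xA1 (read); if an address pin really is tied high, as the 0xA2/0xA3 in your code implies, adjust the control bytes to match. It also bails out on the status code that I2C1_Wr() returns (nonzero indicating a bus error in MikroC PRO):

    // Sketch: random read of one byte from a 24LC64 at a 16-bit address.
    // Control bytes assume address pins A2..A0 are grounded.
    unsigned short EE24LC64_Read(unsigned int addr) {
        unsigned short data;

        I2C1_Start();
        if (I2C1_Wr(0xA0)) {      // control byte, R/W = 0: set the address pointer
            I2C1_Stop();          // nonzero status -> bus error, give up
            return 0xFF;
        }
        I2C1_Wr(addr >> 8);       // address high byte
        I2C1_Wr(addr & 0xFF);     // address low byte
        I2C1_Repeated_Start();
        I2C1_Wr(0xA1);            // control byte, R/W = 1: read
        data = I2C1_Rd(0);        // read one byte, respond with NACK
        I2C1_Stop();
        return data;
    }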
In an external kernel module using the DMA Engine, calling dma_request_chan() returns an error pointer of value -19, i.e. -ENODEV or "No such device".
Now, in the active device tree I do find a dma-names entry with the name I'm trying to get a channel for, so my suspicion is that something else deeper in the forest is already not found.
How do I find out what's wrong?
Background:
I have a Zynq UltraScale+ MPSoC board here, with an FPGA design that uses an AXI VDMA block to provide one channel of data to be received on the Cortex-A's Linux: the data is written to DDR4 by the FPGA and read from Linux.
I found that there is a Xilinx DMA driver included in the kernel, in the Xilinx source repo anyway, currently at kernel version 5.6.0.
That driver has no user-space interface, so an intermediate kernel driver is needed.
This is depicted, and they have an example, in section "4 DMA Proxy Design". I modified the code in dma-proxy.c from the zip file linked there so that it uses only the RX channel, i.e. it also only tries to request that one.
The code for that is here, to keep this post from getting huge:
Modified dma-proxy.c at onlinegdb.com
Line 407 has the function create_channel(). It used to use dma_request_slave_channel(), which ditches the error code of the function it wraps, so to see the error I am using the wrapped function dma_request_chan() directly instead.
The function create_channel() is called in the function dma_proxy_probe() at line 470 (the occurrences before that are deactivated by a compile switch).
So by way of this call, dma_request_chan() will be called with the parameters:
create_channel(pdev, &channels[RX_CHANNEL], "dma_proxy_rx", DMA_DEV_TO_MEM);
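For reference, a minimal error-reporting pattern around dma_request_chan() looks like the sketch below (the channel name follows the proxy example; the function name is made up for illustration):

    #include <linux/dmaengine.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>

    /* Sketch: request the RX channel and log the reason on failure.
     * "dma_proxy_rx" must match an entry in the node's dma-names. */
    static int request_rx_channel(struct platform_device *pdev)
    {
            struct dma_chan *chan;

            chan = dma_request_chan(&pdev->dev, "dma_proxy_rx");
            if (IS_ERR(chan)) {
                    dev_err(&pdev->dev, "dma_request_chan() failed: %ld\n",
                            PTR_ERR(chan));
                    return PTR_ERR(chan);   /* e.g. -ENODEV (-19) */
            }

            /* ... store chan and set it up ... */
            return 0;
    }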
The device tree for my board has an added node for the dma-proxy driver, as shown at the top of dma-proxy.c:
dma_proxy {
    compatible = "xlnx,dma_proxy";
    dmas = <&axi_dma_0 0>;
    dma-names = "dma_proxy_rx";
};
The name "axi_dma_0" matches with the name in the axi DMA device tree node:
axi_dma_0: dma@a0000000 {
    #dma-cells = <0x1>;
    clock-names = "s_axi_lite_aclk", "m_axi_s2mm_aclk";
    clocks = <0x3 0x47 0x3 0x47>;
    compatible = "xlnx,axi-dma-7.1", "xlnx,axi-dma-1.00.a";
    interrupt-names = "s2mm_introut";
    interrupt-parent = <0x1d>;
    interrupts = <0x0 0x2>;
    reg = <0x0 0xa0000000 0x0 0x1000>;
    xlnx,addrwidth = <0x28>;
    xlnx,sg-length-width = <0x1a>;
    phandle = <0x1e>;

    dma-channel@a0000030 {
        compatible = "xlnx,axi-dma-s2mm-channel";
        dma-channels = <0x1>;
        interrupts = <0x0 0x2>;
        xlnx,datawidth = <0x40>;
        xlnx,device-id = <0x0>;
    };
};
If I now look here:
% cat /proc/device-tree/dma_proxy/dma-names
dma_proxy_rx
It looks like the dma_proxy_rx that I'm trying to request the channel for is in there.
Edit:
In the boot log, I see this:
xilinx-vdma a0000000.dma: Please ensure that IP supports buffer length > 23 bits
irq: no irq domain found for interrupt-controller@a0010000 !
xilinx-vdma a0000000.dma: unable to request IRQ 0
xilinx-vdma a0000000.dma: WARN: Device release is not defined so it is not safe to unbind this driver while in use
xilinx-vdma a0000000.dma: Xilinx AXI DMA Engine Driver Probed!!
There are warnings, but in the end the Xilinx AXI DMA Engine Driver got "probed", meaning the lowest-level driver loaded and is ready, right?
So it looks to me like my device should be there, but the kernel disagrees.
I had the same problem with a similar configuration. After digging through a lot of kernel source code (especially drivers/dma/xilinx/xilinx_dma.c), I solved it by changing the channel number in the dmas property from 0 to 1 in the dma-proxy device tree entry, like this:
dma_proxy {
    compatible = "xlnx,dma_proxy";
    dmas = <&axi_dma_0 1>;
    dma-names = "dma_proxy_rx";
};
It seems that the dma-proxy example is written for an AXI DMA block with both the mm2s (channel #0) and the s2mm (channel #1) channel enabled. If we remove the mm2s channel from the AXI DMA block, the s2mm channel still stays at #1.
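For comparison, a proxy node for a design with both channels enabled would presumably look like this (a sketch; the "dma_proxy_tx" name is an assumption taken from the original example):

dma_proxy {
    compatible = "xlnx,dma_proxy";
    /* channel #0 = mm2s (TX), channel #1 = s2mm (RX) */
    dmas = <&axi_dma_0 0>, <&axi_dma_0 1>;
    dma-names = "dma_proxy_tx", "dma_proxy_rx";
};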
Hi, I'm new to this and I need help. It's supposed to just show the 'S' in RealTerm, but instead it gives 'null'. What could the problem be? Could it be a register, or the code itself?
#include <avr/io.h>
#include <util/delay.h>

void UART_Init(unsigned int ubrr)
{
    UBRRH = (unsigned int)(ubrr >> 8);
    UBRRL = (unsigned int)ubrr;
    UCSRA = 0x00;
    UCSRB = (1 << TXEN) | (1 << RXEN);
    UCSRC = (0 << USBS) | (1 << UCSZ0) | (1 << UCSZ1);
}

void UART_Tx(unsigned char chr)
{
    while (bit_is_clear(UCSRA, UDRE)) {}
    UDR = chr;
}

int main(void)
{
    UART_Init(95);
    DDRD |= 0B11111111;
    PORTD |= 0B11111111;
    while (1) {
        _delay_ms(10);
        UART_Tx('S');
    }
}
The system is running on a 14745600 Hz crystal. The speed on the host is 9600 baud. All settings should be 8N1.
You need to set the URSEL bit when writing to the UCSRC register.
Change
UCSRC=(0<<USBS)|(1<<UCSZ0)|(1<<UCSZ1);
to
UCSRC=(1<<URSEL)|(0<<USBS)|(1<<UCSZ0)|(1<<UCSZ1);
From the datasheet:
The UBRRH Register shares the same I/O location as the UCSRC Register. Therefore some special consideration must be taken when accessing this I/O location. When doing a write access of this I/O location, the high bit of the value written, the USART Register Select (URSEL) bit, controls which one of the two registers that will be written. If URSEL is zero during a write operation, the UBRRH value will be updated. If URSEL is one, the UCSRC setting will be updated.
The rest of the code looks fine to me.
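As a side check, the UBRR value of 95 does match the stated clock and baud rate: for asynchronous normal mode the formula is UBRR = f_OSC / (16 * baud) - 1, and 14745600 / (16 * 9600) - 1 = 95. A compile-time version of that calculation (a sketch using the common AVR idiom) would be:

    #define F_CPU    14745600UL
    #define BAUD     9600UL

    /* UBRR = F_CPU / (16 * BAUD) - 1 = 14745600 / 153600 - 1 = 95 */
    #define UBRR_VAL ((F_CPU / (16UL * BAUD)) - 1UL)

    UART_Init(UBRR_VAL);   /* same as UART_Init(95) */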
Change UART_Tx('S'); to UART_Tx("S");
I am writing a network device driver.
Kernel 2.6.35.12
The device is supposed to work when it is connected to a bridge port.
I am trying to intercept ICMPv6 RS and NS messages (Router/Neighbor Solicitation) forwarded to the interface from the bridge.
eth <-> br0 <-> mydevice
In the device's start_xmit function I am doing the following:
Check that the protocol field after the Ethernet header is IPv6 (0x86dd).
Check that the IPv6 next header is ICMPv6, and check its type:
__u8 nexthdr = ipv6_hdr(skb)->nexthdr;

if (nexthdr == htons(IPPROTO_ICMPV6))
{
    struct icmp6hdr *hdr = icmp6_hdr(skb);
    u8 type = hdr->icmp6_type;

    if (type == htons(NDISC_NEIGHBOUR_SOLICITATION) || type == htons(NDISC_ROUTER_SOLICITATION))
    {
        /* ... do something here ... */
    }
}
When RS/NS messages are sent from within the device (e.g. br0), I see that the code works correctly.
The problem is when traffic is forwarded through the bridge from the other port:
I see that icmp6_hdr(skb) returns an incorrect header.
Debugging some more, it seems that skb->network_header and skb->transport_header point to the same place.
icmp6_hdr() uses the transport_header, which explains why it is incorrect.
Dumping the skb data, all the headers and the payload are at the right offsets (I also compared with tcpdump).
I suspect it might be related to the bridge code. Before diving into that, I thought maybe someone has come up against something similar, or has any other ideas?
Part of the problem is that you are assuming that Netfilter did anything more than just figure out what the next header is. In my experience (albeit not very long), you want to do something like this:
struct icmp6hdr *icmp6;
// Obviously don't do this unless you've checked that it's the right protocol
struct ipv6hdr *ip6hdr = (struct ipv6hdr *)skb->network_header;

// You need to move the headers around.
// Notice the memory address of skb->data and skb->network_header are the same:
// that means the IP header hasn't been "pulled" yet.
skb->transport_header = skb_pull(skb, sizeof(struct ipv6hdr));

if (ip6hdr->nexthdr == IPPROTO_ICMPV6) {   // nexthdr is a single byte, no ntohs() needed
    icmp6 = (struct icmp6hdr *)skb->transport_header;
    // icmp6_type is also a single byte, so no byte-order conversion here either
    __u8 type = icmp6->icmp6_type;

    switch (type) {
    case NDISC_NEIGHBOUR_SOLICITATION:
    case NDISC_ROUTER_SOLICITATION:
        // Do your stuff
        break;
    }
}
Hopefully this was helpful. I just started diving into writing Netfilter code, so I am not 100% certain, but I found this out when I was trying to do something similar with IPv4 on the NF_IP_LOCAL_IN hook.
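As a variant, since the question reports that the network header offset is still correct on the bridge path, one could compute the ICMPv6 header from the network header instead of trusting skb->transport_header. A sketch, assuming no IPv6 extension headers sit between the fixed 40-byte IPv6 header and the ICMPv6 header:

    struct ipv6hdr *ip6 = ipv6_hdr(skb);   /* based on skb->network_header */

    if (ip6->nexthdr == IPPROTO_ICMPV6) {
            /* The ICMPv6 header directly follows the fixed IPv6 header
             * when no extension headers are present (an assumption here). */
            struct icmp6hdr *icmp6 =
                    (struct icmp6hdr *)((u8 *)ip6 + sizeof(struct ipv6hdr));

            if (icmp6->icmp6_type == NDISC_NEIGHBOUR_SOLICITATION ||
                icmp6->icmp6_type == NDISC_ROUTER_SOLICITATION) {
                    /* handle the solicitation */
            }
    }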
Hi, I have a problem with sending a message in my project. I am using a PIC16F877A and a SIM300. The main function runs repeatedly, and some characters are missing from the sent SMS.
My program is like this...
void main() // main function
{
    Serial_init(); // initialization of serial communication
    Send_SMS();
}

void Serial_init()
{
    TRISC = 0xC0;
    TXSTA = 0x24;
    SPBRG = 129;   // set baud rate 9600 for 20 MHz fosc
    RCSTA = 0x90;
    TXIF = 1;
}

void Send_SMS(void)
{
    USART_puts("AT\0");
    putch1(0x0D);                           // ASCII carriage return
    Delay_ms4M(200);
    USART_puts("AT+CMGF=1\0");              // switch into text mode
    putch1(0x0D);
    Delay_ms4M(200);
    USART_puts("AT+CMGS=\"9741153218\"\0"); // send SMS to this number
    putch1(0x0D);
    Delay_ms4M(200);
    USART_puts("Hi this is working LOL\0"); // SMS text
    putch1(0x0A);                           // new line
    Delay_ms4M(200);
    putch1(0x0D);
    Delay_ms4M(100);
    putch1(0x1A);                           // ASCII 'substitute', i.e. end of message
}

void USART_puts(const unsigned char *string)
{
    while (*string)
        putch1(*string++);
}

void putch1(unsigned char data)
{
    while (TXIF == 0);
    TXREG = data;
}
Please help.
Additional details: all other programs run properly, but if I call the Send_SMS function, main() runs repeatedly and several messages are sent with missing characters.
IMHO:
Your chip is resetting. This is the most probable cause.
Either it is faulty, or you have turned the Watchdog Timer on somewhere.
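Related to that: main() in the posted code simply returns after Send_SMS(), and on many PIC compilers falling off the end of main() restarts the program, which would also make main "run repeatedly". A sketch of the guard:

    void main(void)
    {
        Serial_init();
        Send_SMS();

        while (1)
            ;   /* idle forever so main() never returns and restarts */
    }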
For the missing characters:
a) The chip resets in the middle of a data transfer.
b) Rule of thumb for the USART: stop stuffing bytes into the USART back to back. Send each byte with a small leading delay, say 10-20 microseconds (see the sketch below).
The communication is asynchronous, which means the receiver has to synchronize at the beginning of each communication unit, i.e. each byte. To do that, the receiver spends real resources detecting the start bit, its length in time, and so on. So if you try to send a byte train, you will stall the receiver.
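A minimal sketch of that pacing idea, assuming a compiler that provides a __delay_us() built-in with _XTAL_FREQ defined (adjust to your toolchain's delay routine):

    #define _XTAL_FREQ 20000000   /* 20 MHz, matching the SPBRG setting above */

    void putch1(unsigned char data)
    {
        while (TXIF == 0);   /* wait until the transmit register is free */
        __delay_us(20);      /* small leading delay so the receiver can resync */
        TXREG = data;
    }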
Have you tried the code with another PIC16F877A (to rule out chip failure)?
I am using I2C on the Snowball board, which runs at 400 kHz by default, and I would like to reduce this to 100 kHz.
I use the API defined in <linux/i2c-dev.h> and configure as follows:
m_fd = open(m_filename.c_str(), O_RDWR);
if (ioctl(m_fd, I2C_SLAVE_FORCE, m_addr) < 0)
{
    throw I2cError(DeviceConfigFail);
}
Does anyone know how I would go about changing the speed to standard mode?
Thanks
You can change the I2C SCL frequency in your driver's struct i2c_gpio_platform_data (this applies when the bus is driven by the bit-banging i2c-gpio driver):
static struct i2c_gpio_platform_data xyz_i2c_gpio_data = {
    .sda_pin = GPIO_XYZ_SDA,
    .scl_pin = GPIO_XYZ_SCL,
    .udelay  = 5,   // signal toggle delay; SCL frequency is (500 / udelay) kHz
    ....
};
Changing udelay changes your 'xyz' I2C device's clock frequency; with .udelay = 5 as above, SCL runs at 500 / 5 = 100 kHz.
You should change the I2C frequency in the driver source file of the corresponding peripheral (i.e. the slave device you are communicating with through I2C, e.g. an EEPROM or a camera).
You may find a macro defined in that driver source code, like:
#define EEPROM_I2C_FREQ 400000 // 400 kHz
Change it to:
#define EEPROM_I2C_FREQ 100000 // 100 kHz
The I2C frequency/speed will then be changed only for that corresponding driver.