I want to send a command from one PIC12F1572 to another PIC12F1572 through UART,
and with that command I want to send a function which will blink the LED on the slave PIC.
I have written some code, but I am somewhat confused.
Can anyone help me here?
P.S. I am using MPLAB X IDE v3.50.
Master/Transmitter PIC12F1572:
#include <xc.h>
#include "main.h"
#include "config.h"
#include "TYPEDEF_Definitions.h"
#include "PIC_12_Timer0.h"
#include "PIC_12_UART.h"
#include "System_init.h"
#include "Pin_manager.h"
#include "LED.h"
#define _XTAL_FREQ 16000000
void main()
{
SYSTEM_Initialize();
//pin_config();
LATA = 0x00;
UART_Init(); //Initialize UART
// StartLedBlinkingWithColor(color);
TRISAbits.TRISA2 = 1; //make RA2 as input
while (1)
{
//LATAbits.LATA2 = 1;
uint8_t Var = 0x61;
//uint8_t Var = LATAbits.LATA2;
if(TXSTAbits.TRMT == 1)
{
UART_write(LED_Blink()); // is it possible?? or how it will be possible
// __delay_ms(1000);
}
LATAbits.LATA2 = 1;
}
}
void LED_Blink()
{
LATAbits.LATA2 = 1;
LATAbits.LATA5 = 1;
__delay_ms(1000);
LATAbits.LATA2 = 0;
LATAbits.LATA5 = 0;
}
Receiver/Slave PIC12F1572:
#include <xc.h>
#include "config.h"
#include "PIC_12F_GPIO.h"
#include "Led.h"
#include "Interruptmanage.h"
#include "PIC_12F_UART.h"
#include "PIC_12F_TIMER0.h"
#include "main.h"
void main( void)
{
uint8 data;
InterruptInit();
TIMER0_Init();
UART_Init();
InitLeds(); // here I init GPIO pin..no prb here
// SetLedOff();
/*-------------R E C E I V E R*------------*/
while (1)
{ // Endless loop
if (UART_DataReady() ) // I get prob here ..
{
PORTA = UART_ReadOneByte(); // read the received data; how can I receive my LED_Blink() function??
LATAbits.LATA2 = 1;
//LATAbits.LATA2 = data;
//SendByteSerially(data);
}
}
}
There are a couple of things to consider:
You cannot "send a function" through the UART. Therefore, the LED_Blink() function needs to be on the receiver side. Before doing anything else, verify that the function works on the receiver side, without any UART interaction.
Next, you can define a protocol, which basically means deciding which byte values sent through the UART should trigger the LED_Blink() call on the receiver side. For example, if you decide that the byte value 42 triggers an LED blink, your sender would call UART_write(42), and on the receiver side you would have something like the following:
data = UART_ReadOneByte();
if (data == 42) {
LED_Blink();
}
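On the sending side, the main loop from the question would then reduce to something like the following minimal sketch (BLINK_CMD is just a name I made up for the agreed value 42; UART_write() and TXSTAbits.TRMT are taken from the question code):
#define BLINK_CMD 42              // protocol byte agreed between both sides

while (1)
{
    if (TXSTAbits.TRMT == 1)      // transmit shift register empty, safe to send
    {
        UART_write(BLINK_CMD);    // send the trigger byte, not a function
        __delay_ms(1000);         // request roughly one blink per second
    }
}
That way the only thing travelling over the wire is a single protocol byte; the blinking logic itself lives entirely on the receiver.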
So, for receiving data from the UART, saving it in an array, and using that data to blink the LED, I have written the following code. Anyone interested can have a look:
while (1)
{
if (EUSART_DataReady)
{
for (i = 0; i<FRAMESIZE; i++) //#define FRAMESIZE 19
{
RX_Buffer[i] = EUSART_Read();
if (RX_Buffer[i] == '\n') // check for '\n' at the end of the frame
break;
}
RX_Buffer[i] = '\n'; // append '\n' at the end of the storage array to mark the end of the frame
RX_Buffer[i+1] = '\0'; // End of an array
EUSART_WriteAnArrayOfBytes(RX_Buffer);
if(RX_Buffer[0]=='R' && RX_Buffer[FRAMESIZE-2] == '\n') //check for correct frame
{
LATAbits.LATA2 = 1;
__delay_ms(2000);
LATAbits.LATA2 = 0;
__delay_ms(1000);
}
}
}
I'm making a simple car gauge, and I want to add a little personal touch, nothing special.
Setup: ESP32 and ST7735S, with no SD card.
At the beginning I want to show a boot screen (a picture; I use flash memory for this because I have no SD card, tftIcons.ino).
After about 5 s I want to transition from the picture to my gauges (simple print code for now), but I don't know how to stitch these two together to get what I want.
The code for the Welcome/Boot screen is:
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include "bitmaps.h"
#include "bitmapsLarge.h"
// For the breakout, you can use any 2 or 3 pins
// These pins will also work for the 1.8" TFT shield
#define TFT_CS 5
#define TFT_RST 4 // you can also connect this to the Arduino reset
// in which case, set this #define pin to 0!
#define TFT_DC 2
Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);
void setup() {
tft.initR(INITR_BLACKTAB);
tft.setRotation(0);
tft.fillScreen(ST7735_BLACK);
//Case 2: Multi Colored Images/Icons
int h = 160,w = 128, row, col, buffidx=0;
for (row=0; row<h; row++) { // For each scanline...
for (col=0; col<w; col++) { // For each pixel...
//To read from Flash Memory, pgm_read_XXX is required.
//Since image is stored as uint16_t, pgm_read_word is used as it uses 16bit address
tft.drawPixel(col, row, pgm_read_word(evive_in_hand + buffidx));
buffidx++;
} // end pixel
}
}
void loop() {
}
The code for the gauges is:
#include <SPI.h>
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#define TFT_CS 5
#define TFT_RST 4 // Or set to -1 and connect to Arduino RESET pin
#define TFT_DC 2
Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);
float p = 3.1415926;
void setup(void) {
Serial.begin(9600);
Serial.print(F("Hello! ST77xx TFT Test"));
// Use this initializer if using a 1.8" TFT screen:
tft.initR(INITR_BLACKTAB); // Init ST7735S chip, black tab
tft.setRotation(1); // set display orientation
}
void loop() {
tft.fillScreen(ST77XX_BLACK);
print_text(20,5,"1.54",5,ST77XX_GREEN);
print_text(70,50,"BAR",2,ST77XX_GREEN);
print_text(5,90,"Temp motora: 81'C",1,ST77XX_WHITE);
print_text(5,100,"Temp usisa: 30'C",1,ST77XX_BLUE);
print_text(146,116,"AM",1,ST77XX_WHITE);
delay(5000);
}
void print_text(byte x_pos, byte y_pos, char *text, byte text_size, uint16_t color) {
tft.setCursor(x_pos, y_pos);
tft.setTextSize(text_size);
tft.setTextColor(color);
tft.setTextWrap(true);
tft.print(text);
}
Can someone tell me how I can make the ESP32 show the Welcome/boot screen for 5 s when it powers on, and then automatically transition to the gauges screen?
EDIT: When I join these two sketches:
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include "bitmaps.h"
#include "bitmapsLarge.h"
// For the breakout, you can use any 2 or 3 pins
// These pins will also work for the 1.8" TFT shield
#define TFT_CS 5
#define TFT_RST 4 // you can also connect this to the Arduino reset
// in which case, set this #define pin to 0!
#define TFT_DC 2
Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);
void setup() {
tft.initR(INITR_BLACKTAB);
tft.setRotation(0);
tft.fillScreen(ST7735_BLACK);
//Case 2: Multi Colored Images/Icons
int h = 160,w = 128, row, col, buffidx=0;
for (row=0; row<h; row++) { // For each scanline...
for (col=0; col<w; col++) { // For each pixel...
//To read from Flash Memory, pgm_read_XXX is required.
//Since image is stored as uint16_t, pgm_read_word is used as it uses 16bit address
tft.drawPixel(col, row, pgm_read_word(evive_in_hand + buffidx));
buffidx++;
} // end pixel
}
delay(5000); // Delay 5 s, then run the code below?
}
// Timer
float p = 3.1415926;
Serial.begin(9600);
Serial.print(F("Hello! ST77xx TFT Test"));
// Use this initializer if using a 1.8" TFT screen:
tft.initR(INITR_BLACKTAB); // Init ST7735S chip, black tab
tft.setRotation(1); // set display orientation
void loop() {
tft.fillScreen(ST77XX_BLACK);
print_text(20,5,"1.54",5,ST77XX_GREEN);
print_text(70,50,"BAR",2,ST77XX_GREEN);
print_text(5,90,"Temp motora: 81'C",1,ST77XX_WHITE);
print_text(5,100,"Temp usisa: 30'C",1,ST77XX_BLUE);
print_text(146,116,"AM",1,ST77XX_WHITE);
delay(5000);
}
void print_text(byte x_pos, byte y_pos, char *text, byte text_size, uint16_t color) {
tft.setCursor(x_pos, y_pos);
tft.setTextSize(text_size);
tft.setTextColor(color);
tft.setTextWrap(true);
tft.print(text);
}
I get exit status 1
'Serial' does not name a type
#include <SPI.h>
#include <Adafruit_GFX.h> // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include "bitmaps.h"
#include "bitmapsLarge.h"
#define TFT_CS 5
#define TFT_RST 4 // Or set to -1 and connect to Arduino RESET pin
#define TFT_DC 2
Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);
float p = 3.1415926;
void setup(void) {
Serial.begin(9600);
Serial.print(F("Hello! ST77xx TFT Test"));
tft.initR(INITR_BLACKTAB);
tft.setRotation(0);
tft.fillScreen(ST7735_BLACK);
//Case 2: Multi Colored Images/Icons
int h = 160,w = 128, row, col, buffidx=0;
for (row=0; row<h; row++) { // For each scanline...
for (col=0; col<w; col++) { // For each pixel...
//To read from Flash Memory, pgm_read_XXX is required.
//Since image is stored as uint16_t, pgm_read_word is used as it uses 16bit address
tft.drawPixel(col, row, pgm_read_word(evive_in_hand + buffidx));
buffidx++;
} // end pixel
}
delay(5000);
// Use this initializer if using a 1.8" TFT screen:
tft.initR(INITR_BLACKTAB); // Init ST7735S chip, black tab
tft.setRotation(1); // set display orientation
}
void loop() {
tft.fillScreen(ST77XX_BLACK);
print_text(20,5,"1.54",5,ST77XX_GREEN);
print_text(70,50,"BAR",2,ST77XX_GREEN);
print_text(5,90,"Temp motora: 81'C",1,ST77XX_WHITE);
print_text(5,100,"Temp usisa: 30'C",1,ST77XX_BLUE);
print_text(146,116,"AM",1,ST77XX_WHITE);
delay(5000);
}
void print_text(byte x_pos, byte y_pos, char *text, byte text_size, uint16_t color) {
tft.setCursor(x_pos, y_pos);
tft.setTextSize(text_size);
tft.setTextColor(color);
tft.setTextWrap(true);
tft.print(text);
}
I found out how: you can only have one setup() and one loop(), so you can't just copy/paste and stitch two complete sketches together. The statements I had pasted between setup() and loop() ended up outside any function, which is why the compiler reported that 'Serial' does not name a type.
I'm trying to catch all system calls made by a given PID with a self-made program (I can't use strace, dtruss, gdb, or anything similar). So I used the function
kern_return_t task_set_emulation(task_t target_port, vm_address_t routine_entry_pt, int routine_number) declared in /usr/include/mach/task.h.
I've written a little program to catch the write syscall:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <mach/mach.h>
#include <mach/mach_vm.h>
void do_exit(char *msg)
{
printf("Error::%s\n", msg);
exit(42);
}
int main(void)
{
mach_port_t the_task;
mach_vm_address_t address;
mach_vm_size_t size;
mach_port_t the_thread;
kern_return_t kerr;
//Initialisation
address = 0;
size = 1ul * 1024;
the_task = mach_task_self(); //Get the current program task
kerr = mach_vm_allocate(the_task, &address, size, VM_MEMORY_MALLOC); //Allocate a new address for the test
if (kerr != KERN_SUCCESS)
{ do_exit("vm_allocate"); }
printf("address::%llx, size::%llu\n", address, size); //debug
//Process
kerr = task_set_emulation(the_task, address, SYS_write); //About to catch write syscalls
the_thread = mach_thread_self(); //Verify if a thread is opened (even if it's obvious)
printf("kerr::%d, thread::%d\n", kerr, the_thread); //debug
if (kerr != KERN_SUCCESS)
{ do_exit("set_emulation"); }
//Use some writes for the example
write(1, "Bonjour\n", 8);
write(1, "Bonjour\n", 8);
}
The output is:
address::0x106abe000, size::1024
kerr::46, thread::1295
Error::set_emulation
The kernel error 46 corresponds to the macro KERN_NOT_SUPPORTED, described as "Empty thread activation (No thread linked to it)" in /usr/include/mach/kern_return.h, and it happens even before I call write.
My question is: what did I do wrong here? Or does KERN_NOT_SUPPORTED simply mean that the routine is not implemented, rather than pointing to an actual thread problem?
The source code in XNU for task_set_emulation is:
kern_return_t
task_set_emulation(
__unused task_t task,
__unused vm_offset_t routine_entry_pt,
__unused int routine_number)
{
return KERN_NOT_SUPPORTED;
}
Which means task_set_emulation is not supported.
Building with VS2013, passing time_point::max() to a condition variable's wait_until results in an immediate timeout.
This seems unintuitive: I would naively expect time_point::max() to wait indefinitely (or at least for a very long time). Can anyone confirm whether this is documented, expected behaviour, or something specific to MSVC?
Sample program below; note that replacing time_point::max() with now + std::chrono::hours(1) gives the expected behaviour (wait_until returns once the cv is notified, with no timeout).
#include <condition_variable>
#include <mutex>
#include <chrono>
#include <future>
#include <functional>
void fire_cv( std::mutex *mx, std::condition_variable *cv )
{
std::unique_lock<std::mutex> lock(*mx);
printf("firing cv\n");
cv->notify_one();
}
int main(int argc, char *argv[])
{
std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();
std::condition_variable test_cv;
std::mutex test_mutex;
std::future<void> s;
{
std::unique_lock<std::mutex> lock(test_mutex);
s = std::async(std::launch::async, std::bind(fire_cv, &test_mutex, &test_cv));
printf("blocking on cv\n");
std::cv_status result = test_cv.wait_until( lock, std::chrono::steady_clock::time_point::max() );
//std::cv_status result = test_cv.wait_until( lock, now + std::chrono::hours(1) ); // <--- this works as expected!
printf("%s\n", (result==std::cv_status::timeout) ? "timeout" : "no timeout");
}
s.wait();
return 0;
}
I debugged MSVC 2015's implementation, and wait_until calls wait_for internally, which is implemented like this:
template<class _Rep,
class _Period>
_Cv_status wait_for(
unique_lock<mutex>& _Lck,
const chrono::duration<_Rep, _Period>& _Rel_time)
{ // wait for duration
_STDEXT threads::xtime _Tgt = _To_xtime(_Rel_time); // Bug!
return (wait_until(_Lck, &_Tgt));
}
The bug here is that _To_xtime overflows, which results in undefined behavior, and the result is a negative time_point:
template<class _Rep,
class _Period> inline
xtime _To_xtime(const chrono::duration<_Rep, _Period>& _Rel_time)
{ // convert duration to xtime
xtime _Xt;
if (_Rel_time <= chrono::duration<_Rep, _Period>::zero())
{ // negative or zero relative time, return zero
_Xt.sec = 0;
_Xt.nsec = 0;
}
else
{ // positive relative time, convert
chrono::nanoseconds _T0 =
chrono::system_clock::now().time_since_epoch();
_T0 += chrono::duration_cast<chrono::nanoseconds>(_Rel_time); //Overflow!
_Xt.sec = chrono::duration_cast<chrono::seconds>(_T0).count();
_T0 -= chrono::seconds(_Xt.sec);
_Xt.nsec = (long)_T0.count();
}
return (_Xt);
}
std::chrono::nanoseconds stores its value in a long long by default, so right after its definition _T0 holds a value like 1'471'618'263'082'939'000 (this obviously changes from run to run). Adding _Rel_time (9'223'244'955'544'505'510) then definitely results in signed overflow.
The overflowed sum is negative, i.e. a time_point far in the past that has long since been passed, so wait_until reports a timeout immediately.
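Until the library is fixed, a practical workaround is to never hand wait_until a deadline anywhere near time_point::max() and to use a large but non-overflowing deadline instead. A minimal sketch (the one-year bound and the helper name are my own choices, not part of the MSVC sources above):
#include <chrono>
#include <condition_variable>
#include <mutex>

// Wait "effectively forever" without triggering the overflow described above:
// a deadline one year from now still fits comfortably when the implementation
// converts it back to a relative nanosecond count and adds the current time.
template <class Predicate>
bool wait_nearly_forever(std::condition_variable& cv,
                         std::unique_lock<std::mutex>& lock,
                         Predicate pred)
{
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::hours(24 * 365);
    return cv.wait_until(lock, deadline, pred);
}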
I'm trying to read the memory of a process using task_for_pid / vm_read.
uint32_t sz;
pointer_t buf;
task_t task;
pid_t pid = 9484;
kern_return_t error = task_for_pid(current_task(), pid, &task);
vm_read(task, 0x10e448000, 2048, &buf, &sz);
In this case I read the first 2048 bytes.
This works when I know the base address of the process (which I can find out using gdb "info shared" - in this case 0x10e448000), but how do I find out the base address at runtime (without looking at it with gdb)?
Answering my own question: I was able to get the base address using mach_vm_region_recurse as shown below. The offset lands in vmoffset. If there is another, more "right" way, don't hesitate to comment!
#include <stdio.h>
#include <mach/mach_init.h>
#include <sys/sysctl.h>
#include <mach/mach_vm.h>
...
mach_port_name_t task;
vm_map_offset_t vmoffset;
vm_map_size_t vmsize;
uint32_t nesting_depth = 0;
struct vm_region_submap_info_64 vbr;
mach_msg_type_number_t vbrcount = 16;
kern_return_t kr;
if ((kr = mach_vm_region_recurse(task, &vmoffset, &vmsize,
&nesting_depth,
(vm_region_recurse_info_t)&vbr,
&vbrcount)) != KERN_SUCCESS)
{
printf("FAIL");
}
Since you're calling current_task(), I assume you're aiming at your own process at runtime. So the base address you mentioned should be the dynamic base address, i.e. static base address + image slide caused by ASLR, right? Based on this assumption, you can use "Section and Segment Accessors" to get the static base address of your process, and then use the dyld functions to get the image slide. Here's a snippet:
#import <Foundation/Foundation.h>
#include </usr/include/mach-o/getsect.h>
#include <stdio.h>
#include </usr/include/mach-o/dyld.h>
#include <string.h>
uint64_t StaticBaseAddress(void)
{
const struct segment_command_64* command = getsegbyname("__TEXT");
uint64_t addr = command->vmaddr;
return addr;
}
intptr_t ImageSlide(void)
{
char path[1024];
uint32_t size = sizeof(path);
if (_NSGetExecutablePath(path, &size) != 0) return -1;
for (uint32_t i = 0; i < _dyld_image_count(); i++)
{
if (strcmp(_dyld_get_image_name(i), path) == 0)
return _dyld_get_image_vmaddr_slide(i);
}
return 0;
}
uint64_t DynamicBaseAddress(void)
{
return StaticBaseAddress() + ImageSlide();
}
int main (int argc, const char *argv[])
{
printf("dynamic base address (%0llx) = static base address (%0llx) + image slide (%0lx)\n", DynamicBaseAddress(), StaticBaseAddress(), ImageSlide());
while (1) {}; // you can attach to this process via gdb/lldb to view the base address now :)
return 0;
}
Hope it helps!
Recently I started to play with boost.log and bumped into an issue: if an unhandled exception is thrown, no log messages are written to the log file. I am using rolling text files, and the auto-flush option is turned on.
Here is the modified source from the samples:
#include <stdexcept>
#include <string>
#include <iostream>
#include <fstream>
#include <functional>
#include <boost/ref.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/date_time/gregorian/gregorian.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/barrier.hpp>
#include <boost/log/common.hpp>
#include <boost/log/filters.hpp>
#include <boost/log/formatters.hpp>
#include <boost/log/attributes.hpp>
#include <boost/log/sinks.hpp>
#include <boost/log/utility/empty_deleter.hpp>
#include <boost/log/utility/record_ordering.hpp>
namespace logging = boost::log;
namespace attrs = boost::log::attributes;
namespace src = boost::log::sources;
namespace sinks = boost::log::sinks;
namespace fmt = boost::log::formatters;
namespace keywords = boost::log::keywords;
using boost::shared_ptr;
using namespace boost::gregorian;
enum
{
LOG_RECORDS_TO_WRITE = 100,
LOG_RECORDS_TO_WRITE_BEFORE_EXCEPTION = 10,
THREAD_COUNT = 10
};
BOOST_LOG_DECLARE_GLOBAL_LOGGER(test_lg, src::logger_mt)
//! This function is executed in multiple threads
void thread_fun(boost::barrier& bar)
{
// Wait until all threads are created
bar.wait();
// Here we go. First, identify the thread.
BOOST_LOG_SCOPED_THREAD_TAG("ThreadID", boost::thread::id, boost::this_thread::get_id());
// Now, do some logging
for (unsigned int i = 0; i < LOG_RECORDS_TO_WRITE; ++i)
{
BOOST_LOG(get_test_lg()) << "Log record " << i;
if(i > LOG_RECORDS_TO_WRITE_BEFORE_EXCEPTION)
{
BOOST_THROW_EXCEPTION(std::exception("unhandled exception"));
}
}
}
int main(int argc, char* argv[])
{
try
{
typedef sinks::synchronous_sink< sinks::text_file_backend > file_sink;
shared_ptr< file_sink > sink(new file_sink(
keywords::file_name = L"%Y%m%d_%H%M%S_%5N.log", // file name pattern
keywords::rotation_size = 10 * 1024 * 1024, // rotation size, in characters
keywords::auto_flush = true // make each log record flushed to the file
));
// Set up where the rotated files will be stored
sink->locked_backend()->set_file_collector(sinks::file::make_collector(
keywords::target = "log" // where to store rotated files
));
// Upon restart, scan the target directory for files matching the file_name pattern
sink->locked_backend()->scan_for_files();
sink->locked_backend()->set_formatter(
fmt::format("%1%: [%2%] [%3%] - %4%")
% fmt::attr< unsigned int >("Line #")
% fmt::date_time< boost::posix_time::ptime >("TimeStamp")
% fmt::attr< boost::thread::id >("ThreadID")
% fmt::message()
);
// Add it to the core
logging::core::get()->add_sink(sink);
// Add some attributes too
shared_ptr< logging::attribute > attr(new attrs::local_clock);
logging::core::get()->add_global_attribute("TimeStamp", attr);
attr.reset(new attrs::counter< unsigned int >);
logging::core::get()->add_global_attribute("Line #", attr);
// Create logging threads
boost::barrier bar(THREAD_COUNT);
boost::thread_group threads;
for (unsigned int i = 0; i < THREAD_COUNT; ++i)
threads.create_thread(boost::bind(&thread_fun, boost::ref(bar)));
// Wait until all action ends
threads.join_all();
return 0;
}
catch (std::exception& e)
{
std::cout << "FAILURE: " << e.what() << std::endl;
return 1;
}
}
The source is compiled under Visual Studio 2008; boost.log is compiled against Boost 1.40.
Any help is highly appreciated.
Check to see if the log file is in the current working directory of the process, rather than the specified file collector target directory ("log" in your sample code). Additionally, you will probably want to specify a directory for the sink "file_name" pattern.
As "JQ" notes, don't expect to see any logging post-exception.