Using digitalRead() in setup() as a debug flag

Is it possible to use the digitalRead() function in the Arduino setup() function to check whether a circuit is open or closed, so it can act as a physical DEBUG flag?

Yes, you can. Call pinMode() first - you may need to enable the pull-up. Then wait a tiny bit for the level to settle, since the board has probably just been powered on, and read the pin with digitalRead().
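A minimal sketch of that idea, assuming the internal pull-up is used and a debug jumper or switch connects the pin to GND (the pin number and the serial output are arbitrary choices):

const int DEBUG_PIN = 7;                 // hypothetical pin; any free digital pin works
bool debugMode = false;

void setup() {
  pinMode(DEBUG_PIN, INPUT_PULLUP);      // internal pull-up: open circuit reads HIGH
  delay(10);                             // short settle time right after power-up
  debugMode = (digitalRead(DEBUG_PIN) == LOW);  // jumper to GND closed -> debug enabled

  Serial.begin(9600);
  if (debugMode) {
    Serial.println("DEBUG mode enabled");
  }
}

void loop() {
  // normal program; check debugMode wherever extra diagnostics are wanted
}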

Related

Beckhoff PLC: using ENUMs in CASE OF

When I use an enum in a switch statement in C#, I am used to adding a debug break statement to the default case, so that the code breaks during debugging whenever it hits an enum value that is not covered by the switch.
Now I am programming a Beckhoff PLC and want to do the same in a CASE .. OF ELSE ... END_CASE in STL. Is this possible and/or normal in PLC programming?
I don't think you can. It also wouldn't be desirable to stop a PLC program and prevent it from executing machine-relevant code.
Instead, you could use the ADSLOGSTR function to log to the event logger, or show a message box. This works in both TC2 and TC3.
You can set breakpoints when you are in online mode, but as pboedker pointed out, as soon as the breakpoint is reached (unless you have a special configuration, but that is another subject) your EtherCAT master will time out, your safety module will report a communication error, and your drives will need a reset as well.
If you don't have real hardware and an EtherCAT master attached to your project, you can use breakpoints without any worries.
I personally take another approach.
I always build a separate Debug-Visualization in the PLC, together with a special Debug function block, which helps me track down bugs in the project.
In your case, for example, I would simply call a dedicated method of the Debug function block with an error code and a string whenever the program flow reaches the default case.
The error code and the string would then be visualized in the Debug-Visualization.
Even if it's a little more effort than simply calling ADSLOGSTR, I would rather implement a separate Debug function block, for three reasons:
You need more logic than a bare ADSLOGSTR call anyway, because if ADSLOGSTR happens to be called cyclically, you end up spamming the event logger.
Reuse in other projects
You can expand the Debug-Visualization to a Test-Suite if needed, which can come in handy
You can find more info about the Beckhoff visualization here:
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/3523377803.html&id=
Breakpoints are possible, as Filippo said. You can prevent outputs from being reset during a breakpoint by setting KeepOutputsOnBP (see https://stackoverflow.com/a/52158801/8140625).
You can also raise an error/warning/note message in Visual Studio when that happens by using ADSLOGSTR (see https://stackoverflow.com/a/51700613/8140625). So add an ADSLOGSTR call to your CASE ELSE branch with an appropriate message and you will see it in the error list / TwinCAT console.
Edit: somehow I missed pboedker's answer; he already covered ADSLOGSTR.
I like Filippo's solution. It could be easy to change the behavior of the debug function in the future without touching the code too much.
I was thinking too much in terms of C# solutions :)
Thanks!

Retry and max attempts with a state machine

I'm trying to build a state machine in which I want a retry and max-attempts feature. Let me explain; so far I have this:
From SAVED, I want to go to VALIDATED, but if there is an error, it has to go to the AWAITING_VALIDATION state and, after 3 minutes, try again to reach VALIDATED.
Have I set up the retry mechanism correctly?
After 3 attempts, I want to go back to the SAVED state (and pause the state machine). Is it possible to do that in a fancy way (e.g. using Spring State Machine), or do I have to do it manually using some kind of cache?
Thanks for your help
There are probably many ways to do these things with different machine configurations, but having said that, this is such a clearly presented question that I wanted to spend some time on it.
You are close; you just missed a few things (I'd say tricks) to make this happen. The answer is to use extended state variables to add memory to the machine. These variables are usually used to limit the number of states needed to represent what the machine has to do. You need 3 loops, and you could probably create more states to represent each loop and transition (with specific guards) into those as needed. However, this would simply explode the state configuration if you ever need more loops, like 10 or 20 or 100+.
I created an example in ssm-sample3 which shows how extended state variables and different guards and actions can be used to drive this specific flow.
Unfortunately there is a bug in the current 1.1.1.RELEASE which prevents you from transitioning directly from AWAITING_VALIDATION into the HAS_ERROR junction and looping until you pause into VALID using an anonymous transition with a guard (that's why the sample has a dummy TMP state, which is not needed with 1.2.x).
This is probably something I'd like to add as an example or faq to our ref docs.
Let me know if this helps.

Problems getting Altera's Triple Speed Ethernet IP core to work

I am using a Cyclone V on a SoCKit board (link here) provided by Terasic, with an HSMC-NET daughter card (link here) connected to it, in order to create a system that can communicate over Ethernet while both transmitted and received traffic goes through the FPGA. The problem is that I am having a really, really hard time getting this system to work using Altera's Triple Speed Ethernet core.
I am using Qsys to construct the system that contains the Triple Speed Ethernet core, instantiating it inside a VHDL wrapper that also contains an instantiation of a packet generator module. The packet generator is connected directly to the transmit Avalon-ST sink port of the TSE core and is controlled through an Avalon-MM slave interface connected to a JTAG-to-Avalon Master bridge core, whose master port is exported to the VHDL wrapper as well.
Then, using System Console, I configure the Triple Speed Ethernet core as described in section 5-26 (Register Initialization) of the core's user guide (link here) and instruct the packet generator module (also through System Console) to start generating Ethernet packets into the TSE core's transmit Avalon-ST sink interface ports.
Although everything is configured exactly as described in the core's user guide (linked above), I cannot get it to output anything on the MII/GMII output interfaces, nor get any of the statistics counters to increase or even change. Clearly, I am doing something wrong or missing something, but I just can't find out what it is.
Can anyone please, please help me with this?
Thanks ahead,
Itamar
Starting with the basic checks:
Have you simulated it? It's not clear to me whether you are just simulating or synthesizing.
If you haven't simulated, you really should. If it doesn't work in simulation, why would it ever work in real life?
Make sure you are using the QIP file to synthesize the design. It will automatically include your auto generated SDC constraints. You will still need to add your own PIN constraints, more on that later.
The TSE is fairly old and reliable, so the obvious first things to check are Clock, Reset, Power and Pins.
a.) Power is usually less of a problem on devkits if you have already run the demo that came with the kit.
b.) Pins can cause a whole slew of issues if they are not mapped right on this core. I'll assume you are leveraging something from Terasic. It should define a pin for reset, the input clock, and the signal standards. A lot of times this goes in the .qsf file, and you also reference the QIP file (mentioned above) there too.
c.) Clock and reset are the more likely culprits in my mind. No activity on the interface is a clue. One way to check is to route your clocks to spare pins and o-scope them to ensure they are what you think they are. Similarly, you may want to bring your reset out to a pin and check it. MAKE SURE YOU KNOW THE POLARITY and that you haven't been using ~reset in some places and non-inverted reset in others.
Reconfig block: some Altera chips and certain versions of Quartus require you to use a reconfig block to configure the XCVR. This doesn't seem like your issue to me, because you say the GMII is flat-lined.

Is there a way to disable a specific core's interrupts on x86 (in Linux userspace)?

I have a piece of code that I don't want to be interrupted while it runs. So I want to know whether there is a way to disable interrupts, or to avoid handling any interrupt.
Also, is there a way to let a specific core run only one process?
To disable interrupts you could implement a system call that calls
irq_disable()
on entry and
irq_enable()
when you are exiting. However, disabling interrupts should in most cases only be done for very quick operations. You may also need to be root to execute that syscall (to be checked).
For your second question, if I understood it correctly, you can set process affinities via
int sched_setaffinity(pid_t pid, size_t cpusetsize,cpu_set_t *mask);
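A rough sketch of pinning the calling process to one core (CPU 2 here is just an example; with glibc, the CPU_ZERO/CPU_SET macros require _GNU_SOURCE, which g++ defines by default):

#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t mask;
    CPU_ZERO(&mask);        // start from an empty CPU set
    CPU_SET(2, &mask);      // allow only CPU 2 (pick the core you reserved)

    // pid 0 means "the calling process"
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    // from here on, the scheduler will only run this process on CPU 2
    return 0;
}

Note that affinity only keeps this process on that core; keeping other processes off the core additionally needs something like the isolcpus kernel parameter or cpusets.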
Since you mention specifically that this is a user-space application, you may want to look into using one of the many synchronization primitives provided by Linux. The one you choose will depend on what you are trying to do. This lets you define a critical section in your code without the potential for race conditions or deadlocks.
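For example, a short critical section guarded by a plain pthread mutex (a sketch; the shared counter and function name are made up for illustration):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;            // hypothetical shared state

void update_shared_state() {
    pthread_mutex_lock(&lock);      // only one thread at a time past this point
    ++shared_counter;               // keep the critical section short
    pthread_mutex_unlock(&lock);
}

Compile and link with -pthread. Other threads calling update_shared_state() at the same time simply block on the lock; no interrupts need to be disabled.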

How to implement a semaphore without DI/EI, TS and CS instructions

I am reading the operating systems book by Milan Milenkovic (http://books.google.co.in/books?id=wbvHuTfiQkoC&printsec=frontcover#v=onepage&q&f=false). From this I understood how a semaphore can be implemented using the following assembly instructions:
1) Enable/disable interrupts
2) Test & set instruction
3) Compare & swap instruction
I want to know if there is some other way of implementing a semaphore, other than using the three assembly instructions above. Any help will be greatly appreciated. Thanks.
You need to make your "check if the semaphore is set; if it isn't, set it, and tell me the previous state" operation atomic. If you wanted to implement a semaphore on a processor that has none of your three instructions, you could probably build some hardware around it. So yes, there are other ways, depending on how far you want to go: if it's not in your processor, build it somewhere else.
But for a practical answer: there are only two ways to do it. Either use something that makes a chain of multiple operations atomic (which is what enabling/disabling interrupts does, except that NMIs can't be disabled, and disabling interrupts on one core won't help you in a multicore environment), or use a processor feature that performs the "check if the semaphore is set; if it isn't, set it, and tell me the previous state" operation atomically. Looking at it this way, your methods 2) and 3) aren't really different.
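To make the second option concrete, here is a sketch of a binary semaphore (spinlock) built on exactly that atomic "read the old value and set it" step, using C++'s std::atomic_flag; the compiler lowers test_and_set() to whatever test-and-set or compare-and-swap style instruction the processor offers:

#include <atomic>

std::atomic_flag flag = ATOMIC_FLAG_INIT;   // clear == semaphore is free

void acquire() {
    // test_and_set() atomically sets the flag and returns its previous value;
    // we own the semaphore only if that previous value was clear (false).
    while (flag.test_and_set(std::memory_order_acquire)) {
        // busy-wait; a real implementation would yield or sleep here
    }
}

void release() {
    flag.clear(std::memory_order_release);  // hand the semaphore back
}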

Resources