Is there a way to increase the stack size in TwinCAT 3?

I am getting stack overflow problems, and I can clearly see that they started with the introduction of some new arrays. I cannot find the option to increase the stack size on the soft PLC (TwinCAT) running on my machine.
Any help is appreciated

I'm currently using 4024.7, where you can change the stack size under SYSTEM > Real-time. Under the Settings tab you'll find Maximal Stack Size [kB].

Here is the official answer that I got from Beckhoff:
You can't change the TC3 stack size; it uses a fixed size of 60 KB. Only functions use the stack memory; FBs and programs do not.
The stack size is quite limiting: you can't do large memory operations inside a function, and you can only nest a limited number of function calls in one operation.
Still, Beckhoff may increase the stack size in future versions of TwinCAT 3.

I realise this is a little late, but instead of trying to increase the stack size, you can take steps to reduce the stack size you need. When calling a method or function, try passing in a reference to an existing array and using that for the calculation. Even if it is for some intermediate processing that isn't returned directly as your result, this will dramatically improve your stack usage. There are two ways to manage this in TwinCAT.
The easy way is to create a VAR_IN_OUT variable to pass in. This works well, but you should not use it if your block accesses the variable from other methods. The other way is to pass in a REFERENCE TO your ARRAY and use that.
This approach will work for both returned and intermediate processing type issues.
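For illustration, a minimal Structured Text sketch of both approaches (function names and array sizes are made up; a plain VAR_INPUT array would be copied onto the 60 KB stack, these two are not):

```iecst
(* Option 1: VAR_IN_OUT - the caller's array is used in place, no stack copy *)
FUNCTION F_SumInPlace : DINT
VAR_IN_OUT
    aBuffer : ARRAY[0..9999] OF BYTE;
END_VAR

(* Option 2: an explicit REFERENCE TO input *)
FUNCTION F_SumByRef : DINT
VAR_INPUT
    refBuffer : REFERENCE TO ARRAY[0..9999] OF BYTE;
END_VAR
```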

Via regedit:
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Beckhoff\TwinCAT3\System
Add a custom DWORD
with the key: DefaultStackSize
value: Hex(80) or Decimal(128)
This will set your stack size to 128 instead of 64. :)
Change it to whatever you want; I'm not sure what happens when it's too high.
But this works; we use it on all our PLCs, as we always have issues with the number of strings parsed by JSON converters etc.
This works with 4022.xx versions of TwinCAT. Tried and tested for a couple of years already.
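The same setting can be captured in a .reg file, using exactly the key and value from the steps above (apply at your own risk, as noted):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Beckhoff\TwinCAT3\System]
"DefaultStackSize"=dword:00000080
```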

Related

In my App memory graph, there are instances of dispatch_group leaked, but I don't use that technology explicitly. Any suggestions?

In my macOS App with Mixed Objective-C/Swift, in the Xcode memory graph, there are instances of dispatch_group leaked:
I am somewhat familiar with GCD and I use it in my project, but I don't use dispatch groups explicitly in my code. I thought it might be some indirect usage when I call other GCD APIs like dispatch_async. I was wondering if somebody could help me track down this issue. Thanks for your attention.
In order to diagnose this, you want to know (a) what is keeping a strong reference to them; and (b) where these objects were instantiated. Unfortunately, unlike many objects, dispatch groups might not tell you much about the former (though your memory addresses suggest that there might be some object keeping a reference to them), but we can use the “Malloc Stack” feature to answer the latter.
So, edit your scheme (“Product” » “Scheme” » “Edit Scheme” or press command-<) and temporarily turn on this feature:
You can then click on the object in question and the panel on the right might show you illuminating stack information about where the object was allocated:
Now, in this case, in viewDidLoad I manually instantiated six dispatch groups, performed an enter, but not a leave, which is why these objects are still in memory. This is a common source of dispatch groups lingering in memory.
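That enter-without-leave pattern boils down to something like this minimal Swift sketch (hypothetical code, not the asker's; `someAsyncWork` and `handle` are made-up names):

```swift
let group = DispatchGroup()
group.enter()                  // the group now waits for a matching leave()
someAsyncWork { result in
    defer { group.leave() }    // without this, the group never balances
    handle(result)             // and lingers in the memory graph forever
}
```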
As you look at the stack trace, focus first on entries in your codebase (in white, rather than gray, in the stack trace). And if you click on your code in the stack trace, it will even jump you to the relevant line of code. But even if it is not in your code, often the stack trace will give you insights where these dispatch groups were created, and you can start your research there.
And remember, when you are done with your diagnostics, make sure to turn off the “Malloc Stack” feature.

Calling .clone() too many times in vb, will it cause any trouble?

I am an embedded engineer and I have never worked with either Windows or Visual Basic.
For my current task I have to maintain and improve a test system running on Windows, written in C# in Visual Studio (which I also have no experience with).
This project uses some libraries written in Visual Basic (all legacy code), and I found a problem in there. I cannot copy the code here because of legal restrictions, but it is something like this:
' getter()
Dim temp As Byte() = global_data
Array.Reverse(temp)
...
This is a getter function. Since there is a reverse inside, the return value of this function is different after each call: when temp is changed, global_data is changed too, and I can get the real value only after an odd number of calls. The previous maintainer told me to call the function only once or three times... I think this is absurd, so I changed it by adding a .Clone() like this:
Dim temp As Byte() = CType(global_data.Clone(), Byte())
Array.Reverse(temp)
And it worked :)
There are a lot of functions like this, so I'm going to make similar adjustments to them too.
But since I am not familiar with the dynamics of this system, I am afraid of running into a problem later. For example, can making a large number of clones consume my RAM? Can those clones be destroyed? If yes, do I have to destroy them? How?
Or are there any other possible problems?
And is there another way to do this?
Thanks in advance!
To answer your question, no there is nothing wrong with calling Clone multiple times.
The cloned byte arrays will take up memory as long as they are referenced, but that isn't unique to cloned arrays. Presumably each cloned byte array is being passed to other methods; once those methods have executed, the array becomes eligible for garbage collection, and the system takes care of it. If this code runs very frequently, there may be approaches that are more efficient than repeatedly allocating and garbage-collecting those arrays, but you won't "break" anything by using Clone over and over.
Note that a Byte array is a reference type, so Clone allocates the copy on the managed heap, not the stack. The clones hold no unmanaged resources, so they are released automatically by the garbage collector once nothing references them. You do not have to destroy them yourself, and calling Clone many times will not cause trouble beyond the usual allocation and collection cost.
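A quick VB sketch of why the Clone fixes the getter (the array contents are illustrative):

```vb
Dim global_data As Byte() = {1, 2, 3}

' Clone returns a shallow copy; for a Byte array that is a full copy,
' so reversing the copy leaves the original untouched.
Dim temp As Byte() = CType(global_data.Clone(), Byte())
Array.Reverse(temp)
' temp is now {3, 2, 1} while global_data is still {1, 2, 3}
```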

alternatives to functions PathCombine() and PathCchCombine()

I am coding for Windows 7 and Windows 10 and want to have only a single main binary for my application.
In this application I want to join a path and a file name so that all relative path specs are eliminated.
I found out that while the PathCombine() function is portable, by design it can create buffer overrun issues under certain input conditions.
PathCchCombine() improves security with an extra parameter giving the result buffer's size, but it's not available on the first-mentioned platform (the function only exists on Win8 or higher, provided perhaps by api-ms-win-core-path-l1-1-0.dll or just by the Windows KernelBase.dll).
How can I solve this so that I keep a single binary, don't need to ship extra DLLs, and still stay safe against buffer overruns?
Is there some alternate function for Windows 7 that will just serve me?
Indicated solution:
I need to use PathCombine() because it's the only option that works on Win7.
I have to supply a result buffer of MAX_PATH characters (smaller is risky, bigger is useless).
I have to accept that even if Win10 may support path lengths of 32 kB or more, there is no simple solution (a single API call) that works with any platform-determined limit and/or content-determined result length. There are function variants (e.g. PathAllocCanonicalize) that allocate dynamically, so the caller does not need any advance knowledge of the result size, but all of these functions seem to be available only on Win8 or higher.
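One common single-binary pattern (a sketch only, untested here; the DLL and function names are the ones the question already mentions) is to resolve PathCchCombine at runtime and fall back to PathCombineW on Win7:

```c
#include <windows.h>
#include <shlwapi.h>   /* PathCombineW; link with shlwapi.lib */

typedef HRESULT (WINAPI *PathCchCombine_t)(PWSTR, size_t, PCWSTR, PCWSTR);

/* Combine dir + file into out (cchOut wide chars). Uses PathCchCombine
   where available (Win8+), otherwise PathCombineW (Win7). */
BOOL SafeCombine(PWSTR out, size_t cchOut, PCWSTR dir, PCWSTR file)
{
    HMODULE h = GetModuleHandleW(L"kernelbase.dll");
    PathCchCombine_t pcc =
        h ? (PathCchCombine_t)GetProcAddress(h, "PathCchCombine") : NULL;
    if (pcc)
        return SUCCEEDED(pcc(out, cchOut, dir, file));

    /* Win7 fallback: PathCombineW assumes the buffer holds MAX_PATH chars */
    if (cchOut < MAX_PATH)
        return FALSE;
    return PathCombineW(out, dir, file) != NULL;
}
```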

How is a Linux kernel task's stack pointer determined for each thread?

I'm working on a tool that sometimes hijacks application execution, including working in a different stack.
I'm trying to get the kernel to always see the application stack when performing certain system calls, so that it will print the [stack] qualifier in the right place in /proc/pid/maps.
However, simply modifying the esp around the system call seems not to be enough. When I use my tool on "cat /proc/self/stat", I see that kstkesp (entry 29 here) sometimes has the value I want, but sometimes has a different value corresponding to my alternate stack.
I'm trying to understand:
How is the value reflected in /proc/self/stat:29 determined?
Can I modify it so that it will reliably have an appropriate value?
If 2 is difficult to answer, where would you recommend that I look to understand why the value is intermittently incorrect?
It looks to me like it's defined at line 409 of http://lxr.free-electrons.com/source/fs/proc/array.c?v=3.16
There has been a lot of discussion about the related macro KSTK_ESP over the last few years, for example: https://github.com/davet321/rpi-linux/commit/32effd19f64908551f8eff87e7975435edd16624
and
http://lists.openwall.net/linux-kernel/2015/01/04/140
From what I gather regarding the intermittent oddness, it seems that an NMI or other interrupt sometimes hits inside the kernel, in which case the stack isn't walked properly.

Vectored Referencing buffer implementation

I was reading code from one of the projects on GitHub and came across something called a vectored referencing buffer implementation. Has anyone come across this? What are its practical applications? I did a quick Google search and wasn't able to find any simple sample implementation.
Some insight would be helpful.
http://www.ibm.com/developerworks/library/j-zerocopy/
http://www.linuxjournal.com/article/6345
http://www.seccuris.com/documents/whitepapers/20070517-devsummit-zerocopybpf.pdf
https://github.com/joyent/node/pull/304
I think some more insight on your specific project/usage/etc would allow for a more specific answer.
However, the term generally refers to changing or designing an interface/function/routine so that it does not allocate another copy of its input in order to perform its operations.
EDIT: OK, after reading the new title, I think you are simply talking about pushing buffers into a vector of buffers. This keeps your code clean, lets you pass any buffer you need to any function call with minimal overhead, and allows for faster cleanup if your code isn't managed.
EDIT 2: Do you mean this http://cpansearch.perl.org/src/TYPESTER/Data-MessagePack-Stream-0.07/msgpack-0.5.7/src/msgpack/vrefbuffer.h
