I'm putting into production some RPGLE code which uses %alloc and dealloc to allocate memory. Programmers should be able to ensure there are no resulting memory leaks but I'm worried about what happens if they don't.
My question is: if programmers mess up and there are memory leaks then when will this memory be reclaimed? Is it when the program leaves memory or when the job finishes?
From the ILE RPG Programmer's Reference Guide:
Storage is implicitly freed when the activation group ends. Setting LR on will not free any heap storage allocated by the module, but any pointers to heap storage will be lost.
If your RPG program is in its own activation group, then the memory will be freed when the program ends. Of course, when your job ends, so does your activation group. So ending the job will always clean up any memory allocated.
It sounds like you are approaching RPG from a C/C++ background. I've been programming in RPG for about 8 years now and have only had to use the %alloc() BIF a handful of times.
That being said, if you are using a new activation group, you should be fine. If you are using a named activation group and you do not issue the RCLACTGRP command, or you are using the default activation group, you could run into issues.
Indeed, you have to study the mechanism of activation groups. Memory leaks may happen, but they will not do any damage to the machine (I love the AS/400). You can, however, harm the other programs within your iSeries job (remark: if you are not from an AS/400 background, you should read about the AS/400 job mechanism).
If you start managing the activation groups within your job yourself (in the program, that is, of course), you can create separate, sort-of memory areas. It requires some overhead (you have to name the groups), but then you have a safe environment where you can do powerful stuff.
I am not familiar with those built-in functions, but normally everything is cleaned up when the job ends (or the user logs off, if interactive). If you can't find an answer, I can point you to another community where your answer may be known.
Just happened to see this blog now, way late, but who knows, others out there might still find this useful.
%alloc and DEALLOC use the job's default heap, so that storage will be cleaned up when the job ends.
There is another type of heap, which you can use programmatically via the CEE APIs: user-defined heaps. This is the kind I think you need to manage or clean up programmatically, because if you don't, I think it might cause a memory leak.
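For completeness, here is a rough C-style sketch of the user-defined-heap idea using IBM's CEE heap APIs (CEECRHP, CEEGTST, CEEFRST, CEEDSHP). The prototypes below are written from memory as an assumption, not copied from the system headers, so check the IBM i documentation before relying on the parameter details; the point is simply that CEEDSHP discards the whole heap, which is where the explicit cleanup happens:

    /* Sketched prototypes for the CEE heap APIs -- assumptions, not the
       real system declarations. All arguments are passed by reference;
       the feedback parameter is a 12-byte condition token.              */
    typedef struct { char token[12]; } Feedback;
    void CEECRHP(int *heap_id, int *init_size, int *incr_size,
                 int *alloc_strategy, Feedback *fc);                  /* create heap  */
    void CEEGTST(int *heap_id, int *size, void **addr, Feedback *fc); /* get storage  */
    void CEEFRST(void **addr, Feedback *fc);                          /* free storage */
    void CEEDSHP(int *heap_id, Feedback *fc);                         /* discard heap */

    int main(void)
    {
        Feedback fc;
        int   heapId;
        int   initSize = 65536, incrSize = 65536, strategy = 0, allocSize = 1024;
        void *p;

        CEECRHP(&heapId, &initSize, &incrSize, &strategy, &fc);  /* user-defined heap */
        CEEGTST(&heapId, &allocSize, &p, &fc);                   /* allocate from it  */

        /* ... use p ... */

        CEEFRST(&p, &fc);       /* free a single allocation                          */
        CEEDSHP(&heapId, &fc);  /* discard the heap; anything still allocated in it  */
                                /* is released along with it                         */
        return 0;
    }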
After reading this article https://developer.ibm.com/tutorials/l-memory-leaks/ I'm wondering whether there is a way to cancel thread execution and avoid memory leaks. My understanding is that the join functionality releases the allocated space, so it should be possible to do the same with other calls. What interests me is how join releases the memory space when other functions can't. Is there a function that tells which thread a memory space is assigned to? Can this mapping be given out? I know one should not do crazy things with that, since it represents a potential safety issue. But still, are there ways to achieve that?
For example, if I have a third-party lib, I can identify its threads, but I cannot identify the memory allocated inside the lib, or I do not know how to do that (the lib is a binary).
If the library doesn't support that, you can't. Your understanding of the issue is slightly off. It doesn't matter who allocated the memory, it matters whether the memory still needs to be allocated or not. If the library provides some way to get to the point where the memory no longer needs to be allocated, that provided way would also provide a way to free the memory. If the library doesn't provide any way to get to the point where the memory no longer needs to be allocated, some way to free it would not be helpful.
Coding such stuff is a rabbit hole and should be done on the OS level.
Can't be done. The OS has no way to know when the code that allocated some chunk of memory still needs it and when it doesn't. Only the code that allocated the memory can possibly know that.
POSIX allows cancelling, but not identifying the individual threads' allocations, and not all POSIX functionality works on Linux. POSIX is just a layer over what the OS itself provides.
Right, so POSIX is not the place where this goes. It requires understanding of the application and so must be done at the application layer. If you need this functionality, code it. If you need it in other people's code and they don't supply it, talk to them. Presumably, if their code is decent and appropriate, it has some way to do what you need. If not, your complaint is with the code that doesn't do what you need.
My thought was that somewhere in Linux the system tracks which heap allocations were made by which thread if some option is enabled, since I know that by default there is nothing.
That doesn't help. Which thread allocated memory tells you absolutely nothing about when it is no longer needed. Only the same code that decided it was needed can tell when it is no longer needed. So if this is needed in some code that allocates memory, that code must implement this. If the person who implemented that code did not provide this kind of facility, then that means they decided it wasn't needed. You may wish to ask them why they made that decision. Their answer may well surprise you.
But I see there is no answer to a serious question.
The answer is to code what you need. If it's someone else's code and they didn't code it, then they didn't think you would need it. They're most likely right. But if they're wrong, then don't use their code.
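To make the ownership point in this exchange concrete, here is a minimal pthread sketch (the worker function and buffer are hypothetical). Only the thread that allocated the buffer knows how to release it, so it installs its own cleanup handler; pthread_join by itself reclaims the thread's stack and bookkeeping, never the heap blocks the thread allocated:

    #include <pthread.h>
    #include <unistd.h>
    #include <cstdlib>

    // Cleanup handler owned by the allocating thread: only this code knows
    // that buf is no longer needed once the thread goes away.
    static void free_buffer(void* p) { std::free(p); }

    static void* worker(void*) {
        void* buf = std::malloc(4096);
        // Register cleanup so the allocation is released even if this thread
        // is cancelled somewhere inside the loop below.
        pthread_cleanup_push(free_buffer, buf);
        for (;;) {
            // ... use buf ...
            sleep(1);                  // cancellation point
        }
        pthread_cleanup_pop(1);        // never reached, but must pair with push
        return nullptr;
    }

    int main() {
        pthread_t t;
        pthread_create(&t, nullptr, worker, nullptr);
        sleep(2);
        pthread_cancel(t);             // cancel from outside...
        pthread_join(t, nullptr);      // ...join reclaims the thread's own resources;
                                       // buf is freed only because the worker itself
                                       // registered the handler
        return 0;
    }

If the third-party library's threads don't do something equivalent internally, nothing outside them can free their allocations safely, which is the point made above.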
I worked on VxWorks 5.5 a long time back and it was the best experience, working on the world's best real-time OS. Since then I never got a chance to work on it again. But a question keeps popping up for me: what makes it so fast and deterministic?
I have not been able to find many references for this question via Google.
So, I just tried thinking what makes a regular OS non-deterministic:
Memory allocation/de-allocation: Wikipedia says RTOSes use fixed-size blocks, so that these blocks can be directly indexed, but this will cause internal fragmentation, and I am sure that is not at all desirable on mission-critical systems where memory is already limited (see the sketch below).
Paging/segmentation: It's kind of linked to point 1.
Interrupt handling: Not sure how VxWorks implements it, as this is something VxWorks handles very well.
Context switching: I believe that in VxWorks 5.5 all the processes used to execute in the kernel address space, so context switching involved just saving register values and nothing about the PCB (process control block), but I am still not 100% sure.
Process scheduling algorithms: If Windows implements preemptive scheduling (priority/round robin), will process scheduling be as fast as in VxWorks? I don't think so. So how does VxWorks handle scheduling?
Please correct my understanding wherever required.
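To illustrate point 1, here is a minimal, single-threaded sketch of a fixed-size-block allocator (not vxWorks code, just the general technique): because every allocation is exactly one block, both allocate and release are a constant-time pointer swap, with no searching or coalescing, hence deterministic timing.

    #include <cstddef>
    #include <cassert>

    // Minimal fixed-size-block pool: the free list is threaded through the
    // blocks themselves, so no extra bookkeeping memory is needed.
    class BlockPool {
    public:
        BlockPool(void* memory, std::size_t blockSize, std::size_t blockCount)
            : freeList_(nullptr) {
            assert(blockSize >= sizeof(void*));
            char* p = static_cast<char*>(memory);
            for (std::size_t i = 0; i < blockCount; ++i) {
                release(p + i * blockSize);        // thread every block onto the free list
            }
        }

        void* allocate() {                         // O(1): pop the head of the free list
            void* block = freeList_;
            if (block) freeList_ = *static_cast<void**>(block);
            return block;                          // nullptr when the pool is exhausted
        }

        void release(void* block) {                // O(1): push back onto the free list
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }

    private:
        void* freeList_;                           // singly linked list through the blocks
    };

    int main() {
        alignas(void*) static char arena[64 * 128];   // 128 blocks of 64 bytes, statically reserved
        BlockPool pool(arena, 64, 128);

        void* a = pool.allocate();
        void* b = pool.allocate();
        pool.release(a);
        pool.release(b);
        return 0;
    }

The price, as noted in point 1, is internal fragmentation: a 10-byte request still consumes a whole block.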
I believe the following would account for lots of the difference:
No Paging/Swapping
A deterministic RTOS simply can't swap memory pages to disk. This would kill the determinism, since at any moment you could have to swap memory in or out.
vxWorks requires that your application fit entirely in RAM
No Processes
In vxWorks 5.5, there are tasks, but no processes like in Windows or Linux. The tasks are more akin to threads, and switching context is a relatively inexpensive operation. In Linux/Windows, switching processes is quite expensive.
Note that in vxWorks 6.x, a process model was introduced, which increases some overhead, but mainly related to transitioning from User mode to Supervisor mode. The task switching time is not necessarily directly affected by the new model.
Fixed Priority
In vxWorks, the task priorities are set by the developer and are system wide. The highest-priority task at any given time will be the one running. You can thus design your system to ensure that the tasks with the tightest deadlines always execute before others.
In Linux/Windows, generally speaking, while you have some control over the priority of processes, the scheduler will eventually let lower-priority processes run even if higher-priority processes are still active.
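For comparison, on a POSIX system you can approximate the vxWorks fixed-priority model with the SCHED_FIFO policy. A rough sketch (the priority value is an arbitrary example, and on Linux this typically requires root or CAP_SYS_NICE):

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    static void* highPriorityWork(void*) {
        // Time-critical work goes here; under SCHED_FIFO this thread keeps
        // the CPU until it blocks or a higher-priority thread becomes runnable.
        return nullptr;
    }

    int main() {
        pthread_attr_t attr;
        sched_param    param;

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);   // fixed-priority, run until blocked
        param.sched_priority = 80;                        // arbitrary example priority
        pthread_attr_setschedparam(&attr, &param);

        pthread_t t;
        int rc = pthread_create(&t, &attr, highPriorityWork, nullptr);
        if (rc != 0) {
            std::fprintf(stderr, "pthread_create failed (%d); SCHED_FIFO usually needs privileges\n", rc);
            return 1;
        }
        pthread_join(t, nullptr);
        pthread_attr_destroy(&attr);
        return 0;
    }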
I understand that delete returns memory that was allocated off the heap back to the heap, but what is the point? Computers have plenty of memory, don't they? And all of the memory is returned as soon as you "X" out of the program.
Example:
Consider a server that allocates a Packet object for each packet it receives (this is bad design, for the sake of the example).
A server, by nature, is intended to never shut down. If you never delete the thousands of Packet objects your server handles per second, your system is going to get swamped and crash in a few minutes.
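A toy sketch of that hypothetical Packet case (all names invented for illustration); the leaky and the correct handler differ only in whether the allocation has an owner that releases it:

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Packet {
        std::vector<char> payload;
    };

    // Leaky version: every received packet allocates, nothing ever frees,
    // so a long-running server grows without bound.
    void handlePacketLeaky(const char* data, std::size_t len) {
        Packet* p = new Packet;
        p->payload.assign(data, data + len);
        // ... process *p ...
        // missing: delete p;
    }

    // Fixed version: the allocation is released when handling finishes,
    // here automatically via a smart pointer (an explicit delete works too).
    void handlePacket(const char* data, std::size_t len) {
        auto p = std::make_unique<Packet>();
        p->payload.assign(data, data + len);
        // ... process *p ...
    }   // *p is deleted here

    int main() {
        const char msg[] = "hello";
        handlePacket(msg, sizeof msg);
        handlePacketLeaky(msg, sizeof msg);   // leaks one Packet per call
        return 0;
    }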
Another example:
Consider a video game that allocates particles for special effects every time a new explosion is created (and never deletes them). In a game like StarCraft (or other recent ones), after a few minutes of hilarity and destruction (and hundreds of thousands of particles), the lag will be so huge that your game will turn into a PowerPoint slideshow, effectively making your players unhappy.
Not all programs exit quickly.
Some applications may run for hours, days or longer. Daemons may be designed to run without cease. Programs can easily consume more memory over their lifetime than available on the machine.
In addition, not all programs run in isolation. Most need to share resources with other applications.
There are a lot of reasons why you should manage your memory usage, as well as any other computer resources you use:
What might start off as a lightweight program could soon become more complex; depending on your design, areas of memory consumption may grow dramatically.
Remember you are sharing memory resources with other programs. Being a good neighbour allows other processes to use the memory you free up, and helps to keep the entire system stable.
You don't know how long your program might run for. Some people hibernate their session (or never shut their computer down) and might keep your program running for years.
There are many other reasons; I suggest researching memory allocation for more details on the dos and don'ts.
I see your point that computers have lots of memory, but you are wrong. As an engineer you have to create programs that use computer resources properly.
Imagine you made a program which runs all the time the computer is on. It sometimes creates objects/variables with "new". After some time you don't need them anymore, but you don't delete them. This happens from time to time, and you keep taking RAM out of stock. After a while the user has to terminate your program and launch it again. That is not so bad, but it is not comfortable either; what is more, your program may take a while to load. Because of this the user suffers for your silly decision.
Another thing: when you use "new" to create an object you call its constructor, and "delete" calls the destructor. Let's say you need to open some file, and the destructor closes it and makes it accessible to other processes; in this case you would steal not only memory but also files from other processes.
If you don't want to use "delete" you can use shared pointers (they free the object automatically through reference counting).
They can be found in the standard library as std::shared_ptr; the one disadvantage is that WIN XP SP2 and older do not support this. So if you want to create something for the public you should use Boost, which also has boost::shared_ptr. To use Boost you need to download it from the Boost website and configure your development environment to use it.
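A minimal sketch of that idea (the file name is just an example). std::shared_ptr counts references and runs the destructor when the last owner goes away, which also covers the file-handle point above; boost::shared_ptr is used the same way on older toolchains:

    #include <cstdio>
    #include <memory>

    struct FileHandle {
        std::FILE* f;
        explicit FileHandle(const char* path) : f(std::fopen(path, "r")) {}
        ~FileHandle() { if (f) std::fclose(f); }   // destructor releases the OS resource
    };

    int main() {
        // No explicit delete anywhere: when the last shared_ptr goes out of
        // scope the reference count hits zero, ~FileHandle runs, and the
        // memory is freed -- the same effect as a correctly paired new/delete.
        auto h = std::make_shared<FileHandle>("example.txt");   // hypothetical file
        std::shared_ptr<FileHandle> alias = h;                  // count is now 2
        // ... use h->f ...
        return 0;
    }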
I am refining a large body of native code which uses a few static critical sections and never calls DeleteCriticalSection, leaving them to process exit to clean up.
There are no leaks and no concerns about the total number of CSes getting too high; I'm just wondering if there are any long-term Windows consequences to not cleaning them up. We have regression test suites that will launch a program thousands of times a day, although end users are not likely to do anything like that.
Because of the range of deployed machines we have to consider Windows XP as well and this native code is run from a managed application.
A critical section is just a block of memory unless contention is detected, at which time an event object is created for synchronization. Process exit would clean up any lingering events. If you were creating these at runtime dynamically and not freeing them, it would be bad. If the ones not getting cleaned up are a fixed amount for each process, I wouldn't worry about it.
In principle, every process resource is cleaned up when the process exits. Kernel resources like event objects definitely follow this principle.
The short answer is probably not. The long answer is, this is a lazy programming practice and should be fixed.
To use DeleteCriticalSection correctly, one needs to shut down in an orderly manner so that no other thread owns or attempts to own the section before/after it is deleted. And programmers get lazy about defining and implementing how shutdown will work for their program.
There are many things you can do with no immediate measurable consequences - but that does not make it right. Also, a similar attitude towards other handles/objects in the same code base will have a cumulative effect and could add up to "consequences".
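If you do decide to clean them up, tying the pair to an object's lifetime keeps it from being forgotten. A minimal sketch using the standard Win32 calls (the wrapper name and usage are just for illustration):

    #include <windows.h>

    // Pairs InitializeCriticalSection with DeleteCriticalSection automatically,
    // so the "orderly shutdown" problem reduces to destroying this object
    // after every thread that uses it is done.
    class CriticalSection {
    public:
        CriticalSection()  { InitializeCriticalSection(&cs_); }
        ~CriticalSection() { DeleteCriticalSection(&cs_); }
        void lock()   { EnterCriticalSection(&cs_); }
        void unlock() { LeaveCriticalSection(&cs_); }
        CriticalSection(const CriticalSection&) = delete;
        CriticalSection& operator=(const CriticalSection&) = delete;
    private:
        CRITICAL_SECTION cs_;
    };

    static CriticalSection g_lock;   // deleted during static destruction at exit

    void touchSharedState() {
        g_lock.lock();
        // ... mutate shared state ...
        g_lock.unlock();
    }

Since lock()/unlock() satisfy the BasicLockable requirements, std::lock_guard<CriticalSection> can be used on top of this to keep the enter/leave pair exception-safe as well.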
Our BizTalk 2006 application contains two orchestrations which are invoked on a frequent basis (approx. 15 requests per second). We identified possible memory leaks in our application by making certain throttling threshold changes in the host. When we disabled memory-based throttling, the process memory kept increasing up to 1400 MB, after which we started to experience out-of-memory exceptions.
We are forced to restart the host instances when this situation occurs.
We were wondering if explicitly calling GC.Collect from the orchestration would be fruitful in such a case, and what the cons of this approach could be.
Thanks.
Out-of-memory exceptions occur only if the garbage collector was unable to free enough memory to perform a requested allocation. This can happen if you have a memory leak, which on a garbage-collected platform means some object references are kept longer than they need to be. Frequent causes of leaks are objects that hold global data (static variables), such as a singleton, a cache or a pool that keeps references for too long.
If you explicitly call GC.Collect, it will also fail to free the memory, for the same reasons the implicit collection failed. So the explicit GC.Collect call would only result in slowing down the orchestration.
If you are calling .Net classes from your orchestrations, I suggest trying to isolate the problem by calling the same classes from a pure .Net application (no BizTalk involved)
It's also possible that there's no leak, but that each instance is consuming too much memory at the same time. BizTalk can usually dehydrate orchestrations when it finds it necessary, but it may be prevented from doing that if a step in the orchestration (or a large atomic scope) takes too long to execute.
1400 MB also looks large for only 15 concurrent instances. Are you doing manipulations on large messages in the orchestration? In that case you can greatly reduce memory usage by avoiding operations that force the whole message to be loaded into memory, and instead manipulate the message using streaming.
Not knowing BizTalk, my answer may be way off…
I am assuming that having many more orchestration instances running in a process increases the time it takes for a single orchestration instance to complete. Then, as you increase the number of orchestration instances that you let run at the same time, at some point the time it takes them to complete will be large enough that the combined size of the running orchestration instances is too great for your RAM.
I think you need to throttle based on the number of running orchestration instances. If you graph “rate of completion” against “number of running orchestration instances” you will likely see a big flat zone in the middle of the graph; choose your throttling to keep you in the middle of this stable zone.
I agree with the poster above. Trying to clear the memory or resetting the host instance is not the solution, just a band-aid; you need to find where you are leaking memory. I would look at the whole application and not just the orchestration; it is possible that the ports could also be causing your memory leak. Do you use custom functoids in your maps? How about inline code? Custom XSLT?
I would also look at custom pipelines if you are using them.
If possible, I would try isolating the different components and putting them under stress and volume tests individually; somehow I don't think your orchestration itself is the problem, but rather a map or a custom component.
Doing a garbage collect isn't going to free up leaked memory, as it's still (in error) referenced by your application somehow. You would only call GC.Collect if you had generated a lot of short-lived objects and knew you were at a good point to free them.
You must identify and fix the leaking code!
I completely agree with most of what the others said - you should look into where your leak is and fix it; calling the GC directly will not help you, and in any case it is very unlikely to be a reasonable way forward.
I would add, though, that throttling exists to protect your environment from grinding to a halt should you encounter a sudden rise in resource consumption; without throttling it is possible for BizTalk (like any other server) to reach a point where it cannot continue processing and effectively "gets stuck"; throttling allows it to slow down in order to ensure processing still happens, until the resource consumption level (hopefully) returns to normal.
For that reason I would also suggest that you consider having some throttling configured for your environment, with the values tweaked to suit your scenario.