The "queue", or FIFO, is one of the most common data structures, and have native implementations in many languages and frameworks. However, there seems to be little consensus as to how fundamental queue operations should be named. A survey of several popular languages show:
Python: put / get
C#, Qt: enqueue / dequeue
Ruby, C++ STL: push / pop
Java: add / remove
If one needs to implement a queue (say, on some embedded platform that does not already have a native queue implementation), what naming convention would be best?
Enqueue/dequeue seems the most explicit, but is wordy; put/get is succinct but does not provide any hint as to the FIFO nature of the operations; push/pop seems to suggest stack operations instead of queue operations.
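(For what it's worth, whichever names win, the embedded case in the question needs very little code. Here is a minimal sketch of a fixed-capacity ring-buffer FIFO in C++ using the enqueue/dequeue naming; the class and method names are just illustrative.)

    #include <array>
    #include <cstddef>
    #include <optional>

    // Minimal fixed-capacity FIFO (ring buffer) with explicit
    // enqueue/dequeue naming. Illustrative sketch, not from any library.
    template <typename T, std::size_t N>
    class Fifo {
        std::array<T, N> buf{};
        std::size_t head = 0, tail = 0, count = 0;
    public:
        bool enqueue(const T& item) {      // returns false when full
            if (count == N) return false;
            buf[tail] = item;
            tail = (tail + 1) % N;
            ++count;
            return true;
        }
        std::optional<T> dequeue() {       // empty optional when empty
            if (count == 0) return std::nullopt;
            T item = buf[head];
            head = (head + 1) % N;
            --count;
            return item;
        }
    };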
I'm kind of a pedant, so I'd go with enqueue/dequeue.
Though add/next has a certain appeal.
Just to cloud the issue a little more, in Perl it's push/shift. :)
push/pop is plain wrong for a FIFO, as these are stack (last in, first out) operations.
queue can refer to the object as well as an operation, so it is a bit overloaded, and dequeue can cause confusion because it sounds like deque, which commonly refers to a double-ended queue.
put/get - short, obvious and generic (doesn't assume an implementation and can be used for all sorts of queues/lists/collections) - what's not to like?
I'd probably name them push_back and pop_front.
Add/Remove sounds like the most logical one to use, especially if you intend for it to be read by a person unfamiliar with the structure or the language (easier to understand).
Push/Pop would be next in my rankings because of personal preferences.
Put/Get comes next.
Enqueue/Dequeue is very last because I really hate the letter Q.
Add/remove has the advantage that you can easily change from a queue to another data structure.
For example, storing states in a queue vs. a stack makes the difference between breadth-first and depth-first search.
I like enqueue and dequeue, but typing them sucks. So in my Queue structures (both C++ and Java), I named the functions enQ and deQ :)
Pop / push sounds wrong as it suggests a stack data structure instead of a queue.
To add something new to the suggestions: my teachers always used in and out on the blackboard.
I like entail and behead. Not for everyone though. Or "in with the new", "out with the old". And then for us Southwesterners there are berattle and defang. But my favorite is graphical. Right Arrow for in and then Right Arrow for out.
Any algorithm you can implement in a HLL you can implement in assembly. On the other hand, there are many algorithms you can implement in assembly which you cannot implement in a HLL. - Randall Hyde
I found this statement in the foreword to a book on assembly. The book is here: https://courses.engr.illinois.edu/ece390/books/artofasm/fwd/fwd.html#109
Does anyone know an example of this type of algorithm?
It's plain wrong.
You can implement any algorithm (in the CS sense of the word) in any Turing-complete programming language.
On the other hand, if he had said something like "Some algorithms can be implemented very efficiently, and with ease, in assembly, much more so than is possible in most high-level programming languages", then his statement would have made sense...
Interesting text, though...
There is a sense in which it is trivially false: in the worst case, you could write an emulator in the HLL and then run the algorithm in there. But that's cheating a bit because now the HLL does not directly implement the algorithm.
A concrete example of what many HLLs can't do (or maybe they can in practice, but are not guaranteed to be able to do) is directly implementing a XOR linked list. In many languages you just cannot XOR pointers, and/or it wouldn't make sense even if you could (consider garbage collection). Of course you can refer to every node by an integer ID and XOR those, but that's a workaround, not a direct implementation.
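To make that concrete, here is roughly what the core trick looks like in C++. This is only a sketch: it relies on implementation-defined pointer/integer round-trips, which is exactly the kind of thing most HLLs forbid.

    #include <cstdint>

    // XOR linked list node: one link field stores prev XOR next.
    // This depends on raw pointer<->integer casts and is hostile to
    // garbage collectors, which can no longer see the pointers; hence
    // it has no direct expression in many HLLs.
    struct Node {
        int value;
        std::uintptr_t link;  // (uintptr_t)prev ^ (uintptr_t)next
    };

    // Given the node we came from and the current node, recover the next.
    Node* next_node(Node* prev, Node* cur) {
        return reinterpret_cast<Node*>(
            cur->link ^ reinterpret_cast<std::uintptr_t>(prev));
    }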
HLLs often have trouble implementing unstructured control flow, though many (particularly older) languages offer a goto. That means you may have to jump through hoops to implement a state machine (using a switch in a loop or whatever), instead of letting the state be implied by the program counter.
There are also many algorithms and data structures that rely on operations that don't exist in typical HLLs, for example popcnt or lzcnt, which can again be emulated, but then so can everything.
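As a trivial illustration of "can be emulated": a portable popcount fallback in C++ looks like the following. A compiler may recognize the pattern and emit popcnt, but nothing guarantees it.

    #include <cstdint>

    // Portable population count: what you write when the language or
    // target has no popcnt. Kernighan's trick clears one set bit per
    // iteration, so the loop runs once per set bit.
    int popcount32(std::uint32_t x) {
        int n = 0;
        while (x != 0) {
            x &= x - 1;  // clear the lowest set bit
            ++n;
        }
        return n;
    }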
In case you have strict limitations in terms of memory and/or execution time, you might be forced to use assembly language.
High level languages typically require a run-time library which might be too big to fit into your program memory.
Think of a time-critical driver routine. An interrupt service routine for example. If there are only a few nanoseconds available for the routine, assembly language might be the only viable option.
How about this? You need to write some assembly code in order to access system registers and tables. But once the setup is done, no CPU instructions are executed (everything's done by the complex CPU exception-handling mechanisms), and yet the thing is Turing-complete and can "run" programs.
This is purely a just-for-interest question; any sort of answers are welcome.
So is it possible to create thread-safe collections without any locks? By locks I mean any thread-synchronization mechanisms, including Mutex, Semaphore, and even Interlocked, all of them. Is it possible at user level, without calling system functions? OK, maybe the implementation would not be efficient; I am interested in the theoretical possibility. If not, what are the minimum means needed to do it?
EDIT: Why immutable collections don't work.
Think of a class Stack with a method Add that returns another Stack.
Now here is the program:
    Stack stack = new ...;

    ThreadedMethod()
    {
        loop
        {
            // do the loop
            stack = stack.Add(element);
        }
    }
The expression stack = stack.Add(element) is not atomic, so another thread can overwrite the new stack between your read of stack and your write back to it.
Thanks,
Andrey
There seem to be misconceptions, even among guru software developers, about what constitutes a lock.
One has to make a distinction between atomic operations and locks. Atomic operations like compare-and-swap perform an operation (which would otherwise require two or more instructions) as a single uninterruptible instruction. Locks are built from atomic operations; however, they can result in threads busy-waiting or sleeping until the lock is unlocked.
In most cases, if you manage to implement a parallel algorithm with atomic operations without resorting to locking, you will find that it is orders of magnitude faster. This is why there is so much interest in wait-free and lock-free algorithms.
There has been a ton of research done on implementing various wait-free data structures. While the code tends to be short, it can be notoriously hard to prove that the structures really work due to the subtle race conditions that arise. Debugging is also a nightmare. However, a lot of work has been done, and you can find wait-free/lock-free hashmaps, queues (Michael Scott's lock-free queue), stacks, lists, trees; the list goes on. If you're lucky you'll also find some open-source implementations.
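To give a flavor of such code, here is a minimal sketch of a Treiber-style lock-free stack (C++ for concreteness, push path only; a correct pop also has to solve memory reclamation and the ABA problem, which is where the notorious subtlety comes in):

    #include <atomic>

    // Treiber-style lock-free stack, push only. A sketch for flavor:
    // production code must also handle reclamation/ABA on the pop side.
    template <typename T>
    class LockFreeStack {
        struct Node { T value; Node* next; };
        std::atomic<Node*> head{nullptr};
    public:
        void push(T value) {
            Node* n = new Node{std::move(value), head.load()};
            // CAS retry loop: on failure, n->next is refreshed with the
            // current head, and we simply try again. No thread ever waits
            // on another; someone always makes progress.
            while (!head.compare_exchange_weak(n->next, n)) {}
        }
    };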
Just google 'lock-free my-data-structure' and see what you get.
For further reading on this interesting subject, start with The Art of Multiprocessor Programming by Maurice Herlihy.
Yes, immutable collections! :)
Yes, it is possible to do concurrency without any support from the system. You can use Peterson's algorithm or the more general bakery algorithm to emulate a lock.
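For two threads, a sketch of Peterson's algorithm looks like the code below (C++ for concreteness). Note the caveat: logically it uses only loads and stores, no read-modify-write instructions, but on modern out-of-order hardware those loads and stores must be sequentially consistent, which is why std::atomic with its default ordering is used here.

    #include <atomic>

    // Peterson's algorithm for two threads (ids 0 and 1). Sketch only.
    // Plain loads/stores suffice in the abstract model, but they must be
    // seq_cst on modern hardware or the mutual exclusion breaks.
    std::atomic<bool> wants[2] = {false, false};
    std::atomic<int>  turn{0};

    void lock(int me) {
        const int other = 1 - me;
        wants[me].store(true);   // announce interest
        turn.store(other);       // give away the tie-break
        while (wants[other].load() && turn.load() == other) {
            // busy-wait until the other thread loses interest
            // or the turn comes back to us
        }
    }

    void unlock(int me) {
        wants[me].store(false);
    }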
It really depends on how you define the term (as other commenters have discussed) but yes, it's possible for many data structures, at least, to be implemented in a non-blocking way (without the use of traditional mutual-exclusion locks).
I strongly recommend, if you're interested in the topic, that you read the blog of Cliff Click. Cliff is the head guru at Azul Systems, who produce hardware plus a custom JVM to run Java systems on massive, massively parallel machines (think up to around 1000 cores and hundreds of gigabytes of RAM), and obviously in those kinds of systems locking can be death (disclaimer: not an employee or customer of Azul, just an admirer of their work).
Dr Click has famously come up with a non-blocking HashTable, which is basically a complex (but quite brilliant) state machine using atomic CompareAndSwap operations.
There is a two-part blog post describing the algorithm (part one, part two) as well as a talk given at Google (slides, video); the latter in particular is a fantastic introduction. It took me a few goes to 'get' it (it's complex, let's face it!), but if you persevere (or if you're smarter than me in the first place!) you'll find it very rewarding.
I don't think so.
The problem is that at some point you will need some mutual exclusion primitive (perhaps at the machine level) such as an atomic test-and-set operation. Otherwise, you could always devise a race condition. Once you have a test-and-set, you essentially have a lock.
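For illustration, the step from test-and-set to a lock is tiny. A minimal spinlock in C++ (std::atomic_flag is the one type guaranteed to be lock-free) is just:

    #include <atomic>

    // Once you have an atomic test-and-set, a (spin)lock follows directly.
    class SpinLock {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void lock() {
            while (flag.test_and_set(std::memory_order_acquire)) {
                // spin until the previous holder clears the flag
            }
        }
        void unlock() { flag.clear(std::memory_order_release); }
    };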
That being said, on older hardware that did not have any support for this in the instruction set, you could disable interrupts and thus prevent another "process" from taking over, though that effectively puts the system into a serialized mode and forces a sort of mutual exclusion for a while.
At the very least you need atomic operations. There are lock-free algorithms for single CPUs. I'm not sure about multiple CPUs.
What are some of the criticisms leveled against exposing continuations as first class objects?
I feel that it is good to have first-class continuations. They allow complete control over the execution flow of instructions. Advanced programmers can develop intuitive solutions to certain kinds of problems. For instance, continuations are used to manage state on web servers. A language implementation can provide useful abstractions on top of continuations, for example green threads.
Despite all these, are there strong arguments against first class continuations?
The reality is that many of the useful situations where you could use continuations are already covered by specialized language constructs: throw/catch, return, C#/Python yield. Thus, language implementers don't really have all that much incentive to provide them in a generalized form usable for roll-your-own solutions.
In some languages, generalized continuations are quite hard to implement efficiently. Stack-based languages (i.e. most languages) basically have to copy the whole stack every time you create a continuation.
Those languages can implement certain continuation-like features, those that don't break the basic stack-based model, a lot more efficiently than the general case, but implementing generalized continuations is quite a bit harder and not worth it.
Functional languages are more likely to implement continuations for a couple of reasons:
They are frequently implemented in continuation-passing style, which means the "call stack" is probably a linked list allocated on the heap. This makes it trivial to pass a pointer to the stack as a continuation, since you don't need to overwrite the stack context when you pop the current frame and push a new one. (I've never implemented CPS, but that's my understanding of it; a tiny sketch of the style follows below.)
They favor immutable data bindings, which make your old continuation a lot more useful because you will not have altered the contents of variables that the stack pointed to when you created it.
For these reasons, continuations are likely to remain mostly just in the domain of functional languages.
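For readers who haven't seen it, here is a toy illustration of continuation-passing style in C++ (the names are made up for the example): instead of returning, every call hands its result to an explicit continuation, so "the rest of the computation" is always a first-class value you could stash away.

    #include <cstdio>
    #include <functional>

    // Continuation-passing factorial: no function "returns"; each one
    // passes its result to the continuation k. The chain of pending
    // continuations plays the role of the call stack.
    void factorial_cps(int n, std::function<void(int)> k) {
        if (n <= 1) k(1);
        else factorial_cps(n - 1, [n, k](int r) { k(n * r); });
    }

    int main() {
        factorial_cps(5, [](int r) { std::printf("%d\n", r); }); // prints 120
    }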
First up, there is more than just call/cc when it comes to continuations. I suggest starting with Marc Feeley's paper: A Better API for First-Class Continuations.
Next up, I suggest reading about the control operators shift and reset, which are a different way of representing continuations.
A significant objection is implementation cost. If the runtime uses a stack, then first-class continuations require a stack copy at some point. The copy cost can be controlled (see Representing Control in the Presence of First-Class Continuations for a good strategy), but it also means that mutable variables cannot be allocated on the stack. This isn't an issue for functional or mostly-functional (e.g., Scheme) languages, but it adds significant overhead for OO languages.
Most programmers don't understand them. If you have code that uses them, it's harder to find replacement programmers who will be able to work with it.
Continuations are hard to implement on some platforms. For example, JRuby doesn't support continuations.
First-class continuations undermine the ability to reason about code, especially in languages that allow continuations to be imperatively assigned to variables, because the insides of closures can be brought alive again in hairy ways.
Cf. Kent Pitman's complaint about continuations, concerning the tricky way that unwind-protect interacts with call/cc.
Call/cc is the 'goto' of advanced functional programming (a la the example here).
In Ruby 1.8 the implementation was extremely slow; it is better in 1.9, and of course most Schemes have had them built in and performing well from the outset.
Are data structures like linked lists purely academic, or do you really use them in real programming? Are they covered by generics so that you don't need to build them (assuming your language has generics)? I'm not debating the importance of understanding what they are, just their usage outside of academia. I ask from a front-end web, back-end database perspective. I'm sure someone somewhere builds these; I'm asking from my context.
Thank you.
EDIT: Are Generics so that you don't have to build linked lists and the like?
It will depend on the language and frameworks you're using. Most modern languages and frameworks won't make you reinvent these wheels. Instead, they'll provide things like List<T> or HashTable.
EDIT:
We probably use linked lists all the time, but don't realize it. We don't have to write implementations of linked lists on our own, because the frameworks we use have already written them for us.
You may also be getting confused about "generics". You may be referring to generic list classes like List<T>. This is just the same as the non-generic class List, but where the element is always of type T. Whether it is implemented as an array, a linked list, or something else under the hood is something we don't have to care about.
We also don't have to worry about allocation of physical memory, or how interrupts work, or how to create a file system. We have operating systems to do that for us. But we may be taught that information in school just the same.
Certainly. Many "List" implementations in modern languages are actually linked lists, sometimes in combination with arrays or hash tables for direct access (by index as opposed to iteration).
Linked lists (especially doubly linked lists) are very commonly used in "real-world" data structures.
I would dare to say every common language has a pre-built linked-list implementation, either as a language primitive, a native template library (e.g. C++), a native library (e.g. Java), or some 3rd-party implementation (probably open-source).
That being said, several times in the past I wrote a linked list implementation from scratch myself when creating infrastructure code for complex data structures. Sometimes it's a good idea to have full control over the implementation, and sometimes you need to add a "twist" to the classic implementation for it to satisfy your specific requirement. There's no right or wrong when it comes to whether to code your own implementation, as long as you understand the alternatives and trade-offs. In most cases, and certainly in very modern languages like C# I would avoid it.
Another point is when you should use lists versus arrays/vectors or hash tables. From your question I understand you are aware of the trade-offs here, so I won't go too much into it, but basically, if your main usage is traversing lists in order, and the list size may vary significantly, a list may be a viable option. Another consideration is the type of insertion: if a common use case is "inserting in the middle", then lists have a significant advantage over arrays/vectors. I can go on, but this information is in the classic CS books :)
Clarification: my answer is language-agnostic and does not relate specifically to Generics, which to my understanding have a linked-list implementation.
A singly-linked list is the only way to have a memory efficient immutable list which can be composed to "mutate" it. Look at how Erlang does it. It may be slightly slower than an array-backed list but it has very useful properties in multithreaded and purely-functional implementations.
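As a sketch of the idea (illustrative names, not any particular library): "mutating" an immutable singly-linked list just means allocating a new head cell that shares the old tail, so existing readers are never disturbed.

    #include <memory>

    // Persistent (immutable) singly-linked list, Erlang style. A sketch.
    template <typename T>
    struct Cell {
        T value;
        std::shared_ptr<const Cell> next;
    };

    template <typename T>
    using List = std::shared_ptr<const Cell<T>>;

    // cons builds a new list without touching the old one; both lists
    // share the tail cells, which is what makes this multithread- and
    // purely-functional-friendly.
    template <typename T>
    List<T> cons(T value, List<T> tail) {
        return std::make_shared<const Cell<T>>(
            Cell<T>{std::move(value), std::move(tail)});
    }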
Yes, there are real-world applications that use linked lists; I sometimes have to maintain a huge application that makes very heavy use of them.
And yes, linked lists are included in just about any class library, from the C++ STL to .NET.
And I wish it used arrays instead.
In the real world linked lists are SLOW because of things like paging and CPU cache size (linked lists tend to spread your data, and that makes it more likely that you will need to access data from different areas of memory, which is much slower on today's computers than using arrays that store all the data in one sequence).
Google "locality of reference" for more info.
Never used hand-made lists except for homework at university.
Depending on usage a linked list could be the best option. Deletes from the front of the list are much faster with a linked list than an array list.
In a Java program that I maintain, profiling showed that I could increase performance by moving from an ArrayList to a LinkedList for a List that had lots of deletes at the beginning.
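The asymptotics behind that are easy to see in C++ terms (a sketch; the right container also depends on the rest of the workload):

    #include <deque>
    #include <list>
    #include <vector>

    void pop_fronts() {
        std::vector<int> v{1, 2, 3};
        std::list<int>   l{1, 2, 3};
        std::deque<int>  d{1, 2, 3};

        v.erase(v.begin()); // O(n): every remaining element shifts left
        l.pop_front();      // O(1): just unlink the head node
        d.pop_front();      // O(1) amortized: often a good middle ground
    }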
I've been developing line-of-business applications (.NET) for years and I can only think of one instance where I've used a linked list, and even then I did not have to create the object.
This has just been my experience.
I would say it depends on the usage; in some cases they are quicker than typical random-access containers.
Also I think they are used by some libraries as an underlying collection type, so what may look like a non-linked list might actually be one underneath.
In a C/C++ application I developed at my last company we used doubly linked lists all the time. They were essential to what we were doing, which was real-time 3D graphics.
Yes, all sorts of data structures are very useful in daily software development. In most languages that I know (C/C++/Python/Objective-C) there are frameworks that implement those data structures so that you don't have to reinvent the wheel.
And yes, data structures are not only for academics; they are very useful and you would not be able to write software without them (depending on what you do).
You use data structures in message queues, data maps, hash tables, keeping data ordered, fast access/removal/insertion, and so on, depending on what needs to be done.
Yes, I do. It all depends on the situation. If I won't be storing a lot of data in them, or if the specific application needs a FIFO structure, I'll use them without a second thought because they are fast to implement.
However, in applications written by other developers I know, there are times when a linked list would fit perfectly except that poor locality causes a lot of cache misses.
I cannot imagine many programs that don't deal with lists.
The minute you need to deal with more than one of something, lists in all forms and shapes become necessary, as you need somewhere to store these things. That list might be a singly/doubly linked list, an array, a set, a hashtable if you need to index your things based on a key, a priority queue if you need to sort it, etc.
Typically you'd store these lists in a database system, but somewhere you need to fetch them from the db, store them in your application and manipulate them, even if it's as simple as retrieving a little list of things to populate into a drop-down combobox.
These days, in languages such as C#, Python, Java and many more, you're usually abstracted away from having to implement your own lists. These languages come with a great many container abstractions you can store stuff in, either via standard libraries or built into the language.
You're still at an advantage in learning these topics; e.g., if you're working with C# you'd want to know how an ArrayList works, and whether you'd choose ArrayList or something else depending on your need to add/insert/search/randomly index such a list.
I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible and not use too much memory.
What small simple changes have you made to your Delphi code that had the biggest impact on the performance of your program by noticeably reducing execution time or memory use?
Thanks everyone for all your answers. Many great tips.
For completeness, I'll post a few important articles on Delphi optimization that I found.
Before you start optimizing Delphi code at About.com
Speed and Size: Top 10 Tricks also at About.com
Code Optimization Fundamentals and Delphi Optimization Guidelines at High Performance Delphi, relating to Delphi 7 but still very pertinent.
.BeginUpdate;
.EndUpdate;
;)
Use a Delphi profiling tool (some here or here) and discover your own bottlenecks. Optimizing the wrong bottlenecks is a waste of time. In other words, if you apply all of these suggestions but ignore the fact that someone put a sleep(1000) (or similar) in some very important code, that is a waste of your time. Fix your actual bottlenecks first.
Stop using TStringList for everything.
TStringList is not a general-purpose data structure for effective storage and handling of everything from simple to complex types. Look for alternatives. I use the Delphi Container and Algorithm Library (DeCAL, formerly known as SDL). Julian's EZDSL should also be a good alternative.
Pre-allocating lists and arrays, rather than growing them with each iteration.
This has probably had the biggest impact for me in terms of speed.
If you need to call Application.ProcessMessages (or similar) in a loop, try calling it only every Nth iteration.
Similarly, if updating a progressbar, don't update it every iteration. Instead, increment it by x units every x iterations, or scale the updates according to time or as a percentage of overall task length.
FastMM
FastCode (lib)
Use high-performance data structures, like hash tables. In many places it is faster to make one loop that builds a lookup hash table for your data. It uses quite a lot of memory but it surely is fast. (This may be the most important one, but the first two are dead simple and need very little effort.)
Reduce disk operations. If there's enough memory, load the file entirely to RAM and do all operations in memory.
Consider the careful use of threads. If you are not using threads now, then consider adding a couple. If you are, make sure you are not using too many. If you are running on a dual- or quad-core computer (which most are these days) then proper thread tuning is very important.
You could look at the OmniThread Library by Gabr, but there are a number of thread libraries in development for Delphi. You could easily implement your own parallel for using anonymous methods.
Before you do anything, identify slow parts. Do not touch working code which performs fast enough.
The biggest improvement came when I started using AsyncCalls to convert single-threaded applications that used to freeze up the UI, into (sort of) multi-threaded apps.
Although AsyncCalls can do a lot more, I've found it useful for this very simple purpose. Let's say you have a subroutine blocked like this: Disable Button, Do Work, Enable Button.
You move the 'Do Work' part to a local function (call it AsyncDoWork), and add four lines of code:
    var a: IAsyncCall;

    a := LocalAsyncCall(@AsyncDoWork);
    while not a.Finished do
      Application.ProcessMessages;
    a.Sync;
What this does for you is run AsyncDoWork in a separate thread, while your main thread remains available to respond to the UI (like dragging the window or clicking Abort). When AsyncDoWork is finished, the code continues. Because I moved it to a local function, all local vars are available, and the code does not need to be changed.
This is a very limited type of 'multi-threading'. Specifically, it's dual threading. You must ensure that your Async function and the UI do not both access the same VCL components or data structures. (I disable all controls except the stop button.)
I don't use this to write new programs. It's just a really quick & easy way to make old programs more responsive.
When working with a tstringlist (or similar), set "sorted := false" until needed (if at all). Seems like a no-brainer...
Create unit tests
Verify tests all pass
Profile your application
Refactor looking for bottlenecks and memory
Repeat from Step 2 (comparing to previous pass)
Make intelligent use of SetLength() for strings and arrays. Optimise initialisation with FillChar or ZeroMemory.
Local variables created on the stack (e.g. record types) are faster than heap-allocated variables (objects and New()).
Reuse objects rather than destroying and creating them. But make sure the management code for this is faster than the memory manager!
Check heavily-used loops for calculations that could be (at least partially) pre-calculated or handled with a lookup table. Trig functions are a classic for this, but it applies to many others.
If you have a list, use a dynamic array of anything, even a record, as follows:
This needs no classes and no freeing, and access to it is very fast. Even if it needs to grow you can do that too (see below). Only use TList or TStringList if you need lots of size-changing flexibility.
    type
      TMyRec = record
        SomeString : string;
        SomeValue  : double;
      end;

    var
      Data : array of TMyRec;
      I    : integer;

    begin
      SetLength( Data, 100 ); // defines the length and CLEARS ALL DATA
      Data[32].SomeString := 'Hello';
      ShowMessage( Data[32].SomeString );

      // Grow the list by 1 item.
      I := Length( Data );
      SetLength( Data, I + 1 );
    end;
Separating the program logic from user interface, refactoring, then optimizing the most-used, most resource-intensive elements independently.
Turn debugging OFF
Turn optimizations ON
Remove all references to units that you don't actually use
Look for memory leaks
Use a lot of assertions to debug, then turn them off in shipping code.
Turn off range and overflow checking after you have tested extensively.
If you really, really, really need to be lightweight then you can shed the VCL. Take a look at KOL & MCK. Granted, if you do that then you are trading features for reduced footprint.
Use the full FastMM and study the documentation and source and see if you can tweak it to your specifications.
For an old BDE development when I first started Delphi, I was using lots of TQuery components. Someone told me to use TTable master-detail after I explained to him what I was doing, and that made the program run much faster.
Calling DisableControls can skip unnecessary UI updates.
When identifying records, use integers if at all possible for record comparison. While a primary key of "company name" might seem logical, the time spent generating and storing an integer hash of it will greatly improve overall search times.
You might consider using runtime packages. This could reduce your memory footprint if there is more than one program running that was written using the same packages.
If you use threads, set their processor affinity. If you don't use threads yet, consider using them, or look into asynchronous I/O (completion ports) if your application does lots of I/O.
Consider whether a DBMS database is really the perfect choice. If you are only reading data and never changing it, then a flat fixed-record file could work faster, especially if the path to the data can be easily mapped (i.e., one index). A trivial binary search on a fixed-record file is still extremely fast.
BeginUpdate ... EndUpdate
ShortString vs. String
Use arrays instead of TStrings and TList
But the sad answer is that tuning and optimization will give you maybe 10% improvement (and it's dangerous); re-design can give you 90%. Once you really understand the goal, you often can restate the problem (and therefore the solution) in much better terms.
Cheers
Examine all loops, and look for ways to short-circuit. If you're looking for something specific and find it in a loop, then use the BREAK command to immediately bail; there's no sense looping through the rest. If you know that you don't have a match, then use a CONTINUE as quickly as possible.
Take advantage of some of the FastCode project code. Parts of it were incorporated into VCL/RTL proper (like FastMM was), but there is more out there you can use!
Note: they have a new site they are moving to, but it seems to be a bit inactive.
Consider hardware issues. If you really need performance then consider the type of hard drive(s) your program and your databases are running on. There are a lot of variables especially if you are running a database. RAID is not always the best answer either.