If you write a compiler in pure Prolog (no extra-logical bits), will it work as a decompiler also?
(A book I was reading opined on this, but I wonder if anyone has actually tried it)
I once wrote the equivalent of cdecl.org as a reversible program. It was a bit tricky, but I demonstrated that it could be done. (Somewhere in a pile of papers is the source code; one of these days, I hope to publish it on github.) The code was 2 or 3 times as compact as some existing code that used tools such as yacc/lex (bison/flex).
For something like cdecl -- where you're translating between char ** const * const x and declare x as const pointer to const pointer to pointer to char, compiling/decompiling makes sense. But what does it mean to translate from arbitrary machine code to source code? Even translating between some IR and source code doesn't seem to make a lot of sense.
This question needs to be much more precise, as we don't know what a "compiler" is (an extraneous-information-dumping transformation from a graph - the program in language 1 - to another graph - the algorithmically equivalent graph in language 2, I suppose). It is also not clear what "no extra-logical bits" implies. If you get rid of these, what kind of compilers can you still build?
Seen this way, compilation looks like pure deduction (Prolog running forward, or CHR), while decompilation looks like possibly very hard search (you will get a program among the gazillion possible ones, but it won't be pleasant to look at and will in no way resemble the one you had earlier). Someone who has a toolbox of theorems fresh in his mind can certainly say more.
But I would say not automagically, no. For one, there will be no guarantee that an infinite "recursion on the left" loop won't appear when "decompiling".
First off, this is a genuine question and not poking fun at anyone. I am trying to learn C++ after many years of not touching it, and I found a very old article (last updated July '96) while trying to remember how to implement BCD addition. Even though the article is old, and the person who wrote it is a professor, I am in a WTF state after reading the first few lines. I am learning and don't want to disregard something so easily, so please excuse my naivety.
The BCD system was chosen for the internal number system in these machines because it is easy to convert it to alphanumeric representations for printouts and displays. The compelling advantages of BCD have waned over time, and these digits are supported by more modern hardware simply to provide backward compatibility with earlier generations of machines.
Is the above statement true?? If yes, can someone explain how modern CPUs perform addition if not using binary? Or is the author trying to say something else that I misunderstood? I am concerned that the author might be hinting at something at the hardware level that is different from the software abstraction. Or it might be some sort of translation issue.
I don't see what purpose would be served by processors giving an outer appearance of being binary ("for backward compatibility") if internally they were decimal and didn't need the BCD system.
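For what it's worth, this is the operation I was trying to remember, sketched in Go (the article is about C++, but the digit-by-digit idea is the same; the function name is mine):

    package main

    import "fmt"

    // addPackedBCD adds two packed-BCD numbers (one decimal digit per
    // 4-bit nibble, so 0x0425 represents 425). Each nibble is added in
    // plain binary, and any result above 9 is corrected by adding 6 to
    // skip the unused codes 0xA..0xF and carry into the next digit --
    // the job legacy decimal-adjust instructions (e.g. x86 DAA) did.
    func addPackedBCD(a, b uint32) uint32 {
        var result, carry uint32
        for shift := uint(0); shift < 32; shift += 4 {
            d := (a>>shift)&0xF + (b>>shift)&0xF + carry
            if d > 9 {
                d = (d + 6) & 0xF // decimal adjust
                carry = 1
            } else {
                carry = 0
            }
            result |= d << shift
        }
        return result
    }

    func main() {
        fmt.Printf("%04x\n", addPackedBCD(0x0158, 0x0267)) // 158 + 267 = 425
    }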
I have a function in my program that generates random strings.
func randString(s []rune, l int) string
s is a slice of runes containing the possible characters in the string. I pass in a rune slice of both capital and lowercase alphabetic characters. l determines the length of the string. This works great. But I also need to generate random hex strings for HTML color codes.
It seems all sources say that it's good programming practice to reuse code. So I made another []rune that held [0-9a-f] and fed that into randString. That was before I realized that the stdlib already includes formatting verbs for int types that suit me perfectly.
In practice, is it better to reuse my randString function or to code a separate (more efficient) function? I could generate a single random int and Sprintf it, rather than looping to generate 6 random ints as randString does.
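For reference, here is roughly what randString does (only the signature above is verbatim; this body is a reconstruction):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // randString picks l runes uniformly at random from the candidate
    // set s and returns them as a string.
    func randString(s []rune, l int) string {
        out := make([]rune, l)
        for i := range out {
            out[i] = s[rand.Intn(len(s))]
        }
        return string(out)
    }

    func main() {
        letters := []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
        fmt.Println(randString(letters, 8)) // e.g. "qZkXwBme"
    }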
1) If there is an exact solution in the standard library, you should almost always choose to use that.
Because:
The standard library is tested. So it does what it says (or what we expect it to do). Even if there is a bug in it, it will be discovered (by you or by others) and will get fixed without your work/effort.
The standard library is written as idiomatic Go. Chances are it's faster even if it does a little more than what you need compared to the solution you could write.
The standard library is (or may be) improved over time. Your program may get faster just because an implementation was improved in a new Go release, without any effort on your part.
The solution is already written (which means it's ready and requires no time from you).
The standard library is well and widely known, so your code will be easier to understand by others and by you later on.
If you've already imported the package (or will in the near future), this means zero or minimal overhead, as libraries are statically linked, so the function you need is already linked into your program (into the compiled executable binary).
2) If there is a solution provided by the standard library but it is a general solution to similar problems and/or offers more than what you need:
That means it's more likely not the optimal solution for you, as it may use more memory and/or work more slowly than your own solution could.
You need to decide if you're willing to sacrifice that little performance loss for the gains listed above. This also depends on how and how often you need to use it (e.g. if it's a one-time use, it shouldn't matter; if it's in a loop called very frequently, it should be examined carefully).
3) And at the other end: you should avoid using a solution provided by the standard library if it wasn't designed to solve your problem...
If it just happens that its "side-effect" solves your problem: Even if the current implementation would be acceptable, if it was designed for something else, future improvements to it could render your usage of it completely useless or could even break it.
Not to mention it would confuse other developers trying to read, improve or use your code (you included, after a certain amount of time).
As a side note: this question is exactly about the function you're trying to create: How to generate a random string of a fixed length in golang? There I've presented multiple very efficient solutions.
This is fairly subjective and not Go-specific, but I think you shouldn't reuse code just for the sake of reuse. The more code you reuse, the more dependencies you create between different parts of your app, and as a result it becomes more difficult to maintain and modify. Easy-to-understand and easy-to-modify code is much more important, especially if you work in a team.
For your particular example I would do the following.
If a random color is generated only once in your package/application then using fmt.Sprintf("#%06x", rand.Intn(256*256*256)) is perfectly fine (as suggested by Dave C).
If random colors are generated in multiple places I would create function func randColor() string and call it. Note that now you can optimize randColor implementation however you like without changing the rest of the code. For example you could have implemented randColor using randString initially and then switched to a more efficient implementation later.
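A sketch of that shape (the Sprintf body is Dave C's suggestion; randColor is just the wrapper this answer proposes):

    package main

    import (
        "fmt"
        "math/rand"
    )

    // randColor hides the generation strategy behind one small function.
    // An earlier version could have built six random hex runes with
    // randString; this version generates a single random int and formats
    // it, and callers never notice the switch.
    func randColor() string {
        return fmt.Sprintf("#%06x", rand.Intn(256*256*256))
    }

    func main() {
        fmt.Println(randColor()) // e.g. "#1e90ff"
    }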
Note: I accidentally posted this question without specifying which STL implementation I was using, and I felt it can't really be updated since it would render most of its answers obsolete.
So, the correct question goes - which sorting algorithm is used in the below code, assuming I'm using the STL library of Microsoft Visual C++?:
list<int> mylist;
// ..insert a million values
mylist.sort();
Just so you don't have to rely on second-hand information, the sort code is right in the list header - it's about 35 lines.
Appears to be a modified iterative (non-recursive) merge sort with up to 25 bins (I don't know if there's a particular name for this variant of merge sort).
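To make the binned scheme concrete, here's a rough Go sketch of the same idea - slices stand in for the list-node splicing the real header does, so this shows the shape of the algorithm, not the actual implementation:

    package main

    import "fmt"

    // merge combines two sorted slices into one sorted slice, stably.
    func merge(a, b []int) []int {
        out := make([]int, 0, len(a)+len(b))
        for len(a) > 0 && len(b) > 0 {
            if b[0] < a[0] {
                out = append(out, b[0])
                b = b[1:]
            } else {
                out = append(out, a[0])
                a = a[1:]
            }
        }
        out = append(out, a...)
        return append(out, b...)
    }

    // binSort: bin i holds a sorted run of length 2^i (or is empty).
    // Each element is merged upward through the bins, carry-style, and
    // at the end all bins are merged together. With 25 bins this covers
    // about 2^25 elements before the last bin starts absorbing overflow.
    func binSort(data []int) []int {
        const numBins = 25
        var bins [numBins][]int
        for _, v := range data {
            run := []int{v}
            i := 0
            for i < numBins-1 && bins[i] != nil {
                run = merge(bins[i], run) // carry into the next bin
                bins[i] = nil
                i++
            }
            if bins[i] != nil { // last bin: keep merging into it
                run = merge(bins[i], run)
            }
            bins[i] = run
        }
        var result []int
        for _, b := range bins {
            if b != nil {
                result = merge(b, result)
            }
        }
        return result
    }

    func main() {
        fmt.Println(binSort([]int{5, 3, 8, 1, 9, 2, 7}))
    }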
At least in recent versions (e.g. VC++ 9.0/VS 2008) MS VC++ uses a merge-sort.
The STL that came with MS VC6 was P. J. Plauger's version of the library (Dinkumware), and it used merge sort in std::list<>::sort(). I don't know about later versions of MS's package.
To my knowledge it is Introsort: http://en.wikipedia.org/wiki/Introsort
People in the Java/.NET world have frameworks which provide methods for sorting a list.
In CS, we all might have gone through Bubble/Insertion/Merge/Shell sorting algorithms.
Do you write any of it these days?
With frameworks in place, do you write code for sorting?
Do you think it makes sense to ask people to write code to sort in an interview? (other than for intern/junior developer requirement)
There are two pieces of code I write today in order to sort data
list.Sort();
enumerable.OrderBy(x => x); // Occasionally a different lambda is used
I work for a developer tools company, and as such I sometimes need to write new RTL types and routines. Sorting is something developers need, so it's something I sometimes need to write.
Don't forget, all that library code wasn't handed down from some mountain: some developer somewhere had to write it.
I don't write the sorting algorithm, but I have implemented IComparer in .NET for a few classes, which was kind of interesting the first couple of times.
I wouldn't write the code for sorting given what is in the frameworks in most cases. There should be an understanding of why a particular sorting strategy like Quick sort is often used in frameworks like .Net.
I could see giving or being given a sorting question where some of the work is implementing the IComparer and understanding the different ways to sort a class. It would be a fairly easy thing to show someone a Bubble sort and ask, "Why wouldn't you want to do this in most applications?"
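In Go terms, the closest analogue to supplying an IComparer is passing a less function to sort.Slice. A quick sketch (the person type is invented for illustration):

    package main

    import (
        "fmt"
        "sort"
    )

    // person is an example type; in .NET you'd hand the sort an
    // IComparer, in Go you hand sort.Slice a less function instead.
    type person struct {
        name string
        age  int
    }

    func main() {
        people := []person{{"carol", 31}, {"alice", 25}, {"bob", 25}}

        sort.Slice(people, func(i, j int) bool {
            if people[i].age != people[j].age {
                return people[i].age < people[j].age // primary key: age
            }
            return people[i].name < people[j].name // tie-break on name
        })

        fmt.Println(people) // [{alice 25} {bob 25} {carol 31}]
    }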
I can say with 100% certainty that I haven't written one of the 'traditional' sort routines since leaving University. It's nice to know the theory behind it, but to apply them to real-world situations that can't be done by other means doesn't happen very often (at least from my experience...).
Only on an employer's interview/test =)
I wrote a merge sort when I had to sort multi-gigabyte files with a custom key comparison. I love merge sort - it's easy to comprehend, stable, and has a worst-case O(n log n) performance.
I've been looking for an excuse to try radix sort too. It's not as general purpose as most sorting algorithms, so there aren't going to be any libraries that provide it, but under the right circumstances it should be a good speedup.
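For the curious, here is roughly what an LSD (least-significant-digit) radix sort looks like for uint32 keys - a sketch, not a tuned implementation:

    package main

    import "fmt"

    // radixSortLSD sorts uint32 values one byte (radix 256) at a time,
    // least significant byte first. Four stable counting-sort passes
    // replace O(n log n) comparisons with O(n) work per pass, which is
    // where the potential speedup comes from.
    func radixSortLSD(a []uint32) {
        buf := make([]uint32, len(a))
        for shift := uint(0); shift < 32; shift += 8 {
            var count [256]int
            for _, v := range a {
                count[(v>>shift)&0xFF]++
            }
            pos := 0
            for i := range count { // prefix sums -> starting offsets
                c := count[i]
                count[i] = pos
                pos += c
            }
            for _, v := range a { // stable scatter into buf
                d := (v >> shift) & 0xFF
                buf[count[d]] = v
                count[d]++
            }
            a, buf = buf, a // swap roles for the next pass
        }
        // Four passes is an even number, so the sorted data ends up
        // back in the caller's slice.
    }

    func main() {
        data := []uint32{170, 45, 75, 90, 802, 24, 2, 66}
        radixSortLSD(data)
        fmt.Println(data)
    }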
Personally, I've not had a need to write my own sorting code for a while.
As far as interview questions go, it would weed out those who didn't pay attention during CS classes.
You could test API knowledge by asking how would you build Comparable (Capital C) objects, or something along those lines.
The way I see it, just like many others fields of knowledge, programming also has a theoretical and a practical approach to it.
The field of "theoretical programming" is the one that gave us quicksort, Radix Sort, Djikstra's Algorithm and many other things absolutely necessary to the advance of computing.
The field of "practical programming" deals with the fact that the solutions created in "theoretical programming" should be easily accessible to all in a much easier way, so that the theoretical ideas can get many, many creative uses. This gave us high-level languages like Python and allowed pretty much any language to implement packed methods for the most basics operations like sorting or searching with a good enough performance to be fit for almost everyone.
One can't live without the other...
Most of us not needing to hand-code a sorting algorithm doesn't mean no one should.
I've recently had to write a sort, of sorts.
I had a list of text entries. The ten most common had to show up first, ordered by the frequency at which they were selected; all other entries had to show up in alphabetical order.
It wasn't crazy hard to do, but I did have to write a sort to support it.
I've also had to sort objects whose elements aren't easily sorted with out-of-the-box code.
The same goes for searching: I had to walk a file and search statically sized records. When I found a record I had to move one record back, because I was inserting before it.
For the most part it was very simple, and I merely pasted in a binary search. Some changes were needed to support the method of access, because I wasn't using an array that was actually in memory. Ah, crap.. I could have treated it like a stream.. See, now I want to go back and take a look..
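In today's terms the core of it would look something like this Go sketch (the fixed record size and big-endian uint64 key are assumptions):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "io"
    )

    const recordSize = 8 // assumed fixed record size: one uint64 key

    // searchRecords binary-searches fixed-size records read on demand
    // via ReadAt, rather than an array in memory. It returns the index
    // of the first record whose key is >= target - i.e. the insertion
    // point, one record back from where a matching scan would stop.
    func searchRecords(r io.ReaderAt, numRecords int, target uint64) (int, error) {
        lo, hi := 0, numRecords
        buf := make([]byte, recordSize)
        for lo < hi {
            mid := (lo + hi) / 2
            if _, err := r.ReadAt(buf, int64(mid)*recordSize); err != nil {
                return 0, err
            }
            if binary.BigEndian.Uint64(buf) < target {
                lo = mid + 1
            } else {
                hi = mid
            }
        }
        return lo, nil
    }

    func main() {
        // Build a fake "file" of sorted records in memory for the demo.
        var b bytes.Buffer
        for _, k := range []uint64{2, 3, 5, 7, 11, 13} {
            binary.Write(&b, binary.BigEndian, k)
        }
        idx, _ := searchRecords(bytes.NewReader(b.Bytes()), 6, 7)
        fmt.Println(idx) // 3: the record with key 7 lives at index 3
    }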
Man, if someone asked me in an interview what the best sort algorithm was, and didn't understand immediately when I said 'timsort', I'd seriously reconsider if I wanted to work there.
Timsort
This describes an adaptive, stable, natural mergesort, modestly called timsort (hey, I earned it <wink>). It has supernatural performance on many kinds of partially ordered arrays (less than lg(N!) comparisons needed, and as few as N-1), yet as fast as Python's previous highly tuned samplesort hybrid on random arrays. In a nutshell, the main routine marches over the array once, left to right, alternately identifying the next run, then merging it into the previous runs "intelligently". Everything else is complication for speed, and some hard-won measure of memory efficiency.
http://svn.python.org/projects/python/trunk/Objects/listsort.txt
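The run-finding half of that description is simple enough to sketch (here in Go; the "intelligent" merging policy is where the real cleverness lives):

    package main

    import "fmt"

    // findRuns marches over the array once, left to right, slicing it
    // into maximal runs that are already non-descending, or strictly
    // descending (which it reverses in place so every run ascends).
    func findRuns(a []int) [][]int {
        var runs [][]int
        for i := 0; i < len(a); {
            j := i + 1
            if j < len(a) && a[j] < a[i] { // strictly descending run
                for j < len(a) && a[j] < a[j-1] {
                    j++
                }
                for l, r := i, j-1; l < r; l, r = l+1, r-1 {
                    a[l], a[r] = a[r], a[l] // reverse to ascending
                }
            } else { // non-descending run
                for j < len(a) && a[j] >= a[j-1] {
                    j++
                }
            }
            runs = append(runs, a[i:j])
            i = j
        }
        return runs
    }

    func main() {
        fmt.Println(findRuns([]int{1, 2, 3, 9, 7, 5, 4, 4, 8}))
        // [[1 2 3 9] [4 5 7] [4 8]]
    }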
Is timsort general-purpose or Python-specific?
I haven't really implemented a sort, except as a coding exercise and to observe interesting features of a language (like how you can do quicksort in one line in Python).
I think it's a valid question to ask in an interview because it reflects whether the developer thinks about these kinds of things... I feel it's important to know what that list.sort() is doing when you call it. It's just part of my theory that you should know the fundamentals behind everything as a programmer.
I never write anything for which there's a library routine. I haven't coded a sort in decades. Nor would I ever. With quicksort and timsort directly available, there's no reason to write a sort.
I note that SQL does sorting for me.
There are lots of things I don't write, sorts being just one of them.
I never write my own I/O drivers. (Although I have in the past.)
I never write my own graphics libraries. (Yes, I did this once, too, in the '80s)
I never write my own file system. (Avoided this.)
There is definitely no reason to code one anymore. I think it is important though to understand the efficiency of what you are using so that you can pick the best one for the data you are sorting.
Yes. Sometimes digging out Shell sort beats the builtin sort routine when your list is only expected to be at most a few tens of records.
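Something like this, say - the gap sequence here is one arbitrary Ciura-style choice, not anything the claim above depends on:

    package main

    import "fmt"

    // shellSort does gapped insertion passes with shrinking gaps. For a
    // few tens of records it's cache-friendly and allocation-free, which
    // is where it can edge out a general-purpose library sort.
    func shellSort(a []int) {
        gaps := []int{57, 23, 10, 4, 1} // truncated Ciura-style sequence
        for _, gap := range gaps {
            for i := gap; i < len(a); i++ {
                v := a[i]
                j := i
                for j >= gap && a[j-gap] > v {
                    a[j] = a[j-gap] // shift larger elements right
                    j -= gap
                }
                a[j] = v
            }
        }
    }

    func main() {
        data := []int{9, 1, 8, 2, 7, 3, 6, 4, 5}
        shellSort(data)
        fmt.Println(data)
    }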