I can't find a clear definition of a polyadic function anywhere on the internet. Please help me with a clear definition. On the other hand, there is some information available about variadic functions: a variadic function takes a variable number of arguments, and apparently such functions are supported by C.
The suffix -adic refers to the number of arguments. So poly + adic means a function that can take multiple arguments, i.e. more than one. In Python you can define the arguments where you declare the function. Hope that helps.
Arithmetic functions in the LISP language are polyadic, i.e. they can take multiple arguments; for example, (+ 1 2 3) adds three arguments.
I have started learning Haskell and I have read that every function in Haskell takes only one argument. I can't understand what magic happens under the hood of Haskell that makes this possible, and I am wondering whether it is efficient.
Example
>:t (+)
(+) :: Num a => a -> a -> a
The signature above means that the (+) function takes one Num and then returns another function, which takes one Num and returns a Num.
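For instance, partially applying (+) to a single argument should give back a function that still expects one more argument:

>:t (+) 1
(+) 1 :: Num a => a -> a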
The example above is relatively easy, but I have started wondering what happens when functions are a little more complex.
My Questions
For the sake of the example I have written a zipWith function and executed it in two ways: once passing one argument at a time, and once passing all arguments at once.
zipwithCustom f (x:xs) (y:ys) = f x y : zipwithCustom f xs ys
zipwithCustom _ _ _ = []
zipWithAdd = zipwithCustom (+)
zipWithAddTo123 = zipWithAdd [1,2,3]
test1 = zipWithAddTo123 [1,1,1]
test2 = zipwithCustom (+) [1,2,3] [1,1,1]
>test1
[2,3,4]
>test2
[2,3,4]
1. Is passing one argument at a time (scenario_1) as efficient as passing all arguments at once (scenario_2)?
2. Are those scenarios any different in terms of what Haskell is actually doing to compute test1 and test2 (apart from the fact that scenario_1 probably takes more memory, as it needs to save zipWithAdd and zipWithAddTo123)?
3. Is this correct, and why: in scenario_1 I iterate over [1,2,3] and then over [1,1,1]?
4. Is this correct, and why: in scenario_1 and scenario_2 I iterate over both lists at the same time?
I realise that I have asked a lot of questions in one post, but I believe they are connected and will help me (and other people who are new to Haskell) to better understand what is actually happening in Haskell that makes both scenarios possible.
You ask about "Haskell", but Haskell the language specification doesn't care about these details. It is up to implementations to choose how evaluation happens -- the only thing the spec says is what the result of the evaluation should be, and carefully avoids giving an algorithm that must be used for computing that result. So in this answer I will talk about GHC, which, practically speaking, is the only extant implementation.
For (3) and (4) the answer is simple: the iteration pattern is exactly the same whether you apply zipwithCustom to arguments one at a time or all at once. (And that iteration pattern is to iterate over both lists at once.)
Unfortunately, the answer for (1) and (2) is complicated.
The starting point is the following simple algorithm:
When you apply a function to an argument, a closure is created (allocated and initialized). A closure is a data structure in memory, containing a pointer to the function and a pointer to the argument. When the function body is executed, any time its argument is mentioned, the value of that argument is looked up in the closure.
That's it.
However, this algorithm kind of sucks. It means that if you have a 7-argument function, you allocate 7 data structures, and when you use an argument, you may have to follow a 7-long chain of pointers to find it. Gross. So GHC does something slightly smarter. It uses the syntax of your program in a special way: if you apply a function to multiple arguments, it generates just one closure for that application, with as many fields as there are arguments.
(Well... that might be not quite true. Actually, it tracks the arity of every function -- defined again in a syntactic way as the number of arguments used to the left of the = sign when that function was defined. If you apply a function to more arguments than its arity, you might get multiple closures or something, I'm not sure.)
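To make that notion of arity concrete, here are two definitions that compute the same function but have different syntactic arity (f and g are just throwaway names for the illustration):

-- arity 2: two arguments appear to the left of the = sign
f x y = x + y

-- arity 1: only one argument to the left of the = sign; the second comes from the lambda
g x = \y -> x + y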
So that's pretty nice, and from that you might think that your test1 would then allocate one extra closure compared to test2. And you'd be right... when the optimizer isn't on.
But GHC also does lots of optimization stuff, and one of those is to notice "small" definitions and inline them. Almost certainly with optimizations turned on, your zipWithAdd and zipWithAddTo123 would both be inlined anywhere they were used, and we'd be back to the situation where just one closure gets allocated.
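If you want to see what the optimizer actually did with your definitions, you can ask GHC to dump the Core it produces (the exact output varies between GHC versions; YourModule.hs is just a placeholder for your source file):

ghc -O2 -ddump-simpl -dsuppress-all YourModule.hs

Then check whether zipWithAdd and zipWithAddTo123 still appear as separate bindings or have been inlined away.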
Hopefully this explanation gets you to where you can answer questions (1) and (2) yourself, but just in case it doesn't, here are explicit answers to those:
Is passing one argument at a time as efficient as passing all arguments at once?
Maybe. It's possible that passing arguments one at a time will be converted via inlining to passing all arguments at once, and then of course they will be identical. In the absence of this optimization, passing one argument at a time has a (very slight) performance penalty compared to passing all arguments at once.
Are those scenarios any different in terms of what Haskell is actually doing to compute test1 and test2?
test1 and test2 will almost certainly be compiled to the same code -- possibly even to the point that only one of them is compiled and the other is an alias for it.
If you want to read more about the ideas in the implementation, the Spineless Tagless G-machine paper is much more approachable than its title suggests, and only a little bit out of date.
How to count all the permutations of a string in Microsoft Small Basic?
The brute-force exploration of all permutations is usually done by recursion in languages such as C and C++. However, Microsoft Small Basic doesn't support arguments for subroutines, so it's impossible to implement the recursive algorithm the same way.
Perhaps it's doable in Small Basic using the Stack? How exactly?
You can't use arguments in functions in Small Basic, but since all variables are global, you can simply set them before you call the function and use them inside it. It is also possible for a function to call itself. This means that you can use functions (or subroutines, as they are called in SB) to 'brute force' this algorithm.
See here:
Sub Printx
  TextWindow.WriteLine(x)  ' print the current value of the global variable x
  x = x + 1                ' x is global, so the change is visible to every call
  Printx()                 ' the subroutine calls itself
EndSub

x = 1
Printx()
Note that this way of doing things may crash the program after around 2,000 'recalls' of the subroutine, because it throws a stack overflow error.
What are the primitive types in bash scripting? I feel like this is a simple question, but it has been surprisingly hard to find the answer. I know there are at least arrays because you can do commands such as
for file in *.less; do
    # code working with "$file"
done
so that makes it appear like *.less is an array.
Also, there are string types, because if I have
a=y
b=z
c=$a+$b
echo $c
>> y+z
Are there any other types besides these? Thanks a lot!
EDIT: After doing further research it also appears that there are associative arrays, which can be declared in the following manner
declare -A address
which I got from here. Are there any examples of other types?
It's an interesting question; here's what I found:
bash supports several programming primitives shared by most
programming languages. It can perform choices (if then else, case), it
can loop (for, while, until) and it has functions (function).
http://dasher.wustl.edu/chem478/software/unix-tools/bash.html
Bash variables are untyped.
Unlike many other programming languages, Bash does not segregate its
variables by "type." Essentially, Bash variables are character
strings, but, depending on context, Bash permits arithmetic operations
and comparisons on variables. The determining factor is whether the
value of a variable contains only digits.
http://tldp.org/LDP/abs/html/untyped.html
Bash provides one-dimensional indexed and associative array variables.
http://www.gnu.org/software/bash/manual/html_node/Arrays.html#Arrays
Let us assume you can represent a program as a mathematical function; that's possible. What does the program representation of the first derivative of that function look like? Is there a way to transform a program into its "derivative" form, and does this make sense at all?
Yes it does make sense, it's known as Automatic Differentiation. There are one or two experimental compilers which can do this, for example NAGware's Differentiation Enabled Fortran Compiler Technology. And there are a lot of research papers on the topic. I suggest you get Googling.
First, it only makes sense to try to get the derivative of a pure function (one that does not affect external state and returns the exact same output for every input). Second, the type system of many programming languages involves a lot of step functions (e.g. integers), meaning you'd have to get your program to work in terms of continuous functions in order to get a valid first derivative. Third, getting the derivative of any function involves breaking it down and manipulating it symbolically. Thus, you can't get the derivative of a function without knowing what operations it is made of. This could be achieved with reflection.
You could create a derivative approximation function if your programming language supports closures (that is, nested functions and the ability to put functions into variables and return them). Here is a JavaScript example taken from http://en.wikipedia.org/wiki/Closure_%28computer_science%29 :
function derivative(f, dx) {
    // approximate f'(x) with a forward difference quotient
    return function(x) {
        return (f(x + dx) - f(x)) / dx;
    };
}
Thus, you could say:
function f(x) { return x*x; }
f_prime = derivative(f, 0.0001);
Here, f_prime will approximate function(x) {return 2*x;}
If a programming language implemented higher-order functions and enough algebra, one could implement a real derivative function in it. That would be really cool.
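As a rough sketch of what that could look like, here is a small Haskell illustration using dual numbers, a simple form of the forward-mode automatic differentiation mentioned in another answer. The names Dual and derive are just ones I picked for the example; this computes exact derivatives (up to floating point) at a point for functions built from overloaded arithmetic, rather than symbolic derivatives.

-- A dual number carries a value together with its derivative
-- with respect to the function's input.
data Dual = Dual Double Double deriving Show

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')   -- product rule
  fromInteger n         = Dual (fromInteger n) 0
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0

-- Differentiate any function that is polymorphic over Num
-- by seeding the input with derivative 1.
derive :: (Dual -> Dual) -> Double -> Double
derive f x = let Dual _ d = f (Dual x 1) in d

-- derive (\x -> x * x) 3      evaluates to 6.0
-- derive (\x -> x * x + 2) 5  evaluates to 10.0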
See Lambda the Ultimate discussions on Derivatives and dissections of data types and Derivatives of Regular Expressions
How do you define the mathematical function of a program?
A derivative represents the rate of change of a function. If your function isn't continuous, its derivative will be undefined over most of the domain.
I'm just gonna say that this doesn't make a lot of sense, as a program is much more abstract and "ruleless" than a mathematical function. Since a derivative is a measure of the change in output as the input changes, there are certainly some programs where this could apply. However, you'd need to be able to quantify both your input and output in numerical terms.
Since input and output would both be numerical, it's reasonable to assume that your program represents or operates similarly to a mathematical function, or a series of functions. Hence, you can easily represent a derivative, but it would be no different from converting the mathematical derivative of a function into a computer program.
If the program is denoted as a (Schwartz) distribution, then you have some notion of derivative, assuming that test functions model your postcondition (you can still take the limit to get a characteristic function). For instance, the assignment x := x + 1 is associated with the Dirac distribution \delta_{x_0+1}, where x_0 is the initial value of the variable x. However, I have no idea what the computational meaning of \delta_{x_0+1}' is.
I am wondering: what if the program you're trying to "derive" uses some form of heuristics? How can it be derived then?
Half-jokingly, we all know that all real programs use at least a rand().
Lambda calculus of course is quite elegant, but doesn't it bother you that there is this asymmetry between input and output of a function? I.e. you can make the function take two parameters (by returning a function) but you can't make it return two values.
I don't think we could find it in The Book.
You can make it return a function whose evaluations return two values.
Or you can return a compound structure, just like a vector-valued function returns a vector that has separate components inside.
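To make the "compound structure" idea concrete, even plain lambda calculus can encode a pair and its projections as functions. Here is a small sketch in Haskell notation (pair, first and second are just names I picked for the illustration):

-- a pair is a function that hands both components to a consumer
pair a b = \f -> f a b
first p  = p (\a b -> a)
second p = p (\a b -> b)

-- first  (pair 1 2)  evaluates to 1
-- second (pair 1 2)  evaluates to 2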
BTW what does your header question have to do with the actual question you've asked?