Is there a way to use a dynamic function name in Elixir from string interpolation like in Ruby?

I want to be able to construct a function call from a string in Elixir. Is this possible? The equivalent Ruby method call would be:
"uppercase".send("u#{:pcase}")

Although the answer by @fhdhsni is perfectly correct, I'd add some nitpicking clarification.
The exact equivalent of Kernel#send from Ruby is impossible in Elixir, because Kernel#send allows calling private methods on the receiver, and in Elixir private functions cannot be called from outside their module at all.
If you meant Kernel#public_send, it can be achieved with Kernel.apply/3, as mentioned by @fhdhsni. The only correction: since the atom table is not garbage collected, and one surely wants to call a function that actually exists, it should be done with String.to_existing_atom/1.
apply(
  String,
  String.to_existing_atom("u#{:pcase}"),
  ["uppercase"]
)
Also, one might use macros during compilation to generate the respective clauses when the list of functions to call is predictable (when it's not, the code already smells).
defmodule Helper do
  Enum.each(~w|upcase|a, fn fname ->
    def unquote(fname)(param),
      do: String.unquote(fname)(param)

    # or
    # defdelegate unquote(fname)(param), to: String
  end)
end
Helper.upcase("uppercase")
#⇒ "UPPERCASE"

In Elixir, module and function names are atoms. You can use apply/3 to call them dynamically.
apply(String, String.to_atom("u#{:pcase}"), ["uppercase"]) # "UPPERCASE"
Depending on your use case it might not be a good idea to create atoms dynamically (since the atom table is not garbage collected).
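Combining both answers into a rough sketch: the Dynamic module and its call/3 wrapper below are invented names, but the pattern (refuse to mint new atoms, then check that the function really exists before applying) is a safe way to dispatch from strings:

defmodule Dynamic do
  # Call `fun_name` (a string) on `module` with `args`.
  # String.to_existing_atom/1 raises if the atom was never created,
  # so untrusted input cannot grow the atom table.
  def call(module, fun_name, args) when is_binary(fun_name) do
    fun = String.to_existing_atom(fun_name)

    if function_exported?(module, fun, length(args)) do
      {:ok, apply(module, fun, args)}
    else
      {:error, :undefined_function}
    end
  end
end

Dynamic.call(String, "upcase", ["uppercase"])
#⇒ {:ok, "UPPERCASE"}
Dynamic.call(String, "upcase", ["a", :default, :oops])
#⇒ {:error, :undefined_function}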

Related

Local declaration of (built-in) Lua functions to reduce overhead

It is often said that one should re-declare (certain) Lua functions locally, as this reduces the overhead.
But what is the exact rule / principle behind this? How do I know for which functions this should be done and for which it is superfluous? Or should it be done for EVERY function, even your own?
Unfortunately I can't figure it out from the Lua manual.
The principle is that every time you write table.insert for example, the Lua interpreter looks up the "insert" entry in the table called table. Actually, it means _ENV.table.insert - _ENV is where the "global variables" are in Lua 5.2+. Lua 5.1 has something similar but it's not called _ENV. The interpreter looks up the string "table" in _ENV and then looks up the string "insert" in that table. Two table lookups every time you call table.insert, before the function actually gets called.
But if you put the function in a local variable, the interpreter gets it directly from the local, which is faster. It still has to look it up once, when filling in the local variable.
It is superfluous if you only call the function once within the scope of the local variable, but that is pretty rare. There is no reason to do it for functions which are already declared as local. It also makes the code harder to read, so typically you won't do it except when it actually matters (in code that runs a lot of times).
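To make that concrete, a minimal sketch of the idiom (the loop count is arbitrary):

local insert = table.insert -- both lookups happen just once, here

local t = {}
for i = 1, 1000000 do
    insert(t, i) -- direct local access: no _ENV.table.insert lookups per call
end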
My favorite tool for speeding things up in Lua is to place all the usable stuff for a value in a metatable field called __index.
A common example of this is the string datatype: it has all the string functions as methods in the __index table of its metatable.
Therefore you can do things like this directly on a string...
print(('istaqsinaayok'):upper():reverse())
-- Output: KOYAANISQATSI
The logic above: the method lookup on the string itself fails, and therefore the method is looked up through the __index metamethod.
I like to implement the same behaviour for the number datatype...
-- do debug.setmetatable() only once; it applies to all numbers
math.pi = debug.setmetatable(math.pi, {__index = math})
-- From now on, numbers are objects ;-)
-- Let's output pi, but without using math.pi this time
print((180):rad()) -- computing pi with the method rad()
-- Output: 3.1415926535898
The logic "if the key does not exist, look it up in __index" is only one step behind a local... imho.
Another example that works with this method...
-- koysenv.lua
_G = setmetatable(_G,
  { -- metamethods
    __index = {}, -- table constructor
    __name = 'Global Environment'
  })
-- Copy references to everything in _G into __index
for key, value in pairs(_G) do
  getmetatable(_G)['__index'][key] = value
end
-- Remove everything that is now in __index from _G
for key, value in pairs(getmetatable(_G)['__index']) do
  _G[key] = nil
end
return _G
When required last, it moves everything in _G into the freshly created __index table.
After that, _G looks totally empty ;-P
...but the environment keeps working as if nothing had happened.
To add to what @user253751 already said:
Code Quality
Lua is a very flexible language. Other languages require you to import the parts of the standard library you use; Lua doesn't. Lua usually provides one global environment, which is better left unpolluted. If you play with the environment _ENV (setfenv/getfenv on Lua 5.1 / LuaJIT), you'll want to still be able to access the Lua libraries. For that purpose you may want to localize them before changing the environment; you can then use your "clean" environment for your module / API table / class / whatever. Another option here is to use metatables; metatable chains may quickly get hairy though, and they are likely to harm performance, as a failed table lookup is required each time to trigger the indexing metamethods. Localizing otherwise global variables can thus be seen as a way of importing them; to give a minimal & rough example:
local print = print -- localize ("import") everything we need first
_ENV = {} -- set environment to clean table for module
function hello() -- this writes to _ENV instead of _G
    print("Hello World!")
end
hello() -- inside the environment, all variables set here are accessible
return _ENV -- "export" the API table
Performance
Very minor nitpick: Local variables aren't strictly always faster. In very extreme cases (i.e. lots of upvalues), indexing a table (which doesn't need an upvalue if it's the environment, the string metatable or the like) may actually be faster.
I imagine that localizing variables is also what makes many of the optimizations in optimizing compilers such as LuaJIT applicable in the first place; otherwise very few guarantees can be made. A global like print might well be overwritten somewhere in a deep code path, so the indexing operation has to be repeated every time; for a local, on the other hand, the interpreter has far more guarantees regarding its scope. It is thus able to detect constants that are only written once, on initialization for instance; for globals, very little code analysis is possible.

Chaining methods in Ruby, and injecting a block into the chain

Consider the following Ruby expression:
y=x.a.b.c.d.e.f
Of course, x is an object and a to f are methods defined for a class which matches the return value of the previous method in the chain. Now say that I want to replace the invocation of method c by a custom block, i.e. I would like to achieve the effect of
temp=x.a.b
temp1=.... (calculate something based on the value of temp)
y=temp1.d.e.f
but with using method chaining.
It is of course trivial to define a suitable method to achieve this:
class Object
  def pass
    yield(self)
  end
end
which would allow me to write something like
y=x.a.b.pass {|the_b| .....}.d.e.f
Now to my question:
Given that Ruby already has a method for a similar problem (Object#tap), I wonder why it does not have a method like the Object#pass I just explained. I suspect that either
(a) Ruby already offers a feature like this, and I'm just too stupid to find it, or
(b) what I want to achieve would be considered bad programming style (but then, why?)
Is (a) or (b) correct, or did I miss something here?
(a) Yes. Ruby already has that. It is called yield_self.
(b) No. It is not a bad style.
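For reference, yield_self appeared in Ruby 2.5 and was aliased as then in Ruby 2.6. A small illustration (the string chain merely stands in for the question's x.a.b chain):

y = "hello world"
  .upcase
  .then { |s| s.gsub("WORLD", "RUBY") } # the block's return value continues the chain
  .split
# => ["HELLO", "RUBY"]

Unlike tap, which returns the receiver, then/yield_self returns the block's value, which is exactly what a mid-chain transformation needs.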

Ruby: how to pass object to a block and return a calculated value depending on the obj?

I would like to do something similar:
seconds=Time.parse("0:26:29.489").magic{|z| z.hour+z.min+z.sec+z.nsec.fdiv(1_000_000)}
to convert a timestamp into seconds (with fractions too), instead of writing:
d=Time.parse("0:26:29.489")
seconds=d.hour+d.min+d.sec+d.nsec.fdiv(1_000_000)
to spare a temporary variable d. But what should I use for "magic", if anything?
Ruby has tap, but that won't help you here. What you want is something that would be called pipe, but sadly it's not there. At least not without a gem that monkey patches Object. Though I think it should be.
You can create a lambda and immediately call it, which avoids the intermediate variable (or at least contains it within the lambda's block scope, as in your magic example), but I'm not sure you gain much, and I would probably stick with what you have. The lambda approach would look like this:
# will return the value for "seconds"
->(d) { d.hour+d.min+d.sec+d.nsec.fdiv(1_000_000) }.(Time.parse("0:26:29.489"))
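For completeness: on Ruby 2.5+ the wished-for "magic" exists as Object#yield_self, aliased then since 2.6, so no gem or monkey patch is needed. A sketch; note it also corrects the arithmetic, since hours and minutes must be scaled to seconds and nsec divided by 10**9 rather than 10**6:

require "time"

seconds = Time.parse("0:26:29.489").then do |z|
  z.hour * 3600 + z.min * 60 + z.sec + z.nsec.fdiv(1_000_000_000)
end
# => 1589.489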

Using the name "function" for a variable in Python code

Is using the word function for the name of an argument considered bad style in Python code?
def conjunction_junction(function):
pass # do something and call function in here
This pattern occurs all the time, especially in decorators. You see func, fn and f used all of the time but I prefer to avoid abbreviations when possible. Is the fact that it's the name of a type enough to warrant abbreviating it?
>>> type(conjunction_junction).__name__
'function'
It's not a reserved keyword, so I don't see why not.
From the Style Guide:
If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption. Thus class_ is better than clss. (Perhaps better is to avoid such clashes by using a synonym.)
Using function is perfectly fine.
There is nothing in the style guide about it specifically. The reason the use of type names such as str and list is highly discouraged is that they have functionality within the language: shadowing them would obscure the functionality of the code. function, on the other hand, isn't bound to anything.
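To make the hazard concrete (a throwaway illustration):

list = [1, 2, 3]  # shadows the built-in constructor in this scope
# list("abc")     # would now raise TypeError: 'list' object is not callable

Rebinding the name function, by contrast, takes nothing away.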
I suspect func, fn, and f are used because they are all shorter than typing function ;)
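Since the question mentions decorators: a minimal sketch using the unabbreviated name (the logged decorator and its demo function are invented for illustration):

import functools

def logged(function):
    # Wrap `function` so each call is announced before dispatching.
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        print(f"calling {function.__name__}")
        return function(*args, **kwargs)
    return wrapper

@logged
def conjunction_junction():
    return "hooking up words and phrases and clauses"

conjunction_junction()  # prints: calling conjunction_junction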

Scala coalesces multiple function call parameters into a Tuple -- can this be disabled?

This is a troublesome violation of type safety in my project, so I'm looking for a way to disable it. It seems that if a function takes an AnyRef (or a java.lang.Object), you can call the function with any combination of parameters, and Scala will coalesce the parameters into a Tuple object and invoke the function.
In my case the function isn't expecting a Tuple, and fails at runtime. I would expect this situation to be caught at compile time.
object WhyTuple {
  def main(args: Array[String]): Unit = {
    fooIt("foo", "bar")
  }

  def fooIt(o: AnyRef) {
    println(o.toString)
  }
}
Output:
(foo,bar)
No implicits or Predef at play here at all -- just good old fashioned compiler magic. You can find it in the type checker. I can't locate it in the spec right now.
If you're motivated enough, you could add a -X option to the compiler to prevent this.
Alternatively, you could avoid writing arity-1 methods that accept a supertype of TupleN.
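As an aside, later Scala 2 releases did grow such an option: if memory serves, -Yno-adapted-args (available in 2.11 and 2.12, removed in 2.13, where adaptation is already much stricter) makes the compiler refuse to adapt an argument list into a tuple. In sbt that would be:

scalacOptions += "-Yno-adapted-args"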
What about something like this:
object Qx2 {
  @deprecated def callingWithATupleProducesAWarning(a: Product) = 2
  def callingWithATupleProducesAWarning(a: Any) = 3
}
Tuples have the Product trait, so any call to callingWithATupleProducesAWarning that passes a tuple will produce a deprecation warning.
Edit: According to people better informed than me, the following answer is actually wrong: see this answer. Thanks Aaron Novstrup for pointing this out.
This is actually a quirk of the parser, not of the type system or the compiler. Scala allows zero- or one-arg functions to be invoked without parentheses, but not functions with more than one argument. So as Fred Haslam says, what you've written isn't an invocation with two arguments, it's an invocation with one tuple-valued argument. However, if the method did take two arguments, the invocation would be a two-arg invocation. It seems like the meaning of the code affects how it parses (which is a bit suckful).
As for what you can actually do about this, that's tricky. If the method really did require two arguments, this problem would go away (i.e. if someone then mistakenly tried to call it with one argument or with three, they'd get a compile error as you expect). Don't suppose there's some extra parameter you've been putting off adding to that method? :)
The compiler is capable of interpreting method calls without round brackets, so it takes the round brackets in the fooIt call to mean a Tuple. Your call is the same as:
fooIt( ("foo","bar") )
That being said, you can make the method exclude such calls, and still retrieve the value, by requiring an explicit wrapper like Some(AnyRef) or Tuple1(AnyRef).
I think the definition of (x, y) in Predef is responsible. The "-Yno-predefs" compiler flag might be of some use, assuming you're willing to do the work of manually importing any implicits you otherwise need. By that I mean that you'll have to add import scala.Predef._ all over the place.
Could you also add a two-param overload, which would prevent the compiler from applying the syntactic sugar? By making the types suitably obscure you're unlikely to get false positives. E.g.:
object WhyTuple {
  ...
  class DummyType
  def fooIt(a: DummyType, b: DummyType) {
    throw new UnsupportedOperationException("Dummy function - should not be called")
  }
}
