Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
Is there a programming language in which any of the following:
a numeric literal next to a variable 3x, -0.5y
a numeric literal or numeric variable next to a parenthesized expression a(b+c+d) 2(x-y)
two adjacent parenthesized expressions (1+x)(1-x) (4a-5b)(1+4c)
is interpreted as multiplication?
I can see the syntactical problems that this would cause, but I'm curious if any language has gone ahead and done it anyway.
TI-BASIC does it in certain circumstances. I believe that certain CAS-oriented languages do as well.
IIRC, Fortress has a "juxtaposition operator" that for numeric types is defined as multiplication.
Some high-level languages, such as Mathematica, can work with symbolic expressions rather than plain variables. You can also query Wolfram Alpha in the same fashion, omitting the multiplication operator.
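As a rough illustration (not how TI-BASIC or Fortress actually implement it), the juxtaposition rule can be sketched in Python as a preprocessing pass that inserts explicit `*` operators before ordinary parsing. It also shows the main syntactic problem the question alludes to: `sin(x)` would become `sin*(x)`, so a real language has to disambiguate juxtaposition from function application.

```python
import re

def insert_multiplication(expr):
    """Insert explicit '*' where juxtaposition implies multiplication.

    Covers the three cases from the question: a number before a
    variable (3x), a number or variable before '(', and ')('.
    A real implementation would handle this in the grammar instead,
    and would need to special-case function calls like sin(x).
    """
    # digit followed by a letter or '(':  3x -> 3*x,  2(x-y) -> 2*(x-y)
    expr = re.sub(r'(\d)\s*([A-Za-z(])', r'\1*\2', expr)
    # identifier or ')' followed by '(':  a(b+c) -> a*(b+c),  (1+x)(1-x) -> (1+x)*(1-x)
    expr = re.sub(r'([A-Za-z)])\s*\(', r'\1*(', expr)
    return expr

print(insert_multiplication("3x"))          # 3*x
print(insert_multiplication("(1+x)(1-x)"))  # (1+x)*(1-x)
```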
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I've been looking at algorithms used to calculate various functions, such as the CORDIC algorithm for trigonometric functions.
I was wondering how the error function is usually calculated. Wikipedia has a number of approximations, but is there one algorithm that is generally preferred when coding the error functions for numerical computing?
Your best bet is to check what actual implementations do, here is a selection:
Boost: http://www.boost.org/doc/libs/1_55_0/boost/math/special_functions/erf.hpp
GNU Scientific Library: http://www.gnu.org/software/gsl/
glibc: http://www.gnu.org/software/libc/index.html
There are probably others.
If you are looking for the basic ideas behind these algorithms rather than the exact details, you should know that the Taylor expansion of these functions usually provides the asymptotically optimal way to compute them (either directly or after a re-expression), so it basically boils down to how you refine the computation of the Taylor expansion. If you are unfamiliar with Taylor expansions and how they relate to functions like erf, see http://en.wikipedia.org/wiki/Taylor_series and http://en.wikipedia.org/wiki/Error_function#Taylor_series
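To make the Taylor-expansion idea concrete, here is a minimal sketch in Python of the Maclaurin series for erf. This is only a toy: the production implementations linked above (Boost, glibc) use rational approximations and switch expansions depending on the argument range, because the plain series degrades for large |x|.

```python
import math

def erf_taylor(x, terms=30):
    """Maclaurin series for the error function:
    erf(x) = 2/sqrt(pi) * sum_{n>=0} (-1)^n x^(2n+1) / (n! * (2n+1)).
    Accurate for small |x|; real libraries switch to other expansions
    for large |x|.
    """
    c = x            # running value of (-1)^n * x^(2n+1) / n!
    total = 0.0
    for n in range(terms):
        total += c / (2 * n + 1)
        c *= -x * x / (n + 1)   # advance to the n+1 term
    return 2.0 / math.sqrt(math.pi) * total

print(erf_taylor(1.0))   # close to math.erf(1.0)
```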
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
So I was just in Racket thinking about using keys to interact with the computer, and keys are interpreted as strings by Racket. I am looking to optimize my code and was wondering whether strings or symbols are faster to operate on.
If the set of possible keys is well-defined, use symbols. Otherwise, use strings.
The main difference between strings and symbols is that symbols are (by default) interned. With strings, you can have multiple strings that have the same contents, but are different objects (they do not compare as eq?). With symbols, two symbols that have the same contents are guaranteed to be the same object.
The advantage of this is that you can do symbol comparisons using eq?, whereas for strings you have to use string=? or equal?.
However, in order for this magic to happen, behind the scenes, Scheme implementations maintain an intern pool, which is basically like a string-to-symbol hash table. If you call string->symbol and the string is not already in the intern table, it will add the string (and its corresponding symbol) to the table, so if your set of possible keys is not well-defined, you can junk up the intern table pretty quickly.
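The interning behavior described above is not unique to Scheme; the same contents-to-single-object mapping can be observed in Python with sys.intern, which makes a handy analogy (the Racket answer would use string->symbol in the same role):

```python
import sys

a = "some key".replace(" ", "_")   # string built at run time
b = "some_key"                     # compile-time constant

print(a == b)    # True  -- equal contents
print(a is b)    # False -- two distinct objects, like two Scheme strings

ia = sys.intern(a)
ib = sys.intern(b)
print(ia is ib)  # True  -- interning maps equal contents to one object,
                 # analogous to Scheme's string->symbol
```

This is why `eq?` on symbols is a cheap pointer comparison, while `string=?` has to walk both strings.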
Edit: When you say "keys", did you mean keyboard characters? That is definitely a well-defined set, so you can use symbols.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
What is the difference between dynamic typing, duck typing, polymorphism, and parametric polymorphism?
I ask because Ruby has each of these (according to Wikipedia), though I am interested more generally.
Dynamic typing means you don't declare the types of variables; values carry their types (number, boolean, string, etc.) at run time, and type checks happen when the code executes rather than at compile time.
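A quick illustration in Python (itself dynamically typed, like Ruby): the same name can hold values of different types over time, and a type error only surfaces when the offending line actually runs.

```python
x = 3
print(type(x))          # <class 'int'>

x = "three"             # the same name now holds a str
print(type(x))          # <class 'str'>

try:
    result = x + 3      # checked only at run time, when executed
except TypeError as e:
    print("runtime type error:", e)
```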
Duck typing means that we are not interested in what type an object is; instead we care about the object's behavior: if an object responds to the methods we need, then it satisfies our requirements. Hence the well-known phrase: "when I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."
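A small sketch of duck typing in Python (the idea is the same in Ruby): the function below never checks the type of its argument, only whether it responds to quack.

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No isinstance check: anything that responds to quack() is accepted.
    return thing.quack()

print(make_it_quack(Duck()))     # Quack!
print(make_it_quack(Person()))   # I'm quacking!
```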
From Wikipedia: parametric polymorphism is a way to make a language more expressive, while still maintaining full static type-safety. Using parametric polymorphism, a function or a data type can be written generically so that it can handle values identically without depending on their type.
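Parametric polymorphism can be sketched with Python's typing module (a stand-in here; classic examples are ML and Haskell generics, and Ruby itself has no static equivalent): the function is written once, over a type parameter T, and never inspects what T actually is.

```python
from typing import List, TypeVar

T = TypeVar("T")

def first(items: List[T]) -> T:
    # The body handles every element type identically; it never
    # depends on T, which is what makes it parametric.
    return items[0]

print(first([1, 2, 3]))    # works for int
print(first(["a", "b"]))   # and, unchanged, for str
```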
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
Out of sheer curiosity and the pursuit of trivia, I couldn't find an answer on Google quickly.
Dear fellow programmers, what is the first programming language to provide an interactive shell?
I can't prove that other systems weren't earlier, but the LISP REPL (read-eval-print loop) is one common name for this style of interactive interpreter.
The LISP I Programmers Manual from 1960 (PDF) includes a mention on page 2 that is apropos:
Enlargements of the basic system are available for various purposes. The compiler version of the LISP system can be used to compile S-expressions into machine code. Values of compiled functions are computed about 60 times faster than the S-expressions for the functions could be interpreted and evaluated. The LISP-compiler system uses about half of the 32,000 memory.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have a set of keys, e.g. (IENDKCAGI, RZRLBYBFH), and now I want to find the algorithm behind them. I already know the basics of cryptography, but I don't know how to start reverse engineering the algorithm.
I am not looking for a specific algorithm; I am just interested in what the process would look like.
cheers
EDIT: I don't want to crack any key! I buy the software I need!
I'm just interested in the approach of reverse engineering a checksum from its result, without knowing the algorithm; that's my conceptual formulation. This topic is more theoretical, but in my opinion it also has a certain relevance for Stack Overflow.
You can analyze it to some degree, at least enough to rule out several possibilities. You say you have a set of keys, and I'm not sure what you mean by that, so pretend for discussion that the left value is the plaintext and the right value is the encrypted equivalent.
You can determine that the left value has only one repeating character, "I", and that the right value has two, "R" and "B". From that you can rule out a simple substitution cipher, even one with characters rearranged.
Both values appear to have only characters in the range [A-Z] (a larger sample would help confirm), so you can rule out encryption techniques that yield binary results, like most block and stream ciphers. In fact, use of such a limited character set implies that it was designed for use by people rather than machines. That would imply a relatively simple cipher technique, but may also involve an additional key to which you do not have access.
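The two quick checks described above (repeated characters, restricted character range) are easy to automate. Here is a small, hypothetical helper in Python; `analyze` is my own name, not part of any cryptanalysis library:

```python
from collections import Counter

def analyze(key):
    """Report repeated characters and whether the key stays
    within [A-Z], the two quick checks discussed above."""
    counts = Counter(key)
    repeats = sorted(c for c, n in counts.items() if n > 1)
    all_upper = all("A" <= c <= "Z" for c in key)
    return repeats, all_upper

print(analyze("IENDKCAGI"))   # (['I'], True)
print(analyze("RZRLBYBFH"))   # (['B', 'R'], True)
```

Running it on more keys from the same generator would quickly confirm or refute the limited-character-set observation.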