How are data structures created in declarative programming?

An article I read gave this example for the difference between declarative and imperative programming:
Declarative
small_nums = [x for x in range(20) if x < 5]
Imperative
small_nums = []
for i in range(20):
    if i < 5:
        small_nums.append(i)
The imperative example declares a list and fills it step by step. How is the data stored in the declarative example? Or is the storage structure determined by a separate piece of software?

Based on my searches, declarative programming does not create data structures itself. Instead, declarative programming is an abstraction: that is, it calls on objects that create the needed structures.
With that in mind, the imperative code is not done away with; it's just pre-written.
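The same point is easy to see in Haskell (a minimal sketch for illustration, not from the original question): the comprehension is syntactic sugar that the compiler rewrites into ordinary library calls, which build the same linked list cell by cell.

smallNums :: [Int]
smallNums = [x | x <- [0 .. 19], x < 5]

-- Roughly what the compiler desugars the comprehension into:
smallNums' :: [Int]
smallNums' = filter (< 5) [0 .. 19]

In both forms the result is stored the same way; the declarative notation just hides the construction loop.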

Related

Purely functional languages?

What is a purely functional language? And what is a purely functional data structure?
I sort of know what a functional language is, but I don't know what the "pure" part means.
Can someone explain it to me? Thanks!
When functional programmers refer to a pure function, they are referring to the concept of referential transparency.
Referential transparency means that a call can be substituted with its resulting value without changing the behaviour of the program.
Consider some function that adds 2 to a number:
let add2 x = x + 2
Any call to add2 2 in the program can be substituted with the value 4 without changing any behaviour.
Now consider we throw a print into the mix:
let add2 x =
    print x
    x + 2
This function still returns the same result as the previous one, but we can no longer do the value substitution without changing the program's behaviour, because add2 2 has the side effect of printing 2 to the screen.
It is therefore not referentially transparent, and thus an impure function.
Now that we have a good definition of a pure function, we can define a purely functional language as a language where we work pretty much exclusively with pure functions.
Note: It is still possible to perform effects (such as printing to the console) in a purely functional language but this is done by treating the effect as a value that represents the action to be performed rather than as a side-effect within some function. These effect values are then composed into a larger set of program behaviour.
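As a tiny illustration of that last point (a minimal sketch in Haskell; the greet function is my own name for it): an IO action is an ordinary value that describes an effect, and a program is built by composing such values.

-- putStrLn produces a value of type IO () that *describes* printing;
-- nothing happens until the runtime executes the program bound to main.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

main :: IO ()
main = greet "world" >> greet "again"  -- composing two effect values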
A purely functional data structure is then simply a data structure that is designed to be used from a purely functional language.
Since mutating a data structure with a function would break this referential transparency property, we need to return a new data structure each time we e.g. add or remove elements.
There are particular types of data structures where we can do this efficiently, sharing lots of memory from prior copies: singly-linked lists and various tree based structures are the most common examples but there are many others.
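For instance (a sketch using Haskell's built-in lists), "adding" to the front of a singly-linked list allocates one new cell and shares the entire old list as its tail:

xs :: [Int]
xs = [2, 3, 4]

-- One new cons cell; xs is shared as the tail, not copied.
ys :: [Int]
ys = 1 : xs

Both xs and ys remain valid and unchanged, which is exactly the persistence property mentioned below.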
Most functional languages in use today are not pure, in that they provide ways to interact with the real world directly; Haskell is the best-known example of one that has stayed pure.
Purely functional data = persistent data (i.e. immutable).
Pure function = given the same input, always produces the same output, and neither contains nor is affected by side effects.

How To Use Classic Custom Data Structures As Java 8 Streams

I saw a SO question yesterday about implementing a classic linked list in Java. It was clearly an assignment from an undergraduate data structures class. It's easy to find questions and implementations for lists, trees, etc. in all languages.
I've been learning about Java lambdas and trying to use them at every opportunity to get the idiom under my fingers. This question made me wonder: How would I write a custom list or tree so I could use it in all the Java 8 lambda machinery?
All the examples I see use the built in collections. Those work for me. I'm more curious about how a professor teaching data structures ought to rethink their techniques to reflect lambdas and functional programming.
I started with an Iterator, but it doesn't appear to be fully featured.
Does anyone have any advice?
Exposing a stream view of arbitrary data structures is pretty easy. The key interface you have to implement is Spliterator, which, as the name suggests, combines two things -- sequential element access (iteration) and decomposition (splitting).
Once you have a Spliterator, you can turn that into a stream easily with StreamSupport.stream(). In fact, here's the default stream() method from Collection (which most collections just inherit):
default Stream<E> stream() {
    return StreamSupport.stream(spliterator(), false);
}
All the real work is in the spliterator() method, and there's a broad range of spliterator quality. The absolute minimum you need to implement is tryAdvance; if that's all you implement, the stream will work sequentially but will lose out on most of the stream optimizations. Look in the JDK sources (Arrays.stream(), IntStream.range()) for examples of how to do better.
I'd look at http://www.javaslang.io for inspiration; it's a library that does exactly what you want to do: implement custom lists, trees, etc. in a Java 8 manner.
It specifically doesn't closely couple with the JDK collections outside of importing/exporting methods, but re-implements all the immutable collection semantics that a Scala (or other FP language) developer would expect.

Are data structures and algorithms the same for all programming languages?

If a person learns data structures and algorithms in one programming language, do they need to learn them again for every other language?
I am about to start a book on data structures and algorithms in JavaScript, since I also want to learn web development.
Will it help me with other languages too?
Data structures and algorithms are concepts that are independent of language. Therefore, once you master them in your favorite language, it's relatively easy to switch to another one.
Now, if you're asking about the built-in methods and APIs that different languages provide, those do differ, but you shouldn't be learning specific APIs from a data structures and algorithms book anyway.
Yes... and no.
While the concepts behind algorithms and data structures, like space and time complexity or mutability, are language agnostic, some languages might not let you implement some of those patterns at all.
A good example is recursion. In some languages, like Haskell, recursion is the standard way to iterate over a collection of elements. In other languages, like C, you should avoid recursive algorithms on unbounded collections, as you can easily hit the depth limit of the stack. You could also imagine a language that is not even stack-based, in which a recursive algorithm would be impossible to implement directly. You could implement a stack on top of such a language, but it would almost certainly be slower than implementing the algorithm in a different fashion.
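To make the Haskell point concrete (a minimal sketch, not from the original answer), the idiomatic way to walk a list is structural recursion rather than a loop:

-- Idiomatic Haskell iteration: recurse on the structure of the list.
sumList :: [Int] -> Int
sumList []       = 0
sumList (x : xs) = x + sumList xs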
Another example is object-oriented data structures. Some languages, like Haskell, do not let you change values: everything is immutable and must be copied to be changed. This is analogous to how numbers are handled in JavaScript, where you cannot change the value 2 itself, but you can take the value 2, add 1 to it, and store the result in a new location. Other languages, like C, do not have (or only awkwardly emulate) object-oriented programming, which will break most of the data structure patterns you will learn from a JavaScript-oriented book.
In the end, it all boils down to performance. You don't write C code the way you write JavaScript or F# code. Each language has its quirks and thus needs different implementations, even though the idea behind the algorithms and structures stays the same. You can always emulate a pattern in a language that does not support it, like OOP in C, but it will usually feel more natural to solve the problem in a different way.
That being said, as long as you stay within the same kind of language, you can probably reuse 80%+ of that book. There are many OOP languages out there. JavaScript is probably the most exotic of them all, with its ability to treat all objects like dictionaries and its weird concept of "this", so some of the patterns in the book will not carry over to those other languages.
You need not relearn data structures and algorithms when you use another language. The reason is evident: all data structures and algorithms are realizations of some kind of mathematical or logical idea.
For example, if you study sorting, you will hear about quicksort, merge sort, and others. Implementing the different sorting algorithms relies on fundamental elements that almost every language has, such as variables, arrays, and loops; you can implement them without relying on language-specific (such as JavaScript) features.
Although the subject is largely language-independent, I still suggest you study it with C. C is a lower-level high-level language, close to the operating system, and the algorithms you write in C are usually faster than in Java or Python (in most cases). C also doesn't ship with built-in abstractions like the C++ STL or the Java collections, which implement hash maps for you. If you are new to data structures, you are better off implementing them from scratch yourself rather than reaching for ready-made "tools" out of laziness.
The data structure and algorithm as concepts are the same across languages, the implementation however varies greatly.
Just look at the implementation of quicksort in an imperative language like C and in a functional language like Haskell. This particular algorithm is a poster boy for functional languages, as it can be implemented in about two lines (and people are particularly fond of stressing that).
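For reference, here is the famous short Haskell version (elegant, but note it is not in-place like the classic C implementation; it allocates new lists at every step):

qsort :: Ord a => [a] -> [a]
qsort []       = []
qsort (p : xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]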
There is a finer point. Depending on the language, many data structures and algorithms needn't be implemented explicitly at all, other than as an academic exercise to learn about them. For example, in Python you don't really need to implement an associative container whereas in C++ you need to.
If it helps, think of using DS and algo in different programming languages as narrating a story in multiple human languages. The plot elements remain the same but the ease of expression, or the lack thereof, varies greatly depending on the language used to narrate it.
Data structures and algorithms (DSA) are like emotions in humans: all humans have the same emotions, like happiness and sadness.
Programming languages are like the different languages humans speak: Spanish, English, German, Arabic, and so on.
All humans have the same emotions (DSA), but when they express them in different languages (programming languages), the way of expressing (implementing) those emotions differs.
So when you switch to a different or new language, just have a look at how those data structures and algorithms are implemented in that language.

Mutability in functional programming

First I am a Haskell newbie.
I've read this:
Immutable functional objects in highly mutable domain
And my question is nearly the same: how do you efficiently write algorithms where the state is supposed to change? Take Dijkstra's algorithm, for example: new paths are found and distances have to be updated. In traditional languages this is simple, while in Haskell, for example, I can only think of building an entirely new distance map each time, which seems too slow and memory-consuming. Are there design patterns for cases like this, where one must implement an algorithm around a mutable data structure and speed and memory usage are the main concerns?
There are of course many ways functional languages address this issue.
Different data structures - many data structures can be implemented in a purely functional manner, with the same algorithmic complexity as imperative versions. Probably the most well-known work in this area is Chris Okasaki's Purely Functional Data Structures, but there are many other resources as well. For Dijkstra's algorithm, Martin Erwig's work on functional graphs is appropriate. See this question as well.
Different algorithms - some algorithms have assumptions of mutability built in; Quicksort is an example of this. In such cases an alternative algorithm that's more amenable to immutability can be used.
Mutable state - every functional language can model mutable state with a State monad. Most provide other forms of mutability as well, such as Haskell's ST monad and IORefs.
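To illustrate the State monad option (a minimal sketch using Control.Monad.State from the mtl package; the tick name is my own): state is threaded through the computation as a value, with no real mutation anywhere.

import Control.Monad.State

-- A counter "mutated" purely: get reads the threaded state, put replaces it.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick) 0)  -- prints (1,2)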
The ST monad lets you use mutable state internally but present a pure external interface.
Creating new immutable objects isn't nearly as expensive as you might think, since large amounts of structural sharing can occur: the compiler KNOWS the objects can't change, so they can be safely shared. That said, using highly imperative algorithms with lots of mutable state in Haskell is a bit of a code smell.
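Here is what that looks like in practice (a minimal sketch; the sumST name is my own): the function mutates an STRef while it runs, but runST guarantees the mutation cannot leak, so it is observably pure.

import Control.Monad.ST
import Data.STRef

sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- allocate a mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- destructive updates inside ST
  readSTRef acc                            -- the value escapes; the ref doesn't

main :: IO ()
main = print (sumST [1 .. 100])  -- 5050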
In ML derivatives (such as OCaml, SML, F#), there are "references", which can be used as mutable variables.
In Haskell, this isn't cleanly handled. State is simply not covered by the usual "purely functional" style. Pure FP languages deal with "eternal truths", and are thus not very suitable for working with "ephemeral truths" (although it can be done, definitely).
However, yes, sometimes we need mutable state. A language such as ATS incorporates linear types for handling destructive updates and safe resource manipulation.

Is there any benefit to porting the Haskell Edison API and Core to F#?

The Edison API and Core modules are the Haskell implementation of Purely Functional Data Structures
Do the F# and native .Net data structures cover use cases in the Edison API and Core sufficiently?
Would there be any benefit to trying to port the API and CORE Haskell modules to F#?
I haven't read the paper on Edison, but if it's nothing more than the Haskell implementation of Purely Functional Data Structures, doesn't it make more sense to port the SML code that's in the book / thesis? It should be easier than porting the Haskell code, which is annotated for strictness, whereas the F# port would have to be annotated for laziness.
The language used by the book is SML with syntax extensions for lazy evaluation. F# provides half of those extensions natively:
> let x = lazy 12;;
val x : Lazy<int> = <unevaluated>
> match x with
| Lazy(n) -> n;;
val it : int = 12
> x;;
val it : Lazy<int> = 12
To convert the book's fun lazy notation, change this:
fun lazy plus ($m, $n) = $m + n
To this:
let plus (m', n') = lazy (
    match (m', n') with
    | (Lazy(m), Lazy(n)) -> (lazy (m + n)).Force())
(See page 33 in the book.) The differences between SML and F# are minor matters of syntax, so the translation should be easy.
As for whether it's worthwhile, most of the data structures in Okasaki's book are very specialised, so they are unlikely to exist already in .NET, even as F#'s immutable Set and Map. It would be worthwhile for the people that need those data structures.
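To give a flavour of those structures (a minimal sketch of the classic two-list "batched queue" from Okasaki's book; the names here are my own), here is a purely functional queue with amortised O(1) operations:

data Queue a = Queue [a] [a]  -- front list, rear list (rear is reversed)

emptyQ :: Queue a
emptyQ = Queue [] []

-- Enqueue onto the rear list.
snoc :: Queue a -> a -> Queue a
snoc (Queue front rear) x = check front (x : rear)

-- Dequeue from the front list.
uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue [] _)        = Nothing
uncons (Queue (x:xs) rear) = Just (x, check xs rear)

-- Invariant: the front list is empty only if the whole queue is, so the
-- rear is reversed into the front whenever the front runs out.
check :: [a] -> [a] -> Queue a
check [] rear = Queue (reverse rear) []
check f  rear = Queue f rear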
Revisiting this question months later, I note that someone has implemented lots of them on this blog:
http://lepensemoi.free.fr/index.php/tag/purely-functional-data-structures
I didn't follow the link, though I have at least a tiny familiarity with the work of Okasaki. So this whole answer is wildly speculative (I may be off base in my assumptions about what's in the Edison API).
I expect there is 'some benefit' in the sense that people like 'reference implementations of common FP data structures' in 'new languages' to help learn new languages.
As for the use in practice (rather than pedagogy), I expect that some of them are useful, though there are some F# and .Net APIs that may be as useful or more useful for many scenarios. The main couple 'batches' of overlapping functionality I would guess are the F# immutable collections (Set and Map), as well as the .Net 4.0 concurrent collections (like ConcurrentQueue).
Of course you'll also find some snippets on the web, like Jomo's immutable queue.
