Aren't side effects fundamental in complexity analysis?

I'm not that good at functional programming and the related discussion of the benefits and drawbacks of side effects. Please forgive me if I'm utterly wrong – or if I'm trivially right because I'm saying something elementary.

So the thing is, very basic complexity analysis that we all use assumes a von Neumann machine with a random access memory. Random access means that you can read an arbitrary array element, and that you can write it.

Suppose we throw away the ability to write array elements, which is a side effect. Instead, we replace it with the "functional" ability to create a new array having the same elements as the old one at all indexes except for one – the one we'd write if we had side effects. So we create an array like this: {old,…,old,new,old,…,old}.
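(To make this concrete: a minimal Haskell sketch of such a copied-with-one-change "write", using the standard Data.Array library, whose // operator builds exactly this kind of new array:)

```haskell
import Data.Array (Array, listArray, (//))

-- "Writing" index 2 functionally: a brand-new array is built, and with
-- the naive representation this is an O(N) copy of all N elements.
old, new :: Array Int Int
old = listArray (0, 4) [10, 20, 30, 40, 50]
new = old // [(2, 99)]   -- {old, old, new, old, old}
```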

In this sense, aren't we getting a fundamentally inefficient system? We took an O(1) operation and replaced it with an O(N) operation. We don't have to use the new O(N) operation every time we previously used the old O(1) one, of course – we can use an entirely different algorithm instead. But we must be losing something.

For instance, how do you compute a histogram? With side effects, you have a loop over your elements doing histogram[array[i]]++; this is O(N), where N is the number of elements.

Without side effects, we can create a new histogram at every step – this is O(N*K), where K is the number of histogram bins. Or we can sort the array – O(N*log(N)) – and then make an O(N) pass over the sorted array, consing every time a new element is found. Both options are asymptotically slower, not just programming-language-shootout slower. Did I miss an O(N) option?
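(For reference, the sort-based option in Haskell – a pure O(N*log(N)) histogram producing (value, count) pairs; a sketch, not claiming this is the best pure formulation:)

```haskell
import Data.List (group, sort)

-- Sort (O(N*log N)), then a single O(N) pass: group runs of equal
-- elements and cons up a (value, count) pair per run. No mutation.
histogram :: [Int] -> [(Int, Int)]
histogram = map (\run -> (head run, length run)) . group . sort

-- histogram [1,2,1,1,3] == [(1,3),(2,1),(3,1)]
```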

I assume that the people most likely to point out the drawbacks of side effects are also likely to care about asymptotic complexity, because both questions are very basic CS questions. What's the standard workaround for the loss of asymptotic efficiency, then?

One answer could be a smart compiler – smart enough to implement the logical creation of an updated histogram as an update to the memory keeping the old histogram.

But it seems that if we had a compiler that smart, then side effects would not be a problem in the first place. An imperative program full of side effects can be trivially implemented as a recursive function taking a state (the entire state of the memory – one large array), creating a new state, and calling itself with this state.
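(A toy Haskell sketch of that trivial translation – the "memory" here is shrunk to a pair just to keep it short. The function is nominally pure, yet its data flow is exactly as opaque as the mutating loop it mimics:)

```haskell
-- The entire machine state, threaded explicitly: (counter, accumulator).
-- This "purifies" `while (i > 0) { acc += i; i--; }` without illuminating it.
step :: (Int, Int) -> (Int, Int)
step (i, acc)
  | i <= 0    = (i, acc)
  | otherwise = step (i - 1, acc + i)

sumTo :: Int -> Int
sumTo n = snd (step (n, 0))   -- sumTo 10 == 55
```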

The reason people object to side effects seems related to the fact that such a program is "functional" only in a pointlessly theoretical sense. The nominal lack of side effects gives you no actual added ability to analyze the data flow of such programs.

I haven't formally proved it by this example, of course, but it seems to me that a nominally functional operation of creating a new array just like the old one with a single modified element is not a very good one. It can sometimes be hard or impossible to elide – as in the "imperative program emulation" example – and when it can't be, the need to fall back to actually copying is just too disastrous.

I guess this is why functional languages have cons – make a new list from a new element, the head, and an old list, the tail – but don't have a similar "array consing" primitive for inserting a new element into a random position of an old array.
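(Cons is cheap precisely because nothing is copied – the new list points at the old one, which is shared in full:)

```haskell
xs, ys :: [Int]
xs = [2, 3, 4]
ys = 1 : xs   -- O(1): one new cell pointing at xs; xs itself is shared, not copied
```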

Now, in the absence of a compiler smart enough for efficient "array consing", people seem to be using mutable data structures, such as Haskell's mutable arrays. This seems to forfeit some of the correctness guarantees gained by not having side effects. For example:

The freezeArray action converts a mutable array into an immutable one by copying, whereas unsafeFreezeArray returns an immutable array that is effectively just the type cast version of the mutable array. Should you write to the mutable array after it has been (unsafely) frozen, you'll side-effect the immutable array in the process. Please don't :-)

Bugs introduced by side effects coupled with aliasing – you modified something at the wrong time and someone else then references that something – are back.

So what's the point of shunning side effects if you're forced to have them anyway to do some of your computations asymptotically efficiently? Is the point having a clear separation between "pure" code and side-effecting code?

From a "social" more than a "formalistic" standpoint, such a separation indeed makes a lot of sense when "pure" code computes and side-effecting code does I/O. It's awesome to be able to tell pure computations from I/O, and it's actually a pity that imperative language typically allow people to litter computational code with I/O as easily as they do.

But if side effects are also necessary for computational code, then, again from a "social" standpoint, the pure/impure separation doesn't seem like a stable thing that is likely to correspond to how people think of boundaries between parts of the system. A piece of code that doesn't use histograms today might use them tomorrow, and then its purity will be gone.

Generally it seems like a whole lot of algorithms require writable RAM to be efficient. Hough transforms are "glorified histograms". A bunch of algorithms processing graphs or matrices update "rather random" indexes/counters iteratively, with no very clear way to somehow sort these updates so that they are sufficiently non-random to become list consing or similar.

I'm not saying pure functional languages aren't fun or productive for you – a very valid "social" argument in favor of purity. I just wonder how far purity can take you from a complexity analysis standpoint.

If it's not very far, or if it's not very clear how far, then why not have side effects "by default", and use data & control flow analysis to identify places where there are no side effects after all, or where their impact is limited enough (as in, i++ is a nominal side effect but a trivial one)?

In particular, how can purity be "the" way to safe parallelism if it robs you of asymptotic performance, and you're after parallelism to get performance?

[I'm not at all sure I get this right, so I'm asking a "real" question, not a rhetorical one.]

Comments

#1 Stephen A. Goss on 08.24.12 at 9:59 am

This doesn't answer your fundamental questions, but there are purely functional arrays and other data structures that allow copying (with a changed element) with much better than O(N) performance, usually something like O(log N). They are built with tree structures to allow structural sharing. Clojure's fundamental data structures work like this. They exist natively and as libraries for many programming languages.
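(A Haskell flavor of the same idea – not Clojure's vectors, but GHC's Data.Map behaves this way: the "updated" map is a new value built in O(log N) that shares all untouched subtrees with the old one:)

```haskell
import qualified Data.Map.Strict as M

-- Persistent update: bumping one key's count yields a new map in
-- O(log N); the old map remains valid and mostly shares structure.
bumpKey :: Int -> M.Map Int Int -> M.Map Int Int
bumpKey k = M.insertWith (+) k 1
```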

To answer your question, I think programming languages generally should encapsulate efficient (mutating) algorithms behind functional interfaces, so you pass your immutable array to a function (e.g. sort), and it does a single copy to a mutable array, does a really fast sort, and then creates an immutable array to pass back to you. Since those copying actions only have to happen once, they don't generally affect the asymptotic complexity of the overall operation. As a user of those functions, you continue to have the benefit of immutable data structures, but get to have some efficient algorithms at the same time.
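(This copy-in/copy-out pattern is available off the shelf in Haskell – a sketch assuming the vector and vector-algorithms packages:)

```haskell
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Algorithms.Intro as Intro

-- Pure interface, mutating implementation: V.modify copies the input
-- vector once, lets introsort mutate the copy in place, then freezes it.
sortVec :: V.Vector Int -> V.Vector Int
sortVec = V.modify Intro.sort
```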

#2 Nathan on 08.24.12 at 10:17 am

@Stephen – implementing an array as a tree has other costs though, even if you disregard the benefits of better utilization of cache. What's the cost of a read to a random location of an array implemented as a tree?

As a side note, I've found that in many cases (on my hardware, where N < ~50) implementing a frequently read map as a wrapper around std::vector in C++ is faster than implementing it directly as a std::map. That is to say, even when you have increased algorithmic complexity (O(N) vs O(log N)) you can still get a win if your accesses are fast enough. With an array implemented as a tree, I suspect you are trading off both algorithmic complexity (in other places, as well as in the copy as O(log N) is slower than O(1)) and memory access speed.

I do like your idea of keeping interfaces functional, but implementations mutable. Clean interfaces are always a good idea, and in this case you can get the benefits of both worlds.

#3 Mikhail Glushenkov on 08.24.12 at 10:58 am

Yes, purely functional algorithms can't be faster than imperative ones [1]. Arrays and graphs are good examples, and in these cases an imperative implementation is necessary for the best performance (though we have purely functional array and graph libraries). But in many other cases there do exist functional data structures that are asymptotically as efficient as their imperative analogues (for more information, see Okasaki's "Purely Functional Data Structures" and [2]).

Purity is not a silver bullet that solves all parallelism problems, but it can make your life easier (it's obviously safe to run two different pure functions in parallel). The point is not to get rid of mutable state altogether (Haskell has great support for mutable state), but to separate pure code from the impure and make the compiler enforce that separation via the type system. Haskell's type system is even advanced enough to allow pure code that uses mutable state internally (and statically guarantee that the mutable state doesn't "escape") [3].

Writing pure code is not the only approach to parallel programming that Haskell offers. Software Transactional Memory[4], which is fundamentally an abstraction built on top of mutable state, also deserves a mention.

[1] http://stackoverflow.com/questions/1990464/efficiency-of-purely-functional-programming
[2] http://cstheory.stackexchange.com/questions/1539/whats-new-in-purely-functional-data-structures-since-okasaki
[3] http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad-ST.html
[4] http://www.haskell.org/haskellwiki/Software_transactional_memory

#4 gasche on 08.24.12 at 11:04 am

Please note that the unsafeFoo functions are *not* the expected way to use these arrays when you want mutable update. They're there as escape hatches to let people shoot themselves in the foot if they really insist on it, but as you say, that gives up safety.

Instead, you should use typed interfaces that allow the type checker to verify that the use of mutation is only local, and cannot be observed from the outside (disallowing aliasing in particular); this is done with STArray for example, that uses the ST monad for this quite neat purpose.
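(For the histogram from the post, that looks roughly like this – a sketch using GHC's Data.Array.ST. The mutation is real, but the type of runSTUArray guarantees it can't leak out, so callers see an ordinary pure O(N) function:)

```haskell
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray)

-- O(N) histogram with local mutation: increment a mutable bin per
-- element, then freeze the array into an immutable result.
-- Assumes every element of xs lies in [0, bins).
histogram :: Int -> [Int] -> UArray Int Int
histogram bins xs = runSTUArray $ do
  h <- newArray (0, bins - 1) 0
  mapM_ (\x -> readArray h x >>= writeArray h x . (+ 1)) xs
  return h
```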

Of course, not all cases will be covered, and if you look hard enough you'll see situations that are not covered (rejected as ill-typed by ST). In this case you're on your own, like in any other language.

(Your remark that it is possible to write code that is too hard to analyze due to extreme chaining of dependencies but still qualifies as "pure" is spot on. This is how the use of Haskell monads for effects works. Similarly, if you use ST to use side effects locally, you are likely to use an imperative style locally, with the well-known downsides.)

For the theoretical discussion on asymptotic bounds of functional programs, see this very interesting StackOverflow discussion: http://stackoverflow.com/questions/1990464/efficiency-of-purely-functional-programming

#5 Mikhail Glushenkov on 08.24.12 at 11:57 am

I should note that I was speaking from the Haskell perspective. Most other functional languages (e.g. OCaml, Scheme, Clojure) don't have this strict compiler-enforced separation between side-effecting and pure code.

#6 Russ Ross on 08.24.12 at 1:03 pm

I was a heavy user of OCaml for a while, and I teach Haskell in my programming languages course. I have come to believe that "purity" in programming languages is a great learning tool, but not a very good tool for building software. I am amazed at how far one can go with purely functional programming, and I think there is value in exploring those frontiers, but one should recognize that pursuit as largely academic. I believe the same applies to languages that rhapsodize about their OOP purity (I happen to think OOP has been taken too far even in mainstream programming). Some of your writings on Forth reflect a similar sentiment about that corner of the design space.

One of the things I love about the CTM book by Van Roy and Haridi is that it has a nicely balanced view on this. They explore declarative programming (a cousin of FP) in detail, then talk about instances where the model becomes cumbersome enough that it is worth adding a bit of state just to simplify the code. They also define their concept of purity in a way that allows state to lurk behind an abstraction; if the interface meets all the purity requirements, they are willing to cheat behind the scenes and sneak some state in when no one is looking. They don't consider the result tainted (think of the Haskell IO monad, which acts almost as a badge of shame for impure code).

In short, the extremes are fertile ground for learning and discovering, but being able to pick and choose the best of each model for each job is better for practical use. I'm one of those who believe my imperative programming is better because of my FP experience, but not everything needs to be coerced into a side-effect-free model.

#7 Rafael on 08.24.12 at 1:15 pm

Yossi, you're neither utterly wrong nor trivially right.

Indeed there's no magical solution for efficiently mutating arrays within purely functional programming. But there's an interesting way of working around it, by limiting (localizing?) the usage of side effects. Take a look at the ST monad.
http://www.haskell.org/haskellwiki/Monad/ST

For the more general question you might want to look into the excellent "Purely Functional Data Structures" by Chris Okasaki.

#8 Rafael on 08.24.12 at 1:19 pm

(oops, when I wrote my comment, Mikhail's and gasche's weren't there, it's mostly redundant now)

#9 Max on 08.24.12 at 1:47 pm

But direct memory update is already an O(log(N)) operation and no more efficient than a functional array! It's only O(1) because you can only use a finite amount of memory on the von Neumann machine under your desk.

#10 Dmitry Novik on 08.24.12 at 3:15 pm

Inefficiency – this is precisely why functional languages haven't gained much adoption in commercial code so far. But this may change in the future as computational power increases. You may do it inefficiently, but if the user can hardly notice any difference in time – then who cares?

There will of course always be realms where efficiency is very important: games, graphics, scientific computation, etc. However, the mainstream desktop and web application front ends, where performance rarely matters, can switch to functional programming even today, gaining speed of development and safety at the cost of performance.

#11 Pascal Bourguignon on 08.24.12 at 3:49 pm

First, the big O notation is made to count things.
You can count memory cells (which gives you a space complexity), or you can count operations (which gives you a time complexity), or something else.

When you count micro-instructions, in a programming language with side effects, indeed, indexing an array is O(1), in the number of micro-instructions.

But if your computer doesn't have Random Access Memory (RAM), but, for example, Delay Line Memory (DLM, http://en.wikipedia.org/wiki/Delay_line_memory ), then while the number of instructions needed to index an array will still be O(1), the clock time needed will depend on the position of the data in the line. And this doesn't only concern old technology; the more recent magnetic bubble memories had linear access times too.

So the question is what the complexity of setting a slot in a functional array is. If you implement a sorting algorithm in a functional programming language, you can count the number of slots read from functional arrays and the number of slots written to them. Each kind of access counts for 1 and has a complexity of O(1).

Of course, if you try to implement your functional programming language on a von Neumann computer, and if you do so by copying the old elements, suddenly writing a slot will be O(n) in micro-instructions of the von Neumann machine.

But nobody forces you to use a von Neumann machine. You could use a quantum computer, and fork a new universe each time you write an array slot, so you don't have to do the copying. Then setting a functional array slot will be O(1) in universe forks.

Who knows whether forking a universe takes aeons in paradise "time"?

#12 AlanL on 08.24.12 at 10:24 pm

Perhaps an irrelevant tangent, but I am currently building my first iPhone app and I notice that Objective-C, although not a functional language, has mutable and immutable versions of all of its data structures, and the culture – documentation, example code etc. – encourages use of the immutable versions unless mutability is really needed.

A very common idiom, for example, is for the data provider to work on a mutable data structure internally and then return an immutable version, either by implicit casting (the mutable version is normally a subclass of the immutable one) or by copy-once-when-finished.

This strikes me as a good example of what Russ is talking about: you're not forced into pure functional contortions, but you are encouraged to localise and separate your state-fiddling.

#13 Yossi Kreinin on 08.25.12 at 1:30 am

Thanks for all the comments. The bit about "hidden side effects" (pure interface, impure implementation) seems really interesting; I wonder if this works for internal state variables maintained across calls (as opposed to only supporting clobbering temporaries).

Regarding von Neumann machines being just one possible model – firstly, well, all of the standard complexity analysis assumes this model and it's awfully widespread, but secondly, I think it scales rather nicely. In particular, why does access to a pointer require O(log(N)) operations – because you need log(N) pointer bits? 128 pointer bits, or even a smaller constant, AFAIK allow you to address every atom in the universe or something.

#14 Xah Lee on 08.26.12 at 12:03 am

Reading this post felt a bit odd, so I skipped to the comments section. I think Pascal Bourguignon's comment (#11) best captures my reaction to this post.

Upon re-reading your post, I found this beginning paragraph to be the culprit. Quote: «… very basic complexity analysis that we all use assumes a von Neumann machine with …»

The heart of complexity analysis is measuring the number of steps of an algorithm. The unit of a step is usually a mathematically abstract one.

For example, Euclid's algorithm for GCD (the earliest, or one of the earliest, algorithms), the prime sieve, or various sort algorithms. In the literature, we don't see any mention of CPU/memory architecture, because it doesn't matter.

So I'd consider a functional programming operation that creates a new array to be O(1), just like an imperative language updating one element. The actual unit considered depends on the context.

My 2 cents. Thank you for your blog, I enjoy reading it.

#15 Yossi Kreinin on 08.26.12 at 12:51 am

Well, it's true that we count "steps" and we could count anything as "a single step"; the question is when such counting is useful. If a new array with N elements and one element modified is said to be created in a single step, how is the resulting counting useful? The naive implementation – copying – will take a varying amount of (physical) time proportionate to the number of elements, N, so calling it O(1) is not useful for predicting observed performance. Which less naive implementation would actually be O(1), observably – take an amount of physical time limited by a reasonably small constant (not something larger than the amount of time the universe could possibly exist) that is not proportionate to N?

#16 Xah Lee on 08.26.12 at 1:30 am

Hi Yossi (#15), I think what unit to use needs to be case-specific and goal-specific.
For example, let's say we implement factorial by recursion, in Scheme and in Python. In Python, the storage will blow up quickly. How can the same algorithm now have two different complexities? Of course, it depends on the level of detail and on our purpose. It is in this sense that I think analyzing FP's complexity in terms of CPU-level detail is incongruous in general.

Suppose we write a basic prime sieve to get the first one thousand primes, in OCaml and also in Python. Depending on the goal of our study, we could say they both have the same complexity, because after all it's the same algorithm. Or we could compare the CPU instructions generated for particular hardware. Then it also drags in the specific compiler used, and how the code is exactly written in each language (e.g. does it use an “array” or a “list”, does it use list comprehension or lambda/map). So the same algorithm can actually result in a different magnitude of O() for the two implementations, and I think that for some, the FP version's O() could be a magnitude less than Python's. So yeah, I think it all depends.

Another example: what's the algorithmic complexity of i = i+1 vs. ++i in C? If we compare CPU instructions, don't they differ by some amount with a decade-old compiler?

#17 Yossi Kreinin on 08.26.12 at 2:34 am

In the i++ example, both versions are O(1). I was talking about asymptotic complexity.
Of course you could say that a functional specification of sorting is always O(n*log(n)) given a sufficiently smart compiler; in that sense, bubble sort is O(n*log(n)) given a sufficiently smart compiler. But that sort of misses the point of complexity analysis of algorithms.

I think the question of the impact of not having random access for writing on asymptotic complexity is a valid one – I'm not saying it is trivial or that I fully answered it but I don't think "it depends" answers it any more than "O(n*log(n))" is a valid answer to the question of "what's the complexity of bubble sort". I think that to show that losing random access doesn't affect complexity you'd need to show a compilation strategy working around the problem, not merely point out that it could exist in theory. I think there's no such strategy to date, which is why functional languages offer cons but no "new array with one modified element" operation.

#18 Xah Lee on 08.26.12 at 11:20 am

Hi Yossi, I guess what you said makes sense, but I don't see the utility of this conclusion given its assumptions.

As an analogy, say we have a list of 1 to 10, and we want to add one to each element. We can do a for loop, or we can map a lambda over it. With certain assumptions, we can decidedly say that the map approach will ALWAYS have double the storage requirement of the imperative approach. But what's the utility of this overly general conclusion?

On a different note, I don't think functional languages in general use cons. I think it's mostly just Lisp (i.e. its builtin function “list” is made of conses, which form a “proper list” when structured in a certain way, with the last linked pair ending in “nil” – that is, a linked list). Mathematica's “List” is C's array. Mathematica does not have anything like a linked list (nor do Perl, PHP, or Python, as far as I know). Of course, in any high-level language, a linked list à la Lisp can be built by nesting pairs. This is an oft-used technique in Mathematica: e.g. nesting as a way to append, then calling Flatten on it to get rid of the nesting.

#19 Yossi Kreinin on 08.26.12 at 12:14 pm

I know almost no Mathematica; is it purely functional, and how do you build new collections out of old ones?

In Haskell and ML as well as Lisps, there is a cons – I don't think they name it so and perhaps they rightly don't since it's a bit of a weird mnemonic, but they have it. That is, you take a head and a tail and make a new list; this can be made to work predictably efficiently, I think.

How people usually build new arrays out of old ones in purely functional languages or language subsets I don't know; what does Mathematica do?

As to mapping a lambda vs a for – I think a relatively simple compiler can convert the map to a for; also it's not an asymptotic performance difference. What I wonder about is whether purity costs asymptotic performance, because that kind of thing is not something stronger hardware can really buy back – not for a sufficiently large N.

(I imagine one possible solution is to add a bunch of pure functional primitives such as histogram() that use side effects in their implementations but have a pure interface; if so, I wonder if you pay a price in generality – or is there actually an approach letting you encapsulate side effects in functional primitives so that every algorithm can be made as efficient as with a writable RAM? If such an approach exists, it'd answer my question spectacularly and prove my gut feeling wrong; if a less general approach exist that can be shown to handle an important sub-space of problems, it's rather interesting as well.)

#20 Noam Yorav-Raphael on 08.26.12 at 2:04 pm

Hi Yossi!

I haven't researched this enough, but it seems to me that pure functions with the added feature of destroying their input might be a solution. Imagine a (pure) function that creates a new array from an old one by replacing one element, with the peculiar feature that using the original array again is an error. It's obvious how using it in code translates to O(1) machine code, but the code is still functional.

Here http://www.cse.unsw.edu.au/~benl/papers/thesis/lippmeier-impure-world.pdf is a (surprisingly pleasant to read) PhD thesis dealing with exactly this problem. I've read only the first two chapters, so I'm not sure exactly what solution he suggests, but the author dedicates the work to his beloved data structure, an array with O(1) updates.

Here http://code.google.com/p/decac/ is a programming language based on the ideas of the above thesis, which aims to give "bare-metal performance" while using modern programming languages ideas.

Neither of these projects got a lot of success or development. However, I do hope that something similar might work one day. I think that functional programming that can be translated to efficient machine code in an obvious way can be a really nice thing.

#21 Mikhail Glushenkov on 08.26.12 at 2:52 pm

> The bit about “hidden side effects” (pure interface, impure implementation)
> seems really interesting; I wonder if this works for internal state variables
> maintained across calls (as opposed to only supporting clobbering
> temporaries).

You can do that in Haskell (a good example is the DiffArray data structure
[1][2]), but the type system won't be able to guarantee that your program does
not go wrong in that case, so you'll have to use an escape hatch
(such as unsafePerformIO).

[1] http://www.lri.fr/~filliatr/ftp/publis/puf-wml07.ps
[2] http://hackage.haskell.org/package/diffarray

#22 Xah Lee on 08.26.12 at 3:56 pm

Mathematica (M) isn't purely functional. It's somewhat like Common Lisp in that regard. It's big – it has a few thousand functions built in. All paradigms can be used, but functional programming and a pattern-matching term-rewriting system are naturally primary. It's my favorite language. ( http://en.wikipedia.org/wiki/Rewriting )

In M, there's Append and Prepend ( http://reference.wolfram.com/mathematica/ref/Prepend.html ), but they're costly: they basically make a copy of the array. For seasoned M programmers who want to grow a list, a common idiom is to start with a huge number of elements (e.g. zeros), then chop off the inert tail when done. Or nest them at each step, e.g. mylist = {mylist, newItem}, then call Flatten[mylist] at the end.

I'm aware that OCaml and Haskell have linked lists, but I've only dabbled in OCaml; I tried to study Haskell but went nowhere. On the other hand, I've been coding Emacs Lisp for a few years. I hated Lisp conses! I think the linked lists in Haskell and OCaml are qualitatively different from Lisp's lists, because they don't expose the “cons” primitive as a list constructor. The fact of the linked list is only at the implementation level.

I don't have experience with low-level languages such as C, C++, or assembly, nor with systems programming, compilers, or CPU/memory architecture. So I can only wave hands at your deeper technical inquiries in this discussion. ☺

#23 Yossi Kreinin on 08.26.12 at 10:05 pm

@Noam: that's really interesting – thanks!

A function or expression updating its input instead of copying it on the grounds of there being no references to the input could take you some distance I guess; a simple recursive histogram function could work that way, for instance. It's not intuitive for me what exactly this distance is.

#24 Görkem Demirtaş on 08.27.12 at 6:22 am

One important problem FP solves is aliasing. Aliasing makes optimising the memory/time trade-off, and reasoning about code, hard. While a convenience, mutable state is what pulls the rug from under you while you're not looking. Dependent types try to address both the "preventing copying bits around" and "constant-time access for a constant amount of memory" problems by giving a little too much power to the type system. While the optimisation opportunities sound great in theory, this leads to rather baroque syntax, which helps these systems stay obscure for the rest of us.

Access to arbitrary amounts of memory is not actually a constant-time operation; average/worst-case time is assumed for analysis. Arbitrary-length operations don't run as fast as machine instructions either.

Another thing worth looking at to understand where functional programming stands relative to imperative is reasoning about continuations. To get a logical view of the world (the world stays the same upon continuation reentry), you either need to copy the whole call frames (and related state) and restore them upon reentry, or pretend nothing will change that may interest you. Pretending nothing will change is way safer for a functional language, although it is heavy on memory consumption. An imperative language needs to take aliasing into account while copying the modified call frames. Just ditching the logical-view guarantee and letting the programmer make sure nothing bad will happen is easier (e.g. Prolog with cuts). Once you try to incorporate call/cc and friends, complexity analysis becomes impossible for mere mortals with mutable state.

#25 Mikhail Glushenkov on 08.28.12 at 5:40 am

> A function or expression updating its input instead of copying it on the grounds of there being no references to the input could take you some distance I guess;

This is basically what uniqueness types give you – the type system guarantees that values with unique types are used in a single-threaded manner. Uniqueness types were developed as a solution to the problem of combining lazy evaluation with side effects, just like Haskell's ST and IO monads. Ben Lippmeier discusses these matters in much more detail in his wonderful dissertation (linked above) and also presents another solution to the same problem.

#26 Micky Latowicki on 09.12.12 at 5:32 am

Regarding uniqueness types and Lippmeier's thesis – I haven't read it, but it sounds a lot like linear types (http://en.wikipedia.org/wiki/Linear_type_system), which are the flagship feature of the Clean language (a cousin of Haskell), and have since been adopted elsewhere (see the Wikipedia article).

#27 Dror Livne on 09.24.12 at 5:40 am

AFAICT, there is a slight difference between uniqueness types (at least as implemented by Clean) and linear types (as presented by Baker / Wadler):
in Clean, unique <= shared, so a unique type can be coerced to its shared type.
In contrast, linear type systems require every binding to be used exactly once.

On a different note, there is actually a wealth of type systems designed to control aliasing with varying degrees of precision and sophistication.

see:
Francois Pottier: Wandering through linear types, capabilities, and regions.

#28 dmbarbour on 04.02.14 at 3:10 pm

The perceived O(1) update for random access arrays is, unfortunately, misleading – a consequence of many simplifications for an abstract model that cannot physically be implemented. In practice, memory is allocated in physical space, and there are (minimally) speed of light delays. Further, unique addresses for memory necessarily have average size lg(N). If memory is allocated in 2D space, the best we can feasibly do is O(N^(1/2) * lg(N)). If memory is allocated in 3D space, we can improve this to O(N^(1/3) * lg(N)).

In practice, the discrepancy between O(1) and actual access times tends to show up in terms of cache invalidation, virtual memory pages, even disk reads or network lookups. This is interesting, in its own way, because our machines are essentially modeling "motion" of memory – the stuff we access is 'near' (in cache instead of memory, or in memory instead of on disk) and thus more cheaply accessible.

In practice, it is not difficult to achieve identical asymptotic performance in a pure language. For example, if we have a Map of Integers to Values, we can typically update an integer in that map in O(lg(N)) time (ignoring physical layout of the map). If you used a map for your histogram, you'd ultimately have the same asymptotic performance as you do for an array… you'd simply be a lot more explicit and obvious about it.
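(Concretely, a Haskell sketch with Data.Map.Strict: N updates at O(lg K) each, K being the number of distinct values seen:)

```haskell
import Data.List (foldl')
import qualified Data.Map.Strict as M

-- Pure histogram as a strict left fold over a persistent map:
-- O(N * lg K) total, each step sharing structure with the previous map.
histogram :: [Int] -> M.Map Int Int
histogram = foldl' (\m x -> M.insertWith (+) x 1 m) M.empty
```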

If you're somewhat more clever, you could actually model full cache-like motions, and thus enable O(1) amortized manipulations of recently accessed values and their neighbors (cf. Gérard Huet's Zippers). So, if you're sequentially updating every value in a histogram, you can get physical-world asymptotic performance equivalent to modifying an array.

Of course, coefficients cannot reasonably be ignored. A map update is likely to involve, on average, many more memory operations and much more indirection than a stateful array manipulation. Zippers, if used, similarly require extra computations.

#29 Yossi Kreinin on 04.02.14 at 9:59 pm

In practice your histogram is typically small enough to fit in an L1 cache. So being "much more explicit about asymptotic performance" means that you're bringing slowdowns which are unavoidable on huge inputs to problems where they're perfectly avoidable.

On the other hand, I agree that realizing that random access is in practice much slower than sequential access in many cases is valuable. I'm just saying that fast RAM – small fast RAM, as necessitated by physics, but fast nonetheless – is darn useful, and castrating it seems weird. The existence of a memory hierarchy on most machines, which you use as evidence that O(1) random access is impossible, is actually also evidence that getting O(1) access with constants as small as possible, for data sets as large as possible, is useful.

#30 dmbarbour on 04.03.14 at 6:26 am

"you're bringing slowdowns which are unavoidable on huge inputs to problems where they're perfectly avoidable" – True enough. I don't deny that coefficients matter. They only don't matter to asymptotic complexity, which was a significant focus of your article. :)

If we're aiming for coefficient performance in purely functional programs, there are at least three paths to pursue.

First, it is quite feasible to develop cache-friendly, purely functional structures – e.g. by using wider tree nodes (not two branches per level, instead fifty or so – e.g. B-trees, or finger-trees with many nodes, or ropes built on these). By going wider, we better fit paging models and cache-lines. Of course, binary trees tend to be simpler to implement, so there is a tradeoff on complexity.

Second, if we can recognize that there is no aliasing for a data structure – or prevent it using the type system (i.e. linear or affine types) – then we can update it in place without losing our ability to reason about it in a purely functional manner. There are entire languages and concepts built around this – e.g. Clean, Mercury, DDC Haskell, ParaSail, the 'Wormholes' work for FRP. It's worth paying attention to. I'm also developing a language that leverages these features.

Third, the Zipper data structure – initially developed by Gérard Huet in 1997, but which remains unknown and underutilized by most functional programmers. It provides an excellent foundation for O(1) navigation steps and local updates for any tree-like structure. That includes kd-trees (which can model 2D grids and 3D volumes). It also includes text cursors, and navigation and user focus in a user interface. (Cleverly, ray tracing can be understood as a form of navigation unto collision.)
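(For flavor, a minimal list zipper – all pure, with O(1) movement and O(1) update at the focus:)

```haskell
-- A cursor into a list: elements to the left of the focus (nearest
-- first), the focus itself, and the elements to its right.
data Zipper a = Zipper [a] a [a]

left, right :: Zipper a -> Zipper a
left  (Zipper (l:ls) x rs) = Zipper ls l (x:rs)
left  z                    = z             -- already at the left end
right (Zipper ls x (r:rs)) = Zipper (x:ls) r rs
right z                    = z             -- already at the right end

setFocus :: a -> Zipper a -> Zipper a      -- an O(1) pure "write"
setFocus x (Zipper ls _ rs) = Zipper ls x rs
```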

The zipper is another of the bridges between functional and imperative. Like linearity, zippers do have a weakness with regards to aliasing: you can't have multiple zippers in the same data structure. The closest you can get is higher-order zippers, which model volumes instead of focus/location as a single point.

Related to the zipper and the tree, functional programming languages have also developed a "finger-tree" concept, which is basically a tree turned inside out so we can access and update the endpoints in O(1) time. These make excellent purely functional deques. :)

Anyhow, I think there is a lot of confusion about what is feasible in purely functional programming. An expert imperative programmer doesn't immediately know the idioms for functional performance, and might prefer to remain an expert of imperative than become a beginner of functional. It's easy to learn "use more trees", but it can take years to learn a full suite of idioms.

Anyhow, I'm not inclined to give up my "O(1) with as small constants as possible". As I mentioned above, and before, I use zippers in that role. Even better with linear in-place updates, and if we can keep it cache friendly with wider structures. :)
