I learned today that algorithm analysis differs depending on the computational model. This is something I had never thought about or heard of before.

An example, given to me by user @chi, that illustrates it is:

E.g. consider the task: given $(i, x_1, \dots, x_n)$, return $x_i$. In the RAM model this can be solved in $O(1)$ since array access is constant-time. Using TMs, we need to scan the whole input, so it's $O(n)$.
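
To make that contrast concrete, here is a minimal Haskell sketch (the function names are mine, and the stated costs assume the usual documented costs of Data.Array and ordinary lists): indexing an immutable array behaves like the RAM case, while walking a linked list behaves like the tape scan.

    import Data.Array (Array, (!))

    -- RAM-flavoured: immutable array indexing, usually documented as O(1).
    selectArray :: Array Int a -> Int -> a
    selectArray arr i = arr ! i

    -- TM-flavoured: every cell before position i must be traversed, so O(i).
    selectList :: [a] -> Int -> a
    selectList (x:_)  0 = x
    selectList (_:xs) i = selectList xs (i - 1)
    selectList []     _ = error "index out of bounds"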

This makes me wonder about functional languages. From my understanding, "functional languages are intimately related to the lambda calculus" (from a comment by Yuval Filmus here). So, if functional languages are based on the lambda calculus but run on RAM-based machines, what is the proper way to perform complexity analysis on algorithms implemented with purely functional data structures and languages?

I have not had the opportunity to read Purely Functional Data Structures, but I have looked at the Wikipedia page on the subject, and it seems that some of the data structures do replace traditional arrays. For example:

"Arrays can be replaced by map or random access list, which admits purely functional implementation, but the access and update time is logarithmic."

In that case, the computational model would be different, correct?
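
To make the quoted claim concrete, here is a minimal sketch of the idea using Data.Map as the array replacement (the names PureArray, fromIndexedList, readAt and writeAt are just illustrative); a balanced-tree map gives logarithmic, rather than constant, access and update:

    import qualified Data.Map.Strict as Map

    type PureArray a = Map.Map Int a

    -- Build the "array" from a list, keyed by position.
    fromIndexedList :: [a] -> PureArray a
    fromIndexedList = Map.fromList . zip [0 ..]

    -- Both operations are O(log n) because Data.Map is a balanced tree;
    -- writeAt returns a new version instead of mutating in place.
    readAt :: Int -> PureArray a -> Maybe a
    readAt = Map.lookup

    writeAt :: Int -> a -> PureArray a -> PureArray a
    writeAt = Map.insert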

I am definitely not an expert on this topic, but I believe I heard that 1) a Lisp-like machine (with its own cost model) can simulate RAM programs with an $O(\log n)$ additional factor (this looks easy to prove), and 2) whether this factor is really needed is still an open problem. Further, it can be argued that assigning an $O(1)$ cost to array access in the RAM model is too generous: in hardware, memory access has to traverse $O(\log n)$ gates, where $n$ is the size of physical memory. – chi yesterday
Also keep in mind that virtually all real-world FP languages have arrays in some form, with a guaranteed $O(1)$ access time (as in imperative languages). This is typically solved by adding them as a language primitive. – chi yesterday
@chi "virtually all real-world FP languages have arrays in some form, with a guaranteed $O(1)$ access time" - Is this imposed because of how modern CPUs are built? – Abdul yesterday
An example of a different computational model would be the number of beta reductions done on a lambda calculus term. In FP we are effectively using a RAM model dressed up as a lambda calculus, if that makes sense. – Kurt Mueller yesterday
@KurtMueller Note that we can get a lambda term of size $O(2^n)$ after only $O(n)$ beta reductions. This makes the cost model of counting the number of beta reductions unrealistic. An arguably better one could be to weight each step by the size of the terms at hand. Yet, this is not the only possible model: optimal evaluation of lambda terms does not apply beta in the naive way, preferring more sophisticated graph reduction machines. In such a case, counting the betas would probably not be appropriate. – chi yesterday
Accepted answer:

It depends on the semantics of your functional language. You can't do algorithm analysis on programming languages in isolation, because you don't know what the statements actually mean. The specification for your language needs to provide sufficiently detailed semantics. If your language specifies everything in terms of the lambda calculus, you need some cost measure for reductions (are they $O(1)$, or do they depend on the size of the term you reduce?).

I think that most functional languages don't do it that way; instead they provide more directly useful statements, such as "function calls are $O(1)$" and "adding an element to the head of a list is $O(1)$".
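
For instance (this is just an illustrative sketch, using the costs GHC documents for ordinary lists, not something your language necessarily guarantees):

    -- Adding to the head of a list is O(1): one new cons cell is allocated.
    prependHead :: a -> [a] -> [a]
    prependHead x xs = x : xs

    -- Indexing is O(i): the spine must be traversed up to position i.
    indexAt :: [a] -> Int -> a
    indexAt xs i = xs !! i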

I believe I mostly understand your answer (any misunderstanding is most likely due to my lack of understanding of the lambda calculus): you're saying that you basically have to do the analysis on a case-by-case basis (the case being the language), rather than in a general way, because certain operations have different meanings in each language. Is my understanding correct? – Abdul yesterday
Yes. Your language designer needs to tell you what the things you can write in the language actually mean before you can analyze the runtime of an algorithm. – adrianN yesterday
"You can't do algorithm analysis on programming languages in isolation" - was this referring to FP languages or to languages in general? If the former, then how do we do algorithm analysis in school in such a general manner, i.e. do analysis across Java, C/C++, and Python problems? Is it because they're all very similar? Or is it because the underlying data structures and ADTs are all the same and implemented in the same manner? Or, lastly, is it because these courses are simply for the sake of education and don't necessarily need to be strictly accurate? – Abdul 19 hours ago
It's true for all programming languages. To be strictly correct, you first need to fix a machine model, say the RAM and the (small handful of) instructions it supports. You can only analyze programs that use those instructions. Then you can think of a mapping of your programming language to that machine model, and then you can analyze programs in the programming language. For a very rigorous treatment, check how Knuth does it in The Art of Computer Programming. A lot of this can be simplified because big-O hides constants. – adrianN 3 hours ago
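
As a purely hypothetical illustration of that recipe (the toy language, its instructions and their costs are all made up for the sketch), one can fix a tiny expression language, assign each operation an explicit step cost, and then analyze programs by counting those steps:

    -- A toy "machine model": three instructions, each with a declared cost.
    data Expr = Lit Int
              | Add Expr Expr
              | Index Int [Int]   -- fetch the i-th element of a list

    -- eval returns the value together with the number of steps consumed.
    -- Literals and additions cost one step; list access costs one step
    -- per cell traversed, the TM-flavoured cost from the question.
    eval :: Expr -> (Int, Int)
    eval (Lit n)      = (n, 1)
    eval (Add a b)    = let (x, s1) = eval a
                            (y, s2) = eval b
                        in (x + y, s1 + s2 + 1)
    eval (Index i xs) = (xs !! i, i + 1)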
