
I read somewhere (I forget which book) that algorithms are independent of computer architecture. Some even say algorithms are themselves computations (machines?).

On the other hand, books on parallel programming have chapters on parallel algorithms. It seems like parallel algorithms depend on parallel architectures?

I think I'm missing the big picture here. Thanks.

    
Is quicksort better than merge sort? –  AK_ 9 hours ago
    
Parallel algorithms can run on single-threaded architectures. Time slicing is supported by most of those architectures, and who says parallel tasks must run concurrently? If A and B are parallel, why can you not run A then B? Or interleave them if one relies on the other (but that is generally a bad idea). –  Snowman 8 hours ago
    
Algorithms are specified against an abstract machine. A typical one that is an idealization of today's multi-core personal computers is called PRAM (Parallel Random Access Machine), sometimes further classified into EREW (Exclusive Read, Exclusive Write) or CRCW (Concurrent Read, Concurrent Write). –  rwong 5 hours ago
    
@rwong: then are abstract machines totally independent of computer architectures, if algorithms are? –  Tim 5 hours ago

6 Answers

Algorithms are the series of steps taken to solve a particular problem. The recipe for solving the problem, if you will. "Programs" do the same things, of course; we use "algorithm" to suggest the "generalized" or "generally applicable" recipes that are independent of specific machine designs, programming languages, and such.

Algorithms are meant to be general, but they can still depend on some features being present. "Concurrent algorithms," for example, might depend on you having some mechanism for different programs to run concurrently. "Distributed algorithms" can depend on you having more than one system in a cooperating group, and a network or other communication scheme between them. Similarly, "parallel algorithms" are often those designed to run when you have multiple processing units--potentially many, many processing units--and the sort of communication facilities that are common in large arrays of processing units. You may be able to run a "parallel algorithm" even when you have only one computer or one CPU, but it's not terribly interesting, in the same way that having traffic engineers isn't very useful unless you also have roads.
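
To make that distinction concrete, here is a minimal Python sketch (the function names and the chunk count are my own illustration, not from any particular source): a data-parallel sum written against an executor. The algorithm itself never changes; only the machinery executing it does--a four-worker pool may run the chunk sums concurrently, while a one-worker pool runs the very same steps one after another.

    from concurrent.futures import ThreadPoolExecutor

    def parallel_sum(data, executor, chunks=4):
        """Split data into chunks, sum each chunk as a separate task, combine."""
        size = max(1, len(data) // chunks)
        pieces = [data[i:i + size] for i in range(0, len(data), size)]
        partials = executor.map(sum, pieces)  # tasks may or may not overlap in time
        return sum(partials)                  # sequential combining step

    data = list(range(1_000_000))
    with ThreadPoolExecutor(max_workers=4) as pool:    # "parallel" machine
        print(parallel_sum(data, pool))
    with ThreadPoolExecutor(max_workers=1) as serial:  # serial machine, same algorithm
        print(parallel_sum(data, serial))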


"It seems like parallel algorithms depend on parallel architectures?"

In my opinion the answer is simply: no. In general, only two properties come to mind when thinking of a hardware architecture:

  • parallelism
  • word size (implicit resource limits)

Regarding parallelism: any parallel algorithm can be computed in serial batches, and any parallel architecture can work serially, so an algorithm does not depend on it. Word size might be an issue for numeric stability, but not for the algorithm itself. Resource limits (for example, 64 bits can only describe 2^64 different numbers) could be a problem, but the number of elements is limited anyway.

Of course there might be some algorithms that depend on extended instruction sets, but everything can at least be described with basic math.

For example, with quantum computing some Big-O values may change, and then I would say that

"algorithms are independent of computer architectures"

is no longer true.


Algorithms are independent of computer architecture. That's because an algorithm defines a series of steps that solve a problem. Regardless of the architecture, a sorting algorithm will always sort; it won't suddenly render 3D drawings on some architectures.

If you think about it, this is actually intuitive. Google Chrome (which is merely a collection of algorithms) is a web browser when compiled for any architecture. It wouldn't suddenly become a device driver on some architectures.

But the speed at which an algorithm runs does depend on the architecture. And some algorithms work faster than others depending on the architecture.

If you think about it, this is also actually intuitive. Given an algorithm, it's always possible for the hardware designer to design an architecture that specifically speeds up that algorithm. That's one reason why there are things like 3D accelerated graphics cards and bitcoin miner accelerators.

When people talk about parallel algorithms, they're talking about a family of algorithms that can work faster on parallel architectures. There are plenty of algorithms that aren't improved by parallel architectures. So identifying new algorithms for the same problem that work well in parallel is an active area of research.

But those algorithms still do the same things. Architectures don't change what they do.


Algorithms don't depend on computer architecture; however, the efficiency of running any particular algorithm does. Any Turing-complete machine can emulate any other Turing-complete machine, although some machines are better at some things than others.

What we mean by a concurrent algorithm is that it plays well with, or can take advantage of, concurrency in the machine--perhaps because it requires less locking than an algorithm not specifically designed for a concurrent machine, or because it uses divide and conquer effectively to exploit the machine's full power. Running the algorithm on a non-concurrent machine is still possible, but it may not be as efficient, or it may require additional locking to behave correctly.
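
A minimal sketch of the locking point (the worker setup is my own illustration): a shared counter updated by several workers. The lock is what makes the algorithm correct when the machine really does run things concurrently; on a single-threaded machine the same code still runs, the lock is simply never contended.

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(n):
        global counter
        for _ in range(n):
            with lock:            # required for correctness under real concurrency
                counter += 1      # read-modify-write is not atomic without it

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                # 40000, with or without real parallelism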

There are also algorithms designed to take advantage of the quirks of a particular architecture, such as cache-friendly algorithms that optimise cache usage. These algorithms may be less efficient on machines that don't cache the way the algorithm assumes.
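
A minimal sketch of that access-pattern idea (illustrative only--in a low-level language the difference is dramatic, while in CPython the indirection of list objects mutes it): the same sum computed in two traversal orders over a matrix stored row-major in one flat list.

    N = 1000
    matrix = [1.0] * (N * N)      # row-major layout: element (i, j) lives at i * N + j

    def sum_row_order():
        total = 0.0
        for i in range(N):
            for j in range(N):
                total += matrix[i * N + j]   # unit stride: walks memory sequentially
        return total

    def sum_column_order():
        total = 0.0
        for j in range(N):
            for i in range(N):
                total += matrix[i * N + j]   # stride N: jumps far apart, poor locality
        return total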


In theory, algorithms are entirely independent of architecture. You can always emulate a parallel architecture on a single-issue, time-sliced system. You can reason about algorithms without any architecture at all: Knuth's books use a fictional architecture (MIX, later MMIX).

In practice, there are algorithms that attempt to achieve a better runtime for the same big-O complexity by optimising their use of cache hardware and synchronisation primitives.


Yes and no. It depends on the constraints you want to meet and the preconditions needed to run your algorithm.

Ideally, an algorithm is an abstract recipe that defines, step by step, how to do something. Algorithms were defined this way with the goals of reproducibility and, later, automation. Algorithms originate from the lambda calculus, so you can easily see why they were made in such a way. That definition is the usual one, but modern algorithms can be non-sequential (not step-by-step, like concurrent algorithms, or logical algorithms such as those using unification), non-linear (stochastic algorithms), or just plain weird (quantum algorithms)--but I'll set those aside.

Thus, ideally, an algorithm should be as abstract as possible, without accounting for any hardware.

But, as with any system, you must define some axioms, not only to get a coherent system but also to save time. For instance, most algorithms assume, at least implicitly, that they are defined on a von Neumann machine. If that were not the case, they would need to explicitly define each part of the system they are to run on (since this is required to reproduce the recipe, it is a kind of precondition). Also, algorithms often rely on common commands such as write() without defining them fully.

Another reason why algorithms are not so abstract from hardware architecture is that you sometimes need to meet constraints.

Let's say you are working on embedded systems: you probably can't rely on the same amount of resources you have on workstations. One of the most constrained resources is probably memory. Yet most algorithms tend to optimize time complexity (speed of execution on the CPU), not memory complexity (the amount of memory needed to work on the data). For such systems, memory-optimized algorithms have been devised where non-memory-optimized algorithms would fail or run a lot more slowly. In fact, embedded systems are not the sole target of memory-efficient algorithms: for example, there are cache-oblivious algorithms that adapt their processing to use the CPU cache efficiently. Another example: some machine learning algorithms for big data are tailored for incremental learning or out-of-core computing, to process amounts of data far larger than the memory available on any computer.
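
A minimal sketch of the out-of-core idea (the file reader is a hypothetical stand-in for any source too large to fit in memory): a running mean computed one value at a time, so memory use stays constant no matter how large the input grows.

    def running_mean(values):
        count, mean = 0, 0.0
        for x in values:
            count += 1
            mean += (x - mean) / count   # incremental update; no full list is kept
        return mean

    def stream_numbers(path):
        """Hypothetical reader: yields one float per line of a huge file."""
        with open(path) as f:
            for line in f:
                yield float(line)

    # e.g. running_mean(stream_numbers("huge_dataset.txt"))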

There are also algorithms that optimize not for a specific part of the computer but for a standard. For example, numerical data that need precision are stored in floats or doubles. The problem is that computations involve rounding, and the more calculations you do on rounded numbers, the further you drift from the true result; in the worst case, subtracting nearly equal values wipes out most of the significant digits, which is called catastrophic cancellation. Some applications need critical precision, even at the cost of worse complexity. For this type of application, algorithms that organize their calculations to reduce or remove rounding error have been devised.
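
One classic example of such an algorithm is Kahan compensated summation, sketched minimally below: it carries a correction term for the low-order bits that plain floating-point addition rounds away, trading a few extra operations per element for much better accuracy.

    import math

    def kahan_sum(values):
        total = 0.0
        c = 0.0                    # running compensation for lost low-order bits
        for x in values:
            y = x - c
            t = total + y          # low-order bits of y may be rounded away here...
            c = (t - total) - y    # ...and are recovered here
            total = t
        return total

    data = [1.0] + [1e-16] * 10    # true sum is 1.000000000000001
    print(sum(data))               # 1.0 -- each tiny term is rounded away
    print(kahan_sum(data))         # ~1.000000000000001
    print(math.fsum(data))         # the stdlib's exact summation, for comparison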

Thus, designing an algorithm can also be a trade-off between abstraction and constraints.

In the end, we can say that an algorithm is as abstract as its target, and as its preconditions (the architecture it needs) allow. The more specific the target your algorithm aims at, the more it will probably rely on the hardware architecture.


