Until now, I always believed that you should learn programming languages that make you do low-level stuff (e.g., C) to understand what's really happening under the hood and how the computer really works. This question, this question, and an answer from this question reinforced that belief:

The more I program in the abstracted languages, the more I miss what got me into computers in the first place: poking around the computer and seeing what twitches. Assembler and C are very much suited for poking :)

Eventually, I came to think you would become a better programmer knowing this, because you'll know what's happening rather than assuming that everything is magic. And knowing low-level stuff is much more interesting than writing business programs, I think.

But a month ago, I came across the book Structure and Interpretation of Computer Programs, and everything on the web suggests that it is one of the best computer science books and that you will become a better programmer by reading it.

Now, I'm really enjoying the concepts a lot. But I find that this book makes it seem that abstraction is the greatest concept in computer science, while spending only one chapter on the low-level part.

My goal is to become a better programmer and to understand computer science more, and this has got me really confused. Mainly, shouldn't we avoid all abstractions and observe what is really happening at the very low level? I know why abstraction is great, but doesn't that prevent you from learning how computers work? Am I missing something?

For any given problem some abstractions will help more and others will get in the way more. For any given problem, you should choose a good tool (with appropriate level of abstraction). Both high level and low level thinking will make you a better programmer. The key is making sure you know when to use an abstraction and when to avoid it (google leaky abstractions). – martinkunev 1 hour ago
Consider this abstraction: your C or asm program sees memory as a huge contiguous array, while the underlying hardware is a very different story. Each person's "low level" is another person's "abstraction". – Christopher Schultz 36 mins ago

I know why abstraction is great, but doesn't that prevent you from learning how computers work?

Certainly not. If you want to understand the abstractions at work, then study those abstractions. If you want to understand the low-level technical details of a real, physical computer, then study those details. If you want to understand both, study both. (In my opinion, a good programmer will do that.)

You seem to have got yourself stuck in a false dichotomy, as if you can only understand one abstraction level at a time (or, worse, as if only one abstraction level exists at a time). That's rather like suggesting that it is fundamentally impossible for someone to have any understanding of both physics and mathematics.

A good start would be discovering the distinction between computer science and programming.


Eventually, I came to think you would become a better programmer knowing this, because you'll know what's happening rather than assuming that everything is magic.

These are not contradictory things. I have no idea how to pave a road, but I know that it is not magic.

But a month ago, I came across the book Structure and Interpretation of Computer Programs, and everything on the web suggests that it is one of the best computer science books and that you will become a better programmer by reading it.

Yes, that is absolutely true.

Mainly, shouldn't we avoid all abstractions and observe what is really happening at the very low level?

Maybe once or twice, but doing that every time will prevent you from being a good programmer. You don't need to watch the electrons flow through a gate to know that it's not magic. You don't need to see the CPU translate those electrons into the bitwise representation of a number to know it's not magic. You don't need to see those bits go down the wire to know that it's not magic. There are a few dozen abstractions necessary just to put these letters alongside one another, and probably a few hundred to get them from SE's servers to your computer.

Nobody knows all of them - not in depth.

This is a very common problem with beginners in programming. They want to know how things work. They think that low level is the best level, because it gives them the most power, or control, or knowledge. It does not.

Yeah, you can't treat it as magic. But you really don't need to know that stuff either. Concatenating strings by hand isn't interesting. Rolling your own sorting algorithm isn't interesting. Because any programmer can get the same result in a few orders of magnitude less time by using something written by far better programmers decades ago.
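To make that concrete, here is a minimal Java sketch (the class name and data are made up for illustration) contrasting a hand-rolled sort with the one-line library call that builds on decades of other people's work:

import java.util.Arrays;

public class SortExample {
    // Hand-rolled bubble sort: educational the first time, but slow to
    // write, easy to get wrong, and O(n^2) at runtime.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 3, 8, 1};
        bubbleSort(data);   // works, but why bother?
        // The abstraction: a tuned sort written, tested, and maintained
        // by the JDK authors. One line does the same job.
        Arrays.sort(data);
        System.out.println(Arrays.toString(data)); // prints [1, 3, 5, 8]
    }
}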

"doing that every time will prevent you from being a good programmer." - this is interesting, but when should we need to actually understand the details? For example, a few years ago while learning java, my professor never mentioned anything about object references. When we say Object foo = new Object() we're just creating an object to memory. It's until I learned a little bit of c++ when I found out java used pointers implicitly. It lead me to understand why my objects are getting their values changed for some unknown reason. So when should we ignore the details and when to focus on them? – recursivePointer 4 hours ago
I think the biggest problem beginners have is understanding that there's a difference between "it happens to be that way this time" and "it is guaranteed to be that way", and that they should care. Getting lost in the details looks like a part of that. – Deduplicator 3 hours ago
@recursivePointer: No offence but it sounds like you weren't taught Java very well. Those are the absolute basics of the language. – Lightness Races in Orbit 3 hours ago
@Lightness Races in Orbit yes, I realize that, and my classmates told me I never had to understand it because references are "abstracted" away from the programmer. I'd like to know when to stop digging into the details. I've realized references are a very important concept. Should we go deeper than that? – recursivePointer 3 hours ago
@recursivePointer: References and pointers are important. Pointers-to-pointers, maybe not so much. To keep this in-scope for Java, you should know that there is a garbage collector which reclaims memory you are not using (in C++ you have to delete objects or have a smart pointer do it for you). You do not need to know the exact algorithm it is using to do that (nor should you rely on said algorithm since Oracle might change it some day). You might want to know a little about how these algorithms work in theory. You don't need to see the actual code in practice. – Kevin 2 hours ago
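A minimal sketch of that last point, assuming only the standard library: the language exposes a collection hint, but deliberately promises nothing about when, or even whether, collection happens, so code must not depend on it.

import java.lang.ref.WeakReference;

public class GcHintExample {
    public static void main(String[] args) {
        Object obj = new Object();
        WeakReference<Object> ref = new WeakReference<>(obj);
        obj = null;        // drop the only strong reference
        System.gc();       // a hint only; the JVM is free to ignore it
        // May print null or a live object: the spec leaves the
        // collector's timing and algorithm unspecified on purpose.
        System.out.println(ref.get());
    }
}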

A key skill in programming is simultaneously thinking at multiple levels of abstraction. Another key skill is building abstractions; this skill uses the previous one. Low-level programming is valuable in part because it exercises and expands both these skills.

SICP models and implements interpreters, simulators for a machine model, and a compiler to that machine model (the output of which can then be run on the simulator). The machine model, while very simple, is not inherently less low-level than x86_64. Frankly, a good amount of low-level programming is dealing with arbitrary and arcane hardware/firmware interfaces which are no better than the arbitrary rules of business logic.
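For a taste of what that looks like, here is a toy sketch, in Java rather than SICP's Scheme, of the simplest kind of interpreter the book has you build; the code is illustrative and not from the book:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class TinyEval {
    private final Deque<String> tokens;

    TinyEval(String src) {
        // Crude tokenizer: pad parentheses with spaces, then split.
        tokens = new ArrayDeque<>(Arrays.asList(
            src.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+")));
    }

    // Grammar: expr := number | '(' op expr expr ')'
    int eval() {
        String t = tokens.poll();
        if (!"(".equals(t)) return Integer.parseInt(t);
        String op = tokens.poll();
        int left = eval();
        int right = eval();
        tokens.poll(); // consume ')'
        switch (op) {
            case "+": return left + right;
            case "-": return left - right;
            case "*": return left * right;
            default:  return left / right; // toy: anything else is '/'
        }
    }

    public static void main(String[] args) {
        System.out.println(new TinyEval("(+ 1 (* 2 3))").eval()); // prints 7
    }
}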

"a good amount of low-level programming is dealing with arbitrary and arcane hardware/firmware interfaces which are no better than the arbitrary rules of business logic" - can you explain this further? How is that possible? – recursivePointer 3 hours ago
Have you ever written a bootloader for x86_64? If not, try it some time. Most of the work required is due to backwards compatibility so that programs written in the '70s can still run on your 2016 processor. Much of the hardware initialization is also tied to interfaces designed in the '80s. Hardware peripherals are often far worse (in part because there's a lot more of them). Most of the same pressures exist for both firmware and business software, leading to most of the same problems. – Derek Elkins 3 hours ago

I know why abstraction is great, but doesn't that prevent you from learning how computers work? Am I missing something?

Go to a magic show and you'll be entertained but you won't understand how the tricks work. Read a book on magic and you'll learn how tricks work but you still won't be entertaining.

Do both. Work hard. And you might be both.

I've worked at high levels: SOLID, bash, UML. I've worked at low levels: TASM, arithmetic logic units, analog circuits. I can tell you, there is no level you can work at where there isn't some magic abstracted away from you.

The key is certainly not to understand every level of abstraction at once. It's to understand one level well enough to use it right, and well enough to know when it's time to move to a different one.

Any sufficiently advanced technology is indistinguishable from magic.

– Arthur C. Clarke


No, abstractions don't prevent you from understanding how things work. Abstractions allow you to understand why (to what end) things work the way they do.

First off, let's make one thing clear: pretty much everything you've ever known is at a level of abstraction. Java is an abstraction, C++ is an abstraction, C is an abstraction, x86 is an abstraction, ones and zeroes are an abstraction, digital circuits are an abstraction, integrated circuits are an abstraction, amplifiers are an abstraction, transistors are an abstraction, circuits are an abstraction, semiconductors are an abstraction, atoms are an abstraction, electron bands are an abstraction, electrons are an abstraction, and quarks could be an abstraction too - I don't really know, I'm just making a guess based on the pattern. By the logic that low level knowledge is required to understand how something really works, if you want to understand how computers really work, you need to study physics, then electrical engineering, then computer engineering, then computer science, and essentially work your way up in terms of abstraction. (I've taken the liberty of not mentioning that you also need to study math first, to really understand physics.)

Now realistically, the days when you could make sense of computers and programming by building your way up from the lowest-level details were the earliest days of computers. By now, the field has advanced too much for it to be rediscovered from scratch by a single person. There are hundreds of thousands of very qualified people specializing at every level of abstraction, working hard daily to make advances that you can't hope to understand without spending years studying a specific portion thoroughly and committing to keeping up with the latest advancements there.

As an example, consider this Java snippet:

public void example() {
    Object obj = new String("...");
    // ...
}

Unless you are well-versed in stack frames, heap data structures, concurrent generational tracing GC, memory compaction, static analysis (escape analysis in particular), virtual machines, dynamic analysis, assembly language, and executable space protection, you are wrong if you think you really know what happens in practice when you run something as simple as that snippet.

As another example, consider this C snippet:

#include <stdio.h>

void example(int i) {
    int j;  /* note: j is only ever assigned on one branch */
    if (i == 0) {
        j = i * 2;
        printf("Received zero, printing %d\n", j);
    } else {
        /* j is read here without ever having been initialized */
        printf("Received non-zero, printing %d\n", j);
    }
}

If you show it to a beginner, they'll tell you that when the argument is non-zero, the residual contents of a memory location will be printed, because variables are actually just memory addresses behind the scenes, and when you don't initialize them it's just that you don't move anything to their address, and anyway there's nothing magic about them. If you show it to a non-beginner, they'll tell you that this program's behavior is undefined for non-zero values of this function's parameter, and that the compiler could potentially remove the conditional, treat all arguments to this function as zero, replace all of the function's call sites with calls that pass zero as the argument, set all variables that are ever passed as arguments to this function to zero, or do other paranormal stuff.

The beginner in this example took everything he/she knows into account and arrived elegantly at a completely wrong answer because (a) he/she didn't read the spec of the language (which is an abstraction on purpose, not because C programmers aren't clever enough to understand computer architecture) and (b) tried to reason about implementation details which he/she didn't fully grasp and which have evolved way beyond his/her mental model by now. This example is fictional, but it draws from everyday real-world misconceptions - the kind that sometimes lead to perilous bugs and occasionally to famous security holes.

You're saying you aim to be a better programmer. Some of the best programmers I've met are the way they are because they understand abstractions and they can carry their knowledge to any language, and adapt it to any problem they need to solve, at any level they happen to be working on. Some of the worst programmers I've met are the way they are because they insist on focusing on details and trivia which they don't really understand and which a lot of the time aren't exactly up-to-date, or relevant to the problem, or applicable in the context they attempt to use them in, or have never been true in the first place.

Don't fall into the trap. After all, there is no single level of abstraction at any given point and a person isn't limited to understanding a single level of abstraction at any given moment. You can understand one, then move on to another.


Software engineering has multiple levels of detail. Your question is "what is the most rewarding, worthy, interesting level?"

It depends on your task, on what you want to be, and on what you care about. For big systems you should not care much about bit shifting and clock cycles. For embedded software running on a simple microcontroller, you will probably want to keep an eye on the amount of memory you use, and you may have to write some primitive timing routines.

Sometimes a choice made at a high abstraction level can impact the performance or resource use of your system. Recognizing that, and understanding why and how, will help you make better choices or find ways to make an existing system more efficient. This works both ways: knowing there is a high-level utility available will keep you from reinventing the wheel in your low-level domain. So having some understanding of levels other than the one you are working on may help you be more effective.

On a personal level you may want to ask yourself: do I like laying bricks, or do I want to design houses? Or maybe lay out cities? Or improve the science that makes a stronger, lighter, cheaper brick?

This is not what was asked. "shouldn't we avoid all abstractions and observe what is really happening at the very low level? I know why abstraction is great, but doesn't that prevent you from learning how computers work?" See How to Answer – gnat 1 hour ago
@gnat To you the answer is "no". – Martin Maat 1 hour ago
