CPU is the abbreviation for central processing unit. It is sometimes referred to simply as the central processor, but is more commonly called the processor.
0 votes · 3 answers · 144 views
Why do executables depend on the OS but not on the CPU?
If I write a C program and compile it to a .exe file, the .exe file contains raw machine instructions for the CPU. (I think.)
If so, how is it possible for me to run the compiled file on any computer ...
2 votes · 0 answers · 175 views
What are the minimum memory requirements a microprocessor must have to perform any calculation? [migrated]
Please excuse my ignorance of low-level things. A lot of what is written below might be very wrong.
As far as I understand (and I might be very wrong), there are two types of memory locations a ...
0 votes · 3 answers · 286 views
Is there still any value in learning assembly languages today? [closed]
Specifically for a game programmer.
If you really needed some assembly routines you could look for help, whereas back in the 80s/90s it was one of the mainstream languages. I read that compilers can ...
6 votes · 3 answers · 847 views
Japanese Multiplication simulation - is a program actually capable of improving calculation speed?
On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts and would like to have a look at the possibility at the software application of that concept.
I'd like ...
1 vote · 1 answer · 66 views
Do you have a metaphor for cache/data latencies? [closed]
From this answer about latencies, we have some numbers (yes, caveat caveat) for latencies when coding (slightly edited):
L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns
Main ...
1 vote · 1 answer · 189 views
Does Moore's law still hold true regarding its consequence for CPU speed? [closed]
Moore's law is an empirical law and in simple terms states that the number of transistors on integrated circuits doubles approximately every two years.
One of the consequences of Moore's law is that ...
0 votes · 2 answers · 194 views
How is byte loading/storing implemented by the CPU?
I know that on a 32-bit machine, the CPU reads from memory 32 bits at a time. Since the registers in this case are 32 bits in size too, I can understand how this works.
What I don't understand is how the CPU ...
0 votes · 2 answers · 74 views
Multi-level paging tables
Referring to the image here:
From http://en.wikipedia.org/wiki/File:X86_Paging_4K.svg
Could somebody please explain something for me? I don't get exactly how this works. As I understand it the ...
0 votes · 2 answers · 133 views
Intel Nehalem/SB/IB/Haswell CPUs, cache vs TLB
On Nehalem and later Intel CPU architectures, what is the interaction between the L1 cache, L2 cache, L1 DTLB, and L2 DTLB? In all the images I have found there isn't a clear explanation of whether the CPU looks ...
2 votes · 2 answers · 119 views
How does Branch Target Prediction differ from Branch Prediction?
I do not understand how BTP differs from BP. Yes, I understand BP evaluates whether a conditional is true/false, but surely this also implicitly determines the "target" instruction?
If I predict the ...
2 votes · 1 answer · 290 views
How does the CPU know when it received RAM data and instructions?
Well, the title is pretty much self-explanatory, but I'll expand on it a bit and explain the origin of my question.
So, I've been wondering how the CPU knows when it has received the RAM data. I'm pretty ...
4 votes · 1 answer · 313 views
Working with CPU cycles in Gameboy Advance
I am working on a GBA emulator and am stuck at implementing CPU cycles.
I only have basic knowledge about it: each instruction in ARM and
THUMB mode has a different set of cycles for each ...
4 votes · 2 answers · 504 views
How long is a typical modern microprocessor pipeline?
I have learnt a bit about pipelining, but those examples were 4-stage and 5-stage designs, and I think typical modern pipelines are much longer and more complicated in practice. How long are typical pipelines, and how ...
11 votes · 3 answers · 4k views
What are CPU registers?
This question has been bothering me for some time now and today I figured I would Google it. I've read some stuff about it and it seemed very similar to what I've always known as processor cache.
Is ...
2 votes · 1 answer · 307 views
Can we illustrate a CPU pipeline with a UML sequence diagram?
I am studying multicore pipelining, and the diagrams used are not UML sequence diagrams, for instance.
Why not remake such a diagram as a UML sequence diagram? Wouldn't that be clearer, so that we can see ...
24 votes · 1 answer · 598 views
Performance of single-assignment ADT oriented code on modern CPUs
Working with immutable data and single assignments has the obvious effect of requiring more memory, one would presume, because you're constantly creating new values (though compilers under the covers ...
2 votes · 1 answer · 238 views
Is there a genetic relationship between the ARM and PDP-11 architectures?
Reading about the ARM architecture, I found many similarities to the PDP-11 architecture which do not exist between ARM and x86.
For example,
General-purpose registers named Rx compared to AX, BX,... for ...
2 votes · 2 answers · 2k views
Can multiple CPUs/cores access the same RAM simultaneously?
This is what I guess would happen:
If two cores tried to access the same address in RAM, one would have to wait for the other to access the RAM. The second time that each core would try to access ...
0 votes · 3 answers · 1k views
Instruction vs data cache usage
Say I've got a cache memory where instruction and data have different cache memories ("Harvard architecture"). Which cache, instruction or data, is used most often? I mean "most often" as in time, not ...
1 vote · 1 answer · 628 views
Do compilers have to be written for each model of CPU?
Do you need to take account of the different processors and their instructions when writing a compiler? Have instructions been standardised? Or what tools and techniques are available to assist with ...
2 votes · 2 answers · 275 views
CPU recommendation for learning multiprocessor programming [closed]
This might be an odd question and the place to ask might not be appropriate.
I am very interested in working with multiprocessor programming and parallel algorithms, mostly for research purposes. I ...
5 votes · 3 answers · 568 views
Cloud computing platforms only have one CPU. Does this mean I shouldn't use Parallel Programming?
Almost every cloud instance I can find only offers one CPU. Why is there only one CPU, and should I expect this to increase in the future?
Does this design impact my code design so that I exclude ...
10 votes · 4 answers · 2k views
What is the meaning of the sentence “we wanted it to be compiled so it’s not burning CPU doing the wrong stuff.”
I was reading this article. It has the following paragraph.
And did Scala turn out to be fast? Well, what’s your definition of fast? About as fast as Java. It doesn’t have to be as fast as C or ...
11 votes · 2 answers · 13k views
What's the DCPU-16 thing all about? [closed]
Ever since Notch (of Minecraft fame) announced his next project will include programmable 16-bit CPUs in-game, everybody seems to want to write VMs for the specification Notch has written up. I've seen ...
3 votes · 2 answers · 234 views
Benchmarking CPU processing power
Given that many tools for computer benchmarking are available already, I'd like to write my own, starting with processing power measurement.
I'd like to write it in C under Linux, but other ...
5 votes · 3 answers · 285 views
How do I balance CPU backward compatibility whilst still being able to use cutting-edge features?
As I learn more about C and C++ I'm starting to wonder: How can a compiler use newer features of processors without limiting it just to people with, for example, Intel Core i7's?
Think about it: new ...
2 votes · 3 answers · 435 views
Computers that operate exclusively on boolean algebra
I was wondering if there are any computers that operate exclusively on boolean operations. For example, no add, sub, mult, or div in the instruction set (although these could be emulated with the ...
16 votes · 3 answers · 3k views
Why does the stack grow downward?
I'm assuming there's a history to it, but why does the stack grow downward?
It seems to me like buffer overflows would be a lot harder to exploit if the stack grew upward...
6 votes · 4 answers · 3k views
How do lines of code get executed by the CPU?
I'm trying to really understand how exactly a high-level language is converted into machine code and then executed by the CPU.
I understand that the code is compiled into machine code, which is the ...
4 votes · 3 answers · 910 views
CPU Architecture and floating-point math
I'm trying to wrap my head around some details about how floating point math is performed on the CPU, trying to better understand what data types to use etc.
I think I have a fairly good ...
8 votes · 7 answers · 4k views
When should I be offloading work to a GPU instead of the CPU?
Newer systems such as OpenCL are being made so that we can run more and more code on our graphics processors, which makes sense, because we should be able to utilise as much of the power in our ...
7 votes · 2 answers · 4k views
Is this a valid smartphone CPU vs. desktop CPU speed comparison (Android G1 vs. old Pentium 4 desktop)?
I am trying to estimate speed differences when creating code on my desktop PC that will be ported to Android phones. I don't need to be exact, but a good estimation will help stop me from creating ...
13 votes · 3 answers · 4k views
OpenGL CPU vs. GPU
So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed ...
3 votes · 5 answers · 315 views
Based on what I read in “Inside the Machine”, is this approach to branches more optimal?
So I have been reading Inside the Machine by Jon Stokes. It is a FANTASTIC book, and it has got me thinking about the effects of programming on processors...
Given a branch unit in a CPU and a ...
6 votes · 3 answers · 1k views
What's so special about x64 and programming x86?
I know this is a bit of a funny question, but I haven't had the chance to find out what difference it makes to program for x64 versus x86 in high-level languages (.NET, for instance).
Any explanations ...
14 votes · 2 answers · 6k views
Why are there separate L1 caches for data and instructions?
I just went over some slides and noticed that the L1 cache (at least on Intel CPUs) is split into separate data and instruction caches. I would like to know why this is.
6 votes · 3 answers · 1k views
32-bit / 64-bit processors - what is that feature officially called? [closed]
I see talk of CPUs being either 32-bit or 64-bit processors; this information is often required on download pages.
But what is that feature officially called?
i.e., what's the inverse of saying "I ...
14 votes · 6 answers · 3k views
How often do CPUs make calculation errors?
In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks how testing isn't enough. E.g., he points out ...
5 votes · 2 answers · 305 views
Programming language constructs for cache optimization?
Clearly, optimizing cache usage is bound to improve my program's efficiency. Surprisingly, I don't see many programming languages actually having this sort of feature. So here's my question:
...
3 votes · 3 answers · 243 views
.NET processing unit [closed]
Do you think we'll ever see an IL (or other bytecode) processing unit?
It sounds possible and would have a major benefit, because we wouldn't need the JITter. This isn't the same as compiling .NET ...