
I have done some basic object-oriented programming with C++ (creating a B-tree, hashing algorithms, doubly linked lists) and I have done small projects in C (like making a scientific calculator, etc.).

How different is hardware programming (specifically for microcontrollers) from software/object-oriented programming in terms of the mindset and "thinking" that the programmer has to have?

Is one usually considered harder than the other by most people?

With my background (as described above), would I need a lot of preparation before going into hardware programming, or can I dive straight in without too much of it?

The biggest learning curve will be how to drive the specific hardware in your micro. That will involve poring over the data sheets for hours. Unfortunately, there's no easy way out. – drxzcl Jul 19 '11 at 23:24
@rrazd, I noticed that you've included the arduino tag. Is this because you want to use the Arduino Wiring language and libraries? Or will you be writing your embedded applications in pure C? If you intend to stick with the Arduino environment, it's pretty safe and easy to play around in, as they have abstracted some things away from the hardware. – Jon L Jul 19 '11 at 23:53
@Jon I plan to use an Arduino board for starters. Isn't it similar to the C language? I thought it involved the same basic concepts.... – rrazd Jul 19 '11 at 23:59
I'm wondering whether you mean what many people would call 'I/O programming', or if you anticipate re-arranging hardware with code. The arduino is decidedly the former; the latter would be the domain of FPGAs. – JustJeff Jul 20 '11 at 1:02
@rrazd - I changed the title; "hardware programming" sounds too much like HDL (Hardware Description Language), e.g. VHDL and Verilog, which are used to program FPGAs and CPLDs. – stevenvh Jul 20 '11 at 6:24

6 Answers


You will have to completely abandon the object-oriented paradigm when dealing with most microcontrollers.

Microcontrollers are generally register- and RAM-limited, with slow clock rates and no pipelining / parallel code paths. You can forget about Java on a PIC, for example.

You have to get into an assembly-language mindset, and write procedurally.

You have to keep your code relatively flat and avoid recursion, as RAM limitations can often lead to stack issues.

You have to learn how to write efficient interrupt service routines (often in assembly language).

You may have to refactor parts of the code manually, in assembly language, to implement functionality that the compiler doesn't support (or supports poorly).

You have to write mathematical code that takes into account the word size and lack of FPU capabilities of most microcontrollers (e.g. doing 32-bit multiplication on an 8-bit micro = evil).
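
For instance (a sketch, not from this answer; the ADC-to-temperature mapping is hypothetical): keep values in scaled integer units rather than float, and pay for a wide multiply only at the one spot that actually needs it:

    #include <stdint.h>

    /* Temperature kept in tenths of a degree C in an int16_t, so
       23.5 C is stored as 235 -- no float, no FPU needed.
       Assumes (hypothetically) a 10-bit ADC where 0..1023 maps to
       0.0..100.0 C. A 16-bit intermediate would overflow
       (1023 * 1000 > 65535), so widen once, multiply, then narrow. */
    static int16_t adc_to_tenths(uint16_t adc)
    {
        return (int16_t)(((uint32_t)adc * 1000u) / 1023u);
    }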

It is a different world. To me, having a computer science or professional programming background can be as much of a hindrance as having no knowledge at all when dealing with microcontrollers.

+1 for the last sentence – Nick Halden Jul 20 '11 at 13:24
You do not have to entirely abandon the object-oriented paradigm, but on smaller micros it may be necessary to abandon heavyweight object implementations and really think about the best way to solve each problem. Often that is procedural, but lightweight objects, well implemented (usually by hand), can at times shrink the size of complicated microcontroller projects. – Chris Stratton Aug 30 '11 at 6:23
All of this is true except abandoning object-orientation. You will probably not use a language with OO features but that doesn't rule out object orientation. For microcontrollers you are going to write drivers for all hardware peripherals (ADCs, serial bus controllers, PWM, etc etc). Such a driver should always be written in an object-oriented manner so that it is 1) autonomous and doesn't know/care about the rest of the program, and 2) implementing private encapsulation so that the rest of the program can't go in and fiddle with it. This is 100% possible in C and will not affect performance. – Lundin Oct 18 '11 at 14:11
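
A minimal sketch of the pattern Lundin describes, in plain C (the driver and all names are illustrative, not from any vendor library): the header exposes functions only, and the state lives file-static in the .c file, so the rest of the program cannot touch it:

    /* adc_driver.h -- the only thing the rest of the program sees */
    #include <stdint.h>
    void adc_init(void);
    uint16_t adc_read(uint8_t channel);

    /* adc_driver.c -- state is file-static ("private"), so no other
       module can fiddle with it; only the functions above can. */
    #include <stdint.h>
    #include "adc_driver.h"

    static uint16_t last_sample;   /* encapsulated: invisible outside */

    void adc_init(void)
    {
        /* configure the ADC peripheral registers here */
    }

    uint16_t adc_read(uint8_t channel)
    {
        (void)channel;   /* select channel, start conversion, wait */
        return last_sample;
    }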

You need to think about several things:

  • You will use C as the language
  • You can still create a feeling of object orientation using function pointers, so that you can override functions etc. I have used this method in past and current projects and it works very well, so OO is partially there, just not in the C++ sense (a sketch follows this list).
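
For example (a sketch with illustrative names, not tied to any particular part):

    /* "Overridable" UART driver: a struct of function pointers that
       can be re-pointed at a hardware UART, a bit-banged one, or a
       mock implementation for testing. */
    #include <stdint.h>

    typedef struct {
        void (*init)(uint32_t baud);
        void (*put_byte)(uint8_t b);
    } uart_driver;

    static void hw_init(uint32_t baud) { (void)baud; /* program UART registers */ }
    static void hw_put(uint8_t b)      { (void)b;    /* write TX register      */ }

    static uart_driver uart = { hw_init, hw_put };   /* "bind" an implementation */

    void log_char(uint8_t c)
    {
        uart.put_byte(c);   /* dispatch through the pointer, like a virtual call */
    }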

There are other limitations that will come into play, such as limited speed and memory. So, as general guidelines:

  • I avoid using the heap. If there is a way to solve the problem without malloc(), I do that: for example, I preallocate buffers and just use them (see the buffer-pool sketch after this list).
  • I intentionally reduce the stack size in the compiler settings so that I hit stack-size issues early on, and then optimize carefully.
  • I assume every single line of code will be interrupted by an event, so I avoid non-reentrant code.
  • I assume interrupts can nest, and I write that code accordingly.
  • I avoid using an OS unless it is necessary; 70% of embedded projects don't really need one. If I must use an OS, I only use something with source code available (FreeRTOS etc.).
  • If I am using an OS, I almost always abstract things so that I can change the OS in a matter of hours.
  • For drivers etc. I only use the libraries provided by the vendor; I never directly fiddle with the bits unless I have no other choice. This keeps the code readable and improves debugging.
  • I check the loops and other hot paths, especially in ISRs, to make sure they are fast enough.
  • I always keep a few GPIOs handy to measure things: context switching, ISR run time, etc.
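
As a concrete illustration of the first guideline (a sketch only; all names are made up, and a real version would guard the bookkeeping against interrupts, per the reentrancy points above):

    /* Fixed pool of message buffers, allocated at compile time --
       no malloc(), so no heap fragmentation and no out-of-memory
       surprises at run time. */
    #include <stdint.h>
    #include <stddef.h>

    #define MSG_LEN   32
    #define MSG_COUNT 4

    static uint8_t msg_pool[MSG_COUNT][MSG_LEN];
    static uint8_t msg_in_use[MSG_COUNT];   /* 0 = free, 1 = taken */

    uint8_t *msg_alloc(void)
    {
        for (size_t i = 0; i < MSG_COUNT; i++) {
            if (!msg_in_use[i]) {
                msg_in_use[i] = 1;
                return msg_pool[i];
            }
        }
        return NULL;   /* pool exhausted: the caller must handle this */
    }

    void msg_free(uint8_t *p)
    {
        for (size_t i = 0; i < MSG_COUNT; i++) {
            if (p == msg_pool[i]) {
                msg_in_use[i] = 0;
                return;
            }
        }
    }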

The list goes on; I am probably below average in terms of software programming, and I am sure there are better practices.

+1 for the "you can use OO paradigms if you want". What you need to check at the door is not OO design. OOD is just a philosophy that encourages you to keep related code and data together. What you need to leave behind is the way OO is implemented in enterprise systems, with multiple layers of abstraction, inversion of control and all that jazz. Your firmware's task is to drive the hardware, that's it. – drxzcl Jul 20 '11 at 7:40

I do both, so here's my view.

I think the most important skill by far in embedded work is your debugging ability. The required mindset is much different in that so much more can go wrong, and you must be very open to considering all the different ways that what you are trying to do can fail.

This is the single biggest issue for new embedded developers. PC people tend to have it rougher, as they're used to so much just working for them. They'll tend to waste a lot of time searching for tools to do things for them instead (hint: there aren't many). There's a lot of banging heads into walls over and over, not knowing what else to do. If you feel you're getting stuck, step back and figure out whether you can identify everything that might be going wrong. Systematically narrow down your list of potential problems until you figure it out. It follows directly from this process that you should limit the scope of problems by not changing too much at once.

Experienced embedded people tend to take debugging for granted... most of the people who can't do it well don't last long (or work in large companies that simply accept "firmware is hard" as an answer for why a certain feature is years late)

You're working on code that runs on a system external to your development machine, with varying degrees of visibility into your target from platform to platform. If it's under your control, push for development aids to help increase this visibility into your target system. Use debug serial ports, bit-banged debug output, the famous blinking light, etc. Certainly, at a minimum, learn how to use an oscilloscope and use pin I/O with the 'scope to see when certain functions enter/exit, ISRs fire, etc. (a sketch follows). I've watched people struggle for literally years longer than necessary simply because they never bothered to set up and learn how to use a proper JTAG debugger link.
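
For example (a sketch assuming an AVR target; the interrupt vector and pin are arbitrary choices), marking ISR entry/exit on a spare pin so the 'scope shows when, and for how long, the ISR runs:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    /* PB0 is assumed to be configured as an output during init. */
    ISR(TIMER0_OVF_vect)
    {
        PORTB |= _BV(PB0);     /* pin high: ISR entered */
        /* ... the actual interrupt work goes here ... */
        PORTB &= ~_BV(PB0);    /* pin low: ISR finished */
    }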

It's much more important than on a PC to be aware of exactly what resources you have. Read the datasheets carefully. Consider the resource 'cost' of anything you are trying to do. Learn resource-oriented debugging tricks, like filling stack space with a magic value to track stack usage (sketched below).
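
The magic-value trick looks roughly like this (a sketch; the stack-bound symbols are linker-script-specific and hypothetical here):

    #include <stdint.h>

    #define STACK_MAGIC 0xA5A5A5A5u

    extern uint32_t _stack_bottom[];   /* hypothetical linker symbols */
    extern uint32_t _stack_top[];

    /* Call from early startup code; in practice you paint only
       below the current stack pointer, not the region in use. */
    void stack_paint(void)
    {
        for (uint32_t *p = _stack_bottom; p < _stack_top; p++)
            *p = STACK_MAGIC;
    }

    /* Later: the first overwritten word marks the deepest the
       stack has ever grown. */
    uint32_t stack_high_water_bytes(void)
    {
        uint32_t *p = _stack_bottom;
        while (p < _stack_top && *p == STACK_MAGIC)
            p++;
        return (uint32_t)(_stack_top - p) * sizeof(uint32_t);
    }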

While some degree of debugging skill is required for both PC and embedded software, it's much more important with embedded.


I'm presuming your C++ experience is PC-based.

A common error by programmers moving from PC to microcontroller is not realizing how limited resources can be. On a PC nobody will stop you from creating a table with 100 000 entries, or writing a program which compiles to 1 MB of machine code.
There are microcontrollers which have a wealth of memory resources, especially at the high end, but it's still a far cry from what you'll be used to. For a hobby project you can probably always go for the maximum, but in a professional project you'll often be forced to work with a smaller device because it's cheaper.
On one project I was working with a TI MSP430F1101: 1 KB of program memory, 128 bytes of configuration flash, 128 bytes of RAM. The program didn't fit in the 1 KB, so I had to place a 23-byte function in the configuration flash. With these small controllers you count by the byte. On another occasion the program memory was 4 bytes too small. The boss wouldn't let me use the controller with more memory, so instead I had to optimize already optimized machine code (it was already written in assembler) to gain those 4 bytes. You get the picture.

Depending on the platform you're working on, you'll have to deal with very low-level I/O. Some development environments have functions to write to an LCD, but on others you're on your own, and you'll have to read the LCD's datasheet from start to finish to know how to control it.
You may have to control a relay; that's easier than an LCD, but it will require you to go down to the register level of the microcontroller (a sketch of this follows). Again, a datasheet or user manual. You'll have to get to know the microcontroller's structure, which you'll find in a block diagram, again in the datasheet. In the days of the microprocessor we talked about a programming model, which was basically a lineup of the processor's registers. Today's microcontrollers are so complex that a description of all the registers can take the best part of a 100-page datasheet. IIRC, just the description of the clock module for the MSP430 was 25 pages long.
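
For instance, on the MSP430 family mentioned above, switching a relay from a GPIO boils down to a couple of register writes (P1DIR and P1OUT are real MSP430 registers; the choice of pin P1.0 is an assumption):

    #include <msp430.h>

    void relay_init(void)
    {
        P1DIR |= BIT0;     /* make P1.0 an output */
        P1OUT &= ~BIT0;    /* relay off initially */
    }

    void relay_on(void)  { P1OUT |= BIT0;  }
    void relay_off(void) { P1OUT &= ~BIT0; }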

You'll often have to deal with real-time event handling. An interrupt you have to handle within 10 µs, for example, and during that another interrupt may arrive which requires the same timing accuracy.

Microcontrollers are often programmed in C. C++ is rather resource-hungry, so that's usually out. (Most C++ implementations for microcontrollers offer a limited subset of C++.) Like I said, depending on the platform you may have an extensive library of functions available which can save you quite some development time. It's worth taking some time to study it; it may save you a lot of time later on if you know what's available.

I've written games for the Atari 2600, which is a rather limited platform; my first published game was essentially 4K code (since I had a 32K cart, I added some extra goodies, but the 4K version was entirely playable); RAM is 128 bytes. I find it interesting to contemplate that in the year I wrote that game (2005), other games were published that were, literally, a million times as big. – supercat Aug 29 '11 at 15:17
@supercat - Yeah, but that was to be expected, in 2005 the Atari 2600 was already 200 years old! I've never played action games like FPSs, but when I look at what's needed to play them, a GPU much more powerful than your CPU, both programmatically and electrically, I can't help but shake my head :-). I played chess (Sargon) on a 16k TRS-80 IIRC. My brother's Flight Simulator didn't need more. – stevenvh Aug 29 '11 at 15:31
Not quite 200 years old. It debuted in 1977, so it wasn't even 30. While I agree that that was aeons ago in technological terms, I'm still blown away by the fact that there's not just a hundred-fold increase, nor a thousand-fold increase, but a MILLION-fold increase in both RAM and code size. Speed hasn't gotten quite that much of a boost, since the 2600 was 1.19MHz and newer systems are only in the low GHz range. They can do a lot more per cycle than the 2600 (which could--and had to--generate 1/76 of a video line each cycle) but I don't think they're 1,000,000x as fast. – supercat Aug 29 '11 at 15:48

For every Arduino library method that you call there is a wealth of C/C++ code that makes it possible; it's simply packaged nicely for you to use as an API. Take a look at the Arduino source code under the directory hardware/arduino/* and you'll see all the C/C++ written for you, which interacts directly with the AVR microcontroller's registers. If your objective is to learn how to write stuff like this (directly for the hardware), then there is a lot to cover (a small contrast is sketched below). If your objective is to get something to work using their libraries, then there might not be much to talk about, as most of the hard work is done for you, and their libraries and development environment are very easy to use.
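
As a small illustration of the difference (a sketch; the two pairs of lines below have the same effect on an Arduino Uno, where pin 13 happens to be PORTB bit 5 on the ATmega328P):

    void setup()
    {
        pinMode(13, OUTPUT);       /* Arduino library calls */
        digitalWrite(13, HIGH);

        /* Roughly what the library boils down to on an Uno:
           direct AVR register access, no library involved. */
        DDRB  |= _BV(DDB5);        /* direction: output */
        PORTB |= _BV(PORTB5);      /* drive the pin high */
    }

    void loop() { }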

Some rules of thumb, though, when working with resource-constrained devices, which could apply either to the Arduino environment or to others:

Be aware of how much memory you are using: both code size (which goes to flash memory) and static RAM usage (globals and constants in your code that will always occupy RAM). I would argue that static RAM usage is a bit more important when starting out, as it is easy to overlook. It's not uncommon to have only 1000 bytes to work with for your stack, heap, and static data. Be wise in how you spend it, so avoid things like long arrays of 4-byte values when bytes or unsigned chars (1 byte each) will suffice (see the sketch below). Another answer here covers some other important points very well, so I'll stop here; I mainly wanted to get the point across that there is a lot to cover if you're not using the Arduino library and are writing your own C libraries.
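
A quick illustration of the element-size point (assuming an AVR-class part, where long is 4 bytes):

    #include <stdint.h>

    long    big_table[100];     /* 400 bytes of RAM */
    int16_t mid_table[100];     /* 200 bytes */
    uint8_t small_table[100];   /* 100 bytes -- enough if every value fits 0..255 */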


"hardware programming" can mean a lot of things. Programming a very small chip (think 10F200, 512 instructions, a few bytes of RAM) can be almost like designing an electronic circuit. On the other side programming a big Cortex microcontroller (1 Mb FLASH, 64 kB RAM) can be a lot like PC/GUI programming, using a big GUI toolkit. IMHO a good embedded/real-time programmer needs skills both from the software egineering side and from the circuit design side. For the bigger uC C++ is a good language choice, for the smaller ones C is probably better suited. Assemby knowledge can be handy, but I would not recommend doing serious projects entirely in assembly.

I have done serious embedded work with people from both (software and EE) sides. I generally prefer the software people, provided that they have some experience with multi-threaded programming.

Your question sounds like you want to dive into embedded programming. By all means do so. For the low-level aspects (interfacing with the peripherals in your chip and the hardware around it) you will need to learn some new skills, but it is mostly a lot of work without many new concepts. For the higher layers of your projects you can draw on your existing knowledge.

