Wikipedia gives pretty good summaries of most of these terms. Here is my take on them:
Build automation is essentially scripting how the software is built rather than manually invoking the compiler. This is typically accomplished with tools such as Makefiles or Ant.
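For a small Java project, that can be as simple as a Makefile like the one below; the directory layout and artifact name are just assumptions for illustration.

```
# Minimal build-automation sketch: "make package" compiles and jars the project.
# The src/ layout and jar name are hypothetical.
SRC := $(shell find src -name '*.java')

compile: $(SRC)
	mkdir -p classes
	javac -d classes $(SRC)

package: compile
	jar cf myapp.jar -C classes .

.PHONY: compile package
```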
Deployment automation is less well-defined, but it involves taking your built software and "deploying" or "installing" it on a test or production system.
Continuous integration means having an automated process build your software continuously as developers check in code. For example, every 15 to 30 minutes a server might wake up, look for new check-ins, and build the project if any changes were made.
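A dedicated CI server normally handles this for you, but as a rough sketch of the mechanism (assuming a git repository that is already cloned on its default branch, and reusing the package target from the sketch above), a cron job could run a make target like this every 15 minutes:

```
# Polling CI sketch, driven by a cron entry such as:
#   */15 * * * *  cd /srv/ci/myproject && make ci-poll
# Rebuilds only if new check-ins have arrived since the last poll.
ci-poll:
	git fetch origin
	@if [ "$$(git rev-parse HEAD)" != "$$(git rev-parse origin/master)" ]; then \
		git merge --ff-only origin/master && $(MAKE) package; \
	fi

.PHONY: ci-poll
```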
Continuous delivery is a combination of CI and deployment automation, where the builds produced by CI are also automatically deployed to a test system.
At the very least, you need build automation, i.e. a build script of some sort. That allows you to click one button or issue one command to build your project. The benefit is reducing errors from manually running the steps. Complex build environments might involve generating code (think DAOs generated from configuration, or interface code such as JAXB classes), compiling code, packaging it up, customizing metadata, and so on. With that much to do you need a checklist: why not make the checklist be your build script, and use a tool to run it? It reduces errors and provides consistency.
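To make that concrete, here is roughly what the checklist-as-build-script idea might look like as a Makefile, with each step a target and the dependencies enforcing the order; the schema, directory names, version, and manifest entry are all made up for illustration:

```
# Each checklist item is a target; "make package" runs them in order.
GEN_SRC := gen-src
CLASSES := classes
VERSION := 1.4.2

generate:                 # generate interface code (e.g. JAXB classes) from a schema
	mkdir -p $(GEN_SRC)
	xjc -d $(GEN_SRC) schema/orders.xsd

compile: generate         # compile hand-written plus generated sources
	mkdir -p $(CLASSES)
	javac -d $(CLASSES) $$(find src $(GEN_SRC) -name '*.java')

package: compile          # package it up and customize the metadata
	printf 'Implementation-Version: %s\n' $(VERSION) > manifest.txt
	jar cfm myapp.jar manifest.txt -C $(CLASSES) .

.PHONY: generate compile package
```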
Next up is CI: this is really good to have but not strictly required. It helps identify build problems early. If you have multiple developers checking in code throughout the day, and perhaps not syncing up their own workspaces constantly, there is a risk that their changes will interfere with each other. I am referring specifically to static errors such as broken compiles, not version-control conflicts. A CI build server mitigates this risk.
Finally we have the deployment steps. The idea here is to save time and reduce errors from manually deploying software. Much like with builds, there are a hundred ways to screw up a software deployment. I have personally stayed late at the office fixing manual deployment problems on many occasions when we needed a functioning system for customers arriving on-site the next day. Automating multiple systems does introduce more risk: instead of one system possibly crashing or having weird errors, we now have multiple systems that can go wrong. However, that risk is far lower than somebody missing a step on a checklist or issuing the wrong command and botching a deployment. If you are lucky you can simply restore a DB backup and start over; if you are unlucky the error causes the system to function incorrectly. Is it a software defect? Did the technician not set a configuration correctly? That takes time to diagnose, time that you may not have and that need not be spent if you automate the process.
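A scripted deployment might look something like the sketch below. The host, paths, service name, and the PostgreSQL-style backup command are stand-ins for whatever your environment actually uses, and it assumes a package target like the ones sketched earlier; the point is that the backup, copy, and restart happen the same way, in the same order, every time:

```
# Deployment sketch: back up the database, push the artifact, restart the service.
# Host, paths, database and service names are hypothetical.
HOST     := testbox.example.com
ARTIFACT := myapp.jar

deploy: package
	ssh $(HOST) 'pg_dump mydb > /var/backups/mydb-$$(date +%F).sql'
	scp $(ARTIFACT) $(HOST):/opt/myapp/$(ARTIFACT)
	ssh $(HOST) 'sudo systemctl restart myapp'

.PHONY: deploy
```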