I was brought up old school, where we learned to design the database schema BEFORE the application's business layer (OOAD was used for everything else). I've been pretty good at designing schemas (IMHO :) and I normalized only to remove unnecessary redundancy, but not where it hurt speed, i.e. if joins were a performance hit, the redundancy was left in place. But mostly it wasn't.
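For example (table and column names purely illustrative), this is the kind of deliberate, measured redundancy I mean: the duplicated column technically violates 3NF, but it's kept so a hot read path doesn't have to join every time.

```sql
CREATE TABLE customers (
    customer_id   INT          NOT NULL PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL
);

CREATE TABLE orders (
    order_id      INT          NOT NULL PRIMARY KEY,
    customer_id   INT          NOT NULL REFERENCES customers (customer_id),
    customer_name VARCHAR(100) NOT NULL,  -- redundant on purpose: avoids a join on the order-listing query
    order_date    DATE         NOT NULL
);
```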
With the advent of ORM frameworks like Ruby's ActiveRecord or ActiveJDBC (and a few others I can't remember, but I'm sure there are plenty), it seems they prefer having a surrogate key for every table, even if some already have natural primary keys like 'email' - breaking 2NF outright. Okay, I understand it's not too big a deal, but it (almost) gets on my nerves when some of these ORMs (or programmers) don't acknowledge 1-1 or 1-0|1 relationships (i.e. 1 to 0 or 1). They stipulate that it's just better to have everything as one big table, no matter if it has a ton of nulls.
"todays systems can handle it" is the comment I've heard more often.
I agree that memory constraints used to bear a direct correlation to normalization (there are other benefits too :), but with today's cheap memory and quad-core machines, is the concept of DB normalization just left to the textbooks? As DBAs, do you still practice normalization to 3NF (if not BCNF :)? Does it matter? Is "dirty schema" design acceptable for production systems? Just how should one make the case "for" normalization, if it's still relevant?
(Note: I'm not talking about data warehouse star/snowflake schemas, which have redundancy as a part/need of the design, but about commercial systems with a backend database, like StackExchange for example.)