A quick benchmark (usual loungebook caveats apply) of the three fastest posts so far for getting the direct result, over list lengths 8-22, taking Ceiling[length/2] elements at a time:

As can be seen, manipulating subsets to get to the answer, while both such approaches are vastly faster than the OP's, quickly becomes untenable (and not much beyond this range it becomes impossible: you simply won't have enough RAM/disk to store the intermediate results).
The fastest is quick enough that, for larger problems, one can use its intermediate results to run an actual complete simulation faster than the slower two can merely calculate the result, and using it to directly calculate the mean is effectively instantaneous.
Original post follows
Total[RandomChoice[
  1 + ((Tr@Unitize@Subtract[Differences@#, 1]) & /@ Subsets[ll, {n}]),
  NN]]
Accomplishes the same thing, over 1000X faster on the loungebook...
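For anyone who'd like to sanity-check the idea outside Mathematica, here's a rough Python sketch of the same simulation. The names run_counts and simulate_total are mine, and I'm assuming the per-subset quantity is the number of runs of consecutive integers, i.e. 1 plus the count of non-unit gaps:

```python
from itertools import combinations
import random

def run_counts(ll, n):
    """Number of maximal runs of consecutive values in each n-subset of ll,
    i.e. 1 + (number of adjacent differences that are not 1)."""
    return [1 + sum(1 for a, b in zip(s, s[1:]) if b - a != 1)
            for s in combinations(ll, n)]

def simulate_total(ll, n, NN, seed=0):
    """Total of NN draws (with replacement) from the per-subset run counts,
    mirroring Total[RandomChoice[..., NN]]."""
    random.seed(seed)
    return sum(random.choices(run_counts(ll, n), k=NN))
```

E.g., simulate_total(range(1, 13), 7, 1000) corresponds to ll = Range@12, n = 7, NN = 1000.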
But I remain curious - what exactly is this calculating? To what end?
In any case, if you just want the end result (and simulation is not the goal, which I assumed it was), Patrick's second answer is the way to go...
It can be sped up quite a bit for large cases:
NN + NN Mean[(Tr@Unitize@Subtract[Differences@#, 1]) & /@ Subsets[ll, {n}]]
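The same mean can be cross-checked by brute-force enumeration. A minimal Python sketch (function name mine; assuming the Mathematica expression averages the gap count over all n-subsets, so NN * mean_runs(l, n) corresponds to NN + NN Mean[...]):

```python
from itertools import combinations
from fractions import Fraction

def mean_runs(l, n):
    """Exact average number of runs of consecutive integers
    over all n-subsets of {1, ..., l}."""
    counts = [1 + sum(1 for a, b in zip(s, s[1:]) if b - a != 1)
              for s in combinations(range(1, l + 1), n)]
    return Fraction(sum(counts), len(counts))
```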
And... after a cursory investigation, it appears this can be done nearly instantly, with nil memory use and no need to create subsets. If the end result is what you're after and that's of interest (i.e., this is not about simulation), comment and I'll investigate/polish and post when I have time (it's quite late, I'm off for a bit).
Update:
As alluded to, here's an instantaneous (for all practical purposes) solution for getting the end result. I've assumed the OP is after the total of the lengths of the specified realizations, using a source list that is a contiguous range of integers (the actual range matters not). So, given the length of the list (the end of the range for a {1, 2, ...} list), the size of the samples, and the number of samples, this returns the value k would have on average over that many samples (so using 1 for the number of samples simply gives the average length).
This does not suffer from the arrow to the knee of the solutions so far: it needs neither to create nor to parse subsets, so it can handle arbitrary cases. E.g., a Range@100 list with n of 30 would take (assuming one had sufficient RAM, which one doesn't) much, much longer than the age of the universe to complete with my second solution, an adaptation of Patrick's that's already 6X or so faster than his on larger cases. This completes in below timer resolution on a loungebook:
get[l_, n_, s_] :=
 With[{j = If[n <= Ceiling[l/2], n, l - n + 1]},
  (s Tr[# * Range@Length@#]/Tr@#) &@
   (Binomial[l - n + 1, Range[1, j]] * Binomial[n - 1, Range[0, j - 1]])];
get[12, 7, 1*^9] // Timing
(* {0., 3500000000} *)
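For the curious, here's a hedged Python translation of get (naming mine). The counting it appears to rely on is the standard stars-and-bars fact that the number of n-subsets of {1, ..., l} with exactly k runs of consecutive integers is Binomial[l - n + 1, k] * Binomial[n - 1, k - 1]; the If in the Mathematica code is just min(n, l - n + 1), the largest k with a nonzero count:

```python
from math import comb
from fractions import Fraction

def get(l, n, s):
    """Expected total of the per-sample run counts over s samples,
    using the fact that comb(l-n+1, k) * comb(n-1, k-1) counts the
    n-subsets of {1, ..., l} having exactly k runs of consecutive integers."""
    j = min(n, l - n + 1)  # largest k with a nonzero count
    ks = range(1, j + 1)
    w = [comb(l - n + 1, k) * comb(n - 1, k - 1) for k in ks]
    return s * Fraction(sum(k * wk for k, wk in zip(ks, w)), sum(w))

# get(12, 7, 10**9) == 3500000000, matching the Mathematica timing above.
```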
N.b.: If it is a simulation you're after, perhaps to get the distribution of results, this can obviously be adapted to that purpose with negligible impact on performance (basically the overhead of grabbing samples)...