Let $n = |w|$ and $m = |\Sigma|$, where $w$ is the input string and $\Sigma$ is the input alphabet. There are at most $n - k + 1$ distinct substrings of $w$ of length $k$ (one starting at each position), and $m^k$ strings of length $k$ over $\Sigma$. If $n - k + 1 < m^k$ then, by the pigeonhole principle, there must be a string of length $k$ over $\Sigma$ that is not a substring of $w$.
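As a quick sanity check of the counting argument, here is a minimal sketch in Python; the string `w` and the parameters are made up for illustration:

```python
# Hypothetical example: a binary string of length n = 10, with k = 4.
w = "0110100110"
k = 4
m = 2  # |Sigma| for the binary alphabet {0, 1}

# Collect the distinct length-k substrings of w: at most n - k + 1 of them.
substrings = {w[i:i + k] for i in range(len(w) - k + 1)}

# n - k + 1 = 7 < 16 = m**k, so some length-4 string must be absent from w.
assert len(substrings) <= len(w) - k + 1 < m ** k
```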
There are on the order of $m^i$ strings of length less than $i$ over $\Sigma$ (exactly $\sum_{j=0}^{i-1} m^j$ of them). Determining whether a given string is a substring of $w$ can be done in $O(n)$ time with a linear-time matcher such as Knuth–Morris–Pratt. Thus we have at most $nm^i$ work to do, where $i = k + 1$ in the worst case. Since $n - k + 1 \le n$, the pigeonhole condition is already met when $n < m^k$, that is, when $k > \log_m n$; so $k = \lfloor \log_m n \rfloor + 1$ always works. Note: if we can rule out $k = 1$ (i.e., if $\log_m n > 1$), we can go even further and let $k = \log_m n$, since then $n - k + 1 \le n - 1 < m^k$.
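The bound above can be computed with pure integer arithmetic, avoiding floating-point $\log$ entirely; `safe_length` is a name invented here for illustration:

```python
def safe_length(n: int, m: int) -> int:
    """Smallest k with m**k > n, i.e. floor(log_m(n)) + 1.

    For this k we have n - k + 1 <= n < m**k, so the pigeonhole
    principle guarantees a missing string of length k.
    """
    k, power = 1, m
    while power <= n:
        k += 1
        power *= m
    return k
```

For example, `safe_length(10, 2)` returns 4, matching $\lfloor \log_2 10 \rfloor + 1$.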
Taken altogether, this means that the naïve method (enumerate strings in shortlex order, i.e., by length and then lexicographically within each length, checking the input for each string in $\Sigma^*$ until you find one that's missing) never does more than $O(n^2m^2)$ work in the worst case (*fixed an algebraic mistake; had $O(n^3m)$ previously, but should have been $O(n^2m^2)$), although this bound may not be tight. Note that if we rule out $k = 1$ and take $k = \log_m n$, we lose a factor of $m$ and get $O(n^2m)$.
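Putting the pieces together, a minimal sketch of the naïve method might look like the following. The function name `shortest_missing` is invented here, and Python's `in` operator stands in for the substring check (its worst case is $O(nk)$ rather than the $O(n)$ assumed above):

```python
from itertools import product

def shortest_missing(w: str, alphabet: str) -> str:
    """Return a shortest string over `alphabet` that is not a substring of w.

    Candidates are enumerated in shortlex order: by length first,
    lexicographically within each length.
    """
    symbols = sorted(set(alphabet))
    k = 1
    while True:
        for chars in product(symbols, repeat=k):
            candidate = "".join(chars)
            if candidate not in w:  # substring check
                return candidate
        # No string of length k was missing; by the pigeonhole bound,
        # the search must succeed once n - k + 1 < m**k, so this
        # loop terminates with k at most floor(log_m(n)) + 1.
        k += 1
```

For example, `shortest_missing("0110", "01")` returns `"00"`: both single symbols occur in the input, and `"00"` is the lexicographically first length-2 string that does not.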