You've written a straightforward solution, but botched the implementation. It takes \$O(m + n)\$ space to store \$n\$ items of input and distinct values up to \$m = 100\$. It takes \$O(mn)\$ time to do `arr.uniq` in a loop, and \$O(mn^2)\$ when including the `count()` operation. `arr.uniq` does not vary, and therefore should have been lifted out of the loop, in which case it would be \$O(n)\$ to do `arr.uniq` once plus \$O(m)\$ to build the answer array. Moving `arr.uniq` out of the loop and doing a smarter `count()` might be enough to bring performance down to a reasonable level.
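Done that way, the whole straightforward approach collapses to one counting pass followed by a running sum. A minimal sketch of what I mean (the method name `cumulative_counts` and the explicit `limit` parameter are mine, not from your code):

```ruby
# O(m + n) time, O(m) auxiliary space:
# count each value once, then prefix-sum the counts.
def cumulative_counts(arr, limit = 100)
  counts = [0] * limit
  arr.each { |v| counts[v] += 1 }   # O(n) counting pass
  total = 0
  counts.map { |c| total += c }     # O(m) running sum: items <= each index
end
```

Each entry of the result is the number of items less than or equal to its index, which is the same report the loop-based version was computing one expensive `count()` at a time.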
However, your performance challenge is for the case where \$n = 100000\$, and \$m = 100\$. It would be nice to be able to take advantage of the fact that \$n \gg m\$. While \$O(n)\$ time is the theoretical minimum, \$O(n)\$ space could be improved upon.
An alternate solution is to use a data structure known as a binary indexed tree, also known as a Fenwick tree. The tree can be represented as an array of \$m\$ elements. The tradeoff is that it takes \$O(\log m)\$ time to process each data item. When \$n \gg m\$, you can pretend that \$m\$ is a largish constant.
$$
\newcommand{\smallm}{\overset{n\ \gg\ m}{\longrightarrow}}
\begin{array}{|l|c|c|}
\hline
 & \textrm{Straightforward approach, done well} & \textrm{Fenwick tree} \\
\hline
\textrm{Space} & O(m + n) \smallm O(n) & O(m) \smallm O(1) \\
\hline
\textrm{Time} & O(m + n) \smallm O(n) & O((m + n) \log m) \smallm O(n) \\
\hline
\end{array}$$
    class FenwickTree
      def initialize(limit)
        @arr = [0] * limit
      end

      # O(log m): record `count` occurrences of `item`
      def incr(item, count=1)
        loop do
          break if item >= @arr.length
          @arr[item] += count
          item |= item + 1
        end
      end

      # O(log m): how many recorded items are <= item
      def count_le(item)
        count = 0
        while item >= 0
          count += @arr[item]
          item = (item & (item + 1)) - 1
        end
        count
      end

      # O(m log m): cumulative count for every possible value
      def report
        (0...@arr.length).map { |i| count_le(i) }
      end
    end

    f = FenwickTree.new(100)     # O(m)
    gets.to_i.times do           # O(n log m)
      f.incr(gets.to_i)
    end
    puts f.report.join(' ')      # O(m log m)
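For a quick sanity check, here is the tree exercised on a tiny input (the class is repeated so the snippet runs on its own). With values 1, 3, 3 and a limit of 5, each entry of `report` is the count of items less than or equal to its index:

```ruby
class FenwickTree
  def initialize(limit)
    @arr = [0] * limit
  end

  def incr(item, count=1)
    loop do
      break if item >= @arr.length
      @arr[item] += count
      item |= item + 1
    end
  end

  def count_le(item)
    count = 0
    while item >= 0
      count += @arr[item]
      item = (item & (item + 1)) - 1
    end
    count
  end

  def report
    (0...@arr.length).map { |i| count_le(i) }
  end
end

f = FenwickTree.new(5)
[1, 3, 3].each { |v| f.incr(v) }
p f.report  # => [0, 1, 1, 3, 3]
```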