Edited the question to make it clearer:
It is known that interpreting bytecode is much faster than interpreting source code or some IL form of the source code: the interpreter has a much easier time decoding bytecode than source code or source-code-like IL.

Virtual machines interpret bytecode, which is produced by first compiling the source code to bytecode.
However, 'independent interpreters' (ones not inside a VM) are known to interpret source code or source-code-like IL instead of bytecode.
Why is that? Why don't interpreters interpret bytecode, like VMs do? All that would be needed is to first compile the source code to bytecode (as is done for VMs), and then the interpreter could interpret that bytecode.
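To make the question concrete, here is a toy sketch of the two approaches for a tiny expression language (all names and the instruction set are made up for illustration): a tree-walking interpreter that re-examines node tags on every evaluation, versus a one-time compile to a flat bytecode list that a simple stack-machine loop then dispatches over.

```python
# AST for 2 + 3 * 4, as nested tuples (purely illustrative).
ast = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))

def eval_ast(node):
    """Tree-walking interpreter: inspects node tags on every visit."""
    op = node[0]
    if op == "num":
        return node[1]
    left, right = eval_ast(node[1]), eval_ast(node[2])
    return left + right if op == "add" else left * right

def compile_to_bytecode(node, code=None):
    """One-time compilation pass: flatten the tree into stack-machine ops."""
    if code is None:
        code = []
    if node[0] == "num":
        code.append(("PUSH", node[1]))
    else:
        compile_to_bytecode(node[1], code)
        compile_to_bytecode(node[2], code)
        code.append(("ADD",) if node[0] == "add" else ("MUL",))
    return code

def run_bytecode(code):
    """Bytecode interpreter: a flat dispatch loop over a value stack."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:  # MUL
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[0]

print(eval_ast(ast))                           # 14
print(run_bytecode(compile_to_bytecode(ast)))  # 14
```

Both paths compute the same result; the question is why standalone interpreters would stick with the first form when the second (compile once, then run the flat bytecode) is available to them as well.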
Is the reason that an interpreter which interprets bytecode is, by definition, a VM? (Just guessing here.) Or is it something else?