The .NET version of the Chromium Embedded Framework (CEFSharp) does have an offscreen-rendering variant (CefSharp.OffScreen); however, you need to handle converting the rendered frame into an image format usable by your framework of choice (in this case Unity). There also seems to be a recent discussion about integrating it into Unity.
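To illustrate the conversion step, here is a minimal sketch using CefSharp.OffScreen (grab a frame as a `System.Drawing.Bitmap` via `ScreenshotAsync`, then copy the raw BGRA pixels into a byte array that Unity's `Texture2D.LoadRawTextureData` can consume with `TextureFormat.BGRA32`). The `CopyFrame` helper name and the URL are mine, and CefSharp signatures have shifted between versions, so treat this as a sketch rather than a drop-in solution:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Threading.Tasks;
using CefSharp;           // Cef, CefSettings
using CefSharp.OffScreen; // offscreen ChromiumWebBrowser

class OffscreenCapture
{
    static void Main()
    {
        Cef.Initialize(new CefSettings());
        var browser = new ChromiumWebBrowser("https://example.com");

        // Wait for the page to finish loading before taking a screenshot.
        browser.LoadingStateChanged += async (sender, args) =>
        {
            if (!args.IsLoading)
            {
                Bitmap frame = await browser.ScreenshotAsync();
                byte[] bgra = CopyFrame(frame);
                // On the Unity side (assuming a BGRA32 Texture2D of matching size):
                //   texture.LoadRawTextureData(bgra);
                //   texture.Apply();
            }
        };

        Console.ReadKey(); // keep the process alive while the browser works
        Cef.Shutdown();
    }

    // Copies the Bitmap's pixels into a packed 32-bit BGRA byte array.
    static byte[] CopyFrame(Bitmap bmp)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                       PixelFormat.Format32bppArgb);
        var bytes = new byte[data.Stride * bmp.Height];
        System.Runtime.InteropServices.Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
        bmp.UnlockBits(data);
        return bytes;
    }
}
```

Note that Unity's texture methods can only be called from its main thread, so you'll want to hand the byte array off to the main thread (e.g. via a queue) rather than touching the texture from the browser callback.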
As for alternatives, there is Awesomium, but I don't recommend it: it is still running on Chromium 18 with no update in sight (apparently they are busy improving the API for the next big version), it has virtually no CSS3 support, and the engine takes a good few seconds to start up on a parallel thread, meaning the actual screen will be blank until startup finishes. It does, however, have better JS bindings than CEFSharp (you bind individual methods instead of entire classes) and a much more user-friendly API, which makes it easier to pass mouse and key events.
CEFSharp, on the other hand, is a lot rougher around the edges API-wise, but it starts up much faster and is fairly up to date (it was running Chromium 43 or 44 when I tried it out a month ago). You'll need to pass mouse and key events to the browser instance as well, but you'll have to do some digging to find the correct handlers, as they are buried rather deep in parent objects (it took me two days to find the right functions).
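To save you some of that digging: the handlers live on the browser host, reached through `GetBrowser().GetHost()`. A sketch of forwarding input, assuming a CefSharp.OffScreen browser of roughly that era (the coordinates and key code are placeholders, and the exact signatures have changed across CefSharp versions):

```csharp
using CefSharp;
using CefSharp.OffScreen;

static class InputForwarding
{
    // Forwards a mouse move, a left click, and a key press to the browser.
    // The relevant object is buried under browser.GetBrowser().GetHost().
    public static void SendInput(ChromiumWebBrowser browser, int x, int y)
    {
        IBrowserHost host = browser.GetBrowser().GetHost();

        host.SendMouseMoveEvent(x, y, false, CefEventFlags.None);

        // mouseUp: false = press, true = release; clickCount = 1 for a single click.
        host.SendMouseClickEvent(x, y, MouseButtonType.Left, false, 1, CefEventFlags.None);
        host.SendMouseClickEvent(x, y, MouseButtonType.Left, true, 1, CefEventFlags.None);

        // Key presses go through as KeyDown/Char/KeyUp triples; 0x41 is 'A'.
        host.SendKeyEvent(new KeyEvent { WindowsKeyCode = 0x41, Type = KeyEventType.KeyDown });
        host.SendKeyEvent(new KeyEvent { WindowsKeyCode = 0x41, Type = KeyEventType.Char });
        host.SendKeyEvent(new KeyEvent { WindowsKeyCode = 0x41, Type = KeyEventType.KeyUp });
    }
}
```

The Char event is the one that actually produces text in a focused input field; sending only KeyDown/KeyUp is a common reason typing appears to do nothing.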
The only issues I had with it were that it took a while to shut down and that its API wasn't as user-friendly as Awesomium's, which made it a bit awkward to work with; its JS binding also didn't seem to like binding to existing class instances.
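For context, the JS binding in CEFSharp versions of that era was `RegisterJsObject`, which exposes a whole .NET object to the page (unlike Awesomium's per-method binding). A sketch, with the `GameBridge` class and object name being my own placeholders:

```csharp
using CefSharp;
using CefSharp.OffScreen;

// A plain class whose public methods become callable from page JavaScript.
class GameBridge
{
    public string Ping()
    {
        return "pong";
    }
}

class Bootstrap
{
    static void Main()
    {
        Cef.Initialize(new CefSettings());
        var browser = new ChromiumWebBrowser("https://example.com");

        // Registration must happen before the page that uses the object loads.
        // In page JavaScript the method is exposed camelCased: gameBridge.ping()
        browser.RegisterJsObject("gameBridge", new GameBridge());

        // ... run your loop, then shut down (the slow part mentioned above).
        Cef.Shutdown();
    }
}
```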
Coherent UI is probably the best option around and is supposed to have Unity bindings, but it is rather expensive. Googling reveals that the Unity Asset Store has a few cheaper options available.
If you don't want to use any of those, it is probably possible to integrate the offscreen versions of CEFSharp or Awesomium into your Unity game yourself. I don't know how difficult that would be, but provided you can capture mouse/keyboard input and render a 2D texture over the entire screen, it should be doable. I integrated both of them into XNA/MonoGame, and I still have the code for passing input and copying the rendered screen into a texture lying around somewhere if you want to try implementing it.