Executive Summary
Setting aside Big-Integers, strings longer than one character are inherently more complicated than numbers because they:
- are represented as a list (or shallow tree) of multiple numbers
- can be alphabetized
- are case-sensitive
- contain punctuation, accents, special characters, and whitespace, which all need to be treated differently
- have character encodings, which add their own complexities
- can use a lot of memory (if large enough)
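A quick sketch (in Python, my choice here, not something from the question) of two of these bullets, alphabetization and case sensitivity. Strings compare by code point, so uppercase letters sort before lowercase unless you say otherwise:

```python
words = ["banana", "Cherry", "apple"]

# Default sort compares code points: 'C' (67) sorts before 'a' (97).
print(sorted(words))                 # ['Cherry', 'apple', 'banana']

# A case-insensitive sort needs an explicit key function.
print(sorted(words, key=str.lower))  # ['apple', 'banana', 'Cherry']

# Case sensitivity: same letter, different code point, not equal.
print("a" == "A")                    # False
```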
Big-Integers could be about as complicated as all-lower-case (or all-upper-case) ASCII or EBCDIC strings.
Details
What's a String?
A string is a list of characters. A character is just a number: a character encoding assigns each character a number to represent it. So a string is essentially a list of numbers.
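You can see the "list of numbers" idea directly in Python (my choice of language for illustration):

```python
s = "Hi!"

# Each character is just a number under the hood.
print([ord(c) for c in s])       # [72, 105, 33]

# For ASCII text, the encoded bytes are the same numbers.
print(list(s.encode("ascii")))   # [72, 105, 33]

# And the mapping goes the other way, too.
print(chr(72))                   # 'H'
```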
What's a Number?
Excluding a few special-purpose computers at research facilities, every popular processor has built-in integers (from 8 to 64 bits) and IEEE floating-point numbers (32- and 64-bit). Popular processors have instructions for doing simple math: +, -, /, and * on these various kinds of ints and floats. Popular programming languages have straightforward syntax that gets compiled to these opcodes in very simple ways.
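The hardware types have quirks of their own, which a couple of Python lines can show (Python's own ints are arbitrary-precision, so the 64-bit wraparound is simulated here with struct packing):

```python
import struct

# IEEE 754 doubles cannot represent 0.1 exactly, so simple math drifts.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# A hardware 64-bit signed integer wraps at its maximum value.
# Simulated by packing as unsigned and unpacking as signed:
max64 = 2**63 - 1
wrapped = struct.unpack("<q", struct.pack("<Q", max64 + 1))[0]
print(wrapped)            # -9223372036854775808
```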
Bigger Numbers?
Many languages have a Big-Integer that stores values bigger than what fits in a hardware 64-bit integer. Like Strings, they are essentially lists of numbers. Sometimes BigInt is implemented as a String, but I hope that is rare today. Like Strings, BigInts tend to be harder to use than simple hardware-supported numbers. You can fill up memory with them, etc. As an aside, tools like Spire cleverly promote Integers to BigIntegers as appropriate.
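A toy sketch of the "BigInts are essentially lists of numbers" point: store a number as a little-endian list of base-10 digits and add with a carry. (Real big-integer libraries do the same thing with base-2^32 or base-2^64 "limbs"; the base-10 version here is just for readability.)

```python
def big_add(a, b):
    """Add two numbers stored as little-endian lists of decimal digits."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 10)   # keep the low digit
        carry = s // 10         # carry the rest to the next position
    if carry:
        result.append(carry)
    return result

# 999 + 1 = 1000, with digits stored least-significant first:
print(big_add([9, 9, 9], [1]))  # [0, 0, 0, 1]
```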
Character Encoding
Strings used to be encoded in EBCDIC, ASCII, WinAnsi, and a bunch of other formats that came out before Unicode. With Unicode, there is UTF-8, UTF-16, UTF-32, and other ways of representing characters as one or more bytes. Some character encodings are one-way compatible with others, but most are not. ASCII characters each fit in a single byte (it is really a 7-bit encoding), but a String in UTF-8 is actually a shallow tree structure where each character is composed of 1-4 bytes (thank you @gnasher729). And that's just the representation of a single "code point" in bytes. The logical characters themselves are sometimes composed of multiple code points (a base character plus an accent), so you have grapheme clusters to deal with (thank you @gnasher729).
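Both layers of that complexity, variable-width bytes and multi-code-point characters, are easy to demonstrate in Python:

```python
import unicodedata

# UTF-8 uses 1-4 bytes per code point:
print(len("A".encode("utf-8")))        # 1 byte
print(len("\u00e9".encode("utf-8")))   # 2 bytes  (é)
print(len("\u20ac".encode("utf-8")))   # 3 bytes  (€)
print(len("\U0001d11e".encode("utf-8")))  # 4 bytes (𝄞, musical G clef)

# One logical character, two different code-point sequences:
composed = "\u00e9"      # 'é' as a single code point
decomposed = "e\u0301"   # 'e' plus a combining acute accent
print(composed == decomposed)          # False, though they render alike
print(len(composed), len(decomposed))  # 1 2

# Unicode normalization reconciles them:
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```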
Parting Thoughts
Anything can be as complicated as you want it to be. Numbers can be positive, negative, or zero (IEEE floating points can also be negative zero). They can be even, odd, prime, ratios, imaginary, irrational, transcendental, or have many other properties that have kept number theorists and set theorists busy for centuries and will continue to do so.
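The negative-zero aside is real and observable, for instance in Python:

```python
import math

print(0.0 == -0.0)               # True: IEEE 754 says they compare equal
print(str(-0.0))                 # '-0.0': but the sign bit is preserved
print(math.copysign(1.0, -0.0))  # -1.0: copysign can see the hidden sign
```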
But character strings represent languages, and you need fonts to render them, which have their own set of headaches (and licensing issues). Chinese people whose ancient family names involve characters that are not otherwise part of their roughly 10,000-character writing system are clamoring to have their names included in Unicode. There are lost languages, and arguments about whether made-up languages like Klingon need to be included in character sets. I think when you add all that, plus regular expressions, then in general, Strings are more complicated than numbers.
Probably for every string complexity issue, a great student of Math could bring up various series, sets, divergence, and other complex issues. But I think you are defining numbers as "ints, floats, and maybe Big-Integers" not as "polynomials, series, and beyond."