I have ascii strings which contain the character "\x80" to represent the euro symbol:

>>> print "\x80"
€

When inserting string data containing this character into my database, I get:

psycopg2.DataError: invalid byte sequence for encoding "UTF8": 0x80
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

I'm a unicode newbie. How can I convert my strings containing "\x80" to valid UTF-8 containing that same euro symbol? I've tried calling .encode and .decode on various strings, but run into errors:

>>> "\x80".encode("utf-8")
Traceback (most recent call last):
  File "<pyshell#14>", line 1, in <module>
    "\x80".encode("utf-8")
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
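
(For what it's worth, the reason an encode call raises a UnicodeDecodeError here is that Python 2 implicitly decodes a byte string with the ASCII codec before re-encoding it. A minimal illustration of the two hidden steps, assuming Python 2:)

>>> "\x80".decode("ascii")        # the implicit step that actually fails
Traceback (most recent call last):
  ...
UnicodeDecodeError: 'ascii' codec can't decode byte 0x80 in position 0: ordinal not in range(128)
>>> u"\u20ac".encode("utf-8")     # encoding a unicode object works fine
'\xe2\x82\xac'
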
You have to .decode() it from your current locale (where \x80 == €), then .encode("utf-8") – Evan Carroll Jun 7 '10 at 17:25
If you have an ASCII string, you do not have "\x80". Conversely, if you have "\x80", you do not have an ASCII string. – Thanatos Jun 8 '10 at 1:27
@Thanatos: true. As I said, I'm a char-encoding newb; I didn't know what else to call it. I just meant a Python string literal without a "u" in front. – Claudiu Jun 8 '10 at 4:07

1 Answer


The question starts with a false premise:

I have ascii strings which contain the character "\x80" to represent the euro symbol.

ASCII characters are in the range "\x00" to "\x7F" inclusive.

The previously-accepted, now-deleted answer operated under two gross misapprehensions: (1) that locale == encoding, and (2) that the latin1 encoding maps "\x80" to a euro character.

In fact, all of the ISO-8859-x encodings map "\x80" to U+0080, which is one of the C1 control characters, not a euro character. Only 3 of those encodings (x in (7, 15, 16)) provide the euro character, as "\xA4". See the Wikipedia article on the ISO 8859 encodings.
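
A quick check of that claim (a Python 2 sketch, using the iso-8859-1 and iso-8859-15 codecs):

>>> '\x80'.decode('iso-8859-1')      # latin1: a C1 control character, not a euro
u'\x80'
>>> '\xa4'.decode('iso-8859-15')     # latin9: the euro sign lives at \xA4
u'\u20ac'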

You need to know what encoding your data is in. What machine was it created on? How? The locale it was created in (not necessarily yours) may give you a clue.

Note that "My data is encoded in latin1" is up there with "The cheque's in the mail" and "Of course I'll love you in the morning". Your data is probably encoded in one of the cp125x encodings found on Windows platforms; all of them except cp1251 (Windows Cyrillic) map "\x80" to the euro character:

>>> ['\x80'.decode('cp125' + str(x), 'replace') for x in range(9)]
[u'\u20ac', u'\u0402', u'\u20ac', u'\u20ac', u'\u20ac', u'\u20ac', u'\u20ac', u'\u20ac', u'\u20ac']
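
So, assuming the data really is in one of those encodings (cp1252 being the usual suspect), the conversion the question asks for is the decode-then-encode two-step; a minimal Python 2 sketch:

>>> s = "price: \x80 42"             # byte string as read from the file
>>> u = s.decode("cp1252")           # bytes -> unicode (assumes cp1252!)
>>> u
u'price: \u20ac 42'
>>> u.encode("utf-8")                # unicode -> UTF-8 bytes for the database
'price: \xe2\x82\xac 42'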

Update in response to the OP's comment

I'm reading this data from a file, e.g. open(fname).read(). It contains strings with \x80 in them that represents the euro character. It's just a plain text file. It is generated by another program, but I don't know how it goes about generating the text. What would be a good solution? I'm thinking I can assume that it outputs "\x80" for a euro character, meaning I can assume it's encoded with a cp125x that has that char as the euro.

This is a bit confusing: First you say

It contains strings with \x80 in them that represents the euro character

But later you say

I'm thinking I can assume that it outputs "\x80" for a euro character

Please explain.

Selecting an appropriate cp125x encoding: Where (geographical location) was the file created? In what language(s) is the text written? Any characters other than the presumed euro with values > "\x7f"? If so, which ones and what context are they used in?

Update 2

If you don't "know how the program is written", neither you nor we can form an opinion on whether it always uses "\x80" for the euro character. Although doing otherwise would be monumental silliness, it can't be ruled out.

If the text is written in the English language and/or it is written in the USA, and/or it's written on a Windows platform, then it's reasonably certain that cp1252 is the way to go ... until you get evidence to the contrary, in which case you'd need to guess an encoding by yourself or answer the (what language, what locality) questions.
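
Under that assumption (cp1252 input), one way to wire it all together is to decode the file's bytes to unicode and let psycopg2 adapt the unicode object when it is passed as a query parameter. This is only a sketch: the DSN, table name and column name are hypothetical, and open(fname).read() comes from the comment below.

import psycopg2

raw = open(fname).read()          # byte string, contains "\x80" for the euro
text = raw.decode("cp1252")       # unicode, the euro is now u'\u20ac'

conn = psycopg2.connect("dbname=test")   # hypothetical DSN
cur = conn.cursor()
# psycopg2 encodes the unicode parameter using the connection's
# client_encoding (UTF8 here), so no manual .encode() is needed
cur.execute("INSERT INTO prices (description) VALUES (%s)", (text,))
conn.commit()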

+1 for "You need to know what encoding your data is in." You need to know. +1 for "latin1 [doesn't map] '\x80' to a euro". +1 for finding the real encoding, which I was still looking for. – Thanatos Jun 8 '10 at 1:37
@Thanatos: "real encoding": cp125x are the usual suspects. – John Machin Jun 8 '10 at 1:47
Yep, I'm definitely one of the cp125x, so it worked on my given computer. I'll hard-code it instead. The accepted answer is correct except for using 'latin1' in that case, yes? – Claudiu Jun 8 '10 at 4:06
@Claudiu: (1) I don't understand your use of the word "so". (2) No, the currently accepted answer is replete with confusion and error. – John Machin Jun 8 '10 at 5:47
@John: I mean it happened to work on my machine. Maybe it was pure chance. I'm reading this data from a file, e.g. open(fname).read(). It contains strings with \x80 in them that represents the euro character. It's just a plain text file. It is generated by another program, but I don't know how it goes about generating the text. What would be a good solution? I'm thinking I can assume that it outputs "\x80" for a euro character, meaning I can assume it's encoded with a cp125x that has that char as the euro. – Claudiu Jun 8 '10 at 12:43