
I found this through https://news.ycombinator.com/item?id=9609955 -- I find it fascinating to see the solutions people come up with to deal with other people's problems without damaging correct code. Rust uses WTF-8 to interact with Windows' UCS2/UTF-16 hybrid, and from a quick look I'm hopeful that Rust's story around handling Unicode will be much nicer than (say) Python's or Java's.


Have you looked at Python 3 yet? I'm using Python 3 in production for an internationalized website and my experience has been that it handles Unicode pretty well.


There's some disagreement[1] about the direction Python 3 took in terms of handling Unicode. Pretty good read if you have a few minutes.

1 http://lucumr.pocoo.org/2014/1/5/unicode-in-2-and-3/


Not that great of a read. Stuff like:

> I have been told multiple times now that my point of view is wrong and I don't understand beginners, or that the “text model” has been changed and my request makes no sense.

"The text model has changed" is a perfectly legitimate reason to turn down ideas consistent with the previous text model and inconsistent with the current model. Keeping a coherent, consistent model of your text is a pretty important part of curating a language. One of Python's greatest strengths is that they don't just pile on random features, and keeping old crufty features from previous versions would amount to the same thing. To dismiss this reasoning is extremely shortsighted.


Many people who prefer Python3's way of handling Unicode are aware of these arguments. It isn't a position based on ignorance.


Hey, never meant to imply otherwise. In fact, even people who have issues with the py3 way often agree that it's still better than 2's.


http://lucumr.pocoo.org/2014/1/9/ucs-vs-utf8/ is a nice comparison of Python’s (2 and 3) and Rust’s Unicode handling.


Python 3 doesn't handle Unicode any better than Python 2, it just made it the default string. In all other aspects the situation has stayed as bad as it was in Python 2 or has gotten significantly worse. Good examples of that are paths and anything that relates to local IO when your locale is C.


> Python 3 doesn't handle Unicode any better than Python 2, it just made it the default string. In all other aspects the situation has stayed as bad as it was in Python 2 or has gotten significantly worse.

Maybe this has been your experience, but it hasn't been mine. Using Python 3 was the single best decision I've made in developing a multilingual website (we support English/German/Spanish). There's not a ton of local IO, but I've upgraded all my personal projects to Python 3.

Your complaint, and the complaint of the OP, seems to be basically, "It's different and I have to change my code, therefore it's bad."


My complaint is not that I have to change my code. My complaint is that Python 3 is an attempt at breaking as little compatibility with Python 2 as possible while making Unicode "easy" to use. They failed to achieve both goals.

Now we have a Python 3 that's incompatible with Python 2 but provides almost no significant benefit, solves none of the large, well-known problems and introduces quite a few new problems.


I have to disagree; I think using Unicode in Python 3 is currently easier than in any language I've used. It certainly isn't perfect, but it's better than the alternatives. I certainly have spent very little time struggling with it.


That is not quite true, in the sense that more of the standard library has been made unicode-aware, and implicit conversions between unicode and bytestrings have been removed. So if you're working in either domain you get a coherent view, the problem being when you're interacting with systems or concepts which straddle the divide or (even worse) may be in either domain depending on the platform. Filesystem paths are the latter: they're text on OS X and Windows (although possibly ill-formed on Windows) but a bag of bytes on most unices. There Python 2 is only "better" in that issues will probably fly under the radar if you don't prod things too much.
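For example, Python 3's os APIs keep the two domains separate by answering in whichever one you ask (a quick sketch; the filenames in the comments are made up):

    import os

    # Ask with str, get str back (decoded with the filesystem encoding,
    # undecodable bytes smuggled through via surrogateescape):
    print(os.listdir('.'))    # e.g. ['setup.py', 'README']

    # Ask with bytes, stay entirely in the bytes domain:
    print(os.listdir(b'.'))   # e.g. [b'setup.py', b'README']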


There is no coherent view at all. Bytes still have methods like .upper() that make no sense at all in that context, while unicode strings with these methods are broken because these are locale-dependent operations and there is no appropriate API. You can also index, slice and iterate over strings, all operations that you really shouldn't do unless you really know what you are doing. The API in no way indicates that doing any of these things is a problem.

Python 2's handling of paths is not good because there is no good abstraction over different operating systems, though treating them as byte strings is a sane lowest common denominator.

Python 3 pretends that paths can be represented as unicode strings on all OSes; that's not true. It's held up with a very leaky abstraction, and it means that Python code that treats paths as unicode strings, and not as paths-that-happen-to-be-unicode-but-really-aren't, is broken. Most people aren't aware of that at all and it's definitely surprising.
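A minimal sketch of the leak (Python 3 on a POSIX system; b'\xff' stands in for any filename that isn't valid in your locale's encoding):

    import os

    raw = b'\xff'            # not valid UTF-8

    # os.fsdecode "succeeds" anyway, thanks to surrogateescape:
    name = os.fsdecode(raw)
    print(repr(name))        # '\udcff' - a lone surrogate, not real text

    # The moment you treat it as ordinary text, it blows up:
    try:
        name.encode('utf-8')
    except UnicodeEncodeError as err:
        print(err)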

On top of that, implicit coercions have been replaced with broken implicit guessing of encodings, for example when opening files.


When you say "strings" are you referring to strings or bytes? Why shouldn't you slice or index them? It seems like those operations make sense in either case but I'm sure I'm missing something.

On the guessing of encodings when opening files, that's not really a problem. Ideally the caller should specify the encoding manually. If you don't know the encoding of the file, how can you decode it? You could still open it as raw bytes if required.


I used strings to mean both. Byte strings can be sliced and indexed without problems because a byte as such is something you may actually want to deal with.

Slicing or indexing into unicode strings is a problem because it's not clear what unicode strings are strings of. You can look at unicode strings from different perspectives and see a sequence of codepoints or a sequence of characters; both can be reasonable depending on what you want to do. Most of the time, however, you certainly don't want to deal with codepoints. Python, however, only gives you a codepoint-level perspective.
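To make that concrete (plain Python 3; 'e\u0301' is 'é' spelled as a base letter plus a combining accent):

    # Two codepoints, but one character as far as any reader is concerned:
    s = 'e\u0301'
    print(len(s))    # 2 - len counts codepoints, not characters
    print(s[0])      # 'e' - slicing silently drops the accent
    print(s[1:])     # the combining accent, detached from its base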

Guessing encodings when opening files is a problem precisely because, as you mentioned, the caller should specify the encoding, not just sometimes but always. Guessing an encoding based on the locale or the content of the file should be the exception, and something the caller does explicitly.
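For instance (Python 3; 'data.txt' is just a throwaway name for the sketch), open() without an encoding argument silently picks up whatever the locale says, which is exactly the implicit guessing I mean:

    import locale

    # What open() will guess on this machine - e.g. 'UTF-8', or
    # 'ANSI_X3.4-1968' under a C locale:
    print(locale.getpreferredencoding(False))

    # Being explicit removes the guesswork entirely:
    with open('data.txt', 'w', encoding='utf-8') as f:
        f.write('gr\u00fc\u00dfe\n')
    with open('data.txt', encoding='utf-8') as f:
        print(f.read())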


It slices by codepoints? That's just silly: we've gone through this whole unicode-everywhere process so we can stop thinking about the underlying implementation details, but the API forces you to deal with them anyway.

Fortunately it's not something I deal with often but thanks for the info, will stop me getting caught out later.


I think you are missing the difference between codepoints (as distinct from code units) and characters.


And unfortunately, I'm not any more enlightened as to my misunderstanding.

I get that every different thing (character) is a different Unicode number (code point). To store or transmit these you need some standard (encoding) for writing them down as a sequence of bytes (code units; depending on the encoding, each code unit is made up of a different number of bytes).
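Just to make my own vocabulary concrete, this is how I understand it in Python 3 terms ('é' here is the single codepoint U+00E9):

    # One character, one codepoint...
    s = '\u00e9'                  # 'é'
    print(len(s))                 # 1

    # ...but a varying number of code units/bytes per encoding:
    print(s.encode('utf-8'))      # b'\xc3\xa9'         - two 8-bit code units
    print(s.encode('utf-16-le'))  # b'\xe9\x00'         - one 16-bit code unit
    print(s.encode('utf-32-le'))  # b'\xe9\x00\x00\x00' - one 32-bit code unit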

How is any of that in conflict with my original points? Or is some of my understanding above incorrect?

I know you have a policy of not replying to people, so maybe someone else could step in and clear up my confusion.


Codepoints and characters are not equivalent. A character can consist of one or more codepoints. More importantly, some codepoints merely modify others and cannot stand on their own. That means if you slice or index into a unicode string, you might get an "invalid" unicode string back, that is, a unicode string that cannot be rendered in any meaningful way.


Right, ok. I recall something about this: ü can be represented either by a single code point or by the letter 'u' followed by the combining modifier.
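(Double-checking that in Python 3 with unicodedata, since I had to convince myself:)

    import unicodedata

    composed   = '\u00fc'    # 'ü' as a single codepoint
    decomposed = 'u\u0308'   # 'u' followed by a combining diaeresis

    print(composed == decomposed)             # False - different codepoint sequences
    print(len(composed), len(decomposed))     # 1 2
    # Normalization converts between the two representations:
    print(unicodedata.normalize('NFD', composed) == decomposed)   # True
    print(unicodedata.normalize('NFC', decomposed) == composed)   # True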

As the user of unicode I don't really care about that. If I slice characters I expect a slice of characters. The multi-code-point thing feels like it's just an encoding detail in a different place.

I guess you need some operations to get at those details if you need them. Man, what was the drive behind adding that extra complexity to life?!

Thanks for explaining. That was the piece I was missing.


bytes.upper is the Right Thing when you are dealing with ASCII-based formats. It also has the advantage of breaking in less random ways than unicode.upper.

And I mean, I can't really think of any cross-locale requirements fulfilled by unicode.upper (maybe case-insensitive matching, but then you also want to do lots of other filtering).
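e.g. for a case-insensitive protocol token, something like this (Python 3; bytes.upper only touches ASCII a-z, which is exactly what you want here):

    # Normalizing an HTTP-style header name stays within ASCII:
    print(b'content-type'.upper())    # b'CONTENT-TYPE'

    # Non-ASCII bytes pass through untouched, so no locale- or
    # language-dependent surprises are possible:
    print(b'caf\xc3\xa9'.upper())     # b'CAF\xc3\xa9'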


> There Python 2 is only "better" in that issues will probably fly under the radar if you don't prod things too much.

Ah yes, the JavaScript solution.


Well, Python 3's unicode support is much more complete. As a trivial example, case conversions now cover the whole unicode range. This holds pretty consistently: Python 2's `unicode` was incomplete.
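A concrete case (Python 3; in Python 2, u'ß'.upper() was a no-op):

    # Python 3 applies the full Unicode case mappings, including the
    # ones that change the string's length:
    print('\u00df'.upper())    # 'SS' - ß uppercases to two letters
    print('\ufb01'.upper())    # 'FI' - the fi ligature expands too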



