
[ruby-core:26149] [Feature #2034] Consider the ICU Library for Improving and Expanding Unicode Support

From: Perry Smith <redmine@...>
Date: 2009-10-18 16:03:59 UTC
List: ruby-core #26149
Issue #2034 has been updated by Perry Smith.


I discovered ICU and ICU4R back in 2007 and I just now moved it to
Ruby 1.9.  I'm a pretty big advocate of using ICU.  To my knowledge,
nothing supports as many encodings as ICU.  It is the only library
that addresses many of the EBCDIC encodings (of which there are some
147-odd).

The reason I came to use ICU is the application I'm working on needs
to translate EBCDIC encoded Japanese characters to something a browser
can use such as utf-8.  ICU is the only portable library that I found
and it is also the only library that had the encodings that I needed.

I'm assuming a few things here.  One is that this:

http://yokolet.blogspot.com/2009/07/design-and-implementation-of-ruby-m17n.html

is accurate for the most part.  In particular, this paper seems to say
that there is a choice between a UCS model and a CSI model, and Ruby
1.9 has chosen CSI.  From my perspective, a CSI model should be an
envelope around a UCS model.

My background is working inside IBM for 20+ years, and I've bumped
into multi-byte language issues since 1989.  I'm not an expert by any
means, but I have seen IBM struggle with this for decades.

Perhaps only IBM and legacy IBM applications have these issues.  I
simply don't know but I will say that all of the other open source
language encoding implementations are very small in the number of
encodings they do compared to what you see when dealing with legacy
international applications.

In the text below, I will use "aaa" to represent a string using an
encoding of A, "bbb" will represent a string using an encoding of B,
and so on.  I will also simply put B to stand for encoding B.

I believe that the CSI model is a great choice: why translate
everything all the time?  If an application is going to read data and
write it back out, translating it is both a waste of time and error
prone.

I believe the implementors of a UCS model would counter that if the
application is going to compare strings, they must be in a common
encoding -- Ruby agrees with this point.  They would also argue that
if you want to translate "aaa" into B, it is simply more practical to
go through a common encoding C first: then you need only 2N encoders
instead of N^2 encoders.  To me, that argument is very sound.  If
practical, I would allow specific A-to-B translators to be plugged in.

The key place where I believe Ruby's choice of a CSI model wins is the
fact that there are a lot of places that data can be used and
manipulated without translation.  Keeping and using the CSI model in
all those places is a clear win.  In all those places, the data is
opaque; it is not interpreted or understood by the application.

Opaque data can be compared for equality as Ruby appears to be doing
now -- the two strings must have the same encoding and byte for byte
compare as equal.
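As a small sketch of that equality rule as Ruby 1.9 actually ships it
(`force_encoding` relabels the bytes without converting them; note the
one carve-out for ASCII-only strings):

```ruby
# Same bytes, different encoding labels: not equal unless ASCII-only.
a = "caf\xC3\xA9".force_encoding("UTF-8")
b = "caf\xC3\xA9".force_encoding("ISO-8859-1")
a.bytes == b.bytes  # => true  -- byte-for-byte identical
a == b              # => false -- encodings differ, bytes not ASCII-only

# ASCII-only strings compare equal across ASCII-compatible encodings.
c = "cafe".force_encoding("ISO-8859-1")
"cafe" == c         # => true
```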

Technically, opaque data can be concatenated and spliced as well.
This is one place where Ruby 1.9's implementation surprised me a bit.
It could be that "aaa" + "bbb" yields a String that is a list of
SubStrings, which I'll write as x = [ "aaa", "bbb" ].  That would have
many useful properties: length would be the sum of the lengths of all
the SubStrings.  x[1] would be "a".  x[4] would be "b".  x[2,2] would
yield a String with two SubStrings (again, this is just how I'm
representing it): [ "a", "b" ].  x.encoding would return Mixed in
these cases.  Encoding would be a concept attached to a SubString
rather than to a String.  x.each would return the sequence "a", "a",
"a", "b", "b", "b", each with an encoding of A for the "a"s and B for
the "b"s.  String would still be what most applications use; rarely
would they need to know about the SubStrings.
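A minimal sketch of that "list of SubStrings" idea, assuming a
hypothetical MixedString class -- nothing like this exists in Ruby,
and the names are illustrative only.  (Ruby 1.9 as shipped instead
raises Encoding::CompatibilityError when incompatible encodings are
concatenated.)

```ruby
# Hypothetical rope-like string: each part keeps its own encoding.
class MixedString
  def initialize(*parts)
    @parts = parts
  end

  # Length is the sum of the lengths of all the SubStrings.
  def length
    @parts.sum(&:length)
  end

  # Indexing walks the parts, so characters keep their own encoding.
  def [](index)
    @parts.each do |part|
      return part[index] if index < part.length
      index -= part.length
    end
    nil
  end

  # A single encoding if all parts agree, otherwise :mixed.
  def encoding
    encs = @parts.map(&:encoding).uniq
    encs.size == 1 ? encs.first : :mixed
  end
end

x = MixedString.new("aaa".encode("UTF-8"), "bbb".encode("US-ASCII"))
x.length    # => 6
x[1]        # => "a"
x[4]        # => "b"
x.encoding  # => :mixed
```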

Many text manipulations can be done with opaque data because the
characters themselves are still not being interpreted by the
application.  To the application, they are just "doodads" that those
human guys know about.  I believe that if Ruby wants to hold strongly
to the CSI model, encoding-agnostic string manipulations should be
implemented.

The places where the actual characters are "understood" by an
application are sorting (collation) and, if for some external reason
it is required, translation to a particular encoding.

Sorting depends not only upon the encoding but also upon the language.
Sorting could be done with routines specific to an encoding-plus-
language pair, but I believe that is impractical to implement.  Utopia
would be the ability to plug in (and grow) sort routines specific to
the encoding and language, with a chain of fallbacks: first a sort
routine tailored for the language in a common encoding such as UTF-16;
if the language is not known (or not implemented), fall back to
sorting based upon just the encoding; and if that is not available,
fall back to a sort based upon a common encoding.
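That fallback chain could be sketched as a simple lookup table.  The
COLLATORS table, its keys, and the comparator bodies below are all
hypothetical -- nothing like this exists in Ruby's standard library:

```ruby
# Last resort: compare the raw bytes (String#b drops the encoding).
BYTEWISE = ->(a, b) { a.b <=> b.b }

COLLATORS = {
  # 1. comparator for a specific (encoding, language) pair:
  ["Shift_JIS", :ja] => ->(a, b) { a <=> b },
  # 2. language-specific comparator working in a common encoding:
  [:any, :ja]        => ->(a, b) { a.encode("UTF-8") <=> b.encode("UTF-8") },
}

def collator_for(encoding, language)
  COLLATORS[[encoding, language]] ||  # exact encoding + language
    COLLATORS[[:any, language]]   ||  # language via a common encoding
    COLLATORS[[encoding, :any]]   ||  # encoding alone
    BYTEWISE                          # common-encoding byte-wise sort
end
```

Plugging in a new sort routine is then just adding a table entry, and
unknown combinations degrade gracefully to the byte-wise comparison.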

As has been pointed out already, the String#to_i routine needs to be
encoding savvy.  There are probably a few more methods that need to be
encoding savvy.
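A quick illustration of the gap: Ruby's String#to_i only recognizes
ASCII digits, so digits from other scripts (full-width digits in a
UTF-8 string, in this example) are silently ignored:

```ruby
"123".to_i    # => 123
"１２３".to_i  # => 0 -- full-width digits are not parsed at all
```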

The translations, collations, and other places where characters must
be understood by the application are where I believe using ICU is a
huge win.  ICU should not be used all the time, because most of the
time no understanding of the characters is needed by the application.
But when translation or collation is needed, ICU is a huge repository
that is already implemented and available.

I have not seen arguments against ICU that I believe hold much weight.
It is more portable than any iconv implementation (because iconv has
been stuffed into the libc implementation, and pulling it back apart
looked really hard to me).  The fact that it is huge is just a
reflection of the size of the problem.

----------------------------------------
http://redmine.ruby-lang.org/issues/show/2034

----------------------------------------
http://redmine.ruby-lang.org
