[#10209] Market for XML Web stuff — Matt Sergeant <matt@...>

I'm trying to get a handle on what the size of the market for AxKit would be

15 messages 2001/02/01

[#10238] RFC: RubyVM (long) — Robert Feldt <feldt@...>

Hi,

20 messages 2001/02/01
[#10364] Re: RFC: RubyVM (long) — Mathieu Bouchard <matju@...> 2001/02/05

[#10708] Suggestion for threading model — Stephen White <spwhite@...>

I've been playing around with multi-threading. I notice that there are

11 messages 2001/02/11

[#10853] Re: RubyChangeRequest #U002: new proper name for Hash#indexes, Array#indexes — "Mike Wilson" <wmwilson01@...>

10 messages 2001/02/14

[#11037] to_s and << — "Brent Rowland" <tarod@...>

list = [1, 2.3, 'four', false]

15 messages 2001/02/18

[#11094] Re: Summary: RCR #U002 - proper new name for indexes — Aleksi Niemelä <aleksi.niemela@...>

> On Mon, 19 Feb 2001, Yukihiro Matsumoto wrote:

12 messages 2001/02/19

[#11131] Re: Summary: RCR #U002 - proper new name for indexes — "Conrad Schneiker" <schneik@...>

Robert Feldt wrote:

10 messages 2001/02/19

[#11251] Programming Ruby is now online — Dave Thomas <Dave@...>

36 messages 2001/02/21

[#11469] XML-RPC and KDE — schuerig@... (Michael Schuerig)

23 messages 2001/02/24
[#11490] Re: XML-RPC and KDE — schuerig@... (Michael Schuerig) 2001/02/24

Michael Neumann <neumann@s-direktnet.de> wrote:

[#11491] Negative Reviews for Ruby and Programming Ruby — Jim Freeze <jim@...> 2001/02/24

Hi all:

[#11633] RCR: shortcut for instance variable initialization — Dave Thomas <Dave@...>

13 messages 2001/02/26

[#11652] RE: RCR: shortcut for instance variable initialization — Michael Davis <mdavis@...>

I like it!

14 messages 2001/02/27

[#11700] Starting Once Again — Ron Jeffries <ronjeffries@...>

OK, I'm starting again with Ruby. I'm just assuming that I've

31 messages 2001/02/27
[#11712] RE: Starting Once Again — "Aaron Hinni" <aaron@...> 2001/02/27

> 2. So far I think running under TextPad will be better than running

[#11726] Re: Starting Once Again — Aleksi Niemelä <zak@...> 2001/02/28

On Wed, 28 Feb 2001, Aaron Hinni wrote:

[ruby-talk:11814] Re: Comparison Caching

From: "Ben Tilly" <ben_tilly@...>
Date: 2001-02-28 21:55:30 UTC
List: ruby-talk #11814

"Christoph Rippel" <crippel@primenet.com> wrote:
>
> > From: Ben Tilly [mailto:ben_tilly@hotmail.com]
[...]
>I also think that your objections towards symmetrization - i.e. swapping -
>are not valid, since it costs very little compared to the remainder
>division invoked in any hashing scheme, and to running the interpreter
>itself, of course (in my tests it makes virtually no difference whether
>you swap or not - even though swapping does not gain anything in the
>``examples'') - swapping IMO simply feels more robust.

I hate the phrase "simply feels more robust", and my
reaction is so strongly opposed that I want to explain
why in some detail.

First of all, it is an unnecessary step.  We both agree
on that.  The algorithm works just fine without it.

Secondly, let us stop and ask ourselves what we want == to
mean for complex data structures.  A reasonable definition
is that two data structures are == when there is no nested
sequence of lookups you can find that gives apparently
different results.  So, for instance, if we take this code:

   a = ["foo"]
   a.push a

then a is == to ["foo", a], which is == to ["foo", ["foo", a]],
and in general, if you try to access an element in any of them,
you get the same result as from the infinite data structure:

    ["foo", ["foo", ["foo", ... ]]]

I maintain that this definition is both reasonable and
understandable.
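
To make the caching concrete, here is a minimal sketch of the
idea (my own names, nothing like the interpreter's actual code):
record each (left, right) pair while it is being compared, and
treat a pair you meet again as equal.  That assumption is exactly
what lets the self-referential example above terminate:

    # Sketch only: recursive == with comparison caching, no swap.
    # `seen' records the pairs currently under comparison; meeting
    # a pair again means we are inside that very comparison, so we
    # may assume it holds.
    def cached_eq(x, y, seen = {})
      return x == y unless x.is_a?(Array) && y.is_a?(Array)
      key = [x.object_id, y.object_id]
      return true if seen[key]
      return false if x.length != y.length
      seen[key] = true
      x.each_index { |i| return false unless cached_eq(x[i], y[i], seen) }
      true
    end

    a = ["foo"]
    a.push a
    cached_eq(a, ["foo", a])    # => true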

Now it is not hard for me to come up with a convincing
demonstration that caching without swapping always results
in a correct algorithm, no matter how complex the data
structure.  But I had more trouble producing a
demonstration for the general case with swapping, where
you might have 10 variables compared against each other,
with a variety of recursive relationships, appearing on
both sides of the comparison.

It turns out that there is a simple proof, but it was not
as easy for me to produce.  And since both algorithms
accomplish the same thing, there is no way that adding
the swap can make your code more robust.
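
For reference, the symmetrization in question amounts to one
line: normalize the cache key so that (x, y) and (y, x) share a
single entry.  A sketch, again with made-up names:

    # Sketch only: the "swap" orders the pair before caching, so
    # the cache cannot tell (x, y) from (y, x).
    def pair_key(x, y)
      a, b = x.object_id, y.object_id
      a < b ? [a, b] : [b, a]
    end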

So we have a step that is not needed, that makes the
code harder to analyze, and that does not change the
end result.  There is no case that will work which
would not have worked anyway.  Thus it cannot make
the code more robust.  It can only make the code more
complex to understand.  And in my book, all else being
equal, complexity is the dire enemy of robustness.

Sure, it is only one line.  But what does that line
accomplish?  What purpose does it serve?  How does it
justify its continued existence?

(I know, my math background is showing...)

Cheers,
Ben

PS I don't mind complications if they buy you something.
Linear searches are far simpler than hashing, B-trees,
etc., but they are also much slower.  But unless a
complication actually wins me something specific, I try
to avoid it.  And while it is definitely true that there
are good reasons to use these more complex algorithms, it
is also undeniable that many bugs have been created by
people switching to them...