[#27380] [Bug #2553] Fix pthreads slowness by eliminating unnecessary sigprocmask calls — Dan Peterson <redmine@...>

Bug #2553: Fix pthreads slowness by eliminating unnecessary sigprocmask calls

21 messages 2010/01/03

[#27437] [Feature #2561] 1.8.7 Patch reduces time cost of Rational operations by 50%. — Kurt Stephens <redmine@...>

Feature #2561: 1.8.7 Patch reduces time cost of Rational operations by 50%.

9 messages 2010/01/06

[#27447] [Bug #2564] [patch] re-initialize timer_thread_{lock,cond} after fork — Aliaksey Kandratsenka <redmine@...>

Bug #2564: [patch] re-initialize timer_thread_{lock,cond} after fork

18 messages 2010/01/06

[#27545] [Feature #2594] 1.8.7 Patch: Reduce time spent in gc.c is_pointer_to_heap(). — Kurt Stephens <redmine@...>

Feature #2594: 1.8.7 Patch: Reduce time spent in gc.c is_pointer_to_heap().

8 messages 2010/01/11

[#27635] [Bug #2619] Proposed method: Process.fork_supported? — Hongli Lai <redmine@...>

Bug #2619: Proposed method: Process.fork_supported?

45 messages 2010/01/20
[#27643] [Feature #2619] Proposed method: Process.fork_supported? — Luis Lavena <redmine@...> 2010/01/21

Issue #2619 has been updated by Luis Lavena.

[#27678] Re: [Feature #2619] Proposed method: Process.fork_supported? — Yukihiro Matsumoto <matz@...> 2010/01/22

Hi,

[#27684] Re: [Feature #2619] Proposed method: Process.fork_supported? — Charles Oliver Nutter <headius@...> 2010/01/22

On Thu, Jan 21, 2010 at 11:27 PM, Yukihiro Matsumoto <matz@ruby-lang.org> wrote:

[#27708] Re: [Feature #2619] Proposed method: Process.fork_supported? — Yukihiro Matsumoto <matz@...> 2010/01/22

Hi,

[#27646] Re: [Bug #2619] Proposed method: Process.fork_supported? — Tanaka Akira <akr@...> 2010/01/21

2010/1/21 Hongli Lai <redmine@ruby-lang.org>:

[#27652] Re: [Bug #2619] Proposed method: Process.fork_supported? — Hongli Lai <hongli@...99.net> 2010/01/21

On 1/21/10 5:20 AM, Tanaka Akira wrote:

[#27653] Re: [Bug #2619] Proposed method: Process.fork_supported? — Tanaka Akira <akr@...> 2010/01/21

2010/1/21 Hongli Lai <hongli@plan99.net>:

[#27662] Re: [Bug #2619] Proposed method: Process.fork_supported? — Vladimir Sizikov <vsizikov@...> 2010/01/21

On Thu, Jan 21, 2010 at 10:53 AM, Tanaka Akira <akr@fsij.org> wrote:

[#27698] [Bug #2629] ConditionVariable#wait(mutex, timeout) should return whether the condition was signalled, not the waited time — Hongli Lai <redmine@...>

Bug #2629: ConditionVariable#wait(mutex, timeout) should return whether the condition was signalled, not the waited time

8 messages 2010/01/22

[#27722] [Feature #2635] Unbundle rdoc — Yui NARUSE <redmine@...>

Feature #2635: Unbundle rdoc

14 messages 2010/01/23

[#27757] [Bug #2638] ruby-1.9.1-p37[68] build on aix5.3 with gcc-4.2 failed to run for me because it ignores where libgcc is located. — Joel Soete <redmine@...>

Bug #2638: ruby-1.9.1-p37[68] build on aix5.3 with gcc-4.2 failed to run for me because it ignores where libgcc is located.

10 messages 2010/01/24

[#27778] [Bug #2641] Seg fault running miniruby during ruby build on Haiku — Alexander von Gluck <redmine@...>

Bug #2641: Seg fault running miniruby during ruby build on Haiku

10 messages 2010/01/25

[#27791] [Bug #2644] memory over-allocation with regexp — Greg Hazel <redmine@...>

Bug #2644: memory over-allocation with regexp

12 messages 2010/01/25

[#27794] [Bug #2647] Lack of testing for String#split — Hugh Sasse <redmine@...>

Bug #2647: Lack of testing for String#split

14 messages 2010/01/25

[#27912] [Bug #2669] mkmf find_executable doesn't find .bat files — Roger Pack <redmine@...>

Bug #2669: mkmf find_executable doesn't find .bat files

11 messages 2010/01/27

[#27930] [Bug:trunk] some behavior changes of lib/csv.rb between 1.8 and 1.9 — Yusuke ENDOH <mame@...>

Hi jeg2, or anyone who knows the implementation of FasterCSV,

15 messages 2010/01/28
[#27931] Re: [Bug:trunk] some behavior changes of lib/csv.rb between 1.8 and 1.9 — James Edward Gray II <james@...> 2010/01/28

On Jan 28, 2010, at 10:51 AM, Yusuke ENDOH wrote:

[ruby-core:27543] Re: better GC?

From: Erik Scheirer <e@...>
Date: 2010-01-11 17:10:48 UTC
List: ruby-core #27543

Free beer wouldn't be bad, either ;-)

I agree that the overhead would be unacceptable, based on the points you raised; for true production-level code that kind of performance hit just isn't an option.

However, maybe another thing to consider is to work up a simple method for GC experimentation, so that anyone who wanted to try out a new GC approach could do so without having to get into the guts of everything. If such a GC ended up being useful, it could then be incorporated into the 'native' build that does not use the 'experimentation hooks'.

In other words, what I am suggesting is a 'pluggable' GC architecture for the purposes of accelerating development of proposed changes/additions to the language in a more 'agile' manner. Then, if a particular GC were accepted, it would be mind-melded into the production-level code without the hooks, so performance is as good as it can get.
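
To make the idea concrete, here is a minimal sketch, in C, of what such 'experimentation hooks' might look like: a table of function pointers that the interpreter calls instead of a hard-wired collector. All of the names (rb_gc_plugin, rb_gc_set_plugin, gc_hook_alloc, gc_hook_barrier) are invented for this example and are not part of Ruby's actual C API; the point is only to show where the indirection, and therefore the overhead, would live.

/*
 * Hypothetical "experimentation hooks": the interpreter calls through a
 * table of function pointers instead of invoking its collector directly.
 * Every name here is invented for illustration only.
 */
#include <stddef.h>

typedef struct rb_gc_plugin {
    void  (*init)(void);                 /* set up heaps, free lists, ...      */
    void *(*alloc)(size_t size);         /* allocate one object slot           */
    void  (*mark)(void *obj);            /* mark an object as reachable        */
    void  (*collect)(void);              /* run a full collection cycle        */
    void  (*write_barrier)(void *parent, /* record a pointer store, if the     */
                           void *child); /* plugged-in GC needs one            */
} rb_gc_plugin;

static const rb_gc_plugin *active_gc;    /* the one collector selected at boot */

void rb_gc_set_plugin(const rb_gc_plugin *plugin)
{
    active_gc = plugin;
    active_gc->init();
}

/* Every allocation and pointer store in the interpreter would funnel through
 * thin wrappers like these; this indirection is exactly where the per-call
 * overhead discussed in this thread comes from. */
void *gc_hook_alloc(size_t size)        { return active_gc->alloc(size); }
void  gc_hook_barrier(void *p, void *c) { active_gc->write_barrier(p, c); }

A production build could compile the chosen collector in directly and let the compiler inline these calls, dropping the hooks entirely; that is roughly what incorporating an accepted GC 'without the hooks' would amount to.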

This kind of approach might serve well for other aspects of language development, but GC is by far the biggest in terms of performance considerations. GC issues killed Java's credibility for years, for example, so the faster these growing pains are gotten out of the way, the better, I would offer.

e

On Jan 11, 2010, at 10:22 AM, Rick DeNatale wrote:

> On Mon, Jan 11, 2010 at 9:42 AM, Erik Scheirer <e@illume.org> wrote:
>> I think a pluggable/loadable GC scheme, as long as it's really simple to use, is perfect.
> 
> So would be a lasting world peace!
> 
>> There would be some overhead created by making it pluggable, but in the scheme of things it would be well worth the small number of CPU cycles lost.
> 
> I just can't see the feasibility of this.  Any good GC involves
> careful interaction between the parts of the system which mutate
> memory and those which manage it.  Getting a highly performant GC
> almost always involves carefully coordinated design of things like:
> 
> The low-level layout of objects.
> The division of memory into various collections of objects (e.g. in a
> generational GC scheme old objects and new objects live in different
> spaces, and sometimes the new space moves each time a minor GC happens).
> Efficient detection of whether an object is old or new.
> For a GC requiring a 'write-barrier', efficient implementation of that
> write barrier.
> ...
> 
> And to really get the most out of a GC, some of the low level
> decisions can be platform and processor specific.
> 
> There are cascading design decisions to be made.  For example, let's
> say we're making a generation scavenging GC.  We need to capture
> enough information as the mutator (the program) runs so that we can
> find any new objects which are referenced by an old object. This is
> the reason for the write barrier.  So there are several issues:
> 
>   How do we detect a store of a reference to a new object into an
> old object with the lowest overhead?
>   How do we remember a store into an old object with the lowest overhead?
>   ...
> 
> There are several strategies for detecting old vs new objects, each
> with its own tradeoffs, for example:
>   A flag bit in the object header
>   Address range checking to see which space it's in, or not in.
>   On some platforms and processors, one might make use of the virtual
> memory hardware and access privileges to detect such stores, but this
> is highly non-portable and may or may not outperform other approaches.
> 
> Flag bits need to be maintained properly, and are expensive; see below.
> Address range checking is more common, and goes back to the
> interactions with the overall design of the "VM".
> 
> And what about how to remember the old objects which need to be
> considered during a new-object GC?
> 
> We could perhaps make a linked list or set of "remembered" objects, but
> this is expensive both in terms of space and speed.
> 
> Most GCs use some form of "card marking", where old space is broken up
> into 'cards' each covering a range of memory.  Cards are similar to
> pages in a virtual memory system, and may or may not be the same size in
> a particular GC implementation. In such a scheme, when a new object
> reference is stored in an old object, the fact that this has happened is
> recorded as a change to the card in which the old object resides.  The
> most obvious way to do this is to have a data structure somewhere
> which has a bit for each card.
> 
> But on most processors setting individual bits is expensive, involving
> fetching, masking, and re-storing a larger datatype.
> 
> The Self guys recognized this and found that, for the processors they
> were working on, using a byte rather than a bit for the mark was much
> better overall, despite requiring eight times the space for the marks.
> 
> http://www.cs.ucsb.edu/~urs/oocsb/papers/write-barrier.pdf
> 
> And these are just some of the variations once one has chosen a
> particular GC algorithm or perhaps one of a family of GC algorithms.
> 
> Now I know that LLVM attempts to do something like this,
> 
> http://llvm.org/docs/GarbageCollection.html
> 
> but it apparently hasn't been all that successful:
> 
> http://lhc-compiler.blogspot.com/2009/01/why-llvm-probably-wont-replace-c.html
> 
> 
> The problem is that LLVM defines the interface between the mutator and
> the GC "framework" in terms of C functions and function callbacks,
> e.g. for the write-barrier, whereas a really efficient GC implements
> the write barrier (and other GC bookkeeping tasks) in a few machine
> instructions.
> 
> 
> I fear that a pluggable GC would only let you play around with pretty
> poorly performing GC alternatives.
> 
> -- 
> Rick DeNatale
> 
> Blog: http://talklikeaduck.denhaven2.com/
> Twitter: http://twitter.com/RickDeNatale
> WWR: http://www.workingwithrails.com/person/9021-rick-denatale
> LinkedIn: http://www.linkedin.com/in/rickdenatale
> 
> 
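
To make the quoted design discussion concrete, the following is a minimal, self-contained C sketch of the scheme Rick describes: an address-range old/new test combined with a Self-style byte-per-card mark. All names, sizes, and the memory layout are invented for illustration; this is not how CRuby's gc.c actually works.

#include <stddef.h>
#include <stdint.h>

#define CARD_SHIFT      9                    /* 512-byte cards                  */
#define OLD_SPACE_SIZE  (64u * 1024 * 1024)  /* one contiguous old generation   */
#define CARD_COUNT      (OLD_SPACE_SIZE >> CARD_SHIFT)

static char    *old_space_base;              /* set when old space is allocated */
static uint8_t  card_marks[CARD_COUNT];      /* one byte per card, not one bit  */

/* "Address range checking to see which space it's in": an object is old
 * iff its address falls inside the old generation's range. */
static inline int is_old(void *obj)
{
    char *p = (char *)obj;
    return p >= old_space_base && p < old_space_base + OLD_SPACE_SIZE;
}

/* The write barrier, conceptually invoked on every store of `child` into a
 * slot of `parent`.  Only old->new stores matter, and remembering one is a
 * single byte store into the parent's card, which is cheaper on most
 * processors than the read-modify-write needed to set an individual bit. */
static inline void write_barrier(void *parent, void *child)
{
    if (is_old(parent) && !is_old(child)) {
        size_t card = (size_t)((char *)parent - old_space_base) >> CARD_SHIFT;
        card_marks[card] = 1;
    }
}

/* At minor-GC time only objects on dirty cards have to be scanned for
 * references into the new generation; clean cards are skipped entirely. */
void scan_dirty_cards(void (*scan_card)(char *start, size_t len))
{
    for (size_t i = 0; i < CARD_COUNT; i++) {
        if (card_marks[i]) {
            scan_card(old_space_base + (i << CARD_SHIFT), (size_t)1 << CARD_SHIFT);
            card_marks[i] = 0;
        }
    }
}

Note that the barrier itself is just a compare, a shift, and a byte store that a compiler can inline at every pointer-store site. Routing the same check through a function-pointer hook (as in the pluggable sketch earlier in the thread, or LLVM's callback interface) turns those few instructions into a call per store, which is exactly the overhead being warned about above.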

