[#15701] Ruby 1.9.0-1 snapshot released — Yukihiro Matsumoto <matz@...>
Hi,
[#15704] Proc#curry doesn't work on func which produces func — Lin Jen-Shin <godfat@...>
Proc#curry doesn't work on a function which produces a function,
Hi,
>>>>> "Y" == Yusuke ENDOH <mame@tsg.ne.jp> writes:
[#15707] Schedule for the 1.8.7 release — "Akinori MUSHA" <knu@...>
Hi, developers,
On Sat, Mar 01, 2008 at 08:58:00PM +0900, Akinori MUSHA wrote:
Hi,
At Fri, 21 Mar 2008 23:16:54 +0900,
At Mon, 24 Mar 2008 21:39:45 +0900,
[#15709] capitalize and downcase — Trans <transfire@...>
I've always wondered why String#capitalize downcases the whole string
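The behavior in question is easy to see (this part is standard Ruby):

  "ruby CORE".capitalize   # => "Ruby core" -- everything after the first character is downcased
  "ruby CORE".upcase       # => "RUBY CORE" -- by contrast, upcase/downcase touch the whole string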
[#15713] Ruby String hash key overflow when converting to Fixnum. — "Chiyuan Zhang" <pluskid@...>
Hi, all! I've opened an issue at RubyForge:
[#15728] Question on build process - skipping unsupported extensions — Daniel Berger <djberg96@...>
Hi,
[#15740] Copy-on-write friendly garbage collector — Hongli Lai <hongli@...99.net>
Hi.
Hi,
Yukihiro Matsumoto wrote:
Yukihiro Matsumoto wrote:
Hi.
Hongli Lai wrote:
Hi.
Hi,
I believe I managed to close the performance gap to only 6% slower than
Daniel DeLorme wrote:
[#15746] Am I misinterpreting the new keyword arguments to IO.foreach and friends? — Dave Thomas <dave@...>
I was expecting this to pass lines to the block:
[#15756] embedding Ruby 1.9.0 inside pthread — "Suraj Kurapati" <sunaku@...>
Hello,
Hi,
Hi,
Yukihiro Matsumoto wrote:
Suraj N. Kurapati wrote:
Hi,
Nobuyoshi Nakada wrote:
Suraj N. Kurapati wrote:
Hongli Lai wrote:
[#15775] next(n), succ(n) ? — Trans <transfire@...>
Can anyone see any reason against adding an optional parameter to
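The proposal in #15775, sketched (a hypothetical API -- succ and next take no argument in current Ruby):

  5.next(3)     # would return 8, i.e. three applications of succ
  "x".succ(2)   # would return "z"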
[#15778] Named captures and regular captures — Dave Thomas <dave@...>
It seems that once you have a named capture in a regular expression,
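What Dave is most likely referring to (reconstructed from the subject, so take it as a sketch): once a regexp contains a named group, its plain parenthesized groups stop capturing in 1.9:

  m = /(?<word>\w+) (\d+)/.match("abc 123")
  m[:word]     # => "abc"
  m.captures   # => ["abc"] -- the unnamed (\d+) group no longer captures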
[#15783] Adding startup and shutdown to Test::Unit — Daniel Berger <Daniel.Berger@...>
Hi all,
Daniel Berger wrote:
On Wed, Mar 05, 2008 at 07:52:40AM +0900, Daniel Berger wrote:
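The feature proposed in #15783, sketched (hypothetical at this point; method names are taken from the subject line):

  class TC_Foo < Test::Unit::TestCase
    def self.startup;  end   # proposed: run once, before any test in the class
    def self.shutdown; end   # proposed: run once, after all tests in the class
    def setup;    end        # existing: run before each individual test
    def teardown; end        # existing: run after each individual test
  end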
[#15835] TimeoutError in core, timeouts for ConditionVariable#wait — MenTaLguY <mental@...>
I've been reworking JRuby's stdlib to improve performance and fix
On Sun, 2008-03-09 at 12:13 +0900, MenTaLguY wrote:
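The API change under discussion in #15835, sketched (the argument name and semantics are assumed from the subject):

  require 'thread'
  mutex = Mutex.new
  cond  = ConditionVariable.new
  mutex.synchronize do
    cond.wait(mutex, 0.5)   # proposed: return after 0.5s even if never signaled
  end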
[#15837] Correct procedure for patch review? — Hongli Lai <hongli@...99.net>
Hi.
[#15855] Ruby 1.8.6 trace return line numbers wrong — "Rocky Bernstein" <rocky.bernstein@...>
Consider this program:
[#15860] Webrick directory traversal exploit on UNIX — Jos Backus <jos@...>
DSecRG Advisory #DSECRG-08-026 aka -018 describes a remote directory traversal
[#15871] Sparc architecture optimizations — Thomas Enebo <Thomas.Enebo@...>
Someone at Sun has been looking at Ruby on Sparc:
Thomas Enebo wrote:
Hello Ruby-core,
Hi,
Yukihiro Matsumoto wrote:
Prashant Srinivasan wrote:
[#15880] Ruby 1.8.6 binding value after "if" expression evaluation — "Rocky Bernstein" <rocky.bernstein@...>
Here's another trace hook weirdness that I've encountered.
Hello,
Thanks. The output you report matches what I get in 1.8.6 and suggests where
I think I've found why this is happening. The trace hook for NODE_IF is
[#15907] Range#member? semantics seem wrong — Dave Thomas <dave@...>
Range#member? has been changed so that if the start and end of the
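The semantic question can be shown with a non-numeric range, where boundary comparison and iteration disagree (plain Ruby; no claim here about which one member? should pick):

  ("a".."z").to_a.include?("cc")   # => false -- "cc" is never produced by succ
  "a" <= "cc" && "cc" <= "z"       # => true  -- but it lies between the endpoints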
[#15909] RARRAY_PTR — "Laurent Sansonetti" <laurent.sansonetti@...>
Hi,
[#15917] Ruby 1.9 (trunk) crashes when running RubyGems and Rake — Hongli Lai <hongli@...99.net>
Ruby 1.9 (trunk) seems to crash when running the supplied RubyGems and Rake:
Hi,
Nobuyoshi Nakada wrote:
On Mon, Mar 17, 2008 at 06:53:19PM +0900, Hongli Lai wrote:
[#15927] how to create a block with a block parameter in C? — Paul Brannan <pbrannan@...>
This works in Ruby (1.9):
>>>>> "P" == Paul Brannan <pbrannan@atdesk.com> writes:
[#15933] complex and rational — Dave Thomas <dave@...>
Before I start doing the documentation for the PickAxe, could I just
[#15936] Are Deprecated Methods "add_final" & "remove_final" supposed to ACTUALLY WORK? — Charles Thornton <ceo@...>
While working on the IRHG docs for GC, the following
>>>>> "C" == Charles Thornton <ceo@hawthorne-press.com> writes:
ts wrote:
[#15938] Questions on Enumerator#skip_first and Enumerable#first — "Artem Voroztsov" <artem.voroztsov@...>
I asked in ruby-talk, but did not get an answer.
On Mar 18, 2008, at 6:20 AM, Artem Voroztsov wrote:
[#15975] Bugs in REXML — "Federico Builes" <federico.builes@...>
Hi,
On Mar 21, 2008, at 17:35, Federico Builes wrote:
[#15980] 1.8.6 memory leak? — "Stephen Sykes" <sdsykes@...>
Hi,
[#15983] Changing the algorithm of String#* — apeiros <apeiros@...>
Hi there
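The usual candidate for such a change is binary decomposition of the repeat count: build the result from doublings of the source string, so the number of concatenations grows with log n rather than n. A sketch in Ruby (illustrative; the thread's actual proposal may differ):

  def repeat(str, n)
    result = ""
    chunk  = str.dup
    while n > 0
      result << chunk if n.odd?   # append this power-of-two chunk if the bit is set
      n >>= 1
      chunk << chunk if n > 0     # double the chunk for the next bit
    end
    result
  end

  repeat("ab", 3)   # => "ababab"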
[#15990] Recent changes in Range#step behavior — "Vladimir Sizikov" <vsizikov@...>
Hi,
Hi Dave,
Hi Dave,
Hi,
Hi,
Hi,
On Wed, Mar 26, 2008 at 7:01 PM, Dave Thomas <dave@pragprog.com> wrote:
Dave Thomas wrote:
Dave Thomas wrote:
Dave Thomas wrote:
Dave,
This is all a semantic problem. Different people have different
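One concrete point where the semantics diverge is stepping over floats, where repeated addition and multiplication from the start give different answers (illustrative; which one 1.9 should pick is exactly what the thread is debating):

  (1.0..2.0).step(0.4).to_a
  # repeated addition:        1.0, 1.4, 1.7999999999999998
  # start + i * step instead: 1.0, 1.4, 1.8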
[#16011] New ERb mode — Marc Haisenko <haisenko@...>
Hi folks,
On Tuesday 25 March 2008, Marc Haisenko wrote:
ERb already does this:
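Presumably a reference to ERB's existing trim modes (standard library API, shown here from memory):

  require 'erb'
  template = "<% [1, 2].each do |i| -%>\nline <%= i %>\n<% end -%>\n"
  puts ERB.new(template, nil, "-").result(binding)
  # prints:
  #   line 1
  #   line 2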
On Tuesday 25 March 2008, Jason Roelofs wrote:
On Tue, Mar 25, 2008 at 11:39 AM, Marc Haisenko <haisenko@comdasys.com> wrote:
On Tuesday 25 March 2008, Jason Roelofs wrote:
[#16023] some Enumerable methods slower in 1.9 on OS X after revision 15124 — Chris Shea <cmshea@...>
All,
Hi,
Hi,
On Thu, Mar 27, 2008 at 02:26:51PM +0900, Nobuyoshi Nakada wrote:
Hi,
Nobuyoshi Nakada wrote:
Hi,
[#16057] About the license of gserver.rb being "freeware"? — "XiaoLiang Liu" <liuxlsh@...>
Hello everyone,
[#16088] command_call in parse.y — Adrian Thurston <thurston@...>
Hi,
Re: Copy-on-write friendly garbage collector
Hi,
In message "Re: Copy-on-write friendly garbage collector"
on Mon, 3 Mar 2008 18:48:34 +0900, Hongli Lai <hongli@plan99.net> writes:
|I've written a patch which makes Ruby's garbage collector copy-on-write
|friendly. Details can be found on my blog,
|http://izumi.plan99.net/blog/index.php/category/optimizing-rails/, in
|the "Making Ruby's garbage collector copy-on-write friendly" series.
|
|Matz had shown interest in merging the patch into Ruby 1.9. I'm
|wondering whether that has already been done, and if not, whether I can
|be of any assistance.
Here's the patch against the latest trunk (r15675). It's still 8-10%
slower than the current implementation.
matz.
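The heart of the patch: mark bits move out of each object's flags word (the old FL_MARK) into a per-heap-slot bitfield, so the mark phase no longer writes into the pages that hold the objects themselves -- pages a forked child still shares copy-on-write with its parent. The lookup that find_position_in_bitfield performs below is plain div/mod arithmetic; in Ruby terms (a sketch, assuming 32-bit ints):

  BITS_PER_WORD = 32                # sizeof(int) * 8 on the platforms in question
  def mark_position(object_index)   # index of the object within its heap slot
    [object_index / BITS_PER_WORD,  # which word of the marks array
     object_index % BITS_PER_WORD]  # which bit within that word
  end

  mark_position(70)   # => [2, 6]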
diff --git a/debug.c b/debug.c
index d7f99ed..dfcb523 100644
--- a/debug.c
+++ b/debug.c
@@ -29,8 +29,8 @@ static const union {
RUBY_ENC_CODERANGE_7BIT = ENC_CODERANGE_7BIT,
RUBY_ENC_CODERANGE_VALID = ENC_CODERANGE_VALID,
RUBY_ENC_CODERANGE_BROKEN = ENC_CODERANGE_BROKEN,
- RUBY_FL_MARK = FL_MARK,
- RUBY_FL_RESERVED = FL_RESERVED,
+ RUBY_FL_RESERVED0 = FL_RESERVED0,
+ RUBY_FL_RESERVED1 = FL_RESERVED1,
RUBY_FL_FINALIZE = FL_FINALIZE,
RUBY_FL_TAINT = FL_TAINT,
RUBY_FL_EXIVAR = FL_EXIVAR,
diff --git a/gc.c b/gc.c
index c47f8a0..38a9fe7 100644
--- a/gc.c
+++ b/gc.c
@@ -22,8 +22,14 @@
#include "gc.h"
#include <stdio.h>
#include <setjmp.h>
+#include <math.h>
#include <sys/types.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif
@@ -145,6 +151,8 @@ static struct heaps_slot {
void *membase;
RVALUE *slot;
int limit;
+ int *marks;
+ int marks_size;
} *heaps;
static int heaps_length = 0;
static int heaps_used = 0;
@@ -322,6 +330,36 @@ ruby_xfree(void *x)
RUBY_CRITICAL(free(x));
}
+static int debugging = 0;
+
+#define DEBUG_POINT(message) \
+ do { \
+ if (debugging) { \
+ printf("%s\n", message); \
+ getchar(); \
+ } \
+ } while (0)
+
+#define OPTION_ENABLED(name) (getenv((name)) && *getenv((name)) && *getenv((name)) != '0')
+
+static void *
+alloc_ruby_heap(size_t size)
+{
+ return malloc(size);
+}
+
+static void
+free_ruby_heap(void *heap)
+{
+ free(heap);
+}
+
+static void
+init_debugging()
+{
+ debugging = OPTION_ENABLED("RUBY_GC_DEBUG");
+}
+
/*
* call-seq:
@@ -413,6 +451,106 @@ rb_gc_unregister_address(VALUE *addr)
}
}
+static struct heaps_slot *last_heap = NULL;
+
+static inline struct heaps_slot *
+find_heap_slot_for_object(RVALUE *object)
+{
+ struct heaps_slot *heap;
+ register long hi, lo, mid;
+
+ /* Look in the cache first. */
+ if (last_heap != NULL && object >= last_heap->slot
+ && object < last_heap->slot + last_heap->limit) {
+ return last_heap;
+ }
+ /* find the heap slot for the object using binary search */
+ lo = 0;
+ hi = heaps_used;
+ while (lo < hi) {
+ mid = (lo + hi) / 2;
+ heap = &heaps[mid];
+ if (heap->slot <= object) {
+ if (object < heap->slot + heap->limit) {
+ /* Cache this result. According to empirical evidence, the chance is
+ * high that the next lookup will be for the same heap slot.
+ */
+ last_heap = heap;
+ return heap;
+ }
+ lo = mid + 1;
+ }
+ else {
+ hi = mid;
+ }
+ }
+ return NULL;
+}
+
+static inline void
+find_position_in_bitfield(struct heaps_slot *hs, RVALUE *object,
+ unsigned int *bitfield_index, unsigned int *bitfield_offset)
+{
+ unsigned int index;
+
+ index = object - hs->slot;
+ *bitfield_index = index / (sizeof(int) * 8);
+ *bitfield_offset = index % (sizeof(int) * 8);
+}
+
+
+static void
+rb_mark_table_add(RVALUE *object)
+{
+ struct heaps_slot *hs;
+ unsigned int bitfield_index, bitfield_offset;
+
+ hs = find_heap_slot_for_object(object);
+ if (hs != NULL) {
+ find_position_in_bitfield(hs, object, &bitfield_index, &bitfield_offset);
+ hs->marks[bitfield_index] |= (1 << bitfield_offset);
+ }
+}
+
+static inline int
+rb_mark_table_heap_contains(struct heaps_slot *hs, RVALUE *object)
+{
+ unsigned int bitfield_index, bitfield_offset;
+
+ find_position_in_bitfield(hs, object, &bitfield_index, &bitfield_offset);
+ return hs->marks[bitfield_index] & (1 << bitfield_offset);
+}
+
+static inline int
+rb_mark_table_contains(RVALUE *object)
+{
+ struct heaps_slot *hs;
+
+ hs = find_heap_slot_for_object(object);
+ if (hs != NULL) {
+ return rb_mark_table_heap_contains(hs, object);
+ }
+ return 0; /* object is not on any heap slot; treat it as unmarked */
+}
+
+static inline void
+rb_mark_table_heap_remove(struct heaps_slot *hs, RVALUE *object)
+{
+ unsigned int bitfield_index, bitfield_offset;
+ find_position_in_bitfield(hs, object, &bitfield_index, &bitfield_offset);
+ hs->marks[bitfield_index] &= ~(1 << bitfield_offset);
+}
+
+static void
+rb_mark_table_remove(RVALUE *object)
+{
+ struct heaps_slot *hs;
+
+ hs = find_heap_slot_for_object(object);
+ if (hs != NULL) {
+ rb_mark_table_heap_remove(hs, object);
+ }
+}
+
static int
heap_cmp(const void *ap, const void *bp, void *dummy)
{
@@ -445,7 +583,7 @@ add_heap(void)
}
for (;;) {
- RUBY_CRITICAL(p = (RVALUE*)malloc(sizeof(RVALUE)*(heap_slots+1)));
+ RUBY_CRITICAL(p = (RVALUE*)alloc_ruby_heap(sizeof(RVALUE)*(heap_slots+1)));
if (p == 0) {
if (heap_slots == HEAP_MIN_SLOTS) {
rb_memerror();
@@ -460,6 +598,8 @@ add_heap(void)
p = (RVALUE*)((VALUE)p + sizeof(RVALUE) - ((VALUE)p % sizeof(RVALUE)));
heaps[heaps_used].slot = p;
heaps[heaps_used].limit = heap_slots;
+ heaps[heaps_used].marks_size = (int) (ceil(heap_slots / (sizeof(int) * 8.0)));
+ heaps[heaps_used].marks = (int *) calloc(heaps[heaps_used].marks_size, sizeof(int));
break;
}
pend = p + heap_slots;
@@ -494,6 +634,7 @@ rb_newobj_from_heap(void)
freelist = freelist->as.free.next;
MEMZERO((void*)obj, RVALUE, 1);
+ RANY(obj)->as.free.flags = 0;
#ifdef GC_DEBUG
RANY(obj)->file = rb_sourcefile();
RANY(obj)->line = rb_sourceline();
@@ -702,8 +843,7 @@ gc_mark_all(void)
for (i = 0; i < heaps_used; i++) {
p = heaps[i].slot; pend = p + heaps[i].limit;
while (p < pend) {
- if ((p->as.basic.flags & FL_MARK) &&
- (p->as.basic.flags != FL_MARK)) {
+ if (rb_mark_table_contains(p) && (p->as.basic.flags != 0)) {
gc_mark_children((VALUE)p, 0);
}
p++;
@@ -737,21 +877,8 @@ is_pointer_to_heap(void *ptr)
if (p < lomem || p > himem) return Qfalse;
if ((VALUE)p % sizeof(RVALUE) != 0) return Qfalse;
- /* check if p looks like a pointer using bsearch*/
- lo = 0;
- hi = heaps_used;
- while (lo < hi) {
- mid = (lo + hi) / 2;
- heap = &heaps[mid];
- if (heap->slot <= p) {
- if (p < heap->slot + heap->limit)
- return Qtrue;
- lo = mid + 1;
- }
- else {
- hi = mid;
- }
- }
+ if (find_heap_slot_for_object(p))
+ return Qtrue;
return Qfalse;
}
@@ -857,8 +984,8 @@ gc_mark(VALUE ptr, int lev)
obj = RANY(ptr);
if (rb_special_const_p(ptr)) return; /* special const not marked */
if (obj->as.basic.flags == 0) return; /* free cell */
- if (obj->as.basic.flags & FL_MARK) return; /* already marked */
- obj->as.basic.flags |= FL_MARK;
+ if (rb_mark_table_contains(obj)) return; /* already marked */
+ rb_mark_table_add(obj);
if (lev > GC_LEVEL_MAX || (lev == 0 && ruby_stack_check())) {
if (!mark_stack_overflow) {
@@ -892,8 +1019,8 @@ gc_mark_children(VALUE ptr, int lev)
obj = RANY(ptr);
if (rb_special_const_p(ptr)) return; /* special const not marked */
if (obj->as.basic.flags == 0) return; /* free cell */
- if (obj->as.basic.flags & FL_MARK) return; /* already marked */
- obj->as.basic.flags |= FL_MARK;
+ if (rb_mark_table_contains(obj)) return; /* already marked */
+ rb_mark_table_add(obj);
marking:
if (FL_TEST(obj, FL_EXIVAR)) {
@@ -1147,10 +1274,15 @@ finalize_list(RVALUE *p)
while (p) {
RVALUE *tmp = p->as.free.next;
run_final((VALUE)p);
- if (!FL_TEST(p, FL_SINGLETON)) { /* not freeing page */
+ /* Don't free objects that are singletons, or objects that are already freed.
+ * The latter is to prevent the unnecessary marking of memory pages as dirty,
+ * which can destroy copy-on-write semantics.
+ */
+ if (!FL_TEST(p, FL_SINGLETON) && p->as.free.flags != 0) {
VALGRIND_MAKE_MEM_UNDEFINED((void*)p, sizeof(RVALUE));
p->as.free.flags = 0;
p->as.free.next = freelist;
+ rb_mark_table_remove(p);
freelist = p;
}
p = tmp;
@@ -1164,7 +1296,8 @@ free_unused_heaps(void)
for (i = j = 1; j < heaps_used; i++) {
if (heaps[i].limit == 0) {
- free(heaps[i].membase);
+ free_ruby_heap(heaps[i].membase);
+ free(heaps[i].marks);
heaps_used--;
}
else {
@@ -1208,29 +1341,34 @@ gc_sweep(void)
p = heaps[i].slot; pend = p + heaps[i].limit;
while (p < pend) {
- if (!(p->as.basic.flags & FL_MARK)) {
+ if (!rb_mark_table_contains(p)) {
if (p->as.basic.flags) {
obj_free((VALUE)p);
}
if (need_call_final && FL_TEST(p, FL_FINALIZE)) {
- p->as.free.flags = FL_MARK; /* remain marked */
+ p->as.free.flags = FL_FINALIZE;
p->as.free.next = final_list;
final_list = p;
}
else {
VALGRIND_MAKE_MEM_UNDEFINED((void*)p, sizeof(RVALUE));
- p->as.free.flags = 0;
- p->as.free.next = freelist;
+ /* Do not touch the fields if they don't have to be modified.
+ * This is in order to preserve copy-on-write semantics.
+ */
+ if (p->as.free.flags != 0)
+ p->as.free.flags = 0;
+ if (p->as.free.next != freelist)
+ p->as.free.next = freelist;
freelist = p;
}
n++;
}
- else if (RBASIC(p)->flags == FL_MARK) {
+ else if (RBASIC(p)->flags == FL_FINALIZE) {
/* objects to be finalized */
- /* do nothing remain marked */
+ /* do nothing here */
}
else {
- RBASIC(p)->flags &= ~FL_MARK;
+ rb_mark_table_heap_remove(&heaps[i], p);
live++;
}
p++;
@@ -1272,6 +1410,7 @@ rb_gc_force_recycle(VALUE p)
VALGRIND_MAKE_MEM_UNDEFINED((void*)p, sizeof(RVALUE));
RANY(p)->as.free.flags = 0;
RANY(p)->as.free.next = freelist;
+ rb_mark_table_remove((RVALUE *) p);
freelist = RANY(p);
}
@@ -1462,6 +1601,7 @@ mark_current_machine_context(rb_thread_t *th)
FLUSH_REGISTER_WINDOWS;
/* This assumes that all registers are saved into the jmp_buf (and stack) */
+ memset(save_regs_gc_mark, 0, sizeof(save_regs_gc_mark));
setjmp(save_regs_gc_mark);
mark_locations_array((VALUE*)save_regs_gc_mark,
sizeof(save_regs_gc_mark) / sizeof(VALUE));
@@ -1501,6 +1641,7 @@ garbage_collect(void)
SET_STACK_END;
+ last_heap = NULL;
init_mark_stack();
th->vm->self ? rb_gc_mark(th->vm->self) : rb_vm_mark(th->vm);
@@ -1602,6 +1743,18 @@ ruby_set_stack_size(size_t size)
rb_gc_stack_maxsize = size;
}
+int
+rb_gc_is_thread_marked(VALUE the_thread)
+{
+ if (FL_ABLE(the_thread)) {
+ return rb_mark_table_contains((RVALUE *) the_thread);
+ }
+ else {
+ return 0;
+ }
+}
+
void
Init_stack(VALUE *addr)
{
@@ -2037,6 +2190,7 @@ rb_gc_call_finalizer_at_exit(void)
DATA_PTR(p) && RANY(p)->as.data.dfree &&
RANY(p)->as.basic.klass != rb_cThread) {
p->as.free.flags = 0;
+ rb_mark_table_remove(p);
if ((long)RANY(p)->as.data.dfree == -1) {
RUBY_CRITICAL(free(DATA_PTR(p)));
}
@@ -2048,6 +2202,7 @@ rb_gc_call_finalizer_at_exit(void)
else if (BUILTIN_TYPE(p) == T_FILE) {
if (rb_io_fptr_finalize(RANY(p)->as.file.fptr)) {
p->as.free.flags = 0;
+ rb_mark_table_remove(p);
VALGRIND_MAKE_MEM_UNDEFINED((void*)p, sizeof(RVALUE));
}
}
@@ -2268,6 +2423,61 @@ count_objects(int argc, VALUE *argv, VALUE os)
return hash;
}
+static VALUE
+os_statistics()
+{
+ int i;
+ int n = 0;
+ unsigned int objects = 0;
+ unsigned int total_heap_size = 0;
+ unsigned int ast_nodes = 0;
+ char message[1024];
+
+ for (i = 0; i < heaps_used; i++) {
+ RVALUE *p, *pend;
+
+ p = heaps[i].slot;
+ pend = p + heaps[i].limit;
+ for (;p < pend; p++) {
+ if (p->as.basic.flags) {
+ int isAST = 0;
+ switch (TYPE(p)) {
+ case T_ICLASS:
+ case T_NODE:
+ isAST = 1;
+ break;
+ case T_CLASS:
+ if (FL_TEST(p, FL_SINGLETON)) {
+ isAST = 1;
+ break;
+ }
+ default:
+ break;
+ }
+ objects++;
+ if (isAST) {
+ ast_nodes++;
+ }
+ }
+ }
+ total_heap_size += (char *) pend - (char *) heaps[i].membase;
+ }
+
+ snprintf(message, sizeof(message),
+ "Number of objects: %d (%d AST nodes, %.2f%%)\n"
+ "Heap slot size: %d\n"
+ "Number of heaps: %d\n"
+ "Total size of objects: %.2f KB\n"
+ "Total size of heaps: %.2f KB\n",
+ objects, ast_nodes, ast_nodes * 100 / (double) objects,
+ (int) sizeof(RVALUE),
+ heaps_used,
+ objects * sizeof(RVALUE) / 1024.0,
+ total_heap_size / 1024.0
+ );
+ return rb_str_new2(message);
+}
+
/*
* The <code>GC</code> module provides an interface to Ruby's mark and
* sweep garbage collection mechanism. Some of the underlying methods
@@ -2300,6 +2510,8 @@ Init_GC(void)
rb_define_module_function(rb_mObSpace, "_id2ref", id2ref, 1);
+ rb_define_module_function(rb_mObSpace, "statistics", os_statistics, 0);
+
rb_gc_register_address(&rb_mObSpace);
rb_global_variable(&finalizers);
rb_gc_unregister_address(&rb_mObSpace);
@@ -2315,4 +2527,6 @@ Init_GC(void)
rb_define_method(rb_mKernel, "object_id", rb_obj_id, 0);
rb_define_module_function(rb_mObSpace, "count_objects", count_objects, -1);
+
+ init_debugging();
}
diff --git a/include/ruby/ruby.h b/include/ruby/ruby.h
index 4438bc3..1626a7e 100644
--- a/include/ruby/ruby.h
+++ b/include/ruby/ruby.h
@@ -621,8 +621,8 @@ struct RBignum {
#define RVALUES(obj) (R_CAST(RValues)(obj))
#define FL_SINGLETON FL_USER0
-#define FL_MARK (((VALUE)1)<<5)
-#define FL_RESERVED (((VALUE)1)<<6) /* will be used in the future GC */
+#define FL_RESERVED0 (((VALUE)1)<<5) /* will be used in the future GC */
+#define FL_RESERVED1 (((VALUE)1)<<6) /* will be used in the future GC */
#define FL_FINALIZE (((VALUE)1)<<7)
#define FL_TAINT (((VALUE)1)<<8)
#define FL_EXIVAR (((VALUE)1)<<9)
@@ -716,6 +716,7 @@ void rb_global_variable(VALUE*);
void rb_register_mark_object(VALUE);
void rb_gc_register_address(VALUE*);
void rb_gc_unregister_address(VALUE*);
+int rb_gc_is_thread_marked(VALUE);
ID rb_intern(const char*);
ID rb_intern2(const char*, long);
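For what it's worth, with the patch applied the new entry points can be exercised from Ruby like this (names as defined in the patch above; output format per os_statistics):

  puts ObjectSpace.statistics   # object/heap counts, heap slot size, AST-node percentage

and setting the RUBY_GC_DEBUG environment variable at startup (to anything other than an empty string or "0") enables the DEBUG_POINT pauses in gc.c.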