From: "byroot (Jean Boussier)" Date: 2022-08-03T06:42:52+00:00 Subject: [ruby-core:109420] [Ruby master Feature#18885] Long lived fork advisory API (potential Copy on Write optimizations) Issue #18885 has been updated by byroot (Jean Boussier). > After calling this, all fork calls are treated as "long-lived". Is my understanding right? Well, this wouldn't change anything to the `Process.fork` implementation. I think I need to rewrite the ticket description because it is now confusing, I'll do it in a minute. Also as said before I don't even think this is specific to forking servers anymore, I think `RubyVM.make_ready` or something like that would be just fine. Even if you don't fork, optimizations such as precomputing inline caching could improve performance of the first request. > it is good to focus on the use case of nakayoshi_fork Ok, so here's a thread from when Puma added it as an option two years ago, https://github.com/puma/puma/issues/2258#issuecomment-630510423 > After fixing the config bug in nakayoshi_fork, Codetriage is now showing about a 10% reduction in memory usage Some other people report good numbers too, but generally they enabled other changes at the same time. ---------------------------------------- Feature #18885: Long lived fork advisory API (potential Copy on Write optimizations) https://bugs.ruby-lang.org/issues/18885#change-98572 * Author: byroot (Jean Boussier) * Status: Open * Priority: Normal ---------------------------------------- ### Context It is rather common to deploy Ruby with forking servers. A process first load the code and data of the application, and then forks a number of workers to handle an incoming workload. The advantage is that each child has its own GVL and its own GC, so they don't impact each others latency. The downside however is that in uses more memory than using threads or fibers. That increased memory usage is largely mitigated by Copy on Write, but it's far from perfect. Over time various memory regions will be written into and unshared. The classic example is the objects generation, young objects must be promoted to the old generation before forking, otherwise they'll get invalidated on the next GC run. That's what https://github.com/ko1/nakayoshi_fork addresses. But there are other sources of CoW invalidation that could be addressed by MRI if it had a clear notification when it needs to be done. ### Proposal MRI could assume than any `fork` may be long lived and perform all the optimizations it can then, but It may be preferable to have a dedicated API for that. e.g. - `Process.fork(long_lived: true)` - `Process.long_lived_fork` - `RubyVM.prepare_for_long_lived_fork` ### Potential optimizations `nakayoshi_fork` already does the following: - Do a major GC run to get rid of as many dangling objects as possible. - Promote all surviving objects to the highest generation - Compact the heap. But it would be much simpler to do this from inside the VM rather than do cryptic things such as `4.times { GC.start }` from the Ruby side. Also after discussing with @jhawthorn, @tenderlovemaking and @alanwu, we believe this would open the door to several other CoW optimizations: #### Precompute inline caches Even though we don't have hard data to prove it, we are convinced that a big source of CoW invalidation are inline caches. Most ISeq are never invoked during initialization, so child processed are forked with mostly cold caches. 
Also, after discussing with @jhawthorn, @tenderlovemaking and @alanwu, we believe this would open the door to several other CoW optimizations:

#### Precompute inline caches

Even though we don't have hard data to prove it, we are convinced that a big source of CoW invalidation is inline caches. Most ISeqs are never invoked during initialization, so child processes are forked with mostly cold caches. As a result, the first time a method is executed in the child, many memory pages holding ISeqs are invalidated as caches get updated.

We think MRI could try to precompute these caches before forking children. Constant caches in particular should be resolvable statically (somewhat related: https://github.com/ruby/ruby/pull/6049). Method caches are harder to resolve statically, but we can probably apply some heuristics to at least reduce the cache misses.

#### Copy on Write aware GC

We could also keep some metadata about which memory pages are shared, or even introduce a "permanent" generation. [The Instagram engineering team introduced something like that in Python](https://instagram-engineering.com/copy-on-write-friendly-python-garbage-collection-ad6ed5233ddf) ([ticket](https://bugs.python.org/issue31558), [PR](https://github.com/python/cpython/pull/3705)). That makes the GC aware of which objects live on a shared page. With this information the GC can decide not to free dangling objects living on these pages, not to compact these pages, etc.

#### Scan the coderange of all strings

Strings have a lazily computed `coderange` attribute in their flags. So if a string is allocated at boot but only used after fork, its coderange may be computed and the string mutated. Using https://github.com/ruby/ruby/pull/6076, I noticed that 58% of the strings retained at the end of the boot sequence had an `UNKNOWN` coderange. So eagerly scanning the coderange of all strings could also improve Copy on Write performance.
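As an illustration of what is possible from Ruby today, here is a minimal sketch, assuming it runs in the parent right before forking: it walks retained strings and calls `String#valid_encoding?`, which happens to compute and cache the coderange as a side effect. A VM-level pre-fork hook could do the same more cheaply and skip strings whose coderange is already known.

```ruby
# Minimal sketch: force the lazily computed coderange in the parent,
# so children don't dirty shared pages the first time they scan a string.
# Assumes this runs right before fork; the heap traversal itself is not free.
ObjectSpace.each_object(String) do |str|
  str.valid_encoding? # computes and caches the coderange as a side effect
end
```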