[ruby-core:114794] [Ruby master Bug#17263] Fiber context switch degrades with number of fibers, limit on number of fibers
From:
"kjtsanaktsidis (KJ Tsanaktsidis) via ruby-core" <ruby-core@...>
Date:
2023-09-18 08:21:35 UTC
List:
ruby-core #114794
Issue #17263 has been updated by kjtsanaktsidis (KJ Tsanaktsidis).
File flamegraph_make_many_fibers.png added
File cache_misses_vs_time.png added
OK, so I spent way longer than I should have staring at this but I think I've worked out what's going on. There are a couple of separate fibers to unpick here (ba-dum tish); I'll look at making fibers and switching between fibers as separate issues.
## Making lots of fibers
Firstly, the process of making lots of fibers (and switching to them for the first time) is kind of slow. To see what's going on here, I made the following script, `make_many_fibers.rb`:
```ruby
# make_many_fibers.rb
GC.disable

MAX = 2**20
FIBERS = []

MAX.times do
  fiber = Fiber.new do
    loop do
      Fiber.yield
    end
  end
  fiber.resume
  FIBERS << fiber
end
```
And then ran it under perf to generate a combined userspace/kernelspace flamegraph:
```
sudo swapoff --all # make sure we don't get page faults from swap
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enable # make sure THP doesn't cause faults
echo -1 | sudo tee /proc/sys/kernel/perf_event_paranoid # allow unprivileged users to run perf
echo 1000000000 | sudo tee /proc/sys/vm/max_map_count # boost mapping count
% ruby --version
ruby 3.3.0dev (2023-09-05T08:35:28Z master 5b146eb5a1) [x86_64-linux]
perf record -F 999 -g -- ruby make_many_fibers.rb
perf script report flamegraph
```

(The flamegraph is attached as flamegraph_make_many_fibers.png; you can also look at the [interactive HTML version](https://gistpreview.github.io/?1eef2ef1e521742c9f7778a14483d939).)
You can see almost all of the time spent in the process of making fibers is in the kernel. A few things jump out at me.
### Fiber memory mappings
Firstly, let's discuss how memory is mapped for the fibers, in `fiber_pool_expand`. The code makes a single call to `mmap(2)` to produce a mapping large enough to hold a pool of fibers (up to 1024) - this happens in [fiber_pool_allocate_memory](https://github.com/ruby/ruby/blob/c87f2a4f156477ba962de19866a1f6298d567398/cont.c#L475). Then, we divide up that space for each fiber in [fiber_pool_expand](https://github.com/ruby/ruby/blob/c87f2a4f156477ba962de19866a1f6298d567398/cont.c#L501). Each fiber gets an area for its Ruby stack, an area for its C stack, and a "guard page" below the stack, so that a fiber which tries to grow its native stack out of bounds generates a page fault instead of trashing some random memory. That guard page is marked non-readable and non-writable through a call to `mprotect(2)` [here](https://github.com/ruby/ruby/blob/c87f2a4f156477ba962de19866a1f6298d567398/cont.c#L546).
Although the pool starts out as a single mapping, on Linux every page within one memory mapping (VMA) must have the same permissions. Thus, when a page inside a mapping has its protection flags changed, the mapping must actually be split into three: the area before the page being changed, a mapping containing just the changed page, and the area after it. We can see in the flamegraph that a huge amount of kernel time is spent in the call to `mprotect`, doing `__split_vma`.
Since we know ahead of time what the layout of the fiber pool's mappings is going to be, it might be worth experimenting with explicitly making two `mmap` calls per fiber: one for the usable stack memory, and one for the guard page. This would avoid the continuous splitting as we `mprotect` the guard pages inside the one big mapping; a rough sketch follows.
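To make the idea concrete, here is a sketch of a two-mapping allocation (my own illustration, not the current `cont.c` code; the sizes and the `allocate_fiber_stack` helper are invented):
```c
#include <stdlib.h>
#include <sys/mman.h>

#define GUARD_SIZE 4096          /* assume 4 KiB pages */
#define STACK_SIZE (128 * 1024)  /* invented per-fiber stack size */

/* Allocate one fiber stack as two adjacent mappings: a PROT_NONE guard
 * page below a read/write stack. The guard is its own mapping from the
 * start, so no later mprotect() has to carve up a big shared pool VMA. */
static void *allocate_fiber_stack(void)
{
    /* Reserve the whole range first, so the two pieces end up adjacent. */
    char *base = mmap(NULL, GUARD_SIZE + STACK_SIZE, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) return NULL;

    /* Replace the upper slice with a read/write mapping. MAP_FIXED still
     * splits a VMA here, but only this small per-fiber reservation, not a
     * pool-sized mapping shared by up to 1024 fibers. */
    void *stack = mmap(base + GUARD_SIZE, STACK_SIZE,
                       PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (stack == MAP_FAILED) {
        munmap(base, GUARD_SIZE + STACK_SIZE);
        return NULL;
    }
    return stack; /* the native stack grows down toward the guard page */
}
```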
Another thing we could try here is using mmap's `MAP_GROWSDOWN` option instead of explicitly making guard pages ourselves. The [docs](https://man7.org/linux/man-pages/man2/mmap.2.html) say that "this flag is used for stacks"; the kernel maintains a guard region below the mapping and grows the mapping automatically when that region is touched. I haven't looked too hard at how it's implemented, but it might mean we can use a _single_ mapping per fiber, with the guard page managed by the kernel.
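A minimal sketch of that alternative, assuming the semantics described in the man page hold (I haven't verified how the kernel's guard gap behaves on recent kernels):
```c
#include <stddef.h>
#include <sys/mman.h>

/* One mapping per fiber: the kernel keeps a guard gap below a
 * MAP_GROWSDOWN mapping and grows the mapping downward on a fault just
 * under the stack pointer, so we never call mprotect() ourselves. */
static void *allocate_growsdown_stack(size_t size)
{
    void *stack = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_GROWSDOWN | MAP_STACK,
                       -1, 0);
    return stack == MAP_FAILED ? NULL : stack;
}
```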
### Page faults
The second place we see a ton of kernel time being spent is the page fault handler. This happens because when a mapping is created with `mmap`, the kernel doesn't actually back it with any memory; instead, it lazily waits for the application to touch the memory range, and only then makes real memory available. We hit the page fault handler in two main places: `coroutine_initialize` (where the fiber's C stack is used for the first time) and `vm_push_frame` (where the fiber's Ruby stack is used for the first time).
This is why, in the profile @ioquatix shared before, we see more time in `vm_push_frame`. Each fiber's first call to this function takes longer because it needs to fault in the initial page of its Ruby stack, and with more fibers and the same total number of iterations, a larger fraction of the `vm_push_frame` calls are such first calls.
There isn't a whole lot which can be done about this overhead, unfortunately. The only thing which would really make a difference here is using `MAP_LOCKED` to eagerly allocate all the real memory when the mapping is first made, but that would be extraordinarily wasteful of real system memory.
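For anyone who wants to watch the demand paging happen, here is a small standalone C demo (mine, not from the Ruby tree) that counts minor faults with `getrusage(2)` while touching a fresh anonymous mapping. Adding `MAP_POPULATE` (or the `MAP_LOCKED` mentioned above) to the `mmap` flags should make the first-pass faults disappear, at the cost of committing all the memory immediately:
```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    size_t len = 64UL * 1024 * 1024; /* 64 MiB anonymous mapping */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return 1;

    long before = minor_faults();
    for (size_t i = 0; i < len; i += 4096) p[i] = 1; /* first touch: faults */
    printf("first pass faults:  %ld\n", minor_faults() - before);

    before = minor_faults();
    for (size_t i = 0; i < len; i += 4096) p[i] = 2; /* resident: no faults */
    printf("second pass faults: %ld\n", minor_faults() - before);
    return 0;
}
```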
## Transferring between existing fibers
Once the fibers are all created (and used for the first time, to fault in their stacks), the assertion in the original issue is that it should be a constant-time operation to transfer between fibers, irrespective of the number of fibers in the application. However, the script in @ioquatix's last comment shows the rate of fiber-transfer operations dropping off as more fibers participate in the benchmark.
To debug this issue, I modified @ioquatix's script slightly to measure some performance counters with `perf` for each of the test iterations:
```ruby
# transfer_many_fibers.rb
require 'open3'
require 'json'
require 'csv'

# This program shows how the performance of Fiber.transfer degrades as the fiber
# count increases

MAX = 2**20
FIBERS = []

MAX.times do
  fiber = Fiber.new do
    loop do
      Fiber.yield
    end
  end
  fiber.resume
  FIBERS << fiber
end

REPEATS = 10_000_000
PERF_COUNTERS = [
  'page-faults', 'major-faults', 'minor-faults',
  'cpu-cycles:u', 'cpu-cycles:k', 'cache-misses', 'cache-references',
  'L1-dcache-load-misses', 'L1-dcache-loads', 'LLC-load-misses', 'LLC-loads',
  'dTLB-loads', 'dTLB-load-misses', 'dTLB-stores', 'dTLB-store-misses'
]

def with_perf_counters
  perf_cmd = [
    'perf', 'stat', '-e', PERF_COUNTERS.join(','),
    '-j', '-p', Process.pid.to_s, '-o', '/proc/self/fd/1'
  ]
  Open3.popen2(*perf_cmd) do |stdin, stdout, wait_thr|
    block_result = yield
    Process.kill :SIGINT, wait_thr.pid
    counters = {}
    stdout.each_line do |ln|
      parsed_ln = JSON.parse ln
      counters[parsed_ln['event']] = parsed_ln['counter-value'].to_i
    end
    wait_thr.value
    return [block_result, counters]
  end
end

def run(fibers, repeats = REPEATS)
  count = 0
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  while count < repeats
    fibers.each do |fiber|
      count += 1
      break if count >= repeats
      fiber.resume
    end
  end
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
end

# Print results as a CSV table:
puts CSV.generate_line [
  'fibers', 'elapsed time (s)', 'rate (t/s)', *PERF_COUNTERS
]

GC.disable
21.times do |i|
  GC.start
  limit = 2**i
  fibers = FIBERS[0...limit]
  elapsed, perf_counters = with_perf_counters { run(fibers) }
  rate = REPEATS / elapsed
  puts CSV.generate_line [
    limit, elapsed, rate, *PERF_COUNTERS.map { perf_counters[_1] },
  ]
end
```
This generates a table like the one in the previous comment, but also includes performance counter information from `perf`. I ran it as follows:
```
sudo swapoff --all # make sure we don't get page faults from swap
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enable # make sure THP doesn't cause faults
echo -1 | sudo tee /proc/sys/kernel/perf_event_paranoid # allow unprivileged users to run perf
echo 1000000000 | sudo tee /proc/sys/vm/max_map_count # boost mapping count
% ruby --version
ruby 3.3.0dev (2023-09-05T08:35:28Z master 5b146eb5a1) [x86_64-linux]
ruby transfer_many_fibers.rb
```
and obtained the following CSV data as a result:
```
fibers,elapsed time (s),rate (t/s),page-faults,major-faults,minor-faults,cpu-cycles:u,cpu-cycles:k,cache-misses,cache-references,L1-dcache-load-misses,L1-dcache-loads,LLC-load-misses,LLC-loads,dTLB-loads,dTLB-load-misses,dTLB-stores,dTLB-store-misses
1,1.4039703360003841,7122657.611476247,3,0,3,5859475245,12694559,3387,54987,165432,7146119413,650,16262,7145394321,4252,4601866193,7320
2,1.2886379399997168,7760131.600659063,1,0,1,5439823427,11350600,5964,55424,189086,6621658447,500,15200,6622892928,5438,4258132955,6574
4,1.258153214000231,7948157.576298307,0,0,0,5310310212,13089610,10449,68069,294892,6335968536,290,16860,6345118100,25997,4058742655,6802
8,1.253403425000215,7978277.225465762,0,0,0,5279962922,34494806,5446,51855,62076301,6192431913,900,16039,6179777983,44444,3939369328,6815
16,1.2689530819998254,7880512.007772858,0,0,0,5370474254,10763313,3844,47457,183940406,6102241668,765,16165,6103205916,26204,3910352255,6246
32,1.2346172180004942,8099676.445623648,0,0,0,5332621133,15694092,10154,64114,200680305,6063273925,462,15166,6068903340,38317,3883690997,11413
64,1.2704579209994336,7871177.655481325,0,0,0,5346794521,12588604,9447,56960,232002452,6059702410,638,15011,6058457551,17290,3878188080,8864
128,1.26315632800015,7916676.485983461,0,0,0,5343576341,12619817,30122,172656,254975275,6032357554,781,28931,6043258633,16401,3866060907,7708
256,1.3926949320002677,7180323.393320194,1,0,1,5946049052,16997016,232957,228631165,263358837,6037282294,1965,49843868,6038606356,14063,3867631144,7523
512,1.4688533830003507,6808031.431682799,0,0,0,6499307194,16462834,1098041,459635153,271677968,6042696127,5034,97729427,6041134724,1340316,3870403582,913873
1024,1.6829085890003626,5942093.388411512,0,0,0,7586548807,18469725,17423890,505379678,289076442,6049202803,558254,99886434,6055483248,10667121,3875545360,8659232
2048,2.458299625000109,4067852.3880096828,0,0,0,10614911431,34628829,125912094,586214908,293999643,6065587295,9686614,109485434,6074510477,12782326,3886950850,9275023
4096,3.8801752910003415,2577203.154505403,0,0,0,14523389421,54902097,332459456,565424689,294461987,6081202395,72540001,112161028,6082385994,18416444,3895766223,3191327
8192,4.739300166999783,2110016.172773987,0,0,0,16748731849,80070197,433472922,584586094,293148050,6082602630,93201829,114523043,6098901364,17956895,3898469812,4034203
16384,5.312199805000091,1882459.313858551,0,0,0,18067407945,89930922,494108236,594290763,292332035,6085798157,104621395,114785396,6094092659,17858483,3896705126,4349967
32768,5.714678685999388,1749879.6606884277,0,0,0,18954140751,57310406,520112168,600784408,292360264,6087857552,110260489,114733525,6089971523,17990727,3897881974,4273404
65536,5.85913169899959,1706737.5361621308,0,0,0,19225994828,58272894,527013755,601246452,292342290,6087916819,111582260,114827548,6091113925,18252983,3898391171,4059701
131072,5.90900381099982,1692332.6367440557,0,0,0,19223289411,60175735,530254916,600266994,292030029,6085240814,111810664,114751374,6084010298,18191953,3891766484,4108941
262144,5.8664713339994705,1704602.2098573,0,0,0,19168022353,66616904,531536818,601173015,291660876,6086962661,111537615,114434876,6090071817,17934617,3900912972,4454238
524288,5.871122640999602,1703251.7648613497,0,0,0,19239854173,74336965,531516412,601003077,291588311,6087162576,111521049,114462843,6092648529,17901966,3902304196,4484503
1048576,5.919364534000124,1689370.5299886826,0,0,0,19290077173,162339116,532576986,603934645,291400193,6088300500,111567224,114528793,6087885216,17794051,3898248975,4611646
```
### Page faults
Firstly, the number of page faults during the transfer loops is essentially zero. This is expected: each of these fibers was already resumed once at the top of the script, so their stacks are already faulted in, and transferring between them incurs no further page faults.
(Sidebar: my initial version of this test actually _did_ generate a large number of page faults. Eventually, I worked out it was because I was running the script as root. Ruby's `Process.spawn` normally uses `vfork(2)` to create the process to be exec'd, BUT it falls back to `fork(2)` when the process is privileged. When you use real fork to make a new process, Linux marks all of your writable memory pages read-only, so they can be shared with the just-forked child with copy-on-write semantics - even though the child calls `exec(2)` straight afterwards! The next time your process writes to such a page, it takes a page fault; the fault handler works out that the page is no longer shared (since the child exec'd) and marks it writable again. Anyway, I spent _hours_ trying to find out where these faults were coming from... glad I did though!)
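For the curious, here's a tiny standalone C demonstration of that copy-on-write effect (my own illustration, nothing to do with Ruby's code): it faults a mapping in, forks a child that exits immediately, and then counts the minor faults the parent takes when writing to its own pages again.
```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    size_t len = 16UL * 1024 * 1024;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memset(p, 1, len); /* fault every page in up front */

    long before = minor_faults();
    pid_t pid = fork();     /* parent's pages become read-only, shared COW */
    if (pid == 0) _exit(0); /* child goes away immediately */
    waitpid(pid, NULL, 0);

    memset(p, 2, len);      /* each page's first write takes a COW fault */
    printf("minor faults after fork + write: %ld\n", minor_faults() - before);
    return 0;
}
```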
### Cache misses
Based on this data, my assessment is that the scenarios with more fibers involved in the transfer dance are slower because they put more pressure on the CPU's various caches. The various cache counters are confusingly named, but I think the `cache-misses` metric counts memory accesses which missed in _all_ cache levels, based on [this Stack Overflow answer](https://stackoverflow.com/questions/55035313/how-does-linux-perf-calculate-the-cache-references-and-cache-misses-events).
It seems that the overall time spent on the benchmark is _very_ strongly correlated with the total `cache-misses`:
(See the attached cache_misses_vs_time.png, which plots elapsed time against total `cache-misses`.)
Based on this, I believe transferring between many fibers in this benchmark gets slower because each fiber has its stack in a different part of the system's memory, so with many fibers you're far less likely to get a CPU cache hit when loading a fiber's stack.
I don't think there's _really_ anything which can be done here; working on amounts of data (or instructions) which don't fit into the CPU caches is _always_ much slower than working on smaller sets which do. This benchmark is particularly pathological because all it does is flip between fibers; in a real application, the CPU caches would hopefully fill up with relevant data for whichever fiber is running. The toy benchmark below illustrates the effect.
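As a toy illustration of that working-set effect (invented sizes, and deliberately not a faithful model of Ruby's fiber switching), this C loop touches one cache line per "switch", round-robin across N buffers; the per-switch cost should jump once N × 16 KiB outgrows the CPU caches:
```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STACK_BYTES (16 * 1024) /* invented per-"fiber" working set */

/* Touch one cache line per hop, cycling across nstacks buffers. */
static double chase(int nstacks, long hops)
{
    char **stacks = malloc(nstacks * sizeof *stacks);
    for (int i = 0; i < nstacks; i++) stacks[i] = calloc(1, STACK_BYTES);

    struct timespec t0, t1;
    volatile char sink = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long h = 0; h < hops; h++)
        sink += stacks[h % nstacks][(h * 64) % STACK_BYTES];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    for (int i = 0; i < nstacks; i++) free(stacks[i]);
    free(stacks);
    (void)sink;
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    for (int n = 1; n <= 4096; n *= 8) /* 16 KiB up to 64 MiB total */
        printf("%5d buffers: %.2fs\n", n, chase(n, 100 * 1000 * 1000L));
    return 0;
}
```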
## Conclusion
I think it's worth experimenting with making a single `MAP_GROWSDOWN` mapping per fiber, to see if that improves fiber creation time by avoiding having the kernel repeatedly split the pool's mappings. However, I don't think there's a real problem with fiber transfers once the fibers have been created.
----------------------------------------
Bug #17263: Fiber context switch degrades with number of fibers, limit on number of fibers
https://bugs.ruby-lang.org/issues/17263#change-104638
* Author: ciconia (Sharon Rosner)
* Status: Closed
* Priority: Normal
* ruby -v: 2.7.1
* Backport: 2.5: UNKNOWN, 2.6: UNKNOWN, 2.7: UNKNOWN
----------------------------------------
I'm working on developing [Polyphony](https://github.com/digital-fabric/polyphony), a Ruby gem for writing
highly-concurrent Ruby programs with fibers. In the course of my work I have
come up against two problems using Ruby fibers:
1. Fiber context switching performance seems to degrade as the number of fibers
is increased. This happens both with `Fiber#transfer` and with
`Fiber#resume`/`Fiber.yield`.
2. The number of concurrent fibers that can exist at any time seems to be
limited. Once a certain number is reached (on my system this seems to be
31744 fibers), calling `Fiber#transfer` will raise a `FiberError` with the
message `can't set a guard page: Cannot allocate memory`. This is not due to
RAM being saturated. With 10000 fibers, my test program hovers at around 150MB
RSS (on Ruby 2.7.1).
Here's a program for testing the performance of `Fiber#transfer`:
```ruby
# frozen_string_literal: true

require 'fiber'

class Fiber
  attr_accessor :next
end

def run(num_fibers)
  count = 0

  GC.start
  GC.disable

  first = nil
  last = nil
  supervisor = Fiber.current

  num_fibers.times do
    fiber = Fiber.new do
      loop do
        count += 1
        if count == 1_000_000
          supervisor.transfer
        else
          Fiber.current.next.transfer
        end
      end
    end

    first ||= fiber
    last.next = fiber if last
    last = fiber
  end

  last.next = first

  t0 = Time.now
  first.transfer
  elapsed = Time.now - t0

  rss = `ps -o rss= -p #{Process.pid}`.to_i
  puts "fibers: #{num_fibers} rss: #{rss} count: #{count} rate: #{count / elapsed}"
rescue Exception => e
  puts "Stopped at #{count} fibers"
  p e
end

run(100)
run(1000)
run(10000)
run(100000)
```
With Ruby 2.6.5 I'm getting:
```
fibers: 100 rss: 23212 count: 1000000 rate: 3357675.1688139187
fibers: 1000 rss: 31292 count: 1000000 rate: 2455537.056439736
fibers: 10000 rss: 127388 count: 1000000 rate: 954251.1674325482
Stopped at 22718 fibers
#<FiberError: can't set a guard page: Cannot allocate memory>
```
With Ruby 2.7.1 I'm getting:
```
fibers: 100 rss: 23324 count: 1000000 rate: 3443916.967616508
fibers: 1000 rss: 34676 count: 1000000 rate: 2333315.3862491543
fibers: 10000 rss: 151364 count: 1000000 rate: 916772.1008060966
Stopped at 31744 fibers
#<FiberError: can't set a guard page: Cannot allocate memory>
```
With ruby-head I get an almost identical result to that of 2.7.1.
As you can see, the performance degradation is similar in all three versions
of Ruby, going from ~3.4M context switches per second for 100 fibers to less
than 1M context switches per second for 10000 fibers. Running with 100000 fibers
fails to complete.
Here's a program for testing the performance of `Fiber#resume/Fiber.yield`:
```ruby
# frozen_string_literal: true

require 'fiber'

class Fiber
  attr_accessor :next
end

# This program shows how the performance of Fiber#resume/Fiber.yield degrades
# as the fiber count increases

def run(num_fibers)
  count = 0

  GC.start
  GC.disable

  fibers = []
  num_fibers.times do
    fibers << Fiber.new { loop { Fiber.yield } }
  end

  t0 = Time.now
  while count < 1000000
    fibers.each do |f|
      count += 1
      f.resume
    end
  end
  elapsed = Time.now - t0

  puts "fibers: #{num_fibers} count: #{count} rate: #{count / elapsed}"
rescue Exception => e
  puts "Stopped at #{count} fibers"
  p e
end

run(100)
run(1000)
run(10000)
run(100000)
```
With Ruby 2.7.1 I'm getting the following output:
```
fibers: 100 count: 1000000 rate: 3048230.049946255
fibers: 1000 count: 1000000 rate: 2362235.6455160403
fibers: 10000 count: 1000000 rate: 950251.7621725246
Stopped at 21745 fibers
#<FiberError: can't set a guard page: Cannot allocate memory>
```
As I understand it, theoretically at least, switching between fibers should have
a constant cost in terms of CPU cycles, irrespective of the number of fibers
currently existing in memory. I am completely ignorant of the implementation
details of Ruby fibers, so at least for now I don't have any idea where this
problem is coming from.
---Files--------------------------------
clipboard-202308251514-grqb1.png (81.3 KB)
clipboard-202308251514-r7g4l.png (81 KB)
clipboard-202308251538-kmofk.png (13.8 KB)
flamegraph_make_many_fibers.png (471 KB)
cache_misses_vs_time.png (42.5 KB)
--
https://bugs.ruby-lang.org/
______________________________________________
ruby-core mailing list -- ruby-core@ml.ruby-lang.org
To unsubscribe send an email to ruby-core-leave@ml.ruby-lang.org
ruby-core info -- https://ml.ruby-lang.org/mailman3/postorius/lists/ruby-core.ml.ruby-lang.org/