From: Daniel Ferreira
Date: 2018-01-25T01:13:12+00:00
Subject: [ruby-core:85087] Re: [Ruby trunk Feature#13618][Assigned] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid

Hi Eric,

I've been reading this issue and I find it fascinating. Let me play the role of the Ruby developer who is trying to better understand Ruby's asynchronous capabilities.

Every time I read threads (conversations) like this one about the pros and cons of Fibers vs. Threads, I tend to think: stay away from it. When people like Koichi write comments like this:

> "But most (many? some? a few?) of ruby programmer (including me) can not write correct code I believe."

or Yusuke Endoh:

> "Thread is considered harmful. Casual Rubyists (including I) had better not use it."

what do these comments make us mere mortals feel? I will speak for myself: when I read such a line, I tend to step away. So yes, this situation makes me write single-threaded code as much as possible. I rely on libraries to handle asynchronous behaviour for me, and in particular I rely extensively on the actor model. I doubt I will change my mind unless I start to read that Thread is good to use, or that Fiber is good to use. When this whole conversation mentions corner cases that still have problems, that is a no-go for me.

IMHO, to add yet another Thread-like feature, it should be "the killer feature": the one that lets us say to the whole community, "Hey, use this thing, because async is finally a paradise in Ruby land." If we don't have that, it will be just another Thread/Fiber nightmare for the very few who accept the overhead of dealing with all the "buts". And for the record, I do use async libraries, but I don't feel confident about them either, knowing that Ruby core is not reliable in itself. Production code in the enterprise world is not something to mess around with.
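To make concrete what I mean by relying on the actor model: state is confined to a single thread, and the only way to touch it is through a mailbox, so no Mutex is ever needed. A minimal hand-rolled sketch (the `CounterActor` name and the `:incr`/`:get`/`:stop` message protocol are just for illustration, not from any library):

```ruby
require 'thread'

# Minimal actor sketch: the counter lives inside one thread and is
# only reachable through the Queue mailbox, so no locking is needed.
class CounterActor
  def initialize
    @mailbox = Queue.new
    @thread = Thread.new do
      count = 0
      loop do
        msg, reply = @mailbox.pop   # block until a message arrives
        case msg
        when :incr then count += 1
        when :get  then reply.push(count)  # answer via a reply queue
        when :stop then break
        end
      end
    end
  end

  def incr
    @mailbox.push([:incr])
  end

  def get
    reply = Queue.new
    @mailbox.push([:get, reply])
    reply.pop  # wait for the actor's answer
  end

  def stop
    @mailbox.push([:stop])
    @thread.join
  end
end

actor = CounterActor.new
10.times { actor.incr }
puts actor.get  # => 10
actor.stop
```

Because the mailbox is a FIFO with a single consumer, the `:get` is guaranteed to be processed after all ten `:incr` messages, which is exactly the ordering guarantee that makes this style easy to reason about.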
For me, Ruby core desperately needs to change this situation, so I really hope your work will be the answer to all of this. If it is, then it fits into Ruby core like a glove, IMO. If it is not, we will be much worse off, because instead of two walking dead we will have three; a 50% increase is a lot in this domain, and it turns the whole thing into a joke.

So, can you please explain to us what peace of mind we will gain from this new "light thread" in our everyday work?

Thank you very much, and keep up the excellent work. I especially appreciate the care you take in passing on your knowledge of the subject. Really helpful and insightful.

Note: Your last two messages are not part of the issue in Redmine. I hope my message will be there!

On Wed, Jan 24, 2018 at 10:01 PM, Eric Wong wrote:
>> Thinking about this even more; I don't think it's possible to
>> preserve round-robin recv_io/accept behavior I want from
>> blocking on native threads when sharing descriptors between
>> multiple processes.
>
> ```
> The following example hopefully clarifies why I care about
> maintaining blocking I/O behavior in some places despite relying
> on non-blocking I/O for light-weight threading.
>
> # With non-blocking accept; PIDs do not share fairly:
> $ NONBLOCK=1 ruby fairness_test.rb
> PID	accept count
> 5240	55
> 5220	42
> 5216	36
> 5242	109
> 5230	57
> 5208	26
> 5227	53
> 5212	26
> 5223	46
> 5236	43
> total: 493
>
> # With blocking accept on Linux; each process gets a fair share:
> $ NONBLOCK=0 ruby fairness_test.rb
> PID	accept count
> 5271	50
> 5278	50
> 5275	50
> 5282	49
> 5286	49
> 5290	49
> 5295	49
> 5298	49
> 5303	49
> 5306	49
> total: 493
>
> For servers which only handle one client per process (e.g.
> Apache prefork), unfairness is preferable because the busiest
> process will be hottest in CPU cache.
>
> For everything else that serves multiple clients in a single
> process, fair sharing is preferable. This will apply to Guilds
> in the future, too.
>
> More information about this behavior I rely on is here:
> http://www.citi.umich.edu/projects/linux-scalability/reports/accept.html
>
> require 'socket'
> require 'thread'
> require 'io/nonblock'
>
> Thread.abort_on_exception = STDOUT.sync = true
> host = '127.0.0.1'
> srv = TCPServer.new(host, 0)
> srv.nonblock = true if ENV['NONBLOCK'].to_i != 0
> port = srv.addr[1]
> pipe = IO.pipe
> nr = 10
> running = true
> trap(:INT) { running = false }
> pids = nr.times.map do
>   fork do
>     pipe[0].close
>     q = Queue.new # per-process Queue
>     Thread.new do # dedicated accept thread
>       q.push(srv.accept) while running
>       q.push(nil)
>     end
>     while accepted = q.pop
>       # n.b. a real server would do processing here, maybe spawning
>       # a new Thread/Fiber/Threadlet
>       pipe[1].write("#$$ #{accepted.fileno}\n")
>       accepted.close
>     end
>   end
> end
> pipe[1].close
>
> sleep(1) # wait for children to start
> cleanup = SizedQueue.new(1024)
> Thread.new do
>   cleanup.pop.close while true
> end
>
> Thread.new do
>   loop do
>     cleanup.push(TCPSocket.new(host, port))
>     sleep(0.01)
>   rescue => e
>     break
>   end
> end
> Thread.new { sleep(5); running = false }
>
> counts = Hash.new(0)
> at_exit do
>   tot = 0
>   puts "PID\taccept count"
>   counts.each { |pid, n| puts "#{pid}\t#{n}"; tot += n }
>   puts "total: #{tot}"
> end
> case line = pipe[0].gets
> when /\A(\d+) /
>   counts[$1] += 1
> else
>   running = false
>   Process.waitall
> end while running
> ```
>
> Unsubscribe:
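P.S. For anyone following along who has not used non-blocking accept before: the `NONBLOCK=1` mode in Eric's script means `accept` no longer sleeps in the kernel; instead it raises when no connection is pending and the caller must select and retry. That retry dance is why the kernel's wake-one fairness for blocking accept is lost. A minimal sketch of it (my own illustration, not Eric's code):

```ruby
require 'socket'

# Bind to an ephemeral port on loopback for the demonstration.
srv = TCPServer.new('127.0.0.1', 0)
port = srv.addr[1]

# Connect a client from another thread so accept has something to take.
client_thread = Thread.new { TCPSocket.new('127.0.0.1', port) }

accepted =
  begin
    srv.accept_nonblock          # raises if no connection is pending
  rescue IO::WaitReadable
    IO.select([srv])             # sleep until the listen socket is readable;
    retry                        # with many processes selecting on the same
                                 # socket, several may wake for one connection
  end

puts accepted.class  # => TCPSocket
accepted.close
client_thread.value.close
srv.close
```

With a plain blocking `srv.accept`, all of this collapses into a single call and (on Linux) the kernel hands out connections to sleeping processes one at a time, which is the round-robin behavior the fairness test demonstrates.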