From: vmakarov@...
Date: 2016-03-07T02:41:14+00:00
Subject: [ruby-core:74188] [Ruby trunk Feature#12142] Hash tables with open addressing

Issue #12142 has been updated by Vladimir Makarov.


Koichi Sasada wrote:
> On 2016/03/05 1:31, vmakarov@redhat.com wrote:
> > So the packed element approach could be implemented too for the proposed
> > implementation.
>
> I agree.
>
> > I don't see it is necessary unless the hash tables will
> > be again used for method tables where most of them are small.
>
> As some people said, there are many small Hash objects, like that:
>
>   def foo(**opts)
>     do_something opts[:some_option] || default_value
>   end
>
>   foo(another_option: customized_value)
>
> BTW, from Ruby 2.2, most of passing keyword parameters does not create
> Hash object. In above case, a hash object is created explicitly (using
> `**` a keyword hash parameter).
>

To be honest, I did not know that some parameters are passed as hash objects. I am not a Ruby programmer, but I am learning.

> > Hash
> > tables will be faster than the used binary search. But it is not a
> > critical code (at least for benchmarks in MRI) as we search method table
> > once for a method and all other calls of the method skips this search.
> > I am sure you know it much better.
>
> Maybe we continue to use id_table for method table, or something. It is
> specialized for ID key table.
>
> BTW (again), I (intuitively) think linear search is faster than using
> Hash table on small elements. We don't need to touch entries table.
> (But no evidence I have.)
>
> For example, assume 8 elements.
> One element consume 24B, so that we need to load 8 * 24B = 192B on worst
> case (no entry) with linear search. 3 L1 cache misses on 64B L1 cache CPU.
>
> However, if we collect hash values (making a hash value array), we only
> need to load 8 * 8B = 64B.
>
> ... sorry, it is not simple :p
>

I agree it is not simple, especially if we take into account the effect of the other parts of the program on the caches (how the code running before, after, and in parallel with the given code accesses memory). In general the environment effect can be important. For example, I have read a lot of research papers about compiler optimizations. They always claim big or modest improvements, but when you try the same algorithm in the GCC environment (not in a toy compiler) the effect is frequently smaller, and in rare cases it is even the opposite (worse performance). Therefore, something definite can be said only after the final implementation.

If I correctly understood the MRI VM code, an id table element usually needs to be searched only once; after that the found value is stored in the corresponding VM call instruction. The search is done again only if an object method or a class method is changed. Although this pattern ("monkey patching") exists, I don't consider it frequent. So I conclude that the search in the id table is not critical (maybe I am wrong). For non-critical code, in my opinion the best strategy is to minimize cache disturbance for the other parts of the program. From this point of view, linear or binary search is probably a good approach, as the data structure used has a minimal footprint.
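To make the cache arithmetic above more concrete, here is a rough sketch (just an illustration with made-up names -- it is not MRI code and not part of the patch) of a small packed table whose hash values are collected into a separate array and scanned linearly, so that a lookup miss touches only the 8 * 8B = 64B of hash values instead of the full 8 * 24B = 192B of entries:

~~~
#include <stddef.h>
#include <stdint.h>

#define SMALL_MAX 8

struct small_entry {
    uint64_t hash;
    void *key;
    void *value;
};

struct small_table {
    uint64_t hashes[SMALL_MAX];            /* packed hash values, scanned first */
    struct small_entry entries[SMALL_MAX]; /* the full records, in insertion order */
    size_t len;
};

/* Linear search: compare the cached hash values first and touch a full
   entry (and call the key comparison) only on a hash match. */
static void *
small_table_find(const struct small_table *t, uint64_t hash, const void *key,
                 int (*key_eq)(const void *, const void *))
{
    for (size_t i = 0; i < t->len; i++) {
        if (t->hashes[i] == hash && key_eq(t->entries[i].key, key))
            return t->entries[i].value;
    }
    return NULL; /* a miss scans only the small hash array */
}
~~~

Whether such a layout actually beats a small open-addressing table in practice of course depends on the surrounding code and its cache behaviour, as discussed above.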
> > Speaking of measurements. Could you recommend credible benchmarks for
> > the measurements. I am in bechmarking business for a long time and I
> > know benchmarking may be an evil. It is possible to create benchmarks
> > which prove opposite things. In compiler field, we use
> > SPEC2000/SPEC2006 which is a consensus of most parties involved in the
> > compiler business. Do Ruby have something analogous?
>
> as other people, i agree. and Ruby does not have enough benchmark :(
> I think discourse benchmark can help.
>

As far as I know, there are a lot of applications written in Ruby. Maybe it is possible to adapt a few non-IO- or non-network-bound programs for benchmarking. It would be really useful. The current MRI benchmarks are micro-benchmarks; they don't show the bigger picture. Some Red Hat people recommended that I use fluentd for benchmarking, but I am not sure about this.

> > In the proposed implementation, the table size can be decreased. So in
> > some way it is collected.
> >
> > Reading the responses to all of which I am going to answer, I see people
> > are worrying about memory usage. Smaller memory usage is important
> > for better code locality too (although a better code locality does not mean a
> > faster code automatically -- the access patter is important too). But
> > I consider the speed is the first priority these days (especially when memory
> > is cheap and it will be much cheaper with new coming memory
> > technology).
> >
> > In many cases speed is achieved by methods which requires more memory.
> > For example, Intel compiler generates much bigger code than GCC to
> > achieve better performance (this is most important competitive
> > advantage for their compiler).
>
> Case by case.
> For example, Heroku smallest dyno only provides 512MB.
>
> >> I think goods overcomes bads.
> >>
> >
> > Thanks, I really appreciate your opinion. I'll work on the found
> > issues. Although I am a bit busy right now with work on GCC6 release.
> > I'll have more time to work on this in April.
>
> So great.
> I hope GCC6 released in success.
>
> >> * I always confuse about "open addressing" == "closed hashing" https://en.wikipedia.org/wiki/Open_addressing
> >
> > Yes, the term is confusing but it was used since 1957 according to Knuth.
> > I need to complain old heroes.

Thank you for providing some new and useful info to me.
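Since the terminology came up, here is also a rough sketch (hypothetical names and simple linear probing only, not the actual st.c code from the patch, which also needs deleted-bin markers, resizing, and a better probe sequence) of the combination described in this feature: elements stored in an array in inclusion order, plus a bins array of indices used for open-addressing lookup:

~~~
#include <stddef.h>
#include <stdint.h>

#define EMPTY_BIN SIZE_MAX

struct oa_entry {
    uint64_t hash;
    void *key;
    void *value;
};

struct oa_table {
    struct oa_entry *entries; /* elements in insertion (inclusion) order */
    size_t num_entries;
    size_t *bins;             /* indices into entries, or EMPTY_BIN */
    size_t num_bins;          /* a power of two, never completely full */
};

/* Open-addressing lookup: probe the bins array and follow at most one
   index into the entries array; no collision lists, no pointer chasing. */
static void *
oa_find(const struct oa_table *t, uint64_t hash, const void *key,
        int (*key_eq)(const void *, const void *))
{
    size_t mask = t->num_bins - 1;
    for (size_t i = hash & mask; ; i = (i + 1) & mask) {
        size_t idx = t->bins[i];
        if (idx == EMPTY_BIN)
            return NULL; /* reached an empty bin: the key is not present */
        const struct oa_entry *e = &t->entries[idx];
        if (e->hash == hash && key_eq(e->key, key))
            return e->value;
    }
}
~~~

The point of this layout is that traversal in inclusion order is just a walk over the entries array, while a lookup touches only the compact bins array and (usually) one entry.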
----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57330

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee: 
----------------------------------------
~~~
Hello, the following patch contains a new implementation of hash tables (major files st.c and include/ruby/st.h).

Modern processors have several levels of cache. Usually, the CPU reads one or a few cache lines from memory (or from another level of cache), so the CPU is much faster at reading data stored close to each other. The current implementation of Ruby hash tables does not fit modern processor cache organization well, which requires better data locality for faster program speed.

The new hash table implementation achieves better data locality mainly by

  o switching to open-addressing hash tables for access by keys.
    Removing the hash collision lists lets us avoid *pointer chasing*,
    a common problem that produces bad data locality.  I see a tendency
    to move from chaining hash tables to open-addressing hash tables
    due to their better fit to modern CPU memory organization.  CPython
    recently made such a switch
    (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c).
    PHP did this a bit earlier
    (https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html).
    GCC has widely used such hash tables
    (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) internally
    for more than 15 years.

  o removing the doubly linked lists and putting the elements into an
    array for access by their inclusion order.  That also removes the
    pointer chasing on the doubly linked lists used for traversing
    elements in their inclusion order.

A more detailed description of the proposed implementation can be found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks for the two most widely used targets: x86-64 (Intel 4.2GHz i7-4790K) and ARM (Exynos 5410 - 1.6GHz Cortex-A15):

  make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY=''

Here are the results for x86-64:

  hash_aref_dsym       1.094
  hash_aref_dsym_long  1.383
  hash_aref_fix        1.048
  hash_aref_flo        1.860
  hash_aref_miss       1.107
  hash_aref_str        1.107
  hash_aref_sym        1.191
  hash_aref_sym_long   1.113
  hash_flatten         1.258
  hash_ident_flo       1.627
  hash_ident_num       1.045
  hash_ident_obj       1.143
  hash_ident_str       1.127
  hash_ident_sym       1.152
  hash_keys            2.714
  hash_shift           2.209
  hash_shift_u16       1.442
  hash_shift_u24       1.413
  hash_shift_u32       1.396
  hash_to_proc         2.831
  hash_values          2.701

The average performance improvement is more than 50%. The ARM results are analogous -- no performance degradation on any benchmark and about the same average improvement.

The patch can be seen at

  https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch

or, in a less convenient way, as pull request changes:

  https://github.com/ruby/ruby/pull/1264/files

This is my first patch for MRI, and maybe my proposal and implementation have pitfalls. But I am keen to learn and to work on the inclusion of this code into MRI.
~~~



--
https://bugs.ruby-lang.org/
Unsubscribe: