From: vmakarov@...
Date: 2016-03-07T19:44:41+00:00
Subject: [ruby-core:74200] [Ruby trunk Feature#12142] Hash tables with open addressing

Issue #12142 has been updated by Vladimir Makarov.

Yura Sokolov wrote:
> Vladimir, you may borrow ideas from my patch.

Thank you, Yura. I will definitely investigate them and the possibility of including them in the final patch.

*Still, the community should decide about 32-bit hashes and indexes*, because they are an important part of your patch and probably a big reason for your table size and speed improvements.

Here are the results on a 4.2GHz i7-4790K for 64- and 32-bit indexes and hashes on your branch. I've calculated the speedup with an awk script (as benchmark-each is broken):

```
                        64      32   Speedup
hash_aref_dsym       0.256   0.218   1.17431
hash_aref_dsym_long  2.887   2.765   1.04412
hash_aref_fix        0.267   0.232   1.15086
hash_aref_flo        0.073   0.045   1.62222
hash_aref_miss       0.355   0.315   1.12698
hash_aref_str        0.327   0.291   1.12371
hash_aref_sym        0.250   0.209   1.19617
hash_aref_sym_long   0.351   0.320   1.09687
hash_flatten         0.238   0.199   1.19598
hash_ident_flo       0.064   0.034   1.88235
hash_ident_num       0.253   0.218   1.16055
hash_ident_obj       0.236   0.220   1.07273
hash_ident_str       0.235   0.215   1.09302
hash_ident_sym       0.249   0.212   1.17453
hash_keys            0.197   0.141   1.39716
hash_shift           0.044   0.018   2.44444
hash_shift_u16       0.077   0.049   1.57143
hash_shift_u24       0.073   0.046   1.58696
hash_shift_u32       0.075   0.046   1.63043
hash_to_proc         0.042   0.014   3.00000
hash_values          0.196   0.151   1.29801
```

My opinion is the same -- we should stay with 64-bit for future headroom and, if possible, fix the slowness of working with huge tables. You have shown many times that most languages permit at most 2^31 elements, but there are probably exceptions. I know only one besides MRI -- Dino (http://dino-lang.github.io). I use it in my research on the performance of dynamic languages.
Here are examples of working with huge tables on an Intel machine with 128GB of memory:

```
bash-4.3$ /usr/bin/time ./dino -c 'var t = tab[];for (var i = 0;i < 100_000_000; i++) t[i]=i;'
11.60user 3.10system 0:14.70elapsed 100%CPU (0avgtext+0avgdata 7788892maxresident)k
0inputs+0outputs (0major+2549760minor)pagefaults 0swaps
```

Dino uses a worse hash table implementation than the proposed tables (I have wanted to rewrite it for a long time). As I wrote, MRI took 7 min on the same test with 100_000_000 elements on the same machine. So I guess it is possible to make MRI work faster with huge hash tables too.

Here is an example with about 2^30 elements:

```
bash-4.3$ /usr/bin/time ./dino -c 'var t = tab[];for (var i = 0;i < 1_000_000_000; i++) t[i]=i;'
113.97user 60.58system 2:54.57elapsed 99%CPU (0avgtext+0avgdata 89713628maxresident)k
0inputs+0outputs (0major+39319119minor)pagefaults 0swaps
```

128GB is not enough for 2^31 elements, but I believe 256GB will be. It is still possible to work with about 2^31 elements on a 128GB machine if we create the table from a vector, avoiding the several table rebuildings that happen in the above tests:

```
bash-4.3$ /usr/bin/time ./dino -c 'var v=[2_000_000_000:1],t=tab(v),s=0,i; for (i in t)s+=i;putln(s);'
1999999999000000000
78.02user 38.70system 1:56.85elapsed 99%CPU (0avgtext+0avgdata 96957920maxresident)k
```

By the way, a value in Dino (16B) is twice as big as in MRI (8B).
----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57343

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee:
----------------------------------------
~~~
Hello, the following patch contains a new implementation of hash tables
(major files st.c and include/ruby/st.h).

Modern processors have several levels of cache. Usually, the CPU reads
one or a few cache lines at a time from memory (or from another cache
level), so it is much faster at reading data stored close together.
The current implementation of Ruby hash tables does not fit modern
processor cache organization well; it needs better data locality for
faster program speed.

The new hash table implementation achieves better data locality mainly by

o switching to open addressing hash tables for access by keys.
  Removing the hash collision lists lets us avoid *pointer chasing*, a
  common cause of bad data locality. There is a clear tendency to move
  from chaining hash tables to open addressing hash tables because the
  latter fit modern CPU memory organizations better. CPython recently
  made such a switch
  (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c).
  PHP did it a bit earlier
  (https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html).
  GCC has widely used such hash tables internally
  (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) for more than
  15 years.

o removing the doubly linked lists and putting the elements into an
  array for accessing elements in their inclusion order. This also
  removes the pointer chasing on the doubly linked lists used for
  traversing elements in their inclusion order.

A more detailed description of the proposed implementation can be found
in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks
for the two most widely used targets: x86-64 (Intel 4.2GHz i7-4790K)
and ARM (Exynos 5410 -- 1.6GHz Cortex-A15):

  make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY=''

Here are the results for x86-64:

  hash_aref_dsym       1.094
  hash_aref_dsym_long  1.383
  hash_aref_fix        1.048
  hash_aref_flo        1.860
  hash_aref_miss       1.107
  hash_aref_str        1.107
  hash_aref_sym        1.191
  hash_aref_sym_long   1.113
  hash_flatten         1.258
  hash_ident_flo       1.627
  hash_ident_num       1.045
  hash_ident_obj       1.143
  hash_ident_str       1.127
  hash_ident_sym       1.152
  hash_keys            2.714
  hash_shift           2.209
  hash_shift_u16       1.442
  hash_shift_u24       1.413
  hash_shift_u32       1.396
  hash_to_proc         2.831
  hash_values          2.701

The average performance improvement is more than 50%.
The ARM results are analogous -- no performance degradation on any
benchmark, and about the same average improvement.

The patch can be seen at

https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch

or, less conveniently, as pull request changes:

https://github.com/ruby/ruby/pull/1264/files

This is my first patch for MRI, and maybe my proposal and implementation
have pitfalls. But I am keen to learn and to work on the inclusion of
this code into MRI.
~~~

---Files--------------------------------
0001-st.c-use-array-for-storing-st_table_entry.patch (46.7 KB)

--
https://bugs.ruby-lang.org/

Unsubscribe: