From: funny.falcon@...
Date: 2016-03-05T22:08:44+00:00
Subject: [ruby-core:74170] [Ruby trunk Feature#12142] Hash tables with open addressing

Issue #12142 has been updated by Yura Sokolov.

Vladimir, you act as if I said rubbish or am trying to cheat you. It makes me angry.

You wrote:

> I believe your code above is incorrect for tables of sizes of power of 2.
> The function should look like h(k,i) = (h(k) + c1 * i + c2 * i^2) mod m,
> where "c1 = c2 = 1/2 is a good choice". You can not simplify it.

And you cited Wikipedia:

> With the exception of the triangular number case for a power-of-two-sized hash table,
> there is no guarantee of finding an empty cell once the table gets more than half full

But a couple of lines above you cited my own quote from Wikipedia:

> This leads to a probe sequence of h(k), h(k)+1, h(k)+3, h(k)+6, ...
> where the values increase by 1, 2, 3, ...

Do you read carefully before answering? **It is** an implementation of the **triangular number** sequence: exactly the quadratic probing sequence which walks across all elements of a `2^n` table. You can even recall the arithmetic: https://en.wikipedia.org/wiki/Arithmetic_progression

````
(1/2)*i + (1/2)*i*i = i*(i+1)/2 = 1 + 2 + 3 + ... + (i-1) + i
````

Or use Ruby to check your sentence:

````
2.1.5 :002 > p, d = 0, 1; 8.times.map{ a=p; p=(p+d)&7; d+=1; a}
 => [0, 1, 3, 6, 2, 7, 5, 4]
2.1.5 :008 > p = 0; 8.times.map{|i| a=p+ 0.5*i + 0.5*i*i; a.to_i&7}
 => [0, 1, 3, 6, 2, 7, 5, 4]
````

If you still don't believe me, read this: https://en.wikipedia.org/wiki/Triangular_number

Google Dense/Sparse hash uses this sequence with tables of `2^n` size:
https://github.com/sparsehash/sparsehash/blob/75ada1728b8b18ce80d4052b55b0d16cc6782f4b/doc/implementation.html#L153
https://github.com/sparsehash/sparsehash/blob/a61a6ba7adbc4e3a7545843a72c530bf35604dae/src/sparsehash/internal/densehashtable.h#L650-L651
https://github.com/sparsehash/sparsehash/blob/a61a6ba7adbc4e3a7545843a72c530bf35604dae/src/sparsehash/internal/densehashtable.h#L119

khash now uses quadratic probing with tables of `2^n` size:
https://github.com/lh3/minimap/blob/master/khash.h#L51-L53
https://github.com/lh3/minimap/blob/master/khash.h#L273

Please check yourself before saying another man is mistaken. Or at least say it with less confidence.

> Also as I wrote before your proposal means just throwing away the biggest part of hash value even if it is a 32-bit hash.
> I don't think ignoring the big part of the hash is a good idea as it probably worsens collision avoiding.

Please read about the Birthday Paradox **carefully**: https://en.wikipedia.org/wiki/Birthday_problem

Yes, it will certainly increase the probability of hash value collisions, but only for *very HUGE* hash tables. And it doesn't affect the length of a collision chain (because `2^n` tables use only the low bits).
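A quick illustration of that last point (my own sketch, not code from either implementation): with a `2^n` table the bucket index is `hash & (2^n - 1)`, so only the low bits decide which collision chain a key lands in; the stored full hash value merely lets the table skip some equality checks.

````
# Bucket selection in a 2**n table uses only the low bits of the hash.
# Two distinct full hash values with equal low bits share a chain anyway,
# so truncating the stored hash cannot lengthen any collision chain.
mask = (1 << 20) - 1   # a table with 2**20 buckets
h1 = 0xDEADBEEF        # two different 32-bit hash values...
h2 = 0xFEEDBEEF
raise unless h1 != h2
raise unless (h1 & mask) == (h2 & mask)  # ...that map to the same bucket
puts "same bucket: #{(h1 & mask).to_s(16)}"
````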
It just affects the probability of an excess call to the equality check on a value, and not by much: http://math.stackexchange.com/a/35798

````
> N = (2**32).to_f
 => 4294967296.0
> n = 100_000_000.0
 => 100000000.0
> collisions = n*(1-(1-1/N)**(n-1))
 => 2301410.50385877
> collisions / n
 => 0.0230141050385877
> n = 300_000_000.0
 => 300000000.0
> collisions = n*(1-(1-1/N)**(n-1))
 => 20239667.356876057
> collisions / n
 => 0.06746555785625352
````

In other words, only 2% of elements have a full hash collision in a Hash with 100_000_000 elements, and 7% with 300_000_000 elements.

Can you measure how much time insertion of 100_000_000 elements into a Hash (current or your implementation) will take, and how much memory it will consume? Int=>Int? String=>String?

At my work we use huge in-memory hash tables (hundreds of millions of elements) (a custom in-memory db, not Ruby), and they use a 32-bit hashsum. No problems at all.

> Also about storing only part of the hash. Can it affect rubygems? It may be a part of API. But I don't know anything about it

Gems ought to be recompiled, but no code change is needed.

> I routinely use a few machines for my development with 128GB memory.

But you wouldn't use a Ruby process which consumes 100GB of memory using Ruby Hash. Otherwise you'd get into big trouble (with GC, for example). If you need to store such an amount of data within a Ruby process, you'd better make your own data structure. I've made one for my needs:
https://rubygems.org/gems/inmemory_kv
https://github.com/funny-falcon/inmemory_kv

It can also store only `2^31` elements, but I hardly believe you will ever store more inside a Ruby process.

>> Could you imagine that Hash with 1M elements starts to rebuild?
> I can. The current tables do it all the time already and it means traversing all the elements as in the proposed tables case.

The current st_table rebuilds only if its size grows.
Your table will rebuild even if its size has not changed much, but elements are inserted and deleted repeatedly (1 add, 1 delete, 1 add, 1 delete).

>> May be it is better to keep st_index_t prev, next in struct st_table_entry (or struct st_table_elements as you called it) ?
> Sorry, I can not catch what do you mean. What prev, next should be used for.
> How can it avoid table rebuilding which always mean traversing all elements to find a new entry or bucket for the elements.

Yes, it is inevitable to maintain a free list for finding a free element. But `prev,next` indices would allow inserting new elements into random places (deleted before), because iteration would follow these pseudo-pointers.

Perhaps it is better to make a separate LRU hash structure in the standard library instead, and keep the Hash implementation as you suggest. I really like this approach, but it means Ruby would have two hash tables: one for Hash and one for LRU.

----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57313

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee:
----------------------------------------
~~~
Hello, the following patch contains a new implementation of hash tables (major files st.c and include/ruby/st.h).

Modern processors have several levels of cache. Usually, the CPU reads one or a few lines of the cache from memory (or another level of cache). So the CPU is much faster at reading data stored close to each other. The current implementation of Ruby hash tables does not fit well with modern processor cache organization, which requires better data locality for faster program speed.

The new hash table implementation achieves better data locality mainly by

o switching to open addressing hash tables for access by keys. Removing hash collision lists lets us avoid *pointer chasing*, a common problem that produces bad data locality.
I see a tendency to move from chaining hash tables to open addressing hash tables because of their better fit with modern CPU memory organizations. CPython recently made such a switch (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c). PHP did this a bit earlier: https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html. GCC has used such hash tables widely (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) internally for more than 15 years.

o removing the doubly linked lists and putting the elements into an array for accessing elements in their inclusion order. That also removes the pointer chasing on the doubly linked lists used for traversing elements in their inclusion order.

A more detailed description of the proposed implementation can be found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks for the two most widely used targets, x86-64 (Intel 4.2GHz i7-4790K) and ARM (Exynos 5410 - 1.6GHz Cortex-A15):

make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY=''

Here are the results for x86-64:

hash_aref_dsym       1.094
hash_aref_dsym_long  1.383
hash_aref_fix        1.048
hash_aref_flo        1.860
hash_aref_miss       1.107
hash_aref_str        1.107
hash_aref_sym        1.191
hash_aref_sym_long   1.113
hash_flatten         1.258
hash_ident_flo       1.627
hash_ident_num       1.045
hash_ident_obj       1.143
hash_ident_str       1.127
hash_ident_sym       1.152
hash_keys            2.714
hash_shift           2.209
hash_shift_u16       1.442
hash_shift_u24       1.413
hash_shift_u32       1.396
hash_to_proc         2.831
hash_values          2.701

The average performance improvement is more than 50%. The ARM results are analogous: no performance degradation on any benchmark, and about the same average improvement.

The patch can be seen at https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch or, less conveniently, as pull request changes: https://github.com/ruby/ruby/pull/1264/files

This is my first patch for MRI, and maybe my proposal and implementation have pitfalls.
But I am keen to learn and work on the inclusion of this code into MRI.
~~~

--
https://bugs.ruby-lang.org/
Unsubscribe:
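To make the design debated in this thread concrete, here is a toy Ruby model. It is my own illustration, not the patch's actual st.c code: open addressing with the triangular-number probe sequence over a `2^n` bucket array, plus a separate entries array that preserves insertion order without pointer chasing (deletion and tombstones are omitted for brevity).

````
# Toy open-addressing hash: 2**n bins hold indices into an insertion-ordered
# entries array; collisions are resolved by triangular-number probing.
class ToyHash
  EMPTY = -1

  def initialize(power = 3)
    @bins = Array.new(1 << power, EMPTY) # bucket array: indices into @entries
    @entries = []                        # [key, value] pairs in insertion order
  end

  def []=(key, value)
    rebuild if @entries.size * 2 >= @bins.size # keep load factor <= 1/2
    i = find_bin(key)
    if @bins[i] == EMPTY
      @bins[i] = @entries.size
      @entries << [key, value]
    else
      @entries[@bins[i]][1] = value # update existing key in place
    end
  end

  def [](key)
    i = find_bin(key)
    @bins[i] == EMPTY ? nil : @entries[@bins[i]][1]
  end

  def each_pair(&blk)
    @entries.each { |k, v| blk.call(k, v) } # array walk, no pointer chasing
  end

  private

  # Triangular-number probing: pos, pos+1, pos+3, pos+6, ... (mod 2**n),
  # which (as argued above) visits every bin of a power-of-two table.
  def find_bin(key)
    mask = @bins.size - 1
    pos = key.hash & mask
    delta = 1
    until @bins[pos] == EMPTY || @entries[@bins[pos]][0].eql?(key)
      pos = (pos + delta) & mask
      delta += 1
    end
    pos
  end

  def rebuild
    @bins = Array.new(@bins.size * 2, EMPTY)
    @entries.each_with_index { |(k, _), idx| @bins[find_bin(k)] = idx }
  end
end
````

Usage: `h = ToyHash.new; h[:a] = 1; h[:b] = 2` stores both keys, `h[:a]` looks one up through the bins, and `h.each_pair { |k, v| ... }` yields pairs in insertion order.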