From: vmakarov@...
Date: 2016-03-08T20:15:25+00:00
Subject: [ruby-core:74232] [Ruby trunk Feature#12142] Hash tables with open addressing

Issue #12142 has been updated by Vladimir Makarov.


Yura Sokolov wrote:
> > I don't like lists (through pointers or indexes). They are a disperse data structure hurting locality and performance on modern CPUs for the most frequently used access patterns. Lists were cool long ago, when the gap between memory and CPU speed was small.
>
> But you destroy cache locality with the secondary hash and by not storing the hash sum in the entries array.
>
> Assume a 10000-element hash; let's count cache misses:
>
> My hash:
>
> - hit without collision
> -- lookup position in bins: +1
> -- check st_table_entry: +1
> -- got 2
> - hit after one collision
> -- lookup position in bins: +1
> -- check st_table_entry: +1
> -- check second entry: +1
> -- got 3
> - miss with an empty bin
> -- lookup position in bins: +1
> -- got 1
> - miss after one collision
> -- lookup position in bins: +1
> -- check st_table_entry: +1
> -- got 2
> - miss after two collisions
> -- lookup position in bins: +1
> -- check st_table_entry: +1
> -- check second entry: +1
> -- got 3
>
> Your hash:
>
> - hit without collision
> -- lookup position in entries: +1
> -- check st_table_element: +1
> -- got 2
> - hit after one collision
> -- lookup position in entries: +1
> -- check st_table_element: +1
> -- lookup second position in entries: +1
> -- check second element: +1
> -- got 4
> - miss with an empty entry
> -- lookup position in entries: +1
> -- got 1
> - miss after one collision
> -- lookup position in entries: +1
> -- check st_table_element: +1
> -- check second position in entries: +1
> -- got 3
> - miss after two collisions
> -- lookup position in entries: +1
> -- check st_table_element: +1
> -- check second position in entries: +1
> -- check second element: +1
> -- check third position in entries: +1
> -- got 5
>
> So your implementation always generates more cache misses than mine. You completely destroy the whole idea of open addressing.

What is missing in the above calculation is the probability of collision for *the same size table*. The result may be less obvious than it looks: if, for example, the first scheme collides on 20% of lookups but the second on only 10%, we get 2 + 3/5 vs. 2 + 4/10, i.e. 2.6 vs. 2.4 cache misses. But I write about this below.

> To overcome this issue you ought to use a fill factor of 0.5.
> Provided you don't use 32-bit indices, you spend at least 24+8*2=40 bytes per element just before rebuilding.
> And just after rebuilding the entries together with the table, you spend 24*2+8*2*2=80 bytes per element!
> That is why your implementation doesn't provide memory savings either.

One test mentioned in this thread showed that in 3 cases out of 4 my tables are more compact than the current ones.

> My current implementation uses at least 32+4/1.5=34 bytes, and at most 32*1.5+4=52 bytes.
> And I'm looking at the possibility of not allocating the doubly linked list until necessary, so it will be at most 24*1.5+4=40 bytes for most hashes.

It would be a fair comparison if you used 64-bit vs. 64-bit. As I wrote, 32-bit hashes and indexes are important for your implementation. With 64-bit values the numbers would be:

* at least 48 + 8/1.5 = 53 bytes (vs. 40 for my approach), with whatever number of collisions that implies in the above case, which is pretty big if you are planning on 1.5 elements per bin on average;
* at most 48 * 1.5 + 8 = 80 bytes (vs. 80), with an analogously big number of collisions if you plan on 1 element per bin on average.
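To make this back-of-the-envelope arithmetic easy to check, here is a small standalone program that simply evaluates the formulas above. The entry sizes (24, 32, 48 bytes), index sizes (4 and 8 bytes), and fill factors are the figures quoted in this discussion, not measurements of either implementation:

~~~
#include <stdio.h>

/* bytes per element = entry size + (bin slots per element) * (index size) */
static double bytes_per_element(double entry, double bins_per_elem,
                                double index_size) {
    return entry + bins_per_elem * index_size;
}

int main(void) {
    /* open addressing, 64-bit, fill factor 0.5: two bin slots per element */
    printf("open addressing, just before rebuild: %.0f bytes\n",
           bytes_per_element(24.0, 2.0, 8.0));         /* 24 + 8*2 = 40 */
    printf("open addressing, just after rebuild:  %.0f bytes\n",
           bytes_per_element(24.0 * 2, 2.0 * 2, 8.0)); /* 24*2 + 8*2*2 = 80 */

    /* chaining with 32-bit hash and index, 1.5 elements per bin */
    printf("chaining (32-bit), minimum: %.1f bytes\n",
           bytes_per_element(32.0, 1.0 / 1.5, 4.0));   /* 32 + 4/1.5 = 34.7 */
    printf("chaining (32-bit), maximum: %.0f bytes\n",
           bytes_per_element(32.0 * 1.5, 1.0, 4.0));   /* 32*1.5 + 4 = 52 */

    /* the same chaining scheme with a 64-bit hash and index */
    printf("chaining (64-bit), minimum: %.1f bytes\n",
           bytes_per_element(48.0, 1.0 / 1.5, 8.0));   /* 48 + 8/1.5 = 53.3 */
    printf("chaining (64-bit), maximum: %.0f bytes\n",
           bytes_per_element(48.0 * 1.5, 1.0, 8.0));   /* 48*1.5 + 8 = 80 */
    return 0;
}
~~~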
If the community decides that we should constrain the table sizes, then I might reconsider my design. When I started work on the tables, I assumed that I could not put additional constraints on the existing tables, in other words that I should not change the sizes. To be honest, at the start I also wondered what would happen if the indexes were 32-bit (although, unlike you, I did not go further and consider changing the hash size too).

> Lists are slow when every element is allocated separately. Then there is also a TLB miss together with a cache miss for every element.
> When elements are allocated from a per-hash array, there are fewer of both cache and TLB misses.

It is still the same cache miss when the next element in the bin list is outside the cache line, which is very probable for moderate or big hash tables. Putting the elements into an array is an improvement: at least they will be closer to each other and will not be freed elements recycled from other tables. Still, IMHO, it is practically the same pointer-chasing problem.

> And I repeat again: you do not understand when and why open addressing may save cache misses.

I think I do understand why open addressing may save cache misses. My understanding is that it permits removing the bin (bucket) lists and *decreases the element size*. As a consequence, you can *enlarge the entries array* and have a *very healthy load factor*, practically excluding collisions, while *keeping the same table size*.

> For open addressing to be effective, one needs to store everything needed to check a hit in the array itself (so at least the hash sum ought to be stored).

That would increase the entry size. It means that for the same size table I would have a bigger load factor. Such a solution *decreases cache misses* in case of collisions *but increases the collision probability* for tables of the same size.
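For clarity, here is a minimal sketch of the kind of layout I mean: a bins array holding only indices into a dense entries array, with the full hash kept in the entry. The names are illustrative, not the actual st.c code from the patch, and it uses linear probing for brevity, whereas the patch itself probes with a secondary hash:

~~~
#include <stddef.h>
#include <stdint.h>

#define EMPTY_BIN SIZE_MAX
/* (a real table would also need a "deleted" bin marker; omitted here) */

typedef struct {
    uint64_t hash;  /* full hash of the key, checked before the key itself */
    void *key;
    void *record;
} sketch_entry;

typedef struct {
    size_t *bins;           /* bin -> index into entries, or EMPTY_BIN */
    sketch_entry *entries;  /* dense array of elements in insertion order */
    size_t bins_mask;       /* the number of bins is a power of two */
} sketch_table;

/* Returns the index of the matching entry, or SIZE_MAX on a miss. */
static size_t sketch_find(const sketch_table *t, const void *key,
                          uint64_t hash,
                          int (*key_eq)(const void *, const void *)) {
    size_t bin = (size_t)hash & t->bins_mask;
    for (;;) {
        size_t i = t->bins[bin];
        if (i == EMPTY_BIN)
            return SIZE_MAX;              /* miss on an empty bin */
        const sketch_entry *e = &t->entries[i];
        if (e->hash == hash && key_eq(e->key, key))
            return i;                     /* hit */
        bin = (bin + 1) & t->bins_mask;   /* collision: probe the next bin */
    }
}
~~~

Each probe touches the bins array and then, if the bin is occupied, the corresponding entry, which is exactly the "+1, +1" counting earlier in this discussion.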
> And the second probe should be in the same cache line, which limits you:
>
> - to simple schemes: linear probing, quadratic probing,

As I wrote, the only thing in quadratic probing that is interesting to me is its better data locality in case of collisions. I'll try it and consider it for the final version of my patch for the trunk.

> - or to custom schemes, where you explicitly check the neighbours before a long jump,
> - or to exotic schemes, like Robin Hood hashing.
>
> You just break every best practice of open addressing.

No, I don't, IMHO.

You also omitted the table traversal operations here. I believe my approach will work better for them. But I guess we could argue a lot about that too. When the meeting decides about the sizes, we will have more clarity. There are no ideal solutions; speculation can sometimes be interesting, but the final benchmark results should be the major criterion.

Although our discussions are sometimes emotional, the competition will help to improve Ruby's hash tables, which is good for the MRI community.

----------------------------------------
Feature #12142: Hash tables with open addressing
https://bugs.ruby-lang.org/issues/12142#change-57368

* Author: Vladimir Makarov
* Status: Open
* Priority: Normal
* Assignee:
----------------------------------------
~~~
Hello, the following patch contains a new implementation of hash tables (the major files are st.c and include/ruby/st.h).

Modern processors have several levels of cache. Usually, the CPU reads one or a few cache lines at a time from memory (or from another cache level), so the CPU is much faster at reading data stored close to each other. The current implementation of Ruby hash tables does not fit well to modern processor cache organization, which requires better data locality for faster program speed. The new hash table implementation achieves better data locality mainly by:

o switching to open-addressing hash tables for access by keys. Removing the hash collision lists lets us avoid *pointer chasing*, a common problem that produces bad data locality. I see a tendency to move from chaining hash tables to open-addressing hash tables due to their better fit to modern CPU memory organizations. CPython recently made such a switch (https://hg.python.org/cpython/file/ff1938d12240/Objects/dictobject.c). PHP did it a bit earlier (https://nikic.github.io/2014/12/22/PHPs-new-hashtable-implementation.html). GCC has widely used such hash tables (https://gcc.gnu.org/svn/gcc/trunk/libiberty/hashtab.c) internally for more than 15 years.

o removing the doubly linked lists and putting the elements into an array for access in their inclusion order (a minimal traversal sketch follows at the end of this message). That also removes the pointer chasing on the doubly linked lists used for traversing elements in their inclusion order.

A more detailed description of the proposed implementation can be found in the top comment of the file st.c.

The new implementation was benchmarked on 21 MRI hash table benchmarks for the two most widely used targets, x86-64 (Intel 4.2GHz i7-4790K) and ARM (Exynos 5410 - 1.6GHz Cortex-A15):

  make benchmark-each ITEM=bm_hash OPTS='-r 3 -v' COMPARE_RUBY=''

Here are the results for x86-64:

  hash_aref_dsym       1.094
  hash_aref_dsym_long  1.383
  hash_aref_fix        1.048
  hash_aref_flo        1.860
  hash_aref_miss       1.107
  hash_aref_str        1.107
  hash_aref_sym        1.191
  hash_aref_sym_long   1.113
  hash_flatten         1.258
  hash_ident_flo       1.627
  hash_ident_num       1.045
  hash_ident_obj       1.143
  hash_ident_str       1.127
  hash_ident_sym       1.152
  hash_keys            2.714
  hash_shift           2.209
  hash_shift_u16       1.442
  hash_shift_u24       1.413
  hash_shift_u32       1.396
  hash_to_proc         2.831
  hash_values          2.701

The average performance improvement is more than 50%. The ARM results are analogous: no performance degradation on any benchmark, and about the same average improvement.

The patch can be seen at

  https://github.com/vnmakarov/ruby/compare/trunk...hash_tables_with_open_addressing.patch

or, less conveniently, as pull request changes:

  https://github.com/ruby/ruby/pull/1264/files

This is my first patch for MRI, and maybe my proposal and implementation have pitfalls. But I am keen to learn and to work on the inclusion of this code into MRI.
~~~

---Files--------------------------------
0001-st.c-use-array-for-storing-st_table_entry.patch (46.7 KB)

--
https://bugs.ruby-lang.org/
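The traversal sketch referenced in the description above: with the elements kept in a dense array in insertion order, walking them in that order is a linear scan over contiguous memory rather than pointer chasing through a doubly linked list. The names and types are illustrative, not the patch's actual st.c API, and deleted slots are assumed to have been compacted away:

~~~
#include <stddef.h>

typedef struct {
    void *key;
    void *record;
} sketch_element;

/* Called for each element; a nonzero return stops the traversal. */
typedef int (*sketch_callback)(void *key, void *record, void *arg);

/* elements[0 .. n-1] hold the live elements in insertion order. */
static void sketch_foreach(sketch_element *elements, size_t n,
                           sketch_callback func, void *arg) {
    for (size_t i = 0; i < n; i++)
        if (func(elements[i].key, elements[i].record, arg) != 0)
            break;
}
~~~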