[ruby-core:74185] Re: [Ruby trunk Feature#12142] Hash tables with open addressing
From: SASADA Koichi <ko1@...>
Date: 2016-03-06 19:26:48 UTC
List: ruby-core #74185
On 2016/03/05 1:31, vmakarov@redhat.com wrote:
> So the packed element approach could be implemented too for the proposed
> implementation.
I agree.
> I don't see it is necessary unless the hash tables will
> be again used for method tables where most of them are small.
As some people said, there are many small Hash objects, like this:

    def foo(**opts)
      do_something opts[:some_option] || default_value
    end

    foo(another_option: customized_value)
BTW, since Ruby 2.2, most keyword argument passing does not create a
Hash object. In the above case, a Hash object is created explicitly,
because `**opts` is a keyword rest parameter.
> Hash
> tables will be faster than the used binary search. But it is not a
> critical code (at least for benchmarks in MRI) as we search method table
> once for a method and all other calls of the method skips this search.
> I am sure you know it much better.
Maybe we will continue to use id_table (or something similar) for method
tables; it is specialized for ID-keyed tables.
BTW (again), I intuitively think a linear search is faster than a hash
table for a small number of elements, because we don't need to touch the
entries table. (But I have no evidence.)
For example, assume 8 elements. One element consumes 24B, so in the
worst case (no matching entry) a linear search needs to load
8 * 24B = 192B, which is 3 L1 cache lines on a CPU with 64B cache lines.
However, if we collect the hash values into a separate array, we only
need to load 8 * 8B = 64B.
... sorry, it is not that simple :p
> Speaking of measurements. Could you recommend credible benchmarks for
> the measurements. I am in the benchmarking business for a long time and I
> know benchmarking may be an evil. It is possible to create benchmarks
> which prove opposite things. In compiler field, we use
> SPEC2000/SPEC2006 which is a consensus of most parties involved in the
> compiler business. Does Ruby have something analogous?
As other people said, I agree. And Ruby does not have enough benchmarks :(
I think the discourse benchmark can help.
> In the proposed implementation, the table size can be decreased. So in
> some way it is collected.
>
> Reading the responses to all of which I am going to answer, I see people
> are worrying about memory usage. Smaller memory usage is important
> for better code locality too (although a better code locality does not mean a
> faster code automatically -- the access pattern is important too). But
> I consider the speed is the first priority these days (especially when memory
> is cheap and it will be much cheaper with new coming memory
> technology).
>
> In many cases speed is achieved by methods which requires more memory.
> For example, Intel compiler generates much bigger code than GCC to
> achieve better performance (this is most important competitive
> advantage for their compiler).
Case by case.
For example, Heroku's smallest dyno only provides 512MB.
>> I think goods overcomes bads.
>>
>
> Thanks, I really appreciate your opinion. I'll work on the found
> issues. Although I am a bit busy right now with work on GCC6 release.
> I'll have more time to work on this in April.
So great.
I hope GCC6 is released successfully.
>> * I always confuse about "open addressing" == "closed hashing" https://en.wikipedia.org/wiki/Open_addressing
>
> Yes, the term is confusing but it was used since 1957 according to Knuth.
I need to complain to the old heroes.
--
// SASADA Koichi at atdot dot net