<yorickpeterse>
mpapis: can you use benchmark-ips for that benchmark? This benchmark might not be triggering the JIT
<yorickpeterse>
basically just require 'benchmark/ips' and replace 'Benchmark.bmbm' with 'Benchmark.ips'
<mpapis>
yorickpeterse, testing
<mpapis>
yorickpeterse, is that benchmark/ips a new addition?
<kagaro>
cool, thanks
<yorickpeterse>
mpapis: it's a Gem evan wrote at some point, takes care of warming up and such
<yorickpeterse>
so typically you'll get better results when JITs are involved and the likes
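A minimal sketch of the kind of comparison being discussed, using benchmark-ips as suggested above. The source array and its size are made up for illustration; only the `+=` / `concat` / `each <<` variants come from the conversation:

```ruby
require 'benchmark/ips'

# Hypothetical input; the actual benchmark from the PR is not shown in the log.
SRC = (1..100).to_a

Benchmark.ips do |x|
  x.report('+=') do
    dst = []
    dst += SRC                 # allocates a new Array and rebinds dst
  end

  x.report('concat') do
    dst = []
    dst.concat(SRC)            # appends in place, no new Array
  end

  x.report('each <<') do
    dst = []
    SRC.each { |e| dst << e }  # appends one element at a time
  end

  x.compare!
end
```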
<mpapis>
running now
<mpapis>
yorickpeterse, similar, adding results
<yorickpeterse>
What's the particular use case for this potential optimization, something you noticed as being slow?
<yorickpeterse>
ah, ty
<mpapis>
yorickpeterse, I was just discussing in a PR for nanoc whether using "each <<" over "+=" might be slower, then we went into benchmarking
<mpapis>
and then I noticed that rubinius concat is slower for array compared to other rubies
<yorickpeterse>
For Arrays the latter should, in theory, be slower
<yorickpeterse>
Since it requires an extra allocation and re-assignment
<yorickpeterse>
Though the resizing of an array _might_ be more expensive
<yorickpeterse>
or the code is just meh
<mpapis>
yes, I now understand, but it could be optimized
<mpapis>
the question I asked in jruby is if it should be optimized
<yorickpeterse>
If it's worth it, sure
<mpapis>
well, natural instinct is that adding to an array once should be faster than adding many times, that's why I was thinking += would be faster
<mpapis>
than each <<
<mpapis>
in the end I'm going with concat, but this should be optimized
<yorickpeterse>
both concat and << should be faster +=
<mpapis>
why copy the array if you have to reassign it back to the same variable
<yorickpeterse>
* than +=
<mpapis>
and even += mentions concat in MRI docs
<chrisseaton>
mpapis: += to actually just extend the same array would require very sophisticated escape analysis and dynamic deoptimization to handle things like ObjectSpace - it's a good idea for an optimisation, but not very simple to implement I think
josh-k has joined #rubinius
josh-k has quit [Ping timeout: 272 seconds]
<yorickpeterse>
"NoMethodError: undefined method `empty?' on an instance of ActiveModel::Errors" what the fuck
<yorickpeterse>
brixen: what's this new feature we have where methods randomly vanish? :P
<mpapis>
chrisseaton, I would think that since all rubies just map "x += y" to "x = x + y", it would be a matter of hijacking the mapping and replacing it with "x.concat(y)"
<mpapis>
especially since the += docs reference concat
<chrisseaton>
mpapis: but only if it's an array - so does that mean you have to check the type on every single + operation?
<mpapis>
chrisseaton, if it's faster then I would not mind
<mpapis>
also string
<mpapis>
actually anything that has concat should use it for += not just arrays
<chrisseaton>
mpapis: I think you would find that the guard would be more expensive than the optimisation - a common problem in optimisations
<chrisseaton>
mpapis: it could also break lots of Ruby code
<chrisseaton>
mpapis: ideally these optimisations would be invisible to the user - we just need more poweful escape analysis and deoptimization to do it
<chrisseaton>
mpapis: I looked into an optimisation where I stored a reference in each array to the location where it was originally allocated, recorded all resizes, and tried to make the array the correct size on the next allocation at that location - that was simpler than what we're talking about here and was already extremely complicated to implement
<mpapis>
chrisseaton, += is supposed to work like concat, so it should be just as fast
<chrisseaton>
mpapis: but that's not the semantics! the semantics are to allocate a new array - so expecting it to be as fast as concat is a programmer's mistake. If we can make it faster, great, but it's not unreasonable for it to be slower.
<yorickpeterse>
mpapis: += also allocates a new Array, so it can't be safely replaced with Array#concat
<chrisseaton>
mpapis: exactly - and you can tell the difference by things like ObjectSpace, so you can't just replace it
<chrisseaton>
mpapis: if you're serious enough to change Ruby's semantics, there are many other things you could do to improve performance a lot more than this
<mpapis>
well then it would have to come from MRI to be approved?
<chrisseaton>
mpapis: yeah - but I doubt they'd go for it as it could very subtly break existing code
<yorickpeterse>
You can't change += to be the same as Array#concat
<yorickpeterse>
it could potentially break every piece of Ruby code written in the past 15 years
<mpapis>
then it would be like dup.concat, let me benchmark this
<chrisseaton>
mpapis: you're also going to introduce a semantic edge-case that you'll have to specifically teach all new Ruby programmers - "in just this one case + and = don't do what you'd expect from how they work everywhere else, they do something else instead"
<chrisseaton>
although I would say - I actually think Python does exactly what you're proposing
<mpapis>
chrisseaton, any other language will have += faster than each <<
<mpapis>
this is expected behavior
<yorickpeterse>
That's bullshit
<yorickpeterse>
Also this is Ruby, not "every other language"
<yorickpeterse>
Ruby specifies that x += y is the same as x = x + y
<yorickpeterse>
which in case of Array leads to an extra allocation
<yorickpeterse>
which _could_ be faster than adding an element to an existing array
<yorickpeterse>
but that depends on when the array is resized
<yorickpeterse>
and the allocation overhead
<yorickpeterse>
that in turn is implementation specific
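To make the point about semantics concrete, here is a small illustration (not from the log) of why `+=` cannot be silently rewritten as `concat`: the rebinding done by `+=` is observable through any other reference to the original Array.

```ruby
a = [1, 2]
b = a

a += [3]        # same as a = a + [3]: builds a new Array and rebinds a
p a             # => [1, 2, 3]
p b             # => [1, 2]     b still points at the old Array
p a.equal?(b)   # => false

c = [1, 2]
d = c

c.concat([3])   # mutates the receiver in place
p d             # => [1, 2, 3]  d sees the change
p c.equal?(d)   # => true
```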
<mpapis>
and dup.concat is slower than << in all of mri, jruby, rbx
<mpapis>
lunch time
<yorickpeterse>
euh yeah of course it is
<yorickpeterse>
you're duplicating an array
diegoviola has joined #rubinius
<headius>
yorickpeterse: dup itself should be pretty cheap
goyox86 has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<headius>
if the array has copy-on-write semantics as in JRuby and MRI (I don't think Rubinius does)
<mpapis>
headius, dup.concat was slower everywhere
<mpapis>
even on jruby
<headius>
dup+concat would force a full copy anyway
<headius>
I was just referring to dup alone
<mpapis>
ah
<headius>
and yes, dup + concat would be as slow as +=
<headius>
or slower
<headius>
+= is slow because it does a full copy of elements from the original and the added array
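Conceptually (an illustration only, not any implementation's actual code), `a += b` for Arrays amounts to the following, which is where the extra allocation and the two full element copies come from:

```ruby
tmp = Array.new(a.length + b.length)  # fresh allocation sized for both
tmp[0, a.length] = a                  # copy every element of a
tmp[a.length, b.length] = b           # copy every element of b
a = tmp                               # rebind the local variable
```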
<mpapis>
it's slower, I guess the best course of action would be improving docs to properly differentiate between += and concat
<mpapis>
not just mention each other
<chrisseaton>
mpapis: where in the docs are you looking? I can't see a mention of += in the concat docs
<chrisseaton>
mpapis: I can see 'See also Array#+.'
goyox86 has joined #rubinius
sferik has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
sferik has joined #rubinius
<yorickpeterse>
headius: chrisseaton: can JRuby somehow read JVM options from a file, similar to .jrubyrc?
<chrisseaton>
yorickpeterse: I don't believe so
<yorickpeterse>
hm
<chrisseaton>
yorickpeterse: always having to have all JVM options set before you start can be a problem - I've recently been trying to work around it in Truffle by making some JIT options runtime configurable
<yorickpeterse>
in this case it's a JVM option, not sure what JRuby can do about that
<chrisseaton>
yorickpeterse: there is of course JRUBY_OPTS and JAVA_OPTIONS
<yorickpeterse>
In my case installing some Gem apparently needs more than the default 500MB
<yorickpeterse>
yeah that's one option
<yorickpeterse>
trying to see where I'm going to smack that atm
<chrisseaton>
500MB seems very low these days doesn't it
<yorickpeterse>
For the JVM, yes
sferik has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
sferik has joined #rubinius
sferik has quit [Client Quit]
<headius>
JAVA_OPTS=-Xmx2g
sferik has joined #rubinius
<headius>
500MB is *our* default...I think JVM's default cap is still even smaller
<yorickpeterse>
hm
<yorickpeterse>
Ah well, I found a decent place to hook it into in this deployment process
<headius>
I'm willing to bump that up, though...we haven't reevaluated it in many years
<headius>
can you give me more info on what needed that much memory?
<yorickpeterse>
mind you I'm installing a Gem of 100-something MB here :P
<mpapis>
both reference the other, but with no explanation
<chrisseaton>
mpapis: what explanation do you think is needed though - they're two methods which do different things, but are similar so they link to each other - what would you say?
<mpapis>
chrisseaton, there is no documentation of += - it would be good to state non-obvious things
<chrisseaton>
mpapis: but the documentation makes it clear what Array#+ does - it creates a copy, so I think that's really clear that += would create a copy. What you proposed with making += use concat would have to be documented as it breaks that pattern - as it currently is I can't see why any documentation is needed.
benlovell has quit [Ping timeout: 240 seconds]
lbianc has joined #rubinius
tenderlove has joined #rubinius
<chrisseaton>
mpapis: I do sort of see what you are saying, but I would ask you 'why did you think += would behave like concat? what in the docs led you to that conclusion'
<mpapis>
chrisseaton, my experience from other languages and plain logic; I did not put much attention into the "description" of + and there were no docs for +=
<chrisseaton>
mpapis: I guess the problem is that Ruby doesn't really document semantics anywhere - you're right I'm not sure where += is documented, except in the books maybe
<mpapis>
as I have the MRI bugtracker open already, I will open a ticket
<goyox86>
Hey guys how do I build RBX with debugging symbols?
<brixen>
./configure --debug-build
<goyox86>
brixen thx
<brixen>
n/p
<goyox86>
Hey brixen, I'm playing with metrics, I'm sending the metrics to influxdb from C++
<brixen>
would you be interested in writing up how you did that for the blog?
<goyox86>
yup
<brixen>
sweet
<brixen>
did you write a custom emitter or are you using the statsd emitter?
<goyox86>
I wrote an emitter :)
<brixen>
interesting
<brixen>
was there an issue with the statsd one?
<goyox86>
nope, I was just trying to set up statsd + graphite and ended up chasing my tail :p
<brixen>
yeah, graphite is a big pain
<goyox86>
I went to the influxdb site and really liked the db
<brixen>
apparently, there's a docker container for it
<brixen>
does influxdb not accept statsd?
<brixen>
or is there a statsd adapter?
<goyox86>
I think it still does not support statsd
<brixen>
I'm not opposed to including other emitters, but I don't want extra maintenance if it's unnecessary
<brixen>
ok
<goyox86>
I understand, I was just playing. Either way, I've built the emitter using the influxdb-c libraries, which are "*not_that_great tm :)" and are built on top of libcurl
<goyox86>
Hey brixen, let's say I want to write code that uses Rubinius::Metrics.data, spawns a thread, and periodically sends data to influxdb. Is there an event from the VM when RBX is ready, at the Ruby level?
<brixen>
goyox86: there's already a thread that sends the data to a location at a regular interval
<brixen>
so, that's not a great approach
pwh has quit []
<brixen>
however, you can run a thread that sends the data in Rubinius::Metrics.data at a regular interval
<brixen>
it's updated transparently, and you shouldn't expect to synchronize with the metrics thread
<brixen>
so, no, there's no event and I won't be adding one
<goyox86>
brixen That is what I'm talking about, I have something like this: https://gist.github.com/goyox86/4290e0c95abd86d76858. The thing is that I don't know what the workflow would be for building a tool (a gem) that is built on top of that
<goyox86>
I think I'm missing something xD
<goyox86>
OMG my writing is terrible I menat building a tool on top of that
<goyox86>
meant* Llisus.
<jc00ke>
goyox86: don't worry, I'm having the same problem this morning!
<brixen>
goyox86: gotta run for now
<brixen>
goyox86: but the point is, you don't build on top of that
<goyox86>
brixennp
<goyox86>
np*
<brixen>
you build on top of influxdb or statsd etc
<brixen>
if there's no good adapter to influxdb, then we can include an emitter
<goyox86>
brixen Roger that
<brixen>
you can make an emitter like your gist, but look at Rubinius::Config[:'system.metrics.interval']
<brixen>
sleep for that interval between writing your payload
<brixen>
that approach is *possible* but sub-optimal
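A minimal sketch of the approach brixen describes, assuming a hypothetical `send_to_influxdb` helper; `Rubinius::Metrics.data` and `Rubinius::Config[:'system.metrics.interval']` are the APIs mentioned above, and the interval is assumed here to be in milliseconds:

```ruby
interval = Rubinius::Config[:'system.metrics.interval']

reporter = Thread.new do
  loop do
    # Metrics.data is updated transparently by the VM's own metrics thread,
    # so we just read it and ship a snapshot; serialization is left to the
    # hypothetical send_to_influxdb helper.
    send_to_influxdb(Rubinius::Metrics.data)

    sleep(interval / 1000.0)
  end
end
```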
<yorickpeterse>
(this is from Wikipedia, the only sane non academic super-complex to read example I could find)
<yorickpeterse>
* non super-complex
<yorickpeterse>
errr dat engrish
<yorickpeterse>
This is one of the areas where I wish I had actually pursued some kind of degree, perhaps it would've helped with understanding all the darn formulas
<yorickpeterse>
or maybe I should've paid more attention to maths during high school, instead of re-programming my calculator and playing games on it
<brixen>
looks like dnsimple is still getting ddos'd so dunno if people will actually be able to install 2.4
<yorickpeterse>
Now I just need to figure out why this Python code uses an extra table for figuring out what rules to use
arrubin_ is now known as arubin
<brixen>
dang, forgot to dedicate 2.4.0 to ez in the News file :(
<brixen>
I'll put it in the github release
<jc00ke>
@brixen I didn't know him but looked up to him and his passion. Considering going to his funeral, though I heard it was for friends and family only.
juergenb has joined #rubinius
arrubin has quit [*.net *.split]
slaught has quit [*.net *.split]
ssedov has quit [*.net *.split]
_whitelogger has quit [*.net *.split]
chrisseaton has quit [*.net *.split]
heftig has quit [*.net *.split]
evan has quit [*.net *.split]
slaught_ is now known as slaught
juergenb has quit [Client Quit]
chrisseaton_ is now known as chrisseaton
havenwood has quit [Remote host closed the connection]
<cremes>
any good link explaining what happened to ezra? it was shocking to hear of his passing but i haven’t found any details. he was a young guy!