<headius[m]>
would help short circuit some paths here
<headius[m]>
amazing how much the CRuby C code boils away if you inline a bunch of stuff and optimize for common paths
<headius[m]>
they overload these methods SO bad
<headius[m]>
huh I think I have it
<headius[m]>
ok... copy_stream does not go through this logic, or at least does not make use of this fast path because it only uses IOOutputStream if the target is not an IO
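[editor's note: a quick Ruby-level sketch of the `copy_stream` call being discussed, with StringIO standing in for real IOs; this only shows the public API behavior, not JRuby's internal fast-path dispatch]

```ruby
require "stringio"

# IO.copy_stream copies bytes between two stream-like objects and
# returns the number of bytes copied. Per the discussion, on JRuby it
# only wraps the target in IOOutputStream when the target is not a
# real IO, so this path does not hit the new fast write path.
src = StringIO.new("hello world")
dst = StringIO.new
copied = IO.copy_stream(src, dst)

copied      # => 11
dst.string  # => "hello world"
```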
<headius[m]>
IOOutputStream is used by the Psych ext to wrap target IO for dumping (passed to SnakeYAML), for stdout and stderr streams provided by Ruby.getError/OutputStream, by Marshal for dumping to a target IO or IO-like, by GzipWriter for writing to a stream, and of course anyone that calls to_outputstream on an IO
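[editor's note: a runnable sketch of two of the Ruby-level entry points listed above, Marshal dumping to a target IO and GzipWriter writing to a stream; StringIO stands in for a real IO, and on CRuby these write directly rather than through any IOOutputStream wrapper]

```ruby
require "stringio"
require "zlib"

# Marshal dumping to a target IO-like object
io = StringIO.new
Marshal.dump([1, :two, "three"], io)
io.rewind
restored = Marshal.load(io)   # => [1, :two, "three"]

# GzipWriter writing compressed data to a stream
gz_buf = StringIO.new
gz = Zlib::GzipWriter.new(gz_buf)
gz.write("payload")
gz.close                      # flushes and finalizes the gzip trailer
```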
<headius[m]>
hmmm might be a use case for this in the IOChannel faker
<headius[m]>
ahh but that is only used when it is not a real IO anyway so no benefit
<headius[m]>
I only have write implemented so far and there will be further optimization there too
<jswenson[m]>
That’s awesome! looking forward to this.
nirvdrum has quit [Ping timeout: 245 seconds]
nirvdrum has joined #jruby
<enebo[m]1>
kares: whenever you want to talk about extension API yak shave I should be available
<headius[m]>
woot, fast path writes can now avoid all allocation
<headius[m]>
slow path writes with transcoding will also allocate several fewer objects
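[editor's note: an illustrative sketch of the fast/slow split being described, using plain Ruby IO encoding semantics rather than JRuby internals; a string whose encoding matches the IO's external encoding can be written byte-for-byte, while a mismatch forces a transcode on every write]

```ruby
require "tempfile"

# Fast path: UTF-8 string into a UTF-8 stream, bytes pass through as-is.
fast = Tempfile.new("fast")
fast.set_encoding("UTF-8")
fast.write("déjà vu")
fast.flush

# Slow path: UTF-8 string into a Latin-1 stream, transcoded before writing.
slow = Tempfile.new("slow")
slow.set_encoding("ISO-8859-1")
slow.write("déjà vu")
slow.flush

fast_size = File.binread(fast.path).bytesize  # 9: é/à are 2 bytes in UTF-8
slow_size = File.binread(slow.path).bytesize  # 7: 1 byte each in Latin-1
```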
<headius[m]>
enebo: I almost have read side of this PR done and then I will review your arity stuff
<enebo[m]1>
headius: I just reviewed that IO PR. A few pretty unimportant comments.
<headius[m]>
enebo: ok
<headius[m]>
pushing read stuff shortly after some smoke tests
<headius[m]>
read was substantially easier than write because most of the transcoding happens downstream from the byte[] logic
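[editor's note: a companion sketch for the read side, again at the level of Ruby's public encoding API; the raw bytes come off the stream first and the external-to-internal transcode happens afterward, downstream of the byte buffer]

```ruby
require "tempfile"

tf = Tempfile.new("read")
path = tf.path

# Write "déjà vu" as raw ISO-8859-1 bytes (7 bytes, é = \xE9, à = \xE0).
File.binwrite(path, "d\xE9j\xE0 vu".b)

# "external:internal" pair: bytes are read as ISO-8859-1, then
# transcoded to UTF-8 before the string is handed back.
str = File.read(path, encoding: "ISO-8859-1:UTF-8")

str           # => "déjà vu"
str.encoding  # => Encoding::UTF_8
```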
<headius[m]>
write with transcode could probably be improved to use allocated-once buffers more but it is an uncommon use case
<headius[m]>
I wish Java IDEs gave some visual indication that a call is self-recursive
<headius[m]>
amazing how many times I've had overloads that accidentally call themselves
<enebo[m]1>
if (array.size() == 0) array = RubyArray.newEmptyArray(context.runtime);
<enebo[m]1>
So we have a normal block dispatch and it gets an empty array. We then make a new empty array. This is for a proc which expects a single required argument
<enebo[m]1>
So I guess proc { |c| }.call(*[])
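[editor's note: the observable Ruby semantics of that exact case; proc arity is lenient, so the missing argument binds nil rather than raising, while a lambda with the same signature raises]

```ruby
# A proc with one required parameter, invoked with an empty splat:
pr = proc { |c| c }
result = pr.call(*[])     # same as pr.call() for a non-lambda proc
result                    # => nil

# A lambda enforces arity strictly and raises instead.
lm = lambda { |c| c }
err = begin
  lm.call(*[])
rescue ArgumentError => e
  e
end
```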
<enebo[m]1>
oh wait this is weirder than I thought
<headius[m]>
why does it make a new array
<enebo[m]1>
yeah that was why I pasted it there
<headius[m]>
oh I suppose it dups normally?
<headius[m]>
to avoid *args on receiver writing into some source array
<enebo[m]1>
if I do arr = []; call(*arr) to not get the []?
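[editor's note: a small sketch of the semantics the defensive copy protects; in Ruby, `call(*arr)` spreads the elements and a rest parameter collects them into a new array, so the callee never holds the caller's array itself]

```ruby
# A callee that captures its arguments via a rest parameter.
def capture(*args)
  args
end

arr = [1, 2]
got = capture(*arr)
got << 3                  # mutate what the callee received

arr   # => [1, 2] — the caller's array was not poisoned
got   # => [1, 2, 3], a distinct array
```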
<enebo[m]1>
I will try that
<enebo[m]1>
hmm
<enebo[m]1>
oh I should add this only comes from the weird cases of us doing .call with NORMAL (which is done in a few places I mentioned yesterday) and for yieldSpecific
<enebo[m]1>
so in yieldSpecific it would be some internal caller passing in an empty ruby array.
<enebo[m]1>
and we do not want it poisoned by being bound to the proc variable
<enebo[m]1>
which I would argue should not be in this place but back at the caller
<headius[m]>
hmm ok
<enebo[m]1>
the call path with NORMAL blocks should not be going through call at all but that is also all internal callers or indirected through Java wrapped mechanisms for calling Ruby blocks
<enebo[m]1>
but it is a very very narrow case. You pass a single [] to a normal block or internal consumer with a proc which has one required value
<enebo[m]1>
I am going to remove it and see what happens but it looked weird enough to immediately paste it into channel :)
<enebo[m]1>
Once I split this helper into the two paths these methods are very similar
<enebo[m]1>
<#<Errno::EOPNOTSUPP: Operation not supported - No message available>>.
<enebo[m]1>
weird. Never seen that fail in test:mri.
<headius[m]>
hmmm
<headius[m]>
anything else?
<headius[m]>
might be some fcntl or ioctl error
<enebo[m]1>
This is on my green branch except for some small changes I have been making
<headius[m]>
read support is pushed
<enebo[m]1>
cool
<headius[m]>
there's gobs of low-hanging perf fruit throughout this whole IO pipeline
<headius[m]>
these changes will also reduce some alloc for the non-fast path
<enebo[m]1>
It would be real fun to see some dark matter come out of this
<enebo[m]1>
I mean let's face it there is a lot of IO in the world
<enebo[m]1>
I am just not sure how much more IO will hit this specific code
<enebo[m]1>
but Rails marshals
<headius[m]>
yeah it is hard to measure because these are all tiny transient objects
<headius[m]>
it all gets cleaned up in eden space so hard to see a straight-line improvement, but lots of concurrent usage will start to overload alloc and GC
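[editor's note: a rough sketch of one way to observe this on CRuby; `GC.stat(:total_allocated_objects)` counts every allocation, so even short-lived objects that die in the young generation show up here while wall-clock time barely moves (JRuby would need JVM-side tooling such as JMX instead)]

```ruby
require "stringio"

io = StringIO.new
data = "x" * 64

# Count allocations across a burst of writes; the delta captures
# transient objects the GC would otherwise hide from timing benchmarks.
before = GC.stat(:total_allocated_objects)
10_000.times { io.write(data) }
allocated = GC.stat(:total_allocated_objects) - before
```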
<headius[m]>
at least alloc on these paths is drastically reduced, which is always a win
<enebo[m]1>
yeah I guess it could help in places where memory is tighter at a minimum
<enebo[m]1>
small eden space means more frequent activity