Hobogrammer has quit [Ping timeout: 252 seconds]
Hobogrammer has joined #jruby
norc__ has joined #jruby
norc_ has quit [Ping timeout: 250 seconds]
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 240 seconds]
yfeldblum has quit [Ping timeout: 250 seconds]
prasunanand has joined #jruby
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 260 seconds]
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 260 seconds]
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 276 seconds]
yfeldblum has joined #jruby
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 260 seconds]
prasunanand has quit [Ping timeout: 260 seconds]
prasunanand has joined #jruby
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 250 seconds]
thedarkone2 has quit [Quit: thedarkone2]
pawnbox has joined #jruby
pitr-ch has joined #jruby
raeoks has joined #jruby
donV has joined #jruby
e_dub has quit [Read error: Connection reset by peer]
e_dub has joined #jruby
pilhuhn has joined #jruby
Hobogrammer has quit [Ping timeout: 276 seconds]
pitr-ch has quit [Quit: Textual IRC Client: www.textualapp.com]
pawnbox has quit [Remote host closed the connection]
pawnbox has joined #jruby
Specialist has joined #jruby
raeoks has quit [Ping timeout: 240 seconds]
Antiarc has quit [Ping timeout: 276 seconds]
pawnbox has quit [Remote host closed the connection]
pawnbox has joined #jruby
vtunka has joined #jruby
pawnbox has quit [Remote host closed the connection]
dumdedum has joined #jruby
shellac has joined #jruby
pawnbox has joined #jruby
dumdedum has quit [Quit: foo]
donV has quit [Read error: Connection reset by peer]
donV has joined #jruby
Specialist has quit [Ping timeout: 250 seconds]
Specialist has joined #jruby
donV has quit [Quit: donV]
PragTob has joined #jruby
vtunka has quit [Quit: Leaving]
vtunka has joined #jruby
donV has joined #jruby
skade has joined #jruby
yfeldblum has quit [Ping timeout: 250 seconds]
pawnbox has quit [Remote host closed the connection]
pawnbox has joined #jruby
<GitHub101> [jruby] eregon pushed 1 new commit to master: https://git.io/vocDx
<GitHub101> jruby/master f7a8a2e Benoit Daloze: [Truffle] Fix naming for calling a proc with a block.
tcrawley-away is now known as tcrawley
<eregon> headius: Thanks for reporting this upstream https://bugs.ruby-lang.org/issues/12359#change-59184: matz agreed!
<GitHub133> [jruby] pitr-ch pushed 14 new commits to master: https://git.io/voc96
<GitHub133> jruby/master 5c34605 Petr Chalupa: [Truffle] add back API methods
<GitHub133> jruby/master f26cb94 Petr Chalupa: [Truffle] keep Ruby methods of AtomicReference untill moved under Truffle
<GitHub133> jruby/master 29ef974 Petr Chalupa: [Truffle] add back Dir API methods
e_dub has quit [Quit: ZZZzzz…]
vtunka has quit [Quit: Leaving]
donV has quit [Ping timeout: 250 seconds]
vtunka has joined #jruby
<GitHub134> [jruby] pitr-ch pushed 4 new commits to master: https://git.io/voc7x
<GitHub134> jruby/master d09139b Petr Chalupa: [Truffle] used step_internal's if
<GitHub134> jruby/master 4c155c9 Petr Chalupa: [Truffle] pub back methods used by random
<GitHub134> jruby/master 4df834d Petr Chalupa: [Truffle] putting back code used in if/case branches...
donV has joined #jruby
lance|afk is now known as lanceball
tcrawley is now known as tcrawley-away
tcrawley-away is now known as tcrawley
tcrawley is now known as tcrawley-away
<GitHub72> [jruby] chrisseaton pushed 4 new commits to master: https://git.io/vocAp
<GitHub72> jruby/master a5ef069 Chris Seaton: [Truffle] Move defining primitives to CoreLibrary.
<GitHub72> jruby/master d3325f8 Chris Seaton: [Truffle] Split loading primitives and nodes.
<GitHub72> jruby/master 38b41a4 Chris Seaton: [Truffle] Parallelise loading primitives.
e_dub has joined #jruby
tcrawley-away is now known as tcrawley
dfr has quit [Ping timeout: 250 seconds]
<GitHub33> [jruby] pitr-ch pushed 1 new commit to master: https://git.io/vochC
<GitHub33> jruby/master 9f706bb Petr Chalupa: Revert "[Truffle] add back String API methods"...
enebo has joined #jruby
dfr has joined #jruby
e_dub has quit [Quit: ZZZzzz…]
johnsonch_afk is now known as johnsonch
skade has quit [Quit: Computer has gone to sleep.]
shellac has quit [Quit: Leaving]
<travis-ci> jruby/jruby (master:dec2534 by Petr Chalupa): The build was broken. (https://travis-ci.org/jruby/jruby/builds/137224909)
e_dub has joined #jruby
donV has quit [Ping timeout: 250 seconds]
blandflakes has joined #jruby
camlow325 has joined #jruby
camlow32_ has joined #jruby
camlow325 has quit [Read error: Connection reset by peer]
Aethenelle has joined #jruby
camlow325 has joined #jruby
camlow32_ has quit [Read error: Connection reset by peer]
prasunanand has quit [Remote host closed the connection]
skade has joined #jruby
shellac has joined #jruby
PragTob has quit [Remote host closed the connection]
vtunka has quit [Quit: Leaving]
thedarkone2 has joined #jruby
hobodave has joined #jruby
pawnbox has quit [Remote host closed the connection]
vtunka has joined #jruby
pawnbox has joined #jruby
pawnbox has quit [Remote host closed the connection]
vtunka has quit [Client Quit]
pawnbox has joined #jruby
pawnbox has quit [Ping timeout: 272 seconds]
shellac has quit [Quit: Computer has gone to sleep.]
pawnbox has joined #jruby
hobodave_ has joined #jruby
Specialist has quit [Remote host closed the connection]
hobodave_ has quit [Max SendQ exceeded]
hobodave has quit [Ping timeout: 260 seconds]
hobodave has joined #jruby
Specialist has joined #jruby
Aethenelle has quit [Quit: Aethenelle]
skade has quit [Quit: Computer has gone to sleep.]
<travis-ci> jruby/jruby (master:65f0175 by Chris Seaton): The build has errored. (https://travis-ci.org/jruby/jruby/builds/137235943)
<GitHub63> [jruby] headius pushed 1 new commit to packed_arrays: https://git.io/voCzx
<GitHub63> jruby/packed_arrays b2e2611 Charles Oliver Nutter: Treat all fills as unpackable operations because I'm lazy.
e_dub has quit [Quit: ZZZzzz…]
e_dub has joined #jruby
<kares_> enebo: hey! isn't there the possibility for optimization with caller ?
<kares_> did not check yet - just asking
<kares_> e.g. caller[0] vs. caller(1, 1)
<kares_> ... knowing up front that only one previous frame is needed
<chrisseaton> That's interesting
<chrisseaton> And for one frame you can hint to pass in the caller, so you could avoid walking the stack at all
<headius> kares_: interesting idea
<headius> chrisseaton: yeah, we already have "called name" and file+line available within the method body
<kares_> yes and it's the 80-20 case ... most of the time people do caller just to get the calling trace
<kares_> I know Rails does this and it's the same case on the mailing list
<headius> ugh
<headius> where does Rails do caller?
<kares_> headius: recall they did it around logging somewhere
<kares_> but maybe it's only with SQL logging being on
<headius> either way that's not good
<kares_> (which is usually off on production)
<headius> whew :-)
<headius> yeah if they were doing that it would even be a lot of overhead on MRI
<headius> not as much as us, but a lot of objects and stack-walking
<kares_> will double check - just reading the mailing list issue I noticed that I've seen the exact same type of caller usage
pietr0 has joined #jruby
<kares_> caller[0] ... which is pretty much caller(1, 1)
<headius> caller[0] or caller(0)?
<headius> the former will still need caller frame which usually will need backtrace
<kares_> caller[0] in the mailing list
<headius> not much we can do for that :-(
<kares_> isn't it shifted ... caller[0] == caller(1, 1)
<kares_> ?
<headius> no
<kares_> have to try
<chrisseaton> headius: but can't the caller know that the callee will need it via some flag and then pass in its backtrace info - we do that for binding_of_caller etc
<chrisseaton> no issue if you pass it in if you don't need it, and if you mess up and don't pass it when it was needed you can still backtrace to get it
<GitHub105> [jruby] mohamedhafez opened issue #3963: SelectorPool returns Selectors with old keys in them https://git.io/voCVZ
<kares_> maybe I am missing something obvious or JRuby is misbehaving : https://gist.github.com/kares/b642cd3f6a646d943ed994e2f77a00e3
<kares_> oh right - its misbehaving
<kares_> caller(x, y) is wrong
<kares_> IGNORE THAT - wrong terminal window
shellac has joined #jruby
<kares_> all is well but still : caller[0] == caller(1, 1) as I thought
pilhuhn is now known as pil-afk
<kares_> etc. caller[1] == caller(2, 1)
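The equivalence kares_ settles on here can be checked directly. A minimal sketch (method names `level_one`/`level_two` are hypothetical) showing that `caller`'s array index is shifted by one relative to `caller(start, length)`, because `caller` defaults to start = 1 (skip the current frame):

```ruby
# Demonstrates the shift kares_ describes: caller[n] == caller(n + 1, 1).first.
# `caller` with no arguments starts at frame 1, so indexing into the full
# array and requesting a single frame at start = n + 1 yield the same entry.
def level_two
  [caller[0], caller(1, 1).first, caller[1], caller(2, 1).first]
end

def level_one
  level_two
end

full0, limited0, full1, limited1 = level_one
p full0 == limited0  # => true (frame where level_one calls level_two)
p full1 == limited1  # => true (frame where top level calls level_one)
```

The second form matters for the optimization being discussed: `caller(1, 1)` tells the runtime up front that only one frame is needed, while `caller[0]` forces the whole backtrace to be built first.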
prasunanand has joined #jruby
<headius> chrisseaton: we can't pass a Frame if we haven't allocated one...we could pass file AND line AND call name but there's three more things to push through call stack
<headius> just in case we need them
<chrisseaton> yeah I guess you can't just keep passing more stuff
<chrisseaton> I was thinking the first time you use caller[0] it would then tell its caller 'hey next time pass me your backtrace', so it'd only be turned on if you need it
<headius> yes, that sort of thing is possible...easier with indy calls
<headius> we may eventually just have to go all indy because it's too hard to handle all these "maybe we need to do it" things with hand-written call paths
yosafbridge has quit [Ping timeout: 276 seconds]
<headius> we need to be able to massage the calls in-flight rather than having to determine what they need up front
<headius> kares_: yes, so if it were caller(0) (this method) we could use your idea
<kares_> yep
<headius> it did just occur to me, though, that when we use stack trace for jitted methods we're only getting the actual defined method name, not the name it was called as
<headius> which may be a bug or may be ok...I don't recall
<headius> needless to say, caller is one of my least favorite Ruby features to support on an existing VM
<kares_> yes guess the Thread#getStackTrace is unavoidable
<headius> for now, at least
<headius> I need to investigate recent JDK additions that might make short stack-walks cheaper
<headius> so we could see caller[0] and just ask for one calling frame from JVM
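On the Ruby-API side there is already a cheaper primitive for exactly this short-walk case: `caller_locations`, which accepts the same `(start, length)` arguments but returns `Thread::Backtrace::Location` objects instead of eagerly formatted strings. A small sketch (method names are hypothetical):

```ruby
# caller_locations(1, 1) asks for at most one caller frame; path, lineno,
# and label are exposed as lazy accessors on a Location object rather than
# being rendered into backtrace strings up front.
def who_called_me
  loc = caller_locations(1, 1).first
  "#{loc.path}:#{loc.lineno} in #{loc.label}"
end

def probe
  who_called_me
end

puts probe  # e.g. "example.rb:12 in probe"
```

A JVM-hosted implementation still has to materialize those frames somehow, which is where the JDK stack-walking improvements discussed next come in.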
blandflakes has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<kares_> headius: they do plan some improvements but in a new API
<headius> yes
<kares_> around StackTraceElement short-comings
<kares_> etc
<headius> yeah, and there was the whole getCallerClass debacle too...I believe they have an alternative API for that now
shellac has quit [Quit: Computer has gone to sleep.]
drbobbeaty has joined #jruby
<headius> either way, they do appear to be trying to improve how we get information about call stack
<kares_> yes, it should tell you the Class directly
donV has joined #jruby
<travis-ci> pitr-ch/jruby (master:73f1250 by Petr Chalupa): The build is still failing. (https://travis-ci.org/pitr-ch/jruby/builds/137277512)
Specialist has quit [Remote host closed the connection]
<enebo> kares_: headius: I half thought of using .rbj as an extension and adding a simple macro language! :)
<enebo> caller is almost always used for this sort of thing and is basically a smell of not having macros
<enebo> although I know how matz feels about them
e_dub has quit [Quit: ZZZzzz…]
<kares_> enebo: still wondering - what guarantees must be met in order to just get caller(1, 1) from the frame-stack (without getStackTrace) ?
<headius> the caller method would need to put its file, line, and name somewhere on heap
camlow32_ has joined #jruby
<headius> a waste if they're not needed
<headius> or run in interpreted mode, where everything gets a heap frame :-D
<kares_> right I see
<enebo> kares_: I think the largest issue is many methods will not get a frame
<kares_> yep that is what I was afraid of ;(
<enebo> headius: It would be interesting to get more data on cost of parameters
<enebo> headius: I know scala passes like 400 per call :P
<headius> if things inline a lot of them will get reduced by register alloc, but when not inlined they just consume frame space and/or registers
<headius> and since usually they'll be different values for each method, they can't be folded into one register across calls
<enebo> I think that is not even my fear
<enebo> Mine would be bytecode limits
camlow325 has quit [Ping timeout: 244 seconds]
camlow325 has joined #jruby
<enebo> pushing n more values per call in a method will make it larger
<headius> to do the calls, definitely
<headius> I could statically put the information into indy call sites, which will then only affects bootstrapping
<headius> I don't think any of this is feasible along non-indy paths though
<enebo> with profiling on we actually can know this
<enebo> not filename but method name at least
<enebo> I would hate to save the line and file per call
<headius> well I just did a pass over indy call sites to send in file and line number
<enebo> but there is some synergy to passing callsite to method being called
<headius> mostly because I log them when logging indy behavior
<headius> "call to X from y.rb:1 bound directly blah blah"
camlow325 has quit [Remote host closed the connection]
<headius> if I put in the statically-determined method name we'd have everything available in at least indy sites
<enebo> but our profiling would be cheaper if we passed callsite to method being called
camlow32_ has quit [Ping timeout: 252 seconds]
<headius> how to get that into the next method, I'm not sure...there'd have to be more params
<headius> i.e. raise callsite to an operand
<headius> yes?
<enebo> this is beyond indy since we call through native methods
<enebo> an operand…I don’t know
<headius> static stuff is easy for me to add into call sites...anything dynamic would have to be passed on stack
<enebo> I think we would just pass it or no
<enebo> headius: profiling requires recording sites and I record them in a pretty janky way
<enebo> headius: so I think ignoring JIT or anything else if we want profiling to be cheaper we should consider this pretty seriously
<enebo> headius: but it would actually be usable for caller opto too
<headius> sure
<enebo> I have not even considered impling this because I want what we have more solid and working to know what wins we can have
<enebo> pushing it through now feels premature
<enebo> if we do not see big gains from inlining or not enough to cover slowdown from profiling then we might be stuck
<enebo> I personally do not think so since right now I think it will pay off huge in scopes which have method calls which pass blocks
<enebo> that one pattern alone will be pretty big payoff since blocks are so much more expensive than methods
<headius> yeah, that's the big money
<enebo> we do not need to profile the planet so I think I can keep overall system cost lower too
<headius> inlining in IR can do things I can't even force in JIT too
<headius> even if hotspot did inline through a method and the block it receives, it wouldn't eliminate the overhead we have to support closures
<enebo> right now profiler does keep track of all sites over time and dumps the stats periodically for temporal usefulness (and not a ton of memory use)
<headius> graal might not either since we put the frames into a globally-visible, long-lived heap object
<enebo> yeah
<headius> it needs to optimize at IR level or pass everything on stack
<headius> or both :-)
<enebo> well I just know we eliminate tons of crap in simple loop calling a closure scenario
<headius> yeah ship it
<headius> I want it
<enebo> and that ignores we can run passes on post-inlined body
<headius> your work to reduce IR size will have a high impact there too
<enebo> anyways…the caller opto might fit into profiler work since passing callsite can I think give visibility to scope
<enebo> headius: yeah smaller it gets the more we can inline too
<enebo> that hotspot uses incoming bytecode size for a limit is really disappointing
<headius> nice work, enebo and subbu
<enebo> headius: this sort of makes me wish we had bit the bullet and allowed all instrs to accept a line number operand
<enebo> line number is not really an instr which can be helpful in analysis
<enebo> but it would mean a subset of all instrs would have to be able to cope with being a line number boundary
<enebo> for JIT I guess it would be trivial to change over to
<enebo> even interp would not be hard but we have so many operand decode paths I think it would be error prone
<headius> jit currently just uses the last line encountered to pass into indy bootstrap
<headius> and file is constant for a given IRScope once loaded
<enebo> yeah but I am just talking about representation here in the IR instr side of things
<subbu> good to see.
<enebo> how you see the number change as you process the scope would be a relatively small change for JIT to ask instrs
<enebo> vs visitor to LineNumber()
<headius> makes no difference to me
<headius> old JIT pulled from nodes and just injected line information when it changed
<enebo> subbu: look at this after and wonder how hard it would be to give back tmp vars for reuse when 1:1 for things like eqq+btrue (e.g. single use temps)
yosafbridge has joined #jruby
<enebo> linenum instrs are now shared so they do not have a large memory footprint anymore
<enebo> it is just a big contributor to # of instr per scope
<headius> hmm
<headius> I thought I modified IR to have a case/when instruction for all-Fixnum case/when
<subbu> enebo, but, line # instrs don't show up in bytecode, so how does it matter?
<enebo> it may be faster for interpreter to have them since it does not force their burden on every instr in the scope
<headius> maybe the default is throwing it off
<enebo> subbu: I was only pointing out they are extra stuff which analysis does not use
<enebo> subbu: so it is just extra instrs per scope we walk over
knu has quit [Read error: Connection reset by peer]
<enebo> subbu: but it might be best way to represent them and it might be faster as an instr than not for interpreter
<enebo> subbu: I am merely musing on it
<travis-ci> jruby/jruby (packed_arrays:b2e2611 by Charles Oliver Nutter): The build is still failing. (https://travis-ci.org/jruby/jruby/builds/137292079)
<headius> subbu: really nice to see all those nil inits disappearing now
<subbu> enebo, ok .. i have a mild pref. for the current form. since it is more readable in IR output. as for reusing tmps, sure, i can take a look later.
<headius> they were a fair bit of JVM bytecode
<subbu> headius, ya .. i could have done this long back .. but, i was lazy since i felt that hotspot can deal with it trivially and the extra work didn't feel worth it.
<subbu> but i have learnt since that bytecode size matters.
<headius> sadly
<enebo> subbu: actually can you confirm something…BB8 looks like we are no longer creating noresult instrs
<enebo> subbu: does it look like that for you too?
<headius> it probably always will matter too since everything's going to run in an interpreter or low-tier JIT before the big optz kick in
<chrisseaton> you need a JIT that think about things at a higher level, like Graal
<headius> chrisseaton: Graal interprets too
<headius> it will matter
<chrisseaton> Although to be fair I think Truffle inlines based on number of Graal nodes, which is also a poor metric
<headius> or rather I mean I assume Graal doesn't kick in until after interp has been running for some time
<subbu> enebo, it looks like that in that particular case .. but depends on what the ast node for that is.
<headius> yeah, nodes are a better metric but not great
<chrisseaton> Right
<enebo> subbu: it may be that those temps %v_9,10 did have uses and a pass removed them
<headius> and very time-sensitive...nodes before optimizing? after? how do you decide how many nodes is too many before optz?
<subbu> enebo, if so, easy to check with ast --ir dump
<headius> hotspot also has a node count threshold :-)
<enebo> subbu: yeah I just need to know what this is code headius made for it? :)
<headius> so multiple different metrics for bytecode size vs hotness AND a tree of nodes metric
<headius> enebo: what did I do?
<subbu> he wants ruby code for that snippet
<headius> oh!
<headius> I'll add
<headius> it's just a three fixnum case/when with a default
* subbu wanders back into wmf land
<headius> it's there now
<enebo> headius: thanks
donV has quit [Quit: donV]
<enebo> heh the irony with this code is we do plan on some numeric switch support overall
<headius> I wonder about creating richer test-and-branch instructions too
<headius> rather than eqq + b_true, it would just be eqq_branch
<headius> eqq_branch I can compile more optimally than an opaque IRubyObject + b_true
<enebo> headius: this looks broken
<headius> broken?
<enebo> look at bb8
<enebo> how does lots happen only
<enebo> like it is missing a BB
<enebo> HAHAHAHA
<enebo> fuck me
<enebo> you JAVA PROGRAMMER
knu has joined #jruby
<headius> I don't see what's wrong with BB 8
<enebo> default;
<enebo> you mean else?
<headius> oh hahahah
<headius> yeah
<headius> I mean else
<enebo> ok so it does not look broken at IR level now and it is still a reasonable case to see why results were not eliminated
<enebo> I think in this snippet if builder knows it is using 1:1 temp usage we can eliminate 7 temp vars
<enebo> So a question would be is it helpful to only have 5 vs 12
<enebo> in the case of referencing crap in temps stuck down stack I could see it helping
<headius> I fixed the gist and dropped the bytecode dumps
Antiarc has joined #jruby
<enebo> I had thought if we could have all jumps use the same temp in a scope we could just use a field for it
<enebo> since only one jump can ever be active at a time in a scope
rcvalle has joined #jruby
<headius> I think we need to add b_eqq
<enebo> not sure if that is really helpful or not since we still need other temps
<enebo> yeah compound instrs could also help
<headius> it will drop an instruction for every "when" at minimum, and in JIT I can compile that as a boolean rather than IRubyObject.isTrue()
<enebo> b_eqq is really really common as a pattern too
<headius> and it will be easier to see through for making homogeneous case/when into switches
shellac has joined #jruby
<headius> right now I can't easily associate the eqq with the b_true in JIT
<enebo> headius: yeah although I have this fantasy that detecting without it means we can convert if/elsif chains which happen to match same pattern
<headius> if (a === b) could still be compiled as b_eqq
<enebo> I guess we can still detect that pattern separately but I would not take the time to look
<headius> it's equivalent
<enebo> yeah that’s true
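The equivalence headius is leaning on — a `when` clause is sugar for an `===` call feeding a conditional branch — is visible at the Ruby level. A sketch with hypothetical method names:

```ruby
# Both methods lower to the same shape in JRuby's IR: an eqq (===) call
# whose boolean result feeds a branch -- eqq + b_true today, or the fused
# eqq_branch/b_eqq instruction being proposed in this discussion.
def with_case(x)
  case x
  when Numeric then :number
  else :other
  end
end

def with_if(x)
  if Numeric === x
    :number
  else
    :other
  end
end

p with_case(1) == with_if(1)      # => true (both :number)
p with_case("a") == with_if("a")  # => true (both :other)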
<enebo> I guess we should examine most uses of our branches and see if we are missing some big patterns
<enebo> eqq + b_true does happen a lot
shellac has quit [Quit: Computer has gone to sleep.]
subbu is now known as subbu|lunch
<headius> enebo: in fact, eqq ALWAYS goes with b_true
<headius> we only emit it for when
<enebo> headius: in a casual survey this only happens for case/when
<headius> well if (a === b) will be call("===") followed by b_true
<enebo> headius: so it may be that we are better off not making a new branch but making the actual table lookup instr
<headius> yeah I need to see why that's not emitting
<headius> I thought I landed it for fixnums at least
<enebo> headius: is it on master already?
<headius> thought so
<headius> maybe I never landed a branch
<enebo> we do commonly use b_true with eqq_rescue
<headius> ahh, fast_fixnum_case
<headius> oh yes, that would be the other pattern for eqq + b_true
<enebo> probably not common enough to add an even more specialized instr for it though
<enebo> like in 80,000 lines I saw it like 10 times
<enebo> I guess if you have tons of rescues
<headius> yeah I'd expect single-rescue is by far the most common, dropping off fast after that
<headius> rescue could be optimized to be constant time as well if the referenced constants don't change
<headius> i.e. if it's all rescue SomeClass, we just check if SomeClass constant would be invalidated, otherwise direct branch based on previously-seen class ID
<enebo> headius: yeah it possibly could be a specialized variant of a switch
<enebo> headius: although honestly I am not sure it matters in that case…it is generating an exception :)
<headius> this is talking about optimizing exception *handling* though
<headius> heh yeah
<headius> we're in sync
<enebo> just tried lex bench for parse and we take 130s vs 2.3 taking 17s
<enebo> so I am guessing it is still not small enough to fit yet
<headius> probably not
<headius> it's really big
<enebo> I will generate IR for it
<headius> yeah I was just going to suggest that
<headius> we do have a hard limit on IR size for JIT right now
<headius> 1000 instructions I believe
<headius> that's the new meaning of jit.maxsize
<enebo> I can up that quite a bit though right?
<headius> it was root nodes before
<headius> yeah you can
camlow325 has joined #jruby
<headius> if you don't see it JIT that's why
<headius> if it JITs then it may be too big
<enebo> but we could never do it enough to get it native
<enebo> we would run out
shellac has joined #jruby
e_dub has joined #jruby
bga57 has joined #jruby
shellac has quit [Client Quit]
<enebo> org.jruby.compiler.NotCompilableException: Could not compile org.jruby.internal.runtime.methods.MixedModeIRMethod@5ac3e1c8; instruction count 10819 exceeds threshold of 10000
<enebo> java.lang.RuntimeException: Method code too large!
<headius> oh, 10k
<headius> 10k is probably still way too much JVM bytecode too
<enebo> well I should run this against 9.1.2.0 and see how many it was
<enebo> yeah still too large
<headius> it's closer though
<enebo> wow
<enebo> 10855 vs 10819
<enebo> inconceivable
<headius> heheh
<headius> so work continues :-)
<headius> enebo: I'll test out my branch today and if it passes everything I'll land it
<headius> so at least we'll have constant-time case/when with all fixnums
<enebo> headius: that number makes no sense
<headius> MRI does it for string, symbol, and nil/true/false also
<enebo> headius: this method has like 1000 temp vars
<headius> sounds about right
<headius> most could be reused
<enebo> so it should have 1000 less
<enebo> yeah well nearly all should disappear with your landing of that instr too
yfeldblum has joined #jruby
<headius> ¯\_(ツ)_/¯
<headius> that case when isn't homogeneous though
<enebo> my other weird observation is that this has lots of double line_num instrs
<headius> it's all strscan.scan calls
<enebo> aha
<headius> oh hey, did we ever turn DCE back on for JIT?
<enebo> we are emitting for begin and for instr before begin
<headius> we can do that now that the nils are being populated correctly
<headius> enebo: looks like I never completed the fixnum case/when work
<headius> I think I got it to work for most cases and then got pulled away
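For reference, the shape the fast_fixnum_case branch targets (illustrative only; the optimization itself lives in JRuby's JIT, not in Ruby code): a case whose `when` values are all Fixnum literals can lower to an O(1) table switch, while a mixed case must remain a chain of `===` tests.

```ruby
# Homogeneous: every `when` is a Fixnum literal, so the dispatch is a
# candidate for a JVM tableswitch instead of N sequential === calls.
def homogeneous(n)
  case n
  when 1 then :one
  when 2 then :two
  when 3 then :three
  else :other
  end
end

# Mixed: the classes of the `when` values differ, so each test must stay
# a full === dispatch and no table lookup is possible.
def mixed(x)
  case x
  when 1      then :int
  when "a"    then :string
  when Symbol then :symbol
  else :other
  end
end

p homogeneous(2)  # => :two
p mixed(:foo)     # => :symbol
```

This is also why the strscan-heavy lexer method discussed above doesn't qualify: its case arms aren't homogeneous literals.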
<enebo> headius: DCE as a pass itself seems to still be running so I guess I am not sure what you are referring to
<headius> enebo: I think it only runs in simple
<enebo> headius: ok
<headius> or full
<headius> not in JIT
<headius> but in any case we want it at the end of all JIT passes
<enebo> headius: well it is not super important we fix this today either I guess, we just notice stuff and poke and prod :)
<enebo> headius: subbu|lunch: perhaps we just moved it earlier?
<headius> earlier?
<enebo> I don’t remember this at all
<enebo> in pass list
<headius> we removed it
<GitHub55> [jruby] headius created fast_fixnum_case (+2 new commits): https://git.io/voCFK
<GitHub55> jruby/fast_fixnum_case 8acfb77 Charles Oliver Nutter: Optimize all-Fixnum case/when to have an O(1) switch.
<GitHub55> jruby/fast_fixnum_case d696406 Charles Oliver Nutter: Work in progress
<enebo> headius: it is in there
<headius> removed altogether from JIT because of nil init
<headius> ohhhh
<headius> ok yeah it is there earlier but then we do a bunch more passes
<enebo> unless subbu|lunch re-added it recently or you mean something else doing some DCE like?
<headius> it should be at end now
<enebo> ok I will try and see what happens :)
<headius> yeah subbu added it back
<headius> but I don't know why it wouldn't be at end
<headius> I mean it's after optimize scopes, optimize delegation, call protocol
<headius> or I mean it's before those
<headius> and they make a lot of changes
<enebo> he did not seem to reenable for last commit
<enebo> but I will move it to the end if that is how it was
<enebo> hahahaha
<enebo> ok ignore me a bit…I had 9.1.2.0 tag
<headius> hah
<headius> well then the difference between counts is *really* weird
<headius> my fixnum case doesn't check for === overwrite in Fixnum either
<headius> not that I care
<enebo> even on master he did not add DCE recently
<headius> yeah looks like it came in with the nil init work
<enebo> I do not see that…previous commit before he swapped over from ensuretemp?
<chrisseaton> but if you aren't handling monkey patching of ===, you aren't implementing Ruby though
<headius> chrisseaton: quiet, you
<headius> can you monkey-patch fixnum to be broken and still run? Because we can
shellac has joined #jruby
<chrisseaton> well you won't run this properly! or that min thing
<headius> I'm not sure "Ruby" requires "you can break the whole VM by changing what math does"
<chrisseaton> but yeah we'd probably break entirely - we inherit that from Rubinius
<chrisseaton> Right, but just stick a switch point on it, it takes 2 minutes
<headius> I know, and I will, and I do elsewhere
<headius> but if a feature isn't implemented, and nobody ever notices, is it really a feature?
<chrisseaton> Rubinius has some safe-math idea we're supposed to be using I think
<chrisseaton> that min thing shipped
<headius> what was the min thing?
<chrisseaton> I think you optimised Array#min or maybe sort for Fixnum, and didn't handle monkey patching of <=>
<chrisseaton> I opened an issue - you closed it I think :)
<headius> ah, yes
<headius> and changing what Fixnum#<=> does is broken behavior in any app
<chrisseaton> You must have pushed it back it seems
<chrisseaton> You *just* pushed it back to the next version, I mean
pil-afk is now known as pilhuhn
<headius> right, because we can do the check but
<chrisseaton> I think this is a real slippery slope, although I know I'm being pedantic. But one little shortcut here, one shortcut there, before you know it you're missing something important that someone uses.
<headius> and then we fix that thing
<chrisseaton> Maybe we're slower at warmup than you partly because we have full dispatch everywhere
<headius> this is pragmatic versus dogmatic
<headius> I'm trying to make a tool people use today
<headius> there are tradeoffs
<headius> there's also places where MRI explicitly *doesn't* dispatch, which you can't emulate easily from pure-Ruby impls
<headius> it's not consistent enough to say that these cases are broken by definition
<chrisseaton> Yeah we need a primitive or something to emulate that - I don't think Rubinius had that
<headius> no, they remained staunchly opposed even when a valid patch broke their VM
<headius> I can make code that will run under MRI that won't run under Rubinius or JRuby+Truffle...so who's implementing Ruby right?
<chrisseaton> Does that mathn module define basic Fixnum methods?
<chrisseaton> Or complex or something?
<headius> it overwrote / for a while but I'm not sure if it does anymore
<headius> and I explicitly excluded / from optimization because of that
<headius> AND it was very wrong for them to do it
<chrisseaton> Can't call it an optimisation if it's observable :)
<headius> and when truffle breaks, is that an optimization or a feature? :-)
<chrisseaton> It's a todo
<headius> anyway, you see my point...this is grey area
<chrisseaton> Sure
<headius> sure, and it's a todo for us as well
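The observability chrisseaton is pointing at is concrete: `case` dispatch on integer literals calls `Integer#===` (`Fixnum#===` in the Ruby of the day), so redefining it changes results, and an optimized table-lookup case/when needs an invalidation guard (e.g. an indy SwitchPoint) to stay correct. A contrived sketch (`classify` is a hypothetical name):

```ruby
# Redefining Integer#=== is observable through case/when, which is exactly
# what an unguarded fast path would get wrong.
def classify(n)
  case n
  when 1 then :one
  else :other
  end
end

before = classify(1)  # :one, via the original 1 === n

class Integer
  alias_method :__orig_threequals, :===
  def ===(other)  # deliberately break integer case/when
    false
  end
end

after = classify(1)   # now :other -- the redefinition is visible

class Integer         # restore the original behavior
  alias_method :===, :__orig_threequals
end
```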
<headius> you have different priorities
<enebo> hey can I interrupt this conversation for a hair-splitting question
<enebo> clearly you can guard against something changing per call at the site
<enebo> but for something like Array#min is a coarsened check good enough
<headius> I'm of the opinion that you shouldn't be able to change fundamental primitive operations like math or integer comparison, no matter what you do
<headius> and I think that's a pretty safe opinion to have
<chrisseaton> You already have a coarse check in some of your call sites, don't you?
<headius> some of them do, yes
<enebo> headius: regardless I think a reasonable fix for min would be one coarse check before the loop
<headius> the problem is that it's too coarse at the moment...just a "did they reopen fixnum" guard
<headius> we need one for each fixnum method in question
<enebo> since if someone changes cmp halfway through the loop it might be semantically wrong in some sense but that is insane
<headius> doable but not a high priority at the moment
<headius> enebo: MRI will only check once
<enebo> oh yeah that is true
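The "one coarse check before the loop" shape enebo describes, matching MRI's check-once behavior, can be sketched roughly like this. All names here are illustrative, not JRuby's actual code; `cmpIsBuiltin` stands in for "Fixnum#<=> has not been replaced":

```java
// Sketch of a min() fast path with a single coarse guard, checked once
// before the loop rather than on every comparison (as MRI does).
final class MinLoopSketch {
    static int min(int[] values, boolean cmpIsBuiltin) {
        if (!cmpIsBuiltin) return genericMin(values); // coarse check, once
        int min = values[0];
        for (int v : values) {
            if (v < min) min = v; // no per-iteration guard inside the loop
        }
        return min;
    }

    // Fallback path; a real implementation would dispatch <=> dynamically here.
    static int genericMin(int[] values) {
        int min = values[0];
        for (int v : values) if (v < min) min = v;
        return min;
    }
}
```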
<chrisseaton> Am I right in thinking that Rujit omitted all guards when compiled?
<enebo> we do not even know it is homogeneous
<headius> they do it in many places, but they never re-check on each dispatch
<headius> chrisseaton: one of them did, not sure if it was that one
<chrisseaton> I can imagine MRI saying we have a new JIT but it's got this feature where it makes your code faster but you can't change things when it's optimised
<headius> the one I remember that removed guards was the one that required you to run your program for a while and then compile the C it barfed out
<headius> and they still didn't usually beat JRuby + indy
<enebo> headius: yeah that was way back when ko1 was still at Tokyo U
<headius> chrisseaton: I think that's a reasonable thing to do given limited resources for making a JIT
<enebo> I moved DCE to the end and it did not affect parse code generated at all
<headius> enebo: the stuff emitted by the later passes must not deadify much code then
<enebo> but I guess it is only looking for dead instrs so perhaps this was not DCE running but that something is more conservative somewhere about marking instrs dead
<enebo> yeah I am guessing none
<headius> it will/may later
<enebo> headius: but I half wondered if we commented out something
<headius> certainly possible
<headius> I would have expected it to remove *something*
<enebo> I guess I will ask subbu|lunch later when he is around
<headius> what about no DCE pass at all?
<headius> that would tell us if it's actually even working
<enebo> headius: well it appears to be running before the fix too
<enebo> just really early in the passes
<enebo> like it will for sure kill some stuff we mark as dead quickly like recv_self
<enebo> so just removing it will show we are killing instrs
<headius> we do have some side-effect-free instrs still getting pushed through to JIT
<headius> recv_self isn't even useful when %self is already a special variable
<headius> enebo: small patch and fixnum case passes specs
<enebo> recv_self is always marked dead
<headius> I'll push and let travis chew on it
<headius> enebo: ok good
<enebo> it should not be there
<headius> we should just eliminate it
<enebo> I am less sure about current_scope or module
<enebo> although those also should be killed
<headius> yeah those are why I was talking about having more explicit data flow from scope or frame into instrs that need them there
<headius> we *should* only be adding them when needed, but that doesn't take into account any subsequent optimizations
<enebo> headius: hey I am all for adding the constituent parts
<enebo> headius: I just am hoping we can get rid of frame
<headius> agree
<enebo> ok
<enebo> they just need to be pervasive enough to really be able to know whether we can kill those individual parts (e.g. visibility) within a scope
<GitHub38> [jruby] headius pushed 1 new commit to fast_fixnum_case: https://git.io/voCpD
<GitHub38> jruby/fast_fixnum_case b4f2054 Charles Oliver Nutter: Binary search returns -(something) so broaden this test.
<enebo> pervasive enough in that IR can track those fields entirely
subbu|lunch is now known as subbu
<headius> yeah, like frame logic has visibility logic tacked on whether vis is used or not
<headius> chrisseaton: closing out that conversation...I agree in principle, but in practice it's picking a 0.01% "feature" that most people can't use without breaking everything anyway
<headius> we have bigger fish to fry
<chrisseaton> the only reason why I think it's a shame is that you already have the tech to handle it with zero overhead with your switch points
<headius> switch points don't optimize well from Java
<chrisseaton> oh right they'd complicate the java code
<headius> since you can't do invokedynamic from Java
<headius> not even that
<headius> they're not seen as constant
<headius> because we can't bind them to a site
<headius> method handles themselves only optimize if loaded from a static final
<headius> I mean optimize into the caller
<headius> nothing in JRuby is static final :-)
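The `static final` point above can be illustrated with a minimal, self-contained example: HotSpot treats a MethodHandle loaded from a `static final` field as a constant and can inline through it, while the same handle held in a mutable field will not fold, which is the problem for JRuby where nothing is `static final`:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

// A handle read from a static final field is constant-foldable by the JIT,
// so the invokeExact below can inline like a direct call to Math.max.
final class ConstantHandle {
    static final MethodHandle MAX;

    static {
        try {
            MAX = MethodHandles.lookup().findStatic(
                Math.class, "max",
                MethodType.methodType(int.class, int.class, int.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static int max(int a, int b) {
        try {
            return (int) MAX.invokeExact(a, b);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }
}
```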
<headius> Array#min in Ruby would work and guard properly, but then we don't have a cheap way to iterate using Fixnum
<headius> tradeoffs
skade has joined #jruby
<headius> I did prototype some code at one point that would post-process Java-based core methods and rewrite them to use indy
<headius> something like that could make it irrelevant whether we use Java or Ruby
<enebo> where is a min benchmark?
<headius> might be one in rbx benches
<headius> we might have one, I'm not sure
<headius> the min work was just to update compat with MRI for 2.3 so I didn't look at perf
<enebo> Ignore the IRManager line :)
<headius> that's about right, but I think it should be metaClass.getInvalidator
<headius> it will be switchpoint-based on indy and integer-based on non-indy
shellac has quit [Quit: Computer has gone to sleep.]
<subbu> what was the qn. i got pinged on? there is a lot of context before/after. :)
<enebo> headius: so object compare out of getData right?
<headius> no, just isValid
<headius> it abstracts the two ways of checking validity
<headius> I'm talking from memory, not sure about exact method names
<enebo> headius: I do not see an isValid method on that interface?
<headius> oh, bugger
<headius> hmm
<headius> let me see
<enebo> subbu: headius mentioned we disabled something because of the temp init not having false edges
<enebo> subbu: he said DCE but I think it has always been there (although maybe it is further up the line in ir.passes
<headius> enebo: bleh, ok...I guess there's still some hand logic required
<enebo> subbu: does that ring any bells?
<enebo> headius: so getData saved against current with ==?
<headius> enebo: yeah I guess you are right
<enebo> ok
<headius> that will work with both types of invalidator
<headius> one uses just object, other uses object and switchpoint
<headius> but getData will always be a way to check
<headius> I think I didn't have isValid because it would need to take in an existing value for the object form, but no value for switchpoint form
<headius> I guess I should just modify it to use a non-indy switchpoint-like thing
<subbu> enebo, no, .. anyway .. if you point me at some ruby code in which something is not getting dce-ed where you think it should, i can take a look in the night maybe.
<enebo> subbu: this is coming from headius' memory, not mine
<headius> I know we removed it or moved it because it wiped out my nil inits
<enebo> headius: I do vaguely remember that
<enebo> chrisseaton: you know of a Array.min micro bench?
<subbu> no ... it was in the wrong place back at that time. so, if DCE is on the list of passes, it will run.
<headius> oh enebo...I don't think this is quite right either
<headius> if fixnum changes, it will still grab new id and think it's ok
<subbu> so, anyway, if you have a ruby snippet you want me to look at, i'll.
<headius> this only checks for it changing along the way
<headius> not whether someone overrode Fixnum#<=> before the call to min
<enebo> headius: oh
<enebo> headius: once changed though it will always be off
<headius> you want runtime.isFixnumReopened
<headius> not the normal class generation guard
<headius> that replaces old ID with new ID
<headius> or invalidates switchpoint and creates a new one
<headius> isFixnumReopened is just a single global flag...this is why I said we need finer-grained checks
<enebo> so what is Data
<headius> it's an Object
<headius> as in new Object()
<enebo> headius: I did not want reopened because activesupport will kill the opt right?
<headius> activesupport already kills my guard
<headius> for adding *unrelated* methods
<headius> that's why this isn't the right mechanism for individual method guards
<enebo> headius: I just want something which is current id of state in some fashion
<enebo> then it would just work unless it was changed
<headius> if we saved Fixnum's ID at boot, you could compare to that
<headius> we don't right now
<enebo> Is generation no longer used?
<headius> I have to check
<headius> it may be cruft after I moved to Invalidator abstraction
<enebo> because generation solves it more or less
<enebo> unless you overflow the type in the loop back to the same generation :)
<headius> well generation won't work either
<headius> you're still assuming a pure Fixnum by grabbing generation within min
<headius> you need to know the generation/id/invalidator for a pristine Fixnum to do this guard
<enebo> so you mean if someone extends Fixnum
<enebo> I was assuming any changes to the type would increment or change something to be different
<headius> occurs to me now I could start putting invalidator on CallEntry, since that's also what we use for invalidating at call site
<enebo> so a store up front should protect against change
<headius> CallEntry itself could be a SwitchO
<headius> SwitchPoint
<enebo> so how does this actually work?
<headius> you can't extend Fixnum
<headius> enebo: yeah but change it from what?
<travis-ci> monkstone/jruby (master:6244f18 by Martin Prout): The build was broken. (https://travis-ci.org/monkstone/jruby/builds/137319665)
<headius> you don't know what the right initial Fixnum generation was
<headius> you're only checking if it changes while you're in the loop
<enebo> oh I guess I can do one hash up front as well
<enebo> that is not the end of the world
<headius> you need to know what it was before patching
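The distinction headius is drawing, that you must capture the *pristine* state up front rather than grab the current generation inside min(), can be sketched with a simple token comparison. This is a hypothetical sketch; the names and mechanism are made up, not JRuby's actual Invalidator code:

```java
// Sketch of guarding against a pristine class state: save the generation
// token at boot, then compare before entering a fast path. Grabbing the
// token only inside min() can't tell you whether Fixnum was patched
// *before* the call; the boot-time token can.
final class PristineGuard {
    private static Object bootToken;     // captured once at boot
    private static Object currentToken;  // replaced on every reopen

    static void boot() { bootToken = currentToken = new Object(); }

    static void reopenFixnum() { currentToken = new Object(); } // invalidate

    static boolean fixnumIsPristine() { return currentToken == bootToken; }
}
```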
<enebo> I can check boolean isBuiltin before the loop for cmp on the first element
<headius> obviously this all needs to coalesce into fewer guard mechanisms
<headius> making CallEntry be the invalidator is sounding right to me
<enebo> if it is not builtin then I pass in null
<enebo> headius: well hey that sounds a lot less adhoc :)
<headius> yeah
<headius> then every method in the system would have an invalidator/switchpoint associated with it
<enebo> headius: but I can definitely fix this patch with an isBuiltin() check
<enebo> when I save TypeId
<headius> invalidating one would invalidate everywhere that stale method was called, so there's less of an invalidation cascade
<headius> yeah
<headius> that's the right way for now
<headius> or accept that any reopen invalidates and use the flags in Ruby
<enebo> yeah I think I am fine with this since it is a little more flexible
<enebo> we don’t care if it has been reopened ever…just that cmp is the same and the type has not changed in the loop
<headius> right
<headius> we just want to make sure cmp is the original one
<headius> better invalidation on per-method basis will make that easier in the future
<headius> at boot we'll just gather up some key invalidators we're interested in and know on a method-by-method basis what core stuff has been overwritten
<headius> enebo: I have also thought about reworking CallSite to just be indy CallSite too
<headius> if it's just binding directly to DynamicMethod it shouldn't be any worse than what we have
<enebo> headius: well we should check warmup
<headius> and then -Xcompile.invokedynamic would push indy all the way through that properly
<headius> yeah dunno what the warmup characteristics are for j.l.invoke.CallSite used manually
<headius> it's mostly a plain Java class
<headius> but CachingCallSite < j.l.i.CallSite + CallEntry < SwitchPoint would condense a lot of logic
<headius> plus we could use the same call sites in interp and in JIT and not lose profile
<headius> super yay
<headius> plus we could feed IR profiling into indy call sites
<headius> since they're the same sites
<headius> imagine being able to take a polymorphic cache directly from IR and stuff it into indy
<headius> already hot
<enebo> sounds good when you say it
<enebo> :)
<headius> hmmm
<headius> our hierarchy-based invalidation could be modified to only invalidate that method name down-hierarchy
<headius> that would be huge
<headius> changing one method in Kernel wouldn't blow cache for every method in the system then
<headius> yeah this is the way to go
<headius> 9.2
esmiurium has quit [Ping timeout: 272 seconds]
<headius> looks good
<headius> remove cmp_opt commenting...that's what you're adding in
<headius> in MRI it's static bool[] = {0}
<headius> they flip that bit once it fails
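MRI's one-way flag can be rendered roughly like this in Java. The trigger here is simplified to an operand-type mismatch purely for illustration (in MRI the flag tracks redefinition of the builtin comparison); everything in this sketch is hypothetical:

```java
// Rough rendering of MRI's one-way cmp_opt flag: assume the builtin
// comparison holds until the assumption first fails, then flip the bit
// once and take the slow path forever after.
final class CmpOpt {
    static boolean enabled = true;

    @SuppressWarnings("unchecked")
    static int compare(Object a, Object b) {
        if (enabled && a instanceof Integer && b instanceof Integer) {
            return Integer.compare((Integer) a, (Integer) b); // fast path
        }
        enabled = false; // flipped once; never reset for the rest of the run
        return ((Comparable<Object>) a).compareTo(b);
    }
}
```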
<GitHub159> [jruby] chrisseaton pushed 5 new commits to master: https://git.io/voWJH
<GitHub159> jruby/master 3c64f60 Chris Seaton: [Truffle] Expression in single-quoted string.
<GitHub159> jruby/master 7771a7f Chris Seaton: [Truffle] Add samples to metrics alloc JSON output.
<GitHub159> jruby/master b11ca8f Chris Seaton: [Truffle] Add samples to metrics time JSON output.
<enebo> yeah I will also remove the two fixmes
<enebo> maybe add a comment about chrisseaton
* enebo kids
<headius> hahah
yfeldblum has quit [Ping timeout: 250 seconds]
<enebo> wow emacs crashed!
<headius> jeez, per-method invalidation might be really easy to add
<headius> I'm halfway through it already
<headius> and if a method wasn't overridden in a class, no invalidate happens at all
<headius> class generation just goes away
<headius> still has to walk down hierarchy though
<headius> oh wait...oh man, it doesn't
<headius> I just borrow the CacheEntry from superclass
<headius> invalidate top one and everyone that cached it below will invalidate at once
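The per-method invalidation scheme being described maps directly onto `java.lang.invoke.SwitchPoint`: every call site that guarded on a method's SwitchPoint falls over to its fallback the moment that one SwitchPoint is invalidated, with no hierarchy walk. A self-contained sketch (illustrative names, not JRuby's actual CacheEntry code):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.SwitchPoint;

// One SwitchPoint per cached method entry. guardWithTest routes calls to
// the fast target while the SwitchPoint is valid, and to the fallback
// once it has been invalidated.
final class PerMethodGuard {
    static final SwitchPoint methodSwitchPoint = new SwitchPoint();
    static MethodHandle site;

    static String fast() { return "cached"; }
    static String slow() { return "revalidated"; }

    static {
        try {
            MethodHandles.Lookup l = MethodHandles.lookup();
            MethodType t = MethodType.methodType(String.class);
            site = methodSwitchPoint.guardWithTest(
                l.findStatic(PerMethodGuard.class, "fast", t),
                l.findStatic(PerMethodGuard.class, "slow", t));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static String call() {
        try {
            return (String) site.invokeExact();
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }
}
```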
<headius> dark matter
skade has quit [Quit: Computer has gone to sleep.]
Aethenelle has joined #jruby
skade has joined #jruby
<travis-ci> monkstone/jruby (master:dd8fceb by Martin Prout): The build was broken. (https://travis-ci.org/monkstone/jruby/builds/137320049)
<headius> I wonder what mr prout is working on
skade has quit [Quit: Computer has gone to sleep.]
Aethenelle has quit [Quit: Aethenelle]
camlow325 has quit [Ping timeout: 250 seconds]
hobodave has quit [Quit: ["Textual IRC Client: www.textualapp.com"]]
<headius> enebo: heh, this is even better than I thought...if a method getting replaced has never been cached, there's no invalidation cost at all
<headius> we invalidated hierarchy regardless before
<headius> we will have a lot more SwitchPoint in flight though
<headius> one per method definition at most
pawnbox has quit [Remote host closed the connection]
pawnbox has joined #jruby
camlow325 has joined #jruby
<GitHub128> [jruby] chrisseaton pushed 1 new commit to truffle-head: https://git.io/voWZX
<GitHub128> jruby/truffle-head 21ddeb3 Chris Seaton: Merge branch 'master' into truffle-head
prasunanand has quit [Quit: Leaving]
yfeldblum has joined #jruby
norc__ has quit [Quit: Leaving]
norc has joined #jruby
cprice404 has joined #jruby
pawnbox has quit [Remote host closed the connection]
pawnbox has joined #jruby
e_dub has quit [Ping timeout: 250 seconds]
camlow325 has quit [Remote host closed the connection]
camlow325 has joined #jruby
camlow325 has quit [Remote host closed the connection]
<GitHub93> [jruby] atambo opened issue #3964: LoadError: load error: jopenssl/load -- java.lang.NoSuchMethodError: org/jruby/gen/org$jruby$ext$openssl$OpenSSL$POPULATOR.populateMethod https://git.io/voW0J
<headius> chrisseaton: are you doing anything to defer heavy optimizations during early stages of execution?
<headius> i.e. like we delay even going to JIT until stuff gets hot enough
<headius> I'm starting to wonder what total cost we have just from prematurely caching stuff
tcrawley is now known as tcrawley-away
camlow325 has joined #jruby
camlow325 has quit [Remote host closed the connection]
pawnbox has quit [Read error: Connection timed out]
pawnbox has joined #jruby
camlow325 has joined #jruby
shellac has joined #jruby
camlow325 has quit [Remote host closed the connection]
camlow325 has joined #jruby
<chrisseaton> we do all our caching in the interpreter without any delay, but we don't JIT until a threshold of iterations
<chrisseaton> so the first time you do an eval, we'll cache that, even if it's only run once
<chrisseaton> maybe we want to back that off a bit, I'm not sure
<chrisseaton> probably hurts memory more than anything else - setting up a cache is just allocating a couple of objects
<chrisseaton> headius: ^
skade has joined #jruby
shellac has quit [Quit: Computer has gone to sleep.]
<chrisseaton> I mean, the cache is quick to allocate, but it hangs onto who-knows-what memory
yfeldblum has quit [Remote host closed the connection]
skade has quit [Quit: Computer has gone to sleep.]
yfeldblum has joined #jruby
<headius> chrisseaton: what about the AST transformations? Do those start happening immediately?
camlow32_ has joined #jruby
<headius> startup time might be improved a lot if that were deferred too
<chrisseaton> Yes - keep in mind Truffle was originally designed to just be a way to speed up interpreters
<headius> I just botched part of my caching experiment so it was caching and invalidating during boot time, and it increased our *base* startup threefold
<headius> there's some untold amount of startup time lost by caching and invalidating while booting e.g. a large Rails app
<headius> expensive for us, but potentially REALLY expensive for you since you make assumptions earlier
<chrisseaton> but the cost of rewriting a cache in the interpreter is tiny - just allocate a new small object
<chrisseaton> we probably aren't compiling much while loading (maybe some stuff involved in hashing I think)
camlow325 has quit [Ping timeout: 276 seconds]
<chrisseaton> I shaved more than a second off our startup time the other day, so there is still low hanging fruit
<headius> sure...but then your boot time may be creating scads of objects while our boot time just doesn't cache
<headius> it's a tough balance for sure
<headius> the fastest Ruby startup in the world right now is MRI's plain old bytecode interpreter that does no real magic at all...competing with that and doing ANY early speculation is really hard
<headius> and the fastest JRuby startup...was the plain old AST interpreter that did nothing special either
<chrisseaton> I'm not too worried about this - SVM has tens of ms startup time
<chrisseaton> Maybe that shows that the interpreter implementation code performance is the biggest thing
<chrisseaton> Things like AOTing the lexer
<chrisseaton> SVM might be slower though - we recently did things like starting to load RubyGems on startup, and haven't tried since then
<headius> the fact that most of Ruby is going to boot dynamically makes SVM a pretty weak solution to me
<enebo> chrisseaton: when will SVM load Java code from the classpath?
<headius> you'll get your core compiled ahead of time, but the rest will be just as slow as now
<headius> and for a large app, that's considerably more code than core
<chrisseaton> enebo: when someone writes a Java byte code interpreter
<enebo> chrisseaton: I have always wondered why that was not one of the first things made
<headius> me too
<chrisseaton> we're still exploring how best to make byte code interpreters in the sulong project
<headius> somehow llvm is more interesting than jvm for a jvm-based language framework?
<headius> I don't get that at all
<chrisseaton> it's only SVM that can't class load, of course GraalVM still can
<headius> especially given the huge limitation that being unable to integrate with Java poses
<headius> right, just SVM
<headius> but when we're talking about *most* JRuby apps using some Java code, usually loaded dynamically from a gem, SVM is not an option
<chrisseaton> we're aiming to run the code designed for MRI though
<chrisseaton> and their C extensions
<headius> yes, because it's such good code :-)
<chrisseaton> Java interop is important to us, and we have a state-of-the-art interop system in the works, with several papers on that already
<headius> I don't have a lot of confidence that any MRI C ext will work in a parallel environment...not sure if you have a plan to address that
<headius> we have written our Java-based exts all the while with parallelism in mind, while people write MRI C exts with parallelism explicitly *not* part of the equation
<chrisseaton> it's an open question
<headius> well, it's not totally open, since Rubinius has had to force locking around known-unsafe exts already
<headius> and that's just ones that have been reported to blow up under load
<headius> you can always lock around all exts of course
<headius> hmm, though even that, I'm not sure
<headius> if they start mutating stuff from a C ext that's not thread-safe with the main runtime, you're still stuck
<chrisseaton> Petr is building a memory model for Ruby, so we're thinking about this at a deep level
<headius> I agree it *can* be done in the future, but no existing C ext will be honoring that memory model when run parallel
<enebo> I guess it might be a path forward though
<enebo> if someone does formalize something and MRI finally just makes a second API then perhaps we can all be happy
<enebo> At least it is what I always root for
<chrisseaton> SVM solves something really important - deploying without a JVM, and tens of ms startup, at the very least for small apps - something JVM languages don't have any other ideas for
<headius> I have AOT compiled JRuby with available JVM AOT options before, and gotten decent startup improvement
<headius> we also don't *want* to boot without the JVM...that's kinda what the J is there for :-)
<headius> booting without the JVM means booting without JVM-based libraries...one of the biggest selling points for using JRuby in the first place
<headius> chrisseaton: don't get me wrong...I'm beating on an old drum
<enebo> chrisseaton: will SVM also include Graal ever?
<chrisseaton> SVM includes Graal today
<headius> I just don't see that SVM is a solution to the kind of deployment/dev scenarios that startup slow today
<enebo> so maybe it will end up being a JVM anyways at some point
<chrisseaton> Its peak performance is about the same as Graal on a JVM, so you get hello-world in a few ms, and peak performance faster than indy
<enebo> I guess though by that logic we should shitcan openjdk and just use svm
<headius> it's not an answer to say "we have a solution for startup: don't use JRuby Java integration features ever"
<chrisseaton> Well maybe it's not the solution you want
<headius> it's not the right solution for JRuby :-)
<headius> bottom line
<headius> it might be the right solution for a Ruby impl that leverages only MRI C exts and FFI libraries
<enebo> yeah I was thinking that
<enebo> if it is literally to replace what MRI is it makes sense to me
<headius> right
<enebo> which might be Ruby 3x3 :)
<headius> but it's not an answer to "JRuby starts up slow" because it can't do a bunch of what JRuby does
<chrisseaton> There will be a Java byte code interpreter at some point as well
<chrisseaton> I mean it should be pretty easy to map that onto a Java interpreter
<enebo> chrisseaton: yeah if it can provide JI and startup quick and maybe not be super quick I think that would be a big improvement for dev env experience
<headius> that will be very interesting to see
<headius> classloading, security, etc would have to come along with
<enebo> since speed generally is not that important for dev envs compared to startup
<headius> but if that all gets done, it could be a good option
<enebo> yeah JI cannot really be done without CL
<chrisseaton> you can also build a custom SVM image with all the Java classes you need baked in
<enebo> security I don’t know how important that is though
<headius> chrisseaton: can you do ClassLoader.load("SomeClass")?
<chrisseaton> Not without the byte code interpreter
<headius> reflective access to the classes?
<chrisseaton> SVM currently has no knowledge of bytecode
<chrisseaton> No reflection
<headius> tricky bits, for sure
<chrisseaton> you guys seem pretty negative about truffle and graal today
<headius> just balancing your positivism :-D
<headius> I know, mostly devil's advocate sort of thing from my perspective
<enebo> chrisseaton: I don’t think I have said a single negative thing
<headius> this is all really exciting stuff, and there's a lot of big challenges ahead
<headius> I'm being realistic
<headius> fwiw are the discussions about memory model happening somewhere I can see?
<headius> I have not followed concurrent-ruby gitter for a while...maybe they're happening there or somewhere I don't know about
<chrisseaton> Yeah Petr has an epic bug somewhere on the MRI tracker
<enebo> chrisseaton: neat…so MRI folks are working with him?
<chrisseaton> I don't think actively https://news.ycombinator.com/item?id=11898034
<headius> ahh yes...I have followed some of this
<chrisseaton> sorry yeah that one