<hcatlin>
Specifically, the message "Warning: the --client flag is deprecated and has no effect most JVMs" is getting populated into our test results
mark_menard has quit [Remote host closed the connection]
mark_menard has joined #jruby
mark_menard has quit [Ping timeout: 260 seconds]
subbu is now known as subbu|lunch
<enebo[m]>
hcatlin: I do not know why but rvm is specifying that after installation. rvm 1.29.3 is what your CI is using and that version was released at some point in 2018. Setting an explicit export JRUBY_OPTS="" might solve it if it is not simple to update rvm on travis.
<hcatlin>
oh wow, yeah I didn't actually notice the RVM version
<hcatlin>
good catch.
<enebo[m]>
JRUBY_OPTS="--dev" is also a carrot in that it typically speeds up ci runs as most code is only run once so optimization is less important
<enebo[m]>
hcatlin: it is possible I am wrong about what is setting --client as an option, but that warning starts on the first ruby command after rvm install
<headius[m]>
Travis might be setting that environment
<headius[m]>
Update rvm for sure but also look through the environment settings above
subbu|lunch is now known as subbu
<enebo[m]>
headius: it might be travis but I think we would see lots of warnings across ci if so
<headius[m]>
Yeah it may not be Travis but up until recently they were still setting flags like the one enabling C extension support
<enebo[m]>
headius: yeah I will not lay money on anything involving travis env :)
<headius[m]>
mark_menard: yeah well if you figure it out I'll be keen to know but I'm glad you got past it
<headius[m]>
enebo: so this gzip thing I mentioned on friday, it was from a tweet
<headius[m]>
I had him send me an example file and both CRuby and JRuby unpack less than the total content of the file, compared to gunzip
ur5us has joined #jruby
<headius[m]>
$ gunzip --stdout test.gz | wc
<headius[m]>
155 4805 95218
<headius[m]>
that's the correct size
<headius[m]>
$ gunzip -l test.gz
<headius[m]>
compressed uncompressed ratio uncompressed_name
<headius[m]>
8874 19680 54.9% test
<headius[m]>
that's about the size that we and CRuby unpack
<headius[m]>
have you ever seen anything like that?
<enebo[m]>
HAHA
<enebo[m]>
so a 95k file only ends up as 19k on Ruby?
<enebo[m]>
err sorry you are showing me two things here
<headius[m]>
yeah it seems like we and CRuby (both zlib-based though ours is a Java port) stop reading once they have the full reported size of the file
<headius[m]>
those are both just gunzip command lines, but the headers for the gzip file show a completely different total size than is actually there
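The size gunzip -l prints comes from the 4-byte ISIZE trailer at the very end of the file (RFC 1952: uncompressed size mod 2^32), which for a stream made of several members reflects only a single member rather than the whole stream. A minimal Ruby sketch to read it, assuming test.gz is the example file:

    # last 4 bytes of a gzip file = ISIZE: little-endian uncompressed size
    # mod 2**32; misleading for multi-member streams and for files over 4GB
    isize = File.open("test.gz", "rb") do |f|
      f.seek(-4, IO::SEEK_END)
      f.read(4).unpack1("V")
    end
    puts isize  # => 19680 for the file discussed above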
<headius[m]>
I posit that tools are ignoring that header and just unpacking until end of stream
<enebo[m]>
So if it is a growing file it stops at whatever the original stat reported size was?
<enebo[m]>
Or what is written is actually wrong and it honors it
<headius[m]>
this file was reportedly generated by AWS gzipping log files
<enebo[m]>
vs pretending that info is correct
<headius[m]>
somewhere, somehow
<headius[m]>
it seems like a busted header doesn't it?
<enebo[m]>
I could say the turd reference could be applied to something else as well
<headius[m]>
I mean why would the headers say the file will be 19k but it's actually 95k
<enebo[m]>
Interesting to see gunzip not care but then if you cannot trust the written size why write it
<enebo[m]>
I have no opinion on who is more wrong here but if gunzip/python/go all ignore it and just read the data then perhaps that is more right?
<headius[m]>
it may be
<headius[m]>
I have been trying to find more info on this situation
<enebo[m]>
I don't know though. If I had a file of unknown origin and I uncompressed the first n bytes and it had m more, I would really wonder if I am actually getting real data or not
<enebo[m]>
with that said if someone can add m more bytes then they probably can add a new header length
<headius[m]>
yeah I am assuming we and CRuby have code that looks at header and unpacks that much data, because it's consistently the same amount
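What that looks like from Ruby, assuming test.gz is the same example file:

    require 'zlib'

    Zlib::GzipReader.open("test.gz") do |gz|
      # GzipReader stops at the end of the first gzip member, so this
      # prints roughly the 19680 from gunzip -l, not gunzip's 95218
      puts gz.read.bytesize
    end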
<enebo[m]>
So I wonder what gunzip does if it says it has 19k but it only has 5k?
<headius[m]>
you do write headers first and then start compressing the data, so if the file size changed during compression perhaps this is what you get
<headius[m]>
heh yeah similar situation... would CRuby just blow up because it can't read all 19k?
<enebo[m]>
in a pure stream I imagine you cannot go back and write it but in that case why would it write any length?
<enebo[m]>
unless it writes it at the end or something
<enebo[m]>
but that would be even weirder for it to be wrong
<enebo[m]>
I have not looked at the structure of gzip data in 25 years
<headius[m]>
this seems to be the same behavior, size is only reflecting the first file provided
<headius[m]>
arguably it's a gzip bug
<headius[m]>
this would also explain why this file uncompresses exactly 32 lines plus newline and then quits... that was the end of the first file
<headius[m]>
there doesn't seem to be a way to get gzip to show the multiple files used
<enebo[m]>
heh...so what is the turd now
<enebo[m]>
is it people concat'ing n gzip files into a single file and expecting it to work because gunzip does
<enebo[m]>
I guess I don't even care and it appears zcat will work
<headius[m]>
basically gzip allows you to specify multiple files that are logically concatenated into the gzip stream
<headius[m]>
uncompressing normally will just produce the cat'ed content of those files
<headius[m]>
that behavior is not directly supported at the zlib level until the addition of zcat... so presumably gunzip and friends just did the logic at the tool level up until then
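A tiny Ruby sketch of what such a stream looks like; Zlib.gzip (Ruby 2.4+) produces one complete gzip member per call:

    require 'zlib'
    require 'stringio'

    # two complete gzip members back to back, like `gzip -c a.txt b.txt > both.gz`
    stream = Zlib.gzip("first file\n") + Zlib.gzip("second file\n")

    # member-aware tools (gunzip, zcat) would yield both files' contents;
    # GzipReader stops after the first member
    puts Zlib::GzipReader.new(StringIO.new(stream)).read  # => "first file\n"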
<headius[m]>
but GzipReader in Ruby does not do it
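One way to approximate the gunzip behavior from Ruby today, sketched on the assumption that GzipReader#unused hands back whatever bytes were over-read past the current member:

    require 'zlib'

    # read every member of a multi-member gzip file (test.gz from above)
    File.open("test.gz", "rb") do |file|
      until file.eof?
        gz = Zlib::GzipReader.new(file)
        print gz.read                       # contents of this member only
        unused = gz.unused                  # bytes read past the member's end
        gz.finish                           # end the member, leave file open
        file.pos -= unused.bytesize if unused
      end
    end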
<byteit101[m]1>
headius: Let me know if there's a better spot to discuss concrete constructors in than https://github.com/jruby/jruby/issues/449 (ooh, 3 digit issue!)
<headius[m]>
Hah yeah knock down those old bugs
<byteit101[m]1>
I filed that issue, and then promptly copied the javafx launcher monstrosity into ruby :-D
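For context on the issue itself, a minimal JRuby sketch of extending a concrete Java class; the class and method here are only illustrative, and the issue linked above is about giving such subclasses real ("concrete") Java constructors:

    require 'java'

    # subclassing a concrete Java class from JRuby works at the method level;
    # mapping Ruby-side constructors onto real Java ones is the open question
    class FilteredList < java.util.ArrayList
      def add(element)
        super unless element.nil?   # drop nils, otherwise defer to ArrayList
      end
    end

    list = FilteredList.new
    list.add("a")
    list.add(nil)
    puts list.size  # => 1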
sagax has quit [Read error: Connection reset by peer]
mark_menard has quit [Remote host closed the connection]