15:41:01 <itchka> #startmeeting
15:41:01 <chwido> Woof! Let's start the meeting. It's Wed Jun 20 15:41:01 2018 UTC. The chair is itchka. Information about me at https://wiki.openmandriva.org/en/Chwido.
15:41:01 <chwido> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:41:01 <chwido> You should add extra chair(s) just in case, with #chair nick.
15:41:01 <chwido> Have a good meeting, don't bark too much!
15:41:02 <bero> that should give packagers more time
15:41:03 <ben79> and I'd like to propose a specific Package Management Rule or Guideline for out-of-tree kernel modules
15:41:33 <ben79> bero: OK
15:41:54 <bero> out-of-tree kernel modules are evil and should be avoided if at all possible...
15:42:09 <bero> the right way to handle them is to build them inside the kernel package, the way we do with virtualbox
15:42:19 <ben79> the world should be perfect and people should all be nice
15:42:24 <bero> but of course for a few super crappy ones (NaziVidia dirt etc) that's not possible because of licensing
15:42:26 <Pharaoh_Atem> well, if you're going to do kmods, then we should adopt Fedora's akmod system
15:42:33 <bero> NaziVidia shitheads should be shot dead
15:42:42 <Pharaoh_Atem> that way all the kmods are registered as rpms correctly
15:42:54 <Pharaoh_Atem> rather than the pile of garbage that is dkms
15:43:04 <ben79> I don't like nVidia any more than bero or Linus do...
15:43:38 <itchka> I think there's a bit of a problem doing that with virtualbox if the kernel gets ahead of the installed VB..
15:44:23 <bero> itchka: The way it's done is the virtualbox package packages the source for its kernel modules and the kernel package picks it up from there and merges it into the tree
15:44:27 <ben79> I don't know how kernel modules will be handled in Lx 4
15:44:31 <bero> itchka: always in sync...
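A rough illustration of the mechanism bero describes: the virtualbox package ships its kernel module sources, and the kernel package build compiles them against the tree it has just built, so kernel and modules cannot drift apart. In kbuild terms the step amounts to something like this (the paths are placeholders, not OpenMandriva's actual layout):

    # build the vbox module sources shipped by the virtualbox package against the
    # kernel tree that was just built inside the same kernel package build
    make -C /path/to/kernel-build M=/usr/src/vboxhost modules
    make -C /path/to/kernel-build M=/usr/src/vboxhost modules_install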
15:44:41 <ben79> will we still use dkms for nvidia
15:44:59 <bero> ben79: so far, same as before, if it isn't in the kernel package it's not supported, but dkms might work
15:45:22 <Pharaoh_Atem> bero: we should split vbox out into an akmod
15:45:49 * bero has never looked at akmod, so no opinion on that right now
15:46:27 <Pharaoh_Atem> https://src.fedoraproject.org/rpms/akmods
15:46:58 <itchka> bero: That's good then, but does that work when you are running a VM with a later kernel on an earlier version of VB which also has earlier kernel modules?
15:47:02 * ben79 And right now I'm PO'd at OM nVidia users whining and complaining about something that has been an issue for more than 10 years that I know of
15:47:29 <bero> Pharaoh_Atem: Sounds like potentially better than dkms, but I strongly prefer building the modules inside the kernel package -- that way users don't need a compiler installed and won't get confused by compiler errors at boot time
15:47:48 <Pharaoh_Atem> bero: fair, but we could also just produce kmod packages in tandem
15:47:51 <Pharaoh_Atem> Mageia does this, too
15:48:50 <bero> itchka: works as long as VB doesn't change the ABI -- but that's the same problem if you run VBox 2.0 with VBox 5.12 kernel modules that were built with dkms or in the virtualbox package or anywhere else
15:49:10 <bero> Pharaoh_Atem: We tried that a number of times, doesn't work, people simply don't think of building the packages at the same time
15:49:36 <Pharaoh_Atem> bero: I would propose we trigger kmod package builds automatically rather than having people do it
15:49:43 <bero> Pharaoh_Atem: we could think about that if abf added some functionality along the lines of "whenever the kernel package is built, automatically follow up with modules a, b, c, ..."
15:49:54 <Pharaoh_Atem> yes
15:49:57 <bero> Pharaoh_Atem: yes, but that would require very serious work on abf
15:50:07 <bero> and I don't think anyone who could do that has time right now
15:50:09 * ben79 Yes, I'm liking this direction...
15:50:15 <Pharaoh_Atem> or an external tool to just query and queue build jobs
15:50:25 <Pharaoh_Atem> we _could_ do that, have it live as a daemon and do it
15:50:39 <itchka> Isn't this just a chain build?
15:50:42 <bero> We have that, e.g. to build all of kde -- abf chain_build
15:50:57 <itchka> so do it with abf-console-client
15:51:30 <bero> the problem with that is that people need to actually use it (as opposed to clicking their way through the web page) and people need to do something if the chain_build errors out on something (genuine build failure or metadata generation error)
15:51:53 <Pharaoh_Atem> bitch with emails :D
15:51:57 <bero> unfortunately some people are forced to use Windoze boxes at work so they throw stuff into abf with the web frontend while they're at work
15:51:59 <Pharaoh_Atem> but anyway, it's definitely doable
15:52:22 <Pharaoh_Atem> it's something I've wanted to build for copr stuff for the same reason: https://pagure.io/copr-autorebuilder
15:52:36 <Pharaoh_Atem> but there's no reason that the code couldn't be extended to interface with abf via abf-c-c
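A minimal sketch of such an external trigger, assuming it polls the repository and shells out to abf-console-client; the dnf repoquery call is stock dnf, but the kernel package name, the polling interval, and the abf chain_build invocation are assumptions, not documented ABF usage:

    #!/bin/sh
    # hypothetical watcher: when a new kernel shows up in the repo, queue kmod rebuilds
    last=""
    while sleep 600; do
        cur=$(dnf repoquery --latest-limit=1 --qf '%{version}-%{release}' kernel 2>/dev/null | tail -n1)
        [ -n "$cur" ] && [ "$cur" != "$last" ] || continue
        echo "new kernel $cur detected, queueing module rebuilds"
        abf chain_build virtualbox-kernel nvidia-current   # assumed syntax and package names
        last="$cur"
    done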
15:55:40 <bero> yes, but it has to be done before we can say someone needs to use it ;)
15:56:15 <ben79> copr looks like a solution to my non-technical brain
15:56:15 <bero> I'm also working on tools that try to automatically find the BuildRequires: for a package (most common kapps build failure in abf...)
15:56:34 <bero> but can't tell people to use that before it's ready either ;)
15:57:50 <Pharaoh_Atem> yep
15:58:04 <Pharaoh_Atem> my code does nothing right now, but it's something I want to make happen eventually
15:58:18 <Pharaoh_Atem> now that we have DNF, we can actually do a ton of fun things with it
15:59:34 <itchka> Could we not extend chain build a bit to cope with simple failures?
16:02:01 <Pharaoh_Atem> probably, but how would we determine "simple" failures?
16:02:09 <Pharaoh_Atem> also build cycles would need to be figured out
16:02:45 <bero> well, the metadata ones should simply not occur anymore at some point when abf is fixed...
16:02:56 <itchka> Well dealing with the metadata issue shouldn't be too difficult
16:02:58 <bero> but so far no success on that...
16:03:27 <bero> and I'm not a big fan of a hacky workaround like detecting it from the next package build ;)
16:03:28 <itchka> if the package is in the repos then the metadata needs a rebuild
16:04:11 <itchka> I know I'm very naughty when it comes to that sort of thing!!! :)
16:06:39 <ben79> So can we agree regarding Packaging Guidelines that when mass rebuild starts we take the Mageia doc and edit it to our needs?
16:06:47 <bero> ben79: yes
16:07:58 <ben79> And can we agree to the need for some kind of guidelines for the building of kernel modules, even evil nazi ones? Or maybe some kind of automation process for this?
16:09:31 <bero> ben79: yes, that too -- but I think we also need to agree that users need to be aware of the fact that nazi stuff can break at any time (e.g. they simply haven't made a version compatible with a kernel we're switching to) and they should really be using something else
16:10:04 <bero> once we have a mechanism for automatically triggering other builds, the situation should get a little better, but it will never be perfect
16:10:19 <bero> simply because we can't get the Nazis to release stuff that works even marginally when we need it
16:10:42 <ben79> I try my damnedest to get nvidia users to realize that we are at the whim of nVidia on graphic drivers
16:10:43 <bero> Fix #1: boycott Nazividia, fix #2: use and improve nouveau
16:11:53 <ben79> We should continue to remind all nVidia users to try nouveau every so often and if it works for them by all means use it, encourage open source software
16:12:50 <itchka> bero: There's one thing about nouveau that looks like it will never be fixed and that's multi-threading. I seem to remember seeing some comments from the maintainer where he said that the code base was unsuitable for multi-threading.
16:13:44 <ben79> nVidia is likely to remain one of the major problems for the foreseeable future. Until users start to realize they don't need it and should not want it...
16:14:17 <itchka> The best bet is just to use AMD/ATI
16:14:43 <bero> It's not like the binary drivers support multi-threading properly though
16:15:03 <bero> yes, use AMD/ATI or Intel or Adreno or Vivante
16:16:30 <ben79> So can we get some #action on these
16:17:06 <itchka> #chair ben79
16:17:06 <chwido> Current chairs: ben79 itchka
16:17:16 <itchka> off you go Ben
16:17:42 <itchka> You can create action points now
16:18:20 <itchka> I'm trying to fix grub2 so my mind is only half on the job
16:19:01 <ben79> #action 1. When mass rebuild starts we (at least bero and ben79 and Pharaoh_Atem) will take Mageia Packaging Guidelines and edit them to suit OpenMandriva's needs.
16:19:14 <ben79> did I do that right?
16:19:36 <itchka> Yes I think so
16:20:34 <bero> yes, I think so too
16:20:49 <bero> We may also want to #share something about it to make rugyada happy ;)
16:22:24 <ben79> #action 2. We have agreed on the need for a policy and method to link building of out-of-tree kernel modules to the release of a new kernel package (when the kernel package goes into the testing repo)
16:22:53 <ben79> So how do we share the above? replace #action with #share?
16:22:57 <bero> yes
16:23:34 <ben79> #share 1. When mass rebuild starts we (at least bero and ben79 and Pharaoh_Atem) will take Mageia Packaging Guidelines and edit them to suit OpenMandriva's needs.
16:24:06 <ben79> #share 2. We have agreed on the need for a policy and method to link building of out-of-tree kernel modules to the release of a new kernel package (when the kernel package goes into the testing repo)
16:25:28 <ben79> Now QA-Team is going to need to come up with guidelines for testing packages and for testing ISOs too
16:26:09 <itchka> They already exist
16:26:31 <ben79> Are they in our wiki?
16:26:36 <itchka> I have pointed to them loads of times
16:27:40 <itchka> No they are in the gdrive that I created to hold all QA documents.
16:27:49 <ben79> I know about the one for Release testing; do we have a document for package testing anywhere?
16:28:49 <ben79> https://docs.google.com/document/d/1ohi4Sf4Tw3tFBQDJVj4VXBRpTyR_u5giGCmKo-R111g/edit
16:29:18 <itchka> I wouldn't want to write one of those... there are guidelines for testing packages in the iso test procedure.
16:29:46 <itchka> for packages read programs
16:32:02 <ben79> You don't want to document how packages are approved in Kahinah?
16:32:05 <bero> Maybe we need to link that gdrive from the wiki
16:32:11 <bero> should be more public...
16:32:41 <ben79> Yes if we have QA guidelines they certainly need to be in wiki
16:33:44 <itchka> In my experience you have to use a lot of judgement when testing packages on kahinah. Testing systemd is completely different from testing plasma which is completely different from testing grub2.
16:34:08 <ben79> I think we should take that, edit if needed and put it in wiki, and maybe add or have a separate page for package approval process in wiki also, but for now no more than that.
16:34:53 <ben79> package approval process as far as I know would take about 3 - 6 lines, unless I'm missing something
16:36:03 <ben79> Do we agree?
16:36:38 <itchka> If you mean how to approve a package on kahinah then yes probably. To actually test packages more like several pages.
16:40:12 <ben79> Let's start with the basic document we have plus a blurb about Kahinah. I think I know what you mean about actually testing packages but
16:40:45 <ben79> that should wait till after a blockbuster Lx 4 release when we have tripled the size of QA-Team
16:41:52 <itchka> One benefit of dnf is that we are unlikely to get as many issues with package installation.
16:43:31 <ben79> Yeah, but we'll still have people using 2014... and Lx 3 like the guy on YouTube that wrote a review of "latest from OpenMandriva" and he used the 3.0 ISO a week after Lx 3.03 was released
16:43:39 <itchka> but when things go wrong it may be more difficult to back out. I think if one wants to encourage more testers you must give them a way to back out broken packages that they have installed for testing in a quick and convenient way.
16:43:43 <ben79> but I digress
16:44:14 <itchka> I would suggest that this be investigated before trying to recruit more testers
16:44:29 <bero> We need to try harder to get people on supported versions...
16:44:45 <ben79> Yes that would be an excellent point
16:44:57 <bero> also I think we need to make the website much much easier to use
16:45:10 <bero> Try downloading an iso the way a normal user would
16:45:23 <bero> go to https://www.openmandriva.org/
16:45:25 <bero> click Download
16:45:32 <bero> click Our Mirrors
16:45:39 <ben79> #action QA-Team will edit and place in wiki the document existing in Google drive about Guidelines for testing packages and ISOs
16:45:54 <bero> and try to navigate that page while being a "dumb n00b"
16:46:11 <ben79> #share QA-Team will edit and place in wiki the document existing in Google drive about Guidelines for testing packages and ISOs
16:46:32 <bero> If I want to download the latest and greatest ISO of a distribution that is suitable for non-tech people, I don't want to be bothered with mirror statistics and all, I just want to get my iso
16:46:41 <bero> I wonder how many people turned back when seeing that "download" page
16:46:46 <ben79> I think maybe ben79 gets the job on the QA-Team document going in wiki
16:47:17 <itchka> Well volunteered Ben :)
16:47:35 <bero> I think Ubuntu gets just about nothing right, but they certainly get getting you to download their stuff much better than we do
16:47:41 <ben79> bero: Yes, we have reports of users finding that confusing and not noob friendly
16:48:06 <bero> and that's the ones reporting it, not the ones just heading over to microsoft.com or ubuntu.com when they see it
16:48:20 <bero> that definitely needs fixing before the release
16:49:11 <ben79> One way would be to emphasize the downloads from SF and torrent and put the mirrors in small type. I think other distros do similar
16:49:30 <ben79> Whereas we have mirrors in first place
16:50:05 <ben79> RaphalJadot[m]: Ping ^^^
16:50:23 <ben79> Workshop Team: Ping ^^^
16:52:29 <ben79> Another thing for next release would be to name the directory where the ISOs are something more obvious to a noob than 'release_current'
16:52:51 <bero> Or just send the download link straight to the iso and not to some directory
16:53:45 <ben79> Does dnf have anything like urpmi.recover?
16:55:04 <ben79> Will Lx 4 have anything like draksnapshot?
16:56:02 <ben79> draksnapshot does not work in Lx 3 by the way.
16:57:13 <bero> What are urpmi.recover and draksnapshot? Never used either of them...
16:59:53 <ben79> I may not have correct name for urpmi.recover but with it you can set restore points and if a package update borks your system urpmi.recover will revert your system to the restore point
17:00:40 <ben79> I *think* draksnapshot takes a system snapshot sort of like VBox does but I've never used it.
17:01:27 <ben79> # urpmi.recover
17:01:27 <ben79> urpmi.recover version 8.03.4
17:03:04 <ben79> so if you have a checkpoint set and there is a problem you can run # urpmi.recover --rollback and system will be restored to the checkpoint as far as installed packages
17:03:26 <ben79> great feature for QA-testers
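For reference, the urpmi.recover workflow being described looks roughly like this (option names recalled from the Mandriva tool, so treat them as approximate):

    urpmi.recover --checkpoint   # record the current package set as the restore point
    urpmi.recover --list         # show transactions recorded since the checkpoint
    urpmi.recover --rollback     # revert installed packages back to the checkpoint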
17:11:14 <ben79> Questions for developers:
17:11:57 <ben79> Where are we at regarding a working Lx 4/Cooker ISO?
17:13:24 <bero> Son_Goku: just the person we need... ;) Do you know if dnf has anything like this:
17:13:31 <bero> [19:01:32] <ben79> # urpmi.recover
17:13:31 <bero> [19:01:32] <ben79> urpmi.recover version 8.03.4
17:13:31 <bero> [19:03:09] <ben79> so if you have a checkpoint set and there is a problem you can run # urpmi.recover --rollback and system will be restored to the checkpoint as far as installed packages
17:13:31 <bero> [19:03:31] <ben79> great feature for QA-testers
17:13:48 <Son_Goku> if you have btrfs or lvm, you can use the snapper plugin
17:13:51 <Son_Goku> and it would do that
17:14:01 <Son_Goku> but otherwise, there's the dnf history
17:14:12 <bero> ben79: Currently sorting out all remaining dependencies and getting things ready for the mass build...
17:14:21 <Son_Goku> you can list, undo, and redo transactions
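A quick sketch of the dnf side Son_Goku mentions; these history subcommands exist in stock dnf:

    dnf history list          # show past transactions and their IDs
    dnf history info 42       # inspect what one transaction changed (42 is an example ID)
    dnf history undo 42       # revert exactly that transaction
    dnf history rollback 42   # undo every transaction performed after it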
17:14:24 <bero> ben79: shouldn't be much longer before we have an at least semi-working iso
17:15:06 <ben79> Is there a Release Plan or Roadmap for Lx 4 anywhere?
17:15:14 <itchka> bero: I still can't get an iso to boot. I'm hoping that the problem lies with grub..
17:17:11 <Son_Goku> ben79, I don't think we've got much of a plan other than "rebuild the world, fix the tooling, and then release"
17:18:24 <ben79> that makes it more difficult for me to write a blog post about Lx 4... and is one reason why I haven't yet
17:20:00 <bero> ben79: We'll put something up when that mass build is running... Right now it makes more sense to focus everything on getting that started because it will take forever
17:21:06 <ben79> OK, I probably did not realize that the same way Y'all do
17:21:48 <Pharaoh_Atem> do we have snapper packaged in OpenMandriva?
17:21:50 <Pharaoh_Atem> I don't think we do
17:21:56 <ben79> So are Y'all mostly done fixing fallout or issues from conversion from urpmi to dnf?
17:22:28 <Pharaoh_Atem> yeah, there's no snapper: https://github.com/OpenMandrivaAssociation/dnf-plugins-extras/blob/master/dnf-plugins-extras.spec#L11-L12
17:22:44 <ben79> Fishing for Red Snapper is an industry where I live.
17:22:55 <bero> Pharaoh_Atem: I don't think we do either -- AFAIK it doesn't even work on our default setup (which is ext4 without lvm)
17:23:16 <bero> ben79: yes, urpmi->dnf, rpm5->rpm4 and a few issues related to updated system libraries
17:23:45 <Pharaoh_Atem> well, also the package repositories are going to be fully consistent for the first time in a very long time
17:23:53 <Pharaoh_Atem> because we literally have to rebuild all packages
17:24:57 <ben79> bero: Pharaoh_Atem: and in this process we, or Y'all, are mostly cleaning up issues with tool-chain packages and libraries; Python comes to mind...
17:25:15 <Pharaoh_Atem> pretty much, yeah
17:25:21 <Pharaoh_Atem> they all have to get fixed as part of this
17:25:34 <bero> yes
17:25:42 <ben79> and there is reason to expect that the same issues won't happen again?
17:25:45 <bero> a lot of things especially in contrib are going to fail because they haven't been updated in years
17:26:36 <Pharaoh_Atem> well, I don't know if we can expect it won't happen again
17:26:39 <Pharaoh_Atem> but it won't be on this scale again
17:26:53 <bero> no, this is going to happen again simply because we don't have enough people to keep every contrib package updated all the time
17:26:54 <ben79> OK, when we do mass rebuild will we basically fix what we can and especially with contrib remove a lot of packages?
17:27:06 <Pharaoh_Atem> yes
17:27:23 <bero> we also need to make cooker based releases way more frequently instead of backporting everything and making another 3.x release
17:27:32 <bero> 3.0 was a great release when it came out
17:27:42 <bero> but now its core has aged and it hasn't aged all that well
17:27:55 <ben79> OK, that's an answer I can understand, I don't think it reasonable to expect perfection, at least not yet :)
17:28:28 <Pharaoh_Atem> if we do more frequent releases based on the development tree, it's less likely to be a problem
17:28:34 <bero> probably the same will be true for 4.0 unless we actually get our stuff together and make a cooker-based 4.1 instead of a 4.0-with-1000-backports based 4.1
17:28:40 <bero> Pharaoh_Atem: exactly
17:29:14 <Pharaoh_Atem> the other question to ask though is if we have the capability to pull that off alone
17:29:30 <Pharaoh_Atem> and we'll have to figure that out as we go
17:29:31 <ben79> 'nuther question, I haven't heard much about "Rolling Release" lately; is that idea on hold or dead for now?
17:29:46 <bero> we probably do if we spend less time on doing 1000s of backports
17:30:07 <bero> ben79: no, it's certainly where I want to go... And I think TPG as well
17:30:30 <Pharaoh_Atem> I think it could be an interesting strategy, especially if we consider the idea of pairing up with Mageia
17:30:44 <itchka> bero: But so far it has proved unworkable.
17:30:47 <Pharaoh_Atem> OpenMandriva could roll while Mageia could do the stable releases
17:31:02 <Pharaoh_Atem> two names, one source tree
17:31:13 <RaphalJadot[m]> hi
17:31:16 <itchka> It makes any form of QA a nightmare
17:31:35 <ben79> OK, so for now we are doing releases but Rolling release is something for the near future?
17:31:51 <Pharaoh_Atem> ben79: yeah, it's something to consider in the future
17:32:03 <Pharaoh_Atem> I think we'd want to take some of the concepts from openSUSE for a rolling system though
17:32:11 <Pharaoh_Atem> like using btrfs by default, configuring snapper + dnf, etc.
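For context, the openSUSE-style setup being pointed at could look roughly like this on an installed btrfs system; the plugin package name below is Fedora's, and an OpenMandriva equivalent would have to be packaged first:

    dnf install snapper python3-dnf-plugin-snapper   # assumed package names
    snapper -c root create-config /                  # create a snapper config for the btrfs root
    snapper list                                     # pre/post snapshots then bracket each dnf transaction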
17:32:24 <itchka> you'll end up like before with _TPG pushing packages that haven't been tested with a wider audience.
17:32:30 <bero> itchka: not at all unworkable, we just have to create a stable tree and make sure stuff from cooker goes into it in a timely manner
17:33:47 <itchka> bero: In the past it has been a disaster imho
17:34:44 <bero> because nobody was actively trying
17:34:51 <bero> we didn't have anyone actually using it
17:35:01 <bero> AFAIK I'm still the only one who actually uses cooker day-to-day
17:35:15 <ben79> So one way of looking at things currently is "Our hardly working devs are doing behind the scenes work transitioning Cooker from urpmi to dnf and rpm5 to rpm4 in preparation for development of our upcoming Lx 4 release"
17:35:32 <itchka> Well bero you are a pretty unique user :)
17:35:47 <Pharaoh_Atem> I don't know about "hardly working"
17:35:53 <Pharaoh_Atem> that's pretty unfair
17:35:56 <ben79> meant in humor
17:36:16 <ben79> I know Y'all work very hard and don't sleep enough
17:36:32 <Pharaoh_Atem> I suspect that Lx4 is not going to be terribly exciting unless we do some user-facing changes
17:36:37 <ben79> and of course I would not put that in a blog
17:36:50 <Pharaoh_Atem> maybe drakxtools -> manatools, dnfdragora, btrfs + snapper by default, etc.
17:38:00 <ben79> Um, some things need to be decided by developers, not Linux dummies like me
17:38:33 <ben79> manatools seems interesting; don't know how hard it would be to get into the Lx 4 release
17:39:16 <ben79> same for dnfdragora
17:39:20 <bero> ben79: Probably less hard than keeping drakx working
17:39:44 <ben79> would btrfs just be a decision to prefer it over ext4?
17:40:06 <bero> ben79: yes -- and of course testing that goes with it
17:40:25 <bero> and at least the last time I checked, it would come with a performance hit and some issues with grub not supporting it properly
17:40:35 <bero> so I'm not sure that's the right way to go just yet
17:40:49 <Pharaoh_Atem> SUSE has a patchset for properly supporting it in grub
17:40:57 <Pharaoh_Atem> I imported it into Fedora's grub back in Fedora 27
17:41:05 <Pharaoh_Atem> and Mageia has been carrying some of those patches for a while
17:41:59 <itchka> f2fs is now in grub2 for 2.04 maybe btrfs will be too
17:42:12 <ben79> well from the sound of things regarding Lx 3 there is an argument for doing a not exciting Lx 4 release just to get the system packages and libraries in better shape
17:42:51 <Pharaoh_Atem> well, we have to do _something_ user visible
17:42:56 <Pharaoh_Atem> otherwise it's pretty boring :/
17:43:12 <bero> KDE 5.13 is pretty visible
17:43:12 <itchka> I just saw the build for a btrfs module flash past on my screen
17:43:28 <Pharaoh_Atem> manatools+dnfdragora, btrfs, and snapper setup would be pretty easy features to add, and stand out well, too
17:43:39 <Pharaoh_Atem> also, we have the python3 as python thing too
17:43:51 <bero> itchka: It is supported, just (last time I checked) it didn't work very reliably
17:44:09 <Pharaoh_Atem> btrfs has worked fairly well for me for the past three years
17:44:12 <ben79> bero: you say with btrfs there would be a performance hit?
17:44:48 <bero> ben79: based on my past experience with it, yes. Not sure to what extent that still applies
17:45:28 <ben79> OK,
17:45:28 <itchka> ben79: If I build a grub2 snapshot for 3.0 can you test for the microcode image issue for me?
17:45:51 <ben79> I can try
17:46:03 <ben79> if I can get the snapshot to work
17:47:36 <itchka> My patch for that won't apply and the code has changed such that I'm not confident in hacking it in. The code is so different it's possible that the issue has been addressed
17:48:59 <ben79> I think it probably has since I forgot the exact problem
17:49:34 <ben79> Oh, i remember
17:49:35 <bero> Pharaoh_Atem: One thing I remember not working with grub+btrfs is remembering the last kernel that booted successfully. Do you know if that's fixed?
17:49:43 <Pharaoh_Atem> that's definitely fixed
17:49:59 <ben79> it's generating proper entries in grub.menu
17:50:14 <Pharaoh_Atem> at least it works on my openSUSE system like I expect
17:51:00 <ben79> that is fixed
17:51:52 <ben79> OR was it your patch that fixed it?
17:54:02 <ben79> Snapper >>> The ultimate snapshot tool for Linux
17:54:11 <ben79> so that's a must have
17:54:33 <ben79> or is it
17:54:39 <itchka> ben79: My patch fixed parsing of the other os's grub.cfg's so that microcode.img AND initrd.img were included.
17:54:50 <itchka> in the initrd line.
17:55:15 <ben79> Yes, and that has been working without issue for a good while now
17:55:34 <ben79> but I can test whatever if I can get snapshot to work
17:58:06 <ben79> OK we have 3 #actions and 3 #shares plus a gentleman's agreement to come up with a Release Plan for Lx 4 or a Roadmap
18:00:19 <ben79> Pharaoh_Atem: Please know the "hardly working" was meant in humor as it is 180 degrees opposite of the truth. (Another ben79 humor attempt backfires...)
18:01:24 <ben79> So should we go to AOB (Any Other Business) or end meeting?
18:01:53 <bero> we have LOTS more stuff... but not sure if the right people are there...
18:02:03 <bero> obvious thing is getting ready for the mass build
18:02:06 <bero> when?
18:02:16 <ben79> tomorrow?
18:02:43 <bero> probably too early because we need to get all of kapps fixed first and it takes a long time to build
18:02:52 <ben79> OK
18:03:02 <bero> but I'd say as soon as the remaining updates for kapps, qt, pd and lxqt are in the tree
18:03:28 <bero> _TPG and I also agree that we should enter a soft freeze for updates in Lx4 when the mass build is going
18:03:44 <bero> should probably get everyone else to agree or object
18:03:51 <ben79> so is it true to say the conversion from urpmi/rpm5 to dnf/rpm4 is done in Cooker?
18:03:58 <bero> mostly
18:04:12 <ben79> OK
18:04:15 <bero> the mass build will barf on packages that still need work
18:04:32 <Pharaoh_Atem> but it's successfully bootstrapped on all arches
18:04:37 <Pharaoh_Atem> which is what we needed
18:04:37 <bero> so we'll only really really know where we are once the mass build is finished and we know if the number of failed builds is 10, 100, 1000 or 10000
18:04:54 <bero> yes, that's another BIG piece of news for Lx4 btw, full support for aarch64 and armv7hnl
18:05:01 <bero> that was completely unsupported in all previous releases
18:05:10 <ben79> the mass rebuild or 3rd mass rebuild is kind of the last step in that process?
18:05:34 <ben79> Yes, that is big news indeed!
18:05:39 <bero> yes, mass rebuild and fixing the errors detected by it are essentially the last steps in that process
18:06:10 <Pharaoh_Atem> bero: we might want to consider importing my appliance-tools and livecd-tools into omv for letting people produce their own media using omv packages
18:06:24 <Pharaoh_Atem> https://github.com/livecd-tools
18:07:00 <ben79> Pharaoh_Atem: that would be popular, we have some users that would use those
18:07:15 <Pharaoh_Atem> they're packaged in Fedora and Mageia
18:07:30 <Pharaoh_Atem> Fedora uses it for ARM images, while Mageia offers it as a way to produce appliance and live media
18:08:31 <ben79> I was thinking of OM users doing their own spins like Mandian doing a Cinnamon spin
18:08:34 <ben79> etc
18:09:06 <Pharaoh_Atem> but yeah, they were introduced in Mageia 6 to Mageia: https://wiki.mageia.org/en/Mageia_6_Release_Notes#LiveCD_Tools
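A rough example of what a user-built spin with livecd-tools could look like; livecd-creator and its --config/--fslabel/--cache options are real, but the kickstart file name is hypothetical and would need OpenMandriva repository definitions inside it:

    livecd-creator --config=omv-lx4-cinnamon.ks \
                   --fslabel=OpenMandrivaLx4-Cinnamon \
                   --cache=/var/cache/live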
18:10:03 <bero> yes... ideally people would do it on abf, but can't hurt to provide other tools too
18:10:22 <Pharaoh_Atem> hell, no reason abf can't use these too
18:11:09 <bero> itchka: ben79: RaphalJadot[m]: How is THAT for a download page? http://getlinux.org/
18:12:40 <ben79> effective, simple
18:12:47 <ben79> fast
18:14:44 <itchka> Indeed
18:15:19 <ben79> AND not confusing
18:19:11 <bero> I don't think we should get that much into the face of users, throwing a 2GB image at them for just going to that URL, but something along those lines...
18:23:26 <ben79> what would be noob friendly? If the direct link and the torrent link were emphasized and the mirror link in smaller type for expert users or users with unique problems downloading
18:24:02 <ben79> and then for the individual mirror links to link directly to the page with the actual ISO's
18:24:13 <ben79> just thinking out loud
18:30:30 <bero> IMO we should just automate generating a link to a mirror...
18:30:50 <bero> the sourceforge link goes to an ad-infested page
18:30:58 <bero> not everyone has a torrent client
18:31:08 <bero> so by default someone should download from a mirror close to them
18:31:20 <bero> we can determine the best mirror based on geoip or somesuch
18:31:40 <bero> and then allow a "pick custom mirror" small print for the current page or so
18:31:44 <ben79> Hmmm... having our own direct link, could be better
18:32:14 <ben79> from experience OM users use torrents very little, they probably would not be missed
18:32:22 <ben79> if they went away
18:33:02 <bero> not much of a surprise, given in many countries you automatically land on a "to be monitored" list if you use bittorrent
18:33:25 <bero> should probably keep them up for those in free countries such as North Korea though
18:33:28 <ben79> Yep
18:34:16 <ben79> don't wanna discriminate against the North Koreans
18:46:45 <bero> Another thing (possibly related to getlinux.org): What does everyone think of selling computers with OMLx preloaded?
18:46:52 <bero> I think in the x86 world there's probably not too much demand
18:46:58 <bero> given you can get the components anywhere
18:47:07 <bero> but for aarch64, things may be different
18:47:15 <bero> it's still hard to get a proper aarch64 desktop these days
18:51:43 <bero> even more so for risc-v once we have that up and running
18:56:49 <ben79> I think there *might* be some interest, the other archs need to be more publicized, not many know much about them amongst ordinary Linux users
18:57:20 <ben79> having a product would make publicizing much easier it would seem
19:00:37 <itchka> creating hardware is an expensive business..
19:02:17 <itchka> particularly since everyone wants laptops
19:07:06 <itchka> We could try for an arch desktop. That's buildable.
19:09:47 <bero> I've built 3 of them, so we know what components will work
19:10:14 <bero> one problem I see is logistics and shipping/customs
19:11:04 <bero> I can easily get someone to build them here (I have a neighbor on disability...) but shipping and customs will probably make it overly expensive
19:11:44 <itchka> Sell through Amazon perhaps?
19:12:25 <itchka> Ship in batches to them and to another 3rd party retailer
19:12:33 <itchka> or to
19:12:39 <bero> Bezos is a pure scumbag, but it could be a way to make it more visible
19:12:56 <bero> I don't think we can ship in batches because I doubt we'll have more than 1 or 2 people a month interested in getting one
19:15:43 <ben79> itchka: this is your field: https://forum.openmandriva.org/t/activating-kmail-on-openmandrivalx3-03-gives-errors/1895
19:16:54 <itchka> bero: If we went for the really tiny barebones cases perhaps it would be more economic. It's not as if we need to dump a lot of heat.
19:17:28 <itchka> We could get the size down to a large shoebox.
19:17:50 <itchka> ben79: I'll look
19:19:29 <bero> http://www.lc-power.com/en/product/gehaeuse/mini-itx/lc-1400mi/ <--- this is what I'm currently using -- even smaller ones usually don't have enough room to accommodate a PCI-E card, and/or they don't come with a PSU (separate PSUs are usually more expensive than this box), but of course there may be better options...
19:19:57 <ben79> itchka: that *may* be a *special* user,
19:20:29 <ben79> KMail starts OK here otherwise I don't know since I don't use it
19:21:26 * bero will give kmail another try with a huge mailbox when kapps 18.04.2 is done building
19:21:35 <bero> not much of a point in testing with obsolete versions
19:22:41 <itchka> kmail works great with postgres; mariadb can't handle the SQL flood that it generates.
19:23:08 <itchka> If it were me I would use postgres for lx4
19:23:30 <itchka> or at least make it easy to switch.
19:23:51 <Pharaoh_Atem> bero: it'd be interesting to offer omv/mga computers for arm and riscv arches
19:24:21 <Pharaoh_Atem> it'd be interesting to set up as a model where buying a computer with that distro supports that distro
19:24:45 <bero> yes... that's what I thought too
19:25:13 <bero> maybe even adding some high-end x86 box that is hard to find in usual stores (32-core threadripper) into the mix too
19:26:10 <Pharaoh_Atem> I'd like that :)
19:27:31 <bero> and of course there is the whole AMD thing...
19:27:50 <ben79> which is?
19:27:56 <bero> I know a guy at AMD who would very much like to see a Linux system that is highly optimized for AMD processors
19:28:15 <ben79> that could be interesting as well
19:28:25 <bero> much like Intel is doing Clear Linux to optimize for latest and greatest Intel processors
19:28:26 <ben79> very
19:28:48 <bero> I'm planning to do a mass rebuild with -march=znver1 etc. that will run only on current AMDs
19:29:12 <bero> If the performance is much better than our current generic x86_64 builds, there's a good chance we can get them interested
19:29:38 <bero> that's another thing I'm planning to do in parallel while the mass build is running
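The AMD-optimized rebuild described here would essentially come down to overriding the distribution compiler flags, roughly like this (the macros.d-style file and the exact flag set are assumptions; %optflags is the standard rpm macro):

    # resulting binaries would only run on Zen-class AMD CPUs
    echo '%optflags -O2 -g -march=znver1 -mtune=znver1' > /etc/rpm/macros.znver1
    rpmbuild --rebuild some-package.src.rpm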
19:30:04 <ben79> that approach might get some leverage to get things built
19:32:45 <bero> itchka: I'll have a look at mariadb vs. postgres vs. sqlite with kmail/akonadi once 18.04.2 is built
19:33:47 <bero> itchka: there's servers that literally handle billions of connections per day with mariadb, so I don't think mariadb itself is too borked to handle the SQL flood -- it's more likely the interfaces to it
19:34:10 <bero> itchka: so let's see how it all performs with 18.04.2 and qtsql 5.11.1 before making the final decision there
19:34:39 <bero> itchka: I have a test mailbox with tens of thousands of messages, so I think we can get reasonable results
19:48:40 <bero> fdrt: HisShadow_: crazy: Pharaoh_Atem: itchka: The metadata problem just happened again and we have a more detailed log... But I'm still not sure what would possibly cause it
19:48:48 <bero> http://file-store.openmandriva.org/api/v1/file_stores/d53df9cd614fed0c079e1cabd2e81439d7e7842c.log?show=true
19:49:07 <bero> As you can see imagemagick-7.0.8.2-1 got built and signed correctly, and apparently moved to the tree too
19:49:15 <bero> and createrepo_c is run correctly
19:49:31 <bero> but somehow it just doesn't seem to pick up the new file
19:51:14 <Pharaoh_Atem> hold the fuck on
19:51:16 <bero> actually
19:51:23 <Pharaoh_Atem> ==> warning: tag 273 type(0x6) != implicit type(0x0)
19:51:25 <Pharaoh_Atem> that's not right
19:51:27 <bero> at least this time it looks like even only a subpackage was left out
19:51:37 <Pharaoh_Atem> that looks like rpm5 rpm is being used here
19:51:38 <bero> this is the subsequent failure
19:51:45 <bero> http://file-store.openmandriva.org/api/v1/file_stores/69d20ec0717a6ea98693df240b4c4e31bbc809aa.log?show=true
19:51:54 <bero> "  - nothing provides libMagickCore-7.Q16HDRI.so.6()(64bit) needed by imagemagick-7.0.8.2-1.aarch64"
19:52:10 <Pharaoh_Atem> welp
19:52:45 <bero> So it does see imagemagick-7.0.8.2-1.aarch64 is supposed to install, but it doesn't see the new lib64MagickCore7.Q16HDRI_6-7.0.8.2-1-omv4000.aarch64.rpm package providing what it's asking for
19:52:57 <Pharaoh_Atem> you should probably add "Problem: conflicting requests" to the error check
19:54:25 <bero> http://file-store.openmandriva.org/api/v1/file_stores/d53df9cd614fed0c079e1cabd2e81439d7e7842c.log?show=true has more mentions of lib64MagickCore7.Q16HDRI_6-debuginfo than real lib64MagickCore7.Q16HDRI_6
19:54:44 <bero> so at least this time it looks like we have a subpackage just not being seen by createrepo_c
19:55:15 <Pharaoh_Atem> are we still using the old rpm5 based createrepo_c?
19:55:21 <bero> no
19:55:43 <bero> the whole reason for the createrepo container is to be able to run cooker/rpm4 createrepo_c even though the host container is on Lx3
19:57:22 <Pharaoh_Atem> hmm
19:58:04 <bero> One obvious "fix" would be to run metadata regeneration from scratch if we see "conflicting requests" anywhere, but I don't really want to mess things up with such crappy workarounds everywhere
19:58:13 <bero> that'll just make things unmaintainable in the future
19:58:27 <Pharaoh_Atem> hmm
19:58:40 <Pharaoh_Atem> and it seems that the warning has no meaning, since the other RPMs made it in
19:58:43 <Pharaoh_Atem> and that warning is there too
19:59:56 <bero> I wonder if we have some weird timing issue with the new files not showing up inside the container in time
20:00:14 <bero> not sure how docker shares directory access with the host system...
20:00:21 <crazy> bero: that seems now to fail elsewhere .. metadata generation code seems to run right from the log
20:01:21 <bero> If there's any sort of caching involved, I could see something along the lines of "packages signed and moved into right directory, createrepo container started, container cached old directory listing so not all files visible yet, createrepo_c run on incomplete directory"
20:03:14 <bero> Let's try adding "sync" between finishing signing and starting the createrepo_c container just in case there's a problem there...
20:03:36 <bero> I don't really think it'll help, but it's worth a try
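The workaround being discussed amounts to something like the following in the publishing script; the signing step and container image name are placeholders for whatever ABF actually runs, while createrepo_c --update and the docker options are standard:

    sign_rpms "$REPODIR"/*.rpm        # placeholder for the existing signing step
    sync                              # flush pending writes before the container mounts the directory
    docker run --rm -v "$REPODIR":/repo createrepo-image createrepo_c --update /repo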
20:07:47 <ben79> Maybe we should end meeting?
20:08:23 <bero> ben79: go ahead... I don't think we have any more meeting topics right now
20:08:38 <ben79> OK,
20:08:44 <ben79> #endmeeting