Do we know how long we will have to fix all the FTBFS and autopkgtest failures before the freeze?
I am a bit worried about the scientific stack; will we have enough time to work with our upstreams to fix all these FTBFS? In the scientific stack, things move slowly...
We are not dedicated to Debian work 100% of our time, so I hope this will not ruin the trixie cycle's effort for scientific software.
Moving to Python 3.12 was not that simple...
This is the same as we did for the Python 3.12 transition. Please note
that we don't enable any of the experimental features in Python 3.13 (no
GIL, JIT compilation), so assuming there are currently no other RC
issues in your packages, there should be plenty of time to fix any
3.13-related issues.
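For anyone who wants to double-check that on their own interpreter, here is a small sketch (not an official diagnostic; it just reads the build configuration, using the names CPython's free-threading notes mention):

    import sys
    import sysconfig

    # Py_GIL_DISABLED is 1 only on free-threaded ("--disable-gil") builds;
    # a regular 3.13 build should report 0 (or None on versions where the
    # variable does not exist at all).
    print(sysconfig.get_config_var("Py_GIL_DISABLED"))

    # New in 3.13: reports the runtime state. True means the GIL is active,
    # i.e. a normal, non-free-threaded interpreter.
    if hasattr(sys, "_is_gil_enabled"):
        print(sys._is_gil_enabled())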
So we try hard to maintain our packages in testing, and it is always a
disappointment to see them (or part of them) expelled from testing due to
an FTBFS with a new Python or a failing autopkgtest.
Hi PICCA (2024.11.13_10:04:26_+0000)
> I am a bit worried about the scientific stack; will we have enough
> time to work with our upstreams to fix all these FTBFS? In the
> scientific stack, things move slowly...
The reality here is that Python has an annual release cycle, these days.
python3-defaults in unstable now adds Python 3.13 as a supported Python
version. You might see some additional build failures until the binNMUs [...]
for this addition are done [1]. This might take some days for some
architectures. We will most likely also see some more issues once the lower
levels of this addition are done.
[1] https://release.debian.org/transitions/html/python3.13-add.html
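If you want to check what python3-defaults on your own system declares, running "py3versions -s" prints the supported set; a rough Python equivalent is sketched below, assuming the usual layout of the debian_defaults file it consults:

    import configparser

    # /usr/share/python3/debian_defaults ships with python3-defaults and
    # records the default and supported Python 3 versions for the release.
    cp = configparser.ConfigParser()
    cp.read("/usr/share/python3/debian_defaults")
    print(cp["DEFAULT"]["default-version"])      # e.g. python3.12
    print(cp["DEFAULT"]["supported-versions"])   # e.g. python3.12 python3.13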
While there are a few bits of that transition tracker still red, the
current target is to work on the list of autopkgtest failures shown on https://tracker.debian.org/pkg/python3-defaults in order to get the
addition of 3.13 as a supported version into testing. As usual, this
page can be a little hard to interpret because it shows test failures of
the versions of those packages in testing, and you have to click through
to each corresponding package (sometimes through multiple levels of
failures) to see whether it's been fixed in unstable. But with ~35
packages left there, it's getting easier to wade through and we're
getting pretty close.
* audioread: #1082047; apparently needs packaging of a couple of pieces
removed from the standard library (a generic sketch of the pattern follows
at the end of this message). Reverse-dependencies are eartag, puddletag,
and python3-acoustid.
* dask/dask.distributed: #1088234 and #1088286, but also #1085947 in
sphinx-book-theme. I sank a bunch of time into trying to fix this
last month and didn't really get anywhere very satisfying. Can
anyone with more experience with these packages figure this out?
* datalad-next: #1088038. Probably not too hard if you can figure out
how that test is supposed to work.
* deepdiff: #1088239, blocked by orderly-set in NEW. I poked
#debian-ftp.
* hyperkitty: #1088312. Should be fairly easy.
* ironic-python-agent: #1089531. Should be fairly easy; zigo said on
IRC that this is a leaf package and doesn't need to block migration.
* ovn-octavia-provider: #1088762. zigo said on IRC that this is a leaf
package and doesn't need to block migration.
* pocketsphinx-python: #1088764. Apparently difficult.
* python-attrs: Fixed in unstable; blocked on python-cattrs.
* python-beartype: #1089017. Apparently fixed upstream, though I don't
know exactly where.
* python-cattrs: #1073406/#1086614.
* python-omegaconf: #1089049.
* python-oslo.messaging: I believe this is fixed in unstable
(#1089050) and waiting for python-eventlet to migrate to testing.
* python-pure-python-adb: #1082251/#1084618; apparently just needs a
dependency on python3-zombie-telnetlib?
* python-voip-utils: #1088827 fixed in unstable, but has an autopkgtest
regression on s390x (#1089826).
* rich: #1082290; seems to be fixed upstream.
* smart-open: #1089053; upstream fix in progress.
* spyder: #1088068/#1089054.
* twisted: Fixed in unstable, just waiting for matrix-synapse to
migrate first (which should be soon).
There are also a number of architecture-specific failures showing up
there. Some might go away with a few more retries I guess, but we'll
likely need to work out what to do about the rest. I haven't looked at
these in any depth.
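Several of the items above (audioread, python-pure-python-adb) come down to modules that PEP 594 removed from the standard library in Python 3.13, so a separately packaged fork now has to supply the module. A generic sketch of the pattern, not code from either package, assuming the fork keeps the original import name (as python3-zombie-telnetlib appears to for telnetlib):

    import sys

    try:
        import telnetlib  # stdlib up to Python 3.12
    except ModuleNotFoundError as exc:
        # Removed in Python 3.13 (PEP 594); a packaged fork now has to be an
        # explicit dependency, since nothing pulls the module in implicitly.
        raise RuntimeError(
            "telnetlib is unavailable on Python "
            f"{sys.version_info.major}.{sys.version_info.minor}; "
            "add a dependency on a packaged fork"
        ) from exc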
On Mon, Dec 16, 2024 at 01:58:14AM +0000, Colin Watson wrote:
> [...]
> * spyder: #1088068/#1089054.

I'm struggling with this one; I've asked at
https://github.com/spyder-ide/spyder/issues/23074 for help, but nothing
so far. I've just pushed my current work to salsa
(git@salsa.debian.org:science-team/spyder.git), and if anyone has time
to look into this, I'd really appreciate it.
On Tue, Dec 17, 2024 at 12:53:42PM +0000, Julian Gilbey wrote:
> On Mon, Dec 16, 2024 at 01:58:14AM +0000, Colin Watson wrote:
> > [...]
> > * spyder: #1088068/#1089054.
>
> I'm struggling with this one; I've asked at
> https://github.com/spyder-ide/spyder/issues/23074 for help, but nothing
> so far. I've just pushed my current work to salsa
> (git@salsa.debian.org:science-team/spyder.git), and if anyone has time
> to look into this, I'd really appreciate it.
I poked around a bit in pdb. I think the problem is that one plugin is calling the icon machinery at class creation time, before a QApplication
or equivalent has been set up, so font loading doesn't happen. Since
pytest goes through and loads all the Python files to look for tests,
this causes it problems.
[...]
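To illustrate that failure mode in general terms (a hypothetical sketch, not Spyder's actual plugin code; it assumes qtpy and qtawesome, which is the icon machinery Spyder uses):

    import sys
    from qtpy.QtWidgets import QApplication
    import qtawesome as qta

    class Plugin:
        @classmethod
        def get_icon(cls):
            # Deferred lookup: by the time this runs, a QApplication exists
            # and the icon fonts have been loaded.
            return qta.icon("mdi.wrench")

    # The problematic variant does the lookup in the class body instead:
    #
    #     class Plugin:
    #         ICON = qta.icon("mdi.wrench")  # runs at import time, i.e. as
    #                                        # soon as pytest collects the
    #                                        # module, before any QApplication
    #
    if __name__ == "__main__":
        app = QApplication(sys.argv)
        print(Plugin.get_icon().isNull())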