It looks like there's a JIT hang that's been blocking my nightly benchmark runs across all machines for the past two nights (see [here](https://github.com/savannahostrowski/pyperf_bench/actions/runs/25443771452) and [here](https://github.com/savannahostrowski/pyperf_bench/actions/runs/25369160867)), as well as @diegorusso's runs on his runner (see [here](https://github.com/diegorusso/pyperf-bench/actions/runs/25413478324) and [here](https://github.com/diegorusso/pyperf-bench/actions/runs/25354872255)). These runs always hang at the [bm_xml_etree.iterparse](https://github.com/python/pyperformance/blob/main/pyperformance/data-files/benchmarks/bm_xml_etree/run_benchmark.py) benchmark on JIT builds.

A minimal reproducer:

```python
import io
import xml.etree.ElementTree as ET

data = b"<r>" + b"<a/>" * 1000 + b"</r>"

# Loop enough times for the JIT to warm up and trace the iterator;
# on affected builds this never finishes.
for _ in range(200):
    for event, elem in ET.iterparse(io.BytesIO(data)):
        pass
```

After bisecting, it looks like this was introduced by https://github.com/python/cpython/pull/148745. I'm not sure about the exact mechanism, but here are two observations that may or may not be related:

- The iterator returned by `xml.etree.ElementTree.iterparse` has `tp_iternext` == `slot_tp_iternext`.
- The new specialization in [Python/optimizer_bytecodes.c](https://github.com/python/cpython/blob/2b7c28a4406da1b26dd0ebd38aa7371bed873ce4/Python/optimizer_bytecodes.c#L1462) excludes `PyGen_Type` but doesn't have a corresponding exclusion for the `slot_tp_iternext` case:

```c
if (type != NULL && type != &PyGen_Type && type->tp_iternext != NULL) {
    ...
    ADD_OP(_ITER_NEXT_INLINE, 0, (uintptr_t)type->tp_iternext);
}
```

Empirically, also excluding the case where `tp_iternext == slot_tp_iternext` makes the hang go away in my local builds (a rough sketch of that change is at the end of this report), but I don't know whether that's the real fix or just an incidental one that happens to sidestep the trigger. Would appreciate a closer look from someone who knows the trace semantics here.

cc: @markshannon @NekoAsakura

<!-- gh-linked-prs -->
### Linked PRs
* gh-149491
<!-- /gh-linked-prs -->
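### Local workaround sketch

For reference, the exclusion that made the hang disappear in my local builds looks roughly like the snippet below. This is a sketch rather than a proposed patch: as far as I can tell, `slot_tp_iternext` is static to `Objects/typeobject.c`, so it would need to be exposed to the optimizer in some form before this compiles as written.

```c
/* Sketch of the local workaround, not necessarily the right fix.
 * Skip the inline specialization whenever tp_iternext is the generic
 * slot wrapper, i.e. whenever __next__ is implemented in Python rather
 * than as a C-level tp_iternext. Assumes slot_tp_iternext has been
 * made visible outside Objects/typeobject.c.
 */
if (type != NULL && type != &PyGen_Type &&
    type->tp_iternext != NULL &&
    type->tp_iternext != slot_tp_iternext)  /* new exclusion */
{
    ...
    ADD_OP(_ITER_NEXT_INLINE, 0, (uintptr_t)type->tp_iternext);
}
```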