
instruction CACHE #7251

Merged
youknowone merged 4 commits into RustPython:main from youknowone:cache-inst
Mar 1, 2026

Conversation


@youknowone youknowone commented Feb 27, 2026

Summary by CodeRabbit

  • Bug Fixes

    • Fixed async/await/yield unwind so nested async generators, loops and comprehensions unwind to correct end targets.
    • Resolved jump/branch edge cases so break/continue behave reliably in complex async and generator scenarios.
  • Behavior

    • Improved line-tracing and NOP alignment to preserve accurate tracing and exception mapping across control-flow constructs.
    • Ensured branch targets are computed consistently after instruction-cache entries, improving correctness of control flow.
  • Performance

    • Added inline-cache awareness across compile, JIT and runtime paths for more accurate offsets and steadier execution.


coderabbitai bot commented Feb 27, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Thread async/yield end-targets through codegen paths and return send-targets from yield-from; add per-instruction CACHE accounting to IR/bytecode; and make JIT, VM, frame, and monitoring cache-aware so jumps/targets skip CACHE units.

Changes

  • Codegen Async Flow (crates/codegen/src/compile.rs): compile_yield_from_sequence now returns BlockIdx; end_async_for_target is threaded across async/yield/await/loop/comprehension paths; added emit_end_async_for(send_target: BlockIdx); added Nop emissions and updated call sites to ignore the returned BlockIdx where unused.
  • Codegen IR / Instruction Sizing (crates/codegen/src/ir.rs): added InstructionInfo.cache_entries: u32; compute and propagate cache counts; emit CACHE code units; include CACHE in offset, linetable, and exception-table sizing; normalize jumps with CACHE-aware offsets.
  • Bytecode Instruction API (crates/compiler-core/src/bytecode/instruction.rs): added pub fn cache_entries(self) -> usize; expanded disassembly/formatting to surface Cache and new variants.
  • JIT Target Resolution & Tests (crates/jit/src/instructions.rs, crates/jit/tests/common.rs): precompute instruction targets accounting for caches; add_instruction gains offset: u32; treat Instruction::Cache as a no-op in JIT paths and the test harness (annotation extraction, no-op set).
  • VM Frames / Dispatch & Instrumentation (crates/vm/src/frame.rs, crates/vm/src/builtins/frame.rs, crates/vm/src/builtins/code.rs, crates/vm/src/stdlib/sys/monitoring.rs): make targets relative to after_caches = i + 1 + caches; add private helpers (jump_relative_forward/backward, pop_jump_if_relative, skip_caches_if_fallthrough); update ForIter/PopJump*/Send/Jump* logic and monitoring instrumentation to skip CACHE entries and compute offsets from after-cache positions.
  • Misc / Small Adjustments (crates/jit/tests/common.rs, other small diffs): LoadGlobal index encoding adjustments in tests; add Instruction::Cache to ignored/skip lists in multiple analysis/test paths.
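The per-instruction CACHE accounting above changes how offsets are computed: each instruction now occupies one code unit plus its cache units, so absolute offsets (and therefore jump encodings, linetables, and exception tables) must accumulate both. A minimal, self-contained sketch of that accounting; the struct and field name mirror the walkthrough's InstructionInfo.cache_entries, but the opcodes named in the comments are purely illustrative:

```rust
// Sketch of CACHE-aware offset accounting during assembly. Each instruction
// occupies 1 code unit plus `cache_entries` trailing CACHE units, so the
// offsets of all later instructions shift accordingly.

struct InstructionInfo {
    cache_entries: u32, // mirrors the new IR field described above
}

/// Compute the code-unit offset of each instruction, counting CACHE slots.
fn offsets(instrs: &[InstructionInfo]) -> Vec<u32> {
    let mut out = Vec::with_capacity(instrs.len());
    let mut offset = 0u32;
    for info in instrs {
        out.push(offset);
        offset += 1 + info.cache_entries; // the instruction plus its CACHE units
    }
    out
}

fn main() {
    let code = [
        InstructionInfo { cache_entries: 0 }, // e.g. a LoadConst-style opcode
        InstructionInfo { cache_entries: 1 }, // e.g. an adaptive opcode with 1 CACHE
        InstructionInfo { cache_entries: 0 }, // e.g. a PopTop-style opcode
    ];
    // The third instruction starts at unit 3, not 2, because of the CACHE slot.
    assert_eq!(offsets(&code), vec![0, 1, 3]);
}
```

This is also why jump normalization has to re-run after cache counts change: inserting or removing a CACHE unit moves every downstream offset.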

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Parser as AST/Parser
    participant Codegen as Codegen
    participant IR as IR/InstructionInfo
    participant JIT as JIT
    participant VM as VM/Runtime

    Parser->>Codegen: produce bytecode (yield/await + CACHE markers)
    Codegen->>IR: emit instructions and set cache_entries
    Codegen->>Codegen: compile_yield_from_sequence -> BlockIdx(send_target)
    IR->>JIT: provide CACHE-aware offsets & resolved labels
    JIT->>VM: emit/runtime labels using CACHE-aware targets
    VM->>VM: execute; skip CACHE units; use relative targets; unwind -> EndAsyncFor uses threaded send_target
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested reviewers

  • ShaharNaveh
  • fanninpm

Poem

🐰 I counted caches, hopped past each byte,
Threaded end-targets down the async night.
Jumps now land where my whiskers hope,
Send blocks sewn tight with a carrot-rope.
Hooray — compiled, and my tail's upright!

🚥 Pre-merge checks | ✅ 3 passed

  • Description check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'instruction CACHE' directly and specifically addresses the main change: comprehensive support for instruction cache entries throughout the codebase, enabling proper bytecode offset calculations and cache-aware control flow.
  • Docstring coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


github-actions bot commented Feb 27, 2026

📦 Library Dependencies

The following Lib/ modules were modified. Here are their dependencies:

[x] lib: cpython/Lib/opcode.py
[ ] lib: cpython/Lib/_opcode_metadata.py
[x] test: cpython/Lib/test/test__opcode.py (TODO: 2)
[x] test: cpython/Lib/test/test_opcodes.py

dependencies:

  • opcode (native: _opcode, builtins)
    • _opcode_metadata

dependent tests: (42 tests)

  • opcode: test__opcode test_code test_compile test_dis test_peepholer
    • dis: test_ast test_compiler_assemble test_dtrace test_fstring test_inspect test_monitoring test_opcache test_patma test_positional_only_arg
      • bdb: test_bdb
      • inspect: test_abc test_argparse test_asyncgen test_buffer test_builtin test_collections test_coroutines test_decimal test_enum test_functools test_generators test_grammar test_ntpath test_operator test_posixpath test_pydoc test_signal test_sqlite3 test_traceback test_type_annotations test_types test_typing test_unittest test_yield_from test_zipimport test_zoneinfo
      • trace: test_trace

[x] lib: cpython/Lib/dis.py
[ ] test: cpython/Lib/test/test_dis.py (TODO: 39)

dependencies:

  • dis

dependent tests: (68 tests)

  • dis: test__opcode test_ast test_code test_compile test_compiler_assemble test_dis test_dtrace test_fstring test_inspect test_monitoring test_opcache test_patma test_peepholer test_positional_only_arg
    • bdb: test_bdb
    • inspect: test_abc test_argparse test_asyncgen test_buffer test_builtin test_collections test_coroutines test_decimal test_enum test_functools test_generators test_grammar test_ntpath test_operator test_posixpath test_pydoc test_signal test_sqlite3 test_traceback test_type_annotations test_types test_typing test_unittest test_yield_from test_zipimport test_zoneinfo
      • ast: test_compiler_codegen test_future_stmt test_site test_ssl test_type_comments test_ucn test_unparse
      • asyncio: test_asyncio test_contextlib_async test_logging test_os test_sys_settrace test_unittest
      • cmd: test_cmd
      • dataclasses: test__colorize test_copy test_ctypes test_genericalias test_pprint test_regrtest
      • importlib.metadata: test_importlib
      • pkgutil: test_pkgutil test_runpy
      • rlcompleter: test_rlcompleter
      • trace: test_trace
      • xmlrpc.server: test_docxmlrpc test_xmlrpc

[x] lib: cpython/Lib/inspect.py
[ ] test: cpython/Lib/test/test_inspect (TODO: 39)

dependencies:

  • inspect

dependent tests: (86 tests)

  • inspect: test_abc test_argparse test_asyncgen test_buffer test_builtin test_code test_collections test_coroutines test_decimal test_enum test_functools test_generators test_grammar test_inspect test_monitoring test_ntpath test_operator test_patma test_posixpath test_pydoc test_signal test_sqlite3 test_traceback test_type_annotations test_types test_typing test_unittest test_yield_from test_zipimport test_zoneinfo
    • ast: test_ast test_compile test_compiler_codegen test_dis test_fstring test_future_stmt test_site test_ssl test_type_comments test_ucn test_unparse
      • annotationlib: test_annotationlib test_reprlib test_type_params
      • dbm.dumb: test_dbm_dumb
      • pyclbr: test_pyclbr
      • traceback: test_asyncio test_code_module test_contextlib test_contextlib_async test_dictcomps test_exceptions test_http_cookiejar test_importlib test_iter test_listcomps test_pyexpat test_setcomps test_socket test_subprocess test_sys test_threadedtempfile test_threading test_unittest test_with
    • asyncio: test_asyncio test_logging test_os test_sys_settrace
    • bdb: test_bdb
    • cmd: test_cmd
    • dataclasses: test__colorize test_copy test_ctypes test_genericalias test_pprint test_regrtest
      • pprint: test_htmlparser test_sys_setprofile
    • importlib.metadata: test_importlib
    • pkgutil: test_pkgutil test_runpy
    • pydoc:
      • xmlrpc.server: test_docxmlrpc test_xmlrpc
    • rlcompleter: test_rlcompleter
    • trace: test_trace

[ ] test: cpython/Lib/test/test_monitoring.py (TODO: 9)

dependencies:

dependent tests: (no tests depend on monitoring)

[x] test: cpython/Lib/test/test_compile.py (TODO: 31)
[x] test: cpython/Lib/test/test_compiler_assemble.py
[x] test: cpython/Lib/test/test_compiler_codegen.py
[x] test: cpython/Lib/test/test_peepholer.py (TODO: 30)

dependencies:

dependent tests: (no tests depend on compile)

Legend:

  • [+] path exists in CPython
  • [x] up-to-date, [ ] outdated


github-actions bot commented Feb 27, 2026

Code has been automatically formatted

The code in this PR has been formatted using:

  • cargo fmt --all

Please pull the latest changes before pushing again:

git pull origin cache-inst

@youknowone force-pushed the cache-inst branch 3 times, most recently from c05e83f to 49ec351, on February 28, 2026 at 01:42
@youknowone marked this pull request as ready for review on February 28, 2026 at 01:43
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
crates/codegen/src/compile.rs (1)

6796-6853: Run the required clean build for these bytecode-instruction changes.

Given instruction emission/targets changed here, please run the repository’s clean-build step before final validation to avoid stale artifacts masking issues.

rm -r target/debug/build/rustpython-* && find . | grep -E "\.pyc$" | xargs rm -r

As per coding guidelines "When modifying bytecode instructions, perform a clean build by running: rm -r target/debug/build/rustpython-* && find . | grep -E "\.pyc$" | xargs rm -r".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/compile.rs` around lines 6796 - 6853, After changing
bytecode emission/targets in compile_yield_from_sequence (and related
Instruction/PseudoInstruction usage like Send, SetupFinally, YieldValue, Resume,
CleanupThrow, EndSend), perform a clean build to avoid stale artifacts: remove
target/debug build caches and .pyc files then rebuild by running: rm -r
target/debug/build/rustpython-* && find . | grep -E "\.pyc$" | xargs rm -r; then
run the repository build/test commands to validate the changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/codegen/src/ir.rs`:
- Around line 325-377: The code rewrites `op` then sets `info.instr = op.into()`
but leaves `info.cache_entries` as the old value, which can desynchronize
`offset_after` and jump encoding; after `info.instr = op.into()`
recompute/update `info.cache_entries` (e.g. call the existing helper that
computes cache entries or recalc it from the new `info.instr`/`info.arg`), then
use that updated `info.cache_entries` when computing `offset_after`, `new_arg`,
and the `recompile` size check so the jump encoding and emitted CACHE units stay
consistent.

In `@crates/vm/src/frame.rs`:
- Around line 1360-1361: Multiple instruction handlers (e.g., ForIter, Send,
InstrumentedJumpForward, InstrumentedJumpBackward,
InstrumentedPopJumpIfTrue/False/None/NotNone) and the helpers
jump_relative_forward/jump_relative_backward compute relative targets using
unchecked u32 arithmetic, which can wrap; update each target computation (where
you currently call bytecode::Label(self.lasti() + 1 + u32::from(arg)) or
similar) to use checked_add/checked_sub on self.lasti() (and on added/subtracted
offsets), handle None by returning a system error (propagate the error type your
frame code uses), and then call execute_for_iter or the existing branch logic
with the validated Label; apply the same pattern in
jump_relative_forward/jump_relative_backward to consistently detect
overflow/underflow and return an appropriate system error instead of allowing a
panic during fetch.

---

Nitpick comments:
In `@crates/codegen/src/compile.rs`:
- Around line 6796-6853: After changing bytecode emission/targets in
compile_yield_from_sequence (and related Instruction/PseudoInstruction usage
like Send, SetupFinally, YieldValue, Resume, CleanupThrow, EndSend), perform a
clean build to avoid stale artifacts: remove target/debug build caches and .pyc
files then rebuild by running: rm -r target/debug/build/rustpython-* && find . |
grep -E "\.pyc$" | xargs rm -r; then run the repository build/test commands to
validate the changes.

ℹ️ Review info

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 81acb9b and 49ec351.

⛔ Files ignored due to path filters (9)
  • Lib/opcode.py is excluded by !Lib/**
  • Lib/test/test_dis.py is excluded by !Lib/**
  • Lib/test/test_monitoring.py is excluded by !Lib/**
  • Lib/test/test_peepholer.py is excluded by !Lib/**
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__if_ands.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__if_mixed.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__if_ors.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__nested_bool_op.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__nested_double_async_with.snap is excluded by !**/*.snap
📒 Files selected for processing (9)
  • crates/codegen/src/compile.rs
  • crates/codegen/src/ir.rs
  • crates/compiler-core/src/bytecode/instruction.rs
  • crates/jit/src/instructions.rs
  • crates/jit/tests/common.rs
  • crates/vm/src/builtins/code.rs
  • crates/vm/src/builtins/frame.rs
  • crates/vm/src/frame.rs
  • crates/vm/src/stdlib/sys/monitoring.rs

Comment on lines +1360 to +1361
let target = bytecode::Label(self.lasti() + 1 + u32::from(arg));
self.execute_for_iter(vm, target)?;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's check the file size and then examine the specific lines
wc -l crates/vm/src/frame.rs

# Look at lines 1360-1361
echo "=== Lines 1360-1361 ==="
sed -n '1355,1365p' crates/vm/src/frame.rs

# Check if jump_relative_forward/backward functions exist
echo "=== Searching for jump_relative functions ==="
rg -n 'fn jump_relative_' crates/vm/src/frame.rs

# Look at lines around 2434-2435
echo "=== Lines 2430-2440 ==="
sed -n '2430,2440p' crates/vm/src/frame.rs

# Look at lines around 2660-2672
echo "=== Lines 2655-2675 ==="
sed -n '2655,2675p' crates/vm/src/frame.rs

# Look at lines around 2729-2730
echo "=== Lines 2725-2735 ==="
sed -n '2725,2735p' crates/vm/src/frame.rs

# Look at lines around 2742-2743
echo "=== Lines 2738-2748 ==="
sed -n '2738,2748p' crates/vm/src/frame.rs

# Look at lines around 2755-2756
echo "=== Lines 2751-2761 ==="
sed -n '2751,2761p' crates/vm/src/frame.rs

# Look at lines around 2767-2768
echo "=== Lines 2763-2773 ==="
sed -n '2763,2773p' crates/vm/src/frame.rs

# Look at lines around 3560-3570
echo "=== Lines 3555-3575 ==="
sed -n '3555,3575p' crates/vm/src/frame.rs

# Check for checked_add/checked_sub usage in the file
echo "=== Checked arithmetic usage ==="
rg -n 'checked_add|checked_sub' crates/vm/src/frame.rs

Repository: RustPython/RustPython

Length of output: 5778


Guard relative jump target arithmetic against u32 wrap-around.

Target computations in multiple instruction handlers use unchecked +/- on u32:

  • ForIter (line 1360)
  • Send (line 2434)
  • InstrumentedJumpForward/Backward (lines 2660, 2667)
  • InstrumentedPopJumpIf{True,False,None,NotNone} (lines 2729, 2742, 2755, 2768)
  • jump_relative_forward/backward helpers (lines 3560, 3567)

With malformed bytecode, overflow/underflow can wrap lasti to an invalid index, causing a panic during instruction fetch instead of failing gracefully.

Use checked_add and checked_sub to detect overflow/underflow and return a system error. Apply consistently across all target calculation sites.
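The requested hardening can be sketched as follows. This is a simplified stand-in rather than the frame's actual code: JumpError substitutes for the VM's real system-error machinery, and the exact base offset (whether lasti has already been advanced past the opcode) depends on the dispatch site.

```rust
// Wrap-safe relative jump target computation, assuming a target of
// lasti + caches + delta (forward) or lasti + caches - delta (backward).
// `JumpError` is a hypothetical stand-in for the VM's system-error type.

#[derive(Debug, PartialEq)]
enum JumpError {
    Overflow,
    Underflow,
}

fn forward_target(lasti: u32, caches: u32, delta: u32) -> Result<u32, JumpError> {
    lasti
        .checked_add(caches)
        .and_then(|v| v.checked_add(delta))
        .ok_or(JumpError::Overflow)
}

fn backward_target(lasti: u32, caches: u32, delta: u32) -> Result<u32, JumpError> {
    lasti
        .checked_add(caches)
        .and_then(|v| v.checked_sub(delta))
        .ok_or(JumpError::Underflow)
}

fn main() {
    assert_eq!(forward_target(10, 1, 5), Ok(16));
    // Malformed bytecode: a backward delta larger than the current index
    // fails gracefully instead of wrapping around to a huge u32 target.
    assert_eq!(backward_target(2, 0, 100), Err(JumpError::Underflow));
    assert_eq!(forward_target(u32::MAX, 1, 0), Err(JumpError::Overflow));
}
```

The same checked pattern applies at every call site that currently does unchecked +/- on the instruction index, so overflow and underflow both surface as errors rather than panics during fetch.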

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/frame.rs` around lines 1360 - 1361, Multiple instruction
handlers (e.g., ForIter, Send, InstrumentedJumpForward,
InstrumentedJumpBackward, InstrumentedPopJumpIfTrue/False/None/NotNone) and the
helpers jump_relative_forward/jump_relative_backward compute relative targets
using unchecked u32 arithmetic, which can wrap; update each target computation
(where you currently call bytecode::Label(self.lasti() + 1 + u32::from(arg)) or
similar) to use checked_add/checked_sub on self.lasti() (and on added/subtracted
offsets), handle None by returning a system error (propagate the error type your
frame code uses), and then call execute_for_iter or the existing branch logic
with the validated Label; apply the same pattern in
jump_relative_forward/jump_relative_backward to consistently detect
overflow/underflow and return an appropriate system error instead of allowing a
panic during fetch.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
crates/vm/src/frame.rs (1)

3557-3575: ⚠️ Potential issue | 🟠 Major

Guard relative jump arithmetic against u32 wrap-around.

Line 3564 and Line 3574 use unchecked arithmetic, and the same pattern is still present at Line 1360, Line 2434, Line 2660, Line 2671, and Line 2729+ sites. With malformed bytecode, this can wrap/underflow lasti and redirect execution to invalid instruction indices.

🐛 Suggested direction
- fn jump_relative_forward(&mut self, delta: u32, caches: u32) {
-     let target = self.lasti() + caches + delta;
-     self.update_lasti(|i| *i = target);
- }
+ fn jump_relative_forward(
+     &mut self,
+     delta: u32,
+     caches: u32,
+     vm: &VirtualMachine,
+ ) -> PyResult<()> {
+     let target = self
+         .lasti()
+         .checked_add(caches)
+         .and_then(|v| v.checked_add(delta))
+         .ok_or_else(|| vm.new_system_error("relative forward jump overflow".to_owned()))?;
+     self.update_lasti(|i| *i = target);
+     Ok(())
+ }

- fn jump_relative_backward(&mut self, delta: u32, caches: u32) {
-     let target = self.lasti() + caches - delta;
-     self.update_lasti(|i| *i = target);
- }
+ fn jump_relative_backward(
+     &mut self,
+     delta: u32,
+     caches: u32,
+     vm: &VirtualMachine,
+ ) -> PyResult<()> {
+     let target = self
+         .lasti()
+         .checked_add(caches)
+         .and_then(|v| v.checked_sub(delta))
+         .ok_or_else(|| vm.new_system_error("relative backward jump underflow".to_owned()))?;
+     self.update_lasti(|i| *i = target);
+     Ok(())
+ }
#!/bin/bash
# Read-only verification: list unchecked relative jump math sites and checked arithmetic usage.
rg -n 'lasti\(\)\s*\+\s*1\s*\+\s*u32::from\(arg\)|lasti\(\)\s*\+\s*u32::from\(arg\)|lasti\(\)\s*\+\s*1\s*-\s*u32::from\(arg\)|\+\s*caches\s*\+\s*delta|\+\s*caches\s*-\s*delta' crates/vm/src/frame.rs
rg -n 'checked_add|checked_sub' crates/vm/src/frame.rs

Also applies to: 1358-1361, 2433-2434, 2660-2672, 2728-2773

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/frame.rs` around lines 3557 - 3575, Guard all relative-jump
arithmetic against u32 wrap/underflow by replacing unchecked
addition/subtraction in jump_relative_forward and jump_relative_backward (and
the other sites with the same pattern) with checked arithmetic (e.g.,
u32::checked_add / u32::checked_sub); compute target via checked ops, and if
they return None, stop and handle the invalid target (propagate an
Err/trap/panic consistent with the frame's error model) instead of calling
update_lasti with a wrapped value; ensure the same pattern is applied to the
other occurrences (the similar add/sub uses that set lasti via update_lasti) so
all malformed bytecode cannot produce wrapped/invalid instruction indices.
🧹 Nitpick comments (4)
crates/jit/src/instructions.rs (1)

184-204: Prefer deriving cache widths from Instruction::cache_entries() to prevent drift.

While the current hard-coded cache widths (0 and 1) are correct, duplicating this metadata creates maintenance risk. If cache_entries() is updated, the JIT's target calculation could silently diverge. Consider refactoring instruction_target to query the opcode metadata directly, ensuring JIT labels remain synchronized with compiler-core.
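The suggested refactor can be sketched like this. Instr and its counts are toy stand-ins (the real counts live on compiler-core's Instruction type); the sketch only illustrates keeping the JIT's target math on a single source of truth:

```rust
// Sketch: derive cache widths from one metadata method instead of
// hard-coding 0/1 at each jump-resolution site. `Instr` is a toy enum
// standing in for the real Instruction type and its cache_entries().

#[derive(Clone, Copy)]
enum Instr {
    JumpForward,
    ForIter,
    Send,
}

impl Instr {
    /// Single source of truth for trailing CACHE units.
    fn cache_entries(self) -> u32 {
        match self {
            Instr::ForIter | Instr::Send => 1,
            Instr::JumpForward => 0,
        }
    }
}

/// JIT-side forward-target resolution that stays in sync with the metadata:
/// skip the instruction itself, then its CACHE units, then apply the delta.
fn jump_target_forward(offset: u32, instr: Instr, delta: u32) -> u32 {
    offset + 1 + instr.cache_entries() + delta
}

fn main() {
    // A ForIter at offset 10 with delta 5 resolves past its CACHE slot.
    assert_eq!(jump_target_forward(10, Instr::ForIter, 5), 17);
    // A plain JumpForward has no CACHE slot, so the target is one unit earlier.
    assert_eq!(jump_target_forward(10, Instr::JumpForward, 5), 16);
}
```

With this shape, updating a cache count in one place automatically propagates to every JIT target calculation.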

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/jit/src/instructions.rs` around lines 184 - 204, The match in
instruction_target currently uses hard-coded cache widths (0 and 1) when calling
Self::jump_target_forward/backward; replace those magic numbers by querying the
opcode's cache width from the instruction metadata (use
Instruction::cache_entries() or the equivalent method on the matched variant)
and pass that value into jump_target_forward/jump_target_backward so the JIT
follows the canonical cache entry count; update all arms that currently pass 0
or 1 (e.g., Instruction::JumpForward, JumpBackward, JumpBackwardNoInterrupt,
PopJumpIfFalse/True/IfNone/IfNotNone, ForIter, Send) to compute width =
instruction.cache_entries() (or the per-variant entry count) and use width or
width - 1 where semantics require, preserving the existing offset adjustments.
crates/vm/src/builtins/frame.rs (1)

201-233: Consider making unchecked jump arithmetic explicit with checked operations or comments.

Lines 202, 214, 224, 232, and 253 use unchecked addition and subtraction to compute jump targets. While subsequent bounds checks (j < stacks.len()) prevent out-of-bounds access, the pattern silently skips processing jump targets that wrap (e.g., when oparg exceeds next_i on backward jumps). This defensive behavior is safe but implicit. Either use checked_add/checked_sub to be explicit about handling invalid jumps, or add a comment clarifying that out-of-bounds targets are intentionally skipped during stack analysis.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/builtins/frame.rs` around lines 201 - 233, The jump target
arithmetic currently uses plain add/sub (e.g., computing j = next_i + oparg as
usize and j = next_i - oparg as usize inside the match arms for
Instruction::Send, Instruction::JumpForward, Instruction::JumpBackward /
JumpBackwardNoInterrupt and the earlier case that pops a value) which can
silently wrap and be implicitly ignored by the subsequent j < stacks.len()
checks; update these sites to use explicit checked arithmetic
(checked_add/checked_sub on usize or convert oparg safely) and handle the None
case by skipping the target, or add a short clarifying comment at each
calculation (mentioning j, next_i, oparg, and stacks.len()) that wrapping is
intentionally treated as an invalid/ignored target—pick one approach and apply
it consistently to all occurrences in the file.
crates/codegen/src/compile.rs (1)

7918-7922: Async inlined-comprehension loop cleanup currently diverges from the async-for protocol.

This branch emits GetANext + yield_from without SetupFinally/PopBlock, then emits EndAsyncFor plus an extra PopTop. That differs from the non-inlined async path and can mis-handle stack/exception cleanup if PEP 709 inlining is enabled later.

♻️ Suggested alignment with the non-inlined async pattern
             if generator.is_async {
+                emit!(
+                    self,
+                    PseudoInstruction::SetupFinally {
+                        target: after_block
+                    }
+                );
                 emit!(self, Instruction::GetANext);
                 self.emit_load_const(ConstantData::None);
                 end_async_for_target = self.compile_yield_from_sequence(true)?;
+                emit!(self, PseudoInstruction::PopBlock);
                 self.compile_store(&generator.target)?;
             } else {
                 emit!(
                     self,
                     Instruction::ForIter {
@@
             self.switch_to_block(after_block);
             if is_async {
                 self.emit_end_async_for(end_async_for_target);
-                // Pop the iterator
-                emit!(self, Instruction::PopTop);
             } else {
                 // END_FOR + POP_ITER pattern (CPython 3.14)
                 emit!(self, Instruction::EndFor);
                 emit!(self, Instruction::PopIter);
             }

Also applies to: 7958-7963

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/compile.rs` around lines 7918 - 7922, The async
inlined-comprehension branch (where generator.is_async is checked and the code
emits Instruction::GetANext, self.compile_yield_from_sequence(true), and
self.compile_store(&generator.target)) is missing the SetupFinally/PopBlock
pairing before yielding and leaves an extra PopTop/EndAsyncFor sequence that
diverges from the non-inlined async-for protocol; update this branch (and the
analogous block around the other compile_yield_from_sequence call) to mirror the
non-inlined pattern by emitting a SetupFinally before GetANext/yield_from,
ensuring PopBlock is emitted after the yield sequence, and then using
EndAsyncFor without leaving the stray PopTop — i.e., wrap the GetANext +
compile_yield_from_sequence(true) call with the same
SetupFinally/PopBlock/EndAsyncFor pairing used by the non-inlined async-for path
so stack/exception cleanup matches.
crates/vm/src/frame.rs (1)

581-592: Extract duplicated “fallthrough cache-skip” logic into one helper.

The same post-dispatch cache-skip block is repeated three times. Centralizing it reduces drift risk when cache semantics evolve.

♻️ Refactor sketch
+ #[inline]
+ fn skip_caches_if_fallthrough(&mut self, op: Instruction, lasti_before: u32) {
+     if self.lasti() == lasti_before {
+         let base = op.to_base().unwrap_or(op);
+         let caches = base.cache_entries();
+         if caches > 0 {
+             self.update_lasti(|i| *i += caches as u32);
+         }
+     }
+ }

Then call this helper in the three current call sites.

As per coding guidelines: “When branches differ only in a value but share common logic, extract the differing value first, then call the common logic once to avoid duplicate code.”

Also applies to: 2868-2876, 2907-2916

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/frame.rs` around lines 581 - 592, The post-dispatch
"fallthrough cache-skip" block duplicated around calls to
self.execute_instruction should be extracted into a single helper (e.g., a
private method on Frame like skip_fallthrough_caches(&mut self, op: Instruction)
or skip_fallthrough_caches(&mut self, base_op: Instruction)) that captures the
logic: record lasti_before = self.lasti(), check self.lasti() == lasti_before,
derive base_op = op.to_base().unwrap_or(op), get caches =
base_op.cache_entries(), and if caches > 0 call self.update_lasti(|i| *i +=
caches as u32); replace the three duplicated blocks with calls to this helper
immediately after the corresponding execute_instruction(op, arg, &mut
do_extend_arg, vm) invocations so behavior is identical.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/vm/src/stdlib/sys/monitoring.rs`:
- Around line 408-411: The current branch for Instruction::ForIter and
Instruction::Send always returns Some(after_caches + delta + 1) which can
overshoot when the END_FOR/END_SEND marker isn't at that slot; update the logic
in the Instruction::ForIter / Instruction::Send match arm to mirror the
conditional adjustment used elsewhere: compute candidate = after_caches + delta
+ 1, check whether the instruction at candidate is the corresponding END_FOR or
END_SEND token, and only return candidate if it matches, otherwise return
Some(after_caches + delta); reference Instruction::ForIter, Instruction::Send,
END_FOR, END_SEND, after_caches and delta to locate and apply this conditional
check.

---

Duplicate comments:
In `@crates/vm/src/frame.rs`:
- Around line 3557-3575: Guard all relative-jump arithmetic against u32
wrap/underflow by replacing unchecked addition/subtraction in
jump_relative_forward and jump_relative_backward (and the other sites with the
same pattern) with checked arithmetic (e.g., u32::checked_add /
u32::checked_sub); compute target via checked ops, and if they return None, stop
and handle the invalid target (propagate an Err/trap/panic consistent with the
frame's error model) instead of calling update_lasti with a wrapped value;
ensure the same pattern is applied to the other occurrences (the similar add/sub
uses that set lasti via update_lasti) so all malformed bytecode cannot produce
wrapped/invalid instruction indices.

---

Nitpick comments:
In `@crates/codegen/src/compile.rs`:
- Around line 7918-7922: The async inlined-comprehension branch (where
generator.is_async is checked and the code emits Instruction::GetANext,
self.compile_yield_from_sequence(true), and
self.compile_store(&generator.target)) is missing the SetupFinally/PopBlock
pairing before yielding and leaves an extra PopTop/EndAsyncFor sequence that
diverges from the non-inlined async-for protocol; update this branch (and the
analogous block around the other compile_yield_from_sequence call) to mirror the
non-inlined pattern by emitting a SetupFinally before GetANext/yield_from,
ensuring PopBlock is emitted after the yield sequence, and then using
EndAsyncFor without leaving the stray PopTop — i.e., wrap the GetANext +
compile_yield_from_sequence(true) call with the same
SetupFinally/PopBlock/EndAsyncFor pairing used by the non-inlined async-for path
so stack/exception cleanup matches.

In `@crates/jit/src/instructions.rs`:
- Around line 184-204: The match in instruction_target currently uses hard-coded
cache widths (0 and 1) when calling Self::jump_target_forward/backward; replace
those magic numbers by querying the opcode's cache width from the instruction
metadata (use Instruction::cache_entries() or the equivalent method on the
matched variant) and pass that value into
jump_target_forward/jump_target_backward so the JIT follows the canonical cache
entry count; update all arms that currently pass 0 or 1 (e.g.,
Instruction::JumpForward, JumpBackward, JumpBackwardNoInterrupt,
PopJumpIfFalse/True/IfNone/IfNotNone, ForIter, Send) to compute width =
instruction.cache_entries() (or the per-variant entry count) and use width or
width - 1 where semantics require, preserving the existing offset adjustments.
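The suggestion above, deriving the cache width from instruction metadata instead of hard-coding 0 or 1, can be sketched like this. The `Op` enum, its cache widths, and the target formula are illustrative assumptions, not the real bytecode definitions.

```rust
#[derive(Clone, Copy)]
enum Op {
    JumpForward,
    PopJumpIfFalse,
    ForIter,
}

impl Op {
    // Assumed cache widths, standing in for Instruction::cache_entries().
    fn cache_entries(self) -> u32 {
        match self {
            Op::JumpForward => 0,
            Op::PopJumpIfFalse | Op::ForIter => 1,
        }
    }
}

// Forward target: skip the opcode itself, then its CACHE units, then apply delta.
fn jump_target_forward(pc: u32, op: Op, delta: u32) -> u32 {
    pc + 1 + op.cache_entries() + delta
}

fn main() {
    assert_eq!(jump_target_forward(10, Op::JumpForward, 3), 14);
    assert_eq!(jump_target_forward(10, Op::PopJumpIfFalse, 3), 15);
    assert_eq!(jump_target_forward(10, Op::ForIter, 0), 12);
    println!("ok");
}
```

With this shape, adding or changing an opcode's cache count only touches the metadata table, and the JIT's jump math stays in sync automatically.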

In `@crates/vm/src/builtins/frame.rs`:
- Around line 201-233: The jump target arithmetic currently uses plain add/sub
(e.g., computing j = next_i + oparg as usize and j = next_i - oparg as usize
inside the match arms for Instruction::Send, Instruction::JumpForward,
Instruction::JumpBackward / JumpBackwardNoInterrupt and the earlier case that
pops a value) which can silently wrap and be implicitly ignored by the
subsequent j < stacks.len() checks; update these sites to use explicit checked
arithmetic (checked_add/checked_sub on usize or convert oparg safely) and handle
the None case by skipping the target, or add a short clarifying comment at each
calculation (mentioning j, next_i, oparg, and stacks.len()) that wrapping is
intentionally treated as an invalid/ignored target—pick one approach and apply
it consistently to all occurrences in the file.
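For the mark_stacks sites specifically, the checked variant of the forward/backward target math might look like this. It is a sketch under the review's naming (`next_i`, `oparg`, the stacks length), with wrapping or out-of-range targets reported as `None` so the caller can skip them.

```rust
fn forward_checked(next_i: usize, oparg: usize, len: usize) -> Option<usize> {
    // j = next_i + oparg, rejected if it overflows or lands past the table.
    next_i.checked_add(oparg).filter(|&j| j < len)
}

fn backward_checked(next_i: usize, oparg: usize, len: usize) -> Option<usize> {
    // j = next_i - oparg, rejected if oparg > next_i (would underflow).
    next_i.checked_sub(oparg).filter(|&j| j < len)
}

fn main() {
    assert_eq!(forward_checked(5, 3, 100), Some(8));
    assert_eq!(forward_checked(5, 200, 100), None);
    assert_eq!(backward_checked(5, 3, 100), Some(2));
    assert_eq!(backward_checked(5, 10, 100), None); // malformed: skipped
    println!("ok");
}
```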

In `@crates/vm/src/frame.rs`:
- Around line 581-592: The post-dispatch "fallthrough cache-skip" block
duplicated around calls to self.execute_instruction should be extracted into a
single helper (e.g., a private method on Frame like skip_fallthrough_caches(&mut
self, op: Instruction) or skip_fallthrough_caches(&mut self, base_op:
Instruction)) that captures the logic: record lasti_before = self.lasti(), check
self.lasti() == lasti_before, derive base_op = op.to_base().unwrap_or(op), get
caches = base_op.cache_entries(), and if caches > 0 call self.update_lasti(|i|
*i += caches as u32); replace the three duplicated blocks with calls to this
helper immediately after the corresponding execute_instruction(op, arg, &mut
do_extend_arg, vm) invocations so behavior is identical.
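The extracted helper suggested above might look roughly like this. It is a self-contained sketch: the `Frame` struct and direct field access are simplified stand-ins for the real `lasti()`/`update_lasti` machinery.

```rust
struct Frame {
    lasti: u32,
}

impl Frame {
    // Advance past an instruction's CACHE slots, but only when the dispatched
    // instruction fell through (i.e. it did not already move lasti).
    fn skip_fallthrough_caches(&mut self, lasti_before: u32, caches: u32) {
        if self.lasti == lasti_before && caches > 0 {
            self.lasti += caches;
        }
    }
}

fn main() {
    let mut f = Frame { lasti: 7 };
    f.skip_fallthrough_caches(7, 1); // fell through: skip one CACHE unit
    assert_eq!(f.lasti, 8);
    f.skip_fallthrough_caches(3, 1); // a jump already moved lasti: no skip
    assert_eq!(f.lasti, 8);
    println!("ok");
}
```

Replacing the three duplicated blocks with one such call keeps the fallthrough/jump distinction in a single place.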

ℹ️ Review info

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 49ec351 and 67de59a.

⛔ Files ignored due to path filters (9)
  • Lib/opcode.py is excluded by !Lib/**
  • Lib/test/test_dis.py is excluded by !Lib/**
  • Lib/test/test_monitoring.py is excluded by !Lib/**
  • Lib/test/test_peepholer.py is excluded by !Lib/**
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__if_ands.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__if_mixed.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__if_ors.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__nested_bool_op.snap is excluded by !**/*.snap
  • crates/codegen/src/snapshots/rustpython_codegen__compile__tests__nested_double_async_with.snap is excluded by !**/*.snap
📒 Files selected for processing (9)
  • crates/codegen/src/compile.rs
  • crates/codegen/src/ir.rs
  • crates/compiler-core/src/bytecode/instruction.rs
  • crates/jit/src/instructions.rs
  • crates/jit/tests/common.rs
  • crates/vm/src/builtins/code.rs
  • crates/vm/src/builtins/frame.rs
  • crates/vm/src/frame.rs
  • crates/vm/src/stdlib/sys/monitoring.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • crates/jit/tests/common.rs
  • crates/compiler-core/src/bytecode/instruction.rs

Comment on lines 408 to 411
Instruction::ForIter { .. } | Instruction::Send { .. } => {
// Skip over END_FOR/END_SEND
Some(u32::from(arg) as usize + 1)
Some(after_caches + delta + 1)
}

⚠️ Potential issue | 🟡 Minor

Make END_* skipping conditional for ForIter/Send targets.

Unconditionally adding + 1 can overstep the real target when END_FOR/END_SEND is not at that slot. This should mirror the conditional target adjustment used elsewhere in the PR.

Suggested fix
-                Instruction::ForIter { .. } | Instruction::Send { .. } => {
-                    // Skip over END_FOR/END_SEND
-                    Some(after_caches + delta + 1)
-                }
+                Instruction::ForIter { .. } => {
+                    let t = after_caches + delta;
+                    let t = if matches!(
+                        code.code.instructions.get(t).map(|u| u.op.to_base().unwrap_or(u.op)),
+                        Some(Instruction::EndFor)
+                    ) { t + 1 } else { t };
+                    Some(t)
+                }
+                Instruction::Send { .. } => {
+                    let t = after_caches + delta;
+                    let t = if matches!(
+                        code.code.instructions.get(t).map(|u| u.op.to_base().unwrap_or(u.op)),
+                        Some(Instruction::EndSend)
+                    ) { t + 1 } else { t };
+                    Some(t)
+                }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/stdlib/sys/monitoring.rs` around lines 408 - 411, The current
branch for Instruction::ForIter and Instruction::Send always returns
Some(after_caches + delta + 1) which can overshoot when the END_FOR/END_SEND
marker isn't at that slot; update the logic in the Instruction::ForIter /
Instruction::Send match arm to mirror the conditional adjustment used elsewhere:
compute candidate = after_caches + delta + 1, check whether the instruction at
candidate is the corresponding END_FOR or END_SEND token, and only return
candidate if it matches, otherwise return Some(after_caches + delta); reference
Instruction::ForIter, Instruction::Send, END_FOR, END_SEND, after_caches and
delta to locate and apply this conditional check.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
crates/vm/src/stdlib/sys/monitoring.rs (1)

408-411: ⚠️ Potential issue | 🟡 Minor

Make END_* skipping conditional for ForIter/Send.

Line 410 always adds + 1, which can skip past the real target when END_FOR/END_SEND is not actually at that slot.

Suggested fix
-                Instruction::ForIter { .. } | Instruction::Send { .. } => {
-                    // Skip over END_FOR/END_SEND
-                    Some(after_caches + delta + 1)
-                }
+                Instruction::ForIter { .. } => {
+                    let t = after_caches + delta;
+                    let is_end_for = matches!(
+                        code.code.instructions.get(t).map(|u| u.op.to_base().unwrap_or(u.op)),
+                        Some(Instruction::EndFor)
+                    );
+                    Some(if is_end_for { t + 1 } else { t })
+                }
+                Instruction::Send { .. } => {
+                    let t = after_caches + delta;
+                    let is_end_send = matches!(
+                        code.code.instructions.get(t).map(|u| u.op.to_base().unwrap_or(u.op)),
+                        Some(Instruction::EndSend)
+                    );
+                    Some(if is_end_send { t + 1 } else { t })
+                }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/stdlib/sys/monitoring.rs` around lines 408 - 411, The branch
handling Instruction::ForIter and Instruction::Send always returns
Some(after_caches + delta + 1), which incorrectly skips one slot even when the
following opcode isn't END_FOR/END_SEND; change this to conditionally add +1 by
looking up the instruction at index after_caches + delta (using the same
bytecode/instruction buffer visible in the surrounding function) and: if that
instruction is Instruction::EndFor (for ForIter) or Instruction::EndSend (for
Send) return Some(after_caches + delta + 1), otherwise return Some(after_caches
+ delta); keep referencing the Instruction enum and the local variables
after_caches and delta to locate and implement this conditional logic.
🧹 Nitpick comments (1)
crates/vm/src/frame.rs (1)

2534-2566: Document unchecked arithmetic assumption in helper functions.

The jump_relative_forward and jump_relative_backward helpers use unchecked arithmetic, which the doc comments justify as "compiler guarantees valid targets." While this is reasonable for well-formed bytecode, consider adding a debug assertion to catch malformed bytecode during development.

♻️ Optional: Add debug assertions for safety
     #[inline]
     fn jump_relative_forward(&mut self, delta: u32, caches: u32) {
+        debug_assert!(
+            self.lasti().checked_add(caches).and_then(|v| v.checked_add(delta)).is_some(),
+            "jump_relative_forward overflow"
+        );
         let target = self.lasti() + caches + delta;
         self.update_lasti(|i| *i = target);
     }

     #[inline]
     fn jump_relative_backward(&mut self, delta: u32, caches: u32) {
+        debug_assert!(
+            self.lasti().checked_add(caches).and_then(|v| v.checked_sub(delta)).is_some(),
+            "jump_relative_backward underflow"
+        );
         let target = self.lasti() + caches - delta;
         self.update_lasti(|i| *i = target);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/frame.rs` around lines 2534 - 2566, The helpers
jump_relative_forward and jump_relative_backward use unchecked arithmetic based
on the assumption that the compiler or caller ensures valid targets; add debug
assertions in each helper (e.g., debug_assert! or debug_assert_eq!) that
validate the computed target is within the code bounds and that the resulting
lasti/offset is non-negative and aligns with instruction-width expectations
before performing the unchecked arithmetic, and add a short comment documenting
the invariant being checked so malformed bytecode is caught in debug builds
while release keeps the unchecked ops.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/codegen/src/compile.rs`:
- Line 5168: The emit!(self, Instruction::NotTaken) call is incorrectly added on
the async-iteration/SEND path; remove that emission so async-iteration (e.g.,
Send/ForIter handling in compile.rs) does not produce NOT_TAKEN. Update the
related logic so only boolean conditional jumps emit NOT_TAKEN—ensure
is_conditional_jump in ir.rs matches only POP_JUMP_IF_* (and does not include
ForIter or Send), and delete the emit!(..., Instruction::NotTaken) from the
async-iteration branch (the code that handles Send/ForIter) so branch-monitoring
semantics remain correct.

In `@crates/vm/src/builtins/code.rs`:
- Around line 972-975: The comment for the END_ASYNC_FOR handling is incorrect:
it says "src = END_SEND position (i - oparg)" but the code computes src using
next_i.checked_sub(oparg as usize) where next_i = i + 1 (i.e. i + 1 - oparg).
Update the comment to match the code's formula (e.g., "src = END_SEND position
(i + 1 - oparg)") or change the calculation to match the original comment;
adjust the comment near the code that defines next_i, oparg, and src_i in the
END_ASYNC_FOR block so the math and wording are consistent.

---

Duplicate comments:
In `@crates/vm/src/stdlib/sys/monitoring.rs`:
- Around line 408-411: The branch handling Instruction::ForIter and
Instruction::Send always returns Some(after_caches + delta + 1), which
incorrectly skips one slot even when the following opcode isn't
END_FOR/END_SEND; change this to conditionally add +1 by looking up the
instruction at index after_caches + delta (using the same bytecode/instruction
buffer visible in the surrounding function) and: if that instruction is
Instruction::EndFor (for ForIter) or Instruction::EndSend (for Send) return
Some(after_caches + delta + 1), otherwise return Some(after_caches + delta);
keep referencing the Instruction enum and the local variables after_caches and
delta to locate and implement this conditional logic.

---

Nitpick comments:
In `@crates/vm/src/frame.rs`:
- Around line 2534-2566: The helpers jump_relative_forward and
jump_relative_backward use unchecked arithmetic based on the assumption that the
compiler or caller ensures valid targets; add debug assertions in each helper
(e.g., debug_assert! or debug_assert_eq!) that validate the computed target is
within the code bounds and that the resulting lasti/offset is non-negative and
aligns with instruction-width expectations before performing the unchecked
arithmetic, and add a short comment documenting the invariant being checked so
malformed bytecode is caught in debug builds while release keeps the unchecked
ops.

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 67de59a and 0be59d5.

🚧 Files skipped from review as they are similar to previous changes (1)
  • crates/jit/tests/common.rs

end_async_for_target = self.compile_yield_from_sequence(true)?;
// POP_BLOCK for SETUP_FINALLY - only GetANext/yield_from are protected
emit!(self, PseudoInstruction::PopBlock);
emit!(self, Instruction::NotTaken);

⚠️ Potential issue | 🟠 Major

Remove Instruction::NotTaken from async-iteration control flow.

This NOT_TAKEN is tied to the SEND-driven async iteration path, not a boolean POP_JUMP_IF_* conditional jump. It can distort branch-monitoring semantics.

Suggested fix
-            emit!(self, Instruction::NotTaken);

Based on learnings: In RustPython's codegen ir.rs, is_conditional_jump should only match POP_JUMP_IF_*; ForIter and Send should not be included because NOT_TAKEN is for boolean-test conditional jumps.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
emit!(self, Instruction::NotTaken);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/compile.rs` at line 5168, The emit!(self,
Instruction::NotTaken) call is incorrectly added on the async-iteration/SEND
path; remove that emission so async-iteration (e.g., Send/ForIter handling in
compile.rs) does not produce NOT_TAKEN. Update the related logic so only boolean
conditional jumps emit NOT_TAKEN—ensure is_conditional_jump in ir.rs matches
only POP_JUMP_IF_* (and does not include ForIter or Send), and delete the
emit!(..., Instruction::NotTaken) from the async-iteration branch (the code that
handles Send/ForIter) so branch-monitoring semantics remain correct.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
crates/vm/src/frame.rs (1)

1349-1352: ⚠️ Potential issue | 🟠 Major

Guard relative jump arithmetic with checked operations (still unresolved).

This repeats a previously reported issue: relative target math is still unchecked in helper and direct paths. Malformed bytecode can wrap/underflow lasti and corrupt control-flow before a later panic.

#!/bin/bash
# Verify unresolved unchecked relative target arithmetic in frame execution.
# Expected after fix: no matches.
rg -n 'self\.lasti\(\)\s*\+\s*1\s*\+\s*u32::from\(arg\)|self\.lasti\(\)\s*\+\s*1\s*-\s*u32::from\(arg\)|self\.lasti\(\)\s*\+\s*caches\s*[+-]\s*delta' crates/vm/src/frame.rs
Suggested fix direction
- fn jump_relative_forward(&mut self, delta: u32, caches: u32) {
-     let target = self.lasti() + caches + delta;
-     self.update_lasti(|i| *i = target);
- }
+ fn jump_relative_forward(
+     &mut self,
+     delta: u32,
+     caches: u32,
+     vm: &VirtualMachine,
+ ) -> FrameResult {
+     let target = self
+         .lasti()
+         .checked_add(caches)
+         .and_then(|v| v.checked_add(delta))
+         .ok_or_else(|| vm.new_system_error("relative jump overflow".to_owned()))?;
+     self.update_lasti(|i| *i = target);
+     Ok(None)
+ }

Apply the same checked pattern to backward targets and direct relative computations.

Also applies to: 2428-2429, 2655-2667, 2724-2763, 3538-3557

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/frame.rs` around lines 1349 - 1352, The relative jump
computation in Instruction::ForIter (and similar sites) performs unchecked
arithmetic like bytecode::Label(self.lasti() + 1 + u32::from(arg)), allowing
wrap/underflow; update these sites (e.g., inside the match arm that calls
self.execute_for_iter(vm, target) and other locations noted) to use checked
arithmetic (checked_add/checked_sub or checked_{add,sub}_then_convert) and
return an Err/raise a VM error if the computed target overflows or is out of
range before constructing bytecode::Label; ensure the same checked pattern is
applied to backward targets and direct relative computations referenced in
execute_for_iter and other helper paths so all relative jumps validate bounds
before use.
crates/codegen/src/compile.rs (1)

5126-5126: ⚠️ Potential issue | 🟠 Major

Remove Instruction::NotTaken from async-iteration SEND flow.

NOT_TAKEN here is attached to async iteration control flow (SEND path), not a boolean POP_JUMP_IF_* branch, so monitoring semantics become incorrect.

Suggested fix
-            emit!(self, Instruction::NotTaken);

Based on learnings: In RustPython's codegen ir.rs, the is_conditional_jump function should only match POP_JUMP_IF_* instruction variants; ForIter and Send should not be included because NOT_TAKEN is for boolean-test conditional jumps only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/compile.rs` at line 5126, The emitted
Instruction::NotTaken is wrongly applied to the async-iteration SEND path;
remove the emit!(self, Instruction::NotTaken) call from the SEND/SEND-related
async-iteration flow so that NOT_TAKEN is only used for boolean POP_JUMP_IF_*
branches. Locate the SEND/Send handling in compile.rs (the emit!(...,
Instruction::NotTaken) call) and delete that emission, and also ensure the
is_conditional_jump matcher (or equivalent logic) only considers POP_JUMP_IF_*
variants (not ForIter or Send) so NOT_TAKEN stays exclusive to boolean-test
conditional jumps.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/codegen/src/ir.rs`:
- Around line 346-348: Clarify the comment explaining END_SEND_OFFSET (const
END_SEND_OFFSET: u32 = 5) by explicitly breaking down the 5 code units: SEND
instruction (1), its cache entry (1), the three intermediate code units between
the SEND/CACHE pair and END_SEND (3), summing to 5; update the comment near
END_SEND_OFFSET to list these components and their counts and reference SEND and
END_SEND so readers can verify the correspondence with CPython's
END_SEND_OFFSET.

In `@crates/vm/src/builtins/frame.rs`:
- Around line 201-203: In mark_stacks in crates/vm/src/builtins/frame.rs,
replace all unchecked arithmetic that computes jump targets (e.g., `let j =
next_i + oparg as usize` and `let j = next_i - oparg as usize`) with checked
operations: use next_i.checked_add(oparg as usize) for forward targets and
next_i.checked_sub(oparg as usize) for backward targets, and if the checked
operation returns None simply continue/skip that instruction (gracefully skip
stack analysis for that target) so underflow/overflow cannot produce an
out-of-bounds j; update every occurrence mentioned (the forward cases at the
three `next_i + oparg as usize` sites, the backward `next_i - oparg as usize`
site, and the `let mut j = next_i + oparg as usize` site) to follow this
pattern.

---

Duplicate comments:
In `@crates/codegen/src/compile.rs`:
- Line 5126: The emitted Instruction::NotTaken is wrongly applied to the
async-iteration SEND path; remove the emit!(self, Instruction::NotTaken) call
from the SEND/SEND-related async-iteration flow so that NOT_TAKEN is only used
for boolean POP_JUMP_IF_* branches. Locate the SEND/Send handling in compile.rs
(the emit!(..., Instruction::NotTaken) call) and delete that emission, and also
ensure the is_conditional_jump matcher (or equivalent logic) only considers
POP_JUMP_IF_* variants (not ForIter or Send) so NOT_TAKEN stays exclusive to
boolean-test conditional jumps.

In `@crates/vm/src/frame.rs`:
- Around line 1349-1352: The relative jump computation in Instruction::ForIter
(and similar sites) performs unchecked arithmetic like
bytecode::Label(self.lasti() + 1 + u32::from(arg)), allowing wrap/underflow;
update these sites (e.g., inside the match arm that calls
self.execute_for_iter(vm, target) and other locations noted) to use checked
arithmetic (checked_add/checked_sub or checked_{add,sub}_then_convert) and
return an Err/raise a VM error if the computed target overflows or is out of
range before constructing bytecode::Label; ensure the same checked pattern is
applied to backward targets and direct relative computations referenced in
execute_for_iter and other helper paths so all relative jumps validate bounds
before use.

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 0be59d5 and 9792d68.


Comment on lines +346 to +348
// The offset (in code units) of END_SEND from SEND in the yield-from sequence.
// Matches CPython's END_SEND_OFFSET in Python/assemble.c.
const END_SEND_OFFSET: u32 = 5;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check the actual code at the specified lines
head -360 crates/codegen/src/ir.rs | tail -20

Repository: RustPython/RustPython

Length of output: 1178


🏁 Script executed:

# Search for END_SEND_OFFSET usage and context
rg -n "END_SEND_OFFSET" --type rust -A 5 -B 2



🏁 Script executed:

# Check SEND instruction definition
rg -n "SEND" --type rust | grep -E "(Instruction|Send|cache)" | head -20



🏁 Script executed:

# Look for bytecode instruction definitions to understand the structure
rg -n "enum.*Instruction" --type rust -A 30 | head -80



🏁 Script executed:

# Search for SEND instruction definition with cache entries
rg -n "SEND|cache" --type rust -A 3 -B 3 | grep -A 10 -B 10 "Send"



🏁 Script executed:

# Look for instruction cache entries definitions
rg -n "cache_entries" --type rust | head -20



🏁 Script executed:

# Search for bytecode layout or instruction size definitions
rg -n "cache.*entry|CACHE|code.*unit" --type rust | head -30



🏁 Script executed:

# Check if there's any documentation or comments about SEND's structure
rg -n "Send.*cache|Send.*offset" --type rust -A 5 -B 5



🏁 Script executed:

# Find the cache_entries implementation for SEND instruction
rg -n "fn cache_entries" --type rust -A 60 | grep -A 60 "cache_entries"



🏁 Script executed:

# More specifically, look for SEND's cache entries in the match statement
rg -n "Self::Send" --type rust -B 5 -A 5 | head -100



🏁 Script executed:

# Check the full cache_entries implementation
sed -n '517,650p' crates/compiler-core/src/bytecode/instruction.rs



🏁 Script executed:

# Find any other references to END_SEND_OFFSET or documentation about the bytecode layout
rg -n "END_SEND|ForIter|CLEANUP_THROW" --type rust -B 3 -A 3 | grep -A 10 -B 10 "offset\|sequence"



🏁 Script executed:

# Check what SEND and related instructions look like in compile.rs for context
sed -n '6750,6810p' crates/codegen/src/compile.rs



🏁 Script executed:

# Let me check the actual bytecode layout for ForIter vs Send to understand the pattern
rg -n "ForIter.*cache|ForIter.*=>" --type rust -B 2 -A 2 | head -50



🏁 Script executed:

# Check if there are any tests or documentation about END_SEND_OFFSET
rg -n "END_SEND_OFFSET|5.*code.*unit|offset.*SEND" --type rust



🏁 Script executed:

# Check what instructions come between SEND and END_SEND in practice
# by looking at the actual code generation
grep -n "YIELD_VALUE\|RESUME\|SETUP_FINALLY" crates/codegen/src/compile.rs | sed -n '6770,6810p'



🏁 Script executed:

# Verify the exact bytecode units by checking any test cases
find . -name "*.rs" -type f | xargs grep -l "END_SEND_OFFSET\|yield.*from.*sequence" | head -5



Verify the breakdown of END_SEND_OFFSET.

The value of 5 is correct and matches CPython's END_SEND_OFFSET. However, the breakdown needs clarification: SEND (1 unit) + its 1 cache entry (1 unit) + intermediate instructions (3 units) = 5 units to END_SEND. The original comment's accounting of "SEND (1) + cache (1) + END_SEND (1) + potential intermediate" is incomplete as it doesn't explicitly detail the 3 intermediate code units between the SEND/CACHE pair and END_SEND.
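To make the arithmetic explicit (placeholder counts only; the concrete intermediate opcodes are not enumerated in the review):

```rust
fn main() {
    let send = 1u32;         // the SEND instruction itself
    let send_cache = 1u32;   // SEND's single CACHE entry
    let intermediate = 3u32; // three code units between SEND/CACHE and END_SEND
    // END_SEND sits five code units after SEND, matching END_SEND_OFFSET.
    assert_eq!(send + send_cache + intermediate, 5);
    println!("ok");
}
```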

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/ir.rs` around lines 346 - 348, Clarify the comment
explaining END_SEND_OFFSET (const END_SEND_OFFSET: u32 = 5) by explicitly
breaking down the 5 code units: SEND instruction (1), its cache entry (1), the
three intermediate code units between the SEND/CACHE pair and END_SEND (3),
summing to 5; update the comment near END_SEND_OFFSET to list these components
and their counts and reference SEND and END_SEND so readers can verify the
correspondence with CPython's END_SEND_OFFSET.

Comment on lines +201 to 203
// Relative forward: target = after_caches + delta
let j = next_i + oparg as usize;
next_stack = pop_value(next_stack);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n crates/vm/src/builtins/frame.rs | sed -n '190,270p'



🏁 Script executed:

rg -n "fn mark_stacks" crates/vm/src/builtins/frame.rs -A 5



🏁 Script executed:

rg -n "set_f_lineno\|mark_stacks" crates/vm/src/builtins/frame.rs | head -20



🏁 Script executed:

# Search for how mark_stacks results are used
rg -n "mark_stacks" crates/vm/src/builtins/frame.rs



🏁 Script executed:

cat -n crates/vm/src/builtins/frame.rs | sed -n '520,550p'



🏁 Script executed:

# Look for set_f_lineno and how stacks are used
rg -n "set_f_lineno\|let stacks = mark_stacks" crates/vm/src/builtins/frame.rs -B 2 -A 10



🏁 Script executed:

# Check if bytecode validation happens or if malformed bytecode is possible
rg -n "bytecode\|Constant\|CodeObject" crates/vm/src/builtins/frame.rs | head -20



🏁 Script executed:

# Check what compatible_stack does and how critical stack accuracy is
rg -n "fn compatible_stack" crates/vm/src/builtins/frame.rs -A 10



🏁 Script executed:

# Check if there are any unchecked_arithmetic or overflow checks elsewhere
rg -n "checked_sub\|checked_add\|overflow" crates/vm/src/builtins/frame.rs



🏁 Script executed:

# Check bytecode loading and validation
rg -n "CodeObject\|from_bytes\|validate" crates/vm/src/builtins/frame.rs | head -10



🏁 Script executed:

# Look at the broader mark_stacks function to understand all code paths
cat -n crates/vm/src/builtins/frame.rs | sed -n '153,290p'

Repository: RustPython/RustPython

Length of output: 7297


🏁 Script executed:

# Check if bytecode comes from user input or trusted sources
rg -n "bytecode\|CodeObject" crates/vm/src/builtins/frame.rs | head -5

Repository: RustPython/RustPython

Length of output: 47


🏁 Script executed:

# Look for similar patterns in the codebase to see if checked arithmetic is used elsewhere
rg -n "checked_sub\|saturating_sub" crates/vm/src/builtins/ | head -20

Repository: RustPython/RustPython

Length of output: 47


🏁 Script executed:

# Let me verify the exact line numbers match the review comment
cat -n crates/vm/src/builtins/frame.rs | sed -n '201,203p;213,215p;223,225p;231,233p;252,254p'

Repository: RustPython/RustPython

Length of output: 1178


🏁 Script executed:

# Verify what happens with the bounds check on underflow
python3 << 'EOF'
# Demonstrate underflow behavior
next_i = 5
oparg = 10

# Simulating unchecked subtraction
result_unchecked = next_i - oparg  # Python allows negative, but Rust usize wraps
print(f"next_i={next_i}, oparg={oparg}")
print(f"Rust usize subtraction wraps to: {(1 << 64) + result_unchecked if result_unchecked < 0 else result_unchecked}")
print(f"After wrap, j >= stacks.len() (len ~= 256 for typical code): {(1 << 64) + result_unchecked > 256}")
EOF

Repository: RustPython/RustPython

Length of output: 203


🏁 Script executed:

# Check if there's any bytecode validation in the codebase
find crates -name "*.rs" | xargs grep -l "CodeObject\|bytecode" | head -5

Repository: RustPython/RustPython

Length of output: 231


🏁 Script executed:

# Look at how CodeObject is constructed
rg -n "impl.*CodeObject\|fn new" crates/bytecode/src/lib.rs 2>/dev/null | head -10

Repository: RustPython/RustPython

Length of output: 47


🏁 Script executed:

cat -n crates/vm/src/builtins/frame.rs | sed -n '229,239p'

Repository: RustPython/RustPython

Length of output: 688


Use checked arithmetic for `mark_stacks` relative targets.

Unchecked `next_i ± oparg as usize` operations can underflow on malformed bytecode. Line 232 (the backward jump) is particularly vulnerable: when `oparg > next_i`, the subtraction wraps to a huge value that silently fails the bounds check, so stack analysis for that target is skipped. This corrupts the stack-compatibility results used by `set_f_lineno` without raising an error.

Apply `checked_sub` for backward targets and `checked_add` for forward targets, gracefully skipping the instruction when the arithmetic fails.

Affected lines
  • Line 202: `let j = next_i + oparg as usize;` (forward)
  • Line 214: `let j = next_i + oparg as usize;` (forward)
  • Line 224: `let j = next_i + oparg as usize;` (forward)
  • Line 232: `let j = next_i - oparg as usize;` (backward)
  • Line 253: `let mut j = next_i + oparg as usize;` (forward)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/vm/src/builtins/frame.rs` around lines 201 - 203, In mark_stacks in
crates/vm/src/builtins/frame.rs, replace all unchecked arithmetic that computes
jump targets (e.g., `let j = next_i + oparg as usize` and `let j = next_i -
oparg as usize`) with checked operations: use next_i.checked_add(oparg as usize)
for forward targets and next_i.checked_sub(oparg as usize) for backward targets,
and if the checked operation returns None simply continue/skip that instruction
(gracefully skip stack analysis for that target) so underflow/overflow cannot
produce an out-of-bounds j; update every occurrence mentioned (the forward cases
at the three `next_i + oparg as usize` sites, the backward `next_i - oparg as
usize` site, and the `let mut j = next_i + oparg as usize` site) to follow this
pattern.
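The fix the prompt describes can be sketched in isolation. Note that `jump_target`, its signature, and the opcode values below are hypothetical stand-ins for illustration, not the actual code in `mark_stacks`:

```rust
// Hypothetical helper illustrating the checked-arithmetic pattern the
// review proposes; the name and signature are invented for this sketch.
fn jump_target(next_i: usize, oparg: u32, forward: bool) -> Option<usize> {
    if forward {
        next_i.checked_add(oparg as usize) // None on overflow
    } else {
        next_i.checked_sub(oparg as usize) // None on underflow (oparg > next_i)
    }
}

fn main() {
    // An in-range forward jump resolves normally.
    assert_eq!(jump_target(10, 5, true), Some(15));
    // A backward jump past the start of the code would underflow;
    // the caller can skip stack analysis for that target instead of wrapping.
    assert_eq!(jump_target(5, 10, false), None);
}
```

With this shape, each jump site becomes `let Some(j) = jump_target(...) else { continue; }`, so malformed bytecode degrades to a skipped target rather than a silently wrong analysis.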

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/codegen/src/ir.rs (1)

1519-1522: ⚠️ Potential issue | 🟡 Minor

Clear stale operand/cache metadata when rewriting a jump to Nop.

When a redundant jump is converted to Nop, the old jump arg is retained. If that NOP is preserved as a line marker, it can still emit unnecessary EXTENDED_ARG code units and skew code size.

💡 Proposed fix
```diff
             if is_jump_to_next && let Some(last_instr) = blocks[idx].instructions.last_mut() {
                 last_instr.instr = Instruction::Nop.into();
+                last_instr.arg = OpArg::new(0);
                 last_instr.target = BlockIdx::NULL;
+                last_instr.cache_entries = 0;
             }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/ir.rs` around lines 1519 - 1522, When replacing a
redundant jump with a Nop in the is_jump_to_next branch, also clear any leftover
operand/cache metadata so the old jump argument doesn't persist; in the same
block where you set last_instr.instr = Instruction::Nop.into() and
last_instr.target = BlockIdx::NULL, additionally reset the jump operand fields
(e.g. last_instr.arg or last_instr.operand) and any cache/metadata fields (e.g.
last_instr.cache or similar) to their neutral/default values so the NOP cannot
emit EXTENDED_ARGs or skew code size.
🧹 Nitpick comments (1)
crates/codegen/src/ir.rs (1)

1568-1596: Refactor duplicated jump-direction logic in pseudo jump resolution.

Both pseudo-jump arms perform the same forward/backward check. Extract the direction once and reuse it to reduce duplication.

♻️ Proposed refactor
```diff
             let target_pos = block_order[target.idx()];
+            let is_forward = target_pos > source_pos;
             info.instr = match info.instr {
                 AnyInstruction::Pseudo(PseudoInstruction::Jump { .. }) => {
-                    if target_pos > source_pos {
+                    if is_forward {
                         Instruction::JumpForward {
                             target: Arg::marker(),
                         }
                         .into()
                     } else {
                         Instruction::JumpBackward {
                             target: Arg::marker(),
                         }
                         .into()
                     }
                 }
                 AnyInstruction::Pseudo(PseudoInstruction::JumpNoInterrupt { .. }) => {
-                    if target_pos > source_pos {
+                    if is_forward {
                         Instruction::JumpForward {
                             target: Arg::marker(),
                         }
                         .into()
                     } else {
                         Instruction::JumpBackwardNoInterrupt {
                             target: Arg::marker(),
                         }
                         .into()
                     }
                 }
                 other => other,
             };
```

As per coding guidelines: "When branches differ only in a value but share common logic, extract the differing value first, then call the common logic once to avoid duplicate code."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/codegen/src/ir.rs` around lines 1568 - 1596, The match on info.instr
duplicates the target_pos > source_pos check for both PseudoInstruction::Jump
and PseudoInstruction::JumpNoInterrupt; instead compute a boolean (e.g.
is_forward = target_pos > source_pos) before or at the top of the match and then
use that to select the appropriate Instruction variant. Update the arms for
AnyInstruction::Pseudo(PseudoInstruction::Jump { .. }) and
AnyInstruction::Pseudo(PseudoInstruction::JumpNoInterrupt { .. }) to use the
computed is_forward and construct either Instruction::JumpForward { target:
Arg::marker() }.into() or the corresponding backward variant
(Instruction::JumpBackward{...} or Instruction::JumpBackwardNoInterrupt{...})
without repeating the comparison, and assign the result back to info.instr.

ℹ️ Review info

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9792d68 and dd6f2a2.

📒 Files selected for processing (1)
  • crates/codegen/src/ir.rs

- Add cache_entries() method to Instruction enum
- Emit CACHE code units after opcodes in finalize_code
- Handle NO_LOCATION (line=-1) in linetable for CACHE entries
- Account for CACHE entries in exception table generation
- Skip CACHE entries in VM execution loop (with jump detection)
- Handle CACHE in InstrumentedLine/InstrumentedInstruction/InstrumentedForIter/InstrumentedNotTaken
- Skip CACHE in monitoring instrumentation phases
- Update co_branches() for cache-adjusted offsets
- Restore _cache_format in Lib/opcode.py
- Remove expectedFailure from test_c_call, test_start_offset
- Convert jump arguments from absolute to relative offsets
  in frame.rs, monitoring.rs, and stack_analysis
- Add jump_relative_forward/backward helpers to ExecutingFrame
- Resolve pseudo jump instructions before offset fixpoint loop
- Emit NOP for break, continue, pass to match line-tracing
- Fix async for: emit EndAsyncFor with correct target, add NotTaken
- Fix comprehension if-cleanup to use separate block
- Fix super() source range for multi-line calls
- Fix NOP removal to preserve line-marker NOPs
- Fix InstrumentedLine cache skipping after re-dispatch
- Match InstrumentedResume/YieldValue in yield_from_target
- Remove CALL_FUNCTION_EX cache entry from opcode.py
- Remove resolved expectedFailure markers
Clean up comments that unnecessarily mention CPython per project
convention. Replace with concise descriptions of the behavior itself.
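The commit list above revolves around one invariant: every relative jump must be measured past the inert CACHE units that trail an opcode. A toy sketch of that accounting (the opcode values and per-opcode cache counts below are made up for illustration; the real counts come from the `cache_entries()` method this PR adds):

```rust
// Made-up opcode constants for illustration only.
fn cache_entries(op: u8) -> usize {
    match op {
        0x10 => 2, // e.g. a call-like opcode with two inline-cache slots
        0x11 => 1, // e.g. an attribute load with one slot
        _ => 0,    // most opcodes carry no cache
    }
}

// The point from which a relative jump is measured sits after the opcode
// itself plus all of its trailing CACHE units.
fn next_instruction_index(i: usize, op: u8) -> usize {
    i + 1 + cache_entries(op)
}

fn main() {
    // An opcode with two CACHE slots at index 4 "ends" at index 7.
    assert_eq!(next_instruction_index(4, 0x10), 7);
    // A cache-free opcode ends immediately after itself.
    assert_eq!(next_instruction_index(4, 0x00), 5);
}
```

Skipping CACHE units in the VM loop and in monitoring is then the mirror image of this: the dispatcher advances past them the same way the assembler counted them.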
@youknowone youknowone merged commit ccd377c into RustPython:main Mar 1, 2026
14 checks passed
@youknowone youknowone deleted the cache-inst branch March 1, 2026 05:51
