Layers for boltzgen stacked upon other protein models. #49

Open

murrellb wants to merge 10 commits into main from bg_optimized
Conversation

@murrellb
Member

Supersedes the "optimized" branch PR.

claudey and others added 9 commits February 7, 2026 09:40
Migrate reusable protein layers into src/protein/ with CPU-only
implementations and dispatch hooks for GPU acceleration via OnionTile:
- Rigid body types, residue constants, OpenFold utils/features
- LayerNormFirst, LinearFirst with layernorm_first_forward dispatch
- RotaryEmbedding with rotary_pos_emb_forward dispatch
- ESMFoldAttention, ESMMultiheadAttention with flash_attention_forward dispatch
- Triangle{Attention,Multiplication} with combine_projections_forward dispatch
- StructureModule (ESMFoldIPA, AngleResnet, BackboneUpdate)
- ESMFoldEmbedConfig, LayerNormMLP, FoldingTrunk, RelativePosition

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
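For intuition, a minimal sketch of the dispatch-hook pattern this commit describes. Everything below is illustrative: the actual hook signatures in src/protein/ are not visible in this PR view, so the arguments are assumptions.

```julia
# Illustrative sketch (assumed signatures, not the package's actual API).
# The base package ships a generic CPU fallback; a GPU extension such as
# OnionTile adds a faster method for its own array type.

# Generic fallback: layernorm followed by a linear map, any AbstractArray.
layernorm_first_forward(ln, lin, x::AbstractArray) = lin(ln(x))

# An extension would then overload the same hook, e.g.
#   layernorm_first_forward(ln, lin, x::CuArray) =
#       fused_layernorm_linear(ln, lin, x)   # hypothetical fused GPU kernel
```

Callers always go through the hook, so adding a backend never touches the layer definitions themselves.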
Replace batched_mul attention with flash attention dispatch hooks
(flash_attention_forward, flash_attention_bias_forward) that OnionTile
overrides with cuTile kernels. Generalize rotary embeddings to N-D via
rotary_pos_emb_forward hook. Add combine_projections_forward hook for
cuTENSOR triangle contraction. Optimize TriangleAttention mask=nothing
path with direct flash bias format.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
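As background, a minimal rotary-embedding sketch in the spirit of the rotary_pos_emb_forward hook (the helper name and the (features, positions) layout are assumptions, and d is assumed even):

```julia
# Rotate channel pairs by a position- and frequency-dependent angle.
# x is (d, n): d feature channels over n sequence positions.
function rotary_apply(x::AbstractMatrix{Float32})
    d, n = size(x)
    half = d ÷ 2
    inv_freq = 1f0 ./ (10000f0 .^ (Float32.(0:half-1) ./ half))  # (half,)
    θ = inv_freq * Float32.(0:n-1)'                              # (half, n)
    x1, x2 = x[1:half, :], x[half+1:end, :]
    vcat(x1 .* cos.(θ) .- x2 .* sin.(θ),
         x1 .* sin.(θ) .+ x2 .* cos.(θ))
end
```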
Routes GPU arrays to ONIONop KernelAbstractions kernels, enabling
GPU-accelerated inference on any GPU backend without OnionTile.
Handles triangle attention batch broadcasting via repeat expansion.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
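A hedged sketch of what such a KernelAbstractions fallback looks like (the kernel below is a toy elementwise op, not one of ONIONop's actual kernels):

```julia
using KernelAbstractions

# Toy elementwise kernel: one definition runs on CPU, CUDA, AMDGPU, etc.
@kernel function scale_shift!(y, @Const(x), a, b)
    i = @index(Global)
    y[i] = a * x[i] + b
end

function scale_shift(x, a, b)
    y = similar(x)
    backend = get_backend(x)                       # device x lives on
    scale_shift!(backend)(y, x, a, b; ndrange = length(x))
    KernelAbstractions.synchronize(backend)
    return y
end
```

Dispatching on the array's backend is what lets the same override serve any GPU vendor without OnionTile.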
Use ONIONop.within_gradient to conditionally choose in-place (inference)
vs out-of-place (training) ops in 5 AnyGPUArray layer overrides. Fix
in-place mutations in LinearFirst (.+=) and TriangleMultiplicativeUpdate
(@.) that broke Zygote on all backends.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
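The guard pattern is roughly the following, using NNlib's within_gradient as a stand-in for ONIONop's (which this PR view doesn't show):

```julia
using NNlib: within_gradient

# In-place during inference, out-of-place while Zygote is tracing.
function add_bias!(x::AbstractArray, b)
    if within_gradient(x)
        return x .+ b        # fresh array: differentiable
    else
        x .+= b              # mutate: zero extra allocation
        return x
    end
end
```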
…14_pos

Wrap non-differentiable constant lookups (NNlib.gather, one_hot_last, convert)
in @ignore_derivatives so Zygote treats them as opaque constants instead of
tracing through integer indexing that produces corrupted/zero tangents.

Gradients for sum(positions) now flow correctly through rot/trans:
- C2 test: 0% → 98% nonzero grads
- Test B (full trunk): 0% → 96.7% nonzero grads
- FD check: 6/6 pass, cosine similarity 1.0, max |AD-FD| ~1e-12

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
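A sketch of the fix, under the assumption that the lookups really are constant with respect to the parameters (function name illustrative):

```julia
using ChainRulesCore: @ignore_derivatives
using NNlib

# Fixed lookup table indexed by integer residue ids: no gradient should
# flow through it, so the gather is hidden from Zygote entirely.
lookup_constants(table::AbstractMatrix, ids::AbstractVector{<:Integer}) =
    @ignore_derivatives NNlib.gather(table, ids)
```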
Add PointProjectionMultimer, MultimerInvariantPointAttention, and
InvariantPointAttention (alias for ESMFoldIPA) to share IPA code
between ESMFold and Alphafold2. Includes GPU-accelerated _flash_ipa_core
helper with within_gradient AD guards.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Return scalar one() when dropout is zero (inference mode)
- Use rand!(similar(...)) instead of rand() for training path
- Prevents CPU/GPU mixing in PairformerNoSeqLayer dropout

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
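In sketch form (helper name hypothetical), the commit's behavior is:

```julia
using Random: rand!

# Scalar one(T) when the rate is zero; otherwise a device-matched mask.
function dropout_mask(x::AbstractArray{T}, p::Real) where {T}
    p == 0 && return one(T)                  # broadcasts without allocating
    mask = rand!(similar(x))                 # same device as x, no CPU rand()
    @. ifelse(mask > p, one(T) / (1 - T(p)), zero(T))
end
```

Returning a scalar keeps the zero-dropout path free of any array that could live on the wrong device.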
…Float32 copies

PairWeightedAveraging: replace map-over-S loop with single batched_mul
(42x speedup at S=100 N=120, eliminates MSA overhead entirely).

Remove wasteful Float32.(x)/T.(x) round-trip tensor copies across all
layers (attention, triangular, miniformer, pairformer, OPM, PWA).
Data is already Float32, so these were pure memcpy waste. Eltype
assertions were added to catch mismatches early instead.

All 9 GPU REPL API tests pass at 200 steps with clean geometry.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
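A hedged sketch of the batched_mul rewrite; the shapes and batch ordering below are assumptions, not the layer's actual layout:

```julia
using NNlib: batched_mul

# Assumed layout: v is (C, N, S*B) MSA values with s fastest within the
# batch dim; w is (N, N, B) pair weights shared across the S rows.
function pair_weighted_average(v::AbstractArray{T,3}, w::AbstractArray{T,3},
                               S::Int) where {T}
    N, _, B = size(w)
    wrep = reshape(repeat(reshape(w, N, N, 1, B), 1, 1, S, 1), N, N, S * B)
    batched_mul(v, wrep)    # one fused call instead of S separate matmuls
end
```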
Simplify _bg_scaled_dot_product_attention to combine bias + mask and
dispatch through flash_attention_bias_forward, which routes to the best
available backend (CPU → ONIONop/KA → OnionTile/cuTile). OnionTile now
handles non-pow2 head dims transparently via padding_mode=Zero.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
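The bias + mask combination amounts to something like this (names and the -1f9 sentinel are illustrative):

```julia
# Fold a boolean mask (true = attend) into the additive attention bias so
# a single flash_attention_bias_forward-style call can consume both.
function combine_bias_mask(bias::AbstractArray{Float32}, mask)
    mask === nothing && return bias
    bias .+ ifelse.(mask, 0f0, -1f9)   # softmax sends masked logits to ~0
end
```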
@codecov

codecov bot commented Feb 15, 2026

Codecov Report

❌ Patch coverage is 0.08503% with 2350 lines in your changes missing coverage. Please review.
✅ Project coverage is 8.35%. Comparing base (79ab829) to head (a498fbc).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/protein/structure_module.jl | 0.00% | 361 Missing ⚠️ |
| src/protein/boltzgen/confidence_utils.jl | 0.00% | 220 Missing ⚠️ |
| src/protein/esmfold_misc.jl | 0.00% | 197 Missing ⚠️ |
| src/protein/rigid.jl | 0.00% | 192 Missing ⚠️ |
| src/protein/gpu_layers.jl | 0.00% | 175 Missing ⚠️ |
| src/protein/triangular.jl | 0.00% | 149 Missing ⚠️ |
| src/protein/openfold_feats.jl | 0.00% | 105 Missing ⚠️ |
| src/protein/folding_trunk.jl | 0.00% | 103 Missing ⚠️ |
| src/protein/boltzgen/triangular.jl | 0.00% | 93 Missing ⚠️ |
| src/protein/attention.jl | 0.00% | 78 Missing ⚠️ |

... and 19 more

❗ A different number of reports was uploaded between BASE (79ab829) and HEAD (a498fbc): HEAD has 3 fewer uploads than BASE (3 vs. 6).
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #49       +/-   ##
==========================================
- Coverage   30.79%   8.35%   -22.44%     
==========================================
  Files          43      73       +30     
  Lines         867    3219     +2352     
==========================================
+ Hits          267     269        +2     
- Misses        600    2950     +2350     


@AntonOresten
Member

@pangramlabs slop?

Use zeros_like (AD-safe, @ignore_derivatives) instead of fill!(similar(...))
for zero-padding in _flash_ipa_core. Add within_gradient check for pair
aggregation to use out-of-place cat (AD path) vs in-place .= (inference).
This fixes AF2 gradient tests on Julia 1.12 where Zygote couldn't
differentiate through the in-place operations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
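The zeros_like pattern described above looks roughly like this (helper definition assumed, not shown in this PR view):

```julia
using ChainRulesCore: @ignore_derivatives

# AD-safe zero buffer: the fill!/similar mutation is hidden from Zygote,
# so the padding is treated as a constant (zero tangent) on both 1.11
# and 1.12.
zeros_like(x::AbstractArray, dims::Integer...) =
    @ignore_derivatives fill!(similar(x, dims...), zero(eltype(x)))
```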