[Project] Combining Intrinsic Alignment Measurements Across Surveys #29

@nikosarcevic

Description

Combining Intrinsic Alignment Measurements Across Surveys

Objective

We aim to combine IA measurements from multiple galaxy surveys within a unified framework to enhance statistical power, inform realistic data-driven priors for cosmic shear and 3×2pt analyses, and make use of survey complementarities while carefully modeling selection and measurement differences.

Contacts: Niko Sarcevic (nikosarcevic) & David Navarro (DavidNavarroG)

Participants:

Niko Sarcevic and David Navarro; open to anyone who wants to collaborate!

Detailed description

Combining Intrinsic Alignment Measurements Across Surveys

Project Leads: Nikolina Šarčević, David Navarro-Gironés, +

Presentation

Slides available here


Objective

To combine intrinsic alignment (IA) measurements from multiple galaxy surveys in order to increase statistical power and broaden the probed parameter space — including redshift, luminosity, and galaxy type — within a unified framework. This joint approach enables us to utilize the complementary strengths of different surveys (e.g., deep photometry, high-quality spectroscopy, wide area) while accounting for differences in selection functions, redshift quality, magnitude limits, and color definitions.

We aim to start with simple models and estimators and build complexity only as supported by the data. Beyond improving IA constraints, another goal is to inform priors on IA model parameters — including redshift and luminosity evolution — for use in upcoming cosmic shear and 3×2pt analyses. This work can help establish more realistic, data-driven priors for nuisance parameters in cosmological inference.

Motivation

  • IA is a key astrophysical systematic in weak lensing studies, yet observational constraints remain heterogeneous.
  • Combining datasets can:
    • Improve signal-to-noise and parameter constraints
    • Extend coverage in redshift, luminosity, and color space
    • Cross-check consistency across samples and survey strategies

Surveys & Contacts (tentative)

| Survey / Combo | Contact / Paper | Notes |
| --- | --- | --- |
| $${\color{pink} \mathrm{UNIONS} \space + \space \mathrm{eBOSS}}$$ | Fabian Hervas | |
| $${\color{blue} \mathrm{DES} \space + \space \mathrm{eBOSS}}$$ | Simon Samuroff | |
| $${\color{pink} \mathrm{Low-z} \space \mathrm{(SDSS)}}$$ | Singh et al. | |
| $${\color{blue} \mathrm{MegaZ} \space + \space \mathrm{SDSS}}$$ | Benjamin Joachimi | |
| $${\color{green} \mathrm{PAUS}}$$ | David Navarro-Gironés | |
| $${\color{green} \mathrm{KiDS}}$$ | M.C. Fortuna, C. Georgiou | |
| $${\color{pink} \mathrm{GAMA} \space + \space \mathrm{SDSS}}$$ | Harry Johnston | |
| $${\color{pink} \mathrm{DESI} \space + \space \mathrm{Y1}}$$ | Jared Siegel | |

Color coding: pink = spec-z, green = photo-z, blue = mixed

Key Questions & Constraints

  • Data types:
    • Are the samples photometric, spectroscopic, or mixed? (They are mixed across surveys, so this is an additional complication to handle.)
    • What is the quality and method of redshift estimation per survey?
  • Color & Magnitude Selections:
    • How are colors defined across surveys (filters, bandpasses)?
    • What are the magnitude limits?
  • Systematic Differences:
    • Different photometric systems
    • Variations in survey depth, completeness, and related properties
    • Selection functions and use of random catalogs (e.g., radial selection of randoms).
      • Concern: Each survey constructs randoms to match its own selection function. Combining data without harmonizing these could introduce artificial IA signals.
  • Possible mitigation strategies (brainstorming; see the sketch after this list):
    • Standardize or reweight randoms based on a joint selection function
    • Use matched selections across surveys (e.g., apply the stricter cuts to all samples and quantify how much data this costs)
    • Alternatively, explore more flexible selection strategies that avoid discarding too much data
    • Model the differences explicitly in the analysis pipeline, possibly using simulations (feasibility still to be assessed)
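
To make the matched-selection option concrete, below is a minimal sketch in Python/NumPy. The function name, the column names "r_mag" and "z", and the cut values are illustrative placeholders, not actual catalog schemas; real selections would also include color and completeness cuts.

```python
import numpy as np

def matched_selection(cat_a, cat_b, maglim_a, maglim_b, zmin=0.1, zmax=0.8):
    """Apply the stricter (brighter) magnitude limit and a common redshift
    window to both catalogs. Returns the trimmed catalogs and the surviving
    fraction of each, as a rough measure of the data-loss trade-off.

    cat_a and cat_b are assumed to be numpy structured arrays (or any
    table supporting boolean-mask indexing) with fields "r_mag" and "z".
    """
    maglim = min(maglim_a, maglim_b)  # the stricter cut applies to both samples
    sel_a = (cat_a["r_mag"] < maglim) & (cat_a["z"] > zmin) & (cat_a["z"] < zmax)
    sel_b = (cat_b["r_mag"] < maglim) & (cat_b["z"] > zmin) & (cat_b["z"] < zmax)
    return cat_a[sel_a], cat_b[sel_b], sel_a.mean(), sel_b.mean()
```

The surviving fractions give a quick first handle on whether the stricter-cut route costs too much statistical power before committing to a fuller reweighting scheme.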

Relevant Literature

TODO: more papers on observational constraints

Modeling & Estimators

  • IA Models:
    • Baseline: NLA
    • Extensions: zNLA, z-LF-NLA, TATT
    • Future: More flexible models if data quality permits
  • IA Estimators:
    • Projected 2PCF: $w_{g+}$
    • Multipole-based IA estimators (arXiv:2307.02545) — optional/test
  • Software:
    • Correlation estimation: TreeCorr (see the $w_{g+}$ measurement sketch after this list)
    • Theory modeling: PyCCL
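
As a starting point for the measurement pipeline, here is a schematic $w_{g+}$ estimate with TreeCorr: a bare S+D minus S+R sum over line-of-sight ($\Pi$) slabs using the `Rperp` metric. The function and array names are placeholders, and a real measurement additionally needs the random-pair normalization, weights, a check of the $g_+$ sign convention, and a jackknife covariance.

```python
import numpy as np
import treecorr

def measure_wgp(ra_d, dec_d, chi_d,          # density (position) sample
                ra_r, dec_r, chi_r,          # randoms matched to the density sample
                ra_s, dec_s, chi_s, e1, e2,  # shape sample
                pi_max=60.0, n_pi=30,
                rp_min=0.5, rp_max=60.0, n_rp=10):
    """Schematic projected position-shape correlation w_g+ with TreeCorr.

    Inputs are plain numpy arrays: RA/Dec in degrees, comoving distances
    chi in Mpc/h, and ellipticity components e1/e2. Returns a bare
    (S+D - S+R) sum over line-of-sight bins, without the random-pair
    normalization or covariance estimate a real analysis requires.
    """
    cat_d = treecorr.Catalog(ra=ra_d, dec=dec_d, r=chi_d,
                             ra_units='deg', dec_units='deg')
    cat_r = treecorr.Catalog(ra=ra_r, dec=dec_r, r=chi_r,
                             ra_units='deg', dec_units='deg')
    cat_s = treecorr.Catalog(ra=ra_s, dec=dec_s, r=chi_s, g1=e1, g2=e2,
                             ra_units='deg', dec_units='deg')

    pi_edges = np.linspace(-pi_max, pi_max, n_pi + 1)
    cfg = dict(min_sep=rp_min, max_sep=rp_max, nbins=n_rp, bin_slop=0.02)

    wgp = np.zeros(n_rp)
    for lo, hi in zip(pi_edges[:-1], pi_edges[1:]):
        ng_d = treecorr.NGCorrelation(min_rpar=lo, max_rpar=hi, **cfg)
        ng_r = treecorr.NGCorrelation(min_rpar=lo, max_rpar=hi, **cfg)
        ng_d.process(cat_d, cat_s, metric='Rperp')   # S+D term in this Pi slab
        ng_r.process(cat_r, cat_s, metric='Rperp')   # S+R term in this Pi slab
        # Sign conventions for g+ differ between codes; check against the
        # definition used in the reference measurements before comparing.
        wgp += (ng_d.xi - ng_r.xi) * (hi - lo)

    return ng_d.rnom, wgp   # nominal r_p bin centers and w_g+(r_p)
```

Looping over `min_rpar`/`max_rpar` slabs is a simple way to recover the $(r_p, \Pi)$ binning with TreeCorr and to inspect $\xi_{g+}(r_p, \Pi)$ before summing over $\Pi$.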

Main Steps

  1. Data Collection
    • Identify catalog custodians and request access
    • Determine usable subsets with IA-relevant measurements
    • Gather definitions for color, magnitude limits, redshift estimation methods
  2. Data Harmonization
    • Standardize redshift, magnitude, and color definitions
    • Derive consistent galaxy properties: luminosity, stellar/halo mass, color, Sérsic index
    • Consider whether additional, more physically meaningful sample splits are possible
  3. Binning Strategy
    • Define/agree on binning in redshift, luminosity, color, Π, and Sérsic index
    • Choose physically motivated splits that span different galaxy types and environments
  4. Signal Measurement
    • Use established estimators (start with $w_{g+}$)
    • Test different IA estimator methodologies as needed
  5. Modeling
    • Fit models (NLA, zNLA, z-LF-NLA, TATT) to the measured signals; a minimal PyCCL sketch follows this list
    • The simplest NLA model serves as the baseline
    • Incorporate redshift and luminosity evolution
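
For orientation on the theory side, here is a minimal NLA prediction with PyCCL: the galaxy position × intrinsic-alignment angular power spectrum $C_\ell^{gI}$ for a single redshift bin. The cosmological parameters, toy $n(z)$, bias, and $A_{\mathrm{IA}}$ values are illustrative placeholders, not the project's fiducial choices; projecting to $w_{g+}(r_p)$ requires an additional transform not shown here.

```python
import numpy as np
import pyccl as ccl

# Illustrative cosmology; not the project's fiducial parameters.
cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.67,
                      sigma8=0.81, n_s=0.96)

z = np.linspace(0.01, 1.5, 200)
nz = np.exp(-0.5 * ((z - 0.6) / 0.1) ** 2)   # toy Gaussian n(z)
b_g = 1.8 * np.ones_like(z)                  # constant linear galaxy bias
A_ia = 1.0 * np.ones_like(z)                 # constant NLA amplitude A_IA

# Density tracer for the positions and a lensing tracer that carries only
# the intrinsic-alignment term (has_shear=False drops the lensing part).
gals = ccl.NumberCountsTracer(cosmo, has_rsd=False,
                              dndz=(z, nz), bias=(z, b_g))
ia = ccl.WeakLensingTracer(cosmo, dndz=(z, nz),
                           has_shear=False, ia_bias=(z, A_ia))

ell = np.unique(np.geomspace(2, 3000, 60).astype(int))
cl_gI = ccl.angular_cl(cosmo, gals, ia, ell)   # C_ell^{gI} under NLA
```

Promoting `A_ia` from a constant to a function of redshift (and, with extra bookkeeping, luminosity) is the natural entry point for the zNLA and z-LF-NLA extensions listed above.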

Deliverables

  • Preliminary presentation (Niko to draft slides)
  • GitHub repository
  • Add the proposal to the echoIA GitHub
  • Clean planning document (Niko to polish this draft; ongoing)
  • Summary of IA modeling approaches and estimator methods
  • First results on joint IA constraints across selected surveys

Metadata

Labels

project: This label is used to mark ongoing projects within the echoIA community.