Algory Capital / Emory University | 2023 to present

Python | SQL | PostgreSQL | DuckDB | Streamlit | Docker | GitHub Actions

Quant research infrastructure built to make strategy work faster, cleaner, and harder to fool.

The goal was to replace fragmented research notebooks with shared infrastructure for point-in-time data, walk-forward validation, diagnostics, and execution-aware review across a 30+ member research organization.

Key Outcomes

Measured outcomes from the shared research platform.

Velocity

8.4x

increase in reproducible research throughput after consolidating tooling

Validation

18

reviewed research memos produced with standardized validation

Team surface

11

reproducible project repositories produced by the research organization

Project Breakdown

Problem, method, system, validation, results, reliability, and research value.

Problem

Strategy work was fragmented across analysts, notebooks, and hand-built review habits.

  • Research quality depended too much on local notebook habits and manual interpretation.
  • The organization needed a shared way to ingest, validate, compare, and review strategies.

Method

The platform made point-in-time research the default workflow.

  • Spanned ingestion, universe construction, feature engineering, cross-sectional factor testing, walk-forward modeling, portfolio construction, transaction-cost simulation, exposure control, and attribution.
  • Replaced notebook-only workflows with versioned experiment templates, canonical data loaders, sealed train/test windows, automated tearsheets, invariant checks, and standardized review rubrics (a loader sketch follows this list).
  • Review included tradability and execution effects, not just attractive backtests.
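
A minimal sketch of the point-in-time idea, assuming a hypothetical features table with an available_at column (the real loaders and schema are private): queries can only see values that had actually been published by the as-of date, and test windows are fixed before any model is fit.

    import duckdb

    # Illustrative point-in-time loader; the `features` table and its
    # `available_at` column are assumptions, not the production schema.
    con = duckdb.connect()  # in-memory database for the example
    con.execute("""
        CREATE TABLE features (
            symbol VARCHAR, trade_date DATE, available_at DATE, value DOUBLE
        )
    """)

    def load_point_in_time(as_of: str):
        """Return only rows whose values were actually observable on `as_of`."""
        return con.execute(
            "SELECT symbol, trade_date, value FROM features "
            "WHERE available_at <= ? ORDER BY symbol, trade_date",
            [as_of],
        ).df()

    # Sealed windows: test dates are fixed up front and never touched while fitting.
    TRAIN_WINDOW = ("2018-01-01", "2022-12-31")
    TEST_WINDOW = ("2023-01-01", "2023-12-31")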

System / Stack

The platform made research discipline easier to practice.

  • Used Python, pandas, NumPy, SciPy, scikit-learn, XGBoost, statsmodels, cvxpy, SQL, PostgreSQL, DuckDB, Streamlit, FastAPI, and Docker.
  • Built the workflow from point-in-time ingestion through factor modeling, walk-forward backtesting, portfolio construction, attribution, and execution simulation.
  • Made lineage, exposure control, covariance shrinkage, and validation gates part of the default review surface.
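
As one concrete illustration of the shrinkage step, a Ledoit-Wolf estimate from scikit-learn (already in the stack) can stand in for the raw sample covariance; the random return panel below is a placeholder, not real data.

    import numpy as np
    from sklearn.covariance import LedoitWolf

    # Placeholder daily-return panel: 252 days x 50 assets.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0, 0.01, size=(252, 50))

    lw = LedoitWolf().fit(returns)
    shrunk_cov = lw.covariance_          # shrunk estimate passed downstream
    print(f"shrinkage intensity: {lw.shrinkage_:.3f}")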

Validation Methodology

Leakage-aware evaluation and model-risk checks were built into review.

  • Used purged and embargoed time splits (sketched after this list), universe-retention accounting, survivorship-bias checks, delayed feature availability, benchmark alignment, turnover constraints, slippage assumptions, and factor-decay analysis.
  • Added multiple testing controls, White-style reality checks, deflated Sharpe diagnostics, false discovery control, unstable correlation warnings, regime-sensitivity analysis, and backtest-overfitting alarms.
  • Connected exposures, attribution, and transaction-cost-aware rebalancing so review included tradability as well as returns.
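
The splitter below is an illustrative sketch of the purge-and-embargo idea, not the platform's actual implementation: observations immediately before and after each test fold are dropped from training so overlapping labels cannot leak across the boundary.

    import numpy as np

    def purged_embargoed_splits(n_obs, n_splits=5, purge=5, embargo=5):
        """Yield (train_idx, test_idx) pairs for time-ordered cross-validation.

        Rows within `purge` bars before a test fold and `embargo` bars after
        it are excluded from training. Illustrative only.
        """
        fold = n_obs // n_splits
        for k in range(n_splits):
            test_start, test_end = k * fold, min((k + 1) * fold, n_obs)
            before = np.arange(0, max(test_start - purge, 0))
            after = np.arange(min(test_end + embargo, n_obs), n_obs)
            yield np.concatenate([before, after]), np.arange(test_start, test_end)

    for train_idx, test_idx in purged_embargoed_splits(1000):
        pass  # fit on train_idx, score on test_idx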

Results

The platform improved both velocity and legibility across the organization.

  • Reproducible research throughput increased by 8.4x.
  • Produced 18 reviewed research memos and 11 reproducible project repositories.
  • Built portfolio-optimization modules for mean-variance, risk parity, rank-weighted, volatility-targeted, and drawdown-aware portfolios.
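
A mean-variance module of this kind can be sketched in a few lines of cvxpy; the expected returns, covariance, and 10% position cap below are toy inputs, not the fund's actual parameters.

    import cvxpy as cp
    import numpy as np

    # Toy inputs standing in for the platform's estimates.
    rng = np.random.default_rng(1)
    n = 20
    mu = rng.normal(0.05, 0.02, n)                 # expected returns
    A = rng.normal(size=(n, n))
    sigma = A @ A.T / n + np.eye(n) * 1e-3         # positive-definite covariance
    sigma = (sigma + sigma.T) / 2                  # enforce exact symmetry

    w = cp.Variable(n)
    risk_aversion = 5.0
    objective = cp.Maximize(mu @ w - risk_aversion * cp.quad_form(w, sigma))
    constraints = [cp.sum(w) == 1, w >= 0, w <= 0.10]  # long-only, 10% cap
    cp.Problem(objective, constraints).solve()
    weights = w.value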

Failure Modes / Reliability Checks

The platform was designed to catch beautiful but false backtests.

  • Checked lookahead bias, survivorship bias, delayed feature availability, multiple testing, unstable correlations, factor decay, turnover constraints, transaction costs, and regime sensitivity.
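
The simplest of these checks is a lookahead invariant; the sketch below assumes hypothetical trade_date and available_at columns and refuses to proceed when any feature value postdates the trade it would inform.

    import pandas as pd

    def assert_no_lookahead(features: pd.DataFrame) -> None:
        """Fail fast if any feature value became available after its trade date.

        Column names `trade_date` and `available_at` are illustrative
        assumptions about the schema, not the production layout.
        """
        leaked = features[features["available_at"] > features["trade_date"]]
        if not leaked.empty:
            raise ValueError(
                f"{len(leaked)} rows use data published after the trade date; "
                "refusing to run the backtest."
            )

    # Example: the second row uses a value published one day after the trade.
    frame = pd.DataFrame({
        "trade_date": pd.to_datetime(["2024-03-01", "2024-03-04"]),
        "available_at": pd.to_datetime(["2024-02-28", "2024-03-05"]),
    })
    # assert_no_lookahead(frame)  # would raise for the second row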

Why It Matters for Research

Quantitative research is a laboratory for reproducible empirical inference.

  • The work sharpened a research habit: hypothesis formation, falsification, peer review, replication, and model-risk documentation should be supported by the system itself.

Confidentiality Boundary

Workflow, review design, and outcomes are documented here.

Proprietary strategy logic, holdings, and private research data remain private.