
Independent / Research Infrastructure Project

Python | FastAPI | pandas | NumPy | SciPy | cvxpy | SQL | Docker | Streamlit | GitHub Actions

Quant Research Lab: point-in-time factor research and reproducible backtesting.

Quant Research Lab is a public-facing factor research and backtesting demo covering point-in-time ingestion, signal construction, portfolio formation, evaluation, transaction-cost modeling, benchmark comparison, and attribution.

Key Outcomes

A public demo that treats empirical research discipline as the deliverable: every result is tied to explicit assumptions, availability windows, and falsification checks.

Research Flow

  • PIT: point-in-time ingestion and availability windows
  • Validation / IC: rank information coefficients and factor returns
  • Risk / Stress: transaction-cost, covariance, exposure, and regime checks

Project Breakdown

Problem, method, system, validation, results, reliability, and research value.

Problem

Backtests can be persuasive while hiding invalid assumptions.

  • Lookahead variables, survivorship effects, unstable universes, future price references, and transaction-cost fragility can turn a backtest into a false claim.
  • The project needed to make assumptions, availability windows, and validation constraints visible.
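One way to make availability windows concrete is to key every observation by when it became knowable, not by the period it describes. A minimal sketch, assuming an illustrative `available_at` column (the column names here are not from the project):

```python
import pandas as pd

def point_in_time_view(df: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Return only the rows that were observable at `as_of`.

    Assumes each row carries an `available_at` timestamp recording when
    the value became known (e.g. a filing's publication date), not the
    period it refers to. Column names are illustrative.
    """
    return df[df["available_at"] <= as_of]

# A Q4 figure published in late February must not appear in a January backtest.
fundamentals = pd.DataFrame({
    "ticker": ["AAA", "BBB"],
    "period_end": pd.to_datetime(["2023-12-31", "2023-12-31"]),
    "available_at": pd.to_datetime(["2024-02-25", "2024-01-20"]),
    "earnings": [1.10, 0.42],
})
jan_view = point_in_time_view(fundamentals, pd.Timestamp("2024-01-31"))
# Only BBB survives; using AAA's figure in January would be lookahead.
```

Filtering on publication time rather than period end is what turns "the data existed" into "the data was knowable then."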

Method

The demo follows the full empirical finance research loop.

  • Covered point-in-time ingestion, signal construction, portfolio formation, evaluation, transaction-cost modeling, benchmark comparison, and attribution.
  • Implemented factors for momentum, value, quality, volatility, liquidity, seasonality, residualization, and cross-sectional normalization.
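The cross-sectional normalization step above can be sketched as a per-date z-score; the dates-by-assets layout is an assumption for illustration:

```python
import pandas as pd

def cross_sectional_zscore(signal: pd.DataFrame) -> pd.DataFrame:
    """Z-score each date's cross-section (rows = dates, columns = assets),
    making scores comparable across dates and across factors."""
    mu = signal.mean(axis=1)
    sd = signal.std(axis=1)
    return signal.sub(mu, axis=0).div(sd, axis=0)

raw = pd.DataFrame(
    {"AAA": [0.10, 0.05], "BBB": [0.02, 0.01], "CCC": [-0.04, 0.03]},
    index=pd.to_datetime(["2024-01-31", "2024-02-29"]),
)
z = cross_sectional_zscore(raw)
# Each date's row now has mean ~0 and standard deviation ~1.
```

Normalizing within each date, rather than over the full history, keeps the transformation point-in-time: no future cross-section leaks into today's score.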

System / Stack

The stack supports reproducible research artifacts.

  • Used Python, FastAPI, pandas, NumPy, SciPy, cvxpy, SQL, Docker, GitHub Actions, Streamlit, experiment manifests, and backtesting workflows.
  • Organized each module around explicit assumptions and availability windows.
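An experiment manifest can be as simple as a hashed record of what a run depended on. The field names below are assumptions for illustration, not the project's actual schema:

```python
import hashlib
import json

def experiment_manifest(params: dict, data_window: tuple, code_commit: str) -> dict:
    """Pin a run to its parameters, data window, and code commit, and add
    a digest so two runs can be compared byte-for-byte."""
    manifest = {
        "params": params,
        "data_window": {"start": data_window[0], "end": data_window[1]},
        "code_commit": code_commit,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["digest"] = hashlib.sha256(payload).hexdigest()
    return manifest

m = experiment_manifest(
    {"factor": "momentum_12_1", "universe": "demo"},  # hypothetical parameters
    ("2015-01-01", "2023-12-31"),
    "abc1234",  # hypothetical commit hash
)
```

Sorting keys before hashing makes the digest deterministic, so identical inputs always produce an identical manifest.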

Validation Methodology

Tearsheets were designed for falsification, not decoration.

  • Generated rank ICs, factor returns, turnover, drawdown, exposure decomposition, covariance sensitivity, deflated Sharpe diagnostics, transaction-cost stress tests, and regime breakdowns.
  • Compared strategies against benchmarks and attribution surfaces under transaction-cost and regime assumptions.
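At its core, the rank IC in these tearsheets is a Spearman correlation between a signal cross-section and next-period returns; a minimal sketch:

```python
import numpy as np
from scipy.stats import spearmanr

def rank_ic(signal: np.ndarray, fwd_returns: np.ndarray) -> float:
    """Spearman rank correlation between today's signal values and
    the returns realized over the following period."""
    ic, _ = spearmanr(signal, fwd_returns)
    return float(ic)

sig = np.array([0.9, 0.1, 0.5, -0.3])
fwd = np.array([0.04, -0.01, 0.02, -0.03])  # same rank ordering as sig
# rank_ic(sig, fwd) == 1.0 here because the two orderings agree exactly.
```

Using ranks rather than raw values makes the diagnostic robust to outliers and to monotone transformations of the signal, which is why it is the standard cross-sectional check.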

Results

The project became a teaching artifact for reproducible quant research.

  • Built a public-facing factor research and backtesting demo with point-in-time data, signal construction, portfolio formation, evaluation, transaction-cost modeling, benchmark comparison, and attribution.
  • Made charts traceable to assumptions, data windows, transformations, validation constraints, code commits, and falsification checks.

Failure Modes / Reliability Checks

The checks target the common ways empirical finance overclaims.

  • Tracked lookahead variables, survivorship effects, future price references, unstable universe definitions, duplicate securities, time-window alignment errors, transaction costs, covariance sensitivity, and regime instability.
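Two of the cheaper checks above, duplicate securities and lookahead timestamps, can be sketched against a (date, ticker)-indexed panel; the index and column names are assumptions for illustration:

```python
import pandas as pd

def basic_integrity_checks(panel: pd.DataFrame) -> list:
    """Flag duplicate (date, ticker) rows and rows whose data only became
    available after the date they are keyed to (lookahead)."""
    issues = []
    dup = panel.index.duplicated()
    if dup.any():
        issues.append(f"{dup.sum()} duplicate (date, ticker) rows")
    late = (
        panel["available_at"].to_numpy()
        > panel.index.get_level_values("date").to_numpy()
    )
    if late.any():
        issues.append(f"{late.sum()} rows available only after their key date")
    return issues
```

Checks like these are cheap to run on every ingest, which is what lets a pipeline fail loudly instead of letting a contaminated panel reach the backtest.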

Why It Matters for Research

The project makes model comparison legible under changing assumptions.

  • The project mirrors broader research practice: assumptions, data availability, validation constraints, uncertainty, and failure checks should be part of the artifact, not left in memory.