user
~22 tokens
My Postgres query on a 10M row table takes 8 seconds. How do I diagnose and fix this?
assistant
~180 tokens
Step-by-step diagnosis:
**1. Run EXPLAIN (ANALYZE, BUFFERS)** — Look for: a Seq Scan on the large table (missing index), a high "Rows Removed by Filter" count (the condition isn't selective, or no index matches it), and "Sort Method: external merge" (the sort spilled to disk because work_mem is too small).
**2. Common fixes:** Seq Scan -> add an index on the filtered column. Index Scan followed by a Sort -> composite index matching WHERE + ORDER BY so the sort disappears. Hash Join scanning the whole large table -> index the join column so the planner can pick a nested-loop or merge join instead.
**3. Composite index** matching your query pattern exactly. Column order matters: equality-tested columns first, then the ORDER BY column, then any range conditions. If the index also covers every column the query returns (e.g. via INCLUDE), Postgres can answer with an Index Only Scan and never touch the heap.
**4. After adding the index:** run ANALYZE so the planner has fresh statistics, then re-check the plan. A well-matched index typically takes a query like this from seconds to single-digit milliseconds.
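A minimal sketch of that workflow, assuming a hypothetical `orders` table filtered on `customer_id` and `status` and sorted by `created_at` (all names here are illustrative, not from your schema):

```sql
-- 1. Inspect the plan; BUFFERS shows how many pages were read.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, total
FROM orders
WHERE customer_id = 42 AND status = 'shipped'
ORDER BY created_at DESC
LIMIT 50;

-- 2. Composite index: equality columns first, then the ORDER BY column.
--    INCLUDE covers the selected columns so the scan can be index-only.
--    CONCURRENTLY avoids locking writes on a live 10M-row table.
CREATE INDEX CONCURRENTLY idx_orders_customer_status_created
    ON orders (customer_id, status, created_at DESC)
    INCLUDE (total);

-- 3. Refresh planner statistics, then re-run the EXPLAIN above.
ANALYZE orders;
```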
user
~20 tokens
What about when I have complex WHERE clauses with OR conditions? Indexes don't help as much.
assistant
~140 tokens
OR conditions are the nemesis of B-tree indexes. Options:
1. **Rewrite OR as UNION ALL** — each branch can use its own index. Use UNION (or an exclusion predicate on one branch) if the branches can return the same row, otherwise you'll get duplicates.
2. **GIN index on array column** — for multi-value matching.
3. **Partial indexes** — if one branch filters on a rare condition, a tiny index can serve just that branch.
4. **Expression indexes** — for computed conditions.
**Rule of thumb:** If EXPLAIN shows a BitmapOr node, Postgres is already combining indexes for you. If it's still slow, restructure the query or reach for a single composite or GIN index.
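The OR-to-UNION-ALL rewrite from option 1 might look like this, again on a hypothetical `orders` table with indexes on `customer_id` and `referrer_id` (names are illustrative):

```sql
-- Before: the OR can force a seq scan, since no single
-- B-tree index covers both conditions.
-- SELECT * FROM orders WHERE customer_id = 42 OR referrer_id = 42;

-- After: each branch can hit its own index.
SELECT * FROM orders
WHERE customer_id = 42
UNION ALL
SELECT * FROM orders
WHERE referrer_id = 42
  AND customer_id IS DISTINCT FROM 42;
-- The extra predicate on the second branch excludes rows the first
-- branch already returned (NULL-safely), so we avoid duplicates
-- without paying for UNION's sort/hash de-duplication.

-- Option 3 (partial index) for a rare branch, e.g. flagged orders:
CREATE INDEX idx_orders_flagged ON orders (created_at)
    WHERE status = 'flagged';
```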