LLM Context Engineering
Spec-to-code context maps and agent orchestration for Claude Code and Codex.
A methodology and toolset for getting better results from AI coding assistants. The core insight: long conversations rot. Fresh context per iteration beats accumulated context every time.
The Problem
Long LLM conversations suffer from context rot. As the context window fills with old messages, compaction loses critical information and output quality degrades. The model starts forgetting important details or making contradictory suggestions.
The Solution
Ralph runs autonomous loops with fresh context per iteration. Instead of relying on model memory, progress persists as file-based state (git commits, spec checkboxes). Each iteration starts clean with only what it needs to know.
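A minimal sketch of that loop in shell. The agent invocation, spec path, and commit convention are illustrative assumptions, not this repo's actual script:

```bash
#!/usr/bin/env bash
# Illustrative loop: every iteration starts with a clean context; progress
# lives in files and git, not in the model's memory.
# Assumptions: a headless agent call (e.g. `claude -p`), a spec file with
# markdown checkboxes, and a prompt file; all names are placeholders.
set -euo pipefail

SPEC="specs/feature.md"
PROMPT="prompts/iteration.md"

while grep -q '\[ \]' "$SPEC"; do
  # Fresh context: the agent sees only the prompt plus current file state,
  # does one task, and exits; the loop restarts it for the next task.
  claude -p "$(cat "$PROMPT")"

  # Persist progress outside the model: the agent ticks a checkbox in the
  # spec; commit whatever changed so the next iteration can build on it.
  git add -A && git commit -m "ralph: iteration $(date +%s)" || true
done

echo "All spec checkboxes ticked."
```

The important property is that nothing carries over between iterations except what is written to disk.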
Interesting Details
- PIN pattern: README.md as a lookup table with synonyms so Claude can find what it needs
- One task per iteration, then exit (the loop restarts fresh for the next task)
- Guard validation ensures prompts have the required structure before execution (see the sketch after this list)
- Test output filtering shows only failures (success = one line), reducing noise (also sketched below)
- Works with Claude Code, Droid (Factory), and Codex (GPT-5.2)
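A hedged sketch of the guard and the test filter referenced above. The section names, prompt path, and test command are assumptions, not the repo's conventions:

```bash
#!/usr/bin/env bash
# Guard: refuse to run a prompt file that lacks the required structure.
# The required section headings below are illustrative.
guard() {
  local prompt_file="$1" section
  for section in "## Task" "## Constraints" "## Done when"; do
    grep -qF "$section" "$prompt_file" || {
      echo "guard: '$prompt_file' is missing section '$section'" >&2
      return 1
    }
  done
}

# Test filter: success collapses to a single line; on failure, print only
# the lines that mention failures or errors.
run_tests() {
  local log
  if log="$("$@" 2>&1)"; then
    echo "tests: all passing"
  else
    echo "$log" | grep -iE 'fail|error' || echo "$log"
    return 1
  fi
}

# Example wiring (commands are placeholders):
#   guard prompts/iteration.md && run_tests npm test
```

Keeping the failure output short matters because whatever the test run prints becomes part of the next iteration's context.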
Built With
Bash, Claude Code, Codex, Shell scripting