Large language models (LLMs) have demonstrated impressive capabilities across diverse settings but still struggle as the length and complexity of the context increase. To address this challenge, we propose Thinking Recursively and Dynamically (ThReaD), a framework that adapts model generation by dynamically spawning new threads based on the context, with each child thread working on a simpler sub-problem. This decomposition enables recursive problem-solving for complex tasks. ThReaD outperforms existing methods on benchmarks including ALFWorld, TextCraft, WebShop, DataCommons QA, and MIMIC-III ICU QA.
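To make the thread-spawning idea concrete, the following is a minimal sketch (not the authors' implementation) of recursive decomposition: a parent thread either answers its task directly or spawns child threads for simpler sub-tasks and aggregates their results. The `generate` function is a hypothetical stand-in for an LLM call, and the splitting rule is a toy placeholder.

```python
def generate(context: str) -> tuple[str | None, list[str]]:
    """Hypothetical stand-in for an LLM call.

    Returns (answer, sub_tasks): if the task is simple enough, answer is a
    string and sub_tasks is empty; otherwise answer is None and sub_tasks
    lists simpler sub-problems to spawn child threads for.
    """
    # Toy decomposition rule: a comma-separated task list is split into sub-tasks.
    parts = [p.strip() for p in context.split(",") if p.strip()]
    if len(parts) <= 1:
        return f"done({context.strip()})", []
    return None, parts


def run_thread(task: str, depth: int = 0, max_depth: int = 5) -> str:
    """Run one thread: answer directly or spawn child threads for sub-tasks."""
    answer, sub_tasks = generate(task)
    if answer is not None or depth >= max_depth:
        return answer or f"done({task.strip()})"
    # Spawn a child thread per sub-task and merge the results back into
    # the parent thread's output.
    child_results = [run_thread(t, depth + 1, max_depth) for t in sub_tasks]
    return " + ".join(child_results)


if __name__ == "__main__":
    print(run_thread("find keys, open door, enter room"))
    # -> done(find keys) + done(open door) + done(enter room)
```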