Notes for working with LLMs in Obsidian using the command line and the [llm](https://llm.datasette.io/) CLI tool.

List recently-modified files:

```
find . -type f -name "*.md" -exec stat -f "%Sm %N" -t "%Y-%m-%d %H:%M:%S" {} \; | sort -nr | head -30
```

Getting a single page into context is easy:

```
cat "LLM in the loop.md" | llm -m s "continue this theme for a few lines"
```

And putting the output back into the page is easy with `>>`. Obsidian detects changes almost immediately.

Create a [[system prompt]] in another page:

```
set prompt (cat "system prompt.md"); cat "LLM in the loop.md" | llm --system "$prompt" -m s "continue the train of thought here"
```

Further directions:

- Include a map of the Obsidian filesystem, and allow the LLM to iteratively decide to read additional files to build context before answering
- Have a long-running script monitor specific files for to-do items or specific tags and take action based on them

---

Other projects to explore:

- [Obsidian Copilot](https://www.obsidiancopilot.com/en)

---

At a high level, the point here is that the system acts as:

- an energy input for the generation of insight
- automation for lower-level organizational processes (tagging, categorization, summarization)

---

Example, generated by Claude:

Here's a continuation of those themes:

The system can function as both a force multiplier and an organizational assistant by:

- Acting as a reflective partner that helps surface connections and patterns across your notes
- Maintaining metadata and structural elements while you focus on core thinking/writing
- Providing different analytical lenses through which to view your knowledge base
- Automating the "gardening" aspects of knowledge management (linking, backlinks, indexes)

This creates a hybrid workflow where:

1. Human insight and original thinking drives the primary content creation
2. The LLM system provides scaffolding and support functions
3. The combination enables faster iteration and evolution of ideas
4.
The flat file nature keeps everything portable and future-proof

The key is maintaining the human as the primary creative force while leveraging automation for the mechanical aspects that can drain creative energy when done manually. This preserves agency while reducing friction in the knowledge capture and organization process.

You could think of it as having an always-on research assistant that handles the administrative overhead of maintaining a complex knowledge base, freeing you to focus on the higher-level synthesis and insight generation.

---

Obsidian makes for an interesting juxtaposition with concepts of retrieval-augmented generation from the LLM dev world. It suggests a novel answer to the question of "how do I chunk my knowledge base" -- each page, in linking to and/or excerpting other pages, creates a chunk, not according to some strict formula, but according to the needs and preferences of human readers/writers.

*Claude:* This "human-driven chunking" through wiki organization represents an organic form of knowledge embedding that's potentially more useful than rigid algorithmic approaches. When we create pages and links in Obsidian, we're essentially performing manual but intuitive vector decomposition - breaking down complex topics into interconnected components based on how we actually think about and use the information. The LLM can then work with these naturally-formed chunks, respecting the semantic boundaries we've already established through our page structure and link patterns.

---

[@fkasummer](https://x.com/fkasummer/status/1758620160116633734?s=46&t=WIT7GBYVOq54Ba88G-tD9w): one attractive thing about ai native software is that it can stay **unorganized** longer than traditional software so it unlocks new problem spaces
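---

A minimal sketch of the "map of the Obsidian filesystem" direction, assuming a Python script run from the vault root. The function name is hypothetical; the output is just a newest-first listing meant to be pasted into an `llm --system` prompt so the model can name which pages it wants read next:

```python
from pathlib import Path

def vault_map(root="."):
    """List the vault's markdown files, newest first, as a compact
    table of contents an LLM can use to request further context."""
    pages = sorted(
        Path(root).rglob("*.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    return "\n".join(str(p) for p in pages)
```

A wrapper loop would then `cat` each file the model asks for back into the conversation until it's ready to answer.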
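---

The long-running monitor direction could look something like the sketch below. The `#llm-todo` tag and both function names are made up for illustration; a real version would remember which lines it had already handled and pipe each new one to `llm`:

```python
import time
from pathlib import Path

WATCH_TAG = "#llm-todo"  # hypothetical tag; any marker works

def scan_for_tags(root=".", tag=WATCH_TAG):
    """Return (path, line) pairs for every line carrying the tag."""
    hits = []
    for page in Path(root).rglob("*.md"):
        for line in page.read_text(encoding="utf-8").splitlines():
            if tag in line:
                hits.append((page, line.strip()))
    return hits

def watch(root=".", interval=60):
    """Poll the vault and act on tagged lines as they appear."""
    while True:
        for page, line in scan_for_tags(root):
            print(f"{page}: {line}")
        time.sleep(interval)
```

Polling is crude but fits the flat-file model; since Obsidian writes plain markdown, nothing fancier than mtimes and string search is required.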
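---

The "human-driven chunking" idea above can be sketched concretely: treat each page as one chunk and its `[[wiki-links]]` as the human-drawn relations between chunks. This assumes standard Obsidian link syntax (including `[[Page|alias]]` and `[[Page#heading]]` forms); the function name is illustrative:

```python
import re
from pathlib import Path

# Capture the page name from [[Page]], [[Page|alias]], [[Page#heading]]
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def page_chunks(root="."):
    """One chunk per page, with outbound wiki-links recorded, so a
    retriever can follow human-drawn boundaries instead of
    fixed-size windows."""
    chunks = {}
    for page in Path(root).rglob("*.md"):
        text = page.read_text(encoding="utf-8")
        chunks[page.stem] = {
            "text": text,
            "links": [m.strip() for m in WIKILINK.findall(text)],
        }
    return chunks
```

A retriever built on this could embed whole pages and expand context by following links, rather than re-deriving boundaries the writer already drew.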