How erdscribe Works: From Clawdbot to Blog

A brief walkthrough of how blog.erdscribe.com gets built, from spotting an interesting tweet to publishing an article on the web.

The Stack

Layer     Tool                                     Role
-------   --------------------------------------   -------------------------------------------------------
Capture   dydo (Clawdbot) + dydo-summarise skill   AI agent that reads, summarizes, and saves content
Storage   Obsidian vault (local markdown)          Knowledge base; all files are .md with YAML frontmatter
Sync      obsidian-git plugin                      Auto-pushes the vault to GitHub every 5 minutes
Build     Zensical (Python SSG by squidfunk)       Converts the markdown vault into a static site
Deploy    GitHub Actions + Cloudflare Pages        Auto-builds and hosts at blog.erdscribe.com

Step 1: Content Capture with Clawdbot

dydo is a personal AI assistant (a "Clawdbot" — Claude Code running with custom skills). One of its skills is dydo-summarise, a two-stage summarization workflow:

Stage 1 — Quick Screen. I paste a URL (tweet thread, blog post, YouTube video) and the skill fetches the content and produces a short Traditional Chinese summary: core argument in 2-3 sentences, 3-5 key bullets. This gets sent to my Telegram so I can decide if it's worth archiving.

Stage 2 — Deep Summary. If I say yes, it creates a full bilingual summary (English + Traditional Chinese) structured by theme, with concrete numbers and quotes. The output is a markdown file saved directly into my Obsidian vault at ~/Documents/knowledge/vault/.
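The skill's internals aren't public, but the two-stage flow reduces to a small control loop. A minimal sketch, where every callable is a hypothetical stand-in for the skill's real fetch/summarize/notify/save steps:

```python
def capture(url, fetch_content, summarize_quick, summarize_deep, approve, save):
    """Two-stage capture flow (sketch, not the actual skill code).

    Stage 1: quick screening summary, sent out for a yes/no decision.
    Stage 2: full bilingual summary, saved into the vault only on approval.
    """
    text = fetch_content(url)
    quick = summarize_quick(text)   # Stage 1: short TC summary -> Telegram
    if not approve(quick):          # human decides if it's worth archiving
        return None
    note = summarize_deep(text)     # Stage 2: full bilingual summary
    return save(note)               # markdown file into the Obsidian vault
```

The callables are injected so the same loop works for any source type (tweet, blog post, video transcript).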

Each file gets YAML frontmatter:

---
title: "Article Title Here"
author: Author Name
source: https://original-url.com
date: 2026-02-10
tags: [AI, infrastructure, scaling]
---

The skill handles different source types automatically — Twitter threads via bird read, YouTube via transcript extraction, Cloudflare-protected sites via headless browser.
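For flat fields like the ones above, emitting the frontmatter header needs no YAML library. A minimal sketch of what the skill might write (helper name hypothetical; assumes titles contain no embedded double quotes):

```python
def frontmatter(title, author, source, date, tags):
    """Render the flat YAML frontmatter block the vault files use.

    Sketch only: handles the simple flat fields shown in the article,
    not arbitrary YAML quoting/escaping edge cases.
    """
    lines = [
        "---",
        f'title: "{title}"',
        f"author: {author}",
        f"source: {source}",
        f"date: {date}",
        "tags: [" + ", ".join(tags) + "]",
        "---",
    ]
    return "\n".join(lines)
```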

Step 2: Obsidian as the Knowledge Base

The vault is organized into topic folders:

vault/
├── AI/                  (20 notes)
├── Finance/             (4 notes)
├── blockchain/          (5 notes)
├── Security/            (2 notes)
└── workflows/           (1 note)

I read and annotate in Obsidian. The obsidian-git plugin auto-commits and pushes to a private GitHub repo (hsrvc/knowledge-vault) every few minutes. No manual git needed.
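Under the hood, each obsidian-git timer tick amounts to stage, commit-if-dirty, push. A rough Python equivalent (hypothetical helper, not the plugin's actual code, which also handles pull/rebase and conflicts):

```python
import subprocess

def _git(*args, cwd):
    """Run a git command in the vault, raising on failure."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True)

def auto_sync(vault, message="vault backup"):
    """Roughly what obsidian-git does on each timer tick."""
    _git("add", "-A", cwd=vault)
    status = _git("status", "--porcelain", cwd=vault)
    if status.stdout.strip():            # only commit when something changed
        _git("commit", "-m", message, cwd=vault)
    _git("push", "origin", "HEAD", cwd=vault)
```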

Step 3: Vault = Blog (Zero Duplication)

The key architectural decision: the vault IS the blog. No separate repo, no sync scripts, no copy step.

We added a zensical.toml config file to the vault root with docs_dir = ".", which tells the SSG to build from the vault directory itself. The markdown files Obsidian writes are the same files the blog renders.

[project]
site_name = "erdscribe"
site_url = "https://blog.erdscribe.com"
docs_dir = "."

nav = [
  { Home = "index.md" },
  { "AI" = ["AI/index.md", "AI/article-1.md", ...] },
  ...
]

Each section has an index.md with a timeline-style article catalog — dates, titles, authors, and tag pills — all in inline HTML+CSS so it works within the markdown SSG constraints.
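Because article ordering is controlled entirely by the hand-maintained nav, one way to keep sections newest-first without a date-sorting plugin would be to regenerate each section's entries from the frontmatter dates. A hypothetical script, not part of the current pipeline:

```python
import os
import re

def article_date(path):
    """Pull the date: field out of a note's YAML frontmatter."""
    with open(path, encoding="utf-8") as f:
        head = f.read(2048)                 # frontmatter sits at the top
    m = re.search(r"^date:\s*(\d{4}-\d{2}-\d{2})", head, re.M)
    return m.group(1) if m else "0000-00-00"

def section_nav(folder):
    """Return a section's nav paths: index first, then notes newest-first."""
    notes = [os.path.join(folder, n) for n in os.listdir(folder)
             if n.endswith(".md") and n != "index.md"]
    notes.sort(key=article_date, reverse=True)
    return [folder + "/index.md"] + [p.replace(os.sep, "/") for p in notes]
```

The output list maps directly onto one section entry of the nav table in zensical.toml.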

Step 4: Auto-Deploy Pipeline

Write in Obsidian
    → obsidian-git auto-push (every 5 min)
        → GitHub Actions triggers on push to main
            → pip install zensical && zensical build
            → copy theme assets (Zensical quirk)
            → wrangler pages deploy → Cloudflare Pages
                → live at blog.erdscribe.com

Total latency from saving a note to it appearing on the blog: ~7-12 minutes.

The GitHub Actions workflow (deploy.yml) is 36 lines:

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install and build
        run: |
          pip install -r requirements.txt
          zensical build --clean
          # Zensical doesn't copy theme assets automatically
          python -c "
          import zensical, os, shutil
          src = os.path.join(os.path.dirname(zensical.__file__), 'templates', 'assets')
          shutil.copytree(src, 'site/assets', dirs_exist_ok=True)
          if os.path.isdir('stylesheets'):
              shutil.copytree('stylesheets', 'site/stylesheets', dirs_exist_ok=True)
          "
      - name: Deploy to Cloudflare Pages
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          command: pages deploy site --project-name erdscribe

What Makes This Work

  1. No friction. Writing a note in Obsidian IS publishing. No export step, no CMS, no deploy button.
  2. AI does the heavy lifting. The summarize skill handles fetching, structuring, translating, and filing. I just paste URLs and decide yes/no.
  3. Vault-as-repo. The Obsidian vault is the git repo is the blog source. One truth, zero duplication.
  4. Static = fast + free. Cloudflare Pages hosting is free. No server, no database, no maintenance.

Gotchas We Hit

  • Zensical is alpha (v0.0.23) — it doesn't copy theme CSS/JS to the build output. The deploy workflow has a post-build Python script to fix this.
  • wrangler doesn't work locally under Claude Code agent mode — reports 0 files uploaded. We use GitHub Actions exclusively for deploys.
  • No date-sorting plugin — we hand-maintain an explicit nav in zensical.toml to control article ordering (newest-first).
  • Inline HTML in markdown — Zensical's md_in_html extension doesn't reliably process <div markdown> blocks. We use pure HTML for styled components (homepage cards, timeline catalogs).