Sparrow's wagtail-ai integration ships in every site image by default. Editors get a context-aware AI assistant inside the rich-text editor; developers get the same model accessible through the public Sparrow Import API for headless content generation. Both flows authenticate against your provider of choice — Anthropic, OpenAI, or an Ollama instance you host yourself.
In the editor
Open any RichTextField or rich-text block in a StreamField. Highlight a paragraph and the AI menu surfaces in the toolbar — Rewrite, Summarise, Translate, Expand outline, plus a free-form Ask… prompt. The assistant has the surrounding page context (title, hero, sibling pages) so the generated text matches voice and structure instead of reading like generic boilerplate.
Alt-text generation runs the same way from the image chooser: pick an image, hit Generate alt text, accept or edit. Accessibility audits go from afternoon-long tasks to inline review.
From the API
The same model is reachable headlessly. The Sparrow Import API accepts YAML page definitions (see /api/wagtail/import/) and produces real Wagtail pages with revisions and publish-on-save behaviour. A typical pipeline:
curl -X POST https://<site>/api/wagtail/import/ \
-H "X-API-Key: $SPARROW_API_KEY" \
-H "Content-Type: text/yaml" \
--data-binary @generated-post.yml

The API key is derived from the site's SPARROW_SECRET_KEY via SHA-256 (hashlib.sha256(f"{key}:sparrow-api-key".encode()).hexdigest()[:48]) — no separate secret store to manage. The same endpoint handles snippets, layouts, navigation: anything you would normally import from docker/import/*.yml.
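The derivation above is easy to reproduce outside the container, for example in a CI step that needs the key without shelling into the site. A minimal sketch (the secret value here is made up):

```python
import hashlib

def derive_sparrow_api_key(secret_key: str) -> str:
    """SHA-256 of "<secret>:sparrow-api-key", truncated to 48 hex characters."""
    digest = hashlib.sha256(f"{secret_key}:sparrow-api-key".encode()).hexdigest()
    return digest[:48]

# Derive the key from an example secret; real deployments read
# SPARROW_SECRET_KEY from the namespace's secrets.
key = derive_sparrow_api_key("example-secret")
print(len(key))  # 48
```

Because the key is a pure function of the secret, rotating SPARROW_SECRET_KEY rotates the API key with no extra bookkeeping.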
Combined with a local Claude or GPT call producing the YAML, this is the building block for batch content backfills, AI-assisted CMS migrations, and editorial agents that draft posts ahead of human review.
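The upload step of such a pipeline needs nothing beyond the standard library. A sketch of building the POST request (host, key, and file name are placeholders):

```python
import urllib.request

def build_import_request(site: str, api_key: str, yaml_bytes: bytes) -> urllib.request.Request:
    """Build the POST request for the Sparrow Import API endpoint."""
    return urllib.request.Request(
        url=f"https://{site}/api/wagtail/import/",
        data=yaml_bytes,
        headers={
            "X-API-Key": api_key,
            "Content-Type": "text/yaml",
        },
        method="POST",
    )

# Sending it is one call once the YAML has been generated:
# with urllib.request.urlopen(build_import_request("example.com", key, yaml)) as resp:
#     print(resp.status)
```

Keeping request construction separate from sending makes the pipeline step unit-testable without network access.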
Configuration
The integration is enabled per-site via WAGTAIL_AI in settings:
WAGTAIL_AI = {
"BACKENDS": {
"default": {
"CLASS": "wagtail_ai.ai.llm.LLMBackend",
"CONFIG": {"MODEL_ID": "claude-sonnet-4-6"},
},
},
}

Switch MODEL_ID to gpt-4o, an Ollama tag (ollama/llama3.2), or any other model the llm library supports. The API key for the chosen backend lives in the namespace's secrets, never in the image.