Why We Trained on This
We had three goals:
- Upskill the creative team so they can use Agents in Feed to scale outputs without drowning in manual work
- Clarify how agent logic interacts with feed inputs, copy generation, and image variants
- Prepare for automation-heavy projects that depend on clear rules, smart structuring, and repeatable logic
Even if the interface changes later, the thinking habits stay with the team, and that's what really matters.
Why This Still Matters (Even If It’s Temporary)
Learning the current system helps teams build creative and operational muscle memory:
- Understand how logic-based decisioning works in automated content
- Test and prototype scaling rules before Nodal View arrives
- Troubleshoot intelligently instead of guessing
- Build confidence in chaining, referencing, and variant workflows
Tools change. Skills endure.
Two Core Actions You’ll Use Most
1. GenerateImage / GenerateVideo
Use these when your cell contains a pure text prompt.
- No need to write format size in the prompt
- But you must select the format size + model in the dropdown
If you pick the wrong combination, the generation will fail. That’s not creativity — that’s physics.
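The dropdown pairing works like a hard compatibility check: a model either supports a format size or the generation fails. The sketch below illustrates that idea; the model names and supported sizes are assumptions for illustration (the only combination this doc confirms is that Sora lacks 1080×1920), not Pencil's actual configuration.

```python
# Illustrative model/format compatibility table. These size lists are
# assumptions, except the documented fact that Sora lacks 1080x1920.
SUPPORTED_FORMATS = {
    "sora": {"1920x1080", "1080x1080"},                  # no 1080x1920
    "nano-banana": {"1920x1080", "1080x1920", "1080x1080"},
}

def can_generate(model: str, fmt: str) -> bool:
    """Return True only if the model supports the requested format size."""
    return fmt in SUPPORTED_FORMATS.get(model, set())

print(can_generate("sora", "1080x1920"))  # False: this combo fails outright
```

Checking the pair before you hit generate is cheaper than debugging a failed run afterwards.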
2. UseAgent
Use this when your prompt needs intake from another cell.
- You must include the format size in the prompt
- Pencil will automatically highlight which cells you are referencing
- The “Attachment” input box works well only for one reference
- More than one attachment usually confuses the AI
- Best combo for referencing: Nano Banana (image) + Google Gemini (text)
- Reminder: not all models support all sizes (Sora does not support 1080×1920)
This is where chained logic lives. This is how your sheet becomes a structured creative machine instead of a patchwork of prompts.
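A rough mental model for how a UseAgent prompt pulls intake from other cells: the prompt is a template, and each referenced cell gets substituted in before the agent sees it. The sheet contents and the `{C3}`-style substitution syntax below are assumptions for illustration, not Pencil's actual internals.

```python
import re

# Toy sheet: cell address -> content. Purely illustrative.
sheet = {
    "C3": "matte black water bottle, 750 ml",
    "D3": "minimalist studio lighting, off-white backdrop",
}

def resolve_prompt(template: str, cells: dict) -> str:
    """Replace cell references like {C3} with that cell's content."""
    return re.sub(r"\{([A-Z]+\d+)\}", lambda m: cells[m.group(1)], template)

# Note the format size written into the prompt itself, as UseAgent requires.
prompt = resolve_prompt(
    "Create a 1080x1920 product image of {C3} in the style of {D3}.", sheet
)
```

The point of the model: the agent only ever sees the resolved text, so a vague or empty referenced cell means a vague prompt.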
Prompt Referencing Cheat Sheet
(Everyone should memorize this part.)
- C3 / F3 → fully relative (row + column change)
- $C3 / $F3 → column locked (stays fixed when you drag horizontally)
- C$3 / F$3 → row locked (stays fixed when you drag vertically)
- $C$3 / $F$3 → fully locked (never moves)
This is the difference between feeling powerful and feeling confused.
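The four lock patterns behave exactly like standard spreadsheet fill: a `$` pins that part of the reference, and anything unpinned shifts with the drag. A minimal sketch (single-letter columns only, an assumption for brevity):

```python
import re

def fill_reference(ref: str, d_col: int, d_row: int) -> str:
    """Shift a spreadsheet-style reference when its cell is dragged.

    `$C3` keeps the column, `C$3` keeps the row, `$C$3` never moves.
    Handles single-letter columns only, for brevity.
    """
    m = re.fullmatch(r"(\$?)([A-Z])(\$?)(\d+)", ref)
    col_lock, col, row_lock, row = m.groups()
    if not col_lock:
        col = chr(ord(col) + d_col)   # relative column shifts with the drag
    if not row_lock:
        row = str(int(row) + d_row)   # relative row shifts with the drag
    return f"{col_lock}{col}{row_lock}{row}"

# Dragging one column right and one row down:
print(fill_reference("C3", 1, 1))    # D4   (fully relative)
print(fill_reference("$C3", 1, 1))   # $C4  (column locked)
print(fill_reference("C$3", 1, 1))   # D$3  (row locked)
print(fill_reference("$C$3", 1, 1))  # $C$3 (fully locked)
```

If a dragged prompt is pulling from the wrong cell, check which half of the reference you forgot to lock.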
Best Practices for Stronger Generations
Better inputs = better context = better outputs
A good rule of thumb:
- Product image + image reference → 55% accuracy
- Product image + written product details + written label + written visual style → 80–90% accuracy
Every extra reference point gives the agent a firmer foundation.
Moodboard cells
Referencing multiple images in one cell can lead to split-screen visuals.
Good for ideation. Bad for final imagery.
Use with intention.
What Agents in Feed Are Great For
✔ Rapid scaling by dragging prompts across rows
✔ Seeing strategy, prompts, and outputs in a single structured view
✔ Early concept testing
✔ Variant exploration
✔ Fast prototyping
✔ Building row-by-row logic for campaigns
It’s a thinking tool as much as a production tool.
Where It Struggles
Like every system, it has edges:
- Cannot yet stitch videos, except via scene-based templates
- Start/End frame referencing for video not supported yet
- One designer at a time per sheet (collaboration bottleneck)
- Requires a clean, well-structured sheet to avoid errors
- Asset Library upload isn’t supported yet (CMS links work fine)
- Prompt-to-output isn’t always a clean 1:1
- Model-format mismatches remain the #1 cause of failed generations
The system is powerful, but it rewards discipline.