Elevate ceremonies, team dynamics, and continuous improvement with AI as your facilitation co-pilot.
Tool: ChatGPT (faster, more creative with facilitation ideas)
Why: Generates engaging formats and questions quickly, better at variety
I'm facilitating a retrospective for a [team size] team that just completed a sprint where [describe 1-2 key issues: e.g., "we missed 40% of story points" or "morale dropped after a production incident"].
Generate:
1. A retrospective format (not Start/Stop/Continue)
2. 5 specific questions that surface root causes
3. A closing activity to commit to 1-2 experiments
Keep it under 90 minutes.
Convert these retrospective notes into a concise improvement action plan with format: Issue | Root Cause | Experiment | Owner | Success Metric | Review Date.
Retro notes:
[Paste notes]
Based on these sprint outcomes, suggest 3 possible retrospective themes with 2-3 guiding questions each.
Sprint outcomes:
- Velocity: [number]
- Stories completed: [X of Y]
- Key challenges: [list]
- Team sentiment: [describe]
Tool: ChatGPT with Code Interpreter OR Claude with Artifacts
Why: Processes velocity data, calculates trends, identifies patterns
Here's velocity data from our last 6 sprints:
Sprint 1: [X] points
Sprint 2: [X] points
Sprint 3: [X] points
Sprint 4: [X] points
Sprint 5: [X] points
Sprint 6: [X] points
Our Definition of Done hasn't changed. Team composition is stable.
Identify 3 likely root causes for the volatility and suggest 1 facilitation technique per cause to improve flow.
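Before pasting the numbers, it can help to quantify the volatility yourself so the prompt has a concrete figure to react to. A minimal sketch (the six velocities are made-up stand-ins for the [X] placeholders):

```python
from statistics import mean, stdev

# Hypothetical velocities standing in for the [X] placeholders above.
velocities = [34, 21, 40, 18, 36, 25]

avg = mean(velocities)
volatility = stdev(velocities) / avg  # coefficient of variation
print(f"Average velocity: {avg:.1f} points")
print(f"Volatility (CV): {volatility:.0%}")  # above roughly 20-25% usually reads as unstable
```

A coefficient of variation in the 30% range, as in this made-up data, is worth stating explicitly in the prompt rather than leaving the AI to infer it.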
Analyze this cycle time data and identify bottlenecks.
Stories from last sprint:
1. Story A: To Do (1 day) → In Progress (8 days) → Review (3 days) → Done
2. Story B: To Do (2 days) → In Progress (4 days) → Review (5 days) → Done
3. Story C: To Do (1 day) → In Progress (12 days) → Review (2 days) → Done
Which stage is the bottleneck? Suggest 2 experiments to reduce it.
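You can also pre-compute the per-stage averages so the AI's answer is easy to verify. A minimal sketch using the three example stories above:

```python
from statistics import mean

# Durations in days per stage, mirroring Stories A-C above.
stories = {
    "A": {"To Do": 1, "In Progress": 8, "Review": 3},
    "B": {"To Do": 2, "In Progress": 4, "Review": 5},
    "C": {"To Do": 1, "In Progress": 12, "Review": 2},
}

stage_avgs = {
    stage: mean(s[stage] for s in stories.values())
    for stage in ["To Do", "In Progress", "Review"]
}
bottleneck = max(stage_avgs, key=stage_avgs.get)
print(f"Average days per stage: {stage_avgs}")
print(f"Bottleneck: {bottleneck}")
```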
Based on these sprint metrics, give me a health score (1-10) and top 3 improvement areas.
Metrics:
- Planned: [X] points, Completed: [Y] points
- Stories spilled: [number]
- Blockers raised: [number]
- Team capacity: [X] days available, [Y] days used
- Sprint goal: [achieved/not achieved]
Tool: ChatGPT or Claude (web interface)
Why: Creates structured agendas and facilitates alignment quickly
Generate a Sprint Planning agenda for a [team size] cross-functional team with a focus on predictability and ownership.
Sprint context:
- Sprint length: [X weeks]
- Team capacity: [X points or days]
- Top backlog items: [list 3-5]
Include time boxes, key questions to ask, and facilitation tips.
Summarize these daily stand-up notes into a clear update for stakeholders, highlighting progress toward sprint goal, blockers needing help, and forecast.
Stand-up notes:
[Paste raw notes from team members]
Output format:
- Progress: [1-2 sentences]
- Critical blockers: [bullet list]
- Forecast: [on track / at risk / off track + why]
Create a Sprint Review script that encourages transparency and feedback.
Sprint details:
- Sprint goal: [goal]
- Completed stories: [list]
- Demos planned: [what will be shown]
- Stakeholders attending: [roles]
Include:
- Opening (5 min)
- Demo structure (20-30 min)
- Feedback collection method
- Closing with next sprint preview
Tool: Claude (better at nuanced coaching advice)
Why: Handles complex interpersonal situations with more depth
My team waits for me to assign work, rarely volunteers for tasks, and doesn't resolve blockers independently. They're technically strong but lack ownership.
Suggest 3 coaching techniques I can use over the next 2 sprints to shift behavior. Include:
- What to say/do
- When to intervene vs. step back
- How to measure improvement
Two team members disagree on [describe conflict: e.g., "technical approach" or "who should own this work"].
Suggest:
1. A facilitation approach to surface the real issue
2. Questions to ask each person
3. How to guide them to a resolution without deciding for them
We have a new [role] joining the team. Create a 2-week onboarding plan covering:
- Agile practices and team norms
- Tools and access
- Pairing/shadowing schedule
- First small contribution
Team context: [describe team size, sprint length, key practices]
Tool: ChatGPT or Claude (web interface)
Why: Quickly structures problems and generates action plans
We've had story spillover in 5 of our last 6 sprints. Common patterns:
- Stories marked "In Progress" but not touched for 3+ days
- Acceptance criteria discovered mid-sprint
- Dependencies on external teams surfacing late
Generate:
1. The likely root cause for each pattern
2. One experiment to address each cause
3. How to measure if the experiment worked
This blocker has been open for [X days]: [describe blocker]
Create an escalation plan:
- Who to involve (in order)
- What information they need
- Deadline for each escalation step
- Workaround options while we wait
Identify risks for this sprint based on the plan below. Create a risk table: Risk | Likelihood (Low/Med/High) | Impact (Low/Med/High) | Mitigation.
Sprint plan:
- Sprint goal: [goal]
- Key stories: [list]
- Team capacity: [any constraints]
- External dependencies: [list]
Tool: ChatGPT (better at creative activities)
Why: More playful and varied in suggestions
My team seems low-energy and disengaged mid-sprint. Suggest 3 quick activities (5-15 minutes each) to re-energize them that work for remote/hybrid teams.
Team context: [size, remote/hybrid/co-located, current mood]
We just hit a major milestone: [describe achievement]. Suggest 3 ways to recognize the team's effort that feel genuine and meaningful (not just pizza parties).
Team preferences: [any known preferences or constraints]
Refine prioritization, communication, and value delivery with the right AI tool for each task.
Tool: ChatGPT or Claude (web interface)
Why: Fast iteration, handles long backlogs, outputs structured formats (Gherkin, tables, acceptance criteria)
Rewrite this user story using INVEST criteria. Add acceptance criteria in Gherkin format (Given/When/Then).
Current story:
[Paste your story here]
Output:
- Rewritten story (Independent, Negotiable, Valuable, Estimable, Small, Testable)
- 3-5 acceptance criteria in Gherkin
- Any assumptions or dependencies
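For reference, one acceptance criterion in Gherkin looks like this (the login scenario is purely illustrative):

```gherkin
Scenario: Registered user logs in with valid credentials
  Given a registered user is on the login page
  When they submit a valid email and password
  Then they are redirected to their dashboard
```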
Break this epic into 5-8 user stories that can each be completed in one sprint.
Epic:
[Paste epic description]
For each story, include:
- User story format (As a [who], I need [what], so that [why])
- Estimated size (S/M/L)
- Any dependencies between stories
This story is too large to complete in one sprint. Split it into 2-3 smaller stories that each deliver incremental value.
Large story:
[Paste story]
For each split, explain:
- What value it delivers independently
- What gets deferred to later stories
Tool: ChatGPT with Code Interpreter OR Claude with Artifacts
Why: Can process data tables, calculate scores, generate prioritization matrices, export as CSV
Analyze these backlog items and create a prioritization table with columns: Item | Business Value (1-5) | Effort (S/M/L) | Risk (Low/Med/High) | Priority Score | Recommendation.
Calculate Priority Score as: (Business Value × 2) − (Effort penalty: S=1, M=3, L=5) − (Risk penalty: Low=0, Med=2, High=4)
Items:
1. [Item title and 1-line description]
2. [Item title and 1-line description]
3. [Item title and 1-line description]
...
Rank by Priority Score and flag any high-risk items needing de-risking work first.
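To sanity-check the AI's arithmetic, the scoring rule above is easy to reproduce yourself. A minimal sketch (the three backlog items are invented):

```python
EFFORT_PENALTY = {"S": 1, "M": 3, "L": 5}
RISK_PENALTY = {"Low": 0, "Med": 2, "High": 4}

def priority_score(value: int, effort: str, risk: str) -> int:
    """Score = (Business Value x 2) - effort penalty - risk penalty."""
    return value * 2 - EFFORT_PENALTY[effort] - RISK_PENALTY[risk]

# Hypothetical backlog items standing in for the placeholders above.
items = [
    ("SSO login", 5, "M", "Med"),
    ("Dark mode", 2, "S", "Low"),
    ("Billing rewrite", 4, "L", "High"),
]
for name, value, effort, risk in sorted(
    items, key=lambda i: priority_score(*i[1:]), reverse=True
):
    print(f"{name}: {priority_score(value, effort, risk)}")
```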
Place these backlog items into a 2×2 matrix:
- High Value / Low Effort (do first)
- High Value / High Effort (plan carefully)
- Low Value / Low Effort (quick wins)
- Low Value / High Effort (reconsider)
Items:
[Paste list with rough effort estimates]
Output as a table showing which quadrant each item belongs in and why.
Identify dependencies between these stories and suggest optimal sequencing to minimize blocking.
Stories:
1. [Story title and 1-line description]
2. [Story title and 1-line description]
3. [Story title and 1-line description]
...
Output format:
Story | Depends On | Reason | Recommended Sequence (1st, 2nd, 3rd...)
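If the dependency map is already known, the sequencing step is just a topological sort, which you can run yourself and paste in for the AI to sanity-check and explain. A sketch with invented story names:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependencies: story -> set of stories it depends on.
deps = {
    "Checkout UI": {"Payment API"},
    "Payment API": {"User accounts"},
    "Order emails": {"Checkout UI"},
    "User accounts": set(),
}

sequence = list(TopologicalSorter(deps).static_order())
print(sequence)  # dependency-free stories come first
```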
Tool: ChatGPT or Claude (web interface)
Why: Synthesizes multiple inputs into clear goals and narratives
Based on these stories we're committing to this sprint, write a sprint goal that:
- Describes user/business outcome (not technical output)
- Can be achieved even if 1-2 stories spill over
- Fits in one sentence
- Inspires the team
Stories for this sprint:
1. [Story title]
2. [Story title]
3. [Story title]
...
Also suggest 1-2 success metrics we can check at sprint review.
Our team has [X] story points of capacity this sprint based on past velocity. We're considering committing to these stories.
Stories and estimates:
1. [Story] - [points]
2. [Story] - [points]
3. [Story] - [points]
...
Total: [sum] points
Is this realistic? If we're over capacity, which stories should we defer and why?
Tool: ChatGPT (faster, more conversational tone)
Why: Better at translating technical work into business language
Write customer-facing release notes for these completed stories. Keep it concise, focus on benefits (not features), and use friendly language.
Completed stories:
1. [Story description]
2. [Story description]
3. [Story description]
Format:
- What's New (headline + 1-2 sentences each)
- Why It Matters (1 sentence connecting to customer value)
Convert these meeting notes into backlog-ready action items with priority.
Meeting notes:
[Paste raw notes]
Output format:
| Action Item | Priority (High/Med/Low) | Owner | Due Date |
Summarize this sprint's outcomes for executives. Focus on: business value delivered, risks mitigated, and next sprint's focus.
Sprint details:
- Sprint goal: [goal]
- Completed stories: [list]
- Spilled stories: [list if any]
- Key decisions made: [list]
Output as 3-4 bullet points, each under 2 sentences.
Tool: Claude (better at strategic thinking and connecting themes)
Why: Handles complex context, makes logical connections between goals and initiatives
Create a roadmap narrative connecting our quarterly goals to user outcomes.
Q1 Goals:
1. [Goal]
2. [Goal]
3. [Goal]
Target users: [describe]
Key pain points we're solving: [list]
Output as a 3-paragraph story: where we are, where we're going, why it matters to users.
Group these backlog items into 3-5 strategic themes. For each theme, write a 1-sentence description and list which items belong.
Backlog items:
1. [Item]
2. [Item]
3. [Item]
...
This will help me communicate our roadmap to leadership.
Tool: ChatGPT or Claude (web interface)
Why: Processes messy input (customer feedback, support tickets) into structured backlog items
Convert these customer feedback notes into backlog items using format: As a [user type], I need [capability] so that [outcome/benefit].
Include suggested priority (High/Med/Low) based on frequency and impact mentioned.
Feedback:
[Paste customer feedback, support tickets, or interview notes]
The dev team mentioned these technical debt items. Translate them into business language I can use to prioritize against feature work.
Technical debt items:
1. [Technical description]
2. [Technical description]
3. [Technical description]
For each, explain:
- What breaks or slows down if we don't fix it
- Which features are blocked by it
- Suggested priority
Review these user stories and identify which ones have weak or missing acceptance criteria. Suggest improved criteria for each.
Stories:
1. [Story with current acceptance criteria]
2. [Story with current acceptance criteria]
3. [Story with current acceptance criteria]
Output format:
Story | Issue | Improved Acceptance Criteria
Tool: Miro + AI (generate structure in ChatGPT/Claude, paste into Miro for team collaboration)
Why: Live collaboration on refined outputs
Generate a user story map structure for this feature.
Feature: [describe feature]
User journey steps: [list high-level steps]
Output as:
- Activities (top level)
- User tasks (under each activity)
- Stories (under each task, prioritized)
Format as a table I can paste into Miro.
Simplify problem-solving, coding, and documentation with AI as your technical partner.
Tool: Cursor or GitHub Copilot (in your IDE) OR Claude for detailed reviews
Why: IDE tools see full codebase context; Claude gives more thorough analysis
Review this [language] code for performance bottlenecks. Suggest specific optimizations with before/after examples.
Focus on:
- Time complexity
- Memory usage
- Database query efficiency (if applicable)
Code:
[Paste code]
Expected scale: [e.g., "handles 10K requests/min" or "processes 1M records"]
Review this code for security vulnerabilities. Check for:
- SQL injection
- XSS risks
- Authentication/authorization gaps
- Sensitive data exposure
- Input validation issues
Code:
[Paste code]
For each issue found, explain the risk and show a secure alternative.
Audit this code for readability issues. Suggest improvements for:
- Variable/function naming
- Code structure
- Comments (where needed)
- Complexity reduction
Code:
[Paste code]
Rewrite the most problematic section with improvements highlighted.
Tool: Claude (better at technical explanation)
Why: More precise with technical concepts, better at layered explanations
Explain what this function does in plain English at 3 levels:
1. One-sentence summary
2. Paragraph explanation for a developer unfamiliar with the codebase
3. Inline comments for the code itself
Function:
[Paste function]
Context: [what system/feature this is part of]
Generate developer documentation for these API endpoints.
For each endpoint, include:
- Purpose
- HTTP method and path
- Request parameters (with types and descriptions)
- Response format (with example)
- Error codes and meanings
- Example curl command
Endpoints:
[Paste endpoint details or code]
Write an Architecture Decision Record for this technical decision.
Decision: [what was decided]
Context: [why this decision was needed]
Alternatives considered: [list]
Trade-offs: [pros/cons of chosen approach]
Format as ADR template:
- Title
- Status (proposed/accepted)
- Context
- Decision
- Consequences
Tool: Cursor or Copilot (for generation in IDE) OR Claude (for test strategy)
Why: IDE tools generate tests faster; Claude provides better test design thinking
Generate unit tests for this [language] method using [testing framework].
Include:
- Happy path (2-3 cases)
- Edge cases (boundary conditions)
- Error conditions (invalid input, exceptions)
Method:
[Paste method]
Use [mocking library if needed] for dependencies.
I have these test cases for [feature/function]. Identify coverage gaps and suggest 3-5 additional test cases.
Existing tests:
1. [Test description]
2. [Test description]
3. [Test description]
Feature/function behavior:
[Describe what it should do]
Generate integration test scenarios for this workflow:
Workflow: [describe multi-step process or system interaction]
For each scenario, provide:
- Test name
- Preconditions
- Steps
- Expected result
- What could break if this isn't tested
Tool: Cursor (for in-IDE refactoring) OR Claude (for refactoring strategy)
Why: Cursor can apply changes directly; Claude provides better strategic planning
Propose a step-by-step refactoring plan for this code to improve maintainability without changing behavior.
Prioritize changes by:
1. Low-risk, high-impact first
2. Can be done incrementally
3. Doesn't require extensive testing
Code:
[Paste code]
Current issues:
- [Issue 1]
- [Issue 2]
This function is doing too much. Suggest how to break it into smaller, single-responsibility functions.
Function:
[Paste function]
For each extracted function, provide:
- Function name
- Purpose
- Parameters
- Return value
Convert this technical debt description into a backlog-ready story with clear acceptance criteria.
Technical debt:
[Describe the debt]
Impact:
- What breaks or slows down because of it
- Which features are blocked
Output format:
As a [developer/system], I need [refactoring] so that [benefit].
Acceptance criteria:
- [ ] Criterion 1
- [ ] Criterion 2
Tool: Cursor or Copilot (in-IDE assistance)
Why: Real-time help as you code
Generate 3 clear, conventional commit messages for these code changes.
Changes:
[Describe what changed]
Format: <type>(<scope>): <description>
Types: feat, fix, docs, refactor, test, chore
Create a code review checklist for [language/framework] that covers:
- Functionality (does it work?)
- Security (any vulnerabilities?)
- Performance (any bottlenecks?)
- Readability (clear and maintainable?)
- Tests (adequate coverage?)
Make it concise (10-15 items max).
I'm getting this error: [paste error message]
Context:
- What I'm trying to do: [describe]
- Code: [paste relevant code]
- Environment: [language version, framework, OS]
Help me:
1. Understand what's causing it
2. Suggest 2-3 ways to fix it
3. Explain how to prevent it in the future
Improve test coverage, find edge cases, and communicate quality with precision.
Tool: Claude (more thorough with edge cases)
Why: Better at systematic thinking and comprehensive coverage
Generate test cases from this user story, categorized by:
- Positive scenarios (happy path)
- Negative scenarios (invalid input, errors)
- Edge cases (boundary conditions, unusual flows)
User story:
[Paste story with acceptance criteria]
For each test case, provide:
- Test ID
- Description
- Preconditions
- Steps
- Expected result
Generate 3-5 exploratory testing charters for this feature.
Feature: [describe feature]
For each charter, include:
- Focus area
- What to explore
- Risks to investigate
- Time box (15-30 min)
Suggest test data sets for boundary testing based on these input constraints:
Input field: [field name]
Type: [string/number/date/etc.]
Constraints: [min/max length, format, allowed values]
Provide:
- Valid boundaries (just inside limits)
- Invalid boundaries (just outside limits)
- Special cases (null, empty, special characters)
Tool: Claude (better strategic thinking)
Why: Handles complex test planning and risk assessment
Create a test strategy for this feature.
Feature: [describe]
User impact: [high/medium/low]
Technical complexity: [high/medium/low]
Release timeline: [when]
Include:
- Test scope (what to test)
- Test types (unit, integration, E2E, performance, security)
- Risk areas (what could go wrong)
- Entry/exit criteria
- Test environment needs
We have 200 regression tests that take 4 hours to run. Help me prioritize which tests to run first for this release.
Release changes:
[List features/fixes being deployed]
Current test suite:
[Paste test suite outline or list of test areas]
Output:
- High priority (must run): [list]
- Medium priority (should run if time): [list]
- Low priority (can skip this release): [list]
Identify 5 high-risk areas in this feature and propose a test strategy for each.
Feature description:
[Paste feature details]
For each risk, provide:
- What could go wrong
- Impact if it breaks
- Test approach to mitigate
- Recommended test types
Tool: ChatGPT or Claude
Why: Structures messy bug info into clear reports
Convert these rough bug notes into a structured bug report.
Notes:
[Paste rough description of bug]
Output format:
- Title: [concise, descriptive]
- Severity: [Critical/High/Medium/Low]
- Steps to reproduce:
1. Step 1
2. Step 2
- Expected result:
- Actual result:
- Environment: [browser, OS, app version]
- Attachments: [screenshots, logs]
Analyze this bug and suggest likely root causes with recommended actions.
Bug description:
[Paste bug details]
Frequency: [how often it happens]
Conditions: [when it happens]
Provide:
1. Top 3 likely root causes
2. How to verify each cause
3. Recommended fix approach
I've logged these bugs over the last sprint. Identify patterns and suggest process improvements.
Bugs:
1. [Bug summary]
2. [Bug summary]
3. [Bug summary]
...
Questions to answer:
- Are there common themes? (e.g., specific features, types of issues)
- What could we catch earlier? (e.g., in code review, unit tests)
- What process changes would prevent these?
Tool: ChatGPT with Code Interpreter (for metrics) OR Claude (for written reports)
Why: Code Interpreter processes data; Claude writes better narratives
Create a test summary report for stakeholders in non-technical language.
Test cycle: [sprint/release number]
Test execution:
- Total test cases: [X]
- Passed: [X]
- Failed: [X]
- Blocked: [X]
Key bugs found:
1. [Bug summary + severity]
2. [Bug summary + severity]
Release recommendation: [Go/No-Go + why]
Format as 3-4 concise paragraphs.
Analyze this test coverage data and identify gaps.
Current coverage:
- Unit tests: [X%]
- Integration tests: [X%]
- E2E tests: [X%]
Features/modules:
1. [Feature A] - coverage: [X%]
2. [Feature B] - coverage: [X%]
3. [Feature C] - coverage: [X%]
Suggest:
- Which areas need more coverage
- What types of tests to add
- Priority order
Create a defect metrics summary from this data:
Defects by severity:
- Critical: [X]
- High: [X]
- Medium: [X]
- Low: [X]
Defects by status:
- Open: [X]
- In Progress: [X]
- Fixed: [X]
- Verified: [X]
Defect age:
- 0-3 days: [X]
- 4-7 days: [X]
- 8+ days: [X]
Provide:
- Key insights
- Red flags (if any)
- Recommended actions
Tool: Cursor or Copilot (for test code) OR Claude (for automation strategy)
Why: IDE tools help write test automation; Claude helps plan what to automate
Help me decide which manual tests to automate first.
Manual test suite:
1. [Test name] - execution time: [X min] - run frequency: [per sprint/release]
2. [Test name] - execution time: [X min] - run frequency: [per sprint/release]
3. [Test name] - execution time: [X min] - run frequency: [per sprint/release]
Criteria:
- Time saved per year
- Complexity to automate (estimate)
- Stability (does it change often?)
Rank by ROI and explain why.
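The ROI ranking itself is simple enough to compute before prompting; manual hours saved per year divided by hours to automate is one rough proxy. A sketch with invented tests and numbers:

```python
# Hypothetical tests: (name, minutes per manual run, runs per year, hours to automate)
tests = [
    ("Checkout regression", 45, 26, 16),
    ("Login smoke", 10, 120, 4),
    ("Report export", 30, 12, 24),
]

def roi(minutes: int, runs: int, effort_hours: int) -> float:
    """Manual hours saved per year divided by hours to automate."""
    return (minutes * runs / 60) / effort_hours

ranked = sorted(tests, key=lambda t: roi(*t[1:]), reverse=True)
for name, *rest in ranked:
    print(f"{name}: ROI {roi(*rest):.1f}")
```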
Propose a test strategy for our CI/CD pipeline with frequent deployments.
Context:
- Deploy frequency: [daily/multiple per day/per sprint]
- Team size: [X developers]
- Current build time: [X minutes]
- Test types we have: [list]
Recommend:
- Which tests to run on commit (fast feedback)
- Which tests to run before merge (quality gate)
- Which tests to run post-deployment (smoke tests)
- Time budget for each stage
Generate realistic test data for this scenario.
Data needed:
- [Entity 1]: [fields and constraints]
- [Entity 2]: [fields and constraints]
Relationships: [how entities relate]
Provide 5-10 sample records in [JSON/CSV/SQL] format.
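If you'd rather generate the data locally, a few lines of Python produce linked sample records; the customer/order entities here are invented stand-ins for [Entity 1]/[Entity 2]:

```python
import json
import random

random.seed(7)  # reproducible sample data

customers = [
    {"id": i, "name": f"Customer {i}", "email": f"user{i}@example.com"}
    for i in range(1, 6)
]
orders = [
    {
        "id": 100 + i,
        "customer_id": random.choice(customers)["id"],  # FK back to a customer
        "total": round(random.uniform(5, 500), 2),
    }
    for i in range(10)
]

print(json.dumps({"customers": customers, "orders": orders}, indent=2))
```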
Turn AI into your strategic co-pilot for data-driven decisions and vision alignment.
Tool: ChatGPT with web search enabled (Claude can also search)
Why: Access to current market data, trends, competitor info
Summarize the top 5 trends affecting [your product area] in the next 12 months.
Product area: [e.g., "SaaS collaboration tools" or "fintech lending"]
For each trend, provide:
- What's changing
- Why it matters to our customers
- Potential impact on our product strategy
- 1-2 sources or examples
Create a competitive comparison matrix for these alternatives.
Our product: [brief description]
Competitors: [list 3-5 competitors]
Compare on:
- Target customer
- Key features
- Pricing model
- Strengths
- Weaknesses
- Market positioning
Output as a table.
Identify 3 emerging technologies that could disrupt [your market] within 2 years.
Market: [describe]
Current approach: [how things work today]
For each technology:
- What it is
- How it could change the market
- Threat or opportunity for us?
- Recommended action (monitor, experiment, invest)
Tool: Claude (better at strategic synthesis)
Why: Handles complex context, makes logical connections
Generate a concise vision statement that reflects these inputs:
Strategic goals:
1. [Goal]
2. [Goal]
3. [Goal]
Customer pain points we solve:
- [Pain 1]
- [Pain 2]
Our unique approach: [what makes us different]
Output 3 versions:
- One sentence (for exec summary)
- One paragraph (for team alignment)
- One page (for detailed strategy doc)
Translate our product OKRs into key measurable outcomes with leading indicators.
Objective: [objective statement]
Key Results:
1. [Key result]
2. [Key result]
3. [Key result]
For each KR, suggest:
- Measurable outcome (lagging indicator)
- Leading indicator (what predicts success)
- How to track it
Convert these strategic bets into a hypothesis-driven roadmap.
Bets we're making:
1. [Bet/assumption about market or customers]
2. [Bet/assumption]
3. [Bet/assumption]
For each bet, create:
- Hypothesis statement (We believe [X] will result in [Y] for [customer segment])
- How to validate it (experiment or metric)
- Success criteria
- Time frame
Tool: ChatGPT with Code Interpreter OR Claude with Artifacts
Why: Handles scoring, calculations, and prioritization matrices
Score these features using: Value (1-10) × Confidence (0-1) / Effort (1-10)
Features:
1. [Feature] - Value: [score] - Confidence: [0-1] - Effort: [score]
2. [Feature] - Value: [score] - Confidence: [0-1] - Effort: [score]
3. [Feature] - Value: [score] - Confidence: [0-1] - Effort: [score]
Calculate priority score and rank them. Show the formula and explain the top-ranked item's strategic fit.
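The formula is straightforward to verify by hand or in a few lines of Python (feature names and scores below are invented):

```python
def score(value: float, confidence: float, effort: float) -> float:
    """Priority = Value (1-10) x Confidence (0-1) / Effort (1-10)."""
    return value * confidence / effort

# Hypothetical features standing in for the placeholders above.
features = [
    ("Usage analytics", 8, 0.8, 5),
    ("CSV import", 6, 0.9, 2),
    ("AI assistant", 9, 0.4, 8),
]
for name, v, c, e in sorted(features, key=lambda f: score(*f[1:]), reverse=True):
    print(f"{name}: {score(v, c, e):.2f}")
```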
Draft a one-page business case for investing in this feature.
Feature: [describe]
Problem it solves: [describe]
Target customers: [segment]
Estimated effort: [weeks/sprints]
Include:
- Problem statement (2-3 sentences)
- Proposed solution
- Expected impact (revenue, retention, NPS, cost savings)
- Investment required
- Risks and mitigation
- Recommendation (go/no-go with reasoning)
Help me decide: build this capability in-house or buy/integrate a third-party solution?
Capability needed: [describe]
Current situation: [what we have now]
Compare on:
- Cost (build vs. buy, including maintenance)
- Time to market
- Strategic control (how critical to our differentiation?)
- Risk (technical, vendor, integration)
- Long-term flexibility
Recommendation: [build/buy/hybrid] with reasoning.
Tool: Claude (better at qualitative analysis)
Why: Synthesizes messy research data into insights
Create 3 user personas from this research data.
Data:
- Demographics: [paste survey/analytics data]
- Behavioral patterns: [paste usage data or interview notes]
- Goals: [what users are trying to achieve]
- Pain points: [frustrations and obstacles]
For each persona, include:
- Name and role
- Goals
- Behaviors
- Pain points
- Motivations
- How our product helps them
Convert these customer research notes into actionable insights and priorities.
Research notes:
[Paste interview transcripts, survey responses, support tickets]
Output:
- Top 5 themes (with frequency)
- Key pain points (ranked by severity and frequency)
- Feature requests (grouped by theme)
- Recommended next steps
Analyze this customer feedback using Jobs-to-be-Done framework.
Feedback:
[Paste customer quotes or descriptions of how they use the product]
For each "job," identify:
- Functional job (what task they're trying to complete)
- Emotional job (how they want to feel)
- Social job (how they want to be perceived)
- Current workarounds (how they solve it today)
- Our opportunity (how we can help better)
Tool: ChatGPT with Code Interpreter (for data analysis)
Why: Processes data, calculates metrics, generates insights
Design a product metrics dashboard for [feature/product area].
Business goal: [what we're trying to achieve]
User journey: [key steps]
Recommend:
- North Star Metric (primary success measure)
- 3-5 supporting metrics (leading and lagging indicators)
- How to track each
- What "good" looks like (benchmarks or targets)
I want to analyze user retention by cohort. Structure this request for our data team.
Cohorts to compare:
- [Cohort 1: e.g., "users who signed up in Q1"]
- [Cohort 2]
- [Cohort 3]
Retention definition: [e.g., "returned within 30 days"]
Metrics to track: [e.g., "activation rate, feature usage, churn"]
Output as a clear data request with expected format.
Define success metrics for this feature before we build it.
Feature: [describe]
Goal: [what problem it solves]
Target users: [segment]
Provide:
- Leading indicators (early signals it's working)
- Lagging indicators (long-term impact)
- How to measure each
- Target values (what success looks like)
- When to check (1 week, 1 month, 1 quarter)
Tool: ChatGPT (better conversational tone)
Why: Translates technical/complex info into executive-friendly language
Summarize this quarter's product work into an executive update.
Shipped:
- [Feature 1]
- [Feature 2]
In progress:
- [Initiative 1]
Metrics:
- [Key metric]: [value and trend]
- [Key metric]: [value and trend]
Format as 3-4 bullet points, each under 2 sentences. Lead with business impact, not features.
Create a roadmap narrative connecting our quarterly themes to customer outcomes.
Q1 theme: [theme and goals]
Q2 theme: [theme and goals]
Q3 theme: [theme and goals]
Target customers: [segment]
Key outcomes we're driving: [list]
Output as a 3-paragraph story: where we are, where we're going, why it matters.
I made this product decision: [describe decision]
Context:
- Why this decision was needed
- Options considered
- Trade-offs
- Who it affects
Draft a communication for [stakeholder group: team/leadership/customers] explaining the decision clearly and addressing likely concerns.
Transform ambiguity into actionable insight and crystal-clear requirements.
Tool: Claude (better at structured output and precision)
Why: Handles complex requirements with more accuracy, better at formal documentation
Turn these stakeholder quotes into clear business requirements with MoSCoW priority (Must/Should/Could/Won't).
Stakeholder quotes:
[Paste raw quotes from interviews or meetings]
For each requirement, provide:
- Requirement ID
- Description (clear, testable)
- Priority (Must/Should/Could/Won't)
- Rationale (why it matters)
- Acceptance criteria
I'm meeting with [stakeholder role] to gather requirements for [project/feature].
Generate 10-15 questions organized by:
- Current state (how things work today)
- Pain points (what's broken or frustrating)
- Desired outcome (what success looks like)
- Constraints (budget, time, technical limitations)
- Assumptions (what they're taking for granted)
Context: [brief project description]
Review these requirements and identify:
- Vague terms that need clarification
- Missing details
- Hidden assumptions
- Potential conflicts between requirements
- Questions I should ask before sign-off
Requirements:
[Paste requirements document]
Output as: Requirement | Issue | Recommended Clarifying Question
Tool: Claude (better at logical flows) + Mermaid for diagrams
Why: Can generate process descriptions and diagram syntax
Draft current-state and future-state workflows from this narrative.
Narrative:
[Paste description of how process works today and what needs to change]
Output as two numbered lists:
**Current State:**
1. Step 1
2. Step 2
...
**Future State:**
1. Step 1
2. Step 2
...
Highlight what's different in bold.
Convert this process description into Mermaid flowchart syntax.
Process:
[Paste process steps]
Include:
- Decision points (diamond shapes)
- Alternative paths
- Start/end points
- Actors/systems involved
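As a reference for what the output should look like, here is a minimal Mermaid flowchart with one decision point (the refund process is purely illustrative):

```mermaid
flowchart TD
    A([Refund requested]) --> B{Within 30 days?}
    B -- Yes --> C[Agent approves refund]
    B -- No --> D[Escalate to manager]
    C --> E([End])
    D --> E
```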
Create a swimlane process map for this workflow.
Workflow: [describe multi-party process]
Actors involved: [list roles/systems]
Output as:
- Lanes (one per actor)
- Steps in each lane
- Hand-offs between lanes
- Decision points
Format as structured text I can convert to a diagram.
Tool: Claude (precision with technical concepts)
Why: Better at structured data modeling and entity relationships
Extract entities, attributes, business rules, and exceptions from this policy text. Present as a data dictionary.
Policy text:
[Paste policy or business rules document]
Output format:
**Entity Name**
- Attributes: [list with data types]
- Business Rules: [list]
- Exceptions: [list]
- Relationships: [to other entities]
Define the entity-relationship model for this system.
System description:
[Describe what the system manages]
Key entities: [list if known, or let AI identify them]
For each entity, provide:
- Entity name
- Key attributes
- Relationships to other entities (1:1, 1:many, many:many)
- Sample data
Map the data flow for this process.
Process: [describe]
Data sources: [list systems/databases]
Data consumers: [who/what needs this data]
Output:
- Source → Transformation → Destination
- Data format at each stage
- Timing (real-time, batch, on-demand)
- Volumes (if known)
Tool: ChatGPT or Claude (both handle story format well)
Why: Either works; Claude is slightly more precise with Gherkin
Convert this epic into user stories with acceptance criteria using Gherkin format (Given/When/Then).
Epic:
[Paste epic description]
For each story:
- User story (As a [who], I need [what], so that [why])
- Acceptance criteria (3-5 scenarios in Gherkin)
- Dependencies (if any)
- Estimated complexity (S/M/L)
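For reference, one acceptance scenario in the Gherkin format this prompt requests might look like this (the feature and values are illustrative placeholders):

```gherkin
Feature: Password reset

  Scenario: User requests a reset link with a registered email
    Given a registered user with email "user@example.com"
    When they request a password reset
    Then a reset link is emailed to that address
    And the link expires after 24 hours
```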
These acceptance criteria are too vague. Rewrite them to be specific, testable, and unambiguous.
Current criteria:
[Paste existing criteria]
For each, provide:
- Revised criterion (clear, measurable)
- How to test it
- Edge cases to consider
Identify non-functional requirements for this feature that aren't explicitly stated but should be defined.
Feature: [describe]
Consider:
- Performance (response time, throughput)
- Security (authentication, authorization, data protection)
- Usability (accessibility, user experience standards)
- Reliability (uptime, error handling)
- Scalability (growth projections)
Output as NFRs with measurable criteria.
Tool: Claude (better analytical structure)
Why: Handles complex comparisons and reasoning
Create a gap analysis for this initiative.
Initiative: [describe what's being implemented]
Output format:
| Capability | Current State | Target State | Gap | Recommendation | Priority |
Capabilities to analyze:
[List key capabilities or paste requirement areas]
Analyze the impact of this proposed change.
Proposed change:
[Describe the change]
Assess impact on:
- Users (who's affected, how)
- Systems (technical dependencies)
- Processes (what workflows change)
- Data (what data is affected)
- Compliance (regulatory considerations)
For each area, provide:
- Impact level (High/Medium/Low)
- Specific changes needed
- Risks
- Mitigation approach
Identify all dependencies for this project.
Project scope:
[Describe project]
Map dependencies:
- Technical (systems, APIs, infrastructure)
- Data (data sources, integrations)
- People (teams, roles, approvals)
- Process (existing workflows that must be maintained)
- External (vendors, partners, regulations)
For each dependency:
- What it is
- Why it's critical
- Risk if not available
- Owner/responsible party
Tool: ChatGPT (better conversational tone)
Why: More natural for communication and stakeholder-facing docs
Create a RACI matrix for this initiative.
Initiative: [describe]
Roles involved: [list roles or let AI suggest based on initiative]
Activities:
1. [Activity]
2. [Activity]
3. [Activity]
...
Output as table:
| Activity | [Role 1] | [Role 2] | [Role 3] |
Where each cell contains R (Responsible), A (Accountable), C (Consulted), or I (Informed)
Prepare a requirements review meeting agenda and sign-off checklist.
Meeting purpose: [e.g., "sign-off on Phase 1 requirements"]
Attendees: [stakeholder roles]
Duration: [X minutes]
Include:
- Agenda with time boxes
- What to review
- Questions to resolve
- Sign-off checklist (what must be confirmed)
- Next steps
Create a stakeholder communication plan for this project.
Project: [describe]
Duration: [X months/sprints]
Stakeholders: [list groups: executives, end users, IT, compliance, etc.]
For each stakeholder group:
- Information needs (what they care about)
- Communication frequency (weekly, monthly, at milestones)
- Format (email, dashboard, meeting)
- Owner (who communicates)
Tool: Claude (more precise with measurable criteria)
Why: Better at defining testable, quantifiable success measures
Propose measurable KPIs that prove this feature solved the stated problem.
Problem: [describe business problem]
Feature: [describe solution]
Target users: [who benefits]
For each KPI, provide:
- Metric name
- What it measures
- How to calculate it
- Target value (what success looks like)
- Data source
- Measurement frequency
Define success criteria at multiple levels for this project.
Project: [describe]
Create criteria for:
- Business success (revenue, cost savings, efficiency)
- User success (adoption, satisfaction, task completion)
- Technical success (performance, stability, scalability)
Format:
| Level | Metric | Target | How Measured | Timeline |
Uncover user insights, accelerate design thinking, and elevate every experience.
Tool: Claude (better at qualitative synthesis)
Why: Handles messy research data, identifies patterns, creates structured personas
Create 3 user personas from this research data.
Research data:
- Analytics: [paste behavioral data]
- Interviews: [paste key quotes or themes]
- Support tickets: [paste common issues]
For each persona, include:
- Name and role
- Demographics (age, location, tech savviness)
- Goals (what they're trying to achieve)
- Behaviors (how they currently work)
- Pain points (frustrations and obstacles)
- Motivations (why they'd use our product)
- Quote (something they might say)
Turn these support tickets into UX insights and prioritized opportunities.
Tickets:
[Paste support ticket summaries]
Output:
- Top 3 themes (with frequency)
- UX problems causing these issues
- Severity (based on frequency and user impact)
- Recommended design improvements
- Priority (High/Med/Low)
Analyze this user journey and identify pain points at each stage.
Journey: [describe steps user takes]
Current experience: [describe what happens at each step]
For each stage:
- Pain point (what's frustrating)
- Evidence (data, quotes, observations)
- Impact (how it affects user success)
- Opportunity (how we could improve it)
Tool: ChatGPT (better conversational tone and creativity)
Why: More natural, human-friendly language for UI copy
Propose wireframe copy for this page:
Page purpose: [what the page does]
Target user: [persona]
User goal: [what they're trying to accomplish]
Provide:
- Headline (clear, benefit-focused)
- Subhead (adds context or urgency)
- Primary CTA (action-oriented button text)
- Secondary CTA (alternative action)
- Supporting copy (1-2 sentences if needed)
Rewrite microcopy for error states and empty states to be more human and clear.
Current copy:
- Error: [paste current error message]
- Empty state: [paste current empty state message]
Context: [what triggered the error or why state is empty]
Rewrite to:
- Explain what happened (in plain language)
- Tell user what to do next (actionable)
- Match tone: [friendly/professional/helpful]
Write onboarding copy for this flow.
Product: [brief description]
User goal: [what they want to accomplish]
Steps: [list onboarding steps]
For each step, provide:
- Title (what this step is about)
- Body (1-2 sentences, benefit-focused)
- CTA (button text)
- Skip option text (if applicable)
Keep it concise and motivating.
Tool: Claude (more structured test planning)
Why: Better at systematic test design and criteria
Generate 5 usability test tasks and success criteria for this flow.
Flow: [describe user flow being tested]
User goal: [what they're trying to achieve]
For each task:
- Task description (what to ask user to do)
- Starting point (where they begin)
- Success criteria (what "completion" looks like)
- What to observe (specific behaviors or friction points)
- Time expectation (how long it should take)
List accessibility issues likely present in this layout, referencing WCAG standards.
Layout description:
[Describe the UI: form, dashboard, navigation, etc.]
Elements present:
- [List: buttons, inputs, images, modals, etc.]
Check for:
- Color contrast (WCAG AA minimum)
- Keyboard navigation
- Screen reader compatibility
- Focus indicators
- Alt text for images
- Form labels and error messages
- Touch target sizes (mobile)
Output as: Issue | WCAG Standard | Severity | How to Fix
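Color contrast is one check you can verify yourself rather than trusting the AI's judgment. A minimal Python sketch of the WCAG 2.x contrast-ratio formula (relative luminance, then `(lighter + 0.05) / (darker + 0.05)`):

```python
def srgb_to_linear(c):
    # WCAG sRGB linearization for a single 0-255 channel
    c = c / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2):
    # Relative luminance per WCAG, lighter color over darker
    def lum(rgb):
        r, g, b = (srgb_to_linear(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    l1, l2 = sorted((lum(rgb1), lum(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```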
Create a design QA checklist for this component before handoff to dev.
Component: [describe: e.g., "checkout form" or "dashboard card"]
Check:
- Visual consistency (spacing, typography, colors match design system)
- Responsive behavior (mobile, tablet, desktop)
- Interactive states (hover, focus, active, disabled, error)
- Edge cases (long text, empty states, loading states)
- Accessibility (contrast, labels, keyboard navigation)
Format as a checklist with Yes / No / N/A columns.
Tool: Claude (better at logical structure) + Mermaid for diagrams
Why: Generates structured flows and diagram syntax
Map a happy path and 3 failure paths for this flow.
Flow: [describe: e.g., "user signup" or "checkout process"]
For each path:
- Steps (numbered)
- Decision points
- System responses
- Where it ends
Format:
**Happy Path:** [steps]
**Failure Path 1:** [what goes wrong + steps]
**Failure Path 2:** [what goes wrong + steps]
**Failure Path 3:** [what goes wrong + steps]
Propose a navigation structure for this app/site.
Content/features:
[List main sections, features, or content types]
User goals:
[What users are trying to accomplish]
Provide:
- Primary navigation (top-level items)
- Secondary navigation (grouped under primary)
- Suggested information architecture (how content is organized)
- Rationale (why this structure supports user goals)
Analyze these card sorting results and suggest an IA structure.
Card sorting data:
[Paste results: which items were grouped together, category names users suggested]
Output:
- Recommended IA structure (categories and subcategories)
- Items that were unclear or contested
- Alternative groupings to test
- Rationale for recommendations
Tool: Claude (more technical precision)
Why: Better at structured tokens and systematic design documentation
Suggest design tokens based on this brand guide. Output as a compact token table.
Brand guide:
- Colors: [list hex codes]
- Typography: [fonts, sizes, weights]
- Spacing: [any spacing rules]
- Other: [shadows, borders, etc.]
Output format:
| Token Name | Value | Usage |
Include:
- Color tokens (primary, secondary, neutrals, semantic)
- Typography tokens (font families, sizes, line heights)
- Spacing tokens (margin/padding scale)
- Shadow tokens (elevation levels)
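To show how a token table like this might land in code, here is a small CSS custom-properties sketch (names and values are placeholders, not from any real brand guide):

```css
:root {
  /* Color tokens */
  --color-primary: #0052cc;
  --color-text-muted: #6b7280;
  /* Typography tokens */
  --font-size-body: 16px;
  --line-height-body: 1.5;
  /* Spacing scale */
  --space-sm: 8px;
  --space-md: 16px;
  /* Elevation */
  --shadow-raised: 0 1px 3px rgba(0, 0, 0, 0.2);
}
```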
Document this component for our pattern library.
Component: [name, e.g., "Alert Banner"]
Purpose: [what it does]
Variants: [list: success, warning, error, info]
For each variant, provide:
- When to use it
- Visual specs (colors, icons)
- Copy guidelines
- Accessibility requirements
- Code snippet (HTML structure)
Propose a responsive breakpoint strategy for this design.
Design type: [e.g., "dashboard," "marketing site," "mobile app"]
Content density: [high/medium/low]
Key components: [list components that need to adapt]
Recommend:
- Breakpoints (px values)
- What changes at each breakpoint
- Mobile-first or desktop-first approach
- Rationale
Tool: ChatGPT (more creative and exploratory)
Why: Better at brainstorming and generating diverse ideas
Generate 3 different design concepts for this feature.
Feature: [describe]
User need: [what problem it solves]
Constraints: [technical, business, or design constraints]
For each concept:
- Name/theme
- Core interaction pattern
- Key differentiator
- Pros
- Cons
- Best for (which user segment or scenario)
Analyze this design pattern I found and suggest how to adapt it for our use case.
Pattern: [describe or paste screenshot description]
Source: [where you found it]
Our use case: [describe our feature/flow]
Our constraints: [list any limitations]
Provide:
- What works about this pattern
- What doesn't fit our context
- How to adapt it
- Risks or considerations
Critique this design using structured criteria.
Design: [describe or paste visual description]
Goals: [what the design is trying to achieve]
Evaluate on:
- Clarity (is it immediately understandable?)
- Usability (can users accomplish their goals?)
- Accessibility (does it work for all users?)
- Visual hierarchy (does it guide attention correctly?)
- Brand alignment (does it feel like our product?)
- Innovation vs. convention (right balance?)
For each criterion: Score (1-5) + Explanation + Recommendation
Tool: Claude (better at structured experiment design)
Why: More systematic in hypothesis and metric definition
Create an A/B test experiment backlog with hypothesis, metric, and test design.
Feature/flow: [what you're testing]
Problem: [what's not working]
For each experiment:
- Hypothesis (We believe [change] will result in [outcome] for [user segment])
- Metric (how to measure success)
- Variants (A: control, B: treatment)
- Sample size needed (if known)
- Duration (how long to run)
- Success criteria (what result = winner)
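If the AI leaves "sample size needed" blank, the standard two-proportion z-test approximation is easy to compute yourself. A Python sketch (the 10% → 12% conversion numbers are illustrative):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_control, p_treatment, alpha=0.05, power=0.80):
    # Standard two-sided z-test approximation for comparing two proportions
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_treatment - p_control)
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Detecting a lift from 10% to 12% conversion at 80% power
# needs roughly 3,800+ users per variant.
print(sample_size_per_variant(0.10, 0.12))
```

Small expected lifts drive the required sample up quadratically, which is why many A/B tests on low-traffic flows never reach significance.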
Create a validation plan for this design concept before building it.
Concept: [describe]
Assumptions we're making: [list]
Recommend:
- Validation methods (prototype test, survey, interview, analytics review)
- Key questions to answer
- Success criteria (what would prove this is the right direction)
- Resources needed
- Timeline
Plan smarter, report faster, and manage risk proactively.
Tool: Claude (better at structured planning)
Why: More systematic, handles complex dependencies
Generate a project plan outline including key milestones, dependencies, and risks.
Project: [name and brief description]
Duration: [estimated timeline]
Team: [size and roles]
Goal: [what success looks like]
Include:
- Phases (major stages)
- Milestones (with target dates)
- Key deliverables
- Dependencies (internal and external)
- Risks (high-level)
- Resource needs
Create a work breakdown structure for this project.
Project: [describe]
Major deliverables: [list]
Break down each deliverable into:
- Level 1: Major phases
- Level 2: Work packages
- Level 3: Tasks (actionable items)
Format as indented list with estimated effort for each task.
Draft a project charter for stakeholder approval.
Project: [name]
Business problem: [what we're solving]
Proposed solution: [high-level approach]
Sponsor: [who's funding/approving]
Include:
- Project objectives (SMART goals)
- Scope (in-scope and out-of-scope)
- Success criteria
- Key stakeholders and roles
- High-level timeline
- Budget estimate (if known)
- Approval signature section
Tool: ChatGPT (better conversational summaries)
Why: Creates stakeholder-friendly updates quickly
Summarize progress across these teams into an executive update.
Team 1: [status and key accomplishments]
Team 2: [status and key accomplishments]
Team 3: [status and key accomplishments]
Format as:
- Overall status (Green/Yellow/Red with one-sentence explanation)
- Key accomplishments this period
- Upcoming milestones
- Risks or blockers needing attention
- Ask (what you need from leadership)
Keep it to 4-5 bullet points max.
Create a weekly status report template for this project.
Project: [name]
Stakeholders: [who receives this]
Key focus areas: [what they care about]
Include sections for:
- Progress this week (accomplishments)
- Plan for next week
- Blockers/risks
- Metrics (if applicable)
- Decisions needed
Format as fillable template.
Create a milestone tracking table for this project.
Project phases: [list phases]
Key milestones: [list or let AI suggest based on phases]
Output format:
| Milestone | Target Date | Owner | Status | Dependencies | Notes |
Include realistic spacing between milestones based on typical project timelines.
Tool: Claude (more systematic risk analysis)
Why: Better at structured risk assessment and mitigation planning
Draft a risk register table with probability, impact, and mitigation columns.
Project: [describe]
Known concerns: [list if any]
For each risk, provide:
- Risk ID
- Description (what could go wrong)
- Category (technical, resource, schedule, external)
- Probability (Low/Med/High)
- Impact (Low/Med/High)
- Risk score (Probability × Impact)
- Mitigation plan (how to reduce or respond)
- Owner (who monitors this)
Prioritize by risk score.
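The Probability × Impact scoring the prompt asks for reduces to simple arithmetic you can sanity-check. A minimal sketch with a 1-3 scale (the risk IDs are placeholders):

```python
# Map qualitative ratings to 1-3 and rank risks by Probability x Impact
LEVELS = {"Low": 1, "Med": 2, "High": 3}

def ranked_risks(risks):
    # risks: list of (id, probability, impact) with Low/Med/High ratings
    scored = [(rid, LEVELS[p] * LEVELS[i]) for rid, p, i in risks]
    return sorted(scored, key=lambda r: r[1], reverse=True)

print(ranked_risks([("R1", "Low", "High"), ("R2", "High", "High"), ("R3", "Med", "Med")]))
# R2 scores 9 and ranks first, then R3 (4), then R1 (3)
```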
Create an issue log template for tracking project problems.
Columns needed:
- Issue ID
- Description
- Raised by
- Date raised
- Priority (P1/P2/P3)
- Status (Open/In Progress/Resolved/Closed)
- Owner
- Resolution plan
- Target resolution date
- Actual resolution date
Add 2-3 example issues to demonstrate usage.
Generate a RAID log (Risks, Assumptions, Issues, Dependencies) for this project.
Project: [describe]
For each category:
**Risks:** [potential problems]
**Assumptions:** [what we're assuming is true]
**Issues:** [current problems blocking progress]
**Dependencies:** [external factors we rely on]
Format as table with: Item | Description | Impact | Owner | Status
Tool: ChatGPT with Code Interpreter OR Claude
Why: Can calculate capacity, utilization, and allocation
Create a resource allocation plan for a 3-sprint initiative.
Initiative: [describe]
Team members: [list roles and names]
Sprint length: [X weeks]
For each sprint:
- Who is allocated
- % capacity (e.g., 50% time, 100% time)
- What they're working on
- Any conflicts or constraints
Output as table:
| Sprint | Team Member | Allocation % | Work Assignment | Notes |
Calculate if we have enough capacity for this project.
Project workload:
- [Task/phase]: [estimated hours/days]
- [Task/phase]: [estimated hours/days]
Total: [sum]
Team capacity:
- [Team member]: [available hours/days over project duration]
- [Team member]: [available hours/days over project duration]
Total: [sum]
Analysis:
- Are we over/under capacity?
- By how much?
- Recommendations (hire, descope, extend timeline)
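The capacity math here is straightforward enough to compute directly before asking the AI for recommendations. A minimal sketch (task names and hours are illustrative):

```python
def capacity_check(workload_hours, capacity_hours):
    # Sum both sides and report the surplus or shortfall
    demand = sum(workload_hours.values())
    supply = sum(capacity_hours.values())
    delta = supply - demand
    status = "under capacity" if delta < 0 else "enough capacity"
    return demand, supply, delta, status

demand, supply, delta, status = capacity_check(
    {"Build": 120, "Test": 60, "Deploy": 20},
    {"Alice": 90, "Bob": 80},
)
print(status, delta)  # under capacity -30
```

Feeding the computed shortfall into the prompt ("we are 30 hours short") gets more concrete recommendations than asking the AI to do the arithmetic itself.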
Based on this velocity data, forecast project completion.
Velocity (last 6 sprints): [list story points or tasks completed per sprint]
Remaining work: [X story points or tasks]
Calculate:
- Average velocity
- Estimated sprints remaining
- Projected completion date
- Confidence level (based on velocity consistency)
- Risk factors (if velocity is volatile)
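The forecast this prompt requests can also be computed deterministically; a Python sketch using average velocity and the coefficient of variation as a rough confidence signal (the thresholds 0.15 and 0.30 are illustrative, not an industry standard):

```python
import math
from statistics import mean, stdev

def forecast(velocities, remaining_points):
    # Average velocity drives the sprint estimate; the coefficient of
    # variation (stdev / mean) is a rough volatility signal
    avg = mean(velocities)
    cv = stdev(velocities) / avg
    sprints = math.ceil(remaining_points / avg)
    confidence = "high" if cv < 0.15 else "medium" if cv < 0.30 else "low"
    return avg, sprints, confidence

avg, sprints, confidence = forecast([30, 34, 28, 32, 30, 26], remaining_points=120)
print(avg, sprints, confidence)  # 30 4 high
```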
Tool: Claude (better at logical analysis)
Why: More precise with dependency logic and critical path identification
Identify all task dependencies and suggest sequencing.
Tasks:
1. [Task description]
2. [Task description]
3. [Task description]
...
For each task:
- Depends on (which tasks must finish first)
- Blocks (which tasks can't start until this finishes)
- Can run in parallel with (which tasks are independent)
Output as dependency chain and suggested order.
Identify the critical path based on this milestone list.
Milestones and estimated durations:
1. [Milestone] - [X days/weeks]
2. [Milestone] - [X days/weeks]
3. [Milestone] - [X days/weeks]
...
Dependencies: [describe which milestones depend on others]
Output:
- Critical path (sequence that determines project duration)
- Total project duration
- Slack time (where delays won't impact end date)
- High-risk tasks (no slack, must finish on time)
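Under the hood, the critical path is the longest path through the dependency graph. A minimal Python sketch you can use to verify the AI's answer (milestone names and durations are illustrative):

```python
def critical_path(durations, deps):
    # durations: {milestone: days}; deps: {milestone: [prerequisites]}
    memo = {}

    def finish(m):
        # Earliest finish = own duration + latest prerequisite finish
        if m not in memo:
            memo[m] = durations[m] + max((finish(p) for p in deps.get(m, [])), default=0)
        return memo[m]

    end = max(durations, key=finish)          # milestone that finishes last
    path = [end]
    while deps.get(path[-1]):                 # walk back through slowest prereqs
        path.append(max(deps[path[-1]], key=finish))
    path.reverse()
    return path, finish(end)

path, total = critical_path(
    {"Design": 5, "Build": 10, "Test": 4, "Docs": 3},
    {"Build": ["Design"], "Test": ["Build"], "Docs": ["Design"]},
)
print(path, total)  # ['Design', 'Build', 'Test'] 19
```

Docs has 11 days of slack here (it finishes on day 8 against a day-19 end date), which is exactly the slack analysis the prompt asks the AI to surface.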
Analyze this project timeline and identify schedule compression options.
Current timeline: [X weeks/months]
Target timeline: [Y weeks/months - shorter]
Tasks and durations:
[List tasks with current estimates]
Suggest:
- Fast-tracking (what can run in parallel that's currently sequential)
- Crashing (where adding resources would shorten duration)
- Descoping (what could be cut or deferred)
- Trade-offs for each option
Tool: ChatGPT (better conversational tone)
Why: More natural for communication plans and meeting agendas
Generate a stakeholder communication cadence plan.
Project: [name]
Duration: [X months]
Stakeholders:
- [Group 1]: [e.g., "Executive sponsors"]
- [Group 2]: [e.g., "Project team"]
- [Group 3]: [e.g., "End users"]
For each group:
- What they need to know
- How often to communicate (daily/weekly/monthly/at milestones)
- Format (email, meeting, dashboard, Slack)
- Who owns it
Create a project kickoff meeting agenda.
Project: [name and brief description]
Duration: [X hours]
Attendees: [roles]
Include:
- Welcome and introductions (5 min)
- Project overview (why we're doing this) (10 min)
- Goals and success criteria (10 min)
- Scope and deliverables (15 min)
- Timeline and milestones (10 min)
- Roles and responsibilities (10 min)
- Communication and collaboration norms (10 min)
- Risks and dependencies (10 min)
- Q&A (remaining time)
Create a change request template for scope changes.
Include fields for:
- Change request ID
- Requested by
- Date submitted
- Description of change
- Reason/justification
- Impact analysis (scope, schedule, budget, resources)
- Alternatives considered
- Recommendation (approve/reject/defer)
- Approver signature
- Decision and date
Tool: Claude (better at structured retrospectives)
Why: More systematic in lessons learned and handoff documentation
Write a project closure summary highlighting lessons learned and next steps.
Project: [name]
Duration: [actual vs. planned]
Budget: [actual vs. planned]
Goals: [achieved/partially achieved/not achieved]
Include:
- What was delivered
- What went well
- What didn't go well
- Lessons learned (for future projects)
- Outstanding items (what's handed off or deferred)
- Recommendations
Design a lessons learned session agenda.
Project: [name]
Team: [size]
Session length: [X minutes]
Include:
- Retrospective format (not just "what went well/badly")
- Questions to ask
- How to capture insights
- How to turn lessons into actionable improvements
- Who owns follow-up
Create a project handoff document for the operations/support team.
Project: [name]
What was delivered: [list deliverables]
Include:
- Summary (what this project accomplished)
- Key features/components
- How it works (high-level)
- Known issues or limitations
- Support contacts
- Documentation links
- Next phase or future enhancements
Strengthen leadership visibility, team growth, and delivery excellence.
Tool: ChatGPT with Code Interpreter OR Claude with Artifacts
Why: Processes sprint data, calculates trends, generates actionable insights
Analyze these sprint metrics and create a coaching plan to improve predictability without overtime.
Last 6 sprints:
- Sprint 1: Planned [X] points, Completed [Y] points, Spillover [Z] stories
- Sprint 2: Planned [X] points, Completed [Y] points, Spillover [Z] stories
- Sprint 3: Planned [X] points, Completed [Y] points, Spillover [Z] stories
- Sprint 4: Planned [X] points, Completed [Y] points, Spillover [Z] stories
- Sprint 5: Planned [X] points, Completed [Y] points, Spillover [Z] stories
- Sprint 6: Planned [X] points, Completed [Y] points, Spillover [Z] stories
Team context: [size, seniority mix, any recent changes]
Provide:
- Velocity trend analysis
- Predictability score (consistency)
- Root cause hypotheses for volatility
- 3 coaching interventions (specific actions I can take)
- Success metrics (how to measure improvement)
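One simple way to quantify the predictability score yourself is the average "say/do" ratio across sprints, sketched below (the sprint numbers are illustrative):

```python
from statistics import mean

def predictability_score(sprints):
    # sprints: list of (planned_points, completed_points) pairs.
    # Say/do ratio: what fraction of planned work actually shipped.
    say_do = [completed / planned for planned, completed in sprints]
    return round(mean(say_do) * 100)

print(predictability_score([(40, 32), (40, 40), (45, 36)]))  # 87
```

A team that plans 40 points and delivers 40 every sprint scores 100; chronic over-commitment shows up as a score well below 85.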
Create a team health metrics summary from this data.
Metrics:
- Velocity: [trend over last 6 sprints]
- Cycle time: [average days from start to done]
- Code review time: [average time PRs wait for review]
- Bug escape rate: [bugs found in production vs. caught in dev]
- Deployment frequency: [how often we ship]
- On-call incidents: [number and severity]
- Team satisfaction: [survey score or sentiment]
Output:
- Health score (1-10) with rationale
- Top 3 strengths (what's working)
- Top 3 concerns (what needs attention)
- Recommended actions
Analyze our capacity vs. incoming demand.
Team capacity:
- Team size: [X engineers]
- Available points/week: [Y]
- Planned time off: [upcoming absences]
Demand:
- Product backlog: [X points]
- Tech debt: [Y points]
- Support/maintenance: [Z hours/week]
- Unplanned work: [estimate %]
Provide:
- Can we meet demand with current capacity?
- If not, by how much are we short?
- Options (hire, reduce scope, defer, improve efficiency)
- Recommendation with reasoning
Tool: Claude (better at nuanced people management advice)
Why: Handles complex interpersonal situations with depth
Draft a 30-60-90 day plan for a new [role] joining this team.
Role: [e.g., "Senior Backend Engineer"]
Team context: [size, tech stack, current projects]
New hire background: [seniority, specialization if known]
For each phase:
**30 days (Learn):**
- Goals
- Key activities (setup, training, shadowing)
- First contributions (small, low-risk)
- Success criteria
**60 days (Contribute):**
- Goals
- Ownership areas
- Expected velocity
- Success criteria
**90 days (Lead):**
- Goals
- Leadership expectations
- Full autonomy areas
- Success criteria
Generate a 1:1 agenda template focused on growth, delivery, and well-being.
Engineer: [name and context if relevant]
Frequency: [weekly/biweekly]
Include sections for:
- Check-in (how are they feeling?)
- Recent work (wins, challenges, blockers)
- Growth (skills, career goals, learning)
- Feedback (both directions)
- Action items from last 1:1
- Upcoming priorities
Add suggested questions for each section that encourage open dialogue.
Identify skill gaps from this team matrix and propose a pairing or mentoring plan.
Team skills matrix:
| Engineer | Backend | Frontend | DevOps | System Design | Leadership |
| Engineer A | Strong | Weak | Medium | Weak | Medium |
| Engineer B | Medium | Strong | Weak | Medium | Weak |
| Engineer C | Strong | Medium | Strong | Strong | Strong |
...
Team needs: [upcoming projects or strategic direction]
Provide:
- Critical skill gaps (what's blocking us)
- Development opportunities (who can level up where)
- Pairing suggestions (who should work together)
- Mentoring assignments (senior → junior knowledge transfer)
- External training needs (what we can't develop internally)
Assess this engineer's readiness for promotion to [next level].
Current level: [e.g., "Mid-level Engineer"]
Target level: [e.g., "Senior Engineer"]
Time in role: [X months/years]
Recent work:
- [Project 1 and impact]
- [Project 2 and impact]
- [Technical contributions]
- [Team contributions]
Promotion criteria for target level:
[Paste criteria or let AI suggest based on industry standards]
Provide:
- Strengths (where they meet/exceed criteria)
- Gaps (what's not yet demonstrated)
- Development plan (how to close gaps)
- Timeline (realistic path to promotion)
- What to document for promotion packet
Tool: Claude (better at technical depth and architecture thinking)
Why: More precise with technical decisions and trade-offs
Design a lightweight tech debt policy.
Include:
- Definition (what counts as tech debt)
- Intake process (how engineers surface it)
- Prioritization (how we decide what to fix)
- Allocation (% of sprint capacity reserved for tech debt)
- Visibility (how we track and communicate it)
- Approval process (when we need to say no)
Keep it under 1 page, actionable enough for the team to adopt immediately.
Summarize these production incidents into key themes and prevention actions.
Incidents (last quarter):
1. [Date] - [Severity] - [Brief description] - [Root cause]
2. [Date] - [Severity] - [Brief description] - [Root cause]
3. [Date] - [Severity] - [Brief description] - [Root cause]
...
Analyze:
- Patterns (are there common themes?)
- Root causes (systemic issues vs. one-offs)
- Prevention opportunities (monitoring, testing, process)
- Recommended actions (prioritized)
- Who owns each action
Write an ADR for this technical decision.
Decision: [what we decided]
Context: [why this decision was needed, what problem it solves]
Options considered:
1. [Option A] - pros/cons
2. [Option B] - pros/cons
3. [Option C] - pros/cons
Chosen option: [which one and why]
Provide ADR in standard format:
- Title
- Status (proposed/accepted/deprecated)
- Context
- Decision
- Consequences (what this enables and what it constrains)
- Alternatives considered
Tool: Claude (better strategic thinking)
Why: Connects technical work to business outcomes more effectively
Translate these roadmap goals into measurable engineering OKRs.
Roadmap goals:
1. [Goal: e.g., "Improve platform reliability"]
2. [Goal: e.g., "Reduce time to market"]
3. [Goal: e.g., "Scale to 10x users"]
For each goal, create:
- Objective (qualitative, inspiring)
- 3-4 Key Results (quantitative, measurable)
- How to track (data source, frequency)
- Owner
Help me decide how to balance feature work vs. tech debt this quarter.
Context:
- Product backlog: [X points of features]
- Tech debt backlog: [Y points]
- Team capacity: [Z points/sprint × number of sprints]
- Business pressure: [High/Medium/Low for new features]
- Technical health: [assessment: stable/degrading/fragile]
Recommend:
- Allocation split (X% features, Y% tech debt)
- Rationale (why this balance)
- Risks of this approach
- How to communicate to stakeholders
Map engineering dependencies for this quarter's roadmap.
Planned work:
1. [Initiative/feature]
2. [Initiative/feature]
3. [Initiative/feature]
For each, identify:
- Internal dependencies (other teams, shared services)
- External dependencies (vendors, partners, APIs)
- Risk level (Low/Med/High)
- Mitigation plan (if high risk)
- Owner (who coordinates)
Output as dependency matrix.
Tool: ChatGPT or Claude
Why: Both handle process improvement well
Create a "meeting diet" plan: which meetings to cut, merge, or make async.
Current meetings:
1. [Meeting name] - [Frequency] - [Duration] - [Attendees] - [Purpose]
2. [Meeting name] - [Frequency] - [Duration] - [Attendees] - [Purpose]
3. [Meeting name] - [Frequency] - [Duration] - [Attendees] - [Purpose]
...
Team size: [X]
Time spent in meetings: [Y hours/week per person]
Recommend:
- Keep (necessary and efficient)
- Cut (low value, can be eliminated)
- Merge (consolidate with other meetings)
- Make async (can be handled via doc or Slack)
- Reduce frequency (weekly → biweekly)
Calculate time saved.
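The "time saved" calculation is just person-hours (duration × frequency × attendees) summed before and after. A minimal sketch with illustrative meetings:

```python
def meeting_cost(meetings):
    # meetings: list of (duration_hours, times_per_week, attendees)
    return sum(d * f * a for d, f, a in meetings)

current = meeting_cost([(1.0, 5, 8), (1.0, 1, 8), (0.5, 2, 4)])    # long standup, planning, two syncs
proposed = meeting_cost([(0.25, 5, 8), (1.0, 1, 8), (0.5, 1, 4)])  # 15-min standup, one sync
print(current - proposed)  # 32.0 person-hours saved per week
```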
Our code review process is slow. Suggest improvements.
Current state:
- Average PR review time: [X hours/days]
- PRs waiting for review: [Y on average]
- Team size: [Z engineers]
- Review bottlenecks: [describe: e.g., "only 2 people approve PRs"]
Suggest:
- Process changes (how to streamline)
- Ownership model (who reviews what)
- Tooling (automation to reduce manual review)
- Metrics to track (measure improvement)
- Implementation plan (how to roll out changes)
Draft a communication for shifting from output metrics to outcome metrics.
Current metrics (output-focused):
- [Metric 1: e.g., "story points completed"]
- [Metric 2: e.g., "features shipped"]
New metrics (outcome-focused):
- [Metric 1: e.g., "user activation rate"]
- [Metric 2: e.g., "system uptime"]
Context for team: [why we're making this change]
Write a communication that:
- Explains the shift (why outcomes matter more)
- Clarifies what won't change (we still value delivery)
- Shows how their work connects to outcomes
- Addresses likely concerns
- Invites feedback
Keep it under 3 paragraphs, encouraging tone.
Tool: ChatGPT (better conversational tone)
Why: More natural for sensitive communications and updates
Help me communicate this setback to stakeholders.
Setback: [describe what went wrong or is delayed]
Impact: [who/what is affected]
Root cause: [what happened]
Mitigation: [what we're doing about it]
Stakeholder: [product leadership/executive/customer]
Draft a message that:
- States the problem clearly (no sugar-coating)
- Explains root cause (without blaming)
- Outlines action plan (what we're doing)
- Sets new expectations (revised timeline/scope)
- Offers accountability (what I'm owning)
Tone: [professional/apologetic/matter-of-fact]
Create an update for my manager on team status.
Update areas:
- Team health: [Green/Yellow/Red + why]
- Delivery: [on track/at risk/behind + details]
- People: [hiring, attrition, performance issues]
- Technical: [key decisions, tech debt, incidents]
Include:
- What's going well (1-2 items)
- What needs attention (1-2 items)
- What I need from them (support, decisions, resources)
Format as 4-5 concise bullet points.
Help me document this performance issue for HR records.
Engineer: [role, tenure]
Issue: [specific behavior or performance gap]
Impact: [how it affects team/projects]
Previous conversations: [what's been discussed, when]
Create documentation that includes:
- Objective description of issue (facts, not opinions)
- Specific examples (dates, situations)
- Expected behavior vs. actual behavior
- Previous feedback given
- Action plan (what needs to change, by when)
- Consequences if no improvement
- Support being offered (coaching, resources)
Keep it factual, fair, and legally sound.
Shape architecture, strategy, and innovation through intelligent foresight.
Tool: Claude (superior technical depth and strategic thinking)
Why: Better at complex technical reasoning, architecture trade-offs, long-term thinking
Write a decision memo: build vs. buy for this capability.
Capability needed: [describe]
Business context: [why we need this, strategic importance]
Current situation: [what we have now, pain points]
Compare:
**Build:**
- Cost (development + ongoing maintenance)
- Time to market
- Strategic control (how critical to differentiation?)
- Technical debt risk
- Team skill fit
**Buy:**
- Cost (licensing + integration + maintenance)
- Time to market
- Vendor lock-in risk
- Flexibility (can we customize?)
- Support & reliability
**Hybrid:**
- [Any buy + customize scenarios]
Recommendation: [Build/Buy/Hybrid] with clear reasoning.
Output as executive-ready memo (1-2 pages).
Propose a target architecture for this scale scenario, outlining trade-offs and migration steps.
Current architecture:
[Describe: monolith/microservices, databases, infrastructure]
Scale requirements:
- Current: [X users, Y requests/sec, Z data volume]
- Target (3 years): [10X users, 10Y requests/sec, 10Z data volume]
Constraints:
- Budget: [range if known]
- Team size: [current and planned]
- Timeline: [how long for migration]
Provide:
- Target architecture diagram (describe layers, components, data flow)
- Key architectural decisions (and why)
- Migration phases (step-by-step from current to target)
- Trade-offs (cost, complexity, risk)
- Prerequisites (team skills, infrastructure, tools)
Design a platform strategy that defines paved roads, golden paths, and service ownership standards.
Context:
- Number of teams: [X]
- Services/repos: [Y]
- Tech stack diversity: [High/Medium/Low]
- Current pain points: [deployment complexity, inconsistent practices, etc.]
Define:
- **Paved roads** (recommended, well-supported paths: languages, frameworks, infrastructure)
- **Golden paths** (default patterns for common tasks: new service, CI/CD, observability)
- **Service ownership** (what teams own: code, deployment, monitoring, on-call)
- **Platform team charter** (what platform provides vs. what teams own)
- **Governance** (how we decide what's on the paved road, how we deprecate)
Output as 2-3 page strategy doc.
Tool: Claude (better long-term strategic thinking)
Why: Handles complex trade-offs, thinks multi-dimensionally
Generate a tech radar entry for adopting this technology/tool.
Technology: [name and brief description]
Category: [languages/frameworks/tools/platforms/techniques]
Current status: [not on our radar / experimenting / using in some teams / standard]
Provide:
- **Assessment** (Adopt/Trial/Assess/Hold)
- **Rationale** (why this assessment, what problem it solves)
- **Evaluation plan** (how to test/pilot)
- **Success criteria** (what would prove it's valuable)
- **Exit criteria** (what would make us abandon it)
- **Risk factors** (what could go wrong)
- **Timeline** (when to revisit this assessment)
Outline a 12-month modernization plan for this legacy stack, split by quarter.
Legacy stack:
[Describe current technology, architecture, technical debt]
Modernization goals:
[What you're trying to achieve: scalability, maintainability, performance, cost reduction]
Constraints:
- Must maintain production stability
- Team capacity: [X engineers available for modernization]
- Budget: [range if known]
For each quarter:
- **Focus area** (what gets modernized)
- **Key deliverables** (specific outcomes)
- **Dependencies** (what must be done first)
- **Risk mitigation** (how to de-risk)
- **Success metrics** (how to measure progress)
Prioritize by: business value, risk reduction, dependency order.
Define 5 engineering principles that guide decision-making: concise, enforceable, and timeless.
Company context:
- Industry: [e.g., fintech, SaaS, e-commerce]
- Stage: [startup/growth/enterprise]
- Engineering culture: [describe values, what matters most]
For each principle:
- **Principle statement** (one sentence)
- **What it means** (1-2 sentences explaining intent)
- **Example decision** (how this principle guided a real choice)
- **Anti-pattern** (what violates this principle)
Examples of principles:
- "Simple beats clever"
- "Security is not negotiable"
- "Measure twice, cut once"
Make them specific to our context, not generic platitudes.
Tool: Claude (better at assessing emerging tech impact)
Why: More systematic in evaluating nascent technologies
Create an AI governance checklist covering data integrity, model risk, auditing, and safeguards.
Context:
- We're building/integrating AI features: [describe use cases]
- Data used: [type, sensitivity, source]
- Regulatory environment: [any compliance requirements]
Checklist categories:
**Data Integrity:**
- [Data quality, bias in training data, data lineage]
**Model Risk:**
- [Model accuracy, failure modes, drift monitoring]
**Auditing:**
- [Explainability, decision logs, human oversight]
**Safeguards:**
- [Fallback mechanisms, rate limiting, content filtering]
**Ethics & Fairness:**
- [Bias testing, fairness metrics, diverse testing]
**Privacy & Security:**
- [Data handling, PII protection, model security]
For each item, include: What to check | Who owns it | Frequency | Documentation required
Generate a strategic foresight brief on emerging tech that could reshape your product in 3-5 years.
Product/industry: [describe]
Current tech stack: [high-level]
Emerging technologies to assess:
[List: AI advances, quantum computing, edge computing, AR/VR, blockchain, etc., or ask AI to identify relevant ones]
For each technology:
- **What it is** (brief explanation)
- **Maturity timeline** (when it becomes practical)
- **Potential impact on our product** (threat or opportunity)
- **Strategic response** (ignore, monitor, experiment, invest)
- **Action items** (what to do now)
Focus on signal vs. noise: what actually matters vs. hype.
Simulate the impact of adopting AI for [use case]. Assess ROI, risks, and implementation effort.
Use case: [e.g., "AI-powered customer support," "code generation for developers"]
Current state:
- How this is done today
- Cost (time, money, resources)
- Pain points
AI solution:
- Proposed approach (which AI tools/models)
- Expected benefits (time saved, quality improvement, cost reduction)
- Implementation effort (engineering time, infrastructure)
- Ongoing costs (API costs, maintenance, monitoring)
Provide:
- Cost-benefit analysis (quantify where possible)
- Break-even timeline (when benefits exceed costs)
- Risks (accuracy, reliability, bias, vendor dependency)
- Recommendation (go/no-go/pilot)
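The break-even timeline the prompt asks for is simple arithmetic: cumulative benefit versus cumulative cost over time. A minimal sketch, with all dollar figures as hypothetical placeholders:

```python
# Hypothetical break-even sketch for an AI adoption use case.
# All figures below are illustrative placeholders, not real estimates.

def break_even_month(upfront_cost: float, monthly_cost: float,
                     monthly_benefit: float):
    """Return the first month where cumulative benefit covers
    cumulative cost, or None if it never does within 5 years."""
    cum_cost = upfront_cost
    cum_benefit = 0.0
    for month in range(1, 61):
        cum_cost += monthly_cost
        cum_benefit += monthly_benefit
        if cum_benefit >= cum_cost:
            return month
    return None

# Example: $60k build effort, $2k/month API + maintenance costs,
# $8k/month of engineer time saved.
print(break_even_month(60_000, 2_000, 8_000))  # 10
```

Feeding the AI concrete numbers like these tends to produce a sharper go/no-go recommendation than qualitative descriptions alone.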
Tool: ChatGPT with Code Interpreter OR Claude
Why: Handles multi-dimensional data analysis across products
Draft an executive update on engineering health: include KPIs, risks, and recommended actions.
KPIs:
- Deployment frequency: [X per week]
- Lead time: [Y hours from commit to production]
- Change failure rate: [Z%]
- MTTR: [mean time to recovery]
- Velocity: [trend]
- Team satisfaction: [score]
Context:
- Team size: [X engineers]
- Major initiatives: [list]
- Recent incidents: [if any]
Format as:
- **Summary** (Green/Yellow/Red with one-sentence rationale)
- **Key metrics** (with trends)
- **Risks** (top 3 concerns)
- **Recommended actions** (what we should do)
- **Investment needs** (if requesting budget/headcount)
Keep it under 1 page, in executive-friendly language.
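The Green/Yellow/Red summary can be grounded in explicit thresholds before you hand the metrics to the AI. A minimal sketch: the cutoffs below are assumptions for illustration, not official DORA performance bands.

```python
# Illustrative Green/Yellow/Red rollup for DORA-style KPIs.
# Thresholds are assumptions for this sketch, not official DORA cutoffs.

def kpi_status(deploys_per_week: float, lead_time_hours: float,
               change_failure_pct: float, mttr_hours: float) -> str:
    flags = 0
    if deploys_per_week < 1:       # slower than weekly releases
        flags += 1
    if lead_time_hours > 168:      # commit-to-prod takes over a week
        flags += 1
    if change_failure_pct > 15:    # more than ~1 in 7 deploys fail
        flags += 1
    if mttr_hours > 24:            # recovery takes over a day
        flags += 1
    if flags == 0:
        return "Green"
    return "Yellow" if flags <= 2 else "Red"

print(kpi_status(5, 24, 10, 2))      # Green
print(kpi_status(0.5, 200, 20, 48))  # Red
```

Stating your own thresholds in the prompt keeps the AI's one-sentence rationale tied to your standards rather than its guesses.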
Create a portfolio-level risk register and heatmap across all products with mitigation priorities.
Products/initiatives:
1. [Product A] - [brief description, strategic importance]
2. [Product B] - [brief description, strategic importance]
3. [Initiative C] - [brief description, strategic importance]
For each, identify:
- **Technical risks** (architecture, tech debt, scalability)
- **Resource risks** (team capacity, key person dependency)
- **External risks** (vendor, partner, market)
Output as:
| Product/Initiative | Risk | Likelihood (H/M/L) | Impact (H/M/L) | Mitigation Priority | Owner |
Create a risk heatmap: High likelihood + High impact = Priority 1.
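The likelihood-times-impact prioritization behind the heatmap can be sketched as a simple scoring rule. The products, risks, and score mapping below are hypothetical placeholders:

```python
# Sketch of the likelihood x impact prioritization for the risk register.
# Products, risks, and the 3/2/1 score mapping are illustrative assumptions.

SCORE = {"H": 3, "M": 2, "L": 1}

def priority(likelihood: str, impact: str) -> int:
    """Priority 1 = act now (High/High); larger numbers = lower urgency."""
    s = SCORE[likelihood] * SCORE[impact]
    if s >= 9:
        return 1   # High likelihood + High impact
    if s >= 4:
        return 2
    return 3

risks = [
    ("Product A", "Key person dependency", "H", "H"),
    ("Product B", "Vendor lock-in", "M", "H"),
    ("Initiative C", "Scope creep", "L", "L"),
]

for product, risk, lik, imp in sorted(risks, key=lambda r: priority(r[2], r[3])):
    print(f"P{priority(lik, imp)} | {product} | {risk} | {lik}/{imp}")
```

Defining the scoring rule yourself keeps the AI's heatmap consistent across products instead of re-inventing the scale per risk.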
Simulate the impact of reducing infrastructure spend by 20%. List architectural trade-offs.
Current spend:
- Compute: [$ per month]
- Storage: [$ per month]
- Networking: [$ per month]
- Third-party services: [$ per month]
Total: [$ per month]
Target: 20% reduction = [$ savings]
Provide options:
1. [Option: e.g., "Right-size instances"] - Savings: [$] - Trade-off: [performance, complexity]
2. [Option: e.g., "Reduce data retention"] - Savings: [$] - Trade-off: [observability, compliance]
3. [Option: e.g., "Consolidate services"] - Savings: [$] - Trade-off: [development time, risk]
Recommend:
- Which combination hits 20% with least impact
- What we shouldn't cut (critical infrastructure)
- Implementation timeline
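Finding the combination of options that hits the 20% target with the fewest cuts is a small search problem you can work through before prompting. A sketch with hypothetical spend and savings figures:

```python
from itertools import combinations

# Hypothetical monthly spend and savings options; all figures are placeholders.
total_spend = 100_000          # $/month
target = 0.20 * total_spend    # $20,000/month to cut

options = {
    "Right-size instances": 9_000,
    "Reduce data retention": 6_000,
    "Consolidate services": 8_000,
    "Drop unused third-party tools": 4_000,
}

# Find the smallest combination of options that meets the target,
# preferring the one that cuts the least beyond it.
best = None
for k in range(1, len(options) + 1):
    hits = [c for c in combinations(options, k)
            if sum(options[o] for o in c) >= target]
    if hits:
        best = min(hits, key=lambda c: sum(options[o] for o in c))
        break

print(best, sum(options[o] for o in best))
```

Presenting the AI with a pre-computed shortlist like this focuses its analysis on the trade-offs, which is where it adds the most value.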
Tool: Claude (more systematic with risk assessment)
Why: Better at structured risk analysis and compliance thinking
Produce a disaster-recovery summary showing RTO, RPO, coverage gaps, and test cadence.
Systems:
1. [System A] - [criticality: High/Medium/Low]
2. [System B] - [criticality]
3. [System C] - [criticality]
For each system:
- **RTO** (Recovery Time Objective: how long can it be down?)
- **RPO** (Recovery Point Objective: how much data loss is acceptable?)
- **Current backup strategy**
- **Current recovery process**
- **Coverage gaps** (what's not backed up or tested)
- **Test frequency** (when we last tested DR)
Provide:
- Risk assessment (where we're exposed)
- Recommended improvements
- Test schedule (how often to test DR)
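The coverage-gap check reduces to comparing each system's objectives (RTO/RPO) against its measured reality (last tested recovery time, backup interval). A sketch with hypothetical systems and numbers:

```python
# Sketch: flag DR coverage gaps by comparing objectives to tested reality.
# Systems and all hour figures below are hypothetical placeholders.

def dr_gaps(rto_h, rpo_h, tested_recovery_h, backup_interval_h):
    """Return a list of gap descriptions; an empty list means coverage looks OK."""
    gaps = []
    if tested_recovery_h is None:
        gaps.append("DR never tested")
    elif tested_recovery_h > rto_h:
        gaps.append(f"tested recovery {tested_recovery_h}h exceeds RTO {rto_h}h")
    if backup_interval_h > rpo_h:
        gaps.append(f"backup interval {backup_interval_h}h exceeds RPO {rpo_h}h")
    return gaps

# (name, RTO h, RPO h, last tested recovery h, backup interval h)
for name, *args in [
    ("Payments API", 1, 0.25, 4.0, 1.0),
    ("Analytics DB", 24, 24, 12.0, 24.0),
    ("Internal wiki", 72, 24, None, 24.0),  # never tested
]:
    print(name, dr_gaps(*args))
```

Running this kind of check on your real numbers first means the AI's risk assessment starts from facts rather than self-reported confidence.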
Assess our security posture across these dimensions.
Dimensions:
- **Authentication & Authorization** (how users/services prove identity)
- **Data protection** (encryption at rest/in transit, sensitive data handling)
- **Network security** (firewalls, VPNs, segmentation)
- **Application security** (input validation, secure coding, dependency management)
- **Monitoring & Response** (logging, alerting, incident response)
- **Compliance** (GDPR, SOC2, HIPAA, etc.)
For each:
- Current state (what we have)
- Gaps (what's missing or weak)
- Risk level (High/Med/Low)
- Recommended actions (prioritized)
Output as security roadmap for next 6-12 months.
Analyze impact of [regulation: e.g., GDPR, SOC2, HIPAA] on our technical architecture.
Regulation: [name]
Current architecture: [brief description]
Compliance requirements:
[List key requirements or let AI identify based on regulation]
For each requirement:
- **What it means** (plain English)
- **Current state** (compliant/partially compliant/non-compliant)
- **Gap** (what needs to change)
- **Technical implementation** (how to become compliant)
- **Effort** (S/M/L)
- **Risk if not addressed**
Provide:
- Prioritized implementation plan
- Estimated timeline
- Resource needs
Tool: Claude (better structured rubrics and evaluation)
Why: More systematic in defining competencies and scoring
Propose a hiring rubric for this role including competencies and scoring criteria.
Role: [e.g., "Staff Engineer," "Engineering Manager"]
Level: [seniority]
Team context: [what this person will work on, team structure]
For each competency:
- **Competency name** (e.g., System Design, Leadership, Communication)
- **What we're looking for** (specific behaviors/skills)
- **How to assess** (interview format, questions, exercises)
- **Scoring rubric** (1-5 scale with descriptions for each level)
- **Weight** (how important this competency is: High/Medium/Low)
Include:
- Technical competencies
- Leadership/collaboration competencies (even for IC roles)
- Cultural fit indicators
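The weighted 1-5 scoring the rubric describes can be made explicit so interviewers and the AI agree on the math. A sketch, with competency names and weights as hypothetical placeholders:

```python
# Sketch of weighted rubric scoring (1-5 per competency).
# Competency names and High/Medium/Low weights are illustrative assumptions.

WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}

def weighted_score(scores, weights):
    """Weighted average of 1-5 competency scores, normalized back to 1-5."""
    total = sum(scores[c] * WEIGHTS[weights[c]] for c in scores)
    denom = sum(WEIGHTS[weights[c]] for c in scores)
    return round(total / denom, 2)

weights = {
    "System Design": "High",
    "Communication": "Medium",
    "Leadership": "Medium",
}
candidate = {"System Design": 4, "Communication": 3, "Leadership": 5}

print(weighted_score(candidate, weights))  # (4*3 + 3*2 + 5*2) / 7 = 4.0
```

Including the formula in the prompt keeps the AI's rubric internally consistent: the per-level descriptions it generates then map cleanly onto a comparable overall score.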
Generate a question bank for interviewing [role].
Role: [e.g., "Senior Backend Engineer"]
Focus areas:
- [Technical area 1: e.g., "Distributed systems"]
- [Technical area 2: e.g., "Database design"]
- [Behavioral area: e.g., "Collaboration under ambiguity"]
For each question:
- **Question**
- **What it assesses** (which competency)
- **What a good answer looks like** (key points to listen for)
- **Red flags** (concerning responses)
- **Follow-up questions** (to probe deeper)
Provide 10-15 questions across technical and behavioral areas.