
IntelAgility Framework (SOP) v1.0

1. About This Framework

The IntelAgility SOP is a complete, step‑by‑step playbook for integrating AI into Agile software delivery. It operationalizes 12 modules across the lifecycle, pairing human accountability with AI copilots and Atlassian Intelligence.

  • Audience: Engineering leaders, Scrum Masters, Product Owners, Developers, QA, SecOps.
     
  • Scope: Jira/Confluence (Atlassian Intelligence) + IDE copilots (e.g., GitHub Copilot) + CI/CD + Git hosting.
     
  • Goal: Increase throughput and quality while reducing rework, delays, and meeting overhead.
     

2. Governance & Guardrails

  • Human-in-the-loop: AI outputs are suggestions; humans remain accountable for decisions and merges.
     
  • Data sensitivity: Do not paste PII or secrets into prompts. Use approved repositories and redaction rules.
     
  • Traceability: Use PR templates and commit trailers to indicate AI involvement (e.g., 'AI-Assisted: Yes/No').
     
  • Transparency: Maintain the AI Trust Ledger and Story Pointing Ledger for auditability and learning.
     

3. Roles & RACI (High Level)

  • Product Owner (PO): Defines business value; validates AI-generated backlog items/ACs.
     
  • Tech Lead/Senior Dev: Validates AI effort splits, approves complex merges, curates prompts/templates.
     
  • Developers: Drive Copilot usage in IDE; run module procedures; record outcomes.
     
  • Scrum Master: Facilitates cadence, ensures data capture, drives continuous improvement.
     
  • QA: Leverages AI for tests and triage; validates coverage and critical paths.
     
  • SecOps: Reviews security scans and remediation steps; maintains policies.
     
  • Data/Platform: Owns dashboards, telemetry, and automation at scale.
     

4. Refinement 2.0

Purpose

Transform ambiguous backlog items into implementation‑ready stories using Atlassian Intelligence and technical value statements.

Prerequisites

  • Backlog items captured with problem statement/user value.
     
  • Atlassian Intelligence enabled for Jira/Confluence.
     
  • Team agrees on the AI‑Friendly/Human‑Heavy/Balanced taxonomy.
     

Inputs

  • Initial story title/description from PO.
     
  • Any linked designs, APIs, or constraints.
     
  • Historical Jira tickets for similar work (for AI context).
     

Tools

  • Jira + Atlassian Intelligence
     
  • Confluence for linked specs
     
  • Design tool links (optional)
     

Definition of Ready (DoR)

  • Story has a clear user outcome and acceptance criteria can be drafted.
     
  • Technical Value Statement field is present in Jira.
     
  • Labels field includes AI‑Friendly/Human‑Heavy/Balanced (placeholder until confirmed).
     

Procedure (Step‑by‑Step)

  1. Open the story in Jira → click 'Assist' to generate draft Acceptance Criteria (ACs) and edge cases.
     
  2. Add a **Technical Value Statement** describing system impact (e.g., APIs, constraints, performance).
     
  3. Have Atlassian Intelligence propose **dependencies** based on linked components/pages.
     
  4. Tag the story as **AI‑Friendly/Human‑Heavy/Balanced** (initial hypothesis).
     
  5. Link similar historical tickets suggested by AI to support context and estimates.
     
  6. PO reviews/edits ACs; Tech Lead validates technical feasibility.
     
  7. Scrum Master confirms DoR: ACs present, value statements filled, initial AI tag set.
     

Definition of Done (DoD)

  • ACs meet team quality criteria (clear numbered format).
     
  • Technical Value Statement added and approved by the Team.
     
  • Initial AI tag applied and dependencies recorded.
     

Outputs/Artifacts

  • Refinement summary (AI‑generated) attached to Jira ticket.
     
  • Linked references to similar past tickets.
     
  • Initial AI tag and value fields populated.
     

Metrics to Track

  • Refinement cycle time (request → DoR)
     
  • % stories with ACs generated/assisted by AI
     
  • % stories with Technical Value Statement populated
     

Example Prompts / Templates

  • Jira (Assist): 'Draft acceptance criteria and edge cases for this story based on the description and linked docs.'
     
  • Jira (Assist): 'Suggest likely dependencies and related tickets.'
     
  • Confluence: 'Summarize this page into ACs for the linked Jira ticket [KEY-123].'
     

Risks & Mitigations

  • Over‑reliance on AI for ACs → ensure PO/QA review.
     
  • Missing constraints → require Technical Value Statement as part of DoR.
     

Automation Ideas / Scale‑Up

  • Jira automation: when DoR=Yes → notify team channel.
     
  • Create a Confluence template for 'Tech Value Statement' and auto‑link it.
     

5. Story Pointing 2.0

Purpose

Make pointing faster and more objective by estimating AI vs human effort using IDE copilots and historical tickets.

Prerequisites

  • Developers have Copilot/IDE chat enabled.
     
  • Historical ticket tags and sizes available for reference.
     
  • Team-calibrated baseline (e.g., historical examples of 5-point stories).
     

Inputs

  • Current story with ACs and Technical Value Statement.
     
  • Similar historical Jira tickets (linked).
     
  • Constraints (e.g., SaaS limitations).
     

Tools

  • IDE Copilot/Chat
     
  • Jira
     
  • Confluence
     

Definition of Ready (DoR)

  • ACs and constraints are available in ticket.
     
  • Story tagged with initial AI‑Friendly/Human‑Heavy/Balanced (from Refinement 2.0).
     

Procedure (Step‑by‑Step)

  1. Developer opens the story context in the IDE via Copilot Chat (pasting ACs as needed).
     
  2. Ask Copilot: 'Given our past tickets [list/linked], what % can AI implement vs human effort?'
     
  3. Copilot proposes an **AI vs Human effort split** (e.g., 50/50) and cites similar work.
     
  4. Translate the effort split to points (e.g., a historical 5 → 2–3 if 50% AI).
     
  5. Review feasibility: if SaaS constraints block AI, adjust split downward.
     
  6. Senior dev validates; team confirms in planning.
     
  7. Record split and final points in **Story Pointing Ledger** (custom field or Confluence table).
     
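The arithmetic in step 4 can be sketched as a small helper. This is a minimal sketch, assuming the team rounds up and never drops below 1 point; the function name and rounding rule are illustrative, not part of the SOP:

```python
import math

def points_from_split(baseline_points: int, ai_pct: float) -> int:
    """Scale a historical baseline by the human share of the effort.

    A 5-point baseline with a 50% AI split yields 2.5, rounded up
    to 3 so the estimate stays conservative.
    """
    if not 0 <= ai_pct <= 100:
        raise ValueError("ai_pct must be between 0 and 100")
    human_share = (100 - ai_pct) / 100
    return max(1, math.ceil(baseline_points * human_share))
```

A team preferring to round down for AI-heavy work would swap `math.ceil` for `math.floor`; either way, the senior dev validation in step 6 still applies.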

Definition of Done (DoD)

  • Final story points agreed and recorded.
     
  • AI/Human split stored in ticket or ledger with brief rationale.
     
  • Exceptions (e.g., constraints) noted.
     

Outputs/Artifacts

  • Updated Story Points
     
  • AI vs Human % split (custom fields)
     
  • Link to similar historical tickets used as evidence
     

Metrics to Track

  • Variance between estimated split vs actual effort
     
  • Pointing time per ticket
     
  • Share of tickets with AI ≥ 30% contribution
     

Example Prompts / Templates

  • Copilot: 'Analyze these ACs and similar tickets [KEY-101, KEY-209] and estimate AI vs human %.'
     
  • Copilot: 'Convert a 50/50 split into a suggested point size using our historical 5-point reference.'
     

Risks & Mitigations

  • Copilot hallucination → require citation of similar tickets and human validation.
     
  • Gaming points → audit splits against actuals in retros.
     

Automation Ideas / Scale‑Up

  • Create a Jira custom field group: AI Contribution %, Human Contribution %, Evidence Links.
     
  • Auto‑export ledger to a dashboard (see Intelligence 2.0).
     

6. Execution 2.0

Purpose

Accelerate implementation while preserving quality and accountability with AI‑assisted coding and PR pre‑reviews.

Prerequisites

  • Branch protection and PR templates in place.
     
  • Code style and linting rules defined.
     
  • Copilot/IDE chat installed.
     

Inputs

  • Assigned story with ACs and AI/Human split.
     
  • Existing codebase and style guides.
     

Tools

  • IDE + Copilot
     
  • Git hosting (e.g., GitHub/GitLab)
     
  • CI pipeline
     

Definition of Ready (DoR)

  • Branch created and linked to Jira ticket.
     
  • PR template includes AI‑Assisted checkbox and test expectations.
     

Procedure (Step‑by‑Step)

  1. Use Copilot to scaffold code/tests from ACs while keeping small, reviewable commits.
     
  2. Run local tests and linters; fix issues.
     
  3. Open PR with template completed (AI‑Assisted: Yes/No; Summary; Risk areas).
     
  4. Request Copilot/AI pre‑review for style/perf/test gaps.
     
  5. Address AI pre‑review comments; push updates.
     
  6. Request human review (at least one senior dev).
     
  7. Link PR to Jira and update ticket status.
     

Definition of Done (DoD)

  • PR merged with required approvals.
     
  • All checks green; no critical AI warnings.
     
  • Jira ticket moved to Done or Ready for QA.
     

Outputs/Artifacts

  • Merged PR with template filled
     
  • PR discussion (AI + human reviews)
     
  • Updated Jira status
     

Metrics to Track

  • Lead time for changes
     
  • Rework rate (post‑merge fixes)
     
  • AI vs human review comments ratio
     

Example Prompts / Templates

  • Copilot: 'Review this PR for style/performance/test gaps and list specific fixes.'
     
  • PR Template: 'AI‑Assisted: ☐Yes ☐No | Areas assisted: ____'
     

Risks & Mitigations

  • Over‑trusting AI review → keep mandatory human reviewer.
     
  • Large PRs → enforce small batch size.
     

Automation Ideas / Scale‑Up

  • CI job to run static analysis and test coverage gates.
     
  • PR label 'High AI Contribution' to route to senior reviewers.
     
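The routing idea above reduces to a labeling rule on the declared AI contribution. A minimal sketch, assuming a team-set 50% threshold and illustrative label names:

```python
def review_labels(ai_pct: float, threshold: float = 50.0) -> list[str]:
    """Return PR labels derived from the declared AI contribution %.

    PRs at or above the threshold get a label that branch-protection
    or routing rules can use to require a senior reviewer.
    """
    labels = ["ai-assisted"] if ai_pct > 0 else []
    if ai_pct >= threshold:
        labels.append("high-ai-contribution")
    return labels
```

The same rule could live in a CI step that reads the PR template's AI-Assisted fields and applies labels via the Git host's API.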

7. Testing 2.0

Purpose

Boost coverage and reduce manual toil by generating tests from ACs and code changes.

Prerequisites

  • Test framework configured; coverage thresholds defined.
     
  • Access to ACs and edge cases in Jira.
     

Inputs

  • PR diff and linked story ACs.
     
  • Historical defects in similar areas.
     

Tools

  • IDE Copilot/Chat
     
  • CI test runner
     
  • Jira/Confluence
     

Definition of Ready (DoR)

  • ACs validated and PR open.
     
  • Test data available or mock strategy defined.
     

Procedure (Step‑by‑Step)

  1. Ask Copilot to generate unit tests for the changed code based on ACs and edge cases.
     
  2. Author integration tests where applicable; use AI to draft scenarios.
     
  3. Run tests locally; iterate until green.
     
  4. Commit tests in the same PR; describe coverage in PR template.
     
  5. Enable CI to block merge if coverage below threshold.
     
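Step 5's merge gate boils down to a threshold check. A minimal sketch; the 80%/70% thresholds are assumptions the team sets, not values the SOP prescribes:

```python
def coverage_gate(line_cov: float, branch_cov: float,
                  line_min: float = 80.0, branch_min: float = 70.0) -> bool:
    """Return True if the PR may merge under the coverage baseline."""
    return line_cov >= line_min and branch_cov >= branch_min
```

In CI this would read the coverage report produced by the test runner and fail the job when the gate returns False.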

Definition of Done (DoD)

  • New tests added and passing.
     
  • Coverage meets or exceeds baseline.
     
  • PR template documents coverage summary.
     

Outputs/Artifacts

  • New/updated test files
     
  • Coverage report
     
  • Test summary in PR
     

Metrics to Track

  • Coverage %
     
  • Escaped defects post‑release
     
  • Time to create tests
     

Example Prompts / Templates

  • Copilot: 'Generate unit tests for functions changed in this diff; include edge cases from ACs.'
     
  • Copilot: 'Draft an integration test outline for scenario X using mocks.'
     

Risks & Mitigations

  • Flaky tests → maintain flake‑tracking; prioritize stabilization.
     
  • Over‑generated trivial tests → focus on risk‑based areas.
     

Automation Ideas / Scale‑Up

  • CI comments coverage diff on PR.
     
  • Nightly job runs AI to suggest missing tests for critical modules.
     

8. Quality 2.0

Purpose

Create accountability for defects and prevent repeats by tagging origin and recommending proven fixes.

Prerequisites

  • Bug template includes fields for 'Origin: AI/Human/Mixed'.
     
  • Root Cause Analysis (RCA) checklist defined.
     

Inputs

  • Bug report with reproduction steps and logs.
     
  • Linked PR/commit suspected to introduce bug.
     

Tools

  • Jira
     
  • Git hosting + blame/annotate
     
  • IDE Copilot/Chat
     

Definition of Ready (DoR)

  • Reproducible bug or sufficient telemetry.
     
  • Access to PR diffs and authors.
     

Procedure (Step‑by‑Step)

  1. Open/triage the bug; link to suspect PRs.
     
  2. Use blame/annotate to locate offending lines; record tentative origin (AI/Human/Mixed).
     
  3. Ask AI to propose likely fixes referencing similar historical bugs.
     
  4. Assign owner; implement fix; add/adjust tests.
     
  5. Complete RCA checklist and finalize origin label.
     
  6. Update AI Trust Ledger with any AI‑related issues.
     

Definition of Done (DoD)

  • Bug fixed, tested, and verified in environment.
     
  • RCA completed; origin tagged; lessons captured.
     

Outputs/Artifacts

  • Updated Jira bug with origin + RCA
     
  • Linked PR with fix
     
  • Tests preventing regression
     

Metrics to Track

  • Mean time to resolution (MTTR)
     
  • % bugs by origin
     
  • Repeat incident rate
     
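The "% bugs by origin" metric can be computed directly from the origin tags set during RCA. A minimal sketch, assuming each bug record carries an 'origin' field as defined in the bug template:

```python
from collections import Counter

def origin_mix(bugs: list[dict]) -> dict[str, float]:
    """Compute the % of bugs by origin tag (AI/Human/Mixed)."""
    counts = Counter(b.get("origin", "Unknown") for b in bugs)
    total = sum(counts.values())
    if total == 0:
        return {}
    return {origin: round(100 * n / total, 1) for origin, n in counts.items()}
```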

Example Prompts / Templates

  • Copilot: 'Suggest a fix for this bug based on these logs and past fix [BUG-321].'
     
  • Jira (Assist): 'Summarize RCA in 3 bullets for leadership.'
     

Risks & Mitigations

  • Mislabeling origin → require reviewer sign‑off.
     
  • Privacy in logs → sanitize before sharing with AI.
     

Automation Ideas / Scale‑Up

  • Automate origin suggestion from PR metadata (e.g., 'AI‑Assisted' checkbox).
     
  • Dashboard slice by origin to guide training and guardrails.
     

9. Documentation 2.0

Purpose

Keep docs in lockstep with code through AI‑assisted generation and release hooks.

Prerequisites

  • PR template includes 'Docs impact: Yes/No'.
     
  • Confluence/Docs repo structure agreed.
     

Inputs

  • Code diffs and public APIs changed.
     
  • Jira release notes content.
     

Tools

  • Copilot/IDE
     
  • Confluence or Docs repo
     
  • CI release pipeline
     

Definition of Ready (DoR)

  • PR open with accurate description.
     
  • Docs owners identified.
     

Procedure (Step‑by‑Step)

  1. Ask Copilot to generate README/API docs from new/changed code.
     
  2. Paste draft into Docs repo or Confluence; edit for clarity.
     
  3. On merge, trigger release notes generation (AI‑assisted) from merged PR titles.
     
  4. Link docs and release notes back to Jira epic.
     

Definition of Done (DoD)

  • Docs updated/created and linked to the PR.
     
  • Release notes published for the increment.
     

Outputs/Artifacts

  • Updated docs pages
     
  • Release notes
     
  • Links in Jira
     

Metrics to Track

  • % PRs with docs updated
     
  • Docs freshness (last updated date vs code change)
     

Example Prompts / Templates

  • Copilot: 'Generate API docs for these endpoints and include examples.'
     
  • Atlassian: 'Draft release notes from these merged PRs.'
     

Risks & Mitigations

  • Low doc quality → enforce human editorial pass.
     
  • Fragmentation → centralize in a single source of truth.
     

Automation Ideas / Scale‑Up

  • Docs CI check to flag missing updates.
     
  • Auto‑publish release notes to Confluence space.
     

10. Onboarding 2.0

Purpose

Reduce ramp‑up time with personalized AI playbooks and on‑demand knowledge retrieval.

Prerequisites

  • Access to historical tickets, docs, and code repos.
     
  • New dev role profile defined.
     

Inputs

  • Role responsibilities and tech stack.
     
  • Key systems, domains, and runbooks.
     

Tools

  • Confluence + Atlassian Intelligence
     
  • IDE Copilot/Chat
     

Definition of Ready (DoR)

  • New hire accounts provisioned.
     
  • Mentor/buddy assigned.
     

Procedure (Step‑by‑Step)

  1. Generate a 2‑week learning path based on role and current backlog.
     
  2. Create a 'Start Here' page: systems, repos, environments, common commands.
     
  3. Enable 'Ask AI' with links to key spaces and repos.
     
  4. Have the new dev complete a small, AI‑assisted starter task; review together.
     

Definition of Done (DoD)

  • New hire completes path and first ticket.
     
  • Feedback captured to refine the playbook.
     

Outputs/Artifacts

  • Personalized playbook page
     
  • Starter task PR
     
  • Q&A transcript for common questions
     

Metrics to Track

  • Time to first PR
     
  • Time to independent delivery
     
  • New hire satisfaction
     

Example Prompts / Templates

  • Confluence: 'Create a two‑week learning plan for a new frontend dev joining Team X.'
     
  • Copilot: 'Explain why module Y was implemented this way; link relevant tickets.'
     

Risks & Mitigations

  • Access gaps → preflight checklist for accounts.
     
  • Overload → pace content, focus on active code areas.
     

Automation Ideas / Scale‑Up

  • Template the onboarding page; auto‑populate with team‑specific links.
     
  • Monthly refresh jobs to keep links alive.
     

11. Planning 2.0

Purpose

Commit confidently using AI‑simulated sprint scenarios and capacity forecasts.

Prerequisites

  • Team capacity data (holidays, PTO) available.
     
  • Backlog items refined and pointed (Story Pointing 2.0).
     

Inputs

  • Candidate stories for the sprint.
     
  • Dependencies and risks list.
     

Tools

  • Jira + Atlassian Intelligence
     
  • Team calendar
     
  • Dashboard
     

Definition of Ready (DoR)

  • Stories meet DoR with points and splits.
     
  • Capacity baselines captured.
     

Procedure (Step‑by‑Step)

  1. Ask Atlassian Intelligence to propose 2–3 sprint scenarios (best/worst/balanced).
     
  2. Review risk hotspots; adjust scope or sequence.
     
  3. Lock plan; record rationale and success criteria for the sprint goal.
     
  4. Broadcast plan and risks to stakeholders.
     
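The best/worst/balanced scenarios in step 1 amount to filling capacity under different risk buffers. A minimal greedy sketch, assuming stories arrive pre-ordered by priority and the buffer percentages are team choices (e.g., 0 for best case, 20–30 for worst case):

```python
def plan_scenario(stories: list[tuple[str, int]], capacity: int,
                  buffer_pct: float = 0.0) -> list[str]:
    """Greedily commit stories up to capacity minus a risk buffer.

    stories: (key, points) pairs, already ordered by priority.
    buffer_pct reserves capacity for unknowns.
    """
    budget = capacity * (1 - buffer_pct / 100)
    committed, used = [], 0
    for key, points in stories:
        if used + points <= budget:
            committed.append(key)
            used += points
    return committed
```

Running it with three buffer values produces the scenario comparison summary that planning records.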

Definition of Done (DoD)

  • Sprint plan documented with chosen scenario.
     
  • Risks acknowledged and owners assigned.
     

Outputs/Artifacts

  • Sprint goal page
     
  • Scenario comparison summary
     
  • Committed backlog with owners
     

Metrics to Track

  • Planned vs delivered delta
     
  • Risk burn‑down
     
  • Spillover rate
     

Example Prompts / Templates

  • Jira (Assist): 'Simulate sprint outcomes with current capacity and suggest a balanced plan.'

Risks & Mitigations

  • Optimism bias → stick to evidence‑based scenarios.
     
  • Hidden dependencies → bring Refinement 2.0 outputs into planning.
     

Automation Ideas / Scale‑Up

  • Auto‑create a Sprint Goal Confluence page from the chosen scenario.
     
  • Notify team of risks via chat integration.
     

12. Intelligence 2.0

Purpose

Make AI ROI visible with blended delivery and quality metrics.

Prerequisites

  • Data export permissions from Jira and Git host.
     
  • Dashboard platform selected.
     

Inputs

  • Story points, splits, velocity, bug origins, lead times.
     
  • PR metadata and coverage.
     

Tools

  • BI/Dashboard tool
     
  • Jira/Git exports
     
  • Confluence for narration
     

Definition of Ready (DoR)

  • Data sources validated.
     
  • Metric definitions agreed (see Appendix).
     

Procedure (Step‑by‑Step)

  1. Build a dashboard showing AI contribution %, velocity trends, bug origin mix, pointing accuracy, and MTTR.
     
  2. Add narrative summaries per sprint for leadership.
     
  3. Use insights in retros to choose experiments.
     
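Two of the dashboard tiles in step 1 can be computed from exported ticket records. A minimal sketch; the record keys ('status', 'ai_tag', 'opened_h', 'closed_h') are illustrative assumptions about the export format:

```python
from statistics import mean

def ai_friendly_share(stories: list[dict]) -> float:
    """% of Done stories tagged AI-Friendly (canonical definition)."""
    done = [s for s in stories if s["status"] == "Done"]
    if not done:
        return 0.0
    friendly = sum(1 for s in done if s.get("ai_tag") == "AI-Friendly")
    return round(100 * friendly / len(done), 1)

def mttr_hours(bugs: list[dict]) -> float:
    """Mean time to resolution, given open/close timestamps in hours."""
    durations = [b["closed_h"] - b["opened_h"] for b in bugs if "closed_h" in b]
    return round(mean(durations), 1) if durations else 0.0
```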

Definition of Done (DoD)

  • Dashboard linked in team home page.
     
  • Retro notes reference data, not anecdotes.
     

Outputs/Artifacts

  • Published dashboard
     
  • Sprint narrative
     
  • Experiment backlog
     

Metrics to Track

  • % AI‑friendly stories over time
     
  • Velocity/throughput trend
     
  • Bug origin trend, MTTR
     

Example Prompts / Templates

  • Confluence: 'Summarize Sprint 23 performance with highlights and risks in 6 bullets.'

Risks & Mitigations

  • Metric overload → keep to a handful of leading indicators.
     
  • Data quality → standardize fields and PR templates.
     

Automation Ideas / Scale‑Up

  • Automate weekly data refresh and Slack digests.
     
  • Auto‑generate retro slide from dashboard highlights.
     

13. Collaboration 2.0

Purpose

Reduce time in meetings by using AI to summarize, track actions, and surface patterns.

Prerequisites

  • Meeting cadences defined (standup, refinement, retro).
     
  • Recording/notes permissions as needed.
     

Inputs

  • Calendar events and agendas.
     
  • Chat threads and Jira updates.
     

Tools

  • Atlassian Intelligence
     
  • Meeting notes tool
     
  • Team chat
     

Definition of Ready (DoR)

  • Agenda templates ready.
     
  • Action owner list available.
     

Procedure (Step‑by‑Step)

  1. Use AI to generate standup summaries and blockers; post to channel and link to tickets.
     
  2. Retro: have AI recall recurring issues across past sprints; propose experiments.
     
  3. Auto‑log decisions and action items with owners and due dates.
     
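Step 2's recall of recurring issues is, at its core, a frequency count across sprints. A minimal sketch of the underlying logic, assuming retro themes are recorded as labels per sprint (in practice the SOP delegates the recall itself to AI):

```python
from collections import Counter

def recurring_themes(sprint_notes: list[list[str]],
                     min_sprints: int = 2) -> list[tuple[str, int]]:
    """Rank retro themes by how many sprints they appeared in.

    sprint_notes: one list of theme labels per sprint.
    """
    seen = Counter()
    for themes in sprint_notes:
        for theme in set(themes):  # count each theme once per sprint
            seen[theme] += 1
    return [(t, n) for t, n in seen.most_common() if n >= min_sprints]
```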

Definition of Done (DoD)

  • Summaries posted; actions assigned with dates.
     
  • Retro experiment chosen and added to backlog.
     

Outputs/Artifacts

  • Standup/retro summaries
     
  • Action log
     
  • Decision register
     

Metrics to Track

  • Time spent in meetings
     
  • Action completion rate
     
  • Recurring blocker frequency
     

Example Prompts / Templates

  • Atlassian: 'Summarize blockers from today's standup and link to Jira issues.'
     
  • Atlassian: 'List top 3 recurring retro themes from the last 5 sprints.'
     

Risks & Mitigations

  • Privacy for recordings → opt‑in and redact.
     
  • Action drift → Scrum Master weekly review.
     

Automation Ideas / Scale‑Up

  • Auto‑remind action owners.
     
  • Retro template with AI‑generated insights prefilled.
     

14. Security 2.0

Purpose

Shift security left with AI‑assisted scanning and guided remediation.

Prerequisites

  • Static/dynamic analysis tools configured in CI.
     
  • Security policy baseline established.
     

Inputs

  • PR diffs and dependency manifests.
     
  • Security rules and exceptions list.
     

Tools

  • Copilot/IDE
     
  • CI security scanners
     
  • Jira
     

Definition of Ready (DoR)

  • PR open; CI checks wired.
     
  • Security owners available for escalation.
     

Procedure (Step‑by‑Step)

  1. Run security scans on PR; review findings.
     
  2. Ask AI to suggest least‑privilege fixes and safer patterns.
     
  3. Create Jira security tasks for medium/high items with ACs.
     
  4. Block merge on critical issues; document exceptions with approval.
     
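Steps 3–4 form a triage policy that a CI step can apply to scan output. A minimal sketch; the finding shape ({'id', 'severity'}) and the exceptions set are illustrative assumptions about the scanner export:

```python
def triage_findings(findings: list[dict],
                    exceptions: frozenset = frozenset()) -> dict:
    """Criticals block the merge unless an approved exception exists;
    medium/high findings become Jira security tasks."""
    blocking = [f["id"] for f in findings
                if f["severity"] == "critical" and f["id"] not in exceptions]
    tasks = [f["id"] for f in findings if f["severity"] in ("medium", "high")]
    return {"merge_allowed": not blocking, "blocking": blocking, "tasks": tasks}
```

Approved exceptions come from the exception register, with sign-off per step 4.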

Definition of Done (DoD)

  • PRs merged with no critical vulns.
     
  • Security tasks tracked and prioritized.
     

Outputs/Artifacts

  • Security scan reports
     
  • Jira security tasks
     
  • Exception register
     

Metrics to Track

  • Critical vuln count
     
  • Time to remediate
     
  • % PRs with zero‑critical status
     

Example Prompts / Templates

  • Copilot: 'Refactor this code to eliminate hard‑coded secrets and use vault access.'

Risks & Mitigations

  • False positives → tune rules and baselines.
     
  • Security vs velocity tension → risk‑based gates.
     

Automation Ideas / Scale‑Up

  • Nightly dependency audit with automated ticket creation.
     
  • Security dashboard slice on leadership page.
     

15. Customer Feedback 2.0

Purpose

Convert raw feedback into prioritized backlog items with AI‑generated ACs and themes.

Prerequisites

  • Source systems connected (support, NPS, CRM).
     
  • PO criteria for value/impact defined.
     

Inputs

  • Exported feedback/comments.
     
  • Existing product roadmap/OKRs.
     

Tools

  • Atlassian Intelligence
     
  • CSV/ETL from support tools
     
  • Confluence
     

Definition of Ready (DoR)

  • Data deduplicated and anonymized as needed.
     
  • PO available for review.
     

Procedure (Step‑by‑Step)

  1. Cluster feedback into themes; identify top pain points.
     
  2. Generate backlog‑ready tickets with draft ACs and impact notes.
     
  3. Link themes to epics; add Technical Value Statements for implementation hints.
     
  4. PO validates and prioritizes; publish summary to stakeholders.
     
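Step 1's clustering would normally be done by Atlassian Intelligence; a naive keyword-based sketch shows the shape of the output, assuming theme names and keyword lists supplied by the PO (all illustrative):

```python
def cluster_feedback(comments: list[str],
                     themes: dict[str, list[str]]) -> dict[str, list[str]]:
    """Assign each comment to the first theme whose keywords match.

    A deliberately naive stand-in for AI clustering: unmatched
    comments fall into 'Unclassified' for human review.
    """
    clusters = {theme: [] for theme in themes}
    clusters["Unclassified"] = []
    for comment in comments:
        lowered = comment.lower()
        for theme, keywords in themes.items():
            if any(k in lowered for k in keywords):
                clusters[theme].append(comment)
                break
        else:
            clusters["Unclassified"].append(comment)
    return clusters
```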

Definition of Done (DoD)

  • Themes, epics, and stories created with ACs.
     
  • Prioritized list approved by PO.
     

Outputs/Artifacts

  • Theme map
     
  • Backlog items with ACs
     
  • Stakeholder summary
     

Metrics to Track

  • Time from feedback to backlog
     
  • % feedback converted to actionable items
     
  • Theme impact on roadmap
     

Example Prompts / Templates

  • Atlassian: 'Cluster this CSV of user comments into 5 themes and draft one story per theme with ACs.'

Risks & Mitigations

  • Garbage in → ensure dedupe and anonymization.
     
  • Bias → include diverse samples and human review.
     

Automation Ideas / Scale‑Up

  • Monthly theme refresh job.
     
  • Stakeholder digest auto‑generated in Confluence.
     

16. Framework Artifacts: Field Definitions & Templates

Jira Custom Fields (recommended)

  • AI Contribution % (Number, 0–100)
     
  • Human Contribution % (Number, 0–100)
     
  • AI/Human Split Evidence (Text/Links)
     
  • Technical Value Statement (Long Text)
     
  • AI Tag (Single‑select: AI‑Friendly, Human‑Heavy, Balanced)
     

Registers & Ledgers

  • AI Trust Ledger: date, ticket, module used, override? (Y/N), notes, outcome.
     
  • Story Pointing Ledger: ticket, baseline points, AI/Human %, final points, variance.
     
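If the ledgers are kept outside Jira (e.g., exported for the Intelligence 2.0 dashboard), a typed record helps standardize the fields. A minimal sketch mirroring the AI Trust Ledger columns above; the class and field names are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrustLedgerEntry:
    """One AI Trust Ledger row: date, ticket, module, override, notes, outcome."""
    day: date
    ticket: str          # e.g. "KEY-123"
    module: str          # e.g. "Story Pointing 2.0"
    overridden: bool     # was the AI suggestion overridden?
    notes: str = ""
    outcome: str = ""
```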

PR Template (Excerpt)

  • AI‑Assisted: ☐Yes ☐No | Areas assisted: ____
     
  • Tests added/updated: ☐Yes ☐No | Coverage summary: ____
     
  • Docs impact: ☐Yes ☐No | Links: ____
     

17. Metrics & Dashboards: Canonical Definitions

  • AI‑Friendly Share: % of Done stories tagged AI‑Friendly.
     
  • Velocity/Throughput: Completed points/items per sprint.
     
  • Pointing Accuracy: |Estimated points − Actual effort proxy| over time.
     
  • Bug Origin Mix: % AI vs Human vs Mixed; the target is that AI‑assisted work carries no higher a defect rate than human work.
     
  • MTTR: Mean Time To Resolve defects.
     
  • Coverage Trend: Line/branch coverage across critical modules.
     
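The Pointing Accuracy definition above translates directly to a mean absolute variance over the Story Pointing Ledger. A minimal sketch; the record keys ('estimated', 'actual') are assumptions about the ledger export:

```python
def pointing_accuracy(ledger: list[dict]) -> float:
    """Mean absolute gap between estimated points and the
    actual-effort proxy, per the canonical definition."""
    if not ledger:
        return 0.0
    total = sum(abs(e["estimated"] - e["actual"]) for e in ledger)
    return round(total / len(ledger), 2)
```

Tracking this value over time shows whether calibration (Story Pointing 2.0) is improving.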

18. 90‑Day Implementation Roadmap

Phase 1 (Weeks 1–3): Foundations

  • Enable Atlassian Intelligence; install IDE copilots.
     
  • Create Jira custom fields and PR templates.
     
  • Pilot modules: Refinement 2.0, Story Pointing 2.0, Execution 2.0.
     

Phase 2 (Weeks 4–8): Scale to Quality & Testing

  • Roll out Testing 2.0 and Quality 2.0.
     
  • Start Documentation 2.0 and Collaboration 2.0.
     
  • Stand up initial Intelligence 2.0 dashboard.
     

Phase 3 (Weeks 9–12): Full Lifecycle

  • Add Planning 2.0, Security 2.0, Customer Feedback 2.0.
     
  • Refine dashboards; define investment decision rules (e.g., Enterprise Copilot).
     
  • Retrospective: Adjust guardrails and prompts.
     

19. Investment Decision Rules (e.g., Enterprise Copilot)

  • AI‑Friendly share up 15%+ quarter over quarter AND bug rate stable or down 10% → scale licenses.
     
  • Velocity up 10%+ with stable spillover → expand module adoption.
     
  • Pointing variance above the agreed threshold → revisit calibration and prompting.
     
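These rules are mechanical enough to encode against the quarterly dashboard export. A minimal sketch; the metric keys and the variance threshold are illustrative assumptions, and `bug_rate_delta_pct <= 0` stands in for "stable or down":

```python
def investment_decision(metrics: dict,
                        variance_threshold: float = 2.0) -> list[str]:
    """Evaluate the three investment rules against quarterly metrics."""
    actions = []
    if metrics["ai_friendly_qoq_pct"] >= 15 and metrics["bug_rate_delta_pct"] <= 0:
        actions.append("scale licenses")
    if metrics["velocity_delta_pct"] >= 10 and metrics["spillover_stable"]:
        actions.append("expand module adoption")
    if metrics["pointing_variance"] > variance_threshold:
        actions.append("revisit calibration and prompting")
    return actions
```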

20. Conclusion

IntelAgility turns AI from ad‑hoc tooling into a governed, measurable operating model. Follow these SOPs to remove delivery friction end‑to‑end and prove, with data, whether AI is making your team better.
