OpenClaw
Use case · Beginner · 10 min

How to Automate GitHub PR Reviews with OpenClaw

Learn how to set up an automated PR review workflow using OpenClaw skills: GitHub integration, PR Reviewer, and Conventional Commits for consistent, high-quality code reviews.

Last updated: 2026-03-31

Required Skills

GitHub (gh)
Recommended

Operate GitHub via the gh CLI (issues, PRs, repos).

View Guide
PR Reviewer
Recommended

Automated code review for pull requests.

View Guide
Conventional Commits
Recommended

Generate/validate Conventional Commits messages.

View Guide

What You'll Build

A fully automated PR review pipeline that:

  1. Creates PRs with AI-generated descriptions from your commit history
  2. Reviews code with AI-powered analysis that catches bugs, security issues, and style problems
  3. Enforces commit conventions with Conventional Commits format
  4. Provides actionable feedback as inline PR comments

By the end of this guide, you'll have a workflow where opening a PR automatically triggers an AI review — no manual intervention needed.

Why AI-Powered PR Review

Manual code review is essential, but it comes with real limitations that slow teams down:

  • Inconsistent quality: Different reviewers catch different things. One person focuses on naming conventions while another looks at error handling. Without a systematic approach, important issues slip through depending on who reviews the PR that day.
  • Reviewer fatigue: After reviewing several hundred lines of code, attention drops significantly. Studies show that review effectiveness decreases after about 60 minutes of continuous review. Critical bugs hide in the later parts of large diffs.
  • Slow turnaround: Reviewers are busy with their own work. A PR might sit for hours or even days waiting for someone to look at it. This blocks the author, encourages large batched PRs, and creates merge conflicts.
  • Scaling issues: As teams grow, the review bottleneck gets worse. Senior developers spend an increasing share of their time reviewing instead of building. New team members may not yet know the codebase well enough to catch subtle issues.

AI-powered review doesn't replace human reviewers — it handles the systematic checks (security patterns, common bugs, style violations) so human reviewers can focus on architecture, design decisions, and business logic. Think of it as a first pass that raises the baseline quality of every PR before a human even opens it.

Prerequisites

Before starting, make sure you have:

  • OpenClaw installed and configured (Getting Started Guide)
  • GitHub CLI (gh) installed and authenticated (gh auth login)
  • A GitHub repository to test with (can be a personal project)
  • Node.js 18+ for running clawhub commands

Step 1: Install the Required Skills

Install all three skills in order:

bash
# 1. GitHub integration (foundation)
npx clawhub@latest install github

# 2. PR Reviewer (code analysis)
npx clawhub@latest install pr-reviewer

# 3. Conventional Commits (commit formatting)
npx clawhub@latest install conventional-commits

Verify installation:

bash
clawhub list

You should see all three skills listed as installed.

Step 2: Configure GitHub Authentication

The GitHub skill needs a personal access token with the following scopes:

  • repo — full repository access
  • read:org — read organization membership (optional, for org repos)

If you've already authenticated with gh auth login, the skill will use your existing credentials. Otherwise:

bash
# Check your current auth status
gh auth status

# Login if needed
gh auth login

Step 3: Set Up PR Reviewer

The PR Reviewer skill works out of the box, but you can customize its behavior:

bash
# Review the default configuration
clawhub inspect pr-reviewer

Key configuration options:

  • Review depth: quick (surface-level) or thorough (deep analysis)
  • Focus areas: security, performance, style, bugs, or all
  • Auto-comment: whether to post comments directly on the PR
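
These options typically live in a skill configuration file. Here is a minimal sketch of what such a file might look like; the filename `.openclaw/pr-reviewer.yml` and the exact key names (`depth`, `focus`, `auto_comment`) are assumptions, so check `clawhub inspect pr-reviewer` for the authoritative schema:

```yaml
# .openclaw/pr-reviewer.yml — hypothetical example; verify key names
# with `clawhub inspect pr-reviewer` before relying on them
depth: thorough        # quick | thorough
focus:
  - security
  - bugs
auto_comment: true     # post comments directly on the PR
```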

Step 4: Create Your First Automated PR Review

Let's test the workflow with a real PR:

4.1 Create a Feature Branch

bash
git checkout -b feature/test-ai-review

4.2 Make Some Changes

Edit a file in your project. For testing, try introducing common issues the reviewer can catch. Here are several examples across different problem categories:

Missing await (async bug):

javascript
// Bug: missing await on async call
function getUserData(userId) {
  const response = fetch(`/api/users/${userId}`);  // Missing await
  return response.json();  // TypeError: response.json is not a function
}

SQL injection vulnerability:

python
# Security: user input directly interpolated into SQL query
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = '{user_id}'"  # SQL injection risk
    return db.execute(query)

Hardcoded secret:

javascript
// Security: API key exposed in source code
const stripe = require('stripe')('sk_live_abc123secretkey');

N+1 query problem:

python
# Performance: N+1 query — fires one query per order
def get_orders_with_items(user_id):
    orders = Order.objects.filter(user_id=user_id)
    for order in orders:
        order.items = OrderItem.objects.filter(order_id=order.id)  # N+1!
    return orders

4.3 Commit with Conventional Commits

Instead of writing a commit message manually, let OpenClaw generate one:

bash
git add .
# OpenClaw generates a conventional commit message
# e.g., "feat(api): add getUserData function for user data retrieval"

4.4 Create and Review the PR

bash
# OpenClaw creates the PR with an AI-generated description
# Then PR Reviewer automatically analyzes the diff

Within seconds, you'll see:

  • A well-formatted PR description summarizing your changes
  • Inline review comments pointing out specific issues
  • A summary comment with overall assessment and suggestions

Here's an example of what the AI review comments look like on your PR:

🔒 Security Issue (line 14, auth.py):
  User input is directly interpolated into an SQL query string.
  This is vulnerable to SQL injection attacks.
  Suggested fix: Use parameterized queries instead.

  - query = f"SELECT * FROM users WHERE id = '{user_id}'"
  + query = "SELECT * FROM users WHERE id = %s"
  + return db.execute(query, (user_id,))

⚡ Performance Issue (line 8, orders.py):
  N+1 query detected — OrderItem.objects.filter() is called
  once per order inside a loop. Use select_related() or
  prefetch_related() to batch this into a single query.

  - orders = Order.objects.filter(user_id=user_id)
  + orders = Order.objects.filter(user_id=user_id).prefetch_related('items')

🐛 Bug (line 3, api.js):
  fetch() returns a Promise but is not awaited.
  response.json() will fail because response is a
  pending Promise, not a Response object.

  - const response = fetch(`/api/users/${userId}`);
  + const response = await fetch(`/api/users/${userId}`);

Step 5: Customize the Review Workflow

Focus on Security

For security-critical repositories, configure PR Reviewer to prioritize security checks:

  • SQL injection patterns
  • Hardcoded credentials or API keys
  • Insecure data handling (unvalidated input, missing sanitization)
  • Dependency vulnerabilities
  • Cross-site scripting (XSS) vectors in frontend code
  • Insecure deserialization patterns

Performance-Focused Review

For performance-sensitive services, instruct the reviewer to focus on performance patterns. For React projects specifically, the reviewer catches patterns like unnecessary re-renders:

jsx
// Performance: new object reference on every render causes child re-renders
function ParentComponent({ items }) {
  return (
    <ChildComponent
      style={{ margin: 10 }}        // New object every render
      onClick={() => doSomething()}  // New function every render
    />
  );
}

Team-Wide Setup

Share the configuration across your team:

  1. Export your skill configuration to a .openclaw/ directory in your repo
  2. Commit it to the repository
  3. Team members install with the same config — the skills pick up project-level configuration automatically

Advanced: Guiding the Review Focus

You can tailor the review focus by providing instructions to the PR Reviewer skill. The skill uses AI to analyze diffs, so you can direct its attention through natural language prompts.

Language-Specific Guidance

When running a review, tell the agent what to focus on:

  • Python: check for bare except: blocks, missing type hints, Django ORM N+1 queries
  • JavaScript/TypeScript: flag leftover console.log, missing await on async calls, hardcoded secrets
  • Rust: flag .unwrap() in production code, suggest proper Result handling

Per-Directory Focus

Direct the reviewer to apply different scrutiny levels:

  • Core business logic (src/core/) — thorough review covering security, bugs, and performance
  • Test files (src/tests/) — quick check for correctness only
  • Documentation (docs/) — light style review
  • Scripts (scripts/) — focus on security and bugs

Custom Patterns

Ask the reviewer to flag project-specific patterns, such as:

  • Direct database access in API routes instead of using the repository layer
  • Page components missing error boundaries
  • API endpoints without rate limiting

Integration with CI/CD

For fully automated reviews on every PR, integrate OpenClaw's PR Reviewer into your CI/CD pipeline.

GitHub Actions

Create a workflow file at .github/workflows/ai-review.yml:

yaml
name: AI PR Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install OpenClaw and skills
        run: |
          npm install -g clawhub@latest
          clawhub install pr-reviewer

      - name: Run AI Review
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENCLAW_API_KEY: ${{ secrets.OPENCLAW_API_KEY }}
        run: |
          clawhub run pr-reviewer \
            --pr ${{ github.event.pull_request.number }} \
            --repo ${{ github.repository }} \
            --auto-comment

Controlling When Reviews Run

You can limit AI review to specific conditions to save costs:

yaml
on:
  pull_request:
    types: [opened, synchronize]
    paths-ignore:
      - '**.md'
      - 'docs/**'
      - '.github/**'

Or require a label before reviewing:

yaml
- name: Check for review label
  if: contains(github.event.pull_request.labels.*.name, 'ai-review')
  run: clawhub run pr-reviewer --pr ${{ github.event.pull_request.number }}

Other CI Platforms

The same approach works on any CI platform that supports Node.js. The key steps are:

  1. Install clawhub and the pr-reviewer skill
  2. Set OPENCLAW_API_KEY as a secret environment variable
  3. Pass the PR number and repository to the review command
  4. Ensure the CI bot has write permissions on pull requests
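
As one example, the four steps above translate to a GitLab CI job roughly like this. This is an untested sketch: the clawhub flags mirror the GitHub Actions example above, and for GitLab you would pass the merge request IID rather than a PR number:

```yaml
# .gitlab-ci.yml — hedged sketch, not a verified configuration
ai-review:
  image: node:20
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - npm install -g clawhub@latest
    - clawhub install pr-reviewer
    # OPENCLAW_API_KEY should be set as a masked CI/CD variable
    - clawhub run pr-reviewer --pr $CI_MERGE_REQUEST_IID --repo $CI_PROJECT_PATH --auto-comment
```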

Real-World Results

Teams adopting this workflow commonly report:

  • 40% faster PR turnaround — AI catches obvious issues before human review
  • Consistent review quality — every PR gets the same thorough analysis
  • Better commit history — Conventional Commits make changelogs automatic
  • Fewer bugs in production — AI catches issues humans might miss

Troubleshooting

"GitHub CLI not found"

Make sure gh is installed and in your PATH:

bash
# macOS
brew install gh

# Linux
sudo apt install gh

# Windows
winget install GitHub.cli

"Permission denied" on PR creation

Check your token scopes:

bash
gh auth status

Ensure the repo scope is included. Re-authenticate if needed:

bash
gh auth login --scopes repo

PR Reviewer not commenting

Verify the skill is installed and configured:

bash
clawhub inspect pr-reviewer

Check that your OpenClaw AI provider is configured and has available credits.

Frequently Asked Questions

Does this work with GitHub Enterprise?

Yes. The GitHub skill uses the `gh` CLI, which fully supports GitHub Enterprise Server and GitHub Enterprise Cloud. Configure your enterprise host with `gh auth login --hostname github.yourcompany.com`. Once authenticated, all OpenClaw GitHub skills — including PR Reviewer — work exactly as they do with github.com. No additional configuration is needed beyond the hostname.

Does this work with GitLab or Bitbucket?

The GitHub skill is GitHub-specific, but PR Reviewer and Conventional Commits work with any git platform. For GitLab, you would use the GitLab skill instead of the GitHub skill, passing merge request IDs instead of PR numbers. Bitbucket support works similarly with its own platform skill. The core review logic is platform-agnostic — only the integration layer that posts comments differs between platforms.

Can AI review replace human code review?

AI review complements human review rather than replacing it. It excels at systematic checks — catching security vulnerabilities, common bug patterns, style violations, and performance anti-patterns consistently across every PR without fatigue. However, it does not evaluate high-level architecture decisions, business logic correctness, or whether the overall approach is the right one. The best workflow uses AI review as a first pass to handle mechanical checks, freeing human reviewers to focus on design and intent.

How much does AI review cost?

Costs depend on your AI provider and the size of the diff being reviewed. A typical PR review analyzing a 500-line diff costs approximately $0.01-0.05 with most providers. Larger diffs (1000+ lines) may cost up to $0.10-0.15. You can control costs by setting review depth to `quick` for non-critical paths and using path filters to skip generated files, vendor directories, and documentation.

Can I exclude certain files or directories from review?

Yes. PR Reviewer supports flexible file pattern configuration. You can exclude files by glob pattern (e.g., `**/*.generated.ts`, `vendor/**`), limit reviews to specific directories, or set different review depths per path. This is configured in your `.openclaw/pr-reviewer.yml` file. Most teams exclude auto-generated files, lock files, and vendored dependencies to keep reviews focused on code that was actually written by the team.

How does PR Reviewer handle very large PRs?

PR Reviewer processes large diffs by splitting them into logical chunks and analyzing each chunk in context. For PRs exceeding 1000 lines, it prioritizes high-risk files first — files touching authentication, database queries, API endpoints, and security-sensitive logic get reviewed with full depth. Lower-risk files like tests and configuration changes receive a lighter pass. You can also configure a line limit that triggers a warning suggesting the author split the PR into smaller, more reviewable pieces.

Which programming languages does PR Reviewer support?

Yes — all major ones. PR Reviewer supports JavaScript, TypeScript, Python, Go, Rust, Java, C#, Ruby, PHP, and more. It applies language-specific rules automatically — for example, checking for `unwrap()` misuse in Rust or missing `await` in JavaScript async code. You can also define custom rules per language in your configuration file. The reviewer detects the language from file extensions and applies the appropriate analysis pipeline without any manual setup.

How does AI review differ from a linter?

AI review and linters serve complementary purposes and work well together. Linters enforce deterministic rules (formatting, import order, unused variables) while AI review catches semantic issues (logic bugs, security patterns, performance problems) that rule-based tools cannot detect. In CI, run your linter first to catch formatting issues, then run AI review for deeper analysis. PR Reviewer is aware of common linter rules and avoids duplicating feedback that your linter already covers, so you won't see double comments about the same issue.

Can I track review metrics over time?

OpenClaw tracks review metrics across your repository over time. Run `clawhub stats pr-reviewer` to see a summary of issues found by category, how many were resolved before merge, and common patterns in your codebase. This helps identify recurring issues — for example, if SQL injection warnings keep appearing, it signals a need for team training or better ORM abstractions. You can export these metrics as JSON for integration with dashboards or team reporting tools.

Can I apply different review rules to different kinds of PRs?

Yes. You can define review templates that apply different rules based on PR labels, branch naming conventions, or changed file paths. For example, a `hotfix/*` branch might get a security-focused review with maximum depth, while a `docs/*` branch gets a light style-only check. Define templates in your `.openclaw/pr-reviewer.yml` under the `templates` key, and PR Reviewer selects the matching template automatically based on your criteria. This ensures critical changes get thorough scrutiny without slowing down low-risk updates.
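
A `templates` section might be sketched like this. The `match`, `branch`, and `paths` keys are illustrative guesses, not a documented schema, so verify against `clawhub inspect pr-reviewer`:

```yaml
# .openclaw/pr-reviewer.yml — hypothetical `templates` section
templates:
  hotfix:
    match:
      branch: "hotfix/*"
    depth: thorough
    focus: [security, bugs]
  docs:
    match:
      paths: ["docs/**", "**.md"]
    depth: quick
    focus: [style]
```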
