Test Generator Agent

The Test Generator Agent automatically creates unit tests for your code. It analyzes your functions and classes, then generates comprehensive test suites.

What It Does

  • Analyzes code structure - Understands functions, classes, and their dependencies
  • Generates test cases - Creates tests for happy paths and edge cases
  • Follows conventions - Matches your existing test style
  • Mocks dependencies - Properly handles external dependencies
  • Maintains coverage - Tracks and improves test coverage

Supported Testing Frameworks

| Language | Frameworks |
|----------|------------|
| TypeScript/JavaScript | Jest, Vitest, Mocha |
| Python | pytest, unittest |
| Go | testing package |
| Rust | Built-in tests |
| Java | JUnit |

Configuration

agents:
  - name: test-generator
    template: test-generator
    triggers:
      pull_request:
        - opened
    config:
      # Testing framework to use
      framework: jest

      # Test file location pattern
      test_location: "__tests__/{filename}.test.ts"

      # Coverage threshold to maintain
      coverage_threshold: 80

      # Types of tests to generate
      test_types:
        - unit
        - edge_cases
        - error_handling

      # Files to generate tests for
      include_patterns:
        - "src/**/*.ts"
        - "!src/**/*.test.ts"

      # How to deliver the generated tests
      output_mode: pull_request  # or 'commit' to auto-commit
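
The same template adapts to the other frameworks in the table above. As an illustration, a Python project might be configured like this; the key names mirror the example above, while the exact values are placeholders:

agents:
  - name: test-generator
    template: test-generator
    triggers:
      pull_request:
        - opened
    config:
      framework: pytest
      test_location: "tests/test_{filename}.py"
      coverage_threshold: 80
      include_patterns:
        - "src/**/*.py"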

How It Works

1. Code Analysis

The agent analyzes your code to understand (a sketch of the collected metadata follows this list):

  • Function signatures and return types
  • Class methods and properties
  • Dependencies and imports
  • Existing test patterns in your codebase
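
As a rough illustration, the per-function metadata from this step might look like the following TypeScript shape. The interface and field names here are hypothetical, for intuition only, not the agent's actual internal model:

// Hypothetical sketch of per-function analysis metadata (illustrative only)
interface AnalyzedFunction {
  name: string;                           // e.g. "calculateDiscount"
  parameters: { name: string; type: string }[];
  returnType: string;
  thrownErrors: string[];                 // messages found in throw statements
  externalDependencies: string[];         // imports that may need mocking
}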

2. Test Generation

For each function, it generates:

  • Happy path tests - Normal expected behavior
  • Edge cases - Boundary conditions
  • Error cases - Exception handling
  • Mock setup - External dependency mocking (sketched below)
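
For instance, for a function that wraps an external API client, the generated test might stub the dependency roughly like this. This assumes Jest; the getUserName and getUser modules are hypothetical, invented purely for illustration:

// user.ts (hypothetical module under test)
import { getUser } from './api';

export async function getUserName(id: number): Promise<string> {
  const user = await getUser(id);
  return user.name;
}

// user.test.ts — the kind of mock setup the agent might generate
import { getUser } from './api';
import { getUserName } from './user';

jest.mock('./api'); // replace the real API client with an auto-mock

describe('getUserName', () => {
  it('returns the name from the API response', async () => {
    (getUser as jest.Mock).mockResolvedValue({ id: 1, name: 'Ada' });
    await expect(getUserName(1)).resolves.toBe('Ada');
  });
});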

3. Output

The agent delivers the generated tests in one of three ways:

  • Create a new PR with the generated tests
  • Add commits to the existing PR
  • Post test code as a review comment

Example Output

For this function:

export function calculateDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error('Invalid percentage');
  }
  return price * (1 - percent / 100);
}

The agent generates:

import { calculateDiscount } from './pricing';

describe('calculateDiscount', () => {
  describe('happy path', () => {
    it('should calculate 10% discount correctly', () => {
      expect(calculateDiscount(100, 10)).toBe(90);
    });

    it('should calculate 50% discount correctly', () => {
      expect(calculateDiscount(200, 50)).toBe(100);
    });

    it('should return original price for 0% discount', () => {
      expect(calculateDiscount(100, 0)).toBe(100);
    });
  });

  describe('edge cases', () => {
    it('should return 0 for 100% discount', () => {
      expect(calculateDiscount(100, 100)).toBe(0);
    });

    it('should handle decimal prices', () => {
      expect(calculateDiscount(99.99, 10)).toBeCloseTo(89.99);
    });
  });

  describe('error handling', () => {
    it('should throw for negative percentage', () => {
      expect(() => calculateDiscount(100, -10))
        .toThrow('Invalid percentage');
    });

    it('should throw for percentage over 100', () => {
      expect(() => calculateDiscount(100, 150))
        .toThrow('Invalid percentage');
    });
  });
});

Best Practices

  1. Review generated tests - Treat generated tests like any other contribution; verify that the assertions reflect intended behavior before merging
  2. Customize for your style - Adjust configuration to match conventions
  3. Focus on new code - Generate tests for PR changes, not entire codebase
  4. Combine with coverage tools - Use with your existing coverage reporting (see the Jest example below)
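
For example, if your project uses Jest, its built-in coverageThreshold option can enforce the same bar locally and in CI, mirroring the coverage_threshold: 80 setting shown earlier:

// jest.config.js — fail the test run if coverage drops below 80%
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};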