Testing Guide
Comprehensive guide to running tests for all components in the ad-blocking repository.
Overview
This project uses different testing frameworks for each language implementation:
| Component | Framework | Test Location | Test Command |
|---|---|---|---|
| TypeScript (Rules Compiler) | Deno Test | `src/rules-compiler-typescript/tests/` | `deno task test` |
| TypeScript (API Client) | Deno Test | `src/adguard-api-typescript/tests/` | `deno task test` |
| .NET (Rules Compiler) | xUnit | `src/rules-compiler-dotnet/src/RulesCompiler.Tests/` | `dotnet test` |
| .NET (API Client) | xUnit | `src/adguard-api-dotnet/src/AdGuard.ApiClient.Test/` | `dotnet test` |
| Python | pytest | `src/rules-compiler-python/tests/` | `pytest` |
| Rust (Rules Compiler) | cargo test | `src/rules-compiler-rust/src/` | `cargo test` |
| Rust (API Client) | cargo test | `src/adguard-api-rust/tests/` | `cargo test` |
| PowerShell | Pester | `src/adguard-api-powershell/Tests/` | `Invoke-Pester` |
TypeScript (Deno) Testing
Rules Compiler Tests
```bash
cd src/rules-compiler-typescript

# Run all tests
deno task test

# Run specific test file
deno test tests/compiler.test.ts

# Run with coverage
deno task test:coverage

# Run with resource-leak tracing
deno test --allow-all tests/ --trace-leaks

# Watch mode for development
deno task dev
```
API Client Tests
```bash
cd src/adguard-api-typescript

# Run all tests
deno task test

# Run specific test file
deno test tests/client.test.ts

# Run tests with coverage
deno task test:coverage

# Run integration tests (requires API key)
export ADGUARD_API_KEY="your-key"
deno test tests/ --allow-env --allow-net
```
Writing Deno Tests
```typescript
import { assertEquals, assertRejects } from 'https://deno.land/std@0.203.0/assert/mod.ts';
import { describe, it, beforeEach, afterEach } from 'https://deno.land/std@0.203.0/testing/bdd.ts';

describe('MyModule', () => {
  let instance: MyModule;

  beforeEach(() => {
    instance = new MyModule();
  });

  afterEach(() => {
    // Cleanup
  });

  it('should do something', () => {
    const result = instance.doSomething();
    assertEquals(result, 'expected value');
  });

  it('should handle errors', async () => {
    await assertRejects(
      async () => await instance.failingMethod(),
      Error,
      'Expected error message',
    );
  });
});
```
Common Deno Test Options
| Option | Description |
|---|---|
| `--allow-read` | Allow file system reads |
| `--allow-write` | Allow file system writes |
| `--allow-net` | Allow network access |
| `--allow-env` | Allow environment variable access |
| `--allow-all` / `-A` | Allow all permissions |
| `--coverage` | Generate coverage data |
| `--trace-leaks` | Detect resource leaks |
| `--watch` | Watch mode |
.NET (xUnit) Testing
Rules Compiler Tests
```bash
cd src/rules-compiler-dotnet

# Run all tests
dotnet test RulesCompiler.slnx

# Run with verbose output
dotnet test RulesCompiler.slnx --verbosity detailed

# Run specific test class
dotnet test --filter "FullyQualifiedName~ConfigurationValidatorTests"

# Run specific test method
dotnet test --filter "Name~ShouldValidateConfiguration"

# Run tests with coverage (requires coverlet)
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
```
API Client Tests
```bash
cd src/adguard-api-dotnet

# Run all tests
dotnet test AdGuard.ApiClient.slnx

# Run specific test class
dotnet test --filter "FullyQualifiedName~DevicesApiTests"

# Run tests matching pattern
dotnet test --filter "Name~GetAccountLimits"

# Run with code coverage
dotnet test /p:CollectCoverage=true
```
Writing xUnit Tests
```csharp
using System.Threading.Tasks;
using Xunit;
using FluentAssertions;

public class MyServiceTests
{
    private readonly MyService _service;

    public MyServiceTests()
    {
        _service = new MyService();
    }

    [Fact]
    public void ShouldDoSomething()
    {
        // Arrange
        var input = "test";

        // Act
        var result = _service.DoSomething(input);

        // Assert
        result.Should().Be("expected value");
    }

    [Theory]
    [InlineData("input1", "output1")]
    [InlineData("input2", "output2")]
    public void ShouldHandleMultipleInputs(string input, string expected)
    {
        var result = _service.Process(input);
        result.Should().Be(expected);
    }

    [Fact]
    public async Task ShouldHandleAsync()
    {
        var result = await _service.DoAsyncWork();
        result.Should().NotBeNull();
    }
}
```
Common xUnit Filters
| Filter | Example | Description |
|---|---|---|
| `FullyQualifiedName` | `FullyQualifiedName~ConfigurationTests` | Match the fully qualified test name |
| `Name` | `Name~ShouldValidate` | Match the test method name |
| `Category` | `Category=Integration` | Match a test category/trait |
| `Priority` | `Priority=1` | Match a priority trait |
Python (pytest) Testing
Running Tests
```bash
cd src/rules-compiler-python

# Install dev dependencies first
pip install -e ".[dev]"

# Run all tests
pytest

# Run with verbose output
pytest -v

# Run specific test file
pytest tests/test_config.py

# Run specific test function
pytest tests/test_config.py::test_read_yaml

# Run tests matching pattern
pytest -k "test_read"

# Run with coverage
pytest --cov=rules_compiler --cov-report=term-missing

# Run with coverage HTML report
pytest --cov=rules_compiler --cov-report=html

# Run in parallel (requires pytest-xdist)
pytest -n auto
```
Writing pytest Tests
```python
import pytest

from rules_compiler import RulesCompiler


class TestRulesCompiler:
    @pytest.fixture
    def compiler(self):
        return RulesCompiler()

    def test_compile(self, compiler):
        result = compiler.compile("config.yaml")
        assert result.success
        assert result.rule_count > 0

    @pytest.mark.parametrize("config,expected", [
        ("config1.yaml", 100),
        ("config2.yaml", 200),
    ])
    def test_multiple_configs(self, compiler, config, expected):
        result = compiler.compile(config)
        assert result.rule_count == expected

    def test_error_handling(self, compiler):
        with pytest.raises(FileNotFoundError):
            compiler.compile("nonexistent.yaml")

    @pytest.mark.asyncio  # requires the pytest-asyncio plugin
    async def test_async_operation(self, compiler):
        result = await compiler.async_compile("config.yaml")
        assert result is not None
```
Common pytest Options
| Option | Description |
|---|---|
| `-v` | Verbose output |
| `-s` | Show print statements |
| `-x` | Stop on first failure |
| `--maxfail=N` | Stop after N failures |
| `-k EXPRESSION` | Run tests matching expression |
| `--cov=MODULE` | Measure code coverage for a module |
| `-n auto` | Run in parallel (requires pytest-xdist) |
| `--pdb` | Drop into the debugger on failure |
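The `-k` option deserves a closer look, since it accepts boolean expressions over test names, not just plain substrings. The sketch below is illustrative (the `read_config` helper and test names are invented, not part of this repository); the comments at the end show which expressions would select which tests.

```python
# test_selection_demo.py -- hypothetical module illustrating `pytest -k` matching.

def read_config(text):
    """Toy parser: split "key=value" lines into a dict."""
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def test_read_single_pair():
    assert read_config("host=localhost") == {"host": "localhost"}

def test_read_multiple_pairs():
    parsed = read_config("host=localhost\nport=8080")
    assert parsed["port"] == "8080"

def test_write_roundtrip():
    pairs = {"host": "localhost"}
    text = "\n".join(f"{k}={v}" for k, v in pairs.items())
    assert read_config(text) == pairs

# pytest -k "read"            -> runs both test_read_* functions
# pytest -k "read and single" -> runs only test_read_single_pair
# pytest -k "not write"       -> runs everything except test_write_roundtrip
```

Because `-k` matches on names, consistent naming (a shared `test_read_` prefix, for example) pays off when you want to slice the suite.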
Rust (cargo test) Testing
Rules Compiler Tests
```bash
cd src/rules-compiler-rust

# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run specific test
cargo test test_count_rules

# Run tests in a specific module
cargo test config::

# Run single-threaded with output shown
cargo test -- --nocapture --test-threads=1

# Run tests for specific package
cargo test --package rules-compiler

# Run doc tests
cargo test --doc
```
API Client Tests
```bash
cd src/adguard-api-rust

# Run all workspace tests
cargo test

# Run tests for specific package
cargo test --package adguard-api-lib
cargo test --package adguard-api-cli

# Run integration tests
cargo test --test integration_tests

# Run with coverage (requires tarpaulin)
cargo tarpaulin --out Html
```
Writing Rust Tests
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_functionality() {
        let compiler = RulesCompiler::new();
        let result = compiler.compile("config.yaml", None);
        assert!(result.is_ok());
    }

    #[test]
    fn test_error_handling() {
        let compiler = RulesCompiler::new();
        let result = compiler.compile("nonexistent.yaml", None);
        assert!(result.is_err());
    }

    #[test]
    #[should_panic(expected = "Invalid configuration")]
    fn test_panic() {
        panic!("Invalid configuration");
    }

    #[tokio::test]
    async fn test_async() {
        let result = async_function().await;
        assert!(result.is_ok());
    }

    #[test]
    #[ignore] // Run with the --ignored flag
    fn expensive_test() {
        // Long-running test
    }
}
```
Common cargo test Options
| Option | Description |
|---|---|
| `--` | Pass the options that follow to the test binary |
| `--nocapture` | Show print statements |
| `--test-threads=N` | Number of parallel test threads |
| `--ignored` | Run ignored tests |
| `--release` | Run tests in release mode |
| `--doc` | Run documentation tests |
PowerShell (Pester) Testing
Running Tests
```powershell
cd src/adguard-api-powershell

# Run all tests
Invoke-Pester -Path ./Tests/

# Run with detailed output
Invoke-Pester -Path ./Tests/ -Output Detailed

# Run specific test file
Invoke-Pester -Path ./Tests/RulesCompiler.Tests.ps1

# Run tests with code coverage
Invoke-Pester -Path ./Tests/ -CodeCoverage ./Invoke-RulesCompiler.psm1

# Run with results export
Invoke-Pester -Path ./Tests/ -Output Detailed -OutputFile ./TestResults.xml
```
Writing Pester Tests
```powershell
Describe "RulesCompiler" {
    BeforeAll {
        Import-Module ./Invoke-RulesCompiler.psm1 -Force
    }

    Context "When compiling rules" {
        It "Should compile successfully" {
            $result = Invoke-RulesCompiler -ConfigPath "config.json"
            $result.Success | Should -Be $true
        }

        It "Should handle missing config" {
            { Invoke-RulesCompiler -ConfigPath "nonexistent.json" } |
                Should -Throw
        }
    }

    Context "Configuration validation" {
        It "Should validate config format" {
            $config = Read-CompilerConfiguration -Path "config.json"
            $config.Name | Should -Not -BeNullOrEmpty
        }

        It "Should support parameters <ConfigPath>" -TestCases @(
            @{ ConfigPath = "config1.json" },
            @{ ConfigPath = "config2.json" }
        ) {
            param($ConfigPath)
            $result = Invoke-RulesCompiler -ConfigPath $ConfigPath
            $result | Should -Not -BeNull
        }
    }

    AfterAll {
        Remove-Module Invoke-RulesCompiler -Force
    }
}
```
Common Pester Options
| Option | Description |
|---|---|
| `-Path` | Test file or directory path |
| `-Output` | Output verbosity (`None`, `Normal`, `Detailed`, `Diagnostic`) |
| `-CodeCoverage` | Files to analyze for code coverage |
| `-PassThru` | Return a test result object |
| `-Show` | What to show (`None`, `All`, `Failed`, `Passed`, etc.) |
Continuous Integration
GitHub Actions Example
```yaml
name: Run All Tests

on: [push, pull_request]

jobs:
  test-typescript:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: denoland/setup-deno@v1
        with:
          deno-version: v2.x
      - name: Run TypeScript tests
        run: |
          cd src/rules-compiler-typescript
          deno task test
          cd ../adguard-api-typescript
          deno task test

  test-dotnet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '10.0.x'
      - name: Run .NET tests
        run: |
          cd src/rules-compiler-dotnet
          dotnet test RulesCompiler.slnx
          cd ../adguard-api-dotnet
          dotnet test AdGuard.ApiClient.slnx

  test-python:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: Install and test
        run: |
          cd src/rules-compiler-python
          pip install -e ".[dev]"
          pytest --cov=rules_compiler

  test-rust:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: Run Rust tests
        run: |
          cd src/rules-compiler-rust
          cargo test
          cd ../adguard-api-rust
          cargo test

  test-powershell:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run PowerShell tests
        shell: pwsh
        run: |
          Install-Module -Name Pester -Force -SkipPublisherCheck
          cd src/adguard-api-powershell
          Invoke-Pester -Path ./Tests/ -Output Detailed
```
Test Coverage
Generating Coverage Reports
Deno
```bash
cd src/rules-compiler-typescript
deno task test:coverage
# View coverage report at coverage/html/index.html
```
.NET (with coverlet)
```bash
cd src/rules-compiler-dotnet
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=opencover
dotnet tool install -g dotnet-reportgenerator-globaltool
reportgenerator -reports:coverage.opencover.xml -targetdir:coverage
```
Python
```bash
cd src/rules-compiler-python
pytest --cov=rules_compiler --cov-report=html
# View coverage/index.html
```
Rust
```bash
cd src/rules-compiler-rust
cargo install cargo-tarpaulin
cargo tarpaulin --out Html
# View tarpaulin-report.html
```
Best Practices
1. Test Organization
- Keep tests close to source code
- Use descriptive test names
- Group related tests together
- Separate unit and integration tests
2. Test Independence
- Each test should be independent
- Avoid shared mutable state
- Clean up after tests
- Use fixtures/setup methods
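As a concrete sketch of these points in stdlib Python (`unittest`): each test gets a fresh temporary directory in `setUp`, and `tearDown` removes it even when a test fails, so no state leaks between tests. The `WorkspaceTests` name and file contents are illustrative, not from this repository.

```python
import shutil
import tempfile
import unittest
from pathlib import Path


class WorkspaceTests(unittest.TestCase):
    def setUp(self):
        # Fresh, isolated state for every test
        self.workdir = Path(tempfile.mkdtemp())

    def tearDown(self):
        # Clean up even if the test failed
        shutil.rmtree(self.workdir, ignore_errors=True)

    def test_starts_empty(self):
        # Holds regardless of what other tests wrote
        self.assertEqual(list(self.workdir.iterdir()), [])

    def test_can_write_file(self):
        (self.workdir / "rules.txt").write_text("||ads.example.com^\n")
        self.assertEqual(len(list(self.workdir.iterdir())), 1)

# Run with: python -m unittest test_workspace.py
```

Because neither test depends on the other's side effects, they pass in any order and in parallel.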
3. Test Coverage
- Aim for high code coverage (>80%)
- Focus on critical paths
- Test error conditions
- Include edge cases
4. Performance
- Keep unit tests fast (<100ms)
- Mark slow tests appropriately
- Run integration tests separately
- Use parallelization when possible
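One lightweight way to enforce the speed budget is to time the code under test and fail when it exceeds a threshold. This is a sketch, not a pattern used in this repository; the 100 ms budget and `fast_operation` stand-in are invented for illustration.

```python
import time


def fast_operation():
    # Stand-in for the code under test
    return sum(range(1000))


def assert_under_budget(func, budget_seconds=0.1):
    """Run func, fail if it takes longer than budget_seconds."""
    start = time.perf_counter()
    result = func()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"{func.__name__} took {elapsed:.3f}s"
    return result


def test_fast_operation_is_fast():
    assert assert_under_budget(fast_operation) == 499500
```

Wall-clock assertions can be flaky on loaded CI machines, so budgets like this are best kept generous and treated as a smoke check rather than a benchmark.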
5. Mocking
Use mocks for external dependencies:
```
// Deno
import { stub } from 'https://deno.land/std@0.203.0/testing/mock.ts';

// .NET
using Moq;
var mock = new Mock<IService>();

# Python
from unittest.mock import Mock, patch

// Rust
use mockall::predicate::*;
```
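Expanding the Python line into a complete example: replace an external HTTP client with a `Mock` so the unit test runs offline. `fetch_rule_count` and its `api_client` collaborator are hypothetical names, not part of this repository.

```python
from unittest.mock import Mock


def fetch_rule_count(api_client):
    """Code under test: delegates the network call to its collaborator."""
    response = api_client.get("/filters/stats")
    return response["rule_count"]


def test_fetch_rule_count_uses_client():
    # Canned response instead of a real network round-trip
    client = Mock()
    client.get.return_value = {"rule_count": 42}

    assert fetch_rule_count(client) == 42
    # Verify the interaction, not just the return value
    client.get.assert_called_once_with("/filters/stats")
```

Injecting the dependency (here, as a parameter) is what makes the mock possible; the same shape applies to Moq and mockall.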
Troubleshooting
Deno Tests Failing
```bash
# Clear cache
deno cache --reload src/mod.ts

# Run with permissions
deno test --allow-all tests/

# Check for resource leaks
deno test --trace-leaks tests/
```
.NET Tests Not Running
```bash
# Restore packages
dotnet restore

# Clean and rebuild
dotnet clean
dotnet build
dotnet test
```
Python Tests Not Found
```bash
# Install in editable mode
pip install -e ".[dev]"

# Verify pytest installation
pytest --version

# Check PYTHONPATH
export PYTHONPATH=$PWD:$PYTHONPATH
```
Rust Tests Failing
```bash
# Update dependencies
cargo update

# Clean build
cargo clean
cargo test

# Run single-threaded
cargo test -- --test-threads=1
```
Running All Tests
To run all tests across all components:
```bash
# From repository root
./scripts/run-all-tests.sh   # Linux/macOS
.\scripts\run-all-tests.ps1  # Windows
```
Create this script to automate testing:
```bash
#!/bin/bash
# run-all-tests.sh
set -e

echo "Running TypeScript tests..."
cd src/rules-compiler-typescript && deno task test && cd ../..
cd src/adguard-api-typescript && deno task test && cd ../..

echo "Running .NET tests..."
cd src/rules-compiler-dotnet && dotnet test RulesCompiler.slnx && cd ../..
cd src/adguard-api-dotnet && dotnet test AdGuard.ApiClient.slnx && cd ../..

echo "Running Python tests..."
cd src/rules-compiler-python && pytest && cd ../..

echo "Running Rust tests..."
cd src/rules-compiler-rust && cargo test && cd ../..
cd src/adguard-api-rust && cargo test && cd ../..

echo "Running PowerShell tests..."
cd src/adguard-api-powershell && pwsh -Command "Invoke-Pester -Path ./Tests/" && cd ../..

echo "All tests passed!"
```
Related Documentation
- TypeScript Rules Compiler Guide
- TypeScript API SDK Guide
- .NET Compiler README
- Python Compiler README
- Rust Compiler README
License
GPLv3 - See LICENSE for details.