Version: 2.1.X

Advanced settings

The Advanced settings page lets you fine‑tune how PerForge analyzes test results for the current project. These options control ML anomaly detection and how final transaction status is calculated in reports.

All values are saved per project; each project can have its own ML and transaction status configuration.

Where to find it

  • In the web UI, open the top navigation bar.
  • Click Settings → Advanced settings in the dropdown.
  • The page header shows Current Project: <project-name>, confirming that the settings you edit apply only to that project.

On this page you will see two main sections inside an accordion:

  • ML Analysis Settings
  • Transaction Status Settings

Each section has its own Save changes and Reset to defaults buttons.

How changes are applied

  • Save changes
    • Validates all fields in the section (range checks for numbers, required fields, etc.).
    • Sends the updated values to /api/v1/settings/<category>.
    • Updates the configuration only for the current project.
  • Reset to defaults
    • Opens a confirmation dialog for the selected category (for example, ML Analysis or Transaction Status).
    • On confirmation, calls /api/v1/settings/reset for that category and reloads the default values from the backend.
    • This operation affects only the current project; it does not touch settings in other projects.
  • Unsaved indicator
    • A small blue dot next to the accordion title (for example, ML Analysis Settings).
    • Appears when there are unsaved edits in that category and disappears after a successful save or reset.
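
As a sketch, the save flow boils down to a POST with the changed values. Only the `/api/v1/settings/<category>` path comes from this page; the base URL, port, category name, and payload shape below are placeholders for illustration.

```python
import json
from urllib import request

def save_settings(base_url: str, category: str, values: dict) -> request.Request:
    """Build a POST request updating one settings category for the current project."""
    url = f"{base_url}/api/v1/settings/{category}"
    body = json.dumps(values).encode("utf-8")
    # urllib issues a POST automatically when a request body is supplied
    return request.Request(url, data=body, headers={"Content-Type": "application/json"})

# The "ml" category name, host, and port are assumptions for this example;
# the caller would then execute the request with request.urlopen(req).
req = save_settings("http://localhost:7878", "ml", {"contamination": 0.05})
```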

Below is an overview of what you can configure in each section.

ML Analysis Settings

These settings control the internal ML engine that analyzes backend test metrics (for example, throughput, users, response times) and produces the ${ml_summary} and ${ml_anomalies} results used in templates.

The UI groups options into logical subsections; each parameter has a detailed description and its allowed range, shown as a tooltip and a “Range: min – max” hint.

Isolation Forest

  • contamination (float)
    • Expected proportion of anomalies in the dataset.
    • Lower values = stricter detection (fewer points marked as anomalies).
  • isf_threshold (float)
    • Decision threshold for classifying points as anomalies based on Isolation Forest scores.
  • isf_feature_metric (string, choice: overalThroughput or overalUsers)
    • Main performance metric used as the primary feature for multi‑dimensional anomaly detection.
  • isf_validation_threshold (float)
    • Minimum relative deviation from the median required to keep an Isolation Forest anomaly; smaller deviations are treated as normal to reduce false positives.
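
The validation step can be pictured as a post-filter over the candidate anomalies that Isolation Forest flags. This is a minimal sketch, not PerForge's actual implementation; only the parameter name mirrors the setting above.

```python
from statistics import median

def validate_anomalies(values, candidate_idx, isf_validation_threshold):
    """Keep only candidate anomalies that deviate enough from the series median."""
    med = median(values)
    kept = []
    for i in candidate_idx:
        # relative deviation of this point from the overall median
        rel_dev = abs(values[i] - med) / med if med else 0.0
        if rel_dev >= isf_validation_threshold:
            kept.append(i)
    return kept

# A spike of 400 against a median near 100 survives a 0.5 threshold;
# a mild 120 reading is treated as normal and dropped.
validate_anomalies([100, 101, 99, 400, 120], [3, 4], 0.5)  # → [3]
```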

Z‑Score Detection

  • z_score_threshold (float)
    • How many standard deviations from the mean a point must be to be considered an outlier.
  • z_score_median_validation_threshold (float)
    • Additional median‑based validation step; anomalies must also deviate sufficiently from the local median to be kept, reducing noise.
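
Combined, the two thresholds act like a two-stage filter: a point must be a statistical outlier *and* clearly away from the median. A minimal sketch, assuming the simplest composition of the two checks:

```python
from statistics import mean, median, pstdev

def z_score_anomalies(values, z_score_threshold, median_validation_threshold):
    """Flag points beyond the z-score threshold, then require them to also
    deviate from the median by the validation fraction (reduces noise)."""
    mu, sigma, med = mean(values), pstdev(values), median(values)
    out = []
    for i, v in enumerate(values):
        if sigma and abs(v - mu) / sigma >= z_score_threshold:
            if med and abs(v - med) / med >= median_validation_threshold:
                out.append(i)
    return out

# Only the 50 ms spike clears both the z-score and the median checks.
z_score_anomalies([10, 11, 10, 12, 50], 1.5, 0.3)  # → [4]
```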

Rolling Analysis

  • rolling_window (int)
    • Number of samples in the moving window for rolling statistics.
  • rolling_correlation_threshold (float)
    • Minimum correlation between load and performance metrics required to consider the system stable.
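
The idea behind the correlation check can be sketched as computing Pearson correlation between load and performance over each trailing window; windows whose correlation clears `rolling_correlation_threshold` indicate the system is tracking load as expected. The windowing details here are assumptions.

```python
from statistics import mean

def rolling_correlation(load, perf, rolling_window):
    """Pearson correlation of load vs. performance in each trailing window."""
    corrs = []
    for end in range(rolling_window, len(load) + 1):
        xs = load[end - rolling_window:end]
        ys = perf[end - rolling_window:end]
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        corrs.append(cov / (sx * sy) if sx and sy else 0.0)
    return corrs

# Throughput scaling linearly with users correlates at ~1.0 in every window,
# which would clear a threshold of, say, 0.7 (stable system).
rolling_correlation([1, 2, 3, 4, 5], [10, 20, 30, 40, 50], 3)
```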

Ramp‑Up Detection

  • ramp_up_required_breaches_min / max (int)
    • Lower and upper bounds for how many consecutive anomaly breaches are needed to confirm a tipping point.
  • ramp_up_required_breaches_fraction (float)
    • Fraction of total samples used to dynamically compute the required number of breaches, clamped between min and max.
  • ramp_up_base_metric (string, choice: overalUsers or overalThroughput)
    • Base metric treated as the independent variable during ramp‑up analysis.
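
The interaction of the three breach parameters amounts to a clamped fraction of the sample count. A minimal sketch of that arithmetic (the rounding mode is an assumption):

```python
def required_breaches(n_samples, fraction, min_breaches, max_breaches):
    """Dynamic breach count: a fraction of total samples, clamped to [min, max]."""
    return max(min_breaches, min(max_breaches, round(n_samples * fraction)))

required_breaches(200, 0.05, 3, 15)  # → 10 (5% of 200, inside the bounds)
required_breaches(20, 0.05, 3, 15)   # → 3  (1 breach would be too few; min applies)
```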

Load Detection

  • fixed_load_percentage (int)
    • Percentage of samples where user count must be stable for the test to be classified as a fixed‑load test.
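
One way to read this parameter: if the dominant user count covers at least `fixed_load_percentage` of the samples, the test is fixed load. The exact stability test PerForge uses may differ; this is an illustrative sketch.

```python
def is_fixed_load(users, fixed_load_percentage):
    """Classify as fixed load when one user count dominates enough samples."""
    if not users:
        return False
    top = max(users.count(u) for u in set(users))
    return top / len(users) * 100 >= fixed_load_percentage

# 8 of 10 samples at 10 users = 80% stable, clearing a 75% setting.
is_fixed_load([10] * 8 + [5, 7], 75)  # → True
```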

Metric Stability

  • slope_threshold (float)
    • Threshold for the absolute slope of a trend line; higher slopes indicate instability.
  • p_value_threshold (float)
    • Significance level for trend tests; lower values make it easier to detect statistically significant trends.
  • numpy_var_threshold (float)
    • Maximum allowed variance before a metric is considered unstable.
  • cv_threshold (float)
    • Maximum allowed coefficient of variation (relative volatility).
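
The four checks combine into a single stable/unstable verdict per metric. The sketch below joins the slope, variance, and CV checks with a least-squares trend line; the p-value test is omitted here, and treating the checks as a simple conjunction is an assumption.

```python
from statistics import mean, pstdev, pvariance

def is_stable(values, slope_threshold, numpy_var_threshold, cv_threshold):
    """Stability check combining trend slope, variance, and coefficient of
    variation (the p-value significance test is left out of this sketch)."""
    n = len(values)
    xs = range(n)
    mx, my = mean(xs), mean(values)
    denom = sum((x - mx) ** 2 for x in xs)
    # least-squares slope of value over sample index
    slope = sum((x - mx) * (v - my) for x, v in zip(xs, values)) / denom
    cv = pstdev(values) / my if my else 0.0
    return (abs(slope) <= slope_threshold
            and pvariance(values) <= numpy_var_threshold
            and cv <= cv_threshold)

# A flat series passes; a steadily climbing one fails on slope and variance.
is_stable([100, 101, 100, 99, 100], 0.5, 5.0, 0.05)   # → True
is_stable([100, 120, 140, 160, 180], 0.5, 5.0, 0.05)  # → False
```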

Context Filtering

  • context_median_window (int)
    • Window size for local median context around each point.
  • context_median_pct (float)
    • Allowed deviation from the local median before a point is treated as a true anomaly.
  • context_median_enabled (bool)
    • Master switch to enable/disable contextual median post‑filtering of anomalies.
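
Taken together, the three parameters describe a local-median post-filter. A minimal sketch of how they could interact (window centering and edge handling are assumptions):

```python
from statistics import median

def context_filter(values, anomaly_idx, context_median_window,
                   context_median_pct, context_median_enabled=True):
    """Drop flagged anomalies that sit close to their local median context."""
    if not context_median_enabled:
        return list(anomaly_idx)
    half = context_median_window // 2
    kept = []
    for i in anomaly_idx:
        # local window centered on the point, clipped at the series edges
        local_med = median(values[max(0, i - half): i + half + 1])
        dev = abs(values[i] - local_med) / local_med if local_med else 0.0
        if dev >= context_median_pct:
            kept.append(i)
    return kept

# A 300 ms point among 100 ms neighbors survives; a 110 ms blip does not.
context_filter([100, 100, 100, 300, 100, 100], [3], 5, 0.5)  # → [3]
context_filter([100, 100, 100, 110, 100, 100], [3], 5, 0.5)  # → []
```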

Severity thresholds

  • severity_critical (float)
    • Minimum impact score for an anomaly to be treated as Critical.
  • severity_high (float)
    • Minimum impact score for an anomaly to be treated as High severity.
  • severity_medium (float)
    • Minimum impact score for an anomaly to be treated as Medium severity.
  • severity_low (float)
    • Minimum impact score for an anomaly to be treated as Low severity.
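
The four thresholds form a descending ladder: an anomaly's impact score is matched against them from Critical down. A sketch of that mapping (the label for scores below all thresholds is an assumption):

```python
def severity(impact, severity_critical, severity_high, severity_medium, severity_low):
    """Map a traffic-weighted impact score to a severity label."""
    if impact >= severity_critical:
        return "Critical"
    if impact >= severity_high:
        return "High"
    if impact >= severity_medium:
        return "Medium"
    if impact >= severity_low:
        return "Low"
    return "Info"  # below all thresholds; this fallback label is an assumption

severity(0.9, 0.8, 0.6, 0.4, 0.2)  # → "Critical"
severity(0.5, 0.8, 0.6, 0.4, 0.2)  # → "Medium"
```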

Stability Analysis

  • stability_outlier_z_score (float)
    • Z‑score used to remove extreme outliers before running stability checks.
  • stability_min_points (int)
    • Minimum number of samples required before stability can be evaluated for a metric.
  • baseline_window (int)
    • Number of data points before an anomaly window that are used to compute a local baseline for comparison.

Merging & Grouping

  • merge_gap_samples (int)
    • Maximum gap in samples between anomaly periods that will be merged into a single event.
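
Merging works like interval coalescing: sort the anomaly periods, then join neighbors whose gap is within `merge_gap_samples`. A minimal sketch:

```python
def merge_periods(periods, merge_gap_samples):
    """Merge (start, end) anomaly periods whose gap in samples is within the limit."""
    merged = []
    for start, end in sorted(periods):
        if merged and start - merged[-1][1] <= merge_gap_samples:
            # close enough to the previous period: extend it
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(p) for p in merged]

# A 3-sample gap is bridged; a 15-sample gap keeps the events separate.
merge_periods([(10, 15), (18, 25), (40, 42)], 5)  # → [(10, 25), (40, 42)]
```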

Per‑Transaction Analysis

  • per_txn_analysis_enabled (bool)
    • Enables or disables detailed per‑transaction anomaly detection.
  • per_txn_metrics (list)
    • Comma‑separated list of metrics to analyze per transaction (for example rt_ms_median, rt_ms_avg, rt_ms_p90, error_rate, rps).
  • per_txn_coverage (float)
    • Target share of total traffic (for example 0.8 = top transactions covering 80% of traffic) to include in analysis.
  • per_txn_max_k (int)
    • Hard cap on the number of transactions analyzed.
  • per_txn_min_points (int)
    • Minimum number of data points required for a transaction to be included.
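
The coverage and cap parameters together select which transactions get analyzed: take the busiest ones until the target traffic share is reached, never more than `per_txn_max_k`. This sketch omits the `per_txn_min_points` filter and assumes this selection order.

```python
def select_transactions(traffic_by_txn, per_txn_coverage, per_txn_max_k):
    """Pick the busiest transactions until they cover the target traffic share,
    capped at per_txn_max_k."""
    total = sum(traffic_by_txn.values())
    selected, covered = [], 0.0
    for name, traffic in sorted(traffic_by_txn.items(), key=lambda kv: -kv[1]):
        if covered >= per_txn_coverage * total or len(selected) >= per_txn_max_k:
            break
        selected.append(name)
        covered += traffic
    return selected

# The two busiest transactions already cover 80% of 1000 requests.
select_transactions({"login": 500, "search": 300, "checkout": 150, "misc": 50}, 0.8, 10)
# → ["login", "search"]
```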

Transaction Status Settings

These settings control how the final status of each transaction (Pass / Warning / Failed) is calculated in the Transaction Status tables used in reports. The final status combines NFR checks, baseline comparison, and ML anomalies.

NFR Validation

  • nfr_enabled (bool)
    • Enables using configured NFRs when evaluating transaction status.
    • When enabled, transactions that violate selected NFRs can be marked as Warning or Failed.

Baseline Comparison

  • baseline_enabled (bool)
    • Enables comparison of current test metrics against a baseline test.
  • baseline_warning_threshold_pct (float)
    • Percentage degradation vs baseline that raises a Warning (for example, response time 10% slower than baseline).
  • baseline_failed_threshold_pct (float)
    • Percentage degradation vs baseline that raises Failed.
  • baseline_metrics_to_check (list)
    • Comma‑separated list of metrics to include in baseline comparison (for example avg, pct50, pct90, errors).
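
For metrics where lower is better (response times, errors), degradation is the percentage increase over the baseline value. A sketch of how the two thresholds could translate into a status (the exact formula is an assumption):

```python
def baseline_metric_status(current, baseline, warning_pct, failed_pct):
    """Grade one metric against its baseline; higher values count as worse here."""
    if not baseline:
        return "Pass"  # nothing to compare against
    degradation = (current - baseline) / baseline * 100
    if degradation >= failed_pct:
        return "Failed"
    if degradation >= warning_pct:
        return "Warning"
    return "Pass"

# A response time 10% slower than baseline trips the Warning threshold.
baseline_metric_status(current=330, baseline=300, warning_pct=10, failed_pct=25)
# → "Warning"
```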

ML Anomaly Detection

  • ml_enabled (bool)
    • Includes ML‑detected anomalies when computing final transaction status.
  • ml_min_impact (float)
    • Minimum traffic‑weighted impact required for an ML anomaly to influence the status (very low‑traffic anomalies can be ignored).
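
One plausible way the three inputs combine is a worst-of resolution, with ML anomalies only counted above `ml_min_impact`. The precedence rules and the severity an ML anomaly contributes are assumptions in this sketch.

```python
def final_status(nfr_status, baseline_status, ml_impact, ml_enabled, ml_min_impact):
    """Worst-of combination of NFR, baseline, and ML signals (sketch only)."""
    order = {"Pass": 0, "Warning": 1, "Failed": 2}
    statuses = [nfr_status, baseline_status]
    if ml_enabled and ml_impact >= ml_min_impact:
        # assumption: an impactful ML anomaly raises at least a Warning
        statuses.append("Warning")
    return max(statuses, key=order.__getitem__)

# NFR and baseline pass, but an ML anomaly above the impact floor demotes
# the transaction to Warning; a low-traffic anomaly would be ignored.
final_status("Pass", "Pass", ml_impact=0.5, ml_enabled=True, ml_min_impact=0.2)
# → "Warning"
```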

Data Query Settings

These settings control how raw backend listener metrics are grouped into time buckets before analysis and charting.

  • backend_query_granularity_seconds (int)
    • Time granularity (in seconds) for aggregating backend listener metrics in time‑series queries.
    • Lower values provide more detailed curves but may increase query cost; higher values smooth the data and reduce load.
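
Bucketing by granularity amounts to snapping each raw timestamp down to the start of its interval, then aggregating within each bucket. A minimal sketch of the snapping step:

```python
def bucket_timestamps(timestamps, backend_query_granularity_seconds):
    """Assign each raw timestamp (epoch seconds) to its aggregation bucket start."""
    g = backend_query_granularity_seconds
    return [ts - ts % g for ts in timestamps]

# With 30-second granularity, samples at 0–29 s land in one bucket,
# 30–59 s in the next, and so on.
bucket_timestamps([0, 12, 31, 59, 61], 30)  # → [0, 0, 30, 30, 60]
```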

Together, these settings determine how strict PerForge is when flagging anomalies, regressions, and failures in your reports for each project, and how finely it samples raw metrics for analysis.