Version: 2.1.X

Analysis process

PerForge v2 introduces a powerful, multi-stage analysis engine that automatically processes your performance test data to provide deep, actionable insights. This process runs every time you generate a report, combining data collection, AI analysis, machine learning, and NFR validation.

The entire analysis pipeline is designed to be modular. You can enable or disable different stages based on your needs.

Step-by-step breakdown

1. Data collection

The process begins by gathering raw performance data. PerForge connects to your configured data sources to pull all necessary metrics for the analysis.

  • Time-series Data: Metrics are collected from InfluxDB.
  • Visual Data: Graph screenshots are fetched from Grafana.
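The two collection paths above can be sketched as plain request-building helpers. This is an illustrative sketch only: the bucket name, test ID tag, dashboard UID, and panel ID are hypothetical placeholders, not PerForge's actual configuration; the Flux query shape and Grafana's `/render/d-solo` endpoint are shown as one plausible way to fetch each kind of data.

```python
# Hypothetical sketch of the data-collection step: one Flux query for the
# time-series metrics, one Grafana render URL per graph screenshot.
# All identifiers (bucket, testId tag, dashboard UID) are illustrative.

def build_flux_query(bucket: str, test_id: str, start: str, stop: str) -> str:
    """Flux query pulling every metric tagged with this test run."""
    return (
        f'from(bucket: "{bucket}") '
        f'|> range(start: {start}, stop: {stop}) '
        f'|> filter(fn: (r) => r.testId == "{test_id}")'
    )

def grafana_render_url(base: str, dashboard_uid: str, panel_id: int,
                       start_ms: int, stop_ms: int) -> str:
    """URL of Grafana's panel-render endpoint for a single screenshot."""
    return (f"{base}/render/d-solo/{dashboard_uid}"
            f"?panelId={panel_id}&from={start_ms}&to={stop_ms}")

query = build_flux_query("perf_tests", "run-42",
                         "2024-01-01T00:00:00Z", "2024-01-01T01:00:00Z")
url = grafana_render_url("http://grafana:3000", "abc123", 4,
                         1704067200000, 1704070800000)
```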

2. AI-powered analysis

Using a selected AI provider (like OpenAI or Gemini), PerForge sends multiple prompts to analyze different aspects of the collected data.

  • N Prompts for Graphs: One prompt is sent for each Grafana screenshot to identify visual patterns, trends, and anomalies.
  • 1 Prompt for Aggregated Data: A separate prompt is used to generate a natural language summary for the aggregated metrics table.
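The N+1 prompt fan-out described above can be sketched as a small builder function. The prompt texts and dictionary shape here are hypothetical placeholders for whatever PerForge actually sends to the provider; the point is the structure: one prompt per screenshot, plus a single prompt for the aggregated table.

```python
# Sketch of the prompt fan-out: N graph prompts + 1 aggregated-data prompt.
# Prompt wording and payload shape are illustrative, not PerForge's actual API.

def build_analysis_prompts(screenshots: list[str], aggregated_table: str) -> list[dict]:
    # One prompt per Grafana screenshot.
    prompts = [
        {"kind": "graph", "image": shot,
         "text": "Describe trends, patterns, and anomalies in this graph."}
        for shot in screenshots
    ]
    # A single extra prompt for the aggregated metrics table.
    prompts.append({"kind": "aggregated", "image": None,
                    "text": "Summarize this metrics table:\n" + aggregated_table})
    return prompts

prompts = build_analysis_prompts(
    ["graph1.png", "graph2.png", "graph3.png"],
    "request,avg_ms,p95_ms\nlogin,120,340",
)
# 3 screenshots -> 3 graph prompts, plus 1 aggregated prompt = 4 in total
```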

3. Machine learning (ML) analysis

Next, a sophisticated ML pipeline runs to detect statistical anomalies and performance patterns without using an external LLM.

Key activities in this stage include:

  • Test type identification: The tool automatically determines whether the test was a Ramp-up or Fixed Load test and analyzes each period separately.
      • For Ramp-up tests: It identifies whether a saturation point was reached and, if so, suggests the application's capacity.
      • For Fixed Load tests: It checks for spikes and evaluates whether metrics are stable, using algorithms such as Isolation Forest, Z-score, and Linear Regression.
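Of the algorithms listed for fixed-load tests, the Z-score check is the simplest to illustrate. The sketch below is a minimal, self-contained version of that one technique (threshold value and data are illustrative, not PerForge's internals): a sample is flagged as a spike when it sits more than a chosen number of standard deviations from the mean.

```python
# Minimal Z-score spike detection, as used for fixed-load stability checks.
# The threshold and sample data are illustrative only.
from statistics import mean, stdev

def zscore_spikes(samples: list[float], threshold: float = 2.5) -> list[int]:
    """Return indices of samples whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:  # perfectly flat series has no spikes
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Response times (ms) from a fixed-load period with one obvious spike.
rt = [200, 205, 198, 202, 201, 199, 900, 203, 200, 197]
spikes = zscore_spikes(rt)  # flags index 6 (the 900 ms sample)
```

Note that a single extreme outlier inflates the standard deviation itself, which is why a threshold below 3 is used for this small sample; production pipelines typically combine several detectors (hence Isolation Forest and Linear Regression alongside Z-score).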

The findings from this stage are stored in the ${ml_summary} variable.

4. NFR validation

If Non-Functional Requirements (NFRs) are provided, PerForge validates the test results against them.

  • Apdex Score: An Apdex (Application Performance Index) score is calculated based on the percentage of NFRs that were met.
  • Failed NFRs: A detailed list of requests that failed to meet the defined NFRs is generated.

This summary is made available in the ${nfr_summary} variable.

5. Final summary generation

In the final step, all individual observations from the previous stages are compiled into a template prompt and sent to the AI, which generates a holistic, high-level summary of the entire test run with a clear, concise conclusion.

The combined results of this stage are consolidated into the ${ai_summary} variable.
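The compilation step above can be sketched as simple string assembly: gather each stage's observations, join them into one document, and slot it into a summary prompt. The template text, section names, and `{observations}` placeholder are hypothetical, standing in for whatever prompt template PerForge actually uses.

```python
# Sketch of final-summary compilation: merge per-stage observations into
# one template prompt. Template wording and section names are illustrative.

def build_final_prompt(template: str, observations: dict[str, str]) -> str:
    """Join each stage's findings into sections and fill the template."""
    sections = "\n\n".join(f"## {name}\n{text}"
                           for name, text in observations.items())
    return template.replace("{observations}", sections)

template = ("You are a performance analyst. Summarize this test run "
            "based on the observations below:\n\n{observations}")
obs = {
    "Graphs": "Response time rises steadily after 10:30.",
    "Aggregated metrics": "p95 stayed under 400 ms for all requests.",
    "ML findings": "One spike detected on the CPU metric.",
}
final_prompt = build_final_prompt(template, obs)
```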

warning

Memory and Token Usage

On the AI integration page, you can enable memory between these prompts. This allows the LLM to maintain context from one analysis step to the next (e.g., remembering graph anomalies when summarizing aggregated data). Be aware that enabling this feature will significantly increase token consumption.

How to use the results

All generated findings are injected into your reports using template variables. Simply add these placeholders to your report templates (Markdown, HTML, etc.) to include the summaries:

  • ${ai_summary}: Contains the AI-driven analysis of your test data.
  • ${ml_summary}: Provides statistical insights and anomaly detection results from the ML pipeline.
  • ${nfr_summary}: Includes the Apdex score and a list of any NFR violations.
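The `${...}` placeholders happen to match the syntax of Python's `string.Template`, which makes the substitution step easy to demonstrate. PerForge's actual rendering mechanism may differ; the report layout and summary texts below are illustrative only.

```python
# Illustrative report rendering: injecting the three summary variables into
# a Markdown template. Uses stdlib string.Template, whose ${var} syntax
# matches the placeholders shown in the docs; PerForge's own renderer may differ.
from string import Template

report_template = Template(
    "# Performance Report\n\n"
    "## AI analysis\n${ai_summary}\n\n"
    "## ML findings\n${ml_summary}\n\n"
    "## NFR validation\n${nfr_summary}\n"
)

# safe_substitute leaves any unknown placeholders untouched instead of raising.
report = report_template.safe_substitute(
    ai_summary="Overall the run was stable with no degradation.",
    ml_summary="No statistical anomalies detected.",
    nfr_summary="Apdex 0.95; all NFRs met.",
)
```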

By leveraging this comprehensive analysis process, you can transform raw test data into actionable insights with minimal effort.