You paid for a VAPT engagement. The report just landed in your inbox. Forty pages of severity ratings, CVSS scores, attack narratives, and remediation guidance. Your CTO will read the technical sections. Your investor wants the executive summary. Your auditor wants compliance mapping. You need to know what to do first.
This is a founder’s guide to reading a VAPT report — without becoming a security professional.
Quick Answer
A VAPT report has four parts that matter: the executive summary (1-2 pages, business risk overview), the findings list (each vulnerability with severity, evidence, fix), the methodology section (proves the testing was real and structured), and the retest section (confirms what got fixed). Read in this order:
- Skim the executive summary to know the business risk posture
- Sort findings by severity (Critical, High, Medium, Low, Informational)
- For each Critical and High finding, read: what it is, what’s at risk, how to fix it
- Schedule remediation for Critical and High in 30-60 days
- Complete the retest before sharing the report with auditors or customers
If a finding seems wrong or has compensating controls, push back. Good vendors expect this and update the report.
The Four Sections That Matter
A real VAPT report has structure. If your report is a wall of CVE numbers without these sections, you got a scanner output, not a pentest report. (What separates a good pentest report from a bad one covers this in depth.)
1. Executive Summary
One to two pages. Business-language overview. Should tell you:
- What was tested (scope)
- How many findings, broken down by severity
- The top 3-5 risks in plain English
- Recommended priority order
- A high-level posture statement (e.g., “Your application has strong authentication controls but weak session management; recommend immediate remediation of session-related findings before next release”)
This is what your investor or board will read. If the executive summary is technical jargon or a bullet list of CVE numbers, the report has failed at communication.
2. Methodology Section
Proves the engagement was real and structured. Should reference:
- The standards followed (OWASP Web Security Testing Guide, OWASP API Top 10, OWASP Mobile Top 10, OWASP LLM Top 10, PTES, NIST SP 800-115)
- The testing approach (grey-box, black-box, or white-box)
- The tools used (Burp Suite Professional, Nuclei, custom scripts, manual techniques)
- The dates of testing and the scope tested
Auditors check this section first. If it’s vague or missing, the report loses credibility as audit evidence.
3. Findings
The bulk of the report. Each finding should include:
- Title — clear and specific. “Broken Object Level Authorization in /api/v2/users” not “Access Control Issue”
- Severity — CVSS v3.1 score and vector, or OWASP Risk Rating
- Description — what the vulnerability is, why it exists
- Business impact — what an attacker could actually do
- Proof of concept — step-by-step reproduction with screenshots, request/response pairs
- Remediation — specific fix guidance, not “improve input validation”
- References — OWASP, CWE, vendor documentation
If a finding is missing the proof of concept or the remediation steps, send it back to the vendor.
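If you track findings in your own system, the structure above maps naturally to a record. This is a hypothetical sketch, not a standard report schema; field names are assumptions you would adjust to your vendor's actual template:

```python
from dataclasses import dataclass, field

# Hypothetical internal record mirroring the finding sections above.
@dataclass
class Finding:
    title: str                 # clear and specific, names the affected endpoint
    severity: str              # Critical / High / Medium / Low / Informational
    cvss_score: float          # CVSS v3.1 base score
    description: str           # what the vulnerability is and why it exists
    business_impact: str       # what an attacker could actually do
    proof_of_concept: str      # reproduction steps; reject findings without this
    remediation: str           # specific fix guidance, not generic advice
    references: list = field(default_factory=list)  # OWASP, CWE, vendor docs
    status: str = "Open"       # Open / Remediated / Risk Accepted / Compensating Controls

f = Finding(
    title="Broken Object Level Authorization in /api/v2/users",
    severity="High",
    cvss_score=8.1,
    description="User IDs in the path are not checked against the session owner.",
    business_impact="Any authenticated user can read other users' records.",
    proof_of_concept="1. Log in as user A. 2. Request user B's object ID. 3. Observe data.",
    remediation="Enforce object-level ownership checks in the API authorization layer.",
    references=["OWASP API1:2023", "CWE-639"],
)
```

Tracking findings this way makes the retest trivial: every record whose status is not Remediated is still your problem.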
4. Retest Section
After you fix things, the vendor retests and updates each finding’s status to Open, Remediated, Risk Accepted, or Compensating Controls in Place. A finalised report with all High and Critical findings as Remediated is what your auditor wants to see.
Severity Ratings Explained
Two systems are common. You will see one or both in your report.
CVSS v3.1 (most common)
| Score | Severity | What it means |
|---|---|---|
| 9.0-10.0 | Critical | Trivially exploitable, severe impact (data breach, system takeover) |
| 7.0-8.9 | High | Significant impact, relatively easy to exploit |
| 4.0-6.9 | Medium | Real risk but requires specific conditions to exploit |
| 0.1-3.9 | Low | Minor risk, hard to exploit, limited impact |
| 0.0 | None | Informational, no exploitable security risk |
CVSS scoring is formula-driven, so the same vulnerability should get roughly the same score from any tester. But the score does NOT include your business context. A CVSS 9.8 SQL injection on an isolated dev environment is less urgent than a CVSS 7.5 IDOR on your production billing endpoint.
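The band boundaries in the table above are fixed by the CVSS v3.1 specification, so the mapping from score to severity label can be sketched as a simple lookup:

```python
def cvss_band(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity band."""
    if score == 0.0:
        return "None"        # informational, no exploitable risk
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"        # 9.0-10.0
```

The lookup is mechanical; deciding what to fix first is not, which is exactly why the business-context caveat above matters.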
OWASP Risk Rating
OWASP combines CVSS-style technical severity with a likelihood estimate based on threat agent skill required, ease of discovery, ease of exploit, and awareness of the vulnerability. Output is the same Critical / High / Medium / Low / Informational scale but the methodology differs.
Both systems are valid. The point is consistency within the report, not which one the vendor uses.
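To make the difference concrete, here is a simplified sketch of how the OWASP Risk Rating Methodology combines scores. Factors are rated 0-9, averaged into likelihood and impact levels, then combined through a fixed matrix; the factor lists passed in are illustrative, not a complete implementation of the methodology:

```python
def level(avg: float) -> str:
    """OWASP buckets an averaged 0-9 factor score into three levels."""
    if avg < 3:
        return "LOW"
    if avg < 6:
        return "MEDIUM"
    return "HIGH"

# Overall severity matrix from the OWASP Risk Rating Methodology:
# (likelihood level, impact level) -> overall risk
MATRIX = {
    ("LOW", "LOW"): "Note",       ("LOW", "MEDIUM"): "Low",    ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",     ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium",    ("HIGH", "MEDIUM"): "High",  ("HIGH", "HIGH"): "Critical",
}

def owasp_risk(likelihood_factors: list, impact_factors: list) -> str:
    """Average each factor group, bucket it, and look up overall severity."""
    l = level(sum(likelihood_factors) / len(likelihood_factors))
    i = level(sum(impact_factors) / len(impact_factors))
    return MATRIX[(l, i)]
```

Note how a high-likelihood, low-impact finding lands at Medium rather than Low, which is where the methodology diverges most visibly from a raw CVSS base score.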
What to Fix First
Triage in this order:
1. Critical and High severity on internet-facing assets. Every day these stay open is a day an attacker can find them too. Fix in 30 days.
2. Anything affecting authentication, authorisation, or payment. Even Medium severity here matters more than a Critical CVSS on an isolated function. Authorisation flaws (BOLA, BFLA, IDOR) and authentication bypasses are how real breaches happen.
3. Anything tagged for compliance evidence. If your report is meant to satisfy a SOC 2 or ISO 27001 auditor, the auditor will want all High and Critical remediated by the time they receive the report. Fix in 30-60 days.
4. Chained findings. Sometimes the report flags a Low severity finding that, combined with another Low or Medium, escalates to High when chained. The vendor should call this out explicitly. Treat the chain at the highest severity in the chain.
5. Findings on authenticated user paths. If a feature only authenticated users can reach, the finding is real but the threat model is narrower (an attacker needs valid credentials). Still fix, but typical timeline is 60 days.
6. Informational and Low severity findings. Address as part of normal release cycle. Don’t let them block production.
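The triage order above can be roughly expressed as a sort key. This is a deliberately simplified starting point with made-up field names, not a replacement for judgment (the dev-environment-Critical vs production-IDOR trade-off still needs a human call):

```python
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3, "Informational": 4}

def triage_key(finding: dict):
    """Sort key: severity first, then internet-facing, then auth/payment exposure.

    `finding` is a hypothetical dict; the field names are assumptions,
    not a standard report format.
    """
    return (
        SEVERITY_RANK[finding["severity"]],
        not finding.get("internet_facing", False),        # exposed assets sort earlier
        not finding.get("touches_auth_or_payment", False),
    )

findings = [
    {"title": "IDOR on billing endpoint", "severity": "High",
     "internet_facing": True, "touches_auth_or_payment": True},
    {"title": "Verbose error page", "severity": "Low", "internet_facing": True},
    {"title": "SQLi on dev-only host", "severity": "Critical", "internet_facing": False},
]
queue = sorted(findings, key=triage_key)
```

A mechanical sort cannot encode rules 2 and 4 above (auth/payment Mediums outranking isolated Criticals, and chained findings); treat the sorted queue as a first pass, then re-order by hand.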
How to Challenge a Finding
Vendors expect pushback. The mature move is to challenge with reasoning, not to ignore.
False Positive
The finding describes a vulnerability that doesn’t actually exist. Common cases:
- The tester misinterpreted the application behaviour
- The tester saw an error message that looked like a vulnerability but is actually expected behaviour
- The vulnerability was reproducible only because of test environment configuration that doesn’t exist in production
How to challenge: send the vendor your reasoning with evidence. The vendor should either re-verify and remove the finding, or push back with additional proof.
Compensating Controls
The vulnerability exists but other defences mean the risk is materially lower. Common cases:
- WAF rule blocks the attack pattern (show the WAF rule and rule-trigger logs)
- Network ACL prevents the attacker from reaching the vulnerable endpoint
- Application-layer rate limiting prevents the attack from completing
- Internal monitoring would detect and alert before exploitation
How to challenge: document the compensating control with evidence (configuration screenshots, log samples, control test results). The vendor should update the finding to “Compensating Controls in Place” with your evidence in the report.
Risk Accepted
The vulnerability is real, the impact is understood, but you have a documented business reason not to fix it now. Common cases:
- Fix requires breaking change in API; scheduled for next major version
- Vulnerability is in a deprecated feature being removed in 90 days
- Cost of fix exceeds risk for the specific deployment
How to challenge: provide written acceptance with the business reasoning, the timeline, and the approver (typically CTO or CISO). The vendor updates the finding to “Risk Accepted” with your statement attached.
Severity Pushback
You agree the finding exists but think the severity is wrong. Common cases:
- Vendor scored CVSS 8.5 but the affected system has no user data
- Vendor scored Critical but the finding requires authenticated access AND a specific configuration that’s rare
How to challenge: provide your assessment of the technical impact and likelihood. Good vendors will revise the score with both views documented.
What NOT to do: ask the vendor to remove a finding because “it doesn’t look good in the report.” That’s how reports become useless for compliance and audit. Honesty wins.
What Auditors Look For
If your VAPT report is going to a SOC 2 or ISO 27001 auditor, they check four things specifically:
1. Scope coverage. Did the pentest cover all in-scope systems for the audit period? If your audit scope is Production Web App + Production API, but the pentest only covered Production Web App, the API is unaudited.
2. Compliance mapping. Each finding should map to a control framework. SOC 2: Trust Services Criteria (CC6.1 access controls, CC6.7 transmission of data, CC7.2 monitoring). ISO 27001: Annex A controls (2013 numbering: A.5 policies, A.8 asset management, A.9 access control, A.12 operations security, A.13 communications security, A.14 system acquisition and development).
3. Remediation timeline. Auditors want to see High and Critical findings remediated within 30-60 days. Findings that stay open longer raise questions about the company’s ability to operate the security program.
4. Independence. The pentester must be independent of the team that built the system being tested. A pentest by your own dev team does not count for compliance. A third-party vendor with documented qualifications (OSCP, CompTIA PenTest+) is what auditors expect.
The SOC 2 audit pentest expectations post covers this in more depth.
Red Flags in a Bad Report
If you see any of these, push back hard:
- No proof of concept on findings. A finding without reproduction steps is not actionable.
- Generic remediation guidance. “Improve input validation” is not remediation. “Implement parameterised queries using the application’s ORM and remove dynamic string concatenation in users.py:142” is.
- No methodology section. Without it, you cannot prove to an auditor that the engagement followed a real standard.
- Findings without CVSS or OWASP scores. Severity is required for prioritisation.
- Identical wording across multiple findings. Suggests scanner output that wasn’t manually verified.
- No retest option or retest charged at full engagement price. Industry standard is one retest included.
- Final delivery in 1-2 days for a “complete pentest” of a complex application. Real manual pentesting takes 5-15 days per scope. Automated scans deliver in hours.
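To make the remediation-specificity point concrete, here is what a parameterised-query fix actually looks like. The table, column, and values are purely illustrative (sqlite3 stands in for whatever database and driver your application uses):

```python
import sqlite3

# Hypothetical setup: a minimal users table, not from any real report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'founder@example.com')")

user_id = "1 OR 1=1"  # attacker-controlled input attempting injection

# Vulnerable pattern the report flags: dynamic string concatenation.
# query = "SELECT email FROM users WHERE id = " + user_id

# Remediated: parameterised query; the driver treats user_id as data, not SQL,
# so the injection payload matches nothing.
row = conn.execute("SELECT email FROM users WHERE id = ?", (user_id,)).fetchone()

# Legitimate input still works through the same parameterised path.
row2 = conn.execute("SELECT email FROM users WHERE id = ?", (1,)).fetchone()
```

With the parameterised query, the injection attempt returns no rows while legitimate lookups succeed, which is exactly the behaviour a good remediation section should let your developers verify.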
The manual pentest vs automated scanning post explains why these red flags matter.
The Retest
Most vendors include one retest in the engagement price; some charge separately. Either way, schedule it.
The retest verifies that:
- Each finding marked “Remediated” no longer exists
- The fix didn’t introduce new vulnerabilities
- Compensating controls (if you chose that path) are actually in place
Send the report to auditors or customers AFTER the retest closes out the High and Critical findings. A clean report with most findings showing “Remediated” reads dramatically better than the initial report with everything “Open.” Same engagement, same vendor, very different signal to the reader.
A Good Report Reads Like a Conversation, Not a Compliance Document
The best pentest reports we deliver and the best ones we read have a pattern: each finding tells a small story. What the tester did. What broke. What it would let an attacker do. How to fix it. Why the fix matters.
If your report reads like a database export of CVE numbers, that’s the warning sign. If it reads like an experienced pentester walking you through what they found and why it matters, you got value for the engagement.
For deeper coverage of report quality and vendor selection, see what a good pentest report looks like and how to evaluate a penetration testing firm. To see the structure in practice, our redacted sample report shows the full format we deliver.
If you have a recent pentest report and want a second opinion on the findings, prioritisation, or whether the vendor delivered fair value, see our pentest services and contact us to scope a review.