One of the primary questions we get while scoping out web application penetration tests (including mobile applications and APIs) is about what methodology we use. Of course it’s natural for people to wonder how we’re going to go about testing their assets, and somewhat surprisingly, it can be hard to get this kind of information from your pen testers. Or sometimes, they’ll point you to a large standard and just say that they use that. But we think it’s important for you to understand the flow and stages of a typical test for us. And while our testing methodology pulls some of the most important aspects from each of the industry-accepted standards listed below, we also think it’s important to customize this process and put it in terms our clients and partners will understand.
As a rule, our application-level penetration testing consists of both unauthenticated and authenticated testing, using both automated and manual methods, with particular emphasis placed on identifying vulnerabilities associated with the OWASP Top 10 Most Critical Application Vulnerabilities. It is important to note that a penetration test is not just an automated vulnerability scan. A large portion of web application penetration testing is a manual process, with a skilled engineer attempting to identify and exploit security issues and evaluate their associated risk.
What standards do we use?
What tools do we use?
Some of the primary tools used during web application penetration testing include:
- Burp Suite Pro
- Metasploit Framework
What’s the process?
The overall web application penetration testing process can be broken down into 3 primary stages, each of which has several sub-stages.
1. Gather Scoping Information
After initiating the project, scoping/target information will be collected from the client. In the case of web application penetration testing, this information will include any applicable IP addresses and URLs, authentication credentials (2 sets of credentials for each role being tested), and a list of any sensitive or restricted portions of the application that shouldn’t be scanned or exploited.
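To make the scoping inputs concrete, here is a minimal sketch of how that information might be captured as a simple structured record. All field names, URLs, and values below are invented for illustration; they are not a required intake format.

```python
# Illustrative scoping record for a web application penetration test.
# Every value here is a made-up example, not real client data.
scope = {
    "urls": ["https://app.example.com"],
    "ip_addresses": ["203.0.113.10"],
    # Two sets of credentials for each role being tested
    "credentials": {
        "admin": ["admin-tester-1", "admin-tester-2"],
        "standard_user": ["user-tester-1", "user-tester-2"],
    },
    # Sensitive areas excluded from scanning/exploitation
    "restricted": ["/payments/process"],
}

for role, accounts in scope["credentials"].items():
    print(f"{role}: {len(accounts)} test accounts")
```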
2. Review Rules of Engagement
This process will involve a brief meeting with the client to review and acknowledge the penetration testing rules of engagement, confirm project scope and testing timeline, identify specific testing objectives, document any testing limitations or restrictions, and answer any questions related to the project.
1. Intelligence Gathering
Once the test has officially begun, a start notification will be sent to the client informing them of the activity’s commencement. The first phase will involve open-source intelligence gathering, which includes a review of publicly available information and resources. The goal of this phase is to identify any sensitive information that may help during the following phases of testing, which could include email addresses, usernames, software information, user manuals, forum posts, etc.
Tools may include: Recon-ng, Maltego, Google Hacking, Wayback Machine
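As a small illustration of the Google Hacking technique mentioned above, the sketch below builds a few common search queries ("dorks") for a target domain. The query patterns are well-known examples and the helper function is hypothetical, not part of any tool listed here.

```python
# Hypothetical helper: build common Google search queries ("dorks") used
# during open-source intelligence gathering. Patterns are common examples,
# not an exhaustive list.

def build_dorks(domain: str) -> list[str]:
    return [
        f"site:{domain} filetype:pdf",        # exposed documents/user manuals
        f"site:{domain} inurl:login",         # login pages
        f'site:{domain} intitle:"index of"',  # open directory listings
        f'"{domain}" site:pastebin.com',      # leaked data mentioning the target
    ]

for query in build_dorks("example.com"):
    print(query)
```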
2. Threat Modeling
For this assessment, the threat modeling phase serves to evaluate the types of threats that may affect the in-scope targets. The types of attacks and the likelihood of these threats materializing will inform the risk rankings/priorities assigned to vulnerabilities throughout the assessment. The perspective of the testing (external, internal, authenticated, unauthenticated, etc.) will also be identified to ensure the validity of vulnerabilities discovered. This phase of the assessment also includes manual discovery and crawling of the application, determining business functionality from both an authenticated and unauthenticated perspective. An application proxy will also be used to evaluate packet-level traffic and response headers.
Tools may include: Burp Suite Pro, Cookies Manager+, NoRedirect
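The idea of combining threat likelihood and impact into a risk ranking can be sketched as below. The 3x3 scale and the thresholds are illustrative assumptions for the example, not Triaxiom's actual scoring model.

```python
# Minimal sketch: combine likelihood and impact into a risk ranking.
# The levels and thresholds are assumptions made for this example.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rank(likelihood: str, impact: str) -> str:
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "critical"
    if score >= 3:
        return "moderate"
    return "low"

print(risk_rank("high", "high"))    # likely, damaging threat
print(risk_rank("low", "medium"))   # unlikely, limited-impact threat
```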
3. Vulnerability Analysis
The vulnerability analysis phase will encompass the discovery and enumeration of all in-scope targets/applications at both the network layer and the application layer. At the network layer, Triaxiom will evaluate the attack surface of all in-scope assets using port scans, banner analysis, and vulnerability scans. At the application layer, Triaxiom will run automated vulnerability scans, starting from the unauthenticated perspective and then moving to each of the in-scope, authenticated roles. Then, Triaxiom will perform manual identification of vulnerabilities involving form submission and application input points, looking for issues such as injection attacks (SQL, command, XPath, LDAP, XXE, XSS), error analysis, file uploads, etc. Finally, Triaxiom will attempt directory brute-forcing and vulnerability identification based on disclosed software versions.
Tools may include: Burp Suite Pro, Nessus, Dirbuster/Dirb, Nikto, Searchsploit
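One piece of the banner analysis described above can be illustrated with a short sketch: flagging response headers that disclose server software versions, which can then be checked against known-vulnerable releases (for example, via Searchsploit). The header values are fabricated examples and the function is illustrative, not part of any listed tool.

```python
# Illustrative banner analysis: flag response headers that leak a
# product/version string. Example header values are made up.
import re

def disclosed_versions(headers: dict) -> list[str]:
    findings = []
    for name in ("Server", "X-Powered-By"):
        value = headers.get(name, "")
        # Match patterns like "Apache/2.4.49" or "PHP/5.6.40"
        if re.search(r"[A-Za-z][\w.-]*/\d+(\.\d+)+", value):
            findings.append(f"{name}: {value}")
    return findings

example = {"Server": "Apache/2.4.49", "X-Powered-By": "PHP/5.6.40"}
print(disclosed_versions(example))
```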
4. Exploitation
This phase will involve taking all potential vulnerabilities identified in the previous phases of the assessment and attempting to exploit them as an attacker would. This helps to evaluate the realistic risk level associated with the successful exploitation of the vulnerability, analyze the possibility of exploit/attack chains, and account for any mitigating controls that may be in place. Additionally, we’ll identify any false positives during this activity. Triaxiom will exploit automatically identified vulnerabilities and evaluate issues requiring manual identification and exploitation. This will include business logic flaws, authentication/authorization bypasses, direct object references, parameter tampering, and session management flaws.
Tools may include: Burp Suite Pro, Metasploit Framework, sqlmap, BeEF
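One of the manual checks named above, insecure direct object references, can be sketched in simplified form: request the same object with two different users' sessions and compare results. The `fetch()` function below is a stand-in for an authenticated HTTP request (in practice routed through a proxy such as Burp Suite) and is stubbed with static data purely for illustration.

```python
# Simplified IDOR check: a non-owner receiving the same successful
# response as the owner is flagged for manual verification.
RECORDS = {101: "alice", 102: "bob"}  # object id -> owning user (test fixture)

def fetch(object_id: int, user: str) -> int:
    """Stubbed request: a vulnerable app returns 200 regardless of owner."""
    return 200 if object_id in RECORDS else 404

def idor_suspected(object_id: int, owner: str, other_user: str) -> bool:
    return fetch(object_id, owner) == 200 and fetch(object_id, other_user) == 200

print(idor_suspected(101, "alice", "bob"))  # non-owner can read alice's record
```

In a real test, the comparison would also account for response bodies and mitigating controls; a matching status code alone is only a lead, not a confirmed finding.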
5. Post Exploitation
After successful exploitation, analysis will continue with infrastructure analysis, pivoting, sensitive data identification, exfiltration, and identification of high-value targets/data. The information collected here feeds into the prioritization and criticality ranking of identified vulnerabilities.
Tools may include: Burp Suite Pro, Metasploit Framework
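Sensitive data identification can be illustrated with a short sketch: scanning captured text for candidate payment card numbers, using the standard Luhn checksum to cut down false positives. The sample data is fabricated, and real engagements involve many more data types (credentials, PII, keys, etc.).

```python
# Illustrative sensitive-data scan: find candidate card numbers and
# validate them with the Luhn checksum. Sample input is fabricated.
import re

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_valid(c)]

sample = "order=12345 card=4111111111111111 ref=9999999999999999"
print(find_card_numbers(sample))  # only the Luhn-valid candidate survives
```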
1. Reporting
After completing the active portion of the assessment, Triaxiom will formally document the findings. The output provided will generally include an executive-level report and a technical findings report. The executive-level report is written for management consumption and includes a high-level overview of assessment activities, scope, the most critical/thematic issues discovered, overall risk scoring, organizational security strengths, and applicable screenshots. The technical findings report, on the other hand, lists each vulnerability individually, with details on how to recreate the issue, an explanation of the associated risk, recommended remediation actions, and helpful reference links.
2. Quality Assurance
All assessments go through a rigorous technical and editorial quality assurance phase. This may also include follow-ups with the client to confirm or deny environment details, as appropriate.
3. Report Presentation
The final activity in any assessment will be a presentation of all documentation to the client. Triaxiom will walk the client through the information provided, make any updates needed, and address questions regarding the assessment output. Following this activity, we’ll provide new revisions of documentation and schedule any formal retesting, if applicable.
Hopefully this helps shed some light on our testing process. We put a lot of thought and organization into planning our approach to web app penetration testing. This level of rigor ensures we perform a thorough assessment consistently and cover as much ground as possible. If you’ve still got questions, let us know!