Our API Penetration Testing Methodology

This blog outlines Triaxiom Security’s methodology for conducting Application Programming Interface (API) penetration tests. An API penetration test emulates an external attacker or malicious insider specifically targeting a custom set of API endpoints and attempting to undermine their security in order to impact the confidentiality, integrity, or availability of an organization’s resources. This post describes the standards, tools, and process that Triaxiom Security’s engineers follow while completing an assessment according to our API penetration testing methodology.

Standards

Triaxiom Security’s API penetration testing methodology is based on the following industry standards:

Minimum Qualifications

The lead engineer for any API penetration test shall at a minimum meet the following:

  • Have a minimum of 5 years of experience in Information Security.
  • Hold the Offensive Security Certified Professional (OSCP) certification.
  • Hold the Certified Information Systems Security Professional (CISSP) certification and be in good standing.
  • Have completed all API penetration testing training requirements and been formally approved.

Sample of Tools Used

Although our API penetration testing methodology cannot list every tool we may use, the following is a sample set of tools that may be used during an assessment: 

  • Burp Suite Professional
  • Recon-ng
  • Nmap
  • Sqlmap
  • Metasploit Framework
  • Custom Scripts
  • Curl
  • Swagger UI
  • WSDLer

Process

Our API penetration testing methodology can be broken into 3 primary stages, each with several steps.

Planning

1. Gather Scoping Information

After initiating the project, scoping/target information will be collected from the client. In the case of API penetration testing, this information will include any applicable IP addresses and URLs, a definition file or documentation for all endpoint definitions, authentication credentials or API tokens (2 sets of credentials for each role being tested), and a list of any sensitive or restricted endpoints that shouldn’t be scanned or exploited.
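
To make this hand-off concrete, the sketch below shows one way the collected scoping information might be organized; every value (URLs, role names, tokens, excluded paths, dates) is a placeholder for illustration rather than a real client artifact.

    # Hypothetical scoping manifest capturing the information typically gathered
    # before an API penetration test. All values are placeholders.
    SCOPE = {
        "base_urls": ["https://api.example.com/v1"],            # in-scope API URLs
        "definition_file": "openapi.json",                      # endpoint definitions/documentation
        "roles": {                                               # two credential sets per tested role
            "standard_user": ["user1-api-token", "user2-api-token"],
            "admin": ["admin1-api-token", "admin2-api-token"],
        },
        "excluded_endpoints": ["/v1/payments/charge"],           # sensitive endpoints not to be scanned/exploited
        "testing_window": "YYYY-MM-DD to YYYY-MM-DD",            # agreed testing timeline
    }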

2. Review Rules of Engagement

This process will involve a brief meeting with the client to review and acknowledge the penetration testing rules of engagement, confirm project scope and testing timeline, identify specific testing objectives, document any testing limitations or restrictions, and answer any questions related to the project.

Execution

1. Reconnaissance

Once the test has officially begun, a start notification will be sent to the client. The first phase will involve open-source intelligence gathering, which includes a review of publicly available information and resources. The goal of this phase is to identify any sensitive information that may help during the following phases of testing, which could include email addresses, usernames, technology in use, user manuals, forum posts, etc. Additionally, this step will include searching for sensitive information that should not be publicly available, such as internal communications, salary information, or other potentially harmful information.

Tools may include: Recon-ng, Google Hacking, Custom Scripts
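
As an illustration of this step, the following sketch probes a target for commonly exposed API documentation and definition files; the target URL and the list of paths are assumptions for illustration, not an exhaustive wordlist.

    # Reconnaissance sketch: check a target for commonly exposed API documentation.
    # Requires the third-party "requests" library (pip install requests).
    import requests

    TARGET = "https://api.example.com"          # placeholder target
    COMMON_DOC_PATHS = [
        "/swagger.json", "/openapi.json", "/v2/api-docs",
        "/swagger-ui/index.html", "/api/docs",
    ]

    for path in COMMON_DOC_PATHS:
        url = TARGET + path
        try:
            resp = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException as exc:
            print(f"[!] {url} request failed: {exc}")
            continue
        if resp.status_code == 200:
            print(f"[+] Possibly exposed API documentation: {url}")
        else:
            print(f"[-] {url} -> HTTP {resp.status_code}")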

2. Threat Modeling

For this assessment, the threat modeling phase evaluates the types of threats that may affect the in-scope APIs. The types of attacks and the likelihood of those threats materializing inform the risk rankings/priorities assigned to vulnerabilities throughout the assessment. The perspective of the testing (external/internal, authenticated/unauthenticated, black box/crystal box, etc.) will also be identified to ensure the validity of vulnerabilities discovered. This phase also includes a manual review of the exposed endpoints, determining the business functionality of each endpoint, and mapping the unauthenticated and authenticated attack surface. An application proxy will be used to baseline and capture normal API interactions for all in-scope endpoints, and packet-level traffic and response headers will be analyzed.

Tools may include: nmap, Burp Suite Professional, curl, Swagger UI, WSDLer, Custom Scripts
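
The endpoint-mapping portion of this phase can be illustrated with the sketch below, which enumerates methods and paths from an OpenAPI definition and flags which operations declare an authentication requirement; it assumes a local "openapi.json" file and is a simplification of the OpenAPI 3 format, not a complete parser.

    # Attack-surface mapping sketch: list operations from an OpenAPI definition
    # and mark each as authenticated or unauthenticated based on its security field.
    import json

    with open("openapi.json") as fh:              # assumed local copy of the definition file
        spec = json.load(fh)

    global_security = bool(spec.get("security"))  # spec-wide authentication requirement

    for path, operations in spec.get("paths", {}).items():
        for method, details in operations.items():
            if method.lower() not in ("get", "post", "put", "patch", "delete"):
                continue
            op_security = details.get("security")
            # An empty security list explicitly marks an operation as unauthenticated.
            authenticated = global_security if op_security is None else len(op_security) > 0
            label = "AUTH  " if authenticated else "UNAUTH"
            print(f"[{label}] {method.upper():6} {path}")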

3. Vulnerability Analysis

The vulnerability analysis phase will encompass the enumeration of all in-scope targets/applications at both the network layer and the application layer. At the network layer, port scans, banner analysis, and vulnerability scans may be run to evaluate the attack surface of all in-scope assets. At the application layer, starting from the unauthenticated perspective and then moving to each of the in-scope, authenticated roles, automated vulnerability scans will be run. Manual identification and confirmation of vulnerabilities for each tested endpoint will be conducted, including injection-style attacks (SQL, command, XPath, LDAP, XXE, XSS), error analysis, file uploads, etc. Vulnerability identification based on identified software versions will also be attempted.

Tools may include: nmap, Nessus, Nikto, Burp Suite Professional, curl
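
A simplified example of the manual injection testing described above is sketched here: it sends a handful of classic payloads to a single parameter and flags responses containing common error signatures. The endpoint, parameter name, payloads, and signatures are assumptions; real testing covers every parameter of every in-scope endpoint and many more payload classes.

    # Injection probe sketch: look for error-based indicators of injection flaws.
    # Requires the third-party "requests" library (pip install requests).
    import requests

    ENDPOINT = "https://api.example.com/v1/users"   # placeholder endpoint
    PARAM = "id"                                    # placeholder parameter
    PAYLOADS = ["'", "1 OR 1=1", "<script>alert(1)</script>", "../../../etc/passwd"]
    ERROR_SIGNATURES = ["sql syntax", "ora-", "stack trace", "exception", "root:x:0:0"]

    for payload in PAYLOADS:
        resp = requests.get(ENDPOINT, params={PARAM: payload}, timeout=10)
        body = resp.text.lower()
        hits = [sig for sig in ERROR_SIGNATURES if sig in body]
        if resp.status_code >= 500 or hits:
            print(f"[!] Payload {payload!r} -> HTTP {resp.status_code}, indicators: {hits or ['server error']}")
        else:
            print(f"[-] Payload {payload!r} -> HTTP {resp.status_code}")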

4. Exploitation

This phase will involve taking all potential vulnerabilities identified in the previous phases of the assessment and attempting to exploit them as an attacker would. This helps to evaluate the realistic risk level associated with successful exploitation of a vulnerability, analyze the possibility of exploit/attack chains, and account for any mitigating controls that may be in place. Additionally, any false positives will be identified during this phase. Not only will automatically identified vulnerabilities be exploited, but issues requiring manual identification and exploitation will be evaluated as well. These include business logic flaws, authentication/authorization bypasses, insecure direct object references, parameter tampering, and session management weaknesses.

Tools may include: Burp Suite Professional, Metasploit Framework, sqlmap
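
As one example of the manual authorization testing mentioned above, the sketch below checks for an insecure direct object reference by requesting an object known to belong to user B while authenticated as user A; the base URL, token, and object ID are placeholders.

    # Insecure direct object reference (IDOR) check sketch.
    # Requires the third-party "requests" library (pip install requests).
    import requests

    BASE = "https://api.example.com/v1"       # placeholder base URL
    USER_A_TOKEN = "token-for-user-a"         # placeholder credential for user A
    USER_B_ORDER_ID = "1002"                  # object known to belong to user B

    resp = requests.get(
        f"{BASE}/orders/{USER_B_ORDER_ID}",
        headers={"Authorization": f"Bearer {USER_A_TOKEN}"},
        timeout=10,
    )

    if resp.status_code == 200:
        print("[!] User A retrieved user B's order -- possible authorization flaw")
    else:
        print(f"[-] Access denied as expected (HTTP {resp.status_code})")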

5. Post Exploitation

After successful exploitation, analysis may continue, including infrastructure analysis, pivoting, sensitive data identification, data exfiltration, and identification of high-value targets/data. We’ll use the information collected here in the prioritization and criticality ranking of identified vulnerabilities.

Tools may include: Burp Suite Professional, Metasploit Framework
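
The sensitive data identification step can be approximated with a simple pattern scan over captured responses, as sketched below; the input file name and the patterns are illustrative, and any matches would still require manual confirmation.

    # Post-exploitation sketch: scan captured API responses for patterns that may
    # indicate sensitive data exposure (e.g., an exported proxy history).
    import re

    PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "US SSN format": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    with open("captured_responses.txt") as fh:   # placeholder export of proxied traffic
        body = fh.read()

    for label, pattern in PATTERNS.items():
        matches = pattern.findall(body)
        if matches:
            print(f"[!] {label}: {len(matches)} potential match(es), e.g. {matches[0]!r}")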

Post-Execution

1. Reporting

After completing the active portion of the assessment, Triaxiom will formally document the findings. The output provided will generally include an executive-level report and a technical findings report. The executive-level report is written for management consumption and includes a high-level overview of assessment activities, scope, the most critical/thematic issues discovered, overall risk scoring, organizational security strengths, and applicable screenshots. The technical findings report, on the other hand, will list each vulnerability individually, with details on how to recreate the issue, an explanation of the associated risk, recommended remediation actions, and helpful reference links.

2. Quality Assurance

All assessments go through a rigorous technical and editorial quality assurance phase. This may also include follow-ups with the client to confirm or deny environment details, as appropriate.

3. Presentation

The final activity in any assessment will be a presentation of all documentation to the client. Triaxiom will walk the client through the information provided, make any updates needed, and address questions regarding the assessment output. Following this activity, we’ll provide new revisions of documentation and schedule any formal retesting, if applicable.

Contact us if you’d like to discuss scheduling an API Penetration Test.