Key Technology Overview
Black-Box Application Security Testing
In black-box security analysis, only client-side code is available. Server-side code can therefore only be analysed by interacting with the interfaces (endpoints) it exposes.
Black-box vulnerability testing typically consists of three phases:

Attack-surface enumeration: searching for available server-side endpoints (API endpoints);

Attack delivery: sending requests with attack vectors to the identified endpoints;

Response analysis: analysing the results of those requests.
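The three phases can be sketched end to end as follows. Everything here is illustrative: the endpoint list, the injection payload, the error signature, and the canned `send_request` responses are invented stand-ins, not a real scanner.

```python
# Minimal sketch of the three black-box testing phases.
# All endpoints, payloads, and responses are illustrative assumptions.

# Phase 1: attack-surface enumeration (here: a hard-coded result).
endpoints = ["/api/login", "/api/search"]

# A hypothetical SQL-injection probe and an error signature to look for.
PAYLOAD = "' OR '1'='1"
ERROR_SIGNATURE = "SQL syntax"

def send_request(endpoint: str, payload: str) -> str:
    """Stand-in for a real HTTP client; returns a canned response body."""
    # Pretend /api/search echoes a database error when a quote breaks its query.
    if endpoint == "/api/search" and "'" in payload:
        return "You have an error in your SQL syntax"
    return "OK"

def scan(endpoints: list[str]) -> list[str]:
    findings = []
    for ep in endpoints:
        # Phase 2: send a request carrying the attack vector.
        body = send_request(ep, PAYLOAD)
        # Phase 3: analyse the response for signs of a vulnerability.
        if ERROR_SIGNATURE in body:
            findings.append(ep)
    return findings

print(scan(endpoints))  # endpoints whose responses look vulnerable
```

A real scanner runs many vectors per endpoint and far richer response analysis; the control flow, however, follows exactly these three phases.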
Advanced Security-Aware Crawling
Our advanced crawling technology is designed to automatically discover every available server-side endpoint, solving the first-phase problem. In a black-box setting, there are several ways to discover server HTTP endpoints:

Inferring from the client side, by determining which requests the client can send;

Fingerprinting the software running on the server and using known endpoints specific to that software (for example, well-known WordPress endpoints);

Fuzzing the server using requests generated from a dictionary and analysing the responses — a technique known as directory busting.
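The third technique, directory busting, reduces to generating candidate paths from a wordlist and classifying the responses. The sketch below uses an invented wordlist and a canned status lookup in place of real HTTP requests.

```python
# Simplified directory-busting sketch. The wordlist and the fake
# status lookup are assumptions; a real tool would issue HTTP requests.

WORDLIST = ["admin", "backup", "api", "login"]

def probe(url: str) -> int:
    """Stand-in for an HTTP HEAD/GET; returns a canned status code."""
    known = {"https://example.com/admin": 200,
             "https://example.com/api": 401}
    return known.get(url, 404)

def dirbust(base: str, words: list[str]) -> list[tuple[str, int]]:
    hits = []
    for word in words:
        url = f"{base}/{word}"
        status = probe(url)
        # Anything other than 404 suggests the path exists in some form;
        # a 401, for example, reveals a protected endpoint.
        if status != 404:
            hits.append((url, status))
    return hits

print(dirbust("https://example.com", WORDLIST))
```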
JavaScript Static Analysis for API Enumeration
Our advanced crawling technology relies on static analysis of client-side code, combined with more traditional headless-browser dynamic crawling and security-aware static crawling techniques. Our static-analysis approach infers server-side endpoints through non-trivial value analysis and code-path analysis of client-side JavaScript, enabling thorough security scanning of web applications.
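To give a flavour of the idea, the toy example below pulls API paths out of JavaScript source with a single regular expression. The real analysis operates on an AST with value and code-path analysis; this string scan, with an invented JS snippet, is only a rough approximation of the concept.

```python
import re

# Toy illustration: a regex pass over JavaScript source to pull out
# URL string literals passed to fetch()/XHR. The JS snippet is invented.

JS_SOURCE = """
fetch('/api/users');
const base = '/api';
// fetch('/api/admin/stats');   <- commented out, still interesting
xhr.open('GET', '/api/orders?id=1');
"""

ENDPOINT_RE = re.compile(r"""['"](/api/[^'"?]+)""")

def extract_endpoints(source: str) -> list[str]:
    # Scans string literals that look like API paths, including ones
    # appearing in comments or otherwise unreachable code.
    return sorted(set(ENDPOINT_RE.findall(source)))

print(extract_endpoints(JS_SOURCE))
```

Note that even this crude pass recovers the commented-out `/api/admin/stats` path, which no browser-driven crawl could ever trigger.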
Advanced Crawling
Thanks to our advanced crawling technology, we are able to:

Detect server endpoints from invalid, unreachable, or commented-out client-side code;

Detect server-side endpoints from client-side code that is only active inside authenticated client areas or admin areas;

Use OpenAPI / Swagger API specifications and other endpoint information sources as a starting point for analysis and crawling.
This lets us deliver the best attack-surface enumeration on the market.
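Using an OpenAPI / Swagger document as a crawl seed amounts to walking its `paths` object and emitting method/path pairs. The fragment below is a minimal, hypothetical spec parsed with the standard library only.

```python
import json

# Using an OpenAPI document as a crawl seed. The spec fragment is a
# minimal, hypothetical example.

SPEC = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/users":      {"get": {}, "post": {}},
    "/users/{id}": {"get": {}, "delete": {}}
  }
}
""")

def seed_endpoints(spec: dict) -> list[tuple[str, str]]:
    """Yield (METHOD, path) pairs to hand to the crawler as starting points."""
    seeds = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            seeds.append((method.upper(), path))
    return sorted(seeds)

print(seed_endpoints(SPEC))
```

Each pair becomes a known-good starting point, so the crawler spends its budget on endpoints the specification does not mention.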
Pitfalls of Traditional Dynamic Crawling
The most important quality metric when searching for endpoints is completeness.
In general, directory busting and fingerprinting cannot identify every endpoint — especially for non-standard, custom-built software.
Being able to infer server endpoints from the client side is essential for a black-box scanner to achieve sufficient endpoint coverage.
Dynamic Crawling
Dynamic crawling uses a headless browser to interact automatically with web UI elements, simulating user actions and observing the requests sent to the server.
While dynamic crawling usually works well, it cannot always discover every endpoint. Sometimes the UI is too complex to crawl completely: performing every possible user action can take excessive time, so the crawler stops before finishing and may miss endpoints.
In addition, the JS code that triggers endpoint access is sometimes not reachable from the UI at all; it is essentially unused code. Such code is still valuable to the scanner, because the endpoints it references can hit live parts of the server. We call these endpoints hidden endpoints.
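The contrast between the two approaches can be made concrete. In the invented snippet below, `legacyExport` is defined but never wired to any UI element, so a simulated dynamic crawl never fires it, while a static scan of the same source still recovers its endpoint. The function names, the snippet, and the crude "crawl" heuristic are all assumptions made for this sketch.

```python
import re

# Illustration of a "hidden endpoint": the function referencing it is
# never attached to the UI, so a headless-browser crawl never triggers
# it, but a static scan of the source still finds the path.

JS_SOURCE = """
document.querySelector('#save').onclick = () => fetch('/api/save');
// legacyExport is defined but never called from any UI handler:
function legacyExport() { return fetch('/api/v1/export-all'); }
"""

PATH_RE = re.compile(r"""fetch\(['"]([^'"]+)""")

def dynamic_crawl(source: str) -> set[str]:
    # Crude stand-in: pretend only fetches attached to UI handlers fire.
    return {m for line in source.splitlines()
            for m in PATH_RE.findall(line) if "onclick" in line}

def static_scan(source: str) -> set[str]:
    # Static pass sees every fetch() literal, reachable or not.
    return set(PATH_RE.findall(source))

hidden = static_scan(JS_SOURCE) - dynamic_crawl(JS_SOURCE)
print(sorted(hidden))
```

The set difference is exactly the hidden-endpoint surface: paths a purely dynamic crawler would never send a single request to.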