CAPEC-80 - Using UTF-8 Encoding to Bypass Validation Logic

This attack is a specific variation on leveraging alternate encodings to bypass validation logic. It exploits the possibility of encoding potentially harmful input in UTF-8 and submitting it to an application that does not expect, or does not correctly validate, this encoding, making input filtering difficult. UTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode; legal UTF-8 characters are one to four bytes long. However, early versions of the UTF-8 specification got some entries wrong (in some cases permitting overlong characters). UTF-8 encoders are supposed to use the shortest possible encoding, but naive decoders may accept encodings that are longer than necessary. According to RFC 3629, a particularly subtle form of this attack can be carried out against a parser that performs security-critical validity checks against the UTF-8 encoded form of its input but interprets certain illegal octet sequences as characters.
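
The core problem can be shown with a short illustrative sketch. The byte pair 0xC0 0xAF is an overlong (and therefore illegal) two-byte encoding of '/' (U+002F): a filter that inspects the raw bytes for a literal "../" never sees one, yet a naive decoder that simply masks off the continuation bits reconstructs the traversal string. The naive_decode function below is a deliberately simplified, hypothetical decoder written only to make the flaw visible; a compliant decoder (here, Python's strict codec) rejects the sequence outright.

    def naive_decode(data: bytes) -> str:
        # Hypothetical lenient decoder: handles 1- and 2-byte forms only and
        # performs no overlong-form check (this is the security flaw).
        out, i = [], 0
        while i < len(data):
            b = data[i]
            if b < 0x80:                                   # 1-byte ASCII form
                out.append(chr(b)); i += 1
            elif 0xC0 <= b <= 0xDF and i + 1 < len(data):  # 2-byte form
                cp = ((b & 0x1F) << 6) | (data[i + 1] & 0x3F)
                out.append(chr(cp)); i += 2
            else:
                raise ValueError("sequence not handled in this sketch")
        return "".join(out)

    payload = b"..\xc0\xaf..\xc0\xafetc/passwd"    # overlong form of ../../etc/passwd

    assert b"../" not in payload                   # a byte-level filter finds nothing
    print(naive_decode(payload))                   # prints ../../etc/passwd

    try:
        payload.decode("utf-8")                    # a compliant decoder refuses it
    except UnicodeDecodeError as err:
        print("strict decoder rejected the overlong form:", err)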


  • Attack Methods: Injection, Protocol Manipulation, API Abuse
  • Purposes: Penetration
  • Security Principles: Reluctance to Trust
  • Scopes and Impacts:
      • Bypass protection mechanism (Authorization, Access Control, Confidentiality)
      • Execute unauthorized code or commands (Availability, Integrity, Confidentiality)
      • Modify memory (Integrity)
      • Unexpected state (Availability)

Low level: An attacker can inject a different representation of a filtered character in UTF-8 format.

Medium level: An attacker may craft subtle encoding of input data by using the knowledge that she has gathered about the target host.

The application's UTF-8 decoder accepts and interprets illegal UTF-8 characters or non-shortest-form UTF-8 encodings.

Input filtering and validation are not done properly, leaving the door open for harmful characters to reach the target host.

An attacker may try to inject dangerous characters using alternative UTF-8 representations of those characters (for example, invalid or overlong UTF-8 sequences). The attacker hopes that the targeted system does poor input filtering against all the different possible representations of the malicious characters. Malicious input can be sent through an HTML form or encoded directly in the URL.

The attacker can use scripts or automated tools to probe for poor input filtering.
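
A probe generator along these lines might simply enumerate the illegal overlong encodings of a character that the filter is known to block. The sketch below (the function name and the choice of '<' are illustrative, not part of any standard tool) derives the two-, three- and four-byte overlong encodings of an ASCII character and percent-encodes them for use in a URL.

    from urllib.parse import quote_from_bytes

    def overlong_encodings(ch: str) -> list[bytes]:
        # Build the illegal 2-, 3- and 4-byte overlong encodings of an ASCII
        # character by padding its code point with leading zero bits.
        cp = ord(ch)
        if cp >= 0x80:
            raise ValueError("this sketch only handles ASCII characters")
        return [
            bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)]),
            bytes([0xE0, 0x80 | (cp >> 6), 0x80 | (cp & 0x3F)]),
            bytes([0xF0, 0x80, 0x80 | (cp >> 6), 0x80 | (cp & 0x3F)]),
        ]

    for probe in overlong_encodings("<"):
        print(quote_from_bytes(probe))    # %C0%BC, %E0%80%BC, %F0%80%80%BC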

RFC 3629 - http://www.faqs.org/rfcs/rfc3629.html

Step 1 - Survey the application for user-controllable inputs

Using a browser or an automated tool, an attacker follows all public links and actions on a web site, recording all the links, forms, resources accessed, and other potential entry points for the web application.

Technique ID: 1 - Environment(s) env-Web

Use a spidering tool to follow and record all links and analyze the web pages to find entry points. Make special note of any links that include parameters in the URL.

Technique ID: 2 - Environment(s) env-Web

Use a proxy tool to record all user input entry points visited during a manual traversal of the web application.

Technique ID: 3 - Environment(s) env-Web

Use a browser to manually explore the website and analyze how it is constructed. Many browser plugins are available to facilitate the analysis or automate the discovery.
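
As a sketch of the spidering approach (the start URL is a placeholder, and the third-party requests and BeautifulSoup libraries are one arbitrary choice of tooling), a crawler can record URL parameters and form fields as candidate entry points:

    from urllib.parse import urljoin, urlparse, parse_qs
    import requests                      # third-party: pip install requests
    from bs4 import BeautifulSoup        # third-party: pip install beautifulsoup4

    def survey(start_url: str, limit: int = 50) -> dict:
        # Crawl same-host pages, recording URL parameters and form fields
        # as candidate user-controllable entry points.
        seen, queue = set(), [start_url]
        entry_points = {"url_params": set(), "form_fields": set()}
        host = urlparse(start_url).netloc
        while queue and len(seen) < limit:
            url = queue.pop(0)
            if url in seen or urlparse(url).netloc != host:
                continue
            seen.add(url)
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            entry_points["url_params"].update(parse_qs(urlparse(url).query))
            for form in soup.find_all("form"):
                for field in form.find_all(["input", "textarea", "select"]):
                    if field.get("name"):
                        entry_points["form_fields"].add(field["name"])
            for link in soup.find_all("a", href=True):
                queue.append(urljoin(url, link["href"]))
        return entry_points

    print(survey("http://target.example/"))    # placeholder target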

Indicator ID: 1 - Environment(s) env-Web

Type: Positive

Inputs are used by the application or the browser (DOM).

Indicator ID: 2 - Environment(s) env-Web

Type: Inconclusive

Using URL rewriting, parameters may be part of the URL path.

Indicator ID: 3 - Environment(s) env-Web

Type: Inconclusive

No parameters appear to be used on the current page. Even though none appear, the web application may still use them if they are provided.

Indicator ID: 4 - Environment(s) env-Web

Type: Negative

Applications that have only static pages or that simply present information without accepting input are unlikely to be susceptible.


Security Control ID: 1

Type: Detective

Monitor the velocity of page fetching in web logs. Humans who view a page and select a link from it click far more slowly and far less regularly than tools. Tools make requests very quickly, typically spaced at regular intervals (e.g., 0.8 seconds apart).
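
One way to sketch this check, assuming a common Apache-style access log and arbitrary example thresholds of a one-second mean gap and 0.2 seconds of jitter, is to compute the inter-request intervals per client and flag clients whose requests are both fast and highly regular:

    import re
    from collections import defaultdict
    from datetime import datetime
    from statistics import mean, pstdev

    # Assumed Apache "combined" log style: client IP first, timestamp in [...].
    LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')

    def flag_probable_bots(log_lines, max_mean=1.0, max_jitter=0.2):
        times = defaultdict(list)
        for line in log_lines:
            m = LINE.match(line)
            if m:
                ts = datetime.strptime(m.group(2), "%d/%b/%Y:%H:%M:%S %z")
                times[m.group(1)].append(ts)
        bots = []
        for ip, stamps in times.items():
            stamps.sort()
            gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
            if len(gaps) >= 5 and mean(gaps) < max_mean and pstdev(gaps) < max_jitter:
                bots.append(ip)    # fast and regular: likely an automated tool
        return bots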

Security Control ID: 2

Type: Detective

Create links on some pages that are visually hidden from web browsers. Using iframes, images, or other HTML techniques, the links can be hidden from humans browsing the site but remain visible to spiders and programs. A request for such a page then becomes a good predictor of an automated tool probing the application.

Security Control ID: 3

Type: Preventative

Use CAPTCHA to prevent the use of the application by an automated tool.

Security Control ID: 4

Type: Preventative

Actively monitor the application and either deny or redirect requests from origins that appear to be automated.


Outcome ID: 1

Type: Success

A list of URLs, with their corresponding parameters (POST, GET, COOKIE, etc.) is created by the attacker.

Outcome ID: 2

Type: Success

A list of application user interface entry fields is created by the attacker.

Outcome ID: 3

Type: Success

A list of resources accessed by the application is created by the attacker.



Step 2 - Probe entry points to locate vulnerabilities

The attacker uses the entry points gathered in the "Explore" phase as a target list and injects various UTF-8 encoded payloads to determine whether an entry point actually represents a vulnerability with insufficient validation logic, and to characterize the extent to which the vulnerability can be exploited.

Technique ID: 1 - Environment(s) env-Web

Try to use UTF-8 encoding of content in Scripts in order to bypass validation routines.

Technique ID: 2 - Environment(s) env-Web

Try to use UTF-8 encoding of content in HTML in order to bypass validation routines.

Technique ID: 3 - Environment(s) env-Web

Try to use UTF-8 encoding of content in CSS in order to bypass validation routines.
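
A concrete probe might wrap a script payload's angle brackets in overlong two-byte encodings and submit it to one of the recorded entry points. In the sketch below the target URL and the q parameter are placeholders, and the requests library is an arbitrary choice of HTTP client; reflection of the raw overlong bytes in the response suggests the validation routine never normalized or filtered them.

    from urllib.parse import quote_from_bytes
    import requests                                  # third-party HTTP client

    # Overlong two-byte encodings of '<' and '>' (illegal, but accepted by
    # some lenient decoders), wrapped around a harmless marker script.
    lt, gt = b"\xc0\xbc", b"\xc0\xbe"
    probe = lt + b"script" + gt + b"alert(1)" + lt + b"/script" + gt

    url = "http://target.example/search?q=" + quote_from_bytes(probe)
    resp = requests.get(url, timeout=10)

    if probe in resp.content:
        print("payload reflected without filtering or transcoding:", url)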

Indicator ID: 1 - Environment(s) env-Web

Type: Positive

The application accepts user-controllable input.


Security Control ID: 1

Type: Preventative

Implement input validation routines that filter or transcode UTF-8 content.

Security Control ID: 2

Type: Preventative

Specify the charset of the HTTP transaction/content.

Security Control ID: 3

Type: Detective

Monitor inputs to web servers and alert on unusual charsets and/or characters.

Security Control ID: 4

Type: Preventative

Actively monitor the application and either deny or redirect requests from origins that appear to be attack attempts.


Outcome ID: 1

Type: Success

The attacker's UTF-8 encoded payload is processed and acted on by the application without filtering or transcoding.

Outcome ID: 2

Type: Failure

The application decodes the charset and filters the inputs.



The Unicode Consortium recognized multiple representations to be a problem and has revised the Unicode Standard to make multiple representations of the same code point in UTF-8 illegal. The UTF-8 Corrigendum lists the newly restricted UTF-8 range (see references). Many current applications may not have been revised to follow this rule. Verify that your application conforms to the latest UTF-8 encoding specification, and pay extra attention to the filtering of illegal characters.

The exact response required from a UTF-8 decoder on invalid input is not uniformly defined by the standards. In general, there are several ways a UTF-8 decoder might behave when it encounters an invalid byte sequence, such as substituting a replacement character, ignoring the offending bytes, or reporting an error, and a single decoder may behave differently for different types of invalid input.

RFC 3629 only requires that UTF-8 decoders must not decode "overlong sequences" (where a character is encoded in more bytes than necessary but still follows the UTF-8 encoding pattern). The Unicode Standard requires a Unicode-compliant decoder to "...treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence."

Overlong forms are one of the most troublesome types of UTF-8 data. The current RFC says they must not be decoded, but older specifications only gave a warning, and many simpler decoders will happily decode them. Overlong forms have been used to bypass security validations in high-profile products, including Microsoft's IIS web server. Great care must therefore be taken to avoid security issues if validation is performed before conversion from UTF-8; it is generally much simpler to handle overlong forms before any input validation is done.

To maintain security in the case of invalid input, there are two main options. The first is to decode the UTF-8 before doing any input validation checks. The second is to use a decoder that, in the event of invalid input, returns either an error or text that the application considers harmless. A further possibility is to avoid conversion out of UTF-8 altogether, but this relies on every other piece of software that the data is passed to safely handling the invalid data.

Another consideration is error recovery. To guarantee correct recovery after corrupt or lost bytes, decoders must be able to recognize the difference between lead and trail bytes, rather than just assuming that bytes will be of the type allowed in their position.
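
Python's built-in codec illustrates the recovery behaviour described above: because it distinguishes lead bytes from trail bytes, a replacement-character policy lets it resume decoding at the next valid lead byte instead of misinterpreting everything that follows a corrupt byte. The sample inputs below are arbitrary.

    print(b"caf\xc3".decode("utf-8", errors="replace"))     # truncated 2-byte sequence -> 'caf' + U+FFFD
    print(b"ab\x80cd".decode("utf-8", errors="replace"))    # stray trail byte -> 'ab' + U+FFFD + 'cd'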

For security reasons, a UTF-8 decoder must not accept UTF-8 sequences that are longer than necessary to encode a character. If you use a parser to decode UTF-8 input, make sure that the parser filters out invalid UTF-8 sequences (ill-formed or overlong forms).
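
In Python, for example, the standard strict codec already rejects overlong and otherwise ill-formed sequences, so decoding before any validation gives a natural gatekeeper. A minimal sketch (the error-handling policy is the application's choice):

    def decode_or_reject(raw: bytes) -> str:
        # Strict decoding refuses overlong forms, stray continuation bytes
        # and other ill-formed sequences before any validation logic runs.
        try:
            return raw.decode("utf-8", errors="strict")
        except UnicodeDecodeError:
            raise ValueError("ill-formed UTF-8 input rejected")

    for sample in ("café".encode("utf-8"), b"..\xc0\xaf.."):
        try:
            print("accepted:", decode_or_reject(sample))
        except ValueError as err:
            print("rejected:", sample, "-", err)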

Look for overlong UTF-8 sequences starting with a malicious pattern. You can also use a UTF-8 decoder stress test to exercise your UTF-8 parser (see Markus Kuhn's UTF-8 and Unicode FAQ in the reference section).

Assume all input is malicious. Create a whitelist that defines all valid input to the software system, based on the requirements specification. Input that does not match the whitelist should not be permitted to enter the system. Test your decoding process against malicious input.
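
A sketch of that order of operations, with an arbitrary example whitelist (the real pattern must come from the requirements specification): decode strictly first, then accept only input that matches the whitelist.

    import re

    # Example whitelist: letters, digits and a few benign punctuation characters.
    ALLOWED = re.compile(r"[A-Za-z0-9 _.,-]{1,64}")

    def validate(raw: bytes) -> str:
        text = raw.decode("utf-8")          # strict by default: ill-formed input raises
        if not ALLOWED.fullmatch(text):
            raise ValueError("input outside the whitelist")
        return text

    print(validate(b"user_42"))             # accepted
    # validate(b"..\xc0\xaf..") raises before the whitelist is even consulted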