Disclaimer

The content of this blog is my personal opinion only. Although I am an employee - currently of Nvidia, in the past of other companies such as Imagination Technologies, MIPS, Intellectual Ventures, Intel, AMD, Motorola, and Gould - I reveal this only so that the reader may account for any possible bias I may have towards my employer's products. The statements I make here in no way represent my employer's position, nor am I authorized to speak on behalf of my employer. In fact, this posting may not even represent my personal opinion, since occasionally I play devil's advocate.


Thursday, May 11, 2017

Intel IAMT bug: strncmp(trusted,untrusted,strlen(untrusted))

Intel's embarrassingly negligent IAMT bug is ... easy to imagine happening.

Embedi's analysis shows that the bug is in code that looks like
if( strncmp(computed_response, user_response, response_length) )
    exit(0x99); 
using the user_response length to limit the length of the string comparison, rather than the expected length (which I believe is, in this case, a constant 16 bytes, i.e. 128 bits, the length of the MD5 hash used in the HTTP digest authentication).
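To make the failure concrete, here is a minimal sketch - names invented, modeled on Embedi's description of the empty-response exploit - of why letting the attacker supply the comparison length is fatal:

#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the flawed check: computed_response is the hash
   the firmware computed; user_response and response_length both come
   straight from the attacker's HTTP header. */
static int check_auth(const char *computed_response,
                      const char *user_response,
                      size_t response_length)
{
    if (strncmp(computed_response, user_response, response_length))
        return 0;   /* mismatch: reject */
    return 1;       /* "match": accept */
}

int main(void)
{
    const char *computed = "2c77bb1ab9a2b17a3ad14f5ee549ca4a";  /* made-up digest */

    /* The attacker sends an empty response field: zero bytes are compared,
       strncmp returns 0, and authentication "succeeds". */
    if (check_auth(computed, "", 0))
        printf("authenticated with an empty response\n");
    return 0;
}

With a zero length, strncmp compares nothing at all and reports equality, no matter what the computed hash was.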

Immediately I thought of
strncmp(computed_response, user_response, strlen(user_response))
which inspired the riff below.

Pay no attention to the riff below

Embedi's writeup indicates that the user_response is actually
struct AUTH_HEAD_VALUE {
    char *str;
    int len;
};
which probably invalidates any supposition that strlen may at one time have been involved.

Nevertheless, the riff is fun, and although inaccurate in detail, probably has some aspects of truth.

Riffing on strncmp(trusted, untrusted, bad_length)

When I heard and saw
strncmp(computed_response, user_response, response_length)
Immediately I thought
strncmp(computed_response, user_response, strlen(user_response))
I imagined the original code was
strcmp(computed_response, user_response)
I guessed that some "security audit" might have said
Security Auditor: strcmp is insecure, this code must be changed to use strncmp
Might not have been a human security audit.  Might have been a secure-lint tool.

---

The programmer who made the change to use strncmp looked around for a size_t to use as the maximum length to compare.  If the strings were "naked" null-terminated C strings, he may just have guessed wrong, choosing the length of the second argument rather than the first.

Embedi's writeup indicates that the user_response is not a naked C string, but is actually
struct AUTH_HEAD_VALUE {
    char *str;
    int len;
};
Hey, wait, there's a length here we can pass to strncmp!  It just happens to be the user response length. The wrong string length.
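A hypothetical before/after of that edit - purely a sketch, I have not seen the real source, and for the "before" I am pretending the strings were naked null-terminated C strings, with user_response written as a pointer to the AUTH_HEAD_VALUE struct above:

/* Before (hypothetical): */
if (strcmp(computed_response, user_response->str))
    exit(0x99);

/* After the "use strncmp" edict (hypothetical): there is a length sitting
   right there in the struct, so it gets used - but it is the attacker's
   length, not the trusted one: */
if (strncmp(computed_response, user_response->str, user_response->len))
    exit(0x99);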

---

Perhaps it was not obvious (to the programmer making the change) what the buffer length of the computed_response is.  BTW, it is really a buffer length, not a string length.  It might be declared as
typedef uint32_t MD5_hash[MD5_SIZE_IN_WORDS];
or
typedef uint8_t MD5_hash[MD5_SIZE_IN_BYTES];
Or it might have been malloc'ed.

The code might have made some provision for changing eventually to a non-MD5 hash, so you would not want to hard-wire the MD5_SIZE into the code. The code doing the comparison might not need to be aware that the hash involved was MD5.
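One way to do that - sketched below purely as an assumption on my part, none of these names are from the real code - is to let the trusted length come from the declared type of the computed hash, so the comparing code never mentions MD5 at all:

#include <stdint.h>
#include <string.h>

typedef uint8_t hash_value[16];   /* 16 bytes = 128 bits for MD5, today */

/* Compare using the trusted length, after first rejecting any response
   that is not exactly the right size. */
static int responses_match(const hash_value computed,
                           const char *user, size_t user_len)
{
    if (user_len != sizeof(hash_value))
        return 0;                            /* wrong-size response: reject */
    return memcmp(computed, user, sizeof(hash_value)) == 0;
}

Changing hash algorithms then means changing one typedef, not hunting down magic 16s scattered through the code.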

---

Possibly the hypothetical strcmp code was actually more secure than the strncmp code - so long as both hashes were guaranteed to be null terminated, and so long as the user response was guaranteed not to overflow any buffer allocated for it.

---

But come to think of it, optimized network code probably does not copy the user response from the header into a separate newly allocated string.  It probably sets AUTH_HEAD_VALUE.str to point directly into the buffer containing the headers.

(Or at least the buffer containing part of the headers.  If the headers are split into several buffers... well, that's a bug that has been seen before.)
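Something like this, as a sketch (every name here is invented; the struct is the AUTH_HEAD_VALUE from above):

/* Hypothetical: the Authorization header is parsed in place, so str points
   into the raw receive buffer and is NOT null-terminated; len is the only
   thing that delimits it. */
static struct AUTH_HEAD_VALUE parse_response_value(char *recv_buffer,
                                                   int value_offset,
                                                   int value_length)
{
    struct AUTH_HEAD_VALUE auth;
    auth.str = recv_buffer + value_offset;   /* points into the header buffer */
    auth.len = value_length;                 /* no '\0' is ever written */
    return auth;
}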

So, it is probably not "naked null-terminated C string" data.  Probably not:
strcmp(computed_response, user_response)
But if it were, then
strncmp(computed_response, user_response,
            max(strlen(computed_response),strlen(user_response)) )
might have been better.  Really, that is equivalent to strcmp - but at least it might silence the security audit tool's warning about strcmp, replacing it with equally annoying warnings about using strlen instead of strnlen.

---

But - the code audit / lint tools that might have triggered this may not have been security oriented. They may have been buffer overflow detectors like Valgrind or Purify, which may have warned about read accesses beyond the memory allocated to hold the hashes.

Strictly speaking, neither strcmp nor strlen needs to perform buffer-overflowing read accesses, if given properly sized null-terminated C string arguments.  But if an "optimized" strcmp or strlen is used, it is common to process things 4 or 8 bytes at a time - the number of bytes that fit into a 32-bit or 64-bit register - in which case the code might read beyond the end of the memory allocated for the string, past the terminating null byte.  In past lives, when I was more the "low level assembly code optimization guy" than a security guy, I have written such code.  It is hard to write optimized strcmp and strlen code that doesn't go past the terminating null but still runs faster than doing it a byte at a time.  Even fixed-length strnlen, strncmp, bcopy, and memcpy are hard to write using registers wider than the official granule size without going past the end.  Which is one reason why I advocate special hardware instruction support, whether RISC or CISCy microcode.
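For the curious, here is a sketch of the kind of word-at-a-time strlen I mean - a simplified illustration, not any particular library's implementation. Aligned 8-byte loads can never cross a page boundary, so it cannot fault, but it can and does read bytes past the terminating null, which is exactly what a byte-accurate checker will complain about:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical word-at-a-time strlen. */
size_t wide_strlen(const char *s)
{
    const char *p = s;

    /* Go byte-by-byte until p is 8-byte aligned. */
    while ((uintptr_t)p & 7) {
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }

    /* Scan 8 bytes per iteration; the classic "has a zero byte" bit trick
       is nonzero exactly when some byte of v is zero. */
    const uint64_t *w = (const uint64_t *)p;
    uint64_t v;
    do {
        v = *w++;                     /* may load bytes past the terminating null */
    } while (((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL) == 0);

    /* A null byte lies somewhere in the last word loaded; find it exactly. */
    p = (const char *)(w - 1);
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}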

Examining my motivation

When I heard about the IAMT bug, my first reaction was "I told them so - I told them that the ARC embedded CPU would lead to security bugs."  Yes, really, I did (see another post).

But I also said to myself "That's what you get when you have the B-team working on an important feature".

Face it: at Intel, at least when the manageability push started, the A-team worked on CPUs, hardware and software.  The guys working on chipsets and I/O devices were usually understaffed and in a hurry.

I don't like thinking such uncharitable thoughts. Moreover, chipsets and I/O devices are more and more important.  One of the smartest guys I know told me that he decided to go into chipsets because there were too many guys working on Intel CPUs, too much bureaucracy, while in chipsets he could innovate more easily.

So I imagined these scenarios, about code reviews and security audits and lint tools
Security Auditor: strcmp is insecure, this code must be changed to use strncmp
and making changes in a hurry, as a way of understanding how such bugs could have happened.

Doesn't excuse the bugs.  But may be more productive in determining how to prevent such bugs in the future, than simply saying (as I heard one security podcaster say): "This is unimaginably bad code.  One wonders how Intel could ever have allowed such code to pass code reviews and security audits.  One wonders if one should ever trust Intel CPU security again."

Conclusion

My overall point is that code reviews, security audits, and tools such as security lints or buffer overflow detectors, may have triggered code changes that introduced this bug to originally correct code.

This is no excuse.  It is even more important to review code changes than original code, since the bug density of changes is higher.

Of course, it is possible that the bug was present since the code was originally written.

