Ian Schorr
changed
bug 5778
What           | Removed  | Added
Status         | RESOLVED | UNCONFIRMED
CC             |          | [email protected]
Resolution     | FIXED    | ---
Ever confirmed |          | 1
Comment #4 on bug 5778 from Ian Schorr
This appears to have introduced a regression.
get_unicode_or_ascii_string() is used in a number of different places, and in
most places I can see, the decision on whether or not to add a padding byte is
based on the offset of the string within the packet, not on the length of the
buffer.
See the attached capture (vctest.cap), for example. Look at the "Transaction
Name" in frames 16 and 18 (towards the end of the Trans Request section). In
frame 16 the bytecount is an odd number and there is a padding byte, but the
string started at an odd offset. In frame 18 the bytecount is an even number
but there IS a padding byte, so the string is decoded incorrectly (which goes
on to break SMB's treatment of this packet as a PIPE protocol packet, DCE/RPC,
etc.). I believe I have other examples of strings in other kinds of SMB
commands being broken by the same change.
MS-SMB says the following about strings in section 2.2.1.1 (Character Sequences):
Unless otherwise noted, when a Unicode string is passed it MUST be aligned to a
16-bit boundary with respect to the beginning of the SMB Header (section
2.2.3.1). In the case where the string does not naturally fall on a 16-bit
boundary, a null padding byte MUST be inserted, and the string MUST begin at
the next address. For Core Protocol messages in which a buffer format byte
precedes a Unicode string, the padding byte is found after the buffer format
byte.
...This seems to sway more towards the original behavior of
get_unicode_or_ascii_string() being correct, doesn't it?
You are receiving this mail because:
- You are the assignee for the bug.
- You are watching all bug changes.