Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influence of two such cues on speech intelligibility was studied. First, the benefit from early reflections (ERs) in a room was determined using a virtual auditory environment. ERs were found to be useful for speech intelligibility, but to a smaller extent than the direct sound (DS). The benefit was quantified with an intelligibility-weighted "efficiency factor", which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required for the integration of the ERs with the DS. Different masker types affected the binaural processing of the overall speech signal but not the processing of the ERs.

Second, the influence of interaural level differences (ILDs) on speech intelligibility was investigated with a hearing aid research platform. ILDs are considered important for localizing sounds and for perceptually separating competing sound sources. Bilateral hearing aids with independent compression algorithms typically reduce ILDs, distorting the perception of spatial sounds. Binaurally linked hearing aids can utilize the signals at both ears and preserve the ILDs through coordinated compression. Hearing-impaired listeners received a small, but not significant, advantage from linked compared to independent compression. It was concluded that exact ILD information is not crucial for speech intelligibility.
The results from an additional experiment demonstrated that the ER benefit was maintained under both independent and linked hearing aid compression. Overall, this work contributes to the understanding of ER processing in listeners with normal and impaired hearing, and may have implications for speech perception models and for the development of compensation strategies in future generations of hearing instruments.