Most sounds encountered in everyday life carry information in the temporal variations of their envelopes. These envelope variations, or amplitude modulations, form the basic building blocks of speech, music, and other complex sounds. Natural acoustic scenes often contain a mixture of such sounds, each with its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds, since it can enhance their audibility when they are embedded in similar background interferers, a phenomenon referred to as comodulation masking release (CMR). Knowledge of the auditory processing of amplitude modulations therefore provides crucial information for a better understanding of how the auditory system analyses acoustic scenes.

The purpose of the present thesis is to develop a computational auditory processing model that accounts for a large variety of experimental data on CMR, in order to obtain a more thorough understanding of the basic principles underlying across-frequency modulation processing. The second chapter introduces a processing stage in which information from different peripheral frequency channels is combined. This so-called across-channel processing is assumed to take place at the output of a modulation filterbank and is crucial for accounting for CMR conditions in which the frequency spacing of the comodulated components is relatively large. The third chapter investigates the role of nonlinear inner-ear (cochlear) processing in CMR. A compressive nonlinearity is incorporated into the modeling framework suggested in the second chapter; this nonlinearity is necessary to account for CMR in conditions that are sensitive to cochlear suppression.
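As a minimal illustration of the kind of compressive nonlinearity referred to above (the sketch below is not the thesis's actual model): basilar-membrane compression is often approximated by a power law applied to the signal envelope, with an exponent well below 1, so that a doubling of input level yields much less than a doubling of output. The exponent of 0.3 here is an assumed, illustrative value.

```python
import numpy as np

def compress(envelope, exponent=0.3):
    """Apply an illustrative power-law compressive nonlinearity
    to a non-negative envelope (exponent 0.3 is an assumption)."""
    return np.power(envelope, exponent)

# Each doubling of the input grows the output by only 2**0.3 (about 1.23),
# rather than by a factor of 2 as in a linear system:
env = np.array([1.0, 2.0, 4.0])
out = compress(env)
ratios = out[1:] / out[:-1]
```

Any model fit to CMR data would of course use a physiologically motivated input/output function rather than this single fixed exponent.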
The fourth chapter examines the role of cognitive processing in different stimulus paradigms: CMR, binaural masking level differences, and modulation detection interference are investigated in the context of auditory grouping. It is shown that auditory grouping can influence the results in conditions where processing in the auditory system is dominated by across-channel comparisons. Overall, this thesis provides insights into the specific mechanisms involved in the perception of comodulated sounds. The results form an important basis for future models of complex modulation processing in the human auditory system.