Interest in Three-Dimensional Video (3DV) technologies has grown considerably in both the academic and industrial worlds in recent years. A simple and flexible 3DV representation is the so-called Multiview Video-plus-Depth (MVD) format, in which depth maps are provided together with multi-view video. Depth maps are typically used to synthesize the desired output views, and the performance of view synthesis algorithms depends strongly on the accuracy of the depth information.

In this thesis, novel algorithms for efficient depth map compression in MVD scenarios are proposed, with particular focus on edge-preserving solutions. In a first proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal, improving depth coding performance in terms of both compression gain and edge preservation. Another solution introduces a new intra coding mode targeted at depth blocks featuring arbitrarily shaped edges, in which edge information is encoded by exploiting previously coded edge blocks. Integrated into H.264/AVC, the proposed mode achieves significant bit rate savings compared with a number of state-of-the-art depth codecs; view synthesis performance is also improved in terms of both objective and visual evaluations. Depth coding based on standard H.264/AVC is then explored for multi-view-plus-depth image coding: a single depth map is used to disparity-compensate multiple views, allowing more efficient coding than H.264 MVC at low bit rates.

Lossless coding of depth maps is also addressed: an efficient scheme for stereoscopic disparity maps based on bit-plane decomposition and context-based arithmetic coding is proposed, in which inter-view redundancy is exploited by means of disparity warping. Major gains in compression efficiency are observed when comparing with a number of standard solutions for lossless coding.

Finally, new approaches for distributed video-plus-depth coding are also presented in this thesis.
Motion correlation between the texture and depth signals is exploited at the decoder side to improve the performance of the side information generation algorithm. In addition, optical flow techniques are used to extract dense motion information and generate improved candidate side information; multiple candidates are then merged employing multi-hypothesis strategies. Promising rate-distortion improvements over state-of-the-art Wyner-Ziv decoders are reported, both when texture is exploited to encode depth maps and vice versa.
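The multi-hypothesis merging of candidate side information can be illustrated with a minimal sketch. The function below (`merge_hypotheses` is a hypothetical name, not from the thesis) merges candidate frames by a pixel-wise weighted average; the actual strategy in the thesis may instead select or weight candidates per block based on matching costs.

```python
import numpy as np

def merge_hypotheses(candidates, weights=None):
    """Merge candidate side-information frames into one estimate.

    A simple pixel-wise weighted average over all candidates; this is
    an illustrative stand-in for the thesis's multi-hypothesis strategy.
    """
    stack = np.stack([c.astype(np.float64) for c in candidates])
    if weights is None:
        # Default: equal weight for every candidate.
        weights = np.full(len(candidates), 1.0 / len(candidates))
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    merged = (stack * w).sum(axis=0)
    return np.clip(np.round(merged), 0, 255).astype(np.uint8)

# Two hypothetical candidates, e.g. one from block-based motion
# interpolation and one from dense optical flow (values illustrative).
cand_a = np.full((2, 2), 100, dtype=np.uint8)
cand_b = np.full((2, 2), 110, dtype=np.uint8)
print(merge_hypotheses([cand_a, cand_b]))  # pixel-wise mean: 105
```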