Model-Based Delay-Distortion Optimization for Video Streaming Using Packet Interleaving
 

Yi J. Liang, John G. Apostolopoulos, and Bernd Girod
Streaming Media Systems Group, Hewlett-Packard Labs, Palo Alto, CA 94304
Information Systems Laboratory, Stanford University, Stanford, CA 94305
Invited Paper
ABSTRACT
Bursty channel losses have been shown to generally produce larger total mean-square error distortion in streaming video than an equivalent number of isolated losses. This paper proposes a simple packet interleaving scheme to combat the effect of bursty losses by dispersing them. The optimal interleaver for minimizing the expected total distortion of the decoded video, subject to a delay constraint, is determined using a model that accurately predicts the expected distortion for different packet loss patterns. Compared to other forms of error-resilience, packet interleaving has the advantages of (1) simplicity and (2) not requiring any extra bitrate. For a simple burst loss channel, where each loss event has 100 ms
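The dispersal mechanism the abstract describes can be illustrated with a generic row/column block interleaver: packets are written row-by-row into a matrix and transmitted column-by-column, so a channel burst that wipes out consecutive transmitted packets maps to isolated losses in display order. This is a minimal sketch for illustration only; the `depth` parameterization is an assumption here, not the paper's delay-optimized design.

```python
def interleave(packets, depth):
    """Block interleaver: treat `packets` as a depth x width matrix
    stored row-major, and emit it column-major for transmission.
    Assumes len(packets) is a multiple of `depth` (illustration only)."""
    width = len(packets) // depth
    return [packets[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(packets, depth):
    """Inverse: map each transmission-order index back to its
    row-major (display-order) position."""
    width = len(packets) // depth
    out = [None] * len(packets)
    for i, p in enumerate(packets):
        c, r = divmod(i, depth)  # column, row in transmission order
        out[r * width + c] = p
    return out
```

With this layout, a burst of `depth` consecutive transmitted packets destroys one full column, i.e. one packet from each row, and those losses sit `width` positions apart in display order. The price, consistent with the paper's delay constraint, is the buffering needed to fill the matrix before transmission and again before decoding.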

Source: Apostolopoulos, John - Hewlett Packard Research Labs
Girod, Bernd - Department of Electrical Engineering, Stanford University


Collections: Computer Technologies and Information Sciences; Engineering