Increasing the User Perceived Quality for IPTV Services
February 01, 2008 | By ALU
Transcript
Increasing the User Perceived Quality for IPTV Services
Natalie Degrande, Koen Laevens, and Danny De Vleeschauwer, Alcatel-Lucent Bell; Randy Sharpe, Alcatel-Lucent USA Inc.
ABSTRACT
Currently, digital television is gradually replacing analogue TV. Although these digital TV services can be delivered via various broadcast networks (e.g., terrestrial, cable, satellite), Internet Protocol TV over broadband telecommunication networks offers much more than traditional broadcast TV. Not only can it improve the quality that users experience with this linear programming TV service, but it also paves the way for new TV services, such as video-on-demand, time-shifted TV, and network personal video recorder services, because of its integral return channel and the ability to address individual users. This article first provides an overview of a typical IPTV network architecture and some basic video coding concepts. Based on these, we then explain how IPTV can increase the linear programming TV quality experienced by end users by reducing channel-change latency and mitigating packet loss. For the latter, forward error correction and automatic repeat request techniques are discussed, whereas for the former a solution based on a circular buffer strategy is described. This article further argues that the availability of larger buffers in the network enables IPTV to better offer new services (in particular, time-shifted TV, network personal video recorder, and video-on-demand) than the competing platforms.

INTRODUCTION
Digital television (TV) is gradually replacing analogue TV. There are a number of competing methods to deliver digital TV services. Terrestrial, satellite, and cable networks have switched or are switching to transporting TV signals over digital channels. The new kid on the block is TV transported over an Internet Protocol (IP)-based telecommunication network. Telecom operators have deployed IPTV over digital subscriber line (DSL) technology or even over passive optical network (PON) technology, whereas cable operators envision offering IPTV via the data-over-cable service interface specifications (DOCSIS). Because of its integral return channel and the possibility to address users individually, IPTV technology can offer much more than the traditional linear programming TV service. New services such as video-on-demand, time-shifted TV, network personal video recorder, and others become practical.
Figure 1 shows a typical IPTV architecture [1]. The super head end (SHE) (re)encodes and packetizes each broadcast TV channel it receives from various nationwide content providers into an IP flow. These IP flows are transported over a core network to a number of video hub offices (VHOs). A VHO also can (re)encode and packetize local broadcast TV channels. A VHO delivers both, that is, the nationwide and local IP flows, over a metro network to each video serving office (VSO), which in turn distributes the content over an access network and possibly a home network to a user’s set top box (STB). Because all networks are multicast-enabled, there is a multicast tree set up per TV channel. The SHE and VHO also can host a video-on-demand server (farm) with movie titles, recently broadcast shows, and so on. A user requesting such a piece of content sets up a unicast flow from a video-on-demand server to his or her STB. Note that there may be more than one STB per home (each serving one TV).
This article discusses the improvements that can be made in this network to increase the quality the user experiences. In particular, for the linear programming TV service, we show how the effect of packet loss is mitigated and how channel-change latency is reduced. For the on-demand services, we point out how intelligent content distribution techniques improve the responsiveness. But before we delve into these matters, we briefly introduce the elements of video coding technology that are required to understand the solutions we will describe.

■ Figure 1. A typical IPTV network architecture.

VIDEO CODING TECHNOLOGY
Video consists of a sequence of frames taken at regular time intervals (typically every 33.3 ms or 40 ms). Because in its raw form video requires a bit rate much too high for economical transport, it is compressed using video codec technology. Most standardized codecs (e.g., MPEG2, H.264) use a motion-compensated predictive technique [2]. Figure 2 illustrates that this technique relies on organizing the frames into a set of anchor frames (also referred to as I-frames), often regularly spaced in time, and a set of predicted frames (also referred to as P- or B-frames) between these anchor frames. The anchor frames are encoded (with image compression techniques similar to still image compression) without making reference to other frames. The predicted frames are first predicted based on one or more surrounding frames, using the estimated motion of the objects in the frames they refer to. To reconstruct a predicted frame, the decoder requires only the decoded versions of the frames it refers to, the motion of the objects, and some information pertaining to the unpredictable parts of that frame. All predicted frames between two anchor frames, together with the starting anchor frame, often are referred to as a group of pictures (GoP) and the distance between anchor frames as the GoP size. A typical choice for the GoP size ranges between 0.5 s and 2 s.
The anchor frames can be decoded on their own but require many bits to encode. The predicted frames require fewer bits, but rely on the successful decoding of the frames they refer to. Indeed, if a frame has a decoding error, this error propagates to predicted frames using it as a reference. Because an anchor frame is required to stop decoding errors from propagating, and decoding can begin only after the reception of an anchor frame (including after changing channels), a GoP size that is too long is not desired. Therefore, the optimum GoP size is a trade-off between compression efficiency, decoding error visibility, and channel-change performance. Later we discuss network techniques that allow having a large GoP size (and hence a lower bit rate), while keeping the impact of packet loss minimal and the channel-change latency small.
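As a rough illustration of this trade-off, the Python sketch below (an illustrative calculation, not part of the original article; the GoP sizes are the typical values quoted above) estimates how long a viewer who tunes in at a random instant waits for the next anchor frame, and how long a decoding error stays visible on average; both grow linearly with the GoP size.

# Illustrative sketch of the GoP-size trade-off (assumed example values).

def mean_anchor_wait(gop_size_s):
    # A viewer joining at a random instant waits, on average, half a GoP
    # for the next anchor (I-) frame before decoding can start.
    return gop_size_s / 2.0

def mean_error_visibility(gop_size_s):
    # A decoding error hitting a random point in a GoP propagates, on average,
    # for half a GoP until the next anchor frame stops it.
    return gop_size_s / 2.0

for gop in (0.5, 1.0, 2.0):  # typical GoP sizes quoted above
    print("GoP %.1f s: mean anchor wait %.0f ms, mean error visibility %.0f ms"
          % (gop, mean_anchor_wait(gop) * 1000, mean_error_visibility(gop) * 1000))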

LINEAR PROGRAMMING TV
The linear programming TV service mimics the functionality of traditional analogue TV. That is, the user is presented with a bouquet of channels (and often also an electronic program guide) and can watch one of these channels on full screen view (and possibly another channel as a picture-in-picture view). Users dislike a video that is disturbed by too many visible artifacts and favor a small channel-change latency. In this section, we discuss how the user experience can be improved for the linear programming TV service.

MITIGATING PACKET LOSS
We assume that the video sequence is encoded at a bit rate that is high enough for the video to be free of artifacts due to encoding. In this case, visible distortions still can occur due to packets that are lost during transit from the encoder to the STB. The current state-of-the-art decoders are sensitive to packet loss. A lost packet with information pertaining to an anchor frame damages the anchor frame itself and all other (predicted) frames in the GoP that depend on it. A packet loss affecting a predicted frame may or may not propagate (depending on whether or not the frame in question is used as a reference for other frames). With the protocol stacks currently used, it is not readily known what type of information a packet contains. Therefore, the worst case is often assumed: that every lost packet translates into a visible (or audible) artifact. Standards bodies [3, 4] recommend that the mean time between visible distortions should be greater than four hours.
In the IPTV network shown in Fig. 1, there are various sources of packet loss. First, the last mile in the access network, that is, the DSL link from the DSL access multiplexer (DSLAM) to the user’s home, is often hampered by noise. Link layer techniques (a combination of interleaving and Reed-Solomon and trellis coding) can drastically reduce the impact of noise, but at the expense of (high) overhead and/or latency. It is hard to tune these techniques such that the line is robust enough for services like video, without introducing too much latency for services like voice and gaming, and without wasting too much (overhead) bandwidth. Often a compromise is made with a resulting packet loss that does not achieve the target of a mean time between visible distortions of four hours. Because home networks also experience noise (wireless links, IP over power-line), they often are prone to packet loss as well. Another source of packet loss is congestion in the network resulting in buffers overflowing in the network nodes. Most IPTV networks are dimensioned such that buffer overflow seldom occurs. Finally, link or node failures in the network also can cause packet loss. Layer 1-3 protection mechanisms (automatic protection switching, IP rerouting, etc.) handle the restoring of connectivity, typically within a 50 ms to 250 ms window after the failure was detected, but during this period all packets are lost.
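As a rough feel for why reroute losses are so hard to mask, the sketch below counts the packets lost during a 50 ms to 250 ms restoration window, assuming seven 188-byte MPEG-2 TS cells per IP packet and example channel rates of 4 and 10 Mb/s (these figures are assumptions, not taken from the article).

# Rough sketch: packets lost during a layer 1-3 restoration window (assumed figures).

PAYLOAD_BYTES = 7 * 188  # assumed: 7 MPEG-2 TS cells of 188 bytes per IP packet

def packets_lost(outage_s, video_rate_bps):
    packet_rate = video_rate_bps / (PAYLOAD_BYTES * 8)  # packets per second
    return outage_s * packet_rate

for outage_ms in (50, 250):          # restoration window cited above
    for rate_mbps in (4, 10):        # assumed SD and HD channel rates
        n = packets_lost(outage_ms / 1000.0, rate_mbps * 1e6)
        print("%3d ms outage at %2d Mb/s: about %4.0f packets lost" % (outage_ms, rate_mbps, n))

A contiguous loss of tens to hundreds of packets of this kind is what makes failure-induced loss expensive to correct with FEC or retransmissions, as noted later in this section.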
There are two application layer techniques to reduce the impact of packet loss: application layer forward error correction (FEC) and automatic repeat request (ARQ). The application layer FEC technique relies on injecting redundant information into the video flow, which enables the receiver to reconstruct lost data. A number of FEC packets are calculated and transmitted with each block of payload packets. The ARQ technique requests the retransmission of only the lost packets. Note that if the first request or retransmission gets lost, an additional retransmission can be requested, and this procedure can be repeated as long as the retransmission arrives before the decoder requires it.
Both techniques introduce an additional latency mainly because they must build up a buffer in the STB before they can reliably compensate for packet loss. For application layer FEC, this buffer must be large enough to accommodate a block of payload packets and the associated FEC packets, such that the missing payload packets can be reconstructed. For ARQ, this buffer must be large enough for lost packets to be reliably detected and for retransmissions to be requested and received. The larger the buffer, the more retransmissions are possible.
Both techniques also introduce an additional overhead bit rate. For FEC, this is a constant flow of FEC packets, whereas for ARQ, this is a variable flow of retransmitted packets. On the DSL link and the home network, the FEC overhead is usually a lot larger than the ARQ overhead, because FEC packets are often transported in vain, that is, if all payload packets in a block arrive (which is common), the FEC packets are superfluous; whereas ARQ only retransmits the packets that were actually lost. In the multicast-enabled core, metro, and access network, the FEC overhead can be multicast and is proportional to the number of channels (and does not depend on the number of users), whereas for an ARQ scheme, each STB requests its individually lost packets over a unicast flow and as such, the overhead for ARQ is proportional to the number of users.
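This scaling argument can be put into rough numbers. The sketch below compares the aggregate overhead injected into the shared, multicast-enabled part of the network, assuming an example channel rate, channel count, subscriber count, FEC fraction, and loss ratio (all made-up illustration values).

# Back-of-the-envelope comparison of aggregate overhead (all numbers assumed).

def fec_overhead_bps(n_channels, channel_rate_bps, fec_fraction):
    # FEC packets are multicast once per channel, regardless of the audience size.
    return n_channels * channel_rate_bps * fec_fraction

def arq_overhead_bps(n_users, channel_rate_bps, loss_ratio):
    # Each STB requests only its own lost packets over a unicast flow
    # (the request messages themselves are neglected here).
    return n_users * channel_rate_bps * loss_ratio

RATE = 4e6  # assumed 4 Mb/s per channel
print("FEC, 200 channels at 4%% overhead: %.0f Mb/s" % (fec_overhead_bps(200, RATE, 0.04) / 1e6))
print("ARQ, 10,000 users at 1e-4 loss   : %.0f Mb/s" % (arq_overhead_bps(10000, RATE, 1e-4) / 1e6))

With these assumed numbers the FEC term stays fixed however many viewers there are, whereas the ARQ term grows linearly with the subscriber base, which is the dependence described above.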
Application layer FEC fails if the packet loss is so severe that more (payload and FEC) packets from a FEC block are lost than can be recovered. ARQ fails if none of the (consecutive) retransmissions of a lost packet that can be requested within the available time window arrive successfully at the receiver. The application layer FEC and ARQ schemes must be tuned such that this failure occurs less frequently than the specified target of a mean time of four hours between visible distortions dictates.
To perform application layer FEC, various FEC schemes can be used. The best performing FEC schemes attain the performance of a maximum distance separable code. If in such a maximum distance separable code, K payload packets are collected to which M FEC packets are added, making up a block of N (= K + M) packets, this scheme can recover any set of M lost packets out of the block of these N packets. An example of such a maximum distance separable code is a Reed-Solomon code [5]. Other FEC schemes are, for example, binary codes [6], Raptor codes [7], the Pro-MPEG code of practice 3 FEC code [8], low-density parity check codes [5], hybrid schemes [4, 9], and so on. Most of these codes do not reach the performance of a maximum distance separable code in all circumstances but are computationally a little less demanding than Reed-Solomon codes.
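A minimal sketch of the maximum distance separable property, under an erasure-channel assumption, is given below; it models only the recovery condition (any K of the N packets suffice), not the Reed-Solomon arithmetic itself, and the loss ratio is an example value.

# Erasure-recovery condition of an MDS (N, K) block: decodable iff at least K packets arrive.
import random

def block_recoverable(n_received, N, K):
    # Any K of the N packets (payload or FEC) are enough to rebuild the block.
    return n_received >= K

N, K = 78, 72                    # the RS(78,72) parameters used as a label in Fig. 3
random.seed(1)
received = sum(1 for _ in range(N) if random.random() > 0.01)   # 1% random loss, assumed
print(received, "of", N, "packets arrived ->",
      "block recovered" if block_recoverable(received, N, K) else "visible artifact")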
Which of the two techniques, application layer FEC or ARQ, performs best depends on the round trip time (RTT) from the STB to the place from where the retransmissions are performed. For low RTT values (very common in DSL access networks), that is, when there is enough time to allow for several retransmissions, ARQ outperforms application layer FEC. For example, Fig. 3 shows the resulting mean time between visible distortions after Reed-Solomon application layer FEC (dashed lines) and ARQ (solid lines) as a function of the packet-loss ratio. The thick line shows the mean time between visible distortions when no application layer protection is used. The curves pertain to high definition TV and apply to a link layer protection technique that is tuned to 8 ms interleaving [10]. For ARQ, the label L indicates the maximum number of retransmissions. The buffer in the STB must be L times the RTT. The overhead bit rate associated with the ARQ schemes that meet the target of a mean time between visible distortions of four hours is very low (less than 1 percent). The label RS(N,K) pertains to application layer FEC with N the total number of packets in a FEC block and K the number of payload packets in this block. The code labeled RS(78,72) requires an STB buffer of 100 ms, whereas the two other codes require a buffer of 200 ms. The code labeled RS(150,144) has an associated overhead bit rate of 4 percent, whereas the two other codes have twice this overhead.
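The qualitative behavior can be reproduced with a much cruder model than the analysis behind Fig. 3: assuming independent packet losses, an ARQ packet remains unrecoverable only if the original and all L retransmissions are lost, whereas an RS(N,K) block fails once more than N - K of its packets are lost. The sketch below uses an assumed packet rate and loss ratio and counts every failure event as one visible distortion; it is an illustration, not the article's model.

# Crude mean-time-between-visible-distortions comparison (independent losses assumed).
from math import comb

PACKET_RATE = 1000.0   # assumed video packets per second for an HD channel
TARGET_S = 4 * 3600.0  # four hours between visible distortions

def arq(p, L):
    mtbvd = 1.0 / (PACKET_RATE * p ** (L + 1))       # fails only if original + L retries are lost
    overhead = sum(p ** i for i in range(1, L + 1))  # expected retransmissions per packet
    return mtbvd, overhead

def fec(p, N, K):
    M = N - K
    p_block = sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(M + 1, N + 1))
    mtbvd = 1.0 / ((PACKET_RATE / K) * p_block)
    return mtbvd, M / float(K)

p = 1e-3  # assumed packet-loss ratio
for name, (mtbvd, ovh) in (("ARQ, L=2 ", arq(p, 2)), ("RS(78,72)", fec(p, 78, 72))):
    print("%s: MTBVD %.3g h, overhead %.2f%%, meets 4 h target: %s"
          % (name, mtbvd / 3600, ovh * 100, mtbvd > TARGET_S))

Under these simplified assumptions both schemes clear the four-hour target, but ARQ does so with nearly two orders of magnitude less overhead; the actual curves in Fig. 3 additionally reflect loss correlation on the DSL link, which block-based FEC tolerates less well than retransmissions spread over time.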
Either application layer FEC or ARQ can correct for packet loss caused by noise on the DSL link, moderate packet loss in the home network, and packet loss due to buffer overflow, at the expense of a few hundred milliseconds of additional latency and an overhead of less than 10 percent for FEC (and much less overhead for ARQ). However, the packet loss caused by node or link failures followed by reroutes is hard to correct at reasonable amounts of overhead bit rate and additional latency.
While application layer FEC is best added at or near the encoder (i.e., in the SHE or the VHO), for ARQ it is more beneficial to host the retransmission from a network buffer at the edge of the network (e.g., in the VSO). This reduces the RTT, and because the overhead for ARQ in the network (other than the DSL link) is proportional to the number of users, it also alleviates the overhead in the network beyond the retransmission buffer.

FAST CHANNEL CHANGE
■ Figure 4. Traditional channel change.

Figure 4 illustrates why the channel-change latency can be high. This figure shows three cumulative traffic curves. The left green line is the cumulative (departed) traffic curve associated with the encoder. This line indicates the number of bits that were sent over the multicast tree from the encoder as time evolves. The dashed green line is the cumulative arrived traffic curve associated with the STB. This line indicates the number of bits that arrived at the STB as time evolves. The right green line is the cumulative departure traffic curve associated with the STB. This line indicates the number of bits that were read from the STB and presented to the decoder as time evolves. The slope of each line corresponds to the bit rate. The horizontal difference between the left green line and the dashed green line is the time required to send a particular bit from the encoder to the STB. The vertical difference between the green dashed line and the right green line is the number of bits in the STB buffer.
Figure 4 shows that (in addition to the latency associated with the signaling messages not shown in the figure) the two most important components of the channel-change latency are the wait for an anchor frame to arrive before decoding can begin and the time that the STB requires to build up a buffer (e.g., to be able to compensate for packet loss as discussed in the previous section and/or to prevent buffer under-run during decoding due to jitter on the network).
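For orientation only (the GoP size, buffer depth, and signalling delay below are assumed example values, not figures from the article), the two dominant terms add up roughly as follows, and already exceed the 0.5 s target discussed next.

# Dominant terms of the traditional channel-change latency (assumed example values).

def traditional_zap_latency(gop_size_s, stb_buffer_s, signalling_s=0.1):
    mean_anchor_wait = gop_size_s / 2.0  # joining the multicast at a random instant
    buffer_fill = stb_buffer_s           # the buffer fills in real time at the channel rate
    return signalling_s + mean_anchor_wait + buffer_fill

print("%.2f s" % traditional_zap_latency(gop_size_s=1.0, stb_buffer_s=0.2))  # about 0.8 s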
According to [11], an acceptable channel-change latency is less than 0.5 s. As explained previously, the GoP size is often larger than this value (for the video bit rate to be low enough), such that without special measures the channel-change latency is likely to be unsatisfactory.
■ Figure 5. Fast channel change.

Figure 5 illustrates a technique to drastically decrease the channel-change latency. It consists of the following steps. When the user tunes in to a channel, a unicast flow is set up. Over this unicast flow, the packets associated with one of the most recent anchor frames are first pushed to the STB (followed by the packets associated with the frames after this anchor frame). After the STB receives the anchor frame, it can immediately start to decode and display the sequence. However, there are two problems. First, the unicast flow lags behind the multicast flow, that is, it sends information that was transmitted over the multicast tree some time ago. Second, if the STB starts to play out immediately, it has no opportunity to build up a buffer. By sending the unicast at a higher rate than the multicast flow, both problems are solved. The red dashed line in Fig. 5 illustrates the unicast burst. The fact that its slope is larger than the slopes of the green lines (associated with the multicast flow) indicates that the unicast flow has a higher bit rate. The moment the unicast flow catches up with the multicast flow (where the dashed lines meet), the STB terminates the unicast flow and joins the multicast flow. The lower the overhead bit rate that is chosen, the longer the unicast flow lasts. Note that during the unicast burst, the buffer gradually fills, starting from being empty. As such, if play-out starts immediately from the moment the first anchor frame arrives in the buffer, there will be no packet loss protection until the buffer fills to the required size.
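The geometry of Fig. 5 gives a simple back-of-the-envelope relation (a reading of the figure with assumed numbers, not a formula from the article): if the burst starts from an anchor frame that lies lag_s seconds behind the live multicast point and runs burst_factor times the channel rate, it catches up after lag_s / (burst_factor - 1) seconds, and by then the STB holds roughly lag_s seconds of content in its buffer.

# Unicast-burst geometry of Fig. 5 (back-of-the-envelope; all numbers are assumed).

def fast_channel_change(lag_s, burst_factor):
    catch_up_s = lag_s / (burst_factor - 1.0)  # when the dashed line meets the multicast line
    buffer_built_s = lag_s                     # content accumulated if play-out starts at once
    return catch_up_s, buffer_built_s

for lag in (0.5, 1.0):                         # assumed: anchor frame 0.5 s or 1.0 s in the past
    t, b = fast_channel_change(lag, burst_factor=1.25)  # assumed burst 25% above channel rate
    print("lag %.1f s, 25%% faster burst: catch up after %.1f s, buffer built %.1f s" % (lag, t, b))

This also quantifies the trade-off discussed next: starting from an older anchor frame buys a larger buffer but stretches the unicast burst.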
Choosing the most recent anchor frame is risky, because the buffer (e.g., for application layer FEC or ARQ) that is built up might be too small, since the buffer only fills up during the unicast burst. As such, to build up a sufficient STB buffer, it is advisable to select an anchor frame further in the past than the most recent one. However, the further in the past the anchor frame is selected, the longer the unicast burst will last.
One remaining question is from where this anchor frame and all following frames to be sent in a unicast burst can be retrieved. The encoder could host a circular buffer, which could retain a couple of the most recent GoPs. But for similar reasons as for the ARQ buffer, this circular buffer retaining the most recent GoPs should be implemented at the edge of the network. In fact, if a circular buffer for fast channel-change purposes is implemented, the retransmission buffer comes practically for free. Also, notice that the STB buffer that must build up for application layer FEC or ARQ to work reliably does not necessarily result in an additional latency. In this fast channel-change scheme, it just introduces an initial period (immediately after the channel change) where there is no protection.
The DSL link must be dimensioned such that it can support the higher bit rate associated with the unicast flow. Because video traffic has priority over data traffic, the unicast flow temporarily borrows capacity from the data services. In most cases, this borrowed capacity is only a small portion of the available bit rate for data traffic.
Alternatively, a variant of this method can be used that economizes on the bit rate of the unicast burst by not sending all the packets. In particular, the packets associated with anchor and other reference frames are always sent, but non-reference frames or other less important packets may not be sent. Depending on how many of the packets are discarded, this may introduce some motion artifacts just after a channel change.
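A sketch of that thinning decision is shown below, using hypothetical frame-type tags (B-frames are treated as non-reference, as in classic MPEG-2, and the packet list is made up for illustration).

# Thinned unicast burst: always forward packets of anchor/reference frames,
# optionally drop the rest (the frame-type tags here are hypothetical metadata).

def thin_burst(packets, keep_non_reference=False):
    # packets: iterable of (sequence_number, frame_type), frame_type in {'I', 'P', 'B'}
    for seq, ftype in packets:
        if ftype in ('I', 'P') or keep_non_reference:
            yield seq, ftype

burst = [(1, 'I'), (2, 'B'), (3, 'B'), (4, 'P'), (5, 'B'), (6, 'P')]
print(list(thin_burst(burst)))   # only the I- and P-frame packets are sent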
Other techniques for fast channel change exist. One technique is based on the picture-in-picture channel. This channel has a lower bit rate (and resolution) than the regular channel. It is constructed with a very small GoP size. When the STB changes to another channel, it first tunes in to the picture-in-picture channel, which is temporarily displayed until the STB has received an anchor frame for the regular channel and the STB buffer has been filled to an acceptable level. With this technique, a lower spatial resolution is briefly displayed following a channel change, and it requires a higher bit rate because the picture-in-picture channel and regular channel must be simultaneously transported over the DSL link.


ON DEMAND SERVICES
Because of its ease of addressing individual users and because it has an integral return channel, IPTV is well suited to offer new TV services augmenting the linear programming TV service. Indeed, the increasing demand for individual content, such as video-on-demand, and new services, such as time-shifted TV and network personal video recorder, makes the IPTV service offering more attractive than other broadcast-oriented platforms that are just migrating from analogue to digital. In this section, we discuss techniques to increase the user’s quality of experience for these services.
VIDEO-ON-DEMAND
Video-on-demand libraries grow larger and larger. In principle, for each user request, a unicast flow must be set up. Multicast delivery, which saves a lot of capacity in the core, metro, and access networks in the linear programming TV service, cannot be exploited for video-on-demand services. Instead, caching techniques can be used to reduce demands on the network. Popular video-on-demand files optimally are stored closer to the STBs, such that the packets associated with these video files are required to be transported only once over the core (and metro) network, whereas less popular video-on-demand content may be kept on a central video-on-demand server. The decision about which content to place in which location on the network is based on the (assumed or measured) popularity of the content.
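One simple way to make that placement decision concrete is to rank titles by popularity and fill the edge cache from the top; the sketch below assumes a Zipf-like popularity distribution and illustrative library and cache sizes, none of which come from the article.

# Popularity-based placement sketch: cache the most requested titles at the edge
# so only cache misses traverse the core and metro network (all numbers assumed).

def zipf_popularity(n_titles, s=0.8):
    weights = [1.0 / (rank ** s) for rank in range(1, n_titles + 1)]
    total = sum(weights)
    return [w / total for w in weights]        # request probability, most popular first

def edge_hit_ratio(n_titles, cache_slots):
    return sum(zipf_popularity(n_titles)[:cache_slots])

for slots in (100, 500, 1000):
    print("cache %4d of 10,000 titles -> %.0f%% of requests served from the edge"
          % (slots, 100 * edge_hit_ratio(10000, slots)))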
Considering the description given previously, there is a clear resemblance between the video-on-demand buffers in the network and the fast channel-change circular buffer for linear programming TV. However, whereas for fast channel change, there is a relatively small circular buffer, video-on-demand requires a considerably larger buffer containing the complete video-on-demand files.
As explained previously, mitigating packet loss requires (among other things) a buffer to be built up in the STB. With linear programming TV, this buffer is filled during the unicast burst, and its size is limited by the amount of information present in the fast-channel-change circular buffer. For video-on-demand, however, the complete video-on-demand file is present in the video-on-demand buffer. Then, if an initial burst can be tolerated, a large STB buffer can be built up, enabling as many retransmissions as desired. This immediately leads to the conclusion that in the case of video-on-demand, an ARQ scheme easily will outperform any FEC scheme.

NEW SERVICES
Time-Shifted TV: With time-shifted TV, a user can start watching a program from the beginning after the program actually started or even after it ended in the linear programming TV service. Or the viewer may want to rewind or pause the linear programming TV program that he or she is watching, perhaps temporarily switching to another channel. For time-shifted TV, the network provider stores the programs in a (circular) buffer somewhere in the network. From the moment a user decides to start watching delayed content, a unicast flow is set up from the buffer to the user. From that moment on, the user stays on the unicast and does not return to the multicast flow.
In fact, the time-shifted TV buffer is effectively a scaled up version of the fast-channel-change circular buffer. However, for time-shifted TV, the buffer must be a lot larger and, to mitigate the resource demands placed on the network by the unicast flows, the time-shifted TV buffer should be located close to the edge of the network.
Some operators may choose to offer a limited time-shifted TV functionality supported by features in the STB. In this case only live-pause and rewind are possible. A user cannot start to watch the beginning of a program after it started, nor temporarily watch another program while live-pausing the first one on a bandwidth-constrained network.
Network Personal Video Recorder: With a network personal video recorder, a user indicates in advance which programs, offered in a linear programming TV service, he or she wants to record for later viewing. The network provider makes a copy of the requested programs and stores them in the network. In fact, the network personal video recorder service essentially is a video-on-demand service, where the network provider records all content offered in the linear programming TV service that was identified by one or more users for later viewing. Because in this case the users actually indicate that they were interested in recording the program, the network provider has a clear view of the popularity of the content and as such, can adapt its caching strategy for optimal storage.
Notice that the recorded files also can be stored on the STB, but this makes a service where a user views one channel while recording one or more other channels a challenge on a bandwidth-constrained network.


CONCLUSION

In this article, we gave an overview of an IPTV network architecture and reviewed basic video compression concepts. Based on this, it was shown that IPTV can offer a linear programming TV service comparable to other platforms (satellite, cable, terrestrial) but with superior channel-change performance and a large mean time between visible distortions. For the channel-change latency to be reduced, a strategy based on a circular buffer in the access network was proposed. For reducing the number of visible distortions that result from packet loss, two application layer strategies were discussed: application layer FEC and ARQ. Both introduce additional latency and overhead. ARQ generally outperforms FEC when ARQ is provided from a retransmission buffer in the access network. A common buffer may be used for both retransmissions and fast channel changes.
In addition to improving the linear programming TV experience, IPTV is well suited for new services such as video-on-demand, time-shifted TV, and network personal video recorder. For these services, the buffers in the network must be larger than what is required for fast channel change and ARQ.
REFERENCES
[1] ATIS Std. ATIS-0800007, “IPTV High Level Architecture,” 2007.
[2] D. Marpe, T. Wiegand, and G. J. Sullivan, “The H.264/MPEG4 Advanced Video Coding Standard and Its Applications,” IEEE Commun. Mag., vol. 44, no. 8, Aug. 2006, pp. 134–44.
[3] ATIS Std. ATIS-0800005, “IPTV Packet Loss Issue Report,” 2007.
[4] ETSI, “Digital Video Broadcasting (DVB); Guidelines for DVB-IP Phase 1 Handbook,” tech. rep. ETSI TR102542 V1.3.1, Nov. 2007.
[5] D. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge Univ. Press, 2003.
[6] F. Vanhaverbeke et al., “Binary Erasure Codes for Packet Transmission Subject to Correlated Erasures,” Proc. 2006 Pacific-Rim Conf. Multimedia, Hangzhou, China, Nov. 2–4, 2006, pp. 48–55.
[7] M. Luby et al., “High-Quality Video Distribution Using Power Line Communication and Application Layer Forward Error Correction,” Proc. IEEE Int’l. Symp. Power Line Commun. and Its Apps., Orlando, FL, Mar. 26–28, 2007, pp. 431–36.
[8] SMPTE, “Forward Error Correction for Real-time Video/Audio Transport Over IP Networks,” SMPTE 2022-1, 2007.
[9] F. Vanhaverbeke, M. Moeneclaey, and D. De Vleeschauwer, “Retransmission Strategies for an Acceptable Quality of HDTV in the Wireless and Wired Home Scenario,” Int’l. J. Commun. Sys., no. 20, 2006, pp. 297–311.
[10] N. Degrande, D. De Vleeschauwer, and K. Laevens, “Protecting IPTV Against Packet Loss: Techniques and Trade-Offs,” to appear, Bell Labs Tech. J., vol. 13, no. 1, Spring 2008.
[11] R. Kooij, K. Ahmed, and K. Brunnström, “Perceived Quality of Channel Zapping,” Proc. 5th IASTED Int’l. Conf. Commun. Sys. and Networks, Palma de Mallorca, Spain, Aug. 28–30, 2006.
BIOGRAPHIES
NATALIE DEGRANDE (natalie.degrande@alcatel-lucent.be) obtained an M.Sc. in physics and a Ph.D. in physics from Ghent University, Belgium, in 1995 and 2001, respectively. Since then she has been a researcher at Alcatel-Lucent Bell (formerly, Alcatel Bell). First she worked in the traffic engineering team on the development of resource optimizing algorithms. Her current research focus is on triple-play services and video quality issues.
KOEN LAEVENS (koen.laevens@alcatel-lucent.be) obtained an M.Sc. in electrical engineering and a Ph.D. in applied sciences from Ghent University, Belgium, in 1991 and 1999, respectively. Over the years he has held several positions as researcher at Ghent University, Microsoft Research Cambridge, and Alcatel-Lucent Bell (formerly Alcatel Bell). His research focuses on the broad area of performance evaluation, emphasizing applications in the telecom sector. His current interests include congestion control and video delivery mechanisms in packet-switched networks. He is a part-time lector with the Telecommunications and Information Processing Department (TELIN) of Ghent University.
DANNY DE VLEESCHAUWER (danny.de.vleeschauwer@alcatel-lucent.be) obtained an M.Sc. in electrical engineering and a Ph.D. in applied sciences from Ghent University, Belgium, in 1985 and 1993, respectively. He has been with Alcatel-Lucent Bell (formerly Alcatel Bell) since 1998 and was a researcher at Ghent University from 1987 to 1998. He worked first on image processing and later on the application of queuing theory in packet-based networks. His current research focus is on ensuring adequate quality for triple-play services offered over packet-based networks. He is a guest professor with TELIN, Ghent University, and a member of the Alcatel-Lucent Technical Academy.
RANDY SHARPE (randy.sharpe@alcatel-lucent.com) obtained a B.Sc. in electrical engineering from the University of Michigan in 1978 and an M.Sc. in electrical engineering from the Massachusetts Institute of Technology in 1979. In 2007 he joined the Access Division CTO organization, focusing on IPTV aspects of broadband access networks. He developed digital video transmission systems while at Bell Labs before becoming a founder and system architect of BroadBand Technologies. He joined the Alcatel USA CTO organization in 2001, specializing in access network issues. He also co-chairs the Architecture Task Force of the IPTV Interoperability Forum and is involved in other standards activities.