Digital Broadcasting

Assessment is at the end of this booklet

Basic Networking & Unicast Streaming – LAB

Surveying the status of machines in E6

The machines are currently set up for streaming: IP addresses and subnets are already assigned, and each computer is named.

These machines also have two network cards installed. The main University network uses NIC (network interface card) 1 and our internal network uses NIC 2.

Networking & Streaming on Windows 2003 Server/clients

What is needed to network computers to create a local area network, and what do you have to do? On a basic scale, you require two or more computers, a network card in each, a network switch (or hub) and CAT 5 network cabling. When all the hardware is correctly installed you are ready to begin setting them up.

E6 Networking Laboratory

The PCs in E6 are installed with three operating systems. For the purposes of this laboratory all students will mainly be using the Windows 2003 Server boot option.

Starting the server PCs

1. Boot the PC
2. Choose the Windows 2003 Server boot option
3. On the login screen enter

User = Administrator
Password = serves

Check the current IP address on the PC - IPCONFIG

Start > Run > type in command or cmd to start the command prompt.

A black command screen will boot up resembling a DOS screen.

In this window type “ipconfig” and press enter. Firstly, you are told the current IP address, the subnet and the gateway of the Local Area Connection.
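
Typical output looks something like this (illustrative; the exact addresses will reflect your machine's lab configuration):

Windows IP Configuration

Ethernet adapter Local Area Connection:

        IP Address. . . . . . . . . . . . : 111.111.111.105
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . :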

If no IP configuration information is displayed it means all NIC cards are currently disabled.

Testing the Network

The network is currently set up, and to test for communication you are going to use "ping". Ping verifies connections to a remote computer or computers. This command is available only if the TCP/IP protocol has been installed. To ping a specific computer, use the following command:

Ping another PC by entering ping followed by the IP address of the corresponding machine, e.g. ping 111.111.111.100, and press enter.

A message similar to the one below (but with your own IP addresses) will be returned.
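
For example (illustrative output; your addresses and times will differ):

Pinging 111.111.111.100 with 32 bytes of data:

Reply from 111.111.111.100: bytes=32 time<1ms TTL=128
Reply from 111.111.111.100: bytes=32 time<1ms TTL=128
Reply from 111.111.111.100: bytes=32 time<1ms TTL=128
Reply from 111.111.111.100: bytes=32 time<1ms TTL=128

Ping statistics for 111.111.111.100:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)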

Ping sends out 32 bytes of information. It has a TTL (time to live) of 128, which means it can pass through up to 128 routers (hops) before it is discarded. This ensures that packets do not circulate forever.

All the machines should be able to ping each other and respond with a message.

If it says “Request Timed out” it means no signal has been returned.

Class Layout of PC’s in Lab

Follow the diagram below to ensure you know whose machine you are pinging. They will also know they can ping you too.

Naming a Computer

All computers need to have names. The NetBEUI protocol demands that all computers are named, e.g. Computer 1, Computer 2, or names like TIM, PETER and HEATHER are just as acceptable. It is also easier to find computers if you know their names; an address like 111.111.111.111 is hardly easy to remember.

The machines in this lab are named e6-XX

To change the PC name, right-click the My Computer icon on the desktop and click Properties to enter the computer's System Properties section.

Click the Network Identification tab at the top. The full computer name and the workgroup are on display. To change them, click the Properties button below the Network ID button. (Network ID is a quick way to gain access to setting the network protocols.)

Properties of the Network ID

Name the computer whatever you wish; however, the workgroup is to be called "workgroup". Then hit the OK button and exit this box. For the changes to take effect a reboot is necessary.

Setting up a local area network (LAN)

The machines are networked by standard CAT 6 network cable (1000 Mbps) and a network switch (also 1000 Mbps). The job of the switch is to link the PCs together. None of the PCs in this lab is connected to the main University network, so we have full control whilst studying Internet streaming and do not affect any other area of the network.

Networking the Computers

To successfully start a network each machine must be running a network language (a protocol).

Setting up the Networking Protocols

Right click the Network Neighbourhood icon on the desktop and choose Properties.

The image on the PCs in F11 will have two Local Area Connection links present, but one is greyed out. These machines have two NIC cards installed (NIC = network interface card). For this test only one card is used, because the greyed-out card is configured for use on the main University network.

Installing Protocols

After selecting the properties of the network connection, a box will appear asking for network protocols to be installed or turned on (identical to the picture on the next page).

Right click the local area connection icon and click properties.

Then highlight the Internet Protocol option (TCP/IP) and click properties.

How to Set up a Network using the Internet Protocol (TCP/IP) Properties

Click – Use the following IP address option button

Enter a 32-bit IP address in the boxes. For this test ensure the experiment's IP address begins with 111.111.111._ _ _. The last digit will be 1 to 12, depending on the number given to you in class. Then use the subnet mask 255.255.255.0.

Set the IP address and subnet mask, and ignore the gateway and the DNS.

The gateway is the computer's connection to the outside world. DNS is Domain Name Server - for example, the server which allows a machine to be on the student domain, like yourselves.

To communicate successfully in our lab each machine must use the same subnet mask, and because there is no router between them they must all sit on the same subnet. It is good practice to ensure machines in the same area share the same subnet; you could say it is like organising them into sections.
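
As a quick sanity check, Python's standard ipaddress module can confirm that two lab machines sit on the same subnet (a minimal sketch using this lab's 111.111.111.x scheme):

import ipaddress

# The lab subnet: 111.111.111.0 with mask 255.255.255.0
net = ipaddress.ip_network("111.111.111.0/255.255.255.0")

a = ipaddress.ip_address("111.111.111.5")
b = ipaddress.ip_address("111.111.111.9")

# Both addresses fall inside the same /24, so the machines can
# reach each other directly without needing a gateway.
print(a in net, b in net)   # True True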

Testing the Streams

Start Internet explorer and type the URL of your stream in the address bar. Currently there is no Internet connection on these machines.

To view your stream type either

mms://name_of_your_server/name_of_publishing_point/pinball.wmv
or
mms://ft9-161/pubone/pinball.wmv
or
mms://111.111.111.161/pubone/pinball.wmv

in the URL bar.

You can test this from any machine on the network in the lab.

Useful Definitions

OVERVIEW NetBEUI (courtesy of Microsoft help guide)

NetBIOS Extended User Interface (NetBEUI) is suited for use in small workgroups or LANs. You can install a NetBIOS gateway and the NetBEUI client protocol on all remote access servers running Windows 2000 and most Windows networking clients. Previous Windows NT remote access clients, LAN Manager remote access clients, MS-DOS remote access clients, and Windows for Workgroups remote access clients may use NetBEUI. NetBEUI is not routable, and the only configuration required for the protocol is a computer name.

TCP/IP

Transmission Control Protocol/Internet Protocol (TCP/IP) is the most popular protocol, and the basis for the Internet. Its routing capabilities provide maximum flexibility in an enterprise-wide network.

On a TCP/IP network, you must provide IP addresses to clients. Clients may also require a naming service or a method for name resolution. This section explains IP addressing and name resolution for Network and Dial-up Connections on TCP/IP networks. It also describes the FTP and Telnet tools that are provided by Windows 2000 TCP/IP.

Compression – Encoding for the web

Compression in its most basic form looks for redundant data. Most sentences are made up from words that are repeated frequently. At the transmission end data is compressed: any data that is repeated can be remembered once and then pointed at again to recreate the same message at the receiver end. A vector (pointer) calculation enables the full sentence to be recreated, and these references take up far less memory than the actual data.

Data compression works on generally random content like computer files; it does not have to know details of what the file's contents are. One of the most popular compression formats over the past decade is .zip from PKZIP. Apple users use .sit from Aladdin Systems.

Have you ever wondered why a compressed image or piece of data will not recompress to a smaller size? Uncompressed information in numerical form contains lots of patterns of numbers; compressed data under the same tests is very random. So if data is compressed efficiently in the first place, the encoded file looks very random, a compression scheme can find no structure in it, and it becomes nearly impossible to compress much further. Using a program like WinZip on a piece of uncompressed video you would probably achieve at least a 2:1 compression ratio, whereas on a JPEG you would achieve very little.
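
You can see this for yourself with Python's zlib module (a minimal sketch): highly repetitive data shrinks dramatically, but compressing the already-compressed output again gains nothing.

import zlib

text = b"the cat sat on the mat " * 1000   # highly redundant data

once = zlib.compress(text)
twice = zlib.compress(once)                 # try to recompress the output

print(len(text))    # 23000 bytes of original data
print(len(once))    # a few hundred bytes - a huge saving
print(len(twice))   # about the same as 'once', possibly slightly larger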

There are two main types of compression: lossy and lossless. A lossless codec preserves all of the information contained within the original file; TIFF and PNG are lossless. Lossless codecs do not always compress the file smaller than the original. Lossy basically means data is lost and cannot be retrieved; only a best effort at reproducing the original file is possible, as with JPEG compression. Lossy codecs generally allow you to specify a wanted data rate. Media Cleaner offers this service across all the Real, Windows and QuickTime compression settings. All compression for video and audio across the web is lossy.

Interlaced and Progressive TV Pictures

Televisions create their image by drawing (scanning) lines of light on the CRT face, left to right, top to bottom, to produce a picture over the entire screen. The images that make up the motion picture are composed of alternating interlaced fields: one field consists of all the odd lines (1, 3, 5 and so on, through 625), and the other consists of all the even lines (2, 4, 6 and so on, through 624). The result is that only half of the video's display is drawn every 50th of a second: Field 1 is scanned, and then Field 2 is scanned. Traditional talk quotes PAL television as having 25 frames per second, each comprising two interlaced fields. That is misleading and should be erased from your mind: the PAL interlaced system shows 50 unique images (fields) per second, and each field is an independent snapshot in time. So we don't want to think of interlaced televisions in terms of frames but rather in terms of fields, interlaced fields, and 50 of them per second.

As long as nothing in the picture changes (such as when showing a still image), the alternating fields will complement each other and form a complete picture. The display will flicker and scan lines will be visible (both inherent in an interlaced system), but that's life in an interlaced world. We don't spend the evening looking at a still picture on the TV, though; the moment something moves, we get interlacing artefacts.

http://www.hometheaterhifi.com/volume_7_4/dvd-benchmark-part-5-progressive-10-2000.html

Progressive Scan

Progressive displays, such as high-performance CRT/LCD/DLP/LCOS projectors and the new HDTV-ready TVs, can show progressively scanned images as opposed to interlaced ones. We learned above that an interlaced display shows 50 fields per second. With progressive scan, each scan of the CRT face is a complete picture including all scan lines, top to bottom, now called a frame, and we show 50 of those per second. The benefits of a progressive display are no flicker, much less visible scan lines (permitting closer seating to the display), and none of the artefacts we described for the interlaced display (as long as it is fed a real progressive signal).

But sources which are truly progressive in nature are hard to come by right now. Movies on DVD are decoded by the player as interlaced fields; all of the film's original frames are there, but they are just divided into "halves". What we're going to talk about next is how we take the interlaced content of DVD and recreate the full film frames so we can display them progressively. The term commonly used for restoring the progressive image is deinterlacing, though we think it is more correct to call it re-interleaving, which is a subset of deinterlacing.

The magnitude of the change you will see also depends on what you were watching before. If you switch from a regular interlaced TV to a progressive-scanned 576p picture, you should see a much smoother, more film-like picture, with much less obvious scan-line structure and more apparent vertical resolution. The difference should, in fact, be pretty obvious. But if, like most new high-end TVs, your TV has a built-in deinterlacer (often called a "line doubler"), then the television has been converting your interlaced signal to 576p already, so the smoothness and lack of line structure are already there. So what can a progressive DVD player offer? Better deinterlacing, to start with. In other words, the deinterlacer in the DVD player is likely better than the one in your TV (the DVD player's deinterlacer also works in the digital domain right off the disc, rather than the TV receiving an analogue video signal and having to convert it back to digital for deinterlacing).

Computer monitors use a progressive scan. Therefore compression for the web has to include deinterlacing if the original signal is from a PAL or NTSC video source. This not only improves the viewable image but reduces the bit rate too. In my quick test, the file I used was originally 35 MB and 3 seconds in duration; it was reduced to 249 KB deinterlaced and 251 KB interlaced (roughly 140:1 against the original). The difference in data rate per second is not significant, but the deinterlaced (progressive) file displayed much more clearly on screen and is far easier for the media player to reassemble at the client end.

There is a fundamental limit to compression, called the Shannon limit. Claude Shannon invented information theory in 1948; he created mathematical methods to calculate how much information could physically be transferred over a telephone or telegraph wire.

As random data cannot be compressed any further, its size will stay the same even after the compression algorithm has been run. Whereas with highly redundant data the Shannon limit can be tiny: for example, a file consisting of 10 billion repetitions of "010101010" can be compressed to little more than the repeating pattern itself. Most compression applications have a value that controls some kind of trade-off between speed and compression efficiency. In compressing for the web you have to think about more things: compression time, compression efficiency, decoding time, and the careful balance that leaves the signal acceptable to the audience.
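
Shannon's formula, H = -Σ p(x) log2 p(x), gives the average number of bits genuinely needed per symbol. A minimal Python sketch (a zeroth-order estimate; real files have longer-range structure too) shows the contrast:

import math
from collections import Counter

def entropy_bits_per_symbol(data: bytes) -> float:
    # Shannon entropy over the byte frequencies in the data.
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy_bits_per_symbol(b"01" * 5000))        # 1.0 - highly redundant
print(entropy_bits_per_symbol(bytes(range(256))))   # 8.0 - no redundancy at all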

Decoding modern codecs such as RealNetworks' Helix and WM9 is very CPU intensive, so they are only usable on fast modern computers. MPEG-4 also demands a powerful machine to decode the signal. Compression time, meanwhile, rises steeply the harder you compress: a 10% increase in compression may take 100% more time to achieve, because you are trying to approach Shannon's limit.

Decisions to make about streaming video:

- Target bit rate – knowing the connection speeds
- Balancing audio vs. video bit rate
- Image size
- Frame rate
- I-frames

To compensate for congested networks, the target bit rate is always set lower than the ideal bit rate (e.g. 56 kbps modems can only achieve about 53 kbps). Many encoding technicians will target a bit rate of 36–42 kbps. For a two-channel ISDN line with a maximum bandwidth of 128 kbps you target 80–100 kbps. Broadband connections can generally be set closer to full capacity, e.g. treating 512k as full. A T-1 runs at 1.5 Mbps, but it is often shared between several users. T-3 lines run at 44 Mbps.
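
These rules of thumb can be collected into a small lookup table (a sketch in Python; the figures are simply the guideline numbers above, not an official standard):

# Guideline target bit rates in kbps, keyed by connection type.
TARGETS_KBPS = {
    "56k modem": (36, 42),    # modems only achieve ~53 kbps in practice
    "dual ISDN": (80, 100),   # 128 kbps maximum bandwidth
    "broadband": (450, 512),  # can be set close to full capacity on a 512k line
}

def pick_target(connection: str) -> int:
    low, high = TARGETS_KBPS[connection]
    return (low + high) // 2   # aim mid-range to ride out congestion

print(pick_target("56k modem"))   # 39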

Audio vs Video bit rate

Obviously video requires a much higher bit rate than the audio signal, although it is common to find audio being sacrificed for better video. Compression experts and user experience have shown that better audio quality gives a better overall impression of the video. With a 56k modem, though, you must choose an audio bit rate of 5–8 kbps to leave enough room for the video.

Ultimately your target audience will decide how you deliver your audio. You have to choose the sampling rate and whether you want stereo or mono audio. Most codecs dictate these parameters based on the bit rate you have chosen.

Reducing Bit Rate

Scaling your frame size is an excellent method of lowering bit rate. If you scale your frame size, it is advisable to scale by a factor of 4.

720 x 576 to 180 x 144 (a division by 4)

This is because the codecs work by encoding blocks of 4 or 16 pixels, and you get a better quality picture by respecting those blocks: to scale down, the encoder only has to remove 3 out of every 4 pixels, and you could say it makes life easier with symmetrical encoding. A usual size is 160 x 120 when using a dedicated capture device; this frame size is adequate for a 56k modem. If you are streaming to 56k and you have captured at DV rates, it is recommended to resize by a factor of 4 (180 x 144). Larger frame sizes are generally expected at broadband rates: 240 x 180 or 320 x 240. Occasionally, if distribution is only via LAN or a high-speed connection, you can encode at 640 x 480; this is classed as full screen.
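
The arithmetic is simple enough to sketch in Python (a hypothetical helper following the divide-by-4 rule above):

def scale_frame(width, height, factor=4):
    # Divide both dimensions by the same factor to preserve the
    # aspect ratio, and check the result stays block-aligned.
    w, h = width // factor, height // factor
    assert w % 4 == 0 and h % 4 == 0, "codecs prefer multiples of 4"
    return w, h

print(scale_frame(720, 576))   # (180, 144) - suitable for a 56k modem stream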

Frame Rate

Frame rate is another important factor in reducing bit rate. At 56k you will find encoding takes place at 15 fps in the USA and 12.5 fps in the UK; 10 fps is around the minimum. Over the next few years you will see the fps go up to full frame rate. I recommend 15 fps for low-speed connections.

Keyframes

Keyframes are made up of what are referred to as I-frames, similar to MPEG. The video codecs use these keyframes as anchor frames. A keyframe requires more bits to encode than other frames. The other frames are called difference frames; they are made from a prediction from the previous frame, similar to the P-frame in MPEG. All the information needed to encode and decode a keyframe comes from a single source frame, called the intra-frame or I-frame (keyframe). Keyframes are useful for random access in a movie, but using fewer of them helps to reduce the data bit rate.
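
The idea can be sketched in Python (a toy model treating each frame as a list of pixel values; real codecs predict motion-compensated blocks, not raw pixels):

KEYFRAME_INTERVAL = 10   # one keyframe, then nine difference frames

def encode(frames):
    encoded, previous = [], None
    for i, frame in enumerate(frames):
        if i % KEYFRAME_INTERVAL == 0:
            encoded.append(("I", frame))   # keyframe: stored whole
        else:
            # difference frame: store only the pixels that changed
            delta = {j: p for j, (p, q) in enumerate(zip(frame, previous)) if p != q}
            encoded.append(("P", delta))
        previous = frame
    return encoded

# A static scene produces a near-empty difference frame:
print(encode([[1, 2, 3], [1, 2, 3], [1, 9, 3]]))
# [('I', [1, 2, 3]), ('P', {}), ('P', {1: 9})]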

Media Cleaner Pro has an option to control the scaling quality. This option applies very complex filtering to the image prior to scaling it, thereby reducing or eliminating aliasing (jagged edges). It is usual to use the accurate setting here, but it will take longer to render, so it depends whether you are short of time.

De-interlacing

TV is captured with two fields, each field containing half the lines, so removing a field will immediately halve the bit rate. Putting the two fields together can cause jagged edges, as the fields are separated in time. You can choose to blur the lines together by selecting the blend option; this helps to smooth out high-motion sequences, particularly since reducing the fps when you stream gives better compression. You can also select one field or the other; usually it does not matter which field you choose unless you have small details.

Field Dominance Change

This is a property which defines whether the odd fields or the even fields come first in the interlaced pattern. Occasionally single lines become separated from objects because of the compression, causing jagged moving edges where there should be none. You can use this setting to cure the problem, but the majority of the time it can be ignored.

Inverse Telecine

Many pieces of software, such as MCP (Media Cleaner Pro), have an inverse telecine option; it is on the de-interlace menu. If the original video came from film at 24 fps and was converted to 25 or 30 fps, unnecessary information was created at some point, and inverse telecine removes it. The original conversion is a process called 3-2 pulldown.

Filtering for Signals

Just as a one-dimensional filter can be applied to an audio signal to improve it, 2D filters can be applied to images to improve the final compression. Low-pass filters can be used to help reduce the bit rate: applying one to an edge will spread that edge out over several pixels, and as the codec codes differences rather than identical areas, this reduces the bit rate.

Blurring applies a low-pass filter across the entire image, giving a softer focus and making more colours the same, so the codec's job is easier.

Adaptive noise reduction only applies a blur (filter) to flat areas, so edges are not affected. It is usual to apply the latter unless you are looking for a softer focus, in particular when you have areas of uniform colour. Football matches could have this applied, because the green area needs to be uniform and flat so that very little encoding has to take place.

Difference Frames

The frames in between the keyframes are called difference frames (sometimes called delta frames; these are the P-frames in MPEG). They code the difference between themselves and the previous frame, so if a frame is identical to the last one, nothing is coded. Difference frames attempt to code only the motion in the video; noise and transitions create changes between frames. Note: if you are shooting for the web and wish to keep the quality reasonably high, it is advisable to use no transitions.

Apple

The early pioneers of media streaming and movies on the web were Apple. They started playing video on the Internet using their QuickTime 2 format. Streaming at this stage was not possible: they would place short clips on their websites for downloading and viewing in Apple's Movie Player or a browser plug-in.

Apple upgraded to QuickTime 2.5, adding a fast-start feature allowing the movie to begin playing before it was fully downloaded. This became known as progressive download. It is still used today, offering higher quality movies on slower connections; movie trailers and product demos are encoded as progressive downloads.

Cinepak was traditionally the codec of choice for cross-platform delivery of QuickTime, but it is an older technology intended for CD-ROM delivery of media.

With the advent of web video, Apple teamed up with several companies to offer the very best in compression. These included Sorenson, Qualcomm's PureVoice and QDesign's Music codec.

True streaming was introduced with QuickTime 4; however, this also introduced the need for a QuickTime Streaming Server, which permitted real-time delivery of content.

Sorenson

The Sorenson video codec is the choice for streaming or downloading QuickTime movies because it is designed to package large amounts of information into very low bit rates.

When choosing frame rates, a good idea is to choose a frame rate which divides evenly into the source frame rate, e.g. 12.5 fps, as it allows the codec to keep one frame in every two of a 25 fps movie.

Keyframes

Sorenson has a default setting of one keyframe every 10 seconds. If you find video deteriorating too much between keyframes, try placing them closer together; if you need to reduce bit rate, spread them out.

Preprocessing audio and video content for the web allows you to control what information you feed to the codecs.

Compressing the I-frame (you can call it spatial compression) is a major method of reducing the data rate. The I-frame is about twice as big as the difference frames, and the difference (delta) frames are built from it. The more the I-frame is compressed, the less data goes into the P-frames, but at the expense of quality.

Getting proper parameters

Knowing your target audience, including platforms and connection speeds, is an integral step in achieving quality video and audio.

Setting reasonable frame sizes and frame rates will reduce the amount of data the codec must compress, offering higher quality content for every bit that counts.

Cropping

You'll want to crop the image if there is tearing or black bands around any of the frame edges. These artefacts occur during capture and lead to unnecessary pixels that waste precious bandwidth. If you crop, ensure you crop the frame to the same aspect ratio as a TV (4:3), and I can really recommend you crop to dimensions that are multiples of 4 or 16. This is because the codecs divide images into blocks that are multiples of 4; the codec will then not try to resize your image to fit its inherent constraints.

Intelligent Scaling of Image Size

In order to improve quality and achieve higher frame rates, especially at lower connection speeds, you may want to scale your video even further.

Real Video Codecs

The Real system was designed especially for the web and was the earliest contender in this arena: back when we used 14.4k modems, Real introduced RealAudio. Their audio codecs were well developed and highly specialised for many different connections. They then started developing video streaming codecs. However, each time a new video codec was introduced it required a new player to be downloaded. The latest version is called G2, for Real version 8.

Real offers three basic options:

- Realtime for streaming
- WWW Movie setting (for progressive download)
- SureStream for dynamic scaling of your streams

Real WWW Movie offers only three settings: small, medium and high. It generally presents a higher overall bit rate for the stream than Realtime, because it 'tricks' the user into thinking the quality is much better for the same encoding bit rate.

If you are using a standard web server you need to choose the straight Real video codec. If you are using a media server, use the G2 codec. G2 has been developed to include SVT (Scalable Video Technology): the server can drop frames dynamically in order to keep up with the stream. This does not work like SureStream, which clocks down the connection. Real has proprietary intelligence built into the encoder, which uses mathematical algorithms to insert keyframes for you. You can take manual control, but it is very efficient; increasing quality will increase the bit rate.

The Real system also allows you to prioritise audio or video when network connections become congested. This option is only available when running from a Real server system. If you choose audio quality over video in the 'During congestion' option, you should expect dropped frames and frozen video; when the roles are reversed, it will skip audio during congestion.

SureStream Scalability

SureStream dynamically scales the content you are serving based on the connection speed of your client. You encode multiple streams in one file: when you select the encoding rates, e.g. 56k, ISDN, dual ISDN, T1, the Real server will provide the highest possible bit rate within the file based on the actual connection.

Real also gives you the option of a maximum fps, where it will automatically drop frames if the client cannot sustain continuous playback.

Microsoft's Windows Media Codecs

WM Technologies includes several streaming media components based around the Windows 2000 Server OS, capable of encoding 640x480 video at 30 fps over a 1 Mbps stream. Windows Media Encoder creates the .wmv file, Windows Media Services serves the .wmv stream, and Windows Media Player allows you to watch it. The Windows Media Resource Kit helps producers with the creation, distribution and playback of .wmv files.

Windows Media Encoder 7.1 includes de-interlacing and inverse telecine to improve the compressed picture quality, as explained earlier in the lecture.

The Osprey 500 card is developed and optimised exclusively to support the WM format. It is the first digital capture card to include tight integration with WM Encoder. Quality loss is reduced because the A-to-D conversion is lossless. This card does the DV decoding on the card itself rather than on the CPU like a normal IEEE 1394 interface, and it has a maximum file size of 17 terabytes. It is state of the art, so it can directly capture large amounts of data.

With WM Technologies, Microsoft introduced Microsoft Media Server. MMS is its server protocol; the MMS protocol has both a data-delivery mechanism, which ensures that packets reach the client, and a control mechanism to handle client requests, e.g. stop/play.

URLs that point to media clips on a Windows Media server usually begin with

mms://

MMS contains two sub-protocols that explicitly request the stream to use UDP or TCP; the MMST and MMSU protocols are now encompassed inside MMS.

The HTTP protocol can stream from a normal web server, but it is the slowest of the protocols. Unlike RTSP and MMS, it cannot fully support a constant bit rate; it just serves the stream as fast as it can.

With RealServer, RealNetworks introduced the Real Time Streaming Protocol as its primary server protocol.

RTSP

RTSP is an open, standards-based protocol for streaming. To use this protocol, URLs point to media clips on a Real server and begin with rtsp://.

The WM architecture encompasses many delivery mechanisms, but the familiar ASF format (Active Streaming Format) is targeted at the streaming market.

They adopted this format when they entered the web video market. The MPEG-4 codec was still in development at the time, so their MPEG-4 is not a true MPEG-4; they renamed it the Windows Media Video codec.

The latest version is V3, but it requires an updated media player, which is freely available. It also allows you to set the number of keyframes, which WM Encoder refers to as I-frames. You can control the compression by finding a happy medium between smoothness and crispness: if motion is important, you edge the slider toward the lower end of the scale, which smooths and blurs the image slightly; if image detail such as text or edges is more important, you set a higher value for crispness.

Microsoft's current technology for dynamic scaling is called Intellistream. It works like Real's SureStream, but you are limited to just two different video bit rates, and the audio rate must be the same in both.

Assessment Help

How to do the encoding part of the assessment

Set up the file with the following parameters; this allows a fair test to be conducted.

Frame Size: 320 x 240
Frame Rate: 15 fps
Data rate split between audio and video:
Video – 300 kbit
Audio – 80 kbit

Part one of the assessment asks you to choose the Big, Medium and Small settings to create streaming files for Real, QuickTime and Microsoft.

You must not alter any settings for Part 1; just choose the boxes and render the files. Only one can be chosen at a time (unlike my picture).

This gives you the ability to see what the pre-processing filters actually do, visually.

Analyse your observations and base your conclusions about these pre-processing filters on both the observations and the theory.

REQUIREMENTS

In a group of four, utilising two computers, you are to set up a live stream suitable for streaming a live radio station.

Of the two computers, one will encode the audio into a Windows Media audio stream and the other will act as a Microsoft Media Server to stream the live file.

Live Audio streaming

Insert a CD into the CD drive and play it using Media Player. Set the sound card's audio recording properties to Wave Out. This enables the sound from the CD player to be routed inside the PC so that Windows Media Encoder can listen for it.

To navigate to the sound card properties, double-click the volume control (the speaker icon) in the bottom right-hand corner of the Windows desktop. Then click Options > Properties, hit the Recording button and select Wave Out Mix.

Wave Out will then be available for you to select, as shown below. Without this selected, the signal cannot be received by Windows Media Encoder.

Encoding the audio using Windows Media Encoder

For the purposes of this test you will be using Windows Media Encoder version 9. This software is available free from

http://www.microsoft.com/windows/windowsmedia/9series/encoder/default.aspx

Set up Windows Media Encoder to broadcast your live audio stream. Follow the wizard.

De-select the video check box and ensure the audio check box is selected (see diagram above).

Set the file to broadcast on port 7070. This can be almost any free port, but for today we shall use this particular one.

Information about ports

A port can be thought of as a hole through which your bit stream of data is sent to the network. A PC can have thousands of ports, and each service that uses the NIC (network interface card), e.g. email or the web, uses a separate port. Web traffic uses port 80, whereas FTP uses port 21. Ports can be monitored by running a firewall on your PC; e.g. Sygate Personal Firewall can monitor incoming and outgoing packets.
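
A port only exists for traffic once a program binds to it. A minimal Python sketch (using today's port 7070) shows a service claiming a port; the client address is illustrative:

import socket

# Server side: claim TCP port 7070 on any NIC and wait for one connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 7070))
server.listen(1)

# Client side, run from another lab machine:
#   socket.create_connection(("111.111.111.105", 7070))
# Two programs cannot bind the same TCP port at once - each service
# on the PC must use its own port number.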

Unicast Streaming

Unicast is a one-to-one client-server relationship, a little like using a VCR. A unique stream is sent from the server, and no other client has access to this stream (though I wouldn't hold to this in security terms). Each stream can be terminated by the server or the client. The communication channel between the client and the server stays open as packets travel from the server to the client; this maximises the ability to compensate for lost data and deliver a better experience to the end user. Users can also pause, fast-forward and rewind the incoming stream. This is not the case if the file is being streamed from a web server; it applies only to a real media server.

Multicast Streaming

Multicast streaming is like watching TV. A media server generates one single stream to which multiple player-clients can connect. Users watch the content from the time they join the broadcast. The client connects to the stream but not to the server; because the clients do not connect to the server, feedback cannot be sent, which means replacement packets cannot be sent if any are lost. This does save bandwidth: 1000 connections to a 500k stream cost the server no more than one connection.

To support multicasting, networks must be equipped with routers and switches that support multicast protocols. Many Internet service providers do not have hardware to support packet delivery via multicast protocols: the Internet's hardware and software were designed to deliver images, text and applications, not real-time data such as video and audio. In recent years ISPs have been upgrading their networks to increase bandwidth, but it will take 5-10 years before all ISPs can fully multicast. Corporations tend to be the main users of multicasting. ISPs to the home will be the slowest to change; until the number of streaming media users rises heavily, it will not happen quickly. Only when the modem has been buried and unicast streaming loads become too heavy through high-volume usage will the ISPs sort this out. Perhaps then we will see the big TV channels move to video streaming as well as traditional transmission.

Multicasting also uses class D IP addresses, in the range 224.0.0.0 – 239.255.255.255.
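
A minimal multicast sender in Python looks like this (a sketch; 239.0.0.1 is an arbitrary class D group address, and receivers must join the same group to hear it):

import socket

GROUP, PORT = "239.0.0.1", 5004   # illustrative class D group address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A TTL of 1 keeps the multicast traffic inside the local network.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# One send reaches every client that has joined the group - the
# sender never knows how many receivers there are, so no feedback.
sock.sendto(b"one packet, many receivers", (GROUP, PORT))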

Protocols

Media delivery protocols used for streaming range across UDP, multicast UDP and TCP, which allow a choice of delivery based upon RTP (Real-time Transport Protocol).

Real, Windows and QuickTime all use UDP initially as the preferred mode of delivery. If UDP is blocked by a firewall (firewalls block data arriving on specific ports), the stream rolls over to TCP. The Microsoft MMS protocol can also automatically roll over to HTTP streaming (not 100% reliable, though). The UDP stream is classed as connectionless. TCP is not connectionless: it constantly generates acknowledgement and verification signals that add data to the network. It also allows for things like fast forward, rewind and play of unicast on-demand media streams. Both MMS and RTSP use TCP to carry the requests for interaction with the streaming media, even if the media itself is streamed via UDP or HTTP.

When a client calls the server, a TCP request asks for a data connection. When the server confirms the content, it first attempts to send packets over the UDP port. If network congestion or a firewall blocks the data, the player asks the server to use the TCP port, and failing that the HTTP port (here 8080). However, you can configure your streams to go through any port. Using Real, you use the following code:

rtsp://real_server_name:8080/file.rm?cloakport="554,7070,8080"

554 = RTSP port
7070 = PNM port (Progressive Networks Media)
8080 = HTTP port
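
The rollover order can be sketched in Python (the transport functions here are hypothetical stand-ins, named only to illustrate the order of attempts):

def try_udp(url):  return None                         # pretend a firewall blocks UDP
def try_tcp(url):  return None                         # pretend TCP is blocked too
def try_http(url): return "HTTP connection to " + url  # HTTP usually gets through

def open_stream(url):
    # Try the transports in the player's order of preference.
    for transport in (try_udp, try_tcp, try_http):
        connection = transport(url)
        if connection is not None:
            return connection
    raise ConnectionError("all transports blocked")

print(open_stream("rtsp://real_server_name/file.rm"))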

So, we have discussed the job of the server, but what about the media player? The player must be able to understand the real-time streaming protocol, decode the media, perform error correction (as necessary) and then play it. This is a very complex process that self-adjusts to network loads and delays several times a second, all the while trying to appear as if the audio and video are playing without interruption. The player contains a buffer which stores the streamed packets in RAM; this time is invaluable in allowing lost packets to be resent. If the player buffers extensively, it generally means network congestion: the player cannot receive the missing packets quickly enough to play in real time. Another delay is bottlenecking, which happens when the user attempts to view a stream too big for their connection.

In networking, the term protocol refers to a set of rules that govern communications. Protocols are to computers what language is to humans. Since this article is in English, to understand it you must be able to read English. Similarly, for two devices on a network to successfully communicate, they must both understand the same protocols.

What's a playlist?

Many digital audio and multimedia players for your computer support playlists. A playlist offers you better organization and management of the various music files on your computer by controlling what files are played and in what order, much like a music playlist used by radio stations. Playlists are commonly used by amateur and professional computer DJs at parties and dances to allow a continuous play of music, queuing programmed and requested songs. More advanced programs' playlist feature can support fading and mixing between tracks, sorting by artist or genre, and effectively give you the ability to turn your computer into a modern jukebox.

Within a playlist file (which can be viewed as a text document) is the local (hard-drive) and/or remote (Internet) location of each file within that playlist. As the audio player moves through each file on the playlist, it sources that file from the specified location, either on your hard-drive or the Internet (a URL).
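
For example, a simple .m3u playlist mixing local files and a remote URL might look like this (the paths are illustrative):

C:\Music\track_one.mp3
C:\Music\track_two.mp3
http://mediaserver/folder/track_three.mp3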

Playlist formats:

.pls [WinAmp MPEG PlayList File] - The extension for a playlist created in Winamp that can contain MPEG audio files

.m3u [MPEG URL file] - This extension is used to stream an MP3 file or files

.asx [Active Streaming File - Windows Media], .wvx/.wax [Windows Media metafiles] - When a browser downloads a file with one of these extensions from a Web site, it opens Windows Media Player. Windows Media Player then locates and plays the content specified in the file.

.m2a [MPEG Archive Enhanced .M3U Playlist File]

A playlist can be written in two formats, SMIL and ASX; SMIL has now taken over from ASX.

Creating a SMIL File

You can create a SMIL file (extension .smil) with any text editor or word processor that can save output as plain text. If you are familiar with HTML mark-up, you will pick up SMIL quickly. In its simplest form, a SMIL file lists multiple clips played in sequence:

<smil>
  <body>
    <audio src="rtsp://realserver.example.com/one.rm"/>
    <audio src="rtsp://realserver.example.com/two.rm"/>
    <audio src="rtsp://realserver.example.com/three.rm"/>
  </body>
</smil>

The equivalent ASX playlist for Windows Media wraps each clip in an entry element:

<asx version="3.0">
  <entry><ref href="http://mediaserver/folder/one.wmv"/></entry>
  <entry><ref href="http://mediaserver/folder/two.wmv"/></entry>
  <entry><ref href="http://mediaserver/folder/three.wmv"/></entry>
</asx>

Legal Aspects of Content Distribution

In its early stages the Internet was seen as a global email system designed to deliver data or documents. The exploitation of the web was so fast that governments had no legal infrastructure in place. An example of this is the ongoing process in the USA to establish Internet tax laws. The most famous battle belongs to the music industry. Record label bosses failed to envision the potential of the Internet as a major distributor. 1999 saw the start of legislative task forces searching the globe to eliminate the free distribution of media in violation of copyright laws; it was essential to write and pass laws covering online copyright. In November 1999, at the first MP3 conference in New York, record companies witnessed the distribution of CDs in MP3 format. They suddenly realised their revenue was under threat and sent their lawyers to court to stop companies from illegal distribution. Companies like MP3.com and Napster then had to comply with US copyright laws and pay fines or face shutdown.

The law currently states that any means of distributing media must be authorised by the owner; a financial agreement is needed between the owner and the distributor. The sharing of media without consent and with no exchange of fees was viewed as a clear violation of copyright laws (only after the law was updated!).

www.publaw.com

Unicast v IP Multicast.

There are two key streaming delivery techniques: unicast and multicast. Unicast refers to networking in which computers establish two-way, point-to-point connections. Most networks operate in this fashion: users request a file, and a server sends the file to those clients only. When streaming multimedia over a network, the advantage of unicast is that the client computer can communicate with the computer supplying the multimedia stream. The disadvantage of unicast is that each client that connects to the server receives a separate stream, which rapidly uses up network bandwidth.

Multicast refers to the networking technique in which one computer sends a single copy of the data over the network and many computers receive that data. Unlike a broadcast, routers can control where a multicast travels on the network. When streaming multimedia over the network, the advantage of multicasting is that only a single copy of the data is sent across the network, which preserves network bandwidth. The disadvantage of multicasting is that it is connectionless; clients have no control over the streams they receive. To use IP multicast on a network, the network routers must support the IP Multicast protocol. Most routers now handle multicast.

The Video Streaming Server is responsible for delivering compressed video in response to each individual request for a particular video stream. This is usually handled by one of the commercial streaming media software packages, such as RealNetworks® RealSystem™ or Microsoft® Windows Media™ Technologies. The bandwidth connection to the Video Streaming Server must accommodate the total bandwidth of all the requests for a video stream, unlike the Encoding Station, which must only accommodate one copy of each. As a result, the Video Streaming Server usually has a direct connection to a very high bandwidth line. For example, if there were 100 requests for a video stream compressed at 28.8 Kbps, the server would require at least a 3 Mbps connection. The Encoding Station and the Video Streaming Server can be one single system; however, unless hardware encoding is used, this would typically be for situations requiring limited performance (e.g. a single input stream and a small number of viewer requests), and even then it would require a fairly high-performance system. It is much more common to have two separate systems.
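
The server-side arithmetic behind that figure is worth a one-line check (Python):

STREAM_KBPS = 28.8   # bit rate of one compressed stream
VIEWERS = 100        # simultaneous unicast requests

print(STREAM_KBPS * VIEWERS / 1000, "Mbps")   # 2.88 Mbps, so at least a 3 Mbps line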

The Web Server for video streaming is in no way different from other Web Servers. The web site merely contains a URL link to the Video Streaming Server - one for every available video stream. Typically this is an icon on the web page which gets selected.

A Video Player application is required to decode the specific video stream received by the system requesting the stream over the Internet (or corporate Intranet). The most popular current video streaming applications are RealNetworks® RealSystem™ and Microsoft® Windows Media™ Technologies. Both of these require downloading a corresponding Video Player application such as RealPlayer® or Windows Media™ Player; but both of these are free. There are other video streaming applications that are implemented in such a way as to include the player in the stream and no download is required.

Internet Protocols.

There are several internet protocols available for streaming data, TCP, UDP, RTP, RTSP, MMS & HTTP. Generally, each configures the data into packets, with each packet having a 'header' that identifies its contents. The protocol used is usually determined by the need to have reliable or unreliable communications.

TCP is a reliable protocol designed for transmitting alphanumeric data; it can stop and correct itself when data is lost. This protocol is used to guarantee sequenced, error-free transmission, but its very nature can cause delays and reduced throughput. This can be especially annoying when streaming audio and video.

User Datagram Protocol (UDP), within the IP stack, is by contrast an unreliable protocol, in which data may be lost in preference to interrupting the flow.

Real-time Transport Protocol (RTP) was developed by the Internet Engineering Task Force (IETF) to handle streaming audio and video, and uses IP multicast. RTP is a derivative of UDP in which a time-stamp and sequence number are added to the packet header. This extra information allows the receiving client to reorder out-of-sequence packets, discard duplicates and synchronise audio and video after an initial buffering period. Real-Time Control Protocol (RTCP) is used to control RTP.

With RealServer™, RealNetworks introduced as its primary server protocol the Real Time Streaming Protocol (RTSP), an open, standards-based protocol for multimedia streaming. To use this protocol, URLs that point to media clips on a RealServer™ begin with rtsp://

With Windows Media™ Technologies, Microsoft introduced Microsoft Media™ Server (MMS) as its primary server protocol. MMS protocol has both a data delivery mechanism to ensure that packets reach the client and a control mechanism to handle client requests such as Stop/Play. MMS includes both Microsoft Media Server protocol/UDP (MMSU) and Microsoft Media Server protocol/TCP (MMST) as subsets to explicitly request the stream to use UDP or TCP respectively. Media Stream Broadcast Distribution (MSBD) protocol was used to transfer streams from the Windows Media™ Encoder to the Windows Media™ Server or between servers.

However, Windows Media™ Encoder 7 no longer supports MSBD and uses HTTP instead. URLs that point to media clips on a Windows Media™ Server usually begin with mms://

Hyper Text Transport Protocol (HTTP) is the slowest of the protocols and is used by Internet Web Servers. HTTP is transparent to some older firewalls and can bypass security in such cases. Unlike RTSP and MMS, which can serve the stream at a steady bit rate, HTTP just serves the stream as fast as it can; hence it is better to have separate web and streaming servers.
