Wireless X Labs is a brand-new platform designed to bring together telecom operators, technology vendors, and partners from vertical sectors to explore future mobile application scenarios, drive business and technical innovation, and build an open ecosystem. Wireless X Labs has set up three laboratories to explore three major areas: people-to-people connectivity, applications for vertical sectors, and household applications.
HUAWEI TECHNOLOGIES CO., LTD.
Bantian, Longgang District, Shenzhen 518129, P. R. China
Tel: +86-755-28780808
GENERAL DISCLAIMER
THE INFORMATION IN THIS DOCUMENT MAY CONTAIN PREDICTIVE STATEMENTS, INCLUDING BUT NOT LIMITED TO STATEMENTS REGARDING FUTURE FINANCIAL RESULTS, OPERATING RESULTS, FUTURE PRODUCT PORTFOLIOS, AND NEW TECHNOLOGIES. THERE ARE A NUMBER OF FACTORS THAT COULD CAUSE ACTUAL RESULTS AND DEVELOPMENTS TO DIFFER MATERIALLY FROM THOSE EXPRESSED OR IMPLIED IN THE PREDICTIVE STATEMENTS. THEREFORE, SUCH INFORMATION IS PROVIDED FOR REFERENCE PURPOSES ONLY AND CONSTITUTES NEITHER AN OFFER NOR A COMMITMENT. HUAWEI MAY CHANGE THE INFORMATION AT ANY TIME WITHOUT NOTICE, AND IS NOT RESPONSIBLE FOR ANY LIABILITIES ARISING FROM YOUR USE OF ANY OF THE INFORMATION PROVIDED HEREIN.
HUAWEI and other Huawei marks are trademarks or registered trademarks of Huawei Technologies Co., Ltd. Other trademarks and product, service, and company names mentioned are the property of their respective owners.
CONTENTS

BACKGROUND
EVALUATION MODEL FRAMEWORK
SUBJECTIVE EXPERIMENT AND ANALYSIS METHOD
MODEL FORMULA STRUCTURE
TYPICAL MODEL VALUES
MODEL APPLICATION
Overview:
This document describes cloud virtual reality (VR) service experience evaluation, covering the requirement background, the evaluation model framework, the subjective experiment and analysis method, the model formula structure, typical model values, and model application scenarios. The industry's first high-pixel-per-degree (PPD) equivalent test and multi-degree-of-freedom (DOF) interaction test for VR devices allow the evaluation model not only to assess the current state of VR services but also to provide forward-looking guidance for their future development. Notably, the evaluation model algorithm will be released and shared as free software development kits (SDKs). Partners in the VR industry are welcome to try it and promote its development.
[Figure 2-1 content: presence factors (I.11-I.122, e.g., video bit rate, frame rate, resolution, FOV, audio parameters, stalling, packet loss, head/body MTP, operation response delay, DOF) map to sub-perception experiences (O.21-O.33: visual fidelity, acoustic fidelity, interaction consistency, audio-visual fidelity, consistency/integrity, interaction realism), which in turn map to the VR presence index (O.41).]
BACKGROUND

In recent years, VR technology has gradually entered many fields such as education, entertainment, medical care, environmental protection, transportation, and public health. It has great application value and business potential. Data released by market research agencies also indicates that VR services will see rapid development in the coming years.

[Figure 1-1 content: estimated global VR/AR market scale, 2018-2023, in USD 100 million (source: Prospective Industry Research)]

Figure 1-1 Estimated global VR/AR market scale

Compared with traditional video services, VR provides users with a new service experience featuring free switching of perspectives and frequent interaction. Researchers usually characterize a user's immersion in a virtual environment by the concept of presence.

As computing power grows and storage moves to the cloud, VR images can be rendered in real time by cloud computing. Further, the ultra-large bandwidth and ultra-low latency of the 5G network ensure the availability of cloud-rendered images and greatly improve user experience through access anytime, anywhere, with local-like responsiveness. As one of the Cloud X services, the cloud VR service is an inevitable evolution considering requirements, technology, cost, and user experience.

However, the industry has no dedicated cloud VR service experience evaluation model. As a result, the E2E industry cannot pursue effective user-oriented development, and no model can serve as a reference for 5G network construction. How to accurately and effectively evaluate and predict cloud VR service experience remains an urgent issue to be resolved.

To address this issue, Huawei X Labs works in collaboration with well-known experts from universities in China to systematically study the factors affecting cloud VR service experience, build an evaluation model framework, and establish experience evaluation models through subjective experiments and data training that comply with ITU specifications. The model's evaluation modules meet the experience evaluation requirements of operators, device manufacturers, and content providers at different levels and from different perspectives.

EVALUATION MODEL FRAMEWORK

The evaluation model framework is built on hierarchical mapping, which avoids a large number of parameter cross tests, keeps the test volume manageable in the test phase, and is conducive to analyzing and establishing the model. Figure 2-1 shows the evaluation model framework. The presence factor layer is the model input layer, which includes the factors that can be extracted and quantified in the subjective experiment system. The sub-perception experience layer includes audio-visual fidelity, consistency/integrity, and interaction realism.

Figure 2-1 Framework of the experience evaluation model for VR head-mounted devices

Cloud X Service Experience Model Series: Cloud VR Presence Index
Table 2-1 lists the input parameters and acronyms or abbreviations of the evaluation model.

Table 2-1 Evaluation model input parameters

ID      Parameter                       Acronym or Abbreviation    Description
I.11    Video bit rate                  Br                         Average video bit rate (bit/s)
I.12    Video frame rate                FR                         Number of video frames per second (fps)
I.13    Video resolution                Rh, Rv                     Number of horizontal and vertical pixels in a video
I.14    Screen resolution               RSh                        Number of horizontal pixels in a monocular screen
I.15    Screen refresh rate             RR                         Screen refresh times per second
I.16    Video channel number            ST                         Monocular video (1) or stereoscopic video (2)
I.17    Video codec                     Video Codec                H.265/HEVC, H.264/AVC, VP9
I.18    Field of view (FOV)             FoVh                       Horizontal monocular FOV
I.19    Audio bit rate                  ABr                        Average audio bit rate (kbit/s)
I.110   Audio channel number            SP                         Stereo sound (2), spatial sound (8)
I.111   Audio codec                     Audio Codec                AAC-LC, Opus
I.112   Asynchronous audio and video    tasyn                      Delay between audio and video (second)
I.113   Average stalling duration       Tr                         Total stalling duration during a single playback, including the initial buffering duration (second)
I.114   Stalling frequency              RF                         Stalling frequency during a single playback (number of stalling times/playback duration)
I.115   Packet loss rate                ppl                        Packet loss rate at the application layer (%)
I.116   Head MTP                        thd                        Delay between head rotation and image refresh (ms)
I.117   Head MTS                        tad                        Delay between head rotation and audio direction change (ms)
I.118   Body MTP                        tbd                        Delay between body movement and body movement in the image (ms)
I.119   Operation response delay        tod                        Delay between a user operation instruction and the response in the image (ms)
I.120   DOF                             DOF                        Virtual reality system operable dimensions

Table 2-2 lists the output and acronyms or abbreviations of the evaluation model.

Table 2-2 Evaluation model output

ID      Output                     Full Name                         Acronym or Abbreviation    Value
O.21    Visual fidelity            Visual Fidelity                   MOSVF                      1-5 points
O.22    Acoustic fidelity          Acoustic Fidelity                 MOSAF                      1-5 points
O.23    Interaction consistency    Interaction Consistency           MOSIC                      1-5 points
O.31    Audio-visual fidelity      Audio-visual Fidelity             MOSAVF                     1-5 points
O.32    Consistency/Integrity      Re-buffering/Packet Loss          MOSRP                      1-5 points
O.33    Interaction realism        Interaction Realism               MOSIR                      1-5 points
O.41    VR presence index          Virtual Reality Presence Index    VR PI                      1-5 points
[Figure 3-3 content: ACR 5-point scale — 5 Excellent, 4 Good, 3 Fair, 2 Poor, 1 Bad]
Subjective Experiment Platform
Subjective Scoring Method
[Figure 3-1 content: an impairment emulator feeds audio and video data to head-mounted devices presenting the test scenario; motion data and subjective scores are fed back to the content and user data cloud host.]

[Figure 3-2 content: test sequences Ai, Bj, Ck (~10 s each) alternate with gray-background scoring phases (≤10 s each).]
The VR experience subjective experiment platform quantitatively controls the input parameters of the presence factor layer, provides testers with different VR experiences, and offers a subjective scoring function. Figure 3-1 shows the architecture of the subjective experiment platform, which consists of the following modules:
1. VR content service module. This module distributes VR content to the VR head-mounted devices.
2. Terminal service and technical parameter data module. This module consists of the VR head-mounted devices. Test scenarios with different quality and DOF are presented to the test personnel according to the experiment purpose, and the module feeds related technical parameters back to the user data cloud host.
3. Subjective experience data module. This module is deployed on the cloud host and is responsible for recording and collecting the subjective experience scores provided by the test personnel.
The software and hardware involved in the subjective experiment platform include: 1) HTC Vive Pro; 2) HTC Vive; 3) Pico Neo; 4) 2K monitor (refresh rate: 144 Hz); 5) 4K monitor (refresh rate: 90 Hz); 6) high-performance cloud host; 7) VR player; 8) VR scoring software; 9) test and control software for the 6 DOF head-mounted devices and handles.
Note: 1) PPD (angular resolution or spatial resolution) refers to the number of pixels per degree of FOV. The greater the PPD, the finer the detail a VR device can display and the clearer the displayed image. 2) Because the PPD and refresh rate of current commercial VR devices are low, the 2K and 4K monitors are mainly used to equivalently test the effect of PPD and frame rate on video quality.
Based on ITU-T P.913, the single-stimulus method (SSM) is used for the subjective experiments. The test video sequences are played in random order without repetition, and the test personnel evaluate each sequence in the interval that follows it, as shown in Figure 3-2. When scoring on the head-mounted devices, the test personnel control the scoring duration themselves; after scoring is complete, they click "Next" to play the next sequence.

The test personnel score on the absolute category rating (ACR) 5-point scale defined in ITU-T P.913. Figure 3-3 shows the meaning of each score.
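A small sketch of how such ACR ratings are typically aggregated into a mean opinion score (MOS) with a confidence interval, in the spirit of ITU-T P.913 (the function name and z-value are illustrative, not from the model SDK):

```python
import math

def mos_with_ci(ratings, z=1.96):
    """Aggregate ACR 5-point ratings into a mean opinion score (MOS)
    and an approximate 95% confidence half-interval."""
    n = len(ratings)
    mos = sum(ratings) / n
    # Unbiased sample variance of the individual ratings
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)
    return mos, z * math.sqrt(var / n)

mos, ci = mos_with_ci([5, 4, 4, 3, 4])  # mos = 4.0
```

The half-interval shrinks as more test personnel rate the same sequence, which is why subjective experiments use panels rather than single viewers.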
Figure 3-1 Architecture of the subjective experiment platform
Figure 3-2 Subjective experiment sequence and SSM scoring process
Figure 3-3 ACR 5-point scale
SUBJECTIVE EXPERIMENT AND ANALYSIS METHOD
Key Test and Analysis Method

1. PPD test scope building

To study the impact of resolution on user experience, PPD is introduced as a measure of pixel density, referring to the average number of pixels per degree of FOV. When a device's FOV is 110° and the monocular resolution ranges from 1K to 10K, test scenarios of 10 to 90 PPD can be realized. However, the maximum monocular resolution of current commercial VR head-mounted devices is only 2K, leading to a low PPD. To obtain full-range test data on the PPD effect on video quality, a 32-inch 4K monitor is used for an equivalent PPD test, and this test data is combined with the head-mounted-device test data for model training. The test video is played with video pixels mapped 1:1 to screen pixels (if the video resolution is lower than the screen resolution, the video image does not fill the screen), and the watching distance is adjusted to control the PPD of each test sequence, as shown in Figure 3-4. Table 3-1 lists the relationship between the equivalent PPD, video resolution, and watching distance.

2. Multi-DOF interaction realism test

The following test scenarios are developed to test users' interaction realism:
1) Head 3 DOF and hand 3 DOF; the ray bubble-breaking game is set with different head, limb, and operation interaction delays (the inherent MTP delay of the subjective experiment platform is about 27 ms).
2) Head 6 DOF and hand 3 DOF; the ray bubble-breaking game is set with different head, limb, and operation interaction delays.
3) Head 6 DOF and hand 6 DOF; the ray/touch bubble-breaking game is set with different head, limb, and operation interaction delays.
[Figure 3-4 content: for test sequences with resolution below 3840x2160, the watching distance in front of the monitor is adjusted; at 3840x2160 the video fills the screen.]

Figure 3-4 Equivalent PPD test method

Table 3-1 Equivalent PPD for a 32-inch 4K monitor

Test Sequence Resolution   Watching Distance (cm)   FOV     PPD
734x413                    36.3                     73.3°   10
1100x619                   27.2                     73.3°   15
1467x825                   36.3                     73.3°   20
2200x1238                  27.2                     73.3°   30
2934x1650                  36.3                     73.3°   40
3840x1920                  56.7                     64°     60
3840x1920                  90.3                     42.7°   90

Table 3-2 lists the DOF scenario numbers used as evaluation model input.

Table 3-2 DOF scenario number

Experiment Number   Application Details                               DOF Scenario Number
1                   Head 3 DOF + hand 3 DOF + operation 1 DOF game    7
2                   Head 6 DOF + hand 3 DOF + operation 1 DOF game    10
3                   Head 6 DOF + hand 6 DOF + operation 1 DOF game    13
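The full-screen rows of Table 3-1 can be checked numerically: the monitor's horizontal FOV follows from its physical width and the watching distance, and PPD is the horizontal pixel count divided by that FOV. A sketch (the 32-inch 16:9 monitor width is derived from the diagonal, an assumption not stated in the document):

```python
import math

def monitor_fov_deg(width_cm, distance_cm):
    """Horizontal FOV (degrees) subtended by a flat screen at a given distance."""
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

def ppd(h_pixels, fov_deg):
    """Pixels per degree: horizontal pixel count divided by horizontal FOV."""
    return h_pixels / fov_deg

# 32-inch 16:9 monitor: width = diagonal * 16 / sqrt(16^2 + 9^2) ≈ 70.8 cm
width_cm = 32 * 2.54 * 16 / math.hypot(16, 9)

fov_60 = monitor_fov_deg(width_cm, 56.7)  # ≈ 64°, so 3840 px gives ≈ 60 PPD
fov_90 = monitor_fov_deg(width_cm, 90.3)  # ≈ 42.9°, so 3840 px gives ≈ 90 PPD
```

Both results match the 60-PPD and 90-PPD rows of Table 3-1 to within rounding.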
Example of Subjective Experiment Data

Figure 3-5 shows the video quality evaluation results of the high-quality test sequences (QP = 22, 12 bits per pixel before compression, compression ratio 57, BPP after compression ≈ 0.21) at different PPD values when the frame rate is 30 fps.

Note: 1) BPP is the number of coded bits per pixel (bit). 2) QP, the H.264 quantization parameter, is the sequence number of the quantization step Qstep. For luminance (luma) encoding, Qstep takes 52 values and QP ranges from 0 to 51; QP = 0 indicates the finest quantization, and QP = 51 the coarsest. 3) BPP after compression = bits per pixel before compression / compression ratio.

[Figure 3-5 content: MOSV versus PPD (0-100) for the sequences Beauty, Bosphorus, HoneyBee, Jockey, ReadySetGo, YachtRide, Battlefield1, and Battlefield2.]

Figure 3-5 Relationship between PPD and video quality

The figure shows that the video quality sensed by the test personnel increases with pixel density (PPD), but once PPD reaches 60, further PPD increases improve video quality at a much slower rate. In addition, at the same PPD, common video images (Beauty through YachtRide) score higher than game images (Battlefield). With motion blur, common videos are displayed smoothly at a frame rate of 30 fps, whereas game images are generated in real time, so at 30 fps the test personnel can sense frame switching.

MODEL FORMULA STRUCTURE

The formula structure of the VR presence index evaluation model is as follows:

VR PI = min(max((MOSAVF - 1) · (1 - v53·(5 - MOSIR) - v54·(5 - MOSRP)) + 1, 1), 5)    (1)

In the formula, VR PI is the presence index, MOSAVF is audio-visual fidelity, MOSIR is interaction realism, and MOSRP is consistency/integrity. VR presence is based on audio-visual fidelity (MOSAVF); interaction realism (MOSIR) and consistency/integrity (MOSRP) act as experience impairment factors, through which interaction delays and increases in the UDP packet loss rate degrade user experience.

For the relationship between the sub-perception items and the related input parameters, see the following expressions:

MOSV   = f1(Video Codec, Br, FR, (Rh, Rv), ST)    (2)
MOSVF  = f2(MOSV, FoVh)                           (3)
MOSAF  = f3(Audio Codec, ABr, SP)                 (4)
MOSAVF = f4(MOSVF, MOSAF, tasyn)                  (5)
MOSIR  = f5(DOF, thd, tbd, tod)                   (6)
MOSRP  = f6(Tr, RF, ppl)                          (7)
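Formula (1) can be sketched directly in code. The impairment coefficients v53 and v54 are not published in this document, so the default values below are placeholders for illustration only:

```python
def vr_pi(mos_avf, mos_ir, mos_rp, v53=0.1, v54=0.1):
    """VR presence index per formula (1): audio-visual fidelity is the base;
    interaction realism and consistency/integrity act as impairment factors.
    The result is clamped to the 1-5 MOS range. v53/v54 are placeholders."""
    pi = (mos_avf - 1) * (1 - v53 * (5 - mos_ir) - v54 * (5 - mos_rp)) + 1
    return min(max(pi, 1), 5)
```

With no impairment (MOSIR = MOSRP = 5), VR PI reduces to MOSAVF; severe impairments drive the score down to the floor of 1 point.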
By setting the other model input parameters to favorable values, we can observe the effect of a specific input parameter on the corresponding experience evaluation module.

1. Typical PPD effect on video quality MOSV

Figure 5-1 shows the effect of typical PPD values on video quality MOSV with H.264 encoding when BPP is 0.1 and the frame rate is 120 fps. As shown in the figure, when PPD is 15, MOSV is about 3.28 points; at 20 PPD, about 3.67 points; and at 60 PPD, about 4.51 points. Above 60 PPD, MOSV increases slowly.

2. Typical monocular resolution effect on video quality MOSV

Figure 5-2 shows the effect of typical monocular resolutions on video quality MOSV with H.264 encoding when BPP is 0.1, the frame rate is 120 fps, and FOV is 110°. The MOSV corresponding to a monocular resolution of 1.5K is about 3.13 points; the MOSV corresponding to 2K (the high-performance devices available commercially) is about 3.55 points.

3. Typical frame rate effect on video quality MOSV

Figure 5-3 shows the effect of typical frame rates on video quality MOSV with H.264 encoding when BPP is 0.1 and PPD is 120.
1) Game scenario: at 30 fps, MOSV is about 3.15 points; at 60 fps, about 4.15 points; at 90 fps, over 4.52 points. Above 90 fps, video quality increases slowly.
2) Video scenario: at 30 fps, MOSV is about 3.88 points; at 60 fps, about 4.51 points; at 90 fps, MOSV reaches 4.61 points.
Figure 5-1 Relationship between PPD and video quality (MOSV)
Figure 5-2 Relationship between monocular resolution and video quality (MOSV)
Figure 5-3 Relationship between frame rate (FR) and video quality (MOSV)
TYPICAL MODEL VALUES
4. Typical FOV effect on visual fidelity MOSVF

On top of video quality, visual fidelity is mainly affected by FOV. Figure 5-4 shows the effect of typical horizontal FOV values (with the corresponding binocular FOV) on visual fidelity MOSVF with H.264 encoding when BPP is 0.1, the frame rate is 120 fps, and PPD is 120. Within the current test range of 60° to 110°, visual fidelity MOSVF increases linearly.

5. Typical head MTP effect on interaction consistency MOShd

Figure 5-5 shows the effect of typical head MTP delays on head interaction consistency MOShd. When the head MTP delay is 20 ms or less, MOShd is not impaired. At 50 ms, MOShd is 3.63; at 100 ms, 2.55; at 200 ms, it decreases to 1.47.

6. Typical body MTP effect on interaction consistency MOSbd

Figure 5-6 shows the effect of typical body MTP delays on body interaction consistency MOSbd. When the body MTP delay is 50 ms or less, MOSbd is not impaired. At 100 ms, MOSbd is 4.02; at 300 ms, 2.44; at 500 ms, it decreases to 1.71.

Note: If the head moves with body movement, for example, when leg movement leads to head movement, evaluate the experience based on head MTP.

7. Typical operation response delay effect on interaction consistency MOSod

Figure 5-7 shows the effect of typical operation response delays on operation interaction consistency MOSod. When the operation response delay is 50 ms or less, MOSod is not impaired. At 100 ms, MOSod is 3.95; at 300 ms, 2.40; at 500 ms, it decreases to 1.70.
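When the fitted curves from the SDK are not at hand, the quoted typical values can serve as anchor points for a rough piecewise-linear lookup; an illustrative sketch using the head-MTP values above (the SDK's actual curve between anchors will differ):

```python
def interp_mos(delay_ms, anchors):
    """Rough MOS lookup: linear interpolation between (delay_ms, MOS) anchors,
    flat beyond the first and last anchor."""
    pts = sorted(anchors)
    if delay_ms <= pts[0][0]:
        return pts[0][1]
    if delay_ms >= pts[-1][0]:
        return pts[-1][1]
    for (d0, m0), (d1, m1) in zip(pts, pts[1:]):
        if d0 <= delay_ms <= d1:
            return m0 + (m1 - m0) * (delay_ms - d0) / (d1 - d0)

# Head MTP anchors from the text: no impairment up to 20 ms, then 3.63 at 50 ms,
# 2.55 at 100 ms, and 1.47 at 200 ms
head_mtp = [(20, 5.0), (50, 3.63), (100, 2.55), (200, 1.47)]
```

The same helper applies to the body-MTP and operation-response-delay anchors by swapping in their respective point lists.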
Figure 5-4 Relationship between FOV (FoVh) and visual fidelity (MOSVF)
Figure 5-6 Relationship between body MTP delay (tbd) and body interaction consistency (MOSbd)
Figure 5-7 Relationship between operation response delay (tod) and operation interaction consistency (MOSod)
Figure 5-5 Relationship between head MTP delay (thd) and head interaction consistency (MOShd)
8. Typical UDP packet loss rate effect on audio-visual integrity MOSP

Figure 5-8 shows the effect of typical UDP packet loss rates on audio-visual integrity MOSP when the VR video data is transmitted over UDP. At a packet loss rate (ppl) of 0.25%, MOSP is 3.98; at 0.5%, 3.23; at 1%, 2.25.

9. Typical FEC failure rate effect on audio-visual integrity MOSP

Figure 5-9 shows the effect of typical FEC failure rates on audio-visual integrity MOSP when the video data is transmitted in UDP+FEC mode. At an FEC failure rate of 0.5%, MOSP is 4.39; at 1%, 3.88; at 2%, 3.08.
Figure 5-8 Relationship between UDP packet loss rate (ppl) and audio-visual integrity (MOSP)
Figure 5-9 Relationship between FEC failure rate and audio-visual integrity (MOSP)

MODEL APPLICATION

1. Video quality MOSV evaluation of panoramic videos with different resolutions

Table 6-1 shows, with H.264 encoding, the PPD, bit rate, and video quality MOSV when BPP is 0.1, FOV is 110°, the frame rate is 30 fps, the compressed binocular video streams do not reference each other, and the overall bit rate is therefore twice the monocular video bit rate. Even the video quality MOSV of the 8K 3D panoramic video is only "user acceptable".

Table 6-1 Experience scores of panoramic videos with different resolutions

Panoramic Horizontal Resolution   Equivalent PPD in FOV   Bit Rate (Mbit/s)   Video Quality (MOSV)
1920                              5.33                    10.55               1.54
2560                              7.11                    18.75               1.81
3840                              10.67                   42.19               2.29
7680                              21.33                   168.75              3.13

2. Best experience evaluation of mainstream commercial VR head-mounted devices

Table 6-2 describes, with H.264 encoding, the typical video quality and VR presence index of current commercial high-performance VR head-mounted devices when BPP is 0.1 and there is no consistency/integrity experience impairment.

Table 6-2 Best experience of mainstream commercial VR head-mounted devices

Device                           Performance and Specifications                                                          Service Type   Video Quality (MOSV)   Presence Index (VR PI)
HTC Vive Pro                     Monocular resolution 1440x1600, FOV 110°, refresh rate 90 Hz, head 6 DOF + hand 6 DOF   Video          2.98                   3.1
                                                                                                                         Game           3.17                   3.02
Pico Neo commercial edition      Monocular resolution 1440x1600, FOV 101°, refresh rate 90 Hz, head 6 DOF + hand 6 DOF   Video          3.11                   2.99
                                                                                                                         Game           2.98                   2.5
Xiaomi VR all-in-one appliance   Monocular resolution 1280x1440, FOV 100°, refresh rate 72 Hz, head 3 DOF + hand 3 DOF   Video          2.83                   2.99
                                                                                                                         Game           2.98                   2.88
Xiaomi VR all-in-one + NOLO      Monocular resolution 1280x1440, FOV 100°, refresh rate 72 Hz, head 6 DOF + hand 6 DOF   Video          2.83                   3.15
                                                                                                                         Game           3.07                   3.04
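The bit rates in Table 6-1 can be reproduced from the stated conditions — BPP 0.1, 30 fps, doubled for the independently coded binocular streams — if one additionally assumes a 2:1 panorama (vertical resolution = horizontal/2) and a binary (2^20) Mbit convention; both assumptions are inferred from the table values rather than stated in the document:

```python
def panoramic_bitrate_mbps(h_pixels, bpp=0.1, fps=30, eyes=2):
    """Bit rate of a 2:1 panoramic video: BPP x pixel count x frame rate,
    doubled for two independently coded eye views, in 2^20-based Mbit/s."""
    v_pixels = h_pixels // 2  # assumed 2:1 equirectangular panorama
    return bpp * h_pixels * v_pixels * fps * eyes / 2**20

rates = {w: round(panoramic_bitrate_mbps(w), 2) for w in (1920, 2560, 3840, 7680)}
# rates == {1920: 10.55, 2560: 18.75, 3840: 42.19, 7680: 168.75}
```

Under these assumptions the computed values match all four rows of Table 6-1, which also shows why 8K 3D panoramic video (~169 Mbit/s at a modest 30 fps) motivates 5G-class transport.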
3. User experience with different monocular resolutions under different network conditions

Table 6-3 shows the game service experience with different monocular resolutions under different network conditions, based on the following assumptions:
1) H.264 encoding with BPP 0.21.
2) Horizontal FOV: 110°.
3) Video DOF: head 6 DOF + hand 6 DOF + handle operation 1 DOF.

To learn more about the model and try it out, scan the QR code to access the introduction web page of the Cloud X service experience evaluation model.

From the table, we can observe the current state of the VR industry and plan its future development path, focusing on user experience improvement based on 5G network transmission and motion prediction technology without interaction consistency impairment:

1. Considering the display and content capabilities of current mainstream head-mounted devices, the presence index is 3.09 points, only a "user acceptable" level, when the monocular resolution is 1.5K and the frame rate is 60 fps.
2. Only when the monocular resolution reaches 2K and the frame rate 90 fps (or the monocular resolution 3K and the frame rate 60 fps) can the presence index reach 3.5 points, which serves as the experience improvement goal for the next two to three years.
3. A presence index of 4 points (user experience: "good") is the goal after three years, requiring a monocular resolution of 6K and a frame rate of 90 fps. A monocular resolution of 4K with a frame rate of 90 fps and a presence index of 3.81 can serve as a transition goal.
For details about the VR service experience status and industry development path, see Figure 6-1.
This document is edited by Huawei X Labs.

Due to the rapid development of related technologies in the 5G E2E industry, this document is for reference only and cannot be used as a basis for investment research or decision-making. All statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied. We may supplement, correct, and revise the relevant information without notice, but do not guarantee immediate release of the revised version. Huawei assumes no responsibility for any direct or indirect investment profit or loss arising from the statements, information, and recommendations in this document.

This document is the intellectual property of Huawei. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent. If any content of this report is released by any other party in the form of a reference, Huawei shall be cited as the source. Any citation, deletion, or modification shall not violate the original meaning of this report.
CONCLUSION

The era of 5G has come. We believe that this research provides an important reference for Cloud VR service experience evaluation and an effective theoretical basis for predicting Cloud VR service experience in the planning phase. To enhance industry cooperation, the evaluation model algorithm will be released and shared in the form of free SDKs. Partners in the VR industry are welcome to try it and give feedback.

Figure 6-1 VR service experience status and industry future development path