Afshin Headshot

We sat down with Afshin, our Engineering Intern, to talk about what he does here at Pixvana, 360 video quality, and company culture.

What do you do at Pixvana? Walk me through a day in your life here.

I work on assessing the perceptual quality of 360-degree videos, which essentially means measuring the effect of different projections on quality. Ideally, a 360 video is represented as a sphere, and the video must be mapped to a 2D plane (an equirectangular projection, for example) before encoding and streaming. There are various projection methods, and at Pixvana we are working on novel projections that deliver high quality while requiring less bandwidth. It’s important to make sure that a projection provides a high-quality viewing experience.
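For readers unfamiliar with equirectangular projection, here is a minimal sketch (not Pixvana’s production code; the latitude/longitude convention is just one common assumption) of how a pixel in an equirectangular frame maps back to a direction on the sphere:

```python
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Map pixel (u, v) in an equirectangular frame to a unit-sphere direction.
    Assumes longitude spans [-pi, pi] across the width and latitude spans
    [-pi/2, pi/2] across the height (one common convention)."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    x = np.cos(lat) * np.cos(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.sin(lon)
    return np.array([x, y, z])
```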

Designing and building a program that performs objective quality assessment for 360 videos is not a straightforward task, especially for projections whose quality varies around the sphere. I research the most up-to-date quality assessment methods and try to design and implement reliable ways to measure that quality.
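One published approach to the varying-quality problem is weighted-to-spherically-uniform PSNR (WS-PSNR), which weights each row of an equirectangular frame by the cosine of its latitude to compensate for oversampling near the poles. The sketch below is purely illustrative and is not the assessment tool being built at Pixvana:

```python
import numpy as np

def ws_psnr(reference, distorted):
    """Weighted-to-spherically-uniform PSNR for 8-bit equirectangular frames.
    Rows near the poles are oversampled in the 2D frame, so each row's squared
    error is weighted by cos(latitude) before averaging."""
    ref = reference.astype(np.float64)
    dist = distorted.astype(np.float64)
    height = ref.shape[0]
    rows = np.arange(height)
    weights = np.cos((rows + 0.5 - height / 2.0) * np.pi / height)
    # Broadcast the per-row weight across columns (and channels, if any).
    w = weights.reshape((height,) + (1,) * (ref.ndim - 1))
    wmse = np.sum(w * (ref - dist) ** 2) / np.sum(w * np.ones_like(ref))
    return 10.0 * np.log10(255.0 ** 2 / wmse)
```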

What’s your background?

I studied Computer Engineering during undergrad and then got my Master’s in Information Technology from Sharif University of Technology. I have always been interested in multimedia, particularly digital video. I am always trying to learn more about digital video and apply my knowledge to improve users’ quality of experience.

I’ve been researching multimedia networking, which is about streaming video over a network. Video streaming is very challenging, especially under difficult network conditions. My Master’s project focused on video streaming over Vehicular Ad Hoc Networks (VANETs), and I designed a video encoding technique that is robust to the lossy channel conditions of VANETs.

Can you tell me a little bit about your work on adaptive 360-degree video streaming using layered coding?

My current PhD research at UT-Dallas is about improving the quality of experience for video streaming applications when network conditions are very dynamic, as in cellular networks. With the recent advances in head-mounted displays (HMDs) and VR technologies, streaming 360 video has become feasible, but it is still not mature. Much work has been done on adaptive streaming of high-resolution 360 videos to reduce bandwidth requirements, but there are still many unsolved problems. For example, all existing adaptive 360 video streaming methods share a buffering problem: the client cannot buffer more than about 2 seconds of video, because a user’s future viewport, or viewing region, cannot be predicted further ahead than that. With such a short buffer, even short-lived drops in network bandwidth stall video playback, and those stalls hurt the quality of experience. My poster proposes a method based on scalable video coding that solves this buffering problem. We received second place for best poster at the 2017 IEEE Virtual Reality Conference.
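To make the buffering argument concrete, here is a small, self-contained sketch (hypothetical names and constants, not the method from the poster) of how a layered-coding client could prefetch a viewport-independent base layer far ahead while requesting viewport-dependent enhancement segments only within the short prediction horizon:

```python
BASE_BUFFER_TARGET = 20.0   # seconds of base layer the client tries to hold
VIEWPORT_HORIZON = 2.0      # seconds ahead that viewport prediction is trusted
SEGMENT_DURATION = 1.0      # seconds per downloadable segment

def schedule(playback_pos, base_buffered_until, enh_buffered_until, predicted_tiles):
    """Return the segment requests to issue next.

    Timestamps are in seconds; predicted_tiles maps a segment start time to
    the tile indices expected to be visible during that segment."""
    requests = []
    # Base layer does not depend on where the user looks, so it can be
    # buffered far past the prediction horizon; short bandwidth drops then
    # no longer stall playback.
    t = base_buffered_until
    while t - playback_pos < BASE_BUFFER_TARGET:
        requests.append(("base", t))
        t += SEGMENT_DURATION
    # Enhancement layer: only fetch tiles inside the trusted horizon.
    t = enh_buffered_until
    while t - playback_pos < VIEWPORT_HORIZON:
        for tile in predicted_tiles.get(t, []):
            requests.append(("enhancement", t, tile))
        t += SEGMENT_DURATION
    return requests

# Example: playback at 10 s, base buffered to 14 s, enhancement to 11 s.
print(schedule(10.0, 14.0, 11.0, {11.0: [3, 4], 12.0: [4, 5]}))
```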

Adaptive 360-Degree Video Streaming using Layered Coding Infographic

 

What drew you to VR? What do you find most compelling about X-Reality?

I believe that X-Reality (XR) is the next medium. It will become more popular as head-mounted displays improve, offering wider fields of view (FoV) and higher pixel density. XR provides a close-to-reality experience at a low cost for users; someone who wants to try riding a space shuttle, for example, can do so with XR. It has enormous applications in education, entertainment, and elsewhere.

What have you learned since starting at Pixvana? Have there been any big surprises?

The way that people at Pixvana work together has impressed me. Before joining Pixvana, I thought that this level of performance would be achievable only in a stressful environment, but working here is really different: we are progressing quickly on our projects in a cheerful atmosphere. I think what makes this possible is our regular goal-planning meetings and checking our progress by giving demos to the whole development team. This keeps us accountable to each other and ensures that we deliver.

What’s one of your favorite VR experiences?

I love watching 360 videos of experiences that are difficult to have in real life, like travelling to space or sitting in a fighter jet cockpit.

What’s next for you?

My next step is to continue my research on 360 video streaming and finish my PhD dissertation. I’ll have a clearer direction in my research after working with Pixvana, where I’ve been learning about many challenges and problems in this field. I just found out that my paper, which details experiments that we outlined in the poster, has been accepted at the Association for Computing Machinery’s annual conference on multimedia.

Thanks so much for taking the time to educate us about the cutting-edge research you’re doing, and best of luck finishing your dissertation!

About Afshin Taghavi Nasrabadi

Afshin is a Computer Science PhD student at the University of Texas at Dallas, where he researches multimedia and computer networks. He got his BSc from Isfahan University of Technology in Computer Engineering in 2011, and his MSc from Sharif University of Technology in Information Technology in 2013. After completing his Master’s, Afshin worked as a researcher at the Institute for Research in Fundamental Sciences, where he worked on performance evaluation of H.265 encoders and quality assessment of channel-distorted videos. Afshin loves music, especially classical music, and spends his spare time composing music and playing piano. His undergrad research project was about automated music composition using AI.
