Saturday, 20 December 2014

Firefox video playback's skip-to-next-keyframe behavior

One of the quirks of Firefox's video playback stack is our skip-to-next-keyframe behavior. The purpose of this blog post is to document the tradeoffs skip-to-next-keyframe makes.

The fundamental question that skip-to-next-keyframe answers is, "What do we do when the video decode can't keep up with the playback speed?"

Video playback is a classic producer/consumer problem. You need to ensure that your audio and video stream decoders produce decoded samples at a rate no less than the rate at which the audio/video streams need to be rendered. You also don't want to produce decoded samples at a rate too much greater than the consumption rate, else you'll waste memory.

For example, if we're running on a low end PC, playing a 30 frames per second video, and the CPU is so slow that it can only decode an average of 10 frames per second, we're not going to be able to display all video frames.

This is also complicated by our video stack's legacy threading model. Our first video decoding implementation did the decoding of the video and audio streams on the same thread. We assumed that we were using software decoding, because we were supporting Ogg/Theora/Vorbis, and later WebM/VP8/Vorbis, codecs for which only software decoders are commonly available.

The pseudocode for our "decode thread" used to go something like this:
 
// A single thread decodes both streams, alternating between them.
while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) {
    DecodeSomeAudio();
  }
  if (!HaveEnoughVideoDecoded()) {
    DecodeSomeVideo();
  }
  // Both queues are comfortably full; sleep until playback drains them.
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) {
    SleepUntilRunningLowOnDecodedData();
  }
}

 
This was an unfortunate design, but it certainly made some parts of our code much simpler and easier to write.

We've recently refactored our code, so it no longer looks like this, but for some of the older backends that we support (Ogg, WebM, and MP4 using GStreamer on Linux), the pseudocode is still effectively (though not explicitly or obviously) this. MP4 on Windows, Mac OS X, and Android in Firefox 36 and later now decodes asynchronously, so we are no longer limited to decoding on only one thread.

The consequence of decoding audio and video on the same thread only really bites on low end hardware. I have an old Lenovo x131e netbook, which on some videos can take 400ms to decode a Theora keyframe. Since we use the same thread to decode audio as video, if we don't have at least 400ms of audio already decoded while we're decoding such a frame, we'll get an "audio underrun": we don't have enough audio decoded to keep up with playback, so we end up glitching the audio stream, which sounds very jarring to the listener.

Humans are very sensitive to sound; the audio stream glitching is much more jarring to a human observer than dropping a few video frames. The tradeoff we made was to sacrifice the video stream playback in order to not glitch the audio stream playback. This is where skip-to-next-keyframe comes in.

With skip-to-next-keyframe, our pseudocode becomes:

while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) {
    DecodeSomeAudio();
  }
  if (!HaveEnoughVideoDecoded()) {
    bool skipToNextKeyframe =
      (AmountOfDecodedAudio() < LowAudioThreshold()) ||
      HaveRunOutOfDecodedVideoFrames();
    DecodeSomeVideo(skipToNextKeyframe);
  }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) {
    SleepUntilRunningLowOnDecodedData();
  }
}


We also monitor how long a video frame decode takes, and if a decode takes longer than the low-audio-threshold, we increase the low-audio-threshold.
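
In the same pseudocode style, the adaptation looks something like this (the doubling factor and names here are illustrative, not the actual constants in our code):

// Run after each video frame decode completes:
videoDecodeTime = TimeTakenToDecodeLastVideoFrame();
if (videoDecodeTime > LowAudioThreshold()) {
  // A single frame took longer than our safety margin, so grow the
  // margin; skip-to-next-keyframe will engage earlier next time.
  SetLowAudioThreshold(min(2 * videoDecodeTime, AmpleAudioThreshold()));
}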

If we pass a true value for skipToNextKeyframe to the decoder, it is supposed to give up and skip its decode up to the next keyframe. That is, don't try to decode anything between now and the next keyframe.

Video frames are typically encoded as a sequence of full images (called "key frames", "reference frames", or I-frames in H.264 speak) followed by some number of frames which are "diffs" from the key frame (P-frames in H.264 speak). (H.264 also has B-frames, which are diffs against frames both before and after the current frame, which can lead the encoded stream to be muxed out-of-order.)

The idea here is that we deliberately drop video frames in the hope that we give time back to the audio decode, so we are less likely to get audio glitches.
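
Conceptually, a decoder honouring the skip flag does something like the following (again pseudocode; a sketch of the intent, not any particular backend's code):

DecodeSomeVideo(skipToNextKeyframe) {
  if (skipToNextKeyframe) {
    // Throw away demuxed samples without decoding them, until the next
    // sample is a keyframe. Decode can restart cleanly at a keyframe,
    // since keyframes don't depend on any earlier frames.
    while (!VideoDecodeFinished() && !NextVideoSampleIsKeyframe()) {
      DiscardNextVideoSample();
    }
  }
  DecodeNextVideoSample();
}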

Our implementation of this idea is not particularly good.

Often on low end Windows machines playing HD videos without hardware accelerated video decoding, you'll get a run of say half a second of video decoded, then we'll skip everything up to the next keyframe (a couple of seconds), play another half a second, and skip again, ad nauseam, giving a slightly weird experience. In the extreme, you can end up with only the keyframes decoded, or even no frames at all if we can't get the keyframes decoded in time. Even when it works well enough, you can still get a couple of audio glitches at the start of playback until the low-audio-threshold adapts to a large enough value, after which playback is smooth.

The FirefoxOS MediaOmxReader also never implemented skip-to-next-keyframe correctly, so our behavior there is particularly bad. This is compounded by the fact that FirefoxOS typically runs on lower end hardware anyway. The MediaOmxReader doesn't actually skip the decode up to the next keyframe, it decodes everything up to the next keyframe. This makes the video decode hog the decode thread for even longer, giving the audio decode even less time, which is the exact opposite of what you want. What it should do is skip the demux of video up to the next keyframe, but if I recall correctly there were bugs in the Android platform's video decoder library that FirefoxOS is based on that made this unreliable.

All these issues occur because we share the same thread for audio and video decoding. This year we invested some time refactoring our video playback stack to be asynchronous. This enables backends that support it to do their decoding asynchronously, on their own threads. Since audio then decodes on a separate thread to video, we should get glitch-free audio even when the video decode can't keep up, even without engaging skip-to-next-keyframe. We still need to do something like skipping the video decode when it falls behind, but it can probably engage less aggressively.

I did a quick test the other day on a low end Windows 8.0 tablet with an Atom Z2760 CPU with skip-to-next-keyframe disabled and async decoding enabled, and although the video decode falls behind and gets out of sync with audio (frames are rendered late) we never glitched audio.

So I think it's time to revisit our skip-to-next-keyframe logic, since we don't need to sacrifice video decode to ensure that audio playback doesn't glitch.

When using async decoding we still need some mechanism like skip-to-next-keyframe to ensure that when the video decode falls behind it can catch up. The existing logic to engage skip-to-next-keyframe also performs that role, but often we enter skip-to-next-keyframe and start dropping frames when the video decode could actually keep up if we just gave it a chance. This often happens when switching streams during MSE playback.

Now that we have async decoding, we should experiment with modifying the HaveRunOutOfDecodedVideoFrames() logic to be more lenient, to avoid unnecessary frame drops during MSE playback. One idea would be to only engage skip-to-next-keyframe if we've missed several frames. We need to experiment on low end hardware.
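
For example, the test that engages skipping could become something like this (hypothetical names; an untested idea, not shipped code):

bool ShouldSkipToNextKeyframe() {
  // Don't skip the moment the decoded-frame queue first runs dry; only
  // skip once we've failed to paint several frames in a row, i.e. the
  // video decode demonstrably can't keep up.
  return NumConsecutiveLateFrames() > kLateFrameTolerance;
}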

Wednesday, 19 February 2014

How to prefetch video/audio files for uninterrupted playback in HTML5 video/audio

Sometimes when you're playing a media file using an HTML5 <video> or <audio> element, or with WebAudio, you really want to be sure that the whole audio/video file is totally downloaded before you start playing it. For example, you may be writing a game, and you want to be sure all your sound effects are preloaded, so there's no delay between your animations and your sound effects while the network downloads the remainder of the file.

So how can you be sure a media resource is fully downloaded before beginning playing it? You could wait for the "canplaythrough" event to fire on all your media elements, but that event is not fired correctly by Chrome.

A more reliable solution is to prefetch the video/audio file using XHR/AJAX requests, and play the video/audio from a Blob URI.

Here's a simple JS snippet that downloads a file using XHR. The function accepts callbacks to return results.

function prefetch_file(url,
                       fetched_callback,
                       progress_callback,
                       error_callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  // Ask for the response as a Blob, so we can make a blob URI out of it.
  xhr.responseType = "blob";

  xhr.addEventListener("load", function () {
    if (xhr.status === 200) {
      var URL = window.URL || window.webkitURL;
      var blob_url = URL.createObjectURL(xhr.response);
      fetched_callback(blob_url);
    } else {
      error_callback();
    }
  }, false);

  // Report download progress, but only when the percentage changes.
  var prev_pc = 0;
  xhr.addEventListener("progress", function(event) {
    if (event.lengthComputable) {
      var pc = Math.round((event.loaded / event.total) * 100);
      if (pc !== prev_pc) {
        prev_pc = pc;
        progress_callback(pc);
      }
    }
  });
  xhr.send();
}

When the file is successfully downloaded, the fetched_callback is called with an argument which is the blob URI. You can simply set this as the src of an audio or video element and can then play the fully-downloaded resource. You can also set the same blob URI as the src of multiple audio/video elements, and the downloaded data won't be re-downloaded or duplicated/copied in memory.

There's also a progress_callback that's called with a percentage complete parameter as the file is downloaded, and an error_callback that's called when the download fails.
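
Putting it together, usage looks something like this (the URL and element ID here are made up for illustration):

prefetch_file("http://example.com/sound-effect.mp3",
  function(blob_url) {
    // Success: the whole file is downloaded, so playback can't stall
    // on the network.
    var audio = document.getElementById("player"); // hypothetical element
    audio.src = blob_url;
    audio.play();
  },
  function(percent_complete) {
    console.log("Downloaded " + percent_complete + "%");
  },
  function() {
    console.log("Download failed!");
  });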

For a working demo: prefetching a video file before playback using HTML5 demo

Tuesday, 3 December 2013

Why does the HTML fullscreen API ask for approval after entering fullscreen, rather than before?

The HTML fullscreen API is a little different from other JS APIs that require permission, in that it doesn't ask permission before entering fullscreen, it asks forgiveness *after* entering fullscreen.

Firefox's fullscreen approval dialog, which asks "forgiveness" rather than permission.

The rationale for having our fullscreen API implementation ask forgiveness rather than request permission is to make it easier on script authors.

When the original API was designed, we had a number of HTML/JS APIs, like the geolocation API, that would ask permission first. The user was prompted to approve, deny, or ignore the request, though they could bring the request up again later from an icon in the URL bar and approve it then.

Geolocation approval dialog, from Dive Into HTML's geolocation example.

The problem with this design for script authors is that they can't tell if the user has ignored the approval request, or is just about to go back and approve it by bringing up the geolocation door-hanger again.

This model of requesting permission has been seen to cause problems for web apps in the wild using the geolocation API. If a user ignores the geolocation permission request, the web app often doesn't work right, and if the user approves the request some time later, the site often still doesn't start working correctly. The app just doesn't know whether it should throw up a warning, or whether it's about to be granted permission.

So the original developers of the fullscreen spec (Robert O'Callahan at first; later I and others were involved) opted to solve this problem by having our implementation ask forgiveness: once you've entered fullscreen, the user is asked to confirm the action.

This forces the user to approve or deny the request immediately, and this means that script will immediately know whether fullscreen was engaged, so script will know whether it needs to take its fallback path or not.
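
Concretely, a page can observe the outcome straight away. Here's a rough sketch using Firefox's prefixed API names (current at the time of writing; the element ID is hypothetical):

var elem = document.getElementById("player"); // hypothetical element

document.addEventListener("mozfullscreenchange", function() {
  if (document.mozFullScreenElement === elem) {
    // Fullscreen was engaged; switch to the fullscreen UI.
  } else {
    // We left fullscreen, e.g. because the user denied the approval
    // dialog; take the windowed fallback path.
  }
});

document.addEventListener("mozfullscreenerror", function() {
  // The request was refused outright (e.g. not user initiated);
  // take the fallback path.
});

elem.mozRequestFullScreen();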

Note that the specification for requestFullscreen() defines that most of the requestFullscreen() algorithm runs asynchronously, so there is scope to change the fullscreen approval dialog into a permission request before entering fullscreen, if future maintainers or other implementors/browsers wish to do so.

Sunday, 24 November 2013

The rise and fall of Movie Rotator

Several years ago my Dad came to me and asked me how to rotate a video he'd recorded on his camera. He'd turned the camera sideways to record a video of someone standing and talking (in portrait orientation), but when he played the video back on his computer it was rendered in landscape orientation by his media player, i.e. not the same way he'd shot it, so the picture was off by 90 degrees.

My Dad was running Windows XP, and so didn't have a platform video editor; I set about making one.

I built and released Movie Rotator version 1.0 in March 2008. I learned a lot, and I shipped my first complete product of my own. By May 2010 I was consistently getting 4,000 downloads per month. Movie Rotator 1 was free and open source, and used the Quicktime runtime to set a rotation matrix in the MP4/MOV container, causing the video to be rotated by the media player. It was a simple solution, and didn't require the video to be re-encoded, but some media players didn't honour the matrix stored in the container, so it didn't work with some players, most notably YouTube. There were other problems that became apparent later too.

Movie Rotator 1

I never wanted to charge for Movie Rotator 1; I was never sure whether it would take off, or whether there would be enough demand for it.

Fast forward to late 2012, and I learned how to use Windows Media Foundation to play back videos as part of my H.264/AAC/MP4 support work in Firefox on Windows. I had been looking forward to learning this on "company time", as I could then use those skills to make Movie Rotator 2, a newer version that re-encoded the video with the rotation applied, solving all the problems in version 1. Movie Rotator 1 was still getting 4,000 downloads per month, and had steady Google Adsense income. Competitors existed. They charged for their software. In January 2013 I started work on Movie Rotator 2. This time I intended to try selling it, to make some money for my family.

I sacrificed to work on Movie Rotator 2. My daughter was only a year old at the time, so others had to fill in for me while I was working nights and weekends on this project. I started to skip the Saturday morning black belt class to work on Movie Rotator 2.

After months of work I had learned a huge amount working on Movie Rotator 2. I learned all about encoding and playback with WMF. I also had to learn about Direct3D9, Direct2D, and a bunch of C++11 features. I learned a little about crypto and how software license keys are made and validated. I also learned to ruthlessly prioritize. In the evenings I was exhausted from work and parenting, so I saved the hard problems for Saturdays, when working would greatly inconvenience my family. So I spent my time carefully.

Then in August 2013 I checked my webserver's logs again. Downloads were down, way down: only 1,900 per month, and traffic was dropping steadily. I extrapolated that, in all likelihood, Movie Rotator didn't have long left.

I'd always assumed that Movie Rotator was a product only good for a few years, but I'd expected a few more yet.

What to do? Development of the software was almost finished. I'd been on the cusp of engaging an accountant to incorporate, and a lawyer to sort out a software license/purchase agreement... but now I wasn't certain I'd recoup the expense. I was gutted. I'd sacrificed so much, and rolled snake eyes.

So I released a free version of Movie Rotator 2 in September 2013, and today scrubbed the code and committed it to posterity as open source. Movie Rotator 2's code is now available on GitHub.

Movie Rotator 2

I benefited greatly from building this product. I learned an awful lot, way more than on Movie Rotator 1. I really learned how to focus and prioritize. But it is still a bitter pill; all that sacrifice, and I didn't achieve my goal.

I guess the world has changed. My theory is that people mostly record and watch their videos on their mobile phones now, and those handle video rotation just fine; they have to. At least now that my code is open sourced, it may still help someone someday, possibly even me. I'd like to take more time to clean it up, and experiment with the design to see how various patterns work. It's good to have a moderately sized codebase to test such things on. You never know what may happen: my DirectShow Firefox code languished for years before finally being resurrected this year to ship in Firefox, so who knows...

Changing a failed mSATA hard disk in an HP Spectre XT Ultrabook

The hard drive in my wife's HP Spectre XT Envy Ultrabook died last weekend. Once we ran scandisk on it we realised it had about 20% bad sectors. It was easy enough to replace, but I learned a few things that would make it easier for others, and I didn't find any info from others who had faced this.

Unscrewing the laptop's case's screws is easy.

First thing: the hard drive is actually an mSATA disk. I was expecting a 2.5" SSD when I opened up the case, but mSATA drives are only about as big as a matchbox, and a few millimetres high. The trick to opening up the case is to use a small flathead screwdriver and twist it to cause the fastenings to unlock. Having two screwdrivers also made things easier; leaving one wedged between the top and bottom of the case while levering with the other made it easier to open the case.

Twist the screwdriver to quickly separate the laptop shell.
Installing the new mSATA drive in our HP Spectre XT.

Secondly, we didn't have recovery media. Things would have been a lot easier if we'd created some before the disk failed! We could have flashed either a DVD or a USB pen drive using the system recovery media tool that HP ships, but the internal drive was already too far gone by the time we tried to do this, so it failed. Also, when the tool says it requires a minimum of 16 GB free space on a USB drive to create recovery media, it means it. We bought a 16GB pen drive, and ended up having to go back to the store for a 32GB one (and then the recovery media creation failed anyway, because of the bad sectors on the laptop's drive). Moral of the story: create recovery media before the disk fails, and don't buy a USB pen drive that's only as big as the specified minimum. You'll have a bad time.

Thirdly, I made the mistake of buying a replacement mSATA drive that was smaller than the original: the replacement I bought was 240GB, whereas the original was 256GB. This meant that Clonezilla refused to clone the old drive onto the replacement, even though we had plenty of empty space on the disk. I also had to run Clonezilla with some special parameters to ignore the bad sectors, which would be intimidating for some people, I'm sure. Moral of the story: don't buy a replacement smaller than the original. You'll have a bad time.

Since I couldn't clone the old drive, I opted to do a clean install. Luckily I was able to use a Win7 Home Premium x64 ISO image I had downloaded from my work intranet to create a bootable USB Windows installer, and the install (using the product key on the bottom of the laptop!) went smoothly. I was able to easily find the drivers on HP's support website. Then it was just a matter of installing Windows Updates for hours...

All in all, replacing the drive was easy enough. I could have made it easier by creating recovery media before the disk failed, and by buying a replacement mSATA drive that was no smaller than the original.

Wednesday, 13 November 2013

What do the H.264/avc1 codecs parameters for video/mp4 MIME types mean?

The HTMLMediaElement.canPlayType() API enables you to query what video formats a user agent can play. For "video/mp4", the container for H.264/AAC, you can specify a "codecs" parameter that denotes the H.264 profile and level. Firefox doesn't currently handle the MP4 codecs parameter very well, so I took it upon myself to figure out what the codecs parameters mean for H.264.
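
For example, a page can ask whether H.264 Main profile, level 3.0 video with AAC-LC audio is supported like so (avc1.4D401E is unpacked below; mp4a.40.2 denotes AAC-LC):

var video = document.createElement("video");
// Returns "probably", "maybe", or "" depending on the user agent's support.
var answer = video.canPlayType('video/mp4; codecs="avc1.4D401E, mp4a.40.2"');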

According to RFC 6381, The 'Codecs' and 'Profiles' Parameters for "Bucket" Media Types, the codecs parameter for H.264 is contained in the "avc1" sample entry, and is represented as follows:

avc1.PPCCLL

That is, the string "avc1." (or "avc2.", I'm not sure what the difference is yet), followed by 3 bytes represented in hex without the "0x" prefix, where the bytes represent the following:

PP = profile_idc
CC = constraint_set flags
LL = level_idc

These fields are defined in Annex A of the twinned ITU-T H.264 and ISO/IEC 14496-10:2012 standards. ITU-T H.264 can be downloaded for free.

profile_idc defines the H.264 profile. ITU-T H.264 doesn't have a single table listing what the different profile_idc values mean, but handily, Microsoft defines an eAVEncH264VProfile enumeration based on the decimal values of profile_idc in Codecapi.h (available on Win7):

enum eAVEncH264VProfile {
  eAVEncH264VProfile_unknown                    = 0,
  eAVEncH264VProfile_Simple                     = 66,
  eAVEncH264VProfile_Base                       = 66,
  eAVEncH264VProfile_Main                       = 77,
  eAVEncH264VProfile_High                       = 100,
  eAVEncH264VProfile_422                        = 122,
  eAVEncH264VProfile_High10                     = 110,
  eAVEncH264VProfile_444                        = 144,
  eAVEncH264VProfile_Extended                   = 88,
  eAVEncH264VProfile_ScalableBase               = 83,
  eAVEncH264VProfile_ScalableHigh               = 86,
  eAVEncH264VProfile_MultiviewHigh              = 118,
  eAVEncH264VProfile_StereoHigh                 = 128,
  eAVEncH264VProfile_ConstrainedBase            = 256,
  eAVEncH264VProfile_UCConstrainedHigh          = 257,
  eAVEncH264VProfile_UCScalableConstrainedBase  = 258,
  eAVEncH264VProfile_UCScalableConstrainedHigh  = 259
};


So for example, avc1.4D401E has a profile_idc of 0x4D, which is 77 in decimal, so it's Main profile.

The constraint_set flags byte encodes six bit flags, named constraint_set0_flag through constraint_set5_flag. The meaning of a constraint_setN_flag being set depends on the profile being represented. The bits are stored with constraint_set0_flag in the high bit, so the two low bits are padding zeros. Continuing our example, in avc1.4D401E the constraint_set flags byte is 0x40, which is 01000000 in binary, so constraint_set1_flag is set.

The level number, level_idc, is a fixed point number, from 1 to 5.2, with an oddball 1b level, as defined in ITU-T H.264 in Table A-1, "Level limits". level_idc is encoded in the codecs parameter as 10 times the level number, i.e. level 5.1 is represented as decimal 51, or hex 0x33. So continuing our example avc1.4D401E, the level_idc is 0x1E, or 30 decimal, so level 3.0.

Level 1b is an oddball encoding: it is encoded as level_idc 11 decimal with constraint_set3_flag equal to 1. If constraint_set3_flag is not equal to 1, that level_idc means level 1.1.

Microsoft also conveniently defines this in the eAVEncH264VLevel enumeration:

enum eAVEncH264VLevel {
  eAVEncH264VLevel1    = 10,
  eAVEncH264VLevel1_b  = 11,
  eAVEncH264VLevel1_1  = 11,
  eAVEncH264VLevel1_2  = 12,
  eAVEncH264VLevel1_3  = 13,
  eAVEncH264VLevel2    = 20,
  eAVEncH264VLevel2_1  = 21,
  eAVEncH264VLevel2_2  = 22,
  eAVEncH264VLevel3    = 30,
  eAVEncH264VLevel3_1  = 31,
  eAVEncH264VLevel3_2  = 32,
  eAVEncH264VLevel4    = 40,
  eAVEncH264VLevel4_1  = 41,
  eAVEncH264VLevel4_2  = 42,
  eAVEncH264VLevel5    = 50,
  eAVEncH264VLevel5_1  = 51,
  eAVEncH264VLevel5_2  = 52
};
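
Putting this all together, here's a minimal sketch in C of unpacking the three fields from a codecs string (happy path only):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Unpacks "avc1.PPCCLL", e.g. "avc1.4D401E", into its three fields. */
int parse_avc1_codecs(const char* codecs) {
  if (strlen(codecs) != 11 || strncmp(codecs, "avc1.", 5) != 0) {
    return 1;
  }
  unsigned long bytes = strtoul(codecs + 5, NULL, 16);
  unsigned profile_idc = (bytes >> 16) & 0xFF; /* PP */
  unsigned constraints = (bytes >> 8) & 0xFF;  /* CC */
  unsigned level_idc = bytes & 0xFF;           /* LL */

  /* Level 1b special case: level_idc 11 with constraint_set3_flag
     (bit 0x10) set; otherwise level_idc 11 means level 1.1. */
  if (level_idc == 11 && (constraints & 0x10)) {
    printf("profile_idc=%u constraints=0x%02X level=1b\n",
           profile_idc, constraints);
  } else {
    printf("profile_idc=%u constraints=0x%02X level=%u.%u\n",
           profile_idc, constraints, level_idc / 10, level_idc % 10);
  }
  return 0;
}

int main(void) {
  /* Prints: profile_idc=77 constraints=0x40 level=3.0 */
  return parse_avc1_codecs("avc1.4D401E");
}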

Tuesday, 20 August 2013

Mozilla at the New Zealand Programming Contest

On Saturday Mozilla sponsored the Auckland site of the New Zealand Programming Contest. We supplied t-shirts for the participants, competed in the competition, and provided pizza for dinner afterwards.


Word must have got out that we were giving away swag, as the contest had double the usual number of participants, around 120 people, and we ran out of t-shirts!

I've been wanting to do this for a while. I learned a lot about coding during training for the programming contest while I was at university, so I think it's a great way to encourage the next generation to hone their skills.

We're also looking for interns to join us over the summer, so I took the opportunity to make a plug for our 2013 Mozilla Auckland Internship intake. I think it's a good way to get targeted exposure to the types of people we want to hire, too.

We did well in the competition too, largely thanks to Edwin Flores, who represented Australasia at the ACM Programming Contest world finals a few years back. Go Team!