Copyright © 2024 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.
This specification extends HTMLMediaElement [HTML] to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
On top of editorial updates, substantive changes since publication as a W3C Recommendation in November 2016 are:
- Addition of a changeType() method to switch among codecs or bytestreams
- Support for MediaSource objects off the main thread in dedicated workers
- Removal of the createObjectURL() extension to the URL object following its integration in the File API [FILEAPI]
- Addition of the ManagedMediaSource, ManagedSourceBuffer, and BufferedChangeEvent interfaces supporting power-efficient streaming and active buffered media cleanup by the user agent
For a full list of changes made since the previous version, see the commits.
The working group maintains a list of all bug reports that the editors have not yet tried to address.
Implementors should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing this specification before it eventually reaches the Candidate Recommendation stage should track the GitHub repository and take part in the discussions.
This document was published by the Media Working Group as an Editor's Draft. Publication as an Editor's Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 03 November 2023 W3C Process Document.
This section is non-normative.
This specification allows JavaScript to dynamically construct media streams for <audio> and <video>. It defines a MediaSource object that can serve as a source of media data for an HTMLMediaElement. MediaSource objects have one or more SourceBuffer objects. Applications append data segments to the SourceBuffer objects, and can adapt the quality of appended data based on system performance and other factors. Data from the SourceBuffer objects is managed as track buffers for audio, video and text data that is decoded and played. Byte stream specifications used with these extensions are available in the byte stream format registry [MSE-REGISTRY].
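A minimal sketch of this flow follows: the application creates a MediaSource, attaches it to a video element, appends fetched segments, and signals end of stream. The segment URLs and the MIME/codec string are hypothetical placeholders.

```js
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  for (const url of ['init.mp4', 'seg1.m4s', 'seg2.m4s']) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // appendBuffer() is asynchronous; wait for the update to complete.
    await new Promise(r => sourceBuffer.addEventListener('updateend', r, { once: true }));
  }
  mediaSource.endOfStream();
});
```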
This specification was designed with the following goals in mind:
This specification defines:
The track buffers that provide coded frames for the enabled audioTracks, the selected videoTracks, and the "showing" or "hidden" textTracks. All these tracks are associated with SourceBuffer objects in the activeSourceBuffers list.
A presentation timestamp range used to filter out coded frames while appending. The append window represents a single continuous time range with a single start time and end time. Coded frames with presentation timestamp within this range are allowed to be appended to the SourceBuffer while coded frames outside this range are filtered out. The append window start and end times are controlled by the appendWindowStart and appendWindowEnd attributes respectively.
A unit of media data that has a presentation timestamp, a decode timestamp, and a coded frame duration.
The duration of a coded frame. For video and text, the duration indicates how long the video frame or text SHOULD be displayed. For audio, the duration represents the sum of all the samples contained within the coded frame. For example, if an audio frame contained 441 samples @ 44100Hz the frame duration would be 10 milliseconds.
The sum of a coded frame presentation timestamp and its coded frame duration. It represents the presentation timestamp that immediately follows the coded frame.
A group of coded frames that are adjacent and have monotonically increasing decode timestamps without any gaps. Discontinuities detected by the coded frame processing algorithm and abort() calls trigger the start of a new coded frame group.
The decode timestamp indicates the latest time at which the frame needs to be decoded assuming instantaneous decoding and rendering of this and any dependent frames (this is equal to the presentation timestamp of the earliest frame, in presentation order, that is dependent on this frame). If frames can be decoded out of presentation order, then the decode timestamp MUST be present in or derivable from the byte stream. The user agent MUST run the append error algorithm if this is not the case. If frames cannot be decoded out of presentation order and a decode timestamp is not present in the byte stream, then the decode timestamp is equal to the presentation timestamp.
A sequence of bytes that contain all of the initialization information required to decode a sequence of media segments. This includes codec initialization data, Track ID mappings for multiplexed segments, and timestamp offsets (e.g., edit lists).
The byte stream format specifications in the byte stream format registry [MSE-REGISTRY] contain format specific examples.
A sequence of bytes that contain packetized & timestamped media data for a portion of the media timeline. Media segments are always associated with the most recently appended initialization segment.
The byte stream format specifications in the byte stream format registry [MSE-REGISTRY] contain format specific examples.
A MediaSource object URL is a unique blob URL created by createObjectURL(). It is used to attach a MediaSource object to an HTMLMediaElement.
These URLs are the same as blob URLs, except that anything in the definition of that feature that refers to File and Blob objects is hereby extended to also apply to MediaSource objects.
The origin of the MediaSource object URL is the relevant settings object of this during the call to createObjectURL().
For example, the origin of the MediaSource object URL affects the way that the media element is consumed by canvas.
The parent media source of a SourceBuffer object is the MediaSource object that created it.
The presentation start time is the earliest time point in the presentation and specifies the initial playback position and earliest possible position. All presentations created using this specification have a presentation start time of 0.
For the purposes of determining if HTMLMediaElement's buffered contains a TimeRanges that includes the current playback position, implementations MAY choose to allow a current playback position at or after presentation start time and before the first TimeRanges to play the first TimeRanges if that TimeRanges starts within a reasonably short time, like 1 second, after presentation start time. This allowance accommodates the reality that muxed streams commonly do not begin all tracks precisely at presentation start time. Implementations MUST report the actual buffered range, regardless of this allowance.
The presentation interval of a coded frame is the time interval from its presentation timestamp to the presentation timestamp plus the coded frame's duration. For example, if a coded frame has a presentation timestamp of 10 seconds and a coded frame duration of 100 milliseconds, then the presentation interval would be [10, 10.1). Note that the start of the range is inclusive, but the end of the range is exclusive.
The order that coded frames are rendered in the presentation. The presentation order is achieved by ordering coded frames in monotonically increasing order by their presentation timestamps.
A reference to a specific time in the presentation. The presentation timestamp in a coded frame indicates when the frame SHOULD be rendered.
A position in a media segment where decoding and continuous playback can begin without relying on any previous data in the segment. For video this tends to be the location of I-frames. In the case of audio, most audio frames can be treated as a random access point. Since video tracks tend to have a more sparse distribution of random access points, the location of these points is usually considered the random access points for multiplexed streams.
The specific byte stream format specification that describes the format of the byte stream accepted by a SourceBuffer instance. The byte stream format specification, for a SourceBuffer object, is initially selected based on the type passed to the addSourceBuffer() call that created the object, and can be updated by changeType() calls on the object.
SourceBuffer configuration
A specific set of tracks distributed across one or more SourceBuffer objects owned by a single MediaSource instance.
Implementations MUST support at least 1 MediaSource object with the following configurations:
MediaSource objects MUST support each of the configurations above, but they are only required to support one configuration at a time. Supporting multiple configurations at once or additional configurations is a quality of implementation issue.
A byte stream format specific structure that provides the Track ID, codec configuration, and other metadata for a single track. Each track description inside a single initialization segment has a unique Track ID. The user agent MUST run the append error algorithm if the Track ID is not unique within the initialization segment.
A Track ID is a byte stream format specific identifier that marks sections of the byte stream as being part of a specific track. The Track ID in a track description identifies which sections of a media segment belong to that track.
The MediaSource interface represents a source of media data for an HTMLMediaElement. It keeps track of the readyState for this source as well as a list of SourceBuffer objects that can be used to add media data to the presentation. MediaSource objects are created by the web application and then attached to an HTMLMediaElement. The application uses the SourceBuffer objects in sourceBuffers to add media data to this source. The HTMLMediaElement fetches this media data from the MediaSource object when it is needed during playback.
Each MediaSource object has a [[live seekable range]] internal slot that stores a normalized TimeRanges object. It is initialized to an empty TimeRanges object when the MediaSource object is created, is maintained by setLiveSeekableRange() and clearLiveSeekableRange(), and is used in 10. HTMLMediaElement Extensions to modify HTMLMediaElement's seekable behavior.
Each MediaSource object has a [[has ever been attached]] internal slot that stores a boolean. It is initialized to false when the MediaSource object is created, and is set true in the extended HTMLMediaElement's resource fetch algorithm as described in the attaching to a media element algorithm. The extended resource fetch algorithm uses this internal slot to conditionally fail attachment of a MediaSource using a MediaSourceHandle set on a HTMLMediaElement's srcObject attribute.
WebIDL
enum ReadyState {
  "closed",
  "open",
  "ended",
};

closed
Indicates the source is not currently attached to a media element.
open
The source has been opened by a media element and is ready for data to be appended to the SourceBuffer objects in MediaSource's sourceBuffers.
ended
The source is still attached to a media element, but MediaSource's endOfStream() has been called.
WebIDL
enum EndOfStreamError {
  "network",
  "decode",
};

network
Terminates playback and signals that a network error has occurred.
JavaScript applications SHOULD use this status code to terminate playback with a network error. For example, if a network error occurs while fetching media data.
decode
Terminates playback and signals that a decoding error has occurred.
JavaScript applications SHOULD use this status code to terminate playback with a decode error. For example, if a parsing error occurs while processing out-of-band media data.
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface MediaSource : EventTarget {
  constructor();
  [SameObject, Exposed=DedicatedWorker]
  readonly attribute MediaSourceHandle handle;
  readonly attribute SourceBufferList sourceBuffers;
  readonly attribute SourceBufferList activeSourceBuffers;
  readonly attribute ReadyState readyState;
  attribute unrestricted double duration;
  attribute EventHandler onsourceopen;
  attribute EventHandler onsourceended;
  attribute EventHandler onsourceclose;
  static readonly attribute boolean canConstructInDedicatedWorker;
  SourceBuffer addSourceBuffer(DOMString type);
  undefined removeSourceBuffer(SourceBuffer sourceBuffer);
  undefined endOfStream(optional EndOfStreamError error);
  undefined setLiveSeekableRange(double start, double end);
  undefined clearLiveSeekableRange();
  static boolean isTypeSupported(DOMString type);
};
Contains a handle useful for attachment of a dedicated worker MediaSource object to an HTMLMediaElement via srcObject. The handle remains the same object for this MediaSource object across accesses of this attribute, but it is distinct for each MediaSource object.
This specification may eventually enable visibility of this attribute on MediaSource objects on the main Window context. If so, specification care will be necessary to prevent potential backwards incompatible changes, such as could happen if exceptions were thrown on accesses to this attribute.
On getting, run the following steps:
1. If the handle for this MediaSource object has not yet been created, then run the following steps:
   1. Create a new MediaSourceHandle object and associated resources, linked internally to this MediaSource.
   2. Update the attribute to be that new object.
2. Return the MediaSourceHandle object that is this attribute's value.
Contains the list of SourceBuffer objects associated with this MediaSource. When MediaSource's readyState equals "closed" this list will be empty. Once readyState transitions to "open" SourceBuffer objects can be added to this list by using addSourceBuffer().
Contains the subset of sourceBuffers that are providing the selected video track, the enabled audio track(s), and the "showing" or "hidden" text track(s).
SourceBuffer objects in this list MUST appear in the same order as they appear in the sourceBuffers attribute; e.g., if only sourceBuffers[0] and sourceBuffers[3] are in activeSourceBuffers, then activeSourceBuffers[0] MUST equal sourceBuffers[0] and activeSourceBuffers[1] MUST equal sourceBuffers[3].
Section 3.15.5 Changes to selected/enabled track state describes how this attribute gets updated.
Indicates the current state of the MediaSource object. When the MediaSource is created readyState MUST be set to "closed".
Allows the web application to set the presentation duration. The duration is initially set to NaN when the MediaSource object is created.
On getting, run the following steps:
1. If the readyState attribute is "closed" then return NaN and abort these steps.
2. Return the current value of the attribute.
On setting, run the following steps:
1. If the value being set is negative or NaN then throw a TypeError exception and abort these steps.
2. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
3. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
4. Run the duration change algorithm with new duration set to the value being assigned to this attribute.
The duration change algorithm will adjust new duration higher if there is any currently buffered coded frame with a higher end time.
appendBuffer() and endOfStream() can update the duration under certain circumstances.
Returns true.
This attribute enables main thread and dedicated worker feature detection of support for creating and using a MediaSource object in a dedicated worker, and mitigates the need for higher latency detection polyfills like attempting creation of a MediaSource object from a dedicated worker, especially if the feature is not supported.
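A minimal feature-detection sketch; 'player-worker.js' is a hypothetical script name:

```js
// Detect worker-based MSE support before paying the cost of spawning a worker.
if (window.MediaSource && MediaSource.canConstructInDedicatedWorker) {
  const worker = new Worker('player-worker.js');
  // ...let the worker construct the MediaSource and post its handle back.
} else {
  // Fall back to constructing the MediaSource on the main thread.
  const mediaSource = new MediaSource();
}
```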
Adds a new SourceBuffer to sourceBuffers.
1. If type is an empty string then throw a TypeError exception and abort these steps.
2. If type contains a MIME type that is not supported or contains a MIME type that is not supported with the types specified for the other SourceBuffer objects in sourceBuffers, then throw a NotSupportedError exception and abort these steps.
3. If the user agent can't handle any more SourceBuffer objects or if creating a SourceBuffer based on type would result in an unsupported SourceBuffer configuration, then throw a QuotaExceededError exception and abort these steps.
For example, a user agent MAY throw a QuotaExceededError exception if the media element has reached the HAVE_METADATA readyState. This can occur if the user agent's media engine does not support adding more tracks during playback.
4. If the readyState attribute is not in the "open" state then throw an InvalidStateError exception and abort these steps.
5. Let buffer be a new ManagedSourceBuffer if this is a ManagedMediaSource, or a SourceBuffer otherwise, with their respective associated resources.
6. Set buffer's [[generate timestamps flag]] to the value in the "Generate Timestamps Flag" column of the Media Source Extensions™ Byte Stream Format Registry entry that is associated with type.
7. If buffer's [[generate timestamps flag]] is true, set buffer's mode to "sequence". Otherwise, set buffer's mode to "segments".
8. Add buffer to this's sourceBuffers.
9. Queue a task to fire an event named addsourcebuffer at this's sourceBuffers.
10. Return buffer.
Removes a SourceBuffer from sourceBuffers.
1. If sourceBuffer specifies an object that is not in sourceBuffers then throw a NotFoundError exception and abort these steps.
2. If the sourceBuffer.updating attribute equals true, then run the following steps:
   1. Abort the buffer append algorithm if it is running.
   2. Set the sourceBuffer.updating attribute to false.
   3. Queue a task to fire an event named abort at sourceBuffer.
   4. Queue a task to fire an event named updateend at sourceBuffer.
3. Let SourceBuffer audioTracks list equal the AudioTrackList object returned by sourceBuffer.audioTracks.
4. For each AudioTrack object in the SourceBuffer audioTracks list, run the following steps:
   1. Set the sourceBuffer attribute on the AudioTrack object to null.
   2. Remove the AudioTrack object from the SourceBuffer audioTracks list.
   This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the AudioTrack object, at the SourceBuffer audioTracks list. If the enabled attribute on the AudioTrack object was true at the beginning of this removal step, then this should also trigger AudioTrackList [HTML] logic to queue a task to fire an event named change at the SourceBuffer audioTracks list.
   3. Queue a task on Window, to remove the AudioTrack object (or instead, the Window mirror of it if the MediaSource object was constructed in a DedicatedWorkerGlobalScope) from the media element:
      1. Let HTMLMediaElement audioTracks list equal the AudioTrackList object returned by the audioTracks attribute on the HTMLMediaElement.
      2. Remove the AudioTrack object from the HTMLMediaElement audioTracks list.
      This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the AudioTrack object, at the HTMLMediaElement audioTracks list. If the enabled attribute on the AudioTrack object was true at the beginning of this removal step, then this should also trigger AudioTrackList [HTML] logic to queue a task to fire an event named change at the HTMLMediaElement audioTracks list.
5. Let SourceBuffer videoTracks list equal the VideoTrackList object returned by sourceBuffer.videoTracks.
6. For each VideoTrack object in the SourceBuffer videoTracks list, run the following steps:
   1. Set the sourceBuffer attribute on the VideoTrack object to null.
   2. Remove the VideoTrack object from the SourceBuffer videoTracks list.
   This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the VideoTrack object, at the SourceBuffer videoTracks list. If the selected attribute on the VideoTrack object was true at the beginning of this removal step, then this should also trigger VideoTrackList [HTML] logic to queue a task to fire an event named change at the SourceBuffer videoTracks list.
   3. Queue a task on Window, to remove the VideoTrack object (or instead, the Window mirror of it if the MediaSource object was constructed in a DedicatedWorkerGlobalScope) from the media element:
      1. Let HTMLMediaElement videoTracks list equal the VideoTrackList object returned by the videoTracks attribute on the HTMLMediaElement.
      2. Remove the VideoTrack object from the HTMLMediaElement videoTracks list.
      This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the VideoTrack object, at the HTMLMediaElement videoTracks list. If the selected attribute on the VideoTrack object was true at the beginning of this removal step, then this should also trigger VideoTrackList [HTML] logic to queue a task to fire an event named change at the HTMLMediaElement videoTracks list.
7. Let SourceBuffer textTracks list equal the TextTrackList object returned by sourceBuffer.textTracks.
8. For each TextTrack object in the SourceBuffer textTracks list, run the following steps:
   1. Set the sourceBuffer attribute on the TextTrack object to null.
   2. Remove the TextTrack object from the SourceBuffer textTracks list.
   This should trigger TextTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the TextTrack object, at the SourceBuffer textTracks list. If the mode attribute on the TextTrack object was "showing" or "hidden" at the beginning of this removal step, then this should also trigger TextTrackList [HTML] logic to queue a task to fire an event named change at the SourceBuffer textTracks list.
   3. Queue a task on Window, to remove the TextTrack object (or instead, the Window mirror of it if the MediaSource object was constructed in a DedicatedWorkerGlobalScope) from the media element:
      1. Let HTMLMediaElement textTracks list equal the TextTrackList object returned by the textTracks attribute on the HTMLMediaElement.
      2. Remove the TextTrack object from the HTMLMediaElement textTracks list.
      This should trigger TextTrackList [HTML] logic to queue a task to fire an event named removetrack using TrackEvent with the track attribute initialized to the TextTrack object, at the HTMLMediaElement textTracks list. If the mode attribute on the TextTrack object was "showing" or "hidden" at the beginning of this removal step, then this should also trigger TextTrackList [HTML] logic to queue a task to fire an event named change at the HTMLMediaElement textTracks list.
9. If sourceBuffer is in activeSourceBuffers, then remove sourceBuffer from activeSourceBuffers and queue a task to fire an event named removesourcebuffer at the SourceBufferList returned by activeSourceBuffers.
10. Remove sourceBuffer from sourceBuffers and queue a task to fire an event named removesourcebuffer at the SourceBufferList returned by sourceBuffers.
11. Destroy all resources for sourceBuffer.
Signals the end of the stream.
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not in the "open" state then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
3. Run the end of stream algorithm with the error parameter set to error.
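For example, an application typically calls endOfStream() once the final append has completed, since updating must be false; `allSegmentsAppended` is a hypothetical application-side flag:

```js
sourceBuffer.addEventListener('updateend', () => {
  if (allSegmentsAppended && mediaSource.readyState === 'open') {
    mediaSource.endOfStream();
  }
});
```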
Updates [[live seekable range]] that is used in section 10. HTMLMediaElement Extensions to modify HTMLMediaElement's seekable behavior.
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
2. If start is negative or greater than end, then throw a TypeError exception and abort these steps.
3. Set [[live seekable range]] to be a new normalized TimeRanges object containing a single range whose start position is start and end position is end.
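A sketch of advertising a rolling seekable window for a live stream; `getLiveEdgeTime()` and the 300-second window are hypothetical choices:

```js
const liveEdge = getLiveEdgeTime(); // hypothetical application function
mediaSource.setLiveSeekableRange(Math.max(0, liveEdge - 300), liveEdge);
// Later, e.g. when the live stream ends, revert to default seekable behavior:
mediaSource.clearLiveSeekableRange();
```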
Updates [[live seekable range]] that is used in section 10. HTMLMediaElement Extensions to modify HTMLMediaElement's seekable behavior.
When this method is invoked, the user agent must run the following steps:
1. If the readyState attribute is not "open" then throw an InvalidStateError exception and abort these steps.
2. If [[live seekable range]] contains a range, then set [[live seekable range]] to be a new empty TimeRanges object.
Check to see whether the MediaSource is capable of creating SourceBuffer objects for the specified MIME type.
If true is returned from this method, it only indicates that the MediaSource implementation is capable of creating SourceBuffer objects for the specified MIME type. An addSourceBuffer() call SHOULD still fail if sufficient resources are not available to support the addition of a new SourceBuffer.
This method returning true implies that HTMLMediaElement's canPlayType() will return "maybe" or "probably" since it does not make sense for a MediaSource to support a type the HTMLMediaElement knows it cannot play.
When this method is invoked, the user agent must run the following steps:
Event name | Interface | Dispatched when...
---|---|---
sourceopen | Event | MediaSource's readyState transitions from "closed" to "open" or from "ended" to "open".
sourceended | Event | MediaSource's readyState transitions from "open" to "ended".
sourceclose | Event | MediaSource's readyState transitions from "open" to "closed" or "ended" to "closed".
When a Window HTMLMediaElement is attached to a DedicatedWorkerGlobalScope MediaSource, each context has algorithms that depend on information from the other.
HTMLMediaElement is exposed only to Window contexts, but MediaSource and related objects defined in this specification are exposed in Window and DedicatedWorkerGlobalScope contexts. This lets applications construct a MediaSource object in either of those types of context and attach it to an HTMLMediaElement object in a Window context using a MediaSource object URL or a MediaSourceHandle as described in the attaching to a media element algorithm. A MediaSource object is not Transferable; it is only visible in the context where it was created.
The rest of this section describes a model for bounding information latency for attachments of a Window media element to a DedicatedWorkerGlobalScope MediaSource. While the model describes communication using message passing, implementations MAY choose to communicate in potentially faster ways, such as using shared memory and locks. Attachments to a Window MediaSource synchronously have the information already without communicating it across contexts.
A MediaSource that is constructed in a DedicatedWorkerGlobalScope has a [[port to main]] internal slot that stores a MessagePort setup during attachment and nulled during detachment. A Window MediaSource's [[port to main]] is always null.
An HTMLMediaElement extended by this specification and attached to a DedicatedWorkerGlobalScope MediaSource similarly has a [[port to worker]] internal slot that stores a MessagePort and a [[channel with worker]] internal slot that stores a MessageChannel, both setup during attachment and nulled during detachment. Both [[port to worker]] and [[channel with worker]] are null unless attached to a DedicatedWorkerGlobalScope MediaSource.
Algorithms in this specification that need to communicate information from a Window HTMLMediaElement to an attached DedicatedWorkerGlobalScope MediaSource, or vice versa, will use these internal ports implicitly to post a message to their counterpart, where the implicit handler of the message runs steps as described in the algorithms.
There are distinct mechanisms for attaching a MediaSource to a media element depending on where the MediaSource object was constructed, in a Window versus in a DedicatedWorkerGlobalScope:
Attaching a MediaSource that was constructed in a Window can be done by assigning a MediaSource object URL for that MediaSource to the media element src attribute or the src attribute of a <source> inside a media element. A MediaSource object URL is created by passing a MediaSource object to createObjectURL().
Though implementations MAY allow MediaSource object URL creation in a DedicatedWorkerGlobalScope for a MediaSource constructed in that worker, attempting to use that MediaSource object URL to attach to a media element using either the src attribute or the src attribute of a <source> inside a media element MUST fail in the media element's resource fetch algorithm, as extended below.
Extending the object URL attachment mechanism to worker MediaSource object URLs would further propagate this idiom that is less preferred versus using srcObject, and would unnecessarily increase user agent interoperability risk and implementation complexity.
Attaching a MediaSource that was constructed in a DedicatedWorkerGlobalScope can only be done by obtaining a handle from it using handle, transferring that MediaSourceHandle to the Window context and assigning it to the media element srcObject attribute.
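A sketch of that handle transfer, with segment-handling elided:

```js
// In the dedicated worker:
const mediaSource = new MediaSource();
postMessage({ handle: mediaSource.handle }, [mediaSource.handle]); // transfer the handle

// In the Window context:
worker.onmessage = (e) => {
  document.querySelector('video').srcObject = e.data.handle;
};
```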
For the purposes of aligning this specification with HTMLMediaElement resource loading and fetching algorithms, the underlying DedicatedWorkerGlobalScope MediaSource is the MediaSource object mentioned there, and the MediaSourceHandle object is the media provider object.
If the resource fetch algorithm was invoked with a media provider object that is a MediaSource object, a MediaSourceHandle object or a URL record whose object is a MediaSource object, then let mode be local, skip the first step in the resource fetch algorithm (which may otherwise set mode to remote) and continue the execution of the resource fetch algorithm.
The first step of the resource fetch algorithm is expected to eventually align with selecting local mode for URL records whose objects are media provider objects. The intent is that if the HTMLMediaElement's src attribute or selected child source's src attribute is a blob: URL matching a MediaSource object URL when the respective src attribute was last changed, then that MediaSource object is used as the media provider object and current media resource in the local mode logic in the resource fetch algorithm. This also means that the remote mode logic that includes observance of any preload attribute is skipped when a MediaSource object is attached. Even with that eventual change to [HTML], the execution of the following steps at the beginning of the local mode logic is still required when the current media resource is a MediaSource object.
At the beginning of the "Otherwise (mode is local)" section of the resource fetch algorithm, execute the additional steps, below.
Relative to the action which triggered the media element's resource selection algorithm, these steps are asynchronous. The resource fetch algorithm is run after the task that invoked the resource selection algorithm is allowed to continue and a stable state is reached. Implementations may delay the steps in the "Otherwise" clause, below, until the MediaSource object is ready for use.
If the media provider object is a MediaSource object, a MediaSourceHandle object or a URL record whose object is a MediaSource object, then:
1. If the media provider object is a URL record whose object is a MediaSource that was constructed in a DedicatedWorkerGlobalScope, such as would occur if attempting to use a MediaSource object URL from a DedicatedWorkerGlobalScope MediaSource, then run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.
Transferring the MediaSource's handle from the DedicatedWorker to the Window context and assigning it to the media element's srcObject attribute is the only way to attach such a MediaSource.
2. If the media provider object is a MediaSourceHandle whose [[Detached]] internal slot is true, or a MediaSourceHandle whose underlying MediaSource's [[has ever been attached]] internal slot is true, then run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.
An HTMLMediaElement cannot load a MediaSource more than once using a MediaSourceHandle, even if the MediaSource was constructed on Window and had been loaded previously using a MediaSource object URL. This doesn't preclude subsequent use of a MediaSource object URL for a Window MediaSource from succeeding though.
3. If the MediaSource's readyState is NOT set to "closed", then run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.
4. Set the MediaSource's [[has ever been attached]] internal slot to true.
5. If the MediaSource was constructed in a DedicatedWorkerGlobalScope, then setup worker attachment communication and open the MediaSource:
   1. Set [[channel with worker]] to be a new MessageChannel.
   2. Set [[port to worker]] to the port1 value of [[channel with worker]].
   3. Transfer the port2 of [[channel with worker]] as both the value and the sole member of the transferList, and let the result be serialized port2.
   4. Queue a task in the MediaSource's DedicatedWorkerGlobalScope that will deserialize serialized port2 using the DedicatedWorkerGlobalScope's realm, and set [[port to main]] to be the resulting deserialized clone of the transferred port2 value of [[channel with worker]].
   5. Set the readyState attribute to "open".
   6. Queue a task to fire an event named sourceopen at the MediaSource.
6. Otherwise, the MediaSource was constructed in a Window:
   1. Set [[channel with worker]] null.
   2. Set [[port to worker]] null.
   3. Set [[port to main]] null.
   4. Set the readyState attribute to "open".
   5. Queue a task to fire an event named sourceopen at the MediaSource.
7. Continue the resource fetch algorithm; the media data for this resource is provided by the application using appendBuffer() calls while the MediaSource is attached.
An attached MediaSource does not use the remote mode steps in the resource fetch algorithm, so the media element will not fire "suspend" events. Though future versions of this specification will likely remove "progress" and "stalled" events from a media element with an attached MediaSource, user agents conforming to this version of the specification may still fire these two events as these [HTML] references changed after implementations of this specification stabilized.
The following steps are run in any case where the media element is going to transition to NETWORK_EMPTY and queue a task to fire an event named emptied at the media element. These steps SHOULD be run right before the transition.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
1. Notify the MediaSource using an internal detach message posted to [[port to worker]].
2. Set [[port to worker]] null.
3. Set [[channel with worker]] null.
4. The implicit handler for the detach notification runs the remainder of these steps in the DedicatedWorkerGlobalScope MediaSource.
Otherwise, the MediaSource was constructed in a Window:
1. Run the remainder of these steps on the Window MediaSource.
The remainder of the steps:
1. Set [[port to main]] null.
2. Set the readyState attribute to "closed".
3. If this is a ManagedMediaSource, then set the streaming attribute to false.
4. Update duration to NaN.
5. Remove all the SourceBuffer objects from activeSourceBuffers.
6. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers.
7. Remove all the SourceBuffer objects from sourceBuffers.
8. Queue a task to fire an event named removesourcebuffer at sourceBuffers.
9. Queue a task to fire an event named sourceclose at the MediaSource.
Going forward, this algorithm is intended to be externally called and run in any case where the attached MediaSource, if any, must be detached from the media element. It MAY be called on HTMLMediaElement [HTML] operations like load() and resource fetch algorithm failures in addition to, or in place of, when the media element transitions to NETWORK_EMPTY. Resource fetch algorithm failures are those which abort either the resource fetch algorithm or the resource selection algorithm, with the exception that the "Final step" [HTML] is not considered a failure that triggers detachment.
Run the following steps as part of the "Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position" step of the seek algorithm:
1. The media element looks for media segments containing the new playback position in each SourceBuffer object in activeSourceBuffers. Any position within a TimeRanges in the current value of the HTMLMediaElement's buffered attribute has all necessary media segments buffered for that position.
2. If the new playback position is not in a TimeRanges of HTMLMediaElement's buffered, then run the following steps:
   1. If the HTMLMediaElement's readyState attribute is greater than HAVE_METADATA, then set the HTMLMediaElement's readyState attribute to HAVE_METADATA.
   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
   2. The media element waits until an appendBuffer() call causes the coded frame processing algorithm to set the HTMLMediaElement's readyState attribute to a value greater than HAVE_METADATA.
   The web application can use buffered and HTMLMediaElement's buffered to determine what the media element needs to resume playback.
If the readyState attribute is "ended" and the new playback position is within a TimeRanges currently in HTMLMediaElement's buffered, then the seek operation must continue to completion here even if one or more currently selected or enabled track buffers' largest range end timestamp is less than new playback position. This condition should only occur due to logic in buffered when readyState is "ended".
The following steps are periodically run during playback to make sure that all of the SourceBuffer objects in activeSourceBuffers have enough data to ensure uninterrupted playback. Changes to activeSourceBuffers also cause these steps to run because they affect the conditions that trigger state transitions.
Having enough data to ensure uninterrupted playback is an implementation specific condition where the user agent determines that it currently has enough data to play the presentation without stalling for a meaningful period of time. This condition is constantly evaluated to determine when to transition the media element into and out of the HAVE_ENOUGH_DATA ready state. These transitions indicate when the user agent believes it has enough data buffered or it needs more data respectively.
An implementation MAY choose to use bytes buffered, time buffered, the append rate, or any other metric it sees fit to determine when it has enough data. The metrics used MAY change during playback so web applications SHOULD only rely on the value of HTMLMediaElement's readyState to determine whether more data is needed or not.
When the media element needs more data, the user agent SHOULD transition it from HAVE_ENOUGH_DATA to HAVE_FUTURE_DATA early enough for a web application to be able to respond without causing an interruption in playback. For example, transitioning when the current playback position is 500ms before the end of the buffered data gives the application roughly 500ms to append more data before playback stalls.
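One way an application can react to these transitions is to top up the buffer whenever the buffered range ahead of the playhead shrinks below a threshold; a sketch where the 10-second target and `appendNextSegment()` are hypothetical:

```js
video.addEventListener('timeupdate', () => {
  const buffered = sourceBuffer.buffered;
  for (let i = 0; i < buffered.length; i++) {
    if (buffered.start(i) <= video.currentTime && video.currentTime < buffered.end(i)) {
      // Append more data when less than 10 s of media remains ahead.
      if (buffered.end(i) - video.currentTime < 10 && !sourceBuffer.updating) {
        appendNextSegment(); // hypothetical application function
      }
      break;
    }
  }
});
```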
If the HTMLMediaElement's readyState attribute equals HAVE_NOTHING:
1. Abort these steps.
If HTMLMediaElement's buffered does not contain a TimeRanges for the current playback position:
1. Set the HTMLMediaElement's readyState attribute to HAVE_METADATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
2. Abort these steps.
If HTMLMediaElement's buffered contains a TimeRanges that includes the current playback position and enough data to ensure uninterrupted playback:
1. Set the HTMLMediaElement's readyState attribute to HAVE_ENOUGH_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
2. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
3. Abort these steps.
If HTMLMediaElement's buffered contains a TimeRanges that includes the current playback position and some time beyond the current playback position, then run the following steps:
1. Set the HTMLMediaElement's readyState attribute to HAVE_FUTURE_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
2. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
3. Abort these steps.
If HTMLMediaElement's buffered contains a TimeRanges that ends at the current playback position and does not have a range covering the time immediately after the current position:
1. Set the HTMLMediaElement's readyState attribute to HAVE_CURRENT_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
2. Abort these steps.
During playback activeSourceBuffers needs to be updated if the selected video track, the enabled audio track(s), or a text track mode changes. When one or more of these changes occur the following steps need to be followed. Also, when MediaSource was constructed in a DedicatedWorkerGlobalScope, then each change that occurs to a Window mirror of a track created previously by the implicit handler for the internal create track mirror message MUST also be made to the corresponding DedicatedWorkerGlobalScope track using an internal update track state message posted to [[port to worker]] whose implicit handler makes the change and runs the following steps. Likewise, each change that occurs to a DedicatedWorkerGlobalScope track MUST also be made to the corresponding Window mirror of the track using an internal update track state message posted to [[port to main]] whose implicit handler makes the change to the mirror.
If the selected video track changes, then run the following steps:
1. If the SourceBuffer associated with the previously selected video track is not associated with any other enabled tracks, run the following steps:
   1. Remove the SourceBuffer from activeSourceBuffers.
   2. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers.
2. If the SourceBuffer associated with the newly selected video track is not already in activeSourceBuffers, run the following steps:
   1. Add the SourceBuffer to activeSourceBuffers.
   2. Queue a task to fire an event named addsourcebuffer at activeSourceBuffers.
If an audio track becomes disabled and the SourceBuffer associated with this track is not associated with any other enabled or selected track, then run the following steps:
1. Remove the SourceBuffer associated with the audio track from activeSourceBuffers.
2. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers.
If an audio track becomes enabled and the SourceBuffer associated with this track is not already in activeSourceBuffers, then run the following steps:
1. Add the SourceBuffer associated with the audio track to activeSourceBuffers.
2. Queue a task to fire an event named addsourcebuffer at activeSourceBuffers.
If a text track mode becomes "disabled" and the SourceBuffer associated with this track is not associated with any other enabled or selected track, then run the following steps:
1. Remove the SourceBuffer associated with the text track from activeSourceBuffers.
2. Queue a task to fire an event named removesourcebuffer at activeSourceBuffers.
If a text track mode becomes "showing" or "hidden" and the SourceBuffer associated with this track is not already in activeSourceBuffers, then run the following steps:
1. Add the SourceBuffer associated with the text track to activeSourceBuffers.
2. Queue a task to fire an event named addsourcebuffer at activeSourceBuffers.
Follow these steps when duration needs to change to a new duration.
1. If the current value of duration is equal to new duration, then return.
2. If new duration is less than the highest starting presentation timestamp of any buffered coded frames for all SourceBuffer objects in sourceBuffers, then throw an InvalidStateError exception and abort these steps.
3. Let highest end time be the largest track buffer ranges end time across all the track buffers across all SourceBuffer objects in sourceBuffers.
4. If new duration is less than highest end time, then update new duration to equal highest end time.
This condition can occur because the coded frame removal algorithm preserves coded frames that start before the start of the removal range.
5. Update duration to new duration.
6. Queue a task on Window to update the media element's duration:
   1. Update the media element's duration to new duration.
This algorithm gets called when the application signals the end of stream via an endOfStream() call or an algorithm needs to signal a decode error. This algorithm takes an error parameter that indicates whether an error will be signalled.
1. Change the readyState attribute value to "ended".
2. Queue a task to fire an event named sourceended at the MediaSource.
3. If error is not set, run the duration change algorithm with new duration set to the largest track buffer ranges end time across all the track buffers across all SourceBuffer objects in sourceBuffers.
This allows the duration to properly reflect the end of the appended media segments. For example, if the duration was explicitly set to 10 seconds and only media segments for 0 to 5 seconds were appended before endOfStream() was called, then the duration will get updated to 5 seconds.
Otherwise, if error is set to "network", run the following steps on Window:
- If the HTMLMediaElement's readyState attribute equals HAVE_NOTHING, run the "If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.
- If the HTMLMediaElement's readyState attribute is greater than HAVE_NOTHING, run the "If the connection is interrupted after some media data has been received, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm's media data processing steps list.
Otherwise, if error is set to "decode", run the following steps on Window:
- If the HTMLMediaElement's readyState attribute equals HAVE_NOTHING, run the "If the media data can be fetched but is found by inspection to be in an unsupported format, or can otherwise not be rendered at all" steps of the resource fetch algorithm's media data processing steps list.
- If the HTMLMediaElement's readyState attribute is greater than HAVE_NOTHING, run the media data is corrupted steps of the resource fetch algorithm's media data processing steps list.
This algorithm is used to run steps on Window from a MediaSource attached from either the same Window or from a DedicatedWorkerGlobalScope, usually to update the state of the attached HTMLMediaElement. This algorithm takes a steps parameter that lists the steps to run on Window.
If the MediaSource was constructed in a DedicatedWorkerGlobalScope:
1. Post an internal mirror on window message to [[port to main]] whose implicit handler in Window will run steps. Return control to the caller without awaiting that handler's receipt of the message.
Otherwise, queue a task on Window to run steps.
The intent is that steps run in a task on Window rather than these steps somehow happening in the middle of some other Window task's execution, regardless of whether the MediaSource was constructed in a Window or a DedicatedWorkerGlobalScope.
The MediaSourceHandle interface represents a proxy for a MediaSource object that is useful for attaching a DedicatedWorkerGlobalScope MediaSource to a Window HTMLMediaElement using srcObject as described in the attaching to a media element algorithm.
This distinct object is necessary to attach a cross-context MediaSource to a media element because MediaSource objects themselves are not transferable since they are event targets.
Each MediaSourceHandle object has a [[has ever been assigned as srcobject]] internal slot that stores a boolean. It is initialized to false when the MediaSourceHandle object is created, is set true in the extended HTMLMediaElement's srcObject setter as described in section 10. HTMLMediaElement Extensions, and if true, prevents successful transfer of the MediaSourceHandle as described in section 4.1 Transfer.
MediaSourceHandle objects are Transferable, each having a [[Detached]] internal slot that is used to ensure that once the handle object instance has been transferred, that instance cannot be transferred again.
WebIDL
[Transferable, Exposed=(Window,DedicatedWorker)]
interface MediaSourceHandle {};
The MediaSourceHandle transfer steps and transfer-receiving steps require the implementation to maintain an implicit internal slot referencing the underlying MediaSource to enable attaching to a media element using srcObject and consequent setup of an attachment's cross-context communication model.
Implementors should be aware that assumption of "move" semantics implied by Transferable is not always reality. For example, extensions or internal implementations of postMessage using broadcast may cause unintended multiple recipients of a transferred MediaSourceHandle. For this reason, implementations are guided to not resolve which potential clone of a transferred MediaSourceHandle is still valid for attachment until and unless any handle for the underlying MediaSource object is used in the asynchronous portion of the media element's resource selection algorithm. This is similar to the existing behavior for attachment via MediaSource object URLs, which can be cloned easily, where such a URL is valid for at most one attachment start (across all of its potentially many clones).
Implementations MUST support at most one attachment (load) via srcObject ever for the MediaSource object underlying a MediaSourceHandle, regardless of potential cloning of the MediaSourceHandle due to varying implementations of Transferable.
See attaching to a media element for how this is enforced during the asynchronous portion of the media element's resource selection algorithm.
MediaSourceHandle is only exposed on Window and DedicatedWorkerGlobalScope contexts, and cannot successfully transfer between different agent clusters [ECMASCRIPT]. Transfer of a MediaSourceHandle object can only succeed within the same agent cluster.
For example, transfer of a MediaSourceHandle object from either a Window or DedicatedWorkerGlobalScope to either a SharedWorker or a ServiceWorker will not succeed. Developers should be aware of this difference versus MediaSource object URLs which are DOMStrings that can be communicated many ways. Even so, attaching to a media element using a MediaSource object URL can only succeed for a MediaSource that was constructed in a Window context. See also the integration of the agent and agent cluster formalisms for Web Application APIs [HTML] where related concepts such as dedicated worker agents are defined.
Transfer steps for a MediaSourceHandle object MUST include the following step:
1. If the MediaSourceHandle's [[has ever been assigned as srcobject]] internal slot is true, then the transfer steps must fail by throwing a DataCloneError exception.
WebIDL
enum AppendMode {
  "segments",
  "sequence",
};

segments
The timestamps in the media segment determine where the coded frames are placed in the presentation. Media segments can be appended in any order.
sequence
Media segments will be treated as adjacent in time independent of the timestamps in the media segment. Coded frames in a new media segment will be placed immediately after the coded frames in the previous media segment. The timestampOffset attribute will be updated if a new offset is needed to make the new media segments adjacent to the previous media segment. Setting the timestampOffset attribute in "sequence" mode allows a media segment to be placed at a specific position in the timeline without any knowledge of the timestamps in the media segment.
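A sketch of "sequence" mode, e.g. for splicing clips back-to-back regardless of their internal timestamps; `nextSegment` is a hypothetical BufferSource:

```js
sourceBuffer.mode = 'sequence';
sourceBuffer.timestampOffset = 30; // place the next segment at t = 30 s
sourceBuffer.appendBuffer(nextSegment);
```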
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface SourceBuffer : EventTarget {
  attribute AppendMode mode;
  readonly attribute boolean updating;
  readonly attribute TimeRanges buffered;
  attribute double timestampOffset;
  readonly attribute AudioTrackList audioTracks;
  readonly attribute VideoTrackList videoTracks;
  readonly attribute TextTrackList textTracks;
  attribute double appendWindowStart;
  attribute unrestricted double appendWindowEnd;
  attribute EventHandler onupdatestart;
  attribute EventHandler onupdate;
  attribute EventHandler onupdateend;
  attribute EventHandler onerror;
  attribute EventHandler onabort;
  undefined appendBuffer(BufferSource data);
  undefined abort();
  undefined changeType(DOMString type);
  undefined remove(double start, unrestricted double end);
};
mode of type AppendMode
Controls how a sequence of media segments are handled. This attribute is initially set by addSourceBuffer() after the object is created, and can be updated by changeType() or setting this attribute.
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. Let new mode equal the new value being assigned to this attribute.
4. If [[generate timestamps flag]] equals true and new mode equals "segments", then throw a TypeError exception and abort these steps.
5. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open".
   2. Queue a task to fire an event named sourceopen at the parent media source.
6. If the [[append state]] equals PARSING_MEDIA_SEGMENT, then throw an InvalidStateError and abort these steps.
7. If the new mode equals "sequence", then set the [[group start timestamp]] to the [[group end timestamp]].
8. Update the attribute to new mode.
updating of type boolean, readonly
Indicates whether the asynchronous continuation of an appendBuffer() or remove() operation is still being processed. This attribute is initially set to false when the object is created.
buffered of type TimeRanges, readonly
Indicates what TimeRanges are buffered in the SourceBuffer. This attribute is initially set to an empty TimeRanges object when the object is created.
When the attribute is read the following steps MUST occur:
1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
2. Let highest end time be the largest track buffer ranges end time across all the track buffers managed by this SourceBuffer object.
3. Let intersection ranges equal a TimeRanges object containing a single range from 0 to highest end time.
4. For each audio and video track buffer managed by this SourceBuffer, run the following steps:
   1. Let track ranges equal the track buffer ranges for the current track buffer.
   Text track buffers are included in the calculation of highest end time, above, but excluded from the buffered range calculation here. They are not necessarily continuous, nor should any discontinuity within them trigger playback stall when the other media tracks are continuous over the same time range.
   2. If readyState is "ended", then set the end time on the last range in track ranges to highest end time.
   3. Let new intersection ranges equal the intersection between the intersection ranges and the track ranges.
   4. Replace the ranges in intersection ranges with the new intersection ranges.
5. Return intersection ranges as the current value of this attribute.
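For example, an application reads the resulting ranges through the TimeRanges interface:

```js
// Log each buffered range computed by the steps above.
const ranges = sourceBuffer.buffered;
for (let i = 0; i < ranges.length; i++) {
  console.log(`buffered range ${i}: ${ranges.start(i)} to ${ranges.end(i)} s`);
}
```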
timestampOffset of type double
Controls the offset applied to timestamps inside subsequent media segments that are appended to this SourceBuffer. The timestampOffset is initially set to 0 which indicates that no offset is being applied.
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
1. Let new timestamp offset equal the new value being assigned to this attribute.
2. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
3. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
4. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open".
   2. Queue a task to fire an event named sourceopen at the parent media source.
5. If the [[append state]] equals PARSING_MEDIA_SEGMENT, then throw an InvalidStateError and abort these steps.
6. If the mode attribute equals "sequence", then set the [[group start timestamp]] to new timestamp offset.
7. Update the attribute to new timestamp offset.
audioTracks of type AudioTrackList, readonly
The list of AudioTrack objects created by this object.
videoTracks of type VideoTrackList, readonly
The list of VideoTrack objects created by this object.
textTracks of type TextTrackList, readonly
The list of TextTrack objects created by this object.
appendWindowStart of type double
The presentation timestamp for the start of the append window. This attribute is initially set to the presentation start time.
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. If the new value is less than 0 or greater than or equal to appendWindowEnd then throw a TypeError exception and abort these steps.
4. Update the attribute to the new value.
appendWindowEnd of type unrestricted double
The presentation timestamp for the end of the append window. This attribute is initially set to positive Infinity.
On getting, return the initial value or the last value that was successfully set.
On setting, run the following steps:
1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. If the new value equals NaN, then throw a TypeError and abort these steps.
4. If the new value is less than or equal to appendWindowStart then throw a TypeError exception and abort these steps.
5. Update the attribute to the new value.
onupdatestart of type EventHandler
The event handler for the updatestart event.
onupdate of type EventHandler
The event handler for the update event.
onupdateend of type EventHandler
The event handler for the updateend event.
onerror of type EventHandler
The event handler for the error event.
onabort of type EventHandler
The event handler for the abort event.
appendBuffer
Appends the segment data in a BufferSource [WEBIDL] to the SourceBuffer.
When this method is invoked, the user agent must run the following steps:
1. Run the prepare append algorithm.
2. Add data to the end of the [[input buffer]].
3. Set the updating attribute to true.
4. Queue a task to fire an event named updatestart at this SourceBuffer object.
5. Asynchronously run the buffer append algorithm.
abort
Aborts the current segment and resets the segment parser.
When this method is invoked, the user agent must run the following steps:
1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
2. If the readyState attribute of the parent media source is not in the "open" state then throw an InvalidStateError exception and abort these steps.
3. If the range removal algorithm is running, then throw an InvalidStateError exception and abort these steps.
4. If the updating attribute equals true, then run the following steps:
   1. Abort the buffer append algorithm if it is running.
   2. Set the updating attribute to false.
   3. Queue a task to fire an event named abort at this SourceBuffer object.
   4. Queue a task to fire an event named updateend at this SourceBuffer object.
5. Run the reset parser state algorithm.
6. Set appendWindowStart to the presentation start time.
7. Set appendWindowEnd to positive Infinity.
changeType
Changes the MIME type associated with this object. Subsequent appendBuffer() calls will expect the newly appended bytes to conform to the new type.
When this method is invoked, the user agent must run the following steps:
1. If type is an empty string then throw a TypeError exception and abort these steps.
2. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
3. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
4. If type contains a MIME type that is not supported with the types specified of the other SourceBuffer objects in the sourceBuffers attribute of the parent media source, then throw a NotSupportedError exception and abort these steps.
5. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open".
   2. Queue a task to fire an event named sourceopen at the parent media source.
6. Run the reset parser state algorithm.
7. Update the [[generate timestamps flag]] on this SourceBuffer object to the value in the "Generate Timestamps Flag" column of the byte stream format registry [MSE-REGISTRY] entry that is associated with type.
8. If the [[generate timestamps flag]] equals true, set the mode attribute on this SourceBuffer object to "sequence", including running the associated steps for that attribute being set. Otherwise, keep the previous value of the mode attribute on this SourceBuffer object, without running any associated steps for that attribute being set.
9. Set the [[pending initialization segment for changeType flag]] on this SourceBuffer object to true.
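A sketch of switching codecs mid-stream (inside an async function); the codec string and `vp9InitSegment` are hypothetical:

```js
if (sourceBuffer.updating) {
  await new Promise(r => sourceBuffer.addEventListener('updateend', r, { once: true }));
}
sourceBuffer.changeType('video/webm; codecs="vp9"');
// The next appended data must begin with an initialization segment of the new type.
sourceBuffer.appendBuffer(vp9InitSegment);
```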
remove
Removes media for a specific time range. start is the start of the removal range, in seconds measured from presentation start time; end is the end of the removal range, in seconds measured from presentation start time.
When this method is invoked, the user agent must run the following steps:
1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. If duration equals NaN, then throw a TypeError exception and abort these steps.
4. If start is negative or greater than duration, then throw a TypeError exception and abort these steps.
5. If end is less than or equal to start or end equals NaN, then throw a TypeError exception and abort these steps.
6. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:
   1. Set the readyState attribute of the parent media source to "open".
   2. Queue a task to fire an event named sourceopen at the parent media source.
7. Run the range removal algorithm with start and end as the start and end of the removal range.
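For example, an application might evict already-played media to stay under buffer quota; the 30-second trailing window here is an arbitrary choice:

```js
const evictBefore = video.currentTime - 30;
if (evictBefore > 0 && !sourceBuffer.updating) {
  sourceBuffer.remove(0, evictBefore); // remove() completes asynchronously (updateend fires)
}
```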
A track buffer stores the track descriptions and coded frames for an individual track. The track buffer is updated as initialization segments and media segments are appended to the SourceBuffer.
Each track buffer has a last decode timestamp variable that stores the decode timestamp of the last coded frame appended in the current coded frame group. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a last frame duration variable that stores the coded frame duration of the last coded frame appended in the current coded frame group. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a highest end timestamp variable that stores the highest coded frame end timestamp across all coded frames in the current coded frame group that were appended to this track buffer. The variable is initially unset to indicate that no coded frames have been appended yet.
Each track buffer has a need random access point flag variable that keeps track of whether the track buffer is waiting for a random access point coded frame. The variable is initially set to true to indicate that a random access point coded frame is needed before anything can be added to the track buffer.
Each track buffer has a track buffer ranges variable that represents the presentation time ranges occupied by the coded frames currently stored in the track buffer.
For track buffer ranges, these presentation time ranges are based on presentation timestamps, frame durations, and potentially coded frame group start times for coded frame groups across track buffers in a muxed SourceBuffer.
For specification purposes, this information is treated as if it were stored in a normalized TimeRanges object. Intersected track buffer ranges are used to report HTMLMediaElement's buffered, and MUST therefore support uninterrupted playback within each range of HTMLMediaElement's buffered.
These coded frame group start times differ slightly from those mentioned in thecoded frame processingalgorithm in that they are the earliestpresentation timestamp
across all track buffers following a discontinuity. Discontinuities can occur within the
coded frame processingalgorithm or result from thecoded frame removal
algorithm, regardless ofmode
.The threshold for determining
disjointness oftrack buffer rangesis implementation-specific. For example, to
reduce unexpected playback stalls, implementationsMAYapproximate thecoded frame processingalgorithm's discontinuity detection logic by coalescing adjacent ranges
separated by a gap smaller than 2 times the maximum frame duration buffered so far in
thistrack buffer.ImplementationsMAYalso use coded frame group start times as
range start times acrosstrack buffersin a muxedSourceBuffer
to further reduce
unexpected playback stalls.
Event name | Interface | Dispatched when...
---|---|---
updatestart | Event | SourceBuffer's updating transitions from false to true.
update | Event | A SourceBuffer's append or remove has successfully completed. SourceBuffer's updating transitions from true to false.
updateend | Event | The append or remove of a SourceBuffer ended.
error | Event | An error occurred during the append to a SourceBuffer. updating transitions from true to false.
abort | Event | The SourceBuffer's append was aborted by an abort() call. updating transitions from true to false.
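Because the updating attribute serializes all SourceBuffer operations, applications typically queue appends and wait for updateend before issuing the next one. A minimal non-normative sketch; segments is a hypothetical array of ArrayBuffers supplied by the application:

// Non-normative sketch: serialize appendBuffer() calls by waiting for
// updateend between appends.
function appendAll(sourceBuffer, segments) {
  return segments.reduce(
    (previous, segment) =>
      previous.then(
        () =>
          new Promise((resolve, reject) => {
            sourceBuffer.addEventListener("updateend", resolve, { once: true });
            sourceBuffer.addEventListener("error", reject, { once: true });
            sourceBuffer.appendBuffer(segment);
          }),
      ),
    Promise.resolve(),
  );
}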
Each SourceBuffer object has an [[append state]] internal slot that keeps track of the high-level segment parsing state. It is initially set to WAITING_FOR_SEGMENT and can transition to the following states as data is appended.
Append state name | Description
---|---
WAITING_FOR_SEGMENT | Waiting for the start of an initialization segment or media segment to be appended.
PARSING_INIT_SEGMENT | Currently parsing an initialization segment.
PARSING_MEDIA_SEGMENT | Currently parsing a media segment.
Each SourceBuffer object has an [[input buffer]] internal slot that is a byte buffer that holds unparsed bytes across appendBuffer() calls. The buffer is empty when the SourceBuffer object is created.
Each SourceBuffer object has a [[buffer full flag]] internal slot that keeps track of whether appendBuffer() is allowed to accept more bytes. It is set to false when the SourceBuffer object is created and gets updated as data is appended and removed.
Each SourceBuffer object has a [[group start timestamp]] internal slot that keeps track of the starting timestamp for a new coded frame group in the "sequence" mode. It is unset when the SourceBuffer object is created and gets updated when the mode attribute equals "sequence" and the timestampOffset attribute is set, or the coded frame processing algorithm runs.
Each SourceBuffer object has a [[group end timestamp]] internal slot that stores the highest coded frame end timestamp across all coded frames in the current coded frame group. It is set to 0 when the SourceBuffer object is created and gets updated by the coded frame processing algorithm.
The [[group end timestamp]] stores the highest coded frame end timestamp across all track buffers in a SourceBuffer. Therefore, care should be taken in setting the mode attribute when appending multiplexed segments in which the timestamps are not aligned across tracks.
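For example, when appending segments whose internal timestamps all start at zero, an application can use "sequence" mode so each appended coded frame group continues from the previous group's end. A non-normative sketch; segmentA, segmentB, and the duration comments are hypothetical:

// Non-normative sketch: in "sequence" mode the user agent generates
// contiguous timestamps, so segments whose internal timestamps start at 0
// still buffer back to back.
sourceBuffer.mode = "sequence";
sourceBuffer.appendBuffer(segmentA); // buffers at [0, durationA)
sourceBuffer.addEventListener("updateend", () => {
  sourceBuffer.appendBuffer(segmentB); // buffers at [durationA, durationA + durationB)
}, { once: true });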
Each SourceBuffer object has a [[generate timestamps flag]] internal slot that is a boolean that keeps track of whether timestamps need to be generated for the coded frames passed to the coded frame processing algorithm. This flag is set by addSourceBuffer() when the SourceBuffer object is created and is updated by changeType().
When the segment parser loop algorithm is invoked, run the following steps:
1. Loop Top: If the [[input buffer]] is empty, then jump to the need more data step below.
2. If the [[input buffer]] contains bytes that violate the SourceBuffer byte stream format specification, then run the append error algorithm and abort this algorithm.
3. Remove any bytes that the byte stream format specifications say MUST be ignored from the start of the [[input buffer]].
4. If the [[append state]] equals WAITING_FOR_SEGMENT, then run the following steps:
   1. If the beginning of the [[input buffer]] indicates the start of an initialization segment, set the [[append state]] to PARSING_INIT_SEGMENT.
   2. If the beginning of the [[input buffer]] indicates the start of a media segment, set [[append state]] to PARSING_MEDIA_SEGMENT.
   3. Jump to the loop top step above.
5. If the [[append state]] equals PARSING_INIT_SEGMENT, then run the following steps:
   1. If the [[input buffer]] does not contain a complete initialization segment yet, then jump to the need more data step below.
   2. Run the initialization segment received algorithm.
   3. Remove the initialization segment bytes from the beginning of the [[input buffer]].
   4. Set [[append state]] to WAITING_FOR_SEGMENT.
   5. Jump to the loop top step above.
6. If the [[append state]] equals PARSING_MEDIA_SEGMENT, then run the following steps:
   1. If the [[first initialization segment received flag]] is false or the [[pending initialization segment for changeType flag]] is true, then run the append error algorithm and abort this algorithm.
   2. If the [[input buffer]] contains one or more complete coded frames, then run the coded frame processing algorithm.
      The frequency at which the coded frame processing algorithm is run is implementation-specific. The coded frame processing algorithm MAY be called when the input buffer contains the complete media segment or it MAY be called multiple times as complete coded frames are added to the input buffer.
   3. If this SourceBuffer is full and cannot accept more media data, then set the [[buffer full flag]] to true.
   4. If the [[input buffer]] does not contain a complete media segment, then jump to the need more data step below.
   5. Remove the media segment bytes from the beginning of the [[input buffer]].
   6. Set [[append state]] to WAITING_FOR_SEGMENT.
   7. Jump to the loop top step above.
7. Need more data: Return control to the calling algorithm.
When the parser state needs to be reset, run the following steps:
1. If the [[append state]] equals PARSING_MEDIA_SEGMENT and the [[input buffer]] contains some complete coded frames, then run the coded frame processing algorithm until all of these complete coded frames have been processed.
2. Unset the last decode timestamp on all track buffers.
3. Unset the last frame duration on all track buffers.
4. Unset the highest end timestamp on all track buffers.
5. Set the need random access point flag on all track buffers to true.
6. If the mode attribute equals "sequence", then set the [[group start timestamp]] to the [[group end timestamp]].
7. Remove all bytes from the [[input buffer]].
8. Set [[append state]] to WAITING_FOR_SEGMENT.
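The following non-normative JavaScript sketch illustrates only the high-level [[append state]] transitions of this loop; the parsing helpers (startsInitSegment, startsMediaSegment, hasCompleteInitSegment, hasCompleteMediaSegment) are hypothetical stand-ins for byte stream format specific logic.

// Non-normative sketch of the [[append state]] machine.
const WAITING_FOR_SEGMENT = "WAITING_FOR_SEGMENT";
const PARSING_INIT_SEGMENT = "PARSING_INIT_SEGMENT";
const PARSING_MEDIA_SEGMENT = "PARSING_MEDIA_SEGMENT";

function segmentParserStep(state, inputBuffer) {
  switch (state) {
    case WAITING_FOR_SEGMENT:
      if (startsInitSegment(inputBuffer)) return PARSING_INIT_SEGMENT;
      if (startsMediaSegment(inputBuffer)) return PARSING_MEDIA_SEGMENT;
      return state; // need more data
    case PARSING_INIT_SEGMENT:
      // Returns to WAITING_FOR_SEGMENT once a complete init segment parses.
      return hasCompleteInitSegment(inputBuffer) ? WAITING_FOR_SEGMENT : state;
    case PARSING_MEDIA_SEGMENT:
      // Returns to WAITING_FOR_SEGMENT once a complete media segment parses.
      return hasCompleteMediaSegment(inputBuffer) ? WAITING_FOR_SEGMENT : state;
  }
}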
This algorithm is called when an error occurs during an append. It runs the following steps:
1. Run the reset parser state algorithm.
2. Set the updating attribute to false.
3. Queue a task to fire an event named error at this SourceBuffer object.
4. Queue a task to fire an event named updateend at this SourceBuffer object.
5. Run the end of stream algorithm with the error parameter set to "decode".
When an append operation begins, the following steps are run to validate and prepare the SourceBuffer:
1. If the SourceBuffer has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
2. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
3. Let recent element error be determined as follows:
   If the MediaSource was constructed in a Window: let recent element error be true if the HTMLMediaElement's error attribute is not null. If that attribute is null, then let recent element error be false.
   Otherwise: let recent element error be the result of the steps for the Window case, but run on the Window HTMLMediaElement on any change to its error attribute and communicated by using [[port to worker]] implicit messages. If such a message has not yet been received, then let recent element error be false.
4. If recent element error is true, then throw an InvalidStateError exception and abort these steps.
5. If the readyState attribute of the parent media source is in the "ended" state, then run the following steps:
   1. Set the readyState attribute of the parent media source to "open".
   2. Queue a task to fire an event named sourceopen at the parent media source.
6. Run the coded frame eviction algorithm.
7. If the [[buffer full flag]] equals true, then throw a QuotaExceededError exception and abort these steps.
This is the signal that the implementation was unable to evict enough data to accommodate the append or the append is too big. The web application SHOULD use remove() to explicitly free up space and/or reduce the size of the append.
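A non-normative sketch of how an application might react to this signal, evicting already-played media with remove() and retrying the append once; video and sourceBuffer are assumed to exist and to have media buffered already:

// Non-normative sketch: on QuotaExceededError, free a portion of the
// back-buffer and retry the append.
function appendWithEviction(video, sourceBuffer, segment) {
  try {
    sourceBuffer.appendBuffer(segment);
  } catch (e) {
    if (e.name !== "QuotaExceededError") throw e;
    // Keep a 10 second back-buffer behind the playback position.
    const evictEnd = Math.max(0, video.currentTime - 10);
    if (evictEnd <= 0) throw e; // nothing safe to evict
    sourceBuffer.addEventListener("updateend", () => {
      sourceBuffer.appendBuffer(segment); // retry once after eviction
    }, { once: true });
    sourceBuffer.remove(0, evictEnd);
  }
}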
When appendBuffer() is called, the following steps are run to process the appended data:
1. Run the segment parser loop algorithm.
2. If the segment parser loop algorithm in the previous step was aborted, then abort this algorithm.
3. Set the updating attribute to false.
4. Queue a task to fire an event named update at this SourceBuffer object.
5. Queue a task to fire an event named updateend at this SourceBuffer object.
Follow these steps when a caller needs to initiate a JavaScript visible range removal operation that blocks other SourceBuffer updates:
1. Let start equal the starting presentation timestamp for the removal range, in seconds measured from presentation start time.
2. Let end equal the end presentation timestamp for the removal range, in seconds measured from presentation start time.
3. Set the updating attribute to true.
4. Queue a task to fire an event named updatestart at this SourceBuffer object.
5. Return control to the caller and run the rest of the steps in parallel.
6. Run the coded frame removal algorithm with start and end as the start and end of the removal range.
7. Set the updating attribute to false.
8. Queue a task to fire an event named update at this SourceBuffer object.
9. Queue a task to fire an event named updateend at this SourceBuffer object.
The following steps are run when the segment parser loop successfully parses a complete initialization segment:
Each SourceBuffer object has a [[first initialization segment received flag]] internal slot that tracks whether the first initialization segment has been appended and received by this algorithm. This flag is set to false when the SourceBuffer is created and updated by the algorithm below.
Each SourceBuffer object has a [[pending initialization segment for changeType flag]] internal slot that tracks whether an initialization segment is needed since the most recent changeType(). This flag is set to false when the SourceBuffer is created, set to true by changeType(), and reset to false by the algorithm below.
1. Update the duration attribute if it currently equals NaN:
   If the initialization segment contains a duration: run the duration change algorithm with new duration set to the duration in the initialization segment.
   Otherwise: run the duration change algorithm with new duration set to positive Infinity.
2. If the initialization segment has no audio, video, or text tracks, then run the append error algorithm and abort these steps.
3. If the [[first initialization segment received flag]] is true, then run the following steps:
   1. Verify the following properties. If any of the checks fail, then run the append error algorithm and abort these steps:
      The number of audio, video, and text tracks match what was in the first initialization segment.
      If more than one track for a single type are present (e.g., 2 audio tracks), then the Track IDs match the ones in the first initialization segment.
      The codecs for each track are supported by the user agent.
      User agents MAY consider codecs, that would otherwise be supported, as "not supported" here if the codecs were not specified in the type parameter passed to (a) the most recently successful changeType() on this SourceBuffer object, or (b) if no successful changeType() has yet occurred on this object, the addSourceBuffer() that created this SourceBuffer object. For example, if the most recently successful changeType() was called with 'video/webm' or 'video/webm; codecs="vp8"', and a video track containing vp9 appears in the initialization segment, then the user agent MAY use this step to trigger a decode error even if the other two properties' checks, above, pass. Implementations are encouraged to trigger error in such cases only when the codec is indeed not supported or the other two properties' checks fail. Web authors are encouraged to use changeType(), addSourceBuffer() and isTypeSupported() with precise codec parameters to more proactively detect user agent support. changeType() is required if the SourceBuffer object's bytestream format is changing.
   2. Add the appropriate track descriptions from this initialization segment to each of the track buffers.
   3. Set the need random access point flag on all track buffers to true.
4. Let active track flag equal false.
5. If the [[first initialization segment received flag]] is false, then run the following steps:
   1. If the initialization segment contains tracks with codecs the user agent does not support, then run the append error algorithm and abort these steps.
      User agents MAY consider codecs, that would otherwise be supported, as "not supported" here if the codecs were not specified in the type parameter passed to (a) the most recently successful changeType() on this SourceBuffer object, or (b) if no successful changeType() has yet occurred on this object, the addSourceBuffer() that created this SourceBuffer object. For example, MediaSource.isTypeSupported('video/webm;codecs="vp8,vorbis"') may return true, but if addSourceBuffer() was called with 'video/webm;codecs="vp8"' and a Vorbis track appears in the initialization segment, then the user agent MAY use this step to trigger a decode error. Implementations are encouraged to trigger error in such cases only when the codec is indeed not supported. Web authors are encouraged to use changeType(), addSourceBuffer() and isTypeSupported() with precise codec parameters to more proactively detect user agent support. changeType() is required if the SourceBuffer object's bytestream format is changing.
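A non-normative sketch of probing precise codec strings up front rather than relying on a later decode error; mediaSource is assumed to be an open MediaSource:

// Non-normative sketch: probe precise codec strings before creating a
// SourceBuffer.
const candidates = [
  'video/webm; codecs="vp8,vorbis"',
  'video/webm; codecs="vp9,opus"',
  'video/mp4; codecs="avc1.4d4015,mp4a.40.2"',
];
const supported = candidates.filter((t) => MediaSource.isTypeSupported(t));
if (supported.length === 0) {
  throw new Error("No supported stream type available");
}
const sourceBuffer = mediaSource.addSourceBuffer(supported[0]);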
   2. For each audio track in the initialization segment, run following steps:
      1. Let new audio track be a new AudioTrack object.
      2. Generate a unique ID and assign it to the id property on new audio track.
      3. Assign the language of the track to the language property on new audio track.
      4. Assign the label of the track to the label property on new audio track.
      5. Assign the kind of the track to the kind property on new audio track.
      6. If this SourceBuffer object's audioTracks's length equals 0, then run the following steps:
         1. Set the enabled property on new audio track to true.
         2. Set active track flag to true.
      7. Add new audio track to the audioTracks attribute on this SourceBuffer object.
         This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to new audio track, at the AudioTrackList object referenced by the audioTracks attribute on this SourceBuffer object.
      8. If the parent media source was constructed in a DedicatedWorkerGlobalScope: post an internal create track mirror message to [[port to main]] whose implicit handler in Window runs the following steps:
         1. Let mirrored audio track be a new AudioTrack object, and assign it the same property values as new audio track.
         2. Add mirrored audio track to the audioTracks attribute on the HTMLMediaElement.
         Otherwise: add new audio track to the audioTracks attribute on the HTMLMediaElement.
         This should trigger AudioTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to mirrored audio track or new audio track, at the AudioTrackList object referenced by the audioTracks attribute on the HTMLMediaElement.
   3. For each video track in the initialization segment, run following steps:
      1. Let new video track be a new VideoTrack object.
      2. Generate a unique ID and assign it to the id property on new video track.
      3. Assign the language of the track to the language property on new video track.
      4. Assign the label of the track to the label property on new video track.
      5. Assign the kind of the track to the kind property on new video track.
      6. If this SourceBuffer object's videoTracks's length equals 0, then run the following steps:
         1. Set the selected property on new video track to true.
         2. Set active track flag to true.
      7. Add new video track to the videoTracks attribute on this SourceBuffer object.
         This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to new video track, at the VideoTrackList object referenced by the videoTracks attribute on this SourceBuffer object.
      8. If the parent media source was constructed in a DedicatedWorkerGlobalScope: post an internal create track mirror message to [[port to main]] whose implicit handler in Window runs the following steps:
         1. Let mirrored video track be a new VideoTrack object, and assign it the same property values as new video track.
         2. Add mirrored video track to the videoTracks attribute on the HTMLMediaElement.
         Otherwise: add new video track to the videoTracks attribute on the HTMLMediaElement.
         This should trigger VideoTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to mirrored video track or new video track, at the VideoTrackList object referenced by the videoTracks attribute on the HTMLMediaElement.
   4. For each text track in the initialization segment, run following steps:
      1. Let new text track be a new TextTrack object.
      2. Generate a unique ID and assign it to the id property on new text track.
      3. Assign the language of the track to the language property on new text track.
      4. Assign the label of the track to the label property on new text track.
      5. Assign the kind of the track to the kind property on new text track.
      6. If the mode property on new text track equals "showing" or "hidden", then set active track flag to true.
      7. Add new text track to the textTracks attribute on this SourceBuffer object.
         This should trigger TextTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to new text track, at the TextTrackList object referenced by the textTracks attribute on this SourceBuffer object.
      8. If the parent media source was constructed in a DedicatedWorkerGlobalScope: post an internal create track mirror message to [[port to main]] whose implicit handler in Window runs the following steps:
         1. Let mirrored text track be a new TextTrack object, and assign it the same property values as new text track.
         2. Add mirrored text track to the textTracks attribute on the HTMLMediaElement.
         Otherwise: add new text track to the textTracks attribute on the HTMLMediaElement.
         This should trigger TextTrackList [HTML] logic to queue a task to fire an event named addtrack using TrackEvent with the track attribute initialized to mirrored text track or new text track, at the TextTrackList object referenced by the textTracks attribute on the HTMLMediaElement.
   5. If active track flag equals true, then add this SourceBuffer to activeSourceBuffers and queue a task to fire an event named addsourcebuffer at activeSourceBuffers.
6. Set the [[first initialization segment received flag]] to true.
7. Set the [[pending initialization segment for changeType flag]] to false.
8. Use the mirror if necessary algorithm to run the following step in Window:
   1. If the HTMLMediaElement's readyState attribute is greater than HAVE_CURRENT_DATA, then set the HTMLMediaElement's readyState attribute to HAVE_METADATA.
   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
9. If each object in sourceBuffers of the parent media source has [[first initialization segment received flag]] equal to true, then use the parent media source's mirror if necessary algorithm to run the following step in Window:
   1. If the HTMLMediaElement's readyState attribute is HAVE_NOTHING, then set the HTMLMediaElement's readyState attribute to HAVE_METADATA.
   Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement. If a transition from HAVE_NOTHING to HAVE_METADATA occurs, it should trigger HTMLMediaElement logic to queue a task to fire an event named loadedmetadata at the media element.
When complete coded frames have been parsed by the segment parser loop, then the following steps are run:
1. For each coded frame in the media segment run the following steps:
   1. Loop Top:
      If [[generate timestamps flag]] equals true: let presentation timestamp and decode timestamp equal 0.
      Otherwise: let presentation timestamp and decode timestamp be double precision floating point representations of the coded frame's presentation and decode timestamps, in seconds.
      Special processing may be needed to determine the presentation and decode timestamps for timed text frames since this information may not be explicitly present in the underlying format or may be dependent on the order of the frames. Some metadata text tracks, like MPEG2-TS PSI data, may only have implied timestamps. Format specific rules for these situations SHOULD be in the byte stream format specifications or in separate extension specifications.
      Implementations don't have to internally store timestamps in a double precision floating point representation. This representation is used here because it is the representation for timestamps in the HTML spec. The intention here is to make the behavior clear without adding unnecessary complexity to the algorithm to deal with the fact that adding a timestampOffset may cause a timestamp rollover in the underlying timestamp representation used by the byte stream format. Implementations can use any internal timestamp representation they wish, but the addition of timestampOffset SHOULD behave in a similar manner to what would happen if a double precision floating point representation was used.
   2. If mode equals "sequence" and [[group start timestamp]] is set, then run the following steps:
      1. Set timestampOffset equal to [[group start timestamp]] minus presentation timestamp.
      2. Set [[group end timestamp]] equal to [[group start timestamp]].
      3. Set the need random access point flag on all track buffers to true.
      4. Unset [[group start timestamp]].
   3. If timestampOffset is not 0, then run the following steps:
      1. Add timestampOffset to the presentation timestamp.
      2. Add timestampOffset to the decode timestamp.
   4. If a discontinuity in decode timestamps is detected for the coded frame's track buffer, then run the following steps:
      1. If mode equals "segments": set [[group end timestamp]] to presentation timestamp.
         If mode equals "sequence": set [[group start timestamp]] equal to the [[group end timestamp]].
      2. Unset the last decode timestamp on all track buffers.
      3. Unset the last frame duration on all track buffers.
      4. Unset the highest end timestamp on all track buffers.
      5. Set the need random access point flag on all track buffers to true.
      6. Jump to the Loop Top step above to restart processing of the current coded frame.
   5. Let frame end timestamp equal the sum of presentation timestamp and the coded frame duration.
   6. If presentation timestamp is less than appendWindowStart, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.
      Some implementations MAY choose to collect some of these coded frames with presentation timestamp less than appendWindowStart and use them to generate a splice at the first coded frame that has a presentation timestamp greater than or equal to appendWindowStart even if that frame is not a random access point. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement.
   7. If frame end timestamp is greater than appendWindowEnd, then set the need random access point flag to true, drop the coded frame, and jump to the top of the loop to start processing the next coded frame.
      Some implementations MAY choose to collect coded frames with presentation timestamp less than appendWindowEnd and frame end timestamp greater than appendWindowEnd and use them to generate a splice across the portion of the collected coded frames within the append window at time of collection, and the beginning portion of later processed frames which only partially overlap the end of the collected coded frames. Supporting this requires multiple decoders or faster than real-time decoding so for now this behavior will not be a normative requirement. In conjunction with collecting coded frames that span appendWindowStart, implementations MAY thus support gapless audio splicing.
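A non-normative sketch of using the append window to trim a segment to a precise clip boundary; mediaSegment is a hypothetical segment supplied by the application:

// Non-normative sketch: only keep frames between 10s and 20s from the next
// appended segment; frames outside the window are dropped by the user agent.
sourceBuffer.appendWindowStart = 10;
sourceBuffer.appendWindowEnd = 20;
sourceBuffer.appendBuffer(mediaSegment);
// Reset the window afterwards so later appends are unaffected.
sourceBuffer.addEventListener("updateend", () => {
  sourceBuffer.appendWindowStart = 0;
  sourceBuffer.appendWindowEnd = Number.POSITIVE_INFINITY;
}, { once: true });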
This is to compensate for minor errors in frame timestamp computations that can appear when converting back and forth between double precision floating point numbers and rationals. This tolerance allows a frame to replace an existing one as long as it is within 1 microsecond of the existing frame's start time. Frames that come slightly before an existing frame are handled by the removal step below.
Removing all coded frames until the next random access point is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point depended on the frames that were removed.
The greater than check is needed because bidirectional prediction between coded frames can cause presentation timestamps to not be monotonically increasing even though the decode timestamps are monotonically increasing.
If frame end timestamp is greater than [[group end timestamp]], then set [[group end timestamp]] equal to frame end timestamp.
If [[generate timestamps flag]] equals true, then set timestampOffset equal to frame end timestamp.
2. If the HTMLMediaElement's readyState attribute is HAVE_METADATA and the new coded frames cause HTMLMediaElement's buffered to have a TimeRanges for the current playback position, then set the HTMLMediaElement's readyState attribute to HAVE_CURRENT_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
3. If the HTMLMediaElement's readyState attribute is HAVE_CURRENT_DATA and the new coded frames cause HTMLMediaElement's buffered to have a TimeRanges that includes the current playback position and some time beyond the current playback position, then set the HTMLMediaElement's readyState attribute to HAVE_FUTURE_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
4. If the HTMLMediaElement's readyState attribute is HAVE_FUTURE_DATA and the new coded frames cause HTMLMediaElement's buffered to have a TimeRanges that includes the current playback position and enough data to ensure uninterrupted playback, then set the HTMLMediaElement's readyState attribute to HAVE_ENOUGH_DATA.
Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
5. If the media segment contains data beyond the current duration, then run the duration change algorithm with new duration set to the maximum of the current duration and the [[group end timestamp]].
Follow these steps when coded frames for a specific time range need to be removed from the SourceBuffer:
1. Let start and end equal the starting and ending presentation timestamps for the removal range.
2. For each track buffer in this SourceBuffer, run the following steps:
   1. Let remove end timestamp be the current value of duration.
   2. If this track buffer has a random access point timestamp that is greater than or equal to end, then update remove end timestamp to that random access point timestamp.
      Random access point timestamps can be different across tracks because the dependencies between coded frames within a track are usually different than the dependencies in another track.
   3. Remove all media data from this track buffer that contains starting timestamps greater than or equal to start and less than the remove end timestamp.
   4. For each removed frame, if the frame has a decode timestamp equal to the last decode timestamp for the frame's track, run the following steps:
      1. If mode equals "segments": set [[group end timestamp]] to presentation timestamp.
         If mode equals "sequence": set [[group start timestamp]] equal to the [[group end timestamp]].
      2. Unset the last decode timestamp on all track buffers.
      3. Unset the last frame duration on all track buffers.
      4. Unset the highest end timestamp on all track buffers.
      5. Set the need random access point flag on all track buffers to true.
   5. Remove all possible decoding dependencies on the removed coded frames by also removing all coded frames from this track buffer between the removed frames and the next random access point after them.
      Removing all coded frames until the next random access point is a conservative estimate of the decoding dependencies since it assumes all frames between the removed frames and the next random access point depended on the frames that were removed.
   6. If this object is in activeSourceBuffers, the current playback position is greater than or equal to start and less than the remove end timestamp, and HTMLMediaElement's readyState is greater than HAVE_METADATA, then set the HTMLMediaElement's readyState attribute to HAVE_METADATA and stall playback.
      Per HTMLMediaElement ready states [HTML] logic, HTMLMediaElement's readyState changes may trigger events on the HTMLMediaElement.
      This transition occurs because media data for the current position has been removed. Playback cannot progress until media for the current playback position is appended or the selected/enabled track state changes (see 3.15.5 Changes to selected/enabled track state).
3. If the [[buffer full flag]] equals true and this object is ready to accept more bytes, then set the [[buffer full flag]] to false.
This algorithm is run to free up space in this SourceBuffer when new data is appended:
1. Let new data equal the data that is about to be appended to this SourceBuffer.
   Need to recognize step here that implementations MAY decide to set [[buffer full flag]] true here if it predicts that processing new data in addition to any existing bytes in [[input buffer]] would exceed the capacity of the SourceBuffer. Such a step enables more proactive push-back from implementations before accepting new data which would overflow resources, for example. In practice, at least one implementation already does this.
2. If the [[buffer full flag]] equals false, then abort these steps.
3. Let removal ranges equal a list of presentation time ranges that can be evicted from the presentation to make room for the new data.
   Implementations MAY use different methods for selecting removal ranges so web applications SHOULD NOT depend on a specific behavior. The web application can use the buffered attribute to observe whether portions of the buffered data have been evicted.
4. For each range in removal ranges, run the coded frame removal algorithm with start and end equal to the removal range start and end timestamps.
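Since eviction behavior varies across implementations, a robust application compares buffered before and after updates rather than assuming appended data persists. A non-normative sketch:

// Non-normative sketch: detect evicted ranges by diffing the buffered
// attribute across updates.
function snapshotRanges(timeRanges) {
  const out = [];
  for (let i = 0; i < timeRanges.length; i++) {
    out.push([timeRanges.start(i), timeRanges.end(i)]);
  }
  return out;
}

let before = snapshotRanges(sourceBuffer.buffered);
sourceBuffer.addEventListener("updateend", () => {
  const after = snapshotRanges(sourceBuffer.buffered);
  console.log("buffered before:", before, "after:", after);
  before = after;
});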
Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping audio coded frames:
In this algorithm, timestamps are rounded to the nearest audio sample boundary (i.e., floor(x * sample_rate + 0.5) / sample_rate).
For example, with a sample rate of 8000 Hz, a timestamp of 10.01255 rounds so that presentation timestamp and decode timestamp are updated to 10.0125, since 10.01255 is closer to 10 + 100/8000 (10.0125) than to 10 + 101/8000 (10.012625).
Some implementations MAY apply fades to/from silence to coded frames on either side of the inserted silence to make the transition less jarring.
This is intended to allow new coded frame to be added to the track buffer as if overlapped frame had not been in the track buffer to begin with.
If the new coded frame is less than 5 milliseconds in duration, then coded frames that are appended after the new coded frame will be needed to properly render the splice.
See the audio splice rendering algorithm for details on how this splice frame is rendered.
The following steps are run when a spliced frame, generated by the audio splice frame algorithm, needs to be rendered by the media element:
Here is a graphical representation of this algorithm.
Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping timed text coded frames:
This is intended to allow new coded frame to be added to the track buffer as if it hadn't overlapped any frames in track buffer to begin with.
SourceBufferList is a simple container object for SourceBuffer objects. It provides read-only array access and fires events when the list is modified.
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface SourceBufferList : EventTarget {
  readonly attribute unsigned long length;
  attribute EventHandler onaddsourcebuffer;
  attribute EventHandler onremovesourcebuffer;
  getter SourceBuffer (unsigned long index);
};
length of type unsigned long, readonly
Indicates the number of SourceBuffer objects in the list.
onaddsourcebuffer of type EventHandler
The event handler for the addsourcebuffer event.
onremovesourcebuffer of type EventHandler
The event handler for the removesourcebuffer event.
getter
Allows the SourceBuffer objects in the list to be accessed with an array operator (i.e., []).
When this method is invoked, the user agent must run the following steps:
1. If index is greater than or equal to the length attribute, then return undefined and abort these steps.
2. Return the index'th SourceBuffer object in the list.
Event name | Interface | Dispatched when...
---|---|---
addsourcebuffer | Event | When a SourceBuffer is added to the list.
removesourcebuffer | Event | When a SourceBuffer is removed from the list.
A ManagedMediaSource is a MediaSource that actively manages its memory content. Unlike a MediaSource, the user agent can evict content through the memory cleanup algorithm from its sourceBuffers (populated with ManagedSourceBuffer) for any reason.
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface ManagedMediaSource : MediaSource {
  constructor();
  readonly attribute boolean streaming;
  attribute EventHandler onstartstreaming;
  attribute EventHandler onendstreaming;
};
streaming of type boolean, readonly
On getting: return the current value of the attribute.
Event name | Interface | Dispatched when...
---|---|---
startstreaming | Event | A ManagedMediaSource's streaming attribute changed from false to true.
endstreaming | Event | A ManagedMediaSource's streaming attribute changed from true to false.
The following steps are run periodically, whenever the SourceBuffer Monitoring algorithm is scheduled to run.
Having enough managed data to ensure uninterrupted playback is an implementation defined condition where the user agent determines that it currently has enough data to play the presentation without stalling for a meaningful period of time. This condition is constantly evaluated to determine when to transition the value of streaming. These transitions indicate when the user agent believes it needs more data or has enough data buffered, respectively.
Being able to retrieve and buffer data in an efficient way is an implementation defined condition where the user agent determines that it can fetch new data in an energy efficient manner while able to achieve the desired memory usage.
1. Run the MediaSource SourceBuffer Monitoring algorithm.
2. Let can play uninterrupted and efficiently be true if the buffered attribute contains a TimeRanges that includes the current playback position and enough managed data to ensure uninterrupted playback, and the user agent is able to retrieve and buffer data in an efficient way; otherwise, let it be false.
3. If can play uninterrupted and efficiently does not correspond to the current value of streaming, queue an element task on the media element that runs the following steps:
   1. Update the streaming attribute to reflect can play uninterrupted and efficiently.
   2. If streaming changed from false to true, queue a task to fire an event named startstreaming at the ManagedMediaSource.
   3. If streaming changed from true to false, queue a task to fire an event named endstreaming at the ManagedMediaSource.
A BufferedChangeEvent is fired whenever the buffered contents change on one of the ManagedSourceBuffer objects in sourceBuffers:
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface BufferedChangeEvent : Event {
  constructor(DOMString type, optional BufferedChangeEventInit eventInitDict = {});
  [SameObject] readonly attribute TimeRanges addedRanges;
  [SameObject] readonly attribute TimeRanges removedRanges;
};

dictionary BufferedChangeEventInit : EventInit {
  TimeRanges addedRanges;
  TimeRanges removedRanges;
};
addedRanges
The time ranges added between the last updatestart and updateend events (which would have occurred during the last run of the coded frame processing algorithm).
removedRanges
The time ranges removed between the last updatestart and updateend events (which would have occurred during the last run of the coded frame removal or coded frame eviction algorithm or if the user agent evicted content in response to a memory cleanup).
WebIDL
[Exposed=(Window,DedicatedWorker)]
interface ManagedSourceBuffer : SourceBuffer {
  attribute EventHandler onbufferedchange;
};
onbufferedchange
An event handler IDL attribute whose event handler event type is bufferedchange.
Event name | Interface | Dispatched when...
---|---|---
bufferedchange | BufferedChangeEvent | The ManagedSourceBuffer's buffered range changed following a call to appendBuffer(), remove(), endOfStream(), or as a consequence of the user agent running the memory cleanup algorithm.
The following steps are run at the completion of all operations on the ManagedSourceBuffer buffer that would cause the buffer's buffered to change, that is, once appendBuffer(), remove(), or the memory cleanup algorithm has completed:
1. Let previous buffered ranges equal the buffer's buffered attribute before the changes occurred.
2. Let new buffered ranges equal the buffer's current buffered TimeRanges.
3. Let added be the TimeRanges present in new buffered ranges but not in previous buffered ranges, and removed be the TimeRanges present in previous buffered ranges but not in new buffered ranges.
4. Let eventInitDict be a new BufferedChangeEventInit dictionary initialized with added as its addedRanges and removed as its removedRanges.
5. Queue a task to fire an event named bufferedchange at buffer using the BufferedChangeEvent interface, initialized with eventInitDict.
The memory cleanup algorithm allows the user agent to evict buffered media from each ManagedSourceBuffer in the ManagedMediaSource parent's activeSourceBuffers. The user agent SHOULD NOT evict the presentation at or after currentTime until such presentation could be retrieved again.
Implementations can use different strategies for selecting removal ranges so web applications shouldn't depend on a specific behavior. The web application would listen to the bufferedchange event to observe whether portions of the buffered data have been evicted.
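A non-normative sketch of observing user-agent-initiated eviction through the bufferedchange event; managedSourceBuffer is assumed to be an existing ManagedSourceBuffer:

// Non-normative sketch: watch for eviction on a ManagedSourceBuffer.
managedSourceBuffer.onbufferedchange = (e) => {
  for (let i = 0; i < e.removedRanges.length; i++) {
    console.log(
      `Evicted [${e.removedRanges.start(i)}, ${e.removedRanges.end(i)})`,
    );
  }
};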
This section specifies what the existing HTMLMediaElement's seekable and HTMLMediaElement's buffered attributes on the HTMLMediaElement MUST return when a MediaSource is attached to the element, and what the existing HTMLMediaElement's srcObject attribute MUST also do when it is set to be a MediaSourceHandle object.
HTMLMediaElement's seekable
The HTMLMediaElement's seekable attribute returns a new static normalized TimeRanges object created based on the following steps:
1. If the MediaSource was constructed in a DedicatedWorkerGlobalScope that is terminated or is closing, then return an empty TimeRanges object and abort these steps.
This case is intended to handle implementations that may no longer maintain any previous information about buffered or seekable media in a MediaSource that was constructed in a DedicatedWorkerGlobalScope that has been terminated by terminate() or user agent execution of terminate a worker for the MediaSource's DedicatedWorkerGlobalScope, for instance as the eventual result of close() execution.
Should there be some (eventual) media element error transition in the case of an attached worker MediaSource having its context destroyed? The experimental Chromium implementation of worker MSE just keeps the element readyState, networkState and error the same as prior to that context destruction, though the seekable and buffered attributes each report an empty TimeRange.
2. Let recent duration and recent live seekable range be the current values of duration and [[live seekable range]], determined as follows:
   If the MediaSource was constructed in a Window: set recent duration to be duration and set recent live seekable range to be [[live seekable range]].
   Otherwise: let recent duration and recent live seekable range be the values of duration and [[live seekable range]] that were recently updated by handling implicit messages posted by the MediaSource to its [[port to main]] on every change to duration or [[live seekable range]].
3. If recent duration equals NaN, then return an empty TimeRanges object.
4. If recent duration equals positive Infinity, then run the following steps:
   1. If recent live seekable range is not empty: let union ranges be the union of recent live seekable range and the HTMLMediaElement's buffered attribute, return a single range with a start time equal to the earliest start time in union ranges and an end time equal to the highest end time in union ranges, and abort these steps.
   2. If the HTMLMediaElement's buffered attribute returns an empty TimeRanges object, then return an empty TimeRanges object and abort these steps.
   3. Return a single range with a start time of 0 and an end time equal to the highest end time reported by the HTMLMediaElement's buffered attribute.
5. Otherwise, return a single range with a start time of 0 and an end time equal to recent duration.
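For live streams with duration set to positive Infinity, the application controls the reported seekable window through the MediaSource's live seekable range. A non-normative sketch; the 5 minute window and the way the live edge is obtained are assumptions:

// Non-normative sketch: expose a sliding 5 minute live seek window.
// setLiveSeekableRange()/clearLiveSeekableRange() update the internal
// [[live seekable range]] used by the seekable steps above.
function updateLiveWindow(mediaSource, liveEdgeSeconds) {
  const start = Math.max(0, liveEdgeSeconds - 300);
  mediaSource.setLiveSeekableRange(start, liveEdgeSeconds);
}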
HTMLMediaElement's buffered
The HTMLMediaElement's buffered attribute returns a static normalized TimeRanges object based on the following steps:
1. If the MediaSource was constructed in a DedicatedWorkerGlobalScope that is terminated or is closing, then return an empty TimeRanges object and abort these steps.
This case is intended to handle implementations that may no longer maintain any previous information about buffered or seekable media in a MediaSource that was constructed in a DedicatedWorkerGlobalScope that has been terminated by terminate() or user agent execution of terminate a worker for the MediaSource's DedicatedWorkerGlobalScope, for instance as the eventual result of close() execution.
Should there be some (eventual) media element error transition in the case of an attached worker MediaSource having its context destroyed? The experimental Chromium implementation of worker MSE just keeps the element readyState, networkState and error the same as prior to that context destruction, though the seekable and buffered attributes each report an empty TimeRange.
2. Let recent intersection ranges be determined as follows:
   If the MediaSource was constructed in a Window:
   1. Let recent intersection ranges equal an empty TimeRanges object.
   2. If activeSourceBuffers.length does not equal 0, then run the following steps:
      1. Let active ranges be the ranges returned by buffered for each SourceBuffer object in activeSourceBuffers.
      2. Let highest end time be the largest range end time in the active ranges.
      3. Let recent intersection ranges equal a TimeRanges object containing a single range from 0 to highest end time.
      4. For each SourceBuffer object in activeSourceBuffers run the following steps:
         1. Let source ranges equal the ranges returned by the buffered attribute on the current SourceBuffer.
         2. If readyState is "ended", then set the end time on the last range in source ranges to highest end time.
         3. Let new intersection ranges equal the intersection between the recent intersection ranges and the source ranges, and replace the ranges in recent intersection ranges with the new intersection ranges.
   Otherwise: let recent intersection ranges be the most recent TimeRanges resulting from the steps for the Window case, but run with the MediaSource and its SourceBuffer objects in their DedicatedWorkerGlobalScope and communicated by using [[port to main]] implicit messages on every update to the activeSourceBuffers, readyState, or any of the buffering state that would change any of the values of each of those buffered attributes of the activeSourceBuffers.
3. Return recent intersection ranges.
The overhead of recalculating and communicating recent intersection ranges so frequently is one reason for allowing implementation flexibility to query this information on-demand using other mechanisms such as shared memory and locks as mentioned in cross-context communication model.
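From script, this intersection is visible only through the media element. A non-normative helper for logging it; video is assumed to be an HTMLMediaElement with an attached MediaSource:

// Non-normative sketch: log the media element's buffered ranges, which
// reflect the intersection of the active SourceBuffer ranges computed above.
function logBuffered(video) {
  const b = video.buffered;
  for (let i = 0; i < b.length; i++) {
    console.log(`buffered[${i}] = [${b.start(i)}, ${b.end(i)})`);
  }
}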
HTMLMediaElement's srcObject
If a HTMLMediaElement's srcObject attribute is assigned a MediaSourceHandle, then set [[has ever been assigned as srcobject]] for that MediaSourceHandle to true as part of the synchronous steps of the extended HTMLMediaElement's srcObject setter that occur before invoking the element's load algorithm.
This prevents transferring that MediaSourceHandle object ever again, enabling a clear synchronous exception if that is attempted.
MediaSourceHandle needs to be added to HTMLMediaElement's MediaProvider IDL typedef and related text involving media provider objects.
This section specifies extensions to the [HTML] AudioTrack definition.
WebIDL
[Exposed=(Window,DedicatedWorker)]
partial interface AudioTrack {
  readonly attribute SourceBuffer? sourceBuffer;
};
AudioTrack needs Window+DedicatedWorker exposure.
sourceBuffer of type SourceBuffer, readonly, nullable
On getting, run the following step:
If this track was created by a SourceBuffer that was created on the same realm as this track, and if that SourceBuffer has not been removed from the sourceBuffers attribute of its parent media source: return the SourceBuffer that created this track. Otherwise: return null.
For example, if a DedicatedWorkerGlobalScope SourceBuffer notified its internal create track mirror handler in Window to create this track, then the Window copy of the track would return null for this attribute.
This section specifies extensions to the [HTML] VideoTrack definition.
WebIDL
[Exposed=(Window,DedicatedWorker)]
partial interface VideoTrack {
  readonly attribute SourceBuffer? sourceBuffer;
};
VideoTrack needs Window+DedicatedWorker exposure.
sourceBuffer of type SourceBuffer, readonly, nullable
On getting, run the following step:
If this track was created by a SourceBuffer that was created on the same realm as this track, and if that SourceBuffer has not been removed from the sourceBuffers attribute of its parent media source: return the SourceBuffer that created this track. Otherwise: return null.
For example, if a DedicatedWorkerGlobalScope SourceBuffer notified its internal create track mirror handler in Window to create this track, then the Window copy of the track would return null for this attribute.
This section specifies extensions to the [HTML] TextTrack definition.
WebIDL
[Exposed=(Window,DedicatedWorker)]
partial interface TextTrack {
  readonly attribute SourceBuffer? sourceBuffer;
};
sourceBuffer of type SourceBuffer, readonly, nullable
On getting, run the following step:
If this track was created by a SourceBuffer that was created on the same realm as this track, and if that SourceBuffer has not been removed from the sourceBuffers attribute of its parent media source: return the SourceBuffer that created this track. Otherwise: return null.
For example, if a DedicatedWorkerGlobalScope SourceBuffer notified its internal create track mirror handler in Window to create this track, then the Window copy of the track would return null for this attribute.
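A non-normative sketch of using this attribute to map a track back to the SourceBuffer that produced it; video is assumed to be an HTMLMediaElement playing MSE content in the same realm as its SourceBuffer objects:

// Non-normative sketch: find which SourceBuffer produced the enabled audio
// track (the attribute is null for mirrored tracks when MSE runs in a worker).
const tracks = video.audioTracks;
for (let i = 0; i < tracks.length; i++) {
  const track = tracks[i];
  if (track.enabled && track.sourceBuffer !== null) {
    console.log(`Track ${track.id} was created by`, track.sourceBuffer);
  }
}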
The bytes provided through appendBuffer() for a SourceBuffer form a logical byte stream. The format and semantics of these byte streams are defined in byte stream format specifications. The byte stream format registry [MSE-REGISTRY] provides mappings between a MIME type that may be passed to addSourceBuffer(), isTypeSupported() or changeType() and the byte stream format expected by a SourceBuffer using that MIME type for parsing newly appended data. Implementations are encouraged to register mappings for byte stream formats they support to facilitate interoperability. The byte stream format registry [MSE-REGISTRY] is the authoritative source for these mappings. If an implementation claims to support a MIME type listed in the registry, its SourceBuffer implementation MUST conform to the byte stream format specification listed in the registry entry.
The byte stream format specifications in the registry are not intended to define new storage formats. They simply outline the subset of existing storage format structures that implementations of this specification will accept.
Byte stream format parsing and validation is implemented in the segment parser loop algorithm.
This section provides general requirements for all byte stream format specifications:
A byte stream format specification MUST define how values for the AudioTrack, VideoTrack, and TextTrack attributes are derived from data in initialization segments.
If the byte stream format covers a format similar to one covered in the in-band tracks spec [INBANDTRACKS], then it SHOULD try to use the same attribute mappings so that Media Source Extensions playback and non-Media Source Extensions playback provide the same track information.
A byte stream format specification MUST define how the user agent detects and handles the following error conditions:
The number and type of tracks are not consistent across initialization segments.
For example, if the first initialization segment has 2 audio tracks and 1 video track, then all initialization segments that follow it in the byte stream MUST describe 2 audio tracks and 1 video track.
Unsupported codec changes occur across initialization segments.
See the initialization segment received algorithm, addSourceBuffer() and changeType() for details and examples of codec changes.
Video frame size changes. The user agent MUST support seamless playback.
This will cause the <video> display region to change size if the web application does not use CSS or HTML attributes (width/height) to constrain the element size.
Audio channel count changes. The user agent MAY support this seamlessly and could trigger downmixing.
This is a quality of implementation issue because changing the channel count may require reinitializing the audio device, resamplers, and channel mixers which tends to be audible.
A byte stream format specification MUST ensure that the user agent can report the time ranges occupied by appended media via the buffered attribute.
This is intended to simplify switching between audio streams where the frame boundaries don't always line up across encodings (e.g., Vorbis).
Media segments MUST be playable when combined with the most recent initialization segment that precedes them and any subset of the media segments appended since that initialization segment. For example, if I1 is associated with M1, M2, M3 then the above MUST hold for all the combinations I1+M1, I1+M2, I1+M1+M2, I1+M2+M3, etc.
Byte stream specifications MUST at a minimum define constraints which ensure that the above requirements hold. Additional constraints MAY be defined, for example to simplify implementation.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, SHOULD, and SHOULD NOT in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
<video id="v" autoplay></video>
<script>
  const video = document.getElementById("v");
  const mediaSource = new MediaSource();
  mediaSource.addEventListener("sourceopen", onSourceOpen);
  video.src = window.URL.createObjectURL(mediaSource);

  async function onSourceOpen(e) {
    const mediaSource = e.target;

    if (mediaSource.sourceBuffers.length > 0) return;

    const sourceBuffer = mediaSource.addSourceBuffer(
      'video/webm; codecs="vorbis,vp8"',
    );

    video.addEventListener("seeking", (e) => onSeeking(mediaSource, e.target));
    video.addEventListener("progress", () =>
      appendNextMediaSegment(mediaSource),
    );

    try {
      const initSegment = await getInitializationSegment();

      if (initSegment == null) {
        // Error fetching the initialization segment. Signal end of stream with an error.
        mediaSource.endOfStream("network");
        return;
      }

      // Append the initialization segment.
      sourceBuffer.addEventListener("updateend", function firstAppendHandler() {
        sourceBuffer.removeEventListener("updateend", firstAppendHandler);
        // Append some initial media data.
        appendNextMediaSegment(mediaSource);
      });
      sourceBuffer.appendBuffer(initSegment);
    } catch (error) {
      // Handle errors that might occur during initialization segment fetching.
      console.error("Error fetching initialization segment:", error);
      mediaSource.endOfStream("network");
    }
  }

  async function appendNextMediaSegment(mediaSource) {
    if (
      mediaSource.readyState === "closed" ||
      mediaSource.sourceBuffers[0].updating
    )
      return;

    // If we have run out of stream data, then signal end of stream.
    if (!haveMoreMediaSegments()) {
      mediaSource.endOfStream();
      return;
    }

    try {
      const mediaSegment = await getNextMediaSegment();
      // NOTE: If mediaSource.readyState == "ended", this appendBuffer() call will
      // cause mediaSource.readyState to transition to "open". The web application
      // should be prepared to handle multiple "sourceopen" events.
      mediaSource.sourceBuffers[0].appendBuffer(mediaSegment);
    } catch (error) {
      // Handle errors that might occur during media segment fetching.
      console.error("Error fetching media segment:", error);
      mediaSource.endOfStream("network");
    }
  }

  function onSeeking(mediaSource, video) {
    if (mediaSource.readyState === "open") {
      // Abort current segment append.
      mediaSource.sourceBuffers[0].abort();
    }

    // Notify the media segment loading code to start fetching data at the
    // new playback position.
    seekToMediaSegmentAt(video.currentTime);

    // Append a media segment from the new playback position.
    appendNextMediaSegment(mediaSource);
  }

  // Example of async function for getting initialization segment
  async function getInitializationSegment() {
    // Implement fetching of the initialization segment
    // This is just a placeholder function
  }

  // Example function for checking if there are more media segments
  function haveMoreMediaSegments() {
    // Implement logic to determine if there are more media segments
    // This is just a placeholder function
  }

  // Example function for getting the next media segment
  async function getNextMediaSegment() {
    // Implement fetching of the next media segment
    // This is just a placeholder function
  }

  // Example function for seeking to a specific media segment
  function seekToMediaSegmentAt(currentTime) {
    // Implement seeking logic
    // This is just a placeholder function
  }
</script>
<script>
  async function setUpVideoStream() {
    // Specific video format and codec
    const mediaType = 'video/mp4; codecs="mp4a.40.2,avc1.4d4015"';

    // Check if the type of video format / codec is supported.
    if (!window.ManagedMediaSource?.isTypeSupported(mediaType)) {
      return; // Not supported, do something else.
    }

    // Set up video and its managed source.
    const video = document.createElement("video");
    const source = new ManagedMediaSource();
    video.controls = true;

    await new Promise((resolve) => {
      video.src = URL.createObjectURL(source);
      source.addEventListener("sourceopen", resolve, { once: true });
      document.body.appendChild(video);
    });

    const sourceBuffer = source.addSourceBuffer(mediaType);

    // Set up the event handlers
    sourceBuffer.onbufferedchange = (e) => {
      console.log("onbufferedchange event fired.");
      console.log(`Added Ranges: ${timeRangesToString(e.addedRanges)}`);
      console.log(`Removed Ranges: ${timeRangesToString(e.removedRanges)}`);
    };

    source.onstartstreaming = async () => {
      const response = await fetch("./videos/bipbop.mp4");
      const buffer = await response.arrayBuffer();
      await new Promise((resolve) => {
        sourceBuffer.addEventListener("updateend", resolve, { once: true });
        sourceBuffer.appendBuffer(buffer);
      });
    };

    source.onendstreaming = async () => {
      // Stop fetching new segments here
    };
  }

  // Helper function...
  function timeRangesToString(timeRanges) {
    const ranges = [];
    for (let i = 0; i < timeRanges.length; i++) {
      ranges.push(`[${timeRanges.start(i)}, ${timeRanges.end(i)})`);
    }
    return "[" + ranges.join(", ") + "]";
  }
</script>
<body onload="setUpVideoStream()"></body>
The editors would like to thank Alex Giladi, Bob Lund, Chris Needham, Chris Poole, Chris Wilson, Cyril Concolato, Dale Curtis, David Dorwin, David Singer, Duncan Rowden, François Daoust, Frank Galligan, Glenn Adams, Jer Noble, Joe Steele, John Simmons, Kagami Sascha Rosylight, Kevin Streeter, Marcos Cáceres, Mark Vickers, Matt Ward, Matthew Gregan, Michael(tm) Smith, Michael Thornburgh, Mounir Lamouri, Paul Adenot, Philip Jägenstedt, Philippe Le Hegaret, Pierre Lemieux, Ralph Giles, Steven Robertson, and Tatsuya Igarashi for their contributions to this specification.
This section is non-normative.
The video playback quality metrics described in previous revisions of this specification (e.g., sections 5 and 10 of the Candidate Recommendation) are now being developed as part of [MEDIA-PLAYBACK-QUALITY]. Some implementations may have implemented the earlier draft VideoPlaybackQuality object and the HTMLVideoElement extension method getVideoPlaybackQuality() described in those previous revisions.