The features in this specification extend or modify those found in Pointer Events, a W3C Recommendation that describes events and related interfaces for handling hardware agnostic pointer input from devices including a mouse, pen, touchscreen, etc. For compatibility with existing mouse based content, this specification also describes a mapping to fire Mouse Events for other pointer device types.

This specification is an update to [[PointerEvents3]]. It includes editorial clarifications and new features that facilitate more use cases.

Introduction

Today, most [[HTML]] content is used with and/or designed for mouse input. Content that handles input in a custom manner typically codes to [[UIEVENTS]] Mouse Events. Newer computing devices, however, incorporate other forms of input, including touchscreens, pen input, etc. Event types have been proposed for handling each of these forms of input individually. However, that approach often incurs unnecessary duplication of logic and event handling overhead when adding support for a new input type. This often creates a compatibility problem when content is written with only one device type in mind. Additionally, for compatibility with existing mouse-based content, most user agents fire Mouse Events for all input types. This makes it ambiguous whether a Mouse Event represents an actual mouse device or is being produced from another input type for compatibility, which makes it hard to code to both device types simultaneously.

To reduce the cost of coding to multiple input types and also to help with the above described ambiguity with Mouse Events, this specification defines a more abstract form of input, called a pointer. A pointer can be any point of contact on the screen made by a mouse cursor, pen, touch (including multi-touch), or other pointing input device. This model makes it easier to write sites and applications that work well no matter what hardware the user has. For scenarios when device-specific handling is desired, this specification also defines properties for inspecting the device type which produced the event. The primary goal is to provide a single set of events and interfaces that allow for easier authoring for cross-device pointer input while still allowing for device-specific handling only when necessary for an augmented experience.

An additional key goal is to enable multi-threaded user agents to handle direct manipulation actions for panning and zooming (for instance, with a finger or stylus on a touchscreen), without blocking on script execution.

While this specification defines a unified event model for a variety of pointer inputs, this model does not cover other forms of input such as keyboards or keyboard-like interfaces (for instance, a screen reader or similar assistive technology running on a touchscreen-only device, which allows users to sequentially navigate through focusable controls and elements). While user agents might choose to also generate pointer events in response to these interfaces, this scenario is not covered in this specification.

In the first instance, authors are encouraged to provide equivalent functionality for all forms of input by responding to high-level events such as focus, blur and click. However, when using low-level events (such as Pointer Events), authors are encouraged to ensure that all types of input are supported. In the case of keyboards and keyboard-like interfaces, this might require the addition of explicit keyboard event handling. See WCAG Guideline 2.1 Keyboard Accessible [[WCAG22]] for further details.

Pointer input combines input from mouse, pen, touch, etc.
A pointer is a hardware agnostic representation of input devices that can target a specific coordinate (or set of coordinates) on a screen.

The events for handling generic pointer input look a lot like those for mouse: {{GlobalEventHandlers/pointerdown}}, {{GlobalEventHandlers/pointermove}}, {{GlobalEventHandlers/pointerup}}, {{GlobalEventHandlers/pointerover}}, {{GlobalEventHandlers/pointerout}}, etc. This facilitates easy content migration from Mouse Events to Pointer Events. Pointer Events provide all the usual properties present in Mouse Events (client coordinates, target element, button states, etc.) in addition to new properties for other forms of input: pressure, contact geometry, tilt, etc. So authors can easily code to Pointer Events to share logic between different input types where it makes sense, and customize for a particular type of input only where necessary to get the best experience.

While Pointer Events are sourced from a variety of input devices, they are not defined as being generated from some other set of device-specific events. While possible and encouraged for compatibility, this spec does not require other device-specific events be supported (e.g. mouse events, touch events, etc.). A user agent could support pointer events without supporting any other device events. For compatibility with content written to mouse-specific events, this specification does provide an optional section describing how to generate compatibility mouse events based on pointer input from devices other than a mouse.

This specification does not provide any advice on the expected behavior of user agents that support both Touch Events (as defined in [[TOUCH-EVENTS]]) and Pointer Events. For more information on the relationship between these two specifications, see the Touch Events Community Group.

Examples

The following are basic examples that demonstrate how some of the APIs in this specification might be used by authors. Further, more specific examples are provided in the relevant sections of this document.

/* Bind to either Pointer Events or traditional touch/mouse */

if (window.PointerEvent) {
  // if Pointer Events are supported, only listen to pointer events
  target.addEventListener("pointerdown", function(e) {
    // if necessary, apply separate logic based on e.pointerType
    // for different touch/pen/mouse behavior
    ...
  });
  ...
} else {
  // traditional touch/mouse event handlers
  target.addEventListener('touchstart', function(e) {
    // prevent compatibility mouse events and click
    e.preventDefault();
    ...
  });
  ...
  target.addEventListener('mousedown', ...);
  ...
}

// additional event listeners for keyboard handling
...
window.addEventListener("pointerdown", detectInputType);

function detectInputType(event) {
  switch (event.pointerType) {
    case "mouse":
      /* mouse input detected */
      break;
    case "pen":
      /* pen/stylus input detected */
      break;
    case "touch":
      /* touch input detected */
      break;
    default:
      /* pointerType is empty (could not be detected)
         or UA-specific custom type */
  }
}
<div style="position:absolute; top:0px; left:0px; width:100px; height:100px;"></div>
<script>
window.addEventListener("pointerdown", checkPointerSize);

function checkPointerSize(event) {
  event.target.style.width = event.width + "px";
  event.target.style.height = event.height + "px";
}
</script>
const event1 = new PointerEvent("pointerover", {
  bubbles: true,
  cancelable: true,
  composed: true,
  pointerId: 42,
  pointerType: "pen",
  clientX: 300,
  clientY: 500
});
eventTarget.dispatchEvent(event1);

let pointerEventInitDict = {
  bubbles: true,
  cancelable: true,
  composed: true,
  pointerId: 42,
  pointerType: "pen",
  clientX: 300,
  clientY: 500,
};
const p1 = new PointerEvent("pointermove", pointerEventInitDict);
pointerEventInitDict.clientX += 10;
const p2 = new PointerEvent("pointermove", pointerEventInitDict);
pointerEventInitDict.coalescedEvents = [p1, p2];
const event2 = new PointerEvent("pointermove", pointerEventInitDict);
eventTarget.dispatchEvent(event2);
<div style="position:absolute; top:0px; left:0px; width:100px; height:100px;"></div>
<script>
window.addEventListener("pointerdown", assignPenColor);
window.addEventListener("pointermove", assignPenColor);
const colorMap = new Map();

function assignPenColor(event) {
  const uniqueId = event.persistentDeviceId;
  // Check if a unique id exists.
  if (uniqueId == 0) {
    return;
  }
  // Check if a color has been assigned to the device.
  if (colorMap.has(uniqueId)) {
    return;
  }
  // Assign a color to the device.
  let newColor = getNewColor();
  colorMap.set(uniqueId, newColor);
  return newColor;
}

function getNewColor() {
  /* return some color value */
}
</script>

Pointer Events and interfaces

PointerEvent interface

dictionary PointerEventInit : MouseEventInit {
    long pointerId = 0;
    double width = 1;
    double height = 1;
    float pressure = 0;
    float tangentialPressure = 0;
    long tiltX;
    long tiltY;
    long twist = 0;
    double altitudeAngle;
    double azimuthAngle;
    DOMString pointerType = "";
    boolean isPrimary = false;
    long persistentDeviceId = 0;
    sequence<PointerEvent> coalescedEvents = [];
    sequence<PointerEvent> predictedEvents = [];
};

[Exposed=Window]
interface PointerEvent : MouseEvent {
    constructor(DOMString type, optional PointerEventInit eventInitDict = {});
    readonly attribute long pointerId;
    readonly attribute double width;
    readonly attribute double height;
    readonly attribute float pressure;
    readonly attribute float tangentialPressure;
    readonly attribute long tiltX;
    readonly attribute long tiltY;
    readonly attribute long twist;
    readonly attribute double altitudeAngle;
    readonly attribute double azimuthAngle;
    readonly attribute DOMString pointerType;
    readonly attribute boolean isPrimary;
    readonly attribute long persistentDeviceId;
    [SecureContext] sequence<PointerEvent> getCoalescedEvents();
    sequence<PointerEvent> getPredictedEvents();
};
pointerId

A unique identifier for the pointer causing the event. User agents MAY reserve a generic pointerId value of 0 or 1 for the primary mouse pointer. The pointerId value of -1 MUST be reserved and used to indicate events that were generated by something other than a pointing device. For any other pointers, user agents are free to implement different strategies and approaches in how they assign a pointerId value. However, all active pointers in the [=top-level browsing context=] (as defined by [[HTML]]) must be unique, and the identifier MUST NOT be influenced by any other top-level browsing context (i.e. one top-level browsing context cannot assume that the pointerId of a pointer will be the same when the pointer moves outside of the browsing context and into another top-level browsing context). The user agent MAY recycle previously retired values for pointerId from previous active pointers, or it MAY always reuse the same pointerId for a particular pointing device (for instance, to uniquely identify particular pen/stylus inputs from a specific user in a multi-user collaborative application). However, in the latter case, to minimize the chance of fingerprinting and tracking across different pages or domains, the pointerId MUST only be associated explicitly with that particular pointing device for the lifetime of the page / session, and a new randomized pointerId MUST be chosen the next time that particular pointing device is used again in a new session.

The pointerId selection algorithm is implementation specific. Therefore authors cannot assume values convey any particular meaning other than an identifier for the pointer that is unique from all other active pointers. As an example, user agents may simply assign a number, starting from 0, to any active pointers, in the order that they become active — but these values are not guaranteed to be monotonically increasing.
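Because a pointerId is only guaranteed unique among currently active pointers, authors typically key per-pointer state off it only while the pointer is active. A minimal sketch (not from this specification; the handler names and plain stand-in event objects are illustrative):

```javascript
// Track per-pointer state keyed by pointerId. The event objects below are
// plain stand-ins for real PointerEvents.
const activePointers = new Map();

function onPointerDown(event) {
  // pointerId is only guaranteed unique among currently active pointers
  activePointers.set(event.pointerId, { type: event.pointerType });
}

function onPointerUpOrCancel(event) {
  // once a pointer is retired its id may be recycled, so drop it immediately
  activePointers.delete(event.pointerId);
}

onPointerDown({ pointerId: 2, pointerType: "touch" });
onPointerDown({ pointerId: 3, pointerType: "touch" });
onPointerUpOrCancel({ pointerId: 2, pointerType: "touch" });
```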

width

The width (magnitude on the X axis), in CSS pixels (see [[CSS21]]), of the contact geometry of the pointer. This value MAY be updated on each event for a given pointer. For inputs that typically lack contact geometry (such as a traditional mouse), and in cases where the actual geometry of the input is not detected by the hardware, the user agent MUST return a default value of 1.

height

The height (magnitude on the Y axis), in CSS pixels (see [[CSS21]]), of the contact geometry of the pointer. This value MAY be updated on each event for a given pointer. For inputs that typically lack contact geometry (such as a traditional mouse), and in cases where the actual geometry of the input is not detected by the hardware, the user agent MUST return a default value of 1.

pressure

The normalized pressure of the pointer input in the range of [0,1], where 0 and 1 represent the minimum and maximum pressure the hardware is capable of detecting, respectively. For hardware and platforms that do not support pressure, the value MUST be 0.5 when in the active buttons state and 0 otherwise.
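The fallback rule for pressure-insensitive hardware can be sketched as a function of the buttons bitmask (a hypothetical helper, not part of the specification; a non-zero buttons value corresponds to the active buttons state):

```javascript
// Illustrative: the pressure a UA must report for hardware without
// pressure support — 0.5 in the active buttons state, 0 otherwise.
function fallbackPressure(buttons) {
  return buttons !== 0 ? 0.5 : 0;
}

const hoverPressure = fallbackPressure(0);   // pen hovering, no buttons
const contactPressure = fallbackPressure(1); // pen in contact
```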

tangentialPressure

The normalized tangential pressure (also known as barrel pressure), typically set by an additional control (e.g. a finger wheel on an airbrush stylus), of the pointer input in the range of [-1,1], where 0 is the neutral position of the control. Note that some hardware may only support positive values in the range of [0,1]. For hardware and platforms that do not support tangential pressure, the value MUST be 0.

Despite the property's name, in practice the hardware controls/sensors that generate the values for this property may not necessarily be pressure sensitive. As an example, the finger wheel on most airbrush/painting stylus implementations can be freely set, rather than requiring the user to apply a constant pressure on the wheel to prevent it from returning to the zero position.
tiltX

The plane angle (in degrees, in the range of [-90,90]) between the Y-Z plane and the plane containing both the transducer (e.g. pen/stylus) axis and the Y axis. A positive tiltX is to the right, in the direction of increasing X values. tiltX can be used along with tiltY to represent the tilt away from the normal of a transducer with the digitizer. For hardware and platforms that do not report tilt or angle, the value MUST be 0.

tiltX explanation diagram
Positive tiltX.
tiltY

The plane angle (in degrees, in the range of [-90,90]) between the X-Z plane and the plane containing both the transducer (e.g. pen/stylus) axis and the X axis. A positive tiltY is towards the user, in the direction of increasing Y values. tiltY can be used along with tiltX to represent the tilt away from the normal of a transducer with the digitizer. For hardware and platforms that do not report tilt or angle, the value MUST be 0.

tiltY explanation diagram
Positive tiltY.
twist

The clockwise rotation (in degrees, in the range of [0,359]) of a transducer (e.g. pen/stylus) around its own major axis. For hardware and platforms that do not report twist, the value MUST be 0.

altitudeAngle

The altitude (in radians) of the transducer (e.g. pen/stylus), in the range [0, π/2] — where 0 is parallel to the surface (X-Y plane), and π/2 is perpendicular to the surface. For hardware and platforms that do not report tilt or angle, the value MUST be π/2.

The default value defined here for altitudeAngle is π/2, which positions the transducer as being perpendicular to the surface. This differs from the Touch Events - Level 2 specification's definition for the altitudeAngle property, which has a default value of 0.
altitudeAngle explanation diagram
Example altitudeAngle of π/4 (45 degrees from the X-Y plane).
azimuthAngle

The azimuth angle (in radians) of the transducer (e.g. pen/stylus), in the range [0, 2π] — where 0 represents a transducer whose cap is pointing in the direction of increasing X values (pointing to "3 o'clock" if looking straight down) on the X-Y plane, and the values progressively increase when going clockwise (π/2 at "6 o'clock", π at "9 o'clock", 3π/2 at "12 o'clock"). When the transducer is perfectly perpendicular to the surface (altitudeAngle of π/2), the value MUST be 0. For hardware and platforms that do not report tilt or angle, the value MUST be 0.

azimuthAngle explanation diagram
Example azimuthAngle of π/6 ("4 o'clock").
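The clock analogy above can be illustrated with a small hypothetical helper (not part of the specification) that snaps an azimuthAngle to the nearest quarter turn and names its clock position when looking straight down at the surface:

```javascript
// Illustrative only: map azimuthAngle (radians) to the nearest of the four
// quarter-turn clock positions described above.
function azimuthToClock(azimuthAngle) {
  const quarters = ["3 o'clock", "6 o'clock", "9 o'clock", "12 o'clock"];
  // one quarter turn clockwise = π/2 radians
  return quarters[Math.round(azimuthAngle / (Math.PI / 2)) % 4];
}

azimuthToClock(0);               // "3 o'clock" (positive X axis)
azimuthToClock(Math.PI);         // "9 o'clock" (negative X axis)
azimuthToClock(3 * Math.PI / 2); // "12 o'clock"
```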
pointerType

Indicates the device type that caused the event (mouse, pen, touch, etc.). If the user agent is to fire a pointer event for a mouse, pen/stylus, or touch input device, then the value of pointerType MUST be according to the following table:

Pointer Device Type | pointerType Value
Mouse | mouse
Pen / stylus | pen
Touch contact | touch

If the device type cannot be detected by the user agent, then the value MUST be an empty string. If the user agent supports pointer device types other than those listed above, the value of pointerType SHOULD be vendor prefixed to avoid conflicting names for different types of devices. Future specifications MAY provide additional normative values for other device types.

See Example 2 for a basic demonstration of how the pointerType can be used. Also note that developers should include some form of default handling to cover user agents that may have implemented their own custom pointerType values and for situations where pointerType is simply an empty string.
isPrimary

Indicates if the pointer represents the primary pointer of this pointer type.

persistentDeviceId

A unique identifier for the pointing device. If the hardware supports multiple pointers, pointer events generated from pointing devices MUST only get a persistentDeviceId if those pointers are uniquely identifiable over the session. If the pointer is uniquely identifiable, the assigned persistentDeviceId to that pointing device will remain constant for the remainder of the session. The persistentDeviceId value of 0 MUST be reserved and used to indicate events whose generating device could not be identified. Like pointerId, to minimize the chance of fingerprinting and tracking across different pages or domains, the persistentDeviceId MUST only be associated explicitly with that particular pointing device for the lifetime of the page / session, and a new randomized persistentDeviceId MUST be chosen the next time that particular pointing device is used again in a new session.

Due to digitizer and pointing device hardware constraints, a persistentDeviceId is not guaranteed to be available for all pointer events from a pointing device. For example, the device may not report its hardware id to the digitizer in time for pointerdown to have a persistentDeviceId. In such a case, the persistentDeviceId may initially be 0 and change to a valid value.
getCoalescedEvents()

A method that returns the list of coalesced events.

getPredictedEvents()

A method that returns the list of predicted events.

The PointerEventInit dictionary is used by the {{PointerEvent}} interface's constructor to provide a mechanism by which to construct untrusted (synthetic) pointer events. It inherits from the {{MouseEventInit}} dictionary defined in [[UIEVENTS]]. See the examples for sample code demonstrating how to fire an untrusted pointer event.

The [=event constructing steps=] for PointerEvent clone PointerEventInit's coalescedEvents to the coalesced events list and clone PointerEventInit's predictedEvents to the predicted events list.

The PointerEvent interface inherits from {{MouseEvent}}, defined in [[[UIEVENTS]]]. Also note the proposed extension in [[[CSSOM-VIEW]]], which changes the various coordinate properties from long to double to allow for fractional coordinates. For user agents that already implement this proposed extension for {{PointerEvent}}, but not for regular {{MouseEvent}}, there are additional requirements when it comes to the click, auxclick, and contextmenu events.

Button states

Chorded button interactions

Some pointer devices, such as mouse or pen, support multiple buttons. In the [[UIEVENTS]] Mouse Event model, each button press produces a mousedown and mouseup event. To better abstract this hardware difference and simplify cross-device input authoring, Pointer Events do not fire overlapping {{GlobalEventHandlers/pointerdown}} and {{GlobalEventHandlers/pointerup}} events for chorded button presses (depressing an additional button while another button on the pointer device is already depressed).

Instead, chorded button presses can be detected by inspecting changes to the button and buttons properties. The button and buttons properties are inherited from the {{MouseEvent}} interface, but with a change in semantics and values, as outlined in the following sections.

The modifications to the button and buttons properties apply only to pointer events. For any compatibility mouse events the value of button and buttons MUST follow [[UIEVENTS]].
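The change detection described above can be sketched by diffing the buttons bitmask between successive events. This is an illustrative helper, not part of the specification; it uses the button and buttons values defined in the following sections:

```javascript
// Map a single changed buttons bit to the corresponding button value.
const BUTTONS_TO_BUTTON = new Map([
  [1, 0],   // left mouse / touch contact / pen contact
  [4, 1],   // middle mouse
  [2, 2],   // right mouse / pen barrel button
  [8, 3],   // X1 (back) mouse
  [16, 4],  // X2 (forward) mouse
  [32, 5],  // pen eraser button
]);

function changedButton(previousButtons, currentButtons) {
  // XOR isolates the bit that changed between the two events
  const changedBit = previousButtons ^ currentButtons;
  // -1 when no button state changed, matching the button property
  return BUTTONS_TO_BUTTON.get(changedBit) ?? -1;
}

// left button already down, right button pressed in addition:
changedButton(1, 3); // 2 (right mouse / pen barrel button)
```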

The button property

To identify button state transitions in any pointer event (and not just {{GlobalEventHandlers/pointerdown}} and {{GlobalEventHandlers/pointerup}}), the button property indicates the device button whose state change fired the event.

Device Button Changes | button
Neither buttons nor touch/pen contact changed since last event | -1
Left Mouse, Touch contact, Pen contact | 0
Middle Mouse | 1
Right Mouse, Pen barrel button | 2
X1 (back) Mouse | 3
X2 (forward) Mouse | 4
Pen eraser button | 5
During a mouse drag, the value of the button property in a {{GlobalEventHandlers/pointermove}} event will be different from that in a mousemove event. For example, while moving the mouse with the right button pressed, the {{GlobalEventHandlers/pointermove}} events will have the button value -1, but the mousemove events will have the button value 2.

The buttons property

The buttons property gives the current state of the device buttons as a bitmask (same as in MouseEvent, but with an expanded set of possible values).

Current state of device buttons | buttons
Mouse moved with no buttons pressed, Pen moved while hovering with no buttons pressed | 0
Left Mouse, Touch contact, Pen contact | 1
Middle Mouse | 4
Right Mouse, Pen barrel button | 2
X1 (back) Mouse | 8
X2 (forward) Mouse | 16
Pen eraser button | 32
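Because buttons is a bitmask, individual buttons are queried with bitwise AND. A minimal sketch; the constant names and helper are illustrative, not part of the specification:

```javascript
// Bitmask values from the table above; names are illustrative only.
const LEFT_OR_CONTACT = 1, RIGHT_OR_BARREL = 2, MIDDLE = 4,
      BACK = 8, FORWARD = 16, ERASER = 32;

function isPressed(buttons, mask) {
  // a set bit means the corresponding button is currently pressed
  return (buttons & mask) !== 0;
}

// pen in contact with the barrel button held down: buttons === 3
isPressed(3, LEFT_OR_CONTACT); // true
isPressed(3, RIGHT_OR_BARREL); // true
isPressed(3, ERASER);          // false
```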

The primary pointer

In a multi-pointer (e.g. multi-touch) scenario, the isPrimary property is used to identify a master pointer amongst the set of active pointers for each pointer type.

  • At any given time, there can only ever be at most one primary pointer for each pointer type.
  • The first pointer to become active for a particular pointer type (e.g. the first finger to touch the screen in a multi-touch interaction) becomes the primary pointer for that pointer type.
  • Only a primary pointer will produce compatibility mouse events. In the case where there are multiple primary pointers, these pointers will all produce compatibility mouse events.
Authors who desire single-pointer interaction can achieve this by ignoring non-primary pointers (however, see the note below on multiple primary pointers).
When two or more pointer device types are being used concurrently, multiple pointers (one for each pointerType) are considered primary. For example, a touch contact and a mouse cursor moved simultaneously will produce pointers that are both considered primary.
Some devices, operating systems and user agents may ignore the concurrent use of more than one type of pointer input to avoid accidental interactions. For instance, devices that support both touch and pen interactions may ignore touch inputs while the pen is actively being used, to allow users to rest their hand on the touchscreen while using the pen (a feature commonly referred to as "palm rejection"). Currently, it is not possible for authors to suppress this behavior.
In some cases, it is possible for the user agent to fire pointer events in which no pointer is marked as a primary pointer. For instance, when there are multiple active pointers of a particular type, like a multi-touch interaction, and the primary pointer is removed (e.g. it leaves the screen), there may end up being no primary pointers. Also on platforms where the primary pointer is determined using all active pointers of the same type on the device (including those targeted at an application other than the user agent), if the first (primary) pointer is outside of the user agent and other (non-primary) pointers targeted inside the user agent, then the user agent may fire pointer events for the other pointers with a value of false for isPrimary.
Current operating systems and user agents don't usually have a concept of multiple mouse inputs. When more than one mouse device is present (for instance, on a laptop with both a trackpad and an external mouse), all mouse devices are generally treated as a single device — movements on any of the devices are translated to movement of a single mouse pointer, and there is no distinction between button presses on different mouse devices. For this reason, there will usually only be a single mouse pointer, and that pointer will be primary.
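The single-pointer pattern described above (ignoring non-primary pointers) can be sketched as follows; the helper name and the plain stand-in event objects are illustrative, not part of the specification:

```javascript
// Run a handler only for primary pointers, ignoring secondary touches etc.
function handleIfPrimary(event, handler) {
  if (!event.isPrimary) return false; // non-primary pointer: ignored
  handler(event);
  return true;
}

let strokes = 0;
const first = handleIfPrimary({ isPrimary: true }, () => strokes++);
const second = handleIfPrimary({ isPrimary: false }, () => strokes++);
```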

Firing events using the PointerEvent interface

To fire a pointer event named |e| means to [=fire an event=] named |e| using PointerEvent whose attributes are set as defined in the {{PointerEvent}} Interface and Attributes and Default Actions.

If the event is not a {{GlobalEventHandlers/gotpointercapture}}, {{GlobalEventHandlers/lostpointercapture}}, click, auxclick or contextmenu event, run the process pending pointer capture steps for this PointerEvent.

The target object at which the event is fired is determined as follows:

Let |targetDocument| be target's [=Node/node document=] [[DOM]].

If the event is {{GlobalEventHandlers/pointerdown}}, {{GlobalEventHandlers/pointermove}}, or {{GlobalEventHandlers/pointerup}}, set the active document for the event's pointerId to |targetDocument|.

If the event is {{GlobalEventHandlers/pointerdown}}, the associated device is a direct manipulation device, and the target is an {{Element}}, then set pointer capture for this pointerId to the target element as described in implicit pointer capture.

Before firing this event, the user agent SHOULD treat the target as if the pointing device has moved over it from the |previousTarget| for the purpose of ensuring event ordering [[UIEVENTS]]. If the |needsOverEvent| flag is set, a {{GlobalEventHandlers/pointerover}} event is needed even if the target element is the same.

Fire the event to the determined target.

Save the determined target as the |previousTarget| for the given pointer, and reset the |needsOverEvent| flag to false. If the |previousTarget| at any point will no longer be [=connected=] [[DOM]], update the |previousTarget| to the nearest still [=connected=] [[DOM]] parent following the event path corresponding to dispatching events to the |previousTarget|, and set the |needsOverEvent| flag to true.

Using the pointer capture target override as the target instead of the normal hit-test result may fire some boundary events, as defined by [[UIEVENTS]]. This is the same as the pointer leaving its previous target and entering this new capturing target. When the capture is released, the same scenario may happen, as the pointer is leaving the capturing target and entering the hit-test target.

Attributes and default actions

The bubbles and cancelable properties and the default actions for the event types defined in this specification appear in the following table. Details of each of these event types are provided in Pointer Event types.

Event Type | Bubbles | Cancelable | Default Action
{{GlobalEventHandlers/pointerover}} | Yes | Yes | None
{{GlobalEventHandlers/pointerenter}} | No | No | None
{{GlobalEventHandlers/pointerdown}} | Yes | Yes | Varies: when the pointer is primary, all default actions of the mousedown event. Canceling this event also prevents subsequent firing of compatibility mouse events.
{{GlobalEventHandlers/pointermove}} | Yes | Yes | Varies: when the pointer is primary, all default actions of mousemove
{{GlobalEventHandlers/pointerrawupdate}} | Yes | No | None
{{GlobalEventHandlers/pointerup}} | Yes | Yes | Varies: when the pointer is primary, all default actions of mouseup
{{GlobalEventHandlers/pointercancel}} | Yes | No | None
{{GlobalEventHandlers/pointerout}} | Yes | Yes | None
{{GlobalEventHandlers/pointerleave}} | No | No | None
{{GlobalEventHandlers/gotpointercapture}} | Yes | No | None
{{GlobalEventHandlers/lostpointercapture}} | Yes | No | None

Viewport manipulations (panning and zooming) — generally, as a result of a direct manipulation interaction — are intentionally NOT a default action of pointer events, meaning that these behaviors (e.g. panning a page as a result of moving a finger on a touchscreen) cannot be suppressed by canceling a pointer event. Authors must instead use touch-action to explicitly declare the direct manipulation behavior for a region of the document. Removing this dependency on the cancelation of events facilitates performance optimizations by the user agent.

For {{GlobalEventHandlers/pointerenter}} and {{GlobalEventHandlers/pointerleave}} events, the {{EventInit/composed}} [[DOM]] attribute SHOULD be false; for all other pointer events in the table above, the attribute SHOULD be true.

For all pointer events in the table above, the {{UIEvent/detail}} [[UIEVENTS]] attribute SHOULD be 0.

Many user agents expose non-standard attributes fromElement and toElement in MouseEvents to support legacy content. We encourage those user agents to set the values of those (inherited) attributes in PointerEvents to null to transition authors to the use of standardized alternates (i.e. target and relatedTarget).

Similar to MouseEvent {{MouseEventInit/relatedTarget}}, the relatedTarget should be initialized to the element whose bounds the pointer just left (in the case of a {{GlobalEventHandlers/pointerover}} or pointerenter event) or the element whose bounds the pointer is entering (in the case of a {{GlobalEventHandlers/pointerout}} or {{GlobalEventHandlers/pointerleave}}). For other pointer events, this value will default to null. Note that when an element receives the pointer capture, all the following events for that pointer are considered to be inside the boundary of the capturing element.

For {{GlobalEventHandlers/gotpointercapture}} and {{GlobalEventHandlers/lostpointercapture}} events, all the attributes except the ones defined in the table above should be the same as the Pointer Event that caused the user agent to run the process pending pointer capture steps and fire the {{GlobalEventHandlers/gotpointercapture}} and {{GlobalEventHandlers/lostpointercapture}} events.

Process pending pointer capture

The user agent MUST run the following steps when implicitly releasing pointer capture as well as when firing Pointer Events that are not {{GlobalEventHandlers/gotpointercapture}} or {{GlobalEventHandlers/lostpointercapture}}.

  1. If the pointer capture target override for this pointer is set and is not equal to the pending pointer capture target override, then fire a pointer event named {{GlobalEventHandlers/lostpointercapture}} at the pointer capture target override node.
  2. If the pending pointer capture target override for this pointer is set and is not equal to the pointer capture target override, then fire a pointer event named {{GlobalEventHandlers/gotpointercapture}} at the pending pointer capture target override.
  3. Set the pointer capture target override to the pending pointer capture target override, if set. Otherwise, clear the pointer capture target override.
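The three steps above can be sketched in script, as a non-normative illustration; the per-pointer state object and the fireEvent callback are stand-ins for the user agent's internal state and event dispatch:

```javascript
// Non-normative sketch of the "process pending pointer capture" steps.
function processPendingPointerCapture(state, fireEvent) {
  const { captureTarget, pendingCaptureTarget } = state;
  // step 1: the old capture target loses capture
  if (captureTarget && captureTarget !== pendingCaptureTarget) {
    fireEvent("lostpointercapture", captureTarget);
  }
  // step 2: the pending capture target gains capture
  if (pendingCaptureTarget && pendingCaptureTarget !== captureTarget) {
    fireEvent("gotpointercapture", pendingCaptureTarget);
  }
  // step 3: the pending override becomes the current override (or clears it)
  state.captureTarget = pendingCaptureTarget || null;
}

const fired = [];
const state = { captureTarget: "oldNode", pendingCaptureTarget: "newNode" };
processPendingPointerCapture(state, (type, target) => fired.push(type + "@" + target));
```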

As defined in the section for click, auxclick, and contextmenu events, even after the {{GlobalEventHandlers/lostpointercapture}} event has been dispatched, the corresponding click, auxclick or contextmenu event, if any, would still be dispatched to the capturing target.

Suppressing a pointer event stream

The user agent MUST suppress a pointer event stream when it detects that a pointer is unlikely to continue to produce events. Any of the following scenarios satisfy this condition (there MAY be additional scenarios):

  • The user agent has opened a modal dialog or menu.
  • A pointer input device is physically disconnected, or a hoverable pointer input device (e.g. a hoverable pen/stylus) has left the hover range detectable by the digitizer.
  • The pointer is subsequently used by the user agent to manipulate the page viewport (e.g. panning or zooming). See the section on the touch-action CSS property for details.
    User agents can trigger panning or zooming through multiple pointer types (such as touch and pen), and therefore the start of a pan or zoom action may result in the suppression of various pointers, including pointers with different pointer types.
  • As part of the drag operation initiation algorithm as defined in the drag and drop processing model [[HTML]], for the pointer that caused the drag operation.

Other scenarios in which the user agent MAY suppress a pointer event stream include:

  • A device's screen orientation is changed while a pointer is active.
  • The user attempts to interact using more simultaneous pointer inputs than the device supports.
  • The user agent interprets the input as accidental (for example, the hardware supports palm rejection).

Methods for detecting any of these scenarios are out of scope for this specification.

The user agent MUST run the following steps to suppress a pointer event stream:

  • Fire a {{GlobalEventHandlers/pointercancel}} event.
  • Fire a {{GlobalEventHandlers/pointerout}} event.
  • Fire a {{GlobalEventHandlers/pointerleave}} event.
  • Implicitly release the pointer capture if the pointer is currently captured.

Converting between tiltX/tiltY and altitudeAngle/azimuthAngle

Pointer Events include two complementary sets of attributes to express the orientation of a transducer relative to the X-Y plane: tiltX/tiltY (introduced in the original Pointer Events specification), and azimuthAngle/altitudeAngle (adopted from the Touch Events - Level 2 specification).

Depending on the specific hardware and platform, user agents will likely only receive one set of values for the transducer orientation relative to the screen plane — either tiltX/tiltY or altitudeAngle/azimuthAngle. User agents MUST use the following algorithm for converting these values.

When the user agent calculates tiltX/tiltY from azimuthAngle/altitudeAngle, it SHOULD round the final integer values using Math.round [[ECMASCRIPT]] rules.

/* Converting between tiltX/tiltY and altitudeAngle/azimuthAngle */

function spherical2tilt(altitudeAngle, azimuthAngle) {
  const radToDeg = 180 / Math.PI;

  let tiltXrad = 0;
  let tiltYrad = 0;

  if (altitudeAngle == 0) {
    // the pen is in the X-Y plane
    if (azimuthAngle == 0 || azimuthAngle == 2 * Math.PI) {
      // pen is on positive X axis
      tiltXrad = Math.PI / 2;
    }
    if (azimuthAngle == Math.PI / 2) {
      // pen is on positive Y axis
      tiltYrad = Math.PI / 2;
    }
    if (azimuthAngle == Math.PI) {
      // pen is on negative X axis
      tiltXrad = -Math.PI / 2;
    }
    if (azimuthAngle == 3 * Math.PI / 2) {
      // pen is on negative Y axis
      tiltYrad = -Math.PI / 2;
    }
    if (azimuthAngle > 0 && azimuthAngle < Math.PI / 2) {
      tiltXrad = Math.PI / 2;
      tiltYrad = Math.PI / 2;
    }
    if (azimuthAngle > Math.PI / 2 && azimuthAngle < Math.PI) {
      tiltXrad = -Math.PI / 2;
      tiltYrad = Math.PI / 2;
    }
    if (azimuthAngle > Math.PI && azimuthAngle < 3 * Math.PI / 2) {
      tiltXrad = -Math.PI / 2;
      tiltYrad = -Math.PI / 2;
    }
    if (azimuthAngle > 3 * Math.PI / 2 && azimuthAngle < 2 * Math.PI) {
      tiltXrad = Math.PI / 2;
      tiltYrad = -Math.PI / 2;
    }
  }

  if (altitudeAngle != 0) {
    const tanAlt = Math.tan(altitudeAngle);

    tiltXrad = Math.atan(Math.cos(azimuthAngle) / tanAlt);
    tiltYrad = Math.atan(Math.sin(azimuthAngle) / tanAlt);
  }

  return { "tiltX": tiltXrad * radToDeg, "tiltY": tiltYrad * radToDeg };
}

function tilt2spherical(tiltX, tiltY) {
  const tiltXrad = tiltX * Math.PI / 180;
  const tiltYrad = tiltY * Math.PI / 180;

  // calculate azimuth angle
  let azimuthAngle = 0;

  if (tiltX == 0) {
    if (tiltY > 0) {
      azimuthAngle = Math.PI / 2;
    } else if (tiltY < 0) {
      azimuthAngle = 3 * Math.PI / 2;
    }
  } else if (tiltY == 0) {
    if (tiltX < 0) {
      azimuthAngle = Math.PI;
    }
  } else if (Math.abs(tiltX) == 90 || Math.abs(tiltY) == 90) {
    // not enough information to calculate azimuth
    azimuthAngle = 0;
  } else {
    // Non-boundary case: neither tiltX nor tiltY is equal to 0 or +-90
    const tanX = Math.tan(tiltXrad);
    const tanY = Math.tan(tiltYrad);

    azimuthAngle = Math.atan2(tanY, tanX);
    if (azimuthAngle < 0) {
      azimuthAngle += 2 * Math.PI;
    }
  }

  // calculate altitude angle
  let altitudeAngle = 0;

  if (Math.abs(tiltX) == 90 || Math.abs(tiltY) == 90) {
    altitudeAngle = 0;
  } else if (tiltX == 0) {
    altitudeAngle = Math.PI / 2 - Math.abs(tiltYrad);
  } else if (tiltY == 0) {
    altitudeAngle = Math.PI / 2 - Math.abs(tiltXrad);
  } else {
    // Non-boundary case: neither tiltX nor tiltY is equal to 0 or +-90
    altitudeAngle = Math.atan(1.0 / Math.sqrt(Math.pow(Math.tan(tiltXrad), 2) + Math.pow(Math.tan(tiltYrad), 2)));
  }

  return { "altitudeAngle": altitudeAngle, "azimuthAngle": azimuthAngle };
}
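As an informal sanity check (not part of the specification's algorithms), the non-boundary branches of the two functions above are inverses of one another. The following self-contained sketch verifies the underlying identities — tan(tiltX) = cos(azimuth)/tan(altitude) and tan(tiltY) = sin(azimuth)/tan(altitude) — for one arbitrary example angle pair:

```javascript
// Informal round-trip check of the non-boundary conversion math used above.
// altitude = 60 degrees and azimuth = 45 degrees are arbitrary example angles.
const altitude = Math.PI / 3;
const azimuth = Math.PI / 4;

// Forward direction (spherical2tilt's non-boundary branch).
const tiltXrad = Math.atan(Math.cos(azimuth) / Math.tan(altitude));
const tiltYrad = Math.atan(Math.sin(azimuth) / Math.tan(altitude));

// Reverse direction (tilt2spherical's non-boundary branches).
const azimuthBack = Math.atan2(Math.tan(tiltYrad), Math.tan(tiltXrad));
const altitudeBack = Math.atan(
  1 / Math.sqrt(Math.tan(tiltXrad) ** 2 + Math.tan(tiltYrad) ** 2));

console.log(Math.abs(azimuthBack - azimuth) < 1e-12);   // true
console.log(Math.abs(altitudeBack - altitude) < 1e-12); // true
```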

Pointer Event types

Below are the event types defined in this specification.

In the case of the primary pointer, these events (with the exception of {{GlobalEventHandlers/gotpointercapture}} and {{GlobalEventHandlers/lostpointercapture}}) may also fire compatibility mouse events.

The pointerover event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerover}} when a pointing device is moved into the hit test boundaries of an element. Note that setPointerCapture() or releasePointerCapture() might have changed the hit test target. Also note that while a pointer is captured it is considered to be always inside the boundaries of the capturing element for the purpose of firing boundary events. The user agent MUST also fire this event prior to firing a {{GlobalEventHandlers/pointerdown}} event for devices that do not support hover (see {{GlobalEventHandlers/pointerdown}}).

The pointerenter event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerenter}} when a pointing device is moved into the hit test boundaries of an element or one of its descendants, including as a result of a {{GlobalEventHandlers/pointerdown}} event from a device that does not support hover (see {{GlobalEventHandlers/pointerdown}}). Note that setPointerCapture() or releasePointerCapture() might have changed the hit test target. Also note that while a pointer is captured it is considered to be always inside the boundaries of the capturing element for the purpose of firing boundary events. This event type is similar to {{GlobalEventHandlers/pointerover}}, but differs in that it does not bubble.

There are similarities between this event type, the mouseenter event described in [[UIEVENTS]], and the CSS :hover pseudo-class described in [[CSS21]]. See also the {{GlobalEventHandlers/pointerleave}} event.

The pointerdown event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerdown}} when a pointer enters the active buttons state. For mouse, this is when the device transitions from no buttons depressed to at least one button depressed. For touch, this is when physical contact is made with the digitizer. For pen, this is when the pen either makes physical contact with the digitizer without any button depressed, or transitions from no buttons depressed to at least one button depressed while hovering.

For mouse (or other multi-button pointer devices), this means {{GlobalEventHandlers/pointerdown}} and {{GlobalEventHandlers/pointerup}} are not fired for all of the same circumstances as mousedown and mouseup. See chorded buttons for more information.

For input devices that do not support hover, the user agent MUST also fire a pointer event named {{GlobalEventHandlers/pointerover}} followed by a pointer event named {{GlobalEventHandlers/pointerenter}} prior to dispatching the {{GlobalEventHandlers/pointerdown}} event.

Authors can prevent the firing of certain compatibility mouse events by canceling the {{GlobalEventHandlers/pointerdown}} event (if the isPrimary property is true). This sets the PREVENT MOUSE EVENT flag on the pointer. Note, however, that this does not prevent the mouseover, mouseenter, mouseout, or mouseleave events from firing.
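The flag-setting behavior above can be sketched as follows (a hedged, illustrative example; the listener wiring is an assumption, not part of the specification):

```javascript
// Illustrative sketch: canceling pointerdown for the primary pointer sets the
// PREVENT MOUSE EVENT flag, so the user agent skips the compatibility
// mousedown/mousemove/mouseup events for that pointer. Note that mouseover,
// mouseenter, mouseout, and mouseleave still fire.
function suppressCompatibilityMouseEvents(event) {
  if (event.isPrimary) {
    event.preventDefault();
    return true;  // compatibility mouse events suppressed for this pointer
  }
  return false;   // non-primary pointers do not fire compatibility mouse events
}

// Browser wiring (not run here):
// element.addEventListener("pointerdown", suppressCompatibilityMouseEvents);
```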

The pointermove event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointermove}} when a pointer changes any properties that don't fire {{GlobalEventHandlers/pointerdown}} or {{GlobalEventHandlers/pointerup}} events. This includes any changes to coordinates, pressure, tangential pressure, tilt, twist, contact geometry (i.e. width and height), or chorded buttons.

User agents MAY delay dispatch of the {{GlobalEventHandlers/pointermove}} event (for instance, for performance reasons). The coalesced events information will be exposed via the getCoalescedEvents() method for the single dispatched {{GlobalEventHandlers/pointermove}} event. The final coordinates of such events should be used for finding the target of the event.
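For example, an author who needs every intermediate sample (e.g. for ink rendering) can read the coalesced events of the single dispatched pointermove. A hedged sketch, with the fallback behavior as an assumption:

```javascript
// Sketch: when pointermove dispatch is delayed, intermediate samples are
// available via getCoalescedEvents(). This helper collects every coalesced
// position, falling back to the dispatched event itself when the list is
// empty or the method is unavailable.
function coalescedPoints(event) {
  const coalesced = typeof event.getCoalescedEvents === "function"
    ? event.getCoalescedEvents()
    : [];
  const source = coalesced.length > 0 ? coalesced : [event];
  return source.map(e => ({ x: e.clientX, y: e.clientY }));
}
```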

The pointerrawupdate event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerrawupdate}}, and only do so within a [=secure context=], when a pointer changes any properties that don't fire pointerdown or pointerup events. See the pointermove event for a list of such properties.

In contrast with {{GlobalEventHandlers/pointermove}}, user agents SHOULD dispatch {{GlobalEventHandlers/pointerrawupdate}} events as soon as possible and as frequently as the JavaScript can handle the events.

The target of {{GlobalEventHandlers/pointerrawupdate}} events might be different from that of the {{GlobalEventHandlers/pointermove}} events, due to the fact that {{GlobalEventHandlers/pointermove}} events might get delayed or coalesced, and the final position of the event which is used for finding the target could be different from its coalesced events.

Note that if there is already another {{GlobalEventHandlers/pointerrawupdate}} with the same pointerId that hasn't been dispatched in the [=event loop=], the user agent MAY coalesce the new {{GlobalEventHandlers/pointerrawupdate}} with that event instead of creating a new [=task=]. This may cause {{GlobalEventHandlers/pointerrawupdate}} to have coalesced events, and they will all be delivered as coalesced events of one {{GlobalEventHandlers/pointerrawupdate}} event as soon as the event is processed in the [=event loop=]. See getCoalescedEvents() for more information.

In terms of ordering of {{GlobalEventHandlers/pointerrawupdate}} and {{GlobalEventHandlers/pointermove}}, if the user agent received an update from the platform that causes both {{GlobalEventHandlers/pointerrawupdate}} and {{GlobalEventHandlers/pointermove}} events, then the user agent MUST dispatch the {{GlobalEventHandlers/pointerrawupdate}} event before the corresponding {{GlobalEventHandlers/pointermove}}.

Other than the target, the concatenation of the coalesced events lists of all dispatched {{GlobalEventHandlers/pointerrawupdate}} events since the last {{GlobalEventHandlers/pointermove}} event is the same as the coalesced events of the next {{GlobalEventHandlers/pointermove}} event in terms of the other event attributes. The attributes of {{GlobalEventHandlers/pointerrawupdate}} are mostly the same as {{GlobalEventHandlers/pointermove}}, with the exception of cancelable, which MUST be false for {{GlobalEventHandlers/pointerrawupdate}}.

User agents SHOULD NOT fire compatibility mouse events for {{GlobalEventHandlers/pointerrawupdate}}.

Adding listeners for the {{GlobalEventHandlers/pointerrawupdate}} event might negatively impact the performance of the web page, depending on the implementation of the user agent. For most use cases the other pointer event types should suffice. A {{GlobalEventHandlers/pointerrawupdate}} listener should only be added if the JavaScript needs high frequency events and can handle them just as fast. In these cases, there is probably no need to listen to other types of pointer events.
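Because the event is limited to secure contexts and may be absent, an author can feature-detect it via its event handler IDL attribute before relying on it. A hedged sketch (the fallback choice is an assumption):

```javascript
// Sketch: pick pointerrawupdate when the user agent exposes it (via the
// onpointerrawupdate event handler IDL attribute), otherwise fall back to
// the coarser pointermove event.
function rawUpdateEventName(globalScope) {
  return "onpointerrawupdate" in globalScope ? "pointerrawupdate" : "pointermove";
}

// Browser wiring (not run here):
// target.addEventListener(rawUpdateEventName(window), handleHighFrequencyInput);
```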

The pointerup event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerup}} when a pointer leaves the active buttons state. For mouse, this is when the device transitions from at least one button depressed to no buttons depressed. For touch, this is when physical contact is removed from the digitizer. For pen, this is when the pen is removed from physical contact with the digitizer while no button is depressed, or transitions from at least one button depressed to no buttons depressed while hovering.

For input devices that do not support hover, the user agent MUST also fire a pointer event named {{GlobalEventHandlers/pointerout}} followed by a pointer event named {{GlobalEventHandlers/pointerleave}} after dispatching the {{GlobalEventHandlers/pointerup}} event.

All {{GlobalEventHandlers/pointerup}} events have a pressure value of 0.

The user agent MUST also implicitly release the pointer capture if the pointer is currently captured.

For mouse (or other multi-button pointer devices), this means {{GlobalEventHandlers/pointerdown}} and {{GlobalEventHandlers/pointerup}} are not fired for all of the same circumstances as mousedown and mouseup. See chorded buttons for more information.

The pointercancel event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointercancel}} when it detects a scenario to suppress a pointer event stream.

The values of the following properties of the {{GlobalEventHandlers/pointercancel}} event MUST match the values of the last dispatched pointer event with the same pointerId: width, height, pressure, tangentialPressure, tiltX, tiltY, twist, altitudeAngle, azimuthAngle, pointerType, isPrimary, and the coordinates inherited from [[UIEVENTS]]. The coalescedEvents and predictedEvents lists in the {{GlobalEventHandlers/pointercancel}} event MUST be empty, and the event's {{Event/cancelable}} attribute MUST be false.

The pointerout event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerout}} when any of the following occurs:

  • The pointing device is moved out of the hit test boundaries of an element. Note that setPointerCapture() or releasePointerCapture() might have changed the hit test target, and while a pointer is captured it is considered to be always inside the boundaries of the capturing element for the purpose of firing boundary events.
  • After firing the {{GlobalEventHandlers/pointerup}} event for a device that does not support hover (see {{GlobalEventHandlers/pointerup}}).
  • The user agent has detected a scenario to suppress a pointer event stream.

The pointerleave event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/pointerleave}} when any of the following occurs:

  • The pointing device is moved out of the hit test boundaries of an element and all of its descendants. Note that setPointerCapture() or releasePointerCapture() might have changed the hit test target, and while a pointer is captured it is considered to be always inside the boundaries of the capturing element for the purpose of firing boundary events.
  • After firing the {{GlobalEventHandlers/pointerup}} event for a device that does not support hover (see {{GlobalEventHandlers/pointerup}}).
  • The user agent has detected a scenario to suppress a pointer event stream.

This event type is similar to {{GlobalEventHandlers/pointerout}}, but differs in that it does not bubble and that it MUST NOT be fired until the pointing device has left the boundaries of the element and the boundaries of all of its descendants.

There are similarities between this event type, the mouseleave event described in [[UIEVENTS]], and the CSS :hover pseudo-class described in [[CSS21]]. See also the pointerenter event.

The gotpointercapture event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/gotpointercapture}} when an element receives pointer capture. This event is fired at the element that is receiving pointer capture. Subsequent events for that pointer will be fired at this element. See the setting pointer capture and process pending pointer capture sections.

The lostpointercapture event

The user agent MUST fire a pointer event named {{GlobalEventHandlers/lostpointercapture}} after pointer capture is released for a pointer. This event MUST be fired prior to any subsequent events for the pointer after capture was released. This event is fired at the element from which pointer capture was removed. All subsequent events for the pointer except click, auxclick, and contextmenu events follow normal hit testing mechanisms (out of scope for this specification) for determining the event target. See the releasing pointer capture, implicit release of pointer capture, and process pending pointer capture sections.

The click, auxclick, and contextmenu events

This section is an addition to the click, auxclick, and contextmenu events defined in [[UIEVENTS]]. These events are typically tied to user interface activation, and are fired even from non-pointer input devices, such as keyboards.

These events MUST be of type PointerEvent, and are subject to the additional requirements mentioned in the rest of this section.

Event attributes

For these events, all PointerEvent specific attributes (defined in this spec) other than pointerId and pointerType MUST have their default values. In addition:

  • If the events are generated by a pointing device, their pointerId and pointerType MUST be the same as those of the pointer events that caused these events.
  • If the events are generated by a non-pointing device (such as voice recognition software or a keyboard interaction), pointerId MUST be -1 and pointerType MUST be an empty string.
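The two attribute rules above give authors a simple way to tell how an activation event was produced. A hedged sketch:

```javascript
// Sketch of the attribute rules above: a pointerId of -1 together with an
// empty pointerType marks a click/auxclick/contextmenu event produced by a
// non-pointing device (e.g. keyboard activation or voice recognition).
function clickSource(event) {
  if (event.pointerId === -1 && event.pointerType === "") {
    return "non-pointer";
  }
  return event.pointerType; // "mouse", "pen", "touch", ...
}
```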

Event coordinates

As noted in {{PointerEvent}}, [[[CSSOM-VIEW]]] proposes to redefine the various coordinate properties (screenX, screenY, pageX, pageY, clientX, clientY, x, y, offsetX, offsetY) as double, to allow for fractional coordinates. However, this change — when applied only to {{PointerEvent}}, but not to regular {{MouseEvent}} — has proven to lead to web compatibility issues with legacy code in the case of click, auxclick, and contextmenu. For this reason, user agents that have implemented the proposed change in [[[CSSOM-VIEW]]] only for {{PointerEvent}} MUST convert the various coordinate properties of the click, auxclick, and contextmenu events to long values (as defined in the original [[[UIEVENTS]]]) using Math.floor [[ECMASCRIPT]].
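The required conversion amounts to flooring each fractional coordinate. A minimal sketch:

```javascript
// Sketch of the required conversion: a user agent exposing fractional
// (double) coordinates only on PointerEvent must floor the coordinates of
// click, auxclick, and contextmenu events back to long values.
function toLegacyCoordinate(value) {
  return Math.floor(value);
}

console.log(toLegacyCoordinate(10.75)); // 10
console.log(toLegacyCoordinate(-0.5));  // -1 (flooring, not truncation toward zero)
```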

Event dispatch

A click, auxclick, or contextmenu event MUST follow the dispatch process defined in the [[UIEVENTS]] spec, except that the event target is overridden using the algorithm below:

  1. Let |event| be the click, auxclick, or contextmenu event being dispatched, and |userEvent| be the user interaction event that caused the firing of |event|.

    Event |userEvent| could be a non-PointerEvent; for example, it is a KeyboardEvent when a click event dispatch is caused by hitting the spacebar on a checkbox element.

    When |userEvent| is a PointerEvent, |userEvent| is a {{GlobalEventHandlers/pointerup}} for a click or auxclick event, and either a {{GlobalEventHandlers/pointerdown}} or a {{GlobalEventHandlers/pointerup}} event (depending on native platform convention) for a contextmenu event.

  2. If |userEvent| is not a PointerEvent, dispatch |event| following the [[UIEVENTS]] spec without overriding |event| target and skip the remaining steps below.
  3. Define |target| as follows:

    If |event| is a contextmenu event, or |userEvent| was dispatched while the corresponding pointer was captured, then let |target| be the target of |userEvent|.

    Otherwise (|event| is a click or auxclick event for which |userEvent| is a pointerup event that was dispatched uncaptured), let |target| be the nearest common inclusive ancestor of the corresponding pointerdown and pointerup targets in the DOM at the moment |event| is being dispatched.

  4. Dispatch |event| to |target| following the [[UIEVENTS]] spec.

    If |userEvent| was captured, |event| is dispatched to the capturing target of |userEvent| even though the {{GlobalEventHandlers/lostpointercapture}} event with the same pointerId has been dispatched already.

Extensions to the `Element` interface

The following section describes extensions to the existing {{Element}} interface to facilitate the setting and releasing of pointer capture.

partial interface Element {
  undefined setPointerCapture(long pointerId);
  undefined releasePointerCapture(long pointerId);
  boolean hasPointerCapture(long pointerId);
};
setPointerCapture()

Set pointer capture for the pointer identified by the argument pointerId to the element on which this method is invoked. For subsequent events of the pointer, the capturing target will substitute the normal hit testing result as if the pointer is always over the capturing target, and the events MUST always be targeted at this element until capture is released. The pointer MUST be in its active buttons state for this method to be effective; otherwise it fails silently. When the provided method's argument does not match any of the active pointers, [=exception/throw=] a {{ "NotFoundError" }} {{DOMException}}.

releasePointerCapture()

Release pointer capture for the pointer identified by the argument pointerId from the element on which this method is invoked. Subsequent events for the pointer follow normal hit testing mechanisms (out of scope for this specification) for determining the event target. When the provided method's argument does not match any of the active pointers, [=exception/throw=] a {{ "NotFoundError" }} {{DOMException}}.

hasPointerCapture()

Indicates whether the element on which this method is invoked has pointer capture for the pointer identified by the argument pointerId. In particular, returns true if the pending pointer capture target override for pointerId is set to the element on which this method is invoked, and false otherwise.

This method will return true immediately after a call to setPointerCapture(), even though that element will not yet have received a {{GlobalEventHandlers/gotpointercapture}} event. As a result it can be useful for detecting implicit pointer capture from inside of a {{GlobalEventHandlers/pointerdown}} event listener.
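A hedged sketch of that detection pattern (the listener wiring is illustrative and shown in comments):

```javascript
// Sketch: because hasPointerCapture() reflects the pending pointer capture
// target override, it already returns true inside a pointerdown listener
// when the pointer was implicitly captured, before gotpointercapture fires.
function wasImplicitlyCaptured(element, event) {
  return element.hasPointerCapture(event.pointerId);
}

// Browser wiring (not run here):
// element.addEventListener("pointerdown", e => {
//   if (wasImplicitlyCaptured(element, e)) { /* e.g. a captured touch pointer */ }
// });
```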

Extensions to the `GlobalEventHandlers` mixin

The following section describes extensions to the existing {{GlobalEventHandlers}} mixin to facilitate the event handler registration.

partial interface mixin GlobalEventHandlers {
  attribute EventHandler onpointerover;
  attribute EventHandler onpointerenter;
  attribute EventHandler onpointerdown;
  attribute EventHandler onpointermove;
  [SecureContext] attribute EventHandler onpointerrawupdate;
  attribute EventHandler onpointerup;
  attribute EventHandler onpointercancel;
  attribute EventHandler onpointerout;
  attribute EventHandler onpointerleave;
  attribute EventHandler ongotpointercapture;
  attribute EventHandler onlostpointercapture;
};
onpointerover
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointerover}} event type.
onpointerenter
The [=event handler IDL attribute=] for the pointerenter event type.
onpointerdown
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointerdown}} event type.
onpointermove
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointermove}} event type.
onpointerrawupdate
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointerrawupdate}} event type.
onpointerup
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointerup}} event type.
onpointercancel
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointercancel}} event type.
onpointerout
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointerout}} event type.
onpointerleave
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/pointerleave}} event type.
ongotpointercapture
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/gotpointercapture}} event type.
onlostpointercapture
The [=event handler IDL attribute=] for the {{GlobalEventHandlers/lostpointercapture}} event type.

Extensions to the `Navigator` interface

The {{Navigator}} interface is defined in [[HTML]]. This specification extends the Navigator interface to provide device detection support.

partial interface Navigator {
  readonly attribute long maxTouchPoints;
};
maxTouchPoints

The maximum number of simultaneous touch contacts supported by the device. In the case of devices with multiple digitizers (e.g. multiple touchscreens), the value MUST be the maximum of the set of maximum supported contacts by each individual digitizer.

For example, suppose a device has 3 touchscreens, which support 2, 5, and 10 simultaneous touch contacts, respectively. The value of maxTouchPoints should be 10.

While a maxTouchPoints value greater than 0 indicates the user's device is capable of supporting touch input, it does not necessarily mean the user will use touch input. Authors should be careful to also consider other input modalities that could be present on the system, such as mouse, pen, screen readers, etc.

maxTouchPoints is often used to ensure that the interaction model of the content can be recognized by the current hardware. UI affordances can be provided to users with less capable hardware. On platforms where the precise number of touch points is not known, the minimum number guaranteed to be recognized is provided. Therefore, it is possible for the number of recognized touch points to exceed the value of maxTouchPoints.
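The capability-hint usage described above can be sketched as follows (the category names are illustrative, not part of the specification):

```javascript
// Sketch: treat maxTouchPoints purely as a capability hint when choosing UI
// affordances; a non-zero value does not mean the user will use touch input.
function touchCapability(navigatorLike) {
  const points = navigatorLike.maxTouchPoints || 0;
  if (points === 0) return "none";
  if (points === 1) return "single-touch";
  return "multi-touch";
}

// Browser usage (not run here): touchCapability(navigator)
```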

Declaring direct manipulation behavior

As noted in Attributes and Default Actions, viewport manipulations (panning and zooming) cannot be suppressed by canceling a pointer event. Instead, authors must declaratively define which of these behaviors they want to allow, and which they want to suppress, using the touch-action CSS property.

While the issue of pointers used to manipulate the viewport is generally limited to touch input (where a user's finger can both interact with content and pan/zoom the page), certain user agents may also allow the same types of (direct or indirect) manipulation for other pointer types. For instance, on mobile/tablet devices, users may also be able to scroll using a stylus. While, for historical reasons, the touch-action CSS property defined in this specification appears to refer only to touch inputs, it does in fact apply to all forms of pointer inputs that allow direct manipulation for panning and zooming.

The touch-action CSS property

Name: touch-action
Value: auto | none | [ [ pan-x | pan-left | pan-right ] || [ pan-y | pan-up | pan-down ] ] | manipulation
Initial: auto
Applies to: all elements except: non-replaced inline elements, table rows, row groups, table columns, and column groups.
Inherited: no
Percentages: N/A
Media: visual
Computed value: Same as specified value.

The touch-action CSS property determines whether direct manipulation interactions (which are not limited to touch, despite the property's name) MAY trigger the user agent's panning and zooming behavior. See the section on touch-action values.

Right before starting to pan or zoom, the user agent MUST suppress a pointer event stream if all of the following conditions are true:

Some user agents implement complex gestures for behaviors that involve a series of separate discrete gestures, but which are all treated as part of a single continuous gesture. For example, consider a "fling to scroll" gesture on a touchscreen: a user starts panning the document with a rapid finger movement, lifts the finger from the touchscreen, and the document continues panning with simulated inertia. While the document is still moving, the user may place their finger on the touchscreen and execute another "fling" to provide further momentum for the panning, or counteract the current panning to slow it down, stop panning altogether, or reverse the direction of the panning. As this specification does not normatively define how gestures and behaviors are implemented, it is left up to the user agent to decide whether or not the second touch (before it is interpreted as a second "fling" or counteraction of the current panning) fires pointer events or not.
touch-action does not apply/cascade through to embedded browsing contexts. For instance, even applying touch-action to an <iframe> won't have any effect on the behavior of direct manipulation interactions for panning and zooming within the <iframe> itself.

Determining supported direct manipulation behavior

When a user interacts with an element using a direct manipulation pointer (such as touch or stylus on a touchscreen), the effect of that input is determined by the value of the touch-action property, and the default direct manipulation behaviors of the element and its ancestors, as follows:

Some user agents support panning and zooming interactions involving multiple concurrent pointers (e.g. multi-touch). Methods for processing or associating the touch-action values of multiple concurrent pointers are out of scope for this specification.

Details of touch-action values

The touch-action property covers direct manipulation behaviors related to viewport panning and zooming. Any additional user agent behaviors, such as text selection/highlighting, or activating links and form controls, MUST NOT be affected by this CSS property.

The terms "panning" and "scrolling" are considered synonymous (or, more aptly, "panning" is "scrolling" using a direct manipulation input). Defining an interaction or gesture for triggering panning/scrolling, or for triggering behavior for the auto or none values, is out of scope for this specification.
auto
The user agent MAY consider any permitted direct manipulation behaviors related to panning and zooming of the viewport that begin on the element.
none
Direct manipulation interactions that begin on the element MUST NOT trigger behaviors related to viewport panning and zooming.
pan-x
pan-left
pan-right
pan-y
pan-up
pan-down
The user agent MAY consider direct manipulation interactions that begin on the element only for the purposes of panning that starts in any of the directions specified by all of the listed values. Once panning has started, the direction may be reversed by the user even if panning that starts in the reversed direction is disallowed. In contrast, when panning is restricted to a single axis (e.g. pan-y), the axis cannot be changed during panning.
manipulation
The user agent MAY consider direct manipulation interactions that begin on the element only for the purposes of panning and continuous zooming (such as pinch-zoom), but MUST NOT trigger other related behaviors that rely on multiple activations that must happen within a set period of time (such as double-tap to zoom, or double-tap and hold for single-finger zoom).
Additional touch-action values common in implementations are defined in [[COMPAT]].
The touch-action property only applies to elements that support both the CSS width and height properties (see [[CSS21]]). This restriction is designed to facilitate user agent optimizations for low-latency direct manipulation panning and zooming. For elements not supported by default, such as <span> which is a non-replaced inline element, authors can set the display CSS property to a value, such as block, that supports width and height. Future specifications could extend this API to all elements.

The direction-specific pan values are useful for customizing some overscroll behaviors. For example, to implement a simple pull-to-refresh effect, the document's touch-action can be set to pan-x pan-down whenever the scroll position is 0 and pan-x pan-y otherwise. This allows pointer event handlers to define the behavior for upward panning/scrolling that starts from the top of the document.

The direction-specific pan values can also be used for composing a component that implements custom panning with pointer event handling within an element that scrolls natively (or vice-versa). For example, an image carousel may use pan-y to ensure it receives pointer events for any horizontal pan operations without interfering with vertical panning of the document. When the carousel reaches its right-most extent, it may change its touch-action to pan-y pan-right so that a subsequent scroll operation beyond its extent can scroll the document within the viewport if possible. It is not possible to change the behavior of a panning/scrolling operation while it is taking place.

Disabling some default direct manipulation behaviors for panning and zooming may allow user agents to respond to other behaviors more quickly. For example, with auto user agents typically add 300ms of delay before click to allow for double-tap gestures to be handled. In these cases, explicitly setting touch-action: none or touch-action: manipulation will remove this delay. Note that the methods for determining a tap or double-tap gesture are out of scope for this specification.
<div style="touch-action: none;">
This element receives pointer events for all direct manipulation interactions that otherwise lead to panning or zooming.
</div>
<div style="touch-action: pan-x;">
This element receives pointer events when not panning in the horizontal direction.
</div>
<div style="overflow: auto;">
<div style="touch-action: none;">
This element receives pointer events for all direct manipulation interactions that otherwise lead to panning or zooming.
</div>
<div>
Direct manipulation interactions on this element MAY be consumed for manipulating the parent.
</div>
</div>
<div style="overflow: auto;">
<div style="touch-action: pan-y;">
<div style="touch-action: pan-x;">
This element receives pointer events for all direct manipulation interactions because
it allows only horizontal panning yet an intermediate ancestor
(between it and the scrollable element) only allows vertical panning.
Therefore, no direct manipulation behaviors for panning/zooming are
handled by the user agent.
</div>
</div>
</div>
<div style="overflow: auto;">
<div style="touch-action: pan-y pan-left;">
<div style="touch-action: pan-x;">
This element receives pointer events when not panning to the left.
</div>
</div>
</div>
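The pull-to-refresh pattern described above can be sketched as follows. This is a minimal illustration, not part of this specification; the element wiring and function names are hypothetical.

```javascript
// Map the current scroll position to the touch-action value described in
// the text: widen to pan-down only while the scroller is at the very top,
// so that upward pans starting from the top reach pointer event handlers.
function touchActionForScrollTop(scrollTop) {
  return scrollTop === 0 ? "pan-x pan-down" : "pan-x pan-y";
}

// Hypothetical wiring: keep the scroller's touch-action in sync as it scrolls.
function initPullToRefresh(scroller) {
  const update = () => {
    scroller.style.touchAction = touchActionForScrollTop(scroller.scrollTop);
  };
  scroller.addEventListener("scroll", update);
  update();
}
```

Note that, as stated above, the new touch-action value only affects panning operations that start after the change, not one already in progress.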

Pointer capture

Introduction

Pointer capture allows the events for a particular pointer (including any compatibility mouse events) to be retargeted to a particular element other than the normal hit test result of the pointer's location. This is useful in scenarios like a custom slider control (e.g. similar to the [[HTML]] <input type="range"> control). Pointer capture can be set on the slider thumb element, allowing the user to slide the control back and forth even if the pointer slides off of the thumb.

Custom Volume Slider
Example of a custom slider control that chooses a value by sliding the thumb element back and forth. After {{GlobalEventHandlers/pointerdown}} on the thumb, pointer capture can be used to allow the user to slide the thumb even if the pointer drifts off of it.

Setting pointer capture

Pointer capture is set on an |element| of type {{Element}} by calling the element.setPointerCapture(pointerId) method. When this method is invoked, the user agent MUST run the following steps:

  1. If the pointerId provided as the method's argument does not match any of the active pointers, then [=exception/throw=] a {{"NotFoundError"}} {{DOMException}}.
  2. Let the |pointer| be the active pointer specified by the given pointerId.
  3. If the |element| is not [=connected=] [[DOM]], [=exception/throw=] an {{"InvalidStateError"}} {{DOMException}}.
  4. If this method is invoked while the |element|'s [=Node/node document=] [[DOM]] has a locked element ([[PointerLock]] {{DocumentOrShadowRoot/pointerLockElement}}), [=exception/throw=] an {{"InvalidStateError"}} {{DOMException}}.
  5. If the |pointer| is not in the active buttons state or the |element|'s [=Node/node document=] is not the active document of the |pointer|, then terminate these steps.
  6. For the specified pointerId, set the pending pointer capture target override to the {{Element}} on which this method was invoked.

Releasing pointer capture

Pointer capture is released on an element explicitly by calling the element.releasePointerCapture(pointerId) method. When this method is called, the user agent MUST run the following steps:

  1. If the pointerId provided as the method's argument does not match any of the active pointers and these steps are not being invoked as a result of the implicit release of pointer capture, then [=exception/throw=] a {{"NotFoundError"}} {{DOMException}}.
  2. If hasPointerCapture is false for the {{Element}} with the specified pointerId, then terminate these steps.
  3. For the specified pointerId, clear the pending pointer capture target override, if set.
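The slider scenario from the introduction can be sketched with the two methods above. This is an illustrative sketch only; the element references, callback, and value range are hypothetical.

```javascript
// Pure helper: map a pointer's clientX to a slider value, clamped to the track.
function positionToValue(x, trackLeft, trackWidth, min, max) {
  const fraction = Math.min(Math.max((x - trackLeft) / trackWidth, 0), 1);
  return min + fraction * (max - min);
}

// Hypothetical wiring for a custom slider thumb. Capturing the pointer on
// pointerdown keeps pointermove events retargeted to the thumb even when
// the pointer drifts off of it.
function initSliderThumb(thumb, track, onValue) {
  thumb.addEventListener("pointerdown", (e) => {
    // Throws "NotFoundError" if pointerId does not match an active pointer.
    thumb.setPointerCapture(e.pointerId);
  });
  thumb.addEventListener("pointermove", (e) => {
    if (thumb.hasPointerCapture(e.pointerId)) {
      const rect = track.getBoundingClientRect();
      onValue(positionToValue(e.clientX, rect.left, rect.width, 0, 100));
    }
  });
  thumb.addEventListener("pointerup", (e) => {
    thumb.releasePointerCapture(e.pointerId);
  });
}
```

The explicit releasePointerCapture() call is shown for clarity; as described below, capture is also released implicitly after pointerup or pointercancel.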

Implicit pointer capture

Inputs that implement direct manipulation interactions for panning and zooming (such as touch or stylus on a touchscreen) SHOULD behave exactly as if setPointerCapture() had been called on the target element just before the invocation of any {{GlobalEventHandlers/pointerdown}} listeners. The hasPointerCapture API may be used (e.g. within any {{GlobalEventHandlers/pointerdown}} listener) to determine whether this has occurred. If releasePointerCapture() is not called for the pointer before the next pointer event is fired, then a {{GlobalEventHandlers/gotpointercapture}} event will be dispatched to the target (as normal) indicating that capture is active.

This is a breaking change from [[PointerEvents]], but does not impact the vast majority of existing content. In addition to matching typical platform UX conventions, this design for implicit capture enables user agents to make a performance optimization which prevents the need to invoke hit-testing on touch movement events without explicit developer opt-in (consistent with the performance properties of existing dominant native and web APIs for touch input).
In addition, user agents may implement implicit pointer capture behavior for all input devices on specific UI widgets such as input range controls (allowing some finger movement to stray outside of the form control itself during the interaction).
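An author who needs hit-testing to retarget subsequent events (rather than keep them on the pointerdown target) can detect and release the implicit capture inside the pointerdown listener, as the text above describes. A minimal sketch, with a hypothetical element reference:

```javascript
// Decide whether to undo implicit capture: only touch-like input receives
// it, and only if capture was actually set before the listener ran.
function shouldReleaseImplicitCapture(pointerType, hasCapture) {
  return pointerType === "touch" && hasCapture;
}

// Hypothetical wiring: release implicit capture so pointermove events for
// this pointer are targeted via normal hit testing instead.
function initTarget(el) {
  el.addEventListener("pointerdown", (e) => {
    if (shouldReleaseImplicitCapture(e.pointerType, el.hasPointerCapture(e.pointerId))) {
      el.releasePointerCapture(e.pointerId);
    }
  });
}
```

Because capture is released before the next pointer event fires, no gotpointercapture event is dispatched in this case.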

Implicit release of pointer capture

Immediately after firing the {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointercancel}} events, the user agent MUST clear the pending pointer capture target override for the pointerId of the {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointercancel}} event that was just dispatched, and then run the process pending pointer capture steps to fire {{GlobalEventHandlers/lostpointercapture}} if necessary. After running the process pending pointer capture steps, if the pointer supports hover, the user agent MUST also send corresponding boundary events necessary to reflect the current position of the pointer with no capture.

When the pointer capture target override is no longer [=connected=] [[DOM]], the pointer capture target override SHOULD be set to the document.

When the pending pointer capture target override is no longer [=connected=] [[DOM]], the pending pointer capture target override node SHOULD be cleared.

The previous two paragraphs result in a {{GlobalEventHandlers/lostpointercapture}} event corresponding to the captured pointer being fired at the document during the next process pending pointer capture steps after the capture node is removed.

When a pointer lock [[PointerLock]] is successfully applied on an element, the user agent MUST run the steps as if the releasePointerCapture() method had been called if any element is set to be captured or pending to be captured.

Coalesced and predicted events

This specification does not define how user agents should coalesce or predict pointer movement data. It only specifies the API for accessing this information.

Coalesced events

For performance reasons, user agents may choose not to send a {{GlobalEventHandlers/pointermove}} event every time a measurable property (such as coordinates, pressure, tangential pressure, tilt, twist, or contact geometry) of a pointer is updated. Instead, they may coalesce (combine/merge) multiple changes into a single {{GlobalEventHandlers/pointermove}} or {{GlobalEventHandlers/pointerrawupdate}} event. While this approach helps reduce the amount of event handling the user agent must perform, it naturally reduces the granularity and fidelity of pointer position tracking, particularly for fast and large movements. Using the getCoalescedEvents() method, applications can access the raw, un-coalesced position changes, which allow for more precise handling of pointer movement data. In the case of drawing applications, for instance, the un-coalesced events can be used to draw smoother curves that more closely match the actual movement of a pointer.

Close-up view of a curve, showing coalesced and un-coalesced points
Example of a curve in a drawing application — using only the coalesced coordinates from {{GlobalEventHandlers/pointermove}} events (the grey dots), the curve is noticeably angular and jagged; the same line drawn using the more granular points provided by getCoalescedEvents() (the red circles) results in a smoother approximation of the pointer movement.

A PointerEvent has an associated coalesced events list (a list of zero or more PointerEvents). For trusted {{GlobalEventHandlers/pointermove}} and {{GlobalEventHandlers/pointerrawupdate}} events, the list is a sequence of all PointerEvents that were coalesced into this event. The "parent" trusted {{GlobalEventHandlers/pointermove}} or {{GlobalEventHandlers/pointerrawupdate}} event represents an accumulation of these coalesced events, but may have additional processing (for example, to align with the display refresh rate). As a result, the coalesced events lists for these events always contain at least one event. For all other trusted event types, it is an empty list. Untrusted events have their coalesced events list initialized to the value passed to the constructor.

Since a trusted parent event is a summary or aggregation of the coalesced events, developers should only need to process either the parent events or all of the coalesced events, but not both.

The events in the coalesced events list of a trusted event will have their attributes initialized as described in Populating and maintaining the coalesced and predicted events lists.

<style>
/* Disable intrinsic user agent direct manipulation behaviors (such as panning or zooming)
so that all events on the canvas element are given to the application instead. */

canvas { touch-action: none; }
</style>

<canvas id="drawSurface" width="500" height="500" style="border:1px solid black;"></canvas>

<script>
const canvas = document.getElementById("drawSurface"),
      context = canvas.getContext("2d");

canvas.addEventListener("pointermove", (e) => {
  if (e.getCoalescedEvents) {
    for (let coalesced_event of e.getCoalescedEvents()) {
      paint(coalesced_event); // Paint all raw/non-coalesced points
    }
  } else {
    paint(e); // Paint the final coalesced point
  }
});

function paint(event) {
  if (event.buttons > 0) {
    context.fillRect(event.clientX, event.clientY, 5, 5);
  }
}
</script>
The PointerEvent's attributes will be initialized in a way that best represents the events in the coalesced events list. The specific method by which user agents should do this is not covered by this specification.

The order of all these dispatched events MUST match the actual order of the original events. For example, if a {{GlobalEventHandlers/pointerdown}} event causes the dispatch of the coalesced {{GlobalEventHandlers/pointermove}} events, the user agent MUST first dispatch one {{GlobalEventHandlers/pointermove}} event with all those coalesced events of a pointerId, followed by the {{GlobalEventHandlers/pointerdown}} event.

Here is an example of the actual events happening with increasing {{Event/timeStamp}} values and the events dispatched by the user agent:

Actual events | Dispatched events
pointer (pointerId=2) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=2) w/ one coalesced event
pointer (pointerId=1) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=1) w/ one coalesced event
pointer (pointerId=2) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=2) w/ one coalesced event
pointer (pointerId=2) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=2) w/ one coalesced event
pointer (pointerId=1) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=1) w/ one coalesced event
pointer (pointerId=2) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=2) w/ one coalesced event
pointer (pointerId=1) button press | {{GlobalEventHandlers/pointermove}} (pointerId=1) w/ two coalesced events
 | {{GlobalEventHandlers/pointermove}} (pointerId=2) w/ four coalesced events
 | {{GlobalEventHandlers/pointerdown}} (pointerId=1) w/ zero coalesced events
pointer (pointerId=2) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=2) w/ one coalesced event
pointer (pointerId=2) coordinate change | {{GlobalEventHandlers/pointerrawupdate}} (pointerId=2) w/ one coalesced event
pointer (pointerId=1) button release | {{GlobalEventHandlers/pointermove}} (pointerId=2) w/ two coalesced events
 | {{GlobalEventHandlers/pointerup}} (pointerId=1) w/ zero coalesced events

Predicted events

Some user agents have built-in algorithms which, after a series of confirmed pointer movements, can predict (based on past points and the speed/trajectory of the movement) the position of future pointer movements. Applications can use this information with the getPredictedEvents() method to speculatively "draw ahead" to a predicted position to reduce perceived latency, and then discard these predicted points once the actual points are received.

A line drawn using coalesced points, showing predicted future points
Example of a line in a drawing application (the result of a drawing gesture from the bottom left to the top right), using the coalesced coordinates from {{GlobalEventHandlers/pointermove}} events, showing the user agent's predicted future points (the grey circles).

A PointerEvent has an associated predicted events list (a list of zero or more PointerEvents). For trusted {{GlobalEventHandlers/pointermove}} events, it is a sequence of PointerEvents that the user agent predicts will follow the event in the future. For all other trusted event types, it is an empty list. Untrusted events have their predicted events list initialized to the value passed to the constructor.

While pointerrawupdate events may have a non-empty coalesced events list, their predicted events list will, for performance reasons, usually be an empty list.

The number of events in the list and how far they are from the current timestamp are determined by the user agent and the prediction algorithm it uses.

The events in the predicted events list of a trusted event will have their attributes initialized as described in Populating and maintaining the coalesced and predicted events lists.

Note that authors should only consider predicted events as valid predictions until the next pointer event is dispatched. It is possible, depending on how far into the future the user agent predicts events, that regular pointer events are dispatched earlier than the timestamp of one or more of the predicted events.


let predicted_points = [];
window.addEventListener("pointermove", function(event) {
  // Clear the previously drawn predicted points.
  for (let e of predicted_points.reverse()) {
    clearPoint(e.pageX, e.pageY);
  }

  // Draw the actual movements that happened since the last received event.
  for (let e of event.getCoalescedEvents()) {
    drawPoint(e.pageX, e.pageY);
  }

  // Draw the current predicted points to reduce the perception of latency.
  predicted_points = event.getPredictedEvents();
  for (let e of predicted_points) {
    drawPoint(e.pageX, e.pageY);
  }
});

Populating and maintaining the coalesced and predicted events lists

When a trusted PointerEvent is created, user agents SHOULD run the following steps for each event in the coalesced events list and predicted events list:

  1. Set the event's pointerId, pointerType, isPrimary and {{Event/isTrusted}} to match the respective properties of the "parent" pointer event.
  2. Set the event's {{Event/cancelable}} and {{Event/bubbles}} to false (as these events will never be dispatched in isolation).
  3. Set the event's coalesced events list and predicted events list to an empty list.
  4. Initialize all other attributes to default {{PointerEvent}} values.

When a trusted PointerEvent's {{Event/target}} is changed, user agents SHOULD, for each event in the coalesced events list and predicted events list:

  1. Set the event's {{Event/target}} to match the {{Event/target}} of the "parent" pointer event.

Compatibility mapping with mouse events

The vast majority of web content existing today codes only to Mouse Events. The following describes an algorithm for how the user agent MAY map generic pointer input to mouse events for compatibility with this content.

The compatibility mapping with mouse events is an OPTIONAL feature of this specification. User agents are encouraged to support the feature for best compatibility with existing legacy content.

At a high level, compatibility mouse events are intended to be "interleaved" with their respective pointer events. However, this specific order is not mandatory, and user agents that implement compatibility mouse events MAY decide to delay or group the dispatch of mouse events, as long as their relative order is consistent.

Particularly in the case of touchscreen inputs, user agents MAY apply additional heuristics for gesture recognition (unless explicitly suppressed by authors through touch-action). During a sequence of events between a {{GlobalEventHandlers/pointerdown}} event and a {{GlobalEventHandlers/pointerup}} event, the gesture recognition may have to wait until the {{GlobalEventHandlers/pointerup}} event to detect or ignore a gesture. As a result, the compatibility mouse events for the whole sequence may be dispatched together after the last {{GlobalEventHandlers/pointerup}} event, if the user agent determined that an interaction was not intended as a particular gesture. These specifics of user agent gesture recognition are not defined in this specification, and they may differ between implementations.

Regardless of their support for compatibility mouse events, user agents MUST always support the click, auxclick and contextmenu events because these events are of type PointerEvent and are therefore not compatibility mouse events. Calling preventDefault during a pointer event MUST NOT have an effect on whether click, auxclick, or contextmenu are fired or not.

The relative order of some of these high-level events (contextmenu, focus, blur, etc.) with pointer events is undefined and varies between user agents. For example, in some user agents contextmenu will often follow a {{GlobalEventHandlers/pointerup}}, while in others it will often precede a {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointercancel}}, and in some situations it may be fired without any corresponding pointer event (for instance, as a result of a keyboard interaction).

In addition, user agents may apply their own heuristics to determine whether or not a click, auxclick, or contextmenu event should be fired. Some user agents may choose not to fire these events if there are other (non-primary) pointers of the same type, or other primary pointers of a different type. User agents may determine that a particular action was not a "clean" tap, click, or long-press (for instance, if an interaction with a finger on a touch screen includes too much movement while the finger is in contact with the screen) and decide not to fire a click, auxclick, or contextmenu event. These aspects of user agent behavior are not defined in this specification, and they may differ between implementations.

Unless otherwise noted, the target of any mapped mouse event SHOULD be the same target as the respective pointer event unless the target is no longer participating in its ownerDocument's tree. In this case, the mouse event should be fired at the original target's nearest ancestor node (at the time it was removed from the tree) that still participates in its ownerDocument's tree, meaning that a new event path (based on the new target node) is built for the mouse event.

Authors can prevent the production of certain compatibility mouse events by canceling the {{GlobalEventHandlers/pointerdown}} event.

Mouse events can only be prevented when the pointer is down. Hovering pointers (e.g. a mouse with no buttons pressed) cannot have their mouse events prevented.

Themouseover,mouseout,mouseenter,andmouseleaveevents are never prevented (even if the pointer is down).

Compatibility mouse events can't be prevented when a pointer event {{EventListener}} is set to be {{AddEventListenerOptions/passive}} [[DOM]].
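The notes above can be sketched in a pointerdown listener. This is an illustrative sketch with a hypothetical element reference; as noted, it only suppresses the down/move/up compatibility events, not the boundary events, and click, auxclick and contextmenu still fire.

```javascript
// Decide which pointer types should have their compatibility mouse events
// suppressed. Here only touch is suppressed (an assumption of this sketch),
// so a real mouse keeps producing its native events.
function shouldSuppressCompatMouse(pointerType) {
  return pointerType === "touch";
}

// Hypothetical wiring: canceling pointerdown prevents mousedown/mousemove/
// mouseup for the remainder of this pointer's interaction.
function initNoCompatMouse(el) {
  el.addEventListener("pointerdown", (e) => {
    if (shouldSuppressCompatMouse(e.pointerType)) {
      e.preventDefault();
    }
  });
}
```

Note that the listener must not be registered as {{AddEventListenerOptions/passive}}, since a passive listener cannot cancel the event.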

Tracking the effective position of the legacy mouse pointer

While only primary pointers can produce compatibility mouse events, multiple primary pointers can be active simultaneously, each producing its own compatibility mouse events. For compatibility with scripts relying on MouseEvents, the mouse transition events (mouseover, mouseout, mouseenter and mouseleave) SHOULD simulate the movement of a single legacy mouse input. This means that the entry/exit state for every event target is valid, in accordance with [[UIEVENTS]]. User agents SHOULD guarantee this by maintaining the effective position of the legacy mouse pointer in the document as follows.

Right before firing a {{GlobalEventHandlers/pointerdown}}, {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointermove}} event, or a {{GlobalEventHandlers/pointerleave}} event at the window, the user agent SHOULD run the following steps:

  1. Let |T| be the target of the {{GlobalEventHandlers/pointerdown}}, {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointermove}} event being dispatched. For the {{GlobalEventHandlers/pointerleave}} event, unset |T|.
  2. If |T| and the current effective legacy mouse pointer position are both unset or they are equal, terminate these steps.
  3. Dispatch mouseover, mouseout, mouseenter and mouseleave events as per [[UIEVENTS]] for a mouse moving from the current effective legacy mouse pointer position to |T|. Consider an unset value of either the current effective legacy mouse pointer position or |T| as an out-of-window mouse position.
  4. Set the effective legacy mouse pointer position to |T|.

The effective position of the legacy mouse pointer models the fact that we cannot always have a direct mapping from pointer transition events (i.e., pointerover, pointerout, pointerenter and pointerleave) to corresponding legacy mouse transition events (i.e., mouseover, mouseout, mouseenter and mouseleave). The following animation illustrates a case where a user agent needs to dispatch more legacy mouse transition events than pointer transition events to be able to reconcile two primary pointers using a single legacy mouse input.

Simultaneous mouse pointer (white cursor) and touch pointer (white "hand" cursor) causing the single legacy mouse input (orange cursor) to move between the two pointers.

In this animation, note the time period between the mouse click and the touch tap. Button 1 receives no pointerout event (because the "real" mouse pointer didn't leave the button rectangle within this period), but Button 1 receives a mouseout event when the effective position of the legacy mouse pointer moves to Button 2 on touch tap. Similarly, in the time period between the touch tap and the moment before the mouse leaves Button 1, Button 1 receives no pointerover event for the same reason, but Button 1 receives a mouseover event when the effective position of the legacy mouse pointer moves back inside Button 1.

Mapping for devices that support hover

Whenever the user agent is to dispatch a pointer event for a device that supports hover, it SHOULD run the following steps:

  1. If the isPrimary property for the pointer event to be dispatched is false, then dispatch the pointer event and terminate these steps.
  2. If the pointer event to be dispatched is a {{GlobalEventHandlers/pointerdown}}, {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointermove}} event, or a {{GlobalEventHandlers/pointerleave}} event at the window, dispatch compatibility mouse transition events as described in Tracking the effective position of the legacy mouse pointer.
  3. Dispatch the pointer event.
  4. If the pointer event dispatched was {{GlobalEventHandlers/pointerdown}} and the event was canceled, then set the PREVENT MOUSE EVENT flag for this pointerType.
  5. If the PREVENT MOUSE EVENT flag is not set for this pointerType and the pointer event dispatched was:
    • {{GlobalEventHandlers/pointerdown}}, then fire a mousedown event.
    • {{GlobalEventHandlers/pointermove}}, then fire a mousemove event.
    • {{GlobalEventHandlers/pointerup}}, then fire a mouseup event.
    • {{GlobalEventHandlers/pointercancel}}, then fire a mouseup event at the window.
  6. If the pointer event dispatched was {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointercancel}}, clear the PREVENT MOUSE EVENT flag for this pointerType.

Mapping for devices that do not support hover

Some devices, such as most touchscreens, do not support hovering a coordinate (or set of coordinates) while not in the active state. Much existing content coded to mouse events assumes that a mouse is producing the events and thus that certain qualities are generally true.

Hover is sometimes used to toggle the visibility of UI elements in content designed for mouse (e.g. "hover menus"). This content is often incompatible with devices that do not support hover. This specification does not define a mapping or behavior for compatibility with this scenario. It will be considered in a future version of the specification.

This requires that user agents provide a different mapping for these types of input devices. Whenever the user agent is to dispatch a pointer event for a device that does not support hover, it SHOULD run the following steps:

  1. If the isPrimary property for the pointer event to be dispatched is false, then dispatch the pointer event and terminate these steps.
  2. If the pointer event to be dispatched is {{GlobalEventHandlers/pointerover}} and the {{GlobalEventHandlers/pointerdown}} event has not yet been dispatched for this pointer, then fire a mousemove event (for compatibility with legacy mouse-specific code).
  3. If the pointer event to be dispatched is a {{GlobalEventHandlers/pointerdown}}, {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointermove}} event, or a {{GlobalEventHandlers/pointerleave}} event at the window, dispatch compatibility mouse transition events as described in Tracking the effective position of the legacy mouse pointer.
  4. Dispatch the pointer event.
  5. If the pointer event dispatched was {{GlobalEventHandlers/pointerdown}} and the event was canceled, then set the PREVENT MOUSE EVENT flag for this pointerType.
  6. If the PREVENT MOUSE EVENT flag is not set for this pointerType and the pointer event dispatched was:
    • {{GlobalEventHandlers/pointerdown}}, then fire a mousedown event.
    • {{GlobalEventHandlers/pointermove}}, then fire a mousemove event.
    • {{GlobalEventHandlers/pointerup}}, then fire a mouseup event.
    • {{GlobalEventHandlers/pointercancel}}, then fire a mouseup event at the window.
  7. If the pointer event dispatched was {{GlobalEventHandlers/pointerup}} or {{GlobalEventHandlers/pointercancel}}, clear the PREVENT MOUSE EVENT flag for this pointerType.

If the user agent supports both Touch Events (as defined in [[TOUCH-EVENTS]]) and Pointer Events, the user agent MUST NOT generate both the compatibility mouse events as described in this section and the fallback mouse events outlined in [[TOUCH-EVENTS]].

The activation of an element (click) with a primary pointer that does not support hover (e.g. a single finger on a touchscreen) would typically produce the following event sequence:

  1. mousemove
  2. {{GlobalEventHandlers/pointerover}}
  3. pointerenter
  4. mouseover
  5. mouseenter
  6. {{GlobalEventHandlers/pointerdown}}
  7. mousedown
  8. Zero or more {{GlobalEventHandlers/pointermove}} and mousemove events, depending on movement of the pointer
  9. {{GlobalEventHandlers/pointerup}}
  10. mouseup
  11. {{GlobalEventHandlers/pointerout}}
  12. {{GlobalEventHandlers/pointerleave}}
  13. mouseout
  14. mouseleave
  15. click

If, however, the {{GlobalEventHandlers/pointerdown}} event is canceled during this interaction, then the sequence of events would be:

  1. mousemove
  2. {{GlobalEventHandlers/pointerover}}
  3. pointerenter
  4. mouseover
  5. mouseenter
  6. {{GlobalEventHandlers/pointerdown}}
  7. Zero or more {{GlobalEventHandlers/pointermove}} events, depending on movement of the pointer
  8. {{GlobalEventHandlers/pointerup}}
  9. {{GlobalEventHandlers/pointerout}}
  10. {{GlobalEventHandlers/pointerleave}}
  11. mouseout
  12. mouseleave
  13. click
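The interleaving shown in the two sequences above can be observed by registering listeners for both the pointer events and their compatibility mouse events and recording the order in which they fire. A minimal sketch (the element reference and the subset of observed event types are illustrative):

```javascript
// Pointer events and their compatibility mouse event counterparts.
const OBSERVED_EVENTS = [
  "pointerover", "pointerenter", "pointerdown", "pointermove", "pointerup",
  "pointerout", "pointerleave", "mousemove", "mouseover", "mouseenter",
  "mousedown", "mouseup", "mouseout", "mouseleave", "click",
];

// Register one listener per event type on el, pushing each fired event's
// type into log so the dispatch order can be inspected afterwards.
function recordEventOrder(el, log) {
  for (const type of OBSERVED_EVENTS) {
    el.addEventListener(type, (e) => log.push(e.type));
  }
  return log;
}
```

Inspecting the resulting log after a tap on a touchscreen should show an order matching one of the sequences above, depending on whether pointerdown was canceled.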

Security and privacy considerations

This appendix discusses security and privacy considerations for Pointer Events implementations. The discussion is limited to security and privacy issues that arise directly from implementation of the event model, APIs and events defined in this specification.

Many of the event types defined in this specification are dispatched in response to user actions. This allows malicious event listeners to gain access to information users would typically consider confidential, e.g., the exact path/movement of a user's mouse/stylus/finger while interacting with a page.

Pointer events contain additional information (where supported by the user's device), such as the angle or tilt at which a pen input is held, the geometry of the contact surface, and the pressure exerted on the stylus or touch screen. Information about angle, tilt, geometry and pressure are directly related to sensors on the user's device, meaning that this specification allows an origin access to these sensors.

This sensor data, as well as the ability to determine the type of input mechanism (mouse, touch, pen) used, may be used to infer characteristics of a user, or of the user's device and environment. These inferred characteristics and any device/environment information may themselves be sensitive — for instance, they may allow a malicious site to further infer if a user is using assistive technologies. This information can also be potentially used for the purposes of building a user profile and/or attempting to "fingerprint" and track a particular user.

As mitigation, user agents may consider including the ability for users to disable access to particular sensor data (such as angle, tilt, pressure), and/or to make it available only after an explicit opt-in from the user.

Beyond these considerations, the working group believes that this specification does not introduce any further security or privacy risks.

Glossary

active buttons state
The condition when a pointer has a non-zero value for the buttons property. For mouse, this is when the device has at least one button depressed. For touch, this is when there is physical contact with the digitizer. For pen, this is when either the pen has physical contact with the digitizer, or at least one button is depressed while hovering.
active document
For every active pointer, the document that received the last event from that pointer.
active pointer
Any touch contact, pen/stylus, mouse cursor, or other pointer that can produce events. If it is possible for a given pointer (identified by a unique pointerId) to produce additional events within the document, then that pointer is still considered active. Examples:
  • A mouse connected to the device is always active.
  • A touch contact on the screen is considered active.
  • If a touch contact or pen/stylus is lifted beyond the range of the digitizer, then it is no longer considered active.
On some platforms, the set of active pointers includes all pointer input to the device, including any that are not targeted at the user agent (e.g. those targeted at other applications).
canceled event
An event whose default action was prevented by means of preventDefault(), returning false in an event handler, or other means as defined by [[UIEVENTS]] and [[HTML]].
contact geometry
The bounding box of an input (most commonly, touch) on a digitizer. This typically refers to devices with coarser pointer input resolution than a single pixel. Some devices do not report this data at all.
digitizer
A type of input sensing device in which a surface can detect input which is in contact and/or in close proximity. Most commonly, this is the surface that senses input from the touch contact or a pen/stylus.
direct manipulation
Certain user agents (such as browsers on a touchscreen device) implement a "direct manipulation" metaphor where a pointer not only interacts with controls, but is also used to directly pan or zoom the current page, providing the illusion of direct physical contact. As an example, users on a touchscreen device are generally able to use a finger or a stylus to "grab" a page and pan it by moving the pointer, directly manipulating the page. Contrast this with a mouse pointer on a regular desktop/laptop, where panning is done by using a scrollbar, rather than by "dragging" the page.
In some cases, touchpads (like those found on a laptop) will allow the user to scroll by "dragging" on the touchpad. However, this is generally achieved by the touchpad generating "fake" mouse wheel events, so this wouldn't count as a direct manipulation.
hit test
The process by which the user agent determines a target element for a pointer event. Typically, this is determined by considering the pointer's location and also the visual layout of elements in a document on screen media.
measurable properties

Measurable properties represent values relating to continuous pointer sensor data that is expressed using a real number or an integer from a large domain. For pointer events, width, height, pressure, tangentialPressure, tiltX, tiltY, twist, altitudeAngle, azimuthAngle, and the [[UIEVENTS]] Mouse Event model properties screenX, screenY, clientX, and clientY are measurable properties.

In contrast, pointerId, pointerType, isPrimary, and the [[UIEVENTS]] Mouse Event model properties button, buttons, ctrlKey, shiftKey, altKey, and metaKey are not considered measurable properties, as they don't relate to sensor data.
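As a sketch of how this distinction might be used in practice (the helper name below is hypothetical, not from this specification), an application recording pen strokes could copy only the measurable properties from each event:

```javascript
// The measurable (continuous, sensor-derived) properties listed above.
const MEASURABLE_PROPERTIES = [
  "width", "height", "pressure", "tangentialPressure",
  "tiltX", "tiltY", "twist", "altitudeAngle", "azimuthAngle",
  "screenX", "screenY", "clientX", "clientY",
];

// Hypothetical helper: extract the measurable properties of a pointer
// event into a plain object, e.g. for recording a drawing stroke.
// Non-measurable properties such as pointerId or buttons are omitted.
function measurableSample(event) {
  const sample = {};
  for (const name of MEASURABLE_PROPERTIES) {
    sample[name] = event[name];
  }
  return sample;
}
```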

pointer
A hardware agnostic representation of input devices that can target a specific coordinate (or set of coordinates) on a screen, such as a mouse, pen, or touch contact.
user agent
A program, such as a browser or content authoring tool, normally running on a client machine, which acts on a user's behalf in retrieving, interpreting, executing, presenting, or creating content.

Acknowledgments

Many thanks to lots of people for their proposals and recommendations, some of which are incorporated into this document. The group's Chair acknowledges contributions from the following past and present group members and participants: Mustaq Ahmed, Arthur Barstow, Ben Boyle, Matt Brubeck, Rick Byers, Marcos Cáceres, Cathy Chan, Bo Cupp, Domenic Denicola, Ted Dinklocker, Robert Flack, Dave Fleck, Mike Fraser, Ella Ge, Scott González, Kartikaya Gupta, Dominique Hazael-Massieux, Philippe Le Hégaret, Hayato Ito, Patrick Kettner, Patrick H. Lauke, Scott Low, Sangwhan Moon, Olli Pettay, Alan Pyne, Antoine Quint, Jacob Rossi, Kagami Sascha Rosylight, Doug Schepers, Ming-Chou Shih, Brenton Simpson, Dave Tapuska, Liviu Tinta, Asir Vedamuthu, Lan Wei, Navid Zolghadr

Special thanks to those that helped pioneer the first edition of this model, including especially: Charu Chandiram, Peter Freiling, Nathan Furtwangler, Thomas Olsen, Matt Rakow, Ramu Ramanathan, Justin Rogers, Jacob Rossi, Reed Townsend and Steve Wright.

Revision history

The following is an informative summary of substantial and major editorial changes between publications of this specification, relative to the [[PointerEvents3]] specification. See the complete revision history of the Editor's Drafts of this specification.