Amazon CloudWatch Logs logging driver

The awslogs logging driver sends container logs to Amazon CloudWatch Logs. Log entries can be retrieved through the AWS Management Console or the AWS SDKs and Command Line Tools.

Usage

To use the awslogs driver as the default logging driver, set the log-driver and log-opts keys to appropriate values in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\daemon.json on Windows Server. For more about configuring Docker using daemon.json, see daemon.json. The following example sets the log driver to awslogs and sets the awslogs-region option.

{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1"
  }
}

Restart Docker for the changes to take effect.

You can set the logging driver for a specific container by using the --log-driver option to docker run:

$ docker run --log-driver=awslogs ...

If you are using Docker Compose, set awslogs using the following declaration example:

myservice:
  logging:
    driver: awslogs
    options:
      awslogs-region: us-east-1

Amazon CloudWatch Logs options

You can add logging options to the daemon.json to set Docker-wide defaults, or use the --log-opt NAME=VALUE flag to specify Amazon CloudWatch Logs logging driver options when starting a container.

awslogs-region

The awslogs logging driver sends your Docker logs to a specific region. Use the awslogs-region log option or the AWS_REGION environment variable to set the region. By default, if your Docker daemon is running on an EC2 instance and no region is set, the driver uses the instance's region.

$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 ...

awslogs-endpoint

By default, Docker uses either the awslogs-region log option or the detected region to construct the remote CloudWatch Logs API endpoint. Use the awslogs-endpoint log option to override the default endpoint with the provided endpoint.

Note

The awslogs-region log option or detected region controls the region used for signing. You may experience signature errors if the endpoint you've specified with awslogs-endpoint uses a different region.

awslogs-group

You must specify a log group for the awslogs logging driver. You can specify the log group with the awslogs-group log option:

$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ...

awslogs-stream

To configure which log stream should be used, you can specify the awslogs-stream log option. If not specified, the container ID is used as the log stream.

Note

Log streams within a given log group should only be used by one container at a time. Using the same log stream for multiple containers concurrently can cause reduced logging performance.

awslogs-create-group

The log driver returns an error by default if the log group doesn't exist. However, you can set awslogs-create-group to true to automatically create the log group as needed. The awslogs-create-group option defaults to false.

$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-create-group=true \
  ...

Note

Your AWS IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.

awslogs-create-stream

By default, the log driver creates the AWS CloudWatch Logs stream used for container log persistence.

Set awslogs-create-stream to false to disable log stream creation. When disabled, the Docker daemon assumes the log stream already exists. This is beneficial, for example, when log stream creation is handled by another process, because it avoids redundant AWS CloudWatch Logs API calls.

If awslogs-create-stream is set to false and the log stream does not exist, log persistence to CloudWatch fails during container runtime, resulting in Failed to put log events error messages in daemon logs.

$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-stream=myLogStream \
  --log-opt awslogs-create-stream=false \
  ...

awslogs-datetime-format

The awslogs-datetime-format option defines a multi-line start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. Thus the matched line is the delimiter between log messages.

One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.

This option always takes precedence if both awslogs-datetime-format and awslogs-multiline-pattern are configured.

Note

Multi-line logging performs regular expression parsing and matching of all log messages, which may have a negative impact on logging performance.

Consider the following log stream, where new log messages start with a timestamp:

[May 01, 2017 19:00:01] A message was logged
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words
[May 01, 2017 19:01:32] Another message was logged

The format can be expressed as a strftime expression of [%b %d, %Y %H:%M:%S], and the awslogs-datetime-format value can be set to that expression:

$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-datetime-format='\[%b %d, %Y %H:%M:%S\]' \
  ...

This parses the logs into the following CloudWatch log events:

# First event
[May 01, 2017 19:00:01] A message was logged

# Second event
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words

# Third event
[May 01, 2017 19:01:32] Another message was logged
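The grouping behavior above can be sketched in Python. This is a simplified illustration, not the driver's actual implementation: strftime_to_regex and split_events are hypothetical helpers, and only the strftime codes used in this example are translated.

```python
import re

# Hypothetical sketch: translate a strftime start pattern into a regex,
# then group lines so each matching line starts a new log event.
STRFTIME_TO_REGEX = {
    "%b": r"[A-Z][a-z]{2}",  # abbreviated month name, e.g. May
    "%d": r"\d{2}",          # zero-padded day of the month
    "%Y": r"\d{4}",          # year with century
    "%H": r"\d{2}",          # hour, 24-hour clock
    "%M": r"\d{2}",          # minute
    "%S": r"\d{2}",          # second
}

def strftime_to_regex(fmt: str) -> str:
    # Replace each supported strftime code with a matching regex fragment.
    for code, pattern in STRFTIME_TO_REGEX.items():
        fmt = fmt.replace(code, pattern)
    return fmt

def split_events(lines, start_pattern):
    # A line matching the pattern starts a new event; lines that don't
    # match are appended to the current event.
    events, current = [], []
    for line in lines:
        if re.match(start_pattern, line) and current:
            events.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        events.append("\n".join(current))
    return events

stream = [
    "[May 01, 2017 19:00:01] A message was logged",
    "[May 01, 2017 19:00:04] Another multi-line message was logged",
    "Some random message",
    "with some random words",
    "[May 01, 2017 19:01:32] Another message was logged",
]

pattern = strftime_to_regex(r"\[%b %d, %Y %H:%M:%S\]")
events = split_events(stream, pattern)
print(len(events))  # 3 events, matching the CloudWatch output above
```

The second event keeps its two trailing non-matching lines, which is exactly why a stack trace logged across many lines ends up as a single CloudWatch event.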

The following strftime codes are supported:

Code  Meaning                                                            Example
%a    Weekday abbreviated name.                                          Mon
%A    Weekday full name.                                                 Monday
%w    Weekday as a decimal number where 0 is Sunday and 6 is Saturday.   0
%d    Day of the month as a zero-padded decimal number.                  08
%b    Month abbreviated name.                                            Feb
%B    Month full name.                                                   February
%m    Month as a zero-padded decimal number.                             02
%Y    Year with century as a decimal number.                             2008
%y    Year without century as a zero-padded decimal number.              08
%H    Hour (24-hour clock) as a zero-padded decimal number.              19
%I    Hour (12-hour clock) as a zero-padded decimal number.              07
%p    AM or PM.                                                          AM
%M    Minute as a zero-padded decimal number.                            57
%S    Second as a zero-padded decimal number.                            04
%L    Milliseconds as a zero-padded decimal number.                      .123
%f    Microseconds as a zero-padded decimal number.                      000345
%z    UTC offset in the form +HHMM or -HHMM.                             +1300
%Z    Time zone name.                                                    PST
%j    Day of the year as a zero-padded decimal number.                   363

awslogs-multiline-pattern

The awslogs-multiline-pattern option defines a multi-line start pattern using a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. Thus the matched line is the delimiter between log messages.

This option is ignored if awslogs-datetime-format is also configured.

Note

Multi-line logging performs regular expression parsing and matching of all log messages. This may have a negative impact on logging performance.

Consider the following log stream, where each log message should start with the pattern INFO:

INFO A message was logged
INFO Another multi-line message was logged
Some random message
INFO Another message was logged

You can use the regular expression of ^INFO:

$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-multiline-pattern='^INFO' \
  ...

This parses the logs into the following CloudWatch log events:

# First event
INFO A message was logged

# Second event
INFO Another multi-line message was logged
Some random message

# Third event
INFO Another message was logged
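Because here the start pattern is a plain regular expression, the same grouping can be reproduced compactly with a zero-width lookahead split. This is a sketch of the grouping semantics only, not the driver's implementation:

```python
import re

# Sketch only: split the buffered stream so each match of ^INFO
# starts a new event; non-matching lines stay with the prior event.
stream = (
    "INFO A message was logged\n"
    "INFO Another multi-line message was logged\n"
    "Some random message\n"
    "INFO Another message was logged"
)

# The lookahead (?=INFO) splits on the newline without consuming
# the INFO prefix of the next event.
events = re.split(r"\n(?=INFO)", stream)
print(len(events))  # 3 events, matching the CloudWatch output above
```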

tag

Specify tag as an alternative to the awslogs-stream option. tag interprets Go template markup, such as {{.ID}}, {{.FullID}}, or {{.Name}} docker.{{.ID}}. See the tag option documentation for details on supported template substitutions.

When both awslogs-stream and tag are specified, the value supplied for awslogs-stream overrides the template specified with tag.

If not specified, the container ID is used as the log stream.

Note

The CloudWatch log API doesn't support : in the log name. This can cause some issues when using {{ .ImageName }} as a tag, since a Docker image has a format of IMAGE:TAG, such as alpine:latest. Template markup can be used to get the proper format. To get the image name and the first 12 characters of the container ID, you can use:

--log-opt tag='{{ with split .ImageName ":" }}{{ join . "_" }}{{ end }}-{{.ID}}'

The output is something like: alpine_latest-bf0072049c76
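As a rough illustration of what that template computes, the split/join logic maps onto ordinary string operations. The image name and container ID below are made-up values, and the slice simulates the fact that {{.ID}} is the truncated container ID:

```python
# Hypothetical simulation of the tag template's split/join logic;
# the image name and container ID are illustrative values.
image_name = "alpine:latest"
container_id = "bf0072049c76a9a3dc23881c6ff60032eafa3ff0d2b03f19653812973a690cb1"

# Equivalent of:
# {{ with split .ImageName ":" }}{{ join . "_" }}{{ end }}-{{.ID}}
tag = "_".join(image_name.split(":")) + "-" + container_id[:12]
print(tag)  # alpine_latest-bf0072049c76
```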

awslogs-force-flush-interval-seconds

The awslogs driver periodically flushes logs to CloudWatch. The awslogs-force-flush-interval-seconds option changes the log flush interval, in seconds. The default is 5 seconds.

awslogs-max-buffered-events

The awslogs driver buffers logs in memory before sending them. The awslogs-max-buffered-events option changes the log buffer size. The default is 4K.
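Like the other options, both of these tuning knobs can be set as Docker-wide defaults in daemon.json; the values below are illustrative, not recommendations, and note that log-opts values must be strings:

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "myLogGroup",
    "awslogs-force-flush-interval-seconds": "10",
    "awslogs-max-buffered-events": "8192"
  }
}
```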

Credentials

You must provide AWS credentials to the Docker daemon to use the awslogs logging driver. You can provide these credentials with the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables, the default AWS shared credentials file (~/.aws/credentials of the root user), or, if you are running the Docker daemon on an Amazon EC2 instance, the Amazon EC2 instance profile.

Credentials must have a policy applied that allows the logs:CreateLogStream and logs:PutLogEvents actions, as shown in the following example.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}