Amazon CloudWatch Logs logging driver
The `awslogs` logging driver sends container logs to
Amazon CloudWatch Logs. Log entries can be retrieved through the
AWS Management Console or the AWS SDKs and Command Line Tools.
Usage
To use the `awslogs` driver as the default logging driver, set the `log-driver`
and `log-opt` keys to appropriate values in the `daemon.json` file, which is
located in `/etc/docker/` on Linux hosts or
`C:\ProgramData\docker\config\daemon.json` on Windows Server. For more about
configuring Docker using `daemon.json`, see daemon.json.

The following example sets the log driver to `awslogs` and sets the
`awslogs-region` option.
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1"
  }
}
Restart Docker for the changes to take effect.
You can set the logging driver for a specific container by using the
`--log-driver` option to `docker run`:

$ docker run --log-driver=awslogs ...
If you are using Docker Compose, set `awslogs` using the following declaration
example:

myservice:
  logging:
    driver: awslogs
    options:
      awslogs-region: us-east-1
Amazon CloudWatch Logs options
You can add logging options to the `daemon.json` to set Docker-wide defaults,
or use the `--log-opt NAME=VALUE` flag to specify Amazon CloudWatch Logs
logging driver options when starting a container.
awslogs-region
The `awslogs` logging driver sends your Docker logs to a specific region. Use
the `awslogs-region` log option or the `AWS_REGION` environment variable to set
the region. By default, if your Docker daemon is running on an EC2 instance
and no region is set, the driver uses the instance's region.

$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 ...
awslogs-endpoint
By default, Docker uses either the `awslogs-region` log option or the
detected region to construct the remote CloudWatch Logs API endpoint.
Use the `awslogs-endpoint` log option to override the default endpoint
with the provided endpoint.

Note

The `awslogs-region` log option or detected region controls the region used
for signing. You may experience signature errors if the endpoint you've
specified with `awslogs-endpoint` uses a different region.
awslogs-group
You must specify a log group for the `awslogs` logging driver. You can specify
the log group with the `awslogs-group` log option:

$ docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup ...
awslogs-stream
To configure which log stream should be used, you can specify the
`awslogs-stream` log option. If not specified, the container ID is used as the
log stream.
Note
Log streams within a given log group should only be used by one container at a time. Using the same log stream for multiple containers concurrently can cause reduced logging performance.
awslogs-create-group
The log driver returns an error by default if the log group doesn't exist.
However, you can set `awslogs-create-group` to `true` to automatically create
the log group as needed. The `awslogs-create-group` option defaults to `false`.
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-create-group=true \
  ...
Note

Your AWS IAM policy must include the `logs:CreateLogGroup` permission before
you attempt to use `awslogs-create-group`.
awslogs-create-stream
By default, the log driver creates the AWS CloudWatch Logs stream used for
container log persistence. Set `awslogs-create-stream` to `false` to disable
log stream creation. When disabled, the Docker daemon assumes the log stream
already exists. This is useful when log stream creation is handled by another
process, avoiding redundant AWS CloudWatch Logs API calls.

If `awslogs-create-stream` is set to `false` and the log stream does not
exist, log persistence to CloudWatch fails during container runtime, resulting
in `Failed to put log events` error messages in daemon logs.
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-stream=myLogStream \
  --log-opt awslogs-create-stream=false \
  ...
awslogs-datetime-format
The `awslogs-datetime-format` option defines a multi-line start pattern in
Python `strftime` format. A log message consists of a line that matches the
pattern and any following lines that don't match the pattern. Thus the matched
line is the delimiter between log messages.

One example of a use case for using this format is for parsing output such as
a stack dump, which might otherwise be logged in multiple entries. The correct
pattern allows it to be captured in a single entry.

This option always takes precedence if both `awslogs-datetime-format` and
`awslogs-multiline-pattern` are configured.
Note
Multi-line logging performs regular expression parsing and matching of all log messages, which may have a negative impact on logging performance.
Consider the following log stream, where new log messages start with a timestamp:
[May 01, 2017 19:00:01] A message was logged
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words
[May 01, 2017 19:01:32] Another message was logged
The format can be expressed as a `strftime` expression of
`[%b %d, %Y %H:%M:%S]`, and the `awslogs-datetime-format` value can be set to
that expression:
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-datetime-format='\[%b %d, %Y %H:%M:%S\]' \
  ...
This parses the logs into the following CloudWatch log events:
# First event
[May 01, 2017 19:00:01] A message was logged

# Second event
[May 01, 2017 19:00:04] Another multi-line message was logged
Some random message
with some random words

# Third event
[May 01, 2017 19:01:32] Another message was logged
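The grouping behavior can be sketched in Python. This is a hypothetical illustration, not the driver's actual code: a line whose timestamp prefix parses with the `strftime` pattern starts a new event, and non-matching lines are appended to the current event. The helper name `group_events` and the fixed 23-character prefix length are assumptions for this example.

```python
from datetime import datetime

def group_events(lines, fmt="[%b %d, %Y %H:%M:%S]", prefix_len=23):
    """Group raw log lines into events the way awslogs-datetime-format does."""
    events = []
    for line in lines:
        try:
            # A line whose prefix parses with the pattern starts a new event
            datetime.strptime(line[:prefix_len], fmt)
            events.append(line)
        except ValueError:
            if events:
                # Non-matching line: continuation of the current event
                events[-1] += "\n" + line
    return events

lines = [
    "[May 01, 2017 19:00:01] A message was logged",
    "[May 01, 2017 19:00:04] Another multi-line message was logged",
    "Some random message",
    "with some random words",
    "[May 01, 2017 19:01:32] Another message was logged",
]
print(len(group_events(lines)))  # 3 events, matching the output above
```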
The following `strftime` codes are supported:
| Code | Meaning | Example |
|------|---------|---------|
| %a | Weekday abbreviated name. | Mon |
| %A | Weekday full name. | Monday |
| %w | Weekday as a decimal number where 0 is Sunday and 6 is Saturday. | 0 |
| %d | Day of the month as a zero-padded decimal number. | 08 |
| %b | Month abbreviated name. | Feb |
| %B | Month full name. | February |
| %m | Month as a zero-padded decimal number. | 02 |
| %Y | Year with century as a decimal number. | 2008 |
| %y | Year without century as a zero-padded decimal number. | 08 |
| %H | Hour (24-hour clock) as a zero-padded decimal number. | 19 |
| %I | Hour (12-hour clock) as a zero-padded decimal number. | 07 |
| %p | AM or PM. | AM |
| %M | Minute as a zero-padded decimal number. | 57 |
| %S | Second as a zero-padded decimal number. | 04 |
| %L | Milliseconds as a zero-padded decimal number. | .123 |
| %f | Microseconds as a zero-padded decimal number. | 000345 |
| %z | UTC offset in the form +HHMM or -HHMM. | +1300 |
| %Z | Time zone name. | PST |
| %j | Day of the year as a zero-padded decimal number. | 363 |
awslogs-multiline-pattern
The `awslogs-multiline-pattern` option defines a multi-line start pattern
using a regular expression. A log message consists of a line that matches the
pattern and any following lines that don't match the pattern. Thus the matched
line is the delimiter between log messages.

This option is ignored if `awslogs-datetime-format` is also configured.
Note
Multi-line logging performs regular expression parsing and matching of all log messages. This may have a negative impact on logging performance.
Consider the following log stream, where each log message should start with
the pattern `INFO`:
INFO A message was logged
INFO Another multi-line message was logged
Some random message
INFO Another message was logged
You can use the regular expression of `^INFO`:
$ docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=myLogGroup \
  --log-opt awslogs-multiline-pattern='^INFO' \
  ...
This parses the logs into the following CloudWatch log events:
# First event
INFO A message was logged

# Second event
INFO Another multi-line message was logged
Some random message

# Third event
INFO Another message was logged
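The same grouping idea can be checked with an ordinary regular expression. The sketch below is illustrative only (the helper name `group_by_pattern` is an assumption, not driver code): a line matching the pattern starts a new event; any other line is appended to the current event.

```python
import re

def group_by_pattern(lines, pattern=r"^INFO"):
    """Group log lines into events the way awslogs-multiline-pattern does."""
    start = re.compile(pattern)
    events = []
    for line in lines:
        if start.search(line) or not events:
            events.append(line)            # pattern matched: new event
        else:
            events[-1] += "\n" + line      # continuation line
    return events

stream = [
    "INFO A message was logged",
    "INFO Another multi-line message was logged",
    "Some random message",
    "INFO Another message was logged",
]
print(len(group_by_pattern(stream)))  # 3 events, matching the output above
```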
tag
Specify `tag` as an alternative to the `awslogs-stream` option. `tag`
interprets Go template markup, such as `{{.ID}}`, `{{.FullID}}`, or
`{{.Name}}` `docker.{{.ID}}`. See the tag option documentation for details on
supported template substitutions.
When both `awslogs-stream` and `tag` are specified, the value supplied for
`awslogs-stream` overrides the template specified with `tag`.

If not specified, the container ID is used as the log stream.
Note

The CloudWatch log API doesn't support `:` in the log name. This can cause
some issues when using `{{ .ImageName }}` as a tag, since a Docker image has
a format of `IMAGE:TAG`, such as `alpine:latest`. Template markup can be used
to get the proper format. To get the image name and the first 12 characters of
the container ID, you can use:

--log-opt tag='{{ with split .ImageName ":" }}{{ join . "_" }}{{ end }}-{{.ID}}'

The output is something like:

alpine_latest-bf0072049c76
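The transformation that the Go template performs can be expressed in Python for clarity. This is a hypothetical equivalent for illustration only (`stream_tag` is not a real API): the image name is split on `:`, rejoined with `_`, and the first 12 characters of the container ID are appended.

```python
def stream_tag(image_name, container_id):
    """Mimic the Go template: replace ':' in the image name with '_'
    and append the first 12 characters of the container ID."""
    return "_".join(image_name.split(":")) + "-" + container_id[:12]

print(stream_tag("alpine:latest", "bf0072049c7603c8253e816e6776c6e3"))
# alpine_latest-bf0072049c76
```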
awslogs-force-flush-interval-seconds
The `awslogs` driver periodically flushes logs to CloudWatch. The
`awslogs-force-flush-interval-seconds` option changes the log flush interval,
in seconds. The default is 5 seconds.
awslogs-max-buffered-events
The `awslogs` driver buffers logs. The `awslogs-max-buffered-events` option
changes the log buffer size. The default is 4K.
Credentials
You must provide AWS credentials to the Docker daemon to use the `awslogs`
logging driver. You can provide these credentials with the
`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`
environment variables, the default AWS shared credentials file
(`~/.aws/credentials` of the root user), or, if you are running the Docker
daemon on an Amazon EC2 instance, the Amazon EC2 instance profile.

Credentials must have a policy applied that allows the `logs:CreateLogStream`
and `logs:PutLogEvents` actions, as shown in the following example.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}