Setting Up an ELK Environment with Docker


Setting up a convenient ELK environment with Docker on a Mac.

ELK + Redis run in Docker; Filebeat is installed on each application host.

The architecture looks like this:

Docker ELK Environment - Figure 1

1. Pull the ELK image

# tidy @ TidydeMBP in ~ [10:02:28]
$ sudo docker pull sebp/elk
Password:
Using default tag: latest
latest: Pulling from sebp/elk
281a73dee007: Pulling fs layer
...
...
...
Digest: sha256:94c4aa7f9bfe0fe7047f20e1c747ac19538605f8ea0bbe401c7bed923614905e
Status: Downloaded newer image for sebp/elk:latest

2. Create and run the ELK container

2.1. Using the command line

2.1.1. Using command-line arguments

Use the following command to create and start an elk container, mapping its ports to the host.

$ sudo docker run -d -p 5601:5601 -p 9200:9200 -p 5044:5044 -v ~/Documents/docker_env/elasticsearch:/var/lib/elasticsearch -v ~/Documents/docker_env/logstash/config:/opt/logstash/config -v  ~/Documents/docker_env/logstash/conf.d:/etc/logstash/conf.d -it --name elk sebp/elk:651

2.1.2. Using Docker Compose

Write a docker-compose.yml file:

elk:
  image: sebp/elk:651
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
  volumes:
    - ~/Documents/docker_env/elasticsearch:/var/lib/elasticsearch
    - ~/Documents/docker_env/logstash/config:/opt/logstash/config
    - ~/Documents/docker_env/logstash/conf.d:/etc/logstash/conf.d

Then run:

$ sudo docker-compose up -d

2.1.3. Check the mounted directories

If the host directory you bind-mount already exists, the mount will shadow the corresponding folder inside the container, so the container's original files can effectively be wiped out and replaced by whatever is in the host folder. To avoid this, create the container the first time without mounting the volumes, copy the folders you want to mount from the container to the host, then recreate the container with the mounts and put the original files back into the mounted host folders. Only modify or add configuration files once you have confirmed that nothing was lost.
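
A minimal sketch of that copy-out step, assuming the temporary container is named elk and you want the host folders used in the run command above:

# start the container once WITHOUT the -v mounts, then copy its config out to the host
mkdir -p ~/Documents/docker_env/logstash
docker cp elk:/opt/logstash/config  ~/Documents/docker_env/logstash/config
docker cp elk:/etc/logstash/conf.d  ~/Documents/docker_env/logstash/conf.d

# remove the temporary container and recreate it with the -v options shown above
docker rm -f elk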

2.1.4. Restart the elk container

Once the container has been stopped, you can start or restart elk directly with:

docker start elk

docker restart elk

2.2. Using Kitematic

2.2.1. Create the ELK container

Install Docker Desktop, then install Kitematic from within it, as shown below; after clicking, follow the prompts to download the package and drag it into /Applications.

Docker ELK Environment - Figure 2

Once that is done, open Kitematic and create a new container from the ELK image we just pulled.

Click the New button -> click My Images -> find the elk image -> click CREATE to create the container.

The UI steps are as follows:

Docker ELK Environment - Figure 3

Once created, the container starts with the default parameters.

2.2.2. Modify the host port mappings in Kitematic

Select the newly created container in the left sidebar -> click Settings in the main panel on the right -> click Hostname/Ports -> then edit the ports after localhost:* so they match the container ports on the left, mapping the default Logstash, Kibana, and Elasticsearch ports onto the host.

The UI steps are as follows:
Docker ELK Environment - Figure 4

After editing, click the SAVE button to save the settings and wait for the container to restart and apply them.
You can then reach each service on its default port:

Kibana web interface: http://localhost:5601/app/kibana#/home?_g=()
Elasticsearch JSON interface: http://localhost:9200/
Logstash Beats interface: http://localhost:5044
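
To sanity-check the mappings from the host, a quick curl against the two HTTP endpoints works; the Beats port speaks the lumberjack protocol rather than HTTP, so a plain TCP check is enough there:

curl -s http://localhost:9200/        # Elasticsearch should return a JSON banner
curl -sI http://localhost:5601/       # Kibana should answer with an HTTP status line
nc -z localhost 5044 && echo "Logstash Beats port open"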

3. Create dummy log entries

3.1. Get a bash shell in the container

3.1.1. Using the command line

You can list the running Docker containers with the following command.

# tidy @ TidydeMBP in ~ [14:48:22] C:1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
491409ca8e44 sebp/elk:latest "/usr/local/bin/st..." 3 hours ago Up 3 hours 0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp elk

Then enter the container's bash; the format is:

$ docker exec -it <container-name> /bin/bash

# tidy @ TidydeMBP in ~ [14:56:54]
$ docker exec -it elk /bin/bash
root@4ae9e2a871fa:/#

3.1.2. Using Kitematic

You can also use Kitematic directly: click EXEC to open a shell inside the container:

Docker ELK Environment - Figure 5

3.2. Point the Logstash input at stdin

Run the following command so that Logstash takes its input from the console, letting us produce some test data.

# /opt/logstash/bin/logstash --path.data /tmp/logstash/data \
-e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

Then wait for Logstash to restart; if everything is fine the screen looks like this:

Docker ELK Environment - Figure 6

If after the restart the console shows an error like "No living connections", adjust Docker's memory setting as follows:

Docker ELK Environment - Figure 7

The default is 2 GB; here it was raised to 4 GB. Apply the change, let Docker restart, and then rerun the command above.

3.3. Enter test data

Once Logstash has restarted successfully, type a test entry directly into the console.

this is a dummy entry

Docker ELK Environment - Figure 8

Add as many test entries as you like.

Press ^C to return to the bash prompt.

3.4. Verify the data

In a browser, open a URL of this form:

http://<your-host>:9200/_search?pretty
for example:
http://localhost:9200/_search?pretty
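
The same check from the command line:

curl -s 'http://localhost:9200/_search?pretty'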

If you get back data like the following, the entry was indexed successfully.

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : ".kibana",
        "_type" : "doc",
        "_id" : "config:6.4.0",
        "_score" : 1.0,
        "_source" : {
          "type" : "config",
          "updated_at" : "2018-09-04T03:51:42.086Z",
          "config" : {
            "buildNum" : 17929,
            "telemetry:optIn" : false
          }
        }
      },
      {
        "_index" : "logstash-2018.09.04",
        "_type" : "doc",
        "_id" : "v72Ao2UB4ryUqOiyKpIL",
        "_score" : 1.0,
        "_source" : {
          "message" : "this is a dummy entry",
          "@version" : "1",
          "host" : "4ae9e2a871fa",
          "@timestamp" : "2018-09-04T07:33:29.887Z"
        }
      }
    ]
  }
}

You can also use Kibana's web interface to view and manage the ES indices.

4. Configure Filebeat

4.1. Download

Open the Filebeat download page and download the Filebeat version you need.

Since my machine is a Mac, I chose the Mac build.

# tidy @ TidydeMBP in ~/Documents/elk-6.2.4/filebeat [17:29:51]
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-darwin-x86_64.tar.gz
--2018-09-04 17:31:23-- https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-darwin-x86_64.tar.gz
Resolving artifacts.elastic.co... 107.21.239.197, 107.21.237.188, 184.73.245.233, ...
Connecting to artifacts.elastic.co|107.21.239.197|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7941770 (7.6M) [application/x-gzip]
Saving to: ‘filebeat-6.4.0-darwin-x86_64.tar.gz’

filebeat-6.4.0-darwin-x86_64.tar.gz 100%[=======================================================================>] 7.57M 626KB/s in 13s

2018-09-04 17:31:37 (581 KB/s) - ‘filebeat-6.4.0-darwin-x86_64.tar.gz’ saved [7941770/7941770]

4.2. Extract Filebeat

# tidy @ TidydeMBP in ~/Documents/elk-6.2.4/filebeat [17:31:37]
$ tar xzvf filebeat-6.4.0-darwin-x86_64.tar.gz

4.3. Modify the filebeat.yml configuration

1. Add a prospector config directory for Filebeat and enable automatic config reloading.
2. Send the output to Redis.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================
filebeat.config:
  modules:
    # Glob pattern for configuration loading
    path: ${path.config}/modules.d/*.yml

    # Set to true to enable config reloading
    reload.enabled: false

    # Period on which files under path should be checked for changes
    #reload.period: 10s

  prospectors:
    enabled: true
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: true
    reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Redis output ------------------------------
output.redis:
  enabled: true
  hosts: ["localhost:6379"]
  key: filebeat
  db: 15
  worker: 1
  timeout: 5s
  max_retries: 3

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
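
The two changes relative to the stock example file are easy to miss in the full listing, so here they are in isolation (same values as above):

# 1. load prospector configs from prospectors.d and reload them automatically
filebeat.config:
  prospectors:
    enabled: true
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: true
    reload.period: 10s

# 2. ship events to Redis instead of Elasticsearch/Logstash
output.redis:
  enabled: true
  hosts: ["localhost:6379"]
  key: filebeat
  db: 15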

4.4. Start Redis

We run Redis in Docker as well.

4.4.1. Using the command line

4.4.1.1. Pull the image
sudo docker pull redis
4.4.1.2. Start redis

Create a redis container and run it in the background:

docker run --name redis -d redis

Once it has been stopped, you can start redis again directly with:

docker start redis

4.4.2. Using Kitematic

4.4.2.1. Create and start the redis container

Click the NEW button on the left -> type "redis" in the search box -> click the CREATE button.
That is all it takes to create a redis container.

Docker ELK Environment - Figure 9

4.4.2.2. Modify the host port mapping

Select the newly created container in the left sidebar -> click Settings in the main panel on the right -> click Hostname/Ports -> then edit the port after localhost:* so it matches the container port on the left, mapping redis's default port onto the host.

Docker ELK Environment - Figure 10

4.4.3. Verify

From the host, connect to the newly created redis container with a Redis client to confirm it is reachable.
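
For example, assuming redis-cli is installed on the host and the container port is mapped to the default 6379:

redis-cli -h localhost -p 6379 ping   # should answer PONG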

4.5. Start Filebeat

Pick the start command for your operating system from the startup help page.
On a Mac:

# tidy @ TidydeMBP in ~ [17:50:36]
$ cd /Users/tidy/Documents/elk-6.2.4/filebeat/filebeat-6.4.0-darwin-x86_64

# tidy @ TidydeMBP in ~/Documents/elk-6.2.4/filebeat/filebeat-6.4.0-darwin-x86_64 [17:50:41]
$ ./filebeat -e -c filebeat.yml -d "publish"

4.6. Write the Filebeat prospector config

Suppose we have an application log file info.log.18.07.31.3 with content like this:

18-09-01 16:47:57 INFO pool-2-thread-3 c.t.w.c.o.i.LoggingInterceptor.intercept(39)  | url: https://api.weixin.qq.com/cgi-bin/user/info?access_token=13_AarGHRP0xBZtUbM5bbIm1Z0Ha_pvHVjYA4EF3OFvF1ylp9AEkcOW8rkLKthe4DRo8_5MwgGCr5QQ0-UTeluD3Nh-n10PqGpCN9HZH3kjVWPpgoFs3cG0Yw6nMqbceKEPMtEWmvghvOxgd62hOAFgCIAGSX&openid=oBBmBjsjaAooCLlqMxzRhPLMHzy8&lang=zh_CN, method: GET, time: 93, request(), response({"subscribe":0,"openid":"oBBmBjsjaAooCLlqMxzRhPLMHzy8","tagid_list":[]})
18-09-01 16:47:57 INFO pool-2-thread-1 c.t.w.c.o.i.LoggingInterceptor.intercept(39) | url: https://api.weixin.qq.com/cgi-bin/user/info?access_token=13_AarGHRP0xBZtUbM5bbIm1Z0Ha_pvHVjYA4EF3OFvF1ylp9AEkcOW8rkLKthe4DRo8_5MwgGCr5QQ0-UTeluD3Nh-n10PqGpCN9HZH3kjVWPpgoFs3cG0Yw6nMqbceKEPMtEWmvghvOxgd62hOAFgCIAGSX&openid=oBBmBjsI6r3hSarOyPRvK1e0RTzU&lang=zh_CN, method: GET, time: 88, request(), response({"subscribe":0,"openid":"oBBmBjsI6r3hSarOyPRvK1e0RTzU","tagid_list":[]})
18-09-01 16:47:57 INFO pool-2-thread-2 c.t.w.c.o.i.LoggingInterceptor.intercept(39) | url: https://api.weixin.qq.com/cgi-bin/user/info?access_token=13_AarGHRP0xBZtUbM5bbIm1Z0Ha_pvHVjYA4EF3OFvF1ylp9AEkcOW8rkLKthe4DRo8_5MwgGCr5QQ0-UTeluD3Nh-n10PqGpCN9HZH3kjVWPpgoFs3cG0Yw6nMqbceKEPMtEWmvghvOxgd62hOAFgCIAGSX&openid=oBBmBjggVlN5KMsOTfTroqlhV1vg&lang=zh_CN, method: GET, time: 63, request(), response({"subscribe":0,"openid":"oBBmBjggVlN5KMsOTfTroqlhV1vg","tagid_list":[]})
18-09-01 16:47:57 INFO pool-2-thread-5 c.t.w.c.o.i.LoggingInterceptor.intercept(39) | biz: aA
18-09-01 16:47:57 INFO pool-2-thread-3 c.t.w.c.o.i.LoggingInterceptor.intercept(39) | biz: bB
18-09-01 16:47:57 INFO pool-2-thread-4 c.t.w.c.o.i.LoggingInterceptor.intercept(39) | biz: cC
18-09-01 16:47:57 INFO pool-2-thread-4 c.t.w.c.o.i.LoggingInterceptor.intercept(39) | biz: dD

Create a prospectors.d folder under the filebeat-6.4.0-darwin-x86_64 directory and add a tisson-open-rpc-service.yml file that configures the application log paths to watch and the parsing rules.

- type: log
  enabled: true
  paths:
    - /Users/tidy/Documents/apache-tomcat8/apache-tomcat-8.5.20-shuniu/logs/info.log.18.07.31.3
  encoding: utf-8
  include_lines: ['url']
  fields:
    log_type: interface
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /Users/tidy/Documents/apache-tomcat8/apache-tomcat-8.5.20-shuniu/logs/info.log.18.07.31.3
  encoding: utf-8
  include_lines: ['biz']
  fields:
    log_type: biz
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /Users/tidy/Documents/apache-tomcat8/apache-tomcat-8.5.20-shuniu/logs/info.log.18.07.31.3
  encoding: utf-8
  include_lines: ['^\d{2,4}-\d{1,2}-\d{1,2}']

As soon as you save the prospector file tisson-open-rpc-service.yml, Filebeat automatically picks up the configured log paths and starts shipping the matching lines.

At this point, open database 15 in Redis and you can see the shipped data.
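
One way to peek at the queue from the host, assuming the same key and db as configured above:

redis-cli -h localhost -p 6379 -n 15 llen filebeat       # number of queued events
redis-cli -h localhost -p 6379 -n 15 lrange filebeat 0 0 # inspect the first event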

Docker ELK Environment - Figure 11

5. Configure Logstash in the ELK container

Configure Logstash to read from Redis, parse the events, and index them into ES.

First open an EXEC shell into the ELK container, then make the following changes.

5.1. Enable Logstash's automatic config reloading

root@4ae9e2a871fa:/etc/init.d# cd /opt/logstash/config/
root@4ae9e2a871fa:/opt/logstash/config# vi logstash.yml

Search for the string auto by typing:

/auto

and pressing Enter.

The cursor jumps to:

# config.reload.automatic: false

Uncomment this setting and change its value to true:

config.reload.automatic: true

Save and quit:

:wq
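
If you prefer not to edit interactively, a non-interactive equivalent (a sketch; adjust the path if your image differs) is:

sed -i 's|^# *config.reload.automatic: false|config.reload.automatic: true|' /opt/logstash/config/logstash.yml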

5.2. Restart the container
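
Restarting the elk container lets the new setting take effect (same command as in section 2.1.4):

docker restart elk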

5.3. Add an input

Add configuration that reads the log data from Redis and parses it.

Go into /etc/logstash/conf.d:

$ docker exec -it elk /bin/bash
root@4ae9e2a871fa:/# cd /etc/logstash/conf.d/
root@4ae9e2a871fa:/etc/logstash/conf.d#

5.3.1. Modify 30-output.conf

Change the content of 30-output.conf to:

output {
  if [log_type] {
    elasticsearch {
      hosts => "localhost:9200"
      action => "index"
      index => "%{log_type}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      action => "index"
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
  stdout {
    codec => rubydebug
  }
}

5.3.2. Rename 30-output.conf

Rename 30-output.conf to elasticsearch_output.conf:

root@4ae9e2a871fa:/etc/logstash/conf.d# mv 30-output.conf elasticsearch_output.conf
root@4ae9e2a871fa:/etc/logstash/conf.d# ll
total 24
drwxr-xr-x 1 logstash logstash 4096 Sep 4 10:36 ./
drwxr-xr-x 1 logstash logstash 4096 Aug 27 20:07 ../
-rw-r--r-- 1 root root 177 Aug 27 20:02 02-beats-input.conf
-rw-r--r-- 1 root root 456 Aug 27 20:02 10-syslog.conf
-rw-r--r-- 1 root root 113 Aug 27 20:02 11-nginx.conf
-rw-r--r-- 1 root root 414 Sep 4 10:33 elasticsearch_output.conf

5.3.3. Add filter-test1.conf

root@4ae9e2a871fa:/etc/logstash/conf.d# vi filter-test1.conf
filter {
  if [log_type] {
    if [log_type] == "interface" {
      grok {
        match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:level}\s*%{JAVAFILE:thread}\s*%{JAVAFILE:class}\(%{NUMBER:lineNumber}\)\s*\|\s*url:\s*%{GREEDYDATA:url}, method:\s*%{GREEDYDATA:method}, time:\s*%{NUMBER:requestTime}, request\(%{GREEDYDATA:requestBody}\), response\(%{GREEDYDATA:responseBody}\)" }
      }
      json {
        source => "responseBody"
        target => "responseBody"
      }
    }
    if [log_type] == "biz" {
      grok {
        match => { "message" => "\s*%{TIMESTAMP_ISO8601:time}\s*%{LOGLEVEL:level}\s*%{JAVAFILE:thread}\s*%{JAVAFILE:class}\(%{NUMBER:lineNumber}\)\s*\|\s*biz:%{GREEDYDATA:bizLog}" }
      }
    }

    date {
      match => [ "time", "yy-MM-dd HH:mm:ss" ]
    }
    mutate {
      remove_field => ["message"]
    }
  }
}

5.3.4. Add redis_input.conf

Here host is set to Docker for Mac's special hostname, so the container talks directly to the Redis instance running on the host.

input {
  redis {
    data_type => "list"
    key => "filebeat"
    host => "docker.for.mac.localhost"
    port => 6379
    threads => 5
    db => 15
  }
}

At this point, once we save the configuration files and exit, a look at Redis shows that the queued data has all been consumed, and Logstash's pipeline has created the three indices we configured.
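
You can confirm the indices from the host with a quick check (index names follow the output config above):

curl -s 'http://localhost:9200/_cat/indices?v'   # expect interface-*, biz-* and logstash-* entries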

Docker ELK Environment - Figure 12

6. Configure Kibana Index Patterns

Once the filtered data has landed in ES, we create Index Patterns in Kibana to browse it.

6.1. Configure Index Patterns

Open Kibana's web interface, then click Management -> Kibana Index Patterns -> Create Index Pattern -> type interface-* in the text box -> click > Next step -> choose @timestamp from the dropdown -> finally click the Create index pattern button.

That completes the index pattern; click Discover and you can browse the data through it.

Docker ELK Environment - Figure 13

7. Back up the container

We have now finished customizing the Docker ELK setup. Next we back up this configured container, so that whenever it is needed elsewhere we can use it directly instead of pulling a fresh ELK image and doing all the configuration again.

7.1. List the existing containers

# tidy @ TidydeMBP in ~ [14:34:25]
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
43a05b5ae610 redis:latest "docker-entrypoint..." 4 hours ago Exited (0) 3 hours ago redis
052e57f262ae sebp/elk:latest "/usr/local/bin/st..." 18 hours ago Exited (137) 3 hours ago elk

7.2. Create a local container snapshot

Create it with the format: docker commit -p <container-id> <image-name>.

# tidy @ TidydeMBP in ~ [14:34:28]
$ docker commit -p 052e57f262ae elk_filebeat_redis
sha256:42df48a426925959b5cba3d7793cc2caeceef750c0187e8547b47b85fcd8d324

# tidy @ TidydeMBP in ~ [14:42:03]
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
elk_filebeat_redis latest 42df48a42692 6 seconds ago 1.52GB
sebp/elk latest 415660a78986 8 days ago 1.45GB
redis latest 4e8db158f18d 4 weeks ago 83.4MB

As shown above, we have committed the modified container to a new image; next we push this image to the Docker Hub registry as a backup.

7.3. Docker Hub

7.3.1. Register

Register an account on Docker Hub.

7.3.2. Log in locally

Log in with the following command.

$ docker login
Username:
Password:

7.3.3. Tag the image

Tag it with the format: docker tag <image-id> <username>/<image-name>:<tag>.

# tidy @ TidydeMBP in ~ [14:42:09]
$ docker tag 42df48a42692 tidyko/elk_filebeat_redis:v1

# tidy @ TidydeMBP in ~ [14:52:58]
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
elk_filebeat_redis latest 42df48a42692 11 minutes ago 1.52GB
tidyko/elk_filebeat_redis v1 42df48a42692 11 minutes ago 1.52GB
sebp/elk latest 415660a78986 8 days ago 1.45GB
redis latest 4e8db158f18d 4 weeks ago 83.4MB

Now the local image list contains an image named tidyko/elk_filebeat_redis.

7.3.4. Push the image

Push it with the format: docker push REPOSITORY.

# tidy @ TidydeMBP in ~ [14:53:10]
$ docker push tidyko/elk_filebeat_redis
The push refers to a repository [docker.io/tidyko/elk_filebeat_redis]
74ac712f31c2: Preparing
0ecb078d2795: Preparing
...
...
...
ff986b10a018: Mounted from sebp/elk
v1: digest: sha256:e4afdd36a986bd6848a391f8a3294bb21f6617dd752cc819a3b1411e1720af17 size: 8855

The image is now safely backed up to Docker Hub.

From now on, we can simply pull the backed-up image whenever we need it.
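
For example, on a new machine (a sketch; the exact ports and mounts depend on how you want to run it):

docker pull tidyko/elk_filebeat_redis:v1
docker run -d -p 5601:5601 -p 9200:9200 -p 5044:5044 --name elk tidyko/elk_filebeat_redis:v1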

Docker ELK Environment - Figure 14

8. References

Docker Docs

sebp/elk

Elasticsearch, Logstash, Kibana (ELK) Docker image documentation