
Deploying an ELK + Filebeat Log Collection System on CentOS 7

运维开发网 (https://www.qedev.com), 2020-10-14 12:05, source: 51CTO, author: wx5ed6455937203

Overview

ELK is short for three open-source tools: Elasticsearch, Logstash, and Kibana. Together they serve as a log management system that can collect logs from any source and analyze and visualize them.

Elasticsearch is an open-source distributed search engine; its main role is to collect, analyze, and store data.

Logstash is a server-side data shipping and processing tool; it collects, parses, and filters logs, pulling data from different sources, transforming it, and storing it in Elasticsearch for later processing.

Kibana is a web-based graphical interface whose main purpose is to search, analyze, and visualize the log data stored in Elasticsearch.

Filebeat is a lightweight open-source log file shipper. It is usually installed on the clients whose data needs to be collected; given the directories and log format, Filebeat quickly collects the data and sends it either to Logstash for parsing or directly to Elasticsearch for storage.

Preparation

Prepare three CentOS 7 virtual machines: configure their IP addresses and hostnames, disable the firewall and SELinux, synchronize the system time, and add hostname-to-IP mappings (a command sketch follows the tables below).

hostname    IP
node1       192.168.29.143
node2       192.168.29.142
node3       192.168.29.144

The deployment layout of the three machines is:

Node     Components deployed
node1    elasticsearch+logstash+kibana
node2    elasticsearch
node3    redis+nginx+httpd+filebeat
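The original does not list the exact preparation commands. A minimal sketch for node1 might look like the following (using chronyd for time synchronization is an assumption; run the equivalent on node2 and node3 with their own hostnames):

# Set the hostname (adjust per node)
[root@node1 ~]# hostnamectl set-hostname node1
# Disable the firewall and SELinux
[root@node1 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@node1 ~]# setenforce 0
[root@node1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Synchronize the system time (assumes the default chronyd service)
[root@node1 ~]# systemctl start chronyd && systemctl enable chronyd
# Add hostname-to-IP mappings
[root@node1 ~]# cat >> /etc/hosts <<EOF
192.168.29.143 node1
192.168.29.142 node2
192.168.29.144 node3
EOF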

Download the elasticsearch, logstash, kibana, and filebeat archives from the official website.

Installing the Java environment

Download the JDK archive from the official website and extract it.

Java must be installed on all three nodes.

# Add environment variables
[root@node1 ~]# vi /etc/profile
JAVA_HOME=/usr/local/java
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
# Reload the environment variables
[root@node1 ~]# source /etc/profile
# Verify the Java installation
[root@node1 ~]# java -version
java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)

Install and configure Redis on node3

[root@node3 ~]# yum install epel-release -y
[root@node3 ~]# yum install redis -y
# Edit the configuration file
[root@node3 ~]# vi /etc/redis.conf
bind 0.0.0.0
daemonize yes
# Start the service
[root@node3 ~]# systemctl start redis
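A quick connectivity check (an added step, not in the original) confirms that Redis is reachable on the LAN address that Filebeat will use later:

# Verify that Redis is up and answering on 192.168.29.144
[root@node3 ~]# redis-cli -h 192.168.29.144 ping
PONG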

Install and configure Nginx on node3

Download the yum repository configuration file from the official Nginx site.

[root@node3 ~]# yum install nginx -y
# Set the access-log output format to JSON
[root@node3 ~]# vi /etc/nginx/nginx.conf
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format   main  '{ "time_local": "$time_local", '
                             '"remote_addr": "$remote_addr", '
                             '"remote_user": "$remote_user", '
                             '"body_bytes_sent": "$body_bytes_sent", '
                             '"request_time": "$request_time", '
                             '"status": "$status", '
                             '"host": "$host", '
                             '"request": "$request", '
                             '"request_method": "$request_method", '
                             '"uri": "$uri", '
                             '"http_referrer": "$http_referer", '
                             '"http_x_forwarded_for": "$http_x_forwarded_for", '
                             '"http_user_agent": "$http_user_agent" '
                        '}';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    include /etc/nginx/conf.d/*.conf;
}
# To avoid a port conflict between Nginx and httpd, change the Nginx listen port to 8080
[root@node3 ~]# vi /etc/nginx/conf.d/default.conf
listen      8080;
# Start the service
[root@node3 ~]# systemctl start nginx
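Optionally, the configuration syntax and the JSON access-log format can be checked before moving on; this verification step is an addition to the original walkthrough:

# Validate the configuration, then generate one request and inspect the JSON access log
[root@node3 ~]# nginx -t
[root@node3 ~]# curl -s http://127.0.0.1:8080/ > /dev/null
# The last line should be a single JSON object
[root@node3 ~]# tail -n 1 /var/log/nginx/access.log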

Install and configure httpd on node3

[root@node3 ~]# yum install httpd -y
# Set the access-log output format to JSON
[root@node3 ~]# vi /etc/httpd/conf/httpd.conf
<IfModule log_config_module>
    LogFormat "{ \
    \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
    \"@version\": \"1\", \
    \"tags\":[\"apache\"], \
    \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
    \"clientip\": \"%a\", \
    \"duration\": %D, \
    \"status\": %>s, \
    \"request\": \"%U%q\", \
    \"urlpath\": \"%U\", \
    \"urlquery\": \"%q\", \
    \"bytes\": %B, \
    \"method\": \"%m\", \
    \"site\": \"%{Host}i\", \
    \"referer\": \"%{Referer}i\", \
    \"useragent\": \"%{User-agent}i\" \
    }" ls_apache_json
    <IfModule logio_module>
      # You need to enable mod_logio.c to use %I and %O
      #LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
    </IfModule>
    CustomLog "logs/access_log" ls_apache_json
</IfModule>
# Start the service
[root@node3 ~]# systemctl start httpd.service
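As with Nginx, a quick check (an added step) confirms that httpd is writing JSON access-log entries:

# Generate a request and confirm that the access log is written as JSON
[root@node3 ~]# curl -s http://127.0.0.1/ > /dev/null
# The last line should be a single JSON object tagged "apache"
[root@node3 ~]# tail -n 1 /var/log/httpd/access_log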

System settings

# Settings on node1 and node2
[root@node1 ~]# cat /etc/security/limits.conf
*               soft    nofile           65540
*               hard    nofile           65540
# Reboot the machines (or at least start a new login session) for the new limits to take effect
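After logging back in, the new limits can be confirmed with ulimit; this check is an addition to the original steps:

# In a fresh login session, confirm the open-files limits
[root@node1 ~]# ulimit -Hn
65540
[root@node1 ~]# ulimit -Sn
65540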

Create a user

# Elasticsearch cannot run as root, so create the kawhi user on node1 and node2
[root@node1 ~]# useradd kawhi
[root@node1 ~]# echo 123456 | passwd --stdin kawhi

[root@node2 ~]# useradd kawhi
[root@node2 ~]# echo 123456 | passwd --stdin kawhi

Filebeat deployment

Install and configure Filebeat on node3

[root@node3 ~]# tar -zxvf filebeat-7.9.2-linux-x86_64.tar.gz
[root@node3 ~]# mv filebeat-7.9.2-linux-x86_64 /usr/local/filebeat

# Edit the configuration file
[root@node3 ~]# vi /usr/local/filebeat/filebeat.yml
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
    - /var/log/httpd/*log
# ------------------------------ Redis Output -------------------------------
output.redis:
  hosts: ["192.168.29.144:6379"]
  key: "web_log"

# Start the service
[root@node3 ~]# cd /usr/local/filebeat
[root@node3 filebeat]# ./filebeat &
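The configuration and the Redis connection can also be verified with Filebeat's built-in test subcommands (available in 7.x); this step is an addition to the original walkthrough:

# Check the configuration syntax and the connection to the configured Redis output
[root@node3 filebeat]# ./filebeat test config -c filebeat.yml
[root@node3 filebeat]# ./filebeat test output -c filebeat.yml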

Testing

# Check Redis
127.0.0.1:6379> keys *
1) "web_log"
# Dumping the whole list is not recommended when it holds a large amount of data
127.0.0.1:6379> LRANGE web_log 0 -1
1) "{\"@timestamp\":\"2020-10-12T07:46:21.017Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.9.2\"},\"ecs\":{\"version\":\"1.5.0\"},\"host\":{\"containerized\":false,\"ip\":[\"192.168.29.144\"],\"mac\":[\"00:0c:29:88:ce:5c\"],\"hostname\":\"node3\",\"architecture\":\"x86_64\",\"name\":\"node3\",\"os\":{\"name\":\"CentOS Linux\",\"kernel\":\"3.10.0-1062.el7.x86_64\",\"codename\":\"Core\",\"platform\":\"centos\",\"version\":\"7 (Core)\",\"family\":\"redhat\"},\"id\":\"46913e559f2444fe9a1dccf77210e87c\"},\"log\":{\"offset\":457,\"file\":{\"path\":\"/var/log/nginx/access.log\"}},\"message\":\"{ \\\"time_local\\\": \\\"12/Oct/2020:15:46:13 +0800\\\", \\\"remote_addr\\\": \\\"192.168.29.1\\\", \\\"remote_user\\\": \\\"-\\\", \\\"body_bytes_sent\\\": \\\"0\\\", \\\"request_time\\\": \\\"0.000\\\", \\\"status\\\": \\\"304\\\", \\\"host\\\": \\\"192.168.29.144\\\", \\\"request\\\": \\\"GET / HTTP/1.1\\\", \\\"request_method\\\": \\\"GET\\\", \\\"uri\\\": \\\"/index.html\\\", \\\"http_referrer\\\": \\\"-\\\", \\\"http_x_forwarded_for\\\": \\\"-\\\", \\\"http_user_agent\\\": \\\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36\\\" }\",\"input\":{\"type\":\"log\"},\"agent\":{\"name\":\"node3\",\"type\":\"filebeat\",\"version\":\"7.9.2\",\"hostname\":\"node3\",\"ephemeral_id\":\"d4174c89-4b5d-4ff1-940a-d5241454d237\",\"id\":\"7fecaabc-6727-4c7d-b1bf-d0e5c7dd69b1\"}}"
2) "{\"@timestamp\":\"2020-10-12T07:46:21.018Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.9.2\"},\"agent\":{\"hostname\":\"node3\",\"ephemeral_id\":\"d4174c89-4b5d-4ff1-940a-d5241454d237\",\"id\":\"7fecaabc-6727-4c7d-b1bf-d0e5c7dd69b1\",\"name\":\"node3\",\"type\":\"filebeat\",\"version\":\"7.9.2\"},\"log\":{\"offset\":912,\"file\":{\"path\":\"/var/log/nginx/access.log\"}},\"message\":\"{ \\\"time_local\\\": \\\"12/Oct/2020:15:46:13 +0800\\\", \\\"remote_addr\\\": \\\"192.168.29.1\\\", \\\"remote_user\\\": \\\"-\\\", \\\"body_bytes_sent\\\": \\\"0\\\", \\\"request_time\\\": \\\"0.000\\\", \\\"status\\\": \\\"304\\\", \\\"host\\\": \\\"192.168.29.144\\\", \\\"request\\\": \\\"GET / HTTP/1.1\\\", \\\"request_method\\\": \\\"GET\\\", \\\"uri\\\": \\\"/index.html\\\", \\\"http_referrer\\\": \\\"-\\\", \\\"http_x_forwarded_for\\\": \\\"-\\\", \\\"http_user_agent\\\": \\\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36\\\" }\",\"input\":{\"type\":\"log\"},\"ecs\":{\"version\":\"1.5.0\"},\"host\":{\"ip\":[\"192.168.29.144\"],\"mac\":[\"00:0c:29:88:ce:5c\"],\"hostname\":\"node3\",\"architecture\":\"x86_64\",\"name\":\"node3\",\"os\":{\"version\":\"7 (Core)\",\"family\":\"redhat\",\"name\":\"CentOS Linux\",\"kernel\":\"3.10.0-1062.el7.x86_64\",\"codename\":\"Core\",\"platform\":\"centos\"},\"id\":\"46913e559f2444fe9a1dccf77210e87c\",\"containerized\":false}}"

ELK deployment

Deploy the elasticsearch cluster on node1 and node2

Upload the archives and extract them

[root@node1 ~]# tar -zxvf elasticsearch-7.6.0-linux-x86_64.tar.gz
[root@node1 ~]# mv elasticsearch-7.6.0 /usr/local/elasticsearch
[root@node2 ~]# tar -zxvf elasticsearch-7.6.0-linux-x86_64.tar.gz
[root@node2 ~]# mv elasticsearch-7.6.0 /usr/local/elasticsearch

# Edit the node1 configuration file
[root@node1 ~]# vi /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: my-cluster
node.name: node1
bootstrap.memory_lock: false
network.host: 192.168.29.143
http.port: 9200
discovery.seed_hosts: ["192.168.29.143","192.168.29.142"]
cluster.initial_master_nodes: ["192.168.29.143"]
node.max_local_storage_nodes: 100
http.cors.enabled: true                
http.cors.allow-origin: "*"      

# Edit the node2 configuration file
[root@node2 ~]# vi /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: my-cluster
node.name: node2
bootstrap.memory_lock: false
network.host: 192.168.29.142
http.port: 9200
discovery.seed_hosts: ["192.168.29.143","192.168.29.142"] 
cluster.initial_master_nodes: ["192.168.29.143"]

# Change the ownership of the elasticsearch directory to the kawhi user
[root@node1 ~]# chown -R kawhi:kawhi /usr/local/elasticsearch
[root@node2 ~]# chown -R kawhi:kawhi /usr/local/elasticsearch

Start the service

[root@node1 ~]# su kawhi
[kawhi@node1 root]$ cd /usr/local/elasticsearch/bin/
[kawhi@node1 bin]$ nohup ./elasticsearch > /dev/null 2>&1 &

[root@node2 ~]# su kawhi
[kawhi@node2 root]$ cd /usr/local/elasticsearch/bin/
[kawhi@node2 bin]$ nohup ./elasticsearch > /dev/null 2>&1 &

Verify that Elasticsearch is running

Visit http://node1:9200

(screenshot: node information returned by the Elasticsearch instance on node1)

Visit http://node2:9200

(screenshot: node information returned by the Elasticsearch instance on node2)
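Besides the two node pages, the cluster state can also be checked from the command line with the standard _cluster/health and _cat APIs (an added check, not part of the original):

# Both nodes should be listed and the status should be green (or yellow while replicas allocate)
[root@node1 ~]# curl 'http://192.168.29.143:9200/_cluster/health?pretty'
[root@node1 ~]# curl 'http://192.168.29.143:9200/_cat/nodes?v'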

Deploy the elasticsearch-head-master plugin on node1

# Install npm
[root@node1 ~]# yum install npm -y

Upload the archive and extract it

[root@node1 ~]# unzip elasticsearch-head-master.zip
[root@node1 ~]# mv elasticsearch-head-master/ /usr/local/head-master
[root@node1 ~]# cd /usr/local/head-master/
[root@node1 head-master]# npm install

Start the service

[root@node1 ~]# cd /usr/local/head-master/
[root@node1 head-master]# nohup npm run start > /dev/null 2>&1 &

Verify that the head-master plugin is running

Visit http://node1:9100

(screenshot: the elasticsearch-head web interface)

Deploy Logstash on node1

Upload the archive and extract it

[root@node1 ~]# tar -zxvf logstash-7.6.0.tar.gz
[root@node1 ~]# mv logstash-7.6.0 /usr/local/logstash
# Create a directory for the log-collection pipeline configuration
[root@node1 ~]# mkdir /usr/local/logstash/conf.d/

Add a log-collection pipeline on node1 that reads data from the Redis database on node3 and stores it in Elasticsearch.

[root@node1 ~]# cat /usr/local/logstash/conf.d/redis_to_elk.conf
input {
  redis {
    host => "192.168.29.144"
    port => "6379"
    data_type => "list"
    type => "log"
    key => "web_log"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.29.143"]
    index => "beats_log-%{+YYYY.MM.dd}"
  }
}
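Before starting the service, the pipeline file can be syntax-checked with Logstash's standard --config.test_and_exit flag; this check is an addition to the original steps:

# Syntax-check the pipeline without starting it
[root@node1 ~]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf.d/redis_to_elk.conf --config.test_and_exit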

Start the service

[root@node1 ~]# cd /usr/local/logstash/bin/
[root@node1 bin]# ./logstash -f /usr/local/logstash/conf.d/redis_to_elk.conf &

Verify that Logstash is running

# The following line at the end of the console output indicates a successful start
Successfully started Logstash API endpoint {:port=>9600}
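Once events start flowing from Redis, the daily index defined in the output section should appear in Elasticsearch; a quick way to confirm this (not part of the original) is:

# List indices and look for the beats_log-* entries
[root@node1 ~]# curl -s 'http://192.168.29.143:9200/_cat/indices?v' | grep beats_log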

Deploy Kibana on node1

Upload the archive and extract it

[root@node1 ~]# tar -zxvf kibana-7.6.0-linux-x86_64.tar.gz
[root@node1 ~]# mv kibana-7.6.0-linux-x86_64 /usr/local/kibana
# Edit the configuration file
[root@node1 ~]# vi /usr/local/kibana/config/kibana.yml
server.host: "0.0.0.0"
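The original only changes server.host. Because Elasticsearch here is bound to 192.168.29.143 rather than localhost, Kibana may also need its elasticsearch.hosts setting pointed at node1; the following extra kibana.yml line is an assumption, not shown in the original:

# Assumed addition: point Kibana at the node1 Elasticsearch HTTP endpoint
elasticsearch.hosts: ["http://192.168.29.143:9200"]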

Start the service

[root@node1 ~]# cd /usr/local/kibana/bin/
[root@node1 bin]# ./kibana --allow-root &

Verify that Kibana is running

[root@node1 ~]# netstat -tnlp | grep 5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      3644/./../node/bin/ 

(screenshot: the Kibana web interface on port 5601)

All components of the ELK log collection system are now deployed.

Testing and verification

Generate logs

Access the Nginx and Apache servers on node3 from a browser to generate log entries.

(screenshots: the Nginx and Apache default pages opened in a browser)

Configure the Kibana index pattern

(screenshots: opening the index pattern configuration in Kibana)

When defining the index pattern, enter it according to the index name format defined in the output section of the Logstash pipeline file on node1.
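For example, since the Logstash output above writes daily indices named beats_log-%{+YYYY.MM.dd}, an index pattern of the following form matches all of them:

beats_log-*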

(screenshots: creating the index pattern in Kibana)

View the collected logs

(screenshots: the collected log entries displayed in Kibana)

At this point, the ELK + Filebeat log collection system is fully deployed and running.
