Deploying the Open-Source Log Management Platform ELK Stack 7.9.0

https://www.qedev.com 2020-09-15 12:21 Source: 51CTO Author: juestnow

Environment

To simplify installation and upgrades, the Elastic Stack components are released with synchronized version numbers. This deployment uses the latest 7.9 GA release. Official installation options include tar, rpm, docker, and yum; I chose the tar packages.

Elastic Stack components:

Beats 7.9 (filebeat)

Elasticsearch 7.9

Kibana 7.9

Logstash 7.9

Operating system: CentOS 8.2.2004

JDK version: jdk-14.0.2_linux-x64_bin.rpm (Logstash depends on the JDK)

Redis version: 5.0.3 (installed via yum)

Download links:

https://artifacts.elastic.co/downloads/kibana/kibana-7.9.0-linux-x86_64.tar.gz
https://artifacts.elastic.co/downloads/logstash/logstash-7.9.0.tar.gz
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.0-linux-x86_64.tar.gz
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.0-linux-x86_64.tar.gz
https://www.oracle.com/java/technologies/javase-jdk14-downloads.html

IP plan:

master nodes: 192.168.2.175 192.168.2.176 192.168.2.177
data nodes: 192.168.2.185 192.168.2.187
coordinating (query) node: 192.168.3.62

Operating system initialization

Clock synchronization:

Set the host time zone and start the chronyd time-sync service.

timedatectl set-timezone Asia/Shanghai
systemctl start chronyd

System parameters

Elasticsearch listens on 127.0.0.1 by default, which obviously cannot serve cross-host traffic. Once any network-related setting is changed, Elasticsearch switches from development mode to production mode and runs a series of bootstrap checks at startup to catch misconfiguration.

To satisfy these bootstrap checks, adjust the following parameters:

1. max_map_count

By default Elasticsearch stores its indices using a mix of NioFS (a non-blocking file system) and MMapFS (a memory-mapped file system). Make sure the maximum map count is configured high enough that ample virtual memory is available for mmapped files. If this value is too low, Elasticsearch logs the following error at startup:

[1] bootstrap checks failed

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Fix:

# vim /etc/sysctl.conf
vm.max_map_count=262144
# sysctl -p
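The pass/fail logic of this bootstrap check is easy to reproduce; a minimal sketch (the threshold is taken from the error message above, and reading the live value is left as a comment since it only works on a Linux host):

```python
# Minimal sketch of the max_map_count bootstrap check Elasticsearch performs.
REQUIRED_MAP_COUNT = 262144

def map_count_check(current: int, required: int = REQUIRED_MAP_COUNT) -> str:
    """Return "ok" if the check passes, else reproduce the error text."""
    if current >= required:
        return "ok"
    return (f"max virtual memory areas vm.max_map_count [{current}] is too low, "
            f"increase to at least [{required}]")

# On a Linux host the live value could be read like this:
# current = int(open("/proc/sys/vm/max_map_count").read())
print(map_count_check(65530))
print(map_count_check(262144))
```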

2. Raise the maximum number of file descriptors

# vim /etc/security/limits.conf
* soft nofile 655350
* hard nofile 655350

3. Raise the maximum number of threads

# vim /etc/security/limits.conf
* soft nproc 40960
* hard nproc 40960

4. Create the elastic account, required to run the stack

useradd elastic -s /sbin/nologin -M

5. Create the deployment directory

mkdir -p /apps/elk

Installation

Installation order:

1. elasticsearch

2. kibana

3. jdk-14.0.2_linux-x64_bin.rpm

4. Redis

5. logstash

6. filebeat

7. Example: collecting nginx logs

Install Elasticsearch

1. Download Elasticsearch

cd /apps/elk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.0-linux-x86_64.tar.gz

2. Unpack Elasticsearch

tar -xvf elasticsearch-7.9.0-linux-x86_64.tar.gz

3. Cluster configuration

Elasticsearch configuration files use YAML. With the tar (and yum) install they live in the config/ directory under the unpack location; edit the main configuration file, elasticsearch.yml:

cd elasticsearch-7.9.0/config
cat elasticsearch.yml
# Cluster name. A node can only join a cluster whose cluster.name matches. Use a descriptive name; avoid reusing the same cluster name across environments.
cluster.name: k8s-es
# Node name. By default Elasticsearch uses the first 7 characters of a randomly generated UUID. Environment variables are supported here.
node.name: ${HOSTNAME}
# Lock memory at startup and disable swap to improve ES performance. Other settings must be adjusted alongside this one, discussed below.
bootstrap.memory_lock: true
# Disable the SecComp system call filter
bootstrap.system_call_filter: false
# Address this node binds to and that clients use to reach it
network.host: 192.168.2.175
# HTTP port
http.port: 9200
# Compress data sent over TCP transport
transport.tcp.compress: true
# Node discovery, probing ports 9300-9305. List the addresses of all master-eligible nodes in the cluster.
discovery.seed_hosts: ["192.168.2.175","192.168.2.176", "192.168.2.177"]
# Initial set of master-eligible nodes when bootstrapping a brand-new cluster. Defaults to an empty list, meaning the node expects to join an already-bootstrapped cluster.
cluster.initial_master_nodes:  ["192.168.2.175","192.168.2.176", "192.168.2.177"]
# Number of master-eligible nodes that must communicate during master election
discovery.zen.minimum_master_nodes: 2
# Start recovery as soon as this many nodes have joined the cluster
gateway.recover_after_nodes: 2
# If the expected number of nodes is not reached, wait this long and then start shard recovery anyway
gateway.recover_after_time: 10m
# Expected number of nodes in the cluster. Once this many nodes have joined, recovery of each node's local shards starts immediately. The default is 0, i.e. no waiting.
gateway.expected_nodes: 3
# Number of concurrent primary-shard recoveries per node during initial data recovery
cluster.routing.allocation.node_initial_primaries_recoveries: 8
# Maximum number of concurrent shard relocations allowed on a node
cluster.routing.allocation.node_concurrent_recoveries: 8
# Maximum bandwidth for data transfer between nodes during recovery
indices.recovery.max_bytes_per_sec: 100mb
# Number of nodes allowed to run on one machine
node.max_local_storage_nodes: 1
# Whether this node is master-eligible: true on 192.168.2.175-177; false on 192.168.2.185, 192.168.2.187 and 192.168.3.62
node.master: true 
# Whether this node stores data: false on 192.168.2.175-177 and 192.168.3.62; true on 192.168.2.185 and 192.168.2.187
node.data: false
# Field data cache memory limit
indices.fielddata.cache.size: 30%
# In-flight requests circuit breaker limit
network.breaker.inflight_requests.limit: 80%
# Machine learning (a paid feature requiring xpack.ml.enabled=true; not covered here)
node.ml: false
xpack.ml.enabled: false
# Enable X-Pack monitoring
xpack.monitoring.enabled: true
# ES thread pool settings
thread_pool:
    write:
       queue_size: 200
# Enable ES security
xpack.security.enabled: true
# Enable TLS on the transport layer; required when cluster credentials are configured
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: ./elastic-certificates.p12
# Enable HTTPS on port 9200 (optional)
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
#xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12
# Adjust jvm.options to match your server's resources

4. Generate the p12 certificate and the systemd unit

# Create the TLS certificate files
cd elasticsearch-7.9.0/config
../bin/elasticsearch-certutil ca
../bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Create the systemd unit
cat > /usr/lib/systemd/system/elastic.service << EOF
[Unit]
Description=elasticsearch service
After=syslog.target
After=network.target

[Service]
User=elastic
Group=elastic
LimitNOFILE=128000
LimitNPROC=128000
LimitMEMLOCK=infinity
Restart=on-failure
KillMode=process
ExecStart=/apps/elk/elasticsearch-7.9.0/bin/elasticsearch
ExecReload=/bin/kill -HUP \$MAINPID
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
# Distribute the files
cd /apps/
scp -r elk 192.168.2.176:/apps/
scp -r elk 192.168.2.177:/apps/
scp -r elk 192.168.2.185:/apps/
scp -r elk 192.168.2.187:/apps/
scp -r elk 192.168.3.62:/apps/
scp /usr/lib/systemd/system/elastic.service 192.168.2.176:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.2.177:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.2.185:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.2.187:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.3.62:/usr/lib/systemd/system/elastic.service
# Set ownership of the deployment directory
chown -R elastic:elastic /apps/elk
# On nodes 192.168.2.176 and 192.168.2.177, update in elasticsearch.yml:
node.name:
network.host:
# Set ownership of the deployment directory
chown -R elastic:elastic /apps/elk
# On nodes 192.168.2.185 and 192.168.2.187, update:
node.name:
network.host:
node.master: false 
node.data: true
# Delete or comment out the following settings:
# cluster.initial_master_nodes:  ["192.168.2.175","192.168.2.176", "192.168.2.177"]
# discovery.zen.minimum_master_nodes: 2
# Set ownership of the deployment directory
chown -R elastic:elastic /apps/elk
# On node 192.168.3.62, update:
node.name:
network.host:
node.master: false 
node.data: false
# Delete or comment out the following settings:
# cluster.initial_master_nodes:  ["192.168.2.175","192.168.2.176", "192.168.2.177"]
# discovery.zen.minimum_master_nodes: 2
# Set ownership of the deployment directory
chown -R elastic:elastic /apps/elk
# Start every ES node and enable start on boot
systemctl start elastic.service
systemctl enable elastic.service
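Once all nodes are up, cluster health can be checked with GET /_cluster/health on any node. A minimal sketch of parsing such a response (the sample values below are illustrative, not output captured from this cluster):

```python
import json

# Hypothetical response from:
#   curl -u elastic:<password> http://192.168.2.175:9200/_cluster/health
# (shape matches the ES cluster-health API; values are illustrative)
sample = '''{
  "cluster_name": "k8s-es",
  "status": "green",
  "number_of_nodes": 6,
  "number_of_data_nodes": 2,
  "active_shards_percent_as_number": 100.0
}'''

health = json.loads(sample)
# green = all shards allocated; yellow = replicas unassigned; red = primaries unassigned
assert health["status"] in ("green", "yellow", "red")
print(f'{health["cluster_name"]}: {health["status"]}, '
      f'{health["number_of_data_nodes"]}/{health["number_of_nodes"]} data nodes')
```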

Generate the Elasticsearch account passwords

# Run on any node
bin/elasticsearch-setup-passwords auto
# The generated credentials are printed at the end; record them
Changed password for user apm_system
PASSWORD apm_system = 4zmSk6NdfNblKFCdZnHK

Changed password for user kibana_system
PASSWORD kibana_system = hfcUg1rInYoWBASZFQTE

Changed password for user kibana
PASSWORD kibana = hfcUg1rInYoWBASZFQTE

Changed password for user logstash_system
PASSWORD logstash_system = JIQJnlMjUJPRXvYRH5L9

Changed password for user beats_system
PASSWORD beats_system = SHNpqmnILwilor2T3Nga

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 8LpqFw336wrwubkZiEwZ

Changed password for user elastic
PASSWORD elastic = yqyY8P3PJ5CP1GrT7xxR
# Note: if xpack.security.http.ssl.enabled is set to true, comment out the following lines first, otherwise password generation will fail:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12

Install Kibana (node 192.168.3.62)

1. Download the Kibana tar package

cd /apps/elk
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.0-linux-x86_64.tar.gz

2. Unpack Kibana

tar -xvf kibana-7.9.0-linux-x86_64.tar.gz

3. Edit the configuration

cd kibana-7.9.0-linux-x86_64/config
vim kibana.yml
# Port Kibana listens on (default: 5601)
server.port: 5601
# Address Kibana listens on (default: "localhost")
server.host: "192.168.3.62"
# Display name of this Kibana instance
server.name: "k8s_es"
# Elasticsearch nodes Kibana connects to
elasticsearch.hosts: ["http://192.168.2.175:9200","http://192.168.2.176:9200","http://192.168.2.177:9200"]
# Elasticsearch username
elasticsearch.username: "kibana_system"
# Elasticsearch password
elasticsearch.password: "hfcUg1rInYoWBASZFQTE"
# Maximum payload size of incoming requests, in bytes
server.maxPayloadBytes: 1048576
# Time to wait for Elasticsearch to respond to pings, in ms
elasticsearch.pingTimeout: 1500
# Time to wait for responses from the back end or Elasticsearch, in ms; must be a positive integer
elasticsearch.requestTimeout: 30000
# Kibana client headers to forward to Elasticsearch; set to [] (an empty list) to send none
elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values to send to Elasticsearch. Client headers cannot override these custom headers, regardless of elasticsearch.requestHeadersWhitelist
elasticsearch.customHeaders: {}
# Time in ms Elasticsearch waits for shard responses; set to 0 to disable
elasticsearch.shardTimeout: 30000
# Time in ms to wait for Elasticsearch at Kibana startup before retrying
elasticsearch.startupTimeout: 5000
# Use the Chinese UI
i18n.locale: "zh-CN"
# Enable monitoring
xpack.monitoring.ui.container.elasticsearch.enabled: true
# Must be at least 32 characters
xpack.encryptedSavedObjects.encryptionKey: "ae3ca37a74386e07e471eeb842720384"
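The encryption key only needs to be a random string of 32 or more characters; one way to generate such a key (a sketch using Python's standard library; any generator of equivalent strength works):

```python
import secrets

# 16 random bytes hex-encoded -> a 32-character string, the minimum length
# Kibana accepts for xpack.encryptedSavedObjects.encryptionKey.
key = secrets.token_hex(16)
assert len(key) == 32
print(key)
```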

4. Configure the kibana systemd unit

cat > /usr/lib/systemd/system/kibana.service << EOF
[Unit]
Description=kibana service daemon
After=network.target
[Service]
User=elastic
Group=elastic
LimitNOFILE=65536
LimitNPROC=65536
ExecStart=/apps/elk/kibana-7.9.0-linux-x86_64/bin/kibana
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=process
Restart=on-failure
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
chown -R elastic:elastic /apps/elk
# Enable start on boot
systemctl enable kibana.service
# Start kibana
systemctl start kibana.service


Log in with the generated credentials.

Install and configure jdk-14.0.2 (node 192.168.3.62)

1. Download jdk-14.0.2

https://www.oracle.com/java/technologies/javase-jdk14-downloads.html


2. Install jdk-14.0.2

rpm -ivh jdk-14.0.2_linux-x64_bin.rpm

3. Verify the installation

java -version
# Output:
java version "14.0.2" 2020-07-14
Java(TM) SE Runtime Environment (build 14.0.2+12-46)
Java HotSpot(TM) 64-Bit Server VM (build 14.0.2+12-46, mixed mode, sharing)

Install and configure Redis (node 192.168.3.62)

1. Install redis

yum install redis

2. Configure redis

vim /etc/redis.conf
bind 0.0.0.0
protected-mode no

3. Start redis

# Enable start on boot
systemctl enable redis.service
# Start redis
systemctl start redis.service

Install and configure Logstash (node 192.168.3.62)

1. Download the tar package

cd /apps/elk
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.9.0.tar.gz

2. Unpack logstash

tar -xvf logstash-7.9.0.tar.gz

3. Edit the logstash configuration

cd logstash-7.9.0/config/
cat logstash.yml
# Number of pipeline workers (the CPU core count, or a small multiple of it)
pipeline.workers: 8
# Number of events per batch
pipeline.batch.size: 5000
# Batch delay, in ms
pipeline.batch.delay: 3
# Whether to force shutdown on normal exit even if unprocessed events remain
pipeline.unsafe_shutdown: false
# Pipeline event ordering
pipeline.ordered: auto
# Directory of pipeline configuration files
path.config: "/apps/elk/logstash-7.9.0/conf.d"
# Automatically reload the config files under /apps/elk/logstash-7.9.0/conf.d
config.reload.automatic: true
# Config reload interval
config.reload.interval: 3s
# Address the monitoring API listens on
http.host: 192.168.3.62
# Port the monitoring API listens on
http.port: 9600
# Queue type: memory keeps the queue in RAM; use "persisted" for a durable on-disk queue
queue.type: memory
# Log output path
path.logs: /apps/elk/logstash-7.9.0/logs
# Enable X-Pack monitoring; if ES has HTTPS enabled, these settings need adjusting
xpack.monitoring.enabled: true
# ES username
xpack.monitoring.elasticsearch.username: logstash_system
# ES password
xpack.monitoring.elasticsearch.password: JIQJnlMjUJPRXvYRH5L9
# ES nodes
xpack.monitoring.elasticsearch.hosts: ["http://192.168.2.175:9200", "http://192.168.2.176:9200","http://192.168.2.177:9200"]
# Adjust jvm.options to match your server's resources
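pipeline.workers times pipeline.batch.size bounds the number of events held in flight in the JVM heap, which is why jvm.options must be sized to match these settings. A quick back-of-the-envelope sketch (the average event size used below is an assumption for illustration):

```python
# In-flight event estimate for the logstash settings above.
workers = 8          # pipeline.workers
batch_size = 5000    # pipeline.batch.size

in_flight = workers * batch_size
print(f"max in-flight events: {in_flight}")

# Rough heap estimate under an assumed average event size:
avg_event_bytes = 2 * 1024  # 2 KiB per event, illustrative only
print(f"~{in_flight * avg_event_bytes / 1024**2:.0f} MiB of events in flight")
```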

4、配置logstash 启动脚本

cat >  /usr/lib/systemd/system/logstash.service << EOF
[Unit]
Description=logstash service
After=syslog.target
After=network.target

[Service]
Environment="CONFFILE=/apps/elk/logstash-7.9.0/conf.d"
LimitNOFILE=65536
LimitNPROC=65536
Restart=on-failure
KillMode=process
ExecStart=/apps/elk/logstash-7.9.0/bin/logstash -f \$CONFFILE
ExecReload=/bin/kill -HUP \$MAINPID
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
# Enable start on boot
systemctl enable logstash.service

5. Create the account logstash uses to write logs to ES

# Open Kibana and go to Dev Tools
# Create the logstash_write_role role
POST /_security/role/logstash_write_role
{
    "cluster": [
      "monitor",
      "manage_index_templates"
    ],
    "indices": [
      {
        "names": [
          "logstash*"
        ],
        "privileges": [
          "write",
          "create_index",
          "delete",
          "manage",
          "manage_ilm"
        ],
        "field_security": {
          "grant": [
            "*"
          ]
        }
      }
    ],
    "run_as": [],
    "metadata": {},
    "transient_metadata": {
      "enabled": true
    }
}
# Returns {"role":{"created":true}}
# Create the logstash_writer user
POST /_security/user/logstash_writer
{
  "username": "logstash_writer",
  "roles": [
    "logstash_write_role"
  ],
  "full_name": null,
  "email": null,
  "password": "JIQJnlMjUJPRXvYRH5L9",
  "enabled": true
}
# Returns {"user":{"created":true}}
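The same two Dev Tools requests can also be scripted; a sketch using only Python's standard library (the endpoint paths match the requests above; error handling is omitted and the live call is left commented out):

```python
import base64
import json
import urllib.request

ES = "http://192.168.2.175:9200"  # any node works

# Role body equivalent to the Dev Tools request above
role_body = {
    "cluster": ["monitor", "manage_index_templates"],
    "indices": [{
        "names": ["logstash*"],
        "privileges": ["write", "create_index", "delete", "manage", "manage_ilm"],
    }],
}

def es_post(path: str, body: dict, user: str, password: str) -> dict:
    """POST a JSON body to Elasticsearch with HTTP basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        ES + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Uncomment to run against the live cluster with the elastic superuser:
# print(es_post("/_security/role/logstash_write_role", role_body, "elastic", "yqyY8P3PJ5CP1GrT7xxR"))
```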

These roles and users can also be created through the Kibana Users UI.


6. Configure a test pipeline, using nginx logs as an example

mkdir -p /apps/elk/logstash-7.9.0/conf.d
cd /apps/elk/logstash-7.9.0/conf.d
vim nginx.conf
input {
      redis {
        host => "192.168.3.62"
        port => "6379"
        data_type => "list"
        key => "nginx_key"
        db => "0"
      }

}
filter {

         if [fields][service] == "rockman_ngx_acs" {
                grok {
                        patterns_dir => ["/apps/elk/logstash-7.9.0/patterns"]
                        match => { "message" => "%{IP:client_ip} \- %{DATA:username}\[%{HTTPDATE:timestamp}\] (?<server>%{IPORHOST}(?:\S\d+)?|-) \"%{WORD:method} %{URIPATHPARAM:uripath} %{URIPROTO:protocol}/%{NUMBER:httpversion}\" %{NUMBER:status_code} (?:%{NUMBER:bytes}|-) \"%{DATA}\" %{QS:agent} (%{QS:x_forwarded_for}?) (%{IP:CDN_IP}?)"}
                        add_tag => ["nginx_aces"]
                        remove_field => ["message"]
                }
                date {
                        match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z","ISO8601"]
                        timezone => "Asia/Shanghai"
                        target => "@timestamp"
                        remove_field => [ "timestamp" ]
                }
                geoip {
                        source => "client_ip"
                        target => "geoip"
                        database => "/apps/elk/logstash-7.9.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
                        #add_field => {"[geoip][coordinates]" => "%{[geoip][longitude]}"}
                        #add_field => {"[geoip][coordinates]" => "%{[geoip][latitude]}"}
                        remove_field => ["[geoip][latitude]","[geoip][longitude]"]
                }
              #ruby {
              #        code => "
              #                timestamp = event.get('@timestamp')
              #                localtime = timestamp.time + 28800
              #                localtimeStr = localtime.strftime('%Y%m%d%H%M%S')
              #                event.set('localtime', localtimeStr)
              #       "
              #       }
                mutate {
                        convert => {"bytes" => "integer"}
                }
       } else if [fields][service] == "rockman_ngx_err" {
                grok {
                        patterns_dir => ["/apps/elk/logstash-7.9.0/patterns"]
                        match => { "message" => ["(?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<remote_addr>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?"]}
                        add_tag => ["nginx_err"]
                        overwrite => ["message"]
                }
                date {
                        match => ["timestamp","yyyy/MM/dd HH:mm:ss","ISO8601"]
                        timezone => "Asia/Shanghai"
                        target => "@timestamp"
                        remove_field => [ "timestamp" ]
                }
                geoip {
                        source => "remote_addr"
                        target => "geoip"
                        database => "/apps/elk/logstash-7.9.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
                        #add_field => {"[geoip][coordinates]" => "%{[geoip][longitude]}"}
                        #add_field => {"[geoip][coordinates]" => "%{[geoip][latitude]}"}
                        remove_field => ["[geoip][latitude]","[geoip][longitude]"]
                }
        }
}

output {
          if [fields][service] == "rockman_ngx_acs"{
            if "nginx_aces" in [tags] {
           elasticsearch {
                        hosts => ["http://192.168.2.175:9200","http://192.168.2.176:9200","http://192.168.2.177:9200"]
                        sniffing => true
                        index => "logstash-nginx-access-rockman-%{+YYYY.MM}"
                        user => "logstash_writer"
                        password => "JIQJnlMjUJPRXvYRH5L9"
                        #keystore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
                        #keystore_password => ""
                        #truststore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
                        #truststore_password => ""
                        #ssl => true
                        #ssl_certificate_verification => false
                        #cacert => "/apps/elk/logstash-7.9.0/config/logstash.pem"
                   }
            }
      } else if [fields][service] == "rockman_ngx_err" {
                if "nginx_err" in [tags] {
                   elasticsearch {
                        hosts => ["http://192.168.2.175:9200","http://192.168.2.176:9200","http://192.168.2.177:9200"]
                        sniffing => true
                        index => "logstash-nginx-error-%{+YYYY.MM}"
                        user => "logstash_writer"
                        password => "JIQJnlMjUJPRXvYRH5L9"
                        #keystore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
                        #keystore_password => ""
                        #truststore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
                        #truststore_password => ""
                        #ssl => true
                        #ssl_certificate_verification => false
                        #cacert => "/apps/elk/logstash-7.9.0/config/logstash.pem"
                   }
                }
    }
}
# nginx log format (place the following inside the http block of nginx.conf)
    map $http_x_forwarded_for  $clientRealIp {
        ""      $remote_addr;
       ~^(?P<firstAddr>[0-9\.|:|a-f\.|:|A-F\.|:]+),?.*$  $firstAddr;
        }

    log_format  main escape=json '$clientRealIp - $remote_user [$time_local] $http_host "$request" '
                                  '$status $body_bytes_sent "$http_referer" '
                                  '"$http_user_agent" "$http_x_forwarded_for" $remote_addr '
                                  'ups_add:$upstream_addr ups_resp_time: $upstream_response_time '
                                  'request_time: $request_time  ups_status: $upstream_status request_body: $request_body';
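Grok patterns like the access-log one above can be prototyped outside Logstash with an ordinary regex; a simplified sketch (the sample line and the reduced field set are illustrative, not the full grok semantics):

```python
import re

# Simplified equivalent of the access-log grok pattern above:
# client IP, user, timestamp, request, status and bytes.
LINE = ('203.0.113.7 - alice [15/Sep/2020:12:21:00 +0800] '
        '"GET /index.html HTTP/1.1" 200 1234')

PATTERN = re.compile(
    r'(?P<client_ip>\S+) - (?P<username>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uripath>\S+) (?P<protocol>\S+)" '
    r'(?P<status_code>\d{3}) (?P<bytes>\d+|-)'
)

m = PATTERN.match(LINE)
assert m is not None
event = m.groupdict()
event["bytes"] = int(event["bytes"])  # mirrors the mutate/convert filter above
print(event["client_ip"], event["status_code"], event["bytes"])
```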

7. Start logstash

systemctl start logstash.service

Install and configure filebeat (on each log-collecting node)

1. Download the tar package

mkdir -p /apps/elk
cd /apps/elk
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.0-linux-x86_64.tar.gz

2. Unpack filebeat

tar -xvf filebeat-7.9.0-linux-x86_64.tar.gz

3. Edit the configuration

cd filebeat-7.9.0-linux-x86_64
vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/nginx/log/access.*
  fields:
    service: rockman_ngx_acs # used by the logstash conditionals
  exclude_files: [".gz$"]
- type: log
  enabled: true
  paths:
    - /apps/nginx/log/error.*
  fields:
    service: rockman_ngx_err  # used by the logstash conditionals
  exclude_files: [".gz$"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.redis:
  enabled: true
  hosts: ["192.168.3.62:6379"]
  key: "nginx_key" # redis key name
  db: 0
  timeout: 5
  datatype: "list"
  worker: 5
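filebeat wraps each log line in a JSON event before pushing it onto the redis list, and the fields.service value set above is exactly what the logstash conditionals route on. A sketch of that envelope and routing (the routing function merely mirrors the output conditionals; it is not filebeat or logstash code):

```python
import json

# Shape of a filebeat event as pushed onto the redis list "nginx_key"
# (trimmed to the fields this pipeline actually uses).
event = {
    "message": '203.0.113.7 - - [15/Sep/2020:12:21:00 +0800] "GET / HTTP/1.1" 200 5',
    "fields": {"service": "rockman_ngx_acs"},
}

def route(ev: dict) -> str:
    """Mirror the logstash output conditionals on [fields][service]."""
    service = ev.get("fields", {}).get("service")
    if service == "rockman_ngx_acs":
        return "logstash-nginx-access-rockman-*"
    if service == "rockman_ngx_err":
        return "logstash-nginx-error-*"
    return "unrouted"

payload = json.dumps(event)        # what RPUSH nginx_key <payload> would carry
print(route(json.loads(payload)))
```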

4. Configure the filebeat systemd unit

cat > /usr/lib/systemd/system/filebeat.service << EOF
[Unit]
Description=filebeat Server Daemon
After=network.target
[Service]
User=root
Group=root
ExecStart=/apps/elk/filebeat-7.9.0-linux-x86_64/filebeat -e -c /apps/elk/filebeat-7.9.0-linux-x86_64/filebeat.yml
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=process
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
EOF
# Enable start on boot
systemctl enable filebeat.service
# Start filebeat
systemctl start filebeat.service
