I. Pre-deployment environment overview:

An ES cluster of 5 nodes (es01, es02, es03, es04, es05), 1 logstash server (logstash2), 1 kibana server (kibana2), and 1 host (web2) running a mock Apache service plus filebeat (the log collection tool); all of the above are implemented as virtual machines.


IP assignments are as follows:

192.168.1.11 es01  

192.168.1.12 es02

192.168.1.13 es03

192.168.1.14 es04

192.168.1.15 es05

192.168.1.21 logstash2

192.168.1.22 kibana2

192.168.1.31 web2

Physical host: 192.168.1.254

The physical host shares its yum repositories over FTP, under /var/ftp/elk and /var/ftp/centos-1804.
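
The playbook and scripts below assume those two directories already exist on the physical host, hold the rpm packages, and are exported over FTP. A minimal sketch of that preparation, run on 192.168.1.254 (the use of vsftpd and createrepo here is an assumption, not shown in the original):

# on the physical host 192.168.1.254
yum -y install vsftpd createrepo
mkdir -p /var/ftp/elk            # elasticsearch/logstash/kibana/filebeat rpms go here
createrepo /var/ftp/elk          # generate repodata so yum can read the directory
                                 # /var/ftp/centos-1804 is a copy of the CentOS media and
                                 # normally already ships its own repodata
systemctl enable --now vsftpd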

II. Using ansible-playbook

Ansible server IP: 192.168.1.40
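
Before writing the inventory, the control node itself needs Ansible installed, the node names resolvable, and password-less SSH to every machine. A minimal sketch of that preparation (the paths and the root login are assumptions):

# on the ansible server 192.168.1.40
yum -y install ansible
# the control node must also resolve the inventory names
cat >> /etc/hosts <<EOF
192.168.1.11 es01
192.168.1.12 es02
192.168.1.13 es03
192.168.1.14 es04
192.168.1.15 es05
192.168.1.21 logstash2
192.168.1.22 kibana2
192.168.1.31 web2
EOF
# distribute a key so the playbook can log in without prompts
ssh-keygen -f /root/.ssh/id_rsa -N ''
for h in es0{1..5} logstash2 kibana2 web2; do ssh-copy-id root@$h; done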

Configure the Ansible inventory:

echo "[es]
es01
es02
es03
es04
es05" >> /etc/ansible/hosts
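
The first play in elk.yml also targets logstash2, kibana2 and web2, so those names have to be present in the inventory as well; one way to add them (the group name [other] is arbitrary), followed by a quick connectivity check:

echo "[other]
logstash2
kibana2
web2" >> /etc/ansible/hosts

# every host should answer before the playbook is run
ansible all -m ping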

1. The deployment playbook elk.yml

---
- name: environment preparation
  hosts: es,logstash2,kibana2,web2
  tasks:
    - name: write /etc/hosts and the yum repo files
      # the script module copies /root/elk.sh from the control node and runs it remotely
      script: /root/elk.sh

- name: deploy the es cluster
  hosts: es
  tasks:
    - name: install jdk and elasticsearch
      yum:
        name: 'java-1.8.0-openjdk'
        state: latest
    - yum:
        name: 'elasticsearch'
        state: latest
    - name: edit the configuration file
      lineinfile:
        path: /etc/elasticsearch/elasticsearch.yml
        regexp: "{{ item.old }}"
        line: "{{ item.new }}"
      with_items:
        - { old: '# cluster.name', new: 'cluster.name: myelk' }
        - { old: '# network.host', new: 'network.host: 0.0.0.0' }
        - { old: '# discovery.zen.ping.unicast.hosts', new: 'discovery.zen.ping.unicast.hosts: ["es01", "es02", "es03"]' }
        - { old: '# node.name', new: 'node.name: {{ ansible_nodename }}' }
    - name: restart es
      service:
        name: elasticsearch
        state: restarted
        enabled: yes

# must run after the es cluster play
- name: install the head and kopf plugins on es01
  hosts: es01
  tasks:
    - name: install the head plugin
      shell: '/usr/share/elasticsearch/bin/plugin install ftp://192.168.1.254/elk/elasticsearch-head-master.zip'
    - name: install the kopf plugin
      shell: '/usr/share/elasticsearch/bin/plugin install ftp://192.168.1.254/elk/elasticsearch-kopf-master.zip'

- name: deploy logstash
  hosts: logstash2
  tasks:
    - name: install jdk and logstash
      yum:
        name: 'java-1.8.0-openjdk'
        state: latest
    - yum:
        name: 'logstash'
        state: latest
    - name: write the pipeline that reads the apache logs
      script: /root/elk2.sh

- name: deploy kibana
  hosts: kibana2
  tasks:
    - name: install kibana
      yum:
        name: 'kibana'
        state: latest
    - name: edit the configuration file
      lineinfile:
        path: /opt/kibana/config/kibana.yml
        regexp: "{{ item.old2 }}"
        line: "{{ item.new2 }}"
      with_items:
        - { old2: 'server.port', new2: ' server.port: 5601' }
        - { old2: 'server.host', new2: ' server.host: "0.0.0.0"' }
        - { old2: 'elasticsearch.url', new2: ' elasticsearch.url: "http://192.168.1.11:9200"' }
        - { old2: 'kibana.index', new2: ' kibana.index: ".kibana"' }
        - { old2: 'kibana.defaultAppId', new2: ' kibana.defaultAppId: "discover"' }
        - { old2: 'elasticsearch.pingTimeout', new2: ' elasticsearch.pingTimeout: 1500' }
        - { old2: 'elasticsearch.requestTimeout', new2: ' elasticsearch.requestTimeout: 30000' }
        - { old2: 'elasticsearch.startupTimeout', new2: ' elasticsearch.startupTimeout: 5000' }
    - name: restart kibana
      service:
        name: kibana
        state: restarted
        enabled: yes

- name: deploy the web service and filebeat
  hosts: web2
  tasks:
    - name: install apache and filebeat
      yum:
        name: 'httpd'
        state: latest
    - yum:
        name: 'filebeat'
        state: latest
    - name: edit the configuration file
      lineinfile:
        path: /etc/filebeat/filebeat.yml
        regexp: "{{ item.old3 }}"
        line: "{{ item.new3 }}"
      with_items:
        # comment out the elasticsearch output and enable the logstash output
        - { old3: 'elasticsearch:', new3: '#  elasticsearch:' }
        - { old3: 'localhost:9200"', new3: '#hosts: ["localhost:9200"]' }
        - { old3: '#logstash:', new3: '  logstash:' }
        - { old3: 'localhost:5044"', new3: '    hosts: ["192.168.1.21:5044"]' }
    - replace:
        path: /etc/filebeat/filebeat.yml
        regexp: '{{ item.old4 }}'
        replace: '{{ item.new4 }}'
        backup: yes
      with_items:
        # rewrite the default "*." in the prospector path so filebeat reads access_log
        - { old4: '\*\.', new4: 'access_' }
    - name: restart httpd and filebeat
      service:
        name: 'httpd'
        state: restarted
        enabled: yes
    - service:
        name: 'filebeat'
        state: restarted
        enabled: yes
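
With the inventory and the two shell scripts in place, the playbook is run from /root on the ansible server; afterwards the cluster can be checked with the standard Elasticsearch REST API (the addresses follow the IP plan from part I):

ansible-playbook /root/elk.yml

# expect "number_of_nodes" : 5 and a green or yellow status
curl http://192.168.1.11:9200/_cluster/health?pretty
# the head plugin installed on es01 is then reachable in a browser at
# http://192.168.1.11:9200/_plugin/head/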

2. The shell scripts called by the playbook

/root/elk.sh

#!/bin/bash
# let every node resolve the others by name
echo "127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.11 es01
192.168.1.12 es02
192.168.1.13 es03
192.168.1.14 es04
192.168.1.15 es05
192.168.1.21 logstash2
192.168.1.22 kibana2" > /etc/hosts
mkdir /var/ftp/elk
# point yum at the repositories shared over FTP by the physical host;
# the elk repo holds the elasticsearch, logstash, kibana and filebeat packages
echo '[local_repo]
name=CentOS-$releasever - Base
baseurl=ftp://192.168.1.254/centos-1804
enabled=1
gpgcheck=1
[elk]
name=elk
baseurl=ftp://192.168.1.254/elk
enabled=1
gpgcheck=0
' > /etc/yum.repos.d/local.repo
yum clean all
yum repolist
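
A quick manual check on any node after the first play confirms that the script had the intended effect:

getent hosts es01 logstash2 kibana2               # names resolve through the new /etc/hosts
yum list elasticsearch logstash kibana filebeat   # all four packages should come from the elk repo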

/root/elk2.sh

#!/bin/bash
# write the logstash pipeline: read events from beats (plus a local test file),
# parse apache logs with grok, and ship them to the es cluster
touch /etc/logstash/logstash.conf
echo 'input{
        stdin{codec => "json"}
        beats{
                port => 5044
        }
        file{
                path => ["/tmp/c.log"]
                type => "test"
                start_position => "beginning"
                sincedb_path => "/var/lib/logstash/sincedb"
        }
}
filter{
        if [type] == "apache_log"{
        grok{
                match => {"message" => "%{COMBINEDAPACHELOG}"}
        }}
}
output{
        stdout{ codec => "rubydebug" }
        if [type] == "apache_log"{
        elasticsearch{
                hosts => ["192.168.1.11:9200","192.168.1.12:9200"]
                index => "filelog"
                flush_size => 2000
                idle_flush_time => 10
        }}
}
' > /etc/logstash/logstash.conf
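
The filter and output above only act on events whose type is apache_log, so filebeat on web2 must send that type; the playbook in section 1 does not set it, which is worth verifying. A hedged sketch of that check and of running logstash against the generated file (/opt/logstash is where the logstash 2.x rpm installs the binary):

# on web2: the prospector should carry the type the pipeline filters on
grep -n 'document_type' /etc/filebeat/filebeat.yml
# if it is still the default, change it to:  document_type: apache_log

# on logstash2: syntax-check the generated file, then start the pipeline
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf --configtest
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf

# on web2: generate some traffic, then look for documents in the filelog index
curl -s http://localhost/ > /dev/null
curl http://192.168.1.11:9200/filelog/_search?pretty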