Install Docker:
See the CSDN post "Centos7安装Docker" (玩物丧志的快乐的博客).
Install docker-compose:
See the CSDN post "安装docker-compose的两种方式" (沙漠之鹰的博客).
Adjust system parameters:
Edit /etc/sysctl.conf and add:
vm.max_map_count=655360
Edit /etc/security/limits.conf and add:
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
* soft memlock unlimited
* hard memlock unlimited
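The sysctl change can be applied immediately, while the limits only take effect for new login sessions; a quick check with standard commands:
sysctl -p                  # reload /etc/sysctl.conf and print the applied values
sysctl vm.max_map_count    # should now report 655360
ulimit -n                  # in a fresh login shell, should report 65535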
Create the Docker network and verify it:
docker network create elk
docker network ls
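To confirm the network uses the expected bridge driver (and, once the stack is up, which containers have joined it):
docker network inspect elk    # shows the driver and the attached containers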
Create a working directory for the installation files and enter it:
mkdir /root/elkf
cd /root/elkf
vi /root/elkf/docker-compose.yml
version: "3" #docker-compose版本
services: #需要定义运行的服务
nginx:
restart: always
image: nginx
container_name: nginx
hostname: nginx
ports: #映射端口主机到容器
- 80:80
volumes: #卷挂载路径主机到容器
- /var/log/nginx:/var/log/nginxfilebeat:
restart: always
depends_on:
- "nginx"
build:
context: ./filebeat
dockerfile: Dockerfile
container_name: filebeat
hostname: filebeat
volumes:
- /var/log/nginx:/var/log/nginxelasticsearch:
restart: always
depends_on:
- "nginx"
build:
context: ./elasticsearch
dockerfile: Dockerfile
container_name: elasticsearch
hostname: elasticsearch
ports:
- 9200:9200
- 9300:9300
volumes:
- /var/log/elasticsearch:/var/log/elasticsearchlogstash:
restart: always
depends_on:
- "nginx"
build:
context: ./logstash
dockerfile: Dockerfile
container_name: logstash
hostname: logstash
ports:
- 5044:5044
volumes:
- /opt/logstash/conf:/opt/logstash/confkibana:
restart: always
depends_on:
- "nginx"
build:
context: ./kibana
dockerfile: Dockerfile
container_name: kibana
hostname: kibana
ports:
- 5601:5601
networks: #定义添加的网络
default:
external:
name: elk
This file defines the four ELKF services to run, plus the nginx container whose access log we will collect.
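Indentation mistakes are the most common failure mode in compose files, so it is worth validating the file before building anything; docker-compose can parse and echo the canonical configuration:
cd /root/elkf
docker-compose config    # prints the parsed configuration, or the first syntax error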
Package download:
http://www.haojiang.online/other/download.tar.gz (note: all four installation packages are bundled in this archive)
1. Build the elasticsearch image
cd /root/elkf/
mkdir elasticsearch
cd elasticsearch
Create a Dockerfile with the following content:
vi Dockerfile
FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD elasticsearch-6.1.0.tar.gz /usr/local/
# Create the data and log directories the node will use.
RUN mkdir -p /data/behavior/log-node1
RUN mkdir /var/log/elasticsearch
COPY elasticsearch.yml /usr/local/elasticsearch-6.1.0/config/
# Elasticsearch refuses to run as root, so create a dedicated user
# and hand it ownership of the install, log and data directories.
RUN useradd es && chown -R es:es /usr/local/elasticsearch-6.1.0
RUN chmod +x /usr/local/elasticsearch-6.1.0/bin/*
RUN chown -R es:es /var/log/elasticsearch/
RUN chown -R es:es /data/behavior/log-node1
EXPOSE 9200
EXPOSE 9300
CMD su es /usr/local/elasticsearch-6.1.0/bin/elasticsearch
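Once the elasticsearch-6.1.0.tar.gz package (imported below) is in place, this image can be test-built on its own, which isolates Dockerfile mistakes from the rest of the stack:
cd /root/elkf
docker-compose build elasticsearch    # builds only this service's image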
Create an elasticsearch.yml file with the following content:
vi elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-elk
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0

#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Then copy the elasticsearch installation package (elasticsearch-6.1.0.tar.gz) into this directory.
Directory structure under elasticsearch:
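The directory should now contain exactly the three files the Dockerfile references:
ls /root/elkf/elasticsearch
# Dockerfile  elasticsearch-6.1.0.tar.gz  elasticsearch.yml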
2. Build the logstash image
cd /root/elkf/
mkdir logstash
cd logstash
Write the logstash configuration file:
mkdir -p /opt/logstash/conf
vim /opt/logstash/conf/logstash-nginx-log.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats { port => 5044 }
}

filter {
  if "nginx-access" in [tags] {
    grok {
      # Parse the nginx combined log format, capturing the timestamp as
      # time_local so the date filter below can set @timestamp from it.
      match => {
        "message" => '%{IP:remote_addr} - (%{WORD:remote_user}|-) \[%{HTTPDATE:time_local}\] "%{WORD:method} %{NOTSPACE:request} HTTP/%{NUMBER}" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}'
      }
    }
    urldecode {
      all_fields => true
    }
    date {
      match => [ "time_local", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}

output {
  if "nginx-access" in [tags] {
    elasticsearch {
      hosts => [ "elasticsearch:9200" ]
      manage_template => false
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
}
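Once the stack is running (the docker-compose step at the end), Logstash can check this pipeline file for syntax errors without processing any events; --config.test_and_exit is a standard Logstash flag:
docker exec -it logstash /usr/local/logstash-6.1.0/bin/logstash \
  -f /opt/logstash/conf/logstash-nginx-log.conf --config.test_and_exit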
Create a Dockerfile with the following content:
vi Dockerfile
FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD logstash-6.1.0.tar.gz /usr/local/
# run.sh starts logstash with the pipeline file mounted at /opt/logstash/conf.
ADD run.sh /run.sh
RUN chmod 755 /*.sh
EXPOSE 5044
CMD ["/run.sh"]
Write the startup script (the run.sh that the Dockerfile adds):
vi run.sh
#!/bin/bash
/usr/local/logstash-6.1.0/bin/logstash -f /opt/logstash/conf/logstash-nginx-log.conf
Note: this path must match the logstash config file path used above.
Then copy the logstash installation package (logstash-6.1.0.tar.gz) into this directory.
Directory structure under logstash:
3. Build the kibana image
cd /root/elkf/
mkdir kibana
cd kibana
Create a Dockerfile with the following content:
vi Dockerfile
FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD kibana-6.1.0-linux-x86_64.tar.gz /usr/local/
# Overwrite the default config with ours (server.host, elasticsearch.url, ...).
COPY kibana.yml /usr/local/kibana-6.1.0-linux-x86_64/config/
EXPOSE 5601
CMD ["/usr/local/kibana-6.1.0-linux-x86_64/bin/kibana"]
Create a kibana.yml file with the following content:
vi kibana.yml (you can copy it as-is)
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
server.name: "kibana"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://elasticsearch:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
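Note that elasticsearch.url points at the compose service name rather than an IP; this works because every container joins the elk network, where Docker's embedded DNS resolves service names. Once the containers are up, this can be confirmed from inside the kibana container (curl ships with the centos base image):
docker exec -it kibana curl -s http://elasticsearch:9200
# should return the Elasticsearch version banner as JSON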
Then copy the kibana installation package (kibana-6.1.0-linux-x86_64.tar.gz) into this directory.
Directory structure under kibana:
4. Build the filebeat image
cd /root/elkf/
mkdir filebeat
cd filebeat
Create a Dockerfile with the following content:
vi Dockerfile
FROM centos:7.9.2009
MAINTAINER wzlu
RUN yum -y install java-1.8.0-openjdk vim telnet lsof
ADD filebeat-6.1.0-linux-x86_64.tar.gz /usr/local/
# Replace the bundled filebeat.yml with our nginx prospector config.
COPY filebeat.yml /usr/local/filebeat-6.1.0-linux-x86_64/
ADD run.sh /run.sh
RUN chmod 755 /*.sh
CMD ["/run.sh"]
Write the startup script (the run.sh that the Dockerfile adds):
vi run.sh
#!/bin/bash
/usr/local/filebeat-6.1.0-linux-x86_64/filebeat -e -c /usr/local/filebeat-6.1.0-linux-x86_64/filebeat.yml
Create a filebeat.yml file with the following content:
vi filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/access.log
    #- c:\programdata\elasticsearch\logs\*

  tags: ["nginx-access"]
  clean_*: true

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["nginx-access"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
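Filebeat 6.x includes a "test" subcommand that checks the config syntax and the connection to the configured output; once the containers are running it can serve as a quick sanity check (a sketch; paths match this image's layout):
docker exec -it filebeat /usr/local/filebeat-6.1.0-linux-x86_64/filebeat test config \
  -c /usr/local/filebeat-6.1.0-linux-x86_64/filebeat.yml
docker exec -it filebeat /usr/local/filebeat-6.1.0-linux-x86_64/filebeat test output \
  -c /usr/local/filebeat-6.1.0-linux-x86_64/filebeat.yml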
Then copy the filebeat offline package (filebeat-6.1.0-linux-x86_64.tar.gz) into this directory.
Directory structure under filebeat:
Final directory structure of elkf:
First pull a fresh nginx image:
docker pull nginx
Then deploy everything with a single docker-compose command (note: make sure the mapped ports are not already in use):
docker-compose up -d
Check the container status:
docker-compose ps
Generate some log traffic (replace 192.168.25.100 with your host's IP):
watch -n 2 curl 192.168.25.100
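After a little traffic, the daily index created by the Logstash output should appear in Elasticsearch; _cat/indices is a standard ES API:
curl -s 'http://localhost:9200/_cat/indices?v'
# expect a line for nginx-access-<yyyy.MM.dd>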
Open http://<host IP>:5601 in a browser.
Login successful.
View the logs in Kibana.
The ELKF logging stack is up and running.
If you spot any errors or problems, feel free to contact the author; corrections are welcome. Original content, thanks for your support!