Tags: filebeat

ELK Stack Installation Gotchas, Part 3: FileBeat

At the end of the previous post, I mentioned that the reason I wasn't using FileBeat at the time was that I hadn't gotten it running yet.

Today I went back to the official Getting Started with Filebeat documentation (link) and worked through it step by step once more,

and to my surprise it just came up (which shows how important reading the official docs really is). The result is quite bare-bones, but it's still well worth writing down.

First, the configuration file, trimmed down to the simplest and most essential parts.

As usual, I created a filebeat directory under /mnt and added the file /mnt/filebeat/filebeat.yml with the following content:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/mesocollection/*/*.log
  
  reload.enabled: true
  reload.period: 30s

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  template.enabled: false
  index: "meso_sys_localfile_fb"

Note: as I found via Google in the previous post, the path supports `*` for tracking dynamically created directories (e.g., directories organized by date or hour), and filebeat supports this just the same. As for the elasticsearch index, we configure it here as well, so we can later check the write status through the ES API. One thing about the elasticsearch output: compared with the logstash file-input pipeline, this setup skips the step of defining field parsing. From what I've found so far, unless your logs follow a common format such as apache or mysql (in which case you can borrow their log formats and save yourself some work), you probably have to route filebeat through logstash and parse with a filter before you can write custom fields into the elasticsearch index. (To be verified.)
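The wildcard behavior mentioned above can be sanity-checked outside filebeat. A minimal Python sketch (the directory and file names below just mimic this project's layout) shows that a `*/*.log` pattern picks up files in newly created date-named directories:

```python
import glob
import os
import tempfile

# Simulate the date-named directory layout that the filebeat path
# pattern /var/log/mesocollection/*/*.log is expected to track.
root = tempfile.mkdtemp()
for day in ("20170730", "20170731"):
    os.makedirs(os.path.join(root, day))
    with open(os.path.join(root, day, "grpc_server_10.log"), "w") as f:
        f.write("dummy line\n")

# The same glob pattern filebeat uses matches both daily directories.
matches = sorted(glob.glob(os.path.join(root, "*", "*.log")))
print(len(matches))  # 2
```

A new directory created tomorrow (e.g. 20170801) would be matched by the same pattern without any config change, which is exactly the behavior filebeat relies on here.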

Next, enter the following command to run the filebeat image:

docker run -d --link elasticsearch-test:elasticsearch -v /mnt/filebeat/filebeat.yml:/filebeat.yml -v /mnt/python-app/mesocollection/logs:/var/log/mesocollection prima/filebeat:5

Then we can trigger some logging activity (for example by calling an API) and follow the container output with docker logs:

2017/07/31 02:57:30.841300 log.go:116: INFO File is inactive: /var/log/mesocollection/20170731/grpc_server_10.log. Closing because close_inactive of 5m0s reached.
2017/07/31 02:57:30.841296 log.go:116: INFO File is inactive: /var/log/mesocollection/20170731/RepositorySettingsHelper_10.log. Closing because close_inactive of 5m0s reached.
2017/07/31 02:57:32.836147 log.go:116: INFO File is inactive: /var/log/mesocollection/20170731/MC_Entry_10.log. Closing because close_inactive of 5m0s reached.
2017/07/31 02:57:32.836248 log.go:116: INFO File is inactive: /var/log/mesocollection/20170731/MC_MainService_10.log. Closing because close_inactive of 5m0s reached.
2017/07/31 02:57:32.836211 log.go:116: INFO File is inactive: /var/log/mesocollection/20170731/MC_Return_10.log. Closing because close_inactive of 5m0s reached.
2017/07/31 02:57:45.646417 metrics.go:39: INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=5 filebeat.harvester.open_files=-5 filebeat.harvester.running=-5 publish.events=5 registrar.states.update=5 registrar.writes=1

If logs like the above appear, it means filebeat has detected the file changes.

Next, as usual, we check what has been written into ES with $ curl -XPOST http://yourhost:9200/meso_sys_localfile_fb/_search?pretty=true (the index name must match the one set in filebeat.yml):

{
  "took" : 8,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 41,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "meso_sys_localfile_fb",
        "_type" : "log",
        "_id" : "AV2WiYNRVf3DdKZYkE65",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2017-07-31T02:46:15.762Z",
          "beat" : {
            "hostname" : "6c8d38b686db",
            "name" : "6c8d38b686db",
            "version" : "5.3.0"
          },
          "input_type" : "log",
          "message" : "2017-07-31 10:46:11,021 - MC_MainService - INFO - entry log : behavior:update_system_user_rdb, data={\"$data$\": \"{\\\"firstName\\\": \\\"I-Ping\\\", \\\"email\\\": \\\"paul@meso-tek.com\\\", \\\"lastName\\\": \\\"Huang\\\"}\", \"$query$\": \"{\\\"id\\\": \\\"0e5564e8-1ac7-45db-9d0b-6f79643ca857-1501469142.056618\\\", \\\"account\\\": \\\"paul\\\"}\"}, files_count=0",
****************************************************** (truncated)

If you see JSON records like the above, it means elasticsearch is receiving the writes correctly.
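For a quick look at what actually landed in the index, the hits in the `_search` response can be unpacked with a few lines of Python. The `response` dict here is a trimmed, hand-copied stand-in for the real JSON, not a live query:

```python
# A trimmed stand-in for the _search response shown above.
response = {
    "hits": {
        "total": 41,
        "hits": [
            {
                "_index": "meso_sys_localfile_fb",
                "_source": {
                    "@timestamp": "2017-07-31T02:46:15.762Z",
                    "input_type": "log",
                    "message": "2017-07-31 10:46:11,021 - MC_MainService"
                               " - INFO - entry log : ...",
                },
            }
        ],
    }
}

# Each harvested line arrives whole in the "message" field;
# there are no parsed custom fields at this point.
for hit in response["hits"]["hits"]:
    print(hit["_source"]["@timestamp"], hit["_source"]["message"][:40])
```

This also makes the limitation below visible: everything filebeat shipped sits in a single `message` string.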

At this point, kibana mostly works without issue as well.

The remaining problem is that the log message is not broken into fields, so we can only search each line as a whole through the message field, which doesn't directly help with reporting.

So, for the mechanism of parsing the log message into fields: is routing through logstash the only way, or is there a filebeat plugin for this? I'll make that this week's task to investigate.
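For reference, the kind of splitting a Logstash grok filter would perform on lines like the one in the sample above can be sketched with a plain regex. The field names here are my own choice for illustration, not anything filebeat or logstash defines:

```python
import re

line = ("2017-07-31 10:46:11,021 - MC_MainService - INFO - "
        "entry log : behavior:update_system_user_rdb")

# Split "<timestamp> - <logger> - <level> - <message>" into named groups,
# roughly what a grok pattern such as
# %{TIMESTAMP_ISO8601} - %{DATA} - %{LOGLEVEL} - %{GREEDYDATA} would yield.
pattern = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) - "
    r"(?P<logger>\S+) - (?P<level>\w+) - (?P<message>.*)$"
)
fields = pattern.match(line).groupdict()
print(fields["logger"], fields["level"])  # MC_MainService INFO
```

If this parsing has to run inside the pipeline rather than in application code, that is exactly the gap a logstash filter stage (or an equivalent filebeat-side option, if one exists) would fill.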

That wraps up the simplest way to enable filebeat and hook it into the stack.

 

References:

FileBeat getting started:https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html

How Filebeat works:https://www.elastic.co/guide/en/beats/filebeat/current/how-filebeat-works.html

 
