

Elasticsearch + Logstash with log search implemented in Java

To keep project logs from being exposed, Kibana is not used for data display.

1. Environment preparation

1.1 Create a non-root user


# Create the user
useradd queylog

# Set the password
passwd queylog

# Grant sudo privileges
# Locate the sudoers file
whereis sudoers
# Make the file writable
chmod -v u+w /etc/sudoers
# Edit the file and add a line such as: queylog ALL=(ALL) ALL
vi /etc/sudoers
# Revoke write permission
chmod -v u-w /etc/sudoers
# The first use of sudo prints the usual notice:

 

We trust you have received the usual lecture from the local System

Administrator. It usually boils down to these three things:

 

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

 

The user is now created.

1.2 Install the JDK


su queylog
cd /home/queylog
# Unpack jdk-8u191-linux-x64.tar.gz
tar -zxvf jdk-8u191-linux-x64.tar.gz
sudo mv jdk1.8.0_191 /opt/jdk1.8
# Edit /etc/profile
vi /etc/profile
export JAVA_HOME=/opt/jdk1.8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
# Reload the profile
source /etc/profile
# Check the JDK version
java -version

1.3 Firewall settings


# Allow a specific IP through the firewall
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="172.16.110.55" accept'
# Reload the firewall
firewall-cmd --reload

2. Install Elasticsearch

2.1 Elasticsearch configuration

Note: Elasticsearch must be started as a non-root user, otherwise it will refuse to start.


su queylog
cd /home/queylog
# Unpack elasticsearch-6.5.4.tar.gz
tar -zxvf elasticsearch-6.5.4.tar.gz
sudo mv elasticsearch-6.5.4 /opt/elasticsearch
# Edit the es config file
vi /opt/elasticsearch/config/elasticsearch.yml
# Set the cluster name
cluster.name: elastic
# Set the bind address
network.host: 192.168.8.224
# Set the HTTP port
http.port: 9200

 

# Switch to root
su root
# Append the following to /etc/security/limits.conf
vi /etc/security/limits.conf
* hard nofile 655360
* soft nofile 131072
* hard nproc 4096
* soft nproc 2048

 

# Edit /etc/sysctl.conf and append the following:
vi /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655360

# Save, then reload:
sysctl -p

 

# Switch back to the non-root user
su queylog
# Start elasticsearch
/opt/elasticsearch/bin/elasticsearch
# Test
curl http://192.168.8.224:9200
# The console prints:

{
  "name" : "L_dA6oi",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "eS7yP6fVTvC8KMhLutOz6w",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

2.2 Manage Elasticsearch as a service


# Switch to root
su root
# Write the service unit file
vi /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

 

[Service]

Environment=ES_HOME=/opt/elasticsearch

Environment=ES_PATH_CONF=/opt/elasticsearch/config

Environment=PID_DIR=/opt/elasticsearch/config

EnvironmentFile=/etc/sysconfig/elasticsearch

WorkingDirectory=/opt/elasticsearch

User=queylog

Group=queylog

ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid

 

# StandardOutput is configured to redirect to journalctl since

# some error messages may be logged in standard output before

# elasticsearch logging system is initialized. Elasticsearch

# stores its logs in /var/log/elasticsearch and does not use

# journalctl by default. If you also want to enable journalctl

# logging, you can simply remove the "quiet" option from ExecStart.

StandardOutput=journal

StandardError=inherit

 

# Specifies the maximum file descriptor number that can be opened by this process

LimitNOFILE=65536

 

# Specifies the maximum number of process

LimitNPROC=4096

 

# Specifies the maximum size of virtual memory

LimitAS=infinity

 

# Specifies the maximum file size

LimitFSIZE=infinity

 

# Disable timeout logic and wait until process is stopped

TimeoutStopSec=0

 

# SIGTERM signal is used to stop the Java process

KillSignal=SIGTERM

 

# Send the signal only to the JVM rather than its control group

KillMode=process

 

# Java process is never killed

SendSIGKILL=no

 

# When a JVM receives a SIGTERM signal it exits with code 143

SuccessExitStatus=143

 

[Install]

WantedBy=multi-user.target

 

 

vi /etc/sysconfig/elasticsearch

 

#############################
# Elasticsearch             #
#############################

 

# Elasticsearch home directory

ES_HOME=/opt/elasticsearch

 

# Elasticsearch Java path

JAVA_HOME=/opt/jdk1.8
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib

 

# Elasticsearch configuration directory

ES_PATH_CONF=/opt/elasticsearch/config

 

# Elasticsearch PID directory

PID_DIR=/opt/elasticsearch/config

 

#############################

# Elasticsearch Service #

#############################

 

# SysV init.d

# The number of seconds to wait before checking if elasticsearch started successfully as a daemon process

ES_STARTUP_SLEEP_TIME=5

 

################################

# Elasticsearch Properties #

################################

# Specifies the maximum file descriptor number that can be opened by this process

# When using Systemd, this setting is ignored and the LimitNOFILE defined in

# /usr/lib/systemd/system/elasticsearch.service takes precedence

#MAX_OPEN_FILES=65536

 

# The maximum number of bytes of memory that may be locked into RAM

# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option

# in elasticsearch.yml.

# When using Systemd, LimitMEMLOCK must be set in a unit file such as

# /etc/systemd/system/elasticsearch.service.d/override.conf.

#MAX_LOCKED_MEMORY=unlimited

 

# Maximum number of VMA(Virtual Memory Areas) a process can own

# When using Systemd, this setting is ignored and the 'vm.max_map_count'

# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf

#MAX_MAP_COUNT=262144

 

# Reload the systemd units
systemctl daemon-reload
# Switch to the non-root user
su queylog
# Start elasticsearch
sudo systemctl start elasticsearch
# Enable start on boot
sudo systemctl enable elasticsearch

3. Install Logstash

3.1 Logstash configuration


su queylog
cd /home/queylog
# Unpack logstash-6.5.4.tar.gz
tar -zxvf logstash-6.5.4.tar.gz
sudo mv logstash-6.5.4 /opt/logstash
# Edit the logstash config file
vi /opt/logstash/config/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["http://192.168.8.224:9200"]
# Create logstash.conf in the bin directory
vi /opt/logstash/bin/logstash.conf

input {
  # Read from a file
  file {
    # Log file path
    path => "/opt/tomcat/logs/catalina.out"
    start_position => "beginning" # (end, beginning)
    type => "isp"
  }
}
#filter {
  # Define the data format; parse the log with a regex
  # (filter and collect according to your actual needs)
  #grok {
  #  match => { "message" => "%{IPV4:clientIP}|%{GREEDYDATA:request}|%{NUMBER:duration}" }
  #}
  # Convert data types as needed
  #mutate { convert => { "duration" => "integer" } }
#}
# Define the output
output {
  elasticsearch {
    hosts => "192.168.43.211:9200" # Elasticsearch default port
    index => "ind"
    document_type => "isp"
  }
}
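The commented-out grok filter above would split a pipe-delimited line of the form `clientIP|request|duration` into named fields. As a rough, hypothetical Java analogue of that pattern (class name and simplified IPv4 regex are illustrative assumptions, with named groups standing in for the grok field names):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrokAnalogueDemo {
    // Rough Java analogue of "%{IPV4:clientIP}|%{GREEDYDATA:request}|%{NUMBER:duration}";
    // the IPv4 part is simplified for illustration.
    static final Pattern LINE = Pattern.compile(
            "(?<clientIP>\\d{1,3}(?:\\.\\d{1,3}){3})\\|(?<request>.*)\\|(?<duration>\\d+)");

    public static void main(String[] args) {
        Matcher m = LINE.matcher("192.168.8.224|GET /index.html|123");
        if (m.matches()) {
            // Each named group corresponds to one grok field
            System.out.println(m.group("clientIP"));  // 192.168.8.224
            System.out.println(m.group("request"));   // GET /index.html
            System.out.println(m.group("duration"));  // 123
        }
    }
}
```

In Logstash itself, grok handles this parsing inside the pipeline; the sketch only shows what the pattern extracts.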

# Give the user ownership
chown -R queylog:queylog /opt/logstash
# Start logstash
/opt/logstash/bin/logstash -f /opt/logstash/bin/logstash.conf

# After logstash is installed, configured, and started, check that the es index was created
curl http://192.168.8.224:9200/_cat/indices

4. Java code

This problem came up earlier when resolving the exception that occurs when Spring Boot integrates both ElasticSearch and Redis.

After consulting references, this explanation of the cause seems the most plausible:
Cause analysis: another part of the program also uses Netty (here, Redis). That interferes with initializing the processor count before the transport client is instantiated. When the transport client is instantiated, it tries to initialize the number of processors again. Because Netty is already in use elsewhere, it has already been initialized, and Netty guards against double initialization, so the first instantiation fails with the IllegalStateException that is observed.

Solution

Add the following to the Spring Boot application class:


System.setProperty("es.set.netty.runtime.available.processors", "false");

4.1 Add the pom dependency


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

4.2 Modify the configuration file


spring.data.elasticsearch.cluster-name=elastic
# The REST API uses port 9200
# Java programs use the transport port 9300
spring.data.elasticsearch.cluster-nodes=192.168.43.211:9300

4.3 The interface and its implementation class


import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;

@Document(indexName = "ind", type = "isp")
public class Bean {

    @Field
    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "Tomcat{" +
                ", message='" + message + '\'' +
                '}';
    }
}


import java.util.Map;

public interface IElasticSearchService {

    Map<String, Object> search(String keywords, Integer currentPage, Integer pageSize) throws Exception;

    // Escape characters that are special to the query_string syntax
    default String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); ++i) {
            char c = s.charAt(i);
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')'
                    || c == ':' || c == '^' || c == '[' || c == ']' || c == '"'
                    || c == '{' || c == '}' || c == '~' || c == '*' || c == '?'
                    || c == '|' || c == '&' || c == '/') {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

}
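To try the escaping behaviour outside of Spring, here is a standalone sketch that copies the character set from the `escape()` default method above (the `EscapeDemo` class name and driver are illustrative assumptions):

```java
public class EscapeDemo {
    // Standalone copy of IElasticSearchService.escape(), so the behaviour
    // can be exercised without a Spring context.
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); ++i) {
            char c = s.charAt(i);
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')'
                    || c == ':' || c == '^' || c == '[' || c == ']' || c == '"'
                    || c == '{' || c == '}' || c == '~' || c == '*' || c == '?'
                    || c == '|' || c == '&' || c == '/') {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // query_string operators are escaped so user input is treated literally
        System.out.println(escape("(error): timeout?"));  // \(error\)\: timeout\?
    }
}
```

Without this, characters like `?` or `:` in user input would be interpreted as query_string operators instead of literal text.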


import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.stereotype.Service;

import javax.annotation.Resource;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * ElasticSearch implementation class
 */
@Service
public class ElasticSearchServiceImpl implements IElasticSearchService {

    Logger log = LoggerFactory.getLogger(ElasticSearchServiceImpl.class);

    @Autowired
    ElasticsearchTemplate elasticsearchTemplate;

    @Resource
    HighlightResultHelper highlightResultHelper;

    @Override
    public Map<String, Object> search(String keywords, Integer currentPage, Integer pageSize) {
        keywords = escape(keywords);
        currentPage = Math.max(currentPage - 1, 0);

        // Highlight the matched keywords in the results
        List<HighlightBuilder.Field> highlightFields = new ArrayList<>();
        HighlightBuilder.Field message = new HighlightBuilder.Field("message")
                .fragmentOffset(80000)
                .numOfFragments(0)
                .requireFieldMatch(false)
                .preTags("<span style='color:red'>")
                .postTags("</span>");
        highlightFields.add(message);

        HighlightBuilder.Field[] highlightFieldsAry =
                highlightFields.toArray(new HighlightBuilder.Field[highlightFields.size()]);

        // Build the query
        NativeSearchQueryBuilder queryBuilder = new NativeSearchQueryBuilder();
        // Filter: when the keyword is not empty, search it against the message field
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        queryBuilder.withPageable(PageRequest.of(currentPage, pageSize));
        if (!MyStringUtils.isEmpty(keywords)) {
            boolQueryBuilder.must(QueryBuilders.queryStringQuery(keywords).field("message"));
        }

        queryBuilder.withQuery(boolQueryBuilder);

        queryBuilder.withHighlightFields(highlightFieldsAry);

        log.info("Query: {}", queryBuilder.build().getQuery().toString());

        // Execute the query
        AggregatedPage<Bean> result = elasticsearchTemplate.queryForPage(
                queryBuilder.build(), Bean.class, highlightResultHelper);
        // Unpack the result
        long total = result.getTotalElements();
        int totalPage = result.getTotalPages();
        List<Bean> blogList = result.getContent();
        Map<String, Object> map = new HashMap<>();
        map.put("total", total);
        map.put("totalPage", totalPage);
        map.put("pageSize", pageSize);
        map.put("currentPage", currentPage + 1);
        map.put("blogList", blogList);
        return map;
    }
}
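One detail worth noting in `search()`: callers pass a 1-based page number, while Spring Data's `PageRequest.of()` expects a 0-based index, hence `Math.max(currentPage - 1, 0)`. A minimal sketch of that conversion (the `PageIndexDemo` class name is an illustrative assumption):

```java
public class PageIndexDemo {
    // Mirrors the page-number conversion in ElasticSearchServiceImpl.search():
    // callers pass 1-based pages, PageRequest.of() expects 0-based indices.
    static int toZeroBased(int currentPage) {
        return Math.max(currentPage - 1, 0);
    }

    public static void main(String[] args) {
        System.out.println(toZeroBased(1)); // 0  (first page)
        System.out.println(toZeroBased(3)); // 2
        System.out.println(toZeroBased(0)); // 0  (guards against invalid input)
    }
}
```

The `max` guard also explains why the result map puts back `currentPage + 1`: the service converts to 0-based internally but reports the 1-based page to the caller.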


78

import com.alibaba.fastjson.JSONObject;
import org.apache.commons.beanutils.PropertyUtils;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.domain.Pageable;
import org.springframework.data.elasticsearch.core.SearchResultMapper;
import org.springframework.data.elasticsearch.core.aggregation.AggregatedPage;
import org.springframework.data.elasticsearch.core.aggregation.impl.AggregatedPageImpl;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;

import java.lang.reflect.InvocationTargetException;
import java.util.ArrayList;
import java.util.List;

/**
 * ElasticSearch highlight mapping
 */
@Component
public class HighlightResultHelper implements SearchResultMapper {

    Logger log = LoggerFactory.getLogger(HighlightResultHelper.class);

    @Override
    public <T> AggregatedPage<T> mapResults(SearchResponse response, Class<T> clazz, Pageable pageable) {
        List<T> results = new ArrayList<>();
        for (SearchHit hit : response.getHits()) {
            if (hit != null) {
                T result = null;
                if (StringUtils.hasText(hit.getSourceAsString())) {
                    result = JSONObject.parseObject(hit.getSourceAsString(), clazz);
                }
                // Copy each highlighted fragment onto the mapped bean
                for (HighlightField field : hit.getHighlightFields().values()) {
                    try {
                        PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
                    } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
                        log.error("Failed to set highlight field: {}", e.getMessage(), e);
                    }
                }
                results.add(result);
            }
        }
        return new AggregatedPageImpl<T>(results, pageable, response.getHits().getTotalHits(),
                response.getAggregations(), response.getScrollId());
    }

    public <T> T mapSearchHit(SearchHit searchHit, Class<T> clazz) {
        T result = null;
        if (StringUtils.hasText(searchHit.getSourceAsString())) {
            result = JSONObject.parseObject(searchHit.getSourceAsString(), clazz);
        }
        for (HighlightField field : searchHit.getHighlightFields().values()) {
            try {
                PropertyUtils.setProperty(result, field.getName(), concat(field.fragments()));
            } catch (IllegalAccessException | InvocationTargetException | NoSuchMethodException e) {
                log.error("Failed to set highlight field: {}", e.getMessage(), e);
            }
        }
        // The original accumulated results but always returned null; return the mapped hit
        return result;
    }

    private String concat(Text[] texts) {
        StringBuilder sb = new StringBuilder();
        for (Text text : texts) {
            sb.append(text.toString());
        }
        return sb.toString();
    }
}
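The `concat()` helper simply joins the highlight fragments Elasticsearch returns into a single string. A standalone sketch, with plain `String`s standing in for Elasticsearch's `Text` fragments (the `ConcatFragmentsDemo` class name is an illustrative assumption):

```java
public class ConcatFragmentsDemo {
    // Standalone version of HighlightResultHelper.concat(): highlight results
    // arrive as an array of fragments that are joined into one string.
    static String concat(String[] fragments) {
        StringBuilder sb = new StringBuilder();
        for (String f : fragments) {
            sb.append(f);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Fragments as they might come back with the pre/post tags configured above
        String[] fragments = {
                "connection ",
                "<span style='color:red'>Exception</span>",
                " at line 42"
        };
        System.out.println(concat(fragments));
    }
}
```

Since `numOfFragments(0)` is set in the query, Elasticsearch usually returns the whole field as one fragment; the join covers the general case.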


import com.alibaba.fastjson.JSON;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.Map;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = CbeiIspApplication.class)
public class ElasticSearchServiceTest {

    private static Logger logger = LoggerFactory.getLogger(ElasticSearchServiceTest.class);

    @Autowired
    private IElasticSearchService elasticSearchService;

    @Test
    public void getLog() {
        try {
            Map<String, Object> search = elasticSearchService.search("Exception", 1, 10);
            logger.info(JSON.toJSONString(search));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

That's all for today. This article is only a brief introduction to using Elasticsearch and Logstash; if anything here is inaccurate, feel free to point it out in the comments.

This concludes the article on Elasticsearch + Logstash with log search implemented in Java. For more on log search with Elasticsearch and Logstash, please search earlier articles or continue browsing the related articles below.

原文链接:https://blog.csdn.net/qq_41175749/article/details/113877527
