Log Analysis Project

Published: 2016-03-12 · Category: Big Data

Project structure:

1. Start the Tengine service

[root@node1 tengine-2.1.0]# ./sbin/nginx
[root@node1 tengine-2.1.0]# service nginx status
nginx (pid 18094 18093) is running...

2. Configure the virtual hosts in /opt/modules/tengine-2.1.0/conf/nginx.conf

server {
    listen      192.168.230.10:8099;
    server_name www.sparsematrix.com;

    location / {
        root  html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

server {
    listen      192.168.230.10:8089;
    server_name www.sparsematrix.com;

    location / {
        root  /opt/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

2.1. Configure the nginx access-log location in /opt/modules/tengine-2.1.0/conf/nginx.conf

access_log  logs/access.log;

3. Open http://192.168.230.10:8099/ in the browser

4. Start the web project

4.1. Project structure

5. Open http://localhost/BIG_DATA_LOG2/demo.jsp in the browser

Then open http://localhost/BIG_DATA_LOG2/demo3.jsp

6. Monitor the log in real time

[root@node1 tengine-2.1.0]# tail -f logs/access.log

7. Configure the log output format and log directory in /opt/modules/tengine-2.1.0/conf/nginx.conf

log_format my_format '$remote_addr^A$msec^A$http_host^A$request_uri';

server {
    listen      192.168.230.10:8099;
    server_name www.sparsematrix.com;

    location /log.gif {
        root       html;
        index      index.html index.htm;
        access_log /opt/data/access.log my_format;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

server {
    listen      192.168.230.10:8089;
    server_name www.sparsematrix.com;

    location / {
        root  /opt/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

7.1. Watch the nginx log output

[root@node1 tengine-2.1.0]# tail -f /opt/data/access.log
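Each line written by my_format is four fields ($remote_addr, $msec, $http_host, $request_uri) joined by the "^A" separator as written in the log_format above (some setups use the raw Ctrl-A byte "\x01" instead). A minimal Python sketch of parsing one such line; the sample value is made up for illustration:

```python
# Parse one my_format line from /opt/data/access.log.
# Assumes the separator is the literal two-character string "^A",
# exactly as it appears in the nginx log_format directive above.
SEP = "^A"

def parse_access_line(line):
    """Split a my_format log line into named fields."""
    remote_addr, msec, http_host, request_uri = line.rstrip("\n").split(SEP)
    return {
        "remote_addr": remote_addr,
        "msec": float(msec),  # request time: seconds with millisecond precision
        "http_host": http_host,
        "request_uri": request_uri,
    }

# Hypothetical line in the shape nginx would emit:
sample = "192.168.230.1^A1457760000.123^Awww.sparsematrix.com^A/log.gif"
print(parse_access_line(sample))
```

If the separator in your file is the real \x01 byte, change SEP accordingly.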

7.2. In /opt/modules/flume-1.6.0/option2, configure the directory Flume actively collects data from

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spoolDir
a1.sources.r1.spoolDir = /opt/data/
a1.sources.r1.ignorePattern = ^(.)*\\.tmp$

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://matrix/usr/flume/%Y-%m-%d/%H-%M
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 10240
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 5
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.callTimeout = 360000


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
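With round=true, roundValue=5, and roundUnit=minute, the %H-%M portion of hdfs.path is rounded down to the start of the current 5-minute window, so events are bucketed into 5-minute HDFS directories (useLocalTimeStamp=true supplies the timestamp). A small Python sketch of that bucketing, reusing the hdfs://matrix/usr/flume prefix from the config above:

```python
from datetime import datetime

def hdfs_bucket_path(dt, round_minutes=5):
    """Mimic Flume's %Y-%m-%d/%H-%M path escaping with
    round=true, roundValue=5, roundUnit=minute: the event
    timestamp is rounded *down* to its 5-minute window."""
    minute = dt.minute - dt.minute % round_minutes
    return "hdfs://matrix/usr/flume/{:%Y-%m-%d}/{:02d}-{:02d}".format(
        dt, dt.hour, minute)

# An event at 14:23 lands in the 14-20 directory:
print(hdfs_bucket_path(datetime(2016, 3, 12, 14, 23)))
# → hdfs://matrix/usr/flume/2016-03-12/14-20
```

Together with rollSize=10240 (and rollInterval/rollCount disabled), files inside each bucket roll only when they reach 10 KB.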

7.3. With this configuration, Flume actively collects the log files nginx produces under /opt/data

Author: Matrix
Original link: https://matrixsparse.github.io/2016/03/12/日志分析项目/
Copyright: Unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit the source when reposting!
