
[Hadoop 15] Hadoop Counter

Published: 2015-05-30   Author: bit1129   Source: repost

Abstract: The built-in counter output of a map-only MapReduce job, followed by the output of a job with both map and reduce tasks.

 

1. Map-only MapReduce Job

	File System Counters
		FILE: Number of bytes read=3629530
		FILE: Number of bytes written=98312
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=8570654
		HDFS: Number of bytes written=1404469
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=14522
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=14522
		Total vcore-seconds taken by all map tasks=14522
		Total megabyte-seconds taken by all map tasks=14870528
	Map-Reduce Framework
		Map input records=7452
		Map output records=7452
		Input split bytes=146
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=241
		CPU time spent (ms)=9750
		Physical memory (bytes) snapshot=184406016
		Virtual memory (bytes) snapshot=893657088
		Total committed heap usage (bytes)=89653248
	File Input Format Counters 
		Bytes Read=8570508
	File Output Format Counters 
		Bytes Written=1404469
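
The listing above comes from a job that runs only map tasks, so the Job Counters group has no reduce entries and no shuffle-related counters appear. As a rough sketch (not the original post's code), the driver below runs such a map-only job with the identity Mapper and then prints every counter group via Job.getCounters(); the class name MapOnlyCounterDemo and the input/output paths taken from args are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.CounterGroup;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.TaskCounter;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyCounterDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "map-only-counter-demo");
        job.setJarByClass(MapOnlyCounterDemo.class);

        // The identity mapper is enough to produce the counters listed above.
        job.setMapperClass(Mapper.class);
        // Zero reduce tasks makes this a map-only job, so the Job Counters
        // group contains no reduce entries.
        job.setNumReduceTasks(0);

        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);

        // Print every counter group, which reproduces a listing like the one above.
        for (CounterGroup group : job.getCounters()) {
            System.out.println(group.getDisplayName());
            for (Counter counter : group) {
                System.out.println("\t" + counter.getDisplayName() + "=" + counter.getValue());
            }
        }
        // Individual built-in counters can also be looked up by enum.
        long mapInputRecords =
                job.getCounters().findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
        System.out.println("Map input records=" + mapInputRecords);
    }
}

Because there is no reduce phase, the map output goes straight to the FileOutputFormat, which is why "HDFS: Number of bytes written" and "File Output Format Counters / Bytes Written" show the same value (1404469) in the listing above.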

 

2. MapReduce Job with Both Map and Reduce Tasks

 

	File System Counters
		FILE: Number of bytes read=879582
		FILE: Number of bytes written=198227
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=2729649
		HDFS: Number of bytes written=265
		HDFS: Number of read operations=7
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=7071
		Total time spent by all reduces in occupied slots (ms)=7804
		Total time spent by all map tasks (ms)=7071
		Total time spent by all reduce tasks (ms)=7804
		Total vcore-seconds taken by all map tasks=7071
		Total vcore-seconds taken by all reduce tasks=7804
		Total megabyte-seconds taken by all map tasks=7240704
		Total megabyte-seconds taken by all reduce tasks=7991296
	Map-Reduce Framework
		Map input records=20
		Map output records=1
		Map output bytes=167
		Map output materialized bytes=182
		Input split bytes=139
		Combine input records=1
		Combine output records=1
		Reduce input groups=1
		Reduce shuffle bytes=182
		Reduce input records=1
		Reduce output records=1
		Spilled Records=2
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=122
		CPU time spent (ms)=3620
		Physical memory (bytes) snapshot=451244032
		Virtual memory (bytes) snapshot=1823916032
		Total committed heap usage (bytes)=288882688
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=2729510
	File Output Format Counters 
		Bytes Written=265
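
In a full map-plus-reduce job the framework adds the Reduce*, Combine*, Shuffled Maps and Shuffle Errors counters shown above. Besides these built-in groups, user-defined counters can be registered and will appear as one more group in the same listing. The sketch below is illustrative rather than taken from the post; CountingMapper and the RecordQuality enum are made-up names, while context.getCounter(...).increment(1) is the standard MapReduce counter call.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // Each enum constant becomes one counter in a group named after the enum class.
    public enum RecordQuality { VALID, MALFORMED }

    private static final IntWritable ONE = new IntWritable(1);
    private final Text outKey = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString().trim();
        if (line.isEmpty()) {
            // Counter increments are collected per task and aggregated by the
            // framework into the job-level totals printed at the end of the job.
            context.getCounter(RecordQuality.MALFORMED).increment(1);
            return;
        }
        context.getCounter(RecordQuality.VALID).increment(1);
        outKey.set(line);
        context.write(outKey, ONE);
    }
}

After job.waitForCompletion(true) returns, the driver can read such a value with job.getCounters().findCounter(CountingMapper.RecordQuality.MALFORMED).getValue(), just as the built-in TaskCounter values are read.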

 

 

 
