Hadoop Interview Questions (5)

2018-12-04 16:52

The default partitioner computes a hash value for the key and assigns the partition based on this result.

Q10. What is a Combiner?

The Combiner is a "mini-reduce" process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.

Q11. Give an example scenario where a combiner can be used and where it cannot be used.

There can be several examples; the following are the most common ones:

- Scenario where you can use a combiner: getting the list of distinct words in a file, or counting word occurrences (summing partial counts is associative, so pre-aggregating on each node is safe; see the sketch below).

- Scenario where you cannot use a combiner: calculating the mean of a list of numbers, because the mean of per-node means is not the overall mean.
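As an illustration of the first scenario, here is a minimal word-count sketch using the old org.apache.hadoop.mapred API that this question set assumes; the class name and path handling are illustrative. Because summing is associative and commutative, the reducer class can safely double as the combiner:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCount {

      // Emits (word, 1) for every word in the input split.
      public static class Map extends MapReduceBase
          implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, ONE);
          }
        }
      }

      // Sums counts per word; usable as both combiner and reducer
      // because addition is associative and commutative.
      public static class Reduce extends MapReduceBase
          implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();
          }
          output.collect(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class); // pre-aggregates on each node
        conf.setReducerClass(Reduce.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
      }
    }

With the combiner in place, each node sends one partial count per word to the reducers instead of one record per word occurrence, which sharply cuts shuffle traffic.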

Q12. What is the Job Tracker?

The Job Tracker is the service within Hadoop that runs MapReduce jobs on the cluster.

Q13. What are some typical functions of the Job Tracker?

The following are some typical tasks of the Job Tracker:

- It accepts jobs from clients.

- It talks to the NameNode to determine the location of the data.

- It locates TaskTracker nodes with available slots at or near the data.

- It submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers.

Q14. What is a Task Tracker?

A Task Tracker is a node in the cluster that accepts tasks - like Map, Reduce and Shuffle operations - from a JobTracker.

Q15. What is the relationship between Jobs and Tasks in Hadoop?

One job is broken down into one or many tasks in Hadoop.

Q16. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?

It will restart the task on some other TaskTracker, and only if the task fails more than four times (the default setting, which can be changed) will it kill the job.

Q17. Hadoop achieves parallelism by dividing the tasks across many nodes, so it is possible for a few slow nodes to rate-limit the rest of the program and slow down the program. What mechanism does Hadoop provide to combat this?

Speculative Execution.

Q18. How does speculative execution work in Hadoop?

The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If the other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.

Q19. Using the command line in Linux, how will you (a) see all jobs running in the Hadoop cluster and (b) kill a job?

- hadoop job -list

- hadoop job -kill <job-id>

Q20. What is Hadoop Streaming?

Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

Q21. What is the characteristic of the Streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk, etc.?

Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout; a sketch of this contract follows below.
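To illustrate the contract itself (not the Streaming launcher), here is a hypothetical word-count mapper written as a plain stdin/stdout program; the same logic could be written in Perl, Ruby, or Awk, since Streaming only cares about the line-oriented protocol:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Hypothetical Streaming mapper: reads raw lines on stdin and writes
    // tab-separated (key, value) pairs on stdout, which is the default
    // convention Hadoop Streaming uses to delimit keys from values.
    public class StreamingWordCountMapper {
      public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
          for (String token : line.trim().split("\\s+")) {
            if (!token.isEmpty()) {
              System.out.println(token + "\t1"); // key <TAB> value
            }
          }
        }
      }
    }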

Q22. What is the Distributed Cache in Hadoop?

Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
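A minimal usage sketch with the old org.apache.hadoop.filecache.DistributedCache API; the HDFS path and the lookup-file use case are hypothetical:

    import java.net.URI;

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf(CacheExample.class);
        // Driver side: register a file that already lives in HDFS; the
        // framework copies it to every slave node before any task of this
        // job runs there.
        DistributedCache.addCacheFile(URI.create("/user/hadoop/lookup.txt"), conf);
        // ... set mapper/reducer/input/output as usual, then submit the job.
      }
    }

    // Task side (e.g. inside a Mapper's configure(JobConf job) method):
    //   Path[] cached = DistributedCache.getLocalCacheFiles(job);
    //   // cached[0] now points to the node-local copy of lookup.txt.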

Q23. What is the benefit of the Distributed Cache? Why can't we just have the file in HDFS and have the application read it?

Because the Distributed Cache is much faster: it copies the file to every Task Tracker once, at the start of the job. If a Task Tracker then runs 10 or 100 mappers or reducers, they all use that same local copy. On the other hand, if your MapReduce code reads the file from HDFS directly, every mapper opens it again, so a Task Tracker running 100 map tasks reads the file from HDFS 100 times. HDFS is also not very efficient when used like this.

Q24. What mechanism does the Hadoop framework provide to synchronize changes made in the Distributed Cache during the runtime of the application?

This is a trick question. There is no such mechanism. The Distributed Cache is by design read-only during the time of job execution.

Q25. Have you ever used Counters in Hadoop? Give us an example scenario.

Anybody who claims to have worked on a Hadoop project is expected to have used counters; a common scenario is counting malformed records while parsing input, as sketched below.
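A sketch of that scenario using the old API's Reporter; the CSV format, the field count, and the counter group/names are all hypothetical choices for this example:

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Counts good vs. malformed CSV records while parsing, instead of
    // failing the whole job on bad input. The framework aggregates the
    // counters across all tasks and shows them in the job's counter report.
    public class ParseMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      public void map(LongWritable key, Text value,
          OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        String[] fields = value.toString().split(",");
        if (fields.length < 3) {
          // Record the bad line in a custom counter and skip it.
          reporter.incrCounter("DataQuality", "MALFORMED_RECORDS", 1);
          return;
        }
        reporter.incrCounter("DataQuality", "GOOD_RECORDS", 1);
        output.collect(new Text(fields[0]), new IntWritable(1));
      }
    }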

Q26. Is it possible to provide multiple inputs to Hadoop? If yes, then how can you give multiple directories as input to the Hadoop job?

Yes. The FileInputFormat class provides methods to add multiple directories as input to a Hadoop job, as sketched below.
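A driver-side sketch with the old API; the directory paths are illustrative:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class MultiInputExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf(MultiInputExample.class);
        // Each call appends one more directory to the job's input path list.
        FileInputFormat.addInputPath(conf, new Path("/data/logs/2018-12-01"));
        FileInputFormat.addInputPath(conf, new Path("/data/logs/2018-12-02"));
        // Alternatively, set several comma-separated paths in one call:
        // FileInputFormat.setInputPaths(conf,
        //     "/data/logs/2018-12-01,/data/logs/2018-12-02");
        // ... set mapper/reducer/output as usual, then submit the job.
      }
    }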

Q27. Is it possible to have Hadoop job output in multiple directories? If yes, then how?

Yes, by using the MultipleOutputs class; a sketch follows below.
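A sketch with the old org.apache.hadoop.mapred.lib.MultipleOutputs API; the named output "errors" and the routing rule are hypothetical:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    // Routes totals for keys starting with "ERR" to a separate named
    // output, while everything else goes to the job's default output.
    public class RoutingReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

      private MultipleOutputs mos;

      public void configure(JobConf job) {
        mos = new MultipleOutputs(job);
      }

      @SuppressWarnings("unchecked")
      public void reduce(Text key, Iterator<IntWritable> values,
          OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        if (key.toString().startsWith("ERR")) {
          // Records for this key land in files of the "errors" named output.
          mos.getCollector("errors", reporter).collect(key, new IntWritable(sum));
        } else {
          output.collect(key, new IntWritable(sum));
        }
      }

      public void close() throws IOException {
        mos.close(); // flush all named outputs
      }
    }

    // Driver side, before submitting the job:
    //   MultipleOutputs.addNamedOutput(conf, "errors",
    //       TextOutputFormat.class, Text.class, IntWritable.class);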

Q28. What will a Hadoop job do if you try to run it with an output directory that is already present? Will it:

- overwrite it

- warn you and continue

- throw an exception and exit

The Hadoop job will throw an exception and exit.

Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?

This is a trick question. You cannot set it: the number of map tasks is determined by the number of input splits, not by a job setting.

Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?

You can either do it programmatically, by using the setNumReduceTasks method of the JobConf class, or set it up as a configuration setting (mapred.reduce.tasks); both options are sketched below.
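A driver-side sketch of both options with the old API; the count of 10 is arbitrary:

    import org.apache.hadoop.mapred.JobConf;

    public class ReducerCountExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf(ReducerCountExample.class);

        // Option 1: set the reducer count programmatically.
        conf.setNumReduceTasks(10);

        // Option 2: set it as a configuration property instead.
        // Equivalent on the command line: -D mapred.reduce.tasks=10
        conf.setInt("mapred.reduce.tasks", 10);

        // ... set mapper/reducer/input/output as usual, then submit the job.
      }
    }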

