
Tag: Yarn

Yarn Job Failed with Error: “Split metadata size exceeded 10000000”

When you run a very large Hive job, it may fail with the following error: This indicates that the value of mapreduce.job.split.metainfo.maxsize (default 10000000) is too small for your job. There are two options to fix this: 1. Set the value of mapreduce.job.split.metainfo.maxsize to “-1” (unlimited) specifically …
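The per-job option mentioned above can be applied from within a Hive session; a minimal sketch, assuming a Hive CLI or Beeline session:

```sql
-- Hedged sketch: disable the split-metadata size check for the current
-- session only; -1 means no limit (the cluster default is unchanged)
SET mapreduce.job.split.metainfo.maxsize=-1;
```

Setting it per session avoids raising the limit cluster-wide for jobs that do not need it.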

Hive query failed with error: Killing the Job. mapResourceReqt: 1638 maxContainerCapability:1200

This article explains how to fix the following error when running a Hive query: The error may not be obvious; it is caused by the following YARN settings not being configured properly: mapreduce.map.memory.mb = 1638, yarn.scheduler.maximum-allocation-mb = 1200, yarn.nodemanager.resource.memory-mb = 1300. The solution is to change the above …
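A consistent layout for those three settings might look like the fragment below. The values are illustrative only; the point is that the per-map request fits inside the maximum container size, which in turn fits within the node's memory:

```xml
<!-- mapred-site.xml: memory requested per map-task container -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>

<!-- yarn-site.xml: largest container the scheduler will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>

<!-- yarn-site.xml: total memory the NodeManager offers to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
```

With 1024 ≤ 2048 ≤ 4096, a map task's request can always be satisfied by a schedulable container on the node.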

Hive query failed with error: Killing the Job. mapResourceReqt: 1638 maxContainerCapability:1200

When running a Hive query, you get the following error in the JobHistory: This is caused by the following settings in YARN: The solution is to set up the settings mentioned above so that: mapreduce.map.memory.mb <= yarn.scheduler.maximum-allocation-mb <= yarn.nodemanager.resource.memory-mb Then the problem should be resolved.

My new Snowflake blog is now live. I will not be updating this blog anymore, but will continue with new content in the Snowflake world!