“No data or no sasl data in the stream” Error in HiveServer2 Log


I have seen many users complain about large numbers of “No data or no sasl data in the stream” errors in the HiveServer2 server log, even though they notice no performance impact and no query failures in Hive. So I think it is worth writing a blog post to explain the likely cause and clear up those concerns. The following shows the full error message and stack trace taken from the HiveServer2 log:
 
ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-533556]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:765)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:762)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1687)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:762)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
The likely cause is a combination of the following:
  1. You have Kerberos enabled
  2. You have multiple HiveServer2 hosts
  3. You have a Load Balancer in front of all the HS2 servers that show such errors
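To see why a load balancer's connectivity check triggers this error, consider a minimal sketch (a local stand-in server, not a real HiveServer2): the SASL server transport expects the client to begin negotiation by sending a small header, but a plain TCP health-check probe connects and disconnects without sending any bytes, so the server reads end-of-stream and reports exactly this error.

```python
import socket
import threading

def hs2_like_server(sock, results):
    # Stand-in for the SASL negotiation: HiveServer2 expects the client
    # to send negotiation bytes right after connecting.
    conn, _ = sock.accept()
    data = conn.recv(5)
    if not data:
        # A probe that connects and closes sends nothing, so the read
        # returns EOF -- the situation Thrift reports as this error.
        results.append("No data or no sasl data in the stream")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # any free port stands in for HS2's 10000
server.listen(1)
results = []
t = threading.Thread(target=hs2_like_server, args=(server, results))
t.start()

# The "load balancer ping": open a plain TCP connection, then close it.
probe = socket.create_connection(server.getsockname())
probe.close()

t.join()
server.close()
print(results[0])
```

This is why the error appears on every health-check interval yet real Hive clients, which do complete the SASL handshake, are unaffected.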
If you have the above setup, the error messages you see in HiveServer2 are harmless and can be safely ignored. They simply indicate that SASL negotiation failed for one particular Hive client, which in this case is the Load Balancer that regularly pings those HiveServer2 hosts to check for connectivity. Those pings from the LB use a plain TCP connection, hence the messages. There are a couple of ways to reduce or avoid them:
  1. Reduce the frequency of the pings from the LB. This will reduce the number of errors in the log, but will not eliminate them. I do not know of a way to configure the LB to avoid plain TCP connections; that is outside the scope of this blog, so you might need to consult the F5 or HAProxy manual for further information.
  2. Add a filter to HiveServer2’s logging to filter out those exceptions:
  a. Using Cloudera Manager, navigate to Hive > Configuration > “HiveServer2 Logging Advanced Configuration Snippet (Safety Valve)”
  b. Copy and paste the following configuration into the safety valve:
log4j.appender.RFA.filter.1=org.apache.log4j.filter.ExpressionFilter 
log4j.appender.RFA.filter.1.Expression=EXCEPTION ~= org.apache.thrift.transport.TSaslTransportException 
log4j.appender.RFA.filter.1.AcceptOnMatch=false
c. Then save and restart the HiveServer2 service through Cloudera Manager.

Hope the above helps.
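Note that the safety-valve snippet above uses log4j 1.x syntax. On Hive versions that ship with log4j2 (i.e. a conf/hive-log4j2.properties file), ExpressionFilter is not available. One alternative sketch, with the logger name `thriftserver` being my own hypothetical choice, is to raise the log level of the Thrift server class that emits this error:

```
# Sketch for conf/hive-log4j2.properties (log4j2 properties syntax).
# If your file already declares a "loggers" line, append "thriftserver"
# to it instead of adding a new one.
loggers = thriftserver

# Suppress ERROR-level noise from the Thrift server class; note this
# also hides other, possibly genuine, errors from the same class.
logger.thriftserver.name = org.apache.thrift.server.TThreadPoolServer
logger.thriftserver.level = FATAL
```

The trade-off is coarser granularity: unlike the ExpressionFilter approach, this hides every error logged by TThreadPoolServer, not just the SASL one.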

6 Comments

  1. HRISHI

    Hi, thanks for this post.
    I have Hive set up on UAT and prod, with no Kerberos enabled and authentication set to NONE in hive-site.xml, and a load balancer in front of both UAT and prod.
    In UAT I use HAProxy as the load balancer, but in prod I am using a VIP.
    I don’t get these messages even once in UAT, but on prod I get them every 3-6 seconds.
    According to your analysis it seems to be the load balancer pinging, but why does it not happen even once in UAT?
    Please let me know your thoughts on this.

    1. Eric Lin

      Hi Hrishi,

      Thanks for visiting and posting a comment on my blog. However, I am not familiar with VIPs, so I can’t comment much. Maybe get some help from your VIP admin? There must be some setting that controls the regular pinging and checking from the VIP.

      Sorry, I can’t help much.

      Cheers

  2. hazhir

    Hi Eric,

    Thank you for your post. In my case I had Keepalived in front, checking the Hive port every 5 seconds using the nc command. I changed the check script to use a MapR-specific command and the error is gone.

    1. Eric Lin

      Hi Hazhir,

      Thanks for posting a comment on my blog, and also for sharing your particular scenario. I think it should help others who run into the same issue.

      Cheers

  3. Jack

    Hi Eric,
    I use the following configuration in my conf/hive-log4j2.properties, but it does not work:
    log4j.appender.RFA.filter.1=org.apache.log4j.filter.ExpressionFilter
    log4j.appender.RFA.filter.1.Expression=EXCEPTION ~= org.apache.thrift.transport.TSaslTransportException
    log4j.appender.RFA.filter.1.AcceptOnMatch=false

    How can I filter the logs with this configuration?
