Memory requirement

Daniel Chan
We have a ZooKeeper (3.4.6) data store with
zk_approximate_data_size 1.88G
zk_znode_count 4.43 million

99% of the znodes have a dataLen of around 600 bytes.

The ZooKeeper instance is configured with "-Xms4G -Xmx4G", but it fails on startup.

Is there any way to project the memory requirement for running ZooKeeper from its data size? Something like 2X or 3X of the data size?

Thanks,
Daniel

Re: Memory requirement

Shawn Heisey
On 4/26/2017 12:25 PM, Daniel Chan wrote:
> We have a ZooKeeper (3.4.6) data store with
> zk_approximate_data_size 1.88G
> zk_znode_count 4.43 million
>
> 99% of the znodes have a dataLen of around 600 bytes.
>
> The ZooKeeper instance is configured with "-Xms4G -Xmx4G", but it fails on startup.
>
> Is there any way to project the memory requirement for running ZooKeeper from its data size? Something like 2X or 3X of the data size?

Disclaimer: I'm technically deficient on ZK in many ways.  I hang out
here because the project where I *do* know a thing or two (Solr) uses
ZK.  To the best of my knowledge, what I am saying below is correct, but
I could be wrong.

Even without knowing all that much about ZK or what its memory
requirements are, the first thing that comes to mind is that your
problem description of "it failed on startup" is extremely vague.  What
happens *exactly*?  If there are error messages logged, can you give us
those, including the full Java stacktrace if it's present?  Maybe you
are correct to think that there's not enough memory, but without error
messages, it's not possible to say for sure.

I see that the day before you sent this message, you sent another
message where you DID include an error message seen at startup:

java.lang.OutOfMemoryError: GC overhead limit exceeded

My research on this error says the following: if this is the error you
are getting when you start with a 4G heap, you haven't completely run
out of heap, but the amount needed is so close to the 4G you've
specified that Java is spending nearly all of its time on garbage
collection without freeing very much memory.  By default, the JVM
throws this error when more than 98% of its time goes to GC while
recovering less than 2% of the heap.
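
As a very rough back-of-envelope, and note that the per-znode overhead
figure below is my assumption, not a measured number: on top of the raw
data, each znode costs the JVM some object overhead (the DataNode, its
Stat fields, path strings, hash table entries, and parent/child
bookkeeping), which I'd guess at a few hundred bytes apiece.

    raw znode data (zk_approximate_data_size)      ~1.88 GB
    overhead: 4,430,000 znodes x ~300 B            ~1.33 GB
    estimated live heap                            ~3.2 GB

That would leave well under 1 GB of headroom in a 4G heap, which fits
the GC-thrashing picture.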

The first thing I would try is running it with an 8G heap to see
whether that works.  Based on the discussion on the thread where you
opened ZOOKEEPER-2714, with a dataset that large, startup may be VERY
slow even when there is enough memory to avoid massive GC overhead.
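
If you want to try the larger heap, the usual way to set it for
ZooKeeper 3.4 is a conf/java.env file next to zoo.cfg; the startup
scripts source it if it exists.  Double-check the variable name against
your own zkEnv.sh, but something like:

    # conf/java.env -- sourced by zkEnv.sh when the server starts
    export JVMFLAGS="-Xms8g -Xmx8g"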

Further discussion:  The second thing that comes to mind, and this will
need to be addressed by someone with intimate knowledge, is that your
data size makes me wonder if ZK is the correct choice for whatever you
are doing.  It is not my intent to badmouth the project, but there are
certain things that it does not handle well.  If the amount of data that
is changed and/or read at any given moment is very small, then you MIGHT
be OK aside from a very slow startup.

Thanks,
Shawn
