mysqld memory usage over time

TL;DR: Why does mysqld seem to just consume more and more memory over time and never give it back (even after stress tests finish)?

This is all for testing/demo/POC type work. I realize that ultimately separating the DB out and moving to a much more scalable architecture is the right target state, and I'll be ready for that when the time comes. I also know that running everything on a bare-minimum VM has limitations and that throwing more resources at it is always an option.

All that aside, right now I'm running an AWS EC2 instance with 1GB of RAM.

Quick specs:
– 1 GB RAM
– Ubuntu: 18.04.4 LTS
– Apache: 2.4.29 (prefork)
– MySQL: mysqld 8.0.22
– PHP: 7.2.24

I'm monitoring RAM usage with a 1-minute cron job that runs a combination of free -m and ps_mem.py (https://github.com/pixelb/ps_mem). A sketch of that kind of logging is below.
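For reference, a minimal sketch of what per-minute RSS logging for mysqld could look like. The log path, script layout, and the single-mysqld-process assumption are illustrative only, not taken from the actual setup:

```python
#!/usr/bin/env python3
"""Append mysqld resident memory (RSS) to a log file; meant to be run from cron every minute."""
import datetime
import os

LOG_PATH = "/var/log/mysqld_rss.log"  # hypothetical path; adjust to somewhere writable


def mysqld_rss_kib():
    """Return the VmRSS of the first process whose comm is 'mysqld', in KiB (or None)."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() != "mysqld":
                    continue
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])  # /proc reports this in kB
        except (FileNotFoundError, ProcessLookupError):
            continue  # process exited between listdir() and open()
    return None


if __name__ == "__main__":
    rss = mysqld_rss_kib()
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOG_PATH, "a") as log:
        log.write(f"{stamp} {rss if rss is not None else 'mysqld-not-found'}\n")
```

A crontab line along the lines of `* * * * * /usr/local/bin/log_mysqld_rss.py` (path made up here) would produce the same one-sample-per-minute series as the free -m / ps_mem.py combination.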

Relevant graph available here:
https://imgur.com/a/UKJYd3N

My biggest concern is MySQL's memory usage over time. Before I was really monitoring it, it had been running for a few days/weeks and had slowly worked its way up to 510MB, where it sat pretty consistently. Then I ran a jmeter stress test of about 10,000 connections per minute (for 5 minutes) against a PHP page that queries the database (a pretty basic query with no changing parameters). MySQL's memory usage jumped to 545MB, then up to 550MB a few minutes after the stress test was over, and then it just sat at 550MB even though there was no activity on the system. After a restart it dropped from 550MB to 388MB.

There are basically no connections to this server (only I'm testing on it right now), yet memory slowly began to creep up again and reached 425MB with basically no activity. I then ran a much smaller jmeter test of about 170 requests per minute for 16 hours, and memory went from 425MB to 445MB. Now, 10 hours later (with no activity), it's up to 465MB. On a graph it's a very slow, linear growth since the restart.

I realize that right after a restart isn't a good measure because nothing is cached and MySQL builds up buffers and whatnot as people use the system. However, what concerns me is that usage basically never goes down, even hours after a stress test with no activity at all. I guess it did kind of plateau at 510MB previously, but then it jumped to 545MB and sat there. Who is to say it wouldn't have gone to 600MB and then 650MB if I'd kept running tests, and never dropped back down? In my mind, I would expect memory usage to reflect current system activity, not some aggregate of activity over time.
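For what it's worth, one way to see where that resident memory actually lives is to ask MySQL itself: 8.0 exposes memory instrumentation through performance_schema and the sys schema views built on top of it. A rough sketch, assuming the mysql command-line client is installed on the box and credentials are available (e.g. via ~/.my.cnf) — both assumptions, not part of the original setup description:

```python
#!/usr/bin/env python3
"""Dump MySQL's own view of its current memory allocations via the sys schema."""
import subprocess

QUERIES = [
    # Total memory the server believes it has currently allocated.
    "SELECT * FROM sys.memory_global_total;",
    # Top allocators by currently allocated bytes (buffer pool, caches, etc.).
    "SELECT event_name, current_alloc "
    "FROM sys.memory_global_by_current_bytes LIMIT 15;",
]

for query in QUERIES:
    print(f"-- {query}")
    # Assumes the mysql CLI can authenticate without prompting (e.g. ~/.my.cnf).
    subprocess.run(["mysql", "--table", "-e", query], check=True)
```

If the InnoDB buffer pool and various caches account for most of the total, the slow climb that never reverses is expected behaviour: InnoDB grows the buffer pool up to innodb_buffer_pool_size and keeps it allocated rather than handing it back to the OS when load drops.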

I haven't seen any good explanations or documentation on how this is supposed to work. How can I test whether MySQL will eventually top out, or will it never? I'm trying to figure out whether, with 1GB of RAM, I can support X users, and with 4GB I can support 4X users, etc. I know it was a simple test, but I was shocked at how easily the system handled 10k connections a minute on 1GB of RAM.
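A common back-of-the-envelope model for a ceiling is: roughly fixed global buffers (buffer pool, log buffer, key buffer, caches) plus a per-connection slice (sort/join/read buffers, thread stack) multiplied by the number of concurrent connections. A hedged arithmetic sketch with placeholder numbers — the real values have to come from SHOW VARIABLES on the actual server, none of these figures are from the post:

```python
#!/usr/bin/env python3
"""Back-of-the-envelope MySQL memory ceiling with placeholder sizes."""
MiB = 1024 * 1024

# Roughly fixed, global allocations (placeholder values; substitute SHOW VARIABLES output).
global_buffers = {
    "innodb_buffer_pool_size": 128 * MiB,
    "innodb_log_buffer_size": 16 * MiB,
    "key_buffer_size": 8 * MiB,
}

# Allocated per connection, and only when a query actually needs them (placeholders).
per_connection_buffers = {
    "sort_buffer_size": 256 * 1024,
    "join_buffer_size": 256 * 1024,
    "read_buffer_size": 128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "thread_stack": 256 * 1024,
}

max_connections = 151  # MySQL's default; check SHOW VARIABLES LIKE 'max_connections'

worst_case = (sum(global_buffers.values())
              + max_connections * sum(per_connection_buffers.values()))
print(f"Rough worst-case ceiling: {worst_case / MiB:.0f} MiB")
```

It's only a rough model (per-connection buffers are allocated lazily, and some can be allocated more than once per query), but comparing that worst-case number against 1GB minus whatever Apache and PHP need is one way to reason about "X users on 1GB vs 4X users on 4GB" before the box actually falls over.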

I know this question is all over the place. I see Apache do weird shit too: it'll sit there at 40MB, then jump up to 80MB (with basically no activity), then drop back down to 50MB. There doesn't seem to be any rhyme or reason to it. However, Apache seems less concerning than MySQL at this point.

I guess at the end of the day, does anyone have any good guides on understanding MySQL and Linux memory management? Am I wasting my time trying to micro-optimize this? Should I just wait until I see missed/dropped connections (or slowdowns) and then add RAM? I'm trying to be proactive and plan ahead, but I feel like I'm just fooling myself.

Thanks!
