
different kernels report different amounts of total memory on the same machine

4 votes
1 answer
356 views
I have two x86_64 kernels compiled on the same machine against the same code (4.15.0 in [Linus' source tree](https://github.com/torvalds/linux)). The config files were produced by running `make localmodconfig` against that source, using different, larger original config files coming from different distros: Arch and Slackware respectively. I'll nickname them __arch__ [config](https://pastebin.com/vXd4w34R) and __slk__ [config](https://pastebin.com/NWrCCDab) for that reason.

The issue: running `cat /proc/meminfo` consistently reports about 55-60 MB more in the `MemTotal` field for __arch__ than for __slk__: `MemTotal: 32600808 kB` for __arch__ vs `MemTotal: 32544992 kB` for __slk__.

I say 'consistently' because I've tried the experiment with the same config files against earlier versions of the source (a bunch of the 4.15-rc kernels, 4.14 before that, etc., rolling over from one source to the next with `make oldconfig`), and the gap shows up every time. It is also reflected in the figures reported by htop, with __slk__ reporting ~60 MB less usage on bootup than __arch__. This is consistent with the [htop dev's explanation](https://stackoverflow.com/questions/41224738/how-to-calculate-system-memory-usage-from-proc-meminfo-like-htop) of how htop's used-memory figure is derived from `MemTotal`.

My question is: any suggestions for which config options I should look at that would make the difference? I of course don't mind the 60 MB (the machine the kernels run on has 32 GB), but it's an interesting puzzle to me and I'd like to use it as a learning opportunity. Memory reporting on Linux is discussed heavily on these forums and elsewhere, but searching for this specific type of issue (different kernels on the same machine reporting different memory totals) has not produced anything I found relevant.

---

__Edit__

As per the suggestions in the post linked by @ErikF, I had a look at the output of `journalctl --boot=#`, where `#` stands for 0 or -1, for the current and previous boots respectively (corresponding to the two kernels). These lines do seem to reflect the difference, so it is now a little clearer to me where it stems from.

__arch__ (the one reporting larger `MemTotal`):

```
Memory: 32587752K/33472072K available (10252K kernel code, 1157K rwdata, 2760K rodata, 1364K init, 988K bss, 884320K reserved, 0K cma-reserved)
```

__slk__ (the one reporting smaller `MemTotal`):

```
Memory: 32533996K/33472072K available (14348K kernel code, 1674K rwdata, 3976K rodata, 1616K init, 784K bss, 938076K reserved, 0K cma-reserved)
```

That's a difference of ~55 MB, as expected! I know the __slk__ kernel is larger, as verified by comparing the sizes of the two vmlinuz files in my /boot/ folder, but the brunt of the difference seems to come from how much memory the two respective kernels reserve. I'd like to better understand what in the config files affects *that* to the extent that it does, but this certainly sheds some light.

---

__Second edit__

Answering the questions in the comment by @Tim Kennedy:

> Do you have a dedicated GPU, or use shared video memory?

No dedicated GPU; it's a laptop with on-board Intel graphics.

> and do both kernels load the same graphics driver?

Yes, i915.

> Also, compare the output of `dmesg | grep BIOS-e820 | grep reserved`

As you expected, it does not change. In all cases it's 12 lines, identical in every respect (memory addresses and all).
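For reference, a minimal way to put the two kernels' own memory summaries side by side is to grep the "Memory:" line out of each boot's kernel log; this sketch assumes the journal still holds the previous boot, as in the first edit, with boot 0 being __arch__ and boot -1 being __slk__:

```sh
# Kernel "Memory:" summary for the current and previous boots;
# the "reserved" figure is the part that differs between the two kernels.
journalctl --boot=0  --dmesg | grep 'Memory:'
journalctl --boot=-1 --dmesg | grep 'Memory:'
```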
---

__(Final?) edit__

I believe it may just be as simple as this: the kernel reporting less `MemTotal` has *much* more of the driver suite built in; I just hadn't realized it would make such a noticeable difference. I compared `du -sh /lib/modules/<version>/modules.builtin` for the two kernels: it returns 4K for __arch__ and 16K for __slk__. So in the end, I believe I was barking up the wrong tree: it won't be a single config option (or a handful), but rather the accumulated effect of many more built-in drivers.
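To confirm this from the configs rather than the installed module lists, the kernel source tree ships a small config-diffing helper; a minimal sketch, assuming the two saved config files live at the (hypothetical) paths `/boot/config-arch` and `/boot/config-slk`:

```sh
# Run from the top of the kernel source tree: list options whose value
# flips between the two configs; the "m -> y" / "y -> m" lines are the
# drivers built into one kernel but modular in the other.
./scripts/diffconfig /boot/config-arch /boot/config-slk | grep -E 'm -> y|y -> m'

# Rough size of the built-in driver set for the currently running kernel.
wc -l /lib/modules/$(uname -r)/modules.builtin
```

The diffconfig route avoids eyeballing two multi-thousand-line config files and should surface exactly the accumulated `=m` vs `=y` differences suspected above.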
Asked by grobber (59 rep)
Feb 3, 2018, 10:51 PM
Last activity: Feb 4, 2018, 01:34 PM