Understanding Linux RAM Usage

I am still in the process of learning how RAM management works inside Linux environments, so I read a lot about the topic and take notes.

Today I read an interesting blog post about how RAM is used in Linux, and the comments on that article gave me some additional information about the topic.

For those who are still learning, just like me, my notes might be interesting. Please feel free to mail me if you find any incorrect information.

Understanding Linux RAM Usage
==========================================================

cat /proc/<pid>/smaps, or call pmap
cat /proc/<pid>/pagemap (binary)
cat /proc/<pid>/statm: memory status information
cat /proc/<pid>/status: same as above, but human readable
cat /proc/<pid>/stack
cat /proc/slabinfo
cat /proc/
$ free | awk '/^\-\/\+/ {print $3 }'
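The free one-liner above parses the old "-/+ buffers/cache" line, which newer versions of free (procps-ng) no longer print. The same numbers can be read straight from /proc/meminfo; a minimal sketch, assuming a kernel recent enough (3.14+) to export MemAvailable:

```shell
# MemTotal: all usable RAM. MemAvailable: an estimate of how much
# memory is free for new workloads once reclaimable caches are
# accounted for -- more honest than MemFree alone.
grep -E '^(MemTotal|MemAvailable):' /proc/meminfo
```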

Linux-Tool ps and shared libraries
==========================================================
- Two possible outputs regarding RAM:
  * VSZ = Virtual memory SiZe (the whole virtual address space)
  * RSS = Resident Set Size
- Does not always show real RAM usage
- Instead, ps shows how much real memory each process would take up
  if it were the only process running
- The reason for this: shared libraries and how Linux uses them
- Example: a KDE text editor will use several shared libraries,
  e.g. the X libraries and generic system libraries such as libc.
  Because of this sharing, Linux uses a trick: it loads a single copy
  of each shared library into memory and uses that one copy for every
  program that references it.
  ps, however, shows shared memory usage + private usage per process.
  This means the one-time RAM consumption of a single shared lib
  is displayed in many processes as "consumed" RAM, but in fact
  it is only consumed once.
  So the real one-time consumption of a shared lib is reported several
  times, which is of course wrong.
- ps -lyu <user> shows a SZ column; per the ps man page this is the
  size in physical pages of the core image (text, data, and stack),
  so it is not quite the same as pmap's writable/private figure
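ps takes these numbers from the kernel, so they can be cross-checked against /proc/<pid>/status. A small sketch using the reading process itself:

```shell
# VmSize corresponds to ps's VSZ column, VmRSS to its RSS column
# (both in kB). /proc/self resolves to the process doing the reading.
grep -E '^Vm(Size|RSS):' /proc/self/status
```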

Understanding the output of pmap
==========================================================
- Each shared library is listed twice
  * a) once for its code segment, mode: r-x--
  * b) once for its data segment, mode: rw---
- when a shared lib is listed a third time with mode r----, that is
  read-only data (e.g. constants, or tables made read-only after
  relocation), not some specially protected memory space
- segments with large memory consumption are usually
  the code segments of the included shared libs
- those code segments can be shared between processes
- if you factor out all the parts that are shared between
  processes, you end up with the "writeable/private" total,
  which is shown at the bottom of the output
- so a figure like 2 MB is more realistic than one like 24 MB:
  24 MB = with shared libs
  2 MB  = without, i.e. the "real" per-process consumption
- so 2 MB is more realistic than what ps shows
- mode flags: r = read, w = write, x = execute; the last flag is
  either p (private, copy-on-write) or s (shared)
- column major:minor (major and minor device number of the device
  the mapped file lives on)
- the column after that: inode number of the file from which the
  memory is being mapped
- anon = anonymous memory = memory not backed by any file, e.g. the
  heap, the stack, and a library's writable data after copy-on-write
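pmap is essentially a pretty-printer for /proc/<pid>/maps, so the segment types above can be inspected there directly. A sketch listing only the executable (code) mappings of the reading process:

```shell
# Column 2 of /proc/<pid>/maps holds the permission flags; r-xp marks
# a read-only, executable, private (copy-on-write) code segment.
# Column 6, when present, is the backing file, e.g. a shared library.
awk '$2 == "r-xp" {print $2, $6}' /proc/self/maps
```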

misc
==========================================================
- each of those shared libraries takes up a number of virtual-to-
  physical page mappings; these, unlike memory, are a very
  precious resource
- consequence: try running something like KDE on non-x86 hardware that
  doesn't have huge translation tables; on MIPS hardware, one commenter
  has seen apps spend over 30 percent of their time updating TLB entries
- the same principle of shared lib memory consumption applies to
  executables as well; e.g. if you have several xterms running, there
  will only be one copy of the xterm program text in memory, shared
  between all instances, but each instance of course has its own stack
  and heap
- shared libs share backing store as long as the code is not modified,
  but if the code segment needs to be modified as it's loaded, it will
  require its own (probably anonymous memory) swap pages
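This sharing of program text and library code can be made visible by counting how many processes map a libc image. A rough sketch; the libc/musl name pattern is an assumption, and /proc entries we may not read are simply skipped:

```shell
# Every pid listed here maps some libc image; all of them share the
# single in-core copy of its code segment. (The pattern assumes a
# glibc or musl system; permission errors on other users' maps are
# discarded via 2>/dev/null.)
grep -lE 'libc|musl' /proc/[0-9]*/maps 2>/dev/null | wc -l
```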

Fragmentation
==========================================================
- an app can still end up eating memory that it isn't actually using
- e.g. the app makes a number of large allocations for some
  temporary working space, and it also allocates some
  book-keeping information (an undo history, say) and other bits of
  info to store/utilize the computed data
  when RAM is allocated, the heap grows
  the heap can only shrink by truncation at its top end;
  the program can only release memory back to the OS by shrinking
  the heap, not by cutting pieces out of its middle
  so if the app makes many allocations and frees only some of them,
  the heap may end up like UUXXXXXXXXXXXXUU
  where X is free and U is used memory
  so fragmentation occurs
  that's four pages of used and 12 pages of unused memory that can't
  be returned to the OS
  when new allocations are made, the app can reuse all of that unused
  heap, but if it isn't making any more allocations anytime soon, it'll
  appear to be gobbling up memory, which it is.

  with Linux this is not a fatal problem: the unused pages will be
  swapped out to disk, so you won't run out of memory
  -> but performance suffers
  and when the app needs to allocate again, it'll swap back in a bunch
  of garbage/unused data

  this phenomenon is one of the many reasons why good modern garbage
  collectors are far more efficient than manual memory management with
  malloc/free (C functions):
  a compacting collector will completely negate the above problem
- the Linux allocator (glibc malloc) manages large objects with mmap,
  which can free such chunks no matter where they sit in the address
  space
- garbage collection requires more space and always swaps far more than
  malloc; why? because it has to touch every page periodically, including
  those swapped to disk, while it looks for garbage
- lsof also lists memory-mapped files (FD column "mem"), i.e. mappings
  that go through the VFS

sources:
http://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html