It's been a while since I worked with Linux, but I was quite involved in the original Commvault HyperScale appliance launch.
In my experience, Linux rarely returns memory to the free pool; it holds onto it opportunistically for caching, especially to speed up file access (which is exactly what a backup job will cause the kernel to do).
Free = wasted, 100% unused. If the memory is instead being used to cache files and improve performance, that is better than having a heap of free memory that provides absolutely no benefit.
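You can watch this happen yourself. A minimal illustration (the file path here is just a placeholder for any large file on the system):

$ free -m                                # note the buffers/cache figures
$ cat /path/to/large/file > /dev/null    # reading pulls the file's pages into the page cache
$ free -m                                # cache grows, "free" shrinks, yet nothing is lost

The cache is dropped automatically the moment an application actually needs that memory, so the shrinking "free" number is not a leak.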
It's a complex topic with a lot of articles out there, but unless you can link it to an actual performance issue, I think this is totally expected.
https://haydenjames.io/measure-web-server-memory-usage-correctly/
The command ‘free’ will never let you down!
From the Linux command line, using the free command (or free -m or free -h) will often reveal that you are "using" more memory than you think! See this example below from Red Hat's docs:
$ free
             total       used       free     shared    buffers     cached
Mem:       4040360    4012200      28160          0     176628    3571348
-/+ buffers/cache:     264224    3776136
Swap:      4200956      12184    4188772
Notice there's only 28,160 KB "free." However, below that line, look at how much memory has been consumed by buffers and cache! Linux always tries to use memory first to speed up disk operations, using available memory for buffers (file system metadata) and cache (pages with the actual contents of files or block devices). This helps the system run faster because disk information is already in memory, which saves I/O operations. If more space is required, Linux will free up the buffers and cache to yield memory for the applications. If there's not enough "free" space, then less-used memory will be saved (swapped) to disk. It would be wise to monitor this and keep swap and cache contention within an acceptable range that does not affect performance. – Source: Red Hat.
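One note on that "-/+ buffers/cache" arithmetic: on kernels 3.14 and newer you don't have to do it by hand, because the kernel publishes its own estimate of reclaimable memory as MemAvailable, and recent versions of free report it as an "available" column. A rough sketch of what I'd check (vmstat's si/so columns are swap-in/swap-out rates):

$ free -h                            # "available" ≈ memory usable without swapping
$ grep MemAvailable /proc/meminfo    # the kernel's own estimate, in kB
$ vmstat 1 5                         # si/so near zero means swap isn't under pressure

If "available" stays healthy and si/so sit near zero while a backup runs, a big cache number is just the system working as designed.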