I've actually had buggy desktop applications use several gigabytes of RAM as shared memory, so this can make a big difference.
That sentence:
> There are no downsides, except for confusing newbies.
Just isn't true. There is the downside that we can't know how much memory is available.
I use that to help determine the "least incorrect" RAM usage figure on Linux. It's OK, maybe not perfect, but it at least gives you a better view of memory.
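For what it's worth, a minimal sketch of pulling that number out (this assumes an older free(1) that still prints the "-/+ buffers/cache" line; newer kernels also expose MemAvailable in /proc/meminfo directly):

  free -m | awk '/buffers\/cache/ {print $4 " MB free-ish"}'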
(this is tautologically true)
https://news.ycombinator.com/item?id=3698652 (1 year ago)
https://news.ycombinator.com/item?id=6368562 (2 months ago)
https://news.ycombinator.com/item?id=1831322 (3 years ago, no comments)
I'm tickled that what was once a big "technical" advantage has become a major source of confusion. Presumably, OSes of Windows NT heritage have retained file system caching, which offers a huge performance benefit. Why don't Windows users have confusion over this issue? Does Windows not offer an easy way to check, is the whole thing presented more clearly, or is Windows so complicated that not many people get this far?
I'll also draw a parallel to DUI - without a breathalyzer, DUI is a lot like driving while distracted. It's impossible to quantify, and therefore not that much of a crime. With a breathalyzer, or with "top", it's possible to put a number on drunkenness or lack of free memory, and the whole thing becomes a problem.
Windows 8 has a nice graph which shows memory usage over time, in which the filesystem cache is not visible. If it were included, memory usage would constantly hover at about 90%, which wouldn't be very useful and would be very confusing.
When Windows XP was introduced, famous tech journalist John Dvorak complained that "System Idle Process" was using 99% of his CPU and slowing down his computer. In later Windowses, the System Idle Process is no more.
Also, even when you subtract the fs cache, you get a number of megabytes of "free" memory available. That number has almost no bearing on how many megabytes an application of yours can actually allocate. Linux memory management is much more complicated than that.
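For instance, the kernel's overcommit policy alone can decouple "free" from "allocatable". A quick look at the knobs involved (meanings per proc(5)):

  cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
  cat /proc/sys/vm/overcommit_ratio    # percent of RAM counted toward the commit limit in mode 2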
I don't know if I'm being trolled, but memory management in any modern OS that makes proper use of shared memory is fundamentally very complicated. I suspect what is happening is Windows is lying to you to give you a simple (though essentially wrong) answer.
I know my taste in user interfaces is very different from most people's, but this really explains a lot.
> IDLE-TIME PROCESS. Once in a while the system will go into an idle mode, requiring from five minutes to half an hour to unwind. It's weird, and I almost always have to reboot. When I hit Ctrl-Alt-Delete, I see that the System Idle Process is hogging all the resources and chewing up 95 percent of the processor's cycles. Doing what? Doing nothing? Once in a while, after you've clicked all over the screen trying to get the system to do something other than idle, all your clicks suddenly ignite and the screen goes crazy with activity. This is not right.
Sigh, "tech journalism".
Whatever happened to him?
Because Windows users don't care about how much memory a particular application is using. New Windows developers get confused by it all the time. There are plenty of questions on Stack Overflow where someone is using Task Manager to try to measure their application's working set.
Top and Task Manager are like stepping on the scale to keep track of your weight: the exact number doesn't mean much, and it regularly varies in unmeaningful ways. A memory profiler, by contrast, is like a high-precision body fat measurement. Right tool for the job and all that crap.
Windows Vista started using RAM very aggressively for caching, including pre-filling cache with frequently used pages (apps/data) from disk... ReadyBoost was basically an extension of this...
Windows' Task Manager has generally counted memory occupied by the disk cache as "available."
Windows Vista got lots of complaints about memory usage, because Task Manager listed memory as Total, Cached, and Free. People saw that "Free" was close to 0, and complained that Windows Vista ate all their RAM.
Windows 7 changed this to: Total, Cached, Available, and Free. People saw that they had a lot of "Available" memory, so they didn't panic. Well, most of them didn't panic.
Windows 8 removed the "Free" line entirely, and now lists memory as: Total, Available, and Cached. If you care about the actual amount of free memory, you can run Resource Monitor or Process Explorer.
* Green = resident memory used (probably what you want most of the time)
* Blue = buffers
* Yellow = cache
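If you want the raw numbers behind those colors, they come (more or less) straight from /proc/meminfo:

  grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo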
The "Linux calls it" column should probably be called "top calls it"...
I just about fell off my chair when I realized I could click with my mouse on the columns to sort them!
Here's a screenshot of it showing systemwide and per-process memory information: http://www.atoptool.nl/images/screenshots/memory.png
Here's a case study of using it to observe a memory leak: http://www.atoptool.nl/download/case_leakage.pdf
  root@joe-pc:~# free -m
               total       used       free     shared    buffers     cached
  Mem:         32057      30347       1710          0       1312      21568
  -/+ buffers/cache:       7466      24591
  Swap:            0          0          0
  root@joe-pc:~# sync; echo 3 > /proc/sys/vm/drop_caches
  root@joe-pc:~# free -m
               total       used       free     shared    buffers     cached
  Mem:         32057       5628      26429          0          4        891
  -/+ buffers/cache:       4732      27325
  Swap:            0          0          0
  root@joe-pc:~#

I don't think they were expecting my answer:
"Sure you can, just use plan 9."
> Have an iPhone 3? Download RAM twice to make it a iPhone 5.
https://twitter.com/DownloadMoreRam/status/38502626000858726...
Here's how it looked: http://web.archive.org/web/20131011053519/http://www.linuxat...
This explanation, which has been replicated thousands of times on every Linux forum on the planet, prevents people from solving real problems. It started as a useful piece of information, but nowadays it is spam.
I've seen that happen several times, but I'm not the GP, so I may still be misunderstanding him.
But it isn't only a Google search issue. Every time you ask a question about a memory problem on Linux, everybody gives you this answer and classifies you as a newbie. When you explain to them that this isn't the case, they simply leave the thread/question and nobody ever bothers reading your problem again.
People just go into denial mode because of this common knowledge, as if Linux can't have memory leaks or any memory-related problem.
So you get Joe Chucklefuck, he installed Ubuntu three months ago and posted on the Ubuntu forums about how he has no free RAM and Linux sucks. Someone showed him this page, and now when he sees your question about why malloc isn't working right, he thinks, "Aha! A memory problem! I shall post that linux ate my ram site!" Meanwhile, Ulrich Drepper isn't reading the list, even though he could tell you that the version of glibc you're using has a known bug in malloc. (He'll also tell you that you're stupid, but that's just because he's an asshole)
So when I find an issue where I can't be sure it isn't my own fault, or I don't know the root cause, I try to gather some information before heading to the bugzilla. Memory issues are the one area where it is absolutely impossible to gather any useful information. Your only option is to learn C and the debugging tools.
Places like Stack Overflow and IRC support channels are really only useful if the answer you are looking for can probably be found in some form of documentation somewhere. For things like suspected bugs or issues with a specific implementation, it is best to go to the source (either the actual source code, or the responsible devs, which almost always means going to a mailing list).
Some particular projects are infamously dismissive specifically of memory leaks, really no matter how much detail you put into your report (Mozilla, I am looking squarely at you), but those should be the minority. Typically if you see run-away memory usage, and are able to provide enough details about what you are seeing, the developers should be receptive. That's been my experience anyway.
I think describing the phenomenon as 'noise' is reasonable. You find a moderate number of answers with a reasonable degree of certainty as to their accuracy and a large number of answers that are just whatever came to mind first (these aren't necessarily provided by completely distinct groups of people). The result is a lot of almost-but-not-quite-related nonsense that drowns out the answers with any relevant content.
This probably isn't purely a problem of numbers, either. Even attempting to be helpful takes a certain amount of effort that naturally limits the impact of the former group at any particular instant. It's also easy (as evidenced by this very article) to take helpful information and render it non-helpful by repeating it without context.
Doing this, I can't say I've ever been brushed off using this page.
I had a similar experience with MacOS memory issues, and all the answers I was getting were: "MacOS is superior to Windows, it's just cache, the memory is actually free." Yet my applications could not allocate memory and were hogging the whole OS as a consequence.
Back on topic, never had similar 'cache' problems on Linux though.
Now I'll just point them to this website.
You can drop the cache by writing a value into /proc/sys/vm/drop_caches.
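The accepted values, per the kernel documentation (sync first so dirty pages get written back):

  sync
  echo 1 > /proc/sys/vm/drop_caches   # free page cache only
  echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes
  echo 3 > /proc/sys/vm/drop_caches   # free both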
PS: An interesting thing is that Windows people are always seeking a way to enlarge the disk cache. It's fine on server versions, but on desktop versions such as Windows 7, if I remember correctly, the disk cache is capped at ~4GB (not sure if globally or per file). Oh well.
The application I develop consumes most of the ram on my machine when running, and takes several minutes to rebuild and start up to begin with. When most of my memory is being used for the cache, this process takes several minutes longer, because I'm making millions upon millions of calls for more memory -- and each one has to get some of that cached memory back for itself. If I simply drop the cache all at once, every minute, it takes a split second. If I shrink it over a million increments, it takes around a minute.
Even typing that I feel I must be doing something wrong; and yet, it worked.
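Presumably something like this, then (a sketch of the periodic drop described above; it needs root, and the 60-second interval is arbitrary):

  while true; do
      sync
      echo 1 > /proc/sys/vm/drop_caches   # drop the page cache all at once
      sleep 60
  done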
You control how many times you call malloc. Just call it once with what you need.
I'm more and more inclined to either use a stupider algorithm that at least I can explain to the user, do incremental stuff where they can check intermediate results, or take into account from inception the fact that the algorithm has to be conveyed to users in one way or another.
I guess in this case a measure of the swap activity would be a good indicator that the current limiting factor is the RAM, but it also has to be credible. Somehow, having 16G written on the box of the RAM and 7G shown in the "available ram" field of some software makes everything more credible than saying "I have a pressure of 1.5madeupunit on the RAM front". Human factors, again.
It also makes perfect sense from the perspective of writing the software. The free list really is the list of VM pages that have been free'd and that has a specific meaning, and changing that name internally would be terrible. You just need to understand how the machine actually works.
while true; do sync && grep Dirty /proc/meminfo; done
Lots of gold to be mined from /proc/meminfo .. bonus points for wiring it up to gnuplot so you can get a real picture of the dynamics of the Linux kernel (rough sketch below) .. ! :)

Ages ago I was hacking Linux onto an ARM SoC that claimed to have memory it didn't (it wasn't aware of the display's DMA segment). It was near the top of the address space, so generally speaking it didn't cause too many problems, but until I could debug it I needed to keep memory usage low, which meant disabling the disk cache. What a shitshow that was :/
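For the gnuplot idea above, a rough sketch (samples the Dirty counter once a second into a made-up log file, then plots it):

  while true; do
      awk '/^Dirty:/ {print $2}' /proc/meminfo >> dirty.log   # Dirty value in kB
      sleep 1
  done
  # later, from another terminal:
  gnuplot -persist -e "set ylabel 'Dirty (kB)'; plot 'dirty.log' with lines"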
If Linux is consistently doing this, why would you still need a swap partition?
You don't, but if something decides to grab memory, the OOM-killer will kill processes abruptly. Swap may allow the system to limp along until there's some solution.
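At least the kill gets logged, so you can confirm after the fact that the OOM-killer was responsible (the exact message wording varies by kernel version):

  dmesg | grep -i "out of memory"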
It's not like 10 or 20GB of HD space matters much these days, either.
That's why my desktop and laptop systems don't have swap enabled.
Myself, I'd generally prefer to evict the older stuff to swap in order to cache the newer stuff. But maybe that's just me.
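That preference is tunable, for what it's worth; vm.swappiness biases the kernel toward swapping out idle anonymous pages versus dropping cache (the default is usually 60):

  cat /proc/sys/vm/swappiness
  sudo sysctl vm.swappiness=100   # lean harder toward swapping stale pages out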
Much gnashing of teeth ensued until HP fixed that ...
Because sometimes you (read: your applications) need to use all or most of your memory. The ability to swap out memory that isn't being used maximizes the amount of memory available to applications.
Linux never needs a 'purge' or equivalent. Doing a 'drop_caches' only slows the system slightly, because afterwards nothing is cached. 'sudo purge' can prevent OS X from using swap, i.e. from running out of RAM even though there is RAM it could reclaim. This behavior has been tweaked but not fixed in Mavericks.
Some small amount of swap usage (compared to total RAM) is perfectly normal, by the way. More cache, for instance, is a better use of RAM than avoiding swap usage by keeping never-accessed pages present.
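If you want to see where that swap usage actually lives, per-process numbers are in /proc (VmSwap shows up in /proc/<pid>/status on reasonably recent kernels):

  grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -n | tail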
So Linux sometimes does eat your RAM (but in a different way).
There may be a command-line utility that lets you pin an arbitrary running process too; I haven't looked for it on Linux.
But there could be cases where it has side effects (e.g. running in a VM, the guest's caching takes memory away from the host, which may already be caching the disk itself), so you could consider disabling it sometimes.
I just bought 4 GB of PC-6400 for a laptop at 40ドル. A fast 128GB SSD is 100ドル.
Summary: use RAM judiciously
Unused memory is wasted memory.
A typical modern SSD reads at 500 MiB/s, consumes 3 watts when reading, and 0.3 watts when idle.
A stick of DDR3 RAM consumes about 3 watts, just sitting idle.
The power consumption from prefetching is minor compared to that of the RAM.
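Back-of-the-envelope with those figures: prefetching 4 GiB at 500 MiB/s takes roughly 8 seconds, so about 8 s × 3 W ≈ 24 joules per fill. Meanwhile the stick of RAM idling at 3 W burns those same 24 joules every 8 seconds no matter what, about 260 kJ per day, so you might as well keep something useful in it.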