I want an arbitrarily-large buffer in Linux/C/C++
Basically I want an arbitrarily large stack. I know that's not possible, but could I set aside a few terabytes of my virtual address space for it? I'd like to be able to start at the beginning and walk up the buffer as far as I need, and Linux can bring in pages from physical memory on an as-needed basis. Is something like that possible? Would it have the same performance as just malloc-ing a buffer? Would there be a way to signal to Linux that you're done with the memory once you pop the stack?
EDIT: I'm wanting this because I want to optimize a recursive/parallel algorithm that allocates lots of memory on each call. Malloc is too slow for me and I don't want all the threads to trip on each other inside of malloc's locks. So basically it would be my own run-time stack alongside the real one (one for each thread).
Actually, as long as the run-time stack is big enough that should be good enough. Is there a way to know/ensure the size of the stack? In a 64-bit address space there's enough room for several threads to be stack-allocating gigabytes of data. Is that doable?
It looks like pthread_attr_setstacksize could work for new threads, but that doesn't help much if you can be called from any thread ...
You can create such an allocation using mmap() with the MAP_ANONYMOUS | MAP_NORESERVE | MAP_PRIVATE flags. When you want to free it back to the system, use munmap().
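A minimal sketch of that approach (the helper names `reserve_space`/`release_space` are just for illustration): reserve a huge span of address space up front, touch it as your "stack" grows, and unmap it when done.

```cpp
#include <sys/mman.h>
#include <cstddef>

// Reserve a large span of virtual address space. Physical pages are only
// committed when first touched; MAP_NORESERVE also skips swap-space
// accounting, so the reservation can exceed RAM + swap.
void *reserve_space(std::size_t bytes) {
    void *p = mmap(nullptr, bytes,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                   -1, 0);
    return (p == MAP_FAILED) ? nullptr : p;
}

void release_space(void *p, std::size_t bytes) {
    munmap(p, bytes); // returns the whole range to the kernel
}
```

For example, `reserve_space(std::size_t(1) << 40)` asks for 1 TiB of address space; you can then walk a pointer up from the base and the kernel backs pages as you touch them.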
Note that the only way you will find out whether enough memory is available to satisfy your actual use is by your process receiving a SIGSEGV when it isn't - but that comes with the territory of asking for a giant mapping much larger than available memory.
Well, if you want a large stack, you could always change ulimit -s. It can be very large, at least on 64-bit platforms.
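The programmatic equivalent of `ulimit -s` is `setrlimit(RLIMIT_STACK, ...)`. A hedged sketch (the function name is made up for illustration); note this governs the main thread's growable stack, while pthread stacks are fixed at creation time via `pthread_attr_setstacksize`:

```cpp
#include <sys/resource.h>

// Try to raise the soft stack limit to at least `bytes`, capped at the
// hard limit. Equivalent in effect to `ulimit -s` for the main thread.
bool raise_stack_limit(rlim_t bytes) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) return false;
    if (rl.rlim_cur >= bytes) return true;              // already big enough
    rl.rlim_cur = (bytes <= rl.rlim_max) ? bytes : rl.rlim_max;
    return setrlimit(RLIMIT_STACK, &rl) == 0;
}
```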
If you mmap anonymous space, physical pages won't actually be allocated until you touch them. You'll need to either enable overcommit (the vm.overcommit_memory sysctl) or have enough swap (unless you have that much physical RAM). Of course, mmap'ing anonymous space is exactly what malloc does when you ask for a lot of memory.
You can tell Linux you're done with the pages using madvise. If you call it with MADV_DONTNEED, Linux will discard the pages, and they will be zero-filled the next time you access them.
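A sketch of that "pop" operation (the wrapper name is illustrative): the mapping stays valid after `madvise`, but for private anonymous memory the kernel drops the backing pages and they read back as zeroes on the next touch.

```cpp
#include <sys/mman.h>
#include <cstddef>

// Tell the kernel we no longer need the physical pages backing
// [addr, addr + len). addr must be page-aligned.
bool release_pages(void *addr, std::size_t len) {
    return madvise(addr, len, MADV_DONTNEED) == 0;
}
```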
First of all, allocating memory of that size in one piece (whether by malloc, which returns NULL on failure, or new, which throws std::bad_alloc) will very likely fail, because you're asking for it all as contiguous memory - and you may well not have that much uninterrupted, contiguous address space available due to fragmentation.
If you use a standard container like std::deque, or a container adapter like std::stack (anything without contiguous storage - i.e. not std::vector), it will allocate heap memory in expandable pieces as it grows. That gets you around the allocation failure.
So, it's possible - just don't try to make the memory contiguous, and if you're allowed to, use a standard container or adapter.
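To illustrate the chunked-storage point: `std::deque` stores its elements in fixed-size blocks rather than one contiguous buffer, and `std::stack` uses `std::deque` as its default backing container. A minimal sketch (the helper is hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <stack>

// Build a stack of n values. The default std::stack is backed by
// std::deque, so growth never requires one giant contiguous allocation
// or a full reallocate-and-copy the way std::vector does.
std::stack<std::uint64_t> make_stack(std::size_t n) {
    std::stack<std::uint64_t> s;
    for (std::size_t i = 0; i < n; ++i) s.push(i);
    return s;
}
```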
PS: if you need large memory spaces, you could consider using a list of dynamically allocated buffers (malloc'd pointers) that are large enough to be usable but small enough to be unlikely to trigger an allocation failure.
Don't forget that resources are always limited by physical constraints: the number of atoms in the universe is probably finite... So arbitrarily large buffers cannot stricto sensu exist...
However, you could use the Linux mmap system call to view a portion of a file as a virtual memory segment. If it is very big (bigger than RAM), you'll have performance issues. Consider tuning it with the madvise (and perhaps readahead) system calls.
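A sketch of a file-backed mapping with such a hint (function name invented for illustration; error handling kept minimal):

```cpp
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>

// Map a file read-only and hint sequential access, so the kernel can
// read ahead aggressively. Caller must munmap(p, *size_out) when done.
void *map_file_sequential(const char *path, std::size_t *size_out) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return nullptr; }
    void *p = mmap(nullptr, static_cast<std::size_t>(st.st_size),
                   PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); // the mapping keeps the file contents accessible
    if (p == MAP_FAILED) return nullptr;
    madvise(p, static_cast<std::size_t>(st.st_size), MADV_SEQUENTIAL);
    *size_out = static_cast<std::size_t>(st.st_size);
    return p;
}
```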
Maybe even using GCC's __builtin_prefetch function could be useful (but I'm sure that madvise is more relevant).
If you really have a terabyte sized stack, tuning your application will be very important. Learn to use oprofile. I hope you have a powerful machine!
But what do you really want to achieve? Very big call stacks seem suspicious to me... Or is your stack not the call stack??