
What's the maximum amount of ram possible?


blazenarrow (Member, joined Apr 22, 2004)
Hello there,

So I'm a grad student studying neuroscience, and I'm currently working on the connectome project. Part of this involves working with electron microscopy files that are about 350 gigs. What I do is navigate through the images and trace neurons in order to reconstruct them in 3D.

However, doing this efficiently requires tons of RAM.

Thus, what is the most RAM you could possibly put into a system?

What about a windows 7 system?

Linux?

Mac?

From what I understand, the max for Windows 7 was 64 gigs, but now I'm finding conflicting information.

Anyway, can you please point me in the right direction? I would like to build a rig with the maximum RAM possible. What motherboard would that require?

Thanks for any help you can offer!
 
Depends on hardware too. Server boards hold more than a home PC, and X99 holds more than Z97. I believe you can have 128GB on X99 (8 x 16GB) and 32GB on Z97 (4 x 8GB). Look up the board you have/want and it will tell you its limit... same with the OS. I think 64-bit W7/8 can address at least 64GB, maybe 128.

I don't do linux or Mac.
 
With Windows 7, as long as you are running 64-bit Professional, Enterprise, or Ultimate, you can use up to 192GB of RAM; I'm not sure why Microsoft set the limit there.

With 64-bit Linux you will hit the hard limit of your hardware, or of your wallet, before you hit the OS limit. 64-bit Linux can theoretically address up to 2^64 bytes, and the most I have seen even in a server is 1TB of RAM, and that was a Dell R910.

I would imagine that the Mac has a similar limitation to Linux, since OS X runs on a BSD-derived Unix kernel, but from what I can see the Mac Pro holds the most RAM at 128GB (4 x 32GB), which is the hardware limit until 64GB DIMMs are released.
 
Thanks guys.

So it looks like you can't even find 16GB RAM sticks on Newegg... thus it would max out at 64 gigs. :(
 
I think DDR4 will get them eventually, but not DDR3 for the home PC.
 
That would require a server board (ECC registered RAM)... some of which do, of course, have more than 8 slots, yep!
 
You're going to have to go with a server platform, as others have said. That's the only way you're getting over 64GB of RAM right now. 16GB desktop sticks don't even exist yet. We're stuck at a max of 8X8 (64GB) on X99 and a puny 32GB (4x8) on Z97.

You're going to spend more on a server board than you would on a reasonable daily-driver desktop board, but if absolutely epic amounts of RAM are what you require, there is currently no other option. Desktop will catch up with your needs in a few years, but in 2015 you need a server platform for what you want to do.
 
I'm a developer, and over my 30+ years I've been a consultant on a number of projects, sometimes consulting with PhD candidates, some of them like yours in scope and requirements, though not in the field of biomedical research.

You've received some good information thus far. There are specialized workstation and server products with high RAM potential, and some of your requirements may depend on how the software operates...perhaps you're just in need of viewing?

Are you familiar with virtual memory?

Are you familiar with paging or deblocking?

What resolution is the monitor (or monitors) you're using for this work?

What software are you using for this research?

What is the storage solution? (If there are as many as 100 images of the size you give, that's a significant data store compared to most desktop workstations.)

I'm happy to keep this discussion as public as you prefer; other members may find that fascinating, but feel free to PM me if you'd like a private exchange, more detailed information or, perhaps, someone experienced and qualified to help.

At present prices, RAM sufficient to hold one full image, assuming the file size you mention isn't compressed, would cost perhaps $3000 to $4000, but there are catches involved. As others mentioned, there are a few boards intended for servers and workstations that support up to 256 GBytes, and I assume you've investigated hardware intended for this scale of image processing, priced upwards of $30,000. In my experience these kinds of projects run for a year or two, so leased machines were more common in an earlier era. I assume you realize it should be possible to assemble capable hardware for much less, and that has never been more true than it is now.

Yet you might not require that much RAM, if the software is suitably designed. At 350 GBytes, assuming 24-bit color and a roughly square aspect ratio (it might even be a circular photographic target), I'm estimating your images are about 350,000 pixels wide (and similarly tall); at 8-bit grayscale the figure would be closer to 590,000. Either estimate is sufficient for the observations that follow.
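
For anyone curious how that estimate falls out, here is a quick back-of-the-envelope sketch. The 350 GByte figure and the square-image assumption come from above; the bytes-per-pixel values are just the two cases being considered.

[CODE]
# Rough side length of a square, uncompressed image from its file size.
import math

FILE_SIZE_BYTES = 350e9  # ~350 GBytes, per the OP

for label, bytes_per_pixel in (("24-bit color", 3), ("8-bit grayscale", 1)):
    side = math.sqrt(FILE_SIZE_BYTES / bytes_per_pixel)
    print(f"{label}: ~{side:,.0f} pixels per side")

# 24-bit color:    ~341,565 pixels per side
# 8-bit grayscale: ~591,608 pixels per side
[/CODE]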

There is no monitor anywhere near 30,000 pixels wide to my knowledge, and such devices would be enormously expensive. You might have 4K monitors, which would help, or standard 1920 x 1080 displays. No doubt you are scrolling through something like 100 to 200 "columns" by 100 to 200 "rows" of screenfuls around the source imagery, either to view, process (intelligent analysis, perhaps) or copy. I would assume at least 3, possibly 9, monitors under one workstation's control would be appropriate.

If you were using 4K monitors, each requires about 24 MBytes for a full display at 24-bit color, and 9 active monitors would require under 256 MBytes of RAM.
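
To show where the ~24 MBytes per display comes from (24-bit color assumed; the rest is plain arithmetic):

[CODE]
# RAM needed to hold one full screen of 24-bit pixels.
WIDTH, HEIGHT = 3840, 2160   # 4K UHD
BYTES_PER_PIXEL = 3          # 24-bit color

per_display = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(f"one 4K display: {per_display / 2**20:.1f} MBytes")       # ~23.7 MBytes
print(f"nine displays : {9 * per_display / 2**20:.1f} MBytes")   # ~213.6 MBytes
[/CODE]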

Appropriately constructed software can page, or deblock, from the source materials. That is, keeping the 3 x 3 grid of screen-sized regions around the current view position loaded, so that scrolling is fluid, works out to about 2 GBytes for 9 monitors. Add to that other buffers and control information... an application could perform reasonably well while requiring less than 16 GBytes of RAM.
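
Here is a minimal sketch of what I mean by paging, or deblocking, assuming the source were stored as one raw, uncompressed 24-bit plane on disk. The file name, tile size and image dimensions are made up for illustration; real EM formats and viewers will differ.

[CODE]
# Sketch: keep only the 3x3 grid of screen-sized tiles around the current
# view in RAM, reading them on demand from a raw image file on disk.
import numpy as np

IMG_W, IMG_H = 350_000, 350_000   # hypothetical full-image dimensions
TILE_W, TILE_H = 3840, 2160       # one screenful per tile

# memmap gives file-backed access without loading 350 GBytes into RAM
image = np.memmap("em_section.raw", dtype=np.uint8, mode="r",
                  shape=(IMG_H, IMG_W, 3))

def fetch_neighborhood(tile_col, tile_row):
    """Return the 3x3 grid of tiles centered on (tile_col, tile_row)."""
    tiles = {}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            c, r = tile_col + dc, tile_row + dr
            y0, x0 = r * TILE_H, c * TILE_W
            if 0 <= x0 < IMG_W and 0 <= y0 < IMG_H:
                # np.array() forces the read and copies the tile into RAM
                tiles[(c, r)] = np.array(image[y0:y0 + TILE_H, x0:x0 + TILE_W])
    return tiles

cache = fetch_neighborhood(40, 60)
# ~9 tiles x ~25 MBytes = ~220 MBytes resident per display; ~2 GBytes for 9 displays
[/CODE]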

This relies on disk paging performance. As one scrolls any, or all, of the 9 displays, the data source must be able to keep up with any reasonable user-interface demand. A typical paging step involves retrieving 3 new adjacent blocks from the source image (while dropping the 3 that trail the scrolling direction), a demand of merely ~75 MBytes. Typical modern rotational hard disks can sustain about 120 MBytes per second, meaning one could scroll an entire screen's worth in under a second without ever really noticing a delay.
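
That bandwidth claim is easy to sanity-check. The 120 MBytes per second figure is a typical sustained rate for a modern rotational disk, not a measurement of any particular drive:

[CODE]
# Time for one scroll step that pulls 3 new screen-sized 24-bit 4K tiles
# from a disk sustaining ~120 MBytes/s of sequential reads.
TILE_BYTES = 3840 * 2160 * 3        # ~24.9 MBytes per tile
DISK_BYTES_PER_S = 120e6            # typical rotational disk

bytes_needed = 3 * TILE_BYTES       # ~75 MBytes per scroll step
print(f"{bytes_needed / 1e6:.0f} MBytes -> {bytes_needed / DISK_BYTES_PER_S:.2f} s per step")
# ~75 MBytes -> ~0.62 s, i.e. well under a second
[/CODE]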

That is, on a well tuned machine.

There are also the obvious features of zooming, which involve processing, indexing and caching. If one assumed RAM-only operation of the software, a typical 350 GByte image would require an additional 100 to 200 GBytes to support this feature; a pre-computed pyramid of progressively downsampled copies, for example, adds roughly a third of the original size before indexes and caches.
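
A sketch of that pyramid overhead, assuming factor-of-2 downsampling per level:

[CODE]
# Extra storage for a zoom pyramid: each level is 1/4 the size of the
# previous one, so the whole pyramid adds roughly 1/3 of the original.
ORIGINAL_BYTES = 350e9

extra, level = 0.0, ORIGINAL_BYTES / 4
while level > 1e6:          # stop once a level falls under a megabyte
    extra += level
    level /= 4

print(f"pyramid overhead: ~{extra / 1e9:.0f} GBytes")   # ~117 GBytes
# Coarser level spacing, tile indexes and caches push that toward 200 GBytes.
[/CODE]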

I assume you may require multiple layers to be processed, representing depth of the subject you're studying, but it's likely that layered image processing is focused on small regions rather than the entire 350 GByte source image.

If a local image were loaded onto a pre-staged work surface composed of SSDs, the result would be much faster.

Supporting active scrolling of all 9 displays would, of course, reach demands in the region of 750 MBytes per second, which SSDs in RAID, or other high-performance solutions, can support.
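
That 750 MBytes per second is just the single-display scroll demand multiplied out, using the same hypothetical tile sizes as above:

[CODE]
# Worst case: all 9 displays scrolling at once, each pulling 3 fresh
# 24-bit 4K tiles per second.
TILE_BYTES = 3840 * 2160 * 3
demand = 9 * 3 * TILE_BYTES
print(f"~{demand / 1e6:.0f} MBytes/s")   # ~672 MBytes/s; call it 750 with overhead
# A single SATA SSD (~500 MBytes/s) falls short; two in RAID 0 cover it.
[/CODE]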

In other words, you may not require an unusually high amount of RAM, depending on just how your software requirements stress the various systems in a more typical machine. You may merely require customized software on a robust machine.

That may be simpler than it sounds. It also happens to be the focus of my work over several decades (high performance applications, drivers, 3D rendering engines, etc).

You do require a robust machine, there's no doubt. 32 Gbytes could work if your requirements are simple enough, 64 Gbytes would serve better, but I think 128 Gbytes would be luxurious. That is, unless the software is NOT well tuned for this particular function. Then you may well need 1 Tbyte of RAM.

Exactly where the focus of the cost of the equipment should be placed is a matter of design, software and possibly some creative solutions which help to curb costs.

Unless I completely misunderstand what you require, $3000 is not best spent on RAM. It's probably better spent on large SSDs as temporary working storage (like Photoshop's scratch disks), with perhaps $600 on RAM.
 

Can somebody please make this guy blue already? He probably knows more than anybody else here...
 
***MIND BLOWN***
 
Geeze, thanks guys!

Two additional points to underscore.

When I say the requirement could demand 1 TByte of RAM, I'm not exaggerating by more than a small amount. If, for example, the source images are 24-bit color (3 color planes, 8 bits each), they would be loaded into RAM on memory-aligned boundaries, expanding that to 4 color planes, the new one probably being an alpha channel that's always opaque. The reason is that the CPU accesses RAM most efficiently on aligned boundaries, even though instructions can point to unaligned memory. When pointing to unaligned memory, the CPU must perform multiple steps to extract the target from the aligned loads, and this takes time. Depending on what the application does (convolutions associated with zooming, for example), unaligned data could cause anywhere from a 30% performance drain to a 10x slowdown; the application could crawl, comparatively. So a 350 GByte source image could expand to roughly 467 GBytes in RAM.
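
A small illustration of that expansion, assuming 24-bit source pixels padded out to a 4-byte-aligned RGBA layout in RAM (numpy is just for the demonstration; the principle is the same in C++):

[CODE]
# Pad 3-byte RGB pixels to 4-byte RGBA so each pixel sits on an aligned
# boundary. The new alpha plane is filled with 255 (fully opaque).
import numpy as np

rgb = np.zeros((2160, 3840, 3), dtype=np.uint8)   # one 4K frame of RGB
rgba = np.empty((2160, 3840, 4), dtype=np.uint8)
rgba[..., :3] = rgb
rgba[..., 3] = 255                                # opaque alpha

print(f"{rgb.nbytes / 2**20:.1f} MBytes -> {rgba.nbytes / 2**20:.1f} MBytes per 4K frame")
print(f"whole image: 350 GBytes -> ~{350 * 4 / 3:.0f} GBytes in RAM")   # ~467 GBytes
[/CODE]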

My point is that if the OP assembled a machine with, say, 256 GBytes of RAM, expecting that to improve performance, the result could be a profound disappointment. Virtual memory would be required to the tune of 200 GBytes or more, and that much paging could turn what is theoretically a 50-minute file load into hours, or DAYS.
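
To put a number on "hours, or DAYS": the figures below assume a worst case where paging is effectively random, and the ~100 random reads per second is a rough figure for a 7200 rpm disk, not a benchmark.

[CODE]
# If the working set exceeds RAM by ~200 GBytes and every access beyond RAM
# becomes a 4 KByte page fault served by a rotational disk...
EXCESS_BYTES = 210e9      # ~467 GByte working set minus 256 GBytes of RAM
RANDOM_IOPS = 100         # rough figure for a 7200 rpm disk
PAGE_BYTES = 4096

faults = EXCESS_BYTES / PAGE_BYTES
seconds = faults / RANDOM_IOPS
print(f"~{seconds / 86400:.0f} days just to fault that data in once")   # ~6 days
[/CODE]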

You read that correctly. A poorly designed viewer will need around 50 minutes to load a 350 GByte image from a typical rotational hard disk at the ~120 MBytes per second mentioned above, with potential alternatives at 10 to 15 minutes in RAID (I assume the source isn't SSD if there are over 30 TBytes of source imagery).
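
The load-time figures follow directly from the throughput numbers; the sustained rates here are ballpark assumptions, not measurements of specific hardware.

[CODE]
# Time to read 350 GBytes sequentially at various sustained rates.
IMAGE_BYTES = 350e9
for label, mb_per_s in (("single rotational disk", 120),
                        ("4-disk HDD RAID 0", 450),
                        ("SATA SSDs in RAID 0", 1000)):
    minutes = IMAGE_BYTES / (mb_per_s * 1e6) / 60
    print(f"{label:22s}: ~{minutes:.0f} min")

# single rotational disk: ~49 min
# 4-disk HDD RAID 0     : ~13 min
# SATA SSDs in RAID 0   : ~6 min
[/CODE]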

After loading, nearly 500 Gbytes of RAM will be consumed, and then the software has to actually function, requiring at least some operating overhead. Poorly designed viewers (relative to this data) might need double.

To illustrate what I've described about paging, visit or recall Google Maps or Google Earth and contemplate just how they're showing satellite imagery of the entire planet, resolved to a detail showing individual buildings at about 15 to 25 pixels wide.

They're able to push that over a 1 MByte per second (or less) data stream. At 100 Mbytes per second, a good viewer could function as if the image were all loaded in RAM, without requiring the RAM.

....as in the Lady Gaga song....don't be scared, I've done this before ;)
 
I didn't read the comprehensive responses in their entirety, but I'll +1 the point that for your workload, RAM is cost-prohibitive if not unachievable. We have several servers at work that are packed with 1TB of RAM each, but they are carved up among many LPARs. What you likely need is extremely fast storage to use as scratch space. Assuming the software you use is designed to handle this sort of thing, it ought to be optimized to use disk for storage. I'd lean much more heavily toward an array of SSDs for scratch space rather than attempting to rely on RAM. That said, I'm sure more RAM would complement the work you do, as would more horsepower under the hood.

Let's be honest, though, 99% of the people that read this thread haven't worked in the field you're in, let alone with the particular software. Give the vendor a call and ask what their recommendations are for workstations, and build or purchase something to fit or exceed those specs.
 
Let's be honest, though, 99% of the people that read this thread haven't worked in the field you're in, let alone with the particular software. Give the vendor a call and ask what their recommendations are for workstations, and build or purchase something to fit or exceed those specs.

That's a really good idea actually...
 
The OP is in California; 50% odds that's Berkeley, the origin of BSD UNIX and probably half the basis of modern computer science.

My contention is that through software alone, a more typical machine can be made to perform this task brilliantly, and there just HAS to be someone near the OP who would jump at the chance to write that software for a third of the cost of the hardware implied thus far.

I'm also available for the same proposal.

For example (as much for the OP as anyone): through another forum like this one, focused on software development, I met a PhD candidate online who was completing his thesis. He had written a C++ application that was, frankly, a masterpiece, to prove his thesis as a requirement for completing his PhD. The only problem was that the software basically assumed a perfect computer, unlimited RAM in particular, much like the OP's inquiry suggests.

At the time, though, even something close to a perfect computer wasn't possible (this was perhaps 5 years ago). A typical workstation class machine available to him could only manage 0.1% of the database he required. He was also in need of 1 TByte of RAM.

A real machine, on the other hand, can't operate a theoretically perfect piece of software if it requires a perfect machine. The difference between academic excellence and reality is engineering. What this PhD candidate needed was an engineer to assist.

I was able to fashion a solution, which plugged into his existing code, to solve his problem. It took me about 2 weeks (part time, occasional attention), just in time for the board to review his candidacy, and thus he achieved his PhD.

Instead of requiring a $100,000 computer, he spent a few hundred on my services and used a $1,500 computer to similar effect.

Custom software for specific research demands is a common avenue, more so than in most industrial applications.
 