PE R420, lower memory bandwidth than expected compared to old systems and modern desktops

Short version: a new server seems to have lower memory bandwidth than expected. Any ideas why?

I hope this is the right place to ask (first post). Among other systems, my company runs a server for the development database. It is well overdue to replace it with a new one, not for lack of performance but because the warranty has run out.

The previous server is a PE 2950 with a pair of Intel Xeon X5450 (3 GHz) and 16 GiB RAM (8 x 2 GiB). I was not here when it was set up, but it runs Red Hat Enterprise Linux Server release 5.1. We like to do some benchmarking before the new server goes into production, so we compared it with the old one (the 2950), which gives these results with hdparm -T:

> hdparm -T /dev/mapper/VolGroup00-LogVol00
/dev/mapper/VolGroup00-LogVol00:
 Timing cached reads:   25904 MB in  1.99 seconds = 13001.34 MB/sec

The hdparm manual says this about the -T option:

"This displays the speed of reading directly from the Linux buffer cache without disk access.  This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test."

The server wasn't idle when tested either (but presumably not under too high a workload). I then ran the same test on my OptiPlex 9020 desktop (running Linux Mint 15):

> sudo hdparm -T /dev/sda
/dev/sda:
 Timing cached reads:   25820 MB in  2.00 seconds = 12923.22 MB/sec

So we've established what seems to be normal. Now on to the new server, which is a PE R420 with a pair of Xeon E5-2407 processors and 64 GiB (4 x 16 GiB) RAM, running CentOS 6.4 on a clean installation (no database software installed, it is doing nothing; I've also had the same result with hdparm from a system rescue live CD):

> hdparm -T /dev/mapper/vg_neptunus-lv_root
/dev/mapper/vg_neptunus-lv_root:
 Timing cached reads:   15616 MB in  2.00 seconds = 7820.36 MB/sec

It is very consistent at around 7820 MB/sec, within ±5 MB/sec when testing 4 times in a row. I haven't installed any special drivers for Linux, but as I didn't see anything specific for the chipset etc., I assume there is nothing there that would change this figure.

I'm aware that the E5-2407 can only run the RAM at a data rate of 1066 MT/s (PC3-8500). The 2950 has PC2-5300 memory (667 MT/s data rate). Looking at the peak transfer rates (MB/s, on Wikipedia), it means the 2950 must be running its memory in quad channel (?), but is the R420 only able to use a single channel (for which the theoretical peak is 8533⅓ MB/s)?
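For reference, the back-of-the-envelope numbers I'm working from (a 64-bit channel moves 8 bytes per transfer):

DDR2-667 (PC2-5300):  667 MT/s x 8 B ≈ 5333 MB/s per channel
DDR3-1066 (PC3-8500): 1066 MT/s x 8 B ≈ 8533 MB/s per channel

So the ~13000 MB/s measured on the 2950 needs at least three DDR2-667 channels' worth of bandwidth (quad channel fits), while the ~7820 MB/s on the R420 doesn't even reach the peak of a single DDR3-1066 channel, let alone the ~17066 MB/s two channels per socket should give in theory.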


I looked inside the server: the memory modules are installed in A1, A2, B1 & B2. The slots numbered 1 and 2 are not physically adjacent to each other, so this should mean the memory operates in dual channel mode for each processor.
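To double-check from within the OS what speed and slots the modules actually run at, something along these lines should work (run as root; assuming dmidecode is installed, which it usually is on CentOS):

> dmidecode --type 17 | grep -E "Locator|Size|Speed"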

Let me add that I've updated the BIOS to the latest version, and the memory setting is on the performance option (I can't remember the exact name of the BIOS setting).
Any hints on how I can improve this with the hardware configuration as it is now? Or is hdparm -T perhaps a bad test to run?
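For what it's worth, hdparm -T runs as a single process, so on a two-socket box it presumably only exercises the memory attached to whichever node it happens to run on. If it helps anyone reproduce this, I'm planning to cross-check by pinning the test to one node (assuming the numactl package is installed):

> numactl --hardware
> numactl --cpunodebind=0 --membind=0 hdparm -T /dev/mapper/vg_neptunus-lv_root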


Hardware spec for the R420 from the quotation:

2 Intel Xeon E5-2407 2.20GHz, 10M Cache, 6.4GT/s QPI, No Turbo, 4C, 80W
1 PCIE Riser for Chassis with 2 Proc
1 Chassis with up to 8, 2.5" Hot Plug Hard Drives
1 Performance Optimized
1 1600 MHz RDIMMs
4 16GB RDIMM, 1600MHz, Low Volt, Dual Rank, x4
4 Module,Solid State Drive,480GB,Serial Ata,2.5,SMSG  (replaced with Intel S3500)
1 PERC H710 Integrated RAID Controller, 512MB NV Cache
1 Dual Hot Plug Power Supplies 550W
1 On Board Network Adapter
1 Baseboard Management Controller (12G)

Thanks to anyone who managed to read all this, and extra thanks for any advice!


Best regards

Christer

