Does anyone know how to fix Task Scheduler? Under Active Tasks it says "Reading Data Failed". The heading explains that active tasks are tasks that are currently enabled and have not expired, and at the bottom it just says "Reading data failed".
Windows 2008 R2 server, active tasks
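I've read that a single corrupted task registration is a common cause of this. Is hunting it down something like the sketch below (the paths and registry key are the standard ones on 2008 R2, as far as I know)?
schtasks /query /fo LIST /v > tasks.txt 2> taskerrors.txt
:: task definitions live here as XML files...
dir C:\Windows\System32\Tasks /s
:: ...and should match the entries cached in the registry:
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tree" /s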
PCI parity error on new R720
Recently took delivery of a new R720. I was configuring the RAID setup remotely via the iDRAC virtual console when the system became unresponsive. I tried reconnecting but was presented with the same frozen PERC config screen. A power cycle was then needed; this happened twice. On investigating the system log I found the following:
Both times the freeze happened after I tried to initialize the virtual disk in the PERC:
VDR32 Background initialization has started for Virtual Disk 0 on Integrated RAID Controller 1.
PCI1308 A PCI parity error was detected on a component at bus 0 device 5 function 0.
CPU9000 An OEM diagnostic event occurred.
Both freezes/lockups produced the same errors.
From reading around, the suggestion seems to be to reseat all PCI/PCIe cards, devices, and risers.
Anyone got any suggestions? Or encountered this before?
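In the meantime I'll pull the full SEL to send to Dell support; I assume remote racadm is the cleanest way (the iDRAC address and credentials below are placeholders):
racadm -r 192.168.0.120 -u root -p calvin getsel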
Dell System Hardware & Software Updates for November 17th – 21st 2014
Changing the server name
Hi,
My current server name is ptmb1. Is there any way I can change it to something else?
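Assuming it's a standalone box (not a domain controller), is it just a matter of netdom, something like the following (NEWNAME being whatever I pick)? Or is the System Properties dialog the safer route?
netdom renamecomputer ptmb1 /newname:NEWNAME /reboot:10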
iDRAC7 ping and SNMP issues
Hi,
We're currently deploying a number of R620s and have started to integrate the iDRAC SNMP features into our monitoring system. However, we're experiencing some oddities that we suspect could be bugs:
Packet loss on ping
We regularly see packet loss when the monitoring system pings the iDRAC. Not much or often, but enough to trigger alarms randomly. We've ruled out the network infrastructure as a source, as we get the same results connecting the iDRAC directly to the monitoring system with a high-quality CAT6 cable. We also don't get any packet loss to other equipment running Linux/Windows/embedded RTOS OSes over the same link and switches. It seems to be consistent across all iDRAC interfaces and versions we're running (including v1.57.57).
After some diagnostics I've found a plausible cause: when two simultaneous ping sessions are initiated from the same host to an iDRAC interface, one of the sessions stops receiving replies after the first 5-10 packets:
ping 10.100.104.15
PING 10.100.104.15 (10.100.104.15) 56(84) bytes of data.
64 bytes from 10.100.104.15: icmp_seq=1 ttl=64 time=0.295 ms
64 bytes from 10.100.104.15: icmp_seq=2 ttl=64 time=0.341 ms
64 bytes from 10.100.104.15: icmp_seq=3 ttl=64 time=0.291 ms
64 bytes from 10.100.104.15: icmp_seq=4 ttl=64 time=0.342 ms
64 bytes from 10.100.104.15: icmp_seq=5 ttl=64 time=0.332 ms
64 bytes from 10.100.104.15: icmp_seq=6 ttl=64 time=0.352 ms
64 bytes from 10.100.104.15: icmp_seq=7 ttl=64 time=0.349 ms
64 bytes from 10.100.104.15: icmp_seq=8 ttl=64 time=0.341 ms
64 bytes from 10.100.104.15: icmp_seq=9 ttl=64 time=0.362 ms
64 bytes from 10.100.104.15: icmp_seq=10 ttl=64 time=0.344 ms
64 bytes from 10.100.104.15: icmp_seq=11 ttl=64 time=0.335 ms
^C
--- 10.100.104.15 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 9998ms
rtt min/avg/max/mdev = 0.291/0.334/0.362/0.032 ms
That is the OK session; the second session, however:
ping 10.100.104.15
PING 10.100.104.15 (10.100.104.15) 56(84) bytes of data.
64 bytes from 10.100.104.15: icmp_seq=1 ttl=64 time=0.395 ms
64 bytes from 10.100.104.15: icmp_seq=2 ttl=64 time=0.302 ms
64 bytes from 10.100.104.15: icmp_seq=3 ttl=64 time=0.312 ms
64 bytes from 10.100.104.15: icmp_seq=4 ttl=64 time=0.355 ms
64 bytes from 10.100.104.15: icmp_seq=5 ttl=64 time=0.318 ms
64 bytes from 10.100.104.15: icmp_seq=6 ttl=64 time=0.425 ms
64 bytes from 10.100.104.15: icmp_seq=7 ttl=64 time=0.297 ms
64 bytes from 10.100.104.15: icmp_seq=15 ttl=64 time=0.265 ms
64 bytes from 10.100.104.15: icmp_seq=16 ttl=64 time=0.264 ms
64 bytes from 10.100.104.15: icmp_seq=17 ttl=64 time=0.264 ms
^C
--- 10.100.104.15 ping statistics ---
17 packets transmitted, 10 received, 41% packet loss, time 15996ms
rtt min/avg/max/mdev = 0.264/0.319/0.425/0.057 ms
When the "OK" ping session is stopped, the second, stalled session recovers. For reference, here is the tcpdump of the sessions:
15:25:33.574152 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 1, length 64
15:25:33.574531 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 1, length 64
15:25:34.573153 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 2, length 64
15:25:34.573438 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 2, length 64
15:25:35.572154 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 3, length 64
15:25:35.572449 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 3, length 64
15:25:35.588938 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 1, length 64
15:25:35.589219 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 1, length 64
15:25:36.573325 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 4, length 64
15:25:36.573662 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 4, length 64
15:25:36.588858 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 2, length 64
15:25:36.589180 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 2, length 64
15:25:37.572325 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 5, length 64
15:25:37.572626 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 5, length 64
15:25:37.587857 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 3, length 64
15:25:37.588131 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 3, length 64
15:25:38.571329 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 6, length 64
15:25:38.571737 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 6, length 64
15:25:38.587015 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 4, length 64
15:25:38.587337 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 4, length 64
15:25:39.570972 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 7, length 64
15:25:39.571250 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 7, length 64
15:25:39.586984 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 5, length 64
15:25:39.587297 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 5, length 64
15:25:40.571005 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 8, length 64
15:25:40.586967 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 6, length 64
15:25:40.587300 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 6, length 64
15:25:41.571003 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 9, length 64
15:25:41.587007 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 7, length 64
15:25:41.587336 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 7, length 64
15:25:42.571002 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 10, length 64
15:25:42.587005 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 8, length 64
15:25:42.587327 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 8, length 64
15:25:43.571009 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 11, length 64
15:25:43.587009 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 9, length 64
15:25:43.587351 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 9, length 64
15:25:44.570981 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 12, length 64
15:25:44.586969 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 10, length 64
15:25:44.587294 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 10, length 64
15:25:45.571004 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 13, length 64
15:25:45.586996 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14533, seq 11, length 64
15:25:45.587312 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14533, seq 11, length 64
15:25:46.570971 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 14, length 64
15:25:47.570997 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 15, length 64
15:25:47.571242 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 15, length 64
15:25:48.571005 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 16, length 64
15:25:48.571248 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 16, length 64
15:25:49.571001 IP 10.100.104.254 > 10.100.104.15: ICMP echo request, id 14532, seq 17, length 64
15:25:49.571246 IP 10.100.104.15 > 10.100.104.254: ICMP echo reply, id 14532, seq 17, length 64
Does anyone else see similar behaviour?
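For anyone who wants to reproduce it, the test boils down to running two concurrent ping sessions from one host (bash; substitute your own iDRAC address):
ping -c 30 10.100.104.15 > session1.log &
sleep 2
ping -c 30 10.100.104.15 > session2.log &
wait
grep -c 'bytes from' session1.log session2.log   # both should report 30 replies if nothing stalls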
Inconsistent SNMP replies between iDRACs:
Strangely, identical servers running the same firmware version (1.57.57) with, to my knowledge, identical configurations return different SNMP replies. More precisely, some interfaces do not respond to all OIDs, e.g.:
snmpwalk -v2c -c public -On 10.100.104.13 .1.3.6.1.4.1.674.10892.5.4.1100.30
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.1.1.1 = INTEGER: 1
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.2.1.1 = INTEGER: 1
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.3.1.1 = INTEGER: 0
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.4.1.1 = INTEGER: 2
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.5.1.1 = INTEGER: 3
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.7.1.1 = INTEGER: 3
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.8.1.1 = STRING: "Intel"
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.9.1.1 = INTEGER: 3
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.10.1.1 = INTEGER: 21
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.11.1.1 = INTEGER: 3600
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.12.1.1 = INTEGER: 2100
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.13.1.1 = INTEGER: 6400
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.14.1.1 = INTEGER: 1200
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.16.1.1 = STRING: "E5"
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.17.1.1 = INTEGER: 6
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.18.1.1 = INTEGER: 6
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.19.1.1 = INTEGER: 12
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.20.1.1 = INTEGER: 4
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.21.1.1 = INTEGER: 29
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.22.1.1 = INTEGER: 29
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.23.1.1 = STRING: "Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz"
.1.3.6.1.4.1.674.10892.5.4.1100.30.1.26.1.1 = STRING: "CPU.Socket.1"
and
snmpwalk -v2c -c public -On 10.100.104.15 .1.3.6.1.4.1.674.10892.5.4.1100.30
.1.3.6.1.4.1.674.10892.5.4.1100.30 = No Such Object available on this agent at this OID
On iDRACs which do not respond, we are also missing some other information, like the BIOS version, but we do get information on voltages, fans, temperatures, and disk status. Rebooting/resetting/disconnecting power does not seem to help.
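For reference, the loop we use to spot the silent ones; it walks the same processor OID on each iDRAC and prints the first lines of the reply (addresses are ours, substitute your own):
for ip in 10.100.104.13 10.100.104.15; do
  echo "== $ip =="
  snmpwalk -v2c -c public -On $ip .1.3.6.1.4.1.674.10892.5.4.1100.30 | head -n 3
done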
Any thoughts on what might be wrong?
Thanks in advance
staale
Multiple DSM version mismatch
I have a 2-node cluster (Dell R720s) that was just rebuilt from 2012 to 2012 R2. It's linked to a Dell MD3200 array for the CSV.
I knew multipathing had to be installed to see the array properly as one drive, but I wasn't certain how it should be done. I installed the MPIO feature in Server Manager, but I knew there was software missing that had been on the nodes when they ran 2012.
I figured out that it was the Dell MDSM, so I installed that. Now, when running cluster validation, I get errors about a DSM mismatch between nodes. It shows the Microsoft and Dell DSMs installed on both nodes, which I now figure are the MPIO feature and the MDSM. The versions of each manufacturer's DSM do agree between the two nodes, but of course they don't match each other.
I'm assuming I am not supposed to run both DSMs and need to uninstall one of them. Since the array is a Dell, and I'm sure I remember the software being there before, I wonder if I should uninstall the MPIO feature via Server Manager. If I do, will it mess anything up? I'd rather live with the version-mismatch errors than damage the array or cluster in any way.
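Before uninstalling anything, I was planning to check on each node which DSM has actually claimed the array's LUNs, using the built-in mpclaim tool; something like:
mpclaim -s -d
:: then, for a specific disk number from that list:
mpclaim -s -d 0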
Jonathan
R 620 AHCI bios upgrade
I have an R620 system whose AHCI BIOS I would like to upgrade to at least version 1.3.
Currently the system is at AHCI BIOS version 1.0.3.
I have been through all the downloads for this system and cannot find anything concerning this.
I would be very grateful for any insight or direction you might have with this.
Thanks,
Dave
PERC 4/Di warning: Embedded RAID firmware is not present
Hello,
I have a Dell PowerEdge 2600 (running Windows 2003) whose PERC 4/Di RAID controller appears to be faulty; I continually get the message "Warning: Embedded RAID firmware is not present".
I bought a replacement kit (used), and I get the same warning!
I've found some information on the web, but nothing has helped me correct it; it's possible the problem comes from the motherboard.
But before giving up on the RAID option on this server, I'm opening this post with some screenshots attached, in case someone has a solution!
For example, is it an option to reset/reinstall the BIOS or something?
If not: I have a second, identical server whose RAID works well.
Can I take the PERC 4/Di components from this second server and try them in my broken server?
That would help me check whether the problem comes from the PERC or not!
But it's very important that this manipulation not wipe the current RAID settings on the second server... I cannot afford for the second server to stop working after its PERC is reinstalled.
Thank you for your help,
Chris
Turn on printer redirection
Hi,
My 2008 R2 server is running the Lytec software, and people connect remotely to the server from their homes. Is there a way to have them print to their own printers when they click Print on their side?
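From what I've read this is driven by client-side printer redirection plus the server allowing it; is it just a matter of the following (the registry value name is my best guess at what to check; 0 should mean redirection is allowed)?
:: client side: in the saved .rdp file (or mstsc > Local Resources > Printers):
redirectprinters:i:1
:: server side, in cmd, check that printer redirection isn't disabled for the listener:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v fDisableCpm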
PowerEdge C1100 PERC 6/i HP SAS drives not recognized
Hello,
First, please pardon my ignorance. I'm setting up my PE C1100. I bought it along with some HP 300GB 15K SAS drives, model EF0300FATFD. The drives weren't detected by the BIOS, though I'm getting a green light on them. I then found out that although the backplane supports SAS drives, the motherboard chipset doesn't. So I bought a PERC 6/i and a cable, and connected the card to the SATA connectors on the backplane. The drives still aren't found. I updated the BIOS, but I'm getting an error trying to update the BMC/ESM. My understanding is that the PERC is not locked down, but it isn't necessarily guaranteed to see non-Dell drives. Any help/feedback is hugely appreciated.
Upgrade server and pc's
We need to upgrade our server and PCs. We have an HP ProLiant. We have two locations. We need Terminal Services, and we need to host our email.
R610 Memory Possibilities
I have an R610 with dual processors. I had 16GB of RAM in the form of 8x 2GB DIMMs in a configuration like so:
[0] [0] [2] [2] [2] [2] PROC PROC [2] [2] [2] [2] [0] [0]
This seemed to work fine. I just purchased 16GB more RAM in the form of 2x 8GB sticks, but I am unable to determine an optimal configuration. What are my options to utilize more than 16GB of RAM with the new 8GB sticks (that also doesn't require an F2 on reset)?
As of now I've just put the two 8GB sticks in A1/B1 with all the 2GB sticks left out. I feel like my only recourse is to buy two more 8GB sticks.
Thank you
JHuggans
Edit: I should probably add that I'm on BIOS 3.0.0... that may also be an issue.
PE2950 III not seeing PERC 6/E or SAS 6Gbps cards
Hello,
I have several PE2950 IIIs. They're all equipped identically, with PCIe risers, two Intel NICs, a DRAC, and a PERC 6/i RAID controller. All firmware and BIOS were flashed to current within the last 6 months, and they're all working fine.
It was time to add some external storage, so I picked up an MD1000 expansion unit and two SAS cards: one PERC 6/E and one Dell SAS 6Gbps HBA.
I removed the working Intel NIC from one of the x8 PCIe slots and installed the SAS 6Gbps HBA. For unknown reasons, it is not detected during POST. I don't see the "Press CTRL-C for blah blah" prompt.
I figured that was weird, so I reseated the PCIe riser, reseated the SAS 6Gbps HBA, and rebooted. No luck. No card detected: no messages from it (like "Press CTRL-C"), and nothing in the BIOS when looking at IRQ assignments.
I tried a few more times, and oddly enough it worked yesterday. I was able to access the MD1000 and things seemed OK. I rebooted today, and it vanished! No more SAS 6Gbps HBA!
I figured I might as well try the PERC 6/E cards (I have two on hand). Neither is detected during POST/boot, and neither appears in the IRQ assignment settings in the BIOS.
I figured it might be that PE2950, maybe its riser is toast or something. I tried a different PE2950 III, same config as described above, followed the same procedure, and no cards were recognized.
I'm at a complete loss... Are all three cards bad?! Am I doing something wrong? Is there anything I should try?
R710 Boot Failure
Hello everyone,
I've got an R710 that will not boot.
At power up, the fans run full speed for perhaps half a second, then the server goes into standby mode. Attempting to start the server with the power switch results in nothing at all. If I pull the power I can repeat this same cycle. If I leave the server plugged in long enough, it tries to reboot but does not succeed.
Nothing is displayed on the screen at all, and there is zero indication from the server as to the nature of the error, except that every so often the front display will briefly show a System Software Error with no details before the server shuts down.
I have removed everything down to CPU1 and a single DIMM in slot A1, with the same result.
Connecting via DRAC produces nothing in the system logs.
History:
I've been fighting with a failing PERC 6/i in this server, and I have recently replaced the card, replaced two drives (might not have needed to do this, but the original 6/i showed them as failed), and rebuilt the array. For a few days after doing this, the server seemed fine, but then it started crashing again (though I could still restart the server).
Not being sure what the issue was, I tried another battery on the 6/i, and tested the BIOS battery for the heck of it. No change.
Can someone please help shed some light on this? Am I looking at a failed main board?
Thank you,
Scott
PowerEdge T310 BMC web GUI
All the docs that I have read say that I should be able to access the BMC via a web interface. I have tried both http and https, but no luck. I verified that I can ping the BMC just fine, and I can access it through the ipmish utility fine too. Any suggestions? Thank you in advance.
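Could it be that the T310's base BMC simply has no web server, and that the web GUI requires at least an iDRAC6 Express module? In the meantime, this is how I verify the BMC's LAN settings over IPMI (address and credentials are placeholders):
ipmitool -I lanplus -H 192.168.0.120 -U root -P root lan print 1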
M600 Bios Update through IDRAC Virtual Media
I have an M1000e chassis with an M600 blade running BIOS 1.0.1. I would like to upgrade it to the latest BIOS version. However, on the Dell support site, the only BIOS files I see are .exe and .bin files. How do I perform a BIOS update, either through the CMC on the chassis or in the iDRAC itself?
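One approach I've read about (a sketch, not necessarily Dell's official procedure, and assuming the .exe is the DOS-executable flavour): copy the package onto a bootable FreeDOS floppy image with mtools on a Linux box, then attach that image as virtual media in the blade's iDRAC and boot from it:
# copy the BIOS package into an existing FreeDOS boot image (file names are examples)
mcopy -i freedos.img M600_BIOS.exe ::
# then mount freedos.img as a virtual floppy in the iDRAC, boot the blade from it, and run the .exe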
Thanks
JP
PowerEdge 2900 newbie help
Hello all, I am new here and this is my first post.
First of all, let me start by saying I know nothing about servers, but I do know my way around Windows XP and 7 a little.
I had an old Dell PowerEdge SC1420 that I had installed Windows 7 on, and I was using it on my home network for sharing files between my PC, my wife's laptop, the kids' tablet, etc. Then I discovered Plex, and it rapidly became used more as a media server.
I wanted to upgrade my server to more than 4GB of RAM, but I couldn't, as the server needed a special RAM cooling fan, and to be honest the cost of that plus some extra RAM worked out more expensive than upgrading servers. I found a PowerEdge 2900, single CPU (2.66GHz), with 6GB of RAM, 4x 160GB SATA drives, and 8 caddies; it was local on eBay and I got it for just under £28.
This is where my problem starts. I was hoping to just put a hard drive on SATA A for an OS, then use the 4x 160GB drives as RAID, but as two separate arrays, so I'd have in effect two mirrored pairs of 160GB (I think that's the correct term). That would leave me 4 slots, and I was hoping to put the 4 other full hard drives from my old SC1420 straight into those bays. Is this possible?
I have read the forum trying to find an answer, but if I'm honest many of the replies are above me, so could people please remember that many of the terms to do with servers and RAID are alien to me.
Thanks for reading.
RD1000 - can't eject
Dell T610 Server - SBS 2012 - Shadow Protect
Hi there
I have a Dell T610 Server which runs SBS2012. We also have ShadowProtect to take regular backups of the server. These backups, backup onto a RD1000 Internal Removable Disk device.
The problem is that I cannot eject the disks. I receive the error "The disk is currently in use".
Using Process Monitor and handle.exe, I can see that two open handles exist on the removable disk. These open handles originate from an exe called SMSvcHost.exe.
Does anyone have any idea why these handles are open on this disk? How can I close them?
I assume that if I can stop/close these open handles, I will then be able to eject the disk?
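In case it helps anyone else, handle.exe can also force-close a handle once you have the PID and the handle value from the listing; something like the following (the handle value and PID are placeholders, and force-closing a handle can destabilize the owning process, so I'm wary):
handle.exe E:
handle.exe -c 1A4 -p 1234 -y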
Many thanks
Lee Parvin
Dell PowerEdge T710 update failure: system services is disabled
Hi all, I have a PowerEdge T710 server that conked out in the middle of the night due to reported power supply issues. I had to pull the plug and wait a minute; then it let me boot up. I saw there were a number of updates available, so I installed the latest BIOS update, rebooted, then the ESM/iDRAC updates. There was a power supply firmware update available, but it won't let me install it. That was the main one I wanted to install, as I'd read about the update fixing communication issues between the power supplies and the rest of the system...
When I try to run that patch I get the message "Update Failure: System Services is disabled". I also see "System Services disabled" during POST when I restart the host server. I haven't found much on this issue other than a Dell note saying to try restarting the server. How do I fix this?
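From what I've pieced together (hedged, as I'm going from memory), the flag usually means the Unified Server Configurator / Lifecycle Controller partition was left in a bad state. The two things I plan to try: "Cancel System Services" in the iDRAC configuration utility (Ctrl-E during POST), and a soft iDRAC reset from the management station (address and credentials are placeholders):
racadm -r 192.168.0.120 -u root -p calvin racreset soft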
Thanks in advance,
Sir_Timbit
PE1955 HBA
I am trying to find a list of supported 4Gb HBAs for the PE1955.
The server is running Windows 2003. The HBA can be QLogic or Emulex.
Is there a Hardware Compatibility List I can download?
Sean