- Update
This incident has been resolved.
- Resolved
This incident has been resolved.
- Identified
We have received an update from the datacenter that the server has been rebuilt with a new chassis and motherboard and relocated to another cabinet with cooler temperatures. We are waiting for a further update, as the networking department has to reconfigure the VLAN on the new switch for our IPv4 subnets.
- Investigating
We are aware of another outage on this specific host node in Los Angeles. We are once again in contact with remote hands.
- Resolved
This incident has been resolved as of yesterday.
- Investigating
We are aware of another outage on this host node and we are investigating.
- Resolved
This incident has been resolved as of a few hours ago.
- Identified
We apologize for the delay in our response. We are aware of the problems and are still working on the incident with remote hands technicians in Los Angeles. This is a hardware issue that cannot be resolved until a technician physically replaces parts in the host node. Generally this process takes only 10-20 minutes, but a scheduling conflict with our remote hands technicians has resulted in an extraordinarily delayed response from our datacenter. Once remote hands can be deployed, we should easily be able to restore service, as all replacement parts are on-site. The root cause was a cooling issue affecting the memory modules in the host node: the modules overheated, forcing the host node to shut down.
We truly apologize for the inconvenience caused, and we will be switching away from this provider as soon as service is fully restored. This delayed remote hands response is not typical and not something we expected from our upstream in Los Angeles, given their history of fast replacements and remote hands requests, especially as they are among the biggest providers of dedicated Ryzen servers in the United States. We typically build and extensively test our host nodes before shipment in order to catch problems like these before they occur, but in this case the machine was a rented server that was urgently deployed due to a previous incident.
We will be issuing SLA credit in line with our SLA policy, located in our knowledgebase. Please open a ticket requesting SLA credit once your service is back online.
You can look for updates on https://status.advinservers.com
- Investigating
We have detected a partial outage on one of our nodes in California, US. A technician was dispatched to investigate as soon as the outage occurred, but we are still waiting on a response from our upstream. There appears to be a hardware problem with the host node, hence the downtime. Due to conflicts in schedules and availability, remote hands have been delayed, so the node remains offline until a technician can resolve the problem.