12:30 PM central time 6/26/16
edit: maintenance complete, let me know if there are any issues
Were there any issues before? I found that the servers loaded a lot faster myself
Were they faster after this maintenance?
Kalle has suggested that the MS get restarted every 20-30 days. This is the first restart that I have done in about 30 days.
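For what it's worth, a restart on that cadence could also be automated instead of done by hand. This is only a sketch under assumptions not stated in the thread: that the MS runs on Linux under systemd, with a hypothetical unit name `swbfspy-ms.service`:

```shell
# Hypothetical cron entry (e.g. in /etc/cron.d/swbfspy-ms-restart):
# restart the master server early on the 1st and 26th of each month,
# roughly matching the suggested 20-30 day cadence. Cron cannot express
# "every 25 days" exactly across month boundaries, so this approximates it.
0 4 1,26 * * root systemctl restart swbfspy-ms.service
```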
Quote from: Led on June 26, 2016, 12:56:21 PM
Were they faster after this maintenance?
Kalle has suggested that the MS get restarted every 20-30 days. This is the first restart that I have done in about 30 days.
All I noticed was that the TPS went up from 30 to 40. Although it did seem like a better connection, as I managed to rack up a lot more kills than usual ;)
TPS is set at 40 for the ICW6; it's not related to the MS restart
Quote from: Led on June 26, 2016, 12:56:21 PM
Were they faster after this maintenance?
Kalle has suggested that the MS get restarted every 20-30 days. This is the first restart that I have done in about 30 days.
Doesn't feel any faster than it was before, so idk
Performing some firewall tasks to see if we can somehow block DDoS attacks. A friend of mine and I are testing this right now, if anyone has any problem in the meantime, post here --
EDIT: Finished it!
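The post doesn't say which firewall rules were tried. As a hedged illustration of one common approach, this iptables sketch rate-limits inbound UDP per source IP; the port (27900, a typical GameSpy-style heartbeat port) and the thresholds are assumptions, not the rules actually used:

```shell
# Sketch only: drop UDP packets to the master-server port from any
# single source IP that exceeds 200 packets/second (burst of 400).
MS_PORT=27900   # assumed GameSpy-style port; not confirmed in the thread
iptables -A INPUT -p udp --dport "$MS_PORT" \
  -m hashlimit --hashlimit-name ms_flood --hashlimit-mode srcip \
  --hashlimit-above 200/sec --hashlimit-burst 400 -j DROP
```

Per-source limiting like this helps against simple floods, though it does little against large spoofed/reflected attacks of the kind described later in this thread, which have to be absorbed or null-routed upstream.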
Switch reboot
Jul 15 2016 04:02:32 PM PT
At approximately 5:49pm CDT, our primary aggregation switch in Chicago rebooted itself, causing a disconnection event for all customers at that location.
This is the second time this has happened. In both cases, we have been unable to determine a cause of the switch failure.
In response to this, to prevent further service interruptions, we will be working to replace this switch in the near future. We will post a maintenance event when we have final information on when the maintenance will be performed.
Is there any Maintenance going on with SWBFSpy lately? There's next to no servers across both games.
Quote from: TheGangstarTY on July 28, 2016, 04:04:47 AM
Is there any Maintenance going on with SWBFSpy lately? There's next to no servers across both games.
Unless we post something, there are no maintenance events going on.
[spoiler]
Subject and date Description
(No maintenance events recorded)
(No events recorded)
(No bandwidth events recorded)
[/spoiler]
Upcoming router maintenance early 8/5
Aug 04 2016 12:17:19 PM PT Between 3am and 5am CDT on Friday, August 5, we plan to update the OS on one of our routers in Chicago, which will require a reboot. We expect for this to cause approximately 10 minutes of connectivity loss for your service at this location.
We are performing this update because the new software release should offer higher performance than the old one, which will make the router able to better handle particularly high-packet-rate DDoS attacks. We have seen several extremely large DDoS attacks against a customer in Chicago today that have saturated all of our upstream links, and would have needed to be null-routed no matter what, but they caused some additional impact because this router could not process all of the traffic internally.
If there is a problem with the new software release, we will roll back to the older version.
Location down
Aug 08 2016 02:33:53 PM PT Our primary aggregation switch in Chicago appears to have crashed, causing the location to go offline. We are currently investigating.
We already have a new switch on order to replace this one, since it has crashed twice in the past.
Update @4:43pm CDT: We are waiting for an on-site technician to arrive at the cabinet and physically survey it to determine what has happened. The main possibilities are that the switch has failed or that the PDU that it is plugged into has failed. If it is the latter, it should be a quick fix once the technician arrives; if the former, it may take a little longer for us to rearrange cables.
The replacement switch that we ordered is higher-end, with additional redundancy, including redundant PSUs.
Intermittent packet loss over Telia
We have been noticing intermittent packet loss over one of our upstreams in Chicago (Telia) tonight, on the inbound from some endpoints (from New York/Ashburn, at minimum). We have contacted Telia to ask them about this. Clients whose ISP reaches us via Telia may notice skipping or other performance problems with services in Chicago.
Master Server maintenance has commenced, and is now over. Seems to work :cheers:
Charter packet loss
Aug 25 2016 01:29:49 PM PT We have received reports from Charter customers that they are seeing intermittent packet loss to our Chicago location. The data we have seen so far suggest that the problem is occurring between Charter and Voxel (Internap's peering network). We have emailed Internap to ask them to investigate, and we have also emailed Charter.
Upcoming reboot on 9/8
Sep 06 2016 11:31:29 PM PT Between 12:30am and 4am local time for your service on Thursday, September 8, we plan to gracefully shut down your VDS, upgrade the machine hosting it to an updated version of Xen/Linux, reboot the machine, and bring your VDS back online. We expect for this process to cause approximately 30-60 minutes of downtime for your service.
Because this maintenance will require a full reboot, we recommend that you configure any services which need to always be running to start automatically when the VDS is booted.
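One way to follow that recommendation, sketched under the assumption of a systemd-based Linux VDS and a hypothetical unit name (on Windows, the equivalent is setting the service's Startup Type to Automatic):

```shell
# Enable the service so it starts at every boot of the VDS:
systemctl enable swbfspy-ms.service
# Verify; prints "enabled" once the above has taken effect:
systemctl is-enabled swbfspy-ms.service
```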
We do not anticipate problems due to this update, but to be safe, we recommend that you externally back up any critical (irreplaceable) files before the maintenance event and before any other service adjustment.
This Xen upgrade is required because of an important Xen vulnerability. We are rebooting all of our VDS-hosting machines that are running anything but the latest Xen release the same morning.
MS is back up now after the NFO maintenance :cheers:
There are problems in swbf2 after the maintenance.
From around 8 servers before, I guess, only 2 are shown now.
And after we finish one more map on a server that is still running, there will probably be only one left, because the last one just crashed after a single map.
I am in the spy gaming team with server access. I restarted all servers, servers still do not show up.
I tried hosting a server from my computer, the server won't start.
So something is wrong there right now. :D
Just to let you know!
They aren't loading at all for me. Hopefully it will fix itself
the ps2 servers are working. here is a wireshark capture of the stuck pc swbf2 server
https://mega.nz/#!bcxjxIja!F-xGyO3VijwF0OJjFuOxd1jjWKJqNabEKxFUm6DwwbQ
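For anyone wanting to collect a similar trace on the server side without Wireshark's GUI, a hedged equivalent (the interface name and port are assumptions):

```shell
# Capture traffic to/from the assumed master-server port into a pcap
# file that Wireshark can open later. Stop the capture with Ctrl-C.
tcpdump -i eth0 -w swbf2-ms.pcap udp port 27900
```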
I will reboot the MS and restart everything.
Yo nice man.
Servers show up + are running again.
Thank you.
Quote from: strohhut on September 08, 2016, 02:26:10 PM
Yo nice man.
Servers show up + are running again.
Thank you.
Yah
Brief problems due to several DDoS attacks
Sep 18 2016 07:12:52 PM PT
Extremely large attacks against multiple servers run by a single customer in Chicago led to brief packet loss events shortly after 9pm CDT. These attacks led to immediate null-routes, but the impact was still significant because of the sheer amount of traffic that this botnet generated (well over 100 Gbps).
We are continuing to watch for further attacks and will continue to process any that we see.
Additionally, we are in the late stages of adding another 10 Gbps of upstream capacity in Chicago. We expect it to be online within the next few days, following additional testing.
Problem with an upstream in Chicago
Sep 23 2016 12:41:37 PM PT One of our upstreams in Chicago (GTT) appears to be experiencing a partial outage in Chicago. We are currently investigating. Clients reaching us, or being reached through, GTT may be experiencing packet loss or a loss of connectivity entirely right now.
Upcoming upstream bandwidth maintenance early 10/18
Oct 12 2016 10:58:36 PM PT
Between 1am and 8am CDT on Tuesday, October 18, one of our upstream providers (Telia) has informed us that they will be doing maintenance on our connections. This maintenance may cause us to be unreachable by some clients for approximately 15-30 minutes at two times during the morning, they tell us.
Upcoming Xen upgrade with reboot @ 1:30am CST on Nov 21
Nov 18 2016 06:05:10 PM PT We are planning to reboot the machine hosting your VDS at approximately 1:30am CST on November 21, 2016, in order to upgrade it to the most recent version of Xen. This is necessary because the new version includes a fix for a newly-discovered, critical Xen vulnerability.
For the reboot, your VDS will need to be shut down, then booted back up. We estimate that downtime will last 15-60 minutes.
Windows Server 2012 R2 still has a bug that causes VDSes running that OS to sometimes experience boot failures (but no problems at any other time), particularly when there is a high I/O load for the machine. If you have a Win2012 R2 VDS and it does not come back online after the maintenance on its own, please wait a bit and then try rebooting it again through the Server Control page.
Problems due to system reboots are exceedingly rare, but to be safe, we recommend that you remotely back up irreplaceable files before the maintenance event.
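A minimal sketch of the off-box backup the notice recommends, with the paths and the destination host as placeholders rather than anything from the thread:

```shell
# Copy irreplaceable files to another machine before the maintenance.
# /opt/swbfspy/data/ and backup-host are hypothetical; substitute your own.
rsync -avz /opt/swbfspy/data/ backup-user@backup-host:/backups/swbfspy/
```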
I will restart the master server when I wake up. :cheers:
Bump. Only SWBF1 servers are running at the moment. After the maintenance, I will start the SWBF2 and PS2 servers.
Update @ 1:26am PST on 11/21: The start of this maintenance had to be delayed by 10-30 minutes for many machines while we worked out a few kinks in the process and did some last-minute testing. Most machines were back online within 30 minutes of the trigger, falling within the 15-60 minute estimate given in the post. The next time that an emergency maintenance is required, though, we will be more specific about what we mean by 'approximately' in reference to when it is planned to start, to avoid any confusion, and we will include significant padding to help compensate for unforeseen events.
We changed our procedure so that VDSes were saved to disk, then restored, instead of fully shutting them down, after we determined that a bug with Xen which previously prevented this had been fixed. If the save operation worked properly, this means that your VDS should have only seen network connectivity loss, instead of a full reboot. You may need to check the clock in your VDS to make sure that it is synced properly, as this could have caused it to be knocked off. (If the save/restore didn't work properly, your VDS will have gone through a full shutdown and reboot instead.)
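Checking and nudging the clock after such a save/restore can look like the following, assuming a modern Linux guest using systemd-timesyncd (chrony or ntpd setups are analogous):

```shell
# Show whether the system clock is currently NTP-synchronized:
timedatectl status
# If the clock was knocked off by the save/restore, restart the
# sync daemon to force a fresh synchronization:
systemctl restart systemd-timesyncd
```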
Upcoming switch maintenance early 12/14
Dec 11 2016 10:19:59 PM PT Between 2am and 6am CST on Wednesday, December 14, we plan to replace our main aggregation switch in Chicago with a newer, faster, and more reliable one. When we swing over the router uplinks to the new switch, we expect for customers to see approximately 5-15 minutes of connectivity loss.
the MS is currently being rebooted
update: maintenance complete. Let me know if there are any issues.
MS rebooted
MS rebooted
MS rebooted
I will shut down the game server and Master Server tonight, and bring everything back up Tuesday morning.
----------------
Upcoming Xen upgrade with reboot @ 1:30am Central Time on June 20
Jun 19 2017 12:38:13 AM PT We are planning to reboot the machine hosting your VDS at approximately 1:30am Central Time on June 20, in order to upgrade it to the most recent version of Xen. This is necessary because the new version includes a fix for a newly-discovered, critical Xen vulnerability.
For the reboot, your VDS will need to be shut down, then booted back up. We estimate that downtime will last 15-60 minutes.
Windows Server 2012 R2 and Xen still have a bug that causes VDSes running that OS to sometimes experience boot failures (but no problems at any other time), particularly when there is a high I/O load for the machine. If you have a Win2012 R2 VDS and it does not come back online after the maintenance on its own (our system should notice most such crashes and automatically try again), please wait a bit and then try rebooting it again through the Server Control page.
Problems due to system reboots are exceedingly rare, but to be completely safe, we recommend that you remotely back up irreplaceable files before the maintenance event.
Quote from: Led on June 19, 2017, 07:08:18 AM
I will shut down the game server and Master Server tonight, and bring everything back up Tuesday morning.
----------------
Upcoming Xen upgrade with reboot @ 1:30am Central Time on June 20
The [FC] servers will also be down during this time for the same upgrades.
MS and game servers shut down
Servers restarted
MS was down last night about 11 PM central time (services were still running). I've rebooted it and the server listing is available again.
Upcoming Xen upgrade with reboot @ 1:30am Central Time on September 12
Sep 07 2017 11:50:57 PM PT We are planning to reboot the machine hosting your VDS at approximately 1:30am Central Time on September 12, in order to upgrade it to the most recent version of Xen. This is necessary because the new version includes a fix for a newly-discovered, critical Xen vulnerability.
For the reboot, your VDS will need to be shut down, then booted back up. We estimate that downtime will last 15-60 minutes.
Windows Server 2012 R2 and Xen still have a bug that causes VDSes running that OS to sometimes experience boot failures (but no problems at any other time), particularly when there is a high I/O load for the machine. If you have a Win2012 R2 VDS and it does not come back online after the maintenance on its own (our system should notice most such crashes and automatically try again), please wait a bit and then try rebooting it again through the Server Control page.
Problems due to system reboots are exceedingly rare, but to be completely safe, we recommend that you remotely back up irreplaceable files before the maintenance event.
gameserver and master server maintenance this morning
Router maintenance early 11/30 will cause some connectivity loss
Nov 29 2017 12:30:14 AM PT Between approximately 3am and 6am CST on Thursday, 11/30, we will be moving our routers and aggregation switch to a different cabinet in the datacenter. We will be doing this very carefully and as efficiently as possible. We anticipate between 30 and 120 minutes of connectivity loss during the maintenance window because of the change.
Upcoming routine Internap upstream maintenance on 1/17 and 1/18
Jan 13 2018 10:21:17 PM PT
One of our upstream providers will be performing maintenance on one of its routers between 10pm CST and 1am CST, starting on 1/17 and again starting on 1/18. This maintenance may cause brief periods of connectivity loss or increased latency for some clients to your service.
Upstream failure event
Feb 01 2018 01:49:03 PM PT All of our connections to one of our transit providers in Chicago (GTT) went down at approximately 3:31pm CST. They came partially back up at approximately 3:37pm CST but we are still seeing problems with them now.
This occurred across all routers on our end and no other upstreams failed, so this was an issue entirely within GTT. We are following up with them to ask what happened and for an ETA on a fix.
INAP problems around 12:30pm CST
Mar 05 2018 11:01:10 AM PT We saw some internal packet loss within one of our upstreams in Chicago, INAP, between approximately 12:30pm CST and 12:45pm CST. They have confirmed the problem within their network and say that they are still investigating it now.
The loss seems to have subsided, but while it was occurring, a portion of clients reaching our network through this specific upstream would have seen up to 90% packet loss, essentially making services at this location unreachable.
Upcoming routine upstream maintenance (Telia)
Apr 11 2018 01:43:46 PM PT One of our upstream providers will be performing maintenance on one of its routers between 4:30am CDT and 5:30am CDT on April 13. This maintenance may cause brief periods of connectivity loss or increased latency for some clients to your service.
Chicago problem
May 31 2018 01:16:30 PM PT We are currently investigating an issue with our Chicago PoP that seems to be breaking connectivity for most clients.
---
If anyone has any issue related to this, report it here, please! :cheers:
Upcoming INAP maintenance early 7/17
Jul 16 2018 11:26:06 AM PT - Between 2:30am and 4:30am CDT on Tuesday, July 17
INAP will be performing maintenance on one of our links to them in Chicago. This may cause a brief connection interruption and/or routing reconvergence for customers reaching us, or being reached over, our INAP connections at this location (INAP is one of three transit providers that we use in Chicago, in addition to direct peering with many ISPs).
Brief upstream (Telia) issue a short time ago
[spoiler]Between approximately 8:13pm and 8:27pm CDT, one of our upstreams in Chicago (Telia) experienced some sort of problem with the router that we connect to. This brought down our links to that upstream and caused some lost packets and routing reconvergence for customers coming in, or going out, over that transit provider.
We have opened a ticket with Telia to ask about this and will continue to follow up with them.
Update @ 9:52pm CDT: Telia has responded that this was a "very brief major outage" and that they will send us an RFO at a later (unspecified) time.[/spoiler]
If anyone has any problem, let us know here! Thanks :cheers:
Upcoming Xen upgrade with reboot @ 1:30am Central Time on September 14
Sep 13 2018 09:34:17 AM PT We are planning to reboot the machine hosting your VDS at approximately 1:30am Central Time on September 14, in order to apply critical Intel microcode and Xen updates to address a new speculation-related vulnerability (Foreshadow/L1TF).
We have avoided the need for other reboots over the last year by using the Xen livepatching functionality to update running code. Because this specific new flaw requires the application of updated microcode at boot-time, and because its Xen code updates cannot be patched into a running system, that was not possible in this case. We plan to continue avoiding reboots on our end as much as possible in the future.
As part of the fix for Foreshadow/L1TF, we are also being forced to disable hyperthreading (SMT) globally for our machines. This means that customer virtual cores can no longer be assigned to exclusive hyperthreaded cores. However, our systems have such low overall CPU usage that customer VDSes with heavy CPU usage on specific virtual cores usually have them assigned to physical cores with very light-usage neighbor threads by the Xen scheduler already, essentially turning virtual cores into full physical cores. As a result, we expect (and have so far observed) minimal, if any, performance impact from the switch away from SMT. If you do notice reduced performance after the maintenance or see unusual CPU usage on your VDS (now also visible through newly-added CPU usage graphs on the "Server usage" page), please contact us, and we can explore a possible move to a different physical machine.
We also recommend that all customers take this opportunity to apply the latest security updates from their OS distributions. The vendors for all currently-supported operating systems have released patches for the new vulnerability.
For the reboot, your VDS will need to be shut down for approximately 15-30 minutes. We will attempt to gracefully shut it down on our end through Xen, but sometimes this doesn't work perfectly, so we advise that you turn off applications that might write to disk before the maintenance event. If you are running Windows 2012 R2, please also note your VDS may need additional reboots to work properly, due to a bug within Xen related to how it boots up (that only occurs during bootup) -- our system will attempt to detect when this is the case and perform the extra reboot operations, but we recommend checking afterward yourself, as well.
---------------------------
Servers should be back up; server port order is off at the moment and will be adjusted some time late this weekend.
Upcoming move to a new machine after 3am CST on Nov. 18
Dec 17 2018 12:26:40 PM PT Starting at approximately 3am CST on Nov. 18, we plan to begin moving all customers off of the machine hosting your server, to other machines at the same location. We are doing this in order to decommission this old machine, which is nearing the end of its useful life because it is no longer acceptably fast or power-efficient.
We expect the process to take at least a few hours, as we move the servers one by one. At some point in the morning, you will see your VDS go offline for a period of time as it is moved. The amount of time will depend on how much hard drive space you have used, from as little as 5 minutes to as much as a couple of hours. You will be able to monitor the progress of the move through the "Server control" page in the control panel.
If the timeframe of this maintenance event will not work for you, please let us know ASAP. We can individually move your server earlier than the maintenance event, for instance, if that works better for you.
I think that should say December 18th, i.e. tomorrow as I post this.
MS move completed; MS restarted :cheers:
Some attacks in Chicago causing null-routes
May 20 2019 10:13:24 PM PT We have seen a few attacks in Chicago today that have forced our system to implement emergency null-routes against the target customers' IP addresses. Null-routes are always a big deal for us to see because they mean that some clients will have experienced short bursts of partial packet loss (generally lasting for 5-10 seconds with each null) -- and we know how much even short periods of packet loss can hurt a game server or other latency-sensitive streaming-type service.
Since DDoS attacks are always getting larger, we have been exploring upgrade options in Chicago since last October, when it became clear that INAP (our primary upstream at most locations) has an inadequately-sized network and has no immediate or even long-term plans to upgrade it or otherwise improve their rudimentary systems for dealing with attacks (our own mitigation systems are highly robust). We have quotes in from other upstreams that we already partner with, and we chose an upgrade path some time ago. For the last few weeks, we have been waiting for INAP, which is also our facilities partner at this location, to work on some physical components of the upgrade.
We will continue to push the upgrades through as much as we can from our end. After upgrades are complete, this location will have significantly higher capacity and much more resistance to attacks.
We also have other upgrades in the pipe for late this year/early next year that will further increase our internal and external capacity in Chicago and other locations (including Seattle). We are always considering and implementing more upgrades!
Upcoming Xen upgrade with reboot @ 1:30am Eastern Time on June 20
You may need to reboot your servers depending on your location. I had to restart the MS processes.
Upcoming router maintenance early 8/13 that might cause brief blip
Aug 12 2019 02:45:35 PM PT
We will be shifting traffic to our secondary router in Chicago between 1am and 4am CDT on Tuesday, 8/13, in order to take the primary router offline and perform upgrades to it, including installing new 100G ports and upgrading its internal connection to our aggregation switch. We expect for this to cause a brief (few seconds-long) blip in connectivity for your service at this location, though there is the possibility of a slightly longer downtime if there is more than an expected amount of reconvergence.
Facility power maintenance on 8/16 and 8/20 will cause downtime
Aug 14 2019 12:01:10 AM PT
We have been notified by the facilities provider in Chicago (Equinix) that they will be replacing half of their Automatic Static Transfer Switches on Friday, August 16, between 10pm CDT and 6am of the next day, and the rest of the transfer switches on Tuesday, August 20, between 10pm CDT and 6am of the next day.
I will note that this maintenance will shut down the Master Server, so no hosting or playing will be possible during these maintenance windows.
Update @ 2:58pm CDT on 8/15: We have asked the site to migrate the network switches that will be impacted by the Friday night maintenance to alternate power in advance -- specifically, between 7am and 9am CDT on Friday morning, as that is a far lower-usage time than 10pm. Most customers will see a short (few-minute-long) connectivity blip during this window on Friday morning as a result.
We plan to schedule a maintenance for next week to move those, and other switches, back to the power feeds that have already been fixed. This will impact all customers and will likely occur on Tuesday morning at 7am. We will update this event with more specific details.
Apparently the MS was down for a short while due to an event from our hosting company --
QuoteBrief packet loss bursts due to a few attacks
May 14 2020 04:17:19 PM PT High-PPS spoofed/reflected attacks against several IP addresses in Chicago have caused a few bursts of elevated packet loss for some clients in Chicago in the last hour.
We are currently in the process of turning up a new 100G upstream link and updating our redundant router. That should help to reduce the impact of further attacks.
Last night the MS was shut down for some reason, but should be back up now :cheers:
Sorry if someone was affected by the downtime.
Elevated attack activity today
Jun 16 2020 04:22:10 PM PT
We have seen several particularly large attacks in Chicago today that have saturated our multiple 100G upstream links and/or our 20G link to the Equinix exchange there. These are botnet attacks involving large numbers of newly-compromised devices, and we are sending out abuse notifications and diligently processing them as we normally do.
We are also working on the latest upgrade to our infrastructure in Chicago, which will bring up our capacity to the Equinix exchange from 20G to 100G. This is our smallest inbound upstream link at this location, and the upgrade should prevent it from being saturated by all but the very largest attacks for quite some time. We will post a separate maintenance notification about the physical upgrade in the near future; that process should not involve more than a very short period of downtime and routing reconvergence, when it happens in the early morning on a later date.
Upcoming platform upgrade with reboot shortly after 1:30am Central Time on December 15 and/or the next day
Dec 14 2020 12:25:33 AM PT We are planning to reboot the machine hosting your VDS at approximately 1:30am Central Time on December 15, in order to apply critical platform updates that address serious vulnerabilities. We are also scheduling a backup maintenance window for the next day at the same time; this backup will be used if there are problems on the first day (such as an issue with packaging the update, or last-minute bugs that are found in the code) such that we have to try again.
We have generally avoided the need for reboots over the last several years by invisibly using livepatches to update running code. However, the new security patches (and microcode updates that also need to be applied) cannot be applied to a running system.
For the reboot, your VDS will need to be shut down for approximately 15-60 minutes. We will attempt to gracefully shut it down on our end through Xen, but on occasion this doesn't work perfectly, so you may consider turning off applications that might write to disk before the maintenance event. If you are running Windows 2012 R2 or later, please also note your VDS may need additional reboots to work properly, due to a bug within Xen related to how it boots up (that only occurs during bootup) -- our system will attempt to detect when this is the case and perform the extra reboot operations, but we recommend checking afterward yourself, as well.
While we don't expect any problems related to the reboot of your machine, most machines here have not been rebooted in over a year, and there is always a remote possibility that hardware could fail as a result of the reboot. We recommend performing an extra external backup of irreplaceable files today, just to be completely safe.
All servers (in all platforms) have been restarted -- if we're missing any, let us know :cheers:
Hello {4Я}1ИCΘ6И17Θ,
At the end of this month, SWBFSpy will be shut down because it is out of maintenance, and we want to ask you to please fix it again.
It would make some players like me very sad because we play the game almost every day and we find Spy better than the Steam version. There has been enough sadness in this world, and if SWBFSpy shuts down it will make me even sadder; it has been my favorite game since I was a kid.
I would be very happy if you can fix it please.
Greetings {AR}Starkiller21
Quote from: {AR}STARKILLER21 on April 01, 2022, 12:06:23 PM
Hello {4Я}1ИCΘ6И17Θ,
At the end of this month, SWBFSpy will be shut down because it is out of maintenance, and we want to ask you to please fix it again.
It would make some players like me very sad because we play the game almost every day and we find Spy better than the Steam version. There has been enough sadness in this world, and if SWBFSpy shuts down it will make me even sadder; it has been my favorite game since I was a kid.
I would be very happy if you can fix it please.
Greetings {AR}Starkiller21
I pay the bills...SWBFspy is not shutting down. :)
Where did you hear such false information?
Quote from: Led on April 01, 2022, 03:31:30 PM
I pay the bills...SWBFspy is not shutting down. :)
Where did you hear such false information?
Someone posted an April fools on discord, don't worry about it. 😉
Quote from: {PLA}gdh92 on April 02, 2022, 02:40:14 AM
Someone posted an April fools on discord, don't worry about it. 😉
Omg right hahaha. How could I forget it... :censored: that was badass :rofl: :rofl: :tu: