Can Ransomware Infect Backups?

If you’ve been paying any attention to the field of cybersecurity in the last several years, you’ve probably asked yourself whether ransomware can infect backups. This particularly nasty form of malware encrypts the files and applications on your computer, then demands a hefty sum of money before you can regain access.

Ransomware has the potential to bring your operations to a shuddering halt—which means that it’s rapidly becoming the preferred attack vector for malicious actors looking to make a quick buck. In 2019, hundreds of U.S. government agencies, hospitals, and educational institutions were hit by ransomware attacks, with an estimated cost of $7.5 billion.

The good news is that backups are one of the best strategies you have to defend your organization against ransomware. The bad news is that backups aren’t themselves immune to ransomware—if you don’t protect them well enough, your backups could become encrypted along with the files themselves.

In this article, we’ll go over everything you need to know about ransomware and backups: both how ransomware can infect backups, and what you can do to protect your backups from ransomware.

How Ransomware Attacks Work

Ransomware attacks can spread in a variety of ways:

  • Malicious hyperlinks or email attachments
  • “Drive-by downloads” from visiting a compromised website
  • Attacks through Microsoft’s Remote Desktop Protocol (RDP)
  • USB drives and other removable media
  • Exploits and security vulnerabilities in networks and web servers

Once present on your system, ransomware begins encrypting your files and applications, preventing you from accessing them without the associated decryption key. To heighten the urgency, the attacker will give you a deadline by which you need to pay the ransom, which can run to hundreds or thousands of dollars. (Depending on the generosity of the attacker, you may or may not receive the right decryption key after paying this fee.)

Theoretically, backups should help you survive a ransomware attack without too much disruption. Even if the contents of your system are encrypted, you can simply restore the non-encrypted versions from backup, keeping downtime to a minimum. As we’ll discuss in the next section, however, backups aren’t necessarily a foolproof solution for ransomware.

How Ransomware Targets Backups

Many ransomware authors now craft sophisticated attacks specifically intended to thwart the strategy of keeping backups:

  • Local backups: Backups that are locally connected to an infected computer can easily fall prey to ransomware themselves. Once the ransomware is present on your system, it can spread to external hard drives or file servers that are connected to your computer, as well as other computers on the network.
  • Cloud storage: You might think that the cloud keeps your files more secure by storing them on a different server, but this is very often not true. Cloud storage solutions such as Dropbox and Microsoft OneDrive are usually set to automatically synchronize their files with their local versions on your computer. This means that once your local files are encrypted by ransomware, the encrypted versions may also propagate to the cloud.
  • System Restore: Windows’ System Restore feature helps you fix crashes and problems by reverting to a previous working state. However, System Restore only preserves the drivers, settings, and system files that Windows needs to run, not your own personal files—which makes it of limited use during a ransomware attack. What’s more, smart attackers are developing ransomware that deletes the automatic backups that System Restore depends on, such as restore points and shadow copies.

How to Make Your Backups Ransomware-Proof

If ransomware can infect backups, then what steps can you take to protect backups from ransomware attacks?

1. Keep multiple local backups

The key to defeating ransomware is to diversify your local backups as much as possible. Ideally, you should maintain at least two different local backups of your files and applications on multiple forms of storage media (local drives, file servers, tape drives, etc.).

In addition, at least one backup copy should be isolated from your network and stored offsite. This is not only a good practice for ransomware, but also protects you from natural disasters such as fires, floods, and storms.
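One way to apply this advice is a simple rotation script. The sketch below is illustrative only (the function name and snapshot-naming scheme are our assumptions, not a production backup tool): it copies a source folder to several destinations as timestamped snapshots and prunes the oldest copies.

```python
import shutil
import time
from pathlib import Path

def rotate_backup(source: str, destinations: list[str], keep: int = 3) -> list[Path]:
    """Copy `source` to each destination as a timestamped snapshot,
    keeping only the newest `keep` snapshots per destination."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    created = []
    for dest in destinations:
        target = Path(dest) / f"backup-{stamp}"
        shutil.copytree(source, target)  # full copy of the source tree
        created.append(target)
        # Timestamped names sort chronologically, so pruning is a slice.
        snapshots = sorted(Path(dest).glob("backup-*"))
        for old in snapshots[:-keep]:
            shutil.rmtree(old)
    return created
```

In practice you would point the destinations at different storage media (a local drive and a file server, for example) and schedule the script to run automatically.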

2. Protect your cloud backups

If you want to use the cloud as part of your ransomware defense strategy, make sure that you have the right solution in place. “Cloud storage” offerings keep your data in the cloud, but they don’t necessarily include versioning features that allow you to revert to previous versions of a file.

“Cloud backup” solutions, on the other hand, should have built-in file versioning, as well as additional features such as strong encryption and status reports. Many cloud backup solutions also provide automatic malware scanning to detect and neutralize threats.

3. Prepare yourself

The better prepared you are for a ransomware attack or other cyber disaster, the more likely you are to come out unscathed on the other side. Every business should have a clear, well-developed disaster recovery plan that you test on a regular basis. Determine what level of data loss you’re comfortable with (i.e. the maximum recovery point objective), and then determine how often you need to make backups to meet this target.
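The relationship between your recovery point objective and your backup schedule can be expressed directly. This is a rough model under a simple assumption (worst-case loss is roughly the gap between backups plus the time one backup takes to finish); the function name is ours:

```python
from datetime import timedelta

def max_backup_interval(rpo: timedelta, backup_duration: timedelta) -> timedelta:
    """Longest allowable gap between backup starts: the worst-case data
    loss is roughly the interval plus the time a backup takes to complete,
    so the interval must leave room for both inside the RPO."""
    interval = rpo - backup_duration
    if interval <= timedelta(0):
        raise ValueError("RPO is too tight for this backup duration")
    return interval

# If you can tolerate losing at most 4 hours of data and a backup takes
# 30 minutes, you should back up at least every 3.5 hours.
```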

Final Thoughts

While ransomware can infect backups, the good news is that you can lower this risk and protect yourself by taking some common-sense precautions. Looking for a robust cloud backup solution that can help defend you from ransomware? Get in touch with NetDepot’s team of experts to develop a smart ransomware strategy for your business.

NetDepot’s Atlanta Dedicated Server Hosting

Congratulations—you’ve outgrown your shared hosting plan and are considering investing in a dedicated server.

This probably means that your business is growing, and you are starting to feel some performance pain. Maybe your website is loading too slowly—despite good maintenance on the front end—or jams at peak times are creating other headaches and even lost revenue or critical data. You might be concerned about better data security and backup, more efficient storage, or wanting to use software or configurations not supported by your shared hosting plan.

Some common reasons businesses switch to a dedicated server

  • You anticipate rapid growth and increased traffic. Timing can be tricky because you want increased revenue before taking on the additional server costs, but the best time to make the transition is well before your resources are overwhelmed. Depending on the kind of content you host, you should plan for peak traffic 5 to 30 times your average daily traffic.
  • Page load speed is getting bogged down. Slow page loads lead to bounces and low engagement rates. If you’ve optimized everything you can on the front end, it might be time to look at a dedicated server, especially if you are losing sales or leads. Page loads are a critical component of the digital experience, and a dedicated server can give you the speed you need to compete.
  • Security is business-critical. If you handle sensitive records or store user data, a breach could put you out of business. Cybercriminals target small and medium businesses in the vast majority of their attacks, and 60% of SMEs are out of business within 6 months of a breach. A dedicated server can provide enterprise-grade security and help you keep your data and operations safe.
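The peak-traffic rule of thumb above can be sketched as a quick capacity estimate. This is a deliberately crude model (it spreads daily traffic evenly over 24 hours, and the function name and 30× default are our illustrative assumptions):

```python
def peak_capacity(avg_daily_requests: int, peak_multiplier: float = 30.0) -> float:
    """Rough requests-per-second target: average daily traffic spread
    over 24 hours, scaled by the expected peak-to-average multiplier."""
    avg_rps = avg_daily_requests / 86_400  # seconds in a day
    return avg_rps * peak_multiplier

# A site averaging 864,000 requests/day (10 req/s) should plan for
# roughly 300 req/s at the top of the 5-30x range.
```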

Dedicated hosting unlocks the power of your digital infrastructure 

Digital services and online customer experiences are the backbone of business growth, allowing businesses to innovate and develop new revenue streams. But if a website performs poorly or doesn’t have enough power behind it, the result can be a poor user experience, limited functionality, and lost revenue.

Dedicated hosting can give you the power and control you crave, along with enterprise-class security. It also significantly reduces your personal risk and eliminates the need to keep engineers on call, since you are leasing rather than owning and maintaining your own servers.

It is the best of both worlds: you have your own server and complete control, but since you are backed up by the greater capacity and performance of a data center, it is easier to scale your business quickly, and to leverage leading-edge technology to provide a world-class experience.

Dedicated hosting is the top-shelf solution for most enterprises today: hosting your own servers is both costly and risky, especially when a dedicated server gives you the same control without the overhead.

Datacenter location is critical to cybersecurity

The best technology is only as good as the people and infrastructure behind it, and this is why we proudly showcase NetDepot’s SSAE 16 certified data center in the heart of Georgia’s “Silicon Peach.” Atlanta is a beautiful and innovative city that has made considerable investments in infrastructure and smart technologies, and is an optimally located interconnection point for the southeastern United States. Two of the largest fiber optic trunk lines in the U.S. intersect in the metropolitan Atlanta region, as does the research network Internet2, making this location a smart choice for future-proofing your digital infrastructure.

A gateway to the Southeast 

As the ninth-largest U.S. metropolitan area and a critical communications hub, Atlanta is an ideal colocation market thanks to its reliable infrastructure and low power costs. Atlanta supports a healthy startup ecosystem, and thousands of companies in fintech, biotech, cybersecurity, and mobility call it home. In recent years, the region has seen explosive growth in information and communication technology and has become a key B2B technology hub, hosting the third-largest concentration of Fortune 500 companies among major U.S. cities.

Dedicated hosting in Atlanta diversifies risk

Atlanta not only acts as a regional gateway to the Southeastern United States, but is also a major East Coast fiber convergence point and cross-connect for the long-haul carriers present in our carrier hotel. In addition to competitive, low-cost, and reliable power, Atlanta benefits from a low incidence of natural disasters and gives vendors a location to diversify risk exposure on the East Coast.

What does this mean for you? 

NetDepot’s Atlanta site is fiber-connected and lightning fast, making it an ideal site for any business that can’t afford downtime. With a guaranteed uptime of 99.9% and premium disaster recovery protocols, our dedicated servers are SSAE 16 certified, HIPAA compliant, and a reliable option even for hospitals and medical facilities with live operations.
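An uptime percentage translates directly into an allowable downtime budget. This small sketch (the function name is our illustration, not a NetDepot calculator) shows the arithmetic:

```python
def downtime_allowance(uptime_pct: float, period_hours: float = 24 * 365) -> float:
    """Hours of downtime per period implied by an uptime guarantee."""
    return period_hours * (1 - uptime_pct / 100)

# A 99.9% uptime guarantee still allows roughly 8.76 hours of downtime
# per year, which is why disaster recovery protocols matter on top of it.
```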

Some of our key features include:

  • Physical security and around-the-clock staffing at our state-of-the-art facility
  • Multiple 10 Gbit uplinks for high-speed network applications
  • Redundant network design for the most robust, fast and secure private and public networks
  • Advanced automation for greater customer control and power
  • Buydown options for lower monthly costs
  • Disaster recovery as a service


In order to meet the demands of digital transformation, more businesses have shifted to data centers for the greater flexibility, security, and power they provide. NetDepot is a debt-free and financially secure operation with direct access to management and world-class customer support. Our owners and staff pride themselves on the strength of our network, premium disaster recovery protocols, and our epic customer support.

For twenty years, NetDepot has provided leading-edge self-managed servers and cloud servers to a wide variety of corporate and enterprise clients. Today it is a world-class provider with datacenter locations in Atlanta, Houston, NYC, and San Francisco, and plans to grow its footprint. As a profitable company with a strong financial and managerial foundation, NetDepot is the perfect partner for your growing business. Contact us at 1-844-25-CLOUD today to see how NetDepot can help take your business to the next level.

Disaster Recovery in Atlanta with NetDepot

Every business thinks that it won’t suffer a disaster—until disaster actually strikes. The good news is that through careful planning and forethought, you can minimize the impact of a catastrophic event on your business. If you need a service for disaster recovery in Atlanta, continue reading to learn about your options.

What is Disaster Recovery?

The term “disaster recovery” refers to the preparations and practices that enable businesses to quickly recover from an unforeseen catastrophic event, insulating themselves from the negative effects and repercussions. Disasters may have a wide range of causes, including:

  • Natural disasters (floods, tornadoes, earthquakes, hurricanes, extreme heat and cold, etc.)
  • Power outages and equipment failure
  • Cyber attacks and data breaches
  • Human error

In recent years, disaster recovery has received a growing amount of attention due to many high-profile outages—in particular, the rise of devastating cyber attacks and data breaches that expose confidential and mission-critical data to malicious actors and unauthorized third parties.

Disaster recovery is closely related to, yet distinct from, the objective of business continuity, which seeks to keep your organization up and running during and after a disaster. By recovering from disasters more quickly, you can minimize the disruption to your business, ensuring that employees can still fulfill their functions and customers can still obtain your products and services.

Why Do You Need a Disaster Recovery Plan?

According to a 2018 survey, 95 percent of organizations have some form of disaster recovery plan, including 90 percent that have plans for data integrity and backups. However, it’s likely that a significant portion of these plans are destined to fail when a real disaster strikes: 23 percent of organizations have never tested their disaster recovery plan, and just 34 percent test their plan at least once per quarter.

These statistics are even more shocking when you consider the potentially catastrophic effects of a disaster on your business. According to the U.S. Federal Emergency Management Agency (FEMA), 40 to 60 percent of small businesses never reopen their doors after suffering a disaster.

Even a relatively minor IT disaster can have a major impact on your financial bottom line. Research and advisory firm Gartner estimates that the average cost of IT downtime for businesses is $5,600 per minute, or more than $300,000 per hour. What’s more, a full third of organizations say that an hour of IT downtime would cost their business over $1 million.
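The Gartner figure cited above makes the arithmetic easy to check. This one-liner (our illustration, using the per-minute average as its default) scales an outage duration into an estimated cost:

```python
def downtime_cost(minutes: float, cost_per_minute: float = 5_600) -> float:
    """Estimated outage cost at the oft-cited Gartner average of
    $5,600 per minute of IT downtime."""
    return minutes * cost_per_minute

# One hour of downtime at $5,600/minute comes to $336,000 -
# consistent with the "more than $300,000 per hour" figure.
```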

You might be under the impression that cloud computing is more resilient than on-premises systems. Many organizations, for example, store their primary backups on-premises while maintaining duplicate copies of their data in the cloud.

However, moving to the cloud won’t offer you foolproof protection from a disaster. Even cloud services can go down unexpectedly, leaving you in the lurch.

Consider the cautionary example of Amazon Web Services, which suffered a massive outage in early 2017 that brought down websites such as Trello and Quora, along with all sites built on the web development platform Wix. Amazon later reported that human error was responsible: an employee accidentally took down more servers than intended.

There’s a critical lesson to be learned here: the cloud should be an important part of most organizations’ disaster recovery strategies, but it can suffer from the same problems and points of failure. If you’re running software and storing data in the cloud, your disaster recovery plan needs to include a different, secondary cloud provider so that you can continue operations without a hiccup.

Every disaster recovery plan should include concerns such as:

  • A list of the key personnel who will be involved in disaster recovery, including backups for each role if the primary employee is unavailable.
  • A plan for how to communicate critical information to employees, customers, and vendors.
  • A complete inventory of the hardware and software in your IT environment.
  • A backup site for employees to work from if your primary office is damaged or destroyed.
  • A review of your service-level agreements (SLAs) with vendors and suppliers to ensure coverage in the event of a disaster.
  • A description of your theoretical loss tolerance following a disaster. How long of a recovery time would be acceptable? How much data could you afford to lose—could you go back 1 minute, 1 hour, or 1 day?

If you need help formulating a disaster recovery plan, speak with a qualified IT managed services provider.

Why a Disaster Recovery Center in Atlanta?

We’ve gone over the devastating impact that a disaster can have on your business, and the need for every organization, large and small, to have a disaster recovery plan. But why should you choose to locate your disaster recovery infrastructure in Atlanta in particular?

Atlanta is the ninth-largest metropolitan area in the United States, as well as a regional gateway to the Southeast. In particular, the city has a growing reputation as a high-tech hub with a top-tier skilled IT workforce, which is why it’s sometimes referred to as the “Silicon Peach.” The state of Georgia has earned accolades such as “Top 5 States for Innovation” and “Top 10 States for High-Tech Job Growth” in recent years.

IT companies such as MailChimp and BitPay are headquartered in the greater Atlanta area. That’s not to mention Fortune 500 businesses such as UPS, The Home Depot, Delta Air Lines, and Coca-Cola. Major tech firms like Amazon, Google, IBM, Microsoft, Oracle, and Twitter have also opened offices in Atlanta.

Atlanta and the state of Georgia provide a business-friendly climate for organizations to call home. Georgia is sixth in the Forbes “Best States for Business” ranking, for reasons such as low labor costs and low corporate tax rates.

What’s more, Atlanta is a major fiber convergence point and cross-connect for long-haul communications. Two of the largest fiber optic trunk lines in the U.S., North/South and East/West, both intersect in the metropolitan Atlanta region, as well as the research line Internet2. Massive carrier hotels such as 56 Marietta in downtown Atlanta offer unparalleled space, security, power, and convenience.

The greater Atlanta area is well-connected in general, acting as a major transport hub for the region and the country as a whole. This includes Hartsfield-Jackson Atlanta International Airport, the world’s busiest airport, as well as many highways and rail connections.

Finally, Atlanta also has geographic benefits that make it a smart location for a disaster recovery center. The city has a relatively low probability of extreme weather events and natural disasters, decreasing the likelihood of an unexpected disruption. Strategically located on the Eastern Seaboard but away from the coast, Atlanta is less vulnerable to catastrophic acts of nature, such as hurricanes and earthquakes, that can bring down your business operations at a moment’s notice.

These factors all combine to make Atlanta a very appealing option for diversifying your risk exposure along the U.S. East Coast.

NetDepot’s Disaster Recovery Service

Whether you manufacture widgets or run an e-commerce store, disaster recovery is essential in order to minimize the financial and reputational impact of a disaster and preserve the continuity of your business.

The problem is that you might not be an expert in disaster recovery. You might not know how to tell your RPOs from your RTOs, and you aren’t sure about the best way to protect your mission-critical data and software during and after a disaster.

Enter disaster recovery as a service. “DRaaS” is a term for solutions offered by IT managed services providers that help you recover quickly from a disaster. First, your IT environments and data are replicated on a third-party server to protect them in the event of a catastrophe. If a catastrophe does occur, end users are redirected to these backup servers while the DRaaS provider works to restore your operations, transferring your environments and data back to you.

DRaaS offerings are invaluable to organizations of all sizes and industries. They allow you to focus on what you do best—running your business—while leaving the technical issues and questions of disaster recovery in the hands of the pros.

NetDepot is a leading provider of premium cloud and dedicated server solutions, with data centers located across the United States. Our DRaaS offerings include benefits such as:

  • Lightning-fast recovery times: In the moments immediately after a disaster, every second counts. NetDepot ensures that our customers have access to the data and systems they need to get back up and running as soon as possible.
  • Ransomware protection: Ransomware is a leading cause of IT disaster, encrypting your data and applications and holding them “hostage” until you pay a hefty ransom. 1 in 5 small and medium businesses have fallen victim to a ransomware attack. NetDepot leverages state-of-the-art technology for quickly and easily defending against and recovering from ransomware.
  • Cloud interoperability: With an array of cloud providers to choose from, every organization’s disaster recovery plan will be different. That’s why it’s so important for your choice of DRaaS solution to offer cloud interoperability. NetDepot allows you to back up and/or restore from multiple cloud providers and virtual machine services, including Amazon Web Services, Microsoft Azure, VMware, Hyper-V, and Nutanix.
  • Flexible and affordable plans: NetDepot’s DRaaS offerings were built with flexibility and affordability in mind, and they can easily scale as your business grows and evolves. We have a DRaaS solution for every budget and every need.


Quick reaction times and a solid backup plan are essential for disaster recovery. If your business is located in or near the metropolitan Atlanta area, you need a DRaaS solution with minimal latency, in a secure and stable location, offered by an IT managed services provider with decades of experience in the field.

If you’re looking for disaster recovery in Atlanta, NetDepot can help. NetDepot’s DRaaS offerings are ideal for any organization that wants to make its business operations stronger and more resilient in the face of IT catastrophe. To learn how we can protect your business and help you recover from disaster, contact the NetDepot team today for a chat about your needs and objectives.

How To Choose The Right NYC Private Cloud Service

As the city that never sleeps, NYC demands consistently fast network speeds and always-on security. Thus, private cloud environments are highly sought-after throughout the NYC area for their superior uptime and service availability. Since private cloud environments are dedicated to specific users, the sheer control over the system is viewed as a major advantage by most companies.

Of course, not all NYC private cloud services are created equal. Below is more information to help you make an informed decision when searching for a private cloud provider in the NYC area.

What Is The Private Cloud?

In a private cloud environment, users do not have to share resources with third parties. This means a company has its own dedicated cloud environment, giving it direct control over everything within it and enabling it to easily meet regulatory requirements and internal security standards. Private cloud environments are also advantageous for businesses with unpredictable or frequently changing computing needs.

When properly structured and implemented, a private cloud environment offers the same self-service benefits and scalability as other cloud environments. Users will also be able to configure virtual machines and other equipment as necessary while optimizing and utilizing any computing resources quickly and effectively. The implementation of certain tools will even allow them to accurately track their computing usage and help them ensure their cloud is not too extensive or too restrictive.

When compared to a public cloud environment, it’s easy to see the benefits a private cloud environment has over shared clouds. First and foremost, many businesses choose private simply because of its isolation. The isolation equates to inherently improved security and more control over the data within the environment. Secondly, a private cloud will offer better performance than a public cloud since all resources are dedicated to a single business. Finally, there is no limit to the customization, allowing an organization to configure their cloud exactly how they need.

Why Choose a Private Cloud Environment?

There are many benefits that accompany a private cloud environment, including:

  • Cost: Total Cost of Ownership (TCO) is a major factor when evaluating a cloud environment, and better control over costs is one of the key benefits private cloud environments offer. Private cloud solutions can be less expensive than public cloud services thanks to more optimized infrastructure and resource utilization.
  • Efficiency: When it comes to control, no other solution beats a private cloud environment. Whether hosted on-site or at a third-party datacenter, the organization that pays for the private cloud will have total control over the infrastructure and data within it. This allows the organization to monitor and optimize on an advanced level while predicting and avoiding bottlenecks, downtime, and other setbacks.
  • Customization: When it comes to IT infrastructure, there is no such thing as a one-size-fits-all solution. The level of customizability a private cloud environment offers is simply unmatched. Regardless of your business or technical requirements, the organization will be able to change their storage and networking setup to perfectly match their needs.
  • Security: When compared to a public cloud solution, the privacy and security of a private cloud environment is simply unmatched. Any data stored within is strictly confidential and no other organization on the server will be able to access it or impact it. With NetDepot, the physical infrastructure itself is stored in an extremely secure datacenter, further ensuring no physical tampering.
  • Compliance: Given the improved customization and security of a private cloud environment, a private cloud is often preferred (if not required) for businesses that operate in industries where national or even internal policies require special data handling practices. This especially applies to businesses in the health sector where confidential patient information must be carefully stored.
  • Continuity: In today’s constantly changing markets, many cloud service providers have come and gone. NetDepot, however, remains strong. After 20 years in business, NetDepot remains in good standing with no debt and plans to continue gradually expanding with new datacenters being added across the country. So, you can rest assured NetDepot will be here for years to come.

What Is Latency and Why Does It Matter?

Network latency is the amount of time it takes for a request to reach its destination and for the response to return. Simply put, network latency describes the total time a request takes to complete the trip from browser to server and back to the browser. The lower the network latency, the better.

Meanwhile, “bandwidth” is another term you’ll hear used when talking about latency. If you picture requests traveling through pipes, the bandwidth tells you how wide or narrow that pipe is. As you can imagine, a narrower pipe cannot allow as much data through at once as a wider pipe, so more bandwidth is considered better.

In theory, data is capable of traveling across optical fiber network cables at the speed of light. However, data usually travels much more slowly. The speed at which data travels depends on many factors, and all of these factors end up impacting how quickly users are able to interact with a cloud environment. If a network connection lacks available bandwidth, for instance, the data people are trying to access won’t be able to travel. Instead, it will queue up, as pieces of data wait their turn to travel across the line.

In some instances, service providers may have networks set up that do not use optimal network paths. That means data could be sent hundreds (or even thousands) of miles off-route, slowing down its trip to the destination. Data delays and detours like these result in increased network latency, which means pages load slower, files take longer to download, and every other activity related to the network isn’t as fast.

The industry measures network latency in milliseconds with 1,000 milliseconds equaling 1 second. On paper, a few thousandths of a second may not sound like a big deal, but even just a tiny bit of network latency will have a ripple effect across an organization. Additionally, if the cloud hosts customer-facing information, like your website, any kind of network latency can greatly impact bounce rates.
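The interplay between latency and bandwidth described above can be captured in a simple back-of-the-envelope model. This sketch (function name and parameters are our illustrative assumptions; real networks add queuing, protocol overhead, and multiple round trips) adds one round trip of latency to the time needed to push the payload through the available bandwidth:

```python
def request_time_ms(payload_bytes: int, rtt_ms: float, bandwidth_mbps: float) -> float:
    """Rough time for one request: one round trip of latency plus the
    serialization time of the payload over the available bandwidth."""
    transfer_ms = payload_bytes * 8 / (bandwidth_mbps * 1_000_000) * 1000
    return rtt_ms + transfer_ms

# A 1 MB page over a 100 ms round-trip link at 10 Mbps takes about
# 900 ms, so both the pipe's width and its length matter.
```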

How to Minimize Latency

When it comes to minimizing network latency, the most logical first tactic is to limit how many variables affect the speed at which data moves. No one has complete control over how data traverses the internet, but proper data distribution, high-capacity network ports, and good routing practices can all help minimize network latency within your cloud environment.

However, you can effectively “implement” these practices as an end user by selecting a cloud service provider that takes care of them for you. For a business in New York City, NetDepot’s New Jersey location helps ensure the lowest latency for your business and its users by following industry best practices and routing data along the shortest, most optimal path.

When you choose a private cloud server from NetDepot, you are guaranteed 100% uptime. NetDepot will ensure that your organization is always protected from server issues, with automated monitoring, priority support around-the-clock, and many other failsafes.

NetDepot’s Cloud Services

With more than 20 years of experience in the industry, NetDepot is a world-class Infrastructure as a Service (IaaS) provider. With cutting-edge cloud servers, NetDepot is capable of servicing corporate and enterprise clients from around the world. With datacenters in Houston, Atlanta, NYC, and San Francisco, clients enjoy the quickest servicing available.

All content you host on your private cloud server will be mirrored on at least one other server. This means the private cloud environment provides the same fail-safe redundancy you’d expect from any cloud hosting platform while also giving you the power and control of a dedicated hosting environment.

These services are accompanied by a dedicated support manager, dedicated sales managers, and priority 24/7 support to keep your private cloud environment up-and-running optimally every day of the week.

When it comes to the NYC-area infrastructure specifically, NetDepot has private backend connectivity along with multiple 10 Gbit uplinks, redundant network design, a redundant power setup, and a completely secured datacenter to ensure all of your servers stay safe, protected, and always functional. Plus, with additional datacenters in Texas, Georgia, and California, NetDepot is able to offer excellent data distribution and routing no matter where your users are located.

Interested in learning more about how NetDepot can help your business improve its performance? Get in touch with a team member for more information.

NetDepot’s Houston Data Center

Looking for a data center in Houston? You’re not alone. Thanks to its geography, climate, strong economy, and access to tech talent, Houston is an excellent location for a data center facility.

That’s why NetDepot’s Houston data center, which we operate in partnership with our sister company TRG Datacenters, is ideally located. Below, we’ll discuss everything you need to know about our new data center in Houston: the reasons for our choice of location, the disaster preparation that we’ve enacted, and the features that our Houston data center customers can enjoy.

Why a Data Center in Houston, Texas?

You might be wondering: “Why a data center in Texas?” or “Why a data center in Houston?” Below, we’ll discuss the factors that went into NetDepot’s decision to open a Houston data center.

Why a Data Center in Texas?

First, Texas plays host to some of the world’s top energy and technology companies. After New York and California, Texas is the U.S. state with the third-highest number of Fortune 500 headquarters. Giant multinational firms such as AT&T, Texas Instruments, and ExxonMobil have chosen to locate their headquarters in the Dallas–Fort Worth region’s “Silicon Prairie.”

Texas is also home to many excellent public universities, including the University of Texas system, Texas A&M, Texas Tech, and the University of Houston. With a host of world-renowned faculty and research institutes, Texas universities consistently produce top-shelf tech talent.

What’s more, Texas offers a highly business-friendly climate, with an economy that would make it the 10th-largest in the world as an independent country. The state does not have corporate or personal income taxes, and the costs of land and energy are relatively low. Both Forbes and CNBC ranked Texas the second-best state for business in their 2019 rankings.

Why a Data Center in Houston?

Given the facts above, opening a Texas data center sounds like a great idea. But why a data center in Houston in particular?

For one, the city of Houston is a major player both regionally and nationally. Houston has a population of 2.3 million people and a metropolitan area with 7 million, making it the largest city in the Southern U.S. The city acts as an economic and cultural hub for the region, attracting residents from across the South and around the world.

The economy of Houston is also very strong. Once based primarily on the energy industry, Houston’s economy has rapidly diversified in recent years. Healthcare, manufacturing, aeronautics, and biomedical research companies all now call Houston home—not to mention NASA’s Johnson Space Center. Twenty-two Fortune 500 companies are headquartered in the Houston area, the fourth-highest number in the U.S.

Houston’s geography and climate also make it a good location for data centers in Texas. The Houston region has many stable, dry, and flat areas that are ideal for hosting data centers. Houston enjoys hot summers and mild winters, and the city is located outside “Tornado Alley,” which stretches down into North Texas. (More on natural disasters in the next section.)

When choosing the site for NetDepot’s Houston data center, we wanted to find a central, convenient location for our current and future clients. Our data center in Houston is located in ZIP code 77388 and is easily accessible from many of the city’s largest business hubs:

  • 5 miles from the Interstate 45 corridor.
  • Directly adjacent to the Grand Parkway project.
  • 5 miles from The Woodlands.
  • 15 miles from Houston International Airport.
  • 22 miles from downtown Houston.
  • 35 miles from the Energy Corridor.

Ready for Anything: Disaster Preparation for NetDepot’s Houston Data Center

We know how essential it is to offer uninterrupted data center services to our clients. According to the research firm Gartner, the average cost of IT downtime is $5,400 per minute. What’s more, a full third of businesses say that an hour of downtime would cost them over $1 million.
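To put those figures in perspective, here is a quick back-of-the-envelope calculation using the Gartner per-minute average cited above (the function name is ours, purely for illustration):

```python
# Back-of-the-envelope downtime cost, using Gartner's average cited above.
COST_PER_MINUTE = 5_400  # USD, Gartner's average cost of IT downtime

def downtime_cost(minutes: int) -> int:
    """Estimated cost in USD of an outage lasting `minutes`."""
    return minutes * COST_PER_MINUTE

print(downtime_cost(60))       # one hour  -> prints 324000
print(downtime_cost(24 * 60))  # one day   -> prints 7776000
```

Even a single hour of downtime at the average rate runs well past $300,000, which is why uninterrupted service is a core design goal of the facility described below.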

This means that disaster preparation must be a critical concern for any data center. When planning our data center in Houston, we were especially concerned with preventing the risk of hurricane damage, given the devastation unleashed by Hurricane Harvey in 2017.

Most importantly, our Houston data center facility has been built to withstand a wind load rating of 185 mph. This limit is significantly higher than the estimated 110-127 mph that a Category 5 hurricane would reach as it moved inland over Texas from the Gulf of Mexico. For more information, download the Texas Weather Evaluation document completed by our partners at TRG.

Our data center is housed within a reinforced concrete structure built to stand up to a hurricane in the Houston area. The building’s sloping roof is 4 inches thick with fully redundant leak protection and no rooftop equipment, which minimizes the risk of roof damage.

NetDepot’s choice of location for our Houston data center was also made with minimizing all possible risks and disasters in mind. Our data center is:

  • Outside Houston’s 500-year flood plains and at an elevation of 37 meters, minimizing flood risk. In addition, the data center is located 65 miles inland to protect it from tsunamis and tidal surges, which are rare occurrences in the Gulf of Mexico.
  • At least 1 mile away from all major highway thoroughfares. This location protects the data center from flooding and water damage due to exceptional rainfall, as well as car accidents (including those involving hazardous materials).
  • 5 miles away from railway lines and not under any commercial flight paths, which makes a train or air crash extremely unlikely.
  • Not located near oil or gas pipelines, hazardous material stores, or recycling centers, which reduces the risk of hazardous material releases and contamination.

Fires, earthquakes, and tornadoes are three other natural disaster risks that we have sought to mitigate:

  • Our Houston data center is equipped with a state-of-the-art automated fire suppression system.
  • Houston is located in an area with very little seismic activity. Since 1900, the closest earthquake to Houston has been more than 40 miles away, with a magnitude of 3.8 (“minor”) on the Richter scale.
  • As mentioned above, Houston is located outside of Tornado Alley, which significantly lowers the risk of a devastating tornado.

Features of NetDepot’s Houston Data Center

Having covered our hand-picked location and rock-solid disaster preparation, let’s now look at the biggest features and selling points of our data center in Houston.

  • Fiber-optic cables: Fiber-optic cables can offer lightning-quick connections that businesses need in order to run their most critical and time-sensitive workflows, with some offering bandwidths of 50 Gb/s and above. Our Houston data center is in close proximity to many fiber-optic Internet providers so that you can reach your full speed and potential.
  • Electrical substations: NetDepot’s Houston data center is also close to multiple electrical substations. These locations are the parts of the electrical grid where electricity is converted from high voltage to low voltage and made ready for use by homes and businesses. Being in close proximity to multiple electrical substations gives us an ample power supply, without having to rely too much on a single source of electricity.
  • Carrier-neutral status: Our Houston data center is carrier-neutral. This means that you can choose your preferred service provider from among 8 options: AT&T, Comcast, CenturyLink, Phonoscope, Cogent, Zayo, LightWave, and Crown Castle. In addition, you can use interconnections between multiple service providers, and NetDepot offers free cross connects between our other facilities.
  • Certified construction: The data center has been built by an accredited tier designer (ATD) certified by the Uptime Institute, which develops IT industry standards for data center design, construction, and operation. The building’s high-quality and sturdy construction will dramatically lower the risk of a catastrophic event.
  • Special facility privileges: NetDepot owns the Houston data center together with our sister company TRG Datacenters. This gives us special privileges within the facility that our customers can enjoy.


Finding the right data center in Houston, or in any location that fits your business, is both a challenging and an essential task. Your choice of Houston data center must be reliable enough for you to entrust it with your critical and confidential information, ensuring that you can enjoy constant, unbroken access to this data 24/7/365.

NetDepot was drawn to Houston for its many appealing qualities, including its convenient geography, a business-friendly climate, and lessened risk of natural disasters. We look forward to providing our customers with high-quality data center services and continued availability, thanks to our hand-picked, low-risk location and our wide range of cutting-edge features.

Are you looking for a data center in Houston for your business? Look no further than NetDepot. Get in touch with our team today for a chat about your needs and objectives.

2019 Cloud Trend Wrap-Up

In less than a decade, cloud computing has gone from being a tech buzzword to a well-established business best practice. By dramatically accelerating digital transformation initiatives, the cloud can help you beat your competitors and better serve your customers.

As 2019 draws to a close, it’s clearer than ever before that cloud computing is a must for businesses of all sizes and industries. RightScale’s 2019 “State of the Cloud” report finds that 94 percent of organizations now use the cloud in some form or fashion. What’s more, 83 percent of enterprise workloads will be in the cloud by 2020, according to predictions from the SaaS performance monitoring platform LogicMonitor.

With just days left before the new year arrives, this is the perfect occasion to look back at some of 2019’s most important trends in cloud computing, and make some predictions about what 2020 will have in store.

1. Outages prove the need for backups

The cloud has a reputation as a stable, secure, and highly available solution—and for the most part, it deserves that reputation. However, even the largest players in the cloud computing industry aren’t immune to sudden outages that can bring your business to a shuddering halt.

In 2019, cloud computing users struggled with major outages such as:

  • January: Multiple outages for customers of Microsoft products such as Office 365, Azure Active Directory, Dynamics 365, and Skype.
  • March 12: Global outages for customers of Google products such as Gmail, Google Drive, Google Hangouts, and Google Maps, lasting roughly 3.5 hours.
  • March 13: Outages at Facebook, Instagram, and WhatsApp lasting more than 24 hours, which some media outlets called Facebook’s “worst outage ever.”
  • August 29: Amazon Web Services outage due to a power failure at a data center in northern Virginia, which made 7.5 percent of instances in the US East 1 region unavailable.

The four cases above are just a few of the 2019 cloud outages that created bumps in the road for cloud computing users. Given the unpredictability of these outages, it’s always a good idea to maintain backups of your data and applications.

When backing up your IT environment, the most robust strategy is to create backups both in the cloud (to protect against natural disasters and physical damage) and on-premises (to preserve business continuity during cloud computing outages).

2. Emerging smaller cloud businesses

According to the 2019 RightScale report, 91 percent of organizations are using the public cloud. Public cloud giants like Amazon Web Services, Google Cloud Platform, and Microsoft Azure still remain dominant forces in the industry.

Yet despite this fact, many companies are finding that smaller cloud providers can be cheaper and more trustworthy than the major players. Small cloud providers are able to compete in this crowded marketplace by offering streamlined offerings, niche and specialty services, affordable pricing, and better attention to customers.

Of course, just because a company is smaller doesn’t always mean that you can expect reliable service. In October this year, for example, new cloud object storage provider Wasabi experienced an outage lasting more than 72 hours, due to excess customer demand in the US East 1 region.

With the dramatic growth in cloud computing, a number of cloud providers have sprung up, some of them higher-quality and more trustworthy than others. NetDepot is a more dependable option with over 20 years of experience offering premium cloud, managed, and dedicated server solutions.

3. Calls for cheaper cloud storage

The convenience and cost-effectiveness of public cloud storage providers like AWS is one reason why so many companies are attracted to their offerings. But once they sign up, these companies often have problems with hidden fees and wasted resources that break their budgetary expectations.

Large public cloud providers like AWS often have expenses such as:

  • Compute instances that are unused or underutilized, often because they haven’t been properly terminated.
  • Storage volumes that are unused or underutilized, often because they aren’t attached to an instance.
  • “Pay as you go” pricing used instead of reserved instances, which can be significantly cheaper.
  • Fees for transferring your data off the provider’s servers (while being free or cheap to transfer it in).

With these and other fees, it’s understandable that many companies are looking for cheaper cloud options. This search often brings them to smaller cloud providers such as NetDepot, whose S3 cloud object storage is 80 percent less expensive than AWS’s.

NetDepot offers new users 5 terabytes of free cloud storage for 6 months. What’s more, storage is priced at a flat rate of $0.005 per gigabyte. Inbound and outbound data transfers are completely free for transfer volumes of up to 100 percent of your stored data.
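As a rough sketch of what that flat rate means in practice (the $0.005-per-GB figure is from the paragraph above; the helper function is ours, not an official NetDepot API):

```python
# Illustrative monthly cost at the flat object-storage rate quoted above.
# This is a sketch, not an official pricing calculator.
RATE_PER_GB = 0.005  # USD per GB per month, flat rate

def monthly_storage_cost(terabytes: float) -> float:
    """Estimated monthly bill in USD, using decimal TB (1 TB = 1,000 GB)."""
    gigabytes = terabytes * 1_000
    return gigabytes * RATE_PER_GB

print(monthly_storage_cost(5))    # the free-tier amount at list price -> 25.0
print(monthly_storage_cost(100))  # 100 TB -> 500.0
```

A flat per-gigabyte rate like this makes bills predictable: costs scale linearly with stored data, with no separate egress line items to model.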

4. More and more digital natives

The term “digital natives” refers to people who have grown up in an era where digital technologies such as the Internet and mobile phones are commonplace. Digital natives (largely millennials and members of Generation Z) are intimately familiar with the use of these technologies, making them “native speakers” in the digital world. The term is often contrasted with “digital immigrants,” people who adopted digital technologies later in life.

As cloud computing becomes more and more mainstream, the “digital native” generation will enter the workforce with preexisting knowledge of cloud technologies. This could be a massive boon for companies looking to move more of their operations and infrastructure into the cloud.

However, organizations also need to ensure that their “digital immigrant” employees aren’t left behind by this shift toward cloud computing. Looking toward 2020, cloud training and education programs will be a vital tool to increase productivity and help older workers adapt to a changing technology landscape. That is why the team at NetDepot is constantly evolving, learning, and receiving new certifications.

5. Changing data center ecosystems

Cloud data centers typically operate along two distinct layers of technology: hardware and software. In 2020 and beyond, we expect to see these two functions become more closely fused and integrated.

This integration will enable a number of conveniences and benefits. For example, users will be able to manage their various cloud software and hardware components via a single touchpoint. In so doing, companies will be able to take advantage of automation capabilities for tasks such as updates and patching.

Greater integration between hardware and software in cloud data centers will also help with availability and scalability. Data centers will be able to expand and contract the resources they provision on an as-needed basis, improving efficiency and cutting unnecessary costs.


The 2019 cloud computing trends we’ve highlighted above are some of the most interesting developments we’ve seen in the industry over the past year. What does 2020 hold for you in terms of cloud computing? Let us give you a free assessment to start your year off right. Call 1-844-25-CLOUD to speak with one of our sales engineers today. To keep up to date with the latest cloud trends, follow us on Facebook, Twitter, and LinkedIn.

New Year – New Cloud Budget

With all the hype about the benefits of cloud computing and storage, it’s not surprising that many companies are jumping in before completing their due diligence research. Unfortunately, too many of them take on big cloud contracts and then are shocked at the costs they’ve incurred when the first bill arrives. Fortunately, your company can gain all the benefits of cloud computing and storage without taking on any big-box price tags by using NetDepot’s cloud services, including its S3 cloud object storage service.

Why Most Cloud Storage Costs are Sky High

There are many great reasons for accessing cloud services:

  • they reduce hardware and software investments;
  • they flex as demands ebb and flow;
  • they offer agility that can’t be matched by most on-prem systems; and
  • they’re (supposed to be) less expensive than building your own internal computing/processing system.

If these attributes are met, then customers can count on receiving excellent services at a reasonable price.

Big Three: Big Expenses

However, many customers of the ‘Big Three’ cloud services providers (Amazon Web Services [AWS], Google Cloud, and Microsoft Azure) are experiencing sticker shock only after they’ve signed on to significant contracts. And they’re not at all happy about it.

On the one hand, stiff competition between the Big Three has kept their on-boarding costs competitive. Each is careful to price their base packages within a reasonable range of the others, and all three offer (with some distinctions) similar versions of comparable services. On the other hand, though, it’s only after the contract is signed and users are making their migration that the real cost of using the big-box services becomes apparent.

Too often, customers have been dismayed at how the actual cost of their service differs significantly from the estimated price they thought they were going to pay. Amazon Web Services (AWS) offers a good example. In a 2019 study by Dao Research (sponsored by Oracle), AWS customers were frank about their expectations and realities after migrating to the AWS cloud. They were unnerved by the complexity of the pricing strategy and genuinely flummoxed when their monthly bills began escalating.

  • Most signed on with the web giant because of advertising that promised a ‘pay-as-you-use’ plan. The ability to scale up when necessary and down as needed suggested an opportunity to reduce costs by using fewer resources. However, their reality was that they needed significantly more compute and storage capacities to match their legacy on-prem systems, so they spent more than they budgeted just to achieve what they already had.
  • Many were also confused by the service providers’ pricing system. AWS uses a complex pricing strategy that doesn’t necessarily include all aspects of the application development process. For example, users found they had to order additional resources because what they thought they had purchased didn’t meet their needs. Not only that, but the bills they received weren’t sufficiently detailed to support multi-client environments, forcing businesses to add the cost of manual oversight to sort out and properly bill their own customers.
  • Another unexpected cost was incurred by having to manually decommission resources that were not necessary but had been migrated within the larger migration process — having to pay for that extra human oversight added unexpected costs to the overall cloud computing bill.
  • Not least distressing to many customers was the high cost of AWS support. Good support is invaluable, but AWS prices their support based on the underlying value of the computing services; as that rises, so does the expense for support services.

As an example, in 2018, Pinterest was compelled to spend an extra $20M over its $170M budget to obtain the excess capacity it needed over the holiday season, and the price for those additional services was higher than its contract price, to boot.

How to Set Your Cloud Budget

Lack of transparency, lack of knowledge, and lack of research all contribute to the confusion around the cost of accessing cloud services. Too many companies go into the process without a strategy and pay the price for that gap. However, a comprehensive assessment can help your company avoid accruing excessive and unexpected costs in its cloud services purchases. Consider these steps as you plan your cloud services contract and budget:

  1. Assess your entire IT environment to be sure you know which departments will or should be consuming the most cloud resources. In many cases, end-users will go over contract limits without knowing it, which increases those costs without appropriate authorization. Tracking potential cloud usage every day will help you limit those extra expenditures. It will also help you make better decisions for future cloud service agreements.
  2. Bring your whole organization into the cloud acquisition process. IT decisions – especially cloud IT and service decisions – should be made with inputs of the entire team, from the C-Suite and finance department to remote end-users. It’s only after you’re fully informed about how your enterprise will use the cloud resource that you can estimate both the volume and cost of the services you need.
  3. The assessment should also reveal where purchased assets are unused (get rid of those if you don’t truly need them) or obsolete (programming ages, too; purge what you can). Many companies keep all their data from the beginning of the enterprise and then pay to store those unused, aged assets. In most cases, it’s safe to get rid of a significant proportion of obsolete information, which also reduces your data storage costs.
  4. Recognize that the cloud processes things differently. Simply migrating existing apps to the cloud isn’t often the best use of that investment. Instead, retune your applications, processes, and practices to maximize the values that the cloud can bring, and that which you can’t create in your on-prem systems.
  5. Shop around. Not all cloud providers are expensive, and many can offer the same service as the Big Three at a much lower price. Plus, smaller providers are often more transparent in their pricing (because that’s just good business), so customers aren’t surprised when the bill comes in.
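As a rough illustration of steps 1 and 3 above, even a small script can total projected spend per department and flag underused assets worth purging. All names, costs, and thresholds below are hypothetical:

```python
# Hypothetical cloud-budget model: total projected spend per department
# and flag underutilized assets for removal (steps 1 and 3 above).
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    department: str
    monthly_cost: float  # USD per month
    utilization: float   # fraction of time in use, 0.0-1.0

def monthly_budget(assets: list[Asset]) -> dict[str, float]:
    """Sum projected monthly spend by department."""
    totals: dict[str, float] = {}
    for a in assets:
        totals[a.department] = totals.get(a.department, 0.0) + a.monthly_cost
    return totals

def purge_candidates(assets: list[Asset], threshold: float = 0.1) -> list[str]:
    """Assets used less than `threshold` of the time are candidates for removal."""
    return [a.name for a in assets if a.utilization < threshold]

assets = [
    Asset("analytics-vm", "Data", 420.0, 0.75),
    Asset("legacy-archive", "Data", 160.0, 0.02),
    Asset("web-frontend", "Marketing", 230.0, 0.60),
]
print(monthly_budget(assets))    # {'Data': 580.0, 'Marketing': 230.0}
print(purge_candidates(assets))  # ['legacy-archive']
```

Running a model like this against a real asset inventory makes over-contracted departments and forgotten resources visible before they turn into a surprise on the bill.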

Avoid the Big Three: NetDepot Offers Big Service For a Small Price

Data storage can be a big-ticket item with the Big Three providers, but NetDepot provides comparable S3 data storage service for up to 80% less. NetDepot also offers backup and disaster recovery services, including support for reducing damage caused by malware such as ransomware. These services are and will remain critical for all businesses as the rate of cybercrime, both internal and external, continues to rise.

  • Ransomware continues to threaten all businesses, but small businesses are the preferred targets for most ransomware attacks, accounting for 71% of all corporate victims.
  • The threat posed by external criminal activity is also growing as more end-user devices are added to network systems.
  • And internal threats are also multiplying. Accenture’s 2019 Cost of Cybercrime study reveals that employees cause more damage than hackers because of intentional activities (exploiting passwords, etc.), but mostly because of inadvertent errors (inappropriately sharing data, for example).

Today’s global marketplace is highly competitive, and keeping costs down while maintaining a competitive edge is harder to accomplish than ever. The cloud offers exceptional value for those who are careful to choose the right provider to meet their needs and their budget. NetDepot can provide your organization with the flexible cloud computing and storage services you need at a price you can afford. Call 1-844-25-CLOUD today to see how we can serve you.

Cheap S3 Storage: What’s the Wait?

NetDepot’s S3 Storage Savings

Companies are amassing more information every day, across every sector of business, related to all aspects of their industry, consumer base, and supply lines. That’s a lot of data to collect, store, access, and mine. These companies need a data storage facility that provides all those services, along with top-of-the-line data security features and flexible scalability. And they need it all at a reasonable cost.

NetDepot’s S3 Cloud Object storage (Simple Storage Service) meets all those needs while also promising a multitude of additional support services.

Why Cloud Object Storage?

The short answer is that cloud object storage is the most comprehensive way to manage today’s burgeoning lakes of unstructured (“object”) data. Cloud servers are larger and more complex than most on-premises servers, so they can handle more information, and handle it better, too. And they’re managed by cloud experts whose sole focus is to monitor and maintain those systems.

The longer answer includes all the benefits and features provided by cloud storage, such as its flexibility. As your company’s needs change, so can your cloud storage services, so you’re not locked into a single ‘solution’ for many computing challenges. Cloud storage also offers easy access to your files, so your workers can reach critical corporate data from anywhere they may be. Cloud storage providers also often provide invaluable backup and restore services in the event your organization gets hacked or suffers an outage. Backups and disaster recovery services are almost always best handled by experienced, skilled service providers.

Only you know what type of cloud configuration will work best for your enterprise, and NetDepot offers them all. Depending on the computing demands of your operation, you may elect to establish a private or hybrid cloud storage option at a proprietary data center. However, because both of these options can be too expensive or require too much work, many companies instead choose to access the cloud storage services of a public cloud provider, such as NetDepot.

Why NetDepot’s S3 Cloud Storage Service?

Two factors pop up when comparing NetDepot’s S3 Cloud Storage services to those of the other industry giants: reliability and affordability.


Reliability

You’d think that the ‘Big Guys’ would get it right all the time, but you’d be wrong. In many cases, these prominent industry leaders suffer major failures simply because their organizations are such behemoths, with so many opportunities for disaster.

For example, in 2017, Amazon’s S3 (Simple Storage Service) failed when an Amazon team member, intending to remove just a small number of servers from a particular network, inadvertently removed many more than planned. The single erroneous entry launched a cascading effect that eventually took out AWS services in the entire US-EAST-1 region. The failure caused inordinately slow loading of the websites of 54 of the nation’s top 100 retailers, including Lululemon, One Kings Lane, and Express. The outage itself lasted only four hours, but customers throughout the affected area experienced service delays of up to 11 hours. Clearly, just because it’s big and charges high prices doesn’t mean that AWS can guarantee its customers safe and reliable services.

In contrast, NetDepot has spent its 20 years in the business perfecting the services and options of its much more focused enterprise. The company’s Infrastructure-as-a-Service, colocation, dedicated server, managed hosting, and cloud services live in two Tier 4 data centers located in Houston and Atlanta (with a third coming soon in San Francisco), and are backed by a 100% uptime service level guarantee. These “big” services, delivered through NetDepot’s smaller but state-of-the-art infrastructure, ensure that NetDepot customers never experience losses like those suffered by AWS clientele.


Affordability

Pricing matters, too, and every company must keep a vigilant eye on the bottom line if it intends to stay in business. Although Amazon pioneered the S3 technology, providers like NetDepot can offer the same high-quality service at a much lower rate than the tech giant charges its clients.

As an example, consider the need to store and access a large quantity of data, say 210,000 TB. With AWS, that service would cost over $331,000, while at NetDepot the exact same service costs a mere $26,000, a savings of more than 90 percent.

NetDepot’s debt-free management style is a testament to the company’s financial intelligence, which is demonstrated by the way they share their economic savvy with their customers. And its customers gain the same great support and service as they would at AWS, but at exceptional prices:

  • Direct Connect options are as low as $150 per month for 1 Gigabit; only $2500 per month for 40 Gigabits.
  • Expert migration support is available, too, to assist the move from your current S3 provider to NetDepot, and it’s free when you sign up for a new account.
  • Scaling your enterprise is less expensive, too: your costs grow linearly, so simply multiply the same low rate you pay for cloud data storage by the capacity you want to reach. NetDepot will help you get there.
  • Finally, but not insignificantly, as noted above, NetDepot’s 100% uptime accessibility guarantees that access to your stored data is always available, which makes NetDepot’s S3 storage service a virtually priceless asset to your company.

Every company needs access to and management of its data at every moment of every day. NetDepot’s S3 Storage Service gives you the storage capacity you need to accomplish today’s and tomorrow’s goals at a price you’re happy to pay. Perhaps it’s time to chat with the experts at NetDepot to find out why their S3 cloud storage service is the best solution to your data storage problem.

4 Steps to Creating Your IT Disaster Recovery Plan

Catastrophe can strike at any moment, often in the most unexpected of ways. Depending on your business, your IT environment, and your location, you may face disasters such as:

  • Natural (tornadoes, hurricanes, fires, floods)
  • Physical (power outages, hardware failures)
  • Human (insider threats, data breaches, cyberattacks)

Most business owners are at least subconsciously aware that these events could happen to them—yet these concerns are hand-waved away as something that only happens to “other people.” As a result, far too many businesses are unfortunately flying blind when it comes to a disaster recovery plan.

This happy-go-lucky attitude is one reason why disasters are so devastating for so many companies. According to a report by the Federal Emergency Management Agency (FEMA), 40 percent of businesses never reopen after suffering a disaster, and another 25 percent of them fail within a year.

Of course, disasters are inherently sudden and unexpected—but that doesn’t mean that you have to be unprepared when disaster strikes. There are methods and steps you can take before, during, and after a disaster to protect the continuity of your business processes and the integrity of your organization.

In this article, we’ll discuss 4 of the most important actions to take when creating a disaster recovery plan, so that you can be as prepared as possible if and when you face a catastrophe.

1. Build a risk assessment plan

A risk assessment plan is a concise yet comprehensive summary of the various risks that you face as an organization, helping you understand your most critical vulnerabilities.

If your headquarters is located in Florida, for example, then you’re much more likely to suffer a hurricane than an earthquake. On the other hand, earthquakes and other catastrophes such as wildfires are a top-level concern for disaster-prone regions like the Los Angeles area.

Risk assessment plans should cover a variety of possible disasters, from those that are merely inconvenient to those that could threaten the existence of your business. Many companies overemphasize worst-case scenarios in their risk assessment plans, believing that preparing for the worst covers everything else. This tendency can be dangerous: it draws attention away from less severe (yet still damaging) events that are far more likely to occur.

In addition, don’t forget to assess all the possible risks to your business: natural, physical, and human. While you might fear falling victim to the latest malware or virus, for example, the insider threats posed by your employees and contractors deserve at least as much attention. IBM reports that insider threats (both intentional and unintentional) account for 60 percent of all cyberattacks.

2. Perform a business impact analysis

Once you better understand the risks you face as an organization, you can create a business impact analysis that evaluates the potential impacts that these risks would have on your business.

Your business impact analysis should include an estimate of the costs and repercussions to your organization in the event that a catastrophe occurs. The impact of a disaster on your business is likely greater than you realize, even for relatively minor events.

According to a 2016 survey, for example, 98 percent of businesses say that an hour of downtime would cost them more than $100,000, while a full third say that it would cost more than $1 million.

To calculate the costs of downtime for your own organization, don’t forget to consider the following factors:

  • Your average hourly revenue
  • The number of your employees and the hours they work per week
  • The number of your employees who would be affected by a disaster
  • The lost productivity for each employee affected by the disaster

With the hourly cost of downtime in mind, you can then decide on two parameters which are essential to any IT disaster recovery plan: RTO and RPO.

  • Recovery time objective (RTO) is the maximum amount of time that can elapse before your data, applications, and processes are fully restored. Essentially, RTO determines the level of comfort that your business has with experiencing downtime. Businesses that require a high level of availability (perhaps on the order of seconds) will have a lower RTO than businesses that can survive downtime lasting minutes or even hours.
  • Recovery point objective (RPO) is the maximum age of the backups that can be restored in order to preserve business continuity. In other words, RPO determines how much data your organization can afford to lose: could you survive after losing 5 minutes’ worth of data? 1 hour’s worth?
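To make these two parameters concrete, here is a minimal sketch, using illustrative numbers, that checks whether a given backup schedule and restore procedure satisfy a chosen RPO and RTO:

```python
# Hypothetical check of a backup/restore setup against chosen RPO/RTO targets.
# All values are illustrative and expressed in minutes.
RPO_MINUTES = 15   # acceptable data loss: at most 15 minutes' worth
RTO_MINUTES = 60   # acceptable downtime: systems restored within 1 hour

backup_interval = 10         # a backup is taken every 10 minutes
measured_restore_time = 45   # observed time to restore from the latest backup

# Worst-case data loss equals one full backup interval (disaster strikes
# just before the next backup runs), so the interval must not exceed the RPO.
meets_rpo = backup_interval <= RPO_MINUTES
meets_rto = measured_restore_time <= RTO_MINUTES

print(f"RPO satisfied: {meets_rpo}, RTO satisfied: {meets_rto}")
```

Note the asymmetry: RPO is bounded by how often you back up, while RTO is bounded by how fast you can restore, so the two are improved by different investments.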

3. Cloud backups, on-premises, or hybrid?

Speaking of backups, we all know that backing up your data and software applications is a must. It’s perhaps the most important step your business can take to make yourself more resilient and protect yourself from disaster.

By storing your IT essentials in an off-site location, you can more quickly and easily restore operations in the event of a catastrophe that could otherwise cripple your business.

Yet not all backups are created equal. The first question to answer when backing up your data: will you back up to an on-premises server, to the cloud, or to a hybrid solution that combines both options?

Cloud backups are an increasingly popular option for companies who want to preserve their business continuity after a disaster. Storing your data “in the cloud” means sending it to a secondary off-site location with a server that is managed by a third party.

There are several different types of cloud backups:

  • Public cloud backups store data on a remote server owned and managed by a third party known as the “cloud provider.”
  • Private cloud backups store data on a server that has been exclusively designated for your use.
  • Hybrid cloud backups combine the public and private cloud, offering a more flexible cloud backup solution.

Whichever option you choose, cloud backup solutions are on the rise. In a 2019 survey, 60 percent of organizations report using cloud backup features such as short-term data storage, cloud archiving, and DRaaS (disaster recovery as a service). What’s more, of the remaining 40 percent, more than half are planning to adopt cloud backups in the year ahead.

Meanwhile, on-premises backups store data on a server that is under your exclusive ownership and control. This server may be located within the physical confines of your business, or off-site. Note that on-premises backups stored in the same location as your primary servers are vulnerable to the same natural disasters.

Of course, you can also opt for a hybrid backup strategy that combines cloud and on-premises storage. Many organizations choose this approach when certain data cannot be stored in the cloud for compliance or security reasons. A hybrid backup strategy also gives you the benefits of both options: the scalability of the cloud, combined with the speed of access of on-premises storage.

4. Document and test your plan

Just like any other emergency plan, your IT disaster recovery plan should be well-documented and well-tested in advance of a catastrophe. Every employee has a role to play following a disaster, and your plan should make it obvious what that role is and how to execute it successfully.

Your complete IT disaster recovery plan should include:

  • A brief overview and summary of the plan.
  • The contact information for executives, critical personnel, and members of the recovery team.
  • Clear, comprehensive steps to follow in the immediate wake of a disaster.
  • A list of the most important elements in your IT infrastructure, and the maximum RTO and RPO for each one.
  • Insurance documents and contact information for your insurance provider(s).
  • Suggestions for dealing with the financial, legal, and reputational repercussions of the disaster.
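The per-component recovery targets from the list above can be kept in a simple machine-readable inventory. The component names and numbers below are purely illustrative:

```python
# Illustrative inventory of IT components with per-component recovery targets.
# Components, RTOs, and RPOs here are hypothetical examples, not recommendations.
dr_inventory = [
    {"component": "customer database", "rto_minutes": 30,   "rpo_minutes": 5},
    {"component": "email",             "rto_minutes": 240,  "rpo_minutes": 60},
    {"component": "internal wiki",     "rto_minutes": 1440, "rpo_minutes": 1440},
]

# Sort by RTO so the recovery team restores the most time-critical systems first.
restore_order = sorted(dr_inventory, key=lambda item: item["rto_minutes"])
for item in restore_order:
    print(f'{item["component"]}: restore within {item["rto_minutes"]} min')
```

Ordering restoration work by RTO is one simple prioritization scheme; a real plan would also weigh dependencies between systems (a database may need to come back before the applications that read from it).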

The more information your plan includes, the more important it is to test it on a regular basis. Full tests should run at least every quarter, and smaller-scale tests can run more frequently outside of standard business hours.


Creating an IT disaster recovery plan isn’t the easiest or the most fun part of running a business—but in a world where natural and digital disasters can strike at any second, it’s an absolute necessity.

Working with a skilled managed services provider can be a lifesaver when creating an IT disaster recovery plan. Look for an MSP with experience in disaster recovery and qualities such as:

  • Fast recovery speeds
  • Ease of use
  • Scalability of storage and backups
  • Knowledge of security and compliance issues

NetDepot’s cloud-based DRaaS (disaster recovery as a service) platform can give your business the peace of mind you need. Our DRaaS solution is flexible, scalable, and offers near-zero recovery times that can get you back up and running within a matter of seconds. To learn more about how we can help preserve your business continuity, get in touch with our team today.

Disaster Recovery in the Cloud

Protecting your data from threats requires the same level of advanced thinking and planning as any other aspect of your business. However, too many companies continue to rely on legacy software and systems as ‘protection’ against today’s technologically sophisticated cybercriminals, who specifically target corporate data stores and databases. Worse, environmental concerns now present equally significant challenges to the security of corporate data. For forward-thinking company owners, embracing the cloud’s advanced data disaster recovery (DR) technology is the only way to maintain the safety and security of their enterprise information, whether they are attacked by criminals or Mother Nature.

The value of a data backup and DR plan

Surviving a cyberattack or natural disaster is usually the result of proper planning, not good luck. Disasters can happen at any time for almost any reason. In many cases, the debacles cost both the company and its customers millions of dollars in damages and recovery expenses.

Things don’t have to go that way, however. Having a full-fledged data DR plan in place provides you with assurances that access to and use of your corporate databases will continue both during and after an attack, and can even reduce your organization’s exposure to excessive damages.

A ‘full-fledged’ data DR plan accomplishes many of your corporate goals:

  • It will minimize disruptions in your operations while your IT department addresses the crisis. Keeping services in motion ensures ongoing revenues and satisfied customers.
  • It will limit the extent of the damage that occurs. The plan can direct your IT professionals to the location of the breach or failure so they can make appropriate repairs as quickly as possible.
  • It also anticipates when alternative data sources are required to maintain productivity, so it ensures that your organization has backup data and processing resources available when needed.
  • Finally, the DR plan provides a structure to support full recovery practices, ensuring that your critical corporate data and processing capabilities return to their pre-event condition, improved by the knowledge gained during the incident.

Once you’ve established your plan, you’ll want to ensure you optimize its capacities by engaging optimal resources.

Secondary data centers as DR resources

For many companies, a secondary data center is the DR response of choice. They simply duplicate their primary data stores in the secondary data center to use as a fallback if or when disaster occurs.

The reliance on that resource is becoming less than optimal, however, as technologies and threats emerge. Many legacy data centers don’t have the protections needed to keep corporate information safe from today’s predatory cybercriminals. Further, their design and architecture are often expensive to maintain, and the DR purpose usually doesn’t require access to their full (and expensive) range of services.

Unprotected threats

Emerging cyber and environmental concerns now threaten the previously safe haven of the secondary data center, making it difficult – and expensive – for you to prepare for and maintain sufficient and appropriate DR resources on those secondary servers.

Cybercrime attacks

Innovations in technologies are the most significant contributors to the rise in cybercrimes in recent years. The Internet of Things (IoT) is of particular concern as each individual smartphone, tablet, and computer expands the vulnerability field of every system it accesses. People who use those digital tools may not follow their recommended security protocols. When they don’t, any level of cybercriminal can access the data contained on their device, as well as, potentially, data held in any system with which that device engages.

This challenge is significant in organizations that permit workers to use personal devices for work purposes. The data security for those companies is only as sound as their least attentive employee, and any worker who doesn’t utilize security practices on their device leaves their employer open to an attack. As organizations add more devices – both personal and corporate – to their networks, they are also adding new opportunities for cyber-attacks on all of their data centers through any one of them. As those threat levels rise, so does your cost to protect against them across your data center campus.

Environmental woes

The range of environmental threats to secondary data centers continues to grow as those legacy backup and DR systems become more elaborate and complex. Almost any environmental risk can become a disaster when insufficient preparations are made for the secondary data center:

  • In 2017, an unexpected electrical surge in a data center forced British Airways to ground hundreds of planes and thousands of passengers. Baggage handling, ticketing, and check-in services all went offline as the data center failed, and the company didn’t have a backup plan for those servers if or when they failed. (In the U.S., squirrels are often the cause of power outages.)
  • Lightning strikes can also take out a data center; both Amazon and Microsoft lost computing services when an electrical storm targeted their data centers.
  • Storms, in particular, have been especially damaging to data centers in recent years. Hurricane Sandy took out several data centers across New York and New Jersey, as rising flood-water inundated the generators that powered those servers. The Huffington Post, Buzzfeed, and Gizmodo all lost power as a result of that storm.

Chances are, data security is not your company’s primary business. Therefore, it will become increasingly challenging for your organization to engage the technologies and expertise required to keep safe both your primary and secondary data centers.

The cloud as the optimal data DR resource

Just as technologies flex to accommodate your operational needs, so do cloud-based DR technologies flex to accommodate your specific data security needs. A cloud DR strategy can both back up and restore your data if disaster strikes, giving you a foundation on which to work while your primary servers are under repair (or being replaced). Cloud DR services offer users multiple options, based on their particular use cases and corporate goals. Plus, the wide variety of implementation strategies can accommodate almost any budget, giving even small companies robust recovery plans that they could otherwise not afford.

How it works – as a backup and recovery tool

Fundamentally, the cloud ensures continuity of your operations by providing a second, cloud-based site from which they can operate if hardware or software systems go down. Known as a ‘failover,’ this service replicates the capacities of a second ‘backup’ data center, ensuring your organization has access to its data if its primary center becomes unavailable. It differs from your secondary centers, however, because its costs are borne mostly by the service providers, who charge you on a pay-per-use model, a capacity model, or a bandwidth model. You aren’t paying for the hardware, software, physical plant, maintenance, or upkeep costs, so using the cloud is much less expensive for your company than running a second data center for DR purposes.

Further, the cloud offers varying options for what and how your organization can ‘failover’ its information:

  • You can choose to failover just your data, keeping it safe from intrusions that may occur in your primary data center.
  • You could also choose to failover entire applications, which is significant for companies that rely on proprietary programming to achieve corporate success.
  • A third option is a virtual machine, which can replicate single or multiple operations in the cloud itself. The virtual machine performs all your activities in the cloud until your home network is restored.

Each of these options includes a ‘failback’ service that returns your data and programming to your primary data centers after they’ve been recovered.

How it works – as a strategy tool

Beyond gaining the peace of mind that comes from knowing your data is safe, a cloud-based DR plan also offers other benefits to improve the strength of your enterprise. Developing the DR plan, then tying it to cloud services provides a series of opportunities for the evaluation and analysis of how well your business works and how you can use cloud services to improve those processes.

  • Threat analysis

Your company faces threats that are unique to its business. Some companies may be vulnerable to data hacks, while others may be more vulnerable to floods or fires. Determining where your vulnerabilities are highest will help you decide which cloud services best address those challenges.

  • Impact analysis

Another point to ponder is how well your organization would bear the brunt of an attack. A loss of inventory data might not be as significant as the loss of consumer data, so you might consider investing more resources in one over the other.

  • Downtime analysis

You’ll also want to determine how long your enterprise might be down as it recovers from an attack. You may have internal resources to tide you over till the crisis passes, or your company may go down altogether within minutes of the onset of the event. Your strategy should address the challenges posed by down times to keep them as short as possible and facilitate a return to full functionality as fast as possible.

Crafting your data DR plan using cloud resources can improve your understanding of how your company functions and how well it’s working to achieve its goals.

Cloud providers as backup support systems

Not insignificantly, cloud-based DR services with NetDepot come with a team of dedicated cloud professionals who are available to help you solve all your data security concerns today and into tomorrow. With 24/7 support, NetDepot is here to help you keep your data safe from known and future threats, and to help maximize your DR strategy.