What would happen if the fire alarm in your office went off at this very moment? Would there be a mad rush for the doors, or would most people just ignore it and keep working until they smelled smoke?

According to the Institute for Research in Construction, only about 25% of occupants react to fire alarms as if they were potential indicators of a real emergency. Instead, most people assume that the alarm is merely a drill.

In other words, fire drills DECREASE the life-saving effectiveness of fire alarms.

Here’s another example to consider.

In 2010, an 89-year-old patient died of heart failure at Massachusetts General Hospital. For 20 minutes, a series of alarms, beeps, and messages had been sending urgent warnings to the hospital staff, yet the signals were ignored entirely. At some point, an employee had even manually shut off the crisis alarm on the patient’s bedside monitor because it was so distracting.

These nurses weren’t evil or cold-blooded. They had simply become desensitized by years of constant false alarms from oversensitive and malfunctioning medical devices. When an actual crisis occurred, everyone assumed it was just another false alarm.

This phenomenon is called “Alarm Fatigue”, and it can easily lead to accidental data loss and failure of critical business systems.

As an IT administrator, you are constantly juggling priorities, multi-tasking and keeping up with unplanned work. Alarms are constantly going off, and your job is to decide which fires actually need fighting.

In most cases, if you miss a legitimate alarm, the consequences are minor. But there’s one area where the results can be very severe.

Compared to other conflicting responsibilities, backup and disaster recovery rarely feel like an urgent priority. But when they do become a priority, it’s usually too late to do anything about it.

To be an effective IT leader, and respond more efficiently, you need to manage your signal-to-noise ratio. Before you’re notified of an alarm, it helps to have someone who will verify and triage it for you. You should only be notified of actual emergencies that require your immediate attention.
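The triage step described above can be sketched as a simple severity filter. This is a minimal illustration only; the alert fields, names, and threshold are hypothetical, not taken from any real monitoring product:

```python
# Toy alert-triage filter: only verified, high-severity alerts escalate
# to a human; everything else is logged for later review.
# (Field names and the threshold are illustrative assumptions.)

def triage(alerts, severity_threshold=8):
    """Split alerts into those needing immediate attention and the rest."""
    escalate, log_only = [], []
    for alert in alerts:
        if alert["verified"] and alert["severity"] >= severity_threshold:
            escalate.append(alert)
        else:
            log_only.append(alert)
    return escalate, log_only

alerts = [
    {"name": "disk-failure", "severity": 9, "verified": True},
    {"name": "cpu-spike", "severity": 4, "verified": True},
    {"name": "login-anomaly", "severity": 9, "verified": False},
]

urgent, rest = triage(alerts)
print([a["name"] for a in urgent])  # only the verified disk failure escalates
```

The point of the sketch is the shape of the process, not the thresholds: unverified or low-severity noise never reaches the person on call.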

Specialization and delegation are an effective way to deal with alarm fatigue. When you delegate backup to a specialist, you eliminate the distractions and conflicting priorities that lead to alarm fatigue. Instead, you establish a consistent, repeatable process where every potential problem is proactively investigated and fixed. On top of that, you can add further layers of auditing and monitoring to catch any issues that might fall through the cracks.

By making this a dedicated and focused role within your IT function, you can significantly reduce the chances of alarm fatigue creeping into your backup and disaster recovery process.

Of course, a dedicated internal team might not be a feasible option for most organizations. If you can’t afford a dedicated internal data protection and business continuity team, then outsourcing your backups to a dedicated provider can give you the same benefits at a fraction of the cost.

By outsourcing your data protection and business continuity to a specialized service provider, you get comprehensive protection and total peace of mind.

In 1995, the Royal Majesty cruise ship ran aground because of an electrical problem with its GPS system. Even though it should have been clear to any experienced crew member that the ship was veering off course, most simply assumed that the GPS system would correct itself, or that someone else would take responsibility for fixing the problem.

Humans have a bias to trust computers over other humans. And this bias grows over time, as computers continue to prove their accuracy and trustworthiness. When a human operator notices something wrong with an automated system, they often disregard their own observations and go with what the computer is saying.

This is an excellent example of the “Automation Paradox”.

As automation becomes more effective, the role of the human operator becomes more critical. In the same way that automation can create exponential benefits and efficiencies, it can also scale up the harm caused by human error and poor implementation.

In the early days of computing, mainframes were very expensive and difficult to use. Administrators took great care in their maintenance and implementation, and hacking was very unlikely. The process of provisioning a new machine could take months and required approval from many different departments. If these mainframes ever crashed, the company could still maintain some level of operations through its paper-based processes.

Today, virtualization makes it easy to quickly launch new servers with default security settings. IT departments must deal with virtualization sprawl, shadow IT, and employees working on unauthorized systems. Provisioning has become so easy that IT administrators struggle to keep track of new systems being added to the network. And as a result, tolerances for data loss, security breaches, and unplanned downtime have dropped to virtually zero.

A day in the life of the average IT manager often resembles the broom scene from Disney’s Fantasia.

Thankfully, the tools have also improved. Today’s IT administrators have access to backup and disaster recovery systems that are both powerful and easy to use. But the automation paradox also applies to backup and disaster recovery systems.

If you can protect all of your virtualized systems from a single application, that’s great. But this also means that human error has the potential to cause much more damage. As your data protection and business continuity tools become more powerful, you likewise have a duty to be extra-cautious with their management, monitoring, and implementation.

This is why we recommend delegating your data protection and business continuity to a dedicated specialist that exclusively does this kind of work, and nothing else. When you outsource your backup and disaster recovery to a specialist, you know that this work is being done by dedicated experts who have the training, experience, and resources to ensure that your systems are always protected.
When disaster strikes, you can take comfort in the fact that these specialists perform real-world recoveries every day. They know how to get the job done right, every time, without fail.

You need the best automation tools. But they have to be managed by the best-trained and most skilled technicians. The more efficient the automation, the more crucial the role of the human operator. If you want total peace of mind, make sure that you have the best people implementing, managing and monitoring your backups.

Imagine the following scenario:

  • You’re the IT administrator for your company.
  • To eliminate human error and physical media failure, you’ve implemented a fully-automated network backup solution that creates redundant backup copies across multiple physical sites.
  • You’re following all of the best practices. But then, something terrible happens.
  • One of your trusted internal systems gets hacked, and this system becomes a gateway for the hacker to install malware on all of your production servers. You log in to assess the situation but are met with a ransom note stating that you must pay $10,000 in Bitcoin to decrypt your files.
  • You check your other copies but find that your backup servers have also been compromised. Despite your best backup plans, all of your data is gone.
  • Reluctantly, you pay the ransom. But instead of sending the decryption key, the blackmailers now demand another $50,000. What would you do?

Ransomware has become a nightmarish epidemic that’s wreaking havoc on the IT industry.

Today’s ransomware attacks have evolved in sophistication to become incredibly aggressive, destructive and resilient. Worst of all, Bitcoin and other cryptocurrencies have become a practical and anonymous means for criminals to extort money from helpless victims.

How can you protect yourself? One solution might be to augment your existing backup and disaster recovery plan with additional precautionary Air-Gap backup copies.

With Air-Gap backups, copies of your data are kept completely isolated, physically disconnected from any network. As a result, they’re protected from even the most aggressive hackers.

Of course, as you adapt your tactics to threats, the threats will continue adapting to your tactics. It’s a constant war to protect your company’s most valuable assets.

That’s why you need to surround yourself with the most highly-trained, well-equipped and experienced experts you can find. By allowing Storagepipe to assist in your backup and disaster recovery plan, you’ll have more peace of mind when facing potential future ransomware attacks.

Happy Holidays From Storagepipe

THREAT! The virus scanner pings
Your system needs some updating
Yes it has been quite a while
Your OS is out of style
Careful with your documents
It’s just good ol’ common sense
Always backup your hard drive
And any threat you will survive
THREAT! The virus scanner pings
Cryptocurrency mining

Using storage in the cloud
Covered by encryption’s shroud
Passwords just a few are told
Keep them fresh and not too old
Always check for system bugs
and loading times that move like slugs
Fear not if your screen goes blue
Storagepipe’s looking out for you
THREAT! The virus scanner pings
For botnets, you’ll be spamming

Anti-social social engineers
Have CIOs cowering in fear
Trusted users compromised
Packet sniffers, prying eyes
Don’t bring down your company
by shrugging off security
terms, conditions we say “YES”
though not read, we must confess
THREAT! The virus scanner finds
Storagepipe for peace of mind

The term “Black Swan” dates back to the first century A.D. Originally, it was used to describe an event that was exceptionally improbable.

For example:

  • “I’ll see a Black Swan before you ever beat me at chess.”
  • “An honest politician is like a Black Swan.”

And then, in 1697, something incredible happened. The Dutch explorer Willem de Vlamingh travelled to Australia, where he discovered a species of black swan!

Since then, the meaning of the expression has shifted to describe a type of logical fallacy: the fact that you’ve never seen something before is no indication that it won’t occur in the future.

Likewise, the fact that you’ve planned for every likely disaster recovery scenario is no indication that you’ve planned for every possible threat. In fact, the most serious data breaches and data loss scenarios are often the result of unanticipated events or perfect storms.

So how do you prepare for Black Swans in your data protection strategy?

One way is through specialization. Delegate all of your backup, information compliance and disaster recovery tasks to a team of dedicated specialists with years of experience in this field. Ensure these experts have the best training, the best infrastructure, and the best tools at their disposal.

As specialists, they learn from every disaster recovery incident they encounter, becoming better prepared with each one. These experts have seen it all, and nothing surprises them.

Although there is no guaranteed way to anticipate every possible Black Swan disaster, putting your recovery plan in the hands of a specialist like Storagepipe is the best way to get peace of mind for your data protection.

In September of 2015, Amazon experienced a major downtime incident that knocked out availability at many leading cloud services, including Amazon’s own Echo smart speaker.

Of all the companies affected by this outage, none seemed to fare better than Netflix. Despite the major disruption, which knocked out more than 20 critical AWS services, Netflix was quickly able to restore full streaming video service to the 50 million homes that depend on it.

When asked, Netflix attributed this amazing feat to what they describe as “Chaos Engineering”. Netflix has developed a suite of software tools called the Simian Army, which acts as a kind of benevolent malware that lives permanently within the company’s infrastructure.

This suite of tools is constantly triggering events such as:

  • Randomly disabling production servers
  • Introducing latency into client-server communications
  • Simulating outages of entire AWS data centers

As the name would suggest, it’s like letting a pack of rabid monkeys loose in your datacenter.

For most AWS clients, the outage was a major disruption. But for Netflix, this was just another routine battle against the Simian Army.
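In the same spirit, and much simplified, a chaos experiment can be sketched as deliberately injecting failures into a service call and checking that the retry logic still recovers. This is an illustrative toy under assumed names, not Netflix’s actual Simian Army code:

```python
import random

# Toy chaos experiment: wrap a service call so that it randomly fails,
# then verify that a simple retry policy still delivers a result.
# (Purely illustrative; not Netflix's actual tooling.)

def chaotic(fn, failure_rate, rng):
    """Return a version of fn that randomly raises, simulating an outage."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("chaos: simulated outage")
        return fn(*args, **kwargs)
    return wrapper

def with_retries(fn, attempts=5):
    """Naive retry policy: try up to `attempts` times before giving up."""
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue
    raise RuntimeError("service unavailable after retries")

# Even with a 50% simulated failure rate, retries keep the service usable.
flaky_fetch = chaotic(lambda: "video stream", 0.5, random.Random(42))
print(with_retries(flaky_fetch))
```

The design point is that the failure injection and the recovery policy are exercised together, continuously, so weaknesses surface during drills rather than during a real outage.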

When was the last time you experienced a major unplanned downtime event, server outage or data loss incident? Most small businesses might experience one every year or two.

But at Storagepipe, disaster recovery is all we do. Clients outsource their nightmares to us, and we provide consistent, reliable recovery from any emergency you can possibly imagine.

  • Ransomware?
  • Natural disasters?
  • Employee sabotage?

We’ve seen it all. And it’s all we do. All day, every day.

It’s only a matter of time until your company gets hit by a major IT emergency. When that time comes, what would you rather do?

  • Improvise a do-it-yourself recovery plan?
  • Or get the most experienced and qualified people to save the day?

Because we live in a world of IT chaos, Storagepipe has the experience, knowledge, infrastructure and tools to ensure total peace of mind for your data protection.

For over a decade, cloud computing has been one of the most talked-about trends in IT. Compared to traditional infrastructure, leveraging the cloud has been consistently shown to offer overwhelmingly positive business benefits.

And despite a long track record of success, some IT administrators are still reluctant to trust the cloud when it comes to data protection.

One of the most common anti-cloud arguments you might hear is what’s often referred to as “The Bank Fallacy”.

The Bank Fallacy says that robbers target banks, because that’s where all the money is. This logic seems to suggest that the risk of theft goes up when you consolidate your assets.

Instead of going after hundreds of small targets, a thief would supposedly prefer to go after a single massive heist.

In other words, by moving your backups to the cloud, you would actually be creating an incentive for a data breach. There are a number of serious problems with this argument.

Let’s look at the example of a typical self-managed manual backup process.

Every day, the IT administrator would make a single unencrypted incremental backup copy to tape. In many cases, these tapes would never be taken off-site.

But if they are, these unencrypted tapes would simply be mailed to another unsecured office, where they would probably be kept in a closet or a cardboard box. Compare this to a typical cloud backup provider.

First, the data is encrypted locally from the client’s machine. Once encrypted, this data cannot be decrypted without the client’s secret credentials.

From here, the data is transmitted over a secure SSL connection, into a state-of-the-art datacenter. In addition to the most robust technological safeguards, the building is also physically secure.

Once in storage, the data is protected by 24/7 security guards, video cameras, and network security experts. As an added precaution, a second redundant copy of the encrypted data is also created, in case the first copy is somehow destroyed.
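The encrypt-before-upload step can be sketched with a toy symmetric cipher built from a keyed hash. This is for illustration only: a real provider would use a vetted construction such as AES-GCM from a maintained cryptography library, never anything hand-rolled, and all names here are hypothetical.

```python
import hashlib
import hmac
import secrets

# Toy client-side encryption sketch: data is encrypted locally before it
# ever leaves the client's machine, so the provider stores only ciphertext.
# WARNING: hand-rolled stream cipher for illustration only. Use a vetted
# cryptography library (e.g. AES-GCM) in any real backup pipeline.

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from HMAC-SHA256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)       # fresh nonce per backup
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

key = secrets.token_bytes(32)             # the client's secret credential
backup = encrypt(key, b"quarterly financials")
assert decrypt(key, backup) == b"quarterly financials"
```

The property that matters is visible in the last two lines: only the holder of the client-side key can recover the plaintext, so the provider (and anyone who breaches it) sees only ciphertext.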

It’s extremely rare that a company would have the in-house resources and expertise to implement a backup and disaster recovery process that’s as secure as those available from a full-service data protection and disaster recovery provider.

And even if they could implement such a process, the cost would be significantly higher.

When someone brings up the Bank Fallacy, they are essentially arguing that storing your life savings in a bank is more dangerous than keeping the cash under your mattress.

They’re also arguing that, if EVERYONE kept their cash under a mattress at home, this would act as a deterrent to crime.

As we all know, this is completely untrue. It’s untrue when it comes to money, and it’s also untrue when it comes to data protection. If you’re looking for complete protection that provides you with total peace of mind, the cloud is simply your most secure and reliable option.

Smart cities are where infrastructure and technology converge.

By 2050, two-thirds of the world’s population is expected to live in cities. To limit the strain on resources and infrastructure, cities around the world are taking steps to live not just better, but smarter.

Storagepipe is at the forefront of these innovations, offering comprehensive data protection for businesses and white-label cloud solutions for resellers.

In our latest infographic, “The Internet of Things and Smart Cities,” we explore how cities from around the world are using technology to decrease congestion, reduce energy waste, improve quality of life, and enhance security.

What are Smart Cities?

Smart cities integrate connected devices into a city’s infrastructure, such as sensors embedded in paved roads, street lamps and garbage bins. The Internet of Things refers to these connected devices sending data to the cloud.

Major cities around the world are tackling a number of infrastructure problems with this latest technology:

  1. Traffic Congestion and Parking:
    In Barcelona, drivers use an app to find available parking, limiting city congestion. The app gathers information from sensors embedded in parking lots.
  2. Energy Waste and Fuel Emissions:
    San Diego replaced 3,000 streetlights with smart lights, saving the city $250,000/year. Smart traffic and street lights reduce fuel emissions.
  3. Lifestyle and Convenience:
    In 2015, Barcelona was named the World’s Smartest City, thanks in part to its free city-wide public Wi-Fi.
  4. Safety and Security:
    England is currently testing streetlamps that brighten and capture video footage in response to the sound of banging and shouting. AI, heat maps, and police cruiser cameras all assist police work.

The World’s Smartest City

Songdo, South Korea was built as a smart city from the ground up. The city took 10 years and $35 billion to develop. Sensors embedded in the streets monitor temperature, traffic, and road conditions, as well as fire, water levels, and water quantity.

Video conferencing is readily available in offices, hospitals, shopping centres, and homes. What’s more, all garbage is routed to waste collection through underground pipes, and smart water pipes prevent the waste of drinkable water.

Check out the infographic below to learn more about the Internet of Things and Smart Cities.


In 1999, NASA invested over $300 million in the Mars Climate Orbiter project. The spacecraft crashed into Mars and disintegrated because one engineering team had been working in imperial units while another assumed metric. How is it that some of the most intelligent people in the world, working on such an important project, could make such an obvious mistake?

Imagine the following situation. You’re downloading a new app onto your smartphone, and it requires confirmation of certain basic permissions in order to install.

  • Access to your microphone? Yes
  • Access to your camera? Yes
  • Access to your Bluetooth? Yes
  • Access to your personal contacts? Yes. No wait! I don’t want to…

But it’s too late. You’ve just given the application permission to send spam to all of your personal contacts, hurting your personal reputation in the process.

This is a classic example of a mode error.

Mode errors occur when someone performs a habitual action in the wrong context, with negative consequences: they do what they always do, even though the situation calls for something different.

Every day, we read news stories of people whose bank accounts were wiped out because they misplaced a decimal point on a check. Criminals also exploit mode errors to collect on fake invoices for services they never provided.

If a mode error works its way into your backup process, the results for your company can be catastrophic. You may configure certain settings, thinking that you’re doing something completely different. And these errors might go undiscovered until you need to perform an emergency recovery.

At Storagepipe, we’ve seen the following examples of mode errors lead to real-world data disasters:

  • It’s common for clients to back up their laptops, but neglect their Outlook PST files.
  • Because it’s easy to create new virtual machines, they are sometimes forgotten and omitted from the backup process.
  • A company might be performing the same backup process for years without testing. And when it comes time to recover, they find that their backups are either empty or completely unusable.

Although automation is great for eliminating human error from your backup process, it can also amplify the damage caused by mode errors. And in an automated process, these mistakes can go undetected for years.

There are many factors that can lead to mode errors, such as distractions, unfamiliarity, multi-tasking, complexity and lack of oversight.

Specialization can help eliminate many instances of mode errors. If you have a team or individual that only handles a narrow set of duties, they will be familiar enough to anticipate and avoid the most common types of mode errors. And because they have a narrow scope of responsibility, they also avoid the complexity and distractions that can often lead to such mistakes.

Also, there should be some sort of monitoring and review process that proactively looks for potential errors that might creep into the process. This is where backup testing can be of great help.
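A basic form of such backup testing can be sketched as a checksum audit: hash every file in the source tree and compare against the restored copy, so an empty or corrupted backup is caught before an emergency. The file names and paths below are illustrative:

```python
import hashlib
import tempfile
from pathlib import Path

# Toy backup audit: compare SHA-256 digests of the source tree against
# the restored copy. Any missing or altered file is reported, catching
# "empty backup" mode errors before a real emergency does.

def tree_hashes(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def audit_backup(source: Path, restored: Path) -> list:
    """Return the files that are missing or differ in the restored copy."""
    src, dst = tree_hashes(source), tree_hashes(restored)
    return sorted(str(p) for p in src if dst.get(p) != src[p])

# Demo with temporary directories standing in for real volumes.
with tempfile.TemporaryDirectory() as s, tempfile.TemporaryDirectory() as r:
    (Path(s) / "db.sql").write_text("CREATE TABLE customers;")
    (Path(r) / "db.sql").write_text("")   # simulated empty backup
    print(audit_backup(Path(s), Path(r)))  # reports the corrupted file
```

Run on a schedule against a trial restore, an audit like this turns a silent mode error into a routine ticket instead of a recovery-day surprise.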

Mode errors are one of the reasons why many companies will outsource their backup and disaster recovery process to a team of highly specialized data protection and business continuity experts. These specialists are trained, experienced, and focused on just one objective: ensuring that you can recover quickly and consistently from the worst possible IT disaster.

Outsourcing your system protection can help you minimize the possibility of mode errors, and provide you with total peace of mind.

Do you have any questions or ideas for future videos? Please leave them in the comments section below. And if you enjoyed this video, please like and subscribe.