THREAT! The virus scanner pings
Your system needs some updating
Yes it has been quite a while
Your OS is out of style
Careful with your documents
It’s just good ol’ common sense
Always backup your hard drive
And any threat you will survive
THREAT! The virus scanner pings
Cryptocurrency mining

Using storage in the cloud
Covered by encryption’s shroud
Passwords just a few are told
Keep them fresh and not too old
Always check for system bugs
and loading times that move like slugs
Fear not if your screen goes blue
Storagepipe’s looking out for you
THREAT! The virus scanner pings
For botnets, you’ll be spamming

Anti-social social engineers
Have CIOs cowering in fear
Trusted users compromised
Packet sniffers, prying eyes
Don’t bring down your company
by shrugging off security
terms, conditions we say “YES”
though not read, we must confess
THREAT! The virus scanner finds
Storagepipe for peace of mind

The term “Black Swan” dates back to the first century A.D. Originally, it was used to describe an event that was exceptionally improbable.

For example:

  • “I’ll see a Black Swan before you ever beat me at chess.”
  • “An honest politician is like a Black Swan.”

And then, in 1697, something incredible happened. The Dutch explorer Willem de Vlamingh travelled to Australia, where he discovered an actual species of black swan!

Since then, the meaning of the expression has shifted to describe a type of logical fallacy: the fact that you’ve never seen something before is no indication that it won’t occur in the future.

Likewise, the fact that you’ve planned for every likely disaster recovery scenario is no indication that you’ve prepared for every possible threat. In fact, the most serious data breaches and data loss scenarios are often the result of unanticipated events or perfect storms.

So how do you prepare for Black Swans in your data protection strategy?

One way is through specialization. Delegate all of your backup, information compliance and disaster recovery tasks to a team of dedicated specialists with years of experience in this field. Ensure these experts have the best training, the best infrastructure, and the best tools at their disposal.

As specialists, they learn from every disaster recovery incident they encounter, becoming better prepared with each one. These experts have seen it all, and nothing surprises them.

Although there is no way to guarantee that you’ve anticipated every possible Black Swan disaster, putting your recovery plan in the hands of a specialist like Storagepipe is the best way to get peace of mind for your data protection.

In September of 2015, Amazon experienced a major downtime incident that knocked out availability at many leading cloud services, as well as Amazon’s own Echo smart speaker.

Of all the companies affected by this outage, none seem to have fared better than Netflix. Despite the major disruption, which knocked out more than 20 critical AWS services, Netflix was quickly able to restore full streaming video service to the 50 million homes that depend on the service.

When asked, Netflix attributed this amazing feat to what they describe as “Chaos Engineering”. Netflix has developed a suite of software tools called the Simian Army, which acts as a kind of benevolent malware living perpetually within the company’s infrastructure.

This suite of tools is constantly triggering events such as:

  • Randomly disabling production servers
  • Introducing latency into client-server communications
  • Simulating outages of entire AWS data centers

As the name would suggest, it’s like letting a pack of rabid monkeys loose in your datacenter.

For most AWS clients, the outage was a major disruption. But for Netflix, this was just another routine battle against the Simian Army.
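Netflix’s real Simian Army operates at the infrastructure level, doing things like terminating live EC2 instances. As a rough in-process illustration of the same principle, here’s a minimal Python sketch of a fault-injecting wrapper; the decorator, failure rate and service call are all hypothetical, not Netflix’s actual code.

```python
import random
import time

def chaos(failure_rate=0.05, max_latency=2.0):
    """Randomly inject latency or failure into a call, in the spirit of
    the Simian Army's Latency Monkey and Chaos Monkey."""
    def wrap(fn):
        def inner(*args, **kwargs):
            time.sleep(random.uniform(0, max_latency))        # simulated latency
            if random.random() < failure_rate:
                raise ConnectionError("chaos monkey struck")  # simulated outage
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.10)
def fetch_catalog():
    # Stand-in for a real downstream service call.
    return {"titles": ["example"]}
```

Surviving this kind of constant, randomized abuse in testing is what forces timeouts, retries and failovers to be built in long before a real outage strikes.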

When was the last time you experienced a major unplanned downtime event, server outage or data loss incident? The typical small business might experience one every year or two.

But at Storagepipe, disaster recovery is all we do. Clients outsource their nightmares to us, and we provide consistent, reliable recovery from any emergency you can possibly imagine.

  • Ransomware?
  • Natural disasters?
  • Employee sabotage?

We’ve seen it all. And it’s all we do. All day, every day.

It’s only a matter of time until your company gets hit by a major IT emergency. When that time comes, what would you rather do?

  • Improvise a do-it-yourself recovery plan?
  • Or get the most experienced and qualified people to save the day?

Because we live in a world of IT chaos, Storagepipe has the experience, knowledge, infrastructure and tools to ensure total peace of mind for your data protection.

For over a decade, cloud computing has been one of the most talked-about trends in IT. Compared to traditional infrastructure, leveraging the cloud has been consistently shown to offer overwhelmingly positive business benefits.

And despite a long track record of success, some IT administrators are still reluctant to trust the cloud when it comes to data protection.

One of the most common anti-cloud arguments you might hear is what’s often referred to as “The Bank Fallacy”.

The Bank Fallacy says that robbers target banks, because that’s where all the money is. This logic seems to suggest that the risk of theft goes up when you consolidate your assets.

Instead of going after hundreds of small targets, a thief would supposedly prefer to go after a single massive heist.

In other words, by moving your backups to the cloud, you would actually be creating an incentive for a data breach. There are a number of serious problems with this argument.

Let’s look at the example of a typical self-managed manual backup process.

Every day, the IT administrator would make a single unencrypted incremental backup copy to tape. In many cases, these tapes would never be taken off-site.

But if they are, these unencrypted tapes would simply be mailed to another unsecured office, where they would probably be kept in a closet or a cardboard box.

Compare this to a typical cloud backup provider.

First, the data is encrypted locally from the client’s machine. Once encrypted, this data cannot be decrypted without the client’s secret credentials.

From here, the data is transmitted over a secure SSL connection, into a state-of-the-art datacenter. In addition to the most robust technological safeguards, the building is also physically secure.

Once in storage, the data is protected by 24/7 security guards, video cameras, and network security experts.

As an added precaution, a second redundant copy of the encrypted data is also created, in case the first copy is somehow destroyed.
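The exact scheme varies by provider (often AES with a key derived from the client’s passphrase), but the core property described above, local encryption that can’t be undone without the client’s secret, can be illustrated with a minimal Python sketch using the cryptography library’s Fernet recipe. The filename and data below are made up.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated and kept on the client side only;
# the provider never sees anything but ciphertext.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"contents of quarterly-financials.db")
# ...this ciphertext is what travels over the SSL connection
# and is stored (twice) in the provider's datacenter...

plaintext = f.decrypt(ciphertext)  # only possible with the client's key
```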

It’s extremely rare that a company would have the in-house resources and expertise to implement a backup and disaster recovery process that’s as secure as those available from a full-service data protection and disaster recovery provider.

And even if they could implement such a process, the cost would be significantly higher.

When someone brings up the Bank Fallacy, they are essentially arguing that storing your life savings in a bank is more dangerous than keeping the cash under your mattress.

They’re also arguing that if EVERYONE kept their cash under a mattress at home, this would act as a deterrent to crime.

As we all know, this is completely untrue. It’s untrue when it comes to money, and it’s also untrue when it comes to data protection. If you’re looking for complete protection that provides you with total peace of mind, the cloud is simply your most secure and reliable option.

Smart cities sit at the intersection of infrastructure and technology.

By 2050, 2/3 of the world’s population is expected to live in cities. To limit the strain on resources and infrastructure, cities around the world are taking the necessary steps to live not only better, but smarter.

Storagepipe is at the forefront of these innovations – offering comprehensive data protection for businesses and white-label cloud solutions for resellers.

In our latest infographic, “The Internet of Things and Smart Cities,” we explore how cities from around the world are using technology to decrease congestion, reduce energy waste, improve quality of life, and enhance security.

What are Smart Cities?

Smart cities integrate connected devices into the city’s infrastructure, such as sensors embedded in paved roads, street lamps and garbage bins. The Internet of Things refers to these connected devices sending data to the Cloud.
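At its simplest, each of those devices periodically reports a reading to a cloud endpoint. The Python sketch below illustrates the idea; the endpoint URL and sensor fields are hypothetical, and real deployments use a variety of transports (MQTT, LoRaWAN, cellular).

```python
import json
import urllib.request

INGEST_URL = "https://city-cloud.example/api/readings"  # hypothetical endpoint

# A single reading from a hypothetical street-lamp sensor.
reading = {"sensor_id": "lamp-042", "type": "streetlight", "lux": 12.5}

req = urllib.request.Request(
    INGEST_URL,
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # device-to-cloud: the essence of the IoT
```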

Major cities around the world are tackling a number of infrastructure problems with this latest technology:

  1. Traffic Congestion and Parking:
    In Barcelona, drivers use an app to find available parking, limiting city congestion. The app gathers information from sensors embedded in parking lots.
  2. Energy Waste and Fuel Emissions:
    San Diego replaced 3,000 streetlights with smart lights, saving the city $250,000/year. Smart traffic and street lights reduce fuel emissions.
  3. Lifestyle and Convenience:
    In 2015, Barcelona was named the World’s Smartest City, thanks in part to its free city-wide public Wi-Fi.
  4. Safety and Security:
    England is currently testing streetlamps that brighten and capture video footage in response to sounds like banging and shouting. AI, heat maps, and police cruiser cameras all assist police work.

The World’s Smartest City

Songdo, South Korea was built as a smart city from the ground up. The city took 10 years and $35 billion to develop. Sensors are embedded in streets to monitor temperature, traffic, and road conditions, as well as fire, water levels, and water quantity.

Video conferencing is readily available in offices, hospitals, shopping centres, and homes. What’s more, all garbage is routed directly to waste collection through underground pipes, and water pipes prevent the waste of drinkable water.

Check out the below infographic to learn more about the Internet of Things and Smart Cities.

Transcript:

In 1999, NASA invested over $300 million in the Mars Climate Orbiter project. The spacecraft crashed into Mars and disintegrated because one engineering team had supplied measurements in imperial units while another expected metric. How is it that some of the most intelligent people in the world, working on such an important project, could make such an obvious mistake?

Imagine the following situation. You’re downloading a new app onto your smartphone, and it requires confirmation of certain basic permissions in order to install.

  • Access to your microphone? Yes
  • Access to your camera? Yes
  • Access to your Bluetooth? Yes
  • Access to your personal contacts? Yes. No wait! I don’t want to…

But it’s too late. You’ve just given the application permission to send spam to all your personal contacts, and hurt your personal reputation in the process.

This is a classic example of a mode error.

Mode errors occur when someone acts out of habit in a context where that habit no longer applies, resulting in negative consequences.

Every day, we read news stories of people whose bank accounts were wiped out because they misplaced a decimal point on a check. Criminals also leverage mode errors to collect on fake invoices for services they never provided.

If a mode error works its way into your backup process, the results for your company can be catastrophic. You may configure certain settings, thinking that you’re doing something completely different. And these errors might go undiscovered until you need to perform an emergency recovery.

At Storagepipe, we’ve seen the following examples of mode errors lead to real-world data disasters:

  • It’s common for clients to back up their laptops, but neglect their Outlook PST files.
  • Because it’s easy to create new virtual machines, they are sometimes forgotten and omitted from the backup process.
  • A company might be performing the same backup process for years without testing. And when it comes time to recover, they find that their backups are either empty or completely unusable.

Although automation is great for eliminating human error from your backup process, it can also amplify the damage caused by mode errors. And in an automated process, these mistakes can go undetected for years.

There are many factors that can lead to mode errors, such as distractions, unfamiliarity, multi-tasking, complexity and lack of oversight.

Specialization can help eliminate many instances of mode errors. If you have a team or individual that only handles a narrow set of duties, they will be familiar enough to anticipate and avoid the most common types of mode errors. And because they have a narrow scope of responsibility, they also avoid the complexity and distractions that can often lead to such mistakes.

Also, there should be some sort of monitoring and review process that proactively looks for potential errors that might creep into the process. This is where backup testing can be of great help.
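Real backup testing means actually restoring systems and confirming they work, but even a crude automated check can catch the “empty backup” mode error described above. Here’s a minimal Python sketch, with hypothetical file paths:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so source and backup can be compared."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(original: Path, backup: Path) -> bool:
    """A crude restore test: the backup must exist, be non-empty,
    and match the source byte-for-byte."""
    if not backup.exists() or backup.stat().st_size == 0:
        return False  # catches the "backups are empty" failure mode
    return sha256(original) == sha256(backup)

# Hypothetical paths; run as part of a scheduled review process.
assert verify_backup(Path("/data/payroll.db"), Path("/backups/payroll.db"))
```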

Mode errors are one of the reasons why many companies will outsource their backup and disaster recovery process to a team of highly specialized data protection and business continuity experts. These specialists are trained, experienced, and focused on just one objective: ensuring that you can recover quickly and consistently from the worst possible IT disaster.

Outsourcing your system protection can help you minimize the possibility of mode errors, and provide you with total peace of mind.

Do you have any questions or ideas for future videos? Please leave them in the comments section below. And if you enjoyed this video, please like and subscribe.

Transcript:

What if there was an easily-preventable safety problem that was causing death and destruction all around you? Would you take immediate action to fix it?

In 1999, the US Institute of Medicine released a shocking study revealing that as many as 98,000 people died every year from preventable medical errors.

  • That’s the equivalent of about one fully loaded jumbo jet crashing every single day.
  • At this rate, a population the size of Boston’s could be wiped out every 6 years.

Just like other humans, doctors make mistakes. And because their time is valuable, they must make lots of potentially life-or-death decisions, in a fast-paced work environment that’s full of multi-tasking and distractions.

This is why the US Institute of Medicine’s “To Err Is Human” report recommended computerized automation as one of the simplest and most effective ways to drastically cut down on the number of preventable patient deaths.

Although computerized automation tools had existed for a long time before this report, many doctors still preferred their traditional paper-based systems. They simply saw no urgency in changing the way they’ve always worked. And as a result, thousands of people were negatively affected.

The IT industry is undergoing a very similar crisis, when it comes to data protection. Although human error and physical media failure are the 2 most easily-preventable causes of critical data loss, they still account for 82% of all incidents. And as a result, 45% of all SMB IT executives have reported experiencing critical data loss.

And the biggest reasons for this are orthodoxy and tradition. Just like with doctors, IT time is valuable. IT administrators make lots of important decisions, in a fast-paced work environment that’s full of multi-tasking and distractions.

Most IT administrators prefer the comfort and control of manually managing their own backup systems, despite the fact that automation and specialized delegation clearly offer better overall security.

When it comes to data protection and business continuity, the stakes are high.

If a doctor makes a medical mistake, it can hurt one patient. But if an IT administrator makes a backup mistake, it can bring down large companies and destroy the livelihoods of thousands of people.

If left untreated, errors in your business continuity and data protection can become a ticking time bomb within your company. Preventable data loss needs to be addressed immediately, before it’s too late.

First, make sure that all work related to data protection, compliance and business continuity is delegated to a trusted, well-trained specialist who does this kind of work exclusively, without any distractions. This will ensure that you’re prepared and well-rehearsed for any emergency scenario.

Second, make sure that this individual has all of the best-of-breed automation tools, backed by the best infrastructure and equipment, so that they can confidently automate every aspect of your data protection, compliance and business continuity.

Through specialization, automation, and redundancy, you can eliminate the leading causes of preventable data loss from your IT management, while also freeing up time for other work. And hopefully, this will give you total peace of mind for your data protection.

Transcript:

For midsized companies of around 1,000 employees, the average cost of unplanned downtime ranges from $1,200 to $3,600 per hour, depending on the industry. And the average midsized company experiences 16–20 business hours of downtime per year.

As you can see, there is a significant financial motivation for companies to reduce or eliminate unplanned downtime. Despite this, companies often lack the IT budgets to implement the measures necessary to appropriately deal with this problem.

One of the simplest and most cost-effective ways of dealing with this challenge is through the use of cloud-enabled backup and disaster recovery appliances.

Cloud appliances are becoming very popular amongst IT administrators when it comes to the protection of both physical and virtual servers.

Compared to in-house solutions, they are easier to manage, require no capital investment, eliminate the need for tape, and usually offer a lower Total Cost of Ownership. But more importantly, cloud backup appliances also offer a number of important recovery process advantages, when compared to conventional off-site backup methods. Before going into these advantages, let’s take a moment and examine – at a very high level – how cloud backup DR appliances work.

The cloud backup provider hosts a backup appliance in the client’s datacenter. All backups are first performed locally to the backup appliance.

The appliance then uploads this data to a remote datacenter.

At minimum, 2 copies of the backup data are preserved. One copy is kept locally for fast recovery, and the other is kept off-site for major disasters.

This is what’s often referred to as a “Disk to Disk to Cloud” or D2D2C backup.
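The pattern is simple enough to capture in a few lines. Here’s a minimal Python sketch of the D2D2C flow; the paths are hypothetical, and the cloud_upload callable stands in for whatever transfer mechanism the provider actually uses.

```python
import shutil
from pathlib import Path

def d2d2c_backup(source: Path, appliance_dir: Path, cloud_upload) -> None:
    """Disk to Disk to Cloud: back up locally first, then replicate off-site."""
    local_copy = appliance_dir / source.name
    shutil.copy2(source, local_copy)  # disk -> disk: fast local restore point
    cloud_upload(local_copy)          # disk -> cloud: off-site copy for disasters

# Hypothetical usage; a real provider's agent would supply the uploader.
d2d2c_backup(Path("/data/orders.db"), Path("/mnt/appliance"), cloud_upload=print)
```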

Speed of recovery is the main advantage of having an on-site copy of the data, since this eliminates the need to download large volumes of data over an Internet connection.

Most common data recoveries can easily be performed from the appliance. The off-site copies are usually only used in worst-case scenarios where restoring to the primary location would not be ideal or even possible.

In the event of a full server failure, a local cloud appliance can be used to quickly load a recent system image snapshot to perform a bare metal recovery to a local server. For large data volumes, this is faster and more convenient than downloading over the Internet.

If the local server is destroyed, the local appliance could also recover the system image to a virtual machine or another device.

Of course, when it comes to unplanned downtime, recovery time is critical. If the primary file server crashes, all productivity in the company can come to a complete stop.

Since a cloud backup appliance is already installed and configured for your internal network, this device can immediately be mounted as a temporary storage server. Within a few minutes, all users will get access to their files directly from the appliance, until the production server can be brought back online.

And in the event that a more complex system – such as a database or email server – experiences a major failure, a system image can be hosted directly on the appliance until you’re ready to bring your production servers back.

Finally, there might be some scenarios, such as major natural disasters, where rebuilding locally simply isn’t an option. In these cases, the servers can be temporarily hosted as virtual machines in the cloud provider’s “warm site” environment until a new primary production site can be made available.

Not only do cloud DR appliances offer fast recovery times, but they also provide the flexibility to adapt to whatever your emergency recovery requirements may be. And as mentioned earlier, the use of cloud-enabled offsite backup appliances offers many other benefits that have nothing to do with disaster recovery.

But if unplanned downtime of your critical business systems has the potential to seriously impact your business, a cloud backup and DR appliance can provide a lot of value.

Transcript:

Implementing an effective DRaaS solution is not a simple matter. Your provider will act as a trusted partner, ensuring not just that you have the most appropriate DR plan, but also that this plan is managed, maintained, monitored, adapted and scaled properly for years to come.

There are a few key points that you need to consider when choosing a DRaaS provider:

  • Location: Where will your data reside? For compliance and regulatory reasons, it is commonly required for data to be kept in your own country. Also, many companies will often request both an on-site and an off-site recovery location. Your DRaaS provider should be able to accommodate this without requiring significant hardware investments.
  • Performance: What Recovery Time and Recovery Point Objectives are outlined in your provider’s Service Level Agreements? Do they have a track record of meeting their SLAs? (A simple RPO check is sketched after this list.) Also consider their responsiveness to customer questions and issues. If disaster strikes, are they available 24/7 to support you?
  • Testing: Does the provider monitor your backups for consistency? Do they make it easy to perform periodic disaster recovery and failover testing? Do they offer backup verification? Backup testing is the most effective way to ensure your data and systems can be recovered consistently in an emergency.
  • Specialized Experience: Make sure that your service provider is not just a generalist cloud service provider looking to get into the DRaaS business. You want them to have a proven track record in the space. You should also ensure that their support staff are well-trained and have all of the appropriate industry certifications.
  • Pricing: What is their pricing structure? While it may appear low at first, some companies charge exorbitant fees to access your data when you need it most. And others have pricing tiers that can become problematic as you scale. Make sure pricing is completely transparent before agreeing to a contract, and ensure that costs won’t fluctuate unexpectedly as your requirements change in the future.
  • Breadth of Capabilities: Whether you need to protect laptops, file servers, hypervisors or cloud applications, your DRaaS provider should offer a broad spectrum of services to cover all of your needs through a single provider. And as your needs change in the future, your DRaaS provider should adapt around your growing requirements.
  • Levels of Service: Different systems require different levels of availability and require different recovery features. Whether you want simple file recovery from a local appliance or near-instant off-site server failover to a remote facility, your provider should offer multiple recovery options in order to keep costs manageable.
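As noted under Performance above, checking whether a provider is actually meeting its Recovery Point Objective comes down to simple arithmetic: the age of the newest usable recovery point must never exceed the RPO. A minimal Python sketch, with made-up timestamps:

```python
from datetime import datetime, timedelta, timezone

def meets_rpo(latest_recovery_point: datetime, rpo: timedelta) -> bool:
    """True if the newest recovery point is recent enough to satisfy the RPO."""
    return datetime.now(timezone.utc) - latest_recovery_point <= rpo

# Example: a 4-hour RPO, checked against a backup taken 90 minutes ago.
last_backup = datetime.now(timezone.utc) - timedelta(minutes=90)
print(meets_rpo(last_backup, rpo=timedelta(hours=4)))  # True
```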

By selecting the right cloud DRaaS provider, you can ensure total peace of mind in the face of any data disaster.

Do you have any questions or ideas for future videos? Please leave them in the comments section below. And if you enjoyed this video, please like and subscribe.