
Saturday, 4 January 2014

Inline Deduplication

Inline deduplication sounds like a very impressive term. You are led to believe that the magic happens on the wire, but at the end of the day there are many caveats.
We recently saw a case where it was sized on assumptions, with commitments of reducing the backup & recovery windows tremendously, yet the windows did not come down anywhere near what was expected.
Inline deduplication starts working on the source server itself and is followed by further processing on the network; the leftovers are handled in the media device’s memory. So if you have plenty of spare resources on the production servers, go for it. If you are low on resources, you should first upgrade the production servers, and you are also expected to have at least two 10 Gig ports dedicated to the deduplication device and a well-sized media server.
Simply replacing the backup device and expecting wonders will not help much. It will reduce the backup & recovery windows somewhat, especially if you move from tape to disk while adopting deduplication. However, you should be extremely careful about your upgrade plans and the expectations you set for yourself.

Monday, 30 December 2013

Source Based Deduplication


Source based deduplication picks out the unique content on the source itself when you start a backup. It does use some processing power and memory on the source system, so size it well.
Source based deduplication is also very effective at keeping network bandwidth usage to a minimum during backups. The backup application creates blocks of data on the source, stores their hashes there, and sends only the unique data over the network. This works well for backups, but only if it is sized properly: the catalog created by some applications is large enough to hurt the performance of the source system, which may well be a production system.
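As a rough sketch of that flow in Python: the 4 MB fixed-size blocks and the in-memory set standing in for the agent’s catalog are assumptions; real products typically use variable-size chunking and a persistent index.

import hashlib

CHUNK_SIZE = 4 * 1024 * 1024      # assumed 4 MB fixed-size blocks
seen_hashes = set()               # stands in for the source-side hash catalog

def backup_file(path, send):
    """Hash each block at the source and ship only blocks not seen before."""
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest not in seen_hashes:
                seen_hashes.add(digest)
                send(digest, block)      # unique data crosses the network
            else:
                send(digest, None)       # duplicates send only the reference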
Source based deduplication also gives good results for file system backups. A traditional approach takes a long time on a file system with millions of small files; a full backup cycle can run for days. Source based deduplication, in that case, picks up only the changed content of the changed files, reducing the amount of data travelling over the network irrespective of the backup level set.
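A minimal way to picture the “only changed files” part, again in Python; a real backup agent would track this in its catalog rather than relying on modification times alone.

import os

def changed_files(root, last_backup_ts):
    """Yield files modified after the previous backup's timestamp."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_ts:
                yield path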
Global deduplication on the target further reduces the amount of data stored.

Saturday, 14 December 2013

Protecting Small Databases


A lot of SMEs are concerned about protecting their databases, typically SQL databases. The interesting challenge is that these are really small databases holding extremely critical data.

While a standalone tape based backup solution would traditionally be considered ideal, it is not so simple. A 50-60 GB database normally compresses down to 10-15 GB, which does not justify that kind of investment in tapes, each of which is capable of holding terabytes. Each cartridge ends up holding far too little data and therefore costs more per GB.
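A quick back-of-the-envelope calculation makes the point; the cartridge capacity and price below are assumptions, chosen only to reflect a “terabytes per tape” class of cartridge.

tape_capacity_gb = 2500      # assumed terabyte-class cartridge (native capacity)
tape_cost = 30.0             # assumed price per cartridge, drives not counted
backup_size_gb = 15          # a 50-60 GB database compressed to 10-15 GB

cost_per_gb_if_full = tape_cost / tape_capacity_gb   # ~0.01 per GB
cost_per_gb_actual = tape_cost / backup_size_gb      # ~2.00 per GB
print(f"Under-filled tape costs {cost_per_gb_actual / cost_per_gb_if_full:.0f}x more per GB")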

With the changing times, better options are available depending on what you want to achieve:

A simple backup can maintain multiple versions and copies, giving you both old and new recoveries when required. While this gives you the flexibility of versioning, the recovery process will take some time depending on the resources available.

Alternatively, if you are looking for quick access to your data even after a disaster, replication, and especially mirroring, is the best option. To keep it economical, you can use the database’s native mirroring capabilities rather than investing in third party tools.

Ace Data Abhraya offers both: cloud based backup for option 1 and cloud based infrastructure for option 2 with a committed recovery SLA. In fact, for SQL databases you can opt for recovery on cloud infrastructure, with the option of recovering only the database or the complete server in the cloud, while enjoying the flexibility of versioning and compliance and paying only for the backup capacity you actually use.

Monday, 2 December 2013

How to Back Up Big Data?


The industry has been struggling for a long time with backing up the data it keeps accumulating. Traditional tape based backup solutions now seem suitable only for small environments. Though tapes have been growing in individual capacity and libraries in number of slots, better disk based options are pushing them towards being secondary rather than primary backup media.
Big data needs better care anyway, both for being big and for often being more meaningful than databases of invoice or product records. New technologies like deduplication and better compression algorithms like LZO and ZLIB are making it more cost effective to back it up by bringing down its size.
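ZLIB, for instance, is available in the Python standard library, and a few lines show the kind of size reduction compression gives on repetitive data; the sample records are made up, and ratios on real data vary widely.

import zlib

# Made-up, highly repetitive sample records; real-world ratios will differ.
data = b"2013-12-01,INV-1001,ACME Corp,1200.00\n" * 10000
compressed = zlib.compress(data, 9)
print(len(data), "->", len(compressed), "bytes",
      f"({len(data) / len(compressed):.0f}x smaller)")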
What is also important is the cost of retaining this large volume of data and the varied sources of this unstructured data.
Ace Data’s Abhraya Cloud based backup offering resolves this challenge for its customers. Its flexible backup policies allow organizations to keep the latest data close to them locally and send the rest to the cloud. Being cloud based, they pay for what they back up rather than investing on large growth assumptions. Furthermore, as backups age they can be automatically archived to low cost disks, reducing the cost of long term retention while keeping the data available for a long time.
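A sketch of an age-based tiering policy of the kind described above; the thresholds and tier names are assumptions for illustration, not Abhraya’s actual configuration.

from datetime import datetime, timedelta

def tier_for(backup_date, now):
    """Pick a storage tier for a restore point based on its age."""
    age = now - backup_date
    if age <= timedelta(days=7):
        return "local-disk"        # latest data stays close for fast restores
    if age <= timedelta(days=90):
        return "cloud-standard"    # pay only for what is actually backed up
    return "cloud-archive"         # low-cost disks for long-term retention

print(tier_for(datetime(2013, 1, 15), now=datetime(2013, 12, 2)))  # -> cloud-archive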
The solution can back up smartphones, mobile laptops and large volumes of file servers, apart from the large servers and databases, ensuring that all sources of data are protected through a single solution.

Wednesday, 13 November 2013

How to Store and Manage BIG Data?


While I mentioned in my previous blog that data of any size is no longer a problem, I am often asked how to store and manage such huge volumes. This is a typical concern of an enterprise faced with ever-increasing data.
Storage vendors have watched this problem grow and have scaled up, or rather scaled out, to handle this massive growth. Both NAS and SAN vendors have gone beyond the traditional method of upgrading storage infrastructure by adding additional shelves and disks. The challenge with the traditional method is that you add capacity through shelves and disks with only limited gains in processing power, which ends up reducing performance.
The scale-out method upgrades the storage by adding new nodes that bring processing power, memory and capacity together, keeping overall performance consistent with practically no dip in user experience. This is true for both SAN and NAS based storage. Such systems can be expanded to PBs on a single storage system, or even a single file system, simply by plugging in a new node. It is also commercially viable, as the cost per GB goes down as you keep adding nodes.
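A toy model of why the cost per GB falls as nodes are added: the fixed cluster cost is spread over more and more capacity. All figures below are assumptions, purely for illustration.

fixed_cost = 50000               # assumed one-time cost: software, networking, management
node_cost = 20000                # assumed cost per additional node
node_capacity_gb = 100 * 1024    # assumed usable capacity per node

for nodes in (1, 2, 4, 8):
    total_cost = fixed_cost + nodes * node_cost
    total_gb = nodes * node_capacity_gb
    print(f"{nodes} node(s): {total_cost / total_gb:.3f} per GB")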
So don’t worry about handling your Big Data; storage devices are now available to store it more efficiently.

Thursday, 26 September 2013

Using Cloud for Disaster Recovery

The basic premise of disaster recovery is how you keep your business running when your primary business site meets a disaster. In my opinion, disaster recovery can be considered the IT subset of overall Business Continuity Planning, which also covers the non-IT factors of the business.
When an organization chooses to host applications on the cloud, it automatically gets the first level of disaster recovery. The application, along with all its data, is already in a remote data center, and the loss of your primary business premises does not affect access to applications and data; they can still be accessed from any other computer with an internet connection.
You can also ask your service provider for a disaster recovery site of their own. This adds to the budget but ensures your data’s safety and continuity even if their data center is in trouble.
For organizations where investment in a full online disaster recovery site is still a long way off, Ace Data Abhraya is the right low cost option. It offers a unique backup proposition: your backed up data can be used remotely in the event of a primary site disaster, while you continue to enjoy all the benefits of daily backups that are compressed and deduplicated on-premise to minimize the load on the bandwidth.
In the coming days, we will explain how using Ace Data Abhraya in different deployment scenarios helps organizations achieve disaster recovery for their critical data.