Monday 9 December 2013

Processing Big Data


While many think it is difficult to manage data beyond a certain volume, technologists don't agree. Newer technologies keep arriving to handle and process large volumes of data. Consider Google search: while you are still typing what you want to search for, it starts autocompleting your query and even showing results.
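The idea behind autocomplete is simple even if the scale is not: keep an index of terms and look up everything that shares the prefix typed so far. Here is a minimal, hypothetical sketch in Python (this is not how Google actually implements it; in practice the index would be sharded across many servers):

import bisect

# A tiny in-memory "search index"; the term list and limit are illustrative.
TERMS = sorted([
    "big data", "big data analytics", "block storage",
    "flash storage", "scale-out nas", "scale-out san",
])

def autocomplete(prefix, limit=5):
    """Return up to `limit` indexed terms that start with `prefix`."""
    start = bisect.bisect_left(TERMS, prefix)  # jump to the first candidate
    results = []
    for term in TERMS[start:]:
        if not term.startswith(prefix):        # sorted order: we can stop early
            break
        results.append(term)
        if len(results) == limit:
            break
    return results

print(autocomplete("big"))    # ['big data', 'big data analytics']
print(autocomplete("scale"))  # ['scale-out nas', 'scale-out san']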
All of this is done by clusters of servers at the back end. Incoming data is processed by these servers, so you get a large pool of processors and memory to absorb it. On top of that, the storage network behind them that reads and writes this data offers a wide range of choices.
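To illustrate the "pool of processors" idea, here is a hedged sketch that fans chunks of data out to a pool of worker processes and merges the partial results; the word-count workload and the `process` helper are my own assumptions, standing in for whatever processing the cluster actually does:

from multiprocessing import Pool
from collections import Counter

def count_words(chunk):
    """Process one chunk of data on one worker (here: count words)."""
    return Counter(chunk.split())

def process(chunks, workers=4):
    """Fan the chunks out across a pool of processes and merge the results."""
    with Pool(workers) as pool:
        partial_counts = pool.map(count_words, chunks)
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

if __name__ == "__main__":
    chunks = ["big data is big", "data keeps growing", "big storage for big data"]
    print(process(chunks).most_common(3))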
For file-based data, you can go for a scale-out NAS. If you have to handle block-level data, scale-out SAN options are available. And for really heavy databases, pure flash-based storage is now available.
Flash-based storage can achieve up to a few million IOPS, especially when inline deduplication is done in memory. Scale-out storage also ensures that adding more capacity automatically gives you more memory to handle the new I/O, and deduplication keeps the storage requirement under control, since it could otherwise make a big dent in the budget.
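Inline deduplication boils down to fingerprinting each incoming block in memory and only writing blocks that have not been seen before. The sketch below is a toy model of that idea, not any vendor's implementation; the class, fields, and sample blocks are all assumptions for illustration:

import hashlib

class DedupStore:
    """Toy inline deduplication: hash each incoming block in memory and
    store only blocks whose fingerprint has not been seen before."""

    def __init__(self):
        self.index = {}     # fingerprint -> stored block (stands in for disk)
        self.received = 0   # blocks received
        self.written = 0    # blocks actually written

    def write(self, block: bytes) -> str:
        self.received += 1
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in self.index:   # new data: write it
            self.index[fingerprint] = block
            self.written += 1
        return fingerprint                  # duplicates just return a reference

    def ratio(self) -> float:
        """Deduplication ratio: blocks received per block actually written."""
        return self.received / max(self.written, 1)

store = DedupStore()
for block in [b"block-A", b"block-B", b"block-A", b"block-A"]:
    store.write(block)
print(f"dedup ratio: {store.ratio():.1f}x")  # 4 received / 2 written = 2.0x

Because the fingerprint lookup happens in memory before anything hits the flash, duplicate blocks never consume extra capacity or write cycles, which is why the technique pairs so well with all-flash arrays.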
