Constant Thinking | Technology Thoughts for the Quality Geek | by Constantin Gonzalez
Obsolete Post

This post is probably obsolete

This blog post is more than 5 years old, and a lot has changed since then: in the meantime, I have changed employers, blogging platforms, software stacks, infrastructure, and other technologies I work with and interests I follow.
The stuff I wrote about in this article has likely changed, too, so you may find more recent information about things like Solaris, Oracle/Sun systems, Drupal, etc. elsewhere.
Still, this content is provided for your convenience, in case it is still useful for you. Just don’t expect it to be current or authoritative at this point.
Thanks for stopping by!

Checking Out the Amplidata Storage Cloud Technology


Last week during WorldHostingDays, I had the opportunity to visit Tom (@tomme), a former colleague of mine who came with Q-Layer to Sun, then to Oracle. Today, he works for a new Belgian startup called Amplidata, a company that specializes in building storage clouds. He introduced me to Wim, their CEO, and we discussed their optimized object storage technology, some parallels to ZFS, and the newest trends in cloud computing storage. Amplidata is a spin-off of Incubaid, the technology incubator responsible for the success of two good old Sun friends: Innotek (VirtualBox) and Q-Layer (the company that powered the Sun Cloud).

The Amplidata idea is simple: take the data and spread it across many storage nodes in a clever way, so that it is available in a scalable, reliable, and power-efficient manner. Cloud storage in a box, err, in multiple boxes, if you will.

The Technology

The way this works is slightly more complex, and Amplidata's secret sauce is composed of three technologies:

  • BitSpread: This is a clever new algorithm that splits a given block of data into multiple blocks that can be spread across multiple nodes. The trick is that it can reconstruct the original data from a sufficient subset of those blocks, which is smaller than the total number. Sounds weird? Let's try an example: Take a block of data and pass it through the BitSpread codec, and it will yield, say, 10 blocks. The codec is set so that 7 blocks are sufficient to reconstruct the data. Now you can take any subset of 7 out of the 10 blocks, and the codec will be able to reconstruct the original data. Kinda like RAID, but with no distinction between data and parity blocks, and with a configurable number of total and sufficient blocks.

    The more obvious advantage is that any 3 blocks (assuming we stay with 10 total blocks and 7 sufficient ones) can be missing, and we can still reconstruct the data from the remaining 7. Since blocks are spread across nodes, it's a bit like a distributed RAID algorithm. The other advantage is that when you retrieve data, you can ask all 10 nodes to send you their blocks and deliver the original data as soon as the first 7 have arrived. This helps performance when some of your nodes are more loaded than others. Or you can choose to bring down up to 30% of your storage nodes to save power and still be able to deliver your data. Neat.

    There's also a security aspect: when data is spread across many nodes, an attacker can't learn much from stealing one node's worth of data (or even a rack, if the installation is big enough). The BitSpread parameters are configurable: for each file, the user can specify the number of resulting blocks to spread across and how many of them are sufficient for reconstruction. Finally, and contrary to RAID-5 and similar algorithms, the BitSpread codec is designed to have low overhead, so it doesn't get in the way of performance.

  • BitLog: Many of the concepts behind BitLog reminded me of ZFS, although they're implemented quite differently. It combines smart caches (similar to ZFS's ZIL and L2ARC), which store written and read data locally to help performance, with a special data structure that enables snapshots, clones, I/O optimization, and other advanced data services. Again, BitLog is spread across nodes and helps the whole system provide transactional robustness and higher-level features.

  • BitDynamics: This is the management layer for the Amplidata storage nodes. It handles all the housekeeping between nodes, such as data integrity checking, scrubbing, node management, self-healing, and garbage collection.
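
BitSpread's actual codec is proprietary, but the any-7-of-10 property it promises is exactly what classic Reed-Solomon-style erasure coding delivers. Here's a minimal sketch using polynomial interpolation over the prime field GF(257); all names and choices (field size, chunking, function names) are my own illustration, not Amplidata's implementation:

```python
# Minimal k-of-n erasure coding sketch: any k of the n shares rebuild the data.
# Note: a production codec would use GF(2^8) so share values fit in a byte;
# with GF(257) a share value can be 256, so shares are kept as lists of ints.

P = 257  # prime modulus

def _lagrange(points, t):
    """Evaluate at x=t the unique polynomial through the given (x, y) points."""
    total = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (t - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

def encode(data, k, n):
    """Split data into k-byte chunks and emit n shares (one list per node)."""
    shares = {x: [] for x in range(1, n + 1)}
    for i in range(0, len(data), k):
        chunk = list(data[i:i + k])
        chunk += [0] * (k - len(chunk))        # pad the final chunk
        pts = list(enumerate(chunk, start=1))  # data byte j sits at x=j
        for x in range(1, n + 1):              # share x = polynomial value at x
            shares[x].append(_lagrange(pts, x))
    return shares

def decode(subset, k, length):
    """Rebuild the original bytes from any k shares (a dict x -> share list)."""
    pts = list(subset.items())[:k]
    out = bytearray()
    for c in range(len(pts[0][1])):            # chunk by chunk
        chunk_pts = [(x, vals[c]) for x, vals in pts]
        for t in range(1, k + 1):              # re-read the bytes at x=1..k
            out.append(_lagrange(chunk_pts, t))
    return bytes(out[:length])                 # drop padding

data = b"cloud storage, spread thin"
shares = encode(data, k=7, n=10)
seven = {x: shares[x] for x in (2, 3, 5, 6, 8, 9, 10)}  # any 7 of the 10
assert decode(seven, 7, len(data)) == data
```

With n=10 and k=7, up to 3 nodes can be down (or just slow) at retrieval time and the data still comes back, which is the distributed-RAID behavior described above.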
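BitLog's internals aren't public either, but the way a write log enables cheap snapshots can be sketched: if all writes go to an append-only log, a snapshot is nothing more than the log position at the moment it was taken. The class and names below are my own illustration:

```python
# Illustrative sketch, not Amplidata's implementation: an append-only write
# log where a snapshot is simply the log offset at the time it was taken.

class LogStore:
    def __init__(self):
        self.log = []            # append-only list of (key, value) records

    def put(self, key, value):
        self.log.append((key, value))

    def snapshot(self):
        return len(self.log)     # a snapshot is just the current log length

    def get(self, key, snap=None):
        end = len(self.log) if snap is None else snap
        for k, v in reversed(self.log[:end]):  # newest record wins
            if k == key:
                return v
        return None

store = LogStore()
store.put("block-1", "v1")
snap = store.snapshot()          # freeze the current state
store.put("block-1", "v2")
assert store.get("block-1") == "v2"        # live view sees the new write
assert store.get("block-1", snap) == "v1"  # snapshot view is unchanged
```

A real system would periodically fold the log into indexed on-disk structures (and ZFS takes a copy-on-write tree approach instead of a flat log), but the snapshot-as-position idea is the same.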
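BitDynamics' housekeeping jobs aren't publicly specified, but the basic scrubbing loop is easy to sketch: re-read every stored block, recompute its checksum, and report blocks that are corrupt or missing so they can be self-healed, e.g., re-derived from the redundant blocks on other nodes. All names here are my own:

```python
import hashlib

# Illustrative scrubbing sketch, not Amplidata's implementation.

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def scrub(store: dict, index: dict) -> list:
    """Compare each block in `store` against the checksum recorded in `index`.
    Returns the IDs of blocks that are corrupt or missing and need healing."""
    damaged = []
    for block_id, expected in index.items():
        block = store.get(block_id)
        if block is None or checksum(block) != expected:
            damaged.append(block_id)
    return damaged

# Example: one block silently corrupted, one lost entirely.
store = {"b1": b"good data", "b2": b"flipped bitz"}
index = {"b1": checksum(b"good data"),
         "b2": checksum(b"flipped bits"),
         "b3": checksum(b"lost block")}
assert scrub(store, index) == ["b2", "b3"]  # candidates for self-healing
```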

The three technologies form the foundation of the Amplidata storage system, which adds block device, iSCSI, Fibre Channel, and Amazon S3 interfaces on top. This enables customers to build storage clouds out of Amplidata's nodes and deliver services similar to both traditional SANs and cloud computing vendors.

Two Business Models

One of the main challenges for a startup is figuring out how to bring their technology to market. Amplidata has two models:

  • OEM: The Amplidata company offers their technology as an OEM solution to partners who want to build storage clouds.

  • AmpliStor: Their own product, an optimized object storage solution for petabyte-scale, unstructured data applications.

AmpliStor comes with its own hardware, using high-capacity storage enclosures, low-power drives, pre-installed software, and some extra temperature and power management intelligence to make the solution more energy-efficient. For example, AmpliStor can switch off individual drives, or even full nodes, when they're not in use, to save energy.
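
Details of AmpliStor's power management aren't public; the switch-off-when-idle behavior described above can be sketched as a simple idle-timeout policy. The names and the timeout value below are my own assumptions:

```python
# Hypothetical idle-based spin-down policy, for illustration only.

IDLE_TIMEOUT = 300.0  # seconds of inactivity before spin-down (assumed value)

class DrivePowerManager:
    def __init__(self):
        self.last_access = {}   # drive id -> timestamp of last I/O
        self.spun_down = set()

    def record_io(self, drive: str, now: float):
        self.last_access[drive] = now
        self.spun_down.discard(drive)   # any I/O wakes the drive back up

    def tick(self, now: float) -> set:
        """Spin down every drive idle for IDLE_TIMEOUT or longer; return them."""
        for drive, t in self.last_access.items():
            if now - t >= IDLE_TIMEOUT:
                self.spun_down.add(drive)
        return self.spun_down

pm = DrivePowerManager()
pm.record_io("sda", 0.0)
pm.record_io("sdb", 200.0)
assert pm.tick(350.0) == {"sda"}   # sda idle for 350 s, sdb only for 150 s
pm.record_io("sda", 360.0)         # new I/O spins sda back up
assert "sda" not in pm.spun_down
```

In a real system the interesting part is which layer decides a drive is safe to power down (BitSpread's redundancy means data stays retrievable even with some drives off), but that logic is not documented publicly.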

Uses and Applications

Amplidata and AmpliStor are different from your good old NFS server, so it's important to know where they fit and where they don't. The distributed nature of the system is a powerful concept, but it comes at the price of higher latency than direct-attached or traditional NAS storage (though I haven't seen benchmarks yet). Here are some typical applications:

  • Web 2.0 Style Media: Petabytes of photos, movies, audio, etc. are created every day by social networks and their users. This is a natural fit for the Amplidata storage model, because the data is largely unstructured (here's my data and a hash; when I show you the hash, gimme my data back) and the distributed nature makes it easy to provide reliable storage at web-style performance and affordable cost.

  • Media Archives: There's a growing space in archiving that needs to use less power than always-spinning disk storage, but is more latency-sensitive than tape storage allows. Digital media archives, for example, want to store large amounts of data, but they're not always happy with the latency of tape (think YouTube, but with minutes of waiting before the video starts). Since they still want to save power and cooling, this would be an attractive solution here.

  • Cloud Storage: Of course, if you're a service provider or a public cloud, you'll likely want to provide something similar to Amazon's S3 service. This solution comes with a pre-built S3-compatible API, too. Cloud storage in boxes.

The Verdict

Amplidata has some very innovative technology. It reminds me of a distributed kind of ZFS, though some parts are quite different. Today, they're using standard Linux nodes as the base of their storage servers. It would be cool to see their solution with ZFS and Solaris on those nodes for some extra robustness: performance and data integrity (through ZFS), reliability (through Solaris FMA and SMF), and observability (through DTrace). Maybe something for the future?

Anyway, if you're looking for an optimized object storage solution in the cloud, check them out.

More Information

The Amplidata web site is evolving quickly, so check back often. There's a small technology page, and you can request a technology paper if you want more details.

Full disclosure and disclaimer: Tom gave me a free ticket to visit WorldHostingDays in Rust, where Amplidata had a booth, though with no strings attached. Thanks, Tom! Also remember that I'm an Oracle employee, but this is solely my own, personal and independent opinion and not my employer's.

By Constantin Gonzalez , 29.03.2011, updated: 03.10.2017 in Reviews.


This is the blog of Constantin Gonzalez, a Solutions Architect at Amazon Web Services, with more than 25 years of IT experience.

The views expressed in this blog are my own and do not necessarily reflect the views of my current or previous employers.

Copyright © 2017 – Constantin Gonzalez – Some rights reserved.