Steve Modica of Small Tree Communications has strong feelings about the future of 10 Gig Ethernet shared storage…and that future is now. Small Tree has long been on the cutting edge of server-based storage, which eliminates the need for clustered solutions. As technological forces align to finally bring 10 Gig into the hands of studios, at ever more reasonable prices, Steve shares his views on the coming evolution and how Small Tree will adapt to a faster gigabit marketplace.
StudioBytes: How much real-time shared storage editing does Small Tree presently allow?
Steve Modica: It depends on the storage, of course. If you have a maxed-out server from us, it can handle approximately 12 ProRes HQ streams. I tell customers that they can usually assume that much activity on one server. And if they need more than that, we can always add more servers to the network. Everyone can see all the servers, so it’s not as if we’re limiting them in any way; we’re just breaking the work into pieces.
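
A quick back-of-envelope sketch of where a figure like that comes from. The numbers here are illustrative assumptions, not Small Tree’s: roughly 220 Mbit/s per ProRes 422 HQ stream at 1080i, and roughly 900 Mbit/s of usable payload per Gigabit link after protocol overhead.

```python
import math

# Assumed figures (illustrative, not Small Tree's):
PRORES_HQ_MBPS = 220    # ~220 Mbit/s per ProRes 422 HQ stream at 1080i
USABLE_GIGE_MBPS = 900  # usable payload on a Gigabit link after overhead

streams = 12
demand_mbps = streams * PRORES_HQ_MBPS                 # total client demand
links = math.ceil(demand_mbps / USABLE_GIGE_MBPS)      # bonded links needed

print(f"{streams} streams = {demand_mbps} Mbit/s "
      f"≈ {links} bonded Gigabit links, plus disk headroom")
# -> 12 streams = 2640 Mbit/s ≈ 3 bonded Gigabit links, plus disk headroom
```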

Have you seen a big upswing in people editing in real time on shared storage?

For us, this January is 30% over last January, and that’s largely due to people picking up on the shared storage concept. They’ve realized how inexpensive it is to do it this way. Compared to the more expensive Xsan, they don’t have to buy any licenses, Fibre Channel cards, or switches. They can take all the dollars they would have spent on infrastructure and put them into storage, so now they have more space.

When do you think everyone’s going to be able to do this?

I think when people start to buy the systems Apple’s released that use the new Intel i7 and i5 processors, they will find that they have a lot more capability than they’ve seen in the past. 10 Gigabit combined with those i7 and i5 (Nehalem) processors is going to allow the central storage server to be much larger and accessible to more people without a big upfront cost. As an example, I told you a maxed-out server could handle 12 ProRes HQ streams. Now if I replaced all the spinning disks with solid-state disks and all the Gigabit ports with 10 Gigabit ports, I would be able to feed somewhere between 5 and 10 times as much data through that system. Instead of 12 streams, we could be talking 50-100 streams. At that point it becomes very easy to deploy for almost any customer, and the complexity of Xsan becomes unnecessary. A customer will be able to connect 6 people to a server without even buying a switch.
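
Running the same sketch at 10 Gigabit shows where the 50-100 stream range comes from. Again the figures are assumptions for illustration: about 220 Mbit/s per stream and about 9,000 Mbit/s of usable payload per 10 Gig port.

```python
# Same arithmetic at 10 Gigabit: streams per port once the network
# (and, with SSDs, the storage) stops being the bottleneck.
# Assumed figures, illustrative only:
PRORES_HQ_MBPS = 220     # per-stream bitrate, Mbit/s
USABLE_10G_MBPS = 9000   # usable payload per 10 Gig port

per_port = USABLE_10G_MBPS // PRORES_HQ_MBPS
print(f"~{per_port} streams per 10 Gig port")           # ~40
print(f"~{2 * per_port} streams on a dual-port card")   # ~80
```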

We have a six-port 10Gb Ethernet card coming: a card with six independent 10Gb ports on it. Today, we sell six-port Gigabit cards and dual-port 10Gb cards.

How do you think this will affect the storage industry?

The notion of a server and a client, one machine where all the data lives and one or many clients pulling data from it, is quite an old concept. Various clustered file systems were invented because the server couldn’t really keep up with all the demands being placed on it by the clients. I think that as we get into 10 Gig, new processors, and solid-state disks that make the server that much faster, you’re going to see a return to that server-client architecture and a phase-out of clustered file systems.

Why do you think it has taken so long for 10 Gig to catch on?

10 Gigabit is very late. It should have been out on the market back in 2003.

The problem was that Intel couldn’t make processors fast enough to drive it, so everyone was out chasing different solutions to make something that could drive 10 Gig. Gigabit, the current technology, has hung on for all these extra years because of it.

It’s a plumbing problem. What do you do when your clients all have pipes the same speed as the server’s? If you can imagine a septic system, what would happen if every house had a pipe as large as the main line it drains into? The system would be overwhelmed. You need a big enough pipe. It’s the same problem with the network.
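
A toy model of that plumbing problem, with purely hypothetical numbers: when every client’s pipe is as fast as the server’s single pipe, the server link is oversubscribed in proportion to the client count.

```python
# Toy model: clients whose pipes match the server's single pipe
# oversubscribe it in proportion to how many there are.
# Hypothetical numbers, purely illustrative.
clients = 8
link_mbps = 1000    # every pipe, client and server alike, is 1 Gbit

demand = clients * link_mbps
print(f"oversubscription {demand // link_mbps}:1, "
      f"{link_mbps // clients} Mbit/s per client under full load")
# -> oversubscription 8:1, 125 Mbit/s per client under full load
```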

10 Gig needed to be here about 7 years ago and because it didn’t show up, we invented the clustered file system, etc. And now you’re going to see that change.

How’s the change happening? Are people redoing their whole infrastructure?

I would say everyone is sitting around waiting for 10 Gig to be deployed over Cat-6a. Cat-6 is essentially the cable plugged into your computer today to put it on the network. Everyone is waiting for 10 Gig to run over that kind of cable. Their buildings are already plumbed with it; all they need are computers with 10 Gig cards in them and switches with 10 Gig ports.

We’ve already got the cards and are writing the drivers, but people don’t want to deploy optical or custom copper; they want to deploy over Cat-6.

Are there other factors involved?

The gating factor is power consumption. Every network card has what’s called a PHY, a physical-layer chip that drives the wire. The PHY for a 10 Gig card on Cat-6 today needs about 6 watts to do its work, and on a single card that’s fine. The problem is a switch, where you want 24 or 48 of those parts packed together in one little strip that everyone plugs into. Multiply 6 watts by 24 or 48 and that’s a lot of watts in a very tight space. Think of a light bulb at 100 or 200 watts; it gets pretty hot. The plastic would melt.
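
The switch-density arithmetic, using the roughly 6 W per-PHY figure Steve quotes above; the rest is just multiplication.

```python
# PHY power density in a switch, using the ~6 W/port figure quoted
# above. The totals land right in "hot light bulb" territory.
PHY_WATTS = 6
for ports in (24, 48):
    print(f"{ports}-port switch: {ports * PHY_WATTS} W of PHY heat alone")
# -> 24-port switch: 144 W of PHY heat alone
# -> 48-port switch: 288 W of PHY heat alone
```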

So the last piece of the puzzle is a chip problem. They just need to respin a chip to use a new process that uses less power. Then all of a sudden you’ll see switches with 10 Gig Cat-6 ports on them and cards with Cat-6 ports.

Why didn’t people push to have the 10 Gig infrastructure sooner?

The issue was really cost. The first 10 Gig card we held in our hands was back in 2003, from Intel. It was huge, with a big steel heat sink that it needed to dissipate the heat; it was optical only and cost $4,770. Now, if you find an Apple customer who has a $2-3K Mac Pro and can be convinced to spend $5K on a card, send them my way. It’s very rare to find a customer who will spend more on a card than on the machine it’s going into.

Today, you’re seeing 10 Gig cards on the market for roughly $500 a port in copper. In the very near future, you’re going to see them go for more like $200 or $300 a port. When that happens, 10 Gig in the server space is going to take over. Gigabit will remain; it’s a great plateau, and most people don’t need more than Gigabit Ethernet. They won’t want to spend money on the switches to go 10 Gig everywhere, but 10 Gig is going to allow the server to handle way more than it can today.

What’s the next step for Small Tree?

We’ll soon have a six-port 10 Gig card, other 10 Gig cards, and Cat-6a support, so people can deploy more switches less expensively. I think the next big step for GraniteSTOR after that is going to be solid-state disk drives (SSDs).

If you’re reading a megabyte of data off a RAID today in a shared environment, it might take 20 milliseconds to get that data off. SSDs, on the other hand, are like memory, the same memory that’s in your iPod or USB stick. Imagine you broke open your jump drive, removed the chip, and put a hundred of those chips together, turning a 4 Gig jump drive into a 400 Gig one. Instead of having them all in sequence, you stuff them in parallel and plug them in as a SCSI device. Now you have a device that is amazingly fast and doesn’t care where the data lives, because it’s just as fast to access one memory location as another. Disk drives have a problem with sharing: if you and I and someone else are sharing the same disk drive, we’re making those poor heads bounce all over the place. SSDs don’t care where the data is any more than RAM does; it’s all the same speed.
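
A rough model of why that matters under sharing. The 20 ms-per-megabyte figure for a busy shared RAID is the one Steve cites above; the roughly 0.1 ms flash access time is our assumption, for illustration only.

```python
# Why shared random access punishes spinning disks but not flash.
# The 20 ms/MB shared-RAID figure is quoted above; the ~0.1 ms flash
# access time is an assumed, illustrative number.
requests = 100    # 1 MB reads scattered across the volume by several editors
raid_ms = 20.0    # per request: head seeks dominate on a shared disk
ssd_ms = 0.1      # per request: flash cost is position-independent

print(f"shared RAID: {requests * raid_ms:,.0f} ms for {requests} MB")
print(f"SSD:         {requests * ssd_ms:,.0f} ms for {requests} MB")
# -> shared RAID: 2,000 ms vs SSD: 10 ms for the same random workload
```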

Instead of taking 2-5 minutes to start up, a machine with an SSD takes 20 seconds; a browser will pop up instantly.

SSDs do have some unique issues; they’re not a panacea. Writing to an SSD has some challenges with regard to latency, and those are what we have to work through with the vendors. We’re already looking at samples and doing testing.

Do you have a timeline?

In the next three months, we should be ready to push some drives that are up in the terabyte space, with 10 Gig networking to back them up.