AHEM! Let me clear my throat...
#1
***MOD EDIT: Image ***


This is the bare-metal backend running OpenStack Juno (provisioned with MAAS & Juju) (not a real picture, of course):
12x 4TB SATA HDD per Storage Node

18 Nodes in Total; 3 Dedicated Compute Nodes (8-core CPU, 32GB RAM)

Total Installed Storage: 768TB
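A quick sanity check on the capacity figure (the 16-storage-node count below is implied by my arithmetic, not stated in the post, which lists 18 nodes total with 3 dedicated to compute):

```python
# Sanity-check the quoted 768TB total against the per-node drive layout.
DRIVES_PER_NODE = 12   # 12x SATA drives per storage node, as listed
DRIVE_TB = 4           # 4TB each

per_node_tb = DRIVES_PER_NODE * DRIVE_TB        # capacity of one storage node
implied_storage_nodes = 768 // per_node_tb      # nodes implied by the 768TB total

print(per_node_tb, implied_storage_nodes)  # 48 TB/node -> 16 storage nodes
```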

#2
Wowwwwww
#3
you, sir, are my HERO
#4
Perhaps I'm just expressing my own personal limitations - I cannot for the life of me imagine having half a petabyte of media. I have about 20TB myself and often get stuck trying to figure out what to watch.

Well done for going the whole hog though, no point in messing about!

From your other thread though it sounds like this is configured in a rather "unique" way. Any reason for this, rather than just RAIDing the whole lot together into a single volume?
#5
I did not see any point in using RAID (other than clustering everything into a single volume), as my thought at the time (and still is now) was to eventually move to an object storage model (my own dark reasons for that :-) ). A distributed object store such as OpenStack Swift, HDFS or something along those lines would eliminate the need to worry about keeping track of individual files, directories, partitions and the rest...
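For anyone unfamiliar with the idea, here is a toy illustration of what an object store is (names and structure are mine, not any particular Swift or HDFS API): a flat namespace keyed by container and object name, with no directories or partitions to track.

```python
class ObjectStore:
    """Toy flat-namespace object store: just (container, name) -> bytes.
    Real systems (Swift, HDFS) layer replication and distribution across
    many drives/nodes on top of the same basic idea."""

    def __init__(self):
        self._objects = {}

    def put(self, container, name, data):
        self._objects[(container, name)] = bytes(data)

    def get(self, container, name):
        return self._objects[(container, name)]

    def list(self, container):
        # Names can contain '/', but that is just a naming convention,
        # not a real directory hierarchy.
        return sorted(n for (c, n) in self._objects if c == container)

store = ObjectStore()
store.put("media", "movies/heat.mkv", b"...")
store.put("media", "movies/ran.mkv", b"...")
print(store.list("media"))  # ['movies/heat.mkv', 'movies/ran.mkv']
```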
#6
As a systems admin, I appreciate threads like this!

Personally I would go all-flash storage these days haha
#7
@zag,
I was brutalized in the General Help Forum for committing the cardinal sin of having 8x 16x 4TB drives spinning without some form of RAID. Although I understand their argument (redundancy, availability... pixie dust), I decided (after spending a month researching RAID) that it was NOT worth it, opting instead to just partition my drives into 1TB chunks and wait for something better.

I believe I found a better solution with an Object Store. The most immediate option is to set up the ObjS (Object Store) along with a middleware mapping mechanism that would expose my data as volumes, files, and folders that I can export via SMB and use as-is.
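The middleware mapping described above can be sketched as little more than splitting object names on "/" to synthesize a folder tree (a sketch of the concept only; a real SMB export would sit on top of this via something like Samba):

```python
def build_tree(object_names):
    """Map flat object names like 'tv/show/ep1.mkv' onto a nested
    dict that looks like a folder hierarchy. File leaves are None."""
    tree = {}
    for name in object_names:
        node = tree
        parts = name.split("/")
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend, creating "folders"
        node[parts[-1]] = None                # record the "file"
    return tree

names = ["movies/alien.mkv", "movies/brazil.mkv", "tv/twinpeaks/s01e01.mkv"]
print(build_tree(names))
```

The point is that the hierarchy is purely a view: the store itself stays flat, so there are no partitions or directories to manage underneath.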

The better solution would be a feature add-on to XBMC that would interact with the ObjS directly. I was told, in no uncertain terms, that such a feature would NEVER be added.

I understand that the concept of an Object Store is complex (we are conditioned to think of data, specifically media, as files inside a folder; inside a physical drive; inside a physical server... in our basement); but one should not be too quick to reject new technologies or ideas simply because one does not understand them.

If you would like, I would be happy to continue the discussion off-list and bounce some ideas around, as I am writing up a feature proposal to submit to the developers' forum.
#8
This is sexy. So just to be absurdly clear, you just run JBOD here - each node independent of each other, and each disk within each node also independent... with 4x 1TB partitions?

I agree with the logic that RAID would just be a headache at this kind of scale, whether done in hardware or software.

Mind sharing hardware, OS and current filesystem details? Do you expose the storage to clients currently via SMB then?
#9
JBOD... pretty much... but I am already working on implementing an object store model and then mapping it to a file/folder structure for XBMC...
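For reference, the "1TB chunks" partitioning could be scripted; here is a sketch that only *generates* the `parted` command strings rather than running anything (the device name is hypothetical — verify against your own hardware before executing anything like this):

```python
def partition_commands(device, disk_tb=4, chunk_tb=1):
    """Return the parted commands that would split `device` into equal
    chunk_tb-sized GPT partitions. Command strings only; nothing is run."""
    cmds = [f"parted -s {device} mklabel gpt"]
    for i in range(disk_tb // chunk_tb):
        start, end = i * chunk_tb, (i + 1) * chunk_tb
        # With a GPT label, mkpart takes a partition name, then start/end.
        cmds.append(f"parted -s {device} mkpart part{i + 1} {start}TB {end}TB")
    return cmds

for cmd in partition_commands("/dev/sdb"):
    print(cmd)
```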
#10
(2014-01-20, 15:23)jacintech.fire Wrote: I believe I found a better solution with an Object Store. The most immediate option is to set up the ObjS (Object Store) along with a middleware mapping mechanism that would expose my data as volumes, files, and folders that I can export via SMB and use as-is.

What implementation would you use?

(2014-01-20, 15:23)jacintech.fire Wrote: I understand that the concept of an Object Store is complex (we are conditioned to think of data, specifically media, as files inside a folder; inside a physical drive; inside a physical server... in our basement); but one should not be too quick to reject new technologies or ideas simply because one does not understand them.

Couldn't agree more.

(2014-01-20, 15:23)jacintech.fire Wrote: If you would like, I would be happy to continue the discussion off-list and bounce some ideas around, as I am writing up a feature proposal to submit to the developers' forum.

Not on-list? Especially given your unique storage setup, I'd enjoy following this.
HTPC RPI3 Kodi 17 (Krypton) v8.0.1 MR
Storage BPI 1x 500GB SSD UPnP server
Display Sony Bravia 32"
#11
Go to the general help forum. This thread is going on there right now. Topic: URGENT 512TB Array
#12
No one is going to mention the fact that 512TB worth of 4TB hard drives is...

$19,200 USD?

You spent close to $20,000 on hard drives? Why?
#13
@tential,
Who said I spent $20,000.00 on hard drives...?
The cost of a 4TB hard drive is close to U.S. $80.00 wholesale... and that assumes you are not part of a quality control group tasked with testing said hard drives :-)
I am just saying... you are assuming a lot there, my friend...
#14
Assuming a lot? I assumed 1 thing. The price of the hard drive lol.

Still a LOT of hard drives and a LOT of cash. $80 would be $10,240.

10k on just Hard Drives is still a lot!!!!!!

I am struggling to fill up 20 TB of space let alone 512. What do you have on there lol.....
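For what it's worth, the figures in this back-and-forth check out either way (the $150/drive number is only implied by the $19,200 estimate; all prices are the thread's figures, not current ones):

```python
total_tb = 512
drive_tb = 4
drives = total_tb // drive_tb       # 128 drives needed for 512TB

retail_cost = drives * 150          # $19,200 at the implied ~$150/drive
wholesale_cost = drives * 80        # $10,240 at the quoted $80 wholesale

print(drives, retail_cost, wholesale_cost)  # 128 19200 10240
```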
#15
(2014-01-22, 20:01)tential Wrote: Assuming a lot? I assumed 1 thing. The price of the hard drive lol.

Still a LOT of hard drives and a LOT of cash. $80 would be $10,240.

10k on just Hard Drives is still a lot!!!!!!

I am struggling to fill up 20 TB of space let alone 512. What do you have on there lol.....

You also assumed EVERY partition, on EVERY DRIVE, to be 100% full... and ALL bought TODAY at TODAY's prices. Remember how cheap they were before the flood in Thailand?
You should visit your Goodwill store (if you are in North America) or a flea market in the rest of the world... pick up used DVDs and BluRays for pennies...
You also discover there is a world of cinema outside of Hollywood :-)
It all adds up quickly...
