by GJ51 » Wed Sep 18, 2013 2:22 am
It's all about the relationship between structure and speed.
On a local server, where all the processing happens on the machine and the data is read straight from the hard drive, the hardware can cope with poor structure and still deliver reasonable search speed, right up until it hits the limits of its processing power. A faster CPU, a quicker hard drive, and more RAM all help, but eventually enough data will choke even the most powerful hardware if it isn't structured reasonably.
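Just to put a number on that tipping point, here's a toy sketch in Python (the sizes are arbitrary and the data is made up, not from any real benchmark): scanning an unstructured pile holds up fine while it's small, while a simple index barely notices the growth.
[code]
import time

def flat_lookup(records, key):
    # Unstructured: scan every record until we hit a match (like one huge folder)
    for r in records:
        if r == key:
            return r
    return None

def timed(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

for n in (1_000, 100_000, 2_000_000):
    records = list(range(n))
    index = {r: r for r in records}   # Structured: a simple lookup index
    target = n - 1                    # worst case: the last item
    print(f"n={n:>9,}  flat scan: {timed(flat_lookup, records, target):.4f}s"
          f"  indexed: {timed(index.get, target):.6f}s")
[/code]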
Now consider the Android environment, where you have to handle data search, retrieval, and transmission, often over less-than-ideal connections. With those limitations, throwing everything in the root becomes unwieldy too quickly to be practical, so sensible structure is imposed so the app can deliver usable performance.
The concept scales up, too. Read a bit about Hadoop and distributed file systems handling petabytes of data, and you'll find they now spread a single file across more than one server to speed up retrieval and search. At that scale it's actually faster to pull different sections of the file from different servers when fetching and sorting huge quantities of data.
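Roughly the idea, sketched in Python (the server names and block map are made up; in real HDFS a NameNode tracks which DataNodes hold each block): the client asks several machines for their pieces at the same time and reassembles them, instead of streaming the whole file off one disk.
[code]
from concurrent.futures import ThreadPoolExecutor

# Hypothetical block map for one file: block id -> the server holding that block
chunk_map = {0: "server-a", 1: "server-b", 2: "server-c", 3: "server-a"}

def fetch_chunk(block_id, server):
    # Stand-in for a network read of one block from one machine
    return f"<block {block_id} read from {server}>"

# Request every block in parallel, then stitch the pieces back together in order
with ThreadPoolExecutor(max_workers=len(chunk_map)) as pool:
    futures = {bid: pool.submit(fetch_chunk, bid, server)
               for bid, server in chunk_map.items()}

file_data = "".join(futures[bid].result() for bid in sorted(futures))
print(file_data)
[/code]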
It's all about learning to deal with the right amount of data with the right storage and retrieval strategy to achieve usable performance levels.
It's nothing personal against anyone who doesn't want to organize their data.
You can always create a folder called "not organized" and move all the files from the root into it. If the file count is really big, you'll eventually see not only the files, but also how performance degrades when a large number of files sit in a single directory rather than being spread in smaller numbers across more folders.
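If you want to do that move in one shot, a little Python along these lines would do it (the library path is a placeholder for wherever your files live, and it only touches loose files in the root, not your existing folders):
[code]
import shutil
from pathlib import Path

root = Path("/path/to/your/library")   # placeholder: point this at your own root
dump = root / "not organized"
dump.mkdir(exist_ok=True)

# Move every loose file in the root into the "not organized" folder
for item in list(root.iterdir()):
    if item.is_file():
        shutil.move(str(item), str(dump / item.name))
[/code]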
That's just the nature of the beast.
Gary J
http://bios-mods.com
http://www.maplegrovepartners.com
http://theaverageguy.tv/category/tagpodcasts/cyberfrontiers/