Spellbound in the White Citadel

As a darksider (Windows user) in the content provision field (web dev, audio production, and a bit of video), I am in a position to watch the travails of my iBrothers and iSisters who work the white side. They struggle to keep systems that often depend heavily on a broad mix of software from Apple and a number of third parties updated, if they haven’t already decided to “lock down” their current, working systems so that Apple’s numerous, non-backward-compatible updates and system changes don’t upset that precarious ecology.

I understand the basic thinking, as well as the economies of scale and the development structure, that seem to steer Apple through these continuing dramas. (Apple often shifts key dev teams from one task to another in such a way that problems in one sector ripple into delays in addressing another.)

What I don’t always understand is the at-times Eloi-like docility of frequently vexed high-end users as they contort themselves, their practices, and even their business planning around the latest Apple issues.

To be sure, on occasion there is a widespread revolt, as there was over the extraordinary dumbing down of Final Cut Pro into what many professional video editors now derisively call “iMovie Pro.” Those editors had depended on earlier FCP versions’ broad, flexible support for Apple and third-party productivity and collaborative-workflow enhancements, support that made FCP a staple in many multi-seat video editing facilities. And even in the audio world, which once sneered at Windows as a platform for serious audio production work, there has been a real sea change in the attitude of many. (That sneering was sometimes foolish, in the view of someone who has been carefully observing that tech milieu since the mid-90s and who was impressed when Win XP turned out to be a stable, efficient platform for heavy-duty audio production that typically outperformed OS X on equivalent hardware.)

Of course, fears that Apple will abandon the more extensible, if quite pricey, Mac Pro (fears that look increasingly realistic) and will follow its own lead on Final Cut Pro X by turning its audio production flagship, Logic, into “GarageBand Pro” play heavily into the grumbling, open discontent, and platform-jumping.

And, of course, the availability of cross-platform tools whose Windows versions appear in many, if not most, cases to outperform the OS X versions is also a big factor in that growing discontent. Whether the creative communities’ restlessness and frustration will spread to the consumers that the now not-so-new Apple increasingly focuses on is anyone’s guess.

But as a long-time observer, I have found much that perplexes and bemuses me in the odd thrall in which Apple holds many of its customers.

[posted earlier today as a comment in this PC Magazine blog article’s comment thread.]

The mystery of capturing electric guitar tone…

After recording myself and other electric guitarists for around three decades, I’ve firmly arrived at the conclusion that many electric guitarists don’t really start out with a very good idea of what their tone actually sounds like.

How could that be? one might ask. I know I asked myself that a lot at one point.

Part of the answer lies in the fact that our psychoacoustic systems (ears and brain) are not designed for objective analysis of sound texture; they’re designed as personal-space-mapping, danger-sensing systems. And that has really pervasive, if not immediately easy to grasp, significance for recordists.

Here’s an example: walk through a room with a small radio playing a long song.

Does the sound change?

Of course not, unless someone tinkers with the volume or the actual content changes, right?

Well, close your eyes and try the experiment again, really concentrating just on the sound of the radio as someone leads you blind through the room.

You’ll note that the sound does change, probably quite substantially, and in more than just volume as your distance varies.
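
Even the distance-driven level change alone is bigger than most people expect. Here is a minimal sketch of that single factor, assuming idealized free-field (reflection-free) conditions and made-up distances; a real room layers reflections on top of this, so the raw signal reaching your ears varies even more.

    import math

    def level_change_db(d_near_m, d_far_m):
        """Drop in sound pressure level, in dB, moving from d_near_m to
        d_far_m under the free-field inverse-square approximation."""
        return 20 * math.log10(d_far_m / d_near_m)

    # Walking from 1 m to 4 m away from the radio:
    print(f"{level_change_db(1.0, 4.0):.1f} dB quieter")  # prints: 12.0 dB quieter

That 12 dB swing is happening at your eardrums the whole time; your perceptual system simply files it under “same radio.”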

But when you walk through the room, the elaborate perceptual system devoted to making sense of the acoustic environment is continually processing the information returned from the senses and reintegrating your interpretation of that sensory data. So, unless you really stop, break down the processes, and reanalyze the raw data, your brain essentially treats that radio as a ‘stable’ factor until something ‘extraordinary’ happens to change that assessment.

Amp tone in a room is a bit different, but some of the same processes are at work: the guitarist may well have (and studio newbs in particular seem to have) a substantially different impression of what his amp ‘sounds like’ than what a mic (or two) is likely to capture.

There are a number of reasons. Some I have alluded to already; others relate to the fact that, as a guitarist plays through an amp in a room, he will likely be continually changing his orientation (however slightly): moving his head from side to side, tilting it at an angle, up or down, or even getting up and walking around. Any and all of those changes in aspect mean changes in sound, even if that is not immediately apparent until one has learned how to listen not to the ‘processed’ sound delivered by the brain’s complex spatial analysis but to the “raw sound” as it hits the nervous system. (And the one place Mr. Guitarist is most likely NOT to be deriving his idea of his guitar/amp tone from is 2″ from the speaker cone, off-axis, which is, of course, a favorite spot for studio engineers to mic a cabinet.)
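
To put one number on how position-sensitive the sound in a room is: whenever direct sound combines with even a single delayed copy of itself (a floor or wall reflection, say), the result is a comb filter, and the notch frequencies move as the listener or mic moves. The path lengths below are invented for illustration; a real room superimposes many such paths.

    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def notch_frequencies_hz(path_difference_m, count=4):
        """First few cancellation notches when a direct path combines
        with one copy delayed by the extra path length."""
        delay_s = path_difference_m / SPEED_OF_SOUND
        # Cancellation occurs where the delayed copy arrives half a
        # cycle late: f = (2k + 1) / (2 * delay)
        return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

    for diff_m in (0.30, 0.35):  # a ~5 cm head (or mic) move changes the path
        notches = ", ".join(f"{f:.0f}" for f in notch_frequencies_hz(diff_m))
        print(f"extra path {diff_m * 100:.0f} cm -> notches near {notches} Hz")

A 5 cm shift moves the first notch from roughly 572 Hz to 490 Hz, squarely in the guitar’s midrange, which is one concrete reason the amp sounds different every time the player leans back.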

Acoustic engineers must learn how to ‘hear a room’ through a sort of reverse process, disentangling their own brain’s interpretation of what is being heard from the actual sound.

The processes that go into a guitarist’s estimation of his own sound are related, but even more complex (at least for some), as ego and desire and even that ‘awesome rush of your first fuzz pedal’ mix with all the other factors…