Thursday, March 23, 2006

What I did in grad school

Last week I picked up Innumeracy by John Allen Paulos. It’s an old one, but a good one. In it, he talks a little about how mathematicians are partially to blame for innumeracy.

“It is almost always possible to present an intellectually honest and engaging account of any field, using a minimum of technical apparatus. This is seldom done, however, since most priesthoods (mathematicians included) are inclined to hide behind a wall of mystery and to commune only with their fellow priests.”

I don’t think jargon is necessarily a way to hide from the outside world. It’s got a lot of uses, but failing to recognize that it exists creates a barrier to communicating with non-initiates. Personally, though, I just find it very hard to explain things I know very well to other people, and part of the problem is thinking you need to start at the very beginning.

Now that I’m out of academics, I spend a great deal of time with normal people. When someone asks me what I wrote my dissertation on, I usually just mumble something about long consonants in the world’s languages and look embarrassed. I feel like without explaining the general assumptions of generative grammar, then how phonology fits in there, then how Optimality Theory works, not to mention the representation of segmental length (moraic theory), my dissertation makes no sense. A lot of the advice I’ve seen on research says that the core idea of your work should be explainable to a non-technical audience. I’ve never been able to get the hang of this skill.

Maybe I’m just slow, because lately I think I know what to say. Here’s my attempt.

There is good evidence that the brain uses very economical representations to store information about sounds in language. For example, in languages that have them, long consonants are basically twice as long as short ones: when someone says a word with a long consonant, they hold it twice as long as they would a short one. So, there are two possible ways the brain could represent them. They could be two short consonants put together, or they could be one consonant that is marked “long.”

Suppose you use the symbol t to represent a short t sound. Then a long t sound could be represented as two t’s in a row (tt) or as one t that is somehow marked as being “long” (like t:).

All of the data from languages point to t: being the right representation. That is, human brains say, “don’t use two symbols to represent a sound when you can use one.”

How do we state this generalization in our model of how linguistic knowledge is stored in our brains? Some models just flat-out say, “don’t use two symbols when you can use one.” My dissertation showed that every component of the model has to be constructed so that none of them prefers two symbols to one, which ends up being quite tricky. And we have to wonder if this is just a property of consonants.
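For the programmers out there, here’s a completely made-up toy sketch in Python (nothing from the dissertation itself, and all the names are mine) just to show the two representations and the “don’t use two symbols when you can use one” preference:

# Toy illustration only: two ways a long t could be stored.

# Option 1: two short consonants in a row (tt).
double_t = ["t", "t"]

# Option 2: one consonant carrying a "long" mark (t:).
long_t = [("t", "long")]

def symbol_count(candidate):
    # How many symbols does this representation use?
    return len(candidate)

def prefer_economical(candidates):
    # Pick the representation with the fewest symbols, i.e.
    # "don't use two symbols when you can use one."
    return min(candidates, key=symbol_count)

print(prefer_economical([double_t, long_t]))  # prints [('t', 'long')]

The real models are nothing like this, of course, but that’s the flavor of the preference.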

I’d appreciate it if you could let me know if this is understandable or not. And if you want to read the gory details you can go here.

4 comments:

Anonymous said...

That's pretty good -- but then again, I'm probably the wrong person to judge. I'll have to try this for my own dissertation sometime.

The biggest problem I find in making my research understandable to "normal people" is that it's hard to connect it to something independently useful or meaningful. I can't end any of my research spiels with something like "... and that will make speech synthesis/recognition work better" or "... and that will bring about world peace" (yeah, right).

My feeling is that someone in the natural sciences or one of the (other) social sciences can either make that connection more readily, or can at least rely on the belief many normal folks have that research in those areas somehow contributes more directly to our understanding of more practical / larger issues. Linguistics doesn't even seem to have the benefits of some of the humanities: normal folks may think that studying literature is a waste of time, but they know there are tons of folks out there who do it and that it's somehow necessary for there to be folks to do it. But linguistics? Why linguistics?

OK, don't get me started ...

Ed Keer said...

But linguistics? Why linguistics?
I agree that that is a big problem with linguistics. Do you think linguists spend too much time thinking in a "problem set" mentality? Sometimes I feel like we wonder at the natural fecundity of language and leave it at that. Or maybe we think too much in terms of modularity and fail to make broader claims about how the brain works.

g said...

i think that explanation of your thesis is much clearer than the actual creature. but i suppose the question is, why does it matter? at the most basic level, isn't your research meant to inform other researchers? who then inform other researchers? and the resulting combination of works builds a discipline which enriches our culture by making the rest of the shlubs feel secure in knowing some hardy pioneers are out there exploring the weird and wooly areas of the human brain that nobody else really cares about but most people really hope someone will explain to them one day?

sure, upper level linguistic theory isn't exactly easily assimilated into the world views of the average person, in much the same way theoretical physics isn't much use to anyone who isn't a big huge nerd of that ilk. it probably trickles down somehow.

as far as i can tell, dissertations themselves aren't meant to be the windows help file. they're more like linux developers manuals or something. the nitty gritty technical stuff. if you want to synthesize it and make it palatable for the barnes&noble set, i think that's a different kind of work. like, writing a book.

Anonymous said...

Why do we say fings?
