Tuesday, July 7, 2009

Emote in Gawd's Eye

One oddity about Second Life is that most avatars look and behave like zombies. We amble rather clumsily from place to place, then sit (or stand, or dance) repeating the same motions in a loop over and over again, like we're on some really good virtual medication. Adding to the effect is that most avatars wear the same botox-inspired impassive expression 99 percent of the time, which makes us all seem like stoned dullards. (Or, another thought: maybe we're all royalty?)

Most avatars talk using open chat, but our faces don't move or animate when we do it. By default there's a typing animation that plays when we're typing in public—we lean to one side and wave our hands over an invisible keyboard while a typing sound plays—but that's overridden for many folks who have custom animations, and our faces don't move when we type or (I believe) even when people use Second Life's voice feature, which lets people talk to each other à la Skype.

It's not that Second Life avatars are incapable of facial expressions: there are several "emotes" available by default, and a bunch of in-world tools (or HUDs) that can let avatars use facial expressions. And some animations also have expressions built into them by default: for instance, when many people stand up, they smile by default, as if standing up somehow amuses them. But all these expressions are generic and typically way over the top: the pictures to the right show what I mean. They're expressions a stage actor might give trying to emote to the back of the house, but not the sort of thing people do much in real life. Some so-called "modeling poses"—static poses avatars can use for portraits or to model clothes—also have facial expressions, and they're also usually exaggerated and outlandish.

So, most of the time, most avatars have no facial expression at all.

In part it's a limitation of today's computers: creating some sort of interface that can be used quickly and intuitively to manage an avatar's facial expression would be no small feat—especially considering that, in Second Life, users already have to figure out so much just to get around in the world. Walking, talking, sitting, touching and manipulating objects, flying, creating and personalizing your avatar, finding and fitting clothes and hair…it's already enough to drive you dingy.

And in part it's the nature of Second Life's client software: even when you're standing right in front of an avatar "speaking" with them, your client's camera is a few meters above and behind you, and the other avatar's face is perhaps the size of an icon on your desktop. That's not really enough real estate to calculate and communicate meaningful facial expressions on the fly. And you don't want to be in a situation where you're zooming your camera from face to face in a group conversation, trying to read intent. You'd never be able to keep up.

But the lack of facial expressions in SL is also one more thing that makes it so cartoony. Already you can't rely on an avatar's body language to tell you much about the person—we're nearly all running default animations, or else animations we found around the grid or purchased from in-world businesses. The number of people who have done motion capture of their own body language and are using it in-world in Second Life can probably be counted on the fingers (or thumb!) of one hand.

The situation is probably fine for folks who are essentially playing characters: I have no doubt many of the people I call friends in Second Life bear little-to-no resemblance to their avatars. Those of us who model our avatars on our real-life selves are providing only a vague approximation—even if we're good at it! Sure, some of the standing animations I use are kind of Lou-like, but no one on Second Life is seeing my typical slouchy bad posture, my tendency to wrap my legs around themselves into a knot under the table when I'm writing, or my bizarre need today to constantly contort my left arm behind me to scratch my back (where I swear some mutant mosquito must have detonated a small explosive…scritch scritch scritch.)

I was thinking today about video chat software, how some applications now have the ability to put cartoony avatars over people's images in video chat, tracking facial motion and letting people use their real expressions to drive their on-camera appearance. And I suddenly wondered whether one day that might be applicable to Second Life. Sure, the processing required would be significant and I'm sure most of the time people would want to disable it or play pre-programmed facial expression sequences so other SL users don't see their avatars eating dinner, picking their teeth, blowing their noses, or doing other things. But that kind of technology, one day, might make virtual worlds a little more compelling…at least for users who understand the technology.
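To make the idea concrete, here's a minimal Python sketch of the kind of mapping such a system might use: take one tracked facial measurement (say, mouth width) and turn it into an expression weight for the avatar, with a "deadband" so incidental motion (chewing, sneezing) doesn't twitch the avatar's face constantly. The function name, thresholds, and the whole interface are my own invention for illustration, not any real face-tracking or Second Life API:

```python
# Hypothetical sketch: mapping one tracked facial measurement to an
# avatar expression weight. All names and numbers are illustrative.

def smile_weight(mouth_width, neutral_width, max_width, deadband=0.1):
    """Convert a tracked mouth width into a 0..1 'smile' weight.

    Values inside the deadband are snapped to 0 so incidental motion
    (eating, talking) doesn't constantly drive the avatar's face.
    """
    if max_width <= neutral_width:
        raise ValueError("max_width must exceed neutral_width")
    # Normalize: 0 at the neutral expression, 1 at the widest smile.
    w = (mouth_width - neutral_width) / (max_width - neutral_width)
    w = max(0.0, min(1.0, w))
    return 0.0 if w < deadband else w
```

A real system would do this for dozens of measurements at 30 frames a second, which hints at why the processing cost would be significant.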

7 comments:

  1. A while back, I was at an educators' group meeting in SL, and this topic came up. The technology is already in development. I don't think it'll prove very useful in most circumstances, though, and probably only popular in even narrower ones. The primary reason is one you state: that people don't want others to see what they're doing in RL. I don't think most people are constantly attending to what's going on in their SL window while they're logged on. The type of technology that would transfer your real-time facial and body language to your avatar would probably be useful only in the same types of interactions as you'd use a webcam for: continuously focused one-on-one communication with few distractions. It might be a good thing for people who want to communicate via webcam but still want to maintain confidentiality. But for any other purpose, it seems like a webcam would be more practical.

    I'd also suggest that it would be undesirable to try to replicate RL facial expressions. First of all, related to another one of your points, in order to even appear on an avatar face, the expression would need to be greatly exaggerated. Maybe avatar anatomy will become more detailed in the future, but for now, subtle changes in facial expression are hardly noticeable except by camming all the way in. Second, there are advantages to everyone going around like zombies. It might be one of the reasons that SL tends to be a good place for people with Asperger's. There are fewer social stimuli to process (and potentially miss), and everyone is on the same level in terms of what kinds of mannerisms we develop in order to work around the lack of body language.

    I also wanted to comment briefly on your distinction between "playing characters" and modeling one's avatar on one's RL self because it sounds as though you're dichotomizing those approaches and leaving other variations to the side. I'd like to recommend not perceiving the physical RL self as part and parcel of what the "real" self is. Maybe you don't -- I'm only commenting on what I see in this post. Obviously, I don't have green skin in RL, but does that mean that when I use my green skin in SL, I'm being someone other than myself? Of course not: there must be some part of me that is comfortable in a green skin or I wouldn't do it. It's simply an expression of self that makes use of tools that don't exist in RL, and to avoid using those tools in the name of creating a xerox copy of myself would be a suppression of creativity.

    Or to put it another way: when you create your av, you're doing a self-portrait like Rembrandt would. When I create my av, it's a self-portrait like Picasso.

    ReplyDelete
  2. It's not quite facial expressions, but there is an option in the client to have the mouth sync with voice chat. It's pretty impressive, even though lag makes it look funny sometimes. It's under Advanced > Character > Enable Lip Sync.

    For sex you have to use emotes or you look like an uninterested attendee ;)

    Sometimes I see someone sparingly use an emote HUD during a presentation, adding odd smiles or frowns, and it has quite an effect. But that's a case where most people are staring at one person for a considerable time; I think you're right that most of the time it's too small to notice if they were doing it.

    ReplyDelete
  3. Excellent comment from Lette. Hear, hear to all that, especially the points about Aspergers and about the RL self.

    Rach and I were at Abranimations last night playing with goofy realistic anims for sale (like muscle flexing, scratching itches, nose picking, panic, etc.) and I thought, "Some of these are great and really make the avatar more dynamic, but it would suck to have to manually trigger these responses during conversation..." which got me thinking about keyword and gesture triggers and such. It would equally suck to have those animations/gestures on some sort of auto trigger that misfired lol. Ultimately it all just seemed like a huge pain in the ass. But then, I'm lazy -- HUDs are even a little more than I want to bother with (which touches on the learning curve you mention -- I can't help wondering whether new users embrace HUDs or see them as one more complicated interface thing to wrangle). Anyway, here's to us zombies.
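    The keyword-trigger idea above could be sketched out roughly like this: scan each line of chat for trigger words and fire the matching gesture, with a per-gesture cooldown so an over-eager trigger doesn't misfire over and over. This is a hypothetical Python sketch of the logic only; the keyword table and gesture names are made up, and this isn't any real Second Life scripting API:

```python
# Hypothetical keyword-to-gesture trigger with a cooldown.
# Keywords and gesture names are illustrative, not real SL assets.

import time

GESTURES = {"haha": "laugh", "ugh": "facepalm", "yikes": "panic"}
COOLDOWN_SECONDS = 10.0
_last_fired = {}

def gestures_for(chat_line, now=None):
    """Return the list of gestures to play for one line of chat."""
    now = time.monotonic() if now is None else now
    fired = []
    for word in chat_line.lower().split():
        gesture = GESTURES.get(word.strip(".,!?"))
        if gesture is None:
            continue
        # Skip gestures still cooling down from a recent trigger,
        # so one chatty keyword can't spam the animation.
        if now - _last_fired.get(gesture, float("-inf")) < COOLDOWN_SECONDS:
            continue
        _last_fired[gesture] = now
        fired.append(gesture)
    return fired
```

    Even in sketch form you can see the tuning problem: too short a cooldown and it misfires, too long and the avatar goes back to being a zombie.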

    ReplyDelete
  4. Surely I am not the only one who has set their body fat slider to zero...

    My dad wanted me to get a webcam for Skype (is that what it is called?) so I could wave at my niece. I have resisted on the grounds that when my mum calls she will see that I normally look like Waynetta Slob. I would like to maintain a sophisticated facade as much as possible and don't want to have to get dressed up for vegging on my PC in the evenings.

    ReplyDelete
  5. Lette: point taken about "playing a character" versus emulating one's RL self. I was kind of shorthanding—it's a blog post, not an essay, dangit!—and you're right that it's an inaccurate over-simplification. My point was merely that for folks looking to create or present particular personae, the existing tools are probably OK - I just get frustrated because I don't see much of RL Lou coming through in SL Lou, nor am I finding ways to make that happen. I agree that somehow mapping facial expressions to an avatar isn't going to be a game-changer, except for some specialized situations. (Machinima might be one.) Oh: and where Picasso would be proud of you, Rembrandt would have some urchin intern gesso over my canvas and start again. :)

    Chadd: I've never seen anyone use emotes in presentations or whatnot, but I don't go to many. I wonder how many people notice?

    Mako: I've been looking into trying to find some little gestures and set up triggers for them. I may ping you for details. ;)

    Ana: My body fat slider is set to 18, and body thickness to about 40. I started out converting my own body fat percentage to SL terms (assuming SL's "100" would equate to about 35% in RL terms), but wound up futzing it by eye: reducing my body fat slider and increasing "thickness" seemed more accurate, but I haven't been able to get my body shape correct, particularly my arms and shoulders. And I don't use video chat in RL, though the cartoony thing is tempting.
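    For the curious, the conversion I started with is just a linear mapping. Here it is as a Python one-liner of sorts, under my own guess (not anything documented by Linden Lab) that the 0-100 SL slider covers roughly 0-35% body fat in RL terms:

```python
# Back-of-envelope sketch of a RL-to-SL body fat conversion,
# assuming (my guess only) SL's slider value 100 ~= 35% RL body fat.

def rl_fat_to_slider(rl_percent, slider_max_rl=35.0):
    """Convert a real-life body fat percentage to an SL slider value."""
    slider = rl_percent / slider_max_rl * 100.0
    # Clamp to the slider's actual 0-100 range.
    return max(0.0, min(100.0, slider))
```

    As noted above, eyeballing it turned out to work better than the math anyway.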

    ReplyDelete
  6. I've seen a few animations recently that do have built-in emotes - but not many. They look quite effective too. But for the most part I do wander round with that vacant SL expression we all have. I do have an emoter, but I just can't be bothered to keep pushing buttons to smile, frown etc, and some of the AOs you get look so unnatural. Some of the model poses are even more unnatural, but even some of these are being made a little more realistic as well.

    The voice animation Chad mentioned is pretty good though for anyone using voice.

    It would be good though to sit with my feet tucked under me when I sit down, and to be able to laugh like me, take my glasses off when I get up etc., but my personality is the same as it is in RL, so for me that's enough. And, yes, I may have tweaked the appearance a bit, but just like everyone else - I don't get dressed up to sit at my PC :) (and no Ana, I'm not quite on 0 - I'm on 3)

    ReplyDelete
  7. One of the things that's always kinda bugged me about SL is that I have so little control over my virtual body. If I sit down and want to tuck a leg under me or lean back in a seat…I can't just virtually grab my leg and put it where I want, or tell my av to lean back: I'm at the mercy of whatever animations I (or the furniture) have. I guess the lack of control increases the sense of disconnectedness for me.

    ReplyDelete

Comments are moderated. You can use some HTML tags, such as <b>, <i>, <a>. If you'd like to contact me privately, use a blog comment and say you don't want it published.