Failing Better

 

Play is a vital part of being a person, and failure is a vital part of play. One of the things I’ve been thinking about lately is the connection between autonomy and failure. When children, adolescents, and adults for that matter play video games, they fail a lot. In fact, according to Nicole Lazzaro, 80% of the time we spend playing video games we are failing. What other activity in our daily lives can we say that about?

Education, on the other hand, at least in its traditional model, grades us very differently. If you get 70% of a test or of the class material “correct,” you pass. If you get 69%, you have to do it over again, or you get no credit at all. This system flies in the face of what educators and therapists know about learning: that it is a matter of trial, error, course correction, trial, error, course correction, and so on.

This in some ways answers a question I have often wondered about: Why are we willing to fail 80% of the time in video games, yet so reluctant to risk failure in “real life” even a fraction of the time? One answer the percentages above point to is that education often stacks the deck against us, effectively rendering any mastery of content below 70% a failure. And that failure carries with it shame, a sense of wasted time, futility, and hopelessness.

But there is another aspect of failing in video games that I think we need to pay attention to, and that is the role of autonomy. In a 2009 study, Jesper Juul found that people prefer to play games where they feel responsible for failing. The majority of those surveyed didn’t want to attribute failure to bad luck, but to something they did or didn’t do. They wanted a sense of autonomy in their gameplay, not luck. At the same time, they didn’t want to feel like victims; they wanted to feel optimistic.

I have been playing a game on the iPad called Incoboto which has given me pause to reflect on fun failure. (An aside for gamers who have also played this and Dark Souls: have you considered Incoboto as a cutified version of Dark Souls, trying to link the fire and bring light to a darkened, solitary world? Just saying.) The game has a series of puzzles which one needs to solve in order to collect star pieces to feed to Helios, the kawaii sun following your character around. There were a few places where I got “stuck,” and spent, in my opinion, too much time having to throw something exactly the right way. This highlighted the subjective experience I had for the majority of gameplay: that I was being challenged, but would eventually be able to overcome the unnecessary obstacle. On the occasions I got stuck, I began to feel victimized and to externalize responsibility. The game was not “being fair,” it was too hard, there was a “bug” in it making the ball not land “right.”

What helped me persevere was partly the compelling graphics and gameplay, but also a sense of faith in the game. Ok, sometimes I cheated too, by looking up spoilers on the game forum. In those moments, you could say that I was giving up the voluntary attempt to overcome an unnecessary obstacle of the game. But, and this is what’s important, I was also ceding my sense of autonomy. It’s a weird balancing act: in one case I looked at the cheat not so much to find the solution as to get reassurance that what I was trying was the solution. But even though I was exercising my digital literacy here, I was also giving up, for the moment, my sense of autonomy and agency.

Failure, and tolerance of failure, is a subjective thing, which is why Lazzaro’s presentations illustrate zones, not points, of fiero, frustration, relief, and boredom. Everyone varies in how they experience emotions, and failure, in video games. And if I didn’t keep that in mind, I might feel very disheartened when I read this review of Incoboto:

“Great mix of platformer and puzzle game, very smooth learning/difficulty curve, and quite a nice gameplay experience too”

Now I am not going to get into a discussion on norms and trends and the importance of betas, because my point here is to compare and contrast the experiences of failure in video games and education.

Education in our country is trying to overcome some serious design flaws of its own. Children and adolescents are given tremendous responsibility for their performance without a commensurate amount of autonomy. This creates a culture of victimhood. Rather than noticing that they got more than half of something right, we flunk them. Rather than setting meaningful individual goals, we create industrialized curricula. And if we do give someone an individualized set of goals in the form of an IEP, we label them as learning disabled first to justify it!

We need to improve the quality and experience of failure in schools, because video games don’t occur in a separate reality from the point of view of our minds. That mind/body split of Descartes has been debunked for ages, and yet we’re still talking about “real” life. The reality is that mastering challenges and fun failure creates a feeling of optimism, which neurologically and emotionally improves our ability to learn in the future. If we think we are capable of solving a problem, we will keep at it. Therefore, we need to foster a sense of autonomy in learning. The minute we start talking about “my special needs child,” we are taking away their autonomy.

Am I saying we should expect everyone to perform the same at school or other work? Not at all. I am saying we should be better curators of children’s learning environments, and let them carry less stigma around failure. In a real sense, every child should have an individualized education plan, because we are (hopefully) moving out of an industrial model of education.

As a therapist and educator who has worked in and with school systems and parents for nearly two decades, I have struggled with this frequently, both within myself and with my patients. The language of diagnoses and learning disabilities is a language I speak all too well, and I have unintentionally colluded at times with parents and systems who use it as shorthand for “my kid can’t ___.” Maybe if failure were more tolerable and fun in school we wouldn’t be so quick to adopt these identities, and maybe if we curated environments that allowed for more autonomy we would notice different varieties of success as well.

The other night I was on a Minecraft server I participate in, founded by educators and edutechs for their children. Several of the kids were on and chatting when I logged in, and shortly thereafter a huge flame war erupted. Sentences in all capitals, “I HATE YOU,” flew across the screen. Kids stormed out of the chat room, returned, then logged off again. Some of the young moderators were instigating further conflict, while others were earnestly trying to figure out why people’s feelings had been hurt in the first place. From the therapeutic point of view, they were failing miserably, exhibiting poor social skills, dysregulated affect, and poor impulse control. It took a herculean act of will not to jump in and actively curate this group, and instead to allow them to exercise their autonomy.

They kept at it, and over the next several minutes they began to collaborate on understanding what had happened. This did not have the grown-up version of a happy ending, where the aggrieved parties apologize and make up. Instead, the group told one party that they appreciated the apology but weren’t ready to accept it yet (my translation), told a second party to stop instigating in the guise of defending someone, and encouraged a third to come build something to take his or her mind off of it.

In my mind, the fact that this took place in a game environment where failure is destigmatized and autonomy is presumed made it easier for people to keep at the challenge until it had been resolved “enough.” There was no adult forcing them to stay on and work at this; they were voluntarily engaged. There were several halting starts and stops of chat. But social-emotional learning was occurring.

This, in my opinion, is an example of “failing better,” and I think it is a skill that not only can be translated from video game experience, but desperately needs to be. The more we accept failure as an essential part of learning and work, the less stigmatizing it will be. The less we stigmatize failure, the more we encourage autonomy and optimism. Autonomy and optimism make you a better learner, a better collaborator, and a better worker. Personally I think the world could use a lot more of that.

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info.

Worth a Thousand Words?: Infographics on Video Games

For those of you who haven’t heard, Pinterest is a pinboard-style social media platform that emphasizes visuals.  Recently I have been trying to learn more about it and how it might apply to the psychotherapy/psychology field.  So far I can see several possibilities:

DBT: Using Pinterest to create worksheet boards, or better yet, boards of images that provide self-soothing for distress tolerance.

Behavior Charts that are visual and instantly available at home, instead of going home in a book bag and being forgotten.

Virtual Comic Books to help adolescents learn and practice sequencing and pragmatic speech.

Screenshots of video games that can be shared by gaming patients with gamer-affirmative therapists.

Psychoeducation Tools for a variety of issues, including the above example.  Click on the image to see my board of infographics on video games and gaming.  They are not intended as professionally vetted research, and you’ll note the heading encourages viewers to check out the research themselves.

There are obviously things to be concerned about, such as privacy and how best to bring Pinterest into the therapeutic session, office, and process.  Pinterest is not HIPAA-compliant, for example, so would a link sent via Hushmail be secure enough for some uses?  How might we make sure our patients could use this powerful visual tool in a way that does not disclose their health information or what they are using it for?

What do you think?  How might we use this powerful visual medium to enhance our treatment of patients in a best-practice way?

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info.


When They Hate You.

Many therapists I work with dream of expanding their practices to consulting and presenting. In our initial appointments they ask me, with a lot of excitement, about my experience doing these things, and I am usually very positive and optimistic about it. But although I am “living the dream,” there have been many rude awakenings along the way.

One such awakening came this week when I received the evaluations from a recent talk. Out of the 760 people who attended, 566 completed evaluations. It isn’t often that I have a chance to get feedback from 566 colleagues at once. What struck me is how I tended to react to them, and how I had to fight the urge to focus on the negative. If 535 people rated me as good or excellent, my eye was drawn more often to the 2 “poor” ones. No matter that 0.35% is a really small percentage, that fraction of a percent that delivered a poor rating was hard to overlook.

The comments were even more challenging, as I noticed that my eyes flew over comment after comment describing me as interesting, great, edgy, fresh, thought-provoking, relevant, a gem, and passionate. But boy did they stop when I read this: “His bias towards media enraged me,” or this: “Seems like he has a chip on his shoulder, perhaps because he was told he had poor social skills.”

Ouch.

Would-be presenters, please take heed. When you put yourself out there, people will take shots at you. This will hurt, even when it comes from a fraction of a percent. Part of what hurts is the asynchronous and anonymous nature of these comments, because you have little recourse to respond, correct an error you may have made, or just plain defend your point of view. But if you want to do public speaking, you’ll need to grow a thick skin.

Part of why you need a thick skin is to allow for the accurate appraisal of your work. Here’s how I do it:

I divide the critical comments into one of three categories: Absolutely Useful, Fair Enough, and No ROI.

  1. Absolutely Useful: These comments are ones that don’t make me defensive, where I can imagine myself saying to the person, “absolutely.” An example of this kind is “it was somewhat difficult to follow along in the booklet because he seems to have changed the order of slides.” This comment was extremely useful, as I can put more emphasis in my prep on not changing my slides at the last minute. This is an easy fix, and will benefit the audience.
  2. Fair Enough: These comments do make me a little defensive, but there is some benefit in spending time to acknowledge or address them. I can imagine myself saying “your point is well taken, however…” For example, the comment “limited research” is fair enough. Your point is well taken; however, I was only allowed 45 minutes to present, and needed to choose only 60 of my copious slides. Another commentator wished I had spoken more about the impact of violent video games and how they are a problem. Fair enough, however there are plenty of places people can get that information or misinformation, and few places where they can get my take. What I can take from these comments are points to consider weaving in or addressing when there is more time.
  3. No ROI: These are the comments that are clearly ad hominem attacks. A good clue is if they hurt my feelings or make me feel extremely defensive. “Seems like he has a chip on his shoulder, perhaps because he was told he had poor social skills” is an example of this sort. There is little return on any investment of time or energy I could expend on these. Who knows why a person would think that comment would help anyone, but more importantly, how would it make a presentation’s argument more effective? These need to be set aside ASAP in order to focus on more helpful comments.

The irony is that the most useful comments are usually not the extremely validating or invalidating ones, but the matter-of-fact ones, like the comment about slide order. The job of a presenter is to become a better presenter. Whether you like the information and opinions I present is really none of my business; my job is to present them.

In my opinion, part of what makes a person an effective speaker is also bound to make them hated: namely, their passion and conviction. Of course I am biased; of course I think that my point of view is important. Would you really want me up there talking about things I don’t feel or think strongly about? At an old internship of mine, a colleague once asked me, “Have you ever been hated by somebody?” At the time I thought I hadn’t, and said so. “That’s too bad,” she replied. “It’s very defining.”

Since then I have come to realize that I, like most of us, have in fact been hated. Merriam-Webster defines the noun hate as “intense hostility and aversion usually deriving from fear, anger, or sense of injury.” People are hated because they are black or white, LGBT or straight, rich or poor, Nazi or Jew. In everyday affairs we like to pretend this is not true, and when we do so it is crazy-making. It is often a bittersweet relief to a patient when we say, “You weren’t crazy, you really were experiencing hatred.” Finally, someone told the truth.

When I present about technology and video games, I speak out, explicitly or implicitly, about adultism. This comes across when I challenge people around the concept of screen time. One very perceptive member of my audience stated that my message seemed to be focused on changing adult behavior, not child behavior. Bingo.

When it comes to gaming, technology, and education, we need to take a good hard look at how adultism is implicit in many of our practices. We think we know better than our youth, and we think we know better than they do how they should spend their time. Back in grade school, well-meaning adults decided that my time would be better spent memorizing multiplication tables, drilling them into my mind, giving me A’s for knowing them. Yet now I live in a world where I am never more than a few feet away from my phone, laptop, or dedicated calculator, and I have to question whether that time couldn’t have been better spent learning other things. What we are taught as important is bound by the history and culture of the adults in power at the time, and that isn’t always a good thing. In retrospect, I’d have been more prepared for life if I’d learned about the subjugation of indigenous people in school rather than drawing hand turkeys.

So if you are passionate about something, that passion will give you the energy to devote time and attention to it, to go above and beyond the workaday life we often lead. But it will also put you in front of people who don’t agree with you, who see you as a threat to what they believe is good and true. You will be hated. You will get tired and hurt and frustrated. And when that happens I recommend that you take some solace from loved ones and friends, and then get back to work.

Some posts, like this one, are written for me as much as for my colleagues and consultees. We all get discouraged and need to be reminded that we are choosing to strike a blow for freedom in whatever path we choose. But I want to give the last word to one of my commentators, who said exactly what I need to hear when I have moments of flagging confidence and doubt: “Mike’s presentation changed my outlook on technology in my professional and parenting roles. Thank you so much, from a FORMER technophobe!!”

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info.

 

I Come To Praise First-Person Shooters, Not To Bury Them

 

I should begin by saying that I don’t personally enjoy the type of video game known as a first-person shooter (FPS) very much.  They make me jittery when I play, and I am easily overwhelmed by them.  I’m still stuck in the tutorial room with Jacob in Mass Effect 2.  If there are settings to disable gore and swearing in a game, I’ll click ’em.  But as I looked back on my past posts I realized that I have neglected to weigh in on FPS games, and in doing so am guilty of the same kind of dismissal I critique in colleagues.  (Note to gamers: I know there are several important distinctions between FPS and TPS, or third-person shooters, but that’s for another post.)

There’s a lot to like about FPS games; here are a few examples.

  1. Many FPS games, such as Halo 2, can be collaborative as well as violent.  Players join platoons and need to learn how to coordinate, communicate, and problem-solve in a fast-paced environment.  Games like Halo also provide environments for players to learn how to assume leadership roles, follow directions from other players, and think critically in stressful in-world situations.
  2. FPS games encourage impulse control as well as aggression.  Crucial to success in an FPS is the ability to time attacks and maneuvers, which requires controlling the impulse to pull that trigger.  Although we tend to focus on the aggression in FPS games, there’s often a lot of sneaking going on as well.  In BioShock there are actual decision points where refraining from killing characters changes the entire outcome of the game.  Even though the player is not learning teamwork in single-player games, they are often learning the same sorts of decision-making and impulse control found in good old-fashioned “Red Light, Green Light.”
  3. First-person shooters improve hand-eye coordination.  One important component of hand-eye coordination is visuospatial attention.  Research by Green et al. suggests that video games improve visuospatial attention, and further that FPS video games do it even better than games like Tetris.  Hand-eye coordination is a skill most of us would agree is a good thing to have.  It improves your readiness to learn, increases your ability to excel at sports, builds your confidence, and makes juggling less stressful.
  4. First-person shooters may increase a sense of mastery and alertness.  So many parents and educators lament how children aren’t able to pay attention.  And yet, what makes FPS games so compelling is their immersive quality.  As Grimshaw et al. discuss, the literature describes immersion in varying ways, such as “the state of consciousness when an immersant’s awareness of physical self is diminished or lost by being surrounded in an engrossing total environment, often artificial.”  Further, in order to be completely immersed in an environment, players “must have a non-trivial impact on the environment.”  Wandering around the game world may not be sufficient to immerse players in a flow-like state, and shooting people, whatever else you may say about it, does not lend itself to feeling trivial in an environment.  Imagine if classrooms could harness that kind of immersive quality.  It would be much more effective than saying loudly, “Pay attention!” which usually has the exact opposite of its intended effect.

Given the above compelling reasons to think well of FPS, why are they so often singled out as the bad seed of video games?  The answer, I would suggest, is a sociopolitical one that gamers as a whole ignore at their peril.

Science is often, maybe always, political, and has an uneasy relationship with civil rights movements.  The example that springs to my mind is the LGBTQ civil rights movement.  Back when the preponderance of science pathologized all LGBTQ people, there was greater solidarity amongst the various thinkers, activists, and citizens of those subcultures.  From Stonewall up through the early AIDS crisis, there was less fragmentation and more coordination, with the understanding that civil rights benefited everyone.

But within the past two decades, many members of the LGBTQ community have begun to receive recognition and acceptance in society as a whole.  At this writing 7 states have legalized gay marriage (Welcome Washington!) and more accept domestic partnerships between same-sex couples.  Bullying based on sexual orientation and hate crimes have received more coverage from media with sympathetic stances towards LGBTQ youth.  And I can’t remember the last time I heard talk about the latest study locating the “gay gene.”

And yet, science and politics have turned their gaze towards specific subsets of the LGBTQ population.  Transgender rights (a notable recent gain in my home state) are still ignored, or reduced to bathroom conversations and debates about the poor parenting of those who don’t make their children conform to cisgender norms.  The status of LGBTQ youth of color as a priority population is met with grumbling.  Bisexuals are still considered in transition or confused, asexuals frigid or repressed.  Polyamory is confused with lack of commitment or neurotic ambivalence, and BDSM isn’t even recognized as worthy of any sort of advocacy.

And to a large extent, whenever one of these specific subcultures is targeted, the other factions of the LGBTQ community remain silent.  And in doing so, they become allied with the perpetrator.  As Judith Herman points out in her seminal work, Trauma and Recovery, “It is very tempting to take the side of the perpetrator. All the perpetrator asks is that the bystander do nothing.”  This is exactly what members of the LGBTQ community are doing when they cease to maintain the solidarity and mutual support that helped get homosexuality removed from the DSM.

And so the focus shifts from the general “gay people are bad/sick” to the more specific populations also under the LGBTQ umbrella, and rather than fighting for them we allow them to be omitted from civil rights.  A case in point was made by openly trans HRC member Donna Rose, who resigned in protest of HRC’s support for an Employment Non-Discrimination Act that included sexual orientation but didn’t include protections for transgender people.  A group may only be as strong as its weakest member, but solidarity often ends when the strongest members of an alliance get what they want.

The gaming community would do well to take a lesson here.  Recently video games have been getting increasing recognition as an art form, an educational tool, and a possible solution to world problems ranging from poverty to AIDS.  As society moves to a more progressive stance on technology and video games, studies are coming under scrutiny for their sweeping, pathologizing generalizations about a complex and diverse group.

(The most pernicious example of this, in my opinion, is the concept of the “screen” and “screen time.”  Studies ask how much time subjects spend in front of electronic devices, as if all activities were identical in experience and effect.  Watching television, playing video games, and surfing Facebook are all treated as similar neurological phenomena, when they aren’t.  It’s much more complicated than that, and different physiological systems are affected in different ways.  Even the idea that all screen time dysregulates sleep the same way has been questioned recently, with televisions showing less suppression of melatonin than iPads.  So what screen you’re doing things on makes a difference.  And then there’s what you are doing.

Watching television is, in my experience, a more passive and anergic activity than playing video games.  No, I’m not going to cite a particular study here, because I want us to focus on thinking critically about the designs of studies, not the data.  And as Paul Howard-Jones points out in his video, learning itself activates different parts of the brain at different phases of an individual’s learning cycle for a particular activity.  So yes, video game players have different-looking brains than people who aren’t playing them; that doesn’t mean it is bad, but that they are using different parts of their brains and learning different things.  Most people in the gaming community would have some solidarity here with other gamers, and balk at the idea that a screen is a screen is a screen.  And “screen time” usually means screens used for watching television, playing games, or surfing the net, not screens used for compiling doctoral dissertation lit reviews, planning a vacation, doing homework, or looking up a recipe.)

So gamers are solidly behind fighting these blanket generalizations.  That’s great.  But I find that gamer solidarity starts to fall apart around the more specific attacks being leveled in science and politics against FPS and violent games.  Studies say these desensitize children to violence, increase aggression, and correlate with hostile personalities.  There are also studies that conflict with these findings, but I want to ask a different, albeit more provocative, question:

What’s wrong with being aggressive?

I think children’s play has a long history of being aggressive:  Cops and Robbers, water pistols, football, wrestling, boxing, and tag all encourage some element of aggression.  Most of us have played several of these in our lifetimes with some regularity; have we become desensitized and aggressive as a result?  Am I sounding too hostile?  🙂

And we are sending children and adolescents a mixed message if we label aggression as all-out bad.  Not everyone or every job requires the same amount of aggression.  Wanting to be #1 and competing, whether you are a boxer or a president, requires some aggression.  Aggression is in fact a leadership quality.  It allows us to take risks, weigh the potential hazards, and go for something.  Feelings of aggression heighten our acumen, can speed up our assessment of a situation, and help us stand up to bullies.  Whether we agree with this war or that, would we really want our soldiers to be in-country with no aggression to help them serve and defend?  Fortune, as the saying goes, favors the bold, not the timid.

FPS games have a place on the GameStop shelf and a place in the gaming community.  They allow us to engage in virtual activities that have real-life benefits.  They are a focal point for millions of gamers, and I believe unlocking their DNA will go a long way toward discovering how to improve work and learning environments.  Stop letting critics shoot them down, or don’t be surprised if you’re in the crossfire next.

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info.

Protect Your Online Privacy: Start Blogging!

Many therapists have lamented the lack of privacy the internet has created.  More to the point, in my view, the internet has taken away the veil of secrecy psychotherapy has frequently sought refuge behind.  It used to be that the anonymity of large urban areas, or the possibility of a commute to the suburbs, insulated therapists from their patients after the analytic hour came to a close.  A friend of mine once went for years before discovering that Therapist A, who had referred him to Therapist B when treatment was stymied, was actually married to Therapist B.  They did not share last names, but my friend, in a moment of high curiosity and low impulse control, drove over to Therapist A’s home address and discovered Therapist B’s name there as well.  He terminated therapy thereafter.

For myself, I learned that privacy is to a large extent illusory, not from the internet, but from my first job.  I worked in a community mental health center on an island 13 miles long by 7 miles wide and 2 1/2 hours by boat from the mainland.  You get used to a diminished sort of privacy on an island.  I couldn’t avoid my patients if I wanted to, unless I also wanted to avoid the library, most restaurants, coffee shops, the beach, or the one movie theater we had in the winter.  Nor could I find privacy in limiting the type of work I did there.  The community mental health center was the only one on the island.  We were responsible for, and I did, school counseling, psychiatric hospitalizations (which involved flying with often-psychotic people in a six-seat Cessna), outpatient therapy, alcohol counseling and DUI classes, drug testing, and court-ordered counseling for domestic violence perpetrators.  I can still remember how, when a colleague and I went out to dinner at a local pub one night, one-third of the people at the bar left.  It wasn’t just my privacy that was affected here.

You have a choice in situations like that.  You can hide out in your house with a cat and television (which I did at first), or you can start living your life in the community and negotiate boundary crossings on a case-by-case basis (which I settled on as my strategy).  I learned to cultivate a sense of never-too-uptight, never-too-relaxed when I was in public.  It became second nature in many ways.

When I moved to Cambridge, MA, it felt very anonymous by comparison.  But as many practitioners in “the most opinionated zip code in the US” will tell you, Cambridge is really a village in many ways.  I still ran into people, and by this time, technology was becoming more of a factor.

As Thomas Friedman has observed, “The World Is Flat” in the 21st century.  Globalization and technology have removed many of the barriers to, and some would say protections from, knowing each other.  Our patients can Google us, Yelp puts up a business page for us whether we like it or not, and those patients are often only one Facebook friend away from connecting with us.

Even if you want to make the poor business decision of staying off the internet by not having a website, eventually your contributions to the Democratic Party, your address, and notes about you in your alumni magazine are still going to find their way out into the world.  We’re all on an island today.

So what can you do?  Well, my advice is to start blogging.  I know it sounds counterintuitive, but it makes sense on a number of levels:

1.  Buddhism tells us to move into the places that scare us.  We exert so much energy trying to avoid things, to find a spot where we can stay safe and stop the awkward and uncomfortable learning process.  And yet we so often ask our patients to do the exact opposite: to look underneath those rocks, descend into the depths of the psyche, face their fears.  Our obsessive quest for privacy is perhaps not that different.

2.  Make the internet work for you.  One of the best ways to protect your privacy is to generate a lot of content that you consciously know is public-facing.  Google “Mike Langlois, LICSW” for example.  Go ahead, I’ll wait.  What came up is probably pages of my website, my professional picture, YouTube videos, and blog posts.  Dig a little deeper and you’ll see me commenting on a few blogs.  This is the practice of radical transparency.  All of that content was written with all of you in mind: my patients, colleagues, friends, family members, potential coaching clients, high school classmates, potential employers, my future children and grandchildren, and the FBI.  The way Google and other search engines work, the more content I put out there that is public, the further back any unintentional pieces about me will be.  By embracing the fact that the world is flat, I have learned to cultivate a style that I can negotiate in my work life while still feeling authentic.  And it is great advertising, or fair warning, if you are considering working with me.

3.  Radical transparency protects your patients’ privacy as well.  Whether we like it or not, therapists are finding themselves on review sites like Yelp.  Yes, anyone can post a review, and no, Yelp will not take it down if you ask.  More importantly, your patients might not understand the ramifications for their privacy or PHI if they post a review.  Keeley Kolmes has great resources on this, and you are welcome to use my version of her version as well.  Take a look:

Notice that half of my allotted space is not advertising, but a direct message to any potential commenters.  Rather than hide out and try to get Yelp to take my name down, I have used it as a platform to market my business, model what I feel are ethical professional standards, and provide some information to patients in the spirit of informed consent.  Do I want to get bad reviews?  Of course not; who does?  But that is not an excuse to hide my head in the cybersand.

4. Last, but not least, get over your bad self.  Sometimes, listening to our colleagues, you would get the feeling that they are dealing with the paparazzi, not the public.  Sure, patients and others may be curious about your life, but really, most people in the blogosphere just aren’t that interested.  On a good day, my blog gets 200 views; on an amazing day last August, I got 689 views.  There are 7 billion people on the planet.  Feel free to check my math, but according to my calculations that means on a busy day about 0.0000098% of the people on the planet are checking out my most visible presence on the internet.
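For anyone who actually wants to check that math, here is a minimal back-of-the-envelope sketch in Python. The 689-view day and the rough 7 billion world population are simply the figures quoted above, not output from any analytics tool:

```python
# Back-of-the-envelope check of the "busy day" percentage quoted above.
busy_day_views = 689              # best single-day view count mentioned in this post
world_population = 7_000_000_000  # rough world population figure used above

share_of_planet = busy_day_views / world_population * 100  # as a percentage
print(f"About {share_of_planet:.7f}% of the planet on a busy day")  # ~0.0000098%
```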

Am I saying you should blog for the sake of blogging?  No.  I am saying that there is a Copernican revolution going on in the 21st century, and therapists need to join it.  Rather than avoiding technology and the internet, we need to start understanding and harnessing them.  You can be googled whether you like it or not.  Yelp doesn’t care about contaminating your transference.

Being professional is about how we rise to the occasion of Web 2.0, not deciding to skip out on the party.

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info.

The Kids Are All Right

image courtesy of gamerfit.com

Last week some family friends came over to our house for dinner. The children, we’ll call them Larry, Curly and Moesha, were ages 12, 4, and 8 respectively. As you may imagine, children enjoy coming over to the Gamer Therapist house, which has 3 gaming consoles and a dedicated big-screen TV. After a quick tour of the gaming room, Larry and Moesha sorted through my games and located Portal 2, and within minutes had set themselves up to play cooperative mode. Curly was content to sit between them, and the adults retired to the first floor to hang out and prepare dinner.

Throughout the evening we could hear the sounds of the happy gamers and the game, while my friend Rebecca talked frankly about her ambivalence about their gameplay. The ambivalence sprang primarily from a well-meaning friend who had criticized her parenting style. Another set of parents had told her that their children wouldn’t be able to play with hers anymore, because they thought the children were picking up bad messages from video games.

This conversation was interrupted by the children twice, both times by Moesha. The first time she came down to inform us that Larry had succeeded in unlocking a very difficult achievement. The second time she came down to ask me if I could join them and help them solve a puzzle in the game they had been struggling with. I went up, and within a few minutes a fresh perspective, and some patience from Larry as I familiarized myself with the controls, had advanced them to the next level. A polite thank-you let me know that I was no longer required, and I returned to our conversation.

A discussion about education and video games was in full swing, along with a debate about how much screen time is too much. At this juncture I pointed out how hard it is for us to catch children when they are doing things right. I observed that for the past two hours, siblings spanning an age difference of eight years had been engaged in cooperative learning. What’s more, they had voluntarily engaged with the adults on two occasions. The first was for one child to express pride in the achievement of her sibling. The second was to request adult assistance with some problem solving. And all along, our parental ears had heard not one whit of conflict or argument. Yet all of this would have been easy to miss, or worse, dismiss as “parking the children in front of a screen.”

Every day, parents like Rebecca are bombarded with much-hyped exposés on how dangerous video games are to children. Horror stories are touted, like the one my colleague, BC psychologist Peter Gray, blogs about, in which a young South Korean man plays for 50 hours straight and dies after going into cardiac arrest. Gray goes on to put this tragedy in perspective: there are 7 billion people on the planet, and this incident represents 0.000000014% of the population. It is, by contrast, far more difficult to catch the numbers of children and adult gamers doing things like learning physics, researching a vaccine for HIV, or gaming to raise money for hospitals.

And although studies linking video games to childhood obesity get plenty of attention, the media somehow never manages to pick up ones like this, which showed that children who had electronic devices in their rooms were more likely to engage in outdoor play. Perhaps the Obama administration did read this study, though, as it is moving to a more gamer-friendly position with the appointment of Constance Steinkuehler to study the civic potential of playing games (thanks to my colleague Uriah Gilford for calling my attention to this). The video game danger is overhyped.

Parents are playing a game far more dangerous than any video game, and that is the “Who Is Parenting Best” game. Over and over I hear conversations about what the best school district is, how to set privacy settings, and which enrichment activities to choose. This despite the fact that, as Rob Evans, EdD, points out in his book Family Matters, by age 18 children and adolescents will have spent only about 10% of their total lives in school. Setting privacy settings is the first and most minimal step in helping children navigate a 21st-century community that relies on the internet. And enrichment activities are unfortunately often activities that appeal to what a parent wants quality time to look like, rather than what a child enjoys.

In discussing school districts, privacy settings and quality time, parents are often missing the point. Worse, they are confusing worry with effort.

We need to stop “phoning it in” with our kids, finding the “right” school, program, or setting to park them in so we don’t need to worry. It is human nature to want to get to a safe spot, relax, and stop changing: we need to fight it.

Often when I speak with parents who complain about the amount of time their offspring spend playing, I ask them if they have ever played the game with their child. The answer is invariably that they haven’t, even though a recent study showed that girls who gamed with their fathers reported lower levels of depression. Quality time, we seem to assume, always has to be some Puritanical ideal: a bracing hike, the symphony, or a museum. I love doing all of those things, which is why I do them. That doesn’t mean that a child or adolescent will have them as a preferred activity.

I know, I’m criticizing your parenting; please bear with me. Because you can take it, and your children need you to stop playing the Who Is Parenting Best game. They need you to try playing Portal 2 with them if that is what they’re into. If you fumble with the controller, all the better, because we adults have forgotten how clumsy and awkward learning makes children feel. Education has taught us to find a safe spot and stop learning about anything outside that comfort zone.

We have long known that one of the strongest protective factors for children is an involved parent, but somewhere along the line we have gotten the mistaken idea that involvement means control. To be clear, parental control does not equal parental involvement. At best it is one element of involvement; at worst it is phoning it in.

Does this mean schools have no responsibility? Absolutely not. But I would like to suggest we return to Larry, Curly, and Moesha for a different example, namely a curatorial model. That means setting up a place for them to explore and negotiate things on their own for the most part, while we remain constantly available for engagement. Yes, that is a tall order, and one that requires that we think beyond the nuclear-family and school-as-factory models.

I am fortunate to belong to massivelyminecraft.org, a server where children and adults from the UK, US, Australia, and other places play the sandbox game Minecraft together. It is not segregated by age, but it is vetted by the server moderators, who require parental permission for children to join. Within the game world, you can see adults and children learning and playing together at any hour of the day or night. They are voluntarily learning about geometry, math, physics, animal husbandry, chemistry, geology, economics, social skills, communication, and a host of other things, in a way that is curatorial rather than prescriptive. Adults are there, and occasionally, in chat or via Skype, will step in to mediate a conflict between two children, or to help with a task if requested.

Why can’t 21st-century education be like this? Imagine a virtual classroom where parents and teachers can privately chat while both unobtrusively observe a child’s progress. Imagine homework at the table replaced by building a virtual pyramid together. Imagine virtual trips to Paris in Second Life, where no one is too poor to come along. Imagine time on learning that maximizes the child’s engagement and minimizes unnecessary supervision. Imagine adolescents finally having educational settings that run synchronously with their biological clocks. Imagine a collaborative effort that doesn’t segregate human beings by rigid grades, parent/teacher roles, or socioeconomic status. Imagine recess that can happen year-round, sometimes in a field, sometimes on the Wii Fit or Kinect.

I believe the research is showing more and more that these things are possible. And I believe that our children are counting on us to take a leadership role in technology and education rather than a fear-based one.

I leave you with this image. As he was leaving my house, Larry looked at me and declared that I “wasn’t entirely horrible” for an adult. What parent wouldn’t love to hear such high praise from their tween?

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info.

Ethics & Technology: A Mild Rant

Like this post? There’s more where that came from, for only $2.99 you can buy my book. I can rant in person too, check out the Press Kit for Public Speaking info

A Follow Up to Dings & Grats

My last post, “Dings & Grats,” generated quite a lot of commentary from therapists and gamers alike.  I was surprised at many of the comments, which tended to fall into one of several groups.  I’ll summarize and paraphrase them below, each followed by my response.

1. “I haven’t seen any research that shows video games can increase self-confidence, but I have seen research that shows they cause violent behavior.”

Fair enough, not everyone keeps up to date on research in this area, and the media certainly hypes the research that indicates “dire” consequences.  So let me direct you to a study here which shows that using video games can increase your self-confidence.  And here is a study which debunks the mythology of video games causing violence.

2. “I find gamers to be generally lacking in confidence, introverted, reactive and aggressive, lacking in social skills, etc.”

These responses amazed me.  Gamers are part of a culture, and I doubt that many of my colleagues would make such overarching generalizations about other groups, at least in public.  Would you post “I find women to be generally lacking in confidence,” or “I find obese people introverted,” or “I find African American people lacking in social skills”?  And yet the open way many mental health professionals denigrated gamers, without any sense of observing ego, was stunning.  I was actually grateful that most of these comments were on therapist discussion groups, so that gamers didn’t have to read them.  This is cultural insensitivity, and I hope that if my colleagues aren’t interested in becoming culturally competent around gaming they will refer those patients out.

3. “Real relationships with real people are more valuable than online relationships.”

This judgment confused me.  Who do we think is behind the screen playing video games online, Munchkins?  Those are real people, and they are having real relationships, which are just as varied as relationships that aren’t mediated by technology.  Sure, some relationships online are superficial and others are intense; just as in your life as a whole, some of your relationships are superficial, others are intense, and many fall somewhere between the two.  I’ve heard from gamers who met playing online and ended up married.  And if you don’t think relationships online are real, stop responding to your boss’s emails because you don’t consider them real, and see what happens.

4. “Video Games prevent people from enjoying nature.”

I am not sure where the all-or-nothing thinking here comes from, but I was certainly not saying that people should play video games 24 hours a day instead of running, hiking, going to a petting zoo, or kayaking.  I know I certainly get outside on a daily basis.  But even supposing that people never came up for air when playing video games, I don’t think that would be worse than doing anything else for 24 hours a day.  I enjoy running, but if I did it 24/7 that would be just as damaging.  What I think these arguments were really saying is, “We know what the best way to spend time is, and it is not playing video games.”  I really don’t think it is our business as therapists to determine a hierarchy of leisure activities for our patients, and if they don’t want to go outside as much as we think they ought to, that’s our trip.

5. “I’m a gamer, and I can tell you I have seen horrible behavior online.”

Me too, and I have seen horrible behavior offline as well.  Yes, some people feel emboldened by anonymity, but we also tend to generalize from a few rotten apples rather than the 12 million+ people who play WoW, for example.  Many are friendly or neutral in their behavior.  And there is actually research showing that although a large number of teens (63%) encounter aggressive behavior in online games, 73% of those reported that they have witnessed others step in to intervene and put a stop to it.  In an era when teachers turn a blind eye in “real” life to students who are bullied or harassed, I think video games are doing a better, not worse, job on the whole of addressing verbally abusive behavior.  Personally, I hate when people use the phrase “got raped by a dungeon boss,” and I hope that people stop using it.  But I have heard language like that at football games, and even unprofessional comments at business meetings.  I don’t think we should hold gamers to a higher standard than anyone else.  Look, we’ve all seen jerks in WoW or Second Life, but we’ve seen jerks in First Life as well.  Bad behavior is everywhere.

6. “Based on my extensive observations of my 2 children and their 3 best friends, it seems clear to me that…”

Ok, this one does drive me nuts.  If you are basing your assertions on your own children, not only do you have a statistically insignificant N of 2 or so, but you are a biased observer.  I know it is human nature to generalize based on what we know, but to cite it as actually valid data is ludicrous.

7. “I think face to face contact is the gold standard of human contact.”

Ok, that’s your opinion, and I’m not going to argue with it.  But research shows that it is not either/or, and that the majority of teens are playing games with people they also see in their offline lives.  And let’s not confuse opinion with fact.  You can think that video game playing encourages people to be asocial, but that is not what the research I’ve seen shows.  In fact, I doubt it could ever show that, because as we know from Research 101, “correlation is not causation.”

By now, if you’re still with me, I have probably hit a nerve or two.  And I’ve probably blown any chance that you’ll get my book, which is much more elaborate and articulate than this post.  But I felt compelled to sound off a little, because it seemed that a lot of generalizations, unkind ones, were coming out and masquerading as clinical facts.  Twenty-first-century gaming is a form of social media, and gamers are social.  What’s more, they are people, with unique and holistic presences in the world.  I wasn’t around to speak up in the 50s, 60s, and 70s when therapists were saying that research showed all gays had distant fathers and smothering mothers.  I wasn’t around when mothers were called schizophrenogenic and cited as the cause of schizophrenia.  And I wasn’t around when the Moynihan Report came out to provide “evidence” that the Black family was pathological.  But I am around to push back when digital natives in general, and gamers in particular, are derided in the guise of clinical language.

To those who would argue that technology today is causing the social fabric to unravel, I would cite a quote from my elder, Andy Rooney, who once said, “It’s just amazing how long this country has been going to hell without ever having got there.”

Like this post?  There’s more where that came from, for only $2.99 you can buy my book.  I can rant in person too, check out the Press Kit for Public Speaking info

Radical Transparency

By now you may have read the story of the student in Manchester, NH, who was arrested in his school cafeteria by a police officer who lifted him out of his seat and forced him into a prone position on a table.  Another student captured video of the incident, and you can see it and the story here.  Although the police handling of the situation was clearly disturbing, even more disturbing are the voices of the teachers caught trying to get the student who was recording to stop, and then attempting to take the phone away from him.  By his report they did make him delete a couple of pictures, but the video went undiscovered until it went viral.

It’s time we get real about transparency in the professional world.  My prediction is that the school administration will address this situation by trying either to enforce a no-cell-phone policy or to create a policy that prohibits the use of electronics on school grounds to record such incidents.  I hope they don’t, and instead use this as an opportunity to open some conversations among school staff, parents, students, and officers.  But I will be pleasantly surprised if that happens.

Professionals who work with people “in their care,” be it in therapy, education, or something else, often cite privacy concerns when it comes to transparency.  I’m convinced that the reality is often that they want to protect their own privacy as much as, if not more than, that of their patients.  What happens behind closed doors is secret.

Remember that phrase “you’re only as sick as your secrets?”

Other professionals want to commute to work so they can have a “private life.”  They are outraged by the amount of information available about them online, information that their patients, students, anyone, can access.  When I do public speaking about technology and therapy and education, I often find that privacy concerns boil down to this sort of fear and outrage.  Sure, HIPAA is brought up, but that is usually in the context of another fear: getting sued.

I practice, and encourage my colleagues to practice, what I call “radical transparency.”  I define radical transparency as engaging with technology as if it were always in the public sphere, visible to anyone.  To be clear, this does not mean never using technology to communicate about one’s personal or professional life.  Nor does it mean telling everyone everything all the time.  Rather, radical transparency means that before you “utter” anything via technology, and before any choice you make with technology, you consider what would happen if it one day came to light.

I’m not saying you have to like radical transparency; I’m just saying that it is time we get clear about our relationship to technology, and to others through it.  And I’m not saying I am perfect with it, but I try to comport myself with authenticity.  If you search online you will (hopefully) not find my public posts or comments cutting or snide.  If you somehow got hold of my emails over the past few years, what would emerge is an acerbic, funny, tart guy who is prone to arrogance and does not suffer fools gladly.  You’d find a good deal of kindness and wisdom as well, but certainly you’d find frustration, self-righteousness, and negativity.  In short, you’d glimpse my human condition.  But there you have it; I am prepared to accept the revelation of any warts that may come along.

Radical transparency, I am suggesting, is not just about what you “put out there” on the internet.  It is not about gussying yourself up so you are acceptable to everyone.

Radical Transparency is about getting clear, clear with yourself.

I have found two spiritual traditions especially helpful with this idea.  The first is Buddhism, which talks about nonattachment and going to the places that scare you.  But in this post I want to focus more on the second tradition that has influenced me, which I think may have some good insights into technology and our place in the world.  That second tradition is Quakerism.

What I have learned from Quakers and my own connection to the Society of Friends is the importance of gaining clearness and discernment.  One quote that sums up what I am saying is from an article written by M.L. Morrison in the book Spirituality, Religion and Peace Education.  In it she says:

“Key to a Quaker philosophy of education is the belief that each individual has the capacity for discerning the truth.  The truth does not solely come from the teacher or mentor… The process of getting clear about a particular discernment implies testing it out in a community of fellow seekers.  In this way individuals are accountable to the communities in which they live and learn and the community can support the strength and leadings of its members.” (Italics mine)

What if we started seeing the world, online and off, as that sort of community?  Get clear with who you are and what you’re about.  Be authentic.  And after you have achieved a certain amount of clarity, have a discerning attitude about what you put out there about yourself, and above all behave as you feel you ought.  Am I saying that we all need to adopt Buddhism or Quakerism?  Of course not.  But we need to start focusing first on who we are in the world, not on who shouldn’t be videotaping us.

Technology is not going away, folks.  And adolescents are rightly exploring and testing its limits, because they will be using it to maintain, or more accurately repair, the world we have given them.  September 11th taught these kids that media can be used to bear witness to terrorism and injustice in real time.  And since then, YouTube has proliferated with videos of the atrocities professionals have perpetrated.  I have seen a juvenile collapse walking around a courtyard in lockup, only to be kicked and ignored by the warden when he was in need of medical attention.  I have seen a college student tasered in a library.  We have seen an Iranian woman shot and die before our eyes.

And these images change us, and they go viral.  This is what globalization is: our whole planet struggling to get clear.  And there are lots of people, those in power, who want the status quo.  Keep the doors shut so people have to “go through the proper channels.”  But technology is trending towards dialogue and democracy.  You just can’t get away with being cruel unobserved and unchallenged anymore.  Make fun of a teen who may have Asperger’s and he’ll post a rebuttal on YouTube.

These are the same people as the teachers who tried to take away that student’s cell phone, or the administrators who forced Matt Gomez to shut down his class Facebook page.  All the parents had signed off on it, but concerns about privacy were still cited.  And that, again, I believe is often professional rhetoric for “controlling access to information.”

I have worked on the inside of several school districts, and in all of them I saw stellar educators, people who were always taking risks and getting creative.  And I saw lots of lazy, verbally abusive educators there as well.  The way our education and mental health systems are set up, there are a lot of disempowered, angry people working with even more disempowered, angry people.  And many are in the middle, just trying to keep their heads low and not make waves.  I know, because I have been all of these at one point or another.

This is not going to be so easy from now on.  If you swear at a student, someone is going to record it on their phone and have it posted on YouTube before you can blink.  If you gripe about a patient on your Facebook page, they’ll find it and call you on it.  And those of you who are trying to just keep out of it all, we’ll see you too.  More importantly, you’ll see you.  And when the kid you ignored being bullied, because you didn’t want to deal with it that day, kills himself, you’ll have to live with the guilt of knowing that thousands of people he never knew reached out to assure kids like him that it gets better, while you, the person who saw him every day or week, just sat there and did nothing.

Talking about patients online, getting rough with a student, shooting a woman: yes, these are all very different events.  But they all connect around the idea of an ethics of radical transparency.  Or as Rainer Maria Rilke put it in “Archaic Torso of Apollo”:
for here there is no place
that does not see you. You must change your life.

Like this post?  There’s more where that came from, for only $2.99 you can buy my book.  I can rant in person too, check out the Press Kit for Public Speaking info