Category Archives: Uncategorized

Slides from my PDF 2011 Talk “The Power of Strong Ties, The Power of Weak Ties” #pdf11

The PowerPoint slides from my talk at the Personal Democracy Forum in 2011 are available. My talk was titled “The Power of Strong Ties, The Power of Weak Ties.”

The video of the talk has been uploaded here and on YouTube:

For download:

Zeynep PDF

At SlideShare:

 

The Revolution Will be Self-Organized, Tahrir, #May27 (part 1)

My plane landed in Cairo in the early hours of the morning on May 27, the day of a key protest in Tahrir, Cairo, dubbed the “Revolution Part II.” After a few hours of jet-lagged sleep, I headed to the square with my hosts. In this blog post, I cannot give an account of the complex political discussions taking place among activists and among Egyptians. Instead, I am going to try to communicate the “spirit of Tahrir” as I witnessed it: festive, self-organized, cautious, sharply political and ambitious. (Also, I will blog in two parts, as I am still in Cairo and don’t have time to write everything in one post!)

 

There was a lot of tension before May 27; rumors were flying around. In fact, after announcing that I was planning to attend, I received several emails from local friends cautioning me to be careful. The key demands of the protest were respect for the law and the constitution, and an end to the military tribunals of dubious legality and transparency.

 

A few days before the protest, the Supreme Council of the Armed Forces issued a statement that it would not protect the Friday protest due to “the possibility that suspicious elements will try to carry out acts designed to drive a wedge between the Egyptian people and its armed forces.” This only heightened tensions, as some saw it as a veiled threat. The Muslim Brotherhood, too, declared that it was not joining the protests and talked about the event as undesirable.

 

However, the end result was that the protest was self-organized by the Coalition of the Youth of the Revolution. There was not a single police or Army officer in sight. (There were apparently four policemen quietly stationed near the Egyptian Museum.) And, to the relief of many, it was a festive, peaceful and rambunctious event. While it could be seen as a more liberal event, I saw many people who were clearly not from the youth coalition, including some young people from the Muslim Brotherhood and other religious groups, as well as many families and others. The square was packed with tens of thousands of people, along with street merchants selling flags, posters, Egyptian “Jan25”-themed t-shirts, fruit juices, sweets and whatever else one may need! Many people brought their families, and many children, some with the colors of the Egyptian flag painted on their cheeks, were happily running around under the hazy, hot Cairo sky.

 

Esme and Besmele Enjoying #May27 from their parents' shoulders

 

A wicked sense of humor was on display everywhere. Two young men with distinctly Islamic beards carried a large banner for “Salofya Costa,” i.e. a Salafi coffeeshop – a wordplay on the coffee chain “Costa Coffee.” Their banner declared that the drinks were on them since, as Salafis, they always get stuck with the bill and the blame for everything. I do not mean to minimize the harm that can come from religious extremism, but surely interaction with a sense of humor and irony may be a first step in trying to figure out peaceful, tolerant ways of co-existence among people of different persuasions. I have since learned that these men are from a breakaway religious youth group, bothered by the fact that their elders refused to interact with the larger society or with people of opposing views.

Salafi youth declaring the coffee is on them, since they always get stuck with the blame/bill

Most of the humor, however, was directed against Mubarak and the old regime. Here, this poster asks where the revenue went and shows Hosni Mubarak and the head of the Suez Canal Authority, Ahmed Fadel, sharing it amongst themselves – here’s your part, here’s my part.

 

Also, Egyptian protesters are clearly aware of the world’s attention on their remarkable experience. In fact, as I have argued in many places, one of the main dynamics fueling the revolution derives from the structure of the attention economy in the networked world. In a strong feedback loop with Al-Jazeera, citizen-media activists on the ground were able to mobilize the attention of their countrymen and women, as well as the whole world, to one relatively small spot in a large, sprawling city in a large, diverse country. It is from that masterful, heroic attention-grab that the power of Tahrir grew. Here, below, two pictures tell this story. On the right is the Al-Jazeera camera on the 8th floor of a building overlooking Tahrir (here’s the hilarious story of how it got there). On the left is an activist on the ground, filming with his smartphone, surely connected to the social networks that open up the world.

 

 

 

There were even live broadcasts from the square, with people using 3G modems and laptops:

Broadcasting live from Tahrir on #May27

 

Aware of all this attention, protesters were often strategic in their messaging. There were many signs in English, some clearly intended as an active effort to control their own narrative. In the sign below, a protester chides Obama, declaring that pride and dignity are motivating them more than mere concerns about food prices:

In fact, this was a common theme I heard again and again: that a feeling of pride and ownership of the country had emerged after the revolution, and this was the main driver of the protests. Clearly, in this brief visit, I have been talking mostly to people who are not desperately poor, so I have not heard the opinions of a wider cross-section of the population. (In fact, Al-Ahram was in the field doing a survey which will compare the views of the people protesting on the street on May 27 with those elsewhere in Cairo.) However, everyone I talked to found a way to express to me, sometimes in perfect English, sometimes with barely more than hand gestures and the aid of dictionaries, how proud they were of their country, of the protests, of the fact that they were able to oust Mubarak.

After every protest in Tahrir, volunteers carefully clean up all the trash and leave Tahrir sparkling, something, I am told, almost unimaginable during the days of the old regime, when such civic pride and engagement were scarce. I got interviewed by a young man who apparently has started a Facebook page inviting people to visit Egypt again! He had taken it upon himself, with his friends, to try to attract tourists back. (By the way, it is an excellent, excellent time to visit Egypt. Prices are down, tourist sights are not crowded and the people are beaming.)

Many guidebooks will warn you against Cairo cab drivers trying to scam tourists; here, cab driver after cab driver diligently tried to give me back exact change and only reluctantly accepted (modest) tips. I asked around if this was typical and if I were breaking some custom with (modest) tips – it appears, no, this is the new Egypt. People are eager, hopeful, desperate and ready, ready, ready for a new start in this beautiful, neglected country.

Self-Organization:

There was not a single police or army officer in sight. If there were any, they must have been stationed well away from Tahrir Square. The organizers had set up rope and human barriers at key entrances and were searching everyone. This was more an effort to show ownership of the square than to deter a truly determined party since the square, as the police found out in January, is quite large, with multiple large and small entrances.

Self-organized Checkpoint from the Qasr Al-Nil bridge area to Tahrir on May 27

The organizers were identified with orange badges and took turns manning (and womanning, as females were searched by women volunteers) the many entrances. At my last entrance, the polite young woman doing the search apologized to me, as she seemed to do to everyone she had to search, even as she did a fairly good job of looking through my small purse. Unlike regular police, she was not socialized into the idea that there is nothing disturbing about treating people as if they may do something wrong before they’ve done anything wrong. She did her job diligently, knowing it needed to be done, but was also clearly uncomfortable with her role of treating people as presumed troublemakers. It was as if she symbolized the tense transition facing the idealist street activists of Cairo, who are now struggling with questions of governance and organization, of how to contest elections and how to deal with the myriad powerful forces, from the Army onward.

Volunteer from youth coalition with her badge

 

In fact, these discussions seem to come up quite often among the activists, especially regarding the question of the “power of the street” and these kinds of protests versus electoral results. Some argue that the street has its own power even if this doesn’t necessarily translate into electoral power; others worry that better organized groups like the Muslim Brotherhood and allies of the Army will gain the lion’s share of the seats in the next election, and therefore write the constitution to their liking, before the more liberal, secular or anti-authoritarian groups have a chance to organize. Indeed, this seems to be the challenge for this transition. At the end of the night, there was discussion among the activists about whether to continue with a sit-in or to leave. In the end, they decided to leave and to organize for a bigger protest next week.

As night fell, the protesters left Tahrir with hope, with determination and with well-deserved pride in having pulled off a festive, self-organized demonstration. Their challenges are many and their battles uphill, but their level of organization and optimism is nothing short of impressive.

(to be continued)

 

Egypt! Last week of May, 2011!

I am going to be in Egypt the last week of May! My panel (with Sarah Abdel Rahman (@sarrahsworld) and Mahmoud Salem (@sandmonkey)) is scheduled for the 30th of May at the University of Cairo at 4pm. Please come by if you are in Cairo — or contact me at zeynep at umbc.edu if you would like to set up a meeting during my visit. I am most interested in listening to people and learning from their experiences!


Why Twitter’s Oral Culture Irritates Bill Keller (and why this is an important issue)

Bill Keller of the New York Times has just written a provocative piece lamenting that new technologies are eroding essential human characteristics. I would certainly agree that almost all technologies, especially those with a cognitive element, transform the way we organize, value and manage our intellectual and social lives; indeed, such complaints have been raised before, most famously by Plato, who worried that writing was emptying words of their soul by disconnecting them from their living speakers. However, Keller makes not one but at least three distinct claims in his piece. I want to primarily discuss the one that he makes least explicitly and perhaps has never formulated directly himself.

But first, let’s clarify the other two claims, which are explicit.

First, Keller talks about how we no longer need to remember everything, how his father used to use a slide rule and now there are calculators, and who knows their multiplication table anymore… This is a familiar argument about cognitive replacement, and I believe it is worth discussing, not necessarily because there is something inherently wrong with machines making certain cognitive tasks easier, but because I deeply worry about what this means for how we value humans. Cheaper computers increasingly capable of taking over human tasks mean that we face a profound human problem: how will we deal with the billions of people who will be potentially redundant if the only way of measuring a human’s worth is their price on the labor market? For me, this is an important political question rather than a technological lament. It’s not about what machines can do; it’s about the criteria by which we judge the worth of our fellow human beings, and how advances in information technology increasingly lead us to devalue each other.

Second, Keller argues that “there is something decidedly faux about the camaraderie of Facebook, something illusory about the connectedness of Twitter.” This line of argument, that our social ties are being hollowed out by digital sociality, is also fairly common. I’d like to start by saying that it is not supported by empirical research. Almost all research I have seen shows that people who are social online tend to be social offline, or at most the effect is neutral, and that most people interact socially online with people with whom they also interact offline — i.e. the relationship between online and offline sociality is mostly one of complement and reinforcement rather than displacement and replacement. Increasing numbers of people even make connections online which they then turn into offline connections (see Wang and Wellman, for example), so that even actual “virtual” connections — which I have just argued are less common — are valuable for many communities who otherwise do not have abundant peers around them, say cancer patients or gay youth in small towns.

I do, however, agree that the integration of digital sociality is transforming our social networks, but I believe the worst off are not those who use these social media platforms but those who are unable or unwilling to get on Facebook or similar tools, or to use them effectively – such folks are in grave danger of falling out of the rhythms of sociality of their social networks. The effect is particularly exacerbated if one is in a vertically-integrated social network — i.e. if all your friends and relatives are still using the phone and mailing out postcards and invitations, you are fine. However, if most of your social circle has now taken to chatting about everything on Facebook and sending out email invites, and you are sitting by the phone, that’s not a good situation. I am hoping to write at greater length about this topic of social isolation and digital connectivity, so let’s leave this one for the moment.

But here are the parts of Keller’s comments which have intrigued me and convinced me to write this post: (quote order mixed, original here).

My mistrust of social media is intensified by the ephemeral nature of these communications. They are the epitome of in-one-ear-and-out-the-other, which was my mother’s trope for a failure to connect.

Eavesdrop on a conversation as it surges through the digital crowd, and more often than not it is reductive and redundant. Following an argument among the Twits is like listening to preschoolers quarreling: You did! Did not! Did too! Did not!

In an actual discussion, the marshaling of information is cumulative, complication is acknowledged, sometimes persuasion occurs. In a Twitter discussion, opinions and our tolerance for others’ opinions are stunted. Whether or not Twitter makes you stupid, it certainly makes some smart people sound stupid.

The shortcomings of social media would not bother me awfully if I did not suspect that Facebook friendship and Twitter chatter are displacing real rapport and real conversation, just as Gutenberg’s device displaced remembering. The things we may be unlearning, tweet by tweet — complexity, acuity, patience, wisdom, intimacy — are things that matter.

 

… Then along came the Mark Zuckerberg of his day, Johannes Gutenberg. As we became accustomed to relying on the printed page, the work of remembering gradually fell into disuse.

 

But this comparison between Gutenberg and Zuckerberg makes little sense unless you realize that Keller is actually trying to complain about the reemergence of oral psychodynamics in the public sphere rather than about memory falling out of favor. If the latter were the case, his ire would be directed more at Google; instead, most of his frustration is directed against social media, and mostly Twitter, the most conversational, and thus the most oral, of these media.

The key to understanding this is that while writing did displace the value of memory, the vast abundance of printed material also did something else, something less remarked upon, both to the shape of our public sphere and to our psychodynamics. It replaced the natural, visceral oral psychodynamics of humans with literate, written ones. Most of us are so awash in this new form that we notice it about as much as fish notice water; however, writing is but a blip, and the printed form a flash, in human history. Orality, on the other hand, is perhaps the most human of our characteristics, and ironically, its comeback into the public sphere is what Keller is lamenting even as he worries about losing our human characteristics. What he seems to actually mean is that, with the advent of writing and printing, we *acquired* new cognitive tools and novel psychodynamics [and I should note that they never took that much root in most recesses of culture and thus remain fragile], and these are threatened by social media, which re-introduces older forms that, of course, never died out but merely receded from public importance.

Here I am going to be drawing upon the scholarship of Walter Ong and others who distinguish the characteristics of oral societies from those dominated by writing — and Europe and the United States are thoroughly dominated by written culture, even though oral culture is still with us, because orality is deeply and intrinsically human; all human societies are also oral cultures. (This is true even for Deaf communities; the only difference is that their orality is visual, not spoken.) Primary orality refers to cultures which are untouched by writing, whereas residual orality refers to cultures like ours, where writing dominates even our speaking.

The oral world is ephemeral; it exists only suspended in time, supported primarily through interpersonal connections, and survives only in memory. Rather than building final, cumulative works, it is aimed at conversation and at retaining knowledge by rendering it memorable, which can often mean snarky, witty, rhythmic and rhyming. (Think poetry slams rather than essays.)

In oral psychodynamics, a conversational, formulaic styling dominates (which aids memory), along with back-and-forth, redundancy, an emphasis on being less analytic and more aggregative, and being additive rather than developing complex, subordinate clauses (the classic example is Genesis, which, like Homer’s Odyssey, is an oral work that was later written down). Oral psychodynamics also tend to be more antagonistic, interpersonal and participatory. (Wikipedia does a pretty good job of summarizing these arguments, but I strongly advise reading Ong’s Orality and Literacy: The Technologizing of the Word for a more thorough treatment — though I have some issues with Ong’s arguments, I think they are well worth taking seriously.)

Sounds a lot like social media, does it not? In fact, Andy Carvin often refers to his Twitter reporting as, in part, preserving oral history, and I think he is spot on. This distinction is probably a bit harder to observe in the English Twitter-verse, since English is so thoroughly colonized by writing. Whenever I dive into Turkish Twitter, I notice tweets employing many forms of Turkish which are found solely in oral Turkish and almost never written down in literate culture. This distinction may be more visible in other societies where oral culture was not as decisively beaten back as in the English-speaking world — which makes it harder to explain the issue in English. (I think the so-called “black-tags” fit very well into oral culture traditions and likely reflect the fact that African-Americans are more steeped in oral culture due to their history in this country. Farhad Manjoo once examined this issue, concluding that these witty, snarky, back-and-forth exchanges became trending topics because African-Americans on Twitter tend to be in denser, interconnected networks — small-world networks, so to speak. However, that explains the how, not the why. The strong phatic nature of these “black-tags” points to oral culture as their root.)

The difference between oral language and written language is also why bad scripts in movies sound so stilted and why written transcripts often look so funny. Those bad scriptwriters are stuck in literate English rather than the spoken word. Oral/spoken language is related to, but different from, written language, and not just in phrases and grammar but also in mood, effect and rhythm.

What we are seeing with social media is that the public sphere, hitherto dominated by written culture, has been opened up to oral psychodynamics. And this is particularly difficult to deal with for intellectuals who rely on their competence with, and dominance of, the written form as the hallmark of their place in society. (As I will argue, there are reasons to be concerned, but it is important to separate these issues.) Television, too, is a form of secondary orality, in that it assumes and implies writing. (I am not going to go into this at length here, but there is a lot of work on this topic, starting with Ong.)

So, should we be concerned? Does this raise problems? Yes and no. A good chunk of social media is dominated by social grooming, and social grooming is definitely rooted in oral psychodynamics; however, there probably isn’t more of it because of social media — it’s just more visible. This is nothing to be alarmed at. Let me quote from my review of Carr’s book:

Which brings me to another common complaint which Carr does not highlight as much but which I have been hearing more often lately. What about all the “crap” on the Internet? The silly cat pictures, the trivial Twitter updates, the banal Facebook postings, the million Youtube videos of pets, kids, household accidents, pranks, etc.? Surely, that is evidence of intellectual decline?

That, my dear friends, is called humanity. That’s what humans do. We are a deeply social species and we engage in “social grooming” all the time, i.e. acts that have no particular informational importance but are about connecting, forming, displaying and strengthening bonds, affirming and challenging status, creating alliances, gossiping, exchanging tidbits about rhythms of life. I personally doubt that there is substantially more social grooming going on today, on average, compared to the pre-Internet era. The only difference is that the Internet makes it visible. What used to be spoken is now written and published potentially for the world to see. That’s it. There isn’t more or less of it.

I think all the horror and outrage at txtspeak and other unconventional spelling is part of this story. It is mostly a turf war waged by the literate classes against the encroaching oral culture. English spelling is quirky, illogical and the result of historical accidents. If the Great Vowel Shift had not happened when it did, we might have had a reasonable system worth defending. Yes, I, too, am a product of this system, and I, too, cringe at “c u l8r.” However, I suspect I just need to get over it, just as any logical, reasonable learner of English has to get over her horror at the fact that “tough,” “thought,” “through” and “thorough” are all spelled so similarly when they sound so different. A lot of this angst is about conventions, and conventions evolve, which always horrifies those who have acquired privilege and power by mastering certain conventions while dismissing others. Cultural capital, in other words.

But I do share some concerns. Oral psychodynamics are not well-suited to certain kinds of public discourse which are based in the affordances of writing, especially long-form writing. Let me again quote from an earlier critique of the iPad I wrote:

Writing, especially writing at length, is a different modality of thought than talking, and it also allows a different kind of exchange and discourse. (I refer specifically to the scholarship of Neil Postman and Walter Ong.) As Postman argues, writing and the spread of the printed word through literacy and the printing press created a culture in which it is possible to debate ideas at length and produce analytic thought which can be produced, advanced, discussed, refuted, rejected, improved and otherwise churned through the public sphere. As Postman writes in Amusing Ourselves to Death: “almost all of the characteristics we associate with mature discourse were amplified by typography, which has the strongest possible bias toward exposition: a sophisticated ability to think conceptually, deductively, and sequentially; a high valuation of reason and order; an abhorrence of contradiction; a large capacity for detachment and objectivity; and a tolerance for delayed response.” (p. 63)

In other words, I do believe that Twitter-like environments are not well-suited to certain kinds of complex argument development and closure. It’s not solely because they are social, but that is part of the picture.

The pressure to provide the memorable quote (so that one gets retweeted, the Twitter equivalent of the oral psychodynamic of striving to be remembered); the ephemerality of the conversations and the difficulty of making sense of those one did not participate in (just like spoken ones); the length limit (just like the oral world, since it is hard to have a conversation paragraphs or pages at a time); and the visceral, interpersonal nature of the discussion all mean that a world in which Twitter became the sole means of discussing important public issues would indeed be a poorer one. There is a great need to preserve and expand the long form, not just through newspapers but through blogs and other forms.

However, Twitter and other such tools also present a great opportunity to bring into the public sphere, and into important conversations, greater numbers of people who would otherwise be excluded. Rather than seeing this as a turf war in which the literate classes must defend their territory against the barbarians at the gate, the question should be how we can preserve the better aspects of the ideal of the reasoned, complex and rational public sphere without descending into elitism. (I say the ideal because, as Dave Parry often points out, usually on Twitter, the Habermasian ideal of the public sphere, well, never really was.)

I see the recent interest in Storify and other curation and preservation tools as an important step in this direction of integrating oral social media with the rest of the public sphere. I think there should be an effort to preserve longer-form blogging and not abandon it in favor of the quick exchange of Twitter (as Anil Dash said, it [almost] does not exist if you did not blog it). I think that rather than dismissing Keller’s concerns, the digerati should dig into this unease shared by many members of the literate classes and take apart the various issues.

And Bill Keller should understand that, at its best, Twitter is not a broadcast medium but a medium of conversation. What he has done so far on Twitter is the equivalent of walking into a party, saying a provocative sentence, and then sitting in the corner sipping his cocktail – as in “#twittermakesyoustupid. Discuss.” Social encounters are satisfying and worthwhile mostly to the degree that one participates in conversations rather than announcing witticisms and withdrawing. Yes, I am a professor, but I do not walk into random rooms and expect people to quietly take notes on what I am saying while I launch into a speech, projecting my voice to the back of the room. Keller cannot understand this medium if he treats it as something different from what it is, and understanding it requires participation in its indigenous form: conversation.

I thus urge the Literati to come join the social media conversation with the understanding that some of their strengths will not be as valued, that they will need to relearn certain skills, and that some parts of the experience will be annoying – but just as with some good literature, it often takes some effort to grasp the value of a new form. The literate should accept that this is now an inseparable part of the public sphere and that increasing numbers of people who were otherwise excluded can now be heard; yes, they don’t always think or say what I wish people thought or said, but what else is new? Given the complexities of the issues facing humanity, engaging this expanded public sphere is of crucial importance to anyone concerned about how we, as humans, will continue to live our lives, socially, economically and politically.

And I urge the Digerati not to dismiss these anxieties as mere signs of a “get off my lawn” malady. Certainly, I occasionally get that sense as well, but this is an opportunity to have significant discussions on the ongoing reshaping of global networked public spheres. This debate needs to happen based more on substance than on sides, turfs and their defense.

 

Faster is Different. Brief Presentation at Theorizing the Web, 2011

Over the weekend, I attended a great conference called “Theorizing the Web.” The lead organizers were Nathan Jurgenson and PJ Rey, two awesome graduate students at the University of Maryland. We have been meeting regularly for years, and a common complaint among us has been the lack of suitable academic outlets for the kind of work we do.

Well, if it doesn’t exist, make it. And thus this conference was born.

I was on a panel with Dave Parry, Deen Freelon, Marc Lynch and Henry Farrell titled “Revolution 2.0? The Role of the Internet in the Uprisings from Tahrir Square and Beyond.” In order to make sure that we had enough time to interact with the audience as well as with the backchannel, we limited speakers to just seven minutes.

Audio from the panel is here, and the PowerPoint slides for my brief presentation are here.

Please keep in mind that this was not a comprehensive presentation due to the conscious time limit.

I basically argue against the misconception that acceleration in the information cycle simply means that the same things will happen as before, merely at a more rapid pace. You can’t just say, hey, people communicated before; it was just slower.

That is wrong. Faster is Different.

Combined with the reshaping of networks of connectivity from one/few-to-one/few (interpersonal) and one-to-many (broadcast) into many-to-many, we encounter qualitatively different dynamics. I draw upon epidemiology and quarantine models to explain why resource-constrained actors, i.e. states, can deal with the slower diffusion of protests using a “whack-a-protest” method, whereas they can be overwhelmed by simultaneous and multi-channel uprisings which spread rapidly and “virally.” (Think of it as a modified disease/contagion model.) I use a comparison between the unsuccessful Gafsa protests in Tunisia in 2008 and the successful Sidi Bouzid uprising in Tunisia in 2010 to illustrate the point.

Under normal circumstances, autocratic regimes need to lock up only a few people at a time, as people cannot easily rise up all at once. Thus, governments can readily fight slow epidemics, which spread through word-of-mouth (one-to-one), by the selective use of force (a quarantine). No country, however, can jail a significant fraction of its population rising up at once; the only alternative is excessive violence. Thus, social media can destabilize the situation in unpopular autocracies: rather than relatively low-level and constant repression, regimes face a choice between crumbling in the face of simultaneous protests from many quarters and a massive use of force. While, unfortunately, we do see violent reactions from regimes, it is certainly not a desirable or sustainable outcome for the autocrats. They want to rule, not fight civil wars.
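To make the logic concrete, here is a minimal toy sketch of that contagion argument in Python. This is only an illustration of the threshold dynamic, not the actual model from my presentation; the recruitment rate, the repression capacity and every other parameter are invented for the example.

```python
# Toy "whack-a-protest" sketch (illustrative assumptions only): each active
# protester recruits `spread_rate` others per step, while the state can
# suppress at most `capacity` protesters per step. The quarantine holds only
# while the active count stays below capacity / spread_rate.

def simulate(population=100_000, seed_protesters=500,
             spread_rate=0.05, capacity=100, steps=60):
    """Return the number of active protesters at each step."""
    active = seed_protesters
    at_risk = population - seed_protesters
    history = []
    for _ in range(steps):
        new = min(at_risk, int(active * spread_rate))  # word spreads
        active += new
        at_risk -= new
        active = max(0, active - capacity)             # selective repression
        history.append(active)
        if active == 0:
            break
    return history

# Slow, word-of-mouth diffusion: repression keeps up and the protest dies out.
print(simulate(spread_rate=0.05)[-1])   # 0 active protesters
# Fast, many-to-many diffusion: the very same capacity is overwhelmed.
print(simulate(spread_rate=0.50)[-1])   # most of the population is active
```

The exact numbers are meaningless; the point is the threshold. Multiply the spread rate tenfold and the same repression budget flips from sufficient to irrelevant: faster is different.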

I will write a much more detailed paper and post about this at some point, but I’m throwing this out there as initial food for thought. Feedback welcome!

Hi-Tech does not mean high-quality jobs (My Washington Post op-ed from 2004)

Reading about the jobs report today, and noticing the discussion started by Umair Haque about how, contrary to the way it is being covered, it is not a good report, I decided to repost an op-ed I published in the Washington Post in 2004. As Haque said on Twitter:

My conclusion? We’re broken. When we do create jobs, they’re low quality, no-future McJobs. It’s not a positive jobs report–a terrible one.

The original link to the op-ed is here (however, for some reason the full text does not show up).

They Can Point and Click, But Still End Up Painting Walls

By Zeynep Tufekci

The Washington Post. Sunday, January 25, 2004; Page B04

AUSTIN

One proposal in President Bush’s State of the Union address that sparked enthusiastic bipartisan applause was the “Jobs for the 21st Century” initiative. “We must ensure that older students and adults can gain the skills they need to find work now,” the president said, adding that “many of the fastest-growing occupations require strong math and science preparation, and training beyond the high school level.”

Among unemployed workers or those stuck in menial jobs, however, it will take more than good job training programs to elicit a standing ovation.

If there were ever a job training program to match the description of the president’s “Jobs for the 21st Century” initiative, it would be the one I have been studying in this high-tech metropolis for the past three years. The program is well-run, adequately funded, staffed by enthusiastic people and attended by resolute, hard-working individuals. It is spearheaded by local businesses, the city and institutions of higher education. People who may have never touched a computer learn how to do word and data processing, acquire an e-mail address, and search and apply for jobs online.

And most of them still cannot find decent jobs, if they can find jobs at all.

Ironically, many of them have been laid off from low-paying, assembly line jobs in the high-tech industry. They can rattle off how to connect the motherboard and the disk drive, but they have never pointed and clicked. Most have been through numerous cycles of layoffs and rehirings, each time taking a pay cut and losing seniority — until the rehiring that never came.

They start the program desperate to find jobs to pay the bills. The schedule — three hours an evening, four days a week — is grueling, especially for those without reliable cars or adequate child care. Nevertheless, many of them never miss a class. They come early and stay late. And after months of intensive training, they go into the job market confident and hopeful.

When I interviewed many later, though, some were still fixing furniture, answering phones, cleaning houses and painting walls. Others remained unemployed. The problem is that the only available jobs that use computers are those as secretaries and receptionists. The men believe they can’t get such jobs, and many of the women feel the bosses will hire only those who are young, thin and pretty. And even then, those jobs have meager pay and benefits, and long hours. (In another research project in economically depressed high schools, we found that some boys actively avoid learning about computers because they consider such jobs “women’s work” — in other words, low paying, uninteresting and unglamorous.)

The lesson here is that not every job that uses high-tech tools involves high-level skills or high pay. In his speech, Bush said “as technology transforms the way almost every job is done, America becomes more productive, and workers need new skills.” Sometimes that’s exactly the problem. More productivity means more can be done with less, which often means fewer jobs, less skilled work and, consequently, less pay.

Although some of the fastest-growing types of jobs do require advanced training, they are only a small proportion of the market — making the total number of new jobs in those areas very small. The Bureau of Labor Statistics (BLS) projects that by 2010, only 20.7 percent of all jobs will require a college degree or more, something 25 percent of the population already has. The BLS also projects that by 2010 almost 70 percent of job openings will only require work-related training and 42.7 percent only short-term on-the-job training — mostly, “Here’s your apron; don’t be late.” The fields adding the largest number of jobs are “combined food preparation and serving workers, including fast food,” followed by “customer service representatives,” “registered nurses,” “retail salespersons,” “computer support specialists,” “cashiers” and “office clerks.” Even computer support specialists require only an associate degree; they had median annual earnings of $36,460 in 2000.

Some of the well-paying fields that the president mentioned, such as biotechnology, are simply beyond the reach of the unemployed. And if the number of good jobs continues to decrease, advanced education will be no panacea for today’s students, either.

When good jobs are few, higher skills can become part of a zero-sum competition, as in the computer sector following the dot-com bust. One trainee told me that she was there simply to “learn the lingo” of computers. She said that employers had so many applicants they were discriminating on an arbitrary basis. Even if the job didn’t really require computer skills, she said, they still wouldn’t hire you if you couldn’t say you “knew” computers. There seemed to be something to her method, as she did manage to find a job as an accounting assistant.

While promoting his program in Ohio, Bush said, “The key is to train people for the work which actually exists.” That’s true. That’s why we must create better jobs. The Economic Policy Institute recently found that in 48 states, jobs are shifting from higher-paying to lower-paying industries. It’s time to take the bipartisan blinders off and stop pretending that, if only people got training, they would find good jobs.

Training programs should still be supported not because they magically compensate for the lack of good jobs, but for the civic and personal empowerment they provide. Trainees tell me how they have refinanced their houses online and cured the goats they were raising by looking up information on the Internet. Most important, they no longer feel stupid and left behind in an age, as one trainee put it, “where everyone and their momma is computerized.”

When even Wal-Mart accepts applications at computerized kiosks in its stores, losing one’s fear of computers can make a real difference. One day, a trainee proudly handed me a flier advertising her services that she had made on the computer. Furthermore, she explained, she now keeps her accounts on a spreadsheet and uses MapQuest.com to get directions to the houses that she cleans on her hands and knees, seven days a week, 12 hours a day, for a pittance.

Before they learned these skills, the trainees thought that it was their lack of computer skills that prevented them from getting those good information-age jobs (touted by every president since Ronald Reagan). They thought something was wrong with them. Now they know something is wrong with our job market.

Twitter and the Anti-Playstation Effect on War Coverage

As I follow the remarkable political transformations ongoing in the Middle East and North Africa through social media, I’m struck by the depth of the difference between news curation and anchoring on Twitter versus on television. In this post, I’d like to argue that television functions as a distancing technology while social media works in the opposite direction: through the transparency of the process of narrative construction, through the immediacy of the intermediaries, through the removal of censorship over images and stories (television never shows the truly horrific pictures of war), and through person-to-person interactivity, social media news curation creates a sense of visceral and intimate connectivity, in direct contrast to television, which is explicitly constructed to separate the viewer from the events.

Although it is the first factor most people think of, I believe that the distancing effect of TV isn’t just because TV is broadcast and social media is interactive. In fact, while the potential for interactivity is a significant factor, most people interact with only a few people on social media, and people who act like news hubs mostly broadcast — their messages reach many. (Check out this brand new study.) I think the substantive differences also lie elsewhere, and in this post I want to examine two key mechanisms which alter this sense of distance: the construction of the role of the anchor or curator, and the role of content filtering.

On television, the traditional broadcast news anchor is explicitly distant from and unmoved by the events he or she is covering. This distancing is structured through the visual framing (sitting upright, staring ahead, crisp but predictable intonation, professional and muted dress); the effortless but abrupt transitions from story to story (“Now I am going to tell you about starving children. Next up, a new treatment that can ease your wrinkles. Later: will it rain tomorrow?”); the semi-frozen, plastic demeanor of the anchor no matter the story and the clear lack of personal involvement or concern; and the constant interruption by ads — a very surreal and disorienting experience unless one has been thoroughly conditioned into accepting them as normal. All this positions the anchor between us and the event and signals to the viewer that the event is merely something to be watched, and then you move on.

Compare that to the closest example of an anchor to have emerged during this period. I’m going to use Andy Carvin, who has been curating and anchoring coverage of the uprisings since they began in Tunisia, sending hundreds of tweets and retweets per day, usually from early morning into the night, as an example.

Carvin himself compares his role to that of an anchor, except with his Twitter followers as his producers and news sources rather than traditional professionals. However, there are significant differences. First, Carvin is immersed in the story. He does not move from unrelated topic to unrelated topic the way a traditional news anchor does. Second, his tweets are his own words, so we have a distinct sense of a person between us and the events rather than a figurehead reading words from a teleprompter that he or she did not write or think. Third, he does not construct his position as one of distance and uncaring; he is not hiding his opinions or sympathies. Fourth, his news gathering and curation process is transparent — and that evokes a different level of engagement with the story even if you are only a viewer of the tweet stream and never respond or interact. I believe the fourth point is often underappreciated.

With Carvin’s constant and transparent efforts at verification and confirmation, followers get a visceral sense of how news is “cooked.” Rather than the “final” package we encounter on television, delivered to us as a relatively infallible, ready-to-consume product to accept uncritically, on Twitter curation feeds we are often in a position to observe the process by which a narrative emerges, trickle by trickle. “Polished” and “final” presentation of news invites passivity and consumption, whereas visibility of the news gathering process changes our interaction with it into a “lean-forward” experience. Carvin’s reporting is not infallible — although most of the stories from citizen-media sources turn out to be fairly accurate, belying the idea that Twitter is a medium in which crazy rumors run amok — but it wears that fallibility on its sleeve and is openly submerged in a self-corrective process in which reports and points of view from multiple sources, including citizen and traditional media, are intertwined in an evolving narrative.

Thus, the process engages the “audience” not necessarily because most of Carvin’s tens of thousands of followers will actually contribute to the story or interact directly with him or his sources, but because, unlike the opacity of modern production systems in which everything is delivered to the consumer shrink-wrapped, “cleansed” of hints of its origin and the process by which it was produced, news curation on Twitter is somewhat holistic, messy, and very much connected with its origins. Consequently, the foibles and pitfalls — the unverified stories, the difficulty of getting reliable news from closed regimes and war zones, translation issues, misunderstandings — are viewable by the audience in real time.

This visibility of the process is a step in the opposite direction from French philosopher Jean Baudrillard’s famous assertion that we are increasingly moving towards a “precession of simulacra,” in which the simulation (“the news”) increasingly overtakes any notion of the real and breaks the link between representation and the object — often in the form of spectacle — ultimately erasing the real. Baudrillard famously wrote a series of essays titled “The Gulf War Did Not Take Place” — he was not claiming that bombs were not dropped and people were not killed. Rather, he argued that, for Western audiences, the first Gulf War was experienced merely as green flickers on TV screens narrated by familiar anchors, without much connection to actual reality — reality as inhabited by human beings at a human scale.

Censorship and Graphic Content: Shrink-Wrapped Humanity

One important way in which the curator/anchor role on Twitter differs from television relates to the second mechanism: the availability of unfiltered, graphic content. There has been a steady stream of photographs and videos depicting the reality of war and violence: images of the dead, dying and wounded of all ages, from small babies to elderly men and women, have been circulating on social media sites. Understandably, victims of such violence did not feel a need to censor their very real suffering and keep it from us, the way television steps between us and the victims and isolates us, the audience, from reality.

After months of such coverage, I have found it harder and harder to look at more photographs and I do understand the need to provide mechanisms by which people can choose not to look at a particular photograph at a particular moment. Such images have been haunting my dreams and I’ve chosen, for the moment, to stop looking at any more. So, this is not a blind endorsement of flooding everyone with images of death and gore; I understand that there are awful things happening somewhere, every minute, and we cannot always be immersed in such misery and sorrow.

However, I am firmly of the opinion that the massive censorship of reality, and of images of this reality, by mainstream news organizations from their inception has been incredibly damaging. It has severed the link of common humanity between “audiences” in one part of the world and victims in another. This censorship has effectively relegated the status of other humans to that of livestock, whose deaths we also do not encounter except in an unrecognizable format in the supermarket. (And if anyone wants to argue that this is all done to protect children from inadvertent exposure, I’d reply that there are many mechanisms by which this could be done besides constant censorship for everyone.) While I cannot discuss the reasons behind this censorship in one blog post, suffice it to say they range from political control to keeping audiences receptive to advertisements.

Such images take a visceral and deep toll, and the fact that anchors do not pretend to be unaffected increases their immediacy. After a particularly harrowing video showing newborn babies in a damaged hospital, Andy Carvin started tweeting about how shaken he was:

Soon, a pediatrician and a neonatal unit nurse stepped up to assure Carvin — and the rest of us — that the babies seemed okay; rather than being injured, one appeared to have been born with a cleft palate. In most other cases, however, the news was not so good, and pictures of dead or wounded children floated around Carvin’s stream.

I am not arguing that we would look at hurt children and be unmoved were it not for Carvin’s open display of emotion. I am arguing that traditional news anchors effectively invite us to do just that: to distance ourselves. Humans naturally react to suffering, and it takes a very contrived environment to dampen that response. The other examples I mentioned as emerging anchor-hubs also display this tendency: they react to horrific news with appropriate horror.

By not playing the traditional game of journalistic disconnectedness, the emerging Twitter anchors are inviting us to remain human: to react, to cry, to be outraged, to feel helpless, to be moved, while the television anchor appears to care only that we do not change the channel during the commercial break. Thus, watching Andy Carvin deal with his own vulnerability at imagining children hurt — children just like his — creates a mechanism working in the opposite direction of that created by traditional news.

And our distance from events around the world is of crucial importance. In a fascinating book about the science of killing (“On Killing: The Psychological Cost of Learning to Kill in War and Society”), Dave Grossman traces the role distance plays in the psychological cost of harming another human being. Contrary to assumptions, virtually everyone who does not fall into the rare breed of aggressive psychopaths who kill with ease has to be trained to kill. To the chagrin of military trainers and leaders throughout history, humans have an innate aversion to the taking of human life. Untrained soldiers are historically averse to killing the “enemy” even when their lives are in direct danger. Most will hide, duck, fire in the air, load and unload their weapons repeatedly, fire over the heads of the “enemy” and take other evasive actions: anything to avoid killing. For example, in World War II, only about 20 percent of riflemen were found to actually fire their weapons directly at enemy soldiers.

As Grossman explains it, every new military technology that increases the distance between the soldier and the person being attacked increases the rate and ease with which soldiers will obey orders to kill. Thus, bayonets are psychologically the hardest to use, much harder than rifles, even though rifles kill more people more easily. Artillery is easier, especially as it is manned by multi-person crews which create an environment of “mutual surveillance,” making it harder for soldiers to take evasive action. Planes are the easiest of all. In fact, the rates of PTSD among soldiers follow this pattern quite directly: pilots, who often end up killing the largest number of people, are less likely to suffer from PTSD than combat troops who deal with the “enemy” at a very close and personal level.

So, I propose that Twitter and social media curated-news distribution is quite different from traditional news dissemination through television. Twitter-curated news often puts us at bayonet distance from others — human, immediate and visceral — while television puts us on a jet flying 20,000 feet above the debris — impersonal, distant and unmoved.

There have long been concerns that “drone wars” would create a playstation mentality among soldiers: controlling robotic aircraft from thousands of miles away, isolated from the effects of their actions, and going home at night, they would kill more easily and readily. (Ironically, it turns out drone operators do suffer from increased rates of PTSD, because unlike pilots of jets or even artillery operators, they spend a lot of time viewing high-resolution pictures of their targets before and after bombing.)

Most mainstream news, however, functions exactly in the manner feared, creating a playstation mentality toward war and suffering. In that sense, Twitter news curation is the anti-playstation for wars. Or, as one of Carvin’s followers put it:

Can “Leaderless Revolutions” Stay Leaderless: Preferential Attachment, Iron Laws and Networks

Many commentators connect the diffuse, somewhat leaderless nature of the uprisings in Egypt and Tunisia (and now spreading elsewhere) with the prominent role social-media-enabled peer-to-peer networks played in these movements. While I remain agnostic but open to the possibility that these movements are more diffuse partially due to the media ecology, it is wrong to assume that open networks “naturally” facilitate “leaderless” or horizontal structures. On the contrary, an examination of the dynamics in such networks, along with many examples from history, shows that such set-ups often quickly evolve into very hierarchical and ossified networks, not in spite of, but because of, their initial open nature.

This question has been raised here by David Weinberger, who asks it directly, and here by Charlie Beckett, who argues that the diffuse nature of these networks makes them less hierarchical and stronger:

“The diffuse, horizontal nature of these movements made them very difficult to break. Their diversity and flexibility gave them an organic strength. They were networks, not organisations.”

I agree, and have said before that this was the revolution of a networked public and, as such, was not dominated by traditional structures such as political parties or trade unions (although such organizations played a major role, especially towards the end). I have also written about how this lack of well-defined political structure might be both a weakness and a strength.

A little-understood fact pertinent to this discussion, however, is that relatively flat networks can quickly generate hierarchical structures even without any attempt at a power grab by emergent leaders or any organized, coordinated action. In fact, this often occurs through a perfectly natural process, known as preferential attachment, which is very common in social and other kinds of networks.

In order to understand how this process works, consider the potential mechanisms by which a node in a network grows in importance. Let’s do a short-hand conceptualization and accept the number of followers in a Twitter network as a measure of importance.

Followers may increase through any of the following mechanisms:

1- Random growth: Here, we can assume that everyone gets some number of followers every time they tweet and that it all averages out over time. Nobody has any particular advantage, and over time most everyone acquires some followers, though some more than others. This is analogous to the movement of gas molecules in containers: they all bounce around in a way that is impossible to calculate individually, but important parameters (like temperature) can be calculated very accurately as averages. Random does not mean without a pattern — this would not end up with everyone equal, but rather with a Maxwell-Boltzmann-type distribution. For a fascinating study of how the bulk of the economy (except for the very rich) functions in this manner, see this paper by Victor Yakovenko.

2- Meritorious growth: In this model, the better, the more relevant and the more informative your tweets, the more followers you get. Surely, there is a lot of this going on. While this sounds good, it brings us to the next question: how will people know your tweets are so good? One mechanism, of course, is retweets. The number of retweets, however, may depend on how many followers you have to catch and retweet your posts in the first place. This means that those who have a large number of followers end up with an advantage even in terms of being recognized as meritorious. (Recent studies do show that influence is a lot more complicated than number of followers, but we are trying to abstract some basic mechanisms, so this will do for the moment.)

3- Preferential attachment: This is the “rich-gets-richer” model, sometimes dubbed the “Matthew Effect” after the biblical saying “For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.”

In the preferential attachment scenario, the more followers you have, the more followers you will add, ceteris paribus — i.e. even if the merit of your tweets is the same as that of someone with fewer followers, your follower count will grow at a faster rate. Multiple mechanisms can facilitate preferential attachment — this need not be a mere exposure effect but will likely be compounded by a popularity effect. In almost all human processes, already having high status makes it easier to claim and re-entrench that high status. Thus, not only will more people see your tweets, they will also see you as having the mark of the community’s approval, as expressed in your follower count.

This third kind of process, defined by preferential-attachment dynamics, tends to give rise to what network scientists call “scale-free” networks, which end up exhibiting power-law distributions. (They are scale-invariant because they look the same at whatever scale you examine them.) Sometimes informally called 80-20 networks, such networks are very common in natural and social processes alike and create top-heavy structures in which a few have a lot and most have fairly little. (See this related paper by Yakovenko for an analysis of how the rich really are different from you and me in that their wealth indeed accumulates through power-law dynamics. They really are getting richer because they are rich to begin with.)
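If you want to watch such a network emerge on your own machine, the widely used networkx library implements the classic Barabási–Albert growth model, in which each new node links to existing nodes with probability proportional to their current degree. Below is a minimal sketch (assuming networkx is installed; the parameter values are arbitrary choices of mine) that builds such a network and prints its degree distribution; a roughly straight line in the log-log columns is the signature of a power law:

```python
import collections
import math

import networkx as nx

# Barabási–Albert model: start small, then add nodes one at a time,
# each attaching to m existing nodes chosen proportionally to degree.
G = nx.barabasi_albert_graph(n=50_000, m=2, seed=1)

# Count how many nodes have each degree.
degree_counts = collections.Counter(deg for _, deg in G.degree())

# Print log(degree) against log(count); on a power law these fall
# on an approximately straight, downward-sloping line.
for k in sorted(degree_counts):
    if degree_counts[k] >= 5:  # skip the noisy far tail
        print(f"log k = {math.log10(k):5.2f}   "
              f"log count = {math.log10(degree_counts[k]):5.2f}")
```

A handful of hubs end up soaking up a wildly disproportionate share of all the links – exactly the top-heavy structure described above.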

Many networked structures, including the World Wide Web, have been shown to be such scale-free networks (see this paper by Adamic and Huberman in Science for an interesting discussion, which also touches upon the merit aspect). More importantly, blogs and other influence-ecologies of the Internet often display such a shape. Here’s what a power-law distribution of blogs looks like, from Clay Shirky’s widely-read post on the topic (this is a bit outdated; if anyone wants to generate a new one using Technorati’s top 100, I’ll happily include that one instead):

So, what does all this have to do with revolutions and leaders? A lot, it turns out. Preferential attachment means that a network exhibiting this dynamic can quickly transform from a flat, relatively unhierarchical one into a very hierarchical one – unless strong counter-measures are quickly and firmly employed. It is not enough for the network to start out relatively flat; it is not enough for the current high-influence people to wish it to remain flat; and it is certainly not enough to assume that widespread use of social media will somehow automatically support and sustain flat and diffuse networks.

On the contrary, influence in the online world can actually spontaneously exhibit even sharper all-or-nothing dynamics than the offline world, with everything below a certain threshold becoming increasingly weaker while those who first manage to cross the threshold become widely popular. (Imagine Farmville versus the hundreds of games nobody plays. In fact, don’t imagine this; read this great study by Jukka-Pekka Onnela and Felix Reed-Tsochas that was just published in the Proceedings of the National Academy of Sciences. It turns out that’s exactly how app diffusion on Facebook works.)

First, let me say that many late-20th-century uprisings which predate the Internet happened without strong leadership, so a “leaderless revolution” is not a new phenomenon. Iran in 1979 did not start as a theocratic movement at all—the despotic and unpopular Shah was overthrown by a broad-based movement including the secular middle classes, organized labor, communists and others. Many of the 1989 revolutions did not have strong leadership as they were happening, either. My initial impression is that the Egyptian and Tunisian uprisings have been even more diffuse, and that this is related to the role social media played in facilitating certain kinds of organizing—but I am willing to remain agnostic on that question till we have more data.

However, few revolutions remain leaderless—which is exactly why it is very important to understand that the diffuse nature of this revolution is hardly an inoculation against the emergence of this dynamic; in fact, it might even contain the seeds of extreme hierarchy.

To try to understand whether this might be happening in Egypt, I used this “infograph”—which is, in fact, a visualization of a social network analysis of Twitter users employing the hashtag #Jan25—to identify some of the more influential nodes. (In this graph, influence is visualized as size: the bigger the node, the more influential.) Not having the underlying data, I eyeballed the graph and asked my Twitter friends for names of Egyptians with an influential social media presence.

Thus, while my final list is somewhat arbitrary, let me assure you that I tried many variations of this top 10 out of the potential few hundred and consistently found the same pattern. In the one I present here, I’ve included male as well as female micro-bloggers, those tweeting only in Arabic as well as those tweeting in both English and Arabic, and two traditional politicians, Ayman Nour (@ayman_nour) and Mohamed El-Baradei (@ElBaradei). (Baradei was not included in the infograph, but I added him due to his obvious importance. I tried to check for other potential figures as well; none were as prominent.)

What I found is that @ghonim, or Wael Ghonim, and @ElBaradei, Mohamed El-Baradei, both show a distinctly different growth pattern compared to every other person of influence I have tested them against in this portion of the Twitter-verse. Of course, you can see this pattern without any quantitative analysis; Ghonim is the one who has been crowned the “leader of the leaderless revolution” by Newsweek, and he’s the one who is tweeting about meeting with top generals in the military. Take a look at his and Baradei’s follower growth compared to 10 other top tweeters.

By all accounts, Wael Ghonim deserves an important leadership role. I absolutely do not mean for this post to be taken as a personal assessment of any leader of this nascent revolution. In fact, the point is that it does not matter who they are. Wael Ghonim especially has been careful to talk about how this is a revolution without heroes because so many are heroes—starting, of course, with the hundreds of people who lost their lives. He has dubbed the Egyptian uprising “Revolution 2.0” and has constantly talked about keeping it participatory, especially through the use of social media.

However, Ghonim and other emerging leaders of this revolution would be well-advised to keep in mind that social media not only do not guard against one of the strongest findings of sociology, the “iron law of oligarchy,” they may even facilitate it. The iron law of oligarchy works rather simply. Basically, take an organization. Any organization. Stir a bit. Wait. Not too long. Watch a group of insiders emerge, vigorously defend their turf, and almost always succeed. (Exhibit one could be Western democracies; see the work of Robert Michels for more details.) Further, revolutions almost always depend upon or create figures who possess what sociologists call charismatic authority. Both of these processes are so widespread in human history that it would be foolish to ever discount them. And to discount them by hoping that social media, as they stand, can provide a strong counter-force would be naïve.

In fact, if anything, it is quite likely that preferential-attachment processes are part of the reason for the rise of oligarchies and charismatic authorities. Ironically, this effect is likely exacerbated in peer-to-peer media, where everything is accessible to everybody. Since it is just as easy to look at one person’s Twitter feed as another’s, no matter where you are or where the other person is, it is easier to draw from the total pool and further entrench an advantage, compared to the offline world where there are more barriers to exposure and attachment. Thus, networks which start out as diffuse can, and likely will, quickly evolve into hierarchies not in spite of but because of their open and flat nature.
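A toy way to see this point (again purely my own illustrative sketch, with every parameter chosen arbitrarily): re-run the preferential-attachment lottery from the earlier snippet, but let each prospective follower see only a small random “visibility pool” of users, a crude stand-in for offline barriers to exposure. As the pool grows toward the whole network – the everything-visible-to-everybody condition of the open web – the same rich-get-richer rule should produce ever sharper concentration.

```python
import random

def simulate_visibility(n_users=10_000, n_follows=100_000, pool_size=None, seed=7):
    """Preferential attachment with a limited "visibility pool".

    Each follow event sees only pool_size randomly chosen users (a
    stand-in for offline barriers to exposure) and picks among them
    with probability proportional to followers + 1. pool_size=None
    means the whole network is visible, as on the open web.
    """
    rng = random.Random(seed)
    followers = [0] * n_users
    tickets = list(range(n_users))  # user i appears followers[i] + 1 times
    for _ in range(n_follows):
        if pool_size is None:
            winner = rng.choice(tickets)  # global preferential attachment
        else:
            pool = rng.sample(range(n_users), pool_size)
            weights = [followers[u] + 1 for u in pool]
            winner = rng.choices(pool, weights=weights, k=1)[0]
        tickets.append(winner)
        followers[winner] += 1
    return followers

for pool_size in (2, 20, None):
    biggest = max(simulate_visibility(pool_size=pool_size))
    label = pool_size if pool_size is not None else "everyone"
    print(f"visibility pool = {label}: biggest account has {biggest} followers")
```

If the model behaves as the logic suggests, the biggest account comes out larger as the visibility pool widens: fewer barriers to exposure, sharper hierarchy.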

Disposition is not destiny. In one of my favorite books as a teenager, The Dispossessed, Ursula K. Le Guin imagines a utopian colony under harsh conditions and describes its attempts to guard against the rise of such an ossified leadership through multiple mechanisms: rotation of jobs, refusal of titles, attempts to use a language based on sharing and utility rather than possession, and others. The novel does not resolve whether it is all futile, but it certainly conveys the yearning for a truly egalitarian society.

If the nascent revolutionaries in Egypt are successful in finding ways in which a movement can leverage social media to remain broad-based, diffuse and participatory, they will truly help launch a new era beyond their already remarkable achievements. Such a possibility, however, requires a clear understanding of how networks operate and an explicit aversion to naïve or hopeful assumptions that structures which allow for horizontal congregation will necessarily facilitate a future that is non-hierarchical, horizontal and participatory. Just as the Egyptian revolution was facilitated by digital media but succeeded through the bravery, sacrifice, intelligence and persistence of its people, ensuring a participatory future can only come through hard work as well as the diligent application of thoughtful principles to these new tools and beyond.

Interview with Voice of America–in Turkish

I was interviewed earlier today by Voice of America in Turkish. Yes, the interview is in Turkish. (The VoA page for the interview can be found here.)

P.S. I just installed this plug-in, so I’m hoping it works well across browsers. If the playback controls aren’t visible, try right-clicking on the image; there is an option to make the controls visible and another to play the video.

P.P.S. Yes, I got a haircut between yesterday’s interview and today’s.

Let’s Mess with Stereotypes: Is Social Media Finally There?

Perhaps no region of the world is more subject to stereotypes than the Middle East. Being a woman from that region, I have encountered these stereotypes on many occasions. While I was a teen, my family lived in Europe for a few years, where I was often asked questions reflecting these stereotypes. Do all Turkish women wear the headscarf? Um, obviously not. Do you ride camels? I have never seen one in my life outside of a zoo.

At one dinner party, I witnessed my mother get interrogated on whether she was just dressing in a modern way because she was now in Europe. She kept trying to explain that she had changed nothing in her wardrobe. “But can you actually wear a one-piece bathing suit to swim at a beach?” one of her obnoxious interrogators persisted, unable to believe she might be telling the truth. “Well, now that I am a bit older, I do wear the top as well,” she deadpanned. Ah, the joys of messing with stereotypes.

It’s not that people outside the region should not be interested in these questions. These are important topics. And perhaps no issue is more complicated than that of the headscarf and what it means for women in Middle Eastern societies. It’s truly complicated. The issue of women’s rights and their presence in the public sphere ranges from the appalling to the you’d-be-surprised. I cannot go into the topic in depth in one blog post, but suffice it to say that the headscarf is neither always a direct sign of passivity and enforced subjugation, nor always a freely-asserted choice with no other implications.

But the next time I am asked for a simple answer to this question, the first thing I will do is direct the person to this video of a young woman in a hot-pink headscarf and skinny jeans facing down a line of riot police while leading a crowd of young men in a chant of “Security forces are the lowest scum”:

I think this links directly to the other issue of the culture of masculinity in the Middle East. This, too, is complicated, ever-changing and very dynamic. And I think nothing demonstrates this better than the emergence of a new kind of hero in Egypt – one who breaks down sobbing on television when contemplating the unimaginable loss of parents whose children were killed by the regime. In the now-famous interview, at one point Wael Ghonim asserts that “we are men and not children” and that “we are ready to die but not willing to raise a hand against anyone,” asserting both a new kind of masculinity and a generational manifesto.

While it is true that most people will still get the majority of their news from television broadcasts, the current media ecology means that images like these – images that disrupt easy stereotypes and simple answers – will also find their way to those screens. Of course, all this does not mean that we will now enter an era of global peace and understanding. Every major communication technology in the industrial age has been greeted with shouts about how *this* one will finally bring people together around the world by allowing us to glimpse each other’s humanity and by challenging our stereotypes and misconceptions. Telegraph, radio, telephone, television… You name it, it has been greeted with cries of “it will humanize the distant other! It will make it harder to have racist and xenophobic beliefs!”

Of course, we know that this did not happen with any of the above-mentioned technologies. For one thing, being able to glimpse each other’s lives does not magically cure us of prejudices; it may even make them worse by providing ammunition for forces of hatred. As we saw in the Rwandan genocide, radio broadcasts played an ugly role in inciting and coordinating the violence.

And cyberspace is full of the vitriolic racism that had largely been pushed behind closed doors. However, a difference worth thinking about is that none of the technologies named above was ever implemented as a true “peer-to-peer powered broadcast” at a global scale – the one with the most promise, radio, was quickly taken away from the enthusiasts and the amateurs by corporate interests and the military. Television basically inherited that framework from radio while sharpening its monopolistic nature. And the telephone was never a broadcast medium.

Will social media break this pattern of co-optation by the powerful and the hateful? I am more hopeful about this medium because the Internet combines a peer-to-peer structure with rich-media broadcast capabilities. No previous technology had this particular combination. The telephone was peer-to-peer, but you could not truly talk to strangers and it was limited to voice. Television barely had a chance to be anything beyond a vehicle for delivering eyeballs to advertisers in most countries, and it has never truly been controlled by a non-state or non-corporate entity. In any case, next time I get a question asking for my soundbite answer on the issue of the headscarf or masculinity in the Middle East, I will start with “It’s complicated,” continue by directing people to images such as these, and then ask if the person is actually interested in discussing the question in a manner that goes beyond the boring, predictable and too-simple-even-to-be-wrong images that dominate Hollywood and a lot of television.