Monday, April 19, 2010

Limiting Children is Childish

Adora Svitak, only twelve years old, has already lectured to children and adults across the United States and the United Kingdom, promoting literacy and sharing her love of writing. Most recently she was invited to be the youngest speaker ever at the February TED Conference, a week-long gathering of top intellects from around the world who lecture on “ideas worth spreading.” After being recognized as a “child” prodigy at the age of 6 for her essays, poems, and blogs, she was initially denied a publishing deal by one major children’s publisher who said they “don’t work well with kids.” Frankly, that kind of behavior is more childish than anything she has done.

Often when children are called childish, it’s because of irrational demands or irresponsible behavior that doesn’t line up with the rules and expectations set out by adults. But it is important to remember that a temper tantrum is just as irrational as 7-year-old Charlie Simpson thinking he could raise £500 for Haiti earthquake relief by cycling 5 miles around South Park, London. And indeed £500 was irrational, considering he ended up raising over £200,000, or over $300,000. The difference between this and a temper tantrum, however, is the potential value of the irrational behavior – a temper tantrum is egocentric, aimed at obtaining immediate personal benefits (e.g. ice cream or an extra hour of television), while the seemingly absurd dream of raising money by riding your bike stands to benefit a large population of people. Childish should not be associated with creativity that borders on irrationality, but with the egocentrism of the second of Jean Piaget’s four stages of child development, which is typically outgrown by age 7.

Sometimes it seems this stage-two egocentrism is never outgrown – imperialism and world wars could be called nation-scale temper tantrums, just as irrational and selfish. But we can already look back on events like these as childish because we have progressed. And as many of us have read on inspirational posters in school classrooms, the children of today are the leaders of tomorrow. The whole reason we’ve progressed beyond cavemen and the dark ages is that each generation improves upon the one before it. That is because kids still dream big, and as Adora says, “in order to make anything a reality, you have to dream about it first.”

A simple example she uses is the Museum of Glass in her hometown of Tacoma, Washington. There, glass blowers invite children to draw and design pieces they will then make, and the results are often far more imaginative and creative because the kids aren’t limited by knowing what’s easy or hard to shape. In a larger sense, children aren’t burdened with as much knowledge of past failures, limitations, or roadblocks, so their imagination often pushes the boundaries of what seems possible. They are fortunate to come into the world as blank slates of personal experience, yet with the accumulated knowledge of past failures available to guide their thinking.

When children are called childish, or discouraged from dreams adults consider irrational, we are lowering our expectations for them to the level of our own past accomplishments. We should not be trying to turn kids into our kind of adult; we should provide them with opportunities to lead and succeed so they can become better adults than their parents. Age discrimination is self-defeating to our future. We cannot call Adora or Charlie childish for dreaming big, because the brilliant ideas that will benefit our world in the future lie in the dreams of children today. We need more child-involved programs like GreenMyParents to continue to foster children’s imaginations and confidence so that their dreams can become a reality.

Sunday, April 4, 2010

Cars 2: Parking In Walmart After A Race Through Theaters

DreamWorks Animation’s release last week of How To Train Your Dragon included the deployment of Viking ships full of Dragon merchandise in over 2,500 Walmarts worldwide and Happy Meal toys at McDonald’s. As part of the promotions leading up to its release, DWA and Paramount also negotiated a first-of-its-kind partnership with the Boy Scouts and Girl Scouts of America to create a special Dragon Training badge (which of course required scouts to watch an advance screening so they could name and identify at least three dragons from the film). This isn’t the first and surely won’t be the last time that major retailers, manufacturers, and production companies team up to squeeze as much money out of a single idea as they can.

But at least DreamWorks Animation has produced a non-sequel film, a rarity these days, that has also brought technological innovation and plenty of positive critical attention. When the attractive potential of auxiliary markets outweighs the desire to retain artistic integrity, however, we’ll continue to see films like Cars 2 released solely so consumers will drive their way to Walmart instead of parking themselves in the movie theater.

Pixar has established itself as an animation company that releases only one feature film a year, each a compelling, original story – a discipline that has earned it an astounding 24 Academy Awards from only ten films. Yet of all its critically acclaimed successes, it is Cars, the only film to receive less than a 90% rating on RottenTomatoes (with a middling 75%), that will extend its franchise with a sequel in the summer of 2011. Worse yet, Cars was also Pixar’s poorest performance at the box office in over ten years, leading many movie fans to wonder what the real motivation behind the production is, and what happened to Pixar’s originality.

Following the Cars release in June of 2006, Disney Consumer Products reported that “Cars is recording 10 to 1 more retail volume than Finding Nemo at the same point in its release.” In fourth-quarter reports later that year, Disney’s chief executive Robert Iger promised investors, “we expect to see a holiday boost for Cars merchandise, which has been one of our biggest lines of the year, with retail sales around $1 billion.” In an interview last year, once Cars 2 had been announced, Iger explained that he conceived of the sequel while still promoting the original film on a worldwide tour – which, coincidentally, was when those first reports of ‘10 to 1 retail volume’ were coming in.

It’s strictly business when a production company signs licensing and merchandising deals to gain promotion and marketing, or even to make more money from auxiliary markets, but it’s downright lazy and manipulative when it depends on those contracts instead of its own artistic integrity – the kind of integrity that would prevent making a sequel to a film in which three of the actors behind five of the characters’ voices have since died: Joe Ranft, a writer on Beauty and the Beast and The Lion King, and the legendary performers Paul Newman and George Carlin. Whether they’re recast or written out, don’t worry too much over this slap in the face to real artists, because Larry the Cable Guy is still around to reprise his role.

Coming to theaters near you next summer, your kids’ favorite toy car, Lightning McQueen, will venture outside the country for the first time. This international racing adventure, which has already run into plenty of story problems and rewrites, will introduce him to a vast array of new cars whose limited-edition toys will also soon be available for purchase. Also next summer, look for films based on the board game Battleship and the ViewMaster toy. And in the summer of 2012, Pixar’s third consecutive year releasing a summer sequel, look for Monsters, Inc. 2 along with Stretch Armstrong and Lego movies.

Saturday, March 27, 2010

Fool Me Once, Shame On You; Fool Me Twice, Shame On Me

This old idiom is a lesson that you should not let people take repeated advantage of you. But it's also a lesson that the first time they take advantage of you can be forgiven. In a 2003 joint study between the economics department at the Yale School of Management and the psychology department at Harvard, scientists uncovered an innate capacity for mutual altruism in tamarin monkeys that follows these exact same parameters.

In the study, tamarin monkeys were put into a variety of situations where one monkey had the opportunity to help the other at a cost to itself, and vice versa. The researchers made sure to use unrelated monkeys, since previous studies have shown that throughout the animal kingdom, cooperation is correlated with genetic similarity. In this particular study, two monkeys in separate cages next to each other repeatedly took turns at a task in which one monkey could pull a lever that released food into the other monkey's cage but not its own.

What they found was that if a monkey refused to pull the lever and help out its partner, it did not significantly affect the partner's behavior. Essentially, a "fool me once" was disregarded, and that monkey was still willing to help the peer who would not help it. But once a monkey refused to cooperate twice in a row, the other monkey would then also refuse to cooperate - "fool me twice." Once this mutual altruism broke down after two consecutive refusals to cooperate, not only would the monkeys never work together again, but even years later, when the researchers re-ran the experiment with the same monkeys, relationships of spite persisted.
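
For anyone who likes to see this spelled out, the monkeys' rule is basically an algorithm: help, forgive a single refusal, and hold a grudge after two refusals in a row. Here is a toy sketch in Python of how that rule plays out against a partner who never helps - my own illustrative model, not code or data from the Yale/Harvard study.

def forgiving_monkey(partner_history):
    # Pull the lever unless the partner has ever refused twice in a row.
    for first, second in zip(partner_history, partner_history[1:]):
        if first == "refuse" and second == "refuse":
            return "refuse"  # trust is gone for good
    return "help"

def selfish_monkey(partner_history):
    # Never pulls the lever.
    return "refuse"

def run_trials(strategy_a, strategy_b, rounds=6):
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each monkey reacts to the other's past moves
        move_b = strategy_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

moves_a, moves_b = run_trials(forgiving_monkey, selfish_monkey)
for round_number, (a, b) in enumerate(zip(moves_a, moves_b), start=1):
    print(f"round {round_number}: monkey A {a:>6} | monkey B {b}")
# Monkey A helps in rounds 1 and 2 ("fool me once"), then refuses from round 3 on.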



So to me, the proverb is more an external representation of an innate animal capacity than some profound idiom constructed through intelligent reflection. The first part, "shame on you," was seen in these monkeys, as well as in other animals in similar experiments - a single refusal did not necessarily hurt the relationship between the two actors, but it was definitely stored in memory for future interactions. And the "shame on me" part represents the point at which an individual has refused to cooperate with you twice in a row and you shouldn't trust them anymore. Shame is a negative association with past actions, so if you look at our behavior and that of other animals in terms of costs and benefits for survival, this "shame on me" is an innate capacity for learning not to trust an individual who has "fooled you twice."

Monday, March 22, 2010

Entertainment, Its Service, Its Future

Entertainment will always be a service, so treating it as a product sacrifices the integrity of the content and the experience of the audience. The Internet and similar recent technologies have placed the mass media industries in a state of turmoil, with top executives at major television and motion picture production companies losing their jobs left and right because they can no longer capitalize on their market the way they did in the past – but this is nothing new. Barrett Garese, founder of Spytap Industries, which specializes in online business development and social media, stated in his most famous essay that “disruptive technologies always put people out of work…temporarily.” He cites the automobile industry destroying the horse-drawn carriage industry, noting that “the smart ones adapted their businesses or skillsets, and the others went away.” So looking at these revolutions in entertainment – from stage to screen, print to radio, cinema to the home television, and now everything to the internet – it is the business models that need to adapt, not the content creators.

In all of these scenarios, no industry has completely disappeared – each has simply reestablished its niche in a changing environment. But the film and television industry has not been able to do so, despite fully recognizing this recurring phenomenon. NBC Universal CEO Jeff Zucker said that “the industry has gone through more change in the past five years than it has in the previous 50,” and Mark Gill, CEO of the independent finance company the Film Department, agrees that “there’s been more change in the last 18 months than in the preceding 18 years.” In the past, major studios and networks controlled the content and distribution of the dominant entertainment, but this is no longer the case.

In film before the 1980s, to be entertained you needed to go to a specific venue at a specific time and pay a specific price, all determined by a small group of companies. With home video, you still needed to wait a specific amount of time to buy a specific product for a specific price, but you could now view it at your leisure at home. For television, you needed to tune in to a certain channel at a certain time, or perhaps wait until the show could be purchased on home video or DVD. As Garese explained, the business model was based on “scarcity of product; either physical product scarcity or timed scarcity.” But with Internet downloads and streaming, scarcity ceases to exist. Films still in theaters (and sometimes films not yet released), along with television shows from every era, are available to the consumer at any time and place – and that last word is exactly the problem. The audience is considered a consumer, and the content created for them is considered a product.

When people went to the theater for the premiere performances of Shakespeare’s plays, they weren’t paying for popcorn, bonbons, and oversized Cokes. They were paying for the service of being emotionally, psychologically, and physiologically aroused through the shared experience of audience and performers. Entertainment is an innate human capacity that has changed only technologically. Sociability is the most basic human instinct, the one that let us progress culturally beyond any other species on Earth, so it is no surprise that it continues to fuel our entertainment today. It’s the same as the tribal dance or the reenactment of a successful hunt when we were nomadic cave dwellers: a way of taking a personal emotion or experience and sharing it with the group in order to learn from and bond with your fellows. We enjoy this act so immensely because for tens of thousands of years it groomed us for survival – whether the story taught us how to succeed or how not to fail. Entertainment and the happiness it brings are services to our psyche; they are not a product you can buy, as so many artists, musicians, and philosophers have insisted for centuries.

Creating something that the largest possible audience can appreciate and share with one another is true entertainment success, regardless of the medium. It’s easy to use James Cameron’s Avatar as an example, because it was physiologically and emotionally arousing through its stunning visuals, but more importantly it was the most watched and talked-about feature film in a long while. Or consider the most viewed YouTube video of all time, “Charlie bit my finger,” which has been viewed over 170 million times. It’s less than a minute long and the visual quality barely rivals a cell phone camera, but nearly everybody has joked about it with friends. The overlooked aspect of entertainment has always been camaraderie.

Slumdog Millionaire is a perfect example of how corporate Hollywood’s ignorance of this camaraderie, and its perception of content as a product, is destroying it. The story was universally relatable and emotionally stimulating, but when the film was screened for Warner Brothers executives, they turned down the offer to distribute it theatrically. Thinking it could not sell well, they proposed releasing it straight to DVD. So the producers went with Fox Searchlight, an independent film company that specializes in acquiring exactly this niche of films – the ones seen as bigger risks but with compelling stories that deserve to be shared with the world. And we all know what happened: it made back over twelve times its production budget and won the Academy Award for Best Picture. Steve Hickner, animator and feature film director, told me it’s because Hollywood has become an industry based on “no.” He said that those Warner Brothers executives might have kicked themselves in regret, but at the end of the day they still had jobs. The person who may no longer have a job is the person who said “yes” to a film like Gigli or All About Steve.

John Horn, Ben Fritz, and Rachel Abramowitz wrote an article last October about this executive-firing ‘horror show,’ saying that “film lovers may not rejoice, but it might buy the studio chiefs some job security.” Coming next July to a theater near you will be Transformers 3, plus films based on the ViewMaster toy and the board game Battleship. And in development are projects based on Legos, the video game Asteroids, and the toy Stretch Armstrong. As hopeful as one could be that Stretch is a rich, deep, relatable character whose compelling plight can be collectively enjoyed by us all, it does not matter – because toy sales after the release will pay off any blockbuster blunder. The ‘creators,’ not artists, behind these films care more about the bottom line than about artistic integrity. And Pixar’s Cars, the only critically bashed film in the company’s history, is getting a sequel – because how could one possibly pass up a second opportunity to sell more toy cars than Hot Wheels?

Enough bashing the poor decisions and lack of trust in artists by the big guys on top – what can they do to adapt their business model? It’s as simple as changing the equation from churning out a product to sell to the consumer into creating content that sells itself to the consumer – simply put, reestablishing themselves as service providers. Jason Kilar, the Amazon.com veteran now running Hulu, said, “shows are the brands users care about, not the networks that air them.” That was Kilar’s response when Chuck Salter, profiling him for Fast Company, asked why Hulu, a site “owned by NBC and Fox,” allows you to search for, say, CSI: Miami and then provides links that take you to the CBS site. It’s because Hulu wanted to be the online authority on streaming video, so it needed to provide the consumer with whatever they wanted – the customer is always right.

A majority of income for the entertainment industry today is advertising revenue – another system that Hulu has redesigned for the changing atmosphere. Hulu limits commercials for a half-hour show to two minutes, rather than television’s eight-minute model. Not only do viewers get a quarter of the interruptions of television, they can choose when to watch the ads and give a thumbs up or thumbs down to personalize what they see over time. At the same time, Hulu charges advertising rates 50-100% higher per thousand viewers than broadcast television. Essentially, Hulu has created a two-way street for optimization by both parties – consumers get more of what they want with less commercial time, and because viewers provide demographic information when registering with the site, advertisers get more accurate and valuable market research through that feedback. As Kilar explained, “customers won’t tell you what they want, but their behavior will tell you if you capture and analyze it.”
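
To see why this trade can still work, here is a back-of-envelope comparison. The dollar figures below are invented for illustration (the real rates aren't in Salter's article); only the 8-versus-2 minutes of ads and the 50-100% rate premium come from the paragraph above.

# Hypothetical numbers; only the ad minutes and the rate premium are from the article.
cpm_tv = 20.0                 # assumed dollars per thousand viewers per ad minute on broadcast TV
cpm_hulu = cpm_tv * 1.75      # midpoint of the 50-100% premium Hulu reportedly charges
ad_minutes_tv, ad_minutes_hulu = 8, 2

revenue_tv = cpm_tv * ad_minutes_tv       # per thousand viewers, per half-hour episode
revenue_hulu = cpm_hulu * ad_minutes_hulu

print(revenue_tv, revenue_hulu, revenue_hulu / revenue_tv)
# -> 160.0 70.0 0.4375: under these assumptions Hulu earns roughly 44% of the
#    per-viewer ad revenue, but with only a quarter of the commercial time -
#    the rest of the bet is that better targeting and a bigger audience close the gap.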

This advertising adaptation is the first of many that will drive an overall change in the corporate entertainment business model – but the most important step is still reestablishing each entertainment platform’s niche. Because of Hulu and similar websites, along with home televisions capable of connecting to this content directly, the television and Internet media industries will have the toughest battle. Both thrive on viewers’ constantly increasing ability to control what they watch – and people love being in control. Television used to thrive on flow: shows that lead into each other well, with advertisements targeted to their audience, in an attempt to keep the viewer on one channel for as long as possible. But there were always multiple channels, and then came remote controls, cable, satellite, pay-per-view, DVRs, and so on. Because television lacks the participatory culture of a large cinema audience, viewers’ participation and connection to the content is rooted in their ability to control what they watch.

Outside of Internet-based television programming, the personal computer has a far greater capacity for control through constant and immediate interactivity – gaming, applications, and ‘choose your own adventure’ style narratives are only the beginning. Computer programming and web scripting will continue, as they have, to become more and more interactive and ergonomic. And with so much more information collected about users’ behavioral relationship to content, users are directed to more of what they want, more efficiently. If television wants the big-bucks advertising revenue it once had, it needs to go back to its original niche – event programming. People used to tune in at a certain time to a certain channel to see the new episode of their favorite TV show. But scripted programming is no longer an event, because of the flexibility and immediacy of the Internet. The most watched shows today are live reality programs like American Idol or So You Think You Can Dance, and major sporting events like the Super Bowl or March Madness outperform even them.

And the film industry is suffering not only because its content ends up on the two aforementioned platforms, but also because it no longer provides as unique an experience. The cinema has always been a spectacle, a big show – almost a circus or carnival. The screen is huge, the sound is incredible, and large groups of people respond to the entertainment alongside each other in real time. But home theater systems now offer huge high-definition screens and surround sound, and a website like RottenTomatoes will tell you exactly how the rest of the audience feels. Thankfully, the cinema has a new secret weapon – 3D. It’s not the blue-and-red anaglyph glasses our parents wore; it’s scientifically designed to enhance the viewing experience, with dual-lens cameras shooting a separate view for each eye, much the way our own two eyes see the world. As David Zaslav, chief executive of Discovery Communications, put it, “3D is bound to gain attention because consumers and producers are always striving for what looks ‘closest to real life.’” And in Japan, theaters are already experimenting with ‘4D,’ where the films include smells, wind, vapor, and even seats that slightly stimulate the viewer physically. It may sound cheesy now, but experimentation is the only way to improve the viewer’s experience – and these kinds of technologies will appear in theaters long before they reach your home TV or computer.

In all of these media niches, the message remains the same – each provides a unique service to the audience. As soon as that service gets systematized into a product, the audience loses interest; good entertainment will sell itself. As much as we would like to think the human race is special, we are still animals – and animals survive through mutual altruism within their species, flock, school, or pride. For humans, entertainment grew from sociability rooted in a form of mutual altruism. We entertained each other with stories, and those stories helped us survive by sharing our knowledge and experiences with one another, because a group is only as strong as its weakest link. The entertainment that will continue to prosper will be the entertainment that shows and tells us something new, something vital, something real, and is universally experienced in the same innate capacity from which it grew.



Works Cited:

Garese, Barrett. “Scarcity, Experience, And A New Seat At An Old Table.” [weblog entry.] Barrett Garese’s Weblog. July 2009. (http://www.barrettgarese.com/post/141270170/scarcity-experience-and-a-new-seat-at-an-old-table)

Graser, Marc. “Digital Format Adopted: Studios, Retailers Aim for ‘Buy Once, Play Anywhere’ Plan.” Variety 4 January 2010.

Horn, John, Ben Fritz, and Rachel Abramowitz. “Hollywood Studios in Midst of Their Own Horror Show.” Los Angeles Times 6 October 2009.

Salter, Chuck. “Can Hulu Save Traditional TV?” FastCompany.com, 1 December 2009.

Stelter, Brian, and Brad Stone. “Television Begins a Push Into the 3rd Dimension.” New York Times 5 January 2010.

Monday, March 8, 2010

Response To Mack's Climbing The Ladder of Success

In a recent post, blogger Mack proposed that economic inequality between classes could damage a country far more than we realize. Quoting a review of Pickett and Wilkinson's book The Spirit Level, Mack writes, "Though Sweden and Japan have low levels of economic inequality for different reasons - the former redistributes wealth, while in the latter case, the playing field is more level from the start, with a smaller range of incomes - both have relatively low crime rates and happier, healthier citizens."

From a capitalist point of view, one could argue that an individual's ability to rise above his or her peers into an upper class is a comforting reminder of living in a culture that rewards hard work and ingenuity. But at the same time, the jealousy created by divided classes could contribute to higher crime rates and less happy, less healthy citizens compared to countries like Sweden and Japan. Looking at humans from a long-term anthropological standpoint helps make sense of both sides of the argument.

For a long period of our growth, when humans lived in tight-knit groups of hunter-gatherers, the socioeconomic structure was focused entirely on the survival of the group as a whole. These groups could be anywhere from twenty or thirty individuals up to about a hundred and twenty or thirty people, but rarely more than that. So for much of our most recent cultural growth and cognitive evolution, we were naturally selected to survive best as a tribe - a relatively small group of equals, where everybody knows and trusts each other and everyone's efforts contribute to the well-being of everybody else. There wasn't really any class system, and because that structure was successful for such a long time, humans became accustomed to it and happy with it. It's the same reason people today feel happy and rewarded when they are part of a team, an organization, or a group of friends that has a collective success. It's that same part of our ancient brains that would rejoice and sing and dance after a successful hunt that fed the tribe.

It makes sense, then, that a more socioeconomically equal country like Sweden or Japan would foster less crime and happier, healthier citizens. But the flip side is just as ancient - competition between neighboring tribes. Tribes were constantly inventing new tools and technologies, discovering new resources, and working to exploit these advances in order to outperform the tribes around them. Basically, if your tribe figured out a better way to hunt or collect water than its neighbors, it would ultimately obtain a monopoly on that resource. And this division in socioeconomic power from tribe to tribe would foster greater potential for survival and success for the tribe on top - prehistoric capitalism. So as we progressed into larger tribes, then cultures, and now a globally connected system of nations, we still fall prey to our ancient tribal qualities. Deciding where you stand between socialism and capitalism is ultimately deciding who is part of your tribe and who is not, and who you feel includes you as part of their tribe and who does not.

Sunday, February 21, 2010

David Cameron and The Next Age of Government

David Cameron, leader of the United Kingdom's Conservative Party, gave a very thought-provoking speech at a recent TED Conference on how he thinks the information revolution can benefit government and policy. The TED conferences, standing for Technology, Entertainment, and Design, are devoted to just that - more specifically, to new innovations, under the slogan "Ideas Worth Spreading." Speakers are often scientists, researchers, and inventors, but rarely politicians. So I was surprised to see that Cameron gave a talk, but it certainly fit the TED brand.

Here's the gist of what I got out of it: he talked about how information today spreads incredibly quickly, cheaply, and to almost anywhere in the world, and governments are not using this to their advantage. He stated that the information age can give more power to the people and has led us to understand people better. Understanding people better should lead us to design policies and programs that treat people the way they actually are, rather than the way we wish them to be. And more power can be given to people simply through transparency of information, which is now much cheaper and easier to provide.

One simple example he used was making government spending, dollar by dollar, available online. He reminded the crowd that the world carries over 30 trillion dollars in debt, and the easiest way to start alleviating it is to reduce government spending. That's exactly why there are always people going line by line through budgets, trying to analyze which programs work and which don't, in order to decide what to spend money on. But what if all of our spending, down to every individual government contract, were put online? Businesses could search the database for contract jobs they could fulfill and compete to do them for a lower price. And he promised that should the Conservative Party gain control of Parliament, it will do just that in the UK.

He also showed a screenshot from a Chicago website called Clearmap, which publicly updates the occurrence, description, and location of crimes committed in the city. Before the internet, this was information only the police department would know, but now it is public for all citizens. Much like the e-mails I get from USC's Department of Public Safety, this information can show people how to avoid becoming the victim of a crime: it shows where not to go, and at what hours of the day, in a very obvious visual display. Not to mention that it helps the police department do its job better, because officers have easier access to, and a clearer representation of, information that had previously been stored in file cabinets or personal memories.

The most interesting thing to me, though, being a psychology major, is how he believes we can use behavioral economics and our understanding of ourselves to improve culture. One easy example we're all familiar with is recycling - people started doing it more when you started offering them money for it. Yes, I know it's simple, but that's why it's such an easy way of using something we know about people (they like money) to get them to do something that collectively makes a difference. An example he used, which I'm not sure I agree with but which makes a lot of sense, is a simple way of reducing energy usage. Politicians have tried everything from those PSAs telling you to turn the lights off when you leave, to city-wide voluntary no-power hours. Cameron suggests taking the vast amount of information we have and giving it back to people to show how they fit in - what if your monthly electric bill had a bar graph with your energy usage, your neighbors' average energy usage, and an eco-friendly, energy-aware household's usage? Psychology experiments have confirmed for years that people feel uncomfortable giving or taking more or less than everyone else; essentially, people like equality and collaborative altruism. So if Joe Shmoe sees that he's using four times as much energy as the rest of his block, maybe he actually will start turning lights off when he leaves rooms. Obviously this proposal may raise privacy concerns, but the point of it is using what we know about people to design more effective policy.
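
Just to make the idea concrete, here's what a toy version of that bar graph could look like - the numbers below are invented for the sake of the mock-up, and a real bill would obviously pull them from actual utility data.

# Mock-up of the comparative-feedback idea; all figures below are made up.
usage = {
    "your household": 940,        # kWh this month (hypothetical)
    "neighborhood average": 610,
    "energy-aware household": 380,
}

scale = 20  # kWh represented by each bar character
for label, kwh in usage.items():
    bar = "#" * round(kwh / scale)
    print(f"{label:>22}: {bar} {kwh} kWh")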

Thursday, February 11, 2010

Prezi and Persuasion

Tuesday night I attended a special guest lecture by Peter Arvai, put on by USC's Institute for Multimedia Literacy. Arvai is the CEO of Prezi, a new presentation-tool software and the third new-media start-up he has been involved with; it began in Hungary and is starting to spread. Personally, I think Prezi beats the shit out of PowerPoint, and I'm not the only one. The great thing about the software is that you have one giant interactive canvas on which to place all your content - text, images, sound, and videos. You can place it all in different sizes and locations to enhance your argument, and when presenting you can move in and out of any part at any time. So, rather than the strictly linear, almost movie-style presentation we're used to, Prezi allows the presenter and audience to interact with each other and spend more or less time on different topics depending on the flow of the presentation. And what makes it even better is that it's free to use and it's all online - you build your presentation in a web browser and save it to a customized URL that can be called up from any computer. No more worrying about saving the PowerPoint file somewhere, or about compatibility with other computers.

The presentation Arvai gave using Prezi was mostly about how to give a VC pitch. But whether intentional or not, his outline for a VC pitch was an outline for persuasion in general. He asked the audience what the goals of a VC pitch are and how you accomplish them. After some back and forth with the crowd, we came to a list of three things you need to do: demonstrate the potential of your idea or product, demonstrate your own competence, and package it all in a way that can be easily communicated to other people. For a VC pitch this makes complete sense; you need them to like your product and see that it's profitable, they have to believe you can come through with it, and you have to package it all in a simple argument they can remember and recount to convince their partners.

But I think this matters in trying to persuade any audience to do anything. Take Obama's campaign as an example: his goal was to get elected. So he had to present his ideas, or platform, to voting Americans and prove they had the potential to work. He had to look good doing it, speak well, and convince us he was capable of coming through. And he had to make it all fit into an easily remembered argument - "Yes we can." Obviously there was a lot more that went into his campaign than I just outlined, but he hit the three big parts of persuasion right on the money. And how about McCain? He also had to present his platform to voting Americans and convince us he was capable of coming through. But in proving his competence to complete the tasks at hand, he was not nearly as successful. A lot of the flak he got in the media was about his running mate's competence and his own age. Jokes about him possibly dying in office, and viral videos of Sarah Palin sounding like she didn't know what she was saying, killed their campaign. And things like Tina Fey's depiction of Palin became the easily remembered, transferable message instead. Once again, these weren't the only factors influencing the election, but when looking at the simple act of persuasion, Obama beat the shit out of McCain.

Tuesday, February 9, 2010

Horizontal Evolution

In this article, a team of scientists outlines a new theory of horizontal evolution. And it makes us all ask ourselves: what the hell is that? Well, the traditional view of genetic evolution through natural selection, which these scientists are calling vertical evolution, falls under the Darwinian view of "survival of the fittest." Essentially, everyone inherits genes from their parents and then passes them on to their kids, and whatever genes help a species survive best will continue to proliferate in the gene pool (and vice versa for genes that aren't fit for survival). But these scientists have found evidence of horizontal gene transfer - genes passing from one organism to another rather than from parent to offspring. They can explain it much better than I can . . .

In the past few years, a host of genome studies have demonstrated that DNA flows readily between the chromosomes of microbes and the external world. Typically around 10 per cent of the genes in many bacterial genomes seem to have been acquired from other organisms in this way, though the proportion can be several times that. So an individual microbe may have access to the genes found in the entire microbial population around it, including those of other microbe species. "It's natural to wonder if the very concept of an organism in isolation is still valid at this level," says Goldenfeld.

No, this does not mean that by making out with someone you can inherit their genes. In fact, it is still very unlikely that organisms as complex as mammals exchange genetic information with each other this way. But considering how prevalent this genetic transfer between small microbes turns out to be, it is not unlikely that much of the early evolution of single-celled organisms billions of years ago was due to horizontal evolution. And now many scientists are wondering if and how small microbes can influence our genetic codes today - could a virus be considered horizontal evolution? A virus is a small microbe; it enters your bloodstream and takes over host cells to do its bidding (that is, it implants genetic code in the cell with the information to create more of itself, and often also proteins that lead to the observable biological effects of the virus). If you think of your cells as just more microbes, how different is this process from the one the scientists describe in the study?

And at what point do you draw the line between a virus and horizontal evolution? If a virus changes you biologically, and your genes are the blueprints for your biology, is the dividing line just a frame of reference? Obviously your entire genetic code isn't changing, but what about the infected cells? Which brings us back to the kissing thing: if you get mono from someone, is that horizontal evolution? Are the change in your body temperature, the drop in your energy, and the inflammation of your throat signs of genetic change in many of your cells? If you say "no, you can take medicine and get healthy again," who's to say the medicine isn't just another microbe exchanging genes with those infected cells?

Please don't take me out of context in thinking that I believe we're all constantly evolving with every bacterium we come in contact with, but at the same time don't discount the possibility that we sometimes might. And remember that if this kind of thing is happening, it's likely not with complicated and specialized genes for individual organisms. You can't get a virus that makes you grow more arms, or at least I highly doubt it. The kinds of genetic information exchanged could only be the kinds of information that the microbes contain to begin with - and I doubt there's a virus out there with anything more drastic than the ability to replicate and produce some simple proteins.

Monday, February 8, 2010

Art as Antibody

I recently read an article on transvergence by Joline Blais and Jon Ippolito, which they titled Art as Antibody. Transvergence is an idea in the art world of taking something one might not normally consider an art medium and adapting it for some kind of creative expression. Examples include genetically engineering a rabbit to have fluorescent green fur, or spreading a non-harmful computer virus that would be the digital version of graffiti.

In Blais and Ippolito's article, they make a very strong analogy between the way art exists in new media and the way organisms' immune systems operate. It's a pretty high-brow article with lots of words I had to look up, but I'll give the gist of what I'm pretty sure they were saying. I'll begin like they did, by describing the basics of our immune systems; applying those concepts to art afterward won't be nearly as difficult.

Diseases enter our bodies in the form of antigens - little molecules that each differ slightly in shape and size. What our body does to fight these antigens is create their opposites - antibodies. Each antigen has one sister antibody that binds to it, allowing our white blood cells to destroy the antigen. This process, repeated trillions and trillions of times, will hopefully eradicate the illness. And to prepare for antigens, our immune system constantly makes antibodies through a process that randomly modifies them, producing small amounts of trillions of different kinds. When an antigen enters the bloodstream, the immune system eventually finds the appropriate antibody to bind to it, and then begins mass-producing that antibody to kill the disease.
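
If it helps to see the generate-and-select loop laid out, here's a cartoon version in Python. The "shapes" are just random strings and binding is exact string matching - my own toy illustration of the process described above, not anything from Blais and Ippolito's article or from real immunology.

import random

ALPHABET = "abcdef"

def random_shape(length=4):
    # A stand-in for a molecular shape.
    return "".join(random.choice(ALPHABET) for _ in range(length))

def binds(antibody, antigen):
    # Toy rule: an antibody binds the antigen whose shape it mirrors exactly.
    return antibody == antigen[::-1]

# The immune system keeps small amounts of many randomly varied antibodies on hand...
repertoire = {random_shape() for _ in range(5000)}

# ...and when an antigen shows up, it searches for one that binds, then mass-produces it.
antigen = random_shape()
matches = [antibody for antibody in repertoire if binds(antibody, antigen)]
if matches:
    print(f"antigen {antigen}: antibody {matches[0]} binds - clone it in bulk")
else:
    print(f"antigen {antigen}: nothing binds yet - keep generating random variants")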

Now here is where the article and I begin to disagree. They apply this analogy to the internet - the idea that there are trillions of different thoughts and opinions online, and the ones most effective at battling opposing views or societal problems will be mass-produced at an alarming rate (via chain e-mails, links, blogs, YouTube, etc.). But what I think they should have done is apply the analogy to cultural meme theory in general, and acknowledge that the internet has dramatically increased the speed and effectiveness of memes.

At the same time, antigens and antibodies should not simply be seen as bad and good. Rather, antigens are what your body decides not to include within itself, and antibodies are the means by which your body tries to eradicate them. In the same way, there may not be good cultural memes and bad cultural memes, just those an individual chooses to include in their own ideology and those they choose not to. And likewise, a group of people (a country, culture, company, anything) may choose to include or exclude memes from its ideology. An easy example is legislation - which laws do we choose to include or exclude in order to survive as a country? And as we've progressed over time, our biology and our ideology have both improved in their ability to survive.

Although they limited their argument to new media, my favorite part of the article is the last piece of the analogy - the idea that not being exposed to enough antigens may lead your immune system to turn on the host itself. There are many immune system disorders, diseases in which our own immune systems are the ones doing us harm. And what some have pointed out is that these disorders show a strange correlation with how many infectious diseases an individual is exposed to. Many Westernized cultures have nearly eliminated many infectious diseases (e.g. malaria, polio), but have much higher incidences of immune system disorders, while countries that still suffer from those infectious diseases have far fewer of them. The argument many make is that our immune systems are meant to be fighting, and if they don't have anything to fight, they will turn on themselves. The article makes the same argument for art and culture - if a culture becomes too closed off to let art challenge it, the culture can become its own worst enemy. Biologically, ideologically, psychologically, culturally - pretty much any giant system you can imagine needs a constant flow of ins and outs in order to retain its vitality. If our immune systems aren't challenged by disease, if our culture isn't challenged by artists and intellectuals, if our own individual brains aren't challenged by new ideas, they will die.

Saturday, January 30, 2010

Digital Renaissance for the Public Intellectual

The public intellectual today is much less limited in potential exposure because of the internet. A vastly larger number of people now have instant access to virtually endless information – but is this good or bad for the public intellectual? Scholars and critics can publish their work instantly and at no cost, but so can any Joe Schmoe like me. The public intellectual derives power from information, and a noteworthy one presents new and valid information to the public that provokes a change for the better. But although the internet has been a huge contributor to the Information Age, it also makes sifting through the excess fat more difficult.

As blogger Mack put it, “The measure of public intellectual work is not whether the people are listening, but whether they’re hearing things worth talking about.” I’m sorry if I offend anybody, but things like TextsFromLastNight are hardly worth talking about. It just so happens that Lev Manovich, arguably the leading scholar of new media, does say things worth talking about (and puts them online, too!). His book The Language of New Media has been called “the first rigorous and far-reaching theorization of the subject” because he “places new media within the most suggestive and broad ranging media history since Marshall McLuhan” (CAA Reviews).

Manovich earned his undergraduate degree in Experimental Psychology from NYU and a Ph.D. in Visual and Cultural Studies from the University of Rochester. He is currently writing three new books, frequently publishes essays, and teaches Visual Arts at the University of California, San Diego. He remains involved in academia and continues to grow as a public intellectual for the same reason – because “learning the processes of criticism and practicing them with some regularity are requisites for intellectual employment” (blogger Mack). In his article New Media from Borges to HTML, he criticizes the United States government for not funding new media as early as many countries in Europe did. Personally, I think he was forgetting that our former Vice President invented the internet.

But in all seriousness, Manovich has a very valid point. The internet and everything that goes with it are becoming larger and larger players in our global culture, meaning any country wanting a place in that culture needs to keep up. If you go to Japan, they probably already have cell phones out of Star Wars that beam hologram messages to each other. I’m not saying trendy gadgets are quantifiers for a country’s success, but we certainly shouldn’t be falling behind the curve in terms of the bigger things like digital infrastructure and education.