Tuesday, July 26, 2011

Entry 006 - Cash or Credit?

Today's entry examines a question that arose in a conversation with Aimee Hokanson and Bethany Spring regarding spending behavior. Simply put, the question, which was not specifically addressed to the blog but which I found blog-worthy, is:
"Does the payment mode (using cash, credit, or debit cards) impact the amount spent?"
While some people claim that the answer to this question is well established and borders on common knowledge, I was actually surprised to find very little research. Although it is unclear why people feel that the answer is common knowledge, it seems likely that the variety of misinformation posted on the internet is the cause.


For example, one commonly cited study, repeated by everyone from bloggers to financial gurus such as Dave Ramsey, favors the notion that people spend more when using credit cards. According to these sources, a study conducted by Dun & Bradstreet found that people, on average, spend 12-18% more when making credit card purchases compared to using cash. Additionally, the firm is reported to have discovered that the average McDonald's transaction increased from $4.50 to $7.00 once the company began accepting credit cards as a viable payment option. However, to date, I have been unable to find this report, and I have seen sources claiming that even Dun & Bradstreet cannot vouch for the validity of these figures.

Remember, Ronald "loves to see you smile".

So, if this study is not valid (or worse yet, does not exist), why do so many people constantly cite it? Well, the citation could be erroneously propelled by credit card corporations. After all, if you are in charge of a business, why would you opt for a payment method which requires time, equipment, and money to process? According to Flagship Merchant Services, gateway, statement, and monthly fees for processing transactions can run anywhere from $25 to $45 a month (and those are the "best" processing fees). Add to this the fact that a credit card may be lost or stolen, which could end up costing the consumer more than they bargained for, and there seems to be little reason to accept credit or debit cards.

"And how will you be paying, Mr. Flanders?"

Regardless of where this misleading information came from, I have decided to stick with books, peer-reviewed studies, and dissertations I was able to access in order to seek out the correct answer.



ARGUMENTS FOR AND AGAINST:

Regardless of which mode of payment you feel is most likely to prompt spending, there are logical and possibly valid arguments supporting either view. For example, those who feel that someone carrying cash is likely to spend more often argue that the person withdrew that money precisely so they could spend it. Essentially, the decision to spend the money was made as soon as it was withdrawn from the bank. Additionally, people who favor this view may argue that the money is "burning a hole" in one's pocket, begging to be spent at the first available moment.

Money is the root of all evil!

Alternatively, others claim that using credit cards is more likely to prompt spending. Proponents of this view argue that, in general, cards distance the consumer from the reality of monetary exchange. Additionally, credit cards are based upon exchanging money one does not have for goods, with the promise that the money will be paid back. Thus, by a literal - and narrow - definition, people who use credit cards are spending more money than they have. The same argument does not hold for debit cards, which draw on money already available in one's bank account, but debit cards may also be viewed as a form of distance.

I'm here to steal your soul!



EMPIRICAL DATA:

Retailers (Borgen, 1976; Huck, 1976), credit researchers (Hirschman, 1979), and popular writers (Galanoy, 1980; Merchants of Debt, 1977) generally argue that credit cards facilitate spending. However, it has been debated whether this facilitation significantly exceeds that of cash. Given that the majority of data on this subject is correlational, it has been difficult to objectively determine whether or not credit cards actually prompt more spending.

Hand over your money or 
I will give you this credit card!

To date, the most comprehensive response to the question at hand was provided by Raghubir and Srivastava (2008), who conducted a series of studies attempting to ascertain whether payment mode makes a difference. In the first study, participants estimated how much they would spend on a restaurant meal using cash versus a credit card. The results indicated that people are willing to pay more when using a credit card. In a second experiment, the researchers had participants estimate food expenses for an imaginary Thanksgiving dinner, item by item. When participants considered each cost individually, the cash-credit spending gap closed, suggesting that people who are confronted with the reality of expenses no longer allow the payment mode to influence their decisions.

Collectively, the results from these studies indicate that people may spend more when using a credit card because the expense seems less real. Raghubir and Srivastava conducted two additional studies examining gift certificates. In the first gift card study, results suggested that participants spend more when using a gift card than cash. In the second, participants were given $1 gift cards which could be used to buy candy, and were instructed to keep the card in their wallet for an hour, which the researchers argued made the card's value seem more real. Results indicated that after participants had carried the gift cards in their wallets, they were less likely to use them.

While the question of debit cards remains unaddressed, the experimental studies outlined above provide sound evidence for the notion that credit cards prompt more spending. For some reason, card payments seem to be a less transparent form of monetary exchange.



TRANSPARENT TRANSPARENCY: A HISTORICAL APPROACH:

Having read the previously stated arguments, one may wonder why using a card is viewed as distancing the consumer from their money. After all, if these cards are little more than money, why should its form make any difference? Although the aforementioned studies provide some data on how credit cards facilitate spending, the answer remains somewhat elusive. Sources have suggested that while credit cards do prompt spending, there is no special aspect of credit cards which can be implicated as the cause (Federal Reserve System, 1968; Zipprodt, 1969). Thus, in order to fully understand the reasoning behind the theory proposed by Raghubir and Srivastava, a historical overview may be necessary.

According to historical data, the barter system was in use as long as 100,000 years ago (Mauss, 1923). However, many cultures around the world eventually developed commodity money, as the barter system was limited to use between family and friends (Graeber, 2001). While money originally served as a medium of exchange, individuals were limited to the amount of wealth they had accumulated to date.

Torg needed companionship, Gark needed food...
...what to do... what to do?

The concept of using a card for a purchase was first noted in 1887 by Edward Bellamy, author of Looking Backward (http://en.wikipedia.org/wiki/Looking_Backward), who used the term "credit card" eleven times throughout his novel. In the late 1920s, a variety of companies developed a device called the "Charge Plate," which was issued by large-scale merchants to their regular customers. Since the 1960s, credit card use has increased dramatically in the US (Duca & Whitesell, 1995), such that credit cards are now seen as a vital component of business, banking, and personal money management (Clark, 1975; Savage, 1970).

The reason a historical approach may allow for additional insights is that, for most of known economic history, cash was the primary medium of exchange. It stands to reason that, given the explosion of technological advances, devices such as credit cards are likely to be viewed as positive and convenient. However, much of the technology which allows us to monitor credit is hard to obtain (try getting your credit report in five minutes), or at the very least requires individuals to be proactive (banks may push for online banking to save paper, but they also require you to log in to monitor your finances). Essentially, the concept of a credit card is new, and it may not be as strongly associated with our personal finances as money is.





CONCLUSION:

As credit cards become more commonplace, the cash-to-credit-card gap may dissolve. Until then, however, research suggests that you are likely to spend more when using a credit card. This may sound bad at face value; however, there is nothing inherently wrong with using a credit card. After all, where would the world economy be if entrepreneurs were not able to secure funding for their projects?

Well, for one, Trump would be broke.

If you ask me, a world with Donald Trump is a small price to pay for the convenience afforded to the general public by credit cards. Credit cards require the user to be fiscally aware and somewhat proactive with their finances, which, in itself, is not a bad thing. Although some individuals still end up in over their heads in debt, I would argue that this is the fault of an unregulated financial sector which felt it was too big to fail.

I guess we'll have to sell one of the kids.

On a final note, I will admit that the research presented here is not entirely conclusive; additional studies are needed. After all, there could be third variables which need to be controlled for to see whether the pattern holds. For example, age, experience with credit, or even whether participants in the fourth study had money in their wallets could all influence the results. While some could argue that the difference may hinge on a variety of personality traits or individual differences, it is important to first determine whether an effect exists at all. Once an effect is established, it would be useful to consider these other variables and how they may strengthen or reverse the observed relations.




NOTE: If you have a question for me to research and answer please submit it as a comment, or send it to ELKronos@aol.com / Facebook.com/ELKronos. Submit your name and location if you wish to opine.





CITATIONS:

Borgen, C. W. (1976). Learning Experiences in Retailing: Text and Cases, New York: Goodyear.

Clark, F. (1975). Bank Credit Cards: Attitudes and Decisions of Selected Retail Merchants in Arkansas and Missouri, unpublished dissertation, University of Arkansas, Department of Business Administration, Fayetteville, AR 72701.

Duca, J.V., & Whitesell, W.C. (1995). Credit Cards and Money Demand: A Cross-sectional Study. Journal of Money, Credit and Banking, 27, 604-623.

Federal Reserve System (1968), Bank Credit Card and Check Credit Plans, Publication Services, Division of Administrative Services, Board of Governors, Washington, D.C. 20551.

Galanoy, T. (1980). Charge It: Inside the Credit Card Conspiracy, New York: G.P. Putnam Sons.

Graeber, D. (2001). Toward an Anthropological Theory of Value, 153-154.

Hirschman, E. (1979). Differences in Consumer Purchase Behavior by Credit Card Payment System. Journal of Consumer Research, 6, 58-66.

Huck, L. (1976). Making the Credit Card the Customer. Banking, 68, 37, 80, and 83.

Mauss, M. (1923). The Gift: The Form and Reason for Exchange in Archaic Societies. 36-37.

Merchants of Debt (1977). Time Magazine, 109, 36-40.

Raghubir, P., & Srivastava, J. (2008). Monopoly money: The effect of payment coupling and form on spending behavior. Journal of Experimental Psychology: Applied, 14, 213-225.

Savage, J. (1970). Bank Credit Cards: Their Impact on Retailers. Banking, 63 (1), 39, 92.

Zipprodt, C. (1969). Bank Charge Cards-An Evaluation. The Journal of Consumer Credit Management, 1, 10-19.

Thursday, July 21, 2011

Entry 005 - Which came first, the chicken or the egg? In that sentence, the chicken.

This entry is devoted to a question posed by Calvin Libby and Adam Murray, from Maine, who want to know:
"What came first, the chicken or the egg?"
Although the question has been debated for centuries, I have attempted to understand what the question truly means and to present the various responses that have emerged throughout history.




QUESTIONING THE QUESTION:

In its most basic form, the question posed by Calvin and Adam is a causality dilemma. Although the question seems amusing at face value, it has helped evoke fundamental questions about existence and the universe (Theosophy, 1939). The question is not necessarily meant to be answered directly, but has a deeper metaphorical meaning which begs the listener to consider, "Which came first: X, which cannot exist without the presence of Y, or Y, which cannot exist without the presence of X?" Essentially, this is a philosophical circular reference, where the last object references the first, creating a closed loop.

"You jest about what you suppose to be a triviality, in asking whether the hen came first from an egg or the egg from a hen, but the point should be regarded as one of importance, one worthy of discussion, and careful discussion at that." 
- Macrobius
Although the question was not meant to be definitively answered, many individuals and groups have attempted to do so throughout history. Given the wide array of responses this question has prompted, I have decided to tackle it by first acknowledging what each of the viewpoints claims. I will start with the viewpoint which founded the question, and then move on to a variety of other answers which have been posed over time.




PHILOSOPHICAL POINT OF VIEW:

The origins of the question can be traced back to around 300 BC, when great minds such as Plato and Aristotle devoted much time and alcohol to answering such quandaries. To Plato, ideas were independent of the natural world; the idea of the chicken came before both the chicken and the egg. After all, a chicken is defined by its characteristics, and if there is no known entity which encompasses that which is considered "chicken," it is irrelevant whether the egg or the chicken existed first. In essence, a quality of "chicken-ness" must have existed first.

You're going down!

Aristotle disagreed with his teacher, no doubt a sign of Ancient Greek teen angst and rebellion. Aristotle argued that while a chicken is in some way immutable, the idea of a chicken is a concept based on the experiences one has with chickens. The chicken exists as the result of experience, which tells us what characteristics the chicken has (Ross, 1953). Aristotle did not believe that there was some immutable mold of a chicken, but rather that the form itself was within the chicken.

"If there has been a first man he must have been born without father or mother -- which is repugnant to nature. For there could not have been a first egg to give a beginning to birds, or there should have been a first bird which gave a beginning to eggs; for a bird comes from an egg." 
- Aristotle (Isis Unveiled I, 428)
This argument becomes important to the philosophy of Aristotle, as he would argue that every chicken's egg has the potential to become a chicken, and while not all eggs reach this potential (some end up in a frying pan), eggs are not able to alter their potential (a chicken egg cannot birth a kitten, no matter how hard it tries). Given this line of logic, Aristotle argues that the chicken comes before the egg. He states that an object can only be a potential something if there is already an actual something for that object to become. This argument asserts that the chicken must already exist in actuality in order for the egg's potential to be reached. There can be no chicken eggs until the species of the chicken has been named.

Or could a chicken egg hatch a cat?


ISSUES WITH PHILOSOPHICAL CLAIMS:

Admittedly, the philosophical point of view may seem unsatisfactory to some. After all, if eggs containing chickens existed first, but no one knew what to call them until they gained enough experience with chickens, it would imply that the basis for scientific labeling is limited to human awareness. This is problematic because many creatures have existed, and perhaps still exist today, without humans ever discovering them. Tying the creation of something to human perception implies that humans create beings, as opposed to discovering them. As a result, this point of view may be little more than a shallow argument over the label of existence (e.g., can something exist if we do not know it exists?).

When did we get so technical?





RELIGIOUS POINT OF VIEW:

Depending on the religion, there could be differing beliefs about whether the chicken or egg came first. According to Judeo-Christian belief systems, God created birds along with the universe.

"[19] And the evening and the morning were the fourth day. [20] And God said, Let the waters bring forth abundantly the moving creature that hath life, and fowl that may fly above the earth in the open firmament of heaven. [21] And God created great whales, and every living creature that moveth, which the waters brought forth abundantly, after their kind, and every winged fowl after his kind: and God saw that it was good. [22] And God blessed them, saying, Be fruitful, and multiply, and fill the waters in the seas, and let fowl multiply in the earth."   
- Genesis 1:19-22
A literal interpretation of this creation story would imply that the chicken came first, as God is said to have created birds and commanded them to multiply. This response is unique compared to the other points of view, as it is perhaps the only one which directly attempts to solve the riddle. While the philosophical and scientific points of view leave some doubt, this response maneuvers around the circular reference by way of God's omnipotence. However, it should be noted that many people of the Judeo-Christian belief system do not insist on a literal interpretation, but may instead view the answer through the lens of intelligent design. Strictly speaking, those who believe in intelligent design would insist that the chicken did not evolve through natural selection, but rather that an intelligent entity prompted the creation of the modern chicken.

And on the 7th day, God created KFC, and it was good.

While the Judeo-Christian belief system may provide a clear-cut answer for some, other religions, such as Buddhism, provide a different response. Buddhists believe in a cyclical view of time (Newman, 1987; Bryant, 1995), which implies that there is no "first" cycle. Essentially, there is no creation, as the cycle is ever-repeating. This response is similar to belief systems established by Mesoamerican cultures (Coe, 1992; Miller & Taube, 1993), and is even observed in the philosophical writings of Nietzsche (Golan, 2007).

Although popular in some sects, the Wheel of Time
was no match for the Wheel of Fortune.


ISSUES WITH RELIGIOUS CLAIMS:

The reason some individuals may take issue with a purely religious point of view is that it requires a leap of faith. Religion poses many theories which cannot, currently, be empirically tested. As a result, people of faith are left to interpret the answer based on the information they seek out, and on whether they presume the text is meant to be literal. This could be considered problematic when attempting to reach a final conclusion, as people have a tendency to seek out information that confirms their beliefs while ignoring information that discredits them (Wason, 1968). Given the empirical gaps present in religion, some people find themselves more satisfied with a scientific point of view.




SCIENTIFIC POINT OF VIEW:

Perhaps the most significant contribution of science to the question of which came first, the chicken or the egg, is the theory of evolution. The theory of evolution, proposed by Charles Darwin, suggests that species change over time due to mutations and natural selection (Darwin, 1859). Given that an animal's DNA does not naturally mutate during its lifetime, the mutation that produced the modern chicken must have occurred within the egg, implying that an animal similar to a chicken, but not a chicken as we know it, laid the first chicken egg (CNN, 2006). This theory has been supported by modern philosophers who state that the egg's precedence over the chicken is not just a logical point of view, but a biological necessity (Sorensen, 1992).

Jerry couldn't believe his wife had someone else's baby.

However, even science is at odds when it comes to a definitive answer. For example, a recent study by Freeman et al. (2010) provides evidence which some claim suggests that the chicken must have preceded the egg. In Freeman's study, scientists identified a protein, found only in a chicken's ovaries, which is necessary for the formation of the egg. Without this protein, the development of the hard shell would be too slow, leaving the yolk unprotected.


Researchers have claimed that the egg can therefore only exist if it has been created inside a chicken. This finding also has philosophical support: some philosophers have argued that female animals are the sole authors of their eggs, implying that a chicken egg cannot be laid by a non-chicken and suggesting that the chicken must have come first (Waller, 1998).


ISSUES WITH SCIENTIFIC CLAIMS:

The largest issue with scientific evidence is that it can be interpreted in a variety of ways. The basis of science is objectivity, which leaves little room for the currently untestable theories proposed by religion, let alone the seemingly pedantic discrepancies argued over by philosophers. The fact of the matter is that, short of inventing a time machine, there seems to be little that science can do to provide absolute proof. As noted, scientific evidence has been cited by both sides. Thus, it is up to the casual observer to weigh the evidence and make the most informed decision possible.




CAN I GET AN OPERATIONAL DEFINITION OVER HERE:

The reason this question is so hard to address logically, or even empirically, is that a universal operational definition is lacking. From a philosophical standpoint, vagueness theorists would argue over what a chicken even is (Sanford, 1975). Evolution theorists might claim that a creature similar to a chicken gave birth to the chicken, but the question quickly becomes: what exactly do we consider to be a chicken? If prehistoric chickens are excluded, at what point do we call an animal a chicken?

Finally proof that dinosaurs were made of chicken!

Similarly, what is a chicken egg? Some individuals argue that a chicken egg is any egg laid by a chicken, regardless of what comes out; others argue that a chicken egg is any egg which gives life to a chicken, regardless of what laid it. Additionally, one could adopt the macro-perspective that prehistoric eggs existed which gave life to what would become the chicken. From this literal standpoint, "eggs" were around for millions (or, depending on your belief system, thousands) of years before any creature resembling a chicken.



CONCLUSION:

Given all the claims regarding which came first, it seems almost absurd to expect a definitive answer any time soon. It appears that as soon as an answer is put forth, more questions arise. However, regardless of your point of view, it stands to reason that the true answer to this age-old riddle will most likely come from an integration of these sources, and perhaps even some that are currently unknown to society.

C.W. Mills, most famous for his book The Sociological Imagination (1959), provided a unique theory about belief systems which puts these responses into the appropriate context. Mills pointed out that questions are answered by the belief system held by a society at any given time. For example, a rainbow was initially thought to be derived from magic; as Christianity spread, people asserted that the rainbow was God's promise to the human race that he would not flood the world again. In the modern era, scientists have found empirical evidence that a rainbow is nothing more than an optical and meteorological phenomenon that causes a spectrum of light to appear in the sky when the Sun shines onto droplets of moisture. As a society progresses, new ways for questions to be answered may arise (perhaps even a methodology better than 'science'). All we can do is approach these questions with an open mind and consider the validity of the arguments at hand.

Who "came" first?

Although I am tempted to end this blog with an "I'll let you decide" response, I do not think that would satisfy Calvin or Adam. Based on all of the evidence I have provided, I will say that I am leaning toward the chicken coming first. Given the scientific evidence noted above, and Aristotle's and Plato's arguments (e.g., if we have never seen a chicken, we cannot label something a chicken egg; it is our experience with the animal which leads us to successfully classify its eggs), claiming the chicken came first just seems more reasonable. However, I welcome you to disagree, and to provide your own explanation as to why.





NOTE: If you have a question for me to research and answer please submit it as a comment, or send it to ELKronos@aol.com / Facebook.com/ELKronos. Submit your name and location if you wish to opine.






CITATIONS:

Ross, W.D. (1953). Aristotle's Metaphysics. 2 vols. Oxford: Clarendon Press. (Original work published 1924; reprinted with corrections.)

Bryant, B., (1995). The Wheel of Time Sand Mandala, Snow Lion Publications.

Coe, M.D., (1992). Breaking the Maya Code. London: Thames & Hudson.

CNN (May 26, 2006). "Chicken and egg debate unscrambled". CNN.com. Retrieved 2011-07-09.

Darwin, C. (1859/1979). The Origin of Species. Random House Value Publishing, New York, NY.

Freeman, C.L., Harding, J.H., Quigley, D., Rodger, P.M. (2010). Structural control of crystal nuclei by an eggshell protein. Angewandte Chemie International Edition, 49, 5135-5137.

Miller, M., & Taube, K. (1993). The Gods and Symbols of Ancient Mexico and the Maya: An Illustrated Dictionary of Mesoamerican Religion. London: Thames and Hudson.

Mills, C. W. The Sociological Imagination (Oxford: Oxford University Press, 1959), 5, 7.

Newman, J.R., (1987). The Outer Wheel of Time: Vajrayana Buddhist cosmology in the Kalacakra tantra, dissertation.

Sanford, D., (1975) Infinity and Vagueness. Philosophical Review, 84, pp. 520-535.

Sorensen, R.A. (1992). The egg came before the chicken. Mind, 101, 541-542.

The New Jerusalem Bible. Henry Wansbrough, gen. ed. New York: Doubleday, 1985. Print.

Theosophy (September 1939). "Ancient Landmarks: Plato and Aristotle". Theosophy 27 (11): 483–491.

Waller, D. (1998). The chicken and her egg. Mind, 107, 851-854.

Wason, P.C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273–280.

Sunday, June 26, 2011

Entry 004 - Death and Disney

This entry's question comes (albeit inadvertently) from Kaleigh Marie, from Maine, who writes:
"Disney's Earth is the worst movie I've ever seen. Animals eating and killing each other, baby animals getting slaughtered, and running out of water. Why in the hell would Disney make such a movie?"
The quoted message comes from a recent Facebook status update in which she expresses her distaste for the variety of death scenes found throughout a film meant for children. Although the question was not asked directly to the blog, it is a profound one, which has been discussed at some length with others, making it worthy of further investigation.



DEATH IN DISNEY:

Death and Disney, as Forrest Gump might say, go together like "peas and carrots." Although I am uncertain how well peas and carrots truly go together, anyone who was raised on Disney films is aware of the variety of horrific death scenes throughout them. These scenes are what nightmares are made of.


Although I have no specific citation for the frequency of death throughout Disney films, I have decided to display several heartbreaking scenes throughout this blog, which will hopefully illustrate the point.

I sincerely apologize for the pun.

However, is all of this death unwarranted? If not, what value could displaying concepts of death to children through film possibly have? Perhaps, like many other phenomena in psychology, Terror Management Theory offers some insight into the questions at hand.

What was truly heartbreaking was the knowledge he 
would have to live with his father.



TERROR MANAGEMENT THEORY:

Terror Management Theory [TMT], derived from Ernest Becker's 1973 Pulitzer Prize-winning non-fiction work The Denial of Death, argues that much human action is taken to ignore or avoid the inevitability of death. Greenberg et al. (1986/1992) proposed and tested the main theoretical and experimental paradigms for TMT, which spawned a vast and rich body of literature confirming the theory's robust effects.

Specifically, Pyszczynski et al. (2004) posit that humans share a biological predisposition toward continued existence, or at the very least toward avoiding premature termination. However, humans possess an array of highly developed cognitive abilities which give them awareness of the inevitability of death, an awareness which can cause paralyzing terror. In order to manage this terror, humans have developed elaborate cultural worldviews, maintained through mechanisms such as self-esteem, which allow us to believe our existence is meaningful within the universe. Thus, to successfully manage the terror derived from this existential threat, one must have faith in a meaningful conception of reality (the cultural worldview) and hold a belief that one is meeting the standards prescribed by that worldview (self-esteem). But how does TMT apply to children?

NOTE: For a complete review of Terror Management Theory, see The Handbook of Existential Psychology.



CHILDREN AND THE EXISTENTIAL THREAT:

Aside from supplying a possible account of the evolution of cultural worldviews, TMT has also attempted to provide a developmental account of how infants acquire cultural worldviews and maintain self-esteem in the face of death. Following Bowlby's (1969) work on infant attachment, TMT is theorized to begin with the immaturity of infants from the moment of birth. Early in life, parents (ideally) provide care for their child, but over the course of the child's development, parental affection becomes contingent upon engaging in certain behaviors while refraining from others. Although some of these behaviors may protect the child (e.g., do not play in traffic), other behaviors may be reflective of the society (e.g., do not eat worms). While there is nothing lethal about eating worms from a biological perspective, most parents frown on the activity.

Old Yeller is rewarded with a brand new bullet
after saving his humans from a wolf.

Throughout development, children are taught to associate good with safe and bad with unsafe, an association transferred from their personal relationships and culture. As they grow older, children also begin to realize that their parents are mortal and will not be able to provide them with safety and security in perpetuity. If children do not understand death from birth, when does this understanding develop?




CONCEPTUALIZATIONS OF DEATH:

From an objective perspective, death is an irreversible outcome of a natural process, and a mature conception of death implies that an individual understands it is inevitable, universal, and irreversible (Florian, 1985). However, this concept of death is only reached through cognitive development. Nagy (1948) suggested that there were three developmental stages in the understanding of death. Preschool-aged children lacked a fundamental understanding of death's inevitability and irreversibility. Children aged 5-9 tended to personify death, such that it could be avoided if one were swift enough. Only in the third stage (ages 9-10) did children realize the permanence of death, and the fact that it could occur through a variety of circumstances.

Florian (1985) examined the development of the death concept from pre-kindergarten to first grade in Israeli children, and found that the understanding of death solidified gradually with age. This finding has also been replicated in Western countries (Smilansky, 1987; Speece & Brent, 1992; Wenestam & Wass, 1987). The critical stage in the development of the death concept seems to occur during Piaget's concrete operational period (ages 7-11).

Beyond this process, Florian and Kravetz (1985) claimed that a child's physical and cultural environment influences their representation of death. In fact, cultures which emphasize notions of divine purpose and reincarnation may even inhibit the development of the view of death as the irreversible outcome of a natural process (Bowlby, 1980). Experimentally, Florian and Mikulincer (1997) exposed children (ages 7 or 11) to mortality salience, and found that only the 11-year-olds reacted to mortality salience in a manner similar to the adults in previous studies (Greenberg et al., 1997). This reaffirms the notion that children's conceptions of death do not fully develop until they reach a more mature age.



WALT DISNEY IS REJUVENATED BY THE TEARS OF CHILDREN:

One of the most evil men in the world, next to Hitler.

Although it is easy (and fun) to think of Walt Disney as an evil person, does he deserve the bad rap? If the children watching these films are too young, they may not fully understand the consequences of death. Additionally, children who lack a complete understanding of death often fill in the gaps with fantasy elements (Baker, Sedney, & Gross, 1992), which may be drawn from the media they view, such as Disney movies. Thus arises the question of whether Disney portrays death in an accurate and acceptable manner.



THE VALUE OF DISNEY DEATHS:

Cox, Garrett, and Graham (2005) conducted a content analysis of Disney films, locating 23 death scenes across 10 full-length animated features, ranging from classics such as Snow White and the Seven Dwarfs (1937) to modern movies such as Tarzan (1999). Each character's death was viewed by several research assistants, who coded the character's status (protagonist or antagonist), the depiction of death (implicit or explicit), emotional reactions (whether characters in the movie were sad or happy about the death), and causality (justified or unjustified).

Results for character status suggested that both "good" and "bad" characters are susceptible to death, which can be beneficial, as children viewing these scenes receive the message that even good characters can die (Brent et al., 1996; Willis, 2002).

Depictions of death were split evenly between implicit and explicit. Explicit deaths were more likely when a protagonist died; however, this too can be a positive occurrence, as these scenes depict the explicit deaths of characters the viewer cares about, reinforcing the consequences of death.

Implicit deaths occurred mostly for antagonists, which may suggest that their deaths are inconsequential in comparison to those of the protagonists, perhaps sending children a mixed message about the significance of death.

That's right... twist the knife.

Death was permanent in the majority of films. This is also a positive message, as it reinforces the idea that death is a permanent phenomenon, a concept that many young children do not yet grasp (Baker et al., 1992; Brent et al., 1996; Grollman, 1990; Willis, 2002). Thus, it is theorized that seeing these scenes may help children develop an understanding of death's permanence sooner. However, if children are left unaided in understanding these scenes, the realization of the permanence of death may trigger an internal crisis. Of the deaths that occurred, only six were reversible, and all of the reversible deaths occurred among protagonists, suggesting that antagonists do not get a second chance at life. This point is one of caution, for, as discussed earlier, the notion that death is not permanent could inhibit the development of a mature death concept. Additionally, half of the protagonists who died came back in some form. For example, in The Lion King, Mufasa returns to communicate with Simba. Although this scene may show children that loved ones can always be part of them, it could confuse children into thinking that the deceased may actually return (Worden & Silverman, 1996).

...Braaaaainnns....

Emotional reactions were generally negative for protagonists' deaths, which may provide children who lack experience with death a model of grieving (Baker et al., 1992). Presumably, when children see characters grieve and show frustration over the death of loved ones, they may learn that these behaviors are normal and acceptable. Positive emotional reactions occurred solely for antagonists, and were extremely uncommon.

Finally, it was noted that all of the justified deaths were among the antagonists, many of whom died as the result of an accident. The fact that these characters died via accidents allowed them to "get what they deserve" while maintaining the innocence of the protagonists.



CONCLUSION:

Collectively, a child’s understanding of death seems to depend on two factors: their experience with death (Speece & Brent, 1984), and their developmental level (Brent et al., 1996; Willis, 2002). These films may give children something to relate to when they are experiencing a loss. Watching films in which characters die may help children understand real death in a way that is less traumatic and threatening. Depictions of death may also serve as springboards for discussion between children and adults, since many parents try to downplay the severity and reality of death when discussing it with children (Grollman, 1990; Ryerson, 1977; Willis, 2002). As long as a parent is available to open the appropriate dialogue and encourage a healthy discussion surrounding death, the negative consequences of these Disney death scenes seem to be outweighed by the theorized benefits.

If only Chance (voiced by Michael J. Fox) could go back
in time, he could prevent this tragedy from ever occurring!

However, given the nature of the content analysis conducted by Cox and colleagues (2005), these interpretations should be viewed with some skepticism. In order to infer actual benefits, an experimental study with controlled manipulations would be ideal. With the evidence at hand, there seems to be little to no permanent harm in watching Disney movies, but future research should examine the consequences of these films, specifically for older children who may be susceptible to mortality salience.

Whether or not Disney movies are right for a child is up to each parent. While the developmental stages of the death concept have been suggested to solidify around ages 9-10, it is important for parents to realize that this is a generalization, and that no one knows better than the parent what their child is actually able to comprehend. With this information in mind, parents must carefully weigh the consequences of letting their children view these movies, and make themselves available to quash confusion while putting the information in the appropriate context. As for Kaleigh’s original statement about the violence in Disney's Earth, I'll let you decide:







NOTE: If you have a question for me to research and answer please submit it as a comment, or send it to ELKronos@aol.com / Facebook.com/ELKronos. Submit your name and location if you wish to opine.




CITATIONS:

Baker, J. E., Sedney, M. A., & Gross, E. (1992). Psychological tasks for bereaved children. American Journal of Orthopsychiatry, 62, 105-116.

Becker, Ernest (1973). The denial of death (1st ed.). New York, NY: The Free Press.

Bowlby, J. (1969). Attachment and Loss: Vol. 1. Attachment. New York: Basic Books.

Bowlby, J. (1980). Attachment and Loss: Vol. 3. Loss: Sadness and Depression (International Psycho-Analytical Library No. 109). London: Hogarth Press.

Brent, S. B., Speece, M. W., Lin, C., Dong, Q., & Yang, C. (1996). The development of the concept of death among Chinese and U.S. children 3-17 years of age: From binary to “fuzzy” concepts? Omega: The Journal of Death and Dying, 33, 67-83.

Cox, M., Garrett, E., & Graham, J.A. (2005). Death in Disney films: Implications for children's understanding of death. Omega: Journal of Death and Dying, 50, 267-280.

Florian, V. (1985). Children's concept of death: An empirical study of a cognitive and environmental approach. Death Studies, 9, 133-141.

Florian, V., & Kravetz, D. (1985). Children's concept of death. Journal of Cross-Cultural Psychology.

Florian, V., & Mikulincer, M. (1997). Fear of death and the judgment of social transgressions: A multidimensional test of terror management theory. Journal of Personality and Social Psychology, 73, 369-380.

Greenberg, J., Pyszczynski, T., & Solomon, S. (1986). The causes and consequences of a need for self-esteem: A terror management theory. In R. F. Baumeister (Ed.), Public self and private self (pp. 189-212). New York, NY: Springer-Verlag.

Greenberg, J., Solomon, S., Pyszczynski, T., Rosenblatt, A., et al. (1992). Assessing the terror management analysis of self-esteem: Converging evidence of an anxiety-buffering function. Journal of Personality and Social Psychology, 63, 913-922.

Greenberg, J., Solomon, S., & Pyszczynski, T. (1997). Terror management theory of self-esteem and cultural worldviews: Empirical assessments and conceptual refinements. Advances in Experimental Social Psychology, 29, 61-139.

Greenberg, J., Koole, S. L., & Pyszczynski, T. (2004). Handbook of experimental existential psychology. New York: Guilford Press.

Grollman, E. A. (1990). Talking about death: A dialogue between parent and child (3rd ed.). Boston: Beacon Press.

Nagy, M. (1948). The child's theories concerning death. Journal of Genetic Psychology, 73, 3-27.

Pyszczynski, T., Greenberg, J., Solomon, S., Arndt, J., & Schimel, J. (2004). Why do people need self-esteem? A theoretical and empirical review. Psychological Bulletin, 130, 435-468.

Ryerson, M. S. (1977). Death education and counseling for children. Elementary School Guidance and Counseling, 11, 165-174.

Smilansky, S. (1987). On death: Helping children understand and cope. New York: Peter Lang.

Speece, M. W., & Brent, S. B. (1984). Children’s understanding of death: A review of three components of a death concept. Child Development, 55, 1671-1686.

Wenestam, C.G., & Wass, H. (1987). Swedish and U.S. children's thinking about death: A qualitative study and cross-cultural comparison. Death Studies, 11, 99-121.

Willis, C. A. (2002). The grieving process in children: Strategies for understanding, educating, and reconciling children’s perceptions of death. Early Childhood Education Journal, 29, 221-226.

Worden, J. W., & Silverman, P. R. (1996). Parental death and the adjustment of school-age children. Omega: The Journal of Death and Dying, 33, 91-102.

Saturday, June 18, 2011

Entry 003 - Just Say Please

This week's question comes from Amber Marble, currently in Barre, Vermont, who asks:
"Is please really the magic word?"
Although this question may have been jocose, I felt it was a question worth further exploration. For those of you unaware, "please" is often referred to as the "magic word" among parents, professionals, and even pop culture:


Now, just for clarification purposes, it should be noted that please is not a magic word in the conventional sense (magicians do not say "please" before pulling a rabbit out of their hat). It is said to be magical because it is implied that if one says please before a request, compliance is more likely. In order to understand why please may be a "magic word," it is worthwhile to inspect the origins of the word, and of manners at large.


PLEASE TELL ME THE HISTORY:

According to Webster's dictionary, please derives from Middle English by way of Anglo-French, akin to Latin and Greek forms, with its first known use in the 14th century. Although it is unclear exactly how the word came about, sociologists view manners as the unenforced modern standards of conduct used to demonstrate that one is refined. Words such as "unenforced" and "modern" are especially important to this definition when one considers how manners vary not only between cultures, but over time.

Although the history of manners is somewhat debatable, a complete and well-researched account is offered by Norbert Elias, a German sociologist, in his two-volume book "The Civilizing Process". In it, Elias argues that a complex network of social connections which developed in post-medieval Europe led to the creation of the "super-ego". Freud's theories aside, Elias theorized that perceptions of violence, sexual behavior, bodily functions, forms of speech, and even table manners were transformed by the increasing amounts of shame and repugnance attached to court etiquette.

Due to the lack of research on how various forms of etiquette began, some speculation is necessary. It stands to reason that as prehistoric humans formed societies, it became necessary to learn how to behave peacefully. Given that other individuals are often a source of self-esteem (Baumeister & Leary, 1995), and that we cooperate with and seek out close others (Murray, 1938), manners could be a way for us to achieve social approval (Cataldi & Reardon, 1996). Speculation aside, the question at hand is whether saying please actually prompts a higher rate of compliance.


PLEASE LET THERE BE MAGIC:

According to Francik and Clark (1985), speakers who request information often face potential obstacles in getting it. They devised a study in which participants read a variety of scenarios containing a high or low obstacle to getting the requested information. Results suggest that participants who read a scenario entailing a large obstacle were less likely to make direct requests such as "What time does the concert begin tonight?", favoring instead indirect forms such as "Do you remember what time the concert begins tonight?" The latter implies that wording requests in a non-authoritarian manner is best suited to situations where the obstacle is perceived as large.


While this finding may not be directly related to the word "please," it is interesting to consider in the context of when saying "please" is deemed necessary. Perhaps one of the most common instances is in early childhood interactions with teachers. Everyone likes to be respected, and teachers may even consider it their job to instill good manners in children; however, what does expecting a child to say "please" really imply? If the child wants to use the bathroom, and the teacher insists the child say "please," the teacher is effectively presenting themselves as a larger obstacle to the child's goal than the child is willing or able to recognize. Additionally, if an authority figure asks a child to clean up by saying, "please pick up your toys," the word please makes the request sound less important, and may even lead the child to believe that they have the ability to say no to their parental figure. Instead of teaching a child to say please at every opportunity, it may be more worthwhile to teach children the subtleties of when to say please, as well as to reconsider when you really think a please is necessary.


SAYING PLEASE CAN HELP:

Given that please may not always be necessary, can saying please ever really help? Pennebaker and Sanders (1976) conducted one of the first studies which may shed some light on this question. While their paper examines the effects of authority on reactance, their experimental manipulation warrants some notice. In this study, researchers posted placards in 17 toilet stalls throughout the day. The signs carried either a high-threat message (e.g., "Do NOT write on the walls!") or a low-threat message (e.g., "Please, do not write on the walls").


While there is more to the study than what has been described, the researchers noted that under certain conditions, proffered threats may cause the targeted behavior to increase rather than decrease. This implies that in a public setting, a high-threat message may be worthwhile, but in a private setting, where the chances of getting caught are minimal, a low-threat message (one that says please) is more effective than a high-threat one.


SAYING PLEASE CAN HURT:

Firmin et al. (2004) conducted a study which directly tested the please hypothesis via a telephone poll of students on campus. In the study, research assistants called participants asking whether they would commit to buying one cookie to support a local homeless shelter. Experimenters kept the script identical for all participants, except for the addition of the word please.


As the results suggest, the addition of the word "please" seemed to backfire: when participants were asked to please commit, they were less likely to state they would buy a cookie than when the plea was not used. The researchers attempted to explain these findings by noting that on a campus constantly bombarded with pleas for time and money for various causes, students may have become immune to words like please. Additionally, the researchers theorized that students may have been suspicious of the request, since the word please was not really necessary for what was asked, and the offer of homemade baked goods in support of a homeless shelter may have seemed too good to be true.

Although several other theories are postulated to explain the results, I personally feel that when participants were asked to make a commitment, they felt good about themselves for supporting others. However, when the word please was tacked on, participants were robbed of that feeling, for the word implies that the request is a large one, and as a result they may have rationalized that they were buying the cookie for the asker, not the cause. This explanation fits the perception-of-relationship theory proposed by Aune and Basil (1994).


CONCLUSION:

As illustrated in the video below, saying "please" to a computer is not likely to gain additional compliance. However, one should not give up on ever saying please again. It is merely important to know how to use the term correctly.


Collectively, saying "please" can go a long way toward making a request seem more reasonable; however, caution should be used as to when the word is said. Please should not be used for just any request, but rather for a request that might be considered unreasonable without it. As suggested by Sanders and Fitch (2001), the context in which a statement is made is of great importance. Please can be a magical word, and in the case of our magician, there may be some situations where saying please would help get the rabbit out of the hat, but further research is needed to determine what circumstances make please the most "magical".








NOTE: If you have a question for me to research and answer please submit it as a comment, or send it to ELKronos@aol.com / Facebook.com/ELKronos. Submit your name and location if you wish to opine.






CITATIONS:

Aune, K., & Basil, M. (1994). A relational obligations approach to the foot-in-the-mouth effect. Journal of Applied Social Psychology, 24, 546-556.

Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117, 497-529.

Cataldi, A. E., & Reardon, R. (1996). Gender, interpersonal orientation, and manipulation tactic use in close relationships. Sex Roles, 35, 205-218.

Elias, N. (1969). The Civilizing Process, Vol. I: The History of Manners. Oxford: Blackwell.

Firmin, M. W., Helmick, J. M., Iezzi, B. A., & Vaughn, A. (2004). Say please: The effect of the word "please" in compliance-seeking requests. Social Behavior and Personality, 32, 67-72.

Francik, E. P., & Clark, H. H. (1985). How to make requests that overcome obstacles to compliance. Journal of Memory and Language, 24, 560-568.

Murray, H. A. (1938). Explorations in personality. New York, NY: Oxford University Press.

Pennebaker, J. W., & Sanders, D. Y. (1976). American graffiti: Effects of authority and reactance arousal. Personality and Social Psychology Bulletin, 2, 264-269.

Sanders, R., & Fitch, K. (2001). The actual practice of compliance seeking. Communication Theory, 11, 263-289.