August 25, 2014

A Summary: the Effect of Implied Orientation Derived from Verbal Context on Picture Recognition

I enrolled in Psych C120 (Basic Issues in Cognition) at the University of California, Berkeley this Summer Session. Below is my Alternative Assignment for RPP Credit. I chose a paper in cognitive psychology, read it, and wrote a short summary of the research.

The paper is very interesting. It was originally published by Robert A. Stanfield and Rolf A. Zwaan in Psychological Science in 2001. The topic is the effect of implied orientation derived from verbal context on picture recognition. Below is my assignment:

Stanfield and Zwaan (2001) were attempting to test the contrasting predictions generated by amodal and perceptual symbol systems. Perceptual symbol systems assume an analogue relationship between a symbol and its referent, whereas amodal symbol systems assume an arbitrary relationship between a symbol and its referent. Barsalou (1999b) found that changes in the referent caused changes in the perceptual symbol. He predicted that if an individual reads a sentence such as “John put the pencil in the cup”, the simulation should include a vertical orientation for the pencil, while the simulation of a sentence like “John put the pencil in the drawer” should include a horizontal orientation for the pencil. This argument assumed that perceptual symbols relevant to comprehension are activated during a simulation.

First, Stanfield and Zwaan pilot-tested the materials with a separate group of participants. Participants were presented with an object name and then a picture, and had to decide whether the picture matched the name. Some pictures served as filler items, while others served as experimental items (each experimental item was rotated on its vertical axis, doubling the number of possible experimental items). Nearly half of the items required a “yes” response, as did all of the experimental items, and response times were recorded. Participants were also asked to rate the general quality of each picture on a 7-point Likert scale. Items had to meet several conditions in order to be used in the main experiment. Next, to ensure that the two sets of experimental sentences (horizontal, vertical) equally constrained the potential orientation of their respective objects, other participants were presented on each trial with one of the experimental sentences together with the corresponding horizontal and vertical pictures. They were asked whether the two pictures matched the sentence and whether one matched better, choosing from four alternatives.

For the main experiment, four lists of sentence-picture pairs were created: vertical-vertical, horizontal-horizontal, vertical-horizontal, and horizontal-vertical. Each participant was exposed to each condition, and the orientation of the objects was varied. On each trial, participants first read the sentence until they understood it, and then saw a picture. Participants had to determine, as quickly as possible, whether the pictured object had been mentioned in the preceding sentence. After finishing the computerised task, participants were given the Flags test of general spatial ability as a covariate, in order to examine the potentially mediating effect of individual differences in mental representations. The experimenters used a 2 (match vs. mismatch) × 2 (horizontal vs. vertical) × 4 (list) within-participants analysis of covariance (ANCOVA), with the score on the Flags test as the covariate of interest.

The key independent variable was whether the picture matched the orientation of the object implied by the sentence. The dependent variable was the time it took participants to respond “yes”. The recognition of an object mentioned in a sentence was influenced by the orientation of the object: pictures matching the orientation implied by the sentence were responded to faster than pictures that did not match that orientation.
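The match/mismatch comparison at the heart of the design can be sketched in a few lines of code. The response times below are invented purely for illustration (they are not from the paper), and the helper name `mean_rt` is my own; the point is only to show how trials are classified as matching or mismatching and how the condition means are compared.

```python
# Hypothetical sketch of the match vs. mismatch comparison: each sentence
# implies an orientation, each picture shows one, and the prediction is
# faster "yes" responses when the two agree. All numbers are invented.
from statistics import mean

# (implied_orientation, picture_orientation, response_time_ms) -- made-up data
trials = [
    ("vertical",   "vertical",   610), ("vertical",   "vertical",   595),
    ("horizontal", "horizontal", 620), ("horizontal", "horizontal", 605),
    ("vertical",   "horizontal", 680), ("vertical",   "horizontal", 665),
    ("horizontal", "vertical",   690), ("horizontal", "vertical",   675),
]

def mean_rt(condition):
    """Mean response time for matching or mismatching sentence-picture pairs."""
    if condition == "match":
        rts = [rt for sent, pic, rt in trials if sent == pic]
    else:
        rts = [rt for sent, pic, rt in trials if sent != pic]
    return mean(rts)

print(f"match: {mean_rt('match')} ms, mismatch: {mean_rt('mismatch')} ms")
# With these invented data, matching pairs come out faster (607.5 vs. 677.5 ms),
# which is the pattern the perceptual-symbol account predicts.
```

The actual analysis in the paper was the 2 × 2 × 4 ANCOVA described above, not a simple comparison of means; this sketch only illustrates the direction of the predicted effect.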

This finding was interpreted as offering support for theories positing perceptual symbol systems. It suggested that similar information might be available about all objects contained in the situation model. It also suggested that amodal theories are insufficient to fully explain comprehension, and that perceptual symbol systems are a viable alternative.

Reference: Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153–156.