Sunday, September 23, 2007

What struck me most about the reading were the experiments discussed in the two packets, “Experimental Studies” and “Basic Observations on Remembering.” While first reading about the different experiments, all I could think was that although the papers called them great breakthroughs in the field of memory research, to me they seemed to barely scratch the surface of the questions I would want answered in order to have a psychologically based working model of the different types of memory.
Since I have a lot of background in the sciences, I am well versed in the idea of a scientific experiment, the importance of independent and dependent variables, and the idea of holding variables constant. The Ebbinghaus experiments and the many different experiments performed on interference theory were all very ingeniously designed and obviously based on a scientific experimental model. The Ebbinghaus experiment used nonsense syllables to try to hold the variable of experience constant, since the subjects would not have already formed ideas or impressions about the words that might later sway their results. Ebbinghaus reasoned that if the experiment used nonsense syllables, which theoretically had no previous meanings or associations for the test subjects, then any associations formed during the experiment could be attributed to the experiment itself. The experiments conducted on interference theory always used the same lists, and seemed to use a very large sample pool.
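The nonsense-syllable idea is concrete enough to sketch: Ebbinghaus built consonant-vowel-consonant trigrams so that no syllable carried prior meaning. A quick, hypothetical generator (the letter sets and list length here are my own choices, not from the readings) might look like this:

```python
import random

random.seed(42)  # reproducible study list

CONSONANTS = "bcdfghjklmnpqrstvwz"
VOWELS = "aeiou"

def nonsense_syllable():
    """One Ebbinghaus-style consonant-vowel-consonant trigram."""
    return (random.choice(CONSONANTS)
            + random.choice(VOWELS)
            + random.choice(CONSONANTS))

# A study list of 10 syllables. In practice Ebbinghaus also had to
# screen out trigrams that happened to be real words or to carry
# strong associations -- exactly the "experience" variable he was
# trying to hold constant.
study_list = [nonsense_syllable() for _ in range(10)]
print(study_list)
```

The point of the design shows up in the code: every item is built the same way from the same letter pools, so differences in recall can't easily be blamed on one word being more familiar than another.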
These experiments are very simplistic and only seem to examine a very small, specific aspect of human memory. The problem seems to be that in order to keep with the scientific model and hold as many variables constant as possible, the experiments must be organized that way. I suppose this model is necessary if you want results that can be generalized and applied to a working model of memory. Both papers address this issue: it is impossible to hold every variable constant, which is simply an inevitability when working with the human mind.
So many millions of factors could affect someone’s memory. One subject could be having a bad day or not have gotten enough sleep and not perform as well. Another subject might be a biologist, so a list of vegetables may have more significance to him, and he would remember the items longer because they carried more meaning. Even the person conducting the experiment could unconsciously place more emphasis on some words, making them later stick out in the subject’s mind. There is such a multitude of variables that it seems not only could they not all be controlled, we might not even be able to think of many of them.
It seems to me that the only way to make the results as unbiased as possible is to test as many people as possible. The more results are averaged in, the less likely it is that random differences between people will affect the overall results, and the more likely it is that a pattern will emerge. After reading about the experiments I tried to think of ways in which experiments could be designed to study more complex aspects of memory, and I could not think of any. People come from such different backgrounds, have such different minds, and such different ways of dealing with the world that it seems impossible to test memory in more complex ways except through case studies. I wonder if it would be possible to come up with an in-depth explanation of the different types of memory that was applicable to the general population. I know that the results from those simplistic experiments are applicable to a general memory model, but they don’t seem to go deep enough. I guess I just ended up somewhat frustrated by my realization of how complex the mind is and how hard it is to study. I hope that this made sense and wasn’t too scattered.
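The intuition above, that averaging over more subjects washes out random individual differences, is essentially the statistical law of large numbers, and a tiny simulation can show it. Everything here is made up for illustration (a "true" recall score of 10 items with individual variation of 3), not data from the readings:

```python
import random

random.seed(0)  # reproducible simulation

def mean_recall(n_subjects, true_mean=10.0, individual_sd=3.0):
    """Average recall score across n simulated subjects, each with
    random individual variation (bad day, prior familiarity, etc.)."""
    scores = [random.gauss(true_mean, individual_sd) for _ in range(n_subjects)]
    return sum(scores) / n_subjects

def spread_of_means(n_subjects, n_experiments=2000):
    """Standard deviation of the sample mean across many repeated
    experiments -- how much one experiment's result wobbles."""
    means = [mean_recall(n_subjects) for _ in range(n_experiments)]
    grand = sum(means) / n_experiments
    var = sum((m - grand) ** 2 for m in means) / n_experiments
    return var ** 0.5

# With 5 subjects, individual quirks dominate the average;
# with 500 subjects, the wobble shrinks by roughly sqrt(100) = 10x.
print(spread_of_means(5))
print(spread_of_means(500))
```

This is why a large sample pool matters: the underlying pattern does not get stronger with more subjects, but the noise around it gets predictably smaller.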
