An alternative in language testing research

Validation in language testing in general, and in cloze testing in particular, has been based mainly on criterion-related validity, in addition to construct and content validity. As the former validation technique, in which tests are judged valid or invalid according to their correlations with other supposedly valid criterion tests, is seriously open to question, the present study introduces a new qualitative technique for validation purposes. Researcher research, as it is called here, refers to the researcher’s investigation of his/her own internal thought processes while he/she is taking a test. The idiosyncratic feature of this technique is that whereas in researching others, inferences about what is happening can only be made indirectly and may therefore be wrong, using this technique the researcher, being involved in the task, directly experiences what others can only observe. Such a technique was applied to 11 cloze tests constructed out of the researcher’s previous writings. The cloze-taking processes as experienced by the researcher reveal that different cloze items make different demands on the test-taker. Further applications of the technique, as well as implications for cloze validity as a measure of reading comprehension, are discussed.
________________
Background
Validity, which refers to the degree to which an instrument really measures what it is intended to measure, is one of the characteristics of a good test, the others being reliability and practicality. Traditionally, validity has been discussed and researched in the forms of content validity, construct validity, and criterion-related validity, the last including concurrent and predictive validity. Content validity refers to the degree to which a test measures a representative sample of the content area it is intended to measure. Construct validity deals with whether a testing instrument really measures the underlying construct the test is supposed to measure. Criterion-related validity refers to the degree to which a test measures what another test measures, either at the same time (concurrently) or at a later time (predictively). While the first two validity types have been used to study the validity of a test per se, without comparing it to others, in criterion-related validity one test’s validity has been researched on the basis of another measure. This latter kind of validation study has prevailed in language testing research, and as a result, newly constructed tests have been claimed to be either valid or invalid measures of the criterion tests used. In criterion-related validation research, the validity of a test has been established on the basis of the degree of correlation between the new (experimental) test and the old (criterion) test. Namely, if the observed correlation between the two tests has been high and significant, the new test has been regarded as valid; if the correlation has not been high enough, the test being validated has been considered invalid. Based on such a validation procedure, the new test, if concluded to be valid, could replace the older test and be used for exactly the purposes for which that test had been or could be used.
Such a validation procedure has been the norm in language testing research and has been practiced by many well-known testing researchers, including Taylor (1957), Carroll et al. (1959), Bormuth (1967), Rankin & Culhane (1969), Oller & Conrad (1971), Oller (1973), Stubbs & Tucker (1974), Irvine et al. (1974), Jonz (1976), Alderson (1979a, b), Hinofotis (1980), Shohamy (1983), Stansfield & Hansen (1983), Hanania & Shikhani (1986), Ilyin et al. (1987), Hale et al. (1989), Chapelle & Abraham (1990), Fotos (1991), and Greene (2001), to name a few. (For a review of these studies, see Sadeghi, 2002c.) Serious doubts have been cast on this kind of validation, in which one test has been proposed as a substitute for another simply because the two are moderately to highly correlated.
The concern over the validity of criterion-validation stems from the fact that the statistical technique of correlation, the main statistical tool used in this kind of validation, was devised to show the degree of association between two variables; the presence of a high degree of relationship, or even a perfect correlation coefficient, between two variables is not meant to imply that they are of the same nature or that they are interchangeable. Although no such claim is made in the underlying concept of correlation, the technique has been widely used for this improper purpose, whereby, on the basis of high correlations between two tests (for example, cloze and reading tests), they have been concluded to be measuring the same thing and thus to be interchangeable. The arguments against this research trend in language testing have been put forward by Sadeghi (2002a, b, and c). The application of correlational techniques for validation purposes, whereby one test is suggested to be a valid measure of another and therefore able to replace that test, is possible only if three conditions are met: 1) the tests are of the same nature and character (for example, both are tests of language proficiency with similar item types); 2) the tests are intended for the same purpose (both intended to measure language proficiency, for example); and 3) the degree of correlation and the variance overlap between the two measures is near perfect; if no significant information is to be lost by substituting one test for another, the correlation should be +1.00. (For further discussion, see Sadeghi, 2002c.)
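To make condition (3) concrete, recall that the proportion of variance two measures share is the square of their correlation coefficient. The figures below are an illustrative calculation added here for clarity, not data from any of the studies cited:

$$\text{shared variance} = r^{2}, \qquad r = 0.80 \;\Rightarrow\; r^{2} = 0.64.$$

That is, even a correlation as high as +0.80 leaves 36% of the variance in one test unaccounted for by the other; only r = +1.00 yields full variance overlap and hence substitution with no loss of information.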
As a result of his dissatisfaction with criterion-validation in language testing, and particularly in cloze testing, where attempts at content validation and construct validation have been in vain because it is not at all clear what cloze tests are measuring, the present researcher suggests a new validation technique, called ‘researcher research’, which it is hoped will clarify what cloze tests are actually doing and whether the claims made about what cloze tests measure are substantiated. ‘Researcher research’ refers to the active and conscious engagement of a researcher in the test-taking process; it is a kind of research in which the researcher and the subject of the research are the same individual. Instead of indirectly observing the test-taking process in others, the researcher becomes an insider and gains access to first-hand data by directly experiencing the problem under investigation. The application of the technique to a few cloze tests is presented below, and suggestions are made as to how the technique may be applied in other testing contexts.
Method
Subjects. The only subject of this study was the researcher himself.
Materials. The research and measurement tools used in this study were 11 cloze tests. The cloze tests, with a deletion rate of every 7th word, were made from the researcher’s previous writings, which ranged from three months to one year old. Another person was instructed to make cloze tests from extracts that the researcher had already selected so as not to contain much quoted material. To allow what is called lead-in and lead-out, the first and the last sentences of each passage were left intact. The cloze tests constructed varied in length: while the shortest contained 34 items, the longest had 53 blanks. A sample of the cloze tests used in the study appears in the appendix.
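The construction procedure just described is mechanical enough to be expressed in a few lines of code. The sketch below is offered only as an illustration of every-7th-word deletion with lead-in and lead-out; it assumes naive whitespace tokenization and full-stop sentence splitting, and it is not the procedure actually used in this study, where the tests were prepared by hand:

```python
import re

def make_cloze(text, n=7, blank="______"):
    """Delete every n-th word of a passage, leaving the first and
    last sentences intact as lead-in and lead-out."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) < 3:
        raise ValueError("Text too short for lead-in and lead-out.")
    lead_in, lead_out = sentences[0], sentences[-1]
    words = " ".join(sentences[1:-1]).split()
    key = []  # numbered answer key of the deleted (original) words
    for i in range(n - 1, len(words), n):  # every n-th word
        key.append((len(key) + 1, words[i]))
        words[i] = blank
    return " ".join([lead_in] + words + [lead_out]), key
```

Running make_cloze on an extract returns the gapped passage together with the answer key against which exact-word scoring can later be carried out.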
Procedure. After the cloze tests were constructed, the researcher sat half of the tests in one session and the other half in another session, and made a note of the time used for each cloze test separately. After completing the blanks, the researcher himself scored the cloze tests using both exact-word and acceptable-word scoring methods. Although research findings are contradictory as to whether allowing acceptable scoring makes a significant difference to the results, and mismatching conclusions seem to have been arrived at in different studies, using acceptable scoring, at least for non-native speakers of the language, seems fairer even if the differences are not large. Based on such a justification, the cloze tests used here were also scored using the acceptable-scoring procedure.
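Scoring under the two methods can be sketched in the same way. In the illustration below (again the present writer’s sketch, not a standard implementation), the set of acceptable alternatives for each item must be supplied by the scorer, since acceptability is a matter of human judgement:

```python
def score_cloze(answers, originals, acceptable=None):
    """Return (exact score, acceptable score) for one completed cloze test.

    answers    -- the test-taker's responses, one per blank
    originals  -- the deleted words, in order
    acceptable -- optional dict mapping item index to a set of
                  scorer-approved alternatives to the original word
    """
    acceptable = acceptable or {}
    exact = accept = 0
    for i, (a, o) in enumerate(zip(answers, originals)):
        a = a.strip().lower()
        if a == o.lower():
            exact += 1
            accept += 1
        elif a in {w.lower() for w in acceptable.get(i, set())}:
            accept += 1
    return exact, accept

# Item 5 from the findings below: original 'communicated',
# with 'written' judged contextually acceptable by the scorer.
print(score_cloze(["written"], ["communicated"], {0: {"written"}}))  # (0, 1)
```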
Findings
Quantitative data. The following table presents the observed mean score under both exact and acceptable scoring for all of the tests used, and also the mean score that would be obtained if all items in all tests were answered correctly.
Table 1: Mean observed and expected scores on cloze tests under exact and acceptable scoring

| Scoring method     | Observed mean | Expected mean |
|--------------------|---------------|---------------|
| Exact scoring      | 34.36         | 43.18         |
| Acceptable scoring | 42.00         | 43.18         |
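Expressed as proportions of the expected mean (a simple derivation from Table 1, added here for clarity), the two scoring methods give:

$$\frac{34.36}{43.18} \approx 79.6\% \;\;\text{(exact)}, \qquad \frac{42.00}{43.18} \approx 97.3\% \;\;\text{(acceptable)}.$$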
A comparison of the exact-scoring average (observed mean) with the total possible average (expected mean) indicates that if only exact scoring were allowed, and if cloze were regarded as a measure of reading comprehension, the results obtained would mean that the researcher was very far from understanding what he had himself written. As the table above shows, the scores improved considerably with acceptable scoring. This was very much expected, especially because the texts were the researcher’s own previous writing samples, with whose content and written style he was thoroughly familiar. To give more meaning to the quantitative data in the above table, the following findings should also be taken into account.
Acceptable answers that could be counted unacceptable. As the following examples show, some of the answers regarded as acceptable were acceptable given the contextual clues in the text; but if the meaning intended by the original word were the criterion for acceptability, most of the items considered ‘acceptable’ here would have been counted as ‘unacceptable’, thus distancing the observed acceptable-score mean even further from the expected mean and leading us to the conclusion that even the acceptable-score average may not be significantly high. Such a consideration further reduces the chances of cloze being a proper measure of reading comprehension as far as the statistics are concerned. Instances in which answers were counted as ‘acceptable’ in this study but could be counted as ‘unacceptable’ if the criterion for acceptability were stating the same information as the original word are the following:
1) Although the beginning of reading dates back to the invention of writing, and since … [original: 5000; acceptable: many] years ago people have been dealing with reading (Orasanu & Penny, 1986: 1), the real nature of reading remained uninvestigated until the mid-19th century (Vernon, 1984: 48).
2) During this period of research on reading, different people (… [original: psychologists; acceptable: i.e.,], linguists, psycholinguists, educators, second-language researchers, language teachers, etc.) have looked at the same entity from different angles.
3) …it is not clear at all what the cloze tests are intended to measure and that they ‘turn out’ to measure different things based on … [original: correlational; acceptable: different] analyses.
4) In real-life reading, the reader has a purpose and an interest in reading a passage, and because he/she chooses to read one text rather than the other, he/she has an idea of what the text is about and expects to find some … [original: expected; acceptable: specific] information in the text.
5) There is no doubt in the fact that communication whether in spoken or written mode does not occur in a vacuum. The implication for the comprehension of the … [original: communicated; acceptable: written] message is that all elements present in that particular event have their share in affecting the success of communication.
6) K. Brown (1994: 61) points out that different types of lexical sets can be chosen to transfer the meaning …[original: of; written: and/] or the perspective on the same event.
As the above examples illustrate, an ‘acceptable’ answer may not convey the meaning originally intended and can therefore be regarded as ‘unacceptable’ if comprehension of the original meaning is of interest. In such cases where the word is counted as acceptable, there seems to be a loss of meaning: something is conveyed either less than or beyond what was originally intended by the writer. And because it cannot be said that the writer was unable to understand what he had originally written, it can be concluded that the cloze procedure may not be an appropriate technique for this purpose, namely measuring the comprehension of a written passage.
Acceptable answers that could not have been provided without the researcher’s familiarity with the content of the quoted material. Sometimes the different processes involved in cloze-taking (i.e., stopping and thinking about what the whole thing was about) led to acceptable answers; at other times, despite really engaging with the problem, the researcher was unable to make sense of what was originally meant. This latter case led to inappropriate and irrelevant answers, clearly showing a lack of comprehension in that part. Had it not been for the researcher’s familiarity with the content of his own writing, instances like these, where the flow of reading was interrupted, would have been even more frequent. Instances of blanks which posed a real challenge to the present researcher, leading to inappropriate answers or no answer, and which could therefore be interpreted as miscomprehension or lack of comprehension, are as follows:
1) One group sat the test in the normal way; with the other group, however, after each subject gave his/her answer to an item, the correct word was revealed. Brown (1983: 247) called the first cloze-type ‘… [original: independent-item] cloze’ and the second type ‘… [original: dependent-item] cloze’.
In this item, had it not been for the researcher’s familiarity with what Brown had called such cloze-tests, he would have been unable to fill in the blank correctly. The problem in this case and the following one is more related to the fact that the omitted words are the original writer’s words being quoted:
2) Discourse-level knowledge has also been called ‘formal’ or ‘…’ [original: textual; written: content] schema (Singhal, 1998: 2).
Similarly, in the following example, had it not been for the researcher’s familiarity with the original writer’s focus of study, the omitted word could not have been restored correctly:
3) Khaldieh (2001: 427), for example, working on reading comprehension of …[Arabic] as a foreign language, found that reading comprehension was a direct result of knowledge of vocabulary.
Answers that were unacceptable and posed a challenge leading to miscomprehension and/or incomprehension. In the following cases, the researcher obviously understood the text differently. Reflecting on the actual test-taking process, the researcher remembers how challenging it was to understand the relationships between sentences in these cases, where miscomprehension was the result. In other similar cases, comprehension was achieved only after much effort and challenge, although the same texts would have presented no challenge in normal reading:
4) The implication of these lines for testing reading comprehension is that due attention should be given to selecting those kinds of texts for measurement purposes that are culturally unbiased. … [original: Otherwise; written: However], our estimates of the reader’s comprehension may be incorrect because they will include results gained for testing the subject’s understanding of L2 cultural knowledge rather than L2 linguistic knowledge.
5) That is to say, the ‘content’ validity of the cloze procedure, whether it is supposed to measure reading ability, language proficiency, etc., is under question, because in neither case is a content area identified a priori from which a representative sample may be selected, and …[original: whatever; written: the] sample is taken by the cloze procedure is just a random selection of a text, or at best influenced by the test-constructor’s judgements of its suitability for the context he/she is working in.
6) Generally, the context variable refers to all reader-, writer-, and …[original: text-external; written: text?] factors, such as environmental and situational elements, which may affect reading comprehension.
The sign ‘?’ after the word written in the blank shows the reader’s doubt about what he has written and indicates that he had difficulty getting at the intended meaning in that part.
Answers considered unacceptable but which do not show the researcher’s inability to comprehend or a lack of related grammatical knowledge. There were a few other instances which would be considered ‘incorrect’ responses under both exact and acceptable scoring of cloze tests, meaning that comprehension had not taken place if cloze scores are accepted as evidence of comprehension. Reflecting on such cases, the researcher research process allows the researcher to note that such ‘wrong’ or ‘unacceptable’ answers are perhaps some type of ‘mistake’, and he is not convinced that he failed to comprehend the parts in which he made these mistakes. A few such cases are as follows:
1) He also notes other scoring methods like form class scoring where any word coming from the same form class as the original word …[original: is; written: are] deleted.
2) Taylor (1956: 48) found a high negative correlation between exact scoring …[original: and; written: a] clozentropy scoring (r= -0.87), which was taken to mean that ‘cloze scores are dependable estimates of negative entropy’…
3) Validity of a test …[original: means; written: refers] the degree to which a test actually measures what it is intended to measure.
4) Different camps have chosen to look at the same thing from different angles and have focused their attention on particular aspects of language. As a result language has been viewed …[original: as; written: a] a system by some and as an institution or a social act by others.
These examples clearly show that such answers are simply mistakes, slips of the pen or of the mind; they do not show that the test-taker lacked the relevant knowledge to answer correctly, or that, because he did not give correct answers, he was unable to comprehend the relevant parts.
Discussion
The argument in this paper is that if cloze tests measured reading comprehension, as many studies based on correlational research claim, then a person taking cloze tests based on his/her own writing should be able to complete all the blanks without error, at least under acceptable scoring. Such an argument rests on the assumption that somebody who writes a text has an ability beyond comprehending what he/she writes, because without proper comprehension one cannot produce a coherent piece of text. On this argument, it can be concluded that somebody who writes something is able to comprehend it completely, because otherwise he/she would not have been able to write the text. So it follows that if a cloze test can properly measure reading comprehension, it should not present any challenge to somebody taking a cloze based on what he/she has written. The fact that the scores of the cloze-taker in this study did not amount to the total possible score under either exact or acceptable scoring gives us some quantitative evidence that cloze may not be testing reading comprehension properly and that it may be testing something below and beyond mere comprehension.
The validity of the cloze procedure as a measure of reading comprehension in such a context is under question not only because the cloze-taker was unable to fill in all the blanks correctly but also because of the other challenges it posed to the test-taker. Furthermore, not all the blanks which the cloze-taker completed successfully required real comprehension of the passage. Some were parts of cliché phrases or idioms; others were function words which required only some grammatical knowledge. Not all blanks, however, were like these. It should also be stressed that the familiarity of the texts was a great help to the researcher, who sometimes remembered a whole sentence before he saw it on the paper (and this is why texts written at least three months earlier were selected, so as to lessen the role of memory).
Reflecting on the test sessions, the researcher remembers that cloze did produce some challenge. First of all, the time for cloze-testing was about one and a half times the time he spent on reading the texts in non-cloze format. In addition to the time factor, while in the majority of cases, in many of which either function words or words frequently used in his writing (such as reading, comprehension, cloze, procedure, measure, test, etc.) were needed to complete the blanks, cloze-taking involved no further challenge and resembled normal reading in which the flow of reading is not blocked, in a few other cases the process of cloze-taking was very different from that of normal reading. In other words, if the researcher had read the original version of the text he had written, he would no doubt have understood every bit of it without his reading being blocked or its flow disturbed. While taking some cloze tests, however, the researcher was stopped in places and needed to think about what ideas were being discussed and what words should be inserted into the blanks. Such cases blocked the flow of reading compared to the way the same text would be read with no blanks. Although it is accepted that when somebody reads a text for the first time, his/her reading may not be fluent, and he/she may need to stop and think in order to understand before moving forward, such a consideration does not apply in this research context, because the text was the reader’s own, which he had already read (and understood) several times.
Reflecting on what the researcher was doing when taking cloze tests constructed from his own writing, and comparing it with normal reading, allows him to conclude that cloze tests as used in this study may not be proper instruments for measuring one’s degree of reading comprehension. The researcher’s direct involvement in taking the cloze tests allows him to claim that cloze tests may measure some degree of superficial comprehension, where the blanks are completed with structural words or frequently used content words. They do not seem to be able to measure high-level and overall comprehension appropriately, as shown to the researcher through the researcher research process.
The fact that proper comprehension can be expected to take place only after a text has been read in its complete, undeleted format, and that only after the reader has been given such a chance can we talk about assessing his/her degree of comprehension of what he/she has read, forces the researcher to conclude that cloze tests are unsuitable for assessing reading comprehension. The reason for this conclusion is mainly that cloze tests place a double task on the shoulders of the reader. Namely, they require readers to reconstruct an incomplete passage, to ‘reproduce’ something that has not been presented to them. No doubt this reproduction may be far removed from the original text. Based on such a reproduction, which requires as much thinking and intelligence as knowledge of the language, concluding that the original text has or has not been comprehended does not seem logical at all, simply because the reader has been denied access to the total meaning from the very beginning. Requiring readers to produce something that is partly unknown to them and then to comprehend it is a different thing from giving them the text in full and asking them to read and understand something which has already been produced. It is not, however, implied here that all the meaning resides in the written text, but rather that some elements are missing in the negotiation of meaning between the reader and the text in cloze reading.
Supporting this argument are the results of this study, in which cloze-taking was found to be a different process from normal reading and challenging at times. The application of the technique called ‘researcher research’ to the cloze tests in this study made clear to the researcher that the conclusion that cloze tests are valid tests of reading comprehension, based simply on score correlations with other tests of reading, cannot be sustained. A similar finding was arrived at on the basis of both qualitative and quantitative data from 213 Iranian EFL students who took different forms of cloze tests as part of the researcher’s PhD project (Sadeghi, 2003).
Conclusions and Suggestions
The paper began with a short review of the dominant validation techniques used in the field of second/foreign language testing research. Casting doubt on the validity of the most widely used validation technique, i.e., criterion-validation, in which the technique of correlation is used for improper purposes, a new technique was introduced in which the validity of a testing instrument could be directly accounted for by the researcher. The technique, called ‘researcher research’, was applied to a few cloze tests constructed out of the researcher’s previous writing samples. The conscious involvement of the researcher in the test-taking process allowed him to understand better what was required for the successful completion of cloze items and how cloze reading compares with normal non-cloze reading. The findings indicated that, contrary to normal reading, in which the flow of reading is less interrupted, cloze reading blocked access to meaning in some cases, producing serious challenge and leading to miscomprehension or lack of comprehension in a few cases. In the majority of the instances where blanks produced no interruption or challenge, the deleted words were either function words needing a minimum degree of inter-sentential comprehension or content words which the researcher frequently used in his writing and which could therefore be regarded as clichéd words for him. Based on the evidence produced through ‘researcher research’, the cloze tests studied were regarded as testing something below and beyond mere reading comprehension.
The validation technique presented here is not intended for use with cloze tests only. ‘Researcher research’ can guide the researcher in finding out whether other tests intended to measure reading and listening comprehension, or to test knowledge of vocabulary and grammar, appropriately serve what they are intended to. The evidence thus produced may be combined with other qualitative and quantitative data to support the validity of a test for a particular purpose.
References
Alderson, J. C. (1979a). The cloze procedure and proficiency in English as a foreign language. TESOL Quarterly, 13 (2), 219-227.
Alderson, J. C. (1979b). Scoring procedures for use on cloze tests. In C. A. Yorio, K. Perkins, and J. Schachter (Eds), On TESOL ’79: The learner in focus (pp.193-205). Washington, D.C.: TESOL.
Bormuth, J. R. (1967). Comparable cloze and multiple-choice comprehension test scores. Journal of Reading, 10, 291-299.
Carroll, J. B., Carton, A. S., & Wilds, C. (1959). An investigation of “cloze” items in the measurement of achievement in foreign languages. Cambridge, MA: Graduate School of Education, Harvard University, Laboratory for Research in Instruction. (ERIC ED 021-513)
Chapelle, C. A., & Abraham, R. G. (1990). Cloze method: What difference does it make? Language Testing, 7 (2), 121-146.
Fotos, S. S. (1991). The cloze test as an integrative measure of EFL proficiency: A substitute for essays on college entrance examinations? Language Learning, 41 (2), 313-336.
Greene, B. B. (2001). Testing reading comprehension of theoretical discourse with cloze. Journal of Research in Reading, 24 (1), 82-98.
Hale, G. A., Stansfield, C. W., Rock, D. A., Hicks, M. M., Butler, F. A., & Oller, J. W. (1989). The relation of multiple-choice cloze items to the Test of English as a Foreign Language. Language Testing, 6 (1), 47-76.
Hanania, E., & Shikhani, M. (1986). Interrelationships among three tests of language proficiency: Standardized ESL, cloze and writing. TESOL Quarterly, 20 (1), 97-110.
Hinofotis, F. B. (1980). Cloze as an alternative method of ESL placement and proficiency testing. In J. W. Oller and K. Perkins (Eds), Research in language testing (pp. 121-128). Rowley, MA: Newbury House.
Ilyin, D., Spurling, S., & Seymour, S. (1987). Do learner variables affect cloze correlations? System, 15 (2), 149-160.
Irvine, P., Atai, P., & Oller, J. W. (1974). Cloze, dictation, and the test of English as a foreign language. Language Learning, 24 (2), 245-252.
Jonz, J. (1976). Improving on the basic egg: The M-C cloze. Language Learning, 26 (2), 255-265.
Oller, J. W. (1973). Cloze tests of second language proficiency and what they measure. Language Learning, 23 (1), 105-118.
Oller, J. W., & Conrad, C. A. (1971). The cloze technique and ESL proficiency. Language Learning, 21 (2), 183-195.
Rankin, E. F., & Culhane, J. W. (1969). Comparable cloze and multiple-choice comprehension scores. Journal of Reading, 13, 193-198.
Sadeghi, K. (2003). An investigation of cloze procedure as a measure of EFL reading comprehension with reference to educational context in Iran. Unpublished PhD dissertation. Norwich: University of East Anglia.
Sadeghi, K. (2002a). The judgmental validity of cloze as a measure of reading comprehension. Paper presented at the 7th METU International ELT Convention, METU, Ankara, Turkey, 23-25 May.
Sadeghi, K. (2002b). The criterion validity of cloze as a measure of EFL reading comprehension. Paper presented at BERA Research Student Symposium, The University of Exeter, Exeter, UK, 11-12 September.
Sadeghi, K. (2002c). Is correlation a valid statistical tool in second language research? Paper presented at the 12th European Second Language Association Conference (EUROSLA12), Basel University, Basel, Switzerland, 18-21 September.
Shohamy, E. (1983). Interrater and intrarater reliability of the oral interview and concurrent validity with cloze procedure in Hebrew. In J. W. Oller (Ed.), Issues in language testing research (pp. 229-236). Rowley, MA: Newbury House.
Stansfield, C., & Hansen, H. (1983). Field dependence-independence as a variable in second language cloze test performance. TESOL Quarterly, 17 (1), 29-38.
Stubbs, J. B., & Tucker, G. R. (1974). The cloze test as a measure of English proficiency. Modern Language Journal, 58, 239-241.
Taylor, W. L. (1957). ‘Cloze’ readability scores as indices of individual differences in comprehension and aptitude. Journal of Applied Psychology, 41, 19-26.
Karim Sadeghi holds a Ph.D. in TEFL/TESOL (Language Testing) from the University of East Anglia in the UK. He has several years of experience teaching EFL at various levels. After finishing his Ph.D. in August 2003, he returned to Iran and has since been teaching and researching at Urmia University.
Appendix A: The sample cloze test used in the study
There is no doubt in the fact that communication whether in spoken or written mode does not occur in a vacuum. (1)The implication for the comprehension of the (2)communicated message is that all elements present (3)in that particular event have their share (4)in affecting the success of communication, i.e., (5)the comprehension of the intended message. The (6)most obvious of all is the knowledge (7)of the linguistic elements involved such as (8)lexicon and syntax. Although some superficial comprehension (9)may take place in spoken language as (10)a result of contextual clues present, it (11)can be argued that without a certain (12)degree of linguistic competence the achievement of (13)proper comprehension will be out of reach. (14)Knowing the meanings of vocabulary items has (15)been regarded as the most important element (16)of linguistic competence. However, knowing word meaning (17)is no guarantee that comprehension will take (18)place and the knowledge of how words (19)are related to one another and how (20)sentences or utterances are related to one (21)another are crucial in shaping the outcome (22)of communication.
The second important factor in (23)determining the success of communication is knowledge (24)of the context or situation in which (25)the communicative event is taking place. The (26)same sentence or utterance may have totally (27)different and unrelated and sometimes opposite meanings (28)if spoken or written in different situations. (29)Apart from immediate physical context, a knowledge (30)of the larger socio-cultural context in which (31)the message is being conveyed may also (32)shape the way a reader/listener approaches the (33)message and will therefore lead to the (34)kind of comprehension and interpretation motivated by (35)that context. No doubt, the lack of knowledge of such contextual conditions may sometimes lead to misunderstanding of the message despite having no problem in decoding the linguistic elements present in the message.
