Abstract
The study evaluates first-year English (General) undergraduate (BS, BE, and BA) semester examination papers and their respective aims and objectives, collected from Karachi's public and private sector universities. A total of 15 papers were collected to compare the vocabulary assessment practices of public and private sector universities. The research centres on how well the semester papers of English (General) assess vocabulary knowledge against the devised aims and objectives. The study follows a case study methodology within the qualitative research paradigm. The findings reveal that both sectors test only a fraction of words, at a superficial level, within reading and writing tasks in examinations. Both follow outdated methods of testing vocabulary, which cannot guarantee communicative competence in students, although public sector universities place more emphasis on checking vocabulary in examination papers. The study suggests revising the overall assessment criteria and the course objectives related to vocabulary, which entails updating current teaching and testing practices as a whole.
Key Words
Vocabulary, Examination Papers, Communicative Competence, Assessment, Testing
Introduction
Vocabulary acquisition is the main requirement of effective communication and one of the building blocks of all the macro skills of language, and testing is one way to ensure how competently a student can understand and employ a wide range of vocabulary in a variety of settings. Vocabulary is an important factor in determining a student's ability to become a competent learner. Smith (1941) noted that vocabulary is one of the markers of high achievers. Recognising the demands of vocabulary acquisition, Grabe (2009) suggests that L2 learners should learn 2,000 words annually, i.e., 50 words per week for 40 weeks per year. To fulfil this requirement and to ensure retention, careful testing of vocabulary skills is essential. Rawson and Dunlosky (2011) concluded that testing improves learning and retention and therefore has implications for teachers' and students' achievement. Well-set examination aims and objectives can thus help ensure that students attain the maximum vocabulary range and comprehension needed to be successful learners and professionals, because after graduation, in addition to their subject knowledge, they will be tested on academic vocabulary in all manner of high-stakes and criterion-specific tests. It is therefore important to analyze semester examinations from the vocabulary aspect to ensure that students are acquiring the desired level of competence. The purpose of vocabulary assessment is to "monitor the learner's progress in vocabulary learning and to assess how adequate their vocabulary knowledge is to meet their communication needs" (Read, 2000, p. 2).
In Pakistan, English is taught as a compulsory subject at the undergraduate level in all universities as per HEC rules and regulations. Still, students are not efficient in their spontaneous, context-appropriate use of vocabulary. Recent studies reveal various factors that contribute to students' low English performance. Hassan (2019) found that one of the most prominent concerns in Pakistani English classrooms is a lack of vocabulary. Moreover, Berne and Blachowicz (2008) identified major issues in L2 vocabulary teaching which make teachers vulnerable because they are not "confident about best practice in vocabulary instruction and at times don't know where to begin to form an instructional emphasis on word learning" (p. 315). Consequently, the tests administered follow the same lore, probing reading and writing test items in general and grammatical items in particular, thereby marginalizing learners' overall English skills. These factors combine to leave students incompetent in all four skills and constructs of the English language, and especially weak in acquiring the desired communicative competence, which is a prerequisite for professional growth.
It is significant to investigate the focus and nature of test items from a vocabulary point of view to identify gaps and provide valid, research-based suggestions for developing better vocabulary testing and, ultimately, teaching. This research analyzes the vocabulary construct of English Compulsory examination papers at the tertiary level, which includes evaluating vocabulary test items and analyzing whether the objectives of the respective syllabus are reflected in the test items. The study may help signal vocabulary-related gaps in test papers and the corresponding objectives, providing guidelines for the development and moderation of vocabulary test items. It may also help in revising the aims, objectives, and content to include vocabulary elements more clearly.
To gain insight into how vocabulary is evaluated in various institutions in Karachi, the study compares and contrasts the examinations of public and private sector universities.
Research Questions
1. How is vocabulary assessed in Public and Private Sector Universities of Karachi?
2. How well do the tests measure the vocabulary of the students they claim to check?
3. What is the standard criterion for testing the vocabulary skills of students at the Undergraduate level in Karachi?
Literature Review
Words serve as a vehicle for effective communication and a springboard for all the macro skills and constructs of language; as Wilkins wrote, "while without grammar very little can be conveyed, without vocabulary, nothing can be conveyed" (1972, pp. 111–112). Students with low vocabulary knowledge exhibit weak academic performance across courses, mainly because they are unable to comprehend the content (Schmitt, 2010). Therefore, according to Espinosa (2003), one of the major responsibilities of a language teacher is to teach and test vocabulary to make students better learners.
Vocabulary is a multifaceted construct (Nation, 2001), which makes it difficult to capture all of its dimensions in a single test (Schmitt et al., 2001). Vocabulary, in general, can be defined as an inventory of words with their associated meanings (Read, 1993), but defining it from the perspectives of classroom teaching and applied linguistics broadens its scope. Vocabulary as words can be categorized into content words (nouns, verbs, adjectives, adverbs) and function words (auxiliaries, conjunctions, pronouns, articles). If we limit our focus to teaching students the knowledge of content words only, it can be subcategorized further into the lemma (a headword and its inflected forms), the word family (a headword with its inflected and closely derived forms), and homographs (words that share a spelling but differ in meaning). Classifying the dimensions of vocabulary further, it includes what Nattinger and DeCarrico (1992) called the "lexical phrase," comprising polywords, institutionalized expressions, phrasal constraints, and sentence builders; focusing on its position and acquisition, vocabulary can also be defined in receptive and productive terms and as the functional vehicle of oral and written communication (NICHD, 2000).
The teaching methodology of a given era has a significant impact on the testing of English vocabulary. Under the grammar translation method, the testing of vocabulary was highly subjective in nature because of the lack of training in testing (Spolsky, 1995). The audiolingual method brought objectivity to testing and encouraged discrete-point testing of vocabulary (Lado, 1961), which developed further and was conceptualized as integrative testing of vocabulary (Spolsky, 1995). Lastly, with the advent of communicative language teaching, the emphasis of vocabulary testing shifted towards achieving communicative competence and prioritizing learners' needs. This has transformed the development of vocabulary assessment tools, which now stress realistic tasks and reliable tests.
Researchers' growing focus on incorporating vocabulary into the curriculum led to the identification of its parameters of measurement. A major theoretical distinction was made by Meara (1996) and Read (2000), who proposed testing second language vocabulary by evaluating the "breadth" and "depth" of vocabulary knowledge, where breadth refers to size and depth is associated with quality. Another framework, described by Nation (2001), classifies second language vocabulary assessment into "receptive" and "productive" vocabulary: receptive vocabulary is linked with the recognition of word forms and meanings in reading and listening, while productive vocabulary concerns the functional use of vocabulary in speaking and writing. Read (2000) also proposed three continuums, namely "discrete–embedded," "selective–comprehensive," and "context-independent–context-dependent," for characterizing vocabulary assessments.
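To make the breadth (size) notion concrete: size measures typically sample items from word-frequency bands and extrapolate from the proportion answered correctly in each band. The sketch below illustrates that extrapolation logic only; the band labels, band size, and scores are hypothetical and are not drawn from the papers analysed in this study.

```python
# Illustrative sketch of breadth (vocabulary size) estimation by band sampling.
# All figures are hypothetical; real instruments use carefully sampled items
# from corpus-based frequency lists.

def estimate_vocab_size(band_results: dict, band_size: int = 1000) -> int:
    """band_results maps a frequency-band label to (correct, attempted) counts;
    each band is assumed to cover `band_size` word families."""
    estimate = 0
    for correct, attempted in band_results.values():
        # Proportion correct in the sample, scaled to the whole band.
        estimate += round((correct / attempted) * band_size)
    return estimate

# A learner scoring 9/10 on first-1000 items and 6/10 on second-1000 items
# is credited with roughly 900 + 600 = 1,500 word families.
print(estimate_vocab_size({"1st 1000": (9, 10), "2nd 1000": (6, 10)}))
```

Depth, by contrast, cannot be extrapolated this way: it requires probing individual words for collocations, derivatives, and shades of meaning.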
"Embedded vocabulary tests which measure vocabulary, forming part of the assessment of some other larger construct, such as written compositions. Amongst others, we would like to highlight the following assessing instruments: Lexical Frequency Profile (Laufer & Nation, 1995) and Word Smith Tools (Scott 1997)"
Schmitt (2002) suggested that the focus of test items should be based on learners' needs and vocabulary, with high-frequency words first.
Several researchers have analyzed vocabulary tests and distilled overall best practices for formulating them (Meara, 1992; Read, 1997; Schmitt, 1994, 2000).
Kremmel and Schmitt (2016) examined whether a "learned" word can also be classified as "known," where knowing entails productive use and deeper knowledge of the word. They studied four form–meaning item formats (multiple matching, multiple-choice, and two types of cloze) and the interpretation of their scores, followed by an interview, to understand how these item formats function in measuring form–meaning recall and knowledge of collocations and derivatives, and how warranted they are as checks of a testee's mastery of linguistic knowledge. They tested 99 Austrian EFL learners in their penultimate and final years of secondary education on thirty-six target words (Nation & Webb, 2011). The interview was scored on four dichotomies to check how accurately each test item measured knowledge of the word's meaning. Cases match if the candidate received a point for both the vocabulary test question and the corresponding meaning recall measure (A), or if neither was correctly answered (D). If a candidate received a point for the vocabulary test item but did not demonstrate sufficient knowledge of the word during the interview, the test result overstates the individual's word recall knowledge, an instance of overestimation (B); the reverse case is underestimation (C). The study concluded that "knowing" a word does not ensure the ability to employ it: all four item formats were poor indicators of a learner's production skills, and the test items were not warranted to provide accurate information about L2 learners' ability to employ the target vocabulary in reading.
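The four-way match scheme described above is, in effect, a contingency classification of each test response against the interview criterion. The sketch below merely formalises those labels; the function and the sample data are illustrative and are not Kremmel and Schmitt's materials.

```python
# Hedged sketch of the four-way match scheme described above:
# A = both measures correct, D = both wrong (test and criterion agree);
# B = test credit without interview evidence (overestimation);
# C = interview evidence without test credit (underestimation).
def classify(test_correct: bool, interview_correct: bool) -> str:
    if test_correct and interview_correct:
        return "A: match (both correct)"
    if test_correct and not interview_correct:
        return "B: overestimation by the test"
    if not test_correct and interview_correct:
        return "C: underestimation by the test"
    return "D: match (both incorrect)"

# The share of mismatches (B + C) across items gauges how far an item
# format's scores can be trusted as measures of meaning recall.
responses = [(True, True), (True, False), (False, False), (True, False)]
mismatch = sum(classify(t, i)[0] in "BC" for t, i in responses) / len(responses)
print(f"{mismatch:.0%} of items over- or under-estimated knowledge")
```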
It is therefore vitally important to identify the target vocabulary's dimensions and test items before incorporating vocabulary into a test. According to Catalán and Espinosa (2005), "The dimensions of vocabulary involve knowledge of at least the following aspects: 'receptive and productive knowledge of the word,' 'the word grammar, pronunciation, and spelling,' 'word morphology,' 'word collocation,' 'syntactic restrictions on the word,' 'word frequency,' 'word context,' 'semantic and syntactic relationships of the word with other words,' 'conceptual meaning of the word,' 'idiomatic expressions'" (p. 177). The major hurdle in assessing vocabulary is apprehending this diverse nature; to address it, Laufer (1996) suggested a "multiple test approach" in which each test captures a single aspect of the vocabulary construct.
Vocabulary learning has always been an important component of academic success: "These four basic language skills are much affected by the learners' vocabulary" (Bhatti, Arshad & Mukhtar, 2020). Most attempts to test students' vocabulary knowledge have centred on evaluating vocabulary size (Nation, 1990), which is a benchmark for reading comprehension (Coady, 1997) and successful writing (Astika, 1993).
In the Pakistani context, English language test items are either based on rote learning or focused on providing tricks and tips to score better without improving students' skills (Raza, 2009). The same applies to vocabulary assessment. Traditionally, the grammatical construct is treated as the vital construct to be tested (Khan & Ali, 2018), or test items are confined to literal-level reading and writing skills based on the cramming of repeated topics, resulting in tests that are highly subject to the examiner's judgment (Ishaq, 2017). Testing vocabulary has never been the focus, though it is part of the course objectives. This research study is distinctive in that it evaluates English language semester examination papers from a vocabulary perspective, which has not previously been done. It gives a comparative account of prestigious universities of Karachi and their assessment practices, providing an overview of what is tested in the name of vocabulary and at what level. The discussion above highlights that evaluating the vocabulary construct in examination papers is a critical issue in second language assessment, essential for better understanding classroom assessment and language testing research. This study analyses vocabulary-based test items in English Compulsory papers and their respective course objectives in public and private sector universities to check their content validity and to understand how vocabulary is tested in the universities of Karachi.
Methodology
To conduct the research, documents were analyzed from the vocabulary perspective only, using the document analysis technique within a qualitative approach. This methodology was selected purposely to understand and describe the function of various vocabulary-based test items, to explain the findings insightfully, and to draw conclusions by descriptively comparing the examination papers of public and private sector universities. Course objectives and semester examination papers of first-year English Compulsory courses (BS, BE, and BA) from public and private sector universities were collected for analysis. Purposive sampling was initially chosen, which entailed collecting semester examination papers of the English Compulsory course and their respective academic aims and objectives from as many public and private sector universities of Karachi as possible to make the findings generalizable. Unfortunately, most universities have strict policies about sharing their papers and syllabus outlines, even for research purposes; therefore, the strategy was changed from purposive to convenience sampling. I successfully collected nine papers from three public sector universities and six from four private sector universities. Since I could not collect the English Compulsory semester examination papers of all the universities located in Karachi, the research findings cannot be over-generalized.
I have protected the confidentiality and privacy of university records and reputations in compliance with the Code of Moral and Ethical Practice for Scientific Study.
The current study compares and contrasts the public and private universities' examination papers and course objectives and examines content validity by identifying vocabulary-related themes in the documents through thematic analysis; these themes were coded and analyzed to answer the research questions.
Due to practical constraints, I could not address other forms of validity and reliability, because students' transcripts, score sheets, and access to teachers and students were unavailable amid the uncertainty caused by the spread of COVID-19.
Data Analysis
The role of teaching in determining vocabulary cannot be overlooked. Since teachers choose which pedagogical approaches to utilize depending on their knowledge and views, their conventional beliefs have a significant impact on how they teach in the classroom (Borg, 2003), and this extends to the assessment of vocabulary. Coady (1997) reported that most second language teachers were not given due exposure to vocabulary during their own studies and training; thus, traditionally, they continue to neglect vocabulary teaching, and vocabulary assessment with it. The vocabulary test designs follow the traditional method of assessing vocabulary from a discrete point of view or in the form of items embedded in a text or writing task, which more or less checks the reading and writing skills of the learner but is in no case a true representative of vocabulary testing. The vocabulary items from private and public sector universities are compared and contrasted below to check content validity. The following analysis describes the problems in vocabulary test items in both private and public sector universities.
Discrete-Point Test Items
The test items designed in private sector universities follow discrete-point vocabulary testing, where items are presented in isolation and students have to guess or match the most closely related meaning, synonym, or definition. In contrast, no discrete-point testing is seen in public sector universities' papers. Researchers (McDaniel & Masson, 1985) have provided evidence that checking recognition of vocabulary is important for strengthening words in memory; this makes it a good teaching tool, where the teacher helps students reinforce the target vocabulary, but not a good choice for testing vocabulary, where the objective is to ensure that the student is capable of producing this knowledge in real-life situations. In line with the theoretical model of Read (2000), such limited assessment cannot reveal the true breadth or depth of vocabulary either. Additionally, the student may know only one meaning of a word. Moreover, the distractors set in these test items ensure that only one answer can be chosen.
Example
Private Sector University
Abbreviation
a. Shorten
b. Cure
c. Complete
d. Mitigate
e. Distance.
In this item, the distractors are set in a way that hints at the correct answer. Furthermore, such items may test knowledge of the distractors rather than the target word. ESL teachers have criticized this method because the words selected are often uncommon and likely to serve no purposeful use in students' academic studies. Pike (1953) suggested "words in context," a more appropriate method of testing in terms of face validity and reliability that is used widely in foreign language tests (TOEFL, ACTFL, etc.).
Test Items Focusing on the Abstract Part of Productive Vocabulary
Both public and private sector examination papers contain descriptive questions. Public sector universities included essay questions only, which are not formally constructed and are outdated; moreover, some tasks required students only to brainstorm, which is not an authentic testing item. Private sector universities included various test items, for instance picture descriptions, well-defined and well-formed essay questions with clear instructions, and innovative letter-writing tasks, which are context-sensitive and suited to students' level and interests. Productive writing questions are added to test students on all language constructs; for instance, an essay or letter-writing question is added to a test that claims to check the learner's productive vocabulary knowledge, which loosely follows Read's (2000) theoretical foundation of embedded vocabulary assessment. However, there are two practical difficulties in testing vocabulary this way. Firstly, such writing contains a high frequency of function words and does not give a clear picture of the range of vocabulary available to the learner. Secondly, a task might be content-specific, which does not allow students to display their acquired vocabulary (e.g., Laufer & Nation, 1999). Assigning weightage to multiple forms of words (function, content, idiom, collocation, synonym, antonym, word class) is another issue. Finally, research has shown that if a student does not know a word, he does not attempt to use it or avoids it in writing altogether, leading to another question: how do we know which words a test-taker avoided because of flawed or incomplete knowledge?
Example
Private Sector University
Letter Writing
Write a letter on any ONE of the following situations. Use Full Block format.
a) Write a letter to the Municipal Officer demanding a garbage bin in your area.
b) Write a letter to the Director of your college inviting him to the seminar arranged by your class on Artificial Intelligence at the university premises.
Example
Public Sector University
Write a paragraph on any one of the following topics. Also, do the brainstorming.
1. My Aim in Life
2. My Favorite Book
3. Problems of Citizens in a Big City
Similarly, one task focused on testing the integrative vocabulary construct in writing but was restricted to evaluating the functional use of linking words only; this again is a good teaching tool but not an appropriate testing item. Moreover, the linking words were also provided, making the item too weak even to test a learner's recall skills.
No Explicit Instructions for Vocabulary
Every paper analyzed is devoid of clear instructions regarding the use of vocabulary in descriptive questions. In some papers, the questions refer to a grading grid focusing on organization, grammar, and spelling; still, not a single paper gives directions about what is expected in terms of vocabulary. Omitting vocabulary instructions has become the norm, which is why vocabulary has lost its significance in universities' examination papers altogether.
Example
Private Sector University
Q: Write an essay of 250-280 words ONLY on one of the following topics. Be particular about the tenses as the assessment is based on correct usage of grammatical structures. (10 Marks)
1. Parents have no right to control the lives of their children
2. Are contemporary people too much reliant on technology?
Testing Form and Meaning Using Reading Comprehension
Most of the examination papers of public sector universities tested vocabulary in reading comprehension; in contrast, only one private sector university paper tested vocabulary integrated in reading. The form–meaning format was the cornerstone of these items: words in the passage were highlighted, and students had to recognise and then match the meaning, pick the proper meaning from the alternatives provided, or explain the meaning of the term. Such recognition formats are inconsistent with actual reading, since no book offers several meanings for (unknown) terms in the text to pick from (Nation & Webb, 2011). Unfortunately, none of the test items investigated proved robust enough to be interpreted as representing meaning recall knowledge in the sense of the theoretical models of either Nation (2001) or Read (2000). Moreover, the texts used are either very easy or do not provide enough contextual cues.
Some reading tests also presented items eliciting responses such as writing out the highlighted words in a given text. Two problems arise with such items, which can be expressed through the following questions: are two words in an entire text representative of a learner's entire vocabulary range and production? And do those two words ensure comprehension of the entire text? This pattern permits only a very limited sampling of a learner's overall vocabulary.
Finally, some test items are based on locating the prefixes and suffixes in particular paragraphs, which is a poor way to test students' understanding of affixation at the undergraduate level. It might be a good teaching tool at the beginner level, but it is not a good choice for testing vocabulary at the undergraduate level. The tasks set in tests must be challenging yet fairly scorable; if any component of this paradigm is missing, the results will be invalid and unreliable.
Example
Find out five Prefixes and Suffixes each from the Passage
The above test item raises some basic questions: what benefit does a student gain in professional life by identifying suffixes and prefixes? Will he be able to apply this theoretical knowledge in real-life situations? The item is convenient for awarding marks but is not recommended for testing vocabulary knowledge.
Identification of Word Class
Another frequently occurring item in the vocabulary evaluation found in public sector universities' examination papers is the recognition of word class. Comparable items appear in private sector university papers, but these also required productive responses. The items are formatted so that phrases are underlined and students merely have to identify the word class. Identification of word class can be a good teaching activity at the school level; still, it is not acceptable to lower the standard to the beginner level in a university-level final examination paper.
Example
Public Sector
1. What a beautiful picture it is.
2. The doctor says it is a hopeless case.
3. I have not seen him since he was a child.
Example
Public Sector
Name the type of underlined phrases. Also, find its head.
1. Can we rely on it?
2. He was standing by the door
3. A baby gave the elephant the banana
The testing items stated above are highly mechanical. Testing such theoretical knowledge is inappropriate and cannot guarantee receptive and productive success in target language use.
Testing Outdated Idioms
Two public sector examination papers and only one private university paper included test questions related to idioms. One task was based on matching the idiom with its correct meaning; another involved matching the correct idiom with appropriate sentences; the third took the form of blanks and required students to fill in the appropriate idioms in given sentences. The idioms used were outdated and are still part of the Intermediate curriculum. Moreover, the idioms selected do not serve the major goal of vocabulary testing, i.e., the achievement of communicative competence.
Example
Public Sector University
I finished my assignment just an hour before its submission.
a) Goes with the flow
b) Red tape
c) Feather in one's cap
d) At the eleventh hour
Discrepancy between Course Objectives and Test Items
Clear, precise, vocabulary-oriented course objectives can draw teachers' attention to vocabulary complexities they do not know. None of the universities' course objectives are descriptive and comprehensive enough to set proper benchmarks or to guide the learning and testing process resolutely. The course objectives analysed are not research-driven and are old-fashioned: they do not explicitly build on learners' needs but instead focus on teaching integrative reading vocabulary through guessing and looking up words in dictionaries, without considering the size or depth of the learner's vocabulary or setting a target of, say, at least 800-900 words per semester through multiple means. Following is an example of such a course objective:
Example
Public Sector University
Vocabulary building skills
Students should be able to:
a) Guess the meanings of unfamiliar words using contextual cues
b) Use word formation rules for enhancing vocabulary
c) Use the dictionary to find out the meaning of unfamiliar words
Another discrepancy in the vocabulary-oriented course objectives is that they are built on the discrete-point framework, which results in unreliable and invalid teaching and testing methods. For instance, one course objective is designed to encourage new vocabulary items but fails to specify how, what sort of vocabulary, which form or function of vocabulary, and for what purpose. Following is an example of the course objective just discussed:
Example
Public Sector University
Vocabulary Building
a) Basic morphology
b) Prefixes and suffixes
c) Homonyms and homophones
d) Use of idioms
e) Word formation
Moreover, the syllabus implicitly aims to enable students to guess the meaning of reading content at a lower-intermediate level, which is an irony in itself: at the university level we should expect at least an upper-intermediate level, or at least strive for it, because students take no more than one English Compulsory course (in some cases two) in their undergraduate degree programme. If the target set is too low, how will they come to know the gaps in their vocabulary knowledge and improve it for future growth?
Example
Public Sector University
Read at the literal level (guessing the meaning through co-text) and between the lines through lower intermediate-level texts
Moreover, in one university's curriculum the course objective is as ambiguously specified as the course content, which leaves everything to the teacher's judgment.
Example
Public Sector University
Use new Vocabulary Items
Another university describes vocabulary in its course objectives as subject-specific, but the test executed does not follow this; instead, it is based on random discrete-type vocabulary testing.
Example
Private Sector University
Display Knowledge of Basic Medical Vocabulary in Daily Use
Here the learning goal is contextualised teaching and testing of medical vocabulary, but the test checked students' general vocabulary in a discrete fashion.
Another private university's objective focused on the functional use of vocabulary for communication, yet the test items were designed to check only the cohesive linking devices used in an argumentative essay.
Example
Private Sector University
Demonstrate the Usage of Appropriate Lexis and Rubrics for Formal Correspondence
Table 1 below describes and summarizes the major findings in the assessment of vocabulary in public and private sector universities of Karachi.

| Examination paper | Vocabulary weightage | Types of vocabulary tested | Vocabulary instructions in descriptive questions | Explicit vocabulary course objectives | Receptive vocabulary tested | Productive vocabulary tested |
|---|---|---|---|---|---|---|
| Public sector university Paper 1 | 66% | Word semantics; idioms | Missing | Missing | Present | Missing |
| Public sector university Paper 2 | 64% | Semantic and syntactic relationships | Missing | Missing | Missing | Present |
| Public sector university Paper 3 | 20% | Word semantics | Missing | Present | Present | Missing |
| Public sector university Paper 4 | 15% | Word semantics | Missing | Present | Present | Missing |
| Public sector university Paper 5 | 20% | Word grammar; word semantics | Missing | Present | Present | Missing |
| Public sector university Paper 6 | 25% | Word grammar; word semantics; semantic and syntactic relationships | Missing | Present | Present | Present |
| Public sector university Paper 7 | 12% | Word semantics; semantic and syntactic relationships | Missing | Present | Present | Missing |
| Public sector university Paper 8 | 10% | Word grammar | Missing | Present | Present | Missing |
| Public sector university Paper 9 | 20% | Word morphology; idioms | Missing | Present | Present | Present |
| Private sector university Paper 1 | 10% | Word semantics | Missing | Missing | Present | Missing |
| Private sector university Paper 2 | 80% | Semantic and syntactic relationships | Missing | Missing | Missing | Present |
| Private sector university Paper 3 | 25% | Word grammar; semantic and syntactic relationships | Missing | Present | Present | Present |
| Private sector university Paper 4 | 40% | Word grammar; semantic and syntactic relationships | Missing | Present | Present | Present |
| Private sector university Paper 5 | 35% | Word semantics; idioms; semantic and syntactic relationships | Missing | Present | Present | Present |
| Private sector university Paper 6 | 25% | Semantic and syntactic relationships | Missing | Missing | Missing | Present |

Candidate types also examined but not found tested in any paper: pronunciation, spelling, word collocation, syntactic restrictions on the word, and word frequency.
Discussion
The findings reveal that public sector universities have emphasized vocabulary teaching and testing, as is evident from their course objectives and examination papers. In contrast, most of the private sector universities have not even implicitly mentioned the construct of vocabulary in their course objectives. Some public universities attempted to test at least three types of vocabulary in their papers, whereas private sector universities either tested one type or can at best be credited with checking vocabulary within writing, which would rarely be the case. Only one private sector university out of six evaluated three types.
On the one hand, public sector universities emphasized testing receptive (passive) knowledge of vocabulary, with no evaluation of the productive aspect except in one university's examination paper, or with production perhaps assumed to be covered in writing skills, which is not evident from the test items' instructions. On the other hand, among private sector university papers, four out of six evaluated receptive (passive) knowledge only, all but one tested production, and one checked both receptive and productive knowledge. Though most private universities attained higher percentages in the table, this is because I counted integrative vocabulary testing within writing tasks even where it was not evident from the test items' instructions; otherwise, these papers did not test vocabulary at all. The same holds for the public sector universities: in both sectors the vocabulary percentage rises only because of writing tasks, and otherwise no effort to test knowledge of vocabulary was observed.
This raises damning questions: why is vocabulary tested in the same way our predecessors were assessed? Why have other lexical dimensions, such as collocation, word orthography, and specific registers, never become part of teaching and testing? Why are test items restricted to matching, multiple-choice, or integrative formats only? Are these papers a true reflection of students' communicative competence with respect to vocabulary?
Papers at the university level have long gone unmoderated by experienced teachers with expertise in the field. Another reason for this pattern may be the lack of checks and balances. The study thus supports the findings of Khan and Ali (2018), who concluded that most teachers at the university level hold BS or Master's degrees in English; they are undoubtedly qualified but not trained, and therefore cannot differentiate between teaching and testing, which leads to problems like those mentioned above. Assessment is a whole field in itself, and Pakistani universities give it very little importance, which results in poorly constructed test items and archaic testing that creates an impression of good scoring but not of good learning and acquisition. The consequences surface in graduates' professional lives in both sectors, as is evident from high-stakes scholarship tests (GRE, TOEFL, IELTS) and civil services tests (CSS, FPSC, SPSC). Most students struggle to qualify in these exams, and specifically with the English papers when it comes to vocabulary. Only students with vast exposure to the language (good schooling, extensive reading/listening, etc.) ace these exams easily; the rest attempt them multiple times and, instead of focusing on their majors, find themselves stuck with language improvement that should already have been accomplished after 16 years of English language teaching.
The above discussion shows that something is amiss at the assessment level; if students are tested rigorously at the university level, they will be saved from such future complications.
Both the test's objective and the type of information the test creator intends to elicit should be taken into consideration when choosing an item format (Kremmel & Schmitt, 2016). According to HEC policy, the major purpose of the English Compulsory course at the university level is to develop communicative competence through functional teaching of the English language and to develop academically competent students.
Of the 15 semester examination papers and their corresponding course objectives that I collected, only four universities' course objectives omit the vocabulary element, which underlines that vocabulary is regarded as an essential aspect of the undergraduate curriculum. However, the course objectives as designed do not serve this motive purposefully and cannot clearly direct the test constructor about what should be assessed. Most of the test items in the universities' papers either check receptive skills or test production skills poorly, restricted to the sentence level. This leads to another pertinent question: even if students' receptive skills are strengthening, does that entail that they are successful readers and listeners? That is surely not the case in Pakistan, where listening is not tested at all and reading is rarely checked beyond the literal level. Schmitt (2014) highlights a further issue: when students answer an item based on passive vocabulary correctly, we still do not know how deeply (collocations, word family, functional use) they understand that item. Moreover, the objectives mostly emphasize using new words and enhancing vocabulary rather than testing an appropriate vocabulary size, yet the test items consist of recognition-based questions on random words.
This research reveals that the criterion or norm set in universities' semester examination papers is based on testing students' recognition and recall skills, with the items themselves facilitating answers by providing hints. The hints come in the form of distractors, repetition of the same items, guessing, or matching already-provided meanings; moreover, test items are deliberately set to be easily guessed. Surprisingly, no more than two vocabulary dimensions are tested, and even those imperfectly: at a level suitable at best for beginners and in no scenario suitable for testing proficiency and accuracy at the undergraduate level. Another finding is that receptive knowledge is checked mostly through identifying meanings, word classes, and idioms, while the productive level is checked through sentence making with content and function words and through essays. No emphasis on checking the size, breadth, or quality of vocabulary was found anywhere at the university level.
The results of this study support Kremmel and Schmitt's (2016) conclusion that form–meaning item formats should not be perceived as giving precise information about a learner's competence to use the target vocabulary in reading, or about their understanding of derivative forms or collocations.
Conclusion and Recommendation
In light of the above results, vocabulary is not being tested as it should be, for several reasons. Firstly, the objectives are not explicitly designed to orient teachers about what to teach in vocabulary, which leaves everything to the teacher's judgment and results in random choices in the selection and design of tasks and test items. Secondly, teachers lack training and experience in assessing this construct and follow the same lore through which they were themselves taught, without updating their knowledge of current trends in teaching and assessment. Finally, instructors are not checked or followed up by the HOD or senior instructors to ensure the standard of the evaluations they deliver, which leads to design imperfections.
I want to propose some solutions to combat this situation. Teacher training sessions should be provided within institutions to train and guide teachers. For checks and follow-up, a panel of experienced teachers can be formed within the institution to ensure the quality, validity, and reliability of every paper. Initiatives and short-term plans can be made to motivate teachers to seek professional training and workshops.
There also appears to be a critical need to modify programmes to correspond with the evolving requirements of learners. To do this, I suggest a backward-design model of curriculum development, where the focus shifts to the teacher: what teachers should do to deepen their understanding of students, of the ways people learn, and of the relationships between teachers, their students, students' families, and the broader community, rather than starting from what students should learn. Course outlines and objectives should be revised in line with current global research trends, and teachers trained accordingly. Moreover, when planning tests, target vocabulary items can be identified according to learners' level and needs and contextualized so as to avoid random selection.
On a broader level, Pakistan should follow in the footsteps of developed nations, where individuals are not allowed to practice teaching unless certified as teachers by a specialized institute, no matter how high a degree they hold. Accredited teacher training should be established as a mandatory aspect of educational policy at all levels to assure educational quality and authenticity. This would add credibility to the teaching profession in Pakistan, where many people take teaching for granted as a part-time job with little responsibility and short hours. Furthermore, a teacher competencies framework can be employed to check teachers' performance throughout the semester.
To make vocabulary a key element of teaching and evaluation, clear criteria must be established, the content must be rigorously examined with those criteria in mind, and the teaching style must be chosen to accomplish the objective efficiently. At a higher level, teachers should encourage and create an environment that involves students in educational activities instead of the infotainment of social media. Students can be motivated to acquire vocabulary outside the classroom through intensive reading, vocabulary clubs, and innovative, creative projects related to current trends in society.
For future research, reliability and validity in vocabulary testing remain unexplored areas in the Pakistani context; investigating them through classroom observations and teachers' and students' perceptions would yield fruitful insights for the field.
References
- Astika, G. (1993). Analytical Assessments of Foreign Students’ Writing. RELC Journal, 24(1), 61–70.
- Berne, J. I., & Blachowicz, C. L. Z. (2008). What reading teachers say about vocabulary instruction: Voices from the classroom. The Reading Teacher, 62(4), 314–323.
- Catalán, R., & Espinosa, S. M. (2005). Promoting English vocabulary research in primary and secondary education: test review and test selection criteria. ES: Revista de Filología Inglesa, 26, 171–188.
- Coady, J. (1997). L2 vocabulary acquisition through extensive reading. In J. Coady & T. Huckin (Eds.), Second language vocabulary acquisition (pp. 225–237). Cambridge University Press.
- Grabe, W. (2009). Reading in a second language: Moving from theory to practice. Cambridge University Press.
- Farooq, M., Uzair-Ul-Hassan, M., & Wahid, S. (2012). Opinion of Second Language Learners about Writing Difficulties in English Language. South Asian Studies, 27(1), 183.
- Khan, K. R., & Ali, S. S. (2018). Ignoring Pragmatic Competence while Testing Grammatical Competence: The Case of ELT in Pakistan. ELF Annual Research Journal 20, 01-22.
- Kremmel, B., & Schmitt, N. (2016). Interpreting Vocabulary Test Scores: What Do Various Item Formats Tell Us About Learners’ Ability to Employ Words? Language Assessment Quarterly, 13(4), 377–392.
- Lado, R. (1964). Language testing: The construction and use of foreign language tests: A teacher's book. McGraw-Hill.
- Laufer, B., & Nation, P. (1995). Vocabulary Size and Use: Lexical Richness in L2 Written Production. Applied Linguistics, 16(3), 307–322.
- McDaniel, M. A., & Masson, M. E. J. (1985). Altering memory representations through retrieval. Journal of Experimental Psychology, Learning, Memory and Cognition, 11(2), 371–385.
- Espinosa, S. M. (2003). Vocabulary: Reviewing trends in EFL/ESL instruction and testing. Odisea, 4, 97–112.
- Nation, I. S. P., & Webb, S. (2011). Researching and analyzing vocabulary. Heinle.
- Nattinger, J. R., & DeCarrico, J. S. (1992). Lexical phrases and language teaching. Oxford University Press.
- Taylor, A. M. (2004). Learning Vocabulary in Another Language I. English for Specific Purposes, 23(1), 87–90
- Raza, W. (2009). English Language Testing in Higher Education of Pakistan. Market Forces, 4(4), 180-188.
- Rawson, K. A., & Dunlosky, J. (2012). When Is Practice Testing Most Effective for Improving the Durability and Efficiency of Student Learning? Educational Psychology Review, 24(3), 419–435
- Read, J. (1993). The development of a new measure of L2 vocabulary knowledge. Language Testing, 10(3), 355–371.
- Read, J. A. S. (2000). Assessing vocabulary. Cambridge University Press.
- Schmitt, N. (2002). Vocabulary in Language Teaching. TESOL Quarterly, 36(2), 235.
- Schmitt, N., Schmitt, D., & Clapham, C. (2001). Developing and exploring the behaviour of two new versions of the Vocabulary Levels Test. Language Testing, 18(1), 55–88.
- Schmitt, N. (2010). Researching Vocabulary: A Vocabulary Research Manual.
- Schmitt, N. (2014). Size and Depth of Vocabulary Knowledge: What the Research Shows. Language Learning, 64(4), 913–951.
- Smith, M. N. K. (1941). Measurement of the size of general English vocabulary through the elementary grades and high school. Genetic Psychology Monographs, 24, 311–345.
- Spolsky, B. (1995). Measured Words: The Development of Objective Language Testing.
- Wilkins, D. A. (1972). Linguistics in language teaching. Edward Arnold.
Cite this article
APA: Anees, Y. (2023). A Comparative Study of Vocabulary Assessment in Public and Private Sector Universities in Karachi, Pakistan. Global Language Review, VIII(II), 178-193. https://doi.org/10.31703/glr.2023(VIII-II).16