
Educational Productivity

Educational productivity is the improvement of student outcomes with little or no additional financial resources, or the maintenance of a consistent level of student performance at a lower level of spending. Educational productivity is grounded in effectiveness: the linkage between student outcomes and the level and use of financial resources in schools. Production functions are concerned with how money is related to student learning and lifetime earnings. Other approaches include cost functions, data envelopment analysis, and studies of the impact of smaller class sizes on student learning. Although there has been extensive research on educational production functions, there is still considerable disagreement among researchers as to whether a statistical link can be found between student outcomes and money. It is generally agreed, however, that the single largest expenditure in the public school system is spending on teachers.

Early production-function research, modeled on classical economic theory, tried to correlate a set of educational “inputs” to a single “output.” Most of these studies were inconclusive. Because of the complexity of the schooling process and factors (like child poverty) outside schools’ control, it has been difficult to isolate statistically significant one-to-one correlations between inputs and student learning.

The most common outcomes measured in such studies are standardized test results, graduation rates, dropout rates, college attendance patterns, and labor-market outcomes. Inputs usually include per-pupil expenditures; student-teacher ratios; teacher education, experience, and salary; school facilities; and administrative factors (Lawrence Picus 1997). The most famous production-function study was the U.S. Department of Education’s “Coleman Report.” This massive survey of 600,000 students in 3,000 schools concluded that socioeconomic background influenced student success more than various school and teacher characteristics (Picus 1997).

Another line of research culminated in Eric Hanushek’s 1989 study, which analyzed the results of 187 production-function studies published during the previous twenty years. Using a simple vote-counting method to compare their findings, Hanushek found no systematic, positive relationship between student achievement and seven common inputs.
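
To make the vote-counting approach concrete, the short Python sketch below tallies hypothetical study results by the sign and statistical significance of each input’s estimated effect. The study records are invented for illustration and are not Hanushek’s data.

# A minimal vote-counting tally. Every record below is hypothetical.
from collections import Counter

# Each tuple: (input examined, sign of the estimated effect, statistically significant?)
study_results = [
    ("per-pupil spending", "+", True),
    ("per-pupil spending", "+", False),
    ("per-pupil spending", "-", False),
    ("teacher experience", "+", True),
    ("teacher experience", "-", True),
    ("class size", "-", False),
]

def vote_count(results):
    """Tally studies into positive-significant, negative-significant,
    and insignificant bins for each input."""
    tally = {}
    for inp, sign, significant in results:
        bins = tally.setdefault(inp, Counter())
        if not significant:
            bins["insignificant"] += 1
        elif sign == "+":
            bins["positive and significant"] += 1
        else:
            bins["negative and significant"] += 1
    return tally

for inp, bins in vote_count(study_results).items():
    print(inp, dict(bins))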

Hanushek’s findings have been challenged by recent studies using more sophisticated research techniques. When Larry Hedges (1994) and associates reanalyzed Hanushek’s syntheses using meta-analysis, they discovered that a $500 (roughly 10 percent) increase in average spending per pupil would significantly increase student achievement. Likewise, Faith Crampton’s comprehensive analysis (1995) of inputs affecting achievement in New York State schools found that expenditures seemed to matter when they bought smaller classes and more experienced, highly educated teachers.
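
The reanalysis described above rests on meta-analytic pooling rather than simple vote counting. The sketch below illustrates the general idea with an inverse-variance weighted average of effect sizes; the numbers are invented and do not come from Hedges’ work.

# A minimal inverse-variance weighted meta-analysis. The effect sizes and
# standard errors below are invented for illustration only.
import math

# (estimated effect of spending on achievement, standard error) from hypothetical studies
effects = [(0.12, 0.05), (0.03, 0.04), (0.20, 0.10), (-0.02, 0.06)]

weights = [1.0 / se ** 2 for _, se in effects]        # precision weights
pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
z = pooled / pooled_se                                # simple z-test of the pooled effect

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f} (z = {z:.2f})")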

Several approaches are used to measure educational productivity. The first is based on the economic production function, which is used to measure the contribution of individual inputs to the output of some product. This function is:

O = f(K,L)

Where:

O = some measurable output
K = capital, or non-labor inputs to the production process
L = labor
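
One common concrete form of O = f(K, L) is the Cobb-Douglas specification, O = A * K^alpha * L^beta. The brief sketch below, using illustrative parameter values only, shows how output responds when both inputs are scaled.

# Cobb-Douglas production function O = A * K**alpha * L**beta.
# Parameter values are illustrative only.
def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    """Output produced from capital K and labor L."""
    return A * K ** alpha * L ** beta

# With alpha + beta = 1 (constant returns to scale),
# doubling both inputs exactly doubles output.
print(cobb_douglas(100, 50))     # baseline output
print(cobb_douglas(200, 100))    # twice the baseline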

Based on this equation, an education production function was developed. It is as follows:

P = f(R,S,D)

Where:

P = a measure of student performance
R = a measure of resources available to students in the school or district
S = a vector of student characteristics
D = a vector of district and school characteristics (Odden, p. 290)
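
In empirical work this relationship is usually given a specific functional form, most often a linear one estimated by ordinary least squares. The sketch below fits such a specification to synthetic data; the variable names mirror the definitions above, and every figure is invented for illustration.

# A linear education production function estimated by ordinary least squares.
# All data are synthetic; variable names follow the definitions above.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                    # hypothetical districts

R = rng.normal(9000, 1500, n)              # per-pupil resources (dollars)
S = rng.normal(0, 1, n)                    # student-characteristics index
D = rng.normal(0, 1, n)                    # district/school-characteristics index

# "True" relationship used to generate the synthetic performance measure P
P = 200 + 0.005 * R + 15 * S + 5 * D + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), R, S, D])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, P, rcond=None)      # OLS coefficient estimates

for name, b in zip(["intercept", "R", "S", "D"], beta):
    print(f"{name}: {b:.4f}")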

Although these equations are valid in form, there are still many difficulties in applying them. In economic terminology, the effort is to find a production function: a mathematical expression of the relationship between inputs and outputs in education. Researchers have yet to reach agreement on the proper measure of student performance to serve as the outcome indicator. Also, there is no indicator that takes into account students who do not speak English as their first language, or children with disabilities, who often do not do as well in school as other students. In addition, many production functions rely on cross-sectional data, which makes it difficult to determine how changes in resources affect student performance over time.

In his book, Monk uses the production function as the basic element in studying productivity in schools. He defines a production function as a model that conceptually and mathematically links outcomes, inputs, and the processes that transform inputs into outcomes in schools (Monk, p. 316). He notes that production functions are important for improving both technical and allocative efficiencies. However, despite their potential benefits, Monk recognizes the major obstacles that face the creation of production functions for education: none of the three elements, outcomes, inputs, or processes, is easily understood.

In education, outcomes are multiple, jointly produced, and difficult to weigh against one another. The outcomes of education are not all translatable into a standard metric, such as money, which makes it very difficult to give them relative value. A further difficulty with outcomes has to do with the level at which they should be measured. At various times researchers have been interested in outcomes of individual students, classes of students, schools, school districts, states, nations, ethnic groups, age groups, gender groups, and all sorts of other subsets of the population. Monk is aware of the difficulties in dealing with both micro and macro analyses. He concludes that there is no one best approach.

“… it is not always the case that microlevel data are better than macrolevel data. The proper level of analysis depends largely on the nature of the phenomenon being studied. Some phenomena are district rather than school or classroom phenomena and have effects that are felt throughout entire school districts” (p.327).

Monk raises the possibility that there is no production function for education; that no “systematic process governs the transformation of inputs into outcomes” (p. 342).

Many of the same themes are reprised in Monk’s 1992 article. He begins by pointing out the current policy push towards what he calls “outcomes as standards” — the idea that educational outcomes can be improved by setting and enforcing higher standards. He notes that there is a paradox between pessimistic assessments of productivity research in education and the growing drive towards improving productivity, which requires “a nontrivial store of knowledge regarding the ability of state, district, and school officials to enhance productivity” (1992, p.307). Monk’s view is that

“…the underlying model of education productivity is inadequate and has not evolved much…. The weakness of the conceptualization gives rise to much of the policy-making frustration.” (p. 308)

In particular, Monk argues that education productivity research has failed to consider the ways in which production in education is different from other kinds of production. For example, Monk notes that some outcomes of schools are also inputs to later production (e.g., knowledge gained in primary school is an input to learning in secondary school). He points out that many of the most important inputs in education, such as family and peer influences, are not purchased and are difficult to account for. Most relevant to this discussion, he acknowledges that “student time and effort are central ingredients in education production” (p. 315).

Monk then distinguishes between two possible strategies. One assumes that there is no “tractable” production function for education, so that central authorities cannot improve outcomes through a standard set of practices. The second approach retains faith in the existence of a production function, with “the outcomes-as-standards strategy as a new means of gaining insight into the function’s properties” (p. 316). Monk’s discussion of the policy implications of these two alternatives is interesting, but will not be recapitulated here. After examining the alternatives, he concludes:

“…(a) it is premature to conclude that the production function lacks meaning within education contexts; (b) …approaches to the outcomes-as-standards policy-making response have merit and involve increased efforts to monitor and make sense of the experimentation that occurs; and (c) the embrace of the outcomes-as-standards response ought not to crowd out alternative, more deductively driven strategies.” (p. 320)

Monk goes on to advocate the study of productivity through looking at the properties of classrooms. This proposal is based partly on the belief that teachers will use different instructional approaches with different classes of students. He discusses the ways in which these responses by teachers might occur depending on the students, and suggests that teachers may have individual patterns of adjustment that could be studied and defined in terms of their impact.

Picus’s (1997) ongoing study of school-level data collection in four states (California, Minnesota, Florida, and Texas) explores whether such systems offer researchers and practitioners a boundless opportunity or a bottomless pit. The most significant finding: it is as hard to analyze data as it is to obtain them. States set up systems in response to legislative requirements, not researchers’ needs. This situation might be remedied by setting up a licensing system similar to that used by the National Center for Education Statistics (Picus 1997). Researchers’ patience and willingness to develop strong personal relationships with data-production staff are essential.

One limitation on school-level data is the difficulty of comparing data across states (Picus 1997). Some researchers believe equity and effectiveness would be better served if a national system of student-level resource measures could be developed (Berne and Stiefel 1995). Others insist that a student-poverty factor be added to funding analyses (Berne 1995, Consortium 1995, Biddle 1997). Hertert (1995), addressing national equity concerns, sees the NCES and Census Bureau’s jointly developed Common Core of Data (containing standardized, comparable revenue and expenditure data for the nation’s 15,000 districts for 1989-90) as a good first step for measuring interstate disparities.

HOW CAN EDUCATIONAL PRODUCTIVITY BE IMPROVED?

Some research, like Crampton’s study of New York schools, has isolated the types of expenditures that matter in the school-productivity equation. A good example is Harold Wenglinsky’s study (1997), which found that fourth- and eighth-graders’ math achievement was positively associated with lower student-teacher ratios and with expenditures on instruction and school-district administration. Expenditures on facilities, recruitment of highly educated teachers, and school-level administration were not significantly related to achievement.

Another kind of efficiency research explores schools’ resource-allocation practices. David H. Monk (1996) examined how teacher resources are distributed and utilized at various levels of the New York State K-12 system. The study found a 55 percent increase in secondary-level special-education instructional resources between 1983 and 1992, alongside modest increases in allocations of science and math teachers. Of course, legal mandates may prevent an “efficient” distribution of teacher resources across different subject areas.

In another cost-allocation study, Bruce S. Cooper and associates (1994) developed and applied a microfinancial measure, the School Site Allocation Model, to track financial resources through school systems. Test-site data from twenty-five school districts were analyzed to provide indicators of cost ranges required to operate central offices and schools. The model effectively reported schools’ usage of funds by function (administration, operations, staff development, student support, and instruction), level, and type in a “user-friendly” manner.
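
As a rough illustration of this kind of function-level reporting, the sketch below summarizes one hypothetical school site’s spending across the five functions named above; the dollar amounts are invented and are not drawn from Cooper’s data.

# Spending by function for one hypothetical school site. The dollar amounts
# are invented; only the function categories come from the text above.
site_spending = {
    "instruction": 4_200_000,
    "administration": 600_000,
    "operations": 750_000,
    "staff development": 150_000,
    "student support": 300_000,
}

total = sum(site_spending.values())
for function, amount in sorted(site_spending.items(), key=lambda kv: -kv[1]):
    print(f"{function:<18} ${amount:>10,}  ({amount / total:.1%} of total)")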

A third research area takes an organizational-development or restructuring approach to improving school productivity. An example is Levin’s “x-efficiency” study of schools using the Accelerated Schools model to improve efficiency along five dimensions.

Moving to research that falls more closely within the economic domain, it would be highly relevant to study the impact of experimenting with the kinds of changes called for in the literature on motivation cited earlier. Monk’s 1992 article calls for experimentation as a means of studying productivity. We have already noted, however, that most of the measures currently proposed for study have little to do with altering the role of students. Researchers might well want to study the differential impact of settings that are more and less coercive in their treatment of students. The existing differences between early childhood settings and those of older students provide one way to study these differences.

Bibliography

Crampton, Faith E. “Is the Production Function Dead? An Analysis of the Relationship of Educational Inputs on School Outcomes.” A presentation to the Annual Conference of the American Education Finance Association, March 1995.

Odden, Allan, and William Clune. “Improving Educational Productivity and School Finance.” Educational Researcher 24, 9 (December 1995):6-10, 22. EJ 519 250.

Odden, Allan. “Raising Performance Levels Without Increasing Funding.” School Business Affairs 63, 6 (June 1997): 4-12. EJ 547 295.

Picus, Lawrence O. “Does Money Matter in Education? A Policymaker’s Guide.” In Selected Papers in School Finance 1995, edited by William J. Fowler. 19-35. Washington, D.C.: National Center for Education Statistics, 1997. ED 408 691.

Wenglinsky, Harold. When Money Matters. Princeton, NJ: Educational Testing Service, 1997. 44 pages.

Monk, David H. Educational Finance: An Economic Approach. New York: McGraw-Hill, 1990.

Monk, David H. “Education Productivity Research: An Update and Assessment of Its Role in Education Finance Reform.” Educational Evaluation and Policy Analysis 14, 4 (1992): 307-32.



