Essay on the subject "Informatics, Programming"


Algorithmic recognition of the Verb

Ministry of Education of the Republic of Belarus
Educational institution
«Gomel State University named after F. Skorina»
Faculty of Philology

Course paper

Algorithmic recognition of the Verb

Author:
Student of group K-42
Marchenko T.E.

Gomel 2005

Contents

Introduction
Basic assumptions and some facts
Algorithm for automatic recognition of verbal and nominal word groups
1 Lists of markers used by Algorithm No 1
2 Text sample processed by the algorithm
3 Examples of hand checking of the performance of the algorithm
Conclusion
References

Introduction
The advent and the subsequent wide use of formal grammars for text synthesis and for formal representation of the structure of the Sentence could not produce adequate results when applied to text analysis. Therefore a better and more suitable solution was sought, and such a solution was found in the algorithmic approach to text analysis. The algorithmic approach uses series of instructions, written in Natural Language and organized in flow charts, with the aim of analysing certain aspects of the grammatical structure of the Sentence. The procedures, in the form of a finite sequence of instructions organized in an algorithm, are based on the grammatical and syntactical information contained in the Sentence. The method used in this chapter closely follows the approach adopted by the all-Russia group Statistika Rechi in the 1970s and described in a number of publications (Koverin, 1972; Mihailova, 1973; Georgiev, 1976). It is to be noted, however, that the results achieved by the algorithmic procedures described in this study by far exceed the results for the English language obtained by Primov and Sorokina (1970) using the same method. (To prevent unauthorized commercial use the authors published only the block-scheme of the algorithm.)

Basic assumptions and some facts
 
It is a well known fact that many difficulties are encountered in Text Processing. A major difficulty, which if not removed first would hamper any further progress, is the ambiguity present in the wordforms that potentially belong to more than one Part of Speech when taken out of context. Therefore it is essential to find the features that disambiguate the wordforms when used in a context and to define the disambiguation process algorithmically.

As a first step in this direction we have chosen to disambiguate those wordforms which potentially (when out of context, in a dictionary) can be attributed to more than one Part of Speech and where one of the possibilities is a Verb. These possibilities include Verb or Noun (as in stay), Verb or Noun or Adjective (as in pain, crash), Verb or Adjective (as in calm), Verb or Participle (as in settled, asked, put), Verb or Noun or Participle (as in run, abode, bid), Verb or Adjective or Participle (as in closed), and Verb or Noun or Participle or Adjective (as in cut). We'll start with the assumption that for every wordform in the Sentence there are only two possibilities: to be or not to be a Verb. Therefore, only provisionally, exclusively for the purposes of the present type of description and subsequent algorithmic analysis of the Sentence, we shall assume that all wordforms in the Sentence which are not Verbs belong to the non-verbal or Nominal Word Group (NG). As a result of this definition, the NG will incorporate the Noun, the Adjective, the Adverb, the Numeral, the Pronoun, the Preposition and the Participle 1st used as an attribute (as in the best selected audience) or as a Complement (as in we'll regard this matter settled). All the wordforms in the Sentence which are Verbs form the Verbal Group (VG). The VG includes all main and Auxiliary Verbs, the Particle to (used with the Infinitive of the Verb), all verbal phrases consisting of a Verb and a Noun (such as take place, take part, etc.) or a Verb and an Adverb (such as go out, get up, set aside, etc.), and the Participle 2nd used in the compound Verbal Tenses (such as had arrived).

The formal features which help us recognize the nominal or verbal character of a wordform are called 'markers' (Sestier and Dupuis, 1962). Some markers, such as the, a, an, at, by, on, in, etc. (most of them are Prepositions), predict with 100 per cent accuracy the nominal nature of the wordform immediately following them (so long as the Prepositions are not part of a phrasal Verb). Other markers, including wordform endings such as -ing and -es, or a Preposition which is also a Particle such as to, etc., when used singly on their own (without the help of other markers) cannot predict accurately the verbal or nominal character of a wordform. Considering the fact that not all markers give 100 per cent predictability (even when all markers in the immediate vicinity of a wordform are taken into consideration), it becomes evident that the entire process of formal text analysis using this method is based, to a certain degree, on probability. The question is how to reduce the possible errors. To this purpose, the following procedures were used:

a) the context of a wordform was explored for markers, moving back and forth up to three words to the left and to the right of the wordform;
b) some algorithmic instructions preceded others in sequence as a matter of rule in order to act as an additional screening;
c) no decision was taken prematurely, without sufficient grammatical and syntactical evidence being contained in the markers;
d) no instruction was considered to be final without sufficient checking and tests proving the success rate of its performance.

The algorithm presented in Section 3 below, numbered as Algorithm No 1 (Georgiev, 1991), when tested on texts chosen at random, correctly recognized on average 98 words out of every 100. The algorithm uses Lists of markers.
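The general procedure just described, namely scanning up to three words to the left and to the right of the running word for markers before deciding, can be sketched as follows. This is an illustrative reconstruction only, not the author's 302-instruction algorithm: the marker sets are abbreviated stand-ins and the fallback rules are assumptions.

```python
# Illustrative sketch of marker-based disambiguation (not the original
# algorithm): the marker sets below are abbreviated, and the decision
# rules are simplified assumptions.

NOMINAL_PREDICTORS = {"the", "a", "an", "at", "by", "on", "in", "of"}  # strong nominal markers
VERBAL_PREDICTORS = {"will", "shall", "would", "must", "to"}           # weaker verbal hints

def classify(words, i, window=3):
    """Attribute words[i] to 'NG' or 'VG' by scanning up to `window`
    words to the left, then to the right, for markers."""
    # Search left first, nearest word outward.
    for j in range(i - 1, max(i - 1 - window, -1), -1):
        if words[j] in NOMINAL_PREDICTORS:
            return "NG"
        if words[j] in VERBAL_PREDICTORS:
            return "VG"
    # Then search right: a word standing directly before a Preposition
    # is often verbal (an assumed heuristic, not a rule from the source).
    for j in range(i + 1, min(i + 1 + window, len(words))):
        if words[j] in NOMINAL_PREDICTORS:
            return "VG"
    return "NG"  # default attribution when no marker is found
```

For instance, `classify(["she", "nodded", "at", "the", "top"], 1)` attributes nodded to the VG because a Preposition follows it, while top is attributed to the NG because the Article the precedes it.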
Algorithm for automatic recognition of verbal and nominal word groups

The block-scheme of the algorithm is shown in Figure 1.1:

Recognition of Auxiliary Words, Abbreviations, Punctuation Marks and figures of up to 3-letter length (presented in Lists). Words over 3-letter length: search first left, then right (up to 3 words in each direction) for markers (presented in Lists) until enough evidence is gathered for a correct attribution of the running word. Output result: attribution of the running word to one of the groups (verbal or nominal).

Figure 1.1 Block-scheme of Algorithm No 1

Note: The algorithm, 302 digital instructions in all, is available on the Internet (see Internet Downloads at the end of the book).
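The control flow of the block-scheme can be caricatured in a few lines of code. Everything below is an assumed reconstruction for illustration: the real algorithm's 302 instructions and its full Lists are replaced by tiny stand-ins.

```python
# Rough reconstruction of the Figure 1.1 block-scheme (assumed; the real
# algorithm has 302 instructions and much larger Lists).

PUNCTUATION = {".", ",", ";", ":", "!", "?"}
SHORT_WORD_GROUPS = {            # stand-in for the Lists handling words up to 3 letters
    "she": "NG", "her": "NG", "his": "NG", "was": "VG", "had": "VG",
    "on": "NG", "at": "NG", "by": "NG", "of": "NG", "a": "NG", "an": "NG",
}
NOMINAL_MARKERS = {"the", "a", "an", "at", "on", "by", "of", "in"}

def attribute(tokens):
    """Return (token, group) pairs, group being 'NG', 'VG' or 'PUNCT'."""
    out = []
    for i, tok in enumerate(tokens):
        if tok in PUNCTUATION:                    # punctuation recognized first
            out.append((tok, "PUNCT"))
        elif len(tok) <= 3 and tok in SHORT_WORD_GROUPS:
            out.append((tok, SHORT_WORD_GROUPS[tok]))  # short words via Lists
        else:
            # longer words: search left up to 3 words for nominal markers
            group = "VG"                          # assumed default
            for j in range(i - 1, max(i - 4, -1), -1):
                if tokens[j] in NOMINAL_MARKERS:
                    group = "NG"
                    break
            out.append((tok, group))
    return out
```

Running `attribute(["she", "was", "on", "a", "floor", "."])` attributes floor to the NG (the Article a stands to its left) and was to the VG, mirroring the order of checks in the block-scheme: punctuation first, then short words, then the marker search.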
1 Lists of markers used by Algorithm No 1

(i) List No 1: for, nei, two, one, may, fig, any, day, she, his, him, her, you, men, its, six, sex, ten, low, fat, old, few, new, now, sea, yet, ago, nor, all, per, era, rat, lot, our, way, leg, hay, key, tea, lee, oak, big, who, tub, pet, law, hut, gut, wit, hat, pot, how, far, cat, dog, ray, hot, top, via, why, Mrs, ..., etc.

(ii) List No 2: was, are, not, get, got, bid, had, did, due, see, saw, lit, let, say, met, rot, off, fix, lie, die, dye, lay, sit, try, led, nit, ..., etc.

(iii) List No 3: pay, dip, bet, age, can, man, oil, end, fun, dry, log, use, set, air, tag, map, bar, mug, mud, tar, top, pad, raw, row, gas, red, rig, fit, own, let, aid, act, cut, tax, put, ..., etc.

(iv) List No 4: to, all, thus, both, many, may, might, when, Personal Pronouns, so, must, would, often, did, make, made, if, can, will, shall, ..., etc.

(v) List No 5: when, the, a, an, is, to, be, are, that, which, was, some, no, will, can, were, have, may, than, has, being, made, where, must, other, such, would, each, then, should, there, those, could, well, even, proportional, particular(ly), having, cannot, can't, shall, later, might, now, often, had, almost, can not, of, in, for, with, by, this, from, at, on, if, between, into, through, per, over, above, because, under, below, while, before, concerning, as, one, ..., etc.

(vi) List No 6: with, this, that, from, which, these, those, than, then, where, when, also, more, into, other, only, same, some, there, such, about, least, them, early, either, while, most, thus, each, under, their, they, after, less, near, above, three, both, several, below, first, much, many, zero, even, hence, before, quite, rather, till, until, best, down, over, above, through, Reflexive Pronouns, self, whether, onto, once, since, toward(s), already, every, elsewhere, thing, nothing, always, perhaps, sometimes, anything, something, everything, otherwise, often, last, around, still, instead, foreword, later, just, behind, ..., etc.

(vii) List No 7: includes all Irregular Verbs, with the following wordforms: Present, Present 3rd person singular, Past and Past Participle.

(viii) List No 8: -ted, -ded, -ied, -ned, -red, -sed, -ked, -wed, -bed, -hed, -ped, -led, -ved, -reed, -ced, -med, -zed, -yed, -ued, ..., etc.

(ix) List No 9: -ous, -ity, -less, -ph, -'s (except in it's, what's, that's, there's, etc.), -ness, -ence, -ic, -ee, -ly, -is, -al, -ty, -que, -(t)er, -(t)or, -th (except in worth), -ul, -ment, -sion(s), ..., etc.

(x) List No 10: comprises a full list of all Numerals (Cardinal and Ordinal).
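Lists No 8 and No 9 are ending lists rather than word lists, so consulting them is a suffix check. The sketch below shows a minimal way to do that; the sublists are abbreviated and the routing targets (the instruction ranges of the algorithm) appear only in comments.

```python
# Abbreviated ending lists (cf. List No 8 and List No 9); illustrative only.

LIST_8_ENDINGS = ("ted", "ded", "ied", "ned", "red", "sed", "ved")       # -ed family
LIST_9_ENDINGS = ("ous", "ity", "less", "ness", "ence", "ly", "ment", "sion")

def ending_hint(word):
    """Return 'VG-candidate' for List No 8 endings (such words are sent
    on to the Participle analysis, instructions Nos 128-164), 'NG' for
    List No 9 endings, or None when the ending decides nothing."""
    if word.endswith(LIST_8_ENDINGS):   # str.endswith accepts a tuple of suffixes
        return "VG-candidate"
    if word.endswith(LIST_9_ENDINGS):
        return "NG"
    return None
```

Thus divided (ending -ded, List No 8) is routed onward as a verbal candidate, while separately (ending -ly, List No 9) is attributed to the NG directly, matching the hand-checked examples later in the text.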
2 Text sample processed by the algorithm

Text	Word Group
She	NG
nodded	VG
again and	NG
patted	VG
my arm, a small familiar gesture which always	NG
managed to convey	VG
both understanding and dismissal.	NG

3 Examples of hand checking of the performance of the algorithm
Let us see how the following sentence will be processed by Algorithm No 1, word by word: Her apartment was on a floor by itself at the top of what had once been a single dwelling, but which long ago was divided into separately rented living quarters.

First the algorithm picks up the first word of the sentence (of the text), in our case the word her, with instruction No 1. The same instruction always ascertains that the text has not ended yet. Then the algorithm proceeds to analyse the word her by asking questions about it and verifying the answers to those questions by comparing the word her with lists of other words and Punctuation Marks, thus establishing, gradually, that the word her is not a Punctuation Mark (operations 3-5), that it is not a figure (number) either (operations 5-7), and that its length exceeds two letters (operation 8). The fact that its length exceeds two letters makes the algorithm skip the next procedures as they follow in sequence, and continue the analysis at operation No 31. Using operation No 31 the algorithm recognizes the word as a three-letter word and takes it straight away to operation No 34. Here it is decreed to take the word her together with the word that follows it and to remember both words as a NG. Thus:

Her apartment = NG

Then the algorithm returns again to operation No 1, this time with the word was, and goes through the same procedures with it till it reaches instruction No 38, where it is seen that this word is in fact was. Now the algorithm checks if was is preceded (or followed) by words such as there or it (operation No 39, which instructs the computer to compare the adjacent words with there and it), or if it is followed, up to two words ahead, by a word ending in -ly or by such words as never, soon, etc., none of which is actually the case. Then, finally, operation No 39d instructs the computer to remember the word was as a VG:

was = VG
And to return to the start again, this time with the next word, on. Going through the initial procedures again, our hand checking of this algorithm reaches instruction No 9, where it is made clear that the word is indeed on. Then the algorithm checks the left surroundings of on, to see if the word immediately preceding it was recognized as a Verb (No 10), excluding the Auxiliary Verbs. Since it was not (was is an Auxiliary Verb), the procedure reaches operations Nos 12 and 12a, where it becomes known to the algorithm that on is followed by a. The knowledge that on is followed by an Article enables the program to make a firm decision concerning the attribution of the next two words (12a): on and the next two words are automatically attributed to the NG:

on a floor = NG
After that the program again returns to operation No 1, this time to analyse the word by. The analysis proceeds without any result till it reaches operation No 11, where the word by is matched with its recorded counterpart (see the List enumerating the other possibilities). In a similar fashion (see on), operation No 12b instructs the computer to take by and the next word blindly (i.e. without analysis) and to remember them as a NG. Thus we have:

by itself = NG
We return again to operation No 1 to analyse the next word, at, and we pass, unsuccessfully, through the first ten steps. Instruction No 11 enables the computer to match at with its counterpart recorded in the List (at). Since at is followed by the (an Article), this enables the computer to make a firm decision: to take at plus the plus the next word and to remember them as a NG:

at the top = NG
We deal similarly with the next word, of, and since it is not followed by a word mentioned in operation No 12, we take only the word immediately following it (12b) and remember them as a NG:

of what = NG
Since the next word, had, exceeds the two-letter length (operation No 7), we proceed with it to operation No 31, but we cannot identify it till we reach operation No 38. Operation No 39 checks the immediate surroundings of had, and if we had listed once with the other Adverbs in 39b, we would have ended our quest now. But since once is not in this list, the algorithm proceeds to the next step (39d) and qualifies had as a VG:

had = VG
Now we proceed further, starting with operation No 1, to analyse the next word, once. Being a long word, once skips the analysis destined for the shorter (two- and three-letter) words and we arrive with it at operation No 55. Operations Nos 55 and 57 ascertain that once does not coincide with either of the alternatives offered there. Through operation No 59 the computer program finds once listed in List No 6 and makes a correct decision: to attribute it to the NG:

once = NG
Now we (and the program) have reached the word been in the text. The procedures dealing with the shorter words are similarly ignored, up to operation No 61, where been is identified as an Irregular Verb from List No 7 and attributed (No 62b) to the VG:

been = VG
Next we have the word a (an Indefinite Article), which leads us to operations Nos 11 and 12 (where it is identified as such), and with operation No 12b the program reaches a decision to attribute a and the word following it to the NG:

a single = NG

Next in turn is dwelling. It is somewhat difficult to tag, because it can be either a Verb or a Noun. We go with it through all the initial operations, without significant success, until we get to operation No 69 and receive the instruction to follow routines Nos 246-303. Since dwelling does not coincide with the words listed in operation No 246, is not preceded by the syntactical construction defined in No 248 and does not have the word surroundings specified by operations Nos 250, 254, 256, 258, 260, 262, 264, 266, 268, 270, 272, 274, 276, 278 and 280, its tagging, so far, is unsuccessful. Finally, operation No 282 finds the right surrounding: up to two words to its left there is an Article (a), and dwelling is attributed to the NG:

dwelling = NG
However, in this case dwelling is recognized as a Gerund, not as a Noun. If we were to use this result in another program this might lead to problems. Therefore, perhaps, we can add an extra sieve here in order to be able always to make the right choice. At the same time, we must be very careful when we do so, because the algorithms are made so compact that any further interference (e.g. adding new instructions, changing the order of the instructions) might well lead to much bigger errors than this one.

Now, in operation No 3, we come to the first Punctuation Mark since we started our analysis. The Punctuation Mark acts as a dividing line and instructs the program to print what was stored in the buffer up to this moment.

Next in line is the word but. Being a three-letter word, it is sent to operation No 31 and then consecutively to Nos 34, 36, 38 and 40. It is identified in No 42 and sent by No 43 to the NG as a Conjunction:

but = NG
Next, we continue with the analysis of the word which, starting as usual from the very beginning (No 1) and gradually reaching No 55, where the real identification for long words starts. The word which is not listed in No 55 or No 57. We find it in List No 6 of operation No 59 and as a result attribute it to the NG:

which = NG
The word long follows, and in exactly the same way we reach operation No 55 and continue further, comparing it with other words and exploring its surroundings, until we exhaust all possibilities and reach a final verdict in No 89:

long = NG
Next in turn is the word ago. As a three-letter word it is analysed in operation No 31 and the operations that follow, until it is found by operation No 46 in List No 1 and identified as a NG (No 47):

ago = NG

Following is the word was, which is recognized as such for the first time in operation No 38. After some brief exploration of its surroundings the program decides that was belongs to the VG:

was = VG

Next in sequence is the word divided. Step by step, the algorithmic procedures pass it on to operation No 55, because it is a long word. Again, as in all previous cases, operations Nos 55, 56, 57, 59, 61 and 63 try to identify it with a word from a List, but unsuccessfully until, finally, instruction No 65 identifies part of its ending with -ded from List No 8 and sends the word to instructions Nos 128-164 for further analysis. Here it does not take long to see that divided is preceded by the Auxiliary Verb was (No 130) and that it should be attributed to the VG as Participle 2nd (No 131):

divided = VG
The Preposition into comes next, and since it is not located in one of the Lists examined by the instructions and none of its surroundings correspond to those listed, it is assumed that it belongs to the NG (No 89):

into = NG
Next, the ending -ly of the Adverb separately is found in List No 9 and this gives enough reason to send it to the NG (No 64):

separately = NG
Now we come to a difficult word again, because rented can be either a Verb or an Adjective, or even Participle 1st. Since its ending -ted is found in List No 8, rented is sent to instructions Nos 128-164 for further analysis as a special case. With instructions Nos 144 and 145 the algorithm chooses to recognize rented as a Participle (1st) and to attribute it to the NG:

rented = NG
Next comes living. At first it also seems to be a special case (since it can be Noun, Gerund, Verb (as part of a Compound Tense), Adjective or Participle). Instruction No 69 establishes that this word ends in -ing and No 70 sends it for further analysis to instructions Nos 246-303. Almost towards the end (instructions Nos 300 and 301), the algorithm decides to attribute living to the NG, acknowledging that it is a Present Participle. If the program were more precise, it would be able also to say that living is an Adjective used as an attribute.

The last word in this sequence is quarters. The way it ends very much resembles a verbal ending (3rd person singular). Will the algorithm make a mistake this time? Instruction No 67 recognizes that the ending -s is ambiguous and sends quarters to instructions Nos 165-245 for more detailed analysis. Then the word passes unsuccessfully (unrecognized) through many instructions till it finally reaches instruction No 233, where it is evidenced that quarters is followed by a Punctuation Mark, and this serves as sufficient reason to attribute it to the NG:

quarters = NG
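The two special-case routes just walked through (dwelling and living via instructions Nos 246-303, quarters via Nos 165-245) can be caricatured in code. Only the two context rules that the walkthrough actually mentions (operations No 282 and No 233) are implemented below; everything else in those instruction ranges is omitted, so this is an assumption-laden sketch, not the algorithm itself.

```python
# Caricature of the ambiguous-ending routes; only the two context rules
# mentioned in the walkthrough (Nos 282 and 233) are implemented.

ARTICLES = {"a", "an", "the"}
PUNCTUATION = {".", ",", ";", ":"}

def classify_ing(tokens, i):
    """-ing word: NG when an Article stands up to two words to its left
    (cf. operation No 282), otherwise treated as a verbal Present
    Participle."""
    for j in range(max(0, i - 2), i):
        if tokens[j] in ARTICLES:
            return "NG"
    return "VG"

def classify_s(tokens, i):
    """-s word: NG when followed directly by a Punctuation Mark
    (cf. operation No 233), otherwise left as a verbal candidate."""
    nxt = tokens[i + 1] if i + 1 < len(tokens) else "."
    return "NG" if nxt in PUNCTUATION else "VG"
```

On the walkthrough's own material, `classify_ing(["a", "single", "dwelling"], 2)` yields NG (the Article a stands two words to the left) and `classify_s(["living", "quarters", "."], 1)` yields NG (quarters is followed by a Punctuation Mark), matching the attributions reached above.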
Finally, our algorithmic analysis of the above sentence ends with commendable results: no error. However, in the long run we would expect errors to appear, mainly when we deal with Verbs, but these are not likely to exceed 2 per cent. For example, an error can be detected in the following sample sentence: Not only has his poetic fame, as was inevitable, been overshadowed by that of Shakespeare but he was long believed to have entertained and to have taken frequent opportunities of expressing a malign jealousy of one both greater and more successful than himself.

This sentence is divided into VG and NG in the following manner:
Text	Word Group
Not	VG
only	NG
has	VG
his poetic fame	NG
as	NG
was	VG
inevitable	NG
been overshadowed	VG
by that of Shakespeare	NG
but he	NG
was long believed to have entertained	VG
and	NG
to have taken	VG
frequent opportunities of expressing	NG
a malign jealousy of one both greater	NG
and	NG
more successful than himself.	NG
As is seen in the above example, the word long was wrongly attributed to the VG (according to our specifications, laid down as a starting point for the algorithm, it should belong to the NG).

The reader, if he or she has enough patience, can put many sentences to the test in the way described above (following the algorithmic instructions), to prove for himself (herself) the accuracy of our description. Though this is a description designed for computer use (to be turned into a computer software program), it will surely be quite interesting for a moment or two to put ourselves on a par with the computer in order to understand better how it works. Of course, that is not the way we would do the job. Our knowledge of grammar is far superior, and we understand the meaning of the sentence while the computer does not. The information used by the computer is extremely limited: only that presented in the instructions (operations) and in the Lists.

Further on we will try to give the computer more information (Algorithm No 3 and the algorithms in Part 2) and correspondingly increase our requirements.

Conclusion
•  Most of the procedures to determine the nominal or verbal nature of the wordform, depending on its context, are based on the phrasal and syntactic structures present in the Sentence (for example, instructions 11 and 12, 67 and 68, 85, etc.), i.e. structures such as Preposition + Article + Noun; will (shall) + be + (Adverb) + Participle; to + be + (not) + Participle 2nd + to + Verb; -ing + Possessive Pronoun + Noun, etc. (the words in brackets represent alternatives).
•  When constructing the algorithm it was thought to be more expedient to deal first with the auxiliary and short words of two-letter length, then with words of three-letter length, then with the rest of the words, both for frequency considerations and because they represent the main body of the markers.
•  The approach presented in this study is not based on formal grammars and is to be used exclusively for text analysis (not for text synthesis). One should not associate the VP (Verbal Phrase) with the VG, or the NP (Noun Phrase) with the NG, for these are completely different notions, as has been shown by the presentation.
•  The algorithm can be checked by feeding texts in through the procedures (the instructions) manually, and if the reader is dissatisfied he or she may change the instructions to improve the results. (See Section 3.3 for details of how the performance of the algorithms can be hand checked.)

•  The algorithm can be easily programmed in one of the existing artificial languages best suited for this type of operation.
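One of the phrasal structures cited in the first bullet, Preposition + Article + Noun, lends itself to a direct check of the kind instructions Nos 11 and 12 perform. The sketch below is an illustration with assumed, abbreviated word sets, not those instructions themselves.

```python
# Illustrative check for the Preposition + Article + (Noun) structure
# (cf. instructions Nos 11-12); word sets abbreviated.

PREPOSITIONS = {"at", "on", "in", "by", "of", "from", "into"}
ARTICLES = {"a", "an", "the"}

def prep_article_frame(tokens, i):
    """True when tokens[i] opens a Preposition + Article frame, which
    licenses attributing the Preposition and the next two words to the NG
    (as in the hand-checked example 'at the top')."""
    return (tokens[i] in PREPOSITIONS
            and i + 1 < len(tokens)
            and tokens[i + 1] in ARTICLES)
```

This is exactly the shape of the firm decision made for at the top in the hand checking: the Preposition plus the Article is evidence enough, and the Noun that follows is taken without further analysis.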

References
Brill, E. and Mooney, R.J. (1997), 'An overview of empirical natural language processing', AI Magazine, 18 (4): 13-24.
Chomsky, N. (1957), Syntactic Structures. The Hague: Mouton.
Curme, G.O. (1955), English Grammar. New York: Barnes and Noble.
Dowty, D.R., Karttunen, L. and Zwicky, A.M. (eds) (1985), Natural Language Parsing. Cambridge: Cambridge University Press.
Garside, R. (1986), 'The CLAWS word-tagging system', in R. Garside, G. Leech and G. Sampson (eds), The Computational Analysis of English. Harlow: Longman.
Gazdar, G. and Mellish, C. (1989), Natural Language Processing in POP-11. Reading, UK: Addison-Wesley.
Georgiev, H. (1976), 'Automatic recognition of verbal and nominal word groups in Bulgarian texts', t.a. information, Revue International du traitement automatique du langage, 2, 17-24.
Georgiev, H. (1991), 'English Algorithmic Grammar', Applied Computer Translation, Vol. 1, No. 3, 29-48.
Georgiev, H. (1993a), 'Syntparse, software program for parsing of English texts', demonstration at the Joint Inter-Agency Meeting on Computer-assisted Terminology and Translation, The United Nations, Geneva.
Georgiev, H. (1993b), 'Syntcheck, a computer software program for orthographical and grammatical spell-checking of English texts', demonstration at the Joint Inter-Agency Meeting on Computer-assisted Terminology and Translation, The United Nations, Geneva.
Georgiev, H. (1994-2001), Softhesaurus, English Electronic Lexicon, produced and marketed by LANGSOFT, Sprachlernmittel, Switzerland; platform: DOS/Windows.
Georgiev, H. (1996-2001a), Syntcheck, a computer software program for orthographical and grammatical spell-checking of German texts, produced and marketed by LANGSOFT, Sprachlernmittel, Switzerland; platform: DOS/Windows.
Georgiev, H. (1996-2001b), Syntparse, software program for parsing of German texts, produced and marketed by LANGSOFT, Sprachlernmittel, Switzerland; platform: DOS/Windows.
Georgiev, H. (1997-2001a), Syntcheck, a computer software program for orthographical and grammatical spell-checking of French texts, produced and marketed by LANGSOFT, Sprachlernmittel, Switzerland; platform: DOS/Windows.
Georgiev, H. (1997-2001b), Syntparse, software program for parsing of French texts, produced and marketed by LANGSOFT, Sprachlernmittel, Switzerland; platform: DOS/Windows.
Georgiev, H. (2000-2001), Syntcheck, a computer software program for orthographical and grammatical spell-checking of Italian texts, produced and marketed by LANGSOFT, Sprachlernmittel, Switzerland; platform: DOS/Windows.
Giorgi, A. and Longobardi, G. (1991), The Syntax of Noun Phrases: Configuration, Parameters and Empty Categories. Cambridge: Cambridge University Press.
Graver, B.D. (1971), Advanced English Practice. Oxford: Oxford University Press.
Grisham, R. (1986), Computational Linguistics. Cambridge: Cambridge University Press.
Harris, Z.S. (1982), A Grammar of English on Mathematical Principles. New York: Wiley.
Hausser, R. (1989), Computation of Language. Berlin: Springer.
Hornby, A.S. (1958), A Guide to Patterns and Usage in English. London: Oxford University Press.
Kavi, M. and Nirenburg, S. (1997), 'Knowledge-based systems for natural language', in A.B. Tucker (ed.), The Computer Science and Engineering Handbook. Boca Raton, FL: CRC Press, Inc., 637-53.
Koverin, A.A. (1972), 'Grammatical analysis, on a computer, of French scientific and technical texts' (in Russian), PhD thesis, Leningrad University, Russia.
Leech, S. and Svartvik, J. (1975), A Communicative Grammar of English. London: Longman.
Manning, C. and Schutze, H. (1999), Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
Marcus, M.P. (1980), A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.
McEnery, T. (1992), Computational Linguistics. Wilmslow, UK: Sigma Press.
Mihailova, I.V. (1973), 'Automatic recognition of the nominal group in Spanish texts' (in Russian), in R.G. Piotrovskij (ed.), Injenernaja Linguistika. St Petersburg: Politechnical Institute, 148-75.
Primov, U.V. and Sorokina, V.A. (1970), 'Algorithm for automatic recognition of the nominal group in English technical texts' (in Russian), in R.G. Piotrovskij (ed.), Statistika Teksta, II. Minsk: Politechnical Institute.
Pullum, G.K. (1984), 'On two recent attempts to show that English is not a CFL', Computational Linguistics, 10 (3-4), 182-6.
Quirk, R. and Greenbaum, S. (1983).

