An Introduction to Syntactic Analysis and Theory

Dominique Sportiche
Description

An Introduction to Syntactic Analysis and Theory offers beginning students a comprehensive overview of and introduction to our current understanding of the rules and principles that govern the syntax of natural languages.

  • Includes numerous pedagogical features such as 'practice' boxes and sidebars, designed to facilitate understanding of both the 'hows' and the 'whys' of sentence structure
  • Guides readers through syntactic and morphological structures in a progressive manner
  • Takes the mystery out of one of the most crucial aspects of the workings of language – the principles and processes behind the structure of sentences
  • Ideal for students with minimal knowledge of current syntactic research, it progresses in theoretical difficulty from basic ideas and theories to more complex and advanced, up-to-date concepts in syntactic theory


Page count: 933

Publication year: 2013




Contents

Acknowledgments

1 Introduction

1.1 Where to Start

1.2 What this Book is and is Not, and How to Use It

1.3 Further Reading

2 Morphology: Starting with Words

2.1 Words Come in Categories

2.2 Words are Made of Smaller Units: Morphemes

2.3 Morphemes Combine in Regular Ways

2.4 Apparent Exceptions to the RHHR

2.5 Morphological Atoms

2.6 Compositionality and Recursion

2.7 Conclusion

3 Syntactic Analysis Introduced

3.1 Word Order

3.2 Constituency

3.3 Syntactic Productivity

3.4 Substitution

3.5 Ellipsis

3.6 Coordination

3.7 Movement and Other Distortions

3.8 Some More Complex Distortion Experiments, Briefly

3.9 Some More Practice

3.10 Some Other Evidence of Constituency

3.11 Conclusion

4 Clauses

4.1 Full Clauses: CPs

4.2 Tense Phrase

4.3 Conclusion

5 Other Phrases: A First Glance

5.1 Verb Phrases

5.2 Determiner Phrases

5.3 Noun Phrases

5.4 Adjective Phrases

5.5 Prepositional Phrases

5.6 Ways to Talk About Tree Geometry

5.7 Conclusion

6 X-bar Theory and the Format of Lexical Entries

6.1 Review: The Model of Morphology

6.2 Building a Model of Syntax

6.3 Headedness

6.4 Internal Organization of Constituents

6.5 Some Consequences

6.6 Cross-categorial Symmetries

6.7 Subjects Across Categories: Small Clauses

6.8 Lexical Entries

6.9 The Projection Principle and Locality

6.10 Cross-linguistic Variation

6.11 Conclusion

7 Binding and the Hierarchical Nature of Phrase Structure

7.1 Anaphors

7.2 Pronouns

7.3 Non-pronominal Expressions

7.4 Binding Theory Summarized

7.5 Small Clauses and Binding Theory

7.6 Some Issues

7.7 Cross-linguistic Variation

7.8 Learning About Binding Relations

7.9 Conclusion

8 Apparent Violations of Locality of Selection

8.1 Setting the Stage

8.2 Topicalization: A First Case of Movement

8.3 Head Movement

8.4 Detecting Selection

8.5 Phrasal Movements

8.6 How Selection Drives Structure Building

8.7 Addressing some Previous Puzzles

8.8 Synthesis

8.9 Terminology and Notation

8.10 Conclusion

9 Infinitival Complements: Raising and Control

9.1 Subject Control

9.2 Using the Theory: Control and Binding

9.3 Interim Summary: Inventory of To-infinitival

9.4 Raising to Object/ECM and Object Control

9.5 Conclusion

10 Wh-questions: Wh-movement and Locality

10.1 Introduction

10.2 The Landing Site or Target Position of Wh-Movement

10.3 What Wh-movement Moves

10.4 Locality I: The Problem

10.5 Locality II: Theory of Constraints

10.6 Special Cases

10.7 Conclusion

11 Probing Structures

11.1 Introduction

11.2 Probing Derived Structures

11.3 Probing Underlying Structures

11.4 Probing with Binding

11.5 Conclusion

12 Inward Bound: Syntax and Morphology Atoms

12.1 The Size of Atoms

12.2 Head Movement and the Head Movement Constraint

12.3 Causative Affixes: Syntax or Morphology?

12.4 VP Shells

12.5 Ternary Branching

12.6 Using VP Shells: VP Shells and Adjuncts

12.7 Terminological Changes

12.8 Raising to Object

12.9 The Model of Morphosyntax

12.10 Conclusion

13 Advanced Binding and Some Binding Typology

13.1 Basics: Reminders

13.2 Reminder About Principle A

13.3 Subjects of Tensed Clauses

13.4 VP shells and the Binding Theory

13.5 Binding Variation and Typology

13.6 Conclusion

14 Wh-constructions

14.1 Diagnostic Properties of Wh-movement

14.2 Relative Clauses

14.3 Another Case of Null Operator Movement: Tough-Construction

14.4 Topicalization and Left Dislocation

14.5 Other Wh-movement Constructions

14.6 Conclusion

15 Syntactic Processes

15.1 The Language Model: Defining Structure

15.2 Selection, Movement, Locality

15.3 Computational Properties of the Model

15.4 Conclusion

References

Index

Additional and updated materials are available at www.wiley.com/go/syntacticanalysis

This edition first published 2014
© 2014 Dominique Sportiche, Hilda Koopman, and Edward Stabler

Registered Office
John Wiley & Sons, Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Offices
350 Main Street, Malden, MA 02148-5020, USA
9600 Garsington Road, Oxford, OX4 2DQ, UK
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, for customer services, and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of Dominique Sportiche, Hilda Koopman, and Edward Stabler to be identified as the authors of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author(s) have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Sportiche, Dominique.
An introduction to syntactic analysis and theory / Dominique Sportiche, Hilda Koopman, Edward Stabler.
pages cm
Includes index.

ISBN 978-1-4051-0016-8 (cloth) – ISBN 978-1-4051-0017-5 (pbk.)
1. Grammar, Comparative and general – Syntax. 2. Generative grammar. I. Sportiche, Dominique, author. II. Koopman, Hilda Judith, author. III. Stabler, Edward P., author.
P291.S57 2013
415–dc23

2013015034

A catalogue record for this book is available from the British Library.

Cover image: © Fotosearch
Cover design by E&P Design

To Noam with gratitude beyond words, whose influence is found on every page, and whose charismatic ideas have made intellectual life exciting, Chomsky-style.

Acknowledgments

This book was written more slowly than we anticipated, over years of teaching syntax to advanced undergraduate students and graduate students at UCLA. This means many people have contributed to it in many different ways. We would like to thank:

Robin Clark who inspired a very first attempt to write an uncompromising introduction to contemporary syntactic theory. We hope you will like the result.

The generations of students at UCLA and elsewhere, whose reactions, positive and negative, have forced us to work harder.

The colleagues who have used earlier versions of this book and gave us very valuable feedback: Ora Matushansky, Léa Nash, and Elena Soare at Université Paris 8, Vincent Homer at École normale supérieure in Paris, Maire Noonan at McGill University, Chris Collins at NYU, Greg Kobele at the University of Chicago, and Benjamin George, Peter Hallman, Anoop Mahajan, Keir Moulton, Robyn Orfitelli, and Martin Prinzhorn at UCLA.

Three colleagues who gave us extremely extensive comments: Joseph Emonds, Chris Collins, and Leston Buell. We tried our best to incorporate your many suggestions.

The anonymous reviewers who provided encouragement, and criticisms that we tried to address.

Our teaching assistants who have experienced the effects of earlier versions from the trenches and our graduate students, whose comments made this book much better than it could ever have been without them: Natasha Abner, Byron Ahn, Melanie Bervoets, Ivano Caponigro, Isabelle Charnavel, Vincent Homer, Tomoko Ishizuka, Ananda Lima, Robyn Orfitelli, Dave Schueller, Sarah van Wagenen, Jun Yashima.

The many colleagues whose work and taste have influenced us in too many ways to list and who are also part of this book: Alec Marantz, Anna Szabolcsi, Barry Schein, Benjamin Spector, Danny Fox, Gennaro Chierchia, Guglielmo Cinque, Hagit Borer, Henk van Riemsdijk, Isabelle Charnavel, Jean Roger Vergnaud, Jim Huang, Joseph Aoun, Luigi Rizzi, Maria Luisa Zubizaretta, Maria Polinsky, Martin Prinzhorn, Michal Starke, Misi Brody, Morris Halle, Norbert Hornstein, Philippe Schlenker, Richard Carter, Richard Kayne, Tim Stowell, Vincent Homer, Viola Schmitt.

Last but not least, Noémie and Sophie who, as often the sole native speakers of English close by, have suffered endless torment over acceptability judgments. If you read this book, you will perhaps feel that the end justified the means.

1

Introduction

Linguistics is a domain in which language is studied. The notion of language is a common sense notion. In general, a common sense name is not sufficient to characterize a field of research, as there may be many different fields studying more or less the object referred to by the same common-sense name. For example, one can study the oceans from the point of view of a marine biologist, a climate oceanographer, a plate tectonics physicist, a zoologist, a botanist, or a chemist. To get a more precise idea of the field of linguistics, it is necessary to define the type of questions that one is asking about the object of study. Most of what modern linguistics studies falls under the heading of the study of the “language faculty” that human beings possess as part of their human biology. The capacity for language shared by all normal members of our species includes, among other things, the ability to physically manifest thoughts, to linguistically express diverse and original ideas, in diverse and original ways. This faculty also underlies the ability to understand others, to have coherent conversations, and to deduce other people’s intentions from their utterances. These properties of language point to a field where the object of study is some mental property of individuals. So linguistics is a part of individual psychology, viewed in contemporary research as part of the study of the human brain.

Investigating this complex and often mysterious faculty appears a daunting task. One way to start is to try to formulate sensible questions about the output of our linguistic capacity – the sounds, the words, the sentences – so as to provide a framework within which incremental knowledge about this faculty can be gained.

1.1 Where to Start

We can start by looking at a simple case of basic linguistic behavior in an individual. When I hear language, my ears are sensing small, rapid variations in air pressure. This vibration, sound, is transformed into nerve impulses that travel to my brain, where the signal is somehow decoded and transformed into an idea, a thought. This is speech perception, or recognition. A similar process occurs in users of sign language: a visual signal is somehow decomposed and converted into an idea. Inversely, when I speak, an idea in my mind is physically manifested through speech or visual signs: this is speech production. These simple observations raise many questions. What exactly goes on when we produce or recognize speech? How does perception or production unfold in real time and how is this coded in brain tissue?

To understand language processing in the brain, we aim to understand how it rapidly changes state in ways that we can interpret as analytical steps in decoding the linguistic signal. How does the brain do this, exactly? In engineering jargon, we are faced with a problem of “reverse engineering,” a common problem for industrial spies. We have a machine – a body, particularly a brain – capable of accomplishing a certain task and we try to understand how it works and how it could be built. We have similar questions in vision, acoustic processing, and other domains of cognitive science. Note that our question is not simply how a particular ability could be produced or imitated in principle. Rather, we aim to identify how it is really produced in the human language user.

This “reverse engineering” problem is very difficult to approach directly. First, to find the physical mechanisms responsible for some ability, we need to have a good understanding of the basic properties of that ability. For example, studies show that certain areas of the brain (Broca’s and Wernicke’s areas, etc.) are active when we perform certain kinds of linguistic tasks, but to decipher what exactly they are doing, what computations are being performed, we need to know a lot about what these tasks involve. To obtain this kind of understanding, we must initially approach the problems of production and perception abstractly.

We start with the hypothesis that, stored in our mind somehow, we possess some information about certain basic things, some atoms of information that we can deploy in given contexts. As a starting point, we could assume that these atoms are “words” or something similar, such as table and book, and eat and curious, and the and before, and further that these elements have phonetic properties, or instructions on how to physically realize the atoms through speech, as well as meanings, a pairing of the elements to bits of thoughts.

So when it comes to expressing a thought to someone else, a simple story is that I search in my mind for the words corresponding to what I want to talk about and string them together in a particular way to express the thought. Of course, this string of words has to be physically manifested, as in speech or signing, which means that a signal has to be sent to the motor centers that coordinate physical gestures of the mouth or the hand, as the case may be.

Similarly, successful perception of a sentence might be achieved by an electrical signal sent to my brain by the hearing system, which is then segmented by my brain into words, each corresponding to a bit of thought. As my brain receives the particular sequential arrangement of these words, it can also calculate what the combination of the words in the sentence means as a whole.

We can schematize these processes with the following flow chart:

    sounds ↔ ordered sequence of words ↔ meaning (thought)

If we go from left to right, we have a crude model of spoken language perception, from right to left a crude model of spoken language production. If we replaced “sounds” with “signs,” we would have crude models of sign language perception and production.
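These crude perception and production models can be sketched as a toy program. Everything in it – the tiny lexicon, the mock "phonetic" transcriptions, and the function names – is an illustrative assumption, not anything proposed in the text:

```python
# A toy model of the flow chart: sounds <-> ordered words <-> bits of thought.
# Left to right = crude perception; right to left = crude production.
# The lexicon pairs made-up phonetic forms with words and bits of meaning.
LEXICON = {
    "DHIYZ": ("these", "PROXIMAL+PLURAL"),
    "BUHKS": ("books", "BOOK+PLURAL"),
    "BERND": ("burned", "BURN+PAST"),
}

def perceive(signal):
    """Segment the signal into stored forms, then pair each with its meaning."""
    words, meanings = [], []
    for chunk in signal.split():        # segmentation (trivial here)
        word, meaning = LEXICON[chunk]  # lexical lookup
        words.append(word)
        meanings.append(meaning)
    return words, meanings

def produce(words):
    """Map each word back to its phonetic form and string the forms together."""
    by_word = {word: form for form, (word, _) in LEXICON.items()}
    return " ".join(by_word[w] for w in words)

words, meanings = perceive("DHIYZ BUHKS BERND")
print(words)           # ['these', 'books', 'burned']
print(produce(words))  # DHIYZ BUHKS BERND
```

The sketch hides exactly what makes the real problem hard: the actual speech signal is continuous, so segmenting it into stored forms is anything but a `split()` call, and the meaning of a sentence is not just a list of word meanings. This is part of why the chapters that follow abstract away from real-time processing.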

Understanding how this could work, even at an abstract level, is not easy. For example, it is imaginable that the rules governing how sounds are composed into ordered sets of words in a particular situation depend on what the hearer is looking at when perceiving the sounds. This would mean that we have to worry about the visual system and how it relates to the linguistic system. This is why we decide to concern ourselves with an even more abstract problem, and will not investigate how the flow chart above really works, how it unfolds in real time. Instead, we will ask: What necessary linguistic properties does the language processing task have? By linguistic, we mean that we are going to abstract away from the influence of visual clues or background knowledge about the world, and (initially at least) focus on grammatical properties (akin to those found in traditional grammar books). By necessary, we mean that we will concentrate on properties that hold across all normal uses of the language, properties that linguistic computations must respect in order to yield the linguistic phenomena we observe.

Here is a sample of more precise questions about necessary linguistic properties:

Do all orders of words yield meaningful expressions (in the way that all orders of decimal digits represent numbers)? If not, why not?

Do meaningful word sequences have any structure beyond their linear, temporal ordering? If so, what kind of structure, and why would this structure exist?

How are the meanings of a sentence determined by (or restricted by) the meanings of its component parts?

The discovery that speech can be symbolically transcribed, can be written down, is certainly among the most momentous human discoveries ever. It allowed the transmission of information across distances and across generations in ways that were never before possible.

In the famous 1632 Dialogue Concerning the Two Chief World Systems written by Galileo Galilei, one of the characters, Sir Giovanni Francesco Sagredo, “a man of noble extraction and trenchant wit,” marvels at this invention: “But surpassing all stupendous inventions, what sublimity of mind was his who dreamed of finding means to communicate his deepest thoughts to any other person, though distant by mighty intervals of place and time! Of talking with those who are in India; of speaking to those who are not yet born and will not be born for a thousand or ten thousand years; and with what facility, by the different arrangements of twenty characters upon a page!”

What is less obvious is that this invention is also a theoretical discovery about human psychology: the structure of language, the nature of this system, allows thoughts to be transmitted in this way. One fundamental aspect of this discovery can be stated as follows: the speech signal, even though it is a continuous physical phenomenon (a continuous variation of air pressure for normal speech, or continuous motion for sign language), can be represented with a finite (sometimes small) number of discrete units. A particularly striking example of this property is illustrated by alphabetic writing systems, as the Galileo quote above emphasizes: with a small number of symbols – letters – they can very effectively (partially) code speech. Informally speaking, it is clear that this segmentation of the speech signal occurs at various levels of “graininess.” Alphabetic writing systems segment the speech signal into very small units of writing, while Chinese characters or Egyptian hieroglyphic writing systems segment it into somewhat larger units. That this segmentation can occur at various degrees of “magnification” can be illustrated with the following string:

these books burned

Evidence confirms that English speakers segment and analyze this expression at some levels roughly like these, where the number in the leftmost column indicates the number of units in its line (and the first line is written in phonetic alphabet and is meant to represent the sequence of sounds found in this string):
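A short sketch can illustrate such a layered segmentation. The rough phonetic transcription and the exact unit counts below are illustrative assumptions, not the book's own table:

```python
# "these books burned" segmented at three levels of "magnification".
# The first level uses a rough, made-up phonetic transcription.
levels = [
    ("sounds",    ["dh", "iy", "z", "b", "uh", "k", "s", "b", "er", "n", "d"]),
    ("morphemes", ["these", "book", "-s", "burn", "-ed"]),
    ("words",     ["these", "books", "burned"]),
]
for name, units in levels:
    # leftmost number = how many units this level finds in the same signal
    print(len(units), name, " ".join(units))
```

The same stretch of signal is carved into eleven sounds, five morphemes (the plural -s and past-tense -ed count as units of their own), or three words, depending on the level of magnification.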

Linguists have extensively documented the relevance of such segmentations for our understanding of language structure. Hypothesizing that these modes of segmentation, these different “levels of magnification,” correspond to real psychological properties of the speech signal, we will need to at least answer the following questions about each level as part of understanding how our flow chart above really works:

What is the inventory of the smallest pieces, the atomic elements, that are assembled at each level?

What are the rules or principles that govern how these units can be assembled?

Traditionally, linguists have postulated the following divisions: Phonology studies the atoms and combinations of sounds; Morphology considers the atoms and how words are built; and Syntax considers how words are put together to form phrases. Of course, this preliminary division into such subdomains may not be correct. It could be that the atoms of syntax are not words but morphemes, and that the rules of combination for morphemes are the same as the rules of combination for words. If this were the case, there would really be no distinction between morphology and syntax. Or morphology might be part of phonology. (These kinds of proposals have been seriously explored.) But we will start with this traditional picture of the components, modifying it as necessary. In fact, whereas syntax books typically start with words, this book will start with morphology, i.e. the structure of words.

These two questions – what are the atoms of language, and how are they combined – characterize very well what this book is about. We view language as a system of symbols (e.g. sounds, which are paired up with meanings), and a combinatory system, where symbols combine to yield more complex objects, themselves associated with meanings. Here we concentrate on how aspects of this particular symbolic combinatory system are put together: this is a technical manual that almost exclusively worries about how to investigate and characterize the combinatory structure of morphological and syntactic units.

Even though the scope of this book is relatively narrow, the research perspective on language it embodies is part of a much broader research program that tries to characterize cognitive functions. The questions we address here are limited to the structure of the syntactic and morphological combinatory systems but, as will become clear, a substantial amount is known about these systems that suggests that non-trivial principles regulate how they work. This in turn raises all sorts of questions which we will not address but that are central research questions: how is this system put to use when we speak or understand? How much computational power is needed to master such a system? Is language fundamentally shaped by our communicative intentions? Where do the fundamental principles of language structure come from? Are they innate? Are they learned? Are they completely specific to language, or only partially so, or not at all? Are they specific to humans? Or only partially so? How did they appear in our species? Suddenly? Progressively? By themselves? In conjunction with other mental or physical properties? All these questions are (on their way to) being investigated in the rapidly developing field of cognitive science.

The approach to the study of language described above took off in the mid-20th century and is now a dynamic field incorporating an increasing panoply of methods and tools, from the methods used by traditional grammarians and language fieldworkers, to laboratory methods originating in experimental psychology, to neuroimaging, to mathematical methods imported from pure and applied mathematics, to statistical tools and the tools of modern genetics. Noam Chomsky, presently (2013) Institute Professor at the Massachusetts Institute of Technology, is the most influential pioneer of this research perspective and agenda and of the research methods that carry it out.

It is important to emphasize that apart from framing questions about language in a useful way, the new, systematic methods of investigation have met with great success. It is fair to say that in the past 50 years, our knowledge of linguistic structures has increased at a pace unparalleled in the history of linguistics. Because this approach to language is relatively young, progress is rapid and our progressively increasing understanding means that new, better hypotheses emerge all the time, sometimes revealing inadequacies of previous assumptions.

1.2 What this Book is and is Not, and How to Use It

First, a book like this one, which tries to provide a snapshot of our current state of understanding, is to a certain extent bound to be or to become incorrect as research progresses. It is thus important to remember that although the results described herein are sound and reasonable, most important are the methods used to reach them.

As was stated, this book is a technical manual focusing on syntactic and morphological structures. It is not meant to be exhaustive. Nor is it meant to be a systematic introduction to the relevant literature. Because the field progresses rapidly, notions once thought to be useful or important no longer play a prominent role. In general we do not discuss them. Other notions are not included because they are too advanced for such an introductory book. Rather we aim to provide a reasoned introduction to the central tools and concepts that seem necessary and well justified by the research of the past 50 years.

Although this book uses quite a bit of formal notation, it does not, by design, present a formalized theory of syntax, except in the last chapter. In that chapter, a complete formalization is introduced. This formalization of a part of our linguistic system has allowed researchers to precisely investigate important issues, like the expressive power of our model of the combinatory system. It has also allowed us to prove results showing that this model is in principle well behaved with respect to a number of important measures, like parsability and learnability.

The main text of this book is definitely anglocentric, focusing on English. This is in part due to the fact that this book grew out of class notes written for English-speaking students in syntax classes at the University of California at Los Angeles. But the focus on English is motivated by the field: supporting the conclusions reached requires sophisticated argumentation and thus requires looking in depth at an individual language. English is the best and most deeply studied language by a wide margin, so it is not unreasonable to think that there is a greater chance that deep, perhaps universal, properties of language have been discovered, and that some of the results have wide, perhaps universal, validity. This anglocentricity is also somewhat illusory, as a very substantial amount of how English is analyzed today is, and will continue to be, informed by in-depth work done on other languages, most notably by work in the 1960s on the Romance languages, the other Germanic languages, and Japanese, and by more recent work on an ever expanding variety of languages spanning all language families and broad regions of the planet. Theoretical research is crucially informed by cross-linguistic research, and it is our firm belief – buttressed by the life-long personal experience of working on a wide variety of human languages – as well as the very large number of studies that bear on a wide variety of languages, that the methods and tools introduced here are reliable, productive, and necessary for the study of any natural human language.

From a practical standpoint, we intend this book to be readable on its own, by anyone, even those without any prior knowledge of linguistics. Since this is a technical manual, it should be read slowly, making sure to the greatest extent possible that nothing is left unclear. To this end, it is essential to do the exercises. This cannot be overemphasized. The exercises are meant to let the reader check that the content relevant to them has been mastered. Additionally, the exercises may sometimes introduce results not discussed in the body of the chapters, or discuss alternatives to hypotheses previously adopted.

Wherever possible and accessible, we try to show how the conclusions reached connect with other modes of investigation of cognitive capacities, such as the (neuro-)psychology of learning and processing and computational approaches.

Scattered in the chapters are three types of box. They have different functions, indicated by their titles. Boxes marked “Practice” indicate points at which particular practice should be done. Shaded boxes should be read and paid particular attention to: they highlight important information. Material in double-ruled boxes is not critical to the reading of the chapter. These boxes introduce, discuss, or anticipate more advanced material. Read with this material, the book is a pretty advanced introduction to current theorizing and results.

While the emphasis in this book is on methods of investigation, the current results are important too. In general we summarize both at the end of each chapter in a section entitled “What to remember.”

1.3 Further Reading

Reading the original literature in syntactic theory can be difficult. The original literature by now spans more than 60 years. As mentioned in the Introduction, this field is very young, and progress has been and continues to be rapid. In this time span, many discoveries have been made, sometimes making earlier work obsolete, and new notations are constantly introduced to encode new understanding, making earlier notations opaque. At the moving frontiers of knowledge, several mutually incompatible hypotheses were and are simultaneously entertained. All this contributes to making the original literature difficult to read. It is important to situate the literature in its time: what exactly was the particular theoretical understanding then? What was known and what was not? What did the overall model look like? What were the particular questions linguists were worrying about, and what technical vocabulary or notation was used to talk about the phenomena discussed? As part of language, the meaning of words sometimes shifts, and technical vocabulary or notation evolves.

We recommend that the reader first carefully work through the relevant parts of this textbook, and read only general background or foundational literature. Only once a good understanding has been gained of the properties that any linguistic theory will have to account for should a start be made on the original literature.

General resources There are many textbooks and handbooks that can be used to complement the current textbook. We list a selection of general syntax textbooks, some of which are helpful preparation for reading the literature, as they reflect what was broadly understood at the time they were published: Van Riemsdijk and Williams (1986), McCawley (1998), Haegeman (1994), Ouhalla (1994), Culicover (1997), Roberts (1997), Carnie (2002), Adger (2003), Radford (2004), Hornstein, Nunes, and Grohmann (2005).

In this general context, the site http://tgraf.bol.ucla.edu/timeline.html is a useful resource: Thomas Graf attempts to provide an annotated timeline of how new analytical or theoretical ideas were introduced in generative linguistics. In addition, the reader may also want to check out the Blackwell series “Classic Readings in Syntax,” currently in development, as well as the very useful Blackwell Companion to Syntax (Everaert and Riemsdijk, 2006).

For a current assessment of the general results of the generative enterprise, written for a general audience, the collection of articles about core ideas and results in syntax in the Lingua volume edited by Luigi Rizzi (2013) is particularly noteworthy.

For descriptive grammars, or grammars informed by work in the broad type of syntactic framework we have described, we mention the following selection of works, often the result of collaborations among many linguists and comprising multiple volumes or thousands of pages; the list is by no means exhaustive: (Hualde and De Urbina, 2003) for Basque, (Haeseryn and Haeseryn, 1997) for Dutch, (Huddleston and Pullum, 2002) for English, (Renzi, 1988–1995) for Italian, and (Demonte Barreto and Bosque, 1999) for Spanish.

There are other valuable introductory books at various levels and with different objectives. The following are general, fairly systematic introductions to the research program we pursue: Anderson and Lightfoot (2002), Baker (2001), Chomsky (1975), Jackendoff (1995), Lightfoot (1982), Pinker (1994). It is also a good idea to look around the Internet, in particular for Noam Chomsky’s writings.

Finally, we highly recommend the site http://ling.auf.net/lingbuzz maintained by Michal Starke. It is a searchable, openly accessible repository of scholarly papers, discussions, and other documents for linguistics. Current research in its various subfields – most relevantly syntax and semantics – is continually being uploaded by researchers.

2 Morphology: Starting with Words

Our informal characterization defined syntax as the study of the rules or principles that govern how words are put together to form phrases, that is, well-formed sequences of words. The crucial elements in this informal characterization – “words” and “rules or principles” – have common-sense meanings independent of the study of language. We more or less understand what a rule or principle is: it describes a regularity in what happens. For example, “If the temperature drops suddenly, water vapor will condense” is a rule of natural science. This is the notion of rule that we will be interested in. It should be distinguished from the notion of a rule as an instruction or a statement about what should happen, such as “If the light is green, do not cross the street.” As linguists, our primary interest is not in how anyone says you should talk, but in how people really talk. Before considering rules for building phrases and sentences, we will consider the structure of words.

In common usage, “word” refers to some kind of linguistic unit. We have a rough, common-sense idea of what a word is, but it is surprisingly difficult to characterize this precisely; it is not even clear that the notion allows a precise definition. It could be like the notion of “the French language”: there is a central idea to this notion but, as we try to define it, we are led to making arbitrary decisions as to whether something is part of French or not. Fortunately, as we will see later, we may not need a precise version of the notion “word” at all. Nevertheless, these common-sense notions provide a reasonable starting point for our subject. So we will begin with some of the usual ideas about words: objects of the kind that can be more or less isolated in pronunciation, that can be represented by strings of letters separated by blank spaces, and that have meanings.

As we will see, some evidence has been put forth to the effect that words are not the basic units of phrases, not the atomic units of syntax. Instead, the atoms, or “building blocks,” that syntax manipulates would be smaller units, units that we will meet later in this chapter. We will also see that there are reasons to think that the way these units combine is very regular, obeying laws very similar to those that combine larger units of linguistic structure. But we begin by looking at the properties of words as we have informally characterized them, and see where this leads. As mentioned above, the subdomain of linguistics dealing with word properties, particularly word structure, is called morphology. Here we will concentrate on just a few kinds of morphological properties that will turn out to be relevant for syntax. We will briefly introduce the following basic ideas:

Words come in categories.

Words can be made of smaller units (morphemes).

Morphemes combine in a regular, rule-governed fashion:

a. To define the regularities we need the notions of head and selection.
b. The regularities exhibit a certain kind of locality.

Morphemes can be silent.

2.1 Words Come in Categories

The first important observation is that there are different types of words. This is usually stated as the fact that words belong to different categories: nouns, verbs, adjectives, prepositions, adverbs, determiners, complementizers, and others. Some of these are familiar from traditional grammar (like nouns and verbs), others probably less so (like complementizers or determiners).

Open class categories: have a large number of members and new words can be (more or less freely) created in these categories.

Noun (N)

table, computer, event, joy, action

Verb (V)

run, arrive, laugh, know, love, think, say, spray

Adjective (A)

big, yellow, stable, intelligent, legal, fake

Adverb (Adv)

badly, curiously, possibly, often

Closed class categories: have a limited number of members, which can be enumerated. Speakers cannot really create new members in these categories.

Preposition (P)

on, of, by, through, into, from, for, to, with

Continue reading in the full edition!
