Programming language theory

The lowercase Greek letter λ (lambda) is an unofficial symbol of the field of programming language theory. This usage derives from the lambda calculus, a model of computation introduced by Alonzo Church in the 1930s and widely used by programming language researchers. It graces the cover of the classic text Structure and Interpretation of Computer Programs, and the title of the so-called Lambda Papers, written by Gerald Jay Sussman and Guy Steele, the developers of the Scheme programming language.

Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It both depends on and affects mathematics, software engineering, linguistics, and even cognitive science. PLT is a well-recognized and active research area, with results published in numerous journals dedicated to PLT as well as in general computer science and engineering publications.

History

In some ways, the history of programming language theory predates even the development of programming languages themselves. The lambda calculus, developed by Alonzo Church and Stephen Cole Kleene in the 1930s, is considered by some to be the world's first programming language, even though it was intended to model computation rather than being a means for programmers to describe algorithms to a computer system. Many modern functional programming languages have been described as providing a "thin veneer" over the lambda calculus,[1] and many are easily described in terms of it.

The first programming language to be invented was Plankalkül, which was designed by Konrad Zuse in the 1940s but not publicly known until 1972 (and not implemented until 1998). The first widely known and successful high-level programming language was Fortran, developed from 1954 to 1957 by a team of IBM researchers led by John Backus. The success of Fortran led to the formation of a committee of scientists to develop a "universal" computer language; the result of their effort was ALGOL 58. Separately, John McCarthy of MIT developed the Lisp programming language (based on the lambda calculus), the first language with academic origins to be successful. With the success of these initial efforts, programming languages became an active topic of research in the 1960s and beyond.

Some other key events in the history of programming language theory since then:

1950s

1960s

1970s

1980s

1990s

Sub-disciplines and related fields

There are several fields of study which either lie within programming language theory, or which have a profound influence on it; many of these have considerable overlap. In addition, PLT makes use of many other branches of mathematics, including computability theory, category theory, and set theory.

Formal semantics

Formal semantics is the formal specification of the behaviour of computer programs and programming languages. Three common approaches to describe the semantics or "meaning" of a computer program are denotational semantics, operational semantics and axiomatic semantics.
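For example, a small-step operational semantics can be presented as an executable set of reduction rules. The following Haskell sketch is purely illustrative (the toy language and the names Expr, step and eval are not from any particular formalism): it reduces a term one rule application at a time until a value is reached.

    -- Small-step operational semantics for a toy language of integer
    -- literals and addition. `Nothing` means the term is already a value.
    data Expr
      = Lit Int          -- integer literals are the values of the language
      | Add Expr Expr    -- addition of two subterms
      deriving Show

    step :: Expr -> Maybe Expr
    step (Lit _)               = Nothing                          -- values do not reduce
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))               -- perform the addition
    step (Add v@(Lit _) e2)    = Add v <$> step e2                -- reduce the right operand
    step (Add e1 e2)           = (\e1' -> Add e1' e2) <$> step e1 -- reduce the left operand

    -- Evaluation: apply `step` until no rule applies.
    eval :: Expr -> Expr
    eval e = maybe e eval (step e)

    main :: IO ()
    main = print (eval (Add (Lit 1) (Add (Lit 2) (Lit 3))))       -- prints: Lit 6

A denotational semantics for the same toy language would instead map each term directly to the integer it denotes, while an axiomatic semantics would characterize terms by logical assertions about their effects.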

Type theory

Type theory is the study of type systems, which are "a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute".[3] Many programming languages are distinguished by the characteristics of their type systems.
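As a concrete illustration of the quoted definition, a type system can be implemented as a checker that either classifies a phrase by the kind of value it computes or rejects it. The Haskell sketch below uses a hypothetical toy language (the names Ty, typeOf and the constructors are illustrative); it rules out behaviors such as adding a boolean to an integer before the program is ever run.

    -- A type checker for a toy language with integers, booleans,
    -- addition and conditionals.
    data Ty = TInt | TBool deriving (Eq, Show)

    data Expr
      = IntLit Int
      | BoolLit Bool
      | Add Expr Expr        -- well typed only when both operands are integers
      | If Expr Expr Expr    -- condition must be boolean, branches must agree
      deriving Show

    -- Classify a phrase by the kind of value it computes, or reject it.
    typeOf :: Expr -> Either String Ty
    typeOf (IntLit _)  = Right TInt
    typeOf (BoolLit _) = Right TBool
    typeOf (Add a b)   = do
      ta <- typeOf a
      tb <- typeOf b
      if (ta, tb) == (TInt, TInt)
        then Right TInt
        else Left "Add expects two integer operands"
    typeOf (If c t e)  = do
      tc <- typeOf c
      tt <- typeOf t
      te <- typeOf e
      if tc /= TBool
        then Left "If expects a boolean condition"
        else if tt /= te
          then Left "If branches must have the same type"
          else Right tt

    main :: IO ()
    main = do
      print (typeOf (Add (IntLit 1) (IntLit 2)))      -- Right TInt
      print (typeOf (Add (IntLit 1) (BoolLit True)))  -- Left "Add expects two integer operands"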

Program analysis and transformation

Program analysis is the general problem of examining a program and determining key characteristics (such as the absence of classes of program errors). Program transformation is the process of transforming a program in one form (language) to another form.
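One of the simplest examples of a program transformation is constant folding, which rewrites subexpressions whose operands are already known constants. The Haskell sketch below applies it to a toy expression language (all names are illustrative); a realistic compiler performs the same kind of rewrite over a much richer intermediate representation.

    -- Constant folding over a toy expression language: a transformation
    -- from one program form to a simpler, equivalent form.
    data Expr
      = Lit Int
      | Var String
      | Add Expr Expr
      | Mul Expr Expr
      deriving Show

    constFold :: Expr -> Expr
    constFold (Add a b) = case (constFold a, constFold b) of
      (Lit x, Lit y) -> Lit (x + y)       -- both operands known: fold
      (a', b')       -> Add a' b'         -- otherwise keep the (simplified) node
    constFold (Mul a b) = case (constFold a, constFold b) of
      (Lit x, Lit y) -> Lit (x * y)
      (a', b')       -> Mul a' b'
    constFold e = e                       -- literals and variables are unchanged

    main :: IO ()
    main = print (constFold (Add (Var "x") (Mul (Lit 3) (Lit 4))))  -- Add (Var "x") (Lit 12)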

Comparative programming language analysis

Comparative programming language analysis seeks to classify programming languages into different types based on their characteristics; broad categories of programming languages are often known as programming paradigms.

Generics and metaprogramming

Metaprogramming is the generation of higher-order programs which, when executed, produce other programs (possibly in a different language, or in a subset of the original language) as their result.
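A minimal illustration is a program whose result is the source text of another program. The Haskell sketch below is hypothetical (the function genSquares and the generated definition are invented for this example): when run, it emits a small Haskell function with a lookup table unrolled into separate equations, i.e. a program in a subset of the original language.

    -- A metaprogram: running it produces the source code of another program.
    genSquares :: [Int] -> String
    genSquares xs = unlines $
      ["square :: Int -> Int"] ++
      ["square " ++ show i ++ " = " ++ show (i * i) | i <- xs]

    main :: IO ()
    main = putStr (genSquares [0 .. 4])
    -- Output:
    --   square :: Int -> Int
    --   square 0 = 0
    --   square 1 = 1
    --   ...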

Domain-specific languages

Domain-specific languages are languages constructed to efficiently solve problems in a particular problem domain.
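Domain-specific languages are often embedded in a general-purpose host language. The Haskell sketch below shows a tiny hypothetical embedded DSL for one narrow domain, HTML document structure, together with an interpreter that renders it to text (the type Html and the function render are illustrative):

    -- An embedded DSL: values of type Html describe documents, and
    -- `render` interprets them by producing the corresponding markup.
    data Html = Text String | Elem String [Html]

    render :: Html -> String
    render (Text s)        = s
    render (Elem tag kids) = "<" ++ tag ++ ">" ++ concatMap render kids ++ "</" ++ tag ++ ">"

    main :: IO ()
    main = putStrLn (render (Elem "ul" [ Elem "li" [Text "semantics"]
                                       , Elem "li" [Text "type systems"] ]))
    -- prints: <ul><li>semantics</li><li>type systems</li></ul>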

Compiler construction

Compiler theory is the theory of writing compilers (or more generally, translators): programs which translate a program written in one language into another form. The actions of a compiler are traditionally broken up into syntax analysis (scanning and parsing), semantic analysis (determining what a program should do), optimization (improving the performance of a program as indicated by some metric, typically execution speed) and code generation (generation and output of an equivalent program in some target language, often the instruction set of a CPU).
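The Haskell sketch below walks a toy arithmetic language through three of those phases: syntax analysis with a hand-written parser, a constant-folding optimization, and code generation for a made-up two-instruction stack machine. Every name is illustrative; a real compiler is vastly more involved.

    import Data.Char (isDigit)

    data Expr  = Lit Int | Add Expr Expr deriving Show   -- abstract syntax
    data Instr = Push Int | AddOp        deriving Show   -- target "instruction set"

    -- Syntax analysis: parse strings such as "1+2+3" (no error recovery).
    parseExpr :: String -> Expr
    parseExpr s = case break (== '+') s of
      (digits, '+' : rest) -> Add (Lit (read (filter isDigit digits))) (parseExpr rest)
      (digits, _)          -> Lit (read (filter isDigit digits))

    -- Optimization: fold additions whose operands are known constants.
    optimize :: Expr -> Expr
    optimize (Add a b) = case (optimize a, optimize b) of
      (Lit x, Lit y) -> Lit (x + y)
      (a', b')       -> Add a' b'
    optimize e = e

    -- Code generation: emit stack-machine instructions.
    codegen :: Expr -> [Instr]
    codegen (Lit n)   = [Push n]
    codegen (Add a b) = codegen a ++ codegen b ++ [AddOp]

    main :: IO ()
    main = do
      let ast = parseExpr "1+2+3"
      print (codegen ast)             -- [Push 1,Push 2,Push 3,AddOp,AddOp]
      print (codegen (optimize ast))  -- [Push 6]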

Run-time systems

This area concerns the development of programming language runtime environments and their components, including virtual machines, garbage collection, and foreign function interfaces.
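One small component of a run-time system is the interpreter loop of a virtual machine. The Haskell sketch below executes the made-up two-instruction stack code from the compiler sketch above; it is illustrative only, and real virtual machines additionally manage memory, threads, and interfaces to foreign code.

    -- A toy stack-machine interpreter: the "run time" for the instruction
    -- set emitted by the compiler sketch in the previous section.
    data Instr = Push Int | AddOp deriving Show

    run :: [Instr] -> [Int] -> [Int]
    run []              stack           = stack                    -- program finished
    run (Push n : rest) stack           = run rest (n : stack)     -- push a constant
    run (AddOp  : rest) (a : b : stack) = run rest (a + b : stack) -- pop two, push the sum
    run (AddOp  : _)    _               = error "stack underflow"

    main :: IO ()
    main = print (run [Push 1, Push 2, Push 3, AddOp, AddOp] [])   -- [6]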

Journals, publications, and conferences

Conferences are the primary venue for presenting research in programming languages. The best-known conferences include the Symposium on Principles of Programming Languages (POPL), the Conference on Programming Language Design and Implementation (PLDI), the International Conference on Functional Programming (ICFP), and the International Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA).

Notable journals that publish PLT research include the ACM Transactions on Programming Languages and Systems (TOPLAS), Journal of Functional Programming (JFP), Journal of Functional and Logic Programming, and Higher-Order and Symbolic Computation.

References

  1. ^ http://www.c2.com/cgi/wiki?ModelsOfComputation
  2. ^ C. Böhm and W. Gross (1966). Introduction to the CUCH. In E. R. Caianiello (ed.), Automata Theory, pp. 35-64.
  3. ^ Benjamin C. Pierce (2002). Types and Programming Languages. MIT Press, Cambridge, Massachusetts, USA.
