In this talk I will explain how M-theory compactified on a manifold of G2 holonomy can lead physicists to robust, testable experimental predictions, such as superpartner properties at the CERN LHC collider, cosmological history, dark matter, and Higgs bosons, even though the global and other properties of G2 manifolds are not fully known. The constructed M-theory vacuum is a metastable de Sitter one, about which our phenomenological knowledge can be surprisingly complete. Remarkably, some major results generalize to generic string theories (e.g. heterotic strings compactified on Calabi-Yau manifolds, and others). The cosmological constant problem(s) are evaded, though not solved.

The success of mathematics in its application to musical acoustics and psycho-acoustics has so far no match in the investigation of other levels of musical reality. As a counterpart to this imbalance, one must concede that the ontological foundations of music remain elusive. Nonetheless, many attempts have been made to understand, in mathematical terms, the prominent role of the major and minor triads as constituents of harmonic tonality. Philosophers such as Johann Friedrich Herbart or Rudolf Carnap were challenged to use the triads in order to explicate central concepts of their mathematically inspired theories. In the Helmholtzian tradition of psycho-acoustics, the consonant triads (major and minor) correspond to local minima of sensory dissonance surfaces. It is a widespread, but seldom examined, belief that the consonance of these chords explains their utilization in music, yet nobody has managed to furnish a convincing theory of tonal music on these grounds. In his article "Neo-Riemannian Operations, Parsimonious Trichords, and their Tonnetz-Representations," Richard Cohn therefore coins the term "the overdetermined triad" in order to highlight an epistemological puzzlement resulting from the fact that the major and minor triads can be distinguished along quite different criteria. Clifton Callender, Ian Quinn and Dmitri Tymoczko translate Cohn's observation into the geometric fact that the orbit of the major and minor triads under voice permutation, octave identification, translation and inversion occupies a position close to a singularity on a voice-leading orbifold. In my talk I give an overview of several mathematical approaches to the music-theoretical understanding of the triads. The mathematical tools include group actions, monoid actions, and concepts from combinatorics on words.
Despite a cautious evaluation of the hitherto existing results in mathematical music theory, the synopsis gives reason to speak about the epistemological motivation of this work. In what way does mathematical knowledge contribute to a proper extension of music-theoretical knowledge?

A building is a structure relevant to geometry and group combinatorics. The notion was invented by J. Tits in the early 1960s in order to provide a uniform construction of the simple groups of Lie type over arbitrary fields. It was then exploited in order to better understand symmetric spaces and their non-archimedean analogues. Considering automorphisms of exotic buildings leads today to new group-theoretic situations (e.g. constructions of finitely generated simple groups, of equivariant compactifications, etc.).

Invariant distributions are important objects in the smooth ergodic theory of weakly chaotic flows and maps. They appear naturally as obstructions to solving cohomological equations. Cohomological equations are crucial in the study of several kinds of questions, from conjugacy problems to the asymptotics of ergodic integrals. We will discuss some basic examples and some open problems.

I shall explain the CAT(0) condition, Gromov's easy way to check it for a cube complex, and discuss some applications.

We will define quantum entanglement and give some examples. We will also show how one can use invariant theory to measure entanglement.
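A minimal concrete instance of such an invariant (my own toy illustration; the talk's examples may differ): for a pure state of two qubits, the determinant of the 2x2 coefficient matrix is invariant under local basis changes of determinant one, and twice its absolute value (the concurrence) vanishes exactly on unentangled product states.

```python
# Toy illustration: for a two-qubit pure state psi = sum_ij c[i][j] |ij>,
# the concurrence 2*|det c| is a local invariant that vanishes exactly
# on unentangled (product) states.
def concurrence(c):
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    return 2 * abs(det)

product_state = [[1, 0], [0, 0]]            # |00>, unentangled
bell_state = [[2**-0.5, 0], [0, 2**-0.5]]   # (|00> + |11>)/sqrt(2)

print(concurrence(product_state))  # 0
print(concurrence(bell_state))     # 1 (maximally entangled), up to rounding
```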

Most finite simple groups are groups of Lie type, i.e., versions of the well-known simple Lie groups written over finite fields. Besides the infinite families of alternating groups and groups of Lie type, there are 26 sporadic groups, which do not naturally belong to infinite families. They were discovered a few at a time (from the 1860s to the 1970s). Their existence seems to depend on exceptional phenomena in group theory, number theory and combinatorics. There is no 'axiomatic theory' for these groups, such as the beautiful and useful theory of BN pairs for groups of Lie type. The largest sporadic group is the Monster, whose subquotients involve 20 of the 26 sporadics. These 20 sporadics constitute the Happy Family, and the six others are the Pariahs. The order of the Monster is a 54-digit number, and the smallest degree of a complex matrix representation is 196883. In this introductory, expository talk, we shall sketch the three generations of the Happy Family: Generation (I): the five Mathieu groups. Generation (II): the Conway groups (they involve twelve sporadics, including (I)). Generation (III): the Monster (involving the twelve in (II) plus eight more). (I) involves the combinatorics of the binary Golay code (19th century to late 1940s). (II) involves the Leech lattice, built up from (I) (late 1960s). (III) involves a commutative, nonassociative algebra B with unit, dim(B)=196884, built up from (II) (about 1980).
The algebra B was custom-built in a sense, and was not part of traditional studies in nonassociative algebras. The Monster unifies several themes in finite group theory. Its existence implies existence for some other sporadic groups which had been constructed by special methods, including computer work. As time permits, we may mention some examples of Moonshine, meaning relations of sporadic group phenomena to phenomena in other areas of mathematics. Explaining such amazing connections remains a challenge. PS. A conference photo from the first Moonshine conference (Montreal, 1982) is in my picture gallery: http://www.math.lsa.umich.edu/~rlg/picturegallery/montreal82photonames.html
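For concreteness, the 54-digit claim is easy to verify from the Monster's well-known prime factorization:

```python
# Order of the Monster from its known prime factorization;
# checking that it is indeed a 54-digit number.
monster_order = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
                 * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)
print(len(str(monster_order)))  # 54
```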

Young tableaux are classical combinatorial objects that first arose in the study of the representation theory of the symmetric group. I will discuss some of the remarkable enumerative properties of Young tableaux, and mention some modern connections to other mathematics.
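One such enumerative property, for readers who want something concrete: the hook length formula counts the standard Young tableaux of a given shape.

```python
from math import factorial

# Hook length formula: the number of standard Young tableaux of a
# partition shape with n boxes is n! divided by the product of the
# hook lengths of its boxes (arm + leg + 1 for each box).
def num_syt(shape):
    n = sum(shape)
    hooks = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm = row_len - j - 1
            leg = sum(1 for k in range(i + 1, len(shape)) if shape[k] > j)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

print(num_syt((2, 1)), num_syt((3, 2)))  # 2 5
```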

The associahedron is a convex polytope whose vertices are labeled by triangulations of a convex polygon by its diagonals. I will discuss alternative definitions of the associahedra, their enumerative properties, geometric realizations, and generalizations associated with finite root systems.
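For instance, the vertices of the associahedron built from a convex (n+2)-gon are counted by the Catalan numbers:

```python
from math import comb

# The triangulations of a convex (n+2)-gon, i.e. the vertices of the
# corresponding associahedron, are counted by the Catalan number
# C_n = (1/(n+1)) * binomial(2n, n).
def catalan(n):
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```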

The physics and the mathematics of Landau damping will be discussed. The talk will be very elementary.

Our mathematical tradition goes back to Euclid who was one of the first professors of the Museum of Alexandria. The Museum, which contained the Great Library, was the world's first University. I shall illustrate Euclid's landmark contributions by discussing three examples: his proof of the theorem that the set of prime numbers is infinite, his theory of proportions, which is in essense the theory of real numbers and goes back to Eudoxus, and his axiomatic foundation of plane geometry. I shall then move on to Euclid's most famous student, Archimedes, perhaps the greatest mathematical genius in history. I shall illustrate the power of his imagination by going through his proof of the theorem giving the area of a spherical segment, a theorem which he proved in his youth while he was still at Alexandria. I shall finally discuss his most impressive contribution to science, his hydrostatics, which constitutes, with his mechanics, the foundational work of physics as a science, and is also the most advanced mathematical work of antiquity.
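Euclid's proof of the infinitude of primes is short enough to run as a computation: given any finite list of primes, the number N = (their product) + 1 leaves remainder 1 on division by each of them, so the smallest prime factor of N is a prime outside the list.

```python
from math import prod

# Euclid's argument, executable: N = p1*...*pk + 1 is divisible by
# none of the given primes, so its smallest prime factor is new.
def prime_outside(primes):
    N = prod(primes) + 1
    d = 2
    while N % d != 0:
        d += 1
    return d

print(prime_outside([2, 3, 5]))  # 31
```

Note that N itself need not be prime: for [2, 3, 5, 7, 11, 13], N = 30031 = 59 * 509, and the function returns the new prime 59.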

An orbifold is a space with finite quotient singularities. Traditionally, it was viewed as a mild generalization of a smooth manifold. The first hint of extra structure came from Thurston's construction of the orbifold fundamental group. Motivated by physics, a great deal of new ideas and results have been obtained in the last ten years. In the talk, I will explain these new ideas.

Over 50 years ago, Alan Hodgkin and Andrew Huxley developed a mathematical description for the electrical activity of neurons. This work won the Nobel Prize and forms the basis of computational neuroscience. I will review their work, simplifications of this model, open mathematical questions, and how this led me to spend recent summers playing with squid.

How to extract a trend from highly nonlinear and nonstationary data is an important problem with many practical applications, ranging from biomedical signal analysis to econometrics, finance, and geophysical fluid dynamics. We review some existing methodologies for defining trend in data analysis. Many of these methods use a pre-determined basis and are not completely adaptive; they tend to introduce artificial harmonics in the decomposition of the data. Various attempts to preserve the temporal locality of the data introduce problems of their own. Here we discuss how adaptive data analysis can be formulated as a nonlinear optimization problem in which we look for a sparse representation of the data in some unknown basis derived from the physical data. We will show that this formulation has some beautiful mathematical structure and can be considered a nonlinear version of compressed sensing.

Mathematical music theory, very broadly speaking, aims to interpret works of music using mathematical concepts. I will describe some recent work in this rapidly growing field. The tools of the mathematical music theorist range from the elementary (arithmetic mod 12) to the sophisticated (topos theory). Mathematical music theorists also study musical objects, for example the major triad and major scale can be uniquely characterized by their group-theoretic and number theoretic properties. Topics will probably include Cohn's characterization of the triad, Noll's characterization of the Ionian mode, and perhaps also the musical tilings of Amiot, Agon, Andreatta, Fripertinger and others.
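A tiny sample of the "arithmetic mod 12" end of the toolbox (my illustration, not necessarily one used in the talk): the 24 consonant triads form a single orbit of the C major triad under transposition and inversion.

```python
# Pitch classes are integers mod 12; the C major triad is {0, 4, 7}.
# Its orbit under the operations T_n(x) = x + n and I_n(x) = n - x
# consists of exactly the 12 major and 12 minor triads.
major = frozenset({0, 4, 7})
orbit = {frozenset((x + n) % 12 for x in major) for n in range(12)}
orbit |= {frozenset((n - x) % 12 for x in major) for n in range(12)}
print(len(orbit))  # 24
```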

The Schwarzian derivative of an analytic locally univalent function f is defined classically by Sf = (f''/f')' - 1/2 (f''/f')^2. After discussing its surprisingly elegant properties, we'll give an overview of some historical applications, then show why the Schwarzian occurs naturally in univalence criteria for analytic functions. We'll conclude with some recent generalizations: the Ahlfors Schwarzian of a curve in R^n, and the Schwarzian of a planar harmonic mapping, defined through its canonical lift to a minimal surface.
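Two of the classical properties alluded to above, for orientation: the Schwarzian vanishes identically precisely on the Möbius transformations, and it transforms under composition by a cocycle rule,

```latex
S\!\left(\frac{az+b}{cz+d}\right) = 0 \quad (ad-bc \neq 0),
\qquad
S(f\circ g) = \bigl((Sf)\circ g\bigr)\,(g')^{2} + Sg .
```

Together these say that Sf measures the deviation of f from being a Möbius transformation.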

Starting with the history of mathematical publishing at Springer, the talk will give an overview of the competitive market place and the transition from print to electronic in STM.

A train track on a surface associated with a surface homeomorphism is a finite graph embedded in the surface that one would naturally discover while iterating a simple closed curve under the homeomorphism. I will go through this process of discovery and then try to explain what train tracks are good for. If there is time I will talk about train tracks for automorphisms of free groups as well.

(One-dimensional) complex dynamics deals with the dynamics of a rational function on the Riemann sphere under iteration. In my talk I will define some basic concepts such as the Julia and the Fatou set of a rational function and give an overview of important results in the field.
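A standard computational handle on these sets (a generic escape-time test, not specific to this talk): for the quadratic family z^2 + c, an orbit that ever leaves the disk of radius 2 must escape to infinity, so its starting point lies outside the filled Julia set.

```python
# Escape-time test for the filled Julia set of z -> z^2 + c:
# once |z| > 2 the orbit is guaranteed to tend to infinity.
def escapes(z, c, max_iter=200):
    for _ in range(max_iter):
        if abs(z) > 2:
            return True
        z = z * z + c
    return False

print(escapes(0j, 1 + 0j))   # True:  0 -> 1 -> 2 -> 5 -> ...
print(escapes(0j, -1 + 0j))  # False: 0 -> -1 -> 0 -> -1 (periodic)
```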

This talk will explain, in plain language, how the lower bound of the K-energy is related to the existence of Kähler-Einstein (KE) metrics, etc.

D-module theory, a.k.a. algebraic analysis, concerns the use of algebraic and geometric techniques in the study of linear PDE's. The theory became very popular due to connections to various areas, in particular to representation theory and singularities. The talk will discuss how D-modules arise, and what they are good for.

In the 60's, I. M. Gelfand raised the question as to whether there was an explicit combinatorial formula for the signature of a simplicial manifold. James Simons tried to solve the problem for a 4-manifold by trying to integrate the curvature integral expression for the first Pontryagin class over the simplicial manifold. The idea failed, because Simons had hoped to integrate by parts down to a sum of contributions on the 0-dimensional skeleton, and there were other terms that wouldn't go away. These terms were the first examples of secondary characteristic classes. Together with Chern, he developed this into a theory which gives a kind of first Pontryagin class of a three-manifold. This invariant, the Chern-Simons invariant, has since blossomed in many directions, including relations with index theory, K-theory, and its widespread use in string theory, as well as versions in complex geometry (CR secondary classes, renormalized characteristic classes). We will describe the basics, and give a sampler of some of the later developments.

A tetrahedral complex is the set of lines in projective 3-dimensional space P^3 which intersect the four coordinate planes at four points with a fixed cross ratio. From a modern point of view it is the closure of a torus orbit in the Grassmannian of lines in P^3. The study of tetrahedral complexes by S. Lie in 1870 led him to the development of the theory of Lie groups as symmetry groups of differential equations. Their generalization to other Grassmannians in the works of I. Gelfand and R. MacPherson led to the modern theory of M. Kapranov and L. Lafforgue of compactifications of configuration spaces of linear subspaces. In my talk I will try to give a historical and a modern account of the theory of tetrahedral complexes.

In 1979 Charles Fefferman introduced a measure on (strongly pseudoconvex) real hypersurfaces in C^n; his construction is a natural extension of Blaschke's (equi-)affine surface area. After defining Fefferman's measure and explaining its invariance properties and a few of its uses, I will focus on an isoperimetric problem naturally associated to Fefferman's construction. This problem will be compared to the corresponding euclidean and Blaschke isoperimetric problems. To keep the formulas simple, I will focus mostly on (real and complex) dimension two.

From the point of view of modern convex geometry, every convex body should look like a round ball. The precise meaning of this varies. For example, one can make every convex body approximately round after applying some natural operation, such as removing a small volume, bisecting with a subspace, or projecting onto a subspace. We will review old and recent results in asymptotic convex geometry about the Euclidean structure associated with convex sets.

Shimura varieties are initially defined as complex manifolds (quotients of hermitian symmetric domains by congruence subgroups) but they are known to arise from algebraic varieties defined in a natural way over number fields. The simplest examples are the elliptic modular curves (quotients of the complex upper half plane by congruence subgroups of SL(2,Z)). In the talk, I'll explain the last two sentences, and I'll also discuss why Langlands was so interested in Shimura varieties.

We discuss the concepts of domain of holomorphy and pseudoconvexity both in the contexts of one and several complex variables. Many examples will be given.

The real question, of course, is why we care so much about the Riemann Hypothesis. The short answer is that it is because it is a central question that has many consequences. We shall explore some of these connections.

Signalizer functors were introduced by Danny Gorenstein as a tool in the study of finite simple groups, extrapolating from ideas of John Thompson. They played a central role in the classification of finite simple groups and have also been used in the study of infinite groups of finite Morley rank. The Signalizer Functor Theorem of Goldschmidt and Glauberman is one of the most beautiful theorems in finite group theory.

Forcing is the most powerful known method for proving consistency and independence results in set theory, i.e., for proving that certain statements cannot be proved on the basis of the usual foundation of mathematics. The forcing method works by very carefully enlarging the universe of sets. Part of the talk will be about the conceptual issue of how one could introduce new sets into a universe that already contains all sets. Another part will describe some of the technical issues that arise. Finally, if time permits, I'll indicate a few of the applications of forcing.

I will answer the question: What is the algebraic fundamental group? This group is like the topological fundamental group, but we replace unramified covering spaces with finite etale covers. It should be noted that we cannot simply replace based loops with algebraic curves. I will give examples of these groups and some uses for them.

In 1734 Euler introduced the constant now bearing his name, and computed it to be approximately .577218. But what is the "meaning" of this constant? This talk will review Euler's work related to zeta values and "renormalization," which is topical since these numbers show up in various quantum field theory calculations. It will then describe various places Euler's constant and harmonic numbers H_n show up in number theory, especially in relation to the Riemann hypothesis.
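Euler's constant is the limit of H_n - log n; a quick numerical check (not part of the talk itself) already shows the slow, 1/(2n)-rate convergence:

```python
from math import log

# Euler's constant gamma = lim (H_n - log n); the error decays
# like 1/(2n), so n = 10**6 gives about six correct digits.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

n = 10**6
approx = harmonic(n) - log(n)
print(round(approx, 5))  # 0.57722
```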

Perhaps contrary to popular belief, supersymmetry is a notion of mathematics and not physics. While supersymmetry is used in nuclear physics to simplify certain calculations, no known experimental system exhibits fundamental supersymmetry. On the other hand, ``super'' aspects of geometry have rigorous mathematical treatments. In this talk, I will outline the relevant definitions, and will give some examples. In particular, I will talk about supersymmetry between translations and spinors, which leads to physical speculations of supersymmetry between bosons and fermions (which is not the same thing as the boson-fermion correspondence). I also hope to talk about N-super-Riemann surfaces which are used in conformal field theory, about the N-superconformal algebra, and perhaps even more specifically about the N=2 case, and precise mathematical definitions corresponding to notions such as ``A-models'' and ``B-models''.

In this expository talk I will demonstrate the prominence of martingale theory in mathematical finance. The talk is based on the seminal paper by Harrison and Pliska (1981) and an exposition of it in the book by Lamberton and Lapeyre (1996). (I will prove the theorem in the discrete time setting.)
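A one-period binomial sketch of the martingale idea (my own toy numbers, far short of the Harrison-Pliska generality): the risk-neutral probability q makes the discounted stock price a martingale, and arbitrage-free prices are discounted q-expectations of payoffs.

```python
# One-period binomial model: stock S0, up/down factors u, d, rate r.
# No arbitrage requires d < 1 + r < u; then q below is the unique
# probability under which the discounted stock price is a martingale.
S0, u, d, r, K = 100.0, 1.2, 0.8, 0.05, 100.0
q = (1 + r - d) / (u - d)

# Martingale check: discounted expected stock price equals S0.
assert abs((q * S0 * u + (1 - q) * S0 * d) / (1 + r) - S0) < 1e-9

# Price of a call with strike K as a discounted q-expectation.
call = (q * max(S0 * u - K, 0) + (1 - q) * max(S0 * d - K, 0)) / (1 + r)
print(round(call, 4))  # 11.9048
```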

I will answer the question "What is G_2?" in many different and correct ways, providing a user's manual to this exceptional group. I will approach the question from numerous points of view, including Hurwitz algebras and octonions, root data, and Chevalley groups. The only prerequisites for this lecture will be basic group theory and linear algebra. Participants will receive commemorative handouts.

Control theory is concerned with the evolution of a dynamical system with parameters (the controls), which are to be optimized to minimize a cost function. The cost function is a function of the present state x of the dynamical system at the present time t, and its evolution up to some terminal time T. The dynamics may be deterministic or stochastic. In the case of deterministic dynamics, the optimal cost function C(x,t) satisfies a first order partial differential equation. In the case of stochastic dynamics it satisfies a second order parabolic PDE. In this seminar I will explain through simple examples how this comes about. I will also explain some connections with the Calculus of Variations and Hamiltonian Mechanics.
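In common notation (my choice of symbols: dynamics x' = f(x,u), running cost l(x,u), terminal cost g; the talk may normalize differently), the first-order equation in the deterministic case is the Hamilton-Jacobi-Bellman equation

```latex
\partial_t C(x,t) + \min_{u}\Bigl\{ f(x,u)\cdot\nabla_x C(x,t) + \ell(x,u) \Bigr\} = 0,
\qquad C(x,T) = g(x).
```

In the stochastic case the minimized expression acquires a second-order diffusion term, which makes the equation parabolic.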

In a probabilistically-checkable proof system, a verifier checks a small number of randomly-chosen bits in a "proof" and, with high probability, determines properly whether the "proof" is valid or not. We survey connections and applications to error-correcting codes, to inapproximability (there are functions that cannot be computed or even approximated efficiently), and to zero-knowledge (Alice convinces Bob that a theorem is true without giving Bob any help in proving that theorem). We consider cultural applications to the proofs of Tartaglia, Appel and Haken, and Auburn.

The homology of a compact closed n-manifold X satisfies Poincare duality: the intersection pairing between degree i and degree n-i homology is a perfect pairing over a field. When X has singularities, Poincare duality may fail to hold. Nonetheless, in the 1980's Goresky and MacPherson defined a topological invariant, the intersection homology, of a space X which satisfies Poincare duality even if X is singular; for a smooth space X, intersection homology agrees with ordinary homology. Intersection homology crops up in many places, from analysis to representation theory. In this talk I will give an informal introduction to intersection homology and some of its applications.

Nearly all flows in nature and in technology, with exceptions such as flows in tiny capillaries, are turbulent. In the 19th century, Reynolds was intrigued that so dramatic a phenomenon revealed nothing to the eye without the use of tracers. Since Reynolds, we have learnt that the incompressible Navier-Stokes equation with the right boundary conditions can capture all the features of turbulent velocity fields, including the law of the wall, coherent motions, intermittency, and energy spectra. This talk will outline the computational and experimental investigations that have brought us to that understanding.

Grothendieck introduced the notion of a "motif'' in a letter to Serre in 1964. Later he wrote that, among the objects he had been privileged to discover, they were the most charged with mystery and formed perhaps the most powerful instrument of discovery. In this talk, I shall explain what motives are, and why Grothendieck valued them so highly.

Although it is widely known that the finite simple groups have been classified, it is not so widely known that this classification has many applications to other areas of math. I will present several such applications, illustrating a happy situation where interesting results in other areas are proved by combining this classification with new types of group theoretic results of independent interest.

The Teichmuller space of all (marked) hyperbolic surfaces homeomorphic to a fixed closed surface naturally arises in various areas of mathematics. For example, it is the universal cover, from the orbifold viewpoint, of the much-studied Moduli space. We will discuss a 3-dimensional analogue of Teichmuller space, the space AH(M) of all marked hyperbolic 3-manifolds homotopy equivalent to a fixed compact 3-manifold M (usually with non-empty boundary). While Teichmuller space is homeomorphic to an open ball, the topology of AH(M) tends to be quite complicated. Bumponomics is the study of the topology of AH(M).

Iwasawa theory refers to a circle of ideas introduced by Iwasawa to study various objects of arithmetic interest (such as class groups, Galois groups, L-values etc) by putting them in families. This talk will be an introduction to some ideas in Iwasawa theory, starting from a historical point of view and leading up to more recent developments.

How different are two convex bodies? To compare them, we want to be able to put one on top of another. Frequently, we also want to view a convex body independently of the coordinate structure of the ambient space. This means that our measure of the difference should be invariant under shifts and linear transformations. We consider a notion of distance, which naturally arises in asymptotic geometric analysis, and discuss how to evaluate distances between high-dimensional convex bodies.

K3 surfaces are a recurring theme in modern complex geometry. We discuss their origins in classical algebraic geometry, the 20th century structural results that made them a prime example of Hodge theory, and open problems of current interest.

I will review the moduli theory of Higgs bundles on a curve, as introduced in seminal papers of Hitchin and Simpson from the 1980's, and the correspondence theorem relating them to flat GL(n,C) connections. I will describe Hitchin's completely integrable Hamiltonian systems on these moduli spaces, and explain how, according to the recent work of Kapustin-Witten, they exhibit a physical duality which can be regarded as mirror symmetry or the electric-magnetic duality of Montonen-Olive.

An informal discussion of geometric gluing/surgery, a method to construct solutions of geometric problems in many contexts from simpler, but singular, given solutions. It is closely related to structures at the boundaries of moduli spaces.

A contingency table is just a non-negative integer matrix with prescribed row and column sums. In the talk, I'll try to explain why people think that such matrices deserve a special name, and discuss a variety of combinatorial, probabilistic and algorithmic questions about such matrices, many yet to be answered satisfactorily. In particular, how many non-negative integer matrices with prescribed row and column sums are there? If not exactly, then approximately? Asymptotically?
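The exact count can at least be brute-forced for tiny margins, which is exactly why the approximate and asymptotic questions are the interesting ones (a toy illustration):

```python
from itertools import product

# Brute-force count of contingency tables: nonnegative integer
# matrices with prescribed row and column sums. Each entry is at
# most its row sum, so entries range over 0..max(row_sums).
def count_tables(row_sums, col_sums):
    rows, cols = len(row_sums), len(col_sums)
    count = 0
    for flat in product(range(max(row_sums) + 1), repeat=rows * cols):
        m = [flat[i * cols:(i + 1) * cols] for i in range(rows)]
        if all(sum(m[i]) == row_sums[i] for i in range(rows)) and \
           all(sum(m[i][j] for i in range(rows)) == col_sums[j]
               for j in range(cols)):
            count += 1
    return count

print(count_tables([2, 2], [2, 2]))  # 3
```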

If we consider a Hermitian matrix whose entries are randomly chosen, its eigenvalues are random points on the real line. What do these random points look like? Such questions were first asked (and answered) in physics, where various random matrix models arise naturally. More curiously, there are also mathematical problems which apparently have no matrix structure, but nevertheless behave like the eigenvalues of a random matrix. Examples include the zeros of the Riemann zeta function, the longest increasing subsequences of random permutations, and the configurations of random tilings. The same behavior has even been observed in the bus arrival times at a bus stop in the city of Cuernavaca in Mexico. We will discuss some of these universal features of random matrices.

I will discuss some constructions and results from holomorphic dynamics, focusing on iteration problems in several complex variables.

We explain how representations of Galois groups naturally arise in a variety of ways in number theory, and how they can be used to study interesting concepts whose definition does not involve Galois groups.

The short answer to the question is that a splitting of a group G is an expression of G as an amalgamated free product or an HNN extension. (It will not be assumed that the audience knows what these are.) The 50-minute answer will discuss the history, definitions, and a few applications of these ideas.

Everyone knows that our genetic blueprint is carried by molecules of DNA in the nucleus of every cell, where the blueprint is encoded in the succession of constituent bases. There are about 3 billion such bases, and the corresponding double helix is about a meter in length. Mechanical, geometric and topological features of these molecules come into play in the packing of DNA into the nucleus, and in the subsequent regulation of its use in the normal functioning of the cell. There are interesting ways to extract information about these features from the sequence of constituent bases, as well as analogues at other scales; these are related to development and cellular differentiation. We will discuss some example models currently in use, at two different scales of organization of DNA, related to gene transcription and to chromatin structure and organization.

Norman Zabusky coined the word "soliton" in 1965 to describe a curious feature he and Martin Kruskal observed in their numerical simulations of the initial-value problem for a simple nonlinear partial differential equation. This talk will describe several of the aspects of solitons that have become important in pure and applied mathematics since their accidental discovery 40 years ago in a (by today's standards) primitive numerical experiment. In particular, a soliton is at once (i) a particular solution of one of many special "integrable" nonlinear partial differential equations, (ii) an eigenvalue of a linear operator, and (iii) a robust coherent structure with particle-like properties.
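The equation in Zabusky and Kruskal's experiment was the Korteweg-de Vries (KdV) equation; in one common normalization (conventions vary), it and its one-soliton solution read

```latex
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^{2}\!\Bigl(\frac{\sqrt{c}}{2}\,(x - ct - x_0)\Bigr).
```

Note the particle-like signature already visible in the formula: the speed c is proportional to the amplitude c/2, so taller solitons travel faster.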

Compressed sensing is a new method for first acquiring and compressing data (e.g., functions, vectors, signals, or images) and then extracting relevant information about the data. From a mathematical perspective, we multiply the data (a column vector) by a matrix with considerably fewer rows than columns, and call this shorter vector the measurement vector, or sketch, of the signal. Although the sketch is much smaller than the original signal, if the matrix is chosen carefully, we can still extract plenty of useful information from the signal. I will discuss mathematical, algorithmic, and engineering constructions of carefully chosen measurement matrices, reconstruction algorithms, and physical devices to produce such sketches.
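A toy sketch-and-recover experiment (my own illustration; real compressed sensing uses far fewer measurements, on the order of k log(n/k) for k-sparse signals, and cleverer recovery algorithms): sketch a 1-sparse signal with a random Gaussian matrix, then locate the nonzero entry by correlating the sketch against the columns.

```python
import random
from math import sqrt

random.seed(0)
n, m = 50, 36                        # signal length, sketch length
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]

x = [0.0] * n
x[17] = 3.0                          # unknown 1-sparse signal

# The sketch y = A x is all the decoder gets to see.
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

def column_score(j):
    # Normalized correlation of column j of A with the sketch.
    col = [A[i][j] for i in range(m)]
    dot = sum(col[i] * y[i] for i in range(m))
    return abs(dot) / sqrt(sum(c * c for c in col))

recovered = max(range(n), key=column_score)
print(recovered)
```

With these dimensions the correct column wins by a wide margin with overwhelming probability, so the support of x is read off from a sketch less than three-quarters the signal's length.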

A discrete finitely generated group carries a natural equivalence class of metrics. Amazingly, the metric structure by itself is enough to carry out a bit of harmonic analysis. The notion of "property A" for metric spaces was invented about ten years ago and is natural in this context. I'll explain what it is, give some examples, and show how it can be used.

A quiver is just a directed graph. We get a representation of that quiver by attaching vector spaces to vertices and linear maps to arrows. Quivers form a natural context for studying linear algebra problems, and modules of finite dimensional associative algebras. This will be explained in this introductory talk.

With every vector bundle over a manifold M one can associate certain de Rham cohomology classes on M, called characteristic classes of the vector bundle. They measure how the vector bundle is "twisted". In my talk I will review some basic definitions and discuss some applications of characteristic classes.

Mumford and Shah's variational model for image segmentation is one of the best known and most influential mathematical models in image processing and computer vision. It poses image segmentation (which means partitioning a given image into regions containing distinct objects) as an optimization problem. It has been adapted to many other applications since its inception, both inside and outside image processing, and its analysis and computation have motivated a great deal of interesting mathematics. I will describe some of these developments.
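For reference, the functional in question, in one common normalization (the weights and notation vary by author): with g the observed image on a domain Omega, u a piecewise-smooth approximation, and K the edge set, one minimizes

```latex
E(u, K) \;=\; \int_{\Omega\setminus K} |\nabla u|^{2}\,dx
\;+\; \lambda \int_{\Omega} (u - g)^{2}\,dx
\;+\; \mu\,\mathcal{H}^{1}(K),
```

where H^1 denotes one-dimensional Hausdorff measure (length), so the three terms reward smoothness off the edges, fidelity to the data, and short edge sets, respectively.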

In this talk I will survey problems and results about the relationship between the spectrum of the Laplace-Beltrami operator on a compact Riemannian manifold and the geometry of the manifold.

In this introductory talk, I will describe a central problem that insurance addresses, namely the pooling of risks. I will demonstrate the pooling of risks with a simple model: whole life insurance with a fixed interest rate (but with a random time of death). I will begin by finding the single premium that an insurer should charge so that the probability of losing money on a single contract is no greater than a given number. Then, I will find the single premium that an insurer should charge (per contract) so that the probability of losing money on n i.i.d. contracts is no greater than a given number. This premium decreases with n and approaches the "break-even-on-average" premium as n approaches infinity. If time permits, I will repeat this exercise to determine the corresponding periodic premium payable until the buyer of insurance dies. The only background that I assume of the attendee is the equivalent of Math 425.
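A normal-approximation sketch of the pooling effect (hypothetical numbers, and a cruder model than the whole-life one in the talk): if one contract's discounted benefit has mean mu and standard deviation sigma, the per-contract premium keeping the loss probability at 5% shrinks toward mu like 1/sqrt(n).

```python
from math import sqrt

# Hypothetical per-contract benefit statistics and the 95th
# percentile of the standard normal (normal approximation).
mu, sigma, z95 = 40.0, 20.0, 1.645

def premium(n):
    # Per-contract premium so that P(total loss on n contracts) <= 5%.
    return mu + z95 * sigma / sqrt(n)

# Premiums for n = 1, 100, 10**6: pooling drives them toward mu.
print(round(premium(1), 2), round(premium(100), 2), round(premium(10**6), 2))
```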

The Navier-Stokes equations are a set of nonlinear partial differential equations that are generally believed to describe fluid flows. They are routinely used in scientific modeling and engineering design applications but it is still an open question---one with a $1M Clay Prize attached to it---whether or not solutions can develop singularities. In this talk we will (a) review the physical foundations of the Navier-Stokes equations and the importance of this mathematical question for fundamental physics and numerical analysis, (b) discuss the physical basis of the mathematical difficulties, and (c) describe some aspects of the current state of knowledge.

Tight closure is a technique that uses characteristic p > 0 methods to prove theorems both in positive characteristic and over the complex numbers. Many theorems that are susceptible to this approach were first proved by analytic techniques. Results obtained using tight closure include theorems on the properties of rings of invariants of groups of matrices, somewhat mysterious results related to the integral closure of an ideal (Briancon-Skoda theorems), the behavior of symbolic powers in regular local rings, progress on a family of problems known as "the local homological conjectures", and the behavior of special classes of singularities. Typically, when tight closure provides an answer to a question, it also gives a result that is far more general than what was originally conjectured. We will also discuss some of the many open questions in tight closure theory.

The Hodge Conjecture is about recognizing which homology classes on a projective algebraic manifold are the Poincare duals of analytic submanifolds (or subvarieties). Little positive progress has been made on this conjecture since it was stated, but it has given rise to several interesting geometric and analytic approaches, related to minimal surfaces, normal functions, vector bundles of finite order, etc. We will give a low-brow introduction to the question and some of the examples (mainly negative) and techniques. Hopefully we can discuss the recent approach of M. Green and P. Griffiths.

A vortex sheet is a model for the interface between two streams of fluid moving at different speeds. A common example is the vortex wake behind an aircraft, which is responsible for the lift, and which poses a hazard for other aircraft in crowded airports. The initial value problem for vortex sheets is ill-posed and a curvature singularity forms in finite time from analytic initial data, but this is just the beginning of the story. I'll describe some relevant experiments and analysis, and focus on how computations are being used to investigate the sheet dynamics. Principal value integrals appear early on and chaos enters midway.

I discuss the statement "Differentiability is infinitesimal stability" in old and new contexts.

Singular fiberings are generalizations of fiber bundle mappings. After some illustrations I shall concentrate on those fiberings whose typical fibers are homogeneous spaces and whose singular fibers are quotients of the typical fiber by compact groups of affine diffeomorphisms. Questions of existence, uniqueness and rigidity of the fiberings will be examined. Geometric applications in the holomorphic, smooth and topological categories will also be discussed.

The P vs NP Problem is one of the seven "Millennium Prize Problems" for which the Clay Mathematics Institute is offering a $1 million prize. P is the class of decision problems (problems with a yes/no answer) solvable in polynomial time. NP is the class of decision problems for which solutions can be verified in polynomial time. The problem is to determine whether or not these two problem classes are the same. We will present the terminology and mathematical background needed to understand the statement of the problem, give a history of its place in the development of complexity theory, and survey recent attempts to solve it.

First, I plan to explain the meanings of the three long words in the title. I also intend to explain why the topic is reasonable (What happened to non-singular cardinals? What happened to addition and multiplication?) and what the classical results say about it. Finally, I'll describe (without proof) more recent results of Shelah that not only provide surprising restrictions on possible answers to the title question but also provide new insight into some of the fundamental techniques of modern set theory.