I am happy to announce that I just posted the first video to my new YouTube channel: an interview with Xavier Garcia about how Chat-GPT works.

I’ll continue to post videos about Machine Learning, Data Science, and perhaps beyond. I plan to do more interviews and individual videos on relevant topics, as well as some educational material.

I was already meeting with Xavier regularly to discuss Machine Learning topics, and so I thought to myself that other people might benefit from our discussions.

Our discussion for the video was perhaps a bit different from usual, as we were less likely to say things “off the cuff.” I’ll be looking to make this easier going forward, as I think a lot of valuable insight lies there.

There was (and is) a learning curve in making videos, and I’ll definitely be looking to improve the quality over time. From zero to published video took a couple of months (of course, not working on this full time). I also got lucky and found a very helpful video editor, which made things a lot easier. Sound quality, video quality, and lighting are not things I’ve had to worry about much in the past. I didn’t invest much in equipment, just trying to make use of what I had.

By math, we do not mean simply performing computations. Chat-GPT’s own design ensures that there will be computational problems it is unable to solve. What I am more interested in is whether it can solve problems that require some mathematical reasoning.

To choose our problems, we use the MMLU dataset. Galactica, a large language model recently released by Meta AI, has achieved some good results on this dataset. Their findings are in section 5.3 of their paper.

In what follows, we’ll pick a few problems and see how Chat-GPT handles them. If it fails, we will try to reword the question in hopes of achieving some partial results.

We note that the MMLU data set consists of multiple-choice questions, but we present only the questions themselves to Chat-GPT, choosing them from the abstract algebra section. Afterwards, we’ll prod Chat-GPT for intuition and a proof. We’ll end by suggesting a few applications of Chat-GPT for mathematicians.

**Question 1**

**Me**: The inverse of -i in the multiplicative group, is

**Chat-GPT**: The inverse of -i in the multiplicative group is . The inverse of an element in a group is an element such that $latex x*y=y*x=1$, where is the identity element of the group. In this group, is the identity element, so the inverse of an element is an element such that .

For example, the inverse of is , because . Similarly, the inverse of is , because .

**Remark**: Chat-GPT answers this question perfectly and gives an excellent explanation. To be fair, this is an easier question.
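As a quick sanity check, one can verify the arithmetic in Python, where `1j` plays the role of the imaginary unit:

```python
# Verify that i is the inverse of -i: their product should be 1.
x = -1j            # the element -i
y = 1j             # the claimed inverse, i
product = x * y    # (-i) * i = -i^2 = 1
assert product.real == 1 and product.imag == 0
```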

**Question 2**

**Me**: How many homomorphisms are there of Z into Z_2? Please write in latex format.

**Chat-GPT**: There are two homomorphisms of into . They are defined as follows:

such that for all

such that for all

**Remark**: Again, Chat-GPT is spot on. However, I asked this question a few times; once it said one, and another time it said two but gave a false justification:

There are only two homomorphisms of Z (the integers) into Z2 (the integers modulo 2).

One homomorphism sends every integer to 0, and the other homomorphism sends every integer to 1.
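The second map in that justification is easy to rule out programmatically: a homomorphism must send 0 to 0, so “every integer to 1” cannot work. A small brute-force check (a sketch) confirms that the two genuine homomorphisms are n ↦ 0 and n ↦ n mod 2:

```python
# A homomorphism f: Z -> Z_2 is determined by f(1), since f(n) = n*f(1) mod 2.
# Brute-force check, on a window of integers, of which maps respect addition.
def is_hom(f, window=range(-10, 11)):
    return all(f(a + b) == (f(a) + f(b)) % 2 for a in window for b in window)

f0 = lambda n: 0          # n -> 0
f1 = lambda n: n % 2      # n -> n mod 2 (parity)
g  = lambda n: 1          # "every integer to 1": the false justification

assert is_hom(f0) and is_hom(f1)
assert not is_hom(g)      # g(0+0) = 1, but g(0) + g(0) = 2 = 0 mod 2
```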

**Question 3**

**Me:** Let p = (1, 2, 5, 4)(2, 3) in S_5 . Find the index of <p> in S_5. Please write your answer in latex.

**Chat-GPT:**

The index of <p> in S_5 is the number of elements in S_5 that come before <p> in lexicographic order.

To find the index of <p>, we must first determine the cycle type of p. The cycle type of a permutation is a list of the lengths of its disjoint cycles, listed in decreasing order.

The cycle type of p is [2,2], since p consists of a 2-cycle and a 2-cycle.

Now we can use the cycle index of S_5 to determine the number of permutations in S_5 with cycle type [2,2].

The cycle index of S_5 is given by:

where is the number of permutations in S_5 with i cycles, and is a variable representing a cycle of length i.

Substituting the cycle type [2,2] into the cycle index gives:

This simplifies to:

The coefficient of in this expression is , so the number of permutations in S_5 with cycle type [2,2] is .

Since <p> is the only permutation in S_5 with cycle type [2,2], it is the -th permutation in S_5.

Therefore, the index of <p> in S_5 is .

**Remark:** Woah! This is very wrong! The very first sentence is completely wrong. Also, its final conclusion is not even an integer, which is certainly not right. Indeed, the index of a subgroup counts something and must be a positive integer.

We are able to salvage something, though. Suppose we computed, using our algebra skills, that the order of is 5. Then we can prompt Chat-GPT with the following.

**Me:** Now suppose is a subgroup of order 5. What is the index of in ?

**Chat-GPT:** The index of a subgroup in a group is the number of left cosets of in . This is also equal to the number of right cosets of in .

Since the order of a subgroup divides the order of the group, the order of must divide the order of , which is 120. The only divisors of 120 that are less than 120 are 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, and 120. Therefore, the only possible orders for are 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, and 120.

Since the order of is 5, the index of in is .

**Remark**: Here Chat-GPT answers perfectly with a great explanation. Perhaps it would be nice to mention that the main fact being used is Lagrange’s theorem, though this can easily be obtained by asking it “What theorem did you use above?”
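For what it’s worth, the original Question 3 can also be settled mechanically. The sketch below composes the two cycles by hand (applying (2, 3) first; the opposite convention gives the same order), finds that p has order 5, and recovers the index 120/5 = 24:

```python
from math import factorial

# Compose the cycles (1,2,5,4) and (2,3) in S_5, applying (2,3) first.
def cycle_to_map(cycle, n=5):
    m = {i: i for i in range(1, n + 1)}
    for i, a in enumerate(cycle):
        m[a] = cycle[(i + 1) % len(cycle)]
    return m

c1 = cycle_to_map((1, 2, 5, 4))
c2 = cycle_to_map((2, 3))
p = {i: c1[c2[i]] for i in range(1, 6)}   # p = (1,2,5,4)(2,3)

# Order of p: the smallest k >= 1 with p^k equal to the identity.
order, q = 1, dict(p)
while any(q[i] != i for i in q):
    q = {i: p[q[i]] for i in q}
    order += 1

index = factorial(5) // order
print(order, index)  # order 5, index 24
```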

**But what’s the intuition?**

What makes a research mathematician great is a certain “intuition” for problems. Let’s see if Chat-GPT can explain any of that.

**Me: **Can you explain the intuition behind the Cauchy-Schwarz inequality? Please write any math formula in latex.

**Chat-GPT: ** Sure! The Cauchy-Schwarz inequality is a useful inequality that arises in various areas of mathematics, such as vector algebra, inner product spaces, and probability theory. It states that for any two vectors `x`

and `y`

, the following inequality holds:

Intuitively, the Cauchy-Schwarz inequality states that the inner product of two vectors `x`

and `y`

is “controlled” by the lengths of the vectors `x`

and `y`

. To see this, consider the special case where `x`

and `y`

are unit vectors (i.e., vectors of length 1). In this case, the inequality becomes:

which means that the inner product of `x`

and `y`

is always less than or equal to 1. This is consistent with our intuition that the inner product of two unit vectors should be “small” if they are “orthogonal” (i.e., perpendicular) and “large” if they are “parallel” (i.e., pointing in the same direction).

I hope this helps! Let me know if you have any questions.

**Remark: **This is not too bad, but could certainly be improved. For instance, explaining when equality holds in Cauchy-Schwarz (when one vector is a multiple of the other) helps give a more precise meaning to the last paragraph. One can read the first few paragraphs of this blog post of Terence Tao to see an explanation from one of the world’s leading mathematicians. Also, here is a Math Stack Exchange discussion on the topic.
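For reference, here is the statement that was elided above, together with the equality case just mentioned:

```latex
% Cauchy-Schwarz: for any vectors x, y in an inner product space,
\[
  |\langle x, y \rangle| \le \|x\| \, \|y\|,
\]
% with equality if and only if x and y are linearly dependent,
% i.e., one is a scalar multiple of the other.
```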

**Prove it.**

**Me: **Can you prove it?

**Chat-GPT**: Sure! Here is a proof of the Cauchy-Schwarz inequality:

First, we can rewrite the inequality as:

Next, we can expand the left-hand side as follows:

This simplifies to:

Now, we can apply the arithmetic-geometric mean inequality to each term on the right-hand side:

This simplifies to:

Finally, we can apply the arithmetic-geometric mean inequality again to the term on the right-hand side:

This simplifies to:

And we are done! This completes the proof of the Cauchy-Schwarz inequality.

I hope this helps! Let me know if you have any questions.

**Remark:** Unfortunately, this proof is not correct. Both the proof method and the details (i.e., its application of AM-GM) are severely flawed.

**Conclusion**

To be honest, Chat-GPT is quite impressive for a model that is not tailored to mathematics in any way. I think it could already, in its current form, help a mathematician prepare lecture notes or even the easier parts of a paper. Perhaps its greatest utility would come in automating certain parts of grant and job applications.

To acclimate myself to computer proofs, I aimed to write down some basic theorems from my area of research in Lean. I chose Lean primarily because of its very active and helpful community. In fact, everything I did in Lean was done alongside Yael Dillies, whom I met through Zulip. In what follows, I will discuss how I went about this and what I took away from the experience.

First of all, what is Lean? It is an interactive theorem prover (like Isabelle, Coq, and others). For our purposes, it is a way to input mathematical statements and their corresponding proofs. The power of Lean is that it verifies that the proof you enter is indeed correct. The user writes a mathematical statement they wish to prove; the interactive window then shows which goals remain, and they can begin working to establish them. Complete proofs are rewarded by a “Proof Complete” (and a festive ). Incomplete proofs give unaccomplished goals or errors. To get a feel for how this works, one can load up the natural number game.
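To give a flavor of the workflow, here is a tiny example in Lean 4 syntax (a sketch in mathlib style; names may differ across versions). Before the final term is supplied, the goal window shows `⊢ a + b = b + a`, and the proof term closes the goal:

```lean
-- A one-line proof that addition on ℕ is commutative,
-- using the library lemma Nat.add_comm.
example (a b : ℕ) : a + b = b + a := Nat.add_comm a b
```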

An interactive theorem prover offers interesting possibilities. The most obvious is that we can be sure that a proof is correct. Of course this is not completely true, as one has to make sure that previous statements and definitions are indeed correctly stated! One can consult this Quanta article about the impressive computer formalization of some of Peter Scholze’s work.

An even more interesting goal is the hope to automate parts of mathematical research. One idea, independent of theorem provers, is an advanced autocomplete to be used when writing mathematical papers in LaTeX. The technology to build this already exists, has been applied successfully to programming, and is still evolving. For instance, Google’s Minerva has already had some success in using language models to perform mathematical reasoning.

Another goal would be to automate the process of checking the correctness of proofs. An effective system of this sort would save mathematicians countless hours in refereeing alone. This can be viewed loosely as a neural machine translation problem, a domain that has seen a lot of success with natural language (say, English to German), both in research and in practice.

Finally, a third and even more ambitious goal is for an AI to be able to come up with its own proofs. Needless to say, we are currently in the infancy of this program. The interested reader can consult OpenAI’s somewhat recent paper on the subject.

For further reading on these sorts of things, I recommend a promising path towards autoformalization and general artificial intelligence by Christian Szegedy. Here Szegedy outlines his thoughts on so-called autoformalization as well as how such a program would fit into the general development of AI.

My first step in getting familiar with Lean was to work through the natural number game. This is basically a guided tour for proving some basic statements in Lean, avoiding some technical difficulties in the process (for instance, it is browser-based). Here one has a goal to prove, and must use the given hypotheses to establish the goal. After each new line of code is written, one can analyze how this changes the hypotheses/goals.

After this, I installed Lean on my local machine and worked through a tutorial, again a guided tour but a bit more in depth than the natural number game (I was joined for some of this by Tsutomu Okano). I worked through various problems one might see in an undergraduate analysis course, relating to sequences, limits, and other topics.

The next leap is to begin contributing to mathlib. Mathlib is a library of mathematics in Lean that hundreds (in my estimation) of people have worked on. This is not an easy step, and to proceed here it helps that Lean has a very active community on Zulip; this community was my main reason for choosing Lean over other interactive theorem provers. Some knowledge of the incredibly useful GitHub is also helpful for figuring out what is going on.

After asking around, I was lucky that Bhavik Mehta and Yael Dillies were both knowledgeable with Lean and interested in my main area of research. Moreover, there was already some work in mathlib on additive combinatorics and finite sets, which was very useful to build off of (see this work of Bloom and Mehta). I started off by trying to prove Ruzsa’s triangle inequality.

**Theorem 1: **Let $latex A, B, C$ be finite sets of integers. Then

$latex |A - C| \, |B| \leq |A - B| \, |B - C|.$

I thought this was a good starting point as it is fundamental to the area, whilst having a short proof (i.e. Lemma 2.6 in Tao and Vu). This turned out to be a bit harder than I expected. The argument can be formulated as a double counting argument or as showing that a certain function is an injection. It turns out that the former formulation is a bit easier to input into Lean due to what is already in mathlib, borrowing some of Bhavik’s work on the sum-product problem (however, I was shown that with enough perseverance one can use the latter proof). Moreover, some very simple things, like point-wise set subtraction, had to be added to mathlib. This turned out to be quite a non-trivial task and was completed by Yael. This is perhaps a result of additive combinatorics being a relatively new addition to mathlib.
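For concreteness, Ruzsa’s triangle inequality has roughly the following shape when stated over `Finset ℤ` (an illustrative rendering, not the actual mathlib declaration; the real statement is more general, and the pointwise `A - B` requires the `Pointwise` locale):

```lean
open Pointwise

-- Ruzsa's triangle inequality: |A - C| * |B| ≤ |A - B| * |B - C|,
-- where A - B denotes the difference set {a - b : a ∈ A, b ∈ B}.
theorem ruzsa_triangle (A B C : Finset ℤ) :
    (A - C).card * B.card ≤ (A - B).card * (B - C).card := by
  sorry -- filled in via a double counting argument in the actual proof
```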

Next, Yael and I input Plünnecke’s inequality into mathlib. Plünnecke’s inequality is another fundamental result that has a relatively short proof (thanks to Petridis). Here, Yael took over the Lean part. Their workflow was far more efficient than mine, as they are a veteran of Lean. They have developed a plethora of tricks for finding results in mathlib and even figuring out what to use from mathlib. While they cannot write in Lean as quickly as I can write LaTeX, they are not ten times slower.

We eventually made a pull request to mathlib containing Ruzsa’s triangle inequality and Plünnecke’s inequality. Mathematically, I did not gain too much working on this, in part, I believe, because I already had a thorough understanding of the proofs. Nevertheless, thinking about how to actually convert theorems to Lean did force me to revisit these classical theorems from multiple perspectives, and I could see this being productive as one scales.

Overall, I enjoyed the experience of working in Lean, largely due to the vibrant community. Transferring mathematics to theorem provers requires a unique combination of mathematics and software engineering (both to contribute as well as under the hood). We already have some talented people (both young and old) working on this and I certainly look forward to seeing what will be accomplished along these lines.

Let $latex A$ be a finite subset of the integers. We say $latex A$ is sum-free if there are no solutions to

$latex x + y = z$

with $latex x, y, z \in A$. We define to be the size of the largest sum-free subset of $latex A$. We seek lower bounds for this quantity. It is conjectured that

for any . Erdős established is admissible and Bourgain later improved this to . By a construction, Eberhard, Green, and Manners showed that .

I was originally drawn to this problem for two reasons. The first is that the aforementioned result of Erdős is the first additive combinatorics result in Tao and Vu’s additive combinatorics book. The second is that Bourgain’s original proof had a reputation for being quite difficult.

We now sketch that is admissible, as shown by Erdős. The first idea is that the middle third $latex (1/3, 2/3) \subseteq \mathbb{R}/\mathbb{Z}$ is sum-free, and thus so is any subset of it. Note this set has measure $latex 1/3$, which is the same as the multiplicative constant in .

The second idea is to randomly map $latex A$ into $latex \mathbb{R}/\mathbb{Z}$. Indeed, choosing

$latex \theta \in [0, 1)$

uniformly at random, we consider

$latex \{a \in A : \{\theta a\} \in (1/3, 2/3)\}.$

One can check that this set has size $latex |A|/3$ on average and, as mentioned before, is sum-free.
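This argument is concrete enough to simulate. The sketch below (with arbitrary parameters) draws a random dilation θ, keeps the elements of A whose dilate lands in the middle third, checks that the resulting set is always sum-free, and observes an average size of about |A|/3:

```python
import random

def middle_third_subset(A, theta):
    # Keep a when the fractional part of theta * a lies in (1/3, 2/3).
    return {a for a in A if 1/3 < (theta * a) % 1 < 2/3}

def is_sum_free(S):
    return all(x + y not in S for x in S for y in S)

random.seed(0)
A = random.sample(range(1, 10**6), 200)
sizes = []
for _ in range(500):
    S = middle_third_subset(A, random.random())
    assert is_sum_free(S)   # guaranteed: the middle third is sum-free mod 1
    sizes.append(len(S))

avg = sum(sizes) / len(sizes)
print(avg)                  # about |A| / 3, i.e. roughly 66.7, on average
```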

Bourgain’s work, and also ours, involves more careful choices of . Underpinning the work is the idea of viewing as a function, , on the torus, and applying a combination of Fourier and combinatorial techniques.

For a set , we let

Then the Erdős argument above may be restated as . Furthermore, would follow from establishing that there is an satisfying

One new idea in our work is to partition into and , where is the set of elements in that are divisible by . It turns out that this decomposition is useful as

while

Thus, for instance, a short argument reveals that if one can establish , then it follows that for .

and

A classical principle going back to at least Hilbert is that a typical polynomial should have Galois group $latex S_n$, and this would suggest is significantly smaller than . Van der Waerden conjectured that . Van der Waerden’s conjecture was almost proved quite recently by Chow and Dietmann (for ), except that they were not able to estimate

Gallagher showed, using the large sieve, that

We show the following.

**Theorem 1:** Let be a positive integer. Then

We note that Bhargava recently announced a proof of , which is stronger than Theorem 1. We remark that our techniques also apply to counting polynomials with almost prime discriminants (see Theorem 1.2 and the related 1.3).
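Hilbert’s principle is easy to glimpse numerically in degree 3 (a rough sketch with an arbitrary cutoff H = 20): for a monic cubic x^3 + ax + b, the Galois group is a proper subgroup of S_3 exactly when the cubic is reducible over the rationals or its discriminant -4a^3 - 27b^2 is a perfect square, and such pairs (a, b) are rare:

```python
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

def reducible(a, b):
    # A monic integer cubic is reducible over Q iff it has an integer root;
    # any such root divides the constant term b (and b = 0 gives the root 0).
    if b == 0:
        return True
    return any(r**3 + a * r + b == 0
               for d in range(1, abs(b) + 1) if b % d == 0
               for r in (d, -d))

H = 20
pairs = [(a, b) for a in range(-H, H + 1) for b in range(-H, H + 1)]
small_group = sum(1 for a, b in pairs
                  if reducible(a, b) or is_square(-4 * a**3 - 27 * b**2))
fraction = small_group / len(pairs)
print(fraction)   # a small minority; the proportion shrinks as H grows
```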

We briefly mention some of the ideas used to prove Theorem 1. Classical algebraic number theory reveals that if the Galois group of a polynomial is , then , where is the Möbius function for . Thus we can employ a sieve to estimate . However, instead of using the large sieve, we use the small sieve (more precisely, the Selberg sieve). Using this naively gives a bound similar to that of Gallagher in (1), as expected. The key advantage of the small sieve is that we can seek improvements if we have good distribution results. After employing Poisson summation, our task is then to understand local factors, which turns out to be basically

where is an additive character. This is morally the same problem as estimating , where is the classical Möbius function. Baker and Harman showed that GRH implies that for any ,

As the Riemann hypothesis is known in the function field setting, such a bound should also hold for (2). Indeed Porritt showed this is the case and this allows us to establish Theorem 1.

We have the following result of Khovanskii from 1992.

**Theorem 1 (Polynomial size of iterated sumsets):** Let $latex A \subseteq \mathbb{Z}^d$ be of size $latex n$. Then there exists a polynomial $latex p$ and an $latex N_0$ such that for $latex N \geq N_0$,

$latex |NA| = p(N).$

In one part of a recent work, Andrew Granville, Aled Walker, and I were able to provide effective bounds for the threshold in Theorem 1. We let

**Theorem 2 (Effective Khovanskii):** Let $latex A \subseteq \mathbb{Z}^d$ be of size $latex n$. Then there exists such that for any

It is not clear that (1) is best possible. Certainly there has to be some dependence on how $latex A$ lies in space, as can be seen by taking the elements of $latex A$ to be increasingly spread out. Curran and Goldmakher showed that when , one needs at least the order of
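In dimension one the polynomial behavior can be watched directly. For instance, with the (arbitrarily chosen) set A = {0, 1, 3}, the sizes |NA| grow exactly linearly, so the first differences are constant (a sketch):

```python
# Iterated sumsets NA = A + A + ... + A (N times) for A = {0, 1, 3}.
A = {0, 1, 3}
S = set(A)                 # S = 1A
sizes = [len(S)]
for _ in range(11):        # grow up to 12A
    S = {s + a for s in S for a in A}
    sizes.append(len(S))

diffs = [y - x for x, y in zip(sizes, sizes[1:])]
print(sizes)   # 3, 6, 9, ...: here |NA| = 3N, i.e. degree-one polynomial growth
assert diffs == [3] * len(diffs)
```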

We first recall an elegant proof of Theorem 1 due to Nathanson and Ruzsa, which is also the starting point for Theorem 2.

**Proof of Theorem 1:** We consider $latex \mathbb{Z}_{\geq 0}^n$, equipped with $latex \prec$, the lexicographical ordering. Let $latex A = \{a_1, \ldots, a_n\}$. We then have a map

$latex \phi : \mathbb{Z}_{\geq 0}^n \to \mathbb{Z}^d$

via

$latex \phi(x_1, \ldots, x_n) = x_1 a_1 + \cdots + x_n a_n.$

It is worth noting that if $latex \phi$ is injective, we immediately deduce Theorem 1 by stars and bars. Typically $latex \phi$ is not injective, but it turns out to not be a significant problem. We let $latex U$ be the set of elements $latex x$ such that there exists a $latex y$ with $latex \phi(y) = \phi(x)$, $latex |y| = |x|$ (where $latex |x| = x_1 + \cdots + x_n$), and $latex y \prec x$. We call any element of $latex U$ useless. They are deemed useless as

$latex |NA| = \#\{x \in \mathbb{Z}_{\geq 0}^n : |x| = N, \ x \notin U\}.$

There is nothing intrinsic that makes elements of useless, rather it is a consequence of our choice of lexicographical ordering. One can check that is closed under translations from elements of .

We need a way to count the elements of and thus propose another definition. We let be the partial ordering where if is smaller than coordinate-wise. Let be the elements such that there is no with . Dickson’s lemma implies that is finite. For a set , we let

Thus we have, by the inclusion-exclusion principle,

Thus it is enough to show for any finite , is polynomial in , for large enough. This follows from the same stars and bars argument mentioned above, as long as

and Theorem 1 follows.

Note that this proof does not give any effective bound on , as we do not have any control over the set . In particular, one would like to have a bound on the norm of the elements of , in light of (2). In general, one cannot make Dickson’s lemma quantitative, but in our case we can use the structure of to do so. The point is that is defined by linear relations, so one can appeal to Siegel’s lemma.

**Proof (sketch) of Theorem 2:** We translate so that , which does not affect the problem (in particular, remains unchanged). We build upon the proof of Theorem 1. Suppose . As , there is a such that , , and . Thus

As , one can check that . We now construct a matrix as follows. The first row has 1’s followed by ‘s. The remaining rows are given by and . Thus

One would hope to apply Siegel’s lemma, which asserts that (3) has a solution such that is small. The primary issue is that this small solution may have nothing to do with . However, one can translate by a multiple of to create a solution that is small in a single coordinate. Then one “forgets” this coordinate and proceeds by induction. A secondary issue is that the solutions may have negative coordinates, but this turns out not to be a critical issue. All of this is carried out in section 6 of the aforementioned paper with Granville and Walker.

We recall Kuratowski’s closure axioms for a set $latex X$. These are a set of axioms which can be used to define a topology on $latex X$. Here, one replaces the notion of open (or closed) sets with a closure operation. The closure operation has the advantage of being defined on all subsets of $latex X$. Indeed, one may think of the closure of $latex A \subseteq X$, denoted $latex c(A)$, as the set of points that are “close to” $latex A$.

First we recall what is meant by a topological space. We let $latex \mathcal{P}(X)$ be the power set of $latex X$, that is, the set of all subsets of $latex X$.

**Definition 1:** Let $latex X$ be a set and $latex \tau \subseteq \mathcal{P}(X)$. Then $latex (X, \tau)$ is a topological space, with the elements of $latex \tau$ as open sets, if

- (a) $latex \emptyset, X \in \tau$,
- (b) if $latex U, V \in \tau$, then $latex U \cap V \in \tau$,
- (c) if $latex U_\alpha \in \tau$ for all $latex \alpha$ in some index set $latex I$, then $latex \bigcup_{\alpha \in I} U_\alpha \in \tau$.

Furthermore, we say the complements

$latex X \setminus U, \quad U \in \tau,$

are the closed sets of $latex X$.

Note we could have defined (a), (b), (c) in terms of closed sets, utilizing De Morgan’s law. We make use of this implicitly below. We now present the definition of Kuratowski closure.

**Definition 2:** Let $latex X$ be a set and $latex c : \mathcal{P}(X) \to \mathcal{P}(X)$. We say $latex c$ is a closure operation if

- (i) $latex c(\emptyset) = \emptyset$,
- (ii) $latex A \subseteq c(A)$ for all $latex A \subseteq X$,
- (iii) $latex c(c(A)) = c(A)$ for all $latex A \subseteq X$,
- (iv) $latex c(A \cup B) = c(A) \cup c(B)$ for all $latex A, B \subseteq X$.

As mentioned above, one can informally think of $latex c(A)$ as the set of points in $latex X$ which are “close” to $latex A$. We now explain that these two definitions are equivalent. To define a closure operation from Definition 1, we set

$latex c(A) = \bigcap_{A \subseteq C, \ C \text{ closed}} C. \ \ \ \ (1)$

Note that $latex c(A)$ is closed by (c). Given a closure operation, we may define the closed sets to be

$latex \{C \subseteq X : c(C) = C\}. \ \ \ \ (2)$

**Proposition 1:** Let $latex (X, \tau)$ be a topological space as in Definition 1. Then the operation defined in (1) is a closure operation in the sense of Definition 2.

**Proof:** Let

$latex \mathcal{C} = \{X \setminus U : U \in \tau\}$

be the set of closed sets in $latex X$, and let $latex A \subseteq X$. By (1), we have $latex A \subseteq c(A)$ and thus (ii) holds. As $latex X = X \setminus \emptyset$ is open, $latex \emptyset$ is closed and $latex c(\emptyset) = \emptyset$, so (i) is satisfied. Note for any closed set $latex C$, we have $latex c(C) = C$, by (ii) and the fact that $latex C$ itself appears on the right hand side of (1). By (c), we have that

$latex c(A) \in \mathcal{C},$

and so $latex c(c(A)) = c(A)$. This proves (iii) and so it remains to show (iv). Note that $latex c(A) \cup c(B)$ is a closed set containing $latex A \cup B$, by (b). Thus

$latex c(A \cup B) \subseteq c(A) \cup c(B). \ \ \ \ (3)$

Now let $latex C$ be a closed set that contains $latex A \cup B$. Then $latex A, B \subseteq C$, and so

$latex c(A) \cup c(B) \subseteq \bigcap_{A \cup B \subseteq C, \ C \text{ closed}} C = c(A \cup B).$

This proves the reverse inclusion of (3) and thus Proposition 1.
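Propositions 1 and 2 lend themselves to an exhaustive check on a small example. The sketch below takes the topology on X = {0, 1, 2} with open sets ∅, {0}, {0, 1}, X (an arbitrary choice), defines the closure operation by (1), and verifies axioms (i)-(iv) over all subsets:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]
closed = [X - U for U in opens]    # complements of the open sets

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

def cl(A):
    # Definition (1): intersect all closed sets containing A.
    out = X
    for C in closed:
        if A <= C:
            out &= C
    return out

P = subsets(X)
assert cl(frozenset()) == frozenset()                          # (i)
assert all(A <= cl(A) for A in P)                              # (ii)
assert all(cl(cl(A)) == cl(A) for A in P)                      # (iii)
assert all(cl(A | B) == cl(A) | cl(B) for A in P for B in P)   # (iv)
print("all Kuratowski axioms hold")
```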

We now show that a closure operation can be used to define a topology.

**Proposition 2:** Let $latex c$ be a closure operation as in Definition 2. Then the sets defined in (2) form the closed sets of a topology in the sense of Definition 1.

**Proof:** We implicitly make use of De Morgan’s law to transition between closed and open sets. By (i), we have $latex c(\emptyset) = \emptyset$, and by (ii), we have $latex c(X) = X$. Thus (a) is established. Let $latex C_1, C_2$ be closed. By (iv) and (2), we have

$latex c(C_1 \cup C_2) = c(C_1) \cup c(C_2) = C_1 \cup C_2,$

which establishes (b). Let $latex C_\alpha$ be closed for all $latex \alpha$ in some index set. Then we have

$latex c\left(\bigcap_\alpha C_\alpha\right) = \bigcap_\alpha C_\alpha.$

Indeed, the backwards inclusion follows from (ii). To see the forward direction, it is enough to show for any $latex \beta$,

$latex c\left(\bigcap_\alpha C_\alpha\right) \subseteq c(C_\beta) = C_\beta.$

The second equality holds as $latex C_\beta$ is closed. The first subset inequality follows from (iv), as if $latex A \subseteq B$ then

$latex c(B) = c(A \cup B) = c(A) \cup c(B) \supseteq c(A).$

We apply this with $latex A = \bigcap_\alpha C_\alpha$ and $latex B = C_\beta$.

Recall if $latex X$ and $latex Y$ are topological spaces, then

$latex f : X \to Y$

is said to be continuous if the pre-image of every open set in $latex Y$ is open in $latex X$, that is,

$latex f^{-1}(U) \text{ is open in } X \ \ \ \ (4)$

for all open $latex U \subseteq Y$. As $latex f^{-1}(Y \setminus U) = X \setminus f^{-1}(U)$, (4) is equivalent to the analogous definition for closed sets. It turns out that we may equivalently define a map to be continuous if

$latex f(c(A)) \subseteq c(f(A)) \text{ for all } A \subseteq X. \ \ \ \ (5)$

This can be informally interpreted as: points that are close to $latex A$ are mapped to points that are close to $latex f(A)$. We now prove this.

**Proposition 3:** Let $latex f : X \to Y$ be a function. Then $latex f$ is continuous in the sense of (4) if and only if it is continuous in the sense of (5).

**Proof:** We start with the forward direction. Let $latex A \subseteq X$. Then by (4), $latex f^{-1}(c(f(A)))$ is closed. As $latex A \subseteq f^{-1}(f(A))$, we have

$latex A \subseteq f^{-1}(c(f(A))).$

Since the right hand side is closed, by (ii) and (iii), we have

$latex c(A) \subseteq f^{-1}(c(f(A))).$

Applying $latex f$ to both sides establishes (5).

Now we show the reverse implication. Let $latex C \subseteq Y$ be a closed set. Then by (5),

$latex f(c(f^{-1}(C))) \subseteq c(f(f^{-1}(C))) \subseteq c(C) = C.$

As any element which is mapped into $latex C$ by $latex f$ must lie in $latex f^{-1}(C)$, we have

$latex c(f^{-1}(C)) \subseteq f^{-1}(C).$

By (ii), equality holds and thus $latex f^{-1}(C)$ is closed.

Let be a positive integer (which we take later to be ). We consider the exponential sum

where . We are interested in bounds for

where are a family of sets indexed by some parameter . For simplicity, we only consider, for fixed ,

We would like to show that for most , (1) is small. This is supported by the usual heuristic that we expect square root cancellation in (1). On the other hand, is large for some special values of (say ), so it certainly matters how the lie in space. Burak Erdoğan and I studied these types of questions (also the topic of this previous blog post), motivated by understanding the fractal dimension of solutions to certain PDE (see the introduction and references within for a complete history). We let

**Theorem 1 (Chen-Shparlinski):** Let and

Then for a.e. , with respect to the Lebesgue measure,

Let us make some remarks before beginning the proof. It is worth noting that for very small we know the best value of . Indeed, Brandes, Parsell, Poulias, Vaughan, and I showed is admissible and cannot be improved. In the aforementioned paper with Erdoğan, we mentioned that one could obtain a variant of Theorem 1 by invoking Vinogradov’s mean value theorem, which is a best possible bound for

The guiding principle is that if an exponential sum is large at a single point, then one can create many other points where the exponential sum is large. On the other hand, there cannot be too many points where this occurs, as (2) is small. This is a somewhat unsatisfactory approach, as it is not clear that the one-variable mean value in (2) is the right tool to analyze the two-variable sum. Recently, Chen and Shparlinski instead utilized the following two-variable mean value type theorem of Wooley, which turns out to improve the bounds a bit and simplify the proof.
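One can see both phenomena, the large special values and the square-root cancellation at generic points, in a quick numerical experiment (a sketch with illustrative parameters k = 2 and N = 1000; the “generic” point is an arbitrary irrational choice):

```python
import cmath

N, k = 1000, 2

def S(x, y):
    # S(x, y) = sum_{n <= N} e(x n + y n^k), where e(t) = exp(2*pi*i*t).
    return sum(cmath.exp(2j * cmath.pi * (x * n + y * n**k))
               for n in range(1, N + 1))

peak = abs(S(0, 0))                       # = N: no cancellation at (0, 0)
generic = abs(S(2**0.5 - 1, 3**0.5 - 1))  # roughly square-root of N in size

print(round(peak), round(generic))
assert abs(peak - N) < 1e-6
assert generic < N ** 0.75                # far below the trivial bound N
```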

**Theorem 2 (Wooley):** Suppose is an integer. Then for any

Note that Theorem 2 is best possible, in a certain sense. By considering a small rectangle near , we see

Thus Theorem 2 cannot be improved much, for the values of for which it applies. It is not clear that the range of is best possible. A natural conjecture is that Theorem 2 holds for

Such an improvement would improve Theorem 1.

**Proof of Theorem 1:** We only prove the case , for simplicity. The reader may consult Chen and Shparlinski’s paper for the general case, where the additional idea is to employ the completion technique.

Let and . We partition into a grid of small rectangles of size approximately

We label these rectangles by

The point is that (1) does not change much on such rectangles. Indeed it is easy to check, using , that (for large enough) if

for some , then

for any . We let consist of the rectangles such that there is a with . Combining this with the mean value estimate in Theorem 2, we see that cannot be too large.

Indeed, by Markov’s inequality and Theorem 2, we see that for ,

We now consider the image of these rectangles under the map

We have

where

Note that does not distort rectangles too much, so that

where is the Lebesgue measure. Thus by subadditivity of the Lebesgue measure and (3)

Note here that is fixed. What we actually care about is what happens for a fixed and some large . The standard trick from probability (or analysis) is to apply the Borel-Cantelli lemma. We first apply the above result with to find

By the Borel-Cantelli lemma, if

then the set of such that (4) holds for infinitely many has measure zero. This is implied by

as long as is sufficiently small. This, in turn, is implied by

**Theorem 1:** Let with at least two elements. Suppose . Then there is a and such that

As the note is only three pages, we do not remark on the proof (which uses the polynomial method) but instead elaborate on some related topics. Note that by the pigeonhole principle, Theorem 1 is true for every if we only insist . Thus we have effectively doubled the bound coming from the pigeonhole principle. Note that without dilation, Theorem 1 is false, as one can take equally spaced elements.

One can ask what happens in Theorem 1 if one does not allow translation by . It turns out that one cannot hope to go beyond , as was shown in this paper of Chen, Shparlinski and Winterhof, using the example

It is an interesting question to decide to what extent Theorem 1 is true with translation by . We remark this is in a similar spirit to the lonely runner conjecture.

It would be nice if Theorem 1 were true for for all , even in the special case when . Certainly this is true for a random set, without the need for dilation.

In particular, this would give us hope in answering an old question of Erdős, which we recall. A Sidon set in an abelian group is a set $latex S$ such that $latex a + b = c + d$ with $latex a, b, c, d \in S$ implies $latex \{a, b\} = \{c, d\}$. Let be the largest Sidon set contained in . Erdős asked if

There are constructions of Sidon sets of size (for some ) coming from for well-chosen . The hope would be to dilate the set in so there is a large gap of size , thus finding a Sidon set of inside of in place of . It is actually not known if we can take to be a prime in the modular construction, which may be found in this nice survey of O’Bryant. This is certainly a question of interest.
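The Sidon condition is easy to experiment with; for instance, greedily extending by the smallest integer that keeps all pairwise sums distinct reproduces the well-known Mian-Chowla sequence (a sketch over {1, ..., 50}):

```python
def is_sidon(S):
    # S is Sidon iff the pairwise sums a + b (with a <= b) are all distinct.
    sums = [S[i] + S[j] for i in range(len(S)) for j in range(i, len(S))]
    return len(sums) == len(set(sums))

S = []
for n in range(1, 51):
    if is_sidon(S + [n]):
        S.append(n)

print(S)   # [1, 2, 4, 8, 13, 21, 31, 45]
```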

On the other hand, one can hope to improve Theorem 1 for some of these constructions. It turns out one can easily check that Ruzsa’s construction, which is the set , does not admit large gaps. Indeed, the set has size , but any dilate of it does not contain a gap significantly longer than . This also shows Theorem 1 is false for general cyclic groups. The point is that in his construction the natural projection to maps surjectively.

This seems to be a bit of a red herring in the application to Sidon sets. On the other hand, for the well-known construction of Bose-Chowla (contained in the aforementioned survey), the analog of Theorem 1 is true and there is no reason to suspect that it cannot be improved. In fact, in this case a proof also proceeds by the polynomial method.
