In the above single-page PDF, I discuss a proof of Hölder's inequality that I found useful in a problem I was recently working on.
In the above PDF, we classify which convex bodies have significantly more lattice points than expected. This post also provides an alternative proof to Lemma 9.2 in a recent paper of Maynard.
Junxian Li, a fellow graduate student at UIUC, and I just uploaded our paper On distinct consecutive r-differences to the arXiv.
Our goal is to show $|A+B| \gg |A||B|^{\delta}$ for finite $A$ with special structure and some positive $\delta$ (here we adopt Vinogradov's $\gg$ notation). Note that $|A+A| = 2|A| - 1$ for arithmetic progressions $A$, so we really need to make assumptions about the structure of $A$.
Motivated by a paper of Solymosi, we introduce the notion of distinct consecutive $r$-differences. Writing $A = \{a_1 < \cdots < a_N\}$, we say $A$ has distinct consecutive $r$-differences if the tuples $(a_{i+1} - a_i, \ldots, a_{i+r} - a_i)$ are distinct for $1 \le i \le N - r$.
Assume $A$ has distinct consecutive $r$-differences. We show that for any finite $B$, one has $|A+B| \gg_r |A||B|^{r/(r+1)}$, and that this inequality is sharp in such generality. We wonder if one can improve upon this if $B = A$, and ask what is the largest $\delta_r$ such that $|A+A| \gg |A|^{\delta_r}$. The above result implies $\delta_r \ge 1 + r/(r+1)$, while in our paper we use de Bruijn sequences to give an upper bound for $\delta_r$.
When $A$ has additive structure, the results from our paper suggest that $A$ should have few distinct consecutive $r$-differences. We investigate two such cases and show that they indeed have very few distinct consecutive differences. In the process, we generalize Steinhaus' three-gap theorem as well as a result of Slater concerning return times of an irrational rotation of the torus.
Let $f$ be a function on some measure space $(X, \mu)$ (for instance $\mathbb{R}$ with Lebesgue measure, or a finite set with counting measure).
Often one is interested in finding lower bounds for the $L^1$ norm of $f$, that is $\int_X |f| \, d\mu$, but has no way to estimate it directly. As a toy example, we can consider $f : [0,1] \to \mathbb{C}$ via $f(x) = \sum_{n=1}^{N} e(n^2 x)$, where $e(t) = e^{2\pi i t}$. Estimating the $L^1$ norm directly seems hard.
But sometimes we are able to estimate higher norms of $f$. This is useful for our original problem, since an application of Hölder's inequality reveals that a lower bound on the $L^2$ norm and an upper bound on the $L^4$ norm give a lower bound on the $L^1$ norm. To see this, note $\|f\|_2 \le \|f\|_1^{1/3} \|f\|_4^{2/3}$, so that $\|f\|_1 \ge \|f\|_2^3 / \|f\|_4^2$.
We can apply this idea to our original example. Parseval's identity gives $\|f\|_2^2 = N$, while orthogonality and the divisor bound give $\|f\|_4^4 \ll N^{2+\epsilon}$. This yields $\|f\|_1 \gg N^{1/2 - \epsilon}$, which is expected by the heuristic that a typical exponential sum should have size about the square root of the length of the sum.
My intuition is the following. Suppose the measure space is a probability space. Then $\|f\|_1 \le \|f\|_2$ by Cauchy–Schwarz. We are basically trying to reverse this inequality. Equality holds when $|f|$ is constant, that is, when $f$ is not too concentrated. The upper bound on the $L^4$ norm of $f$ implies that $f$ is indeed not too concentrated.
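Taking the quadratic exponential sum $f(x) = \sum_{n \le N} e(n^2 x)$ as a concrete stand-in for the toy example (an assumption on my part), the Hölder step above can be sanity-checked numerically:

```python
import numpy as np

# f(x) = sum_{n<=N} e(n^2 x) with e(t) = exp(2*pi*i*t), sampled on [0, 1).
# Hölder: ||f||_2 <= ||f||_1^(1/3) ||f||_4^(2/3), i.e. ||f||_1 >= ||f||_2^3 / ||f||_4^2.
N = 50
x = np.arange(20000) / 20000.0          # uniform grid on [0, 1)
n = np.arange(1, N + 1)
f = np.exp(2j * np.pi * np.outer(x, n ** 2)).sum(axis=1)

def lp(v, p):
    """L^p norm with respect to the uniform probability measure on the grid."""
    return (np.abs(v) ** p).mean() ** (1.0 / p)

l1, l2, l4 = lp(f, 1), lp(f, 2), lp(f, 4)
assert l1 >= l2 ** 3 / l4 ** 2 - 1e-9   # the Hölder lower bound
assert abs(l2 ** 2 - N) < 1e-6          # Parseval: ||f||_2^2 = N
```

Since $|f|^2$ and $|f|^4$ are trigonometric polynomials of degree well below the grid size, the grid averages here reproduce the integrals exactly (up to floating point error).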
We mention that there is nothing too special about the exponents 2 and 4 chosen for the above discussion (although they are convenient for the specific example I chose).
This post was inspired by a question of Trevor Wooley after a talk of James Maynard at MSRI. He asked what was known for lower bounds of large gaps between integers that have a representation as the sum of two squares. We assume some familiarity with the large gaps between primes problem.
A theorem of Fermat asserts that an integer $n$ is represented as a sum of two squares if and only if every prime factor of $n$ congruent to 3 modulo 4 appears with an even exponent.
For technical simplicity, we consider the set of integers that simply have no prime factors congruent to 3 modulo 4, but one may extend the considerations below to the original question.
The number of such integers up to $x$ is proportional to $x/\sqrt{\log x}$. Thus the average gap is about $\sqrt{\log x}$, and so the largest gap is at least this big.
Surprisingly, there is an easy argument that yields a gap of size nearly $\log x$. We sketch it here. For each prime $p \le T$ congruent to 3 modulo 4, choose a residue class $a_p \pmod{p}$. Using a greedy algorithm to choose the $a_p$, we may sieve out all the integers $j \in \{1, \ldots, H\}$, in the sense that every such $j$ lies in some class $a_p \pmod{p}$, with $H$ nearly as large as $\log x$ for $x = \prod_p p$. By the Chinese remainder theorem, there is a $y \le x$ such that $y \equiv -a_p \pmod{p}$ for all the relevant $p$. Thus $y+1, \ldots, y+H$ all contain a prime factor congruent to 3 modulo 4, as desired.
What should one expect? Adopting the Cramér model, we expect that there should be gaps of size at least $(\log x)^{3/2 - \epsilon}$. We give a rough sketch as to why one should expect this. Let $H = (\log x)^{3/2 - \epsilon}$. We suppose that each integer up to $x$ is chosen at random, independently with probability $\delta = c/\sqrt{\log x}$, mimicking the density of our set. Then the probability that an interval $(y, y+H]$ contains no chosen integer is $(1 - \delta)^H$, which is well approximated by $e^{-\delta H}$. If $H = (\log x)^{3/2-\epsilon}$, then this is around $e^{-c(\log x)^{1-\epsilon}}$, which is larger than $1/x$, the reciprocal of the number of choices for $y$. One can see the Cramér model worked out more carefully here.
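To get a feel for these quantities, one can compute the set and its gaps for small $x$; this is a quick experiment of my own, not part of the original discussion.

```python
import math

def in_set(n):
    """True if n has no prime factor congruent to 3 mod 4 (trial division)."""
    d = 2
    while d * d <= n:
        while n % d == 0:
            if d % 4 == 3:
                return False
            n //= d
        d += 1
    return n % 4 != 3  # any leftover factor is prime (or n == 1)

X = 50000
S = [n for n in range(1, X + 1) if in_set(n)]
avg_gap = (S[-1] - S[0]) / (len(S) - 1)
max_gap = max(b - a for a, b in zip(S, S[1:]))
# the average gap should be comparable to sqrt(log X); the largest gap is bigger
print(avg_gap, math.sqrt(math.log(X)), max_gap)
```

At such small $x$ the asymptotic regime is far away, so this only illustrates the definitions rather than testing the $(\log x)^{3/2}$ prediction.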
Fix $q$ and $a$ coprime to $q$. Now we consider the set of integers with no prime factor congruent to $a$ modulo $q$. The Cramér model, as above, suggests that there should be gaps between elements of this set that are of size $(\log x)^{1 + 1/\phi(q)}$. Nevertheless, we may utilize the trivial bound above to obtain gaps of size nearly $\log x$. Thus, for these sets, we can come arbitrarily close (i.e. within an arbitrarily small power of $\log$) to the lower bounds associated to Cramér's conjecture.
Note that for primes, we are nowhere near Cramér's conjecture. Ford, Green, Konyagin, Maynard, and Tao have the best lower bound. Things look worse for upper bounds, as no one knows how to improve upon a power of $x$ for the primes or any of the sets mentioned in this post (even if one assumes the Riemann hypothesis).
Is a triangle uniquely determined by its area, perimeter, and sum of the reciprocals of its angles?
This is an open question, though it is supported by numerical evidence. This question is related to the inverse spectral problem for the Laplace operator in spectral geometry. In particular, the area, perimeter, and sum of the reciprocals of the angles are determined by the first three coefficients in the small-time heat trace expansion of the Laplace operator. Note that a triangle, up to congruence, is determined by three pieces of information (for instance, the SSS theorem from high school geometry), which is why it makes sense to require at least three spectral invariants. We remark that the full spectrum of the Laplace operator does uniquely determine a triangle.
On the other hand, right triangles are determined by two pieces of information (say, the two legs). Thus one might guess that a right triangle is determined by the first two spectral invariants, the area and the perimeter. Indeed, this is the case, as we outline below: the area and perimeter determine a right triangle.
Suppose $T$ is a right triangle with area $A$ and perimeter $P$. Then the two legs of the triangle are the solutions of the quadratic polynomial $$x^2 - \frac{P^2 + 4A}{2P}\, x + 2A.$$
Let $a, b, c$ be the side lengths of $T$, where $c$ is the hypotenuse. Then $ab = 2A$, $a + b + c = P$, and $a^2 + b^2 = c^2$. Eliminating $c$ gives $a + b = (P^2 + 4A)/(2P)$, and the proposition follows from a modest computation.
For fun, one can check that the solutions of the quadratic polynomial in the above proposition are invariant, as a pair, under the map $x \mapsto 2A/x$.
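One can verify the proposition on the 3-4-5 triangle, which has area 6 and perimeter 12 (the helper name below is my own):

```python
import math

def legs_from_area_perimeter(A, P):
    """Roots of x^2 - ((P^2 + 4A)/(2P)) x + 2A, the candidate legs."""
    s = (P ** 2 + 4 * A) / (2 * P)   # a + b
    disc = s * s - 8 * A             # (a + b)^2 - 4ab
    r = math.sqrt(disc)
    return (s - r) / 2, (s + r) / 2

a, b = legs_from_area_perimeter(6, 12)   # 3-4-5 right triangle
assert abs(a - 3) < 1e-9 and abs(b - 4) < 1e-9
# the map x -> 2A/x swaps the two roots, since their product is 2A
assert abs(2 * 6 / a - b) < 1e-9
```

The same check works for any Pythagorean triple, e.g. area 30 and perimeter 30 recover the legs 5 and 12.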
In the above pdf, I explore the first nontrivial case of Vinogradov's mean value theorem. That is, I seek to bound the number of solutions to the simultaneous system $$x_1 + x_2 + x_3 = y_1 + y_2 + y_3, \qquad x_1^2 + x_2^2 + x_3^2 = y_1^2 + y_2^2 + y_3^2,$$ where all the variables are integers in $\{1, \ldots, N\}$. This is much easier (still nontrivial) than the general case, due to the existence of helpful symmetries.
A link to a talk I gave on the topic. The talk is much more elementary than the blog post.
As a problem in number theory, the mean value theorem asks one to show that most of the solutions to a certain system of Diophantine equations of the form $x_1^j + \cdots + x_s^j = y_1^j + \cdots + y_s^j$, for $1 \le j \le k$, are of the diagonal form $\{x_1, \ldots, x_s\} = \{y_1, \ldots, y_s\}$. As a problem in harmonic analysis, the mean value theorem asks for an upper bound for the $L^{2s}$ norm of a certain exponential sum. These two problems turn out to be equivalent, as explained in the pdf linked at the top of the post, thanks to Fourier analytic identities.
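For the quadratic case above ($k = 2$, $s = 3$), the counting problem can be explored by brute force, grouping triples by their first two power sums; this sketch is my own illustration, not from the pdf.

```python
from collections import Counter
from itertools import product

def vinogradov_count(N, s=3):
    """Number of 2s-tuples in [1, N] with equal power sums of degrees 1 and 2."""
    sums = Counter()
    for x in product(range(1, N + 1), repeat=s):
        sums[(sum(x), sum(t * t for t in x))] += 1
    # pairs (x, y) sharing both invariants
    return sum(v * v for v in sums.values())

N = 12
J = vinogradov_count(N)
# diagonal solutions (y a permutation of x) alone already give at least N^3
assert J >= N ** 3
print(J)
```

Comparing the printed count against $N^3$ for growing $N$ gives a feel for how dominant the diagonal solutions are.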
One goal in the above pdf is to understand the nature of the so-called "critical exponent." Interpolation reveals that Vinogradov's mean value theorem follows from an $L^6$ bound for a certain function. While a first step to understanding this critical exponent is interpolation, consideration of the major arcs gives proper insight into why the exponent $6$ appears.
In the final section, I attempt to explain how the mean value theorem can be interpreted as a stronger form of orthogonality of certain complex exponentials. For a vector $x = (x_1, x_2) \in [0,1]^2$, we define $f_n$ via $f_n(x) = e(n x_1 + n^2 x_2)$. Then Vinogradov's mean value theorem can be interpreted as showing that the $f_n$ are stronger than orthogonal. We make this somewhat precise in the above pdf, adopting a probabilistic perspective.
I’d like to thank Chris Gartland, a fellow graduate student, for helping me formulate the ideas in the last section. For instance, it was his idea to utilize equation 5.
I’d also like to thank Junxian Li, a fellow graduate student in number theory, for very useful discussions regarding the major arcs.
Lastly, anyone at UIUC interested in hearing more about Vinogradov’s mean value theorem (for instance Bourgain, Demeter and Guth’s recent result or classical number theoretic methods), please get in touch with me. My email can be found here, or you can visit my office in Altgeld 169.
Junxian Li, Kyle Pratt, and I recently uploaded our paper A lower bound for the least prime in an arithmetic progression to the arXiv.
Here is a file where the heuristics considered in section 2 of the paper are developed in a slightly simpler situation.
Given a positive integer $q$ and $a$ coprime to $q$, define $p(a, q)$ to be the smallest prime congruent to $a$ modulo $q$. We are interested in the worst case behavior, that is, $$P(q) := \max_{(a, q) = 1} p(a, q).$$ Thus $P(3) = 7$ and $P(4) = 5$. In particular we are interested in lower bounds for $P(q)$ for large $q$. An elementary observation, due to Pomerance, in Theorem 1 shows roughly that to find lower bounds for $P(q)$, it is enough to find lower bounds for the Jacobsthal function (the "roughly" will be explained below). For an integer $n$, the Jacobsthal function $g(n)$ is the largest difference between consecutive integers coprime to $n$.
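A brute-force computation of the Jacobsthal function (a quick sketch of my own) illustrates the definition:

```python
from math import gcd

def jacobsthal(n):
    """g(n): largest difference between consecutive integers coprime to n."""
    # coprimality to n is periodic mod n, so a window of length 2n + 1
    # starting at 1 (which is always coprime) contains every gap
    coprime = [k for k in range(1, 2 * n + 2) if gcd(k, n) == 1]
    return max(b - a for a, b in zip(coprime, coprime[1:]))

assert jacobsthal(6) == 4    # 1, 5, 7, 11, ...: the gap from 1 to 5
assert jacobsthal(30) == 6   # 23 to 29: each of 24..28 shares a factor with 30
```

For the problem at hand, the relevant values are $g(n)$ for $n$ a product of many primes, which is far beyond brute force and is where the sieve machinery enters.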
In recent work on long gaps between primes by Ford, Green, Konyagin, Maynard, and Tao, they improve the lower bounds for $g(n)$ where $n$ is the product of the primes up to $x$ (they also mention the connection to the least prime problem; indeed it was Kevin Ford who originally introduced us to the problem). The key difference in the current problem is that we seek lower bounds for $g(n)$ where $n$ is the product of the primes up to $x$ that are coprime to $q$. Our main new idea is to modify the sieve weights of Maynard used in the work of Ford, Green, Konyagin, Maynard, and Tao. We outline our approach in section 4 of our paper.
We finish by taking some time here to discuss smooth number estimates, which are perhaps the most important input to our work, as well as to all previous work on large gaps between primes (Westzynthius, in 1931, was the first to realize this connection). For $2 \le y \le x$, let $\Psi(x, y)$ be the number of integers at most $x$ whose prime factors are all at most $y$. Thus $\Psi(x, 2)$ is the number of powers of $2$ that are at most $x$, and $\Psi(x, 3)$ is the number of integers of the form $2^a 3^b$ that are at most $x$. Estimating $\Psi(x, 2)$ is straightforward, and for fixed $y$ one can obtain an asymptotic for $\Psi(x, y)$ by counting lattice points in a simplex, as I describe in this previous blog post.
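As a quick illustration (my own, with a hypothetical helper name), one can compute $\Psi(x, y)$ by brute force for tiny $y$:

```python
import math

def psi(x, y):
    """Psi(x, y): count of n <= x whose prime factors are all <= y."""
    def is_smooth(n):
        # dividing out every d <= y removes exactly the prime factors <= y,
        # since composite d contribute nothing once their prime parts are gone
        for d in range(2, y + 1):
            while n % d == 0:
                n //= d
        return n == 1
    return sum(1 for n in range(1, x + 1) if is_smooth(n))

x = 10 ** 4
assert psi(x, 2) == int(math.log2(x)) + 1   # 1, 2, 4, ..., 2^13
print(psi(x, 3))   # count of 2^a * 3^b <= x, a lattice point count in a triangle
```

The $y = 3$ count is exactly the number of lattice points $(a, b)$ with $a \log 2 + b \log 3 \le \log x$, the simplest instance of the simplex counting mentioned above.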
For our current problem, it is crucial that we are allowed to let $y$ depend on $x$. The important fact is that $\Psi(x, y)$ is much smaller than expected (by sieve theory heuristics). Rankin, in 1938, in his work on gaps between primes (see also this set of notes), used improved smooth number estimates to obtain better lower bounds for large gaps between primes. Westzynthius' strategy, along with Rankin's estimates, remains the starting point for current methods of constructing large gaps between primes.