# Exponential Sums along Oblique Lines

Thanks to Changhao Chen, Burak Erdoğan, and Igor Shparlinski for useful discussions surrounding this post.

Let ${k}$ be a positive integer (which we take later to be ${\geq 11}$). We consider the exponential sum

$\displaystyle S(x,y) : = \sum_{n=N}^{2N} e(xn + yn^k),\ \ \ \ \ (1)$

where ${e(x) : = e^{2\pi i x}}$. We are interested in bounds for

$\displaystyle \sup_{(x,y) \in \mathcal{L}_z} |S(x,y)|,$

where ${\mathcal{L}_z \subset [0,1)^2}$ is a family of sets indexed by a parameter ${z\in \mathbb{R}}$. For simplicity, we only consider, for a fixed ${(a,b) \in \mathbb{R}^2}$ with ${(a,b) \neq (0,0)}$,

$\displaystyle \mathcal{L}_z = \{(x,y) \in [0,1)^2 : ax + by = z\}.$

We would like to show that for most ${z}$, (1) is small. This is supported by the usual heuristic that we expect square root cancellation in (1). On the other hand, ${S(x,y)}$ is large for some special values of ${(x,y)}$ (for instance ${(0,0)}$, where ${|S(0,0)| = N+1}$), so it certainly matters how the ${\mathcal{L}_z}$ lie in space. Burak Erdoğan and I studied these types of questions (also the topic of this previous blog post), motivated by understanding the fractal dimension of solutions to certain PDE (see the introduction and references therein for a complete history). We let
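To get a feel for the square root cancellation heuristic, here is a quick numerical check (not part of the argument below); the choices ${N = 1000}$, ${k = 3}$, and the "generic" point ${(\sqrt{2}-1, \sqrt{3}-1)}$ are arbitrary and purely for illustration.

```python
# Numerical illustration of square root cancellation in the sum
# S(x, y) = sum_{n=N}^{2N} e(x n + y n^k), where e(t) = exp(2 pi i t).
# The parameters N = 1000, k = 3 and the "generic" point below are
# arbitrary choices for illustration.
import cmath
import math

def S(x, y, N, k):
    """Evaluate S(x, y) by direct summation."""
    return sum(cmath.exp(2j * math.pi * (x * n + y * n ** k))
               for n in range(N, 2 * N + 1))

N, k = 1000, 3

# At (0, 0) every summand equals 1, so |S| = N + 1 (no cancellation).
trivial = abs(S(0.0, 0.0, N, k))

# At a generic point we expect |S| to be of rough size sqrt(N).
generic = abs(S(math.sqrt(2) - 1, math.sqrt(3) - 1, N, k))

print(trivial, generic, math.sqrt(N))
```

Consistent with the heuristic, the generic value comes out far smaller than the trivial value ${N+1}$ at the origin.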

$\displaystyle s(k) = k(k-1) + 2 \lfloor \sqrt{2k-2}\rfloor - 2.$

${\ }$ Theorem 1 (Chen-Shparlinski): Let ${\epsilon > 0}$ and

$\displaystyle \alpha > 1 - \frac{1}{1 + s(k)}.$

Then for a.e. ${z \in \mathbb{R}}$, with respect to the Lebesgue measure,

$\displaystyle \sup_{(x,y) \in \mathcal{L}_z} |S(x,y)| \ll_z N^{\alpha + \epsilon}. \ \ \ \spadesuit$

${\ }$

Let us make some remarks before beginning the proof. It is worth noting that for very small ${k}$ we know the best value of ${\alpha}$. Indeed, Brandes, Parsell, Poulias, Vaughan, and I showed that ${\alpha = 3/4}$ is admissible and cannot be improved. In the aforementioned paper with Erdoğan, we mentioned that one could obtain a variant of Theorem 1 by invoking the Vinogradov mean value theorem, which gives an essentially best possible bound for

$\displaystyle \int_{[0,1]^k} \Big|\sum_{n=N}^{2N} e(x_1 n + x_2 n^2 + \cdots + x_k n^k)\Big|^{p}\, dx_1 \cdots dx_k.\ \ \ \ \ (2)$

The guiding principle is that if an exponential sum is large at a single point, then one can create many other points where the exponential sum is large. On the other hand, there cannot be too many points where this occurs, as (2) is small. This is a somewhat unsatisfactory approach, as it is not clear that the ${k}$-variable mean value in (2) is the right tool to analyze the two-variable ${S(x,y)}$. Recently, Chen and Shparlinski instead utilized the following two-variable mean value type theorem of Wooley, which turns out to improve the bounds a bit and simplify the proof.

${\ }$ Theorem 2 (Wooley): Suppose ${k \geq 11}$ is an integer. Then for any ${\sigma \geq s(k)}$,

$\displaystyle \int_0^1 \int_0^1 |S(x,y)|^{\sigma} dx dy \leq N^{\sigma - k - 1 + o(1)}. \ \ \spadesuit$

${\ }$

Note that Theorem 2 is best possible, in a certain sense. By considering a small ${N^{-1} \times N^{-k}}$ rectangle near ${(0,0)}$, on which there is no cancellation in ${S(x,y)}$, we see

$\displaystyle \int_0^1 \int_0^1 |S(x,y)|^{\sigma} dx dy \gg N^{\sigma - k - 1}.$
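For completeness, here is the computation behind this lower bound, with ${c > 0}$ a small absolute constant:

```latex
If $|x| \le c N^{-1}$ and $|y| \le c N^{-k}$, then for $N \le n \le 2N$,
\[
  |x n + y n^k| \ \le\ 2c + 2^k c,
\]
so for $c$ small enough (in terms of $k$) each summand satisfies
$\mathrm{Re}\, e(x n + y n^k) \ge 1/2$, and hence
\[
  |S(x,y)| \ \ge\ \mathrm{Re}\, S(x,y) \ \ge\ \tfrac{1}{2}(N+1).
\]
Integrating $|S|^{\sigma} \gg N^{\sigma}$ over this rectangle of measure
$\asymp c^2 N^{-k-1}$ gives the stated lower bound.
```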

Thus Theorem 2 cannot be improved much, for the values of ${\sigma}$ for which it applies. It is not clear that the range of ${\sigma}$ is best possible. A natural conjecture is that Theorem 2 holds for

$\displaystyle \sigma > 2(k+1) .$

Such an improvement would, by the argument below, immediately improve Theorem 1.

Proof of Theorem 1: We only prove the case ${N = 2^j}$, for simplicity. The reader may consult Chen and Shparlinski’s paper for the general case, where the additional idea is to employ the completion technique.

Let ${\epsilon > 0}$ and ${0 < \alpha < 1}$. We partition ${[0,1)^2}$ into a grid of ${O(N^{k + 3 - 2 \alpha + 2 \epsilon})}$ small rectangles of size approximately

$\displaystyle N^{\alpha - 2 - \epsilon} \times N^{\alpha - k - 1 - \epsilon}.$

We denote the collection of these rectangles by ${\mathcal{R}}$, so that

$\displaystyle [0,1)^2 = \cup_{R \in \mathcal{R}} R.$

The point is that the sum in (1) does not change much on such rectangles. Indeed it is easy to check, using ${e(x) = 1 + O(x)}$, that (for ${N}$ large enough) if

$\displaystyle |S(x,y)| \geq N^{\alpha},$

for some ${(x,y) \in R}$, then

$\displaystyle |S(x',y')| \geq N^{\alpha}/2,$

for any ${(x',y') \in R}$. We let ${\mathcal{R}_{\alpha} \subset \mathcal{R}}$ consist of the rectangles ${R}$ such that there is an ${(x,y) \in R}$ with ${|S(x,y)| \geq N^{\alpha}}$. Combining this with the mean value estimate in Theorem 2, we see that ${\#\mathcal{R}_{\alpha}}$ cannot be too large.
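To spell out the stability estimate: for ${(x,y), (x',y') \in R}$, using ${|e(t) - e(s)| \ll |t-s|}$,

```latex
\[
  |S(x,y) - S(x',y')|
  \ \ll\ \sum_{n=N}^{2N} \big( |x - x'|\, n + |y - y'|\, n^k \big)
  \ \ll\ N^{\alpha - 2 - \epsilon} N^{2} + N^{\alpha - k - 1 - \epsilon} N^{k+1}
  \ \ll\ N^{\alpha - \epsilon},
\]
which is at most $N^{\alpha}/2$ once $N$ is sufficiently large in terms of $\epsilon$.
```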

Indeed, by Markov’s inequality and Theorem 2, we see that for ${\sigma \geq s(k)}$,

$\displaystyle \#\mathcal{R}_{\alpha}\, N^{2 \alpha - k - 3 - 2 \epsilon} (N^{\alpha}/2)^{\sigma} \leq \int_0^1 \int_0^1 |S(x,y)|^{\sigma} dx dy \leq N^{\sigma - k - 1 + o(1)},$

since each rectangle in ${\mathcal{R}_{\alpha}}$ has area ${\asymp N^{2\alpha - k - 3 - 2\epsilon}}$ and ${|S| \geq N^{\alpha}/2}$ on all of it.

Thus

$\displaystyle \#\mathcal{R}_{\alpha} \leq N^{(1-\alpha)\sigma -2\alpha+2 + 2 \epsilon + o(1)}.\ \ \ \ \ (3)$
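For the record, the exponent in (3) arises from dividing the mean value bound by the area of a rectangle and by ${N^{\alpha \sigma}}$ (the bounded constant is absorbed into the ${o(1)}$):

```latex
\[
  (\sigma - k - 1) - (2\alpha - k - 3 - 2\epsilon) - \alpha\sigma
  \ =\ (1 - \alpha)\sigma - 2\alpha + 2 + 2\epsilon .
\]
```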

We now consider the image of these rectangles under the map

$\displaystyle (x,y) \mapsto ax + by.$

We have

$\displaystyle \{z\in \mathbb{R} : |S(x,y)| \geq N^{\alpha}, \ \ \text{for some} \ (x,y) \in \mathcal{L}_z\} \subset f\left(\bigcup_{R \in \mathcal{R}_{\alpha}}R\right) = \bigcup_{R \in \mathcal{R}_{\alpha}}f(R),$

where

$\displaystyle f(x,y) = ax+by.$

Note that ${f}$ does not distort rectangles too much: since ${f}$ is Lipschitz, ${f(R)}$ is an interval of length

$\displaystyle \lambda(f(R)) \ll_{a,b} N^{\alpha - 2 - \epsilon},$

where ${\lambda}$ is the Lebesgue measure (the width ${N^{\alpha - 2 - \epsilon}}$ of ${R}$ dominates its height ${N^{\alpha - k - 1 - \epsilon}}$). Thus by subadditivity of the Lebesgue measure and (3),

$\displaystyle \lambda(\{z\in \mathbb{R} : |S(x,y)| \geq N^{\alpha}, \ \ \text{for some} \ (x,y) \in \mathcal{L}_z\} )\ll_{a,b} N^{\alpha - 2 - \epsilon} N^{(1-\alpha)\sigma -2\alpha+2 + 2 \epsilon + o(1)} = N^{(1-\alpha)\sigma - \alpha + \epsilon + o(1)}.$

Note here that ${N}$ is fixed. What we actually care about is what happens for a fixed ${z}$ and all ${N \geq N(z)}$, for some large ${N(z)}$. There is a standard trick from probability (or analysis) to handle this: apply the Borel-Cantelli lemma. We first apply the above result with ${N = 2^j}$; absorbing the implied constant and the ${o(1)}$ into the ${\epsilon}$ (for ${j}$ sufficiently large in terms of ${a, b, \epsilon}$), we find

$\displaystyle \lambda(\{z\in \mathbb{R} : |S(x,y)| \geq 2^{j\alpha}, \ \ \text{for some} \ (x,y) \in \mathcal{L}_z\} ) \leq 2^{j((1-\alpha)\sigma - \alpha + 2 \epsilon )}.\ \ \ \ \ (4)$

By the Borel-Cantelli lemma, if

$\displaystyle \sum_{j=1}^{\infty} 2^{j((1-\alpha)\sigma - \alpha + 2 \epsilon )}< \infty,$

then the set of ${z}$ for which the event in (4) — that ${|S(x,y)| \geq 2^{j \alpha}}$ for some ${(x,y) \in \mathcal{L}_z}$ — occurs for infinitely many ${j}$ has measure zero. For every other ${z}$ we have ${\sup_{(x,y) \in \mathcal{L}_z} |S(x,y)| \leq N^{\alpha}}$ for all ${N = 2^j}$ with ${j \geq j(z)}$, which is the bound asserted in Theorem 1. The required convergence is implied by

$\displaystyle (1-\alpha)\sigma - \alpha < 0,$

as long as ${\epsilon}$ is sufficiently small. This, in turn, is implied by

$\displaystyle \alpha > \frac{\sigma}{\sigma + 1} = 1 - \frac{1}{\sigma + 1} .$

Taking ${\sigma = s(k)}$ gives the range of ${\alpha}$ in Theorem 1. ${\spadesuit}$
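For reference, the version of the Borel-Cantelli lemma used above is the elementary half, which is just subadditivity of ${\lambda}$: writing ${E_j}$ for the set in (4),

```latex
\[
  \sum_{j=1}^{\infty} \lambda(E_j) < \infty
  \ \Longrightarrow\
  \lambda\Big( \bigcap_{J \geq 1} \bigcup_{j \geq J} E_j \Big) = 0,
\]
since $\lambda\big( \bigcup_{j \geq J} E_j \big) \le \sum_{j \geq J} \lambda(E_j) \to 0$
as $J \to \infty$.
```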