Planet SymPy
http://planet.sympy.org/

Aaron Meurer (asmeurer): Verifying the Riemann Hypothesis with SymPy and mpmath
Tue, 31 Mar 2020 21:12:54 GMT
https://asmeurer.com/blog/posts/verifying-the-riemann-hypothesis-with-sympy-and-mpmath/
<div><p>Like most people, I've had a lot of free time recently, and I've spent some of
it watching various YouTube videos about the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann
Hypothesis</a>. I've collected
the videos I've watched into <a href="https://www.youtube.com/playlist?list=PLrFrByaoJbcqKjzgJvLs2-spSmzP7jolT">YouTube
playlist</a>.
The playlist is sorted with the most mathematically approachable videos first,
so even if you haven't studied complex analysis before, you can watch the
first few. If you have studied complex analysis, all the videos will be within
your reach (none of them are highly technical with proofs). Each video
contains parts that aren't in any of the other videos, so you will get
something out of watching each of them.</p>
<p>The <a href="https://www.youtube.com/watch?v=lyf9W2PWm40&list=PLrFrByaoJbcqKjzgJvLs2-spSmzP7jolT&index=8">last video in the
playlist</a>
is a lecture by Keith Conrad. In it, he mentioned a method by which one could
go about verifying the Riemann Hypothesis with a computer. I wanted to see if
I could do this with SymPy and mpmath. It turns out you can.</p>
<h2>Background Mathematics</h2>
<h3>Euler's Product Formula</h3>
<p>Before we get to the computations, let's go over some mathematical background.
As you may know, the Riemann Hypothesis is one of the 7 <a href="https://en.wikipedia.org/wiki/Millennium_Prize_Problems">Millennium Prize
Problems</a> outlined by
the Clay Mathematics Institute in 2000. The problems have gained some fame
because each problem comes with a $1,000,000 prize if solved. One problem, the
<a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture">Poincaré conjecture</a>,
has already been solved (Grigori Perelman, who solved it, turned down the
million-dollar prize). The other six remain unsolved.</p>
<p>The Riemann Hypothesis is one of the most famous of these problems. The reason
for this is that the problem is central to many open questions in number theory.
There are hundreds of theorems which are only known to be true contingent on
the Riemann Hypothesis, meaning that if the Riemann Hypothesis were proven,
immediately hundreds of theorems would be proven as well. Also, unlike some
other Millennium Prize problems, like P=NP, the Riemann Hypothesis is almost
universally believed to be true by mathematicians. So it's not a question of
whether or not it is true, just one of how to actually prove it. The problem
has been open for over 160 years, and while many advances have been made, no
one has yet come up with a proof of it (crackpot proofs aside).</p>
<p>To understand the statement of the hypothesis, we must first define the zeta
function. Let</p>
<p>$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$</p>
<p>(that squiggle $\zeta$ is the lowercase Greek letter zeta). This expression
makes sense if $s$ is an integer greater than or equal to 2, $s=2, 3, 4, \ldots$,
since we know from simple arguments from calculus that the summation converges
in those cases (it isn't important for us what those values are, only that the
summation converges). The story begins with Euler, who in 1740 considered the
following infinite product:</p>
<p>$$\prod_{\text{$p$ prime}}\frac{1}{1 -
\frac{1}{p^s}}.$$</p>
<p>The product ranges over all prime numbers, i.e., it is
$$\left(\frac{1}{1 - \frac{1}{2^s}}\right)\cdot\left(\frac{1}{1 -
\frac{1}{3^s}}\right)\cdot\left(\frac{1}{1 - \frac{1}{5^s}}\right)\cdots.$$
The fraction $\frac{1}{1 - \frac{1}{p^s}}$ may seem odd at first, but consider
the famous geometric series formula, $$\sum_{k=0}^\infty r^k = \frac{1}{1 -
r},$$ which is true for $|r| < 1$. Our fraction is exactly of this form, with
$r = \frac{1}{p^s}$. So substituting, we have</p>
<p>$$\prod_{\text{$p$ prime}}\frac{1}{1 - \frac{1}{p^s}} =
\prod_{\text{$p$ prime}}\sum_{k=0}^\infty \left(\frac{1}{p^s}\right)^k =
\prod_{\text{$p$ prime}}\sum_{k=0}^\infty \left(\frac{1}{p^k}\right)^s.$$</p>
<p>Let's take a closer look at what this is. It is</p>
<p>$$\left(\frac{1}{p_1^s} + \frac{1}{p_1^{2s}} + \frac{1}{p_1^{3s}} +
\cdots\right)\cdot\left(\frac{1}{p_2^s} + \frac{1}{p_2^{2s}} +
\frac{1}{p_2^{3s}} + \cdots\right)\cdot\left(\frac{1}{p_3^s} + \frac{1}{p_3^{2s}} +
\frac{1}{p_3^{3s}} + \cdots\right)\cdots,$$</p>
<p>where $p_1$ is the first prime, $p_2$ is the second prime, and so on. Now
think about how to expand finite products of finite sums, for instance,
$$(x_1 + x_2 + x_3)(y_1 + y_2 + y_3)(z_1 + z_2 + z_3).$$ To expand the above,
you would take a sum of every combination where you pick one $x$ term, one $y$
term, and one $z$ term, giving</p>
<p>$$x_1y_1z_1 + x_1y_1z_2 + \cdots + x_2y_1z_3 + \cdots + x_3y_2z_1 + \cdots + x_3y_3z_3.$$</p>
<p>So to expand the infinite product, we do the same thing. We take every
combination of picking $1/p_i^{ks}$, with one $k$ for each $i$ (picking $k=0$
gives the factor $1$). If we pick infinitely many terms other than $1$, the
product will be zero, so we only need to consider terms that involve finitely
many primes. The resulting sum will be something like</p>
<p>$$\frac{1}{1^s} + \frac{1}{p_1^s} + \frac{1}{p_2^s} + \frac{1}{\left(p_1^2\right)^s} +
\frac{1}{p_3^s} + \frac{1}{\left(p_1p_2\right)^s} + \cdots,$$</p>
<p>where each prime power combination is picked exactly once. However, we know by
the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic">Fundamental Theorem of
Arithmetic</a>
that when you take all combinations of products of primes, you get each
positive integer exactly once. So the above sum is just</p>
<p>$$\frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots,$$ which is just
$\zeta(s)$ as we defined it above.</p>
<p>In other words,</p>
<p>$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \prod_{\text{$p$
prime}}\frac{1}{1 - \frac{1}{p^s}},$$ for $s = 2, 3, 4, \ldots$. This is known
as Euler's product formula for the zeta function. Euler's product formula
gives us our first clue as to why the zeta function can give us insights into
prime numbers.</p>
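<p>We can check Euler's product formula numerically for, say, $s = 2$, truncating the product at the primes below $10^5$ (a rough sketch; the cutoff is arbitrary):</p>

```python
import mpmath
from sympy import primerange

s = 2
# Left-hand side: the full Dirichlet series, zeta(2) = pi**2/6
lhs = mpmath.zeta(s)

# Right-hand side: the Euler product truncated at primes below 10**5
rhs = mpmath.mpf(1)
for p in primerange(2, 100_000):
    rhs *= 1/(1 - mpmath.mpf(p)**(-s))

print(lhs, rhs)  # both approximately 1.6449...
```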
<h3>Analytic Continuation</h3>
<p>In 1859, Bernhard Riemann wrote a <a href="https://en.wikipedia.org/wiki/On_the_Number_of_Primes_Less_Than_a_Given_Magnitude">short 9 page paper on number theory and the
zeta
function</a>.
It was the only paper Riemann ever wrote on the subject of number theory, but
it is undoubtedly one of the most important papers ever written on the
subject.</p>
<p>In the paper, Riemann considered that the zeta function summation,</p>
<p>$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s},$$</p>
<p>makes sense not just for integers $s = 2, 3, 4, \ldots$, but for any real
number $s > 1$ (if $s = 1$, the summation is the <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)">harmonic
series</a>, which
famously diverges). In fact, it is not hard to see that for complex $s$, the
summation makes sense so long as $\mathrm{Re}(s) > 1$ (for more about what it
even means for $s$ to be complex in that formula, and the basic ideas of
analytic continuation, I recommend <a href="https://www.youtube.com/watch?v=sD0NjbwqlYw&list=PLrFrByaoJbcqKjzgJvLs2-spSmzP7jolT&index=3">3Blue1Brown's
video</a>
from my YouTube playlist).</p>
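<p>For example, a direct partial sum at a complex point with $\mathrm{Re}(s) > 1$ agrees with mpmath's built-in <code>zeta</code> (the cutoff of $10^4$ terms is arbitrary):</p>

```python
import mpmath

s0 = mpmath.mpc(2, 3)  # Re(s0) = 2 > 1, so the series converges
partial = sum(1/mpmath.mpc(n)**s0 for n in range(1, 10_001))
print(partial)
print(mpmath.zeta(s0))  # agrees to several digits
```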
<p>Riemann wanted to extend this function to the entire complex plane, not just
$\mathrm{Re}(s) > 1$. The process of doing this is called <a href="https://en.wikipedia.org/wiki/Analytic_continuation">analytic
continuation</a>. The theory
of complex analysis tells us that if we can find an extension of $\zeta(s)$ to
the whole complex plane that remains differentiable, then that extension is
unique, and we can reasonably say that that <em>is</em> the definition of the
function everywhere.</p>
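<p>mpmath's <code>zeta</code> implements this unique continuation, so it happily evaluates points where the series itself diverges; the famous value $\zeta(-1) = -1/12$ makes a quick check:</p>

```python
import mpmath

# The series 1 + 2 + 3 + ... diverges at s = -1, but the analytic
# continuation assigns the finite value -1/12.
val = mpmath.zeta(-1)
print(val)  # -0.0833333333333333
```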
<p>Riemann used the following approach. Consider what we might call the
"completed zeta function"</p>
<p>$$Z(s) = \pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s).$$</p>
<p>Using Fourier analysis, Riemann gave a formula for $Z(s)$ that is defined
everywhere, allowing us to use it to define $\zeta(s)$ to the left of 1. I
won't repeat Riemann's formula for $Z(s)$ as the exact formula isn't
important, but from it one could also see</p>
<ol>
<li>
<p>$Z(s)$ is defined everywhere in the complex plane, except for simple poles at 0
and 1.</p>
</li>
<li>
<p>$Z(s) = Z(1 - s).$ This means if we have a value for $s$ that is right of
the line $\mathrm{Re}(z) = \frac{1}{2},$ we can get a value to the left of
it by reflecting it over the real axis and the line at $\frac{1}{2}$ (to
see this, note that the average of $s$ and $1 - s$ is $1/2$, so the
midpoint of the line segment connecting the two is always the point
$1/2$).</p>
</li>
</ol>
<img alt="Reflection of s and 1 - s" src="https://asmeurer.com/blog/s-and-1-s.svg" width="608" />
<p>(Reflection of $s$ and $1 - s$. Created with
<a href="https://www.geogebra.org/graphing/c9rzy9hj">Geogebra</a>)</p>
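<p>We can spot-check the symmetry $Z(s) = Z(1 - s)$ numerically with mpmath (the test point $2 + 3i$ is arbitrary):</p>

```python
import mpmath

def Z(s):
    # Completed zeta function: pi**(-s/2) * Gamma(s/2) * zeta(s)
    return mpmath.pi**(-s/2)*mpmath.gamma(s/2)*mpmath.zeta(s)

s0 = mpmath.mpc(2, 3)
print(Z(s0))
print(Z(1 - s0))  # should match Z(s0)
```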
<h3>Zeros</h3>
<p>Looking at $Z(s)$, it is a product of three parts. So the zeros and poles of
$Z(s)$ correspond to the zeros and poles of these parts, unless they cancel.
$\pi^{-\frac{s}{2}}$ is the easiest: it has no zeros and no poles. The second
part is the <a href="https://en.wikipedia.org/wiki/Gamma_function">gamma function</a>.
$\Gamma(z)$ has no zeros and has simple poles at nonpositive integers $z=0,
-1, -2, \ldots$.</p>
<p>So taking this, along with the fact that $Z(s)$ is analytic everywhere except
for simple poles at 0 and 1, we get from $$\zeta(s) =
\frac{Z(s)}{\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)}$$</p>
<ol>
<li>$Z(s)$ has a simple pole at 1, which means that $\zeta(s)$ does as well.
This is not surprising, since we already know the summation formula from
above diverges as $s$ approaches 1.</li>
<li>$Z(s)$ has a simple pole at 0. Since $\Gamma\left(\frac{s}{2}\right)$ also
has a simple pole at 0, they must cancel and $\zeta(s)$ must have neither a
zero nor a pole at 0 (in fact, $\zeta(0) = -1/2$).</li>
<li>Since $\Gamma\left(\frac{s}{2}\right)$ has no zeros, there are no further
poles of $\zeta(s)$. Thus, $\zeta(s)$ is analytic everywhere except for a
simple pole at $s=1$.</li>
<li>$\Gamma\left(\frac{s}{2}\right)$ has poles at the remaining negative even
integers. Since $Z(s)$ has no poles there, these must correspond to zeros
of $\zeta(s)$. These are the so-called "trivial" zeros of the zeta
function, at $s=-2, -4, -6, \ldots$. The term "trivial" here is a relative
one. They are trivial to see from the above formula, whereas other zeros of
$\zeta(s)$ are much harder to find.</li>
<li>$\zeta(s) \neq 0$ if $\mathrm{Re}(s) > 1$. One way to see this is from the
Euler product formula. Since each term in the product is not zero, the
function itself cannot be zero (this is a bit hand-wavy, but it can be made
rigorous). This implies that $Z(s) \neq 0$ in this region as well. We can
reflect $\mathrm{Re}(s) > 1$ over the line at $\frac{1}{2}$ by considering
$\zeta(1 - s)$. Using the above formula and the fact that $Z(s) = Z(1 -
s)$, we see that $\zeta(s)$ cannot be zero for $\mathrm{Re}(s) < 0$ either,
with the exception of the aforementioned trivial zeros at $s=-2, -4, -6,
\ldots$.</li>
</ol>
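<p>Points 2 and 4 above are easy to confirm numerically:</p>

```python
import mpmath

# Point 2: zeta has neither a zero nor a pole at 0; zeta(0) = -1/2
print(mpmath.zeta(0))  # -0.5

# Point 4: the trivial zeros at the negative even integers
for n in (-2, -4, -6):
    print(n, mpmath.zeta(n))  # numerically zero
```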
<p>Thus, any non-trivial zeros of $\zeta(s)$ must have real part between 0 and 1.
This is the so-called "critical strip". Riemann hypothesized that these zeros
are not only between 0 and 1, but are in fact on the line dividing the strip
at real part equal to $1/2$. This line is called the "critical line". This is
Riemann's famous hypothesis: that all the non-trivial zeros of $\zeta(s)$ have
real part equal to $1/2$.</p>
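<p>mpmath can compute the non-trivial zeros directly: <code>zetazero(n)</code> returns the $n$-th zero in the upper half-plane, and as far as anyone has checked, they all have real part exactly $1/2$:</p>

```python
import mpmath

rho = mpmath.zetazero(1)      # first non-trivial zero
print(rho)                    # approximately 0.5 + 14.1347i
print(abs(mpmath.zeta(rho)))  # tiny: rho really is a zero of zeta
```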
<h3>Computational Verification</h3>
<p>Whenever you have a mathematical hypothesis, it is good to check if it is true
numerically. Riemann himself used some methods (not the same ones we use here)
to numerically estimate the first few non-trivial zeros of $\zeta(s)$, and
found that they lay on the critical line, hence the motivation for his
hypothesis. Here is an <a href="https://www.maths.tcd.ie/pub/HistMath/People/Riemann/Zeta/EZeta.pdf">English
translation</a>
of his original paper if you are interested.</p>
<p>If we verified that all the zeros in the critical strip from, say,
$\mathrm{Im}(s) = 0$ to $\mathrm{Im}(s) = N$ are in fact on the critical line
for some large $N$, then it would give evidence that the Riemann Hypothesis is
true. However, to be sure, this would not constitute a proof.
<a href="https://en.wikipedia.org/wiki/G._H._Hardy">Hardy</a> showed in 1914 that
$\zeta(s)$ has infinitely many zeros on the critical line, so only finding
finitely many of them would not suffice as a proof. (Although if we were to
find a counter-example, a zero <em>not</em> on the critical line, that WOULD
constitute a proof that the Hypothesis is false. However, there are strong
reasons to believe that the hypothesis is not false, so this would be unlikely
to happen.)</p>
<p>How would we verify that the zeros are all on the line $1/2$? We can find
zeros of $\zeta(s)$ numerically, but how would we know if the real part is
really exactly 0.5 and not 0.500000000000000000000000000000000001? And more
importantly, just because we find some zeros, it doesn't mean that we have all
of them. Maybe we can find a bunch of zeros on the critical line, but how
would we be sure that there aren't other zeros lurking around elsewhere on the
critical strip?</p>
<p>We want to find rigorous answers to these two questions:</p>
<ol>
<li>
<p>How can we count the number of zeros between $\mathrm{Im}(s) = 0$ and
$\mathrm{Im}(s) = N$ of $\zeta(s)$?</p>
</li>
<li>
<p>How can we verify that all those zeros lie on the critical line, that is,
they have real part equal to exactly $1/2$?</p>
</li>
</ol>
<h4>Counting Zeros Part 1</h4>
<p>To answer the first question, we can make use of a powerful theorem from
complex analysis called the <a href="https://en.wikipedia.org/wiki/Argument_principle#Generalized_argument_principle">argument
principle</a>.
The argument principle says that if $f$ is meromorphic inside and on some
closed contour $C$, and does not have any zeros or poles on $C$ itself, then</p>
<p>$$\frac{1}{2\pi i}\oint_C \frac{f'(z)}{f(z)}\,dz = \#\left\{\text{zeros of $f$
inside of C}\right\} - \#\left\{\text{poles of $f$
inside of C}\right\},$$ where all zeros and poles are counted with
multiplicity.</p>
<p>In other words, the integral on the left-hand side counts the number of zeros
of $f$ minus the number of poles of $f$ in a region. The argument principle is
quite easy to show given the Cauchy residue theorem (see the above linked
Wikipedia article for a proof). The expression $f'(z)/f(z)$ is called the
"<a href="https://en.wikipedia.org/wiki/Logarithmic_derivative">logarithmic
derivative</a> of $f$",
because it equals $\frac{d}{dz} \log(f(z))$ (although it makes sense even without
defining what "$\log$" means).</p>
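<p>Here is a toy instance of the argument principle (a hypothetical example, not from the post): a rational function with two zeros and one pole inside the circle $|z| = 2$, integrated with <code>mpmath.quad</code>:</p>

```python
import mpmath

def logderiv(z):
    # f(z) = (z - 1)*(z + 1)/(z - 0.5): zeros at +-1 and a pole at 0.5,
    # all inside |z| = 2. Its logarithmic derivative is a sum of simple
    # fractions.
    return 1/(z - 1) + 1/(z + 1) - 1/(z - mpmath.mpf('0.5'))

def integrand(t):
    # Parametrize the circle |z| = 2
    z = 2*mpmath.exp(1j*t)
    dz = 2j*mpmath.exp(1j*t)
    return logderiv(z)*dz

count = mpmath.quad(integrand, [0, 2*mpmath.pi])/(2j*mpmath.pi)
print(count)  # 2 zeros - 1 pole = 1
```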
<p>One should take a moment to appreciate the beauty of this result. The
left-hand side is an integral, something we generally think of as being a
continuous quantity. But it is always exactly equal to an integer. Results
such as these give us a further glimpse at how analytic functions and complex
analysis can produce theorems about number theory, a field which one would
naively think can only be studied via discrete means. In fact, these methods
are far more powerful than discrete methods. For many results in number
theory, we only know how to prove them using complex analytic means. So-called
<a href="https://en.wikipedia.org/wiki/Elementary_proof">"elementary" proofs</a> for
these results, or proofs that only use discrete methods and do not use complex
analysis, have not yet been found.</p>
<p>Practically speaking, the fact that the above integral is exactly an integer
means that if we compute it numerically and it comes out to something like
0.9999999, we know that it must in fact equal exactly 1. So as long as we get
a result that is near an integer, we can round it to the exact answer.</p>
<p>We can integrate a contour along the critical strip up to some $\mathrm{Im}(s)
= N$ to count the number of zeros up to $N$ (we have to make sure to account
for the poles. I go into more details about this when I actually compute the
integral below).</p>
<h4>Counting Zeros Part 2</h4>
<p>So using the argument principle, we can count the number of zeros in a region.
Now how can we verify that they all lie on the critical line? The answer lies
in the $Z(s)$ function defined above. By the points outlined in the previous
section, we can see that $Z(s)$ is zero exactly where $\zeta(s)$ is zero on
the critical strip, and it is not zero anywhere else. In other words,</p>
<div style="text-align: center;"> <b>the zeros of $Z(s)$ are exactly the non-trivial zeros of $\zeta(s)$.</b></div>
<p>This helps us because $Z(s)$ has a nice property on the critical line. First
we note that $Z(s)$ commutes with conjugation, that is $\overline{Z(s)} =
Z(\overline{s})$ (this isn't obvious from what I have shown, but it is true).
On the critical line $\frac{1}{2} + it$, we have</p>
<p>$$\overline{Z\left(\frac{1}{2} + it\right)} = Z\left(\overline{\frac{1}{2} +
it}\right) = Z\left(\frac{1}{2} - it\right).$$</p>
<p>However, $Z(s) = Z(1 - s)$, and $1 - \left(\frac{1}{2} - it\right) =
\frac{1}{2} + it$, so</p>
<p>$$\overline{Z\left(\frac{1}{2} + it\right)} = Z\left(\frac{1}{2} +
it\right),$$</p>
<p>which means that $Z\left(\frac{1}{2} + it\right)$ is real valued for real $t$.</p>
<p>This simplifies things a lot, because it is much easier to find zeros of a real
function. In fact, we don't even care about finding the zeros, only counting
them. Since $Z(s)$ is continuous, we can use a simple method: counting sign
changes. If a continuous real function changes sign $n$ times in an interval,
then it must have at least $n$ zeros in that interval. It may have more, for
instance, if some zeros are clustered close together, or if a zero has a
multiplicity greater than 1, but we know that there must be at least $n$.</p>
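<p>Counting sign changes is straightforward; for instance, with NumPy, $\cos t$ on $(0, 10)$ shows its 3 zeros near $\pi/2$, $3\pi/2$, and $5\pi/2$:</p>

```python
import numpy as np

t = np.linspace(0, 10, 1000)
vals = np.cos(t)
# Each sign change between consecutive samples witnesses at least one zero
changes = int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))
print(changes)  # 3
```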
<p>So our approach to verifying the Riemann Hypothesis is as such:</p>
<ol>
<li>
<p>Integrate $\frac{1}{2\pi i}\oint_C Z'(s)/Z(s)\,ds$ along a contour $C$
that runs along the critical strip up to some $\mathrm{Im}(s) = N$. The
integral will tell us there are exactly $n$ zeros in the contour, counting
multiplicity.</p>
</li>
<li>
<p>Try to find $n$ sign changes of $Z(1/2 + it)$ from $t=0\ldots N$. If we can
find $n$ of them, we are done. We have confirmed all the zeros are on the
critical line.</p>
</li>
</ol>
<p>Step 2 would fail if the Riemann Hypothesis is false, in which case a zero
wouldn't be on the critical line. But it would also fail if a zero has a
multiplicity greater than 1, since the integral would count it more times than
the sign changes. Fortunately, as it turns out, the Riemann Hypothesis has
been verified up to $N = 10^{13}$, and no one has yet found a zero of the
zeta function that has a multiplicity greater than 1, so we should not
expect that to happen (no one has yet found a counterexample to the Riemann
Hypothesis either).</p>
<h2>Verification with SymPy and mpmath</h2>
<p>We now use SymPy and mpmath to compute the above quantities. We use
<a href="https://www.sympy.org/">SymPy</a> to do symbolic manipulation for us, but the
heavy work is done by <a href="http://mpmath.org/doc/current/index.html">mpmath</a>.
mpmath is a pure Python library for arbitrary precision numerics. It is used
by SymPy under the hood, but it will be easier for us to use it directly. It
can do, among other things, numeric integration. When I first tried to do
this, I tried using the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.zeta.html"><code>scipy.special</code> zeta
function</a>,
but unfortunately, it does not support complex arguments.</p>
<p>First we do some basic imports</p>
<pre><code class="language-py">>>> from sympy import *
>>> import mpmath
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> s = symbols('s')
</code></pre>
<p>Define the completed zeta function $Z = \pi^{-s/2}\Gamma(s/2)\zeta(s)$.</p>
<pre><code class="language-py">>>> Z = pi**(-s/2)*gamma(s/2)*zeta(s)
</code></pre>
<p>We can verify that Z is indeed real for $\frac{1}{2} + it.$</p>
<pre><code class="language-py">>>> Z.subs(s, 1/2 + 0.5j).evalf()
-1.97702795164031 + 5.49690501450151e-17*I
</code></pre>
<p>We get a small imaginary part due to the way floating point arithmetic works.
Since it is below <code>1e-15</code>, we can safely ignore it.</p>
<p><code>D</code> will be the logarithmic derivative of <code>Z</code>.</p>
<pre><code class="language-py">>>> D = simplify(Z.diff(s)/Z)
>>> D
polygamma(0, s/2)/2 - log(pi)/2 + Derivative(zeta(s), s)/zeta(s)
</code></pre>
<p>This is $$\frac{\operatorname{polygamma}{\left(0,\frac{s}{2} \right)}}{2} -
\frac{\log{\left(\pi \right)}}{2} + \frac{
\zeta'\left(s\right)}{\zeta\left(s\right)}.$$</p>
<p>Note that logarithmic derivatives behave similarly to logarithms. The
logarithmic derivative of a product is the sum of logarithmic derivatives
(here, $\operatorname{polygamma}(0, s)$ is the logarithmic derivative of
$\Gamma(s)$, also known as the digamma function).</p>
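<p>This product rule for logarithmic derivatives is easy to confirm symbolically for concrete factors (a quick check, not from the original post):</p>

```python
from sympy import symbols, exp, sin, diff, simplify

s = symbols('s')
f = s**2 + 1
g = exp(s)*sin(s)

# Logarithmic derivative of a product = sum of logarithmic derivatives
lhs = diff(f*g, s)/(f*g)
rhs = diff(f, s)/f + diff(g, s)/g
print(simplify(lhs - rhs))  # 0
```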
<p>We now use
<a href="https://docs.sympy.org/latest/modules/utilities/lambdify.html#sympy.utilities.lambdify.lambdify"><code>lambdify</code></a>
to convert the SymPy expressions <code>Z</code> and <code>D</code> into functions that are evaluated
using mpmath. A technical difficulty here is that the derivative of the zeta
function $\zeta'(s)$ does not have a closed-form expression. <a href="http://mpmath.org/doc/current/functions/zeta.html?highlight=zeta#mpmath.zeta">mpmath's <code>zeta</code>
can evaluate
$\zeta'$</a>
but it doesn't yet work with <code>sympy.lambdify</code> (see <a href="https://github.com/sympy/sympy/issues/11802">SymPy issue
11802</a>). So we have to manually
define <code>"Derivative"</code> in lambdify, knowing that it will be the derivative of
<code>zeta</code> when it is called. Beware that this is only correct for this specific
expression where we know that <code>Derivative</code> will be <code>Derivative(zeta(s), s)</code>.</p>
<pre><code class="language-py">>>> Z_func = lambdify(s, Z, 'mpmath')
>>> D_func = lambdify(s, D, modules=['mpmath',
... {'Derivative': lambda expr, z: mpmath.zeta(z, derivative=1)}])
</code></pre>
<p>Now define a function to use the argument principle to count the number of
zeros up to $Ni$. Due to the symmetry $Z(s) = Z(1 - s)$, it is only necessary
to count zeros in the top half-plane.</p>
<p>We have to be careful about the poles of $Z(s)$ at 0 and 1. We can either
integrate right above them, or expand the contour to include them. I chose to
do the former, starting at $0.1i$. It is known that $\zeta(s)$ has no
zeros near the real axis on the critical strip. I could have also expanded the
contour to go around 0 and 1, and offset the result by 2 to account for the
integral counting those points as poles.</p>
<p>It has also been shown that there are no zeros on the lines $\mathrm{Re}(s) =
0$ or $\mathrm{Re}(s) = 1$, so we do not need to worry about that. If the
upper point of our contour happens to have zeros exactly on it, we would be
very unlucky, but even if this were to happen we could just adjust it up a
little bit.</p>
<img alt="Our contour" src="https://asmeurer.com/blog/contour-c.svg" width="608" />
<p>(Our contour. Created with <a href="https://www.geogebra.org/graphing/nmnsaywd">Geogebra</a>)</p>
<p><a href="http://mpmath.org/doc/current/calculus/integration.html#mpmath.quad"><code>mpmath.quad</code></a>
can take a list of points to compute a contour. The <code>maxdegree</code> parameter
allows us to increase the degree of the quadrature if it becomes necessary to
get an accurate result.</p>
<pre><code class="language-py">>>> def argument_count(func, N, maxdegree=6):
... return 1/(2*mpmath.pi*1j)*(mpmath.quad(func,
... [1 + 0.1j, 1 + N*1j, 0 + N*1j, 0 + 0.1j, 1 + 0.1j],
... maxdegree=maxdegree))
</code></pre>
<p>Now let's test it. Let's count the zeros of $$s^2 - s + 1/2$$ in the box
bounded by the above rectangle ($N = 10$).</p>
<pre><code class="language-py">>>> expr = s**2 - s + S(1)/2
>>> argument_count(lambdify(s, expr.diff(s)/expr), 10)
mpc(real='1.0', imag='3.4287545414000525e-24')
</code></pre>
<p>The integral is 1. We can confirm there is indeed one
zero in this box, at $\frac{1}{2} + \frac{i}{2}$.</p>
<pre><code class="language-py">>>> solve(s**2 - s + S(1)/2)
[1/2 - I/2, 1/2 + I/2]
</code></pre>
<p>Now define a function to count the number of sign changes in a list of real
values.</p>
<pre><code class="language-py">>>> def sign_changes(L):
... """
... Count the number of sign changes in L
...
... Values of L should all be real.
... """
... changes = 0
... assert im(L[0]) == 0, L[0]
... s = sign(L[0])
... for i in L[1:]:
... assert im(i) == 0, i
... s_ = sign(i)
... if s_ == 0:
... # Assume these got chopped to 0
... continue
... if s_ != s:
... changes += 1
... s = s_
... return changes
</code></pre>
<p>For example, for $\sin(s)$ from -10 to 10, there are 7 zeros: $0, \pm\pi,
\pm 2\pi, \pm 3\pi$ (note $3\pi\approx 9.42 < 10$).</p>
<pre><code class="language-py">>>> sign_changes(lambdify(s, sin(s))(np.linspace(-10, 10)))
7
</code></pre>
<p>Now compute sign changes along the critical line. We also make provisions in
case we have to increase the precision of mpmath to get correct results here.</p>
<pre><code class="language-py">>>> def compute_points(Z_func, N, npoints=10000, dps=15):
... import warnings
... old_dps = mpmath.mp.dps
... points = np.linspace(0, N, npoints)
... try:
... mpmath.mp.dps = dps
... L = [mpmath.chop(Z_func(i)) for i in 1/2 + points*1j]
... finally:
... mpmath.mp.dps = old_dps
... if L[-1] == 0:
... # mpmath will give 0 if the precision is not high enough, since Z
... # decays rapidly on the critical line.
... warnings.warn("You may need to increase the precision")
... return L
</code></pre>
<p>Now we can check how many zeros of $Z(s)$ (and hence non-trivial zeros of
$\zeta(s)$) we can find. According to
<a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Wikipedia</a>, the first few
non-trivial zeros of $\zeta(s)$ in the upper half-plane are 14.135, 21.022,
and 25.011.</p>
<p>First try up to $N=20$.</p>
<pre><code class="language-py">>>> argument_count(D_func, 20)
mpc(real='0.99999931531867581', imag='-3.2332902529067346e-24')
</code></pre>
<p>Mathematically, the above value <em>must</em> be an integer, so we know it is 1.</p>
<p>Now check the number of sign changes of $Z(s)$ from $\frac{1}{2}$ to
$\frac{1}{2} + 20i$.</p>
<pre><code class="language-py">>>> L = compute_points(Z_func, 20)
>>> sign_changes(L)
1
</code></pre>
<p>So it checks out. There is one zero between $0$ and $20i$ on the critical
strip, and it is in fact on the critical line, as expected!</p>
<p>Now let's verify the other two zeros from Wikipedia.</p>
<pre><code class="language-py">>>> argument_count(D_func, 25)
mpc(real='1.9961479945577916', imag='-3.2332902529067346e-24')
>>> L = compute_points(Z_func, 25)
>>> sign_changes(L)
2
>>> argument_count(D_func, 30)
mpc(real='2.9997317058520916', imag='-3.2332902529067346e-24')
>>> L = compute_points(Z_func, 30)
>>> sign_changes(L)
3
</code></pre>
<p>Both check out as well.</p>
<p>Since we are computing the points, we can go ahead and make a plot as well.
However, there is a technical difficulty. If you naively try to plot $Z(1/2 +
it)$, you will find that it decays rapidly, so fast that you cannot really
tell where it crosses 0:</p>
<pre><code class="language-py">>>> def plot_points_bad(L, N):
... npoints = len(L)
... points = np.linspace(0, N, npoints)
... plt.figure()
... plt.plot(points, L)
... plt.plot(points, [0]*npoints, linestyle=':')
>>> plot_points_bad(L, 30)
</code></pre>
<img src="https://asmeurer.com/blog/riemann-bad.svg" width="608" />
<p>So instead of plotting $Z(1/2 + it)$, we plot $\log(|Z(1/2 + it)|)$. The
logarithm will make the zeros go to $-\infty$, but these will be easy to see.</p>
<pre><code class="language-py">>>> def plot_points(L, N):
... npoints = len(L)
... points = np.linspace(0, N, npoints)
... p = [mpmath.log(abs(i)) for i in L]
... plt.figure()
... plt.plot(points, p)
... plt.plot(points, [0]*npoints, linestyle=':')
>>> plot_points(L, 30)
</code></pre>
<img src="https://asmeurer.com/blog/riemann-30.svg" width="608" />
<p>The spikes downward are the zeros.</p>
<p>Finally, let's check up to $N=100$. <a href="https://oeis.org/A072080">OEIS A072080</a>
gives the number of zeros of $\zeta(s)$ in the upper half-plane up to $10^n i$.
According to it, we should get 29 zeros between $0$ and $100i$.</p>
<pre><code class="language-py">>>> argument_count(D_func, 100)
mpc(real='28.248036536895913', imag='-3.2332902529067346e-24')
</code></pre>
<p>This is not near an integer. This means we need to increase the precision of
the quadrature (the <code>maxdegree</code> argument).</p>
<pre><code class="language-py">>>> argument_count(D_func, 100, maxdegree=9)
mpc(real='29.000000005970151', imag='-3.2332902529067346e-24')
</code></pre>
<p>And the sign changes...</p>
<pre><code class="language-py">>>> L = compute_points(Z_func, 100)
__main__:11: UserWarning: You may need to increase the precision
</code></pre>
<p>Our guard against the precision being too low was triggered. Try raising it
(the default dps is 15).</p>
<pre><code class="language-py">>>> L = compute_points(Z_func, 100, dps=50)
>>> sign_changes(L)
29
</code></pre>
<p>They both give 29. So we have verified the Riemann Hypothesis up to $100i$!</p>
<p>Here is a plot of these 29 zeros.</p>
<pre><code class="language-py">>>> plot_points(L, 100)
</code></pre>
<img src="https://asmeurer.com/blog/riemann-100.svg" width="608" />
<p>(remember that the spikes downward are the zeros)</p>
<h2>Conclusion</h2>
<p>$N=100$ takes a few minutes to compute, and I imagine larger and larger values
would require increasing the precision more, slowing it down even further, so
I didn't go higher than this. But it is clear that this method works.</p>
<p>This was just me playing around with SymPy and mpmath, but if I wanted to
actually verify the Riemann Hypothesis, I would try to find a more efficient
method of computing the above quantities. For the sake of simplicity, I used
$Z(s)$ for both the argument principle and sign changes computations, but it
would have been more efficient to use $\zeta(s)$ for the argument principle
integral, since it has a simpler formula. It would also be useful if there
were a formula with similar properties to $Z(s)$ (real on the critical line
with the same zeros as $\zeta(s)$), but that did not decay as rapidly.</p>
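<p>In fact, mpmath ships tools along exactly these lines: <code>siegelz(t)</code> computes the Riemann-Siegel $Z$ function, which is real for real $t$, has the same zeros as $\zeta(1/2 + it)$, and does not decay, and <code>nzeros(t)</code> counts the zeros up to height $t$:</p>

```python
import mpmath

# The Riemann-Siegel Z function is real on the critical line and does
# not decay; its sign change below brackets the first zero at t ~ 14.13
print(mpmath.siegelz(14))
print(mpmath.siegelz(15))

# Count zeros of zeta in the upper half-plane up to height 100
print(mpmath.nzeros(100))  # 29
```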
<p>Furthermore, for the argument principle integral, I would like to see precise
error estimates for the integral. We saw above with $N=100$ with the default
quadrature that we got a value of 28.248, which is not close to an integer.
This tipped us off that we should increase the quadrature, which ended up
giving us the right answer, but if the original number happened to be close to
an integer, we might have been fooled. Ideally, one would like to know the exact
quadrature degree needed. If you can get error estimates guaranteeing the
error for the integral will be less than 0.5, you can always round the answer
to the nearest integer. For the sign changes, you don't need to be as
rigorous, because simply seeing as many sign changes as you have zeros is
sufficient. However, one could certainly be more efficient in computing the
values along the interval, rather than just naively computing 10000 points and
raising the precision until it works, as I have done.</p>
<p>One would also probably want to use a faster integrator than mpmath (like one
written in C), and perhaps also find a faster-to-evaluate expression than the
one I used for $Z(s)$. It is also possible that one could special-case the
quadrature algorithm knowing that it will be computed on $\zeta'(s)/\zeta(s)$.</p>
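<p>For concreteness, here is a minimal sketch of the argument-principle count applied directly to $\zeta'(s)/\zeta(s)$ (again my own reconstruction; the rectangle corners and the height <code>T = 30</code> are arbitrary choices, and the pole of $\zeta(s)$ at $s = 1$ inside the contour contributes $-1$ to the winding number):</p>

```python
from mpmath import mp, quad, zeta, mpc, pi

mp.dps = 30

def logderiv(s):
    # mpmath's zeta can compute derivatives directly
    return zeta(s, derivative=1) / zeta(s)

T = 30
# Counterclockwise rectangle enclosing the nontrivial zeros with
# 0 < Im(s) < T, and also the simple pole of zeta at s = 1
corners = [mpc(2, -1), mpc(2, T), mpc(-1, T), mpc(-1, -1), mpc(2, -1)]
winding = quad(logderiv, corners) / (2j * pi)

# Argument principle: winding = (#zeros - #poles) inside the contour,
# so add 1 back for the pole at s = 1
n_zeros = winding + 1
print(n_zeros)  # approximately 3 for T = 30
```

<p>The three zeros below height 30 are at heights roughly 14.13, 21.02, and 25.01, so the count should come out to about 3.</p>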
<p>In this post I described the Riemann zeta function and the Riemann Hypothesis,
and showed how to computationally verify it. But I didn't really go over the
details of why the Riemann Hypothesis matters. I encourage you to watch the
videos in my <a href="https://www.youtube.com/playlist?list=PLrFrByaoJbcqKjzgJvLs2-spSmzP7jolT">YouTube
playlist</a>
if you want to know this. Among other things, the truth of the Riemann
Hypothesis would give a very precise bound on the distribution of prime
numbers. Also, the non-trivial zeros of $\zeta(s)$ are, in some sense, the
"spectrum" of the prime numbers, meaning they exactly encode the position of
every prime on the number line.</p></div>https://czgdp1807.github.io/gsocGagandeep Singh (czgdp1807)Gagandeep Singh (czgdp1807): Google Summer of Code - What & How?Sun, 16 Feb 2020 00:00:00 GMT
https://czgdp1807.github.io/gsoc/
<p>I am writing this blog entry to share some of my personal opinions about this awesome open source program, Google Summer of Code, or simply GSoC. I will address some common questions related to this program: what is it all about, and how do you get selected? I hope it helps you form your own reasons to participate.</p>
<p>So, let’s start by getting to know more about GSoC. Basically, it is a program funded and organised by Google LLC. Google first asks various open source organisations to apply through its portal. After shortlisting, the selected organisations are published on <a href="https://summerofcode.withgoogle.com/">their website</a>. These organisations cover a variety of areas like mathematics, biology, artificial intelligence, web development and much more. However, software development is at the core of all of them. Students are then expected to apply to the organisations of their interest; I will cover more about this part later in this blog entry. Organisations then select some really good applications to work on over the following summer.
So, in short, it’s much like matchmaking between students and organisations for summer projects. Note that it is not an internship.</p>
<p>Now, let’s see how you can become a successful GSoC student. I have listed some points below which you can consider while aiming for it:</p>
<ol>
<li>
<p><strong>Selecting the right organisation</strong> - This is the most important aspect of getting accepted. You should select the organisation which you find most interesting and most comfortable to work with. For example, I was interested in mathematics and software development, so I went for <code class="language-plaintext highlighter-rouge">SymPy</code>. I suggest you go through the <code class="language-plaintext highlighter-rouge">ideas list</code> of some organisations which work in your areas of interest. If you find them interesting and you think that you have the right skill set to form a nice project out of those ideas, then you have found “the right one”. For example, take a look at the ideas list of <code class="language-plaintext highlighter-rouge">SymPy</code> at <a href="https://github.com/sympy/sympy/wiki/GSoC-2020-Ideas">https://github.com/sympy/sympy/wiki/GSoC-2020-Ideas</a>. You should be able to find similar pages for other organisations too.</p>
</li>
<li>
<p><strong>Contribute as much as possible</strong> - Now that you have found the right one, let’s start fixing some issues. As we know, bugs are the most friendly enemies of any software; they are always there with the code. Many organisations list their issues/bugs on the GitHub issue tracker. You can take a look at that list and pick the ones you think you can fix by making a pull request. In addition, not all issues are bugs; some are about adding new features to the software too. For example, issues of <code class="language-plaintext highlighter-rouge">SymPy</code> are available at <a href="https://github.com/sympy/sympy/issues">https://github.com/sympy/sympy/issues</a>. Something similar is available for other organisations too.</p>
</li>
<li>
<p><strong>Make a proposal</strong> - Now comes the hard part: writing a well-organised proposal. I will mention here some tips for making a good proposal, which may increase your chances of getting accepted. First of all, write about yourself in your proposal: your programming experience, and past internships, if any. Then come to the idea which you want to work on over the summer. Maybe it is improving a module, making the code more efficient, or fixing bugs. Describe, in detail, the theory related to that idea, so that anyone not working in that field can get something out of it. Providing draft code is a big plus. Mention the details of your plan for the official GSoC timeline. There are usually three phases other than the community bonding period. Write about what you will do in each phase. Mention your weekly goals and be <strong>reasonable and practical</strong>. Optionally, you can write about your plans for the project after GSoC ends officially.
Don’t forget to discuss your ideas with your potential mentors and community, otherwise it will be like firing a shot in the dark and anything can happen. Follow the application template if your organisation has any. You can take a look at my proposal which is available <a href="https://docs.google.com/document/d/1oIeaROiJyglpbris7X1uZPRE5ZeO0pD1ygCFhBIeATI/edit?usp=sharing">here</a>.</p>
</li>
<li>
<p><strong>Interact with the community</strong> - Your attitude matters a lot. Be respectful to the members of the community. Follow their code of conduct, and ask your questions even if they sound trivial.</p>
</li>
</ol>
<p>Well, that’s all from my side. Best wishes for your GSoC journey. Don’t think much about the results until they are announced. Whether you are accepted or not, just know that you have made a difference even by making a simple comment on one of the issues or PRs. :-)</p>
<p>Below are some resources which might be helpful for developing your skill set for this program:</p>
<ol>
<li><a href="https://www.youtube.com/playlist?list=PL6gx4Cwl9DGAKWClAD_iKpNC0bGHxGhcx">Git tutorials</a></li>
<li><a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-0001-introduction-to-computer-science-and-programming-in-python-fall-2016/">Introduction to Computer Science and Programming in Python</a></li>
<li><a href="https://www.udemy.com/course/python-beyond-the-basics-object-oriented-programming/">Object Oriented Programming in Python</a>.</li>
</ol>
<p>Many more such resources are available at <a href="https://codezonediitj.github.io/resources/">https://codezonediitj.github.io/resources/</a></p>
<p>Want to add something? Just make a PR. Bye.</p>http://ishanaj.wordpress.com/?p=122Ishan Joshi (ishanaj)Ishan Joshi (ishanaj): Everything about SymPy’s Column moduleThu, 28 Nov 2019 19:56:42 GMT
https://ishanaj.wordpress.com/2019/11/29/everything-about-sympys-column-module/
<p>The Column class implemented in <a href="https://github.com/sympy/sympy/pull/17122">PR #17122</a> enables the
continuum mechanics module of SymPy to deal with column buckling related
calculations. The Column module can calculate the moment equation, deflection
equation, slope equation and the critical load for a column defined by a user.</p>
<p><strong>Example use-case of Column class:</strong></p>
<pre class="brush: python; collapse: false; title: ; wrap-lines: false; notranslate">
>>> from sympy.physics.continuum_mechanics.column import Column
>>> from sympy import Symbol, symbols
>>> E, I, P = symbols('E, I, P', positive=True)
>>> c = Column(3, E, I, 78000, top="pinned", bottom="pinned")
>>> c.end_conditions
{'bottom': 'pinned', 'top': 'pinned'}
>>> c.boundary_conditions
{'deflection': [(0, 0), (3, 0)], 'slope': [(0, 0)]}
>>> c.moment()
78000*y(x)
>>> c.solve_slope_deflection()
>>> c.deflection()
C1*sin(20*sqrt(195)*x/(sqrt(E)*sqrt(I)))
>>> c.slope()
20*sqrt(195)*C1*cos(20*sqrt(195)*x/(sqrt(E)*sqrt(I)))/(sqrt(E)*sqrt(I))
>>> c.critical_load()
pi**2*E*I/9
</pre>
<h1><strong>The Column class</strong></h1>
<p>The Column class is non-mutable,<span id="more-122"></span> which means that, unlike the Beam class, a user cannot change the attributes of the class once they are set in the object definition. Therefore, to change the attribute values one has to define a new object.</p>
<h3><strong>Reasons for creating a non-mutable class</strong></h3>
<ul><li>From a backward-compatibility perspective, it is always possible to adopt a different plan and add mutability later, but not the other way around.</li><li>Most things are immutable in SymPy, which is useful for caching etc. Matrix is an example where allowing mutability has led to many problems that are now impossible to fix without breaking backwards compatibility.</li></ul>
<h2><strong>Working of the column class:</strong></h2>
<p>
The <strong>governing equation</strong> for column buckling is:</p>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh3.googleusercontent.com/qfv6QbVnotFKUPebBoLgBNPjNz5uhN6g2-mbfBzDTR13Cb5z4BkAM7RHGerTtvqEzMzjQFL8r44iYeIVTm0OpYX6f0QWn2rCuz1qxKNVnvM6LHnTX9mfJ9pyBzPmBaFGZPrdiy-p" /></figure></div>
<p>If we determine the <strong>moment equation</strong> of the column, on which the buckling load is applied, and place it in the above equation, we can get the deflection by solving the resulting differential equation for <strong>y</strong>.</p>
<p><strong>Step-1: To determine the internal moment.</strong></p>
<p>This is simply done by assuming the deflection at an arbitrary cross section at a distance <strong>x</strong> from the bottom to be <strong>y</strong>, and then multiplying this by the load <strong>P</strong>. For an eccentric load, another moment of magnitude <strong>P*e</strong> is added.</p>
<figure class="wp-block-image size-large"><img alt="" class="wp-image-129" src="https://ishanaj.files.wordpress.com/2019/11/image-2.png?w=641" /></figure>
<p><strong>Simple load</strong> <strong>is given by</strong>: </p>
<figure class="wp-block-image"><img alt="" src="https://lh3.googleusercontent.com/mY4nhR2YWfTEITzlL8LFRGnPq2KXPcwbyAGajOWtTkMEBYtTKGya0n4r62RolTLImOGjXazs0RqAjOyAy3K94vrM4G_xZxRKV-GBdG2uULX9qap7xPsgI6ahIY4-tXbx1zYH2LNR" /></figure>
<p><strong>Eccentric load is given by: </strong></p>
<figure class="wp-block-image"><img alt="" src="https://lh3.googleusercontent.com/vIfQDO151xRZO2hcDi9pSPeyafqlYmTUBnr_zszHjiZiv07cOA6xnuu__5EslONxpPtQFY5RaUGLXefgY0AtHip6Y6LgANv3XZ1uo790QctxO-Q5qTledCkiTuKzmpaMzJ5LBt-e" /></figure>
<p><strong>Step-2: </strong>This moment can then be substituted in the governing equation and the resulting differential equation can be solved using SymPy’s <strong>dsolve()</strong> for the <strong>deflection y</strong>.</p>
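<p>To illustrate Step-2 in isolation (a sketch of my own, not the Column class internals): for the simple pinned-pinned case with a centric load, the governing equation reduces to <code>E*I*y'' + P*y = 0</code>, which <strong>dsolve()</strong> handles directly:</p>

```python
from sympy import Eq, Function, checkodesol, dsolve, symbols

x = symbols('x')
E, I, P = symbols('E I P', positive=True)
y = Function('y')

# Governing equation E*I*y''(x) = M(x) with M(x) = -P*y(x)
# for a pinned-pinned column under a centric axial load P
ode = Eq(E*I*y(x).diff(x, 2) + P*y(x), 0)

sol = dsolve(ode, y(x))
# sol is a linear combination of sin and cos with frequency sqrt(P/(E*I)),
# matching the C1*sin(...) deflection shown in the example above
print(sol)
```

<p>Applying the boundary conditions then eliminates the cosine term and the remaining constants, as described in the next sections.</p>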
<h2><strong>Applying different end-conditions</strong></h2>
<p>The above steps consider a simple example of a column pinned at both of its ends. But the end-condition of the column can vary, which will cause the moment equation to vary as well.</p>
<p>Currently <strong>four</strong> basic supports are implemented: pinned-pinned, fixed-fixed, fixed-pinned, and one end pinned with the other free.</p>
<p>Depending on the supports the moment due to applied load would change as:</p>
<ul><li><strong>Pinned-Pinned:</strong> no change in moment</li></ul>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh5.googleusercontent.com/1rNjrJP3SC6q80sWh9m5-EAj80_YLYSmCECNYMqGh0n24r7EAqP5D8b-joCrvjhV0pnoQeD5EWrcStUufFj8zgHGSIMkk-lrnRPfkxYIJP42RIzh6pCNxthuEP83wWDdhAAZ8I30" /></figure></div>
<div class="wp-block-image"><figure class="aligncenter is-resized"><img alt="" height="218" src="https://lh4.googleusercontent.com/ki5Fbllhkj2xCcEJiRQxPyuTDlJnQGPfjcvk2GNjnJq5tNd83--zKWRKMck4v9TRx7SINESjNxcmdsXaXh6Le1-fBp8pQLY7pVTy-H_o895Ts_813cFmjlQDfbp34i3RJ3Qvb9RR" width="217" /></figure></div>
<ul><li><strong>Fixed-fixed</strong>: reaction moment M is included</li></ul>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh6.googleusercontent.com/76IEGW9i83Am9oy_YQC7xm2BVGnEw_BcgM_bQxUgnVdWY4hBpgIIbhE4bG0C8FLpNYpajyoi7F_z8g4uVLfEZOfjv3dQBQ9fvLnIFVZUJvsIaleRSUVA7B1vrQsBj5FY3Ln3H6sx" /></figure></div>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh3.googleusercontent.com/8jXG4tyFMgAzJOanpHqj-d_f37-OkFrntfuhulynED1JjhNT6h_UkHmcAtDyN3Rem95uYIoKuhHUkslItdgIictxZC8dS_6mA9xbW-YxcDgMtyJ-L46UExUNH8VR8octca5v7RWa" /></figure></div>
<ul><li><strong>Fixed-pinned:</strong> </li></ul>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh5.googleusercontent.com/zphyPG-BZaoTYFrhY1NVWVza7oBX85d-K3HIDXF02bpcG_3gMsA8zMD-T6UO1X7GX4ssJYeok9IFCILq18GZMDkztjLdA_IA_Otq-qSM30Us22gwqPjPwPnhubYPG3jwtwzq0yML" /></figure></div>
<p>Here <strong>M</strong> is the restraint moment at <strong>B</strong> (which is fixed). To counter this, another moment is considered by applying a horizontal force <strong>F</strong> at point A.</p>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh5.googleusercontent.com/2Wlvbp1qFbq2p9uf057TCNM7StusOl0J5VAFU-qMQ0BhKTDDdtvP_l-tPgSkC9vmmsAaJd3QR8sEddl_z4LsAqo5FBKEvQNVF6eEssYdex61ENPUb4qWf6nFV7OkZV1Hy5ftDCdJ" /></figure></div>
<ul><li><strong>One pinned- other free:</strong></li></ul>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh6.googleusercontent.com/8vG470fp-wb2CzGMHfR5_chDnr4SqLGM0pAaRXXLSiDklpxXlcCoVOAh8q4dc97ZjWF3GM5-HvPGO1RzRhevUpDxl-cM6pHhztiTxJY-P7Ft04bfciVYK-FnzYJTZr4TvNqqX_wJ" /></figure></div>
<div class="wp-block-image"><figure class="aligncenter"><img alt="" src="https://lh3.googleusercontent.com/eoHyQWUNWNOTFDsofRof215piNvdk9OqETpA4YQWRlp904vpTx69eKRxW12NdAHGxTBNRs63oyP87cMT36DE5judlZWoVeh_7zpP7Vxq5MZ_RkUieecAZGgxSm8Hj5RpKrZIk87g" /></figure></div>
<h2> <strong>Solving for slope and critical load</strong></h2>
<p>Once we get the deflection equation, we can solve for the slope by differentiating it with respect to <strong>x</strong>. This is done using SymPy’s <strong>diff()</strong> function:</p>
<pre class="brush: python; collapse: false; title: ; wrap-lines: false; notranslate">
self._slope = self._deflection.diff(x)
</pre>
<h2><strong>Critical load</strong></h2>
<p>The critical load for the single-bow buckling condition can be easily determined by substituting the boundary conditions into the deflection equation and solving it for <strong>P</strong>, i.e., the load.</p>
<p><strong>Note:</strong> Even if the user provides the applied load, throughout the calculation we consider the load to be <strong>P</strong>. Whenever the <strong>moment()</strong>, <strong>slope()</strong>, <strong>deflection()</strong>, <strong>etc.</strong> methods are called, the variable <strong>P</strong> is replaced with the user’s value. This is done so that it is easier for us to calculate the critical load in the end.</p>
<pre class="brush: python; collapse: false; title: ; wrap-lines: false; notranslate">
defl_eqs = []
# taking the last two boundary conditions, which are actually
# the initial boundary conditions.
for point, value in self._boundary_conditions['deflection'][-2:]:
defl_eqs.append(self._deflection.subs(x, point) - value)
# C1, C2 already solved, solve for P
self._critical_load = solve(defl_eqs, P, dict=True)[0][P]
</pre>
<p>The case of the pinned-pinned end condition is a bit tricky. On solving the differential equation via <strong>dsolve()</strong>, the deflection comes out to be zero. This problem has been described in <a href="https://ishanaj.wordpress.com/2019/07/08/gsoc19-week-6-completing-the-column-class/#more-56">this</a> blog. Its calculation is handled a bit differently in the <a href="https://github.com/sympy/sympy/pull/17122/files#diff-00c8ee080a295764f42be4b0e448935dR225">code</a>. Instead of directly solving it via <strong>dsolve()</strong>, it is solved in steps, and the trivial solutions are removed. This technique not only solves for the deflection of the column, but simultaneously also calculates the critical load it can bear.</p>
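<p>A standalone sketch (my own reconstruction, not the module's code) of why the pinned-pinned case yields the critical load along the way: the nontrivial deflection <code>C1*sin(sqrt(P/(E*I))*x)</code> must vanish at the top, <code>x = L</code>, which forces <code>sqrt(P/(E*I))*L</code> to be a multiple of pi; the smallest such load is Euler's critical load:</p>

```python
from sympy import Eq, pi, solve, sqrt, symbols

E, I, P = symbols('E I P', positive=True)
L = 3  # the length used in the example at the top of the post

# y(L) = C1*sin(sqrt(P/(E*I))*L) = 0 with C1 != 0 requires the argument
# to be a multiple of pi; n = 1 gives the smallest (critical) load
critical_load = solve(Eq(sqrt(P/(E*I))*L, pi), P)[0]
print(critical_load)  # equal to pi**2*E*I/9, as in c.critical_load() above
```

<p>This matches Euler's formula pi**2*E*I/L**2 with L = 3.</p>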
<p>Although this may be considered a hack, it seems the best approach for now. I think it would be better if, in the future, <strong>dsolve()</strong> gained the ability to remove trivial solutions itself.</p>
<p>A problem that still persists is the calculation of the critical load for the pinned-fixed end condition. Currently, it has been marked as an XFAIL, since resolving it requires either <strong>solve()</strong> or <strong>solveset()</strong> to return the solution in the required form. An <a href="https://github.com/sympy/sympy/issues/17162">issue</a> has been raised on GitHub regarding this.</p>
<p>Hope that gives a crisp idea about the functioning of SymPy’s Column module.</p>
<p>Thanks!</p>https://asmeurer.com/blog/posts/quansight-labs-work-update-for-september-2019/Aaron Meurer (asmeurer)Aaron Meurer (asmeurer): Quansight Labs Work Update for September, 2019Mon, 07 Oct 2019 05:00:00 GMT
https://asmeurer.com/blog/posts/quansight-labs-work-update-for-september-2019/
<div><p><em>This post has been cross-posted on the <a href="https://labs.quansight.org/blog/2019/10/quansight-labs-work-update-for-september-2019/">Quansight Labs
Blog</a>.</em></p>
<p>As of November, 2018, I have been working at
<a href="https://www.quansight.com/">Quansight</a>. Quansight is a new startup founded by
the same people who started Anaconda, which aims to connect companies and open
source communities, and offers consulting, training, support and mentoring
services. I work under the heading of <a href="https://www.quansight.com/labs">Quansight
Labs</a>. Quansight Labs is a public-benefit
division of Quansight. It provides a home for a "PyData Core Team" which
consists of developers, community managers, designers, and documentation
writers who build open-source technology and grow open-source communities
around all aspects of the AI and Data Science workflow.</p>
<p>My work at Quansight is split between doing open source consulting for various
companies, and working on SymPy.
<a href="https://www.sympy.org/en/index.html">SymPy</a>, for those who do not know, is a
symbolic mathematics library written in pure Python. I am the lead maintainer
of SymPy.</p>
<p>In this post, I will detail some of the open source work that I have done
recently, both as part of my open source consulting, and as part of my work on
SymPy for Quansight Labs.</p>
<h3>Bounds Checking in Numba</h3>
<p>As part of work on a client project, I have been working on contributing code
to the <a href="https://numba.pydata.org">numba</a> project. Numba is a just-in-time
compiler for Python. It lets you write native Python code and with the use of
a simple <code>@jit</code> decorator, the code will be automatically sped up using LLVM.
This can result in code that is up to 1000x faster in some cases:</p>
<pre><code>
In [1]: import numba
In [2]: import numpy
In [3]: def test(x):
...: A = 0
...: for i in range(len(x)):
...: A += i*x[i]
...: return A
...:
In [4]: @numba.njit
...: def test_jit(x):
...: A = 0
...: for i in range(len(x)):
...: A += i*x[i]
...: return A
...:
In [5]: x = numpy.arange(1000)
In [6]: %timeit test(x)
249 µs ± 5.77 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [7]: %timeit test_jit(x)
336 ns ± 0.638 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [8]: 249/.336
Out[8]: 741.0714285714286
</code></pre>
<p>Numba only works for a subset of Python code, and primarily targets code that
uses NumPy arrays.</p>
<p>Numba, with the help of LLVM, achieves this level of performance through many
optimizations. One thing that it does to improve performance is to remove all
bounds checking from array indexing. This means that if an array index is out
of bounds, instead of receiving an <code>IndexError</code>, you will get garbage, or
possibly a segmentation fault.</p>
<pre><code>>>> import numpy as np
>>> from numba import njit
>>> def outtabounds(x):
... A = 0
... for i in range(1000):
... A += x[i]
... return A
>>> x = np.arange(100)
>>> outtabounds(x) # pure Python/NumPy behavior
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in outtabounds
IndexError: index 100 is out of bounds for axis 0 with size 100
>>> njit(outtabounds)(x) # the default numba behavior
-8557904790533229732
</code></pre>
<p>In numba pull request <a href="https://github.com/numba/numba/pull/4432">#4432</a>, I am
working on adding a flag to <code>@njit</code> that will enable bounds checks for array
indexing. This will remain disabled by default for performance purposes. But
you will be able to enable it by passing <code>boundscheck=True</code> to <code>@njit</code>, or by
setting the <code>NUMBA_BOUNDSCHECK=1</code> environment variable. This will make it
easier to detect out of bounds issues like the one above. It will work like</p>
<pre><code class="language-pycon">>>> @njit(boundscheck=True)
... def outtabounds(x):
... A = 0
... for i in range(1000):
... A += x[i]
... return A
>>> x = np.arange(100)
>>> outtabounds(x) # numba behavior in my pull request #4432
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: index is out of bounds
</code></pre>
<p>The pull request is still in progress, and many things such as the quality of
the error message reporting will need to be improved. This should make
debugging issues easier for people who write numba code once it is merged.</p>
<h3>removestar</h3>
<p><a href="https://www.asmeurer.com/removestar/">removestar</a> is a new tool I wrote to
automatically replace <code>import *</code> in Python modules with explicit imports.</p>
<p>For those who don't know, Python's <code>import</code> statement supports so-called
"wildcard" or "star" imports, like</p>
<pre><code class="language-py">from sympy import *
</code></pre>
<p>This will import every public name from the <code>sympy</code> module into the current
namespace. This is often useful because it saves on typing every name that is
used in the import line. This is especially useful when working interactively,
where you just want to import every name and minimize typing.</p>
<p>However, doing <code>from module import *</code> is generally frowned upon in Python. It is
considered acceptable when working interactively at a <code>python</code> prompt, or in
<code>__init__.py</code> files (removestar skips <code>__init__.py</code> files by default).</p>
<p>Some reasons why <code>import *</code> is bad:</p>
<ul>
<li>It hides which names are actually imported.</li>
<li>It is difficult both for human readers and static analyzers such as
pyflakes to tell where a given name comes from when <code>import *</code> is used. For
example, pyflakes cannot detect unused names (for instance, from typos) in
the presence of <code>import *</code>.</li>
<li>If there are multiple <code>import *</code> statements, it may not be clear which names
come from which module. In some cases, both modules may have a given name,
but only the second import will end up being used. This can break people's
intuition that the order of imports in a Python file generally does not
matter.</li>
<li><code>import *</code> often imports more names than you would expect. Unless the module
you import defines <code>__all__</code> or carefully <code>del</code>s unused names at the module
level, <code>import *</code> will import every public (doesn't start with an
underscore) name defined in the module file. This can often include things
like standard library imports or loop variables defined at the top-level of
the file. For imports from modules (from <code>__init__.py</code>), <code>from module import *</code> will include every submodule defined in that module. Using <code>__all__</code> in
modules and <code>__init__.py</code> files is also good practice, as these things are
also often confusing even for interactive use where <code>import *</code> is
acceptable.</li>
<li>In Python 3, <code>import *</code> is syntactically not allowed inside of a function
definition.</li>
</ul>
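<p>The shadowing pitfall from the third bullet above can be demonstrated with two standard-library imports standing in for star imports (the modules here are stand-ins of my own choosing, not from any particular codebase):</p>

```python
# math and cmath both define sqrt; these two lines mimic what happens
# with `from mod_a import *` followed by `from mod_b import *`
from math import sqrt
from cmath import sqrt  # silently shadows math.sqrt

print(sqrt(-1))  # 1j -- the second import won; math.sqrt would have raised
```

<p>Swapping the import order changes the program's behavior, which is exactly the intuition-breaking effect described above.</p>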
<p>Here are some official Python references stating not to use <code>import *</code> in
files:</p>
<ul>
<li>
<p><a href="https://docs.python.org/3/faq/programming.html?highlight=faq#what-are-the-best-practices-for-using-import-in-a-module">The official Python
FAQ</a>:</p>
<blockquote>
<p>In general, don’t use <code>from modulename import *</code>. Doing so clutters the
importer’s namespace, and makes it much harder for linters to detect
undefined names.</p>
</blockquote>
</li>
<li>
<p><a href="https://www.python.org/dev/peps/pep-0008/#imports">PEP 8</a> (the official
Python style guide):</p>
<blockquote>
<p>Wildcard imports (<code>from <module> import *</code>) should be avoided, as they
make it unclear which names are present in the namespace, confusing both
readers and many automated tools.</p>
</blockquote>
</li>
</ul>
<p>Unfortunately, if you come across a file in the wild that uses <code>import *</code>, it
can be hard to fix it, because you need to find every name in the file that is
imported from the <code>*</code> and manually add an import for it. Removestar makes this
easy by finding which names come from <code>*</code> imports and replacing the import
lines in the file automatically.</p>
<p>As an example, suppose you have a module <code>mymod</code> like</p>
<pre><code>mymod/
| __init__.py
| a.py
| b.py
</code></pre>
<p>with</p>
<pre><code class="language-py"># mymod/a.py
from .b import *
def func(x):
return x + y
</code></pre>
<p>and</p>
<pre><code class="language-py"># mymod/b.py
x = 1
y = 2
</code></pre>
<p>Then <code>removestar</code> works like:</p>
<pre><code>$ removestar -i mymod/
$ cat mymod/a.py
# mymod/a.py
from .b import y
def func(x):
return x + y
</code></pre>
<p>The <code>-i</code> flag causes it to edit <code>a.py</code> in-place. Without it, it would just
print a diff to the terminal.</p>
<p>For implicit star imports and explicit star imports from the same module,
<code>removestar</code> works statically, making use of
<a href="https://github.com/PyCQA/pyflakes">pyflakes</a>. This means none of the code is
actually executed. For external imports, it is not possible to work statically
as external imports may include C extension modules, so in that case, it
imports the names dynamically.</p>
<p><code>removestar</code> can be installed with pip or conda:</p>
<pre><code>pip install removestar
</code></pre>
<p>or if you use conda</p>
<pre><code>conda install -c conda-forge removestar
</code></pre>
<h3>sphinx-math-dollar</h3>
<p>In SymPy, we make heavy use of LaTeX math in our documentation. For example,
in our <a href="https://docs.sympy.org/dev/modules/functions/special.html#sympy.functions.special.hyper.hyper">special functions
documentation</a>,
most special functions are defined using a LaTeX formula, like <img alt="The docs for besselj" src="https://asmeurer.com/blog/besselj_docs.png" /></p>
<p>(from <a href="https://docs.sympy.org/dev/modules/functions/special.html#sympy.functions.special.bessel.besselj">https://docs.sympy.org/dev/modules/functions/special.html#sympy.functions.special.bessel.besselj</a>)</p>
<p>However, the source for this math in the docstring of the function uses RST
syntax:</p>
<pre><code class="language-py">class besselj(BesselBase):
"""
Bessel function of the first kind.
The Bessel `J` function of order `\nu` is defined to be the function
satisfying Bessel's differential equation
.. math ::
z^2 \frac{\mathrm{d}^2 w}{\mathrm{d}z^2}
+ z \frac{\mathrm{d}w}{\mathrm{d}z} + (z^2 - \nu^2) w = 0,
with Laurent expansion
.. math ::
J_\nu(z) = z^\nu \left(\frac{1}{\Gamma(\nu + 1) 2^\nu} + O(z^2) \right),
if :math:`\nu` is not a negative integer. If :math:`\nu=-n \in \mathbb{Z}_{<0}`
*is* a negative integer, then the definition is
.. math ::
J_{-n}(z) = (-1)^n J_n(z).
</code></pre>
<p>Furthermore, in SymPy's documentation we have configured it so that text
between `single backticks` is rendered as math. This was originally done for
convenience, as the alternative way is to write <code>:math:`\nu`</code> every
time you want to use inline math. But this has led to many people being
confused, as they are used to Markdown where `single backticks` produce
<code>code</code>.</p>
<p>A better way to write this would be if we could delimit math with dollar
signs, like <code>$\nu$</code>. This is how things are done in LaTeX documents, as well
as in things like the Jupyter notebook.</p>
<p>With the new <a href="https://www.sympy.org/sphinx-math-dollar/">sphinx-math-dollar</a>
Sphinx extension, this is now possible. Writing <code>$\nu$</code> produces $\nu$, and
the above docstring can now be written as</p>
<pre><code class="language-py">class besselj(BesselBase):
"""
Bessel function of the first kind.
The Bessel $J$ function of order $\nu$ is defined to be the function
satisfying Bessel's differential equation
.. math ::
z^2 \frac{\mathrm{d}^2 w}{\mathrm{d}z^2}
+ z \frac{\mathrm{d}w}{\mathrm{d}z} + (z^2 - \nu^2) w = 0,
with Laurent expansion
.. math ::
J_\nu(z) = z^\nu \left(\frac{1}{\Gamma(\nu + 1) 2^\nu} + O(z^2) \right),
if $\nu$ is not a negative integer. If $\nu=-n \in \mathbb{Z}_{<0}$
*is* a negative integer, then the definition is
.. math ::
J_{-n}(z) = (-1)^n J_n(z).
</code></pre>
<p>We also plan to add support for <code>$$double dollars$$</code> for display math so that <code>.. math ::</code> is no longer needed either.</p>
<p>For end users, the documentation on <a href="https://docs.sympy.org">docs.sympy.org</a>
will continue to render exactly the same, but for developers, it is much
easier to read and write.</p>
<p>This extension can be easily used in any Sphinx project. Simply install it
with pip or conda:</p>
<pre><code>pip install sphinx-math-dollar
</code></pre>
<p>or</p>
<pre><code>conda install -c conda-forge sphinx-math-dollar
</code></pre>
<p>Then enable it in your <code>conf.py</code>:</p>
<pre><code class="language-py">extensions = ['sphinx_math_dollar', 'sphinx.ext.mathjax']
</code></pre>
<h3>Google Season of Docs</h3>
<p>The above work on sphinx-math-dollar is part of work I have been doing to
improve the tooling around SymPy's documentation. This has been to assist our
technical writer Lauren Glattly, who is working with SymPy for the next three
months as part of the new <a href="https://developers.google.com/season-of-docs/">Google Season of
Docs</a> program. Lauren's project
is to improve the consistency of our docstrings in SymPy. She has already
identified many key ways our docstring documentation can be improved, and is
currently working on a style guide for writing docstrings. Some of the issues
that Lauren has identified require improved tooling around the way the HTML
documentation is built to fix. So some other SymPy developers and I have been
working on improving this, so that she can focus on the technical writing
aspects of our documentation.</p>
<p>Lauren has created a draft style guide for documentation at
<a href="https://github.com/sympy/sympy/wiki/SymPy-Documentation-Style-Guide">https://github.com/sympy/sympy/wiki/SymPy-Documentation-Style-Guide</a>. Please
take a moment to look at it and if you have any feedback on it, comment below
or write to the SymPy mailing list.</p></div>https://sc0rpi0n101.github.io/2019/08/week-12-the-final-week/Nikhil Maan (Sc0rpi0n101)Nikhil Maan (Sc0rpi0n101): Week 12: The Final WeekFri, 23 Aug 2019 00:00:00 GMT
https://sc0rpi0n101.github.io/2019/08/week-12-the-final-week/
<p>“Software is like entropy: It is difficult to grasp, weighs nothing, and obeys the Second Law of Thermodynamics; i.e., it always increases.” — Norman Augustine
Welcome everyone, this is your host Nikhil Maan aka Sc0rpi0n101 and this week will be the last week of coding for GSoC 2019. It is time to finish work now.
The C Parser Travis Build Tests Documentation The C Parser I completed the C Parser last week along with the documentation for the module.https://sc0rpi0n101.github.io/2019/08/week-11-the-other-parser/Nikhil Maan (Sc0rpi0n101)Nikhil Maan (Sc0rpi0n101): Week 11: The Other ParserThu, 22 Aug 2019 00:00:00 GMT
https://sc0rpi0n101.github.io/2019/08/week-11-the-other-parser/
<p>Welcome everyone, this is your host Nikhil Maan aka Sc0rpi0n101 and this week we’re talking about the C parser.
The Fortran Parser The C Parser Documentation Travis Build The Fortran Parser The Fortran Parser is complete. The Pull Request has also been merged. The parser is merged in master and will be a part of the next SymPy release. You can check out the source code for the Parser at the Pull Request.https://www.shubhamjha.com/posts/GSoC-Week-12-(The-Final-Week)Shubham Kumar Jha (ShubhamKJha)Shubham Kumar Jha (ShubhamKJha): GSoC 2019: Week 12 (The Final Week)Tue, 20 Aug 2019 18:30:00 GMT
https://www.shubhamjha.com/posts/GSoC-Week-12-(The-Final-Week)/
<p>The last week of coding period is officially over. A summary of the work done during this week is:</p>
<ul>
<li><a href="https://github.com/sympy/sympy/pull/17379">#17379</a> is now complete and currently under review. I will try to get it merged within this week.</li>
<li><a href="https://github.com/sympy/sympy/pull/17392">#17392</a> still needs work. I will try to put a closure to this by the end of week.</li>
<li><a href="https://github.com/sympy/sympy/pull/17440">#17440</a> was started. It attempts to add a powerful (but optional) SAT solving engine to SymPy (<a href="https://pypi.org/project/pycosat/">pycosat</a>). The performance gain from the SAT solver is substantial here: Using this
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><table class="rouge-table"><tbody><tr><td class="rouge-gutter gl"><pre class="lineno">1
2
3
4
</pre></td><td class="rouge-code"><pre>from sympy import *
from sympy.abc import x
r = random_poly(x, 100, -100, 100)
ans = ask(Q.positive(r), Q.positive(x))
</pre></td></tr></tbody></table></code></pre></div> </div>
<p>The performance is like</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><table class="rouge-table"><tbody><tr><td class="rouge-gutter gl"><pre class="lineno">1
2
3
4
5
6
7
8
</pre></td><td class="rouge-code"><pre># In master
| `- 0.631 check_satisfiability sympy/assumptions/satask.py:30
| `- 0.607 satisfiable sympy/logic/inference.py:38
| `- 0.607 dpll_satisfiable sympy/logic/algorithms/dpll2.py:21
# With pycosat
| `- 0.122 check_satisfiability sympy/assumptions/satask.py:30
| `- 0.098 satisfiable sympy/logic/inference.py:39
| `- 0.096 pycosat_satisfiable sympy/logic/algorithms/pycosat_wrapper.py:11
</pre></td></tr></tbody></table></code></pre></div> </div>
<p>It is finished and under review now.</p>
</li>
</ul>
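<p>Both timings above measure the same task: CNF satisfiability, where a formula is a list of clauses and each clause is a list of nonzero integers (a positive integer is a variable, a negative one its negation), which is the encoding pycosat itself uses. A toy DPLL solver in that encoding (a hypothetical illustration, not SymPy's actual <code class="language-plaintext highlighter-rouge">dpll2.py</code> code) fits in a page:</p>

```python
def dpll(clauses, assignment=None):
    """Tiny DPLL SAT solver over pycosat-style clause lists:
    each clause is a list of nonzero ints, where a positive int
    is a variable and a negative int is its negation."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        # Drop clauses already satisfied by the current assignment.
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue
        # Remove falsified literals; an empty clause means a conflict.
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # naive branching choice
    for value in (True, False):
        model = dpll(simplified, {**assignment, var: value})
        if model is not None:
            return model
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
model = dpll([[1, -2], [2, 3], [-1, -3]])
```

<p>Production solvers add unit propagation, watched literals and much smarter branching; that engineering gap is roughly where the 0.6&nbsp;s versus 0.1&nbsp;s difference in the profile above comes from.</p>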
<p>Also, with the end of GSoC 2019, final evaluations have started. I will be writing a final report on the whole project by the end of this week.</p>
<p>So far it has been a great and enriching experience for me. It was my first attempt at GSoC and I am lucky to have gotten such exposure. I acknowledge that I started with only an abstract idea of the project, but I now understand both the need for and the code of <code class="language-plaintext highlighter-rouge">New Assumptions</code> pretty well (thanks to <a href="https://github.com/asmeurer">Aaron</a>, who wrote most of it). The system is still in its early phases and needs a lot more work. I am happy to be a part of it and I will be available to work on it.</p>
<p>This is the last weekly report but I will still be contributing to SymPy and open source in general. I will try to write more of such experiences through this portal. Till then, Good bye and thank you!</p>http://ishanaj.wordpress.com/?p=113Ishan Joshi (ishanaj)Ishan Joshi (ishanaj): GSoC’19: Week-12 – The Final wrap-upTue, 20 Aug 2019 17:10:27 GMT
https://ishanaj.wordpress.com/2019/08/20/gsoc19-week-12-the-final-wrap-up/
<p>This was the last week of the coding
period. With not much work left, the goal was to wrap up the PRs.</p>
<p>The week started with the merge of <a href="https://github.com/sympy/sympy/pull/17001">PR #17001</a>, which implemented a method <strong>cut_section()</strong> in the polygon class, in order to get the two new polygons formed when a polygon is cut by a line. After this, a new method <strong>first_moment_of_area()</strong> was added in <a href="https://github.com/sympy/sympy/pull/17153">PR #17153</a>. This method used <strong>cut_section()</strong> for its implementation. Tests for the same were added in this PR, and the existing documentation was improved. I also renamed the <strong>polar_modulus()</strong> function to <strong>polar_second_moment_of_area()</strong>, which is a more general term than the previous name. This PR also got <strong>merged</strong> later on.</p>
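<p>A quick sketch of how the two new methods fit together, using a unit square (a made-up example, not taken from the PRs' test suites):</p>

```python
from sympy import Line, Point, Polygon, Rational

# Unit square, cut by the horizontal line y = 1/2.
square = Polygon(Point(0, 0), Point(1, 0), Point(1, 1), Point(0, 1))
cut = Line(Point(0, Rational(1, 2)), Point(1, Rational(1, 2)))
upper, lower = square.cut_section(cut)

# first_moment_of_area() builds on cut_section() internally;
# it returns the first moments about the centroidal x- and y-axes.
Qx, Qy = square.first_moment_of_area()
```

<p>The two pieces returned by <strong>cut_section()</strong> partition the original area, which makes for an easy sanity check.</p>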
<p>Now, we are left with two more PR’s to go.
<a href="https://github.com/sympy/sympy/pull/17122">PR #17122</a> (Column
Buckling) and <a href="https://github.com/sympy/sympy/pull/17345">PR #17345</a>
(Beam diagram). The column buckling PR probably requires a little more
documentation. I will surely look into it and add some more explanations and references.
Also, the beam diagram PR has been completed and documented. A few more
discussions on how it works, and we will be ready with it.<span id="more-113"></span></p>
<p>I believe that by the end of this week
both of these will finally get a merge.</p>
<p>Another task that remains is the implementation of the <a href="https://github.com/sympy/sympy/issues/17302">Truss class</a>. Some rigorous debate and discussion still needs to happen before we start its implementation. Once we agree on the implementation needs and API, it won’t be difficult to write.</p>
<p>Also, since the final evaluations have
started I will be writing the project report which I have to submit before the
next week ends.</p>
<p>Since officially the coding period ends here, there would be no ToDo’s for the next week, just the final wrapping up and will surely try to complete the work that is still left.</p>
<p>Will keep you updated!</p>
<p>Thanks! </p>https://arighnaiitg.github.io/2019-08-20-gsoc-week12/Arighna Chakrabarty (arighnaiitg)Arighna Chakrabarty (arighnaiitg): GSoC Week 12 !!Tue, 20 Aug 2019 07:00:00 GMT
https://arighnaiitg.github.io/2019-08-20-gsoc-week12/
<p>Week 12 ends.. -
So, finally after a long summer GSoC has come to an end!! It has been a great experience, and something which I will cherish for the rest of my life. I would like to thank my mentor Sartaj, who has been guiding me through the thick and thin of times....https://czgdp1807.github.io/week_12Gagandeep Singh (czgdp1807)Gagandeep Singh (czgdp1807): Week 12 - Ending GSoC 2019Tue, 20 Aug 2019 00:00:00 GMT
https://czgdp1807.github.io/week_12/
<p>As the title suggests, with the third phase, the journey of my GSoC 2019 comes to an end. It was full of challenges, learning experiences, and above all, interaction with the open source community of <code class="language-plaintext highlighter-rouge">SymPy</code>.<br />
In this blog post I will share with you the work done between phase 2 and phase 3, in terms of PRs, merged and open.</p>
<p><strong>Merged</strong></p>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17174">#17174</a> : In this PR, Gaussian ensembles were added to <code class="language-plaintext highlighter-rouge">sympy.stats</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17304">#17304</a> : While working on the above PR, I got an idea to open this one to add circular ensembles to <code class="language-plaintext highlighter-rouge">sympy.stats</code>. I learned a lot about the Haar measure while working on this.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17306">#17306</a>: This PR added matrices with random expressions. The challenging part of this PR was to generate canonical results for passing the tests.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17336">#17336</a> : This was related to bug fix in <code class="language-plaintext highlighter-rouge">Q.ask</code> and <code class="language-plaintext highlighter-rouge">Matrix</code>. Take a look at an example <a href="https://github.com/sympy/sympy/pull/17336#issue-304058013">here</a>.</p>
</li>
</ul>
<p><strong>Open</strong></p>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17387">#17387</a> : This PR aims to add support for assumptions of dependence among random variables, such as <code class="language-plaintext highlighter-rouge">Covariance</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17146">#17146</a> : This PR is in its last stages to fix and upgrade the <code class="language-plaintext highlighter-rouge">Range</code> set, and we are finalizing a few things, like changes in the output of <code class="language-plaintext highlighter-rouge">Range</code>. As planned, I was successful at writing exhaustive and systematic tests.</p>
</li>
</ul>
<p>Well, now, time to say goodbye! It was a nice experience writing about my journey in this blog. If you have read this from the beginning, then thanks a lot, buddy, and I wish you acceptance into GSoC 2020. Keep Open Sourcing :D</p>
https://czgdp1807.github.io/z_final_report/
<p>This report summarizes the work done in my GSoC 2019 project, <strong>Enhancement of Statistics Module</strong> with SymPy. A step-by-step development of the project is available at <a href="https://czgdp1807.github.io">czgdp1807.github.io</a>.</p>
<p><strong>About Me</strong></p>
<p>I am a third year Bachelor of Technology student at Indian Institute of Technology, Jodhpur in the department of Computer Science and Engineering.</p>
<p><strong>Project Outline</strong></p>
<p>The project plan was focused on the following areas of statistics that were required to be added to <code class="language-plaintext highlighter-rouge">sympy.stats</code>.</p>
<ol>
<li><strong>Community Bonding</strong> - I was supposed to add, Dirichlet Distribution, Multivariate Ewens Distribution, Multinomial Distribution, Negative multinomial distribution, and Generalized multivariate log-gamma distribution to <code class="language-plaintext highlighter-rouge">sympy.stats.joint_rv_types</code>.</li>
<li><strong>Phase 1</strong> - I was supposed to work on stochastic processes, primarily on Markov chains, including their API design, algorithms, and implementation.</li>
<li><strong>Phase 2</strong> - I was expected to work on random matrices, including Gaussian ensembles and matrices with random expressions as their elements.</li>
<li><strong>Phase 3</strong> - I planned to work on assumptions of dependence, improving result generation by <code class="language-plaintext highlighter-rouge">sympy.stats</code> and improving other modules so that <code class="language-plaintext highlighter-rouge">sympy.stats</code> can function properly.</li>
</ol>
<p><strong>Pull Requests</strong></p>
<p>This section describes the actual work done during the coding period in terms of merged PRs.</p>
<ol>
<li><strong>Community Bonding</strong></li>
</ol>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16576">#16576</a>: This PR added <code class="language-plaintext highlighter-rouge">Dirichlet</code> and <code class="language-plaintext highlighter-rouge">MultivariateEwens</code> distributions.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16808">#16808</a> : This PR added <code class="language-plaintext highlighter-rouge">Multinomial</code> and <code class="language-plaintext highlighter-rouge">NegativeMultinomial</code> distribution.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16810">#16810</a> : This PR improved the API of <code class="language-plaintext highlighter-rouge">Sum</code> by allowing <code class="language-plaintext highlighter-rouge">Range</code> as the limits.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16825">#16825</a> : This PR in continuation, added <code class="language-plaintext highlighter-rouge">GeneralizedMultivariateLogGamma</code> distribution. This was an interesting one due to the complexity involved in its PDF.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16834">#16834</a> : This PR enhanced the <code class="language-plaintext highlighter-rouge">Multinomial</code> and <code class="language-plaintext highlighter-rouge">NegativeMultinomial</code> distributions by allowing symbolic dimensions for them.</p>
</li>
</ul>
<ol>
<li><strong>Phase 1</strong></li>
</ol>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16897">#16897</a> : This was related to <code class="language-plaintext highlighter-rouge">sympy.core</code> and it helped in removing disparity in the results of special function <code class="language-plaintext highlighter-rouge">gamma</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16908">#16908</a> : This PR improved <code class="language-plaintext highlighter-rouge">sympy.stats.frv</code> by allowing conditions with foreign symbols.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16913">#16913</a> : This removed the unreachable code from <code class="language-plaintext highlighter-rouge">sympy.stats.frv</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16914">#16914</a> : This PR allowed symbolic dimensions to <code class="language-plaintext highlighter-rouge">MultivariateEwens</code> distribution.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16929">#16929</a> : This one was for the <code class="language-plaintext highlighter-rouge">sympy.tensor</code> module. It optimized the <code class="language-plaintext highlighter-rouge">ArrayComprehension</code> and covered some corner cases.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16981">#16981</a> : This PR added the architecture of stochastic processes. It also added discrete Markov chain to <code class="language-plaintext highlighter-rouge">sympy.stats</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17030">#17030</a> : Some features, like <code class="language-plaintext highlighter-rouge">joint_distribution</code>, were added to stochastic processes in this PR.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17046">#17046</a> : Some common properties of discrete Markov chains, like fundamental matrix, fixed row vector were added.</p>
</li>
</ul>
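<p>The stochastic process work in #16981 and #17046 can be exercised roughly like this (a small sketch assuming the <code class="language-plaintext highlighter-rouge">DiscreteMarkovChain</code> API as merged; the transition matrix here is made up):</p>

```python
from sympy import Matrix, Rational, Eq
from sympy.stats import DiscreteMarkovChain, P

# Two-state chain with a made-up transition matrix:
# row i gives the probabilities of moving from state i.
T = Matrix([[Rational(1, 2), Rational(1, 2)],
            [Rational(1, 3), Rational(2, 3)]])
X = DiscreteMarkovChain('X', [0, 1], T)

# One-step transition probability, read off row 1 of T.
p = P(Eq(X[1], 0), Eq(X[0], 1))
```

<p>Longer-horizon queries such as <code class="language-plaintext highlighter-rouge">P(Eq(X[3], 0), Eq(X[1], 1))</code> follow the same pattern and reduce to products of the transition matrix.</p>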
<ol>
<li><strong>Phase 2</strong></li>
</ol>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16934">#16934</a> : The bug fixes for <code class="language-plaintext highlighter-rouge">sympy.stats.joint_rv_types</code> were completed, and further work has been handed over to my co-student, Ritesh.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16962">#16962</a> : This was continuation of the work done in phase 1 for allowing symbolic dimensions in finite random variables. As I planned, this PR got merged in phase 2, after some changes.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17083">#17083</a>: The work done in this PR laid the platform and motivation for the next one. The algorithm that got merged was a bit difficult to extend and maintain. Thanks to Francesco for his <a href="https://github.com/sympy/sympy/pull/17083#issuecomment-508256359">comment</a> motivating me to re-think the whole framework.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17163">#17163</a> : This was one of the most challenging PRs of the project, because it involved re-designing the algorithm, refactoring the code and, moreover, a lot of thinking. The details can be found at <a href="https://github.com/sympy/sympy/pull/17163#issuecomment-510939984">this comment</a>.</p>
</li>
</ul>
<ol>
<li><strong>Phase 3</strong></li>
</ol>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17174">#17174</a> : In this PR, Gaussian ensembles were added to <code class="language-plaintext highlighter-rouge">sympy.stats</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17304">#17304</a> : While working on the above PR, I got an idea to open this one to add circular ensembles to <code class="language-plaintext highlighter-rouge">sympy.stats</code>. I learned a lot about the Haar measure while working on it.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17306">#17306</a>: This PR added matrices with random expressions. The challenging part of this PR was to generate canonical results for passing the tests.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17336">#17336</a> : This was related to bug fix in <code class="language-plaintext highlighter-rouge">Q.ask</code> and <code class="language-plaintext highlighter-rouge">Matrix</code>. Take a look at an example <a href="https://github.com/sympy/sympy/pull/17336#issue-304058013">here</a>.</p>
</li>
</ul>
<p><strong>Miscellaneous Work</strong></p>
<p>This section contains some of my PRs related to miscellaneous issues like workflow improvement, etc.</p>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16899">#16899</a> : This was a workflow-related PR to ignore the <code class="language-plaintext highlighter-rouge">.vscode</code> folder.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17003">#17003</a> : This PR ignored the <code class="language-plaintext highlighter-rouge">__pycache__</code> folder by adding it to the <code class="language-plaintext highlighter-rouge">.gitignore</code> file.</p>
</li>
</ul>
<p><strong>Future Work</strong></p>
<p>The following PRs are open and are in their last stages for merging. Any interested student can take a look at them to extend my work in their GSoC project.</p>
<ul>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17387">#17387</a> : This PR aims to add support for assumptions of dependence among random variables, such as <code class="language-plaintext highlighter-rouge">Covariance</code>.</p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17146">#17146</a> : This PR is in its last stages to fix and upgrade the <code class="language-plaintext highlighter-rouge">Range</code> set, and we are finalizing a few things, like changes in the output of <code class="language-plaintext highlighter-rouge">Range</code>. As planned, I was successful at writing exhaustive and systematic tests.</p>
</li>
</ul>
<p>Apart from the above, work on densities of circular ensembles remains to be done. One can read Theorem 3, page 8 of <a href="https://arxiv.org/pdf/1103.3408.pdf">this paper</a>.</p>https://divyanshu132.github.io//gsoc-week-12Divyanshu Thakur (divyanshu132)Divyanshu Thakur (divyanshu132): GSoC 2019 - Week 11 and 12 - Phase-III CompletionMon, 19 Aug 2019 00:00:00 GMT
https://divyanshu132.github.io//gsoc-week-12
<p>We’ve reached the end of GSoC 2019, the end of a really productive and wonderful summer. In the last two weeks I worked on documenting polycyclic groups, which got merged as well; here is the PR <a href="https://github.com/sympy/sympy/pull/17399">sympy/sympy#17399</a>.</p>
<p>Also, the PR on Induced-pcgs and exponent vector for polycyclic subgroups got merged <a href="https://github.com/sympy/sympy/pull/17317">sympy/sympy#17317</a>.</p>
<p>Let’s have a look at some of the highlights of documentation.</p>
<ul>
<li>The parameters of both the classes (<code class="highlighter-rouge">PolycyclicGroup</code> and <code class="highlighter-rouge">Collector</code>) have been discussed in detail.</li>
<li>Conditions for a word to be collected or uncollected are highlighted.</li>
<li>Computation of polycyclic presentations has been explained in detail, highlighting the sequence in which a presentation is computed with the corresponding pcgs and polycyclic series elements used.</li>
<li>Other methods like <code class="highlighter-rouge">subword_index</code>, <code class="highlighter-rouge">exponent_vector</code>, <code class="highlighter-rouge">depth</code>, etc., are also documented.</li>
</ul>
<p>An example is provided for every functionality.
For more details one can visit:
<a href="https://docs.sympy.org/dev/modules/combinatorics/pc_groups.html">https://docs.sympy.org/dev/modules/combinatorics/pc_groups.html</a></p>
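<p>A minimal sketch of the documented workflow (assuming the <code class="highlighter-rouge">polycyclic_group</code> entry point described in those docs; S4 is solvable, so it admits a polycyclic presentation):</p>

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)
# Build the polycyclic group and its collector.
PcGroup = G.polycyclic_group()
collector = PcGroup.collector
pcgs = collector.pcgs  # the polycyclic generating sequence
```

<p>From the collector one can then use the documented methods, e.g. <code class="highlighter-rouge">exponent_vector</code>, to express a group element in terms of the pcgs.</p>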
<p>Now, I’m supposed to prepare a final report presenting all the work done, and will update with the report next week.
In addition to the report preparation, I’ll try to add a <code class="highlighter-rouge">Parameters</code> section to the <code class="highlighter-rouge">docstrings</code> of various classes and methods of <code class="highlighter-rouge">pc_groups</code>.</p>
https://jmig5776.github.io//gsoc-final-report
<p>It’s finally the last week of the Google Summer of Code 2019. Before I start
discussing my work over the summer I would like to highlight my general
experience with the GSoC program.</p>
<p>GSoC gives students all over the world the opportunity to connect and
collaborate with some of the best programmers involved in open source from
around the world. I found the program tremendously enriching, both for the
depth in which I got to explore some of the areas involved in my project
and for the exposure it gave me to areas I had no previous idea about.
The role of the mentor in GSoC is the most important one, and I consider myself
very lucky to have had Yathartha Anirudh Joshi and Amit Kumar as my mentors.
Amit and Yathartha have been tremendously encouraging and helpful throughout the summer.
I would also like to mention the importance of the entire community involved;
just being part of the SymPy community has been valuable in itself.</p>
<h3 id="work-completed">Work Completed</h3>
<p>Here is a list of PRs which were opened during the span of GSoC:</p>
<ol>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16976">#16976 Added <code class="highlighter-rouge">_solve_modular</code> for handling equations a - Mod(b, c) = 0 where only b is expr</a></p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16890">#16890 Fixing lambert in bivariate to give all real solutions</a></p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16960">#16960 (Don’t Merge)(Prototype) Adding abs while converting equation to log form to get solved by <code class="highlighter-rouge">_lambert</code></a></p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17043">#17043 Feature power_list to return all powers of a variable present in f</a></p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/17079">#17079 Defining ImageSet Union</a></p>
</li>
</ol>
<p>Here is a list of PRs merged:</p>
<ol>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16976">#16976 Added <code class="highlighter-rouge">_solve_modular</code> for handling equations a - Mod(b, c) = 0 where only b is expr</a></p>
</li>
<li>
<p><a href="https://github.com/sympy/sympy/pull/16890">#16890 Fixing lambert in bivariate to give all real solutions</a></p>
</li>
</ol>
<p>Here is all the brief description about the PRs merged:</p>
<ol>
<li><a href="https://github.com/sympy/sympy/pull/16976">#16976 Added <code class="highlighter-rouge">_solve_modular</code> for handling equations a - Mod(b, c) = 0 where only b is expr</a></li>
</ol>
<p>In this PR a new solver <code class="highlighter-rouge">_solve_modular</code> was made for solving modular equations.</p>
<h3 id="what-type-of-equations-to-be-considered-and-what-domain">What type of equations to be considered and what domain?</h3>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>A - Mod(B, C) = 0
A -> This can or cannot be a function specifically(Linear, nth degree single
Pow, a**f_x and Add and Mul) of symbol.(But currently its not a
function of x)
B -> This is surely a function of symbol.
C -> It is an integer.
And domain should be a subset of S.Integers.
</code></pre></div></div>
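<p>For instance, an equation of this shape with A = 3, B = x, C = 4 can be handed to <code class="highlighter-rouge">solveset</code> over the integers (a usage sketch; the expected solution set is all x of the form 4*n + 3):</p>

```python
from sympy import Mod, S, Symbol, solveset

x = Symbol('x')
# 3 - Mod(x, 4) = 0, i.e. Mod(x, 4) = 3, over the integers.
sol = solveset(3 - Mod(x, 4), x, S.Integers)
```

<p>Membership tests on the returned ImageSet behave as expected, e.g. 3 and 7 are in the set while 2 is not.</p>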
<h3 id="filtering-out-equations">Filtering out equations</h3>
<p>A check named <code class="highlighter-rouge">_is_modular</code> is applied, which returns True only for
equations of the type mentioned above.</p>
<h3 id="working-of-_solve_modular">Working of <code class="highlighter-rouge">_solve_modular</code></h3>
<p>At the start there is a check that the domain is a subset of the integers.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>domain.is_subset(S.Integers)
</code></pre></div></div>
<p>Only the domain of integers and its subsets are considered while solving
these equations.
After this, the solver separates out the Mod term and the remaining term
on either side with this code.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>modterm = list(f.atoms(Mod))[0]
rhs = -(S.One)*(f.subs(modterm, S.Zero))
if f.as_coefficients_dict()[modterm].is_negative:
# f.as_coefficient(modterm) was returning None don't know why
# checks if coefficient of modterm is negative in main equation.
rhs *= -(S.One)
</code></pre></div></div>
<p>Now the equation is being inverted with the helper routine <code class="highlighter-rouge">_invert_modular</code>
like this.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>n = Dummy('n', integer=True)
f_x, g_n = _invert_modular(modterm, rhs, n, symbol)
</code></pre></div></div>
<p>I define n in <code class="highlighter-rouge">_solve_modular</code> rather than in <code class="highlighter-rouge">_invert_modular</code> because <code class="highlighter-rouge">_invert_modular</code> contains
recursive calls to itself, so defining n there would create
many instances of it, none of any use.</p>
<p>After the equation has been inverted, solution finding takes place.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if f_x is modterm and g_n is rhs:
return unsolved_result
</code></pre></div></div>
<p>First of all, if <code class="highlighter-rouge">_invert_modular</code> fails to invert the equation, a ConditionSet is
returned.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> if f_x is symbol:
if domain is not S.Integers:
return domain.intersect(g_n)
return g_n
</code></pre></div></div>
<p>If <code class="highlighter-rouge">_invert_modular</code> fully inverts the equation, only a domain
intersection needs to take place. <code class="highlighter-rouge">_invert_modular</code> inverts the equation
with S.Integers as its default domain.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> if isinstance(g_n, ImageSet):
lamda_expr = g_n.lamda.expr
lamda_vars = g_n.lamda.variables
base_set = g_n.base_set
sol_set = _solveset(f_x - lamda_expr, symbol, S.Integers)
if isinstance(sol_set, FiniteSet):
tmp_sol = EmptySet()
for sol in sol_set:
tmp_sol += ImageSet(Lambda(lamda_vars, sol), base_set)
sol_set = tmp_sol
return domain.intersect(sol_set)
</code></pre></div></div>
<p>In this case, when g_n is an ImageSet in n and f_x is not the symbol itself, the
equation is solved by calling <code class="highlighter-rouge">_solveset</code> (this will not lead to
recursion because the equation passed in is free of Mod), and then
the domain intersection takes place.</p>
<h3 id="what-does-_invert_modular-do">What does <code class="highlighter-rouge">_invert_modular</code> do?</h3>
<p>This function helps convert the equation <code class="highlighter-rouge">A - Mod(B, C) = 0</code> into the
form (f_x, g_n).
First of all, it checks whether the Mod argument is one of the invertible cases; if not,
it returns the equation as it is.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>a, m = modterm.args
if not isinstance(a, (Dummy, Symbol, Add, Mul, Pow)):
return modterm, rhs
</code></pre></div></div>
<p>Next is the check for complex arguments: the equation is returned as is
if an imaginary term I is found anywhere.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if rhs.is_real is False or any(term.is_real is False \
for term in list(_term_factors(a))):
# Check for complex arguments
return modterm, rhs
</code></pre></div></div>
<p>After this, we check for the empty set as a solution by comparing the ranges of
both sides of the equation: the Mod term takes values in [0, m - 1], so if rhs lies
outside this range, the EmptySet is returned.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if (abs(rhs) - abs(m)).is_positive or (abs(rhs) - abs(m)) is S.Zero:
# if rhs has value greater than value of m.
return symbol, EmptySet()
</code></pre></div></div>
<p>Equations of the following types are handled as shown:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if a is symbol:
return symbol, ImageSet(Lambda(n, m*n + rhs), S.Integers)
if a.is_Add:
# g + h = a
g, h = a.as_independent(symbol)
if g is not S.Zero:
return _invert_modular(Mod(h, m), (rhs - Mod(g, m)) % m, n, symbol)
if a.is_Mul:
# g*h = a
g, h = a.as_independent(symbol)
if g is not S.One:
return _invert_modular(Mod(h, m), (rhs*invert(g, m)) % m, n, symbol)
</code></pre></div></div>
<p>The more peculiar case is of <code class="highlighter-rouge">a.is_Pow</code> which is handled as following.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if a.is_Pow:
# base**expo = a
base, expo = a.args
if expo.has(symbol) and not base.has(symbol):
# remainder -> solution independent of n of equation.
# m, rhs are made coprime by dividing igcd(m, rhs)
try:
remainder = discrete_log(m / igcd(m, rhs), rhs, a.base)
except ValueError: # log does not exist
return modterm, rhs
# period -> coefficient of n in the solution, also referred to as
# the least period after which expo repeats itself.
# m divides (a**(totient(m)) - 1). Here is a link to the theorem:
# (https://en.wikipedia.org/wiki/Euler's_theorem)
period = totient(m)
for p in divisors(period):
# there might be a lesser period than totient(m).
if pow(a.base, p, m / igcd(m, a.base)) == 1:
period = p
break
return expo, ImageSet(Lambda(n, period*n + remainder), S.Naturals0)
elif base.has(symbol) and not expo.has(symbol):
remainder_list = nthroot_mod(rhs, expo, m, all_roots=True)
if remainder_list is None:
return symbol, EmptySet()
g_n = EmptySet()
for rem in remainder_list:
g_n += ImageSet(Lambda(n, m*n + rem), S.Integers)
return base, g_n
</code></pre></div></div>
<p>Two cases arise based on a.is_Pow:</p>
<ol>
<li>x**a</li>
<li>a**x</li>
</ol>
<p>x**a - This is handled by the helper function <code class="highlighter-rouge">nthroot_mod</code>, which returns
the required solutions. I am not going into much detail; for more
information you can read the documentation of nthroot_mod.</p>
<p>a**x - For this, <code class="highlighter-rouge">totient</code> comes into the picture; its meaning can be
found on this <a href="https://en.wikipedia.org/wiki/Euler's_theorem">Wikipedia</a>
page. Then its divisors are checked to find the least period
of the solutions.</p>
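<p>Both helpers are existing <code class="highlighter-rouge">sympy.ntheory</code> functions and can be tried directly (the numbers below are just illustrative):</p>

```python
from sympy.ntheory import discrete_log, nthroot_mod, totient

# x**2 = 4 (mod 7): the x**a case, handled by nthroot_mod.
roots = nthroot_mod(4, 2, 7, all_roots=True)

# 2**k = 4 (mod 7): the a**x case; the period of 2 mod 7
# divides totient(7) = 6 (here it is actually 3).
k = discrete_log(7, 4, 2)
```

<p>Here <code class="highlighter-rouge">nthroot_mod</code> finds both square roots of 4 modulo 7, namely 2 and 5, and <code class="highlighter-rouge">discrete_log</code> recovers the exponent k = 2.</p>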
<ol>
<li><a href="https://github.com/sympy/sympy/pull/16890">#16890 Fixing lambert in bivariate to give all real solutions</a></li>
</ol>
<p>This PR went through many ups and downs and nearly became the most commented PR.
With the help of @smichr it was successfully merged. It mainly fixed the
bug of not returning all solutions of Lambert-type equations.</p>
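<p>The canonical example of the kind of equation <code class="highlighter-rouge">_solve_lambert</code> ultimately targets is anything reducible to w*exp(w) = constant, which is inverted by the Lambert W function (a textbook example, not taken from the PR):</p>

```python
from sympy import LambertW, Symbol, exp, solve

x = Symbol('x')
# x*exp(x) = 2  =>  x = LambertW(2)
sol = solve(x*exp(x) - 2, x)
```

<p>The bug this PR fixed concerned equations where several real branches of W contribute, so that more than one such solution must be returned.</p>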
<h2 id="explaining-the-function-_solve_lambert-main-function-to-solve-lambert-equations">Explaining the function <code class="highlighter-rouge">_solve_lambert</code> (main function to solve lambert equations)</h2>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Input - f, symbol, gens
OutPut - Solution of f = 0 if its lambert type expression else NotImplementedError
</code></pre></div></div>
<p>This function separates the equation into the cases below, based on the main
function present in the equation.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For the first ones:
1a1) B**B = R != 0 (when 0, there is only a solution if the base is 0,
but if it is, the exp is 0 and 0**0=1)
comes back as B*log(B) = log(R)
1a2) B*(a + b*log(B))**p = R or with monomial expanded or with whole
thing expanded comes back unchanged
log(B) + p*log(a + b*log(B)) = log(R)
lhs is Mul:
expand log of both sides to give:
log(B) + log(log(B)) = log(log(R))
1b) d*log(a*B + b) + c*B = R
lhs is Add:
isolate c*B and expand log of both sides:
log(c) + log(B) = log(R - d*log(a*B + b))
</code></pre></div></div>
<p>If the equation is of type 1a1, 1a2, or 1b, then the mainlog of the equation is
taken into consideration, as the deciding factor lies in the main logarithmic term of the equation.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For the next two,
collect on main exp
2a) (b*B + c)*exp(d*B + g) = R
lhs is mul:
log to give
log(b*B + c) + d*B = log(R) - g
2b) -b*B + g*exp(d*B + h) = R
lhs is add:
add b*B
log and rearrange
log(R + b*B) - d*B = log(g) + h
</code></pre></div></div>
<p>If the equation is of type 2a or 2b, then the mainexp of the equation is
taken into consideration, as the deciding factor lies in the main exponential term of the equation.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>3) d*p**(a*B + b) + c*B = R
collect on main pow
log(R - c*B) - a*B*log(p) = log(d) + b*log(p)
</code></pre></div></div>
<p>If the equation is of type 3, then the mainpow of the equation is
taken into consideration, as the deciding factor lies in the main power term of the equation.</p>
<p>Eventually, in all three cases the equation is converted to this form:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>f(x, a..f) = a*log(b*X + c) + d*X - f = 0 which has the
solution, X = -c/b + (a/d)*W(d/(a*b)*exp(c*d/a/b)*exp(f/a)).
</code></pre></div></div>
<p>The solution calculation itself is done by the <code class="highlighter-rouge">_lambert</code> function.</p>
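<p>As a quick illustration of this target form with SymPy's public solve (not the internal _lambert itself): taking the log of x*exp(x) = 2 gives log(x) + x = log(2), i.e. the canonical form with a = b = d = 1, c = 0, f = log(2), whose closed form above reduces to W(2).</p>

```python
from sympy import symbols, exp, solve, LambertW

x = symbols('x')
# x*exp(x) = 2 matches the canonical form after taking logs,
# so solve() answers directly in terms of the Lambert W function
sol = solve(x*exp(x) - 2, x)
# sol == [LambertW(2)]
```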
<p>Everything seems flawless? You might think no modification is required. Let's
see what loopholes there are in it.</p>
<h2 id="what-does-pr-16890-do">What does PR <a href="https://github.com/sympy/sympy/pull/16890">#16890</a> do?</h2>
<p>There are basically two flaws in this approach.</p>
<ol>
<li>Not considering all branches of the equation when taking the log of both sides.</li>
<li>Root calculation should consider all roots in the case of a rational power.</li>
</ol>
<h3 id="1-not-considering-all-branches-of-equation-while-taking-log-both-sides">1. Not considering all branches of the equation when taking the log of both sides</h3>
<p>Consider this equation, to be solved by the <code class="highlighter-rouge">_solve_lambert</code> function.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-1/x**2 + exp(x/2)/2 = 0
</code></pre></div></div>
<p>The old <code class="highlighter-rouge">_solve_lambert</code> converts this equation to the following:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2*log(x) + x/2 = 0
</code></pre></div></div>
<p>and calculates its roots with <code class="highlighter-rouge">_lambert</code>.
But it missed this branch of the equation when taking the log of the main equation:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2*log(-x) + x/2 = 0
</code></pre></div></div>
<p>You can reproduce the original equation from this one, so the problem
was that branches of the equation were missed when taking the log. When does the
main equation have more than one branch? Terms with even powers of the variable x
lead to two different branches of the equation.</p>
<p>So how is it solved?
Before actually solving, I preprocess the main equation,
and if taking the log produces more than one branch, I consider
all the equations generated from them (with the help of <code class="highlighter-rouge">_solve_even_degree_expr</code>).</p>
<p>How do I preprocess the equation?
I replace all even powers of x with even powers of t (a dummy variable).</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Code for targeted replacement
lhs = lhs.replace(
lambda i: # find symbol**even
i.is_Pow and i.base == symbol and i.exp.is_even,
lambda i: # replace t**even
t**i.exp)
Example:-
Main equation -> -1/x**2 + exp(x/2)/2 = 0
After replacement -> -1/t**2 + exp(x/2)/2 = 0
</code></pre></div></div>
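<p>The targeted replacement is easy to reproduce; here is a sketch using a plain Symbol t (the real code uses a Dummy) on the example above:</p>

```python
from sympy import symbols, exp

x, t = symbols('x t')
lhs = -1/x**2 + exp(x/2)/2

# replace symbol**even with t**even, leaving odd powers of x alone
new_lhs = lhs.replace(
    lambda i: i.is_Pow and i.base == x and i.exp.is_even,
    lambda i: t**i.exp)
# new_lhs == -1/t**2 + exp(x/2)/2
```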
<p>Now I take the logarithm of both sides and simplify.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>After simplifying -> 2*log(t) + x/2 = 0
</code></pre></div></div>
<p>Now I call the function <code class="highlighter-rouge">_solve_even_degree_expr</code> to replace t with +/-x, generating two equations.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Replacing t with +/-x
1. 2*log(x) + x/2 = 0
2. 2*log(-x) + x/2 = 0
</code></pre></div></div>
<p>The solutions of both equations are combined to return all real Lambert solutions
of <code class="highlighter-rouge">-1/x**2 + exp(x/2)/2 = 0</code>.</p>
<p>I hope the logic behind this work is clear.</p>
<h3 id="2-calculation-of-roots-should-consider-all-roots-in-case-having-rational-power">2. Root calculation should consider all roots in the case of a rational power</h3>
<p>This flaw is in the root calculation in the function <code class="highlighter-rouge">_lambert</code>.
Earlier, <code class="highlighter-rouge">_lambert</code> worked like this:</p>
<ol>
<li>Find all the values of a, b, c, d, f in the required logarithmic equation.</li>
<li>Then it defines a solution of the form
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-c/b + (a/d)*l where l = LambertW(d/(a*b)*exp(c*d/a/b)*exp(-f/a), k)
</code></pre></div> </div>
<p>and then it includes that solution.
Everything seems flawless here, but look at the step where we define l.</p>
</li>
</ol>
<p>Suppose a hypothetical algorithm, just like the one used in <code class="highlighter-rouge">_lambert</code>,
in which the equation to be solved is</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>x**3 - 1 = 0
</code></pre></div></div>
<p>and in which we define a solution of the form</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>x = exp(I*2*pi/n) where n is the power of x in equation
</code></pre></div></div>
<p>so the algorithm will give the single solution</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>x = exp(I*2*pi/3) # but expected was [1, exp(I*2*pi/3), exp(-I*2*pi/3)]
</code></pre></div></div>
<p>which can be found by finding all solutions of</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>x**n - exp(2*I*pi) = 0
</code></pre></div></div>
<p>by a different, correct algorithm. That is why it was wrong.
The above algorithm would have given correct values for <code class="highlighter-rouge">x - 1 = 0</code>.</p>
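<p>The point is visible directly in SymPy: roots collects all n solutions of x**n = 1, not just the principal one (an illustrative check, not the _lambert code itself):</p>

```python
from sympy import roots, symbols

x = symbols('x')
# all three cube roots of unity, each with multiplicity 1
r = roots(x**3 - 1, x)
# len(r) == 3, and 1 is among the roots
```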
<p>You may wonder why only exp() matters: the possibility of having more
than one root lies in exp(), because if the algorithm
had been like <code class="highlighter-rouge">x = a</code>, where a is some real constant, then there is no
possibility of further roots beyond a solution like <code class="highlighter-rouge">x = a**(1/n)</code>.
This is done in the code like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>num, den = ((c*d-b*f)/a/b).as_numer_denom()
p, den = den.as_coeff_Mul()
e = exp(num/den)
t = Dummy('t')
args = [d/(a*b)*t for t in roots(t**p - e, t).keys()]
</code></pre></div></div>
<h3 id="work-under-development">Work under development</h3>
<ul>
<li><a href="https://github.com/sympy/sympy/pull/17079">#17079 Defining ImageSet Union</a></li>
</ul>
<p>This PR aims to define a unifying algorithm for linear relations.</p>
<h3 id="future-work">Future Work</h3>
<p>Here is a list comprising all the ideas (which were part of my GSoC
Proposal and/or thought of during the SoC) that could extend my GSoC project.</p>
<ol>
<li>
<p>Integrating helper solvers within solveset: linsolve, solve_decomposition, nonlinsolve</p>
</li>
<li>
<p>Handle nested trigonometric equations.</p>
</li>
</ol>http://ishanaj.wordpress.com/?p=105Ishan Joshi (ishanaj)Ishan Joshi (ishanaj): GSoC’19: Week-11- Heading to the final weekTue, 13 Aug 2019 17:26:54 GMT
https://ishanaj.wordpress.com/2019/08/13/gsoc19-week-11-heading-to-the-final-week/
<p>With the end of this week the <strong>draw()</strong> function has been completely implemented. The work on <a href="https://github.com/sympy/sympy/pull/17345">PR #17345</a> has been completed along with the documentation.</p>
<p>As mentioned in the previous blog this PR was an attempt to make the <strong>draw()</strong> function use SymPy’s own plot() rather than importing matplotlib externally to plot the diagram. The idea was to plot the load equation which is in terms of singularity function. This would directly plot uniformly distributed load, uniformly varying load and other higher order loads except for point loads and moment loads.<br /> The task was now to plot the remaining parts of the diagram which were:</p>
<ul><li>A rectangle for drawing the beam</li><li>Arrows for point loads</li><li>Markers for moment loads and supports </li><li>Colour filling inside the higher order loads (order >=0).<span id="more-105"></span></li></ul>
<p>Instead of making temporary hacks to implement these, I went a step further to give the plotting module some additional functionalities. Apart from helping in implementing the <strong>draw()</strong> function, this would also enhance the plotting module.</p>
<p>The basic idea was to have some additional keyworded arguments in the <strong>plot()</strong> function. Every keyworded argument would be a list of dictionaries where each dictionary would represent the arguments (or parameters) that would have been passed in the corresponding matplotlib functions.</p>
<p>These are the functions of matplotlib that can now be accessed using <strong>sympy’s plot()</strong>, along with where they are used in our current situation:</p>
<ul><li><a href="https://matplotlib.org/api/_as_gen/matplotlib.patches.Rectangle.html">matplotlib.patches.Rectangle</a> -to draw the beam</li><li><a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.annotate.html">matplotlib.pyplot.annotate</a> – to draw arrows of load</li><li><a href="https://matplotlib.org/3.1.1/api/markers_api.html">matplotlib.markers</a>– to draw supports and moment loads</li><li><a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.fill_between.html">fill_between()</a> – to fill an area with color</li></ul>
<p>Another thing worth mentioning is that to use <strong>fill_between() </strong>we would require numpy’s <strong>arange()</strong>. It might be better if we could avoid using an external module directly, but I guess this is unavoidable for now. </p>
<p>Also, I have added an option for the user to scale the plot and get a pictorial view of it in cases where plotting with the exact dimensions doesn’t produce a decent diagram. E.g. if the magnitude of a load (order >= 0) is relatively high compared to the other applied loads or to the length of the beam, the load plot might go out of the final plot window. </p>
<p>Here is an example:</p>
<pre class="brush: python; collapse: false; title: ; wrap-lines: false; notranslate">
>>> R1, R2 = symbols('R1, R2')
>>> E, I = symbols('E, I')
>>> b1 = Beam(50, 20, 30)
>>> b1.apply_load(10, 2, -1)
>>> b1.apply_load(R1, 10, -1)
>>> b1.apply_load(R2, 30, -1)
>>> b1.apply_load(90, 5, 0, 23)
>>> b1.apply_load(10, 30, 1, 50)
>>> b1.apply_support(50, "pin")
>>> b1.apply_support(0, "fixed")
>>> b1.apply_support(20, "roller")
# case 1 on the left
>>> p = b1.draw()
>>> p.show()
# case 2 on the right
>>> p1 = b1.draw(pictorial=True)
>>> p1.show()
</pre>
<figure class="wp-block-image size-large"><img alt="" class="wp-image-107" src="https://ishanaj.files.wordpress.com/2019/08/screenshot-10-08-2019-23_04_45.png" /></figure>
<h2><strong>Next Week:</strong></h2>
<ul><li>Getting leftover PR’s merged</li><li>Initiating implementation of Truss class</li></ul>
<p>Will keep you updated!</p>
<p>Thanks!</p>https://czgdp1807.github.io/week_11Gagandeep Singh (czgdp1807)Gagandeep Singh (czgdp1807): Week 11 - Final touchesTue, 13 Aug 2019 00:00:00 GMT
https://czgdp1807.github.io/week_11/
<p>So, the second last week of the project is over and we have decided to improve on the work we have done so far in the last few days. Read below to know more.</p>
<p>In this week, I worked on, <a href="https://github.com/sympy/sympy/pull/17146">#17146</a> concered with symbolic <code class="language-plaintext highlighter-rouge">Range</code>, <a href="https://github.com/sympy/sympy/pull/17387">#17387</a> related to assumptions of dependence among random variables, <a href="https://github.com/sympy/sympy/pull/17336">#17336</a> which fixed the bug in <code class="language-plaintext highlighter-rouge">Q.hermitian</code> the one I told you about in my previous post, and <a href="https://github.com/sympy/sympy/pull/17306">#17306</a>, implementing the matrices with random expressions.</p>
<p>In fact, the last two PRs are merged. Now, coming on to symbolic <code class="language-plaintext highlighter-rouge">Range</code>, I have completed the testing of all the methods except <code class="language-plaintext highlighter-rouge">slicing</code> feature of <code class="language-plaintext highlighter-rouge">__getitem__</code>, which I will do in this week. Regarding, the bug in <code class="language-plaintext highlighter-rouge">Q.hermitian</code>, well, my code at first, was giving incorrect results due to overriding problems in the logic. Francesco, helped me correct them and it’s finally in. The major part of the week was devoted to assumptions of dependence. I did some study from Wikipedia, and implemented the class <code class="language-plaintext highlighter-rouge">DependentPSpace</code>. I have kept the class static because it will handle queries of the type, <code class="language-plaintext highlighter-rouge">density(X + Y, Eq(Covariance(X, Y), S(1)/2)</code> which from my point of view doesn’t require creation of a probability space object.</p>
<p>Coming on to the plan for the last week, we have decided that no new PRs will be opened and focus will be towards completing the already open PRs, so that we have most of our work completed. Francesco has also suggested to test the newly introduced classes with the ones of Wolfram Alpha, so that there are no inconsistencies.</p>https://www.shubhamjha.com/posts/GSoC-Week-10-and-11Shubham Kumar Jha (ShubhamKJha)Shubham Kumar Jha (ShubhamKJha): GSoC 2019: Week 10 and 11Mon, 12 Aug 2019 18:30:00 GMT
https://www.shubhamjha.com/posts/GSoC-Week-10-and-11/
<p>So, the second last week of the official coding period is over now. During the last two weeks, I was mostly occupied with on-campus placement drives, hence I couldn’t put up a blog earlier. A summary of my work during these weeks is as follows:</p>
<ul>
<li>
<p>First of all, <a href="https://github.com/sympy/sympy/pull/17144">#17144</a> is merged 😄. This was a large PR and hence took time to get fully reviewed. With this, the performance of New assumptions comes closer to that of the old system. Currently, queries are evaluated about <strong>20X</strong> faster than before.</p>
</li>
<li><a href="https://github.com/sympy/sympy/pull/17379">#17379</a> attempts to remove SymPy’s costly <strong>rcall()</strong> from the whole assumptions mechanism. It’s a follow-up from <a href="https://github.com/sympy/sympy/pull/17144">#17144</a> and the performance gain is subtle for large queries. E.g.
<div class="language-py highlighter-rouge"><div class="highlight"><pre class="highlight"><code><table class="rouge-table"><tbody><tr><td class="rouge-gutter gl"><pre class="lineno">1
2
3
</pre></td><td class="rouge-code"><pre><span class="kn">from</span> <span class="nn">sympy</span> <span class="kn">import</span> <span class="o">*</span>
<span class="n">p</span> <span class="o">=</span> <span class="n">random_poly</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="mi">50</span><span class="p">,</span> <span class="o">-</span><span class="mi">50</span><span class="p">,</span> <span class="mi">50</span><span class="p">)</span>
<span class="k">print</span><span class="p">(</span><span class="n">ask</span><span class="p">(</span><span class="n">Q</span><span class="o">.</span><span class="n">positive</span><span class="p">(</span><span class="n">p</span><span class="p">),</span> <span class="n">Q</span><span class="o">.</span><span class="n">positive</span><span class="p">(</span><span class="n">x</span><span class="p">)))</span>
</pre></td></tr></tbody></table></code></pre></div> </div>
<p>In the master it takes <code class="language-plaintext highlighter-rouge">4.292 s</code>, out of this <code class="language-plaintext highlighter-rouge">2.483 s</code> is spent in <strong>rcall</strong>. With this, the time spent is <code class="language-plaintext highlighter-rouge">1.929 s</code> and <code class="language-plaintext highlighter-rouge">0.539 s</code> respectively.</p>
</li>
<li><a href="https://github.com/sympy/sympy/pull/17392">#17392</a> attempts to make the New Assumptions able to handle queries which involve Relationals. Currently, it works only with simple queries (e.g. <code class="language-plaintext highlighter-rouge">ask(x>y, Q.positive(x) & Q.negative(y))</code> now evaluates <code class="language-plaintext highlighter-rouge">True</code>) just like the way old system works. This is a much-awaited functionality for the new system. Also, during this I realized that sathandlers lack many necessary facts. This PR also adds many new facts to the system.</li>
</ul>
<p>For the last week of coding, my attempt would be to complete both of these PRs and get them merged. Also, I will try to add new facts to sathandlers.</p>https://arighnaiitg.github.io/2019-08-12-gsoc-week11/Arighna Chakrabarty (arighnaiitg)Arighna Chakrabarty (arighnaiitg): GSoC Week 11 !!Mon, 12 Aug 2019 07:00:00 GMT
https://arighnaiitg.github.io/2019-08-12-gsoc-week11/
<p>Week 11 ends.. -
The second last week has also come to an end. We are almost at the end of the ride. Sartaj and I had a meeting on the 13th of August about the final leftovers to be done, and wrapping up the GSoC work successfully. Here are the works which have...https://jmig5776.github.io//gsoc-week-11Jogi Miglani (jmig5776)Jogi Miglani (jmig5776): GSoC 2019 - Week 11Sun, 11 Aug 2019 00:00:00 GMT
https://jmig5776.github.io//gsoc-week-11
<p>This was the eleventh week's meeting with the GSoC mentors, which was scheduled on
Sunday, 11th August 2019, between 11:30 - 12:30 PM (IST). Yathartha, Amit, and I
attended the meeting. <code class="highlighter-rouge">_solve_modular</code> was discussed in this meeting.</p>
<p>Here is a brief description of the new solver <code class="highlighter-rouge">_solve_modular</code> for solving
modular equations.</p>
<h3 id="what-type-of-equations-to-be-considered-and-what-domain">What type of equations are considered, and over what domain?</h3>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>A - Mod(B, C) = 0
A -> This may or may not be a function of the symbol (linear, nth-degree
     single Pow, a**f_x, Add, Mul); currently it must not be a
     function of x.
B -> This is surely a function of the symbol.
C -> It is an integer.
And the domain should be a subset of S.Integers.
</code></pre></div></div>
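<p>For instance, the solution set of Mod(x, 5) = 3 over the integers is {5*n + 3 : n in Z}, which SymPy represents as an ImageSet. This is a hand-built illustration of the target output, not a call into _solve_modular:</p>

```python
from sympy import ImageSet, Lambda, S, Symbol

n = Symbol('n', integer=True)
sol = ImageSet(Lambda(n, 5*n + 3), S.Integers)

# membership checks: 8 = 5*1 + 3 is a solution, 7 is not
# 8 in sol -> True, 7 in sol -> False
```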
<h3 id="filtering-out-equations">Filtering out equations</h3>
<p>A check named <code class="highlighter-rouge">_is_modular</code> is applied, which returns True only for
equations of the type mentioned above.</p>
<h3 id="working-of-_solve_modular">Working of <code class="highlighter-rouge">_solve_modular</code></h3>
<p>At the start there is a check that the domain is a subset of the integers.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>domain.is_subset(S.Integers)
</code></pre></div></div>
<p>Only the domain of integers and its subsets are considered while solving
these equations.
After this, it separates out a modterm and the remaining term on either
side with this code.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>modterm = list(f.atoms(Mod))[0]
rhs = -(S.One)*(f.subs(modterm, S.Zero))
if f.as_coefficients_dict()[modterm].is_negative:
# f.as_coefficient(modterm) was returning None don't know why
# checks if coefficient of modterm is negative in main equation.
rhs *= -(S.One)
</code></pre></div></div>
<p>Now the equation is being inverted with the helper routine <code class="highlighter-rouge">_invert_modular</code>
like this.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>n = Dummy('n', integer=True)
f_x, g_n = _invert_modular(modterm, rhs, n, symbol)
</code></pre></div></div>
<p>I define n in <code class="highlighter-rouge">_solve_modular</code> because <code class="highlighter-rouge">_invert_modular</code> calls itself
recursively, so if n were defined there, many useless instances
would be created. That is why I define it in <code class="highlighter-rouge">_solve_modular</code>.</p>
<p>After the equation is inverted, solution finding takes place.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if f_x is modterm and g_n is rhs:
return unsolved_result
</code></pre></div></div>
<p>First of all, if <code class="highlighter-rouge">_invert_modular</code> fails to invert, a ConditionSet is
returned.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> if f_x is symbol:
if domain is not S.Integers:
return domain.intersect(g_n)
return g_n
</code></pre></div></div>
<p>And if <code class="highlighter-rouge">_invert_modular</code> is fully able to invert the equation, then only the domain
intersection needs to take place. <code class="highlighter-rouge">_invert_modular</code> inverts the equation
considering S.Integers as its default domain.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> if isinstance(g_n, ImageSet):
lamda_expr = g_n.lamda.expr
lamda_vars = g_n.lamda.variables
base_set = g_n.base_set
sol_set = _solveset(f_x - lamda_expr, symbol, S.Integers)
if isinstance(sol_set, FiniteSet):
tmp_sol = EmptySet()
for sol in sol_set:
tmp_sol += ImageSet(Lambda(lamda_vars, sol), base_set)
sol_set = tmp_sol
return domain.intersect(sol_set)
</code></pre></div></div>
<p>In this case, when g_n is an ImageSet of n and f_x is not the symbol, the
equation is solved by calling <code class="highlighter-rouge">_solveset</code> (this will not lead to
recursion because the equation entered is free of Mod), and then
the domain intersection takes place.</p>
<h3 id="what-does-_invert_modular-do">What does <code class="highlighter-rouge">_invert_modular</code> do?</h3>
<p>This function helps to convert the equation <code class="highlighter-rouge">A - Mod(B, C) = 0</code> to
the form (f_x, g_n).
First of all, it checks for the possible invertible cases; if none apply,
it returns the equation as it is.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>a, m = modterm.args
if not isinstance(a, (Dummy, Symbol, Add, Mul, Pow)):
return modterm, rhs
</code></pre></div></div>
<p>Next there is a check for complex arguments: the equation is returned as it is
if I is found anywhere in it.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if rhs.is_real is False or any(term.is_real is False \
for term in list(_term_factors(a))):
# Check for complex arguments
return modterm, rhs
</code></pre></div></div>
<p>After this, we check for the empty set as a solution by comparing the ranges of both
sides of the equation.
Since modterm can take values in [0, m - 1], if rhs is out of this range
then EmptySet is returned.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if (abs(rhs) - abs(m)).is_positive or (abs(rhs) - abs(m)) is S.Zero:
# if rhs has value greater than value of m.
return symbol, EmptySet()
</code></pre></div></div>
<p>Equations having the following types are inverted as follows:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if a is symbol:
return symbol, ImageSet(Lambda(n, m*n + rhs), S.Integers)
if a.is_Add:
# g + h = a
g, h = a.as_independent(symbol)
if g is not S.Zero:
return _invert_modular(Mod(h, m), (rhs - Mod(g, m)) % m, n, symbol)
if a.is_Mul:
# g*h = a
g, h = a.as_independent(symbol)
if g is not S.One:
return _invert_modular(Mod(h, m), (rhs*invert(g, m)) % m, n, symbol)
</code></pre></div></div>
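<p>The Mul branch relies on a modular inverse of the symbol-free factor. The same arithmetic can be checked by hand with mod_inverse (an illustrative computation, not the solver's own invert call): to solve 3*x ≡ 2 (mod 7), multiply both sides by the inverse of 3 modulo 7.</p>

```python
from sympy import mod_inverse

inv = mod_inverse(3, 7)   # 5, because 3*5 = 15 ≡ 1 (mod 7)
x = (2 * inv) % 7         # 3, and indeed 3*3 = 9 ≡ 2 (mod 7)
```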
<p>The more peculiar case is <code class="highlighter-rouge">a.is_Pow</code>, which is handled as follows.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>if a.is_Pow:
# base**expo = a
base, expo = a.args
if expo.has(symbol) and not base.has(symbol):
# remainder -> solution independent of n of equation.
# m, rhs are made coprime by dividing igcd(m, rhs)
try:
remainder = discrete_log(m / igcd(m, rhs), rhs, a.base)
except ValueError: # log does not exist
return modterm, rhs
# period -> coefficient of n in the solution and also referred as
# the least period of expo in which it is repeats itself.
# m divides (a**(totient(m)) - 1). Here is a link to the theorem:
# (https://en.wikipedia.org/wiki/Euler's_theorem)
period = totient(m)
for p in divisors(period):
# a smaller period than totient(m) might exist.
if pow(a.base, p, m / igcd(m, a.base)) == 1:
period = p
break
return expo, ImageSet(Lambda(n, period*n + remainder), S.Naturals0)
elif base.has(symbol) and not expo.has(symbol):
remainder_list = nthroot_mod(rhs, expo, m, all_roots=True)
if remainder_list is None:
return symbol, EmptySet()
g_n = EmptySet()
for rem in remainder_list:
g_n += ImageSet(Lambda(n, m*n + rem), S.Integers)
return base, g_n
</code></pre></div></div>
<p>Two cases arise based on a.is_Pow:</p>
<ol>
<li>x**a</li>
<li>a**x</li>
</ol>
<p>x**a - This is handled by the helper function <code class="highlighter-rouge">nthroot_mod</code>, which returns
the required solutions. I won't go into much detail here; for more
information you can read the documentation of nthroot_mod.</p>
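<p>A quick illustration of what nthroot_mod returns for the x**a case, e.g. x**2 ≡ 4 (mod 7) (a prime modulus is assumed in this sketch):</p>

```python
from sympy.ntheory import nthroot_mod

# all residues x with x**2 ≡ 4 (mod 7)
rems = nthroot_mod(4, 2, 7, all_roots=True)
# the residues are 2 and 5, since 2**2 = 4 and 5**2 = 25 ≡ 4 (mod 7)
```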
<p>a**x - Here <code class="highlighter-rouge">totient</code> comes into the picture; its meaning can be
found on this <a href="https://en.wikipedia.org/wiki/Euler's_theorem">Wikipedia</a>
page. Its divisors are then checked to find the least period
of the solutions.</p>
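<p>For the a**x case, the remainder and period can be computed the same way by hand, e.g. for 2**x ≡ 3 (mod 5) (an illustrative run of the same totient/discrete_log recipe, not the solver's exact code):</p>

```python
from sympy import totient, divisors
from sympy.ntheory import discrete_log

m, rhs, base = 5, 3, 2
rem = discrete_log(m, rhs, base)   # 3, since 2**3 = 8 ≡ 3 (mod 5)

period = totient(m)                # 4
for p in divisors(period):
    # a smaller period than totient(m) might exist
    if pow(base, p, m) == 1:
        period = p
        break
# solutions: x = period*n + rem = 4*n + 3 for n = 0, 1, 2, ...
```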
<p>I hope I was able to make everything clear!!</p>
<p>Code improvement takes time!!</p>https://anpandey.github.io/posts/sympy/2019-08-06-week-10.htmlAnkit Pandey (anpandey)Ankit Pandey (anpandey): Google Summer of Code Week 10: Matrix Wildcard ReduxTue, 06 Aug 2019 00:00:00 GMT
https://anpandey.github.io/posts/sympy/2019-08-06-week-10.html
<p>For this week, I’ve made some more minor changes to the <a href="https://github.com/sympy/sympy/pull/17299"><code>Indexed</code> pull request</a> from last week, in addition to filing a new <a href="https://github.com/sympy/sympy/pull/17347">matrix wildcard pull request</a>.</p>
<h3 id="matrix-wildcards-again">Matrix Wildcards (again)</h3>
<p>Since <a href="https://github.com/sympy/sympy/pull/17223">#17223</a> was merged this week, I started with an implementation of matrix wildcards that takes advantage of the functionality included in the pull request. I thought that this would be relatively straightforward, with an implementation of the <code>matches</code> method for the <code>MatrixWild</code> subclass being enough. There was one problem though: the underlying matching implementation assumes that all powers in the expression are instances of the <code>Pow</code> class. However, this isn’t true for matrix expressions: the <code>MatPow</code> class, which represents matrix powers, is a separate class of its own. I’m not exactly sure what the reason for this is, since a quick change of <code>MatPow</code> to inherit from <code>Pow</code> doesn’t seem to break anything. I’ll probably look into this a bit more, since I think this might have something to do with the fact that matrix exponents can also include other matrices.</p>
<p>My solution was to temporarily allow expansion of powers by recursing through the expression tree, setting the <code>is_Pow</code> field of each matrix power to <code>True</code>, and reverting these states later. It doesn’t look pretty, but it does seem to work (you can see the code <a href="https://github.com/sympy/sympy/blob/17fb5010e36e10de156dad032d2aea376051df24/sympy/matrices/expressions/matmul.py#L178-L197">here</a>).</p>
<h2 id="next-steps">Next Steps</h2>
<p>I’ll try to get started with some optimizations that utilize this wildcard class once the pull request gets merged.</p>https://czgdp1807.github.io/week_10Gagandeep Singh (czgdp1807)Gagandeep Singh (czgdp1807): Week 10 - Debugging, testing and Haar measureTue, 06 Aug 2019 00:00:00 GMT
https://czgdp1807.github.io/week_10/
<p>This week was about a lot of debugging and testing. I also got to know some facts about random matrices and group theory.</p>
<p>With the ending of the 10th week, we have entered the second last week of the project. Well, this week was full of finding bugs, correcting them and testing the fixes. Mainly, I worked on <a href="https://github.com/sympy/sympy/pull/17146">#17146</a>, <a href="https://github.com/sympy/sympy/pull/17304">#17304</a>, <a href="https://github.com/sympy/sympy/pull/17336">#17336</a> and <a href="https://github.com/sympy/sympy/pull/17306">#17306</a>. The first one was related to symbolic <code class="language-plaintext highlighter-rouge">Range</code>, and it lacked systematic and robust tests. I pushed some commits to resolve the issue, though more is to be done. Now, coming to the second PR, it was related to circular ensembles. I got to know that the distribution of these ensembles is something called the Haar measure on <code class="language-plaintext highlighter-rouge">U(n)</code>, the group of unitary matrices. I was not familiar with this. Thanks to <a href="https://github.com/jksuom">jksuom</a> for sharing some papers on it. I will go through them in the following week. The third PR fixes a bug which was found while working on circular ensembles. Actually, <code class="language-plaintext highlighter-rouge">ask(Q.hermitian(Matrix([[2, 2 + I, 4], [2 - I, 3, I], [4, -I, 1]])))</code> was giving <code class="language-plaintext highlighter-rouge">False</code>, though the matrix is clearly hermitian. So, I went ahead and fixed it, and am waiting for reviews on my approach. The last one is related to matrices with random elements and is complete after fixing a few bugs related to canonical outputs.</p>
<p>What I learnt this week?
Well, I learnt, <strong>When you think your work is complete, well, sorry to say, that’s the beginning ;-)</strong></p>
<p>Bye!!</p>https://sc0rpi0n101.github.io/2019/08/week-10-the-finished-parser/Nikhil Maan (Sc0rpi0n101)Nikhil Maan (Sc0rpi0n101): Week 10: The Finished ParserTue, 06 Aug 2019 00:00:00 GMT
https://sc0rpi0n101.github.io/2019/08/week-10-the-finished-parser/
<p>“Software is like entropy: It is difficult to grasp, weighs nothing, and obeys the Second Law of Thermodynamics; i.e., it always increases.” — Norman Augustine
Welcome everyone, this is your host Nikhil Maan aka Sc0rpi0n101, and we will talk all about the Fortran Parser this week. I have passed the second evaluation and the Fortran Parser pull request is complete.
This week began with me working on the C parser to finalize that.http://ishanaj.wordpress.com/?p=91Ishan Joshi (ishanaj)Ishan Joshi (ishanaj): GSoC’19: Week-10- An alternative to the draw() functionMon, 05 Aug 2019 17:58:18 GMT
https://ishanaj.wordpress.com/2019/08/05/gsoc19-week-10-an-alternative-to-the-draw-function/
<p>This was
the end of the tenth week, and we have entered the final phase of the project.</p>
<p>For the last phase we have Truss calculations to be implemented in the continuum_mechanics module. I initiated a discussion about what needs to be done and how the implementation will move forward in <a href="https://github.com/sympy/sympy/issues/17302">issue #17302</a>. We will have to analyse a bit more how to make Truss calculations symbolic and what benefits one might get from solving them symbolically. We have some good packages to compare with, like <a href="https://anastruct.readthedocs.io/en/latest/?badge=latest">this one</a>. I guess a bit more discussion is needed before we go ahead with it. </p>
<p>Besides this, I worked on improving the <strong>draw()</strong> function implemented in the previous week in <a href="https://github.com/sympy/sympy/pull/17240">PR #17240</a>. I modified it to use the <strong>_backend</strong> attribute for plotting the beam diagram. This could have worked, until <span id="more-91"></span>I realised that using the <strong>_backend</strong> attribute doesn’t really affect the <strong>Plot</strong> object. To understand that last statement, let’s look at how <strong>sympy.plot()</strong> works.</p>
<p>In simple terms, the equations that we pass to the <strong>plot()</strong> function as arguments are actually stored in the <strong>_series</strong> attribute, so we can say that the basic data of the plot lives in this attribute. But using the <strong>_backend</strong> attribute wouldn’t alter <strong>_series</strong> at all, and if <strong>_series</strong> is empty at the start it ends up storing nothing. </p>
<p>We are of course still getting a decent plot at the end, so shouldn’t we just ignore this? No: we would get the plot, but not a fully defined <strong>Plot</strong> object that we could further use with <strong>PlotGrid</strong> to get a subplot which includes all five plots related to the beam.</p>
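<p>To illustrate where that data lives (note that <strong>_series</strong> is an internal, underscore-prefixed attribute, so this is only for demonstration):</p>

```python
from sympy import symbols, plot

x = symbols('x')
# Build a Plot object without displaying it; the expression we pass
# is recorded as a data series on the Plot object itself
p = plot(x**2, (x, 0, 5), show=False)

print(len(p._series))     # one series for the single expression
print(p._series[0].expr)  # x**2
```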
<p>Keeping this in mind, I tried an alternative way of directly using <strong>sympy.plot()</strong> to produce the drawing. Although a bit hard and time-consuming to do, I have initiated this in a draft <a href="https://github.com/sympy/sympy/pull/17345">PR #17345</a>. This PR correctly plots a rectangular beam and its loads (except point and moment loads). The only things left here are plotting the supports and the arrows denoting the direction of each load.</p>
<p>The example below shows how it functions: (keep in mind it just plots the basic structure of the intended beam diagram, it hasn’t been completed yet)</p>
<div class="wp-block-group"><div class="wp-block-group__inner-container"><pre class="brush: python; collapse: false; title: ; wrap-lines: false; notranslate">
>>> E, I = symbols('E, I')
>>> b = Beam(9, E, I)
>>> b.apply_load(-12, 9, -1) # point load: gets skipped for now
>>> b.apply_load(50, 5, -2) # moment load: gets skipped for now
>>> b.apply_load(3, 6, 1, end=8)
>>> b.apply_load(4, 0, 0, end=5)
>>> b.draw()
</pre>
</div></div>
<figure class="wp-block-image size-large is-resized"><img alt="" class="wp-image-92" height="351" src="https://ishanaj.files.wordpress.com/2019/08/screenshot-05-08-2019-19_49_21.png" width="449" /></figure>
<p>I also tried to complete the leftover PRs this week, but some work is still left.</p>
<h2><strong>Next week:</strong></h2>
<ul><li>Completing the <strong>draw() </strong>function</li><li>Documentation and testing</li><li>Starting Truss implementations</li></ul>
<p>Will keep you updated!</p>
<p>Thanks!</p>https://arighnaiitg.github.io/2019-08-05-gsoc-week10/Arighna Chakrabarty (arighnaiitg)Arighna Chakrabarty (arighnaiitg): GSoC Week 10 !!Mon, 05 Aug 2019 07:00:00 GMT
https://arighnaiitg.github.io/2019-08-05-gsoc-week10/
<p>Week 10 ends.. -
Phase 3 of the GSoC coding period is progressing smoothly!! Sartaj and I had a meeting on the 5th of August about the timeline of the next 2 weeks. Here are the deliverables that have been completed this week, including the minutes of the meeting. The second aseries...https://divyanshu132.github.io//gsoc-week-10Divyanshu Thakur (divyanshu132)Divyanshu Thakur (divyanshu132): GSoC 2019 - Week 10 - Induced Pcgs for polycyclic subgroupsMon, 05 Aug 2019 00:00:00 GMT
https://divyanshu132.github.io//gsoc-week-10
<p>The tenth week of the coding period has ended and a new PR <a href="https://github.com/sympy/sympy/pull/17317">sympy/sympy#17317</a> has been introduced. The PR implements induced Pcgs and exponent vectors for polycyclic subgroups with respect to the original pcgs of the group.
Below is an example to show the functionality.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>>>> from sympy.combinatorics import *
>>> S = SymmetricGroup(8)
>>> G = S.sylow_subgroup(2)
>>> gens = [G[0], G[1]]
>>> PcGroup = G.polycyclic_group()
>>> collector = PcGroup.collector
>>> ipcgs = collector.induced_pcgs(gens)
>>> [gen.order() for gen in ipcgs]
[2, 2, 2]
</code></pre></div></div>
<p>Further, it can also be used to implement the <code class="highlighter-rouge">Canonical polycyclic sequence</code>, which can be used to check whether two subgroups of a polycyclically presented group <code class="highlighter-rouge">G</code> are equal.</p>
<p>For the next week I’ll try to complete the documentation work on polycyclic groups and open a PR for the same.</p>
<p>Till then, good byee..</p>https://jmig5776.github.io//gsoc-week-10Jogi Miglani (jmig5776)Jogi Miglani (jmig5776): GSoC 2019 - Week 10Sun, 04 Aug 2019 00:00:00 GMT
https://jmig5776.github.io//gsoc-week-10
<p>This was the tenth week meeting with the GSoC mentors, which was scheduled on
Sunday 4th August, 2019 between 1:00 - 2:00 PM (IST). Yathartha and I
were the attendees of the meeting.</p>
<ul>
<li>Discussing previous week’s progress</li>
</ul>
<ol>
<li>
<p>Progress of <code class="highlighter-rouge">_solve_modular</code>:- In PR <a href="https://github.com/sympy/sympy/pull/16976">#16976</a>,
after discussing with Yathartha, I decided to change the basic model of <code class="highlighter-rouge">_solve_modular</code>
so that it can target equations more efficiently, while the remaining types
of equations return ConditionSet. Cases like <code class="highlighter-rouge">Mod(a**x, m) - rhs = 0</code>
are a special type and will be handled differently with the helper functions of the ntheory module.</p>
</li>
<li>
<p>Progress of ImageSet Union:- In PR <a href="https://github.com/sympy/sympy/pull/17079">#17079</a>.
This PR is currently awaiting review.</p>
</li>
</ol>
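<p>As a concrete example of the kind of ntheory helper involved for the <code class="highlighter-rouge">Mod(a**x, m)</code> cases: <code class="highlighter-rouge">discrete_log</code> solves <code class="highlighter-rouge">b**x ≡ a (mod n)</code>. This is just a sketch of the helper, not the <code class="highlighter-rouge">_solve_modular</code> implementation itself:</p>

```python
from sympy.ntheory import discrete_log

# Solve 7**x ≡ 15 (mod 41).
# Signature: discrete_log(n, a, b) returns x such that b**x ≡ a (mod n)
x = discrete_log(41, 15, 7)

print(x)              # 3
print(pow(7, x, 41))  # 15, confirming 7**3 ≡ 15 (mod 41)
```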
<ul>
<li>
<p>Next week goals</p>
</li>
<li>Work on <code class="highlighter-rouge">_solve_modular</code></li>
<li>In the following week I will be changing the domain for solving equations to
the Integers only.</li>
</ul>
<p>Code improvement takes time!!</p>https://arighnaiitg.github.io/2019-08-01-gsoc-week9/Arighna Chakrabarty (arighnaiitg)Arighna Chakrabarty (arighnaiitg): GSoC Week 9 !!Thu, 01 Aug 2019 07:00:00 GMT
https://arighnaiitg.github.io/2019-08-01-gsoc-week9/
<p>Week 9 ends.. -
The last phase of this journey has started. I am happy to let you know that I have passed Phase 2 successfully. Phase 3 will include merging of some important code written in Phase 2, and also implementation of some other useful code. I had a meeting with Sartaj in...https://www.shubhamjha.com/posts/GSoC-Week-9Shubham Kumar Jha (ShubhamKJha)Shubham Kumar Jha (ShubhamKJha): GSoC 2019: Week 9Tue, 30 Jul 2019 18:30:00 GMT
https://www.shubhamjha.com/posts/GSoC-Week-9/
<p>I spent most of this week getting <a href="https://github.com/sympy/sympy/pull/17144">#17144</a> ready to be merged. I had to change a lot of things from the last attempt. One such change was an attempt at <strong>early encoding</strong>, which I had tried on <strong>Literals</strong>: since they were eventually going to be encoded anyway, I tried to do the encoding as soon as the <strong>Literals</strong> were created. But as Aaron suggested, my approach left encodings in the global space and hence could leak memory. During the week, I tried to attach the encoding to the <strong>CNF</strong> object itself, but that would have needed a lot of refactoring, since <strong>CNF</strong> objects interact with other such objects. So, after some attempts, I ultimately left the encoding to be done last, in the <strong>EncodedCNF</strong> object. Currently, this is ready to be merged.</p>
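<p>To sketch what "encoding done at the end" looks like with the classes from the PR (the exact attribute names here are my reading of the code, not guaranteed API):</p>

```python
from sympy.abc import x, y
from sympy.assumptions.cnf import CNF, EncodedCNF

# Build a CNF object from a propositional formula; no integer
# encoding happens at this stage
cnf = CNF.from_prop(x & (y | ~x))

# The encoding to integers is deferred to EncodedCNF, the last step
# before handing the clauses to a SAT solver
enc = EncodedCNF()
enc.add_from_cnf(cnf)

print(enc.data)      # clauses as collections of signed integers
print(enc.encoding)  # mapping from literals to integers
```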
<p>For the coming weeks, I would try to improve over this.</p>
<p>This was also the week of the second monthly evaluation, and I am happy to announce that I passed it. My college has also started this week, but I am still able to give the required time to this project and complete it.</p>
<p>Will keep you updated. Thank you !</p>http://ishanaj.wordpress.com/?p=74Ishan Joshi (ishanaj)Ishan Joshi (ishanaj): GSoC’19: Week-9- Analyzing the draw() functionMon, 29 Jul 2019 05:43:20 GMT
https://ishanaj.wordpress.com/2019/07/29/gsoc19-week-9-analyzing-the-draw-function/
<p>With the
end of this week the third phase officially ends. </p>
<p>There have been some discussions in the <a href="https://github.com/sympy/sympy/pull/17240">PR #17240</a> which implements the <strong>draw()</strong> function. We might change the name of the function to <strong>plot()</strong>, which is more consistent with the previous beam methods <strong>plot_shear_force()</strong>, <strong>plot_bending_moment()</strong>, etc.</p>
<p>Another discussion was about making this beam diagram a part of <strong>plot_loading_results()</strong>, which intends to plot all the beam-related plots. However, the beam diagram currently uses <strong>matplotlib</strong> as an external module, whereas <strong>plot_loading_results()</strong> uses <strong>PlotGrid</strong>, which is SymPy’s internal functionality, so it would be a bit tricky to merge the two.<span id="more-74"></span></p>
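<p>For context, <strong>PlotGrid</strong> consumes fully defined <strong>Plot</strong> objects, which is why a figure produced only through the matplotlib backend cannot be reused there; a minimal sketch:</p>

```python
from sympy import symbols, plot
from sympy.plotting import PlotGrid

x = symbols('x')
# Two ordinary Plot objects, created but not displayed yet
p1 = plot(x, (x, 0, 5), show=False)
p2 = plot(x**2, (x, 0, 5), show=False)

# PlotGrid(nrows, ncols, *plots) lays the Plot objects out as subplots;
# it relies on their internal data series, so both must be fully defined
grid = PlotGrid(2, 1, p1, p2, show=False)
```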
<p>We also discussed the idea, or rather the possibility, of directly using SymPy’s own plot to create a beam diagram. SymPy’s <strong>plot()</strong> is capable of plotting Singularity functions, so the load applied on the beam can also be plotted using <strong>sympy.plot()</strong>, as beam.load is indeed in terms of singularity functions. But there is a problem when it comes to point loads and moment loads, as they are in terms of singularity functions of negative order (or exponent). I am not sure whether the sympy plot for singularity functions of negative order is correct, but the current plot won’t help us in drawing point loads and moment loads; we might have to deal with them separately.</p>
<p>I have
opened a discussion in the <a href="https://groups.google.com/forum/?fromgroups#!topic/sympy/gmBNI-sffls">mailing
list</a> regarding whether the plot is correct for singularity functions of negative
order, or what else should be done in order to get it corrected.</p>
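<p>To make the negative-order issue concrete: for order &gt;= 0 a singularity function is an ordinary piecewise polynomial, but order -1 behaves like a Dirac delta (zero everywhere except at the load point), which is why a direct plot of it is problematic. A quick illustration:</p>

```python
from sympy import symbols, SingularityFunction

x = symbols('x')

# Order 0 is a unit step: <x - 3>**0 evaluates to 1 once x >= 3
print(SingularityFunction(x, 3, 0).subs(x, 4))   # 1

# Order 1 is a ramp: <x - 3>**1 evaluates to x - 3 for x >= 3
print(SingularityFunction(x, 3, 1).subs(x, 5))   # 2

# Order -1 (a point load) acts like a Dirac delta:
# zero away from x = 3, so there is nothing meaningful to plot
print(SingularityFunction(x, 3, -1).subs(x, 4))  # 0
```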
<p>Also, it will be difficult to plot a rectangle (for the beam) and markers (for the supports) via sympy.plot(). One idea is to use the <strong>_backend</strong> attribute of sympy.plot(), which gives direct access to the <strong>backend</strong> (i.e. the matplotlib backend). I will have a look at it.</p>
<p>Of
course, if the beam diagram can be made using SymPy’s own plot it would surely be
preferred, but for that we also need to work on <strong>sympy.plot()</strong>, as it is currently limited to certain functionalities.</p>
<p>From the
next week I will be starting with the last phase of implementing a Truss structure
and its respective calculations.</p>
<p>Since only last few weeks are left, I think I will be able to make a draft PR for the last phase implementation by the end of the next week. And then we would only be left with minor things and leftovers of the previous phases.</p>
<p>Also, I am glad to share that I was able to pass the second evaluations. So once again thank you mentors for all your support and guidance!</p>
<h2><strong>Next Week:</strong></h2>
<ul><li>Starting phase-IV implementations</li><li>Simultaneously working on and discussing previous
PRs.</li></ul>
<p>Will
keep you updated!</p>
<p>Thanks!</p>https://czgdp1807.github.io/week_9Gagandeep Singh (czgdp1807)Gagandeep Singh (czgdp1807): Week 9 - Lots of reviewsMon, 29 Jul 2019 00:00:00 GMT
https://czgdp1807.github.io/week_9/
<p>This week I received a lot of reviews from members of the community on my various PRs, and these have formed the base of the work for the next week. Let me share some of those reviews with you.</p>
<p>As I told you, the PR <a href="https://github.com/sympy/sympy/pull/17146">#17146</a> was pending review. Well, I received a lot of comments from <a href="https://github.com/oscarbenjamin">@oscarbenjamin</a> and <a href="https://github.com/smichr">@smichr</a> on the pretty printing of symbolic <code class="language-plaintext highlighter-rouge">Range</code>, the way the tests are written, and the <code class="language-plaintext highlighter-rouge">inf</code> and <code class="language-plaintext highlighter-rouge">sup</code> of <code class="language-plaintext highlighter-rouge">Range</code>. This in turn helped me discover bugs in other features of <code class="language-plaintext highlighter-rouge">Range</code>, like <code class="language-plaintext highlighter-rouge">reversed</code>. In the following week, I will work on this stuff and correct these things. Now moving on to random matrices: the PR <a href="https://github.com/sympy/sympy/pull/17174">#17174</a> has been merged, but more work is to be done for <code class="language-plaintext highlighter-rouge">Matrix</code> with entries as random variables. In fact, I studied expressions of random matrices and summarised the results <a href="https://github.com/sympy/sympy/pull/17174#issuecomment-514985333">here</a>. Though the findings suggest specific algorithms for specific expressions, like sums, I am still looking for a more generalized technique and will update you if I find one.</p>
<p>So, coming to the learning aspect: this week I learnt about the importance of exhaustive and systematic tests. The tests which I wrote for symbolic <code class="language-plaintext highlighter-rouge">Range</code> aren’t so systematic and robust; I have found a way to improve them from <a href="https://github.com/sympy/sympy/pull/17146#discussion_r307971324">this comment</a>.</p>
<p>That’s all for now, signing off!!</p>https://divyanshu132.github.io//gsoc-week-9Divyanshu Thakur (divyanshu132)Divyanshu Thakur (divyanshu132): GSoC 2019 - Week 9 - Merged Polycyclic groupsMon, 29 Jul 2019 00:00:00 GMT
https://divyanshu132.github.io//gsoc-week-9
<p>Hello everyone, the ninth week of the coding period has ended, and there is really good news: the polycyclic group PR <a href="https://github.com/sympy/sympy/pull/16991">sympy/sympy#16991</a> that we had been working on for the last one and a half months is finally merged. This week I didn’t do much work other than organizing different methods and fixing small issues in the above PR to get it merged.</p>
<p>There has been a lot of rearrangement of methods: most of the methods were moved to the class <code class="highlighter-rouge">Collector</code> from the class <code class="highlighter-rouge">PolycyclicGroup</code>. Now we do not need the free symbols in hand; they can be computed by the Collector if not provided by the user. A few more things have changed, for instance the relative order is now computed in the course of the polycyclic sequence and series computation. For a better look, one can go through the above PR.</p>
<p>I’m hoping to implement a few things next week, which are mentioned below.</p>
<ul>
<li>Induced polycyclic sequence for a subgroup.</li>
<li>Get started with writing docs for polycyclic groups.</li>
</ul>
<p>Till then, good byee..</p>