Planet SymPy
http://planet.sympy.org/
Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 4
Tue, 27 Jun 2017 00:00:00 GMT
https://shikharj.github.io//2017/06/27/GSoC-Progress-Week-4/
<p>Hello, this post contains the fourth report of my GSoC progress. Though I had planned on writing a new blog post every Monday, it seems I’m a bit late on that account.</p>
<h2 id="report">Report</h2>
<h3 id="symengine">SymEngine</h3>
<p>Last week I had mentioned that I’d be finishing up my work on the <code class="highlighter-rouge">Range</code> set; however, after a talk with Isuru, it was decided to schedule that for a later part of the timeline. Instead, I finished up the simplification of <code class="highlighter-rouge">Add</code> objects in the <code class="highlighter-rouge">Sign</code> class through PR <a href="https://github.com/symengine/symengine/pull/1297">#1297</a>.</p>
<p>Also, Isuru suggested implementing parser support for <code class="highlighter-rouge">Relationals</code>, which is required for <code class="highlighter-rouge">PyDy</code>. The work is currently in progress in PR <a href="https://github.com/symengine/symengine/pull/1298">#1298</a>; we hit an issue with two-character operators (<code class="highlighter-rouge"><=</code> and <code class="highlighter-rouge">>=</code>), but we’re working on it.</p>
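To illustrate why two-character operators complicate parsing, here is a minimal, hypothetical tokenizer sketch in Python. SymEngine's parser is written in C++ and its actual fix may look quite different; the function name and approach here are illustrative only.

```python
# Sketch: peek one character ahead before committing to '<' or '>',
# so that '<=' and '>=' are consumed as single tokens.
def tokenize_relationals(s):
    tokens, i = [], 0
    while i < len(s):
        c = s[i]
        if c in '<>' and i + 1 < len(s) and s[i + 1] == '=':
            tokens.append(c + '=')  # two-character operator: one token
            i += 2
        else:
            tokens.append(c)        # single character (identifiers and
            i += 1                  # numbers are not grouped in this toy)
    return tokens

print(tokenize_relationals('x<=y'))  # ['x', '<=', 'y']
```

Without the one-character lookahead, `x<=y` would be split into `<` followed by `=`, which is the kind of ambiguity described above.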
<p>I sent in another PR <a href="https://github.com/symengine/symengine/pull/1302">#1302</a> removing some redundant includes.</p>
<p>From now on, my work on <code class="highlighter-rouge">SymEngine</code> will probably be a little slower than usual, as I’ve started wrapping classes in <code class="highlighter-rouge">SymEngine.py</code>, which will continue for some time.</p>
<h3 id="symenginepy">SymEngine.py</h3>
<p>I pushed <a href="https://github.com/symengine/symengine.py/pull/162">#162</a>, wrapping a large portion of the <code class="highlighter-rouge">functions</code> module of <code class="highlighter-rouge">SymEngine</code>.</p>
<p>I’ll be working on wrapping <code class="highlighter-rouge">Logic</code> classes and functions in the coming week, as well as finishing off my work on the parser support in <code class="highlighter-rouge">SymEngine</code>.</p>
<p>See you again!</p>
<p><strong>Vale</strong></p>
Björn Dahlgren (bjodah): Status update week 4 GSoC
Mon, 26 Jun 2017 19:39:00 GMT
http://bjodah.github.io/blog/posts/gsoc-week4.html
<div><div class="section" id="continued-work-on-tutorial-material">
<h2>Continued work on tutorial material</h2>
<p>During the past week I got <a class="reference external" href="https://mybinder.org">binder</a> to work with
our tutorial material. Using <tt class="docutils literal">environment.yml</tt> did require that we had
a conda package available, so I set up our instance of <a class="reference external" href="https://drone.io">Drone IO</a> to push to the <tt class="docutils literal">sympy</tt> conda channel. The new
base Docker image used in the beta version of binder (v2) does not
contain gcc by default. This prompted me to add gcc to our
<tt class="docutils literal">environment.yml</tt>; unfortunately, this breaks the build process:</p>
<pre class="literal-block">
Attempting to roll back.
LinkError: post-link script failed for package defaults::gcc-4.8.5-7
running your command again with `-v` will provide additional information
location of failed script: /opt/conda/bin/.gcc-post-link.sh
==> script messages <==
<None>
</pre>
<p>I've reached out on their Gitter channel; we'll see if anyone knows
what's up. If we cannot work around this, we will have two options:</p>
<ol class="arabic simple">
<li>Accept that the binder version cannot compile any generated code</li>
<li>Try to use binder with a Dockerimage based on the one used for CI
tests (with the difference that it will include a prebuilt conda
environment from <tt class="docutils literal">environment.yml</tt>)</li>
</ol>
<p>I hope that the second approach will work unless we hit an image size
limitation (and perhaps we need a special base image; I haven't looked
into those details yet).</p>
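For reference, an <tt class="docutils literal">environment.yml</tt> of the kind discussed might look like the sketch below. The package list is illustrative only (the real file lives in the tutorial repository and may differ), but it shows where the problematic gcc entry sits:

```yaml
# Hypothetical sketch of the tutorial's conda environment file;
# the actual file in the repository may differ.
name: codegen-tutorial
channels:
  - sympy        # the channel our Drone IO instance pushes to
  - conda-forge
dependencies:
  - python=3.6
  - sympy
  - numpy
  - cython
  - gcc          # the entry that currently breaks the binder v2 build
```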
<p>Jason has been producing some really high quality tutorial notebooks
and left lots of constructive feedback on my initial work. Based on
his feedback I've started reworking the code-generation examples not
to use classes as extensively. I've also added some introductory
notebooks: numerical integration of ODEs in general & intro to SymPy's
cryptically named <tt class="docutils literal">lambdify</tt> (the latter notebook is still to be
merged into master).</p>
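As a minimal illustration of what <tt class="docutils literal">lambdify</tt> does (this is not taken from the tutorial notebooks, just a hedged sketch of the core idea):

```python
# lambdify converts a symbolic SymPy expression into an ordinary
# numerical Python function.
from sympy import symbols, lambdify

x = symbols('x')
f = lambdify(x, x**2 + 1)  # f is now a plain callable
print(f(3))  # 10
```

Despite the cryptic name, that is all it does: bridge symbolic expressions to fast numerical evaluation.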
</div>
<div class="section" id="work-on-sympy-for-version-1-1">
<h2>Work on SymPy for version 1.1</h2>
<p>After last week's mentor meeting we decided that we would try to
<a class="reference external" href="https://github.com/sympy/sympy/pull/12805">revert</a> a change to
<tt class="docutils literal">sympy.cse</tt> (a function which performs common subexpression
elimination). I did spend some time profiling the new implementation.
However, I was not able to find any obvious bottlenecks, and given
that it is not the main scope of my project, I did not pursue this any
further for the moment.</p>
<p>I also <a class="reference external" href="https://github.com/sympy/sympy/pull/12808">just started</a>
looking at introducing a Python code printer. There is a
<tt class="docutils literal">PythonPrinter</tt> in SymPy right now, although it assumes that
<tt class="docutils literal">SymPy</tt> is available. The plan right now is to rename the old
printer to <tt class="docutils literal">SymPyCodePrinter</tt> and have the new printer primarily print
expressions, which is what the <tt class="docutils literal">CodePrinter</tt> class does best, even
though it can be "coerced" into emitting statements.</p>
</div>
<div class="section" id="plans-for-the-upcoming-week">
<h2>Plans for the upcoming week</h2>
<p>As the conference approaches the work intensifies both with respect to
finishing up the tutorial material and fixing blocking issues for a
SymPy 1.1 release. Working on the tutorial material helps me find
things to improve code-generation-wise: just today I opened a <a class="reference external" href="https://github.com/sympy/sympy/issues/12810">minor
issue</a>, only to realize
that my <a class="reference external" href="https://github.com/sympy/sympy/pull/12693">"work-in-progress" pull request</a> from an earlier week
(which I intend to get back to next week) actually fixes
said issue.</p>
</div></div>
Gaurav Dhingra (gxyd): GSoC Week 3
Sun, 25 Jun 2017 00:00:00 GMT
https://gxyd.github.io/blogs/Gsoc2017-week-3/
<blockquote>
<h2 id="rischs-structure-theorems">Risch’s structure theorems</h2>
</blockquote>
<p>For the next couple of weeks we will be moving on to understanding and implementing Risch’s structure theorems and the applications that we will make use of in my GSoC project. One of the applications is “Simplification of real elementary functions”, a 1989 paper by Manuel Bronstein<sup>[1]</sup>. Using Risch’s real structure theorem, the paper determines explicitly all algebraic relations among a set of real elementary functions.</p>
<blockquote>
<h3 id="simplification-of-real-elementary-functions">Simplification of Real Elementary Functions</h3>
</blockquote>
<p>Risch in 1979 gave a theorem and an algorithm that found explicitly all algebraic relations among a set of only ’s and ’s. Applying this algorithm required converting trigonometric functions to ’s and ’s.</p>
<p>Consider the integrand . For this we first make use of the form of , i.e. . Substituting it into the original expression we get . Now, using the exponential form of , we get . Substituting in place of , we get the final expression , which is a complex algebraic function.</p>
<p>All of this could be avoided, since the original function is a real algebraic function. The part that may be hard to see is , which can be shown to satisfy the algebraic equation using the formula for . Bronstein discusses an algorithm that doesn’t require the use of complex ’s and ’s.</p>
<p>A function is called real elementary over a differential field if it lies in a differential extension () of , and either is algebraic over or it is the , , , or of an element of (applied recursively).
For example, is real elementary over , and so are and .</p>
<p>A point to mention here is that we have to explicitly include , , since we don’t want the function to become complex by re-writing it in , form ( and can be written in terms of the complex and respectively), and the other trigonometric functions have real elementary relations with and . Neither alone nor alone can do the job.</p>
<p>This way we can form the definition of a <em>real-elementary extension of a differential field</em> .</p>
<blockquote>
<h3 id="functions-currently-using-approach-of-structure-theorems-in-sympy">Functions currently using approach of structure theorems in SymPy</h3>
</blockquote>
<p>Moving on, let us now look at the three functions in SymPy that use the structure theorem “approach”:</p>
<ol>
<li>
<p><code class="highlighter-rouge">is_log_deriv_k_t_radical(fa, fd, DE)</code>: Checks if is the logarithmic derivative of a <code class="highlighter-rouge">k(t)-radical</code>. Mathematically where , . In naive terms, if . Here <code class="highlighter-rouge">k(t)-radical</code> means <code class="highlighter-rouge">n-th</code> roots of an element of . Used in the process of calculating the DifferentialExtension of an object where ‘case’ is ‘exp’.</p>
</li>
<li>
<p><code class="highlighter-rouge">is_log_deriv_k_t_radical_in_field(fa, fd, DE)</code>: Checks if is the logarithmic derivative of a <code class="highlighter-rouge">k(t)-radical</code>. Mathematically where , . It may seem like this is just the same as above, with <code class="highlighter-rouge">f</code> given as input instead of having to calculate , but the “in_field” part of the function name is important.</p>
</li>
<li>
<p><code class="highlighter-rouge">is_deriv_k</code>: Checks if is the derivative of a k(t), i.e. where .</p>
</li>
</ol>
<blockquote>
<h3 id="what-have-i-done-this-week">What have I done this week?</h3>
</blockquote>
<p>Moving on to what I have been doing for the last few days (at a very slow pace): I went through a debugger to understand the working of <code class="highlighter-rouge">DifferentialExtension(exp(x**2/2) + exp(x**2), x)</code>, in which the <code class="highlighter-rouge">integer_powers</code> function is currently used to determine the relation and , instead of and , since we can’t handle algebraic extensions currently (that will hopefully come later in my GSoC project). A similar example for is in the book , though the difference is that it uses <code class="highlighter-rouge">is_deriv_k</code> (for <code class="highlighter-rouge">case == 'primitive'</code>; we have <code class="highlighter-rouge">is_log_deriv_k_t_radical</code> for <code class="highlighter-rouge">case == 'exp'</code>) to reach the conclusion that , and .</p>
<p>I still have to understand the structure theorems: what they are and how exactly they are used. According to Aaron and Kalevi, I should start reading the source of the <code class="highlighter-rouge">is_deriv_k</code>, <code class="highlighter-rouge">is_log_deriv_k_t_radical</code> and <code class="highlighter-rouge">parametric_log_deriv</code> functions in the prde.py file.</p>
<p>We worked on <a href="https://github.com/sympy/sympy/pull/12743">#12743 Liouvillian case for Parametric Risch Diff Eq.</a> this week; it handles the Liouvillian cancellation cases and enables us to handle integrands like .</p>
<div class="highlighter-rouge"><pre class="highlight"><code>>>> risch_integrate(log(x/exp(x) + 1), x)
(x*log(x*exp(-x) + 1) + NonElementaryIntegral((x**2 - x)/(x + exp(x)), x))
</code></pre>
</div>
<p>Earlier this used to raise the error <code class="highlighter-rouge">prde_cancel()</code> not implemented. After testing it a bit, I realised that part of the returned answer could be further integrated instead of being returned as a <code class="highlighter-rouge">NonElementaryIntegral</code>. Consider this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>In [4]: risch_integrate(x + exp(x**2), x)
Out[4]: Integral(x + exp(x**2), x)
</code></pre>
</div>
<p>As can easily be seen, it can be further integrated. So Aaron opened the issue <a href="https://github.com/sympy/sympy/issues/12779">#12779 Risch algorithm could split out more nonelementary cases</a>.</p>
<p>I am not sure why Kalevi has not yet merged the <code class="highlighter-rouge">liouvillian</code> PR (only a comment needs to be fixed: <code class="highlighter-rouge">n=2</code> -> <code class="highlighter-rouge">n=5</code>), though that PR is not blocking me from doing further work.</p>
<p>Starting tomorrow (the 26th) we have the first evaluation of GSoC. Anyway, I don’t like the idea of having 3 evaluations.</p>
<blockquote>
<h3 id="todo-for-the-next-week">TODO for the next week:</h3>
</blockquote>
<p>Figuring out and implementing the structure theorems.</p>
<blockquote>
<h3 id="references">References</h3>
</blockquote>
<ul>
<li>[1] Simplification of real elementary functions, http://dl.acm.org/citation.cfm?id=74566</li>
</ul>
Valeriia Gladkova (valglad): The homomorphism class
Sun, 25 Jun 2017 00:00:00 GMT
http://valglad.github.io/2017/06/25/homomorphism/
<p>At the beginning of the week, all of my previous PRs got merged. This is good. Also, the homomorphism class is now mostly written with the exception of the kernel computation. The kernel is the only tricky bit. I don’t think there is any general method of computing it, but there is a way for finite groups. Suppose we have a homomorphism <code class="highlighter-rouge">phi</code> from a finite group <code class="highlighter-rouge">G</code>. If <code class="highlighter-rouge">g</code> is an element of <code class="highlighter-rouge">G</code> and <code class="highlighter-rouge">phi.invert(h)</code> returns an element of the preimage of <code class="highlighter-rouge">h</code> under <code class="highlighter-rouge">phi</code>, then <code class="highlighter-rouge">g*phi.invert(phi(g))**-1</code> is an element of the kernel (note that this is not necessarily the identity as the preimage of <code class="highlighter-rouge">phi(g)</code> can contain other elements beside <code class="highlighter-rouge">g</code>). With this in mind, we could start by defining <code class="highlighter-rouge">K</code> to be the trivial subgroup of <code class="highlighter-rouge">G</code>. If <code class="highlighter-rouge">K.order()*phi.image().order() == G.order()</code>, then we are done. If not, generate some random elements until <code class="highlighter-rouge">g*phi.invert(phi(g))**-1</code> is not the identity, and redefine <code class="highlighter-rouge">K</code> to be the subgroup generated by <code class="highlighter-rouge">g*phi.invert(phi(g))**-1</code>. 
Continue adding generators like this until <code class="highlighter-rouge">K.order()*phi.image().order() == G.order()</code>. For finite <code class="highlighter-rouge">G</code> this will always terminate (or rather almost always, considering that the elements are generated randomly - it would only fail to terminate if at least one of the elements of <code class="highlighter-rouge">G</code> were never generated, which is practically impossible).</p>
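The procedure above can be sketched with a toy example. Here the finite group is modelled concretely as Z6 (integers mod 6 under addition) with the homomorphism phi: Z6 -> Z3, so `invert` and the subgroup-closure helper are simple stand-ins for the real `FpGroup` machinery, not SymPy code:

```python
import random

# Toy model of the kernel algorithm: G = Z6, phi(g) = g % 3.
# `invert` returns *one* preimage of h under phi, mimicking phi.invert().
G = list(range(6))
phi = lambda g: g % 3
invert = lambda h: h       # h in {0, 1, 2} is its own preimage
image_order = 3            # |phi(G)| = |Z3|

def generated_subgroup(gens):
    """Closure of `gens` under the group operation (addition mod 6)."""
    K, frontier = {0}, set(gens)
    while frontier:
        K |= frontier
        frontier = {(a + b) % 6 for a in K for b in K} - K
    return K

K = {0}                    # start from the trivial subgroup
while len(K) * image_order != len(G):
    g = random.choice(G)
    # additive analogue of g * phi.invert(phi(g))**-1
    k = (g - invert(phi(g))) % 6
    if k != 0:             # found a nontrivial kernel element
        K = generated_subgroup(K | {k})

print(sorted(K))  # [0, 3] -- the kernel of phi
```

The loop exits exactly when `|K| * |image| == |G|`, matching the termination condition described in the text.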
<p>This is how I am going to implement it. I haven’t done it yet because I’ve been thinking about some of the details of the implementation. One thing that could be problematic is multiple calls to <code class="highlighter-rouge">reidemeister_presentation()</code> to compute the presentation of <code class="highlighter-rouge">K</code> at each step. As I discovered when implementing the search for finite index subgroups last week, this can be very inefficient if the index of <code class="highlighter-rouge">K</code> is large (which it could well be for a small number of generators at the start). After giving it some thought, I realised that we could actually avoid finding the presentation entirely, because <code class="highlighter-rouge">K</code>’s coset table would be enough to calculate the order and check if an element belongs to it. Assuming <code class="highlighter-rouge">G</code> is finite, <code class="highlighter-rouge">K.order()</code> can be calculated as the order of <code class="highlighter-rouge">G</code> divided by the length of the coset table, so knowledge of the generators is enough. And for membership testing, all that’s necessary is to check if a given element stabilises <code class="highlighter-rouge">K</code> with respect to the action on the cosets described by the coset table. That should be a huge improvement over the obvious calls to the <code class="highlighter-rouge">subgroup()</code> method.</p>
<p>Actually, <code class="highlighter-rouge">FpGroup</code>s don’t currently have a <code class="highlighter-rouge">__contains__()</code> method, so one can’t check if a word <code class="highlighter-rouge">w</code> is in a group <code class="highlighter-rouge">G</code> using <code class="highlighter-rouge">w in G</code>. This is easy to correct. What I was wondering was whether we wanted to be able to check if words over <code class="highlighter-rouge">G</code> belong to a subgroup of <code class="highlighter-rouge">G</code> created with the <code class="highlighter-rouge">subgroup()</code> method - that wouldn’t be possible directly, because <code class="highlighter-rouge">subgroup()</code> returns a group on a different set of generators, but it wouldn’t be too unreasonable to have SymPy do this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>>>> F, a, b = free_group('a, b')
>>> G = FpGroup(F, [a**3, b**2, (a*b)**2])
>>> K = G.subgroup([a])
>>> a**2 in K
True
</code></pre>
</div>
<p>I asked <a href="https://github.com/jksuom">Kalevi</a> about it earlier today and they said that it would be preferable to treat <code class="highlighter-rouge">K</code> in this case as a different group that happens to be linked to <code class="highlighter-rouge">G</code> through an injective homomorphism (essentially the canonical inclusion map). If we call this homomorphism <code class="highlighter-rouge">phi</code>, then the user can check if an element of <code class="highlighter-rouge">G</code> belongs to the subgroup represented by <code class="highlighter-rouge">K</code> like so: <code class="highlighter-rouge">a**2 in phi.image()</code>. Here <code class="highlighter-rouge">phi.image()</code> wouldn’t be an instance of <code class="highlighter-rouge">FpGroup</code> but rather of a new class <code class="highlighter-rouge">FpSubgroup</code> that I wrote today - it is a way to represent a subgroup of a group while keeping the same generators as in the original group. Its only attributes are <code class="highlighter-rouge">generators</code>, <code class="highlighter-rouge">parent</code> (the original group) and <code class="highlighter-rouge">C</code> (the coset table of the original group by the subgroup), and it has <code class="highlighter-rouge">__contains__</code>, <code class="highlighter-rouge">order()</code> and <code class="highlighter-rouge">to_FpGroup()</code> methods (the names are self-explanatory). For finite parent groups, the order is calculated as I described above for the kernel, and for the infinite ones <code class="highlighter-rouge">reidemeister_presentation()</code> has to be called. The injective homomorphism from an instance of <code class="highlighter-rouge">FpGroup</code> returned by <code class="highlighter-rouge">subgroup()</code> would need to be worked out during the running of <code class="highlighter-rouge">reidemeister_presentation()</code> when the Schreier generators are defined - this is still to be done.</p>
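A stripped-down sketch of that idea, where order queries are answered from the parent's data rather than from a new presentation. All names and internals here are a toy stand-in, not SymPy's actual <code class="highlighter-rouge">FpSubgroup</code>:

```python
# Toy model of the FpSubgroup idea: keep the parent group's generators
# and derive the subgroup's order from the coset table length alone.
class FpSubgroupSketch:
    def __init__(self, parent_order, generators, num_cosets):
        self.generators = generators
        self.parent_order = parent_order   # |G|, assumed finite
        self.num_cosets = num_cosets       # length of the coset table

    def order(self):
        # |K| = |G| / [G : K]; the index [G : K] is the number of cosets
        return self.parent_order // self.num_cosets

# e.g. a subgroup of index 2 in a group of order 6 has order 3
K = FpSubgroupSketch(parent_order=6, generators=['a'], num_cosets=2)
print(K.order())  # 3
```

The point is that no call to `reidemeister_presentation()` is needed: the coset table length already determines the order for a finite parent.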
<p>Another thing I implemented this week was an improved method for generating random elements of finitely presented groups. However, when I tried it out to see how random the elements looked, I got huge words even for small groups, so I haven’t sent a PR with it yet. Once I implement rewriting systems, these words could be reduced to much shorter ones. Speaking of rewriting systems, I think it could be good if the rewriting was applied automatically, much like <code class="highlighter-rouge">x*x**-1</code> is removed for <code class="highlighter-rouge">FreeGroupElement</code>s. Though I suppose this could sometimes be too inefficient - this would need testing. This is what I’ll be working on this week.</p>
Arif Ahmed (ArifAhmed1995): Week 4 Report (June 18 – 24): Dynamic Programming
Sat, 24 Jun 2017 22:33:31 GMT
https://arif7blog.wordpress.com/2017/06/24/week-4-reportjune-18-24-dynamic-programming/
<p>Note : If you’re viewing this on Planet SymPy and Latex looks weird, go to the <a href="https://arif7blog.wordpress.com/" rel="noopener" target="_blank">WordPress site</a> instead.</p>
<p>Prof. Sukumar came up with the following optimization idea:<br />
Consider equation 10 in <a href="http://dilbert.engr.ucdavis.edu/~suku/quadrature/cls-integration.pdf" rel="noopener" target="_blank">Chin et al.</a> The integral of the inner product of the gradient of <img alt="f(x) " class="latex" src="https://s0.wp.com/latex.php?latex=f%28x%29+&bg=ffffff&fg=444444&s=0" title="f(x) " /> and <img alt="{x}_{0} " class="latex" src="https://s0.wp.com/latex.php?latex=%7Bx%7D_%7B0%7D+&bg=ffffff&fg=444444&s=0" title="{x}_{0} " /> need not always be re-calculated, because it might have been calculated before. Hence, the given polynomial can be broken up into a list of monomials of increasing degree. Over a certain facet, the integrals of the list of monomials can be calculated and stored for later use. Before the calculation for a certain monomial, we check whether its gradient has already been calculated and is hence available for re-use.<br />
Then again, there’s another way to implement this idea of re-use. Given the degree of the input polynomial we know all the types of monomials which can possibly exist. For example, for max_degree = 2 the monomials are : <img alt="[1, x, y, x^2, xy, y^2]" class="latex" src="https://s0.wp.com/latex.php?latex=%5B1%2C+x%2C+y%2C+x%5E2%2C+xy%2C+y%5E2%5D&bg=ffffff&fg=444444&s=1" title="[1, x, y, x^2, xy, y^2]" />.<br />
All the monomial integrals over all facets can be calculated and stored in a list of lists. This would be very useful for one specific use-case: when there are many different polynomials, with max degree less than or equal to a certain global maximum, to be integrated over a given polytope.</p>
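The second scheme amounts to memoizing monomial integrals keyed by exponent tuple. Here is a minimal sketch of the caching pattern; the actual facet integration is stubbed out (it is not reproduced from the paper), and `calls` counts how many real computations happen:

```python
# Memoize monomial integrals by exponent tuple; each is computed once.
calls = 0
cache = {}

def integral_over_facet(exps):
    """Stub for the (expensive) integral of x**i * y**j over a facet."""
    global calls
    calls += 1
    return sum(exps)  # placeholder value, not a real integral

def cached_integral(exps):
    if exps not in cache:
        cache[exps] = integral_over_facet(exps)
    return cache[exps]

# Six requests, but only four distinct monomials: the repeated
# gradient terms (1,0) and (0,1) are served from the cache.
for exps in [(2, 0), (1, 1), (1, 0), (0, 1), (1, 0), (0, 1)]:
    cached_integral(exps)
print(calls, len(cache))  # 4 4
```

The saving grows with the number of polynomials sharing monomials, which is exactly the use-case described above.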
<p>I’m still working on implementing the first one. That’s because the worst-case time (when no or very few dictionary terms are re-used) turns out to be much more expensive than the straightforward method. Maybe computing the standard deviation of the monomial degrees would be a good way to decide which technique to use. Here is an example of what I mean. If we have the monomial list <img alt="[x^3, y^3, 3x^2y, 3xy^2] " class="latex" src="https://s0.wp.com/latex.php?latex=%5Bx%5E3%2C+y%5E3%2C+3x%5E2y%2C+3xy%5E2%5D+&bg=ffffff&fg=444444&s=0" title="[x^3, y^3, 3x^2y, 3xy^2] " />, then the dynamic programming technique becomes useless, since the gradient of any term will have degree 2 and not belong to the list. The corresponding list of degrees is <img alt="[3, 3, 3, 3] " class="latex" src="https://s0.wp.com/latex.php?latex=%5B3%2C+3%2C+3%2C+3%5D+&bg=ffffff&fg=444444&s=0" title="[3, 3, 3, 3] " />, and the standard deviation is zero. Hence the normal technique is applicable here.</p>
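The degree-spread heuristic itself is cheap to compute. A small sketch using exponent tuples (illustrative only; a zero spread flags the case where memoized gradients can never be reused):

```python
from statistics import pstdev

def monomial_degrees(monomials):
    """Total degree of each monomial given as an exponent tuple."""
    return [sum(m) for m in monomials]

# [x**3, y**3, x**2*y, x*y**2] as exponent tuples: all have degree 3,
# so every gradient term has degree 2 and falls outside the list.
degs = monomial_degrees([(3, 0), (0, 3), (2, 1), (1, 2)])
use_dynamic = pstdev(degs) > 0   # zero spread -> plain method instead
print(degs, use_dynamic)  # [3, 3, 3, 3] False
```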
<p>In reality, I’ll probably have to use another measure to make the judgement of dynamic vs. normal. Currently, I’m trying to fix a bug in the implementation of the first technique. After that, I’ll try out the second one.</p>
Arihant Parsoya (parsoyaarihant): GSoC17 Week 3 Report
Sat, 24 Jun 2017 06:30:00 GMT
https://parsoyaarihant.github.io/blog/gsoc/2017/06/24/GSoC17-Week3-Report.html
<p>I ran the test suite for algebraic rules 1.2. Here is the report:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>Total number of tests: 1480
Passed: 130
Failed: 1350
Time taken to run the tests: 4967.428s
</code></pre>
</div>
<p>Very few tests passed, since many rules use the <code class="highlighter-rouge">ExpandIntegrand[ ]</code> function to evaluate their integrals. There are 37 definitions of <code class="highlighter-rouge">ExpandIntegrand[ ]</code> (for different expressions). I am trying to implement the few definitions which are used in algebraic rules. The problem is that <code class="highlighter-rouge">ExpandIntegrand[ ]</code> is defined after 50% of all the rules in the list, and we have only been able to complete 10% of the utility functions in SymPy. Hence, it is tricky to implement <code class="highlighter-rouge">ExpandIntegrand[ ]</code> when it depends on functions which we haven’t implemented.</p>
<p>A good thing is that MatchPy was able to match almost all the patterns. Manuel said that he is going to work on incorporating <code class="highlighter-rouge">optional arguments</code> in MatchPy next week. I will rework the Rubi parser to incorporate his changes.</p>
<p>I will complete my work on <code class="highlighter-rouge">ExpandIntegrand[ ]</code> and also continue my work on implementing the remaining utility functions with Abdullah.</p>
Abdullah Javed Nesar (Abdullahjavednesar): GSoC: Progress on Utility Functions
Fri, 23 Jun 2017 16:27:46 GMT
https://nesar2017.wordpress.com/2017/06/23/gsoc-progress-on-utility-functions/
<p>Hello all!</p>
<p>After adding the Algebraic/Linear Product rules and tests, Arihant and I are working on implementations of utility functions; there is a huge set of utility functions that needs to be implemented before we proceed with the next set of rules. Once we are done with the entire set of utility functions, implementing rules and adding tests will be an easy task. Understanding the Mathematica code, analyzing its dependent functions and converting it into Python syntax is a major task.</p>
<p>We started with the implementation of only those functions which were necessary to support <strong>Algebraic rules/Linear Products</strong>, but because they were dependent on the previous functions we had to start from the very beginning of the <a href="http://www.apmaths.uwo.ca/~arich/IntegrationRules/PortableDocumentFiles/Integration%20utility%20functions.pdf">PDF</a>. So far we have implemented more than 100 utility functions. Our priority is to implement all the utility functions needed to support the algebraic rules as soon as possible.</p>
Szymon Mieszczak (szymag): Week 3
Tue, 20 Jun 2017 22:41:21 GMT
https://szymag.github.io/post/week-3/
<p>In the last week I focused mainly on finishing tasks related to Lamé coefficients. During this time two PRs were merged. In my previous post I described how we can calculate the gradient, curl and divergence in different types of coordinate systems, so now I will describe only the new things which were already added to mainline. We decided to remove the dependency between the Del class and CoordSysCartesian. From a mathematical point of view this makes sense, because the nabla operator is just an entity which acts on a vector or scalar, and its behavior is independent of the coordinate system.</p>
Ranjith Kumar (ranjithkumar007): Solvers
Tue, 20 Jun 2017 00:00:00 GMT
http://ranjithkumar007.github.io/2017/06/20/Solvers/
<p>This week, I tried to get the earlier PRs <a href="https://github.com/symengine/symengine/pull/1291">#1291</a> and <a href="https://github.com/symengine/symengine/pull/1293">#1293</a> merged in.
Unfortunately, there were several mistakes in the earlier implementation of the simplifications in <code class="highlighter-rouge">ConditionSet</code>. Thanks to <a href="https://github.com/isuruf">isuruf</a>, I was able to correct them, and it finally got merged in.
The <a href="https://github.com/symengine/symengine/pull/1293">#1293</a> PR on ImageSet is also complete but not yet merged.</p>
<p>Alongside, I started working on implementing lower-order polynomial solvers. This work is being done in <a href="https://github.com/symengine/symengine/pull/1296">#1296</a>.</p>
<p>Short description of this PR:</p>
<ul>
<li>The basic module for solvers is up and running.</li>
<li>Adds solvers for polynomials with degree <= 4.</li>
<li>Integrates Flint’s wrappers for factorisation into the solvers.</li>
<li>Fixes a bug in the forward iterator (credits: <a href="https://github.com/srajangarg">Srajangarg</a>).</li>
</ul>
<p>This PR is still a WIP, as it doesn’t handle polynomials with symbolic coefficients.</p>
<p>More on solvers coming soon! Until then, stay tuned!</p>
Björn Dahlgren (bjodah): Status update week 3 GSoC
Mon, 19 Jun 2017 21:15:00 GMT
http://bjodah.github.io/blog/posts/gsoc-week3.html
<div><div class="section" id="fast-callbacks-from-sympy-using-symengine">
<h2>Fast callbacks from SymPy using SymEngine</h2>
<p>My main focus the past week has been to get <tt class="docutils literal">Lambdify</tt> in SymEngine
to work with multiple output parameters. Last year Isuru Fernando led
the <a class="reference external" href="https://github.com/symengine/symengine/pull/1094">development</a> to support JIT-compiled callbacks using LLVM in SymEngine.
I started work on leveraging this in the Python wrappers of SymEngine
but my work stalled due to time constraints.</p>
<p>But since it is very much related to code generation in SymPy, I did
put it into my time-line (later in the summer) in my GSoC
application. With the upcoming SciPy conference, and the fact that
it would make a nice addition to our tutorial, I have put in <a class="reference external" href="https://github.com/symengine/symengine.py/pull/112">work</a> to
get this done earlier than first planned.</p>
<p>Another thing on my to-do list from last week was to get <tt class="docutils literal">numba</tt> working
with <tt class="docutils literal">lambdify</tt>. For this to work we need to wait for a new upstream
release of <tt class="docutils literal">numba</tt> (which they are hoping to release before the SciPy
<a class="reference external" href="https://scipy2017.scipy.org/ehome/220975/493418/">conference</a>).</p>
</div>
<div class="section" id="status-of-codegen-tutorial-material">
<h2>Status of codegen-tutorial material</h2>
<p>I have not added any new tutorial material this week, but have been
working on making all notebooks work under all targeted operating
systems. However, every change to the notebooks has to be checked
on all operating systems using both Python 2 and Python 3. This
becomes tedious very quickly, so I decided to enable continuous
integration on our <a class="reference external" href="https://github.com/sympy/scipy-2017-codegen-tutorial">repository</a>. I followed conda-forge's approach: Travis CI
for OS X, CircleCI for Linux and AppVeyor for Windows (and a private
CI server for another Linux setup). And last night
I <em>finally</em> got a green light on all four of our CI services.</p>
</div>
<div class="section" id="plans-for-the-upcoming-week">
<h2>Plans for the upcoming week</h2>
<p>We have had a performance <a class="reference external" href="https://github.com/sympy/sympy/issues/12411">regression</a> in <tt class="docutils literal">sympy.cse</tt> which has bitten me
multiple times this week. I managed to <a class="reference external" href="https://github.com/sympy/sympy_benchmarks/pull/38">craft</a> a small test case
indicating that the algorithmic complexity of the new function is
considerably worse than before (effectively making it useless for many
applications). In my weekly mentor-meeting (with Aaron) we discussed
possibly reverting that <a class="reference external" href="https://github.com/sympy/sympy/pull/11232">change</a>. I will first try to see if I can
identify easy-to-fix bottlenecks by profiling the code. But the
risk is that it is too much work to be done before the upcoming
new release of SymPy, and then we will simply revert for now (choosing
speed over extensiveness of the sub-expression elimination).</p>
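<p>For readers unfamiliar with it, <tt class="docutils literal">sympy.cse</tt> performs common subexpression elimination. A minimal illustration of the interface (this is just an example, not the benchmark case mentioned above):</p>

```python
# Minimal illustration of sympy.cse: a shared subexpression is pulled out
# into a replacement symbol so it is only evaluated once.
from sympy import cse, sqrt, symbols

x, y = symbols('x y')
replacements, reduced = cse([(x + y)**2 + sqrt(x + y)])
# replacements == [(x0, x + y)]; reduced reuses x0 in both places
```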
<p>I still need to test the notebooks using not only <tt class="docutils literal">msvc</tt> under Windows
(which is currently used in the AppVeyor tests), but also <tt class="docutils literal">mingw</tt>. I did
manage to get it working locally but there is still some effort left
in order to make this work on AppVeyor. It's extra tricky since there
is a <a class="reference external" href="https://bugs.python.org/issue21821">bug</a> in <tt class="docutils literal">distutils</tt> in Python 3 which causes the detection of mingw
to fail. So we need to either:</p>
<ul class="simple">
<li>Patch <tt class="docutils literal">cygwincompiler.py</tt> in <tt class="docutils literal">distutils</tt> (which I believe we can do
if we create a conda package for our tutorial material).</li>
<li>...or use something other than <tt class="docutils literal">pyximport</tt> (I'm hesitant to do this
before the conference).</li>
<li>...or provide a gcc executable (not a <tt class="docutils literal">.bat</tt> file) that simply
spawns <tt class="docutils literal">gcc.bat</tt> (but that executable would need to be compiled
during build of our conda package).</li>
</ul>
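<p>As an illustration of the third option, a thin forwarding wrapper could be sketched as below (hypothetical; a real fix would compile such a wrapper into a native <tt class="docutils literal">.exe</tt> during the conda-package build, since a Python script would not satisfy distutils' detection either):</p>

```python
# Hypothetical sketch of a wrapper that forwards its arguments and exit
# code to another command (e.g. gcc.bat); illustration only.
import subprocess
import sys

def forward(cmd, extra_args):
    """Run cmd with extra_args and return its exit code."""
    return subprocess.call([cmd] + list(extra_args))

# A real wrapper would end with: sys.exit(forward("gcc.bat", sys.argv[1:]))
```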
<p>Based on my work on making the CI services work, we will need to
provide test scripts for the participants to run. We need to provide
the organizers with these scripts by June 27th so this needs to be
decided upon during next week. I am leaning towards providing an
<tt class="docutils literal">environment.yml</tt> file together with a simple instruction of
activating said environment, e.g.:</p>
<pre class="literal-block">
$ conda env create -f environment.yml
$ source activate codegen17
$ python -c "import scipy2017codegen as cg; cg.test()"
</pre>
<p>This could even be tested on our CI services.</p>
<p>I also intend to add a (perhaps final) tutorial notebook for chemical
kinetics where we also consider diffusion. We will solve the PDE using
the method of lines. The addition of a spatial dimension in this way
is simple in principle, things do tend to become tricky when handling
boundary conditions though. I will try to use the simplest possible
treatment in order to avoid taking focus from what we are teaching
(code-generation).</p>
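<p>To make the idea concrete, here is a minimal, hypothetical sketch of the method of lines for a single-species reaction-diffusion equation (this is not the tutorial notebook's code): discretize u_t = D*u_xx - k*u in space with central differences, then step the resulting ODE system with explicit Euler. Mirroring at the end points is the simplest zero-flux boundary treatment.</p>

```python
# Hypothetical method-of-lines sketch for u_t = D*u_xx - k*u on [0, 1];
# illustration only, all coefficients are made up.
D, k = 0.1, 1.0               # diffusion and decay coefficients
n = 21                        # number of grid points
dx, dt = 1.0 / (n - 1), 1e-4  # grid spacing and (stable) time step

def rhs(u):
    """Spatially discretized right-hand side with zero-flux boundaries."""
    du = []
    for i in range(len(u)):
        left = u[i - 1] if i > 0 else u[i + 1]            # mirror at x = 0
        right = u[i + 1] if i < len(u) - 1 else u[i - 1]  # mirror at x = 1
        du.append(D * (left - 2 * u[i] + right) / dx**2 - k * u[i])
    return du

u = [1.0 if i == n // 2 else 0.0 for i in range(n)]  # initial spike
for _ in range(100):  # explicit Euler steps
    u = [ui + dt * dui for ui, dui in zip(u, rhs(u))]
```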
<p>It is also my hope that this combined diffusion-reaction model is a
good candidate for <tt class="docutils literal">ufuncify</tt> from
<tt class="docutils literal">sympy.utilities.autowrap</tt>.</p>
</div></div>https://asmeurer.github.io/blog/posts/automatically-deploying-this-blog-to-github-pages-with-travis-ci/Aaron Meurer (asmeurer)Aaron Meurer (asmeurer): Automatically deploying this blog to GitHub Pages with Travis CIMon, 19 Jun 2017 19:21:32 GMT
https://asmeurer.github.io/blog/posts/automatically-deploying-this-blog-to-github-pages-with-travis-ci/
<div><p>This blog is now <a href="http://travis-ci.org/asmeurer/blog">deployed to GitHub pages automatically</a> from Travis CI.</p>
<p>As I've outlined in the <a href="https://asmeurer.github.io/blog/posts/automatically-deploying-this-blog-to-github-pages-with-travis-ci/moving-to-github-pages-with-nikola/">past</a>, this blog
is built with the <a href="https://getnikola.com/">Nikola</a> static blogging
engine. I really like Nikola because it uses Python, has lots of nice
extensions, and is <a href="https://github.com/getnikola/nikola/blob/master/LICENSE.txt">sanely
licensed</a>.</p>
<p>Most importantly, it is a static site generator, meaning I write my posts in
Markdown, and Nikola generates the site as static web content ("static" means no web server
is required to run the site). This means that the site can be hosted for free
on <a href="https://pages.github.com/">GitHub pages</a>. This is how this site has been
hosted since I started it. I have
a <a href="http://github.com/asmeurer/blog">GitHub repo</a> for the site, and the content
itself is deployed to
the <a href="https://github.com/asmeurer/blog/tree/gh-pages">gh-pages</a> branch of the
repo. But until now, the deployment has happened only manually with the
<code>nikola github_deploy</code> command.</p>
<p>A much better way is to deploy automatically using Travis CI. That way, I do
not need to run any software on my computer to deploy the blog.</p>
<p>The steps outlined here will work for any static site generator. They assume
you already have one set up and hosted on GitHub.</p>
<h3>Step 1: Create a .travis.yml file</h3>
<p><strong>Create a <code>.travis.yml</code> file like the one below</strong></p>
<pre><code class="language-yaml">sudo: false
language: python
python:
- 3.6
install:
- pip install "Nikola[extras]" doctr
script:
- set -e
- nikola build
- doctr deploy . --built-docs output/
</code></pre>
<ul>
<li>If you use a different static site generator, replace <code>nikola</code> with that
site generator's command.</li>
<li>If you have Nikola configured to output to a different directory, or use a
different static site generator, replace <code>--built-docs output/</code> with the
directory where the site is built.</li>
<li>Add any extra packages you need to build your site to the <code>pip install</code>
command. For instance, I use the <code>commonmark</code> extension for Nikola, so I
need to install <code>commonmark</code>.</li>
<li>The <code>set -e</code> line is important. It will prevent the blog from being deployed
if the build fails.</li>
</ul>
<p><strong>Then go to <a href="https://travis-ci.org/profile/">https://travis-ci.org/profile/</a> and enable Travis for your blog
repo.</strong></p>
<h3>Step 2: Run doctr</h3>
<p>The key here is <a href="https://drdoctr.github.io/doctr/">doctr</a>, a tool I wrote with
<a href="https://github.com/gforsyth">Gil Forsyth</a> that makes deploying anything from
Travis CI to GitHub Pages a breeze. It automatically handles creating and
encrypting a deploy SSH key for GitHub, and the syncing of files to the
<code>gh-pages</code> branch.</p>
<p><strong>First install doctr.</strong> <code>doctr</code> requires
Python 3.5+, so you'll need that. You can install it with conda:</p>
<pre><code class="language-bash">conda install -c conda-forge doctr
</code></pre>
<p>or if you don't use conda, with pip</p>
<pre><code class="language-bash">pip install doctr
</code></pre>
<p><strong>Then run this command in your blog repo:</strong></p>
<pre><code class="language-bash">doctr configure
</code></pre>
<p>This will ask you for your GitHub username and password,
and for the name of the repo you are deploying from and to (for instance, for
my blog, I entered <code>asmeurer/blog</code>). The output will look something like this:</p>
<pre><code class="language-http">$ doctr configure
What is your GitHub username? asmeurer
Enter the GitHub password for asmeurer:
A two-factor authentication code is required: app
Authentication code: 911451
What repo do you want to build the docs for (org/reponame, like 'drdoctr/doctr')? asmeurer/blog
What repo do you want to deploy the docs to? [asmeurer/blog] asmeurer/blog
Generating public/private rsa key pair.
Your identification has been saved in github_deploy_key.
Your public key has been saved in github_deploy_key.pub.
The key fingerprint is:
SHA256:4cscEfJCy9DTUb3DnPNfvbBHod2bqH7LEqz4BvBEkqc doctr deploy key for asmeurer/blog
The key's randomart image is:
+---[RSA 4096]----+
| ..+.oo.. |
| *o*.. . |
| O.+ o o |
| E + o B . |
| + S . +o +|
| = o o o.o+|
| * . . =.=|
| . o ..+ =.|
| o..o+oo |
+----[SHA256]-----+
The deploy key has been added for asmeurer/blog.
You can go to https://github.com/asmeurer/blog/settings/keys to revoke the deploy key.
================== You should now do the following ==================
1. Commit the file github_deploy_key.enc.
2. Add
script:
- set -e
- # Command to build your docs
- pip install doctr
- doctr deploy <deploy_directory>
to the docs build of your .travis.yml. The 'set -e' prevents doctr from
running when the docs build fails. Use the 'script' section so that if
doctr fails it causes the build to fail.
3. Put
env:
global:
- secure: "Kf8DlqFuQz9ekJXpd3Q9sW5cs+CvaHpsXPSz0QmSZ01HlA4iOtdWVvUttDNb6VGyR6DcAkXlADRf/KzvAJvaqUVotETJ1LD2SegnPzgdz4t8zK21DhKt29PtqndeUocTBA6B3x6KnACdBx4enmZMTafTNRX82RMppwqxSMqO8mA="
in your .travis.yml.
</code></pre>
<p>Follow the steps at the end of the command:</p>
<ol>
<li><strong>Commit the file <code>github_deploy_key.enc</code>.</strong></li>
<li>You already have <code>doctr deploy</code> in your <code>.travis.yml</code> from step 1 above.</li>
<li><strong>Add the <code>env</code> block to your <code>.travis.yml</code>.</strong> This will let Travis CI decrypt
the SSH key used to deploy to <code>gh-pages</code>.</li>
</ol>
<h3>That's it</h3>
<p>Doctr will now deploy your blog automatically. You may want to look at the
Travis build to make sure everything works. Note that <code>doctr</code> only deploys
from <code>master</code> by default (see below). You may also want to look at the
other
<a href="https://drdoctr.github.io/doctr/commandline.html#doctr-deploy">command line flags</a> for
<code>doctr deploy</code>, which let you do things such as to deploy to <code>gh-pages</code> for a
different repo than the one your blog is hosted on.</p>
<p>I recommend these steps over the ones in
the
<a href="https://getnikola.com/blog/automating-nikola-rebuilds-with-travis-ci.html">Nikola manual</a> because
doctr handles the SSH key generation for you, making things more secure. I
also found that the <code>nikola github_deploy</code> command
was <a href="https://github.com/getnikola/nikola/issues/2847">doing too much</a>, and
<code>doctr</code> handles syncing the built pages already anyway. Using <code>doctr</code> is much
simpler.</p>
<h3>Extra stuff</h3>
<h4>Reverting a build</h4>
<p>If a build goes wrong and you need to revert it, you'll need to use git to
revert the commit on your <code>gh-pages</code> branch. Unfortunately, GitHub doesn't
seem to have a way to revert commits in their web interface, so it has to be
done from the command line.</p>
<h4>Revoking the deploy key</h4>
<p>To revoke the deploy key generated by doctr, go to your repo on GitHub, click on
"settings" and then "deploy keys". Do this if you decide to stop using doctr,
or if you feel the key may have been compromised. If you do this, the
deployment will stop until you run step 2 again to create a new key.</p>
<h4>Building the blog from branches</h4>
<p>You can also build your blog from branches, e.g., if you want to test things
out without deploying to the final repo.</p>
<p>We will use the steps
outlined
<a href="https://drdoctr.github.io/doctr/recipes.html#deploy-docs-from-any-branch">here</a>.</p>
<p>Replace the line</p>
<pre><code class="language-yaml"> - doctr deploy . --built-docs output/
</code></pre>
<p>in your <code>.travis.yml</code> with something like</p>
<pre><code class="language-yaml"> - if [[ "${TRAVIS_BRANCH}" == "master" ]]; then
doctr deploy . --built-docs output/;
else
doctr deploy "branch-$TRAVIS_BRANCH" --built-docs output/ --no-require-master;
fi
</code></pre>
<p>This will deploy your blog as normal from <code>master</code>, but from a branch it will
deploy to the <code>branch-<branchname></code> subdir. For instance, my blog is at
<a href="http://www.asmeurer.com/blog/">http://www.asmeurer.com/blog/</a>, and if I had a branch called <code>test</code>, it would
deploy it to <a href="http://www.asmeurer.com/blog/branch-test/">http://www.asmeurer.com/blog/branch-test/</a>.</p>
<p>Note that it will not delete old branches for you from <code>gh-pages</code>. You'll need
to do that manually once they are merged.</p>
<p>This only works for branches in the same repo. For security reasons, it is not
possible to deploy from a pull-request branch from a fork.</p>
<h4>Enable build cancellation in Travis</h4>
<p>If you go to the Travis page for your blog and choose "settings" from the
hamburger menu, you can enable auto cancellation for branch builds. This will
make it so that if you push many changes in succession, only the most recent
one will get built. This makes the changes get built faster, and lets you
revert mistakes or typos without them ever actually being deployed.</p></div>https://shikharj.github.io//2017/06/19/GSoC-Progress-Week-3Shikhar Jaiswal (ShikharJ)Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 3Mon, 19 Jun 2017 00:00:00 GMT
https://shikharj.github.io//2017/06/19/GSoC-Progress-Week-3/
<p>Hello, this post contains the third report of my GSoC progress. This week was mostly spent on learning the internal structure of <code class="highlighter-rouge">SymEngine</code>’s functionality and methods at a deeper level.</p>
<h2 id="report">Report</h2>
<h3 id="symengine">SymEngine</h3>
<p>This week I worked on implementing <code class="highlighter-rouge">Conjugate</code> class and the related methods in <code class="highlighter-rouge">SymEngine</code>, through PR <a href="https://github.com/symengine/symengine/pull/1295">#1295</a>.</p>
<p>I also worked on implementing the “fancy-set” <code class="highlighter-rouge">Range</code>, the code for which would be complete enough to be pushed sometime in the coming GSoC week.</p>
<p>Also, since it would probably be the last week that I’d be working on <code class="highlighter-rouge">SymEngine</code>, I spent some time going through the codebase and checking for discontinuities between <code class="highlighter-rouge">SymEngine</code> and <code class="highlighter-rouge">SymPy</code>’s implementations.</p>
<h3 id="symenginepy">SymEngine.py</h3>
<p>I pushed in <a href="https://github.com/symengine/symengine.py/pull/155">#155</a> fixing a trivial change from <code class="highlighter-rouge">_sympify</code> to <code class="highlighter-rouge">sympify</code> in relevant cases throughout the <code class="highlighter-rouge">SymEngine.py</code> codebase. The PR is reviewed and merged.</p>
<p>I reached out to Isuru once again regarding further work to be undertaken for <code class="highlighter-rouge">PyDy</code>, and he suggested wrapping up <code class="highlighter-rouge">Relationals</code> from <code class="highlighter-rouge">SymEngine</code>. The work, which is pushed through <a href="https://github.com/symengine/symengine.py/pull/159">#159</a>, is in itself close to completion, with only specific parsing capabilities left to be implemented (e.g. <code class="highlighter-rouge">x < y</code> should return a <code class="highlighter-rouge">LessThan(x, y)</code> object).</p>
<p>Wrapping <code class="highlighter-rouge">Relationals</code> also marks the initiation of <code class="highlighter-rouge">SymEngine.py</code>’s side of Phase II, which predominantly focuses on bug-fixing and wrapping.</p>
<p>See you again!</p>
<p><strong>Addio</strong></p>http://valglad.github.io/2017/06/18/orderValeriia Gladkova (valglad)Valeriia Gladkova (valglad): The order methodSun, 18 Jun 2017 00:00:00 GMT
http://valglad.github.io/2017/06/18/order/
<p>Last week and some of this one I was working on changing the <code class="highlighter-rouge">order()</code> method of <code class="highlighter-rouge">FpGroup</code>s. Currently SymPy attempts to perform coset enumeration on the trivial subgroup and, if it terminates, the order of the group is the length of the coset table. A somewhat better way, at least theoretically, is to try and find a subgroup of finite index and compute the order of this subgroup separately. The function I’ve implemented only looks for a finite index subgroup generated by a subset of the group’s generators with a pseudo-random element thrown in (this can sometimes give smaller index subgroups and make the computation faster). The PR is <a href="https://github.com/sympy/sympy/pull/12761">here</a>.</p>
<p>The idea is to split the list of generators (with a random element) into two halves and try coset enumeration on one of the halves. To make sure this doesn’t go on for too long, it is necessary to limit the number of cosets that the coset enumeration algorithm is allowed to define. (Currently, the only way to set the maximum number of cosets is by changing the class variable <code class="highlighter-rouge">CosetTable.coset_table_max_limit</code> which is very large (4096000) by default - in the PR, I added a keyword argument to all functions relevant to coset enumeration so that the maximum can be set when calling the function.) If the coset enumeration fails (because the maximum number of cosets was exceeded), try the other half. If this doesn’t succeed, double the maximum number of cosets and try again. Once (if) a suitable subgroup is found, the order of the group is just the index times the order of the subgroup. The latter is computed in the same way by having <code class="highlighter-rouge">order()</code> call itself recursively.</p>
<p>The implementation wasn’t hard in itself but I did notice that finding the subgroup’s presentation was taking far too long in certain cases (specifically when the subgroup’s index wasn’t small enough) and spent a while trying to think of a way around it. I think that for cyclic subgroups, there is a way to calculate the order during coset enumeration without having to find the presentation explicitly but I couldn’t quite work out how to do that. Perhaps, I will eventually find a way and implement it. For now, I left it as it is. For large groups, coset enumeration will take a long time anyway and at least the new way will be faster in some cases and may also be able to tell if a group is infinite (while coset enumeration on the trivial subgroup wouldn’t terminate at all).</p>
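<p>A hypothetical sketch of the doubling strategy described above (not the actual SymPy code; <code class="highlighter-rouge">coset_enumeration</code> here is a stand-in that raises when the coset limit is exceeded):</p>

```python
# Illustration of the strategy: try coset enumeration on each half of the
# generators, doubling the allowed number of cosets after every failure.
def index_via_subgroup(generators, coset_enumeration, max_cosets=256):
    while True:
        half = len(generators) // 2
        for subset in (generators[:half], generators[half:]):
            try:
                # returns the index of the subgroup generated by `subset`
                return coset_enumeration(subset, max_cosets)
            except ValueError:  # coset limit exceeded
                continue
        max_cosets *= 2  # neither half finished: raise the limit, retry
```

<p>The order of the whole group is then this index times the order of the subgroup, which is computed by a recursive call.</p>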
<p>Now I am going to actually start working on homomorphisms which will be the main and largest part of my project. I’ll begin by writing the <code class="highlighter-rouge">GroupHomomorphism</code> class in this coming week. This won’t actually become a PR for a while but it is easier to do it first because I still have exams in the next several days. After that I’ll implement the Knuth-Bendix algorithm for rewriting systems (I might make a start on this later this week as I’ll have more time once the exams are over). Then I’ll send a PR with rewriting systems and once that’s merged, the <code class="highlighter-rouge">GroupHomomorphism</code> class one, because it would depend on rewriting systems.</p>http://arif7blog.wordpress.com/?p=400Arif Ahmed (ArifAhmed1995)Arif Ahmed (ArifAhmed1995): Week 3 Report(June 11 – 17): Implementing suggestionsSat, 17 Jun 2017 17:17:28 GMT
https://arif7blog.wordpress.com/2017/06/17/week-3-reportjune-11-17-implementing-suggestions/
<p>Ondrej and Prof. Sukumar were quite busy this week, but eventually they reviewed both the notebook and the code. Their review was quite insightful, as it brought to the surface one major optimization problem, which I’ll discuss below. I also encountered a major issue with the already-defined Polygon class.</p>
<p>Firstly, there were some minor issues :</p>
<ol>
<li>I had used Python floating point numbers instead of SymPy’s exact representations in numerous places (both in the algorithm and the test file). So, that had to be changed first.</li>
<li>The decompose() method discarded all constant terms in a polynomial. Now the constant term is stored under the key zero.<br />
Example:<br />
Before:</p>
<pre class="brush: python; title: ; notranslate">
decompose(x**2 + x + 2) = {1: x, 2: x**2}
</pre>
<p>After:</p>
<pre class="brush: python; title: ; notranslate">
decompose(x**2 + x + 2) = {0: 2, 1: x, 2: x**2}
</pre>
</li>
<li>Instead of computing component-wise and passing that value to integration_reduction(), the inner product is computed directly and then passed on. This leads to only one recursive call instead of two for the 2D case (and, in future, three for the 3D case).</li>
</ol>
<p>Prof. Sukumar also suggested that I add the option of hyperplane representation. This was simple to do as well. All I did was compute the intersections of the hyperplanes (lines, as of now) to get the vertices. In the case of vertex representation, the hyperplane parameters would have to be computed.</p>
<p><strong>Major Issues:</strong></p>
<ol>
<li>Another suggestion was to add the tests for 2D polytopes mentioned in the <span style="color: #00ffff;"><a href="http://dilbert.engr.ucdavis.edu/~suku/quadrature/cls-integration.pdf" rel="noopener" target="_blank">paper</a></span> (page 10). The two tests which failed were for polytopes with intersecting sides. In fact, this was the error I got:</li>
</ol>
<pre class="brush: python; title: ; notranslate">
sympy.geometry.exceptions.GeometryError: Polygon has intersecting sides.
</pre>
<p>It seems that the existing Polygon class in SymPy does not account for polygons with intersecting sides. At first I thought that polygons were not supposed to have intersecting sides by geometrical definition, but that is <span style="color: #00ffff;"><a href="https://en.wikipedia.org/wiki/Complex_polygon" rel="noopener" target="_blank">not true</a></span>. I’ll have to discuss how to circumvent this problem with my mentor.</p>
<p>2. Prof. Sukumar rightly questioned the use of best_origin(). As explained in an earlier post, best_origin() finds the point on a facet which leads to an inner product of lower degree. But, obviously, there is an associated cost in computing that intersection point. So, I wrote some basic code to compare the current best_origin() against simply choosing the first vertex of the facet.</p>
<pre class="brush: python; title: ; notranslate">
from __future__ import print_function, division

from time import time

import matplotlib.pyplot as plt

from sympy import sqrt
from sympy.core import S
from sympy.integrals.intpoly import (is_vertex, intersection, norm,
                                     decompose, best_origin,
                                     hyperplane_parameters,
                                     integration_reduction, polytope_integrate,
                                     polytope_integrate_simple)
from sympy.geometry.line import Segment2D
from sympy.geometry.polygon import Polygon
from sympy.geometry.point import Point
from sympy.abc import x, y

MAX_DEGREE = 10


def generate_polynomial(degree, max_diff):
    poly = 0
    if max_diff % 2 == 0:
        degree += 1
    for i in range((degree - max_diff)//2, (degree + max_diff)//2):
        if max_diff % 2 == 0:
            poly += x**i*y**(degree - i - 1) + y**i*x**(degree - i - 1)
        else:
            poly += x**i*y**(degree - i) + y**i*x**(degree - i)
    return poly

times = {}
times_simple = {}
for max_diff in range(1, 11):
    times[max_diff] = 0
for max_diff in range(1, 11):
    times_simple[max_diff] = 0


def test_timings(degree):
    hexagon = Polygon(Point(0, 0), Point(-sqrt(3) / 2, S(1) / 2),
                      Point(-sqrt(3) / 2, 3 / 2), Point(0, 2),
                      Point(sqrt(3) / 2, 3 / 2), Point(sqrt(3) / 2, S(1) / 2))
    square = Polygon(Point(-1, -1), Point(-1, 1), Point(1, 1), Point(1, -1))
    for max_diff in range(1, degree):
        poly = generate_polynomial(degree, max_diff)
        t1 = time()
        polytope_integrate(square, poly)
        times[max_diff] += time() - t1
        t2 = time()
        polytope_integrate_simple(square, poly)
        times_simple[max_diff] += time() - t2
    return times

for i in range(1, MAX_DEGREE + 2):
    test_timings(i)

plt.plot(list(times.keys()), list(times.values()), 'b-', label="Best origin")
plt.plot(list(times_simple.keys()), list(times_simple.values()), 'r-', label="First point")
plt.show()
</pre>
<p>The following figures show computation time vs. the maximum difference in the exponents of x and y. The blue line uses best_origin(); the red line simply selects the first vertex of the facet (line segment).</p>
<p><span style="text-decoration: underline;">Hexagon</span></p>
<p><img alt="figure_hexagon" class="alignnone wp-image-482" height="181" src="https://arif7blog.files.wordpress.com/2017/06/figure_hexagon.png?w=242&h=181" width="242" /></p>
<p><img alt="hexagon" class="alignnone size-full wp-image-485" src="https://arif7blog.files.wordpress.com/2017/06/hexagon.png?w=730" /></p>
<p><span style="text-decoration: underline;">Square</span></p>
<p><img alt="figure_square.png" class="alignnone wp-image-488" height="213" src="https://arif7blog.files.wordpress.com/2017/06/figure_square.png?w=283&h=213" width="283" /></p>
<p><img alt="Square" class="alignnone size-full wp-image-491" src="https://arif7blog.files.wordpress.com/2017/06/square.png?w=730" /></p>
<p>When the polygon has many facets which intersect the axes, making it an obvious choice to select that intersection point as the best origin, the current best_origin technique works better, as in the case of the square, where all four sides intersect the axes.<br />
However, in the case of the hexagon, the best_origin technique would yield a better point than the first vertex for only one facet, and the added computation makes it more expensive than simply selecting the first vertex. Of course, as the difference between the exponents increases, the time taken by best_origin is overshadowed by other processes in the algorithm. I’ll need to look at the method again and see whether there are preliminary checks that can be performed, making the computation of intersections a last resort.</p>https://parsoyaarihant.github.io/blog/gsoc/2017/06/16/GSoC17-Week2-ReportArihant Parsoya (parsoyaarihant)Arihant Parsoya (parsoyaarihant): GSoC17 Week 2 ReportFri, 16 Jun 2017 06:30:00 GMT
https://parsoyaarihant.github.io/blog/gsoc/2017/06/16/GSoC17-Week2-Report.html
<p>Our plan was to implement all algebraic rules and complete the Rubi test suite for algebraic integration. However, there was a setback in our work because of a <a href="https://github.com/HPAC/matchpy/issues/9">bug</a> I found in <code class="highlighter-rouge">ManyToOneReplacer</code> of MatchPy. This bug prevented matching of expressions having many nested commutative expressions, so we were not able to match all expressions in the test suite. <a href="https://github.com/wheerd">Manuel Krebber</a> has helped us a lot by adding features and giving suggestions to make MatchPy work for Rubi. He fixed the bug today, and I will resume my testing of the algebraic rules as soon as possible.</p>
<h3 id="utility-functions">Utility functions</h3>
<p>Previously, our strategy was to implement only those functions which were used by the algebraic rules. However, we found that those functions depend on many other functions, so we decided to implement all the functions from the start, to avoid problems in the long run.</p>
<p>Mathematica allows a function to be defined piecewise by pattern-matching on its arguments:</p>
<pre><code class="language-Mathematica">SqrtNumberQ[m_^n_] :=
IntegerQ[n] && SqrtNumberQ[m] || IntegerQ[n-1/2] && RationalQ[m]
SqrtNumberQ[u_*v_] :=
SqrtNumberQ[u] && SqrtNumberQ[v]
SqrtNumberQ[u_] :=
RationalQ[u] || u===I
</code></pre>
<p>In the above code, <code class="highlighter-rouge">SqrtNumberQ</code> is defined multiple times for different function arguments. To implement these functions in Python, Francesco suggested that we test the type of the argument using conditionals:</p>
<pre><code class="language-Python">from sympy import I, S

def SqrtNumberQ(expr):
    # SqrtNumberQ[u] returns True if u^2 is a rational number; else it returns False.
    if expr.is_Pow:
        m = expr.base
        n = expr.exp
        # use `and`/`or` for short-circuiting, and S.Half instead of 1/2
        # (which is integer division under Python 2)
        return (IntegerQ(n) and SqrtNumberQ(m)) or (IntegerQ(n - S.Half) and RationalQ(m))
    elif expr.is_Mul:
        return all(SqrtNumberQ(i) for i in expr.args)
    else:
        return RationalQ(expr) or expr == I
</code></pre>
<p>There was some problem while implementing <code class="highlighter-rouge">Catch</code>, since SymPy doesn’t currently have a <code class="highlighter-rouge">Throw</code> object:</p>
<pre><code class="language-Mathematica">MapAnd[f_,lst_] :=
Catch[Scan[Function[If[f[#],Null,Throw[False]]],lst];True]
</code></pre>
<p>I used the following code to implement <code class="highlighter-rouge">MapAnd</code> in Python:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code>def MapAnd(f, l, x=None):
    # MapAnd[f,l] applies f to the elements of list l until False is returned; else returns True
    if x:
        for i in l:
            if f(i, x) == False:
                return False
        return True
    else:
        for i in l:
            if f(i) == False:
                return False
        return True
</code></pre>
</div>
<h2 id="todo">TODO</h2>
<p>Abdullah and I have implemented ~40 utility functions so far. There are around 40 more functions which need to be implemented in order to support all the algebraic rules; we should be able to complete them by tomorrow.</p>
<p>I will also resume adding tests to the Rubi test suite for algebraic functions.</p>https://gxyd.github.io/blogs/Gsoc2017-week-2Gaurav Dhingra (gxyd)Gaurav Dhingra (gxyd): GSoC Week 2Wed, 14 Jun 2017 00:00:00 GMT
https://gxyd.github.io/blogs/Gsoc2017-week-2/
<p>After Kalevi’s comment</p>
<blockquote>
<p>The param-rde branch is getting inconveniently big for hunting down and reviewing small changes. I think we should create another branch that would contain my original PR plus the added limited_integrate code (and optionally something else). Then that should be merged. Thereafter it would be easier to review new additions.</p>
</blockquote>
<p>I decided to remove a few commits from top of <code class="highlighter-rouge">param-rde</code> branch and made the branch <code class="highlighter-rouge">param-rde</code> mergeable.</p>
<p>Yesterday Kalevi merged the pull request <a href="https://github.com/sympy/sympy/pull/11761">#11761, Parametric Risch differential equation</a>. There were quite a few problems (unrelated to my work) with Travis, but it finally passed the tests.</p>
<p>These are the pull requests that have been completed/started so far for my project:</p>
<ul>
<li>
<font color="green">Merged:</font>
<p><a href="https://github.com/sympy/sympy/pull/11761">param-rde #11761</a>: This is the pull request that Kalevi made back in September 2016. I added further commits for the <code class="highlighter-rouge">limited_integration</code> function and the implementation of the parametric Risch differential equation, though not many tests were added, which should definitely be done. I haven’t been able to find tests that lead to non-cancellation cases (Kalevi mentions that we should be able to find them), so for the time being we decided to start implementing the cancellation routines, particularly the Liouvillian cases (the others being the non-linear and hypertangent cases); there isn’t a good reason to implement the hypertangent cases right now.</p>
</li>
<li>
<font color="red">Unmerged:</font>
<p><a href="https://github.com/sympy/sympy/pull/12734">param-rde_polymatrix</a>: this pull request is intended to use <code class="highlighter-rouge">PolyMatrix</code> instead of <code class="highlighter-rouge">Matrix</code> (which is <code class="highlighter-rouge">MutableDenseMatrix</code>). Here is Kalevi’s comment regarding it: “It would also be possible to use from … import PolyMatrix as Matrix. That would hint that there might be a single matrix in the future.”. The reason for the change is that <code class="highlighter-rouge">Matrix</code> doesn’t play well with <code class="highlighter-rouge">Poly</code> (or related) elements.</p>
</li>
<li>
<font color="green">Merged:</font>
<p><a href="https://github.com/sympy/sympy/pull/12727">Change printing of DifferentialExtension object</a>: it wasn’t necessary to make this pull request, but it does make debugging the problems a little easier.</p>
</li>
</ul>
<p>I was hoping that I would write a bit of mathematics in my blog posts, but unfortunately the things I have dealt with till now required me to focus on the programming API, introducing <code class="highlighter-rouge">PolyMatrix</code> so that it deals well with the elements of <code class="highlighter-rouge">Poly</code>’s. This week, though, I am going to deal with more of the mathematics.</p>
<blockquote>
<h2 id="todo-for-this-week">TODO for this week</h2>
</blockquote>
<ul>
<li>Complete the cancellation Liouvillian cases. I just sent the pull request for it: <a href="https://github.com/sympy/sympy/pull/12743">Liouvillian cases for Parametric Risch differential equation #12743</a>. I really need to catch up with the core of things, and do it a little quicker.</li>
</ul>
<p>I hope the next blog post is going to be a good mathematical one :)</p>https://szymag.github.io/post/week-2/Szymon Mieszczak (szymag)Szymon Mieszczak (szymag): Week 2Tue, 13 Jun 2017 18:41:21 GMT
https://szymag.github.io/post/week-2/
<p>I spent my second week introducing Lamé coefficients into CoordSysCartesian. Unfortunately, our work is constrained by SymPy’s structure, so we don’t have too much freedom in our implementation. Happily, with my mentor Francesco, we found a solution for achieving our goals without breaking the vector module. This week showed me the gaps in my knowledge of object-oriented programming and SymPy.
Having access to the Lamé coefficients, I was able to modify the Del operator (the nabla operator in mathematics) to handle the spherical and cylindrical coordinate systems.http://ranjithkumar007.github.io/2017/06/13/Improve Sets Module Part IIRanjith Kumar (ranjithkumar007)Ranjith Kumar (ranjithkumar007): Improve Sets Module Part IITue, 13 Jun 2017 00:00:00 GMT
http://ranjithkumar007.github.io/2017/06/13/Improve-Sets-Module-Part-II/
<p>Last week’s PR <a href="https://github.com/symengine/symengine/pull/1281">#1281</a> on <code class="highlighter-rouge">Complement</code>, <code class="highlighter-rouge">set_intersection</code> and <code class="highlighter-rouge">set_complement</code> got merged in. This week, I implemented <code class="highlighter-rouge">ConditionSet</code> and <code class="highlighter-rouge">ImageSet</code>. This work is done in <a href="https://github.com/symengine/symengine/pull/1291">#1291</a> for <code class="highlighter-rouge">ConditionSet</code> and <a href="https://github.com/symengine/symengine/pull/1293">#1293</a> for <code class="highlighter-rouge">ImageSet</code>.</p>
<p><code class="highlighter-rouge">ConditionSet</code> : <br />
It is useful for representing unsolved or partially solved equations.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>class ConditionSet : public Set
{
private:
vec_sym syms_;
RCP<const Boolean> condition_;
...
}
</code></pre>
</div>
<p>Earlier, I used another data member for storing the base set. Thanks to <a href="https://github.com/isuruf">isuruf</a>, I was able to merge it into <code class="highlighter-rouge">condition_</code>. <br />
For implementing the <code class="highlighter-rouge">contains</code> method for <code class="highlighter-rouge">ConditionSet</code>, I added a Subs visitor for <code class="highlighter-rouge">Contains</code> and <code class="highlighter-rouge">And</code>.</p>
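<p>As an aside, the analogous substitution-based membership check can be seen in SymPy’s Python-level <code class="highlighter-rouge">ConditionSet</code> (shown here purely as an illustration of the idea; this is SymPy, not the new C++ class):</p>

```python
from sympy import ConditionSet, Eq, S, Symbol, pi, sin

x = Symbol('x')
# Membership is decided by substituting the candidate into the condition
# and checking it together with membership in the base set.
zeros_of_sin = ConditionSet(x, Eq(sin(x), 0), S.Reals)

print(pi in zeros_of_sin)  # True: sin(pi) == 0 and pi is real
print(1 in zeros_of_sin)   # False: sin(1) != 0
```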
<p><code class="highlighter-rouge">ImageSet</code> : <br />
It is a Set representation of a mathematical function defined on symbols over some set.
For example, <code class="highlighter-rouge">x**2 for all x in [0,2]</code> is represented as <code class="highlighter-rouge">imageset({x},x**2,[0,2])</code>.</p>
<p>When is an ImageSet useful? <br />
Say I need to solve the trigonometric equation <code class="highlighter-rouge">sin(x) = 1</code>. The solution is <code class="highlighter-rouge">2*n*pi + pi/2, n belongs to Integers</code>; for such solutions an imageset is a useful representation to have.</p>
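<p>To make that concrete, here is the equivalent construction in SymPy’s Python-level <code class="highlighter-rouge">ImageSet</code> (an illustration of the idea, not the new C++ API):</p>

```python
from sympy import ImageSet, Lambda, S, Symbol, pi

n = Symbol('n')
# The solution set of sin(x) = 1: {2*n*pi + pi/2 | n in Integers}
solutions = ImageSet(Lambda(n, 2*n*pi + pi/2), S.Integers)

print(pi/2 in solutions)    # True  (n = 0)
print(5*pi/2 in solutions)  # True  (n = 1)
print(pi in solutions)      # False (2*n*pi + pi/2 = pi has no integer solution)
```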
<p>I will try to get these two PRs merged in ASAP. <br />
Next week, I will be working on implementing solvers for lower-degree (<= 4) polynomials.</p>
<p>See you next time. Bye for now !</p>https://shikharj.github.io//2017/06/12/GSoC-Progress-Week-2Shikhar Jaiswal (ShikharJ)Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 2Mon, 12 Jun 2017 00:00:00 GMT
https://shikharj.github.io//2017/06/12/GSoC-Progress-Week-2/
<p>Hello, this post contains the second report of my GSoC progress.</p>
<h2 id="report">Report</h2>
<h3 id="symengine">SymEngine</h3>
<p>This week I mostly worked on implementing specific classes in <code class="highlighter-rouge">SymEngine</code>, namely <code class="highlighter-rouge">Sign</code>, <code class="highlighter-rouge">Float</code> and <code class="highlighter-rouge">Ceiling</code>, through PRs <a href="https://github.com/symengine/symengine/pull/1287">#1287</a> and <a href="https://github.com/symengine/symengine/pull/1290">#1290</a>. The work is currently under review, but again, mostly complete.</p>
<p>Though I had originally planned to implement more classes in my proposal, after a thorough look I realised that a number of the mentioned classes could easily be implemented on the <code class="highlighter-rouge">SymEngine.py</code> side only. As such, there was no hard requirement for them to be implemented in <code class="highlighter-rouge">SymEngine</code>.
Also, a number of them had already been implemented, though as <code class="highlighter-rouge">virtual</code> methods rather than standalone classes. There are still a couple of classes that I’ll be working on in the coming week, which will effectively finish up a huge part of the planned Phase I of my proposal.</p>
<h3 id="symenginepy">SymEngine.py</h3>
<p>Isuru and I conversed a bit on whether I’d be interested in working on providing <code class="highlighter-rouge">SymEngine</code> support for <a href="https://github.com/pydy/pydy">PyDy</a> (a multi-body dynamics toolkit), as a part of GSoC. I agreed happily, and Isuru opened a couple of issues in <code class="highlighter-rouge">SymEngine.py</code> for me to work on, as I was completely new to <code class="highlighter-rouge">PyDy</code>.
I started off wrapping up <code class="highlighter-rouge">Infinity</code>, <code class="highlighter-rouge">NegInfinity</code>, <code class="highlighter-rouge">ComplexInfinity</code> and <code class="highlighter-rouge">NaN</code> classes through PR <a href="https://github.com/symengine/symengine.py/pull/151">#151</a>. I also worked on finishing Isuru’s code, wrapping <code class="highlighter-rouge">ccode</code> and <code class="highlighter-rouge">CodePrinter</code> class with an improvised <code class="highlighter-rouge">doprint</code> function through PR <a href="https://github.com/symengine/symengine.py/pull/152">#152</a>.
I also opened <a href="https://github.com/symengine/symengine.py/pull/153">#153</a>, working on the acquisition and release of the <code class="highlighter-rouge">Global Interpreter Lock</code> or <code class="highlighter-rouge">GIL</code> in the <code class="highlighter-rouge">pywrapper.cpp</code> file.</p>
<p>See you again!</p>
<p><strong>Au Revoir</strong></p>http://bjodah.github.io/blog/posts/gsoc-week2.htmlBjörn Dahlgren (bjodah)Björn Dahlgren (bjodah): Status update week 2 GSoCSun, 11 Jun 2017 21:37:00 GMT
http://bjodah.github.io/blog/posts/gsoc-week2.html
<div><p>I have spent the second week of Google Summer of Code on essentially two things:</p>
<ol class="arabic simple">
<li>Continued work on type awareness in the code printers (<tt class="docutils literal">CCodePrinter</tt> and
<tt class="docutils literal">FCodePrinter</tt>). The work is ongoing in <a class="reference external" href="https://github.com/sympy/sympy/pull/12693">gh-12693</a>.</li>
<li>Writing tutorial <a class="reference external" href="https://github.com/sympy/scipy-2017-codegen-tutorial">code</a> on code-generation in the form of jupyter notebooks for the
upcoming SciPy 2017 <a class="reference external" href="https://scipy2017.scipy.org/ehome/220975/493418/">conference</a>.</li>
</ol>
<div class="section" id="precision-aware-code-printers">
<h2>Precision aware code printers</h2>
<p>After my weekly mentor meeting, we decided to take another approach to
how we are going to represent <tt class="docutils literal">Variable</tt> instances in the
<tt class="docutils literal">.codegen.ast</tt> module. Previously I had proposed to use quite a
number of arguments (stored in <tt class="docutils literal">.args</tt> since it inherits
<tt class="docutils literal">Basic</tt>). Aaron suggested we might want to represent that underlying
information differently. After some discussion we came to the
conclusion that we could introduce an <tt class="docutils literal">Attribute</tt> class (inheriting
from <tt class="docutils literal">Symbol</tt>) to describe things such as value const-ness and
pointer const-ness (those two are available as <tt class="docutils literal">value_const</tt> and
<tt class="docutils literal">pointer_const</tt>). Attributes will be stored in a <tt class="docutils literal">FiniteSet</tt>
(essentially SymPy version of <tt class="docutils literal">set</tt>) and the instances we provide as
"pre-made" in the <tt class="docutils literal">.codegen.ast</tt> module will be supported by the
printers by default. Here is some example code showing what the
current proposed API looks like (for C99):</p>
<pre class="literal-block">
>>> u = symbols('u', real=True)
>>> ptr = Pointer.deduced(u, {pointer_const, restrict})
>>> ccode(Declaration(ptr))
'double * const restrict u;'
</pre>
<p>and for Fortran:</p>
<pre class="literal-block">
>>> vx = Variable(Symbol('x'), {value_const}, float32)
>>> fcode(Declaration(vx, 42))
'real(4), parameter :: x = 42'
</pre>
<p>The C code printer can now also print code using different math functions depending
on the targeted precision (functions guaranteed to be present in the C99 standard):</p>
<pre class="literal-block">
>>> ccode(x**3.7, standard='c99', precision=float32)
'powf(x, 3.7F)'
>>> ccode(exp(x/2), standard='c99', precision=float80)
'expl((1.0L/2.0L)*x)'
</pre>
</div>
<div class="section" id="tutorial-material-for-code-generation">
<h2>Tutorial material for code generation</h2>
<p>Aaron, Jason and I have been discussing what examples to use for the
tutorial on code generation with SymPy. Right now we are aiming to use
quite a few examples from chemistry actually, and more specifically
<a class="reference external" href="https://en.wikipedia.org/wiki/Chemical_kinetics">chemical kinetics</a>. This is the
precise application which got me started using SymPy for
code-generation, so it lies close to my heart (I do extensive modeling
of chemical kinetics in my PhD studies).</p>
<p>Working on the tutorial material has already been really helpful for
getting insight into the development needs of the existing classes and
functions used for code-generation. I was hoping to use <tt class="docutils literal">autowrap</tt>
from the <tt class="docutils literal">.utilities</tt> module. Unfortunately I found that it was not
flexible enough to be useful for integration of systems of ODEs (where
we need to evaluate a vector-valued function taking a vector as
input). I did attempt to subclass the <tt class="docutils literal">CodeWrapper</tt> class to allow
me to do this. But personally I found those classes to be quite hard to
extend (much unlike the printers which I've often found to be
intuitive).</p>
<p>My current plan for the chemical kinetics case is to first solve it
using <tt class="docutils literal">sympy.lambdify</tt>. That allows for quick prototyping, and
unless one has very high demands with respect to performance, it is
usually good enough.</p>
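<p>A minimal sketch of that first step (the reaction system below is a made-up example, not the one from the tutorial): <tt class="docutils literal">lambdify</tt> turns the symbolic right-hand side of an ODE system into a plain callable that an integrator can evaluate cheaply.</p>

```python
import sympy as sp

# Hypothetical reversible reaction A <-> B with rate constants kf, kb.
A, B, kf, kb = sp.symbols('A B k_f k_b')
ydot = [-kf*A + kb*B, kf*A - kb*B]  # d[A]/dt, d[B]/dt

f = sp.lambdify((A, B, kf, kb), ydot)
f(1.0, 0.0, 3.0, 1.0)  # -> [-3.0, 3.0]
```

<p>The resulting function can then be wrapped and handed to an integrator such as <tt class="docutils literal">scipy.integrate.odeint</tt>.</p>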
<p>The next step is to generate a native callback (it was here I was
hoping to use <tt class="docutils literal">autowrap</tt> with the Cython backend). The current
approach is to write the expressions as Cython code using a template.
Cython conveniently follows Python syntax, and hence the string
printer can be used for the code generation. Doing this speeds up the
integration considerably. At this point the bottleneck is going back and
forth through the Python layer.</p>
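<p>A rough sketch of the template idea (the template and function name here are mine, not from the tutorial): since Cython follows Python syntax, the plain string printer is enough to render each expression into the generated source.</p>

```python
import sympy as sp
from sympy.printing import sstr

x, y = sp.symbols('x y')
exprs = [x + y, x*y]

# A hypothetical Cython function template; each expression is rendered
# with the str printer and pasted into the body.
template = "def f(double x, double y):\n    return ({body})\n"
src = template.format(body=", ".join(sstr(e) for e in exprs))
print(src)
```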
<p>So in order to speed up the integration further, we need to bypass
Python during integration (and let the solver call the user
provided callbacks without going through the interpreter). I did
this by providing a C-code template which relies on <tt class="docutils literal">CVode</tt> from the
<a class="reference external" href="https://computation.llnl.gov/projects/sundials">Sundials</a> suite of
non-linear solvers. It is a well established solver and is already
available for Linux, Mac & Windows under conda from the
<tt class="docutils literal"><span class="pre">conda-forge</span></tt> channel. I then provide a thin Cython wrapper, calling
into the C-function, which:</p>
<ol class="arabic simple">
<li>sets up the CVode solver</li>
<li>runs the integration</li>
<li>records some statistics</li>
</ol>
<p>Using native code does come at a cost. One of the strengths of Python
is that it is cross-platform. It (usually) does not matter if your
Python application runs on Linux, OSX or Windows (or any other
supported operating system). However, since we are doing
code-generation, we are relying on compilers provided by the
respective platform. Since we want to support both Python 2 & Python 3
on said three platforms, there are quite a few combinations to cover.
That meant quite a few surprises (I now know for example that MS
Visual C++ 2008 does not support C99), but thanks to the kind <a class="reference external" href="https://github.com/sympy/scipy-2017-codegen-tutorial/issues/2#issuecomment-307538308">help</a> of
<a class="reference external" href="https://github.com/isuruf">Isuru Fernando</a> I think I will manage to
have all platform/version combinations working during next week.</p>
<p>Also planned for next week:</p>
<ul class="simple">
<li>Use <tt class="docutils literal">numba</tt> together with <tt class="docutils literal">lambdify</tt></li>
<li>Use <tt class="docutils literal">Lambdify</tt> from <a class="reference external" href="https://github.com/symengine">SymEngine</a>
(preferably with the LLVM backend).</li>
<li>Make the notebooks more tutorial like (right now they are more of a
show-case).</li>
</ul>
<p>and of course: continued work on the code printers. That's all for now;
feel free to get in touch with any feedback or questions.</p>
</div></div>http://arif7blog.wordpress.com/?p=247Arif Ahmed (ArifAhmed1995)Arif Ahmed (ArifAhmed1995): Week 2 Report(June 3 – June 10) : Working Prototype, Improving functionality.Sun, 11 Jun 2017 10:16:52 GMT
https://arif7blog.wordpress.com/2017/06/11/week-2-reportjune-3-june-10-working-prototype-improving-functionality/
<p>Note : If you’re viewing this on Planet SymPy and Latex looks weird, go to the <a href="https://arif7blog.wordpress.com/" rel="noopener" target="_blank"><span style="color: #00ff00;">WordPress site</span></a> instead.</p>
<p>The 2D use case works, although there are limitations. The current API and method-wise limitations are discussed <a href="https://github.com/ArifAhmed1995/sympy/blob/857463fac558702def0af27d08a1afa01a14aff0/IntegrationOverPolytopes.ipynb" rel="noopener" target="_blank">here</a>.</p>
<p>That lone failing test was caused by typecasting a Python set object to a list. Sets in Python do not support indexing and have no guaranteed order, so when they are coerced to a list, the ordering of the resulting object is arbitrary. Therefore, I had to change to an axes list of [x, y].</p>
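<p>The issue can be reproduced in two lines (the variable names here are mine, not the ones from the pull request):</p>

```python
from sympy.abc import x, y

axes_from_set = list({x, y})  # order depends on hashing: [x, y] or [y, x]
axes = [x, y]                 # a plain list keeps the order deterministic
```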
<p>Some other coding improvements were suggested by Gaurav Dhingra (another GSoC student). One can view them as comments to the <span style="color: #00ccff;"><a href="https://github.com/sympy/sympy/pull/12673" rel="noopener" target="_blank">pull request</a></span>.</p>
<p><span style="text-decoration: underline;"><strong>Clockwise Sort</strong></span><br />
I wanted to overcome some of the method-wise limitations. So, the first improvement I made was to implement a clockwise sorting algorithm for the points of the input polygon.<br />
Come to think of it, for the 3D use case the points belonging to a particular facet will have to be sorted anti-clockwise for the algorithm to work. Therefore, it would be better to include an extra argument “<span style="color: #00ff00;"><span style="color: #808000;">orient</span></span>” with a default value of 1 (clockwise sort) and -1 for an anti-clockwise sort.</p>
<p>First I tried to think of it myself, and naturally the first thing that came to my mind was sorting the points depending on their <img alt="\arctan { \theta }" class="latex" src="https://s0.wp.com/latex.php?latex=%5Carctan+%7B+%5Ctheta+%7D&bg=ffffff&fg=444444&s=-1" title="\arctan { \theta }" /> value where <img alt="\theta" class="latex" src="https://s0.wp.com/latex.php?latex=%5Ctheta&bg=ffffff&fg=444444&s=-1" title="\theta" /> is the angle that the line from that point to an arbitrary reference point (center) makes with the x-axis (more correctly, the independent variable axis). But obviously this is not efficient, because calculating the <img alt="\arctan { \theta }" class="latex" src="https://s0.wp.com/latex.php?latex=%5Carctan+%7B+%5Ctheta+%7D&bg=ffffff&fg=444444&s=-1" title="\arctan { \theta }" /> value is expensive.</p>
<p>Then I came across <span style="color: #00ccff;"><a href="https://stackoverflow.com/a/6989383" rel="noopener" target="_blank"><span style="color: #00ccff;">this answer</span></a></span> on StackOverflow. It was a much better technique because it used no <img alt="\arctan { \theta }" class="latex" src="https://s0.wp.com/latex.php?latex=%5Carctan+%7B+%5Ctheta+%7D&bg=ffffff&fg=444444&s=-1" title="\arctan { \theta }" />, no division operations, and no distance computation using square roots (as pointed out in the comments). Let us understand the reasoning.</p>
<p>Firstly, I’m sure we all know that the distance of a point <img alt="({ x }_{ 1 },{ y }_{ 1 })" class="latex" src="https://s0.wp.com/latex.php?latex=%28%7B+x+%7D_%7B+1+%7D%2C%7B+y+%7D_%7B+1+%7D%29&bg=ffffff&fg=444444&s=-1" title="({ x }_{ 1 },{ y }_{ 1 })" /> from the line <img alt="(ax+by+c=0)" class="latex" src="https://s0.wp.com/latex.php?latex=%28ax%2Bby%2Bc%3D0%29&bg=ffffff&fg=444444&s=-1" title="(ax+by+c=0)" /> is <img alt="(\frac { a{ x }_{ 1 }+b{ y }_{ 1 }+c }{ \sqrt { { a }^{ 2 }+{ b }^{ 2 } } })" class="latex" src="https://s0.wp.com/latex.php?latex=%28%5Cfrac+%7B+a%7B+x+%7D_%7B+1+%7D%2Bb%7B+y+%7D_%7B+1+%7D%2Bc+%7D%7B+%5Csqrt+%7B+%7B+a+%7D%5E%7B+2+%7D%2B%7B+b+%7D%5E%7B+2+%7D+%7D+%7D%29&bg=ffffff&fg=444444&s=-1" title="(\frac { a{ x }_{ 1 }+b{ y }_{ 1 }+c }{ \sqrt { { a }^{ 2 }+{ b }^{ 2 } } })" />. Now, for clockwise sorting we need to decide the left or right orientation of a point given the other point and a constant reference (in this case, the center). Therefore, it is required to define a custom compare function.</p>
<p>Given three points : <img alt="a, b, center" class="latex" src="https://s0.wp.com/latex.php?latex=a%2C+b%2C+center&bg=ffffff&fg=444444&s=-1" title="a, b, center" />, consider the line made by points <img alt="b" class="latex" src="https://s0.wp.com/latex.php?latex=b&bg=ffffff&fg=444444&s=-1" title="b" /> and <img alt="center" class="latex" src="https://s0.wp.com/latex.php?latex=center&bg=ffffff&fg=444444&s=-1" title="center" />. The point <img alt="a" class="latex" src="https://s0.wp.com/latex.php?latex=a&bg=ffffff&fg=444444&s=-1" title="a" /> will be on the left if the numerator of the above formula is negative and on the right if positive. The cases where we need not consider this formula are when we are sure of :</p>
<p>Case 1 > The center lies in between <img alt="a" class="latex" src="https://s0.wp.com/latex.php?latex=a&bg=ffffff&fg=444444&s=-1" title="a" /> and <img alt="b" class="latex" src="https://s0.wp.com/latex.php?latex=b&bg=ffffff&fg=444444&s=-1" title="b" /> (first check this by comparing x-coordinates).</p>
<p>Case 2 > If all the points are on a line parallel to the y-axis :<br />
Sub-Case 1 > If any point is above the center, that point is “lesser” than the other one.<br />
Sub-Case 2 > If both are below the center, the point closer to the center is “lesser” than the<br />
other one.</p>
<p>Case 3 > This is when Case 1, Case 2 and the standard check all fail.<br />
This can only happen if <img alt="a" class="latex" src="https://s0.wp.com/latex.php?latex=a&bg=ffffff&fg=444444&s=-1" title="a" />, <img alt="b" class="latex" src="https://s0.wp.com/latex.php?latex=b&bg=ffffff&fg=444444&s=-1" title="b" /> and <img alt="center" class="latex" src="https://s0.wp.com/latex.php?latex=center&bg=ffffff&fg=444444&s=-1" title="center" /> are all collinear points and <img alt="a" class="latex" src="https://s0.wp.com/latex.php?latex=a&bg=ffffff&fg=444444&s=-1" title="a" />, <img alt="b" class="latex" src="https://s0.wp.com/latex.php?latex=b&bg=ffffff&fg=444444&s=-1" title="b" /> both lie on the same side of the line with respect to the center. Then, the one farthest from the center is “lesser” than the other one.</p>
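<p>A sketch of that comparator in Python, following the StackOverflow answer linked above (the function name and tuple-based points are my own choices, not the pull request’s actual code):</p>

```python
from functools import cmp_to_key

def clockwise_cmp(a, b, center):
    """Return -1 if a precedes b going clockwise around center, else 1."""
    ax, ay = a[0] - center[0], a[1] - center[1]
    bx, by = b[0] - center[0], b[1] - center[1]
    # Points in the right half-plane come before points in the left half-plane.
    if ax >= 0 and bx < 0:
        return -1
    if ax < 0 and bx >= 0:
        return 1
    if ax == 0 and bx == 0:  # both on the vertical line through the center
        if ay >= 0 or by >= 0:
            return -1 if ay > by else 1
        return -1 if by > ay else 1
    # Sign of the cross product gives the left/right orientation
    # (the numerator test above, with no arctan, division or sqrt).
    det = ax * by - bx * ay
    if det < 0:
        return -1
    if det > 0:
        return 1
    # Collinear with the center (Case 3): the farther point is "lesser".
    return -1 if ax * ax + ay * ay > bx * bx + by * by else 1

pts = [(-1, 1), (1, -1), (1, 1), (-1, -1)]
ordered = sorted(pts, key=cmp_to_key(lambda a, b: clockwise_cmp(a, b, (0, 0))))
print(ordered)  # [(1, 1), (1, -1), (-1, -1), (-1, 1)]
```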
<p>Ondrej and Prof. Sukumar have looked at the notebook but will discuss it in detail further and then inform me about the final 2D API.</p>
<p>Till then, I’ll work on the existing limitations.</p>
https://parsoyaarihant.github.io/blog/gsoc/2017/06/09/Week1-ReportArihant Parsoya (parsoyaarihant)Arihant Parsoya (parsoyaarihant): GSoC Week1 ReportFri, 09 Jun 2017 06:30:00 GMT
https://parsoyaarihant.github.io/blog/gsoc/2017/06/09/Week1-Report.html
<p>Francesco was skeptical about whether MatchPy could support all the Rubi rules. Hence we (Abdullah and I) are trying to implement the first set of rules (Algebraic) in MatchPy as soon as possible. The major things involved in accomplishing this are:</p>
<ul>
<li><strong>[Completed]</strong> Complete writing the parser for Rubi rules from <code class="highlighter-rouge">DownValues[]</code> generated from Mathematica.</li>
<li><strong>[Partially Completed]</strong> Complete basic framework for MatchPy to support Rubi rules.</li>
<li><strong>[Completed]</strong> Parse Rubi tests.</li>
<li><strong>[Incomplete]</strong> Add utility functions for Rubi in SymPy syntax.</li>
</ul>
<h4 id="generation-of-new-patterns-from-optional-arguments">Generation of new patterns from Optional Arguments</h4>
<p>Mathematica supports <a href="https://reference.wolfram.com/language/tutorial/OptionalAndDefaultArguments.html">optional arguments</a> for Wild symbols. Some common functions (such as <code class="highlighter-rouge">Mul</code>, <code class="highlighter-rouge">Add</code>, and <code class="highlighter-rouge">Pow</code>) of Mathematica have built-in default values for their arguments.</p>
<p>MatchPy does not support optional arguments to its <code class="highlighter-rouge">Wildcards</code>. So, Manuel Krebber suggested adding more rules for each optional argument that exists in the pattern. For example:</p>
<pre><code class="language-Mathematica">Int[x_^m_.,x_Symbol] :=
x^(m+1)/(m+1) /;
FreeQ[m,x] && NonzeroQ[m+1]
</code></pre>
<p>In the above rule, the default value of <code class="highlighter-rouge">m_</code> is <code class="highlighter-rouge">1</code>. So I implemented these rules in MatchPy:</p>
<pre><code class="language-Mathematica">Int[x_^m_.,x_Symbol] :=
x^(m+1)/(m+1) /;
FreeQ[m,x] && NonzeroQ[m+1]
(* substituting m = 1*)
Int[x_,x_Symbol] :=
x^(2)/(2) /;
FreeQ[2,x] && NonzeroQ[2]
</code></pre>
<p>I have used a <a href="https://stackoverflow.com/questions/18035595/powersets-in-python-using-itertools">powerset</a> to generate all combinations of default values when generating the patterns. The code for the parser can be found <a href="https://github.com/parsoyaarihant/Rubi-Parse">here</a>.</p>
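<p>A minimal sketch of the powerset trick (the wildcard names and default values below are illustrative, not taken from the actual parser):</p>

```python
from itertools import chain, combinations

def powerset(iterable):
    """powerset(['m', 'n']) -> (), ('m',), ('n',), ('m', 'n')"""
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Optional wildcards and their built-in defaults, e.g. m_. defaults to 1.
defaults = {'m': 1, 'n': 1}

# One extra pattern per subset of optional arguments replaced by its
# default value: 2**len(defaults) patterns in total.
substitutions = [{k: defaults[k] for k in subset} for subset in powerset(defaults)]
print(substitutions)  # [{}, {'m': 1}, {'n': 1}, {'m': 1, 'n': 1}]
```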
<h4 id="utility-functions">Utility functions</h4>
<p>There are many utility functions for Rubi written in Mathematica. We are currently focusing on implementing the functions that are being used in the Algebraic rules. As soon as we complete our implementation (hopefully by this weekend), we can start running the test suite for Rubi.</p>
<p>I will work on implementing utility functions along with Abdullah in the coming days. I will keep testing the module as we implement utility functions and add more rules to our matcher.</p>http://nesar2017.wordpress.com/?p=49Abdullah Javed Nesar (Abdullahjavednesar)Abdullah Javed Nesar (Abdullahjavednesar): Week 2 BeginsThu, 08 Jun 2017 18:40:20 GMT
https://nesar2017.wordpress.com/2017/06/08/week-2-begins/
<p>Hello all!</p>
<p>The first week comes to an end, and we (Arihant and I) have partially implemented Algebraic functions\ Linear products.</p>
<p><strong>THE PROGRESS</strong></p>
<p>Most of my task was to translate Algebraic Integration tests from <a href="http://www.apmaths.uwo.ca/~arich/IntegrationProblems/MapleSyntaxFiles/MapleSyntaxFiles.html">Maple Syntax</a> to Python, write Utility Functions, and write tests for the Utility Functions. I have already implemented <a href="http://www.apmaths.uwo.ca/~arich/IntegrationProblems/MapleSyntaxFiles/1%20Algebraic%20functions/1%20Linear%20products/1.2%20(a+b%20x)%5Em%20(c+d%20x)%5En.txt">1 Algebraic functions\1 Linear products\1.2 (a+b x)**m (c+d x)**n</a>, <a href="https://github.com/parsoyaarihant/sympy/blob/rubi4/sympy/rubi/tests/test_all_algebriac.py">here</a>. Test sets for <a href="http://www.apmaths.uwo.ca/~arich/IntegrationProblems/MapleSyntaxFiles/1%20Algebraic%20functions/1%20Linear%20products/1.3%20(a+b%20x)%5Em%20(c+d%20x)%5En%20(e+f%20x)%5Ep.txt">1 Algebraic functions\1 Linear products\1.3 (a+b x)**m (c+d x)**n (e+f x)**p</a> and <a href="http://www.apmaths.uwo.ca/~arich/IntegrationProblems/MapleSyntaxFiles/1%20Algebraic%20functions/1%20Linear%20products/1.4%20(a+b%20x)%5Em%20(c+d%20x)%5En%20(e+f%20x)%5Ep%20(g+h%20x)%5Eq.txt">1 Algebraic functions\1 Linear products\1.4 (a+b x)^m (c+d x)^n (e+f x)^p (g+h x)^q</a> are almost ready <a href="https://github.com/parsoyaarihant/sympy/pulls">here</a>, along with most of the Utility Functions we require so far; after this we will have covered the <span style="color: #808080;">Algebraic\ Linear products </span>portion.</p>
<p>Next, what’s delaying our progress is the Utility Functions. I have been taking help from <a href="http://www.apmaths.uwo.ca/~arich/IntegrationRules/PortableDocumentFiles/Integration%20utility%20functions.pdf">this pdf</a> on Integration Utility Functions and looking for their definitions on the Mathematica website, but the major problem is that the definitions provided are either not very clear or missing altogether. Meanwhile, Arihant was implementing Rubi rules, default values for variables, and working on constraints.</p>https://shikharj.github.io//2017/06/05/GSoC-Progress-Week-1Shikhar Jaiswal (ShikharJ)Shikhar Jaiswal (ShikharJ): GSoC Progress - Week 1Mon, 05 Jun 2017 00:00:00 GMT
https://shikharj.github.io//2017/06/05/GSoC-Progress-Week-1/
<p>Ahoy there! This post contains my first GSoC progress report.</p>
<h2 id="report">Report</h2>
<h3 id="symengine">SymEngine</h3>
<p>My previous PR (<a href="https://github.com/symengine/symengine/pull/1276">#1276</a>) on <code class="highlighter-rouge">Relationals</code> was reviewed and merged in. I also worked on introducing additional support for them. The PRs <a href="https://github.com/symengine/symengine/pull/1279">#1279</a> and <a href="https://github.com/symengine/symengine/pull/1280">#1280</a> were also reviewed and merged, leaving only <a href="https://github.com/symengine/symengine/pull/1282">LLVM support</a> as a work in progress.</p>
<p>I also noticed that one of the pending requests for the <code class="highlighter-rouge">0.3.0</code> milestone for <code class="highlighter-rouge">SymEngine.py</code> was the implementation of vector-specific methods such as <code class="highlighter-rouge">dot()</code> and <code class="highlighter-rouge">cross()</code> in <code class="highlighter-rouge">SymEngine</code>. The work is done <a href="https://github.com/symengine/symengine/pull/1286">here</a> and is mostly complete.</p>
<p>Apart from this, I started the planned implementation of <code class="highlighter-rouge">SymPy</code> classes by implementing the <code class="highlighter-rouge">Dummy</code> class in <code class="highlighter-rouge">SymEngine</code> <a href="https://github.com/symengine/symengine/pull/1284">here</a>.</p>
<p>Most of the above mentioned pending work should be ready to merge within a couple of days.</p>
<h3 id="sympy">SymPy</h3>
<p>Continuing my work on <code class="highlighter-rouge">sympy/physics</code>, I pushed in <a href="https://github.com/sympy/sympy/pull/12703">#12703</a>, covering the stand-alone files in the <code class="highlighter-rouge">physics</code> module, and <a href="https://github.com/symengine/symengine/pull/12700">#12700</a>, which is a minor addition to the work done in <code class="highlighter-rouge">physics/mechanics</code>.</p>
<h3 id="symenginepy">SymEngine.py</h3>
<p>Isuru pointed out some inconsistencies in the existing code for <code class="highlighter-rouge">ImmutableMatrix</code> class, which needed to be fixed for <code class="highlighter-rouge">0.3.0</code> milestone. The code was fixed through the PR <a href="https://github.com/symengine/symengine.py/pull/148">#148</a>.</p>
<p>We’ll probably have a <code class="highlighter-rouge">SymEngine.py</code> release next week, after which I plan to port over pre-existing functionalities in <code class="highlighter-rouge">SymEngine.py</code> to <code class="highlighter-rouge">SymPy</code>’s left-over modules.</p>
<p>That’s all for now.</p>
<p><strong>Adiós</strong></p>http://ranjithkumar007.github.io/2017/06/05/Improve Sets Module - The BeginningRanjith Kumar (ranjithkumar007)Ranjith Kumar (ranjithkumar007): Improve Sets Module - The BeginningMon, 05 Jun 2017 00:00:00 GMT
http://ranjithkumar007.github.io/2017/06/05/Improve-Sets-Module-The-Beginning/
<p>As discussed in the previous blog post, my task for this week and the next is to improve the Sets module.
I started off by implementing the class for <code class="highlighter-rouge">Complement</code> and the functions <code class="highlighter-rouge">set_complement</code> and <code class="highlighter-rouge">set_intersection</code>.
This work is done in <a href="https://github.com/symengine/symengine/pull/1281">#1281</a>.</p>
<p><code class="highlighter-rouge">set_intersection</code> :</p>
<div class="highlighter-rouge"><pre class="highlight"><code>RCP<const Set> set_intersection(const set_set &in);
</code></pre>
</div>
<p><code class="highlighter-rouge">set_intersection</code> tries to simplify its input (a set of sets) by applying various rules:</p>
<ul>
<li>trivial rules, e.g. when one of the inputs is <code class="highlighter-rouge">emptyset()</code></li>
<li>handles finite sets by checking every element of each finite set against all the other input sets.</li>
<li>If any of the sets is a Union, then it tries to return a Union of Intersections.</li>
<li>If any of the sets is a Complement, then this returns a simplified <code class="highlighter-rouge">Complement</code>.</li>
<li>pair-wise rules check every pair of sets and try to merge them into one.</li>
</ul>
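<p>The same simplification rules can be illustrated with SymPy's existing sets module (shown purely for intuition; the PR implements the analogous logic in SymEngine's C++ core):</p>

```python
from sympy import FiniteSet, Intersection, Interval, S

# Trivial rule: anything intersected with the empty set is empty.
assert Intersection(Interval(0, 5), S.EmptySet) == S.EmptySet

# Finite-set rule: keep only the elements contained in every other set.
assert Intersection(FiniteSet(1, 2, 7), Interval(0, 3)) == FiniteSet(1, 2)

# Pair-wise rule: two overlapping intervals merge into one.
assert Intersection(Interval(0, 4), Interval(2, 6)) == Interval(2, 4)
```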
<p><code class="highlighter-rouge">set_complement</code> :</p>
<div class="highlighter-rouge"><pre class="highlight"><code>RCP<const Set> set_complement(const RCP<const Set> &universe, const RCP<const Set> &container);
</code></pre>
</div>
<p>For this function, I had to implement the virtual function <code class="highlighter-rouge">set_complement</code> for all the existing child classes of <code class="highlighter-rouge">Set</code>;
this free function simply calls the container’s <code class="highlighter-rouge">set_complement()</code> with the given universe.</p>
<p>Details of <code class="highlighter-rouge">Complement</code> class :</p>
<ul>
<li>similar to other classes in sets module, the class prototype for <code class="highlighter-rouge">Complement</code> is
<div class="highlighter-rouge"><pre class="highlight"><code>class Complement : public Set
</code></pre>
</div>
</li>
<li>It stores two sets, the universe and the container; Complement(a, b) represents a - b, where a is the universe and b is the container.</li>
</ul>
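<p>SymPy's own <code class="highlighter-rouge">Complement</code> class follows the same a - b convention, which makes for a quick sanity check (illustrative only; the PR is the SymEngine C++ counterpart):</p>

```python
from sympy import Complement, FiniteSet, Interval

# Complement(a, b) represents a - b: the universe minus the container.
assert Complement(FiniteSet(1, 2, 3), FiniteSet(2)) == FiniteSet(1, 3)

# The universe need not be finite: removing a point from an interval
# leaves a union of two half-open intervals.
result = Complement(Interval(0, 4), FiniteSet(2))
assert 2 not in result and 1 in result and 3 in result
```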
<p>Apart from this, I started maintaining a <a href="https://github.com/symengine/symengine/wiki/GSoC-2017-Solvers-Progress-report">wiki page</a> for my GSoC progress reports, as suggested by <a href="https://github.com/aktech/">Amit Kumar</a> in the first weekly meeting. I will post the minutes of all meetings on this wiki page.</p>
<p>Next week, I will be working on <code class="highlighter-rouge">ImageSet</code> and <code class="highlighter-rouge">ConditionSet</code>.</p>
<p>That’s it for now. See you next time. Until then, Goodbye !</p>http://valglad.github.io/2017/06/05/smithValeriia Gladkova (valglad)Valeriia Gladkova (valglad): The Smith Normal FormMon, 05 Jun 2017 00:00:00 GMT
http://valglad.github.io/2017/06/05/smith/
<p>Last week I was working on implementing the Smith Normal Form for matrices over principal ideal domains. I’m still making corrections as the <a href="https://github.com/sympy/sympy/pull/12705">PR</a> is being reviewed. I used the standard algorithm: apply row and column operations that are invertible in the ring to make the matrix diagonal, ensuring that each diagonal entry divides all of the entries that come after it (this is described in more detail on <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm">Wikipedia</a>, for example). I ran into trouble when trying to determine the domain of the matrix entries if the user hadn’t explicitly specified one. Matrices in SymPy don’t have a <code class="highlighter-rouge">.domain</code> attribute or anything similar, and can contain objects of different types. So if I attempted to find a suitable principal ideal domain over which to consider all of them, the only way would be to try the currently implemented ones until something fits; that would have to be extended every time a new domain is added, and generally sounds tedious. I asked on the <a href="https://gitter.im/sympy/GroupTheory">Group Theory channel</a> if there was a better way, and that started a discussion about changing the <code class="highlighter-rouge">Matrix</code> class to have a <code class="highlighter-rouge">.domain</code> or a <code class="highlighter-rouge">.ring</code> attribute and have the entries checked at construction. In fact, this has been brought up by other people before as well.
Unfortunately, adding this attribute would require going over the matrix methods implemented so far and making sure they don’t assume anything that might not hold for general rings, especially non-commutative ones: the determinant in its traditional form wouldn’t even make sense there, though it turns out several generalisations of determinants to non-commutative rings exist, such as <a href="https://en.wikipedia.org/wiki/Quasideterminant">quasideterminants</a> and the <a href="https://en.wikipedia.org/wiki/Dieudonn%C3%A9_determinant">Dieudonné determinant</a>. This would probably take quite a while and is not directly related to my project. So in the end we decided to have the function work only for matrices to which a <code class="highlighter-rouge">.ring</code> attribute has been added manually by the user. For example,</p>
<div class="highlighter-rouge"><pre class="highlight"><code>>>> from sympy.polys.solvers import RawMatrix as Matrix
>>> from sympy.polys.domains import ZZ
>>> m = Matrix([[0, 1, 3], [2, 4, 1]])
>>> setattr(m, "ring", ZZ)
</code></pre>
</div>
<p>Hopefully, at some point in the future matrices will have this attribute by default.</p>
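<p>For intuition about what the diagonal entries are: the invariant factors can also be read off from determinantal divisors, where the k-th divisor is the gcd of all k × k minors. A minimal pure-Python sketch for the 2 × 2 integer case (this is not the PR's implementation, just an illustration):</p>

```python
from math import gcd

def smith_invariants_2x2(m):
    """Invariant factors of a 2x2 integer matrix with nonzero determinant.

    d1 is the gcd of all entries (the 1x1 minors), and d1*d2 equals
    |det| (the gcd of the single 2x2 minor); d1 always divides d2.
    """
    (a, b), (c, d) = m
    d1 = gcd(gcd(a, b), gcd(c, d))
    det = abs(a * d - b * c)
    return d1, det // d1

# Smith form of [[2, 4], [6, 8]] is diag(2, 4): 2 divides every entry,
# and 2 * 4 = |det| = 8.
assert smith_invariants_2x2([[2, 4], [6, 8]]) == (2, 4)
```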
<p>The Smith Normal Form was necessary to analyse the structure of the abelianisation of a group: abelian groups are modules over the integers, which form a PID, so the Smith Normal Form is applicable once the relators of the abelianisation are written in the form of a matrix. If 0 is one of the abelian invariants (the diagonal entries of the Smith Normal Form), then the abelianisation is infinite, and so must be the whole group. I’ve added this test to the <code class="highlighter-rouge">.order()</code> method for <code class="highlighter-rouge">FpGroup</code>s, and now it is able to terminate with the answer <code class="highlighter-rouge">oo</code> (infinity) for certain groups for which it wouldn’t terminate previously. I hope to extend this further with another way to evaluate the order: trying to find a finite-index cyclic subgroup (this can be achieved by generating a random word in the group and considering the coset table corresponding to it), and obtaining the order of the group by multiplying the index with the order of the cyclic subgroup. The latter could be infinite, in which case the whole group is too. Of course, this might not always terminate, but it will terminate in more cases than coset enumeration applied directly to the whole group. This is also what’s done in <a href="https://www.gap-system.org">GAP</a>.</p>
<p>I have begun working on it, but this week is the last before my exams, and I feel that I should spend more time revising. For this reason, I probably won’t be able to send a PR with this new test by the end of the week. However, it should be ready by the end of the next one, and considering that the only other thing I planned to do before the first evaluation period was to write (the main parts of) the <code class="highlighter-rouge">GroupHomomorphism</code> class, assuming that the things it depends on (e.g. rewriting systems) are already implemented, I believe I am going to stay on schedule.</p>http://bjodah.github.io/blog/posts/gsoc-week1.htmlBjörn Dahlgren (bjodah)Björn Dahlgren (bjodah): A summer of code and mathematicsSat, 03 Jun 2017 13:10:00 GMT
http://bjodah.github.io/blog/posts/gsoc-week1.html
<div><p>Google is generously funding work on selected <a class="reference external" href="https://en.wikipedia.org/wiki/Open-source_software">open source</a> projects each
year through the <a class="reference external" href="https://summerofcode.withgoogle.com/">Google Summer of Code</a> program. The program allows
under- and post-graduate students around the world to apply to
mentoring organizations for a scholarship to work on a project during
the summer. This spring I made the leap: I wrote a <a class="reference external" href="https://github.com/sympy/sympy/wiki/GSoC-2017-Application-Bj%C3%B6rn-Dahlgren:-Improved-code-generation-facilities">proposal</a>, it
got accepted, and I am now working full time for the duration of this
summer on one of these projects. In this blog post I'll give some
background and tell you about the first project week.</p>
<div class="section" id="background">
<h2>Background</h2>
<p>For a few years now I've been contributing code to the open-source project
<a class="reference external" href="http://www.sympy.org">SymPy</a>. SymPy is a so-called "computer algebra system",
which lets you manipulate mathematical expressions symbolically. I've used this
software package extensively in my own doctoral studies and it has been really useful.</p>
<p>My research involves formulating mathematical models to rationalize experimental observations,
fit parameters, or aid in the design of experiments. Traditionally one sits down and derives equations,
often using pen & paper; then one writes computer code which implements said model; and finally
one writes a paper with the same formulas as LaTeX code (or something similar).
Note how this procedure involves writing the same equations essentially three times:
during derivation, coding, and finally in the article.</p>
<p>By using SymPy I can, from a single source:</p>
<ol class="arabic simple">
<li>Do the derivations (fewer hard-to-find mistakes)</li>
<li>Generate the numerical code (a blazing fast computer program)</li>
<li>Output LaTeX formatted equations (pretty formulas for the report)</li>
</ol>
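<p>A minimal sketch of this single-source workflow (an illustration I am adding here, not code from the post):</p>

```python
from sympy import diff, lambdify, latex, symbols

x = symbols('x')
model = x**3 - 2*x

# 1. Derivation: differentiate symbolically instead of by hand.
rate = diff(model, x)

# 2. Numerical code: generate a fast callable from the same expression.
f = lambdify(x, rate)
assert f(2.0) == 10.0  # 3*2**2 - 2

# 3. Report: emit LaTeX for the very same formula.
assert latex(rate) == '3 x^{2} - 2'
```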
<p>A very attractive side-effect of this is that one truly gets reproducible research
(reproducibility is one of the pillars of science). Every step of the process is
self-documented, and because SymPy is free software, <em>anyone</em> can redo it. I
can't stress enough how big this truly is. It is also the main
reason why I haven't used proprietary software in place of SymPy:
even though such software may be considerably more feature-complete
than SymPy, any code I wrote for it would be inaccessible to people
without a license (possibly even including myself if I leave academia).</p>
<p>For this work-flow to work in practice the capabilities of the computer algebra system
need to be quite extensive, and it is here my current project with SymPy comes in.
I have had several ideas on how to improve capability number two
listed above: generating the numerical code, and now I get the chance
to realize some of them and work with the community to improve SymPy.</p>
</div>
<div class="section" id="first-week">
<h2>First week</h2>
<p>The majority of the first week has been spent on introducing type-awareness into
the code printers. SymPy has printer classes which specialize printing of e.g.
strings, C code, Fortran code etc. Up to now there has been no way to indicate
which precision the generated code should target. The default floating-point type
in Python is, for example, "double precision" (i.e. 64-bit binary IEEE 754 floating
point). This is also the default precision targeted by the code
printers.</p>
<p>However, there are occasions where one wants to use another
precision. For example, consumer-class graphics cards, which are
ubiquitous, often have excellent single-precision performance, but are
intentionally capped with respect to double-precision arithmetic (for
marketing reasons). At other times, one wants just a bit of extra
precision, and extended precision (80-bit floating point, usually the
data type of C's <tt class="docutils literal">long double</tt>) is just what's needed to compute
some values with the required precision. In C, the <a class="reference external" href="http://en.cppreference.com/w/c/numeric/math">corresponding math functions</a> have been standardized since
C99.</p>
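<p>In present-day SymPy this effort surfaced as, for example, the <tt class="docutils literal">type_aliases</tt> option of the C code printers. The sketch below uses the API as it eventually landed, which postdates this post, so treat it as an illustration rather than the state of the PR:</p>

```python
from sympy import ccode, sin, symbols
from sympy.codegen.ast import float32, real

x = symbols('x')

# Default: double precision, plain C99 math functions.
assert ccode(sin(x)) == 'sin(x)'

# Target single precision: the f-suffixed C99 functions are used instead.
assert 'sinf' in ccode(sin(x), type_aliases={real: float32})
```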
<p>I have started the work to enable the code printers to print this in a
<a class="reference external" href="https://github.com/sympy/sympy/pull/12693">pull-request</a> to the
SymPy source repository. I have also started experimenting with a
class representing arrays.</p>
<p>The first weekly meeting with <a class="reference external" href="http://asmeurer.com">Aaron Meurer</a> went well. We also briefly
discussed how to reach out to the SymPy community for wishes on what
code-generation functions to provide; I've set up a wiki page for it
under the SymPy project's wiki:</p>
<p><a class="reference external" href="https://github.com/sympy/sympy/wiki/codegen-gsoc17">https://github.com/sympy/sympy/wiki/codegen-gsoc17</a></p>
<p>I'll be sending out an email to the <a class="reference external" href="https://groups.google.com/forum/#!forum/sympy">mailing list for SymPy</a> asking for feedback.</p>
<p>We also discussed the upcoming SciPy 2017 conference where Aaron
Meurer and <a class="reference external" href="http://www.moorepants.info/">Jason Moore</a> will be giving
a tutorial on code-generation with SymPy. They've asked me to join
forces with them and I've happily accepted that offer and am looking
forward to working on the tutorial material and teaching fellow
developers and researchers in the scientific python community about
how to leverage SymPy for code generation.</p>
<p>Next blog post will most likely be a bit more technical, but I thought
it was important to give some background on what motivates this effort
and what the goal is.</p>
</div></div>http://arif7blog.wordpress.com/?p=62Arif Ahmed (ArifAhmed1995)Arif Ahmed (ArifAhmed1995): Week 1 Report(May 24 – June 2) : The 2D prototypeSat, 03 Jun 2017 00:39:01 GMT
https://arif7blog.wordpress.com/2017/06/03/week-1-reportmay-24-june-2-the-2d-prototype/
<p><span style="color: #000000;">As per the timeline, I spent the week writing a prototype for the 2D use case.</span> <a href="https://github.com/sympy/sympy/pull/12673/files" rel="noopener noreferrer" target="_blank">Here</a> <span style="color: #000000;">is the current status of the implementation.</span></p>
<p><span style="color: #000000;">At the time of writing this blog post, the prototype mostly works but fails for one of the test cases. I haven’t used a debugger on it yet but will get around to fixing it today. The main agenda for the next week should be to improve this prototype and make it more robust and scalable. </span><span style="color: #000000;">Making it robust would mean that the implementation need not require a naming convention for the axes and can handle more boundary cases.<br />
Scalability would mean implementing it in a way that works reasonably fast for large input data. It would also mean designing the functions in a use-case-independent way so that they can be re-used for the 3-polytopes. The simplest example would be the <span style="color: #3366ff;"><a href="https://github.com/ArifAhmed1995/sympy/blob/daa69f291fb0cd5b7ba0e26ed81aa9be05b46150/sympy/integrals/intpoly.py#L374" rel="noopener noreferrer" target="_blank">norm</a> <span style="color: #000000;">function, which works for both 2D and 3D points.</span></span></span><br />
<span style="color: #000000;">Currently, some parts of the implementation are hacky and depend upon denoting the axes by specific symbols namely</span> <span style="color: #993366;"><strong>x</strong></span> <span style="color: #000000;">and</span> <span style="color: #993366;"><strong>y<span style="color: #000000;">. </span></strong><span style="color: #000000;">That would be the main focus for the beginning of Week 2. Let’s take a look at the methods implemented and their respective shortcomings. </span><span style="color: #000000;">The methods currently implemented are :</span></span></p>
<p><span style="color: #000000;">1. <span style="color: #003366;">polytope_integrate</span> : This is the main function which calls integration_reduction which further calls almost everything else. Mathematically, this function is the first application of the Generalized Stokes Theorem. Integration over the 2-Polytope surface is related to integration over it’s facets (lines in this use case). Nothing much to change here.</span></p>
<p><span style="color: #000000;">2.<span style="color: #000080;"> integration_reduction</span> : This one is the second application of the Stokes Theorem. Integration over facets (lines) is related to evaluation of the function at the vertices. The implementation can be made more robust by avoiding accessing the values of the best_origin by the index and instead accessing it by key. One workaround would be to assign the Symbol ‘<span style="color: #993300;">x</span>‘ to the independent variable and ‘<span style="color: #993300;">y</span>‘ to the dependent one. In the code, all dictionary accesses will be done with these symbols. This offers scalability as well. An extra Symbol ‘<span style="color: #993300;">z</span>‘ will suffice to denote the third axis variable for the 3D use case.<br />
</span></p>
<p><span style="color: #000000;">3. <span style="color: #000080;">hyperplane_parameters</span> :  This function returns the values of the hyperplane parameters of which the facets are a part of. I can’t see improvements to be made with respect to the 2D use case, but that may change with the new API.</span></p>
<p><span style="color: #000000;">4. <span style="color: #000080;">best_origin</span> : This function returns a point on the line for which the vector inner product between the divergence of the homogeneous polynomial and that point yields a polynomial of least degree.</span> <span style="color: #000000;">This function is very much dependent on the naming scheme of the axes but this issue can be circumvented by </span><span style="color: #000000;">assigning symbols as explained above.</span></p>
<p><span style="color: #000000;">5. <span style="color: #000080;">decompose</span> : This function works perfectly for all test data. However, I’ll see if it can be made faster. Will be important for scaling up-to polynomials containing large number of terms.</span></p>
<p><span style="color: #000000;">6. <span style="color: #000080;">norm</span> : This is a really simple function. The only reason to change its code would be adding support for different representations of points.</span></p>
<p><span style="color: #000000;">7. <span style="color: #000080;">intersection, is_vertex</span> : Both of these methods are simple to write and don’t require any further changes for the 2D case (at least as far as I can see).</span></p>
<p><span style="color: #000000;">8. <span style="color: #000080;">plot_polytope, plot_polynomial</span> : These are simple plotting functions to help visualize the polytope and polynomial respectively. If extra features for the plots are required then suitable changes to code can be made.<br />
</span></p>
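<p>The dimension-independent <code class="highlighter-rouge">norm</code> helper mentioned above can be sketched in a few lines (a hypothetical re-creation for illustration, not the linked code):</p>

```python
from math import sqrt

def norm(point):
    """Euclidean norm of a point given as a tuple of coordinates.

    Dimension-independent: the same code serves 2D and 3D points,
    which is the kind of re-use the post aims for.
    """
    return sqrt(sum(c * c for c in point))

assert norm((3, 4)) == 5.0       # 2D point
assert norm((1, 2, 2)) == 3.0    # 3D point
```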
<p><span style="color: #000000;">After I get all the tests for the 2D prototype to pass, I’ll write a Jupyter notebook and add it to the examples/notebooks folder of SymPy. As recommended by Ondrej, it should contain examples along with some plots and description of whatever basic API exists now.<br />
The main focus for Week 2 should be : </span><br />
1 > Get the prototype to pass all test cases (should be completed really soon).<br />
2 > Make the notebook and discuss a better API for the 2D use case.<br />
3 > Implement the final 2D API and write any tests for it.<br />
4 > Discuss how to extend to the 3D use case (Implementation and API).</p>https://szymag.github.io/post/week-1/Szymon Mieszczak (szymag)Szymon Mieszczak (szymag): Week 1Fri, 02 Jun 2017 22:41:21 GMT
https://szymag.github.io/post/week-1/
<p>My first GSoC task was to create three classes: Curl, Divergence, and Gradient. They create objects that are unevaluated mathematical expressions. Sometimes it is better to work with such expressions, for example when we want to check some identity: we have to check whether it holds for every possible vector. There is still some work to do here, because in the next step we want to create abstract vector expressions. There is one open PR corresponding to the described task:https://parsoyaarihant.github.io/blog/gsoc/2017/06/02/Coding-period-startsArihant Parsoya (parsoyaarihant)Arihant Parsoya (parsoyaarihant): Coding period startsFri, 02 Jun 2017 06:30:00 GMT
https://parsoyaarihant.github.io/blog/gsoc/2017/06/02/Coding-period-starts.html
<p>The community bonding period is complete and the coding period started on 31st May.</p>
<p>Our original plan was to rewrite the pattern matcher for SymPy and generate a decision tree for the Rubi rules from the <code class="highlighter-rouge">Downvalues[]</code> <a href="https://raw.githubusercontent.com/Upabjojr/RUBI_integration_rules/master/RUBI_DownValues_FullForm.txt">generated</a> by Francesco.</p>
<p>Aaron gave us a link to <a href="https://arxiv.org/pdf/1705.00907.pdf">this</a> paper by Manuel Krebber. Pattern matching algorithms discussed in the paper are implemented in <a href="https://github.com/HPAC/matchpy">MatchPy</a> library.</p>
<h4 id="matchpy">MatchPy</h4>
<p>MatchPy uses <code class="highlighter-rouge">discrimination net</code>s to do many-to-one matching (i.e. matching one subject against multiple patterns). MatchPy generates its own discrimination net as we add patterns to its <code class="highlighter-rouge">ManyToOneReplacer</code>.</p>
<p>Discrimination nets can be more efficient than a decision tree, hence we decided to use MatchPy as the pattern matcher for Rubi. I wrote basic matching programs which implement a few Rubi rules using MatchPy.</p>
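<p>To see why a discrimination net helps, here is a toy pure-Python version (nothing like MatchPy's actual implementation; patterns are flat token tuples and <code class="highlighter-rouge">'?'</code> is a wildcard) in which patterns sharing a prefix are matched in a single pass over the subject:</p>

```python
def build_net(rules):
    """Build a trie over pattern tokens; shared prefixes are stored once."""
    net = {}
    for name, pattern in rules:
        node = net
        for token in pattern:
            node = node.setdefault(token, {})
        node['$match'] = name
    return net

def match_all(net, subject):
    """Return the names of every pattern matching the token sequence."""
    nodes = [net]
    for token in subject:
        nxt = []
        for node in nodes:
            if token in node:
                nxt.append(node[token])
            if '?' in node:  # wildcard branch matches any token
                nxt.append(node['?'])
        nodes = nxt
    return sorted(n['$match'] for n in nodes if '$match' in n)

rules = [('power_rule', ('pow', 'x', '2')),
         ('general_rule', ('pow', 'x', '?'))]
net = build_net(rules)
# One traversal finds every applicable rule at once.
assert match_all(net, ('pow', 'x', '2')) == ['general_rule', 'power_rule']
```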
<p>I found the following issues with MatchPy:</p>
<ul>
<li>
<p>MatchPy cannot be directly added to SymPy because it is written in Python 3.6 (whereas SymPy also supports Python 2).</p>
</li>
<li>
<p>It lacks mathematical operations on its <code class="highlighter-rouge">Symbols</code>, which makes it difficult to implement Rubi constraints. A workaround for this issue is to <code class="highlighter-rouge">sympify</code> the expression and do the calculations in SymPy.</p>
</li>
<li>
<p>MatchPy uses external libraries such as <code class="highlighter-rouge">Multiset</code>, <code class="highlighter-rouge">enum</code> and <code class="highlighter-rouge">typing</code>. SymPy does not encourage using external libraries in its code, so those modules would need to be reimplemented in SymPy if we are going to import the MatchPy code directly.</p>
</li>
</ul>
<p>Re-implementing the MatchPy algorithms in SymPy could be a very challenging and time-consuming task, as I am not very familiar with the algorithms used in MatchPy.</p>
<p>I used <code class="highlighter-rouge">3to2</code> to convert the MatchPy code to Python 2 syntax. The majority of the tests pass under Python 2, and I am currently trying to get the rest of the code working there as well.</p>
<p>In the coming week I will import the MatchPy code into SymPy directly. If there are setbacks with this approach, I will reimplement the MatchPy algorithms in SymPy.</p>