

I have almost completed implementing all the utility functions; only a few tests and a few left-out functions remain. Until now we were using Rt(), which could only handle numeric values; the new definition can handle even expressions. This week I was engaged with some pending academic work, so I was a bit inactive. I'll be committing my changes soon. These are a few tests for Rt():
def test_Rt():
    assert Rt(x**2, 2) == x
    assert Rt(S(2 + 3*I), S(8)) == (2 + 3*I)**(S(1)/8)
    assert Rt(x**2 + 4 + 4*x, 2) == x + 2
    assert Rt(S(8), S(3)) == 2
    assert Rt(S(16807), S(5)) == 7
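As a rough illustration of what an expression-aware root helper does, here is a minimal sketch in plain SymPy (this is not the Rubi implementation; the name rt and the simplify-based approach are mine):

```python
from sympy import S, Symbol, root, simplify

x = Symbol('x', positive=True)

def rt(u, n):
    # Hypothetical stand-in for Rubi's Rt(): take the principal n-th root
    # and simplify, so perfect powers collapse to their base.
    return simplify(root(u, n))

assert rt(x**2, 2) == x      # works on expressions, not just numbers
assert rt(S(8), 3) == 2
assert rt(S(16807), 5) == 7  # 7**5 == 16807
```

The real Rt() additionally recognises perfect powers of polynomials such as x**2 + 4*x + 4, which plain root/simplify does not always handle.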


So, I wrapped up PR #12931 on handling implicit intersections. I have to clean it up a bit more, as suggested by Christopher. I plan to implement the Bentley-Ottmann algorithm, and to handle the case of intersecting polygons with more than two sides passing through the same point, in separate PRs.
Now, about the 3D case: I initially thought about writing a 3Polytope class first, but I now realise there is no such need. The API can be kept simpler. I'll update this post with a PR link for the 3D case quite soon.
During week 8 I added several methods which calculate transformation equations for specific situations. We've finished the step where transformation equations are obtained for rotation, translation, and changing the type of coordinate system. All these transformations are handled in the vector module. We've also prepared a method for composing any two kinds of transformations, because a coordinate system can be rotated and translated at the same time. My current task is to use the already-added methods in the CoordSys3D constructor.
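A minimal sketch of what composing a rotation with a translation looks like, in plain SymPy matrices rather than the actual vector-module code (all names here are illustrative):

```python
from sympy import Matrix, symbols, cos, sin

x, y, theta, dx, dy = symbols('x y theta dx dy')

# Transformation equations written as expression vectors in (x, y).
rotation = Matrix([x*cos(theta) - y*sin(theta),
                   x*sin(theta) + y*cos(theta)])
translation = Matrix([x + dx, y + dy])

# Composition: substitute the first map into the second (simultaneous=True
# so the substitutions don't interfere with each other).
composed = translation.subs({x: rotation[0], y: rotation[1]},
                            simultaneous=True)
```

The composed vector is the transformation "rotate, then translate", which is the kind of combined transformation the constructor needs to accept.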


My first two months under GSoC have finally ended, and we are right on track towards the completion of the project. I finished up my log of SymPy's directories and have analysed everything that needs to be further implemented or improved upon.
I pushed in #1313, implementing row_insert and column_insert functions in the DenseMatrix class and row_del and col_del functions in the C wrappers, while making row_join, col_join, row_del and col_del inplace member functions.
After testing all the viable directories that could use SymEngine as a backend, only the LieAlgebras module worked completely out of the box, with no major issues. As such, we were able to achieve the same through #13023, which now marks the beginning of Phase III of my proposal. I have also pushed the work done on other directories as separate branches in my local repository, to be finished gradually.
I worked on implementing the Singleton pattern in the wrappers through #178, though the work is currently in progress. More on this next week.
That’s all I have for now.
Bidāẏa


This is the blog on progress for weeks 7 and 8. It revolves around trigonometric solvers. As a side note, my progress rate has dropped significantly since the commencement of classes, due to a tight schedule and a heavy load of assignments. I am working all out on weekends to compensate for the lost time.
As I mentioned in my previous blog, I continued my work on solving simple trigonometric equations. As isuruf suggested, I broke the PR #1305 into several small parts.
The first one, #1310, is on as_real_imag. Several changes were made, like adding support for TrigFunctions and HyperbolicFunctions.
The next PR, #1309, is on rewrite_as_exp. For this, I implemented a base transform visitor as follows:
class TransformVisitor : public BaseVisitor<TransformVisitor>
{};
and rewrite_as_exp is implemented with a class overriding TransformVisitor's methods:
class RewriteAsExp : public BaseVisitor<RewriteAsExp, TransformVisitor>
{};
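For reference, the transformation rewrite_as_exp performs is the familiar exponential form of the trigonometric functions. In SymPy (rather than SymEngine's C++) the equivalent identities can be checked directly:

```python
from sympy import sin, cos, exp, I, symbols, simplify

x = symbols('x')

# sin and cos in terms of complex exponentials: the identities that
# rewrite_as_exp applies node by node via the visitor.
assert simplify(sin(x).rewrite(exp) - (exp(I*x) - exp(-I*x))/(2*I)) == 0
assert simplify(cos(x).rewrite(exp) - (exp(I*x) + exp(-I*x))/2) == 0
```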
Additionally, going by srajan's suggestion, I introduced a new base class TrigBase for TrigFunctions and their inverses, and similarly for HyperbolicFunctions.
I also made some fixes in ImageSet here.
Once this PR gets in, I will push the remaining bits and pieces from #1305.
The rewriting PR hasn't been merged yet. At the beginning of last week, there was still some work to be done on it. For example, the RewritingSystem's check_confluence() method (which, unsurprisingly, checks if the system is confluent) would add new rewriting rules while it runs: definitely not something it should do. Another addition, suggested by my mentor, was the attribute _max_exceeded: this is True when the Knuth-Bendix completion method has already been attempted but the maximum number of rules allowed to be added to the system was exceeded. Trying again would be a waste of time. However, if the maximum number of rules is increased, _max_exceeded is reset so that the completion method can be run again.
Then I worked on the multistep version of the presentation algorithm (which was possible because the homomorphism PR was merged by then). Once I wrote it up, made sure it worked and started testing it against the single-step one, I noticed that the multistep version ran several times as fast for several of the groups I was using, and yet was suddenly much slower for a group of higher order. This was unexpected and I spent a while trying to work out why that would be. The reason was this: the function uses coset_enumeration_c for filling in a certain coset table. coset_enumeration_c stores the deductions it makes while filling it in so that it can process them later. However, if there are too many deductions, it's not practical to go through all of them, so the algorithm uses the method look_ahead which allows it to continue without looking at the deduction stack, which is emptied. The critical size of the deduction stack was 500 by default. For large groups, the single-step version would greatly exceed that most of the time, so look_ahead was run instead, speeding it up considerably, while the multistep version would be working with a smaller coset table and the number of deductions would be large but not over 500, which made it slow. So it looked like processing close to 500 deductions was inefficient, and I reduced the number to 100 to see what happens. What happened was that both versions got faster, but now the multistep version was consistently faster than the single-step one. I ran the coset table tests and they seemed to be fine too, so the reduction in the maximum number of deductions didn't seem to affect anything negatively. I pushed the multistep version into the presentation PR, but that hasn't started being reviewed because another problem came up.
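The stack-versus-look_ahead policy described above can be sketched as a toy model (the names here are invented, not the actual coset-enumeration code):

```python
MAX_DEDUCTIONS = 100  # reduced from the original default of 500

def handle_deductions(deductions, process_one, look_ahead):
    """Process the deduction stack, or fall back to a full-table scan."""
    if len(deductions) > MAX_DEDUCTIONS:
        # Too many deductions to process individually: empty the stack
        # and scan the whole coset table instead.
        deductions.clear()
        look_ahead()
    else:
        while deductions:
            process_one(deductions.pop())

# Toy usage: a small stack is processed item by item,
# a large one triggers the look-ahead path.
processed, scans = [], []
handle_deductions(list(range(5)), processed.append, lambda: scans.append(1))
handle_deductions(list(range(200)), processed.append, lambda: scans.append(1))
```

This makes the observed behaviour easy to see: whichever version of the algorithm keeps its stack just under the threshold pays the full per-deduction cost.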
The homomorphism tests from the merged PR would occasionally time out, so that needed investigating. It turned out to be because the __contains__ method of the FpSubgroup class couldn't handle group elements in conjugate form, i.e. something of the form r**-1*w*r for some words r, w. It would get into an infinite loop, and the tests would only occasionally time out because the method is used during kernel computation, which is randomised. So a couple of days were spent thinking about the best way to eliminate the problem, whether this sort of problem would only be caused by conjugate elements, and finally expanding the code accordingly. You can follow what I did in this PR. Though this is still being discussed and it's unclear whether some other solution will be called for.
So because of all this, I couldn't work on the other homomorphism cases (though I did sketch some things that I can use once the dependent PRs are merged). If the FpSubgroup issue is resolved soon and the rewriting PR is merged, I will implement homomorphisms to FpGroups. The presentation PR will probably need more reviewing, and I can't implement PermutationGroup domains without it. I'm considering looking into an alternative method of doing homomorphisms specifically for the PermutationGroup-to-PermutationGroup case, because I've seen a section on it in the Handbook. Or I could start working on Sylow subgroups, which is another thing I was thinking of doing. These things shouldn't depend on any of the unmerged PRs, so it would be a good use of time.


Francesco opened the PR to merge the Rubi module into SymPy. I made the code compatible with python<3.6 to pass the Travis tests. I used the @doctest_depends_on decorator to skip the doctests on python<3.6 and defined dummy classes to pass the tests. We are keeping the PR open for now, since it allows me to run tests on Travis, and I expect significant changes in the code after code-generation functionality is added to MatchPy.
I have updated the parser to accommodate Wildcard.optional(). Previously, I sympified the expression and substituted the optional values to create new patterns. The current parser does not use SymPy and is very fast compared to the previous version.
I previously ran the test suite for algebraic rules 1.2, and ~1250/1500 tests were passing. The tests are really slow since they use the simplify and expand functions. I tried using numerical evaluation at random points and it turned out to be faster than evaluating the tests symbolically. The test suites are grouped by the type of expression:
# Integrands of the form a
[S(0), x, S(1), S(0)],
[S(1), x, S(1), x],
[S(5), x, S(1), S(5)*x],
[-S(2), x, S(1), -S(2)*x],
# Integrands of the form x**m
[x**m, x, S(1), x**(S(1) + m)/(S(1) + m)],
[x**S(100), x, S(1), S(1)/S(101)*x**S(101)],
[x**S(3), x, S(1), S(1)/S(4)*x**S(4)],
# Integrands of the form x**(m/S(2))
[x**(S(5)/S(2)), x, S(1), S(2)/S(7)*x**(S(7)/S(2))],
[x**(S(3)/S(2)), x, S(1), S(2)/S(5)*x**(S(5)/S(2))],
[x**(S(1)/S(2)), x, S(1), S(2)/S(3)*x**(S(3)/S(2))],
# Integrands of the form x**(m/S(3))
[x**(S(5)/S(3)), x, S(1), S(3)/S(8)*x**(S(8)/S(3))],
[x**(S(4)/S(3)), x, S(1), S(3)/S(7)*x**(S(7)/S(3))],
[x**(S(2)/S(3)), x, S(1), S(3)/S(5)*x**(S(5)/S(3))],
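Each row above has the form [integrand, variable, a third field I'll leave aside here, expected antiderivative]. A check by differentiation, the strategy mentioned for comparing outputs, can be sketched as:

```python
from sympy import S, diff, simplify, symbols

x, m = symbols('x m')

# A few rows from the suite above: the last entry should be an
# antiderivative of the first with respect to the second.
tests = [
    [S(5), x, S(1), S(5)*x],
    [x**m, x, S(1), x**(S(1) + m)/(S(1) + m)],
    [x**S(3), x, S(1), S(1)/S(4)*x**S(4)],
    [x**(S(3)/S(2)), x, S(1), S(2)/S(5)*x**(S(5)/S(2))],
]

# Differentiate the expected result and compare against the integrand.
for row in tests:
    integrand, var, expected = row[0], row[1], row[3]
    assert simplify(diff(expected, var) - integrand) == 0
```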
Adding thousands of tests could slow the test suite down. So far my plan is to add 1-2 tests from each expression type:
[S(0), x, S(1), S(0)],
[x**m, x, S(1), x**(S(1) + m)/(S(1) + m)],
[x**S(100), x, S(1), S(1)/S(101)*x**S(101)],
[x**(S(5)/S(2)), x, S(1), S(2)/S(7)*x**(S(7)/S(2))],
While inspecting the failed tests, I figured out that the failing tests depend on rules from other modules (such as binomials). Hence, they cannot be integrated currently.
I have added all algebraic rules to the matcher (~1700). But it led to a significant decrease in speed. In some cases, the matcher takes so much time that I have to stop the program itself. I have opened an issue for this, and I hope it gets fixed soon.
The algebraic rules are 1/5 of all the rules, but I cannot really test all of them until the speed issue gets fixed. I will focus on the following this week:
converting Integer in rubi.symbols to MatchPy's Integer.


I've been working on the last set of utility functions, and after that I'll complete some left-over functions yet to be implemented. Along with that, we have been fixing the bugs we noticed.
TrigReduce() is a built-in function in Mathematica; the following is its code in Python:
def TrigReduce(i):
    if SumQ(i):
        t = 0
        for k in i.args:
            t += TrigReduce(k)
        return t
    if ProductQ(i):
        if any(PowerQ(k) for k in i.args):
            if (i.rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin)).has(I):
                return i.rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin).simplify()
            else:
                return i.rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin)
        else:
            a = Wild('a')
            b = Wild('b')
            v = Wild('v')
            Match = i.match(v*sin(a)*cos(b))
            if Match:
                a = Match[a]
                b = Match[b]
                v = Match[v]
                # 2 sin A cos B = sin(A + B) + sin(A - B)
                return i.subs(v*sin(a)*cos(b), v*S(1)/2*(sin(a + b) + sin(a - b)))
            Match = i.match(v*sin(a)*sin(b))
            if Match:
                a = Match[a]
                b = Match[b]
                v = Match[v]
                # 2 sin A sin B = cos(A - B) - cos(A + B)
                return i.subs(v*sin(a)*sin(b), v*S(1)/2*(cos(a - b) - cos(a + b)))
            Match = i.match(v*cos(a)*cos(b))
            if Match:
                a = Match[a]
                b = Match[b]
                v = Match[v]
                # 2 cos A cos B = cos(A + B) + cos(A - B)
                return i.subs(v*cos(a)*cos(b), v*S(1)/2*(cos(a + b) + cos(a - b)))
    if PowerQ(i):
        if i.has(sin):
            if (i.rewrite(sin, exp).expand().rewrite(exp, sin)).has(I):
                return i.rewrite(sin, exp).expand().rewrite(exp, sin).simplify()
            else:
                return i.rewrite(sin, exp).expand().rewrite(exp, sin)
        if i.has(cos):
            if (i.rewrite(cos, exp).expand().rewrite(exp, cos)).has(I):
                return i.rewrite(cos, exp).expand().rewrite(exp, cos).simplify()
            else:
                return i.rewrite(cos, exp).expand().rewrite(exp, cos)
    return i
Some tests for TrigReduce():
assert TrigReduce(cos(x)**2) == cos(2*x)/2 + 1/2
assert TrigReduce(cos(x)**2*sin(x)) == sin(x)/4 + sin(3*x)/4
assert TrigReduce(cos(x)**2 + sin(x)) == sin(x) + cos(2*x)/2 + 1/2
assert TrigReduce(cos(x)**2*sin(x)**5) == 5*sin(x)/64 + sin(3*x)/64 - 3*sin(5*x)/64 + sin(7*x)/64
assert TrigReduce(2*sin(x)*cos(x) + 2*cos(x)**2) == sin(2*x) + cos(2*x) + 1
I took some help from a reply on Stack Overflow, and maybe now we can answer that question much better.
My next week's target is to complete all the utility functions and fix as many bugs as possible.
We worked only on the parametric logarithmic derivative PR #12885 this week. So now I will get to a few mathematical points regarding the "parametric logarithmic derivative" problem (which I'll shortly write as the PLD problem).
The PLD problem is, for a given hyperexponential monomial theta over a differential field K and a given f, to decide whether there exist integers n, m with n != 0 and an element v != 0 satisfying the equation n*f = Dv/v + m*Dtheta/theta, and to find one such solution.
There are two versions of it, the first one being the heuristic version, which is quite simple (with available pseudocode) and is already implemented in SymPy; but since it doesn't handle all the cases, it becomes necessary to have the deterministic version as well. The deterministic version is used as a fallback, i.e. in case the heuristic version fails (raises NotImplementedError), which isn't very often.
We have already done the basic programming of it in SymPy, so before I say any more let's see an example.
A simple example of where the heuristic version fails is:
>>> fa, fd = Poly(1, x), Poly(1, x)
>>> wa, wd = Poly(1, x), Poly(1, x)
>>> DE = DifferentialExtension(extension={'D': [Poly(1, x, domain='ZZ'), Poly(t, t, domain='ZZ')], 'exts': [None, 'exp']})
>>> parametric_log_deriv_heu(fa, fd, wa, wd, DE)
None
# while the deterministic version runs fine
>>> parametric_log_deriv(fa, fd, wa, wd, DE)
[Matrix([[1, 1, 0]]), Matrix([[1, 0, 1]])] # don't worry about output type
Internals of deterministic version
In the deterministic version we use Risch's structure theorem (equation 1), where we solve for each of the unknowns.
This result of the structure theorem is quite useful; the same theorem can be used in the process of DifferentialExtension construction, as can be seen in section 5.2 of the book (Outline and Scope of the Integration Algorithm; search by section name if you have a different version of the book).
In the deterministic PLD we first obtain the parts depending on the 'log'/'exp' extensions respectively and store them in a list F (the order doesn't matter).
We reorganize this problem in a way somewhat similar to the limited_integrate problem, i.e. we solve the corresponding system and choose the solution satisfying the required side condition.
We do the same thing here in this problem, but that causes some trouble, as the element may not necessarily be contained in the extension, and currently we are working on solving this part of the problem. Kalevi suggests taking the unfinished DE object, so that the extension containing it could be built on top of that (no need to start from scratch).
I haven't mentioned much regarding the function that does the work of converting the relation in equation (1) over a field to one over the constants. It contains Poly manipulation methods to do that. It does mathematically the same thing as "obtaining a system with coefficients in C and the same constant solutions."
I'll complete parametric_log_deriv before the next week starts.
During week 7 I continued my work on transformation equations for rotation. One PR, #12960, which obtains transformation equations from a rotation matrix, was merged. I have also prepared methods for composing transformation equations, but I didn't push them; I still need to think about how to arrange the whole structure. My mentor suggested that we can just use transformation equations instead of a rotation matrix, because they are the more general concept. After that we could modify the constructor of the CoordSys3D class and pass it transformation equations instead of a rotation matrix.


My seventh report on my GSoC progress. It's been quite a journey so far, full of challenges and rewards. This week, apart from the work, I took some time to write a proposal for a full-fledged talk at the upcoming PyCon India 2017 conference, titled SymEngine: Leveraging The Power Of A Computer Algebra System (CAS) To Another, to be held during the first week of November 2017 in New Delhi. What's more exciting is that Sumith and Ranjith would be there as well, hopefully, if it gets selected.
I pushed in #1308, fixing a minor Travis irregularity.
I pushed in #176, a minor PR for skipping some of the tests where SymPy wasn't being used.
Most of my time this week was spent on finding inconsistencies between SymEngine and SymPy objects which arise while using SymEngine in SymPy. I'm currently maintaining a log of all the issues that are occurring and their possible solutions. As such, I have decided to make a small change in our strategy. For the coming week, I'll try and complete the issue log, and implement the possibly missing functionalities and attributes in SymEngine.py, after which a minor release of SymEngine and SymEngine.py would be made to make these changes available for SymPy.
Till Next Time!
Ciao


I had initially planned to begin implementing the 3D case this week. But I wanted closure for the 2D case. Currently there is no support for implicit intersections for polygons. One could make the failing tests pass by not raising an error when such a polygon is encountered, but obviously that wouldn’t be a good solution.
So, from the discussion in pull request #12931:
1. Fleury's algorithm is not required. A better approach to calculating the area would be to simply flip the sides after crossing an intersection point and come back to the initial direction of traversal once that point is encountered again. I have implemented this technique locally. It has worked fine for the following polygons but still seems to have some minor bugs, which I'll fix really soon. Once that's done I'll update the pull request.
>>> p1 = Polygon((0,3), (1,2), (2,0), (1,2), (0,3))
>>> p1.area
13/3
>>> p2 = Polygon((0,0), (1,0), (0,1), (1,1))
>>> p2.area
1/2
All of this was because of the material here.
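The direction-dependence that makes this tricky comes from the signed (shoelace) area; a minimal sketch of that formula (not SymPy's Polygon code):

```python
def signed_area(pts):
    # Shoelace formula: positive for counter-clockwise traversal,
    # negative for clockwise, which is why traversal direction must be
    # tracked when splitting a self-intersecting polygon at its crossings.
    s = 0
    for i in range(len(pts)):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % len(pts)]
        s += x1*y2 - x2*y1
    return s / 2

assert signed_area([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1.0   # CCW square
assert signed_area([(0, 0), (0, 1), (1, 1), (1, 0)]) == -1.0  # CW square
```

For a self-intersecting polygon like the bowtie above, summing the unsigned areas of the loops (which is what the side-flipping achieves) gives the expected 1/2 rather than the signed total.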
2. The intersections are currently being found by a slow technique which I designed. A much better one, the Bentley-Ottmann algorithm, exists for the same purpose. Once I get the area working for all the test cases on the codegolf link, I'll work on implementing it.
3. Christopher also suggested that we add a property called footprint_area, which would refer to the area that a polygon covers (i.e. the hull of the polygon), and another property called hull, which would refer to that very path. I was initially confusing this area with the area that depends on direction. Now I see that this footprint area isn't the one normally referred to when talking about the area of a polygon; it's actually the other one.


The work on finding finite presentations of permutation groups was somewhat delayed last week because I came across a bug in the reidemeister_presentation code while testing the one-step version of the presentation algorithm (which is essentially finished). It turned out to be caused by an unnecessary cyclic reduction on one line in the main elimination technique. See the PR for the explanation.
Another delay was because I realised that for the multistep version, it would be useful to have the homomorphism PR merged. Mostly this is because of the generator_product method from the PermutationGroup class: this takes an arbitrary permutation in the group and returns a list of the strong generators such that the product of the elements of the list is equal to this permutation. But the homomorphism functionality that's already there (the case of a homomorphism from an FpGroup to a PermutationGroup) would be helpful too. So I switched my attention to getting that PR ready for merging.
So this week I'll try to add the multistep version and will also start implementing homomorphisms from PermutationGroups and to FpGroups, as outlined in the plan. Also, at some point when the rewriting PR is merged, I could try out the potentially improved random element generator for FpGroups that I sketched a couple of weeks ago. This is not officially part of the plan and not in any way urgent, but it could probably give better random elements, which would be useful for the order() method and the kernel calculation for homomorphisms. Though I might not actually get around to it this week.


Manuel has implemented Optional arguments in MatchPy. I am currently working on modifying our current parser to accommodate the changes. Optional arguments in MatchPy will enable us to add all rules to ManyToOneReplacer
without increasing the number of rules. Manuel has been helping us a lot in making Rubi work in SymPy. He has also found a way to add MatchPy expressions in SymPy. I will try to use it to implement MatchQ in SymPy.
ExpandIntegrand takes up the majority of the time during integration. I have implemented ExpandIntegrand using SymPy's .match(). However, .match() can give unwanted results, such as:
>>> (x**a).match((c_ + d_*x)**n_)
{d_: 1, c_: 0, n_: a}
In the above example, it assumes that c_ is an optional variable when we don't want it to be. To prevent such situations, I used the exclude attribute:
>>> c_ = Wild('c', exclude=[0])
>>> d_ = Wild('d', exclude=[0])
>>> n_ = Wild('n', exclude=[0])
>>> (x**a).match((c_ + d_*x)**n_)
None
I have used the above trick to prevent unnecessary matches in ExpandIntegrand and made it a bit faster.
My next steps: use powerset and parse all the Rubi rules.


So far we have covered almost all the utility functions needed to support the Algebraic Rules/Linear Products rules. Arihant has given the entire details of the first test set in his recent blog post. Once we are successfully done with the first set, we have the other two test sets for Linear Products ready to work on.
Currently I'm working on the implementation of normalization functions. I have implemented NormalizeIntegrandFactorBase using SymPy's pattern matcher, but it needs a bit of improvement. After that I'll complete the implementation of SimpHelp.
Maybe a couple more weeks and we'll be done with the implementation of all the utility functions.
We started working on the implementation of the parametric logarithmic derivative problem using the Risch structure theorem. First we implemented the function basis_Q, which it now seems we won't be needing anymore. While doing that, it appeared there was a problem with the .is_algebraic property, as it was returning (pi**3).is_algebraic as True. This bug was fixed in pull request #12924.
A few improvements to the integrals module were also merged in pull request #12925.
I don't have much to say for now, until I complete the parametric logarithmic derivative PR.
In the last week I didn't create any PR worth mentioning, so in this report I'd like to introduce a concept which we're going to add to the vector module. Currently, in SymPy we have transformation equations, but they're responsible only for rotation and translation. We have already implemented the transformation equations for changing the type of coordinate system, so we need to somehow deal with the composition of these two kinds of transformations.


This week, quite like the previous week, my work was mostly limited to SymEngine.py.
I pushed in #172, wrapping up some of the miscellaneous functions and aliasing some of the existing ones to make them compatible with SymPy.
Since the above PR practically sums up my work in SymEngine.py, it's safe to say no new functionality is expected to be wrapped for now. However, since sympy/physics/vector and a number of other modules are currently failing with SymEngine, after the 0.3.0 release of SymEngine.py some minor tweaks would be required in the wrappers. Also, I'll try and get all the pending PRs merged as soon as possible.
Till Next Time!
Adéu


This is the blog on progress for week 6. It revolves around solvers for trigonometric equations. The PR on polynomial solvers has been merged in.
As I mentioned in my previous blog, I continued my work on solving simple trigonometric equations. I made a PR; see the commit messages for more details. Currently, this PR doesn't pass the whole test suite. Specifically, the case below.
auto y = symbol("y");
eqn = sub(sin(add(x, y)), sin(x));
soln = solve(eqn, y);
It currently returns a ConditionSet (meaning it is unable to find any solution). The reason for this traces back to a problem in expand(). expand(exp(I*(x + y))) stays as exp(I*(x + y)); it should be returning exp(I*x)*exp(I*y). This is needed because subs is not robust enough to handle inputs like exp(I*x + I*y) with the substitution exp(I*y) -> z. Even SymPy's subs is not robust enough (see the example below).
>>> a = sin(x + y).rewrite('exp')
>>> a.subs(exp(I*y), z)
-I*(exp(I*(x + y)) - exp(-I*(x + y)))/2
>>> a = expand(sin(x + y).rewrite('exp'))
>>> a.subs(exp(I*y), z)
-I*z*exp(I*x)/2 + I*exp(-I*x)/(2*z)
Apart from this, I fixed a bug in dummy::compare here.
My next immediate goal is to implement trigonometric simplification using fu and integrate it with the current trig_solve. I will add as many rules as possible. Hopefully, we will be able to solve not-so-simple equations as well after it is successfully integrated.
You can track my progress on this in the fu branch.
That’s it for now. See you next week. Until then, Goodbye !


I finished the 2D implementation this week. I'm very thankful to Gaurav Dhingra (a fellow senior GSoC student) for taking the time to review the pull request and suggest some really good changes. This is the link to it –> #12673.
The last issue with the 2D case was the case of implicit intersecting polygons not being handled properly.
So, I submitted another pull request to resolve that matter. Of course, it will probably require a considerable number of changes, which reviewers will advise. This is the link to it –> #12931.
Now, I have to begin implementing the 3D case. When making the proposal I thought of writing an entire 3Polytope module first, but now that seems like overkill, and it isn't a simple task to do. Of course, if such a need arises then I will definitely get down to the task and write one along similar lines to the Polygon module. First, I'll have to think about the input API for 3Polytopes. Once I have some kind of idea, I'll ask Ondrej for advice.


I sent the PR with the rewriting system implementation on Wednesday night, i.e. by Thursday as planned. It could have been earlier if I hadn't spent one day experimenting with making the elements of FpGroup different from those of the underlying FreeGroup. At the moment they are the same, so that, for instance:
>>> F, a, b = free_group('a, b')
>>> G = FpGroup(F, [a**2, a*b*a**-1*b**-1])
>>> a in F
True
>>> a in G
True
Theoretically, since F and G are different groups (not even isomorphic), it would make sense for their elements to be represented by different objects, e.g. the generators for G could be a_1, b_1, so that G and F are connected by the homomorphism sending a_1 and b_1 to a and b respectively. Though, for convenience, it could be allowed to refer to the elements of an FpGroup by the names of the underlying free group elements (performing the conversion to a_1 and b_1 backstage before passing them on to FpGroup's methods).
Now, if this were done, it would be possible to check for equality of two elements of FpGroup using the == syntax. E.g., if the generators of G were made distinct from a, b and called a_1, b_1, then we could have a_1*b_1 == b_1*a_1 (while, of course, a*b == b*a is not true because a and b are elements of a FreeGroup). The latter was the main motivation for trying this out, but in the end, after additionally discussing things with my mentor, I left everything as it is and implemented a couple of new FpGroup methods instead. So, for example, to check if a*b and b*a are equal in G, one would call G.equals(a*b, b*a). The PR has more details.
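As a quick illustration of the kind of check this enables, here is the same idea with a finite presentation of S3 instead of the group above (chosen so coset enumeration stays small; this assumes plain SymPy with the methods from the PR available):

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group('a, b')
G = FpGroup(F, [a**2, b**3, (a*b)**2])  # a standard presentation of S3

# a and b are still free-group elements, so == compares free words,
# while G.equals decides equality modulo the relators.
assert G.order() == 6
assert G.equals(a*b*a, b**-1)  # abab = 1 implies aba = b**-1 in G
```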
At the moment I'm in the middle of writing the function for finding finite presentations of permutation groups. I'm slightly modifying the coset enumeration code to allow it to return an incomplete CosetTable (instead of raising a ValueError if, say, the maximum number of cosets is exceeded) and to start running with a partially filled table (which could be passed as a keyword argument) rather than with a new, empty instance. This is necessary as finding the presentation requires setting some of the coset table entries manually, and adding relators to the future presentation while examining the undefined table entries, after which coset enumeration can be used to make more deductions. Messing with coset enumeration is the only complicated part (as far as I can see): the rest is pretty much sorted by now. I suppose the only other thing is deciding when to find the presentation in one step and when to use the multistep version (finding the presentation of a subgroup first and then the whole presentation based on that). The latter can be more efficient for larger groups. But I can think about that later, and maybe run some tests, once the coset enumeration is set up properly.


We have implemented the utility functions needed for algebraic rules 1.2.
In my last blog I explained that the output generated by Mathematica and SymPy can differ, which leads to tests failing. We have solved that problem by differentiating the expressions and equating them. This helps, but it takes a lot of time when the integrated expression is very long. Example:
integration(S(1)/((a + b*x)**(S(5)/S(2))*(c + d*x)**(S(7)/S(6))), x) = (-S(2)/S(3))/((b*c - a*d)*(a + b*x)**(S(3)/S(2))*(c + d*x)**(S(1)/S(6))) + S(20)/S(9)*d/((b*c - a*d)**S(2)*(c + d*x)**(S(1)/S(6))*sqrt(a + b*x)) + S(80)/S(9)*d**S(2)*sqrt(a + b*x)/((b*c - a*d)**S(3)*(c + d*x)**(S(1)/S(6))) + S(80)/S(9)*b**(S(1)/S(3))*d**S(2)*(c + d*x)**(S(1)/S(6))*(S(1) + sqrt(S(3)))*sqrt(a - b*c/d + b*(c + d*x)/d)/((b*c - a*d)**S(3)*((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))) + S(80)/S(3)*b**(S(1)/S(3))*d*(c + d*x)**(S(1)/S(6))*((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3)))*sqrt(cos(arccos(((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) - sqrt(S(3))))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))))**S(2))/cos(arccos(((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) - sqrt(S(3))))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))))*EllipticE(sin(arccos(((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) - sqrt(S(3))))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3)))))), sqrt(S(1)/S(4)*(S(2) + sqrt(S(3)))))*sqrt(((b*c - a*d)**(S(2)/S(3)) + b**(S(1)/S(3))*(b*c - a*d)**(S(1)/S(3))*(c + d*x)**(S(1)/S(3)) + b**(S(2)/S(3))*(c + d*x)**(S(2)/S(3)))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))**S(2))/(S(3)**(S(3)/S(4))*(b*c - a*d)**(S(8)/S(3))*sqrt(a - b*c/d + b*(c + d*x)/d)*sqrt(-b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3)))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))**S(2))) + S(40)/S(9)*b**(S(1)/S(3))*d*(c + d*x)**(S(1)/S(6))*((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3)))*sqrt(cos(arccos(((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) - sqrt(S(3))))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))))**S(2))/cos(arccos(((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) - sqrt(S(3))))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))))*EllipticF(sin(arccos(((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) - sqrt(S(3))))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3)))))), sqrt(S(1)/S(4)*(S(2) + sqrt(S(3)))))*(S(1) - sqrt(S(3)))*sqrt(((b*c - a*d)**(S(2)/S(3)) + b**(S(1)/S(3))*(b*c - a*d)**(S(1)/S(3))*(c + d*x)**(S(1)/S(3)) + b**(S(2)/S(3))*(c + d*x)**(S(2)/S(3)))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))**S(2))/(S(3)**(S(1)/S(4))*(b*c - a*d)**(S(8)/S(3))*sqrt(a - b*c/d + b*(c + d*x)/d)*sqrt(-b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3)))/((b*c - a*d)**(S(1)/S(3)) - b**(S(1)/S(3))*(c + d*x)**(S(1)/S(3))*(S(1) + sqrt(S(3))))**S(2)))
The above expression can take tens of minutes to differentiate. This is a huge problem, since the tests take much longer to complete. I am using the signal
module to put a time constraint on tests, but this can lead to a few tests failing unnecessarily.
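A minimal sketch of a signal-based time constraint (the helper name and the timeout values are my own; this only works on Unix, where SIGALRM is available):

```python
import signal
import time


def run_with_timeout(func, seconds):
    """Run func(); raise TimeoutError if it exceeds `seconds` (Unix only)."""
    def handler(signum, frame):
        raise TimeoutError("timed out after %ds" % seconds)

    old = signal.signal(signal.SIGALRM, handler)
    signal.alarm(seconds)          # schedule SIGALRM
    try:
        return func()
    finally:
        signal.alarm(0)            # cancel any pending alarm
        signal.signal(signal.SIGALRM, old)
```

A slow differentiation wrapped this way simply raises TimeoutError instead of stalling the whole suite, which is also exactly why a slow-but-correct test can fail unnecessarily.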
I have been able to pass half of the tests. I figured out that half of the tests were failing due to their dependency on other rules, such as binomials.
The test suite overall takes a lot of time to complete (~hours) in Python as compared to Mathematica (~5 minutes). Mathematica is implemented in optimized C++, so it is inherently faster than Python. However, I found that some functions are very slow in SymPy as compared to Mathematica. For example, apart(1/(x*(a + b*x)**10), x)
in SymPy takes a long time to compute partial fractions as compared to Mathematica (apart
is used in ExpandIntegrand
to expand an integrand into partial fractions).
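For illustration, here is what apart does on a much smaller instance than the tenth-power case above (the input polynomial is my own example):

```python
from sympy import apart, simplify, symbols

x = symbols('x')
# partial fraction decomposition of 1/(x*(1 + x)**2)
result = apart(1 / (x * (1 + x)**2), x)
# by hand: 1/x - 1/(1 + x) - 1/(1 + x)**2
expected = 1/x - 1/(1 + x) - 1/(1 + x)**2
```

On the small example this is instantaneous; it is only as the multiplicity of the factors grows that the runtime blows up.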
I am about to complete my part of the utility functions. We have already implemented the functions needed for the algebraic rules, so I am currently focusing on completing the test suite. I will complete the remaining functions in a few days.
This post contains a summary of two weeks of work. I was working on introducing transformation equations in CoordSysCartesian. We have decided, for now, to connect every other curvilinear coordinate system to the Cartesian one, so the equations map the rectangular system to the desired one. We allow the user to set a new coordinate system in three possible ways:

a = CoordSys3D('a')
a.connect_to_cartesian('spherical')
a.connect_to_cartesian((a.x, a.y, a.z))
x, y, z = symbols('x y z')
a.connect_to_cartesian((x, y, z), (x, y, z))

So we can set a coordinate system by a str from a predefined dict.
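The transformation equations behind the 'spherical' entry can be written down in plain SymPy (this is just the standard spherical-to-Cartesian map, not the proposed connect_to_cartesian API):

```python
from sympy import cos, simplify, sin, symbols

r, theta, phi = symbols('r theta phi', positive=True)

# spherical -> Cartesian transformation equations
x = r * sin(theta) * cos(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(theta)

# sanity check: the squared radius is recovered
radius2 = simplify(x**2 + y**2 + z**2)
```

A predefined dict mapping 'spherical' to such a tuple of expressions is all the string-based way of connecting systems needs.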


Hello all!
Arihant and I have made good progress on the implementation of the utility functions and have successfully implemented almost half of them. So far we have worked on them in sets, each containing 7 to 15 functions along with tests, which takes approximately 3-5 days for complete implementation. Some of the functions used are predefined in Mathematica, and to define those in SymPy we need to look into their definitions and examples. Thanks to Francesco for helping whenever I got stuck. Recently I changed the format of the whole Linear products\Algebraic test suite and sympified the numbers using regexes. Hopefully we'll be completing all the functions to support Linear products\Algebraic rules soon.


Note : If you’re viewing this on Planet SymPy and the Latex looks weird view it on WordPress instead.
Unfortunately the dynamic algorithm was much faster only on a regular hexagon. Why that particular figure? The reason still eludes me.
But I did use the same idea to implement the multiple-polynomial case. That is, if a list of polynomials and the maximum exponent (max x degree + max y degree) of a monomial are given as input, we can calculate the integrals of all the possible monomial terms and then store those values in a dictionary. Those values can then be referenced as many times as required by the polynomials. But is this one-time cost too much?
The cost is greatly reduced if we do not have to recompute the integral of the gradient.
For the 2D case we need not compute the left integral (i.e. the one with ), since that translates to computing the function at that point and multiplying it by the distance. Also, the right integral (i.e. the one with ) need not be recalculated, since the integrals of the terms are calculated in the aforementioned order. Example: gradient_terms = . Once we calculate we can use that result again in the right integral for and , and then the chain follows.
Therefore, only one integral has to be calculated, namely .
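The dictionary idea can be sketched on a toy domain. Over the unit square the integral of x**i * y**j is 1/((i+1)*(j+1)) in closed form, which makes the caching scheme easy to demonstrate (the function names and the polynomial encoding as a dict of monomial -> coefficient are my own, not SymPy's polytope API):

```python
from fractions import Fraction


def monomial_table(max_deg):
    """Precompute integrals of x**i * y**j over [0, 1]^2 for i + j <= max_deg."""
    table = {}
    for i in range(max_deg + 1):
        for j in range(max_deg + 1 - i):
            table[(i, j)] = Fraction(1, (i + 1) * (j + 1))
    return table


def integrate_poly(poly, table):
    """poly: dict {(i, j): coeff}; reuse the cached monomial integrals."""
    return sum(c * table[m] for m, c in poly.items())


table = monomial_table(4)          # one-time cost
p1 = {(0, 0): 1, (1, 1): 4}        # 1 + 4*x*y
p2 = {(2, 0): 3}                   # 3*x**2
```

Every polynomial in the list now only pays for dictionary lookups; the monomial integrals are computed exactly once.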
So why doesn't this technique always work? Maybe I implemented it badly for the first case. I'll have to look at it again and see how to improve it.
Apart from the timings, Ondrej said that the 2D API looks good enough (some changes may be suggested though). The 2D case is close to getting merged. I also have to start thinking about the 3D implementation. Of course, the immediate thought is to translate the 3D problem into a sum of 2D ones, but is that the fastest way to do it? Can some other preprocessing techniques be used to simplify the problem while still in 3D? I will have to think about it.


Hello, this post contains the fifth report of my GSoC progress. This week, my work was mostly limited to SymEngine.py
, and as such this post will be rather short.
I pushed in #168, wrapping the Logic
classes as well as the Set
classes. The PR is not complete yet; only test cases remain to be written, along with some probable debugging.
Once this PR is merged in, only some miscellaneous functions from SymEngine
would remain to be wrapped, after which our focus can be shifted to the pending SymPy
PRs using SymEngine
.
Till Next Time!
Tschüs
This week we completed PR #12850 on adding/removing a variable in the DifferentialExtension
class, for the later introduction of extensions of , , and probably algebraic extensions as well. Lists like E_K
and L_K
and the new ones have been combined, and the variables that record that information are now exts
and extargs
. exts
stores str
instances with values in {'exp', 'log', 'atan', 'tan', 'alg'}. Earlier, whenever at some 'level' we discovered the extension to be exponential, we appended level - 1
(i.e. the index) to the list E_K
, or to L_K
when the extension was primitive
; now we append to just the single list exts
, either exp
or log
/atan
accordingly (Note: is also a primitive
; until now, the only 'primitive' has been 'log'). The first element of both lists (exts
and extargs
) is currently kept as None
(to keep the code clean).
extargs
, as is clear from the name, stores the argument of the extension. It is not yet clear what the meaning of args
will be for the algebraic case; I am guessing a tuple (base, exp), but we needn't worry about that for now.
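The bookkeeping change can be illustrated with a toy sketch in plain Python (the helper and the example arguments are mine; the real lists live inside DifferentialExtension):

```python
# new bookkeeping: a single pair of parallel lists, one entry per level,
# with the first element kept as None
exts, extargs = [None], [None]


def register_extension(kind, arg):
    """kind in {'exp', 'log', 'atan', 'tan', 'alg'}; records one level."""
    exts.append(kind)
    extargs.append(arg)


register_extension('exp', 'x')        # level 1 is exponential over x
register_extension('log', 'x + 1')    # level 2 is a logarithm of x + 1
```

Compare with the old scheme, where the same information was scattered over separate index lists E_K and L_K and the argument had to be recovered from the tower itself.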
We also have an open pull request on some reorganising of __init__
of DifferentialExtension
#12849. It isn’t blocking anything. I think that PR is done.
Key points
What seems important to me is to list some of the discussion's key points here, since I need these issues to be settled in my mind; in particular, I often go back and look at the archives of the gitter discussion.
Aaron mentioned the issue in is_deriv_k
(also present in is_log_deriv_k_t_radical
) which prevents integrals of the form , raising NotImplementedError: Cannot work with non-rational coefficients in this case.
Now this problem can be solved with the use of the Risch structure Theorem (yes, with capital R
, small s
and capital T
; that comes to my mind whenever I see the name in the text. IIRC last summer I was corrected by Kalevi regarding Todd-Coxeter
, or maybe it was someone else).
Using the paper on “Simplification of Real Elementary Functions” (already mentioned in last blog post).
TODO
What I am currently doing is reading the text for
parametric_log_deriv
(there is no pseudocode for this). I think it will take me about 2 weeks to complete this task; by then my next semester at university will have started as well (on 15th July).
This time I am late with my blog post, damn! Aaron had to mail me about it. SymPy has a Code Generation workshop at the upcoming SciPy conference (probably the 9th of this month), and we are also going to have a SymPy 1.1 release coming out soon; I guess that is keeping everyone quite busy.
Oh, I forgot to mention, I passed the first evaluation as well.


Cross-platform work is always a bit tricky. Python does, in general, a great job of allowing programs to access the filesystem, network, etc. in an OS-agnostic manner, but sometimes there is no way around writing special code for each platform. In Python that often means runtime inspection:
>>> sys.platform
'linux'   # or e.g. 'darwin'
>>> os.name
'posix'   # or e.g. 'nt'
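A typical use of such runtime inspection is dispatching on os.name. As a small sketch (the helper name is mine; the stdlib already exposes this particular value as os.devnull):

```python
import os


def null_device():
    """Path of the discard device, chosen by runtime platform inspection."""
    return 'nul' if os.name == 'nt' else '/dev/null'
```

Keeping such decisions behind one small function means the rest of the code base stays platform-agnostic.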
To add another dimension of complexity, we sometimes deal with 32-bit and sometimes 64-bit systems (most headaches are caused by pointer sizes and the sizes of int and long). In Python we can check the platform's pointer size quite easily:
>>> sys.maxsize - 2**63 + 1
0
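The C-level sizes themselves can be queried with the struct module, which is handy when generated code must match the platform's int, long and pointer widths:

```python
import struct
import sys

int_size = struct.calcsize('i')   # C int, typically 4 bytes everywhere
long_size = struct.calcsize('l')  # C long: 4 on 64-bit Windows, 8 on 64-bit Unix
ptr_size = struct.calcsize('P')   # pointer: 4 on 32-bit, 8 on 64-bit platforms
```

This is exactly the long-vs-pointer mismatch that makes 64-bit Windows a separate case from 64-bit Unix.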
When you, in addition to this, are doing code generation, you are most likely relying on a compiler. That is our third dimension of complexity: on Linux it (usually) translates to gcc & llvm/clang (but Intel's icc/ifort and the Portland Group compilers are not uncommon). This third "dimension" has been the number one struggle the past week (and in particular the combination MinGW/Windows). At this point I feel that it is not worthwhile to continue pursuing MinGW support on Windows.
It is quite likely that a few of our attendees will have some technical difficulties with system-installed compilers. Therefore, I'm particularly happy that we have now got our Binder environment to work flawlessly. It took some work to set up a Dockerfile, but now it feels reassuring to have this as a fallback.
The work on the PythonCodePrinter has progressed since last week and should hopefully soon be ready for merging. The purpose of this class is to be a stepping stone in moving away from having e.g. lambdify rely on the StrPrinter, and instead use the CodePrinter class in .printing.codeprinter. Changing the super class turned out to be trickier than first anticipated (I thought it would be an afternoon's work at most). I originally planned to rename the original PythonPrinter to something more appropriate (SymPyCodePrinter) together with its convenience functions. However, since this would incur a deprecation for only a name change, it was decided that we will have to live with the old naming (to avoid raising tons of deprecation warnings in downstream projects).
We had identified two extensions we wanted to introduce to the code generation/autowrap facilities before the v1.1 release:
With a release candidate of SymPy v1.1 out the door (an impressive effort led by @asmeurer) and most cross-platform issues out of the way, it is about time for the final iterations of the tutorial material (and hopefully to iron out any related issues in the rc).
At the end of the week it's time for me to fly to the US to attend SciPy 2017. I have high expectations and believe that it will be a great catalyst for the rest of the summer's work on code generation.


This is a combined blog post covering the progress of week 4 and week 5. The PR on ImageSet has been merged in. As mentioned in my previous blog post, I continued my work on implementing solvers for lower-degree polynomials here.
We can now solve polynomials even with symbolic coefficients.
When FLINT is not supported:
if the order of the polynomial is <= 4, then solve_poly returns a FiniteSet of roots calculated using the known algorithms, as described in detail here.
When FLINT is supported, solve_poly
tries to fit the given input as a FLINT polynomial.
If that fails, it uses the method described above; otherwise, it factorises the polynomial using the wrappers developed during the community bonding period and returns the solutions found for each factor.
For solving inputs where we already know the input is a kind of polynomial, I initially designed a different function here, but later, upon Srajan's suggestion, I went on to modify from_basic<>()
here to directly call from_poly<>()
if the input is a poly
.
I started working on implementing solvers for trigonometric equations. You can track its progress on trigsolve branch of my repo. I will make a PR once #1296 gets in.
Short story of how this is implemented:
- type: check whether the given input comprises functions of trigonometric/hyperbolic classes which are multiplied by or added to expressions independent of the symbol, and whether the arguments of all these functions are linear in the symbol.
- expand_as_exp(): we write all trigs/hyperbolics in the given equation in terms of exp, then exp(I*x) is substituted with a dummy and solved using the polynomial solver.
- using invertComplex on the solutions thus found, we find the actual solutions of the given input.
For this, I implemented a new function as_real_imag() using the visitor pattern for getting the real and imaginary parts. This was needed for implementing the invertComplex
routine. Also, earlier subs was incapable of handling cases as described in the second point above. It is handled correctly now.
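The substitution step in the second point can be sketched in SymPy (the SymEngine C++ routine is analogous). Here sin(x) = 1 is solved by substituting t = exp(I*x) by hand, clearing denominators, and inverting at the end; the intermediate variable names are my own:

```python
from sympy import I, log, pi, simplify, solve, symbols

t = symbols('t')
# sin(x) = 1 with t = exp(I*x): sin(x) = (t - 1/t)/(2*I),
# so multiplying through by 2*I*t gives t**2 - 2*I*t - 1 = 0.
roots = solve(t**2 - 2*I*t - 1, t)
# invert t = exp(I*x) on each root: x = log(t)/I (principal branch only)
sols = [simplify(log(r) / I) for r in roots]
```

Only the principal solution is produced here; a full invertComplex routine would also attach the 2*pi*n periodicity.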
More on Trigonometric Solvers next time !!


At the beginning of this week I finished writing the first draft of the homomorphism class, at least as much of it as is possible without the rewriting systems for FpGroups. Homomorphisms from FpGroups to PermutationGroups are fully implemented, though I'm sure corrections will be made as the PR is reviewed. I mentioned that I wasn't planning on sending the PR until after the rewriting systems, but there is quite a bit there and a lot of it can be reviewed (and even merged) now.
It took a bit longer than I expected because the rewriting for permutation groups wasn’t quite what was necessary for homomorphisms. I knew that there was a function returning the unique normal form of a permutation in the group (the coset_factor()
method) and had assumed that that would be enough but the normal form wasn’t in terms of the generators, nor was there an easy way to rewrite it in terms of the generators. So I had to modify the schreier_sims()
method that finds the strong generating set and the elements that make up the normal form returned by coset_factor()
to keep track of which generators in what order are multiplied to obtain them. Then I wrote a method that would write any permutation in the group as a product of the strong generators and, since I had the information about the makeup of the strong generators, this could easily be rewritten in terms of the original generators. Doing this was a bit fiddly and took a while. Though I suppose it could be counted as part of the work on rewriting systems.
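The idea of tracking which generators are multiplied together can be sketched with plain tuples standing in for permutations (a toy stand-in for the schreier_sims() bookkeeping, not SymPy's actual code):

```python
def compose(p, q):
    """(p*q)(i) = p(q(i)) for permutations given as image tuples."""
    return tuple(p[q[i]] for i in range(len(q)))


# two generators of a small permutation group, as image tuples
gens = {'a': (1, 2, 0), 'b': (1, 0, 2)}


def evaluate(word):
    """Multiply generators left to right; the word itself records the factors."""
    perm = tuple(range(3))  # identity
    for g in word:
        perm = compose(perm, gens[g])
    return perm


# store each permutation together with a word in the original generators
elements = {evaluate(w): w for w in ('a', 'b', 'ab', 'ba', 'aa')}
```

Once every stored permutation carries such a word, rewriting a product of strong generators in terms of the original generators is just string concatenation.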
The rewriting systems for FpGroups are coming along. At the moment, it is a new class RewritingSystem
which FpGroups will probably have an attribute for. I've almost finished implementing the Knuth-Bendix completion algorithm, and that was most of the work. There is also the matter of reduction. At the moment I'm using the eliminate_words()
method from the FreeGroupElement
class: it takes a dictionary of substitutions (in this case a dictionary of reduction rules) and substitutes all of them into a given word, if possible. But that isn't very efficient for long words and large reduction dictionaries, since eliminate_words()
will scan the word from the start every time it tries to perform a new substitution. The approach I've seen described in papers on rewriting systems is to construct a finite automaton to perform the reduction and modify it every time a new rule is added. This would somewhat increase the time taken to initiate the system and to make it confluent, but would also make word reductions much quicker. However, it could also take a while to actually implement. I'm unsure if it's worth doing now; perhaps I could add it later, when the rest of my project is done.
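A naive reduction along the lines of eliminate_words() can be sketched in a few lines of plain Python (strings stand in for group words, with capital letters denoting inverses; the rescan-from-the-start behaviour is exactly the inefficiency described above):

```python
def reduce_word(word, rules):
    """Apply string-rewriting rules until a fixed point is reached.

    Rescans the rule set from the start after every substitution, which
    is the quadratic behaviour a reduction automaton would avoid."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules.items():
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word


# free-group style cancellation: 'A' stands for a**-1, 'B' for b**-1
rules = {'aA': '', 'Aa': '', 'bB': '', 'Bb': ''}
```

With a confluent rule set, the fixed point reached this way is the unique normal form, regardless of the order in which rules fire.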
Anyway, I was going to use this post to outline a schedule for this month's work, like I did in the first post. Looking back at the Timeline I sketched in my proposal, I seem to have given myself 2 weeks for implementing the rewriting systems. This estimate wasn't unreasonable, considering the unexpected matter of rewriting in permutation groups I had to deal with in the first half of the week and the reading I had to do to get a better idea of Knuth-Bendix. But if I don't implement finite automata for reductions right now, I think I should be able to send the rewriting PR by Thursday (probably earlier). With this in mind, my plan for the month is roughly as follows.
- finish the RewritingSystem class.
- homomorphisms for PermutationGroups (this should be plenty of time considering that I made a start on this during the application period).
- homomorphisms from PermutationGroups and to FpGroups; implement the permutation presentation for finite FpGroups.
- PermutationGroup methods for FpGroups via homomorphisms.
So by the end of the month, the homomorphism functionality should be complete, and so should the switching between FpGroup and PermutationGroup presentations (where it is possible, i.e. for finite FpGroups). Making PermutationGroup methods work for FpGroups properly could take more time, but if everything goes more or less according to plan, I should have most of August to try and implement Sylow subgroups for PermutationGroups (there is a chapter on it in "The Handbook of Computational Group Theory"), look into the construction of finite automata for word reduction or automatically grouping subwords of group words, like (a*b)**2
instead of a*b*a*b
, or anything else project-related really. That's the optimistic scenario of course, provided no sudden terrible complications arise, which can never be guaranteed.