

Hi all, sorry for the delay. We have added test suite 1.2 successfully. This week we will complete implementing all tests for expressions involving products of powers of linears. I have completed parsing the test suites for quadratics, but the implementation is yet to be done. There are about 56 utility functions left which are difficult to implement using SymPy’s pattern matcher, but I’ll try to implement them as soon as possible. There were a few failing test cases for PowerVariableDegree; I’ve fixed those.


This week I continued work on PR #13082. The last implementation left for the 3D case is the hyperplane representation. For example, the user can express the list of facets of the polytope either as a list of points for each facet or as a list of hyperplane parameters (a tuple for each facet).
p1 = [(0, 1, 0), (1, 0, 0), (0, 0, 0)]
p2 = [([1, 0, 0], 0), ([1, 1, 0], 1), ([0, 1, 0], 0)]
The code should be able to figure out what the points are and then pass that list-of-points representation on to the rest of the functions. I should be done with this in a day or two. To finish up the work for GSoC I’ll get the PR on intersecting polygons sorted out. After that, the remaining documentation will have to be written and the requisite cleanup done.
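Since the two input styles differ only in shape, the dispatch can be sketched with a tiny hypothetical helper — this is not the PR’s actual code, just an illustration of the shape test:

```python
def facets_are_hyperplanes(facets):
    # A hyperplane facet is a (normal, offset) pair, i.e. a 2-tuple whose
    # first element is itself a sequence of coordinates; a point facet is
    # a flat tuple of numbers.
    first = facets[0]
    return len(first) == 2 and hasattr(first[0], '__len__')

p1 = [(0, 1, 0), (1, 0, 0), (0, 0, 0)]                 # facet as points
p2 = [([1, 0, 0], 0), ([1, 1, 0], 1), ([0, 1, 0], 0)]  # facet as hyperplanes

assert not facets_are_hyperplanes(p1)
assert facets_are_hyperplanes(p2)
```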
During week 10, my mentor and I finished the creation of the new CoordSys3D constructor. We can now set a transformation while the coordinate system is created. We’ve moved the functionality from _connect_to_standard_cartesian into the constructor, so we support the same types of transformation as previously. Now I’ll briefly demonstrate how a coordinate system different from Cartesian can be created in SymPy:
a = CoordSys3D('a', transformation='spherical', variable_names=["r", "theta", "phi"])
a.lame_coefficients()
a.transformation_to_parent()
b = CoordSys3D('b', lambda r, theta, phi: (r*sin(theta)*cos(phi), r*sin(theta)*sin(phi), r*cos(theta)), variable_names=["r", "theta", "phi"])
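The Lamé coefficients for the spherical system above can be checked by hand from the transformation equations, since h_q is the norm of the partial derivative of (x, y, z) with respect to q. A quick verification sketch (not CoordSys3D internals):

```python
from sympy import symbols, sin, cos, sqrt, diff, simplify

r, theta, phi = symbols('r theta phi', positive=True)

# Spherical -> Cartesian transformation equations (same map as the lambda above)
x = r*sin(theta)*cos(phi)
y = r*sin(theta)*sin(phi)
z = r*cos(theta)

def lame(q):
    # Lame coefficient h_q = |d(x, y, z)/dq|
    return sqrt(diff(x, q)**2 + diff(y, q)**2 + diff(z, q)**2)

assert simplify(lame(r)) == 1
assert simplify(lame(theta)) == r
# compare squares to sidestep sqrt/Abs simplification of sin(theta)
assert simplify(lame(phi)**2 - (r*sin(theta))**2) == 0
```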


trigsimp). These functions are too large to be implemented by hand. I have an idea to implement them using MatchPy’s ManyToOneReplacer (similar to what we have done with the main Rubi Integrate function).


I sent the PR with the other homomorphism cases a week ago, so about a day after my last post. The work required for the main part of the PR wasn’t really complicated, but it took a while to get merged (earlier today) because some more problems showed up in the rewriting-system part.
It started off with Kalevi noticing that in the case of a free abelian group, the list of rewriting rules after initiation seemed incomplete; it so happened that the test didn’t pick up on it because it didn’t need the missing rules. In itself, that wouldn’t be much of a problem because the missing rules could be added during the run of make_confluent, but is_confluent was already True, and that was definitely wrong. So for one thing, _check_confluence wasn’t working properly, and I also thought that the type of rule that wasn’t added during rule initiation could be added as another case: if it could be done in place, why wait till it’s discovered by the double loop in make_confluent? I made a few little changes throughout the code to fix things but ultimately, it was the inadequacy of add_rule that was causing problems.
When a pair of words is given to add_rule, it first multiplies them by the inverse of the first element of the longer word until the length difference is 0, 1 or 2 (greater length differences are redundant when the smaller length differences are in the rules dictionary). Then it does the same on the other (right) side, which leads to a different set of rules. We could obtain even more rules right here, without waiting for make_confluent, if we allowed switching sides, i.e. not just continuously multiplying on the right or on the left, but performing some left multiplications after several on the right, etc. This makes make_confluent a little more efficient as more rules are discovered at one time, but trying all possible combinations of sides would probably take too much time without actually being productive. At the moment, when the length difference becomes sufficiently small, instead of adding the rule directly, add_rule calls itself recursively, which allows for some side switching. Perhaps in the future, it would seem fit to try all combinations. A couple of days ago I added a rules cache to prevent repeating work that has already been done by the function, so maybe it won’t cause too much of a slowdown in practice.
After this, one rule was still missing. I reread the code several times and it took a while to work out that the problem was what seems quite obvious now. When a pair of words w1, w2 of the same length is given to add_rule, the only rule that was added was w1: w2 for w1 > w2. But another possibility right there could be w2**-1: w1**-1, provided w2**-1 > w1**-1. Normally, this inverse rule doesn’t need to be added because if len(w1) > len(w2), then w1**-1 > w2**-1 and w1**-1: w2**-1 is implied by how word reduction is set up. Adding this last case solved the issue.
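A toy illustration of the same-length case, with words as tuples of signed integers (-g standing for the inverse of generator g) under a shortlex ordering — this is a sketch, not SymPy’s add_rule:

```python
# Toy free-group words: tuples of nonzero ints, -g is the inverse of g.

def inverse(word):
    # (ab)**-1 == b**-1 * a**-1
    return tuple(-g for g in reversed(word))

def shortlex_key(word):
    # shorter words first, ties broken lexicographically
    return (len(word), word)

def orient(w1, w2):
    # Orient a relation w1 == w2 into a rule bigger -> smaller
    if shortlex_key(w1) > shortlex_key(w2):
        return {w1: w2}
    if shortlex_key(w2) > shortlex_key(w1):
        return {w2: w1}
    return {}

def rules_for(w1, w2):
    # Besides the direct rule, add the rule for the inverted relation;
    # for equal-length words it is NOT implied by the direct one.
    rules = orient(w1, w2)
    rules.update(orient(inverse(w1), inverse(w2)))
    return rules

# Same-length words: both the direct rule and the inverse rule show up.
assert rules_for((1, 2), (2, 1)) == {(2, 1): (1, 2), (-1, -2): (-2, -1)}
```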
There were some other little improvements. For example, make_confluent has been made to return a boolean at all times, not just when checking if the system is confluent. This can be used to see whether it was successful. I also spotted an error in the kernel computation method that hadn’t come up before only by sheer luck.
Now that all the basic homomorphism functionality is available, I can have a go at extending the FpGroup class with PermutationGroup methods. I might be able to get it to work without the finite-presentation-of-permutation-groups PR (it hasn’t been reviewed yet), but I’m not entirely sure yet.
Another thing on my hands is Sylow subgroups. I actually thought I got them to work several days ago, but then one of the test groups (SymmetricGroup(10)) revealed a bug in the _strong_gens_slp attribute. It wasn’t caused by the Sylow method and only comes up after computing a stabilizer or a normalizer, something I only realised yesterday; this bug really confused me for a while. I did fix it, but a different problem came up and what worked before no longer does. I don’t see why the bug fix would lead to it, but evidently it did… So I’m still trying to sort it out.
Update: I have just worked out that Sylow thing. It turned out minimal blocks weren’t being computed properly (my fault: I wrote a separate function that should have output all minimal block systems but failed on minimality). So now all that remains is some more testing and tidying up the code, and I can send a PR with it in a day or so (if no other bugs turn up, that is).
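For reference, the method described here did eventually land in SymPy as PermutationGroup.sylow_subgroup; a small usage sketch:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

# |S6| = 720 = 2**4 * 45, so a Sylow 2-subgroup of S6 has order 16
G = SymmetricGroup(6)
P = G.sylow_subgroup(2)

assert P.order() == 16
# Lagrange: the subgroup order divides the group order
assert G.order() % P.order() == 0
```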


Greetings! The GSoC final submissions are about three weeks away and I’m trying my best to get everything sorted out before the deadline. However, we are faced with an issue: Isuru won’t be available for the major part of the penultimate week. As such, I’ll have to reach out to Sumith for reviews, and he’s been pretty busy lately. Hence my goal for the next week is to get everything reviewed and merged as soon as possible. Here is a gist of the work done in the previous week.
I implemented some attributes in #180, seeking inspiration from SymPy’s classes; it has been reviewed and merged. I also took some time fixing the assertion failures in SymPy’s modules, which will be pushed in soon. More on this next week.
That’s all I have.
Totsiens


Operations. I have also updated the parser to accommodate this change. Tests for all algebraic rules are already added.
AppellF1 is not implemented in SymPy. I couldn’t find time to implement it last week. I will implement a basic version of AppellF1.


We are almost done with the implementation of utility functions. My next task is to parse all the test suites and minimize the test cases, as there are numerous tests (of similar type) which take too long to run in Python. Along with that, I’ll be completing some incomplete utility functions and fixing bugs. We need to port all the rules and test them as early as possible to fix all possible bugs. Since the major bulk of our work is completed, adding rules and tests should not take much time.


This week I returned to college and quite some time was spent setting up my room, registering for courses, etc. Also, I have 27 hours a week of classes from now on, which is okay considering that some of my batchmates have 31-32 hours/week.
The good thing is that the major part of my work is complete. This week I worked on the 3D case. Here is the PR: #13082. A minor limitation (minor from the perspective of fixing it) is that only constant expressions are supported. Another limitation is that the input has to be a list of the polygons constituting the faces of the 3D polytope. This should actually be a list of points in the correct order, and the algorithm should figure out the polygons from the input. Examples of such input are in Chin et al. (2015).
I’ll finish it up by Saturday and then proceed to completing PR #12931. That might extend into the first few days of next week as well.
The reconstruction of the constructor in CoordSys3D feels like a never-ending story, but fortunately we are almost at the end of the work. We decided to distinguish two cases: when a rotation matrix or location is set, and when a transformation is set. In the first case we create the transformation equations from the rotation matrix and translation vector. In the second, the user is responsible for defining the transformation equations, but it is also possible to use a predefined curvilinear coordinate system.


Hi all, we’re in the final month of GSoC with only about 4 weeks remaining of the development time. Last week was a bit rough because my college semester started off with a heavy schedule on the very first day, plus a number of boarding issues, due to which several of my days were spent shifting my stuff from one room to another. Add to that the summer heat of this country, and it becomes a total nightmare. Here’s what I could do.
I pushed in #1316, resolving some of the scope issues we were facing in SymEngine.py. I’m expecting a light implementation schedule in SymEngine from now on, as we have most of the stuff we need for a sizeable number of SymPy’s directories to be ported over to SymEngine.
I pushed in #13051, fixing a minor piece of code that was previously preventing us from using SymEngine’s igcd in SymPy’s LieAlgebras module. I have also taken some time updating the work on other directories.
I worked on implementing some miscellaneous missing functionalities in #179, which should soon be ready to get merged.
Since we are slowly approaching the end of the project, I’ll have to request Isuru for a release of SymEngine and SymEngine.py so that our latest work becomes available to SymPy.
Pozdrav


Hey everyone, this post covers my progress in week 9. We are in the last phase of the GSoC project. My progress is lagging a bit behind the proposed timeline, primarily due to the commencement of classes.
As mentioned in my last blog, I was able to get the PR on fixes for ImageSet merged in and I baked all remaining pieces within #1305.
In this, I implemented an IsALinearArgTrigVisitor as follows:
class IsALinearArgTrigVisitor
    : public BaseVisitor<IsALinearArgTrigVisitor, StopVisitor>
{};
It checks whether the argument of each trigonometric and hyperbolic part is linear in the symbol or not. If the input is not linear in the symbol, then we can’t solve that equation using the present TrigSolver.
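A Python/SymPy sketch of the same linearity check — an illustration of the idea, not the C++ visitor:

```python
from sympy import sin, cos, sinh, symbols, degree
from sympy.functions.elementary.trigonometric import TrigonometricFunction
from sympy.functions.elementary.hyperbolic import HyperbolicFunction

x = symbols('x')

def is_linear_arg_trig(expr, sym):
    # True iff every trig/hyperbolic subexpression has an argument
    # that is (at most) linear in `sym`
    for f in expr.atoms(TrigonometricFunction, HyperbolicFunction):
        arg = f.args[0]
        if not arg.is_polynomial(sym) or degree(arg, sym) > 1:
            return False
    return True

assert is_linear_arg_trig(sin(2*x + 1) + sinh(x), x)
assert not is_linear_arg_trig(cos(x**2), x)
```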
Next is InvertComplexVisitor:
class InvertComplexVisitor : public BaseVisitor<InvertComplexVisitor>
{};
This is useful for finding inverses, e.g. for finding the x that satisfies the equation exp(I*x) = 3. Some tests are failing on the MSVC15 compiler. I will try to figure that out and fix it ASAP.
Meanwhile, I implemented basic solvers for systems of equations in this PR.
That’s all for now. See you next time.


The rewriting PR only got merged today. Firstly, it took several days to sort out FpSubgroup’s __contains__ method (in this PR). Secondly, my mentor pointed out a case I had overlooked in the add_rule routine, and once I corrected it, another problem presented itself. It wasn’t to do with add_rule, but adding the overlooked case made it possible for the tests to pick up on it (luckily). The problem was that sometimes make_confluent would try to use a nonexistent key in the dictionary of rewriting rules. This happened because make_confluent is set up in such a way that if sufficiently many rules are added, the _remove_redundancies method is called, which removes or modifies some of the existing rules, and the function didn’t account for this change properly. It took me several goes until I finally got it. And while I was at it, I noticed yet another bug which took some time to track down. It turned out that “for” loops don’t always iterate the way I expected over lists that are changed inside the loop (specifically, they ignored newly appended elements). I didn’t think it would be a problem because I have done similar things before in Python. I ended up replacing it with a “while” loop like:
>>> while i < len(some_list):
...     some_list.append(new_element)
...     i += 1
and that worked properly. I’m still not entirely sure what happened there: appending elements inside a for loop in the terminal shell doesn’t cause such problems. I should probably look into that more at some point, for future reference.
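The while-loop pattern above, made concrete with toy values (not the rewriting-system code): the loop keeps going as the list grows, because the length is re-read on every iteration:

```python
# Process a worklist that grows while it is being processed.
some_list = [3, 1]
processed = []
i = 0
while i < len(some_list):
    item = some_list[i]
    processed.append(item)
    if item > 1:                 # toy condition standing in for "new rule found"
        some_list.append(item - 1)
    i += 1

# the appended elements (2, then 1) were visited too
assert some_list == [3, 1, 2, 1]
assert processed == some_list
```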
So I only began working on completing the other homomorphism cases today (not counting what I had sketched in the previous couple of weeks). I’ll try to send a PR with some of it in several days. At this point, I should be able to do everything except checking that a homomorphism from a permutation group is well defined. For that I’ll need the presentation PR, and it’s quite substantial, so its review will almost certainly take more than several days. I’m planning to add the keyword argument check to the homomorphism function so that if check=False, the given images are assumed to define a homomorphism. I found this useful in some of the work I was doing last week.
I decided to work on computing Sylow subgroups, and as part of it, wrote two new homomorphism functions specifically for permutation groups: orbit_action_homomorphism and block_action_homomorphism, for defining homomorphisms induced by the action of the group on a union of orbits or on a block system respectively. These are of course homomorphisms between permutation groups and there is no need to check that they are well-defined, so it was possible to create them without the presentation PR. I don’t know if it will stay that way as it hasn’t been discussed yet, but it seemed appropriate to have them as separate functions in the homomorphisms file. Also, I found a bug in the minimal_block method while testing block_action_homomorphism yesterday, but it’s nothing major and the fix for it will likely be merged soon. There was some trouble with Travis today, though.
The actual computation of Sylow subgroups is going to be a PermutationGroup method, sylow_subgroup(), and it already works for some cases, so it’s going well. However, I am going to pause it for now to finish homomorphisms.


Manuel found a way to use MatchPy Symbols with SymPy Symbols (sample code). Implementing the rules using SymPy symbols would increase the speed of the module, since we don’t have to convert the expressions back and forth (sympy ↔ matchpy).
I am removing the constraints (cons()) defined for the patterns and have started using CustomConstraint in the patterns. I wasn’t able to do this previously, since ManyToOneReplacer was only able to handle MatchPy expressions. Now that I can use CustomConstraint, I have divided each constraint into smaller CustomConstraints. Example:
pattern3 = Pattern(Int(Pow(x_, Wildcard.optional('m', mpyInt('1'))), x_), cons(And(FreeQ(m, x), NonzeroQ(Add(m_, matchpyInteger(1)))), (m, x)))
pattern3 = Pattern(Int(Pow(x_, Wildcard.optional('m', mpyInt('1'))), x_), CustomConstraint(lambda m, x: FreeQ(m, x)), CustomConstraint(lambda m: NonzeroQ(Add(m_, mpyInt(1)))))
Defining the constraints in this way helps the ManyToOneReplacer backtrack easily, thereby improving the overall speed of the module. There is a bug in MatchPy related to this; I hope it will be fixed soon.
I have updated the parser to make the above changes. It divides the constraint into different constraints if the head of the expression tree is And:
def _divide_constriant(s, symbols):
    # Creates a CustomConstraint of the form `CustomConstraint(lambda a, x: FreeQ(a, x))`
    if s[0] == 'FreeQ':
        return ''
    lambda_symbols = list(set(get_free_symbols(s, symbols, [])))
    return 'CustomConstraint(lambda {}: {})'.format(', '.join(lambda_symbols), generate_sympy_from_parsed(s))

def divide_constraint(s, symbols):
    if s[0] == 'And':
        result = [_divide_constriant(i, symbols) for i in s[1:]]
    else:
        result = [_divide_constriant(s, symbols)]
    r = ['']
    for i in result:
        if i != '':
            r.append(i)
    return ', '.join(r)
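A self-contained sketch of how the divided constraints come out; get_free_symbols and generate_sympy_from_parsed here are simplified stand-ins for the parser’s real helpers, and the prefix parse tree is an assumed shape:

```python
# Assumed parse-tree shape: nested prefix lists like
#   ['And', ['FreeQ', 'm', 'x'], ['NonzeroQ', ['Add', 'm', '1']]]

def get_free_symbols(s, symbols, found):
    # Collect symbols appearing in the tree (simplified stand-in)
    if isinstance(s, list):
        for i in s[1:]:
            get_free_symbols(i, symbols, found)
    elif s in symbols and s not in found:
        found.append(s)
    return found

def generate_sympy_from_parsed(s):
    # Prefix list -> 'Head(arg1, arg2)' string (simplified stand-in)
    if isinstance(s, list):
        return '{}({})'.format(s[0], ', '.join(generate_sympy_from_parsed(i) for i in s[1:]))
    return s

def make_custom_constraint(s, symbols):
    lambda_symbols = sorted(set(get_free_symbols(s, symbols, [])))
    return 'CustomConstraint(lambda {}: {})'.format(', '.join(lambda_symbols),
                                                    generate_sympy_from_parsed(s))

tree = ['And', ['FreeQ', 'm', 'x'], ['NonzeroQ', ['Add', 'm', '1']]]
parts = [make_custom_constraint(i, ['m', 'x']) for i in tree[1:]]

assert parts[0] == 'CustomConstraint(lambda m, x: FreeQ(m, x))'
assert parts[1] == 'CustomConstraint(lambda m: NonzeroQ(Add(m, 1)))'
```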


I have almost completed implementing all the utility functions; only a few tests and a few leftover functions remain. Until now we were using an Rt() which could only handle numeric values; the new definition can handle expressions as well. This week I was a bit engaged with pending academic work, so I was a bit inactive. I’ll be committing my changes soon. Here are a few tests for Rt():
def test_Rt():
    assert Rt(x**2, 2) == x
    assert Rt(S(2 + 3*I), S(8)) == (2 + 3*I)**(1/8)
    assert Rt(x**2 + 4 + 4*x, 2) == x + 2
    assert Rt(S(8), S(3)) == 2
    assert Rt(S(16807), S(5)) == 7


So, I wrapped up PR #12931 on handling implicit intersections. I have to clean it up a bit more, as suggested by Christopher. I plan to implement the Bentley-Ottmann algorithm and the case of intersecting polygons with more than two sides passing through the same point in separate PRs.
Now, about the 3D case: I initially thought about writing a 3-polytope class first, but I now realise there is no need as such. The API can be kept simpler. I’ll update this post with a PR link for the 3D case quite soon.
During week 8 I added several methods which calculate transformation equations for some specific situations. We’ve finished the step where transformation equations are obtained for rotation, translation, and changing the type of coordinate system. All these transitions are handled in the vector module. We’ve also prepared a method for composing any two kinds of transformations, because a coordinate system can be rotated and translated at the same time. My current task is to use the already-added methods in the CoordSys3D constructor.


My first two months of GSoC have finally ended, and we are right on track towards the completion of the project. I finished up my log of SymPy’s directories and have analysed everything that needs to be further implemented or improved upon.
I pushed in #1313, implementing the row_insert and column_insert functions in the DenseMatrix class and the row_del and col_del functions in the C wrappers, while making row_join, col_join, row_del and col_del in-place member functions.
After testing all the viable directories that could use SymEngine as a backend, only the LieAlgebras module worked completely out of the box, with no major issues. As such, we were able to achieve this through #13023, which now marks the beginning of Phase III of my proposal. I have also pushed the work done on other directories as separate branches in my local repository, to be finished gradually.
I worked on implementing the Singleton pattern in the wrappers through #178, though that work is currently in progress. More on this next week.
That’s all I have for now.
Bidāẏa


This is the blog post on progress for weeks 7 and 8. It revolves around trigonometric solvers. As a side note, my progress rate has dropped significantly since the commencement of classes due to a tight schedule and loaded assignments. I am working all out on weekends to compensate for this time loss.
As I mentioned in my previous blog, I continued my work on solving simple trigonometric equations. As isuruf suggested, I broke the PR #1305 into several small parts.
The first PR is on as_real_imag, #1310. Several changes were made, like adding support for TrigFunctions and HyperbolicFunctions.
The next PR is on rewrite_as_exp, #1309. For this, I implemented a base TransformVisitor as follows:
class TransformVisitor : public BaseVisitor<TransformVisitor>
{};
and rewrite_as_exp is implemented with a class overriding TransformVisitor’s methods:
class RewriteAsExp : public BaseVisitor<RewriteAsExp, TransformVisitor>
{};
Additionally, going by srajan’s suggestion, I introduced a new base class TrigBase for TrigFunctions and their inverses, and similarly for HyperbolicFunctions.
I also made some fixes in ImageSet here.
Once this PR gets in, I will push the remaining bits and pieces from #1305.
The rewriting PR hasn’t been merged yet. At the beginning of last week, there was still some work to be done on it. For example, the RewritingSystem’s check_confluence() method (which, unsurprisingly, checks whether the system is confluent) would add new rewriting rules while it ran, definitely not something it should do. Another addition, suggested by my mentor, was the attribute _max_exceeded: this is True when the Knuth-Bendix completion method has already been attempted but the maximum number of rules allowed to be added to the system was exceeded. Trying again would be a waste of time. However, if the maximum number of rules is increased, _max_exceeded is reset so that the completion method can be run again.
Then I worked on the multi-step version of the presentation algorithm (which was possible because the homomorphism PR had been merged by then). Once I wrote it up, made sure it worked and started testing it against the single-step one, I noticed that the multi-step version ran several times as fast for several of the groups I was using, yet was suddenly much slower for a group of higher order. This was unexpected and I spent a while trying to work out why. The reason was this: the function uses coset_enumeration_c for filling in a certain coset table. coset_enumeration_c stores the deductions it makes while filling in, so that it can process them later. However, if there are too many deductions, it’s not practical to go through all of them, so the algorithm uses the method look_ahead which allows it to continue without looking at the deduction stack, which is emptied. The critical size of the deduction stack was 500 by default. For large groups, the single-step version would greatly exceed that most of the time, so look_ahead was run instead, speeding it up considerably, while the multi-step version would be working with a smaller coset table and the number of deductions would be large but not over 500, which made it slow. So it looked like processing close to 500 deductions was inefficient, and I reduced the number to 100 to see what would happen. What happened was that both versions got faster, but now the multi-step version was consistently faster than the single-step one. I ran the coset table tests and they seemed fine too, so the reduction in the maximum number of deductions didn’t seem to affect anything negatively. I pushed the multi-step version into the presentation PR, but that hasn’t started being reviewed because another problem came up.
The homomorphism tests from the merged PR would occasionally time out, so that needed investigating. It turned out it was because the __contains__ method of the FpSubgroup class couldn’t handle group elements in conjugate form, i.e. something of the form r**-1*w*r for some words r, w. It would get into an infinite loop, and the tests would only occasionally time out because the method is used during kernel computation, which is randomised. So a couple of days were spent thinking about the best way to eliminate the problem, whether this sort of problem could only be caused by conjugate elements, and finally expanding the code accordingly. You can follow what I did in this PR. Though this is still being discussed, and it’s unclear whether some other solution will be called for.
Because of all this, I couldn’t work on the other homomorphism cases (though I did sketch some things that I can use once the dependent PRs are merged). If the FpSubgroup issue is resolved soon and the rewriting PR is merged, I will implement homomorphisms to FpGroups. The presentation PR will probably need more reviewing, and I can’t implement PermutationGroup domains without it. I’m considering looking into an alternative method of doing homomorphisms specifically for the PermutationGroup-to-PermutationGroup case, because I’ve seen a section on it in the Handbook. Or I could start working on Sylow subgroups, which is another thing I was thinking of doing. These things shouldn’t depend on any of the unmerged PRs, so it would be a good use of time.


Francesco opened the PR to merge the Rubi module into SymPy. I made the code compatible with Python < 3.6 to pass the Travis tests: I used the @doctest_depends_on decorator to skip the doctests on Python < 3.6 and defined dummy classes to pass the tests. We are keeping the PR open for now, since it allows me to run tests on Travis, and I expect significant changes to the code after the code-generation functionality is added to MatchPy.
I have updated the parser to accommodate Wildcard.optional(). Previously, I sympified the expression and substituted the optional values to create new patterns. The current parser does not use SymPy and is very fast compared to the previous version.
I ran the test suite for algebraic rules 1.2 previously, and ~1250/1500 tests were passing. The tests are really slow, since they use the simplify and expand functions. I tried using numerical evaluation at random points, and it turned out to be faster than evaluating the tests symbolically. The test suites are grouped by the type of expression:
# Integrands of the form a
[S(0), x, S(1), S(0)],
[S(1), x, S(1), x],
[S(5), x, S(1), S(5)*x],
[-S(2), x, S(1), -S(2)*x],
# Integrands of the form x**m
[x**m, x, S(1), x**(S(1) + m)/(S(1) + m)],
[x**S(100), x, S(1), S(1)/S(101)*x**S(101)],
[x**S(3), x, S(1), S(1)/S(4)*x**S(4)],
# Integrands of the form x**(m/S(2))
[x**(S(5)/S(2)), x, S(1), S(2)/S(7)*x**(S(7)/S(2))],
[x**(S(3)/S(2)), x, S(1), S(2)/S(5)*x**(S(5)/S(2))],
[x**(S(1)/S(2)), x, S(1), S(2)/S(3)*x**(S(3)/S(2))],
# Integrands of the form x**(m/S(3))
[x**(S(5)/S(3)), x, S(1), S(3)/S(8)*x**(S(8)/S(3))],
[x**(S(4)/S(3)), x, S(1), S(3)/S(7)*x**(S(7)/S(3))],
[x**(S(2)/S(3)), x, S(1), S(3)/S(5)*x**(S(5)/S(3))],
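The numerical-evaluation idea mentioned above can be sketched as follows, assuming each entry is [integrand, variable, extra, antiderivative] with the third element ignored here — an illustration, not the actual test runner:

```python
import random
from sympy import S, diff, lambdify, symbols

x = symbols('x')

def check_entry(integrand, var, _extra, antiderivative, trials=5, tol=1e-8):
    # d/dvar(antiderivative) should equal the integrand; compare the two
    # numerically at a few random points instead of calling simplify/expand
    f = lambdify(var, integrand, 'math')
    g = lambdify(var, diff(antiderivative, var), 'math')
    for _ in range(trials):
        pt = random.uniform(0.5, 2.0)  # stay away from 0 for fractional powers
        if abs(f(pt) - g(pt)) > tol:
            return False
    return True

assert check_entry(S(5), x, S(1), S(5)*x)
assert check_entry(x**S(3), x, S(1), x**S(4)/4)
assert check_entry(x**(S(5)/S(2)), x, S(1), S(2)/S(7)*x**(S(7)/S(2)))
```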
Adding thousands of tests could slow the test suite down, so my plan is to add 1-2 tests from each expression type:
[S(0), x, S(1), S(0)],
[x**m, x, S(1), x**(S(1) + m)/(S(1) + m)],
[x**S(100), x, S(1), S(1)/S(101)*x**S(101)],
[x**(S(5)/S(2)), x, S(1), S(2)/S(7)*x**(S(7)/S(2))],
While inspecting the failed tests, I figured out that the failing tests depend on rules from other modules (such as binomials). Hence, they cannot currently be integrated.
I have added all the algebraic rules to the matcher (~1700), which is 1/5 of all the rules. But it led to a significant decrease in speed; in some cases, the matcher takes so much time that I have to stop the program. I have opened an issue for this and hope it gets fixed ASAP. I cannot really test all those rules until the speed issue is fixed. I will focus on the following this week:
changing Integer in rubi.symbols to matchpyInteger


I’ve been working on the last set of utility functions, and after that I’ll complete some leftover functions yet to be implemented. Along with that, we have been fixing the bugs we’ve noticed.
TrigReduce() is a built-in function in Mathematica; the following is its code in Python.
def TrigReduce(i):
    if SumQ(i):
        t = 0
        for k in i.args:
            t += TrigReduce(k)
        return t
    if ProductQ(i):
        if any(PowerQ(k) for k in i.args):
            if (i.rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin)).has(I):
                return i.rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin).simplify()
            else:
                return i.rewrite(sin, exp).rewrite(cos, exp).expand().rewrite(exp, sin)
        else:
            a = Wild('a')
            b = Wild('b')
            v = Wild('v')
            Match = i.match(v*sin(a)*cos(b))
            if Match:
                a = Match[a]
                b = Match[b]
                v = Match[v]
                # 2 sin A cos B = sin(A + B) + sin(A - B)
                return i.subs(v*sin(a)*cos(b), v*S(1)/2*(sin(a + b) + sin(a - b)))
            Match = i.match(v*sin(a)*sin(b))
            if Match:
                a = Match[a]
                b = Match[b]
                v = Match[v]
                # 2 sin A sin B = cos(A - B) - cos(A + B)
                return i.subs(v*sin(a)*sin(b), v*S(1)/2*(cos(a - b) - cos(a + b)))
            Match = i.match(v*cos(a)*cos(b))
            if Match:
                a = Match[a]
                b = Match[b]
                v = Match[v]
                # 2 cos A cos B = cos(A + B) + cos(A - B)
                return i.subs(v*cos(a)*cos(b), v*S(1)/2*(cos(a + b) + cos(a - b)))
    if PowerQ(i):
        if i.has(sin):
            if (i.rewrite(sin, exp).expand().rewrite(exp, sin)).has(I):
                return i.rewrite(sin, exp).expand().rewrite(exp, sin).simplify()
            else:
                return i.rewrite(sin, exp).expand().rewrite(exp, sin)
        if i.has(cos):
            if (i.rewrite(cos, exp).expand().rewrite(exp, cos)).has(I):
                return i.rewrite(cos, exp).expand().rewrite(exp, cos).simplify()
            else:
                return i.rewrite(cos, exp).expand().rewrite(exp, cos)
    else:
        return i
Some tests for TrigReduce():
assert TrigReduce(cos(x)**2) == cos(2*x)/2 + 1/2
assert TrigReduce(cos(x)**2*sin(x)) == sin(x)/4 + sin(3*x)/4
assert TrigReduce(cos(x)**2 + sin(x)) == sin(x) + cos(2*x)/2 + 1/2
assert TrigReduce(cos(x)**2*sin(x)**5) == 5*sin(x)/64 + sin(3*x)/64 - 3*sin(5*x)/64 + sin(7*x)/64
assert TrigReduce(2*sin(x)*cos(x) + 2*cos(x)**2) == sin(2*x) + cos(2*x) + 1
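SymPy’s fu-transform helpers cover the same power-reduction and product-to-sum rewrites, which makes a handy cross-check (TR7 and TR8 are existing functions in sympy.simplify.fu):

```python
from sympy import Rational, cos, sin, symbols
from sympy.simplify.fu import TR7, TR8

x = symbols('x')

# TR7 lowers even powers of cos; TR8 turns sin/cos products into sums
assert TR7(cos(x)**2) == cos(2*x)/2 + Rational(1, 2)
assert TR8(sin(x)*cos(x)) == sin(2*x)/2
```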
I have taken some help from a reply on Stack Overflow, and maybe now we can answer that question much better.
My next week’s target would be to complete all the utility functions and fix bugs as much as possible.
We worked on the parametric logarithmic derivative PR #12885 this week. So now I will get to a few mathematical points regarding the “parametric logarithmic derivative” problem (I will write it as the PLD problem for short).
The PLD problem is: given a hyperexponential monomial θ over a differential field k(t) and f ∈ k(t), decide whether there exist integers n, m with n ≠ 0 and v ∈ k(t)* satisfying the equation n·f = Dv/v + m·Dθ/θ, and find one such solution.
There are two versions of it. The first is the heuristic version, which is quite simple (with pseudocode available) and is already implemented in SymPy, but since it doesn’t handle all cases, it becomes necessary to have the deterministic version as well. The deterministic version is used as a fallback, i.e. in case the heuristic version fails (raises NotImplementedError), which isn’t very often.
We have already done the basic programming of it in SymPy, so before I say any more, let’s see an example.
A simple example of where the heuristic version fails is:
>>> fa, fd = Poly(1, x), Poly(1, x)
>>> wa, wd = Poly(1, x), Poly(1, x)
>>> DE = DifferentialExtension(extension={'D': [Poly(1, x, domain='ZZ'), Poly(t, t, domain='ZZ')], 'exts': [None, 'exp']})
>>> parametric_log_deriv_heu(fa, fd, wa, wd, DE)
None
# while the deterministic version, runs fine
>>> parametric_log_deriv(fa, fd, wa, wd, DE)
[Matrix([[1, 1, 0]]), Matrix([[1, 0, 1]])] # don't worry about output type
Internals of the deterministic version
In the deterministic version we use Risch’s structure theorem (equation 1), where we solve for each of the unknown coefficients.
This result of the structure theorem is quite useful; the same theorem can be used in the process of DifferentialExtension construction, as can be seen in section 5.2 of the book (Outline and Scope of the Integration Algorithm; search by section name if you have a different version of the book).
In PLD
(deterministic) we first obtain the and parts depending on ‘log’/’exp’ extension respectively and store them in a list F
(the order wouldn’t matter).
We reorganize this problem in a way somewhat similar to limited_integrate
problem, i.e , we solve the problem and chose the one where .
We do the same thing here in this problem, but that causes some trouble, as the solution may not necessarily be contained in the current extension, and we are currently working on solving this part of the problem. Kalevi suggests taking the unfinished DE object and building the extension containing it on top of that (no need to start from scratch).
I haven't mentioned much about the function that does the work of converting the relation in eq (1) over a field into one over the constants. It uses Poly manipulation methods to do that. Mathematically it does the same thing as "obtaining a system with coefficients in C and the same constant solutions." I'll complete parametric_log_deriv before the next week starts.
During week 7 I continued my work on transformation equations for rotation. One PR, #12960, which obtains transformation equations from a rotation matrix, was merged. I have also prepared a method for composing transformation equations, but I didn't push it yet; I still need to think about how to arrange the whole structure. My mentor suggests that we can just use transformation equations instead of a rotation matrix, because they are the more general concept. After that we could modify the constructor in the CoordSys3D class and pass it transformation equations instead of a rotation matrix.
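The basic idea can be sketched as follows (a rotation about the z-axis, purely illustrative; the actual CoordSys3D interface may look different): the transformation equations are obtained by applying the rotation matrix to the coordinate column vector.

```python
from sympy import Matrix, cos, sin, symbols

x, y, z, theta = symbols('x y z theta')

# Illustrative rotation matrix: rotation about the z-axis by theta.
R = Matrix([
    [cos(theta), -sin(theta), 0],
    [sin(theta),  cos(theta), 0],
    [0,           0,          1],
])

# The transformation equations are simply the matrix applied to the
# coordinate column vector.
new_coords = R * Matrix([x, y, z])
# new_coords[0] is x*cos(theta) - y*sin(theta), and so on
```

Composing two transformations then amounts to substituting one set of equations into the other, which is more general than multiplying rotation matrices.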


My seventh report on my GSoC progress. It's been quite a journey so far, full of challenges and rewards. This week, apart from the work, I took some time to write a proposal for a full-fledged talk at the upcoming PyCon India 2017 conference, titled SymEngine: Leveraging The Power Of A Computer Algebra System (CAS) To Another, to be held during the first week of November 2017 in New Delhi. What's more exciting is that Sumith and Ranjith would be there as well, if it gets selected.
I pushed in #1308, fixing a minor Travis irregularity. I pushed in #176, a minor PR for skipping some of the tests where SymPy wasn't being used.
Most of my time this week was spent on finding inconsistencies between SymEngine and SymPy objects which arise while using SymEngine in SymPy. I'm currently maintaining a log of all the issues that are occurring and their possible solutions. As such, I have decided to make a small change in our strategy. For the coming week, I'll try to complete the issue log and implement the possible missing functionalities and attributes in SymEngine.py, after which a minor release of SymEngine and SymEngine.py would be made to make these changes available for SymPy.
Till Next Time!
Ciao


I had initially planned to begin implementing the 3D case this week. But I wanted closure for the 2D case. Currently there is no support for implicit intersections for polygons. One could make the failing tests pass by not raising an error when such a polygon is encountered, but obviously that wouldn’t be a good solution.
So, from the discussion in pull request #12931:
1. Fleury's algorithm is not required. A better approach to calculate the area would be to simply flip the sides after crossing an intersection point and come back to the initial direction of traversal once that point is encountered again. I have implemented this technique locally. It has worked fine for the following polygons but still seems to have some minor bugs, which I'll fix soon. Once that's done I'll update the pull request.
>>> p1 = Polygon((0,3), (1,2), (2,0), (1,2), (0,3))
>>> p1.area
13/3
>>> p2 = Polygon((0,0), (1,0), (0,1), (1,1))
>>> p2.area
1/2
All of this was because of the material here.
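For context, the ordinary shoelace formula, which the intersection-aware traversal above has to extend, can be sketched as follows. Note how it returns 0 for the self-intersecting "bowtie" polygon p2 above, even though the figure visibly covers an area of 1/2; that cancellation is exactly why special handling is needed.

```python
from fractions import Fraction

def shoelace_area(vertices):
    """Signed area of a polygon given as a list of (x, y) vertices."""
    total = Fraction(0)
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += Fraction(x1) * y2 - Fraction(x2) * y1
    return total / 2

shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)])  # 1 for the unit square
shoelace_area([(0, 0), (1, 0), (0, 1), (1, 1)])  # 0 for the bowtie
```

The two triangles of the bowtie are traversed in opposite orientations, so their signed areas cancel; flipping the sign when crossing an intersection point restores the "covered" area.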
2. The intersections are currently being found by a slow technique which I designed. A much better one, the Bentley-Ottmann algorithm, exists for the same purpose. Once I get the area working for all the test cases on the codegolf link, I'll work on implementing it.
3. Christopher also suggested that we add a property called footprint_area, which would refer to the area that a polygon covers (i.e. the hull of the polygon), and another property called hull, which would refer to that very path. I was initially confusing this area with the area that depends on direction. Now I see that this footprint area isn't the one normally referred to when talking about the area of a polygon; it's actually the other one.


The work on finding finite presentations of permutation groups was somewhat delayed last week because I came across a bug in the reidemeister_presentation code while testing the one-step version of the presentation algorithm (which is essentially finished). It turned out to be caused by an unnecessary cyclic reduction on one line in the main elimination technique. See the PR for the explanation.
Another delay was because I realised that for the multistep version, it would be useful to have the homomorphism PR merged. Mostly this is because of the generator_product method from the PermutationGroup class: it takes an arbitrary permutation in the group and returns a list of the strong generators such that the product of the elements of the list is equal to this permutation. But the homomorphism functionality that's already there (the case of a homomorphism from an FpGroup to a PermutationGroup) would be helpful too. So I switched my attention to getting that PR ready for merging.
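A quick sketch of how generator_product can be used (the exact call signature and the order in which the word multiplies back together are assumptions based on the description above):

```python
from functools import reduce
from operator import mul
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 2, 0, 3])
b = Permutation([1, 0, 2, 3])
G = PermutationGroup([a, b])
g = a*b*a

# Decompose g into a word in the strong generators of G.
word = G.generator_product(g)

# Multiplying the word back together, in one of the two possible
# ordering conventions, recovers g.
identity = Permutation(list(range(G.degree)))
left = reduce(mul, word, identity)
right = reduce(mul, reversed(word), identity)
assert g in (left, right)
```

This decomposition is what makes it possible to map an arbitrary group element through a homomorphism defined only on the generators.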
So this week I'll try to add the multistep version and will also start implementing homomorphisms from PermutationGroups and to FpGroups, as outlined in the plan. Also, at some point when the rewriting PR is merged, I could try out the potentially improved random element generator for FpGroups that I sketched a couple of weeks ago. This is not officially part of the plan and not in any way urgent, but it could probably give better random elements, which would be useful for the order() method and the kernel calculation for homomorphisms. Though I might not actually get around to it this week.


Manuel has implemented optional arguments in MatchPy. I am currently working on modifying our current parser to accommodate the changes. Optional arguments in MatchPy will enable us to add all rules to the ManyToOneReplacer without increasing the number of rules. Manuel has been helping us a lot in making Rubi work in SymPy. He has also found a way to add MatchPy expressions in SymPy. I will try to use it to implement MatchQ in SymPy.
ExpandIntegrand takes up the majority of the time during integration. I have implemented ExpandIntegrand using SymPy's .match(). However, .match() can give unwanted results such as:
>>> (x**a).match((c_ + d_*x)**n_)
{d_: 1, c_: 0, n_: a}
In the above example, it assumes that c_ is an optional variable, when we don't want it to be. To prevent these situations, I used the exclude attribute:
>>> c_ = Wild('c', exclude=[0])
>>> d_ = Wild('d', exclude=[0])
>>> n_ = Wild('n', exclude=[0])
>>> (x**a).match((c_ + d_*x)**n_)
None
I have used the above trick to prevent unnecessary matches in ExpandIntegrand and made it a bit faster.
Next week, I plan to implement powerset and parse all the Rubi rules.
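For reference, a powerset helper can be written with the classic itertools recipe (the actual Rubi utility may differ in details):

```python
from itertools import chain, combinations

def powerset(iterable):
    """Yield all subsets of the iterable, smallest first."""
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

list(powerset([1, 2]))  # [(), (1,), (2,), (1, 2)]
```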


So far we have covered almost all the Utility Functions needed to support the Algebraic Rules/Linear Products rules. Arihant gave the entire details of the first test set in his recent blog post; once we are successfully done with the first set, we have the other two test sets for Linear Products ready to work on.
Currently I'm working on the implementation of the normalization functions. I have implemented NormalizeIntegrandFactorBase using SymPy's pattern matcher, but that needs a bit of improvement. After that I'll complete the implementation of SimpHelp.
Maybe a couple more weeks and we'll be done with the implementation of all the Utility Functions.
We started working on the implementation of the parametric logarithmic derivative problem using the Risch structure theorem. First we implemented the function basis_Q, which, it now seems, we won't be needing anymore. While doing that, it appeared there was a problem with the .is_algebraic property, as it was returning (pi**3).is_algebraic as True. This bug was fixed in pull request #12924.
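After that fix, the property behaves as one would expect: pi is transcendental, hence so is any positive integer power of it, while genuinely algebraic numbers still report True.

```python
from sympy import pi, sqrt, Rational

# pi**3 is transcendental, so is_algebraic should be False
assert (pi**3).is_algebraic is False

# algebraic numbers still report True
assert sqrt(2).is_algebraic is True
assert Rational(3, 4).is_algebraic is True
```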
A few improvements to the integrals module in pull request #12925 were also merged.
I don't have much to say for now, until I complete the parametric logarithmic derivative PR.
Planet SymPy is made from the blogs of SymPy's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet SymPy with RSS, FOAF or OPML.