

Hi, I'm Gaurav Dhingra (@gxyd) and this post is a report of my work for SymPy under Google Summer of Code 2016.
This is also intended to be a description of my work for submission to GSoC's Final Evaluation.
PR for GSoC 2016 product submission: Gaurav Dhingra.
GSoC proposal
Project: Group Theory
Blog: https://gxyd.github.io/gsoc.html
My Website: https://gxyd.github.io
email: axyd0000[at]gmail.com, igauravdhingra[at]protonmail.com
I am a pre-final year undergraduate student of Applied Mathematics at IIT Roorkee.
I heard about the GSoC program and about the SymPy organization from one of my college mates, Prasoon Shukla (he also participated in GSoC with SymPy). I was quite amazed by how mathematics and computer science could intersect and help in solving math problems.
I have been contributing to SymPy since May last year. In the beginning I tended to work on issues related to my then ongoing college courses. In December I completed a basic course on Group Theory, so I thought of choosing to work on Group Theory, and I saw that quite a few things in CGT were missing from SymPy even though those functionalities were already there in GAP and Magma. So I asked Kalevi about things that could be taken up for a GSoC project. Finally, after some discussions with him and Aaron, I was sure by the end of December that I was going to work on implementing Finitely Presented Groups in SymPy. I looked to the last GSoC project on CGT, by Alexandar Makelov, for reference material that could be used for the implementation, and it turned out to be the book by Derek Holt, Handbook of Computational Group Theory (which I will refer to as the Handbook).
I started working on PR #10350 for the implementation of free groups at the beginning of January, though I started working on the proposal only at the beginning of February.
So what should I do? I was quite sure of the project. At that moment I thought that, since I am a mathematics major, choosing this project would also help me. So I reached out to Kalevi, told him what I was thinking of doing and what could be good for the project, and we worked on making a good GSoC proposal.
Here is my GSoC proposal. Now that the summer is over and I've tackled a lot more of computational group theory, it seems that the main points in my GSoC proposal were centered on implementing finitely presented groups and a general Group class.
Well, I submitted my proposal and needed to wait until April 22 for the result, and then...
I was lucky to have Kalevi Suominen as my mentor. Aaron Meurer, the project leader of SymPy, was to be my co-mentor, and I felt honored to be allotted my preferred mentors.
A more detailed account of my progress can be found on my blog here
We discussed whether Basic should be a superclass of FreeGroup or not. In the end we decided not to derive FreeGroup from Basic. Its basic methods had already been written, most of them inspired by the GAP software. As for the general Group class, we never worked on it. Also, the topic of Presentation of Finite Groups could not get much attention, because of my busy college schedule.
Since I have committed quite a few commits unrelated to my project, I decided to cherry-pick the commits made for my GSoC project. The combined commits make things quite clear: commits in GSoC 2016 product submission: Gaurav Dhingra.
There is still much to be done regarding the finitely presented groups. As during the program my efforts were directed mainly towards getting the fundamental algorithms for finitely presented groups in, the overall structure of the module hasn't received much attention.
The algorithms are in, but the methods in sympy/combinatorics/fp_groups.py are not very user-friendly. The code organization is not terrible, I think. Some suggestions (both for me and anyone else interested) for future work are:
Abelianized groups could use the machinery of polys.agca, as abelian groups are the same as Z-modules; it is all essentially linear algebra over the integers. The code could also be moved to a new groups module, since finitely presented groups should not be a part of the combinatorics module; the new module would contain all the algebraic structures, including the permutation groups.
I would say that I did about 70% of the work I promised in my proposal, considering that I also did two tasks not included in it, the Low Index Subgroups algorithm and the Modified Todd-Coxeter algorithm, so I can say I swapped some of my work. It is good enough, and I hope to get back to extending the things that I have started. There's still some revision to be made and some code to be cleaned up, and I'm doing that.
I don't really know if I'll pass the final evaluations by Google, but, regardless, I'm really glad for all the coding and development experience I got during these weeks. I'll probably use it for personal projects, maybe in my dissertation in my final year, and try to contribute more to SymPy.
I appreciate the help of my mentor Kalevi Suominen, who was always there for any query that I had regarding group theory in the project; I appreciate his ability to reply within an hour to any message I left, at any time of day, every day of the week including weekends (I call it the 1-hour rule). I think he was a wonderful mentor and I learnt a lot from him. Thanks also to my co-mentor Aaron Meurer, the current project leader of SymPy, and the entire SymPy community for helping me out and reviewing my work!
Also thank you Google.
Goodbye!
This week I started work on polynomial factorization in Z.
I implemented Zassenhaus's algorithm, which factors a primitive square-free polynomial.
The basic idea behind factoring a polynomial in Z is first to make it square-free, using Yun's algorithm or something similar, and then to choose a prime number p such that the polynomial mod p is square-free and p doesn't divide its leading coefficient.
After that, the problem reduces to factoring the polynomial in a finite field, which has already been implemented; then we lift all the factors from the finite field to Z using Hensel lifting. The number of times Hensel lifting has to be applied is found using Mignotte's bound.
This also required an implementation of the extended Euclidean GCD in finite fields.
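As an illustration of the prime-selection step above, here is a Python sketch using SymPy (`suitable_prime` is a hypothetical helper for this post, not the SymEngine code):

```python
from sympy import GF, Poly, nextprime, symbols

x = symbols('x')

def suitable_prime(f):
    """Find a prime p such that f mod p stays square-free
    (gcd(f, f') is constant mod p) and p does not divide
    the leading coefficient of f."""
    p = 2
    while True:
        if f.LC() % p != 0:
            fp = f.set_domain(GF(p))
            dfp = fp.diff()
            # square-free mod p: derivative nonzero and gcd is a constant
            if not dfp.is_zero and fp.gcd(dfp).degree() == 0:
                return p
        p = nextprime(p)

f = Poly(9*x**2 - 1, x)
p = suitable_prime(f)  # 2 is rejected (f is (x+1)**2 mod 2), 3 divides the leading coefficient
```

For this `f`, the first suitable prime is 5.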
So, after the implementation of the square-free factorization algorithm, we were able to perform:
f = UIntPoly::from_vec(x, {-1_z, 0_z, 9_z});
// f = 9x**2 - 1
out = f->zz_zassenhaus();
// out is a set of RCP<const UIntPoly>
// out = {3x + 1, 3x - 1}
out_f = f->zz_factor_sqf();
// out_f is a pair<integer_class, std::set<RCP<const UIntPoly>>>
// out_f.first = 1
// out_f.second = {3x + 1, 3x - 1}
After this I need to implement the algorithm for square-free factorization and trial division for finding the multiplicity of each factor in the polynomial; this way we will be able to factor the polynomials. The PR is here.


Documentation nonlinsolve:
PR #11532
Conclusion:
So the GSoC coding period is going to end. Here I am writing what I did, my to-do list, and suggestions for future work.
One can see the progress report of each week and the minutes of meetings here: https://github.com/sympy/sympy/wiki/GSoC2016SolversProgress
All my PRs are here: sympy/pulls/shekharrajak
I have created a vertical timeline to show all the work in chronological order: timeline
Meanwhile:
I found one issue, #11534, in a mailing list discussion. It shows that we need a better invert method for inverting equations with nested roots/powers in solveset.
And this: #11528. factor_list will be helpful in solveset for factoring the equation, solving for its factors, and taking the union of the solutions (this idea came up during the PR #11188 discussion with Harsh). It will actually replace the elif f.is_Mul and all(_is_finite_with_finite_vars(m, domain)) branch in _solveset, and factor_list will be used soon.
Hello all! This is officially the last week of Google Summer of Code 2016. Submissions and final evaluations have already started. Last week I was busy fixing bugs and preparing documentation for the beam module.
Jason has come up with some new ideas for the beam module:
I have planned a long-term association with SymPy. I am taking full responsibility for my code and I will try to contribute as much as I can. I will help new developers with the SymPy code paradigms and the development workflow.
I have been able to meet all the targets that I mentioned in my proposal, and all of my pull requests have been merged except PR 11494. It was great working with SymPy. Jason, Sartaj, Aaron, Kalevi, Isuru and all the other SymPy members have helped me a lot. Many times I got stuck with some implementation, testing, or the Travis CI build, but every time they tried to help me fix it. My mentors have always appreciated and motivated me. I got to learn many new things about the software development workflow. Thank you SymPy for making this summer an awesome journey of learning and coding.
That’s all folks.
Happy Coding.
For the duration of GSoC, I worked on the SymEngine project, with a focus on writing the C wrapper and specifically the Ruby wrapper. Major work during the project timeline included wrapping already existing features such as MPC, MPFR, arbitrary-precision evaluations, matrices, and expression parsing, and new features such as lambdifying expressions in Ruby and exception handling.
My proposal for GSoC 2016 can be viewed here [1].
After 13 weeks of GSoC, we’re now at the end of the summer, and it is time for me to explain and report what I did with SymEngine and its Ruby Wrappers throughout the summer.
For reporting purposes, I will take each week’s workload and indicate the status in a tabular manner, with links to the relevant PRs. Due to the nature of the project, each part usually consists of two PRs. One to the SymEngine repository [2], and the other to the SymEngine.rb [3] repository.
Week | Work | SymEngine | SymEngine.RB | Comments
---- | ---- | --------- | ------------ | --------
1, 2 | Complex Numbers & Floating Points | Completed. Wrappers for Real Double: PR#954. Implementing real_double_get_d method: PR#966. Wrappers for Real MPFR and Complex MPC: PR#972. Fixes #974: PR#975. | Completed. Real Double Class: PR#46. Ruby Wrappers for Real MPFR and Complex MPC: PR#49. |
3 | Derivatives, Substitutions, Abs | Completed. Function symbol C wrappers: PR#982. | Completed. Abs: PR#50. Function symbol Ruby wrappers: PR#51. |
3 | Series & Polynomials | – | – | The relevant parts of SymEngine's C++ library were not ready, so I had to skip this part.
3 | Evalf | Completed. Common evals method and its C wrappers: PR#987. Sign check for Numbers: PR#1027. | Completed. evalf Ruby wrapper: PR#55. | Not in the original proposal; completed to compensate for Series and Polynomials.
4, 5, 6 | Matrices | Completed. Matrix C wrappers: PR#992. | PR under review. Matrix Ruby wrappers: PR#56. |
7 | Parser | Completed. C wrapper for parser: PR#1029. | Completed. Ruby wrapper for parser: PR#60. | Awaiting merging.
8 | Lambdify | N/A | Completed. Lambdify as a module-level function: PR#61. | All work was performed on the Ruby wrappers directly.
9, 10 | Exception Handling | Completed. Exception Handling: PR#1044. Fixing enum issue in Exceptions: PR#1069. | Completed. Exception Handling: PR#64. | Wiki on exception handling [4].
11 | Interfacing with GMP, MPFR, MPC gems | Incomplete | Incomplete |
12 | Buffer Period | N/A | N/A | Allocated time to catch up with missed work.
13 | IRuby Notebooks | N/A | Awaiting merging. Changes to beginner notebook: PR#59. | Although the examples are ready, and were included in SymEngine's presentation at SciPy [5], they were not merged, pending improvements to the formatting.
The following PRs were made for minor contributions.
https://github.com/symengine/symengine/commits?author=rajithv
106 commits / 4,316 ++ / 1,928 --
https://github.com/symengine/symengine.rb/commits?author=rajithv
134 commits / 3,756 ++ / 1,319 --
—
[1] https://docs.google.com/document/d/1HKEzqpm3yk9ax5Fs7POQaBFZFxSOjy1MXNyeB7JXBxg/edit#heading=h.t8we2dfbtvd1
[2] https://github.com/symengine/symengine/
[3] https://github.com/symengine/symengine.rb/
[4] https://github.com/symengine/symengine/wiki/SymEngineExceptions
[5] https://www.youtube.com/watch?v=03rBe2RdMt4
This is the final week of GSoC! I have the write up of the work done on a different page.
This project has been a wonderful learning experience and I have met many wonderful people over the course of this summer. Before the summer began I had never written any test code. Now, at the conclusion of the summer, I have written benchmark tests and I try to make sure the code I write has complete unit test coverage. I feel that this is one of my biggest areas of improvement over the past 3 months. I have also learned a considerable amount about the combination of programming and dynamics. Prior to GSoC my experience with dynamics consisted solely of finding the equations of motion by hand. During GSoC I was exposed to finding the equations of motion programmatically and using the results with an ODE integrator. I have also obtained a more in-depth knowledge of different algorithms for creating the equations of motion.
Another thing that I have enjoyed in this experience is seeing how a large programming project works. I feel that this will make me more employable in the future as well as allow me to help any other new open source community get up and running.
Lastly, one of the big highlights of my summer was my trip to Austin, TX for SciPy 2016. Being a GSoC student, I not only got to attend my first professional conference, but I was able to help give one of the tutorials. I was also able to meet many of the other people who work on SymPy. This went a long way toward turning my contributions to SymPy from contributions to some organization into working on a project with other people. It was also neat to meet the developers of other open source projects. As with my comment about SymPy, I now see those projects less as "project name" and more as the people actually working on them.
I have enjoyed my GSoC experience and if I were eligible I would not hesitate to apply again. I would like to thank those who chose me and made this summer of learning and experiences possible.
After the last blog post, I started working on the “fixing up the remainder of rational polynomials”. This also included adding support for basic to rational polynomial conversions. The current work is in #1055 which has not been merged in yet.
Another thing I started working on was conversions from SymEngine symbolics to multivariate polynomials. The way the current conversions work, led me to change the overall structure of multivariate polynomials.
The way basic conversions currently work is that the "internal container" of the polynomial is constructed as the expression is parsed. These containers have operators overloaded, so they can easily be manipulated by the functions. Handling multivariate polynomials using this approach would be impossible with the current structure of the code, as there were no "containers" per se in the previous implementation.
I thought it would be better if we could implement a container-based polynomial structure for multivariate polynomials too. Fortunately, given the way the SymEngine polynomials work, an approach similar to the univariate container case worked. The container is a wrapper around a map from a vector (the exponents) to a value (the coefficient):
template <typename Vec, typename Value, typename Wrapper>
class UDictWrapper
{
public:
using Dict = std::unordered_map<Vec, Value, vec_hash<Vec>>;
Dict dict_;
unsigned int vec_size;
...
The polynomial class uses this as a container, along with information about the generators and their ordering. Apart from holding the map, this class has various constructors and all the operators overloaded for it. It also has helper functions which help translate the current keys to another specific order/arrangement. The container assumes that all operations done on it are with other "adjusted" containers. Currently, the job of adjusting/aligning the containers has been delegated to functions like add_mpoly etc.
template <typename Poly>
RCP<const Poly> add_mpoly(const Poly &a, const Poly &b)
{
typename Poly::container_type x, y;
set_basic s = get_translated_container(x, y, a, b);
x += y;
return Poly::from_container(s, std::move(x));
}
and
template <typename Poly, typename Container>
set_basic get_translated_container(Container &x, Container &y,
const Poly &a, const Poly &b)
{
vec_uint v1, v2;
set_basic s;
unsigned int sz = reconcile(v1, v2, s, a.vars_, b.vars_);
x = a.poly_.translate(v1, sz);
y = b.poly_.translate(v2, sz);
return s;
}
Basically, both containers get the ordering of generators in the "new" polynomial which will soon be constructed, and adjust the ordering of the vectors inside their maps accordingly. Thus consistency is maintained. All this work is in #1049, which has been merged in.
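The reconcile/translate idea can be sketched in a few lines of Python (purely illustrative; the real implementation is the C++ shown above, and these helper names are made up for this sketch):

```python
def reconcile(vars_a, vars_b):
    """Merge two generator lists; return the merged list and, for
    each input, where each of its generators lands in the merge."""
    merged = sorted(set(vars_a) | set(vars_b))
    pos = {g: i for i, g in enumerate(merged)}
    return merged, [pos[g] for g in vars_a], [pos[g] for g in vars_b]

def translate(poly, positions, size):
    """Re-key an {exponent-vector: coefficient} dict onto the
    merged generator ordering."""
    out = {}
    for vec, coeff in poly.items():
        new = [0] * size
        for exp, i in zip(vec, positions):
            new[i] = exp
        out[tuple(new)] = coeff
    return out

def add_mpoly(poly_a, vars_a, poly_b, vars_b):
    """Align both polynomials on the merged generators, then add."""
    merged, ia, ib = reconcile(vars_a, vars_b)
    result = translate(poly_a, ia, len(merged))
    for vec, coeff in translate(poly_b, ib, len(merged)).items():
        result[vec] = result.get(vec, 0) + coeff
    return merged, result

# 2*x plus 3*y becomes a polynomial over both generators:
gens, s = add_mpoly({(1,): 2}, ['x'], {(1,): 3}, ['y'])
# gens == ['x', 'y'], s == {(1, 0): 2, (0, 1): 3}
```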
After getting this restructuring done, I started work on the basic-to-multivariate conversions, though I have not been able to cover them completely. The idea for these conversions remains the same and should be easy to implement.
I also could not spend time integrating wrappers for Piranha polynomials, which can be used as multivariate polynomials within SymEngine.
This is officially the last week of Google Summer of Code '16. Although GSoC is over, I plan to get the above-mentioned work finished within August, before any other big undertaking. It has been a wonderful journey of insight and knowledge. Thanks a lot to my mentors Isuru, Ondrej and Sumith for helping me throughout the summer, and always being there for discussions and the silly issues I bombarded them with!
Bye!
nonlinsolve continue:
PR : #11111
A few things that should be improved in nonlinsolve:
- If the system of equations contains trigonometric functions, `nonlinsolve` sometimes fails, because `solve_trig` of `solveset` is not very good, and `nonlinsolve` has to identify the Intersection solution when we have `2*n*pi + pi/2` and `n*pi + pi/2` (and similar cases). Right now it is returning one of them.
- The `substitution` function, which solves the system of equations using the substitution method, needs a better way to handle `imageset` (complex) solutions.
- The code quality of `substitution` should be improved.
Continue: Simplified Trig solution
PR #11188
The imageset union is generalized and now handles basically these three cases:
# img1 = ImageSet(Lambda(n, a*n + b), S.Integers)
# img2 = ImageSet(Lambda(n, c*n + d), S.Integers)
In [1]: img1 = ImageSet(Lambda(n, 4*n + 4), S.Integers)
In [2]: img2 = ImageSet(Lambda(n, 4*n), S.Integers)
# when a == c and (b - d == a) then the answer is img2.
In [3]: Union(img1, img2)
Out[3]: {4⋅n | n ∊ ℤ}
# 
In [4]: img1 = ImageSet(Lambda(n, 4*n + 2), S.Integers)
# when a == c and (b - d) == a/2, the merged set is a/2 * n
In [5]: Union(img1, img2)
Out[5]: {2⋅n | n ∊ ℤ}
# 
# 5*n + 5/4 ==> 5*n + 1 + 1/4
# 5*n + 1 + 1/4 is in n + 1/4
# check using img1.is_superset(img2) == True, so img1 is the answer
img1 = ImageSet(Lambda(n, n + S(1)/4 ), S.Integers)
img2 = ImageSet(Lambda(n, 5*n + S(5)/4 ), S.Integers)
In [4]: Union(img1, img2)
Out[4]: {n + 1/4 | n ∊ ℤ}
# 
# img1.is_superset(img2) is False, so no change
img1 = ImageSet(Lambda(n, 2*n + S(1)/4 ), S.Integers)
img2 = ImageSet(Lambda(n, 5*n +S(5)/4), S.Integers)
In [5]: Union(img1, img2)
Out[5]: {2⋅n + 1/4 | n ∊ ℤ} ∪ {5⋅n + 5/4 | n ∊ ℤ}
Meanwhile:
I found a case where factor_list fails and opened an issue: https://github.com/sympy/sympy/issues/11528
This week I spent a lot of time working on FeatherstonesMethod and its component parts. I started off by moving a bunch of spatial vector functions from another PR of mine to the Featherstone PR, and used some of those functions to calculate the spatial inertia of Body objects. The next thing I worked on was completely rewriting the internals of the joint code. The joints now consist of 4 reference frames and points (one set at each of the bodies' mass centers and one set per body at the joint location).
After this I ran some basic code that used these new features and kept making changes until the code was able to run without producing errors. I used this same method of work with FeatherstonesMethod and now it too is able to run without producing errors. Now that the code runs it was time to make sure that the output is correct which is a lot more involved than the previous step of work. To begin I solved for the spatial inertia by hand and used this calculation to create test code for Body.spatial_inertia. As expected the code initially was completely incorrect but it now passes the test. I have since been working on the tests for the joint code. Since this code is completely new to the sympy repository it takes a lot more planning than the body test did. Also I need to solve the kinematics by hand for the joints so that I have a base for the test code. This is where I am currently located in the process.
Also this week I addressed review comments on SymbolicSystem and have moved that PR closer to being able to merge. One of the current hang-ups is trying to force Sphinx to auto-document the __init__ method. I think the best solution currently is to move the relevant code back to the main docstring for the class and not worry about trying to document the __init__ method.
While working on rewriting the joint code I came across a bug in frame.py and have written a fix for it, along with a test to make sure the fix works.
Lastly, I reviewed a PR that adds a docstring to a method that did not yet have one. The PR had some information in it that was incorrect, and after some research I was able to make some suggestions for its implementation.
Next week is the last full week of GSoC and my main priority is getting the final evaluation information finished so that the work can be processed correctly. My next goal is to make sure SymbolicSystem gets merged into SymPy. This is not entirely in my hands, however, as I will have to wait for feedback, and so while waiting I will be pulling off different parts of FeatherstonesMethod into separate PRs at the recommendation of my advisor. I hope to possibly include these separate PRs in my final evaluation.
This week I worked on a little refactoring of the Fields module, like moving function implementations from the .h file to the .cpp file in this PR. I added a from_uintpoly method, which allows us to create a GaloisField object from a UIntPoly object, in this PR. After that I worked on the implementation of a diff method for GaloisField in this PR.
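For illustration, the derivative of a polynomial over GF(p) is just coefficient-times-exponent reduced mod p; a minimal Python sketch of the idea (the SymEngine version works on its own GaloisField representation):

```python
def gf_diff(coeffs, p):
    """Differentiate a dense polynomial over GF(p).
    coeffs[i] is the coefficient of x**i."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

# d/dx (x**3 + 2*x + 1) over GF(3) is 3*x**2 + 2 == 2: the x**2 term vanishes
print(gf_diff([1, 2, 0, 1], 3))  # [2, 0, 0]
```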
Then I worked on the Logic module. It is needed for the implementation of the Intersection class in the Sets module. It has implementations of the logical And, Or and Not operations.
Both the Or and And classes have a private variable named container_, which is a set of RCP<const Boolean>. It stores the Boolean objects to which the And and Or operators can't be applied using the known rules.
Not also has one private variable, an RCP<const Boolean> object, which stores the Boolean object to which we were not able to apply the not operator using the current rules.
Then we have three functions:
RCP<const Boolean> logical_and(const set_boolean &s);
RCP<const Boolean> logical_or(const set_boolean &s);
RCP<const Boolean> logical_not(const RCP<const Boolean> &s);
These are used to perform the respective operation on the operands supplied.
Talking a little about the implementation details:
RCP<const Boolean> logical_and(const set_boolean &s)
{
return and_or<And>(s, false);
}
RCP<const Boolean> logical_or(const set_boolean &s)
{
return and_or<Or>(s, true);
}
And the function and_or does the required operations.
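A rough Python sketch of how such a combined and_or routine can work (purely illustrative: operands here are strings like 'p' and '~p' plus Python booleans, whereas SymEngine uses Boolean objects). The second argument is the value of "x op not-x": False for And, True for Or; it is both the dominating element and the result of a contradiction.

```python
def negate(a):
    return a[1:] if a.startswith('~') else '~' + a

def and_or(args, op_x_notx):
    kept = set()
    for a in args:
        if a == op_x_notx:          # dominating element short-circuits
            return op_x_notx
        if a == (not op_x_notx):    # identity element is dropped
            continue
        if negate(a) in kept:       # x together with not-x
            return op_x_notx
        kept.add(a)
    if not kept:
        return not op_x_notx        # empty And is True, empty Or is False
    if len(kept) == 1:
        return kept.pop()
    return ('Or' if op_x_notx else 'And', frozenset(kept))

logical_and = lambda args: and_or(args, False)
logical_or = lambda args: and_or(args, True)
```

Operands that survive all the rules end up in `kept`, which plays the role of SymEngine's container_.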
After this PR gets merged, I will start working on the Intersection class.
Hi all, here's a brief summary of the 12th week of my GSoC:
Last week I uploaded the test-coverage files on my website, which revealed some interesting places where a few versions of the scan routine in coset enumeration have untested if-elif-else cases.
As we are now approaching the end of the GSoC period, we decided to do some testing with some of the examples from the 1973 CDHW paper [2]. Coset enumeration got my attention again since:
There seemed to be a bug raising a TypeError in coset enumeration by the coset-table based method, so I opened issue sympy/sympy#11449. From the beginning it was clear that the issue was not in the compress() method. It was quite difficult for me to get to the main source of the problem. But then Kalevi had a closer look at the pseudocode in Derek Holt's book and also in Sims' Finitely Presented Groups.
I wrote docstrings for a few of the methods and fixed issue #11449 in PR #11460.
The problem in the code is explained briefly below.
Previous code snippet:

    i = 0
    while i < len(C.omega):
        alpha = C.omega[i]
        i += 1
        for x in C.A:
            if C.table[alpha][C.A_dict[x]] is None:
                C.define_f(alpha, x)
                C.process_deductions(R_c_list[C.A_dict[x]], R_c_list[C.A_dict_inv[x]])

After code snippet:

    while alpha < len(C.table):
        if C.p[alpha] == alpha:
            for x in C.A:
                if C.p[alpha] != alpha:
                    break
                if C.table[alpha][C.A_dict[x]] is None:
                    C.define_c(alpha, x)
                    C.process_deductions(R_c_list[C.A_dict[x]], R_c_list[C.A_dict_inv[x]])
Here $\alpha$ loops over values less than len(C.table). This way all elements of $C.\Omega$ are tested, even in the case that the set becomes very small. The inner loop over $x$ also tests $p[\alpha]$ at each round and breaks if that becomes different from $\alpha$.
The changes addressed in PR #11460 also include renaming the file free_group.py to free_groups.py, similar to the naming of other files.
It seems that Presentation of Permutation Groups won't happen during GSoC, since there's just one more week; instead, I plan to focus on improving and completing the current PRs: #11361 on the Modified Todd-Coxeter algorithm and #11460 on the addition of docstrings and better user methods.
One more thing that I will start this week, though it may not be completed this week, is the Sphinx documentation of finitely presented groups. I found the documentation of the polys module by Kalevi very readable and interesting; maybe I can follow that.
This week I primarily worked on some bugs and added singular initial conditions to the result of expr_to_holonomic().
Firstly, I fixed a bug in .unify(). Errors were being raised when one of the holonomic functions had ground domain RR or an extension of it. This now works fine.
In [9]: expr_to_holonomic(1.4*x)*expr_to_holonomic(a*x, x)
Out[9]: HolonomicFunction((-2.0) + (1.0*x)*Dx, x, 0, {2: [1.4*a]})
In [10]: _9.to_expr()
Out[10]: 1.4⋅a⋅x**2
Later I fixed a bug in converting between the two types of initial conditions. Apparently I had forgotten to add the factorial term while converting.
After that I added singular initial conditions to the result of expr_to_holonomic() whenever the indicial equation has one root. For example:
In [14]: expr_to_holonomic(x*exp(x))
Out[14]: HolonomicFunction((-x - 1) + (x)*Dx, x, 0, {1: [1]})
In [15]: _.to_expr()
Out[15]: x⋅ℯ**x
I also changed the printing of the HolonomicFunction class to include the initial conditions inside the call, so as to make it proper Python. These changes are implemented in #11480 and #11451.
Right now we are trying to find a way to include convergence conditions in the result while converting Holonomic Functions to expressions.


During the past couple of weeks, I worked on replacing solve with solveset in solving inequalities.
SymPy's univariate inequality solver, solve_univariate_inequality, relies on solve for reducing inequalities.
In[] : isolve = solve_univariate_inequality # for simplicity
As we would prefer using solveset internally, it is important to remove the dependency on solve here.
However, there are a lot of issues which need to be handled carefully. Here are a few of them:
The major issue with using solveset is the Set output API. As isolve is designed to work according to solve's results, this seems to be the roadblock.
No support for a restricted domain. Generally, the inequalities are solved in the entire real domain. However, this might not be what the user wants (especially while dealing with trigonometric inequalities). Also, to maintain uniformity across all our solvers, support for a user-specified domain is a must.
Incorrect results for trigonometric inequalities. As in issue #9721, isolve returns incorrect results for certain trigonometric inequalities.
In []: isolve(cos(x) > 0, x, relational=False)
Out[]: (-oo, pi/2)
In []: isolve(tan(x) > 0, x, relational=False)
Out[]: (0, oo)
Implementing solvify: "solve"-fy solveset's solutions.
There were two possible ways to tackle the issue of APIs:
a. Refactoring isolve in accordance with the Set API.
b. Somehow returning solve-like results from solveset.
Solution: b.
In order to resolve the issue of portability from solve to solveset, I implemented solvify, which returns solve-like results. We classify the return value based on the type of solution returned by solveset.
Solution | Output
-------- | ------
FiniteSet | list
ImageSet, Union | list (if f is periodic)
EmptySet | empty list
Others | None
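For instance, with the implementation now living in sympy.solvers.solveset, a FiniteSet result comes back as a plain list:

```python
from sympy import S, Symbol
from sympy.solvers.solveset import solvify

x = Symbol('x')
sols = solvify(x**2 - 4, x, S.Reals)   # solveset returns FiniteSet(-2, 2)
# solvify unwraps the FiniteSet into a list, per the table above
print(sorted(sols))  # [-2, 2]
```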
Adding domain support.
I added a domain argument to isolve whose default value is the real domain (domain=S.Reals). A few limitations related to the handling of S.Infinity and S.NegativeInfinity followed.
Another noticeable point, from the implementation standpoint, was dealing with the singularities and discontinuities of the given inequality. For this, I used the continuous_domain function to find the continuous domains of the expression within the specified domain. This was a minimalistic addition with considerable returns (especially for solving periodic inequalities).
Solving trigonometric inequalities.
Since most trigonometric inequalities are periodic in nature and have infinitely many solutions, solving the expression over the entire real domain is repetitive and computationally expensive. In order to simplify the problem, I intend to solve all the inequalities in a positive periodic interval (say [0, 2*pi) for sin(x)). We already have a function to compute the real period of a function: periodicity. In case of a non-finite domain argument, I intend to use this to restrict the solutions of the problem to a periodic interval. This might not seem the perfect approach, but it seems reasonable for handling infinite interval sets for now.
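The period computation this approach relies on already exists:

```python
from sympy import Symbol, pi, sin, tan
from sympy.calculus.util import periodicity

x = Symbol('x')
# the solver then only needs to examine one period of the inequality
print(periodicity(sin(x), x))  # 2*pi
print(periodicity(tan(x), x))  # pi
```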
We need a new Set object, BigUnion, to represent an infinite number of interval objects.
I have opened PR#11458 for the same with all the implementation details.
In []: isolve(cos(x) > S(0), x, relational=False)
Out[]: [0, pi/2) U (3*pi/2, 2*pi)
In []: isolve(tan(x) > S(0), x, relational=False)
Out[]: (0, pi/2)
For the past couple of weeks, I haven't been able to give much time to the project due to my college schedule. I try to make up for the lost time during the weekend and meet the 40-hour weekly commitment.
With the end-term evaluation soon approaching, I expect this PR to get merged and count as a part of my project.
eliminate() continue:
PR : #11485
Regarding issue #2720: it is something similar to Eliminate in Wolfram. Right now it works for the real domain and does not consider ImageSet, Intersection, or Complement. If it finds an ImageSet, Intersection, or Complement for the symbol to be eliminated, then it just raises NotImplementedError (in the near future this can be implemented).
It can take N equations and M symbols to be eliminated. It returns a FiniteSet of equations which don't contain these symbols.
Continue - Diophantine in Solveset:
PR 11234
It returns a ConditionSet when diophantine doesn't recognize the diophantine equation type and is not able to solve it.

Idea for improved _solve_trig in Solveset:
In [1]: solveset(sin(x) + cos(y), x, S.Reals)
Out[1]: {x  x ∊ ℝ ∧ sin(x) + cos(y) = 0}
In [2]: solve(sin(x) + cos(y), x)
Out[2]: [asin(cos(y)) + π, asin(cos(y))]
The example above is enough to tell that _solve_trig is not using inverse trigonometric functions. We could have something which solves trig equations by isolating the symbol on the lhs with an inverse trig function on the rhs.
solveset(sin(2*x) - sqrt(3)*cos(2*x), x, S.Reals)
For this, _solve_trig currently converts the expression into exp form and solves it. But it can be solved simply using the formula sin(x - y) == sin(x)*cos(y) - cos(x)*sin(y).
First divide both side with sqrt(a**2 + b**2)
where a, b
is coeff of sin(2*x)
, cos(2*x)
.
sin(2*x)/2  (sqrt(3)/2)*cos(2*x)
==> sin(2*x)*cos(pi/3)  sin(pi/3)*cos(2*x)
==> sin(2*x  pi/3)
.
Now sin(2*x  pi/3)
is solvable using solve_decomposition
.
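The same rewrite generalizes to any a*sin(t) + b*cos(t); a small sketch (the helper `harmonic_form` is illustrative, not an existing SymPy function):

```python
from sympy import Symbol, atan2, cos, pi, simplify, sin, sqrt

def harmonic_form(a, b, t):
    # a*sin(t) + b*cos(t) == R*sin(t + phi) with
    # R = sqrt(a**2 + b**2) and phi = atan2(b, a)
    return sqrt(a**2 + b**2)*sin(t + atan2(b, a))

x = Symbol('x', real=True)
expr = harmonic_form(1, -sqrt(3), 2*x)  # sin(2*x) - sqrt(3)*cos(2*x)
print(expr)  # 2*sin(2*x - pi/3)
```

The result is a single shifted sine, which solve_decomposition can then handle.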
Meanwhile: an Abs
solver:
In [1]: solveset(Abs(x) - 1, x)
ValueError:
Absolute values cannot be inverted in the complex domain.
In [2]: solveset(Abs(x) - (1 + I), x)
ValueError:
Absolute values cannot be inverted in the complex domain.
In [3]: solveset(Abs(x) - y, x)
ValueError:
Absolute values cannot be inverted in the complex domain.
In the 1st case (for the complex domain) the answer should be http://www.wolframalpha.com/input/?i=Abs(x)++(1+)+%3D0+for+x
In the 2nd case it should be EmptySet, and in the 3rd case (the general solution when domain=S.Complexes) the solution should be http://www.wolframalpha.com/input/?i=Abs(x)++y+%3D0+for+x
In general (the 3rd case) it should return ConditionSet(x, And(-Abs(re(y)) <= re(x), re(x) <= Abs(re(y)), re(y) > 0, Eq(im(y), 0)), S.Complexes).
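For contrast, inverting Abs already works in the real domain, so only the complex-domain branch needs new logic:

```python
from sympy import Abs, FiniteSet, S, Symbol, solveset

x = Symbol('x')

# over the reals, Abs can be inverted
sol = solveset(Abs(x) - 1, x, S.Reals)
print(sol)  # {-1, 1}
```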
continue..
Hi there! It’s been eleven weeks into GSoC. The module for solving beam bending problems is almost ready. Now I am focusing fully on the example documentation part, because it’s very important to let others know what can be done.
Let us see the capabilities:
A full fledged documentation of tutorial and example for this module is on its way.
That’s all for now, looking forward to week 12.
Happy Coding.
This week my main work was on Featherstone’s articulated body algorithm. I started by prototyping what I thought his algorithm might look like in Python code (the algorithm was pulled from chapter 7 of his book). With the passes prototyped, it was apparent that I would need a full description of the kinematic tree, and so I prototyped the building of the kinematic tree from a single “base” body. I then went on to see what it would look like if the kinematic tree were built during the first pass of his articulated body algorithm and decided that keeping the two separate would result in cleaner code.
With the three passes prototyped and the kinematic tree built, I started digging into Featherstone’s book to better determine the definition of each of the variables in the algorithm. While doing this I ended up reading a second source where Featherstone describes the articulated body algorithm, and it was helpful in furthering my understanding of the algorithm as it was a condensed summary. I then compared the written version of the algorithm in his book and this article with the two MATLAB versions he has posted online and the Python version he provides a link to online. This helped me see that some terms he includes in his book he doesn’t include in his code. It also helped me to see what code for the algorithm might look like.
After working on the mock up of the passes and trying to better understand them, I switched focus to the joint code that needs to be finished so that it can be used in my implementation of the articulated body algorithm. This has lead to some confusion about the design decisions that were made in the past when putting together the joint code and this is the current stage I am sitting at as I await feedback on some of my questions.
This week I also looked over a couple of documentation PRs. One was a simple matter of fixing some indentation and seems mostly ready to merge, but the second turned some docstrings into raw strings so they could add LaTeX math code. I don’t know what the general stance is on the latter, but I’m of the opinion that the docstrings should be human readable, since people may actually look through the code for them or hope that help(function) provides something useful. In this case the LaTeX math code is cluttered and would be better off in .rst files where people are only going to be reading the rendered version. On that PR I am awaiting a response from someone with SymPy to see if this is indeed preferred.
Hopefully I’ll receive some feedback about the joints and Featherstone’s method so I can keep moving forward with these. In the meantime there are a few other bits of code I will need to complete that the algorithm uses that are not directly related to my questions. If I finish these tasks before receiving feedback I will move forward with changing the joint code as I think would be best.
This week, I had been working on the implementation of Union
and Contains
in this PR.
Our Set::contains
function has been changed from :
virtual bool contains(const RCP<const Basic> &a) const = 0;
to
virtual RCP<const Boolean> contains(const RCP<const Basic> &a) const = 0;
And these functions:
virtual bool is_subset(const RCP<const Set> &o) const = 0;
virtual bool is_proper_subset(const RCP<const Set> &o) const = 0;
virtual bool is_superset(const RCP<const Set> &o) const = 0;
virtual bool is_proper_superset(const RCP<const Set> &o) const = 0;
have been changed to:
bool is_subset(const RCP<const Set> &o) const
{
    return eq(*this->set_intersection(o), *this);
}
bool is_proper_subset(const RCP<const Set> &o) const
{
    return not eq(*this, *o) and this->is_subset(o);
}
bool is_superset(const RCP<const Set> &o) const
{
    return o->is_subset(rcp_from_this_cast<const Set>());
}
bool is_proper_superset(const RCP<const Set> &o) const
{
    return not eq(*this, *o) and this->is_superset(o);
}
depending solely on set_intersection
.
Then the SymEngine::set_union(const set_set &in, bool solve = true)
has been defined which will create a Union
object with in
if it is not solvable, or will do union operation on all of them.
Our Union
class has std::set
of Set
to store the different Set containers which can’t be unified into one single container.
Apart from it, the set_intersection
and set_union
virtual functions have been restructured to handle other Set
types case by case.
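The identity behind the new is_subset (A is a subset of B exactly when A ∩ B == A) mirrors what SymPy’s Python set classes already do, which makes a quick sanity check easy:

```python
from sympy import Interval

A = Interval(0, 1)
B = Interval(0, 2)

# A is a subset of B exactly when the intersection gives back A
subset_via_intersection = (A.intersect(B) == A)
print(subset_via_intersection, A.is_subset(B))
```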
Started off this week by continuing my work on #11422. I added support for singular initial conditions in multiplication and made some amendments in addition too. They now can return a Holonomic function with a singular initial condition. Both input functions can have singular initial conditions, or one of them can have a singular initial condition while the other has an ordinary one.
# one function has a singular initial condition and the other an ordinary one
In [4]: expr_to_holonomic(x) + expr_to_holonomic(sqrt(x))
Out[4]: HolonomicFunction((1/2) + (-x/2)Dx + (x**2)Dx**2, x), {1/2: [1], 1: [1]}
In [5]: _4.to_expr()
Out[5]: sqrt(x) + x
In [6]: expr_to_holonomic(x) * expr_to_holonomic(sqrt(x))
Out[6]: HolonomicFunction((-3/2) + (x)Dx, x), {3/2: [1]}
In [7]: _6.to_expr()
Out[7]: x**(3/2)
# both have singular initial conditions
In [9]: expr_to_holonomic((x)**(S(1)/3)) + expr_to_holonomic(sqrt(x))
Out[9]: HolonomicFunction((1/6) + (x/6)Dx + (x**2)Dx**2, x), {1/3: [1], 1/2: [1]}
In [10]: _9.to_expr()
Out[10]: x**(1/3) + sqrt(x)
In [11]: expr_to_holonomic((x)**(S(1)/3))*expr_to_holonomic(sqrt(x))
Out[11]: HolonomicFunction((-5/6) + (x)Dx, x), {5/6: [1]}
In [12]: _11.to_expr()
Out[12]: x**(5/6)
I found some problems in coding because of storing these initial conditions in two different attributes. So I merged them to a single attribute and instead added methods to check which one is stored and refactored the existing code using it.
I opened a new PR #11451 mainly focused on adding singular initial conditions to the result of .expr_to_holonomic()
when necessary. At first I added this for converting polynomials. Here are some examples:
In [14]: expr_to_holonomic(3*x**3+4*x**2)
Out[14]: HolonomicFunction((-9*x - 8) + (3*x**2 + 4*x)Dx, x), {2: [4, 3]}
In [15]: _14.to_expr()
Out[15]: x**2*(3*x + 4)
In [16]: expr_to_holonomic(x)
Out[16]: HolonomicFunction((-1) + (x)Dx, x), {1: [1]}
In [17]: _16.to_expr()
Out[17]: x
I also found a bug in .to_hyper()
when the recurrence relation has order 0,
and added its fix too. Earlier the output also considered negative exponents, which weren’t needed.
# previously
In [18]: expr_to_holonomic(y*x, x).integrate(x).to_expr()
Out[18]: x**2*y/2 + C1
# after the fix
In [19]: expr_to_holonomic(y*x, x).integrate(x).to_expr()
Out[19]: x**2*y/2
What Next:
I hope to add singular initial conditions to more types of functions in .expr_to_holonomic()
.
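For readers new to the module, a minimal round trip through `expr_to_holonomic` (using the ordinary-initial-condition path, which is already in master) looks like this:

```python
from sympy import Symbol, sin
from sympy.holonomic import expr_to_holonomic

x = Symbol('x')

p = expr_to_holonomic(sin(x), x)
print(p.annihilator)  # the differential operator annihilating sin(x)
print(p.to_expr())    # convert back to a symbolic expression
```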
After college has started, work has been slower than usual. Last week I finally wound up Rational Polynomials in #1028. Some minor changes still remain in some parts of the code. Most of the changes left are mainly related to code duplication, and will be fixed by writing their templated versions.
I started off by reading and understanding how the multivariate class is currently implemented in SymEngine. It was developed by the UC Davis team as part of their course project. The basic idea is pretty simple. It is still a sparse representation, mapping from a vector (representing the powers of each generator in the monomial) to the nonzero coefficient multiplying it. We use an unordered map instead of the ordered map used in the univariate counterpart. The choice does make some sense, as deciding an order between two monomials is subjective (e.g. which is larger, x**2*y
or x*y**2
?). Even if we defined a custom ordering, there are no real benefits I could think of that it would provide.
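The representation described above is easy to picture with a toy Python model, with dicts standing in for the unordered_map (illustrative only, not the SymEngine code):

```python
from collections import defaultdict

def mul_sparse(p, q):
    # sparse multivariate polynomials: {exponent_tuple: coefficient};
    # multiplying monomials adds their exponent vectors component-wise
    out = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] += c1*c2
    return {e: c for e, c in out.items() if c != 0}

# (x + y)*(x - y) == x**2 - y**2; the x*y cross terms cancel
p = {(1, 0): 1, (0, 1): 1}
q = {(1, 0): 1, (0, 1): -1}
print(mul_sparse(p, q))  # {(2, 0): 1, (0, 2): -1}
```

Note that no ordering between exponent tuples is ever needed here, which is exactly why an unordered map suffices.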
On a side note, it makes me wonder why we have stuck with the ordered map implementation for our univariate polynomials. The places it offers benefits are eval
and mul
(in some ways). I worked with ordered maps, as that was the implementation the initial polynomial class was built on. I’m also wondering if it is healthy to use two different implementations for the two polynomial types, univariate and multivariate. This needs to be discussed.
Right now, I’m basically refactoring most of the code written in the multivariate class, so that it more or less matches the API of the univariate case. Some functions have been rewritten and some unnecessary storage within the polynomials has been removed. Initially, I thought I would stick with the same container based approach I used in the univariate case. This proved to be non-trivial for the multivariate case. So, after some discussion with Isuru, we decided to stick with different implementations for SymEngine polynomials and Piranha polynomials, as combining the code was not easy. The current work is in #1049
A lot is on my plate right now. Some of the work I plan to finish by the end of the next two weeks are
Finish up the refactoring of multivariate polynomials, and make their interaction with univariate polynomials as seamless as possible
Introduce wrappers for piranha polynomials to be used as multivariate polynomials within SymEngine
Fix up the remainder of rational polynomials, and template code wherever possible to remove code duplicates
Write conversions from Basic
to multivariate polynomials, which will finish up one part of the coercion framework as proposed by Isuru
Off to work!
eliminate() continued:
issue #2720 : We need some kind of eliminate function, like http://reference.wolfram.com/mathematica/ref/Eliminate.html. See also http://stackoverflow.com/q/20826969/161801
I am trying to use subs
and _invert
to get the answer. Maybe one can use replace,
xreplace
or match
to eliminate some kind of common subexpression.
The answer can be an ImageSet,
FiniteSet,
Complement
or Intersection
when we use _invert,
so there should be a technique to handle these.
Still need some good ideas and techniques. WIP.
Output of solveset should be of one type:
Amit discussed this. The solution we get from solveset
should be one type of set. Right now we may have a solution as an ImageSet,
FiniteSet,
Complement,
Intersection
or ConditionSet,
so it is a problem for the user to handle this many solution types.
I think there should be something that separates the Complements,
Intersections,
ConditionSets
and the main solution in a FiniteSet.
E.g. if the solveset solution is Intersection(Complement(FiniteSet(x), {y}), {z}),
then:
soln: FiniteSet(x),
with the conditions x != y
and {x} intersect {z}.
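A first cut at that separation could just peel the wrappers off the returned set (`split_solution` is a hypothetical sketch, not a proposed final API):

```python
from sympy import Complement, FiniteSet, Intersection, symbols

def split_solution(sol):
    # split a solveset result into (base set, excluded set, restrictions)
    excluded, restricted = None, []
    if isinstance(sol, Intersection):
        parts = list(sol.args)
        # keep the Complement (or the first member) as the core solution
        sol = next((a for a in parts if isinstance(a, Complement)), parts[0])
        restricted = [a for a in parts if a is not sol]
    if isinstance(sol, Complement):
        sol, excluded = sol.args
    return sol, excluded, restricted

x, y, z = symbols('x y z')
s = Intersection(Complement(FiniteSet(x), FiniteSet(y)), FiniteSet(z),
                 evaluate=False)
print(split_solution(s))
```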
Continue Simplified Trig soln
PR #11188
According to Harsh’s comments/review, I modified the PR. Now it seems to return a more simplified solution (one case is here).
To understand the changes I did in the _solve_trig
method, one should check this gist
To see the advantage of imageset union, One good example is in this gist
continue
Hello, guys. Welcome back. It’s been ten weeks into the coding period. I had a meeting with Jason on the 25th of this month. We discussed many new changes to the API that I had implemented before this week. Now, the beam bending module is almost ready to solve beam bending problems.
Let us see how to solve a beam bending problem using this module.
Problem Statement :
The deflection is restricted at the end of the beam.
Solution :
>>> from sympy.physics.continuum_mechanics.beam import Beam
>>> from sympy import Symbol, Piecewise
>>> x = Symbol('x')
>>> E = Symbol('E')
>>> I = Symbol('I')
>>> b = Beam(4, E, I)
>>> b.apply_load(value=-9, start=4, order=-1)
>>> b.apply_load(value=-3, start=0, order=-1)
>>> b.apply_load(order=0, start=2, value=6)
>>> b.bc_deflection = [(4, 0)]
>>> b.boundary_conditions
{'deflection': [(4, 0)], 'moment': [], 'slope': []}
>>> b.load
-3*SingularityFunction(x, 0, -1) + 6*SingularityFunction(x, 2, 0) - 9*SingularityFunction(x, 4, -1)
>>> b.shear_force()
-3*SingularityFunction(x, 0, 0) + 6*SingularityFunction(x, 2, 1) - 9*SingularityFunction(x, 4, 0)
>>> b.bending_moment()
3*SingularityFunction(x, 0, 1) - 3*SingularityFunction(x, 2, 2) + 9*SingularityFunction(x, 4, 1)
>>> b.slope()
(3*SingularityFunction(x, 0, 2)/2 - SingularityFunction(x, 2, 3) + 9*SingularityFunction(x, 4, 2)/2 - 7)/(E*I)
>>> b.deflection()
(-7*x + SingularityFunction(x, 0, 3)/2 - SingularityFunction(x, 2, 4)/4 + 3*SingularityFunction(x, 4, 3)/2)/(E*I)
If the user wants to represent the deflection in the piecewise form, then:
>>> b.deflection().rewrite(Piecewise)
(-7*x + Piecewise((x**3, x > 0), (0, True))/2
+ 3*Piecewise(((x - 4)**3, x - 4 > 0), (0, True))/2
- Piecewise(((x - 2)**4, x - 2 > 0), (0, True))/4)/(E*I)
That’s all for this week. Cheers !!
Happy Coding.
Somehow I think I was off by a week. I think last week’s blog post covers weeks
9 and 10 and this week’s covers week 11. This week I created a full draft for
all components of the SymbolicSystem
class that will take the place of an
equations of motion generator “base class” that was discussed in my project
proposal. I began by creating all of the docstrings for the class followed by
the test code. With the documentation and test code written it was a simple
matter to finish off the code for the class itself. Lastly I added
documentation to two places in sympy, one place contains the autogenerated
documentation from the docstrings and the other place I adapted an example from
pydy to show how to use the new class.
After working on SymbolicSystem
I decided to try to finish off an old PR of
mine regarding the init_printing code that Jason and I had discussed at Scipy.
The idea was to build separate dictionaries to pass to the different printers
in ipython based on the parameters that the specific printers take. The idea
was to find this information using inspect.getargs(). The problem arose when
trying to implement this solution because each separate printer has an expr
argument and a **settings argument, and the different possible parameters are
processed internally by the printer. This means that there would not be an
elegant way to build dictionaries for each printer.
The next thing I worked on this week was looking into Jain’s version of the order(N) method, as suggested last week. When I started looking over his book, however, I found that he uses a rather different set of notation than Featherstone and has some additional terms. I have decided to move forward with Featherstone’s method, since the summer is coming to an end and I am already familiar with his version of the method. To that end I reread the first part of chapter 7 in Featherstone’s book, where he discusses the articulated body method.
I reviewed two PRs this week. This work was rather quick as they were simply documentation additions. I verified that the method docstrings matched what the methods actually do and that the module docstring included the different functions present in the file. Having determined that they were correct, I gave the +1 to merge and they have both since been merged.
The plan for next week is to focus entirely on the order(N) articulated body method of forming the equations of motion. I plan on writing the three passes for the method as if I have all of the information and methods I need in order to make it work. I expect this to be the best way to determine what additional code I will need, in addition to finding my weak points in how well I understand the method. Once I have a skeleton of how the algorithm is supposed to work, I will stop working directly on the algorithm itself and start working on the peripheral code such as the joints and body code or spatial vector processing methods.
Last week I had implemented Shoup’s algorithm in this PR. During the review we came to realise that it is better to use unsigned int
instead of integer_class
because we didn’t need big numbers.
Working on the same PR, I found a bug in negate
and similar functions where we were doing:
for (auto &a : dict_) {
    a *= -1;
    a += modulo_;
}
Here: if dict_
is [0, 0, 10], it will negate to [11, 11, 1] in GF(11).
So a check was needed for when the value is 0, and the method was changed to:
for (auto &a : dict_) {
    a *= -1;
    if (a != 0_z)
        a += modulo_;
}
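The fix is easy to mirror in a quick Python sketch of the coefficient loop (illustrative, not the SymEngine code):

```python
def gf_neg(coeffs, p):
    # negate a polynomial over GF(p); the zero check is the bug fix:
    # without it, 0 would be "negated" to p instead of staying 0
    out = []
    for a in coeffs:
        a = -a
        if a != 0:
            a += p
        out.append(a)
    return out

print(gf_neg([0, 0, 10], 11))  # [0, 0, 1]
```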
Along with it I was working on the gf_factor
PR and it eventually got merged. It introduced Zassenhaus’s algorithm and gf_factor()
function. In the coming days, we will have to change gf_factor
to switch between gf_zassenhaus
and gf_shoup
according to the degree of the polynomial.
We made one more design change, we changed the factor container from std::pair<GaloisFieldDict, integer_class>
to std::pair<GaloisFieldDict, unsigned>
because we didn’t need large numbers as power.
Then I started working on change of base class of GaloisField
class from UPolyBase
to UIntPolyBase
, this needed implementation of eval
and multi_eval
method, then I implemented the iterator for GaloisField
class and the pow
method. The PR is under review.
I couldn’t write a blog post last week, so I am including the progress of weeks 8 and 9 both here.
I continued working on the PR #11360. We added functionality to store a different type of initial condition for regular singular points other than the usual [y(0), y'(0), ...]
. The exact format is described here in master, though it is changed to a more elegant form in #11422. This type of initial condition provides more information at regular singular points and is helpful in converting to expressions. Examples on how to use it:
In [22]: expr_to_holonomic(sin(x)/x**2, singular_ics={-1: [1, 0, 1]}).to_expr()
Out[22]: sin(x)/x**2
I also added a method to compute this type of initial condition for algebraic functions of the form P**r,
for some polynomial P
and a rational number r.
In [25]: expr_to_holonomic(sqrt(x**2+x))
Out[25]: HolonomicFunction((-x - 1/2) + (x**2 + x)Dx, x), {1/2: [1]}
In [26]: _25.to_expr()
Out[26]: sqrt(x)*sqrt(x + 1)
After that I made some changes in `to_meijerg()` to return the polynomial itself if the `meijerg` function represents a polynomial instead of raising `NotImplementedError`.
In [40]: expr_to_holonomic(4*x**3 + 2*x**2, lenics=3).to_meijerg().expand()
Out[40]: 4*x**3 + 2*x**2
I also added code to return the general solution in `_frobenius()` if none of the roots of indicial equation differ by an integer.
I wasn’t able to do much this week because my college started. I travelled back and had some college related stuff to do.
I opened #11422 and first added a basic method to determine the domain for polynomial coefficients in the differential equation.
In [77]: expr_to_holonomic(sqrt(y*x+z), x=x, lenics=2).to_expr()
Out[77]: sqrt(x*y + z)
In [78]: expr_to_holonomic(1.1329138213*x)
Out[78]: HolonomicFunction((-1.1329138213) + (1.1329138213*x)Dx, x), f(0) = 0
Then I added support for the new type of initial condition on regular singular points in .integrate()
.
In [83]: expr_to_holonomic(sin(x)/x**3, singular_ics={-2: [1, 0, 1]}).integrate(x).to_expr()
Out[83]: -(x**2*Si(x) + x*cos(x) + sin(x))/(2*x**2)
Also added support for the same in addition.
In [6]: expr_to_holonomic(sqrt(x)) + expr_to_holonomic(sqrt(2*x))
Out[6]: HolonomicFunction((-1/2) + (x)Dx, x), {1/2: [1 + sqrt(2)]}
In [7]: _6.to_expr()
Out[7]: sqrt(x)*(1 + sqrt(2))
I plan to continue my work on this PR and add more support for this initial condition.
In the last blog post I reported that the lambdify function was fully wrapped. Yes, that’s what I thought at the time! But it did indeed drag on quite a bit, requiring many changes, which were very educational for me in terms of how Ruby looks at user experience. Several changes were made, including structural changes to calling lambdify, and support for older Ruby versions. The really interesting and long discussions on this can be viewed in PR 61.
Apart from that, I simultaneously started reading and exchanging ideas on exception handling. It was agreed that the C wrappers would return an error code, which can be accessed from the Ruby wrapper, which in turn can raise a Ruby exception. The preliminary model can be seen in PR 1044 in SymEngine and PR 64 in the SymEngine Ruby wrapper.
Right now any exception is caught and sent to the Ruby wrapper with an error code of 1, which raises a generic RuntimeError in Ruby. Although not very informative, this is helpful in preventing crashes of the Ruby runtime.
To illustrate, the result of running the following code (a division by zero) before and after the change is shown.
Before:
irb(main):001:0> require 'symengine'
=> true
irb(main):002:0> x = SymEngine(1)
=> #<SymEngine::Integer(1)>
irb(main):003:0> y = SymEngine(0)
=> #<SymEngine::Integer(0)>
irb(main):004:0> x/y
terminate called after throwing an instance of 'Teuchos::NullReferenceError'
what(): /home/rajith/Development/symengine/symengine/utilities/teuchos/Teuchos_RCPNode.cpp:720:
Throw number = 1
Throw test that evaluated to true: true
Teuchos::RCP<SymEngine::Basic const> : You can not call operator->() or operator*() if getRawPtr()==0!
Abort caught. Printing stacktrace:
Traceback (most recent call last):
Done.
[2] 590 abort (core dumped) irb
After:
irb(main):001:0> require 'symengine'
=> true
irb(main):002:0> x = SymEngine(1)
=> #<SymEngine::Integer(1)>
irb(main):003:0> y = SymEngine(0)
=> #<SymEngine::Integer(0)>
irb(main):004:0> x/y
RuntimeError: Runtime Error
from (irb):4:in `/'
from (irb):4
from /usr/bin/irb:11:in `<main>'
irb(main):005:0>
This is a good improvement overall, but as it’s nicer to have a more descriptive error shown to the user, that part will be the continuation of exception handling during the 10th week.
See you next week!
This week I worked on solving trigonometric inequalities.
The primary univariate inequality solver, solve_univariate_inequality,
depends upon the results of solve
in order to solve the given inequality.
Taking a cue from PR#10022 on incorporating solveset
for inequalities, I worked on developing an approach for replacing the use of solve
with solveset
in solving inequalities.
Idea
Convert the Set
type output from solveset
to list
objects, similar to those returned by solve.
The most striking difference between both the APIs is the uniform Set
output returned by solveset
.
Hence, the prime concern while transitioning from solve
to solveset
should be handling the various type of solutions.
Here are a few implementation ideas on the same:
FiniteSet : finite number of solutions
Using the list
constructor on these types of objects works extremely well.
ImageSet : infinite number of solutions
This is generally the case with trigonometric functions as most of them are periodic in nature.
We need to limit the number of solutions to be finite.
For this, I intend to use a periodic interval, [0, period],
as the basis for filtering the solution set.
This returns a simplified FiniteSet
of solutions which can be used to solve inequalities in a restricted interval.
Following which we can generalise the output over the entire domain of the function.
A major issue here is the representation of the final solution set.
For example:
In []: solveset(cos(x)<0, x, domain=S.Reals)
Out[]: {[2*n*pi + pi/2, 2*n*pi + 3*pi/2] | n ∊ ℤ}
Currently, we do not have a Set
object for representing this.
For this, we need to implement an Indexed Union : Union(X(a) for a in I)
We can symbolically represent the above solution as BigUnion(Interval(2*n*pi + pi/2, 2*n*pi + 3*pi/2), n, S.Integers)
.
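Until a BigUnion class exists, the family can at least be materialized for finitely many n, which is handy for testing (a sketch, not the proposed API):

```python
from sympy import Interval, Union, pi

def sample_big_union(n_values):
    # finitely many members of the family
    # Interval(2*n*pi + pi/2, 2*n*pi + 3*pi/2) for n in n_values
    return Union(*[Interval(2*n*pi + pi/2, 2*n*pi + 3*pi/2)
                   for n in n_values])

sol = sample_big_union(range(-2, 3))
print(sol)
```

A real BigUnion would keep n symbolic instead of enumerating it.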
After thoughts
Also, this week my PR#11277 on periodicity got merged finally. I have updated the corresponding PR#11141 which has been stalled for some time now. Hopefully, it will get merged soon.
The next week I will devote my time to the implementation part of solving inequalities.
Sorry I haven’t been able to report my work for about two weeks now. Things have become slower mainly due to the fact that my university has resumed and along with it a heavily packed timetable and assignments in the first week don’t help. I also caught a bad fever the past week which really hindered my progress, but it’s dying down and I will resume my work with full vigor eventually.
The work I did do has been summarized below.
While writing the code for the rational polynomials, as mentioned in the last blogpost, I encountered various bugs. Some of them caused other bugs to be exposed, which took a lot of time for me to debug.
pow(Poly, uint)
was throwing a segmentation fault. After digging in and wasting more than four hours on unrelated checks I figured out that the eq
inside the polynomial class was incorrect. Without checking whether the other parameter was a Poly
or not, I was static_casting it, which posed a problem.
The happiness was short-lived, as the bug persisted. On further inspection I found that the polynomial was being treated as a number! This was because is_a_Number
relied on typecodes, and the polynomial types were defined before the NUMBERWRAPPER
class, which deemed them numbers. The fix for this was simple, just move the polynomial type code definitions after the numbers.
The pow
tests pass, but what’s this? All the MSVC
builds on appveyor fail. They all fail a coeff
test. Wow, I had not changed any code related to coeff
at all, how does it affect that specific test and only on the MSVC
compiler? This kept me wondering and looking at the source for a day. Finally, I had to log in to the VM of appveyor running the tests. I was not familiar with the Windows development environment at all, which was the reason I failed a couple of times before I gave up debugging. The next morning I woke up determined to fix this Windows bug; I set breakpoints in Visual Studio and started the code execution. I found it! It was a bug in the CoeffVisitor
itself. The code for the coeff
function was incomplete. Why wasn’t this bug being captured before? Probably because the previous change (in the typecodes) caused a reordering in a map, which no other compiler was doing. Do read up here for more details.
An appveyor build was failing for an unknown reason, and had to be shifted to allowed failures.
This was basically the set of components of #1033. Fewer changes, but really important nonetheless.
The work with rational polynomials continues. I had underestimated the amount of work required, and I also feel that I should have broken down rational polynomials into three parts each, just like integer polynomials. Right now, the work continues in #1028, but it’s soon going to become huge with all varieties of changes.
I finally benchmarked cotire to see how much speedup it was providing to SymEngine builds. Here is the short summary of the speedups obtained, and the work to include it is in #1041.
Also a small bug was present in our flint and gmp rational number wrappers. We were not canonicalizing on construction from two integers. It was fixed in #1031.
Laters!
Hello, guys. Welcome back. It’s been nine weeks into the coding period. I had a meeting with Jason on the 17th of this month. He was attending the code sprints at SciPy, and I was very glad to meet other SymPy developers.
That’s all for this week. Cheers !!
Happy Coding.
Last week I did not end up writing a blog post, so I am combining that week’s post with this week’s. Last week I attended the SciPy 2016 conference and was able to meet my mentor, and many other contributors to SymPy, in person. I was also able to help out with the PyDy tutorial. During this time at the conference (and this current week) I was able to flesh out the remaining details on the different portions of the project. I have updated PR #353 to reflect the API decisions for SymbolicSystem (previously eombase.EOM).
In line with trying to put the finishing touches on implementation details before diving in to code, Jason and I met with someone who has actually implemented the algorithm in the past to help us with details surrounding Featherstone’s method. He also pointed me to a different description of the same algorithm that may be easier to implement.
This week I also worked on rewriting the docstrings in physics/mechanics/body.py because I found the docstrings currently there to be somewhat confusing. I also did a review on one of Jason’s PRs where he reduces the amount of work that *method.rhs() has to do when inverting the mass matrix, by pulling out the kinematical information before the inversion takes place.
With the work these past two weeks being focused on planning the different parts of the project, I will start implementing these various parts next week. I will first work on finishing off the SymbolicSystem object and then move towards implementing the OrderNMethod. This work should be very straightforward with all the work that has been put into planning the APIs.
After completing the work on gf_factor
, I moved on to implement the Gathen-Shoup factorization algorithm. Like Zassenhaus’s algorithm, it is also a probabilistic algorithm.
The paper is available here.
First question: Why this algorithm ?
> Because it is kind of faster than Zassenhaus’s algorithm.
Second question: What is “kind of” here ?
> Well, it is faster when a specific condition is satisfied.
Its asymptotic runtime is O((n**2 + n log q) * (log n)**2 * log log n)
, where n
is degree of polynomial and q
is the field characteristics.
In “Soft O” notation, it is O~(n**2 + n log q)
. While the Cantor-Zassenhaus algorithm has O~(n**2 log q)
asymptotic runtime.
So when log q
approaches n
, the difference is remarkable.
Algorithm  | Asymptotic Runtime
-----------|-------------------
Shoup      | O~(n**2)
Zassenhaus | O~(n**3)
It also works in three parts: first square-free factorization, then distinct-degree and finally equal-degree factorization.
I have completed the implementation of this algorithm on shoup_factorization branch.
And also, I changed the container (which stores factors) type to set
.
Will send a PR soon.
Planet SymPy is made from the blogs of SymPy's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet SymPy with RSS, FOAF or OPML.