Update: I want to keep the original post intact, but I'll post any feature updates that GitHub makes here as they are released.
You can now dismiss a review.
You can now filter the pull request list by review. The review status of a PR also appears in the pull request list (albeit in very tiny and easy to miss text).
GitHub recently rolled out a new feature on their pull requests called "Reviews". The feature is, in theory, something that I've been asking for. However, it really falls short in a big way, and I wanted to write down my gripes with it, in the hopes that it spurs someone at GitHub to fix it.
If you don't know, GitHub has had, for some time, a feature called "pull requests" ("PRs"), which lets you quite nicely show the diff and commit differences between two branches before merging them. People can comment on pull requests, or on individual lines in the diff. Once an administrator feels that the pull request is "ready", they can click a button and have the branch automatically merged.
The concept seems super simple in retrospect, but this feature completely revolutionized open source software development. It really is the bread and butter of GitHub. I would argue that this one single feature has made GitHub the primary hosting site for open source software.
Aside from being an awesome idea, GitHub's trademark with pull requests, as with their other features, has been absolute simplicity of implementation. GitHub Reviews marks, by my estimation, the first major feature released by GitHub that completely and utterly lacks this simplicity of execution.
Let's look at what Reviews is. When the feature first came out, I had a hard time figuring out how it even worked (the poor release date docs didn't help here either).
Basically, at the bottom of a pull request, you now see this
Clicking the "Add your review" button takes you to the diff page (first gripe: why does it move you to the diff page?), and opens this dialog
"OK", you might think, "this is simple enough. A review is just a special comment box where I can approve or reject a pull request." (This is basically the feature that I've been wanting, the ability to approve or reject pull requests.) And if you thought that, you'd be wrong.
The simplest way I can describe a review, having played with it, is that it is a distinct method of commenting on pull requests and on lines of diffs of pull requests. Distinct, that is, from the methods that already exist in the GitHub pull requests feature. That's right. There are now two ways to comment on a pull request (or on a line in a pull request). There's the old way, which involves typing text into the box at the bottom of the main pull request page (or on a line, and then pressing "Add a single comment"), and the new way, which involves clicking a special button at the top of the diff view (and the diff view only) (or by clicking a line in the diff and clicking "Start a review").
How do these two ways of performing the extremely simple task of commenting differ from one another? In two ways. First, with the old way, when you comment on a PR (or line), the comment is made immediately. It's saved instantly to the GitHub database, and a notification email is sent to everyone subscribed to the PR. With the new way, the comment is not made immediately. Instead, you start a "review", which postpones all comments from being published until you scroll to the top and press a button ("Review changes"). Did you forget to scroll to the top and press that button? Oh well, your comments never got sent to anyone.
Now, I've been told by some people that delayed commenting is a feature that they like. I can see how fewer total emails could be nice. But if you just want a way to delay comments, why do you need distinct commenting UIs? Couldn't the same thing be achieved via a user setting (I highly suspect that any given person will either like or dislike delayed commenting universally)? Or with a checkbox next to the comment button, like "delay notifications for this comment"? You can probably guess by now which of the two commenting systems I prefer. But guess what happens when I press the Cmd-Enter keyboard shortcut that's been hardwired into my brain to submit a comment? I'll give you a hint: the result does not make me happy.
The second distinction between normal, old-fashioned commenting and the newfangled reviews system is that when you finalize a review, you can elect to "approve" or "reject" the pull request. This approval or rejection gets set as a special status on the pull request. This status, for me personally, is the only feature here that I've been wanting. It turns out, however, that it's completely broken and useless.
Here's my problem. We have, at the time of writing, 382 open pull requests in SymPy. A lot of these are old, and need to be triaged. But the problem from my point of view is the new ones. When I look through the list of pull requests, I want to be able to know, at a glance, which ones are "reviewable". For me, this means two things:

1. The tests pass.
2. No other reviewer (myself included) has already requested changes which still need to be made by the PR author.
Point 1 is really easy to see. In the pull request list, there is a nice green checkmark if Travis passed and a red X if it failed.
The second point is a disaster. Unfortunately, there's no simple way to do this. You might suggest adding a special label, like "Needs changes", to pull requests that have been reviewed. The problem with this is that the label won't go away when the changes have been made. To make matters worse, people who don't have push access (in the list above, only two PR authors have push access, and one of them is me) cannot add or remove labels on pull requests.
Another thing that has been suggested to me is an external "review" service that sets a status for a review. The problem with this (aside from the fact that I couldn't find one that actually did the very simple thing that I wanted), is that you now have to teach all your pull request reviewers to use this service. You might as well forget about it.
Having a first-class way in GitHub for reviewers to say "I approve these changes" or "I don't approve these changes" would be a huge boon, because then everyone would use it.
So great, right? This new Reviews feature is exactly what you want, you say. You can now approve or reject pull requests.
Well, no, because GitHub managed to over-engineer this feature to the point of making it useless. This completely simple feature. All they had to do was extend the status UI and add a simple "I approve/I reject" button. If they had done that, it would have worked perfectly.
Here are the problems. First, the pull request list has no indication of review status. Guess which pull requests in the above screenshot have reviews (and which are positive and which are negative). You can't tell (for example, the last one in the list has a negative review). If reviews were actually treated like statuses, as the UI suggests they are, you would at least see an X on the ones that have negative reviews (positive reviews I'm much less worried about; most people who review PRs have merge access, so if they like the PR they can just merge it). I would suggest that GitHub add, next to the status checkbox, a picture of everyone who has reviewed the PR, with a green checkmark or red X to indicate the type of review. Also, add buttons (buttons, not just buried advanced search options) to filter by reviews.
OK, so that's a minor UI annoyance, but it gets worse. Next on the docket, you can't review your own pull requests. It's not allowed for some reason.
Now why would you want to review your own pull request, you might ask? Aren't you always going to "approve" your own PR? Well, first off, no. There is such a thing as a WIP PR. The author setting a negative review on his own PR would be a great way to indicate WIP status (especially given the way reviews work, see my next gripe). Secondly, the "author" of a pull request is just the person who clicked the "New pull request" button. That's not necessarily the only person who has changes in the pull request. Thanks to the magic of how git works, it's quite easy to have a pull request with commits from many people. Multiple people pushing to a shared branch, with a matching pull request for discussion (and easy viewing of new commits and diff) is a valid and useful workflow (it's the only one I know of that works for writing collaborative prose). For the SymPy paper, I wanted to use GitHub Reviews to sign off on a common branch, but since I'm the one who started the pull request, I couldn't do it.
Next gripe, and this, I want to stress, makes the whole feature completely useless for my needs: reviews do not reset when new commits are pushed. Now, I just outlined above two use cases where you might want a review that doesn't reset (marking WIP, and marking approval, although the second is debatable), but both of those can easily be done by other means, like editing the title of the PR, or old-fashioned commenting. The whole point of Reviews (especially negative reviews), you'd think, would be to indicate to people that the pull request, as it currently stands, needs new changes. A negative review is like failing your "human" test suite.
But unlike your automated test suite, which resets and gets a new go every time you push a change (because hey, who knows, maybe the change ACTUALLY FIXED THE ISSUE), reviews do not reset, unless the original reviewers explicitly change them. So my dream of being able to glance at the pull request list and see which PRs need changes has officially been dashed. Even if the list actually showed which PRs have been reviewed, it would be a lie, because as soon as the PR author pushes a change, the review status becomes potentially outdated.
Now, given the rocky start that this whole feature has had, I figured that this was probably just a simple bug. But after I reported it to GitHub, they've informed me that this is in fact intended behavior.
To make things worse, GitHub has another feature with Reviews, called required reviews. You can make it so that every pull request must receive at least one positive review and zero negative reviews before it can be merged (go to the branch settings for your repository). This works similarly to required status checks, which make it so that your tests must pass before a PR can be merged. In practice, this means you need zero negative reviews, since anyone with push access could just do a positive review before merging (although annoyingly, you have to actually do it manually; IMHO, just requiring zero negative reviews should be sufficient, since merging is implicitly a positive review).
Now, you can see how the above "feature" of reviews not resetting breaks the whole thing. If someone leaves a negative review on a PR, that one person has to go in and change their review before it can be merged. Even if the author pushes new changes to fix the issues outlined in the review, the PR cannot be merged until the reviewer resets it. So this actually makes the reviewing situation worse, because now anyone who reviews a pull request at any point in time has to go through with it all the way to the merge. I can't go to a PR that someone requested changes for, which were later made by the author, and merge it. I have to ping the reviewer and get them to change their review first. Needless to say, we do not have this feature enabled for SymPy's repo.
I think I can maybe see the intended use case here. You want to make it so that people's reviews are not forgotten or ignored. But that's completely foreign to my own problems. I trust the SymPy community, and the people who have push access, to do due diligence before merging a pull request. And if a bad change gets in, we can revert it. Maybe this feature matters more for projects that continuously deploy. Likely most of GitHub's internal code works like that. But guess what, GitHub: most of the code on GitHub does not work like that. You need to rethink this feature to support more than just your own use cases.
I think starting simple, say, just a simple "approve/reject" button on each PR, which just adds an icon, and that's it, would have been a better approach. Then they could have listened to the feedback on what sorts of things people wanted it to be able to do (like setting a status, or showing up in the search list, or "delayed commenting" if that's really what people want). This is how GitHub used to do things. It's frustrating to see a feature implemented that doesn't (yet) do quite what you want, but it's even more frustrating to see a feature implemented that does all the things that you don't want.
Yes, I'm a little mad here. I hope you enjoyed my rant. Here are what I see as the problems with the "Reviews" feature. I don't know how to fix these problems (I'm not a UI/UX guy. GitHub supposedly hires them, though).
There are now two distinct ways to comment on a PR (delayed and non-delayed). There should be one (say, with a checkbox to delay commenting).
If you insist on keeping delayed commenting, let me turn it off by default (where "default" means what the Cmd-Enter keyboard shortcut does).
The reviews button is buried on the diff page. I would put it under the main PR comment box, and just reuse the same comment box.
Reviews should show up in the pull request list. They should be filterable with a nice UI.
Let me review my own pull requests. These can be excluded from required reviews (that makes sense to me). Beyond that, there's no reason this shouldn't be allowed.
Don't require a positive review for required reviews, only zero negative reviews. Merging a PR is implicitly positively reviewing it.
Allow reviews to reset when new commits are pushed.
I get that the last point may not be what everyone wants. But GitHub needs to think about UI, and defaults here. Right now, the UI looks like reviews are like statuses, but they actually aren't because of this.
I am dispirited to see GitHub release such a broken feature, but even the best trip up sometimes. I'm not yet calling "doom" on GitHub. Everyone has their hockey puck mice. I'm actually hopeful that they can fix these issues, and implement a feature that makes real headway into helping me solve one of my biggest problems on GitHub right now, the reviewing of pull requests.
Today, David Beazley made some tweets:
At the bookstore. I'll admit I judge Python books by their tuple description. "Read only list?" Back on the shelf. Nope.
— David Beazley (@dabeaz) September 21, 2016
Usually there's a sentence nearby "sometimes you need a read only list." No. No, I haven't. Not in 20 years. Not even once. Sorry.
— David Beazley (@dabeaz) September 21, 2016
My main objection is that "read only list" is a lazy description lacking thought. A red flag for every other topic that might be covered.
— David Beazley (@dabeaz) September 21, 2016
There are quite a few good responses to these tweets, both from David and from others (and from yours truly). I recommend reading the thread (click on the first tweet above).
Now to start off, I want to say that I respect the hell out of David Beazley. The guy literally wrote the book on Python, and he knows way more about Python than I ever will. He's also one of the most entertaining Python people you can follow on Twitter. But hey, that doesn't mean I can't disagree sometimes.
As you probably know, there are two "array" datatypes in Python, `list` and `tuple`.^{1} The primary difference between the two is that lists are mutable, that is, you can change their entries and length after they are created, with methods like `.append` or `+=`. Tuples, on the other hand, are immutable. Once you create one, you cannot change it. This makes the implementation simpler (and hence faster, although don't let anyone tell you you should use a tuple just because it's faster). This, as Ned Batchelder points out, is the only technical difference between the two.
The idea that particularly bugs me here is that tuples are primarily useful as "record" datatypes.
@AllenDowney Better than "readonlylist." ;). Mainly looking for the tupleasrecord description. That's often what's missing.
— David Beazley (@dabeaz) September 21, 2016
Tuples are awesome for records. This is both by design (since they have a fixed shape, the positions in a tuple can be "fixed" values) and by convention (if a Python programmer sees parentheses instead of square brackets, he is more likely to see the object as "record-like"). The `namedtuple` object in the standard library takes the record idea further by letting you actually name the fields:
```
>>> from collections import namedtuple
>>> person = namedtuple('Person', 'name, age')
>>> person('Aaron', 26)
Person(name='Aaron', age=26)
```
But is that really the only place you'd want to use a tuple over a list?
Consider five other places you might encounter a tuple in Python, courtesy of Allen Downey:
@dabeaz (1) tuple assignment (2) multiple return values (3) *args (4) output from zip, enumerate, etc (5) key in dictionary
— Allen Downey (@AllenDowney) September 21, 2016
In code these look like:
Multiple assignments:

```
>>> (a, b) = 1, 2
```

(yes, the parentheses are optional here, as they are in many places where a tuple can be used, but this is still a tuple, or at least it looks like one ;)
Multiple return values:

For example, `os.walk`. This is for the most part a special case of using tuples as records.
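A quick sketch of the pattern (`min_max` is a made-up helper, not a standard library function): a function that "returns two values" really returns a single tuple, which the caller can unpack.

```python
def min_max(values):
    # Returning two values actually returns one tuple.
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])   # tuple unpacking at the call site
assert (lo, hi) == (1, 5)
assert isinstance(min_max([1, 2]), tuple)
```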
`*args`:

```
>>> def f(*args):
...     print(type(args), args)
...
>>> f(1, 2, 3)
<class 'tuple'> (1, 2, 3)
```

Arbitrary positional function arguments are always stored as a tuple.
Return value from builtins `zip`, `enumerate`, etc.:

```
>>> for i in zip(range(3), 'abc'):
...     print(i)
...
(0, 'a')
(1, 'b')
(2, 'c')
>>> for i in enumerate('abc'):
...     print(i)
...
(0, 'a')
(1, 'b')
(2, 'c')
```
This also applies to the combinatoric generators in `itertools` (like `product`, `combinations`, etc.).
Dictionary keys:

```
>>> {
...     (0, 0): '.',
...     (0, 1): ' ',
...     (1, 0): '.',
...     (1, 1): ' ',
... }
{(0, 1): ' ', (1, 0): '.', (0, 0): '.', (1, 1): ' '}
```
This last one I find to be very important. You could arguably use a list for the first four of Allen Downey's points^{2} (or Python could have, if it wanted to). But it is impossible to meaningfully hash a mutable data structure in Python, and hashability is a requirement for dictionary keys.
However, be careful. Not all tuples are hashable. Tuples can contain anything, but only tuples of immutable values are hashable. Consider^{4}
```
>>> t = (1, 2, [3, 4])
>>> t[2].append(5)
>>> t
(1, 2, [3, 4, 5])
```
Such tuples are not hashable, and cannot be used as dictionary keys.
```
>>> hash(t)
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'
```
Why Is `list` the Default?

My second gripe here is this notion that your default ordered collection object in Python should be `list`, and that `tuple` is only to be used for "records", or when you suspect you might want to use it as a dictionary key. First off, you never know when you'll want something to be hashable. Both dictionary keys and `set`s require hashability. Suppose you want to deduplicate a collection of sequences. If you represent the sequences with `list`, you'll either have to write a custom loop that checks for duplicates, or manually convert them to `tuple` and throw them in a `set`. If you start with `tuple`, you don't have to worry about it (again, assuming the entries of the tuples are all hashable as well).
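A minimal sketch of that deduplication case: tuples drop straight into a set, while lists would raise a `TypeError`.

```python
# Deduplicate a collection of sequences. Because tuples are hashable,
# a set does all the work; the same data as lists could not be added to a set.
seqs = [(1, 2), (3, 4), (1, 2), (3, 4), (5, 6)]
unique = set(seqs)
assert unique == {(1, 2), (3, 4), (5, 6)}
```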
Consider another usage of tuples, which I consider to be important, namely tree structures. Say you wanted a simple representation of a Python syntax tree. You might represent `1 - 2*(-3 + 4)` as

```
('-', 1, ('*', 2, ('+', ('-', 3), 4)))
```

This isn't really a record. The meaning of the entries in the tuples is determined by the first value of the tuple, not position. In this example, the length of the tuple also signifies meaning (binary vs. unary `-`).
If this looks familiar to you, it's because this is how the language Lisp represents all programs. This is a common pattern. Dask graphs use tuples and dictionaries to represent computations. SymPy expression trees use tuples and Python classes to represent symbolic mathematical expressions.
But why use tuples over lists here? Suppose you had an object like the one above, but using lists: `['-', 1, ['*', 2, ['+', ['-', 3], 4]]]`. If you discover you need to use this as a dictionary key, or want to put it in a `set`, you would need to convert it to a hashable object. To do this you need to write a function that recursively converts each `list` to a `tuple`. See how long it takes you to write that function correctly.
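In case you want to check your answer, here is one way to do it (a sketch; `to_tuple` is my name for it, not a standard function):

```python
def to_tuple(obj):
    # Recursively convert every list, at any depth, into a tuple.
    if isinstance(obj, list):
        return tuple(to_tuple(item) for item in obj)
    return obj

tree = ['-', 1, ['*', 2, ['+', ['-', 3], 4]]]
assert to_tuple(tree) == ('-', 1, ('*', 2, ('+', ('-', 3), 4)))
_ = hash(to_tuple(tree))  # hashable now, so usable as a dict key
```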
More to the point, however, mutability is bad. I counted 12 distinct methods on `list` that mutate it (how many can you remember off the top of your head?^{3}). Any function that gets access to a list can mutate it, using any one of these methods. All it takes is for someone to forget that `+=` mutates a list (and that they should copy it first) for code completely distant from the original definition to cause issues. The hardest bug I ever debugged had a three-character fix, adding `[:]` to copy a global list that I was (accidentally) mutating. It took me a several-hour airplane ride and some deep dark magic that I'll leave for another blog post to discover the source of my problems (the problems I was having appeared to be quite distant from the actual source).
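The shape of that kind of bug, roughly (a contrived sketch with made-up names, not the actual code): a function innocently uses `+=` on a list it was handed, and a "global" far away changes.

```python
DEFAULT_FLAGS = ['--verbose']

def run_with_flags(flags, extra):
    flags += extra          # += mutates the caller's list in place!
    return flags

run_with_flags(DEFAULT_FLAGS, ['--debug'])
# The module-level default has silently changed:
assert DEFAULT_FLAGS == ['--verbose', '--debug']

DEFAULT_FLAGS2 = ['--verbose']
run_with_flags(DEFAULT_FLAGS2[:], ['--debug'])  # the three-character fix: [:]
assert DEFAULT_FLAGS2 == ['--verbose']
```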
I propose that Python code in general would be vastly improved if people used `tuple` as the default ordered collection, and only switched to `list` if mutation was necessary (it's less necessary than you think; you can always copy a tuple instead of mutating it). I agree with David Beazley that you don't "sometimes need a read only list". Rather, you "sometimes need a writable tuple".
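Copying instead of mutating looks like this (a minimal sketch): every "modification" builds a new tuple, and the original is untouched.

```python
t = (1, 2, 3)
t2 = t + (4,)               # "append" by building a new tuple
t3 = t[:1] + (99,) + t[2:]  # "setitem" the same way
assert t == (1, 2, 3)       # the original never changes
assert t2 == (1, 2, 3, 4)
assert t3 == (1, 99, 3)
```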
This makes more sense than defaulting to `list`, and only switching to `tuple` when hashability is needed, or when some weird "rule of thumb" applies that says you should use `tuple` if you have a "record". Maybe there's a good reason that `*args` and almost all builtin and standard library functions return tuples instead of lists. It's harder to accidentally break someone else's code, or have someone else accidentally break your code, when your data structures are immutable.
I want to avoid saying "a tuple is an immutable list", since "list" can be interpreted in two ways: as an English word meaning "ordered collection" (in which case the statement is true), or as the Python type `list` (in which case the statement is false; `tuple` is not a subclass of `list`). ↩
Yes,

```
>>> [a, b] = 1, 2
```

works. ↩
One of the tweets from the conversation:

@asmeurer @AllenDowney Ah yes:

```
t = (1,2, [3, 4])
t[2] += [5,6]
```

;)

— David Beazley (@dabeaz) September 21, 2016
This is similar to this example. But it turns out this one doesn't work:

```
>>> t = (1,2, [3, 4])
>>> t[2] += [5,6]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
```
I have no idea why. It seems to me that it should work. `t[2]` is a `list`, and `list` has `__iadd__` defined. It seems that Python gets kind of weird about things on the left-hand side of an assignment. EDIT: Here's why. ↩
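For what it's worth, the behavior is even stranger than the traceback suggests: the in-place `list.__iadd__` runs (and succeeds) before the tuple item assignment fails, so the list inside the tuple is mutated even though the statement raises.

```python
t = (1, 2, [3, 4])
try:
    t[2] += [5, 6]   # list.__iadd__ mutates, then the tuple assignment raises
except TypeError:
    pass
assert t == (1, 2, [3, 4, 5, 6])  # the list was mutated anyway
```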
This week I worked on the PR for factoring square-free polynomials, making many changes, such as to the computation of the size of the bound in the base p. For example, changing this:
```
auto l = std::ceil(std::log(2 * B + 1) / std::log(mp_get_ui(fsqf.first)));
```

to

```
if (mp_fits_slong_p(B)) {
    temp = 2 * B + 1;
    l = std::ceil(std::log(mp_get_d(temp))
                  / std::log(mp_get_ui(fsqf.first)));
} else {
    auto b_d = mp_get_d(mp_abs(b));
    l = std::ceil(
        (1 + std::log2(n + 1) / 2.0 + n + log2_d_A + std::log2(b_d))
        / (std::log2(mp_get_ui(fsqf.first))));
}
```
This prevented overflow.
A lot of implementation changes were also incorporated in `gf_gcdex`.
I also wrote documentation on the `Fields` module here.


Hi, I'm Gaurav Dhingra (@gxyd) and this post is a report of my work for SymPy under Google Summer of Code 2016.
This is also intended to be a description of my work for submission to GSoC's Final Evaluation.
PR for GSoC 2016 product submission: Gaurav Dhingra.
GSoC proposal
Project: Group Theory
Blog: https://gxyd.github.io/gsoc.html
My Website: https://gxyd.github.io
email: axyd0000[at]gmail.com, igauravdhingra[at]protonmail.com
I am a pre-final-year undergraduate student of Applied Mathematics at IIT Roorkee.
I heard about the GSoC program and about the SymPy organization from one of my college mates, Prasoon Shukla (he also participated in GSoC with SymPy). I was quite amazed by how mathematics and computer science could intersect and help in solving math problems.
I have been contributing to SymPy since May of last year. In the beginning I tended to work on issues related to my then ongoing college courses. In December I completed a basic course on Group Theory, so I thought of working on Group Theory, and I saw that quite a few things in CGT were missing from SymPy, even though those functionalities were already there in GAP and Magma. So I asked Kalevi about things that could be taken up for a GSoC project. After some discussions with him and Aaron, I was sure by the end of December that I was going to work on implementing Finitely Presented Groups in SymPy. I looked to the last GSoC project on CGT, by Alexandar Makelov, for reference material that could be used for the implementation, and settled on the book by Derek Holt, Handbook of Computational Group Theory (referred to below as "the Handbook").
I started working on PR #10350 for the implementation of free groups in early January, though I did not start on the proposal until the beginning of February.
So what should I do? I was quite sure of the project. At that moment I thought that, since I am a mathematics major, choosing this project would also help me. So I reached out to Kalevi, told him what I was thinking of doing and what could be good for the project, and we worked on making a good GSoC proposal.
Here is my GSoC proposal. Now that the summer is over and I've learned a lot more about computational group theory, it seems that the main points in my GSoC proposal were:
- a `Group` class.
Well, I submitted my proposal and needed to wait until April 22 for the result, and then...
I got lucky to have Kalevi Suominen as my mentor. Aaron Meurer, the project leader of SymPy, was to be co-mentor, and I felt honored to be allotted my preferred mentors.
A more detailed account of my progress can be found on my blog here.
One question early on was whether `Basic` should be a superclass of `FreeGroup` or not. In the end we decided not to derive `FreeGroup` from `Basic`. Its basic methods had already been written, most of them inspired by the GAP software. As for the other point (a `Group` class), we never worked on it. The topic of Presentation of Finite Groups also could not get much attention, because of my busy college schedule.
Since I have committed quite a few commits unrelated to my project, I decided to cherry-pick the commits made for my GSoC project. The combined commits make things quite clear: commits in GSoC 2016 product submission: Gaurav Dhingra.
There is still much to be done regarding the finitely presented groups. Since my efforts during the program were directed mainly towards getting the fundamental algorithms for finitely presented groups in, the overall structure of the module hasn't received much attention.
The algorithms are in, but the methods in sympy/combinatorics/fp_groups.py are not very user-friendly. The code organization is not terrible, I think. Some suggestions (both for me and anyone else interested) for future work are:
- Implement abelian groups using `polys.agca`, since abelian groups are the same as Z-modules. It is all essentially about linear algebra over the integers.
- Move the code to a new module `groups`, since finitely presented groups should not be part of the combinatorics module; a `groups` module would contain all the algebraic structures, including the permutation groups.
I would say that I did about 70% of the work I promised in my proposal, considering that I also did two tasks not included in it, the Low Index Subgroups algorithm and the Modified Todd-Coxeter algorithm, so I can say I swapped some of my work. It is good enough, and I hope to get back to extending the things that I have started. There's still some revision to be made and some code to be cleaned up, and I'm doing that.
I don't really know if I'll pass the final evaluation by Google, but, regardless, I'm really glad for all the coding and development experience I got during these weeks. I'll probably use it for personal projects, maybe in my dissertation in my last year, and try to contribute more to SymPy.
I appreciate the help of my mentor Kalevi Suominen, who was always there for any query I had regarding Group Theory in the project. I appreciate his ability to reply within 1 hour to any message I left, at any time of day, every day of the week including weekends (I call it the "1-hour rule"). I think he was a wonderful mentor and I learnt a lot from him. Thanks also to my co-mentor Aaron Meurer, the current project leader of SymPy, and the entire SymPy community for helping me out and reviewing my work!
Also thank you Google.
अलविदा! (Goodbye!)
This week I started work on polynomial factorization in Z.
I implemented the Zassenhaus algorithm, which factors a primitive square-free polynomial.
The basic idea behind factoring a polynomial in Z is first to make it square-free using Yun's algorithm or something similar, and then to choose a prime number p such that the polynomial mod p is square-free and p doesn't divide its leading coefficient. After that, the problem is reduced to factoring the polynomial in a finite field, which has already been implemented. Then we lift all the factors from the finite field to Z using Hensel lifting.
The number of times Hensel lifting has to be done is found using Mignotte's bound.
This also required the implementation of the extended Euclidean GCD in finite fields.
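For comparison (not the SymEngine code itself), the same end result can be checked against SymPy in Python, which implements a similar square-free decomposition, modular factorization, and Hensel lifting pipeline:

```python
from sympy import symbols, factor_list

x = symbols('x')
# Factor 9*x**2 - 1 over the integers.
content, factors = factor_list(9*x**2 - 1)
assert content == 1
assert {f for f, _ in factors} == {3*x - 1, 3*x + 1}
```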
So, after the implementation of the factorization of square-free polynomials, we were able to perform:
```
f = UIntPoly::from_vec(x, {-1_z, 0_z, 9_z});
// f = 9x**2 - 1
out = f->zz_zassenhaus();
// out is a set of RCP<const UIntPoly>
// out = {3x + 1, 3x - 1}
out_f = f->zz_factor_sqf();
// out_f is a pair<integer_class, std::set<RCP<const UIntPoly>>>
// out_f.first = 1
// out_f.second = {3x + 1, 3x - 1}
```
After this I need to implement the algorithm for square-free factorization and the trial division for finding the power of each factor in the polynomial; this way we will be able to factor polynomials. The PR is here.


Documentation for nonlinsolve:
PR #11532
Conclusion:
So the GSoC coding period is coming to an end. Here I am writing what I did, my todo list, and suggestions for future work.
One can see the progress report of each week and the minutes of meetings here: GSoC2016SolversProgress
All my PRs are linked here: sympy/pulls/shekharrajak
I have created a vertical timeline to show all the work in chronological order at this link: timeline
Meanwhile:
I found one issue, #11534, in a mailing list discussion. This shows that we need a better `invert` method for inverting nested roots/powers equations in `solveset`.
And this: #11528. `factor_list` will be helpful for factoring the equation in `solveset`, solving for its factors, and taking the union of the solutions (this idea came out during the PR #11188 discussion with Harsh). It will actually replace the `elif f.is_Mul and all(_is_finite_with_finite_vars(m, domain)` statement in `_solveset`, and `factor_list` will be used soon.
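The idea can be sketched in a few lines of SymPy (a hypothetical simplification, not the actual `_solveset` code):

```python
from sympy import symbols, factor_list, solveset, Union, S, FiniteSet

x = symbols('x')
eq = (x - 1)*(x**2 - 4)
# Factor the equation, solve each factor, and union the solutions.
_, factors = factor_list(eq)
sols = Union(*[solveset(f, x, domain=S.Reals) for f, _ in factors])
assert sols == FiniteSet(-2, 1, 2)
```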
Hello all! This is officially the last week of Google Summer of Code 2016. The submission and final evaluation have already started. Last week I was busy fixing bugs and preparing documentation for the beam module.
Jason has come up with some new ideas for the beam module:
I have planned for a long-term association with SymPy. I am taking full responsibility for my code and I will try to contribute as much as I can. I will also help new developers with the SymPy code paradigms and the development workflow.
I have been able to meet all the targets that I mentioned in my proposal, and all of my pull requests have been merged except PR 11494. It was great working with SymPy. Jason, Sartaj, Aaron, Kalevi, Isuru, and all the other SymPy members have helped me a lot. Many times I got stuck with some implementation, testing, or the Travis CI build, but every time they tried to help me fix it. My mentors have always appreciated and motivated me. I got to learn many new things about the software development workflow. Thank you, SymPy, for making this summer an awesome journey of learning and coding.
That’s all folks.
Happy Coding.
For the duration of GSoC, I worked on the SymEngine project, with a focus on writing the Cwrapper and specifically the Ruby wrapper. Major work during the project timeline included wrapping already existing features such as MPC, MPFR, arbitrary-precision evaluations, matrices, and expression parsing, as well as new features such as lambdifying expressions in Ruby and exception handling.
My proposal for GSoC 2016 can be viewed here [1].
After 13 weeks of GSoC, we’re now at the end of the summer, and it is time for me to explain and report what I did with SymEngine and its Ruby Wrappers throughout the summer.
For reporting purposes, I will take each week’s workload and indicate the status in a tabular manner, with links to the relevant PRs. Due to the nature of the project, each part usually consists of two PRs. One to the SymEngine repository [2], and the other to the SymEngine.rb [3] repository.
Week 1-2: Complex Numbers & Floating Points
- SymEngine: Completed. Wrappers for Real Double (PR#954); implementing the real_double_get_d method (PR#966); wrappers for Real MPFR and Complex MPC (PR#972); fixes #974 (PR#975).
- SymEngine.rb: Completed. Real Double class (PR#46); Ruby wrappers for Real MPFR and Complex MPC (PR#49).

Week 3: Derivatives, Substitutions, Abs
- SymEngine: Completed. Function symbol Cwrappers (PR#982).
- SymEngine.rb: Completed. Abs (PR#50); function symbol Ruby wrappers (PR#51).

Week 3: Series & Polynomials
- Skipped: the relevant parts of SymEngine's C++ library were not ready.

Week 3: Evalf
- SymEngine: Completed. Common evals method and its Cwrappers (PR#987); sign check for Numbers (PR#1027).
- SymEngine.rb: Completed. evalf Ruby wrapper (PR#55).
- Note: this was not included in the original proposal; completed to compensate for Series and Polynomials.

Week 4-6: Matrices
- SymEngine: Completed. Matrix Cwrappers (PR#992).
- SymEngine.rb: PR under review. Matrix Ruby wrappers (PR#56).

Week 7: Parser
- SymEngine: Completed. CWrapper for parser (PR#1029).
- SymEngine.rb: Completed, awaiting merge. Ruby wrapper for parser (PR#60).

Week 8: Lambdify
- SymEngine: N/A (all work was performed on the Ruby wrappers directly).
- SymEngine.rb: Completed. Lambdify as a module-level function (PR#61).

Week 9-10: Exception Handling
- SymEngine: Completed. Exception handling (PR#1044); fixing an enum issue in exceptions (PR#1069).
- SymEngine.rb: Completed. Exception handling (PR#64).
- Note: see the wiki on exception handling [4].

Week 11: Interfacing with the GMP, MPFR, and MPC gems
- Incomplete on both sides.

Week 12: Buffer period
- Allocated time to catch up with missed work.

Week 13: IRuby Notebooks
- SymEngine.rb: Awaiting merge. Changes to the beginner notebook (PR#59). Although the examples are ready and were included in SymEngine's presentation at SciPy [5], they were not merged, pending improvements to the formatting.
A number of minor contributions were also made. Full commit summaries:
https://github.com/symengine/symengine/commits?author=rajithv (106 commits, 4,316 additions, 1,928 deletions)
https://github.com/symengine/symengine.rb/commits?author=rajithv (134 commits, 3,756 additions, 1,319 deletions)
—
[1] https://docs.google.com/document/d/1HKEzqpm3yk9ax5Fs7POQaBFZFxSOjy1MXNyeB7JXBxg/edit#heading=h.t8we2dfbtvd1
[2] https://github.com/symengine/symengine/
[3] https://github.com/symengine/symengine.rb/
[4] https://github.com/symengine/symengine/wiki/SymEngineExceptions
[5] https://www.youtube.com/watch?v=03rBe2RdMt4
This is the final week of GSoC! I have the write up of the work done on a different page.
This project has been a wonderful learning experience and I have met many wonderful people over the course of this summer. Before the summer began I had never written any test code. Now, at the conclusion of the summer, I have written benchmark tests and I try to make sure the code I write has complete unit-test coverage. I feel that this is one of my biggest areas of improvement over the past 3 months. I have also learned a considerable amount about the combination of programming and dynamics. Prior to GSoC my experience with dynamics consisted solely of finding the equations of motion by hand. During GSoC I have been exposed to finding the equations of motion programmatically and using the results with an ODE integrator. I have also obtained a more in-depth knowledge of different algorithms for creating the equations of motion.
Another thing that I have enjoyed in this experience is seeing how a large programming project works. I feel that this will make me more employable in the future as well as allow me to help any other new open source community get up and running.
Lastly, one of the big highlights of my summer was my trip to Austin, TX for SciPy 2016. Being a GSoC student, I not only got to go to my first professional conference but was also able to help give one of the tutorials. I was able to meet many of the other people who work on sympy. This went a long way toward making my contributions to sympy feel less like contributions to some organization and more like working on a project with other people. It was also neat to meet the developers of other open source projects. As with my comment about sympy, I now see those projects less as being "project name" and more as the people actually working on them.
I have enjoyed my GSoC experience and if I were eligible I would not hesitate to apply again. I would like to thank those who chose me and made this summer of learning and experiences possible.
After the last blog post, I started working on "fixing up the remainder of rational polynomials". This also included adding support for basic-to-rational polynomial conversions. The current work is in #1055, which has not been merged yet.
Another thing I started working on was conversions from SymEngine symbolics to multivariate polynomials. The way the current conversions work led me to change the overall structure of multivariate polynomials.
Basic conversions currently work by constructing the "internal container" of the polynomial as the expression is parsed. These containers have overloaded operators and can easily be manipulated by the functions. Handling multivariate polynomials using this approach would be impossible with the current structure of the code; there were no "containers" per se in the previous implementation.
I thought it would be better to implement a container-based polynomial structure for multivariate polynomials too. Fortunately, given the way the SymEngine polynomials work, an approach similar to the univariate container case worked. The container is a wrapper around a map from an exponent vector to a value (the coefficient):
template <typename Vec, typename Value, typename Wrapper>
class UDictWrapper
{
public:
    using Dict = std::unordered_map<Vec, Value, vec_hash<Vec>>;
    Dict dict_;
    unsigned int vec_size;
    ...
The polynomial class uses this as a container, along with information about the generators and their ordering. Apart from holding the map, this class has various constructors and all the operators overloaded for it. It also has helper functions, which help translate the current keys to another specific order/arrangement. The container assumes that all operations done on it are with other "adjusted" containers. Currently, the job of adjusting/aligning the container has been delegated to functions like add_mpoly etc.
template <typename Poly>
RCP<const Poly> add_mpoly(const Poly &a, const Poly &b)
{
    typename Poly::container_type x, y;
    set_basic s = get_translated_container(x, y, a, b);
    x += y;
    return Poly::from_container(s, std::move(x));
}
and
template <typename Poly, typename Container>
set_basic get_translated_container(Container &x, Container &y,
                                   const Poly &a, const Poly &b)
{
    vec_uint v1, v2;
    set_basic s;
    unsigned int sz = reconcile(v1, v2, s, a.vars_, b.vars_);
    x = a.poly_.translate(v1, sz);
    y = b.poly_.translate(v2, sz);
    return s;
}
Basically, both the containers get the ordering of generators in the “new” polynomial which will soon be constructed, and adjust the ordering of the vectors inside their maps accordingly. Thus consistency is maintained. All the work done is in #1049 and this has been merged in.
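The translation step can be sketched with plain Python dicts (function names hypothetical; this is an illustration of the idea, not the SymEngine code): each polynomial maps exponent vectors to coefficients over its own generator list, and before adding we re-index both onto the union of the generators.

```python
def translate(poly, old_gens, new_gens):
    """Re-index exponent vectors from old_gens onto new_gens (a superset)."""
    pos = [new_gens.index(g) for g in old_gens]
    out = {}
    for vec, coeff in poly.items():
        new_vec = [0] * len(new_gens)
        for i, e in zip(pos, vec):
            new_vec[i] = e
        out[tuple(new_vec)] = coeff
    return out

def add_mpoly(p1, gens1, p2, gens2):
    # the "new" polynomial lives over the union of both generator sets
    gens = sorted(set(gens1) | set(gens2))
    a = translate(p1, gens1, gens)
    b = translate(p2, gens2, gens)
    for vec, coeff in b.items():
        a[vec] = a.get(vec, 0) + coeff
    return gens, a

# (x + 1) over ['x'] plus y over ['y'] gives x + y + 1 over ['x', 'y']
gens, s = add_mpoly({(1,): 1, (0,): 1}, ['x'], {(1,): 1}, ['y'])
```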
After getting this restructure done, I started work on the basic to multivariate conversions, though I have not been able to cover it completely. The idea for these conversions remains the same and should be easy to implement.
I also could not spend time integrating wrappers for Piranha polynomials, which can be used as multivariate polynomials within SymEngine.
This is officially the last week of Google Summer of Code '16. Although GSoC is over, I plan to get the above-mentioned work finished within August, before any other big undertaking. It has been a wonderful journey of insight and knowledge. Thanks a lot to my mentors Isuru, Ondrej, and Sumith for helping me throughout the summer, and for always being there for discussions and the silly issues I bombarded them with!
Bye!
nonlinsolve continued:
PR: #11111
A few things should be improved in nonlinsolve:
- If the system of equations contains trigonometric functions, nonlinsolve sometimes fails, because solve_trig in solveset is not good enough, and nonlinsolve has to identify the intersection solution when we have 2*n*pi + pi/2 and n*pi + pi/2 (and similar cases). Right now it returns one of them.
- The substitution function, which solves the system of equations using the substitution method, needs a better way to handle imageset (complex) solutions.
- The code quality of substitution should be improved.
Continuing Simplified Trig soln
PR #11188
The imageset union is generalized and now handles basically these cases:
# img1 = ImageSet(Lambda(n, a*n + b), S.Integers)
# img2 = ImageSet(Lambda(n, c*n + d), S.Integers)
In [1]: img1 = ImageSet(Lambda(n, 4*n + 4), S.Integers)
In [2]: img2 = ImageSet(Lambda(n, 4*n), S.Integers)
# when a == c and (b - d) == a, the answer is img2
In [3]: Union(img1, img2)
Out[3]: {4⋅n | n ∊ ℤ}
# ----
In [4]: img1 = ImageSet(Lambda(n, 4*n + 2), S.Integers)
# when a == c and (b - d) == a/2, the combined set has value a/2 * n
In [5]: Union(img1, img2)
Out[5]: {2⋅n | n ∊ ℤ}
# ----
# 5*n + 5/4 ==> 5*n + 1 + 1/4
# 5*n + 1 + 1/4 is in n + 1/4
# check using img1.issuperset(img2) == True, so img1 is the answer
img1 = ImageSet(Lambda(n, n + S(1)/4), S.Integers)
img2 = ImageSet(Lambda(n, 5*n + S(5)/4), S.Integers)
In [4]: Union(img1, img2)
Out[4]: {n + 1/4 | n ∊ ℤ}
# ----
# img1.issuperset(img2) is False, so no change
img1 = ImageSet(Lambda(n, 2*n + S(1)/4), S.Integers)
img2 = ImageSet(Lambda(n, 5*n + S(5)/4), S.Integers)
In [5]: Union(img1, img2)
Out[5]: {2⋅n + 1/4 | n ∊ ℤ} ∪ {5⋅n + 5/4 | n ∊ ℤ}
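The superset check for linear image sets over the integers reduces to plain arithmetic: {a*n + b : n ∈ ℤ} contains {c*n + d : n ∈ ℤ} exactly when a divides both c and d - b. A sketch of that test (function name hypothetical; assumes a, c positive):

```python
def linear_imageset_contains(a, b, c, d):
    """Does {a*n + b : n in Z} contain {c*n + d : n in Z}?  Assumes a, c > 0."""
    # c*n + d lies in {a*m + b} for every n  iff  c*n + (d - b) == 0 (mod a)
    # for all n, i.e. a | c and a | (d - b)
    return c % a == 0 and (d - b) % a == 0

# {n + 1/4} ⊇ {5*n + 5/4}: clear denominators (scale by 4) -> a=4, b=1, c=20, d=5
print(linear_imageset_contains(4, 1, 20, 5))   # True
# {2*n + 1/4} does not contain {5*n + 5/4}: scaled -> a=8, b=1, c=20, d=5
print(linear_imageset_contains(8, 1, 20, 5))   # False
```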
Meanwhile: factor_list fails in some cases, so I opened an issue: https://github.com/sympy/sympy/issues/11528
continue..
This week I spent a lot of time working on FeatherstonesMethod and its component parts. I started off by moving a bunch of spatial-vector functions from another PR I have to the Featherstone PR and used some of those functions to calculate the spatial inertia of Body objects. The next thing I worked on was completely rewriting the internals of the joint code. The joints now consist of 4 reference frames and points (one set at each of the bodies' mass centers and one set per body at the joint location).
After this I ran some basic code that used these new features and kept making changes until the code was able to run without producing errors. I used the same method of work with FeatherstonesMethod, and now it too is able to run without producing errors. Now that the code runs, it is time to make sure that the output is correct, which is a lot more involved than the previous step of work. To begin, I solved for the spatial inertia by hand and used this calculation to create test code for Body.spatial_inertia. As expected, the code was initially completely incorrect, but it now passes the test. I have since been working on the tests for the joint code. Since this code is completely new to the sympy repository, it takes a lot more planning than the body test did. Also, I need to solve the kinematics by hand for the joints so that I have a baseline for the test code. This is where I currently am in the process.
Also this week I addressed review comments on SymbolicSystem and have moved that PR closer to being able to merge. One of the current hang-ups is trying to force Sphinx to autodocument the __init__ method. I think the best solution currently is to move the relevant code back to the main docstring for the class and not worry about trying to document the __init__ method.
While working on rewriting the joint code I came across a bug in frame.py and have written up a fix for it, along with a test to make sure the fix works.
Lastly I reviewed a PR that adds a docstring to a method that did not yet have a docstring. The PR had some information in it that was incorrect and after some research I was able to make some suggestions for its implementation.
Next week is the last full week of GSoC and my main priority is getting the final evaluation information correctly finished so that the work can be processed correctly. My next goal is to make sure SymbolicSystem gets merged into SymPy. This is not entirely in my hands, however, as I will have to wait for feedback, and so while waiting I will be pulling off different parts of FeatherstonesMethod into separate PRs at the recommendation of my advisor. I hope to include these separate PRs in my final evaluation.
This week I worked on a little refactoring of the Fields module, like moving function implementations from the .h file to the .cpp file in this PR. I added a from_uintpoly method, which allows us to create a GaloisField object from a UIntPoly object, in this PR. After that I worked on the implementation of a diff method for GaloisField in this PR.
Then I worked on the Logic module. It is needed for the implementation of the Intersection class in the Sets module. It has implementations of the logical And, Or, and Not operations.
Both the Or and And classes have a private variable named container_, which is a set of RCP<const Boolean>. It stores the Boolean objects to which the And and Or operators can't be applied using the known rules.
Not also has one private variable, an RCP<const Boolean> object, which stores the Boolean object to which we were not able to apply the not operator using the current rules.
Then we have three functions:
RCP<const Boolean> logical_and(const set_boolean &s);
RCP<const Boolean> logical_or(const set_boolean &s);
RCP<const Boolean> logical_not(const RCP<const Boolean> &s);
These are used to do the respective operation on the operands supplied.
Talking a little about implementation details:
RCP<const Boolean> logical_and(const set_boolean &s)
{
    return and_or<And>(s, false);
}

RCP<const Boolean> logical_or(const set_boolean &s)
{
    return and_or<Or>(s, true);
}
And the function and_or does the required operations.
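The container_ design can be sketched in Python with toy classes (hypothetical names, not the SymEngine API): constants and nested operands are folded away first, and whatever can't be reduced further lands in the container_ set.

```python
class And:
    """Operands that could not be reduced further, kept in a frozenset."""
    def __init__(self, args):
        self.container_ = frozenset(args)

    def __eq__(self, other):
        return isinstance(other, And) and self.container_ == other.container_

def logical_and(operands):
    flat = set()
    for b in operands:
        if b is True:             # identity element for And
            continue
        if b is False:            # absorbing element
            return False
        if isinstance(b, And):    # flatten nested Ands into one container
            flat |= b.container_
        else:
            flat.add(b)
    if not flat:
        return True
    if len(flat) == 1:
        return next(iter(flat))
    return And(flat)

print(logical_and(['p', True, 'q']) == And({'p', 'q'}))  # True
print(logical_and(['p', False]))                          # False
```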
After this PR gets merged, I will start working on the Intersection class.
Hi all, here's a brief summary of the 12th week of my GSoC:
Last week I uploaded the test-coverage files to my website, which revealed some interesting places where a few versions of the scan routine in coset enumeration have untested if/elif/else cases.
As we are now approaching the end of the GSoC period, we decided to do some testing with some of the examples from the 1973cdhw paper [2]. Coset enumeration got my attention again since:
There seemed to be one bug raising a TypeError, resulting from coset enumeration by the coset-table based method, so I opened issue sympy/sympy#11449. From the beginning it was clear that the issue was not in the compress() method. It was quite difficult for me to get to the main source of the problem. But then Kalevi had a closer look at the pseudocode in Derek Holt's book and also in Sims' Finitely Presented Groups.
I wrote docstrings for a few of the methods and fixed issue #11449 in PR #11460.
The problem in the code is explained briefly below.
Previous code snippet:

i = 0
while i < len(C.omega):
    alpha = C.omega[i]
    i += 1
    for x in C.A:
        if C.table[alpha][C.A_dict[x]] is None:
            C.define_f(alpha, x)
            C.process_deductions(R_c_list[C.A_dict[x]], R_c_list[C.A_dict_inv[x]])

After code snippet:

while alpha < len(C.table):
    if C.p[alpha] == alpha:
        for x in C.A:
            if C.p[alpha] != alpha:
                break
            if C.table[alpha][C.A_dict[x]] is None:
                C.define_c(alpha, x)
                C.process_deductions(R_c_list[C.A_dict[x]], R_c_list[C.A_dict_inv[x]])
Here $\alpha$ runs while $\alpha \lt$ len(C.table). This way all elements of $C.\Omega$ are tested, even in the case that the set becomes very small. The inner for-$x$ loop should also test $p[\alpha]$ at each round and break if that becomes different from $\alpha$.
The changes addressed in PR #11460 also include changing the file name free_group.py to free_groups.py, similar to the naming we have elsewhere.
It seems that Presentation of Permutation Groups won't happen during GSoC, since there's just one more week; instead, I plan to focus on improving and completing the current PRs: #11361 on the modified Todd-Coxeter algorithm and #11460 on the addition of docstrings and better user methods.
One more thing that I will start this week, though it may not be completed this week, is the Sphinx documentation of finitely presented groups. I found the documentation of the polys module by Kalevi very readable and interesting; maybe I can seek to follow that.
This week I primarily worked on some bugs and added singular initial conditions to the result of expr_to_holonomic().
Firstly I fixed a bug in .unify(). Some errors were being raised in it when one of the holonomic functions had ground domain RR or an extension of it. This now works fine.
In [9]: expr_to_holonomic(1.4*x)*expr_to_holonomic(a*x, x)
Out[9]: HolonomicFunction((-2.0) + (1.0*x)*Dx, x, 0, {2: [1.4*a]})

In [10]: _9.to_expr()
Out[10]:
       2
1.4⋅a⋅x
Later I fixed a bug in converting the two types of initial conditions into one another. Apparently I forgot to add the factorial term while converting.
After that I added singular initial conditions to the result of expr_to_holonomic() whenever the indicial equation has one root. For example:
In [14]: expr_to_holonomic(x*exp(x))
Out[14]: HolonomicFunction((-x - 1) + (x)*Dx, x, 0, {1: [1]})

In [15]: _.to_expr()
Out[15]:
   x
x⋅ℯ
I also changed the printing of the HolonomicFunction class to include the initial conditions inside the call, so as to make it proper Python. These are implemented in #11480 and #11451.
Right now we are trying to find a way to include convergence conditions in the result while converting Holonomic Functions to expressions.
eliminate() continued:
PR: #11485
Regarding issue #2720: it is something similar to the eliminate function in Wolfram Mathematica.
Right now it works for the real domain and does not consider ImageSet, Intersection, or Complement. If it finds an ImageSet, Intersection, or Complement for the symbol to be eliminated, it just raises NotImplementedError (in the near future these can be implemented).
It can take N equations and M symbols to be eliminated. It returns a FiniteSet of equations which don't contain those symbols.
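A toy version of the substitution idea behind eliminate() (a sketch only, not the PR's implementation; the function name is hypothetical): solve one equation for the unwanted symbol and substitute into the rest.

```python
from sympy import symbols, solve

x, y, z = symbols('x y z')

def eliminate_one(eqs, sym):
    """Eliminate sym from a list of expressions (each understood as expr == 0)."""
    for i, eq in enumerate(eqs):
        sols = solve(eq, sym)
        if sols:
            rest = eqs[:i] + eqs[i + 1:]
            # substitute the first solution into the remaining equations
            return [e.subs(sym, sols[0]) for e in rest]
    return eqs

# eliminate y from {x + y - 3 = 0, y - z = 0}  ->  {x + z - 3 = 0}
result = eliminate_one([x + y - 3, y - z], y)
```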
Continuing Diophantine in Solveset:
PR #11234
It returns a ConditionSet when diophantine doesn't recognize the Diophantine equation type and is not able to solve it.
Idea for an improved _solve_trig in Solveset:
In [1]: solveset(sin(x) + cos(y), x, S.Reals)
Out[1]: {x | x ∊ ℝ ∧ sin(x) + cos(y) = 0}

In [2]: solve(sin(x) + cos(y), x)
Out[2]: [asin(cos(y)) + π, -asin(cos(y))]
The example above is enough to tell that _solve_trig is not using inverse trigonometric functions. We could have something which solves trig equations by isolating the symbol on the lhs and the inverse trig function on the rhs.
solveset(sin(2*x) - sqrt(3)*cos(2*x), x, S.Reals)
For this, right now _solve_trig converts it into exp form and solves it.
But it can be solved simply using the sin(x + y) == sin(x)*cos(y) + cos(x)*sin(y) formula.
First divide both sides by sqrt(a**2 + b**2), where a, b are the coefficients of sin(2*x) and cos(2*x):
sin(2*x)/2 - (sqrt(3)/2)*cos(2*x)
==> sin(2*x)*cos(pi/3) - sin(pi/3)*cos(2*x)
==> sin(2*x - pi/3)
Now sin(2*x - pi/3) is solvable using solve_decomposition.
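That rewrite can be checked directly in sympy (a quick verification sketch of the identity, not solveset code):

```python
from sympy import symbols, sin, cos, sqrt, pi, expand_trig, simplify

x = symbols('x')

expr = sin(2*x) - sqrt(3)*cos(2*x)   # a = 1, b = -sqrt(3), sqrt(a**2 + b**2) = 2
# expr/2 = sin(2*x)*cos(pi/3) - sin(pi/3)*cos(2*x) = sin(2*x - pi/3)
rewritten = 2*sin(2*x - pi/3)
print(simplify(expand_trig(expr - rewritten)))  # 0
```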
Meanwhile:
The Abs solver:

In [1]: solveset(Abs(x) - 1, x)
ValueError: Absolute values cannot be inverted in the complex domain.

In [2]: solveset(Abs(x) - (1 + I), x)
ValueError: Absolute values cannot be inverted in the complex domain.

In [3]: solveset(Abs(x) - y, x)
ValueError: Absolute values cannot be inverted in the complex domain.
In the 1st case (for the complex domain) the answer should be http://www.wolframalpha.com/input/?i=Abs(x)++(1+)+%3D0+for+x
In the 2nd case it should be EmptySet, and in the 3rd case (the general solution when domain=S.Complexes) the solution should be http://www.wolframalpha.com/input/?i=Abs(x)++y+%3D0+for+x
In general (the 3rd case) it should print ConditionSet(x, -Abs(re(y)) <= re(x) and re(x) <= Abs(re(y)) and re(y) > 0 and Eq(im(y), 0), S.Complexes).
continue..
Hi there! It's been eleven weeks into GSoC. The module for solving beam bending problems is almost ready. Now I am focusing fully on the example documentation part, because it's very important to let others know what can be done.
Let us see the capabilities:
A full-fledged documentation of tutorials and examples for this module is on its way.
That's all for now, looking forward to week 12.
Happy Coding.
This week my main work was on Featherstone's articulated body algorithm. I started by prototyping what I thought his algorithm might look like in Python code (the algorithm was pulled from chapter 7 of his book). With the passes prototyped it was apparent that I would need a full description of the kinematic tree, and so I prototyped the building of the kinematic tree from a single "base" body. I then went on to see what it would look like if the kinematic tree was built during the first pass of his articulated body algorithm and decided that keeping the two separate would result in cleaner code.
With the three passes prototyped and the kinematic tree built, I started digging into Featherstone's book to better determine the definition of each of the variables in the algorithm. While doing this I ended up reading a second source where Featherstone describes the articulated body algorithm, and it was helpful in furthering my understanding of the algorithm, as it was a condensed summary. I then compared the written version of the algorithm in his book and this article with the two MATLAB versions he has posted online and the Python version he provides a link for online. This helped me see that some terms he includes in his book he doesn't include in his code. It also helped me to see what code for the algorithm might look like.
After working on the mock up of the passes and trying to better understand them, I switched focus to the joint code that needs to be finished so that it can be used in my implementation of the articulated body algorithm. This has lead to some confusion about the design decisions that were made in the past when putting together the joint code and this is the current stage I am sitting at as I await feedback on some of my questions.
This week I also looked over a couple of documentation PRs. One was a simple matter of fixing some indentation and seems mostly ready to merge, but the second turned some docstrings into raw strings so they could add LaTeX math code. I don't know what the general stance is on the latter, but I'm of the opinion that the docstrings should be human readable, since people may actually look through the code for them or hope that help(function) provides something useful. In this case the LaTeX math code is cluttered and would be better off in .rst files, where people are only going to be reading the rendered version. On that PR I am awaiting a response from someone with sympy to see if this is indeed preferred.
Hopefully I'll receive some feedback about the joints and Featherstone's method so I can keep moving forward with these. In the meantime there are a few other bits of code I will need to complete that the algorithm uses that are not directly related to my questions. If I finish these tasks before receiving feedback I will move forward with changing the joint code as I think would be best.
This week, I had been working on the implementation of Union and Contains in this PR.
Our Set::contains function has been changed from:
virtual bool contains(const RCP<const Basic> &a) const = 0;
to
virtual RCP<const Boolean> contains(const RCP<const Basic> &a) const = 0;
And these functions:
virtual bool is_subset(const RCP<const Set> &o) const = 0;
virtual bool is_proper_subset(const RCP<const Set> &o) const = 0;
virtual bool is_superset(const RCP<const Set> &o) const = 0;
virtual bool is_proper_superset(const RCP<const Set> &o) const = 0;
have been changed to:
bool is_subset(const RCP<const Set> &o) const
{
    return eq(*this->set_intersection(o), *this);
}

bool is_proper_subset(const RCP<const Set> &o) const
{
    return not eq(*this, *o) and this->is_subset(o);
}

bool is_superset(const RCP<const Set> &o) const
{
    return o->is_subset(rcp_from_this_cast<const Set>());
}

bool is_proper_superset(const RCP<const Set> &o) const
{
    return not eq(*this, *o) and this->is_superset(o);
}
so that they depend solely on set_intersection.
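The design carries over directly to Python's built-in sets (a sketch of the idea, not SymEngine code): once intersection exists, all four predicates follow from it and equality.

```python
def is_subset(a, b):
    # A is a subset of B  iff  A ∩ B == A
    return (a & b) == a

def is_proper_subset(a, b):
    return a != b and is_subset(a, b)

def is_superset(a, b):
    return is_subset(b, a)

def is_proper_superset(a, b):
    return a != b and is_superset(a, b)

print(is_subset({1, 2}, {1, 2, 3}))       # True
print(is_proper_subset({1, 2}, {1, 2}))   # False
```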
Then SymEngine::set_union(const set_set &in, bool solve = true) has been defined, which will create a Union object from in if it is not solvable, or will perform the union operation on all of them.
Our Union class has a std::set of Set to store the different Set containers which can't be unified into one single container.
Apart from that, the set_intersection and set_union virtual functions have been restructured to handle the other Set types case by case.
Started off this week by continuing my work on #11422. I added support for singular initial conditions in multiplication and made some amendments in addition too. They can now return a holonomic function with a singular initial condition. Both input functions can have singular initial conditions, or one can have a singular initial condition and the other an ordinary one.
# one function has a singular initial condition, the other an ordinary one
In [4]: expr_to_holonomic(x) + expr_to_holonomic(sqrt(x))
Out[4]: HolonomicFunction((1/2) + (-x/2)Dx + (x**2)Dx**2, x), {1/2: [1], 1: [1]}

In [5]: _4.to_expr()
Out[5]: √x + x

In [6]: expr_to_holonomic(x) * expr_to_holonomic(sqrt(x))
Out[6]: HolonomicFunction((-3/2) + (x)Dx, x), {3/2: [1]}

In [7]: _6.to_expr()
Out[7]:
 3/2
x

# both have singular initial conditions
In [9]: expr_to_holonomic((x)**(S(1)/3)) + expr_to_holonomic(sqrt(x))
Out[9]: HolonomicFunction((1/6) + (x/6)Dx + (x**2)Dx**2, x), {1/3: [1], 1/2: [1]}

In [10]: _9.to_expr()
Out[10]:
3 ___
╲╱ x  + √x

In [11]: expr_to_holonomic((x)**(S(1)/3))*expr_to_holonomic(sqrt(x))
Out[11]: HolonomicFunction((-5/6) + (x)Dx, x), {5/6: [1]}

In [12]: _11.to_expr()
Out[12]:
 5/6
x
I found some problems in the code because these initial conditions were stored in two different attributes. So I merged them into a single attribute, added methods to check which kind is stored, and refactored the existing code accordingly.
I opened a new PR #11451, mainly focused on adding singular initial conditions to the result of .expr_to_holonomic() when necessary. At first I added it for converting polynomials. Here are some examples:
In [14]: expr_to_holonomic(3*x**3+4*x**2)
Out[14]: HolonomicFunction((-9*x - 8) + (3*x**2 + 4*x)Dx, x), {2: [4, 3]}

In [15]: _14.to_expr()
Out[15]:
 2
x ⋅(3⋅x + 4)

In [16]: expr_to_holonomic(x)
Out[16]: HolonomicFunction((-1) + (x)Dx, x), {1: [1]}

In [17]: _16.to_expr()
Out[17]: x
I also found a bug in .to_hyper() when the recurrence relation has order 0, and added a fix for it too. Earlier the output also considered negative exponents, which weren't needed.
# previously
In [18]: expr_to_holonomic(y*x, x).integrate(x).to_expr()
Out[18]:
 2
x ⋅y
──── + C₁
 2

# after the fix
In [19]: expr_to_holonomic(y*x, x).integrate(x).to_expr()
Out[19]:
 2
x ⋅y
────
 2
What next:
I hope to add singular initial conditions for more types of functions in .expr_to_holonomic().
After college started, work has been slower than usual. Last week I finally wound up rational polynomials in #1028. Some minor changes still remain in some parts of the code. Most of the changes left are mainly related to code duplication, and will be fixed by writing templated versions.
I started off by reading and understanding how the multivariate class is currently implemented in SymEngine. It was developed by the UC Davis team as part of their course project. The basic idea is pretty simple. It is still a sparse representation, mapping from a vector (representing the powers of each generator in the monomial) to the nonzero coefficient multiplying it. We use an unordered map instead of the ordered map used in the univariate counterpart. The choice does make some sense, as deciding an order between two monomials is subjective (e.g., which is larger, x**2*y or x*y**2?). Even if we define a custom ordering, there are no real benefits I could think of that it would provide.
On a side note, it kind of makes me think about why we have stuck to the ordered-map implementation for our univariate polynomials. The places it offers benefits are eval and mul (in some ways). I worked with ordered maps, as that was the implementation the initial polynomial class was built on. I'm also wondering if it is healthy to use two different implementations for the two polynomial types, univariate and multivariate. This needs to be discussed.
Right now, I'm basically refactoring most of the code written in the multivariate class, so that it more or less matches the API of the univariate case. Some functions have been rewritten and some unnecessary storage within the polynomials has been removed. Initially, I thought I would stick with the same container-based approach I used in the univariate case. This proved to be non-trivial for the multivariate case. So, after some discussion with Isuru, we decided to stick with different implementations for SymEngine polynomials and Piranha polynomials, as combining the code was not easy. The current work is in #1049.
A lot is on my plate right now. Some of the work I plan to finish by the end of the next two weeks are
Finish up the refactoring of multivariate polynomials, and make their interaction with univariate polynomials as seamless as possible
Introduce wrappers for piranha polynomials to be used as multivariate polynomials within SymEngine
Fix up the remainder of rational polynomials, and template code wherever possible to remove code duplicates
Write conversions from Basic
to multivariate polynomials, which will finish up one part of the coercion framework as proposed by Isuru
Off to work!
eliminate() continue:
issue #2720 : We need some kind of eliminate function, like http://reference.wolfram.com/mathematica/ref/Eliminate.html. See also http://stackoverflow.com/q/20826969/161801
I am trying to use subs and _invert to get the answer. Maybe one can use replace, xreplace, or match to eliminate some kinds of identical subexpressions.
The answer can be an Imageset, Finiteset, Complement, or Intersection when we use _invert, so there should be a technique to handle these.
Still need some good ideas and techniques. WIP.
The output of solveset should be of one type:
Amit discussed this. The solution we see from solveset should be one type of set. Right now we may get a solution as an Imageset, Finiteset, Complement, Intersection, or ConditionSet, so it would be a problem for the user to handle this many solution types.
I think there should be something that separates the Complements, Intersections, and ConditionSets from the main solution in a Finiteset.
E.g. if the solveset solution is Intersection(Complement(FiniteSet(x), {y}), {z}), then:
soln: FiniteSet(x), with x != {y} and {x} intersect {z}.
Continuing Simplified Trig soln
PR #11188
According to Harsh's comments/review I modified the PR. Now it seems it returns a more simplified solution (one case is here).
To understand the changes I made in the _solve_trig method, one should check this gist.
To see the advantage of the imageset union, one good example is in this gist.
continue
Hello, guys. Welcome back. It's been ten weeks into the coding period. I had a meeting with Jason on the 25th of this month. We discussed many new changes to the API that I had implemented before this week. Now the beam bending module is almost ready to solve beam bending problems.
Let us see how to solve a beam bending problem using this module.
Problem Statement:
The deflection is restricted at the end of the beam.
Solution:
>>> from sympy.physics.continuum_mechanics.beam import Beam
>>> from sympy import Symbol, Piecewise
>>> x = Symbol('x')
>>> E = Symbol('E')
>>> I = Symbol('I')
>>> b = Beam(4, E, I)
>>> b.apply_load(value=-9, start=4, order=1)
>>> b.apply_load(value=3, start=0, order=1)
>>> b.apply_load(order=0, start=2, value=6)
>>> b.bc_deflection = [(4, 0)]
>>> b.boundary_conditions
{'deflection': [(4, 0)], 'moment': [], 'slope': []}
>>> b.load
3*SingularityFunction(x, 0, 1) + 6*SingularityFunction(x, 2, 0) - 9*SingularityFunction(x, 4, 1)
>>> b.shear_force()
3*SingularityFunction(x, 0, 0) + 6*SingularityFunction(x, 2, 1) - 9*SingularityFunction(x, 4, 0)
>>> b.bending_moment()
3*SingularityFunction(x, 0, 1) - 3*SingularityFunction(x, 2, 2) + 9*SingularityFunction(x, 4, 1)
>>> b.slope()
(3*SingularityFunction(x, 0, 2)/2 - SingularityFunction(x, 2, 3) + 9*SingularityFunction(x, 4, 2)/2 - 7)/(E*I)
>>> b.deflection()
(-7*x + SingularityFunction(x, 0, 3)/2 - SingularityFunction(x, 2, 4)/4 + 3*SingularityFunction(x, 4, 3)/2)/(E*I)
If the user wants to represent the deflection in the piecewise form, then:
>>> b.deflection().rewrite(Piecewise)
(-7*x + Piecewise((x**3, x > 0), (0, True))/2
+ 3*Piecewise(((x - 4)**3, x - 4 > 0), (0, True))/2
- Piecewise(((x - 2)**4, x - 2 > 0), (0, True))/4)/(E*I)
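As a quick numerical sanity check of the output above (a plain-Python sketch; the helper `sf` below is a hypothetical stand-in for SingularityFunction, not part of the module), the deflection boundary condition at x = 4 can be verified by direct evaluation:

```python
def sf(x, a, n):
    """Singularity function <x - a>**n: zero until x exceeds a."""
    return (x - a) ** n if x > a else 0.0

def deflection(x, E=1.0, I=1.0):
    # The deflection reported by b.deflection() above, with E = I = 1.
    return (-7 * x + sf(x, 0, 3) / 2 - sf(x, 2, 4) / 4
            + 3 * sf(x, 4, 3) / 2) / (E * I)

# The boundary condition b.bc_deflection = [(4, 0)] should hold:
print(abs(deflection(4)) < 1e-12)  # True
```

Evaluating at x = 4 gives -28 + 32 - 4 + 0 = 0, matching the restriction on deflection at the end of the beam.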
That’s all for this week. Cheers !!
Happy Coding.
Somehow I think I was off by a week. I think last week’s blog post covers weeks
9 and 10 and this week’s covers week 11. This week I created a full draft of
all components of the SymbolicSystem class that will take the place of an
equations of motion generator “base class” that was discussed in my project
proposal. I began by creating all of the docstrings for the class followed by
the test code. With the documentation and test code written it was a simple
matter to finish off the code for the class itself. Lastly I added
documentation to two places in sympy, one place contains the autogenerated
documentation from the docstrings and the other place I adapted an example from
pydy to show how to use the new class.
After working on SymbolicSystem, I decided to try to finish off an old PR of
mine regarding the init_printing code that Jason and I had discussed at Scipy.
The idea was to build separate dictionaries to pass to the different printers
in ipython based on the parameters that the specific printers take. The idea
was to find this information using inspect.getargs(). The problem arose when
trying to implement this solution because each separate printer has an expr
argument and a **settings argument, and the different possible parameters are
processed internally by the printer. This means that there would not be an
elegant way to build dictionaries for each printer.
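The obstacle can be reproduced with a toy printer (a minimal sketch; ToyPrinter and its doprint method are hypothetical stand-ins, not SymPy's actual printer classes): introspection only reports expr plus the catch-all **settings, so the individually accepted options stay invisible:

```python
import inspect

class ToyPrinter:
    # Real printers accept many options, but they all arrive via **settings
    # and are validated internally, so introspection cannot enumerate them.
    def doprint(self, expr, **settings):
        use_unicode = settings.get("use_unicode", True)
        return f"{expr!r} (unicode={use_unicode})"

spec = inspect.getfullargspec(ToyPrinter.doprint)
print(spec.args)   # ['self', 'expr']
print(spec.varkw)  # settings
```

The per-option parameters never show up in spec.args, which is exactly why building a tailored dictionary per printer has no elegant solution.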
The next thing I worked on this week was looking into Jain’s version of the order(N) method, as suggested last week. When I started looking over his book, however, I found that it uses a rather different notation than Featherstone and has some additional terms. I have decided to move forward with Featherstone’s method, since the summer is coming to an end and I am already familiar with his version of the method. To that end I reread the first part of chapter 7 in Featherstone’s book, where he discusses the articulated body method.
I reviewed two PRs this week. This work was rather quick as they were simply documentation additions. I verified that the method docstrings matched what the methods actually do and that the module docstrings included the different functions present in the file. Determining that they were correct, I gave the +1 to merge, and they have both since been merged.
The plan for next week is to focus entirely on the order(N) articulated body method of forming the equations of motion. I plan on writing the three passes for the method as if I already have all of the information and methods I need to make it work. I expect this to be the best way to determine what additional code I will need, in addition to finding my weak points in how well I understand the method. Once I have a skeleton of how the algorithm is supposed to work, I will stop working directly on the algorithm itself and start working on the peripheral code, such as the joints and body code or spatial vector processing methods.
Last week I implemented Shoup’s algorithm in this PR. During the review we came to realise that it is better to use unsigned int instead of integer_class because we didn’t need big numbers.
Working on the same PR, I found a bug in negate and similar functions where we were doing:
for (auto &a : dict_) {
    a *= -1;
    a += modulo_;
}
Here, if dict_ is [0, 0, 10], it will negate to [11, 11, 1] in GF(11). So a check needed to be added for when the value is 0. The method was changed to:
for (auto &a : dict_) {
    a *= -1;
    if (a != 0_z)
        a += modulo_;
}
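The fix is easy to check in isolation. Here is a plain-Python model of the same negation over GF(11) (a sketch; gf_neg is my name for it, not SymEngine's):

```python
def gf_neg(coeffs, p):
    """Negate a polynomial's coefficient list in GF(p)."""
    out = []
    for a in coeffs:
        a = -a
        if a != 0:  # without this check, 0 would wrongly become p
            a += p
        out.append(a)
    return out

print(gf_neg([0, 0, 10], 11))  # [0, 0, 1], not the buggy [11, 11, 1]
```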
Along with it I was working on the gf_factor PR, and it eventually got merged. It introduced Zassenhaus’s algorithm and the gf_factor() function. In the coming days, we will have to change gf_factor to switch between gf_zassenhaus and gf_shoup according to the degree of the polynomial.
We made one more design change: the factor container was changed from std::pair&lt;GaloisFieldDict, integer_class&gt; to std::pair&lt;GaloisFieldDict, unsigned&gt;, because we didn’t need large numbers as powers.
Then I started working on changing the base class of the GaloisField class from UPolyBase to UIntPolyBase. This needed implementations of the eval and multi_eval methods; then I implemented the iterator for the GaloisField class and the pow method. The PR is under review.
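For the eval method, evaluating a dense GF(p) coefficient list is just Horner's rule with a modular reduction at each step. A plain-Python sketch (the name gf_eval and the lowest-degree-first layout are my assumptions, not SymEngine's actual code):

```python
def gf_eval(coeffs, x, p):
    """Evaluate a GF(p) polynomial (lowest-degree coefficient first) at x."""
    result = 0
    for c in reversed(coeffs):  # Horner's rule, highest degree first
        result = (result * x + c) % p
    return result

# 2 + 3*x + x**2 over GF(11) at x = 4: (2 + 12 + 16) % 11 = 30 % 11 = 8
print(gf_eval([2, 3, 1], 4, 11))  # 8
```

A multi_eval would simply map this over a list of points.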
I couldn’t write a blog post last week so including progress of week 8 and 9 both here.
I continued working on PR #11360. We added functionality to store a different type of initial condition for regular singular points, other than the usual [y(0), y'(0), ...]. The exact format is described here in master, though it is changed to a more elegant form in #11422. This type of initial condition provides more information at regular singular points and is helpful in converting to expressions. Examples of how to use it:
In [22]: expr_to_holonomic(sin(x)/x**2, singular_ics={1: [1, 0, 1]}).to_expr()
Out[22]: sin(x)/x**2
I also added a method to compute this type of initial condition for algebraic functions of the form P^r, for some polynomial P and rational number r.
In [25]: expr_to_holonomic(sqrt(x**2+x))
Out[25]: HolonomicFunction((-x - 1/2) + (x**2 + x)Dx, x), {1/2: [1]}

In [26]: _25.to_expr()
Out[26]: sqrt(x)*sqrt(x + 1)
After that I made some changes in `to_meijerg()` to return the polynomial itself if the `meijerg` function represents a polynomial instead of raising `NotImplementedError`.
In [40]: expr_to_holonomic(4*x**3 + 2*x**2, lenics=3).to_meijerg().expand()
Out[40]: 4*x**3 + 2*x**2
I also added code to return the general solution in `_frobenius()` if none of the roots of indicial equation differ by an integer.
I wasn’t able to do much this week because my college started. I travelled back and had some college related stuff to do.
I opened #11422 and first added a basic method to determine the domain for polynomial coefficients in the differential equation.
In [77]: expr_to_holonomic(sqrt(y*x+z), x=x, lenics=2).to_expr()
Out[77]: sqrt(x*y + z)

In [78]: expr_to_holonomic(1.1329138213*x)
Out[78]: HolonomicFunction((-1.1329138213) + (1.1329138213*x)Dx, x), f(0) = 0
Then I added support for the new type of initial condition on regular singular points in .integrate().
In [83]: expr_to_holonomic(sin(x)/x**3, singular_ics={2: [1, 0, 1]}).integrate(x).to_expr()
Out[83]: -(x**2*Si(x) + x*cos(x) + sin(x))/(2*x**2)
Also added support for the same in addition.
In [6]: expr_to_holonomic(sqrt(x)) + expr_to_holonomic(sqrt(2*x))
Out[6]: HolonomicFunction((-1/2) + (x)Dx, x), {1/2: [1 + sqrt(2)]}

In [7]: _6.to_expr()
Out[7]: sqrt(x)*(1 + sqrt(2))
I plan to continue my work on this PR and add more support for this initial condition.
In the last blog post I reported that the lambdify function was fully wrapped. Yes, that’s what I thought at the time! But it did indeed drag on quite a bit, requiring many changes, which were very educational for me in terms of how Ruby looks at user experience. Several changes were made, including structural changes to calling lambdify and support for older Ruby versions. The really interesting and long discussions on this can be viewed in PR 61.
Apart from that, I simultaneously started reading and exchanging ideas on exception handling. It was agreed that the C wrappers should return an error code, which can be accessed from the Ruby wrapper, which in turn can raise a Ruby exception. The preliminary model can be seen in PR 1044 in SymEngine and PR 64 in the SymEngine Ruby wrapper.
Right now any exception is caught and sent to the Ruby wrapper with an error code of 1, which raises a generic RuntimeError in Ruby. Although not very informative, this helps prevent crashing the Ruby runtime.
To illustrate, here is what running the following code (a division by zero) looks like before and after the change.
Before:
irb(main):001:0> require 'symengine'
=> true
irb(main):002:0> x = SymEngine(1)
=> #<SymEngine::Integer(1)>
irb(main):003:0> y = SymEngine(0)
=> #<SymEngine::Integer(0)>
irb(main):004:0> x/y
terminate called after throwing an instance of 'Teuchos::NullReferenceError'
  what():  /home/rajith/Development/symengine/symengine/utilities/teuchos/Teuchos_RCPNode.cpp:720:
Throw number = 1
Throw test that evaluated to true: true
Teuchos::RCP<SymEngine::Basic const> : You can not call operator->() or operator*() if getRawPtr()==0!
Abort caught. Printing stacktrace:
Traceback (most recent call last):
Done.
[2]    590 abort (core dumped)  irb
After:
irb(main):001:0> require 'symengine'
=> true
irb(main):002:0> x = SymEngine(1)
=> #<SymEngine::Integer(1)>
irb(main):003:0> y = SymEngine(0)
=> #<SymEngine::Integer(0)>
irb(main):004:0> x/y
RuntimeError: Runtime Error
	from (irb):4:in `/'
	from (irb):4
	from /usr/bin/irb:11:in `<main>'
irb(main):005:0>
This is a good improvement overall, but it would be nicer to show the user a more descriptive error; that part will be the continuation of the exception handling work during the 10th week.
See you next week!
Sorry I haven’t been able to report my work for about two weeks now. Things have become slower, mainly because my university has resumed, and a heavily packed timetable along with first-week assignments doesn’t help. I also caught a bad fever this past week, which really hindered my progress, but it’s dying down and I will resume my work with full vigor soon.
The work I did do has been summarized below.
While writing the code for the rational polynomials, as mentioned in the last blog post, I encountered various bugs. Some of them exposed other bugs, which took a lot of time for me to debug.
pow(Poly, uint) was throwing a segmentation fault. After digging in and wasting more than four hours on unrelated checks, I figured out that the eq inside the polynomial class was incorrect. Without checking whether the other parameter was a Poly or not, I was static_casting it, which posed a problem.
The happiness was short-lived, as the bug persisted. On further inspection I found that the polynomial was being treated as a number! This was because is_a_Number relied on typecodes, and the polynomial types were defined before the NUMBERWRAPPER class, which deemed them numbers. The fix was simple: just move the polynomial type code definitions after the numbers.
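The ordering bug is easy to model in miniature (a Python sketch with invented names; SymEngine's actual typecodes are a C++ enum, and NUMBER_WRAPPER here stands in for the real boundary code): when the "is it a number?" predicate is a range test over type codes, definition order is everything:

```python
from enum import IntEnum

# Buggy ordering: a polynomial code defined *before* NUMBER_WRAPPER,
# so the "is it a number?" range test swallows it.
class BuggyTypeID(IntEnum):
    INTEGER = 0
    RATIONAL = 1
    UINT_POLY = 2        # polynomial accidentally inside the number range
    NUMBER_WRAPPER = 3   # predicates treat codes <= this as numbers

class FixedTypeID(IntEnum):
    INTEGER = 0
    RATIONAL = 1
    NUMBER_WRAPPER = 2
    UINT_POLY = 3        # moved after the numbers: no longer "a number"

def is_a_number(code, wrapper_code):
    return code <= wrapper_code

print(is_a_number(BuggyTypeID.UINT_POLY, BuggyTypeID.NUMBER_WRAPPER))  # True (wrong)
print(is_a_number(FixedTypeID.UINT_POLY, FixedTypeID.NUMBER_WRAPPER))  # False (fixed)
```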
The pow tests pass, but what’s this? All the MSVC builds on AppVeyor fail. They all fail a coeff test. Wow, I had not changed any code related to coeff at all; how does it affect that specific test, and only on the MSVC compiler? This kept me wondering and looking at the source for a day. Finally, I had to log in to the AppVeyor VM running the tests. I was not familiar with the Windows development environment at all, which was the reason I failed a couple of times before giving up on debugging. The next morning I woke up determined to fix this Windows bug; I set breakpoints in Visual Studio and started the code execution. I found it! It was a bug in the CoeffVisitor itself. The code for the coeff function was incomplete. Why wasn’t this bug captured before? Probably because the previous change (in the typecodes) caused a reordering in a map, which no other compiler was doing. Do read up here for more details.
An AppVeyor build was failing for an unknown reason and had to be shifted to the allowed failures. These were basically the components of #1033. Fewer changes, but really important nonetheless.
The work with rational polynomials continues. I had underestimated the amount of work required, and I also feel that I should have broken down rational polynomials into three parts each, just like integer polynomials. Right now, the work continues in #1028, but it’s soon going to become huge with all varieties of changes.
I finally benchmarked cotire to see how much speedup it was providing to SymEngine builds. Here is the short summary of the speedups obtained, and the work to include it is in #1041.
Also, a small bug was present in our flint and gmp rational number wrappers: we were not canonicalizing on construction from two integers. It was fixed in #1031.
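Canonicalizing on construction means dividing out the gcd and normalizing the sign of the denominator. A plain-Python sketch of the invariant (not the actual flint/gmp wrapper code; the function name is mine):

```python
from math import gcd

def canonical_rational(num, den):
    """Return (num, den) reduced to lowest terms with a positive denominator."""
    if den == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    g = gcd(num, den)
    num, den = num // g, den // g
    if den < 0:  # keep the sign in the numerator
        num, den = -num, -den
    return num, den

print(canonical_rational(2, 4))   # (1, 2)
print(canonical_rational(3, -6))  # (-1, 2)
```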
Laters!
Hello, guys. Welcome back. It’s been nine weeks into the coding period. I had a meeting with Jason on the 17th of this month. He was attending the code sprints at SciPy, and I was very glad to meet other SymPy developers.
That’s all for this week. Cheers !!
Happy Coding.
Last week I did not end up writing a blog post, so I am combining that week’s post with this week’s. Last week I attended the SciPy 2016 conference and was able to meet my mentor, and many other contributors to SymPy, in person. I was also able to help out with the PyDy tutorial. During this time at the conference (and this current week) I was able to flesh out the remaining details on the different portions of the project. I have updated PR #353 to reflect the API decisions for SymbolicSystem (previously eombase.EOM).
In line with trying to put the finishing touches on implementation details before diving in to code, Jason and I met with someone who has actually implemented the algorithm in the past to help us with details surrounding Featherstone’s method. He also pointed me to a different description of the same algorithm that may be easier to implement.
This week I also worked on rewriting the docstrings in physics/mechanics/body.py because I found the docstrings currently there somewhat confusing. I also reviewed one of Jason’s PRs where he reduces the amount of work that *method.rhs() has to do when inverting the mass matrix, by pulling out the kinematical information before the inversion takes place.
With the work these past two weeks being focused on planning the different parts of the project, I will start implementing these various parts next week. I will first work on finishing off the SymbolicSystem object and then move towards implementing the OrderNMethod. This work should be very straightforward with all the work that has been put into planning the APIs.
Planet SymPy is made from the blogs of SymPy's contributors. The opinions it contains are those of the contributor. This site is powered by Rawdog and Rawdog RSS. Feed readers can read Planet SymPy with RSS, FOAF or OPML.