Planet SymPy

April 05, 2014

Aaron Meurer

This blog has moved

This blog has moved to http://asmeurer.github.io/


See http://asmeurer.github.io/blog/posts/moving-to-github-pages-with-nikola/ for more details on why I have moved. For now, the posts on this blog have not been migrated.


by Aaron Meurer at April 05, 2014 01:16 AM

October 02, 2013

Sachin Joglekar

GSoC (officially) done!

GSoC is now over, and I have officially passed the Final Evaluations :-).
Looking back, this was an amazingly interesting and exciting process, and it helped me learn a lot about SymPy, and also about Python. Quite a few things that I learnt in the last three months (some resulting from my own coding, others from just messing around with SymPy code) were so full of Python 'magic' that I wondered whether I would ever use them again in any of my own work!
There were a few problems, like dealing with architectural difficulties, physics-related problems (that I had not thought of while writing my proposal) and the radical change in the plan of action mid-way through the GSoC period. This change came because Prasoon's module didn't shape up as we had planned earlier; perhaps we underestimated the difficulties associated with creating such a complex module and integrating it with the rest of the codebase. As a result, the code I had written based on our decided API became useless - almost. Thankfully, while coding all the new stuff (a mechanics core to support fields, electrostatics, a unified class for ReferenceFrame and Particle) I had made a modified version of the mechanics core to test it all on. Hence, Gilbert and I decided around the end of July that the best thing to do would be to modify the current framework accordingly and base all my work on it.
After the mid-term evals, I could not work as hard as I had during my vacations due to projects and course-work at college. In spite of all that, I managed to get two PRs merged: one modifying sympy.physics.mechanics.essential to support scalar and vector fields, and the other adding a big function to the functions.py file to calculate motion attributes from time-dependent vectors and boundary conditions.
I have a few more PRs in the pipeline; these would essentially just modify the code I wrote over the summer to work with the current module and add the documentation for the completed work. The first of them is already in the review process :-)
The help from Gilbert, Jason, Stefan and at times even Aaron has been immense, and I am really thankful to them for it. Gilbert was a great mentor, especially while brainstorming solutions to problems that we faced from time to time. It was almost like solving problems with a colleague and getting tips to understand how some difficulties could be resolved.
Obviously, I will continue to work on SymPy (mainly sympy.logic) and PyDy - though mostly PyDy for the foreseeable future, since I have to add the E-M module and extend it as I had envisioned earlier. It would be fun to code complex electromagnetic concepts to work with dynamic systems, and having Jason, Gilbert and DL Peterson (and the rest of the PyDy team) to help would be quite an experience on its own.
For me, GSoC has just started my involvement with this community and its codebase, and I aim to be an active developer for them :-D Physics is one of the few things that I miss being in Computer Science, so writing Python code based on it in my free time is something that I am obviously looking forward to!
I will keep updating my blog as and when I get something merged or I work on something that's worth writing about. Once again, thanks a lot to SymPy, Gilbert, Aaron, Ondrej, Stefan and obviously- Google and Carol, for this amazing opportunity!

by Sachin Joglekar (noreply@blogger.com) at October 02, 2013 06:40 PM

September 27, 2013

Manoj Kumar

The end of a journey

Well. It surely has been a few testing months of my life (due to various reasons, some of which I don't want to mention here), and it has been an up-and-down journey throughout, with more downs than ups (damn these cliches). It all began sometime during February/March this year, when I had nothing to do (I almost always have nothing to do) and people around me were doing cool stuff, and I thought of giving GSoC a shot. (I had initially thought GSoC was for people who knew ten programming languages before they were born, or who had been hacking on stuff since fifth grade, so I was a bit pessimistic - I had barely written my first Hello World program in C a year back.) And after some amount of bug fixing and a large amount of luck, my organisation decided to go with my proposal :) (For a slightly TLDR version of this, you can read http://manojbits.wordpress.com/2013/04/22/sympy-and-my-tryst-with-open-source/)

I was assigned one of the most awesome mentors in SymPy, Sean Vig, who is a physics grad student (I think) at the University of Illinois Urbana-Champaign. I'm not saying this just because he passed me, but also because all I had to do if I got stuck somewhere was ping him, and he would reply almost immediately with some solution to my query (except when he went biking, of course).

I don't want to bore anyone with my project details, because I am already slightly bored of them myself, but a one-line description would be that it involved strengthening the already existing ODE module with a number of hints and power series solving methods. I want to mention two moments which had me quite fascinated: the first was the recursive design of the ODE module, which makes it really easy to add additional hints, and the second was when I hacked on a bit of core SymPy - it would take me days to think of such an object-oriented design.
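
To give a flavour of the hint machinery mentioned above - classify_ode and the hint keyword of dsolve are the standard public SymPy API; this snippet is just my own illustration, not code from the PRs:

    from sympy import Function, classify_ode, dsolve
    from sympy.abc import x

    f = Function('f')
    eq = f(x).diff(x) - x*f(x)

    # classify_ode returns a tuple of every hint that matches the equation
    print(classify_ode(eq))

    # dsolve picks the best hint by default, or can be steered explicitly
    print(dsolve(eq, hint='separable'))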

GSoC taught me much more than coding. Patience, maybe. There were times when I got stuck really badly, and someone on the mailing list usually came up with an answer (especially Raoul, who saved the day with his research papers). And also humility, from the core developers, who are seasoned programmers and who treat people like me as on par with them while reviewing pull requests or otherwise. Perseverance and dedication too. You can have things like a bad breakup or something, but you still have to put in the required hours of work.

SymPy has such a broad range of stuff that can be worked on that I believe SymPy could be called a separate language in itself. Also congrats to my fellow GSoCers, Sachin, Prasoon, Chetna, Katja, Mary Clark, Thilina and Saurabh, who managed to string up a project on his own. It was a pleasure working with you all. As I said, SymPy is so broad that I have only a very vague idea about what the other projects are - sometimes I never knew what was going on in my own project, but yeah, whatever. If you are looking for a well-documented Python project to contribute to, SymPy should be at the top of your list.

On a personal note, the way ahead is as undecided as before. I obviously will contribute to SymPy in my free time. Frankly speaking, I still don't consider myself a programmer, and I know plenty of people in my own college who are much better than me, and I still have to improve my skillset. Let's see where life takes me from here.


by Manoj Kumar at September 27, 2013 09:11 PM

September 24, 2013

Mary Clark

GSOC Wrapup

Well, GSOC 2013 is officially over.  I thought I’d write up a summary of what I accomplished over the last 14 weeks.

-Classes for types A, B, C, D, E, F, and G which store information about their Dynkin diagrams, Cartan matrices, roots, and size

-A class, RootSystem, which allows users to work with the root system of a given Lie algebra.  It can generate all the roots of a Lie algebra, and has methods for adding roots together.

-A class WeylGroup, which represents the Weyl group of a given Lie algebra.  It gives the size and name of a given Weyl group, as well as the matrix form of an element and an element's order.

-Methods for displaying the Cartan matrix and Dynkin diagram of a Lie algebra.
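
Here is a quick sketch of how this functionality surfaces to a user. The class and method names below are from the current sympy.liealgebras package, which this work grew into; the exact names at merge time may have differed:

    from sympy.liealgebras.cartan_type import CartanType
    from sympy.liealgebras.root_system import RootSystem
    from sympy.liealgebras.weyl_group import WeylGroup

    c = CartanType("A3")
    print(c.cartan_matrix())      # the Cartan matrix of A3
    print(c.dynkin_diagram())     # its Dynkin diagram, drawn in ASCII

    rs = RootSystem("A3")
    print(rs.all_roots())         # every root of the A3 root system

    w = WeylGroup("A3")
    print(w.group_order())        # the order of W(A3), i.e. 4! = 24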


That’s pretty much it.  I had a great time working with SymPy this summer!


by meclark256 at September 24, 2013 07:00 PM

September 22, 2013

Tarun Gaba

GSoC Experiences(4)

All is well that ends well!
That is the only statement that can sum up this last article in the series of GSoC Experiences posts.

I have been able to write stable JavaScript code which is fully functional and has the basic features implemented.

To test the new code on real ground, I have ported the script of the PyDy three-link pendulum example to the new code.

I still need to write some tutorials on how to use the code; I will post the relevant links to them, and if possible a simple tutorial on using the code, right here.

Anyway, I am leaving here a YouTube link to the working animation:
http://www.youtube.com/watch?v=W6MIwXUw7jQ&feature=youtu.be

And here are some screenshots for those who don't like waiting for buffering :)
It feels so exciting to see this work!

[Screenshots of the working animation]


by TARUN GABA (noreply@blogger.com) at September 22, 2013 06:42 AM

September 21, 2013

Tarun Gaba

GSoC Experiences(3)

This is the third article in the series written on my GSoC experiences, describing some details of project developments which I could not report earlier due to circumstances.

I will start where I left off. So, I was working on a GUI.
This is what I came up with!

[Screenshot of the basic GUI]

Well, it's just a basic GUI with some simple features implemented, i.e. play, pause and reset controls.
But it was a good start, and seemed neat.

So after that I had to start working on the JavaScript Canvas class, which was going to handle all the visualization things.
I felt that I did not have enough knowledge of JavaScript's prototype-based structure, so I spent some time learning about prototypes and how to work with them.

I was stuck in more than one place during the JavaScript source code development phases.
I was stuck with rendering the initial scene, the axes and the grid, and getting them all together.

Another problem I encountered was with the dreaded "this" keyword in JavaScript.
The problem was that the "Play" and "Pause" buttons had their onclick attribute attached to this.startAnimation, where "this" was meant to refer to the Canvas class. But the buttons seemed to bind "this" to the browser's default window object instead, and gave errors. It took me some time to figure it out and debug them all. What works is that instead of using "this.startAnimation", I was supposed to use "Canvas.prototype.startAnimation" - so this is one of the places where prototypes were a savior.

Apart from this, there were some other minor glitches with animations, but thankfully they were all resolved. In the next and last article of this series of GSoC Experiences, I am going to write about the final outcome of the hard work: a working example of PyDyViz.



by TARUN GABA (noreply@blogger.com) at September 21, 2013 01:03 PM

September 20, 2013

Tarun Gaba

GSoC Experiences(2)

Continuing with my GSoC experiences, this is the second article in the list.

The API seemed to be in good shape, and I was supposed to start writing tests for the JavaScript.
The main problem that occurred to me was how.
I spent a few days figuring out exactly how I was supposed to write tests defining the behaviour of code which will be used in animations, most of which is going to work at runtime (when we are using the browser for animations).

I had some discussion with my mentor. He gave me some good motivation as well as ideas on test-writing. But (yes, call me dumb) I still couldn't figure out how to do that. I had written some tests myself during this time,
but I wasn't exactly satisfied with them.

So to keep up with time, I had to switch to writing the JavaScript source code itself.
I was thinking that once the source code was complete and in working condition, I would be more comfortable writing tests for it.

So with that in mind, I started writing the source code for the JavaScript side.
The first thing that crept into my mind before starting was to give the project a GUI.





by TARUN GABA (noreply@blogger.com) at September 20, 2013 08:36 PM

GSoC Experiences(1)


Hi All,
I have been out of contact for a long time. This was mainly because I had run into some troubles while working on the JavaScript side, and things were not going smoothly. I had to work a lot to get everything in place. I totally apologise for missing all the weekly blog entries which I should have made.

To make up for it, I am going to share my experiences from the time I missed blog entries, in the form of these GSoC Experiences articles.

So, I was able to finish with the Python side by 12th August (a little later than expected).
We had a basic server built in, using WebSockets for transferring content from Python to the browser.
It was running smoothly, and all the tests were passing.

It was decided that we would be using Jasmine for testing the JavaScript. I started fleshing out a clean and detailed API on the JavaScript side, as well as familiarizing myself with Jasmine and how to write tests with it.

So we had some API fleshed out, and next I was supposed to write tests for the JavaScript side.

by TARUN GABA (noreply@blogger.com) at September 20, 2013 10:37 AM

September 16, 2013

Prasoon Shukla

GSoC Week 13: Plans for the future

Today is the official soft pencils-down date for GSoC 2013. And my work isn't finished. This post, written with the benefit of hindsight, covers my plans for the rest of the project.

So, as I was quite worried about for the last two weeks, the exam week finally passed me by – and I can’t say I was unscathed. Nevertheless, I think I’ll get through without too bad a grade in most of my courses. Needless to say, I was unable to do any work this past week – exams had me occupied.

Getting to the topic of this post. Well, I saw that I wouldn't be able to get the work done in time - at least not before the 16th. So, I wrote to my mentors and the organisation admin hoping to continue working on my project after the GSoC period. Thankfully, my mentors and the org admins were accommodating. So, here's a post mentioning what the final goal of this project is and how things should look once finished. Of course, this isn't a complete post - I'll do follow-up posts on this one in the coming weeks.

Right now, we are at a point where we have a foundation for the module - we can represent vectors in any orthogonal coordinate system (currently there are only three, although the design is completely extensible by adding classes for different coordinate systems) and do fundamental operations on them. In addition to the fundamental manipulations of vectors, we can also do some other operations - dot and cross products, namely, for the time being. I should also point out the generic nature of our design - everything is generalized. That is how things stand now. In the near future - that is, in the next one or two weeks - I believe we should have implemented the differentiation methods on vectors: simple differentiation, the grad (of scalars), the div and the curl. That much I am quite certain will work. The way ahead of that, I'm afraid, isn't without its perils (okay, I exaggerate; :s/perils/challenges/g). The next thing to do is to implement the integration methods. I have already created a base class for holding integrals. Then, there are two classes for two different types of integrals - one for line integrals and one for surface integrals. I have also written methods that would reduce the integral to an integral in one variable for each of these classes. But it remains to be seen if these will work.
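
For a feel of the operations described above, here is a sketch using today's sympy.vector package, the descendant of this work. Names like CoordSys3D are from the present-day API, not from the 2013 module, so treat this as an approximation of the intended interface:

    from sympy.vector import CoordSys3D, gradient, divergence, curl
    from sympy.abc import a, b

    C = CoordSys3D('C')                   # a Cartesian coordinate system
    v = a*C.i + b*C.j + C.x*C.k           # a vector field expressed in C

    print(v.dot(C.i))                     # fundamental operation: a
    print(C.i.cross(C.j))                 # fundamental operation: C.k
    print(gradient(C.x*C.y))              # grad of a scalar: C.y*C.i + C.x*C.j
    print(divergence(C.x*C.i + C.y*C.j))  # div of a vector: 2
    print(curl(C.y*C.i))                  # curl of a vector: -C.k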

Once I have these integration classes working, I should like to proceed with other integration classes - for example, integrating a vector over simple limits. I am fairly confident that if I can get the previous two classes working, then this class will work too. The next order of business will be to implement numerical evaluation of these integration methods, which, given that I can reduce the integral to a normal scalar integration, shouldn't be too difficult.

That would finish quite a few of the methods required for vector calculus. Then, I would like to work on the part that deals with reference frames. Currently, we have only three coordinate systems - I'd like to add two more; parabolic and elliptical coordinate systems seem like a good place to begin. Also, I'd like to implement more rotation methods - adding quaternions, for example. Stefan and I also had a chat where he mentioned that I should remove the string-based interface for these orientation methods (and elsewhere as well). So, to take it one step further, I'd like to do some general refactoring - keeping coordinate systems in their own file, keeping vector classes separate, keeping integration classes separate, and perhaps more separation based on helper methods.

One thing that is quite important to do, of course, is profiling the code and optimizing the bottlenecks. I have already identified several places in the code which can be easily optimized. But I'm not beginning the optimization just yet; I should like to have a sense of completion before I begin optimizing the code. Currently, it is the express method that appears slow, and that is because of the dcm method - which is slow. So, I have a fairly good idea of how to go ahead with the optimizations - it is just that I will do it towards the end.

Now that those things are out of the way, let me mention some of my current work. I am currently working to make the positioning of coordinate systems work with the express method. That would work nicely except for the fact that I am having some trouble dealing with constant position-dependent vectors. I am thinking of creating a new class for them. Anyway, that was a long post. There will of course be follow-up posts on my plans, but for the time being, I would like to get back to work.


by musingsofafriend at September 16, 2013 05:38 PM

Sachin Joglekar

GSoc: Weeks 12 and 13

Work gained pace in the past two weeks.
First off, I got my first GSoC PR merged into the SymPy base. This PR essentially modified the sympy.physics.mechanics core to incorporate the concept of coordinate variables and manipulations on them.
For example, consider a vector V defined in frame R1, with coordinate variables of frame R2 occurring as symbols in its definition. Now, while calculating d/dt(V) wrt R1, it would be necessary to substitute values of the base scalars of R2 wrt those of R1 (as they will be functions of t). One small point: coordinate variables should, as far as possible, not be used as normal Symbols. This may mislead the 'dt' and 'express' methods into trying substitution of variables of frames not linked to each other.
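
As a minimal sketch of what coordinate variables look like in practice - written against the present-day sympy.physics.vector API, into which this mechanics core was later split; the 2013 import paths differed:

    from sympy.physics.vector import ReferenceFrame, dynamicsymbols, express

    q = dynamicsymbols('q')                     # time-dependent angle
    R1 = ReferenceFrame('R1')
    R2 = R1.orientnew('R2', 'Axis', [q, R1.z])

    # R2[0], R2[1], R2[2] are the coordinate variables (base scalars) of R2
    field = R2[0] * R2[1]

    # re-express the scalar field using the coordinate variables of R1
    print(express(field, R1, variables=True))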

We also added a few optimisations, such as a dcm cache (which turned out to be trickier than I thought) to 'remember' the DCMs once calculated, for later use. We also added 'express' and 'dt' methods to the ReferenceFrame class - these help with re-expressing scalar as well as vector fields in the said frame. During all these changes, I took care to preserve the old API of SymPy classes and methods - we didn't want to make all the code written against the previous module version redundant.

Now, I currently have a PR in the pipeline for adding the get_motion functions to the sympy.physics.mechanics.functions file. After this, I will turn to the rather tricky task of modifying the Point class to incorporate functionality as needed by ParticleCharge from my EM work (all the while making sure the previous API does not change). In parallel, I will be adding documentation on my previous PR to the Sphinx docs.

However, all this after the 18th - the day my first semester exams end. After that, I can start working on the project full-time again :-).

Anyways, have a great week :-)

by Sachin Joglekar (noreply@blogger.com) at September 16, 2013 12:41 PM

September 15, 2013

Katja Sophie Hotz

12th and 13th week

Here is my status update on the factorization algorithm.

I was able to close all remaining gaps in the code and the algorithm is working smoothly now. In particular, I implemented the p-adic lifting I was talking about last time, which should improve performance for polynomials with large coefficients. First, I had to make the Gaussian elimination code in matrices.py a little more flexible. For example, one can now pass a custom elimination function in addition to a custom iszero function. To solve linear systems over the ring \mathbb Z_{p^l}, I then call it in a slightly strange way. Namely, I make calculations modulo p^l, but being zero is checked modulo p. The only drawback of this approach is that rows that contain only multiples of p will be treated as zero rows, so the set of solutions might get bigger. However, in my use case this should happen only rarely, and if it does, it will be detected later in the code.
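
The trick works because an entry that is nonzero modulo p is a unit modulo p^l, and hence usable as a pivot. A self-contained sketch of the idea (my own illustrative code, not the actual matrices.py changes):

    def gauss_mod_prime_power(M, p, l):
        """Row-reduce M (a list of lists of ints) modulo p**l, treating an
        entry as a pivot candidate iff it is nonzero modulo p."""
        q = p**l
        M = [[x % q for x in row] for row in M]
        rows, cols = len(M), len(M[0])
        r = 0
        for c in range(cols):
            # find a pivot that is nonzero mod p (i.e. a unit mod p**l)
            piv = next((i for i in range(r, rows) if M[i][c] % p), None)
            if piv is None:
                continue  # only multiples of p here: treated as a zero column
            M[r], M[piv] = M[piv], M[r]
            inv = pow(M[r][c], -1, q)     # invertible, since gcd with p is 1
            M[r] = [(inv * x) % q for x in M[r]]
            for i in range(rows):
                if i != r and M[i][c]:
                    f = M[i][c]
                    M[i] = [(x - f * y) % q for x, y in zip(M[i], M[r])]
            r += 1
        return M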

In the last week of my GSoC I will keep fine tuning the code and issue a PR for the factorization algorithm.

Finally, here are some comparisons with %timeit.

First, a small example. Let \alpha be a root of t^4 + 1 and f = x^4 + y^4 \in \mathbb Q(\alpha)[x, y]. The current SymPy factorization algorithm needs 166 ms on my laptop, whereas the new one only takes 69.1 ms.

Now a bigger one. Let \alpha be a root of t^4 + t^3 + t^2 + t + 1 and f = f_1 \cdot f_2 \cdot f_3 \cdot f_4 \in \mathbb Q(\alpha)[x, y, z], where

\displaystyle f_1 = x^2 - 2 \alpha x - ( \alpha^3 + \alpha^2 + \alpha + 1) z^2 + \alpha^2 y + 12 \alpha^3
\displaystyle f_2 = x^2 - 2 \alpha^2 x + \alpha^3 z^2 - ( \alpha^3 + \alpha^2 + \alpha + 1) y + 12 \alpha
\displaystyle f_3 = x^2 - 2 \alpha^3 x + \alpha^2 z^2 + \alpha y - 12 ( \alpha^3 + \alpha^2 + \alpha + 1)
\displaystyle f_4 = x^2 + 2 ( \alpha^3 + \alpha^2 + \alpha + 1) x + \alpha z^2 + \alpha^3 y + 12 \alpha^2 .

The current SymPy factorization algorithm needs 215 seconds, whereas the new one only takes 60.3 seconds. More than three times faster! :)


by katjasophie at September 15, 2013 05:56 PM

September 14, 2013

Thilina Rathnayake

Status of the Diophantine module

Hi All,

In my project proposal for a Diophantine equation module for SymPy, I mentioned the following five deliverables.

1. Linear Diophantine equation a_1x_1 + a_2x_2 + . . . + a_nx_n = b:
I implemented solutions for linear Diophantine equations; you can access this functionality through `diop_linear()`.

2. Simplified Pell equation, x^2 - Dy^2 = 1:
Not only did I implement solutions for the simplified Pell equation, I completely solved the general binary quadratic equation ax^2 + bxy + cy^2 + dx + ey + f = 0.

3. The equation x^2 + axy + y^2 = z^2:
I implemented solutions for the more general ternary quadratic equation ax^2 + by^2 + cz^2 + dxy + eyz + fxz = 0.

4. Extended Pythagorean equation,  x_1^2 + x_2^2 + . . . + x_n^2 = x_{n+1}^2:
I implemented solutions for the slightly more general equation a_1^2x_1^2 + a_2^2x_2^2 + . . . + a_n^2x_n^2 = a_{n+1}^2x_{n+1}^2.

5. General sum of squares, x_1^2 + x_2^2 + . . . + x_k^2 = n:
This is a computationally hard problem, and the method I implemented finds only one solution. It's quick and works for large n, but it is not complete. I also implemented a brute force version which finds all the solutions, but it doesn't work for larger n.
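
As a quick sketch of how this surfaces for a user, here is the diophantine() dispatcher, which routes equations to the solvers above (standard SymPy API; the exact output formatting varies between versions):

    from sympy import symbols
    from sympy.solvers.diophantine import diophantine

    x, y, z = symbols('x y z', integer=True)

    # 1. linear: 2x + 3y = 5 gives a one-parameter family of solutions
    print(diophantine(2*x + 3*y - 5))

    # 2. Pell-like binary quadratic: x**2 - 2*y**2 = 1
    print(diophantine(x**2 - 2*y**2 - 1))

    # 3. ternary quadratic: Pythagorean triples from x**2 + y**2 - z**2 = 0
    print(diophantine(x**2 + y**2 - z**2))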


by Thilina Rathnayake at September 14, 2013 05:22 PM

Change of plans for general sum of squares

Hi All,

In my last post I described Euler's four square identity for solving the general sum of squares equation. My idea was to factorize the number, represent each prime as a sum of four squares, then use Euler's four square identity to construct the sum of four squares representation for the original number. But I found it slower than the algorithm I found here, so I adopted the latter. The idea is to reduce the problem of representing a number as a sum of four squares to representing a number as a sum of three squares.

Algorithm for representing a positive number n as a sum of three squares

1. If n == 0, then return (0, 0, 0)
2. Write n as 4^vn_{1}
3. if n_{1} \in S then return the hard coded representation of n_{1}. Here, S = {2, 3, 10, 34, 58, 85, 130, 214, 226, 370, 526, 706, 730, 1414, 1906, 2986, 9634}. Representations for these numbers can be found here.
4. If n_{1} is a perfect square then return (2^v\sqrt{n_{1}}, 0, 0)
5. If n_{1} = 3 (mod 8), find an odd number i, i < \sqrt{n_{1}}, such that \frac{n_{1} - i^2}{2} is a prime. Set x = i and p = \frac{n_{1} - i^2}{2}. Find two numbers y, z such that y^2 + z^2 = p (you can use Cornacchia's algorithm for this). Return (2^vx, 2^v(y + z), 2^v|y - z|).
6. If n_{1} = 2 (mod 8) or n_{1} = 6 (mod 8), then find an odd number i, i < \sqrt{n_{1}}, such that n_{1} - i^2 is a prime. Otherwise, find an even number i, i < \sqrt{n_{1}}, with the above requirement. Set x = i and p = n_{1} - i^2. Find y, z such that y^2 + z^2 = p. Return (2^vx, 2^vy, 2^vz).

Note that the above algorithm cannot be used if n_{1} = 8k + 7 for some integer k \in Z, that is, if n is of the form 4^v(8k + 7).

Algorithm for representing a positive number n as a sum of four squares

Every non-negative integer n can be represented as a sum of four squares.

1. Write n as 4^vn_{1}.
2. If n_{1} = 7 (mod 8) then set d = 2 and n_{1} = n_{1} - 4. If n_{1} = 6 (mod 8) or n_{1} = 2 (mod 8) then set d = 1 and n_{1} = n_{1} - 1. Otherwise set d = 0.
3. Use the algorithm described earlier to represent n_{1} as a sum of three squares, say x^2 + y^2 + z^2 = n_{1}.
4. Return (2^vd, 2^vx, 2^vy, 2^vz).

By using these two algorithms one can represent any non-negative integer as a sum of four squares. In the general sum of squares equation, where a given integer n needs to be represented as a sum of k squares, we can divide the k variables into segments of four, divide n into that same number of segments, and represent each segment by four squares using the algorithm given above.
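
For checking small cases, a naive brute-force reference is handy. This is deliberately NOT the algorithm above (which is far faster) - just an illustrative cross-check of my own:

    from itertools import combinations_with_replacement
    from math import isqrt

    def four_squares_bruteforce(n):
        """Return some (a, b, c, d) with a*a + b*b + c*c + d*d == n."""
        for a, b, c in combinations_with_replacement(range(isqrt(n) + 1), 3):
            r = n - a*a - b*b - c*c
            if r >= 0 and isqrt(r)**2 == r:
                return (a, b, c, isqrt(r))
        return None  # unreachable: Lagrange guarantees a representation

    print(four_squares_bruteforce(310))  # one decomposition of 310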


by Thilina Rathnayake at September 14, 2013 04:50 PM

September 09, 2013

Cristóvão Sousa

Faster Common Subexpression Elimination (CSE) in SymPy

Over the past months I've been studying how to improve the performance of the Common Subexpression Elimination (CSE) routine of SymPy, so it can be used in SymPyBotics with acceptable computing times.

This resulted in pull request #2355, which has already been merged into the SymPy master branch (for release in version 0.7.4).

Here is an example comparing the new and previous CSE implementations when applied to the expressions of the "generic torque", computed with SymPyBotics, for the first 3 joints of a 7-DOF WAM Arm robot.

Caching influences the timings, but we can notice an average performance improvement of about 25x when external (pre and post) optimizations are used. When no external optimizations are used, the performance improves by an average of 90x. With the new `order='none'` option, the improvement rises to 500x for the non-cached case, and to 1000x for the cached one!

For this particular case, the CSE output is less optimized when external optimizations are done (the output has more operations) than when they are not.

How it works now

First, two remarks:

  • expressions are not trees but rather directed acyclic graphs (DAGs).  E.g., in the expression sin(x+1)+cos(x+1), the arguments of sin and cos are the same, x+1; indeed, the node x+1 has two parents (see the sketch below);
  • SymPy (sub)expressions are nicely and quickly hashable, thus great to use in sets and dictionaries.
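
A minimal illustration of that shared node, using the public cse function (the optimizations and order keywords are part of the standard SymPy API; the output shown as a comment is what I would expect, not a verbatim transcript):

    from sympy import cse, sin, cos, symbols

    x = symbols('x')
    expr = sin(x + 1) + cos(x + 1)

    # the shared node x+1 is pulled out into a replacement symbol x0
    print(cse(expr))
    # e.g. ([(x0, x + 1)], [sin(x0) + cos(x0)])

    # the new order='none' option skips the costly canonical ordering
    print(cse(expr, order='none'))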

The CSE core/raw algorithm:

  1. The core of the new CSE parses the expression, adding each seen subexpression to the seen set. If a subexpression was already seen, it is added to the repeated set and its child nodes are not parsed (there is no need to).
  2. After knowing the repeated subexpressions (nodes with more than one parent), the core CSE rebuilds the whole tree using intermediate variables in place of repeated subexpressions.

The internal optimizations:

  • Before the core CSE algorithm is performed, the expression is parsed to find optimization opportunities; when an optimizable subexpression is found, it is added to the opt_subs substitutions dictionary.
  • When the core algorithm parses a subexpression, it looks it up in the opt_subs dictionary; if it is there, it parses the substitution instead.
  • The currently implemented internal optimizations are the following:
    • negative signs are stripped out of multiplications, e.g., -2*x is substituted by -1*(2*x)
    • negative signs are stripped out of exponents, e.g., x**(-2*y) is substituted by (x**(2*y))**-1
    • common Add and Mul terms are grouped, e.g., in cos(a+b+c)+sin(a+b+d), a+b+c is substituted by (a+b)+c and a+b+d is substituted by (a+b)+d, so that a+b is a single, repeated node

Future work

In my opinion three things could further improve CSE:

  1. add support for MatrixExprs;
  2. use replacement Symbols which could somehow clone the assumptions of the subexpressions they represent;
  3. implement an optimal Mul/Add term matching system (maybe using Matthew Rocklin’s logpy package).


by Cristóvão Duarte Sousa at September 09, 2013 11:59 AM

Prasoon Shukla

GSoC Week 12: Things are finally looking up!

This week, finally, I was able to get the express method working completely. I knew that once the express method started working, everything else would fall into place. And, as you might deduce from the title of the post, things finally seem to be working.

First, a bit of background. I am developing a framework that can be used to represent vectors in Python and operate on them. There is a method called express that one can call on a vector to have it expressed in another coordinate system. This vector calculus module was designed with a very general approach - vectors can be constructed from a variety of coordinate systems, and the coordinate systems can be positioned, oriented, and of any orthogonal type (among those currently supported; new coordinate systems can be easily added). Also, and I won't go into the details, the classes that represent vectors need to inherit from a SymPy class, Basic. This essentially means there's a tree-like structure of components, so that objects can be decomposed into their forming components. All of this means decreased complexity for the user, but increased complexity of the code. Anyway, as I was saying, the express method can be used to convert vectors into different coordinate systems so that they can be operated on. I simply cannot overemphasize the importance of this one method. It took me three weeks to finally be able to write this method and another two to test it and make it work, but it finally got done this week.

Now that this method is working, I moved on to the orientation methods. The majority of the work in this area was just the proper initialization of matrices when coordinate system objects are created. That was dealt with, and as a result, the DCM methods (the methods that give the direction cosine matrices between two coordinate systems) are finally working as well.

Then, I wanted the operations on vectors to be working. Already, the basic operations of addition, subtraction, multiplication, division, expansion, factorization etc. were working. The next step was to get the dot and cross methods to work. That is what I did next. So, the dot and cross methods are now working.

Well, that was a bit of a long post. Anyway, I'd like to say that I did all this even though the coming week (9-Sep-13 to 13-Sep-13) is the exam week. I didn't study much in the last month. Usually, I would study really hard one week before the exams and hope for the best. But I couldn't follow my general practice this time. God knows what'll happen this coming week! Anyway, just so that I don't completely screw up my grades, I'll try to study as much as humanly possible during the exam week, and so I won't work at all during this week (till the weekend). That will give me from the 13th evening to the 16th night to get as much done as possible before the soft deadline. Then, there'd be about a week more before the hard deadline. Hopefully, I'll be able to get most of the promised goals done by then, including the documentation. Fingers crossed!


by musingsofafriend at September 09, 2013 11:51 AM

Mary Clark

A late update

This week has been work on the WeylGroup module, unsurprisingly.  I've implemented element_order and matrix_form for types A, B, C, D and G2, and matrix_form for E and F.  I think that implementing order for E and F is going to be much more difficult, because it's not easy at all to realise those two groups as permutations.  I'm going to spend the next day or two seeing if I can find a clever way of doing it; otherwise I'll utilise matrix_form to get the matrix representation of the element of the WeylGroup and then brute-force the order of the element, by finding what power of the matrix gives the identity.
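
The brute-force fallback is simple enough to sketch (a hypothetical helper of my own, not code from the module):

    from sympy import Matrix, eye

    def matrix_order(M, max_order=10000):
        """Smallest k >= 1 with M**k equal to the identity matrix."""
        P = M
        for k in range(1, max_order + 1):
            if P == eye(M.rows):
                return k
            P = P * M
        raise ValueError("order exceeds max_order")

    # a 90-degree rotation matrix has order 4
    print(matrix_order(Matrix([[0, -1], [1, 0]])))  # -> 4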

I also changed bits in the root_system PR to utilise Rational(x, y) instead of using floating point numbers.

I’m hoping to have most major coding done by the 13th, so I can spend the last 10 days of GSOC writing up more detailed docstrings and fixing other minor errors.


Sorry for the short update, but I’m not feeling particularly verbose today.


by meclark256 at September 09, 2013 01:26 AM

September 06, 2013

Manoj Kumar

Towards the end

Hi, I finally managed to do some amount of work over the past ten days, and I'm happy to say that everything I had wanted to do this GSoC has been pushed in, and I'm waiting for comments on my final pull request. These were the changes that I made.

1. Making stuff similar to the series function
While playing around with the code, I found out that doing series(eq, n=terms) gives the series expansion up to O(x^n), rather than up to the nth term. This makes the code look a lot neater too. Also, the default power is 6, so I changed that too. For instance, take this sample SymPy session

from sympy import Function, dsolve, exp, pprint, series
from sympy.abc import x
f = Function("f")
pprint(series(exp(x)))
         2    3    4     5        
        x    x    x     x     ⎛ 6⎞
1 + x + ── + ── + ── + ─── + O⎝x ⎠
        2    6    24   120       
eq = f(x).diff(x) - x*f(x)
pprint(dsolve(eq, hint='1st_power_series'))
                2       4        
            C₀⋅x    C₀⋅x     ⎛ 6⎞
f(x) = C₀ + ───── + ───── + O⎝x ⎠
              2       8       

I also added support for power series solutions of homogeneous differential equations at ordinary points and regular singular points. A homogeneous second order differential equation is of the form P(x)\frac{d^2f}{dx^2} + Q(x)\frac{df}{dx} + R(x)f = 0. (Any DE mentioned below is homogeneous second order unless otherwise specified, because typing DE is easier than typing a homogeneous second order differential equation.)
a] A point x0 is said to be ordinary if \frac{Q(x)}{P(x)} and \frac{R(x)}{P(x)} are analytic at the point.
b] It is said to be regular singular if (x - x0)\frac{Q(x)}{P(x)} and (x - x0)^2\frac{R(x)}{P(x)} are analytic. For simplicity in the series expansions, assumptions are made such that P(x), Q(x) and R(x) are polynomials. Okay, now for a bit more detail

1. Ordinary points : A DE has a power series solution at an ordinary point x0. This can be found by substituting \sum_{n=0}^\infty a_{n}x^n in the DE and equating the nth coefficient in order to obtain a recurrence relation. However, this is not as trivial as it sounds, and it was one of the tougher things I had to do.

Take the general case of P(x)\frac{d^2f}{dx^2} + Q(x)\frac{df}{dx} + R(x)f = 0 and substitute y as \sum_{n = 0}^\infty a_{n}x^n in the DE. One has to expand each of the terms P(x), Q(x) and R(x) and make transformations such that for each term the power of x is n. The catch here is that the starting point of the summation also changes, so the necessary terms should be stripped off such that the starting points coincide. This is better explained in one of these tutorials, http://tutorial.math.lamar.edu/Classes/DE/SeriesSolutions.aspx , so you could read that after you've read the rest of the blog post.

A small sample session.

from sympy import *
from sympy.abc import x
f = Function('f')
eq = (1 + x**2)*(f(x).diff(x, 2)) + 2*x*(f(x).diff(x)) - 2*f(x)
pprint(eq)
                          2               
    d          ⎛ 2    ⎞  d                
2⋅x⋅──(f(x)) + ⎝x  + 1⎠⋅───(f(x)) - 2⋅f(x)
    dx                    2               
                        dx
pprint(dsolve(eq))
                 ⎛   4         ⎞        
                 ⎜  x     2    ⎟    ⎛ 6⎞
f(x) = C₁⋅x + C₀⋅⎜- ── + x  + 1⎟ + O⎝x ⎠
                 ⎝  3          ⎠
eq = f(x).diff(x, 2) + x*(f(x).diff(x)) + f(x)
pprint(eq)
                      2      
  d                  d       
x⋅──(f(x)) + f(x) + ───(f(x))
  dx                  2      
                    dx
pprint(dsolve(eq))
            ⎛   2    ⎞      ⎛ 4    2    ⎞        
            ⎜  x     ⎟      ⎜x    x     ⎟    ⎛ 6⎞
f(x) = C₁⋅x⋅⎜- ── + 1⎟ + C₀⋅⎜── - ── + 1⎟ + O⎝x ⎠
            ⎝  3     ⎠      ⎝8    2     ⎠  

2. Regular singular points :
1. Try expressing (x - x0)\frac{Q(x)}{P(x)} and ((x - x0)^{2})\frac{R(x)}{P(x)} as power series about x0. Find p0 and q0, which are the constant terms of these power series expansions.
2. Solve the indicial equation f(m) = m(m - 1) + m*p0 + q0 = 0 to obtain the roots m1 and m2 of the indicial equation.
3. If m1 - m2 is a non-integer, there exist two series solutions. If m1 = m2, there exists only one solution. If m1 - m2 is an integer, then the existence of one solution is confirmed. The other solution may or may not exist.

The power series solution is of the form x^{m}\sum_{n=0}^\infty a_{n}x^n. The coefficients are determined by the following recurrence relation:
a_{n} = -\frac{\sum_{k=0}^{n-1} \left(q_{n-k} + (m + k)p_{n-k}\right)a_{k}}{f(m + n)}. For the case
in which m1 - m2 is an integer, it can be seen from the recurrence relation that for the lower root m, when n equals the difference of the two roots, the denominator becomes zero. So if the numerator is not equal to zero, a second series solution exists. (Oh, and by the way, I just copy-pasted the docstring for the last part, because I was too lazy to write the whole thing again :P)

    eq = x**2*(f(x).diff(x, 2)) - 3*x*(f(x).diff(x)) + (4*x + 4)*f(x)  # One solution
    pprint(dsolve(eq))
                ⎛      3                 ⎞        
              2 ⎜  16⋅x       2          ⎟    ⎛ 6⎞
   f(x) = C₀⋅x ⋅⎜- ───── + 4⋅x  - 4⋅x + 1⎟ + O⎝x ⎠
                ⎝    9                   ⎠  
    eq = x**2*(f(x).diff(x, 2)) - x**2*(f(x).diff(x)) + (
        x**2 - 2)*f(x)  # Two solutions
    pprint(dsolve(eq))
             ⎛    6      5    4    2        ⎞                                    
             ⎜   x    3⋅x    x    x    x    ⎟                                    
          C₁⋅⎜- ─── - ──── - ── + ── + ─ + 1⎟         ⎛   3    2        ⎞        
             ⎝  720    80    8    2    2    ⎠       2 ⎜  x    x    x    ⎟    ⎛ 6⎞
   f(x) = ─────────────────────────────────── + C₀⋅x ⋅⎜- ── + ── + ─ + 1⎟ + O⎝x ⎠
                           x                          ⎝  60   20   2    ⎠  
    

Anyway, that sums up my official GSoC work. Now I just need to address the comments made on my PR, which is at https://github.com/sympy/sympy/pull/2395 . And I just realised that I have my exams coming up next week, and it's been a while since I touched my books (or went to classes), and that's not a good thing. So I have to catch up there. Seems like my whole life is about catching up.


by Manoj Kumar at September 06, 2013 05:13 PM

September 03, 2013

Chetna Gupta

Testing of cds and is_deriv branches

We have currently merged the cds and is_deriv branches to test and get the code working. We have been working to find the possible test errors, to debug every bit of every condition added to the code.

This week I worked on the issue of an infinite loop in cds.py.
Cases like:

cds_cancel_primitive(Poly(sqrt(-1), t2), Poly(x, t2), Poly(2*x, t2),
 Poly(2*x/(x**2 + 1) - x*t2), Poly(2*x/(x**2 + 1) + 3*x*t2), DE, 5)

I am happy to announce that after a brainstorming week of looking for every missing parameter, I finally resolved the infinite loop.
This means that integrate_hypertangent starts working for complicated tests etc., and hence we are able to get answers from risch_integrate for some of the hypertangent cases now.

Output for the test

The above case translates as follows:
\begin{pmatrix}\frac{2x}{x^2 + 1}\\ \frac{2x}{x^2 + 1}\end{pmatrix} + \begin{pmatrix}x & -2x\\ 2x & x\end{pmatrix}\begin{pmatrix}t_2\\ t_2\end{pmatrix} = \begin{pmatrix}\frac{2x}{x^2 + 1} - x t_2\\ \frac{2x}{x^2 + 1} + 3x t_2\end{pmatrix}
where t_2 = \log(x^2 + 1)

[Screenshot of the cds test output]

Previous week
Aaron and I solved the major difficulties encountered in the is_deriv part of the code.
[Screenshot of the is_deriv test output]
Cases like the one above, which were giving incorrect answers before, now have a solution.

The only issue still unresolved from the previous week is the infinite loop in polynomial_reduce. While I see similar issues in the original code for the case polynomial_reduce(Poly((-64*t**6 + 64*t**4 + 64*t**2 - 64)/(2048*t**4 - 4096*t**2 + 2048), t), DE), I am not able to find the stopping condition for it in the book.


by chetnagupta at September 03, 2013 01:10 PM

September 01, 2013

Sachin Joglekar

GSoC Week 10 and 11 : Modifying the mechanics core

Unfortunately, as GSoC is drawing to an end, my workload at college seems to be increasing too. Hence, I haven't been able to work on the project as much as I would have liked, though I have a new PR in the queue. Since Prasoon's vector module isn't close to completion yet, Gilbert and I decided to go ahead and modify and hack the current framework to make space for the modifications I require for my work.
Last week, I opened a PR with the said modifications, but as I had expected, there are a *lot* of errors to be resolved. Changing one thing in one place gives rise to a hundred errors elsewhere in the module - my lesson for the week. We decided to go ahead and implement the caching of DCMs and variable maps generated wrt pairs of reference frames, and it has led to quite some inconsistencies here and there. I am still struggling with them; I hope I can resolve all of them in a day or two. By this weekend, I should have this PR merged and a new one, with the new classes, opened.
About EM, I didn't get to do much; I just implemented ideal dipoles and wrote tests for them. I still haven't pushed them to the PR. Will do that soon, I guess.
That's all there is for this week. Most probably, my GSoC work is going to stretch beyond the timeline, though that's not really a problem. I hope that at the end of it, the code is perfectly shippable :-)
Have a good week!

by Sachin Joglekar (noreply@blogger.com) at September 01, 2013 03:01 PM

Katja Sophie Hotz

10th and 11th week

Wow, time seems to be flying right now, it’s already been two weeks again! In order to keep things clear, I described the factorization algorithm I am currently working on in another post. So here I want to talk about my progress and problems in the last two weeks.

Let’s start with good news. My first two PRs have been merged! So now the modular GCD algorithms I implemented for polynomials over the integers and algebraic number fields are part of SymPy. :)

Progress on the factorization algorithm has not been as fast as I would have liked. I spent a lot of time with Hensel lifting, since the description of the algorithm is not very explicit in this part. I ended up adapting the already implemented Hensel lifting code to my use case. The last (big) part missing is the sparse p-adic lifting algorithm. The problem here is that I would have to solve a cumbersome system of linear equations over \mathbb Z_{p^m}. One possible way to work around this is to choose a large prime so that no lifting is necessary, which is the way it is currently done in the integer polynomial case. But of course, a lifting would be more efficient.

I already made successful factorizations (by using big enough primes) for certain polynomials, and next week I hope to get it to work for arbitrary ones.


by katjasophie at September 01, 2013 02:52 PM

Factoring polynomials over algebraic number fields

In this post I will describe the factorization algorithm given in [1] for polynomials over simple algebraic extensions of \mathbb Q.

First, I have to introduce some definitions and notations.
We define the denominator \mathrm{den}(f) of a polynomial f \in \mathbb Q(\alpha)[x_0, \ldots, x_n] as the smallest integer such that \mathrm{den}(f) f \in \mathbb Z[x_0, \ldots, x_n, z]. Most of the time we will work with the monic associate \tilde f \in \mathbb Z[x_0, \ldots, x_n, z] instead of f. It is defined as the associate of \mathrm{den}\left(\frac 1 {\mathrm{lc}(f)} f \right) \frac 1 {\mathrm{lc}(f)} f which is primitive in x_0, \ldots, x_n, z.

We will assume that the input polynomial f is square-free and primitive in x_0, because if this is not the case, f can be easily factored into square-free parts and its content in x_0. This can be done using only GCD computations.

First, we compute \tilde f and recursively call our algorithm to factor its leading coefficient in x_0 and obtain

\displaystyle \mathrm{lc}_{x_0} (\tilde f) = \gamma \cdot \prod_{i = 1}^r l_i^{e_i}

in \mathbb Q(\alpha)[x_1, \ldots, x_n]. If we get to the univariate case, we use the existing factoring algorithm by Trager.

Now we enter the main loop. We choose a new evaluation point \mathbf a = (a_1, \ldots, a_n) \in \mathbb Z^{n}, which has to satisfy certain conditions, e.g. \tilde f(x_0, \mathbf a) has to stay square-free.

Again we use Trager's algorithm, this time to factor \tilde f(x_0, \mathbf a) in \mathbb Q(\alpha)[x_0] and obtain

\displaystyle \tilde f(x_0, \mathbf a) = \Omega' \cdot \prod_{j = 1}^s u_j,

where \Omega' = \mathrm{lc}_{x_0}(f(x_0, \mathbf a)) \in \mathbb Q(\alpha). Note that if \tilde f(x_0, \mathbf a) is irreducible, we know that \tilde f is irreducible and hence f, so in this case we are done.

In order to use Hensel lifting, we need to compute the true leading coefficients \bar l_1, \ldots, \bar l_r \in \mathbb Q(\alpha)[x_1, \ldots, x_n] of the irreducible factors of \tilde f. This can be done with simple GCD computations.

Now we choose a new prime p, which also has to satisfy certain requirements, e.g. \tilde f(x_0, \mathbf a) \; \mathrm{mod} \, p is still square-free and the minimal polynomial \mu(z) of \alpha is still irreducible modulo p.

Since evaluating at x_1 = a_1, \ldots, x_n = a_n is the same as factoring modulo the ideal \mathcal I generated by x_1 - a_1, \ldots, x_n - a_n, we can use Hensel lifting to lift the univariate factors u_1, \ldots, u_s and obtain

\displaystyle \tilde f(x_0, \ldots, x_n) = l \cdot \prod_{j = 1}^s \bar f_j(x_0, \ldots, x_n) \; \mathrm{mod} \, \left\langle p, \mathcal I^T\right\rangle,

where T \in \mathbb N is chosen large enough.

That means we have the correct factorization modulo the prime p. In order to get the desired factorization, we use a procedure that is also based on Hensel's Lemma, called p-adic lifting. We lift the factors \bar f_j from \mathbb Z_p(\alpha)[x_0, \ldots, x_n] to \mathbb Z_{p^m}(\alpha)[x_0, \ldots, x_n], where m is chosen big enough, such that

\displaystyle \tilde f(x_0, \ldots, x_n) = l \cdot \prod_{j = 1}^s f_j(x_0, \ldots, x_n)

also holds over \mathbb Q(\alpha) and we are done.

[1] http://www.cecm.sfu.ca/~monaganm/papers/mahdiFactor.pdf


by katjasophie at September 01, 2013 01:21 PM

August 31, 2013

Thilina Rathnayake

Status of the Diophantine Module

Hi All,

I am really pleased to say that my (proposed) work on Diophantine equations is coming to an end. I have implemented all the deliverables in my project proposal. When the current PR gets merged into master, I can start pushing the rest of my work. I also hope to improve the documentation during the weekend and get it in quickly. Ondrej is currently reviewing the PR. I invite all of you to go through the work and give your feedback. I am really thankful to Aaron, Pernici, Stephen and Julien for the support given so far. First of all, let me give you a rough idea about the work currently not on GitHub.

General Pythagorean equation

A general Pythagorean equation is an equation of the form x_{1}^2 + x_{2}^2 + . . . + x_{k}^2 = x_{k+1}^2. The solutions for the equation can be given by using k parameters like below,

x_{1} = m_{1}^2 + m_{2}^2 + . . . + m_{k-1}^2 - m_{k}^2
x_{2} = 2m_{1}m_{k}
.
.
x_{k} = 2m_{k-1}m_{k}
x_{k+1} = m_{1}^2 + m_{2}^2 + . . . + m_{k-1}^2 + m_{k}^2

I implemented solutions for the slightly more general equation a_{1}^2x_{1}^2 + a_{2}^2x_{2}^2 + . . . + a_{k}^2x_{k}^2 = a_{k+1}^2x_{k+1}^2. Solutions for this equation can be constructed from the solutions of the former equation by multiplying each solution x_{i} of it by \frac{lcm(a_{1}, a_{2}, . . . a_{n})}{a_{i}}.

General sum of n squares

I also implemented solutions for x_{1}^2 + x_{2}^2 + . . . + x_{n}^2 = k where k is an integer. It is obvious that the equation is solvable only when k \geq 0.

Lagrange's four square theorem states [1] that every non-negative number can be expressed as a sum of four squares. It is also known that every integer not of the form 4^k(8m + 7), where k, m are non-negative integers, can be expressed as a sum of three squares [2]. Also, any integer which doesn't contain, in its canonical representation, odd powers of a prime of the form 4m + 3 can be expressed as a sum of two squares [3]. For example, N = 2^2 \cdot 5^3 \cdot 11^2 \cdot 3^3 can't be expressed as a sum of two squares, since it contains an odd power of 3, which is a prime of the form 4m + 3. But we can express N = 2^2 \cdot 5^3 \cdot 11^2 \cdot 3^4 as a sum of two squares.

Also, there is an interesting identity, found by Euler, known as Euler’s four square identity which can be stated as,

(a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2) = (a_1 b_1 - a_2 b_2 - a_3 b_3 - a_4 b_4)^2 + (a_1 b_2 + a_2 b_1 + a_3 b_4 - a_4 b_3)^2 + (a_1 b_3 - a_2 b_4 + a_3 b_1 + a_4 b_2)^2 + (a_1 b_4 + a_2 b_3 - a_3 b_2 + a_4 b_1)^2.

So, if we can represent each prime divisor of a number N as a sum of four squares, we can then use this identity to construct such a representation for N. This is the idea behind my implementation of representing a number as a sum of four squares. When we know how to do this, we have several approaches for representing a given non-negative integer k as a sum of n squares. The most obvious thing to do is to represent k as a sum of four squares and set the other n - 4 variables to zero. We can also partition k into approximately n/4 groups, represent each as a sum of four squares, and later combine those results. I still haven't decided on how to do this; I would like to know the ideas of the community. I used a slightly modified version of the algorithm found in [5]. I'll describe the algorithm in my next blog post.

Apart from that, I did a lot of bug fixing and reviewed Pernici's pull request. My tests are now 2x faster after using his functions. A huge thanks should go to Pernici for doing a great job with solving quadratic congruences.

References

[1] Lagrange’s four square theorem, [online], Available: http://en.wikipedia.org/wiki/Lagrange%27s_four-square_theorem
[2] Integer as a sum of three squares, [online], Available: http://www.proofwiki.org/wiki/Integer_as_Sum_of_Three_Squares
[3] Sum of Squares, [online], Available: http://mathworld.wolfram.com/SumofSquaresFunction.html
[4] Euler’s four square identity, [online], Available: http://en.wikipedia.org/wiki/Euler%27s_four-square_identity
[5] http://www.schorn.ch/howto.html


by Thilina Rathnayake at August 31, 2013 01:07 PM

Mary Clark

More thoughts on the Weyl group module

So this week has been a lot more thinking about the Weyl group module.  I feel as though I didn’t accomplish much, but I have spent a fair amount of time researching and thinking.  I should note that I also did some fixes on the RootSystem module (fixing some doctests and whatnot). 

After conferring with David, I've concluded that I want to implement a method that displays the Coxeter diagram corresponding to a Weyl group (i.e. the undirected Dynkin diagram), and that the best way to proceed with the Weyl group module is to implement all the Weyl groups as permutation groups.  This makes sense because, as I noted in my last post, the Weyl group for type A is the symmetric group, and the Weyl group for types B and C is the hyperoctahedral group, which can be thought of as permutations on the set [-n, ..., -1, 1, ..., n].  The Weyl group for type D is a subgroup of the hyperoctahedral group.

This would make things like computing the order of an element of the Weyl group quite easy, as I could take input from the user, get the permutation associated with it, and then use methods from the Permutation class to do what I want.  Now, the obvious challenge with this is that the Permutation class only allows permutations on positive integers like [0, ..., n].  This presents problems with the hyperoctahedral group, since it includes permutations on negative integers.  I reckon that I do not have the time to rewrite the entire Permutation class to allow it to act on an arbitrary set.  So, I am going to map the set [-n, ..., -1, 1, ..., n] to a set of positive integers.  I'm still thinking about how I want to do that.

Furthermore, upon the suggestion of David, I think that I will try to include information about the hyperoctahedral group in the named groups module. 

Lastly, I am also trying to find more concrete information about the Weyl group of type D; I know that it is a subgroup of index 2 in the hyperoctahedral group, but I'm not sure what exact permutations it corresponds to.

So yeah.  This coming week, my plan is to figure out how I want to write the permutations of the hyperoctahedral group as permutations on positive integers, and then to take input from the user in the form of products of simple reflections, and generate the corresponding permutation, at least for types A, B, and C.  

For example, given A3, the Coxeter diagram is 0---0---0, the generating reflections are r1, r2, r3, and the Weyl group is S4.   We can think of r1 as the permutation (1,2), r2 as (2,3), and r3 as (3,4).  So if the user gave the input r1r2, that would be the permutation (1,2)(2,3) = (1 3 2).  Oh, I also need to remember that the Permutation class starts with 0 instead of 1, and to take that into account.
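
A small sketch of that plan with SymPy's Permutation class (my own illustration of the idea, not code from the module), using array form and 0-indexing as noted:

    from sympy.combinatorics import Permutation

    r1 = Permutation([1, 0, 2, 3])   # the transposition (1,2), 0-indexed
    r2 = Permutation([0, 2, 1, 3])   # (2,3)
    r3 = Permutation([0, 1, 3, 2])   # (3,4)

    w = r1 * r2                      # the Weyl group element r1r2
    print(w)                         # a 3-cycle
    print(w.order())                 # -> 3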

This week I will also implement a method that gives the Coxeter diagram of a given Weyl group.  This should be very easy, given that the Coxeter diagram is the undirected Dynkin diagram.


by meclark256 at August 31, 2013 12:07 AM

August 30, 2013

Cristóvão Sousa

PyLMI-SDP

Symbolic linear matrix inequalities (LMI) and semi-definite programming (SDP) tools for Python



by Cristóvão Duarte Sousa at August 30, 2013 05:50 PM

August 26, 2013

Prasoon Shukla

GSoC Week 10: On to the testing

One of the things that I didn’t like about the way the development of code went was that I really couldn’t test most of the code until a certain amount of code was already written. This problem has raised its head many times. So, when I had finally written the code for integration methods, I was finally able to put the pieces together for testing.

The tests were performed in order of increasing complexity of procedures. So, the first thing that was tested was proper initialization of coordinate systems, base scalars, base vectors, VectMuls and VectAdds, and while the initialization of coordinate systems, base scalars and base vectors went smoothly, the VectMul and VectAdd objects raised errors. While debugging these errors, at first I thought that this might be another cache issue (here's the first one, #1746), but fortunately the error still showed with SYMPY_USE_CACHE=no. Upon further inspection, the error was due to the args attribute of BaseScalar and BaseVector. That was fixed.

Once initialization problems were cured, then, I moved on to printing. Just setting the _sympystr function corrected that.

After that, I started initializing complex (as in long and nested) vector expressions. That pointed out some more errors in various helpers and some public methods too. Those were fixed as well – though it did take some time.

Next, I moved on to the operations on vectors. By now, the basic addition, multiplication operators etc. were working fine. The methods that I tested now were some other methods of low complexity (including helpers): for example, the express and factor methods and the is_const_vect function. Problems were corrected as they appeared. Also, initialization of coordinate systems with their origins at any given point in space is now possible.

The next piece of the puzzle was the express method. As I have repeatedly mentioned, this method is like the spine of the entire structure. In order for the testing to proceed any further, I need the express method to work flawlessly. So, the next order of business was to fix the express method. That is what I am doing now. Once the express method works, I’ll move on to  the other methods which have the express method as a dependency.

When the testing is finished, the project won’t in any way be finished. There’s a sprinkling of TODO comments throughout the code. But any work that will be done from there will be much easier.

On a side note, my mid semester exams are approaching, and fast. I think I will be able to devote my full attention to the project till 5th September. After that, I’ll be lucky to be able to contribute 3-4 hours daily. This will go on till 13th September, when the exams end. On a brighter note, I just noticed that the final date of submission is 23rd September (and not 16th September, as I had been assuming until now). That will give me 10 days after my exams to get everything in order (perhaps also for discussions on GitHub).

Back to testing!


by musingsofafriend at August 26, 2013 12:24 PM

Manoj Kumar

Moving on to power series methods

Hi, sorry for the late post this week. I couldn’t do much work last week, but I did manage to write some code.

First of all, I managed to get the lie group hint finally in.  The Pull Request can be seen here: https://github.com/sympy/sympy/pull/2359 . It’s nice to see the work that you have been doing for around two months finally in :D

I just started work on the power series methods for first order differential equations. Contrary to what I learnt, which is substituting \sum_{k=0}^{\infty} a_{k}x^k and finding a recurrence relation, I found a much more straightforward way to compute the power series solution of a given differential equation. Before discussing the algorithm: when does a first order differential equation have a power series solution, and what is a power series solution?

1. Condition for power series solution: Let us take a general first order ODE P(x, y) + Q(x, y)\frac{dy}{dx} = 0. It has a power series solution at a given point when \frac{P(x, y)}{Q(x, y)} is analytic at that point. Analyticity of an expression at a point is confirmed when either the expression is infinitely differentiable at that point, or the expression has a power series expansion at that point. Right now it is impossible to find out (at least using SymPy) whether either of the above conditions is true. So for now, the check is just to see whether \frac{P(x, y)}{Q(x, y)} and its derivative with respect to x exist at the given point.

2. A power series solution is when you can express the solution in the form of a Taylor series. Let us say the solution to a differential equation P(x, y) + Q(x, y)\frac{dy}{dx} = 0 is y = f(x); then a power series solution exists at x0 when f(x) can be written as \sum_{n=0}^{\infty} \frac{f^{(n)}(x0)}{n!}(x - x0)^{n}. As is evident, this can exist only when f(x) is infinitely differentiable at x0.

The algorithm is as follows. Let’s say \frac{dy}{dx} = h(x, y) and y(x0) = y0. Then

1] \frac{dy}{dx} = h(x, y) = F1
2] \frac{d^2 y}{dx^2} = \frac{dF1}{dx} = \frac{\partial F1}{\partial x} + F1\frac{\partial F1}{\partial y} = F2
3] \frac{d^3 y}{dx^3} = \frac{dF2}{dx} = \frac{\partial F2}{\partial x} + F1\frac{\partial F2}{\partial y} = F3

and so on. Since we know the expressions, we can also find their value at a particular point.
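
For illustration, here is a hand-rolled sketch of that recipe using plain diff and subs (this is just the idea, not the actual dsolve code):

from sympy import symbols, sin, factorial

x, y = symbols('x y')
h = sin(x*y)        # the ODE dy/dx = h(x, y)
x0, y0 = 2, 2       # the boundary condition f(2) = 2

F = [h]             # F1
for _ in range(2):  # build F2 and F3 via F(i+1) = dFi/dx + F1*dFi/dy
    F.append(F[-1].diff(x) + h*F[-1].diff(y))

# Taylor coefficients f^(n)(x0)/n! for n = 1, 2, 3
coeffs = [Fi.subs({x: x0, y: y0})/factorial(i + 1) for i, Fi in enumerate(F)]
# coeffs[0] is sin(4), matching the coefficient of (x - 2) in the session below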

A sample session.

from sympy import *
x = symbols('x')
f = Function("f")
eq = f(x).diff(x) - sin(x*f(x))
pprint(dsolve(eq, hint='1st_power_series', ics={f(2):2, 'terms':1}))
f(x) = (x - 2)⋅sin(4) + 2
pprint(dsolve(eq, hint='1st_power_series', ics={f(2):2, 'terms':3}))
                                   2                                     
                            (x - 2) ⋅(2⋅cos(4) + 2⋅sin(4)⋅cos(4))    ⎛ 3⎞
f(x) = 2 + (x - 2)⋅sin(4) + ───────────────────────────────────── + O⎝x ⎠
                                              2
pprint(dsolve(eq))
        4 ⎛    3       ⎞            2        
       x ⋅⎝- C₀  + 3⋅C₀⎠        C₀⋅x     ⎛ 6⎞
f(x) = ───────────────── + C₀ + ───── + O⎝x ⎠
               24                 2

eq = f(x).diff(x) - x*f(x)
pprint(dsolve(eq, hint='1st_power_series'))

                2       4        
            C₀⋅x    C₀⋅x     ⎛ 6⎞
f(x) = C₀ + ───── + ───── + O⎝x ⎠
              2       8     

That’s all for now. Cheers!


by Manoj Kumar at August 26, 2013 10:54 AM

August 23, 2013

Mary Clark

Weyl Groups

So, for each semisimple Lie group, we have a Weyl group.  It is a subgroup of the isometry group of the root system.  Specifically, it’s the subgroup that is generated by reflections through the hyperplanes orthogonal to the roots.  Therefore, Weyl groups are reflection groups, and so a Weyl group is a finite Coxeter group. 

Now, if we take a Lie algebra’s Dynkin diagram and delete all arrows from it, we have its Coxeter diagram, which encodes information about the reflections which generate the Weyl group, as follows.  The vertices of the Coxeter diagram represent the generating reflections of the Weyl group, s_i.  An edge is drawn between s_i and s_j if the order m(i, j) of s_i*s_j is greater than two.  If there is one edge, the order m(i, j) is 3.  If there are two edges, the order m(i, j) is 4, and if there are three edges, the order m(i, j) is 6. 

We note that the B series and the C series have the same Coxeter diagram, and hence have the same Weyl group, which is the hyperoctahedral group, the group of symmetries of a hypercube.  Considered as a permutation group, it is the signed symmetric group of permutations π of the set  { −n, −n + 1, …, −1, 1, 2, …, n } such that π(i) = −π(−i) for all i.  The Weyl group of D_n is a subgroup of index two in the hyperoctahedral group.  The Weyl group of A_n is the symmetric group on n + 1 letters. 
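
To make the signed symmetric group concrete, here is a tiny illustrative check, storing a signed permutation as a dict (nothing here is SymPy API):

def is_signed_permutation(pi, n):
    # pi is a dict on {-n, ..., -1, 1, ..., n}; check it is a bijection
    # of that set satisfying pi(i) = -pi(-i)
    dom = [i for i in range(-n, n + 1) if i != 0]
    return (sorted(pi[i] for i in dom) == dom and
            all(pi[i] == -pi[-i] for i in dom))

# e.g. for n = 2, the signed permutation sending 1 -> -2 and 2 -> -1:
assert is_signed_permutation({1: -2, 2: -1, -1: 2, -2: 1}, 2)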

So, I’m now looking at working on the class WeylGroup.  Obviously it’s easy enough to write out the generating reflections, but I’m not sure how exactly I want to proceed from there.  Perhaps a function where a user can specify a reflection as a product of the generating reflections (e.g. input s1s2s5 or something…) and then have the function output its order?  I’m also thinking about how I want to represent a given reflection.  I’m leaning towards representing it as a product of the generating reflections.  We’ll see.  I have also included a function which returns the order of a given Weyl group.  So yeah, that’s where I’m at.  This week has been generally slow in terms of concrete output, but I’ve been spending a lot of time reading and researching (as Weyl groups aren’t my specialty) and generally contemplating where I want to go with this class.


by meclark256 at August 23, 2013 08:00 PM

August 22, 2013

Aaron Meurer

Python 3: Single codebase vs. 2to3

In my previous post about switching to Python 3 as my default Python, I praised the use of a single codebase for supporting both Python 2 and Python 3. I even chastised the Python core developers for creating 2to3, writing, “I think that the core Python folks made a mistake by presenting Python 3 as a new language. It has made people antagonistic against Python 3 (well, that and the print function, which was another stupid mistake, because even if it was a good idea, it alone has kept too many people from switching). 2to3 was a mistake too, because it perpetuated this idea.”

Well, this isn’t entirely fair, because I myself used to be one of the biggest advocates of using 2to3 over a single codebase. Take this GitHub comment from when the IPython guys were considering this issue, where I wrote, “maintaining a common code base is going to be a bit annoying from the developer side.…The main benefit of using 2to3 is that 99% of the time, you can just write your code as you would for Python 2, and when it gets to Python 3, it just works (maybe that percent is a bit smaller if you use strings a lot, but it’s still quite high). To write for Python 2 and 3 at the same time, you have to remember a lot of little rules, which no one will remember (and new contributors will not even know about). And given that IPython’s test coverage is still poor (unless I am mistaken, in which case, please correct me), little mistakes will slip through, and no one will notice until they try the certain behavior in Python 3.”

So I just want to clarify a few things.

  1. I was wrong. When I chastised the Python core developers for making people believe that Python 3 is a different language from Python 2, I too fell into that trap. It took a month of me working on a codebase that had to be directly Python 3 compatible to see the fallacy of this. And seeing just how small the SymPy compatibility file is sealed the deal. I now believe that I was completely wrong in saying that maintaining a common codebase is annoying. As I wrote in the previous post, it is no different from supporting 2.4-2.7, for instance (actually, by my memory, supporting 2.4-2.7 was much worse than supporting 2.6-3.3, because so many language features were introduced in Python 2.5)
  2. If you have to support 2.5 or earlier and Python 3, then 2to3 might actually be better. The reason is simple: Python 2.6 was the first version of Python to “know” about Python 3. So, for instance, from __future__ import print_function was introduced in Python 2.6. This means that to support a single codebase for 2.5-3.x you have to write print('\n') to print an empty line, and to print something without a newline at the end, you have to use sys.stdout.write. Also, except Exception as e, using the as keyword, which is the only syntax allowed in Python 3, was introduced in Python 2.6, so to catch an exception in a way that works in both 2.5 and 3.x you have to use sys.exc_info()[1]. Now that really is annoying. But in Python 2.6, most differences can be fixed with simple definitions, most of which boil down to try, except ImportError, import x as y type workarounds (a sketch of such a compatibility shim follows this list). The worst are the print function, which can be imported from __future__, division, which can also be imported from __future__ (or worked around), and unicode literals (if it’s a big deal, drop support for Python 3.2). Most other things are just simple renames, like xrange -> range, or making sure that you wrap functions that are iterators in Python 3 in list if you want to access items from them.
  3. I was right about test coverage. Supporting Python 2 and Python 3 in a single codebase if you have bad test coverage is not going to work. You can get around the worst things by making sure that __future__ imports are at the top of each file, but you are bound to miss things, because, as I said, you will forget that map(f, s)[0] doesn’t work in Python 3 or that the StringIO module has been renamed to io, or that you can’t pass around data as strings—they have to be bytes.

    Of course, you also need good test coverage to support Python 3 well using 2to3, but you can get away with more because 2to3 will take care of things like the above for you. Perhaps instead of 2to3 what really should have been made is a pyflakes-like tool that uses the same knowledge as 2to3 to check for cross-compatibility for Python 2 and Python 3.

  4. In the end, you have to be actually using Python 3. I feel like people haven’t been, even today, taking Python 3 seriously. They aren’t actually using it. There’s a feeling that someday in the future they will, but for now, Python 2 is the way to go. 2to3 exacerbates this feeling, because to use it, you have to develop in Python 2. You shouldn’t touch the code generated by 2to3. As it is, then, if you develop with 2to3, you only ever use Python 3 to test that things are working in Python 3. You don’t prototype your code in Python 3, because then you will write code that doesn’t work in Python 2.

    With the single codebase, your view should change. You should start prototyping in Python 3. You should only use Python 2 to test that things work in Python 2 (and since you’ve been using Python 2 for so long before switching to Python 3, or at least if you’re like me you have, this is not that bad). Just yesterday, I found a bug in SymPy in Python 3 that went unnoticed. It relates to what I said above about using bytes instead of strings for data. I just checked, and 2to3 wouldn’t have fixed it (and indeed, the bug is present in SymPy 0.7.3, which used 2to3), because there’s no way for 2to3 to have known that the data was bytes and not a string. The code was obviously untested, but it would have been obvious that it didn’t work if anyone was using Python 3 to use SymPy interactively. As it turns out, some of our users are doing this, and they pointed it out on the mailing list, but it remained unfixed until I found it myself independently.
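
To make point 2 above concrete, here is a minimal sketch of the kind of compatibility file this involves for a 2.6-3.3 single codebase (the names PY3, string_types, etc. are illustrative, not SymPy’s actual compatibility module):

import sys

PY3 = sys.version_info[0] == 3

if PY3:
    string_types = (str,)
    integer_types = (int,)
    from io import StringIO
    xrange = range
else:
    string_types = (basestring,)
    integer_types = (int, long)
    from StringIO import StringIO
    xrange = xrange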

So old mistakes aside, the lessons to take away from this and the previous blog post are

  1. Use a single codebase instead of 2to3 to support both Python 2 and Python 3.
  2. Use Python 3 as your default Python.
  3. Keep Python 2 around, though, because not everything supports Python 3 yet.
  4. Expect to find some bugs, because, until everyone starts doing this, people aren’t going to test their software in Python 3.

by Aaron Meurer at August 22, 2013 12:45 AM

August 18, 2013

Manoj Kumar

Slow Progress

Hi, this week I decided to take a break from my actual GSoC project, and work on things that would indirectly help some parts of my GSoC project.

Firstly, I discovered this bug when I was playing with the lie group solver: https://code.google.com/p/sympy/issues/detail?id=3976 . I thought I could fix it myself, but the source code of solve seemed a bit too dense for me to comprehend, at least within a couple of days. Anyway, as I was playing with the source code, I found another bug, in solve_linear. This time, however, I found out what was actually wrong and managed to open a Pull Request for it: https://github.com/sympy/sympy/pull/2385 . The problem was that solve_linear(y - Integral(f(x), y)) was giving (y, Integral(f(x), y)/(-f(x) + 1)) rather than (y, 0). I managed to implement a quick hack that gets around the issue, though it had me thinking for at least two hours.

Secondly, the issue with Integral subs: I read the discussion on the issue and the respective Pull Request thoroughly, and I tried fixing the issue: https://github.com/sympy/sympy/pull/2390 . There seem to be a lot of test failures, so I am waiting for reviews on the PR, to see if my approach is right, before attempting to fix the failures.

Thirdly, the lie group hint is almost ready to go in; just a few more tweaks here and there and a final yes from Sean.

Finally, it seems that my exams are on September 14, so I need at least five days before that to prepare, so that I can pass them without any backlogs. I also found a bit of time to clean up some minor stuff I was “working” on last year: https://github.com/Manoj-Kumar-S/Thermo . Contributors to the repository are welcome :) . That’s it for now, I guess.


by Manoj Kumar at August 18, 2013 07:41 PM

Sachin Joglekar

GSoC Week9 : Beginning of electrostatics

Week 8 of GSoC is now over, and just 5 more weeks remain till the hard pencils-down date.
However, my work on the project will most likely exceed that time, since I will be having my first tests around September 17. I am sure I will be able to get all the code in by that time, but to make the entire work shippable, things like Sphinx documentation, doctests etc will also have to be finished, which may take a little more time beyond the actual deadline.

Anyways, coming to this week's work, I wrote some code for electrostatics. I added some functions and the ParticleCharge class (inheriting from Particle of the mechanics module).
However, the E-M module's handling of charged particles required me to modify my earlier implementation of Particle - most importantly, adding a 'set_motion' method. Like I mentioned earlier, Particle used to have a frame attached to it. Gilbert and I discussed this at length, and we agreed that making the user initialize a new frame every time, just to attach it to a Particle, was quite cumbersome.
Hence, I modified the class in the mechanics core - it still has a frame as an attribute, but it's now 'hidden' from the user (nothing in Python is hidden from the user, but what I mean here is that the user can do everything with a Particle without caring about the frame). The set_motion method I mentioned earlier just initializes a new frame according to the motion parameters set by the user (similar to MovingRefFrame).

The main reason I want a frame attached to a Particle is the ease of inner working. Since a Particle will not have an 'angular acceleration', those parts of the MovingRefFrame code will never really get executed in Particle. However, Gilbert has a point in noting that it's unnecessarily adding a complex attribute to the class. We haven't finalized anything yet, but re-writing the motion-setting methods for Particle would be tedious and, more importantly, would lead to quite a lot of code duplication. Let's see what we end up doing.

Anyways, I also finished writing tests for the code I wrote this week, and I am waiting for a review from Gilbert. Jason and he also encouraged me to add example-tests to ensure everything works smoothly.
Here are the example tests I wrote (they are really basic, but it's better that way, since strenuous tests only make things worse) -

def test_example_1():
    """
    This is a sample example meant to test the basic functionality of
    electrostatics.py

    Consider a frame R, and a particle charge P (mass m, charge q) situated
    at its origin initially.
    Now, at time t=0, a field R.x+R.y+R.z is 'switched on' and kept that way
    for time 't0'.
    The user needs the x-coordinate of the particle charge as a function of
    time, after the field is switched off at time=t0.
    """

    #Basic initialization
    m, q = symbols('m q')
    P = ParticleCharge('P', m, q)
    P.set_motion(R, pos_vector = 0)
    field = R.x + R.y + R.z
    time = dynamicsymbols._t
    t0 = Symbol('t0')
    #The acceleration is equal to the electrostatic force experienced by the
    #particle charge, divided by its mass
    acceleration = P.electrostatic_force(field)/P.mass
    #Use get_motion_acc from the mechanics core to find velocity and position
    #parameters using the acceleration function and boundary conditions
    translation = get_motion_acc(acceleration, \
                                 0, P.pos_vector_wrt(R), frame = R)
    #Calculate the motion parameters of the particle charge at time t0
    acceleration = translation[0]
    velocity_at_t0 = translation[1].subs({time: t0})
    position_at_t0 = translation[2].subs({time: t0})
    #Set the motion of P accordingly
    P.set_motion(R, trans_acc = 0, trans_vel_b = velocity_at_t0,
                 pos_vector_b = position_at_t0)
    #assert that the dot product of P's pos_vector wrt R with R.x
    #matches the expected result
    assert P.pos_vector_wrt(R).dot(R.x) == \
           q*time*t0/m + q*t0**2/(2*m)


def test_example_2():
    """
    This is a sample example meant to test the basic functionality of
    electrostatics.py

    Consider a set of 3 particle charges of equal mass and charge placed
    symmetrically on a circle of unit radius around the origin, with one
    point lying at position vector of R.y
    We calculate the energy required to assemble this system of charges from
    infinity (with no kinetic energy in any particle) and find out values of
    the electrostatic fields and potentials at the origin
    """

    #Basic initialization
    from sympy import sin, cos, pi
    m, q = symbols('m q')
    P = ParticleCharge('P', m, q)
    Q = ParticleCharge('Q', m, q)
    S = ParticleCharge('S', m, q)
    #Place particle charges as per required configuration
    P.set_motion(R, pos_vector = R.y)
    Q.set_motion(R, pos_vector = -cos(pi/6) * R.x + -sin(pi/6) * R.y)
    S.set_motion(R, pos_vector = cos(pi/6) * R.x + -sin(pi/6) * R.y)
    #Check outputs of required values
    assert charge_assembly_energy(P, Q, S) == 3*3**(-0.5)*k*q**2
    assert S.electrostatic_field(R, 0) + \
           Q.electrostatic_field(R, 0) + \
           P.electrostatic_field(R, 0) == 0
    assert P.electrostatic_potential(R, 0) == \
           Q.electrostatic_potential(R, 0) == \
           S.electrostatic_potential(R, 0)

If you think the API is a little complex, I think so too, but those modifications are for later. It would be good, in my opinion, to have something like an 'environment' infrastructure. Basically, the user will keep adding components (fields, particles, etc.) to it, and the constituents will keep getting modified accordingly. However, it will be good to have the module at a certain level of working before thinking along those lines.

That's all there is to report for this week. The coming week, I will be reading up on magnetostatics from Griffiths' book and start coding for the same. However, I am afraid things are only going to get more and more complex from here on. Let's hope for the best.

Have a great week :-)

by Sachin Joglekar (noreply@blogger.com) at August 18, 2013 05:29 PM

Prasoon Shukla

GSoC Week 9: Getting some work done

This week was quite productive, I think. Finally, I was able to get some important parts done.

So, for the last couple of weeks, I had been stuck on the express method for vectors. Reasons were many – the most prominent one being an ongoing college semester. Nevertheless, I knew that once I was done with the express method, things would proceed at a much better pace. And it seems, for once, I was right.

In the last post, I wrote that I had got started with the vector calculus methods for vectors. This means all the methods required to perform both differentiation and integration operations on vectors. The first in this series was the ‘grad’ method. This one required a little repetition of code from the express method; I’ll need to refactor that later on. One of the problems I faced with this method was due to the generic nature of coordinate systems. Indeed, we are allowing for a vector field to be composed of any number of coordinate systems – limited only by the memory of the computer. But, because we are using such a generalized approach, we cannot apply even the most generalized of formulae on these vector fields. Our only choice is either to operate on each component – while applying the generalized formula – or to express the vector field in a single coordinate system. But even in the former method, we can have a component, say, c0.x * c1.y – which is still in two different coordinate systems. So, we’ll still need to apply express on this part of the vector field.

On doing some analysis, I reached the conclusion that, computationally, we will end up doing many more calls to express and doing similar kinds of calculations no matter which way we go. So, I chose the method which is clearer: if the vector is in a single coordinate system, well and good; but if the vector is in more than one coordinate system, then the user will need to provide a coordinate system to express the results in.

Anyway, that’s how I proceeded with this method. On similar lines, I also implemented the div and the curl methods for vector fields. Also, I finalized the dot and cross methods for vectors.

Once this was done, I wanted to start with methods related to integration. The first step to vector integrals was to have a class that can be used to represent parametric regions in space. Right now, I have allowed for at most 2 free parameters to define a region in space. In other words, the user can define curves and surfaces in space, but not volumes. Of course, I’ll add support for volumes later on, subject to the availability of time. Anyway, after this class had been defined, I wanted to have a class to hold integrals. At first, I was thinking of adding a single generic class for all kinds of integrals. But just a little bit of thinking proved to me that this approach would be quite inefficient, not to mention that it would add to the complexity of the code. So, I decided to have a base class VectIntegral which will be subclassed by classes representing each different category of integrals.

For now, I have added two classes - LineVectIntegral and SurfaceVectIntegral. These will represent line and surface integrals of vector fields. I have also added an eval method for the LineVectIntegral class. Currently, I am trying to add an eval method to the SurfaceVectIntegral class.
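
A rough sketch of that layout (just the shape of the idea, not the actual work-in-progress code):

class VectIntegral(object):
    """Base class for integrals of vector fields over parametric regions."""
    def __init__(self, field, region):
        self.field = field
        self.region = region

class LineVectIntegral(VectIntegral):
    def eval(self):
        #parametrize the curve, dot the field with the tangent, integrate
        raise NotImplementedError

class SurfaceVectIntegral(VectIntegral):
    def eval(self):
        #parametrize the surface, dot the field with the normal, integrate
        raise NotImplementedError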

Anyway, this is how the work is going on. According to the Google timeline, there is still one month to go till the end of the coding period. But for me, things will be much harder. Again, the reason is academics. My midterm examinations start 10th September and that means I’ll have to begin preparing a week before that. So, for me, only about 18 days are left at best. I’ll try to do the most I can in that period.


by musingsofafriend at August 18, 2013 10:58 AM

Thilina Rathnayake

Improving the solving process using Cornacchia

Hi All, it has been some time since I last wrote about my project.  I spent most of my time improving the solving methods used for binary quadratic forms. I also did a fair bit of refactoring of the earlier work and bug fixing. The implementation of ternary quadratic forms is now almost done, and once the pull request by Pernici gets merged, we can use his function for finding the solutions of the quadratic congruence x^2 \equiv a \ (mod \ p), and the time taken to find the solutions for ternary / binary quadratic forms will be reduced considerably.

Tweaks to the algorithms used in solving quadratic binary forms

I had a misunderstanding earlier that the binary quadratic form ax^2 + bxy + cy^2 + dx + ey + f = 0 can be converted to the form x^2 - Dy^2 = N only in the case where \Delta > 0 and \Delta is not a perfect square. Here \Delta = b^2 - 4ac. It occurred to me only when I read [1] more carefully that any binary quadratic with a \neq 0 and c \neq 0 can be transformed to the latter form. With this new discovery, our treatment of the binary quadratic becomes more general and efficient.  Now in both cases (\Delta > 0 with \Delta not a perfect square, and \Delta < 0), we transform the equation to the form x^2 - Dy^2 = N. The former case was earlier addressed under solving the generalized Pell equation, as the equation reduces to a generalized Pell equation in that case. My earlier approach to the latter case was to use brute force. I managed to implement Cornacchia’s algorithm for this case, as suggested by Pernici. Cornacchia’s algorithm can be used to solve the equation ax^2 + by^2 = m with gcd(a, b) = 1 = gcd(a, m) and a > 0, b > 0. In our form under the case \Delta < 0, a = 1 and b = -D > 0, so these two conditions are automatically fulfilled. Below is a rough outline of Cornacchia’s algorithm. Refer to [2] and [3] for more details.

Solving ax^2 + by^2 = m by Cornacchia’s method

1. Find p and q such that ap + mq = gcd(a, m).
2. Construct a set v such that v = \{ x | x^2 \equiv -bp \ (mod \ m) \}. We can omit solutions greater than m/2.
3. For each t \in v set u_{1} = t, r_{1} = m.
4. Define the sequences u_{i+1} = r_{i} and r_{i+1} = u_{i} \ mod \ r_{i} for i \geq 1.
5. Find the terms u_{i}, r_{i} until the condition ar_{i}^2 < m is met.
6. Set m_{1} = m - ar_{i}^2.
7. If b | m_{1} and m_{2} = m_{1}/b is a perfect square, we have the solution x = r_{i} and y = \sqrt{m_{2}}.
8. Add (x, y) to a set S.
9. When we are done with all t, S contains all the solutions to the equation ax^2 + by^2 = m.

Cornacchia’s method finds solutions such that gcd(x, y) = 1. So it won’t find solutions to the equation x^2 + y^2 = 20, since the solution of this equation is (x, y) = (4, 2) and gcd(4, 2) > 1. So we have to extract square factors from m, find solutions to the equation with this new m value, and then reconstruct the solutions. In the above example, 2 is a square factor of 20 (i.e. the square of two divides 20), so we find solutions for m = 20 / 2^2 = 5, i.e. for the equation x^2 + y^2 = 5, and then reconstruct the solutions for the original equation.
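
For the curious, here is a small self-contained sketch of the outline above, finding the primitive solutions only (it brute-forces step 2 for clarity, where a real implementation would use a quadratic congruence solver; math.isqrt needs Python 3.8+):

from math import isqrt

def cornacchia(a, b, m):
    """Primitive solutions (x, y) of a*x**2 + b*y**2 = m."""
    sols = set()
    # steps 1-2: all t with a*t**2 + b divisible by m, keeping only t <= m/2
    for t in range(m // 2 + 1):
        if (a*t*t + b) % m != 0:
            continue
        # steps 3-5: Euclidean remainder sequence until a*r**2 < m
        u, r = m, t
        while a*r*r >= m:
            u, r = r, u % r
        m1 = m - a*r*r  # step 6
        # step 7: need b | m1 and m1/b a perfect square
        if r and m1 % b == 0 and isqrt(m1 // b)**2 == m1 // b:
            sols.add((r, isqrt(m1 // b)))
    return sols

print(cornacchia(1, 1, 5))  # {(2, 1)}, since 2**2 + 1**2 = 5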

diophantine() and check_solutions()

I replaced the main routine of the module with a new method `diophantine()`. What the old method, `diop_solve()`, did was, when given an input equation, try to find the type of the equation using `classify_diop()` and call the appropriate solving function accordingly. So when given an equation like x^2 + 3xy + 5x = 0, diop_solve will identify this as a binary quadratic and try to solve it using methods for binary quadratics. But the best thing to do in this case is to factorize the above equation as x(x + 3y + 5) = 0 and solve x = 0 and x + 3y + 5 = 0 separately. After that, we can construct the general solution by adding parameters. For example, the general solution (x, y) = (0, n_{1}) will be constructed after solving x = 0 and will be added to the solution set for x^2 + 3xy + 5x = 0. This was done by introducing the new routine `diophantine()`. This is now the main routine for the module, and when we call `diophantine(eq)`, first `diophantine()` tries to factor `eq` and then solves each factor using `diop_solve()`. Then it creates the general solution and adds it to the solution set for `eq`. This is a very good approach, and it is also a very common method used when solving Diophantine equations by hand.
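
A hypothetical session showing that behaviour (parameter names in the output vary between versions):

from sympy import symbols, factor
from sympy.solvers.diophantine import diophantine

x, y = symbols('x y', integer=True)
eq = x**2 + 3*x*y + 5*x
factor(eq)       # x*(x + 3*y + 5)
diophantine(eq)  # solves x = 0 and x + 3*y + 5 = 0 separately, so the
                 # result contains (0, n1) plus the linear factor's solutions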

I also unified the various functions in `test_diophantine.py` which were used to check whether the solutions returned by a function satisfy the input equation. I wrote a new method `check_solutions()`. Aaron should be thanked for proposing such a method; now `test_diophantine.py` looks nicer, and whoever tries to hack on it doesn’t need to study various kinds of solution checking functions. To be honest, I was running out of names for these functions, because a new one had to be created for each equation type.

Bug Fixing

I spent a considerable time fixing bugs in the module, especially towards the end of last week. I had made some mistakes when transforming the general ternary quadratic equation into the form ax^2 + by^2 + cz^2 = 0. Also, in the `diophantine()` method, I hadn’t checked for the case when None is returned to indicate the insolubility of the equation. I also found a bug in the `factor_list()` function under Python 3. Here is the issue I created for it: https://code.google.com/p/sympy/issues/detail?id=3968

That is pretty much everything I did during the past weeks. I hope in the upcoming week I can start on solving the extended Pythagorean equation.

References

[1] John P. Robertson, Solving the equation ax^2 + bxy + cy^2 + dx + ey + f = 0, May 8, 2003. http://www.jpr2718.org/ax2p.pdf
[2] Solving ax^2 + by^2 = m by Cornacchia’s method, Number Theory Web. http://www.numbertheory.org/php/cornacchia.html
[3] A. Nitaj, L’algorithme de Cornacchia, Expositiones Mathematicae 13 (1995), 358-365.


by Thilina Rathnayake at August 18, 2013 04:25 AM

August 16, 2013

Mary Clark

Week 9

This week I have been doing more work with the RootSystem class.   I created a PR (https://github.com/sympy/sympy/pull/2382) with everything that I accomplished on it last week, just so that it’s up there and people can comment on it. 

This week, I implemented the root adding functions.  One function just takes any two simple roots, adds them together, and returns the new root.  It works by having the user input two integers (which must be less than the rank of the Lie algebra), takes these integers to be keys in the dictionary of positive roots, and retrieves the corresponding simple roots.  Then it just adds the lists corresponding to the roots (by adding together the two first elements, the two second elements, etc.) and returns a new list, which is the new root. 
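
In code, the core of such a root adding function is just elementwise addition of the coefficient lists; an illustrative helper (not the PR's exact code) would be:

def add_roots(root1, root2):
    # add two roots given as lists of equal length, elementwise
    return [a + b for a, b in zip(root1, root2)]

# e.g. with alpha1 = [1, -1, 0, 0] and alpha2 = [0, 1, -1, 0] (simple roots
# of A3), add_roots(alpha1, alpha2) gives [1, 0, -1, 0], i.e. alpha1 + alpha2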


Then, I also implemented a function which takes two roots as input, and if their sum is a root, returns their sum as a new root.  To do this, I first needed to have all the roots of a given Lie algebra available.  That is why last week I implemented a method for generating all the positive roots for A, B, C, and D.  This week I also did that for E, F, and G.  This enabled me to implement a method in RootSystem that generates all the roots of a given Lie algebra, by first getting the positive roots, and then using the positive roots to generate the negative roots (by multiplying each positive root by -1). 


Then the method add_as_root works by checking the dictionary of all roots to see if the sum of the two input roots is also a root.  If it’s not, it returns a string saying “The sum of these two roots is not a root”.  Otherwise it returns the sum as a list, as usual.

This week I also worked on documentation and the like for all my functions.  This weekend I’m going to make sure all the doctests pass.  Then, my plan for next week is to start work on the WeylGroup class.


by meclark256 at August 16, 2013 07:11 PM

Katja Sophie Hotz

8th and 9th week

Finally, here is my post on the last two weeks. First of all, I submitted a PR for the modular GCD algorithm for polynomials over algebraic fields. For this, I had to finish and update documentation and write tests.

Another task was getting my first PR ready for merging. For this, I needed to cover more lines of the code, since every untested line is “a bug waiting to happen”. The difficulty here is the random choice of evaluation points in _modgcd_multivariate_p. So I searched for an example such that, for one prime, every evaluation point is unlucky. I am glad I found one, since the only other possibility would have been to add an optional argument to choose the evaluation points manually. Now almost every line is tested, and the few that are not should be harmless. ;)

After that, I thoroughly studied the article [1] on factorization of polynomials over algebraic domains. I also started with the implementation, but I did not get very far yet.

Sorry for this entry being so late but still so short, I hope that next week there will be more stuff to blog about!

[1] http://www.cecm.sfu.ca/~monaganm/papers/mahdiFactor.pdf


by katjasophie at August 16, 2013 01:29 PM

August 14, 2013

MRocklin

Using SymPy within Theano

Several months ago I published a sequence of blogposts about using SymPy and Theano together to generate efficient mathematical codes. Main points from the posts were as follows

  • Code Generation: We created a drop-in replacement for SymPy’s code generation with a thin layer to Theano.
  • Scalar Simplification: We used SymPy’s simplification routines to accelerate programs prior to code printing in Theano
  • Matrix Expressions: We generated fast blocked numeric linear algebra programs from SymPy’s matrix expressions using Theano array operations

A week ago someone popped up on the SymPy mailing list asking if a particular SymPy operation (sympy.Piecewise) could be supported in the SymPy-Theano translation. Because Theano has a similar operation (theano.tensor.switch) it was simple to add this translation. In general though this post raised some interesting questions:

  • Is there a way to avoid constantly making new translations for operations that exist both in SymPy and in Theano?
  • What do we do with SymPy’s more exotic operations for which no Theano analog exists? E.g. how do we generate code for factorial or bessel functions?

In an attempt to resolve these issues we recently merged a general SymPyCCode operation into the Theano project. It enables the expression of a Theano scalar operation through SymPy expressions using SymPy’s original code generation capability. For example we can create a simple addition operation like so

from sympy import Symbol, symbols
from theano.scalar.basic_sympy import SymPyCCode

x, y = symbols('x,y')            # SymPy Symbols
add = SymPyCCode([x, y], x + y)  # A Theano addition operator

Theano operators can be applied to Theano variables to make compound Theano expressions

from theano.scalar import floats

xt, yt = floats('xy')
zt = add(xt, yt)

Theano can then turn these expressions into functions

from theano import function

f = function([xt, yt], zt)
f(2, 3)  # prints 5.0

So we can describe scalar operations in SymPy and use them directly in Theano without having to translate anything. Of course, the add operation is already native in Theano. This is more useful for complex scalar expressions, particularly if Theano does not already have such an operation

from sympy import gamma
theano_gamma = SymPyCCode([x], gamma(x))

from sympy.stats.crv_types import NormalDistribution
mu = Symbol('mu', bounded=True)
sigma = Symbol('sigma', positive=True)
normal = SymPyCCode([x, mu, sigma], NormalDistribution(mu, sigma)(x))

Under the Hood

Internally the SymPyCCode op calls SymPy’s C code printers to generate an implementation of the scalar operation. For example the following SymPy code generates C code to compute the probability density function of a normal distribution.

>>> from sympy.printing import ccode
>>> ccode(NormalDistribution(mu, sigma)(x))
(1.0L/2.0L)*sqrt(2)*exp(-1.0L/2.0L*pow(-mu + x, 2)/pow(sigma, 2))/(sqrt(M_PI)*sigma)

Theano is then able to use this generated C code within its generated C program. Theano still handles memory, common sub-expressions, arrays, etc. but is now able to leverage SymPy to generate low-level kernels for mathematical operations.

But Don’t Use This

But you shouldn’t use this mechanism if you don’t have to. Recall from the first post that SymPy can translate many standard operations to Theano directly, without having to wrap the SymPy expressions up in a black box Theano operation. Native translation enables Theano to use many additional optimizations like the use of the GPU, automatic differentiation, and common sub-expression elimination across many expressions. This approach is mainly for cases where your complex scalar expressions don’t translate well to Theano. In some cases the SymPyCCode op may also provide better performance (maybe SymPy’s generated C code is a bit tighter?).

Future Work

We need to improve SymPy’s code printers. While they support all the standard operators they neglect to cover the really interesting cases like bessel functions or factorial. These are cases where the numerical analysis community can concisely describe the “right way” to compute many of these operations in isolation. For example the factorial of n can be computed as gamma(n+1), a fact rarely known by mainstream programmers.

n! = \Gamma(n+1) \;\; \forall n \in \mathbb{N}
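
SymPy can already express this particular rewrite, which is the kind of knowledge such a tool could lean on:

>>> from sympy import factorial, gamma, symbols
>>> n = symbols('n')
>>> factorial(n).rewrite(gamma)
gamma(n + 1)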

I’ve been thinking about the right way to do this generally. Right now my thought is that we should create a new expand hint for computation. If you have thoughts I’d love to hear about them; please speak up in the comments.

Example

There are a number of ways to compute a SymPy expression numerically. I’m going to explicitly run through an example with a few of them below. You should ignore this section if these are already familiar to you.

We create a function to evaluate a normal distribution probability density function for a particular mean and standard deviation across a range of values for x.

# The Target Expression
from sympy import Symbol
from sympy.stats.crv_types import NormalDistribution
x = Symbol('x')
mu = Symbol('mu', bounded=True)
sigma = Symbol('sigma', positive=True)

result = NormalDistribution(mu, sigma)(x)

# Make a numpy `ufunc` with Pure SymPy
from sympy.utilities.autowrap import ufuncify
f_ufunc = ufuncify([x, mu, sigma], result)

# Make a Theano function with SymPy
from sympy.printing.theanocode import theano_function
f_sym_theano = theano_function([x, mu, sigma], [result], dims={x: 1, mu: 0, sigma: 0})

# Make a special Theano op using a SymPyCCode
from theano.scalar.basic_sympy import SymPyCCode
from theano.tensor.elemwise import Elemwise
normal_op = Elemwise(SymPyCCode([x, mu, sigma], result))

# And then use that `op` in plain Theano code
import theano
xt     = theano.tensor.vector('x')
mut    = theano.scalar.float32('mu')
sigmat = theano.scalar.float32('sigma')

ft = theano.function([xt, mut, sigmat], normal_op(xt, mut, sigmat))

August 14, 2013 07:00 AM

August 12, 2013

Tarun Gaba

GSoC report week 7, 8


This is the compiled report for weeks 7 and 8.
The stuff is getting quite hectic now.

I have just been able to complete writing a socket server, which will be used for serving simulation data and information to the browser when the display method is called from the Python interpreter.
The server is in a very basic form, only accepting requests and sending the data accordingly.
But it should suffice to complete the Python side of things, that is, creating the JSON files and a basic server which is able to serve the required files and data to the browser.


On the other side, I have to work on the Javascript tests.
We will be using the Jasmine framework for Javascript testing.
As far as the Javascript API is concerned, it will be a simple script containing a single class for handling the visualizations, the Canvas class.

Also, I have to figure out some way to make a single Javascript module that can be used for both IPython-based and browser-based visualizations.


by TARUN GABA (noreply@blogger.com) at August 12, 2013 04:48 PM


August 11, 2013

Sachin Joglekar

GSoC: Week 8

Week 8's over.
This week was more about 'studying' and 'preparing' than coding. Since the entire infrastructure behind my initial proposal has now shifted considerably, I am drawing out the plans anew and getting the basic things done before I start working on sympy.physics.em (tentative name).

First off, I finally got the logic docs up at the SymPy development docs. You can see them here.
Second, I wrote the basic field functions for electrostatic theory (including the tests).
Apart from that, I was mostly busy with college stuff (the first week of classes is usually a little difficult for adjusting) and going through Griffiths' book. So hopefully, I will get a lot done by the end of the coming week (code + further plans).
Prasoon's code is shaping up well, and I may help him with the testing and debugging part to speed up the development of sympy.vector. I feel quite optimistic about both projects (Prasoon's and mine) now.

Well, that's all (short post, I know) for now.
Will come back with more next time.
Have a great week :-)

by Sachin Joglekar (noreply@blogger.com) at August 11, 2013 04:56 PM

Manoj Kumar

Lie Group hint for SymPy

Hi, this week started with two of my previous PRs getting merged (finally).

1.  The heuristics PR – https://github.com/sympy/sympy/pull/2308
2. The variable coefficient PDE – https://github.com/sympy/sympy/pull/2346

I started working on the integration of the infinitesimals into the present dsolve architecture. Before telling you about the issues I ran into (a number of them, actually), let me explain the algorithm in a few lines.

As you know (if you have been following my blog), the past few weeks my focus was on solving this monster PDE:
\frac{\partial \eta}{\partial x} + (\frac{\partial \eta}{\partial y} - \frac{\partial \xi}{\partial x})*h(x, y) - \frac{\partial \xi}{\partial y}*h(x, y)^{2} - \xi*\frac{\partial h}{\partial x} - \eta*\frac{\partial h}{\partial y} = 0
Why? Well, without going too much into detail, the solution to this PDE, \xi and \eta, gives the infinitesimals of the ODE \frac{dy}{dx} = h(x, y). After getting the infinitesimals, this method is adopted.
1] One has to solve the Partial Differential Equations (again?)
a] \xi\frac{\partial r}{\partial x} + \eta\frac{\partial r}{\partial y} = 0
b] \xi\frac{\partial s}{\partial x} + \eta\frac{\partial s}{\partial y} = 1
2] Now one knows r and s in terms of x and y. Computing \frac{ds}{dr} = \frac{\frac{\partial s}{\partial x} + \frac{\partial s}{\partial y}*h}{\frac{\partial r}{\partial x} + \frac{\partial r}{\partial y}*h} and converting the R.H.S. into terms of r and s reduces the ODE to a quadrature, which can be solved quite easily with the ode_separable hint.
3] After solving the ODE, it can be converted back into the original coordinates.
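
For reference, the computation of \xi and \eta itself is exposed as sympy.solvers.ode.infinitesimals (from the heuristics PR above); a quick illustrative call, assuming a recent SymPy, looks like this:

from sympy import Function, symbols
from sympy.solvers.ode import infinitesimals

x = symbols('x')
f = Function('f')
eq = f(x).diff(x) - x**2*f(x)
# returns a list of {xi: ..., eta: ...} dicts
infinitesimals(eq, hint='default')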

On a scale of optimism to pessimism, I am somewhere in between a realist and a pessimist, and I have to admit I was slightly disappointed with the effectiveness of the hint, since I was running into various issues with a few ODEs that I tested. These are some of the issues I identified.

1] Problem with Integral: I was testing an ODE in which r = x, and I had to integrate a huge expression which couldn’t be integrated, something like \frac{1}{\sqrt{a0 + a1*r + a2*r^{2} + a3*r^{3}}}. It gave me an output of the form Integral, and when I substituted r as x, it gave me a definite value. This is because doing subs on an Integral object doesn’t substitute for the variable with respect to which you are integrating. Aaron pointed me to this issue, which I haven’t looked at yet.

2] Assumptions: When I was applying to SymPy for Google Summer of Code, I saw this awesome proposal by Tom Bachmann, which seemed Greek and Latin to me then (some parts of it still do, but never mind). The bottom line is that \frac{ds}{dr} sometimes simplifies to a quadrature only when there are certain assumptions on r and s. Let’s take a random example: suppose \frac{ds}{dr} = \log(r^{s}) - s*\log(r); this reduces to zero only when r > 0. Since the input variable is x, giving assumptions on x doesn’t seem to affect the assumptions on r.

3] There are some cases when the final expression cannot be solved explicitly for y, like this: https://gist.github.com/Manoj-Kumar-S/6201842 . I think the best way would be to return it as it is.

4] Recursion: Take the case of this wonderful ODE, x^{2}*(-f(x)^{2} + \frac{df}{dx}) - a*x^{2}*f(x) + 2 - a*x. Calculating the infinitesimals gives me \xi = x^{2} and \eta = a*x^{2}*f(x) - 2 + a*x + x^{2}*f(x)^{2}. Since the first step in solving this is the ODE \frac{dy}{dx} = \frac{\eta}{\xi}, it just gives the same ODE again.

Apart from this, I believe the rest of the code is good.

P.S: The first ten days of college are over. There have been huge disappointments, but apart from that I have enjoyed roaming outside and working on my SymPy project, and I have done nothing other than that.


by Manoj Kumar at August 11, 2013 03:21 PM

August 09, 2013

Aaron Meurer

Using Python 3 as my default Python

So I just finished my internship with Continuum. For the internship, I primarily worked on Anaconda, their free Python distribution, and conda, its free (BSD open source) package manager. I might write a blog post about conda later, but suffice it to say that I’m convinced that it is doing package management the right way. One of the major developments this summer that I helped out with was the ability for anybody to build a conda package, and a site called Binstar where people can upload them (the beta code is “binstar in beta” with no quotes). 

Another thing that happened over the summer is that Almar Klein made conda Python 3 compatible, so that it can be used with the Pyzo project, which is Python 3 only.    The way this was done was by using a single code base for Python 2 and Python 3. Thus, this became the first time I have done any heavy development on Python source that had to be Python 3 compatible from a single codebase (as opposed to using the 2to3 tool). 

Another development this summer was that SymPy was released (0.7.3). This marked the last release to support Python 2.5. Around the same time, we discussed our Python 3 situation, and how annoying it is to run use2to3 all the time. The result was this pull request, which made SymPy use a single code base for Python 2 and Python 3. Now, that pull request is hard to mull through, but the important part to look at is the compatibility file. Everything in that file has to be imported and used, because it represents things that are different between Python 2 and Python 3. Ondřej has written more about this on his blog.

In all, I think that supporting Python 2.6-3.3 (not including 3.0 or 3.1) is not that bad. The compatibility file has a few things, but thinking back, it was just that bad or worse supporting Python 2.4-2.7 (heck, back then, we couldn’t even use the all function without importing it). The situation is much better today now that we use Travis too, since any mistake is caught before the pull request is merged. The worst of course is the print function, but since that can be imported from __future__, I will be warned about it pretty fast, since print as a statement is a SyntaxError in that case. It also doesn’t take that long to get into the habit of typing () after print.

Of course, there are a lot of nice Python 3 only features that we cannot use, but this was the case for supporting Python 2.4-2.7 too (e.g., the with statement and the ternary statement were both introduced in Python 2.5).  So this is really nothing new. There is always a stick to drop the oldest Python version we support, and a lag on what features we can use. Now that we have dropped Python 2.5 support in SymPy, we can finally start using new-style string formatting, abstract base classes, relative imports, and keyword arguments after *args.

So as a result of this, I’ve come to the conclusion that Python 3 is not another language. It’s just another version of the same language. Supporting Python 2.6-3.3 is no different from supporting Python 2.4-2.7. You have to have some compatibility imports, you can’t use new language features, and you have to have good test coverage. I think that the core Python folks made a mistake by presenting Python 3 as a new language. It has made people antagonistic against Python 3 (well, that and the print function, which was another stupid mistake, because even if it was a good idea, it alone has kept too many people from switching). 2to3 was a mistake too, because it perpetuated this idea.

In the past, I have always developed against the latest version of Python: 2.6 was the best when I learned Python, and then 2.7. Even though I have had to support back to 2.4, I only used 2.4 explicitly when testing.

Well, given what I said above, the only logical thing to do is to use Python 3.3 as my main development Python. If you use Anaconda, there are basically two ways you can do this. The first is to just create a Python 3 environment (conda create -n python3 python=3), and put that first in your PATH (you also will need to add source activate python3 to your bash profile if you go this route, so that conda install will install into that environment by default). For me, though, I plan to use a Python 3 version of Anaconda, which has Python 3 as the default. The main difference here is that conda itself is written in Python 3. Aside from purity, and the fact that I plan to fix any occasional conda bugs that I come across, the other difference here is that conda itself will default to Python 3 in this case (i.e., when creating a new environment with Python like conda create -n envname python, the Python will be Python 3, not Python 2, and also it will build against Python 3 by default with conda build). Continuum does not yet make Python 3 versions of Anaconda, but there are Python 3 versions of Miniconda (Miniconda3), which is a stripped down version of Anaconda with just Python, the conda package manager, and its dependencies. You can easily install Anaconda into it though with conda install anaconda. I personally prefer to install only what I need to keep the disk usage low (on an SSD, disk space is sparse), so this is perfect for me anyway.

My recommendation is to put a Python 2 installation second in your PATH, so that you can easily call python2 if you want to use Python 2. The easiest way to do this is to create a conda environment for it (conda create -n python2 python=2) and add ~/anaconda/envs/python2 to your PATH.

So far, I have run into a few issues:

  • Some packages aren’t build for Python 3 yet in Anaconda, or they don’t support it at all. The biggest blocker in Anaconda is PySide (at least on Mac OS X), though it should be coming soon.
  • Some packages only install entry points with a “3” suffix, which is annoying. The biggest offender here is IPython. I brought up this issue on their mailing list, so hopefully they will see the light and fix this before the next release, but it hasn’t been implemented yet. I also plan to make sure that the Anaconda package for IPython installs an ipython entry point into Python 3 environments. Even so, one has to remember this when installing old versions of IPython in environments.
  • There are some bugs in conda in Python 3. Actually, I suspect that there are bugs in a lot of packages in Python 3, because people don’t develop against it, unless they have excellent test coverage. Even SymPy missed a few print statements.
  • You can’t setup.py develop against anything that uses 2to3 (like IPython).
  • It’s a little annoying working against old versions of SymPy (e.g., when digging through the git history to track something down), because I have to explicitly use Python 2. Conda makes this easier because I can just create a Python 2 environment and do source activate python2 when I am using Python 2. Or, for a one-off, I can just use python2, and keep a Python 2 environment second in my PATH. But this issue is not really new. For example, really old versions of SymPy only work with Python 2.5, because they used as as a variable name.
  • Everyone else isn’t using Python 3 yet, so if I write a script that only needs to support “the latest version of Python,” it probably needs to support Python 2.7, or else I should explicitly put /usr/bin/env python3 in the shebang line. But for SymPy, I have to be aware of how to support 2.6-3.3, so I have to know all the features that are only in some versions anyway. On the other side of things, if I run some random Python script with a shebang line, it probably is going to expect Python 2 and not Python 3, so I either have to explicitly add python2 to the command or activate a Python 2 environment.
  • Some packages just don’t support Python 3 yet. Fabric (and its main dependency, Paramiko) is the one example I have come across so far in my own work. So I have to fall back to Python 2 if I want to use them. The best thing to do here is to pitch in and help these libraries port themselves.
  • People always give code examples with print as a statement instead of a function, so I either have to fix it manually before pasting it or use Python 2. I had tried at one point to make a %print magic for IPython that would let print work like a statement in Python 3, but I never finished it. I guess I should revisit it.

I’ll update this list as I come across more issues.

In all, so far, it’s nothing too bad. Conda makes switching back to Python 2 easy enough, and dealing with these issues are hardly the worst thing I have to deal with when developing with Python. And if anything, seeing Python 2-3 bugs and issues makes me more aware of the differences between the two versions of the language, which is a good things since I have to develop against code that has to support both.


by Aaron Meurer at August 09, 2013 09:59 PM

Mary Clark

Week 8

This week I have been working on the class RootSystem. 

The first major decision that I had to make about this class was how to encode the CartanType data, for lack of a better way of phrasing it.   There are numerous ways of doing this, but I think that my solution is somewhat elegant.  Effectively, when one creates an instance of the class RootSystem, it takes an argument, cartantype.  Then, the __new__ function takes the input and sets an attribute of RootSystem called cartan_type as CartanType(cartantype).  The code is:

class RootSystem(Basic):

    def __new__(cls, cartantype):
        """
        Creates a new RootSystem object.  This method assigns an attribute
        called cartan_type to each instance of a RootSystem object.  When
        an instance of RootSystem is called, it needs an argument, which
        should be an instance of a simple Lie algebra.  We then take the
        CartanType of this argument and set it as the cartan_type attribute
        of the RootSystem instance.
        """
        obj = Basic.__new__(cls, cartantype)
        obj.cartan_type = CartanType(cartantype)
        return obj

Doing it this way means that I don’t need to check the argument to see if it is “A” or “B” or “G” or whatever, and then import the associated class. I can just call self.cartan_type.rank() or self.cartan_type.simple_root(i), which I think is quite neat.
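
For illustration, usage could eventually look something like this (hypothetical until the class is finished; CartanType already accepts strings like "A3"):

>>> rs = RootSystem("A3")
>>> rs.cartan_type.rank()
3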


I’ve also decided to add a method to the base classes that generates all of their positive roots (and since the negative roots are just minus the positive roots, we effectively have the entire root system).  I think that this will facilitate some of the addition methods that I want to implement in RootSystem.  I have so far done this for types A, B, C, and D. 


At this point, in RootSystem I have so far implemented a method that generates all the simple roots, a method for displaying the root space, and methods which return the Cartan matrix and Dynkin diagram of the Lie algebra. I am currently pondering how I want to implement some root addition methods.


So yeah, that’s been my week!  To be honest, I think it has been fairly productive, and I am quite pleased about that.


by meclark256 at August 09, 2013 07:55 PM

Chetna Gupta

Week 6 & 7

Week 6 brought me back to my coding arena: “YES, COLLEGE”. Though it was a messed-up week with tons and tons of administration work, course selections and the juniors’ arrival, I still managed to go back through Manuel Bronstein’s book to see what the “HELL” has been causing the recursion error in the code.

The code has started picking up pace, and it works for some of the simple cases I included in my previous post (Week 5), but other cases end in recursion errors. These are the cases where I have more than one monomial; here is an example to demonstrate the failure.

[Screenshot: a multi-monomial input ending in a recursion error]

There have been a lot of delays in my working on the PR, thanks to my awesome immune system, after a bout of food poisoning.

I really need to gear up now, so I have planned to update the blog as well as the PRs daily. There is a lot which I guess we would all like to see implemented for the Risch algorithm. I would really like to have a meeting soon with Aaron to discuss if we could please have daily meet-ups.
PS : Some Announcements
PS1 : YES !! I have passed the mid-term evaluation :D
PS2 : I am finally 20 !! :D Adios teens !


by chetnagupta at August 09, 2013 12:02 AM

August 04, 2013

Katja Sophie Hotz

Seventh week

I am very happy to report that I passed midterm evaluations! :)

So what happened this week? First, I implemented a (user facing) function to calculate a multivariate GCD of two polynomials over an algebraic field, building on the work of the previous two weeks. I also spent a considerable amount of time hunting bugs, but it seems I got everything in working order.

However, I realized that the polynomial ring I had been working with was not the best choice. So I went over the whole code again and rewrote everything to use polynomials in \mathbb Z[t_1, \ldots, t_k][x, z] instead of \mathbb Z[x, t_1, \ldots, t_k, z]. This made some parts easier and some parts more difficult, but overall it looks like the more natural choice. The parts that got more difficult now involve rings like \mathbb Z(t_k)[t_1, \ldots , t_{k-1}][x, z], where I have to distinguish between the cases k = 1 and k > 1. Maybe I will find a better solution for that in time.
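
To make the two layouts concrete, here is a small sketch using SymPy's public ring constructors (the names here are mine; the GCD code itself of course works on the internal representations):

from sympy import symbols
from sympy.polys.rings import ring
from sympy.polys.domains import ZZ

t1, t2 = symbols('t1 t2')

# flat layout: one ring Z[x, t1, t2, z]
R_flat, X, T1, T2, Z = ring("x,t1,t2,z", ZZ)

# nested layout: x and z over polynomial coefficients, i.e. Z[t1, t2][x, z]
R_nested, Xn, Zn = ring("x,z", ZZ[t1, t2])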

In the next few days I want to finish this part of my project so I can start with the factorization algorithm, which will be the biggest and final part of my project.


by katjasophie at August 04, 2013 08:59 PM

Sachin Joglekar

GSoC Midterm Report (Week 7)

Week 7 of GSoC is over, which means I am now in the 'second half' of the coding period. The midterms were conducted by Google from July 29th to August 2nd, and I am happy to report I passed them :-).

As I had predicted in my last blog post, this week wasn't very exciting, at least as far as the project is concerned. I am now at Goa, and classes have started. However, I did get a few things done - the Dyadic framework is now almost complete, with all the classes functioning as expected (from the point of view of dyadic operations). I also added all the special operations like cross, dot, express and time-differentiation. For now, just like last week, they use SymPy Symbols as temporary stand-ins.
Moreover, I finished writing all the tests for my first PR. Gilbert reviewed them, and I made a few changes (mostly API-wise).
I also tested my MovingRefFrame implementation of vector time-derivatives on my hacked mechanics module, and I am happy that it worked fine. Thus, I now have the proof-of-concept for all the work I have done so far.
I have also submitted a PR to get the logic module docs into the online development documentation, but I am quite sure I am going wrong somewhere. I have pinged Aaron, and I hope it gets solved by the end of this week.

To sum up my work till now, this is what I got done-

1) MovingRefFrame class - The basic class to represent moving reference frames in 3-D space. Inherits from the CoordSysRect class of the new vector module.

2) Particle class - Class to represent particles (mass but no volume) in space. Each particle is associated with a frame of its own, and all its parameters are calculated with respect to it.

3) Miscellaneous mechanics functions, like the ones to obtain the position vector given a time-dependent vector and boundary conditions.

4) Dyadic framework - Classes Dyadic, BaseDyadic, DyadicAdd, DyadicMul. All these classes are based on relevant SymPy core classes and build dyadic-based functionality on top of them.

Currently, I am reading Griffiths' book (a very helpful suggestion from Aditya, my college mate) and taking notes for the EM module. Since the development of the new mechanics core is now nearing completion, I can focus time on working on the new addition to sympy.physics.

Hence, the coming week will be spent on polishing off the mechanics work, making sure all the required parts are in place, and maybe helping Prasoon with some functions for vector fields (so that he can concentrate on vector-space integration).
Plus, I have to design the vector-dyadic interfacing. Basically, how dyadics will be instantiated from vector 'outer' functions. Obviously, having a dyadic framework is useless without fitting it to the vector module.
And last but not least, I plan to code the electrostatic functions for the EM sub-module by this weekend. I hope I succeed.

Have a great week :-)

by Sachin Joglekar (noreply@blogger.com) at August 04, 2013 02:54 PM

Prasoon Shukla

GSoC Weeks 6 and 7

I hadn’t realized that I didn’t write the blog post last week. Sorry! Anyway, let us describe what was done in the last two weeks.

Okay, so first off, I passed the midterm evaluation! I hoped I would, of course, but I didn't know for sure. For the evaluations, I was hoping to get the basic framework of the module working completely. I was pretty much there, but one particular method – the express method for vectors – wasn't coming along well; so badly, in fact, that at one point I wasn't even sure what to do.

Taking a few steps back: in week 6, I was primarily testing and fixing things, hoping that I'd be ready with the basic framework working and ready to go. But alas, it was in vain. In the meantime, Gilbert did quite a bit of restructuring of the design of the existing code and made a PR against my branch. I didn't know about this PR for at least 4 days from the time it was made; Sachin brought it to my attention just as week 6 was ending. By then, I had made some commits of my own. Merge conflicts ensued. I reported this to Gilbert, and he fixed the merge conflicts. But by the time he did that, I had made some more commits! I hadn't pushed them, though, so Gilbert didn't know about that. Anyway, Gilbert said, and I realized, that the longer I tarried in merging, the more difficult it would get. So, I decided to leave the completion of the express method till later and got on with the merge.

Let me describe some of these structural changes. The first thing is that we now have a new class, BaseVector, to represent basis vectors. Just as a reminder, this class was called Vector up until the changes. Now, the Vector class is just a foundation class – all classes representing vectors, namely BaseVector, VectAdd and VectMul, inherit from this Vector class. This is a more uniform structure, I think. It also reduced the clutter in the code.

Among other changes, we no longer have exec statements in the initialization of the CoordSys classes. Instead, we are using the __setattr__ special function. This, again, helped reduce clutter.
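
To illustrate the idea (just a sketch of mine, not the actual module code; the BaseVector constructor signature here is made up):

class BaseVector(object):
    # stand-in for the real BaseVector discussed above
    def __init__(self, label, system):
        self.label = label
        self.system = system

class CoordSysRect(object):
    def __init__(self, name):
        self.name = name
        # previously done with exec("self.%s = ..." % label); setattr is cleaner
        for label in ('x', 'y', 'z'):
            setattr(self, label, BaseVector(label, self))

# usage: R = CoordSysRect('R'); R.x is then the first basis vector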

Anyway, by the end of week 6, Stefan had asked me for a date – so that he could test the orientation methods. I was overly optimistic (as he was keen to point out) and assessed the situation to give him working orient methods by the end of Sunday. Right around this time, the new PR by Gilbert came to my attention. It needed merging, and I couldn't get the work done in time. I wrote an email to Stefan, naturally quite abashed, telling him that I couldn't deliver the code by the previously agreed time. Stefan pointed me to a study in social science which explored the lack of pessimism in students. It was quite apt, I tell you. Stefan wished, as did I, that I would learn, and I quote, "the art of pessimism".

Later, when I had merged Gilbert's work into my branch, I decided to get back on track. The first order of business was to fix the orientation methods. Now, the orientation methods rely heavily on the express method, so I decided to deal with that one first. I spent a day writing and testing things. I was confident that in a day or two at most, things would be good to go. But it seems that the art of pessimism was yet to be learned. Though this time, the problem was not an overestimation of my own abilities; rather, it was an oversight that caused further delay. I wasn't considering a certain case of vectors – and that had allowed me to cut a few corners. When that particular case finally dawned on me, well, I was quite angry with myself, to say the least. This particular case meant that I'd have to write a more general approach to a couple of methods – among them separate, dot and express. Now, this meant quite some work. I fixed the separate method first – but I'm not at all sure what the fixed method will achieve, or whether it will achieve anything; the new case seems to have made this method redundant. But we shall see. Next in line was the express method – which is the foundation of a very large part of the code. Now, I was at a loss on how to implement this method in the most general way, thanks to the new case that I talked about earlier. I had one idea from a computer graphics course I had done, but I wasn't quite sure about its feasibility. So, I ran for the hills :) I asked a question stating my problems on the ML and got a reply from Sachin. Apparently, there's an easier method. Gilbert made some further comments and finally, I had a solution – I hope it will work. I am currently coding it away, though not without some difficulty (read: academics).

Sheesh! That was a long post. I hope to have covered all the major work done in the past two weeks. I hope to have a working implementation ready in a few days. Note that I haven't said when – I seem to be learning "the art of pessimism".

:)


by musingsofafriend at August 04, 2013 02:42 PM

Manoj Kumar

GSoC – Mid term report

Hi, I passed the midterm evaluations of GSoC. This is what I have accomplished till now.

1] Built the basic infrastructure of the pde module, and added hints that can solve general partial differential equations with constant coefficients.

2] Added a function, infinitesimals, that tries to find the required infinitesimals of any given first order ODE. The following heuristics have been implemented.

a] abaco1_simple (Assumes one of the infinitesimals to be zero, and the other to be a function of x or y)
b] abaco1_product (Assumes one of the infinitesimals to be zero, and the other to be a product of a function of x and a function of y)
c] abaco2_similar (Assumes both infinitesimals to be functions of x or y)
d] abaco2_unique_unknown (This is when one infinitesimal is a function of x and the other is a function of y)
e] abaco2_unique_general (This is a more general case of the above-mentioned hint)
f] linear (Infinitesimals are of the form ax + by + c)
g] bivariate (Infinitesimals are bivariate; a more general form of the above-mentioned hint)
h] function_sum (When the infinitesimals are the sum of two functions)
i] chi (Finds a polynomial \chi , which helps in calculating the infinitesimals directly)

3] Added a function, checkinfsol, which helps in checking whether given infinitesimals are actual solutions of the determining PDE of the ODE.
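
Usage is intended to look roughly like this (a sketch based on the PR; the exact API may still change):

from sympy import Function, symbols
from sympy.solvers.ode import checkinfsol, infinitesimals

x = symbols('x')
f = Function('f')
eq = f(x).diff(x) - x**2*f(x)  # a simple first order ODE

infs = infinitesimals(eq, hint='abaco1_simple')  # candidate infinitesimals
print(checkinfsol(eq, infs))  # verify they satisfy the determining PDE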

As far as this week went, I couldn't do much, but I managed to add a hint which helps in solving linear PDEs with variable coefficients. The general form of such a PDE is

a(x, y)\frac{\partial u}{\partial x} + b(x, y)\frac{\partial u}{\partial y} + c(x, y)u(x, y) = d(x, y) , where a(x, y), b(x, y), c(x, y) and d(x, y) are arbitrary functions in x and y. This can be done using the method of characteristics. However, a simpler way, according to a paper that I skimmed through, is to convert the PDE into an ODE in one variable. The change of coordinates takes \xi to be the constant in the solution of the differential equation \frac{dy}{dx} = \frac{b}{a} , and \eta = x is selected such that the Jacobian doesn't become zero (I don't know exactly why, though). This is the Pull request, https://github.com/sympy/sympy/pull/2346
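
Solving one of these PDEs is then meant to look something like this (again a sketch, not a guaranteed result; the output shape may differ):

from sympy import Function, symbols, pprint
from sympy.solvers.pde import pdsolve

x, y = symbols('x y')
u = Function('u')(x, y)

# a(x, y) = x, b(x, y) = y, c(x, y) = 1, d(x, y) = 0
eq = x*u.diff(x) + y*u.diff(y) + u
pprint(pdsolve(eq))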

TODO’s for this week

1. Get the heuristics PR merged.
2. Try integrating the PDE hint with the ODE hints (I can foresee a few problems here)

I guess that’s it. Cheers to a new life.


by Manoj Kumar at August 04, 2013 07:51 AM

Ondřej Čertík

How to support both Python 2 and 3

I'll start with the conclusion: making a backwards incompatible version of a language is a terrible idea, and it was a bad mistake. This mistake was somewhat corrected over the years by eventually adding features to both Python 2.7 and 3.3 that actually allow running a single code base on both Python versions --- which, as I show below, was discouraged by both Guido and the official Python documents (though the latest docs mention it)... Nevertheless, a single code base fixes pretty much all the problems and it actually is fun to use Python again. The rest of this post explains my conclusion in great detail. My hope is that it will provide useful tips and examples to other Python projects on how to support both Python 2 and 3, as well as encourage future language designers to keep languages backwards compatible.

When Python 3.x got released, it was pretty much a new language, backwards incompatible with Python 2.x, as it was not possible to run the same source code in both versions. I was extremely unhappy about this situation, because I simply didn't have time to port all my Python code to a new language.

I read the official documentation about how the transition should be done, quoting:

You should have excellent unit tests with close to full coverage.

  1. Port your project to Python 2.6.
  2. Turn on the Py3k warnings mode.
  3. Test and edit until no warnings remain.
  4. Use the 2to3 tool to convert this source code to 3.0 syntax. Do not manually edit the output!
  5. Test the converted source code under 3.0.
  6. If problems are found, make corrections to the 2.6 version of the source code and go back to step 3.
  7. When it's time to release, release separate 2.6 and 3.0 tarballs (or whatever archive form you use for releases).

I've also read Guido's blog post, which repeats the above list and adds an encouraging comment:

Python 3.0 will break backwards compatibility. Totally. We're not even aiming for a specific common subset.

In other words, one has to maintain a Python 2.x code base, then run the 2to3 tool to get it converted. If you want to develop using Python 3.x, you can't, because all code must be developed using 2.x. As to the actual porting, Guido says in the above post:

If the conversion tool and the forward compatibility features in Python 2.6 work out as expected, steps (2) through (6) should not take much more effort than the typical transition from Python 2.x to 2.(x+1).

So sometime in 2010 or 2011 I started porting SymPy, which is now a pretty large code base (sloccount says over 230,000 lines of code, and in January 2010 it said almost 170,000 lines). I remember spending a few full days on it, and I just gave up, because it wasn't just changing a few things, but pretty fundamental things inside the code base, and one cannot do it half-way; one has to get all the way through and then polish it up. We ended up using one full Google Summer of Code project for it; you can read the final report. I should mention that we use metaclasses and other things that make such porting harder. Conclusion: this was definitely not "the typical transition from Python 2.x to 2.(x+1)".

Ok, after months of hard work by a lot of people, we finally have a Python 2.x code base that can be translated using the 2to3 tool and it works and tests pass in Python 3.x.

The next problem is that Python 3.x is pretty much like a ghetto -- you can use it as a user, but you can't develop in it. The 2to3 translation takes over 5 minutes on my laptop, so any interactivity is gone. It is true that the tool can cache results, so the next pass is somewhat faster, but in practice this still turns out to be much much worse than any compilation of C or Fortran programs (done for example with cmake), both in terms of time and in terms of robustness. And I am not even talking about pip issues or setup.py issues regarding calling 2to3. What a big mess... Programming should be fun, but this is not fun.

I'll be honest, this situation killed a lot of my enthusiasm for Python as a platform. I learned modern Fortran in the meantime and with admiration I noticed that it still compiles old F77 programs without modification and I even managed to compile a 40 year old pre-F77 code with just minimal modifications (I had to port the code to F77). Yet modern Fortran is pretty much a completely different language, with all the fancy features that one would want. Together with my colleagues I created a fortran90.org website, where you can compare Python/NumPy side by side with modern Fortran, it's pretty much 1:1 translation and a similar syntax (for numerical code), except that you need to add types of course. Yet Fortran is fully backwards compatible. What a pleasure to work with!

Fast forward to last week. A heroic effort by Sean Vig who ported SymPy to single code base (#2318) was merged. Earlier this year similar pull requests by other people have converted NumPy (#3178, #3191, #3201, #3202, #3203, #3205, #3208, #3216, #3223, #3226, #3227, #3231, #3232, #3235, #3236, #3237, #3238, #3241, #3242, #3244, #3245, #3248, #3249, #3257, #3266, #3281, #3191, ...) and SciPy (#397) codes as well. Now all these projects have just one code base and it works in all Python versions (2.x and 3.x) without the need to call the 2to3 tool.

Having a single code base, programming in Python is fun again. You can choose any Python version, be it 2.x or 3.x, and simply submit a patch. The patch is then tested using Travis-CI, so that it works in all Python versions. Installation has been simplified (no need to call any 2to3 tools and no more hacks to get setup.py working).

In other words, this is how it should be, that you write your code once, and you can use any supported language version to run it/compile it, or develop in. But for some reason, this obvious solution has been discouraged by Guido and other Python documents, as seen above. I just looked up the latest official Python docs, and that one is not upfront negative about a single code base. But it still does not recommend this approach as the one. So let me fix that: I do recommend a single code base as the solution.

The newest Python documentation from the last paragraph also mentions:

Regardless of which approach you choose, porting is not as hard or time-consuming as you might initially think.

Well, I encourage you to browse through the pull requests that I linked to above for SymPy, NumPy or SciPy. I think it is very time consuming, and that's just converting from 2to3 to a single code base, which is the easy part. The hard part was to actually get SymPy to work with Python 3 (as I discussed above, that took a couple of months of hard work), and I am pretty sure it was pretty hard to port NumPy and SciPy as well.

The docs also says:

It /single code base/ does lead to code that is not entirely idiomatic Python

That is true, but our experience has been that with every Python version that we drop, we also delete lots of ugly hacks from our code base. This has been true for dropping support for 2.3, 2.4 and 2.5, and I expect it will also be true for dropping 2.6 and especially 2.7, when we can simply use the Python 3.x syntax. So not a big deal overall.

To sum this blog post up: as far as I am concerned, pretty much all the problems with supporting Python 2.x and 3.x are fixed by having a single code base. You can read the pull requests above to see how to implement things (like metaclasses, and other fancy stuff...). Python is still quite the same language: you write your code, you use a Python version of your choice, and things will just work. Not a big deal overall. The official documentation should be fixed to recommend this approach, and deprecate the other approaches.
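
For instance, the metaclass incompatibility (Python 2's __metaclass__ attribute vs. Python 3's metaclass= keyword) is typically handled with a small helper along these lines (modeled on the well-known six-style shim, not necessarily the exact code in those pull requests):

def with_metaclass(meta, *bases):
    # A temporary metaclass replaces itself with the real one as soon as
    # the class body is executed, so the same source works on 2.x and 3.x.
    class metaclass(meta):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})

# usage: class MyClass(with_metaclass(SomeMeta, SomeBase)): ...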

I think that Python is great and I hope it will be used more in the future.

Written with StackEdit.

by Ondřej Čertík (noreply@blogger.com) at August 04, 2013 07:48 AM

Thilina Rathnayake

Holzer reduction and some modifications

Hi All, this week I managed to implement Holzer's reduction and change the behaviour of the DE module a little bit. First, let's take a look at Holzer's reduction.

Holzer Reduction

Holzer reduction is concerned with reducing solutions of the ternary quadratic normal equation ax^2 + by^2 + cz^2 = 0. Holzer's theorem says that if the above equation is solvable, there is always a solution (x, y, z) such that |x| \leq \sqrt{|bc|}, |y| \leq \sqrt{|ac|}, |z| \leq \sqrt{|ab|}. The algorithm for this is explained in [1]. Below is a rough sketch. Before applying the algorithm we have to make abc square-free, and we assume that a, b are positive and c is negative (we can always select a, b, c in such a way that this is the case). Suppose there is a solution (x_{0}, y_{0}, z_{0}) that is not Holzer reduced.

  • If c is even, set k = c/2 and let u_{0}, v_{0} be any solution of the equation k = uy_{0} - vx_{0}.
  • If c is odd, let k = 2c and let u_{0}, v_{0} be any solution of the equation c = uy_{0} - vx_{0}.
  • Let w be the nearest integer to -(au_{0}x_{0} + bv_{0}y_{0})/(cz_{0}).
  • Then the following expressions for x, y, z give a solution such that z < z_{0}:
    x = (x_{0}(au_{0}^2 + bv_{0}^2 + cw^2) - 2u_{0}(au_{0}x_{0} + bv_{0}y_{0} + cwz_{0})) / k
    y = (y_{0}(au_{0}^2 + bv_{0}^2 + cw^2) - 2v_{0}(au_{0}x_{0} + bv_{0}y_{0} + cwz_{0})) / k
    z = (z_{0}(au_{0}^2 + bv_{0}^2 + cw^2) - 2w(au_{0}x_{0} + bv_{0}y_{0} + cwz_{0})) / k

Continuing in this manner, one eventually finds a solution such that z \leq \sqrt{|ab|}.
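
As a small illustration (my own sketch, not the module's code), the condition the descent terminates on is just a bound check:

from sympy import Abs, sqrt

def is_holzer_reduced(x, y, z, a, b, c):
    """True if (x, y, z) satisfies the Holzer bounds for
    a*x**2 + b*y**2 + c*z**2 = 0."""
    return bool(Abs(x) <= sqrt(Abs(b*c)) and
                Abs(y) <= sqrt(Abs(a*c)) and
                Abs(z) <= sqrt(Abs(a*b)))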

Some changes to DE module

I made some changes to the structure of the DE module too. Now every type of equation returns a tuple ordered according to the sorted order of the variable names used in the equation. Ternary quadratic forms and linear equations return only one tuple, the parametric solution, and the binary quadratic equation returns a set of tuples. This format may change in the future. The most important change I made to the DE module is that, before solving any given equation, the DE module now checks whether it can be factored, and if so, solves those factors separately. For example, given the equation y^2 - 7xy + 4yz = 0, it will solve y = 0 and y - 7x + 4z = 0 separately and combine the results. I also managed to implement a general method for testing that checks whether the solutions returned by the various methods satisfy the original equation, so I hope to get rid of the other redundant methods like check_ternary_quadratic() and solution_ok_quadratic() in the test file. Thanks should go to Aaron for proposing such a methodology.

Future work

Pernici has pointed out that some of the solution methodologies for the binary quadratic form can be improved so that they take less time. He has also done a great job implementing solutions for the quadratic congruence x^2 \equiv a (mod \ b). I hope to use those results when his PR gets merged into master. He proposed a method due to Cornacchia to improve the speed of the solutions, but after a little surveying, I found that implementing solutions for ax^2 + bxy + cy^2 = N can be used to improve the speed. The current algorithm for solving the binary quadratic equation ax^2 + bxy + cy^2 + dx + ey + f = 0 employs a general solving method, so it misses out on some speed enhancements for the special case ax^2 + bxy + cy^2 = N. My plan is: if ax^2 + bxy + cy^2 + dx + ey + f = 0 can be converted to the form ax^2 + bxy + cy^2 = N, solve it using the methods for that form; otherwise apply the general method. I found a good reference [2] for this. Huge thanks should go to Pernici for pointing out the improvements.

References

[1] Efficient Solution of Rational Conics, J. E. Cremona and D. Rusin, Mathematics of Computation, Volume 00, Number 0.
[2] Binary Quadratic Forms: An Algorithmic Approach, J. Buchmann and U. Vollmer, Springer-Verlag, Berlin, Heidelberg, 2007.


by Thilina Rathnayake at August 04, 2013 04:49 AM

August 01, 2013

Mary Clark

Week 7

(Or maybe week 6. See last week’s post).

This week, I’ve started one new PR (https://github.com/sympy/sympy/pull/2344) which includes the Dynkin diagram functionality for all classes. I had PR 2297 merged, and I’m pretty sure I’ve fixed the merge conflicts for PR 2337 (which is the Cartan matrix stuff).

Talking with David this week, he suggested that I include more technical information and sources in my code, to help explain some of the concepts in Lie algebras. I thought this was a very good idea, and so I’ve started writing some more detailed docstrings, including sources.

Additionally, I changed some of the classes in my code so that instead of using __init__ they use __new__ instead, as is the norm in the sympy code base.

Currently, I am contemplating the root system class I want to write, and what it should contain/accomplish. First off, it will need to generate all the simple roots for a given Lie algebra. This is easy enough to accomplish using functions from the individual classes, though I need to figure out how exactly I want to go about importing data from the files type_a.py, type_b.py, etc. I think that it would be best to store these simple roots in a dictionary, with keys like alpha1, alpha2, etc., and the values being vector representations of the roots.
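
In code, the dictionary idea might look something like this (a hypothetical sketch; rank() and simple_root(i) are the methods already available on the individual type classes):

def all_simple_roots(lie_algebra):
    # keys "alpha1", "alpha2", ... map to the vector form of each simple root
    return dict(("alpha" + str(i), lie_algebra.simple_root(i))
                for i in range(1, lie_algebra.rank() + 1))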

I’d also like to have an addition method for adding simple roots together, and as David suggests, a method which would only add roots if the sum was a root.

I’m still internally debating if I want to have the RootSystem class hold all the roots (well, hold all the positive roots, as a reflection would then generate the negative roots) or not. I think it may make some of the above methods simpler, but then it might also just be overkill.

I’ll also have a method that returns the root space of a given root system, which is just the span of all the simple roots.

So, yeah, that’s what I’m thinking and planning at the moment. Hopefully I’ll be starting this stuff this weekend.


by meclark256 at August 01, 2013 09:16 PM

July 31, 2013

Tarun Gaba

GSoC Report : Week 5, 6

The last two weeks have been a little too hectic. I have been working on writing the source code for the classes on the Python side, and they are nearing completion.

This is the week for midterm evaluations, and as per the timeline, I was supposed to have completely finished the Python part by now. Although most of the stuff is complete, I have one or two days of work left.
I have finished the Shape and VisualizationFrame classes. Yesterday I also finished writing the complete code for the Camera classes.
There are two kinds of Camera: PerspectiveCamera and OrthographicCamera.

The Camera class inherits from the VisualizationFrame class, which makes it inherit the generation of simulation data. This is very useful, because we can attach a camera to any moving object in our system and hence have moving cameras, which is an aid to effective visualizations.

Another approach we have been thinking about is the possibility of attaching multiple cameras to a system, effectively tied to different moving objects, which can be switched by a keystroke in the browser.
It would be kinda cool to see an animation from one camera and then switch to a different camera in the middle of the animation, without any extra pain, with a keystroke.

Anyway, the priority is to get the basic functionality up, and then we can add these and many other features as time permits.

Another concept which requires a little brainstorming is the lighting. I am thinking of letting the user choose where and what type of lights they want in their system.

These lights can be added in the Scene class, and most probably they can also inherit from the VisualizationFrame class, giving them the capability of motion as well, rather than being static lights.

by TARUN GABA (noreply@blogger.com) at July 31, 2013 02:44 PM

July 29, 2013

Thilina Rathnayake

Descent method with Gaussian reduction.

Sorry for a very late blog post; I had a really busy week, and I finally managed to find some time to write about the progress I made last week. I implemented an improved version of the descent method which uses Gaussian reduction. I found this algorithm in [1]. Here is a sketch of the algorithm.

descent(a, b)

1. If |a| > |b| then swap a and b, solve the resulting equation, then swap y and z in the solution obtained.
2. If b = 1 then set (x, y, z) = (1, 1, 0) and stop.
3. If a = 1  then set (x, y, z) = (1, 0, 1) and stop.
4. If b = -1 there is no solution (since a must be -1).
5. If b = -a then set (x, y, z) = (0, 1, 1) and stop.
6. If b = a then let (x_{1}, y_{1}, z_{1}) be a solution of X_{1}^2 + Z_{1}^2 = aY_{1}^2 , set (x, y, z) = (ay_{1}, x_{1}, z_{1}) and stop.
7. Let w be a solution of x^2 \equiv a (mod \ b) with w \leq |b|/2, and set (x_{0}, z_{0}) = (w, 1), so that x_{0}^2 - az_{0}^2 \equiv 0 (mod \ b).
8. Use lattice reduction to find a new nontrivial solution (x_{0}, z_{0}) to the congruence X^2 - aZ^2 \equiv 0 (mod \ b) with x_{0}^2 + |a|z_{0}^2 as small as possible.
9. Set t = (x_{0}^2 - az_{0}^2)/b and write t = t_{1}t_{2}^2 with t_{1} square-free.
10. Let (x_{1}, y_{1}, z_{1}) be a solution to X^2 - aZ^2 = t_{1}Y^2 then
(x, y, z) = (x_{0 }x_{1} + az_{0}z_{1}, t_{1}t_{2}y_{1}, z_{0}x_{1} + x_{0}z_{1})

I spent a lot of time finding an algorithm for lattice reduction. There were generalized algorithms, but I looked for something faster, because we are only concerned with a lattice with two basis vectors. I found the Gaussian reduction algorithm in [2], which uses a method specific to this case.
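
For reference, here is a sketch of that two-basis (Lagrange-Gauss) reduction, using the weighted norm x^2 + |a|z^2 from step 8 (my own reading of [2], not the PR code):

def gauss_reduce(u, v, w):
    """Reduce the basis {u, v} (pairs of ints) with respect to the inner
    product <p, q> = p[0]*q[0] + w*p[1]*q[1], where w = |a|."""
    def ip(p, q):
        return p[0]*q[0] + w*p[1]*q[1]
    while True:
        if ip(u, u) > ip(v, v):
            u, v = v, u  # keep u the shorter vector
        m = int(round(float(ip(u, v)) / ip(u, u)))
        if m == 0:
            return u, v  # no integer multiple of u shortens v further
        v = (v[0] - m*u[0], v[1] - m*u[1])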

References

[1] Efficient Solution of Rational Conics, J. E. Cremona and D. Rusin, Mathematics of Computation, Volume 00, Number 0.
[2] Gaussian Lattice Reduction [online]. Available: http://home.ie.cuhk.edu.hk/~wkshum/wordpress/?p=404


by Thilina Rathnayake at July 29, 2013 05:59 AM

July 28, 2013

Katja Sophie Hotz

Sixth week

This week I implemented most of the modular GCD algorithm for algebraic function fields [1], which I described in my last post. At the moment it works for univariate polynomials over algebraic extensions of \mathbb Q. Almost everything is in place to also calculate GCDs of polynomials over algebraic function fields, but at the moment there is no (easy) way to represent them, since there is no “AlgebraicFunctionField” domain in sympy yet. However, if we wanted to compute the GCD of two polynomials f, g \in \mathbb Q(t_1, \ldots, t_k)[z] / \langle m(z) \rangle [x] right now, we could compute their primitive associates \check f, \check g \in \mathbb Z[x, t_1, \ldots, t_k, z] and the minimal polynomial in \mathbb Z[t_1, \ldots, t_k, z] manually and then directly use the private function _func_field_modgcd_m.

So how to handle multivariate inputs? Van Hoeij and Monagan write that for f, g \in \mathbb Q(t_1, \ldots, t_k)[z] / \langle m(z) \rangle [x_1, \ldots, x_n] we have to make three additional steps. First, we view f and g as univariate polynomials in \mathbb Q(t_1, \ldots, t_k)[z] / \langle m(z) \rangle [x_2, \ldots, x_n][x_1] and calculate the GCD of their contents, c_h. Here, we have to do multivariate GCD computations in one variable less. After dividing out c_h, we treat the variables x_2, \ldots, x_n as parameters, i.e. we view f and g as polynomials over \mathbb Q(t_1, \ldots, t_k, x_2, \ldots, x_n)[z] / \langle m(z) \rangle and compute their univariate GCD h. Finally, we divide out the content of h, which is again viewed as a polynomial in \mathbb Q(t_1, \ldots, t_k)[z] / \langle m(z) \rangle [x_2, \ldots, x_n][x_1], and multiply it by the correct content c_h.

Next week I plan to cover the multivariate case as well, write tests and documentation and clean up the code.

[1] http://www.cecm.sfu.ca/~mmonagan/papers/AFGCD.pdf


by katjasophie at July 28, 2013 11:31 AM

Manoj Kumar

Half the “summer” gone by

Hello! Before I blog about my SymPy work: it's going to be an exciting new semester in college (I hope), and a set of new interesting courses (or courses with interesting names, at least) to be bunked. Last semester just flew away; I have only the faintest idea of what happened, and I'm just hoping this time it won't be the same. The summer wasn't any different, and I have no idea how it went by. I had thought maybe I could learn something other than my SymPy project, like revising my C concepts, or maybe the basics of ML, but if there is anything that I suck at, it is definitely time management; for some strange reason I prefer to while away entire mornings and afternoons listening to music, on Quora and Facebook, and in the evenings, when the guilty conscience of having done nothing pricks me, I start to write some SymPy code. There are of course other things that I suck at; however, this post is going to be about my SymPy project and not about the things that I suck at and don't.

Well, poor jokes apart, I guess this was the most productive week of my GSoC project (did it have something to do with the mid-term deadline?). After numerous changes, I finally got my Sphinx PR https://github.com/sympy/sympy/pull/2282 and refactor PR https://github.com/sympy/sympy/pull/2286 merged. Other than that, I read up on five new heuristics that Raoul gave me, and pushed them in a single PR, https://github.com/sympy/sympy/pull/2308. Since almost all of the algorithms are pretty straightforward, I'll just focus on the one that I found most interesting, and the one that I found, well, not so interesting.

1. Linear:

This one assumes \xi to be ax + bf(x) + c and \eta to be fx + gf(x) + h. This is similar to the bivariate heuristic, except for the fact that h need not be a rational function. This means that even cases in which the exponents are symbolic constants can satisfy this heuristic. Substituting \xi and \eta , the PDE is simplified to f + (g - a)h - bh^{2} - (ax + by + c)\frac{\partial h}{\partial x} - (fx + gy + h)\frac{\partial h}{\partial y} = 0. As usual, grouping the coefficients and using solve, one can get the values of the constants a, b, c, f, g, h.

2. Abaco2_unique_general:

This algorithm looks like one of those huge formulae in my mechanical engineering exams (the ones nobody has any idea how they came about, where you just substitute things and get the answer; I get most of them wrong anyway). This gist explains the algorithm better than I could: https://gist.github.com/Manoj-Kumar-S/6095045

I guess that's it for now. I am waiting for Sean to give his comments before I can go any further in my project. Cheers.


by Manoj Kumar at July 28, 2013 07:28 AM

July 27, 2013

Sachin Joglekar

GSoC Week 6: First half (almost) done

First, about the work done this week. It took me some time to set up my programming environment and get used to Ubuntu's interface. I fumbled around a bit, messing up some things along the way, but now I can say I am quite comfortable... at least with the basics.
For one, I got the docs for my work on the logic module merged into the master docs. Aaron also pointed me to the method for merging those changes into the development version of the SymPy docs. I will try and get it done in a day or two, before I leave for my campus.
Second, Gilbert and I submitted a PR to Prasoon's branch with some changes that we made to his code. Since the number of merge conflicts is high, he will most probably be merging the relevant parts manually.
Third, I started working on the code for dyadics. Initially, I submitted a slightly modified version of the earlier Dyadic class, but Gilbert soon made me realise that compatibility with SymPy's core would mean a lot more than just a few changes here and there. Sigh.
So, I am now busy working on classes for dyadics, compatible with SymPy's global architecture (roughly as sketched below):
1) Dyadic(Expr), the superclass - all operations and initializations will be handled from here
2) BaseDyadic(Dyadic) - the class to represent basic components of dyadics, things like (R1.x|R2.y)
3) DyadicAdd(Add, Dyadic) and DyadicMul(Mul, Dyadic) - additions and multiplications of dyadics
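
A bare-bones sketch of that hierarchy (illustrative only; the real classes carry much more logic):

from sympy.core.add import Add
from sympy.core.expr import Expr
from sympy.core.mul import Mul

class Dyadic(Expr):
    """Superclass; common dyadic operations are dispatched from here."""

class BaseDyadic(Dyadic):
    """An elementary dyad built from two base vectors, e.g. (R1.x|R2.y)."""

class DyadicAdd(Add, Dyadic):
    """Sums of dyadics, reusing SymPy's Add machinery."""

class DyadicMul(Mul, Dyadic):
    """Scalar multiples of dyadics, reusing SymPy's Mul machinery."""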

The last two classes are a little shaky for now, and the code I have submitted to the PR as of this writing makes that quite evident. In spite of that, I have got the basic operations - add, sub, mul, div and some others like 'and' - to work as expected (conceptually). To see how it looks as of now, you can see this (a real SymPy session with mock arguments). However, it's quite obvious that getting these new classes to behave exactly according to SymPy's way of doing things and return accurate results is going to take quite some time, maybe a week more. Not to forget docs, doc examples and tests.

Anyways, I can't believe I am almost at the end of the first half of my GSoC period. I have learnt *quite* a lot - well, that's obvious from my blog posts - not just vector math, but a bunch of Python as well. I am happy with the progress till now, though nothing has been merged yet. But well, Prasoon's project and mine are such that when things do get merged, a huge chunk will go in together.
I plan to get the mechanics core done perfectly before I start with the EM module... and by 'done perfectly', I mean getting the main code merged, along with a lot of commentary-style documentation on the new module. I also have to enquire about putting deprecation warnings on the old module - I will have to consult the mailing list for this.

As mentioned earlier, I will be moving to Goa on Tuesday, so that period is going to be quite busy with getting things in order @the insti. Then I can start working again.
The mid-term GSoC evaluations will be done from 29th to 2nd of August, after which I will get my first big payment in 4-5 days :-)

Thats all for now!
Have a great week, and 'stay tuned' :-D.

by Sachin Joglekar (noreply@blogger.com) at July 27, 2013 06:32 PM

July 25, 2013

Mary Clark

Week 6

Or is it week 5? I’m no longer sure. My ability to count has decreased dramatically as my number of years studying mathematics has increased.

Excitingly, this week I've had two PRs merged, 2259 and 2237. Other than that, this week has been fairly slow. I've been battling illness and haven't accomplished overly much. I've generally been cleaning up code for all the types (type_A and type_B are done, since those were in the PRs), and I've made definite progress with types C-F. I also think I've figured out the ASCII art for the Dynkin diagrams.

To do the Dynkin diagrams, I learnt about the join function for strings in Python, which made things 100 times easier. For instance, the code to generate the Dynkin diagram of A_n is:

>>> diag = "—".join("0" for i in range(1, n+1)) + "\n"
>>> diag += " ".join(str(i) for i in range(1, n+1))

For example, for A_3 it produces a Dynkin diagram that looks like:
0—0—0
1 2 3
which I'm quite happy with. I thought that I was going to have to use a loop, but the join command is much more elegant. The code for the other types looks very similar.

So yeah, that’s pretty much where I am. I seem to be getting over this illness, so I hope that I will be much more productive this weekend.


by meclark256 at July 25, 2013 06:26 PM

July 24, 2013

Fabian Pedregosa

Different ways to get memory consumption or lessons learned from ``memory_profiler``

As part of the development of memory_profiler I've tried several ways to get memory usage of a program from within Python. In this post I'll describe the different alternatives I've tested.

The psutil library

psutil is a Python library that provides an interface for retrieving information on running processes. It provides convenient, fast and cross-platform functions to access the memory usage of a Python process:

def memory_usage_psutil():
    # return the memory usage in MB
    import os
    import psutil
    process = psutil.Process(os.getpid())
    mem = process.get_memory_info()[0] / float(2 ** 20)
    return mem

The above function returns the memory usage of the current Python process in MiB. Depending on the platform it will choose the most accurate and fastest way to get this information. For example, on Windows it will use the C++ Win32 API, while on Linux it will read from /proc, hiding the implementation details and providing a fast and accurate measurement on each platform.

If you are looking for an easy way to get the memory consumption within Python, this is in my opinion your best shot.

The resource module

The resource module is part of the standard Python library. It's basically a wrapper around getrusage from <sys/resource.h>, which is a POSIX standard, but some fields are still missing in Linux. However, the ones we are interested in seem to work fine on Ubuntu 10.04. You can get the memory usage with this function:

def memory_usage_resource():
    import resource
    import sys
    rusage_denom = 1024.
    if sys.platform == 'darwin':
        # ... it seems that on OSX the output is in different units ...
        rusage_denom = rusage_denom * rusage_denom
    mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / rusage_denom
    return mem

In my experience this approach is several times faster than the one based on psutil, and it was the default way to get the memory usage in memory_profiler from version 0.23 up to 0.26. I changed this behavior in 0.27 after a bug report by Philippe Gervais. The problem with this approach is that it seems to report results that are slightly different in some cases. Notably, it seems to differ when objects have been recently freed by the Python interpreter.

In the following example, orphaned arrays are freed by the Python interpreter, which is correctly seen by psutil but not by resource:

import numpy as np

mem_resource = []
mem_psutil = []
for i in range(1, 21):
    a = np.zeros((1000 * i, 100 * i))
    mem_resource.append(memory_usage_resource())
    mem_psutil.append(memory_usage_psutil())

[Figure: memory usage reported by resource vs. psutil as the arrays are freed]

By the way, I would be delighted to be corrected if I'm doing something wrong, or informed of a workaround if one exists (I've got the code to reproduce the figures [1]).

querying ps directly

The method based on psutil works great but is not available by default on all Python systems. Because of this, in memory_profiler we use as a last resort something that's pretty ugly but works reasonably well when all else fails: invoking the system's ps command and parsing the output. The code is something like:

def memory_usage_ps():
    import os
    import subprocess
    out = subprocess.Popen(['ps', 'v', '-p', str(os.getpid())],
                           stdout=subprocess.PIPE).communicate()[0].split(b'\n')
    # find the column that holds the resident set size (RSS)
    rss_index = out[0].split().index(b'RSS')
    mem = float(out[1].split()[rss_index]) / 1024
    return mem

The main disadvantage of this approach is that it needs to fork a process for each measurement. For tasks where you need to get the memory usage very fast, like line-by-line memory profiling, this can be a huge overhead on the code. For other tasks, such as getting information about long-running processes, where the measurement runs in a separate process anyway, this is not too bad.

benchmarks

Here is a benchmark of the different alternatives presented above. I am plotting the time it takes each approach to make 100 measurements of the memory usage (lower is better). As can be seen, the fastest one is resource (although it suffers from the issues described above), followed closely by psutil, which is in my opinion the best option if you can count on it being installed on the host system, and trailed far behind by ps, which is roughly a hundred times slower than psutil.

[Figure: time for 100 memory measurements with each approach]


  1. IPython notebook to reproduce the figures: html ipynb 

by Fabian Pedregosa at July 24, 2013 10:00 PM

July 22, 2013

Chetna Gupta

Week-5

This was the week when things finally did start sprouting, slowly and steadily, but some things also came out of the blue, disrupting the entire progress.
Some of the outputs:
Input:
DE = DifferentialExtension(extension={'D': [Poly(1, x), Poly(1/x, t)],
'L_K': [1], 'E_K': [], 'L_args': [x], 'E_args': []})
g1 = Poly(t + 1 + x*t - x*t**3, t)
g2 = Poly(2 + 2*t + 3*x*t**3 + 2*x*t, t)
coupled_DE_system(Poly(t**2 + 1, t), Poly(t**2, t), g1, g2, DE)

Output
-> The monomial is \log x
-> The s1, s2 should satisfy:
Ds1 + (t^2 + 1)s1 - (t^2)s2 = 2*t + 1
Ds2 + (t^2)s2 + (t^2 + 1)s1 = 2*t^2 + 2*t + 1

This case arises while solving
Ds + (1 + (\log x)^2 (1 + \iota)) s = (2*\log x + 1)(1 + \iota) +  2*(\log x)^2 \iota


Input:
DE = DifferentialExtension(extension={'D': [Poly(1, x), Poly(t, t)],
'L_K': [], 'E_K': [1], 'L_args': [], 'E_args': [x]})
g1 = Poly(2*t + 1, t)
g2 = Poly(2*t**2 + 2*t + 1, t)
coupled_DE_system(Poly(t + 1, t), Poly(t - 1, t), g1, g2, DE)

Output
-> The monomial is e^x
-> The s1, s2 should satisfy:
Ds1 + (t + 1)s1 - (t - 1)s2 = 2*t + 1
Ds2 + (t - 1)s2 + (t + 1)s1 = 2*t^2 + 2*t + 1

Input:
# t2 = e^(e^x)
t0, t1, t2 = symbols('t:3')
DE = DifferentialExtension(extension={'D': [Poly(1, x), Poly(t1, t1), Poly(t2*t1, t2)],
'L_K': [], 'E_K': [1, 2], 'L_args': [], 'E_args': [t1, x]})
g1 = Poly(t2**2*t1 + t2**2*t1**2 - t2**4, t2, t1)
g2 = Poly(t2**4 + t1**3 + t2*t1**2 + t2*t1 + t1, t2, t1)
coupled_DE_system(Poly(t1**2, t1), Poly(t2**2, t2), g1, g2, DE)

Output:
NoElementaryIntegrals
The monomial e^(e^x) is expected to produce the above.

There are also other tests (and probably the reason I am not able to commit the progress). These tests end up in recursion between the cancellation algorithms and coupled_differential_system; probably I am missing the boundary conditions somewhere.
And yes, I still need to add the b1 = 0 and b2 = 0 conditions (asmeurer: yup, the book mentions them in places).

Mostly, the reopening of college will bottleneck the progress for a couple of days; besides the travel time and resettling into life at college, there will be ongoing campus interviews, keeping me busy away from the terminal in office boardrooms (which makes every bone of my body shiver).

Life is definitely a roller-coaster, and I am about to cross its highest point this week.
Hoping to witness no more crashes for a couple of days.
Signing off :)


by chetnagupta at July 22, 2013 03:05 PM

July 21, 2013

Prasoon Shukla

GSoC Week 5 : A week gone bad

This week was rather eventful. And as such, I didn't get much time to work. As a matter of fact, I hardly got any time to work at all.

So, this week a new semester began at my college, and things were hectic, as they always are at the start of a new session. There was the usual hum-drum associated with a new session, and more. But I was optimistic. I hoped that I'd cover some ground over the weekend, but alas! That was not to be. As it happens, a professor got hold of me for writing out some odd scripts. And the guy didn't even offer a measly thanks in return. Nevertheless, as life settles down, I know I'll find more time to work.

Nevertheless, there was some work that I did get done. First thing - the basic structure finally got completed. There were some small bugs, which have been taken care of. Now that the basic building blocks of the project are in their final state, I have finally begun testing the code written till now. I have been able to initialize coordinate system objects correctly, and have also tested the initialization of VectAdd and VectMul objects using the operators defined on the various classes.

That’s pretty much it for this week. I know that it’s very little. But again, I am optimistic that I’ll get more time (much more in fact) in the coming weeks. By the end of this week, I hope to have corrected whatever errors that are still in the code, write tests and have a part of the code ready to go in.


by musingsofafriend at July 21, 2013 04:12 PM

Katja Sophie Hotz

Fifth week

I began this week with tracking down two problems with the modular GCD algorithm for multivariate integer polynomials. The first one was fixed by choosing evaluation points randomly instead of in a fixed order. For example, this happened with the polynomials f = x + y and g = x + y + z because z = 0 is unlucky for every prime, but this was never detected.
The second problem occurred with polynomials in many variables. If one of the first variables was evaluated at an unlucky point, the algorithm would try out every combination of evaluation points for the remaining variables before the unlucky one was discarded. This was fixed by giving up on an evaluation point if too many failures happen.
The good news is that now bin/test passes when heugcd is replaced by modgcd_multivariate.
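
As a quick sanity check, usage looks roughly like this (a sketch; the function lives on my branch and the return convention may still change):

from sympy.polys.rings import ring
from sympy.polys.domains import ZZ
from sympy.polys.modulargcd import modgcd_multivariate

R, x, y, z = ring("x,y,z", ZZ)
f = (x + y)*(x - z)**2
g = (x + y)*(x + z)

h, cff, cfg = modgcd_multivariate(f, g)  # gcd and the two cofactors
assert h*cff == f and h*cfg == g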

After that, I started implementing the modular GCD algorithm for polynomials over (simple) algebraic function fields, as described in [1]. The setting is as follows. We consider the field of rational functions in the parameters t_1, \ldots, t_k, i.e. \mathbb Q(t_1, \ldots, t_k). Since we also want to be able to work with expressions like \sqrt{2}, \, \sqrt{t_1} etc., we construct a suitable algebraic extension L = \mathbb Q(t_1, \ldots, t_k)[z] / \langle m(z) \rangle, where m(z) \in \mathbb Q(t_1, \ldots, t_k)[z] is monic and irreducible. Our goal now is to calculate the GCD of two univariate polynomials f(x), g(x) \in L[x].

First, the problem is reduced to polynomials over \mathbb Z[t_1, \ldots, t_k][z] / \langle m(z) \rangle by clearing fractions in f and g. So basically, we are working with polynomials f, g \in \mathbb Z[t_1, \ldots, t_k][z, x]. As always, the main loop picks suitable prime numbers p, calculates the GCD of f \; \mathrm{mod} \, p and g \; \mathrm{mod} \, p with algorithm P and combines the new result with previous ones (for different primes) using the Chinese Remainder Theorem. Here we only keep those prime numbers and GCDs where the leading terms match that of the new GCD, and obtain c \; \mathrm{mod}\, m_c. Next we use integer rational reconstruction (see [2]) on the coefficients of c \; \mathrm{mod}\, m_c, which tries to calculate \frac a b \in \mathbb Q from a b^{-1} \; \mathrm{mod} \, m_c. This will only succeed if a and b are smaller than \sqrt{\frac {m_c} 2}. Finally we check if we have found our GCD with trial division.

Algorithm P works by recursively evaluating the parameters t_1, \ldots, t_k for f, g and m. When there are no parameters left, we have two polynomials in \mathbb Z_p[z] / \langle m(z) \rangle [x]. Now we try to use the Euclidean Algorithm to compute their GCD. Because m(z) may not be irreducible anymore, \mathbb Z_p[z] / \langle m(z) \rangle is only a ring in general and we may encounter zero divisors. Therefore, it is possible that the Euclidean Algorithm fails. If a recursive call of algorithm P fails, then we have to choose a different evaluation point. Subresults are later combined with similar techniques as above. Again, we verify the result using trial division.

Next week I hope to finish a working version of this algorithm.

[1] http://www.cecm.sfu.ca/~mmonagan/papers/AFGCD.pdf
[2] http://dl.acm.org/citation.cfm?id=806398


by katjasophie at July 21, 2013 02:58 PM