June 30, 2016

First of all, the good news: I have passed the mid-term evaluations. I received this feedback from Kalevi: "I have had some problems with the blog posts." Though Kalevi mentioned not to take the remark too seriously, it is, needless to say, definitely true. I don't claim that I will be able to remedy this, but let's see what happens.

Kalevi had sent me a mail mentioning the "next things to do"; it included "Cosets in Permutation groups" and "Finitely Presented Abelian groups" (it's been quite some time since that mail). It was more my own decision to implement the topic of "presentation of subgroups". To be honest, I chose this topic because it makes the output of our work done till now look beautiful :) . Earlier, while making my proposal, I had constant doubts about what exactly is meant by "presentation of subgroups"; it became clear to me once I read the examples from the Handbook.

As for progress on the topic, here's what we have done. I opened the PR on Sunday, and this week we started with the implementation of subgroup presentations. There are two methods to compute the presentation of a subgroup. The first is called "computing a presentation on Schreier generators", also known as the Reidemeister-Schreier procedure; it builds on the Todd-Coxeter procedure for coset enumeration, which we have already implemented and which is used as its first step, together with a routine called define_schreier_generators. The second is "computing a presentation on the generators of the subgroup". Pseudo-code is available for both implementations.

The important part of finding the presentation of a subgroup is simplifying the presentation, i.e. reducing the three factors |X| (number of subgroup generators), |R| (number of subgroup relators) and l (total length of the subgroup relators). There doesn't seem to be any pseudo-code available anywhere for this (which makes it quite interesting to implement), but the paper by Havas, [Reidemeister-Schreier program](http://staff.itee.uq.edu.au/havas/1974h.pdf), explains things neatly.

The last 2 days were spent trying to make elimination_technique_1 work. It's still not fully functional, since it still returns some dependent relators as part of the presentation. As an example, the Reidemeister-Schreier presentation procedure currently works this way.


>>> F, x, y = free_group("x, y")
>>> f = FpGroup(F, [x**2*y**2, y**-1*x*y*x**-3])
>>> H = [x]
>>> reidemeister_presentation(f, H)
([x_1], [x_1**-4, x_1**-8])


Two things need to be fixed here. The first is the removal of negative powers in single-syllable relators, $x_1^{-4}$ -> $x_1^4$, and similarly for the other relator. The second is that, since $x_1^4 = 1$ implies $x_1^8 = 1$, the second relator is dependent and should be removed. Perhaps the issue with this is: where should the code for checking this dependence be introduced? Kalevi seems a bit busy these days with another project, but that is not an issue. I believe there may be something more on simplification in the "Handbook" (I have been referring to [2] for the elimination of generators and relators and the simplification of presentations).

Yesterday, I thought of focusing on a new topic, "simplification of presentations". The method is only briefly described in [2], but that is sufficient. It starts by checking whether any relator is of the form $gen^n$, where $n \in \mathbb{N}$. If any such relators are found, then all other relators are processed for strings in the generator $gen$. All strings exceeding length $n/2$ are replaced by their shortest equivalent; further, for even $n$, gen$^{-n/2}$ is replaced by gen$^{n/2}$. I have written the code for it; perhaps it only requires a few more tests to be added.
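Here is a rough sketch of the exponent-reduction rule for a single generator, in plain Python (illustrative only, not the actual SymPy code):

def reduce_exponent(k, n):
    # shortest exponent equivalent to k, given the relation gen**n = 1
    k = k % n           # bring k into [0, n)
    if k > n // 2:
        k -= n          # e.g. with gen**8 = 1, gen**5 becomes gen**-3
    return k

>>> reduce_exponent(5, 8)
-3
>>> reduce_exponent(-4, 8)   # for even n, gen**(-n/2) is replaced by gen**(n/2)
4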

I think I will be busy with "presentation of subgroups" this week and next week as well.

References

  • 1. Derek F. Holt, Bettina Eick, Eamonn A. O'Brien, "Handbook of computational group theory", Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, FL, 2005. ISBN 1-58488-372-3.
  • 2. George Havas, "Reidemeister-Schreier program"

June 27, 2016


This week, I had been working on distinct degree factorization in this PR.

Distinct degree factorization splits a square-free polynomial into a product of polynomials whose irreducible factors all have the same degree.
Suppose we have the following polynomial, taken here over GF(11):

x**15 - 1

When we do distinct degree factorization of it, we get:

x**5 + 10
x**10 + x**5 + 1

Each of the results is a product of polynomials of equal degree.

x**5 + 10

It is a product of polynomials of degree 1, while:

x**10 + x**5 + 1

is a product of polynomials of degree 2.

The factorization is achieved by exploiting the following fact:

Lemma: over GF(q), the polynomial x**(q**d) - x is the product of all monic irreducible polynomials whose degree divides d.

So, for every degree i less than half of the polynomial's degree, we compute x**(q**i) - x (reduced modulo the given polynomial) and find its gcd with the given polynomial. If it is not one, we add it to the existing list of factors.
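For reference, SymPy's low-level galoistools module implements the same routine, and the example above can be reproduced with it (the SymEngine C++ work mirrors these functions):

>>> from sympy.polys.domains import ZZ
>>> from sympy.polys.galoistools import gf_from_dict, gf_ddf_zassenhaus
>>> f = gf_from_dict({15: ZZ(1), 0: ZZ(-1)}, 11, ZZ)  # x**15 - 1 over GF(11)
>>> gf_ddf_zassenhaus(f, 11, ZZ)
[([1, 0, 0, 0, 0, 10], 1), ([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1], 2)]

The two entries are exactly x**5 + 10 (a product of degree-1 factors) and x**10 + x**5 + 1 (a product of degree-2 factors), given as dense coefficient lists.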

Then we have to find its equal degree factorization. I have implemented it and I am testing it with different cases. It will run on every factor produced by distinct degree factorization and break it into polynomials with equal degrees.
Currently I am implementing Cantor-Zassenhaus’s algorithm.

June 26, 2016

We’re halfway there!

The past two weeks were supposed to be for implementing matrices. But as I mentioned in the previous post, things got complicated with the evalf methods, and I could not start working on the matrices. This week PR #987 for the evalf C wrappers was merged. (The spec for the Ruby wrapper needs some changes, which I have kept aside for now.)

In SymEngine, matrices are of two forms, the DenseMatrix and the SparseMatrix. I have implemented the following constructors for them:

mat = SymEngine::DenseMatrix.new()
mat = SymEngine::DenseMatrix.new(SymEngine::DenseMatrix)
mat = SymEngine::DenseMatrix.new(Array)
mat = SymEngine::DenseMatrix.new(no_rows, no_columns)

mat = SymEngine::SparseMatrix.new()
mat = SymEngine::SparseMatrix.new(no_rows, no_columns)

Apart from that, for the DenseMatrix, the following methods were wrapped:

DenseMatrix.get(row, col)
DenseMatrix.set(row, col, SymEngine::Basic)
DenseMatrix.inv
DenseMatrix.det
DenseMatrix.transpose
DenseMatrix.submatrix(row_start, col_start, row_end, col_end, row_step, col_step)
DenseMatrix.rows
DenseMatrix.cols
+ operator for Matrix and Scalar addition
* operator for Matrix and Scalar multiplication
DenseMatrix.to_s
DenseMatrix.inspect

Soon to be wrapped are:

DenseMatrix.LU
DenseMatrix.LDL
DenseMatrix.LU_solve # (to solve Ax = B type equations with LU decomposition)
DenseMatrix.FFLU # (fraction free LU)
DenseMatrix.FFLDU # (fraction free LDU)

The work can be viewed at PR #992 in the symengine repo and PR #56 in the symengine.rb repo.

This week was really nice for me in getting significant parts of the matrices done, and with the luck continuing I will be able to wrap the necessary methods for SparseMatrix, get done with the tests and finish the matrix part within the coming week, putting me back on track with the timeline.

Wish me luck!


Earlier this week, I continued my work on the method to convert a given Holonomic Function to Hypergeometric.

There were some major bugs that produced wrong results, caused by the fact that the recurrence relation doesn't hold for all non-negative n. So I changed it to also store a value, say n0, such that the recurrence holds for all n >= n0. I changed the method .to_hyper() accordingly and it works pretty well now.

I also added the method to_sympy() for converting to symbolic expressions. It uses hyperexpand, and hence will only work in cases where a hypergeometric representation is possible.

These were implemented at [#11246].

After that I have been mainly working on some issues Ondrej pointed out: #11292, #11291, #11287, #11285 and #11284. The fixes are implemented at [#11297].

This week I will be fixing some more bugs, the ones listed at [Label: Holonomic Functions] and also a bug discussed with Kalevi on Gitter.

Cheers!


Continue nonlinsolve :

PR #11111

  • Removed the dependency on the old solvers' _invert. It was basically helpful for removing sqrt or rational powers. Now, using unrad and small changes in the substitution function, we can handle systems having sqrt or cube roots; see the sketch below.
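    For instance, unrad turns a radical equation into a polynomial one (a small illustrative sketch):

    ```

    >>> from sympy import sqrt
    >>> from sympy.abc import x
    >>> from sympy.solvers.solvers import unrad
    >>> unrad(sqrt(x) - 2)  # the second item lists any cover equations
    (x - 4, [])

    ```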

Continue - Diophantine in Solveset :

PR 11234

  • Some points we discussed :

    1. Solveset should return all the solutions, so we need to check for which kinds of equations diophantine returns all the solutions. We can get the equation type using classify_diop, defined in diophantine.py.

    2. If there is a case where diophantine doesn't return all the solutions, then just return ConditionSet, or try to generate all the solutions (e.g. if we just need to permute signs in the values, use permute_signs).

  • After reading the diophantine documentation, doctests and examples, I found that:

    1. Currently, the following five types of Diophantine equations can be solved using sympy.solvers.diophantine.diophantine and the other helper functions of the Diophantine module:
    • Linear Diophantine equations:

    a1*x1 + a2*x2 + ... + an*xn = b.

    • General binary quadratic equation:

    ax^2 + bxy + cy^2 + dx + ey + f = 0

    • Homogeneous ternary quadratic equation:

    ax^2 + by^2 + cz^2 + dxy + eyz + fzx = 0

    • Extended Pythagorean equation:

    a_{1} * x_{1}^2 + a_{2} * x_{2}^2 + .... + a_{n} * x_{n}^2 = a_{n+1} * x_{n+1}^2

    • General sum of squares:

    x_{1}^2 + x_{2}^2 + \ldots + x_{n}^2 = k

    2. If I am correct, then diophantine returns all the solutions if the equation is a Linear Diophantine equation, a General binary quadratic equation or a Homogeneous ternary quadratic equation. I tried some Linear Diophantine equations and cross-checked them with online Diophantine solvers (one of the solvers is here).

    3. For the last two (Extended Pythagorean equation and General sum of squares) we need to use permute_signs to get all the solutions.

    E.g.

      ```
          >>> from sympy.utilities.iterables import permute_signs
          >>> list(permute_signs((1, 12)))
          [(1, 12), (-1, 12), (1, -12), (-1, -12)]
    
      ```
    

    In general, if all variables have even powers then we should use permute_signs. Diophantine solves the factors of the equation, so for an equation like x**2 - y**2, diophantine solves the factors x + y and x - y.

      ```
    
        >>> from sympy.solvers.diophantine import diophantine
        >>> from sympy.abc import x, y, z
        >>> diophantine(x**2 - y**2)
        set([(-t_0, -t_0), (t_0, -t_0)])
    
      ```
    

    so we should check for even powers and permute signs.

    other examples :

    ```

    >>> from sympy.solvers.diophantine import diop_general_sum_of_squares
    >>> from sympy.abc import a, b, c, d, e, f
    >>> diop_general_sum_of_squares(a**2 + b**2 + c**2 + d**2 + e**2 - 2345)
    set([(15, 22, 22, 24, 24)])
    
    >>> from sympy.solvers.diophantine import diop_general_pythagorean
    >>> diop_general_pythagorean(a**2 + b**2 + c**2 - d**2)
    (m1**2 + m2**2 - m3**2, 2*m1*m3, 2*m2*m3, m1**2 + m2**2 + m3**2)
    >>> diop_general_pythagorean(9*a**2 - 4*b**2 + 16*c**2 + 25*d**2 + e**2)
    (10*m1**2  + 10*m2**2  + 10*m3**2 - 10*m4**2, 15*m1**2  + 15*m2**2  
      + 15*m3**2  + 15*m4**2, 15*m1*m4, 12*m2*m4, 60*m3*m4)
    
    >>> from sympy.solvers.diophantine import diop_general_sum_of_even_powers
    >>> diop_general_sum_of_even_powers(a**4 + b**4 - (2**4 + 3**4))
    set([(2, 3)])
    

    ```

    In these types of cases we need permute_signs. If we check these solutions, we can see that permute_signs is needed when the solution is not parameterized.

    e.g.

    ```

      >>> from sympy.solvers.diophantine import diophantine
      >>> from sympy.abc import x, y, z
      >>> diophantine(x**2 - y**2)
      set([(-t_0, -t_0), (t_0, -t_0)])
    

    ```

    The solution set([(-t_0, -t_0), (t_0, -t_0)]) is the same as set([(-t_0, -t_0), (t_0, t_0), (-t_0, t_0), (t_0, -t_0)]), because t_0 can take any integer value.

    I discussed these things with @thilinarmtb (he has worked on diophantine; blog link: https://thilinaatsympy.wordpress.com/). The main points are:

    Only the linear solver is incomplete; the algorithm in the Diophantine module should be fixed (`permute_signs` or something like that won't help there). We can use `permute_signs` when we have even powers in all variables, and we should update the Diophantine module to use it. But we should not return `ConditionSet` for linear diophantine equations, because for those `diophantine()` returns a parameterized solution, which is complete most of the time.
    
    1. classify_diop can return these diop_type values:

      • linear
      • univariate
      • binary_quadratic
      • inhomogeneous_ternary_quadratic
      • homogeneous_ternary_quadratic_normal
      • homogeneous_ternary_quadratic
      • inhomogeneous_general_quadratic
      • homogeneous_general_quadratic
      • general_sum_of_squares
      • general_pythagorean
      • cubic_thue
      • general_sum_of_even_powers

    If the equation type is none of these, then solveset_integers should return ConditionSet, because currently diophantine can handle only these kinds of equations.
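    For instance, a sketch of the classification this dispatch would rely on:

    ```

    >>> from sympy.solvers.diophantine import classify_diop
    >>> from sympy.abc import x, y
    >>> var, coeff, diop_type = classify_diop(x + 2*y - 3)
    >>> diop_type
    'linear'

    ```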


continue…

Overview

All of my week was mostly consumed by asking questions like "Should this throw?" and "What are its generators?". My work for this week was to make a mechanism for converting arbitrary expressions into polynomials. Turns out, it isn't as trivial as it initially sounded. I will discuss my progress below. This week involved a lot more logic and thinking about how to go ahead, instead of just development, so I do not have a lot to write about.

The Idea

A multivariate polynomial is made of generators; a univariate polynomial is made of a single generator. Each term in a polynomial is a monomial: a product of its generators raised to non-negative powers, multiplied by a non-zero coefficient. Here are some polynomials and their generators:

x**2 + x**5           ->  x

sqrt(3) + sqrt(3)**3  ->  sqrt(3)

(1/x**2) + (1/x**5)   ->  1/x

x**2 + y**2 + x       ->  x, y

2**x + 1              ->  2**x

I hope you get the idea. The main task: given a Basic, we need to construct a polynomial with the appropriate generator. The task is broken down into two parts. First, we try to extract a generator from the given expression. Second, we use the found (or user-given) generator to construct the polynomial.
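For comparison, SymPy's Poly already performs this kind of generator inference; a quick sketch (the SymEngine API will differ):

>>> from sympy import Poly, symbols
>>> x, y = symbols('x y')
>>> Poly(2**x + 1).gens
(2**x,)
>>> Poly(x**-5 + x**-2).gens
(1/x,)
>>> Poly(x**2 + y**2 + x).gens
(x, y)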

Ideally the API should look as follows:

RCP<Basic> UIntPoly::from_basic(Basic x);
RCP<Basic> UIntPoly::from_basic(Basic x, Basic gen);

My Progress

I have only worked on the univariate polynomial classes as of now, and set out to write the from_basic method just for them. Although I just had to deal with the specific case of univariate integer polynomials, I haven't been able to get the job done just yet.

The work started with converting Symbol var to Basic var. Initially, the generators of the polynomial class could only be symbols; this needed to change because, as we've seen, generators can be anything! This change did not take a lot of effort.

I started off with a find_gen function which, given a Basic, tries to find a unique generator for the polynomial to be constructed. If no generator, or more than one generator, is found, it throws a runtime error. I covered most of the cases but a few still remain.

Also, an initial implementation of the to_poly functionality has been done, complete with the required API. Some cases related to conversion still need to be handled. I do not want to go into the logic involved in each conversion, as it is tedious and not very structured. All the work done till now can be seen in #998.

I hope with all the information I have gained over this week, I will wind up the Basic to UPoly conversions for both the integer coefficients as well as the general expression coefficients by the end of next week. I also intend to make the logic more concrete, so that the same API can be used in the multivariate case.

Miscellaneous Work

  • I made some changes in the expand function related to polynomials. They mostly deal with the removal of code that had already been implemented elsewhere. It is also a partial fix for #886; the work is in #999.

Tata!

June 24, 2016

Hi there! It's been five weeks and midterm evaluations are going on. I have completed the implementation of the Singularity Function module and am heading towards the implementation of the Beam Bending module.

So far

  • In PR 11266, I have implemented three major classes:
    • DistributedLoad : has the attributes start, end and value. This object can be used to generate a distributed load. It is a mutable object. The start and end attributes should be instances of the mechanics Point. The value can take any function as input. This load can be represented as a Singularity Function (I have not implemented this functionality yet) and used for solving beam bending problems.
    • PointLoad : there are two types, Moment and Point Load. Moments are essentially couples of forces and are represented by the doublet function, whereas a Point Load is a force which acts on a single point of the beam and is represented as a DiracDelta function.
    • Beam : it is the principal object of beam bending problems. I have started working on it.

Next Week

That's all for this week. Cheers !!

Happy Coding.

 


Well, I started this week off by getting sick, and as such productivity took a bit of a hit. The majority of this week was spent reading Featherstone's textbook. The example documentation showcasing the base class API still hasn't been reviewed, so that part of the project will just have to be set aside until later. Overall the project will not suffer, however, because of my progress in learning Featherstone's method.

I've done a couple of other things this week as well. In a discussion with Jason it was determined that the LagrangesMethod class must have access to the dynamic system's bodies through the Lagrangian input. Upon research, the Lagrangian turned out not to be an object instance but rather a function that simply returns the Lagrangian for the input bodies. This meant that LagrangesMethod did not in fact have access to the dynamic system's bodies. Due to this, I decided that an easy way to give LagrangesMethod body information would be to add an optional keyword argument for it. This way LagrangesMethod can have an API more similar to KanesMethod's. This change can be found in PR #11263.

This week I reviewed PR #10856, which claimed to fix issue #10855. Upon review it seemed that the "fix" was to just not run the tests that were failing. On further research, it looks like a whole module has not been updated for Python 3.x and is failing its relative imports. When run in Python 2.x it's still not working either, but rather throws up many KeyErrors. I think this has not been caught sooner due to the module being a component directly dealing with another project (pyglet), so its tests are not run by TravisCI.

Lastly, there were some test errors in the example documentation for the base class on PyDy. I was not too worried about these because the PR is not currently awaiting merging and is simply a discussion PR. The failing tests, however, were not related to the changes in the PR, so a PyDy member submitted a PR that fixed the tests and asked me to review it. After I looked it over and determined that the fix addressed the issue correctly, he merged the PR.

Future Directions

Next week I plan to continue forward with reading Featherstone's book and, if possible, begin implementing one of the methods outlined in it. I also plan on beginning work on mirroring Jason's overhaul of KanesMethod in LagrangesMethod.

PR’s and Issues

  • (Open) Added support for a bodies attribute to LagrangesMethod PR #11263
  • (Open) Added a dependency on an older version of ipywidgets PR #100
  • (Open) Blacklisted pygletplot from doctests when pyglet is installed PR #10856
  • (Open) sympy.doctest(“plotting”) fails in python 3.5 Issue #10855
  • (Merged) Fix multiarray import error on appveyor PR #354

Hi folks !

The past couple of weeks were spent developing heuristics for determining the fundamental period of a given trigonometric function.

In high school, we all came across trigonometric functions. One of the most striking properties of these functions is their periodicity.
The ability of a function to repeat its values at regular intervals has always caught my imagination.

Motivation

Well, SymPy ought to have functionality to determine the period of a function. What instigated me to implement this function now was the build failure in my PR #11141 on solve_decomposition.

Issue

function_range(sin(x), x, domain=S.Reals) causes the build to time out, as the number of critical points of sin(x) over the entire real domain is infinite. The same goes for the other trigonometric functions as well.

However, if we can set the domain argument to a finite interval which encompasses the entire behaviour of sin(x) over the real domain, our issue is solved.

This led me to the idea of using the periodicity of the function as its domain.

Design

f = sin(x) + sin(2*x) + cos(3*x)

We know that the period of f is 2⋅π, i.e. the LCM of the periods of the individual functions.

It is known!

Hence, in order to find the period of f, we need the functionality to determine the period of simpler trigonometric functions such as sin(x), sin(2*x) and cos(3*x).

Property

If the period of g(x) is T, then the period of g(a*x) is T/|a|.

Using this property, we can easily compute the periods of sin(2*x) and cos(3*x) from our knowledge of the periodicity of the fundamental trigonometric functions.
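Putting these ideas together, the intended behaviour looks roughly as follows (assuming the function lands as periodicity in sympy.calculus.util; names may differ in the final PR):

>>> from sympy import sin, cos
>>> from sympy.abc import x
>>> from sympy.calculus.util import periodicity
>>> periodicity(sin(2*x), x)          # T/|a| with T = 2*pi, a = 2
pi
>>> periodicity(sin(x) + sin(2*x) + cos(3*x), x)   # LCM of 2*pi, pi and 2*pi/3
2*pi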

June 21, 2016

Continue- nonlinsolve :

PR #11111

  • I found a case where a complex solution is missed: when solve_poly_system solves the polynomial equations and that result is used to solve the non-polynomial equations for the other solutions. It may be that solve_poly_system returns only real solutions (so no complex solution is available for the further steps), which is why the further steps work with these real solutions only.

  • It seems solve_poly_system is doing its job, since it is not designed to return all solutions. But we have the substitution method, which uses solveset_real and solveset_complex and returns all solutions. In the new commits I improved the code, and now the substitution method can solve all kinds of systems independently.

  • I came across this kind of situation many times:

In [68]: x, y, z, r, t = symbols('x, y, z, r, t', real = True)

In [69]: soln = solveset(sqrt(r)*Abs(tan(t))/sqrt(tan(t)**2 + 1) + x*tan(t),x,S.Reals)

In [70]: soln
Out[70]:
          ⎧      √r⋅│tan(t)│      ⎫
(-∞, ∞) ∩ ⎪───────────────────────⎪
          ⎨   _____________       ⎬
          ⎪  ╱    2               ⎪
          ⎩╲╱  tan (t) + 1 ⋅tan(t)⎭

In [71]: soln = solveset(sqrt(r)*Abs(tan(t))/sqrt(tan(t)**2 + 1) + x*tan(t),x)

In [72]: soln
Out[72]:
⎧     -√r⋅│tan(t)│      ⎫
⎪───────────────────────⎪
⎨   _____________       ⎬
⎪  ╱    2               ⎪
⎩╲╱  tan (t) + 1 ⋅tan(t)⎭

In other words :

In [1]: Intersection(FiniteSet(-x), S.Reals)
Out[1]: (-∞, ∞) ∩ {x}

In [2]: Intersection(FiniteSet(x), S.Reals)
Out[2]: ℝ ∩ {x}

So I am not able to extract the -x here. Because of this, nonlinsolve returns some extra solutions. One case is in the test file: test_issue_5132.

I opened PR #11280 for this.

Updates on previous PRs:

  • #11188 - Simplified solutions for trigonometric equations: I made some minor changes. Now I hope it is good to go.

  • #11234 - Connecting Diophantine and solveset to get integer solutions: added some test cases. I have defined solveset_integers for this; it takes a list of symbols (to get the solutions in that order), but currently solveset takes one symbol. I hope this PR is good to go.

  • #11257 - solveset univariate trig inequality solvers: to simplify the solutions I need to use PR #11188, so if it gets merged that code will help me in this PR.


Meanwhile :

  • I worked on these previous PRs regarding issues I reported: #11239, #11280.

continue..

June 20, 2016


This week the initial implementation of FiniteField got merged. It allows us to represent polynomials with integral coefficients over a finite field. Talking about the design, we are sticking to the dense representation, i.e. the dict_ is a vector.

This week there was an error in the Appveyor build, and I had no idea how to fix it. I tried printing with every passed test case and using the -VV option for verbose mode, but it was of no use. Isuru then told me that I had to log into Appveyor's VM and debug it there. He helped with remmina, and we found that there was a problem in the gf_istrip() function, where I was erasing elements using iterators; but after any insertion or deletion, iterators become invalid. So it is better to use indices, or to re-assign the iterators there.

After that I started working on square-free algorithms. In mathematics, a square-free polynomial is a polynomial defined over a field that is not a multiple of the square of a non-unit factor. In general terms, it can be thought of as a polynomial with no repeated roots. Square-free factorization is the first step towards factorization of polynomials over a finite field. So, I have implemented:

GaloisFieldDict gf_diff() const;
bool gf_is_sqf() const;
std::vector<std::pair<GaloisFieldDict, integer_class>> gf_sqf_list() const;
GaloisFieldDict gf_sqf_part() const;
  • gf_diff differentiates the polynomial in the given field.
  • gf_is_sqf returns whether the polynomial is square-free in the given field.
  • gf_sqf_part returns the square-free part of the polynomial in the given field.
  • gf_sqf_list returns the square-free decomposition of the polynomial's monic representation in the given field.

This PR has also been merged. Currently I am working on Distinct Degree factorization.

June 19, 2016

Hi everyone.



Here is a brief summary of what we did in the second and third week of GSoC.

First of all, I got late in blogging about our progress for weeks 2 and 3: the internet connection was disrupted (reason) for almost 3 weeks in my city, and in between I had to move out to some other place. But still, the project progress is good :)

Status update:

  • PR #11140 on implementation of strategies of coset enumeration has been merged.

We implemented the coset enumeration strategies. Suppose $G$ is defined by a finite presentation, and $H$ is a subgroup of $G$, specified by words in the generators of $G$ that generate $H$ (for $H$ we currently accept only a list of such generators). The procedure is known as coset enumeration and is one of the most fundamental methods in CGT. It has been proved mathematically [4] that no algorithm for coset enumeration can be guaranteed to terminate, so it cannot be said to have a fixed complexity.

The coset table is equivalent to the permutation representation of the input group $G$ in its action by right multiplication on the right cosets of $H$. We have initialised the coset table with various attributes in SymPy, most of them instances of list, which are appended to along the way while strategies like HLT and Felsch run over the table. Contrary to what I mentioned in my last post, CosetTable is not a subclass of list.

The algorithm we have implemented is known as the Todd-Coxeter algorithm. It can use too much memory and time, but memory is the more important resource in this algorithm. The algorithm has two major implementations:

HLT strategy

In this strategy we use C.scan_and_fill($\alpha$, $w$) to scan the word $w$ over the coset $\alpha$; if the scan is incomplete, it makes new definitions of cosets using define until the scan can complete; that is, we fill in the gaps in the scan of the relator or subgroup generator. Kalevi suggested some modifications to the original pseudo-code, which resulted in quite a few improvements, since the changes remove unnecessary scanning.

For calculating the index of $x^{-1}$ for a generator $x$, we initialised the coset table with a dictionary A_dict_inv, which has (gen, index) as its (key, value) pairs.

>>> for x, index in self.A_dict.items():
...     if index % 2 == 0:
...         self.A_dict_inv[x] = self.A_dict[x] + 1
...     else:
...         self.A_dict_inv[x] = self.A_dict[x] - 1


We changed the slicing of the Free Group elements, which now works this way.

>>> w = x**2*y**6
>>> w[1]
x
>>> w[3]
y

Since earlier it was only possible to obtain the $i$-th letter using .subword(i, i+1).


We have now completed PR #11140. We used utf-8 characters in the comments of sympy/combinatorics/fp_groups.py, which generated this error in Python 2.7 but not in Python 3.4:

SyntaxError: Non-ASCII character '\xce' in file /home/gaurav/Public/sympy/sympy/combinatorics/fp_groups.py
on line 79, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
                        
and then adding the line # -*- coding: utf-8 -*- at the top of the file resolved the issue; so it seems Python 2.x is more sensitive to such issues.

There was one error in the implemented code:

>>> from sympy.combinatorics.free_group import free_group
>>> from sympy.combinatorics.fp_groups import FpGroup, coset_enumeration_r, CosetTable
>>> F, x, y = free_group("x, y")
>>> f = FpGroup(F, [x**2, y**3, (x*y)**3])
>>> Y = [x*y]
>>> C = coset_enumeration_r(f, Y)
>>> C.table # this will return the table

As for refinement, I will paste another one.

# these are the steps that happen internally
>>> C = CosetTable(f, Y)
>>> C.scan_and_fill(0, x*y)
>>> C.scan_and_fill(0, x**2)
>>> C.scan_and_fill(0, y**3)
>>> C.scan_and_fill(0, (x*y)**3)
# till now coset table is fine.
# here the coset table returned is wrong.
                        

In the implementation of scan_and_fill, the implemented code differed from that in the book in one significant aspect: in the book, scan_and_fill loops until it has filled the $\alpha$ row in the table ("Make a new definition and continue with the scan."), while the implemented code returned after one definition. The error occurred since I had tried removing the while loop (for some testing purpose). Then we also added some "examples" from [2].

Felsch strategy

In this strategy we first find the first undefined coset $\alpha$; instead of looking ahead with lookahead, we make deductions, which are put on a deduction_stack (a list instance which behaves as a stack) containing the pairs $(\alpha, x)$ whenever a new definition or deduction is made. This reduces the number of unnecessary cosets made (lowering the memory use at the cost of time).
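In rough Python-like terms, the deduction processing looks like this (an illustrative sketch with hypothetical helper names, not the actual implementation):

while deduction_stack:
    alpha, x = deduction_stack.pop()
    # scan the coset alpha under every relator in which x or x**-1 occurs;
    # a scan may fill in table entries and push further deductions
    for w in relators_containing(x):    # hypothetical helper
        scan(alpha, w)                  # hypothetical scanning routine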

Though we have made use of CosetTableDefaultMaxLimit = 409600 (similar to that in GAP), so far I haven't found a single example which would exhaust this much memory in our implementation; every one just seems to take too much time.

Python utilities learned on the way:

  • At one point we had to make a list of the generators and inverses of the generators of a finitely presented group, i.e. A for a CosetTable. I did a bit of searching and arrived at chain.from_iterable from itertools, which works as follows:
    >>> from itertools import chain
    >>> list(chain.from_iterable((x**2, x**3) for x in range(4)))
    [0, 0, 1, 1, 4, 8, 9, 27]
    
  • Use of product routine, since in coset enumeration, we often iterate over $w \in Y$ and $x \in A$.
  • In Python2.x a list instance doesn't have a copy attribute, so list() function or slicing is used to make a shallow copy. [3]

At the end of the post, here's one awesome small example of coset enumeration using the HLT strategy. Here is how the code works!! :)

In[1]: from sympy import *
In[2]: from sympy.combinatorics.free_group import free_group
In[3]: from sympy.combinatorics.fp_groups import FpGroup, CosetTable, coset_enumeration_r
In[4]: F, x, y = free_group("x, y")
In[5]: f = FpGroup(F, [x**2, y**3, (x*y)**4])
In[6]: C = coset_enumeration_r(f, [x])
In[7]: C.table
Out[7]:[[0, 0, 1, 2],
      [3, 3, 2, 0],
      [6, 6, 0, 1],
      [1, 1, 4, 10],
      [5, 5, 10, 3],
      [4, 4, 6, 7],
      [2, 2, 7, 5],
      [8, 8, 5, 6],
      [7, 7, 9, 12],
      [10, 10, 12, 8],
      [9, 9, 3, 4],
      [None, 10, 12, None],
      [12, 12, 8, 9],
      [None, 12, 8, 9],
      [7, None, None, 13]]
In[8]: C.compress()
In[9]: C.standardize()
In[10]: C.table
Out[10]: [[0, 0, 1, 2],
        [3, 3, 2, 0],
        [4, 4, 0, 1],
        [1, 1, 5, 6],
        [2, 2, 7, 8],
        [8, 8, 6, 3],
        [9, 9, 3, 5],
        [10, 10, 8, 4],
        [5, 5, 4, 7],
        [6, 6, 11, 10],
        [7, 7, 9, 11],
        [11, 11, 10, 9]]

                        

My mentor, Kalevi, has been very supportive: when I informed him about my possible absence (due to internet unavailability), he even sent me a mail about the things we could do next, even with me offline. So here they are: Cosets in Permutation Groups, Finitely presented abelian groups.

References

  • 1. Derek F. Holt, Bettina Eick, Eamonn A. O'Brien, "Handbook of computational group theory", Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, FL, 2005. ISBN 1-58488-372-3.
  • 2. John J. Cannon, Lucien A. Dimino, George Havas and Jane M. Watson, Implementation and Analysis of the Todd-Coxeter Algorithm
  • 3. https://mail.python.org/pipermail/tutor/2006-November/051189.html
  • 4. http://www.gap-system.org/Manuals/doc/ref/chap47.html

As week #4 comes to an end, the headway I made last week was lost to lots of memory-related errors. But in terms of experience it was quite a good week.

First of all, some loose ends of PR #51 in symengine.rb were tied up and it was merged, fully completing the FunctionSymbol functionality.

Next, as I explained in the previous blog post, the evaluation methods were implemented. The method evalf provides a single point to evaluate floating-point values of mathematical expressions (real and complex) at a given precision.

This was the first time I was contributing code directly to SymEngine's C++ core. The coding part was pretty much straightforward, as the method only required the expression to be evaluated, the precision needed, and whether it was a real number or not. Using this information, the evaluation is forwarded to eval_double, eval_mpfr, eval_complex_double or eval_mpc. But some memory leaks were present, costing me quite a lot of time to localize the errors and rectify them.

Now, except for a certain case in the cwrapper tests, everything seems to be working fine, and the work can be checked out under PR #987 in symengine, and PR #55 in symengine.rb repository.

Also, during this week I was checking out the possible implementation of matrices for the Ruby wrappers. I decided to interface SymEngine matrices to NMatrix through the Ruby API. So, for starters, I will be going ahead with the SymEngine::DenseMatrix class.

In Ruby, I plan to implement the following constructors for the DenseMatrix:

mat = SymEngine::DenseMatrix.new()
mat = SymEngine::DenseMatrix.new(SymEngine::DenseMatrix)
mat = SymEngine::DenseMatrix.new(NMatrix)
mat = SymEngine::DenseMatrix.new(Array)
mat = SymEngine::DenseMatrix.new(no_rows, no_columns)

Apart from the coding work, last week’s code reviews were an interesting part of the learning experience. I was reviewing code of Gaurav – another SciRuby GSoCer, who is also writing Ruby wrappers. Looking at his code and hearing what he had to tell about mine was quite interesting.

See you next week, hopefully with better news on the SymEngine matrices!


I have been working on two things since my last blog post and they are: converting Symbolic Functions/Expressions to Holonomic and converting Holonomic to Hypergeometric. Let’s discuss the former first:

I wrote a method for converting expressions earlier too, but that was very trivial, so this week began with me writing a look-up table technique to convert expressions. It is similar to the way meijerint._rewrite1 converts to meijerg in the integrals module. The method recursively searches for chunks in the expression that can be converted to Holonomic (using tables) and then uses the operations +, *, ^ defined on Holonomic Functions to convert the whole expression. #11222
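A sketch of the intended usage (the converter's name is assumed here and may differ in the PR):

>>> from sympy import sin, symbols
>>> from sympy.holonomic import expr_to_holonomic
>>> x = symbols('x')
>>> expr_to_holonomic(sin(x), x)   # a direct table hit
HolonomicFunction((1) + (1)*Dx**2, x, 0, [0, 1])

Sums and products such as sin(x) + exp(x) are then handled by combining such table entries with the +, *, ^ operations.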

Then I started working on a method that converts Holonomic Functions to Hypergeometric functions, or a linear combination of them (when possible, because not every Holonomic Function is Hypergeometric). First I asked Kalevi the questions I had on Gitter about how to do it, and then finally wrote the code here: #11246.

Let me also give you an overview of things we have in the module so far.

  • Differential Operators and Recurrence Operators.
  • Holonomic Functions with the following operations: addition, multiplication, power, integration and composition.
  • Computing Recurrence relation in Taylor coefficients.
  • Series expansion using the recurrence. (There are some known issues if the origin is a regular singular point.)
  • Numerical Computation (Runge-Kutta 4th Order and Euler’s method.)
  • Converting to Hypergeometric. (Currently working on this.)
  • Converting Hypergeometric to Holonomic.
  • Converting Meijer G-function to Holonomic.
  • Converting expressions to Holonomic. (There doesn’t seem to be a direct algorithm for converting algebraic functions which are not polynomials or rationals, so they are not supported right now.)

This week I will be continuing my work on converting to hyper and then further extending it to convert to expressions using hyperexpand.

Good luck to all fellow GSoCers for Midterm Evaluation.

Goodbye!


Overview

The last week was spent on adding more functionality to all three types of integer polynomials in SymEngine. This meant that I had to wrap methods already existing in Piranha and Flint, and write new methods for the SymEngine polynomials to provide the required functionality. Details about said functions, and the problems faced while implementing them, are described in the rest of the post.

Also this Saturday, some friends and I decided to visit the hill station of Lonavala for the day. It was a replenishing experience, and a break from the regular routine. Here’s us!

Functions

I wanted to start off by adding a div_upoly function for dividing two univariate polynomials. It is obvious that division of two polynomials with integer coefficients may very well result in polynomials with rational coefficients, but a rational polynomial class does not exist. So what does it mean to divide two univariate polynomials and get an integer polynomial quotient and remainder? The domain ZZ[x] is not a Euclidean domain. Here is what we mean by Euclidean division:

Two polynomials p, q can uniquely be written as
p = quo * q + rem 
deg(rem) < deg(q) or rem = 0

Thus the Euclidean division that we are so familiar with is not defined for this domain. So how do other libraries handle this division? I tried the division with Flint, Sage and SymPy.

In [14]: q = Poly(2*x)
In [15]: p = Poly(5*x**2)
In [16]: div(p, q, domain=ZZ)

Flint and Sage gave :

quo = Poly(2*x, x, domain='ZZ') 
rem = Poly(x**2, x, domain='ZZ')

while SymPy gave :

quo = Poly(0, x, domain='ZZ') 
rem = Poly(5*x**2, x, domain='ZZ')

This was interesting, as SymPy gave a different answer than the other two. I asked about this behaviour on gitter, and got a brief explanation from @jksuom:

The operation quo_rem is mathematically well defined only in so-called Euclidean domains. They are principal ideal domains (PIDs) equipped with a 'Euclidean function'. ZZ[x] is not a PID, so there is no universal agreement on how quo_rem should be defined. I think that there are some appealing features in the way it is defined in sage, but one cannot say that SymPy's way would be wrong, either.

It seems that the main (only?) practical use of quo_rem in non-Euclidean domains is for testing if an element is divisible by another. That happens when the remainder vanishes. Both ways of defining quo_rem in ZZ[x] can be used to decide this.

This explanation made a lot of sense, but I was anyway asked to open #11240, to find out why SymPy handles division the way it does, and whether it could be changed. Taking in all the inputs, I decided that I should not go ahead with a div_upoly function for now; instead I made a divides_upoly function, which basically tells whether one polynomial divides the other or not. I had to implement the straightforward algorithm in SymEngine, while Flint already had a divides function, and Piranha throws an exception on inexact division, which I used to port it for Piranha polynomials.
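To make this concrete, here is a minimal sketch of the idea in plain Python (SymEngine's actual code is C++ over its own containers): long-divide, and require each quotient step to stay in ZZ and the remainder to vanish.

def divides_zz(a, b):
    # True iff b divides a in ZZ[x]; a, b are dense coefficient lists,
    # highest degree first, with b non-empty and b[0] != 0
    rem = list(a)
    while len(rem) >= len(b) and any(rem):
        if rem[0] % b[0] != 0:
            return False               # leading term not divisible over ZZ
        q = rem[0] // b[0]
        for i, c in enumerate(b):      # subtract q * x**k * b
            rem[i] -= q * c
        rem.pop(0)                     # leading coefficient is now zero
    return not any(rem)

>>> divides_zz([1, 0, -1], [1, -1])    # (x - 1) divides (x**2 - 1)
True
>>> divides_zz([5, 0, 0], [2, 0])      # 2*x does not divide 5*x**2 in ZZ[x]
False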

The other methods that were wrapped were gcd and lcm, for both the Flint and the Piranha polynomials. I have not been able to write these functions for SymEngine polynomials yet. The most trivial algorithms for finding the GCD of two univariate polynomials depend on Euclidean division, and that does not exist for integer ring polynomials. Some more information from @jksuom:

The ring ZZ[x] is a unique factorization domain (UFD) even if it is not euclidean. It has gcd and lcm, but there is no euclidean algorithm for them.

Isuru suggested a hint for a simple algorithm, but I have not been able to follow it up yet; I'll catch up with it in the coming week. The next function I worked on was pow. Again, this was already implemented in the two other libraries and I just had to wrap them. For SymEngine, I had to decide which algorithm to use: a naive repeated-multiplication approach or the exponentiation by squaring algorithm. I also benchmarked the pow methods of Piranha. All the work and the benchmarks are in #989.

Miscellaneous Work

  • Other functionalities like multieval, derivative and get_lc were also added for all three integer polynomials.

  • There was an error in the Flint documentation, related to an incorrect ordering of the arguments passed to the divides function. It was reported in #267.

  • Piranha requires some implementations to be explicitly provided on the coefficient class we are going to use. Thus, I had to provide implementations like gcd_impl, pow_impl and divexact_impl for SymEngine::integer_class to work with piranha::polynomial.

Soon, I will write gcd and lcm for SymEngine polynomials, and start on the Basic -> Poly conversion to add to the coercion framework decided for SymEngine.

See you!

June 17, 2016

Hi there! It's been four weeks at GSoC. This week, I have been working on integration. I had a meeting with my mentors on 12th June; it was great. I had got stuck while implementing the integration of Singularity Functions earlier, as I discussed in my previous blog post. But at the meeting, I got some good ideas from Sartaj and Jason which not only cleared my doubts but also helped me complete the implementation of the integration of Singularity Functions. Firstly, they suggested that I start the integration part in a new PR, since PR 11178 was already good enough to get merged. Secondly, to integrate Singularity Functions having exponent -1, we need to rewrite them as DiracDeltas, use the methods of integration already available in SymPy, and then rewrite the result back in terms of Singularity Functions. Now I have almost completed the "Implementation of Singularity Functions".

So Far

  • In PR 11178, I have added the changes that Jason suggested. This PR is now ready.
  • In PR 11237, I have almost completed implementing the integration of Singularity Functions, both definite and indefinite. I have implemented methods to rewrite DiracDeltas and Heavisides back to Singularity Functions. The arguments of the DiracDeltas and Heavisides should be linear polynomials in order to use these methods.
  • I have added a module named "singularityfunctions" in the integrals directory, in which I have defined a function "singularityintegrate". This is the core function for integrating Singularity Functions. SingularityFunction(x, a, -1) behaves exactly as DiracDelta(x - a); see the sketch below.
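A quick illustration of the rewriting implemented in these PRs:

>>> from sympy import SingularityFunction, DiracDelta, Heaviside
>>> from sympy.abc import x, a
>>> SingularityFunction(x, a, -1).rewrite(DiracDelta)
DiracDelta(-a + x)
>>> SingularityFunction(x, a, 0).rewrite(Heaviside)
Heaviside(-a + x)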

Next Week

  • To polish PR 11178  and PR 11237 and get them merged.
  • To start working with Beam Bending Module.

Looking forward to another great week. Cheers!

Midterm Evaluation is Coming !!! All the best.

Happy Coding.


I started off this week writing the example code for a pendulum defined by x and y coordinates instead of an angle, theta. This was to show how the eombase.EOM class would handle a differential algebraic system. I also altered the simple pendulum example I made early on in the project to show how it would look as an eombase.EOM example. Of the examples I have made for the base class, this one stands out as currently being the only one making use of the equations of motion generators (the other two have the equations of motion entered by hand). While addressing comments on the PR, it was mentioned that a more traditional documentation approach would allow greater visibility of the desired results of the code, as the output could be explicitly shown. I agreed and moved all three examples to a single .rst document in PyDy and changed the code to follow the documentation format over the example code format. At this point I made a list of all of the attributes and methods I thought the base class should have and made sure they were represented in the example documentation. In addition, I included multiple ways I thought error messages should be brought up for incorrect uses of the base class. This information is currently awaiting review.

In addition to the work on the base class, I had to fix the kane benchmark I made early on in the project. At some point in the last few months the input order for kane.kanes_equations() was flipped, and this caused the benchmark to fail on previous versions of SymPy. My fix was to use a try/except clause to catch the error produced by the older versions of SymPy and alter the input order based on whether or not the error was produced. This code sits at PR #29 and it too is awaiting review/approval.

While I have been waiting for review of the base class PR, I have begun reading through Roy Featherstone’s book, “Rigid Body Dynamics Algorithms”. I have spent time going through Jason’s overhaul of KanesMethod as well and trying to provide as much useful feedback as I can.

Lastly, I reviewed PR #11209 this week. The PR correctly alters code that tests for the presence of a key in a dictionary. It also altered the indentation of the code that immediately followed. I eventually came to the conclusion that this was a correct alteration because the variable eq_no is set in the dictionary key test and is used in the code that follows the test. I commented that the PR looked good to me and another member of SymPy merged it. This makes me slightly worried that too much value may be attached to my opinion, as I still feel like a beginner.

Future Directions

I will continue reading through Featherstone's book until I receive feedback on the proposed base class API, at which time I will address the reviewers' comments and hopefully begin work on the base class itself.

PR’s and Issues

  • (Open) Improved the explanation of the 5 equations in the Kane’s Method docs PR #11183
  • (Open) Created a basis on which to discuss EOM class PR #353
  • (Open) Minor fix in KanesMethod’s docstring PR #11186
  • (Open) Fixed kane benchmark for different input order PR #29
  • (Merged) Fix issue #8193 PR #11209

June 13, 2016


This week I had been working on the design of the GaloisField class and the GaloisFieldDict class.

Firstly, like UIntDict, GaloisFieldDict inherited from ODictWrapper, and had its dict_ as a map<unsigned int, integer_class>, making it a sparse representation.
Then GaloisField inherited from UPolyBase, in which the Container was a GaloisFieldDict.

The dict_ being a map, I made some more optimizations to the code, mainly with the insert function; I gave extra attention to the fact that insertion can be optimized by providing an iterator as a hint.
Apart from that, I made in-place versions of almost all the arithmetic and modular operations/functions.
Then, as we had already implemented the division operation in the finite field, I overloaded the / operator.

Along the way, I saw one test constantly failing in Travis CI. The platform it was failing on was OSX. As I don't have access to OSX, the debugging took a long time, but after the debugging I had fixed some good bugs. It highlights the importance of writing good tests, including corner cases.

It was all fine till then, but Isuru (my mentor) realized that we have to do a large amount of division, during which we access all the elements of the map below the degree of the divisor (for calculating the remainder); this takes more time with a map than with a vector. So we decided to shift our dict_ type to std::vector<integer_class>. Now the GaloisFieldDict class doesn't inherit from ODictWrapper, while the GaloisField class still has the same structure. With GaloisFieldDict's dict_ being a vector, I had to implement a function gf_istrip(), which strips off the leading zeroes, so that the degree of our polynomial can be accessed directly as dict_.size()-1 and, most importantly, the number of computations decreases.
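In Python terms, the semantics of gf_istrip() are simply the following (the real implementation is C++ over std::vector):

def gf_istrip(coeffs):
    # coeffs[i] is the coefficient of x**i; strip the leading zeros so
    # that the degree is len(coeffs) - 1 afterwards
    while coeffs and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs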
As this work is in progress, I will post about it in the next week’s post.

Solveset when domain = S.Integers

PR #11234

Right now we may not get solutions in the Integers domain, but we can use the concept of diophantine equations in solveset. When I messaged about this in the gitter channel, Aaron told me about diophantine, already defined in solvers/diophantine.py. So we can use diophantine in solveset to get integer solutions. diophantine is coded pretty well and works fine.

Diophantine Equations

Let P(x, y, ...) be a polynomial with integer coefficients in one or more variables. A Diophantine equation is an algebraic equation

P(x, y, z, ... ) =  0

for which integer solutions are sought.

Previous work on this :

PR #10994


solveset_univariate_trig_ineq:

PR #11257

  • Problem in current branch:

In [ ]: solveset((2*cos(x)+1)/(2*cos(x)-1) > 0, x, S.Reals)
Out[ ]:
(-oo, pi/3) \ ImageSet(Lambda(_n, 2*_n*pi + 5*pi/3), Integers()) U ImageSet(Lambda(_n, 2*_n*pi + pi/3), Integers()) U (2*pi/3, 4*pi/3) \ ImageSet(Lambda(_n, 2*_n*pi + 5*pi/3), Integers()) U ImageSet(Lambda(_n, 2*_n*pi + pi/3), Integers())

The expected solution is:

(1/3)*(3*pi*n - pi) < x < (1/3)*(3*pi*n +pi), n element in Z

I am working on this and have opened PR #11257, but it is failing some cases. I am trying to improve it. It is inspired by the previous solve_univariate_inequality, but it seems to need changes for trigonometric inequalities.

The main intention is to get the general periodic solution for trigonometric inequalities.

eg.

In [2]: solveset((2*cos(x)+1)/(2*cos(x)-1) > 0, x, S.Reals)
Out[2]:
⎡⎧      π        ⎫  ⎧      π        ⎫⎤
⎢⎨n⋅π - ─ | n ∊ ℤ⎬, ⎨n⋅π + ─ | n ∊ ℤ⎬⎥
⎣⎩      3        ⎭  ⎩      3        ⎭⎦

In [4]: solveset(sin(x) > 1/sqrt(2), x, S.Reals)
Out[4]:
⎛⎧        π        ⎫  ⎧        3⋅π        ⎫⎞
⎜⎨2⋅n⋅π + ─ | n ∊ ℤ⎬, ⎨2⋅n⋅π + ─── | n ∊ ℤ⎬⎟
⎝⎩        4        ⎭  ⎩         4         ⎭⎠

In [15]: solveset(2*cos(x) + sqrt(3) < 0, x, S.Reals)
Out[15]:
⎛⎧        5⋅π        ⎫  ⎧        7⋅π        ⎫⎞
⎜⎨2⋅n⋅π + ─── | n ∊ ℤ⎬, ⎨2⋅n⋅π + ─── | n ∊ ℤ⎬⎟
⎝⎩         6         ⎭  ⎩         6         ⎭⎠

In [16]: solveset_univariate_trig_inequality(tan(x) > 0, x)
Out[16]:
⎛               ⎧      π        ⎫⎞
⎜{n⋅π | n ∊ ℤ}, ⎨n⋅π + ─ | n ∊ ℤ⎬⎟
⎝               ⎩      2        ⎭⎠
  • Still need some good ideas to improve PR #11257. Will continue …

Continue- nonlinsolve :

PR #11111

  • After Amit's review and comments, I improved the docstring and the if-blocks handling complements and intersections in the substitution function.

  • The main thing I added is: the substitution method will now return both real and complex solutions. That means it now uses both solveset_real and solveset_complex. Previously it used solveset_complex only when solveset_real returned S.EmptySet.

  • Since both the solveset_real and solveset_complex solutions need similar steps, I am using the _solve_using_know_values function as a helper for the substitution method, where the solver parameter can be solveset_real or solveset_complex. Another parameter is result, which is a list of dicts <known_symbol: its value> (already solved symbols, mostly from nonlinsolve/_solve_poly_system).

Meanwhile

  • Opened an issue #11236.

  • PR #11239 for the issue

  • I found that right now diophantine can’t handle these types of equations :

In [ ]: diophantine((x+y)**2 - x**3 + y**3)
---------------------------------------------------------------------------
NotImplementedError: No solver has been written for cubic_thue.

continue..

June 12, 2016

Week 3 of GSoC started with a backlog coming from the second week, but I was able to end the week on a happier note by catching up with that work and almost finishing the work assigned for this week. Apart from that, I have started reading up on implementing a feature that was not in the original timeline.

In order to complete the work on the RealMPFR and ComplexMPC wrappers, I got PR #972 and PR #984 merged in the symengine repository. The work from the second week finally came to an end with PR #49 getting merged in the symengine.rb repository.

For the third week's work, Series and Polynomials had to be postponed because the symengine developers are still finalizing them. So I was left with Derivatives, Substitutions and Abs. Those implementations were covered in PR #982 in the symengine repository and PR #50 in the symengine.rb repository. PR #51 is currently under review, and a discussion on probable methods to extend the FunctionSymbol initialization function is currently underway.

Speaking of the extra work I am going to take up: it concerns the eval methods. Currently symengine has four eval methods, one each for limited-precision reals and complexes, and for multiple-precision reals and complexes. These need to be provided through a single interfacing method, which is of the following format:

RCP<const Number> eval(const Basic &b, unsigned long bits = 53, bool real = false);

This needs to be written in C++, then wrapped in C and later in Ruby. Though I was planning to start working on it during the weekend, it was only possible to read and plan how to write the code. So next week, along with starting on the Matrices, I will look into achieving this as well.

See you next week!


Overview

I worked on wrapping Piranha polynomials within SymEngine this week. It felt much more interesting than wrapping Flint polynomials, mainly because I was reading Piranha's source and was in direct touch with the author of the library. Many thanks to @bluescarni, who helped me get familiar with it. As before, I'll briefly summarize the work I did this week along with my progress.

Piranha Polynomials

Piranha’s polynomial class is the piranha::polynomial class. It’s templatized as follows:

template<typename Cf, typename Key>
class polynomial : power_series< ... >

There are two major differences between Piranha polynomials and SymEngine polynomials. Firstly, Piranha uses a custom unordered map implementation called hash_set for storing its degree-coefficient pairs; hash_set has been internally optimized for polynomial manipulation in general. On a side note, this makes SymEngine have all three types of polynomial representations! Secondly, Piranha does not distinguish between univariate polynomials and multivariate polynomials. All polynomials are multivariate polynomials, univariate polynomials being a special case (with just one element in their symbol_set).

Here, Cf is the class for storing the coefficients, while Key represents the monomials themselves. Unlike with Flint, we can use any integer class as the coefficient class for the polynomials. So, the first question was whether to use piranha::integer (with implicit conversions, like I did for the Flint polynomials) or SymEngine::integer_class as the integer class for Piranha polynomials. After a brief discussion with Isuru, we decided to go with SymEngine's integers. The Key class used is piranha::monomial<uint>, which means it stores one unsigned int per symbol (representing the degree in each monomial).

SymEngine’s integers in piranha::polynomial

For an integer class to be usable as the Cf, it should have some basic properties: default constructibility, copy constructibility, nothrow destructibility, nothrow move assignability and nothrow move constructibility. As of yet, mpz_wrapper, fmpz_wrapper and mpz_class did not pass the nothrow checks. All that had to be done was to add noexcept to the wrappers we had already written. This allowed mpz_wrapper and fmpz_wrapper to be used as coefficients in Piranha polynomials. Of course, Piranha's own integer class passes all these checks too.

Currently, there is no solution for having mpz_class as the coefficient class for polynomials. Firstly, gmpxx's integer methods have not been marked nothrow yet; there is a forum post on how this can be added, and it has actually been added in the development version, but that hasn't been released yet. Another reason is that mpz_class uses expression templates for improved performance. This means that, unlike with normal integers, an arithmetic operation does not necessarily return the same integer class:

int a, b;
std::is_same<decltype(a + b), int>::value;        // true

mpz_class c, d;
std::is_same<decltype(c + d), mpz_class>::value;  // false: an expression template

The reason it cannot be used within Piranha is that Piranha checks the coefficient type after each operation so that it knows what ring the polynomial belongs to. Here, it would be unable to detect the ring, as the returned coefficient would be an expression template. Thus, it was decided that mpz_class can't be used alongside Piranha with SymEngine. So the following configuration became invalid; a warning is thrown and the class is changed to mpz_wrapper:

cmake -DWITH_PIRANHA=yes -DINTEGER_CLASS=gmpxx

All the work on Piranha polynomials and the cmake changes can be found in #980.

Iterators

Here is what the print functions for our univariate integer polynomials looked like:

string print(const UIntPoly &p)
{
    for (auto it = p.dict_.begin(); it != p.dict_.end(); ++it) {
        // append it->first (the degree) and it->second (the coefficient)
    }
}

string print(const UIntPolyFlint &p)
{
    // a signed index, so that counting down past degree 0 terminates
    for (long i = p.degree(); i >= 0; --i) {
        // append i and p.get_coeff(i)
    }
}

These methods are very similar and do not deserve to be separate functions. All that was needed to unify them was to have iterators for each of the three classes; then the same code can be shared by many functions like print, as_symbolic and derivative. This also called for a new base class, UIntPolyBase, from which the three integer polynomial classes inherit, and which itself inherits from the original base class UPolyBase.

I stuck with the it->first and it->second syntax for the unified iterators, which means operator* needed to return a std::pair<uint, integer_class>. For SymEngine polynomials, all I needed to do was return iterators to the internal dictionary. Without going into too much technicality, for Flint and Piranha I used custom iterators which had operator++ defined on them and returned a pair whenever it-> or *it was called. They basically iterate over the degrees and return only the non-zero coefficients for both libraries. The actual implementation of the iterator scheme can be seen in #985.
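In spirit, the custom iterators do the following; a Python sketch for brevity (the real ones are C++ classes in #985):

def nonzero_terms(dense_coeffs):
    # walk the degrees of a dense representation and yield
    # (degree, coefficient) pairs only for the non-zero coefficients
    for deg, coeff in enumerate(dense_coeffs):
        if coeff:
            yield deg, coeff

print(list(nonzero_terms([5, 0, 0, 3])))  # [(0, 5), (3, 3)]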

Initially I was happy with this iterator approach, and (unifying the three) was able to write:

string print(const UIntPolyBase &p)
{
    for (auto it = p.begin(); it != p.end(); ++it) {
        // use it->first and it->second, whatever the backing store
    }
}

Isuru pointed out that instead of returning the integer_class in the pair, we should return a reference to it. This made sense, but was a bit tricky to implement. The iterator could not simply return a const integer_class& either, as Flint does not even store its coefficients as integer_class. To tackle this, another template parameter had to be added to the custom iterator class, determining the second member of the pair (the reference to the coefficient in the polynomial's internal storage). I also had to dig into the Flint documentation and ask Piranha's author to find out how to get const references to the coefficients in internal storage. After this was finally implemented, there was a new get_coeff_ref method, which does not copy but returns a reference to the required coefficient. The only downside was that instead of it->second I had to use to_integer_class(it->second) everywhere in the common functions.

Miscellaneous Work

  • I noticed that lots of files in SymEngine had redundant header includes. I wrote a short Python script which builds a header dependency tree and automatically removes includes that are not needed. After some manual inspection of the changes, I pushed them in #981. It was really amazing to see the script flag more than 500 lines of redundant includes. The PR has not been merged yet, though. The core idea is sketched below.
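The redundancy check might look something like this; an illustrative reconstruction, not the actual script from #981:

import re

INCLUDE_RE = re.compile(r'#include\s+[<"]([^">]+)[">]')

def direct_includes(path):
    # the headers a file includes directly
    with open(path) as fh:
        return set(INCLUDE_RE.findall(fh.read()))

def reachable(graph, start):
    # all headers reachable from `start` through the include graph
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def redundant(graph, path):
    # a direct include is flagged if it is already reachable
    # through one of the file's other direct includes
    direct = graph[path]
    return {h for h in direct
            if any(h in reachable(graph, other) for other in direct - {h})}

graph = {
    'a.cpp': {'x.h', 'y.h'},
    'x.h': {'y.h'},
    'y.h': set(),
}
print(redundant(graph, 'a.cpp'))  # {'y.h'}: already pulled in via x.h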

I will be working on higher level functionality like gcd in the coming week. Will keep you guys posted.

Laters!

June 11, 2016

Hi! It's been three weeks into the coding period, & I have managed to pick up some pace. I had a meeting with Jason and Sartaj on the 5th of this month. We exchanged our thoughts on implementing the integration of Singularity Function objects. Then our discussion moved on to the implementation of the Beam Bending Module, focusing on whether a Beam object will be mutable or immutable. We ended up discussing two approaches for the implementation of the integration of Singularity Function objects.

So far

  • PR 11103 and PR 11137 have finally been merged.
  • In PR 11178, I have added two important methods under the Singularity Function class.
      • rewrite(Heaviside)
      • rewrite(DiracDelta)

    These would help to convert a Singularity Function object into a mixture of DiracDeltas and Heavisides.

    But while doing the inverse, i.e. rewriting DiracDeltas and Heavisides back into Singularity Functions, I got stuck. I have implemented it in a way which can't handle even slightly complex expressions, though it works for simple DiracDelta and Heaviside expressions. I need to work on this method.

  • In the same PR, I am also working on the integration of the Singularity Functions. There are two approaches, which I have discussed with my mentors.
    • Directly use the rules of integration (see the sketch after this list): if the exponent is greater than zero, increase the exponent by 1 and divide the whole by the resulting exponent; else just increase the exponent by 1.
      The issue with this approach is that if we perform SymPy integration over SingularityFunction(x, a, 0), which is basically DiracDelta(x - a), then it doesn't satisfy the fundamental properties of DiracDelta integration. For that reason, I am trying the next approach.
    • Using Heaviside and DiracDelta: convert the Singularity Function into an expression containing DiracDeltas and Heavisides and then integrate that. Integration of DiracDeltas and Heavisides already exists in SymPy. Then rewrite the result back into Singularity Functions and output the resulting expression.
      Currently, I am working on rewriting the DiracDeltas and Heavisides back into the Singularity Function.
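A minimal Python sketch of the direct rule from the first approach; integrate_singularity is a hypothetical helper, not the module's API, and it assumes the SingularityFunction class from PR 11178:

from sympy import SingularityFunction, symbols

def integrate_singularity(x, a, n):
    # hypothetical helper illustrating the direct rule:
    # n >= 0:  <x - a>**n  ->  <x - a>**(n + 1) / (n + 1)
    # n < 0:   <x - a>**n  ->  <x - a>**(n + 1)
    if n >= 0:
        return SingularityFunction(x, a, n + 1) / (n + 1)
    return SingularityFunction(x, a, n + 1)

x = symbols('x')
print(integrate_singularity(x, 3, 2))   # SingularityFunction(x, 3, 3)/3
print(integrate_singularity(x, 3, -1))  # SingularityFunction(x, 3, 0)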

Next Week

  • Continue with the implementation of integrations of Singularity Functions.

That’s all for now, looking forward to week 4.

Cheers. Happy Coding.


June 10, 2016

Today is Friday the 10th and thus marks the end of the third week of Google Summer of Code. This week started off with continuing work on making test code and a base class for KanesMethod and LagrangesMethod. The work took a turn early in the week when I started working on an example problem that would use the base class instead of working on the base class and test code itself. This resulted in more reading and studying of code examples. This week I also had the opportunity to review multiple shorter PR’s in addition to a longer one that dealt directly with code in KanesMethod.

At the very beginning of this week I migrated all property attributes from KanesMethod and LagrangesMethod into EOM as a base class. This work shows up in PR #11182, which was originally meant to just be a discussion of the API for the base class. It was suggested that I stop working on the actual implementation at this point and instead plan out a more complete API.

In order to come up with a more complete plan for the API for EOM I first had to get a better understanding of what was done with the equations of motion after the formation step. To do this I looked over pydy-tutorial-human-standing, the Planar Pendulum Example, the Double Pendulum Example and the PyDy Mass Spring Damper Example. After the equations of motion are formed, the most common use for them is time simulation using ODE integration (or DAE integration), and so the next thing I did was look into the documentation for the integrators (scipy.integrate.odeint and scikits.odes.dae). With this information I was able to begin work on an example problem for the PyDy repository that would make use of this new class.

I found that PyDy's System class performs the work of rearranging the equations of motion into a form accepted by the integrators, and so the main focus of the base class is to have the attributes that System expects. After analyzing System's internals I went ahead and created a basic example of a base class and submitted the code in PR #353 in the PyDy repository. The PR shows manual entry of the equations of motion for a system with two masses, springs and dampers, and will be used for further discussion of the API.
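For context, here is a hedged sketch of the kind of first-order form System ultimately hands to the integrators; this is plain SciPy with an illustrative single mass-spring-damper, not the actual PyDy API or the two-mass example from the PR:

import numpy as np
from scipy.integrate import odeint

def rhs(state, t, m=1.0, c=0.1, k=4.0):
    # first-order form of m*x'' + c*x' + k*x = 0 (values illustrative)
    x, v = state
    return [v, (-c * v - k * x) / m]

t = np.linspace(0.0, 10.0, 500)
trajectory = odeint(rhs, [1.0, 0.0], t)  # start at x = 1 with zero velocity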

This week I reviewed two shorter PR's and one longer PR. The shorter ones were PR #10698 and PR #10693, and covered SymPy documentation. The first requested removing the docstring of one of the modules because the module had detailed documentation online. I suggested that this would be a negative change to SymPy overall; others in SymPy came to the same conclusion and it was promptly closed. The other documentation PR had a couple of spelling fixes but also some negative or non-useful changes, and is currently awaiting the author to remedy the latter. The longer PR this week was a continuation of the Kane's Method PR #11183 Jason started last week, an effort to improve the readability and overall cleanliness of KanesMethod.

Future Directions

The current plan for the next steps of the project is to come up with a few more examples of the base class in use. Once the example code has laid out an API I will begin implementing the base class itself.

PR’s and Issues Referenced in Post

  • (Open) Improved the explanation of the 5 equations in the Kane’s Method docs PR #11183
  • (Open) kane.old_linearizer Issue #11199
  • (Open) EOMBase class migration of property attributes PR #11182
  • (Open) Created a basis on which to discuss EOM class PR #353
  • (Closed) Remove documented redundant comments PR #10698
  • (Open) Documentation comments corrections PR #10693

Hey!

This week I worked on implementing a method for finding the range of a function in a given domain. Following from last week's research on the same, I tried to develop these utility functions.

Implementation

Here, I have defined the two functions along with some of their implementation details:

continuous_in(f, x, interval)

The function returns the sub-domains as a Union of Intervals in case a discontinuity exists in the interval. If the function is continuous in the entire domain, the interval itself is returned.

For this we need to consider 2 primary conditions:

  • Domain constraints for real functions: I have added some code for the domain constraints of sqrt and log functions. Using the solve_univariate_inequality method (which, as the name suggests, solves univariate inequalities), we calculate these constraints. Both checks are sketched after this list.
    Given f(x) = sqrt(g(x)), we determine the range of values of x for which g(x) >= 0.
    Similarly, for f(x) = log(g(x)), the interval of x in which g(x) > 0 is the constrained interval.

  • Singularities: For determining the discontinuities, I solve the reciprocal of the given function using solveset: solveset(1/f, x, interval). The singularities function could also be used here, but its implementation is restricted to rational functions only. There is scope for improving that function into a universal one which returns all the possible singularities of a function in a given domain.
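A hedged sketch of both checks; the helper name remove_singularities is mine for illustration, not the PR's code:

from sympy import Complement, FiniteSet, Interval, solveset, symbols
from sympy.solvers.inequalities import solve_univariate_inequality

x = symbols('x')

# the sqrt constraint: solve g(x) >= 0 as a set rather than a relation
print(solve_univariate_inequality(x - 1 >= 0, x, relational=False))
# Interval(1, oo)

def remove_singularities(f, x, interval):
    # drop the points where f blows up, found as the zeros of 1/f
    sings = solveset(1/f, x, interval)
    if isinstance(sings, FiniteSet):
        return Complement(interval, sings)
    return interval

# 1/(x - 1) on [0, 2]  ->  [0, 1) U (1, 2]
print(remove_singularities(1/(x - 1), x, Interval(0, 2)))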

function_range(f, x, domain)

As the name suggests, this method returns the range of a univariate function in a given domain. It is primarily designed for use by solve_decomposition.

This function calls the continuous_in method implemented above to find the actual domain of f. Following this, we iterate over each Interval object returned by continuous_in.

By using the boundaries of the interval and the first derivative test, we determine the critical points in the interval and their corresponding critical values.

For the values of the function at the singularities, we determine its limit at each such point. For this, I use the limit function of SymPy.

After calculating the local extrema, I calculate the global minimum and maximum using the inf (infimum) and sup (supremum) of the FiniteSet of all critical values. The range, which is the Interval between these extrema, is returned.
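A minimal sketch of the derivative-test step on a closed interval that is assumed free of singularities; extrema_on_interval is an illustrative helper, not the PR's code:

from sympy import FiniteSet, Interval, diff, solveset, symbols

x = symbols('x')

def extrema_on_interval(f, x, interval):
    # critical points from the first derivative, plus the interval boundaries
    crit = solveset(diff(f, x), x, interval)
    points = FiniteSet(interval.start, interval.end)
    if isinstance(crit, FiniteSet):
        points = points | crit
    values = FiniteSet(*[f.subs(x, p) for p in points])
    return Interval(values.inf, values.sup)

# x**2 - 2*x on [0, 3]: critical point at x = 1, so the range is [-1, 3]
print(extrema_on_interval(x**2 - 2*x, x, Interval(0, 3)))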

$ git log

PR#11141: Method for solving equations using Decomposition and Rewriting
PR#11224: Methods for finding the range of a function in a given domain (opened this week)

Final Thoughts

That was all for this week.
My task for the upcoming days would be to update my solve_decomposition method to accommodate these methods. I aim to get all these PRs merged before the midterm evaluation.

June 08, 2016

Continued: Simplified solutions for Trigonometric Equations:

PR #11188

New Problems :

  • When I tried more test cases having sqrt and other types, I found new issues in the reduce_imageset method. Some of the simplified solutions contain undesired n values.

Changes in solveset/_solve_radical

  • Sometimes _solve_radical may get exp(I*x) terms, and solving them gives an ImageSet. I added the following check to the method:

if isinstance(result, ImageSet) or any(isinstance(r, ImageSet) for r in result.args):
    return result

Using factor_list in solve_trig :

  • In many cases we can factor the expression, and solving each factor may give us a more simplified solution.

  • I added factor_list, solving each factor and taking the union of the solutions.

  • I analyzed how the exp form is solved and found the correct order in which to use factor or factor_list; the summary is in this gist.

  • I implemented the above order and shifted the reduce_imageset method into _union for ImageSet. I created _union_simplify, a helper method for _union.

_union_simplify is better than my previous implementation:

  • The previous implementation returned the solution as simplified ImageSets, and it passed all test cases except 2-3. But those 2-3 test cases taught me that I was over-simplifying. Now I understand that we should not simplify the ImageSet(s) of one factor's solution on its own; it may get simplified together with another factor's solution ImageSet(s).

  • Simplify them only if there is a difference of pi, i.e. club together ImageSets like {2nπ + c} and {2nπ + c + π} into {nπ + c}.

So now _solve_trig uses factor_list on the trig equation and then solves each factor F_i. To solve a factor F_i, it first does F_i.rewrite(exp) to get the exp-form factors F_ij; the unnecessary exp factors that don't contribute to the final solution drop out. That is why we get a simplified ImageSet, which is then unioned with the previous solution. Inside the Union, _union calls _union_simplify, which checks for simplification against the ImageSets already present. This process repeats up to the last factor F_i.
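A hedged sketch of the factor-then-union idea; solve_by_factors is illustrative only, and the real code additionally rewrites each factor via exp and lets _union_simplify merge the resulting ImageSets:

from sympy import S, Union, cos, factor_list, sin, solveset, symbols

x = symbols('x')

def solve_by_factors(f, x, domain=S.Reals):
    # solve each factor F_i separately, then union the per-factor solutions
    _, factors = factor_list(f)
    return Union(*[solveset(base, x, domain) for base, _ in factors])

# e.g. sin(x)*(2*cos(x) - 1) = 0
print(solve_by_factors(sin(x)*(2*cos(x) - 1), x))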

Meanwhile :

  • After working on the blog for a week, I created my own blog template powered by Jekyll and shifted the old blog to GitHub. PR for the blog link update in planet: pull/42

  • I edited my old template (already hosted on s-hacker.info) to make it mobile responsive. Now it works perfectly on mobile: shekharrajak.github.io

  • Found an issue in factor_list: issues/11198. I hit this problem when I passed f = sqrt(2)*sin(x) - 1 into solveset.

  • To solve the factor_list issue I opened a PR: pull/11201.

June 07, 2016

As I wrote in my blog post last week, our plan was to implement numerical methods to compute values of a Holonomic function at given points, and to convert symbolic functions/expressions to Holonomic Functions. Let's look at what I implemented during this time, one by one.

I started by implementing the most basic and popular method for numerical integration of holonomic ODEs: Euler's method. Given a set of points in the complex plane, the algorithm computes numerical values of the Holonomic function at each point. #11180
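A minimal, library-agnostic sketch of the Euler stepping involved; this is not the module's actual code, which works on the holonomic ODE's first-order system form:

def euler(f, x0, y0, points):
    # Euler's method for y' = f(x, y): step from x0 through the given
    # points (which may be complex), returning the value at each point
    ys = []
    x, y = x0, y0
    for x_next in points:
        h = x_next - x
        y = y + h * f(x, y)
        x = x_next
        ys.append(y)
    return ys

# y' = y, y(0) = 1: approximate e at x = 1 with 100 steps
print(euler(lambda x, y: y, 0, 1, [k / 100 for k in range(1, 101)])[-1])
# ~2.7048 (exact: e = 2.71828...)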

Next I moved on to writing a method for converting a Meijer G-function to a Holonomic Function. In the meijerint module we have the method meijerint._rewrite1 to convert expressions/functions to meijerg. This method can be used first to convert to meijerg, and that result can then finally be converted to a Holonomic Function. I wrote the methods from_meijerg and from_sympy respectively for these purposes. #11187

We are also familiar with the fact that Euler's method doesn't give very accurate results; for better accuracy one needs higher-order numerical methods, e.g. the Runge-Kutta 4th order method, a.k.a. the RK4 method. I wrote this method and made it the default for numerical computation. As expected, the results are much more accurate using RK4, and accurate enough for our present purposes. #11195
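For comparison, a sketch of a single classical RK4 step, again generic rather than the module's implementation:

def rk4_step(f, x, y, h):
    # one classical RK4 step for y' = f(x, y)
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# the same y' = y test: one step of size 1 already gives ~2.70833
print(rk4_step(lambda x, y: y, 0, 1, 1.0))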

I am now looking for bugs in everything implemented so far in the module; I will work on fixing them and will also optimize wherever possible. What to implement next is yet to be discussed.

Thank You!



June 06, 2016


This week, I implemented some algorithms (gf_div, gf_gcd etc.) and, apart from that, also looked into some design considerations.

Algorithms Implemented

I implemented:

  • gf_div

void GaloisFieldDict::gf_div(const GaloisFieldDict &o,
                         const Ptr<GaloisFieldDict> &quo,
                         const Ptr<GaloisFieldDict> &rem) const

This will change the value of quo and rem.

  • gf_quo

This will return the quotient only. It helps when we only need the quotient after dividing, and it is better than gf_div in terms of time complexity.

  • gf_lshift

It is an efficient way to multiply a polynomial in GaloisField by x**n.

  • gf_rshift

It is an efficient way to divide a polynomial in GaloisField by x**n:

void GaloisFieldDict::gf_rshift(const integer_class n,
                            const Ptr<GaloisFieldDict> &quo,
                            const Ptr<GaloisFieldDict> &rem) const

Just like gf_div, it also changes the value of quo and rem.

  • gf_sqr

This will square the polynomial in GaloisField.

  • gf_pow

It uses binary multiplication (square-and-multiply) to raise a polynomial in GaloisField to a power; see the sketch after this list.

  • gf_monic

Changes a polynomial to its monic representation, i.e. 3*x**2 + 4 in GF(5) becomes x**2 + 3, so that the leading coefficient is one. (Also sketched after this list.)

  • gf_gcd and gf_lcm

gf_gcd uses the Euclidean Algorithm, and gf_lcm is the product of the two polynomials divided by their gcd in the finite field.
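For intuition, a language-agnostic Python sketch of gf_pow's square-and-multiply and gf_monic's rescaling; coefficient lists are low-degree first, and these helpers are illustrative, not the C++ API:

def poly_mul_mod(a, b, p):
    # naive multiplication of coefficient lists modulo the prime p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow_mod(a, n, p):
    # binary (square-and-multiply) powering, the idea behind gf_pow
    result = [1]
    while n:
        if n & 1:
            result = poly_mul_mod(result, a, p)
        a = poly_mul_mod(a, a, p)
        n >>= 1
    return result

def poly_monic_mod(a, p):
    # rescale by the inverse of the leading coefficient, as gf_monic does
    inv = pow(a[-1], -1, p)  # modular inverse (Python 3.8+)
    return [(c * inv) % p for c in a]

print(poly_pow_mod([1, 1], 4, 5))    # (1 + x)**4 in GF(5): [1, 4, 1, 4, 1]
print(poly_monic_mod([4, 0, 3], 5))  # 3*x**2 + 4 -> x**2 + 3: [3, 0, 1]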

Design Change

SymEngine's codebase for UIntPoly has changed: it now inherits from UPolyBase, which has two private variables, one to store the variable and the other the container. UIntPoly uses UIntDict as a container, and UIntDict is inherited from ODictWrapper. Since GaloisField is also just a representation of a polynomial, I needed something similar, so I made GaloisField inherit from UPolyBase and made a container named GaloisFieldDict. This prevented a lot of code duplication. There is still a lot of discussion going on about this topic; I will post more once the final design is fixed.

Overview

This week has not been as interesting as the last. Most of my work involved wrapping Flint polynomials for use in SymEngine. I will discuss their internal implementation and how they were integrated into the library.

Flint Polynomials

Flint's integer polynomial class is the fmpz_poly class. Flint, being a C library, also provides a C++ wrapper for most of its classes; in our case, it's the fmpz_polyxx class. As discussed last week, the arithmetic operators for the class have already been overloaded, so it can be seamlessly used with the base class which is already set up.

Internally, Flint uses a dense representation of polynomials. What this means is that the coefficient for each degree of the polynomial is stored in a vector. This is in contrast to a sparse approach, which stores only the non-zero coefficients of a polynomial in a dictionary. Flint also uses its own integer class as the coefficient type. Flint's integers are basically similar to GMP integers, and also follow the same naming conventions (e.g. fmpz_t is analogous to mpz_t). They have internal optimizations to make them faster for use within the Flint library itself. Flint does not provide the option to use a custom integer class as the coefficient type like Piranha does.
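Schematically, the difference between the two representations looks like this; illustrative Python, not either library's actual storage:

# dense (Flint-style): the list index is the degree; zeros are stored
dense = [5, 0, 0, 3]     # represents 3*x**3 + 5

# sparse (dictionary-style, as in SymEngine's own polynomials):
# only non-zero coefficients are kept, keyed by degree
sparse = {0: 5, 3: 3}    # the same polynomial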

All the work related to Flint polynomials can be seen in #971. The majority of the PR concerns introducing a new class, UIntPolyFlint, with all the necessary functionality like __hash__, eval and compare. Test cases have also been added to check that the polynomial manipulation works as intended. There is some additional minor work in the PR apart from this class introduction.

The polynomial base class now has three methods :

static Poly from_container()
static Poly from_dict()
static Poly from_vec()

These methods are, as their names indicate, used to construct the polynomial from any of the three sources. So, for example, to construct a UIntPolyFlint, you can:

// A is an fmpz_polyxx, B a vector<integer_class>, C a map<uint, integer_class>
UIntPolyFlint a = UIntPolyFlint::from_container(A);
UIntPolyFlint b = UIntPolyFlint::from_vec(B);
UIntPolyFlint c = UIntPolyFlint::from_dict(C);

integer_class Conversions

Another issue was that methods like eval must return integers of type integer_class. Thus, we needed functions to convert fmpzxx to any of the 4 possible integer classes used by SymEngine. The 4 integer classes used by SymEngine are:

1. mpz_class         The default C++ class provided by GMP
2. mpz_wrapper       SymEngine's wrapper around GMP's mpz_t
3. piranha::integer  Piranha's integer class
4. fmpz_wrapper      SymEngine's wrapper around Flint's fmpz_t

We might even require such conversions for Piranha polynomials, so I generalized this and made to_integer_class(), which takes either a flint::fmpzxx or a piranha::integer and returns an integer_class. There was a small, insightful discussion with Isuru on where these functions should be placed: what I was doing earlier caused a considerable increase in compilation time, which was later corrected. An example of one such function is:

#if SYMENGINE_INTEGER_CLASS == GMP || SYMENGINE_INTEGER_CLASS == GMPXX
#ifdef HAVE_SYMENGINE_FLINT
inline integer_class to_integer_class(const flint::fmpzxx &i)
{
    integer_class x;
    fmpz_get_mpz(x.get_mpz_t(), i._data().inner);
    return x;
}
#endif
#endif

A get_coeff() function, a much needed function for any polynomial manipulation, was also added to the polynomial base class.

Miscellaneous Work

  • There were some changes to the polynomial structure before the Flint polynomials were introduced. The constructor of the polynomials from vectors was removed, as it wasn't required, and a from_vec was added for the SymEngine polynomial dictionaries UIntDict and UExprDict. The changes can be seen in #965.

  • In Travis CI builds, SymEngine was using flint-2.4.4 from a static location. I encountered a bug in this version, caused by an incorrect header declaration in one of the Flint files. Now we clone the Flint repository from wbhart/flint2 for use in the CI builds. The change is in #973.

I’m working on wrapping Piranha polynomials next week. After that is done, I plan to start getting higher level functionality like gcd and factorize ready.

Ciao!

June 05, 2016

Hi there! It's been two weeks into the coding period, & I have managed to flip some bits. I had a meeting with Jason and Sartaj on the 30th of April. We discussed the implementation of Singularity Functions. I have implemented them using the properties of Heaviside and DiracDelta under the eval() method, but they suggested that I return Heaviside and DiracDelta directly instead of only their properties. This way we can reuse functionality that is already there in SymPy. I tried doing that, but I am having problems with the representation part.

With the former one:

In [2]: SingularityFunction(x, 3, 4)
Out[2]:
       4
<x - 3>

With the latter one:

In [3]: SingularityFunction(x, 3, 4)
Out[3]:
       4
(x - 3) ⋅Heaviside(x - 3)

I have to figure out the representation part of the Singularity Function class. If I succeed, then we can follow the latter idea of implementation without any worry.

We further discussed a bit about the integrations of Singularity Functions.

Progress of Week 2

The major portion of this week went into working with the printing modules, due to which I had to swap the targets of this week and the next.

  • In PR 11139, I was finally able to figure out how to implement the pretty printing of derivatives of the DiracDelta class, similar to the existing latex printing. Hence, it got merged.
  • In PR 11178, I've enabled the pretty printing and latex printing of Singularity Functions and added their tests too. The output looks something like this (in both pretty printing and latex printing):
In [3]: SingularityFunction(x, 3, 4)
Out[3]:                          
       4      
<x - 3>

In [4]: SingularityFunction(x, -5, 4)
Out[4]:                          
       4      
<x + 5>

In [5]: SingularityFunction(x, 0, 4)
Out[5]:
   4      
<x>
  • I was looking into PR 11192; it seems like great work to me. The docstrings are quite well defined.

Next Week

  • To get  PR 11103 and PR 11137 merged.
  • Integrations of Singularity Functions.
  • Improve the Implementation of Singularity Functions.

That’s all for now, looking forward to week 3.

Cheers. Happy Coding.


June 04, 2016

This week I worked on developing methods for computing the range of a univariate function. For this purpose, I spent most of my time on research: reading previous discussions on mailing lists, pull requests and wikis.

Progress

Suppose, we want to solve the following equation in the variable x:

sin(x) = π / 2

We can straight away state that this equation has no solutions, as the range of the function sin is -1 to 1,
i.e. sin(x) ∈ [-1, 1].

Thus, we find that the range of a function can be used to reason about the solutions of an equation. The implementation of a function to determine the range of a function in a given domain is particularly interesting for checking whether an equation is at all solvable.
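For illustration, here is the sin(x) = π/2 example using the function_range utility as it exists in today's SymPy (sympy.calculus.util); at the time of this post the design was still being worked out:

from sympy import S, pi, sin, symbols
from sympy.calculus.util import function_range  # where this work eventually landed

x = symbols('x')
rng = function_range(sin(x), x, S.Reals)
print(rng)          # Interval(-1, 1)
print(pi/2 in rng)  # False, so sin(x) = pi/2 has no real solution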

In the last meeting, Amit suggested that I read the conversations on

  • PR#2723 : Fixed imageset for Interval
  • PR#2925 : find singularities for any expression.

Along with this, I read the following mailing lists discussions:

  1. Find minimum value of a function symbolically

  2. Best way to find extrema of function over interval

  3. GSoC 2013 Idea - Find Domain / Range / Continuity / Singularity of a Function

  4. On a general representation for singularities and infinities

  5. Functions: Singularity and Continuity

All this research helped me appreciate the difficulty of the problem at hand. Also, my approach to solving this issue is heavily inspired by these discussions.

Design

The methodology of determining the range of a function:

  1. Determine the points of discontinuities in the concerned domain.
  2. Divide the entire domain into sub-domains about the above-determined singularities.
  3. Use the Derivative Test to locate the critical points of the function within each sub-domain.
  4. Calculate the corresponding values of the function at the critical points and the boundary values of the sub-domains.
  5. The extrema of the function in the entire domain are the maximum and minimum values over all the smaller domains combined.

For now, I have thought of designing 2 functions to solve this issue:

  1. Function to divide the domain into sub-domains.
  2. Function to calculate the extremum values.

$ git log

PR#11141 : Method for solving equations using Decomposition and Rewriting
PR#11164 : Intersection of certain ImageSets with Intervals

Thoughts

After tomorrow's meeting, I will iron out a more concrete design. My goal for next week would be to implement the above-said functions.

Looking forward to another exciting week!



