Planet SymPy

November 06, 2014

Fabian Pedregosa

Plot memory usage as a function of time

One of the lesser known features of the memory_profiler package is its ability to plot memory consumption as a function of time. This was implemented by my friend Philippe Gervais, previously a colleague at INRIA and now working for Google.

With this feature it is possible to very easily generate a plot of the memory consumption as a function of time. The result will be something like this:

where you can see the memory used (on the y-axis) as a function of time (on the x-axis). Furthermore, we have used two functions, test1 and test2, and the colored brackets make it possible to see at what time these functions start and finish.

This plot was generated with the following simple script:

import time

@profile
def test1():
    n = 10000
    a = [1] * n
    time.sleep(1)
    return a

@profile
def test2():
    n = 100000
    b = [1] * n
    time.sleep(1)
    return b

if __name__ == "__main__":
    test1()
    test2()

What happens here is that we have two functions, test1() and test2(), in which we create two lists of different sizes (the one in test2 is bigger). We call time.sleep() for one second so that the functions do not return too soon and we have time to get reliable memory measurements.

The decorator @profile is optional: it is useful so that memory_profiler knows when each function is called and can plot the brackets indicating that. If you don't put the decorator, the example will work just fine, except that the brackets will not appear in your plot.

Suppose we have saved the script as test1.py. We run the script as

$ mprof run test1.py

where mprof is an executable provided by memory_profiler. If the above command was successful, it will print something like this:

$ mprof run test1.py
mprof: Sampling memory every 0.1s
running as a Python program...

The above command will create a .dat file in your current working directory, something like mprofile_20141108113511.dat. This file (you can inspect it, it's a text file) contains the memory measurements for your program.

You can now plot the memory measurements with the command

$ mprof plot

This will open a matplotlib window and show you the plot:

As you see, attention has been paid to the default values so that the plot it generates already looks decent without much effort. The not-so-nice part is that, at least as of November 2014, if you want to customize the plot you'll have to look at and modify the mprof script. Some refactoring is still needed in order to make the plots easier to customize (work in progress).
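In the meantime, since the .dat file is plain text, you can always parse it yourself and build a custom plot. A minimal sketch, assuming the sample lines have the form MEM <memory-in-MiB> <unix-timestamp> (the exact field layout is my reading of the file format and may differ between memory_profiler versions, so check your own .dat file first):

import matplotlib.pyplot as plt

times, mem = [], []
with open('mprofile_20141108113511.dat') as f:
    for line in f:
        fields = line.split()
        # keep only the memory samples, skipping CMDLINE/FUNC lines
        if fields and fields[0] == 'MEM':
            mem.append(float(fields[1]))
            times.append(float(fields[2]))

plt.plot([t - times[0] for t in times], mem)
plt.xlabel('time (s)')
plt.ylabel('memory used (MiB)')
plt.show()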

by Fabian Pedregosa at November 06, 2014 11:00 PM

August 21, 2014

Avichal Dayal

Last blog of my GSoC period with SymPy.

My last few weeks were a bit disorganized. We realized that formal power series could not be implemented using the lazy evaluation scheme, because there was no way to incorporate Python generators into the SymPy code-base.
So a different approach, i.e. formal power series using a formula, was implemented.
In this approach, a series is represented by its general term, also called the generator.
E.g., the series expansion for exp(x) would be captured by:
exp(x) = Sum(x**k/k!, (k, 0, oo))

A problem came up when I started implementing this approach: the representation of the general term was troublesome, since the same representation must hold after all arithmetic operations.
Initially, a sparse representation was used, i.e. a list of tuples.
E.g., if f(x) = Sum(x**k/k! + x**(2*k)/2*k!, (k, 0, oo)) then this would be represented as [(1/k!, k), (1/2*k!, 2*k)].
I had trouble representing things this way and ultimately switched to a dense representation.

In the dense representation, the series is always represented as a single expression c(k), where c(k) is the coefficient of x**k. c(k) is often a Piecewise function.
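For instance, here is a minimal sketch (my own illustration, not the PR's actual code) of what the dense representation looks like for sin(x), whose even coefficients vanish:

from sympy import S, Eq, Mod, Piecewise, Sum, factorial, oo, symbols

x = symbols('x')
k = symbols('k', integer=True, nonnegative=True)

# c(k) = coefficient of x**k in sin(x); zero for even k, hence the Piecewise
c = Piecewise(((-1)**((k - 1)/2)/factorial(k), Eq(Mod(k, 2), 1)),
              (S.Zero, True))
series = Sum(c*x**k, (k, 0, oo))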


Overall, the last few weeks did not go as planned and less work was done than I had hoped. That was mainly because I was not clear enough on how to represent formal power series precisely, which caused problems in the end.
Anyway, I learned a lot and it was a nice experience working with SymPy.

by Avichal Dayal (noreply@blogger.com) at August 21, 2014 06:48 PM

August 20, 2014

Sachin Joglekar

GSoC end: Vector Module

Phew. Sorry about the late blogpost (and a missed one last week). Google Summer of Code is now officially over, and I am happy to report that I was successful in building a basic vector-manipulation/vector-calculus module for SymPy. The code is now in SymPy's master, which means it will be released with SymPy 0.7.6. It's been a great 3 months, with intensive software design, coding and bug-fixing going hand-in-hand to bring out 'sympy.vector'.

I would like to thank Jason Moore, my mentor during this year's GSoC. Initially, I had begun on my usual path of the 'let's start coding immediately' philosophy, but he made me wait it out, figure out extensive use cases, and put emphasis on the software design process. This was the first software I have written with the test-driven development approach, and I must say, it benefits a lot. There were quite a few tricky places, but I am glad I got through it all :-). I couldn't implement all I wanted to, but at least I am happy that all I wrote is stable, and works perfectly :-D.

The docs are expected to go in soon, after which you (if interested) may read through them and understand how to use the new module. All in all, it's been a great summer!

 

Cheers!


by srjoglekar246 at August 20, 2014 08:29 PM

Jim Crist

GSoC Week 12 & 13: The End

The GSoC program officially ended this Monday, and so my work for SymPy has concluded. I got a lot done in these last two weeks though. Here's what's new, in order of completion:

Complete overhaul of the codeprinting system

I wasn't happy with the way the code printers were done previously. There was a lot of redundant code throughout ccode, fcode and jscode (the main three printers). They also had a lot of special-case code in the doprint method for handling multiline statements, which I felt could be better accomplished using the visitor pattern that is used by all the other printers. The issue is that some nodes need to know whether they are part of a larger expression or part of an assignment. For example, in C, Piecewise expressions are printed as if statements if they contain an assignment, or inline using the ternary operator if they don't.

After some thought, this was solved by adding an Assignment node that contains this information, and then dispatching on it in the printer just like any other node. This means less special-case code, and it allowed much of the redundant code to move into the base CodePrinter class. For those implementing a new code printer (perhaps for Octave?), all you'd need to do is add how to print certain operators and a dictionary of function translations. Everything else should just work. I may add little cleanups here and there, but I'm pretty happy with the refactor.
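To make the dispatch idea concrete, here is a minimal standalone sketch (toy classes of my own, not SymPy's actual implementation) of how an Assignment node lets a visitor-pattern printer choose between an if statement and a ternary for the same Piecewise:

class Assignment:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

class Piecewise:
    def __init__(self, *clauses):  # (expr, cond) pairs, as strings here
        self.clauses = clauses

class ToyCodePrinter:
    def doprint(self, node):
        # visitor pattern: dispatch on the class name of the node
        return getattr(self, '_print_' + type(node).__name__)(node)

    def _print_str(self, node):
        return node

    def _print_Piecewise(self, node):
        # no assignment target available, so inline with the ternary operator
        (e1, c1), (e2, _) = node.clauses
        return "((%s) ? (%s) : (%s))" % (c1, e1, e2)

    def _print_Assignment(self, node):
        if isinstance(node.rhs, Piecewise):
            # the target is known, so a full if/else statement can be printed
            (e1, c1), (e2, _) = node.rhs.clauses
            return ("if (%s) {\n   %s = %s;\n}\nelse {\n   %s = %s;\n}"
                    % (c1, node.lhs, e1, node.lhs, e2))
        return "%s = %s;" % (node.lhs, self.doprint(node.rhs))

p = ToyCodePrinter()
print(p.doprint(Assignment('A[1]', Piecewise(('x + 2', 'y > 0'), ('y', 'true')))))
print(p.doprint(Piecewise(('x + 2', 'y > 0'), ('y', 'true'))))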

Code printers now support matrices

This was the original goal, but got put aside to do the previously described refactor. The codeprinters now support matrices - both as inputs and outputs. For example, the following now works:

# Expressions inside a matrix (imports shown so the snippets run as-is)
from sympy import symbols, sin, cos, sqrt, Matrix, MatrixSymbol, Piecewise, ccode
x, y, z = symbols('x, y, z')
mat = Matrix([x*y, Piecewise((2 + x, y>0), (y, True)), sin(z)])
A = MatrixSymbol('A', 3, 1)
print(ccode(mat, A))
A[0] = x*y;
if (y > 0) {
   A[1] = x + 2;
}
else {
   A[1] = y;
}
A[2] = sin(z);
# Matrix elements inside expressions
expr = Piecewise((2*A[2, 0], x > 0), (A[2, 0], True)) + sin(A[1, 0]) + A[0, 0]
print(ccode(expr))
((x > 0) ? (
    2*A[2]
)
: (
    A[2]
)) + sin(A[1]) + A[0]
# Matrix elements in expressions inside a matrix
q = MatrixSymbol('q', 5, 1)
M = MatrixSymbol('M', 3, 3)
m = Matrix([[sin(q[1,0]), 0, cos(q[2,0])],
    [q[1,0] + q[2,0], q[3, 0], 5],
    [2*q[4, 0]/q[1,0], sqrt(q[0,0]) + 4, 0]])
print(ccode(m, M))
M[0] = sin(q[1]);
M[1] = 0;
M[2] = cos(q[2]);
M[3] = q[1] + q[2];
M[4] = q[3];
M[5] = 5;
M[6] = 2*q[4]*1.0/q[1];
M[7] = 4 + sqrt(q[0]);
M[8] = 0;

There was even a Piecewise inside a Matrix in there. As long as there is an assignment between two compatible types (matrix -> matrix, scalar -> scalar), the new code printers should print out valid expressions.

codegen now supports matrices

This is more of a continuation of the above. The code generators have been modified to recognize instances of MatrixSymbol as array variables and act accordingly. There actually wasn't that much to change here to make this work. The biggest change is that all C functions that have a return value (non-void functions) allocate a local variable of the same type. This covers a larger set of expressions, while still generating valid code. So now, when performing codegen on sin(x) you won't get return sin(x); you'll get:

from sympy.utilities.codegen import codegen
# codegen returns (filename, contents) pairs for the source and header files
[(c_name, c_code), (h_name, h_header)] = codegen(('sin_c', sin(x)), "C", "file", header=False)
print(c_code)
double sin_c(double x) {

   double sin_c_result;
   sin_c_result = sin(x);
   return sin_c_result;

}

This isn't as pretty, but handling return inside expressions is a tricky problem, and this solves it without much work. Modern compilers should remove the variable assignment if it's unnecessary, so there shouldn't be a resulting speed loss in the code.

Cython wrapper for autowrap now works

There was a code wrapper for Cython in the codebase, but I don't think it had ever worked. It does now :) It can do everything f2py can do, and I plan on adding more useful features. In its current state it can:

  • Handle both scalar and matrix input, input-output and output arguments
  • Internally allocate output arguments
  • Pull inferred variables (such as matrix dimensions) out of the function signature
  • Create a multiple return value tuple

The last thing I want to do to make this really nice is to add support for informative docstrings. Even so, this is already usable:

from sympy.utilities.autowrap import autowrap
x, y, z = symbols('x, y, z')
mat = Matrix([x*y, Piecewise((2 + x, y>0), (y, True)), sin(z)])
func = autowrap(mat, 'c', 'cython')
func(1, 2, 3)
array([[ 2.        ],
       [ 3.        ],
       [ 0.14112001]])

For some reason the Fortran/f2py version is around 2 microseconds faster than the C/Cython code. I think this has something to do with array allocations, but I'm not sure. For larger expressions they're pretty much equal, so this shouldn't be that big of a deal. I still plan to look into code optimizations I could make in the Cython wrapper.
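If you want to reproduce the comparison yourself, something along these lines should work (a sketch using the stdlib's timeit and the mat defined above; language/backend strings as in the earlier examples):

import timeit
f_fort = autowrap(mat, 'f95', 'f2py')
f_c = autowrap(mat, 'c', 'cython')
print(timeit.timeit(lambda: f_fort(1, 2, 3), number=100000))
print(timeit.timeit(lambda: f_c(1, 2, 3), number=100000))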

Project Status

Overall, I accomplished most of what I set out to do this summer. Some things (pre-solution linearization) were nixed from the project due to changing goals. Here's a short list of what was done:

  1. General linearization methods added for both KanesMethod and LagrangesMethod.

  2. Code cleanup and speedup for KanesMethod and LagrangesMethod.

  3. Creation of msubs - a specialized subs function for mechanics expressions. This runs significantly faster than the default subs, while adding some niceties (selective simplification).

  4. Complete overhaul of codeprinters. Fixed a lot of bugs.

  5. Addition of support for matrices in code printers, code generators, and autowrap.

  6. Overhaul of Cython codewrapper. It works now, and does some nice things to make the wrapped functions more pythonic.

  7. Documentation for the above.

The Future

I had an excellent summer working for SymPy, and I plan on continuing to contribute. I have some code for discretization that I've been using for my research that may be of interest to the mechanics group. I also want to get common sub-expression elimination added to the code generators, as this kind of optimization may result in speedups for the large expressions we see in mechanics. My contributions will unfortunately be less frequent, as I need to really focus on research and finishing my degree, but I still hope to help out.

I plan on writing another post in the next few days about the GSoC experience as a whole, so I won't touch on that here. Let me just say thank you to Jason, Luke, Oliver, Sachin, Tarun, Aaron, and all the other wonderful people that have offered me guidance and support throughout the summer. You guys are awesome.

by Jim Crist at August 20, 2014 07:43 PM

August 17, 2014

Aaron Meurer

Playing with Swift and SpriteKit

I've always wanted to learn how to write native apps for iOS and the Mac as long as either has existed. However, the barrier of entry has always been too high, given that I only ever had time to play with them as a hobby. The Objective-C programming language is a bit complicated to learn, especially alongside all the memory management that you have to do (and it doesn't help that Apple has gone through several memory management schemes through Mac OS X versions). To add on to that, the Cocoa framework is huge, and it's quite daunting to even know where to get started with it.

With Apple's announcement of the Swift programming language in June, it was clear to me that the language would significantly lower the barrier of entry. The XCode 6 beta is now public (i.e., you do not need to have a paid Apple Developer account to access it), so anyone can play with Swift.

Note that I am still very new to both Swift and iOS development in general, so it's quite likely that some of the things I mention here are actually bad ideas. If you know more than I do and spot a bad thing that I am doing, please mention it in the comments.

It's also possible that some of the assumptions I've made about the Swift language or the SpriteKit framework are actually wrong. Please remember that I am still a beginner and take what I say with a grain of salt.

The Swift Language

If you don't know how to program at all, I don't know how well this will work for you. I already know several languages, especially Python, so my experience derives from that.

First, read through the Swift language guide. If you have XCode 6, you can read it interactively as a Playground. I only have read through the first part so far, which gives a high-level overview of the language.

The Swift language is actually quite easy to learn, especially if you already know a high-level language like Python. A few important things:

  • var and let seem a bit confusing. The difference is actually quite simple: var denotes a variable that can change and let denotes a variable that cannot. You could in theory just use var for everything, but let lets the compiler spot mistakes for you, and it also probably lets it make your code faster. If you intend to never change the value of a variable, use let.

  • Swift uses type inference, meaning that you usually don't need to specify types. But when you do, you do so by putting a : after the variable name, like var a: Int = 2 or func f(a: Int). The exception is the return type of a function, which uses the arrow -> (if you are familiar with Python 3 type annotations, the syntax is exactly the same), func f(a: Int) -> Int.

  • Swift uses ? after a type name to indicate that a variable could be its given type, or nil. If you are familiar with Haskell, this is like the Maybe monad. I know very little Haskell, so I don't know if Swift's implementation of ? is truly a Monad.

    Roughly speaking, in many circumstances, you don't know if a variable will actually be the given type or not. A good example of this is with dictionaries. var a: [String: Int] creates a dictionary that maps strings to integers. If you manipulate this dictionary, and then access a key from it, like a[b], there is no way for the compiler to know if that key will really be in the dictionary. If the key is in the dictionary, you will get the value of that key. Otherwise, you will get nil. Hence, the type of a[b] is Int?.

    Swift uses ! to indicate that the value is not nil, which tells the compiler to compile code that doesn't check for that case.

    For the most part, you can ignore this as well, at least when you start. Just write code as you would, let XCode add in the types for you, and only worry about types if the compiler tells you something is wrong.

  • Swift functions often require the parameters be named; for instance, you have to write CGSize(width: 1, height: 2) instead of just CGSize(1, 2). This is both for clarity (the former is much easier to read if you aren't familiar with the API for CGSize), and because Swift allows polymorphism, i.e., you can define different initializers for the same class with different type signatures. For example, CGRect can be initialized as CGRect(origin: CGPoint, size: CGSize) or CGRect(x: Int, y: Int, width: Int, height: Int). This can lead to ambiguities in some cases unless you explicitly tell the compiler which version to use.

I've found Swift to be a very strict language. I don't mean this in the sense described by this Wikipedia article. What I mean is that Swift typically only lets you do things one way. This is similar to Python's "one way to do it," except Swift enforces this at the language level.

A nice example of this is that I rarely get a warning from the Swift compiler. Just about every message I've gotten from the compiler has been an error. The difference is that the program will still compile and run with a warning. This is different from C, C++, and Objective-C, which have many warnings that the compiler will still compile with. These warnings usually are for things like an incorrect pointer type. Since there is really only one type in C, the integer (because all data in memory is just integers), the program can still run even if you mix your types up a bit.

There are also many cases where Swift seems maybe too strict about things, although it's clear that it is doing it to try to stray people away from common mistakes and antipatterns. For example, the condition of an if statement in Swift must always be of type Bool. Unlike languages like Python, things do not have implicit boolean values. if 1 is a syntax error. So is if a unless a is type Bool.

This ends up not being a big problem. The things Swift forces you to do feel like good programming practices. This is not unlike how Python "forces" you to keep your code indented correctly. It feels very different from a language like Java, where the things that you are forced to do all feel like they are there simply to make the compiler writer's life easier. And indeed, unlike Java and Objective-C and much like Python, Swift code requires very little boilerplate. There are no header files for instance.

So all said and done, I like Swift. I don't like it as much as Python (I also don't have my head wrapped around it as much as Python). But it's far better than Objective-C, and that's what matters. Frankly, my biggest gripe with it is the ubiquitous use of CamelCasing and two letter prefixing (NS, CG, SK; I don't know if there's a name for this) in the API. I adamantly refuse to do this with my own variables, because I believe CamelCase reduces readability over underscore_casing. I like the Python convention to use underscore_casing for variables, functions, and methods, and CamelCase for classes (because classes are kind of like proper nouns, and CamelCase is as close to Capitalization as possible in programming language conventions).

Learn to read Objective-C

While it is not necessary to write Objective-C any more, it is a good idea to know how to read it. The reason is that a lot of good resources on the internet are still in Objective-C (also a lot of Apple's example documentation). The API names are the same, so this mainly boils down to understanding how to convert the basic syntax to Swift. Reading the section of the Wikipedia article on the syntax of Objective-C should be enough.

For instance

[touch locationInNode:self]

would be translated to

touch.locationInNode(self)

Use XCode

If you are comfortable with the Swift language itself, you should get started with a project.

First off, you should use XCode to edit your code, at least to begin with, even if you are accustomed to using another editor. The reason is that XCode is going to do a lot of things for you which will make your life easier and reduce the complexity significantly as you get started. Once you are comfortable, you can move to another editor.

Some things that XCode will do for you:

  • Autocompletion: The best way to figure out the Cocoa APIs is to use the autocompletion. This pops up when you want to override a method in a subclass, create an object from an existing class, access an attribute of a class, or instantiate a class or call a function (remember that Swift is polymorphic, so it's useful to know all the possible ways to instantiate a class or call a function).

  • Compiler errors and warnings: Swift, being a strictly typed language, will give you a lot of useful compiler errors. It's pretty hard to write a program incorrectly from a type point of view, and have it even compile. XCode integrates this nicely, and it even offers suggestions on how to fix things every once in a while (so that you can just click the button and have it fixed inline).

  • Direct interaction with the iOS Simulator: One button will compile your code and start the simulator. If your target is Mac OS X, it will open the application.

  • Debugger: Clicking to the left of a line will set a breakpoint in the debugger. The Swift debugger seems pretty limited right now. I wasn't able to get any useful information out of the variables view when I used it. But in my experience using XCode in the past to debug C, its graphical debugger is one of the best.

  • Configuration settings: If you click on the XCode project in the files view (the root node of all the files), you get a view with all the different settings for your project. Most of these you probably won't want to change, but a few are important, like what devices and orientations you want to allow, what operating system versions you want to support, and the name and version of your project. Editing these outside of XCode requires editing an XML file, which is no fun.

Of course, any editor can potentially do these things, and I'm really looking forward to the point where I can just use Emacs to edit Swift code, as XCode as an editor is quite annoying. XCode was the editor that I used before I switched to using Emacs, and it has not gotten much better. There are still several visual glitches in the editor environment, especially with the scope coloring and syntax highlighting. You can edit the keyboard shortcuts in the XCode settings to get some things the way you like them (although I found that trying to set TAB to autoindent did not work). You can also use a tool like Karabiner (previously KeyRemap4MacBook) to enable Vim or Emacs editor shortcuts everywhere (including XCode). It doesn't help that XCode 6 is still in beta (at some point the editor backend died and all syntax highlighting and tab completion stopped working; I managed to fix it by removing a spurious ! in the code).

The iOS Simulator

One disappointing thing that I learned is that you cannot run any iOS program you write on an iOS device unless you are in the paid developer program (or if you Jailbreak and are willing to go through some hoops). The iOS developer program costs $100 a year, and since I'm not sure yet how far I am going to go with this, I am holding off on it.

The only other option then is to run on the simulator. The simulator is fine, the only issue is that there are limits to how you can simulate a touch screen on a computer with a mouse.

A few things to note about the simulator:

  • There are several things you can do with the "hardware" from the hardware menu, such as rotating the device or pressing the home button.

  • It's worth remembering the keyboard shortcut for pressing the home button, ⇧⌘H, as you can press it twice in quick succession just like on a real device to open the task manager. You can then drag your app up to completely reset it, without having to restart the simulator.

  • The retina iPad is taller than your display, even if you have a 15" retina display. So be aware that you will need to scroll up and down to see it all. Alternately, you can use a smaller device, like an iPhone, or rotate it to landscape, where it all fits.

  • The only way to do multitouch is to hold down the Option key. This will create two fingers. However, it's quite limited as the two fingers are always centered around the center of the screen. Therefore if you want to test multitouching two objects, you'll have to position them symmetrically so that you can grab them both.

Getting started with a project

The best way to start is to start a template project with XCode. I personally started with a SpriteKit game for iOS. This created a basic template "Hello World" Swift file with the basic SKScene subclass. Go ahead and compile and run this in the simulator to see what it does.

There are four important methods of SKScene which you will want to override, didMoveToView, touchesBegan, touchesEnded, and update. didMoveToView is the initializer for the scene. Anything that should be set up and appear from the very beginning should go there. touchesBegan and touchesEnded are called when a finger touches the screen and when it leaves the screen, respectively. Remember always that iOS devices are multitouch devices, so these events can happen concurrently, and there can be multiple touches happening at once. The first argument to these methods is a set of touches, which you should iterate over to perform actions (the "Hello World" example shows how to do this). Finally, the update method is called every time the scene is updated, at each "frame" essentially.

There are other methods, for instance, touchesMoved. However, I discovered that you don't actually want to use touchesMoved to do what you would think you'd use it for, namely, to move stuff around. The reason is that there is no easy way to sync up multitouches between touchesBegan (where you know what thing the finger started on) and touchesMoved to move it around. It works well for a single touch, but if you want to be able to move multiple things around at once (which I highly recommend, as it leads to a much nicer user experience), you have to do things a little differently, as I'll explain below.

Adding some objects to your environment

There are a lot of classes to create various objects of various shapes. I started with SKSpriteNode, which creates a simple square, because I wanted to play around with touch events.

I started out with four sprites (yes, it would be better to put these in an array, and probably abstract them to a method):

let sprite1 = SKSpriteNode(color: UIColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0), size: CGSize(width: 30, height: 30))
let sprite2 = SKSpriteNode(color: UIColor(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0), size: CGSize(width: 30, height: 30))
let sprite3 = SKSpriteNode(color: UIColor(red: 0.0, green: 0.0, blue: 1.0, alpha: 1.0), size: CGSize(width: 30, height: 30))
let sprite4 = SKSpriteNode(color: UIColor(red: 1.0, green: 1.0, blue: 0.0, alpha: 1.0), size: CGSize(width: 30, height: 30))

These lines go at the class level. This lets them be accessed from within any method of the class.

One thing I could not figure out how to do was how to access class variables from within other class variables. In Python, you can do

class Test:
    a = 1
    b = a + 1

But in Swift, if you do

class Test {
    let a = 1
    let b = a + 1
}

it tells you that Test.Type does not have a member named 'a' on the let b = a + 1 line.

You may have to use properties with getters and setters in this case, which I didn't feel like fooling with. The result is that I did not abstract out the CGSize(width: 30, height: 30) into a common variable.

The didMoveToView method then becomes

override func didMoveToView(view: SKView) {
    let center = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame))

    for s in [sprite1, sprite2, sprite3, sprite4] {
        s.position = center

        self.addChild(s)
    }
}

The self.addChild is the most important method here, as it actually puts the sprite in the main view. If you forget this line, none of the sprites will show up.

If you run this, you will only see the yellow box, because you put them all on top of one another in the center of the view.

Adding Basic Physics

We could change the positions so that they do not overlap, but the option I went with was to play around with the physics a little. SpriteKit has a nice 2D physics engine built in, and it's quite easy to use.

So my final didMoveToView was

override func didMoveToView(view: SKView) {
    /* Setup your scene here */
    let center = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame))

    for s in [sprite1, sprite2, sprite3, sprite4] {
        s.position = center

        var physics_body = SKPhysicsBody(rectangleOfSize: CGSize(width: 30, height: 30))

        physics_body.affectedByGravity = false
        physics_body.allowsRotation = false

        s.physicsBody = physics_body
        self.addChild(s)
    }

}

For each sprite, I create an SKPhysicsBody with the exact same size as the SKSpriteNodes (there's probably a more direct way to do this), and attach it to that node. The affectedByGravity property is important. If you don't set it to false, all the objects will fall off the bottom of the screen. I disabled allowsRotation because I wanted my squares to stay upright. Otherwise, when the squares hit one another they will rotate in space.

Now SpriteKit will prevent the squares from overlapping with one another, even if we put them on top of each other as we have done.

So now when we start the simulator, we see

Making the squares movable

Now, let's make it so that we can move these squares around. The correct way to do this took me some time to figure out. I finally got some hints from this site.

The key thing here is that the UITouch objects remain the same objects for the duration of the touch. Their position is updated when the touch moves. Hence, you just need to associate each touch with the node that was touched when the touch began, and move the node to the position of that touch with each update.

To start, we will create a dictionary on the class mapping touches to nodes

var selected: [UITouch: SKNode] = [:]

Then, in the touchesBegan method, map every touch to the node that it touches.

override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    /* Called when a touch begins */

    selected = [:]
    for touch: AnyObject in touches {
        let location = touch.locationInNode(self)

        selected[touch as UITouch] = nodeAtPoint(location)

    }
}

The as UITouch part is needed because the compiler only knows that touch is AnyObject. This was one of the things that was helpfully suggested by the compiler, so I did not really need to know what I was doing to get it right.

Note that even if you touch the background behind the squares, you are still touching a node, namely, the GameScene node itself (the node for the class you are working on). This is a very important observation, as it will tell us how to get the right position for the node when we update it. It also means that we should keep track of which nodes we actually want to be moved by the touch. Trying to move the GameScene node is ignored, and leads to a lot of console logs, so we should avoid it.

Next, let's write the touchesEnded method. This method is simple. If a touch ends (the finger is removed from the screen), we should remove it from the selected dictionary.

override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
    for touch: AnyObject in touches {
        selected[touch as UITouch] = nil
    }
}

To delete an item from a dictionary in Swift, just set it to nil.

Now, finally, we need to write the update method to move the node to the current position of the touch.

The simplest way to do this is

override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    for (touch, node) in selected {
        if !contains([sprite1, sprite2, sprite3, sprite4], node) {
            continue
        }
        let location = touch.locationInNode(self)
        node.position = location
    }
}

Note that we only modify the position for the four sprite nodes.

The touch.locationInNode(self) part took me a long time to figure out. There are other methods, like touch.locationInView(nil), but this does something very strange where the horizontal axis is doubled (moving the touch one inch moves the object two inches) and the vertical axis is inverted. If someone knows what was going on there, please let me know.

Modifying the position directly works, but it's nice to play around a little bit with a third thing from SpriteKit: actions.

What we will do instead of setting the position of the node is to tell SpriteKit to move the node there in a certain amount of time. If we make this time small enough, like 0.01 seconds, it will appear to act exactly the same. If we up this time, there will be a smooth "lag" where the node catches up to the touch. Because the movement always happens in the same amount of time, it will move faster if the finger is farther away. This gives the squares a nice "frictioney" effect with some springiness to it, which is quite nice.

override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    for (touch, node) in selected {
        if !contains([sprite1, sprite2, sprite3, sprite4], node) {
            continue
        }
        let location = touch.locationInNode(self)
        let action = SKAction.moveTo(location, duration: 0.1)
        node.runAction(SKAction.repeatAction(action, count: 1))
    }
}

There are many other actions we can perform, like rotations and color changes.

Conclusion

Here is an example of the movement. You can see it works even with multitouch. You can also see the collision physics cause the other squares to move out of the way when another square hits them.

(Video: http://asmeurer.github.io/SpriteKit-Example.mp4)

Here you can see the movement lag caused by using SKAction.moveTo with duration: 0.1 (note that the mouse itself jumps a bit at the beginning, but this is due to lag in the recording).

(Video: http://asmeurer.github.io/SpriteKit-Example-2.mp4)

I have uploaded the full code to GitHub.

This isn't exactly a "game", but it does lay down the foundations for what you would need to write a game in Swift using SpriteKit. At least for me, it shows me the technical basics of how to write some games that I had thought about, which mostly involve making shapes and moving them around the screen.

by Aaron Meurer at August 17, 2014 04:31 PM

August 15, 2014

Thilina Rathnayake

More work on Sparse Matrices

Hi All,

During this week I completed more functionality for CSR matrices. I mainly implemented a generalized binary operation between two matrices, similar to the SciPy function csr_binop_csr_canonical. This method applies a given binary function binop to each pair of corresponding elements of the two input matrices, i.e. the (i, j)-th entry of the output matrix is equal to binop(a, b), where a and b are the (i, j)-th entries of the input matrices respectively.
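For readers unfamiliar with the technique, here is an illustrative Python sketch of the sorted row-merge at the heart of such a CSR binop (the real implementation is C++ inside CSymPy; the names and triple-array layout below are just the standard CSR convention, not the actual code):

def csr_binop(binop, ap, aj, ax, bp, bj, bx, n_rows):
    # Each matrix is a CSR triple (indptr, indices, data) with sorted
    # column indices; missing entries are treated as zero.
    cp, cj, cx = [0], [], []
    for i in range(n_rows):
        ka, kb = ap[i], bp[i]
        while ka < ap[i + 1] and kb < bp[i + 1]:
            if aj[ka] == bj[kb]:
                cj.append(aj[ka]); cx.append(binop(ax[ka], bx[kb]))
                ka += 1; kb += 1
            elif aj[ka] < bj[kb]:
                cj.append(aj[ka]); cx.append(binop(ax[ka], 0))
                ka += 1
            else:
                cj.append(bj[kb]); cx.append(binop(0, bx[kb]))
                kb += 1
        while ka < ap[i + 1]:
            cj.append(aj[ka]); cx.append(binop(ax[ka], 0)); ka += 1
        while kb < bp[i + 1]:
            cj.append(bj[kb]); cx.append(binop(0, bx[kb])); kb += 1
        cp.append(len(cj))
    return cp, cj, cx

(A canonicalizing version would additionally drop any explicit zeros that binop produces.)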

Also, I implemented get() and set() methods for the CSRMatrix class. It took me a while to get this completely right, as I was discarding some of the possible cases.

I originally planned to start writing Python wrappers, but I couldn't do it as I was a bit busy with a project at my university. I hope to get started on this in the upcoming week. Although GSoC is officially over, I will work on the project for two additional weeks, as I took a two-week break during the programme.


by Thilina Rathnayake at August 15, 2014 01:49 PM

August 12, 2014

Soumya Dipta Biswas

GSoC 2014: Week 10 and 11

Hello everyone,

This GSoC post has been long pending, but to be honest the last two weeks have simply been so hectic that I haven't managed to find the time to write a lot of code, or blog about it. One of the major things I have worked on is greatly improving the resolution code I wrote earlier. This revision fixes a small bug and greatly improves the speed of the resolution process. Initially, if a clause was not entailed by the KB, it took almost forever for the method to return False; in the scenario where it was entailed, it was still performing lots of useless computations. This has been completely removed from the new resolution, and the method quite efficiently gives the correct value. The other major task has been to fully integrate the Constants class. Now, any non-boolean object will internally get converted to a Constant, so the user can simply put in strings and numbers and the system will still play nice with them. Besides this, I have finished allSAT (all models for a given formula). I will be sending out a PR for this as soon as I finish testing it properly. I think that is all for now.
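As a side note for the curious, one standard way to enumerate all models is to repeatedly call the SAT solver and add a blocking clause for each model found. This is just an illustration of the technique using SymPy's existing machinery, not the actual code in my PR:

from sympy.logic.boolalg import And, Not, Or
from sympy.logic.inference import satisfiable

def all_models(expr):
    # Yield every satisfying assignment of expr.
    while True:
        model = satisfiable(expr)
        if not model:
            return
        yield model
        # block the model just found so the next call produces a new one
        expr = And(expr, Or(*[Not(s) if v else s for s, v in model.items()]))

from sympy.abc import x, y
for m in all_models(x | y):
    print(m)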

Bye!!!

by SD at August 12, 2014 03:09 PM

August 11, 2014

Sudhanshu Mishra

GSoC'14 progress: Week 12


Last week I worked on utility functions, which have been merged into master.
Currently I am working on converting my incomplete PR related to geometrical optics from 3D to 2D. Creating these classes in 3D would make them difficult to use/extend. I referred to some books and never saw much use of 3D in their illustrations. Depending on the use cases, we can consider adding it later, after GSoC. As of now the optics module cannot be called complete, and hence I'll be working on it post GSoC.
A few other things which I would like to complete this week are as follows:
  • Interference
  • Diffraction
  • IPython notebooks for examples

by Sudhanshu Mishra (noreply@blogger.com) at August 11, 2014 11:27 AM

Thilina Rathnayake

Sparse Matrices

Hi All,

I am writing a blog post for GSoC after three weeks. I couldn't work on the project for two weeks as I participated in a competition. However, I am back and have started working again. Now I am mainly concerned with finishing my work on sparse matrices. Then we can write some wrappers and do some benchmarks.

We decided to use SciPy's sparse matrix code and I have ported most of the important functionality. I created a CSRMatrix class and a method to create a CSRMatrix from a normal COO matrix. I ported the methods for testing the canonical form and for sparse matrix multiplication. Also, I implemented three methods mainly to be used in extracting diagonal elements and scaling.

I am going to implement binary operations with CSR matrices during the coming week and also want to get started with writing wrappers so we can use the Matrix classes from python and SymPy.


by Thilina Rathnayake at August 11, 2014 08:08 AM

Sushant Hiray

GSoC Week 12: Wrapping Up

This week was primarily focussed on adding the remaining special classes I had proposed in my proposal.

The special function module primarily included:

  • Zeta function
  • Gamma function
  • Tensor function

Progress

Continuing with the ARB interfacing from the previous week, PR 265 was also meant to add the Zeta Module. The Zeta Module primarily comprises:

  • Riemann-Zeta
  • Dirichlet_Eta

This PR was merged after Ondrej reviewed the ARB interfacing and its use in Bernoulli numbers and the Zeta Module.

After the Zeta Module, I sent a pull request, PR 267, to implement the Tensor Module. The Tensor Module primarily comprises:

  • KroneckerDelta
  • LeviCivita

KroneckerDelta makes use of the Assumptions module to define some more properties, such as is_above_fermi and is_below_fermi. These were not implemented, as we do not have an assumptions module yet in CSymPy.

LeviCivita accepts a variable number of arguments, so it is implemented via a vector arg (vec_basic).
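For reference, here is how the SymPy versions of these two functions behave (a quick Python illustration of the functions being ported; the CSymPy behavior may differ in details):

from sympy import KroneckerDelta, LeviCivita, symbols

i, j = symbols('i j', integer=True)
print(KroneckerDelta(i, i))   # 1
print(KroneckerDelta(1, 2))   # 0
print(LeviCivita(1, 2, 3))    # 1, an even permutation
print(LeviCivita(1, 1, 3))    # 0, a repeated index
print(LeviCivita(i, j, 3))    # stays unevaluated for symbolic arguments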



The final module to be added is the Gamma Module. It comprises:

  • Gamma
  • Upper Gamma
  • Lower Gamma

PR 269 implements the Gamma Module. Currently only Gamma is implemented.



The Week Ahead

Complete the 2 small classes and get them reviewed!


That's all for now :) Will get back next week!

August 11, 2014 07:45 AM

August 10, 2014

Harsh Gupta

Week 11 and Week 12: Wrapping up the project

The GSoC is about to end. We have the suggested deadline tomorrow and the firm deadline next week. We are wrapping up the project now: cleaning up the code, writing the documentation and turning the project into a usable state. Since the new solvers cannot be called complete, I have to make sure that the next person working on them can pick them up easily. I plan to come up with a post summarizing the work that was done, the challenges we faced, the things that still need to be done, and how they can be tackled.

by Harsh Gupta at August 10, 2014 12:09 PM

Kundan Kumar

GSoC week 11-12: Concluding Ideas

Phew, that was close. I just had a full week of written tests by recruiting companies. Well, not good for me. I suck at electrical, and that too when we have to study ancient electrical subjects which no longer have any use in this modernized world. My college, maybe all Indian colleges, lags behind in electrical subjects. Though I always find online electrical courses at U.S. colleges fascinating. Oh, but I am in India, can't help it. Let's see how well I do in the GRE and TOEFL; maybe after getting into a better college I will get to discover more.

Oh, I got driven by emotions; this is my GSoC blog, so I will start by concluding the work I have done till now. Before I conclude, I will admit I wasn't doing much work over the last few days. In my 10th and 11th weeks I had my checksysodesol PR merged, while two PRs, for the docs and the nonlinear one, are still pending. In the beginning of next week I will complete those and get them merged. This week I started the doc PR to complete all documentation, and I will add a few tests to increase the coverage report of ode, which is at 85% now. I will discuss with Sean and Tim what modification the nonlinear one requires.

My GSoC work is on systems of ODEs. This included linear systems of first-order ODEs with two, three and n equations, linear systems of second-order ODEs with two and three equations, and non-linear systems of first-order ODEs with two and three equations. In order to solve these I modified the dsolve function to incorporate solving such systems, made a new function classify_sysode to classify these systems of ODEs, and also made a function checksysodesol that checks solutions by substituting them into the equations and simplifying to get zero.
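A small example of the resulting workflow (a sketch; the exact solver output depends on the SymPy version):

from sympy import Eq, Function, dsolve, symbols
from sympy.solvers.ode import checksysodesol

t = symbols('t')
x, y = symbols('x y', cls=Function)
# a linear system of two first-order ODEs
eqs = [Eq(x(t).diff(t), 2*x(t) + y(t)),
       Eq(y(t).diff(t), x(t) + 2*y(t))]
sol = dsolve(eqs)
print(sol)
print(checksysodesol(eqs, sol))   # (True, [0, 0]) if the solution checks out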

As this will be the last official blog post for GSoC, as Aaron mailed, I would like to say a few things here; of course we will still be on gitter. This is the best work experience I have had in life. Working continuously without having any care of anything else just feels awesome. I learned lots and lots of stuff from Sean, Tim, Aaron, Smith, Ondrej and some from Harsh and Sudhanshu too. I only started working on SymPy from 27 Feb, which was too late :), and back then I didn't know git, GitHub, open source and many more things; I just knew that continuously working hard paves the way to success, but not without you guys. Open source is a great place to learn new stuff, made even better by the interaction with lots of people from other places. I will always be contributing to open source. It has lots of free software which would otherwise be hard to buy in college life. This is going to be my most memorable summer.

Thanks you guys!


by Kundan at August 10, 2014 06:40 AM

August 03, 2014

Avichal Dayal

I returned to my college last week, and as a result I couldn't do much for a few days. Hence also a late blog post.

I tried solving the bugs present in my 3 PRs.

In the formal_series PR,
I implemented the necessary `_eval_` methods:
- _eval_derivative
- _eval_integral
- _eval_as_leading_term
- as_ordered_terms
There is still a problem in test_args, which fails because the args of the Stream class are not Basic instances; they are generator objects.
I also added more test cases.

In the special asympt PR,
I have a few bugs which I haven't been able to solve.
Suppose _eval_nseries returns an expansion of the form
s = (1 + 1/x + 1/x**2 + ... + O(1/x**6, (x, oo)))*(exp(-x)/x); then it fails.
getO fails on s as it is not an Add instance. Then it tries to append the appropriate Order term, so
s + O(1/x**n, (x, oo)) = O(1/x**n, (x, oo)), which is wrong.

I'll try fixing all these bugs in the last few days and get all the PRs ready for merging.

by Avichal Dayal (noreply@blogger.com) at August 03, 2014 08:03 PM

Sushant Hiray

GSoC Week 11: Interfacing ARB

This week was primarily focussed on interfacing the ARB library with CSymPy.

Arb is a C library for arbitrary-precision floating-point ball arithmetic, developed by Fredrik Johansson. It supports efficient high-precision computation with polynomials, power series, matrices and special functions over the real and complex numbers, with automatic, rigorous error control.

Progress

I opened PR 265 to interface ARB with CSymPy. There were some initial hiccups with using ARB from C++. I opened up an issue arb/#21 which was very promptly fixed by @fredrik-johansson with a minor patch.

Since we were using an earlier release version 2.1.0 without this patch, @fredrik-johansson willingly released a new version 2.2.0 so that we could interface immediately.

I spent a fair amount of time tweaking CMake to interface ARB. I read quite a bit about it and managed to get a basic version running. But there were quite a few issues:

  • flint and arb were not getting linked
  • arb was not searching /usr/local/include/flint for the appropriate flint header files.



I put in some minor hacks to make sure arb was linked and able to find the files appropriately. Since Ondrej was out this week, he wasn't able to suggest any better ways yet. Isuru Fernando helped me out with a much cleaner way to link the arb and flint libraries.

I also added the Zeta Module, comprising the zeta and dirichlet_eta functions. The zeta function needs harmonic, which isn't implemented yet, so for now only minor simplifications are made.

The Week Ahead

Get the current PR merged, add a new PR for the Tensor Module.


That's all for now :) Will get back next week!

August 03, 2014 02:22 PM

August 02, 2014

Sachin Joglekar

GSoC Week 11: A major bug fix, and printing

Jim brought my attention to a very crucial bug in my work this week. SymPy now automatically detects folders to test by looking at the __init__.py files. That is, a given folder in the SymPy code directory will _only_ be tested if it has an __init__.py. Since I hadn't put any such file in _sympy/vector/tests_, those tests were being skipped till now! Thank God this got fixed.

I started out this week with the seemingly 'easy' task of implementing different kinds of printing functionality for sympy.vector. The basic methodology is pretty simple, if you just follow the documentation in the _sympy.printing.printer_ file. It will tell you all you need to know about either implementing a new printer, or adding custom printing code for your class to an existing printer.
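As a minimal sketch of that second, per-class mechanism (a toy class of my own, not the actual BaseScalar code): a class can define a _latex method, which the LaTeX printer will call when it encounters an instance.

from sympy import Symbol, latex

class MyScalar(Symbol):
    # hook consulted by the LaTeX printer for instances of this class
    def _latex(self, printer=None):
        return r'\mathbf{{x}_{N}}'

print(latex(MyScalar('N.x')))   # prints \mathbf{{x}_{N}}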

Some things I had to improve/work around:

1. BaseScalars had to be handled differently from BaseVectors, since BaseScalars are part of other Exprs (they may appear in measure numbers of vectorial expressions), which have their own dedicated printing methods. Hence, I had to implement the _latex and _pretty methods on BaseScalar itself.

2. The pretty _as well as_ latex printing for _both_ Vectors and Dyadics is handled with a single _print_BasisDependent method in the respective printers.

3. I used the printing code in _sympy.physics.vector_ as a reference, though quite a few things had to be changed keeping in mind the inheritance from Expr, the different structure and nature of the args, etc.
I did improve on a few things: _physics.vector_'s pretty printing messes up when dealing with vectors whose measure numbers have varying 'heights'.

Overall, implementing printing was quite a learning experience and involved some tricky coding, compared to my initial impression of it being a boring and tedious job.

Well, I have implemented all the required methods and tests (all of which pass). I have pushed them to my third GSoC PR, so that's in line. Hopefully, the dyadics PR will soon show all tests passing, after which Jason or I will get it merged. It will add the OOP structure and the dyadic code, as well as fix some crucial bugs.

A demo of the new printing implementations for a tedious vector expression-

Assume N is a CoordSysCartesian instance. Then,
(a**2 + b)*N.i + (Integral(f(b)))*N.k pretty-prints as


⎛ 2    ⎞ N_i + ⎛⌠         ⎞ N_k
⎝a  + b⎠       ⎜⎮ f(b) db ⎟
               ⎝⌡         ⎠

and LaTeX-prints as:


(a^{2} + b)\mathbf{\hat{i}_{N}} + (\int f{\left (b \right )}\, db)\mathbf{\hat{k}_{N}}


by srjoglekar246 at August 02, 2014 09:35 PM

Akshay Narasimha

GSoC Week 10-11

I could not post last week as my college started and I had to travel a bit. I did not do much work last week as per my proposal's schedule, but ended up spending a lot of time refactoring the code base of the Geometry module (my work and the existing code). Here are some links to the PRs reflecting these changes.
Apart from that, I added a few methods to the Parabola class, which is almost complete apart from the tests. I will add those this weekend and send a commit.

The following week I plan to work on the GeometryResult class and will try to get the pending PRs merged.

That's all for now, Until then cheers!

by Akshay Narasimha (noreply@blogger.com) at August 02, 2014 05:47 AM

Jim Crist

GSoC Week 10 & 11: Bogged down in details

I missed my post last week due to my research for grad school all going to hell at the worst possible time :(. There wasn't much to report on last week anyway, so I'm not too perturbed. Turns out even with PRs done and tested, it still takes a long time to get them merged and have everyone agree on them. Fortunately I made up for it this week; as of now I am officially working on code generation! This is behind schedule relative to my original timeline, but that should have been expected, as per Hofstadter's law:

Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

Things are moving ahead now, and I have some hope left that I can accomplish (most of) everything I set out to do.

Project Status:

I set out to accomplish a number of things this summer. Here's the status of each project goal:

  • General Linearization Form:

    Done!

  • Linearization methods for Kane's and Lagrange's Methods:

    Done! Lagrange stuff got merged last week.

  • Documentation for the above:

    Done? This PR is waiting for review, but I think it's good.

  • Pre-linearized solution methods:

    Nixed from the plan due to lack of interest.

  • Code Generation:

    In Progress...

I also accomplished a few other things, that I found necessary for my project:

  • Refactored KanesMethod and LagrangesMethod:

    This resulted in more readable, pythonic code, and also a speed/memory improvement. PR for this is still awaiting review.

  • Faster subs function for mechanics:

    My msubs implementation is several orders of magnitude faster than subs for the kinds of expressions seen in mechanics (even without the benefit of the caching that subs has). This is in a PR awaiting review.

  • Soooo much work on the nan and oo issue:

    Still not solved for all cases... :(

TODO:

There are only 3 weeks left of GSoC! In my last remaining weeks, here's what I plan to get done:

  • Get the Bicycle Example working:

    After my work everything runs faster, results in smaller, more tractable expressions, and uses less memory. Except for the bicycle example. For some unknown reason I cannot get this thing to result in anything except nan. This is a regression in performance (even though everything else runs better), and needs to be solved.

  • Code generation:

    I've already got some stuff working, and it's really exciting. More on this below.

  • Get all my current stuff merged:

    All that work needs to get into Sympy. As not everyone else is being paid to do this, it can take some time and effort to get things through the review process and into master, but I have hope that my remaining contributions will eventually make it in.

I think I can do it, but it'll be a stretch.

Code Generation

Sympy currently contains some facilities for code generation, but they lack support for the matrices that are necessary for working with dynamics problems. I hope to remedy that, as well as to make general improvements to the entire codegen module.

Code generation in sympy has three levels:

  1. Code Printers ccode, fcode, and the like

    These are printers that know how to print simple sympy expressions using functionality and syntax found in that language. For example, ccode will print exponents using pow, which is found in the math library in C. These printers don't have any knowledge of functions, multiple statements, or header files. They simply print a single expression out on one line.

  2. The codegen submodule

    This submodule contains facilities for representing generalized routines, and generating functions in various languages (currently C and FORTRAN) that can be compiled as a library without any changes. They know about function and variable declarations, header files, library imports, and multi-line statements. However, they have no idea how to make this generated code work with python.

  3. Code wrapping, usually accessed through autowrap

    This is where the functionality for wrapping the generated code lives. Using the functionality provided here, one can compile and wrap generated code, and then call it from python. The autowrap function is the main entry point, allowing for all 3 steps to be done in one call.

The first thing I wanted to fix was getting code generation to work with matrices and matrix expressions. This turned out to be harder (and more confusing) than I expected. There is currently support for a "matrix like" object named sympy.tensor.IndexedBase, but I really don't understand the purpose behind it. Reading through the code and examples, though, it seems to be for representing indexed loop operations in a concise form. This unfortunately has nothing to do with the indexed types (matrices) that I plan on implementing.

I spent a long time reading through the code and playing around with it using pdb trying to figure out the control flow in the codegen function, and am still a little lost. Most of what's there seems to be for supporting the Indexed operations. After some time trying to bend them to work for matrices, I changed plans and now am supporting Matrix and MatrixExpr types for matrix operations only. Indexed types can be used elsewhere, but they shouldn't be used for representing matrices with expressions inside them.

I currently have this "working", but am not happy with it yet. The current layout of the module made for some hacky work adding in matrix support. I plan on doing some refactoring to make this implementation cleaner. Currently, on my codegen branch the following is supported:

  • Generating C code for a matrix with expressions in each element:

    Matrices are set as input-output type arguments, and are modified in place before being returned.

  • Passing in a MatrixSymbol as an argument:

    Here the plan is to use matrices to pass in a large number of arguments. You can think of this kind of like a vector. There's another symbolic vector type in Sympy as well (DeferredVector). I may end up supporting it, but I'm not really sure what it's for. In its current implementation, the following works:

    q = [q1, q2, q3, q4, q5]
    q_vec = MatrixSymbol('q', 5, 1)
    sub_dict = dict(zip(q, q_vec))
    # Replace each q_i with the matching element of q_vec
    expr = msubs(expr, sub_dict)
    # Generate a function that takes a numpy array (q) and returns expr
    # This works if expr is an expression, or a matrix
    func = autowrap(expr)
    

After I clean this up, I plan to add support for:

  • Common Subexpression Elimination (cse):

    Even though modern compilers do this already, experimentation shows that the large expressions generated in mechanics benefit from the generated code having cse performed. This will be implemented as a boolean kwarg (default False). When True, sympy's cse function will be run on the expression, and the code for each subexpression will be generated, followed by the final expression. I actually don't think this will be too difficult to implement, and it should give some speed improvements in the compiled code (at the cost of slower generation); a minimal illustration of the idea follows after this list.

  • A ctypes code-wrapper:

    Currently the only code wrappers supported are f2py and cython, neither of which is in the standard library. While the wrappers generated with those functions may be more robust, a ctypes wrapper is also possible, with the added benefit that ctypes is in the standard lib.

  • Support for matrix expressions:

    In an ideal world, I'd implement the excellent work done by Matthew Rocklin, discussed in this video from SciPy 2013. The idea here is that we have some knowledge about each of the matrices involved in an expression (for example $A^{-1} B$). We may know that A is positive definite, or symmetric, or upper triangular, etc... For each case, there may be a faster inversion routine that we could take advantage of rather than using a one-size-fits-all inverse function. As I don't have time to implement support for all possible operations, and the many BLAS/LAPACK routines that support them, I'll focus just on the inverse, as it's commonly found in expressions in mechanics. The thought is, we should be able to run:

    func = autowrap(Inverse(M)*F)
    

    And have code generated that solves the expression in a fast manner, without having to symbolically find the inverse of M and combine it with F into one matrix beforehand.
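
To illustrate the cse item from the list above, here is roughly what sympy's cse returns (the x0/x1 names are whatever cse chooses):

from sympy import cos, cse, symbols

x, y = symbols('x y')
expr = cos(x + y)**2 + (x + y)*cos(x + y)

# cse returns (replacements, reduced_exprs)
replacements, reduced = cse(expr)
print(replacements)   # e.g. [(x0, x + y), (x1, cos(x0))]
print(reduced)        # e.g. [x0*x1 + x1**2]

The generated code would then compute each replacement once before the final expression.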

Of course this is a wishlist, and it's unlikely all of this will be accomplished in the next 3 weeks. Still, I plan to keep supporting sympy after my GSoC ends, so if it's not done by then it will eventually get there.


Other exciting news of the week:

I got accepted to the GSoC reunion at the end of October! As this is the 10th annual GSoC, Google is throwing a big reunion shindig for past and present students. As there are lots of us, only a few were chosen based on a lottery, and I made it through! I'm very excited to meet other students that completed the program, listen to some interesting talks, and see the GooglePlex. I also bought my tickets to get there a day early so I have some time to explore the bay area. Last time I was out there I was 14, and I didn't get to see much of the area. If you also got accepted or live out there and would be interested in meeting up, let me know! I'll be in San Jose/San Francisco October 23-26.

by Jim Crist at August 02, 2014 01:44 AM

July 29, 2014

Tarun Gaba

GSoC 14: First Week!

First week of GSoC 14 has ended. It has been a week full of discussions and brainstorming over how to handle the project and the collaboration. Most of the time was spent in taking crucial design decisions. As decided, I will be publishing these weekly blog posts in A.O.I format (Accomplishments, Objectives and Issues).

Accomplishments:

The main accomplishments of this week involved finalizing a stable API for the generic visualizer. Discussions were held with Adam and it was decided that a fork of MGView will be used for developing the new visualizer. It will have a UI look similar to MGView, but with additional features pertaining to PyDy and related enhancements as well. Apart from that, another main aim was to flesh out an API for the visualizer. The generic visualizer will be made up of the following modules:

  • Parser Module: to parse the JSON and save it in JS objects/variables.
  • SceneGenerator Module: to take the relevant data and information from the parsed JSON and create a scene on the canvas.
  • SceneEditor Module: using GUI controls to edit the scene and save it in a JSON file.
  • ParamsEditor Module: using GUI widgets to modify the simulation parameters, and send/save them as relevant.

Objectives:

The objectives of the upcoming week are:

  • To develop the Parser Module to be able to consume JSON from both PyDy and MG and parse it into relevant Javascript objects.
  • To develop methods on the PyDy side to generate the JSON in the form that MotionView can consume.
  • To test some benchmark examples to check this workflow: output from PyDy --> JSON --> consumed by Parser Module.

Issues:

Since actual coding work has not started yet, there are no technical issues encountered so far.

I will keep this blog updated with the regular advancements in the project. Happy Coding!

July 29, 2014 04:00 PM

Harsh Gupta

week 10: Radical equations

This week I worked on the solvers for the equations with radicals. Suppose you have to solve

$$ \sqrt{x} + x - 2 = 0 $$.

then you have to move the radical to the right hand side of the equation.

$$ x - 2 = - \sqrt{x} $$

and then square both sides

$$ x^2 - 4x + 4 = x $$

Now the equation is a polynomial in \( x \) and can be solved with the usual polynomial solving methods. Note that squaring both sides produces some extra solutions, so we have to check all the solutions obtained against the original equation. If there is more than one radical involved we may have to apply the method recursively. For example, in solving \( \sqrt{2x + 9} - \sqrt{x + 1} - \sqrt{x + 4} = 0 \) the method will recurse twice.
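
The whole procedure for this example can be sketched with standard SymPy calls (the final check mirrors the warning about extra solutions):

from sympy import solve, sqrt, symbols

x = symbols('x')
# Squaring x - 2 = -sqrt(x) gives x**2 - 4*x + 4 = x, i.e. x**2 - 5*x + 4 = 0
candidates = solve(x**2 - 5*x + 4, x)                    # [1, 4]
# Squaring can introduce spurious roots, so verify against the original equation
solutions = [r for r in candidates if sqrt(r) + r - 2 == 0]
print(solutions)                                         # [1]; x = 4 is extraneous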

To implement the method I tried a pattern matching approach. The squaring part is easy; the tricky part is identifying which part to move to the right hand side. First I tried to match the expression against the form sqrt(p) + q, but it failed even for cases like 4*sqrt(x) + x - 2 because no pattern matched. I had to use a*sqrt(p) + q with the condition that the expression matched to a shouldn't be zero. Now I can simply move the expression matched with p, and the terms multiplied with it, to the RHS and square both sides.

Notice that this method for solving sqrt equations can work with any radical equation; if it were a cube root instead of sqrt I would just have to cube both sides. OK, so how do I match that expression? I tried pattern matching with assumptions on the wild symbols but it doesn't work. I tried to match with something like a*p**Rational(1, m) + q but this also didn't work out because Rational(1, m) raises a TypeError no matter what the assumptions on the variable are. There is a proposal for a new pattern matcher; I have not closely checked the details, but it will be able to work with assumptions. You can see the proposal on the wiki here, and if it is implemented then things will be good, but I can't wait for it. I had no other option but to check term by term for a rational power. Here's the implementation:

from sympy import Mul, Pow, Rational, S, Wild

def _has_rational_power(expr):
    # Match expr against a*p**q and look for a fractional exponent q
    a, p, q = Wild('a'), Wild('p'), Wild('q')
    pattern_match = expr.match(a*p**q)
    if pattern_match is None or pattern_match[a] is S.Zero:
        return (False, None)
    elif isinstance(pattern_match[q], Rational):
        # q = r/s with s != 1 means expr contains an s-th root
        if not pattern_match[q].q == S.One:
            return (True, pattern_match[q].q)

    # Otherwise recurse into the matched coefficient if it is itself a power
    if not isinstance(pattern_match[a], Pow) or isinstance(pattern_match[a], Mul):
        return (False, None)
    else:
        return _has_rational_power(pattern_match[a])

by Harsh Gupta at July 29, 2014 12:09 PM

July 28, 2014

Sudhanshu Mishra

GSoC'14 Week 10: Adding more to utils

This week I worked on adding more utility functions in optics. Some of them are yet to be added. I'll send a pull request once I write tests for them.
Here's the link to the code pushed till now:
I also worked on one of my pending PRs (https://github.com/sympy/sympy/pull/7686), parameterizing a 3D circle. This will make the implementation a bit easier.
Tomorrow I'll be going back to college.
That's all for now. Cheers!

by Sudhanshu Mishra (noreply@blogger.com) at July 28, 2014 09:05 AM

Sachin Joglekar

GSoC Week 10: Polishing off the main vector framework

This was a tiring week, with me being sick with a bad case of sinusitis for 3 days. Phew. Even then, I did manage to get quite a bit of work done in the time I could manage to sit in front of the computer without getting a headache.
Some main points of progress -

1. I finished off the Orienter classes, with a nice OO-structure, along with docs and tests. I basically changed the API of the orient_new


by srjoglekar246 at July 28, 2014 07:25 AM

July 27, 2014

Sushant Hiray

GSoC Week 10: Complex Complexions Resolved

After a significant amount of work in the past couple of weeks, I’ve finally completed the Complex Module.

Progress

This week I merged PR 248, which covered almost the entire complex module.

After merging this PR, complex numbers and the imaginary unit I can be used in sync with the other existing modules in CSymPy. @certik refined the Python wrappers to integrate the Complex Module. Here are some of the things which can be done using the CSymPy python wrappers:

In [1]: from csympy import *

In [2]: I
Out[2]: I

In [3]: I**2
Out[3]: -1

In [4]: I+1
Out[4]: 1 + I

In [5]:  (I+1)**2
Out[5]: 2*I

In [6]: (1+I*2)**(-1)
Out[6]: 1/5 - 2/5*I

In [7]: x = Symbol('x')

In [8]: x+I*x
Out[8]: (1 + I)*x

In [9]: ((2*I + x)*x).expand()
Out[9]: 2*I*x + x^2

In [10]: ((x + I)**2).expand()
Out[10]: -1 + 2*I*x + x^2



The other major work this week was on the expansion of integral powers of a complex number. Pull 264 was created to add this functionality. We used a particularly simple yet efficient algorithm, binary exponentiation, to compute the integral power.

def powu(x, n):
    # Binary exponentiation: scan the bits of n from least significant
    # to most significant, squaring p at each step and multiplying it
    # into the result r whenever the corresponding bit of n is set.
    mask = 1
    r = 1
    p = x
    while (mask > 0 and n >= mask):
        if (n & mask):
            r = r*p
        mask <<= 1
        p = p*p
    return r
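
The same loop runs unchanged in Python, so here is a quick sanity check of the bit-scanning idea (my own illustration, not part of the PR):

# 13 = 0b1101, so the loop multiplies in p, p**4 and p**8
assert powu(3, 13) == 3**13
assert powu(7, 0) == 1    # no bits set, so the result stays 1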



The current expand is a bit slower than what we were expecting. We will be looking to improve the speed in the coming days.

The Week Ahead

As per the proposal, I will be implementing the Zeta Function.


That's all for now :) Will get back next week!

July 27, 2014 07:05 PM

Kundan Kumar

Week 10: Test cases in checksysodesol

I am finally in college, after a long journey from home. It's my final year now, and I have a very tight schedule.

I have updated checksysodesol with test cases. The test cases include equations from first and second order linear and non-linear systems of ODEs. I debugged a few methods for systems of ODEs, found out through checksysodesol.

The updated func method PR is merged, which closes issues #7736 and #7723. The PR for non-linear systems of ODEs and the PR for checksysodesol are open.

Today I will add all the test cases to checksysodesol and start a PR for non-linear systems of ODEs of 2nd order.


by Kundan at July 27, 2014 08:06 AM

July 25, 2014

Soumya Dipta Biswas

GSoC 2014: Week 9

Hello Everyone,

I finished up the changes remaining in FOL and will send in the almost-final pull request this week. This was only a minor part of the week, however. Before jumping on to the implementation of faster SAT, I wanted to revisit faster CNF conversion. I have tweaked things further, but it still seems that converting expressions that are already very close to CNF turns out to be quite slow. If I can't improve the situation any further I will post the changes I made to Tseitin. Except for that, I worked on tweaking the parameters of SAT, basically EVSIDS, without making any actual code changes. I will be posting the results of the same later.

That's all for now. Hopefully this weekend will be quite productive.

See ya all later!!!

by SD at July 25, 2014 12:12 PM

July 23, 2014

Sushant Hiray

GSoC Week 9: Complexions in Complex Module

Week 9 for GSoC ended a couple of days back. I’m still working on the Complex Module.

Progress

I started off the week continuing the work on pull 248.

Initially I started working on the add and mul functions and integrating the complex module with them. This part was pretty easy, as complex addition and complex multiplication were completely implemented. To begin with I just added special cases for Complex in pow, but this quickly ran into issues as is_canonical failed in some test cases.

Next I simplified the complex power for the following cases:

  • When exp is 1 or -1
  • When the base is purely imaginary and exp is integer.

Following this I made appropriate changes to is_canonical for pow.

Meanwhile @certik sent a couple of PRs, sushant-hiray/csympy/#1 and sushant-hiray/csympy/#2, implementing the Python wrappers for the Complex Module. I spent some time trying to sync the wrappers and make sure they work as intended.

One important fix needed was to improve the printing. Over the entire PR, the printing has been changed multiple times to match the SymPy output. With this PR I've updated the existing printing for add and mul as well as pow.

The expand function is also updated to handle the case of complex numbers. Currently it doesn't expand terms like (2 + 3I)^n. Apart from this, all the issues which were found have been fixed.

The Week Ahead

Merge the current PR and send a new PR for handling cases like (2 + 3I)^n in expand.


That's all for now :) Will get back next week!

July 23, 2014 05:30 PM

July 22, 2014

Avichal Dayal

This week I converted all the operations on FPS to work with generators.
For example, addition of two series is something like:

def add(s1, s2):
    yield s1[0] + s2[0]
    s1, s2 = s1.tail, s2.tail
    for t in add(s1, s2):
        yield t

Almost all the operations use recursive generators. In this case the arguments to the add function are different on each call.
However, in some cases the same function with the same arguments is called recursively. For example, in the case of division:

def div(s1, s2):
    a, b = s1[0], s2[0]
    def gen():
        yield a/b
        iterator = s2 * gen()
        for t in iterator:
            yield -t/b
    for t in gen():
        yield t
gen() is called recursively with the same arguments (no arguments in this case).
I'll try to optimize recursive generators if possible over the next few days.

Another problem I'm facing is the args of the Stream class. It is necessary that the args of all Basic instances be of type Basic, but the Stream class takes a generator object as an argument.

Next week:-
- Optimize recursive generators if possible
- Generator object as Basic instance? (Find solution for this)
- Implement all necessary ``eval`` routines for FPS like _eval_derivative, _eval_leadterm etc.

by Avichal Dayal (noreply@blogger.com) at July 22, 2014 01:11 PM

Akshay Narasimha

Gsoc Week-9

Sorry for the late post as I had to travel a lot this week. Other than that I finished the implementation of the Hyperbola class including the tests. Here is the link to the PR. Apart from that I fixed a few bugs in the Plane and Line3D class.

This week I plan to implement the Parabola class and hopefully will get the PR on Hyperbola merged.

Until then cheers!

by Akshay Narasimha (noreply@blogger.com) at July 22, 2014 06:05 AM

July 20, 2014

Sudhanshu Mishra

GSoC'14 Progress: Finished refraction at planar surface

I'm sorry for this delayed post. I couldn't work last week due to some other issues, so I had nothing much to report.
This week I completed the refraction function for planar surfaces, fixed some bugs and wrote tests for it. I also added a function to calculate the angle of deviation. Details can be found at the following link.
I'm waiting for Sean to start reviewing this.
My next priority is to complete a pending PR on mirrors.

by Sudhanshu Mishra (noreply@blogger.com) at July 20, 2014 02:32 PM

Kundan Kumar

Week 9: Checksysodesol

This week a lot of time was spent debugging solutions. The PR solves issues #7736 and #7723, which were arising due to a wrong calculation in the func method while solving non-linear systems of ODEs of type5. I sent another PR which contains checksysodesol, a method for checking the solutions of systems of ODEs. The PR remains to be updated with the test cases.

The PR of non-linear systems of ODEs of three equations for 1st order still remains to be merged.

That's all for this week. On Monday I will be going to my college; I hope to work harder there. The most haunting thing at home is electricity: I have a tough time with it, sometimes getting power for merely 3-4 hours a day. Well, that's home, I suppose, along with a delicious meal from mom's hands.


by Kundan at July 20, 2014 10:13 AM

Thilina Rathnayake

[GSoC] Week 9: Matrix Inverse and Sparse Matrices

Hi All, sorry for the late blog post. I was kind of busy last week, preparing for a competition.

During the last week, I mainly did two things: implementing matrix inverse and starting the implementation of sparse matrices.

Implementing Matrix Inverse

I implemented matrix inverse using two different methods: Gauss-Jordan elimination and fraction free LU decomposition. I had only implemented Gauss-Jordan elimination to solve a system Ax = b where b is a column matrix. I had to enhance the algorithm so that it can be used to solve several systems at once, i.e. b can be a collection of column matrices.

This was not a problem in the fraction free LU method because the L and U factors can be used to solve for the column vectors e_1, e_2, ..., e_n after calculating L and U once.

Implementing Sparse Matrices

We decided to adapt the implementation of SciPy sparse matrices. For the time being I have implemented the CSR form of a sparse matrix. CSR is an acronym for "Compressed Sparse Row". You can learn more about it in the following links.

Wiki Article

Netlib Article

You can find information about scipy implementation of CSR matrices here.
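
As a quick illustration of the format, using SciPy's own csr_matrix (the implementation being adapted):

import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[1, 0, 0],
              [0, 0, 2],
              [3, 4, 0]])

S = csr_matrix(A)
print(S.data)     # [1 2 3 4]  non-zero values, scanned row by row
print(S.indices)  # [0 2 0 1]  column index of each stored value
print(S.indptr)   # [0 1 2 4]  row i's values sit in data[indptr[i]:indptr[i+1]]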


by Thilina Rathnayake at July 20, 2014 08:32 AM

July 19, 2014

Harsh Gupta

week 9

This week I moved back to college and my classes have restarted. I worked on a PR to allow infinitely indexed Range; see https://github.com/sympy/sympy/pull/7741. While doing this PR I discovered that you cannot monkey-patch object to assign some attribute. And I want to mention that Sergey (one of my mentors) is a damn good reviewer.

by Harsh Gupta at July 19, 2014 06:50 PM

Sachin Joglekar

GSoC Week 9: Dyadics done

Not much to report this week, except that the code for dyadics is now done, with an all-tests-passing PR being pushed. A happy moment was the basic vector framework code finally getting pushed. Phew :-). Now the next PR in line is the dyadics one itself.
I also spent time mapping out the API for the Rotator classes, which I will proceed to discuss with Jason during this week's meeting. I am still a little doubtful about how useful these classes may be, API-wise. Let's see about this one.
A problem that I am facing currently is the confusion about the Del operator. I started out trying to write it as a SymPy function, but then I realised that the outputs from the methods of Del should be the ones defined as unevaluated functions, like Gradient, Divergence, etc. I will have to take this up with Jason too.

Anyways, I hope the dyadics code goes in soon, and I plan to send one or two PRs with more functionality soon. Till then, have a great week!


by srjoglekar246 at July 19, 2014 06:02 PM

July 18, 2014

Jim Crist

GSoC Week 9: Docs!

This week I spent time on all sorts of little things:

  • Finished up the refactoring of KanesMethod
  • Little fixes to my current PR. Just waiting on my mentors to review this, I want to get it merged soon-ish.
  • Documentation.

Writing documentation is the worst[1]. After taking time to implement all sorts of new interesting things, the last thing I want to do is go back and write about them in detail. Which is why it's so important to do early on. Good documentation needs to accomplish three things:

  1. Provide motivation for why your software is necessary/better/useful.
  2. Describe the user interface, showing how to use each function or class.
  3. Provide real world examples showing how to tie everything together.

Python's documentation is interesting in that there are varying ways to do it. Some of Sympy's documentation is just a nicely rendered form of the docstrings for all the methods. Other modules have a more prose-y explanation of their functionality. mechanics is one of those modules.

In my opinion the prose documentation approach is the better way. Having good docstrings is important, but they aren't the end-all of documentation[2]. Of course, if I have a question the first thing I'm going to do is read the docstrings (IPython makes this trivially easy). Only if I still have questions afterwards will I turn to the online documentation. However, it'd be extremely off-putting if the online documentation was just the docstrings again.

With the various changes I've made so far I needed to:

  1. Update the LagrangesMethod documentation to reflect the interface change.
  2. Create a documentation page all about the linearization methods.
  3. Update all the examples to reflect the new functionality.

All of these are "done". I still need to go through and proofread, but overall I'd say that the current state of the documentation is acceptable. I would like to take some time to reorganize the layout of the whole mechanics documentation at some point. The current layout isn't the easiest to navigate for what you're looking for.

With this out of the way, the linearization portion of my project is tentatively done. I say tentatively because I'm still waiting on my PRs to get merged, and am also still playing around with solving the nan issue that I've been writing about these last couple weeks.

With that done, I hope to move on to code generation. I've read the current code generation code and documentation, as well as this pydy wiki page on Jason's ideas about code generation. I'm still a little iffy about the intention of this functionality, so I'm waiting until we can all meet to discuss what needs to be done. That was supposed to have happened this week, but fell through. Hopefully we can set some time aside next week, and I can finally get to work on it.


[1] Not actually the worst.

[2] This article by Steve Losh is a really good read on this.

by Jim Crist at July 18, 2014 08:00 PM

Soumya Dipta Biswas

GSoC 2014: Week 8

Hello Everyone,

I have mainly been working on fixes and optimizations this week. There was an interesting error that was coming up because of the way quantifiers had been designed, and I have made some significant changes to the Quantifier class. An interesting problem regarding constants has cropped up, which I am yet to fix. Please find the discussion regarding it at FOL Constants and Unification. For the rest of the week, I have mainly been writing tests for the FOL class and making small changes. Additionally I have spent some time writing tutorials for the logic module of SymPy. The documentation available currently only talks about how to do something, but not what all can be done with it. The tutorials are far from complete but I will definitely try to finish the entire thing before the summer is over.

That’s all for now.

Farewell

by SD at July 18, 2014 01:12 PM

July 15, 2014

Akshay Narasimha

Gsoc Week - 8

This week was great in terms of the work done. My PR on the Plane class finally got merged, which completes the implementation of 3-D geometry. My code had low test coverage, though, so I have sent a PR which improves it.

Apart from that I have been working on the Hyperbola class and have added most of the methods, though tests remain. Here is the link to the PR.

This week I will add tests for the Hyperbola class and will start implementing the Parabola class once that is done.

Until then cheers!

by Akshay Narasimha (noreply@blogger.com) at July 15, 2014 03:19 AM

July 13, 2014

Avichal Dayal

As it turned out, I had to replace most of my last two weeks' code. I had used lambda functions to implement infinite structures; the lambda function stored the delayed part of the structure.

I scrapped that and now instead use the more Pythonic generators for infinite structures.

Earlier, FPS was just for series expansion around x0=0. I added support for asymptotic expansion and series expansion around arbitrary points. This just involved the same substitutions as done in the series() method.

However I faced an irritating bug regarding this.
The Lazyseries class has the arguments <x> and <generator object>.
So if ``l = Lazyseries(x, generator)`` and ``l.subs(x, x+1)`` is done, it turned out that the <generator object> was being iterated. I noticed this after some time, and fixed it by writing a custom _eval_subs().

Regarding the operations on series, I had to rewrite all of them using generators.
Adding series is just term-wise addition, which can be accomplished by a simple one-liner, i.e. ``imap(operator.add, self, other)``.
But it turned out that this was actually wrong: it doesn't give sorted output when adding series.
For example:

s1 = x + x**2 + x**3 + ...
s2 = x + x**3 + x**5 + ...

Now if s = s1 + s2,

s = 2x + (x**2 + x**3) + (x**3 + x**5) + ...

If I want the series expansion of s with order = 4, it terminates at the third term after getting the x**5 term, whereas it should go further as there is an x**4 term too.


I wrote a custom add function merging two iterators, similar to the merge step of the mergesort algorithm. There is a built-in function called heapq.merge that does the same thing, but it does not provide a key to compare values. It turns out that adding a key to heapq is one of the ideas floating around among CPython developers.
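
Here is a minimal sketch of that merge-by-key idea, written with finite iterators of monomials so it can be run directly (merge_add is a hypothetical name; the real version works on infinite generators):

from sympy import degree, symbols

x = symbols('x')

def merge_add(s1, s2):
    # Both iterators yield monomials in increasing degree of x
    t1, t2 = next(s1, None), next(s2, None)
    while t1 is not None or t2 is not None:
        if t2 is None or (t1 is not None and degree(t1, x) < degree(t2, x)):
            yield t1
            t1 = next(s1, None)
        elif t1 is None or degree(t2, x) < degree(t1, x):
            yield t2
            t2 = next(s2, None)
        else:
            # Equal degree: combine the terms so the output stays sorted
            yield t1 + t2
            t1, t2 = next(s1, None), next(s2, None)

s1 = iter([x, x**2, x**3])
s2 = iter([x, x**3, x**5])
print(list(merge_add(s1, s2)))   # [2*x, x**2, 2*x**3, x**5]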

I hope to get this done in a few days and get it ready for the merge.


by Avichal Dayal (noreply@blogger.com) at July 13, 2014 06:41 PM

Aaron Meurer

SciPy 2014

I just finished SciPy 2014, a week-long conference in Austin, TX for scientific computing with Python.

This is my third SciPy (I have been to 2011 and 2013). This year, the conference was noticeably larger. Last year there were ~350 people, this year, there were ~450 people. Aside from there being a lot more people, and the main keynotes taking place in a larger room, the most noticeable consequence of this is that there were three days of talks this year, and three concurrent tracks of talks all three days (last year there were two of each). The conference consisted of two days of tutorials, three days of talks, and two days of sprints, running from July 5 to July 12.

Tutorials

The conference started on Sunday with tutorials. I gave a tutorial on SymPy with Matthew Rocklin and Jason Moore. The videos are on YouTube (parts one, two, three, and four). I gave tutorials for SymPy the previous two times I was at SciPy, although with different people (with Mateusz Paprocki in 2011 and Ondřej Čertík in 2013). I really enjoy seeing new people learn about SymPy, and working with Matthew Rocklin, who is a very good speaker and teacher.

I also attended the tutorial on PyDy by Jason Moore, Gilbert Gede, and Obinna Nwanna (parts one and two). This tutorial was also well done, and I highly recommend it if you are interested in Newtonian mechanics.

I unfortunately was unable to attend any of the other tutorials, but I heard good things about them, especially the Julia tutorial.

Talks

From Tuesday to Thursday were talks. The quality of talks this year was very high. The SciPy talks have always been high quality talks, but this year I felt that they were particularly good. I don't think I saw a bad talk.

Thus, I can't really recommend the good talks that I saw without recommending all of them. You should go to YouTube and the SciPy schedule and watch any talk that looks interesting.

I therefore am going to recommend here the very best talks. Two talks in particular stood out to me as the best.

First is Greg Wilson's Thursday keynote, which is among the best talks I've ever seen from any conference.

Video: http://www.youtube.com/embed/1e26rp6qPbA

Greg mentions a lot of ideas, quite a few of which are controversial, which I think always makes for an interesting talk (it also means that I don't agree with everything he said, although I do agree with most of it). Most of the talk is about pedagogy, especially regarding his experiences at Software Carpentry. Some things he posited:

  • There is actually good research about what methods work and don't work in teaching. He referenced this presentation, which lists just about every possible pedagogical method, and the net difference that it has on students, based on over 50,000 studies. For example, individualized instruction has a very small positive effect, whereas teacher feedback has a very large positive effect. Since each takes time and resources, we should focus on those effects that have the highest impact. Greg pointed out that web-based learning has very little positive effect, and hence is a waste of time and money. The most effective change is removing disruptive students.

    In particular, I liked the quote, "if you want more computing in high school, you have to tell me what to take out." People like to go on that schools need to teach more of this or more of that, and computing and programming tends to be high on that list these days, but anyone who does not discuss what things should be removed from the curriculum, which is already quite full, is not being honest about the discussion.

  • The other big point Greg made is that we need more incremental massive collaboration in teaching. This is the same model that has built open source and Wikipedia, but is mostly absent from teaching. Incremental change is important here, as well. It is more useful for someone to contribute fixes to existing lesson plans, so that they become better for the students, but in his experience, people are much more willing to add new lessons. Greg calls for a "culture of patching". If incremental change could be adopted in teaching, teachers could aggregate methods and lesson plans, removing the massive duplication, and most importantly, making teaching materials that actually work for students to learn. Greg Wilson asks why open source and Wikipedia are able to thrive on massive incremental change, but teaching is not, a question he hasn't found the answer to.

    My thought on the matter is that unlike writing software or collecting and presenting facts, pedagogy is very difficult. If I contribute a patch to an open source project that fixes a bug, I can run the tests to see if my fix is "correct". If I fix an incorrect fact on Wikipedia, it is less easy, but I can still cite and check references to make sure it is correct. But for teaching, it is very difficult to know what methods work and what don't, and as Greg pointed out at the beginning of his talk, the effects of different methods can be very counterintuitive.

The second talk that I recommend is Jake VanderPlas's talk about Frequentism and Bayesianism.

Video: http://www.youtube.com/embed/KhAUfqhLakw

I won't summarize this talk, as Jake has done a much better job in his blog (parts one, two, three, and four). The best thing is to just watch the talk. I will just point out that before the talk, I did not really understand the difference, not being a statistician or someone who works with statistics regularly, and having seen the talk, I now feel that I do. It's a controversial topic, and if you care about the matter, you should know that Jake is a strong Bayesian, although I felt that he gave both sides a fair exposition.

Again, all talks I saw at the conference were good. But those two I felt were the best. I should also mention here that I myself gave a talk on Conda (more on that later).

The Conference

Of course, the talks are only a part of any conference. The best part of SciPy is the gathering of the community. Each year I meet more new people, as well as talk with people I already know, but don't get to see outside of SciPy.

For me, the biggest part of the interactions this year were on Conda and packaging. The background is that I have been working full time for Continuum since January, and I had interned last summer, working primarily on the Conda package manager and Anaconda, the Python distribution. This year, some of the biggest buzz at the conference was about Conda. I'm obviously selection biased, because people came to me specifically to talk about Conda, but I also overheard it in other people's conversations, in several of the presentations, and frankly, the people who did talk to me about Conda were very excited about it. Just like everyone was talking about the IPython Notebook last year and how it has solved the fundamental problems of sharing and presenting data analysis, this year, everyone thanked me for my work on Conda and how it has basically solved the packaging problem, the ubiquitous problem in Python since people started using it.

Conda: The Packaging Problem Solved

Here is the talk I gave on Conda:

Video: http://www.youtube.com/embed/UaIvrDWrIWM

I made the claim in my talk that Conda has solved the packaging problem, and the general feel from people I talked to who are using Conda is that it has.

I think this slide from my presentation summarizes why Conda solves the packaging problem.

One of the most amazing things about the scientific Python community, and one of the things that I think really sets it apart from other Python communities, is the use of Python alongside other languages, such as C, C++, Fortran, R, or Julia. No one language is enough to get the job done for serious scientific work. The fundamental brokenness of Python packaging has been that it has focused too much on Python specific tools and processes. The distutils/setuptools/pip/virtualenv stack works great if your code begins and ends with Python. Where it falls over is when you want to link against a C library, compile some Fortran or Cython code, and communicate with other languages like R and Julia. By being a system level package manager, which is fundamentally Python agnostic, Conda is able to deal with all packages equally, whether that package be a Python package, a C extension which other packages link against, or Python itself.

By being truly cross-platform and user installable, Conda is able to reach the maximal number of users, especially those who have historically been hit by the packaging problem the hardest: those who are on Windows or those who do not have admin rights to install necessary tools to install the packages they need.

Finally, Conda installs binaries, not source packages, and its metadata is entirely static (you do not need to execute arbitrary Python code to capture the metadata of a package). These two things remove two of the largest sources of issues with the existing Python packaging tools, such as compiler errors, and nonuniformity in metadata standards (there seem to be as many different ways of writing setup.py as there are packages on PyPI), by removing arbitrary code execution from package installation.

Conda opens up its ecosystem to anybody by making it easy for people to build their own Conda packages using reproducible Conda recipes. And Binstar makes it easy to share those packages. I'm very excited about Binstar, as I think it does for packaging what GitHub has done for open source, i.e., distributes and democratizes it. There are challenges on how to deal with this, of course. As with any distributed democratized system, Binstar can be a wild west of packages. Continuum is thinking about ways to manage this complexity, while still reaping the benefits it provides. If you have any thoughts on things that can be done, let me know in the comments section below.

Of course, solving the packaging problem and removing it are different things. Conda does not make it easier to compile difficult packages. It only makes it so that fewer people have to do it. And there is still work to be done before Conda really takes over the world.

Sprints

The conference ended with two days of sprints. I mainly helped people with Conda packaging. One key thing that happened is that I worked with Aron Ahmadia so that HashDist can generate Conda packages. HashDist is a package compiling framework that makes it easy to have completely reproducible builds by hashing all the information that was used to compile a package, and recompiling when any of that information changes. You can learn more about HashDist by watching Aron's talk from the conference:

Video: http://www.youtube.com/embed/wviHkzk0AkY

I am now convinced that HashDist is a good solution for people who still want the control of compiling their own packages. Once HashDist is able to produce Conda packages, then you can gain the benefits of both worlds: Conda's powerful package management and environment notion, with HashDist's modular and reproducible package building framework.

Other thoughts

The organizers of SciPy did an excellent job this year. The video crew did something which I have not seen before, which is that they uploaded the videos of the talks on the same day that the talks were held. My talk, which was held right before lunch, was uploaded before the last talk of the day. Something that I saw come out of this is that people not attending the conference were able to watch the talks and take part of the conversation with the conference attendees, via Twitter and other social media, or by joining the sprints after the conference.

The extended three days of talks really took their toll on me. The conference starts early enough in the morning and the social events after go so late in the evening that each day of the conference I become a little more sleep deprived. Usually by two days of tutorials and two days of talks I have hit my limit, and this year, I really had a hard time making it through that fifth day. Fortunately for the sprints I was able to sleep in a little bit, as it's not a big deal if you miss the beginning.

This year the conference organizers made a push for diversity, and it shows. There were noticeably more women at the conference this year, and not just in proportion to the larger attendance.

Finally, I leave you with the greatest lightning talk. Ever.

Video: http://www.youtube.com/embed/ln4nE_EVDCg?start=3254

by Aaron Meurer at July 13, 2014 04:52 PM

Kundan Kumar

Week 7 and Week 8 : Non-linear system of ODEs

In week 7 there was negligible work done, as I had health issues. Because of that I was unable to update things here. During week 8, I got in non-linear systems of ODEs for 3 equations of first order (PR). But most of them are not responding on my system: it takes a very long time and still I am not getting the final solutions (my system crashes before that).

Though I have written methods for these types, test cases for two types have been included, and for the other 3 I am not getting any solution. I will be needing help from Tim and Sean to make any advances in this.

The PR of non-linear systems of ODEs for 2 equations of first order has been merged and the PR for docs is all ready to go.

Next week I plan to finish the non-linear PR, continue checksysodesol which I have just started, and start working on non-linear systems of ODEs for 2 equations of second order.


by Kundan at July 13, 2014 09:18 AM

Sushant Hiray

This Week in CSymPy: #8

Week 8 for GSoC just ended. I’m particularly pleased that I’ve managed to get a basic version of Complex Module merged into master.

Progress

I hadn’t made much progress last week as I was traveling back home. So I had only managed to implement some of the functions. This week I implemented all the virtual functions which were necessary. Updated Complex::from_mpq to return the appropriate instance of Complex and Rational. Pull 223 implemented the basic version of complex module.

I also noticed that some of the files I had commented out earlier had a mismatched tab configuration. So I fixed the mismatch identation for all the files in the project. Pull 242 fixed this issue.

While writing tests for the Complex Module, I noticed that there was no explicit divide by zero check in the code and as a result, on division by zero, we were getting a coredumped. This issue was filed as #239 which was subsequently fixed via Pull 241.

After some fixes and adding divide by zero checks in the complex module, the Pull 223 was merged. With this in place, we can perform all basic operations on complex integers as well as rationals. The only exception being pow. It was decided that complex pow shouldn’t evaluated by default. Also since the current definition of pow returns a RCP<const Number> so it needs to be refactored to account the current design.

The Week Ahead

Make sure complex module works with all other submodules of CSymPy and add tests accordingly. Pull 248 has been started to deal with this.


That's all for now :) Will get back next week!

July 13, 2014 05:45 AM

July 12, 2014

Harsh Gupta

week 7

Week 7, 8

Fu Simplification

In the early part of week 7 I was thinking about and working on design decisions. I want the code to be very modular so that it can be easily extended by people other than me. This came up in a meeting with Matthew, and I want the solvers to be like the Fu simplification explained by Matthew in this SciPy talk. The idea was that we can see solving an equation as a series of transformations. If we have a lot of small transformations such that the input type is the same as the output type, and some notion of what makes a "better" output, we can search through the list of transformations, running one on top of another. I also posted about it on the mailing list, which brought out some flaws in the preliminary design. The idea is pretty crude in the current stage and I'll have to look deeper into it, but not now.

I also discussed implementing a pattern-dispatch based solver, suggested by F.B. on the mailing list. But we decided that it will be better if we finish the equation solver with the current technique first.

Intersection with S.Reals

I described in the last post that one way to solve trigonometric equations is rewriting them in terms of \( \exp \). But that is \( \exp \) in the complex domain, and the solution of \( \exp(x) = a \) is \( \left\{i \left(2 \pi n + \arg{\left (a \right )}\right) + \log{\left (\left\lvert{a}\right\rvert \right )}\; |\; n \in \mathbb{Z}\right\} \). Hence we have to filter out the real solutions from the obtained solutions. The filtering is equivalent to the intersection of the solutions with the set \( \mathbb{R} \). Suppose \( g(n) \) and \( h(n) \) are real valued functions and we have to perform $$ \mathbb{R} \cap \left\{g{\left (n \right )} + i h{\left (n \right )}\; |\; n \in \mathbb{Z}\right\} $$ then the answer will be simply $$ \left\{g{\left (n \right )}\; |\; n \in \mathbb{Z},\ h{\left (n \right )} = 0\right\} $$

We separate the real and imaginary parts and equate the imaginary part to zero, but the problem was with the assumptions on the symbols. For example, while separating the real and imaginary parts of an expression:

In []: (n + I*n).as_real_imag()
Out[]: (re(n) - im(n), re(n) + im(n))

That is because n is by default complex, even in Lambda(..).expr. I wrote some code to decide the assumptions on the variable of an imageset from the baseset; see PR 7694. There was another issue that needs to be resolved: S.Integers.intersect(S.Reals) doesn't evaluate to S.Integers.
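
A small illustration of the filtering idea for \( \exp(x) = 1 \) (imageset and Lambda are standard SymPy; the filtering step itself is what the new code has to perform):

from sympy import I, Lambda, S, pi, symbols, imageset

n = symbols('n', integer=True)
# All complex solutions of exp(x) = 1: here g(n) = 0 and h(n) = 2*pi*n
sols = imageset(Lambda(n, 2*pi*I*n), S.Integers)
# Intersecting with the reals keeps g(n) for those n with h(n) = 0,
# which leaves only x = 0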

LambertW and Multivariate Solvers

The method to solve equations containing the exp and log functions is to use the LambertW function, the inverse of \( x \exp(x) \). LambertW is a multivalued function, both in the real and complex domains, and Sympy has only one branch implemented. This also leads to loss of solutions; Aaron gave an example here. But I'm pretty unfamiliar with solving by LambertW, and with LambertW itself, and it will take me some time to build an understanding of them. As an ad hoc solution I'm using the code in _tsolve in the solvers.solvers module to do at least what the current solvers can do.

Once the importing of the _tsolve method was done, I started working on the multivariate solvers. Here's how the current multivariate solvers work:

Solving single multivariate equation

  1. Count the number of free symbols in f minus the number of symbols asked for in the equation. If the equation has exactly one symbol which is not asked for, then use solve_undetermined_coeffs; solve_undetermined_coeffs finds the values of the coefficients in a univariate polynomial such that it always equates to zero (see the example after this list).

  2. Then for each symbol solve_linear is tried, which tries to find a solution for that symbol in terms of constants or other symbols. The docstring says:

    No simplification is done to f other than a mul=True expansion,
    so the solution will correspond strictly to a unique solution.
    

So we don't have to worry about losing a solution. For every symbol it is checked that its solution doesn't depend on previously solved symbols; if it does, that solution is discarded.

  3. For the symbols for which the above methods failed, the _solve function is called on the equation for that variable, and as above, if the solution contains a variable already solved for, that solution is discarded.
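
As referenced in the first item, here is a small example of solve_undetermined_coeffs (based on its docstring):

from sympy import Eq, symbols
from sympy.solvers import solve_undetermined_coeffs

a, b, x = symbols('a b x')
# Find a, b making a*x + b identical to 2*x + 3 for every value of x
print(solve_undetermined_coeffs(Eq(a*x + b, 2*x + 3), [a, b], x))
# {a: 2, b: 3}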

System of equations in multiple variables

  • Try to convert the system of equations into a system of polynomial equations in the variables.

  • If all the equations are linear, solve them using solve_linear_system; check the result and return it. If asked for a particular solution, solve using minsolve_linear_system.

  • If the number of symbols is the same as the size of the system, solve the polynomial system using solve_poly_system. In case the system is over-determined, the intersection of all the free symbols with the variables asked for is calculated. Then for every subset of such symbols of length equal to that of the system, an attempt to solve the equations by solve_poly_system is made. Here if any of the solutions depends on a previously solved system, the solution is discarded.

  • In the case there are failed equations, for every known result:

    • Substitute everything into the failed equation and see if it turns to zero. If it does, accept the result; otherwise put it in the bad_results group.
    • Then try to solve the failed equation using solve for each symbol.
    • If that solution depends on any other previously solved symbols, discard it.
    • If it doesn't satisfy the other equations, discard it.
    • Check that the solution doesn't set any denominator to zero; if it does, discard that solution.
    • If it satisfies the above conditions, substitute this value into the known solutions and add it as a new result.

by Harsh Gupta at July 12, 2014 02:00 PM

Sachin Joglekar

GSoC Weeks 7,8: Beginning with Dyadics

The last two weeks have been slow, hence a common blog post for both of them. Last week was spent in improving the current PR even more, especially with dedicated rotator methods for each type of rotation (Body/Space/Axis/Quaternion) of Coordinate Systems. It's not much, just methods like ‘orient_new_axis’, ‘orient_new_body’, etc. as add-ons on top of the all-powerful ‘orient_new’ method. The coding involved wasn't much, since the new methods essentially call the ‘orient_new’ method with the relevant parameters. However, the rotation API is now much simpler than using the ‘orient_new’ method for all kinds of orientations, mainly because the extra flexibility provided by the ‘orient_new’ method implies a more awkward API for the same.
Jason, Mathew and Aaron were together at SciPy, during our last week’s GSoC meeting. There, the common consensus was to try and have Rotator classes to make the orientation framework even more user-friendly. Discussing the interface for the same is on this week’s meeting agenda.
A big change that we thought of doing in the timeline was to let go of spherical/cylindrical systems for a while, and instead focus on dyadic tensors – since the code is mostly done (my work during last year’s GSoC), they will be a useful and powerful addition to the vector module.
So my next few (much shorter than the current one) PRs would be-
1) The code for dyadic tensors
2) Documentation for all the basic, non-time-dependent framework
3) Rotator classes(?)
4) Implementation of the Del operator as a SymPy Function
5) Just some basic helper methods to convert vectors from one form (rect/spherical/cylindrical) to another
The code for Dyadics is mostly done, the docs are on-going, and I am thinking of having a common superclass for Vector and Dyadic, called Tensor (inheriting from Expr), since many of the SymPy manipulation procedures work in exactly the same manner for Vectors as well as Dyadics. Will discuss this with Jason soon.

Anyways, that's all for now, have a great week!


by srjoglekar246 at July 12, 2014 07:37 AM

July 11, 2014

Jim Crist

GSoC Week 8: Try, try, try again...

I'm still struggling to solve the nan and oo issue I've discussed in my post a couple weeks ago. Last week I showed off a custom written subs function for use inside sympy.physics.mechanics that helped with speed considerably, and attempted to solve the nan problem. This worked great for small-medium expressions, but failed on large ones. Or did it? I'm not sure anymore.

This pull request brought up something that I had witnessed, but never really thought about as a potential source of my issues. To summarize, Sympy's current (hopefully soon to be old) caching system never clears. Ever. For interactive work, or short running sessions this is fine. However, for the huge expressions generated in mechanics, this can be a source of memory issues, as the cache grows to hold all sub-expressions that were cached.

It turns out that simplify is one of those functions that is cached. This may explain why, when I tried to use msubs with smart_subs=True (which crawls the expression tree and does selective simplification), this resulted in all of my RAM being used up (4 GB!!!). I haven't had a chance to pull this PR into my repo and test it out, but it sounds like it should fix the problem. Instead of growing infinitely, the cache uses a least recently used (LRU) algorithm to determine what stays and what is removed. The cache size can be set by the user, so those that prefer speed over memory use can still cache everything. Per his benchmarks it seems to be only 10% slower, which shouldn't be much of a problem. Overall, I'm really psyched to start using this. Perhaps with this the smart_subs I wrote up will work, even if it takes a while. If not, I'm kind of out of ideas.
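
The general LRU idea, illustrated with Python's functools.lru_cache rather than sympy's actual cache code:

from functools import lru_cache

@lru_cache(maxsize=2)      # deliberately tiny so the eviction is visible
def square(n):
    print('computing', n)
    return n * n

square(1); square(2)       # both computed and cached
square(1)                  # cache hit, nothing printed
square(3)                  # evicts 2, the least recently used entry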

I spent some time this week trying out a few other ways of solving this problem. So far none of them have worked.

1. Using cse, and applying simplify selectively to the sub-expressions.

The basic idea here was to apply cse on the expression, and then evaluate each sub-expression. If it evaluated to nan, simplify it, then evaluate it again.

This seemed like a good idea at first, but upon closer examination it falls apart. The issue is that the expressions that could cancel/simplify out are often broken into separate sub-expressions. This means that they are evaluated numerically separately, and only once combined will they result in a nan, at which point they can't be simplified anyway.

2. Taking the limit of the bad sub-expressions.

This was another idea that seemed good until I tried it. Similar to the smart_subs I talked about last week, except this time it's taking the limit of the bad sub-expressions as they approach the operating point, the thought being that it may be computationally cheaper to find the limit than to apply simplify and then evaluate.

There were several problems with this design. The first is that Sympy has no functionality for finding multivariable limits. These can't be calculated iteratively either (by that I mean find the limit for x, then the limit for y, then the limit for z, etc...), as the part that could "simplify out" could already be gone.

The second, and more serious, issue is that there is no way to tell if the limit at that point is equal to the value the expression should actually evaluate to, or if it is just the value of the limit at that point. For example:

>>> expr = (a - 1)/(a**2 - 1)
>>> op_point = {a: 1}
>>> expr.subs(op_point)
nan
>>> limit(expr, a, 1, '+')
1/2
>>> limit(expr, a, 1, '-')
1/2

Using the method described above, it would seem that the expression should just evaluate to 1/2. However, if you actually plot this expression, you'll find that there is a discontinuity at a = 1. From either side it approaches 1/2, but at 1 it is actually nan.

3. Numerical perturbation about the setpoint to find the limit of the bad sub-expressions.

The idea here was to calculate the limit of the sub-expressions through numerical evaluation and perturbation. This fails for all the reasons described above, as well as the fact that Sympy is a symbolic computation library, and we should be able to do this symbolically.


Unfortunately those were all the ideas I had to solve this problem. If the algorithm described last week doesn't end up working using the new caching system, I'm kind of stumped. Back on the struggle bus...


Meanwhile...

As another potential solution, I've set about refactoring the KanesMethod class in the hope that I'll find some way of generating expressions that are smaller than they currently are. The first step was rewriting it to make it readable and more modular, and to remove the dead code that had built up over the years. This is done. In its current state it passes all tests, and runs them in half the time that it took before!!! Still no major reduction in expression size, but I'll hopefully find some magical place in the code that could be made more efficient. We'll see.

I'm also working on the documentation for the linearization stuff that's already done, as well as waiting on someone to finally review my PR for LagrangesMethod support. I hope to get that merged soon so that I can get started on the code generation portion of this project.

by Jim Crist at July 11, 2014 10:00 PM

Thilina Rathnayake

[GSoC] Week 8: Solving Ax = b and Determinant

Hi All,

Sorry, I couldn’t write a blog post last week. During past two weeks, I contributed in solving some of the bugs in CSymPy and the determinant of a Matrix. Also, I implemented the solutions of the system Ax = b by using various decomposition methods.

Determinant of a Matrix

I implemented the determinant using two different methods: Bareiss's algorithm and the Berkowitz algorithm.

Bareiss’s algorithm

Bareiss’s algorithm[1] can be used to find the row echelon form or determinant of a Matrix. This is not a division free algorithm, but guarantees that all the divisions are exact, i.e. there is no remainder [2]. The algorithm is based on Sylvester’s identity and a transformation that yields the determinant after successive application. This can be also used to solve a system of equations. You can read more about the algorithm in [2].

Berkowitz Algorithm

The Berkowitz algorithm can also be used to calculate the determinant. This algorithm has various other applications as well, like calculating the characteristic polynomial of a matrix, principal minors and eigenvalues. I am yet to implement the calculation of the characteristic polynomial and eigenvalues using this algorithm, but that won't be difficult. I wish to do it over the weekend.
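
For comparison, SymPy's own Matrix exposes both algorithms by name (method spellings per recent SymPy; CSymPy's API may differ):

from sympy import Matrix

M = Matrix([[2, 3, 1],
            [4, 1, 5],
            [7, 0, 2]])
print(M.det(method='berkowitz'))   # 78
print(M.det(method='bareiss'))     # 78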

Ax = b

I used the various matrix decompositions implemented earlier to solve the system Ax = b. Only fraction free LU decomposition was used earlier, but now we can solve linear systems using LU decomposition and LDL decomposition as well. QR and Cholesky decomposition can be enabled after figuring out a good way to do expression simplification in CSymPy.

I hope to work on Sparse matrices in upcoming weeks and do some benchmarks of CSymPy’s algorithms with GiNaC.

References

[1] Erwin H. Bareiss, Sylvester’s Identity and Multi step Integer-Preserving Gaussian Elimination

[2] Wikipedia article, http://en.wikipedia.org/wiki/Bareiss_algorithm

[3] Wolfram Mathworld article, http://mathworld.wolfram.com/SylvestersDeterminantIdentity.html

 


by Thilina Rathnayake at July 11, 2014 03:51 PM

July 10, 2014

Soumya Dipta Biswas

GSoC 2014: Week 7

Hello everyone,

I have been extremely busy this week, so I haven't managed to get the time to write out a proper post. However, since it is already very late in the week, it is better to create a small outline than wait for the actual product. Firstly, I worked on standardization of FOL expressions. However, the bulk of the week was consumed in writing code for the most general unifier and resolution. While these functions mark almost the end of my FOL proposal, the entire module is not yet ready and still needs some work. I will explain all these concepts in detail (preferably some time next week, once I have more time) along with examples and how to do it in SymPy.

Cheers!!!

by SD at July 10, 2014 07:13 PM

July 08, 2014

Sudhanshu Mishra

GSoC'14 Progress: Working with geometry

This week I completed refraction_angle. Now it also works with Ray3D and Plane. This function calculates the transmitted vector after refraction. medium1 and medium2 can be Medium or any sympifiable object. If incident is an object of Ray3D, normal also has to be an instance of Ray3D in order to get the output as a Ray3D. If the plane of separation is not provided and normal is an instance of Ray3D, normal will be assumed to be intersecting the incident ray at the plane of separation. This will not be the case when normal is a Matrix or any other sequence.
If incident is an instance of Ray3D, plane has not been provided and normal is not a Ray3D, the output will be a Matrix. It is dependent on Plane, so I haven't added tests for it.
Here’s the link to the PR https://github.com/sympy/sympy/pull/7626
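
Based on the description above, usage is meant to look roughly like the following sketch (the numbers are illustrative, and the API is the one proposed in the PR):

from sympy import Matrix
from sympy.physics.optics import Medium
from sympy.physics.optics.utils import refraction_angle

air = Medium('air', n=1)
glass = Medium('glass', n=1.5)

incident = Matrix([1, 1, -1])   # direction of the incoming ray
normal = Matrix([0, 0, 1])      # normal to the plane of separation

print(refraction_angle(incident, air, glass, normal=normal))
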
This week I also worked on making spherical mirrors in SymPy. There are a few issues that I am facing. Currently I’m working on (stuck on) locating mirrors in space; it’s an amalgamation of 2D and 3D geometry. I still have to subclass Ray from geometry and make it useful for this. I’ve sent a WIP PR.
I’m waiting for Sean to merge this long-awaited PR for moving the Gaussian optics module.
That's all for now.

by Sudhanshu Mishra (noreply@blogger.com) at July 08, 2014 05:24 PM

Akshay Narasimha

Gsoc Week-7

This has been a decent week in terms of progress. One of my long-standing PRs has been merged (the Line3D class). The other one, on Plane, has been reviewed by a lot of people, and luckily I got a lot of suggestions on how to improve the code. I have made the required changes and am waiting for a final go on this PR, as that would speed up work on the optics module, which depends on this class.

Meanwhile, this week I also worked on the Hyperbola class and added a few more methods. Here is the link to the PR. I plan to implement more methods this week, along with the Rectangular Hyperbola class.

Until then cheers!

by Akshay Narasimha (noreply@blogger.com) at July 08, 2014 04:42 AM

July 06, 2014

Jim Crist

GSoC Week 7: Expression Trees and Substitution

Late post this week due to the celebration of everything that is American. This week I finally got my first PR merged: the general linearization code is now part of the SymPy codebase. I currently have a PR for Lagrange support waiting to be reviewed. After that, I just need to write the ReST docs and the first part of the project is "complete". The rest of the week was spent on more optimization work. I'm getting closer to being able to solve the bicycle example in a reasonable amount of time!

Last week's post showed the issues with the large expression sizes and subs (it takes forever to run). I took some time this week to look into how expressions work in SymPy, and wrote a specialized subs function for use in sympy.physics.mechanics. The rest of this post will give an overview and some benchmarks of this code.

Expression Trees

In sympy, expressions are stored as trees. Each node is an object that has an attribute args containing a tuple of its child nodes. This is best shown by an example:

In [1]:
from sympy import *
from sympy.printing.dot import dotprint
In [2]:
a, b, c = symbols('a, b, c')
In [3]:
test = a*cos(a + b) + a**2/b
test
Out[3]:
a**2/b + a*cos(a + b)
In [4]:
with open('test.dot', 'w') as fil:
    fil.write(dotprint(test))
In [5]:
%%bash
dot -Tpng test.dot -o test.png
In [6]:
from IPython.core.display import Image 
Image(filename='test.png')
Out[6]:

The root node of this tree is an Add object, as the outermost operation is adding two smaller expressions together. Going down the left side first, we see that \(a^2/b\) is stored as \(a^2\) multiplied by \(b^{-1}\). Sympy doesn't have Div objects; fractions are all expressed using Mul, with the denominator wrapped in a Pow(den, -1). Traversing the right side, \(a \cos(a + b)\) is stored as the product of a and a cos object. The cos object itself contains one child node - an Add - which holds a and b.

Crawling the Tree

The design of sympy expression objects uses two key features: args and func. As mentioned above, object.args holds a tuple of all the child nodes for that object. Likewise, object.func takes in arguments and returns an instance of that class. For most objects in sympy, running

object.func(*object.args) == object

will evaluate to True. I say most because not all objects adhere to this (notably some leaf nodes). There has been a lot of discussion about this here, if you're interested.

One of the great things about this design is that it makes it very easy to write operations that modify these trees using recursion. Normally, recursion in python is frowned upon because the function calls add overhead that could be removed if rewritten as a loop. There is also a maximum recursion depth (default of 1000) to prevent stack overflow. However, a sympy expression that has 1000 nested nodes is highly unlikely (even in mechanics), and the recursion makes the code much more readable. As the zen of python says, "Readability counts".

A simple crawler to print every node is written below. It consists of two functions.

The first one is a generic function crawl, that crawls the expression tree, calls func on each node, and returns the result if there is one. Otherwise it recurses down a level into the child nodes, forming a list of new_args, and then calls expr.func to rebuild the expression from those args.

The second one is a printer function. As we don't want to modify the expression at all, we'll just print the node, and then return the expression if it doesn't have args (it's a leaf node). Note that there are more efficient ways to traverse the tree and print all the nodes - this is mostly to demonstrate crawl, as it will be used later.

Using these two functions a function that crawls the tree, prints every node, and returns the original expression can be composed using a simple lambda statement.

In [7]:
def crawl(expr, func, *args, **kwargs):
    """Crawl the expression tree, and apply func to every node."""
    val = func(expr, *args, **kwargs)
    if val is not None:
        return val
    new_args = (crawl(arg, func, *args, **kwargs) for arg in expr.args)
    return expr.func(*new_args)

def printer(expr):
    """Print out every node"""
    print(expr)
    if not expr.args:
        return expr

print_expr = lambda expr: crawl(expr, printer)

Let's test this function on our expression from above:

In [8]:
temp = print_expr(test)
# Checking that the expression was unchanged (temp == test)
assert temp == test
a**2/b + a*cos(a + b)
a**2/b
a**2
a
2
1/b
b
-1
a*cos(a + b)
a
cos(a + b)
a + b
b
a

Comparing the printed results with the tree diagram from before, one can see how each big expression can be decomposed into smaller expressions. Further, the rebuilt expression after traversing was identical to the input expression. In the next section we'll write another function that changes the expression tree using crawl.

A Custom subs Function

In sympy.physics.mechanics, we deal with symbols, dynamic symbols (which are of type AppliedUndef), and derivatives of these dynamic symbols. Unfortunately, the provided subs function traverses inside the Derivative terms, giving undesired results:

In [9]:
from sympy.physics.mechanics import dynamicsymbols
x, y = dynamicsymbols('x, y')
test = a*(x + x.diff())
test
Out[9]:
a*(x(t) + Derivative(x(t), t))
In [10]:
# Subbing in b for x. Desired result is a*(b + x.diff())
test.subs({x: b})
Out[10]:
a*(b + Derivative(b, t))

To get around this problem, we've been using a custom function _subs_keep_derivs. This function creates two substitution dictionaries - one with Derivative, and one without. Four substitutions then take place:

  1. Perform subs with the terms in the derivative dictionary
  2. Substitute in Dummy symbols for all Derivative terms in the resulting expression
  3. Perform subs with the terms in the non-derivative dictionary
  4. Substitute back the original Derivative terms from the Dummy symbols

This is slow due to the need for four calls to expr.subs. Also, subs applies substitutions sequentially (i.e. each term in the substitution dict requires its own tree traversal). For our purposes in mechanics, this is unnecessary. Using the already written crawl function, we can compose our own subs that ignores terms inside derivative objects fairly easily:

In [11]:
def sub_func(expr, sub_dict):
    """Perform expression subsitution, ignoring derivatives."""
    if expr in sub_dict:
        return sub_dict[expr]
    elif not expr.args or expr.is_Derivative:
        return expr
    
new_subs = lambda expr, sub_dict: crawl(expr, sub_func, sub_dict)

That's it. Due to the composable nature of crawl, the code needed to perform this operation is incredibly simple. Let's test it on our previous expression to make sure it works:

In [12]:
# Simple example
new_subs(test, {x: b})
Out[12]:
a*(b + Derivative(x(t), t))

So it leaves terms inside derivatives alone, exactly as desired. We can see how this compares to the previous implementation:

In [13]:
# Old way of doing things, taken from sympy.physics.mechanics.functions
def _subs_keep_derivs(expr, sub_dict):
    """Performs subs exactly as subs normally would be,
    but doesn't sub in expressions inside Derivatives."""

    ds = expr.atoms(Derivative)
    gs = [Dummy() for d in ds]
    items = sub_dict.items()
    deriv_dict = dict((i, j) for (i, j) in items if i.is_Derivative)
    sub_dict = dict((i, j) for (i, j) in items if not i.is_Derivative)
    dict_to = dict(zip(ds, gs))
    dict_from = dict(zip(gs, ds))
    return expr.subs(deriv_dict).subs(dict_to).subs(sub_dict).subs(dict_from)
In [14]:
# Benchmark substitution
print("Using _subs_keep_derivs")
%timeit _subs_keep_derivs(test, {x: b})
print("Using new_subs")
%timeit new_subs(test, {x: b})
Using _subs_keep_derivs
100 loops, best of 3: 3 ms per loop
Using new_subs
10000 loops, best of 3: 39.9 µs per loop

So it's significantly faster. For this small benchmark, approximately 75x faster. Also, as it works in only one traversal of the tree it has a smaller computational complexity - meaning that for larger expressions this speed increase will be even higher. For kicks, let's see how it compares to normal subs for an expression without derivatives:

In [15]:
# For kicks, see how it compares to subs for expr without derivative:
test = a*cos(x + b) + (a - x)**2
print("Using subs")
%timeit test.subs({a: 1, b: 2})
print("Using new subs")
%timeit new_subs(test, {a: 1, b: 2})
Using subs
1000 loops, best of 3: 676 µs per loop
Using new subs
1000 loops, best of 3: 247 µs per loop

So our implementation of subs is faster than sympy's. However, this micro benchmark isn't all that meaningful. Normal subs works in multiple passes so that expressions inside the sub_dict are also affected. subs also incorporates math knowledge about the expressions, while ours just does a naive direct match and replace. For our purposes though, this is sufficient. Also, as with all micro-benchmarks, this should be taken with a grain of salt. The main point is that our custom subs function is significantly faster than the old method of _subs_keep_derivs.

Custom Simplification

The other major issue I discussed last week was that some expressions result in nan or oo (infinity) when not simplified, but after simplification result in a realizable expression. For example:

In [16]:
test = sin(a)/tan(a)
print("Before simplification:", test.subs(a, 0))
print("After simplification:", simplify(test).subs(a, 0))
Before simplification: nan
After simplification: 1

For small expressions, doing simplify before evaluation is acceptable, but for larger ones simplification is way too slow. However, these divide by zero errors are caused by subexpressions, not the whole expression. Using some knowledge of the types of expressions present in the mechanics module, we can come up with some simple heuristics for what can result in nan, oo and zoo:

  1. tan(pi/2)
  2. Fractions with 0 in the denominator

In reality, these are the same thing, because

\[ \tan(\pi/2) = \frac{\sin(\pi/2)}{\cos(\pi/2)} = \frac{1}{0} \]

Using this knowledge, we can come up with a simple algorithm for performing subs and catching these issues at the same time:

  1. Replace all tan(*) with sin(*)/cos(*). This will allow us to only have to check for denominator = 0 conditions.
  2. For nodes that are fractions, check if the denominator evaluates to 0. If so, apply simplify to the fraction, and then carry on as normal.

This may not catch every instance that would result in a nan without simplification, but it should catch most of them. Also, the algorithm is so simple that it can be implemented in only a few lines. First, the tan replacement. This requires almost no new code, as it can be composed using the already written crawl function:

In [17]:
def tan_repl_func(expr):
    """Replace tan with sin/cos"""
    if isinstance(expr, tan):
        return sin(*expr.args)/cos(*expr.args)
    elif not expr.args or expr.is_Derivative:
        return expr

tan_repl = lambda expr: crawl(expr, tan_repl_func)

Testing it:

In [18]:
tan_repl(tan(a) + tan(a)/tan(b))
Out[18]:
sin(a)/cos(a) + sin(a)*cos(b)/(sin(b)*cos(a))

So that works as expected. Now for the second pass; the subs with denominator checking and selective simplification. This takes a little bit more code than before, but I've heavily commented it so you should be able to see what's going on:

In [19]:
def smart_subs_func(expr, sub_dict):
    # Decompose the expression into num, den
    num, den = fraction_decomp(expr)
    if den != 1:
        # If there is a non trivial denominator, we need to handle it
        denom_subbed = smart_subs_func(den, sub_dict)
        if denom_subbed.evalf() == 0:
            # If denom is 0 after this, attempt to simplify the bad expr
            expr = simplify(expr)
        else:
            # Expression won't result in nan, find numerator
            num_subbed = smart_subs_func(num, sub_dict)
            return num_subbed/denom_subbed
    # We have to crawl the tree manually, because `expr` may have been
    # modified in the simplify step. First, perform subs as normal:
    val = sub_func(expr, sub_dict)
    if val is not None:
        return val
    new_args = (smart_subs_func(arg, sub_dict) for arg in expr.args)
    return expr.func(*new_args)

def fraction_decomp(expr):
    """Return num, den such that expr = num/den"""
    if not isinstance(expr, Mul):
        return expr, 1
    num = []
    den = []
    for a in expr.args:
        if a.is_Pow and a.args[1] == -1:
            den.append(1/a)
        else:
            num.append(a)
    if not den:
        return expr, 1
    num = Mul(*num)
    den = Mul(*den)
    return num, den

Finally, we can put everything from above inside a nice wrapper function:

In [20]:
def smart_subs(expr, sub_dict):
    """Performs subs, checking for conditions that may result in `nan` or 
    `oo`, and attempts to simplify them out.

    The expression tree is traversed twice, and the following steps are
    performed on each expression node:
    - First traverse: 
        Replace all `tan` with `sin/cos`.
    - Second traverse:
        If node is a fraction, check if the denominator evaluates to 0.
        If so, attempt to simplify it out. Then if node is in sub_dict,
        sub in the corresponding value."""
    expr = crawl(expr, tan_repl_func)
    return smart_subs_func(expr, sub_dict)

Let's see if it works as expected. Using a simple test case:

In [21]:
test = tan(a)*cos(a)
sub_dict = {a: pi/2}
print("Without `smart_subs`:")
print(new_subs(test, sub_dict))
print("With `smart_subs`:")
print(smart_subs(test, sub_dict))
Without `smart_subs`:
nan
With `smart_subs`:
1

And some timings:

In [22]:
print("Using `smart_subs`:")
%timeit smart_subs(test, sub_dict)
print("Using simplification, then normal subs")
%timeit simplify(test).subs(sub_dict)
print("Using trigsimp, then normal subs")
%timeit trigsimp(test).subs(sub_dict)
Using `smart_subs`:
10000 loops, best of 3: 92 µs per loop
Using simplification, then normal subs
10 loops, best of 3: 42.9 ms per loop
Using trigsimp, then normal subs
10 loops, best of 3: 30.8 ms per loop

Using selective simplification, the same results can be obtained for a fraction of the cost: several hundred times faster for this small test.

Let's see what the overhead of smart_subs is for an expression that doesn't need it:

In [23]:
test = sin(x + y)/cos(x + y)* (a + b) + a/(a + x)
sub_dict = {x: a, y: 1}
print("new_subs")
%timeit new_subs(test, sub_dict)
print("smart_subs")
%timeit smart_subs(test, sub_dict)
print("Using simplification, then normal subs")
%timeit simplify(test).subs(sub_dict)
print("Using trigsimp, then normal subs")
%timeit trigsimp(test).subs(sub_dict)
new_subs
1000 loops, best of 3: 222 µs per loop
smart_subs
1000 loops, best of 3: 2.34 ms per loop
Using simplification, then normal subs
1 loops, best of 3: 1.11 s per loop
Using trigsimp, then normal subs
1 loops, best of 3: 815 ms per loop

So there's considerable overhead, which was expected. Still, it's much faster than using simplify first, and then running subs.

The best approach would be to first try new_subs, and if you get a nan or oo, then try smart_subs. To aid in this, we can write a nice wrapper function msubs that supports both methods:

In [24]:
def msubs(expr, sub_dict, smart=False):
    """A custom subs for use on expressions derived in physics.mechanics.

    Traverses the expression tree once, performing the subs found in sub_dict.
    Terms inside `Derivative` expressions are ignored:

    >>> x = dynamicsymbols('x')
    >>> msubs(x.diff() + x, {x: 1})
    Derivative(x, t) + 1

    If smart=True, also checks for conditions that may result in `nan`, but
    if simplified would yield a valid expression. For example:

    >>> (sin(a)/tan(a)).subs(a, 0)
    nan
    >>> msubs(sin(a)/tan(a), {a: 0}, smart=True)
    1

    It does this by first replacing all `tan` with `sin/cos`. Then each node
    is traversed. If the node is a fraction, subs is first evaluated on the
    denominator. If this results in 0, simplification of the entire fraction
    is attempted. Using this selective simplification, only subexpressions
    that result in 1/0 are targeted, resulting in faster performance."""

    if smart:
        func = smart_subs
    else:
        func = new_subs
    if isinstance(expr, Matrix):
        # For matrices, func is applied to each element using `applyfunc`
        return expr.applyfunc(lambda x: func(x, sub_dict))
    else:
        return func(expr, sub_dict)
In [25]:
#Check that the results are the same:
msubs(test, sub_dict) == new_subs(test, sub_dict)
msubs(test, sub_dict, smart=True) == smart_subs(test, sub_dict)
Out[25]:
True

This code has been included in my optimization branch, and is performing admirably against all the included tests (so much speed up!!!). The big test though is the bicycle example.

The Big Test

To really put this new code to the test I applied it to the bicycle example, which has operation counts ranging from 400,000 to several million depending on which expression you're working with. How did it fare? Mixed results...

Using msubs with smart=False in the formation of the linearizer object resulted in a huge speed increase. Previously using _subs_keep_derivs there resulted in a run time of several hours. Now it runs in 14 seconds!

The second test is in the application of the operating point to the M, A, and B matrices. Previously, evaluating these had resulted in nan and oo, and with operation counts exceeding 400,000 for M, pre-simplification is out of the question. I tried using msubs with smart=True on this M, and let it run for 6 hours. It ended up consuming 98% of my RAM (4 GB worth), and it still wasn't done :( So for reasonably sized expressions the smart_subs we've implemented is acceptable, but it still doesn't work for the huge ones. I'll have to keep working on optimizations to make this faster and to reduce the initial expression size.

Still, all is not lost. Going from formulation to M, A, and B without the operating point substitution now only takes 20 minutes, down from several hours before. This is actually faster than the previous linearize method, but unfortunately results in expressions that evaluate to nan.

Future Work

Per my original timeline I was supposed to be working on "pre-linearization" routines for generating the equations of motion directly in linearized form. I think this will be dropped in favor of cleaning up and speeding up the existing codebase. There's some dead code that needs removing, and I have some other ideas for speed-ups that I'd like to try.

After that, I hope to get to the original part 3 of the project, which is adding support for matrices to the code-generation code. Not sure if this would live in sympy or pydy, but it would be extremely useful to myself and others. For now I plan on taking it week-by-week.


If you have thoughts on how to deal with the nan and oo issue, please let me know in the comments. Thanks!

by Jim Crist at July 06, 2014 10:25 PM

July 05, 2014

Sushant Hiray

This Week in CSymPy: #6-#7

Week 7 for GSoC just ended. I successfully cleared the mid-term review, thanks @certik

Progress

I started week 6 by refactoring the various instances of subs, adding a method create to construct classes with canonicalization. #210 was the relevant issue which was raised, and I opened pull 213 to address it. This was a good addition, as it reduced the code base by 117 lines.



After successfully resolving #210, I started working on the Complex module. By the end of the sixth week, I had discussed the basic structure of the Complex class with Ondrej. We decided to use mpq_class itself for the real and the imaginary parts, as opposed to creating a separate type for them.

Pull 223 implements the Complex module. The class design can be found in the PR.

In week 7 I was travelling back home from university, so I couldn’t contribute much code for a couple of days. By the end of week 7, I had managed to implement part of the basic API. All the virtual functions are yet to be implemented, so right now I’m not returning an object from from_mpq.

I will speed up during the next week and hope to merge the Complex Module.

Week Highlights!

Somewhere around this period, I crossed 100 commits in CSymPy master. It’s been a long journey since I first started contributing to CSymPy.

The Week Ahead

Complete the basic API for Complex Module and merge it into master.


That’s all for now :) Will get back next week!

July 05, 2014 01:45 PM

Avichal Dayal

This week was unproductive coding-wise. I spent most of the time designing and writing pseudo-code.

For the Formal Power Series part of my project, last week I implemented the algorithm to find the general term of the series expansion of a function. Various operations on series, like addition, multiplication and composition, were also implemented.
I store the coefficients of a series in a Stream class, which works on a lazy evaluation scheme.

However, the Stream class is currently not very generic: it can only be used to represent the series expansion of a function, with parameters like "x" and "general_term" and methods for series arithmetic.

To make it more generic, I'll make the Stream class act only as a container that stores elements (infinitely many, under the lazy scheme).
To represent infinite series there will be another class - Lazyseries. It will store its coefficients using the Stream class and will contain the methods for series arithmetic.
This way, the Stream class can also be used to store other infinite structures. For example, the Fibonacci sequence can be represented with Stream in the following way:

def fib(x, y):
    head = x
    tail = lambda: fib(y, x + y)
    return Stream(head, tail)


But I'm a bit unsure if this is the best way to do lazy evaluation. Currently I store the first element, and the rest of the elements as a lambda function; the lambda delays the evaluation of the tail. I don't know if this is the most efficient method in terms of memory and speed.
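
For what it's worth, one common refinement (a sketch of the idea, not my actual class) is to memoize the tail so that each element is computed at most once:

class Stream:
    """A lazy stream: a head plus a zero-argument thunk for the rest."""
    def __init__(self, head, tail_thunk):
        self.head = head
        self._thunk = tail_thunk
        self._cached_tail = None   # tail is computed once, on first access

    @property
    def tail(self):
        if self._cached_tail is None:
            self._cached_tail = self._thunk()
        return self._cached_tail

def fib(x, y):
    return Stream(x, lambda: fib(y, x + y))

s, out = fib(0, 1), []
for _ in range(6):
    out.append(s.head)
    s = s.tail
print(out)   # [0, 1, 1, 2, 3, 5]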

As an example, multiplication of two series (viewing a series as head + x*tail) is defined as:
head = head1 * head2
tail = head1 * tail2 + head2 * tail1 + x * tail1 * tail2
When the tail is evaluated repeatedly, many Stream objects are created in memory, resulting in memory blow-up. I'm working on this.

by Avichal Dayal (noreply@blogger.com) at July 05, 2014 08:53 AM

July 04, 2014

Tarun Gaba

GSoC 14: Midterm Evaluations have ended!

It has been a week since the midterm evaluations ended, and I am back to work after a small break (with permission from my mentor, of course!). I have been working on writing a test suite for the Dynamics Visualizer; this is the wrapping-up part of the visualizer for this GSoC. [Here](/blog/visualization/index.html?load=samples/scene_desc.json) is a visualization of a rolling disc that I prepared (it is slightly buggy though). To view the animation, allow the visualization to load in the webpage (it shall load automatically), and then hit the `Play Animation` button.

After writing some tests for the visualizer, I am going to start fleshing out the API for the module that provides IPython support to the visualizer. The main aim of this module is to make the visualizer interactive, in the sense that a user should be able to change all the variables from the GUI (which is rendered inside the notebook's output cell) and then rerun the simulations without having to write or manually execute any code. The data of the new simulations will be automatically fed into the visualizer, and can then be viewed as an animation.

This whole workflow will be very convenient for existing PyDy users as well as new ones. It will be particularly convenient for those who want to just play around with the existing systems by changing the system variables and viewing how that affects the resulting animations. With the development of this module, as well as the ongoing improvements in the other PyDy modules (by my fellow GSoC'ers from PyDy), we should be able to perform lightning-fast simulations of a system, as well as view them on a canvas. I will keep posting about the new work I do, with better details (once I actually start implementing new stuff!).

July 04, 2014 05:40 PM

July 03, 2014

Soumya Dipta Biswas

GSoC 2014: Week 6

Welcome back everyone,

This week I set about fixing one of my long-pending PRs associated with propositional logic. Following that, I worked on conversion to Prenex Normal Form and Skolem Normal Form. Firstly, let me talk about the PR associated with propositional logic. It basically involved adding small functions like validity and entailment, and some small modifications to the propositional knowledge base. I assume most readers are familiar with the concepts of validity and entailment, but I will briefly mention the algorithm to compute them and the intuition behind it. Validity is given by not satisfiable(Not(expr)). Why does this work? If an expr is valid then it is True under all interpretations. Therefore Not(expr) is always False, i.e. it cannot be satisfied by any model. Hence if the negation of the expr is unsatisfiable, then the original formula is valid. Moving on to entailment, the conventional definition states that a set of formulas formula_set entails a formula expr iff expr is True whenever formula_set is True. That basically boils down to the validity of And(*formula_set) >> expr. However, there is another method to accomplish the same, namely not satisfiable(And(~expr, *formula_set)). Why does this non-intuitive expression even work? Whenever the formula set is True, expr must be True (for the entailment to hold), hence ~expr must be False, thereby making the entire conjunction False. If the formula_set is False, then the conjunction is clearly False. So if the entailment is to hold, the conjunction must be unsatisfiable, which is exactly what was implied earlier. Let us look at one special case before moving on: what if the formula_set is empty? While the answer is to a good extent a matter of convention, SymPy uses ideas from the second formula to resolve the ambiguity, which yields not satisfiable(And(~expr)), exactly the same as the validity of expr. Theoretically, entailment means "given this, what can we infer?". So, given nothing, we can only infer that which is always true. Apart from these changes, I also made some changes to PropKB, which are probably best not discussed at the moment.
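
In SymPy terms, both checks are one-liners on top of satisfiable (a small illustration using sympy.logic):

from sympy.abc import A, B
from sympy.logic.boolalg import And, Implies, Not, Or
from sympy.logic.inference import satisfiable

# Validity: an expression is valid iff its negation is unsatisfiable
expr = Or(A, Not(A))
print(not satisfiable(Not(expr)))                    # True

# Entailment: {A, A >> B} entails B iff And(~B, A, A >> B) is unsatisfiable
formula_set = [A, Implies(A, B)]
print(not satisfiable(And(Not(B), *formula_set)))    # True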

Coming now to the main interest of this post: the major part of the week was spent on functionality for conversion to Prenex Normal Form and Skolem Normal Form. Without going into a lot of detail, let me introduce these. A formula in PNF consists of a sequence of quantifiers (containing both universal and existential quantifiers) followed by a quantifier-free expression. The sequence of quantifiers is called the prefix, while the expression is called the matrix. Every formula in FOL can be converted to an equivalent formula in PNF. The matrix can be compared to a formula in propositional logic (if we think of the predicates as literals of PL), and most of the operations of PL become relevant. However, we cannot completely ignore the prefix, and hence we perform some operations so that we can look at the matrix without any regard to the prefix. This brings us to the Skolem Normal Form. A formula is converted to SNF by first converting it to PNF and then dropping the existential quantifiers (according to certain rules). The prefix then contains only universal quantifiers, which are assumed to be implicit and hence simply ignored while looking at the matrix. I will be posting the various rules for conversion and examples soon. Till then, stay tuned.

Arrivederci

by SD at July 03, 2014 06:13 PM

July 02, 2014

Harsh Gupta

week 6

Solving Trigonometric Function (part II)

There is another technique for solving trigonometric equations. Just as every trigonometric function can be written in terms of \( \tan \), it can also be written in terms of \( \exp \).

$$ sin(x) = - \frac{i}{2} \left(e^{i x} - e^{- i x}\right) $$ $$ cos(x) = \frac{e^{i x}}{2} + \frac{1}{2} e^{- i x} $$ $$ tan(x) = \frac{i \left(- e^{i x} + e^{- i x}\right)}{e^{i x} + e^{- i x}} $$ $$ cot(x) = \frac{i \left(e^{i x} + e^{- i x}\right)}{e^{i x} - e^{- i x}} $$

So, solving a trigonometric equation is equivalent to solving a rational function in \( \exp \). Note: here \( \exp \) is over the complex domain, and the equation \( exp(x) = y \) has the solution \( \left\{i \left(2 \pi n + \arg{\left (y \right )}\right) + \log{\left (\left\lvert{y}\right\rvert \right )}\; |\; n \in \mathbb{Z}\right\} \) when solved for \( x \).
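
SymPy can already perform this rewriting; a small illustrative snippet:

from sympy import exp, sin, tan, symbols

x = symbols('x')
print(sin(x).rewrite(exp))   # -I*(exp(I*x) - exp(-I*x))/2
print(tan(x).rewrite(exp))   # the rational function in exp(I*x) shown above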

by Harsh Gupta at July 02, 2014 08:43 AM

Sudhanshu Mishra

GSoC'14 progress: Week 6

It was a busy week, though I managed to do some work.
I've been working on the following things:
  • Integrating TWave with refraction_angle (#7626)
  • Interference class
Some blocking 3D geometry code got merged yesterday which gives me a lot of new things to work on. :-)
I also tried to take a look at a very old patch for gaussopt.
This week I wrote a script to update the development documentation when a PR gets merged into the master. This is up and running.
I passed the mid-term evaluation that took place last week, and I would like to thank Sean, Ondrej, Aaron and Chris for their constant support.
That's all for now. Cheers!

by Sudhanshu Mishra (noreply@blogger.com) at July 02, 2014 03:27 AM

July 01, 2014

Akshay Narasimha

Gsoc Week-6

Sorry for the late post; I had an issue with my data card. First of all, I couldn't get my PRs (Line3D, Plane) merged, as Stefan was busy, though I addressed all of his comments; hopefully it is only a matter of time before they get merged.

I started working on the Hyperbola class; you can find the code here.
This week I plan to continue the work on the Hyperbola class and hopefully get my previous PRs merged.

Until then cheers!

by Akshay Narasimha (noreply@blogger.com) at July 01, 2014 04:22 AM

June 29, 2014

Kundan Kumar

GSoC Week 6: Docs of system of ODEs

This week has been a hell of a week, full of work and the enjoyment of my cousin's marriage preparations. This led to rather little progress on my project: I only managed to complete the docs for all the implemented methods for systems of ODEs. The non-linear systems of ODEs left over from last week have not progressed much since then.

I also have some confusion about the representation of a few things, which I posted to the mailing list. I need the community's opinion on these before I proceed. That will be all for this week; I intend to implement non-linear systems of first order for 3 equations during the next week.


by Kundan at June 29, 2014 04:52 PM

Sachin Joglekar

GSoC Week 6: Completing coordinate systems

Phew. This was a busy week. Initially, the plan was to push a WIP PR (without tests, maybe) with the code for the coordinate system and point classes. Midway through the week, Jason and I decided to push this new code to the first PR itself, and merge the entire fundamental framework together in a single PR. Initially I did get a little worked up looking at the amount of work that needed to be done.

However, things moved much faster than expected, though with a lot of issues along the way. There were problems with the args for CoordSysRect, then some issues with the tree algorithm for inter-point distance calculation, and then some more with the code that implements orientation of systems with respect to each other... you get the drift. But the code is finally done and polished, along with the unit tests and doctests. That's why the late post this week: I 'bravely' decided to get the code fully working, without glitches, and only then do the blog post for the week. Thankfully, the plan was a success :-D.

Some minor things still remain, like renaming some attributes to follow conventions; mostly cosmetic changes, nothing that will affect how the code works.

The next immediate steps would be:

1. Finish off the PR and get it merged.

2. Study caching mechanisms and implement them in a new ‘patch’ PR.

The first PR is here.

See you later then, next week :-).


by srjoglekar246 at June 29, 2014 11:28 AM

June 28, 2014

Avichal Dayal

A bit late for this week's blog.

Implementing the solveRE() method took longer than I expected; I spent most of this week on it. This method solves a recurrence equation to get the general term of the series expansion of a function.
rsolve() can solve some equations, but not recurrences of hypergeometric type.

A hypergeometric recurrence equation is of the type:
Q(k) * r(k + m) = P(k) * r(k)
where P(k) and Q(k) are rational functions in k.

For the case m = 1, there exists a simple formula to get r(k).
When m > 1, we can represent the solution as a sum of m-shifted, m-fold symmetric functions.
For example, let the RE be
r(k + m) = R(k) . r(k)
Replacing k with mk, and writing c(k) for r(mk), we get
r(m(k+1)) = R(mk) . r(mk), which is equivalent to
c(k+1) = R(mk) . c(k)
Now we can use the same formula as in the case m = 1.

Similarly, by replacing k with mk + 1, mk + 2, ..., we can solve for r(k) as a sum of m-shifted m-fold symmetric functions.
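
To make the m = 1 base case concrete, here is a toy unrolling of r(k+1) = R(k) * r(k) in plain Python (my illustration, not the actual solveRE() code):

from fractions import Fraction

def unroll(R, r0, n):
    """First n coefficients of the m = 1 recurrence r(k+1) = R(k) * r(k)."""
    out, r = [], Fraction(r0)
    for k in range(n):
        out.append(r)
        r = R(k) * r
    return out

# R(k) = 1/(k + 1) gives r(k) = 1/k!, the coefficients of exp(x)
print(unroll(lambda k: Fraction(1, k + 1), 1, 5))
# [Fraction(1, 1), Fraction(1, 1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]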

Here are some results (shown as images in the original post). The general terms are not in their simplest form: RisingFactorial must be rewritten in terms of factorial to bring them to the usual form.

Now I'm able to find the FPS, or infinite series, for a wide range of functions. Next week I'll take care of efficiency and making the code more in line with SymPy's style.

by Avichal Dayal (noreply@blogger.com) at June 28, 2014 07:21 AM

Jim Crist

GSoC Week 6: Just the little things

I was rather busy with my research this week, so no time for a long-winded post like some of my previous updates. There's not much interesting to say anyway. This week was mostly spent on little fixes to get my current pull request merged.

Topping the list of things that are better than they were last week is speed. The profiling I did last week showed that the function sympy.physics.mechanics currently uses to solve a system of linear equations (_mat_inv_mul) is sloooooooooow. The underlying reason is that subs is slow - more on that later. I spent some time swapping out all forms of solving ($A x = B$) for LUsolve, the clear winner of last week's benchmarks. This resulted in a 10x speedup of the formulation of equations for the bicycle model example.
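
As a toy illustration of what that swap looks like (a made-up 2x2 system, not the mechanics code):

from sympy import Matrix, simplify, symbols

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])
rhs = Matrix([1, 2])

# LUsolve solves A*x = rhs directly instead of forming A**-1 * rhs
x = simplify(A.LUsolve(rhs))
print(x)   # elementwise: (d - 2*b)/(a*d - b*c) and (2*a - c)/(a*d - b*c)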

This bicycle example has become the bane of my existence for the last couple of weeks. It's a super slow test that I'd never actually gotten to run before. But with the speed improvements made, it now finishes in a reasonable time. Except it still doesn't work. I'm able to run all the way up to

M, A, B = KM.linearize()

But when I go to sub in values for symbols in these matrices, things get hairy. There are two issues:

Issue 1: Get nan when not simplified

M.subs(val_dict) results in nan and oo after the subs, but doesn't if it's simplified before the subs. An example of this behavior:

>>> M = sin(q1)/tan(q1)
>>> M.subs({q1: 0})
nan

Note that if this is simplified, this results in something completely different:

>>> M = sin(q1)/tan(q1)
>>> M = M.trigsimp()
>>> M
cos(q1)
>>> M.subs({q1: 0})
1

However, for the bicycle case M has over 19 thousand operations. This doesn't simplify quickly. Also, by default we don't simplify before subs in Linearizer (you can opt in to simplify, but it's done right before the return, so it won't affect the subbed result at all). Right now I'm looking into ways to make the resulting expressions smaller after the formulation, as this will result in speedups for all operations. This could be extremely helpful for issue 2...

Issue 2: subs is slow

because A has over 38 million operations!!! In this case subs doesn't even return. Ever. I left it running on my computer for 4 hours, and came back to find it still whirring along, fans on high, eating up all my RAM. No idea how to solve this. One possible solution is csympy, a fast core written in C++. Once this matures, subs, trigsimp, and other time-consuming operations used heavily in sympy.physics.mechanics could rely on the equivalent, faster C++ versions. I filed an issue with an example expression generated from the bicycle example (this one only had 147,841 operations, not nearly as bad). Hopefully Ondrej and the team can use this as a benchmark problem to help improve subs in csympy.

If you have thoughts on how to overcome these issues, please let me know. I'm kind of stumped right now.

The Good News

I didn't want to end this post on a bad note, so I'll close with the remainder of the things I did last week that actually worked:

  1. Improved documentation! Docstrings that are worth reading, and a start on the sphinx documentation.

  2. Added a deprecation warning for KanesMethod.linearize to warn people about the method change.

  3. Major interface changes. Now all operating points are specified as a single dictionary, or an iterable of dictionaries. This is to aid in consistency across different system implementations. Referring to a dictionary as u_op in LagrangesMethod doesn't really make any sense, as Lagrange's method only uses $q$, $\dot{q}$, and $\ddot{q}$. Also added a kwarg to make simplification of the results optional.

  4. Added a method to the LagrangesMethod class to calculate the value of the multipliers at different points. This is useful for multiple reasons. The multipliers have meaning, so knowing what the solution is symbolically is nice for calculating the constraint forces. Also, when linearizing with Lagrange's method, the multipliers have operating points as well, and these need to be calculated based on the operating point for the other states ($q$, $\dot{q}$, etc...). Now a user can go:

    op_point = dict_or_iterable_of_dicts
    lam_op = LM.solve_multipliers(op_point)
    op_point.append(lam_op)     # Or op_point.update if op_point is a dict, not a list of dicts
    M, A, B = LM.linearize(q_ind=q_ind, qd_ind=qd_ind, op_point=op_point)
    

Hopefully in the next week I can get my PR merged, so the Lagrange stuff can finally be submitted.

by Jim Crist at June 28, 2014 02:21 AM

June 27, 2014

Thilina Rathnayake

[GSoC] Week 6: Cholesky and LDL Algorithms

This week I implemented Cholesky decomposition and LDL decomposition. In addition to that, I fixed two errors in CSymPy. I was also able to finish the work on LU decomposition and merge it into master. Also, I could solve the system Ax = b using fraction-free LU factorization.

Cholesky Decomposition

Cholesky decomposition can be applied to a Hermitian positive definite matrix. A Hermitian matrix is a matrix with complex entries that is equal to its conjugate transpose [1]. Hence a symmetric matrix with real entries can be considered a Hermitian matrix, and can be decomposed using Cholesky decomposition if it's positive definite. A symmetric matrix A is positive definite if z^T A z is greater than zero for every non-zero column vector z [2]. If the above conditions are satisfied, the Cholesky decomposition of a matrix A can be written as A = LL^*, where L is a lower triangular matrix. This is equal to A = LL^T when L is a real matrix. This factorization can be used for fast solution of the system Ax = b. I am yet to use this decomposition for solving the above-mentioned system.
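
A minimal numeric sketch of the factorization itself (floating point, for illustration only; CSymPy of course works symbolically in C++):

import math

def cholesky(A):
    """Lower-triangular L with A = L*L^T, for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

print(cholesky([[4.0, 2.0], [2.0, 3.0]]))
# [[2.0, 0.0], [1.0, 1.4142135623730951]]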

LDL Factorization

LDL decomposition is closely related to Cholesky decomposition. As the name implies, the LDL decomposition of a matrix A can be written as A = LDL^*, where L is a lower triangular matrix and D is a diagonal matrix [3]. This decomposition can be used for some matrices which don't have a Cholesky decomposition.

CSymPy printing error and simplification error

I also worked on a printing error in CSymPy and a simplification error in the Mul class, which CSymPy uses to represent products. There is still some work to be done to fix the simplification error completely. Most importantly, we were able to introduce a fix which doesn't cause a considerable slowdown after being applied.

References

[1] Hermitian Matrix, Wikipedia Article: http://en.wikipedia.org/wiki/Hermitian_matrix

[2] Positive definite Matrix, Wikipedia Article: http://en.wikipedia.org/wiki/Positive-definite_matrix

[3] LDL Decomposition, Wikipedia Article: http://en.wikipedia.org/wiki/Cholesky_decomposition#LDL_decomposition


by Thilina Rathnayake at June 27, 2014 03:58 PM

June 26, 2014

Harsh Gupta

week 5

Solving Trigonometric Function (part I)

This week I spent time on making the trigonometric solvers work. Every trigonometric function can be written in terms of tan:

$$ sin(x) = \frac{2*tan(x/2)}{tan^{2}(x/2) + 1} $$

$$ cos(x) = \frac{-tan^{2}(x/2) + 1}{tan^{2}(x/2) + 1} $$

$$ cot(x) = \frac{1}{tan(x)} $$

A basic technique for solving trigonometric equations is to rewrite the equation in terms of tan. If the equation is built by addition, multiplication or quotients of trigonometric functions, then the transformed equation is equivalent to a rational function in tan. That equation can be solved by the usual polynomial solving techniques.

Taking the example from the doc \( cos(x) + sin(x) \) gets converted to \( \frac{-tan^{2}(x/2) + 2*tan(x/2) + 1}{tan^{2}(x/2) + 1} \)

The solutions of this equation are \( tan(x/2) = 1 \pm \sqrt{2} \). Since the inverse image of tan is \( \left\{\pi n + \operatorname{atan}{\left (y \right )}\; |\; n \in \mathbb{Z}\right\} \), the solution of the given equation is $$ \left\{2 \pi n - \frac{\pi}{4}\; |\; n \in \mathbb{Z}\right\} \cup \left\{2 \pi n + \frac{3 \pi}{4}\; |\; n \in \mathbb{Z}\right\} $$
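
For completeness, the technique can be sketched with a few lines of SymPy (using rewrite and solve; the actual solvers work differently internally):

from sympy import cos, sin, solve, symbols, tan

x, t = symbols('x t')

expr = cos(x) + sin(x)
rational = expr.rewrite(tan)       # a rational function in tan(x/2)
poly = rational.subs(tan(x/2), t)
print(solve(poly, t))              # [1 - sqrt(2), 1 + sqrt(2)]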

Though it appears this technique should work universally for trigonometric equations, it fails even for \( sin(x) = 0 \). From the table above, \( sin(x) = \frac{2*tan(x/2)}{tan^{2}(x/2) + 1} \), so \( sin(x) = 0 \) occurs at \( tan(x/2) = 0 \), which has solution \( \left\{2 \pi n\; |\; n \in \mathbb{Z}\right\} \). But the actual solution is \( \left\{ \pi n\; |\; n \in \mathbb{Z}\right\} \). Why are we missing some solutions? The reason is that \( sin(x) = 0 \) also occurs when the denominator tends to \( \infty \), i.e., at the values where \( tan^{2}(x/2) + 1 \) tends to \( \infty \). We had encountered a similar problem for the solution of $$ \frac{1}{\left(\frac{x}{x + 1} + 3\right)^{2}} $$

Here \( x = -1 \) is not a point in the domain of the equation. The solver simplifies the equation to

$$ \frac{\left(x + 1\right)^{2}}{\left(4 x + 3\right)^{2}} $$

which extends the domain to include the point \( x = -1 \), which is also a solution of the transformed equation. There we wrote a sub-procedure domain_check to verify whether a returned solution is part of the domain of the original equation. The problem here is slightly different, in the sense that transforming the equation shrinks the domain of solutions rather than extending it.

To find such solutions we have to allow \( \infty \) to be a solution of equations; we will be working over the extended reals instead of just the reals. I think this change will simplify a lot of things.

Another thing that should be taken care of is that we cannot naively search for the values for which the denominator tends to infinity: for the same values the numerator might also attain infinitely large values, so we have to consider the limiting value of the whole expression.

by Harsh Gupta at June 26, 2014 11:43 AM