Github link: shared.python.serialize

This is a wrapper around json. It serializes your data objects and can load them from disk as well. It's pretty handy.

import shared.python.serialize as pyjson
pyjson.save(r"C:\path\file.json", {"data": 0})  # save to disk
from_disk = pyjson.load(r"C:\path\file.json")  # get from saved

shared.python.file 1.0

Github link: shared.python.file

I often get mixed up about which file and folder operations live in os.path and which in shutil. I also feel that the os module is not very readable. So, for the sake of readability, I like to add a few functions that help out with file and folder operations.


name()

One example of how unreadable os.path can be is getting a file name.

file_name = os.path.basename(r"C:\path\fileName.ext")

Pretty horrible, no? I mean, the function is called basename, as if a file name were some kind of "base"!

With name(), we can get the name of a file in a much more readable way. And, since we bothered with this, we can get the name of the file without the extension like so:

import shared.python.file as pyfile
file_name = pyfile.name(r"C:\path\fileName.ext", include_ext=False)
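Under the hood, name() only needs a couple of os.path calls. A minimal sketch (the real module may differ; forward slashes are used here so the example is portable):

```python
import os


def name(path, include_ext=True):
    # grab just the file name portion of the path
    file_name = os.path.basename(path)
    if not include_ext:
        # split off the ".ext" part and keep the rest
        file_name = os.path.splitext(file_name)[0]
    return file_name


print(name("C:/path/fileName.ext"))                     # fileName.ext
print(name("C:/path/fileName.ext", include_ext=False))  # fileName
```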


list_files()

This function returns all the files within a folder. You can obviously do this inline, but it's really nice to have it as a one-line call.

To make this function, I started with the line that gets me the list of files:

files = []
for (dirpath, dirnames, filenames) in os.walk(path):
    files.extend([os.path.join(dirpath, x) for x in filenames])

That gives me a list of all the files in all the folders in the given path. That’s useful unless you only want the files in the path, and not in all the sub-folders. So we can add an if for that case:

for (dirpath, dirnames, filenames) in os.walk(path):
    files.extend([os.path.join(dirpath, x) for x in filenames])
    if not recursive:
        break

At this point we have the correct list of files. I learned this trick from a friend at work, who uses the break statement in interesting ways. In this case, it stops os.walk() right after the first iteration. This means it skips all the sub-folders, leaving the list with the files from the base folder only.

Lastly, we filter the files down to the given extensions. I like to feed the extensions as a list. This way, you can assemble your extensions in a loop. So assume the extensions are ["ma", "mb"]:

for (dirpath, dirnames, filenames) in os.walk(path):
    files.extend([os.path.join(dirpath, x) for x in filenames])
    if not recursive:
        break

if extension:
    filtered = list()
    extensions = pyutils.make_list(extension)
    for i, e in enumerate(extensions):
        extensions[i] = "." + e.replace(".", "")
    for file_name in files:
        add = False
        for e in extensions:
            if file_name.endswith(e):
                add = True
        if add:
            filtered.append(file_name)
    files = filtered

The first loop makes sure that all extensions start with a “.”.

The second loop adds the files to filtered only if they have the right extension.

And this is how the function is used:

import shared.python.file as pyfile
maya_files = pyfile.list_files(r"C:\path", extension=["ma","mb"], recursive=True)

For the complete code, please refer to the function in git (shared.python.file)

If you have any doubts or questions about any of the functions or the module itself, please add it to the comments. I’ll be happy to add any clarification to this post or to change the module itself (where it makes sense).


This module is quite short, but useful when dealing with multiple users. It’s used to identify the user, IP, and computer. I also have a wrapper function for subprocess, but I’m not fully convinced that it belongs in there.

Most of the discussion around this module will be around readability. Fortunately, I have a post on Python Readability already written. So I’m going to continue under the assumption that you agree with me on the importance of readability.


Let me know how you feel about this function. I added it because a friend of mine uses it that way; we were working on a core repo together at one point, and I don't have a real reason to remove it. So I've been adding it to every core library I write ever since.

The reality is that Python does not really have a way to run truly parallel processes; the only way is to launch separate Python applications. This is why I have kept the wrapper here. But I feel that if it's worth making this function, it's worth making a full wrapper for subprocess. I have not done that. Maybe I should. Let me know in the comments if you think it's worthwhile, and I will put it on the list of modules we will add to our core libraries.

Other than that, I think it's all pretty self-explanatory. Let me know in the comments if you need any clarifications, if I'm missing something, or if I'm just nuts.

Ok, see you next week.

// Isoparm

shared.python.utils 1.0

Github link: shared.python.utils

Welcome to the second post on python utils. Let's get to it, shall we?


empty_list()

Making a list with a specific number of items looks like this:

# generates a list full of None values:
empty_list = [None] * number_of_items

While it's not terrible, it's not great either. I made this function so I could fill the list with something specific, like so:

empty_list = pyutils.empty_list(number_of_items=0, default_item=True)
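The implementation is a sketchable one-liner (the signature below is assumed from the usage above):

```python
def empty_list(number_of_items, default_item=None):
    # list multiplication repeats the default item n times
    return [default_item] * number_of_items


print(empty_list(3))        # [None, None, None]
print(empty_list(2, True))  # [True, True]
```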

time_it() and count_it()

These two are useful decorators. What is a decorator you ask? Think of it as writing a generic wrapper for a function, but you leave the function variable. If you want a step by step on how to make decorators, please visit my decorator discussion page.


get_sorted_by_most_common()

I suppose this is where we get specific to problems I've had. I needed to sort the unique items in a list by how many times each appears. For example:

# example input:
data = pyutils.get_sorted_by_most_common(["Mouth", "Up", "Mouth", "Corner", "Mouth", "Up"])

# result:
# ["Mouth", "Up", "Corner"]

I’m aware this is not a common thing, but hey, if you ever have a similar issue, here it is.
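collections.Counter does most of the heavy lifting for a problem like this. A minimal sketch (the real implementation in the module may differ):

```python
from collections import Counter


def get_sorted_by_most_common(items):
    # most_common() returns (item, count) pairs, most frequent first
    return [item for item, _count in Counter(items).most_common()]


print(get_sorted_by_most_common(["Mouth", "Up", "Mouth", "Corner", "Mouth", "Up"]))
# ['Mouth', 'Up', 'Corner']
```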

That’s mostly it. Please ask questions if you have any. Leave a comment. Let me know how I’m doing. Thank you all.


shared.python.utils 1.1

Github link: shared.python.utils

Hello, and welcome to our first module. In this one we will make the most generic scripts we can think of. These are meant to help us in writing Python. They should not have any other dependencies other than Python or other well established Python libraries.


make_list()

This function converts any Python object into a list(). This is particularly useful when dealing with packages such as Maya and 3ds Max. Many functions in those packages will return either a list() or a str(), depending on how many entries are returned. This is super annoying: you end up having to test the result to see if it's a list() every time (ugh!). After writing that check seven times, I gave up.

Another added benefit of this function is that it can help you in making functions that take one or more items as arguments. For example:

# you can call a log function with a text:
log("Tell me something")

# or with an array:
log(["Tell me this", "Tell me that"])

Here is how that function would work:

import shared.python.utils as pyutils
# new_list == ["string"] 
new_list = pyutils.make_list("string")

# new_list == ["string"]
new_list = pyutils.make_list(["string"])
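A minimal sketch of make_list(); the handling of None and of tuples/sets below is my own assumption, not necessarily what the module does:

```python
def make_list(value):
    # None becomes an empty list rather than [None]
    if value is None:
        return []
    # existing sequences are converted as-is
    if isinstance(value, (list, tuple, set)):
        return list(value)
    # anything else gets wrapped
    return [value]


print(make_list("string"))    # ['string']
print(make_list(["string"]))  # ['string']
```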


join_lists()

Have you noticed that some functions return False when they should return an empty list? Check this example:

results = list()

for item in list_of_items:
    results = results + imported.get_values(item)

Imagine that we don't own imported; it's a third-party library. And imagine that get_values() returns False if there are no values to get from an item. If an item ever has no values to return, you would get an error that looks like this:

TypeError: can only concatenate list (not "bool") to list

The solution is simple but annoying. After writing it several times, I decided to add it to my utils module.

This is how you would use it:

import shared.python.utils as pyutils

# new_list = ["one", "two", "three", "four"]
new_list = pyutils.join_lists(["one","two"],["three", False], ["four"])
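A sketch that matches the usage above. It skips arguments that are falsy (a stray False or None instead of a list) and falsy entries inside each list; the exact filtering rules in the real module may differ:

```python
def join_lists(*lists):
    joined = []
    for entries in lists:
        # skip arguments that are False, None, or empty
        if not entries:
            continue
        for entry in entries:
            # skip False/None entries inside each list
            if entry:
                joined.append(entry)
    return joined


print(join_lists(["one", "two"], ["three", False], ["four"]))
# ['one', 'two', 'three', 'four']
```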


remove_duplicates()

This one exists only for readability's sake. It's not difficult to do this in a single line, but it's hard-ish to read.

# it's not easy to read this:
sans_duplicates = list(set(objects))

# this is much easier:
sans_duplicates = pyutils.remove_duplicates(objects)
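The sketch is just the one-liner with a readable name. One caveat worth a comment: set() does not preserve the original order of the list.

```python
def remove_duplicates(objects):
    # note: set() does not preserve the original order
    return list(set(objects))
```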


are_items_in_list()

I often find myself checking a list to see if it contains a bunch of items. You can use the set.intersection() function, but it's hard to read. It ends up looking like this:

# how am I supposed to know what this means???
if len(set(full_list).intersection(items)):

# much easier to read, I think.
if pyutils.are_items_in_list(items, full_list):
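A sketch following the intersection line above. As written, it returns True if any of the items are in the list; the real function may require all of them, so treat this as an assumption:

```python
def are_items_in_list(items, full_list):
    # a non-empty intersection means at least one item matches
    return bool(set(full_list).intersection(items))


print(are_items_in_list(["a", "z"], ["a", "b", "c"]))  # True
print(are_items_in_list(["z"], ["a", "b", "c"]))       # False
```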

I suspect you are beginning to see a pattern: I always try to make code more readable. In many cases I overdo it, I know. However, when I go back to my code weeks, months, or years later, I don't struggle with it so much. So I feel it's worth it.

I think that’s enough for today. I will deal with more of this module on the next post.

Let me know in the comments if this is a helpful format. Also, if you have any questions about any of the modules mentioned, or any of the ones not mentioned, please don't hesitate to ask.

As always, I welcome any suggestions.

// Isoparm

shared.python Intro

In my experience, having a good core library makes a world of difference. I couldn’t even imagine not having one. I owe all of my speed of development to my core library. It’s worth spending the time to get a good library started. And then, it’s worth keeping it up. I can’t stress this enough. To me, success in any Python project starts at your core library.

You will notice I use the words 'core' and 'shared' interchangeably. I don't like the word 'core' because it carries baggage that I sometimes don't fully follow, and I end up spending a ton of time arguing semantics rather than code. However, it's an accepted word in the community, so it's sometimes easier to just use it. I go back and forth on it. Maybe we should have a discussion about it on our discussions page?

Here are the modules that I’m currently planning for our core / shared library:

  • python
  • maya
  • math
  • ui
  • startup


python

We will start with the lowest-level modules. These modules are meant to help with writing Python. For example, they can be wrappers for os or subprocess, or they can be a Python utilities module for making some base decorators.


maya

These modules will contain wrappers for maya.cmds, maya.mel, pymel, OpenMaya, and other Maya-related modules. We will try to keep the code as un-opinionated and generic as possible. So try to avoid thinking in terms of tools and workflows, and think more about APIs and what you will need when you are making tools.


math

There are quite a few things I don't like doing in Maya, and math is one of them. If I can avoid opening Maya for my solutions, then I will be able to use my code in projects that are not for Maya. Among the modules in here, we will have vectors, angles, filters, math functions, point clouds, etc.


ui

This is one I end up arguing over a lot. In reality, this module is here for the sake of practicality, not Pythonic purity. One thing I really like doing is selecting a UI scheme that works in most situations, and then making it so I can build GUIs quickly. I even developed an auto GUI maker once. I really liked it, since it allowed me to focus on the meat of the tools and code, and not on the GUI. As we develop this module further, I'm sure you will see the benefits.


startup

Yeah, we need one of these. It's really convenient to have a single entry point for all your Python settings and environment. Here we will set up the logger, pick which Python host we are in, load the correct versions of plugins, etc.

Alright, enough rambling. Next post, we will start on our first module.

// Isoparm

Decorators & Contexts 101

Decorators and contexts are really useful. They let you wrap functions in pre-defined code. They can be incredibly useful for solving a wide array of issues. Here are some examples:

  • Timing how long a function takes to run.
    • I time functions all the time. Having a single line I can add to my function that will tell me how long it took is a great shortcut.
  • Cleaning up dirty Maya scenes.
    • Sometimes you want to do something destructive to your Maya scene. Exporters do this all the time. The code could potentially leave the scene in a bad state, and you don't want to save that. You can add a context that saves the scene, does the job, and then restores the scene so it's left as it was before you started.
  • Restoring Maya settings.
    • Animators like to keep AutoKey on. Some tools don't work well with it on. A decorator could make sure the setting is off, run the function, and then turn it back on.
    • Batch processes are slow if your viewport updates for every file load. You can make a decorator that stops viewport updates while the batch process is happening.

See? They can be quite convenient. I wrote a file manager once that locked, copied, merged, and unlocked files on a server as users opened, changed, and closed those files. Without decorators, the code would have had a ton of repeated code that would then have to be maintained. I saved myself a monster headache by making decorators for all file operations.

Enough pontificating, let us make a context and a decorator.


For this example we will write a context and a decorator for timing code and functions. This is a good example, since the solution is simple, and we can focus on the context and decorator.

1.1 Context

Step 1. Write the code that solves the problem.

We want to time a block of code. This will solve that:

import timeit

# record initial time:
time_start = timeit.default_timer()

# ---------------------------------
# code to be timed:
# ---------------------------------

# record time after code ran:
time_end = timeit.default_timer()

# inform the user:
print('Timed result : {0:.20f} sec'.format(time_end - time_start))

Step 2. Make it into a function:

We want to be able to import this code into other modules and run it. So it needs to be saved in a module as a function.

We need to import contextlib. It’s a pretty standard Python library, you should have no problems importing it.

import contextlib
import timeit


@contextlib.contextmanager
def time_this():
    """Context for timing blocks of code."""
    # record initial time:
    time_start = timeit.default_timer()
    try:
        # Here is where/when the code to be timed will run:
        yield
    finally:
        # record time after code ran:
        time_end = timeit.default_timer()

        # inform the user:
        print('Timed result : {0:.20f} sec'.format(time_end - time_start))

To understand this function, you must understand two concepts: try/except/finally, and yield.


try / except / finally

It's not too complicated. You are asking Python to "try" to run some code. Whether that code succeeds or fails, the "finally" block will always run afterwards. So in our example, whether or not the code evaluates correctly, the function will tell you how long it took.


yield

This one is a bit tricky. I think I'll do a much more in-depth discussion about yield at a later date. For now, assume that "yield" passes the evaluation stream to the code you want to time. It's like driving: you yield to a pedestrian, which makes you stop so he can start. When he is done crossing the street, you pick up where you left off (you don't go back home and start the trip again). It's not exactly this, but for now it will do.

So how do we use this function? It’s not hard. Once you have it written, you can use it all over the place. You can even forget how to time scripts, now.

from time_module import time_this

with time_this():

	# ---------------------------------
	# code to be timed:
	# ---------------------------------

1.2 Decorator

Contexts are great, but you have to write the with statement every time you want to use them. What if we could tell Python that every time it calls a specific function, it should time it? That way we would not have to write the with statement at all.

This is what decorators are for. Granted, timing a function is not a great use of a decorator, but it's not a bad one either. Let's use it.

Step 1. Write the code that solves the problem.

We did that already on the example above. So, moving along…

Step 2: Make a function out of it.

For the same reasons as above (importing from another module), we need a function. This function, however, will be a bit different.

Instead of having code to run, a decorator evaluates a function every time that function is called. Think of it as a wrapper for the function, except the function is variable. We need to be able to tell the wrapper what function to evaluate. Confusing?

Ok, let me try again.

We are going to tell Python that every time we call function A(), we want it to print how long it took to evaluate. We are also telling Python that we want the same treatment for functions B(), C() and D(), but we only want to write the wrapper once.

So first things first, we need a function that accepts a function as an argument:

def time_it(func):

If you have written wrappers before, you know that you need to capture the return value from the wrapped function and return it from the wrapper. If you don't do this, you lose the data you got from the function.

To do this, we define a function inside the wrapper that will capture that result.

def time_it(func):

    # with this function, we capture and return the return value:
    def timed():

        # function to be timed:
        result = func()

        return result

    return timed

As you can see, we are storing the return of the function in a variable called result. Then we return result.

We need time_it() to evaluate func, right? But what if func has arguments? func can be any function at all, with any number and style of arguments. So how do we deal with that?

Simple: *args and **kwargs are your friends here.

def time_it(func):

    # with this function, we capture and return the return value:
    # add args and kwargs to be able to pass any and all args of the func:
    def timed(*args, **kwargs):

        # function to be timed:
        result = func(*args, **kwargs)  # pass the arguments to the function

        return result

    return timed

Great, we now have a decorator that calls the function passed to it, and it handles the arguments well. Unfortunately, it does nothing else. So let's add the timing code to it:

import timeit


def time_it(func):

    # with this function, we capture and return the return value:
    # add args and kwargs to be able to pass any and all args of the func:
    def timed(*args, **kwargs):
        # record initial time:
        time_start = timeit.default_timer()

        # function to be timed:
        result = func(*args, **kwargs)  # pass the arguments to the function

        # record time after code ran:
        time_end = timeit.default_timer()

        # inform the user:
        print('Timed func : {}'.format(func.__name__))
        print('Timed result : {0:.20f} sec'.format(time_end - time_start))
        return result

    return timed

This function should work for you now. But how do we use it? I really like this part. Once it's ready, check out how easy it is to use a decorator:

from time_module import time_it

@time_it  # <- Right here. Add this to any function you want timed.
def my_function(args):
    print(args)

Adding the line @time_it above any function declaration tells Python you want it timed every time it runs. If you were to call my_function("hello") in a for loop, you would get a "hello" (and a timing printout) for every iteration of the loop.

And that's mostly it. While there are more things we could discuss, this is a good intro to decorators and contexts. I hope this helps. Please leave me a comment with questions, corrections, suggestions, or ideas on how to make this page better. Thank you!

// Isoparm

Env Setup – Part 1.0 – Software Needed For Maya/Python Development

First off, we need an appropriate environment. In order for us to be set up for success, we need to, well… set ourselves up for success. I spent years using the "wrong" IDE. I fought with it, I yelled at it, I even reinstalled it a few times thinking it would fix my problems. For the longest time I was convinced that the problem was me, not the software. While there are many right answers to the "best IDE for Python" question, there are definitely some wrong ones. So don't be like me: pick a better one.

Here is the list of software we need. Some of it costs money… well it all costs money, but some of it can be used in trial, education, or limited versions. I will attempt to have an accessible environment during this project, so we can all speak the same language.

  1. Maya. Well sure, this is a given, but I had to put it in here. The software is not exactly cheap, but there are some things you can do. If you are a student enrolled in a school, you probably have access to an education version (that one will do nicely). You can also try out the software for 30 days (we probably won't be finished by then, but 30 days should be enough to figure out if you want to continue).
  2. Python 2.7. Unfortunately, Autodesk has not moved on from Python 2.x to 3.x. This means that we need to do our work in an older version of Python. Since Autodesk is pretty much a monopoly when it comes to rigging, we must make do. Make sure you download 2.7.x, and not 3.x.
  3. PyCharm. This is my go-to IDE for Python. It integrates nicely with Maya and with Github.
  4. Sublime Text. Why have two IDEs? Well, there are some nice things about having a lightweight text editor that is fully featured. I use it quite a bit. If you are on a tight budget, you can skip this one.
  5. Github account. I will share all the code written for this project in Github, so it will be helpful to have an account so you can follow along. To setup Git, follow these instructions: Set Up Git.

OK, now go install all the software and come back, I’ll wait.

// Isoparm

Next: Connect Pycharm and Github

Env Setup – Part 1.2 – Connect PyCharm To Maya

In order for our dev environment to be complete, we need to have PyCharm debug code in Maya. This is something that you ‘could’ live without. I recommend you have this done because it will make your life much easier.

I like to use MayaCharm. It’s a PyCharm plugin that has a Python interpreter for PyCharm that emulates Maya. Also, it can connect directly with Maya to use PyCharm as a debugger. It’s all kinds of wonderful.

If you are interested in the Github page, or want to see the docs, here they are: MayaCharm documentation.

For the quick and easy install, it’s best if you do it from inside PyCharm.

Go to File / Settings …


On the search bar, type "MayaCharm". Only one plugin will come up. Install it and restart PyCharm.

You are not done yet. You still have to set up your Python interpreter for your current PyCharm project. So go back to your Settings window ( File / Settings… ). Click on the "Project Interpreter" drop-down, and select "Show All…".


A new window appears. Here you need to point PyCharm to mayapy.exe (which is the Maya Python interpreter).

Click the “+” button
Select the System Interpreter on the left, and click on the “…” button.
Navigate to mayapy.exe in your Maya install folder

Click “OK” and you should now see the mayapy.exe interpreter added to your Project Interpreters Window:


Depending on your version of PyCharm, you might need to install the Python Packaging Tools. If it asks for it, do so.

At this point you need to restart PyCharm again. Even if you restarted it for the Python Packaging Tools, you need to restart anyway. If you don't, the Active Maya SDK section of the Settings window will be empty and you won't be able to continue. So restart now.


You now have to tell Maya to listen for incoming commands from PyCharm. So navigate to your userSetup.py file. This file is normally located here: "C:\Users\…user…\OneDrive\Documents\maya\2018\scripts"

Open it and add these lines:

import maya.cmds as cmds

if not cmds.commandPort(':4434', query=True):
    cmds.commandPort(name=':4434')
Notice how the port number in the code is 4434, and the port in the MayaCharm settings window is also 4434. These numbers HAVE to match in order for the connection to work.

Alternatively, you could run these lines directly from the Script Editor in Maya if you wanted. This will work as well, but you will have to do it every time you open Maya. On the one hand, you don’t worry about Maya keeping the port open when you are not using PyCharm, and on the other, you have to run the command every time you want to debug with PyCharm. Ultimately, the choice is yours.

You are done with the Maya / PyCharm set up!


In the "Run" menu at the top of PyCharm, there are now three new entries. These will send the commands to Maya.

Try it out.

Highlight a print command, send it to Maya, and watch it print in the Maya Script History.

I hope this worked out for you guys. If not, leave me a note with what problems you have and we will troubleshoot it together.

// Isoparm

Env Setup – Part 1.1 – Connect PyCharm And Github

Let's set up PyCharm and Github.

You need a Github account, so make one.

Make a new Github repository. I recommend you create one similar to the project I will be using.


Yay, you have a repository! Now we will clone it to your system.

Open GitBash (it got installed when you installed Git).

This is the clone command (swap in your own repository's URL and target folder):

git clone https://github.com/<your-account>/<your-repo>.git C:\code_folder\disorder_project\shared

Once cloned, you can pull (get latest from Github) with this command:

cd C:\code_folder\disorder_project\shared
git pull

Alright, you have your repository set up. Now we move to PyCharm.

Create a new project: File \ New Project.

Use the same folder as the one you used for your repository.

Set up Git: File \ Settings


Cool, we are almost done. Now make a change, add a file, etc. Then press Ctrl + K. This is how you commit or check in, so memorize it.


PyCharm will prompt you for your account and password. Make sure you make your settings global.

Now you are done.

Make sure you make all changes to the repository through PyCharm. Add, remove, edit, and even copy and paste files using PyCharm. That way, Github will know what you are doing at all times, and all you have to do is commit every now and then.



Next: Connect PyCharm To Maya